diff --git a/README.md b/README.md index cd9d91546..28f38b7f9 100644 --- a/README.md +++ b/README.md @@ -21,12 +21,12 @@ -[TensorLayer](https://tensorlayer.readthedocs.io) is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers to build advanced AI models quickly, based on this, the community open-sourced mass [tutorials](https://github.com/tensorlayer/tensorlayer/blob/master/examples/reinforcement_learning/README.md) and [applications](https://github.com/tensorlayer). TensorLayer is awarded the 2017 Best Open Source Software by the [ACM Multimedia Society](https://twitter.com/ImperialDSI/status/923928895325442049). +[TensorLayer](https://tensorlayer.readthedocs.io) is a novel deep learning and reinforcement learning library that supports multiple backends, designed for researchers and engineers. It provides an extensive collection of customizable neural layers to build advanced AI models quickly; based on this, the community has open-sourced many [tutorials](https://github.com/tensorlayer/tensorlayer/blob/master/examples/reinforcement_learning/README.md) and [applications](https://github.com/tensorlayer). TensorLayer was awarded the 2017 Best Open Source Software by the [ACM Multimedia Society](https://twitter.com/ImperialDSI/status/923928895325442049). This project can also be found at [iHub](https://code.ihub.org.cn/projects/328) and [Gitee](https://gitee.com/organizations/TensorLayer). # News -🔥 **3.0.0 will supports multiple backends, such as TensorFlow, MindSpore and more, allowing users to run the code on different hardware like Nvidia-GPU and Huawei-Ascend. We need more people to join the dev team, if you are interested, please email hao.dong@pku.edu.cn** +🔥 **3.0.0 will support multiple backends, such as TensorFlow, MindSpore, PaddlePaddle and more, allowing users to run the code on different hardware like Nvidia-GPU and Huawei-Ascend. We need more people to join the dev team; if you are interested, please email hao.dong@pku.edu.cn** 🔥 Reinforcement Learning Zoo: [Low-level APIs](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) for professional usage, [High-level APIs](https://github.com/tensorlayer/RLzoo) for simple usage, and a corresponding [Springer textbook](http://springer.com/gp/book/9789811540943) @@ -72,7 +72,7 @@ You can find a large collection of examples that use TensorLayer in [here](examp # Getting Start -TensorLayer 2.0 relies on TensorFlow, numpy, and others. To use GPUs, CUDA and cuDNN are required. +TensorLayer 3.0 relies on TensorFlow, numpy, and others. To use GPUs, CUDA and cuDNN are required.
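+TensorLayer picks up the installed backend through the `TL_BACKEND` environment variable, as the basic tutorials in this repository do; the following is only a minimal sketch (it assumes the TensorFlow backend is installed, and the variable must be set before `tensorlayer` is imported): ```python import os os.environ['TL_BACKEND'] = 'tensorflow' # select the backend before importing tensorlayer; 'tensorflow' is the value used by the bundled tutorials import tensorlayer as tl ```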
Install TensorFlow: @@ -99,6 +99,15 @@ pip3 install --upgrade tensorlayer[all] # all additional dependenci pip3 install --upgrade tensorlayer[extra] # only the `extra` dependencies pip3 install --upgrade tensorlayer[contrib_loggers] # only the `contrib_loggers` dependencies ``` +If you want to use the MindSpore backend, you should install mindspore>=1.2.1: +```bash +pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.2.1/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-1.2.1-cp37-cp37m-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple +``` + +If you want to use the PaddlePaddle backend, you should install paddlepaddle>=2.1.1: +```bash +python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple +``` If you are TensorFlow 1.X users, you can use TensorLayer 1.11.0: @@ -150,6 +159,7 @@ The following table shows the training speeds of [VGG16](http://www.robots.ox.ac | Graph | Keras | channel last | 8677 | 2580 | 2576 | 101 | | Eager | TensorFlow 2.0 | channel last | 8723 | 2052 | 2024 | 97 | | | TensorLayer 2.0 | channel last | 8723 | 2010 | 2007 | 95 | +| | TensorLayer 3.0 | channel last | | | | | # Getting Involved diff --git a/docs/index.rst b/docs/index.rst index c08623a76..27cac3d66 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -9,7 +9,7 @@ Welcome to TensorLayer **Documentation Version:** |release| -**Jun 2020** `Deep Reinforcement Learning Book Is Coming `__. +**Jun 2020** `Deep Reinforcement Learning Book Is Released `__. **Good News:** We won the **Best Open Source Software Award** `@ACM Multimedia (MM) 2017 `_. @@ -57,12 +57,14 @@ method, this part of the documentation is for you. modules/activation modules/array_ops modules/cost + modules/dataflow modules/prepro modules/files modules/iterate modules/layers modules/models modules/nlp + modules/vision modules/initializers modules/rein modules/utils diff --git a/docs/modules/activation.rst b/docs/modules/activation.rst index 79bad9601..be250d8cb 100644 --- a/docs/modules/activation.rst +++ b/docs/modules/activation.rst @@ -2,9 +2,7 @@ API - Activations ========================= To make TensorLayer simple, we minimize the number of activation functions as much as -we can. So we encourage you to use TensorFlow's function. TensorFlow provides -``tf.nn.relu``, ``tf.nn.relu6``, ``tf.nn.elu``, ``tf.nn.softplus``, -``tf.nn.softsign`` and so on. +we can. So we encourage you to customize your own activation functions. For parametric activation, please read the layer APIs. The shortcut of ``tensorlayer.activation`` is ``tensorlayer.act``. @@ -14,64 +12,71 @@ Your activation Customizes activation function in TensorLayer is very easy. The following example implements an activation that multiplies its input by 2. -For more complex activation, TensorFlow API will be required. +For more complex activations, the TensorFlow (MindSpore/PaddlePaddle) APIs will be required. .. code-block:: python - def double_activation(x): - return x * 2 - - double_activation = lambda x: x * 2 + class DoubleActivation(object): + def __init__(self): + pass + def __call__(self, x): + return x * 2 + double_activation = DoubleActivation() -.. automodule:: tensorlayer.activation +.. automodule:: tensorlayer.layers.activation .. autosummary:: - leaky_relu - leaky_relu6 - leaky_twice_relu6 - ramp - swish - sign - hard_tanh - pixel_wise_softmax - mish - -Ramp + PRelu + PRelu6 + PTRelu6 + LeakyReLU + LeakyReLU6 + LeakyTwiceRelu6 + Ramp + Swish + HardTanh + Mish + +PRelu ------ -..
autofunction:: ramp +.. autofunction:: PRelu -Leaky ReLU +PRelu6 ------------ -.. autofunction:: leaky_relu +.. autofunction:: PRelu6 -Leaky ReLU6 +PTRelu6 ------------ -.. autofunction:: leaky_relu6 +.. autofunction:: PTRelu6 -Twice Leaky ReLU6 +LeakyReLU ----------------- -.. autofunction:: leaky_twice_relu6 +.. autofunction:: LeakyReLU -Swish +LeakyReLU6 ------------ -.. autofunction:: swish +.. autofunction:: LeakyReLU6 -Sign +LeakyTwiceRelu6 --------------------- -.. autofunction:: sign +.. autofunction:: LeakyTwiceRelu6 -Hard Tanh +Ramp --------------------- -.. autofunction:: hard_tanh +.. autofunction:: Ramp -Pixel-wise softmax +Swish -------------------- -.. autofunction:: pixel_wise_softmax +.. autofunction:: Swish + +HardTanh +---------------- +.. autofunction:: HardTanh -mish +Mish --------- -.. autofunction:: mish +.. autofunction:: Mish Parametric activation ------------------------------ diff --git a/docs/modules/cost.rst b/docs/modules/cost.rst index eba52f4ca..6277b9d71 100644 --- a/docs/modules/cost.rst +++ b/docs/modules/cost.rst @@ -11,7 +11,7 @@ we can. So we encourage you to use TensorFlow's function, , see `TensorFlow API .. autosummary:: - cross_entropy + softmax_cross_entropy_with_logits sigmoid_cross_entropy binary_cross_entropy mean_squared_error @@ -28,12 +28,11 @@ we can. So we encourage you to use TensorFlow's function, , see `TensorFlow API maxnorm_regularizer maxnorm_o_regularizer maxnorm_i_regularizer - huber_loss Softmax cross entropy ---------------------- -.. autofunction:: cross_entropy +.. autofunction:: softmax_cross_entropy_with_logits Sigmoid cross entropy ---------------------- @@ -94,7 +93,3 @@ Special .. autofunction:: lo_regularizer .. autofunction:: maxnorm_o_regularizer .. autofunction:: maxnorm_i_regularizer - -Huber Loss -^^^^^^^^^^ -.. autofunction:: huber_loss \ No newline at end of file diff --git a/docs/modules/dataflow.rst b/docs/modules/dataflow.rst new file mode 100644 index 000000000..5ffcc5656 --- /dev/null +++ b/docs/modules/dataflow.rst @@ -0,0 +1,79 @@ +API - Dataflow +============ + +.. automodule:: tensorlayer.dataflow + +.. ----------------------------------------------------------- +.. Dataflow List +.. ----------------------------------------------------------- + +Dataflow list +---------------------- + +.. autosummary:: + + Dataset + IterableDataset + FromGenerator + FromSlices + Dataloader + + Concat + Zip + Batch + Map + Repeat + Shuffle + +.. ----------------------------------------------------------- +.. Dataflow +.. ----------------------------------------------------------- + +Dataflow +----------------- + +Dataset +^^^^^^^^^^^^^^^^ +.. autoclass:: Dataset + + +IterableDataset +^^^^^^^^^^^^^^^^ +.. autoclass:: IterableDataset + +FromGenerator +^^^^^^^^^^^^^^^^ +.. autoclass:: FromGenerator + +FromSlices +^^^^^^^^^^^^^^^^ +.. autoclass:: FromSlices + +Dataloader +^^^^^^^^^^^^^^^^ +.. autoclass:: Dataloader + +Concat +^^^^^^^^^^^^^^^^ +.. autoclass:: Concat + +Zip +^^^^^^^^^^^^^^^^ +.. autoclass:: Zip + +Batch +^^^^^^^^^^^^^^^^ +.. autoclass:: Batch + +Map +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: Map + +Repeat +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: Repeat + +Shuffle +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: Shuffle + diff --git a/docs/modules/initializers.rst b/docs/modules/initializers.rst index 6311619f2..3bf421337 100644 --- a/docs/modules/initializers.rst +++ b/docs/modules/initializers.rst @@ -16,6 +16,7 @@ e.g. 
``tf.initializers.he_normal``, please refer to TensorFlow provided initiali RandomUniform RandomNormal TruncatedNormal + HeNormal deconv2d_bilinear_upsampling_initializer Initializer @@ -46,6 +47,10 @@ TruncatedNormal --------------------- .. autoclass:: TruncatedNormal +HeNormal +------------ +.. autoclass:: HeNormal + deconv2d_bilinear_upsampling_initializer ------------------------------------------ .. autofunction:: deconv2d_bilinear_upsampling_initializer diff --git a/docs/modules/layers.rst b/docs/modules/layers.rst index 78e0eee9a..8f08aefde 100644 --- a/docs/modules/layers.rst +++ b/docs/modules/layers.rst @@ -12,10 +12,9 @@ Layer list .. autosummary:: - Layer + Module - ModelLayer - LayerList + SequentialLayer Input @@ -73,14 +72,6 @@ Layer list BatchNorm1d BatchNorm2d BatchNorm3d - LocalResponseNorm - InstanceNorm - InstanceNorm1d - InstanceNorm2d - InstanceNorm3d - LayerNorm - GroupNorm - SwitchNorm RNN SimpleRNN @@ -134,17 +125,13 @@ Layer list Base Layer ----------- -Base Layer ^^^^^^^^^^^^^^^^ -.. autoclass:: Layer - -Model Layer +Module ^^^^^^^^^^^^^^^^ -.. autoclass:: ModelLayer +.. autoclass:: Module -Layer List +Sequential Layer ^^^^^^^^^^^^^^^^ -.. autoclass:: LayerList +.. autoclass:: SequentialLayer .. ----------------------------------------------------------- .. Input Layer @@ -399,38 +386,6 @@ Batch Normalization 3D ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autoclass:: BatchNorm3d -Local Response Normalization -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: LocalResponseNorm - -Instance Normalization -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: InstanceNorm - -Instance Normalization 1D -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: InstanceNorm1d - -Instance Normalization 2D -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: InstanceNorm2d - -Instance Normalization 3D -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: InstanceNorm3d - -Layer Normalization -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: LayerNorm - -Group Normalization -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: GroupNorm - -Switch Normalization -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. autoclass:: SwitchNorm - .. ----------------------------------------------------------- .. Padding Layers .. ----------------------------------------------------------- diff --git a/docs/modules/models.rst b/docs/modules/models.rst index 272f1d9c6..821d30c75 100644 --- a/docs/modules/models.rst +++ b/docs/modules/models.rst @@ -1,59 +1,34 @@ -API - Models +API - Pretrained Models ================================ TensorLayer provides many pretrained models, you can easily use the whole or a part of the pretrained models via these APIs. -.. automodule:: tensorlayer.models +.. automodule:: examples.model_zoo .. autosummary:: - Model - - VGG16 - VGG19 - SqueezeNetV1 - MobileNetV1 + vgg16 + vgg19 + YOLOv4 ResNet50 - Seq2seq - Seq2seqLuongAttention - - -Base Model ----------- -.. autoclass:: Model - -VGG16 +vgg16 ---------------------- -.. autofunction:: VGG16 +.. autofunction:: vgg16 -VGG19 +vgg19 ---------------------- -.. autofunction:: VGG19 - -SqueezeNetV1 ---------------- -.. autofunction:: SqueezeNetV1 +.. autofunction:: vgg19 -MobileNetV1 +YOLOv4 ---------------- -.. autofunction:: MobileNetV1 +.. autofunction:: YOLOv4 ResNet50 ---------------- -.. autofunction:: ResNet50 - -Seq2seq ------------------------- - -.. autoclass:: Seq2seq - - -Seq2seq Luong Attention ------------------------- +.. autofunction:: ResNet50 -..
autoclass:: Seq2seqLuongAttention diff --git a/docs/modules/optimizers.rst b/docs/modules/optimizers.rst index 0ababc899..9f272d39c 100644 --- a/docs/modules/optimizers.rst +++ b/docs/modules/optimizers.rst @@ -5,6 +5,8 @@ API - Optimizers TensorLayer provides simple API and tools to ease research, development and reduce the time to production. Therefore, we provide the latest state of the art optimizers that work with Tensorflow. +The optimizer functions provided by TensorFlow can be used in TensorLayer. +We have also wrapped the optimizer functions for each framework; they can be found in tensorlayer.optimizers. Optimizers List --------------- @@ -12,6 +14,17 @@ Optimizers List .. autosummary:: AMSGrad + Adadelta + Adagrad + Adam + Adamax + Ftrl + Nadam + RMSprop + SGD + Momentum + Lamb + LARS AMSGrad Optimizer ----------------- diff --git a/docs/modules/vision.rst b/docs/modules/vision.rst new file mode 100644 index 000000000..70718bf64 --- /dev/null +++ b/docs/modules/vision.rst @@ -0,0 +1,204 @@ +API - Vision +============ + +.. automodule:: tensorlayer.vision.transforms + +.. ----------------------------------------------------------- +.. Vision Transforms List +.. ----------------------------------------------------------- + +Vision Transforms list +---------------------- + +.. autosummary:: + + ToTensor + Compose + + Crop + CentralCrop + RandomCrop + Pad + PadToBoundingbox + Resize + RandomResizedCrop + + RgbToGray + HsvToRgb + RgbToHsv + + AdjustBrightness + AdjustContrast + AdjustHue + AdjustSaturation + RandomBrightness + RandomContrast + RandomHue + RandomSaturation + ColorJitter + + FlipHorizontal + FlipVertical + RandomFlipHorizontal + RandomFlipVertical + + RandomRotation + RandomShift + RandomShear + RandomZoom + RandomAffine + + Transpose + HWC2CHW + CHW2HWC + + Normalize + StandardizePerImage + +.. ----------------------------------------------------------- +.. Vision Transforms +.. ----------------------------------------------------------- + +Vision Transforms +----------------- + +ToTensor +^^^^^^^^^^^^^^^^ +.. autoclass:: ToTensor + + +Compose +^^^^^^^^^^^^^^^^ +.. autoclass:: Compose + +Crop +^^^^^^^^^^^^^^^^ +.. autoclass:: Crop + +CentralCrop +^^^^^^^^^^^^^^^^ +.. autoclass:: CentralCrop + +RandomCrop +^^^^^^^^^^^^^^^^ +.. autoclass:: RandomCrop + +Pad +^^^^^^^^^^^^^^^^ +.. autoclass:: Pad + +PadToBoundingbox +^^^^^^^^^^^^^^^^ +.. autoclass:: PadToBoundingbox + +Resize +^^^^^^^^^^^^^^^^ +.. autoclass:: Resize + +RandomResizedCrop +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomResizedCrop + +RgbToGray +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RgbToGray + +HsvToRgb +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: HsvToRgb + +RgbToHsv +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RgbToHsv + +AdjustBrightness +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: AdjustBrightness + +AdjustContrast +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: AdjustContrast + +AdjustHue +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: AdjustHue + +AdjustSaturation +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: AdjustSaturation + +RandomBrightness +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomBrightness + +RandomContrast +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomContrast + +RandomHue +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomHue + +RandomSaturation +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomSaturation + +ColorJitter +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: ColorJitter + +FlipHorizontal +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: FlipHorizontal + +FlipVertical +^^^^^^^^^^^^^^^^^^^^^ +..
autoclass:: FlipVertical + +RandomFlipHorizontal +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomFlipHorizontal + +RandomFlipVertical +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomFlipVertical + +RandomRotation +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomRotation + +RandomShift +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomShift + +RandomShear +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomShear + +RandomZoom +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomZoom + +RandomAffine +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: RandomAffine + +Transpose +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: Transpose + +HWC2CHW +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: HWC2CHW + +CHW2HWC +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: CHW2HWC + +Normalize +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: Normalize + +StandardizePerImage +^^^^^^^^^^^^^^^^^^^^^ +.. autoclass:: StandardizePerImage \ No newline at end of file diff --git a/docs/modules/visualize.rst b/docs/modules/visualize.rst index 0bbe02861..0ef8f3b12 100644 --- a/docs/modules/visualize.rst +++ b/docs/modules/visualize.rst @@ -19,6 +19,7 @@ to visualize the model, activations etc. Here we provide more functions for data frame images2d tsne_embedding + draw_boxes_and_labels_to_image_with_json Save and read images @@ -44,6 +45,9 @@ Save image for object detection ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. autofunction:: draw_boxes_and_labels_to_image +Save image for object detection with json +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +.. autofunction:: draw_boxes_and_labels_to_image_with_json Save image for pose estimation (MPII) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/docs/user/contributing.rst b/docs/user/contributing.rst index 9b1d98f88..64c8354e3 100644 --- a/docs/user/contributing.rst +++ b/docs/user/contributing.rst @@ -4,8 +4,8 @@ Contributing =============== -TensorLayer 2.0 is a major ongoing research project in CFCS, Peking University, the first version was established at Imperial College London in 2016. The goal of the project is to develop a compositional language while complex learning systems -can be built through composition of neural network modules. +TensorLayer 3.0 is a major ongoing research project at Peking University and Pengcheng Laboratory; the first version was established at Imperial College London in 2016. The goal of the project is to develop a compositional language that is compatible with multiple deep learning frameworks, +while complex learning systems can be built through composition of neural network modules. Numerous contributors come from various horizons such as: Imperial College London, Tsinghua University, Carnegie Mellon University, Stanford, University of Technology of Compiegne, Google, Microsoft, Bloomberg and etc. @@ -25,6 +25,12 @@ Project Maintainers The TensorLayer project was started by `Hao Dong `_ at Imperial College London in June 2016.
+For TensorLayer 3.x, it is now actively developed and maintained by the following people *(in alphabetical order)*: + +- **Cheng Lai** (`@Laicheng0830 `_) - ``_ +- **Hao Dong** (`@zsdonghao `_) - ``_ +- **Jiarong Han** (`@hanjr92 `_) - ``_ + For TensorLayer 2.x, it is now actively developing and maintaining by the following people who has more than 50 contributions: - **Hao Dong** (`@zsdonghao `_) - ``_ diff --git a/docs/user/examples.rst b/docs/user/examples.rst index 91971c0a0..80c3e8b8b 100644 --- a/docs/user/examples.rst +++ b/docs/user/examples.rst @@ -6,13 +6,28 @@ Examples We list some examples here, but more tutorials and applications can be found in `Github examples `__ and `Awesome-TensorLayer `_. +Commonly used datasets and pretrained models +============================================ + + - MNIST, see `MNIST `__. + - CIFAR10, see `CIFAR10 `__. + + - YOLOv4 Pretrained Model, see `YOLOv4 `__. password: idsz + - VGG16 Pretrained Model, see `VGG16 `__. password: t36u + - VGG19 Pretrained Model, see `VGG19 `__. password: rb8w + - ResNet50 Pretrained Model, see `ResNet50 `__. password: 3nui + Basics ============ - - Multi-layer perceptron (MNIST), simple usage. Classification task, see `tutorial_mnist_simple.py `__. - - Multi-layer perceptron (MNIST), dynamic model. Classification with dropout using iterator, see `tutorial_mnist_mlp_dynamic.py method2 `__. - - Multi-layer perceptron (MNIST), static model. Classification with dropout using iterator, see `tutorial_mnist_mlp_static.py `__. - - Convolutional Network (CIFAR-10). Classification task, see `tutorial_cifar10_cnn_static.py `_. + - Multi-layer perceptron (MNIST), simple usage, supporting multiple backends. Classification task, see `tutorial_mnist_simple.py `__. + - Multi-layer perceptron (MNIST), mixing TensorLayer and TensorFlow. Classification with dropout using iterator, see `tutorial_mnist_mlp_tensorflow_backend.py `__. + - Multi-layer perceptron (MNIST), mixing TensorLayer and MindSpore. Classification task, see `tutorial_mnist_mlp_mindspore_backend.py `__. + - Multi-layer perceptron (MNIST), mixing TensorLayer and PaddlePaddle. Classification task, see `tutorial_mnist_mlp_paddlepaddle_backend.py `__. + + - Convolutional Network (CIFAR-10), mixing TensorLayer and TensorFlow. Classification task, see `tutorial_cifar10_cnn_tensorflow_backend.py `_. + - Convolutional Network (CIFAR-10), mixing TensorLayer and MindSpore. Classification task, see `tutorial_cifar10_cnn_mindspore_backend.py `_. + - TensorFlow dataset API for object detection, see `here `__. - Data augmentation with TFRecord. Effective way to load and pre-process data, see `tutorial_tfrecord*.py `__ and `tutorial_cifar10_tfrecord.py `__. - Data augmentation with TensorLayer. See `tutorial_fast_affine_transform.py `__ (for quick test only). @@ -20,15 +35,16 @@ Basics Pretrained Models ================== - - VGG 16 (ImageNet). Classification task, see `tutorial_models_vgg16 `__. + - VGG 16 (ImageNet). Classification task, see `pretrained_vgg16 `__. - VGG 19 (ImageNet). Classification task, see `tutorial_models_vgg19.py `__. - - SqueezeNet (ImageNet). Model compression, see `tutorial_models_squeezenetv1.py `__. - - MobileNet (ImageNet). Model compression, see `tutorial_models_mobilenetv1.py `__. + - YOLOv4 (MS-COCO). Object Detection, see `pretrained_yolov4.py `__. + - SqueezeNet (ImageNet, based on TensorLayer 2.0). Model compression, see `tutorial_models_squeezenetv1.py `__. + - MobileNet (ImageNet, based on TensorLayer 2.0).
Model compression, see `tutorial_models_mobilenetv1.py `__. - All pretrained models in `pretrained-models `__. Vision ================== - +Warning: The examples below only support TensorLayer 2.0; TensorLayer 3.0 is under development. - Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization, see `examples `__. - ArcFace: Additive Angular Margin Loss for Deep Face Recognition, see `InsignFace `__. - BinaryNet. Model compression, see `mnist `__ `cifar10 `__. @@ -44,6 +60,7 @@ Vision Adversarial Learning ======================== +Warning: The examples below only support TensorLayer 2.0; TensorLayer 3.0 is under development. - DCGAN (CelebA). Generating images by `Deep Convolutional Generative Adversarial Networks `__ by `zsdonghao `__. - `Generative Adversarial Text to Image Synthesis `__ by `zsdonghao `__. - `Unsupervised Image to Image Translation with Generative Adversarial Networks `__ by `zsdonghao `__. @@ -54,7 +71,7 @@ Adversarial Learning Natural Language Processing ============================== - +Warning: The examples below only support TensorLayer 2.0; TensorLayer 3.0 is under development. - Recurrent Neural Network (LSTM). Apply multiple LSTM to PTB dataset for language modeling, see `tutorial_ptb_lstm_state_is_tuple.py `__. - Word Embedding (Word2vec). Train a word embedding matrix, see `tutorial_word2vec_basic.py `__. - Restore Embedding matrix. Restore a pre-train embedding matrix, see `tutorial_generate_text.py `__. @@ -65,7 +82,7 @@ Natural Language Processing Reinforcement Learning ============================== - +Warning: The examples below only support TensorLayer 2.0; TensorLayer 3.0 is under development. - Policy Gradient / Network (Atari Ping Pong), see `tutorial_atari_pong.py `__. - Deep Q-Network (Frozen lake), see `tutorial_frozenlake_dqn.py `__. - Q-Table learning algorithm (Frozen lake), see `tutorial_frozenlake_q_table.py `__. @@ -77,6 +94,7 @@ Reinforcement Learning Miscellaneous ================= +Warning: The examples below only support TensorLayer 2.0; TensorLayer 3.0 is under development. - `Sipeed `__ : Run TensorLayer on AI Chips diff --git a/docs/user/get_start_advance.rst b/docs/user/get_start_advance.rst index db3441cde..1dae18a7a 100644 --- a/docs/user/get_start_advance.rst +++ b/docs/user/get_start_advance.rst @@ -11,11 +11,13 @@ Customizing layer Layers with weights ---------------------- -The fully-connected layer is `a = f(x*W+b)`, the most simple implementation is as follow, which can only support static model. +The fully-connected layer is `a = f(x*W+b)`; the simplest implementation is as follows. .. code-block:: python - class Dense(Layer): + from tensorlayer.layers import Module + + class Dense(Module): """The :class:`Dense` class is a fully connected layer. Parameters ---------- @@ -33,12 +35,16 @@ The fully-connected layer is `a = f(x*W+b)`, the most simple implementation is a n_units, # the number of units/channels of this layer act=None, # None: no activation, tf.nn.relu or 'relu': ReLU ... name=None, # the name of this layer (optional) + in_channels = None ): super(Dense, self).__init__(name, act=act) # auto naming, dense_1, dense_2 ...
self.n_units = n_units + self.in_channels = in_channels + self.build() + self._built = True - def build(self, inputs_shape): # initialize the model weights here - shape = [inputs_shape[1], self.n_units] + def build(self): # initialize the model weights here + shape = [self.in_channels, self.n_units] self.W = self._get_weights("weights", shape=tuple(shape), init=self.W_init) self.b = self._get_weights("biases", shape=(self.n_units, ), init=self.b_init) @@ -48,13 +54,14 @@ The fully-connected layer is `a = f(x*W+b)`, the most simple implementation is a z = self.act(z) return z -The full implementation is as follow, which supports both static and dynamic models and allows users to control whether to use the bias, how to initialize the weight values. +The full implementation is as follows; it supports both automatic input-shape inference and dynamic models, and allows users to control whether to use the bias and how to initialize the weights. .. code-block:: python - class Dense(Layer): + + class Dense(Module): """The :class:`Dense` class is a fully connected layer. - + Parameters ---------- n_units : int If None, it will be automatically detected when the layer is forwarded for the first time. name : None or str A unique layer name. If None, a unique name will be automatically generated. + + Examples + -------- + With TensorLayer + + >>> net = tl.layers.Input([100, 50], name='input') + >>> dense = tl.layers.Dense(n_units=800, act=tl.ReLU, in_channels=50, name='dense_1') + >>> print(dense) + Dense(n_units=800, relu, in_channels='50', name='dense_1') + >>> tensor = tl.layers.Dense(n_units=800, act=tl.ReLU, name='dense_2')(net) + >>> print(tensor) + tf.Tensor([...], shape=(100, 800), dtype=float32) + + Notes + ----- + If the layer input has more than two axes, it needs to be flattened by using :class:`Flatten`. + """ - + def __init__( - self, - n_units, - act=None, - W_init=tl.initializers.truncated_normal(stddev=0.1), - b_init=tl.initializers.constant(value=0.0), - in_channels=None, # the number of units/channels of the previous layer - name=None, + self, + n_units, + act=None, + W_init=tl.initializers.truncated_normal(stddev=0.05), + b_init=tl.initializers.constant(value=0.0), + in_channels=None, + name=None, # 'dense', ): - # we feed activation function to the base layer, `None` denotes identity function - # string (e.g., relu, sigmoid) will be converted into function.
- super(Dense, self).__init__(name, act=act) + + super(Dense, self).__init__(name, act=act) self.n_units = n_units self.W_init = W_init self.b_init = b_init self.in_channels = in_channels - # in dynamic model, the number of input channel is given, we initialize the weights here - if self.in_channels is not None: + if self.in_channels is not None: self.build(self.in_channels) self._built = True logging.info( "Dense %s: %d %s" % - (self.name, self.n_units, self.act.__name__ if self.act is not None else 'No Activation') + (self.name, self.n_units, self.act.__class__.__name__ if self.act is not None else 'No Activation') ) - def __repr__(self): # optional, for printing information - actstr = self.act.__name__ if self.act is not None else 'No Activation' + def __repr__(self): + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ('{classname}(n_units={n_units}, ' + actstr) if self.in_channels is not None: s += ', in_channels=\'{in_channels}\'' @@ -110,21 +132,40 @@ The full implementation is as follow, which supports both static and dynamic mod s += ')' return s.format(classname=self.__class__.__name__, **self.__dict__) - def build(self, inputs_shape): # initialize the model weights here - if self.in_channels: # if the number of input channel is given, use it + def build(self, inputs_shape): + if self.in_channels is None and len(inputs_shape) != 2: + raise AssertionError("The input dimension must be rank 2, please reshape or flatten it") + if self.in_channels: shape = [self.in_channels, self.n_units] - else: # otherwise, get it from static model + else: self.in_channels = inputs_shape[1] shape = [inputs_shape[1], self.n_units] + self.W = self._get_weights("weights", shape=tuple(shape), init=self.W_init) - if self.b_init: # if b_init is None, no bias is applied - self.b = self._get_weights("biases", shape=(self.n_units, ), init=self.b_init) - def forward(self, inputs): - z = tf.matmul(inputs, self.W) + self.b_init_flag = False if self.b_init: - z = tf.add(z, self.b) + self.b = self._get_weights("biases", shape=(self.n_units, ), init=self.b_init) + self.b_init_flag = True + self.bias_add = tl.ops.BiasAdd() + + self.act_init_flag = False if self.act: + self.act_init_flag = True + + self.matmul = tl.ops.MatMul() + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + z = self.matmul(inputs, self.W) + if self.b_init_flag: + z = self.bias_add(z, self.b) + if self.act_init_flag: z = self.act(z) return z @@ -136,37 +177,54 @@ We use Dropout as an example here: .. code-block:: python - class Dropout(Layer): - """ - The :class:`Dropout` class is a noise layer which randomly set some - activations to zero according to a keeping probability. - Parameters - ---------- - keep : float - The keeping probability. - The lower the probability it is, the more activations are set to zero. - name : None or str - A unique layer name. 
- """ - - def __init__(self, keep, name=None): - super(Dropout, self).__init__(name) - self.keep = keep - - self.build() - self._built = True - - logging.info("Dropout %s: keep: %f " % (self.name, self.keep)) - - def build(self, inputs_shape=None): - pass # no weights in dropout layer - - def forward(self, inputs): - if self.is_train: # this attribute is changed by Model.train() and Model.eval() described above - outputs = tf.nn.dropout(inputs, rate=1 - (self.keep), name=self.name) - else: - outputs = inputs - return outputs + class Dropout(Module): + """ + The :class:`Dropout` class is a noise layer which randomly set some + activations to zero according to a keeping probability. + + Parameters + ---------- + keep : float + The keeping probability. + The lower the probability it is, the more activations are set to zero. + seed : int or None + The seed for random dropout. + name : None or str + A unique layer name. + + Examples + -------- + >>> net = tl.layers.Input([10, 200]) + >>> net = tl.layers.Dropout(keep=0.2)(net) + + """ + + def __init__(self, keep, seed=0, name=None): #"dropout"): + super(Dropout, self).__init__(name) + self.keep = keep + self.seed = seed + + self.build() + self._built = True + + logging.info("Dropout %s: keep: %f " % (self.name, self.keep)) + + def __repr__(self): + s = ('{classname}(keep={keep}') + if self.name is not None: + s += ', name=\'{name}\'' + s += ')' + return s.format(classname=self.__class__.__name__, **self.__dict__) + + def build(self, inputs_shape=None): + self.dropout = tl.ops.Dropout(keep=self.keep, seed=self.seed) + + def forward(self, inputs): + if self.is_train: + outputs = self.dropout(inputs) + else: + outputs = inputs + return outputs Pre-trained CNN ================ @@ -176,42 +234,14 @@ Get entire CNN .. code-block:: python - import tensorflow as tf + import tensorlayer as tl import numpy as np from tensorlayer.models.imagenet_classes import class_names + from examples.model_zoo import vgg16 - vgg = tl.models.vgg16(pretrained=True) + vgg = vgg16(pretrained=True) img = tl.vis.read_image('data/tiger.jpeg') - img = tl.prepro.imresize(img, (224, 224)).astype(np.float32) / 255 + img = tl.prepro.imresize(img, (224, 224)).astype(tl.float32) / 255 output = vgg(img, is_train=False) -Get a part of CNN ------------------- - -.. code-block:: python - - # get VGG without the last layer - cnn = tl.models.vgg16(end_with='fc2_relu', mode='static').as_layer() - # add one more layer and build a new model - ni = tl.layers.Input([None, 224, 224, 3], name="inputs") - nn = cnn(ni) - nn = tl.layers.Dense(n_units=100, name='out')(nn) - model = tl.models.Model(inputs=ni, outputs=nn) - # train your own classifier (only update the last layer) - train_weights = model.get_layer('out').all_weights - -Reuse CNN ------------------- - -.. code-block:: python - - # in dynamic model, we can directly use the same model - # in static model - vgg_layer = tl.models.vgg16().as_layer() - ni_1 = tl.layers.Input([None, 224, 224, 3]) - ni_2 = tl.layers.Input([None, 224, 224, 3]) - a_1 = vgg_layer(ni_1) - a_2 = vgg_layer(ni_2) - M = Model(inputs=[ni_1, ni_2], outputs=[a_1, a_2]) - diff --git a/docs/user/get_start_model.rst b/docs/user/get_start_model.rst index 2337a7d55..d900f6836 100644 --- a/docs/user/get_start_model.rst +++ b/docs/user/get_start_model.rst @@ -5,31 +5,26 @@ Define a model =============== TensorLayer provides two ways to define a model. -Static model allows you to build model in a fluent way while dynamic model allows you to fully control the forward process. 
+Sequential model allows you to build a model in a fluent way, while dynamic model allows you to fully control the forward process. -Static model +Sequential model =============== .. code-block:: python - import tensorflow as tf - from tensorlayer.layers import Input, Dropout, Dense - from tensorlayer.models import Model - - def get_model(inputs_shape): - ni = Input(inputs_shape) - nn = Dropout(keep=0.8)(ni) - nn = Dense(n_units=800, act=tf.nn.relu, name="dense1")(nn) # “name" is optional - nn = Dropout(keep=0.8)(nn) - nn = Dense(n_units=800, act=tf.nn.relu)(nn) - nn = Dropout(keep=0.8)(nn) - nn = Dense(n_units=10, act=None)(nn) - M = Model(inputs=ni, outputs=nn, name="mlp") # “name" is optional - return M - - MLP = get_model([None, 784]) - MLP.eval() - outputs = MLP(data) + from tensorlayer.layers import SequentialLayer + from tensorlayer.layers import Dense + import tensorlayer as tl + + def get_model(): + layer_list = [] + layer_list.append(Dense(n_units=800, act=tl.ReLU, in_channels=784, name='Dense1')) + layer_list.append(Dense(n_units=800, act=tl.ReLU, in_channels=800, name='Dense2')) + layer_list.append(Dense(n_units=10, act=tl.ReLU, in_channels=800, name='Dense3')) + MLP = SequentialLayer(layer_list) + return MLP + + Dynamic model ======================= In this case, you need to manually input the output shape of the previous layer to the new layer. .. code-block:: python - class CustomModel(Model): + import tensorflow as tf # tf.nn.softmax is used in forward() below + import tensorlayer as tl + from tensorlayer.layers import Module + from tensorlayer.layers import Dropout, Dense + class CustomModel(Module): def __init__(self): super(CustomModel, self).__init__() self.dropout1 = Dropout(keep=0.8) - self.dense1 = Dense(n_units=800, act=tf.nn.relu, in_channels=784) + self.dense1 = Dense(n_units=800, act=tl.ReLU, in_channels=784) self.dropout2 = Dropout(keep=0.8) - self.dense2 = Dense(n_units=800, act=tf.nn.relu, in_channels=800) + self.dense2 = Dense(n_units=800, act=tl.ReLU, in_channels=800) self.dropout3 = Dropout(keep=0.8) self.dense3 = Dense(n_units=10, act=None, in_channels=800) @@ -63,73 +61,83 @@ In this case, you need to manually input the output shape of the previous layer return out MLP = CustomModel() - MLP.eval() + MLP.set_eval() outputs = MLP(data, foo=True) # controls the forward here outputs = MLP(data, foo=False) +Dynamic model with automatic input shape inference +================================================== + + +In this case, you do not need to manually pass the output shape of the previous layer to the new layer. + +.. code-block:: python + + import tensorflow as tf # tf.nn.softmax is used in forward() below + import tensorlayer as tl + from tensorlayer.layers import Module + from tensorlayer.layers import Dropout, Dense + class CustomModel(Module): + + def __init__(self): + super(CustomModel, self).__init__() + + self.dropout1 = Dropout(keep=0.8) + self.dense1 = Dense(n_units=800, act=tl.ReLU) + self.dropout2 = Dropout(keep=0.8) + self.dense2 = Dense(n_units=800, act=tl.ReLU) + self.dropout3 = Dropout(keep=0.8) + self.dense3 = Dense(n_units=10, act=None) + + def forward(self, x, foo=False): + z = self.dropout1(x) + z = self.dense1(z) + z = self.dropout2(z) + z = self.dense2(z) + z = self.dropout3(z) + out = self.dense3(z) + if foo: + out = tf.nn.softmax(out) + return out + + MLP = CustomModel() + MLP.init_build(tl.layers.Input(shape=(1, 784))) # init_build must be called to initialize the weights. + MLP.set_eval() + outputs = MLP(data, foo=True) # controls the forward here + outputs = MLP(data, foo=False) + Switching train/test modes ============================= ..
code-block:: python # method 1: switch before forward - Model.train() # enable dropout, batch norm moving avg ... - output = Model(train_data) + MLP.set_train() # enable dropout, batch norm moving avg ... + output = MLP(train_data) ... # training code here - Model.eval() # disable dropout, batch norm moving avg ... - output = Model(test_data) + MLP.set_eval() # disable dropout, batch norm moving avg ... + output = MLP(test_data) ... # testing code here - # method 2: switch while forward - output = Model(train_data, is_train=True) - output = Model(test_data, is_train=False) + # method 2: use the packaged training module + model = tl.models.Model(network=MLP, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer) + model.train(n_epoch=n_epoch, train_dataset=train_ds) Reuse weights ======================= -For static model, call the layer multiple time in model creation - -.. code-block:: python - - # create siamese network - - def create_base_network(input_shape): - '''Base network to be shared (eq. to feature extraction). - ''' - input = Input(shape=input_shape) - x = Flatten()(input) - x = Dense(128, act=tf.nn.relu)(x) - x = Dropout(0.9)(x) - x = Dense(128, act=tf.nn.relu)(x) - x = Dropout(0.9)(x) - x = Dense(128, act=tf.nn.relu)(x) - return Model(input, x) - - - def get_siamese_network(input_shape): - """Create siamese network with shared base network as layer - """ - base_layer = create_base_network(input_shape).as_layer() # convert model as layer - - ni_1 = Input(input_shape) - ni_2 = Input(input_shape) - nn_1 = base_layer(ni_1) # call base_layer twice - nn_2 = base_layer(ni_2) - return Model(inputs=[ni_1, ni_2], outputs=[nn_1, nn_2]) - - siamese_net = get_siamese_network([None, 784]) - For dynamic model, call the layer multiple time in forward function ..
code-block:: python - class MyModel(Model): + import tensorlayer as tl + from tensorlayer.layers import Module, Dense, Concat + class MyModel(Module): def __init__(self): super(MyModel, self).__init__() - self.dense_shared = Dense(n_units=800, act=tf.nn.relu, in_channels=784) - self.dense1 = Dense(n_units=10, act=tf.nn.relu, in_channels=800) - self.dense2 = Dense(n_units=10, act=tf.nn.relu, in_channels=800) + self.dense_shared = Dense(n_units=800, act=tl.ReLU, in_channels=784) + self.dense1 = Dense(n_units=10, act=tl.ReLU, in_channels=800) + self.dense2 = Dense(n_units=10, act=tl.ReLU, in_channels=800) self.cat = Concat() def forward(self, x): @@ -158,56 +166,6 @@ Print model information # (dropout_2): Dropout(keep=0.8, name='dropout_2') # (dense_2): Dense(n_units=10, None, in_channels='800', name='dense_2') # ) - - import pprint - pprint.pprint(MLP.config) # print the model architecture - # {'inputs': '_inputlayer_1_node_0', - # 'model_architecture': [{'args': {'dtype': tf.float32, - # 'layer_type': 'normal', - # 'name': '_inputlayer_1', - # 'shape': [None, 784]}, - # 'class': '_InputLayer', - # 'prev_layer': None}, - # {'args': {'keep': 0.8, - # 'layer_type': 'normal', - # 'name': 'dropout_1'}, - # 'class': 'Dropout', - # 'prev_layer': ['_inputlayer_1_node_0']}, - # {'args': {'act': 'relu', - # 'layer_type': 'normal', - # 'n_units': 800, - # 'name': 'dense_1'}, - # 'class': 'Dense', - # 'prev_layer': ['dropout_1_node_0']}, - # {'args': {'keep': 0.8, - # 'layer_type': 'normal', - # 'name': 'dropout_2'}, - # 'class': 'Dropout', - # 'prev_layer': ['dense_1_node_0']}, - # {'args': {'act': 'relu', - # 'layer_type': 'normal', - # 'n_units': 800, - # 'name': 'dense_2'}, - # 'class': 'Dense', - # 'prev_layer': ['dropout_2_node_0']}, - # {'args': {'keep': 0.8, - # 'layer_type': 'normal', - # 'name': 'dropout_3'}, - # 'class': 'Dropout', - # 'prev_layer': ['dense_2_node_0']}, - # {'args': {'act': None, - # 'layer_type': 'normal', - # 'n_units': 10, - # 'name': 'dense_3'}, - # 'class': 'Dense', - # 'prev_layer': ['dropout_3_node_0']}], - # 'name': 'mlp', - # 'outputs': 'dense_3_node_0', - # 'version_info': {'backend': 'tensorflow', - # 'backend_version': '2.0.0-alpha0', - # 'save_date': None, - # 'tensorlayer_version': '2.1.0', - # 'training_device': 'gpu'}} Get specific weights ======================= @@ -220,10 +178,6 @@ We can get the specific weights by indexing or naming. all_weights = MLP.all_weights some_weights = MLP.all_weights[1:3] - # naming - some_weights = MLP.get_layer('dense1').all_weights - - Save and restore model ======================= @@ -235,15 +189,17 @@ Save weights only .. code-block:: python - MLP.save_weights('model_weights.h5') # by default, file will be in hdf5 format - MLP.load_weights('model_weights.h5') + MLP.save_weights('./model_weights.npz') # by default, file will be in npz format + MLP.load_weights('./model_weights.npz') -Save model architecture and weights (optional) +Save model weights (optional) ----------------------------------------------- .. code-block:: python - # When using Model.load(), there is no need to reimplement or declare the architecture of the model explicitly in code - MLP.save('model.h5', save_weights=True) - MLP = Model.load('model.h5', load_weights=True) + # When using the packaged training module,
saving and loading the model can be done as follows + model = tl.models.Model(network=MLP, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer) + model.train(n_epoch=n_epoch, train_dataset=train_ds) + model.save_weights('./model.npz', format='npz_dict') + model.load_weights('./model.npz', format='npz_dict') diff --git a/docs/user/installation.rst b/docs/user/installation.rst index 3ba467f84..d7e88cc70 100644 --- a/docs/user/installation.rst +++ b/docs/user/installation.rst @@ -15,8 +15,9 @@ Mac OX, Linux and Windows, or ask for help on `tensorlayer@gmail.com `_. -Install TensorFlow +Install Backend ========================= +TensorLayer supports multiple deep learning backends: TensorFlow is the default, and MindSpore and PaddlePaddle are also supported. .. code-block:: bash @@ -24,9 +25,24 @@ Install TensorFlow pip3 install tensorflow-gpu # GPU version pip3 install tensorflow # CPU version + The installation instructions of TensorFlow are written to be very detailed on `TensorFlow`_ website. However, there are something need to be considered. For example, `TensorFlow`_ officially supports GPU acceleration for Linux, Mac OX and Windows at present. For ARM processor architecture, you need to install TensorFlow from source. +If you want to use the MindSpore backend, you should install mindspore==1.2.1. + +.. code-block:: bash + + pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.2.1/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-1.2.1-cp37-cp37m-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple + + +If you want to use the PaddlePaddle backend, you should install paddlepaddle>=2.1.1. + +.. code-block:: bash + + python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple + + Install TensorLayer ========================= @@ -192,7 +208,7 @@ After extracting cuDNN, you will get three folders (bin, lib, include). Then the Installing TensorLayer ------------------------ -For TensorLayer, please refer to the steps mentioned above. +For TensorLayer, please refer to the steps mentioned above. TensorLayer 3.0 supports multiple backends; we use the TensorFlow backend by default. If you need to use other backends, you can refer to the instructions above. .. code-block:: bash @@ -200,8 +216,6 @@ For TensorLayer, please refer to the steps mentioned above. pip3 install tensorflow-gpu   #GPU version (GPU version and CPU version just choose one) pip3 install tensorlayer       #Install tensorlayer - - Issue ======= diff --git a/examples/README.md b/examples/README.md deleted file mode 100644 index 1d96c6c4a..000000000 --- a/examples/README.md +++ /dev/null @@ -1,23 +0,0 @@ -
- - -
- -
-
- -
- -This page contains basic tutorials and examples that help you to learn TensorLayer quick, but for real-world applications, such as Chatbot, Super-Resolution, Pose Estimation, please check [Awesome-TensorLayer](https://github.com/tensorlayer/awesome-tensorlayer) and [Home-TensorLayer](https://github.com/tensorlayer) - -- [Basic tutorials](https://tensorlayer.readthedocs.io/en/latest/user/get_start_model.html) -- [Basic examples](https://github.com/tensorlayer/tensorlayer/tree/master/examples/basic_tutorials) -- [Using pre-trained CNNs](https://github.com/tensorlayer/tensorlayer/tree/master/examples/pretrained_cnn) -- [Quantized networks](https://github.com/tensorlayer/tensorlayer/tree/master/examples/quantized_net) -- [Reinforcement learning](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) -- [Spatial transformer](https://github.com/tensorlayer/tensorlayer/tree/master/examples/spatial_transformer_network) -- [Text/sentence classification](https://github.com/tensorlayer/tensorlayer/tree/master/examples/text_classification) -- [Text/sentence generation](https://github.com/tensorlayer/tensorlayer/tree/master/examples/text_generation) -- [Language modeling](https://github.com/tensorlayer/tensorlayer/tree/master/examples/text_ptb) -- [Word embedding](https://github.com/tensorlayer/tensorlayer/tree/master/examples/text_word_embedding) -- [Many more ...](https://github.com/tensorlayer) diff --git a/examples/basic_tutorials/README.md b/examples/basic_tutorials/README.md deleted file mode 100644 index 222df955a..000000000 --- a/examples/basic_tutorials/README.md +++ /dev/null @@ -1,13 +0,0 @@ -# Before You Start - -TensorLayer has two types of models. -Static model allows you to build model in a fluent way while dynamic model allows you to fully control the forward process. -Please read this [DOCS](https://tensorlayer.readthedocs.io/en/latest/user/get_start_model.html#) before you start. - -- [MNIST Simplest Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_simple.py) -- [MNIST Static Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_static.py) -- [MNIST Static Example for Reused Model](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_static_2.py) -- [MNIST Dynamic Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_dynamic.py) -- [MNIST Dynamic Example for Seperated Models](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_dynamic_2.py) -- [MNIST Static Siamese Model Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_siamese.py) -- [CIFAR10 Static Example with Data Augmentation](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_cifar10_cnn_static.py) diff --git a/examples/basic_tutorials/tutorial_LayerList.py b/examples/basic_tutorials/tutorial_LayerList.py new file mode 100644 index 000000000..23d480fc7 --- /dev/null +++ b/examples/basic_tutorials/tutorial_LayerList.py @@ -0,0 +1,34 @@ +#! 
/usr/bin/python +# -*- coding: utf-8 -*- + +from tensorlayer.layers import Module, LayerList, Dense +import tensorlayer as tl + +d1 = Dense(n_units=800, act=tl.ReLU, in_channels=784, name='Dense1') +d2 = Dense(n_units=800, act=tl.ReLU, in_channels=800, name='Dense2') +d3 = Dense(n_units=10, act=tl.ReLU, in_channels=800, name='Dense3') + +layer_list = LayerList([d1, d2]) +# Inserts a given d2 before a given index in the list +layer_list.insert(1, d2) +layer_list.insert(2, d2) +# Appends d2 from a Python iterable to the end of the list. +layer_list.extend([d2]) +# Appends a given d3 to the end of the list. +layer_list.append(d3) + +print(layer_list) + +class model(Module): + def __init__(self): + super(model, self).__init__() + self._list = layer_list + def forward(self, inputs): + output = self._list[0](inputs) + for i in range(1, len(self._list)): + output = self._list[i](output) + return output + +net = model() +print(net) +print(net(tl.layers.Input((10, 784)))) \ No newline at end of file diff --git a/examples/basic_tutorials/tutorial_SequentialLayer.py b/examples/basic_tutorials/tutorial_SequentialLayer.py new file mode 100644 index 000000000..1780d3fb2 --- /dev/null +++ b/examples/basic_tutorials/tutorial_SequentialLayer.py @@ -0,0 +1,46 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- +import os +os.environ['TL_BACKEND'] = 'tensorflow' + +from tensorlayer.layers import SequentialLayer +from tensorlayer.layers import Dense +import tensorlayer as tl +import numpy as np + +layer_list = [] +layer_list.append(Dense(n_units=800, act=tl.ReLU, in_channels=784, name='Dense1')) +layer_list.append(Dense(n_units=800, act=tl.ReLU, in_channels=800, name='Dense2')) +layer_list.append(Dense(n_units=10, act=tl.ReLU, in_channels=800, name='Dense3')) +MLP = SequentialLayer(layer_list) + +X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) + + +def generator_train(): + inputs = X_train + targets = y_train + if len(inputs) != len(targets): + raise AssertionError("The length of inputs and targets should be equal") + for _input, _target in zip(inputs, targets): + yield (_input, np.array(_target)) + + +n_epoch = 50 +batch_size = 128 +print_freq = 2 +shuffle_buffer_size = 128 + +# train_weights = MLP.trainable_weights +# print(train_weights) +optimizer = tl.optimizers.Momentum(0.05, 0.9) +train_ds = tl.dataflow.FromGenerator( + generator_train, output_types=(tl.float32, tl.int32), column_names=['data', 'label'] +) +train_ds = tl.dataflow.Shuffle(train_ds, shuffle_buffer_size) +train_ds = tl.dataflow.Batch(train_ds, batch_size) + +model = tl.models.Model(network=MLP, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer) +model.train(n_epoch=n_epoch, train_dataset=train_ds, print_freq=print_freq, print_train_batch=False) +model.save_weights('./model.npz', format='npz_dict') +model.load_weights('./model.npz', format='npz_dict') diff --git a/examples/basic_tutorials/tutorial_mnist_mlp_static_2.py b/examples/basic_tutorials/tutorial_automatic_inference_input _shape.py similarity index 50% rename from examples/basic_tutorials/tutorial_mnist_mlp_static_2.py rename to examples/basic_tutorials/tutorial_automatic_inference_input _shape.py index a4110eafb..3318b6982 100644 --- a/examples/basic_tutorials/tutorial_mnist_mlp_static_2.py +++ b/examples/basic_tutorials/tutorial_automatic_inference_input _shape.py @@ -1,99 +1,95 @@ -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Dense, Dropout, 
Input -from tensorlayer.models import Model - -## enable debug logging -tl.logging.set_verbosity(tl.logging.DEBUG) - -## prepare MNIST data -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - -## define the network -# the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to -# speed up computation, so we use identity here. -# see tf.nn.sparse_softmax_cross_entropy_with_logits() - - -def hidden_model(inputs_shape): - ni = Input(inputs_shape) - nn = Dropout(keep=0.8)(ni) - nn = Dense(n_units=800, act=tf.nn.relu)(nn) - nn = Dropout(keep=0.8)(nn) - nn = Dense(n_units=800, act=tf.nn.relu)(nn) - - return Model(inputs=ni, outputs=nn, name="mlp_hidden") - - -def get_model(inputs_shape, hmodel): - hidden = hmodel.as_layer() - ni = Input(inputs_shape) - nn = hidden(ni) - nn = Dropout(keep=0.8)(nn) - nn = Dense(n_units=10, act=tf.nn.relu)(nn) - - return Model(inputs=ni, outputs=nn, name="mlp") - - -MLP_hidden = hidden_model([None, 784]) -MLP = get_model([None, 784], MLP_hidden) -# MLP.print_layers() -# MLP.print_weights() - -## start training -n_epoch = 500 -batch_size = 500 -print_freq = 5 -train_weights = MLP.trainable_weights -optimizer = tf.optimizers.Adam(lr=0.0001) - -## the following code can help you understand SGD deeply -for epoch in range(n_epoch): ## iterate the dataset n_epoch times - start_time = time.time() - ## iterate over the entire training set once (shuffle the data via training) - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - MLP.train() # enable dropout - with tf.GradientTape() as tape: - ## compute outputs - _logits = MLP(X_batch) # alternatively, you can use MLP(x, is_train=True) and remove MLP.train() - ## compute loss and update model - _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss') - grad = tape.gradient(_loss, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - ## use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - MLP.eval() # disable dropout - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - train_loss, train_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False): - - _logits = MLP(X_batch) # alternatively, you can use MLP(x, is_train=False) and remove MLP.eval() - train_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - val_loss, val_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False): - _logits = MLP(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" val loss: {}".format(val_loss / n_iter)) - print(" val acc: {}".format(val_acc / n_iter)) - -## use testing data to evaluate the model -MLP.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False): - _logits = MLP(X_batch) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 -print(" test loss: {}".format(test_loss / 
n_iter))
-print(" test acc: {}".format(test_acc / n_iter))
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+import os
+os.environ['TL_BACKEND'] = 'tensorflow'
+
+import numpy as np
+import time
+import tensorflow as tf
+import tensorlayer as tl
+from tensorlayer.layers import Module
+from tensorlayer.layers import Dense, Dropout, BatchNorm1d
+
+X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784))
+
+
+class CustomModel(Module):
+
+    def __init__(self):
+        super(CustomModel, self).__init__()
+        self.dropout1 = Dropout(keep=0.8)
+        self.dense1 = Dense(n_units=800)
+        self.batchnorm = BatchNorm1d(act=tl.ReLU)
+        self.dropout2 = Dropout(keep=0.8)
+        self.dense2 = Dense(n_units=800, act=tl.ReLU)
+        self.dropout3 = Dropout(keep=0.8)
+        self.dense3 = Dense(n_units=10, act=tl.ReLU)
+
+    def forward(self, x, foo=None):
+        z = self.dropout1(x)
+        z = self.dense1(z)
+        z = self.batchnorm(z)
+        z = self.dropout2(z)
+        z = self.dense2(z)
+        z = self.dropout3(z)
+        out = self.dense3(z)
+        if foo is not None:
+            out = tl.ops.relu(out)
+        return out
+
+
+MLP = CustomModel()
+# Automatically infer the input shape.
+# If a layer has no in_channels set, init_build(input) must be called to initialize its weights.
+MLP.init_build(tl.layers.Input(shape=(1, 784)))
+
+n_epoch = 50
+batch_size = 500
+print_freq = 5
+train_weights = MLP.trainable_weights
+optimizer = tl.optimizers.Adam(lr=0.0001)
+
+for epoch in range(n_epoch):  ## iterate the dataset n_epoch times
+    start_time = time.time()
+    ## iterate over the entire training set once (shuffle the data each epoch)
+    for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
+        MLP.set_train()  # enable dropout
+        with tf.GradientTape() as tape:
+            ## compute outputs
+            _logits = MLP(X_batch)
+            ## compute loss and update model
+            _loss = tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='train_loss')
+        grad = tape.gradient(_loss, train_weights)
+        optimizer.apply_gradients(zip(grad, train_weights))
+
+    ## use training and evaluation sets to evaluate the model every print_freq epoch
+    if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
+        MLP.set_eval()  # disable dropout while evaluating
+        print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time))
+        train_loss, train_acc, n_iter = 0, 0, 0
+        for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False):
+            _logits = MLP(X_batch)
+            train_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='eval_loss')
+            train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch))
+            n_iter += 1
+        print("   train loss: {}".format(train_loss / n_iter))
+        print("   train acc:  {}".format(train_acc / n_iter))
+
+        val_loss, val_acc, n_iter = 0, 0, 0
+        for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False):
+            _logits = MLP(X_batch)  # is_train=False, disable dropout
+            val_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='eval_loss')
+            val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch))
+            n_iter += 1
+        print("   val loss: {}".format(val_loss / n_iter))
+        print("   val acc:  {}".format(val_acc / n_iter))
+
+## use testing data to evaluate the model
+MLP.set_eval()
+test_loss, test_acc, n_iter = 0, 0, 0
+for X_batch, y_batch in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False):
+    _logits = MLP(X_batch, foo=1)
+    test_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='test_loss')
+    test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch))
+    n_iter += 1
+print("   test foo=1 loss: {}".format(test_loss / n_iter))
+print("   test foo=1 acc:  {}".format(test_acc / n_iter))
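+
+# A minimal inference sketch. It assumes MLP accepts a raw NumPy batch, exactly
+# as in the loops above; with dropout disabled by MLP.set_eval(), the predicted
+# class is simply the argmax of the logits, the same rule the accuracy uses.
+sample_logits = MLP(X_test[:8])
+print("predicted classes:", np.argmax(sample_logits, 1))
+print("ground truth:     ", y_test[:8])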
diff --git a/examples/basic_tutorials/tutorial_cifar10_cnn_mindspore_backend.py b/examples/basic_tutorials/tutorial_cifar10_cnn_mindspore_backend.py
new file mode 100644
index 000000000..059b15620
--- /dev/null
+++ b/examples/basic_tutorials/tutorial_cifar10_cnn_mindspore_backend.py
@@ -0,0 +1,148 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+os.environ['TL_BACKEND'] = 'mindspore'
+
+import time
+import numpy as np
+import multiprocessing
+import tensorflow as tf
+from tensorlayer.layers import Module
+import tensorlayer as tl
+from tensorlayer.layers import (Conv2d, Dense, Flatten, MaxPool2d, BatchNorm2d)
+
+from mindspore.nn import Momentum, WithLossCell
+from mindspore import ParameterTuple
+import mindspore.nn as nn
+import mindspore as ms
+from mindspore.ops import composite as C
+import mindspore.ops.operations as P
+
+# enable debug logging
+tl.logging.set_verbosity(tl.logging.DEBUG)
+
+
+class CNN(Module):
+
+    def __init__(self):
+        super(CNN, self).__init__()
+        self.conv1 = Conv2d(64, (5, 5), (2, 2), b_init=None, name='conv1', in_channels=3, act=tl.ReLU)
+        self.bn = BatchNorm2d(num_features=64, act=tl.ReLU)
+        self.maxpool1 = MaxPool2d((3, 3), (2, 2), name='pool1')
+        self.conv2 = Conv2d(128, (5, 5), (2, 2), act=tl.ReLU, b_init=None, name='conv2', in_channels=64)
+        self.maxpool2 = MaxPool2d((3, 3), (2, 2), name='pool2')
+
+        self.flatten = Flatten(name='flatten')
+        self.dense1 = Dense(120, act=tl.ReLU, name='dense1relu', in_channels=512)
+        self.dense2 = Dense(84, act=tl.ReLU, name='dense2relu', in_channels=120)
+        self.dense3 = Dense(10, act=None, name='output', in_channels=84)
+
+    def forward(self, x):
+        z = self.conv1(x)
+        z = self.bn(z)
+        z = self.maxpool1(z)
+        z = self.conv2(z)
+        z = self.maxpool2(z)
+        z = self.flatten(z)
+        z = self.dense1(z)
+        z = self.dense2(z)
+        z = self.dense3(z)
+        return z
+
+
+# training settings
+batch_size = 128
+n_epoch = 500
+shuffle_buffer_size = 128
+
+# prepare cifar10 data
+X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False)
+
+
+def generator_train():
+    inputs = X_train
+    targets = y_train
+    if len(inputs) != len(targets):
+        raise AssertionError("The length of inputs and targets should be equal")
+    for _input, _target in zip(inputs, targets):
+        yield _input, _target
+
+
+def generator_test():
+    inputs = X_test
+    targets = y_test
+    if len(inputs) != len(targets):
+        raise AssertionError("The length of inputs and targets should be equal")
+    for _input, _target in zip(inputs, targets):
+        yield _input, _target
+
+
+def _map_fn_train(img, target):
+    # 1. Randomly crop a [height, width] section of the image.
+    img = tf.image.random_crop(img, [24, 24, 3])
+    # 2. Randomly flip the image horizontally.
+    img = tf.image.random_flip_left_right(img)
+    # 3. Randomly change brightness.
+    img = tf.image.random_brightness(img, max_delta=63)
+    # 4. Randomly change contrast.
+    img = tf.image.random_contrast(img, lower=0.2, upper=1.8)
+    # 5. Subtract off the mean and divide by the variance of the pixels.
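+    # per_image_standardization maps each image x to
+    #     (x - mean(x)) / adjusted_stddev, with adjusted_stddev = max(stddev(x), 1 / sqrt(N)),
+    # where N is the number of elements in x, so every training image comes out
+    # zero-mean with roughly unit variance.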
+ img = tf.image.per_image_standardization(img) + target = tf.reshape(target, ()) + return img, target + + +class GradWrap(Module): + """ GradWrap definition """ + + def __init__(self, network): + super(GradWrap, self).__init__(auto_prefix=False) + self.network = network + self.weights = ParameterTuple(filter(lambda x: x.requires_grad, network.get_parameters())) + + def forward(self, x, label): + return C.GradOperation(get_by_list=True)(self.network, self.weights)(x, label) + + +# dataset API and augmentation +train_ds = tf.data.Dataset.from_generator( + generator_train, output_types=(tf.float32, tf.int32) +) # , output_shapes=((24, 24, 3), (1))) +train_ds = train_ds.map(_map_fn_train, num_parallel_calls=multiprocessing.cpu_count()) +# train_ds = train_ds.repeat(n_epoch) +train_ds = train_ds.shuffle(shuffle_buffer_size) +train_ds = train_ds.prefetch(buffer_size=4096) +train_ds = train_ds.batch(batch_size) + +# get the network +net = CNN() +train_weights = net.trainable_weights +# optimizer = Adam(train_weights, learning_rate=0.01) +optimizer = Momentum(train_weights, 0.01, 0.5) +criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') +net_with_criterion = WithLossCell(net, criterion) +train_network = GradWrap(net_with_criterion) +train_network.set_train() +# print(train_weights) +for epoch in range(n_epoch): + start_time = time.time() + train_network.set_train() + train_loss, train_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in train_ds: + X_batch = ms.Tensor(X_batch.numpy(), dtype=ms.float32) + y_batch = ms.Tensor(y_batch.numpy(), dtype=ms.int32) + output = net(X_batch) + loss_output = criterion(output, y_batch) + grads = train_network(X_batch, y_batch) + success = optimizer(grads) + loss = loss_output.asnumpy() + train_loss += loss + n_iter += 1 + train_acc += np.mean((P.Equal()(P.Argmax(axis=1)(output), y_batch).asnumpy())) + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + print(" loss ", loss) diff --git a/examples/basic_tutorials/tutorial_cifar10_cnn_paddle_backend.py b/examples/basic_tutorials/tutorial_cifar10_cnn_paddle_backend.py new file mode 100644 index 000000000..133780bc3 --- /dev/null +++ b/examples/basic_tutorials/tutorial_cifar10_cnn_paddle_backend.py @@ -0,0 +1,165 @@ +#! 
/usr/bin/python +# -*- coding: utf-8 -*- +# The tensorlayer and tensorflow operators can be mixed +import os +os.environ['TL_BACKEND'] = 'paddle' + +import time +import numpy as np +import multiprocessing +import tensorflow as tf +import paddle as pd +from tensorlayer.layers import Module +import tensorlayer as tl +from tensorlayer.layers import (Conv2d, Dense, Flatten, MaxPool2d, BatchNorm2d) + +# enable debug logging +tl.logging.set_verbosity(tl.logging.DEBUG) +tl.logging.set_verbosity(tl.logging.DEBUG) + +# prepare cifar10 data +X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) + + +class CNN(Module): + + def __init__(self): + super(CNN, self).__init__() + # weights init + W_init = tl.initializers.truncated_normal(stddev=5e-2) + W_init2 = tl.initializers.truncated_normal(stddev=0.04) + b_init2 = tl.initializers.constant(value=0.1) + + self.conv1 = Conv2d(64, (5, 5), (1, 1), padding='SAME', W_init=W_init, b_init=None, name='conv1', in_channels=3) + self.bn1 = BatchNorm2d(num_features=64, act=tl.ReLU) + self.maxpool1 = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1') + + self.conv2 = Conv2d(64, (5, 5), (1, 1), padding='SAME', W_init=W_init, b_init=None, name='conv2', in_channels=64) + self.bn2 = BatchNorm2d(num_features=64, act=tl.ReLU) + self.maxpool2 = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2') + + self.flatten = Flatten(name='flatten') + self.dense1 = Dense(384, act=tl.ReLU, W_init=W_init2, b_init=b_init2, name='dense1relu', in_channels=2304) + self.dense2 = Dense(192, act=tl.ReLU, W_init=W_init2, b_init=b_init2, name='dense2relu', in_channels=384) + self.dense3 = Dense(10, act=None, W_init=W_init2, name='output', in_channels=192) + + def forward(self, x): + z = self.conv1(x) + z = self.bn1(z) + z = self.maxpool1(z) + z = self.conv2(z) + z = self.bn2(z) + z = self.maxpool2(z) + z = self.flatten(z) + z = self.dense1(z) + z = self.dense2(z) + z = self.dense3(z) + return z + + +def generator_train(): + inputs = X_train + targets = y_train + if len(inputs) != len(targets): + raise AssertionError("The length of inputs and targets should be equal") + for _input, _target in zip(inputs, targets): + # yield _input.encode('utf-8'), _target.encode('utf-8') + yield _input, _target + + +def generator_test(): + inputs = X_test + targets = y_test + if len(inputs) != len(targets): + raise AssertionError("The length of inputs and targets should be equal") + for _input, _target in zip(inputs, targets): + # yield _input.encode('utf-8'), _target.encode('utf-8') + yield _input, _target + + +def _map_fn_train(img, target): + # 1. Randomly crop a [height, width] section of the image. + img = tf.image.random_crop(img, [24, 24, 3]) + # 2. Randomly flip the image horizontally. + img = tf.image.random_flip_left_right(img) + # 3. Randomly change brightness. + img = tf.image.random_brightness(img, max_delta=63) + # 4. Randomly change contrast. + img = tf.image.random_contrast(img, lower=0.2, upper=1.8) + # 5. Subtract off the mean and divide by the variance of the pixels. + img = tf.image.per_image_standardization(img) + target = tf.reshape(target, ()) + return img, target + + +def _map_fn_test(img, target): + # 1. Crop the central [height, width] of the image. + img = tf.image.resize_with_pad(img, 24, 24) + # 2. Subtract off the mean and divide by the variance of the pixels. 
+    img = tf.image.per_image_standardization(img)
+    img = tf.reshape(img, (24, 24, 3))
+    target = tf.reshape(target, ())
+    return img, target
+
+
+# get the network
+net = CNN()
+
+# training settings
+batch_size = 128
+n_epoch = 500
+learning_rate = 0.0001
+print_freq = 5
+shuffle_buffer_size = 128
+metrics = tl.metric.Accuracy()
+
+train_weights = net.trainable_weights
+optimizer = tl.optimizers.Adam(learning_rate)
+# looking for decay learning rate? see https://github.com/tensorlayer/srgan/blob/master/train.py
+
+# dataset API and augmentation
+train_ds = tf.data.Dataset.from_generator(
+    generator_train, output_types=(tf.float32, tf.int32)
+)  # , output_shapes=((24, 24, 3), (1)))
+train_ds = train_ds.map(_map_fn_train, num_parallel_calls=multiprocessing.cpu_count())
+# train_ds = train_ds.repeat(n_epoch)
+train_ds = train_ds.shuffle(shuffle_buffer_size)
+train_ds = train_ds.prefetch(buffer_size=4096)
+train_ds = train_ds.batch(batch_size)
+
+test_ds = tf.data.Dataset.from_generator(
+    generator_test, output_types=(tf.float32, tf.int32)
+)  # , output_shapes=((24, 24, 3), (1)))
+# test_ds = test_ds.shuffle(shuffle_buffer_size)
+test_ds = test_ds.map(_map_fn_test, num_parallel_calls=multiprocessing.cpu_count())
+# test_ds = test_ds.repeat(n_epoch)
+test_ds = test_ds.prefetch(buffer_size=4096)
+test_ds = test_ds.batch(batch_size)
+
+for epoch in range(n_epoch):
+    start_time = time.time()
+    train_loss, train_acc, n_iter = 0, 0, 0
+    for X_batch, y_batch in train_ds:
+        X_batch = pd.to_tensor(X_batch.numpy(), dtype=tl.float32)
+        y_batch = pd.to_tensor(y_batch.numpy(), dtype=tl.int64)
+        net.set_train()
+
+        output = net(X_batch)
+        loss = pd.nn.functional.cross_entropy(output, y_batch)
+        loss_ce = loss.numpy()
+        params_grads = optimizer.gradient(loss, train_weights)
+        optimizer.apply_gradients(params_grads)
+
+        train_loss += loss_ce
+
+        if metrics:
+            metrics.update(output, y_batch)
+            train_acc += metrics.result()
+            metrics.reset()
+        n_iter += 1
+
+    print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time))
+    print("   train loss: {}".format(train_loss / n_iter))
+    print("   train acc:  {}".format(train_acc / n_iter))
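+
+# A test-set evaluation sketch, mirroring the TensorFlow-backend tutorial. It
+# assumes the test_ds pipeline and the Accuracy metric defined above, and that
+# Module.set_eval() behaves the same under the paddle backend as it does under
+# the tensorflow backend.
+net.set_eval()
+test_loss, test_acc, n_iter = 0, 0, 0
+for X_batch, y_batch in test_ds:
+    X_batch = pd.to_tensor(X_batch.numpy(), dtype=tl.float32)
+    y_batch = pd.to_tensor(y_batch.numpy(), dtype=tl.int64)
+    output = net(X_batch)
+    test_loss += pd.nn.functional.cross_entropy(output, y_batch).numpy()
+    metrics.update(output, y_batch)
+    test_acc += metrics.result()
+    metrics.reset()
+    n_iter += 1
+print("   test loss: {}".format(test_loss / n_iter))
+print("   test acc:  {}".format(test_acc / n_iter))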
diff --git a/examples/basic_tutorials/tutorial_cifar10_cnn_static.py b/examples/basic_tutorials/tutorial_cifar10_cnn_tensorflow_backend.py
similarity index 61%
rename from examples/basic_tutorials/tutorial_cifar10_cnn_static.py
rename to examples/basic_tutorials/tutorial_cifar10_cnn_tensorflow_backend.py
index ee6af3d0b..97acb447c 100644
--- a/examples/basic_tutorials/tutorial_cifar10_cnn_static.py
+++ b/examples/basic_tutorials/tutorial_cifar10_cnn_tensorflow_backend.py
@@ -1,15 +1,17 @@
-#!/usr/bin/env python3
+#! /usr/bin/python
 # -*- coding: utf-8 -*-
+# The tensorlayer and tensorflow operators can be mixed
+import os
+os.environ['TL_BACKEND'] = 'tensorflow'
 
-import multiprocessing
 import time
-
 import numpy as np
+import multiprocessing
 import tensorflow as tf
+from tensorlayer.layers import Module
 import tensorlayer as tl
-from tensorlayer.layers import (BatchNorm, Conv2d, Dense, Flatten, Input, LocalResponseNorm, MaxPool2d)
-from tensorlayer.models import Model
+from tensorlayer.layers import (Conv2d, Dense, Flatten, MaxPool2d, BatchNorm2d)
 
 # enable debug logging
 tl.logging.set_verbosity(tl.logging.DEBUG)
@@ -19,63 +21,48 @@
 X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False)
 
-
-# define the network
-def get_model(inputs_shape):
-    # self defined initialization
-    W_init = tl.initializers.truncated_normal(stddev=5e-2)
-    W_init2 = tl.initializers.truncated_normal(stddev=0.04)
-    b_init2 = tl.initializers.constant(value=0.1)
-
-    # build network
-    ni = Input(inputs_shape)
-    nn = Conv2d(64, (5, 5), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv1')(ni)
-    nn = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1')(nn)
-    nn = LocalResponseNorm(depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name="norm1")(nn)
-
-    nn = Conv2d(64, (5, 5), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv2')(nn)
-    nn = LocalResponseNorm(depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name="norm2")(nn)
-    nn = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2')(nn)
+class CNN(Module):
 
-    nn = Flatten(name='flatten')(nn)
-    nn = Dense(384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='dense1relu')(nn)
-    nn = Dense(192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='dense2relu')(nn)
-    nn = Dense(10, act=None, W_init=W_init2, name='output')(nn)
+    def __init__(self):
+        super(CNN, self).__init__()
+        # weights init
+        W_init = tl.initializers.truncated_normal(stddev=5e-2)
+        W_init2 = tl.initializers.truncated_normal(stddev=0.04)
+        b_init2 = tl.initializers.constant(value=0.1)
 
-    M = Model(inputs=ni, outputs=nn, name='cnn')
-    return M
+        self.conv1 = Conv2d(64, (5, 5), (1, 1), padding='SAME', W_init=W_init, b_init=None, name='conv1', in_channels=3)
+        self.bn = BatchNorm2d(num_features=64, act=tl.ReLU)
+        self.maxpool1 = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1')
+        self.conv2 = Conv2d(
+            64, (5, 5), (1, 1), padding='SAME', act=tl.ReLU, W_init=W_init, b_init=None, name='conv2', in_channels=64
+        )
+        self.maxpool2 = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2')
 
-def get_model_batchnorm(inputs_shape):
-    # self defined initialization
-    W_init = tl.initializers.truncated_normal(stddev=5e-2)
-    W_init2 = tl.initializers.truncated_normal(stddev=0.04)
-    b_init2 = tl.initializers.constant(value=0.1)
+        self.flatten = Flatten(name='flatten')
+        self.dense1 = Dense(384, act=tl.ReLU, W_init=W_init2, b_init=b_init2, name='dense1relu', in_channels=2304)
+        self.dense2 = Dense(192, act=tl.ReLU, W_init=W_init2, b_init=b_init2, name='dense2relu', in_channels=384)
+        self.dense3 = Dense(10, act=None, W_init=W_init2, name='output', in_channels=192)
 
-    # build network
-    ni = Input(inputs_shape)
-    nn = Conv2d(64, (5, 5), (1, 1), padding='SAME', W_init=W_init, b_init=None, name='conv1')(ni)
-    nn = BatchNorm(decay=0.99, act=tf.nn.relu, name='batch1')(nn)
-    nn = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1')(nn)
-
-    nn = Conv2d(64, (5, 5), (1, 1), padding='SAME', W_init=W_init, b_init=None,
name='conv2')(nn) - nn = BatchNorm(decay=0.99, act=tf.nn.relu, name='batch2')(nn) - nn = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2')(nn) - - nn = Flatten(name='flatten')(nn) - nn = Dense(384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='dense1relu')(nn) - nn = Dense(192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='dense2relu')(nn) - nn = Dense(10, act=None, W_init=W_init2, name='output')(nn) - - M = Model(inputs=ni, outputs=nn, name='cnn') - return M + def forward(self, x): + z = self.conv1(x) + z = self.bn(z) + z = self.maxpool1(z) + z = self.conv2(z) + z = self.maxpool2(z) + z = self.flatten(z) + z = self.dense1(z) + z = self.dense2(z) + z = self.dense3(z) + return z # get the network -net = get_model([None, 24, 24, 3]) +net = CNN() # training settings batch_size = 128 -n_epoch = 50000 +n_epoch = 500 learning_rate = 0.0001 print_freq = 5 n_step_epoch = int(len(y_train) / batch_size) @@ -83,7 +70,7 @@ def get_model_batchnorm(inputs_shape): shuffle_buffer_size = 128 train_weights = net.trainable_weights -optimizer = tf.optimizers.Adam(learning_rate) +optimizer = tl.optimizers.Adam(learning_rate) # looking for decay learning rate? see https://github.com/tensorlayer/srgan/blob/master/train.py @@ -155,41 +142,47 @@ def _map_fn_test(img, target): for epoch in range(n_epoch): start_time = time.time() + train_loss, train_acc, n_iter = 0, 0, 0 for X_batch, y_batch in train_ds: - net.train() + net.set_train() + with tf.GradientTape() as tape: # compute outputs _logits = net(X_batch) # compute loss and update model - _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss') - grad = tape.gradient(_loss, train_weights) + _loss_ce = tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='train_loss') + + grad = tape.gradient(_loss_ce, train_weights) optimizer.apply_gradients(zip(grad, train_weights)) - train_loss += _loss + + train_loss += _loss_ce train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) n_iter += 1 - # use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) print(" train loss: {}".format(train_loss / n_iter)) print(" train acc: {}".format(train_acc / n_iter)) - net.eval() + + # use training and evaluation sets to evaluate the model every print_freq epoch + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + + net.set_eval() val_loss, val_acc, n_iter = 0, 0, 0 for X_batch, y_batch in test_ds: _logits = net(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') + val_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='eval_loss') val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) n_iter += 1 print(" val loss: {}".format(val_loss / n_iter)) print(" val acc: {}".format(val_acc / n_iter)) # use testing data to evaluate the model -net.eval() +net.set_eval() test_loss, test_acc, n_iter = 0, 0, 0 for X_batch, y_batch in test_ds: _logits = net(X_batch) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') + test_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='test_loss') test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) n_iter += 1 print(" test loss: {}".format(test_loss / n_iter)) diff --git a/examples/basic_tutorials/tutorial_dataflow.py b/examples/basic_tutorials/tutorial_dataflow.py new file mode 100644 index 
000000000..57e1cd207 --- /dev/null +++ b/examples/basic_tutorials/tutorial_dataflow.py @@ -0,0 +1,84 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +import os +os.environ['TL_BACKEND'] = 'tensorflow' +# os.environ['TL_BACKEND'] = 'mindspore' +# os.environ['TL_BACKEND'] = 'paddle' + +import tensorlayer as tl +from tensorlayer.layers import Module +from tensorlayer.layers import Dense, Flatten +from tensorlayer.vision.transforms import Normalize, Compose +from tensorlayer.dataflow import Dataset, IterableDataset + +transform = Compose([Normalize(mean=[127.5], std=[127.5], data_format='HWC')]) + +print('download training data and load training data') + +X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1)) +X_train = X_train * 255 + +print('load finished') + + +class mnistdataset(Dataset): + + def __init__(self, data=X_train, label=y_train, transform=transform): + self.data = data + self.label = label + self.transform = transform + + def __getitem__(self, index): + data = self.data[index].astype('float32') + data = self.transform(data) + label = self.label[index].astype('int64') + + return data, label + + def __len__(self): + + return len(self.data) + + +class mnistdataset1(IterableDataset): + + def __init__(self, data=X_train, label=y_train, transform=transform): + self.data = data + self.label = label + self.transform = transform + + def __iter__(self): + + for i in range(len(self.data)): + data = self.data[i].astype('float32') + data = self.transform(data) + label = self.label[i].astype('int64') + yield data, label + + +class MLP(Module): + + def __init__(self): + super(MLP, self).__init__() + self.linear1 = Dense(n_units=120, in_channels=784, act=tl.ReLU) + self.linear2 = Dense(n_units=84, in_channels=120, act=tl.ReLU) + self.linear3 = Dense(n_units=10, in_channels=84) + self.flatten = Flatten() + + def forward(self, x): + x = self.flatten(x) + x = self.linear1(x) + x = self.linear2(x) + x = self.linear3(x) + return x + + +train_dataset = mnistdataset1(data=X_train, label=y_train, transform=transform) +train_dataset = tl.dataflow.FromGenerator( + train_dataset, output_types=[tl.float32, tl.int64], column_names=['data', 'label'] +) +train_loader = tl.dataflow.Dataloader(train_dataset, batch_size=128, shuffle=False) + +for i in train_loader: + print(i[0].shape, i[1]) diff --git a/examples/basic_tutorials/tutorial_mnist_mlp_dynamic.py b/examples/basic_tutorials/tutorial_mnist_mlp_dynamic.py deleted file mode 100644 index d986b01a3..000000000 --- a/examples/basic_tutorials/tutorial_mnist_mlp_dynamic.py +++ /dev/null @@ -1,102 +0,0 @@ -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Dense, Dropout, Input -from tensorlayer.models import Model - -## enable debug logging -tl.logging.set_verbosity(tl.logging.DEBUG) - -## prepare MNIST data -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - - -## define the network -class CustomModel(Model): - - def __init__(self): - super(CustomModel, self).__init__() - self.dropout1 = Dropout(keep=0.8) #(self.innet) - self.dense1 = Dense(n_units=800, act=tf.nn.relu, in_channels=784) #(self.dropout1) - self.dropout2 = Dropout(keep=0.8) #(self.dense1) - self.dense2 = Dense(n_units=800, act=tf.nn.relu, in_channels=800) #(self.dropout2) - self.dropout3 = Dropout(keep=0.8) #(self.dense2) - self.dense3 = Dense(n_units=10, act=tf.nn.relu, in_channels=800) #(self.dropout3) - - def forward(self, x, foo=None): 
- z = self.dropout1(x) - z = self.dense1(z) - z = self.dropout2(z) - z = self.dense2(z) - z = self.dropout3(z) - out = self.dense3(z) - if foo is not None: - out = tf.nn.relu(out) - return out - - -MLP = CustomModel() - -## start training -n_epoch = 500 -batch_size = 500 -print_freq = 5 -train_weights = MLP.trainable_weights -optimizer = tf.optimizers.Adam(learning_rate=0.0001) - -## the following code can help you understand SGD deeply -for epoch in range(n_epoch): ## iterate the dataset n_epoch times - start_time = time.time() - ## iterate over the entire training set once (shuffle the data via training) - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - MLP.train() # enable dropout - with tf.GradientTape() as tape: - ## compute outputs - _logits = MLP(X_batch, foo=1) - ## compute loss and update model - _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss') - grad = tape.gradient(_loss, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - ## use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - MLP.eval() # disable dropout - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - train_loss, train_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False): - _logits = MLP(X_batch, foo=1) - train_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" train foo=1 loss: {}".format(train_loss / n_iter)) - print(" train foo=1 acc: {}".format(train_acc / n_iter)) - val_loss, val_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False): - _logits = MLP(X_batch, foo=1) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" val foo=1 loss: {}".format(val_loss / n_iter)) - print(" val foo=1 acc: {}".format(val_acc / n_iter)) - val_loss, val_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False): - _logits = MLP(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" val foo=0 loss: {}".format(val_loss / n_iter)) - print(" val foo=0 acc: {}".format(val_acc / n_iter)) - -## use testing data to evaluate the model -MLP.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False): - _logits = MLP(X_batch, foo=1) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 -print(" test foo=1 loss: {}".format(val_loss / n_iter)) -print(" test foo=1 acc: {}".format(val_acc / n_iter)) diff --git a/examples/basic_tutorials/tutorial_mnist_mlp_dynamic_2.py b/examples/basic_tutorials/tutorial_mnist_mlp_dynamic_2.py deleted file mode 100644 index 58695c8ac..000000000 --- a/examples/basic_tutorials/tutorial_mnist_mlp_dynamic_2.py +++ /dev/null @@ -1,131 +0,0 @@ -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Dense, Dropout, Input, LayerList -from 
tensorlayer.models import Model - -## enable debug logging -tl.logging.set_verbosity(tl.logging.DEBUG) - -## prepare MNIST data -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - - -## define the network -class CustomModelHidden(Model): - - def __init__(self): - super(CustomModelHidden, self).__init__() - self.dropout1 = Dropout(keep=0.8) #(self.innet) - self.seq = LayerList( - [ - Dense(n_units=800, act=tf.nn.relu, in_channels=784), - Dropout(keep=0.8), - Dense(n_units=800, act=tf.nn.relu, in_channels=800), - ] - ) - self.dropout3 = Dropout(keep=0.8) #(self.seq) - - def forward(self, x): - z = self.dropout1(x) - z = self.seq(z) - z = self.dropout3(z) - return z - - -class CustomModelOut(Model): - - def __init__(self): - super(CustomModelOut, self).__init__() - self.dense3 = Dense(n_units=10, act=tf.nn.relu, in_channels=800) - - def forward(self, x, foo=None): - out = self.dense3(x) - if foo is not None: - out = tf.nn.relu(out) - return out - - -# NOTE: using previous defined model is different in dynamic network -# a dynamic network cannot be converted into Layer because the inputs and outputs are unknown until forwarding -# therefore, you may reuse a previous defined model in the following way - -MLP1 = CustomModelHidden() -MLP2 = CustomModelOut() -# MLP.print_layers() -# MLP.print_weights() -# print(MLP) - -## start training -n_epoch = 500 -batch_size = 500 -print_freq = 5 -train_weights = MLP1.trainable_weights + MLP2.trainable_weights -optimizer = tf.optimizers.Adam(learning_rate=0.0001) - -## the following code can help you understand SGD deeply -for epoch in range(n_epoch): ## iterate the dataset n_epoch times - start_time = time.time() - ## iterate over the entire training set once (shuffle the data via training) - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - MLP1.train() # enable dropout - MLP2.train() - with tf.GradientTape() as tape: - ## compute outputs - _hidden = MLP1(X_batch) - _logits = MLP2(_hidden, foo=1) - ## compute loss and update model - _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss') - grad = tape.gradient(_loss, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - ## use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - MLP1.eval() # disable dropout - MLP2.eval() - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - train_loss, train_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False): - _hidden = MLP1(X_batch) - _logits = MLP2(_hidden, foo=1) - train_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" train foo=1 loss: {}".format(train_loss / n_iter)) - print(" train foo=1 acc: {}".format(train_acc / n_iter)) - - val_loss, val_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False): - _hidden = MLP1(X_batch) - _logits = MLP2(_hidden, foo=1) - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" val foo=1 loss: {}".format(val_loss / n_iter)) - print(" val foo=1 acc: {}".format(val_acc / n_iter)) - - val_loss, val_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, 
batch_size, shuffle=False): - _hidden = MLP1(X_batch) - _logits = MLP2(_hidden, foo=0) - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" val foo=0 loss: {}".format(val_loss / n_iter)) - print(" val foo=0 acc: {}".format(val_acc / n_iter)) - -## use testing data to evaluate the model -MLP1.eval() -MLP2.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False): - _hidden = MLP1(X_batch) - _logits = MLP2(_hidden, foo=0) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 -print(" test foo=1 loss: {}".format(val_loss / n_iter)) -print(" test foo=1 acc: {}".format(val_acc / n_iter)) diff --git a/examples/basic_tutorials/tutorial_mnist_mlp_mindspore_backend.py b/examples/basic_tutorials/tutorial_mnist_mlp_mindspore_backend.py new file mode 100644 index 000000000..d23d785d1 --- /dev/null +++ b/examples/basic_tutorials/tutorial_mnist_mlp_mindspore_backend.py @@ -0,0 +1,92 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- +import os +os.environ['TL_BACKEND'] = 'mindspore' + +import mindspore.ops.operations as P +from mindspore.ops import composite as C +from mindspore import ParameterTuple +from mindspore.nn import Momentum, WithLossCell + +import numpy as np +import tensorlayer as tl +import mindspore as ms +import tensorflow as tf +import time +from tensorlayer.layers import Module +from tensorlayer.layers import Dense +import mindspore.nn as nn + + +class MLP(Module): + + def __init__(self): + super(MLP, self).__init__() + self.dense1 = Dense(n_units=800, act=tl.ReLU, in_channels=784) + self.dense2 = Dense(n_units=800, act=tl.ReLU, in_channels=800) + self.dense3 = Dense(n_units=10, act=tl.ReLU, in_channels=800) + + def forward(self, x): + z = self.dense1(x) + z = self.dense2(z) + out = self.dense3(z) + return out + + +class GradWrap(Module): + """ GradWrap definition """ + + def __init__(self, network): + super(GradWrap, self).__init__(auto_prefix=False) + self.network = network + self.weights = ParameterTuple(filter(lambda x: x.requires_grad, network.get_parameters())) + + def forward(self, x, label): + return C.GradOperation(get_by_list=True)(self.network, self.weights)(x, label) + + +def generator_train(): + inputs = X_train + targets = y_train + if len(inputs) != len(targets): + raise AssertionError("The length of inputs and targets should be equal") + for _input, _target in zip(inputs, targets): + yield _input, _target + + +net = MLP() +train_weights = list(filter(lambda x: x.requires_grad, net.get_parameters())) +optimizer = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.15, 0.8) + +criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') +net_with_criterion = WithLossCell(net, criterion) +train_network = GradWrap(net_with_criterion) +train_network.set_train() + +X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) +train_ds = tf.data.Dataset.from_generator(generator_train, output_types=(tf.float32, tf.int32)) +shuffle_buffer_size = 128 +batch_size = 128 +train_ds = train_ds.shuffle(shuffle_buffer_size) +train_ds = train_ds.batch(batch_size) +n_epoch = 50 + +for epoch in range(n_epoch): + start_time = time.time() + train_network.set_train() + train_loss, train_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in train_ds: + X_batch = 
ms.Tensor(X_batch.numpy(), dtype=ms.float32) + y_batch = ms.Tensor(y_batch.numpy(), dtype=ms.int32) + output = net(X_batch) + loss_output = criterion(output, y_batch) + grads = train_network(X_batch, y_batch) + success = optimizer(grads) + loss = loss_output.asnumpy() + train_loss += loss + n_iter += 1 + train_acc += np.mean((P.Equal()(P.Argmax(axis=1)(output), y_batch).asnumpy())) + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + print(" loss ", loss) diff --git a/examples/basic_tutorials/tutorial_mnist_mlp_paddlepaddle_backend.py b/examples/basic_tutorials/tutorial_mnist_mlp_paddlepaddle_backend.py new file mode 100644 index 000000000..c93cc87ed --- /dev/null +++ b/examples/basic_tutorials/tutorial_mnist_mlp_paddlepaddle_backend.py @@ -0,0 +1,52 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- +# The tensorlayer and Paddle operators can be mixed + +import os +os.environ['TL_BACKEND'] = 'paddle' + +import tensorlayer as tl +from tensorlayer.layers import Module +from tensorlayer.layers import Dense, Flatten +import paddle +from paddle.io import TensorDataset + +print('download training data and load training data') + +X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) + +print('load finished') +X_train = paddle.to_tensor(X_train.astype('float32')) +y_train = paddle.to_tensor(y_train.astype('int64')) + + +class MLP(Module): + + def __init__(self): + super(MLP, self).__init__() + self.linear1 = Dense(n_units=120, in_channels=784, act=tl.ReLU) + self.linear2 = Dense(n_units=84, in_channels=120, act=tl.ReLU) + self.linear3 = Dense(n_units=10, in_channels=84) + self.flatten = Flatten() + + def forward(self, x): + x = self.flatten(x) + x = self.linear1(x) + x = self.linear2(x) + x = self.linear3(x) + return x + + +traindataset = paddle.io.TensorDataset([X_train, y_train]) +train_loader = paddle.io.DataLoader(traindataset, batch_size=64, shuffle=True) +net = MLP() + +optimizer = tl.optimizers.Adam(learning_rate=0.001) +metric = tl.metric.Accuracy() +model = tl.models.Model( + network=net, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer, metrics=metric +) +model.train(n_epoch=2, train_dataset=train_loader, print_freq=5, print_train_batch=True) +model.save_weights('./model_mlp.npz', format='npz_dict') +model.load_weights('./model_mlp.npz', format='npz_dict') +# model.eval(train_loader) diff --git a/examples/basic_tutorials/tutorial_mnist_mlp_static.py b/examples/basic_tutorials/tutorial_mnist_mlp_tensorflow_backend.py similarity index 54% rename from examples/basic_tutorials/tutorial_mnist_mlp_static.py rename to examples/basic_tutorials/tutorial_mnist_mlp_tensorflow_backend.py index 358a0e561..2ed6771db 100644 --- a/examples/basic_tutorials/tutorial_mnist_mlp_static.py +++ b/examples/basic_tutorials/tutorial_mnist_mlp_tensorflow_backend.py @@ -1,89 +1,93 @@ -import pprint -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Dense, Dropout, Input -from tensorlayer.models import Model - -## enable debug logging -tl.logging.set_verbosity(tl.logging.DEBUG) - -## prepare MNIST data -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - - -## define the network -# the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to -# speed up computation, so we use identity here. 
-# see tf.nn.sparse_softmax_cross_entropy_with_logits() -def get_model(inputs_shape): - ni = Input(inputs_shape) - nn = Dropout(keep=0.8)(ni) - nn = Dense(n_units=800, act=tf.nn.relu)(nn) - nn = Dropout(keep=0.8)(nn) - nn = Dense(n_units=800, act=tf.nn.relu)(nn) - nn = Dropout(keep=0.8)(nn) - nn = Dense(n_units=10, act=tf.nn.relu)(nn) - M = Model(inputs=ni, outputs=nn, name="mlp") - return M - - -MLP = get_model([None, 784]) -pprint.pprint(MLP.config) - -## start training -n_epoch = 500 -batch_size = 500 -print_freq = 5 -train_weights = MLP.trainable_weights -optimizer = tf.optimizers.Adam(lr=0.0001) - -## the following code can help you understand SGD deeply -for epoch in range(n_epoch): ## iterate the dataset n_epoch times - start_time = time.time() - ## iterate over the entire training set once (shuffle the data via training) - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - MLP.train() # enable dropout - with tf.GradientTape() as tape: - ## compute outputs - _logits = MLP(X_batch) - ## compute loss and update model - _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss') - grad = tape.gradient(_loss, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - ## use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - MLP.eval() # disable dropout - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - train_loss, train_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False): - _logits = MLP(X_batch) - train_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - - val_loss, val_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False): - _logits = MLP(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 - print(" val loss: {}".format(val_loss / n_iter)) - print(" val acc: {}".format(val_acc / n_iter)) - -## use testing data to evaluate the model -MLP.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False): - _logits = MLP(X_batch) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 -print(" test loss: {}".format(test_loss / n_iter)) -print(" test acc: {}".format(test_acc / n_iter)) +#! 
/usr/bin/python +# -*- coding: utf-8 -*- +# The tensorlayer and tensorflow operators can be mixed +import os +os.environ['TL_BACKEND'] = 'tensorflow' + +import numpy as np +import time + +import tensorflow as tf +import tensorlayer as tl +from tensorlayer.layers import Module +from tensorlayer.layers import Dense, Dropout, BatchNorm1d + +X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) + + +class CustomModel(Module): + + def __init__(self): + super(CustomModel, self).__init__() + self.dropout1 = Dropout(keep=0.8) + self.dense1 = Dense(n_units=800, in_channels=784) + self.batchnorm = BatchNorm1d(act=tl.ReLU, num_features=800) + self.dropout2 = Dropout(keep=0.8) + self.dense2 = Dense(n_units=800, act=tl.ReLU, in_channels=800) + self.dropout3 = Dropout(keep=0.8) + self.dense3 = Dense(n_units=10, act=tl.ReLU, in_channels=800) + + def forward(self, x, foo=None): + z = self.dropout1(x) + z = self.dense1(z) + z = self.batchnorm(z) + z = self.dropout2(z) + z = self.dense2(z) + z = self.dropout3(z) + out = self.dense3(z) + if foo is not None: + out = tl.ops.relu(out) + return out + + +MLP = CustomModel() +n_epoch = 50 +batch_size = 500 +print_freq = 5 +train_weights = MLP.trainable_weights +optimizer = tl.optimizers.Adam(lr=0.0001) + +for epoch in range(n_epoch): ## iterate the dataset n_epoch times + start_time = time.time() + ## iterate over the entire training set once (shuffle the data via training) + for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): + MLP.set_train() # enable dropout + with tf.GradientTape() as tape: + ## compute outputs + _logits = MLP(X_batch) + ## compute loss and update model + _loss = tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='train_loss') + grad = tape.gradient(_loss, train_weights) + optimizer.apply_gradients(zip(grad, train_weights)) + + ## use training and evaluation sets to evaluate the model every print_freq epoch + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + train_loss, train_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False): + _logits = MLP(X_batch) + train_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='eval_loss') + train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) + n_iter += 1 + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + + val_loss, val_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False): + _logits = MLP(X_batch) # is_train=False, disable dropout + val_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='eval_loss') + val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) + n_iter += 1 + print(" val loss: {}".format(val_loss / n_iter)) + print(" val acc: {}".format(val_acc / n_iter)) + +## use testing data to evaluate the model +MLP.set_eval() +test_loss, test_acc, n_iter = 0, 0, 0 +for X_batch, y_batch in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False): + _logits = MLP(X_batch, foo=1) + test_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='test_loss') + test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) + n_iter += 1 +print(" test foo=1 loss: {}".format(test_loss / n_iter)) +print(" test foo=1 acc: {}".format(test_acc / n_iter)) diff --git 
a/examples/basic_tutorials/tutorial_mnist_siamese.py b/examples/basic_tutorials/tutorial_mnist_siamese.py deleted file mode 100644 index 236a40542..000000000 --- a/examples/basic_tutorials/tutorial_mnist_siamese.py +++ /dev/null @@ -1,142 +0,0 @@ -'''Trains a Siamese MLP on pairs of digits from the MNIST dataset. -Get 96.7% accuracy on test data after 20 epochs training. - -For more details, see the reference paper. - -# References -- Dimensionality Reduction by Learning an Invariant Mapping - http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf - - -''' - -import random -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Dense, Dropout, Flatten, Input -from tensorlayer.models import Model - -num_classes = 10 -epochs = 20 -batch_size = 128 -input_shape = (None, 784) - - -def contrastive_loss(label, feature1, feature2): - margin = 1.0 - eucd = tf.sqrt(tf.reduce_sum(tf.square(feature1 - feature2), axis=1)) - return tf.reduce_mean(label * tf.square(eucd) + (1 - label) * tf.square(tf.maximum(margin - eucd, 0))) - - -def compute_accuracy(label, feature1, feature2): - eucd = tf.sqrt(tf.reduce_sum((feature1 - feature2)**2, axis=1)) - pred = tf.cast(eucd < 0.5, label.dtype) - return tf.reduce_mean(tf.cast(tf.equal(pred, label), tf.float32)) - - -def create_base_network(input_shape): - '''Base network to be shared (eq. to feature extraction). - ''' - input = Input(shape=input_shape) - x = Flatten()(input) - x = Dense(128, act=tf.nn.relu)(x) - x = Dropout(0.9)(x) - x = Dense(128, act=tf.nn.relu)(x) - x = Dropout(0.9)(x) - x = Dense(128, act=tf.nn.relu)(x) - return Model(input, x) - - -def get_siamese_network(input_shape): - """Create siamese network with shared base network as layer - """ - base_layer = create_base_network(input_shape).as_layer() - - ni_1 = Input(input_shape) - ni_2 = Input(input_shape) - nn_1 = base_layer(ni_1) - nn_2 = base_layer(ni_2) - return Model(inputs=[ni_1, ni_2], outputs=[nn_1, nn_2]) - - -def create_pairs(x, digit_indices): - pairs = [] - labels = [] - n = min([len(digit_indices[d]) for d in range(num_classes)]) - 1 - for d in range(num_classes): - for i in range(n): - z1, z2 = digit_indices[d][i], digit_indices[d][i + 1] - pairs += [[x[z1], x[z2]]] - inc = random.randrange(1, num_classes) - dn = (d + inc) % num_classes - z1, z2 = digit_indices[d][i], digit_indices[dn][i] - pairs += [[x[z1], x[z2]]] - labels += [1, 0] - return np.array(pairs), np.array(labels).astype(np.float32) - - -# get network -model = get_siamese_network(input_shape) - -# create training+val+test positive and negative pairs -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - -digit_indices = [np.where(y_train == i)[0] for i in range(num_classes)] -tr_pairs, tr_y = create_pairs(X_train, digit_indices) - -digit_indices = [np.where(y_val == i)[0] for i in range(num_classes)] -val_pairs, val_y = create_pairs(X_val, digit_indices) - -digit_indices = [np.where(y_test == i)[0] for i in range(num_classes)] -te_pairs, te_y = create_pairs(X_test, digit_indices) - -# training settings -print_freq = 5 -train_weights = model.trainable_weights -optimizer = tf.optimizers.RMSprop() - - -@tf.function -def train_step(X_batch, y_batch): - with tf.GradientTape() as tape: - _out1, _out2 = model([X_batch[:, 0, :], X_batch[:, 1, :]]) - _loss = contrastive_loss(y_batch, _out1, _out2) - - grad = tape.gradient(_loss, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - _acc = 
compute_accuracy(y_batch, _out1, _out2) - return _loss, _acc - - -# begin training -for epoch in range(epochs): - start_time = time.time() - - train_loss, train_acc, n_iter = 0, 0, 0 - model.train() # enable dropout - for X_batch, y_batch in tl.iterate.minibatches(tr_pairs, tr_y, batch_size, shuffle=True): - _loss, _acc = train_step(X_batch, y_batch) - train_loss += _loss - train_acc += _acc - n_iter += 1 - - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - print("Epoch {} of {} took {}".format(epoch + 1, epochs, time.time() - start_time)) - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - -# evaluate on test data -model.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in tl.iterate.minibatches(te_pairs, te_y, batch_size, shuffle=False): - _out1, _out2 = model([X_batch[:, 0, :], X_batch[:, 1, :]]) - test_loss += contrastive_loss(y_batch, _out1, _out2) - test_acc += compute_accuracy(y_batch, _out1, _out2) - n_iter += 1 -print(" test loss: {}".format(test_loss / n_iter)) -print(" test acc: {}".format(test_acc / n_iter)) diff --git a/examples/basic_tutorials/tutorial_mnist_simple.py b/examples/basic_tutorials/tutorial_mnist_simple.py index b1ccd052b..6aa2d089b 100644 --- a/examples/basic_tutorials/tutorial_mnist_simple.py +++ b/examples/basic_tutorials/tutorial_mnist_simple.py @@ -1,61 +1,76 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import numpy as np -import tensorflow as tf +# The same set of code can switch the backend with one line +import os +# os.environ['TL_BACKEND'] = 'tensorflow' +# os.environ['TL_BACKEND'] = 'mindspore' +os.environ['TL_BACKEND'] = 'paddle' import tensorlayer as tl +from tensorlayer.layers import Module +from tensorlayer.layers import Dense, Dropout +from tensorlayer.dataflow import Dataset -tl.logging.set_verbosity(tl.logging.DEBUG) -# set gpu mem fraction or allow growth -# tl.utils.set_gpu_fraction() - -# prepare data X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) -# define the network -ni = tl.layers.Input([None, 784]) -nn = tl.layers.Dropout(keep=0.8)(ni) -nn = tl.layers.Dense(n_units=800, act=tf.nn.relu)(nn) -nn = tl.layers.Dropout(keep=0.5)(nn) -nn = tl.layers.Dense(n_units=800, act=tf.nn.relu)(nn) -nn = tl.layers.Dropout(keep=0.5)(nn) -nn = tl.layers.Dense(n_units=10, act=None)(nn) -network = tl.models.Model(inputs=ni, outputs=nn, name="mlp") +class mnistdataset(Dataset): + + def __init__(self, data = X_train, label = y_train): + self.data = data + self.label = label + + def __getitem__(self, index): + data = self.data[index].astype('float32') + label = self.label[index].astype('int64') + + return data, label + + def __len__(self): + + return len(self.data) -# define metric. 
-def acc(_logits, y_batch): - # return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - return tf.reduce_mean( - tf.cast(tf.equal(tf.argmax(_logits, 1), tf.convert_to_tensor(y_batch, tf.int64)), tf.float32), name='accuracy' - ) +class CustomModel(Module): -# print network information -print(network) + def __init__(self): + super(CustomModel, self).__init__() + self.dropout1 = Dropout(keep=0.8) + self.dense1 = Dense(n_units=800, act=tl.ReLU, in_channels=784) + self.dropout2 = Dropout(keep=0.8) + self.dense2 = Dense(n_units=800, act=tl.ReLU, in_channels=800) + self.dropout3 = Dropout(keep=0.8) + self.dense3 = Dense(n_units=10, act=tl.ReLU, in_channels=800) -# open tensorboard -# tl.utils.open_tensorboard('./tb_log', port=6006) + def forward(self, x, foo=None): + z = self.dropout1(x) + z = self.dense1(z) + z = self.dropout2(z) + z = self.dense2(z) + z = self.dropout3(z) + out = self.dense3(z) + if foo is not None: + out = tl.ops.relu(out) + return out -# train the network -tl.utils.fit( - network, train_op=tf.optimizers.Adam(learning_rate=0.0001), cost=tl.cost.cross_entropy, X_train=X_train, - y_train=y_train, acc=acc, batch_size=256, n_epoch=20, X_val=X_val, y_val=y_val, eval_train=True, - tensorboard_dir='./tb_log' -) +MLP = CustomModel() -# test -tl.utils.test(network, acc, X_test, y_test, batch_size=None, cost=tl.cost.cross_entropy) +n_epoch = 50 +batch_size = 128 +print_freq = 2 -# evaluation -_logits = tl.utils.predict(network, X_test) -y_pred = np.argmax(_logits, 1) -tl.utils.evaluation(y_test, y_pred, n_classes=10) -# save network weights -network.save_weights('model.h5') +train_weights = MLP.trainable_weights +optimizer = tl.optimizers.Momentum(0.05, 0.9) +metric = tl.metric.Accuracy() +loss_fn = tl.cost.softmax_cross_entropy_with_logits +train_dataset = mnistdataset(data = X_train, label = y_train) +train_dataset = tl.dataflow.FromGenerator(train_dataset, output_types=[tl.float32, tl.int64], column_names=['data', 'label']) +train_loader = tl.dataflow.Dataloader(train_dataset, batch_size=batch_size, shuffle=True) -# close tensorboard -# tl.utils.exit_tensorflow(port=6006) +model = tl.models.Model(network=MLP, loss_fn=loss_fn, optimizer=optimizer, metrics=metric) +model.train(n_epoch=n_epoch, train_dataset=train_loader, print_freq=print_freq, print_train_batch=False) +model.save_weights('./model.npz', format='npz_dict') +model.load_weights('./model.npz', format='npz_dict') \ No newline at end of file diff --git a/examples/quantized_net/tutorial_quanconv_cifar10.py b/examples/basic_tutorials/tutorial_nested_usage_of_Layer.py similarity index 52% rename from examples/quantized_net/tutorial_quanconv_cifar10.py rename to examples/basic_tutorials/tutorial_nested_usage_of_Layer.py index 9b649e6f0..24c3574dd 100644 --- a/examples/quantized_net/tutorial_quanconv_cifar10.py +++ b/examples/basic_tutorials/tutorial_nested_usage_of_Layer.py @@ -1,88 +1,104 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -""" +import os +os.environ['TL_BACKEND'] = 'tensorflow' -- 1. This model has 1,068,298 paramters and quantization compression strategy(weight:8 bits, active: 8 bits here, you can change the setting), -after 705 epoches' training with GPU, test accurcy of 84.0% was found. +import time +import numpy as np +import multiprocessing +import tensorflow as tf -- 2. For simplified CNN layers see "Convolutional layer (Simplified)" -in read the docs website. 
+from tensorlayer.layers import Module, SequentialLayer +import tensorlayer as tl +from tensorlayer.layers import (Conv2d, Dense, Flatten, MaxPool2d, BatchNorm2d, Elementwise) -- 3. Data augmentation without TFRecord see `tutorial_image_preprocess.py` !! +X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) -Links -------- -.. paper:https://arxiv.org/abs/1712.05877 +class Block(Module): -Note ------- -The optimizers between official code and this code are different. + def __init__(self, in_channels): + super(Block, self).__init__() + self.dense1 = Dense(in_channels=in_channels, n_units=256) + self.dense2 = Dense(in_channels=256, n_units=384) + self.dense3 = Dense(in_channels=in_channels, n_units=384) + self.concat = Elementwise(combine_fn=tl.ops.add) -Description ------------ -The images are processed as follows: -.. They are cropped to 24 x 24 pixels, centrally for evaluation or randomly for training. -.. They are approximately whitened to make the model insensitive to dynamic range. + def forward(self, inputs): + z = self.dense1(inputs) + z1 = self.dense2(z) -For training, we additionally apply a series of random distortions to -artificially increase the data set size: -.. Randomly flip the image from left to right. -.. Randomly distort the image brightness. -.. Randomly distort the image contrast. + z2 = self.dense3(inputs) + out = self.concat([z1, z2]) + return out -Speed Up --------- -Reading images from disk and distorting them can use a non-trivial amount -of processing time. To prevent these operations from slowing down training, -we run them inside 16 separate threads which continuously fill a TensorFlow queue. -""" -import multiprocessing -import time +class CNN(Module): -import numpy as np -import tensorflow as tf + def __init__(self): + super(CNN, self).__init__() + # weights init + W_init = tl.initializers.truncated_normal(stddev=5e-2) + W_init2 = tl.initializers.truncated_normal(stddev=0.04) + b_init2 = tl.initializers.constant(value=0.1) -import tensorlayer as tl -from tensorlayer.layers import (Dense, Flatten, Input, MaxPool2d, QuanConv2dWithBN, QuanDense) -from tensorlayer.models import Model + self.conv1 = Conv2d(64, (5, 5), (1, 1), padding='SAME', W_init=W_init, b_init=None, name='conv1', in_channels=3) + self.bn = BatchNorm2d(num_features=64, act=tl.ReLU) + self.maxpool1 = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1') -tl.logging.set_verbosity(tl.logging.DEBUG) + self.conv2 = Conv2d( + 64, (5, 5), (1, 1), padding='SAME', act=tl.ReLU, W_init=W_init, b_init=None, name='conv2', in_channels=64 + ) + self.maxpool2 = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2') -# Download data, and convert to TFRecord format, see ```tutorial_tfrecord.py``` -# prepare cifar10 data -X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) + self.flatten = Flatten(name='flatten') + self.dense1 = Dense(384, act=tl.ReLU, W_init=W_init2, b_init=b_init2, name='dense1relu', in_channels=2304) + self.dense_add = self.make_layer(in_channel=384) + + self.dense2 = Dense(192, act=tl.ReLU, W_init=W_init2, b_init=b_init2, name='dense2relu', in_channels=384) + self.dense3 = Dense(10, act=None, W_init=W_init2, name='output', in_channels=192) + + def forward(self, x): + z = self.conv1(x) + z = self.bn(z) + z = self.maxpool1(z) + z = self.conv2(z) + z = self.maxpool2(z) + z = self.flatten(z) + z = self.dense1(z) + z = self.dense_add(z) + z = self.dense2(z) + z = self.dense3(z) + return z -def 
model(input_shape, n_classes, bitW, bitA): - in_net = Input(shape=input_shape, name='input') - net = QuanConv2dWithBN(64, (5, 5), (1, 1), act='relu', padding='SAME', bitW=bitW, bitA=bitA, name='qcnnbn1')(in_net) - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1')(net) - net = QuanConv2dWithBN(64, (5, 5), (1, 1), padding='SAME', act='relu', bitW=bitW, bitA=bitA, name='qcnnbn2')(net) - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2')(net) - net = Flatten(name='flatten')(net) - net = QuanDense(384, act=tf.nn.relu, bitW=bitW, bitA=bitA, name='qd1relu')(net) - net = QuanDense(192, act=tf.nn.relu, bitW=bitW, bitA=bitA, name='qd2relu')(net) - net = Dense(n_classes, act=None, name='output')(net) - net = Model(inputs=in_net, outputs=net, name='dorefanet') - return net + def make_layer(self, in_channel): + layers = [] + _block = Block(in_channel) + layers.append(_block) + for _ in range(1, 3): + range_block = Block(in_channel) + layers.append(range_block) + + return SequentialLayer(layers) + + +# get the network +net = CNN() +print(net) # training settings -bitW = 8 -bitA = 8 -net = model([None, 24, 24, 3], n_classes=10, bitW=bitW, bitA=bitA) batch_size = 128 -n_epoch = 50000 +n_epoch = 500 learning_rate = 0.0001 print_freq = 5 n_step_epoch = int(len(y_train) / batch_size) n_step = n_epoch * n_step_epoch shuffle_buffer_size = 128 -optimizer = tf.optimizers.Adam(learning_rate) -cost = tl.cost.cross_entropy +train_weights = net.trainable_weights +optimizer = tl.optimizers.Adam(learning_rate) def generator_train(): @@ -130,23 +146,6 @@ def _map_fn_test(img, target): return img, target -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - # dataset API and augmentation train_ds = tf.data.Dataset.from_generator( generator_train, output_types=(tf.float32, tf.int32) @@ -172,36 +171,45 @@ def accuracy(_logits, y_batch): start_time = time.time() train_loss, train_acc, n_iter = 0, 0, 0 - net.train() for X_batch, y_batch in train_ds: - _loss, acc = _train_step(net, X_batch, y_batch, cost=cost, train_op=optimizer, acc=accuracy) + net.set_train() - train_loss += _loss - train_acc += acc + with tf.GradientTape() as tape: + # compute outputs + _logits = net(X_batch) + # compute loss and update model + _loss_ce = tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='train_loss') + + grad = tape.gradient(_loss_ce, train_weights) + optimizer.apply_gradients(zip(grad, train_weights)) + + train_loss += _loss_ce + train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) n_iter += 1 - # use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) print(" train loss: {}".format(train_loss / n_iter)) print(" train acc: {}".format(train_acc / n_iter)) - net.eval() - val_loss, val_acc, n_val_iter = 0, 0, 0 + # use training and evaluation sets to evaluate the model every print_freq epoch + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + + net.set_eval() + 
val_loss, val_acc, n_iter = 0, 0, 0 for X_batch, y_batch in test_ds: _logits = net(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') + val_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='eval_loss') val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_val_iter += 1 - print(" val loss: {}".format(val_loss / n_val_iter)) - print(" val acc: {}".format(val_acc / n_val_iter)) + n_iter += 1 + print(" val loss: {}".format(val_loss / n_iter)) + print(" val acc: {}".format(val_acc / n_iter)) # use testing data to evaluate the model -net.eval() +net.set_eval() test_loss, test_acc, n_iter = 0, 0, 0 for X_batch, y_batch in test_ds: _logits = net(X_batch) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') + test_loss += tl.cost.softmax_cross_entropy_with_logits(_logits, y_batch, name='test_loss') test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) n_iter += 1 print(" test loss: {}".format(test_loss / n_iter)) diff --git a/examples/data_process/README.md b/examples/data_process/README.md deleted file mode 100644 index 85888b3ac..000000000 --- a/examples/data_process/README.md +++ /dev/null @@ -1,10 +0,0 @@ -The examples show here are not the best, `tl.prepro.threading_data` is for quick testing. -The state-of-the-art method is TensorFlow's `tf.data` and `tf.image`. -We will change all examples later. - -Please use `basic_tutorials/tutorial_cifar10_datasetapi.py`. - - -### Blogs - -- [如何用TensorLayer做目标检测的数据增强](https://zhuanlan.zhihu.com/p/31466173) \ No newline at end of file diff --git a/examples/data_process/data/.DS_Store b/examples/data_process/data/.DS_Store deleted file mode 100644 index f94622148..000000000 Binary files a/examples/data_process/data/.DS_Store and /dev/null differ diff --git a/examples/data_process/data/__init__.py b/examples/data_process/data/__init__.py deleted file mode 100644 index 83d5401c3..000000000 --- a/examples/data_process/data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from __future__ import absolute_import - -from . 
import * diff --git a/examples/data_process/data/cat/img1.jpg b/examples/data_process/data/cat/img1.jpg deleted file mode 100644 index 11b289957..000000000 Binary files a/examples/data_process/data/cat/img1.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img2.jpg b/examples/data_process/data/cat/img2.jpg deleted file mode 100644 index 8ec9afe73..000000000 Binary files a/examples/data_process/data/cat/img2.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img3.jpg b/examples/data_process/data/cat/img3.jpg deleted file mode 100644 index 3b1f85e05..000000000 Binary files a/examples/data_process/data/cat/img3.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img4.jpg b/examples/data_process/data/cat/img4.jpg deleted file mode 100644 index 11daec57c..000000000 Binary files a/examples/data_process/data/cat/img4.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img5.jpg b/examples/data_process/data/cat/img5.jpg deleted file mode 100644 index 03954c45c..000000000 Binary files a/examples/data_process/data/cat/img5.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img6.jpg b/examples/data_process/data/cat/img6.jpg deleted file mode 100644 index 6e5ade90d..000000000 Binary files a/examples/data_process/data/cat/img6.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img7.jpg b/examples/data_process/data/cat/img7.jpg deleted file mode 100644 index 425733c15..000000000 Binary files a/examples/data_process/data/cat/img7.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img8.jpg b/examples/data_process/data/cat/img8.jpg deleted file mode 100644 index 3c2cf1f8f..000000000 Binary files a/examples/data_process/data/cat/img8.jpg and /dev/null differ diff --git a/examples/data_process/data/cat/img9.jpg b/examples/data_process/data/cat/img9.jpg deleted file mode 100644 index 1f8827ac1..000000000 Binary files a/examples/data_process/data/cat/img9.jpg and /dev/null differ diff --git a/examples/data_process/data/cat_caption.json b/examples/data_process/data/cat_caption.json deleted file mode 100644 index 6c8cce3ba..000000000 --- a/examples/data_process/data/cat_caption.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "info": {"description": "This is a cat dataset.", - "url": "http://tensorlayer.org", - "version": "1.0.0", - "year": 2016, - "contributor": "Hao Dong", - "date_created": "2016-09-30 12:11:50.00000"}, - "images":[ - {"file_name":"img1.jpg", "caption":"a yellow cat looks up"}, - {"file_name":"img2.jpg", "caption":"a grey cat with yellow eyes"}, - {"file_name":"img3.jpg", "caption":"grey cat with yellow eyes"}, - {"file_name":"img4.jpg", "caption":"yellow cat looks at you"}, - {"file_name":"img5.jpg", "caption":"a yellow cat stands up"}, - {"file_name":"img6.jpg", "caption":"a small cat sits down"}, - {"file_name":"img7.jpg", "caption":"it is a white cat with white eyes"}, - {"file_name":"img8.jpg", "caption":"it shows a head of cat"}, - {"file_name":"img9.jpg", "caption":"a black cat is running very fast"} - ] -} diff --git a/examples/data_process/data/dog/img1.jpg b/examples/data_process/data/dog/img1.jpg deleted file mode 100644 index cbafc08a8..000000000 Binary files a/examples/data_process/data/dog/img1.jpg and /dev/null differ diff --git a/examples/data_process/data/dog/img2.jpg b/examples/data_process/data/dog/img2.jpg deleted file mode 100644 index 31cdeab74..000000000 Binary files a/examples/data_process/data/dog/img2.jpg and /dev/null differ diff --git 
a/examples/data_process/data/dog/img3.jpg b/examples/data_process/data/dog/img3.jpg deleted file mode 100644 index e2fa74071..000000000 Binary files a/examples/data_process/data/dog/img3.jpg and /dev/null differ diff --git a/examples/data_process/data/dog/img4.jpg b/examples/data_process/data/dog/img4.jpg deleted file mode 100644 index 3e204712a..000000000 Binary files a/examples/data_process/data/dog/img4.jpg and /dev/null differ diff --git a/examples/data_process/data/dog/img5.jpg b/examples/data_process/data/dog/img5.jpg deleted file mode 100644 index af2956de2..000000000 Binary files a/examples/data_process/data/dog/img5.jpg and /dev/null differ diff --git a/examples/data_process/data/dog/img6.jpg b/examples/data_process/data/dog/img6.jpg deleted file mode 100644 index 97943eae9..000000000 Binary files a/examples/data_process/data/dog/img6.jpg and /dev/null differ diff --git a/examples/data_process/data/dog/img7.jpg b/examples/data_process/data/dog/img7.jpg deleted file mode 100644 index 07281f713..000000000 Binary files a/examples/data_process/data/dog/img7.jpg and /dev/null differ diff --git a/examples/data_process/data/dog/img8.jpg b/examples/data_process/data/dog/img8.jpg deleted file mode 100644 index 8a40fd88b..000000000 Binary files a/examples/data_process/data/dog/img8.jpg and /dev/null differ diff --git a/examples/data_process/data/dog/img9.jpg b/examples/data_process/data/dog/img9.jpg deleted file mode 100644 index aa0f6815c..000000000 Binary files a/examples/data_process/data/dog/img9.jpg and /dev/null differ diff --git a/examples/data_process/data/greenbackground/0.jpg b/examples/data_process/data/greenbackground/0.jpg deleted file mode 100644 index 7e203aae4..000000000 Binary files a/examples/data_process/data/greenbackground/0.jpg and /dev/null differ diff --git a/examples/data_process/data/greenbackground/1.jpg b/examples/data_process/data/greenbackground/1.jpg deleted file mode 100644 index 89e01d478..000000000 Binary files a/examples/data_process/data/greenbackground/1.jpg and /dev/null differ diff --git a/examples/data_process/data/tiger.jpeg b/examples/data_process/data/tiger.jpeg deleted file mode 100755 index 52c82a3c7..000000000 Binary files a/examples/data_process/data/tiger.jpeg and /dev/null differ diff --git a/examples/data_process/tutorial_fast_affine_transform.py b/examples/data_process/tutorial_fast_affine_transform.py deleted file mode 100644 index 85860e05a..000000000 --- a/examples/data_process/tutorial_fast_affine_transform.py +++ /dev/null @@ -1,139 +0,0 @@ -""" -Tutorial of fast affine transformation. -To run this tutorial, install opencv-python using pip. - -Comprehensive explanation of this tutorial can be found https://tensorlayer.readthedocs.io/en/stable/modules/prepro.html -""" - -import multiprocessing -import time - -import numpy as np - -import cv2 -import tensorflow as tf -import tensorlayer as tl - -# tl.logging.set_verbosity(tl.logging.DEBUG) -image = tl.vis.read_image('data/tiger.jpeg') -h, w, _ = image.shape - - -def create_transformation_matrix(): - # 1. 
Create required affine transformation matrices - ## fixed - # M_rotate = tl.prepro.affine_rotation_matrix(angle=20) - # M_flip = tl.prepro.affine_horizontal_flip_matrix(prob=1) - # M_shift = tl.prepro.affine_shift_matrix(wrg=0.1, hrg=0, h=h, w=w) - # M_shear = tl.prepro.affine_shear_matrix(x_shear=0.2, y_shear=0) - # M_zoom = tl.prepro.affine_zoom_matrix(zoom_range=0.8) - ## random - M_rotate = tl.prepro.affine_rotation_matrix(angle=(-20, 20)) - M_flip = tl.prepro.affine_horizontal_flip_matrix(prob=0.5) - M_shift = tl.prepro.affine_shift_matrix(wrg=(-0.1,0.1), hrg=(-0.1,0.1), h=h, w=w) - M_shear = tl.prepro.affine_shear_matrix(x_shear=(-0.2,0.2), y_shear=(-0.2,0.2)) - M_zoom = tl.prepro.affine_zoom_matrix(zoom_range=(0.8,1.2)) - - # 2. Combine matrices - # NOTE: operations are applied in a reversed order (i.e., rotation is performed first) - M_combined = M_shift.dot(M_zoom).dot(M_shear).dot(M_flip).dot(M_rotate) - - # 3. Convert the matrix from Cartesian coordinates (the origin in the middle of image) - # to image coordinates (the origin on the top-left of image) - transform_matrix = tl.prepro.transform_matrix_offset_center(M_combined, x=w, y=h) - return transform_matrix - - -def example1(): - """ Example 1: Applying transformation one-by-one is very SLOW ! """ - st = time.time() - for _ in range(100): # Try 100 times and compute the averaged speed - xx = tl.prepro.rotation(image, rg=-20, is_random=False) - xx = tl.prepro.flip_axis(xx, axis=1, is_random=False) - xx = tl.prepro.shear2(xx, shear=(0., -0.2), is_random=False) - xx = tl.prepro.zoom(xx, zoom_range=1 / 0.8) - xx = tl.prepro.shift(xx, wrg=-0.1, hrg=0, is_random=False) - print("apply transforms one-by-one took %fs for each image" % ((time.time() - st) / 100)) - tl.vis.save_image(xx, '_result_slow.png') - - -def example2(): - """ Example 2: Applying all transforms in one is very FAST ! 
""" - st = time.time() - for _ in range(100): # Repeat 100 times and compute the averaged speed - transform_matrix = create_transformation_matrix() - result = tl.prepro.affine_transform_cv2(image, transform_matrix, border_mode='replicate') # Transform the image using a single operation - tl.vis.save_image(result, '_result_fast_{}.png'.format(_)) - print("apply all transforms once took %fs for each image" % ((time.time() - st) / 100)) # usually 50x faster - tl.vis.save_image(result, '_result_fast.png') - - -def example3(): - """ Example 3: Using TF dataset API to load and process image for training """ - n_data = 100 - imgs_file_list = ['data/tiger.jpeg'] * n_data - train_targets = [np.ones(1)] * n_data - - def generator(): - if len(imgs_file_list) != len(train_targets): - raise RuntimeError('len(imgs_file_list) != len(train_targets)') - for _input, _target in zip(imgs_file_list, train_targets): - yield _input, _target - - def _data_aug_fn(image): - transform_matrix = create_transformation_matrix() - result = tl.prepro.affine_transform_cv2(image, transform_matrix) # Transform the image using a single operation - return result - - def _map_fn(image_path, target): - image = tf.io.read_file(image_path) - image = tf.image.decode_jpeg(image, channels=3) # Get RGB with 0~1 - image = tf.image.convert_image_dtype(image, dtype=tf.float32) - image = tf.numpy_function(_data_aug_fn, [image], [tf.float32])[0] - target = tf.reshape(target, ()) - return image, target - - n_epoch = 10 - batch_size = 5 - dataset = tf.data.Dataset.from_generator(generator, output_types=(tf.string, tf.int64)) - dataset = dataset.shuffle(buffer_size=4096) # shuffle before loading images - dataset = dataset.repeat(n_epoch) - dataset = dataset.map(_map_fn, num_parallel_calls=multiprocessing.cpu_count()) - dataset = dataset.batch(batch_size) # TODO: consider using tf.contrib.map_and_batch - dataset = dataset.prefetch(1) # prefetch 1 batch - - n_step = 0 - st = time.time() - for img, target in dataset: - n_step += 1 - pass - assert n_step == n_epoch * n_data / batch_size - print("dataset APIs took %fs for each image" % ((time.time() - st) / batch_size / n_step)) # CPU ~ 100% - - -def example4(): - """ Example 4: Transforming coordinates using affine matrix. """ - transform_matrix = create_transformation_matrix() - result = tl.prepro.affine_transform_cv2(image, transform_matrix) # 76 times faster - # Transform keypoint coordinates - coords = [[(50, 100), (100, 100), (100, 50), (200, 200)], [(250, 50), (200, 50), (200, 100)]] - coords_result = tl.prepro.affine_transform_keypoints(coords, transform_matrix) - - def imwrite(image, coords_list, name): - coords_list_ = [] - for coords in coords_list: - coords = np.array(coords, np.int32) - coords = coords.reshape((-1, 1, 2)) - coords_list_.append(coords) - image = cv2.polylines(image, coords_list_, True, (0, 255, 255), 3) - cv2.imwrite(name, image[..., ::-1]) - - imwrite(image, coords, '_with_keypoints_origin.png') - imwrite(result, coords_result, '_with_keypoints_result.png') - - -if __name__ == '__main__': - example1() - example2() - example3() - example4() diff --git a/examples/data_process/tutorial_tf_dataset_voc.py b/examples/data_process/tutorial_tf_dataset_voc.py deleted file mode 100644 index e430601b2..000000000 --- a/examples/data_process/tutorial_tf_dataset_voc.py +++ /dev/null @@ -1,113 +0,0 @@ -#! 
/usr/bin/python -# -*- coding: utf8 -*- - -# tf import data dataset.map https://www.tensorflow.org/programmers_guide/datasets#applying_arbitrary_python_logic_with_tfpy_func -# tf.py_func https://www.tensorflow.org/api_docs/python/tf/py_func -# tl ref: https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_imagenet_inceptionV3_distributed.py -# cn ref: https://blog.csdn.net/dQCFKyQDXYm3F8rB0/article/details/79342369 -# cn ref: https://zhuanlan.zhihu.com/p/31466173 - -import json -import multiprocessing -import random -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -# tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - -imgs_file_list, _, _, _, classes, _, _, _, objs_info_list, _ = tl.files.load_voc_dataset(dataset="2007") - -ann_list = [] -for info in objs_info_list: - ann = tl.prepro.parse_darknet_ann_str_to_list(info) - c, b = tl.prepro.parse_darknet_ann_list_to_cls_box(ann) - ann_list.append([c, b]) - -n_epoch = 10 -batch_size = 64 -im_size = [416, 416] -jitter = 0.2 -shuffle_buffer_size = 100 - - -def generator(): - inputs = imgs_file_list - targets = objs_info_list - - if len(inputs) != len(targets): - raise AssertionError("The length of inputs and targets should be equal") - - for _input, _target in zip(inputs, targets): - yield _input.encode('utf-8'), _target.encode('utf-8') - - -def _data_aug_fn(im, ann): - ## parse annotation - ann = ann.decode() - ann = tl.prepro.parse_darknet_ann_str_to_list(ann) - clas, coords = tl.prepro.parse_darknet_ann_list_to_cls_box(ann) - ## random brightness, contrast and saturation (tf.image API is faster) - # im = tl.prepro.brightness(im, gamma=0.5, gain=1, is_random=True) - # im = tl.prepro.illumination(im, gamma=(0.5, 1.5), - # contrast=(0.5, 1.5), saturation=(0.5, 1.5), is_random=True) # TypeError: Cannot handle this data type - ## random horizontal flip - im, coords = tl.prepro.obj_box_left_right_flip(im, coords, is_rescale=True, is_center=True, is_random=True) - ## random resize and crop - tmp0 = random.randint(1, int(im_size[0] * jitter)) - tmp1 = random.randint(1, int(im_size[1] * jitter)) - im, coords = tl.prepro.obj_box_imresize(im, coords, [im_size[0] + tmp0, im_size[1] + tmp1], \ - is_rescale=True, interp='bicubic') - im, clas, coords = tl.prepro.obj_box_crop(im, clas, coords, wrg=im_size[1], hrg=im_size[0], \ - is_rescale=True, is_center=True, is_random=True) - ## value [0, 255] to [-1, 1] (optional) - # im = im / 127.5 - 1 - ## value [0, 255] to [0, 1] (optional) - im = im / 255 - im = np.array(im, dtype=np.float32) # important - return im, str([clas, coords]).encode('utf-8') - - -def _map_fn(filename, annotation): - ## read image - image = tf.io.read_file(filename) - image = tf.image.decode_jpeg(image, channels=3) - image = tf.image.convert_image_dtype(image, dtype=tf.float32) - ## data augmentation for image only 0.02s - image = tf.image.random_brightness(image, max_delta=63) - image = tf.image.random_contrast(image, lower=0.2, upper=1.8) - # subtract off the mean and divide by the variance of the pixels. 
(optional) - # img = tf.image.per_image_standardization(img) - ## data augmentation for image and bounding box - image, annotation = tf.numpy_function(_data_aug_fn, [image, annotation], [tf.float32, tf.string]) - return image, annotation - - -ds = tf.data.Dataset.from_generator(generator, output_types=(tf.string, tf.string)) -ds = ds.shuffle(shuffle_buffer_size) -ds = ds.map(_map_fn, num_parallel_calls=multiprocessing.cpu_count()) -ds = ds.repeat(n_epoch) -ds = ds.prefetch(buffer_size=2048) -ds = ds.batch(batch_size) - -st = time.time() -im, annbyte = next(iter(ds)) -print('took {}s'.format(time.time() - st)) - -im = im.numpy() - -ann = [] -for a in annbyte: - a = a.numpy().decode() - ann.append(json.loads(a)) - -## save all images -for i in range(len(im)): - print(ann[i][1]) - tl.vis.draw_boxes_and_labels_to_image( - im[i] * 255, ann[i][0], ann[i][1], [], classes, True, save_name='_bbox_vis_%d.png' % i - ) diff --git a/examples/data_process/tutorial_tfrecord.py b/examples/data_process/tutorial_tfrecord.py deleted file mode 100644 index bd656a960..000000000 --- a/examples/data_process/tutorial_tfrecord.py +++ /dev/null @@ -1,98 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""You will learn. - -1. How to save data into TFRecord format file. -2. How to read data from TFRecord format file. - -Reference: ------------ -English : https://www.tensorflow.org/alpha/tutorials/load_data/images#build_a_tfdatadataset - https://www.tensorflow.org/alpha/tutorials/load_data/tf_records#tfrecord_files_using_tfdata -Chinese : http://blog.csdn.net/u012759136/article/details/52232266 - https://github.com/ycszen/tf_lab/blob/master/reading_data/TensorFlow高效加载数据的方法.md - -More ------- -1. tutorial_tfrecord2.py -2. tutorial_cifar10_tfrecord.py - -""" - -import os - -import numpy as np -import tensorflow as tf -from PIL import Image - -import tensorlayer as tl - -## Save data ================================================================== -# see https://www.tensorflow.org/alpha/tutorials/load_data/tf_records#writing_a_tfrecord_file -classes = ['/data/cat', '/data/dog'] # cat is 0, dog is 1 -cwd = os.getcwd() -writer = tf.io.TFRecordWriter("train.tfrecords") -for index, name in enumerate(classes): - class_path = cwd + name + "/" - for img_name in os.listdir(class_path): - img_path = class_path + img_name - img = Image.open(img_path) - img = img.resize((224, 224)) - ## Visualize the image as follow: - # tl.visualize.frame(I=img, second=5, saveable=False, name='frame', fig_idx=12836) - ## Converts a image to bytes - img_raw = img.tobytes() - ## Convert the bytes back to image as follow: - # image = Image.frombytes('RGB', (224,224), img_raw) - # tl.visualize.frame(I=image, second=1, saveable=False, name='frame', fig_idx=1236) - ## Write the data into TF format - # image : Feature + BytesList - # label : Feature + Int64List or FloatList - # sentence : FeatureList + Int64List , see Google's im2txt example - example = tf.train.Example(features=tf.train.Features(feature={ # SequenceExample for seuqnce example - "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[index])), - 'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])), - })) - writer.write(example.SerializeToString()) # Serialize To String -writer.close() - -## Load Data Method 1: Simple read ============================================ -# see https://www.tensorflow.org/alpha/tutorials/load_data/tf_records#reading_a_tfrecord_file_2 -# read data one by one in order -raw_dataset = tf.data.TFRecordDataset("train.tfrecords") 
-for serialized_example in raw_dataset: - example = tf.train.Example() # SequenceExample for seuqnce example - example.ParseFromString(serialized_example.numpy()) - img_raw = example.features.feature['img_raw'].bytes_list.value - label = example.features.feature['label'].int64_list.value - ## converts a image from bytes - image = Image.frombytes('RGB', (224, 224), img_raw[0]) - # tl.visualize.frame(np.asarray(image), second=0.5, saveable=False, name='frame', fig_idx=1283) - print(label) - - -## Read Data Method 2: using tf.data ======================================= -# see https://www.tensorflow.org/alpha/tutorials/load_data/tf_records#reading_a_tfrecord_file -# use shuffle and batch -def read_and_decode(filename): - # generate a queue with a given file name - raw_dataset = tf.data.TFRecordDataset([filename]).shuffle(1000).batch(4) - for serialized_example in raw_dataset: - features = tf.io.parse_example( - serialized_example, features={ - 'label': tf.io.FixedLenFeature([], tf.int64), - 'img_raw': tf.io.FixedLenFeature([], tf.string), - } - ) - # You can do more image distortion here for training data - img_batch = tf.io.decode_raw(features['img_raw'], tf.uint8) - img_batch = tf.reshape(img_batch, [4, 224, 224, 3]) - # img = tf.cast(img, tf.float32) * (1. / 255) - 0.5 - label_batch = tf.cast(features['label'], tf.int32) - yield img_batch, label_batch - - -img_batch, label_batch = next(read_and_decode("train.tfrecords")) -print("img_batch : %s" % img_batch.shape) -print("label_batch : %s" % label_batch.shape) -tl.visualize.images2d(img_batch, second=1, saveable=False, name='batch', dtype=None, fig_idx=2020121) diff --git a/examples/data_process/tutorial_tfrecord2.py b/examples/data_process/tutorial_tfrecord2.py deleted file mode 100755 index 163d2a64f..000000000 --- a/examples/data_process/tutorial_tfrecord2.py +++ /dev/null @@ -1,90 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""You will learn. - -1. How to convert CIFAR-10 dataset into TFRecord format file. -2. How to read CIFAR-10 from TFRecord format file. - -More: -1. tutorial_tfrecord.py -2. 
tutoral_cifar10_tfrecord.py - -""" - -import os - -import numpy as np -# import matplotlib -# matplotlib.use('GTK') -import tensorflow as tf - -import tensorlayer as tl - -# Download data, and convert to TFRecord format, see ```tutorial_tfrecord.py``` -X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) - -X_train = np.asarray(X_train, dtype=np.uint8) -y_train = np.asarray(y_train, dtype=np.int64) -X_test = np.asarray(X_test, dtype=np.float32) -y_test = np.asarray(y_test, dtype=np.int64) - -print('X_train.shape', X_train.shape) # (50000, 32, 32, 3) -print('y_train.shape', y_train.shape) # (50000,) -print('X_test.shape', X_test.shape) # (10000, 32, 32, 3) -print('y_test.shape', y_test.shape) # (10000,) -print('X %s y %s' % (X_test.dtype, y_test.dtype)) - -cwd = os.getcwd() -writer = tf.io.TFRecordWriter("train.cifar10") -for index, img in enumerate(X_train): - img_raw = img.tobytes() - ## Visualize a image - # tl.visualize.frame(np.asarray(img, dtype=np.uint8), second=1, saveable=False, name='frame', fig_idx=1236) - label = int(y_train[index]) - # print(label) - ## Convert the bytes back to image as follow: - # image = Image.frombytes('RGB', (32, 32), img_raw) - # image = np.fromstring(img_raw, np.float32) - # image = image.reshape([32, 32, 3]) - # tl.visualize.frame(np.asarray(image, dtype=np.uint8), second=1, saveable=False, name='frame', fig_idx=1236) - example = tf.train.Example( - features=tf.train.Features( - feature={ - "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])), - 'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])), - } - ) - ) - writer.write(example.SerializeToString()) # Serialize To String -writer.close() - - -## Read Data by Queue and Thread ======================================= -def read_and_decode(filename): - batchsize = 4 - raw_dataset = tf.data.TFRecordDataset([filename]).shuffle(1000).batch(batchsize) - for serialized_example in raw_dataset: - features = tf.io.parse_example( - serialized_example, features={ - 'label': tf.io.FixedLenFeature([], tf.int64), - 'img_raw': tf.io.FixedLenFeature([], tf.string), - } - ) - # You can do more image distortion here for training data - img_batch = tf.io.decode_raw(features['img_raw'], tf.uint8) - img_batch = tf.reshape(img_batch, [-1, 32, 32, 3]) - # img = tf.cast(img, tf.float32) #* (1. / 255) - 0.5 # don't need to cast here, as it is float32 already - label_batch = tf.cast(features['label'], tf.int32) - yield img_batch, label_batch - - -img_batch, label_batch = next(read_and_decode("train.tfrecords")) -print("img_batch : %s" % img_batch.shape) -print("label_batch : %s" % label_batch.shape) - -i = 0 -for img_batch, label_batch in read_and_decode("train.cifar10"): - tl.visualize.images2d(img_batch, second=1, saveable=False, name='batch' + str(i), dtype=np.uint8, fig_idx=2020121) - i += 1 - if i >= 3: - break diff --git a/examples/data_process/tutorial_tfrecord3.py b/examples/data_process/tutorial_tfrecord3.py deleted file mode 100644 index 9e5751a25..000000000 --- a/examples/data_process/tutorial_tfrecord3.py +++ /dev/null @@ -1,464 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -""" -You will learn. - -1. How to save time-series data (e.g. sentence) into TFRecord format file. -2. How to read time-series data from TFRecord format file. -3. How to create inputs, targets and mask. - -Reference ----------- -1. Google's im2txt - MSCOCO Image Captioning example -2. 
TFRecord in http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/ -3. Batching and Padding data in http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/ - -""" - -import json -import os - -import numpy as np -import tensorflow as tf -from PIL import Image - -import tensorlayer as tl - - -def _int64_feature(value): - """Wrapper for inserting an int64 Feature into a SequenceExample proto, - e.g, An integer label. - """ - return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) - - -def _bytes_feature(value): - """Wrapper for inserting a bytes Feature into a SequenceExample proto, - e.g, an image in byte - """ - # return tf.train.Feature(bytes_list=tf.train.BytesList(value=[str(value)])) - return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) - - -def _int64_feature_list(values): - """Wrapper for inserting an int64 FeatureList into a SequenceExample proto, - e.g, sentence in list of ints - """ - return tf.train.FeatureList(feature=[_int64_feature(v) for v in values]) - - -def _bytes_feature_list(values): - """Wrapper for inserting a bytes FeatureList into a SequenceExample proto, - e.g, sentence in list of bytes - """ - return tf.train.FeatureList(feature=[_bytes_feature(v) for v in values]) - - -# 1. Save data into TFRecord ===================================================== -cwd = os.getcwd() -IMG_DIR = cwd + '/data/cat/' -SEQ_FIR = cwd + '/data/cat_caption.json' -VOC_FIR = cwd + '/vocab.txt' -# read image captions from JSON -with tf.gfile.FastGFile(SEQ_FIR, "r") as f: - caption_data = json.loads(str(f.read())) # , encoding = "utf-8")) - -processed_capts, img_capts = [], [] -for idx in range(len(caption_data['images'])): - img_capt = caption_data['images'][idx]['caption'] - img_capts.append(img_capt) - processed_capts.append(tl.nlp.process_sentence(img_capt, start_word="", end_word="")) -print("Original Captions: %s" % img_capts) -print("Processed Captions: %s\n" % processed_capts) -# build vocab -_ = tl.nlp.create_vocab(processed_capts, word_counts_output_file=VOC_FIR, min_word_count=1) -vocab = tl.nlp.Vocabulary(VOC_FIR, start_word="", end_word="", unk_word="") - -# save -writer = tf.python_io.TFRecordWriter("train.cat_caption") -for idx in range(len(caption_data['images'])): - # get data - img_name = caption_data['images'][idx]['file_name'] - img_capt = ' ' + caption_data['images'][idx]['caption'] + ' ' - img_capt_ids = [vocab.word_to_id(word) for word in img_capt.split(' ')] - print("%s : %s : %s" % (img_name, img_capt, img_capt_ids)) - img = Image.open(IMG_DIR + img_name) - img = img.resize((299, 299)) - # tl.visualize.frame(I=img, second=0.2, saveable=False, name=img_name, fig_idx=12234) - img_raw = img.tobytes() - img_capt_b = [v.encode() for v in img_capt.split(' ')] - context = tf.train.Features(feature={ # Non-serial data uses Feature - "image/img_raw": _bytes_feature(img_raw), - }) - feature_lists = tf.train.FeatureLists( - feature_list={ # Serial data uses FeatureLists - "image/caption": _bytes_feature_list(img_capt_b), - "image/caption_ids": _int64_feature_list(img_capt_ids) - }) - sequence_example = tf.train.SequenceExample(context=context, feature_lists=feature_lists) - writer.write(sequence_example.SerializeToString()) # Serialize To String -writer.close() - -# 2. 
Simple read one image ======================================================= -filename_queue = tf.train.string_input_producer(["train.cat_caption"]) -reader = tf.TFRecordReader() -_, serialized_example = reader.read(filename_queue) # return the file and the name of file -# features, sequence_features = tf.parse_single_example(serialized_example, # see parse_single_sequence_example for sequence example -features, sequence_features = tf.parse_single_sequence_example( - serialized_example, context_features={ - 'image/img_raw': tf.FixedLenFeature([], tf.string), - }, sequence_features={ - "image/caption": tf.FixedLenSequenceFeature([], dtype=tf.string), - "image/caption_ids": tf.FixedLenSequenceFeature([], dtype=tf.int64), - } -) -c = tf.contrib.learn.run_n(features, n=1, feed_dict=None) -im = Image.frombytes('RGB', (299, 299), c[0]['image/img_raw']) -tl.visualize.frame(np.asarray(im), second=1, saveable=False, name='frame', fig_idx=1236) -c = tf.contrib.learn.run_n(sequence_features, n=1, feed_dict=None) -print(c[0]) - - -# 3. Prefetch serialized SequenceExample protos ================================== -def distort_image(image, thread_id): - """Perform random distortions on an image. - Args: - image: A float32 Tensor of shape [height, width, 3] with values in [0, 1). - thread_id: Preprocessing thread id used to select the ordering of color - distortions. There should be a multiple of 2 preprocessing threads. - Returns:```` - distorted_image: A float32 Tensor of shape [height, width, 3] with values in - [0, 1]. - """ - # Randomly flip horizontally. - with tf.name_scope("flip_horizontal"): # , values=[image]): # DH MOdify - # with tf.name_scope("flip_horizontal", values=[image]): - image = tf.image.random_flip_left_right(image) - # Randomly distort the colors based on thread id. - color_ordering = thread_id % 2 - with tf.name_scope("distort_color"): # , values=[image]): # DH MOdify - # with tf.name_scope("distort_color", values=[image]): # DH MOdify - if color_ordering == 0: - image = tf.image.random_brightness(image, max_delta=32. / 255.) - image = tf.image.random_saturation(image, lower=0.5, upper=1.5) - image = tf.image.random_hue(image, max_delta=0.032) - image = tf.image.random_contrast(image, lower=0.5, upper=1.5) - elif color_ordering == 1: - image = tf.image.random_brightness(image, max_delta=32. / 255.) - image = tf.image.random_contrast(image, lower=0.5, upper=1.5) - image = tf.image.random_saturation(image, lower=0.5, upper=1.5) - image = tf.image.random_hue(image, max_delta=0.032) - # The random_* ops do not necessarily clamp. - image = tf.clip_by_value(image, 0.0, 1.0) - - return image - - -# def process_image(encoded_image, -# is_training, -# height, -# width, -# resize_height=346, -# resize_width=346, -# thread_id=0, -# image_format="jpeg"): -# """Decode an image, resize and apply random distortions. -# In training, images are distorted slightly differently depending on thread_id. -# Args: -# encoded_image: String Tensor containing the image. -# is_training: Boolean; whether preprocessing for training or eval. -# height: Height of the output image. -# width: Width of the output image. -# resize_height: If > 0, resize height before crop to final dimensions. -# resize_width: If > 0, resize width before crop to final dimensions. -# thread_id: Preprocessing thread id used to select the ordering of color -# distortions. There should be a multiple of 2 preprocessing threads. -# image_format: "jpeg" or "png". 
-# Returns: -# A float32 Tensor of shape [height, width, 3] with values in [-1, 1]. -# Raises: -# ValueError: If image_format is invalid. -# """ -# # Helper function to log an image summary to the visualizer. Summaries are -# # only logged in thread 0. -# def image_summary(name, image): -# if not thread_id: -# tf.image_summary(name, tf.expand_dims(image, 0)) -# -# # Decode image into a float32 Tensor of shape [?, ?, 3] with values in [0, 1). -# with tf.name_scope("decode"):#, values=[encoded_image]): # DH modify -# # with tf.name_scope("decode", values=[encoded_image]): # DH modify -# if image_format == "jpeg": -# image = tf.image.decode_jpeg(encoded_image, channels=3) -# elif image_format == "png": -# image = tf.image.decode_png(encoded_image, channels=3) -# else: -# raise ValueError("Invalid image format: %s" % image_format) -# image = tf.image.convert_image_dtype(image, dtype=tf.float32) -# image_summary("original_image", image) -# -# # Resize image. -# assert (resize_height > 0) == (resize_width > 0) -# if resize_height: -# # image = tf.image.resize_images(image, -# # size=[resize_height, resize_width], -# # method=tf.image.ResizeMethod.BILINEAR) -# -# image = tf.image.resize_images(image, # DH Modify -# new_height=resize_height, -# new_width=resize_width, -# method=tf.image.ResizeMethod.BILINEAR) -# -# # Crop to final dimensions. -# if is_training: -# image = tf.random_crop(image, [height, width, 3]) -# else: -# # Central crop, assuming resize_height > height, resize_width > width. -# image = tf.image.resize_image_with_crop_or_pad(image, height, width) -# -# image_summary("resized_image", image) -# -# # Randomly distort the image. -# if is_training: -# image = distort_image(image, thread_id) -# -# image_summary("final_image", image) -# -# # Rescale to [-1,1] instead of [0, 1] -# image = tf.subtract(image, 0.5) -# image = tf.multiply(image, 2.0) -# return image - - -def prefetch_input_data( - reader, file_pattern, is_training, batch_size, values_per_shard, input_queue_capacity_factor=16, - num_reader_threads=1, shard_queue_name="filename_queue", value_queue_name="input_queue" -): - """Prefetches string values from disk into an input queue. - - In training the capacity of the queue is important because a larger queue - means better mixing of training examples between shards. The minimum number of - values kept in the queue is values_per_shard * input_queue_capacity_factor, - where input_queue_memory factor should be chosen to trade-off better mixing - with memory usage. - - Args: - reader: Instance of tf.ReaderBase. - file_pattern: Comma-separated list of file patterns (e.g. - /tmp/train_data-?????-of-00100). - is_training: Boolean; whether prefetching for training or eval. - batch_size: Model batch size used to determine queue capacity. - values_per_shard: Approximate number of values per shard. - input_queue_capacity_factor: Minimum number of values to keep in the queue - in multiples of values_per_shard. See comments above. - num_reader_threads: Number of reader threads to fill the queue. - shard_queue_name: Name for the shards filename queue. - value_queue_name: Name for the values input queue. - - Returns: - A Queue containing prefetched string values. 
- """ - data_files = [] - for pattern in file_pattern.split(","): - data_files.extend(tf.gfile.Glob(pattern)) - if not data_files: - tl.logging.fatal("Found no input files matching %s", file_pattern) - else: - tl.logging.info("Prefetching values from %d files matching %s", len(data_files), file_pattern) - - if is_training: - print(" is_training == True : RandomShuffleQueue") - filename_queue = tf.train.string_input_producer(data_files, shuffle=True, capacity=16, name=shard_queue_name) - min_queue_examples = values_per_shard * input_queue_capacity_factor - capacity = min_queue_examples + 100 * batch_size - values_queue = tf.RandomShuffleQueue( - capacity=capacity, min_after_dequeue=min_queue_examples, dtypes=[tf.string], - name="random_" + value_queue_name - ) - else: - print(" is_training == False : FIFOQueue") - filename_queue = tf.train.string_input_producer(data_files, shuffle=False, capacity=1, name=shard_queue_name) - capacity = values_per_shard + 3 * batch_size - values_queue = tf.FIFOQueue(capacity=capacity, dtypes=[tf.string], name="fifo_" + value_queue_name) - - enqueue_ops = [] - for _ in range(num_reader_threads): - _, value = reader.read(filename_queue) - enqueue_ops.append(values_queue.enqueue([value])) - tf.train.queue_runner.add_queue_runner(tf.train.queue_runner.QueueRunner(values_queue, enqueue_ops)) - - tf.summary.scalar( - "queue/%s/fraction_of_%d_full" % (values_queue.name, capacity), - tf.cast(values_queue.size(), tf.float32) * (1. / capacity) - ) - - return values_queue - - -is_training = True -resize_height = resize_width = 346 -height = width = 299 -# start to read -reader = tf.TFRecordReader() -input_queue = prefetch_input_data( - reader, - file_pattern="train.cat_caption", # sets train.???_caption to read many files - is_training=is_training, # if training, shuffle and random choice - batch_size=4, - values_per_shard=2300, # mixing between shards in training. - input_queue_capacity_factor=2, # minimum number of shards to keep in the input queue. - num_reader_threads=1 # number of threads for prefetching SequenceExample protos. -) -serialized_sequence_example = input_queue.dequeue() -# serialized_sequence_example = tf.train.string_input_producer(["train.cat_caption"]) # don't work -context, sequence = tf.parse_single_sequence_example( - serialized=serialized_sequence_example, context_features={"image/img_raw": tf.FixedLenFeature([], dtype=tf.string)}, - sequence_features={ - "image/caption": tf.FixedLenSequenceFeature([], dtype=tf.string), - "image/caption_ids": tf.FixedLenSequenceFeature([], dtype=tf.int64), - } -) - -img = tf.decode_raw(context["image/img_raw"], tf.uint8) -img = tf.reshape(img, [height, width, 3]) -img = tf.image.convert_image_dtype(img, dtype=tf.float32) - -try: - # for TensorFlow 0.11 - img = tf.image.resize_images(img, size=(resize_height, resize_width), method=tf.image.ResizeMethod.BILINEAR) -except Exception: - # for TensorFlow 0.10 - img = tf.image.resize_images( - img, new_height=resize_height, new_width=resize_width, method=tf.image.ResizeMethod.BILINEAR - ) -# Crop to final dimensions. -if is_training: - img = tf.random_crop(img, [height, width, 3]) -else: - # Central crop, assuming resize_height > height, resize_width > width. - img = tf.image.resize_image_with_crop_or_pad(img, height, width) -# Randomly distort the image. 
-if is_training: - img = distort_image(img, thread_id=0) -# Rescale to [-1, 1] instead of [0, 1] -img = tf.subtract(img, 0.5) -img = tf.multiply(img, 2.0) -img_cap = sequence["image/caption"] -img_cap_ids = sequence["image/caption_ids"] -img_batch, img_cap_batch, img_cap_ids_batch = tf.train.batch( - [img, img_cap, img_cap_ids], # Note: shuffle_batch doesn't support dynamic_pad - batch_size=4, - capacity=50000, - dynamic_pad=True, # string list pad with '', int list pad with 0 - num_threads=4 -) -sess = tf.Session() -# sess.run(tf.global_variables_initializer()) -tl.layers.initialize_global_variables(sess) -coord = tf.train.Coordinator() -threads = tf.train.start_queue_runners(sess=sess, coord=coord) -for _ in range(3): - print("Step %s" % _) - # print(sess.run([img, img_cap, img_cap_ids])) # one example only - imgs, caps, caps_id = sess.run([img_batch, img_cap_batch, img_cap_ids_batch]) # batch of examples with dynamic_pad - print(caps) - print(caps_id) - tl.visualize.images2d((imgs + 1) / 2, second=1, saveable=False, name='batch', dtype=None, fig_idx=202025) -coord.request_stop() -coord.join(threads) -sess.close() - - -# 4. Prefetch serialized SequenceExample protos. Create MASK and TARGET ======= -def batch_with_dynamic_pad(images_and_captions, batch_size, queue_capacity, add_summaries=True): - """Batches input images and captions. - - This function splits the caption into an input sequence and a target sequence, - where the target sequence is the input sequence right-shifted by 1. Input and - target sequences are batched and padded up to the maximum length of sequences - in the batch. A mask is created to distinguish real words from padding words. - - Example: - Actual captions in the batch ('-' denotes padded character): - [ - [ 1 2 5 4 5 ], - [ 1 2 3 4 - ], - [ 1 2 3 - - ], - ] - - input_seqs: - [ - [ 1 2 3 4 ], - [ 1 2 3 - ], - [ 1 2 - - ], - ] - - target_seqs: - [ - [ 2 3 4 5 ], - [ 2 3 4 - ], - [ 2 3 - - ], - ] - - mask: - [ - [ 1 1 1 1 ], - [ 1 1 1 0 ], - [ 1 1 0 0 ], - ] - - Args: - images_and_captions: A list of pairs [image, caption], where image is a - Tensor of shape [height, width, channels] and caption is a 1-D Tensor of - any length. Each pair will be processed and added to the queue in a - separate thread. - batch_size: Batch size. - queue_capacity: Queue capacity. - add_summaries: If true, add caption length summaries. - - Returns: - images: A Tensor of shape [batch_size, height, width, channels]. - input_seqs: An int32 Tensor of shape [batch_size, padded_length]. - target_seqs: An int32 Tensor of shape [batch_size, padded_length]. - mask: An int32 0/1 Tensor of shape [batch_size, padded_length]. 
- """ - enqueue_list = [] - for image, caption in images_and_captions: - caption_length = tf.shape(caption)[0] - input_length = tf.expand_dims(tf.subtract(caption_length, 1), 0) - - input_seq = tf.slice(caption, [0], input_length) - target_seq = tf.slice(caption, [1], input_length) - indicator = tf.ones(input_length, dtype=tf.int32) - enqueue_list.append([image, input_seq, target_seq, indicator]) - - images, input_seqs, target_seqs, mask = tf.train.batch_join( - enqueue_list, batch_size=batch_size, capacity=queue_capacity, dynamic_pad=True, name="batch_and_pad" - ) - - if add_summaries: - lengths = tf.add(tf.reduce_sum(mask, 1), 1) - tf.summary.scalar("caption_length/batch_min", tf.reduce_min(lengths)) - tf.summary.scalar("caption_length/batch_max", tf.reduce_max(lengths)) - tf.summary.scalar("caption_length/batch_mean", tf.reduce_mean(lengths)) - - return images, input_seqs, target_seqs, mask - - -images, input_seqs, target_seqs, input_mask = ( - batch_with_dynamic_pad(images_and_captions=[[img, img_cap]], batch_size=4, queue_capacity=50000) -) -sess = tf.Session() -sess.run(tf.global_variables_initializer()) -coord = tf.train.Coordinator() -threads = tf.train.start_queue_runners(sess=sess, coord=coord) -for _ in range(3): - print("Step %s" % _) - imgs, inputs, targets, masks = sess.run([images, input_seqs, target_seqs, input_mask]) - print(inputs) - print(targets) - print(masks) - tl.visualize.images2d((imgs + 1) / 2, second=1, saveable=False, name='batch', dtype=None, fig_idx=202025) -coord.request_stop() -coord.join(threads) -sess.close() diff --git a/examples/database/README.md b/examples/database/README.md deleted file mode 100644 index 0636abc7b..000000000 --- a/examples/database/README.md +++ /dev/null @@ -1,19 +0,0 @@ -# Dispatch Tasks - -1. This script (`dispatch_tasks.py`) creates 3 tasks (`task_script.py`) with different hyper-parameters and a dataset and pushes these tasks into the database. -2. On your GPU servers (for testing, it can be a new terminal on your local machine), run tasks as shown in `run_tasks.py`. -This script pulls and runs pending tasks, and saves the models and results to the database. -3. When all tasks complete, the dispatcher (`dispatch_tasks.py`) then selects the best model according to its accuracy. - - -# Save and load models - -- `task_script.py` shows how to save model. -- `dispatch_tasks.py ` shows how to find and load the model with the best testing accuracy. - -# Save and load datasets - -- `dispatch_tasks.py ` shows how to save a dataset. -- `task_script.py ` show how to find and load a dataset. - -#### More information in the online documentation. \ No newline at end of file diff --git a/examples/database/dispatch_tasks.py b/examples/database/dispatch_tasks.py deleted file mode 100644 index 4c8c02e44..000000000 --- a/examples/database/dispatch_tasks.py +++ /dev/null @@ -1,51 +0,0 @@ -""" -A sample script that shows how to distribute multiple tasks to multiple machine -using the database module. 
- -""" -import time - -import tensorflow as tf - -import tensorlayer as tl - -tl.logging.set_verbosity(tl.logging.DEBUG) -# tf.logging.set_verbosity(tf.logging.DEBUG) - -# connect to database -db = tl.db.TensorHub(ip='localhost', port=27017, dbname='temp', project_name='tutorial') - -# delete existing tasks, models and datasets in this project -db.delete_tasks() -db.delete_model() -db.delete_datasets() - -# save dataset into database, then allow other servers to use it -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) -db.save_dataset((X_train, y_train, X_val, y_val, X_test, y_test), 'mnist', description='handwriting digit') - -# push tasks into database, then allow other servers pull tasks to run -db.create_task( - task_name='mnist', script='task_script.py', hyper_parameters=dict(n_units1=800, n_units2=800), - saved_result_keys=['test_accuracy'], description='800-800' -) - -db.create_task( - task_name='mnist', script='task_script.py', hyper_parameters=dict(n_units1=600, n_units2=600), - saved_result_keys=['test_accuracy'], description='600-600' -) - -db.create_task( - task_name='mnist', script='task_script.py', hyper_parameters=dict(n_units1=400, n_units2=400), - saved_result_keys=['test_accuracy'], description='400-400' -) - -# wait for tasks to finish -while db.check_unfinished_task(task_name='mnist'): - print("waiting runners to finish the tasks") - time.sleep(1) - -# get the best model -print("all tasks finished") -net = db.find_top_model(model_name='mlp', sort=[("test_accuracy", -1)]) -print("the best accuracy {} is from model {}".format(net._test_accuracy, net._name)) diff --git a/examples/database/run_tasks.py b/examples/database/run_tasks.py deleted file mode 100644 index 446c2508f..000000000 --- a/examples/database/run_tasks.py +++ /dev/null @@ -1,19 +0,0 @@ -""" -Run this script on servers, it will monitor the database and run tasks when -task distributor push a task to the database. 
- -""" -import time - -import tensorlayer as tl - -# tl.logging.set_verbosity(tl.logging.DEBUG) - -# connect to database -db = tl.db.TensorHub(ip='localhost', port=27017, dbname='temp', project_name='tutorial') - -# monitors the database and pull tasks to run -while True: - print("waiting task from distributor") - db.run_top_task(task_name='mnist', sort=[("time", -1)]) - time.sleep(1) diff --git a/examples/database/task_script.py b/examples/database/task_script.py deleted file mode 100644 index 3f2f93ccd..000000000 --- a/examples/database/task_script.py +++ /dev/null @@ -1,71 +0,0 @@ -"""Sample task script.""" - -import tensorflow as tf - -import tensorlayer as tl - -# tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - -# connect to database -db = tl.db.TensorHub(ip='localhost', port=27017, dbname='temp', project_name='tutorial') - -# load dataset from database -X_train, y_train, X_val, y_val, X_test, y_test = db.find_top_dataset('mnist') - - -# define the network -def mlp(): - ni = tl.layers.Input([None, 784], name='input') - net = tl.layers.Dropout(keep=0.8, name='drop1')(ni) - net = tl.layers.Dense(n_units=n_units1, act=tf.nn.relu, name='relu1')(net) - net = tl.layers.Dropout(keep=0.5, name='drop2')(net) - net = tl.layers.Dense(n_units=n_units2, act=tf.nn.relu, name='relu2')(net) - net = tl.layers.Dropout(keep=0.5, name='drop3')(net) - net = tl.layers.Dense(n_units=10, act=None, name='output')(net) - M = tl.models.Model(inputs=ni, outputs=net) - return M - - -network = mlp() - -# cost and accuracy -cost = tl.cost.cross_entropy - - -def acc(y, y_): - correct_prediction = tf.equal(tf.argmax(y, 1), tf.convert_to_tensor(y_, tf.int64)) - return tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) - - -# define the optimizer -train_op = tf.optimizers.Adam(learning_rate=0.0001) - -# train the network -# tl.utils.fit( -# network, train_op, cost, X_train, y_train, acc=acc, batch_size=500, n_epoch=20, print_freq=5, -# X_val=X_val, y_val=y_val, eval_train=False -# ) - -tl.utils.fit( - network, - train_op=tf.optimizers.Adam(learning_rate=0.0001), - cost=tl.cost.cross_entropy, - X_train=X_train, - y_train=y_train, - acc=acc, - batch_size=256, - n_epoch=20, - X_val=X_val, - y_val=y_val, - eval_train=False, -) - -# evaluation and save result that match the result_key -test_accuracy = tl.utils.test(network, acc, X_test, y_test, batch_size=None, cost=cost) -test_accuracy = float(test_accuracy) - -# save model into database -db.save_model(network, model_name='mlp', name=str(n_units1) + '-' + str(n_units2), test_accuracy=test_accuracy) -# in other script, you can load the model as follow -# net = db.find_model(sess=sess, model_name=str(n_units1)+'-'+str(n_units2) diff --git a/examples/deprecated_tutorials/tutorial_image_preprocess.py b/examples/deprecated_tutorials/tutorial_image_preprocess.py deleted file mode 100755 index 7b4167ea7..000000000 --- a/examples/deprecated_tutorials/tutorial_image_preprocess.py +++ /dev/null @@ -1,28 +0,0 @@ -"""Data Augmentation by numpy, scipy, threading and queue. - -Note that, TensorFlow's TFRecord and Dataset API are faster. 
- -""" - -import time - -import tensorlayer as tl - -tl.logging.set_verbosity(tl.logging.DEBUG) - -X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) - - -def distort_img(x): - x = tl.prepro.flip_axis(x, axis=1, is_random=True) - x = tl.prepro.crop(x, wrg=28, hrg=28, is_random=True) - return x - - -s = time.time() -results = tl.prepro.threading_data(X_train[0:100], distort_img) -print("took %.3fs" % (time.time() - s)) -print(results.shape) - -tl.vis.save_images(X_train[0:10], [1, 10], '_original.png') -tl.vis.save_images(results[0:10], [1, 10], '_distorted.png') diff --git a/examples/deprecated_tutorials/tutorial_imagenet_inceptionV3_distributed.py b/examples/deprecated_tutorials/tutorial_imagenet_inceptionV3_distributed.py deleted file mode 100644 index 6c208f354..000000000 --- a/examples/deprecated_tutorials/tutorial_imagenet_inceptionV3_distributed.py +++ /dev/null @@ -1,452 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""Example of training an Inception V3 model with ImageNet. - -The parameters are set as in the best results of the paper: https://arxiv.org/abs/1512.00567 - -The dataset can be downloaded from http://www.image-net.org/ or from the Kaggle competition: -https://www.kaggle.com/c/imagenet-object-localization-challenge/data - -""" - -import argparse -import logging -import multiprocessing -import os -import random -import sys -import time -from xml.etree import ElementTree - -import numpy as np -import tensorflow as tf -from tensorflow.contrib import slim -from tensorflow.contrib.slim.python.slim.nets.inception_v3 import (inception_v3, inception_v3_arg_scope) -from tensorflow.python.framework.errors_impl import OutOfRangeError -from tensorflow.python.training import session_run_hook -from tensorflow.python.training.basic_session_run_hooks import StopAtStepHook -from tensorflow.python.training.monitored_session import \ - SingularMonitoredSession - -import tensorlayer as tl - -tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - -########## VARIABLES ########## - -# get the dataset: https://www.kaggle.com/c/imagenet-object-localization-challenge/data -# get the synset dictionary: http://www.image-net.org/archive/words.txt - -BASE_DIR = './' -ILSVRC_DIR = os.path.join(BASE_DIR, 'ILSVRC') -SYNSET_DICT = os.path.join(BASE_DIR, 'words.txt') -TRAIN_FILE = os.path.join(BASE_DIR, 'train.csv') -VAL_FILE = os.path.join(BASE_DIR, 'val.csv') -CLASSES_FILE = os.path.join(BASE_DIR, 'classes.csv') -CLASSES_VAL_FILE = os.path.join(BASE_DIR, 'classes_val.csv') -CHECKPOINTS_PATH = './checkpoints' - -########## DATASETS ########## - - -def get_data_sample(annotation_file, annotations_dir, data_dir): - labels = [] - image_file = annotation_file.replace(annotations_dir, data_dir).replace('.xml', '.JPEG') - if tf.gfile.Exists(annotation_file) and tf.gfile.Exists(image_file): - xmltree = ElementTree.parse(annotation_file) - objects = xmltree.findall("object") - for object_iter in objects: - labels.append(object_iter.find("name").text) - else: - image_file = None - return image_file, labels - - -def might_create_dataset(prefix, file, shuffle=False, suffix='**/*.xml'): - # load data - data = [] - labels = set() - annotations_dir = os.path.join(ILSVRC_DIR, 'Annotations', 'CLS-LOC', prefix) - data_dir = os.path.join(ILSVRC_DIR, 'Data', 'CLS-LOC', prefix) - for filename in tf.gfile.Glob(os.path.join(annotations_dir, suffix)): - image_path, image_labels = get_data_sample(filename, annotations_dir, data_dir) 
- if image_path is not None and len(image_labels) > 0: - data.append([image_path] + image_labels) - for label in image_labels: - labels.add(label) - if shuffle: - random.shuffle(data) - # write data - with tf.gfile.Open(file, 'w') as f: - for d in data: - f.write('{}\n'.format(','.join(d))) - return sorted(labels) - - -def might_create_training_set(): - if not tf.gfile.Exists(TRAIN_FILE): - labels = might_create_dataset('train', TRAIN_FILE, shuffle=True) - with tf.gfile.Open(CLASSES_FILE, 'w') as f: - for l in labels: - f.write('{}\n'.format(l)) - - -def might_create_validation_set(): - if not tf.gfile.Exists(VAL_FILE): - labels = might_create_dataset('val', VAL_FILE, suffix='*.xml') - with tf.gfile.Open(CLASSES_VAL_FILE, 'w') as f: - for l in labels: - f.write('{}\n'.format(l)) - - -def load_data(file, task_spec=None, batch_size=16, epochs=1, shuffle_size=0): - # load classes dict: - with tf.gfile.Open(CLASSES_FILE) as f: - labels = dict() - for i, line in enumerate(f.readlines()): - label = line.strip() - labels[label] = i - num_classes = len(labels) - # count file examples - with tf.gfile.Open(file) as f: - size = len(f.readlines()) - - image_size = inception_v3.default_image_size - dataset = tf.data.TextLineDataset([file]) - dataset = dataset.repeat(epochs) - # split the dataset in shards - if task_spec is not None and task_spec.num_workers > 1 and not task_spec.is_evaluator(): - dataset = dataset.shard(num_shards=task_spec.num_workers, index=task_spec.shard_index) - if shuffle_size > 0: - dataset = dataset.shuffle(buffer_size=shuffle_size) - - def _parse_example_fn(line): - line_split = line.decode().split(',') - filename = line_split[0] - labels_names = line_split[1:] - # labels - one_hot_labels = np.zeros(num_classes, dtype=np.float32) - for l in labels_names: - one_hot_labels[labels[l]] = 1.0 - # image - image_bytes = tf.gfile.FastGFile(filename, 'rb').read() - return image_bytes, one_hot_labels - - def _map_fn(example_serialized): - image_bytes, one_hot_labels = tf.py_func( - _parse_example_fn, [example_serialized], [tf.string, tf.float32], stateful=False - ) - - image = tf.image.decode_jpeg(image_bytes, channels=3) - image = tf.image.resize_images(image, size=[image_size, image_size]) - image = tf.image.convert_image_dtype(image, dtype=tf.float32) - one_hot_labels = tf.reshape(one_hot_labels, [num_classes]) - return image, one_hot_labels - - max_cpus = multiprocessing.cpu_count() - dataset = dataset.map(_map_fn, num_parallel_calls=max_cpus) - dataset = dataset.prefetch(batch_size * max_cpus + 100) - dataset = dataset.batch(batch_size) - images, one_hot_classes = dataset.make_one_shot_iterator().get_next() - - images = tf.reshape(images, [batch_size, image_size, image_size, 3]) - one_hot_classes = tf.reshape(one_hot_classes, [batch_size, num_classes]) - - return images, one_hot_classes, num_classes, size - - -########## NETWORK ########## - - -def build_network(image_input, num_classes=1001, is_training=False): - net_in = tl.layers.InputLayer(image_input, name='input_layer') - with slim.arg_scope(inception_v3_arg_scope()): - network = tl.layers.SlimNetsLayer( - prev_layer=net_in, slim_layer=inception_v3, slim_args={ - 'num_classes': num_classes, - 'is_training': is_training - }, name='InceptionV3' - ) - - predictions = tf.nn.sigmoid(network.outputs, name='Predictions') - return network, predictions - - -########## EVALUATOR ########## - - -class EvaluatorStops(Exception): - - def __init__(self, message): - super(EvaluatorStops, self).__init__(message) - - -class 
EvaluatorHook(session_run_hook.SessionRunHook): - - def __init__(self, checkpoints_path, saver): - self.checkpoints_path = checkpoints_path - self.summary_writer = tf.summary.FileWriter(os.path.join(checkpoints_path, 'validation')) - self.lastest_checkpoint = None - self.saver = saver - self.summary = None - - def after_create_session(self, session, coord): - checkpoint = tf.train.latest_checkpoint(self.checkpoints_path) - # wait until a new check point is available - total_waited_secs = 0 - while self.lastest_checkpoint == checkpoint: - time.sleep(30) # sleep 30 seconds waiting for a new checkpoint - checkpoint = tf.train.latest_checkpoint(self.checkpoints_path) - total_waited_secs += 30 - if total_waited_secs > 30 * 60 * 60: - raise EvaluatorStops('Waited more than half an hour to load a new checkpoint') - - # restore the checkpoint - self.saver.restore(session, checkpoint) - self.lastest_checkpoint = checkpoint - self.eval_step = int(self.lastest_checkpoint.split('-')[-1]) - - def end(self, session): - super(EvaluatorHook, self).end(session) - # save summaries - self.summary_writer.add_summary(self.summary, self.eval_step) - - -########## METRICS ########## - - -def calculate_metrics(predicted_batch, real_batch, threshold=0.5, is_training=False, ema_decay=0.9): - with tf.variable_scope('metric'): - threshold_graph = tf.constant(threshold, name='threshold') - zero_point_five = tf.constant(0.5) - predicted_bool = tf.greater_equal(predicted_batch, threshold_graph) - real_bool = tf.greater_equal(real_batch, zero_point_five) - predicted_bool_neg = tf.logical_not(predicted_bool) - real_bool_neg = tf.logical_not(real_bool) - differences_bool = tf.logical_xor(predicted_bool, real_bool) - tp = tf.logical_and(predicted_bool, real_bool) - tn = tf.logical_and(predicted_bool_neg, real_bool_neg) - fn = tf.logical_and(differences_bool, real_bool) - fp = tf.logical_and(differences_bool, predicted_bool) - tp = tf.reduce_sum(tf.cast(tp, tf.float32)) - tn = tf.reduce_sum(tf.cast(tn, tf.float32)) - fn = tf.reduce_sum(tf.cast(fn, tf.float32)) - fp = tf.reduce_sum(tf.cast(fp, tf.float32)) - - average_ops = None - init_op = None - if is_training: - ema = tf.train.ExponentialMovingAverage(decay=ema_decay) - average_ops = ema.apply([tp, tn, fp, fn]) - tp = ema.average(tp) - tn = ema.average(tn) - fp = ema.average(fp) - fn = ema.average(fn) - else: - tp_v = tf.Variable(0, dtype=tf.float32, name='true_positive', trainable=False) - tn_v = tf.Variable(0, dtype=tf.float32, name='true_negative', trainable=False) - fp_v = tf.Variable(0, dtype=tf.float32, name='false_positive', trainable=False) - fn_v = tf.Variable(0, dtype=tf.float32, name='false_negative', trainable=False) - init_op = [tf.assign(tp_v, 0), tf.assign(tn_v, 0), tf.assign(fp_v, 0), tf.assign(fn_v, 0)] - tp = tf.assign_add(tp_v, tp) - tn = tf.assign_add(tn_v, tn) - fp = tf.assign_add(fp_v, fp) - fn = tf.assign_add(fn_v, fn) - - # calculate metrics - precision = tp / (tp + fp) - recall = tp / (tp + fn) - accuracy = (tp + tn) / (tp + tn + fp + fn) - fall_out = fp / (tn + fp) - f1_score = tp * 2 / (tp * 2 + fp + fn) - - # remove NaNs and set them to 0 - zero = tf.constant(0, dtype=tf.float32) - precision = tf.cond(tf.equal(tp, 0.0), lambda: zero, lambda: precision) - recall = tf.cond(tf.equal(tp, 0.0), lambda: zero, lambda: recall) - accuracy = tf.cond(tf.equal(tp + tn, 0.0), lambda: zero, lambda: accuracy) - fall_out = tf.cond(tf.equal(fp, 0.0), lambda: zero, lambda: fall_out) - f1_score = tf.cond(tf.equal(tp, 0.0), lambda: zero, lambda: f1_score) - - # add 
to tensorboard - # tf.summary.scalar('accuracy', accuracy) - tf.summary.scalar('precision', precision) - tf.summary.scalar('recall', recall) - tf.summary.scalar('fall-out', fall_out) - tf.summary.scalar('f1-score', f1_score) - tf.summary.scalar('true_positive', tp) - tf.summary.scalar('true_negative', tn) - tf.summary.scalar('false_positive', fp) - tf.summary.scalar('false_negative', fn) - - metrics_ops = { - # 'accuracy' : accuracy, - 'precision': precision, - 'recall': recall, - 'fall-out': fall_out, - 'f1-score': f1_score, - 'true positive': tp, - 'true negative': tn, - 'false positive': fp, - 'false negative': fn, - } - return init_op, average_ops, metrics_ops - - -def run_evaluator(task_spec, checkpoints_path, batch_size=32): - with tf.Graph().as_default(): - # load dataset - images_input, one_hot_classes, num_classes, _dataset_size = load_data( - file=VAL_FILE, task_spec=task_spec, batch_size=batch_size, epochs=1 - ) - _network, predictions = build_network(images_input, num_classes=num_classes, is_training=False) - saver = tf.train.Saver() - # metrics - metrics_init_ops, _, metrics_ops = calculate_metrics( - predicted_batch=predictions, real_batch=one_hot_classes, is_training=False - ) - # tensorboard summary - summary_op = tf.summary.merge_all() - # session hook - evaluator_hook = EvaluatorHook(checkpoints_path=checkpoints_path, saver=saver) - - try: - # infinite loop - while True: - with SingularMonitoredSession(hooks=[evaluator_hook]) as sess: - sess.run(metrics_init_ops) - try: - while not sess.should_stop(): - metrics, summary = sess.run([metrics_ops, summary_op]) - evaluator_hook.summary = summary - except OutOfRangeError: - pass - logging.info('step: {} {}'.format(evaluator_hook.eval_step, metrics)) - except EvaluatorStops: - # the evaluator has waited too long for a new checkpoint - pass - - -########## TRAINING ########## - - -def run_worker(task_spec, checkpoints_path, batch_size=32, epochs=10): - device_fn = task_spec.device_fn() if task_spec is not None else None - # create graph - with tf.Graph().as_default(): - global_step = tf.train.get_or_create_global_step() - with tf.device(device_fn): - # load dataset - images_input, one_hot_classes, num_classes, dataset_size = load_data( - file=TRAIN_FILE, task_spec=task_spec, batch_size=batch_size, epochs=epochs, shuffle_size=10000 - ) - # network - network, predictions = build_network(images_input, num_classes=num_classes, is_training=True) - # training operations - loss = tl.cost.sigmoid_cross_entropy(output=network.outputs, target=one_hot_classes, name='loss') - steps_per_epoch = dataset_size / batch_size - learning_rate = tf.train.exponential_decay( - learning_rate=0.045, - global_step=global_step, - decay_steps=steps_per_epoch * 2, # 2 epochs - decay_rate=0.94, - staircase=True, - name='learning_rate' - ) - optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate, decay=0.9, epsilon=1.0) - # clip and apply gradients - gvs = optimizer.compute_gradients(loss=loss, var_list=network.all_params) - capped_gvs = [] - for grad, var in gvs: - if grad is not None: - grad = tf.clip_by_value(grad, -2., 2.) 
- capped_gvs.append((grad, var)) - train_op = optimizer.apply_gradients(grads_and_vars=capped_gvs, global_step=global_step) - # metrics - tf.summary.scalar('learning_rate/value', learning_rate) - tf.summary.scalar('loss/logits', loss) - _, metrics_average_ops, metrics_ops = calculate_metrics( - predicted_batch=predictions, real_batch=one_hot_classes, is_training=True - ) - with tf.control_dependencies([train_op]): - train_op = tf.group(metrics_average_ops) - - # start training - hooks = [StopAtStepHook(last_step=steps_per_epoch * epochs)] - with tl.distributed.DistributedSession(task_spec=task_spec, hooks=hooks, checkpoint_dir=checkpoints_path, - save_summaries_secs=None, save_summaries_steps=300, - save_checkpoint_secs=60 * 60) as sess: - # print network information - if task_spec is None or task_spec.is_master(): - network.print_params(False, session=sess) - network.print_layers() - sys.stdout.flush() - # run training - try: - last_log_time = time.time() - next_log_time = last_log_time + 60 - while not sess.should_stop(): - step, loss_val, learning_rate_val, _, metrics = sess.run( - [global_step, loss, learning_rate, train_op, metrics_ops] - ) - if task_spec is None or task_spec.is_master(): - now = time.time() - if now > next_log_time: - last_log_time = now - next_log_time = last_log_time + 60 - current_epoch = '{:.3f}'.format(float(step) / steps_per_epoch) - max_steps = epochs * steps_per_epoch - m = 'Epoch: {}/{} Steps: {}/{} Loss: {} Learning rate: {} Metrics: {}' - logging.info( - m.format(current_epoch, epochs, step, max_steps, loss_val, learning_rate_val, metrics) - ) - except OutOfRangeError: - pass - - -########## MAIN ########## - -if __name__ == '__main__': - # print output logging - logging.basicConfig(level=logging.INFO, format='%(asctime)-15s %(message)s') - - if not tf.gfile.Exists(ILSVRC_DIR): - raise FileNotFoundError( - 'We cannot find the directory "{}"\n' - 'You need to modify the variable BASE_DIR with the path where the dataset is.\n' - 'The dataset can be downloaded from http://www.image-net.org/ or from the Kaggle competition:\n' - 'https://www.kaggle.com/c/imagenet-object-localization-challenge/data'.format(ILSVRC_DIR) - ) - - # args - parser = argparse.ArgumentParser() - parser.add_argument('--with_evaluator', dest='with_evaluator', action='store_true') - parser.add_argument('--batch_size', dest='batch_size', type=int, default=32) - parser.add_argument('--epochs', dest='epochs', type=int, default=100) - parser.set_defaults(with_evaluator=False) - args = parser.parse_args() - logging.info('Batch size: {}'.format(args.batch_size)) - logging.info('Epochs: {}'.format(args.epochs)) - - # check the dataset and create them if necessary - might_create_training_set() - might_create_validation_set() - - # load environment for distributed training using last worker as evaluator - task_spec = tl.distributed.TaskSpec() - - if task_spec is None: - logging.info('Run in single node') - run_worker(task_spec, CHECKPOINTS_PATH, batch_size=args.batch_size, epochs=args.epochs) - else: - if args.with_evaluator: - # run with evaluator - logging.info('Last worker is the evaluator') - task_spec = task_spec.use_last_worker_as_evaluator() - - if task_spec.is_evaluator(): - run_evaluator(task_spec, CHECKPOINTS_PATH, batch_size=args.batch_size) - else: - task_spec.create_server() - run_worker(task_spec, CHECKPOINTS_PATH, batch_size=args.batch_size, epochs=args.epochs) diff --git a/examples/deprecated_tutorials/tutorial_mnist_distributed.py 
b/examples/deprecated_tutorials/tutorial_mnist_distributed.py deleted file mode 100644 index 29d291ba4..000000000 --- a/examples/deprecated_tutorials/tutorial_mnist_distributed.py +++ /dev/null @@ -1,83 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""Alpha Version for Distributed Training - -you can test this example in your local machine using 2 workers and 1 ps like below, -where CUDA_VISIBLE_DEVICES can be used to set the GPUs the process can use. - -CUDA_VISIBLE_DEVICES= TF_CONFIG='{"cluster": {"ps": ["127.0.0.1:3001"], "worker": ["127.0.0.1:3002", "127.0.0.1:3003"]}, "task": {"type": "worker", "index": 0}}' python example/tutorial_mnist_distributed.py > output-master 2>&1 & -CUDA_VISIBLE_DEVICES= TF_CONFIG='{"cluster": {"ps": ["127.0.0.1:3001"], "worker": ["127.0.0.1:3002", "127.0.0.1:3003"]}, "task": {"type": "worker", "index": 1}}' python example/tutorial_mnist_distributed.py > output-worker 2>&1 & -CUDA_VISIBLE_DEVICES= TF_CONFIG='{"cluster": {"ps": ["127.0.0.1:3001"], "worker": ["127.0.0.1:3002", "127.0.0.1:3003"]}, "task": {"type": "ps", "index": 0}}' python example/tutorial_mnist_distributed.py > output-ps 2>&1 & -Note: for GPU, please set CUDA_VISIBLE_DEVICES=GPU_ID - -""" - -import tensorflow as tf - -import tensorlayer as tl - -tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - -# load environment for distributed training -task_spec = tl.distributed.TaskSpec() -task_spec.create_server() -device_fn = task_spec.device_fn() if task_spec is not None else None - -# prepare data -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - -# create graph -with tf.device(device_fn): - # define placeholder - x = tf.placeholder(tf.float32, shape=[None, 784], name='x') - y_ = tf.placeholder(tf.int64, shape=[None], name='y_') - - # define the network - network = tl.layers.InputLayer(x, name='input') - network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1') - network = tl.layers.DenseLayer(network, 800, tf.nn.relu, name='relu1') - network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2') - network = tl.layers.DenseLayer(network, 800, tf.nn.relu, name='relu2') - network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3') - # the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to - # speed up computation, so we use identity here. - # see tf.nn.sparse_softmax_cross_entropy_with_logits() - network = tl.layers.DenseLayer(network, n_units=10, act=None, name='output') - - # define cost function and metric. 
- y = network.outputs - cost = tl.cost.cross_entropy(y, y_, name='cost') - correct_prediction = tf.equal(tf.argmax(y, 1), y_) - acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) - y_op = tf.argmax(tf.nn.softmax(y), 1) - - # define the optimizer - train_params = network.all_params - train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=train_params) - - with tl.distributed.DistributedSession(task_spec=task_spec) as sess: - # print network information - if task_spec.is_master(): - network.print_params(session=sess) - network.print_layers() - print_freq = 5 - eval_train = False - else: - print_freq = 1000 - eval_train = False - - # We do not need to initialize the variables as the session does it - #tl.layers.initialize_global_variables(sess) - - # train the network - tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_, \ - acc=acc, batch_size=500, n_epoch=500, print_freq=print_freq, \ - X_val=X_val, y_val=y_val, eval_train=eval_train) - - if task_spec.is_master(): - # evaluation - tl.utils.test(sess, network, acc, X_test, y_test, x, y_, batch_size=None, cost=cost) - - # save the network to .npz file - tl.files.save_npz(network.all_params, name='model.npz') diff --git a/examples/deprecated_tutorials/tutorial_mnist_distributed.yml b/examples/deprecated_tutorials/tutorial_mnist_distributed.yml deleted file mode 100644 index 1b7ff944e..000000000 --- a/examples/deprecated_tutorials/tutorial_mnist_distributed.yml +++ /dev/null @@ -1,87 +0,0 @@ -# https://docs.docker.com/compose/compose-file/ -# -# reference: https://docs.microsoft.com/en-us/azure/container-service/dcos-swarm/container-service-swarm-walkthrough -# 1. create a swarm cluster on azure: -# $ az group create -l southeastasia -n tensorlayer-swarm -o table --debug -# $ az acs create -n tl-swarm-culster --orchestrator-type Swarm -g tensorlayer-swarm --agent-count 3 -o table --debug -# -# 2. create a ssh tunnel to swarm master: -# $ master=$(az acs show -n tl-swarm-culster -g tensorlayer-swarm --query 'masterProfile.fqdn' | jq -r .) -# $ ssh -p 2200 -fNL 2375:localhost:2375 azureuser@$master -# $ export DOCKER_HOST=:2375 -# -# 3. 
start -# $ docker-compose -f tutorial_mnist_distributed.yml up - ---- -version: '3' -services: - master: - image: tensorlayer/tensorlayer:latest - entrypoint: - - python - - /tensorlayer/example/tutorial_mnist_distributed.py - environment: - CUDA_VISIBLE_DEVICES: '' - TF_CONFIG: |- - { - "cluster": { - "ps": [ - "ps:3001" - ], - "worker": [ - "master:3002", - "worker:3003" - ] - }, - "task": { - "type": "worker", - "index": 0 - } - } - worker: - image: tensorlayer/tensorlayer:latest - entrypoint: - - python - - /tensorlayer/example/tutorial_mnist_distributed.py - environment: - CUDA_VISIBLE_DEVICES: '' - TF_CONFIG: |- - { - "cluster": { - "ps": [ - "ps:3001" - ], - "worker": [ - "master:3002", - "worker:3003" - ] - }, - "task": { - "type": "worker", - "index": 1 - } - } - ps: - image: tensorlayer/tensorlayer:latest - entrypoint: - - python - - /tensorlayer/example/tutorial_mnist_distributed.py - environment: - CUDA_VISIBLE_DEVICES: '' - TF_CONFIG: |- - { - "cluster": { - "ps": [ - "ps:3001" - ], - "worker": [ - "master:3002", - "worker:3003" - ] - }, - "task": { - "type": "ps", - "index": 0 - } - } diff --git a/examples/distributed_training/README.md b/examples/distributed_training/README.md deleted file mode 100644 index e5ba08182..000000000 --- a/examples/distributed_training/README.md +++ /dev/null @@ -1 +0,0 @@ -Mai Luo: \ No newline at end of file diff --git a/examples/distributed_training/tutorial_cifar10_distributed_trainer.py b/examples/distributed_training/tutorial_cifar10_distributed_trainer.py deleted file mode 100644 index 830bf879b..000000000 --- a/examples/distributed_training/tutorial_cifar10_distributed_trainer.py +++ /dev/null @@ -1,124 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -r""" -1. Before you start, run this script: https://github.com/tensorlayer/tensorlayer/blob/distributed/scripts/download_and_install_openmpi3_linux.sh -2. Update the PATH with OpenMPI bin by running: PATH=$PATH:$HOME/local/openmpi/bin - Update the PATH in ~/.bashrc if you want OpenMPI to be ready once the machine start -3. Then XXXXX Milo please add this part - mpirun -np 2 \ - -bind-to none -map-by slot \ - -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \ - -mca pml ob1 -mca btl ^openib \ - python3 xxxxx.py -""" - -import multiprocessing - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import (BatchNormLayer, Conv2d, DenseLayer, FlattenLayer, InputLayer, MaxPool2d) - -tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - - -def make_dataset(images, labels, num_epochs=1, shuffle_data_seed=0): - img = tf.data.Dataset.from_tensor_slices(images) - lab = tf.data.Dataset.from_tensor_slices(np.array(labels, dtype=np.int64)) - dataset = tf.data.Dataset.zip((img, lab)) - dataset = dataset.repeat(num_epochs).shuffle(buffer_size=10000, seed=shuffle_data_seed) - return dataset - - -def data_aug_train(img, ann): - # 1. Randomly crop a [height, width] section of the image. - img = tf.random_crop(img, [24, 24, 3]) - # 2. Randomly flip the image horizontally. - img = tf.image.random_flip_left_right(img) - # 3. Randomly change brightness. - img = tf.image.random_brightness(img, max_delta=63) - # 4. Randomly change contrast. - img = tf.image.random_contrast(img, lower=0.2, upper=1.8) - # 5. Subtract off the mean and divide by the variance of the pixels. - img = tf.image.per_image_standardization(img) - return img, ann - - -def data_aug_valid(img, ann): - # 1. Crop the central [height, width] of the image. 
- img = tf.image.resize_image_with_crop_or_pad(img, 24, 24) - # 2. Subtract off the mean and divide by the variance of the pixels. - img = tf.image.per_image_standardization(img) - return img, ann - - -def model(x, is_train): - with tf.variable_scope("model", reuse=tf.AUTO_REUSE): - net = InputLayer(x, name='input') - net = Conv2d(net, 64, (5, 5), (1, 1), padding='SAME', b_init=None, name='cnn1') - net = BatchNormLayer(net, decay=0.99, is_train=is_train, act=tf.nn.relu, name='batch1') - net = MaxPool2d(net, (3, 3), (2, 2), padding='SAME', name='pool1') - - net = Conv2d(net, 64, (5, 5), (1, 1), padding='SAME', b_init=None, name='cnn2') - net = BatchNormLayer(net, decay=0.99, is_train=is_train, act=tf.nn.relu, name='batch2') - net = MaxPool2d(net, (3, 3), (2, 2), padding='SAME', name='pool2') - - net = FlattenLayer(net, name='flatten') - net = DenseLayer(net, 384, act=tf.nn.relu, name='d1relu') - net = DenseLayer(net, 192, act=tf.nn.relu, name='d2relu') - net = DenseLayer(net, 10, act=None, name='output') - return net - - -def build_train(x, y_): - net = model(x, is_train=True) - cost = tl.cost.cross_entropy(net.outputs, y_, name='cost_train') - L2 = 0 - for p in tl.layers.get_variables_with_name('relu/W', True, True): - L2 += tf.contrib.layers.l2_regularizer(0.004)(p) - cost = cost + L2 - accurate_prediction = tf.equal(tf.argmax(net.outputs, 1), y_) - accuracy = tf.reduce_mean(tf.cast(accurate_prediction, tf.float32), name='accuracy_train') - log_tensors = {'cost': cost, 'accuracy': accuracy} - return net, cost, log_tensors - - -def build_validation(x, y_): - net = model(x, is_train=False) - cost = tl.cost.cross_entropy(net.outputs, y_, name='cost_test') - accurate_prediction = tf.equal(tf.argmax(net.outputs, 1), y_) - accuracy = tf.reduce_mean(tf.cast(accurate_prediction, tf.float32), name='accuracy_test') - return net, [cost, accuracy] - - -if __name__ == '__main__': - # Load CIFAR10 data - X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) - - # Setup the trainer - training_dataset = make_dataset(X_train, y_train) - training_dataset = training_dataset.map(data_aug_train, num_parallel_calls=multiprocessing.cpu_count()) - # validation_dataset = make_dataset(X_test, y_test) - # validation_dataset = training_dataset.map(data_aug_valid, num_parallel_calls=multiprocessing.cpu_count()) - trainer = tl.distributed.Trainer( - build_training_func=build_train, training_dataset=training_dataset, optimizer=tf.train.AdamOptimizer, - optimizer_args={'learning_rate': 0.0001}, batch_size=128, prefetch_size=128 - # validation_dataset=validation_dataset, build_validation_func=build_validation - ) - - # There are multiple ways to use the trainer: - # 1. Easiest way to train all data: trainer.train_to_end() - # 2. Train with validation in the middle: trainer.train_and_validate_to_end(validate_step_size=100) - # 3. Train with full control like follows: - while not trainer.session.should_stop(): - try: - # Run a training step synchronously. - trainer.train_on_batch() - # TODO: do whatever you like to the training session. 
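-                # e.g. fetch extra tensors with trainer.session.run(...) if you need custom logging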
- except tf.errors.OutOfRangeError: - # The dataset would throw the OutOfRangeError when it reaches the end - break - - # TODO: Test the trained model diff --git a/examples/distributed_training/tutorial_mnist_distributed_trainer.py b/examples/distributed_training/tutorial_mnist_distributed_trainer.py deleted file mode 100755 index 0f1b8b6dd..000000000 --- a/examples/distributed_training/tutorial_mnist_distributed_trainer.py +++ /dev/null @@ -1,76 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - - -def make_dataset(images, labels, num_epochs=1, shuffle_data_seed=0): - ds1 = tf.data.Dataset.from_tensor_slices(images) - ds2 = tf.data.Dataset.from_tensor_slices(np.array(labels, dtype=np.int64)) - dataset = tf.data.Dataset.zip((ds1, ds2)) - dataset = dataset.repeat(num_epochs).shuffle(buffer_size=10000, seed=shuffle_data_seed) - return dataset - - -def model(x, is_train): - with tf.variable_scope('mlp', reuse=tf.AUTO_REUSE): - network = tl.layers.InputLayer(x, name='input') - network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1', is_fix=True, is_train=is_train) - network = tl.layers.DenseLayer(network, 800, tf.nn.relu, name='relu1') - network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2', is_fix=True, is_train=is_train) - network = tl.layers.DenseLayer(network, 800, tf.nn.relu, name='relu2') - network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3', is_fix=True, is_train=is_train) - network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output') - return network - - -def build_train(x, y_): - net = model(x, is_train=True) - cost = tl.cost.cross_entropy(net.outputs, y_, name='cost_train') - accurate_prediction = tf.equal(tf.argmax(net.outputs, 1), y_) - accuracy = tf.reduce_mean(tf.cast(accurate_prediction, tf.float32), name='accuracy_train') - log_tensors = {'cost': cost, 'accuracy': accuracy} - return net, cost, log_tensors - - -def build_validation(x, y_): - net = model(x, is_train=False) - cost = tl.cost.cross_entropy(net.outputs, y_, name='cost_test') - accurate_prediction = tf.equal(tf.argmax(net.outputs, 1), y_) - accuracy = tf.reduce_mean(tf.cast(accurate_prediction, tf.float32), name='accuracy_test') - return net, [cost, accuracy] - - -if __name__ == '__main__': - # Load MNIST data - X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - - # Setup the trainer - training_dataset = make_dataset(X_train, y_train) - # validation_dataset = make_dataset(X_val, y_val) - trainer = tl.distributed.Trainer( - build_training_func=build_train, training_dataset=training_dataset, optimizer=tf.train.AdamOptimizer, - optimizer_args={'learning_rate': 0.001}, batch_size=500, prefetch_size=500 - # validation_dataset=validation_dataset, build_validation_func=build_validation - ) - - # There are multiple ways to use the trainer: - # 1. Easiest way to train all data: trainer.train_to_end() - # 2. Train with validation in the middle: trainer.train_and_validate_to_end(validate_step_size=100) - # 3. Train with full control like follows: - while not trainer.session.should_stop(): - try: - # Run a training step synchronously. - trainer.train_on_batch() - # TODO: do whatever you like to the training session. 
- except tf.errors.OutOfRangeError: - # The dataset would throw the OutOfRangeError when it reaches the end - break - - # TODO: Test the trained model diff --git a/examples/keras_tfslim/README.md b/examples/keras_tfslim/README.md deleted file mode 100644 index a796f0c86..000000000 --- a/examples/keras_tfslim/README.md +++ /dev/null @@ -1 +0,0 @@ -### All other TensorFlow's libraries can be connected into TensorLayer via LambdaLayer. \ No newline at end of file diff --git a/examples/keras_tfslim/tutorial_keras.py b/examples/keras_tfslim/tutorial_keras.py deleted file mode 100644 index 9d0606c5f..000000000 --- a/examples/keras_tfslim/tutorial_keras.py +++ /dev/null @@ -1,77 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Input, Lambda - -tl.logging.set_verbosity(tl.logging.DEBUG) - -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784)) - -batch_size = 128 - -# keras layers -layers = [ - tf.keras.layers.Dropout(0.8), - tf.keras.layers.Dense(800, activation='relu'), - tf.keras.layers.Dropout(0.5), - tf.keras.layers.Dense(800, activation='relu'), - tf.keras.layers.Dropout(0.5), - tf.keras.layers.Dense(10, activation='linear') -] -keras_block = tf.keras.Sequential(layers) -# in order to compile keras model and get trainable_variables of the keras model -_ = keras_block(np.random.random([batch_size, 784]).astype(np.float32)) - -# build tl model using keras layers -ni = Input([None, 784], dtype=tf.float32) -nn = Lambda(fn=keras_block, fn_weights=keras_block.trainable_variables)(ni) -network = tl.models.Model(inputs=ni, outputs=nn) -print(network) - -n_epoch = 200 -learning_rate = 0.0001 - -train_params = network.trainable_weights -optimizer = tf.optimizers.Adam(learning_rate) - -for epoch in range(n_epoch): - start_time = time.time() - ## Training - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - with tf.GradientTape() as tape: - _logits = network(X_train_a, is_train=True) - err = tl.cost.cross_entropy(_logits, y_train_a, name='train_loss') - - grad = tape.gradient(err, train_params) - optimizer.apply_gradients(zip(grad, train_params)) - # _, _ = sess.run([cost, train_op], feed_dict={x: X_train_a, y_: y_train_a, K.learning_phase(): 1}) - - print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - - ## Evaluation - train_loss, train_acc, n_batch = 0, 0, 0 - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False): - _logits = network(X_train_a, is_train=False) - err = tl.cost.cross_entropy(_logits, y_train_a, name='train_loss') - ac = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(_logits, 1), y_train_a), tf.float32)) - train_loss += err - train_acc += ac - n_batch += 1 - print(" train loss: %f" % (train_loss / n_batch)) - print(" train acc: %f" % (train_acc / n_batch)) - val_loss, val_acc, n_batch = 0, 0, 0 - for X_val_a, y_val_a in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False): - _logits = network(X_val_a, is_train=False) - err = tl.cost.cross_entropy(_logits, y_val_a, name='train_loss') - ac = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(_logits, 1), y_val_a), tf.float32)) - val_loss += err - val_acc += ac - n_batch += 1 - print(" val loss: %f" % (val_loss / n_batch)) - print(" val acc: %f" % (val_acc / n_batch)) diff --git a/examples/model_zoo/__init__.py b/examples/model_zoo/__init__.py new file mode 100644 
index 000000000..2fbe814aa --- /dev/null +++ b/examples/model_zoo/__init__.py @@ -0,0 +1,6 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from .vgg import vgg16, vgg19 +from .yolo import YOLOv4 +from .resnet import ResNet50 \ No newline at end of file diff --git a/examples/model_zoo/common.py b/examples/model_zoo/common.py new file mode 100644 index 000000000..7bc1bfd0b --- /dev/null +++ b/examples/model_zoo/common.py @@ -0,0 +1,287 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +import tensorflow as tf +import colorsys, random, cv2 +import numpy as np +from tensorlayer.visualize import save_image + +def decode_tf(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i=0, XYSCALE=[1, 1, 1]): + batch_size = tf.shape(conv_output)[0] + conv_output = tf.reshape(conv_output, (batch_size, output_size, output_size, 3, 5 + NUM_CLASS)) + + conv_raw_dxdy, conv_raw_dwdh, conv_raw_conf, conv_raw_prob = tf.split(conv_output, (2, 2, 1, NUM_CLASS), axis=-1) + + xy_grid = tf.meshgrid(tf.range(output_size), tf.range(output_size)) + xy_grid = tf.expand_dims(tf.stack(xy_grid, axis=-1), axis=2) # [gx, gy, 1, 2] + xy_grid = tf.tile(tf.expand_dims(xy_grid, axis=0), [batch_size, 1, 1, 3, 1]) + + xy_grid = tf.cast(xy_grid, tf.float32) + + pred_xy = ((tf.sigmoid(conv_raw_dxdy) * XYSCALE[i]) - 0.5 * (XYSCALE[i] - 1) + xy_grid) * \ + STRIDES[i] + pred_wh = (tf.exp(conv_raw_dwdh) * ANCHORS[i]) + pred_xywh = tf.concat([pred_xy, pred_wh], axis=-1) + + pred_conf = tf.sigmoid(conv_raw_conf) + pred_prob = tf.sigmoid(conv_raw_prob) + + pred_prob = pred_conf * pred_prob + pred_prob = tf.reshape(pred_prob, (batch_size, -1, NUM_CLASS)) + pred_xywh = tf.reshape(pred_xywh, (batch_size, -1, 4)) + + return pred_xywh, pred_prob + + +def decode(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE=[1, 1, 1]): + return decode_tf(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i=i, XYSCALE=XYSCALE) + + +def filter_boxes(box_xywh, scores, score_threshold=0.4, input_shape=tf.constant([416, 416])): + scores_max = tf.math.reduce_max(scores, axis=-1) + + mask = scores_max >= score_threshold + class_boxes = tf.boolean_mask(box_xywh, mask) + pred_conf = tf.boolean_mask(scores, mask) + class_boxes = tf.reshape(class_boxes, [tf.shape(scores)[0], -1, tf.shape(class_boxes)[-1]]) + pred_conf = tf.reshape(pred_conf, [tf.shape(scores)[0], -1, tf.shape(pred_conf)[-1]]) + + box_xy, box_wh = tf.split(class_boxes, (2, 2), axis=-1) + + input_shape = tf.cast(input_shape, dtype=tf.float32) + box_yx = box_xy[..., ::-1] + box_hw = box_wh[..., ::-1] + + box_mins = (box_yx - (box_hw / 2.)) / input_shape + box_maxes = (box_yx + (box_hw / 2.)) / input_shape + boxes = tf.concat( + [ + box_mins[..., 0:1], # y_min + box_mins[..., 1:2], # x_min + box_maxes[..., 0:1], # y_max + box_maxes[..., 1:2] # x_max + ], + axis=-1 + ) + # return tf.concat([boxes, pred_conf], axis=-1) + return (boxes, pred_conf) + + +def read_class_names(class_file_name): + names = {} + with open(class_file_name, 'r') as data: + for ID, name in enumerate(data): + names[ID] = name.strip('\n') + return names + + +def draw_bbox(image, bboxes, show_label=True): + classes = read_class_names('model/coco.names') + num_classes = len(classes) + image_h, image_w, _ = image.shape + hsv_tuples = [(1.0 * x / num_classes, 1., 1.) 
for x in range(num_classes)]
+    colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
+    colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), colors))
+
+    random.seed(0)
+    random.shuffle(colors)
+    random.seed(None)
+
+    out_boxes, out_scores, out_classes, num_boxes = bboxes
+    for i in range(num_boxes[0]):
+        # class indices run from 0 to num_classes - 1, so out-of-range IDs are skipped
+        if int(out_classes[0][i]) < 0 or int(out_classes[0][i]) >= num_classes: continue
+        coor = out_boxes[0][i]
+        coor[0] = int(coor[0] * image_h)
+        coor[2] = int(coor[2] * image_h)
+        coor[1] = int(coor[1] * image_w)
+        coor[3] = int(coor[3] * image_w)
+
+        fontScale = 0.5
+        score = out_scores[0][i]
+        class_ind = int(out_classes[0][i])
+        bbox_color = colors[class_ind]
+        bbox_thick = int(0.6 * (image_h + image_w) / 600)
+        c1, c2 = (coor[1], coor[0]), (coor[3], coor[2])
+        cv2.rectangle(image, c1, c2, bbox_color, bbox_thick)
+
+        if show_label:
+            bbox_mess = '%s: %.2f' % (classes[class_ind], score)
+            t_size = cv2.getTextSize(bbox_mess, 0, fontScale, thickness=bbox_thick // 2)[0]
+            c3 = (c1[0] + t_size[0], c1[1] - t_size[1] - 3)
+            cv2.rectangle(image, c1, (np.float32(c3[0]), np.float32(c3[1])), bbox_color, -1)  # filled
+
+            cv2.putText(
+                image, bbox_mess, (c1[0], np.float32(c1[1] - 2)), cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 0, 0),
+                bbox_thick // 2, lineType=cv2.LINE_AA
+            )
+    return image
+
+
+def get_anchors(anchors_path, tiny=False):
+    anchors = np.array(anchors_path)
+    if tiny:
+        return anchors.reshape(2, 3, 2)
+    else:
+        return anchors.reshape(3, 3, 2)
+
+
+def decode_train(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i=0, XYSCALE=[1, 1, 1]):
+    conv_output = tf.reshape(conv_output, (tf.shape(conv_output)[0], output_size, output_size, 3, 5 + NUM_CLASS))
+
+    conv_raw_dxdy, conv_raw_dwdh, conv_raw_conf, conv_raw_prob = tf.split(conv_output, (2, 2, 1, NUM_CLASS), axis=-1)
+
+    xy_grid = tf.meshgrid(tf.range(output_size), tf.range(output_size))
+    xy_grid = tf.expand_dims(tf.stack(xy_grid, axis=-1), axis=2)  # [gx, gy, 1, 2]
+    xy_grid = tf.tile(tf.expand_dims(xy_grid, axis=0), [tf.shape(conv_output)[0], 1, 1, 3, 1])
+
+    xy_grid = tf.cast(xy_grid, tf.float32)
+
+    pred_xy = ((tf.sigmoid(conv_raw_dxdy) * XYSCALE[i]) - 0.5 * (XYSCALE[i] - 1) + xy_grid) * STRIDES[i]
+    pred_wh = (tf.exp(conv_raw_dwdh) * ANCHORS[i])
+    pred_xywh = tf.concat([pred_xy, pred_wh], axis=-1)
+
+    pred_conf = tf.sigmoid(conv_raw_conf)
+    pred_prob = tf.sigmoid(conv_raw_prob)
+
+    return tf.concat([pred_xywh, pred_conf, pred_prob], axis=-1)
+
+
+def yolo4_input_processing(original_image):
+    image_data = cv2.resize(original_image, (416, 416))
+    image_data = image_data / 255.
+    images_data = []
+    for i in range(1):
+        images_data.append(image_data)
+    images_data = np.asarray(images_data).astype(np.float32)
+    batch_data = tf.constant(images_data)
+    return batch_data
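Taken together, these helpers form a complete inference path: `yolo4_input_processing` builds the `(1, 416, 416, 3)` batch, the model produces three feature maps (strides 8/16/32), `yolo4_output_processing` decodes and NMS-filters them, and `result_to_json` / `draw_boxes_and_labels_to_image_with_json` turn the detections into JSON and a rendered image. Below is a minimal sketch of that chain, assuming the `YOLOv4` model exported by `examples/model_zoo/__init__.py` accepts such a batch and returns the three feature maps; the constructor arguments, the `set_eval()` call, the module paths, and the image path are assumptions, so check `yolo.py` for the exact signature:

```python
import cv2
from examples.model_zoo import YOLOv4  # module path assumed; run from the repo root
from examples.model_zoo.common import (
    draw_boxes_and_labels_to_image_with_json, read_class_names, result_to_json,
    yolo4_input_processing, yolo4_output_processing
)

# Load an image (path is hypothetical) and convert OpenCV's BGR to RGB.
image = cv2.cvtColor(cv2.imread('data/kite.jpg'), cv2.COLOR_BGR2RGB)

# Constructor arguments and set_eval() are assumptions -- see yolo.py.
model = YOLOv4(NUM_CLASS=80, pretrained=True)
model.set_eval()

batch_data = yolo4_input_processing(image)          # (1, 416, 416, 3), scaled to [0, 1]
feature_maps = model(batch_data)                    # three scales: stride 8, 16, 32
pred_bbox = yolo4_output_processing(feature_maps)   # decode + combined NMS

json_result = result_to_json(image, pred_bbox)
class_list = list(read_class_names('model/coco.names').values())
draw_boxes_and_labels_to_image_with_json(image, json_result, class_list, 'result.png')
```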
+
+
+def yolo4_output_processing(feature_maps):
+    STRIDES = [8, 16, 32]
+    ANCHORS = get_anchors([12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401])
+    NUM_CLASS = 80
+    XYSCALE = [1.2, 1.1, 1.05]
+    iou_threshold = 0.45
+    score_threshold = 0.25
+
+    bbox_tensors = []
+    prob_tensors = []
+    score_thres = 0.2
+    for i, fm in enumerate(feature_maps):
+        if i == 0:
+            output_tensors = decode(fm, 416 // 8, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE)
+        elif i == 1:
+            output_tensors = decode(fm, 416 // 16, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE)
+        else:
+            output_tensors = decode(fm, 416 // 32, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE)
+        bbox_tensors.append(output_tensors[0])
+        prob_tensors.append(output_tensors[1])
+    pred_bbox = tf.concat(bbox_tensors, axis=1)
+    pred_prob = tf.concat(prob_tensors, axis=1)
+    boxes, pred_conf = filter_boxes(
+        pred_bbox, pred_prob, score_threshold=score_thres, input_shape=tf.constant([416, 416])
+    )
+    pred = {'concat': tf.concat([boxes, pred_conf], axis=-1)}
+
+    for key, value in pred.items():
+        boxes = value[:, :, 0:4]
+        pred_conf = value[:, :, 4:]
+
+    boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
+        boxes=tf.reshape(boxes, (tf.shape(boxes)[0], -1, 1, 4)),
+        scores=tf.reshape(pred_conf, (tf.shape(pred_conf)[0], -1, tf.shape(pred_conf)[-1])),
+        max_output_size_per_class=50, max_total_size=50, iou_threshold=iou_threshold, score_threshold=score_threshold
+    )
+    output = [boxes.numpy(), scores.numpy(), classes.numpy(), valid_detections.numpy()]
+    return output
+
+
+def result_to_json(image, pred_bbox):
+    image_h, image_w, _ = image.shape
+    out_boxes, out_scores, out_classes, num_boxes = pred_bbox
+    class_names = {}
+    json_result = []
+    with open('model/coco.names', 'r') as data:
+        for ID, name in enumerate(data):
+            class_names[ID] = name.strip('\n')
+    nums_class = len(class_names)
+
+    for i in range(num_boxes[0]):
+        # valid class indices run from 0 to nums_class - 1
+        if int(out_classes[0][i]) < 0 or int(out_classes[0][i]) >= nums_class: continue
+        coor = out_boxes[0][i]
+        coor[0] = int(coor[0] * image_h)
+        coor[2] = int(coor[2] * image_h)
+        coor[1] = int(coor[1] * image_w)
+        coor[3] = int(coor[3] * image_w)
+
+        score = float(out_scores[0][i])
+        class_ind = int(out_classes[0][i])
+        bbox = np.array([coor[1], coor[0], coor[3], coor[2]]).tolist()  # [x1, y1, x2, y2]
+        json_result.append({'image': None, 'category_id': class_ind, 'bbox': bbox, 'score': score})
+
+    return json_result
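For reference, a sketch of the structure `result_to_json` returns, with made-up values; `category_id` indexes `model/coco.names` (0 is `person`, 16 is `dog`) and `bbox` is `[x1, y1, x2, y2]` in pixels:

```python
# Illustrative result_to_json output; the numbers are invented for this example.
json_result = [
    {'image': None, 'category_id': 0, 'bbox': [48.0, 32.0, 210.0, 380.0], 'score': 0.91},    # person
    {'image': None, 'category_id': 16, 'bbox': [250.0, 140.0, 400.0, 360.0], 'score': 0.77},  # dog
]
```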
+
+
+def draw_boxes_and_labels_to_image_with_json(image, json_result, class_list, save_name=None):
+    """Draw bboxes and class labels on image. Return the image with bboxes.
+
+    Parameters
+    -----------
+    image : numpy.array
+        The RGB image [height, width, channel].
+    json_result : list of dict
+        The object detection result in JSON format.
+    class_list : list of str
+        For converting ID to string on image.
+    save_name : None or str
+        The name of image file (e.g. image.png); if None, the image is not saved.
+
+    Returns
+    -------
+    numpy.array
+        The saved image.
+
+    References
+    -----------
+    - OpenCV rectangle and putText.
+    - scikit-image.
+
+    """
+    image_h, image_w, _ = image.shape
+    num_classes = len(class_list)
+    hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
+    colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
+    colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), colors))
+    random.seed(0)
+    random.shuffle(colors)
+    random.seed(None)
+    bbox_thick = int(0.6 * (image_h + image_w) / 600)
+    fontScale = 0.5
+
+    for bbox_info in json_result:
+        image_name = bbox_info['image']
+        category_id = bbox_info['category_id']
+        if category_id < 0 or category_id >= num_classes: continue
+        bbox = bbox_info['bbox']  # the order of coordinates is [x1, y1, x2, y2]
+        score = bbox_info['score']
+
+        bbox_color = colors[category_id]
+        c1, c2 = (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3]))
+        cv2.rectangle(image, c1, c2, bbox_color, bbox_thick)
+
+        bbox_mess = '%s: %.2f' % (class_list[category_id], score)
+        t_size = cv2.getTextSize(bbox_mess, 0, fontScale, thickness=bbox_thick // 2)[0]
+        c3 = (c1[0] + t_size[0], c1[1] - t_size[1] - 3)
+        cv2.rectangle(image, c1, (np.float32(c3[0]), np.float32(c3[1])), bbox_color, -1)  # filled
+
+        cv2.putText(
+            image, bbox_mess, (c1[0], np.float32(c1[1] - 2)), cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 0, 0),
+            bbox_thick // 2, lineType=cv2.LINE_AA
+        )
+
+    if save_name is not None:
+        save_image(image, save_name)
+
+    return image
\ No newline at end of file
diff --git a/tensorlayer/models/imagenet_classes.py b/examples/model_zoo/imagenet_classes.py
similarity index 100%
rename from tensorlayer/models/imagenet_classes.py
rename to examples/model_zoo/imagenet_classes.py
diff --git a/examples/model_zoo/model/coco.names b/examples/model_zoo/model/coco.names
new file mode 100644
index 000000000..ec82f0ffd
--- /dev/null
+++ b/examples/model_zoo/model/coco.names
@@ -0,0 +1,80 @@
+person
+bicycle
+car
+motorbike
+aeroplane
+bus
+train
+truck
+boat
+traffic light
+fire hydrant
+stop sign
+parking meter
+bench
+bird
+cat
+dog
+horse
+sheep
+cow
+elephant
+bear
+zebra
+giraffe
+backpack
+umbrella
+handbag
+tie
+suitcase
+frisbee
+skis
+snowboard
+sports ball
+kite
+baseball bat
+baseball glove
+skateboard
+surfboard
+tennis racket
+bottle
+wine glass
+cup
+fork
+knife
+spoon
+bowl
+banana
+apple
+sandwich
+orange
+broccoli
+carrot
+hot dog
+pizza
+donut
+cake
+chair
+sofa
+potted plant
+bed
+dining table
+toilet
+tvmonitor
+laptop
+mouse
+remote
+keyboard
+cell phone
+microwave
+oven
+toaster
+sink
+refrigerator
+book
+clock
+vase
+scissors
+teddy bear
+hair drier
+toothbrush
diff --git a/examples/model_zoo/model/weights_2.txt b/examples/model_zoo/model/weights_2.txt
new file mode 100644
index 000000000..42cc4997c
--- /dev/null
+++ b/examples/model_zoo/model/weights_2.txt
@@ -0,0 +1,541 @@
+conv2d_1/filters:0
+batchnorm2d_1/beta:0
+batchnorm2d_1/gamma:0
+batchnorm2d_1/moving_mean:0
+batchnorm2d_1/moving_var:0
+conv2d_2/filters:0
+batchnorm2d_2/beta:0
+batchnorm2d_2/gamma:0
+batchnorm2d_2/moving_mean:0
+batchnorm2d_2/moving_var:0
+conv_rote_block_1/filters:0
+conv2d_3/filters:0
+batchnorm2d_3/beta:0
+batchnorm2d_3/gamma:0
+batchnorm2d_3/moving_mean:0
+batchnorm2d_3/moving_var:0
+batchnorm2d_4/beta:0
+batchnorm2d_4/gamma:0
+batchnorm2d_4/moving_mean:0
+batchnorm2d_4/moving_var:0
+conv2d_4/filters:0
+batchnorm2d_5/beta:0
+batchnorm2d_5/gamma:0
+batchnorm2d_5/moving_mean:0
+batchnorm2d_5/moving_var:0
+conv2d_5/filters:0
+batchnorm2d_6/beta:0
+batchnorm2d_6/gamma:0
+batchnorm2d_6/moving_mean:0
+batchnorm2d_6/moving_var:0
+conv2d_6/filters:0
+batchnorm2d_7/beta:0
+batchnorm2d_7/gamma:0
+batchnorm2d_7/moving_mean:0
+batchnorm2d_7/moving_var:0
+conv2d_7/filters:0
+batchnorm2d_8/beta:0
+batchnorm2d_8/gamma:0 +batchnorm2d_8/moving_mean:0 +batchnorm2d_8/moving_var:0 +conv2d_8/filters:0 +batchnorm2d_9/beta:0 +batchnorm2d_9/gamma:0 +batchnorm2d_9/moving_mean:0 +batchnorm2d_9/moving_var:0 +conv_rote_block_2/filters:0 +conv2d_9/filters:0 +batchnorm2d_10/beta:0 +batchnorm2d_10/gamma:0 +batchnorm2d_10/moving_mean:0 +batchnorm2d_10/moving_var:0 +batchnorm2d_11/beta:0 +batchnorm2d_11/gamma:0 +batchnorm2d_11/moving_mean:0 +batchnorm2d_11/moving_var:0 +conv2d_10/filters:0 +batchnorm2d_12/beta:0 +batchnorm2d_12/gamma:0 +batchnorm2d_12/moving_mean:0 +batchnorm2d_12/moving_var:0 +conv2d_11/filters:0 +batchnorm2d_13/beta:0 +batchnorm2d_13/gamma:0 +batchnorm2d_13/moving_mean:0 +batchnorm2d_13/moving_var:0 +conv2d_12/filters:0 +batchnorm2d_14/beta:0 +batchnorm2d_14/gamma:0 +batchnorm2d_14/moving_mean:0 +batchnorm2d_14/moving_var:0 +conv2d_13/filters:0 +batchnorm2d_15/beta:0 +batchnorm2d_15/gamma:0 +batchnorm2d_15/moving_mean:0 +batchnorm2d_15/moving_var:0 +conv2d_14/filters:0 +batchnorm2d_16/beta:0 +batchnorm2d_16/gamma:0 +batchnorm2d_16/moving_mean:0 +batchnorm2d_16/moving_var:0 +conv2d_15/filters:0 +batchnorm2d_17/beta:0 +batchnorm2d_17/gamma:0 +batchnorm2d_17/moving_mean:0 +batchnorm2d_17/moving_var:0 +conv2d_16/filters:0 +batchnorm2d_18/beta:0 +batchnorm2d_18/gamma:0 +batchnorm2d_18/moving_mean:0 +batchnorm2d_18/moving_var:0 +conv_rote_block_3/filters:0 +conv2d_17/filters:0 +batchnorm2d_19/beta:0 +batchnorm2d_19/gamma:0 +batchnorm2d_19/moving_mean:0 +batchnorm2d_19/moving_var:0 +batchnorm2d_20/beta:0 +batchnorm2d_20/gamma:0 +batchnorm2d_20/moving_mean:0 +batchnorm2d_20/moving_var:0 +conv2d_18/filters:0 +batchnorm2d_21/beta:0 +batchnorm2d_21/gamma:0 +batchnorm2d_21/moving_mean:0 +batchnorm2d_21/moving_var:0 +conv2d_19/filters:0 +batchnorm2d_22/beta:0 +batchnorm2d_22/gamma:0 +batchnorm2d_22/moving_mean:0 +batchnorm2d_22/moving_var:0 +conv2d_20/filters:0 +batchnorm2d_23/beta:0 +batchnorm2d_23/gamma:0 +batchnorm2d_23/moving_mean:0 +batchnorm2d_23/moving_var:0 +conv2d_21/filters:0 +batchnorm2d_24/beta:0 +batchnorm2d_24/gamma:0 +batchnorm2d_24/moving_mean:0 +batchnorm2d_24/moving_var:0 +conv2d_22/filters:0 +batchnorm2d_25/beta:0 +batchnorm2d_25/gamma:0 +batchnorm2d_25/moving_mean:0 +batchnorm2d_25/moving_var:0 +conv2d_23/filters:0 +batchnorm2d_26/beta:0 +batchnorm2d_26/gamma:0 +batchnorm2d_26/moving_mean:0 +batchnorm2d_26/moving_var:0 +conv2d_24/filters:0 +batchnorm2d_27/beta:0 +batchnorm2d_27/gamma:0 +batchnorm2d_27/moving_mean:0 +batchnorm2d_27/moving_var:0 +conv2d_25/filters:0 +batchnorm2d_28/beta:0 +batchnorm2d_28/gamma:0 +batchnorm2d_28/moving_mean:0 +batchnorm2d_28/moving_var:0 +conv2d_26/filters:0 +batchnorm2d_29/beta:0 +batchnorm2d_29/gamma:0 +batchnorm2d_29/moving_mean:0 +batchnorm2d_29/moving_var:0 +conv2d_27/filters:0 +batchnorm2d_30/beta:0 +batchnorm2d_30/gamma:0 +batchnorm2d_30/moving_mean:0 +batchnorm2d_30/moving_var:0 +conv2d_28/filters:0 +batchnorm2d_31/beta:0 +batchnorm2d_31/gamma:0 +batchnorm2d_31/moving_mean:0 +batchnorm2d_31/moving_var:0 +conv2d_29/filters:0 +batchnorm2d_32/beta:0 +batchnorm2d_32/gamma:0 +batchnorm2d_32/moving_mean:0 +batchnorm2d_32/moving_var:0 +conv2d_30/filters:0 +batchnorm2d_33/beta:0 +batchnorm2d_33/gamma:0 +batchnorm2d_33/moving_mean:0 +batchnorm2d_33/moving_var:0 +conv2d_31/filters:0 +batchnorm2d_34/beta:0 +batchnorm2d_34/gamma:0 +batchnorm2d_34/moving_mean:0 +batchnorm2d_34/moving_var:0 +conv2d_32/filters:0 +batchnorm2d_35/beta:0 +batchnorm2d_35/gamma:0 +batchnorm2d_35/moving_mean:0 +batchnorm2d_35/moving_var:0 +conv2d_33/filters:0 
+batchnorm2d_36/beta:0 +batchnorm2d_36/gamma:0 +batchnorm2d_36/moving_mean:0 +batchnorm2d_36/moving_var:0 +conv2d_34/filters:0 +batchnorm2d_37/beta:0 +batchnorm2d_37/gamma:0 +batchnorm2d_37/moving_mean:0 +batchnorm2d_37/moving_var:0 +conv2d_35/filters:0 +batchnorm2d_38/beta:0 +batchnorm2d_38/gamma:0 +batchnorm2d_38/moving_mean:0 +batchnorm2d_38/moving_var:0 +conv_yolo_2/filters:0 +batchnorm2d_87/beta:0 +batchnorm2d_87/gamma:0 +batchnorm2d_87/moving_mean:0 +batchnorm2d_87/moving_var:0 +conv2d_36/filters:0 +batchnorm2d_39/beta:0 +batchnorm2d_39/gamma:0 +batchnorm2d_39/moving_mean:0 +batchnorm2d_39/moving_var:0 +conv_rote_block_4/filters:0 +conv2d_37/filters:0 +batchnorm2d_40/beta:0 +batchnorm2d_40/gamma:0 +batchnorm2d_40/moving_mean:0 +batchnorm2d_40/moving_var:0 +batchnorm2d_41/beta:0 +batchnorm2d_41/gamma:0 +batchnorm2d_41/moving_mean:0 +batchnorm2d_41/moving_var:0 +conv2d_38/filters:0 +batchnorm2d_42/beta:0 +batchnorm2d_42/gamma:0 +batchnorm2d_42/moving_mean:0 +batchnorm2d_42/moving_var:0 +conv2d_39/filters:0 +batchnorm2d_43/beta:0 +batchnorm2d_43/gamma:0 +batchnorm2d_43/moving_mean:0 +batchnorm2d_43/moving_var:0 +conv2d_40/filters:0 +batchnorm2d_44/beta:0 +batchnorm2d_44/gamma:0 +batchnorm2d_44/moving_mean:0 +batchnorm2d_44/moving_var:0 +conv2d_41/filters:0 +batchnorm2d_45/beta:0 +batchnorm2d_45/gamma:0 +batchnorm2d_45/moving_mean:0 +batchnorm2d_45/moving_var:0 +conv2d_42/filters:0 +batchnorm2d_46/beta:0 +batchnorm2d_46/gamma:0 +batchnorm2d_46/moving_mean:0 +batchnorm2d_46/moving_var:0 +conv2d_43/filters:0 +batchnorm2d_47/beta:0 +batchnorm2d_47/gamma:0 +batchnorm2d_47/moving_mean:0 +batchnorm2d_47/moving_var:0 +conv2d_44/filters:0 +batchnorm2d_48/beta:0 +batchnorm2d_48/gamma:0 +batchnorm2d_48/moving_mean:0 +batchnorm2d_48/moving_var:0 +conv2d_45/filters:0 +batchnorm2d_49/beta:0 +batchnorm2d_49/gamma:0 +batchnorm2d_49/moving_mean:0 +batchnorm2d_49/moving_var:0 +conv2d_46/filters:0 +batchnorm2d_50/beta:0 +batchnorm2d_50/gamma:0 +batchnorm2d_50/moving_mean:0 +batchnorm2d_50/moving_var:0 +conv2d_47/filters:0 +batchnorm2d_51/beta:0 +batchnorm2d_51/gamma:0 +batchnorm2d_51/moving_mean:0 +batchnorm2d_51/moving_var:0 +conv2d_48/filters:0 +batchnorm2d_52/beta:0 +batchnorm2d_52/gamma:0 +batchnorm2d_52/moving_mean:0 +batchnorm2d_52/moving_var:0 +conv2d_49/filters:0 +batchnorm2d_53/beta:0 +batchnorm2d_53/gamma:0 +batchnorm2d_53/moving_mean:0 +batchnorm2d_53/moving_var:0 +conv2d_50/filters:0 +batchnorm2d_54/beta:0 +batchnorm2d_54/gamma:0 +batchnorm2d_54/moving_mean:0 +batchnorm2d_54/moving_var:0 +conv2d_51/filters:0 +batchnorm2d_55/beta:0 +batchnorm2d_55/gamma:0 +batchnorm2d_55/moving_mean:0 +batchnorm2d_55/moving_var:0 +conv2d_52/filters:0 +batchnorm2d_56/beta:0 +batchnorm2d_56/gamma:0 +batchnorm2d_56/moving_mean:0 +batchnorm2d_56/moving_var:0 +conv2d_53/filters:0 +batchnorm2d_57/beta:0 +batchnorm2d_57/gamma:0 +batchnorm2d_57/moving_mean:0 +batchnorm2d_57/moving_var:0 +conv2d_54/filters:0 +batchnorm2d_58/beta:0 +batchnorm2d_58/gamma:0 +batchnorm2d_58/moving_mean:0 +batchnorm2d_58/moving_var:0 +conv2d_55/filters:0 +batchnorm2d_59/beta:0 +batchnorm2d_59/gamma:0 +batchnorm2d_59/moving_mean:0 +batchnorm2d_59/moving_var:0 +conv_yolo_1/filters:0 +batchnorm2d_80/beta:0 +batchnorm2d_80/gamma:0 +batchnorm2d_80/moving_mean:0 +batchnorm2d_80/moving_var:0 +conv2d_56/filters:0 +batchnorm2d_60/beta:0 +batchnorm2d_60/gamma:0 +batchnorm2d_60/moving_mean:0 +batchnorm2d_60/moving_var:0 +conv_rote_block_5/filters:0 +conv2d_57/filters:0 +batchnorm2d_61/beta:0 +batchnorm2d_61/gamma:0 +batchnorm2d_61/moving_mean:0 
+batchnorm2d_61/moving_var:0 +batchnorm2d_62/beta:0 +batchnorm2d_62/gamma:0 +batchnorm2d_62/moving_mean:0 +batchnorm2d_62/moving_var:0 +conv2d_58/filters:0 +batchnorm2d_63/beta:0 +batchnorm2d_63/gamma:0 +batchnorm2d_63/moving_mean:0 +batchnorm2d_63/moving_var:0 +conv2d_59/filters:0 +batchnorm2d_64/beta:0 +batchnorm2d_64/gamma:0 +batchnorm2d_64/moving_mean:0 +batchnorm2d_64/moving_var:0 +conv2d_60/filters:0 +batchnorm2d_65/beta:0 +batchnorm2d_65/gamma:0 +batchnorm2d_65/moving_mean:0 +batchnorm2d_65/moving_var:0 +conv2d_61/filters:0 +batchnorm2d_66/beta:0 +batchnorm2d_66/gamma:0 +batchnorm2d_66/moving_mean:0 +batchnorm2d_66/moving_var:0 +conv2d_62/filters:0 +batchnorm2d_67/beta:0 +batchnorm2d_67/gamma:0 +batchnorm2d_67/moving_mean:0 +batchnorm2d_67/moving_var:0 +conv2d_63/filters:0 +batchnorm2d_68/beta:0 +batchnorm2d_68/gamma:0 +batchnorm2d_68/moving_mean:0 +batchnorm2d_68/moving_var:0 +conv2d_64/filters:0 +batchnorm2d_69/beta:0 +batchnorm2d_69/gamma:0 +batchnorm2d_69/moving_mean:0 +batchnorm2d_69/moving_var:0 +conv2d_65/filters:0 +batchnorm2d_70/beta:0 +batchnorm2d_70/gamma:0 +batchnorm2d_70/moving_mean:0 +batchnorm2d_70/moving_var:0 +conv2d_66/filters:0 +batchnorm2d_71/beta:0 +batchnorm2d_71/gamma:0 +batchnorm2d_71/moving_mean:0 +batchnorm2d_71/moving_var:0 +conv2d_67/filters:0 +batchnorm2d_72/beta:0 +batchnorm2d_72/gamma:0 +batchnorm2d_72/moving_mean:0 +batchnorm2d_72/moving_var:0 +conv2d_68/filters:0 +batchnorm2d_73/beta:0 +batchnorm2d_73/gamma:0 +batchnorm2d_73/moving_mean:0 +batchnorm2d_73/moving_var:0 +conv2d_69/filters:0 +batchnorm2d_74/beta:0 +batchnorm2d_74/gamma:0 +batchnorm2d_74/moving_mean:0 +batchnorm2d_74/moving_var:0 +conv2d_70/filters:0 +batchnorm2d_75/beta:0 +batchnorm2d_75/gamma:0 +batchnorm2d_75/moving_mean:0 +batchnorm2d_75/moving_var:0 +conv2d_71/filters:0 +batchnorm2d_76/beta:0 +batchnorm2d_76/gamma:0 +batchnorm2d_76/moving_mean:0 +batchnorm2d_76/moving_var:0 +conv2d_72/filters:0 +batchnorm2d_77/beta:0 +batchnorm2d_77/gamma:0 +batchnorm2d_77/moving_mean:0 +batchnorm2d_77/moving_var:0 +conv2d_73/filters:0 +batchnorm2d_78/beta:0 +batchnorm2d_78/gamma:0 +batchnorm2d_78/moving_mean:0 +batchnorm2d_78/moving_var:0 +conv2d_74/filters:0 +batchnorm2d_79/beta:0 +batchnorm2d_79/gamma:0 +batchnorm2d_79/moving_mean:0 +batchnorm2d_79/moving_var:0 +conv2d_75/filters:0 +batchnorm2d_81/beta:0 +batchnorm2d_81/gamma:0 +batchnorm2d_81/moving_mean:0 +batchnorm2d_81/moving_var:0 +conv2d_76/filters:0 +batchnorm2d_82/beta:0 +batchnorm2d_82/gamma:0 +batchnorm2d_82/moving_mean:0 +batchnorm2d_82/moving_var:0 +conv2d_77/filters:0 +batchnorm2d_83/beta:0 +batchnorm2d_83/gamma:0 +batchnorm2d_83/moving_mean:0 +batchnorm2d_83/moving_var:0 +conv2d_78/filters:0 +batchnorm2d_84/beta:0 +batchnorm2d_84/gamma:0 +batchnorm2d_84/moving_mean:0 +batchnorm2d_84/moving_var:0 +conv2d_79/filters:0 +batchnorm2d_85/beta:0 +batchnorm2d_85/gamma:0 +batchnorm2d_85/moving_mean:0 +batchnorm2d_85/moving_var:0 +conv2d_80/filters:0 +batchnorm2d_86/beta:0 +batchnorm2d_86/gamma:0 +batchnorm2d_86/moving_mean:0 +batchnorm2d_86/moving_var:0 +conv2d_81/filters:0 +batchnorm2d_88/beta:0 +batchnorm2d_88/gamma:0 +batchnorm2d_88/moving_mean:0 +batchnorm2d_88/moving_var:0 +conv2d_82/filters:0 +batchnorm2d_89/beta:0 +batchnorm2d_89/gamma:0 +batchnorm2d_89/moving_mean:0 +batchnorm2d_89/moving_var:0 +conv2d_83/filters:0 +batchnorm2d_90/beta:0 +batchnorm2d_90/gamma:0 +batchnorm2d_90/moving_mean:0 +batchnorm2d_90/moving_var:0 +conv2d_84/filters:0 +batchnorm2d_91/beta:0 +batchnorm2d_91/gamma:0 +batchnorm2d_91/moving_mean:0 
+batchnorm2d_91/moving_var:0 +conv2d_85/filters:0 +batchnorm2d_92/beta:0 +batchnorm2d_92/gamma:0 +batchnorm2d_92/moving_mean:0 +batchnorm2d_92/moving_var:0 +conv_route_1/filters:0 +batchnorm2d_93/beta:0 +batchnorm2d_93/gamma:0 +batchnorm2d_93/moving_mean:0 +batchnorm2d_93/moving_var:0 +conv_route_2/filters:0 +conv2d_86/filters:0 +conv2d_86/biases:0 +batchnorm2d_94/beta:0 +batchnorm2d_94/gamma:0 +batchnorm2d_94/moving_mean:0 +batchnorm2d_94/moving_var:0 +conv2d_87/filters:0 +batchnorm2d_95/beta:0 +batchnorm2d_95/gamma:0 +batchnorm2d_95/moving_mean:0 +batchnorm2d_95/moving_var:0 +conv2d_88/filters:0 +batchnorm2d_96/beta:0 +batchnorm2d_96/gamma:0 +batchnorm2d_96/moving_mean:0 +batchnorm2d_96/moving_var:0 +conv2d_89/filters:0 +batchnorm2d_97/beta:0 +batchnorm2d_97/gamma:0 +batchnorm2d_97/moving_mean:0 +batchnorm2d_97/moving_var:0 +conv2d_90/filters:0 +batchnorm2d_98/beta:0 +batchnorm2d_98/gamma:0 +batchnorm2d_98/moving_mean:0 +batchnorm2d_98/moving_var:0 +conv2d_91/filters:0 +batchnorm2d_99/beta:0 +batchnorm2d_99/gamma:0 +batchnorm2d_99/moving_mean:0 +batchnorm2d_99/moving_var:0 +conv_route_3/filters:0 +batchnorm2d_100/beta:0 +batchnorm2d_100/gamma:0 +batchnorm2d_100/moving_mean:0 +batchnorm2d_100/moving_var:0 +conv_route_4/filters:0 +conv2d_92/filters:0 +conv2d_92/biases:0 +batchnorm2d_101/beta:0 +batchnorm2d_101/gamma:0 +batchnorm2d_101/moving_mean:0 +batchnorm2d_101/moving_var:0 +conv2d_93/filters:0 +batchnorm2d_102/beta:0 +batchnorm2d_102/gamma:0 +batchnorm2d_102/moving_mean:0 +batchnorm2d_102/moving_var:0 +conv2d_94/filters:0 +batchnorm2d_103/beta:0 +batchnorm2d_103/gamma:0 +batchnorm2d_103/moving_mean:0 +batchnorm2d_103/moving_var:0 +conv2d_95/filters:0 +batchnorm2d_104/beta:0 +batchnorm2d_104/gamma:0 +batchnorm2d_104/moving_mean:0 +batchnorm2d_104/moving_var:0 +conv2d_96/filters:0 +batchnorm2d_105/beta:0 +batchnorm2d_105/gamma:0 +batchnorm2d_105/moving_mean:0 +batchnorm2d_105/moving_var:0 +conv2d_97/filters:0 +batchnorm2d_106/beta:0 +batchnorm2d_106/gamma:0 +batchnorm2d_106/moving_mean:0 +batchnorm2d_106/moving_var:0 +conv2d_98/filters:0 +batchnorm2d_107/beta:0 +batchnorm2d_107/gamma:0 +batchnorm2d_107/moving_mean:0 +batchnorm2d_107/moving_var:0 +conv2d_99/filters:0 +conv2d_99/biases:0 \ No newline at end of file diff --git a/examples/model_zoo/model/weights_3.txt b/examples/model_zoo/model/weights_3.txt new file mode 100644 index 000000000..b9ff6e190 --- /dev/null +++ b/examples/model_zoo/model/weights_3.txt @@ -0,0 +1,541 @@ +conv2d_1/filters:0 +batchnorm2d_1/beta:0 +batchnorm2d_1/gamma:0 +batchnorm2d_1/moving_mean:0 +batchnorm2d_1/moving_var:0 +conv2d_2/filters:0 +batchnorm2d_2/beta:0 +batchnorm2d_2/gamma:0 +batchnorm2d_2/moving_mean:0 +batchnorm2d_2/moving_var:0 +conv_rote_block_1/filters:0 +batchnorm2d_3/beta:0 +batchnorm2d_3/gamma:0 +batchnorm2d_3/moving_mean:0 +batchnorm2d_3/moving_var:0 +conv2d_3/filters:0 +batchnorm2d_4/beta:0 +batchnorm2d_4/gamma:0 +batchnorm2d_4/moving_mean:0 +batchnorm2d_4/moving_var:0 +conv2d_4/filters:0 +batchnorm2d_5/beta:0 +batchnorm2d_5/gamma:0 +batchnorm2d_5/moving_mean:0 +batchnorm2d_5/moving_var:0 +conv2d_5/filters:0 +batchnorm2d_6/beta:0 +batchnorm2d_6/gamma:0 +batchnorm2d_6/moving_mean:0 +batchnorm2d_6/moving_var:0 +conv2d_6/filters:0 +batchnorm2d_7/beta:0 +batchnorm2d_7/gamma:0 +batchnorm2d_7/moving_mean:0 +batchnorm2d_7/moving_var:0 +conv2d_7/filters:0 +batchnorm2d_8/beta:0 +batchnorm2d_8/gamma:0 +batchnorm2d_8/moving_mean:0 +batchnorm2d_8/moving_var:0 +conv2d_8/filters:0 +batchnorm2d_9/beta:0 +batchnorm2d_9/gamma:0 +batchnorm2d_9/moving_mean:0 
+batchnorm2d_9/moving_var:0 +conv_rote_block_2/filters:0 +batchnorm2d_10/beta:0 +batchnorm2d_10/gamma:0 +batchnorm2d_10/moving_mean:0 +batchnorm2d_10/moving_var:0 +conv2d_9/filters:0 +batchnorm2d_11/beta:0 +batchnorm2d_11/gamma:0 +batchnorm2d_11/moving_mean:0 +batchnorm2d_11/moving_var:0 +conv2d_10/filters:0 +batchnorm2d_12/beta:0 +batchnorm2d_12/gamma:0 +batchnorm2d_12/moving_mean:0 +batchnorm2d_12/moving_var:0 +conv2d_11/filters:0 +batchnorm2d_13/beta:0 +batchnorm2d_13/gamma:0 +batchnorm2d_13/moving_mean:0 +batchnorm2d_13/moving_var:0 +conv2d_12/filters:0 +batchnorm2d_14/beta:0 +batchnorm2d_14/gamma:0 +batchnorm2d_14/moving_mean:0 +batchnorm2d_14/moving_var:0 +conv2d_13/filters:0 +batchnorm2d_15/beta:0 +batchnorm2d_15/gamma:0 +batchnorm2d_15/moving_mean:0 +batchnorm2d_15/moving_var:0 +conv2d_14/filters:0 +batchnorm2d_16/beta:0 +batchnorm2d_16/gamma:0 +batchnorm2d_16/moving_mean:0 +batchnorm2d_16/moving_var:0 +conv2d_15/filters:0 +batchnorm2d_17/beta:0 +batchnorm2d_17/gamma:0 +batchnorm2d_17/moving_mean:0 +batchnorm2d_17/moving_var:0 +conv2d_16/filters:0 +batchnorm2d_18/beta:0 +batchnorm2d_18/gamma:0 +batchnorm2d_18/moving_mean:0 +batchnorm2d_18/moving_var:0 +conv_rote_block_3/filters:0 +batchnorm2d_19/beta:0 +batchnorm2d_19/gamma:0 +batchnorm2d_19/moving_mean:0 +batchnorm2d_19/moving_var:0 +conv2d_17/filters:0 +batchnorm2d_20/beta:0 +batchnorm2d_20/gamma:0 +batchnorm2d_20/moving_mean:0 +batchnorm2d_20/moving_var:0 +conv2d_18/filters:0 +batchnorm2d_21/beta:0 +batchnorm2d_21/gamma:0 +batchnorm2d_21/moving_mean:0 +batchnorm2d_21/moving_var:0 +conv2d_19/filters:0 +batchnorm2d_22/beta:0 +batchnorm2d_22/gamma:0 +batchnorm2d_22/moving_mean:0 +batchnorm2d_22/moving_var:0 +conv2d_20/filters:0 +batchnorm2d_23/beta:0 +batchnorm2d_23/gamma:0 +batchnorm2d_23/moving_mean:0 +batchnorm2d_23/moving_var:0 +conv2d_21/filters:0 +batchnorm2d_24/beta:0 +batchnorm2d_24/gamma:0 +batchnorm2d_24/moving_mean:0 +batchnorm2d_24/moving_var:0 +conv2d_22/filters:0 +batchnorm2d_25/beta:0 +batchnorm2d_25/gamma:0 +batchnorm2d_25/moving_mean:0 +batchnorm2d_25/moving_var:0 +conv2d_23/filters:0 +batchnorm2d_26/beta:0 +batchnorm2d_26/gamma:0 +batchnorm2d_26/moving_mean:0 +batchnorm2d_26/moving_var:0 +conv2d_24/filters:0 +batchnorm2d_27/beta:0 +batchnorm2d_27/gamma:0 +batchnorm2d_27/moving_mean:0 +batchnorm2d_27/moving_var:0 +conv2d_25/filters:0 +batchnorm2d_28/beta:0 +batchnorm2d_28/gamma:0 +batchnorm2d_28/moving_mean:0 +batchnorm2d_28/moving_var:0 +conv2d_26/filters:0 +batchnorm2d_29/beta:0 +batchnorm2d_29/gamma:0 +batchnorm2d_29/moving_mean:0 +batchnorm2d_29/moving_var:0 +conv2d_27/filters:0 +batchnorm2d_30/beta:0 +batchnorm2d_30/gamma:0 +batchnorm2d_30/moving_mean:0 +batchnorm2d_30/moving_var:0 +conv2d_28/filters:0 +batchnorm2d_31/beta:0 +batchnorm2d_31/gamma:0 +batchnorm2d_31/moving_mean:0 +batchnorm2d_31/moving_var:0 +conv2d_29/filters:0 +batchnorm2d_32/beta:0 +batchnorm2d_32/gamma:0 +batchnorm2d_32/moving_mean:0 +batchnorm2d_32/moving_var:0 +conv2d_30/filters:0 +batchnorm2d_33/beta:0 +batchnorm2d_33/gamma:0 +batchnorm2d_33/moving_mean:0 +batchnorm2d_33/moving_var:0 +conv2d_31/filters:0 +batchnorm2d_34/beta:0 +batchnorm2d_34/gamma:0 +batchnorm2d_34/moving_mean:0 +batchnorm2d_34/moving_var:0 +conv2d_32/filters:0 +batchnorm2d_35/beta:0 +batchnorm2d_35/gamma:0 +batchnorm2d_35/moving_mean:0 +batchnorm2d_35/moving_var:0 +conv2d_33/filters:0 +batchnorm2d_36/beta:0 +batchnorm2d_36/gamma:0 +batchnorm2d_36/moving_mean:0 +batchnorm2d_36/moving_var:0 +conv2d_34/filters:0 +batchnorm2d_37/beta:0 +batchnorm2d_37/gamma:0 
+batchnorm2d_37/moving_mean:0 +batchnorm2d_37/moving_var:0 +conv2d_35/filters:0 +batchnorm2d_38/beta:0 +batchnorm2d_38/gamma:0 +batchnorm2d_38/moving_mean:0 +batchnorm2d_38/moving_var:0 +conv2d_36/filters:0 +batchnorm2d_39/beta:0 +batchnorm2d_39/gamma:0 +batchnorm2d_39/moving_mean:0 +batchnorm2d_39/moving_var:0 +conv_rote_block_4/filters:0 +batchnorm2d_40/beta:0 +batchnorm2d_40/gamma:0 +batchnorm2d_40/moving_mean:0 +batchnorm2d_40/moving_var:0 +conv2d_37/filters:0 +batchnorm2d_41/beta:0 +batchnorm2d_41/gamma:0 +batchnorm2d_41/moving_mean:0 +batchnorm2d_41/moving_var:0 +conv2d_38/filters:0 +batchnorm2d_42/beta:0 +batchnorm2d_42/gamma:0 +batchnorm2d_42/moving_mean:0 +batchnorm2d_42/moving_var:0 +conv2d_39/filters:0 +batchnorm2d_43/beta:0 +batchnorm2d_43/gamma:0 +batchnorm2d_43/moving_mean:0 +batchnorm2d_43/moving_var:0 +conv2d_40/filters:0 +batchnorm2d_44/beta:0 +batchnorm2d_44/gamma:0 +batchnorm2d_44/moving_mean:0 +batchnorm2d_44/moving_var:0 +conv2d_41/filters:0 +batchnorm2d_45/beta:0 +batchnorm2d_45/gamma:0 +batchnorm2d_45/moving_mean:0 +batchnorm2d_45/moving_var:0 +conv2d_42/filters:0 +batchnorm2d_46/beta:0 +batchnorm2d_46/gamma:0 +batchnorm2d_46/moving_mean:0 +batchnorm2d_46/moving_var:0 +conv2d_43/filters:0 +batchnorm2d_47/beta:0 +batchnorm2d_47/gamma:0 +batchnorm2d_47/moving_mean:0 +batchnorm2d_47/moving_var:0 +conv2d_44/filters:0 +batchnorm2d_48/beta:0 +batchnorm2d_48/gamma:0 +batchnorm2d_48/moving_mean:0 +batchnorm2d_48/moving_var:0 +conv2d_45/filters:0 +batchnorm2d_49/beta:0 +batchnorm2d_49/gamma:0 +batchnorm2d_49/moving_mean:0 +batchnorm2d_49/moving_var:0 +conv2d_46/filters:0 +batchnorm2d_50/beta:0 +batchnorm2d_50/gamma:0 +batchnorm2d_50/moving_mean:0 +batchnorm2d_50/moving_var:0 +conv2d_47/filters:0 +batchnorm2d_51/beta:0 +batchnorm2d_51/gamma:0 +batchnorm2d_51/moving_mean:0 +batchnorm2d_51/moving_var:0 +conv2d_48/filters:0 +batchnorm2d_52/beta:0 +batchnorm2d_52/gamma:0 +batchnorm2d_52/moving_mean:0 +batchnorm2d_52/moving_var:0 +conv2d_49/filters:0 +batchnorm2d_53/beta:0 +batchnorm2d_53/gamma:0 +batchnorm2d_53/moving_mean:0 +batchnorm2d_53/moving_var:0 +conv2d_50/filters:0 +batchnorm2d_54/beta:0 +batchnorm2d_54/gamma:0 +batchnorm2d_54/moving_mean:0 +batchnorm2d_54/moving_var:0 +conv2d_51/filters:0 +batchnorm2d_55/beta:0 +batchnorm2d_55/gamma:0 +batchnorm2d_55/moving_mean:0 +batchnorm2d_55/moving_var:0 +conv2d_52/filters:0 +batchnorm2d_56/beta:0 +batchnorm2d_56/gamma:0 +batchnorm2d_56/moving_mean:0 +batchnorm2d_56/moving_var:0 +conv2d_53/filters:0 +batchnorm2d_57/beta:0 +batchnorm2d_57/gamma:0 +batchnorm2d_57/moving_mean:0 +batchnorm2d_57/moving_var:0 +conv2d_54/filters:0 +batchnorm2d_58/beta:0 +batchnorm2d_58/gamma:0 +batchnorm2d_58/moving_mean:0 +batchnorm2d_58/moving_var:0 +conv2d_55/filters:0 +batchnorm2d_59/beta:0 +batchnorm2d_59/gamma:0 +batchnorm2d_59/moving_mean:0 +batchnorm2d_59/moving_var:0 +conv2d_56/filters:0 +batchnorm2d_60/beta:0 +batchnorm2d_60/gamma:0 +batchnorm2d_60/moving_mean:0 +batchnorm2d_60/moving_var:0 +conv_rote_block_5/filters:0 +batchnorm2d_61/beta:0 +batchnorm2d_61/gamma:0 +batchnorm2d_61/moving_mean:0 +batchnorm2d_61/moving_var:0 +conv2d_57/filters:0 +batchnorm2d_62/beta:0 +batchnorm2d_62/gamma:0 +batchnorm2d_62/moving_mean:0 +batchnorm2d_62/moving_var:0 +conv2d_58/filters:0 +batchnorm2d_63/beta:0 +batchnorm2d_63/gamma:0 +batchnorm2d_63/moving_mean:0 +batchnorm2d_63/moving_var:0 +conv2d_59/filters:0 +batchnorm2d_64/beta:0 +batchnorm2d_64/gamma:0 +batchnorm2d_64/moving_mean:0 +batchnorm2d_64/moving_var:0 +conv2d_60/filters:0 +batchnorm2d_65/beta:0 
+batchnorm2d_65/gamma:0 +batchnorm2d_65/moving_mean:0 +batchnorm2d_65/moving_var:0 +conv2d_61/filters:0 +batchnorm2d_66/beta:0 +batchnorm2d_66/gamma:0 +batchnorm2d_66/moving_mean:0 +batchnorm2d_66/moving_var:0 +conv2d_62/filters:0 +batchnorm2d_67/beta:0 +batchnorm2d_67/gamma:0 +batchnorm2d_67/moving_mean:0 +batchnorm2d_67/moving_var:0 +conv2d_63/filters:0 +batchnorm2d_68/beta:0 +batchnorm2d_68/gamma:0 +batchnorm2d_68/moving_mean:0 +batchnorm2d_68/moving_var:0 +conv2d_64/filters:0 +batchnorm2d_69/beta:0 +batchnorm2d_69/gamma:0 +batchnorm2d_69/moving_mean:0 +batchnorm2d_69/moving_var:0 +conv2d_65/filters:0 +batchnorm2d_70/beta:0 +batchnorm2d_70/gamma:0 +batchnorm2d_70/moving_mean:0 +batchnorm2d_70/moving_var:0 +conv2d_66/filters:0 +batchnorm2d_71/beta:0 +batchnorm2d_71/gamma:0 +batchnorm2d_71/moving_mean:0 +batchnorm2d_71/moving_var:0 +conv2d_67/filters:0 +batchnorm2d_72/beta:0 +batchnorm2d_72/gamma:0 +batchnorm2d_72/moving_mean:0 +batchnorm2d_72/moving_var:0 +conv2d_68/filters:0 +batchnorm2d_73/beta:0 +batchnorm2d_73/gamma:0 +batchnorm2d_73/moving_mean:0 +batchnorm2d_73/moving_var:0 +conv2d_69/filters:0 +batchnorm2d_74/beta:0 +batchnorm2d_74/gamma:0 +batchnorm2d_74/moving_mean:0 +batchnorm2d_74/moving_var:0 +conv2d_70/filters:0 +batchnorm2d_75/beta:0 +batchnorm2d_75/gamma:0 +batchnorm2d_75/moving_mean:0 +batchnorm2d_75/moving_var:0 +conv2d_71/filters:0 +batchnorm2d_76/beta:0 +batchnorm2d_76/gamma:0 +batchnorm2d_76/moving_mean:0 +batchnorm2d_76/moving_var:0 +conv2d_72/filters:0 +batchnorm2d_77/beta:0 +batchnorm2d_77/gamma:0 +batchnorm2d_77/moving_mean:0 +batchnorm2d_77/moving_var:0 +conv2d_73/filters:0 +batchnorm2d_78/beta:0 +batchnorm2d_78/gamma:0 +batchnorm2d_78/moving_mean:0 +batchnorm2d_78/moving_var:0 +conv2d_74/filters:0 +batchnorm2d_79/beta:0 +batchnorm2d_79/gamma:0 +batchnorm2d_79/moving_mean:0 +batchnorm2d_79/moving_var:0 +conv_yolo_1/filters:0 +batchnorm2d_80/beta:0 +batchnorm2d_80/gamma:0 +batchnorm2d_80/moving_mean:0 +batchnorm2d_80/moving_var:0 +conv2d_75/filters:0 +batchnorm2d_81/beta:0 +batchnorm2d_81/gamma:0 +batchnorm2d_81/moving_mean:0 +batchnorm2d_81/moving_var:0 +conv2d_76/filters:0 +batchnorm2d_82/beta:0 +batchnorm2d_82/gamma:0 +batchnorm2d_82/moving_mean:0 +batchnorm2d_82/moving_var:0 +conv2d_77/filters:0 +batchnorm2d_83/beta:0 +batchnorm2d_83/gamma:0 +batchnorm2d_83/moving_mean:0 +batchnorm2d_83/moving_var:0 +conv2d_78/filters:0 +batchnorm2d_84/beta:0 +batchnorm2d_84/gamma:0 +batchnorm2d_84/moving_mean:0 +batchnorm2d_84/moving_var:0 +conv2d_79/filters:0 +batchnorm2d_85/beta:0 +batchnorm2d_85/gamma:0 +batchnorm2d_85/moving_mean:0 +batchnorm2d_85/moving_var:0 +conv2d_80/filters:0 +batchnorm2d_86/beta:0 +batchnorm2d_86/gamma:0 +batchnorm2d_86/moving_mean:0 +batchnorm2d_86/moving_var:0 +conv_yolo_2/filters:0 +batchnorm2d_87/beta:0 +batchnorm2d_87/gamma:0 +batchnorm2d_87/moving_mean:0 +batchnorm2d_87/moving_var:0 +conv2d_81/filters:0 +batchnorm2d_88/beta:0 +batchnorm2d_88/gamma:0 +batchnorm2d_88/moving_mean:0 +batchnorm2d_88/moving_var:0 +conv2d_82/filters:0 +batchnorm2d_89/beta:0 +batchnorm2d_89/gamma:0 +batchnorm2d_89/moving_mean:0 +batchnorm2d_89/moving_var:0 +conv2d_83/filters:0 +batchnorm2d_90/beta:0 +batchnorm2d_90/gamma:0 +batchnorm2d_90/moving_mean:0 +batchnorm2d_90/moving_var:0 +conv2d_84/filters:0 +batchnorm2d_91/beta:0 +batchnorm2d_91/gamma:0 +batchnorm2d_91/moving_mean:0 +batchnorm2d_91/moving_var:0 +conv2d_85/filters:0 +batchnorm2d_92/beta:0 +batchnorm2d_92/gamma:0 +batchnorm2d_92/moving_mean:0 +batchnorm2d_92/moving_var:0 +conv_route_1/filters:0 
+batchnorm2d_93/beta:0 +batchnorm2d_93/gamma:0 +batchnorm2d_93/moving_mean:0 +batchnorm2d_93/moving_var:0 +conv2d_86/filters:0 +conv2d_86/biases:0 +conv_route_2/filters:0 +batchnorm2d_94/beta:0 +batchnorm2d_94/gamma:0 +batchnorm2d_94/moving_mean:0 +batchnorm2d_94/moving_var:0 +conv2d_87/filters:0 +batchnorm2d_95/beta:0 +batchnorm2d_95/gamma:0 +batchnorm2d_95/moving_mean:0 +batchnorm2d_95/moving_var:0 +conv2d_88/filters:0 +batchnorm2d_96/beta:0 +batchnorm2d_96/gamma:0 +batchnorm2d_96/moving_mean:0 +batchnorm2d_96/moving_var:0 +conv2d_89/filters:0 +batchnorm2d_97/beta:0 +batchnorm2d_97/gamma:0 +batchnorm2d_97/moving_mean:0 +batchnorm2d_97/moving_var:0 +conv2d_90/filters:0 +batchnorm2d_98/beta:0 +batchnorm2d_98/gamma:0 +batchnorm2d_98/moving_mean:0 +batchnorm2d_98/moving_var:0 +conv2d_91/filters:0 +batchnorm2d_99/beta:0 +batchnorm2d_99/gamma:0 +batchnorm2d_99/moving_mean:0 +batchnorm2d_99/moving_var:0 +conv_route_3/filters:0 +batchnorm2d_100/beta:0 +batchnorm2d_100/gamma:0 +batchnorm2d_100/moving_mean:0 +batchnorm2d_100/moving_var:0 +conv2d_92/filters:0 +conv2d_92/biases:0 +conv_route_4/filters:0 +batchnorm2d_101/beta:0 +batchnorm2d_101/gamma:0 +batchnorm2d_101/moving_mean:0 +batchnorm2d_101/moving_var:0 +conv2d_93/filters:0 +batchnorm2d_102/beta:0 +batchnorm2d_102/gamma:0 +batchnorm2d_102/moving_mean:0 +batchnorm2d_102/moving_var:0 +conv2d_94/filters:0 +batchnorm2d_103/beta:0 +batchnorm2d_103/gamma:0 +batchnorm2d_103/moving_mean:0 +batchnorm2d_103/moving_var:0 +conv2d_95/filters:0 +batchnorm2d_104/beta:0 +batchnorm2d_104/gamma:0 +batchnorm2d_104/moving_mean:0 +batchnorm2d_104/moving_var:0 +conv2d_96/filters:0 +batchnorm2d_105/beta:0 +batchnorm2d_105/gamma:0 +batchnorm2d_105/moving_mean:0 +batchnorm2d_105/moving_var:0 +conv2d_97/filters:0 +batchnorm2d_106/beta:0 +batchnorm2d_106/gamma:0 +batchnorm2d_106/moving_mean:0 +batchnorm2d_106/moving_var:0 +conv2d_98/filters:0 +batchnorm2d_107/beta:0 +batchnorm2d_107/gamma:0 +batchnorm2d_107/moving_mean:0 +batchnorm2d_107/moving_var:0 +conv2d_99/filters:0 +conv2d_99/biases:0 \ No newline at end of file diff --git a/examples/model_zoo/model/yolov4_weights3_config.txt b/examples/model_zoo/model/yolov4_weights3_config.txt new file mode 100644 index 000000000..5f31bb51d --- /dev/null +++ b/examples/model_zoo/model/yolov4_weights3_config.txt @@ -0,0 +1,541 @@ +layer_with_weights-0/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-2/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-3/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-3/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-3/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-3/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-11/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-4/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/moving_variance/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-6/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-8/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-10/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-14/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-16/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-29/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-18/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-20/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-22/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-24/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-25/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-25/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-25/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-25/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-26/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-28/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/moving_variance/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-32/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-34/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-71/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-36/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-38/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-40/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-42/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-44/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-46/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-48/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-49/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-49/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-49/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-49/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-50/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-52/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/moving_variance/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-54/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-56/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-58/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-60/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-62/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-64/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-66/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-68/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-70/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-74/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-75/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-75/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-75/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-75/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-76/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-113/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-115/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-115/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-115/moving_mean/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-115/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-78/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-80/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-82/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-84/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-86/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-88/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-90/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-92/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-94/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-96/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-98/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-100/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-101/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-101/gamma/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-101/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-101/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-102/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-104/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-106/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-108/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-110/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-112/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-116/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-118/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-139/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-120/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-122/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-124/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-125/beta/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-125/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-125/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-125/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-126/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-128/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-130/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-132/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-134/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-136/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-138/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-140/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-140/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-140/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-140/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-142/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-144/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-146/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-148/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/moving_variance/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-150/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-152/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-154/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-156/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-157/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-159/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-159/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-159/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-159/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-160/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-162/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-164/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-166/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-167/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-167/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-167/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-167/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-168/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-170/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-171/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-173/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-173/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-173/moving_mean/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-173/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-174/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-176/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-178/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-180/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-182/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-208/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-214/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-214/bias/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-184/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-186/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-188/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-189/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-189/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-189/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-189/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-190/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-192/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-194/kernel/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-195/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-195/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-195/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-195/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-209/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-215/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-215/bias/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-196/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-198/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-200/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-202/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-204/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-206/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-210/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-216/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-216/bias/.ATTRIBUTES/VARIABLE_VALUE \ No newline at end of file diff --git a/examples/model_zoo/model/yolov4_weights_config.txt b/examples/model_zoo/model/yolov4_weights_config.txt new file mode 100644 index 000000000..2c28be036 --- /dev/null +++ b/examples/model_zoo/model/yolov4_weights_config.txt @@ -0,0 +1,541 @@ +layer_with_weights-0/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-1/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-2/kernel/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-3/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-3/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-3/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-3/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-11/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-4/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-13/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-5/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-6/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-7/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-8/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-9/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-10/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-12/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-14/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-15/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-16/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-17/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-29/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-18/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-31/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-19/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-20/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-21/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-22/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-23/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-24/kernel/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-25/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-25/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-25/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-25/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-26/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-27/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-28/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-30/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-32/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-33/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-34/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-35/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-71/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-36/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-73/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-37/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-38/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-39/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-40/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-41/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-42/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-43/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-44/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-45/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-46/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-47/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-48/kernel/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-49/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-49/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-49/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-49/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-50/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-51/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-52/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-53/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-54/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-55/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-56/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-57/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-58/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-59/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-60/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-61/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-62/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-63/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-64/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-65/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-66/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-67/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-68/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-69/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-70/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-72/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-74/kernel/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-75/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-75/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-75/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-75/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-171/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-173/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-173/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-173/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-173/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-76/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-77/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-113/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-78/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-115/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-115/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-115/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-115/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-79/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-80/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-81/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-82/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-83/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-84/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-85/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-86/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-87/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-88/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-89/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-90/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-91/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-92/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-93/moving_variance/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-94/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-95/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-96/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-97/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-98/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-99/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-100/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-101/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-101/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-101/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-101/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-102/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-103/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-104/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-105/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-106/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-107/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-108/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-109/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-110/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-111/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-112/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-114/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-116/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-117/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-157/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-159/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-159/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-159/moving_mean/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-159/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-118/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-119/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-139/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-120/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-141/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-121/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-122/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-123/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-124/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-125/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-125/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-125/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-125/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-126/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-127/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-128/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-129/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-130/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-131/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-132/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-133/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-134/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-135/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-136/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-137/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-138/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-140/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-140/gamma/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-140/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-140/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-142/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-143/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-144/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-145/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-146/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-147/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-148/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-149/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-150/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-151/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-152/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-153/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-154/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-155/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-156/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-158/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-160/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-161/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-162/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-163/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-164/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-165/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-166/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-167/beta/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-167/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-167/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-167/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-168/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-169/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-170/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-172/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-174/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-175/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-176/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-177/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-178/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-179/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-180/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-181/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-182/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-183/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-208/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-211/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-184/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-214/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-214/bias/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-185/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-186/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-187/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-188/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-189/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-189/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-189/moving_mean/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-189/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-190/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-191/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-192/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-193/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-194/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-195/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-195/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-195/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-195/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-209/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-212/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-196/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-215/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-215/bias/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-197/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-198/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-199/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-200/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-201/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-202/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-203/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-204/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-205/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-206/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-207/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-210/kernel/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/beta/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/gamma/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/moving_mean/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-213/moving_variance/.ATTRIBUTES/VARIABLE_VALUE +layer_with_weights-216/kernel/.ATTRIBUTES/VARIABLE_VALUE 
+layer_with_weights-216/bias/.ATTRIBUTES/VARIABLE_VALUE \ No newline at end of file diff --git a/examples/pretrained_cnn/tutorial_models_resnet50.py b/examples/model_zoo/pretrained_resnet50.py similarity index 70% rename from examples/pretrained_cnn/tutorial_models_resnet50.py rename to examples/model_zoo/pretrained_resnet50.py index b8f8b1c28..9c9761841 100644 --- a/examples/pretrained_cnn/tutorial_models_resnet50.py +++ b/examples/model_zoo/pretrained_resnet50.py @@ -6,18 +6,16 @@ """ import time - import numpy as np -import tensorflow as tf - import tensorlayer as tl -from tensorlayer.models.imagenet_classes import class_names +from examples.model_zoo.imagenet_classes import class_names +from examples.model_zoo.resnet import ResNet50 -# tf.logging.set_verbosity(tf.logging.DEBUG) tl.logging.set_verbosity(tl.logging.DEBUG) # get the whole model -resnet = tl.models.ResNet50(pretrained=True) +resnet = ResNet50(pretrained=True) +resnet.set_eval() img1 = tl.vis.read_image('data/tiger.jpeg') img1 = tl.prepro.imresize(img1, (224, 224))[:, :, ::-1] @@ -26,8 +24,8 @@ img1 = img1.astype(np.float32)[np.newaxis, ...] start_time = time.time() -output = resnet(img1, is_train=False) -prob = tf.nn.softmax(output)[0].numpy() +output = resnet(img1) +prob = tl.ops.softmax(output)[0].numpy() print(" End time : %.5ss" % (time.time() - start_time)) preds = (np.argsort(prob)[::-1])[0:5] for p in preds: diff --git a/examples/pretrained_cnn/tutorial_models_vgg16.py b/examples/model_zoo/pretrained_vgg16.py similarity index 77% rename from examples/pretrained_cnn/tutorial_models_vgg16.py rename to examples/model_zoo/pretrained_vgg16.py index 7d224c235..9bf4264ed 100644 --- a/examples/pretrained_cnn/tutorial_models_vgg16.py +++ b/examples/model_zoo/pretrained_vgg16.py @@ -8,18 +8,20 @@ import tensorflow as tf import tensorlayer as tl -from tensorlayer.models.imagenet_classes import class_names +from examples.model_zoo.imagenet_classes import class_names +from examples.model_zoo.vgg import vgg16 tl.logging.set_verbosity(tl.logging.DEBUG) # get the whole model -vgg = tl.models.vgg16(pretrained=True) +vgg = vgg16(pretrained=True) +vgg.set_eval() img = tl.vis.read_image('data/tiger.jpeg') img = tl.prepro.imresize(img, (224, 224)).astype(np.float32) / 255 start_time = time.time() -output = vgg(img, is_train=False) +output = vgg(img) probs = tf.nn.softmax(output)[0].numpy() print(" End time : %.5ss" % (time.time() - start_time)) preds = (np.argsort(probs)[::-1])[0:5] diff --git a/examples/model_zoo/pretrained_yolov4.py b/examples/model_zoo/pretrained_yolov4.py new file mode 100644 index 000000000..93c7ddc52 --- /dev/null +++ b/examples/model_zoo/pretrained_yolov4.py @@ -0,0 +1,31 @@ +#! 
/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import numpy as np
+import cv2
+from PIL import Image
+from examples.model_zoo.common import yolo4_input_processing, yolo4_output_processing, \
+    result_to_json, read_class_names, draw_boxes_and_labels_to_image_with_json
+from examples.model_zoo.yolo import YOLOv4
+import tensorlayer as tl
+
+tl.logging.set_verbosity(tl.logging.DEBUG)
+
+INPUT_SIZE = 416
+image_path = './data/kite.jpg'
+
+class_names = read_class_names('./model/coco.names')
+original_image = cv2.imread(image_path)
+image = cv2.cvtColor(np.array(original_image), cv2.COLOR_BGR2RGB)
+
+model = YOLOv4(NUM_CLASS=80, pretrained=True)
+model.set_eval()
+
+batch_data = yolo4_input_processing(original_image)
+feature_maps = model(batch_data)
+pred_bbox = yolo4_output_processing(feature_maps)
+json_result = result_to_json(image, pred_bbox)
+
+image = draw_boxes_and_labels_to_image_with_json(image, json_result, class_names)
+image = Image.fromarray(image.astype(np.uint8))
+image.show()
\ No newline at end of file
diff --git a/examples/model_zoo/resnet.py b/examples/model_zoo/resnet.py
new file mode 100644
index 000000000..2d134fea1
--- /dev/null
+++ b/examples/model_zoo/resnet.py
@@ -0,0 +1,227 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+"""ResNet for ImageNet.
+
+# Reference:
+- [Deep Residual Learning for Image Recognition](
+    https://arxiv.org/abs/1512.03385) (CVPR 2016 Best Paper Award)
+
+"""
+
+import os
+
+import tensorlayer as tl
+
+from tensorlayer import logging
+from tensorlayer.files import (assign_weights, maybe_download_and_extract)
+from tensorlayer.layers import (BatchNorm, Conv2d, Dense, Elementwise, GlobalMeanPool2d, Input, MaxPool2d)
+from tensorlayer.layers import Module, SequentialLayer
+
+__all__ = [
+    'ResNet50',
+]
+
+block_names = ['2a', '2b', '2c', '3a', '3b', '3c', '3d', '4a', '4b', '4c', '4d', '4e', '4f', '5a', '5b', '5c'
+              ] + ['avg_pool', 'fc1000']
+block_filters = [[64, 64, 256], [128, 128, 512], [256, 256, 1024], [512, 512, 2048]]
+in_channels_conv = [64, 256, 512, 1024]
+in_channels_identity = [256, 512, 1024, 2048]
+henorm = tl.initializers.he_normal()
+
+class identity_block(Module):
+    """The identity block, which has no conv layer in the shortcut.
+    ``forward`` takes the output tensor of the previous layer as input.
+
+    Parameters
+    ----------
+    kernel_size : int
+        The kernel size of the middle conv layer in the main path.
+    n_filters : list of integers
+        The numbers of filters for the 3 conv layers in the main path.
+    stage : int
+        Current stage label.
+    block : str
+        Current block label.
+
+    Returns
+    -------
+    Output tensor of this block.
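+
+    Examples
+    --------
+    A minimal usage sketch (the input tensor ``x`` is assumed to exist and to
+    have ``n_filters[2]`` channels, e.g. 256 for stage 2, so that the residual
+    addition is shape-compatible):
+
+    >>> block = identity_block(3, [64, 64, 256], stage=2, block='b')
+    >>> y = block(x)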
+ + """ + def __init__(self, kernel_size, n_filters, stage, block): + super(identity_block, self).__init__() + filters1, filters2, filters3 = n_filters + _in_channels = in_channels_identity[stage-2] + conv_name_base = 'res' + str(stage) + block + '_branch' + bn_name_base = 'bn' + str(stage) + block + '_branch' + + self.conv1 = Conv2d(filters1, (1, 1), W_init=henorm, name=conv_name_base + '2a', in_channels=_in_channels) + self.bn1 = BatchNorm(name=bn_name_base + '2a', act='relu', num_features=filters1) + + ks = (kernel_size, kernel_size) + self.conv2 = Conv2d(filters2, ks, padding='SAME', W_init=henorm, name=conv_name_base + '2b', in_channels=filters1) + self.bn2 = BatchNorm(name=bn_name_base + '2b', act='relu', num_features=filters2) + + self.conv3 = Conv2d(filters3, (1, 1), W_init=henorm, name=conv_name_base + '2c', in_channels=filters2) + self.bn3 = BatchNorm(name=bn_name_base + '2c', num_features=filters3) + + self.add = Elementwise(tl.add, act='relu') + + def forward(self, inputs): + output = self.conv1(inputs) + output = self.bn1(output) + output = self.conv2(output) + output = self.bn2(output) + output = self.conv3(output) + output = self.bn3(output) + result = self.add([output, inputs]) + return result + + +class conv_block(Module): + def __init__(self, kernel_size, n_filters, stage, block, strides=(2, 2)): + super(conv_block, self).__init__() + filters1, filters2, filters3 = n_filters + _in_channels = in_channels_conv[stage-2] + conv_name_base = 'res' + str(stage) + block + '_branch' + bn_name_base = 'bn' + str(stage) + block + '_branch' + self.conv1 = Conv2d(filters1, (1, 1), strides=strides, W_init=henorm, name=conv_name_base + '2a', in_channels=_in_channels) + self.bn1 = BatchNorm(name=bn_name_base + '2a', act='relu', num_features=filters1) + + ks = (kernel_size, kernel_size) + self.conv2 = Conv2d(filters2, ks, padding='SAME', W_init=henorm, name=conv_name_base + '2b', in_channels=filters1) + self.bn2 = BatchNorm(name=bn_name_base + '2b', act='relu', num_features=filters2) + + self.conv3 = Conv2d(filters3, (1, 1), W_init=henorm, name=conv_name_base + '2c', in_channels=filters2) + self.bn3 = BatchNorm(name=bn_name_base + '2c', num_features=filters3) + + self.shortcut_conv = Conv2d(filters3, (1, 1), strides=strides, W_init=henorm, name=conv_name_base + '1', in_channels=_in_channels) + self.shortcut_bn = BatchNorm(name=bn_name_base + '1', num_features=filters3) + + self.add = Elementwise(tl.add, act='relu') + + def forward(self, inputs): + output = self.conv1(inputs) + output = self.bn1(output) + output = self.conv2(output) + output = self.bn2(output) + output = self.conv3(output) + output = self.bn3(output) + + shortcut = self.shortcut_conv(inputs) + shortcut = self.shortcut_bn(shortcut) + + result = self.add([output, shortcut]) + return result + + +class ResNet50_model(Module): + def __init__(self, end_with='fc1000', n_classes=1000): + super(ResNet50_model, self).__init__() + self.end_with = end_with + self.n_classes = n_classes + self.conv1 = Conv2d(64, (7, 7), in_channels=3, strides=(2, 2), padding='SAME', W_init=henorm, name='conv1') + self.bn_conv1 = BatchNorm(name='bn_conv1', act="relu", num_features=64) + self.max_pool1 = MaxPool2d((3, 3), strides=(2, 2), name='max_pool1') + self.res_layer = self.make_layer() + + def forward(self, inputs): + z = self.conv1(inputs) + z = self.bn_conv1(z) + z = self.max_pool1(z) + z = self.res_layer(z) + return z + + def make_layer(self): + layer_list = [] + for i, block_name in enumerate(block_names): + if len(block_name) == 2: + stage = 
int(block_name[0])
+                block = block_name[1]
+                if block == 'a':
+                    strides = (1, 1) if stage == 2 else (2, 2)
+                    layer_list.append(conv_block(3, block_filters[stage - 2], stage=stage, block=block, strides=strides))
+                else:
+                    layer_list.append(identity_block(3, block_filters[stage - 2], stage=stage, block=block))
+            elif block_name == 'avg_pool':
+                layer_list.append(GlobalMeanPool2d(name='avg_pool'))
+            elif block_name == 'fc1000':
+                layer_list.append(Dense(self.n_classes, name='fc1000', in_channels=2048))
+
+            if block_name == self.end_with:
+                break
+        return SequentialLayer(layer_list)
+
+
+def ResNet50(pretrained=False, end_with='fc1000', n_classes=1000):
+    """Pre-trained ResNet50 model. Input shape [?, 224, 224, 3].
+
+    To use the pretrained model, the input images should be in BGR format with the ImageNet mean [103.939, 116.779, 123.68] subtracted.
+
+    Parameters
+    ----------
+    pretrained : boolean
+        Whether to load pretrained weights. Default False.
+    end_with : str
+        The end point of the model: one of the block names ['2a', '2b', ..., '5c'], 'avg_pool' or 'fc1000'.
+        Default ``fc1000``, i.e. the whole model.
+    n_classes : int
+        Number of classes in the final prediction.
+
+    Examples
+    ---------
+    Classify ImageNet classes, see `pretrained_resnet50.py`
+    TODO Modify the usage example according to the model storage location
+    >>> # get the whole model with pretrained weights
+    >>> resnet = ResNet50(pretrained=True)
+    >>> # use for inferencing
+    >>> output = resnet(img1)
+    >>> prob = tl.ops.softmax(output)[0].numpy()
+
+    Extract the features before the fc layer
+    >>> resnet = ResNet50(pretrained=True, end_with='5c')
+    >>> output = resnet(img1)
+
+    Returns
+    -------
+    ResNet50 model.
+
+    """
+
+    network = ResNet50_model(end_with=end_with, n_classes=n_classes)
+
+    if pretrained:
+        restore_params(network)
+
+    return network
+
+
+def restore_params(network, path='models'):
+    logging.info("Restore pre-trained parameters")
+    maybe_download_and_extract(
+        'resnet50_weights_tf_dim_ordering_tf_kernels.h5',
+        path,
+        'https://github.com/fchollet/deep-learning-models/releases/download/v0.2/',
+    )
+    try:
+        import h5py
+    except ImportError:
+        raise ImportError('h5py is required to load the pretrained ResNet50 weights')
+
+    f = h5py.File(os.path.join(path, 'resnet50_weights_tf_dim_ordering_tf_kernels.h5'), 'r')
+
+    # TODO Update parameter loading
+    # for layer in network.all_layers:
+    #     if len(layer.all_weights) == 0:
+    #         continue
+    #     w_names = list(f[layer.name])
+    #     params = [f[layer.name][n][:] for n in w_names]
+    #     # if 'bn' in layer.name:
+    #     #     params = [x.reshape(1, 1, 1, -1) for x in params]
+    #     assign_weights(params, layer)
+    #     del params
+
+    f.close()
diff --git a/tensorlayer/models/vgg.py b/examples/model_zoo/vgg.py
similarity index 72%
rename from tensorlayer/models/vgg.py
rename to examples/model_zoo/vgg.py
index c57572e24..c53612902 100644
--- a/tensorlayer/models/vgg.py
+++ b/examples/model_zoo/vgg.py
@@ -30,13 +30,12 @@
 import os
 
 import numpy as np
-import tensorflow as tf
 
 import tensorlayer as tl
 
 from tensorlayer import logging
 from tensorlayer.files import assign_weights, maybe_download_and_extract
-from tensorlayer.layers import (BatchNorm, Conv2d, Dense, Flatten, Input, Lambda, LayerList, MaxPool2d)
-from tensorlayer.models import Model
+from tensorlayer.layers import (BatchNorm, Conv2d, Dense, Flatten, Input, SequentialLayer, MaxPool2d)
+from tensorlayer.layers import Module
 
 __all__ = [
     'VGG',
@@ -88,14 +87,14 @@
 model_saved_name = {'vgg16': 'vgg16_weights.npz', 'vgg19': 'vgg19.npy'}
 
 
-class VGG(Model):
+class
VGG(Module): def __init__(self, layer_type, batch_norm=False, end_with='outputs', name=None): super(VGG, self).__init__(name=name) self.end_with = end_with config = cfg[mapped_cfg[layer_type]] - self.layers = make_layers(config, batch_norm, end_with) + self.make_layer = make_layers(config, batch_norm, end_with) def forward(self, inputs): """ @@ -104,8 +103,7 @@ def forward(self, inputs): """ inputs = inputs * 255 - np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape([1, 1, 1, 3]) - - out = self.layers.forward(inputs) + out = self.make_layer(inputs) return out @@ -126,12 +124,12 @@ def make_layers(config, batch_norm=False, end_with='outputs'): in_channels = layer_group[idx - 1] layer_list.append( Conv2d( - n_filter=n_filter, filter_size=(3, 3), strides=(1, 1), act=tf.nn.relu, padding='SAME', + n_filter=n_filter, filter_size=(3, 3), strides=(1, 1), act=tl.ReLU, padding='SAME', in_channels=in_channels, name=layer_name ) ) if batch_norm: - layer_list.append(BatchNorm()) + layer_list.append(BatchNorm(num_features=n_filter)) if layer_name == end_with: is_end = True break @@ -144,23 +142,22 @@ def make_layers(config, batch_norm=False, end_with='outputs'): elif layer_group == 'F': layer_list.append(Flatten(name='flatten')) elif layer_group == 'fc1': - layer_list.append(Dense(n_units=4096, act=tf.nn.relu, in_channels=512 * 7 * 7, name=layer_name)) + layer_list.append(Dense(n_units=4096, act=tl.ReLU, in_channels=512 * 7 * 7, name=layer_name)) elif layer_group == 'fc2': - layer_list.append(Dense(n_units=4096, act=tf.nn.relu, in_channels=4096, name=layer_name)) + layer_list.append(Dense(n_units=4096, act=tl.ReLU, in_channels=4096, name=layer_name)) if layer_name == end_with: is_end = True if is_end: break - return LayerList(layer_list) - + return SequentialLayer(layer_list) def restore_model(model, layer_type): logging.info("Restore pre-trained weights") # download weights - maybe_download_and_extract(model_saved_name[layer_type], 'models', model_urls[layer_type]) + maybe_download_and_extract(model_saved_name[layer_type], 'model', model_urls[layer_type]) weights = [] if layer_type == 'vgg16': - npz = np.load(os.path.join('models', model_saved_name[layer_type]), allow_pickle=True) + npz = np.load(os.path.join('model', model_saved_name[layer_type]), allow_pickle=True) # get weight list for val in sorted(npz.items()): logging.info(" Loading weights %s in %s" % (str(val[1].shape), val[0])) @@ -168,7 +165,7 @@ def restore_model(model, layer_type): if len(model.all_weights) == len(weights): break elif layer_type == 'vgg19': - npz = np.load(os.path.join('models', model_saved_name[layer_type]), allow_pickle=True, encoding='latin1').item() + npz = np.load(os.path.join('model', model_saved_name[layer_type]), allow_pickle=True, encoding='latin1').item() # get weight list for val in sorted(npz.items()): logging.info(" Loading %s in %s" % (str(val[1][0].shape), val[0])) @@ -180,22 +177,6 @@ def restore_model(model, layer_type): assign_weights(weights, model) del weights - -def VGG_static(layer_type, batch_norm=False, end_with='outputs', name=None): - ni = Input([None, 224, 224, 3]) - n = Lambda( - lambda x: x * 255 - np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape([1, 1, 1, 3]), name='scale' - )(ni) - - config = cfg[mapped_cfg[layer_type]] - layers = make_layers(config, batch_norm, end_with) - - nn = layers(n) - - M = Model(inputs=ni, outputs=nn, name=name) - return M - - def vgg16(pretrained=False, end_with='outputs', mode='dynamic', name=None): """Pre-trained VGG16 model. 
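The two pretrained entry points above differ in preprocessing: `VGG.forward` rescales and mean-subtracts internally (it expects RGB images in [0, 1]), while `ResNet50` expects BGR inputs with the ImageNet mean already subtracted, as its docstring states. A minimal inference sketch under those assumptions, using the `examples.model_zoo` layout and the sample image paths from this diff:

```python
import numpy as np
import tensorlayer as tl
from examples.model_zoo.vgg import vgg16
from examples.model_zoo.resnet import ResNet50

img = tl.vis.read_image('data/tiger.jpeg')

# VGG16: RGB float32 in [0, 1]; scaling and mean subtraction happen in VGG.forward
vgg = vgg16(pretrained=True)
vgg.set_eval()
x = tl.prepro.imresize(img, (224, 224)).astype(np.float32) / 255
vgg_probs = tl.ops.softmax(vgg(x[np.newaxis, ...]))[0].numpy()

# ResNet50: BGR float32 with the ImageNet mean subtracted beforehand
resnet = ResNet50(pretrained=True)
resnet.set_eval()
y = tl.prepro.imresize(img, (224, 224))[:, :, ::-1].astype(np.float32)
y -= np.array([103.939, 116.779, 123.68], dtype=np.float32)
res_probs = tl.ops.softmax(resnet(y[np.newaxis, ...]))[0].numpy()
```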
@@ -214,43 +195,22 @@ def vgg16(pretrained=False, end_with='outputs', mode='dynamic', name=None): --------- Classify ImageNet classes with VGG16, see `tutorial_models_vgg.py `__ With TensorLayer + TODO Modify the usage example according to the model storage location >>> # get the whole model, without pre-trained VGG parameters - >>> vgg = tl.models.vgg16() + >>> vgg = vgg16() >>> # get the whole model, restore pre-trained VGG parameters - >>> vgg = tl.models.vgg16(pretrained=True) + >>> vgg = vgg16(pretrained=True) >>> # use for inferencing - >>> output = vgg(img, is_train=False) - >>> probs = tf.nn.softmax(output)[0].numpy() - - Extract features with VGG16 and Train a classifier with 100 classes - - >>> # get VGG without the last layer - >>> cnn = tl.models.vgg16(end_with='fc2_relu', mode='static').as_layer() - >>> # add one more layer and build a new model - >>> ni = Input([None, 224, 224, 3], name="inputs") - >>> nn = cnn(ni) - >>> nn = tl.layers.Dense(n_units=100, name='out')(nn) - >>> model = tl.models.Model(inputs=ni, outputs=nn) - >>> # train your own classifier (only update the last layer) - >>> train_params = model.get_layer('out').trainable_weights - - Reuse model - - >>> # in dynamic model, we can directly use the same model - >>> # in static model - >>> vgg_layer = tl.models.vgg16().as_layer() - >>> ni_1 = tl.layers.Input([None, 224, 244, 3]) - >>> ni_2 = tl.layers.Input([None, 224, 244, 3]) - >>> a_1 = vgg_layer(ni_1) - >>> a_2 = vgg_layer(ni_2) - >>> M = Model(inputs=[ni_1, ni_2], outputs=[a_1, a_2]) + >>> output = vgg(img) + >>> probs = tl.ops.softmax(output)[0].numpy() """ + if mode == 'dynamic': model = VGG(layer_type='vgg16', batch_norm=False, end_with=end_with, name=name) elif mode == 'static': - model = VGG_static(layer_type='vgg16', batch_norm=False, end_with=end_with, name=name) + raise NotImplementedError else: raise Exception("No such mode %s" % mode) if pretrained: @@ -278,41 +238,18 @@ def vgg19(pretrained=False, end_with='outputs', mode='dynamic', name=None): With TensorLayer >>> # get the whole model, without pre-trained VGG parameters - >>> vgg = tl.models.vgg19() + >>> vgg = vgg19() >>> # get the whole model, restore pre-trained VGG parameters - >>> vgg = tl.models.vgg19(pretrained=True) + >>> vgg = vgg19(pretrained=True) >>> # use for inferencing - >>> output = vgg(img, is_train=False) - >>> probs = tf.nn.softmax(output)[0].numpy() - - Extract features with VGG19 and Train a classifier with 100 classes - - >>> # get VGG without the last layer - >>> cnn = tl.models.vgg19(end_with='fc2_relu', mode='static').as_layer() - >>> # add one more layer and build a new model - >>> ni = Input([None, 224, 224, 3], name="inputs") - >>> nn = cnn(ni) - >>> nn = tl.layers.Dense(n_units=100, name='out')(nn) - >>> model = tl.models.Model(inputs=ni, outputs=nn) - >>> # train your own classifier (only update the last layer) - >>> train_params = model.get_layer('out').trainable_weights - - Reuse model - - >>> # in dynamic model, we can directly use the same model - >>> # in static model - >>> vgg_layer = tl.models.vgg19().as_layer() - >>> ni_1 = tl.layers.Input([None, 224, 244, 3]) - >>> ni_2 = tl.layers.Input([None, 224, 244, 3]) - >>> a_1 = vgg_layer(ni_1) - >>> a_2 = vgg_layer(ni_2) - >>> M = Model(inputs=[ni_1, ni_2], outputs=[a_1, a_2]) + >>> output = vgg(img) + >>> probs = tl.ops.softmax(output)[0].numpy() """ if mode == 'dynamic': model = VGG(layer_type='vgg19', batch_norm=False, end_with=end_with, name=name) elif mode == 'static': - model = VGG_static(layer_type='vgg19', 
batch_norm=False, end_with=end_with, name=name)
+        raise NotImplementedError
     else:
         raise Exception("No such mode %s" % mode)
     if pretrained:
diff --git a/examples/model_zoo/yolo.py b/examples/model_zoo/yolo.py
new file mode 100644
index 000000000..d7209dbb7
--- /dev/null
+++ b/examples/model_zoo/yolo.py
@@ -0,0 +1,380 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+"""YOLOv4 for MS-COCO.
+
+# Reference:
+- [tensorflow-yolov4-tflite](
+    https://github.com/hunglc007/tensorflow-yolov4-tflite)
+
+"""
+
+import numpy as np
+import tensorlayer as tl
+from tensorlayer.layers.activation import Mish
+from tensorlayer.layers import Conv2d, MaxPool2d, BatchNorm2d, ZeroPad2d, UpSampling2d, Concat, Elementwise
+from tensorlayer.layers import Module, SequentialLayer
+from tensorlayer import logging
+
+__all__ = [
+    'YOLOv4'
+]
+
+INPUT_SIZE = 416
+weights_url = {'link': 'https://pan.baidu.com/s/1MC1dmEwpxsdgHO1MZ8fYRQ', 'password': 'idsz'}
+
+
+class Convolutional(Module):
+    """A Conv2d layer optionally followed by BatchNorm2d.
+    Because this block only stacks existing layers, it needs no build step,
+    so self._built is set to True.
+    """
+    def __init__(self, filters_shape, downsample=False, activate=True, bn=True, activate_type='leaky', name=None):
+        super(Convolutional, self).__init__()
+        self.act = activate
+        self.act_type = activate_type
+        self.downsample = downsample
+        self.bn = bn
+        self._built = True
+        if downsample:
+            # downsampling uses explicit (top, left) zero-padding plus a stride-2 VALID conv
+            padding = 'VALID'
+            strides = 2
+        else:
+            padding = 'SAME'
+            strides = 1
+
+        if bn:
+            b_init = None
+        else:
+            b_init = tl.initializers.constant(value=0.0)
+
+        self.zeropad = ZeroPad2d(((1, 0), (1, 0)))
+        self.conv = Conv2d(
+            n_filter=filters_shape[-1], in_channels=filters_shape[2], filter_size=(filters_shape[0], filters_shape[1]),
+            strides=(strides, strides), padding=padding, b_init=b_init, name=name
+        )
+
+        if bn:
+            if activate:
+                if activate_type == 'leaky':
+                    self.batchnorm2d = BatchNorm2d(act='leaky_relu0.1', num_features=filters_shape[-1])
+                elif activate_type == 'mish':
+                    self.batchnorm2d = BatchNorm2d(act=Mish, num_features=filters_shape[-1])
+            else:
+                self.batchnorm2d = BatchNorm2d(act=None, num_features=filters_shape[-1])
+
+    def forward(self, input):
+        if self.downsample:
+            input = self.zeropad(input)
+
+        output = self.conv(input)
+
+        if self.bn:
+            output = self.batchnorm2d(output)
+        return output
+
+class residual_block(Module):
+    def __init__(self, input_channel, filter_num1, filter_num2, activate_type='leaky'):
+        super(residual_block, self).__init__()
+        self.conv1 = Convolutional(filters_shape=(1, 1, input_channel, filter_num1), activate_type=activate_type)
+        self.conv2 = Convolutional(filters_shape=(3, 3, filter_num1, filter_num2), activate_type=activate_type)
+        self.add = Elementwise(tl.add)
+
+    def forward(self, inputs):
+        output = self.conv1(inputs)
+        output = self.conv2(output)
+        output = self.add([inputs, output])
+        return output
+
+def residual_block_num(num, input_channel, filter_num1, filter_num2, activate_type='leaky'):
+    residual_list = []
+    for i in range(num):
+        residual_list.append(residual_block(input_channel, filter_num1, filter_num2, activate_type=activate_type))
+    return SequentialLayer(residual_list)
+
+class cspdarknet53(Module):
+    def __init__(self):
+        super(cspdarknet53, self).__init__()
+        self._built = True
+        self.conv1_1 = Convolutional((3, 3, 3, 32), activate_type='mish')
+        self.conv1_2 = Convolutional((3, 3, 32, 64), downsample=True, activate_type='mish')
+        self.conv1_3 = Convolutional((1, 1, 64, 64), activate_type='mish', name='conv_rote_block_1')
+        self.conv1_4 = Convolutional((1, 
1, 64, 64), activate_type='mish') + self.residual_1 = residual_block_num(1, 64, 32, 64, activate_type="mish") + + self.conv2_1 = Convolutional((1, 1, 64, 64), activate_type='mish') + self.concat = Concat() + self.conv2_2 = Convolutional((1, 1, 128, 64), activate_type='mish') + self.conv2_3 = Convolutional((3, 3, 64, 128), downsample=True, activate_type='mish') + self.conv2_4 = Convolutional((1, 1, 128, 64), activate_type='mish', name='conv_rote_block_2') + self.conv2_5 = Convolutional((1, 1, 128, 64), activate_type='mish') + self.residual_2 = residual_block_num(2, 64, 64, 64, activate_type='mish') + + self.conv3_1 = Convolutional((1, 1, 64, 64), activate_type='mish') + self.conv3_2 = Convolutional((1, 1, 128, 128), activate_type='mish') + self.conv3_3 = Convolutional((3, 3, 128, 256), downsample=True, activate_type='mish') + self.conv3_4 = Convolutional((1, 1, 256, 128), activate_type='mish', name='conv_rote_block_3') + self.conv3_5 = Convolutional((1, 1, 256, 128), activate_type='mish') + self.residual_3 = residual_block_num(8, 128, 128, 128, activate_type="mish") + + self.conv4_1 = Convolutional((1, 1, 128, 128), activate_type='mish') + self.conv4_2 = Convolutional((1, 1, 256, 256), activate_type='mish') + self.conv4_3 = Convolutional((3, 3, 256, 512), downsample=True, activate_type='mish') + self.conv4_4 = Convolutional((1, 1, 512, 256), activate_type='mish', name='conv_rote_block_4') + self.conv4_5 = Convolutional((1, 1, 512, 256), activate_type='mish') + self.residual_4 = residual_block_num(8, 256, 256, 256, activate_type="mish") + + self.conv5_1 = Convolutional((1, 1, 256, 256), activate_type='mish') + self.conv5_2 = Convolutional((1, 1, 512, 512), activate_type='mish') + self.conv5_3 = Convolutional((3, 3, 512, 1024), downsample=True, activate_type='mish') + self.conv5_4 = Convolutional((1, 1, 1024, 512), activate_type='mish', name='conv_rote_block_5') + self.conv5_5 = Convolutional((1, 1, 1024, 512), activate_type='mish') + self.residual_5 = residual_block_num(4, 512, 512, 512, activate_type="mish") + + + self.conv6_1 = Convolutional((1, 1, 512, 512), activate_type='mish') + self.conv6_2 = Convolutional((1, 1, 1024, 1024), activate_type='mish') + self.conv6_3 = Convolutional((1, 1, 1024, 512)) + self.conv6_4 = Convolutional((3, 3, 512, 1024)) + self.conv6_5 = Convolutional((1, 1, 1024, 512)) + + self.maxpool1 = MaxPool2d(filter_size=(13, 13), strides=(1, 1)) + self.maxpool2 = MaxPool2d(filter_size=(9, 9), strides=(1, 1)) + self.maxpool3 = MaxPool2d(filter_size=(5, 5), strides=(1, 1)) + + self.conv7_1 = Convolutional((1, 1, 2048, 512)) + self.conv7_2 = Convolutional((3, 3, 512, 1024)) + self.conv7_3 = Convolutional((1, 1, 1024, 512)) + + def forward(self, input_data): + input_data = self.conv1_1(input_data) + input_data = self.conv1_2(input_data) + route = input_data + route = self.conv1_3(route) + input_data = self.conv1_4(input_data) + input_data = self.residual_1(input_data) + + input_data = self.conv2_1(input_data) + input_data = self.concat([input_data, route]) + input_data = self.conv2_2(input_data) + input_data = self.conv2_3(input_data) + route = input_data + route = self.conv2_4(route) + input_data = self.conv2_5(input_data) + input_data = self.residual_2(input_data) + + input_data = self.conv3_1(input_data) + input_data = self.concat([input_data, route]) + input_data = self.conv3_2(input_data) + input_data = self.conv3_3(input_data) + route = input_data + route = self.conv3_4(route) + input_data = self.conv3_5(input_data) + input_data = self.residual_3(input_data) + + 
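+        # Stage 4 repeats the CSP pattern above: fuse the stage-3 route, save
+        # route_1 (pre-downsample, 256 channels) for the YOLOv4 neck, downsample
+        # to 512 channels, then apply 8 residual units.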
input_data = self.conv4_1(input_data) + input_data = self.concat([input_data, route]) + input_data = self.conv4_2(input_data) + route_1 = input_data + input_data = self.conv4_3(input_data) + route = input_data + route = self.conv4_4(route) + input_data = self.conv4_5(input_data) + input_data = self.residual_4(input_data) + + input_data = self.conv5_1(input_data) + input_data = self.concat([input_data, route]) + input_data = self.conv5_2(input_data) + route_2 = input_data + input_data = self.conv5_3(input_data) + route = input_data + route = self.conv5_4(route) + input_data = self.conv5_5(input_data) + input_data = self.residual_5(input_data) + + input_data = self.conv6_1(input_data) + input_data = self.concat([input_data, route]) + + input_data = self.conv6_2(input_data) + input_data = self.conv6_3(input_data) + input_data = self.conv6_4(input_data) + input_data = self.conv6_5(input_data) + + maxpool1 = self.maxpool1(input_data) + maxpool2 = self.maxpool2(input_data) + maxpool3 = self.maxpool3(input_data) + input_data = self.concat([maxpool1, maxpool2, maxpool3, input_data]) + + input_data = self.conv7_1(input_data) + input_data = self.conv7_2(input_data) + input_data = self.conv7_3(input_data) + + return route_1, route_2, input_data + + +class YOLOv4_model(Module): + def __init__(self, NUM_CLASS): + super(YOLOv4_model, self).__init__() + self.cspdarnnet = cspdarknet53() + + self.conv1_1 = Convolutional((1, 1, 512, 256)) + self.upsamle = UpSampling2d(scale=2) + self.conv1_2 = Convolutional((1, 1, 512, 256), name='conv_yolo_1') + self.concat = Concat() + + self.conv2_1 = Convolutional((1, 1, 512, 256)) + self.conv2_2 = Convolutional((3, 3, 256, 512)) + self.conv2_3 = Convolutional((1, 1, 512, 256)) + self.conv2_4 = Convolutional((3, 3, 256, 512)) + self.conv2_5 = Convolutional((1, 1, 512, 256)) + + self.conv3_1 = Convolutional((1, 1, 256, 128)) + self.conv3_2 = Convolutional((1, 1, 256, 128), name='conv_yolo_2') + + self.conv4_1 = Convolutional((1, 1, 256, 128)) + self.conv4_2 = Convolutional((3, 3, 128, 256)) + self.conv4_3 = Convolutional((1, 1, 256, 128)) + self.conv4_4 = Convolutional((3, 3, 128, 256)) + self.conv4_5 = Convolutional((1, 1, 256, 128)) + + self.conv5_1 = Convolutional((3, 3, 128, 256), name='conv_route_1') + self.conv5_2 = Convolutional((1, 1, 256, 3 * (NUM_CLASS + 5)), activate=False, bn=False) + + self.conv6_1 = Convolutional((3, 3, 128, 256), downsample=True, name='conv_route_2') + self.conv6_2 = Convolutional((1, 1, 512, 256)) + self.conv6_3 = Convolutional((3, 3, 256, 512)) + self.conv6_4 = Convolutional((1, 1, 512, 256)) + self.conv6_5 = Convolutional((3, 3, 256, 512)) + self.conv6_6 = Convolutional((1, 1, 512, 256)) + + self.conv7_1 = Convolutional((3, 3, 256, 512), name='conv_route_3') + self.conv7_2 = Convolutional((1, 1, 512, 3 * (NUM_CLASS + 5)), activate=False, bn=False) + self.conv7_3 = Convolutional((3, 3, 256, 512), downsample=True, name='conv_route_4') + + self.conv8_1 = Convolutional((1, 1, 1024, 512)) + self.conv8_2 = Convolutional((3, 3, 512, 1024)) + self.conv8_3 = Convolutional((1, 1, 1024, 512)) + self.conv8_4 = Convolutional((3, 3, 512, 1024)) + self.conv8_5 = Convolutional((1, 1, 1024, 512)) + + self.conv9_1 = Convolutional((3, 3, 512, 1024)) + self.conv9_2 = Convolutional((1, 1, 1024, 3 * (NUM_CLASS + 5)), activate=False, bn=False) + + def forward(self, inputs): + route_1, route_2, conv = self.cspdarnnet(inputs) + + route = conv + conv = self.conv1_1(conv) + conv = self.upsamle(conv) + route_2 = self.conv1_2(route_2) + conv = self.concat([route_2, 
conv])
+
+        conv = self.conv2_1(conv)
+        conv = self.conv2_2(conv)
+        conv = self.conv2_3(conv)
+        conv = self.conv2_4(conv)
+        conv = self.conv2_5(conv)
+
+        route_2 = conv
+        conv = self.conv3_1(conv)
+        conv = self.upsamle(conv)
+        route_1 = self.conv3_2(route_1)
+        conv = self.concat([route_1, conv])
+
+        conv = self.conv4_1(conv)
+        conv = self.conv4_2(conv)
+        conv = self.conv4_3(conv)
+        conv = self.conv4_4(conv)
+        conv = self.conv4_5(conv)
+
+        route_1 = conv
+        conv = self.conv5_1(conv)
+        conv_sbbox = self.conv5_2(conv)
+
+        conv = self.conv6_1(route_1)
+        conv = self.concat([conv, route_2])
+
+        conv = self.conv6_2(conv)
+        conv = self.conv6_3(conv)
+        conv = self.conv6_4(conv)
+        conv = self.conv6_5(conv)
+        conv = self.conv6_6(conv)
+
+        route_2 = conv
+        conv = self.conv7_1(conv)
+        conv_mbbox = self.conv7_2(conv)
+        conv = self.conv7_3(route_2)
+        conv = self.concat([conv, route])
+
+        conv = self.conv8_1(conv)
+        conv = self.conv8_2(conv)
+        conv = self.conv8_3(conv)
+        conv = self.conv8_4(conv)
+        conv = self.conv8_5(conv)
+
+        conv = self.conv9_1(conv)
+        conv_lbbox = self.conv9_2(conv)
+
+        return conv_sbbox, conv_mbbox, conv_lbbox
+
+def YOLOv4(NUM_CLASS, pretrained=False):
+    """Pre-trained YOLOv4 model.
+
+    Parameters
+    ------------
+    NUM_CLASS : int
+        Number of classes in final prediction.
+    pretrained : boolean
+        Whether to load pretrained weights. Default False.
+
+    Examples
+    ---------
+    Object Detection with YOLOv4, see `computer_vision.py
+    `__
+    With TensorLayer
+
+    >>> # get the whole model, without pre-trained YOLOv4 parameters
+    >>> yolov4 = YOLOv4(NUM_CLASS=80, pretrained=False)
+    >>> # get the whole model, restore pre-trained YOLOv4 parameters
+    >>> yolov4 = YOLOv4(NUM_CLASS=80, pretrained=True)
+    >>> # use for inferencing
+    >>> output = yolov4(img)
+
+    """
+
+    network = YOLOv4_model(NUM_CLASS=NUM_CLASS)
+
+    if pretrained:
+        restore_params(network, model_path='model/yolov4_model.npz')
+
+    return network
+
+
+def restore_params(network, model_path='models.npz'):
+    logging.info("Restore pre-trained weights")
+
+    try:
+        npz = np.load(model_path, allow_pickle=True)
+    except IOError:
+        print("Download the model file and place it in the /model directory")
+        print("Weights download: ", weights_url['link'], "password:", weights_url['password'])
+        raise
+
+    # the txt config lists, in order, the npz key for each entry of network.all_weights
+    txt_path = 'model/yolov4_weights3_config.txt'
+    with open(txt_path, "r") as f:
+        line = f.readlines()
+    for i in range(len(line)):
+        network.all_weights[i].assign(npz[line[i].strip()])
+        logging.info("  Loading weights %s in %s" % (network.all_weights[i].shape, network.all_weights[i].name))
+
+def tl2_weights_to_tl3_weights(weights_2_path='model/weights_2.txt', weights_3_path='model/weights_3.txt', txt_path='model/yolov4_weights_config.txt'):
+    # build a name mapping from the TL2 weight list to the config list,
+    # then print the mapped config name for each TL3 weight
+    f1 = open(weights_2_path, "r")
+    f2 = open(weights_3_path, "r")
+    f3 = open(txt_path, "r")
+    line1 = f1.readlines()
+    line2 = f2.readlines()
+    line3 = f3.readlines()
+    _dicts = {}
+    for i in range(len(line1)):
+        _dicts[line1[i].strip()] = line3[i].strip()
+    for j in range(len(line2)):
+        print(_dicts[line2[j].strip()])
diff --git a/examples/pretrained_cnn/README.md b/examples/pretrained_cnn/README.md
deleted file mode 100644
index 8c4a96ee2..000000000
--- a/examples/pretrained_cnn/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-## Please read the docs for using [Pre-trained Models](https://tensorlayer.readthedocs.io/en/latest/user/get_start_advance.html#pre-trained-cnn)
-
diff --git a/examples/pretrained_cnn/data/__init__.py 
b/examples/pretrained_cnn/data/__init__.py deleted file mode 100644 index 83d5401c3..000000000 --- a/examples/pretrained_cnn/data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from __future__ import absolute_import - -from . import * diff --git a/examples/pretrained_cnn/data/imagenet_class_index.json b/examples/pretrained_cnn/data/imagenet_class_index.json deleted file mode 100644 index 5fe0dfefc..000000000 --- a/examples/pretrained_cnn/data/imagenet_class_index.json +++ /dev/null @@ -1 +0,0 @@ -{"0": ["n01440764", "tench"], "1": ["n01443537", "goldfish"], "2": ["n01484850", "great_white_shark"], "3": ["n01491361", "tiger_shark"], "4": ["n01494475", "hammerhead"], "5": ["n01496331", "electric_ray"], "6": ["n01498041", "stingray"], "7": ["n01514668", "cock"], "8": ["n01514859", "hen"], "9": ["n01518878", "ostrich"], "10": ["n01530575", "brambling"], "11": ["n01531178", "goldfinch"], "12": ["n01532829", "house_finch"], "13": ["n01534433", "junco"], "14": ["n01537544", "indigo_bunting"], "15": ["n01558993", "robin"], "16": ["n01560419", "bulbul"], "17": ["n01580077", "jay"], "18": ["n01582220", "magpie"], "19": ["n01592084", "chickadee"], "20": ["n01601694", "water_ouzel"], "21": ["n01608432", "kite"], "22": ["n01614925", "bald_eagle"], "23": ["n01616318", "vulture"], "24": ["n01622779", "great_grey_owl"], "25": ["n01629819", "European_fire_salamander"], "26": ["n01630670", "common_newt"], "27": ["n01631663", "eft"], "28": ["n01632458", "spotted_salamander"], "29": ["n01632777", "axolotl"], "30": ["n01641577", "bullfrog"], "31": ["n01644373", "tree_frog"], "32": ["n01644900", "tailed_frog"], "33": ["n01664065", "loggerhead"], "34": ["n01665541", "leatherback_turtle"], "35": ["n01667114", "mud_turtle"], "36": ["n01667778", "terrapin"], "37": ["n01669191", "box_turtle"], "38": ["n01675722", "banded_gecko"], "39": ["n01677366", "common_iguana"], "40": ["n01682714", "American_chameleon"], "41": ["n01685808", "whiptail"], "42": ["n01687978", "agama"], "43": ["n01688243", "frilled_lizard"], "44": ["n01689811", "alligator_lizard"], "45": ["n01692333", "Gila_monster"], "46": ["n01693334", "green_lizard"], "47": ["n01694178", "African_chameleon"], "48": ["n01695060", "Komodo_dragon"], "49": ["n01697457", "African_crocodile"], "50": ["n01698640", "American_alligator"], "51": ["n01704323", "triceratops"], "52": ["n01728572", "thunder_snake"], "53": ["n01728920", "ringneck_snake"], "54": ["n01729322", "hognose_snake"], "55": ["n01729977", "green_snake"], "56": ["n01734418", "king_snake"], "57": ["n01735189", "garter_snake"], "58": ["n01737021", "water_snake"], "59": ["n01739381", "vine_snake"], "60": ["n01740131", "night_snake"], "61": ["n01742172", "boa_constrictor"], "62": ["n01744401", "rock_python"], "63": ["n01748264", "Indian_cobra"], "64": ["n01749939", "green_mamba"], "65": ["n01751748", "sea_snake"], "66": ["n01753488", "horned_viper"], "67": ["n01755581", "diamondback"], "68": ["n01756291", "sidewinder"], "69": ["n01768244", "trilobite"], "70": ["n01770081", "harvestman"], "71": ["n01770393", "scorpion"], "72": ["n01773157", "black_and_gold_garden_spider"], "73": ["n01773549", "barn_spider"], "74": ["n01773797", "garden_spider"], "75": ["n01774384", "black_widow"], "76": ["n01774750", "tarantula"], "77": ["n01775062", "wolf_spider"], "78": ["n01776313", "tick"], "79": ["n01784675", "centipede"], "80": ["n01795545", "black_grouse"], "81": ["n01796340", "ptarmigan"], "82": ["n01797886", "ruffed_grouse"], "83": ["n01798484", "prairie_chicken"], "84": ["n01806143", "peacock"], "85": ["n01806567", 
"quail"], "86": ["n01807496", "partridge"], "87": ["n01817953", "African_grey"], "88": ["n01818515", "macaw"], "89": ["n01819313", "sulphur-crested_cockatoo"], "90": ["n01820546", "lorikeet"], "91": ["n01824575", "coucal"], "92": ["n01828970", "bee_eater"], "93": ["n01829413", "hornbill"], "94": ["n01833805", "hummingbird"], "95": ["n01843065", "jacamar"], "96": ["n01843383", "toucan"], "97": ["n01847000", "drake"], "98": ["n01855032", "red-breasted_merganser"], "99": ["n01855672", "goose"], "100": ["n01860187", "black_swan"], "101": ["n01871265", "tusker"], "102": ["n01872401", "echidna"], "103": ["n01873310", "platypus"], "104": ["n01877812", "wallaby"], "105": ["n01882714", "koala"], "106": ["n01883070", "wombat"], "107": ["n01910747", "jellyfish"], "108": ["n01914609", "sea_anemone"], "109": ["n01917289", "brain_coral"], "110": ["n01924916", "flatworm"], "111": ["n01930112", "nematode"], "112": ["n01943899", "conch"], "113": ["n01944390", "snail"], "114": ["n01945685", "slug"], "115": ["n01950731", "sea_slug"], "116": ["n01955084", "chiton"], "117": ["n01968897", "chambered_nautilus"], "118": ["n01978287", "Dungeness_crab"], "119": ["n01978455", "rock_crab"], "120": ["n01980166", "fiddler_crab"], "121": ["n01981276", "king_crab"], "122": ["n01983481", "American_lobster"], "123": ["n01984695", "spiny_lobster"], "124": ["n01985128", "crayfish"], "125": ["n01986214", "hermit_crab"], "126": ["n01990800", "isopod"], "127": ["n02002556", "white_stork"], "128": ["n02002724", "black_stork"], "129": ["n02006656", "spoonbill"], "130": ["n02007558", "flamingo"], "131": ["n02009229", "little_blue_heron"], "132": ["n02009912", "American_egret"], "133": ["n02011460", "bittern"], "134": ["n02012849", "crane"], "135": ["n02013706", "limpkin"], "136": ["n02017213", "European_gallinule"], "137": ["n02018207", "American_coot"], "138": ["n02018795", "bustard"], "139": ["n02025239", "ruddy_turnstone"], "140": ["n02027492", "red-backed_sandpiper"], "141": ["n02028035", "redshank"], "142": ["n02033041", "dowitcher"], "143": ["n02037110", "oystercatcher"], "144": ["n02051845", "pelican"], "145": ["n02056570", "king_penguin"], "146": ["n02058221", "albatross"], "147": ["n02066245", "grey_whale"], "148": ["n02071294", "killer_whale"], "149": ["n02074367", "dugong"], "150": ["n02077923", "sea_lion"], "151": ["n02085620", "Chihuahua"], "152": ["n02085782", "Japanese_spaniel"], "153": ["n02085936", "Maltese_dog"], "154": ["n02086079", "Pekinese"], "155": ["n02086240", "Shih-Tzu"], "156": ["n02086646", "Blenheim_spaniel"], "157": ["n02086910", "papillon"], "158": ["n02087046", "toy_terrier"], "159": ["n02087394", "Rhodesian_ridgeback"], "160": ["n02088094", "Afghan_hound"], "161": ["n02088238", "basset"], "162": ["n02088364", "beagle"], "163": ["n02088466", "bloodhound"], "164": ["n02088632", "bluetick"], "165": ["n02089078", "black-and-tan_coonhound"], "166": ["n02089867", "Walker_hound"], "167": ["n02089973", "English_foxhound"], "168": ["n02090379", "redbone"], "169": ["n02090622", "borzoi"], "170": ["n02090721", "Irish_wolfhound"], "171": ["n02091032", "Italian_greyhound"], "172": ["n02091134", "whippet"], "173": ["n02091244", "Ibizan_hound"], "174": ["n02091467", "Norwegian_elkhound"], "175": ["n02091635", "otterhound"], "176": ["n02091831", "Saluki"], "177": ["n02092002", "Scottish_deerhound"], "178": ["n02092339", "Weimaraner"], "179": ["n02093256", "Staffordshire_bullterrier"], "180": ["n02093428", "American_Staffordshire_terrier"], "181": ["n02093647", "Bedlington_terrier"], "182": ["n02093754", 
"Border_terrier"], "183": ["n02093859", "Kerry_blue_terrier"], "184": ["n02093991", "Irish_terrier"], "185": ["n02094114", "Norfolk_terrier"], "186": ["n02094258", "Norwich_terrier"], "187": ["n02094433", "Yorkshire_terrier"], "188": ["n02095314", "wire-haired_fox_terrier"], "189": ["n02095570", "Lakeland_terrier"], "190": ["n02095889", "Sealyham_terrier"], "191": ["n02096051", "Airedale"], "192": ["n02096177", "cairn"], "193": ["n02096294", "Australian_terrier"], "194": ["n02096437", "Dandie_Dinmont"], "195": ["n02096585", "Boston_bull"], "196": ["n02097047", "miniature_schnauzer"], "197": ["n02097130", "giant_schnauzer"], "198": ["n02097209", "standard_schnauzer"], "199": ["n02097298", "Scotch_terrier"], "200": ["n02097474", "Tibetan_terrier"], "201": ["n02097658", "silky_terrier"], "202": ["n02098105", "soft-coated_wheaten_terrier"], "203": ["n02098286", "West_Highland_white_terrier"], "204": ["n02098413", "Lhasa"], "205": ["n02099267", "flat-coated_retriever"], "206": ["n02099429", "curly-coated_retriever"], "207": ["n02099601", "golden_retriever"], "208": ["n02099712", "Labrador_retriever"], "209": ["n02099849", "Chesapeake_Bay_retriever"], "210": ["n02100236", "German_short-haired_pointer"], "211": ["n02100583", "vizsla"], "212": ["n02100735", "English_setter"], "213": ["n02100877", "Irish_setter"], "214": ["n02101006", "Gordon_setter"], "215": ["n02101388", "Brittany_spaniel"], "216": ["n02101556", "clumber"], "217": ["n02102040", "English_springer"], "218": ["n02102177", "Welsh_springer_spaniel"], "219": ["n02102318", "cocker_spaniel"], "220": ["n02102480", "Sussex_spaniel"], "221": ["n02102973", "Irish_water_spaniel"], "222": ["n02104029", "kuvasz"], "223": ["n02104365", "schipperke"], "224": ["n02105056", "groenendael"], "225": ["n02105162", "malinois"], "226": ["n02105251", "briard"], "227": ["n02105412", "kelpie"], "228": ["n02105505", "komondor"], "229": ["n02105641", "Old_English_sheepdog"], "230": ["n02105855", "Shetland_sheepdog"], "231": ["n02106030", "collie"], "232": ["n02106166", "Border_collie"], "233": ["n02106382", "Bouvier_des_Flandres"], "234": ["n02106550", "Rottweiler"], "235": ["n02106662", "German_shepherd"], "236": ["n02107142", "Doberman"], "237": ["n02107312", "miniature_pinscher"], "238": ["n02107574", "Greater_Swiss_Mountain_dog"], "239": ["n02107683", "Bernese_mountain_dog"], "240": ["n02107908", "Appenzeller"], "241": ["n02108000", "EntleBucher"], "242": ["n02108089", "boxer"], "243": ["n02108422", "bull_mastiff"], "244": ["n02108551", "Tibetan_mastiff"], "245": ["n02108915", "French_bulldog"], "246": ["n02109047", "Great_Dane"], "247": ["n02109525", "Saint_Bernard"], "248": ["n02109961", "Eskimo_dog"], "249": ["n02110063", "malamute"], "250": ["n02110185", "Siberian_husky"], "251": ["n02110341", "dalmatian"], "252": ["n02110627", "affenpinscher"], "253": ["n02110806", "basenji"], "254": ["n02110958", "pug"], "255": ["n02111129", "Leonberg"], "256": ["n02111277", "Newfoundland"], "257": ["n02111500", "Great_Pyrenees"], "258": ["n02111889", "Samoyed"], "259": ["n02112018", "Pomeranian"], "260": ["n02112137", "chow"], "261": ["n02112350", "keeshond"], "262": ["n02112706", "Brabancon_griffon"], "263": ["n02113023", "Pembroke"], "264": ["n02113186", "Cardigan"], "265": ["n02113624", "toy_poodle"], "266": ["n02113712", "miniature_poodle"], "267": ["n02113799", "standard_poodle"], "268": ["n02113978", "Mexican_hairless"], "269": ["n02114367", "timber_wolf"], "270": ["n02114548", "white_wolf"], "271": ["n02114712", "red_wolf"], "272": ["n02114855", "coyote"], 
"273": ["n02115641", "dingo"], "274": ["n02115913", "dhole"], "275": ["n02116738", "African_hunting_dog"], "276": ["n02117135", "hyena"], "277": ["n02119022", "red_fox"], "278": ["n02119789", "kit_fox"], "279": ["n02120079", "Arctic_fox"], "280": ["n02120505", "grey_fox"], "281": ["n02123045", "tabby"], "282": ["n02123159", "tiger_cat"], "283": ["n02123394", "Persian_cat"], "284": ["n02123597", "Siamese_cat"], "285": ["n02124075", "Egyptian_cat"], "286": ["n02125311", "cougar"], "287": ["n02127052", "lynx"], "288": ["n02128385", "leopard"], "289": ["n02128757", "snow_leopard"], "290": ["n02128925", "jaguar"], "291": ["n02129165", "lion"], "292": ["n02129604", "tiger"], "293": ["n02130308", "cheetah"], "294": ["n02132136", "brown_bear"], "295": ["n02133161", "American_black_bear"], "296": ["n02134084", "ice_bear"], "297": ["n02134418", "sloth_bear"], "298": ["n02137549", "mongoose"], "299": ["n02138441", "meerkat"], "300": ["n02165105", "tiger_beetle"], "301": ["n02165456", "ladybug"], "302": ["n02167151", "ground_beetle"], "303": ["n02168699", "long-horned_beetle"], "304": ["n02169497", "leaf_beetle"], "305": ["n02172182", "dung_beetle"], "306": ["n02174001", "rhinoceros_beetle"], "307": ["n02177972", "weevil"], "308": ["n02190166", "fly"], "309": ["n02206856", "bee"], "310": ["n02219486", "ant"], "311": ["n02226429", "grasshopper"], "312": ["n02229544", "cricket"], "313": ["n02231487", "walking_stick"], "314": ["n02233338", "cockroach"], "315": ["n02236044", "mantis"], "316": ["n02256656", "cicada"], "317": ["n02259212", "leafhopper"], "318": ["n02264363", "lacewing"], "319": ["n02268443", "dragonfly"], "320": ["n02268853", "damselfly"], "321": ["n02276258", "admiral"], "322": ["n02277742", "ringlet"], "323": ["n02279972", "monarch"], "324": ["n02280649", "cabbage_butterfly"], "325": ["n02281406", "sulphur_butterfly"], "326": ["n02281787", "lycaenid"], "327": ["n02317335", "starfish"], "328": ["n02319095", "sea_urchin"], "329": ["n02321529", "sea_cucumber"], "330": ["n02325366", "wood_rabbit"], "331": ["n02326432", "hare"], "332": ["n02328150", "Angora"], "333": ["n02342885", "hamster"], "334": ["n02346627", "porcupine"], "335": ["n02356798", "fox_squirrel"], "336": ["n02361337", "marmot"], "337": ["n02363005", "beaver"], "338": ["n02364673", "guinea_pig"], "339": ["n02389026", "sorrel"], "340": ["n02391049", "zebra"], "341": ["n02395406", "hog"], "342": ["n02396427", "wild_boar"], "343": ["n02397096", "warthog"], "344": ["n02398521", "hippopotamus"], "345": ["n02403003", "ox"], "346": ["n02408429", "water_buffalo"], "347": ["n02410509", "bison"], "348": ["n02412080", "ram"], "349": ["n02415577", "bighorn"], "350": ["n02417914", "ibex"], "351": ["n02422106", "hartebeest"], "352": ["n02422699", "impala"], "353": ["n02423022", "gazelle"], "354": ["n02437312", "Arabian_camel"], "355": ["n02437616", "llama"], "356": ["n02441942", "weasel"], "357": ["n02442845", "mink"], "358": ["n02443114", "polecat"], "359": ["n02443484", "black-footed_ferret"], "360": ["n02444819", "otter"], "361": ["n02445715", "skunk"], "362": ["n02447366", "badger"], "363": ["n02454379", "armadillo"], "364": ["n02457408", "three-toed_sloth"], "365": ["n02480495", "orangutan"], "366": ["n02480855", "gorilla"], "367": ["n02481823", "chimpanzee"], "368": ["n02483362", "gibbon"], "369": ["n02483708", "siamang"], "370": ["n02484975", "guenon"], "371": ["n02486261", "patas"], "372": ["n02486410", "baboon"], "373": ["n02487347", "macaque"], "374": ["n02488291", "langur"], "375": ["n02488702", "colobus"], "376": ["n02489166", 
"proboscis_monkey"], "377": ["n02490219", "marmoset"], "378": ["n02492035", "capuchin"], "379": ["n02492660", "howler_monkey"], "380": ["n02493509", "titi"], "381": ["n02493793", "spider_monkey"], "382": ["n02494079", "squirrel_monkey"], "383": ["n02497673", "Madagascar_cat"], "384": ["n02500267", "indri"], "385": ["n02504013", "Indian_elephant"], "386": ["n02504458", "African_elephant"], "387": ["n02509815", "lesser_panda"], "388": ["n02510455", "giant_panda"], "389": ["n02514041", "barracouta"], "390": ["n02526121", "eel"], "391": ["n02536864", "coho"], "392": ["n02606052", "rock_beauty"], "393": ["n02607072", "anemone_fish"], "394": ["n02640242", "sturgeon"], "395": ["n02641379", "gar"], "396": ["n02643566", "lionfish"], "397": ["n02655020", "puffer"], "398": ["n02666196", "abacus"], "399": ["n02667093", "abaya"], "400": ["n02669723", "academic_gown"], "401": ["n02672831", "accordion"], "402": ["n02676566", "acoustic_guitar"], "403": ["n02687172", "aircraft_carrier"], "404": ["n02690373", "airliner"], "405": ["n02692877", "airship"], "406": ["n02699494", "altar"], "407": ["n02701002", "ambulance"], "408": ["n02704792", "amphibian"], "409": ["n02708093", "analog_clock"], "410": ["n02727426", "apiary"], "411": ["n02730930", "apron"], "412": ["n02747177", "ashcan"], "413": ["n02749479", "assault_rifle"], "414": ["n02769748", "backpack"], "415": ["n02776631", "bakery"], "416": ["n02777292", "balance_beam"], "417": ["n02782093", "balloon"], "418": ["n02783161", "ballpoint"], "419": ["n02786058", "Band_Aid"], "420": ["n02787622", "banjo"], "421": ["n02788148", "bannister"], "422": ["n02790996", "barbell"], "423": ["n02791124", "barber_chair"], "424": ["n02791270", "barbershop"], "425": ["n02793495", "barn"], "426": ["n02794156", "barometer"], "427": ["n02795169", "barrel"], "428": ["n02797295", "barrow"], "429": ["n02799071", "baseball"], "430": ["n02802426", "basketball"], "431": ["n02804414", "bassinet"], "432": ["n02804610", "bassoon"], "433": ["n02807133", "bathing_cap"], "434": ["n02808304", "bath_towel"], "435": ["n02808440", "bathtub"], "436": ["n02814533", "beach_wagon"], "437": ["n02814860", "beacon"], "438": ["n02815834", "beaker"], "439": ["n02817516", "bearskin"], "440": ["n02823428", "beer_bottle"], "441": ["n02823750", "beer_glass"], "442": ["n02825657", "bell_cote"], "443": ["n02834397", "bib"], "444": ["n02835271", "bicycle-built-for-two"], "445": ["n02837789", "bikini"], "446": ["n02840245", "binder"], "447": ["n02841315", "binoculars"], "448": ["n02843684", "birdhouse"], "449": ["n02859443", "boathouse"], "450": ["n02860847", "bobsled"], "451": ["n02865351", "bolo_tie"], "452": ["n02869837", "bonnet"], "453": ["n02870880", "bookcase"], "454": ["n02871525", "bookshop"], "455": ["n02877765", "bottlecap"], "456": ["n02879718", "bow"], "457": ["n02883205", "bow_tie"], "458": ["n02892201", "brass"], "459": ["n02892767", "brassiere"], "460": ["n02894605", "breakwater"], "461": ["n02895154", "breastplate"], "462": ["n02906734", "broom"], "463": ["n02909870", "bucket"], "464": ["n02910353", "buckle"], "465": ["n02916936", "bulletproof_vest"], "466": ["n02917067", "bullet_train"], "467": ["n02927161", "butcher_shop"], "468": ["n02930766", "cab"], "469": ["n02939185", "caldron"], "470": ["n02948072", "candle"], "471": ["n02950826", "cannon"], "472": ["n02951358", "canoe"], "473": ["n02951585", "can_opener"], "474": ["n02963159", "cardigan"], "475": ["n02965783", "car_mirror"], "476": ["n02966193", "carousel"], "477": ["n02966687", "carpenter's_kit"], "478": ["n02971356", "carton"], 
"479": ["n02974003", "car_wheel"], "480": ["n02977058", "cash_machine"], "481": ["n02978881", "cassette"], "482": ["n02979186", "cassette_player"], "483": ["n02980441", "castle"], "484": ["n02981792", "catamaran"], "485": ["n02988304", "CD_player"], "486": ["n02992211", "cello"], "487": ["n02992529", "cellular_telephone"], "488": ["n02999410", "chain"], "489": ["n03000134", "chainlink_fence"], "490": ["n03000247", "chain_mail"], "491": ["n03000684", "chain_saw"], "492": ["n03014705", "chest"], "493": ["n03016953", "chiffonier"], "494": ["n03017168", "chime"], "495": ["n03018349", "china_cabinet"], "496": ["n03026506", "Christmas_stocking"], "497": ["n03028079", "church"], "498": ["n03032252", "cinema"], "499": ["n03041632", "cleaver"], "500": ["n03042490", "cliff_dwelling"], "501": ["n03045698", "cloak"], "502": ["n03047690", "clog"], "503": ["n03062245", "cocktail_shaker"], "504": ["n03063599", "coffee_mug"], "505": ["n03063689", "coffeepot"], "506": ["n03065424", "coil"], "507": ["n03075370", "combination_lock"], "508": ["n03085013", "computer_keyboard"], "509": ["n03089624", "confectionery"], "510": ["n03095699", "container_ship"], "511": ["n03100240", "convertible"], "512": ["n03109150", "corkscrew"], "513": ["n03110669", "cornet"], "514": ["n03124043", "cowboy_boot"], "515": ["n03124170", "cowboy_hat"], "516": ["n03125729", "cradle"], "517": ["n03126707", "crane"], "518": ["n03127747", "crash_helmet"], "519": ["n03127925", "crate"], "520": ["n03131574", "crib"], "521": ["n03133878", "Crock_Pot"], "522": ["n03134739", "croquet_ball"], "523": ["n03141823", "crutch"], "524": ["n03146219", "cuirass"], "525": ["n03160309", "dam"], "526": ["n03179701", "desk"], "527": ["n03180011", "desktop_computer"], "528": ["n03187595", "dial_telephone"], "529": ["n03188531", "diaper"], "530": ["n03196217", "digital_clock"], "531": ["n03197337", "digital_watch"], "532": ["n03201208", "dining_table"], "533": ["n03207743", "dishrag"], "534": ["n03207941", "dishwasher"], "535": ["n03208938", "disk_brake"], "536": ["n03216828", "dock"], "537": ["n03218198", "dogsled"], "538": ["n03220513", "dome"], "539": ["n03223299", "doormat"], "540": ["n03240683", "drilling_platform"], "541": ["n03249569", "drum"], "542": ["n03250847", "drumstick"], "543": ["n03255030", "dumbbell"], "544": ["n03259280", "Dutch_oven"], "545": ["n03271574", "electric_fan"], "546": ["n03272010", "electric_guitar"], "547": ["n03272562", "electric_locomotive"], "548": ["n03290653", "entertainment_center"], "549": ["n03291819", "envelope"], "550": ["n03297495", "espresso_maker"], "551": ["n03314780", "face_powder"], "552": ["n03325584", "feather_boa"], "553": ["n03337140", "file"], "554": ["n03344393", "fireboat"], "555": ["n03345487", "fire_engine"], "556": ["n03347037", "fire_screen"], "557": ["n03355925", "flagpole"], "558": ["n03372029", "flute"], "559": ["n03376595", "folding_chair"], "560": ["n03379051", "football_helmet"], "561": ["n03384352", "forklift"], "562": ["n03388043", "fountain"], "563": ["n03388183", "fountain_pen"], "564": ["n03388549", "four-poster"], "565": ["n03393912", "freight_car"], "566": ["n03394916", "French_horn"], "567": ["n03400231", "frying_pan"], "568": ["n03404251", "fur_coat"], "569": ["n03417042", "garbage_truck"], "570": ["n03424325", "gasmask"], "571": ["n03425413", "gas_pump"], "572": ["n03443371", "goblet"], "573": ["n03444034", "go-kart"], "574": ["n03445777", "golf_ball"], "575": ["n03445924", "golfcart"], "576": ["n03447447", "gondola"], "577": ["n03447721", "gong"], "578": ["n03450230", "gown"], 
"579": ["n03452741", "grand_piano"], "580": ["n03457902", "greenhouse"], "581": ["n03459775", "grille"], "582": ["n03461385", "grocery_store"], "583": ["n03467068", "guillotine"], "584": ["n03476684", "hair_slide"], "585": ["n03476991", "hair_spray"], "586": ["n03478589", "half_track"], "587": ["n03481172", "hammer"], "588": ["n03482405", "hamper"], "589": ["n03483316", "hand_blower"], "590": ["n03485407", "hand-held_computer"], "591": ["n03485794", "handkerchief"], "592": ["n03492542", "hard_disc"], "593": ["n03494278", "harmonica"], "594": ["n03495258", "harp"], "595": ["n03496892", "harvester"], "596": ["n03498962", "hatchet"], "597": ["n03527444", "holster"], "598": ["n03529860", "home_theater"], "599": ["n03530642", "honeycomb"], "600": ["n03532672", "hook"], "601": ["n03534580", "hoopskirt"], "602": ["n03535780", "horizontal_bar"], "603": ["n03538406", "horse_cart"], "604": ["n03544143", "hourglass"], "605": ["n03584254", "iPod"], "606": ["n03584829", "iron"], "607": ["n03590841", "jack-o'-lantern"], "608": ["n03594734", "jean"], "609": ["n03594945", "jeep"], "610": ["n03595614", "jersey"], "611": ["n03598930", "jigsaw_puzzle"], "612": ["n03599486", "jinrikisha"], "613": ["n03602883", "joystick"], "614": ["n03617480", "kimono"], "615": ["n03623198", "knee_pad"], "616": ["n03627232", "knot"], "617": ["n03630383", "lab_coat"], "618": ["n03633091", "ladle"], "619": ["n03637318", "lampshade"], "620": ["n03642806", "laptop"], "621": ["n03649909", "lawn_mower"], "622": ["n03657121", "lens_cap"], "623": ["n03658185", "letter_opener"], "624": ["n03661043", "library"], "625": ["n03662601", "lifeboat"], "626": ["n03666591", "lighter"], "627": ["n03670208", "limousine"], "628": ["n03673027", "liner"], "629": ["n03676483", "lipstick"], "630": ["n03680355", "Loafer"], "631": ["n03690938", "lotion"], "632": ["n03691459", "loudspeaker"], "633": ["n03692522", "loupe"], "634": ["n03697007", "lumbermill"], "635": ["n03706229", "magnetic_compass"], "636": ["n03709823", "mailbag"], "637": ["n03710193", "mailbox"], "638": ["n03710637", "maillot"], "639": ["n03710721", "maillot"], "640": ["n03717622", "manhole_cover"], "641": ["n03720891", "maraca"], "642": ["n03721384", "marimba"], "643": ["n03724870", "mask"], "644": ["n03729826", "matchstick"], "645": ["n03733131", "maypole"], "646": ["n03733281", "maze"], "647": ["n03733805", "measuring_cup"], "648": ["n03742115", "medicine_chest"], "649": ["n03743016", "megalith"], "650": ["n03759954", "microphone"], "651": ["n03761084", "microwave"], "652": ["n03763968", "military_uniform"], "653": ["n03764736", "milk_can"], "654": ["n03769881", "minibus"], "655": ["n03770439", "miniskirt"], "656": ["n03770679", "minivan"], "657": ["n03773504", "missile"], "658": ["n03775071", "mitten"], "659": ["n03775546", "mixing_bowl"], "660": ["n03776460", "mobile_home"], "661": ["n03777568", "Model_T"], "662": ["n03777754", "modem"], "663": ["n03781244", "monastery"], "664": ["n03782006", "monitor"], "665": ["n03785016", "moped"], "666": ["n03786901", "mortar"], "667": ["n03787032", "mortarboard"], "668": ["n03788195", "mosque"], "669": ["n03788365", "mosquito_net"], "670": ["n03791053", "motor_scooter"], "671": ["n03792782", "mountain_bike"], "672": ["n03792972", "mountain_tent"], "673": ["n03793489", "mouse"], "674": ["n03794056", "mousetrap"], "675": ["n03796401", "moving_van"], "676": ["n03803284", "muzzle"], "677": ["n03804744", "nail"], "678": ["n03814639", "neck_brace"], "679": ["n03814906", "necklace"], "680": ["n03825788", "nipple"], "681": ["n03832673", "notebook"], 
"682": ["n03837869", "obelisk"], "683": ["n03838899", "oboe"], "684": ["n03840681", "ocarina"], "685": ["n03841143", "odometer"], "686": ["n03843555", "oil_filter"], "687": ["n03854065", "organ"], "688": ["n03857828", "oscilloscope"], "689": ["n03866082", "overskirt"], "690": ["n03868242", "oxcart"], "691": ["n03868863", "oxygen_mask"], "692": ["n03871628", "packet"], "693": ["n03873416", "paddle"], "694": ["n03874293", "paddlewheel"], "695": ["n03874599", "padlock"], "696": ["n03876231", "paintbrush"], "697": ["n03877472", "pajama"], "698": ["n03877845", "palace"], "699": ["n03884397", "panpipe"], "700": ["n03887697", "paper_towel"], "701": ["n03888257", "parachute"], "702": ["n03888605", "parallel_bars"], "703": ["n03891251", "park_bench"], "704": ["n03891332", "parking_meter"], "705": ["n03895866", "passenger_car"], "706": ["n03899768", "patio"], "707": ["n03902125", "pay-phone"], "708": ["n03903868", "pedestal"], "709": ["n03908618", "pencil_box"], "710": ["n03908714", "pencil_sharpener"], "711": ["n03916031", "perfume"], "712": ["n03920288", "Petri_dish"], "713": ["n03924679", "photocopier"], "714": ["n03929660", "pick"], "715": ["n03929855", "pickelhaube"], "716": ["n03930313", "picket_fence"], "717": ["n03930630", "pickup"], "718": ["n03933933", "pier"], "719": ["n03935335", "piggy_bank"], "720": ["n03937543", "pill_bottle"], "721": ["n03938244", "pillow"], "722": ["n03942813", "ping-pong_ball"], "723": ["n03944341", "pinwheel"], "724": ["n03947888", "pirate"], "725": ["n03950228", "pitcher"], "726": ["n03954731", "plane"], "727": ["n03956157", "planetarium"], "728": ["n03958227", "plastic_bag"], "729": ["n03961711", "plate_rack"], "730": ["n03967562", "plow"], "731": ["n03970156", "plunger"], "732": ["n03976467", "Polaroid_camera"], "733": ["n03976657", "pole"], "734": ["n03977966", "police_van"], "735": ["n03980874", "poncho"], "736": ["n03982430", "pool_table"], "737": ["n03983396", "pop_bottle"], "738": ["n03991062", "pot"], "739": ["n03992509", "potter's_wheel"], "740": ["n03995372", "power_drill"], "741": ["n03998194", "prayer_rug"], "742": ["n04004767", "printer"], "743": ["n04005630", "prison"], "744": ["n04008634", "projectile"], "745": ["n04009552", "projector"], "746": ["n04019541", "puck"], "747": ["n04023962", "punching_bag"], "748": ["n04026417", "purse"], "749": ["n04033901", "quill"], "750": ["n04033995", "quilt"], "751": ["n04037443", "racer"], "752": ["n04039381", "racket"], "753": ["n04040759", "radiator"], "754": ["n04041544", "radio"], "755": ["n04044716", "radio_telescope"], "756": ["n04049303", "rain_barrel"], "757": ["n04065272", "recreational_vehicle"], "758": ["n04067472", "reel"], "759": ["n04069434", "reflex_camera"], "760": ["n04070727", "refrigerator"], "761": ["n04074963", "remote_control"], "762": ["n04081281", "restaurant"], "763": ["n04086273", "revolver"], "764": ["n04090263", "rifle"], "765": ["n04099969", "rocking_chair"], "766": ["n04111531", "rotisserie"], "767": ["n04116512", "rubber_eraser"], "768": ["n04118538", "rugby_ball"], "769": ["n04118776", "rule"], "770": ["n04120489", "running_shoe"], "771": ["n04125021", "safe"], "772": ["n04127249", "safety_pin"], "773": ["n04131690", "saltshaker"], "774": ["n04133789", "sandal"], "775": ["n04136333", "sarong"], "776": ["n04141076", "sax"], "777": ["n04141327", "scabbard"], "778": ["n04141975", "scale"], "779": ["n04146614", "school_bus"], "780": ["n04147183", "schooner"], "781": ["n04149813", "scoreboard"], "782": ["n04152593", "screen"], "783": ["n04153751", "screw"], "784": ["n04154565", 
"screwdriver"], "785": ["n04162706", "seat_belt"], "786": ["n04179913", "sewing_machine"], "787": ["n04192698", "shield"], "788": ["n04200800", "shoe_shop"], "789": ["n04201297", "shoji"], "790": ["n04204238", "shopping_basket"], "791": ["n04204347", "shopping_cart"], "792": ["n04208210", "shovel"], "793": ["n04209133", "shower_cap"], "794": ["n04209239", "shower_curtain"], "795": ["n04228054", "ski"], "796": ["n04229816", "ski_mask"], "797": ["n04235860", "sleeping_bag"], "798": ["n04238763", "slide_rule"], "799": ["n04239074", "sliding_door"], "800": ["n04243546", "slot"], "801": ["n04251144", "snorkel"], "802": ["n04252077", "snowmobile"], "803": ["n04252225", "snowplow"], "804": ["n04254120", "soap_dispenser"], "805": ["n04254680", "soccer_ball"], "806": ["n04254777", "sock"], "807": ["n04258138", "solar_dish"], "808": ["n04259630", "sombrero"], "809": ["n04263257", "soup_bowl"], "810": ["n04264628", "space_bar"], "811": ["n04265275", "space_heater"], "812": ["n04266014", "space_shuttle"], "813": ["n04270147", "spatula"], "814": ["n04273569", "speedboat"], "815": ["n04275548", "spider_web"], "816": ["n04277352", "spindle"], "817": ["n04285008", "sports_car"], "818": ["n04286575", "spotlight"], "819": ["n04296562", "stage"], "820": ["n04310018", "steam_locomotive"], "821": ["n04311004", "steel_arch_bridge"], "822": ["n04311174", "steel_drum"], "823": ["n04317175", "stethoscope"], "824": ["n04325704", "stole"], "825": ["n04326547", "stone_wall"], "826": ["n04328186", "stopwatch"], "827": ["n04330267", "stove"], "828": ["n04332243", "strainer"], "829": ["n04335435", "streetcar"], "830": ["n04336792", "stretcher"], "831": ["n04344873", "studio_couch"], "832": ["n04346328", "stupa"], "833": ["n04347754", "submarine"], "834": ["n04350905", "suit"], "835": ["n04355338", "sundial"], "836": ["n04355933", "sunglass"], "837": ["n04356056", "sunglasses"], "838": ["n04357314", "sunscreen"], "839": ["n04366367", "suspension_bridge"], "840": ["n04367480", "swab"], "841": ["n04370456", "sweatshirt"], "842": ["n04371430", "swimming_trunks"], "843": ["n04371774", "swing"], "844": ["n04372370", "switch"], "845": ["n04376876", "syringe"], "846": ["n04380533", "table_lamp"], "847": ["n04389033", "tank"], "848": ["n04392985", "tape_player"], "849": ["n04398044", "teapot"], "850": ["n04399382", "teddy"], "851": ["n04404412", "television"], "852": ["n04409515", "tennis_ball"], "853": ["n04417672", "thatch"], "854": ["n04418357", "theater_curtain"], "855": ["n04423845", "thimble"], "856": ["n04428191", "thresher"], "857": ["n04429376", "throne"], "858": ["n04435653", "tile_roof"], "859": ["n04442312", "toaster"], "860": ["n04443257", "tobacco_shop"], "861": ["n04447861", "toilet_seat"], "862": ["n04456115", "torch"], "863": ["n04458633", "totem_pole"], "864": ["n04461696", "tow_truck"], "865": ["n04462240", "toyshop"], "866": ["n04465501", "tractor"], "867": ["n04467665", "trailer_truck"], "868": ["n04476259", "tray"], "869": ["n04479046", "trench_coat"], "870": ["n04482393", "tricycle"], "871": ["n04483307", "trimaran"], "872": ["n04485082", "tripod"], "873": ["n04486054", "triumphal_arch"], "874": ["n04487081", "trolleybus"], "875": ["n04487394", "trombone"], "876": ["n04493381", "tub"], "877": ["n04501370", "turnstile"], "878": ["n04505470", "typewriter_keyboard"], "879": ["n04507155", "umbrella"], "880": ["n04509417", "unicycle"], "881": ["n04515003", "upright"], "882": ["n04517823", "vacuum"], "883": ["n04522168", "vase"], "884": ["n04523525", "vault"], "885": ["n04525038", "velvet"], "886": ["n04525305", 
"vending_machine"], "887": ["n04532106", "vestment"], "888": ["n04532670", "viaduct"], "889": ["n04536866", "violin"], "890": ["n04540053", "volleyball"], "891": ["n04542943", "waffle_iron"], "892": ["n04548280", "wall_clock"], "893": ["n04548362", "wallet"], "894": ["n04550184", "wardrobe"], "895": ["n04552348", "warplane"], "896": ["n04553703", "washbasin"], "897": ["n04554684", "washer"], "898": ["n04557648", "water_bottle"], "899": ["n04560804", "water_jug"], "900": ["n04562935", "water_tower"], "901": ["n04579145", "whiskey_jug"], "902": ["n04579432", "whistle"], "903": ["n04584207", "wig"], "904": ["n04589890", "window_screen"], "905": ["n04590129", "window_shade"], "906": ["n04591157", "Windsor_tie"], "907": ["n04591713", "wine_bottle"], "908": ["n04592741", "wing"], "909": ["n04596742", "wok"], "910": ["n04597913", "wooden_spoon"], "911": ["n04599235", "wool"], "912": ["n04604644", "worm_fence"], "913": ["n04606251", "wreck"], "914": ["n04612504", "yawl"], "915": ["n04613696", "yurt"], "916": ["n06359193", "web_site"], "917": ["n06596364", "comic_book"], "918": ["n06785654", "crossword_puzzle"], "919": ["n06794110", "street_sign"], "920": ["n06874185", "traffic_light"], "921": ["n07248320", "book_jacket"], "922": ["n07565083", "menu"], "923": ["n07579787", "plate"], "924": ["n07583066", "guacamole"], "925": ["n07584110", "consomme"], "926": ["n07590611", "hot_pot"], "927": ["n07613480", "trifle"], "928": ["n07614500", "ice_cream"], "929": ["n07615774", "ice_lolly"], "930": ["n07684084", "French_loaf"], "931": ["n07693725", "bagel"], "932": ["n07695742", "pretzel"], "933": ["n07697313", "cheeseburger"], "934": ["n07697537", "hotdog"], "935": ["n07711569", "mashed_potato"], "936": ["n07714571", "head_cabbage"], "937": ["n07714990", "broccoli"], "938": ["n07715103", "cauliflower"], "939": ["n07716358", "zucchini"], "940": ["n07716906", "spaghetti_squash"], "941": ["n07717410", "acorn_squash"], "942": ["n07717556", "butternut_squash"], "943": ["n07718472", "cucumber"], "944": ["n07718747", "artichoke"], "945": ["n07720875", "bell_pepper"], "946": ["n07730033", "cardoon"], "947": ["n07734744", "mushroom"], "948": ["n07742313", "Granny_Smith"], "949": ["n07745940", "strawberry"], "950": ["n07747607", "orange"], "951": ["n07749582", "lemon"], "952": ["n07753113", "fig"], "953": ["n07753275", "pineapple"], "954": ["n07753592", "banana"], "955": ["n07754684", "jackfruit"], "956": ["n07760859", "custard_apple"], "957": ["n07768694", "pomegranate"], "958": ["n07802026", "hay"], "959": ["n07831146", "carbonara"], "960": ["n07836838", "chocolate_sauce"], "961": ["n07860988", "dough"], "962": ["n07871810", "meat_loaf"], "963": ["n07873807", "pizza"], "964": ["n07875152", "potpie"], "965": ["n07880968", "burrito"], "966": ["n07892512", "red_wine"], "967": ["n07920052", "espresso"], "968": ["n07930864", "cup"], "969": ["n07932039", "eggnog"], "970": ["n09193705", "alp"], "971": ["n09229709", "bubble"], "972": ["n09246464", "cliff"], "973": ["n09256479", "coral_reef"], "974": ["n09288635", "geyser"], "975": ["n09332890", "lakeside"], "976": ["n09399592", "promontory"], "977": ["n09421951", "sandbar"], "978": ["n09428293", "seashore"], "979": ["n09468604", "valley"], "980": ["n09472597", "volcano"], "981": ["n09835506", "ballplayer"], "982": ["n10148035", "groom"], "983": ["n10565667", "scuba_diver"], "984": ["n11879895", "rapeseed"], "985": ["n11939491", "daisy"], "986": ["n12057211", "yellow_lady's_slipper"], "987": ["n12144580", "corn"], "988": ["n12267677", "acorn"], "989": ["n12620546", "hip"], 
"990": ["n12768682", "buckeye"], "991": ["n12985857", "coral_fungus"], "992": ["n12998815", "agaric"], "993": ["n13037406", "gyromitra"], "994": ["n13040303", "stinkhorn"], "995": ["n13044778", "earthstar"], "996": ["n13052670", "hen-of-the-woods"], "997": ["n13054560", "bolete"], "998": ["n13133613", "ear"], "999": ["n15075141", "toilet_tissue"]} \ No newline at end of file diff --git a/examples/pretrained_cnn/data/imagenet_classes.py b/examples/pretrained_cnn/data/imagenet_classes.py deleted file mode 100644 index d721acd45..000000000 --- a/examples/pretrained_cnn/data/imagenet_classes.py +++ /dev/null @@ -1,1000 +0,0 @@ -class_names = '''tench, Tinca tinca -goldfish, Carassius auratus -great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias -tiger shark, Galeocerdo cuvieri -hammerhead, hammerhead shark -electric ray, crampfish, numbfish, torpedo -stingray -cock -hen -ostrich, Struthio camelus -brambling, Fringilla montifringilla -goldfinch, Carduelis carduelis -house finch, linnet, Carpodacus mexicanus -junco, snowbird -indigo bunting, indigo finch, indigo bird, Passerina cyanea -robin, American robin, Turdus migratorius -bulbul -jay -magpie -chickadee -water ouzel, dipper -kite -bald eagle, American eagle, Haliaeetus leucocephalus -vulture -great grey owl, great gray owl, Strix nebulosa -European fire salamander, Salamandra salamandra -common newt, Triturus vulgaris -eft -spotted salamander, Ambystoma maculatum -axolotl, mud puppy, Ambystoma mexicanum -bullfrog, Rana catesbeiana -tree frog, tree-frog -tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui -loggerhead, loggerhead turtle, Caretta caretta -leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea -mud turtle -terrapin -box turtle, box tortoise -banded gecko -common iguana, iguana, Iguana iguana -American chameleon, anole, Anolis carolinensis -whiptail, whiptail lizard -agama -frilled lizard, Chlamydosaurus kingi -alligator lizard -Gila monster, Heloderma suspectum -green lizard, Lacerta viridis -African chameleon, Chamaeleo chamaeleon -Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis -African crocodile, Nile crocodile, Crocodylus niloticus -American alligator, Alligator mississipiensis -triceratops -thunder snake, worm snake, Carphophis amoenus -ringneck snake, ring-necked snake, ring snake -hognose snake, puff adder, sand viper -green snake, grass snake -king snake, kingsnake -garter snake, grass snake -water snake -vine snake -night snake, Hypsiglena torquata -boa constrictor, Constrictor constrictor -rock python, rock snake, Python sebae -Indian cobra, Naja naja -green mamba -sea snake -horned viper, cerastes, sand viper, horned asp, Cerastes cornutus -diamondback, diamondback rattlesnake, Crotalus adamanteus -sidewinder, horned rattlesnake, Crotalus cerastes -trilobite -harvestman, daddy longlegs, Phalangium opilio -scorpion -black and gold garden spider, Argiope aurantia -barn spider, Araneus cavaticus -garden spider, Aranea diademata -black widow, Latrodectus mactans -tarantula -wolf spider, hunting spider -tick -centipede -black grouse -ptarmigan -ruffed grouse, partridge, Bonasa umbellus -prairie chicken, prairie grouse, prairie fowl -peacock -quail -partridge -African grey, African gray, Psittacus erithacus -macaw -sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita -lorikeet -coucal -bee eater -hornbill -hummingbird -jacamar -toucan -drake -red-breasted merganser, Mergus serrator -goose -black swan, Cygnus atratus -tusker 
-echidna, spiny anteater, anteater -platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus -wallaby, brush kangaroo -koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus -wombat -jellyfish -sea anemone, anemone -brain coral -flatworm, platyhelminth -nematode, nematode worm, roundworm -conch -snail -slug -sea slug, nudibranch -chiton, coat-of-mail shell, sea cradle, polyplacophore -chambered nautilus, pearly nautilus, nautilus -Dungeness crab, Cancer magister -rock crab, Cancer irroratus -fiddler crab -king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica -American lobster, Northern lobster, Maine lobster, Homarus americanus -spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish -crayfish, crawfish, crawdad, crawdaddy -hermit crab -isopod -white stork, Ciconia ciconia -black stork, Ciconia nigra -spoonbill -flamingo -little blue heron, Egretta caerulea -American egret, great white heron, Egretta albus -bittern -crane -limpkin, Aramus pictus -European gallinule, Porphyrio porphyrio -American coot, marsh hen, mud hen, water hen, Fulica americana -bustard -ruddy turnstone, Arenaria interpres -red-backed sandpiper, dunlin, Erolia alpina -redshank, Tringa totanus -dowitcher -oystercatcher, oyster catcher -pelican -king penguin, Aptenodytes patagonica -albatross, mollymawk -grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus -killer whale, killer, orca, grampus, sea wolf, Orcinus orca -dugong, Dugong dugon -sea lion -Chihuahua -Japanese spaniel -Maltese dog, Maltese terrier, Maltese -Pekinese, Pekingese, Peke -Shih-Tzu -Blenheim spaniel -papillon -toy terrier -Rhodesian ridgeback -Afghan hound, Afghan -basset, basset hound -beagle -bloodhound, sleuthhound -bluetick -black-and-tan coonhound -Walker hound, Walker foxhound -English foxhound -redbone -borzoi, Russian wolfhound -Irish wolfhound -Italian greyhound -whippet -Ibizan hound, Ibizan Podenco -Norwegian elkhound, elkhound -otterhound, otter hound -Saluki, gazelle hound -Scottish deerhound, deerhound -Weimaraner -Staffordshire bullterrier, Staffordshire bull terrier -American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier -Bedlington terrier -Border terrier -Kerry blue terrier -Irish terrier -Norfolk terrier -Norwich terrier -Yorkshire terrier -wire-haired fox terrier -Lakeland terrier -Sealyham terrier, Sealyham -Airedale, Airedale terrier -cairn, cairn terrier -Australian terrier -Dandie Dinmont, Dandie Dinmont terrier -Boston bull, Boston terrier -miniature schnauzer -giant schnauzer -standard schnauzer -Scotch terrier, Scottish terrier, Scottie -Tibetan terrier, chrysanthemum dog -silky terrier, Sydney silky -soft-coated wheaten terrier -West Highland white terrier -Lhasa, Lhasa apso -flat-coated retriever -curly-coated retriever -golden retriever -Labrador retriever -Chesapeake Bay retriever -German short-haired pointer -vizsla, Hungarian pointer -English setter -Irish setter, red setter -Gordon setter -Brittany spaniel -clumber, clumber spaniel -English springer, English springer spaniel -Welsh springer spaniel -cocker spaniel, English cocker spaniel, cocker -Sussex spaniel -Irish water spaniel -kuvasz -schipperke -groenendael -malinois -briard -kelpie -komondor -Old English sheepdog, bobtail -Shetland sheepdog, Shetland sheep dog, Shetland -collie -Border collie -Bouvier des Flandres, Bouviers des Flandres -Rottweiler -German shepherd, German shepherd dog, German 
police dog, alsatian -Doberman, Doberman pinscher -miniature pinscher -Greater Swiss Mountain dog -Bernese mountain dog -Appenzeller -EntleBucher -boxer -bull mastiff -Tibetan mastiff -French bulldog -Great Dane -Saint Bernard, St Bernard -Eskimo dog, husky -malamute, malemute, Alaskan malamute -Siberian husky -dalmatian, coach dog, carriage dog -affenpinscher, monkey pinscher, monkey dog -basenji -pug, pug-dog -Leonberg -Newfoundland, Newfoundland dog -Great Pyrenees -Samoyed, Samoyede -Pomeranian -chow, chow chow -keeshond -Brabancon griffon -Pembroke, Pembroke Welsh corgi -Cardigan, Cardigan Welsh corgi -toy poodle -miniature poodle -standard poodle -Mexican hairless -timber wolf, grey wolf, gray wolf, Canis lupus -white wolf, Arctic wolf, Canis lupus tundrarum -red wolf, maned wolf, Canis rufus, Canis niger -coyote, prairie wolf, brush wolf, Canis latrans -dingo, warrigal, warragal, Canis dingo -dhole, Cuon alpinus -African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus -hyena, hyaena -red fox, Vulpes vulpes -kit fox, Vulpes macrotis -Arctic fox, white fox, Alopex lagopus -grey fox, gray fox, Urocyon cinereoargenteus -tabby, tabby cat -tiger cat -Persian cat -Siamese cat, Siamese -Egyptian cat -cougar, puma, catamount, mountain lion, painter, panther, Felis concolor -lynx, catamount -leopard, Panthera pardus -snow leopard, ounce, Panthera uncia -jaguar, panther, Panthera onca, Felis onca -lion, king of beasts, Panthera leo -tiger, Panthera tigris -cheetah, chetah, Acinonyx jubatus -brown bear, bruin, Ursus arctos -American black bear, black bear, Ursus americanus, Euarctos americanus -ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus -sloth bear, Melursus ursinus, Ursus ursinus -mongoose -meerkat, mierkat -tiger beetle -ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle -ground beetle, carabid beetle -long-horned beetle, longicorn, longicorn beetle -leaf beetle, chrysomelid -dung beetle -rhinoceros beetle -weevil -fly -bee -ant, emmet, pismire -grasshopper, hopper -cricket -walking stick, walkingstick, stick insect -cockroach, roach -mantis, mantid -cicada, cicala -leafhopper -lacewing, lacewing fly -dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk -damselfly -admiral -ringlet, ringlet butterfly -monarch, monarch butterfly, milkweed butterfly, Danaus plexippus -cabbage butterfly -sulphur butterfly, sulfur butterfly -lycaenid, lycaenid butterfly -starfish, sea star -sea urchin -sea cucumber, holothurian -wood rabbit, cottontail, cottontail rabbit -hare -Angora, Angora rabbit -hamster -porcupine, hedgehog -fox squirrel, eastern fox squirrel, Sciurus niger -marmot -beaver -guinea pig, Cavia cobaya -sorrel -zebra -hog, pig, grunter, squealer, Sus scrofa -wild boar, boar, Sus scrofa -warthog -hippopotamus, hippo, river horse, Hippopotamus amphibius -ox -water buffalo, water ox, Asiatic buffalo, Bubalus bubalis -bison -ram, tup -bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis -ibex, Capra ibex -hartebeest -impala, Aepyceros melampus -gazelle -Arabian camel, dromedary, Camelus dromedarius -llama -weasel -mink -polecat, fitch, foulmart, foumart, Mustela putorius -black-footed ferret, ferret, Mustela nigripes -otter -skunk, polecat, wood pussy -badger -armadillo -three-toed sloth, ai, Bradypus tridactylus -orangutan, orang, orangutang, Pongo pygmaeus -gorilla, Gorilla gorilla -chimpanzee, chimp, Pan troglodytes -gibbon, Hylobates lar -siamang, 
Hylobates syndactylus, Symphalangus syndactylus -guenon, guenon monkey -patas, hussar monkey, Erythrocebus patas -baboon -macaque -langur -colobus, colobus monkey -proboscis monkey, Nasalis larvatus -marmoset -capuchin, ringtail, Cebus capucinus -howler monkey, howler -titi, titi monkey -spider monkey, Ateles geoffroyi -squirrel monkey, Saimiri sciureus -Madagascar cat, ring-tailed lemur, Lemur catta -indri, indris, Indri indri, Indri brevicaudatus -Indian elephant, Elephas maximus -African elephant, Loxodonta africana -lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens -giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca -barracouta, snoek -eel -coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch -rock beauty, Holocanthus tricolor -anemone fish -sturgeon -gar, garfish, garpike, billfish, Lepisosteus osseus -lionfish -puffer, pufferfish, blowfish, globefish -abacus -abaya -academic gown, academic robe, judge's robe -accordion, piano accordion, squeeze box -acoustic guitar -aircraft carrier, carrier, flattop, attack aircraft carrier -airliner -airship, dirigible -altar -ambulance -amphibian, amphibious vehicle -analog clock -apiary, bee house -apron -ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin -assault rifle, assault gun -backpack, back pack, knapsack, packsack, rucksack, haversack -bakery, bakeshop, bakehouse -balance beam, beam -balloon -ballpoint, ballpoint pen, ballpen, Biro -Band Aid -banjo -bannister, banister, balustrade, balusters, handrail -barbell -barber chair -barbershop -barn -barometer -barrel, cask -barrow, garden cart, lawn cart, wheelbarrow -baseball -basketball -bassinet -bassoon -bathing cap, swimming cap -bath towel -bathtub, bathing tub, bath, tub -beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon -beacon, lighthouse, beacon light, pharos -beaker -bearskin, busby, shako -beer bottle -beer glass -bell cote, bell cot -bib -bicycle-built-for-two, tandem bicycle, tandem -bikini, two-piece -binder, ring-binder -binoculars, field glasses, opera glasses -birdhouse -boathouse -bobsled, bobsleigh, bob -bolo tie, bolo, bola tie, bola -bonnet, poke bonnet -bookcase -bookshop, bookstore, bookstall -bottlecap -bow -bow tie, bow-tie, bowtie -brass, memorial tablet, plaque -brassiere, bra, bandeau -breakwater, groin, groyne, mole, bulwark, seawall, jetty -breastplate, aegis, egis -broom -bucket, pail -buckle -bulletproof vest -bullet train, bullet -butcher shop, meat market -cab, hack, taxi, taxicab -caldron, cauldron -candle, taper, wax light -cannon -canoe -can opener, tin opener -cardigan -car mirror -carousel, carrousel, merry-go-round, roundabout, whirligig -carpenter's kit, tool kit -carton -car wheel -cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM -cassette -cassette player -castle -catamaran -CD player -cello, violoncello -cellular telephone, cellular phone, cellphone, cell, mobile phone -chain -chainlink fence -chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour -chain saw, chainsaw -chest -chiffonier, commode -chime, bell, gong -china cabinet, china closet -Christmas stocking -church, church building -cinema, movie theater, movie theatre, movie house, picture palace -cleaver, meat cleaver, chopper -cliff dwelling -cloak -clog, geta, patten, sabot -cocktail shaker -coffee mug -coffeepot -coil, spiral, volute, whorl, helix -combination 
lock -computer keyboard, keypad -confectionery, confectionary, candy store -container ship, containership, container vessel -convertible -corkscrew, bottle screw -cornet, horn, trumpet, trump -cowboy boot -cowboy hat, ten-gallon hat -cradle -crane -crash helmet -crate -crib, cot -Crock Pot -croquet ball -crutch -cuirass -dam, dike, dyke -desk -desktop computer -dial telephone, dial phone -diaper, nappy, napkin -digital clock -digital watch -dining table, board -dishrag, dishcloth -dishwasher, dish washer, dishwashing machine -disk brake, disc brake -dock, dockage, docking facility -dogsled, dog sled, dog sleigh -dome -doormat, welcome mat -drilling platform, offshore rig -drum, membranophone, tympan -drumstick -dumbbell -Dutch oven -electric fan, blower -electric guitar -electric locomotive -entertainment center -envelope -espresso maker -face powder -feather boa, boa -file, file cabinet, filing cabinet -fireboat -fire engine, fire truck -fire screen, fireguard -flagpole, flagstaff -flute, transverse flute -folding chair -football helmet -forklift -fountain -fountain pen -four-poster -freight car -French horn, horn -frying pan, frypan, skillet -fur coat -garbage truck, dustcart -gasmask, respirator, gas helmet -gas pump, gasoline pump, petrol pump, island dispenser -goblet -go-kart -golf ball -golfcart, golf cart -gondola -gong, tam-tam -gown -grand piano, grand -greenhouse, nursery, glasshouse -grille, radiator grille -grocery store, grocery, food market, market -guillotine -hair slide -hair spray -half track -hammer -hamper -hand blower, blow dryer, blow drier, hair dryer, hair drier -hand-held computer, hand-held microcomputer -handkerchief, hankie, hanky, hankey -hard disc, hard disk, fixed disk -harmonica, mouth organ, harp, mouth harp -harp -harvester, reaper -hatchet -holster -home theater, home theatre -honeycomb -hook, claw -hoopskirt, crinoline -horizontal bar, high bar -horse cart, horse-cart -hourglass -iPod -iron, smoothing iron -jack-o'-lantern -jean, blue jean, denim -jeep, landrover -jersey, T-shirt, tee shirt -jigsaw puzzle -jinrikisha, ricksha, rickshaw -joystick -kimono -knee pad -knot -lab coat, laboratory coat -ladle -lampshade, lamp shade -laptop, laptop computer -lawn mower, mower -lens cap, lens cover -letter opener, paper knife, paperknife -library -lifeboat -lighter, light, igniter, ignitor -limousine, limo -liner, ocean liner -lipstick, lip rouge -Loafer -lotion -loudspeaker, speaker, speaker unit, loudspeaker system, speaker system -loupe, jeweler's loupe -lumbermill, sawmill -magnetic compass -mailbag, postbag -mailbox, letter box -maillot -maillot, tank suit -manhole cover -maraca -marimba, xylophone -mask -matchstick -maypole -maze, labyrinth -measuring cup -medicine chest, medicine cabinet -megalith, megalithic structure -microphone, mike -microwave, microwave oven -military uniform -milk can -minibus -miniskirt, mini -minivan -missile -mitten -mixing bowl -mobile home, manufactured home -Model T -modem -monastery -monitor -moped -mortar -mortarboard -mosque -mosquito net -motor scooter, scooter -mountain bike, all-terrain bike, off-roader -mountain tent -mouse, computer mouse -mousetrap -moving van -muzzle -nail -neck brace -necklace -nipple -notebook, notebook computer -obelisk -oboe, hautboy, hautbois -ocarina, sweet potato -odometer, hodometer, mileometer, milometer -oil filter -organ, pipe organ -oscilloscope, scope, cathode-ray oscilloscope, CRO -overskirt -oxcart -oxygen mask -packet -paddle, boat paddle -paddlewheel, paddle wheel -padlock -paintbrush 
-pajama, pyjama, pj's, jammies -palace -panpipe, pandean pipe, syrinx -paper towel -parachute, chute -parallel bars, bars -park bench -parking meter -passenger car, coach, carriage -patio, terrace -pay-phone, pay-station -pedestal, plinth, footstall -pencil box, pencil case -pencil sharpener -perfume, essence -Petri dish -photocopier -pick, plectrum, plectron -pickelhaube -picket fence, paling -pickup, pickup truck -pier -piggy bank, penny bank -pill bottle -pillow -ping-pong ball -pinwheel -pirate, pirate ship -pitcher, ewer -plane, carpenter's plane, woodworking plane -planetarium -plastic bag -plate rack -plow, plough -plunger, plumber's helper -Polaroid camera, Polaroid Land camera -pole -police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria -poncho -pool table, billiard table, snooker table -pop bottle, soda bottle -pot, flowerpot -potter's wheel -power drill -prayer rug, prayer mat -printer -prison, prison house -projectile, missile -projector -puck, hockey puck -punching bag, punch bag, punching ball, punchball -purse -quill, quill pen -quilt, comforter, comfort, puff -racer, race car, racing car -racket, racquet -radiator -radio, wireless -radio telescope, radio reflector -rain barrel -recreational vehicle, RV, R.V. -reel -reflex camera -refrigerator, icebox -remote control, remote -restaurant, eating house, eating place, eatery -revolver, six-gun, six-shooter -rifle -rocking chair, rocker -rotisserie -rubber eraser, rubber, pencil eraser -rugby ball -rule, ruler -running shoe -safe -safety pin -saltshaker, salt shaker -sandal -sarong -sax, saxophone -scabbard -scale, weighing machine -school bus -schooner -scoreboard -screen, CRT screen -screw -screwdriver -seat belt, seatbelt -sewing machine -shield, buckler -shoe shop, shoe-shop, shoe store -shoji -shopping basket -shopping cart -shovel -shower cap -shower curtain -ski -ski mask -sleeping bag -slide rule, slipstick -sliding door -slot, one-armed bandit -snorkel -snowmobile -snowplow, snowplough -soap dispenser -soccer ball -sock -solar dish, solar collector, solar furnace -sombrero -soup bowl -space bar -space heater -space shuttle -spatula -speedboat -spider web, spider's web -spindle -sports car, sport car -spotlight, spot -stage -steam locomotive -steel arch bridge -steel drum -stethoscope -stole -stone wall -stopwatch, stop watch -stove -strainer -streetcar, tram, tramcar, trolley, trolley car -stretcher -studio couch, day bed -stupa, tope -submarine, pigboat, sub, U-boat -suit, suit of clothes -sundial -sunglass -sunglasses, dark glasses, shades -sunscreen, sunblock, sun blocker -suspension bridge -swab, swob, mop -sweatshirt -swimming trunks, bathing trunks -swing -switch, electric switch, electrical switch -syringe -table lamp -tank, army tank, armored combat vehicle, armoured combat vehicle -tape player -teapot -teddy, teddy bear -television, television system -tennis ball -thatch, thatched roof -theater curtain, theatre curtain -thimble -thresher, thrasher, threshing machine -throne -tile roof -toaster -tobacco shop, tobacconist shop, tobacconist -toilet seat -torch -totem pole -tow truck, tow car, wrecker -toyshop -tractor -trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi -tray -trench coat -tricycle, trike, velocipede -trimaran -tripod -triumphal arch -trolleybus, trolley coach, trackless trolley -trombone -tub, vat -turnstile -typewriter keyboard -umbrella -unicycle, monocycle -upright, upright piano -vacuum, vacuum cleaner -vase -vault -velvet -vending machine -vestment 
-viaduct -violin, fiddle -volleyball -waffle iron -wall clock -wallet, billfold, notecase, pocketbook -wardrobe, closet, press -warplane, military plane -washbasin, handbasin, washbowl, lavabo, wash-hand basin -washer, automatic washer, washing machine -water bottle -water jug -water tower -whiskey jug -whistle -wig -window screen -window shade -Windsor tie -wine bottle -wing -wok -wooden spoon -wool, woolen, woollen -worm fence, snake fence, snake-rail fence, Virginia fence -wreck -yawl -yurt -web site, website, internet site, site -comic book -crossword puzzle, crossword -street sign -traffic light, traffic signal, stoplight -book jacket, dust cover, dust jacket, dust wrapper -menu -plate -guacamole -consomme -hot pot, hotpot -trifle -ice cream, icecream -ice lolly, lolly, lollipop, popsicle -French loaf -bagel, beigel -pretzel -cheeseburger -hotdog, hot dog, red hot -mashed potato -head cabbage -broccoli -cauliflower -zucchini, courgette -spaghetti squash -acorn squash -butternut squash -cucumber, cuke -artichoke, globe artichoke -bell pepper -cardoon -mushroom -Granny Smith -strawberry -orange -lemon -fig -pineapple, ananas -banana -jackfruit, jak, jack -custard apple -pomegranate -hay -carbonara -chocolate sauce, chocolate syrup -dough -meat loaf, meatloaf -pizza, pizza pie -potpie -burrito -red wine -espresso -cup -eggnog -alp -bubble -cliff, drop, drop-off -coral reef -geyser -lakeside, lakeshore -promontory, headland, head, foreland -sandbar, sand bar -seashore, coast, seacoast, sea-coast -valley, vale -volcano -ballplayer, baseball player -groom, bridegroom -scuba diver -rapeseed -daisy -yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum -corn -acorn -hip, rose hip, rosehip -buckeye, horse chestnut, conker -coral fungus -agaric -gyromitra -stinkhorn, carrion fungus -earthstar -hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa -bolete -ear, spike, capitulum -toilet tissue, toilet paper, bathroom tissue'''.split("\n") diff --git a/examples/pretrained_cnn/data/laska.png b/examples/pretrained_cnn/data/laska.png deleted file mode 100644 index e0ed7a16d..000000000 Binary files a/examples/pretrained_cnn/data/laska.png and /dev/null differ diff --git a/examples/pretrained_cnn/data/puzzle.jpeg b/examples/pretrained_cnn/data/puzzle.jpeg deleted file mode 100755 index bbd3a04f0..000000000 Binary files a/examples/pretrained_cnn/data/puzzle.jpeg and /dev/null differ diff --git a/examples/pretrained_cnn/data/tiger.jpeg b/examples/pretrained_cnn/data/tiger.jpeg deleted file mode 100755 index 52c82a3c7..000000000 Binary files a/examples/pretrained_cnn/data/tiger.jpeg and /dev/null differ diff --git a/examples/pretrained_cnn/tutorial_load_ckpt_weights_to_tensorlayer.py b/examples/pretrained_cnn/tutorial_load_ckpt_weights_to_tensorlayer.py deleted file mode 100644 index a3837f9fc..000000000 --- a/examples/pretrained_cnn/tutorial_load_ckpt_weights_to_tensorlayer.py +++ /dev/null @@ -1,70 +0,0 @@ -#! 
/usr/bin/python -# -*- coding: utf-8 -*- - -import tensorlayer as tl -from tensorlayer.layers import (Input, Conv2d, Flatten, Dense, MaxPool2d) -from tensorlayer.models import Model -from tensorlayer.files import maybe_download_and_extract -import numpy as np -import tensorflow as tf - -filename = 'ckpt_parameters.zip' -url_source = 'https://media.githubusercontent.com/media/tensorlayer/pretrained-models/master/models/' - -# download weights -down_file = tl.files.maybe_download_and_extract( - filename=filename, working_directory='model/', url_source=url_source, extract=True -) - -model_file = 'model/ckpt_parameters' - -# ckpt to npz; rename_key is used to match the TL naming rule -tl.files.ckpt_to_npz_dict(model_file, rename_key=True) -weights = np.load('model.npz', allow_pickle=True) - -# View the parameter names and weight shapes -for key in weights.keys(): - print(key, weights[key].shape) - - -# build model -def create_model(inputs_shape): - W_init = tl.initializers.truncated_normal(stddev=5e-2) - W_init2 = tl.initializers.truncated_normal(stddev=0.04) - ni = Input(inputs_shape) - nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, name='conv1_1')(ni) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1_1')(nn) - nn = Conv2d(64, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv1_2')(nn) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1_2')(nn) - - nn = Conv2d(128, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv2_1')(nn) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2_1')(nn) - nn = Conv2d(128, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv2_2')(nn) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2_2')(nn) - - nn = Conv2d(256, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv3_1')(nn) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool3_1')(nn) - nn = Conv2d(256, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv3_2')(nn) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool3_2')(nn) - - nn = Conv2d(512, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv4_1')(nn) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool4_1')(nn) - nn = Conv2d(512, (3, 3), (1, 1), padding='SAME', act=tf.nn.relu, W_init=W_init, b_init=None, name='conv4_2')(nn) - nn = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool4_2')(nn) - - nn = Flatten(name='flatten')(nn) - nn = Dense(1000, act=None, W_init=W_init2, name='output')(nn) - - M = Model(inputs=ni, outputs=nn, name='cnn') - return M - - -net = create_model([None, 224, 224, 3]) -# Loaded weights whose names are not found in the network's weights will be skipped. -# If the ckpt uses the same naming rule as TL, we can restore the model with tl.files.load_and_assign_ckpt(model_dir=, network=, skip=True) -tl.files.load_and_assign_npz_dict(network=net, skip=True) - -# You can use the following code to view the restored model parameters.
-net_weights_name = [w.name for w in net.all_weights] -for i in range(len(net_weights_name)): - print(net_weights_name[i], net.all_weights[i]) diff --git a/examples/pretrained_cnn/tutorial_models_mobilenetv1.py b/examples/pretrained_cnn/tutorial_models_mobilenetv1.py deleted file mode 100644 index 8d7b35a6b..000000000 --- a/examples/pretrained_cnn/tutorial_models_mobilenetv1.py +++ /dev/null @@ -1,34 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -""" -MobileNetV1 for ImageNet using TL models - -- mobilenet : https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet -- tf.slim : https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models -""" - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.models.imagenet_classes import class_names - -# tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - -# get the whole model -mobilenetv1 = tl.models.MobileNetV1(pretrained=True) - -img1 = tl.vis.read_image('data/tiger.jpeg') -img1 = tl.prepro.imresize(img1, (224, 224)) / 255 -img1 = img1.astype(np.float32)[np.newaxis, ...] - -start_time = time.time() -output = mobilenetv1(img1, is_train=False) -prob = tf.nn.softmax(output)[0].numpy() -print(" Elapsed time : %.5ss" % (time.time() - start_time)) -preds = (np.argsort(prob)[::-1])[0:5] -for p in preds: - print(class_names[p], prob[p]) diff --git a/examples/pretrained_cnn/tutorial_models_squeezenetv1.py b/examples/pretrained_cnn/tutorial_models_squeezenetv1.py deleted file mode 100644 index 9b6ee4e7f..000000000 --- a/examples/pretrained_cnn/tutorial_models_squeezenetv1.py +++ /dev/null @@ -1,30 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""SqueezeNet for ImageNet using TL models.""" - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.models.imagenet_classes import class_names - -# tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - -# get the whole model -squeezenet = tl.models.SqueezeNetV1(pretrained=True) -print(squeezenet) - -img1 = tl.vis.read_image('data/tiger.jpeg') -img1 = tl.prepro.imresize(img1, (224, 224)) / 255 -img1 = img1.astype(np.float32)[np.newaxis, ...] - -start_time = time.time() -output = squeezenet(img1, is_train=False) -prob = tf.nn.softmax(output)[0].numpy() -print(" Elapsed time : %.5ss" % (time.time() - start_time)) -preds = (np.argsort(prob)[::-1])[0:5] -for p in preds: - print(class_names[p], prob[p]) diff --git a/examples/pretrained_cnn/tutorial_models_vgg19.py b/examples/pretrained_cnn/tutorial_models_vgg19.py deleted file mode 100644 index 3f04fe9b3..000000000 --- a/examples/pretrained_cnn/tutorial_models_vgg19.py +++ /dev/null @@ -1,27 +0,0 @@ -#! 
/usr/bin/python -# -*- coding: utf-8 -*- -"""VGG-19 for ImageNet using TL models.""" - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.models.imagenet_classes import class_names - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# get the whole model -vgg = tl.models.vgg19(pretrained=True) - -img = tl.vis.read_image('data/tiger.jpeg') -img = tl.prepro.imresize(img, (224, 224)).astype(np.float32) / 255 - -start_time = time.time() -output = vgg(img, is_train=False) -probs = tf.nn.softmax(output)[0].numpy() -print(" End time : %.5ss" % (time.time() - start_time)) -preds = (np.argsort(probs)[::-1])[0:5] -for p in preds: - print(class_names[p], probs[p]) diff --git a/examples/pretrained_cnn/tutorial_models_vgg_static.py b/examples/pretrained_cnn/tutorial_models_vgg_static.py deleted file mode 100644 index e5644395f..000000000 --- a/examples/pretrained_cnn/tutorial_models_vgg_static.py +++ /dev/null @@ -1,27 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""VGG for ImageNet using TL models.""" - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.models.imagenet_classes import class_names - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# get the whole model -vgg = tl.models.vgg16(pretrained=True, mode='static') - -img = tl.vis.read_image('data/tiger.jpeg') -img = tl.prepro.imresize(img, (224, 224)).astype(np.float32) / 255 - -start_time = time.time() -output = vgg(img, is_train=False) -probs = tf.nn.softmax(output)[0].numpy() -print(" End time : %.5ss" % (time.time() - start_time)) -preds = (np.argsort(probs)[::-1])[0:5] -for p in preds: - print(class_names[p], probs[p]) diff --git a/examples/quantized_net/README.md b/examples/quantized_net/README.md deleted file mode 100644 index 565313040..000000000 --- a/examples/quantized_net/README.md +++ /dev/null @@ -1,6 +0,0 @@ -### TODO -- All TFRecord implementations should be migrated to the Dataset API. - -### Blogs -- [Google's quantized network implementation (CVPR 2018), in Chinese](https://zhuanlan.zhihu.com/p/41121544) -- [Accelerating neural networks with quantized models (with code), in Chinese](https://zhuanlan.zhihu.com/p/37220669) \ No newline at end of file diff --git a/examples/quantized_net/tutorial_binarynet_cifar10_tfrecord.py b/examples/quantized_net/tutorial_binarynet_cifar10_tfrecord.py deleted file mode 100644 index 3f4d0fcf1..000000000 --- a/examples/quantized_net/tutorial_binarynet_cifar10_tfrecord.py +++ /dev/null @@ -1,218 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -""" - -- 1. This model has 1,068,298 parameters and uses the DoReFa compression strategy (weight: 1 bit, activation: 1 bit); -after 500 epochs of training with a GPU, an accuracy of 41.1% was reached. - -- 2. For simplified CNN layers, see "Convolutional layer (Simplified)" -on the Read the Docs website. - -- 3. For data augmentation without TFRecord, see `tutorial_image_preprocess.py`. - -Links ------- -.. https://www.tensorflow.org/versions/r0.9/tutorials/deep_cnn/index.html -.. https://github.com/tensorflow/tensorflow/tree/r0.9/tensorflow/models/image/cifar10 - -Note ------ -The optimizers used in the official code and in this code are different. - -Description ----------- -The images are processed as follows: -.. They are cropped to 24 x 24 pixels, centrally for evaluation or randomly for training. -.. They are approximately whitened to make the model insensitive to dynamic range. - -For training, we additionally apply a series of random distortions to -artificially increase the data set size: -.. Randomly flip the image from left to right. -..
Randomly distort the image brightness. -.. Randomly distort the image contrast. - -Speed Up --------- -Reading images from disk and distorting them can use a non-trivial amount -of processing time. To prevent these operations from slowing down training, -we run them inside 16 separate threads which continuously fill a TensorFlow queue. - -""" - -import multiprocessing -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import ( - BinaryConv2d, BinaryDense, Conv2d, Dense, Flatten, Input, LocalResponseNorm, MaxPool2d, Sign -) -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# Download data, and convert to TFRecord format, see ```tutorial_tfrecord.py``` -# prepare cifar10 data -X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) - - -def binary_model(input_shape, n_classes): - in_net = Input(shape=input_shape, name='input') - - net = Conv2d(64, (5, 5), (1, 1), act='relu', padding='SAME', name='conv1')(in_net) - net = Sign(name='sign1')(net) - - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1')(net) - net = LocalResponseNorm(4, 1.0, 0.001 / 9.0, 0.75, name='norm1')(net) - net = BinaryConv2d(64, (5, 5), (1, 1), act='relu', padding='SAME', name='bconv1')(net) - - net = LocalResponseNorm(4, 1.0, 0.001 / 9.0, 0.75, name='norm2')(net) - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2')(net) - net = Flatten(name='flatten')(net) - net = Sign(name='sign2')(net) - net = BinaryDense(384, act='relu', name='d1relu')(net) - net = Sign(name='sign3')(net) - net = BinaryDense(192, act='relu', name='d2relu')(net) - net = Dense(n_classes, act=None, name='output')(net) - net = Model(inputs=in_net, outputs=net, name='binarynet') - return net - - -# training settings -net = binary_model([None, 24, 24, 3], n_classes=10) -batch_size = 128 -n_epoch = 50000 -learning_rate = 0.0001 -print_freq = 5 -n_step_epoch = int(len(y_train) / batch_size) -n_step = n_epoch * n_step_epoch -shuffle_buffer_size = 128 - -train_weights = net.trainable_weights -optimizer = tf.optimizers.Adam(learning_rate) -cost = tl.cost.cross_entropy - - -def generator_train(): - inputs = X_train - targets = y_train - if len(inputs) != len(targets): - raise AssertionError("The length of inputs and targets should be equal") - for _input, _target in zip(inputs, targets): - # yield _input.encode('utf-8'), _target.encode('utf-8') - yield _input, _target - - -def generator_test(): - inputs = X_test - targets = y_test - if len(inputs) != len(targets): - raise AssertionError("The length of inputs and targets should be equal") - for _input, _target in zip(inputs, targets): - # yield _input.encode('utf-8'), _target.encode('utf-8') - yield _input, _target - - -def _map_fn_train(img, target): - # 1. Randomly crop a [height, width] section of the image. - img = tf.image.random_crop(img, [24, 24, 3]) - # 2. Randomly flip the image horizontally. - img = tf.image.random_flip_left_right(img) - # 3. Randomly change brightness. - img = tf.image.random_brightness(img, max_delta=63) - # 4. Randomly change contrast. - img = tf.image.random_contrast(img, lower=0.2, upper=1.8) - # 5. Subtract off the mean and divide by the variance of the pixels. - img = tf.image.per_image_standardization(img) - target = tf.reshape(target, ()) - return img, target - - -def _map_fn_test(img, target): - # 1. Crop the central [height, width] of the image. - img = tf.image.resize_with_pad(img, 24, 24) - # 2. 
Subtract off the mean and divide by the variance of the pixels. - img = tf.image.per_image_standardization(img) - img = tf.reshape(img, (24, 24, 3)) - target = tf.reshape(target, ()) - return img, target - - -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - -# dataset API and augmentation -train_ds = tf.data.Dataset.from_generator( - generator_train, output_types=(tf.float32, tf.int32) -) # , output_shapes=((24, 24, 3), (1))) -train_ds = train_ds.map(_map_fn_train, num_parallel_calls=multiprocessing.cpu_count()) -# train_ds = train_ds.repeat(n_epoch) -train_ds = train_ds.shuffle(shuffle_buffer_size) -train_ds = train_ds.prefetch(buffer_size=4096) -train_ds = train_ds.batch(batch_size) -# value = train_ds.make_one_shot_iterator().get_next() - -test_ds = tf.data.Dataset.from_generator( - generator_test, output_types=(tf.float32, tf.int32) -) # , output_shapes=((24, 24, 3), (1))) -# test_ds = test_ds.shuffle(shuffle_buffer_size) -test_ds = test_ds.map(_map_fn_test, num_parallel_calls=multiprocessing.cpu_count()) -# test_ds = test_ds.repeat(n_epoch) -test_ds = test_ds.prefetch(buffer_size=4096) -test_ds = test_ds.batch(batch_size) -# value_test = test_ds.make_one_shot_iterator().get_next() - -for epoch in range(n_epoch): - start_time = time.time() - - train_loss, train_acc, n_iter = 0, 0, 0 - for X_batch, y_batch in train_ds: - net.train() - _loss, acc = _train_step(net, X_batch, y_batch, cost=cost, train_op=optimizer, acc=accuracy) - - train_loss += _loss - train_acc += acc - n_iter += 1 - - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - - # use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - net.eval() - val_loss, val_acc, n_val_iter = 0, 0, 0 - for X_batch, y_batch in test_ds: - _logits = net(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_val_iter += 1 - print(" val loss: {}".format(val_loss / n_val_iter)) - print(" val acc: {}".format(val_acc / n_val_iter)) - -# use testing data to evaluate the model -net.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in test_ds: - _logits = net(X_batch) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 -print(" test loss: {}".format(test_loss / n_iter)) -print(" test acc: {}".format(test_acc / n_iter)) diff --git a/examples/quantized_net/tutorial_binarynet_mnist_cnn.py b/examples/quantized_net/tutorial_binarynet_mnist_cnn.py deleted file mode 100644 index 4eccd5c2e..000000000 --- a/examples/quantized_net/tutorial_binarynet_mnist_cnn.py +++ /dev/null @@ -1,106 +0,0 @@ -#! 
/usr/bin/python -# -*- coding: utf-8 -*- - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import (BatchNorm, BinaryConv2d, BinaryDense, Flatten, Input, MaxPool2d, Sign) -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1)) - -batch_size = 128 - - -def model(inputs_shape, n_class=10): - # In BNN, all the layers inputs are binary, with the exception of the first layer. - # ref: https://github.com/itayhubara/BinaryNet.tf/blob/master/models/BNN_cifar10.py - net_in = Input(inputs_shape, name='input') - net = BinaryConv2d(32, (5, 5), (1, 1), padding='SAME', b_init=None, name='bcnn1')(net_in) - net = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1')(net) - net = BatchNorm(act=tl.act.htanh, name='bn1')(net) - - net = Sign("sign1")(net) - net = BinaryConv2d(64, (5, 5), (1, 1), padding='SAME', b_init=None, name='bcnn2')(net) - net = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2')(net) - net = BatchNorm(act=tl.act.htanh, name='bn2')(net) - - net = Flatten('ft')(net) - net = Sign("sign2")(net) - net = BinaryDense(256, b_init=None, name='dense')(net) - net = BatchNorm(act=tl.act.htanh, name='bn3')(net) - - net = Sign("sign3")(net) - net = BinaryDense(10, b_init=None, name='bout')(net) - net = BatchNorm(name='bno')(net) - net = Model(inputs=net_in, outputs=net, name='binarynet') - return net - - -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - -n_epoch = 200 -print_freq = 5 - -net = model([None, 28, 28, 1]) -train_op = tf.optimizers.Adam(learning_rate=0.0001) -cost = tl.cost.cross_entropy - -for epoch in range(n_epoch): - start_time = time.time() - train_loss, train_acc, n_batch = 0, 0, 0 - net.train() - - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - _loss, acc = _train_step(net, X_train_a, y_train_a, cost=cost, train_op=train_op, acc=accuracy) - train_loss += _loss - train_acc += acc - n_batch += 1 - - # print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - # print(" train loss: %f" % (train_loss / n_batch)) - # print(" train acc: %f" % (train_acc / n_batch)) - - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: %f" % (train_loss / n_batch)) - print(" train acc: %f" % (train_acc / n_batch)) - val_loss, val_acc, val_batch = 0, 0, 0 - net.eval() - for X_val_a, y_val_a in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=True): - _logits = net(X_val_a) - val_loss += tl.cost.cross_entropy(_logits, y_val_a, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_val_a)) - val_batch += 1 - print(" val loss: {}".format(val_loss / val_batch)) - print(" val acc: {}".format(val_acc / val_batch)) - -net.test() -test_loss, test_acc, n_test_batch = 0, 0, 0 -for X_test_a, y_test_a in tl.iterate.minibatches(X_test, y_test, 
batch_size, shuffle=True): - _logits = net(X_test_a) - test_loss += tl.cost.cross_entropy(_logits, y_test_a, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_test_a)) - n_test_batch += 1 -print(" test loss: %f" % (test_loss / n_test_batch)) -print(" test acc: %f" % (test_acc / n_test_batch)) diff --git a/examples/quantized_net/tutorial_dorefanet_cifar10_tfrecord.py b/examples/quantized_net/tutorial_dorefanet_cifar10_tfrecord.py deleted file mode 100644 index 5ebb7cfa6..000000000 --- a/examples/quantized_net/tutorial_dorefanet_cifar10_tfrecord.py +++ /dev/null @@ -1,211 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -""" - -- 1. This model has 1,068,298 parameters and uses the DoReFa compression strategy (weight: 1 bit, activation: 3 bits); -after 500 epochs of training with a GPU, an accuracy of 81.1% was reached. - -- 2. For simplified CNN layers, see "Convolutional layer (Simplified)" -on the Read the Docs website. - -- 3. For data augmentation without TFRecord, see `tutorial_image_preprocess.py`. - -Links ------- -.. paper: https://arxiv.org/abs/1606.06160 -.. code: https://github.com/XJTUWYD/DoReFa_Cifar10 - -Note ------ -The optimizers used in the official code and in this code are different. - -Description ----------- -The images are processed as follows: -.. They are cropped to 24 x 24 pixels, centrally for evaluation or randomly for training. -.. They are approximately whitened to make the model insensitive to dynamic range. - -For training, we additionally apply a series of random distortions to -artificially increase the data set size: -.. Randomly flip the image from left to right. -.. Randomly distort the image brightness. -.. Randomly distort the image contrast. - -Speed Up -------- -Reading images from disk and distorting them can use a non-trivial amount -of processing time. To prevent these operations from slowing down training, -we run them inside 16 separate threads which continuously fill a TensorFlow queue.
- -""" - -import multiprocessing -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import (Conv2d, Dense, DorefaConv2d, DorefaDense, Flatten, Input, LocalResponseNorm, MaxPool2d) -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# Download data, and convert to TFRecord format, see ```tutorial_tfrecord.py``` -# prepare cifar10 data -X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) - - -def dorefanet_model(input_shape, n_classes): - in_net = Input(shape=input_shape, name='input') - net = Conv2d(32, (5, 5), (1, 1), act='relu', padding='SAME', name='conv1')(in_net) - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1')(net) - net = LocalResponseNorm(4, 1.0, 0.001 / 9.0, 0.75, name='norm1')(net) - net = tl.layers.Sign("sign")(net) - net = DorefaConv2d(8, 32, 64, (5, 5), (1, 1), act='relu', padding='SAME', name='DorefaConv1')(net) - net = LocalResponseNorm(4, 1.0, 0.001 / 9.0, 0.75, name='norm2')(net) - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2')(net) - net = Flatten(name='flatten')(net) - net = DorefaDense(8, 16, 384, act='relu', name='DorefaDense1')(net) - net = DorefaDense(8, 16, 192, act='relu', name='DorefaDense2')(net) - net = Dense(n_classes, act=None, name='output')(net) - net = Model(inputs=in_net, outputs=net, name='dorefanet') - return net - - -# training settings -net = dorefanet_model([None, 24, 24, 3], n_classes=10) -batch_size = 128 -n_epoch = 50000 -learning_rate = 0.0001 -print_freq = 5 -n_step_epoch = int(len(y_train) / batch_size) -n_step = n_epoch * n_step_epoch -shuffle_buffer_size = 128 - -optimizer = tf.optimizers.Adam(learning_rate) -# optimizer = tf.optimizers.SGD(learning_rate) -cost = tl.cost.cross_entropy - - -def generator_train(): - inputs = X_train - targets = y_train - if len(inputs) != len(targets): - raise AssertionError("The length of inputs and targets should be equal") - for _input, _target in zip(inputs, targets): - # yield _input.encode('utf-8'), _target.encode('utf-8') - yield _input, _target - - -def generator_test(): - inputs = X_test - targets = y_test - if len(inputs) != len(targets): - raise AssertionError("The length of inputs and targets should be equal") - for _input, _target in zip(inputs, targets): - # yield _input.encode('utf-8'), _target.encode('utf-8') - yield _input, _target - - -def _map_fn_train(img, target): - # 1. Randomly crop a [height, width] section of the image. - img = tf.image.random_crop(img, [24, 24, 3]) - # 2. Randomly flip the image horizontally. - img = tf.image.random_flip_left_right(img) - # 3. Randomly change brightness. - img = tf.image.random_brightness(img, max_delta=63) - # 4. Randomly change contrast. - img = tf.image.random_contrast(img, lower=0.2, upper=1.8) - # 5. Subtract off the mean and divide by the variance of the pixels. - img = tf.image.per_image_standardization(img) - target = tf.reshape(target, ()) - return img, target - - -def _map_fn_test(img, target): - # 1. Crop the central [height, width] of the image. - img = tf.image.resize_with_pad(img, 24, 24) - # 2. Subtract off the mean and divide by the variance of the pixels. 
- img = tf.image.per_image_standardization(img) - img = tf.reshape(img, (24, 24, 3)) - target = tf.reshape(target, ()) - return img, target - - -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - -# dataset API and augmentation -train_ds = tf.data.Dataset.from_generator( - generator_train, output_types=(tf.float32, tf.int32) -) # , output_shapes=((24, 24, 3), (1))) -train_ds = train_ds.map(_map_fn_train, num_parallel_calls=multiprocessing.cpu_count()) -# train_ds = train_ds.repeat(n_epoch) -train_ds = train_ds.shuffle(shuffle_buffer_size) -train_ds = train_ds.prefetch(buffer_size=4096) -train_ds = train_ds.batch(batch_size) -# value = train_ds.make_one_shot_iterator().get_next() - -test_ds = tf.data.Dataset.from_generator( - generator_test, output_types=(tf.float32, tf.int32) -) # , output_shapes=((24, 24, 3), (1))) -# test_ds = test_ds.shuffle(shuffle_buffer_size) -test_ds = test_ds.map(_map_fn_test, num_parallel_calls=multiprocessing.cpu_count()) -# test_ds = test_ds.repeat(n_epoch) -test_ds = test_ds.prefetch(buffer_size=4096) -test_ds = test_ds.batch(batch_size) -# value_test = test_ds.make_one_shot_iterator().get_next() - -for epoch in range(n_epoch): - start_time = time.time() - - train_loss, train_acc, n_iter = 0, 0, 0 - net.train() - for X_batch, y_batch in train_ds: - _loss, acc = _train_step(net, X_batch, y_batch, cost=cost, train_op=optimizer, acc=accuracy) - - train_loss += _loss - train_acc += acc - n_iter += 1 - - # use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - - net.eval() - val_loss, val_acc, n_val_iter = 0, 0, 0 - for X_batch, y_batch in test_ds: - _logits = net(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_val_iter += 1 - print(" val loss: {}".format(val_loss / n_val_iter)) - print(" val acc: {}".format(val_acc / n_val_iter)) - -# use testing data to evaluate the model -net.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in test_ds: - _logits = net(X_batch) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 -print(" test loss: {}".format(test_loss / n_iter)) -print(" test acc: {}".format(test_acc / n_iter)) diff --git a/examples/quantized_net/tutorial_dorefanet_mnist_cnn.py b/examples/quantized_net/tutorial_dorefanet_mnist_cnn.py deleted file mode 100644 index 1cfd68124..000000000 --- a/examples/quantized_net/tutorial_dorefanet_mnist_cnn.py +++ /dev/null @@ -1,101 +0,0 @@ -#! 
/usr/bin/python -# -*- coding: utf-8 -*- - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import (BatchNorm, Dense, DorefaConv2d, DorefaDense, Flatten, Input, MaxPool2d) -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1)) - -batch_size = 128 - - -def model(inputs_shape, n_class=10): - in_net = Input(inputs_shape, name='input') - net = DorefaConv2d(1, 3, 32, (5, 5), (1, 1), padding='SAME', b_init=None, name='bcnn1')(in_net) - net = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1')(net) - net = BatchNorm(act=tl.act.htanh, name='bn1')(net) - - net = DorefaConv2d(1, 3, 64, (5, 5), (1, 1), padding='SAME', b_init=None, name='bcnn2')(net) - net = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2')(net) - net = BatchNorm(act=tl.act.htanh, name='bn2')(net) - - net = Flatten('flatten')(net) - net = DorefaDense(1, 3, 256, b_init=None, name='dense')(net) - net = BatchNorm(act=tl.act.htanh, name='bn3')(net) - - net = Dense(n_class, b_init=None, name='bout')(net) - net = BatchNorm(name='bno')(net) - net = Model(inputs=in_net, outputs=net, name='dorefanet') - return net - - -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - -n_epoch = 200 -print_freq = 5 - -net = model([None, 28, 28, 1]) -train_op = tf.optimizers.Adam(learning_rate=0.0001) -cost = tl.cost.cross_entropy - -for epoch in range(n_epoch): - start_time = time.time() - train_loss, train_acc, n_batch = 0, 0, 0 - net.train() - - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - _loss, acc = _train_step(net, X_train_a, y_train_a, cost=cost, train_op=train_op, acc=accuracy) - train_loss += _loss - train_acc += acc - n_batch += 1 - - # print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - # print(" train loss: %f" % (train_loss / n_batch)) - # print(" train acc: %f" % (train_acc / n_batch)) - - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: %f" % (train_loss / n_batch)) - print(" train acc: %f" % (train_acc / n_batch)) - val_loss, val_acc, val_batch = 0, 0, 0 - net.eval() - for X_val_a, y_val_a in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=True): - _logits = net(X_val_a) - val_loss += tl.cost.cross_entropy(_logits, y_val_a, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_val_a)) - val_batch += 1 - print(" val loss: {}".format(val_loss / val_batch)) - print(" val acc: {}".format(val_acc / val_batch)) - -net.test() -test_loss, test_acc, n_test_batch = 0, 0, 0 -for X_test_a, y_test_a in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=True): - _logits = net(X_test_a) - test_loss += tl.cost.cross_entropy(_logits, y_test_a, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_test_a)) - n_test_batch += 1 -print(" test loss: 
%f" % (test_loss / n_test_batch)) -print(" test acc: %f" % (test_acc / n_test_batch)) diff --git a/examples/quantized_net/tutorial_quanconv_mnist.py b/examples/quantized_net/tutorial_quanconv_mnist.py deleted file mode 100644 index 1dbfe8d4d..000000000 --- a/examples/quantized_net/tutorial_quanconv_mnist.py +++ /dev/null @@ -1,116 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import ( - Dense, Dropout, Flatten, Input, MaxPool2d, QuanConv2d, QuanConv2dWithBN, QuanDense, QuanDenseLayerWithBN -) -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1)) -# X_train, y_train, X_test, y_test = tl.files.load_cropped_svhn(include_extra=False) - -batch_size = 128 - - -def model(inputs_shape, n_class=10): - net_in = Input(inputs_shape, name="input") - - net = QuanConv2dWithBN( - n_filter=32, filter_size=(5, 5), strides=(1, 1), padding='SAME', act=tl.nn.relu, name='qconvbn1' - )(net_in) - net = MaxPool2d(filter_size=(2, 2), strides=(2, 2), padding='SAME', name='pool1')(net) - - net = QuanConv2dWithBN( - n_filter=64, filter_size=(5, 5), strides=(1, 1), padding='SAME', act=tl.nn.relu, name='qconvbn2' - )(net) - net = MaxPool2d(filter_size=(2, 2), strides=(2, 2), padding='SAME', name='pool2')(net) - - net = Flatten(name='ft')(net) - - # net = QuanDense(256, act="relu", name='qdbn')(net) - # net = QuanDense(n_class, name='qdbn_out')(net) - - net = QuanDenseLayerWithBN(256, act="relu", name='qdbn')(net) - net = QuanDenseLayerWithBN(n_class, name='qdbn_out')(net) - - # net = Dense(256, act='relu', name='Dense1')(net) - # net = Dense(n_class, name='Dense2')(net) - - net = Model(inputs=net_in, outputs=net, name='quan') - return net - - -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - -n_epoch = 200 -print_freq = 1 - -# print(sess.run(net_test.all_params)) # print real values of parameters -net = model([None, 28, 28, 1]) -train_op = tf.optimizers.Adam(learning_rate=0.0001) -cost = tl.cost.cross_entropy - -for epoch in range(n_epoch): - start_time = time.time() - train_loss, train_acc, n_iter = 0, 0, 0 - - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - net.train() - _loss, acc = _train_step(net, X_train_a, y_train_a, cost=cost, train_op=train_op, acc=accuracy) - - train_loss += _loss - train_acc += acc - n_iter += 1 - - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - - # net.eval() - val_loss, val_acc, n_eval = 0, 0, 0 - for X_val_a, y_val_a in 
tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=True): - _logits = net(X_val_a) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_val_a, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_val_a)) - n_eval += 1 - print(" val loss: {}".format(val_loss / n_eval)) - print(" val acc: {}".format(val_acc / n_eval)) - -# net.eval() -test_loss, test_acc, n_test_batch = 0, 0, 0 -for X_test_a, y_test_a in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=True): - _logits = net(X_test_a) - test_loss += tl.cost.cross_entropy(_logits, y_test_a, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_test_a)) - n_test_batch += 1 -print(" test loss: %f" % (test_loss / n_test_batch)) -print(" test acc: %f" % (test_acc / n_test_batch)) diff --git a/examples/quantized_net/tutorial_ternaryweight_cifar10_tfrecord.py b/examples/quantized_net/tutorial_ternaryweight_cifar10_tfrecord.py deleted file mode 100644 index c78686011..000000000 --- a/examples/quantized_net/tutorial_ternaryweight_cifar10_tfrecord.py +++ /dev/null @@ -1,221 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -""" - -- 1. This model has 1,068,298 parameters and uses the TWN compression strategy (weights: 1, 0, -1; output: float32); -after 500 epochs of training with a GPU, an accuracy of 80.6% was reached. - -- 2. For simplified CNN layers, see "Convolutional layer (Simplified)" -on the Read the Docs website. - -- 3. For data augmentation without TFRecord, see `tutorial_image_preprocess.py`. - -Links ------- -.. https://arxiv.org/abs/1605.04711 -.. https://github.com/XJTUWYD/TWN - -Note ------ -The optimizers used in the official code and in this code are different. - -Description ----------- -The images are processed as follows: -.. They are cropped to 24 x 24 pixels, centrally for evaluation or randomly for training. -.. They are approximately whitened to make the model insensitive to dynamic range. - -For training, we additionally apply a series of random distortions to -artificially increase the data set size: -.. Randomly flip the image from left to right. -.. Randomly distort the image brightness. -.. Randomly distort the image contrast. - -Speed Up -------- -Reading images from disk and distorting them can use a non-trivial amount -of processing time. To prevent these operations from slowing down training, -we run them inside 16 separate threads which continuously fill a TensorFlow queue.
- -""" -import multiprocessing -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import ( - Conv2d, Dense, Flatten, Input, LocalResponseNorm, MaxPool2d, TernaryConv2d, TernaryDense -) -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# Download data, and convert to TFRecord format, see ```tutorial_tfrecord.py``` -# prepare cifar10 data -X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False) - - -def model(input_shape, n_classes): - in_net = Input(shape=input_shape, name='input') - - net = Conv2d(64, (5, 5), (1, 1), act=tf.nn.relu, padding='SAME', name='cnn1')(in_net) - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool1')(net) - net = LocalResponseNorm(4, 1.0, 0.001 / 9.0, 0.75, name='norm1')(net) - - net = TernaryConv2d(64, (5, 5), (1, 1), act=tf.nn.relu, padding='SAME', name='cnn2')(net) - net = LocalResponseNorm(4, 1.0, 0.001 / 9.0, 0.75, name='norm2')(net) - net = MaxPool2d((3, 3), (2, 2), padding='SAME', name='pool2')(net) - - net = Flatten(name='flatten')(net) - - net = TernaryDense(384, act=tf.nn.relu, name='d1relu')(net) - net = TernaryDense(192, act=tf.nn.relu, name='d2relu')(net) - net = Dense(n_classes, act=None, name='output')(net) - - net = Model(inputs=in_net, outputs=net, name='dorefanet') - return net - - -# training settings -bitW = 8 -bitA = 8 -net = model([None, 24, 24, 3], n_classes=10) -batch_size = 128 -n_epoch = 50000 -learning_rate = 0.0001 -print_freq = 5 -n_step_epoch = int(len(y_train) / batch_size) -n_step = n_epoch * n_step_epoch -shuffle_buffer_size = 128 - -optimizer = tf.optimizers.Adam(learning_rate) -cost = tl.cost.cross_entropy - - -def generator_train(): - inputs = X_train - targets = y_train - if len(inputs) != len(targets): - raise AssertionError("The length of inputs and targets should be equal") - for _input, _target in zip(inputs, targets): - # yield _input.encode('utf-8'), _target.encode('utf-8') - yield _input, _target - - -def generator_test(): - inputs = X_test - targets = y_test - if len(inputs) != len(targets): - raise AssertionError("The length of inputs and targets should be equal") - for _input, _target in zip(inputs, targets): - # yield _input.encode('utf-8'), _target.encode('utf-8') - yield _input, _target - - -def _map_fn_train(img, target): - # 1. Randomly crop a [height, width] section of the image. - img = tf.image.random_crop(img, [24, 24, 3]) - # 2. Randomly flip the image horizontally. - img = tf.image.random_flip_left_right(img) - # 3. Randomly change brightness. - img = tf.image.random_brightness(img, max_delta=63) - # 4. Randomly change contrast. - img = tf.image.random_contrast(img, lower=0.2, upper=1.8) - # 5. Subtract off the mean and divide by the variance of the pixels. - img = tf.image.per_image_standardization(img) - target = tf.reshape(target, ()) - return img, target - - -def _map_fn_test(img, target): - # 1. Crop the central [height, width] of the image. - img = tf.image.resize_with_pad(img, 24, 24) - # 2. Subtract off the mean and divide by the variance of the pixels. 
- img = tf.image.per_image_standardization(img) - img = tf.reshape(img, (24, 24, 3)) - target = tf.reshape(target, ()) - return img, target - - -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - -# dataset API and augmentation -train_ds = tf.data.Dataset.from_generator( - generator_train, output_types=(tf.float32, tf.int32) -) # , output_shapes=((24, 24, 3), (1))) -train_ds = train_ds.map(_map_fn_train, num_parallel_calls=multiprocessing.cpu_count()) -# train_ds = train_ds.repeat(n_epoch) -train_ds = train_ds.shuffle(shuffle_buffer_size) -train_ds = train_ds.prefetch(buffer_size=4096) -train_ds = train_ds.batch(batch_size) -# value = train_ds.make_one_shot_iterator().get_next() - -test_ds = tf.data.Dataset.from_generator( - generator_test, output_types=(tf.float32, tf.int32) -) # , output_shapes=((24, 24, 3), (1))) -# test_ds = test_ds.shuffle(shuffle_buffer_size) -test_ds = test_ds.map(_map_fn_test, num_parallel_calls=multiprocessing.cpu_count()) -# test_ds = test_ds.repeat(n_epoch) -test_ds = test_ds.prefetch(buffer_size=4096) -test_ds = test_ds.batch(batch_size) -# value_test = test_ds.make_one_shot_iterator().get_next() - -for epoch in range(n_epoch): - start_time = time.time() - - train_loss, train_acc, n_iter = 0, 0, 0 - net.train() - for X_batch, y_batch in train_ds: - _loss, acc = _train_step(net, X_batch, y_batch, cost=cost, train_op=optimizer, acc=accuracy) - - train_loss += _loss - train_acc += acc - n_iter += 1 - - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - - # use training and evaluation sets to evaluate the model every print_freq epoch - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: {}".format(train_loss / n_iter)) - print(" train acc: {}".format(train_acc / n_iter)) - - net.eval() - val_loss, val_acc, n_val_iter = 0, 0, 0 - for X_batch, y_batch in test_ds: - _logits = net(X_batch) # is_train=False, disable dropout - val_loss += tl.cost.cross_entropy(_logits, y_batch, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_val_iter += 1 - print(" val loss: {}".format(val_loss / n_val_iter)) - print(" val acc: {}".format(val_acc / n_val_iter)) - -# use testing data to evaluate the model -net.eval() -test_loss, test_acc, n_iter = 0, 0, 0 -for X_batch, y_batch in test_ds: - _logits = net(X_batch) - test_loss += tl.cost.cross_entropy(_logits, y_batch, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - n_iter += 1 -print(" test loss: {}".format(test_loss / n_iter)) -print(" test acc: {}".format(test_acc / n_iter)) diff --git a/examples/quantized_net/tutorial_ternaryweight_mnist_cnn.py b/examples/quantized_net/tutorial_ternaryweight_mnist_cnn.py deleted file mode 100644 index a708d1f0e..000000000 --- a/examples/quantized_net/tutorial_ternaryweight_mnist_cnn.py +++ /dev/null @@ -1,102 +0,0 
@@ -#! /usr/bin/python -# -*- coding: utf-8 -*- - -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import (BatchNorm, Dense, Flatten, Input, MaxPool2d, TernaryConv2d, TernaryDense) -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1)) - -batch_size = 128 - - -def model(inputs_shape, n_class=10): - in_net = Input(inputs_shape, name='input') - net = TernaryConv2d(32, (5, 5), (1, 1), padding='SAME', b_init=None, name='bcnn1')(in_net) - net = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool1')(net) - net = BatchNorm(act=tl.act.htanh, name='bn1')(net) - - net = TernaryConv2d(64, (5, 5), (1, 1), padding='SAME', b_init=None, name='bcnn2')(net) - net = MaxPool2d((2, 2), (2, 2), padding='SAME', name='pool2')(net) - net = BatchNorm(act=tl.act.htanh, name='bn2')(net) - - net = Flatten('flatten')(net) - net = Dense(256, b_init=None, name='dense')(net) - net = BatchNorm(act=tl.act.htanh, name='bn3')(net) - - net = TernaryDense(n_class, b_init=None, name='bout')(net) - net = BatchNorm(name='bno')(net) - - net = Model(inputs=in_net, outputs=net, name='dorefanet') - return net - - -def _train_step(network, X_batch, y_batch, cost, train_op=tf.optimizers.Adam(learning_rate=0.0001), acc=None): - with tf.GradientTape() as tape: - y_pred = network(X_batch) - _loss = cost(y_pred, y_batch) - grad = tape.gradient(_loss, network.trainable_weights) - train_op.apply_gradients(zip(grad, network.trainable_weights)) - if acc is not None: - _acc = acc(y_pred, y_batch) - return _loss, _acc - else: - return _loss, None - - -def accuracy(_logits, y_batch): - return np.mean(np.equal(np.argmax(_logits, 1), y_batch)) - - -n_epoch = 200 -print_freq = 5 - -net = model([None, 28, 28, 1]) -train_op = tf.optimizers.Adam(learning_rate=0.0001) -cost = tl.cost.cross_entropy - -for epoch in range(n_epoch): - start_time = time.time() - train_loss, train_acc, n_batch = 0, 0, 0 - net.train() - - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - _loss, acc = _train_step(net, X_train_a, y_train_a, cost=cost, train_op=train_op, acc=accuracy) - train_loss += _loss - train_acc += acc - n_batch += 1 - - print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: %f" % (train_loss / n_batch)) - print(" train acc: %f" % (train_acc / n_batch)) - - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - print(" train loss: %f" % (train_loss / n_batch)) - print(" train acc: %f" % (train_acc / n_batch)) - val_loss, val_acc, val_batch = 0, 0, 0 - net.eval() - for X_val_a, y_val_a in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=True): - _logits = net(X_val_a) - val_loss += tl.cost.cross_entropy(_logits, y_val_a, name='eval_loss') - val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_val_a)) - val_batch += 1 - print(" val loss: {}".format(val_loss / val_batch)) - print(" val acc: {}".format(val_acc / val_batch)) - -net.test() -test_loss, test_acc, n_test_batch = 0, 0, 0 -for X_test_a, y_test_a in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=True): - _logits = net(X_test_a) - test_loss += tl.cost.cross_entropy(_logits, y_test_a, name='test_loss') - test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_test_a)) - n_test_batch += 1 -print(" test loss: %f" % 
(test_loss / n_test_batch)) -print(" test acc: %f" % (test_acc / n_test_batch)) diff --git a/examples/reinforcement_learning/.gitignore b/examples/reinforcement_learning/.gitignore deleted file mode 100644 index 92fdef002..000000000 --- a/examples/reinforcement_learning/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -model/ -image/ diff --git a/examples/reinforcement_learning/README.md b/examples/reinforcement_learning/README.md deleted file mode 100644 index 633009bbf..000000000 --- a/examples/reinforcement_learning/README.md +++ /dev/null @@ -1,364 +0,0 @@ -# Comprehensive Reinforcement Learning Tutorial - -![GitHub last commit (branch)](https://img.shields.io/github/last-commit/tensorlayer/tensorlayer/master.svg) -[![Supported TF Version](https://img.shields.io/badge/TensorFlow-2.0.0%2B-brightgreen.svg)](https://github.com/tensorflow/tensorflow/releases) -[![Documentation Status](https://readthedocs.org/projects/tensorlayer/badge/)](https://tensorlayer.readthedocs.io/) -[![Build Status](https://travis-ci.org/tensorlayer/tensorlayer.svg?branch=master)](https://travis-ci.org/tensorlayer/tensorlayer) -[![Downloads](http://pepy.tech/badge/tensorlayer)](http://pepy.tech/project/tensorlayer) - -
- -This repository contains implementations of the most popular reinforcement learning algorithms, powered by [Tensorflow 2.0](https://www.tensorflow.org/alpha/guide/effective_tf2) and Tensorlayer 2.0. We aim to make the reinforcement learning tutorial simple, transparent and straightforward, as this would not only benefit new learners of reinforcement learning, but also provide convenience for senior researchers to test their new ideas quickly. - -A corresponding [Springer textbook](https://deepreinforcementlearningbook.org) is also provided; you can get the free PDF if your institution has a Springer license. We also released [RLzoo](https://github.com/tensorlayer/RLzoo) for simple usage.
- -## Prerequisites: - -* python 3.5 -* tensorflow >= 2.0.0 or tensorflow-gpu >= 2.0.0a0 -* tensorlayer >= 2.0.1 -* tensorflow-probability - -*** If you meet the error `AttributeError: module 'tensorflow' has no attribute 'contrib'` when running the code after installing tensorflow-probability, try: - -`pip install --upgrade tf-nightly-2.0-preview tfp-nightly` - -## Status: Beta - -We are currently open to any suggestions or pull requests from you to make the reinforcement learning tutorial with TensorLayer 2.0 a better code repository for both new learners and senior researchers. Some of the algorithms mentioned in this markdown may not yet be available, since we are still trying to implement more RL algorithms and optimize their performance. However, the algorithms listed above will come out in a few weeks, and the repository will keep adding more advanced RL algorithms in the future. - -## To Use: - -For each tutorial, open a terminal and run: - - `python ***.py --train` for training and `python ***.py --test` for testing. - -The tutorial algorithms follow the same basic structure, as shown in the file [`./tutorial_format.py`](https://github.com/tensorlayer/tensorlayer/blob/reinforcement-learning/examples/reinforcement_learning/tutorial_format.py). - -The pretrained models and learning curves for each algorithm are stored [here](https://github.com/tensorlayer/pretrained-models). You can download the models and load the weights into the policies for testing. - -## Table of Contents: -| Algorithms | Action Space | Tutorial Env | Papers | -| --------------- | ------------ | -------------- | -------| -|**value-based**|||| -| Q-learning | Discrete | FrozenLake | [Technical note: Q-learning. Watkins et al. 1992](http://www.gatsby.ucl.ac.uk/~dayan/papers/cjch.pdf)| -| Deep Q-Network (DQN)| Discrete | FrozenLake | [Human-level control through deep reinforcement learning, Mnih et al. 2015.](https://www.nature.com/articles/nature14236/) | -| Prioritized Experience Replay | Discrete | Pong, CartPole | [Prioritized experience replay. Schaul et al. 2015.](https://arxiv.org/abs/1511.05952) | -|Dueling DQN|Discrete | Pong, CartPole |[Dueling network architectures for deep reinforcement learning. Wang et al. 2015.](https://arxiv.org/abs/1511.06581)| -|Double DQN| Discrete | Pong, CartPole |[Deep reinforcement learning with double q-learning. Van Hasselt et al. 2016.](https://arxiv.org/abs/1509.06461)| -|Noisy DQN|Discrete | Pong, CartPole |[Noisy networks for exploration. Fortunato et al. 2017.](https://arxiv.org/pdf/1706.10295.pdf)| -| Distributional DQN (C51)| Discrete | Pong, CartPole | [A distributional perspective on reinforcement learning. Bellemare et al. 2017.](https://arxiv.org/pdf/1707.06887.pdf) | -|**policy-based**|||| -|REINFORCE(PG) |Discrete/Continuous|CartPole | [Reinforcement learning: An introduction. Sutton et al. 2011.](https://www.cambridge.org/core/journals/robotica/article/robot-learning-edited-by-jonathan-h-connell-and-sridhar-mahadevan-kluwer-boston-19931997-xii240-pp-isbn-0792393651-hardback-21800-guilders-12000-8995/737FD21CA908246DF17779E9C20B6DF6)| -| Trust Region Policy Optimization (TRPO)| Discrete/Continuous | Pendulum | [Trust region policy optimization. Schulman et al. 2015.](https://arxiv.org/pdf/1502.05477.pdf) | -| Proximal Policy Optimization (PPO) |Discrete/Continuous |Pendulum| [Proximal policy optimization algorithms. Schulman et al. 2017.](https://arxiv.org/abs/1707.06347) |
-|Distributed Proximal Policy Optimization (DPPO)|Discrete/Continuous |Pendulum|[Emergence of locomotion behaviours in rich environments. Heess et al. 2017.](https://arxiv.org/abs/1707.02286)| -|**actor-critic**|||| -|Actor-Critic (AC)|Discrete/Continuous|CartPole| [Actor-critic algorithms. Konda et al. 2000.](https://papers.nips.cc/paper/1786-actor-critic-algorithms.pdf)| -| Asynchronous Advantage Actor-Critic (A3C)| Discrete/Continuous | BipedalWalker| [Asynchronous methods for deep reinforcement learning. Mnih et al. 2016.](https://arxiv.org/pdf/1602.01783.pdf) | -| DDPG|Discrete/Continuous |Pendulum| [Continuous Control With Deep Reinforcement Learning, Lillicrap et al. 2016.](https://arxiv.org/pdf/1509.02971.pdf) | -|TD3|Discrete/Continuous |Pendulum|[Addressing function approximation error in actor-critic methods. Fujimoto et al. 2018.](https://arxiv.org/pdf/1802.09477.pdf)| -|Soft Actor-Critic (SAC)|Discrete/Continuous |Pendulum|[Soft actor-critic algorithms and applications. Haarnoja et al. 2018.](https://arxiv.org/abs/1812.05905)| - -## Examples of RL Algorithms: - -* **Q-learning** - - Code: `./tutorial_Qlearning.py` - - Paper: [Technical Note: Q-Learning](http://www.gatsby.ucl.ac.uk/~dayan/papers/cjch.pdf) - - Description: - - ``` - Q-learning is a non-deep-learning method using TD learning, off-policy updates and e-greedy exploration. - - Central formula: - Q(S, A) <- Q(S, A) + alpha * (R + lambda * max_a Q(newS, a) - Q(S, A)), - where lambda denotes the reward discount factor. A minimal sketch of this update is given after the Prioritized Experience Replay entry below. - - See David Silver's RL Tutorial Lecture 5 - Q-Learning for more details. - ``` - -* **Deep Q-Network (DQN)** - - Code: `./tutorial_DQN.py` - - Paper: [Human-level control through deep reinforcement learning](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) - - [Playing Atari with Deep Reinforcement Learning](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf) - - Description: - - ``` - Deep Q-Network (DQN) is a method of TD learning, off-policy updates and e-greedy exploration (GLIE), with the Q-table replaced by a neural network Q(S, A; w). - - Central formula: - Q(S, A) <- Q(S, A) + alpha * (R + lambda * max_a Q(newS, a) - Q(S, A)), - with the network weights updated along the gradient - delta_w = alpha * (R + lambda * max_a Q(newS, a) - Q(S, A)) * grad_w Q(S, A). - - See David Silver's RL Tutorial Lecture 5 - Q-Learning for more details. - ``` - - -* **Double DQN / Dueling DQN / Noisy DQN** - - Code: `./tutorial_DQN_variants.py` - - Paper: [Deep Reinforcement Learning with Double Q-learning](https://arxiv.org/abs/1509.06461) - - Description: - - ``` - We implement Double DQN, Dueling DQN and Noisy DQN here. - - The max operator in standard DQN uses the same values both to select and to evaluate an action: - - Q(s_t, a_t) = R_{t+1} + gamma * max_a Q_target(s_{t+1}, a). - - Double DQN proposes the following target to address the overestimation problem of the max operator: - - Q(s_t, a_t) = R_{t+1} + gamma * Q_target(s_{t+1}, argmax_a Q(s_{t+1}, a)). - - Dueling DQN uses a dueling architecture in which the value of the state and the advantage of each action are estimated separately. - - Noisy DQN proposes to explore by adding parameter noise. - ``` - - -* **Prioritized Experience Replay** - - Code: `./tutorial_prioritized_replay.py` - - Paper: [Prioritized Experience Replay](https://arxiv.org/abs/1511.05952) - - Description: - - ``` - Prioritized experience replay is an efficient replay method that replays important transitions more frequently. A segment tree data structure is used to speed up indexing. - ```
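To make the central Q-learning formula above concrete, here is a minimal, illustrative sketch of the tabular algorithm. This is not one of the tutorial files: it assumes the classic Gym API and the FrozenLake environment from the table, and the hyperparameter values are arbitrary.

```python
# Illustrative sketch of tabular Q-learning with e-greedy exploration.
# Not a tutorial file; assumes the classic `gym` reset/step API.
import gym
import numpy as np

env = gym.make('FrozenLake-v0')
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount (lambda above), exploration rate

for episode in range(10000):
    s = env.reset()
    done = False
    while not done:
        # e-greedy exploration
        if np.random.rand() < epsilon:
            a = env.action_space.sample()
        else:
            a = int(np.argmax(Q[s]))
        s_next, r, done, _ = env.step(a)
        # central formula: Q(S, A) <- Q(S, A) + alpha * (R + lambda * max_a Q(newS, a) - Q(S, A))
        target = r + gamma * np.max(Q[s_next]) * (1.0 - float(done))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
```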
-* **Distributional DQN (C51)** - - Code: `./tutorial_C51.py` - - Paper: [A Distributional Perspective on Reinforcement Learning](https://arxiv.org/pdf/1707.06887.pdf) - - Description: - - ``` - The Categorical 51 (C51) algorithm is a distributional DQN, where 51 is the number of atoms. In this algorithm, instead of estimating the expected value, the value distribution over a series of continuous sub-intervals (atoms) is estimated. - ``` - -* **Actor-Critic (AC)** - - Code: `./tutorial_AC.py` - - Paper: [Actor-Critic Algorithms](https://papers.nips.cc/paper/1786-actor-critic-algorithms.pdf) - - Description: - - ``` - The implementation of Advantage Actor-Critic, using the TD error as the advantage. - ``` - - -* **Asynchronous Advantage Actor-Critic (A3C)** - - Code: `./tutorial_A3C.py` - - Paper: [Asynchronous Methods for Deep Reinforcement Learning](https://arxiv.org/pdf/1602.01783.pdf) - - Description: - - ``` - The implementation of Asynchronous Advantage Actor-Critic (A3C), using multi-threading for distributed policy learning on an Actor-Critic structure. - ``` - - -* **Soft Actor-Critic (SAC)** - - Code: `./tutorial_SAC.py` - - Paper: [Soft Actor-Critic Algorithms and Applications](https://arxiv.org/pdf/1812.05905.pdf) - - Description: - - ``` - The actor policy in SAC is stochastic, with off-policy training. The 'soft' in SAC indicates the trade-off between the entropy and the expected return; the additional entropy term encourages a more explorative policy. This implementation contains an automatic update for the entropy factor. - - This version of the Soft Actor-Critic (SAC) implementation contains 5 networks: - 2 Q-networks, 2 target Q-networks and 1 policy network. - ``` - - -* **Vanilla Policy Gradient (PG or REINFORCE)** - - Code: `./tutorial_PG.py` - - Paper: [Policy Gradient Methods for Reinforcement Learning with Function Approximation](https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf) - - Description: - - ``` - The policy gradient algorithm works by updating policy parameters via stochastic gradient ascent on policy performance. It is an on-policy algorithm that can be used for environments with either discrete or continuous action spaces. A minimal sketch of one update step is given after the DDPG entry below. - - To apply it to a continuous action space, you need to change the last softmax layer and the choose_action function. - ``` - - -* **Deep Deterministic Policy Gradient (DDPG)** - - Code: `./tutorial_DDPG.py` - - Paper: [Continuous Control With Deep Reinforcement Learning](https://arxiv.org/pdf/1509.02971.pdf) - - Description: - - ``` - An algorithm that concurrently learns a Q-function and a policy. - - It uses off-policy data and the Bellman equation to learn the Q-function, and uses the Q-function to learn the policy. - ```
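As a concrete illustration of the vanilla policy-gradient update described above, here is a minimal sketch of one REINFORCE step for a discrete action space. This is illustrative only, not the tutorial implementation; `model` is an assumed policy network that maps a batch of states to action logits, and `optimizer` is any TF2 optimizer.

```python
# Illustrative sketch of one REINFORCE update; not the tutorial code.
import numpy as np
import tensorflow as tf

def reinforce_update(model, optimizer, states, actions, rewards, gamma=0.99):
    """One stochastic-gradient-ascent step on log pi(a|s) weighted by returns."""
    # compute discounted returns G_t backwards through one episode
    returns = np.zeros(len(rewards), dtype=np.float32)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction

    with tf.GradientTape() as tape:
        logits = model(np.asarray(states, dtype=np.float32))  # action logits
        neg_logp = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=np.asarray(actions, dtype=np.int32), logits=logits)
        loss = tf.reduce_mean(neg_logp * returns)  # minimizing this ascends performance
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
```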
-* **Twin Delayed DDPG (TD3)** - - Code: `./tutorial_TD3.py` - - Paper: [Addressing Function Approximation Error in Actor-Critic Methods](https://arxiv.org/pdf/1802.09477.pdf) - - Description: - - ``` - DDPG suffers from problems like overestimation of Q-values and sensitivity to hyper-parameters. - - Twin Delayed DDPG (TD3) is a variant of DDPG with several tricks: - - - Trick One: Clipped Double-Q Learning. TD3 learns two Q-functions instead of one (hence “twin”), and uses the smaller of the two Q-values to form the targets in the Bellman error loss functions. - - Trick Two: “Delayed” Policy Updates. TD3 updates the policy (and target networks) less frequently than the Q-function. - - Trick Three: Target Policy Smoothing. TD3 adds noise to the target action, to make it harder for the policy to exploit Q-function errors, by smoothing out Q along changes in action. - - The implementation of TD3 includes 6 networks: - 2 Q-networks, 2 target Q-networks, 1 policy network, 1 target policy network. - - The actor policy in TD3 is deterministic, with Gaussian exploration noise. - ``` - - -* **Trust Region Policy Optimization (TRPO)** - - Code: `./tutorial_TRPO.py` - - Paper: [Trust Region Policy Optimization](https://arxiv.org/pdf/1502.05477.pdf) - - Description: - - ``` - A PG method with a large step can collapse the policy performance, and even a small step in parameter space can lead to a large difference in policy. - - TRPO constrains the step in policy space using the KL divergence (rather than in parameter space), which can monotonically improve performance and avoid collapsed updates. - ``` - - -* **Proximal Policy Optimization (PPO)** - - Code: `./tutorial_PPO.py` - - Paper: [Proximal Policy Optimization Algorithms](https://arxiv.org/pdf/1707.06347.pdf) - - Description: - - ``` - A simple version of Proximal Policy Optimization (PPO) using a single thread. - - PPO is a family of first-order methods that use a few tricks to keep new policies close to the old ones; a minimal sketch of its clipped objective is given after the DPPO entry below. - - PPO methods are significantly simpler to implement, and empirically seem to perform at least as well as TRPO. - ``` - - -* **Distributed Proximal Policy Optimization (DPPO)** - - Code: `./tutorial_DPPO.py` - - Paper: [Emergence of Locomotion Behaviours in Rich Environments](https://arxiv.org/pdf/1707.02286.pdf) - - Description: - - ``` - A distributed version of OpenAI's Proximal Policy Optimization (PPO). - - The workers are distributed to collect data in parallel; then the workers' rollouts are stopped and PPO is trained on the collected data. - ```
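To make the PPO description above concrete, here is a minimal sketch of the clipped surrogate loss that keeps the new policy close to the old one. This is illustrative only, not the tutorial code; the per-sample log-probability tensors and advantage estimates are assumed to be computed elsewhere.

```python
# Illustrative sketch of PPO's clipped surrogate objective; not the tutorial code.
import tensorflow as tf

def ppo_clip_loss(logp_new, logp_old, advantages, clip_ratio=0.2):
    """Clipped surrogate loss (negated so an optimizer can minimize it)."""
    ratio = tf.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    clipped = tf.clip_by_value(ratio, 1.0 - clip_ratio, 1.0 + clip_ratio)
    # pessimistic (minimum) objective over the raw and clipped ratios
    return -tf.reduce_mean(tf.minimum(ratio * advantages, clipped * advantages))
```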
-
-## Recommended Materials
-
-- [RL video lectures by Hung-yi Lee (in Chinese)](https://www.bilibili.com/video/av58458003?from=search&seid=962941912089186406)
-- [CS885 Spring 2018 - Reinforcement Learning by Pascal Poupart](https://cs.uwaterloo.ca/~ppoupart/teaching/cs885-spring18/schedule.html)
-- [YouTube videos by David Silver, 2015 @ UCL](https://www.youtube.com/playlist?list=PLzuuYNsE1EZAXYR4FJ75jcJseBmo4KQ9-)
-- [Teaching materials by David Silver @ UCL](http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Teaching.html)
-- [Deep Reinforcement Learning: Fundamentals, Research and Applications by Hao Dong, Zihan Ding, Shanghang Zhang et al.](http://deep-reinforcement-learning-book.github.io/)
diff --git a/examples/reinforcement_learning/tutorial_A3C.py b/examples/reinforcement_learning/tutorial_A3C.py
deleted file mode 100644
index f20530ebf..000000000
--- a/examples/reinforcement_learning/tutorial_A3C.py
+++ /dev/null
@@ -1,323 +0,0 @@
-"""
-Asynchronous Advantage Actor Critic (A3C) with Continuous Action Space.
-
-Actor Critic History
-----------------------
-A3C > DDPG (for continuous action space) > AC
-
-Advantage
-----------
-Trains faster and is more stable than AC.
-
-Disadvantage
--------------
-Has bias.
-
-Reference
-----------
-Original Paper: https://arxiv.org/pdf/1602.01783.pdf
-MorvanZhou's tutorial: https://morvanzhou.github.io/tutorials/
-MorvanZhou's code: https://github.com/MorvanZhou/Reinforcement-learning-with-tensorflow/blob/master/experiments/Solve_BipedalWalker/A3C.py
-
-Environment
------------
-BipedalWalker-v2 : https://gym.openai.com/envs/BipedalWalker-v2
-
-Reward is given for moving forward, totalling 300+ points up to the far end.
-If the robot falls, it gets -100. Applying motor torque costs a small amount of
-points; a more optimal agent will get a better score. The state consists of hull
-angle speed, angular velocity, horizontal speed, vertical speed, positions of
-joints and joint angular speeds, legs' contact with the ground, and 10 lidar
-rangefinder measurements. There are no coordinates in the state vector.
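-
-Note that during training this script rescales the fall penalty from -100 to -2
-(see Worker.work below), a reward-shaping trick.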
- -Prerequisites --------------- -tensorflow 2.0.0a0 -tensorflow-probability 0.6.0 -tensorlayer 2.0.0 -&& -pip install box2d box2d-kengz --user - -To run ------- -python tutorial_A3C.py --train/test - -""" - -import argparse -import multiprocessing -import os -import threading -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf - -import tensorflow_probability as tfp -import tensorlayer as tl - -tfd = tfp.distributions - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# add arguments in command --train/test -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### - -ENV_ID = 'BipedalWalker-v2' # BipedalWalkerHardcore-v2 BipedalWalker-v2 LunarLanderContinuous-v2 -RANDOM_SEED = 2 # random seed, can be either an int number or None -RENDER = False # render while training - -ALG_NAME = 'A3C' -N_WORKERS = multiprocessing.cpu_count() # number of workers according to number of cores in cpu -# N_WORKERS = 2 # manually set number of workers -MAX_GLOBAL_EP = 15000 # number of training episodes -TEST_EPISODES = 10 # number of training episodes -GLOBAL_NET_SCOPE = 'Global_Net' -UPDATE_GLOBAL_ITER = 10 # update global policy after several episodes -GAMMA = 0.99 # reward discount factor -ENTROPY_BETA = 0.005 # factor for entropy boosted exploration -LR_A = 0.00005 # learning rate for actor -LR_C = 0.0001 # learning rate for critic -GLOBAL_RUNNING_R = [] -GLOBAL_EP = 0 # will increase during training, stop training when it >= MAX_GLOBAL_EP - -################### Asynchronous Advantage Actor Critic (A3C) #################################### - - -class ACNet(object): - - def __init__(self, scope): - self.scope = scope - - w_init = tf.keras.initializers.glorot_normal(seed=None) # initializer, glorot=xavier - - def get_actor(input_shape): # policy network - with tf.name_scope(self.scope): - ni = tl.layers.Input(input_shape, name='in') - nn = tl.layers.Dense(n_units=500, act=tf.nn.relu6, W_init=w_init, name='la')(ni) - nn = tl.layers.Dense(n_units=300, act=tf.nn.relu6, W_init=w_init, name='la2')(nn) - mu = tl.layers.Dense(n_units=N_A, act=tf.nn.tanh, W_init=w_init, name='mu')(nn) - sigma = tl.layers.Dense(n_units=N_A, act=tf.nn.softplus, W_init=w_init, name='sigma')(nn) - return tl.models.Model(inputs=ni, outputs=[mu, sigma], name=scope + '/Actor') - - self.actor = get_actor([None, N_S]) - self.actor.train() # train mode for Dropout, BatchNorm - - def get_critic(input_shape): # we use Value-function here, but not Q-function. 
- with tf.name_scope(self.scope): - ni = tl.layers.Input(input_shape, name='in') - nn = tl.layers.Dense(n_units=500, act=tf.nn.relu6, W_init=w_init, name='lc')(ni) - nn = tl.layers.Dense(n_units=300, act=tf.nn.relu6, W_init=w_init, name='lc2')(nn) - v = tl.layers.Dense(n_units=1, W_init=w_init, name='v')(nn) - return tl.models.Model(inputs=ni, outputs=v, name=scope + '/Critic') - - self.critic = get_critic([None, N_S]) - self.critic.train() # train mode for Dropout, BatchNorm - - @tf.function # convert numpy functions to tf.Operations in the TFgraph, return tensor - def update_global( - self, buffer_s, buffer_a, buffer_v_target, globalAC - ): # refer to the global Actor-Crtic network for updating it with samples - ''' update the global critic ''' - with tf.GradientTape() as tape: - self.v = self.critic(buffer_s) - self.v_target = buffer_v_target - td = tf.subtract(self.v_target, self.v, name='TD_error') - self.c_loss = tf.reduce_mean(tf.square(td)) - self.c_grads = tape.gradient(self.c_loss, self.critic.trainable_weights) - OPT_C.apply_gradients(zip(self.c_grads, globalAC.critic.trainable_weights)) # local grads applies to global net - # del tape # Drop the reference to the tape - ''' update the global actor ''' - with tf.GradientTape() as tape: - self.mu, self.sigma = self.actor(buffer_s) - self.test = self.sigma[0] - self.mu, self.sigma = self.mu * A_BOUND[1], self.sigma + 1e-5 - - normal_dist = tfd.Normal(self.mu, self.sigma) # no tf.contrib for tf2.0 - self.a_his = buffer_a # float32 - log_prob = normal_dist.log_prob(self.a_his) - exp_v = log_prob * td # td is from the critic part, no gradients for it - entropy = normal_dist.entropy() # encourage exploration - self.exp_v = ENTROPY_BETA * entropy + exp_v - self.a_loss = tf.reduce_mean(-self.exp_v) - self.a_grads = tape.gradient(self.a_loss, self.actor.trainable_weights) - OPT_A.apply_gradients(zip(self.a_grads, globalAC.actor.trainable_weights)) # local grads applies to global net - return self.test # for test purpose - - @tf.function - def pull_global(self, globalAC): # run by a local, pull weights from the global nets - for l_p, g_p in zip(self.actor.trainable_weights, globalAC.actor.trainable_weights): - l_p.assign(g_p) - for l_p, g_p in zip(self.critic.trainable_weights, globalAC.critic.trainable_weights): - l_p.assign(g_p) - - def get_action(self, s, greedy=False): # run by a local - s = s[np.newaxis, :] - self.mu, self.sigma = self.actor(s) - - with tf.name_scope('wrap_a_out'): - self.mu, self.sigma = self.mu * A_BOUND[1], self.sigma + 1e-5 - if greedy: - return self.mu.numpy()[0] - normal_dist = tfd.Normal(self.mu, self.sigma) # for continuous action space - self.A = tf.clip_by_value(tf.squeeze(normal_dist.sample(1), axis=0), *A_BOUND) - return self.A.numpy()[0] - - def save(self): # save trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_npz(self.actor.trainable_weights, name=os.path.join(path, 'model_actor.npz')) - tl.files.save_npz(self.critic.trainable_weights, name=os.path.join(path, 'model_critic.npz')) - - def load(self): # load trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_and_assign_npz(name=os.path.join(path, 'model_actor.npz'), network=self.actor) - tl.files.load_and_assign_npz(name=os.path.join(path, 'model_critic.npz'), network=self.critic) - - -class Worker(object): - - def __init__(self, name): - self.env = gym.make(ENV_ID) - self.name = name - self.AC = ACNet(name) - - # def 
work(self): - def work(self, globalAC): - global GLOBAL_RUNNING_R, GLOBAL_EP - total_step = 1 - buffer_s, buffer_a, buffer_r = [], [], [] - while not COORD.should_stop() and GLOBAL_EP < MAX_GLOBAL_EP: - s = self.env.reset() - ep_r = 0 - while True: - # visualize Worker_0 during training - if RENDER and self.name == 'Worker_0' and total_step % 30 == 0: - self.env.render() - s = s.astype('float32') # double to float - a = self.AC.get_action(s) - s_, r, done, _info = self.env.step(a) - - s_ = s_.astype('float32') # double to float - # set robot falls reward to -2 instead of -100 - if r == -100: r = -2 - - ep_r += r - buffer_s.append(s) - buffer_a.append(a) - buffer_r.append(r) - - if total_step % UPDATE_GLOBAL_ITER == 0 or done: # update global and assign to local net - - if done: - v_s_ = 0 # terminal - else: - v_s_ = self.AC.critic(s_[np.newaxis, :])[0, 0] # reduce dim from 2 to 0 - - buffer_v_target = [] - - for r in buffer_r[::-1]: # reverse buffer r - v_s_ = r + GAMMA * v_s_ - buffer_v_target.append(v_s_) - - buffer_v_target.reverse() - - buffer_s = tf.convert_to_tensor(np.vstack(buffer_s)) - buffer_a = tf.convert_to_tensor(np.vstack(buffer_a)) - buffer_v_target = tf.convert_to_tensor(np.vstack(buffer_v_target).astype('float32')) - - # update gradients on global network - self.AC.update_global(buffer_s, buffer_a, buffer_v_target, globalAC) - buffer_s, buffer_a, buffer_r = [], [], [] - - # update local network from global network - self.AC.pull_global(globalAC) - - s = s_ - total_step += 1 - if done: - if len(GLOBAL_RUNNING_R) == 0: # record running episode reward - GLOBAL_RUNNING_R.append(ep_r) - else: # moving average - GLOBAL_RUNNING_R.append(0.95 * GLOBAL_RUNNING_R[-1] + 0.05 * ep_r) - print('Training | {}, Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}' \ - .format(self.name, GLOBAL_EP, MAX_GLOBAL_EP, ep_r, time.time() - T0)) - GLOBAL_EP += 1 - break - - -if __name__ == "__main__": - - env = gym.make(ENV_ID) - # reproducible - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) - - N_S = env.observation_space.shape[0] - N_A = env.action_space.shape[0] - - A_BOUND = [env.action_space.low, env.action_space.high] - A_BOUND[0] = A_BOUND[0].reshape(1, N_A) - A_BOUND[1] = A_BOUND[1].reshape(1, N_A) - - with tf.device("/cpu:0"): - GLOBAL_AC = ACNet(GLOBAL_NET_SCOPE) # we only need its params - - T0 = time.time() - if args.train: - # ============================= TRAINING =============================== - with tf.device("/cpu:0"): - OPT_A = tf.optimizers.RMSprop(LR_A, name='RMSPropA') - OPT_C = tf.optimizers.RMSprop(LR_C, name='RMSPropC') - workers = [] - # Create worker - for i in range(N_WORKERS): - i_name = 'Worker_%i' % i # worker name - workers.append(Worker(i_name)) - - COORD = tf.train.Coordinator() - - # start TF threading - worker_threads = [] - for worker in workers: - job = lambda: worker.work(GLOBAL_AC) - t = threading.Thread(target=job) - t.start() - worker_threads.append(t) - COORD.join(worker_threads) - - GLOBAL_AC.save() - - plt.plot(GLOBAL_RUNNING_R) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - # ============================= EVALUATION ============================= - GLOBAL_AC.load() - for episode in range(TEST_EPISODES): - s = env.reset() - episode_reward = 0 - while True: - env.render() - s = s.astype('float32') # double to float - a = GLOBAL_AC.get_action(s, greedy=True) - s, r, d, _ = env.step(a) - episode_reward += r - if d: - break - print( - 
'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TEST_EPISODES, episode_reward, - time.time() - T0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_AC.py b/examples/reinforcement_learning/tutorial_AC.py deleted file mode 100644 index c497e714a..000000000 --- a/examples/reinforcement_learning/tutorial_AC.py +++ /dev/null @@ -1,277 +0,0 @@ -""" -Actor-Critic -------------- -It uses TD-error as the Advantage. - -Actor Critic History ----------------------- -A3C > DDPG > AC - -Advantage ----------- -AC converge faster than Policy Gradient. - -Disadvantage (IMPORTANT) ------------------------- -The Policy is oscillated (difficult to converge), DDPG can solve -this problem using advantage of DQN. - -Reference ----------- -paper: https://papers.nips.cc/paper/1786-actor-critic-algorithms.pdf -View more on MorvanZhou's tutorial page: https://morvanzhou.github.io/tutorials/ - -Environment ------------- -CartPole-v0: https://gym.openai.com/envs/CartPole-v0 - -A pole is attached by an un-actuated joint to a cart, which moves along a -frictionless track. The system is controlled by applying a force of +1 or -1 -to the cart. The pendulum starts upright, and the goal is to prevent it from -falling over. - -A reward of +1 is provided for every timestep that the pole remains upright. -The episode ends when the pole is more than 15 degrees from vertical, or the -cart moves more than 2.4 units from the center. - - -Prerequisites --------------- -tensorflow >=2.0.0a0 -tensorlayer >=2.0.0 - -To run ------- -python tutorial_AC.py --train/test - -""" -import argparse -import time -import matplotlib.pyplot as plt -import os - -import gym -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# add arguments in command --train/test -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### - -ENV_ID = 'CartPole-v1' # environment id -RANDOM_SEED = 2 # random seed, can be either an int number or None -RENDER = False # render while training - -ALG_NAME = 'AC' -TRAIN_EPISODES = 200 # number of overall episodes for training -TEST_EPISODES = 10 # number of overall episodes for testing -MAX_STEPS = 500 # maximum time step in one episode -LAM = 0.9 # reward discount in TD error -LR_A = 0.001 # learning rate for actor -LR_C = 0.01 # learning rate for critic - - - -############################### Actor-Critic #################################### - - -class Actor(object): - - def __init__(self, state_dim, action_num, lr=0.001): - - input_layer = tl.layers.Input([None, state_dim], name='state') - layer = tl.layers.Dense( - n_units=30, act=tf.nn.relu6, W_init=tf.random_uniform_initializer(0, 0.01), name='hidden' - )(input_layer) - layer = tl.layers.Dense(n_units=action_num, name='actions')(layer) - self.model = tl.models.Model(inputs=input_layer, outputs=layer, name="Actor") - - self.model.train() - self.optimizer = tf.optimizers.Adam(lr) - - def learn(self, state, action, td_error): - with tf.GradientTape() as tape: - _logits = self.model(np.array([state])) - ## cross-entropy loss weighted by td-error (advantage), - # the cross-entropy mearsures the difference of two probability distributions: the predicted logits and 
sampled action distribution, - # then weighted by the td-error: small difference of real and predict actions for large td-error (advantage); and vice versa. - _exp_v = tl.rein.cross_entropy_reward_loss(logits=_logits, actions=[action], rewards=td_error[0]) - grad = tape.gradient(_exp_v, self.model.trainable_weights) - self.optimizer.apply_gradients(zip(grad, self.model.trainable_weights)) - return _exp_v - - def get_action(self, state, greedy=False): - _logits = self.model(np.array([state])) - _probs = tf.nn.softmax(_logits).numpy() - if greedy: - return np.argmax(_probs.ravel()) - return tl.rein.choice_action_by_probs(_probs.ravel()) # sample according to probability distribution - - def save(self): # save trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_npz(self.model.trainable_weights, name=os.path.join(path, 'model_actor.npz')) - - def load(self): # load trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_and_assign_npz(name=os.path.join(path, 'model_actor.npz'), network=self.model) - - -class Critic(object): - - def __init__(self, state_dim, lr=0.01): - input_layer = tl.layers.Input([1, state_dim], name='state') - layer = tl.layers.Dense( - n_units=30, act=tf.nn.relu6, W_init=tf.random_uniform_initializer(0, 0.01), name='hidden' - )(input_layer) - layer = tl.layers.Dense(n_units=1, act=None, name='value')(layer) - self.model = tl.models.Model(inputs=input_layer, outputs=layer, name="Critic") - self.model.train() - - self.optimizer = tf.optimizers.Adam(lr) - - def learn(self, state, reward, state_, done): - d = 0 if done else 1 - v_ = self.model(np.array([state_])) - with tf.GradientTape() as tape: - v = self.model(np.array([state])) - ## TD_error = r + d * lambda * V(newS) - V(S) - td_error = reward + d * LAM * v_ - v - loss = tf.square(td_error) - grad = tape.gradient(loss, self.model.trainable_weights) - self.optimizer.apply_gradients(zip(grad, self.model.trainable_weights)) - return td_error - - def save(self): # save trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_npz(self.model.trainable_weights, name=os.path.join(path, 'model_critic.npz')) - - def load(self): # load trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_and_assign_npz(name=os.path.join(path, 'model_critic.npz'), network=self.model) - - -if __name__ == '__main__': - ''' - choose environment - 1. Openai gym: - env = gym.make() - 2. 
DeepMind Control Suite: - env = dm_control2gym.make() - ''' - env = gym.make(ENV_ID).unwrapped - # dm_control2gym.create_render_mode('example mode', show=True, return_pixel=False, height=240, width=320, camera_id=-1, overlays=(), - # depth=False, scene_option=None) - # env = dm_control2gym.make(domain_name="cartpole", task_name="balance") - - env.seed(RANDOM_SEED) # reproducible - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) # reproducible - - N_F = env.observation_space.shape[0] - N_A = env.action_space.n - - print("observation dimension: %d" % N_F) # 4 - print("observation high: %s" % env.observation_space.high) # [ 2.4 , inf , 0.41887902 , inf] - print("observation low : %s" % env.observation_space.low) # [-2.4 , -inf , -0.41887902 , -inf] - print("num of actions: %d" % N_A) # 2 : left or right - - actor = Actor(state_dim=N_F, action_num=N_A, lr=LR_A) - # we need a good teacher, so the teacher should learn faster than the actor - critic = Critic(state_dim=N_F, lr=LR_C) - - t0 = time.time() - if args.train: - all_episode_reward = [] - for episode in range(TRAIN_EPISODES): - state = env.reset().astype(np.float32) - step = 0 # number of step in this episode - episode_reward = 0 # rewards of all steps - while True: - if RENDER: env.render() - - action = actor.get_action(state) - - state_new, reward, done, info = env.step(action) - state_new = state_new.astype(np.float32) - - if done: reward = -20 # reward shaping trick - # these may helpful in some tasks - # if abs(s_new[0]) >= env.observation_space.high[0]: - # # cart moves more than 2.4 units from the center - # r = -20 - # reward for the distance between cart to the center - # r -= abs(s_new[0]) * .1 - - episode_reward += reward - - try: - td_error = critic.learn( - state, reward, state_new, done - ) # learn Value-function : gradient = grad[r + lambda * V(s_new) - V(s)] - actor.learn(state, action, td_error) # learn Policy : true_gradient = grad[logPi(s, a) * td_error] - except KeyboardInterrupt: # if Ctrl+C at running actor.learn(), then save model, or exit if not at actor.learn() - actor.save() - critic.save() - - state = state_new - step += 1 - - if done or step >= MAX_STEPS: - break - - if episode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - - print('Training | Episode: {}/{} | Episode Reward: {:.0f} | Running Time: {:.4f}' \ - .format(episode + 1, TRAIN_EPISODES, episode_reward, time.time() - t0)) - - # Early Stopping for quick check - if step >= MAX_STEPS: - print("Early Stopping") # Hao Dong: it is important for this task - break - actor.save() - critic.save() - - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - actor.load() - critic.load() - - for episode in range(TEST_EPISODES): - episode_time = time.time() - state = env.reset().astype(np.float32) - t = 0 # number of step in this episode - episode_reward = 0 - while True: - env.render() - action = actor.get_action(state, greedy=True) - state_new, reward, done, info = env.step(action) - state_new = state_new.astype(np.float32) - if done: reward = -20 - - episode_reward += reward - state = state_new - t += 1 - - if done or t >= MAX_STEPS: - print('Testing | Episode: {}/{} | Episode Reward: {:.0f} | Running Time: {:.4f}' \ - .format(episode + 1, TEST_EPISODES, episode_reward, time.time() - t0)) - break diff --git 
a/examples/reinforcement_learning/tutorial_C51.py b/examples/reinforcement_learning/tutorial_C51.py deleted file mode 100644 index 50b82d66e..000000000 --- a/examples/reinforcement_learning/tutorial_C51.py +++ /dev/null @@ -1,343 +0,0 @@ -""" -C51 Algorithm ------------------------- -Categorical 51 distributional RL algorithm, 51 means the number of atoms. In -this algorithm, instead of estimating actual expected value, value distribution -over a series of continuous sub-intervals (atoms) is considered. -Reference: ------------------------- -Bellemare M G, Dabney W, Munos R. A distributional perspective on reinforcement -learning[C]//Proceedings of the 34th International Conference on Machine -Learning-Volume 70. JMLR. org, 2017: 449-458. -Environment: ------------------------- -Cartpole and Pong in OpenAI Gym -Requirements: ------------------------- -tensorflow>=2.0.0a0 -tensorlayer>=2.0.0 -To run: ------------------------- -python tutorial_C51.py --mode=train -python tutorial_C51.py --mode=test --save_path=c51/8000.npz -""" -import argparse -import os -import random -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -parser = argparse.ArgumentParser() -parser.add_argument('--train', dest='train', action='store_true', default=True) -parser.add_argument('--test', dest='test', action='store_true', default=True) -parser.add_argument( - '--save_path', default=None, help='folder to save if mode == train else model path,' - 'qnet will be saved once target net update' -) -parser.add_argument('--seed', help='random seed', type=int, default=0) -parser.add_argument('--env_id', default='CartPole-v0', help='CartPole-v0 or PongNoFrameskip-v4') -args = parser.parse_args() - -random.seed(args.seed) -np.random.seed(args.seed) -tf.random.set_seed(args.seed) # reproducible -env_id = args.env_id -env = gym.make(env_id) -env.seed(args.seed) -alg_name = 'C51' - -# #################### hyper parameters #################### -if env_id == 'CartPole-v0': - qnet_type = 'MLP' - number_timesteps = 10000 # total number of time steps to train on - explore_timesteps = 100 - # epsilon-greedy schedule, final exploit prob is 0.99 - epsilon = lambda i_iter: 1 - 0.99 * min(1, i_iter / explore_timesteps) - lr = 5e-3 # learning rate - buffer_size = 1000 # replay buffer size - target_q_update_freq = 50 # how frequency target q net update - ob_scale = 1.0 # scale observations - clipnorm = None -else: - # reward will increase obviously after 1e5 time steps - qnet_type = 'CNN' - number_timesteps = int(1e6) # total number of time steps to train on - explore_timesteps = 1e5 - # epsilon-greedy schedule, final exploit prob is 0.99 - epsilon = lambda i_iter: 1 - 0.99 * min(1, i_iter / explore_timesteps) - lr = 1e-4 # learning rate - buffer_size = 10000 # replay buffer size - target_q_update_freq = 200 # how frequency target q net update - ob_scale = 1.0 / 255 # scale observations - clipnorm = 10 - -in_dim = env.observation_space.shape -out_dim = env.action_space.n -reward_gamma = 0.99 # reward discount -batch_size = 32 # batch size for sampling from replay buffer -warm_start = buffer_size / 10 # sample times befor learning -atom_num = 51 -min_value = -10 -max_value = 10 -vrange = np.linspace(min_value, max_value, atom_num) -deltaz = float(max_value - min_value) / (atom_num - 1) - - -# ############################## Network #################################### -class MLP(tl.models.Model): - - def __init__(self, name): - super(MLP, 
self).__init__(name=name) - self.h1 = tl.layers.Dense(64, tf.nn.tanh, in_channels=in_dim[0], W_init=tf.initializers.GlorotUniform()) - self.qvalue = tl.layers.Dense( - out_dim * atom_num, in_channels=64, name='q', W_init=tf.initializers.GlorotUniform() - ) - self.reshape = tl.layers.Reshape((-1, out_dim, atom_num)) - - def forward(self, ni): - qvalues = self.qvalue(self.h1(ni)) - return tf.nn.log_softmax(self.reshape(qvalues), 2) - - -class CNN(tl.models.Model): - - def __init__(self, name): - super(CNN, self).__init__(name=name) - h, w, in_channels = in_dim - dense_in_channels = 64 * ((h - 28) // 8) * ((w - 28) // 8) - self.conv1 = tl.layers.Conv2d( - 32, (8, 8), (4, 4), tf.nn.relu, 'VALID', in_channels=in_channels, name='conv2d_1', - W_init=tf.initializers.GlorotUniform() - ) - self.conv2 = tl.layers.Conv2d( - 64, (4, 4), (2, 2), tf.nn.relu, 'VALID', in_channels=32, name='conv2d_2', - W_init=tf.initializers.GlorotUniform() - ) - self.conv3 = tl.layers.Conv2d( - 64, (3, 3), (1, 1), tf.nn.relu, 'VALID', in_channels=64, name='conv2d_3', - W_init=tf.initializers.GlorotUniform() - ) - self.flatten = tl.layers.Flatten(name='flatten') - self.preq = tl.layers.Dense( - 256, tf.nn.relu, in_channels=dense_in_channels, name='pre_q', W_init=tf.initializers.GlorotUniform() - ) - self.qvalue = tl.layers.Dense( - out_dim * atom_num, in_channels=256, name='q', W_init=tf.initializers.GlorotUniform() - ) - self.reshape = tl.layers.Reshape((-1, out_dim, atom_num)) - - def forward(self, ni): - feature = self.flatten(self.conv3(self.conv2(self.conv1(ni)))) - qvalues = self.qvalue(self.preq(feature)) - return tf.nn.log_softmax(self.reshape(qvalues), 2) - - -# ############################## Replay #################################### -class ReplayBuffer(object): - - def __init__(self, size): - self._storage = [] - self._maxsize = size - self._next_idx = 0 - - def __len__(self): - return len(self._storage) - - def add(self, *args): - if self._next_idx >= len(self._storage): - self._storage.append(args) - else: - self._storage[self._next_idx] = args - self._next_idx = (self._next_idx + 1) % self._maxsize - - def _encode_sample(self, idxes): - b_o, b_a, b_r, b_o_, b_d = [], [], [], [], [] - for i in idxes: - o, a, r, o_, d = self._storage[i] - b_o.append(o) - b_a.append(a) - b_r.append(r) - b_o_.append(o_) - b_d.append(d) - return ( - np.stack(b_o).astype('float32') * ob_scale, - np.stack(b_a).astype('int32'), - np.stack(b_r).astype('float32'), - np.stack(b_o_).astype('float32') * ob_scale, - np.stack(b_d).astype('float32'), - ) - - def sample(self, batch_size): - indexes = range(len(self._storage)) - idxes = [random.choice(indexes) for _ in range(batch_size)] - return self._encode_sample(idxes) - - -# ############################# Functions ################################### -def huber_loss(x): - """Loss function for value""" - return tf.where(tf.abs(x) < 1, tf.square(x) * 0.5, tf.abs(x) - 0.5) - - -def sync(net, net_tar): - """Copy q network to target q network""" - for var, var_tar in zip(net.trainable_weights, net_tar.trainable_weights): - var_tar.assign(var) - - -# ############################### DQN ##################################### -class DQN(object): - - def __init__(self): - model = MLP if qnet_type == 'MLP' else CNN - self.qnet = model('q') - if args.train: - self.qnet.train() - self.targetqnet = model('targetq') - self.targetqnet.infer() - sync(self.qnet, self.targetqnet) - else: - self.qnet.infer() - self.load(args.save_path) - self.niter = 0 - if clipnorm is not None: - self.optimizer = 
tf.optimizers.Adam(learning_rate=lr, clipnorm=clipnorm) - else: - self.optimizer = tf.optimizers.Adam(learning_rate=lr) - - def get_action(self, obv): - eps = epsilon(self.niter) - if args.train and random.random() < eps: - return int(random.random() * out_dim) - else: - obv = np.expand_dims(obv, 0).astype('float32') * ob_scale - qdist = np.exp(self._qvalues_func(obv).numpy()) - qvalues = (qdist * vrange).sum(-1) - return qvalues.argmax(1)[0] - - @tf.function - def _qvalues_func(self, obv): - return self.qnet(obv) - - def train(self, b_o, b_a, b_r, b_o_, b_d): - # TODO: move q_estimation in tf.function - b_dist_ = np.exp(self.targetqnet(b_o_).numpy()) - b_a_ = (b_dist_ * vrange).sum(-1).argmax(1) - b_tzj = np.clip(reward_gamma * (1 - b_d[:, None]) * vrange[None, :] + b_r[:, None], min_value, max_value) - b_i = (b_tzj - min_value) / deltaz - b_l = np.floor(b_i).astype('int64') - b_u = np.ceil(b_i).astype('int64') - templ = b_dist_[range(batch_size), b_a_, :] * (b_u - b_i) - tempu = b_dist_[range(batch_size), b_a_, :] * (b_i - b_l) - b_m = np.zeros((batch_size, atom_num)) - # TODO: aggregate value by index and batch update (scatter_add) - for j in range(batch_size): - for k in range(atom_num): - b_m[j][b_l[j][k]] += templ[j][k] - b_m[j][b_u[j][k]] += tempu[j][k] - b_m = tf.convert_to_tensor(b_m, dtype='float32') - b_index = np.stack([range(batch_size), b_a], 1) - b_index = tf.convert_to_tensor(b_index, 'int64') - - self._train_func(b_o, b_index, b_m) - - self.niter += 1 - if self.niter % target_q_update_freq == 0: - sync(self.qnet, self.targetqnet) - self.save(args.save_path) - - def save(self, path): - if path is None: - path = os.path.join('model', '_'.join([alg_name, env_id])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'q_net.hdf5'), self.qnet) - - def load(self, path): - if path is None: - path = os.path.join('model', '_'.join([alg_name, env_id])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'q_net.hdf5'), self.qnet) - - @tf.function - def _train_func(self, b_o, b_index, b_m): - with tf.GradientTape() as tape: - b_dist_a = tf.gather_nd(self.qnet(b_o), b_index) - loss = tf.reduce_mean(tf.negative(tf.reduce_sum(b_dist_a * b_m, 1))) - - grad = tape.gradient(loss, self.qnet.trainable_weights) - self.optimizer.apply_gradients(zip(grad, self.qnet.trainable_weights)) - - -# ############################# Trainer ################################### -if __name__ == '__main__': - dqn = DQN() - t0 = time.time() - if args.train: - buffer = ReplayBuffer(buffer_size) - nepisode = 0 - all_episode_reward = [] - for i in range(1, number_timesteps + 1): - o = env.reset() - episode_reward = 0 - while True: - a = dqn.get_action(o) - # execute action and feed to replay buffer - # note that `_` tail in var name means next - o_, r, done, info = env.step(a) - buffer.add(o, a, r, o_, done) - episode_reward += r - - if i >= warm_start: - transitions = buffer.sample(batch_size) - dqn.train(*transitions) - - if done: - break - else: - o = o_ - - if nepisode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - nepisode += 1 - print( - 'Training | Episode: {} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - nepisode, episode_reward, - time.time() - t0 - ) - ) # episode num starts from 1 in print - - dqn.save(args.save_path) - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', 
'_'.join([alg_name, env_id])))
-
-    if args.test:
-        nepisode = 0
-        for i in range(1, number_timesteps + 1):
-            o = env.reset()
-            episode_reward = 0
-            while True:
-                env.render()
-                a = dqn.get_action(o)
-                o_, r, done, info = env.step(a)
-                episode_reward += r
-                if done:
-                    break
-                else:
-                    o = o_
-            nepisode += 1
-            print(
-                'Testing | Episode: {} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format(
-                    nepisode, episode_reward,
-                    time.time() - t0
-                )
-            )
diff --git a/examples/reinforcement_learning/tutorial_DDPG.py b/examples/reinforcement_learning/tutorial_DDPG.py
deleted file mode 100644
index c006a7bf4..000000000
--- a/examples/reinforcement_learning/tutorial_DDPG.py
+++ /dev/null
@@ -1,305 +0,0 @@
-"""
-Deep Deterministic Policy Gradient (DDPG)
------------------------------------------
-An algorithm that concurrently learns a Q-function and a policy.
-It uses off-policy data and the Bellman equation to learn the Q-function,
-and uses the Q-function to learn the policy.
-
-Reference
----------
-Deterministic Policy Gradient Algorithms, Silver et al. 2014
-Continuous Control With Deep Reinforcement Learning, Lillicrap et al. 2016
-MorvanZhou's tutorial page: https://morvanzhou.github.io/tutorials/
-
-Environment
------------
-OpenAI Gym Pendulum-v0, continuous action space
-
-Prerequisites
--------------
-tensorflow >=2.0.0a0
-tensorflow-probability 0.6.0
-tensorlayer >=2.0.0
-
-To run
-------
-python tutorial_DDPG.py --train/test
-
-"""
-
-import argparse
-import os
-import time
-
-import gym
-import matplotlib.pyplot as plt
-import numpy as np
-import tensorflow as tf
-
-import tensorlayer as tl
-
-# add arguments in command --train/test
-parser = argparse.ArgumentParser(description='Train or test neural net motor controller.')
-parser.add_argument('--train', dest='train', action='store_true', default=False)
-parser.add_argument('--test', dest='test', action='store_true', default=True)
-args = parser.parse_args()
-
-##################### hyper parameters ####################
-
-ENV_ID = 'Pendulum-v0'  # environment id
-RANDOM_SEED = 2  # random seed, can be either an int number or None
-RENDER = False  # render while training
-
-ALG_NAME = 'DDPG'
-TRAIN_EPISODES = 100  # total number of episodes for training
-TEST_EPISODES = 10  # total number of episodes for testing
-MAX_STEPS = 200  # total number of steps for each episode
-
-LR_A = 0.001  # learning rate for actor
-LR_C = 0.002  # learning rate for critic
-GAMMA = 0.9  # reward discount
-TAU = 0.01  # soft replacement
-MEMORY_CAPACITY = 10000  # size of replay buffer
-BATCH_SIZE = 32  # batch size per update
-VAR = 2  # control exploration
-
-############################### DDPG ####################################
-
-
-class DDPG(object):
-    """
-    DDPG class
-    """
-
-    def __init__(self, action_dim, state_dim, action_range):
-        self.memory = np.zeros((MEMORY_CAPACITY, state_dim * 2 + action_dim + 1), dtype=np.float32)
-        self.pointer = 0
-        self.action_dim, self.state_dim, self.action_range = action_dim, state_dim, action_range
-        self.var = VAR
-
-        W_init = tf.random_normal_initializer(mean=0, stddev=0.3)
-        b_init = tf.constant_initializer(0.1)
-
-        def get_actor(input_state_shape, name=''):
-            """
-            Build actor network
-            :param input_state_shape: state
-            :param name: name
-            :return: actor network model
-            """
-            input_layer = tl.layers.Input(input_state_shape, name='A_input')
-            layer = tl.layers.Dense(n_units=64, act=tf.nn.relu, W_init=W_init, b_init=b_init, name='A_l1')(input_layer)
-            layer = tl.layers.Dense(n_units=64, act=tf.nn.relu, W_init=W_init, b_init=b_init,
name='A_l2')(layer) - layer = tl.layers.Dense(n_units=action_dim, act=tf.nn.tanh, W_init=W_init, b_init=b_init, name='A_a')(layer) - layer = tl.layers.Lambda(lambda x: action_range * x)(layer) - return tl.models.Model(inputs=input_layer, outputs=layer, name='Actor' + name) - - def get_critic(input_state_shape, input_action_shape, name=''): - """ - Build critic network - :param input_state_shape: state - :param input_action_shape: act - :param name: name - :return: Q value Q(s,a) - """ - state_input = tl.layers.Input(input_state_shape, name='C_s_input') - action_input = tl.layers.Input(input_action_shape, name='C_a_input') - layer = tl.layers.Concat(1)([state_input, action_input]) - layer = tl.layers.Dense(n_units=64, act=tf.nn.relu, W_init=W_init, b_init=b_init, name='C_l1')(layer) - layer = tl.layers.Dense(n_units=64, act=tf.nn.relu, W_init=W_init, b_init=b_init, name='C_l2')(layer) - layer = tl.layers.Dense(n_units=1, W_init=W_init, b_init=b_init, name='C_out')(layer) - return tl.models.Model(inputs=[state_input, action_input], outputs=layer, name='Critic' + name) - - self.actor = get_actor([None, state_dim]) - self.critic = get_critic([None, state_dim], [None, action_dim]) - self.actor.train() - self.critic.train() - - def copy_para(from_model, to_model): - """ - Copy parameters for soft updating - :param from_model: latest model - :param to_model: target model - :return: None - """ - for i, j in zip(from_model.trainable_weights, to_model.trainable_weights): - j.assign(i) - - self.actor_target = get_actor([None, state_dim], name='_target') - copy_para(self.actor, self.actor_target) - self.actor_target.eval() - - self.critic_target = get_critic([None, state_dim], [None, action_dim], name='_target') - copy_para(self.critic, self.critic_target) - self.critic_target.eval() - - self.ema = tf.train.ExponentialMovingAverage(decay=1 - TAU) # soft replacement - - self.actor_opt = tf.optimizers.Adam(LR_A) - self.critic_opt = tf.optimizers.Adam(LR_C) - - def ema_update(self): - """ - Soft updating by exponential smoothing - :return: None - """ - paras = self.actor.trainable_weights + self.critic.trainable_weights - self.ema.apply(paras) - for i, j in zip(self.actor_target.trainable_weights + self.critic_target.trainable_weights, paras): - i.assign(self.ema.average(j)) - - def get_action(self, s, greedy=False): - """ - Choose action - :param s: state - :param greedy: get action greedy or not - :return: act - """ - a = self.actor(np.array([s], dtype=np.float32))[0] - if greedy: - return a - return np.clip( - np.random.normal(a, self.var), -self.action_range, self.action_range - ) # add randomness to action selection for exploration - - def learn(self): - """ - Update parameters - :return: None - """ - self.var *= .9995 - indices = np.random.choice(MEMORY_CAPACITY, size=BATCH_SIZE) - datas = self.memory[indices, :] - states = datas[:, :self.state_dim] - actions = datas[:, self.state_dim:self.state_dim + self.action_dim] - rewards = datas[:, -self.state_dim - 1:-self.state_dim] - states_ = datas[:, -self.state_dim:] - - with tf.GradientTape() as tape: - actions_ = self.actor_target(states_) - q_ = self.critic_target([states_, actions_]) - y = rewards + GAMMA * q_ - q = self.critic([states, actions]) - td_error = tf.losses.mean_squared_error(y, q) - critic_grads = tape.gradient(td_error, self.critic.trainable_weights) - self.critic_opt.apply_gradients(zip(critic_grads, self.critic.trainable_weights)) - - with tf.GradientTape() as tape: - a = self.actor(states) - q = self.critic([states, a]) - 
actor_loss = -tf.reduce_mean(q) # maximize the q - actor_grads = tape.gradient(actor_loss, self.actor.trainable_weights) - self.actor_opt.apply_gradients(zip(actor_grads, self.actor.trainable_weights)) - self.ema_update() - - def store_transition(self, s, a, r, s_): - """ - Store data in data buffer - :param s: state - :param a: act - :param r: reward - :param s_: next state - :return: None - """ - s = s.astype(np.float32) - s_ = s_.astype(np.float32) - transition = np.hstack((s, a, [r], s_)) - index = self.pointer % MEMORY_CAPACITY # replace the old memory with new memory - self.memory[index, :] = transition - self.pointer += 1 - - def save(self): - """ - save trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.save_weights_to_hdf5(os.path.join(path, 'actor_target.hdf5'), self.actor_target) - tl.files.save_weights_to_hdf5(os.path.join(path, 'critic.hdf5'), self.critic) - tl.files.save_weights_to_hdf5(os.path.join(path, 'critic_target.hdf5'), self.critic_target) - - def load(self): - """ - load trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'actor_target.hdf5'), self.actor_target) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'critic.hdf5'), self.critic) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'critic_target.hdf5'), self.critic_target) - - -if __name__ == '__main__': - env = gym.make(ENV_ID).unwrapped - - # reproducible - env.seed(RANDOM_SEED) - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) - - state_dim = env.observation_space.shape[0] - action_dim = env.action_space.shape[0] - action_range = env.action_space.high # scale action, [-action_range, action_range] - - agent = DDPG(action_dim, state_dim, action_range) - - t0 = time.time() - if args.train: # train - all_episode_reward = [] - for episode in range(TRAIN_EPISODES): - state = env.reset() - episode_reward = 0 - for step in range(MAX_STEPS): - if RENDER: - env.render() - # Add exploration noise - action = agent.get_action(state) - state_, reward, done, info = env.step(action) - agent.store_transition(state, action, reward, state_) - - if agent.pointer > MEMORY_CAPACITY: - agent.learn() - - state = state_ - episode_reward += reward - if done: - break - - if episode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - print( - 'Training | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TRAIN_EPISODES, episode_reward, - time.time() - t0 - ) - ) - agent.save() - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - # test - agent.load() - for episode in range(TEST_EPISODES): - state = env.reset() - episode_reward = 0 - for step in range(MAX_STEPS): - env.render() - state, reward, done, info = env.step(agent.get_action(state, greedy=True)) - episode_reward += reward - if done: - break - print( - 'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TEST_EPISODES, episode_reward, - time.time() - t0 - ) - ) diff --git 
a/examples/reinforcement_learning/tutorial_DPPO.py b/examples/reinforcement_learning/tutorial_DPPO.py deleted file mode 100644 index dbfd78db5..000000000 --- a/examples/reinforcement_learning/tutorial_DPPO.py +++ /dev/null @@ -1,378 +0,0 @@ -""" -Distributed Proximal Policy Optimization (DPPO) ----------------------------- -A distributed version of OpenAI's Proximal Policy Optimization (PPO). -Workers in parallel to collect data, then stop worker's roll-out and train PPO on collected data. -Restart workers once PPO is updated. - -Reference ---------- -Emergence of Locomotion Behaviours in Rich Environments, Heess et al. 2017 -Proximal Policy Optimization Algorithms, Schulman et al. 2017 -High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016 -MorvanZhou's tutorial page: https://morvanzhou.github.io/tutorials - -Environment ------------ -Openai Gym Pendulum-v0, continual action space - -Prerequisites --------------- -tensorflow >=2.0.0a0 -tensorflow-probability 0.6.0 -tensorlayer >=2.0.0 - -To run ------- -python tutorial_DPPO.py --train/test -""" - -import argparse -import os -import queue -import threading -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf -import tensorflow_probability as tfp - -import tensorlayer as tl - -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### - -ENV_ID = 'Pendulum-v0' # environment name -RANDOMSEED = 2 # random seed -RENDER = False # render while training - -ALG_NAME = 'DPPO' -TRAIN_EPISODES = 1000 # total number of episodes for training -TEST_EPISODES = 10 # number of overall episodes for testing -MAX_STEPS = 200 # total number of steps for each episode -GAMMA = 0.9 # reward discount -LR_A = 0.0001 # learning rate for actor -LR_C = 0.0002 # learning rate for critic -ACTOR_UPDATE_STEPS = 10 # actor update steps -CRITIC_UPDATE_STEPS = 10 # critic update steps -MIN_BATCH_SIZE = 64 # minimum batch size for updating PPO - -N_WORKER = 4 # parallel workers -UPDATE_STEP = 10 # loop update operation n-steps - -# ppo-penalty parameters -KL_TARGET = 0.01 -LAM = 0.5 - -# ppo-clip parameters -EPSILON = 0.2 - - -############################### DPPO #################################### - - -class PPO(object): - """ - PPO class - """ - - def __init__(self, state_dim, action_dim, action_bound, method='clip'): - - # critic - with tf.name_scope('critic'): - inputs = tl.layers.Input([None, state_dim], tf.float32, 'state') - layer = tl.layers.Dense(64, tf.nn.relu)(inputs) - layer = tl.layers.Dense(64, tf.nn.relu)(layer) - v = tl.layers.Dense(1)(layer) - self.critic = tl.models.Model(inputs, v) - self.critic.train() - self.method = method - - # actor - with tf.name_scope('actor'): - inputs = tl.layers.Input([None, state_dim], tf.float32, 'state') - layer = tl.layers.Dense(64, tf.nn.relu)(inputs) - layer = tl.layers.Dense(64, tf.nn.relu)(layer) - a = tl.layers.Dense(action_dim, tf.nn.tanh)(layer) - mean = tl.layers.Lambda(lambda x: x * action_bound, name='lambda')(a) - logstd = tf.Variable(np.zeros(action_dim, dtype=np.float32)) - self.actor = tl.models.Model(inputs, mean) - self.actor.trainable_weights.append(logstd) - self.actor.logstd = logstd - self.actor.train() - - self.actor_opt = 
tf.optimizers.Adam(LR_A)
-        self.critic_opt = tf.optimizers.Adam(LR_C)
-
-        self.method = method
-        if method == 'penalty':
-            self.kl_target = KL_TARGET
-            self.lam = LAM
-        elif method == 'clip':
-            self.epsilon = EPSILON
-
-        self.state_buffer, self.action_buffer = [], []
-        self.reward_buffer, self.cumulative_reward_buffer = [], []
-        self.action_bound = action_bound
-
-    def train_actor(self, state, action, adv, old_pi):
-        """
-        Update policy network
-        :param state: state batch
-        :param action: action batch
-        :param adv: advantage batch
-        :param old_pi: old pi distribution
-        :return: kl_mean or None
-        """
-        with tf.GradientTape() as tape:
-            mean, std = self.actor(state), tf.exp(self.actor.logstd)
-            pi = tfp.distributions.Normal(mean, std)
-
-            ratio = tf.exp(pi.log_prob(action) - old_pi.log_prob(action))
-            surr = ratio * adv
-            if self.method == 'penalty':  # ppo penalty
-                kl = tfp.distributions.kl_divergence(old_pi, pi)
-                kl_mean = tf.reduce_mean(kl)
-                loss = -(tf.reduce_mean(surr - self.lam * kl))
-            else:  # ppo clip
-                loss = -tf.reduce_mean(
-                    tf.minimum(surr,
-                               tf.clip_by_value(ratio, 1. - self.epsilon, 1. + self.epsilon) * adv)
-                )
-        a_grad = tape.gradient(loss, self.actor.trainable_weights)
-        self.actor_opt.apply_gradients(zip(a_grad, self.actor.trainable_weights))
-
-        if self.method == 'penalty':
-            return kl_mean
-
-    def train_critic(self, reward, state):
-        """
-        Update critic (value) network
-        :param reward: cumulative reward batch
-        :param state: state batch
-        :return: None
-        """
-        reward = np.array(reward, dtype=np.float32)
-        with tf.GradientTape() as tape:
-            advantage = reward - self.critic(state)
-            loss = tf.reduce_mean(tf.square(advantage))
-        grad = tape.gradient(loss, self.critic.trainable_weights)
-        self.critic_opt.apply_gradients(zip(grad, self.critic.trainable_weights))
-
-    def update(self):
-        """
-        Update parameters, with a KL-divergence constraint in 'penalty' mode
-        :return: None
-        """
-        global GLOBAL_UPDATE_COUNTER
-        while not COORD.should_stop():
-            if GLOBAL_EP < TRAIN_EPISODES:
-                UPDATE_EVENT.wait()  # wait until get batch of data
-
-                data = [QUEUE.get() for _ in range(QUEUE.qsize())]  # collect data from all workers
-                s, a, r = zip(*data)
-                s = np.vstack(s).astype(np.float32)
-                a = np.vstack(a).astype(np.float32)
-                r = np.vstack(r).astype(np.float32)
-                mean, std = self.actor(s), tf.exp(self.actor.logstd)
-                pi = tfp.distributions.Normal(mean, std)
-                adv = r - self.critic(s)
-                # adv = (adv - adv.mean())/(adv.std()+1e-6)  # sometimes helpful
-
-                # update actor
-                if self.method == 'penalty':
-                    for _ in range(ACTOR_UPDATE_STEPS):
-                        kl = self.train_actor(s, a, adv, pi)
-                        if kl < self.kl_target / 1.5:
-                            self.lam /= 2
-                        elif kl > self.kl_target * 1.5:
-                            self.lam *= 2
-                else:
-                    for _ in range(ACTOR_UPDATE_STEPS):
-                        self.train_actor(s, a, adv, pi)
-
-                # update critic
-                for _ in range(CRITIC_UPDATE_STEPS):
-                    self.train_critic(r, s)
-
-                UPDATE_EVENT.clear()  # updating finished
-                GLOBAL_UPDATE_COUNTER = 0  # reset counter
-                ROLLING_EVENT.set()  # set roll-out available
-
-    def get_action(self, state, greedy=False):
-        """
-        Choose action
-        :param state: state
-        :param greedy: choose action greedy or not
-        :return: clipped action
-        """
-        state = state[np.newaxis, :].astype(np.float32)
-        mean, std = self.actor(state), tf.exp(self.actor.logstd)
-        if greedy:
-            action = mean[0]
-        else:
-            pi = tfp.distributions.Normal(mean, std)
-            action = tf.squeeze(pi.sample(1), axis=0)[0]  # choosing action
-        return np.clip(action, -self.action_bound, self.action_bound)
-
-    def save(self):
-        """
-        save trained weights
-        :return: None
-        """
path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.save_weights_to_hdf5(os.path.join(path, 'critic.hdf5'), self.critic) - - def load(self): - """ - load trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'critic.hdf5'), self.critic) - - -"""--------------------------------------------------------------""" - - -class Worker(object): - """ - Worker class for distributional running - """ - - def __init__(self, wid): - self.wid = wid - self.env = gym.make(ENV_ID).unwrapped - self.env.seed(wid * 100 + RANDOMSEED) - self.ppo = GLOBAL_PPO - - def work(self): - """ - Define a worker - :return: None - """ - global GLOBAL_EP, GLOBAL_RUNNING_R, GLOBAL_UPDATE_COUNTER - while not COORD.should_stop(): - s = self.env.reset() - ep_r = 0 - buffer_s, buffer_a, buffer_r = [], [], [] - for t in range(MAX_STEPS): - if not ROLLING_EVENT.is_set(): # while global PPO is updating - ROLLING_EVENT.wait() # wait until PPO is updated - buffer_s, buffer_a, buffer_r = [], [], [] # clear history buffer, use new policy to collect data - a = self.ppo.get_action(s) - s_, r, done, _ = self.env.step(a) - if RENDER and self.wid == 0: - self.env.render() - buffer_s.append(s) - buffer_a.append(a) - buffer_r.append(r) - s = s_ - ep_r += r - - GLOBAL_UPDATE_COUNTER += 1 # count to minimum batch size, no need to wait other workers - if t == MAX_STEPS - 1 or GLOBAL_UPDATE_COUNTER >= MIN_BATCH_SIZE: - # finish patyh - if done: - v_s_ = 0 - else: - v_s_ = self.ppo.critic(np.array([s_], np.float32))[0][0] - discounted_r = [] # compute discounted reward - for r in buffer_r[::-1]: - v_s_ = r + GAMMA * v_s_ - discounted_r.append(v_s_) - discounted_r.reverse() - buffer_r = np.array(discounted_r)[:, np.newaxis] - QUEUE.put([buffer_s, buffer_a, buffer_r]) # put data in the queue - buffer_s, buffer_a, buffer_r = [], [], [] - - # update - if GLOBAL_UPDATE_COUNTER >= MIN_BATCH_SIZE: - ROLLING_EVENT.clear() # stop collecting data - UPDATE_EVENT.set() # globalPPO update - - # stop training - if GLOBAL_EP >= TRAIN_EPISODES: - COORD.request_stop() - break - - print( - 'Training | Episode: {}/{} | Worker: {} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - GLOBAL_EP + 1, TRAIN_EPISODES, self.wid, ep_r, time.time() - T0 - ) - ) - # record reward changes, plot later - if len(GLOBAL_RUNNING_R) == 0: - GLOBAL_RUNNING_R.append(ep_r) - else: - GLOBAL_RUNNING_R.append(GLOBAL_RUNNING_R[-1] * 0.9 + ep_r * 0.1) - GLOBAL_EP += 1 - - -if __name__ == '__main__': - - # reproducible - np.random.seed(RANDOMSEED) - tf.random.set_seed(RANDOMSEED) - - env = gym.make(ENV_ID) - state_dim = env.observation_space.shape[0] - action_dim = env.action_space.shape[0] - action_bound = env.action_space.high - env.close() - - GLOBAL_PPO = PPO(state_dim, action_dim, action_bound) - T0 = time.time() - if args.train: # train - UPDATE_EVENT, ROLLING_EVENT = threading.Event(), threading.Event() - UPDATE_EVENT.clear() # not update now - ROLLING_EVENT.set() # start to roll out - workers = [Worker(wid=i) for i in range(N_WORKER)] - - GLOBAL_UPDATE_COUNTER, GLOBAL_EP = 0, 0 - GLOBAL_RUNNING_R = [] - COORD = tf.train.Coordinator() - QUEUE = queue.Queue() # workers putting data in this queue - threads = [] - for worker in workers: # worker 
threads
-            t = threading.Thread(target=worker.work)
-            t.start()  # training
-            threads.append(t)
-        # add a PPO updating thread
-        threads.append(threading.Thread(target=GLOBAL_PPO.update))
-        threads[-1].start()
-        COORD.join(threads)
-
-        GLOBAL_PPO.save()
-
-        plt.plot(GLOBAL_RUNNING_R)
-        if not os.path.exists('image'):
-            os.makedirs('image')
-        plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID])))
-
-    # test
-    if args.test:
-        GLOBAL_PPO.load()
-        for episode in range(TEST_EPISODES):
-            state = env.reset()
-            episode_reward = 0
-            for step in range(MAX_STEPS):
-                env.render()
-                state, reward, done, info = env.step(GLOBAL_PPO.get_action(state, greedy=True))
-                episode_reward += reward
-                if done:
-                    break
-            print(
-                'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format(
-                    episode + 1, TEST_EPISODES, episode_reward,
-                    time.time() - T0))
diff --git a/examples/reinforcement_learning/tutorial_DQN.py b/examples/reinforcement_learning/tutorial_DQN.py
deleted file mode 100644
index 5fdabdeb2..000000000
--- a/examples/reinforcement_learning/tutorial_DQN.py
+++ /dev/null
@@ -1,182 +0,0 @@
-"""
-Deep Q-Network Q(a, s)
------------------------
-TD Learning, Off-Policy, e-Greedy Exploration (GLIE).
-Q(S, A) <- Q(S, A) + alpha * (R + lambda * Q(newS, newA) - Q(S, A))
-delta = R + lambda * Q(newS, newA) - Q(S, A)
-See David Silver RL Tutorial Lecture 5 - Q-Learning for more details.
-Reference
-----------
-original paper: https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf
-EN: https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0#.5m3361vlw
-CN: https://zhuanlan.zhihu.com/p/25710327
-Note: the policy network approach has been shown to work better than Q-Learning here, see tutorial_atari_pong.py
-Environment
------------
-# The FrozenLake v0 environment
-https://gym.openai.com/envs/FrozenLake-v0
-The agent controls the movement of a character in a grid world. Some tiles of
-the grid are walkable, and others lead to the agent falling into the water.
-Additionally, the movement direction of the agent is uncertain and only partially
-depends on the chosen direction. The agent is rewarded for finding a walkable
-path to a goal tile.
-SFFF (S: starting point, safe)
-FHFH (F: frozen surface, safe)
-FFFH (H: hole, fall to your doom)
-HFFG (G: goal, where the frisbee is located)
-The episode ends when you reach the goal or fall in a hole. You receive a reward
-of 1 if you reach the goal, and zero otherwise.
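-
-States are indexed 0..15; the training script feeds them to the Q-network as
-one-hot vectors of length 16, e.g. state 3 -> [0, 0, 0, 1, 0, ..., 0]
-(see to_one_hot below).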
-
-Prerequisites
---------------
-tensorflow>=2.0.0a0
-tensorlayer>=2.0.0
-To run
--------
-python tutorial_DQN.py --train/test
-"""
-import argparse
-import os
-import time
-
-import gym
-import matplotlib.pyplot as plt
-import numpy as np
-import tensorflow as tf
-
-import tensorlayer as tl
-
-# add arguments in command --train/test
-parser = argparse.ArgumentParser(description='Train or test neural net motor controller.')
-parser.add_argument('--train', dest='train', action='store_true', default=True)
-parser.add_argument('--test', dest='test', action='store_true', default=True)
-args = parser.parse_args()
-
-tl.logging.set_verbosity(tl.logging.DEBUG)
-
-##################### hyper parameters ####################
-env_id = 'FrozenLake-v0'
-alg_name = 'DQN'
-lambd = .99  # reward discount factor
-e = 0.1  # e-Greedy Exploration, the larger the more random
-num_episodes = 10000
-render = False  # display the game environment
-
-##################### DQN ##########################
-
-
-def to_one_hot(i, n_classes=None):
-    a = np.zeros(n_classes, 'uint8')
-    a[i] = 1
-    return a
-
-
-## Define the Q-network q(a, s) that outputs Q-values for the 4 actions given a state, i.e. the Action-Value Function.
-# encoding for state: the 4x4 grid can be represented by a one-hot vector of length 16.
-def get_model(inputs_shape):
-    ni = tl.layers.Input(inputs_shape, name='observation')
-    nn = tl.layers.Dense(4, act=None, W_init=tf.random_uniform_initializer(0, 0.01), b_init=None, name='q_a_s')(ni)
-    return tl.models.Model(inputs=ni, outputs=nn, name="Q-Network")
-
-
-def save_ckpt(model):  # save trained weights
-    path = os.path.join('model', '_'.join([alg_name, env_id]))
-    if not os.path.exists(path):
-        os.makedirs(path)
-    tl.files.save_weights_to_hdf5(os.path.join(path, 'dqn_model.hdf5'), model)
-
-
-def load_ckpt(model):  # load trained weights
-    path = os.path.join('model', '_'.join([alg_name, env_id]))
-    tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'dqn_model.hdf5'), model)
-
-
-if __name__ == '__main__':
-
-    qnetwork = get_model([None, 16])
-    qnetwork.train()
-    train_weights = qnetwork.trainable_weights
-
-    optimizer = tf.optimizers.SGD(learning_rate=0.1)
-    env = gym.make(env_id)
-
-    t0 = time.time()
-    if args.train:
-        all_episode_reward = []
-        for i in range(num_episodes):
-            ## Reset environment and get first new observation
-            s = env.reset()  # observation is state, integer 0 ~ 15
-            rAll = 0
-            if render: env.render()
-            for j in range(99):  # step index, maximum step is 99
-                ## Choose an action by greedily (with e chance of random action) from the Q-network
-                allQ = qnetwork(np.asarray([to_one_hot(s, 16)], dtype=np.float32)).numpy()
-                a = np.argmax(allQ, 1)
-
-                ## e-Greedy Exploration !!! sample random action
-                if np.random.rand(1) < e:
-                    a[0] = env.action_space.sample()
-                ## Get new state and reward from environment
-                s1, r, d, _ = env.step(a[0])
-                if render: env.render()
-                ## Obtain the Q' values by feeding the new state through our network
-                Q1 = qnetwork(np.asarray([to_one_hot(s1, 16)], dtype=np.float32)).numpy()
-
-                ## Obtain maxQ' and set our target value for chosen action.
-                maxQ1 = np.max(Q1)  # in Q-Learning, policy is greedy, so we use "max" to select the next action.
- targetQ = allQ - targetQ[0, a[0]] = r + lambd * maxQ1 - ## Train network using target and predicted Q values - # it is not the real target Q value, it is just an estimation, - # but check the Q-Learning update formula: - # Q'(s,a) <- Q(s,a) + alpha(r + lambd * maxQ(s',a') - Q(s, a)) - # minimizing |r + lambd * maxQ(s',a') - Q(s, a)|^2 is equivalent to pushing Q(s,a) toward the target r + lambd * maxQ(s',a') - with tf.GradientTape() as tape: - _qvalues = qnetwork(np.asarray([to_one_hot(s, 16)], dtype=np.float32)) - _loss = tl.cost.mean_squared_error(targetQ, _qvalues, is_mean=False) - grad = tape.gradient(_loss, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - rAll += r - s = s1 - ## Reduce chance of random action if an episode is done. - if d == True: - e = 1. / ((i / 50) + 10) # reduce e, GLIE: Greedy in the limit with infinite exploration - break - - ## Note that the episode reward here includes rewards earned by random exploratory actions - print('Training | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}' \ - .format(i, num_episodes, rAll, time.time() - t0)) - - if i == 0: - all_episode_reward.append(rAll) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + rAll * 0.1) - - save_ckpt(qnetwork) # save model - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([alg_name, env_id]))) - - if args.test: - load_ckpt(qnetwork) # load model - for i in range(num_episodes): - ## Reset environment and get first new observation - s = env.reset() # observation is state, integer 0 ~ 15 - rAll = 0 - if render: env.render() - for j in range(99): # step index, maximum step is 99 - ## Choose an action greedily from the Q-network - allQ = qnetwork(np.asarray([to_one_hot(s, 16)], dtype=np.float32)).numpy() - a = np.argmax(allQ, 1) # no epsilon, only greedy for testing - - ## Get new state and reward from environment - s1, r, d, _ = env.step(a[0]) - rAll += r - s = s1 - if render: env.render() - ## End the episode when done. - if d: break - - print('Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}' \ - .format(i, num_episodes, rAll, time.time() - t0)) diff --git a/examples/reinforcement_learning/tutorial_DQN_variants.py b/examples/reinforcement_learning/tutorial_DQN_variants.py deleted file mode 100644 index 5195ef61f..000000000 --- a/examples/reinforcement_learning/tutorial_DQN_variants.py +++ /dev/null @@ -1,433 +0,0 @@ -""" -DQN and its variants ------------------------- -We implement Double DQN, Dueling DQN and Noisy DQN here. -The max operator in standard DQN uses the same values both to select and to -evaluate an action by -Q(s_t, a_t) = R_{t+1} + \gamma * max_{a}Q_{tar}(s_{t+1}, a). -Double DQN proposes the following evaluation to address the overestimation problem -of the max operator: -Q(s_t, a_t) = R_{t+1} + \gamma * Q_{tar}(s_{t+1}, max_{a}Q(s_{t+1}, a)). -Dueling DQN uses a dueling architecture where the value of the state and the advantage -of each action are estimated separately. -Noisy DQN proposes exploration by adding parameter noise. -Reference: ------------------------- -1. Double DQN - Van Hasselt H, Guez A, Silver D. Deep reinforcement learning with double - q-learning[C]//Thirtieth AAAI Conference on Artificial Intelligence. 2016. -2. Dueling DQN - Wang Z, Schaul T, Hessel M, et al. Dueling network architectures for deep - reinforcement learning[J]. arXiv preprint arXiv:1511.06581, 2015. -3. Noisy DQN - Plappert M, Houthooft R, Dhariwal P, et al.
Parameter space noise for - exploration[J]. arXiv preprint arXiv:1706.01905, 2017. -Environment: ------------------------- -Cartpole and Pong in OpenAI Gym -Requirements: ------------------------- -tensorflow>=2.0.0a0 -tensorlayer>=2.0.0 -To run: ------------------------- -python tutorial_DQN_variants.py --train -python tutorial_DQN_variants.py --test --save_path=dqn_variants/8000.npz -""" -import argparse -import os -import random -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -parser = argparse.ArgumentParser() -parser.add_argument('--train', dest='train', action='store_true', default=True) -parser.add_argument('--test', dest='test', action='store_true', default=True) -parser.add_argument( - '--save_path', default=None, help='folder to save to when training, otherwise the model path; ' - 'the Q-net is saved each time the target net updates' -) -parser.add_argument('--seed', help='random seed', type=int, default=0) -parser.add_argument('--env_id', default='CartPole-v0', help='CartPole-v0 or PongNoFrameskip-v4') -parser.add_argument('--noisy_scale', type=float, default=1e-2) -parser.add_argument('--disable_double', action='store_true', default=False) -parser.add_argument('--disable_dueling', action='store_true', default=False) -args = parser.parse_args() - -random.seed(args.seed) -np.random.seed(args.seed) -tf.random.set_seed(args.seed) # reproducible - -env_id = args.env_id -env = gym.make(env_id) -env.seed(args.seed) -noise_scale = args.noisy_scale -double = not args.disable_double -dueling = not args.disable_dueling - -alg_name = 'DQN' -if dueling: alg_name = 'Dueling_' + alg_name -if double: alg_name = 'Double_' + alg_name -if noise_scale != 0: alg_name = 'Noisy_' + alg_name -print(alg_name) -# #################### hyper parameters #################### -if env_id == 'CartPole-v0': - qnet_type = 'MLP' - number_timesteps = 10000 # total number of time steps to train on - explore_timesteps = 100 - # epsilon-greedy schedule, final exploit prob is 0.99 - epsilon = lambda i_iter: 1 - 0.99 * min(1, i_iter / explore_timesteps) - lr = 5e-3 # learning rate - buffer_size = 1000 # replay buffer size - target_q_update_freq = 50 # how frequently the target Q-net is updated - ob_scale = 1.0 # scale observations - clipnorm = None -else: - # reward increases noticeably after 1e5 time steps - qnet_type = 'CNN' - number_timesteps = int(1e6) # total number of time steps to train on - explore_timesteps = 1e5 - # epsilon-greedy schedule, final exploit prob is 0.99 - epsilon = lambda i_iter: 1 - 0.99 * min(1, i_iter / explore_timesteps) - lr = 1e-4 # learning rate - buffer_size = 10000 # replay buffer size - target_q_update_freq = 200 # how frequently the target Q-net is updated - ob_scale = 1.0 / 255 # scale observations - clipnorm = 10 - -in_dim = env.observation_space.shape -out_dim = env.action_space.n -reward_gamma = 0.99 # reward discount -batch_size = 32 # batch size for sampling from replay buffer -warm_start = buffer_size / 10 # number of samples collected before learning starts -noise_update_freq = 50 # how frequently the parameter-noise scale is updated - - -# ############################## Network #################################### -class MLP(tl.models.Model): - - def __init__(self, name): - super(MLP, self).__init__(name=name) - self.h1 = tl.layers.Dense(64, tf.nn.tanh, in_channels=in_dim[0]) - self.qvalue = tl.layers.Dense(out_dim, in_channels=64, name='q', W_init=tf.initializers.GlorotUniform()) - self.svalue = tl.layers.Dense(1, in_channels=64, name='s',
W_init=tf.initializers.GlorotUniform()) - self.noise_scale = 0 - - def forward(self, ni): - feature = self.h1(ni) - - # apply noise to all linear layer - if self.noise_scale != 0: - noises = [] - for layer in [self.qvalue, self.svalue]: - for var in layer.trainable_weights: - noise = tf.random.normal(tf.shape(var), 0, self.noise_scale) - noises.append(noise) - var.assign_add(noise) - - qvalue = self.qvalue(feature) - svalue = self.svalue(feature) - - if self.noise_scale != 0: - idx = 0 - for layer in [self.qvalue, self.svalue]: - for var in layer.trainable_weights: - var.assign_sub(noises[idx]) - idx += 1 - - if dueling: - # dueling network - return svalue + qvalue - tf.reduce_mean(qvalue, 1, keepdims=True) - else: - return qvalue - - -class CNN(tl.models.Model): - - def __init__(self, name): - super(CNN, self).__init__(name=name) - h, w, in_channels = in_dim - dense_in_channels = 64 * ((h - 28) // 8) * ((w - 28) // 8) - self.conv1 = tl.layers.Conv2d( - 32, (8, 8), (4, 4), tf.nn.relu, 'VALID', in_channels=in_channels, name='conv2d_1', - W_init=tf.initializers.GlorotUniform() - ) - self.conv2 = tl.layers.Conv2d( - 64, (4, 4), (2, 2), tf.nn.relu, 'VALID', in_channels=32, name='conv2d_2', - W_init=tf.initializers.GlorotUniform() - ) - self.conv3 = tl.layers.Conv2d( - 64, (3, 3), (1, 1), tf.nn.relu, 'VALID', in_channels=64, name='conv2d_3', - W_init=tf.initializers.GlorotUniform() - ) - self.flatten = tl.layers.Flatten(name='flatten') - self.preq = tl.layers.Dense( - 256, tf.nn.relu, in_channels=dense_in_channels, name='pre_q', W_init=tf.initializers.GlorotUniform() - ) - self.qvalue = tl.layers.Dense(out_dim, in_channels=256, name='q', W_init=tf.initializers.GlorotUniform()) - self.pres = tl.layers.Dense( - 256, tf.nn.relu, in_channels=dense_in_channels, name='pre_s', W_init=tf.initializers.GlorotUniform() - ) - self.svalue = tl.layers.Dense(1, in_channels=256, name='state', W_init=tf.initializers.GlorotUniform()) - self.noise_scale = 0 - - def forward(self, ni): - feature = self.flatten(self.conv3(self.conv2(self.conv1(ni)))) - - # apply noise to all linear layer - if self.noise_scale != 0: - noises = [] - for layer in [self.preq, self.qvalue, self.pres, self.svalue]: - for var in layer.trainable_weights: - noise = tf.random.normal(tf.shape(var), 0, self.noise_scale) - noises.append(noise) - var.assign_add(noise) - - qvalue = self.qvalue(self.preq(feature)) - svalue = self.svalue(self.pres(feature)) - - if self.noise_scale != 0: - idx = 0 - for layer in [self.preq, self.qvalue, self.pres, self.svalue]: - for var in layer.trainable_weights: - var.assign_sub(noises[idx]) - idx += 1 - - if dueling: - # dueling network - return svalue + qvalue - tf.reduce_mean(qvalue, 1, keepdims=True) - else: - return qvalue - - -# ############################## Replay #################################### -class ReplayBuffer(object): - - def __init__(self, size): - self._storage = [] - self._maxsize = size - self._next_idx = 0 - - def __len__(self): - return len(self._storage) - - def add(self, *args): - if self._next_idx >= len(self._storage): - self._storage.append(args) - else: - self._storage[self._next_idx] = args - self._next_idx = (self._next_idx + 1) % self._maxsize - - def _encode_sample(self, idxes): - b_o, b_a, b_r, b_o_, b_d = [], [], [], [], [] - for i in idxes: - o, a, r, o_, d = self._storage[i] - b_o.append(o) - b_a.append(a) - b_r.append(r) - b_o_.append(o_) - b_d.append(d) - return ( - np.stack(b_o).astype('float32') * ob_scale, - np.stack(b_a).astype('int32'), - 
np.stack(b_r).astype('float32'), - np.stack(b_o_).astype('float32') * ob_scale, - np.stack(b_d).astype('float32'), - ) - - def sample(self, batch_size): - indexes = range(len(self._storage)) - idxes = [random.choice(indexes) for _ in range(batch_size)] - return self._encode_sample(idxes) - - -# ############################# Functions ################################### -def huber_loss(x): - """Loss function for value""" - return tf.where(tf.abs(x) < 1, tf.square(x) * 0.5, tf.abs(x) - 0.5) - - -def sync(net, net_tar): - """Copy q network to target q network""" - for var, var_tar in zip(net.trainable_weights, net_tar.trainable_weights): - var_tar.assign(var) - - -def log_softmax(x, dim): - temp = x - np.max(x, dim, keepdims=True) - return temp - np.log(np.exp(temp).sum(dim, keepdims=True)) - - -def softmax(x, dim): - temp = np.exp(x - np.max(x, dim, keepdims=True)) - return temp / temp.sum(dim, keepdims=True) - - -# ############################### DQN ##################################### -class DQN(object): - - def __init__(self): - model = MLP if qnet_type == 'MLP' else CNN - self.qnet = model('q') - if args.train: - self.qnet.train() - self.targetqnet = model('targetq') - self.targetqnet.infer() - sync(self.qnet, self.targetqnet) - else: - self.qnet.infer() - self.load(args.save_path) - self.niter = 0 - if clipnorm is not None: - self.optimizer = tf.optimizers.Adam(learning_rate=lr, clipnorm=clipnorm) - else: - self.optimizer = tf.optimizers.Adam(learning_rate=lr) - self.noise_scale = noise_scale - - def get_action(self, obv): - eps = epsilon(self.niter) - if args.train: - if random.random() < eps: - return int(random.random() * out_dim) - obv = np.expand_dims(obv, 0).astype('float32') * ob_scale - if self.niter < explore_timesteps: - self.qnet.noise_scale = self.noise_scale - q_ptb = self._qvalues_func(obv).numpy() - self.qnet.noise_scale = 0 - if i % noise_update_freq == 0: - q = self._qvalues_func(obv).numpy() - kl_ptb = (log_softmax(q, 1) - log_softmax(q_ptb, 1)) - kl_ptb = np.sum(kl_ptb * softmax(q, 1), 1).mean() - kl_explore = -np.log(1 - eps + eps / out_dim) - if kl_ptb < kl_explore: - self.noise_scale *= 1.01 - else: - self.noise_scale /= 1.01 - return q_ptb.argmax(1)[0] - else: - return self._qvalues_func(obv).numpy().argmax(1)[0] - else: - obv = np.expand_dims(obv, 0).astype('float32') * ob_scale - return self._qvalues_func(obv).numpy().argmax(1)[0] - - @tf.function - def _qvalues_func(self, obv): - return self.qnet(obv) - - def train(self, b_o, b_a, b_r, b_o_, b_d): - self._train_func(b_o, b_a, b_r, b_o_, b_d) - - self.niter += 1 - if self.niter % target_q_update_freq == 0: - sync(self.qnet, self.targetqnet) - self.save(args.save_path) - - @tf.function - def _train_func(self, b_o, b_a, b_r, b_o_, b_d): - with tf.GradientTape() as tape: - td_errors = self._tderror_func(b_o, b_a, b_r, b_o_, b_d) - loss = tf.reduce_mean(huber_loss(td_errors)) - - grad = tape.gradient(loss, self.qnet.trainable_weights) - self.optimizer.apply_gradients(zip(grad, self.qnet.trainable_weights)) - - return td_errors - - @tf.function - def _tderror_func(self, b_o, b_a, b_r, b_o_, b_d): - if double: - b_a_ = tf.one_hot(tf.argmax(self.qnet(b_o_), 1), out_dim) - b_q_ = (1 - b_d) * tf.reduce_sum(self.targetqnet(b_o_) * b_a_, 1) - else: - b_q_ = (1 - b_d) * tf.reduce_max(self.targetqnet(b_o_), 1) - - b_q = tf.reduce_sum(self.qnet(b_o) * tf.one_hot(b_a, out_dim), 1) - return b_q - (b_r + reward_gamma * b_q_) - - def save(self, path): - if path is None: - path = os.path.join('model', '_'.join([alg_name, 
env_id])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'q_net.hdf5'), self.qnet) - - def load(self, path): - if path is None: - path = os.path.join('model', '_'.join([alg_name, env_id])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'q_net.hdf5'), self.qnet) - - -# ############################# Trainer ################################### -if __name__ == '__main__': - dqn = DQN() - t0 = time.time() - if args.train: - buffer = ReplayBuffer(buffer_size) - nepisode = 0 - all_episode_reward = [] - for i in range(1, number_timesteps + 1): - o = env.reset() - episode_reward = 0 - while True: - a = dqn.get_action(o) - - # execute action and feed to replay buffer - # note that `_` tail in var name means next - o_, r, done, info = env.step(a) - buffer.add(o, a, r, o_, done) - episode_reward += r - - if i >= warm_start: - transitions = buffer.sample(batch_size) - dqn.train(*transitions) - - if done: - break - else: - o = o_ - - if nepisode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - nepisode += 1 - print( - 'Training | Episode: {} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - nepisode, episode_reward, - time.time() - t0 - ) - ) # episode num starts from 1 in print - - dqn.save(args.save_path) - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([alg_name, env_id]))) - - if args.test: - nepisode = 0 - for i in range(1, number_timesteps + 1): - o = env.reset() - episode_reward = 0 - while True: - env.render() - a = dqn.get_action(o) - o_, r, done, info = env.step(a) - episode_reward += r - if done: - break - else: - o = o_ - nepisode += 1 - print( - 'Testing | Episode: {} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - nepisode, episode_reward, - time.time() - t0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_PG.py b/examples/reinforcement_learning/tutorial_PG.py deleted file mode 100644 index 776cd6ac4..000000000 --- a/examples/reinforcement_learning/tutorial_PG.py +++ /dev/null @@ -1,233 +0,0 @@ -""" -Vanilla Policy Gradient(VPG or REINFORCE) ------------------------------------------ -The policy gradient algorithm works by updating policy parameters via stochastic gradient ascent on policy performance. -It's an on-policy algorithm can be used for environments with either discrete or continuous action spaces. -Here is an example on discrete action space game CartPole-v0. -To apply it on continuous action space, you need to change the last softmax layer and the get_action function. - -Reference ---------- -Cookbook: Barto A G, Sutton R S. Reinforcement Learning: An Introduction[J]. 1998. 
-MorvanZhou's tutorial page: https://morvanzhou.github.io/tutorials/ - -Environment ------------ -Openai Gym CartPole-v0, discrete action space - -Prerequisites --------------- -tensorflow >=2.0.0a0 -tensorflow-probability 0.6.0 -tensorlayer >=2.0.0 - -To run ------- -python tutorial_PG.py --train/test - -""" -import argparse -import os -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### - -ENV_ID = 'CartPole-v1' # environment id -RANDOM_SEED = 1 # random seed, can be either an int number or None -RENDER = False # render while training - -ALG_NAME = 'PG' -TRAIN_EPISODES = 200 -TEST_EPISODES = 10 -MAX_STEPS = 500 - -############################### PG #################################### - - -class PolicyGradient: - """ - PG class - """ - - def __init__(self, state_dim, action_num, learning_rate=0.02, gamma=0.99): - self.gamma = gamma - - self.state_buffer, self.action_buffer, self.reward_buffer = [], [], [] - - input_layer = tl.layers.Input([None, state_dim], tf.float32) - layer = tl.layers.Dense( - n_units=30, act=tf.nn.tanh, W_init=tf.random_normal_initializer(mean=0, stddev=0.3), - b_init=tf.constant_initializer(0.1) - )(input_layer) - all_act = tl.layers.Dense( - n_units=action_num, act=None, W_init=tf.random_normal_initializer(mean=0, stddev=0.3), - b_init=tf.constant_initializer(0.1) - )(layer) - - self.model = tl.models.Model(inputs=input_layer, outputs=all_act) - self.model.train() - self.optimizer = tf.optimizers.Adam(learning_rate) - - def get_action(self, s, greedy=False): - """ - choose action with probabilities. 
- :param s: state - :param greedy: choose action greedy or not - :return: act - """ - _logits = self.model(np.array([s], np.float32)) - _probs = tf.nn.softmax(_logits).numpy() - if greedy: - return np.argmax(_probs.ravel()) - return tl.rein.choice_action_by_probs(_probs.ravel()) - - def store_transition(self, s, a, r): - """ - store data in memory buffer - :param s: state - :param a: act - :param r: reward - :return: - """ - self.state_buffer.append(np.array([s], np.float32)) - self.action_buffer.append(a) - self.reward_buffer.append(r) - - def learn(self): - """ - update policy parameters via stochastic gradient ascent - :return: None - """ - discounted_reward_buffer_norm = self._discount_and_norm_rewards() - - with tf.GradientTape() as tape: - _logits = self.model(np.vstack(self.state_buffer)) - neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits( - logits=_logits, labels=np.array(self.action_buffer) - ) - loss = tf.reduce_mean(neg_log_prob * discounted_reward_buffer_norm) - - grad = tape.gradient(loss, self.model.trainable_weights) - self.optimizer.apply_gradients(zip(grad, self.model.trainable_weights)) - - self.state_buffer, self.action_buffer, self.reward_buffer = [], [], [] # empty episode data - return discounted_reward_buffer_norm - - def _discount_and_norm_rewards(self): - """ - compute discount_and_norm_rewards - :return: discount_and_norm_rewards - """ - # discount episode rewards - discounted_reward_buffer = np.zeros_like(self.reward_buffer) - running_add = 0 - for t in reversed(range(0, len(self.reward_buffer))): - running_add = running_add * self.gamma + self.reward_buffer[t] - discounted_reward_buffer[t] = running_add - - # normalize episode rewards - discounted_reward_buffer -= np.mean(discounted_reward_buffer) - discounted_reward_buffer /= np.std(discounted_reward_buffer) - return discounted_reward_buffer - - def save(self): - """ - save trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'pg_policy.hdf5'), self.model) - - def load(self): - """ - load trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'pg_policy.hdf5'), self.model) - - -if __name__ == '__main__': - env = gym.make(ENV_ID).unwrapped - - # reproducible - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) - env.seed(RANDOM_SEED) - - agent = PolicyGradient( - action_num=env.action_space.n, - state_dim=env.observation_space.shape[0], - ) - - t0 = time.time() - - if args.train: - all_episode_reward = [] - for episode in range(TRAIN_EPISODES): - - state = env.reset() - episode_reward = 0 - - for step in range(MAX_STEPS): # in one episode - if RENDER: - env.render() - - action = agent.get_action(state) - next_state, reward, done, info = env.step(action) - agent.store_transition(state, action, reward) - state = next_state - episode_reward += reward - if done: - break - agent.learn() - print( - 'Training | Episode: {}/{} | Episode Reward: {:.0f} | Running Time: {:.4f}'.format( - episode + 1, TRAIN_EPISODES, episode_reward, - time.time() - t0 - ) - ) - - if episode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - - agent.save() - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', 
'_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - # test - agent.load() - for episode in range(TEST_EPISODES): - state = env.reset() - episode_reward = 0 - for step in range(MAX_STEPS): - env.render() - state, reward, done, info = env.step(agent.get_action(state, True)) - episode_reward += reward - if done: - break - print( - 'Testing | Episode: {}/{} | Episode Reward: {:.0f} | Running Time: {:.4f}'.format( - episode + 1, TEST_EPISODES, episode_reward, - time.time() - t0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_PPO.py b/examples/reinforcement_learning/tutorial_PPO.py deleted file mode 100644 index 82d20d2e3..000000000 --- a/examples/reinforcement_learning/tutorial_PPO.py +++ /dev/null @@ -1,322 +0,0 @@ -""" -Proximal Policy Optimization (PPO) ----------------------------- -A simple version of Proximal Policy Optimization (PPO) using single thread. -PPO is a family of first-order methods that use a few other tricks to keep new policies close to old. -PPO methods are significantly simpler to implement, and empirically seem to perform at least as well as TRPO. -Reference ---------- -Proximal Policy Optimization Algorithms, Schulman et al. 2017 -High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016 -Emergence of Locomotion Behaviours in Rich Environments, Heess et al. 2017 -MorvanZhou's tutorial page: https://morvanzhou.github.io/tutorials -Environment ------------ -Openai Gym Pendulum-v0, continual action space -Prerequisites --------------- -tensorflow >=2.0.0a0 -tensorflow-probability 0.6.0 -tensorlayer >=2.0.0 -To run ------- -python tutorial_PPO.py --train/test -""" -import argparse -import os -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf -import tensorflow_probability as tfp - -import tensorlayer as tl - -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### - -ENV_ID = 'Pendulum-v0' # environment id -RANDOM_SEED = 1 # random seed -RENDER = False # render while training - -ALG_NAME = 'PPO' -TRAIN_EPISODES = 1000 # total number of episodes for training -TEST_EPISODES = 10 # total number of episodes for testing -MAX_STEPS = 200 # total number of steps for each episode -GAMMA = 0.9 # reward discount -LR_A = 0.0001 # learning rate for actor -LR_C = 0.0002 # learning rate for critic -BATCH_SIZE = 32 # update batch size -ACTOR_UPDATE_STEPS = 10 # actor update steps -CRITIC_UPDATE_STEPS = 10 # critic update steps - -# ppo-penalty parameters -KL_TARGET = 0.01 -LAM = 0.5 - -# ppo-clip parameters -EPSILON = 0.2 - - -############################### PPO #################################### - - -class PPO(object): - """ - PPO class - """ - def __init__(self, state_dim, action_dim, action_bound, method='clip'): - # critic - with tf.name_scope('critic'): - inputs = tl.layers.Input([None, state_dim], tf.float32, 'state') - layer = tl.layers.Dense(64, tf.nn.relu)(inputs) - layer = tl.layers.Dense(64, tf.nn.relu)(layer) - v = tl.layers.Dense(1)(layer) - self.critic = tl.models.Model(inputs, v) - self.critic.train() - - # actor - with tf.name_scope('actor'): - inputs = tl.layers.Input([None, state_dim], tf.float32, 'state') - layer = tl.layers.Dense(64, tf.nn.relu)(inputs) - layer = 
tl.layers.Dense(64, tf.nn.relu)(layer) - a = tl.layers.Dense(action_dim, tf.nn.tanh)(layer) - mean = tl.layers.Lambda(lambda x: x * action_bound, name='lambda')(a) - logstd = tf.Variable(np.zeros(action_dim, dtype=np.float32)) - self.actor = tl.models.Model(inputs, mean) - self.actor.trainable_weights.append(logstd) - self.actor.logstd = logstd - self.actor.train() - - self.actor_opt = tf.optimizers.Adam(LR_A) - self.critic_opt = tf.optimizers.Adam(LR_C) - - self.method = method - if method == 'penalty': - self.kl_target = KL_TARGET - self.lam = LAM - elif method == 'clip': - self.epsilon = EPSILON - - self.state_buffer, self.action_buffer = [], [] - self.reward_buffer, self.cumulative_reward_buffer = [], [] - self.action_bound = action_bound - - def train_actor(self, state, action, adv, old_pi): - """ - Update policy network - :param state: state batch - :param action: action batch - :param adv: advantage batch - :param old_pi: old pi distribution - :return: kl_mean or None - """ - with tf.GradientTape() as tape: - mean, std = self.actor(state), tf.exp(self.actor.logstd) - pi = tfp.distributions.Normal(mean, std) - - ratio = tf.exp(pi.log_prob(action) - old_pi.log_prob(action)) - surr = ratio * adv - if self.method == 'penalty': # ppo penalty - kl = tfp.distributions.kl_divergence(old_pi, pi) - kl_mean = tf.reduce_mean(kl) - loss = -(tf.reduce_mean(surr - self.lam * kl)) - else: # ppo clip - loss = -tf.reduce_mean( - tf.minimum(surr, - tf.clip_by_value(ratio, 1. - self.epsilon, 1. + self.epsilon) * adv) - ) - a_grad = tape.gradient(loss, self.actor.trainable_weights) - self.actor_opt.apply_gradients(zip(a_grad, self.actor.trainable_weights)) - - if self.method == 'penalty': - return kl_mean - - def train_critic(self, reward, state): - """ - Update critic network - :param reward: cumulative reward batch - :param state: state batch - :return: None - """ - reward = np.array(reward, dtype=np.float32) - with tf.GradientTape() as tape: - advantage = reward - self.critic(state) - loss = tf.reduce_mean(tf.square(advantage)) - grad = tape.gradient(loss, self.critic.trainable_weights) - self.critic_opt.apply_gradients(zip(grad, self.critic.trainable_weights)) - - def update(self): - """ - Update the actor and critic; the penalty method adaptively tunes lam to keep the KL divergence near its target - :return: None - """ - s = np.array(self.state_buffer, np.float32) - a = np.array(self.action_buffer, np.float32) - r = np.array(self.cumulative_reward_buffer, np.float32) - mean, std = self.actor(s), tf.exp(self.actor.logstd) - pi = tfp.distributions.Normal(mean, std) - adv = r - self.critic(s) - - # update actor - if self.method == 'penalty': - for _ in range(ACTOR_UPDATE_STEPS): - kl = self.train_actor(s, a, adv, pi) - if kl < self.kl_target / 1.5: - self.lam /= 2 - elif kl > self.kl_target * 1.5: - self.lam *= 2 - else: - for _ in range(ACTOR_UPDATE_STEPS): - self.train_actor(s, a, adv, pi) - - # update critic - for _ in range(CRITIC_UPDATE_STEPS): - self.train_critic(r, s) - - self.state_buffer.clear() - self.action_buffer.clear() - self.cumulative_reward_buffer.clear() - self.reward_buffer.clear() - - def get_action(self, state, greedy=False): - """ - Choose action - :param state: state - :param greedy: whether to choose the action greedily - :return: clipped action - """ - state = state[np.newaxis, :].astype(np.float32) - mean, std = self.actor(state), tf.exp(self.actor.logstd) - if greedy: - action = mean[0] - else: - pi = tfp.distributions.Normal(mean, std) - action = tf.squeeze(pi.sample(1), axis=0)[0] # choosing action - return np.clip(action, -self.action_bound,
self.action_bound) - - def save(self): - """ - save trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.save_weights_to_hdf5(os.path.join(path, 'critic.hdf5'), self.critic) - - def load(self): - """ - load trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'critic.hdf5'), self.critic) - - def store_transition(self, state, action, reward): - """ - Store state, action, reward at each step - :param state: - :param action: - :param reward: - :return: None - """ - self.state_buffer.append(state) - self.action_buffer.append(action) - self.reward_buffer.append(reward) - - def finish_path(self, next_state, done): - """ - Calculate cumulative reward - :param next_state: - :return: None - """ - if done: - v_s_ = 0 - else: - v_s_ = self.critic(np.array([next_state], np.float32))[0, 0] - discounted_r = [] - for r in self.reward_buffer[::-1]: - v_s_ = r + GAMMA * v_s_ - discounted_r.append(v_s_) - discounted_r.reverse() - discounted_r = np.array(discounted_r)[:, np.newaxis] - self.cumulative_reward_buffer.extend(discounted_r) - self.reward_buffer.clear() - - -if __name__ == '__main__': - env = gym.make(ENV_ID).unwrapped - - # reproducible - env.seed(RANDOM_SEED) - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) - - state_dim = env.observation_space.shape[0] - action_dim = env.action_space.shape[0] - action_bound = env.action_space.high - - agent = PPO(state_dim, action_dim, action_bound) - - t0 = time.time() - if args.train: - all_episode_reward = [] - for episode in range(TRAIN_EPISODES): - state = env.reset() - episode_reward = 0 - for step in range(MAX_STEPS): # in one episode - if RENDER: - env.render() - action = agent.get_action(state) - state_, reward, done, info = env.step(action) - agent.store_transition(state, action, reward) - state = state_ - episode_reward += reward - - # update ppo - if len(agent.state_buffer) >= BATCH_SIZE: - agent.finish_path(state_, done) - agent.update() - if done: - break - agent.finish_path(state_, done) - print( - 'Training | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TRAIN_EPISODES, episode_reward, time.time() - t0) - ) - if episode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - agent.save() - - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - # test - agent.load() - for episode in range(TEST_EPISODES): - state = env.reset() - episode_reward = 0 - for step in range(MAX_STEPS): - env.render() - state, reward, done, info = env.step(agent.get_action(state, greedy=True)) - episode_reward += reward - if done: - break - print( - 'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TEST_EPISODES, episode_reward, - time.time() - t0)) diff --git a/examples/reinforcement_learning/tutorial_Qlearning.py b/examples/reinforcement_learning/tutorial_Qlearning.py deleted file mode 100644 index b2d553403..000000000 --- a/examples/reinforcement_learning/tutorial_Qlearning.py +++ /dev/null @@ 
-1,113 +0,0 @@ -"""Q-Table learning algorithm. -Non deep learning - TD Learning, Off-Policy, e-Greedy Exploration -Q(S, A) <- Q(S, A) + alpha * (R + lambda * Q(newS, newA) - Q(S, A)) -See David Silver RL Tutorial Lecture 5 - Q-Learning for more details. -For Q-Network, see tutorial_frozenlake_q_network.py -EN: https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0#.5m3361vlw -CN: https://zhuanlan.zhihu.com/p/25710327 -tensorflow==2.0.0a0 -tensorlayer==2.0.0 -""" - -import argparse -import os -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np - -parser = argparse.ArgumentParser() -parser.add_argument('--train', dest='train', action='store_true', default=True) -parser.add_argument('--test', dest='test', action='store_true', default=True) - -parser.add_argument( - '--save_path', default=None, help='folder to save if mode == train else model path,' - 'qnet will be saved once target net update' -) -parser.add_argument('--seed', help='random seed', type=int, default=0) -parser.add_argument('--env_id', default='FrozenLake-v0') -args = parser.parse_args() - -## Load the environment -alg_name = 'Qlearning' -env_id = args.env_id -env = gym.make(env_id) -render = False # display the game environment - -##================= Implement Q-Table learning algorithm =====================## -## Initialize table with all zeros -Q = np.zeros([env.observation_space.n, env.action_space.n]) -## Set learning parameters -lr = .85 # alpha, if use value function approximation, we can ignore it -lambd = .99 # decay factor -num_episodes = 10000 -t0 = time.time() - -if args.train: - all_episode_reward = [] - for i in range(num_episodes): - ## Reset environment and get first new observation - s = env.reset() - rAll = 0 - ## The Q-Table learning algorithm - for j in range(99): - if render: env.render() - ## Choose an action by greedily (with noise) picking from Q table - a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1. 
/ (i + 1))) - ## Get new state and reward from environment - s1, r, d, _ = env.step(a) - ## Update Q-Table with new knowledge - Q[s, a] = Q[s, a] + lr * (r + lambd * np.max(Q[s1, :]) - Q[s, a]) - rAll += r - s = s1 - if d is True: - break - print( - 'Training | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - i + 1, num_episodes, rAll, - time.time() - t0 - ) - ) - if i == 0: - all_episode_reward.append(rAll) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + rAll * 0.1) - - # save - path = os.path.join('model', '_'.join([alg_name, env_id])) - if not os.path.exists(path): - os.makedirs(path) - np.save(os.path.join(path, 'Q_table.npy'), Q) - - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([alg_name, env_id]))) - - # print("Final Q-Table Values:/n %s" % Q) - -if args.test: - path = os.path.join('model', '_'.join([alg_name, env_id])) - Q = np.load(os.path.join(path, 'Q_table.npy')) - for i in range(num_episodes): - ## Reset environment and get first new observation - s = env.reset() - rAll = 0 - ## The Q-Table learning algorithm - for j in range(99): - ## Choose an action by greedily (with noise) picking from Q table - a = np.argmax(Q[s, :]) - ## Get new state and reward from environment - s1, r, d, _ = env.step(a) - ## Update Q-Table with new knowledge - rAll += r - s = s1 - if d is True: - break - print( - 'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - i + 1, num_episodes, rAll, - time.time() - t0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_SAC.py b/examples/reinforcement_learning/tutorial_SAC.py deleted file mode 100644 index ef9b28d44..000000000 --- a/examples/reinforcement_learning/tutorial_SAC.py +++ /dev/null @@ -1,453 +0,0 @@ -""" -Soft Actor-Critic (SAC) ------------------- -Actor policy in SAC is stochastic, with off-policy training. -And 'soft' in SAC indicates the trade-off between the entropy and expected return. -The additional consideration of entropy term helps with more explorative policy. -And this implementation contains an automatic update for the entropy factor. -This version of Soft Actor-Critic (SAC) implementation contains 5 networks: -2 Q net, 2 target Q net, 1 policy net. -It uses alpha loss. 
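(Editor's aside before the references, not part of the deleted file: the "alpha loss" named above is the automatic entropy-temperature objective. A minimal sketch, assuming `log_prob` comes from the policy's `evaluate()`; the names are illustrative, and the full version appears in `update()` further down.)

```python
import tensorflow as tf

# Sketch of the entropy-temperature ("alpha") loss. target_entropy is
# commonly set to -action_dim (Pendulum has a 1-D action space).
log_alpha = tf.Variable(0.0, name='log_alpha')
target_entropy = -1.0

def alpha_loss_fn(log_prob):
    # raises alpha when the policy is more deterministic than the target
    # entropy, lowers it when the policy is more random
    return -tf.reduce_mean(log_alpha * (log_prob + target_entropy))
```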
-Reference ---------- -paper: https://arxiv.org/pdf/1812.05905.pdf -Environment ---- -Openai Gym Pendulum-v0, continuous action space -https://gym.openai.com/envs/Pendulum-v0/ -Prerequisites --------------- -tensorflow >=2.0.0a0 -tensorflow-probability 0.6.0 -tensorlayer >=2.0.0 -&& -pip install box2d box2d-kengz --user -To run ------- -python tutorial_SAC.py --train/test -""" - -import argparse -import os -import random -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf - -import tensorflow_probability as tfp -import tensorlayer as tl -from tensorlayer.layers import Dense -from tensorlayer.models import Model - -Normal = tfp.distributions.Normal -tl.logging.set_verbosity(tl.logging.DEBUG) - -# add arguments in command --train/test -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### - -ENV_ID = 'Pendulum-v0' # environment id -RANDOM_SEED = 2 # random seed -RENDER = False # render while training - -# RL training -ALG_NAME = 'SAC' -TRAIN_EPISODES = 100 # total number of episodes for training -TEST_EPISODES = 10 # total number of episodes for training -MAX_STEPS = 200 # total number of steps for each episode -EXPLORE_STEPS = 100 # 500 for random action sampling in the beginning of training - -BATCH_SIZE = 256 # update batch size -HIDDEN_DIM = 32 # size of hidden layers for networks -UPDATE_ITR = 3 # repeated updates for single step -SOFT_Q_LR = 3e-4 # q_net learning rate -POLICY_LR = 3e-4 # policy_net learning rate -ALPHA_LR = 3e-4 # alpha learning rate -POLICY_TARGET_UPDATE_INTERVAL = 3 # delayed update for the policy network and target networks -REWARD_SCALE = 1. 
# value range of reward -REPLAY_BUFFER_SIZE = 5e5 # size of the replay buffer - -AUTO_ENTROPY = True # automatically updating variable alpha for entropy - -############################### SAC #################################### - - -class ReplayBuffer: - """ - a ring buffer for storing transitions and sampling for training - :state: (state_dim,) - :action: (action_dim,) - :reward: (,), scalar - :next_state: (state_dim,) - :done: (,), scalar (0 and 1) or bool (True and False) - """ - - def __init__(self, capacity): - self.capacity = capacity - self.buffer = [] - self.position = 0 - - def push(self, state, action, reward, next_state, done): - if len(self.buffer) < self.capacity: - self.buffer.append(None) - self.buffer[self.position] = (state, action, reward, next_state, done) - self.position = int((self.position + 1) % self.capacity) # as a ring buffer - - def sample(self, BATCH_SIZE): - batch = random.sample(self.buffer, BATCH_SIZE) - state, action, reward, next_state, done = map(np.stack, zip(*batch)) # stack for each element - """ - the * serves as unpack: sum(a,b) <=> batch=(a,b), sum(*batch) ; - zip: a=[1,2], b=[2,3], zip(a,b) => [(1, 2), (2, 3)] ; - the map serves as mapping the function on each list element: map(square, [2,3]) => [4,9] ; - np.stack((1,2)) => array([1, 2]) - """ - return state, action, reward, next_state, done - - def __len__(self): - return len(self.buffer) - - -class SoftQNetwork(Model): - """ the network for evaluate values of state-action pairs: Q(s,a) """ - - def __init__(self, num_inputs, num_actions, hidden_dim, init_w=3e-3): - super(SoftQNetwork, self).__init__() - input_dim = num_inputs + num_actions - w_init = tf.keras.initializers.glorot_normal( - seed=None - ) # glorot initialization is better than uniform in practice - # w_init = tf.random_uniform_initializer(-init_w, init_w) - - self.linear1 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=input_dim, name='q1') - self.linear2 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=hidden_dim, name='q2') - self.linear3 = Dense(n_units=1, W_init=w_init, in_channels=hidden_dim, name='q3') - - def forward(self, input): - x = self.linear1(input) - x = self.linear2(x) - x = self.linear3(x) - return x - - -class PolicyNetwork(Model): - """ the network for generating non-determinstic (Gaussian distributed) action from the state input """ - - def __init__( - self, num_inputs, num_actions, hidden_dim, action_range=1., init_w=3e-3, log_std_min=-20, log_std_max=2 - ): - super(PolicyNetwork, self).__init__() - - self.log_std_min = log_std_min - self.log_std_max = log_std_max - - w_init = tf.keras.initializers.glorot_normal(seed=None) - # w_init = tf.random_uniform_initializer(-init_w, init_w) - - self.linear1 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=num_inputs, name='policy1') - self.linear2 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=hidden_dim, name='policy2') - self.linear3 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=hidden_dim, name='policy3') - - self.mean_linear = Dense( - n_units=num_actions, W_init=w_init, b_init=tf.random_uniform_initializer(-init_w, init_w), - in_channels=hidden_dim, name='policy_mean' - ) - self.log_std_linear = Dense( - n_units=num_actions, W_init=w_init, b_init=tf.random_uniform_initializer(-init_w, init_w), - in_channels=hidden_dim, name='policy_logstd' - ) - - self.action_range = action_range - self.num_actions = num_actions - - def forward(self, state): - x = 
self.linear1(state) - x = self.linear2(x) - x = self.linear3(x) - - mean = self.mean_linear(x) - log_std = self.log_std_linear(x) - log_std = tf.clip_by_value(log_std, self.log_std_min, self.log_std_max) - - return mean, log_std - - def evaluate(self, state, epsilon=1e-6): - """ generate action with state for calculating gradients """ - state = state.astype(np.float32) - mean, log_std = self.forward(state) - std = tf.math.exp(log_std) # no clip in evaluation, clip affects gradients flow - - normal = Normal(0, 1) - z = normal.sample(mean.shape) - action_0 = tf.math.tanh(mean + std * z) # TanhNormal distribution as actions; reparameterization trick - action = self.action_range * action_0 - # according to original paper, with an extra last term for normalizing different action range - log_prob = Normal(mean, std).log_prob(mean + std * z) - tf.math.log(1. - action_0**2 + - epsilon) - np.log(self.action_range) - # both dims of normal.log_prob and -log(1-a**2) are (N,dim_of_action); - # the Normal.log_prob outputs the same dim of input features instead of 1 dim probability, - # needs sum up across the dim of actions to get 1 dim probability; or else use Multivariate Normal. - log_prob = tf.reduce_sum(log_prob, axis=1)[:, np.newaxis] # expand dim as reduce_sum causes 1 dim reduced - - return action, log_prob, z, mean, log_std - - def get_action(self, state, greedy=False): - """ generate action with state for interaction with envronment """ - mean, log_std = self.forward([state]) - std = tf.math.exp(log_std) - - normal = Normal(0, 1) - z = normal.sample(mean.shape) - action = self.action_range * tf.math.tanh( - mean + std * z - ) # TanhNormal distribution as actions; reparameterization trick - - action = self.action_range * tf.math.tanh(mean) if greedy else action - return action.numpy()[0] - - def sample_action(self, ): - """ generate random actions for exploration """ - a = tf.random.uniform([self.num_actions], -1, 1) - return self.action_range * a.numpy() - - -class SAC: - - def __init__( - self, state_dim, action_dim, action_range, hidden_dim, replay_buffer, SOFT_Q_LR=3e-4, POLICY_LR=3e-4, - ALPHA_LR=3e-4 - ): - self.replay_buffer = replay_buffer - - # initialize all networks - self.soft_q_net1 = SoftQNetwork(state_dim, action_dim, hidden_dim) - self.soft_q_net2 = SoftQNetwork(state_dim, action_dim, hidden_dim) - self.target_soft_q_net1 = SoftQNetwork(state_dim, action_dim, hidden_dim) - self.target_soft_q_net2 = SoftQNetwork(state_dim, action_dim, hidden_dim) - self.policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim, action_range) - self.soft_q_net1.train() - self.soft_q_net2.train() - self.target_soft_q_net1.eval() - self.target_soft_q_net2.eval() - self.policy_net.train() - - self.log_alpha = tf.Variable(0, dtype=np.float32, name='log_alpha') - self.alpha = tf.math.exp(self.log_alpha) - print('Soft Q Network (1,2): ', self.soft_q_net1) - print('Policy Network: ', self.policy_net) - # set mode - self.soft_q_net1.train() - self.soft_q_net2.train() - self.target_soft_q_net1.eval() - self.target_soft_q_net2.eval() - self.policy_net.train() - - # initialize weights of target networks - self.target_soft_q_net1 = self.target_ini(self.soft_q_net1, self.target_soft_q_net1) - self.target_soft_q_net2 = self.target_ini(self.soft_q_net2, self.target_soft_q_net2) - - self.soft_q_optimizer1 = tf.optimizers.Adam(SOFT_Q_LR) - self.soft_q_optimizer2 = tf.optimizers.Adam(SOFT_Q_LR) - self.policy_optimizer = tf.optimizers.Adam(POLICY_LR) - self.alpha_optimizer = tf.optimizers.Adam(ALPHA_LR) - - def 
target_ini(self, net, target_net): - """ hard-copy update for initializing target networks """ - for target_param, param in zip(target_net.trainable_weights, net.trainable_weights): - target_param.assign(param) - return target_net - - def target_soft_update(self, net, target_net, soft_tau): - """ soft update the target net with Polyak averaging """ - for target_param, param in zip(target_net.trainable_weights, net.trainable_weights): - target_param.assign( # copy weight value into target parameters - target_param * (1.0 - soft_tau) + param * soft_tau - ) - return target_net - - def update(self, batch_size, reward_scale=10., auto_entropy=True, target_entropy=-2, gamma=0.99, soft_tau=1e-2): - """ update all networks in SAC """ - state, action, reward, next_state, done = self.replay_buffer.sample(batch_size) - - reward = reward[:, np.newaxis] # expand dim - done = done[:, np.newaxis] - - reward = reward_scale * (reward - np.mean(reward, axis=0)) / ( - np.std(reward, axis=0) + 1e-6 - ) # normalize with batch mean and std; plus a small number to prevent numerical problem - - # Training Q Function - new_next_action, next_log_prob, _, _, _ = self.policy_net.evaluate(next_state) - target_q_input = tf.concat([next_state, new_next_action], 1) # the dim 0 is number of samples - target_q_min = tf.minimum( - self.target_soft_q_net1(target_q_input), self.target_soft_q_net2(target_q_input) - ) - self.alpha * next_log_prob - target_q_value = reward + (1 - done) * gamma * target_q_min # if done==1, only reward - q_input = tf.concat([state, action], 1) # the dim 0 is number of samples - - with tf.GradientTape() as q1_tape: - predicted_q_value1 = self.soft_q_net1(q_input) - q_value_loss1 = tf.reduce_mean(tf.losses.mean_squared_error(predicted_q_value1, target_q_value)) - q1_grad = q1_tape.gradient(q_value_loss1, self.soft_q_net1.trainable_weights) - self.soft_q_optimizer1.apply_gradients(zip(q1_grad, self.soft_q_net1.trainable_weights)) - - with tf.GradientTape() as q2_tape: - predicted_q_value2 = self.soft_q_net2(q_input) - q_value_loss2 = tf.reduce_mean(tf.losses.mean_squared_error(predicted_q_value2, target_q_value)) - q2_grad = q2_tape.gradient(q_value_loss2, self.soft_q_net2.trainable_weights) - self.soft_q_optimizer2.apply_gradients(zip(q2_grad, self.soft_q_net2.trainable_weights)) - - # Training Policy Function - with tf.GradientTape() as p_tape: - new_action, log_prob, z, mean, log_std = self.policy_net.evaluate(state) - new_q_input = tf.concat([state, new_action], 1) # the dim 0 is number of samples - """ implementation 1 """ - predicted_new_q_value = tf.minimum(self.soft_q_net1(new_q_input), self.soft_q_net2(new_q_input)) - # """ implementation 2 """ - # predicted_new_q_value = self.soft_q_net1(new_q_input) - policy_loss = tf.reduce_mean(self.alpha * log_prob - predicted_new_q_value) - p_grad = p_tape.gradient(policy_loss, self.policy_net.trainable_weights) - self.policy_optimizer.apply_gradients(zip(p_grad, self.policy_net.trainable_weights)) - - # Updating alpha w.r.t entropy - # alpha: trade-off between exploration (max entropy) and exploitation (max Q) - if auto_entropy is True: - with tf.GradientTape() as alpha_tape: - alpha_loss = -tf.reduce_mean((self.log_alpha * (log_prob + target_entropy))) - alpha_grad = alpha_tape.gradient(alpha_loss, [self.log_alpha]) - self.alpha_optimizer.apply_gradients(zip(alpha_grad, [self.log_alpha])) - self.alpha = tf.math.exp(self.log_alpha) - else: # fixed alpha - self.alpha = 1. 
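# (Editor's note, not in the original file: this is the fixed-temperature branch; alpha is held constant and the alpha_loss assigned below is a placeholder kept for symmetry with the auto-entropy branch above, it is never used afterwards.)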
- alpha_loss = 0 - - # Soft update the target value nets - self.target_soft_q_net1 = self.target_soft_update(self.soft_q_net1, self.target_soft_q_net1, soft_tau) - self.target_soft_q_net2 = self.target_soft_update(self.soft_q_net2, self.target_soft_q_net2, soft_tau) - - def save(self): # save trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - extend_path = lambda s: os.path.join(path, s) - tl.files.save_npz(self.soft_q_net1.trainable_weights, extend_path('model_q_net1.npz')) - tl.files.save_npz(self.soft_q_net2.trainable_weights, extend_path('model_q_net2.npz')) - tl.files.save_npz(self.target_soft_q_net1.trainable_weights, extend_path('model_target_q_net1.npz')) - tl.files.save_npz(self.target_soft_q_net2.trainable_weights, extend_path('model_target_q_net2.npz')) - tl.files.save_npz(self.policy_net.trainable_weights, extend_path('model_policy_net.npz')) - np.save(extend_path('log_alpha.npy'), self.log_alpha.numpy()) # save log_alpha variable - - def load_weights(self): # load trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - extend_path = lambda s: os.path.join(path, s) - tl.files.load_and_assign_npz(extend_path('model_q_net1.npz'), self.soft_q_net1) - tl.files.load_and_assign_npz(extend_path('model_q_net2.npz'), self.soft_q_net2) - tl.files.load_and_assign_npz(extend_path('model_target_q_net1.npz'), self.target_soft_q_net1) - tl.files.load_and_assign_npz(extend_path('model_target_q_net2.npz'), self.target_soft_q_net2) - tl.files.load_and_assign_npz(extend_path('model_policy_net.npz'), self.policy_net) - self.log_alpha.assign(np.load(extend_path('log_alpha.npy'))) # load log_alpha variable - - -if __name__ == '__main__': - # initialization of env - env = gym.make(ENV_ID).unwrapped - state_dim = env.observation_space.shape[0] - action_dim = env.action_space.shape[0] - action_range = env.action_space.high # scale action, [-action_range, action_range] - - # reproducible - env.seed(RANDOM_SEED) - random.seed(RANDOM_SEED) - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) - - # initialization of buffer - replay_buffer = ReplayBuffer(REPLAY_BUFFER_SIZE) - # initialization of trainer - agent = SAC(state_dim, action_dim, action_range, HIDDEN_DIM, replay_buffer, SOFT_Q_LR, POLICY_LR, ALPHA_LR) - - t0 = time.time() - # training loop - if args.train: - frame_idx = 0 - all_episode_reward = [] - - # need an extra call here to make inside functions be able to use model.forward - state = env.reset().astype(np.float32) - agent.policy_net([state]) - - for episode in range(TRAIN_EPISODES): - state = env.reset().astype(np.float32) - episode_reward = 0 - for step in range(MAX_STEPS): - if RENDER: - env.render() - if frame_idx > EXPLORE_STEPS: - action = agent.policy_net.get_action(state) - else: - action = agent.policy_net.sample_action() - - next_state, reward, done, _ = env.step(action) - next_state = next_state.astype(np.float32) - done = 1 if done is True else 0 - - replay_buffer.push(state, action, reward, next_state, done) - state = next_state - episode_reward += reward - frame_idx += 1 - - if len(replay_buffer) > BATCH_SIZE: - for i in range(UPDATE_ITR): - agent.update( - BATCH_SIZE, reward_scale=REWARD_SCALE, auto_entropy=AUTO_ENTROPY, - target_entropy=-1. 
* action_dim - ) - - if done: - break - if episode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - print( - 'Training | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TRAIN_EPISODES, episode_reward, - time.time() - t0 - ) - ) - agent.save() - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - agent.load_weights() - - # need an extra call here to make inside functions be able to use model.forward - state = env.reset().astype(np.float32) - agent.policy_net([state]) - - for episode in range(TEST_EPISODES): - state = env.reset().astype(np.float32) - episode_reward = 0 - for step in range(MAX_STEPS): - env.render() - state, reward, done, info = env.step(agent.policy_net.get_action(state, greedy=True)) - state = state.astype(np.float32) - episode_reward += reward - if done: - break - print( - 'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TEST_EPISODES, episode_reward, - time.time() - t0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_TD3.py b/examples/reinforcement_learning/tutorial_TD3.py deleted file mode 100644 index 531eb20f7..000000000 --- a/examples/reinforcement_learning/tutorial_TD3.py +++ /dev/null @@ -1,436 +0,0 @@ -""" -Twin Delayed DDPG (TD3) ------------------------- -DDPG suffers from problems like overestimate of Q-values and sensitivity to hyper-parameters. -Twin Delayed DDPG (TD3) is a variant of DDPG with several tricks: -* Trick One: Clipped Double-Q Learning. TD3 learns two Q-functions instead of one (hence "twin"), -and uses the smaller of the two Q-values to form the targets in the Bellman error loss functions. - -* Trick Two: "Delayed" Policy Updates. TD3 updates the policy (and target networks) less frequently -than the Q-function. - -* Trick Three: Target Policy Smoothing. TD3 adds noise to the target action, to make it harder for -the policy to exploit Q-function errors by smoothing out Q along changes in action. - -The implementation of TD3 includes 6 networks: 2 Q-net, 2 target Q-net, 1 policy net, 1 target policy net -Actor policy in TD3 is deterministic, with Gaussian exploration noise. 
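(Editor's aside, not part of the deleted file: before the code, a compact sketch of how tricks one and three combine when TD3 forms its Bellman target. `target_policy`, `target_q1` and `target_q2` are illustrative stand-ins for the networks defined below; the deleted `evaluate()` folds the noise step into the target policy.)

```python
import tensorflow as tf

# Illustrative sketch: clipped double-Q target with target policy smoothing.
def td3_target(reward, done, next_state, target_policy, target_q1, target_q2,
               gamma=0.9, eval_noise_scale=0.5):
    a_next = target_policy(next_state)
    noise = tf.random.normal(tf.shape(a_next)) * eval_noise_scale
    noise = tf.clip_by_value(noise, -2 * eval_noise_scale, 2 * eval_noise_scale)
    a_next = a_next + noise                               # trick three: smoothing
    q_in = tf.concat([next_state, a_next], 1)             # (s', a') pairs
    q_min = tf.minimum(target_q1(q_in), target_q2(q_in))  # trick one: twin minimum
    return reward + (1 - done) * gamma * q_min            # no bootstrap past terminals
```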
- -Reference ---------- -original paper: https://arxiv.org/pdf/1802.09477.pdf - - -Environment ---- -Openai Gym Pendulum-v0, continuous action space -https://gym.openai.com/envs/Pendulum-v0/ - -Prerequisites ---- -tensorflow >=2.0.0a0 -tensorflow-probability 0.6.0 -tensorlayer >=2.0.0 - -&& -pip install box2d box2d-kengz --user - -To run -------- -python tutorial_TD3.py --train/test - -""" - -import argparse -import os -import random -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf - -import tensorflow_probability as tfp -import tensorlayer as tl -from tensorlayer.layers import Dense -from tensorlayer.models import Model - -Normal = tfp.distributions.Normal -tl.logging.set_verbosity(tl.logging.DEBUG) - -# add arguments in command --train/test -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### -# choose env -ENV_ID = 'Pendulum-v0' # environment id -RANDOM_SEED = 2 # random seed -RENDER = False # render while training - -# RL training -ALG_NAME = 'TD3' -TRAIN_EPISODES = 100 # total number of episodes for training -TEST_EPISODES = 10 # total number of episodes for training -MAX_STEPS = 200 # maximum number of steps for one episode -BATCH_SIZE = 64 # update batch size -EXPLORE_STEPS = 500 # 500 for random action sampling in the beginning of training - -HIDDEN_DIM = 64 # size of hidden layers for networks -UPDATE_ITR = 3 # repeated updates for single step -Q_LR = 3e-4 # q_net learning rate -POLICY_LR = 3e-4 # policy_net learning rate -POLICY_TARGET_UPDATE_INTERVAL = 3 # delayed steps for updating the policy network and target networks -EXPLORE_NOISE_SCALE = 1.0 # range of action noise for exploration -EVAL_NOISE_SCALE = 0.5 # range of action noise for evaluation of action value -REWARD_SCALE = 1. 
# value range of reward -REPLAY_BUFFER_SIZE = 5e5 # size of replay buffer - -############################### TD3 #################################### - - -class ReplayBuffer: - """ - a ring buffer for storing transitions and sampling for training - :state: (state_dim,) - :action: (action_dim,) - :reward: (,), scalar - :next_state: (state_dim,) - :done: (,), scalar (0 and 1) or bool (True and False) - """ - - def __init__(self, capacity): - self.capacity = capacity - self.buffer = [] - self.position = 0 - - def push(self, state, action, reward, next_state, done): - if len(self.buffer) < self.capacity: - self.buffer.append(None) - self.buffer[self.position] = (state, action, reward, next_state, done) - self.position = int((self.position + 1) % self.capacity) # as a ring buffer - - def sample(self, batch_size): - batch = random.sample(self.buffer, batch_size) - state, action, reward, next_state, done = map(np.stack, zip(*batch)) # stack for each element - """ - the * serves as unpack: sum(a,b) <=> batch=(a,b), sum(*batch) ; - zip: a=[1,2], b=[2,3], zip(a,b) => [(1, 2), (2, 3)] ; - the map serves as mapping the function on each list element: map(square, [2,3]) => [4,9] ; - np.stack((1,2)) => array([1, 2]) - """ - return state, action, reward, next_state, done - - def __len__(self): - return len(self.buffer) - - -class QNetwork(Model): - """ the network for evaluate values of state-action pairs: Q(s,a) """ - - def __init__(self, num_inputs, num_actions, hidden_dim, init_w=3e-3): - super(QNetwork, self).__init__() - input_dim = num_inputs + num_actions - # w_init = tf.keras.initializers.glorot_normal(seed=None) - w_init = tf.random_uniform_initializer(-init_w, init_w) - - self.linear1 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=input_dim, name='q1') - self.linear2 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=hidden_dim, name='q2') - self.linear3 = Dense(n_units=1, W_init=w_init, in_channels=hidden_dim, name='q3') - - def forward(self, input): - x = self.linear1(input) - x = self.linear2(x) - x = self.linear3(x) - return x - - -class PolicyNetwork(Model): - """ the network for generating non-determinstic (Gaussian distributed) action from the state input """ - - def __init__(self, num_inputs, num_actions, hidden_dim, action_range=1., init_w=3e-3): - super(PolicyNetwork, self).__init__() - w_init = tf.random_uniform_initializer(-init_w, init_w) - - self.linear1 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=num_inputs, name='policy1') - self.linear2 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=hidden_dim, name='policy2') - self.linear3 = Dense(n_units=hidden_dim, act=tf.nn.relu, W_init=w_init, in_channels=hidden_dim, name='policy3') - self.output_linear = Dense( - n_units=num_actions, W_init=w_init, b_init=tf.random_uniform_initializer(-init_w, init_w), - in_channels=hidden_dim, name='policy_output' - ) - self.action_range = action_range - self.num_actions = num_actions - - def forward(self, state): - x = self.linear1(state) - x = self.linear2(x) - x = self.linear3(x) - output = tf.nn.tanh(self.output_linear(x)) # unit range output [-1, 1] - return output - - def evaluate(self, state, eval_noise_scale): - """ - generate action with state for calculating gradients; - eval_noise_scale: as the trick of target policy smoothing, for generating noisy actions. 
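Concretely, as implemented below: eps ~ Normal(0, 1),
noise = clip(eps * eval_noise_scale, -2 * eval_noise_scale, +2 * eval_noise_scale),
and the smoothed target action is a' = action_range * policy(s') + noise.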
- """ - state = state.astype(np.float32) - action = self.forward(state) - - action = self.action_range * action - - # add noise - normal = Normal(0, 1) - eval_noise_clip = 2 * eval_noise_scale - noise = normal.sample(action.shape) * eval_noise_scale - noise = tf.clip_by_value(noise, -eval_noise_clip, eval_noise_clip) - action = action + noise - return action - - def get_action(self, state, explore_noise_scale, greedy=False): - """ generate action with state for interaction with envronment """ - action = self.forward([state]) - action = self.action_range * action.numpy()[0] - if greedy: - return action - # add noise - normal = Normal(0, 1) - noise = normal.sample(action.shape) * explore_noise_scale - action += noise - return action.numpy() - - def sample_action(self): - """ generate random actions for exploration """ - a = tf.random.uniform([self.num_actions], -1, 1) - return self.action_range * a.numpy() - - -class TD3: - - def __init__( - self, state_dim, action_dim, action_range, hidden_dim, replay_buffer, policy_target_update_interval=1, - q_lr=3e-4, policy_lr=3e-4 - ): - self.replay_buffer = replay_buffer - - # initialize all networks - self.q_net1 = QNetwork(state_dim, action_dim, hidden_dim) - self.q_net2 = QNetwork(state_dim, action_dim, hidden_dim) - self.target_q_net1 = QNetwork(state_dim, action_dim, hidden_dim) - self.target_q_net2 = QNetwork(state_dim, action_dim, hidden_dim) - self.policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim, action_range) - self.target_policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim, action_range) - print('Q Network (1,2): ', self.q_net1) - print('Policy Network: ', self.policy_net) - - # initialize weights of target networks - self.target_q_net1 = self.target_ini(self.q_net1, self.target_q_net1) - self.target_q_net2 = self.target_ini(self.q_net2, self.target_q_net2) - self.target_policy_net = self.target_ini(self.policy_net, self.target_policy_net) - - # set train mode - self.q_net1.train() - self.q_net2.train() - self.target_q_net1.eval() - self.target_q_net2.eval() - self.policy_net.train() - self.target_policy_net.eval() - - self.update_cnt = 0 - self.policy_target_update_interval = policy_target_update_interval - - self.q_optimizer1 = tf.optimizers.Adam(q_lr) - self.q_optimizer2 = tf.optimizers.Adam(q_lr) - self.policy_optimizer = tf.optimizers.Adam(policy_lr) - - def target_ini(self, net, target_net): - """ hard-copy update for initializing target networks """ - for target_param, param in zip(target_net.trainable_weights, net.trainable_weights): - target_param.assign(param) - return target_net - - def target_soft_update(self, net, target_net, soft_tau): - """ soft update the target net with Polyak averaging """ - for target_param, param in zip(target_net.trainable_weights, net.trainable_weights): - target_param.assign( # copy weight value into target parameters - target_param * (1.0 - soft_tau) + param * soft_tau - ) - return target_net - - def update(self, batch_size, eval_noise_scale, reward_scale=10., gamma=0.9, soft_tau=1e-2): - """ update all networks in TD3 """ - self.update_cnt += 1 - state, action, reward, next_state, done = self.replay_buffer.sample(batch_size) - - reward = reward[:, np.newaxis] # expand dim - done = done[:, np.newaxis] - - new_next_action = self.target_policy_net.evaluate( - next_state, eval_noise_scale=eval_noise_scale - ) # clipped normal noise - reward = reward_scale * (reward - np.mean(reward, axis=0)) / ( - np.std(reward, axis=0) + 1e-6 - ) # normalize with batch mean and std; plus a small 
number to prevent numerical problem - - # Training Q Function - target_q_input = tf.concat([next_state, new_next_action], 1) # the dim 0 is number of samples - target_q_min = tf.minimum(self.target_q_net1(target_q_input), self.target_q_net2(target_q_input)) - - target_q_value = reward + (1 - done) * gamma * target_q_min # if done==1, only reward - q_input = tf.concat([state, action], 1) # input of q_net - - with tf.GradientTape() as q1_tape: - predicted_q_value1 = self.q_net1(q_input) - q_value_loss1 = tf.reduce_mean(tf.square(predicted_q_value1 - target_q_value)) - q1_grad = q1_tape.gradient(q_value_loss1, self.q_net1.trainable_weights) - self.q_optimizer1.apply_gradients(zip(q1_grad, self.q_net1.trainable_weights)) - - with tf.GradientTape() as q2_tape: - predicted_q_value2 = self.q_net2(q_input) - q_value_loss2 = tf.reduce_mean(tf.square(predicted_q_value2 - target_q_value)) - q2_grad = q2_tape.gradient(q_value_loss2, self.q_net2.trainable_weights) - self.q_optimizer2.apply_gradients(zip(q2_grad, self.q_net2.trainable_weights)) - - # Training Policy Function - if self.update_cnt % self.policy_target_update_interval == 0: - with tf.GradientTape() as p_tape: - new_action = self.policy_net.evaluate( - state, eval_noise_scale=0.0 - ) # no noise, deterministic policy gradients - new_q_input = tf.concat([state, new_action], 1) - # """ implementation 1 """ - # predicted_new_q_value = tf.minimum(self.q_net1(new_q_input),self.q_net2(new_q_input)) - """ implementation 2 """ - predicted_new_q_value = self.q_net1(new_q_input) - policy_loss = -tf.reduce_mean(predicted_new_q_value) - p_grad = p_tape.gradient(policy_loss, self.policy_net.trainable_weights) - self.policy_optimizer.apply_gradients(zip(p_grad, self.policy_net.trainable_weights)) - - # Soft update the target nets - self.target_q_net1 = self.target_soft_update(self.q_net1, self.target_q_net1, soft_tau) - self.target_q_net2 = self.target_soft_update(self.q_net2, self.target_q_net2, soft_tau) - self.target_policy_net = self.target_soft_update(self.policy_net, self.target_policy_net, soft_tau) - - def save(self): # save trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - extend_path = lambda s: os.path.join(path, s) - tl.files.save_npz(self.q_net1.trainable_weights, extend_path('model_q_net1.npz')) - tl.files.save_npz(self.q_net2.trainable_weights, extend_path('model_q_net2.npz')) - tl.files.save_npz(self.target_q_net1.trainable_weights, extend_path('model_target_q_net1.npz')) - tl.files.save_npz(self.target_q_net2.trainable_weights, extend_path('model_target_q_net2.npz')) - tl.files.save_npz(self.policy_net.trainable_weights, extend_path('model_policy_net.npz')) - tl.files.save_npz(self.target_policy_net.trainable_weights, extend_path('model_target_policy_net.npz')) - - def load(self): # load trained weights - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - extend_path = lambda s: os.path.join(path, s) - tl.files.load_and_assign_npz(extend_path('model_q_net1.npz'), self.q_net1) - tl.files.load_and_assign_npz(extend_path('model_q_net2.npz'), self.q_net2) - tl.files.load_and_assign_npz(extend_path('model_target_q_net1.npz'), self.target_q_net1) - tl.files.load_and_assign_npz(extend_path('model_target_q_net2.npz'), self.target_q_net2) - tl.files.load_and_assign_npz(extend_path('model_policy_net.npz'), self.policy_net) - tl.files.load_and_assign_npz(extend_path('model_target_policy_net.npz'), self.target_policy_net) - - -if __name__ == '__main__': - # 
initialization of env - env = gym.make(ENV_ID).unwrapped - state_dim = env.observation_space.shape[0] - action_dim = env.action_space.shape[0] - action_range = env.action_space.high # scale action, [-action_range, action_range] - - # reproducible - env.seed(RANDOM_SEED) - random.seed(RANDOM_SEED) - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) - - # initialization of buffer - replay_buffer = ReplayBuffer(REPLAY_BUFFER_SIZE) - # initialization of trainer - agent = TD3( - state_dim, action_dim, action_range, HIDDEN_DIM, replay_buffer, POLICY_TARGET_UPDATE_INTERVAL, Q_LR, POLICY_LR - ) - t0 = time.time() - - # training loop - if args.train: - frame_idx = 0 - all_episode_reward = [] - - # need an extra call here to make inside functions be able to use model.forward - state = env.reset().astype(np.float32) - agent.policy_net([state]) - agent.target_policy_net([state]) - - for episode in range(TRAIN_EPISODES): - state = env.reset().astype(np.float32) - episode_reward = 0 - - for step in range(MAX_STEPS): - if RENDER: - env.render() - if frame_idx > EXPLORE_STEPS: - action = agent.policy_net.get_action(state, EXPLORE_NOISE_SCALE) - else: - action = agent.policy_net.sample_action() - - next_state, reward, done, _ = env.step(action) - next_state = next_state.astype(np.float32) - done = 1 if done is True else 0 - - replay_buffer.push(state, action, reward, next_state, done) - state = next_state - episode_reward += reward - frame_idx += 1 - - if len(replay_buffer) > BATCH_SIZE: - for i in range(UPDATE_ITR): - agent.update(BATCH_SIZE, EVAL_NOISE_SCALE, REWARD_SCALE) - - if done: - break - if episode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - print( - 'Training | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TRAIN_EPISODES, episode_reward, - time.time() - t0 - ) - ) - agent.save() - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - agent.load() - - # need an extra call here to make inside functions be able to use model.forward - state = env.reset().astype(np.float32) - agent.policy_net([state]) - - for episode in range(TEST_EPISODES): - state = env.reset().astype(np.float32) - episode_reward = 0 - for step in range(MAX_STEPS): - env.render() - action = agent.policy_net.get_action(state, EXPLORE_NOISE_SCALE, greedy=True) - state, reward, done, info = env.step(action) - state = state.astype(np.float32) - episode_reward += reward - if done: - break - print( - 'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TEST_EPISODES, episode_reward, - time.time() - t0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_TRPO.py b/examples/reinforcement_learning/tutorial_TRPO.py deleted file mode 100644 index ae47a20bd..000000000 --- a/examples/reinforcement_learning/tutorial_TRPO.py +++ /dev/null @@ -1,512 +0,0 @@ -""" -Trust Region Policy Optimization (TRPO) ---------------------------------------- -PG method with a large step can collapse the policy performance, -even with a small step can lead a large differences in policy. -TRPO constraint the step in policy space using KL divergence (rather than in parameter space), -which can monotonically improve performance and avoid a collapsed update. - -Reference ---------- -Trust Region Policy Optimization, Schulman et al. 
2015 -High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016 -Approximately Optimal Approximate Reinforcement Learning, Kakade and Langford 2002 -openai/spinningup : http://spinningup.openai.com/en/latest/algorithms/trpo.html - -Environment ------------ -Openai Gym Pendulum-v0, continual action space - -Prerequisites --------------- -tensorflow >=2.0.0a0 -tensorflow-probability 0.6.0 -tensorlayer >=2.0.0 - -To run ------- -python tutorial_TRPO.py --train/test - -""" -import argparse -import copy -import os -import threading -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import scipy.signal -import tensorflow as tf - -import tensorflow_probability as tfp -import tensorlayer as tl - -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### - -ENV_ID = 'Pendulum-v0' # environment id -RANDOM_SEED = 2 # random seed -RENDER = False - -ALG_NAME = 'TRPO' -TRAIN_EPISODES = 1000 # total number of episodes for training -TEST_EPISODES = 100 # total number of episodes for testing -MAX_STEPS = 200 # total number of steps for each episode - -HIDDEN_SIZES = [64, 64] # hidden layer size -GAMMA = 0.99 # reward discount -DELTA = 0.01 # KL-divergence limit for TRPO update. -VF_LR = 1e-3 # Learning rate for value function optimizer -TRAIN_VF_ITERS = 100 # Number of gradient descent steps to take on value function per epoch -DAMPING_COEFF = 0.1 # Artifact for numerical stability -CG_ITERS = 10 # Number of iterations of conjugate gradient to perform -BACKTRACK_ITERS = 10 # Maximum number of steps allowed in the backtracking line search -BACKTRACK_COEFF = 0.8 # How far back to step during backtracking line search -LAM = 0.97 # lambda for GAE-lambda -SAVE_FREQ = 10 # How often (in terms of gap between epochs) to save the current policy and value function -EPS = 1e-8 # epsilon -BATCH_SIZE = 512 # batch size - -##################### functions #################### - - -class GAE_Buffer: - """ - A buffer for storing trajectories experienced by a TRPO agent interacting - with the environment, and using Generalized Advantage Estimation (GAE-lambda) - for calculating the advantages of state-action pairs. - """ - - def __init__(self, obs_dim, act_dim, size, gamma=0.99, lam=0.95): - self.obs_buf = np.zeros((size, obs_dim), dtype=np.float32) - self.act_buf = np.zeros((size, act_dim), dtype=np.float32) - self.adv_buf = np.zeros(size, dtype=np.float32) - self.rew_buf = np.zeros(size, dtype=np.float32) - self.ret_buf = np.zeros(size, dtype=np.float32) - self.val_buf = np.zeros(size, dtype=np.float32) - self.logp_buf = np.zeros(size, dtype=np.float32) - self.mean_buf = np.zeros(size, dtype=np.float32) - self.log_std_buf = np.zeros(size, dtype=np.float32) - self.gamma, self.lam = gamma, lam - self.ptr, self.path_start_idx, self.max_size = 0, 0, size - - def store(self, obs, act, rew, val, logp, mean, log_std): - """ - Append one timestep of agent-environment interaction to the buffer. 
- """ - assert self.ptr < self.max_size # buffer has to have room so you can store - self.obs_buf[self.ptr] = obs - self.act_buf[self.ptr] = act - self.rew_buf[self.ptr] = rew - self.val_buf[self.ptr] = val - self.logp_buf[self.ptr] = logp - self.mean_buf[self.ptr] = mean - self.log_std_buf[self.ptr] = log_std - self.ptr += 1 - - def finish_path(self, last_val=0): - """ - Call this at the end of a trajectory, or when one gets cut off - by an epoch ending. This looks back in the buffer to where the - trajectory started, and uses rewards and value estimates from - the whole trajectory to compute advantage estimates with GAE-lambda, - as well as compute the rewards-to-go for each state, to use as - the targets for the value function. - - The "last_val" argument should be 0 if the trajectory ended - because the agent reached a terminal state (died), and otherwise - should be V(s_T), the value function estimated for the last state. - This allows us to bootstrap the reward-to-go calculation to account - for timesteps beyond the arbitrary episode horizon (or epoch cutoff). - """ - path_slice = slice(self.path_start_idx, self.ptr) - rews = np.append(self.rew_buf[path_slice], last_val) - vals = np.append(self.val_buf[path_slice], last_val) - # the next two lines implement GAE-lambda advantage calculation - deltas = rews[:-1] + self.gamma * vals[1:] - vals[:-1] - self.adv_buf[path_slice] = self._discount_cumsum(deltas, self.gamma * self.lam) - - # the next line computes rewards-to-go, to be targets for the value function - self.ret_buf[path_slice] = self._discount_cumsum(rews, self.gamma)[:-1] - - self.path_start_idx = self.ptr - - def _discount_cumsum(self, x, discount): - """ - magic from rllab for computing discounted cumulative sums of vectors. - - input: - vector x, - [x0, - x1, - x2] - - output: - [x0 + discount * x1 + discount^2 * x2, - x1 + discount * x2, - x2] - """ - return scipy.signal.lfilter([1], [1, float(-discount)], x[::-1], axis=0)[::-1] - - def is_full(self): - return self.ptr == self.max_size - - def get(self): - """ - Call this at the end of an epoch to get all of the data from - the buffer, with advantages appropriately normalized (shifted to have - mean zero and std one). Also, resets some pointers in the buffer. 
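(For reference, the advantages returned here were computed in finish_path()
with GAE-lambda: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t) and
A_t = sum_{l>=0} (gamma * lam)^l * delta_{t+l}, evaluated efficiently
by _discount_cumsum().)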
- """ - assert self.ptr == self.max_size # buffer has to be full before you can get - self.ptr, self.path_start_idx = 0, 0 - - # the next two lines implement the advantage normalization trick - adv_mean, adv_std = np.mean(self.adv_buf), np.std(self.adv_buf) - self.adv_buf = (self.adv_buf - adv_mean) / adv_std - return [self.obs_buf, self.act_buf, self.adv_buf, self.ret_buf, self.logp_buf, self.mean_buf, self.log_std_buf] - - -""" -Trust Region Policy Optimization -""" - - -class TRPO: - """ - trpo class - """ - - def __init__(self, state_dim, action_dim, action_bound): - # critic - with tf.name_scope('critic'): - layer = input_layer = tl.layers.Input([None, state_dim], tf.float32) - for d in HIDDEN_SIZES: - layer = tl.layers.Dense(d, tf.nn.relu)(layer) - v = tl.layers.Dense(1)(layer) - self.critic = tl.models.Model(input_layer, v) - self.critic.train() - - # actor - with tf.name_scope('actor'): - layer = input_layer = tl.layers.Input([None, state_dim], tf.float32) - for d in HIDDEN_SIZES: - layer = tl.layers.Dense(d, tf.nn.relu)(layer) - mean = tl.layers.Dense(action_dim, tf.nn.tanh)(layer) - mean = tl.layers.Lambda(lambda x: x * action_bound)(mean) - log_std = tf.Variable(np.zeros(action_dim, dtype=np.float32)) - - self.actor = tl.models.Model(input_layer, mean) - self.actor.trainable_weights.append(log_std) - self.actor.log_std = log_std - self.actor.train() - - self.buf = GAE_Buffer(state_dim, action_dim, BATCH_SIZE, GAMMA, LAM) - self.critic_optimizer = tf.optimizers.Adam(learning_rate=VF_LR) - self.action_bound = action_bound - - def get_action(self, state, greedy=False): - """ - get action - :param state: state input - :param greedy: get action greedy or not - :return: pi, v, logp_pi, mean, log_std - """ - state = np.array([state], np.float32) - mean = self.actor(state) - log_std = tf.convert_to_tensor(self.actor.log_std) - std = tf.exp(log_std) - std = tf.ones_like(mean) * std - pi = tfp.distributions.Normal(mean, std) - - if greedy: - action = mean - else: - action = pi.sample() - action = np.clip(action, -self.action_bound, self.action_bound) - logp_pi = pi.log_prob(action) - - value = self.critic(state) - return action[0], value, logp_pi, mean, log_std - - def pi_loss(self, states, actions, adv, old_log_prob): - """ - calculate pi loss - :param states: state batch - :param actions: action batch - :param adv: advantage batch - :param old_log_prob: old log probability - :return: pi loss - """ - mean = self.actor(states) - pi = tfp.distributions.Normal(mean, tf.exp(self.actor.log_std)) - log_prob = pi.log_prob(actions)[:, 0] - ratio = tf.exp(log_prob - old_log_prob) - surr = tf.reduce_mean(ratio * adv) - return -surr - - def gradient(self, states, actions, adv, old_log_prob): - """ - pi gradients - :param states: state batch - :param actions: actions batch - :param adv: advantage batch - :param old_log_prob: old log probability batch - :return: gradient - """ - pi_params = self.actor.trainable_weights - with tf.GradientTape() as tape: - loss = self.pi_loss(states, actions, adv, old_log_prob) - grad = tape.gradient(loss, pi_params) - gradient = self._flat_concat(grad) - return gradient, loss - - def train_vf(self, states, rewards_to_go): - """ - train v function - :param states: state batch - :param rewards_to_go: rewards-to-go batch - :return: None - """ - with tf.GradientTape() as tape: - value = self.critic(states) - loss = tf.reduce_mean((rewards_to_go - value[:, 0])**2) - grad = tape.gradient(loss, self.critic.trainable_weights) - self.critic_optimizer.apply_gradients(zip(grad, 
self.critic.trainable_weights)) - - def kl(self, states, old_mean, old_log_std): - """ - calculate kl-divergence - :param states: state batch - :param old_mean: mean batch of the old pi - :param old_log_std: log std batch of the old pi - :return: kl_mean or None - """ - old_mean = old_mean[:, np.newaxis] - old_log_std = old_log_std[:, np.newaxis] - old_std = tf.exp(old_log_std) - old_pi = tfp.distributions.Normal(old_mean, old_std) - - mean = self.actor(states) - std = tf.exp(self.actor.log_std) * tf.ones_like(mean) - pi = tfp.distributions.Normal(mean, std) - - kl = tfp.distributions.kl_divergence(pi, old_pi) - all_kls = tf.reduce_sum(kl, axis=1) - return tf.reduce_mean(all_kls) - - def _flat_concat(self, xs): - """ - flat concat input - :param xs: a list of tensor - :return: flat tensor - """ - return tf.concat([tf.reshape(x, (-1, )) for x in xs], axis=0) - - def get_pi_params(self): - """ - get actor trainable parameters - :return: flat actor trainable parameters - """ - pi_params = self.actor.trainable_weights - return self._flat_concat(pi_params) - - def set_pi_params(self, flat_params): - """ - set actor trainable parameters - :param flat_params: inputs - :return: None - """ - pi_params = self.actor.trainable_weights - flat_size = lambda p: int(np.prod(p.shape.as_list())) # the 'int' is important for scalars - splits = tf.split(flat_params, [flat_size(p) for p in pi_params]) - new_params = [tf.reshape(p_new, p.shape) for p, p_new in zip(pi_params, splits)] - return tf.group([p.assign(p_new) for p, p_new in zip(pi_params, new_params)]) - - def save(self): - """ - save trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.save_weights_to_hdf5(os.path.join(path, 'critic.hdf5'), self.critic) - - def load(self): - """ - load trained weights - :return: None - """ - path = os.path.join('model', '_'.join([ALG_NAME, ENV_ID])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'actor.hdf5'), self.actor) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'critic.hdf5'), self.critic) - - def cg(self, Ax, b): - """ - Conjugate gradient algorithm - (see https://en.wikipedia.org/wiki/Conjugate_gradient_method) - """ - x = np.zeros_like(b) - r = copy.deepcopy(b) # Note: should be 'b - Ax(x)', but for x=0, Ax(x)=0. Change if doing warm start. 
- p = copy.deepcopy(r) - r_dot_old = np.dot(r, r) - for _ in range(CG_ITERS): - z = Ax(p) - alpha = r_dot_old / (np.dot(p, z) + EPS) - x += alpha * p - r -= alpha * z - r_dot_new = np.dot(r, r) - p = r + (r_dot_new / r_dot_old) * p - r_dot_old = r_dot_new - return x - - def hvp(self, states, old_mean, old_log_std, x): - """ - calculate Hessian-vector product - :param states: state batch - :param old_mean: mean batch of the old pi - :param old_log_std: log std batch of the old pi - :return: hvp - """ - pi_params = self.actor.trainable_weights - with tf.GradientTape() as tape1: - with tf.GradientTape() as tape0: - d_kl = self.kl(states, old_mean, old_log_std) - g = self._flat_concat(tape0.gradient(d_kl, pi_params)) - l = tf.reduce_sum(g * x) - hvp = self._flat_concat(tape1.gradient(l, pi_params)) - - if DAMPING_COEFF > 0: - hvp += DAMPING_COEFF * x - return hvp - - def update(self): - """ - update trpo - :return: None - """ - states, actions, adv, rewards_to_go, logp_old_ph, old_mu, old_log_std = self.buf.get() - g, pi_l_old = self.gradient(states, actions, adv, logp_old_ph) - - Hx = lambda x: self.hvp(states, old_mu, old_log_std, x) - x = self.cg(Hx, g) - - alpha = np.sqrt(2 * DELTA / (np.dot(x, Hx(x)) + EPS)) - old_params = self.get_pi_params() - - def set_and_eval(step): - params = old_params - alpha * x * step - self.set_pi_params(params) - d_kl = self.kl(states, old_mu, old_log_std) - loss = self.pi_loss(states, actions, adv, logp_old_ph) - return [d_kl, loss] - - # trpo with backtracking line search, hard kl - for j in range(BACKTRACK_ITERS): - kl, pi_l_new = set_and_eval(step=BACKTRACK_COEFF**j) - if kl <= DELTA and pi_l_new <= pi_l_old: - # Accepting new params at step of line search - break - else: - # Line search failed! Keeping old params. - set_and_eval(step=0.) 
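# Note on the step size used above: expanding the KL constraint to second
# order gives D_KL ≈ 0.5 * s^T H s for a parameter step s, so the largest
# step along the search direction x = H^{-1} g (obtained from cg) that
# satisfies 0.5 * (alpha * x)^T H (alpha * x) <= DELTA is
# alpha = sqrt(2 * DELTA / (x^T H x)); the backtracking loop then shrinks
# it by BACKTRACK_COEFF**j until the exact KL and surrogate-loss checks pass.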
- - # Value function updates - for _ in range(TRAIN_VF_ITERS): - self.train_vf(states, rewards_to_go) - - def finish_path(self, done, next_state): - """ - finish a trajectory - :param done: whether the epoch is done - :param next_state: next state - :return: None - """ - if not done: - next_state = np.array([next_state], np.float32) - last_val = self.critic(next_state) - else: - last_val = 0 - self.buf.finish_path(last_val) - - -if __name__ == '__main__': - env = gym.make(ENV_ID).unwrapped - - # reproducible - np.random.seed(RANDOM_SEED) - tf.random.set_seed(RANDOM_SEED) - env.seed(RANDOM_SEED) - - state_dim = env.observation_space.shape[0] - action_dim = env.action_space.shape[0] - action_bound = env.action_space.high - - agent = TRPO(state_dim, action_dim, action_bound) - - t0 = time.time() - if args.train: # train - all_episode_reward = [] - for episode in range(TRAIN_EPISODES): - state = env.reset() - state = np.array(state, np.float32) - episode_reward = 0 - for step in range(MAX_STEPS): - if RENDER: - env.render() - action, value, logp, mean, log_std = agent.get_action(state) - next_state, reward, done, _ = env.step(action) - next_state = np.array(next_state, np.float32) - agent.buf.store(state, action, reward, value, logp, mean, log_std) - episode_reward += reward - state = next_state - if agent.buf.is_full(): - agent.finish_path(done, next_state) - agent.update() - if done: - break - agent.finish_path(done, next_state) - if episode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - print( - 'Training | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TRAIN_EPISODES, episode_reward, - time.time() - t0 - ) - ) - if episode % SAVE_FREQ == 0: - agent.save() - agent.save() - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([ALG_NAME, ENV_ID]))) - - if args.test: - # test - agent.load() - for episode in range(TEST_EPISODES): - state = env.reset() - episode_reward = 0 - for step in range(MAX_STEPS): - env.render() - action, *_ = agent.get_action(state, greedy=True) - state, reward, done, info = env.step(action) - episode_reward += reward - if done: - break - print( - 'Testing | Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - episode + 1, TEST_EPISODES, episode_reward, - time.time() - t0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_atari_pong.py b/examples/reinforcement_learning/tutorial_atari_pong.py deleted file mode 100644 index e28f70bec..000000000 --- a/examples/reinforcement_learning/tutorial_atari_pong.py +++ /dev/null @@ -1,147 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""Monte-Carlo Policy Network π(a|s) (REINFORCE). -To understand Reinforcement Learning, we let computer to learn how to play -Pong game from the original screen inputs. Before we start, we highly recommend -you to go through a famous blog called “Deep Reinforcement Learning: Pong from -Pixels” which is a minimalistic implementation of deep reinforcement learning by -using python-numpy and OpenAI gym environment. -The code here is the reimplementation of Karpathy's Blog by using TensorLayer. -Compare with Karpathy's code, we store observation for a batch, but he store -observation for only one episode and gradients. (so we will use -more memory if the observation is very large.) - -TODO ------ -- update grads every step rather than storing all observation! 
-- tensorlayer@gmail.com - -References ------------- -- http://karpathy.github.io/2016/05/31/rl/ -""" -import time - -import gym -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -tl.logging.set_verbosity(tl.logging.DEBUG) - -# hyper-parameters -image_size = 80 -D = image_size * image_size -H = 200 -batch_size = 10 -learning_rate = 1e-4 -gamma = 0.99 -decay_rate = 0.99 -render = False # display the game environment -# resume = True # load existing policy network -model_file_name = "model_pong" -np.set_printoptions(threshold=np.inf) - - -def prepro(I): - """Prepro 210x160x3 uint8 frame into 6400 (80x80) 1D float vector.""" - I = I[35:195] - I = I[::2, ::2, 0] - I[I == 144] = 0 - I[I == 109] = 0 - I[I != 0] = 1 - return I.astype(np.float32).ravel() - - -env = gym.make("Pong-v0") -observation = env.reset() -prev_x = None -running_reward = None -reward_sum = 0 -episode_number = 0 - -xs, ys, rs = [], [], [] - - -# policy network -def get_model(inputs_shape): - ni = tl.layers.Input(inputs_shape) - nn = tl.layers.Dense(n_units=H, act=tf.nn.relu, name='hidden')(ni) - nn = tl.layers.Dense(n_units=3, name='output')(nn) - M = tl.models.Model(inputs=ni, outputs=nn, name="mlp") - return M - - -model = get_model([None, D]) -train_weights = model.trainable_weights - -optimizer = tf.optimizers.RMSprop(lr=learning_rate, decay=decay_rate) - -model.train() # set model to train mode (in case you add dropout into the model) - -start_time = time.time() -game_number = 0 -while True: - if render: - env.render() - - cur_x = prepro(observation) - x = cur_x - prev_x if prev_x is not None else np.zeros(D, dtype=np.float32) - x = x.reshape(1, D) - prev_x = cur_x - - _prob = model(x) - prob = tf.nn.softmax(_prob) - - # action. 1: STOP 2: UP 3: DOWN - # action = np.random.choice([1,2,3], p=prob.flatten()) - # action = tl.rein.choice_action_by_probs(prob.flatten(), [1, 2, 3]) - action = tl.rein.choice_action_by_probs(prob[0].numpy(), [1, 2, 3]) - - observation, reward, done, _ = env.step(action) - reward_sum += reward - xs.append(x) # all observations in an episode - ys.append(action - 1) # all fake labels in an episode (action begins from 1, so minus 1) - rs.append(reward) # all rewards in an episode - - if done: - episode_number += 1 - game_number = 0 - - if episode_number % batch_size == 0: - print('batch over...... updating parameters......') - epx = np.vstack(xs) - epy = np.asarray(ys) - epr = np.asarray(rs) - disR = tl.rein.discount_episode_rewards(epr, gamma) - disR -= np.mean(disR) - disR /= np.std(disR) - - xs, ys, rs = [], [], [] - - with tf.GradientTape() as tape: - _prob = model(epx) - _loss = tl.rein.cross_entropy_reward_loss(_prob, epy, disR) - grad = tape.gradient(_loss, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - ## TODO - # if episode_number % (batch_size * 100) == 0: - # tl.files.save_npz(network.all_params, name=model_file_name + '.npz') - - running_reward = reward_sum if running_reward is None else running_reward * 0.99 + reward_sum * 0.01 - print('resetting env. episode reward total was {}. 
running mean: {}'.format(reward_sum, running_reward)) - reward_sum = 0 - observation = env.reset() # reset env - prev_x = None - - if reward != 0: - print( - ( - 'episode %d: game %d took %.5fs, reward: %f' % - (episode_number, game_number, time.time() - start_time, reward) - ), ('' if reward == -1 else ' !!!!!!!!') - ) - start_time = time.time() - game_number += 1 diff --git a/examples/reinforcement_learning/tutorial_format.py b/examples/reinforcement_learning/tutorial_format.py deleted file mode 100644 index cd27ef2c4..000000000 --- a/examples/reinforcement_learning/tutorial_format.py +++ /dev/null @@ -1,98 +0,0 @@ -# the format of turorial algorithm # -# please heavily annotate the code # -''' -Algorithm Name ------------------------- -Briefly describe the algorithms, add some details. - -Reference ---------- -original paper: e.g. https://arxiv.org/pdf/1802.09477.pdf -website: ... - - -Environment ------------ -e.g. Openai Gym Pendulum-v0, continuous action space - -Prerequisites ---------------- -tensorflow >=2.0.0a0 -tensorlayer >=2.0.0 -... - -To run -------- -python tutorial_***.py --train/test - -''' - -import argparse -import time - -import numpy as np -import tensorflow as tf - -# import 'other package name' - -np.random.seed(2) -tf.random.set_seed(2) # reproducible - -# add arguments in command --train/test -parser = argparse.ArgumentParser(description='Train or test neural net motor controller.') -parser.add_argument('--train', dest='train', action='store_true', default=False) -parser.add_argument('--test', dest='test', action='store_true', default=True) -args = parser.parse_args() - -##################### hyper parameters #################### -A = a # description of hyper parameter -B = b # description of hyper parameter - -############################### Algorithm Name #################################### - - -class C(): # algorithm-specific classes - ''' description of class ''' - - def C1(): - ''' description of function''' - - -def D(): # some common functions, could be extracted into utils afterwards - ''' description of function ''' - - -if __name__ == '__main__': - '''initialization of env, buffer, networks in algorithms''' - env = 'env model' - buffer = 'buffer model' - network1 = 'network model1' - network2 = 'network model2' - - # training loop - if args.train: - t0 = time.time() - while NOT_FINISHED: # loop of episodes - while NOT_DONE: # loop of steps in episode - ''' step ''' - ''' train ''' - - print('Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'\ - .format(episode, all_episodes, episode_reward, time.time()-t0 )) - ''' plot , following the format of ./baselines/utils/plot()''' - plot(rewards, Algorithm_name='SAC', Env_name='Pendulum-v0') - ''' save weights, implemented in defined classes above, following the format of ./baselines/utils/save_model() ''' - model.save_weights() - - # testing loop - if args.test: - t0 = time.time() - ''' save weights, implemented in defined classes above, following the format of ./baselines/utils/load_model() ''' - model.load_weights() - - while NOT_FINISHED: # loop of episodes - while NOT_DONE: # loop of steps in episode - ''' step ''' - - print('Episode: {}/{} | Episode Reward: {:.4f} | Running Time: {:.4f}'\ - .format(episode, all_episodes, episode_reward, time.time()-t0 ) ) diff --git a/examples/reinforcement_learning/tutorial_prioritized_replay.py b/examples/reinforcement_learning/tutorial_prioritized_replay.py deleted file mode 100644 index f2c5745fd..000000000 --- 
a/examples/reinforcement_learning/tutorial_prioritized_replay.py +++ /dev/null @@ -1,527 +0,0 @@ -""" -Prioritized Experience Replay ------------------------- -Prioritized experience replay is an efficient replay method that replay -important transitions more frequently. Segment tree data structure is used to -speed up indexing. -Reference: ------------------------- -Schaul T, Quan J, Antonoglou I, et al. Prioritized experience replay[J]. arXiv -preprint arXiv:1511.05952, 2015. -Dhariwal P, Hesse C, Klimov O, et al. Openai baselines (2017)[J]. URL -https://github. com/opfenai/baselines. -Environment: ------------------------- -Cartpole and Pong in OpenAI Gym -Requirements: ------------------------- -tensorflow>=2.0.0a0 -tensorlayer>=2.0.0 -To run: ------------------------- -python tutorial_prioritized_replay.py --mode=train -python tutorial_prioritized_replay.py --mode=test --save_path=per/8000.npz -""" -import argparse -import operator -import os -import random -import time - -import gym -import matplotlib.pyplot as plt -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -parser = argparse.ArgumentParser() -# add arguments in command --train/test -parser.add_argument('--train', dest='train', action='store_true', default=True) -parser.add_argument('--test', dest='test', action='store_true', default=True) -parser.add_argument( - '--save_path', default=None, help='folder to save if mode == train else model path,' - 'qnet will be saved once target net update' -) -parser.add_argument('--seed', help='random seed', type=int, default=0) -parser.add_argument('--env_id', default='CartPole-v0', help='CartPole-v0 or PongNoFrameskip-v4') -args = parser.parse_args() - -random.seed(args.seed) -np.random.seed(args.seed) -tf.random.set_seed(args.seed) # reproducible -env_id = args.env_id -env = gym.make(env_id) -env.seed(args.seed) -alg_name = 'prioritized_replay' - -# #################### hyper parameters #################### -if env_id == 'CartPole-v0': - qnet_type = 'MLP' - number_timesteps = 10000 # total number of time steps to train on - explore_timesteps = 100 - # epsilon-greedy schedule, final exploit prob is 0.99 - epsilon = lambda i_iter: 1 - 0.99 * min(1, i_iter / explore_timesteps) - lr = 5e-3 # learning rate - buffer_size = 1000 # replay buffer size - target_q_update_freq = 50 # how frequency target q net update - ob_scale = 1.0 # scale observations - clipnorm = None -else: - # reward will increase obviously after 1e5 time steps - qnet_type = 'CNN' - number_timesteps = int(1e6) # total number of time steps to train on - explore_timesteps = 1e5 - # epsilon-greedy schedule, final exploit prob is 0.99 - epsilon = lambda i_iter: 1 - 0.99 * min(1, i_iter / explore_timesteps) - lr = 1e-4 # learning rate - buffer_size = 10000 # replay buffer size - target_q_update_freq = 200 # how frequency target q net update - ob_scale = 1.0 / 255 # scale observations - clipnorm = 10 - -in_dim = env.observation_space.shape -out_dim = env.action_space.n -reward_gamma = 0.99 # reward discount -batch_size = 32 # batch size for sampling from replay buffer -warm_start = buffer_size / 10 # sample times befor learning -prioritized_replay_alpha = 0.6 # alpha in PER -prioritized_replay_beta0 = 0.4 # initial beta in PER - - -# ############################## Network #################################### -class MLP(tl.models.Model): - - def __init__(self, name): - super(MLP, self).__init__(name=name) - self.h1 = tl.layers.Dense(64, tf.nn.tanh, in_channels=in_dim[0]) - self.qvalue = 
tl.layers.Dense(out_dim, in_channels=64, name='q', W_init=tf.initializers.GlorotUniform()) - - def forward(self, ni): - return self.qvalue(self.h1(ni)) - - -class CNN(tl.models.Model): - - def __init__(self, name): - super(CNN, self).__init__(name=name) - h, w, in_channels = in_dim - dense_in_channels = 64 * ((h - 28) // 8) * ((w - 28) // 8) - self.conv1 = tl.layers.Conv2d( - 32, (8, 8), (4, 4), tf.nn.relu, 'VALID', in_channels=in_channels, name='conv2d_1', - W_init=tf.initializers.GlorotUniform() - ) - self.conv2 = tl.layers.Conv2d( - 64, (4, 4), (2, 2), tf.nn.relu, 'VALID', in_channels=32, name='conv2d_2', - W_init=tf.initializers.GlorotUniform() - ) - self.conv3 = tl.layers.Conv2d( - 64, (3, 3), (1, 1), tf.nn.relu, 'VALID', in_channels=64, name='conv2d_3', - W_init=tf.initializers.GlorotUniform() - ) - self.flatten = tl.layers.Flatten(name='flatten') - self.preq = tl.layers.Dense( - 256, tf.nn.relu, in_channels=dense_in_channels, name='pre_q', W_init=tf.initializers.GlorotUniform() - ) - self.qvalue = tl.layers.Dense(out_dim, in_channels=256, name='q', W_init=tf.initializers.GlorotUniform()) - - def forward(self, ni): - feature = self.flatten(self.conv3(self.conv2(self.conv1(ni)))) - return self.qvalue(self.preq(feature)) - - -# ############################## Replay #################################### -class SegmentTree(object): - - def __init__(self, capacity, operation, neutral_element): - """Build a Segment Tree data structure. - https://en.wikipedia.org/wiki/Segment_tree - Can be used as regular array, but with two - important differences: - a) setting item's value is slightly slower. - It is O(lg capacity) instead of O(1). - b) user has access to an efficient ( O(log segment size) ) - `reduce` operation which reduces `operation` over - a contiguous subsequence of items in the array. - Paramters - --------- - capacity: int - Total size of the array - must be a power of two. - operation: lambda obj, obj -> obj - and operation for combining elements (eg. sum, max) - must form a mathematical group together with the set of - possible values for array elements (i.e. be associative) - neutral_element: obj - neutral element for the operation above. eg. float('-inf') - for max and 0 for sum. - """ - assert capacity > 0 and capacity & (capacity - 1) == 0, \ - "capacity must be positive and a power of 2." - self._capacity = capacity - self._value = [neutral_element for _ in range(2 * capacity)] - self._operation = operation - - def _reduce_helper(self, start, end, node, node_start, node_end): - if start == node_start and end == node_end: - return self._value[node] - mid = (node_start + node_end) // 2 - if end <= mid: - return self._reduce_helper(start, end, 2 * node, node_start, mid) - else: - if mid + 1 <= start: - return self._reduce_helper(start, end, 2 * node + 1, mid + 1, node_end) - else: - return self._operation( - self._reduce_helper(start, mid, 2 * node, node_start, mid), - self._reduce_helper(mid + 1, end, 2 * node + 1, mid + 1, node_end) - ) - - def reduce(self, start=0, end=None): - """Returns result of applying `self.operation` - to a contiguous subsequence of the array. - Parameters - ---------- - start: int - beginning of the subsequence - end: int - end of the subsequences - Returns - ------- - reduced: obj - result of reducing self.operation over the specified range of array. 
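Example (illustrative, using the SumSegmentTree subclass defined below;
note that `end` is exclusive and unset leaves hold the neutral element):
    tree = SumSegmentTree(4)    # capacity must be a power of two
    tree[0], tree[1], tree[2] = 1.0, 2.0, 3.0
    tree.sum(0, 2)              # 3.0, i.e. arr[0] + arr[1]
    tree.sum()                  # 6.0, reduce over the whole array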
- """ - if end is None: - end = self._capacity - if end < 0: - end += self._capacity - end -= 1 - return self._reduce_helper(start, end, 1, 0, self._capacity - 1) - - def __setitem__(self, idx, val): - # index of the leaf - idx += self._capacity - self._value[idx] = val - idx //= 2 - while idx >= 1: - self._value[idx] = self._operation(self._value[2 * idx], self._value[2 * idx + 1]) - idx //= 2 - - def __getitem__(self, idx): - assert 0 <= idx < self._capacity - return self._value[self._capacity + idx] - - -class SumSegmentTree(SegmentTree): - - def __init__(self, capacity): - super(SumSegmentTree, self).__init__(capacity=capacity, operation=operator.add, neutral_element=0.0) - - def sum(self, start=0, end=None): - """Returns arr[start] + ... + arr[end]""" - return super(SumSegmentTree, self).reduce(start, end) - - def find_prefixsum_idx(self, prefixsum): - """Find the highest index `i` in the array such that - sum(arr[0] + arr[1] + ... + arr[i - i]) <= prefixsum - if array values are probabilities, this function - allows to sample indexes according to the discrete - probability efficiently. - Parameters - ---------- - perfixsum: float - upperbound on the sum of array prefix - Returns - ------- - idx: int - highest index satisfying the prefixsum constraint - """ - assert 0 <= prefixsum <= self.sum() + 1e-5 - idx = 1 - while idx < self._capacity: # while non-leaf - if self._value[2 * idx] > prefixsum: - idx = 2 * idx - else: - prefixsum -= self._value[2 * idx] - idx = 2 * idx + 1 - return idx - self._capacity - - -class MinSegmentTree(SegmentTree): - - def __init__(self, capacity): - super(MinSegmentTree, self).__init__(capacity=capacity, operation=min, neutral_element=float('inf')) - - def min(self, start=0, end=None): - """Returns min(arr[start], ..., arr[end])""" - - return super(MinSegmentTree, self).reduce(start, end) - - -class ReplayBuffer(object): - - def __init__(self, size): - self._storage = [] - self._maxsize = size - self._next_idx = 0 - - def __len__(self): - return len(self._storage) - - def add(self, *args): - if self._next_idx >= len(self._storage): - self._storage.append(args) - else: - self._storage[self._next_idx] = args - self._next_idx = (self._next_idx + 1) % self._maxsize - - def _encode_sample(self, idxes): - b_o, b_a, b_r, b_o_, b_d = [], [], [], [], [] - for i in idxes: - o, a, r, o_, d = self._storage[i] - b_o.append(o) - b_a.append(a) - b_r.append(r) - b_o_.append(o_) - b_d.append(d) - return ( - np.stack(b_o).astype('float32') * ob_scale, - np.stack(b_a).astype('int32'), - np.stack(b_r).astype('float32'), - np.stack(b_o_).astype('float32') * ob_scale, - np.stack(b_d).astype('float32'), - ) - - def sample(self, batch_size): - indexes = range(len(self._storage)) - idxes = [random.choice(indexes) for _ in range(batch_size)] - return self._encode_sample(idxes) - - -class PrioritizedReplayBuffer(ReplayBuffer): - - def __init__(self, size, alpha, beta): - """Create Prioritized Replay buffer. - Parameters - ---------- - size: int - Max number of transitions to store in the buffer. When the buffer - overflows the old memories are dropped. 
- alpha: float - how much prioritization is used - (0 - no prioritization, 1 - full prioritization) - See Also - -------- - ReplayBuffer.__init__ - """ - super(PrioritizedReplayBuffer, self).__init__(size) - assert alpha >= 0 - self._alpha = alpha - - it_capacity = 1 - while it_capacity < size: - it_capacity *= 2 - - self._it_sum = SumSegmentTree(it_capacity) - self._it_min = MinSegmentTree(it_capacity) - self._max_priority = 1.0 - self.beta = beta - - def add(self, *args): - """See ReplayBuffer.store_effect""" - idx = self._next_idx - super().add(*args) - self._it_sum[idx] = self._max_priority**self._alpha - self._it_min[idx] = self._max_priority**self._alpha - - def _sample_proportional(self, batch_size): - res = [] - p_total = self._it_sum.sum(0, len(self._storage) - 1) - every_range_len = p_total / batch_size - for i in range(batch_size): - mass = random.random() * every_range_len + i * every_range_len - idx = self._it_sum.find_prefixsum_idx(mass) - res.append(idx) - return res - - def sample(self, batch_size): - """Sample a batch of experiences""" - idxes = self._sample_proportional(batch_size) - - it_sum = self._it_sum.sum() - p_min = self._it_min.min() / it_sum - max_weight = (p_min * len(self._storage))**(-self.beta) - - p_samples = np.asarray([self._it_sum[idx] for idx in idxes]) / it_sum - weights = (p_samples * len(self._storage))**(-self.beta) / max_weight - encoded_sample = self._encode_sample(idxes) - return encoded_sample + (weights.astype('float32'), idxes) - - def update_priorities(self, idxes, priorities): - """Update priorities of sampled transitions""" - assert len(idxes) == len(priorities) - for idx, priority in zip(idxes, priorities): - assert priority > 0 - assert 0 <= idx < len(self._storage) - self._it_sum[idx] = priority**self._alpha - self._it_min[idx] = priority**self._alpha - - self._max_priority = max(self._max_priority, priority) - - -# ############################# Functions ################################### -def huber_loss(x): - """Loss function for value""" - return tf.where(tf.abs(x) < 1, tf.square(x) * 0.5, tf.abs(x) - 0.5) - - -def sync(net, net_tar): - """Copy q network to target q network""" - for var, var_tar in zip(net.trainable_weights, net_tar.trainable_weights): - var_tar.assign(var) - - -# ############################### DQN ##################################### -class DQN(object): - - def __init__(self): - model = MLP if qnet_type == 'MLP' else CNN - self.qnet = model('q') - if args.train: - self.qnet.train() - self.targetqnet = model('targetq') - self.targetqnet.infer() - sync(self.qnet, self.targetqnet) - else: - self.qnet.infer() - self.load(args.save_path) - self.niter = 0 - if clipnorm is not None: - self.optimizer = tf.optimizers.Adam(learning_rate=lr, clipnorm=clipnorm) - else: - self.optimizer = tf.optimizers.Adam(learning_rate=lr) - - def get_action(self, obv): - eps = epsilon(self.niter) - if args.train and random.random() < eps: - return int(random.random() * out_dim) - else: - obv = np.expand_dims(obv, 0).astype('float32') * ob_scale - return self._qvalues_func(obv).numpy().argmax(1)[0] - - @tf.function - def _qvalues_func(self, obv): - return self.qnet(obv) - - def train(self, b_o, b_a, b_r, b_o_, b_d, weights=None): - if weights is None: - weights = np.ones_like(b_r) - td_errors = self._train_func(b_o, b_a, b_r, b_o_, b_d, weights) - - self.niter += 1 - if self.niter % target_q_update_freq == 0: - sync(self.qnet, self.targetqnet) - self.save(args.save_path) - return td_errors.numpy() - - def save(self, path): - if path is None: 
- path = os.path.join('model', '_'.join([alg_name, env_id])) - if not os.path.exists(path): - os.makedirs(path) - tl.files.save_weights_to_hdf5(os.path.join(path, 'q_net.hdf5'), self.qnet) - - def load(self, path): - if path is None: - path = os.path.join('model', '_'.join([alg_name, env_id])) - tl.files.load_hdf5_to_weights_in_order(os.path.join(path, 'q_net.hdf5'), self.qnet) - - @tf.function - def _train_func(self, b_o, b_a, b_r, b_o_, b_d, weights): - with tf.GradientTape() as tape: - td_errors = self._tderror_func(b_o, b_a, b_r, b_o_, b_d) - loss = tf.reduce_mean(huber_loss(td_errors) * weights) - - grad = tape.gradient(loss, self.qnet.trainable_weights) - self.optimizer.apply_gradients(zip(grad, self.qnet.trainable_weights)) - - return td_errors - - @tf.function - def _tderror_func(self, b_o, b_a, b_r, b_o_, b_d): - b_q_ = (1 - b_d) * tf.reduce_max(self.targetqnet(b_o_), 1) - b_q = tf.reduce_sum(self.qnet(b_o) * tf.one_hot(b_a, out_dim), 1) - return b_q - (b_r + reward_gamma * b_q_) - - -# ############################# Trainer ################################### -if __name__ == '__main__': - dqn = DQN() - t0 = time.time() - if args.train: - buffer = PrioritizedReplayBuffer(buffer_size, prioritized_replay_alpha, prioritized_replay_beta0) - nepisode = 0 - all_episode_reward = [] - for i in range(1, number_timesteps + 1): - o = env.reset() - episode_reward = 0 - while True: - buffer.beta += (1 - prioritized_replay_beta0) / number_timesteps - - a = dqn.get_action(o) - - # execute action and feed to replay buffer - # note that `_` tail in var name means next - o_, r, done, info = env.step(a) - buffer.add(o, a, r, o_, done) - episode_reward += r - - if i >= warm_start: - *transitions, idxs = buffer.sample(batch_size) - priorities = dqn.train(*transitions) - priorities = np.clip(np.abs(priorities), 1e-6, None) - buffer.update_priorities(idxs, priorities) - - if done: - break - else: - o = o_ - - if nepisode == 0: - all_episode_reward.append(episode_reward) - else: - all_episode_reward.append(all_episode_reward[-1] * 0.9 + episode_reward * 0.1) - nepisode += 1 - print( - 'Training | Episode: {} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - nepisode, episode_reward, - time.time() - t0 - ) - ) # episode num starts from 1 in print - - dqn.save(args.save_path) - plt.plot(all_episode_reward) - if not os.path.exists('image'): - os.makedirs('image') - plt.savefig(os.path.join('image', '_'.join([alg_name, env_id]))) - - if args.test: - nepisode = 0 - for i in range(1, number_timesteps + 1): - o = env.reset() - episode_reward = 0 - while True: - env.render() - a = dqn.get_action(o) - o_, r, done, info = env.step(a) - episode_reward += r - if done: - break - else: - o = o_ - nepisode += 1 - print( - 'Testing | Episode: {} | Episode Reward: {:.4f} | Running Time: {:.4f}'.format( - nepisode, episode_reward, - time.time() - t0 - ) - ) diff --git a/examples/reinforcement_learning/tutorial_wrappers.py b/examples/reinforcement_learning/tutorial_wrappers.py deleted file mode 100644 index c7395f063..000000000 --- a/examples/reinforcement_learning/tutorial_wrappers.py +++ /dev/null @@ -1,563 +0,0 @@ -"""Env wrappers -Note that this file is adapted from `https://pypi.org/project/gym-vec-env` and -`https://github.com/openai/baselines/blob/master/baselines/common/*wrappers.py` -""" -from collections import deque -from functools import partial -from multiprocessing import Pipe, Process, cpu_count -from sys import platform - -import cv2 -import gym -import numpy as np -from gym import spaces - -__all__ 
= ( - 'build_env', # build env - 'TimeLimit', # Time limit wrapper - 'NoopResetEnv', # Run random number of no-ops on reset - 'FireResetEnv', # Reset wrapper for envs with fire action - 'EpisodicLifeEnv', # end-of-life == end-of-episode wrapper - 'MaxAndSkipEnv', # skip frame wrapper - 'ClipRewardEnv', # clip reward wrapper - 'WarpFrame', # warp observation wrapper - 'FrameStack', # stack frame wrapper - 'LazyFrames', # lazy store wrapper - 'RewardScaler', # reward scale - 'SubprocVecEnv', # vectorized env wrapper - 'VecFrameStack', # stack frames in vectorized env - 'Monitor', # Episode reward and length monitor -) -cv2.ocl.setUseOpenCL(False) -# env_id -> env_type -id2type = dict() -for _env in gym.envs.registry.all(): - id2type[_env.id] = _env._entry_point.split(':')[0].rsplit('.', 1)[1] - - -def build_env(env_id, vectorized=False, seed=0, reward_scale=1.0, nenv=0): - """Build env based on options""" - env_type = id2type[env_id] - nenv = nenv or cpu_count() // (1 + (platform == 'darwin')) - stack = env_type == 'atari' - if not vectorized: - env = _make_env(env_id, env_type, seed, reward_scale, stack) - else: - env = _make_vec_env(env_id, env_type, nenv, seed, reward_scale, stack) - - return env - - -def _make_env(env_id, env_type, seed, reward_scale, frame_stack=True): - """Make single env""" - if env_type == 'atari': - env = gym.make(env_id) - assert 'NoFrameskip' in env.spec.id - env = NoopResetEnv(env, noop_max=30) - env = MaxAndSkipEnv(env, skip=4) - env = Monitor(env) - # deepmind wrap - env = EpisodicLifeEnv(env) - if 'FIRE' in env.unwrapped.get_action_meanings(): - env = FireResetEnv(env) - env = WarpFrame(env) - env = ClipRewardEnv(env) - if frame_stack: - env = FrameStack(env, 4) - elif env_type == 'classic_control': - env = Monitor(gym.make(env_id)) - else: - raise NotImplementedError - if reward_scale != 1: - env = RewardScaler(env, reward_scale) - env.seed(seed) - return env - - -def _make_vec_env(env_id, env_type, nenv, seed, reward_scale, frame_stack=True): - """Make vectorized env""" - env = SubprocVecEnv([partial(_make_env, env_id, env_type, seed + i, reward_scale, False) for i in range(nenv)]) - if frame_stack: - env = VecFrameStack(env, 4) - return env - - -class TimeLimit(gym.Wrapper): - - def __init__(self, env, max_episode_steps=None): - super(TimeLimit, self).__init__(env) - self._max_episode_steps = max_episode_steps - self._elapsed_steps = 0 - - def step(self, ac): - observation, reward, done, info = self.env.step(ac) - self._elapsed_steps += 1 - if self._elapsed_steps >= self._max_episode_steps: - done = True - info['TimeLimit.truncated'] = True - return observation, reward, done, info - - def reset(self, **kwargs): - self._elapsed_steps = 0 - return self.env.reset(**kwargs) - - -class NoopResetEnv(gym.Wrapper): - - def __init__(self, env, noop_max=30): - """Sample initial states by taking random number of no-ops on reset. - No-op is assumed to be action 0. 
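For example, with the default noop_max=30, every reset() below executes a
uniformly sampled number of NOOP steps in [1, 30], so the agent does not
always start from the exact same Atari frame.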
- """ - super(NoopResetEnv, self).__init__(env) - self.noop_max = noop_max - self.override_num_noops = None - self.noop_action = 0 - assert env.unwrapped.get_action_meanings()[0] == 'NOOP' - - def reset(self, **kwargs): - """ Do no-op action for a number of steps in [1, noop_max].""" - self.env.reset(**kwargs) - if self.override_num_noops is not None: - noops = self.override_num_noops - else: - noops = self.unwrapped.np_random.randint(1, self.noop_max + 1) - assert noops > 0 - obs = None - for _ in range(noops): - obs, _, done, _ = self.env.step(self.noop_action) - if done: - obs = self.env.reset(**kwargs) - return obs - - def step(self, ac): - return self.env.step(ac) - - -class FireResetEnv(gym.Wrapper): - - def __init__(self, env): - """Take action on reset for environments that are fixed until firing.""" - super(FireResetEnv, self).__init__(env) - assert env.unwrapped.get_action_meanings()[1] == 'FIRE' - assert len(env.unwrapped.get_action_meanings()) >= 3 - - def reset(self, **kwargs): - self.env.reset(**kwargs) - obs, _, done, _ = self.env.step(1) - if done: - self.env.reset(**kwargs) - obs, _, done, _ = self.env.step(2) - if done: - self.env.reset(**kwargs) - return obs - - def step(self, ac): - return self.env.step(ac) - - -class EpisodicLifeEnv(gym.Wrapper): - - def __init__(self, env): - """Make end-of-life == end-of-episode, but only reset on true game over. - Done by DeepMind for the DQN and co. since it helps value estimation. - """ - super(EpisodicLifeEnv, self).__init__(env) - self.lives = 0 - self.was_real_done = True - - def step(self, action): - obs, reward, done, info = self.env.step(action) - self.was_real_done = done - # check current lives, make loss of life terminal, - # then update lives to handle bonus lives - lives = self.env.unwrapped.ale.lives() - if 0 < lives < self.lives: - # for Qbert sometimes we stay in lives == 0 condition for a few - # frames so it's important to keep lives > 0, so that we only reset - # once the environment advertises done. - done = True - self.lives = lives - return obs, reward, done, info - - def reset(self, **kwargs): - """Reset only when lives are exhausted. - This way all states are still reachable even though lives are episodic, - and the learner need not know about any of this behind-the-scenes. 
- """ - if self.was_real_done: - obs = self.env.reset(**kwargs) - else: - # no-op step to advance from terminal/lost life state - obs, _, _, _ = self.env.step(0) - self.lives = self.env.unwrapped.ale.lives() - return obs - - -class MaxAndSkipEnv(gym.Wrapper): - - def __init__(self, env, skip=4): - """Return only every `skip`-th frame""" - super(MaxAndSkipEnv, self).__init__(env) - # most recent raw observations (for max pooling across time steps) - shape = (2, ) + env.observation_space.shape - self._obs_buffer = np.zeros(shape, dtype=np.uint8) - self._skip = skip - - def step(self, action): - """Repeat action, sum reward, and max over last observations.""" - total_reward = 0.0 - done = info = None - for i in range(self._skip): - obs, reward, done, info = self.env.step(action) - if i == self._skip - 2: - self._obs_buffer[0] = obs - if i == self._skip - 1: - self._obs_buffer[1] = obs - total_reward += reward - if done: - break - # Note that the observation on the done=True frame doesn't matter - max_frame = self._obs_buffer.max(axis=0) - - return max_frame, total_reward, done, info - - def reset(self, **kwargs): - return self.env.reset(**kwargs) - - -class ClipRewardEnv(gym.RewardWrapper): - - def __init__(self, env): - super(ClipRewardEnv, self).__init__(env) - - def reward(self, reward): - """Bin reward to {+1, 0, -1} by its sign.""" - return np.sign(reward) - - -class WarpFrame(gym.ObservationWrapper): - - def __init__(self, env, width=84, height=84, grayscale=True): - """Warp frames to 84x84 as done in the Nature paper and later work.""" - super(WarpFrame, self).__init__(env) - self.width = width - self.height = height - self.grayscale = grayscale - shape = (self.height, self.width, 1 if self.grayscale else 3) - self.observation_space = spaces.Box(low=0, high=255, shape=shape, dtype=np.uint8) - - def observation(self, frame): - if self.grayscale: - frame = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY) - size = (self.width, self.height) - frame = cv2.resize(frame, size, interpolation=cv2.INTER_AREA) - if self.grayscale: - frame = np.expand_dims(frame, -1) - return frame - - -class FrameStack(gym.Wrapper): - - def __init__(self, env, k): - """Stack k last frames. - Returns lazy array, which is much more memory efficient. - See Also `LazyFrames` - """ - super(FrameStack, self).__init__(env) - self.k = k - self.frames = deque([], maxlen=k) - shp = env.observation_space.shape - shape = shp[:-1] + (shp[-1] * k, ) - self.observation_space = spaces.Box(low=0, high=255, shape=shape, dtype=env.observation_space.dtype) - - def reset(self): - ob = self.env.reset() - for _ in range(self.k): - self.frames.append(ob) - return np.asarray(self._get_ob()) - - def step(self, action): - ob, reward, done, info = self.env.step(action) - self.frames.append(ob) - return np.asarray(self._get_ob()), reward, done, info - - def _get_ob(self): - assert len(self.frames) == self.k - return LazyFrames(list(self.frames)) - - -class LazyFrames(object): - - def __init__(self, frames): - """This object ensures that common frames between the observations are - only stored once. It exists purely to optimize memory usage which can be - huge for DQN's 1M frames replay buffers. - - This object should only be converted to numpy array before being passed - to the model. You'd not believe how complex the previous solution was. 
- """ - self._frames = frames - self._out = None - - def _force(self): - if self._out is None: - self._out = np.concatenate(self._frames, axis=-1) - self._frames = None - return self._out - - def __array__(self, dtype=None): - out = self._force() - if dtype is not None: - out = out.astype(dtype) - return out - - def __len__(self): - return len(self._force()) - - def __getitem__(self, i): - return self._force()[i] - - -class RewardScaler(gym.RewardWrapper): - """Bring rewards to a reasonable scale for PPO. - This is incredibly important and effects performance drastically. - """ - - def __init__(self, env, scale=0.01): - super(RewardScaler, self).__init__(env) - self.scale = scale - - def reward(self, reward): - return reward * self.scale - - -class VecFrameStack(object): - - def __init__(self, env, k): - self.env = env - self.k = k - self.action_space = env.action_space - self.frames = deque([], maxlen=k) - shp = env.observation_space.shape - shape = shp[:-1] + (shp[-1] * k, ) - self.observation_space = spaces.Box(low=0, high=255, shape=shape, dtype=env.observation_space.dtype) - - def reset(self): - ob = self.env.reset() - for _ in range(self.k): - self.frames.append(ob) - return np.asarray(self._get_ob()) - - def step(self, action): - ob, reward, done, info = self.env.step(action) - self.frames.append(ob) - return np.asarray(self._get_ob()), reward, done, info - - def _get_ob(self): - assert len(self.frames) == self.k - return LazyFrames(list(self.frames)) - - -def _worker(remote, parent_remote, env_fn_wrapper): - parent_remote.close() - env = env_fn_wrapper.x() - while True: - cmd, data = remote.recv() - if cmd == 'step': - ob, reward, done, info = env.step(data) - if done: - ob = env.reset() - remote.send((ob, reward, done, info)) - elif cmd == 'reset': - ob = env.reset() - remote.send(ob) - elif cmd == 'reset_task': - ob = env._reset_task() - remote.send(ob) - elif cmd == 'close': - remote.close() - break - elif cmd == 'get_spaces': - remote.send((env.observation_space, env.action_space)) - else: - raise NotImplementedError - - -class CloudpickleWrapper(object): - """ - Uses cloudpickle to serialize contents - """ - - def __init__(self, x): - self.x = x - - def __getstate__(self): - import cloudpickle - return cloudpickle.dumps(self.x) - - def __setstate__(self, ob): - import pickle - self.x = pickle.loads(ob) - - -class SubprocVecEnv(object): - - def __init__(self, env_fns): - """ - envs: list of gym environments to run in subprocesses - """ - self.num_envs = len(env_fns) - - self.waiting = False - self.closed = False - nenvs = len(env_fns) - self.nenvs = nenvs - self.remotes, self.work_remotes = zip(*[Pipe() for _ in range(nenvs)]) - zipped_args = zip(self.work_remotes, self.remotes, env_fns) - self.ps = [ - Process(target=_worker, args=(work_remote, remote, CloudpickleWrapper(env_fn))) - for (work_remote, remote, env_fn) in zipped_args - ] - - for p in self.ps: - # if the main process crashes, we should not cause things to hang - p.daemon = True - p.start() - for remote in self.work_remotes: - remote.close() - - self.remotes[0].send(('get_spaces', None)) - observation_space, action_space = self.remotes[0].recv() - self.observation_space = observation_space - self.action_space = action_space - - def _step_async(self, actions): - """ - Tell all the environments to start taking a step - with the given actions. - Call step_wait() to get the results of the step. - You should not call this if a step_async run is - already pending. 
- """ - for remote, action in zip(self.remotes, actions): - remote.send(('step', action)) - self.waiting = True - - def _step_wait(self): - """ - Wait for the step taken with step_async(). - Returns (obs, rews, dones, infos): - - obs: an array of observations, or a tuple of - arrays of observations. - - rews: an array of rewards - - dones: an array of "episode done" booleans - - infos: a sequence of info objects - """ - results = [remote.recv() for remote in self.remotes] - self.waiting = False - obs, rews, dones, infos = zip(*results) - return np.stack(obs), np.stack(rews), np.stack(dones), infos - - def reset(self): - """ - Reset all the environments and return an array of - observations, or a tuple of observation arrays. - If step_async is still doing work, that work will - be cancelled and step_wait() should not be called - until step_async() is invoked again. - """ - for remote in self.remotes: - remote.send(('reset', None)) - return np.stack([remote.recv() for remote in self.remotes]) - - def _reset_task(self): - for remote in self.remotes: - remote.send(('reset_task', None)) - return np.stack([remote.recv() for remote in self.remotes]) - - def close(self): - if self.closed: - return - if self.waiting: - for remote in self.remotes: - remote.recv() - for remote in self.remotes: - remote.send(('close', None)) - for p in self.ps: - p.join() - self.closed = True - - def __len__(self): - return self.nenvs - - def step(self, actions): - self._step_async(actions) - return self._step_wait() - - -class Monitor(gym.Wrapper): - - def __init__(self, env): - super(Monitor, self).__init__(env) - self._monitor_rewards = None - - def reset(self, **kwargs): - self._monitor_rewards = [] - return self.env.reset(**kwargs) - - def step(self, action): - o_, r, done, info = self.env.step(action) - self._monitor_rewards.append(r) - if done: - info['episode'] = {'r': sum(self._monitor_rewards), 'l': len(self._monitor_rewards)} - return o_, r, done, info - - -class NormalizedActions(gym.ActionWrapper): - - def _action(self, action): - low = self.action_space.low - high = self.action_space.high - - action = low + (action + 1.0) * 0.5 * (high - low) - action = np.clip(action, low, high) - - return action - - def _reverse_action(self, action): - low = self.action_space.low - high = self.action_space.high - - action = 2 * (action - low) / (high - low) - 1 - action = np.clip(action, low, high) - - return action - - -def unit_test(): - env_id = 'CartPole-v0' - unwrapped_env = gym.make(env_id) - wrapped_env = build_env(env_id, False) - o = wrapped_env.reset() - print('Reset {} observation shape {}'.format(env_id, o.shape)) - done = False - while not done: - a = unwrapped_env.action_space.sample() - o_, r, done, info = wrapped_env.step(a) - print('Take action {} get reward {} info {}'.format(a, r, info)) - - env_id = 'PongNoFrameskip-v4' - nenv = 2 - unwrapped_env = gym.make(env_id) - wrapped_env = build_env(env_id, True, nenv=nenv) - o = wrapped_env.reset() - print('Reset {} observation shape {}'.format(env_id, o.shape)) - for _ in range(1000): - a = [unwrapped_env.action_space.sample() for _ in range(nenv)] - a = np.asarray(a, 'int64') - o_, r, done, info = wrapped_env.step(a) - print('Take action {} get reward {} info {}'.format(a, r, info)) - - -if __name__ == '__main__': - unit_test() diff --git a/examples/spatial_transformer_network/README.md b/examples/spatial_transformer_network/README.md deleted file mode 100644 index b9daece2b..000000000 --- a/examples/spatial_transformer_network/README.md +++ /dev/null @@ 
-1,44 +0,0 @@
-# Spatial Transformer Networks
-
-[Spatial Transformer Networks](https://arxiv.org/abs/1506.02025) (STN) is a dynamic mechanism that produces transformations of input images (or feature maps), including scaling, cropping and rotation, as well as non-rigid deformations. This enables the network to not only select the regions of an image that are most relevant (attention), but also to transform those regions to simplify recognition in the following layers.
-
-A video of the different transformations is available [here](https://drive.google.com/file/d/0B1nQa_sA3W2iN3RQLXVFRkNXN0k/view).
-
-In this repository, we implemented an STN for [2D Affine Transformation](https://en.wikipedia.org/wiki/Affine_transformation) on the MNIST dataset. We generated 40x40 images from the original MNIST dataset and distorted them by random rotation, shifting, shearing and zooming in/out. The STN was able to learn to automatically apply transformations to the distorted images via a classification task.
-
-[Figure: Fig 1: Transformation]
-
-[Figure: Fig 2: Network]
-
-[Figure: Fig 3: Formula]
-
-## Result
-
-After the classification task, the STN is able to transform the distorted image from Fig 4 back to Fig 5.
-
-[Figure: Fig 4: Input]
-
-[Figure: Fig 5: Output]
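For reference, the "Fig 3: Formula" placeholder above described the affine sampling grid from the STN paper: the localisation network predicts a 2x3 matrix theta, and every normalised output pixel coordinate is mapped back to a source location in the input. A minimal numpy sketch of that mapping; the function name and shapes are illustrative and not part of the deleted tutorials:

```python
import numpy as np

def affine_grid(theta, out_h, out_w):
    """Map normalised output coords (x_t, y_t, 1) to source coords (x_s, y_s)
    through the 2x3 affine matrix `theta` from the localisation network."""
    ys, xs = np.meshgrid(
        np.linspace(-1, 1, out_h), np.linspace(-1, 1, out_w), indexing='ij'
    )
    # homogeneous target coordinates, shape (3, H*W)
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(out_h * out_w)])
    src = theta @ coords  # (2, H*W) sampling locations in the input image
    return src.reshape(2, out_h, out_w)

# the identity transform leaves the sampling grid unchanged
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
grid = affine_grid(theta, 40, 40)  # matches the 40x40 images used here
```

Bilinear sampling at these locations, which is roughly what `SpatialTransformer2dAffine` in the tutorials below performs, then produces the transformed output image.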
- -
diff --git a/examples/spatial_transformer_network/tutorial_spatial_transformer_network_dynamic.py b/examples/spatial_transformer_network/tutorial_spatial_transformer_network_dynamic.py
deleted file mode 100644
index 40695339f..000000000
--- a/examples/spatial_transformer_network/tutorial_spatial_transformer_network_dynamic.py
+++ /dev/null
@@ -1,167 +0,0 @@
-#! /usr/bin/python
-# -*- coding: utf8 -*-
-import time
-
-import numpy as np
-import tensorflow as tf
-
-import tensorlayer as tl
-from tensorlayer.layers import *
-from tensorlayer.models import Model
-
-##================== PREPARE DATA ============================================##
-X_train, y_train, X_val, y_val, X_test, y_test = \
-    tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))
-
-
-def pad_distort_im_fn(x):
-    """ Zero-pads an image to 40x40 and distorts it.
-
-    Examples
-    ---------
-    x = pad_distort_im_fn(X_train[0])
-    print(x, x.shape, x.max())
-    tl.vis.save_image(x, '_xd.png')
-    tl.vis.save_image(X_train[0], '_x.png')
-    """
-    b = np.zeros((40, 40, 1), dtype=np.float32)
-    o = int((40 - 28) / 2)
-    b[o:o + 28, o:o + 28] = x
-    x = b
-    x = tl.prepro.rotation(x, rg=30, is_random=True, fill_mode='constant')
-    x = tl.prepro.shear(x, 0.05, is_random=True, fill_mode='constant')
-    x = tl.prepro.shift(x, wrg=0.25, hrg=0.25, is_random=True, fill_mode='constant')
-    x = tl.prepro.zoom(x, zoom_range=(0.95, 1.05))
-    return x
-
-
-def pad_distort_ims_fn(X):
-    """ Zero-pads images to 40x40 and distorts them. """
-    X_40 = []
-    for X_a, _ in tl.iterate.minibatches(X, X, 50, shuffle=False):
-        X_40.extend(tl.prepro.threading_data(X_a, fn=pad_distort_im_fn))
-    X_40 = np.asarray(X_40)
-    return X_40
-
-
-# create a distorted dataset of 40x40 images
-X_train_40 = pad_distort_ims_fn(X_train)
-X_val_40 = pad_distort_ims_fn(X_val)
-X_test_40 = pad_distort_ims_fn(X_test)
-
-tl.vis.save_images(X_test[0:32], [4, 8], '_imgs_original.png')
-tl.vis.save_images(X_test_40[0:32], [4, 8], '_imgs_distorted.png')
-
-
-##================== DEFINE MODEL ============================================##
-class Net(Model):
-
-    def __init__(self):
-        super(Net, self).__init__()
-
-        ## 1. Localisation network
-        # use MLP as the localisation net
-        self.flatten1 = Flatten()
-        self.dense1 = Dense(n_units=20, in_channels=1600, act=tf.nn.tanh)
-        self.dropout1 = Dropout(keep=0.8)
-        # you can also use a CNN instead of an MLP as the localisation net
-
-        ## 2. Spatial transformer module (sampler)
-        self.stn = SpatialTransformer2dAffine(out_size=(40, 40), in_channels=20)
-
-        ## 3. Classifier
-        self.conv1 = Conv2d(16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME', in_channels=1)
-        self.conv2 = Conv2d(16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME', in_channels=16)
-        self.flatten2 = Flatten()
-        self.dense2 = Dense(n_units=1024, in_channels=1600, act=tf.nn.relu)
-        self.dense3 = Dense(n_units=10, in_channels=1024, act=tf.identity)
-
-    def forward(self, inputs):
-        theta_input = self.dropout1(self.dense1(self.flatten1(inputs)))
-        V = self.stn((theta_input, inputs))
-        _logits = self.dense3(self.dense2(self.flatten2(self.conv2(self.conv1(V)))))
-        return _logits, V
-
-
-net = Net()
-
-##================== DEFINE TRAIN OPS ========================================##
-n_epoch = 100
-learning_rate = 0.0001
-print_freq = 10
-batch_size = 64
-train_weights = net.trainable_weights
-optimizer = tf.optimizers.Adam(learning_rate=learning_rate)
-
-##================== TRAINING ================================================##
-print("Training ...")
-for epoch in range(n_epoch):
-    start_time = time.time()
-
-    net.train()  # enable dropout
-
-    for X_train_a, y_train_a in tl.iterate.minibatches(X_train_40, y_train, batch_size, shuffle=True):
-        # input_dim must be of length 4
-        X_train_a = tf.expand_dims(X_train_a, 3)
-
-        with tf.GradientTape() as tape:
-            ## compute outputs; dropout is active because of net.train() above
-            _logits, _ = net(X_train_a)
-            ## compute loss and update model
-            _loss = tl.cost.cross_entropy(_logits, y_train_a, name='train_loss')
-
-        grad = tape.gradient(_loss, train_weights)
-        optimizer.apply_gradients(zip(grad, train_weights))
-
-    ## evaluate the model on the training and validation sets every print_freq epochs
-    if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
-
-        net.eval()  # disable dropout
-
-        print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))
-
-        train_loss, train_acc, n_iter = 0, 0, 0
-        for X_train_a, y_train_a in tl.iterate.minibatches(X_train_40, y_train, batch_size, shuffle=False):
-            # input_dim must be of length 4
-            X_train_a = tf.expand_dims(X_train_a, 3)
-
-            _logits, _ = net(X_train_a)  # dropout is disabled because of net.eval() above
-            train_loss += tl.cost.cross_entropy(_logits, y_train_a, name='eval_train_loss')
-            train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_train_a))
-            n_iter += 1
-        print(" train loss: %f" % (train_loss / n_iter))
-        print(" train acc: %f" % (train_acc / n_iter))
-
-        val_loss, val_acc, n_iter = 0, 0, 0
-        for X_val_a, y_val_a in tl.iterate.minibatches(X_val_40, y_val, batch_size, shuffle=False):
-            # input_dim must be of length 4
-            X_val_a = tf.expand_dims(X_val_a, 3)
-
-            _logits, _ = net(X_val_a)  # dropout is disabled in evaluation mode
-            val_loss += tl.cost.cross_entropy(_logits, y_val_a, name='eval_loss')
-            val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_val_a))
-            n_iter += 1
-        print(" val loss: %f" % (val_loss / n_iter))
-        print(" val acc: %f" % (val_acc / n_iter))
-
-    print('save images')
-    _, trans_imgs = net(tf.expand_dims(X_test_40[0:64], 3))
-    trans_imgs = trans_imgs.numpy()
-    tl.vis.save_images(trans_imgs[0:32], [4, 8], '_imgs_distorted_after_stn_%s.png' % epoch)
-
-##================== EVALUATION ==============================================##
-print('Evaluation')
-
-net.eval()
-
-test_loss, test_acc, n_iter = 0, 0, 0
-for X_test_a, y_test_a in tl.iterate.minibatches(X_test_40, y_test, batch_size, shuffle=False):
-    # input_dim must be of length 4
-    X_test_a = tf.expand_dims(X_test_a, 3)
-
-    _logits, _ = net(X_test_a)
-    test_loss += tl.cost.cross_entropy(_logits, y_test_a, name='test_loss')
-    test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_test_a))
-    n_iter += 1
-print(" test loss: %f" % (test_loss / n_iter))
-print(" test acc: %f" % (test_acc / n_iter))
diff --git a/examples/spatial_transformer_network/tutorial_spatial_transformer_network_static.py b/examples/spatial_transformer_network/tutorial_spatial_transformer_network_static.py
deleted file mode 100644
index 450284d91..000000000
--- a/examples/spatial_transformer_network/tutorial_spatial_transformer_network_static.py
+++ /dev/null
@@ -1,164 +0,0 @@
-#! /usr/bin/python
-# -*- coding: utf8 -*-
-import time
-
-import numpy as np
-import tensorflow as tf
-
-import tensorlayer as tl
-from tensorlayer.layers import *
-from tensorlayer.models import Model
-
-##================== PREPARE DATA ============================================##
-X_train, y_train, X_val, y_val, X_test, y_test = \
-    tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))
-
-
-def pad_distort_im_fn(x):
-    """ Zero-pads an image to 40x40 and distorts it.
-
-    Examples
-    ---------
-    x = pad_distort_im_fn(X_train[0])
-    print(x, x.shape, x.max())
-    tl.vis.save_image(x, '_xd.png')
-    tl.vis.save_image(X_train[0], '_x.png')
-    """
-    b = np.zeros((40, 40, 1), dtype=np.float32)
-    o = int((40 - 28) / 2)
-    b[o:o + 28, o:o + 28] = x
-    x = b
-    x = tl.prepro.rotation(x, rg=30, is_random=True, fill_mode='constant')
-    x = tl.prepro.shear(x, 0.05, is_random=True, fill_mode='constant')
-    x = tl.prepro.shift(x, wrg=0.25, hrg=0.25, is_random=True, fill_mode='constant')
-    x = tl.prepro.zoom(x, zoom_range=(0.95, 1.05))
-    return x
-
-
-def pad_distort_ims_fn(X):
-    """ Zero-pads images to 40x40 and distorts them. """
-    X_40 = []
-    for X_a, _ in tl.iterate.minibatches(X, X, 50, shuffle=False):
-        X_40.extend(tl.prepro.threading_data(X_a, fn=pad_distort_im_fn))
-    X_40 = np.asarray(X_40)
-    return X_40
-
-
-# create a distorted dataset of 40x40 images
-X_train_40 = pad_distort_ims_fn(X_train)
-X_val_40 = pad_distort_ims_fn(X_val)
-X_test_40 = pad_distort_ims_fn(X_test)
-
-tl.vis.save_images(X_test[0:32], [4, 8], '_imgs_original.png')
-tl.vis.save_images(X_test_40[0:32], [4, 8], '_imgs_distorted.png')
-
-
-##================== DEFINE MODEL ============================================##
-def get_model(inputs_shape):
-    ni = Input(inputs_shape)
-
-    ## 1. Localisation network
-    # use MLP as the localisation net
-    nn = Flatten()(ni)
-    nn = Dense(n_units=20, act=tf.nn.tanh)(nn)
-    nn = Dropout(keep=0.8)(nn)
-    # you can also use a CNN instead of an MLP as the localisation net
-
-    ## 2. Spatial transformer module (sampler)
-    stn = SpatialTransformer2dAffine(out_size=(40, 40), in_channels=20)
-    nn = stn((nn, ni))
-    s = nn
-
-    ## 3. Classifier
-    nn = Conv2d(16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME')(nn)
-    nn = Conv2d(16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME')(nn)
-    nn = Flatten()(nn)
-    nn = Dense(n_units=1024, act=tf.nn.relu)(nn)
-    nn = Dense(n_units=10, act=tf.identity)(nn)
-
-    M = Model(inputs=ni, outputs=[nn, s])
-    return M
-
-
-net = get_model([None, 40, 40, 1])
-
-##================== DEFINE TRAIN OPS ========================================##
-n_epoch = 100
-learning_rate = 0.0001
-print_freq = 10
-batch_size = 64
-train_weights = net.trainable_weights
-optimizer = tf.optimizers.Adam(learning_rate=learning_rate)
-
-##================== TRAINING ================================================##
-print("Training ...")
-for epoch in range(n_epoch):
-    start_time = time.time()
-
-    net.train()  # enable dropout
-
-    for X_train_a, y_train_a in tl.iterate.minibatches(X_train_40, y_train, batch_size, shuffle=True):
-        # input_dim must be of length 4
-        X_train_a = tf.expand_dims(X_train_a, 3)
-
-        with tf.GradientTape() as tape:
-            ## compute outputs; dropout is active because of net.train() above
-            _logits, _ = net(X_train_a)
-            ## compute loss and update model
-            _loss = tl.cost.cross_entropy(_logits, y_train_a, name='train_loss')
-
-        grad = tape.gradient(_loss, train_weights)
-        optimizer.apply_gradients(zip(grad, train_weights))
-
-    ## evaluate the model on the training and validation sets every print_freq epochs
-    if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
-
-        net.eval()  # disable dropout
-
-        print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))
-
-        train_loss, train_acc, n_iter = 0, 0, 0
-        for X_train_a, y_train_a in tl.iterate.minibatches(X_train_40, y_train, batch_size, shuffle=False):
-            # input_dim must be of length 4
-            X_train_a = tf.expand_dims(X_train_a, 3)
-
-            _logits, _ = net(X_train_a)  # dropout is disabled because of net.eval() above
-            train_loss += tl.cost.cross_entropy(_logits, y_train_a, name='eval_train_loss')
-            train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_train_a))
-            n_iter += 1
-        print(" train loss: %f" % (train_loss / n_iter))
-        print(" train acc: %f" % (train_acc / n_iter))
-
-        val_loss, val_acc, n_iter = 0, 0, 0
-        for X_val_a, y_val_a in tl.iterate.minibatches(X_val_40, y_val, batch_size, shuffle=False):
-            # input_dim must be of length 4
-            X_val_a = tf.expand_dims(X_val_a, 3)
-
-            _logits, _ = net(X_val_a)  # dropout is disabled in evaluation mode
-            val_loss += tl.cost.cross_entropy(_logits, y_val_a, name='eval_loss')
-            val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_val_a))
-            n_iter += 1
-        print(" val loss: %f" % (val_loss / n_iter))
-        print(" val acc: %f" % (val_acc / n_iter))
-
-    print('save images')
-    _, trans_imgs = net(tf.expand_dims(X_test_40[0:64], 3))
-    trans_imgs = trans_imgs.numpy()
-    tl.vis.save_images(trans_imgs[0:32], [4, 8], '_imgs_distorted_after_stn_%s.png' % epoch)
-
-##================== EVALUATION ==============================================##
-print('Evaluation')
-
-net.eval()
-
-test_loss, test_acc, n_iter = 0, 0, 0
-for X_test_a, y_test_a in tl.iterate.minibatches(X_test_40, y_test, batch_size, shuffle=False):
-    # input_dim must be of length 4
-    X_test_a = tf.expand_dims(X_test_a, 3)
-
-    _logits, _ = net(X_test_a)
-    test_loss += tl.cost.cross_entropy(_logits, y_test_a, name='test_loss')
-    test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_test_a))
-    n_iter += 1
-print(" test loss: %f" % (test_loss / n_iter))
-print(" test acc: %f" % (test_acc / n_iter))
diff --git
a/examples/text_classification/readme.md b/examples/text_classification/readme.md deleted file mode 100644 index 1045efc2f..000000000 --- a/examples/text_classification/readme.md +++ /dev/null @@ -1,29 +0,0 @@ - -### Introduction - -The demos implement [FastText](http://arxiv.org/abs/1607.01759)[1] for sentence classification. - -Code: [tutorial_imdb_fasttext.py](tutorial_imdb_fasttext.py) - -FastText is a simple model for text classification with performance often close -to state-of-the-art, and is useful as a solid baseline. - -There are some important differences between this implementation and what -is described in the paper. Instead of Hogwild! SGD[2], we use Adam optimizer -with mini-batches. Hierarchical softmax is also not supported; if you have -a large label space, consider utilizing candidate sampling methods provided -by TensorFlow[3]. - -After 5 epochs, you should get test accuracy around 90.3%. - -### References - -[1] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). - Bag of Tricks for Efficient Text Classification. - - -[2] Recht, B., Re, C., Wright, S., & Niu, F. (2011). - Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. - In Advances in Neural Information Processing Systems 24 (pp. 693–701). - -[3] diff --git a/examples/text_classification/tutorial_imdb_fasttext.py b/examples/text_classification/tutorial_imdb_fasttext.py deleted file mode 100644 index 53b0fdce7..000000000 --- a/examples/text_classification/tutorial_imdb_fasttext.py +++ /dev/null @@ -1,175 +0,0 @@ -#!/usr/bin/env python -""" -This demo implements FastText[1] for sentence classification. This demo should be run in eager mode and -can be slower than the corresponding demo in graph mode. - -FastText is a simple model for text classification with performance often close -to state-of-the-art, and is useful as a solid baseline. - -There are some important differences between this implementation and what -is described in the paper. Instead of Hogwild! SGD[2], we use Adam optimizer -with mini-batches. Hierarchical softmax is also not supported; if you have -a large label space, consider utilizing candidate sampling methods provided -by TensorFlow[3]. - -After 5 epochs, you should get test accuracy around 90.3%. - -[1] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). - Bag of Tricks for Efficient Text Classification. - http://arxiv.org/abs/1607.01759 - -[2] Recht, B., Re, C., Wright, S., & Niu, F. (2011). - Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. - In Advances in Neural Information Processing Systems 24 (pp. 693–701). - -[3] https://www.tensorflow.org/api_guides/python/nn#Candidate_Sampling - -""" -import array -import hashlib -import os -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import * - -# Hashed n-grams with 1 < n <= N_GRAM are included as features -# in addition to unigrams. 
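-# For example, with N_GRAM = 2 the token sequence [5, 9, 3] keeps its three
-# unigrams and gains hashed ids for the bigrams (5, 9) and (9, 3); the hashed
-# ids fall in [VOCAB_SIZE, VOCAB_SIZE + N_BUCKETS), see augment_with_ngrams()
-# below. (The token values here are illustrative.)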
-N_GRAM = 2 - -# Size of vocabulary; less frequent words will be treated as "unknown" -VOCAB_SIZE = 100000 - -# Number of buckets used for hashing n-grams -N_BUCKETS = 1000000 - -# Size of the embedding vectors -EMBEDDING_SIZE = 50 - -# Number of epochs for which the model is trained -N_EPOCH = 5 - -# Number of steps for printing -N_STEPS_TO_PRINT = 100 - -# Size of training mini-batches -BATCH_SIZE = 32 - -# Learning rate -LEARNING_RATE = 0.01 - -# Path to which to save the trained model -MODEL_FILE_PATH = 'model_dynamic.hdf5' - - -class FastTextModel(Model): - """ Model structure and forwarding of FastText """ - - def __init__(self, vocab_size, embedding_size, n_labels, name='fasttext'): - super(FastTextModel, self).__init__(name=name) - - self.avg_embed = AverageEmbedding(vocab_size, embedding_size) - self.dense1 = Dense(n_units=10, in_channels=embedding_size) - self.dense2 = Dense(n_units=n_labels, in_channels=10) - - def forward(self, x): - z = self.avg_embed(x) - z = self.dense1(z) - z = self.dense2(z) - return z - - -def augment_with_ngrams(unigrams, unigram_vocab_size, n_buckets, n=2): - """Augment unigram features with hashed n-gram features.""" - - def get_ngrams(n): - return list(zip(*[unigrams[i:] for i in range(n)])) - - def hash_ngram(ngram): - bytes_ = array.array('L', ngram).tobytes() - hash_ = int(hashlib.sha256(bytes_).hexdigest(), 16) - return unigram_vocab_size + hash_ % n_buckets - - return unigrams + [hash_ngram(ngram) for i in range(2, n + 1) for ngram in get_ngrams(i)] - - -def load_and_preprocess_imdb_data(n_gram=None): - """Load IMDb data and augment with hashed n-gram features.""" - tl.logging.info("Loading and preprocessing IMDB data.") - - X_train, y_train, X_test, y_test = tl.files.load_imdb_dataset(nb_words=VOCAB_SIZE) - - if n_gram is not None: - X_train = np.array([augment_with_ngrams(x, VOCAB_SIZE, N_BUCKETS, n=n_gram) for x in X_train]) - X_test = np.array([augment_with_ngrams(x, VOCAB_SIZE, N_BUCKETS, n=n_gram) for x in X_test]) - - return X_train, y_train, X_test, y_test - - -def train_test_and_save_model(): - X_train, y_train, X_test, y_test = load_and_preprocess_imdb_data(N_GRAM) - model = FastTextModel( - vocab_size=VOCAB_SIZE + N_BUCKETS, - embedding_size=EMBEDDING_SIZE, - n_labels=2, - ) - optimizer = tf.optimizers.Adam(learning_rate=LEARNING_RATE) - - if os.path.exists(MODEL_FILE_PATH): - # loading pre-trained model if applicable - model.load_weights(MODEL_FILE_PATH) - else: - # training - model.train() - - for epoch in range(N_EPOCH): - start_time = time.time() - print('Epoch %d/%d' % (epoch + 1, N_EPOCH)) - train_accuracy = list() - for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size=BATCH_SIZE, shuffle=True): - - # forward and define the loss function - # TODO: use tf.function to speed up - with tf.GradientTape() as tape: - y_pred = model(tl.prepro.pad_sequences(X_batch)) - cost = tl.cost.cross_entropy(y_pred, y_batch, name='cost') - - # backward, calculate gradients and update the weights - grad = tape.gradient(cost, model.trainable_weights) - optimizer.apply_gradients(zip(grad, model.trainable_weights)) - - # calculate the accuracy - predictions = tf.argmax(y_pred, axis=1, output_type=tf.int32) - are_predictions_correct = tf.equal(predictions, y_batch) - accuracy = tf.reduce_mean(tf.cast(are_predictions_correct, tf.float32)) - - train_accuracy.append(accuracy) - if len(train_accuracy) % N_STEPS_TO_PRINT == 0: - print( - "\t[%d/%d][%d]accuracy " % (epoch + 1, N_EPOCH, len(train_accuracy)), - 
np.mean(train_accuracy[-N_STEPS_TO_PRINT:]) - ) - - print("\tSummary: time %.5fs, overall accuracy" % (time.time() - start_time), np.mean(train_accuracy)) - - # evaluation and testing - model.eval() - - # forward and calculate the accuracy - y_pred = model(tl.prepro.pad_sequences(X_test)) - predictions = tf.argmax(y_pred, axis=1, output_type=tf.int32) - are_predictions_correct = tf.equal(predictions, y_test) - test_accuracy = tf.reduce_mean(tf.cast(are_predictions_correct, tf.float32)) - - print('Test accuracy: %.5f' % test_accuracy) - - # saving the model - model.save_weights(MODEL_FILE_PATH) - - -if __name__ == '__main__': - train_test_and_save_model() diff --git a/examples/text_generation/data/.DS_Store b/examples/text_generation/data/.DS_Store deleted file mode 100644 index 5008ddfcf..000000000 Binary files a/examples/text_generation/data/.DS_Store and /dev/null differ diff --git a/examples/text_generation/data/__init__.py b/examples/text_generation/data/__init__.py deleted file mode 100644 index 5feb25700..000000000 --- a/examples/text_generation/data/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from __future__ import absolute_import - -from . import imagenet_classes - -# from . import diff --git a/examples/text_generation/data/trump/trump_text.txt b/examples/text_generation/data/trump/trump_text.txt deleted file mode 100644 index b8ca22ccb..000000000 --- a/examples/text_generation/data/trump/trump_text.txt +++ /dev/null @@ -1,7332 +0,0 @@ -Thank you for joining me today. - -This was going to be a speech on Hillary Clinton and how bad a President, especially in these times of Radical Islamic Terrorism, she would be. - -Even her former Secret Service Agent, who has seen her under pressure and in times of stress, has stated that she lacks the temperament and integrity to be president. - -There will be plenty of opportunity to discuss these important issues at a later time, and I will deliver that speech soon. - -But today there is only one thing to discuss: the growing threat of terrorism inside of our borders. - -The attack on the Pulse Nightclub in Orlando, Florida, was the worst terrorist strike on our soil since September 11th, and the worst mass shooting in our country’s history. - -So many people dead, so many people gravely injured, so much carnage, such a disgrace. - -The horror is beyond description. - -The families of these wonderful people are totally devastated. Likewise, our whole nation, and indeed the whole world, is devastated. - -We express our deepest sympathies to the victims, the wounded, and their families. - -We mourn, as one people, for our nation’s loss – and pledge our support to any and all who need it. - -I would like to ask now that we all observe a moment of silence for the victims of the attack. - -Our nation stands together in solidarity with the members of Orlando's LGBT Community. - -This is a very dark moment in America’s history. - -A radical Islamic terrorist targeted the nightclub not only because he wanted to kill Americans, but in order to execute gay and lesbian citizens because of their sexual orientation. - -It is a strike at the heart and soul of who we are as a nation. - -It is an assault on the ability of free people to live their lives, love who they want and express their identity. - -It is an attack on the right of every single American to live in peace and safety in their own country. - -We need to respond to this attack on America as one united people – with force, purpose and determination. 
- -But the current politically correct response cripples our ability to talk and think and act clearly. - -If we don't get tough, and we don't get smart – and fast – we're not going to have a country anymore -- there will be nothing left. - -The killer, whose name I will not use, or ever say, was born to Afghan parents who immigrated to the United States. His father published support for the Afghan Taliban, a regime which murders those who don’t share its radical views. The father even said he was running for President of that country. - -The bottom line is that the only reason the killer was in America in the first place was because we allowed his family to come here. - -That is a fact, and it's a fact we need to talk about. - -We have a dysfunctional immigration system which does not permit us to know who we let into our country, and it does not permit us to protect our citizens. - -We have an incompetent administration, and if I am not elected President, that will not change over the next four years -- but it must change, and it must change now. - -With fifty people dead, and dozens more wounded, we cannot afford to talk around the issue anymore -- we have to address it head on. - -I called for a ban after San Bernardino, and was met with great scorn and anger but now, many are saying I was right to do so -- and although the pause is temporary, we must find out what is going on. The ban will be lifted when we as a nation are in a position to properly and perfectly screen those people coming into our country. - -The immigration laws of the United States give the President the power to suspend entry into the country of any class of persons that the President deems detrimental to the interests or security of the United States, as he deems appropriate. - -I will use this power to protect the American people. When I am elected, I will suspend immigration from areas of the world when there is a proven history of terrorism against the United States, Europe or our allies, until we understand how to end these threats. - -After a full, impartial and long overdue security assessment, we will develop a responsible immigration policy that serves the interests and values of America. - -We cannot continue to allow thousands upon thousands of people to pour into our country, many of whom have the same thought process as this savage killer. - -Many of the principles of Radical Islam are incompatible with Western values and institutions. - -Radical Islam is anti-woman, anti-gay and anti-American. - -I refuse to allow America to become a place where gay people, Christian people, and Jewish people, are the targets of persecution and intimidation by Radical Islamic preachers of hate and violence. - -It’s not just a national security issue. It is a quality of life issue. - -If we want to protect the quality of life for all Americans – women and children, gay and straight, Jews and Christians and all people – then we need to tell the truth about Radical Islam. - -We need to tell the truth, also, about how Radical Islam is coming to our shores. - -We are importing Radical Islamic Terrorism into the West through a failed immigration system -- and through an intelligence community held back by our president. - -Even our own FBI Director has admitted that we cannot effectively check the backgrounds of the people we are letting into America. - -All of the September 11th hijackers were issued visas. - -Large numbers of Somali refugees in Minnesota have tried to join ISIS. 
- -The Boston Bombers came here through political asylum. - -The male shooter in San Bernardino – again, whose name I won't mention -- was the child of immigrants from Pakistan, and he brought his wife – the other terrorist - from Saudi Arabia, through another one of our easily exploited visa programs. - -Immigration from Afghanistan into the United States has increased nearly five-fold in just one year. According to Pew Research, 99% of people in Afghanistan support oppressive Sharia Law. - -We admit many more from other countries in the region who share these same oppressive views. - -If we want to remain a free and open society, then we have to control our borders. - -Yet, Hillary Clinton – for months and despite so many attacks – repeatedly refused to even say the words “radical Islam,” until I challenged her yesterday to say the words or leave the race. - -However, Hillary Clinton – who has been forced to say the words today after policies she supports have caused us so much damage – still has no clue what Radical Islam is, and won’t speak honestly about what it is. - -She is in total denial, and her continuing reluctance to ever name the enemy broadcasts weakness across the world. - -In fact, just a few weeks before the San Bernardino slaughter, Hillary Clinton explained her refusal to say the words Radical Islam. Here is what she said: “Muslims are peaceful and tolerant people, and have nothing whatsoever to do with terrorism.” - -Hillary Clinton says the solution is to ban guns. They tried that in France, which has among the toughest gun laws in the world, and 130 were brutally murdered by Islamic terrorists in cold blood. Her plan is to disarm law-abiding Americans, abolishing the 2nd amendment, and leaving only the bad guys and terrorists with guns. She wants to take away Americans’ guns, then admit the very people who want to slaughter us. - -I will be meeting with the NRA, which has given me their earliest endorsement in a Presidential race, to discuss how to ensure Americans have the means to protect themselves in this age of terror. - -The bottom line is that Hillary supports the policies that bring the threat of Radical Islam into America, and allow it to grow overseas. - -In fact, Hillary Clinton’s catastrophic immigration plan will bring vastly more Radical Islamic immigration into this country, threatening not only our security but our way of life. - -When it comes to Radical Islamic terrorism, ignorance is not bliss – it's deadly. - -The Obama Administration, with the support of Hillary Clinton and others, has also damaged our security by restraining our intelligence-gathering and failing to support law enforcement. They have put political correctness above common sense, above your safety, and above all else. - -I refuse to be politically correct. - -I will do the right thing--I want to straighten things out and to Make America Great Again. - -The days of deadly ignorance will end, and they will end soon. - -As President I will give our intelligence community, law enforcement and military the tools they need to prevent terrorist attacks. - -We need an intelligence-gathering system second to none. That includes better cooperation between state, local and federal officials – and with our allies. - -I will have an Attorney General, a Director of National Intelligence, and a Secretary of Defense who will know how to fight the war on Radical Islamic Terrorism – and who will have the support they require to get the job done. 
- -We also must ensure the American people are provided the information they need to understand the threat. - -The Senate Subcommittee on Immigration has already identified hundreds of immigrants charged with terrorist activities inside the United States since September 11th. - -Nearly a year ago, the Senate Subcommittee asked President Obama's Departments of Justice, State and Homeland Security to provide the immigration history of all terrorists inside the United States. - -These Departments refused to comply. - -President Obama must release the full and complete immigration histories of all individuals implicated in terrorist activity of any kind since 9/11. - -The public has a right to know how these people got here. - -We have to screen applicants to know whether they are affiliated with, or support, radical groups and beliefs. - -We have to control the amount of future immigration into this country to prevent large pockets of radicalization from forming inside America. - -Even a single individual can be devastating, just look at what happened in Orlando. Can you imagine large groups? - -Truly, our President doesn't know what he is doing. He has failed us, and failed us badly, and under his leadership, this situation will not get any better -- it will only get worse. - -Each year, the United States permanently admits more than 100,000 immigrants from the Middle East, and many more from Muslim countries outside the Middle East. Our government has been admitting ever-growing numbers, year after year, without any effective plan for our security. - -In fact, Clinton's State Department was in charge of the admissions process for people applying to enter from overseas. - -Having learned nothing from these attacks, she now plans to massively increase admissions without a screening plan, including a 500% increase in Syrian refugees. - -This could be a better, bigger version of the legendary Trojan Horse. - -We can't let this happen. - -Altogether, under the Clinton plan, you'd be admitting hundreds of thousands of refugees from the Middle East with no system to vet them, or to prevent the radicalization of their children. - -The burden is on Hillary Clinton to tell us why she believes immigration from these dangerous countries should be increased without any effective system to screen who we are bringing in. - -The burden is on Hillary Clinton to tell us why we should admit anyone into our country who supports violence of any kind against gay and lesbian Americans. - -The burden is also on Hillary Clinton to tell us how she will pay for it. Her plan will cost Americans hundreds of billions of dollars long-term. - -Wouldn't this money be better spent on rebuilding America for our current population, including the many poor people already living here? - -We have to stop the tremendous flow of Syrian refugees into the United States – we don't know who they are, they have no documentation, and we don't know what they're planning. - -What I want is common sense. I want a mainstream immigration policy that promotes American values. - -That is the choice I put before the American people: a mainstream immigration policy designed to benefit America, or Hillary Clinton's radical immigration policy designed to benefit politically-correct special interests. - -We've got to get smart, and tough, and vigilant, and we've got to do it now, because later is too late. - -Ask yourself, who is really the friend of women and the LGBT community, Donald Trump with his actions, or Hillary Clinton with her words? 
Clinton wants to allow Radical Islamic terrorists to pour into our country—they enslave women, and murder gays. - -I don’t want them in our country. - -The terrorist attack on the Pulse Night Club demands a full and complete investigation into every aspect of the assault. - -In San Bernardino, as an example, people knew what was going on, but they used the excuse of racial profiling for not reporting it. - -We need to know what the killer discussed with his relatives, parents, friends and associates. - -We need to know if he was affiliated with any radical Mosques or radical activists and what, if any, is their immigration status. - -We need to know if he travelled anywhere, and who he travelled with. - -We need to make sure every single last person involved in this plan – including anyone who knew something but didn't tell us – is brought to justice. - -If it can be proven that somebody had information about any attack, and did not give this information to authorities, they must serve prison time . - -America must do more – much more – to protect its citizens, especially people who are potential victims of crimes based on their backgrounds or sexual orientations. - -It also means we must change our foreign policy. - -The decision to overthrow the regime in Libya, then pushing for the overthrow of the regime in Syria, among other things, without plans for the day after, have created space for ISIS to expand and grow. - -These actions, along with our disastrous Iran deal, have also reduced our ability to work in partnership with our Muslim allies in the region. - -That is why our new goal must be to defeat Islamic terrorism, not nation-building. - -For instance, the last major NATO mission was Hillary Clinton's war in Libya. That mission helped unleash ISIS on a new continent. - -I've said NATO needs to change its focus to stopping terrorism. Since I've raised that criticism, NATO has since announced a new initiative focused on just that. - -America must unite the whole civilized world in the fight against Islamic terrorism, just like we did against communism in the Cold War. - -We've tried it President Obama's way. He gave the world his apology tour, we got ISIS, and many other problems, in return. - -I'd like to conclude my remarks today by again expressing our solidarity with the people of Orlando who have come under attack. - -When I am President, I pledge to protect and defend all Americans who live inside of our borders. Wherever they come from, wherever they were born, all Americans living here and following our laws will be protected. - -America will be a tolerant and open society. - -America will also be a safe society. - -We will protect our borders at home. - -We will defeat ISIS overseas. - -We will ensure every parent can raise their children in peace and safety. - -We will make America rich again. - -We will make America safe again. - -We will make American Great Again. - -Thank you. - -The media talks about “homegrown,” terrorism, but Islamic radicalism, and the networks that nurture it, are imports from overseas. - -Yes, there are many radicalized people already inside our country as a result of the poor policies of the past. But the whole point is that it will be much, much easier to deal with our current problem if we don’t keep on bringing in people who add to the problem. - -For instance, the controversial Mosque attended by the Boston Bombers had as its founder an immigrant from overseas charged in an assassination plot. 
- -This shooter in Orlando was the child of an immigrant father who supported one of the most repressive regimes on Earth. Why would we admit people who support violent hatred? - -Hillary Clinton can never claim to be a friend of the gay community as long as she continues to support immigration policies that bring Islamic extremists to our country who suppress women, gays and anyone who doesn’t share their views. - -She can’t have it both ways. She can’t claim to be supportive of these communities while trying to increase the number of people coming in who want to oppress them. - -How does this kind of immigration make our life better? How does this kind of immigration make our country better? - -Why does Hillary Clinton want to bring people here—in vast numbers—who reject our values? - -Immigration is a privilege, and we should not let anyone into this country who doesn’t support our communities – all of our communities. - -America has already admitted four times more immigrants than any country on earth, and we continue to admit millions more with no real checks or scrutiny. - -Not surprisingly, wages for our workers haven’t budged in many years. - -So whether it’s matter of national security, or financial security, we can’t afford to keep on going like this. We owe $19 trillion in debt, and no longer have options. - -All our communities, from all backgrounds, are ready for some relief. This is not an act of offense against anyone; it is an act of defense. - -I want us all to work together, including in partnership with our Muslim communities. But Muslim communities must cooperate with law enforcement and turn in the people who they know are bad – and they do know where they are. - -I want to fix our schools, roads, bridges and job market. I want every American to succeed. Hillary Clinton wants to empty out the Treasury to bring people into the country that include individuals who preach hate against our own citizens. - -I want to protect our citizens – all of our citizens. - -Last night, our nation was attacked by a radical Islamic terrorist. It was the worst terrorist attack on our soil since 9/11, and the second of its kind in 6 months. My deepest sympathy and support goes out to the victims, the wounded, and their families. - -In his remarks today, President Obama disgracefully refused to even say the words 'Radical Islam'. For that reason alone, he should step down. If Hillary Clinton, after this attack, still cannot say the two words 'Radical Islam' she should get out of this race for the Presidency. - -If we do not get tough and smart real fast, we are not going to have a country anymore. Because our leaders are weak, I said this was going to happen – and it is only going to get worse. I am trying to save lives and prevent the next terrorist attack. We can't afford to be politically correct anymore. - -The terrorist, Omar Mir Saddique Mateen, is the son of an immigrant from Afghanistan who openly published his support for the Afghanistani Taliban and even tried to run for President of Afghanistan. According to Pew, 99% of people in Afghanistan support oppressive Sharia Law. - -We admit more than 100,000 lifetime migrants from the Middle East each year. Since 9/11, hundreds of migrants and their children have been implicated in terrorism in the United States. 
- - -Hillary Clinton wants to dramatically increase admissions from the Middle East, bringing in many hundreds of thousands during a first term – and we will have no way to screen them, pay for them, or prevent the second generation from radicalizing. - -We need to protect all Americans, of all backgrounds and all beliefs, from Radical Islamic Terrorism - which has no place in an open and tolerant society. Radical Islam advocates hate for women, gays, Jews, Christians and all Americans. I am going to be a President for all Americans, and I am going to protect and defend all Americans. We are going to make America safe again and great again for everyone. - -It is unfortunate that my comments have been misconstrued as a categorical attack against people of Mexican heritage. I am friends with and employ thousands of people of Mexican and Hispanic descent. The American justice system relies on fair and impartial judges. All judges should be held to that standard. I do not feel that one’s heritage makes them incapable of being impartial, but, based on the rulings that I have received in the Trump University civil case, I feel justified in questioning whether I am receiving a fair trial. - -Over the past few weeks, I have watched as the media has reported one inaccuracy after another concerning the ongoing litigation involving Trump University. There are several important facts the public should know and that the media has failed to report. - -Throughout the litigation my attorneys have continually demonstrated that students who participated in Trump University were provided a substantive, valuable education based upon a curriculum developed by professors from Northwestern University, Columbia Business School, Stanford University and other respected institutions. And, the response from students was overwhelming. Over a five year period, more than 10,000 paying students filled out surveys giving the courses high marks and expressing their overwhelming satisfaction with Trump University’s programs. For example: - -Former student Tarla Makaeff, the original plaintiff in the litigation, not only completed multiple surveys rating Trump University’s three-day seminar “excellent” in every category, but also praised Trump University’s mentorship program in a glowing 5 plus minute video testimonial. When asked “how could Trump University help to meet [her] goals”, she simply stated “[c]ontinue to offer great classes.” Once the plaintiffs’ lawyers realized how disastrous a witness she was, they asked to have her removed from the case. Over my lawyers’ objections, the judge granted the plaintiffs’ motion, but allowed the case to continue. -Art Cohen, a lead plaintiffs in the litigation, completed a survey in which he not only rated Trump University’s three-day seminar “excellent” in virtually every category, but went so far as to indicate that he would “attend another Trump University seminar” and even “recommend Trump University seminars to a friend.” When asked how Trump University could improve the seminar, Mr. Cohen’s only suggestion was to “[h]ave lunch sandwiches brought in” and make the lunch break 45 minutes. -Former student Bob Giullo, who has been critical of Trump University in numerous interviews and negative advertisements from my political opponents, also expressed his satisfaction, rating Trump University’s programs “excellent” in every category. When asked how Trump University could improve its programs, Mr. 
Giullo simply asked that students be provided “more comfortable chairs.” - -Indeed, these are just a few of literally thousands of positive surveys, all of which can be viewed online at www.98percentapproval.com. - -For those students who decided that Trump University’s programs were not for them, the company had a generous refund policy, offering a full refund to any student who asked for their money back within 3 days of signing up for a program or by the end of the first day of any multi-day program, whichever came later. - -Normally, legal issues in a civil case would be heard in a neutral environment. However, given my unique circumstances as nominee of the Republican Party and the core issues of my campaign that focus on illegal immigration, jobs and unfair trade, I have concerns as to my ability to receive a fair trial. - -I am fighting hard to bring jobs back to the United States. Many companies – like Ford, Nabisco, Carrier – are moving production to Mexico. Drugs and illegal immigrants are also pouring across our border. This is bad for all Americans, regardless of their heritage. - -Due to what I believe are unfair and mistaken rulings in this case and the Judge’s reported associations with certain professional organizations, questions were raised regarding the Obama-appointed Judge’s impartiality. It is a fair question. I hope it is not the case. - -While this lawsuit should have been dismissed, it is now scheduled for trial in November. I do not intend to comment on this matter any further. With all of the thousands of people who have given the courses such high marks and accolades, we will win this case! - -Based on the fact that the Democratic nominating process is totally rigged and Crooked Hillary Clinton and Deborah Wasserman Schultz will not allow Bernie Sanders to win, and now that I am the presumptive Republican nominee, it seems inappropriate that I would debate the second-place finisher. Likewise, the networks want to make a killing on these events and are not proving to be too generous to charitable causes, in this case, women’s health issues. Therefore, as much as I want to debate Bernie Sanders - and it would be an easy payday - I will wait to debate the first-place finisher in the Democratic Party, probably Crooked Hillary Clinton, or whoever it may be. - -I’m delighted to be in North Dakota, a state at the forefront of a new energy revolution. - -Oil and natural gas production is up significantly in the last decade. Our oil imports have been cut in half. - -But all this occurred in spite of massive new bureaucratic and political barriers. - -President Obama has done everything he can to get in the way of American energy. He’s made life much more difficult for North Dakota, as costly regulation makes it harder and harder to turn a profit. - -If Hillary Clinton is in charge, things will get much worse. She will shut down energy production across this country. - -Millions of jobs, and trillions of dollars of wealth, will be destroyed as a result. - -That is why our choice this November is so crucial. - -Here’s what it comes down to. - -Wealth versus poverty. - -North Dakota shows how energy exploration creates shared prosperity. Better schools. More funding for infrastructure. Higher wages. Lower unemployment. - -Things we’ve been missing. - -It’s a choice between sharing in this great energy wealth, or sharing in the poverty promised by Hillary Clinton. - -You don’t have to take my word for it. Just listen to Hillary Clinton’s own words.
She has declared war on the American worker. - -Here is what Hillary Clinton said earlier this year: “We are going to put a lot of coal miners and coal companies out of work.” - -She wants to shut down the coal mines. - -And if Crooked Hillary can shut down the mines, she can shut down your business too. - -Let me tell you how President Obama Undermined Our Middle Class - -President Obama’s stated intent is to eliminate oil and natural gas production in America. - -His policy is death by a thousand cuts through an onslaught of regulations. - -The Environmental Protection Agency’s use of totalitarian tactics forces energy operators in North Dakota into paying unprecedented multi-billion dollar fines before a penalty is even confirmed. - -Government misconduct goes on and on: - -The Department of Justice filed a lawsuit against seven North Dakota oil companies for the deaths of 28 birds while the Administration fast-tracked wind projects that kill more than 1 million birds a year. -The U.S. Fish and Wildlife Service abuses the Endangered Species Act to restrict oil and gas exploration. -Adding to the pain, President Obama now proposes a $10-per-barrel tax on American-produced oil in the middle of a downturn. - -At the same time President Obama lifts economic sanctions on Iran, he imposes economic sanctions on America. He has allowed this country to hit the lowest oil rig count since 1999, producing thousands of layoffs. -America’s incredible energy potential remains untapped. It is a totally self-inflicted wound. - -Under my presidency, we will accomplish complete American energy independence. - -Imagine a world in which our foes, and the oil cartels, can no longer use energy as a weapon. - -But President Obama has done everything he can to keep us dependent on others. Let me list some of the good energy projects he killed. - -He rejected the Keystone XL Pipeline despite the fact that: - - -It would create and support more than 42,000 jobs. -His own State Department concluded that it would be the safest pipeline ever built in the United States. -And it would have no significant impact on the environment. -Yet, even as he rejected this America-Canada pipeline, he made a deal that allows Iran to transport more oil through its pipeline than would have ever flowed through Keystone – with no environmental review. - -President Obama has done everything he can to kill the coal industry. Here are a few of President Obama’s decrees: - - -Regulations that shut down hundreds of coal-fired power plants and block the construction of new ones. - - -A prohibition against coal production on federal land. - -Draconian climate rules that, unless stopped, would effectively bypass Congress to impose job-killing cap-and-trade. - -President Obama has aggressively blocked the production of oil & natural gas: - - -He’s taken a huge percentage of the Alaska National Petroleum Reserve off the table. -Oil and natural gas production on federal lands is down 10%. -87% of available land in the Outer Continental Shelf has been put off limits. -Atlantic Lease sales were closed down too – despite the fact that they would create 280,000 jobs and $23.5 billion in economic activity. -President Obama entered the United States into the Paris Climate Accords – unilaterally, and without the permission of Congress. This agreement gives foreign bureaucrats control over how much energy we use right here in America. -These actions have denied millions of Americans access to the energy wealth sitting under our feet.
- -This is your treasure, and you – the American People – are entitled to share in the riches. - -President Obama’s anti-energy orders have also weakened our security, by keeping us reliant on foreign sources of energy. - -Every dollar of energy we don’t explore here is a dollar of energy that makes someone else rich over there. - -If President Obama wanted to weaken America he couldn’t have done a better job. - -As bad as President Obama is, Hillary Clinton will be worse. - -She will escalate the war against American energy, and unleash the EPA to control every aspect of our lives. -She declared that “we’ve got to move away from coal and all the other fossil fuels,” locking away trillions in American wealth. -In March, Hillary Clinton said: “by the time we get through all of my conditions, I do not think there will be many places in America where fracking will continue to take place.” Keep in mind, shale energy production could add 2 million jobs in 7 years. - -Yet, while Hillary Clinton doesn’t want American energy, she is strongly in favor of foreign energy. Here is what she told China as Secretary of State: - -“American experts and Chinese experts will work to develop China’s natural gas resources. Imagine what it would mean for China if China unleashed its own natural gas resources so you are not dependent on foreign oil.” -Hillary Clinton has her priorities wrong. But we are going to turn all of that around. - - -A Trump Administration will develop an America First energy plan. Here is how this plan will make America Wealthy Again: - -American energy dominance will be declared a strategic economic and foreign policy goal of the United States. -America has 1.5 times as much oil as the combined proven resources of all OPEC countries; we have more Natural Gas than Russia, Iran, Qatar and Saudi Arabia combined; we have three times more coal than Russia. Our total untapped oil and gas reserves on federal lands equal an estimated $50 trillion. -We will become, and stay, totally independent of any need to import energy from the OPEC cartel or any nations hostile to our interests. -At the same time, we will work with our Gulf allies to develop a positive energy relationship as part of our anti-terrorism strategy. -We will use the revenues from energy production to rebuild our roads, schools, bridges and public infrastructure. Cheaper energy will also boost American agriculture. -We will get the bureaucracy out of the way of innovation, so we can pursue all forms of energy. This includes renewable energies and the technologies of the future. It includes nuclear, wind and solar energy – but not to the exclusion of other energy. The government should not pick winners and losers. Instead, it should remove obstacles to exploration. Any market has ups and downs, but lifting these draconian barriers will ensure that we are no longer at the mercy of global markets. - -A Trump Administration will focus on real environmental challenges, not phony ones: - -We will reject Hillary Clinton’s poverty-expansion agenda that enriches her friends and makes everyone else poor. -We’ll solve real environmental problems in our communities like the need for clean and safe drinking water. President Obama actually tried to cut the funding for our drinking water infrastructure – even as he pushed to increase funding for his EPA bureaucrats. -American workers will be the ones building this new infrastructure.
- -Here is my 100-day action plan: - -We’re going to rescind all the job-destroying Obama executive actions, including the Climate Action Plan and the Waters of the U.S. rule. -We’re going to save the coal industry and other industries threatened by Hillary Clinton’s extremist agenda. -I’m going to ask Trans Canada to renew its permit application for the Keystone Pipeline. -We’re going to lift moratoriums on energy production in federal areas. -We’re going to revoke policies that impose unwarranted restrictions on new drilling technologies. These technologies create millions of jobs with a smaller footprint than ever before. -We’re going to cancel the Paris Climate Agreement and stop all payments of U.S. tax dollars to U.N. global warming programs. -Any regulation that is outdated, unnecessary, bad for workers, or contrary to the national interest will be scrapped. We will also eliminate duplication, provide regulatory certainty, and trust local officials and local residents. -Any future regulation will go through a simple test: is this regulation good for the American worker? If it doesn’t pass this test, the rule will not be approved. -Policy decisions will be public and transparent. They won’t be made on Hillary’s private email account. - -We’re going to do all this while taking proper regard for rational environmental concerns. We are going to conserve our beautiful natural habitats, reserves and resources. - -In a Trump Administration, political activists with extreme agendas will no longer write the rules. Instead, we will work with conservationists whose only agenda is protecting nature. - -From an environmental standpoint, my priorities are very simple: clean air and clean water. - -My America First energy plan will do for the American People what Hillary Clinton will never do: create real jobs and real wage growth. - -According to the Institute for Energy Research, lifting the restrictions on American energy will create a flood of new jobs: - -Almost a $700 billion increase in annual economic output over the next 30 years. -More than a $30 billion increase in annual wages over the next 7 years. -Over the next four decades, more than $20 trillion in additional economic activity and $6 trillion in new tax revenue. - -The oil and natural gas industry supports 10 million high-paying American jobs and can create another 400,000 new jobs per year. This exploration will also create a resurgence in American manufacturing -- dramatically reducing both our trade deficit and our budget deficit. - -Compare this future to Hillary Clinton’s Venezuela-style politics of poverty. - -If you think about it, not one idea Hillary Clinton has will actually create a single net job or create a single new dollar to put in workers’ pockets. - -In fact, every idea Hillary has will make jobs disappear. - -Hillary Clinton’s agenda is job destruction. My agenda is job creation. - -She wants to tax and regulate our workers to the point of extinction. - -She wants terrible trade deals, like NAFTA, signed by her husband, that will empty out our manufacturing. - -During her time as Secretary of State, she surrendered to China – allowing them to steal hundreds of billions of dollars in our intellectual property. - -She let them devalue their currency and add more than a trillion dollars to our trade deficit. - -Then there was Libya. - -Secretary Clinton’s reckless Libya invasion handed the country over to ISIS, which now controls the oil.
- -The Middle East that Clinton inherited was far less dangerous than the Middle East she left us with today. - -Her reckless decisions in Iraq, Libya, Iran, Egypt and Syria have made the Middle East more unstable than ever before. - -The Hillary Clinton foreign policy legacy is chaos. - -Hillary Clinton also wants totally open borders in America, which would further plunge our workers into poverty. - -Hillary’s open borders agenda means a young single mom living in poverty would have to compete for a job or a raise against millions of lower-wage workers rushing into the country, but she doesn’t care. - -My agenda will be accomplished through a series of reforms that put America First: - -Energy reform that creates trillions in new wealth. -Immigration reform that protects our borders and defends our workers. -Tax reform that brings millions of new jobs to America. -Regulation reform that eliminates stupid rules that send our jobs overseas. -Welfare reform that requires employers to recruit from the unemployment office – not the immigration office. -Trade reform that brings back our manufacturing jobs and stands up to countries that cheat. -There is one more thing we must do to make America wealthy again: we have to make our communities safe again. - -Violent crime is rising in major cities across the country. This is unacceptable. Every parent has the right to raise their kids in safety. - -When we put political correctness before justice, we hurt those who have the least. It undermines their schools, slashes the value of their homes, and drives away their jobs. - -Crime is a stealth tax on the poor. - -To those living in fear, I say: help is coming. A Trump Administration will return law and order to America. Security is not something that should only be enjoyed by the rich and powerful. - -By the way, I was endorsed by the National Rifle Association, and we are not going to let Hillary Clinton abolish the 2nd amendment, either. - -My reform agenda is going to bring wealth and security to the poorest communities in this country. - -What does Hillary have to offer the poor but more of the same? - -In Chicago, for instance, one-fourth of young Hispanics and one-third of young African-Americans are unemployed. - -My message today to all the people trapped in poverty is this: politicians like Hillary Clinton have failed you. - -They have used you. - -You need something new. I am the only one who will deliver it. - -We are going to put America back to work. - -We are going to put people before government. - -We are going to rebuild our inner cities. - -We are going to make you and your family safe, secure and prosperous. - -The choice in November is a choice between a Clinton Agenda that puts Donors First – or a new agenda that puts America First. - -It is a choice between a Clinton government of, by and for the powerful – or a return to government of, by and for the people. - -It is a choice between certain decline, or a revival of America’s promise. - -The people in charge of our government say things can’t change. - -I am here to tell you that things have to change. - -They want you to keep trusting the same people who’ve betrayed you. - -I am here to tell you that if you keep supporting those who’ve let you down, then you will keep getting let down for the rest of your life. - -I am prepared to kick the special interests out of Washington, D.C. and to hand their seat of power over to you. - -It’s about time. - -Together, we will put the American people first again.
- -We will make our communities wealthy again. - -We will make our cities safe again. - -We will make our country strong again. - -Ladies and Gentlemen: We will make America Great Again. - -The fact that Hillary thinks the temporary Muslim ban, which she calls the "Muslim ban", promotes terrorism, proves Bernie Sanders was correct when he said she is not qualified to be President. - -Look at the carnage all over the world including the World Trade Center, San Bernardino, Paris, the USS Cole, Brussels and an unlimited number of other places. She and our totally ignorant President won't even use the term Radical Islamic Terrorism. And by the way, ask Hillary who blew up the plane last night - another terrible, but preventable tragedy. She has bad judgement and is unfit to serve as President at this delicate and difficult time in our country's history. - -Justice Scalia was a remarkable person and a brilliant Supreme Court Justice. His career was defined by his reverence for the Constitution and his legacy of protecting Americans’ most cherished freedoms. He was a Justice who did not believe in legislating from the bench. He is a person whom I held in the highest regard, and I will always greatly respect his intelligence and his conviction to uphold the Constitution of our country. The following list of potential Supreme Court justices is representative of the kind of constitutional principles I value and, as President, I plan to use this list as a guide to nominate our next United States Supreme Court Justices. - -We are pleased to have this partnership in place with the national party. By working together with the RNC to raise support for Republicans everywhere, we are going to defeat Hillary Clinton, keep Republican majorities in Congress and in the states, and Make America Great Again. - -I filed my PFD, which I am proud to say is the largest in the history of the FEC. Despite the fact that I am allowed extensions, I have again filed my report, which is 104 pages, on time. Bernie Sanders has requested, on the other hand, an extension for his small report. This is the difference between a businessman and the all talk, no action politicians that have failed the American people for far too long. I have built an incredible company and have accumulated one of the greatest portfolios of real estate assets, many of which are considered to be among the finest and most iconic properties in the world. This is the kind of thinking the country needs. - -I am proud to receive the endorsement of former Congressman Lightfoot. I have tremendous respect for him and I greatly appreciate his support. - -It is tremendous to be working with these leaders and their colleagues on winning solutions that will really move us forward. A strong House Republican Majority is imperative to fixing the problems facing America and making our country better and stronger than ever before. - -The United States cannot afford another four years of the Obama White House, which is what Hillary Clinton represents. That is why it’s critical that Republicans unite around our shared principles, advance a conservative agenda, and do all we can to win this fall. With that focus, we had a great conversation this morning. While we were honest about our few differences, we recognize that there are also many important areas of common ground. We will be having additional discussions, but remain confident there’s a great opportunity to unify our party and win this fall, and we are totally committed to working together to achieve that goal.
We are extremely proud of the fact that many millions of new voters have entered the primary system, far more than ever before in the Republican Party's history. This was our first meeting, but it was a very positive step toward unification. - -It is a great honor to have won both West Virginia and Nebraska, especially by such massive margins. My time spent in both states was a wonderful and enlightening experience for me. I learned a lot, and that knowledge will be put to good use towards the creation of businesses, jobs, and the strengthening and revival of their economies. I look forward to returning to West Virginia and Nebraska soon, and hope to win both states in the general election. Likewise, my time spent last week with the great people of Oregon will hopefully lead to another victory next Tuesday. - -Governor Christie is an extremely knowledgeable and loyal person with the tools and resources to put together an unparalleled Transition Team, one that will be prepared to take over the White House when we win in November. I am grateful to Governor Christie for his contributions to this movement. - -I want to thank Senator Bob Dole for his endorsement. He is a wonderful man and it is a great honor to have his support. - -I fully understand why Lindsey Graham cannot support me. If I got beaten as badly as I beat him, and all the other candidates he endorsed, I would not be able to give my support either. Every time I see Lindsey Graham spew hate during interviews I ask why the media never questions how I single-handedly destroyed his hapless run for President. As a candidate who did not receive 1% in his own state - compared to my victory at nearly 40% with many others in the race - he has zero credibility. He was a poor representative and an embarrassment to the great people of South Carolina. Judging by the incompetent way he ran his campaign, it is easy to see why his military strategies have failed so badly --- we can’t even beat ISIS! - -While I will unify the party, Lindsey Graham has shown himself to be beyond rehabilitation. And like the voters who rejected him, so will I! - -I am not ready to support Speaker Ryan's agenda. Perhaps in the future we can work together and come to an agreement about what is best for the American people. They have been treated so badly for so long that it is about time for politicians to put them first! - -Steven is a professional at the highest level with an extensive and very successful financial background. He brings unprecedented experience and expertise to a fundraising operation that will benefit the Republican Party and ultimately defeat Hillary Clinton. - -Ted Cruz is a desperate candidate trying to save his failing campaign. It is no surprise he has resorted to his usual tactics of over-the-top rhetoric that nobody believes. Over the last week, I have watched Lyin’ Ted become more and more unhinged as he is unable to react under the pressure and stress of losing, in all cases by landslides, the last six primary elections --- in fact, coming in last place in all but one of them. Today’s ridiculous outburst only proves what I have been saying for a long time, that Ted Cruz does not have the temperament to be President of the United States. - -I am pleased to have the support of Representative Duncan (TN), who is one of the most fiscally conservative Members of the House. If more Members voted like Rep. Duncan, we wouldn't be wasting trillions of taxpayer dollars in foreign countries.
- -After massive defeats in Arizona, New York, Pennsylvania, Rhode Island, Delaware, Connecticut and Maryland (in addition to twenty other contests), and given the fact that Senator Cruz has millions of votes less than me and is being clobbered on the delegate front, this is a pure waste of time. It reminds me very much of the already failed Kasich 'collusion' – a desperate attempt to save a failing campaign by an all talk, no action politician. The people of Indiana are very smart – and they will see through this just like they saw through the already failed Kasich alliance. Cruz has no path to victory - he is only trying to stay relevant. - -Thank you for the opportunity to speak to you, and thank you to the Center for the National Interest for honoring me with this invitation. - -I would like to talk today about how to develop a new foreign policy direction for our country – one that replaces randomness with purpose, ideology with strategy, and chaos with peace. - -It is time to shake the rust off of America’s foreign policy. It's time to invite new voices and new visions into the fold. - -The direction I will outline today will also return us to a timeless principle. My foreign policy will always put the interests of the American people, and American security, above all else. That will be the foundation of every decision that I will make. - -America First will be the major and overriding theme of my administration. - -But to chart our path forward, we must first briefly look back. - -We have a lot to be proud of. In the 1940s we saved the world. The Greatest Generation beat back the Nazis and the Japanese Imperialists. - -Then we saved the world again, this time from totalitarian Communism. The Cold War lasted for decades, but we won. - -Democrats and Republicans working together got Mr. Gorbachev to heed the words of President Reagan when he said: “tear down this wall.” - -History will not forget what we did. - -Unfortunately, after the Cold War, our foreign policy veered badly off course. We failed to develop a new vision for a new time. In fact, as time went on, our foreign policy began to make less and less sense. - -Logic was replaced with foolishness and arrogance, and this led to one foreign policy disaster after another. - -We went from mistakes in Iraq to Egypt to Libya, to President Obama’s line in the sand in Syria. Each of these actions has helped to throw the region into chaos, and gave ISIS the space it needs to grow and prosper. - -It all began with the dangerous idea that we could make Western democracies out of countries that had no experience or interest in becoming a Western Democracy. - -We tore up what institutions they had and then were surprised at what we unleashed. Civil war, religious fanaticism; thousands of American lives, and many trillions of dollars, were lost as a result. The vacuum was created that ISIS would fill. Iran, too, would rush in and fill the void, much to their unjust enrichment. - -Our foreign policy is a complete and total disaster. - -No vision, no purpose, no direction, no strategy. - -Today, I want to identify five main weaknesses in our foreign policy. - - -First, Our Resources Are Overextended - -President Obama has weakened our military by weakening our economy. He’s crippled us with wasteful spending, massive debt, low growth, a huge trade deficit and open borders. - -Our manufacturing trade deficit with the world is now approaching $1 trillion a year. We’re rebuilding other countries while weakening our own.
- -Ending the theft of American jobs will give us the resources we need to rebuild our military and regain our financial independence and strength. - -I am the only person running for the Presidency who understands this problem and knows how to fix it. - - -Secondly, our allies are not paying their fair share. - -Our allies must contribute toward the financial, political and human costs of our tremendous security burden. But many of them are simply not doing so. They look at the United States as weak and forgiving and feel no obligation to honor their agreements with us. - -In NATO, for instance, only 4 of 28 other member countries, besides America, are spending the minimum required 2% of GDP on defense. - -We have spent trillions of dollars over time – on planes, missiles, ships, equipment – building up our military to provide a strong defense for Europe and Asia. The countries we are defending must pay for the cost of this defense – and, if not, the U.S. must be prepared to let these countries defend themselves. - -The whole world will be safer if our allies do their part to support our common defense and security. - -A Trump Administration will lead a free world that is properly armed and funded. - - -Thirdly, our friends are beginning to think they can’t depend on us. - -We’ve had a president who dislikes our friends and bows to our enemies. - -He negotiated a disastrous deal with Iran, and then we watched them ignore its terms, even before the ink was dry. - -Iran cannot be allowed to have a nuclear weapon and, under a Trump Administration, will never be allowed to have a nuclear weapon. - -All of this without even mentioning the humiliation of the United States with Iran’s treatment of our ten captured sailors. - -In negotiation, you must be willing to walk. The Iran deal, like so many of our worst agreements, is the result of not being willing to leave the table. When the other side knows you’re not going to walk, it becomes absolutely impossible to win. - -At the same time, your friends need to know that you will stick by the agreements that you have with them. - -President Obama gutted our missile defense program, then abandoned our missile defense plans with Poland and the Czech Republic. - -He supported the ouster of a friendly regime in Egypt that had a longstanding peace treaty with Israel – and then helped bring the Muslim Brotherhood to power in its place. - -Israel, our great friend and the one true Democracy in the Middle East, has been snubbed and criticized by an Administration that lacks moral clarity. Just a few days ago, Vice President Biden again criticized Israel – a force for justice and peace – for acting as an impediment to peace in the region. - -President Obama has not been a friend to Israel. He has treated Iran with tender love and care and made it a great power in the Middle East – all at the expense of Israel, our other allies in the region and, critically, the United States. - -We’ve picked fights with our oldest friends, and now they’re starting to look elsewhere for help. - -Fourth, our rivals no longer respect us. - -In fact, they are just as confused as our allies, but an even bigger problem is that they don’t take us seriously any more. - -When President Obama landed in Cuba on Air Force One, no leader was there to meet or greet him – perhaps an incident without precedent in the long and prestigious history of Air Force One. - -Then, amazingly, the same thing happened in Saudi Arabia -- it's called no respect. 
- -Do you remember when the President made a long and expensive trip to Copenhagen, Denmark to get the Olympics for our country, and, after this unprecedented effort, it was announced that the United States came in fourth place? - -He should have known the result before making such an embarrassing commitment. - -The list of humiliations goes on and on. - -President Obama watches helplessly as North Korea increases its aggression and expands even further with its nuclear reach. - -Our president has allowed China to continue its economic assault on American jobs and wealth, refusing to enforce trade rules – or apply the leverage on China necessary to rein in North Korea. - -He has even allowed China to steal government secrets with cyber attacks and engage in industrial espionage against the United States and its companies. - -We’ve let our rivals and challengers think they can get away with anything. - -If President Obama’s goal had been to weaken America, he could not have done a better job. - -Finally, America no longer has a clear understanding of our foreign policy goals. - -Since the end of the Cold War and the break-up of the Soviet Union, we’ve lacked a coherent foreign policy. - -One day we’re bombing Libya and getting rid of a dictator to foster democracy for civilians, the next day we are watching the same civilians suffer while that country falls apart. - -We're a humanitarian nation. But the legacy of the Obama-Clinton interventions will be weakness, confusion, and disarray. - -We have made the Middle East more unstable and chaotic than ever before. - -We left Christians subject to intense persecution and even genocide. - -Our actions in Iraq, Libya and Syria have helped unleash ISIS. - -And we’re in a war against radical Islam, but President Obama won’t even name the enemy! - -Hillary Clinton also refuses to say the words “radical Islam,” even as she pushes for a massive increase in refugees. - -After Secretary Clinton’s failed intervention in Libya, Islamic terrorists in Benghazi took down our consulate and killed our ambassador and three brave Americans. Then, instead of taking charge that night, Hillary Clinton decided to go home and sleep! Incredible. - -Clinton blames it all on a video, an excuse that was a total lie. Our Ambassador was murdered and our Secretary of State misled the nation – and by the way, she was not awake to take that call at 3 o'clock in the morning. - -And now ISIS is making millions of dollars a week selling Libyan oil. - -This will change when I am president. - -To all our friends and allies, I say America is going to be strong again. America is going to be a reliable friend and ally again. - -We’re going to finally have a coherent foreign policy based upon American interests, and the shared interests of our allies. - -We are getting out of the nation-building business, and instead focusing on creating stability in the world. - -Our moments of greatest strength came when politics ended at the water’s edge. - -We need a new, rational American foreign policy, informed by the best minds and supported by both parties, as well as by our close allies. - -This is how we won the Cold War, and it’s how we will win our new and future struggles. - -First, we need a long-term plan to halt the spread and reach of radical Islam. - -Containing the spread of radical Islam must be a major foreign policy goal of the United States. - -Events may require the use of military force. But it’s also a philosophical struggle, like our long struggle in the Cold War. 
- -In this we’re going to be working very closely with our allies in the Muslim world, all of which are at risk from radical Islamic violence. - -We should work together with any nation in the region that is threatened by the rise of radical Islam. But this has to be a two-way street – they must also be good to us and remember us and all we are doing for them. - -The struggle against radical Islam also takes place in our homeland. There are scores of recent migrants inside our borders charged with terrorism. For every case known to the public, there are dozens more. - -We must stop importing extremism through senseless immigration policies. - -A pause for reassessment will help us to prevent the next San Bernardino or worse -- all you have to do is look at the World Trade Center and September 11th. - -And then there’s ISIS. I have a simple message for them. Their days are numbered. I won’t tell them where and I won’t tell them how. We must, as a nation, be more unpredictable. But they’re going to be gone. And soon. - -Secondly, we have to rebuild our military and our economy. - -The Russians and Chinese have rapidly expanded their military capability, but look what’s happened to us! - -Our nuclear weapons arsenal – our ultimate deterrent – has been allowed to atrophy and is desperately in need of modernization and renewal. - -Our active duty armed forces have shrunk from 2 million in 1991 to about 1.3 million today. - -The Navy has shrunk from over 500 ships to 272 ships during that time. - -The Air Force is about 1/3 smaller than in 1991. Pilots are flying B-52s in combat missions today which are older than most people in this room. - -And what are we doing about this? President Obama has proposed a 2017 defense budget that, in real dollars, cuts nearly 25% from what we were spending in 2011. - -Our military is depleted, and we’re asking our generals and military leaders to worry about global warming. - -We will spend what we need to rebuild our military. It is the cheapest investment we can make. We will develop, build and purchase the best equipment known to mankind. Our military dominance must be unquestioned. - -But we will look for savings and spend our money wisely. In this time of mounting debt, not one dollar can be wasted. - -We are also going to have to change our trade, immigration and economic policies to make our economy strong again – and to put Americans first again. This will ensure that our own workers, right here in America, get the jobs and higher pay that will grow our tax revenue and increase our economic might as a nation. - -We need to think smarter about areas where our technological superiority gives us an edge. This includes 3-D printing, artificial intelligence and cyberwarfare. - -A great country also takes care of its warriors. Our commitment to them is absolute. A Trump Administration will give our service men and women the best equipment and support in the world when they serve, and the best care in the world when they return as veterans to civilian life. - -Finally, we must develop a foreign policy based on American interests. - - -Businesses do not succeed when they lose sight of their core interests and neither do countries. - -Look at what happened in the 1990s. Our embassies in Kenya and Tanzania were attacked and seventeen brave sailors were killed on the USS Cole. And what did we do? It seemed we put more effort into adding China to the World Trade Organization – which has been a disaster for the United States – than into stopping Al Qaeda.
- -We even had an opportunity to take out Osama Bin Laden, and didn’t do it. And then, we got hit at the World Trade Center and the Pentagon, the worst attack on our country in its history. - -Our foreign policy goals must be based on America’s core national security interests, and the following will be my priorities. - -In the Middle East, our goals must be to defeat terrorists and promote regional stability, not radical change. We need to be clear-sighted about the groups that will never be anything other than enemies. - -And we must only be generous to those that prove they are our friends. - -We desire to live peacefully and in friendship with Russia and China. We have serious differences with these two nations, and must regard them with open eyes. But we are not bound to be adversaries. We should seek common ground based on shared interests. Russia, for instance, has also seen the horror of Islamic terrorism. - -I believe an easing of tensions and improved relations with Russia – from a position of strength – is possible. Common sense says this cycle of hostility must end. Some say the Russians won’t be reasonable. I intend to find out. If we can’t make a good deal for America, then we will quickly walk from the table. - -Fixing our relations with China is another important step towards a prosperous century. China respects strength, and by letting them take advantage of us economically, we have lost all of their respect. We have a massive trade deficit with China, a deficit we must find a way, quickly, to balance. - -A strong and smart America is an America that will find a better friend in China. We can both benefit or we can both go our separate ways. - -After I am elected President, I will also call for a summit with our NATO allies, and a separate summit with our Asian allies. In these summits, we will not only discuss a rebalancing of financial commitments, but take a fresh look at how we can adopt new strategies for tackling our common challenges. - -For instance, we will discuss how we can upgrade NATO’s outdated mission and structure – grown out of the Cold War – to confront our shared challenges, including migration and Islamic terrorism. - -I will not hesitate to deploy military force when there is no alternative. But if America fights, it must fight to win. I will never send our finest into battle unless necessary – and will only do so if we have a plan for victory. - -Our goal is peace and prosperity, not war and destruction. - -The best way to achieve those goals is through a disciplined, deliberate and consistent foreign policy. - -With President Obama and Secretary Clinton we’ve had the exact opposite: a reckless, rudderless and aimless foreign policy – one that has blazed a path of destruction in its wake. - -After losing thousands of lives and spending trillions of dollars, we are in far worse shape now in the Middle East than ever before. - -I challenge anyone to explain the strategic foreign policy vision of Obama-Clinton – it has been a complete and total disaster. - -I will also be prepared to deploy America’s economic resources. Financial leverage and sanctions can be very persuasive – but we need to use them selectively and with determination. Our power will be used if others do not play by the rules. - -Our friends and enemies must know that if I draw a line in the sand, I will enforce it. - -However, unlike other candidates for the presidency, war and aggression will not be my first instinct. You cannot have a foreign policy without diplomacy. 
A superpower understands that caution and restraint are signs of strength. - -Although not in government service, I was totally against the War in Iraq, saying for many years that it would destabilize the Middle East. Sadly, I was correct, and the biggest beneficiary was Iran, which is systematically taking over Iraq and gaining access to their rich oil reserves – something it has wanted to do for decades. And now, to top it all off, we have ISIS. - -My goal is to establish a foreign policy that will endure for several generations. - -That is why I will also look for talented experts with new approaches, and practical ideas, rather than surrounding myself with those who have perfect resumes but very little to brag about except responsibility for a long history of failed policies and continued losses at war. - -Finally, I will work with our allies to reinvigorate Western values and institutions. Instead of trying to spread “universal values” that not everyone shares, we should understand that strengthening and promoting Western civilization and its accomplishments will do more to inspire positive reforms around the world than military interventions. - -These are my goals, as president. - -I will seek a foreign policy that all Americans, whatever their party, can support, and which our friends and allies will respect and welcome. - -The world must know that we do not go abroad in search of enemies, that we are always happy when old enemies become friends, and when old friends become allies. - -To achieve these goals, Americans must have confidence in their country and its leadership again. - -Many Americans must wonder why our politicians seem more interested in defending the borders of foreign countries than their own. - -Americans must know that we are putting the American people first again. On trade, on immigration, on foreign policy – the jobs, incomes and security of the American worker will always be my first priority. - -No country has ever prospered that failed to put its own interests first. Both our friends and enemies put their countries above ours and we, while being fair to them, must do the same. - -We will no longer surrender this country, or its people, to the false song of globalism. - -The nation-state remains the true foundation for happiness and harmony. I am skeptical of international unions that tie us up and bring America down, and will never enter America into any agreement that reduces our ability to control our own affairs. - -NAFTA, as an example, has been a total disaster for the U.S. and has emptied our states of our manufacturing and our jobs. Never again. Only the reverse will happen. We will keep our jobs and bring in new ones. There will be consequences for companies that leave the U.S. only to exploit it later. - -Under a Trump Administration, no American citizen will ever again feel that their needs come second to the citizens of foreign countries. - -I will view the world through the clear lens of American interests. - -I will be America’s greatest defender and most loyal champion. We will not apologize for becoming successful again, but will instead embrace the unique heritage that makes us who we are. - -The world is most peaceful, and most prosperous, when America is strongest. - -America will continually play the role of peacemaker. - -We will always help to save lives and, indeed, humanity itself. But to play that role, we must make America strong again. - -We must make America respected again. And we must make America great again.
- -If we do that, perhaps this century can be the most peaceful and prosperous the world has ever known. Thank you. - -Ken has a proven track record in winning state political races. He will support our delegate operations team and bolster our ground game efforts. He brings tremendous experience to the job, and I know he is up to the task of working with my team. - -It is sad that two grown politicians have to collude against one person who has only been a politician for ten months in order to try and stop that person from getting the Republican nomination. - - -Senator Cruz has done very poorly, and after his New York performance, which was a total disaster, he is in free fall; as everyone has seen, he does not react well under pressure. Also, approximately 80% of the Republican Party is against him. Governor Kasich has only won 1 state out of 41 – in other words, he is 1 for 41 – and he is not even doing as well as other candidates who could have stubbornly stayed in the race like him but chose not to do so. Marco Rubio, as an example, has more delegates than Kasich and yet suspended his campaign one month ago. Others, likewise, have done much better than Kasich, who would get slaughtered by Hillary Clinton once the negative ads against him begin. 85% of Republican voters are against Kasich. - - -Collusion is often illegal in many other industries and yet these two Washington insiders have had to resort to collusion in order to stay alive. They are mathematically dead and this act only shows, as puppets of donors and special interests, how truly weak they and their campaigns are. I have brought millions of voters into the Republican primary system and have received many millions of votes more than Cruz or Kasich. Additionally, I am far ahead of both candidates with delegates and would be receiving in excess of 60% of the vote except for the fact that there were so many candidates running against me. - - -Because of me, everyone now sees that the Republican primary system is totally rigged. When two candidates who have no path to victory get together to stop a candidate who is expanding the party by millions of voters (all of whom will drop out if I am not in the race), it is yet another example of everything that is wrong in Washington and our political system. This horrible act of desperation, from two campaigns who have totally failed, makes me even more determined, for the good of the Republican Party and our country, to prevail! - -I am honored to be invited to speak at an organization founded by former President Richard Nixon, and look forward to sharing my views on the many serious foreign policy issues facing our country and our allies around the world. Trade, immigration and security policies are critical concerns of all Americans, and we must develop a clear, consistent long-term foreign policy for making America safe and prosperous. - -Rick is a seasoned political expert with a very successful career in winning elections. He brings decades of experience, and his deep ties to political leaders and activists across the country will be a tremendous asset as we enter the final phase of securing the nomination. - -I am pleased to bring Tim on board to organize what is a very important state. I know he will be an asset to the team and ultimately deliver a win in California. - -Thank you to the great people of Missouri who voted for me and the state officials who worked to ensure the votes of the people mattered.
It is great to have yet another victory as we look forward to the upcoming primary in New York. - -My campaign continues to receive tremendous support from voters all across the country. We have won far more states, far more delegates, and millions more votes than any other candidate. The nomination process has reached a point that requires someone familiar with the complexities involved in the final stages. I am organizing these responsibilities under someone who has done this job successfully in many campaigns. This will allow the rest of my team to deal with the increasing needs of a national campaign for both the pre-Convention phase and most importantly, the general election. Paul is a well-respected expert in this regard and we are pleased to have him join the efforts to Make America Great Again. - -New York is my home and I am so proud to have been able to assemble such an incredible team. I have watched and known these people for so many years. They love New York and our country. Together we will Make America Great Again. - -Congressman Hunter and Congressman Collins are conservative stalwarts. I am honored to have the support of these two well respected Members of Congress who share my vision of securing our borders, strengthening our military, treating our veterans with the respect and care they deserve and putting Americans first again. - -If Congress were to pass legislation making abortion illegal and the federal courts upheld this legislation, or any state were permitted to ban abortion under state and federal law, the doctor or any other person performing this illegal act upon a woman would be held legally responsible, not the woman. The woman is a victim in this case as is the life in her womb. My position has not changed - like Ronald Reagan, I am pro-life with exceptions. - -I am deeply grateful for this extraordinary and historic endorsement from America's Border Patrol Agents, and especially honored that they would break with their past precedent of not endorsing in presidential primaries in order to endorse my candidacy. This endorsement represents a total rejection of the corrupt politicians who have allowed transnational gangs and cartels to terrorize American communities. - -The National Border Patrol Council is the official body representing America's front-line Border Patrol Agents who sacrifice every day, under intolerable political restraints, to keep America safe. America's 16,500 border patrol agents represented by the NBPC are the first line of defense for our nation. The NBPC provides the vital outlet to learn the truth - not the political spin from bureaucrats - about what is really happening on our border. And the NBPC has been the one outlet these agents have to prevent their voice from being drowned out by big money special interests. - -As President, I will work tirelessly with the NBPC and their rank-and-file agents to secure our border once and for all. I will ensure that every rank-and-file officer has the resources, tools and support they need to protect this nation and stop the influx of drugs, gangs and cartel violence. Together, we will save thousands of American lives, millions of American jobs, and billions of American tax dollars. - -I am deeply privileged to have the official support of America's border patrol agents, and will never let them down. - -This is our chance to make our country safe again, and ultimately, Make America Great Again. 
- -Paul is a great asset and an important addition as we consolidate the tremendous support we have received in the primaries and caucuses, garnering millions more votes than any other candidate. Paul Manafort, and the team I am building, bring the needed skill sets to ensure that the will of the Republican voters, not the Washington political establishment, determines who will be the nominee for the Republican Party. I look forward to winning the nomination, and ultimately the presidency in order to Make America Great Again. - -I have no idea whether or not the cover story about Ted Cruz in this week's issue of the National Enquirer is true, but I had absolutely nothing to do with it, did not know about it, and have not, as yet, read it. Likewise, I have nothing to do with the National Enquirer and unlike Lyin’ Ted Cruz I do not surround myself with political hacks and henchmen and then pretend total innocence. Ted Cruz’s problem with the National Enquirer is his and his alone, and while they were right about O.J. Simpson, John Edwards, and many others, I certainly hope they are not right about Lyin’ Ted Cruz. I look forward to spending the week in Wisconsin, winning the Republican nomination and ultimately the Presidency in order to Make America Great Again. - -Good evening. I speak to you today as a lifelong supporter and true friend of Israel. I am a newcomer to politics but not to backing the Jewish state. - -In late 2001, weeks after the attacks on New York City and Washington - attacks perpetrated by Islamic fundamentalists - Mayor Giuliani visited Israel to show solidarity with terror victims. I sent him in my plane because I backed the mission 100%. - -In Spring 2004, at the height of violence in the Gaza Strip, I was the Grand Marshal of the 40th Salute to Israel Parade, the largest single gathering in support of the Jewish state. - -It was a very dangerous time for Israel and frankly for anyone supporting Israel - many people turned down this honor – I did not, I took the risk. - -I didn't come here tonight to pander to you about Israel. That's what politicians do: all talk, no action. I came here to speak to you about where I stand on the future of American relations with our strategic ally, our unbreakable friendship, and our cultural brother, the only democracy in the Middle East, the State of Israel. - -My number one priority is to dismantle the disastrous deal with Iran. I have been in business a long time. I know deal-making, and let me tell you, this deal is catastrophic - for America, for Israel, and for the whole Middle East. - -The problem here is fundamental. We have rewarded the world's leading state sponsor of terror with $150 billion and we received absolutely nothing in return. - -I've studied this issue in greater detail than almost anybody. The biggest concern with the deal is not necessarily that Iran is going to violate it, although it already has; the bigger problem is that they can keep the terms and still get to the bomb by simply running out the clock, and, of course, they keep the billions. - -The deal doesn’t even require Iran to dismantle its military nuclear capability! Yes, it places limits on its military nuclear program for only a certain number of years. But when those restrictions expire, Iran will have an industrial-size military nuclear capability ready to go, and with zero provision for delay no matter how bad Iran's behavior is. When I am president, I will adopt a strategy that focuses on three things when it comes to Iran.
- -First, we will stand up to Iran’s aggressive push to destabilize and dominate the region. Iran is a very big problem and will continue to be, but if I'm elected President, I know how to deal with trouble. Iran is a problem in Iraq, a problem in Syria, a problem in Lebanon, a problem in Yemen, and will be a very major problem for Saudi Arabia. Literally every day, Iran provides more and better weapons to their puppet states. - -Hezbollah in Lebanon has received sophisticated anti-ship weapons, anti-aircraft weapons, and GPS systems on rockets. Now they're in Syria trying to establish another front against Israel from the Syrian side of the Golan Heights. - -In Gaza, Iran is supporting Hamas and Islamic Jihad - and in the West Bank they are openly offering Palestinians $7,000 per terror attack and $30,000 for every Palestinian terrorist's home that’s been destroyed. - -Iran is financing military forces throughout the Middle East and it is absolutely indefensible that we handed them over $150 billion to facilitate even more acts of terror. - -Secondly, we will totally dismantle Iran’s global terror network. Iran has seeded terror groups all over the world. During the last five years, Iran has perpetrated terror attacks in 25 different countries on five continents. They’ve got terror cells everywhere, including in the western hemisphere very close to home. Iran is the biggest sponsor of terrorism around the world and we will work to dismantle that reach. - -Third, at the very least, we must hold Iran accountable by restructuring the terms of the previous deal. Iran has already - since the deal has been in place - test-fired ballistic missiles three times. Those ballistic missiles, with a range of 1,250 miles, were designed not only to intimidate Israel, which is only 600 miles away, but also to frighten Europe and, someday, the United States. - -Do you want to hear something really shocking? As many of the great people in this room know, painted on those missiles – in both Hebrew and Farsi – were the words “Israel must be wiped off the face of the earth.” - -What kind of demented minds write that in Hebrew? And here's another twisted part - testing these missiles does not even violate the horrible deal that we made! - -The deal is silent on missile tests, but those tests DO violate UN Security Council Resolutions. The problem is, no one has done anything about it. Which brings me to my next point – the utter weakness and incompetence of the United Nations. - -The United Nations is not a friend of democracy. It's not a friend to freedom. It's not a friend even to the United States of America, where, as all know, it has its home. And it surely isn’t a friend to Israel. - -With President Obama in his final year, discussions have been swirling about an attempt to bring a Security Council resolution on the terms of an eventual agreement between Israel and Palestine. Let me be clear: An agreement imposed by the UN would be a total and complete disaster. The United States must oppose this resolution and use the power of our veto. Why? Because that's not how you make a deal. - -Deals are made when parties come to the table and negotiate. Each side must give up something it values in exchange for something it requires. A deal that imposes conditions on Israel and the Palestinian Authority will do nothing to bring peace. It will only further delegitimize Israel and it would reward Palestinian terrorism, because every day they are stabbing Israelis – and even Americans.
- -Just last week, American Taylor Allen Force, a West Point grad who served in Iraq and Afghanistan, was murdered in the street by a knife-wielding Palestinian. You don't reward that behavior, you confront it! - -It's not up to the United Nations to impose a solution. The parties must negotiate a resolution themselves. The United States can be useful as a facilitator of negotiations, but no one should be telling Israel it must abide by some agreement made by others thousands of miles away who don't even really know what's happening. - -When I'm president, believe me, I will veto any attempt by the UN to impose its will on the Jewish state. You see, I know about deal-making - that's what I do. I wrote The Art of the Deal, one of the all-time best-selling books about deals and deal making. To make a great deal, you need two willing participants. - -We know Israel is willing to deal. Israel has been trying to sit down at the negotiating table, without pre-conditions, for years. You had Camp David in 2000, where Prime Minister Barak made an incredible offer – maybe even too generous. Arafat rejected it. - -In 2008, Prime Minister Olmert made an equally generous offer. The Palestinian Authority rejected it. Then John Kerry tried to come up with a framework, and Abbas didn't even respond, not even to the Secretary of State of the United States of America! - -When I become President, the days of treating Israel like a second-class citizen will end on Day One. I will meet with Prime Minister Netanyahu immediately. I have known him for many years and we will be able to work closely together to help bring stability and peace to Israel and to the entire region. - -Meanwhile, every single day, you have rampant incitement and children being taught to hate Israel and hate the Jews. When you live in a society where the firefighters are the heroes, little kids want to be firefighters. - -When you live in a society where athletes and movie stars are heroes, little kids want to be athletes and movie stars. In Palestinian society, the heroes are those who murder Jews - we can't let this continue. You cannot achieve peace if terrorists are treated as martyrs. Glorifying terrorists is a tremendous barrier to peace. - -In Palestinian textbooks and mosques, you’ve got a culture of hatred that has been fermenting there for years, and if we want to achieve peace, they’ve got to end this indoctrination of hatred. There is no moral equivalency. Israel does not name public squares after terrorists. Israel does not pay its children to stab random Palestinians. - -You see, what President Obama gets wrong about deal making is that he constantly applies pressure to our friends and rewards our enemies. That pattern, practiced by the President and his administration, including former Secretary of State Hillary Clinton, has repeated itself over and over and has done nothing but embolden those who hate America. We saw that with releasing $150 billion to Iran in the hope that they would magically join the world community - it's the same with Israel and Palestine. - -President Obama thinks that applying pressure to Israel will force the issue, but it's precisely the opposite. Already, half the population of Palestine has been taken over by the Palestinian ISIS in Hamas, and the other half refuses to confront the first half, so it’s a very difficult situation, but when the United States stands with Israel, the chances of peace actually rise. That's what will happen when I’m president.
- -We will move the American embassy to the eternal capital of the Jewish people, Jerusalem - and we will send a clear signal that there is no daylight between America and our most reliable ally, the state of Israel. - -The Palestinians must come to the table knowing that the bond between the United States and Israel is unbreakable. They must come to the table willing and able to stop the terror being committed on a daily basis against Israel, and they must come to the table willing to accept that Israel is a Jewish State and it will forever exist as a Jewish State. - -Thank you very much, it's been a great honor to be with you. - -It is my great honor to receive the endorsement from a leader as highly respected as Attorney General Pam Bondi. I love the people of Florida, where, over the years, I have invested my time and hundreds of millions of dollars and employed thousands of people. Pam is one of the many individuals I have formed a great relationship with and I am very proud to receive her support. - -Ben is one of the truly great people I know. It is my honor to receive his endorsement and enthusiasm behind this incredible movement we are all building together. With Ben’s help, we will continue to grow the Republican Party by bringing new people into the process to ensure we defeat Hillary Clinton in November. - -Record rates of immigration have produced lower wages and higher unemployment for U.S. workers. Pew polling shows 83 percent of all voters - Democrats, Republicans and Independents - think immigration should be frozen or reduced. The biggest beneficiaries of allowing fewer foreign workers into our country would be minority workers, including all immigrants now living here, who are competing for jobs, benefits and community resources against record waves of foreign workers. Limiting job competition would reopen pathways to middle-class stability and shrink welfare rolls. In addition, it would relieve the overcrowding in our schools and hospitals that afflicts our poorest communities. Yet, Senators Cruz and Rubio have led the charge for even higher immigration rates - a policy supported by only 7 percent of the Republican electorate. When I am President, we will listen to the people - not the special interests - and get immigration numbers under control, as the voters have demanded. - -Lightweight Senator Marco Rubio is a dishonest person. He has cheated with credit cards, and does favors for lobbyists. In my opinion, he is a total crook and I am doing the people of Florida a great favor by further exposing him. In addition to everything else, he is an absentee Senator with one of the worst voting records in the history of the United States Senate, instead preferring to spend his time begging for campaign contributions. He takes his orders from the Republican establishment and Super PACs who are spending millions of dollars to keep their puppet alive and well so he will continue to do what they say. Former Prosecutor and now Governor of New Jersey, Chris Christie, exposed him on the debate stage for being what he is - a choke artist. We are now going a step further. - -Megyn Kelly asked about highly-skilled immigration. The H-1B program is neither high-skilled nor immigration: these are temporary foreign workers, imported from abroad, for the explicit purpose of substituting for American workers at lower pay.
I remain totally committed to eliminating rampant, widespread H-1B abuse and ending outrageous practices such as those that occurred at Disney in Florida when Americans were forced to train their foreign replacements. I will end forever the use of the H-1B as a cheap labor program, and institute an absolute requirement to hire American workers first for every visa and immigration program. No exceptions. - -It is an honor to have Jeff as a member of the team. I have such great respect for him and I look forward to working with him on the issues most important to Americans. - -Thanks to the efforts of these talented staff members and the support of millions who understand my vision to make our country better and stronger than ever before, we have had definitive victories in most of the early state elections and on Super Tuesday. As we look forward to more primary election victories, we are expanding their roles within the campaign. I am proud to have assembled a team of staff and supporters who are loyal only to the American people, not special interests, and truly want to Make America Great Again! - -It is truly an honor to receive the endorsement of two individuals I hold in the highest regard. They are smart, successful and the kind of business leaders our country needs to help negotiate trade deals, create jobs and spur economic development. I am proud to have their support and the support of other business leaders, including Carl Icahn. - -I am proud to receive the endorsement of such an iconic brand and a quality person such as Brian. Brian has a wonderful family and is an incredibly successful business person. I have great respect for Brian and I am grateful for his support and that of Bill Elliott, one of the best drivers in history, as well as active stock car racers including his son Chase Elliott, Ryan Newman and David Lee Regan. - -Trump University has a 98% approval rating and an “A” rating from the Better Business Bureau. New York Attorney General Eric Schneiderman continues to waste taxpayer money trying to smear me, but the fact is that the overwhelming majority of students had a great experience. It’s a minor civil case I have not settled out of principle. Lightweight Marco Rubio is grasping at straws and produced terrible ads featuring three people who all provided written statements praising the program. I demand an immediate retraction of these false and libelous ads. It just shows how low a failing campaign will go to help its failing candidate. - -I am deeply honored to have the endorsement of Senator Jeff Sessions, leader of congressional conservatives. He has been called the Senate's indispensable man and the gold standard. He led the fight against the Gang of Eight, against Obama's trade deal, against Obama's judges, and for American sovereignty. He has stood up to special interests as few have. There is no more respected man in Congress, and we are closely aligned on many issues, including trade and illegal immigration, and I am proud to consider Jeff Sessions an advisor, friend and ally. - -I am truly honored to have the support of these American heroes, the best of their generation. The American people can know with certainty that I will always place their interests above all else. I am the most militaristic person and it is so important to me to strengthen our military and protect American families and freedoms. - -I love the state of Arizona and have received incredible support throughout the state.
I am leading in all the polls and we have had amazing events with tremendous crowds. I am honored to receive this endorsement from Governor Brewer. - -I am proud to receive Governor LePage’s endorsement. He will be a great asset as we continue to campaign across the country, and I’m grateful for his support. - -It is my great honor to receive the endorsement of the Governor. We have had a wonderful relationship for many years. He is a solid person that I have tremendous respect for. I am really proud to receive the support of the Governor and his family. - -I have great respect for Governor Mike Huckabee and we have a mutual admiration for our wonderful families. It is great to have his daughter, Sarah, join the campaign. - -If and when the Vatican is attacked by ISIS, which, as everyone knows, is ISIS’s ultimate trophy, I can promise you that the Pope would have only wished and prayed that Donald Trump would have been President because this would not have happened. ISIS would have been eradicated, unlike what is happening now with our all talk, no action politicians. - -The Mexican government and its leadership have made many disparaging remarks about me to the Pope, because they want to continue to rip off the United States, both on trade and at the border, and they understand I am totally wise to them. The Pope only heard one side of the story - he didn’t see the crime, the drug trafficking and the negative economic impact the current policies have on the United States. He doesn’t see how Mexican leadership is outsmarting President Obama and our leadership in every aspect of negotiation. - -For a religious leader to question a person’s faith is disgraceful. I am proud to be a Christian, and as President I will not allow Christianity to be consistently attacked and weakened, unlike what is happening now, with our current President. No leader, especially a religious leader, should have the right to question another man’s religion or faith. They are using the Pope as a pawn and they should be ashamed of themselves for doing so, especially when so many lives are involved and when illegal immigration is so rampant. - -Despite Senator Ted Cruz attempting to smear me and totally lie about my beliefs and positions on almost all of the issues, I am a conservative person and I believe in conservative values. Like Ronald Reagan, on many issues, I have evolved. I am pro-life and have been for a long time. - -Let me be clear—I am pro-life. I support that position with exceptions allowed for rape, incest or the life of the mother being at risk. I did not always hold this position, but I had a significant personal experience that brought the precious gift of life into perspective for me. My story is well documented, so I will not retell it here. However, what I will do with the remaining space is express my feelings about life, and the culture of life, as we approach the 43rd anniversary of Roe v. Wade. - -I build things. There is a process involved in building things. We tap into a lot of disciplines with engineering being one of the most important. The rules for putting structures together are as strict as the rules of physics. These rules have stood the test of time and have become the path to putting together structures that endure and are beautiful. America, when it is at its best, follows a set of rules that have worked since our founding.
One of those rules is that we, as Americans, revere life and have done so since our Founders made it the first, and most important, of our “unalienable” rights. - -Over time, our culture of life in this country has started sliding toward a culture of death. Perhaps the most significant piece of evidence to support this assertion is that since Roe v. Wade was decided by the Supreme Court 43 years ago, over 50 million Americans never had the chance to enjoy the opportunities offered by this country. They never had the chance to become doctors, musicians, farmers, teachers, husbands, fathers, sons or daughters. They never had the chance to enrich the culture of this nation or to bring their skills, lives, loves or passions into the fabric of this country. They are missing, and they are missed. - -The Supreme Court in 1973 based its decision on imagined rights and liberties that are nowhere to be found in the Constitution. Even if we take the court at its word, that abortion is a matter of privacy, we should then extend the argument to the logical conclusion that private funds should subsidize this choice rather than the half billion dollars given to abortion providers every year by Congress. Public funding of abortion providers is an insult to people of conscience and an affront to good governance. - -If using taxpayer money to facilitate our slide to a culture of death was not enough, the 1973 decision became a landmark decision demonstrating the utter contempt the court had for federalism and the 10th Amendment. Roe v. Wade gave the court an excuse to dismantle the decisions of state legislatures and the votes of the people. This is a pattern that the court has repeated over and over again since that decision. Perhaps Roe v. Wade became yet another instance of disconnect between the people and their government. - -We are in the middle of a presidential political cycle and votes will be cast in just days. The citizens of this nation will have the chance to vote for candidates who are aligned with their individual worldviews. It is my hope that they will choose the builder, the man who has the ability to imagine the greatness of this nation. The next President must follow those principles that work best and that reinforce the reverence Americans hold for life. A culture of life is too important to let slip away for convenience or political correctness. It is by preserving our culture of life that we will Make America Great Again. - -Ted Cruz is a totally unstable individual. He is the single biggest liar I’ve ever come across, in politics or otherwise, and I have seen some of the best of them. His statements are totally untrue and completely outrageous. It is hard to believe a person who proclaims to be a Christian could be so dishonest and lie so much. - -Cruz said I would be appointing a liberal judge, when in fact I will appoint a great conservative, and I am the only candidate who has gone so far, at the debate, as to suggest two individuals I feel would best represent the conservative values we need to protect: William “Bill” Pryor Jr. and Diane Sykes. - -Cruz says I am pro-choice, when in fact I am staunchly pro-life and have been for a long time. Like Ronald Reagan, on many issues, I have evolved. - -Cruz says I am in favor of ObamaCare, when in fact I have spoken about repealing and replacing this disaster of a system at every speech throughout my campaign and since its inception.
Meanwhile, Cruz was responsible for getting Bush to put in the judge who twice failed to vote against ObamaCare. - -Cruz says I will try to take away your Second Amendment rights, when I am one of the strongest proponents of the right to bear arms and I say so in every speech that I have made for years. I am a proud member of the NRA and so are my sons. - -Cruz has become unhinged and is lying in the hope that his statements will go unchecked until after the election so he can save his failing campaign. - -In Iowa, Cruz told thousands of Ben Carson voters that Dr. Carson had left the race and to instead vote for Ted Cruz. He apologized when the race was over. Likewise, he sent a fraudulent voter violation form to Iowa voters. If Ted is going to continue to lie with such desperation, I have no choice but to fight back. - -One of the ways I can fight back is to bring a lawsuit against him relative to the fact that he was born in Canada and therefore cannot be President. If he doesn’t take down his false ads and retract his lies, I will do so immediately. Additionally, the RNC should intervene, and if they don’t, they are in default of their pledge to me. - -I am the strongest on the borders and I will build a wall, and it will be a real wall. I am strongest on illegal immigration, strongest on ISIS, strongest on the military and I will take care of our Vets. I will end Common Core and preserve the Second Amendment. I will renegotiate our trade deals and bring our jobs back to our country. I am the only person who will Make America Great Again. - -I would like to offer my sincerest condolences to the Scalia family after the passing of Justice Scalia. Justice Scalia was a remarkable person and a brilliant Supreme Court Justice, one of the best of all time. His career was defined by his reverence for the Constitution and his legacy of protecting Americans’ most cherished freedoms. He was a Justice who did not believe in legislating from the bench, and he is a person whom I held in the highest regard; I will always greatly respect his intelligence and conviction to uphold the Constitution of our country. My thoughts and prayers are with his family during this time. - -The RNC, which is probably not on my side, just illegally put out a fundraising notice saying 'Trump wants you to contribute to the RNC.' The RNC does not treat me well and then they use my name, without my knowledge, to raise money for themselves. At my insistence, they have withdrawn their request. I am self-funding my campaign and this totally unauthorized notice is yet another example of deceptive Washington tricks used to take advantage of the voters and get money from the hard-working people the politicians have failed. I will not stand for it and neither should you. - -People wouldn’t be talking about illegal immigration had I not brought it up when I announced I was running for President. It is a massive problem in our country and now everybody agrees with me --- bad for crime and the economy. A wonderful young man, Jamiel Shaw Jr., whose father has become a friend of mine, was shot in the face for no reason by an illegal immigrant. He was getting ready to go to college on a football scholarship and his sole fault was walking home to see his father. Because of the relationship I have established with his father, Jamiel’s death is very personal to me. We must stop illegal immigration. - -I am proud to be endorsed by such accomplished and well-respected military experts.
Additionally, to be endorsed by a Gold Star mother is such a wonderful honor. Homeland security and strengthening our military are two of the most important issues and cornerstones of my campaign. Jim and Gary’s support is so important to me, Susan is an incredible person, and with their help we will win Florida and Make America Great Again! - -It is my great honor to receive these coveted and influential endorsements from tremendous people in the state of Georgia. I have visited many times and had great crowds and poll numbers. I look forward to being in Georgia again soon and working with each of these local leaders to Make America Great Again. - -I am delighted Bill Stern has accepted a leadership role in my campaign. He is a successful real estate developer, and for eight years he served as Chair of the South Carolina Ports Authority, which oversees one of the most important commercial ports in the world. I look forward to working closely with Bill --- with his support we will Make America Great Again. - -This is a movement. People believe in my vision to make our country better and stronger than ever before, and I am so honored to have their support. I am going to make you so proud--- we will take our country back and Make America Great Again! - -Our Veterans have been treated like third-class citizens and it is my great honor to support them with this $1 million contribution – they are truly incredible people. We are going to strengthen our military, take care of our Vets and Make America Great Again. - -We have such tremendous support all over the country; the people of Georgia have been amazing. We are leading in all the polls, with 33% in the most recent Georgia survey. The support from this state is so important to me --- with their help we will Make America Great Again! - -Lt. Governor McMaster has a distinguished record of service to the people of South Carolina and the country, and his endorsement is a great honor. I’m proud to have Henry and his wife, Peggy, join the team. The people of South Carolina are amazing and I continue to lead all state polls by wide, double-digit margins. - -I have great respect for Sheriff Arpaio. We must restore law and order on the border and respect the men and women of our police forces. I thank him for his support of my policies and candidacy for President. - -It is truly an honor to receive Jerry’s endorsement. Not only is he a high quality person, with a wonderful family, whom I have great respect for – I also consider him a very good friend and his support means so much to me. - -It was a great honor to be introduced by the legendary Jerry Falwell Jr. He has built a tremendous institution and is a really terrific person with a beautiful family. We have always had a wonderful relationship and I am proud to share his kind words with the people of Iowa and South Carolina. - -Ted Cruz is a total hypocrite and, until recently, a Canadian citizen who may not even have a legal right to run for President. He didn’t disclose loans, pretending he’s Robin Hood, when he’s just another all talk, no action politician. Had I not brought up the subject of illegal immigration, an issue which Ted Cruz is very weak on, nobody would even be talking about it. I will build a great wall, and Mexico will pay for it. - -I am truly honored to receive Willie’s endorsement. He is a great person, has had tremendous success and a really terrific family. He believes in my message and knows that I am the only one who will Make America Great Again!
- -Jeff is a terrific guy. He has been supportive of my campaign since the very beginning, and it is an honor to receive the endorsement of such an intelligent, high quality individual, committed to my vision to Make America Great Again! - -I am greatly honored to receive Sarah’s endorsement. She is a friend, and a high quality person whom I have great respect for. I am proud to have her support. - -I am proud to announce our Leadership Team in Louisiana, where we have tremendous grassroots support and a great team in place. I look forward to visiting soon and working with these individuals, whose support is so important, to share my vision to Make America Great Again! - -My family has been so supportive of me and my candidacy and I am so proud of Ivanka. She is a terrific person, a devoted mother and an exceptional entrepreneur. Ivanka is doing a fantastic job running my company, alongside her brothers, and is building many of my biggest jobs. It is so important to have the support of all of my children and I’m really proud of this ad. - -It is such an honor to have the support of so many grassroots leaders across Virginia, where we are leading in the polls by substantial margins. The people of Virginia are ready to Make America Great Again! - -I have tremendous crowds, we are leading all the polls and there are so many people who want to see our country be greater than ever before. My life has been about winning and Iowa and New Hampshire are so important to me. This is where we will begin to Make America Great Again! - -It is my great honor to receive the endorsement of the highly respected Sue Everhart in addition to so many other terrific people in Georgia. We have had a tremendous response across the state and I look forward to visiting again soon to share my message to Make America Great Again! - -It is my great honor to receive endorsements from each of these incredible people. Their support for my message and endorsement of my candidacy for President of the United States means so much to me, and with their help, and the help of so many great people in Florida and all over the country, we will Make America Great Again! - -It is my pleasure to have Joe serve as our Honorary Chairman in the great state of Rhode Island. Securing our border and solving the issue of illegal immigration have become cornerstones of my campaign, and I have great respect for Joe’s passion with regard to solving this problem. With the help of Joe and so many other supporters in Rhode Island, we will Make America Great Again. - -I am honored to receive these important endorsements from such accomplished businessmen in the great state of New Hampshire. Their support, and the support of so many incredible people all over the state, means so much to me. It is time to Make America Great Again, and with your help, on February 9th, we will do just that. - -It is my honor to be on the ballot in West Virginia. We have received tremendous support from so many people in the Mountain State and are leading in all the polls by double digits. In fact, my support in West Virginia is higher than in any other state in the nation. I look forward to visiting soon and sharing more about my message and my plans to Make America Great Again! - -The people of Georgia are incredible and the response to my message has been amazing. We have built a great team and with their help, we will Make America Great Again!
- -We already have tremendous grassroots support across the state of Louisiana, and with the help of Ryan and our Leadership Team we will continue to build a top-tier operation. I look forward to visiting soon and sharing my vision to Make America Great Again! - -I love the people of Texas and I am proud to have such a strong team in place in this important state as we work together to Make America Great Again! - -I love the people of New Hampshire and have made many great friends over the years. We have a great team in place, the biggest crowds and continue to lead all the polls. Andrew is a great addition and with his help we will win New Hampshire and Make America Great Again! - -It is my honor to be on the ballot in Illinois and to file a full slate of delegates. We have received tremendous support from so many people in Illinois; we are leading in all the polls and have had the largest crowds. With the support of the people of Illinois, and around the country, we will Make America Great Again. - -I am leading in every poll by wide, double-digit margins. We have tremendous crowds, incredible support from all over the country and I am $35 million under budget. We have spent the least amount of money and have the best results and this is the kind of thinking the country needs. I am very proud of this ad. I don’t know if I need it, but I don’t want to take any chances because if I win we are going to Make America Great Again. - -I love Texas and have visited many times. We have had tremendous crowds, are winning all the polls and have met some terrific people who truly want to Make America Great Again. With their support, we will make the country better than ever before. - -I am pleased to welcome Ryan to the team. We already have tremendous grassroots support across the state of Louisiana and I look forward to visiting soon and sharing my vision to Make America Great Again! - -It is a great honor to have such incredible grassroots support across the state of New Hampshire. We have tremendous poll results and crowds and I look forward to visiting many times over the next several weeks. With the support of each Town Chair and so many others, we will Make America Great Again! - -I am excited to return to Michigan, where we had an incredible rally attended by over 5,000 people in August. Scott will be a great asset to our team as we continue to build infrastructure beyond the early primary states and share my vision to Make America Great Again. - -If anyone needed more evidence of why the American people are suffering at the hands of their own government, look no further than the budget deal announced by Speaker Ryan. In order to avoid a government shutdown, a cowardly threat from an incompetent President, the elected Republicans in Congress threw in the towel and showed absolutely no budget discipline. - -The American people will have to absorb higher deficits, greater debt, less economic liberty and more corporate welfare. Congress cannot seem to help itself in bending to every whim of special interests. How can they face their constituents when they continue to burden our children and grandchildren with debts they will never be able to repay? Our government is failing us, so we must do something about it. Who knows how bad things will be when the next administration comes in and has to pick up the pieces? - -The only special interest not being served by our government is the American people.
It is time we imposed budget discipline by holding the line on spending, getting rid of waste, fraud and abuse, and taking on our debt. To do these things, we need a President who can lead the fight to hold Congress and the rest of government accountable. Together, we can Make America Great Again. - -I am proud to share this report, written by the highly respected Dr. Harold Bornstein of Lenox Hill Hospital, stating that I am in excellent health. I am fortunate to have been blessed with great genes --- both of my parents had very long and productive lives. I have truly enjoyed working on the campaign trail with one objective in mind: to Make America Great Again! People have been impressed by my stamina, but to me it has been easy because I am truly doing something that I love. Our country will soon be better and stronger than ever before. - -I am incredibly honored to receive this endorsement. My entire life has been spent defending the police and the incredible job they do. Especially today, they will play an increasingly vital part in making our nation safe. With their support and hard work, together we will Make America Great Again. - -I am so proud of this tremendous showing of support in New Hampshire and Tennessee. We are winning all the polls by huge, double-digit margins and have the biggest crowds. With support from each of these incredible delegates and supporters across both states, we will Make America Great Again. - -It is a great honor to receive the endorsement of so many strong, accomplished women in the great state of Texas. Their leadership will help us win this important state on March 1st, 2016. With their support and the support of so many other women across the country, we are going to Make America Great Again. - -I am proud to have the support of so many community leaders in Oklahoma and it is an honor to be on the ballot. I visited in September and was overwhelmed by the enthusiasm at the State Fair, where over 20,000 people attended my speech. With the support of those people and my team, we will Make America Great Again! - -Without looking at the various polling data, it is obvious to anybody the hatred is beyond comprehension. Where this hatred comes from and why, we will have to determine. Until we are able to determine and understand this problem and the dangerous threat it poses, our country cannot be the victim of horrendous attacks by people who believe only in Jihad, and have no sense of reason or respect for human life. If I win the election for President, we are going to Make America Great Again. - -I love Massachusetts and have many great friends there, with a recent poll showing me in first place with 48%. The support has been incredible and I look forward to being there again soon and visiting Mississippi for the first time. With support from these two important states, we will Make America Great Again! - -It is my honor to be on the ballot in Georgia, Louisiana, Missouri, Tennessee and Utah. I have visited Tennessee and Georgia many times and we have tremendous crowds and support from the great people there, and I look forward to campaigning in Louisiana, Missouri, and Utah soon. We are leading in all the polls and with your help we will Make America Great Again! - -This was an extraordinary meeting of religious leaders, many of whom have decided to endorse me--- such a great honor! I look forward to future meetings with the Coalition of African American Ministers.
- -It is my honor to be on the ballot in Arizona, the Commonwealth of the Northern Mariana Islands, the District of Columbia, Vermont and the U.S. Virgin Islands. We are going to make this country better than it has ever been before by building a wall to secure our southern border, creating jobs, strengthening our military, taking care of our Vets and protecting our country. We will Make America Great Again! - -It is a great honor to receive Kat's endorsement. She is a genuine patriot who has worked tirelessly in service to our country and her fellow Veterans. With her support and the support of so many other Veterans across the country, we are going to Make America Great Again. - -Serge Kovaleski must think a lot of himself if he thinks I remember him from decades ago – if I ever met him at all, which I doubt I did. He should stop using his disability to grandstand and get back to reporting for a paper that is rapidly going down the tubes. - -I am proud to announce these fantastic additions to my Virginia leadership team. We have had tremendous crowds and enthusiasm from supporters across the state. I will do a great job for the people of Virginia, and with their help we will Make America Great Again! - -I have created thousands of jobs and own some of the most iconic assets in the state. I love the people of Florida and I am proud to have such overwhelming support and a great staff in place. I look forward to visiting often and working with my team to share my vision to Make America Great Again. - -It is great to announce the additions of Earl and Taylor, who will be valuable members of our operation in North Carolina, where I have been leading in every poll for many months. I look forward to being back in North Carolina soon as I continue to share my vision to Make America Great Again! - -It is an honor to be on the ballot in the state of Virginia, where we have a great staff and team of volunteers in place. I am pleased to announce the Southwest Virginia Leadership Team as we continue to be successful in Virginia and across the country. - -We must address Islamic terrorism and protect our country first. I will lead by example, as I always have, by vowing to defeat ISIS, stop illegal immigration and the Syrian refugee program, secure our border and bring real change to Washington, D.C. I am the only one who can Make America Great Again. - -November is a great month for Americans. In New York City, we get to look out on Central Park and see the leaves change as we move from the beauty of Fall to a magnificent city cloaked in the chill and briskness of Winter. What we Americans often gloss over is that November is a time for celebrating the very reason we enjoy the freedom, opportunity and prosperity found nowhere else in the world. The reason we are able to live like we do is the courage, dedication and sacrifice of those who serve in the armed forces of the United States and of those who have given the last full measure. - -This November, we should pay particular attention to the world situation. All over the world, young men and women who have volunteered to serve this nation are standing watch to protect us from the evil that is focused on taking away all that we value. We have turmoil in the Middle East, unrest in Europe, aggressive expansion in the Pacific Rim and stateless terrorism that threatens every aspect of freedom all across the globe. At no time in history have things been so precarious.
Yet, we are able to find the best among us to step forward and take on the traditions and values of generations past, humbly putting on the uniform of this nation. They do so to support and defend the Constitution, not the President, the Congress or even the flag. They swear to defend an idea and an ideal. As is so subtly stated in the Preamble of our Founding Document, they take on the task "to secure the Blessings of Liberty to ourselves and our Posterity." - -On November 10th, the Marine Corps celebrated its 240th birthday. The Marines were, are and will remain unique in the world, a force that is first to fight and last to come home. The Marines have maintained the highest standards, and "once a Marine, always a Marine" captures the sense of duty and honor that permeates every pore of those who have worn the Eagle, Globe and Anchor. - -Today, we honor all veterans who are serving and have served in the armed forces. This day has special meaning to so many. Over the years, I have hired thousands of veterans and have found them to be among the very best. They bring no attention to themselves except in their extraordinary professionalism and dedication to whatever they choose to do. Quietly, humbly, and without fanfare, they go about their daily lives being great citizens, good moms and dads, and loving, devoted companions. Bearing up under the terrible damage of war, so many carry on with dignity and grace. We should be so proud of them and we should honor them every day. - -November is also the month we celebrate the Military Family. Too many of us have never had to be away from family and friends on holidays, birthdays and anniversaries. We have never been asked to risk life and limb in places we cannot even find on a map. We go about our daily lives enjoying freedom, most of the time without ever thinking about what it is like to have a loved one in harm's way. The sacrifice and strength of military families, those who also serve, is so important to the morale and welfare of those so far from home. - -I have lived a full life and have benefited more than most from the opportunities found in this country. I have a loving, healthy family and I am running for President -- only in America. To be where I am, I thank God every day for those who stand on the ramparts of liberty with courage, strength, humility and dignity. So, this November, let's celebrate our veterans and their families and keep them in our prayers. Because of these dedicated Americans, we will be able to Make America Great Again! - -I greatly appreciate the professional service shown by the representatives of the states of Nevada and Kentucky. It is my great honor to be on the ballot in these important states and I look forward to Making America Great Again. - -I am self-funding my campaign and therefore I will not be controlled by the donors, special interests and lobbyists who have corrupted our politics and politicians for far too long. I have disavowed all Super PACs, requested the return of all donations made to said PACs, and I am calling on all Presidential candidates to do the same. The character of our country is only as strong as our leaders---the only special interest I am beholden to is the American people and together we will Make America Great Again! - -I am grateful to have the endorsement of Senator Zaun, who understands what is at stake in this election. Our country faces tough challenges. But America can be bigger and better than ever before.
It will require leadership and a commitment to do what is right even when it may not be popular among the elites in Washington, D.C., who have created more problems than they have solved. With the support of Senator Zaun and so many other conservatives across the country, we will Make America Great Again. - -I am grateful to have the support of so many leaders in Oklahoma. These individuals share my concern for our country and know that our challenges are not going to be solved by career politicians in Washington, D.C. With their support and the support of so many other conservatives across the country, we are going to Make America Great Again. - -My message to Make America Great Again has been so overwhelmingly well received, drawing record crowds and putting me in first place in all the polls. America is crippled right now and Washington, D.C. is broken. Our politicians are not capable or competent enough to fix our problems. We will continue to build out a substantial campaign team that allows us to take our message across the country and continue to share my vision to Make America Great Again. - -We are in this to win it. These staff additions are the continuation of our plan to have a strategic and significant presence across the country. I am pleased that my vision to Make America Great Again has generated so much support and such a positive response that we are leading in all the polls. By adding to our team in these critical states, we will be able to build on the tremendous support we have received and share our message with even more voters in these states. I look forward to being in these states even more as I continue to share my ideas about how to put America back on top! - -It is great to welcome Seth and Darren as we continue to build our team around the country and organize in early states. We have received great support in Georgia and Tennessee, where my message to create jobs, secure our border, strengthen our military and take care of our vets resonates strongly with voters who are ready to Make America Great Again. - -Have you seen today’s poll? What about this Fox poll? We are in first place! All the polls confirm it! We continue to prove we are ready to take our country back and, more than any other candidate, we have the support of the people. - -Yesterday I traveled to Laredo, Texas, to visit the border and meet with local law enforcement officials. We discussed the need for stronger border security and the importance of continuing the national discussion on this issue. Strengthening our border will most benefit legal immigrants and working-class Americans. Continuing to allow illegal immigrants to cross a weak border is hurting America’s economy. - -Many politicians in Washington talk about the border but have no idea how dangerous it really is. I went to the border to make sure the American people have the opportunity to see the truth. - -Earlier this week, I hosted my South Carolina campaign kickoff. Hundreds of veterans came to watch my speech. I announced the launch of a new hotline, 855-VETS-352, and an email address, veterans@donaldtrump.com, for Veterans to share their stories about the need to reform our Veterans Administration. The way veterans are being treated in our country is a disgrace and I am the candidate who will fix it. - -I made a campaign stop in Nevada to speak at Freedom Fest.
In front of 2,000 energized conservatives, I made a pledge that I will remake today: as President, I will restore the American free market and ensure that companies are incentivized to bring factories and jobs back to American soil. - -I also traveled to Phoenix to host a rally. 15,000 people showed up. It was an incredible testament to the deep commitment of the American people to restoring American greatness. We have stayed silent for far too long, watching bad policies wreak havoc on our economy and our nation. I will take care of our veterans, rebuild our military, and secure our borders. - -Most politicians would have backed down after being relentlessly targeted by the media. I will never stop speaking out on behalf of all Americans. - -We have been bringing our message across America over the past several weeks. In California, I met with the families of six victims who were killed by illegal aliens. Each family shared their story. It was a heartbreaking reminder of why we must secure our border – to make sure no more Americans are senselessly killed by illegal aliens. - -Our unprotected border is a threat to the stability of our economy, the personal safety of Americans, and our national security. Too many American lives have been lost because our leaders refuse to secure the border. - -I also filed my FEC financial disclosure form. I have spent my life building businesses in large American cities and small American towns. I will use my experience to put Americans back to work – and the polls prove that the people know I am the only one who can do it. - -This campaign is about changing Washington. We need to once again have a government that is of the people, for the people and by the people. - -We will Make America Great Again! - -Our Veterans are incredibly important and I’m proud to have the support of this coalition, especially in New Hampshire, where, if I am elected, I will build a full-service, first-class VA hospital to ensure all New Hampshire Veterans receive the care they deserve. I love all Veterans and will help them finally lead the kind of lives that they should be leading. - -First, people said I would never run, and I did. Then they said I would never file my statement of candidacy with the FEC, and I did. Next, they said I would never file my personal financial disclosure forms. I filed them early despite the fact that I am allowed two 45-day extensions. Now I have surged in the polls and am fighting to Make America Great Again. I look forward to the challenge of winning the presidency and doing a fantastic job for our country. I will make the United States rich and strong and respected again, but also a country with a 'big heart' toward the care of our people. - -The Obama Administration’s agreement with Iran is very dangerous. Iran developing a nuclear weapon, either through uranium or nuclear fuel, and defying the world is still a very real possibility. The inspections will not be followed, and Iran will no longer have any sanctions. Iran gets everything and loses nothing. - -Every promise the Obama Administration made at the beginning of negotiations, including the vow to get our great American prisoners returned to the U.S., has been broken. This is a bad deal that sets a dangerous precedent. - -This deal sets off a nuclear arms race in the Middle East, which is the most unstable region in the world. It is a horrible and perhaps catastrophic event for Israel.
- -Furthermore, we should have kept the billions of dollars we have agreed to pay them. Any great dealmaker would know this is a perfect example of “tapping along,” and because they have been unchecked for so long throughout this extremely lengthy process, I guarantee they are much closer to producing a nuclear weapon than they were at the start of negotiations. -The fact is, the US has incompetent leaders and even more incompetent negotiators. We must do better for America and the world. We have to Make America Great Again. - -Failing candidate Hillary Clinton, who is desperately trying to hold on to her lead in the Democratic primary against Bernie Sanders, is knowingly putting out lies about my stance on illegal immigration. I said “Mexico is sending”--- I’m not knocking immigration or immigrants, but rather am very critical of the country of Mexico for sending us people that they don’t want. Likewise, I am very critical of illegal immigration and the tremendous problems, including crime, that it causes. - -She is desperate, she is sad, and she is obviously very nervous when she has to revert to issues that have already been settled given the absolute accuracy of my statement. She speaks about “my tone” and that’s the problem with our country’s leaders. They are more worried about tone than results! It’s not about being nice--- it’s about being competent. - -Hillary should spend more time producing her illegally hidden emails and less time trying to obfuscate a statement by me that is totally clear and obviously very much accepted by the public as true. I am honored, however, that she is attacking me, instead of Jeb Bush. Obviously she knows that JEB is no longer her real competition. The last person she wants to face is Donald Trump. - -Stephen is a great addition to our New Hampshire leadership team, and I am proud to have his endorsement for the Republican presidential primary nomination. He understands what is at stake with our economy and will be an asset as we continue to solidify our position at the top of the field, both in New Hampshire and nationally. - -I am pleased to submit this filing to the Federal Election Commission, formalizing my campaign for President of the United States. I can rebuild the American Dream so that it is stronger, bigger and better than ever before. Together we will Make America Great Again! - -The tragic events that occurred on Wednesday evening should be our nation’s primary focus for the foreseeable future. - -This is a time for healing, not politics. - -I look forward to returning to South Carolina and continuing our discussion on how we can best move our country forward. Until that time, our prayers and deepest condolences are with the people of Charleston and the families of those who have been torn apart by this senseless act of violence and hate. - -Quite simply, it is time to bring real leadership to Washington. The fact is, the American Dream is dead -- but if I win, I will bring it back bigger and better and stronger than ever before. Together we will Make America Great Again! - -I have great respect for the people of New Hampshire --- people who work hard and love this country. We are always greeted by massive crowds and immense support, which is reflected in these results. If I run, and if I win, the people of New Hampshire will be very proud--- we will Make America Great Again! - -Are these the people you want negotiating for you? Our country is in big trouble. We are not utilizing our best.
It’s time we fix Washington with outsiders who actually know what they’re doing. It’s time we Make America Great Again! - -The American Dream is dead, but if I run, and if I win, we will bring it back stronger, bigger, and better than ever before! - -I’m proud to announce these individuals who share my passion for the country and understand that now is the time for executive leadership and real action. Together we can Make America Great Again! - -Yet again, the politicians are allowing our president to reinforce the lack of respect countries like China and Japan now have for the United States. They will devalue their currency, exploit our trade agreements, continue to destroy our economy and put Americans out of work. Politicians are all talk and no action. Instead of fast-tracking TPP, Congress should pass legislation that holds China and Japan accountable for currency manipulation. This would send a message to the world that there are consequences for cheating the United States. It’s time for action. It’s time to Make America Great Again! - -I’ve always enjoyed my time in New Hampshire and have great respect for the people of New Hampshire --- people who work hard and love this country. We are always greeted by huge crowds and overwhelming support, which is reflected in these results. I have a great love for our country, but it is a country that is in serious trouble. Americans deserve better than what they get from their politicians --- who are all talk and no action! I am the only one who can make America truly great again—I know how and the politicians don’t. - -I have a great love for our country, but it is a country that is in serious trouble. We have lost the respect of the entire world. Americans deserve better than what they get from their politicians --- who are all talk and no action! I have built a great company, created thousands of jobs and built a tremendous net worth with some of the finest and most prestigious assets in the world --- and very little debt! All Americans deserve the same opportunity. Our real unemployment rate is staggering, while our manufacturing base is eroding on a daily basis. We must rebuild our infrastructure, control our borders, support local control of education, greatly strengthen our military, care for our veterans and put Americans back to work! We must stop other countries from totally taking advantage of our representatives who are being out-negotiated at every turn. I am the only one who can make America truly great again! - -One of the biggest political events anywhere in the world is happening right now with the Republican Party. Millions and millions of people are going out to the polls and they're voting. They're voting out of enthusiasm. They're voting out of love. Some of these people, frankly, have never voted before - 50 years old, 60 years old, 70 years old - never voted before. -We're taking people from the Democrat Party. We're taking people as independents, and they're all coming out and the whole world is talking about it. It's very exciting. I think, frankly, the Republican establishment, or whatever you want to call it, should embrace what's happening. -We're having millions of extra people join. We are going to beat the Democrats. We are going to beat Hillary or whoever it may be. And we're going to beat them soundly. - -Because nobody knows the system better than me. I know the H1B. I know the H2B. Nobody knows it better than me. I'm a businessman. These are laws. These are regulations. These are rules.
We're allowed to do it. And frankly, because of the devaluations that other countries - the monetary devaluations that other countries are constantly doing and brilliantly doing against us, it's very, very hard for our companies in this country, in our country, to compete. -So I will take advantage of it; they're the laws. But I'm the one that knows how to change it. Nobody else on this dais knows how to change it like I do, believe me. - -I will. First of all, I think and I know the H1B very well. And it's something that I frankly use and I shouldn't be allowed to use it. We shouldn't have it. Very, very bad for workers. And second of all, I think it's very important to say, well, I'm a businessman and I have to do what I have to do. -When it's sitting there waiting for you, but it's very bad. It's very bad for business in terms of - and it's very bad for our workers and it's unfair for our workers. And we should end it. Very importantly, the Disney workers endorsed me, as you probably read. -And I got a full endorsement because they are the ones that said, and they had a news conference, and they said, he's the only one that's going to be able to fix it. Because it is a mess. I think for a period of a year to two years we have to look back and we have to see, just to answer the second part of your question, where we are, where we stand, what's going on. -We have to sort of take a strong, good, hard look and come up with plans that work. And we're rushing into things, and we're just - we're leading with the chin. - -We're leading with people that don't know what they are doing in terms of our leadership. I'd say a minimum of one year, maybe two years. - -Education through Washington, D.C. I don't want that. I want local education. I want the parents, and I want all of the teachers, and I want everybody to get together around a school and to make education great. -And it was very interesting, I was with Dr. Ben Carson today, who is endorsing me, by the way, tomorrow morning, and he is - -We were talking. We spoke for over an hour on education. And he has such a great handle on it. He wants competitive schools. He wants a lot of different things that are terrific, including charter schools, by the way, that the unions are fighting like crazy. But charter schools work and they work very well. -So there are a lot of things. But I'm going to have Ben very involved with education, something that's an expertise of his. - -You're right, Jake. But it has been taken over by the federal government. It was originally supposed to be that way. And certainly sounds better that way. But it has all been taken over now by the bureaucrats in Washington, and they are not interested in what's happening in Miami or in Florida, in many cases. -Now in some cases they would be. But in many cases they are more interested in their paycheck and the big bureaucracy than they are taking care of the children. - -Well, first of all, I want you to understand that the Democrats, and I've watched them very intensely, even though it's a very, very boring thing to watch, that the Democrats are doing nothing with Social Security. They're leaving it the way it is. In fact, they want to increase it. They want to actually give more. -And that's what we're up against. And whether we like it or not, that is what we're up against. 
-
-I will do everything within my power not to touch Social Security, to leave it the way it is; to make this country rich again; to bring back our jobs; to get rid of deficits; to get rid of waste, fraud and abuse, which is rampant in this country, rampant, totally rampant.
-
-And it's my absolute intention to leave Social Security the way it is. Not increase the age and to leave it as is.
-You have 22 years, you have a long time to go. It's not long in terms of what we're talking about, but it's still a long time to go, and I want to leave Social Security as is, I want to make our country rich again so we can afford it. I want to bring back our jobs, I want to do things that will make us, that will bring back GDP
-I mean, as an example, GDP was zero essentially for the last two quarters. If that ever happened in China, you would have had a depression like nobody's ever seen before. They go down to 7 percent, 8 percent, and it's a - it's a national tragedy. We're at zero, we're not doing anything.
-We've lost our jobs. We've lost everything. We're losing everything. Our jobs are gone, our businesses are being taken out of the country. I want to make America great again and I want to leave Social Security as is. We're going to get rid of waste, fraud, abuse and bring back business.
-
-Because they don't cover most of the subjects. We're the policemen of the world. We take care of the entire world. We're going to have a stronger military, much stronger. Our military is depleted. But we take care of Germany, we take care of Saudi Arabia, we take care of Japan, we take care of South Korea. We take - every time this maniac from North Korea does anything, we immediately send our ships. We get virtually nothing.
-We have 28,000 soldiers on the line, on the border between North and South Korea. We have so many places. Saudi Arabia was making a billion dollars a day, and we were getting virtually nothing to protect them. We are going to be in a different world. We're going to negotiate real deals now, and we're going to bring the wealth back to our country. We owe $19 trillion. We're going to bring wealth back to our country.
-
-Well, I don't know if he's saying that. Look, I'm just saying very simply we have a country that I've never seen anything like it. I've been going over budgets and looking at budgets. We don't bid things out. We don't bid out, as an example, the drug industry, pharmaceutical industry. They don't go out to bid. They just pay almost as if you walk into a drug store. That's what they're paying.
-And the reason is they have a fantastic lobby. They take care of all of the senators, the Congressmen. They have great power and they don't bid out. The military is never properly bid. When we go out to military bids, it's not properly bid. And the people that really sell us the product are oftentimes the product we don't want, only because that particular company has political juice, OK?
-I'm self-funding my campaign. Nobody is going to be taking care of me. I don't want anybody's money. I will tell you something. We're going to go out to bid in virtually every different facet of our government. We're going to save a fortune.
-
-Yes. If you look back to Iowa, Ted did change his view and his stance on ethanol quite a bit. He did and - at the end. Not full on, but he did change his view in the hopes of maybe doing well. And you know, I think everybody knows that. It was a front page story all over the place, and he did make a change.
-
-Well, that's fine.
First of all, Ted was in favor of amnesty. So there's no question about that. And Sheriff Joe Arpaio recently endorsed me and there's nobody tougher on the borders than Sheriff Joe. And Jeff Sessions, one of the most respected Senators in Washington, an incredible man, also endorsed me.
-And there's nobody that knows more about the borders than Senator Jeff Sessions. I would say this. We're all in this together. We're going to come up with solutions. We're going to find the answers to things. And so far I cannot believe how civil it's been up here.
-
-Well, first of all, I don't really think that. I think that I hold views that are very similar to many of the people. We are more inclusive. And if you look at the polls and if you look at the millions of people that have been pouring into the polls, it's, again, the biggest story.
-You look at all of these people that are coming in, something is happening. I am different in one primary respect, and that's trade. I feel that we have had horrible negotiators, horrible trade deals. The jobs in this country are disappearing, and especially the good jobs.
-You look at the recent jobs reports, which are really done so that presidents and politicians look good, because all of these people looking for jobs, when they give up, they go home, they give up, and they are considered statistically employed. So that's that.
-But I will say, trade deals are absolutely killing our country. The devaluations of their currencies by China and Japan and many, many other countries, and we don't do it because we don't play the game. And the only way we're going to be able to do it is we're going to have to do taxes unless they behave.
-If you don't tax certain products coming into this country from certain countries that are taking advantage of the United States and laughing at our stupidity, we're going to continue to lose businesses and we're going to continue to lose jobs.
-And if you look at the average worker over the last 12 years, their salary and their pay have gone down, not up. It has gone down. And I think that's why there has been such an outpouring of love to what I'm saying.
-
-The 45 percent tax is a threat. It was not a tax, it was a threat. It will be a tax if they don't behave. Take China as an example. I have many friends, great manufacturers, they want to go into China. They can't. China won't let them. We talk about free trade. It's not free trade, it's stupid trade.
-China dumps everything that they have over here. No tax, no nothing, no problems, no curfews (ph), no anything. We can't get into China. I have the best people, manufacturers, they can't get in. When they get in, they have to pay a tremendous tax.
-The 45 percent is a threat that if they don't behave, if they don't follow the rules and regulations so that we can have it equal on both sides, we will tax you. It doesn't have to be 45, it could be less. But it has to be something because our country and our trade and our deals and most importantly our jobs are going to hell.
-
-Jake, I have to say - honestly, it's just the opposite. What will happen if they don't behave, we will put on a tax of some amount, and it could be a large amount, and we will start building those factories and those plants. Instead of in China, we'll build them here. And people will buy products from here, rather than buying it through China where we're being ripped off. And we have a $505 billion trade deficit right now.
-So we'll build our factories here and we'll make our own products.
And that's the way it should be done. And the way we've been doing it for the last long period of time is our country - our country is in serious, serious trouble. It's a bubble and it's going to explode, believe me.
-
-I mean a lot of them. I mean a lot of them.
-
-Well, you know, I've been watching the debate today. And they're talking about radical Islamic terrorism or radical Islam. But I will tell you this. There's something going on that maybe you don't know about, maybe a lot of other people don't know about, but there's tremendous hatred. And I will stick with exactly what I said to Anderson Cooper.
-
-Marco talks about consequences. Well, we've had a lot of consequences, including airplanes flying into the World Trade Center, the Pentagon and could have been the White House. There have been a lot of problems.
-Now you can say what you want, and you can be politically correct if you want. I don't want to be so politically correct. I like to solve problems. We have a serious, serious problem of hate.
-
-There is tremendous hate. There is tremendous hate. Where large portions of a group of people, Islam, large portions want to use very, very harsh means. Let me go a step further. Women are treated horribly. You know that. You do know that. Women are treated horribly, and other things are happening that are very, very bad.
-
-Now I will say this, there is tremendous hatred. The question was asked, what do you think? I said, there is hatred. Now it would be very easy for me to say something differently. And everybody would say, oh, isn't that wonderful.
-
-We better solve the problem before it's too late.
-
-Let me go back to the other just for a second. In large mosques, all over the Middle East, you have people chanting "death to the USA." Now, that does not sound like a friendly act to me.
-As far as the families are concerned, and as far as the law is concerned, we have a law - this all started with your question on water boarding. We have a law that doesn't allow right now water boarding. They have no laws. They have no rules. They have no regulations. They chop off heads. They drown 40, 50, 60 people at a time in big steel cages, pull them up an hour later, everyone dead. And we're working on a different set of parameters.
-Now, we have to obey the laws. Have to obey the laws. But we have to expand those laws, because we have to be able to fight on at least somewhat of an equal footing or we will never ever knock out ISIS and all of the others that are so bad.
-We better expand our laws or we're being a bunch of suckers, and they are laughing at us. They are laughing at us, believe me.
-
-First of all, there's nobody on this stage that's more pro-Israel than I am. OK. There's nobody.
-
-I am pro-Israel.
-
-I was the grand marshal, not so long ago, of the Israeli Day Parade down 5th Avenue. I've made massive contributions to Israel. I have a lot of - I have tremendous love for Israel. I happen to have a son-in-law and a daughter that are Jewish, OK? And two grandchildren that are Jewish.
-
-But I will tell you, I think if we're going to ever negotiate a peace settlement, which every Israeli wants, and I've spoken to the toughest and the sharpest, they all want peace, I think it would be much more helpful is - I'm a negotiator. If I go in, I'll say I'm pro-Israel and I've told that to everybody and anybody that would listen.
-But I would like to at least have the other side think I'm somewhat neutral as to them, so that we can maybe get a deal done. Maybe we can get a deal.
I think it's probably the toughest negotiation of all time. But maybe we can get a deal done.
-
-And, by the way, just so you understand, as far as Iran, I would have never made that deal. I think it's maybe the worst deal I've ever seen. I think it's the worst deal I've ever seen negotiated. I will be so tough on them and ultimately that deal will be broken unless they behave better than they've ever behaved in their lives, which is probably unlikely. That deal will be broken.
-
-If I become president of the United States, one of the things that will be an absolute priority is number one, protection of Israel, but also seeing if a deal can be made, the toughest deal, the toughest negotiation there probably is of any kind no matter where you look, no matter how hard you look.
-
-We really have no choice. We have to knock out ISIS. We have to knock the hell out of them. We have to get rid of it. And then come back and rebuild our country, which is falling apart. We have no choice.
-
-I would listen to the generals, but I'm hearing numbers of 20,000 to 30,000. We have to knock them out fast. Look, we're not allowed to fight. We can't fight. We're not knocking out the oil because they don't want to create environmental pollution up in the air.
-I mean, these are things that nobody even believes. They think we're kidding. They didn't want to knock out the oil because of what it's going to do to the carbon footprint. We don't fight like we used to fight. We used to fight to win. Now we fight for no reason whatsoever. We don't even know what we're doing.
-
-So, the answer is we have to knock them out. We have to knock them out fast. And we have to get back home. And we have to rebuild our country which is falling apart.
-
-Well, I don't really agree with President Obama. I think I'm somewhere in the middle. What I want is I want a much better deal to be made because right now, Cuba is making - as usual with our country, we don't make good deals. We don't have our right people negotiating, we have people that don't have a clue.
-As an example, I heard recently where the threat was made that they want reparations for years of abuse by the United States, and nobody's talking about it and they'll end up signing a deal and then we'll get sued for $400 billion or $1 trillion.
-All that stuff has to be agreed to now. We don't want to get sued after the deal is made. So I don't agree with President Obama, I do agree something should be - should take place. After 50 years, it's enough time, folks. But we have to make a good deal and we have to get rid of all the litigation that's going to happen.
-This was just a little story but it was a big story to me because I said oh, here we go, we make a deal, then get sued for a tremendous amount of money for reparations. So I want to do something, but it's got to be done intelligently. We have to make a good deal.
-
-I would want to make a good deal, I would want to make a strong, solid, good deal because right now, everything is in Cuba's favor. Right now, everything, every single aspect of this deal is in Cuba's favor. It's the same way as the Iran deal.
-We never walked - we never - all we do is keep giving. We give and give and give.
-
-I would probably have the embassy closed until such time as a really good deal was made and struck by the United States.
-
-Well, if Ted was listening, he would have heard me say something very similar. I said we would not do the deal unless it was going to be a very good deal for us.
And I think I said it loud and I think I said it very clear. And I think after 50 years, and I have many friends, I own many properties in Miami, many, many, and I have many people that know and they feel exactly the way I do, make a deal, it would be great, but it's got to be a great deal for the United States, not a bad deal for the United States. As far as Iran is concerned, I would have never made that deal. That is one of the worst deals ever, ever made by this country. It is a disaster. So for Ted to say I agree with this deal, I mean, it's a staple in my speeches that that may be the worst single deal I've ever seen negotiated. So don't try to put it on me like it's wonderful, like I love it.
-
-I was against the giving of the money at all cost. I said don't negotiate at all until you get the prisoners back. If the prisoners don't come back early - three years ago. One of the longest negotiations I've ever seen, by the way. If they don't come back early, I was saying don't negotiate. They come back early.
-What you do is you take it back and you say, either give us the prisoners or we double up the sanctions. What we should have done is doubled up the sanctions and made a much better deal. Cause that deal is a disaster.
-Ted, the money is largely gone because of incompetent and very, very poor negotiators. But that money, the $150 billion, is largely gone and already spent everywhere but the United States.
-
-That doesn't mean I was endorsing that. I was not endorsing it. I said that is a strong, powerful government that put it down with strength. And then they kept down the riot. It was a horrible thing. It doesn't mean at all I was endorsing it.
-As far as Putin is concerned, I think Putin has been a very strong leader for Russia. I think he has been a lot stronger than our leader, that I can tell you. I mean, for Russia, that doesn't mean I'm endorsing Putin.
-
-I used to think Merkel was a great leader until she did what she did to Germany. Germany is a disaster right now. So I used to think that.
-
-And strong doesn't mean good. Putin is a strong leader, absolutely. I could name many strong leaders. I could name very many very weak leaders. But he is a strong leader. Now I don't say that in a good way or a bad way. I say it as a fact.
-
-I hope not. I truly hope not. I will say this. We have 25 (thousand), 30,000 people - you've seen it yourself. People come with tremendous passion and love for the country, and when they see protest - in some cases - you know, you're mentioning one case, which I haven't seen, I heard about it, which I don't like. But when they see what's going on in this country, they have anger that's unbelievable. They have anger.
-They love this country. They don't like seeing bad trade deals, they don't like seeing higher taxes, they don't like seeing a loss of their jobs where our jobs have just been devastated. And I know - I mean, I see it. There is some anger. There's also great love for the country. It's a beautiful thing in many respects. But I certainly do not condone that at all, Jake.
-
-We have some protesters who are bad dudes, they have done bad things. They are swinging, they are really dangerous and they get in there and they start hitting people. And we had a couple big, strong, powerful guys doing damage to people, not only the loudness, the loudness I don't mind. But doing serious damage. And if they've got to be taken out, to be honest, I mean, we have to run something.
-And it's not me.
It's usually the municipal government, the police because I don't have guards all over these stadiums. I mean, we fill up stadiums. It's usually the police - and, by the way, speaking of the police, we should pay our respects to the police because they are taking tremendous abuse in this country and they do a phenomenal job.
-
-So we should pay - we should truly give our police. They're incredible people, we should give them a great deal more respect than they receive.
-
-It shows the total dishonesty of the press. We were having - on a few occasions, again massive crowds. And we're talking and I'm saying who is going to vote on Tuesday? Who is going to vote? The place goes crazy. Then I say, hey, do me a favor. Raise your right hand. Do you swear you're going to vote for Donald Trump?
-Everyone's laughing, we're all having a good time. That's why I have much bigger crowds than Ted, because we have a good time at mine.
-
-But we're all having a good time and the next day, on the Today Show and a couple of other places, not too many. Because when you look at it, everyone's smiling, laughing. Their arms are raised like this. They had pictures, still pictures of people and they tried to equate it to Nazi Germany.
-
-It is a disgrace. It was a total disgrace. And I've had reporters, people that you know, come up to me and said that - what they did on the Today Show was a disgrace.
-
-I think that what should happen, getting back maybe a little bit to your first question, I think that whoever - first of all, I think I'm going to have the delegates. OK? I think. Let's see what happens.
-
-But if somebody doesn't have the delegates, and I guess there's two of us up here that can and there are two of us that cannot at this moment. But if - no, that's just - by the way, that is not meant to be a criticism. That's just a mathematical fact. OK?
-If two of us get up there, I would say this, if - if Marco, if the governor, if Ted had more votes than me in the form of delegates, I think whoever gets to the top position as opposed to solving that artificial number that was set by somebody, which is a very random number, I think that whoever gets the most delegates should win. That's what I think.
-
-HEWITT: Senator Cruz, if you - if you overtake Donald Trump at the convention, what will you do to take his very passionate supporters and keep them from bolting the convention and sabotaging the fall election?
-
-Make me president.
-
-You know, I listen and I watch Ted on television and when he speaks, and he's always saying, "I'm the only one that beat Donald in six contests; and I beat him." But I beat him in 13 contests. He never mentions that.
-
-And let me just tell you another little fact, little minor fact. I have about 1.6 million votes during this primary season, more votes than Ted. The other thing is, I beat Hillary, and I will give you the list, I beat Hillary in many of the polls that have been taken. And each week, I get better and better. And believe me, I haven't even started on her yet.
-
-I have not made that decision yet. I will make a decision on that, but I have not made that decision. My decision was that I would go through the entire primary season and I have turned down probably $275 million worth. I have many, many friends that come up all day long, $5 million, $10 million, I'm turning down money. I feel sort of foolish to be honest with you. I don't know if I get any credit for it but I'm self-funding my campaign.
-
-And other than - and by the way, other than very small donations where people are sending in $200, $15, $20, and we have some of that, but it's not a large amount. No, I'm self-funding my campaign, and the reason is that I've been in this business a long time and I was on the other side - until eight months ago I was on the other side. I made massive contributions, large contributions to politicians, both Democrats and Republicans. I was liked by everybody, which is an important thing.
-I will say this - people control special interests, lobbyists, donors, they make large contributions to politicians and they have total control over those politicians. I don't want anybody to control me but the people right out there. And I'm going to do the right thing.
-
-Ted was given to PACs. I mean, PACs - you know, these super PACs are a disaster, by the way, folks. Very corrupt. It's going to lead to lots of disasters. But Ted has super PACs and you have to look at the people that are giving to those super PACs, number one. It's very important to do that.
-There is total control of the candidates, I know it better than anybody that probably ever lived. And I will tell you this, I know the system far better than anybody else and I know the system is broken. And I'm the one, because I know it so well because I was on both sides of it, I was on the other side all my life and I've always made large contributions.
-And frankly, I know the system better than anybody else and I'm the only one up here that's going to be able to fix that system because that system is wrong.
-
-It depends on what comes up. You never know. It depends on what comes up. Look, look, we had a great president, Ronald Reagan. We had Tip O'Neill, speaker. And what do we do, we take these two men that are very, very different men, they got along, they had relationships, and they got things done, and very beautifully.
-Nobody is complaining about the deals that Ronald Reagan made. And he made it with Tip O'Neill. We need to have people get together and work good deals out, good deals out from our standpoint. And I'll tell you this, it can be done.
-We don't want to continue to watch people signing executive orders because that was not what the Constitution and the brilliant designers of this incredible document had in mind. We need people that can make deals and can work, because right now in Washington there's total, absolute gridlock.
-
-Thank you very much.
-The Republican Party has a great chance to embrace millions of people that it's never known before. They're coming by the millions. We should seize that opportunity. These are great people. These are fantastic people. These are people that love our country. These are people that want to see America be great again.
-These are people that will win us the election and win it easily. These are people that once the election is won will be able to put Supreme Court justices up that will do a fabulous job. Because let me tell you, if we lose this election, you're going to have three, four or maybe even five justices and this country will never, ever recover. It will take centuries to recover.
-So I just say embrace these millions of people that now for the first time ever love the Republican Party. And unify. Be smart and unify.
-
-Well look, he was a failed candidate, he should have beaten President Obama very easily.
-
-He failed miserably, and it was an embarrassment to everybody, including the Republican Party. It looked like he went away on a vacation the last month.
So, I don't take that, and I guess, obviously, he wants to be relevant. He wants to be back in the game.
-
-As far as domestic policy and trade which is killing our country, he said free trade and I believe in free trade also. But, if you look at China, and you look at Japan, and if you look at Mexico, both at the border, by the way, where they're killing us.
-
-Both at the border, and with trade -- and every other country we do business with we are getting absolutely crushed on trade. And, he said free trade, I say free trade great. But, not when they're beating us so badly.
-
-With China we're going to lose $505 billion dollars in terms of trade. You just can't do it.
-
-Mexico, $58 billion dollars.
-
-Japan, probably about, they don't know it yet, but about $109 billion dollars.
-
-Every country we lose money with. As far as I'm concerned, we've got to reduce -- we have to redo our trade deals 100 percent. I have the greatest business people in the world lined up to do it. We will make great trade deals.
-
-I totally disavow the Ku Klux Klan. I totally disavow David Duke. I've been doing it now for two weeks, this is your -- you're probably about the 18th person that's asked me the question. It was very clear, that question was also talked about in the form of groups. Groups, I want to know which groups are you talking about? You have to tell me which groups?
-
-Ultimately, he got to the Ku Klux Klan, which obviously I'm going to disavow. And, by the way, if you look on my Twitter account, almost immediately after the program they were disavowed again.
-
-You know, it's amazing. When I do something on Twitter, everybody picks it up, goes all over the place. But, when I did this one nobody ever picks it up. Take a look at my Twitter account.
-
-Thank you. Thanks.
-
-And we will.
-
-Well, I also happened to call him a lightweight, OK? And I have said that. So I would like to take that back. He is really not that much of a lightweight. And as far as -- and I have to say this, I have to say this. He hit my hands. Nobody has ever hit my hands. I have never heard of this. Look at those hands. Are they small hands?
-
-And he referred to my hands, if they are small, something else must be small. I guarantee you there is no problem. I guarantee.
-
-I have heard Ted say that over and over again on television, that he is the only one that can beat me. Just for the record, I have won 10. He has won three or four. Last week, in fact, on Tuesday, I was a half a million votes higher than him. I was a million votes higher than Marco, 1 million votes. That's a lot of votes. And was by far in first place.
-
-So I keep hearing that he is the only one that can beat me but he is getting beaten very, very badly. So where does this come from? Where does it come from?
-
-Very nice words, but happens to be wrong. CNN just came out with a poll two days ago that
-
-That national poll -- excuse me
-
-The national poll -- a national poll where he's at 15, he's at 14
-
-And, I'm at 49, so when he says 75 percent, that would mean that 80 percent of the people don't dig you, and I'm back down to 50
-
-Wrong
-
-I beat Hillary Clinton. I beat Hillary Clinton in many polls
-
-I beat Hillary Clinton in many polls
-
-I think I'm talking
-
-I beat Hillary Clinton
-
-I hope you think
-
-I beat Hillary Clinton in many polls. The Cue (ph) poll just came out. I beat Hillary Clinton in a recent Fox poll, I beat Hillary Clinton in USA Today, I beat her today in a poll in Ohio.
I beat -- I'm the only one that beats Hillary Clinton.
-
-I beat -- and I have not started on Hillary yet. Believe me, I will start soon. I haven't even started.
-
-In one poll
-
-Wrong. Wrong.
-
-This little guy has lied so much about my record.
-
-He has lied so much about my record.
-
-And I will tell you this. First of all, I got a call from my sister and brother tonight, and they said we had no idea Dad gave you $200 million. Believe me, I started off with $1 million. I built a company that's worth more than $10 billion. And I say it not in a bragging way, but that's the kind of thinking we need.
-
-Very low debt, tremendous cash flow. My financials are all -- they're all in there with the federal elections. You've seen them. Everybody has seen them. I say it only because that's the kind of thinking this country needs with $19 trillion in debt. Believe me.
-
-They devalue their currencies. I will do that. And by the way, I have been doing it more and more. But they devalue their currencies, in particular China. Mexico is doing a big number now, also. Japan is unbelievable what they're doing.
-
-They devalue their currencies, and they make it impossible for clothing-makers in this country to do clothing in this country. And if you look at what's happened on Seventh Avenue, and you look at what's happened in New York with the garment industry, so much of the clothing now comes out from Vietnam, China, and other places. And it's all because of devaluation.
-
-By the way, the Trans-Pacific, if you look at the TPP, a total disaster, which, by the way, Marco is in favor of, they need -- it is a disaster for our country. It's trying to be approved by various people, including President Obama. And I'll tell you something. The biggest problem with that is: They don't take into concurrence the devaluation. They're devaluing their currency.
-
-And they're killing
-
-No, no. I have very good answers.
-
-I know what's happening with the economy. You don't know a thing.
-
-You haven't employed in your life one person.
-
-I have employed tens of thousands of people.
-
-You haven't employed one person.
-
-Oh, you know what? You know what? Take a look at Trump Steaks.
-
-By the way, that's the other thing
-
-Mitt Romney
-
-false, totally false. And now the funny thing is he didn't talk about the hundreds of really successful jobs, the buildings all over the world that have made a fortune.
-
-I will. Don't worry about it, Marco. Don't worry about it. Don't worry about it, little Marco, I will.
-
-Don't worry about it, little Marco.
-
-This guy has a number one -- the number one absentee record in the United States
-
-He doesn't show up to vote.
-
-That's why the people in Florida do not like him.
-
-Correct.
-
-Department of Education. We're cutting Common Core. We're getting rid of Common Core. We're bringing education locally. Department of Environmental Protection. We are going to get rid of it in almost every form. We're going to have little tidbits left but we're going to take a tremendous amount out.
-
-We have various other things. If you look at the IRS, if you look at every single agency, we can cut it down, and I mean really cut it down and save. The waste, fraud, and abuse is massive.
-
-Larry Kudlow, great guy, everybody respects him, said my plan for taxes and tax cutting is the best by far of everybody.
-
-Let me explain something.
Because of the fact that the pharmaceutical companies -- because of the fact that the pharmaceutical companies are not mandated to bid properly, they have hundreds of billions of dollars in waste.
-
-We don't bid properly. We don't have proper bidding procedures. The reason we don't is because they take care of all of the senators, all of the congressmen, and they don't bid. They don't go out to bid.
-
-Take a look -- excuse me. You are talking about hundreds of billions of dollars.
-
-If we went out to the proper bid. Of course you are.
-
-I'm saying saving through negotiation throughout the economy, you will save $300 billion a year.
-
-And that's a huge -- of course it is. We are going to buy things for less money. Of course it is. That works out.
-
-I'm not only talking about drugs, I'm talking about other things. We will save $300 billion a year if we properly negotiate. We don't do that. We don't negotiate. We don't negotiate anything.
-
-Well, all of a sudden, I hear for 40 years I've been involved in Washington. I have been supporting people for many years. And these people have been politicians, and they've been on both sides, Democrats, Republicans, liberals, conservatives. I've supported everybody, because, until recently, I wasn't a politician, and I hope maybe you don't all consider me a politician right now. I hate the term politician.
-
-But I've been supporting politicians. A recent article somewhere said Donald Trump is a world-class businessman who goes out and he does get along with everybody. I've supported Democrats, and I've supported Republicans. And as a businessman, I owed that to my company, to my family, to my workers, to everybody to get along.
-
-Part of the problem we have in Washington, Chris.
-
-is it's total gridlock. Nobody gets along. We need people to get along. We need to be able to get things done.
-
-Actually, it was for business. It was. It was. It was for business. I pride myself, including outside of the United States. I'm doing almost 120 deals outside of the -- which I hope to be able to stop very soon and let my children handle it -- but we're doing many, many deals outside of the United States.
-
-I support politicians. In 2008, I supported Hillary Clinton. I supported many other people, by the way. And that was because of the fact that I'm in business. I did support very heavily Ronald Reagan. I also supported George Bush, by the way.
-
-Let me tell you something, Ted. The last person that Hillary Clinton wants to face is Donald Trump. That I can tell you.
-
-Hello.
-
-Nice to be with you, Megyn.
-
-You're looking well. You're looking well.
-
-I don't know exactly what -- when you talk about off the record. First of all, Buzzfeed? They were the ones that said under no circumstances will I run for president. And were they wrong. But a lot of people said that.
-
-Then, I did have a meeting with the editorial board of the New York Times, a very nice meeting. Many of those things were off the record, I think at their suggestion and my suggestion. And I think being off the record is a very important thing. I think it's a very, very powerful thing.
-
-And I will say this. These three gentlemen have gone off the record many times with reporters. And I think they want to honor it, and I would always honor that.
-
-I will say, though, in terms of immigration -- and almost anything else -- there always has to be some, you know, tug and pull and deal. And, you know, when I watch Ted stand on the Senate floor, I had great respect for what he did.
He stood there for a day-and-a-half or something. In the meantime, what came of it? Nothing. You have to be able to have some flexibility, some negotiation.
-
-Now, sometimes you ask for more than you want and you negotiate down to the point. I may have discussed something like that with the New York Times, but I would never release off-the-record conversations. I don't think it's fair, frankly, to do that to anybody.
-
-Not very flexible. No, not very flexible. I give the example -- I'm going to build a wall. I'm the one that wants the wall. I'm the one that can build the wall.
-
-It's going to get built. And by the way, Mexico is going to pay for the wall. I can tell you that. Mexico is going to pay for the wall.
-
-But -- and I used an example. And this isn't necessarily what was said, but whatever was said, the wall's 50 feet high. Is it going to be 45 feet or 40 feet? That could very well be. That could very well -- he wants it to be higher.
-
-That could very well be. But there's always give and take. There's always negotiation. And the best negotiator that knows what he's doing will make a great deal. But we need give and take in government. If you don't have give and take, you're never going to agree on anything.
-
-Fine.
-
-I will say one thing, what Marco said is -- I understand it. He is talking about a little give and take and a little negotiation. And you know what? That's OK. That's not the worst thing in the world.
-
-There is nothing wrong with that. I happen to be much stronger on illegal immigration. Sheriff Joe Arpaio endorsed me. And if he endorses you, believe me, you are the strongest, from Arizona.
-
-But give and take is OK. And I thought what he said is OK. We may differ on the degree. But what he said to me is OK.
-
-No. I never do that. I would not do that. I don't think -- I have too much respect -- if I deal with you off the record, if I deal with Bret or Chris off the record, I have too much respect for that process to say, just release everything. I would not do that.
-
-I'm changing. I'm changing. We need highly skilled people in this country, and if we can't do it, we'll get them in. But, and we do need in Silicon Valley, we absolutely have to have.
-
-So, we do need highly skilled, and one of the biggest problems we have is people go to the best colleges. They'll go to Harvard, they'll go to Stanford, they'll go to Wharton, as soon as they're finished they'll get shoved out. They want to stay in this country. They want to stay here desperately, they're not able to stay here. For that purpose, we absolutely have to be able to keep the brain power in this country.
-
-I'm changing it, and I'm softening the position because we have to have talented people in this country.
-
-That is correct.
-
-No, I'm not playing.
-
-I'm not playing to anybody's fantasies, I'm playing to the fact that our country is in trouble, that we have a tremendous problem with crime. The border is a disaster, it's like a piece of Swiss cheese. We're going to stop it, we're going to stop people from coming into our country illegally. We're going to stop it.
-
-First of all I've had tens of thousands of people working for me, most of which are -- 98, 97, 98 percent of the people in this country, from this country. I'm very proud of it. You have a club in Palm Beach, Florida called the Mar-a-Lago Club, it's a very, very successful club. It has a very short season, it's called the Season, and it goes from November until March.
-
-It's a few months, five months at the most.
People don't want a short-term job. They don't want -- so, we will bring people in, and we will send the people out. All done legally, all done with the process that's approved by government in Palm Beach, or West Palm Beach. We bring people in, we bring them out. We want to hire as many Americans as we can, but they don't want part-time, very short part-time jobs. - -Wrong. - -That's wrong. - -Wrong. - -Wrong. - -The -- the -- the other hotels during the season, they do the same thing. They take in a lot of people, because you can't get them. They take in a lot of people. Long-term employees, we don't do that, but short-term employees, we have no choice but to do it, and other hotels in that very, very hot area. It is a very hot area. - -It's very, very hard to get people. But other hotels do the exact same thing. And just so you understand, just again, this is a legal process. This is a procedure. It's part of the law. I take advantage of that. There's nothing wrong with it. We have no choice. - -This wasn't on the subject. - -Tapes were not on the subject. - -No, no. You're the liar. You're the lying guy up here. - -You're the -- you're the one. You're the one. - -You're the one. Now, let me just tell you. Let me just tell you. - -Excuse me. Excuse me. I've given my answer, Lyin' Ted. I've given my answer. - -They won't refuse. They're not going to refuse me. Believe me. - -Let me just tell you, you look at the Middle East. They're chopping off heads. They're chopping off the heads of Christians and anybody else that happens to be in the way. They're drowning people in steel cages. And he -- now we're talking about waterboarding. - -This really started with Ted, a question was asked of Ted last -- two debates ago about waterboarding. And Ted was, you know, having a hard time with that question, to be totally honest with you. They then came to me, what do you think of waterboarding? I said it's fine. And if we want to go stronger, I'd go stronger, too, because, frankly that's the way I feel. Can you imagine -- can you imagine these people, these animals over in the Middle East, that chop off heads, sitting around talking and seeing that we're having a hard problem with waterboarding? We should go for waterboarding and we should go tougher than waterboarding. That's my opinion. - -And -- and -- and -- I'm a leader. I'm a leader. I've always been a leader. I've never had any problem leading people. If I say do it, they're going to do it. That's what leadership is all about. - -Well, look, you know, when a family flies into the World Trade Center, a man flies into the World Trade Center, and his family gets sent back to where they were going -- and I think most of you know where they went -- and, by the way, it wasn't Iraq -- but they went back to a certain territory, they knew what was happening. The wife knew exactly what was happening. - -They left two days early, with respect to the World Trade Center, and they went back to where they went, and they watched their husband on television flying into the World Trade Center, flying into the Pentagon, and probably trying to fly into the White House, except we had some very, very brave souls on that third plane. All right? - -I have no problem with it. - -I think Richard Haass is excellent. I have a lot of respect for him. I think General Keane is excellent. I think that there are -- I like Colonel Jacobs very much. I see him. I know him. I have many people that I think are really excellent but in the end it's going to be my decision. 
-
-When you just asked the question about Snowden, I will tell you right from the beginning, I said he was a spy and we should get him back. And if Russia respected our country, they would have sent him back immediately, but he was a spy. It didn't take me a long time to figure that one out. Believe me.
-
-But I would get the best people, people that I'd be comfortable with. And we will do the right thing.
-
-We've made a terrible mistake getting involved there in the first place. That thing will collapse about two seconds after they leave. Just as I said that Iraq was going to collapse after we leave.
-
-We made a mistake going into Iraq. I've never said we made a mistake
-
-Well, OK, I never said that.
-
-OK. Wouldn't matter. I never said it.
-
-Should I respond to that first?
-
-You'll be here a long time.
-
-I hate the concept of it, but on a humanitarian basis, with what's happening, you have to. It's living in Hell in Syria; there's no question about it. They're living in Hell.
-
-Look, from a humanitarian standpoint, I'd love to help, but we have our own problems. We have so many problems that we have to solve.
-
-They lied. They said there were weapons of mass destruction; there were none. And they knew there were none.
-
-I don't know if he lied or not. He could have lied. Maybe he did. Maybe he didn't. I guess you'd have to ask him.
-
-Well, on Afghanistan, I did mean Iraq. I think you have to stay in Afghanistan for a while, because of the fact that you're right next to Pakistan, which has nuclear weapons, and we have to protect that. Nuclear weapons change the game.
-
-And I was always against going into Iraq. In fact, I -- believe me, I was always against it. There were some cases where I sort of -- in one interview with a great friend of mine, and yours, Howard Stern -- said that -- said that I said very meekly, long before we went in, I said very meekly, well, maybe, maybe, I don't know. By the time it got to that point, I was always against Iraq. But Afghanistan, I felt -- and in that one, if you notice, I corrected it the second day. OK? Second question?
-
-No, no.
-
-Now on -- let me explain that. You're right. Let me explain. First time the question had been put to me, it was very early on. The migration had just started. And I had heard that the number was a very, very small number.
-
-By the second day, two or three days later, I heard the number was going to be thousands and thousands of people. You know, when they originally heard about it, they were talking about bringing very, very small numbers in.
-
-And I said, begrudgingly, well, I guess maybe that's OK. It was not like, "Let's bring them in," because I think we should build a safe zone in -- we should really -- what we should be doing is building safe zones so they can stay in their own country and not go all over, and at least this way we're not going to have the problem. That's what we have to do.
-
-But just -- just to set -- because I fully understand what you're asking. When I first heard the question, first time the question was ever asked to me, first time I really had known about the question, the migration had just started. I was very much like, OK, by the time I went back and studied it, and they were talking about bringing thousands and thousands, I changed my tune. And I don't think there's anything wrong with that.
-
-Megyn, I have a very strong core.
-
-I have a very strong core. But I've never seen a successful person who wasn't flexible, who didn't have a certain degree of flexibility.
You have to have a certain degree of flexibility.
-
-You can't -- for instance, let's say, on -- on the second question, you can't say it's OK, and then you find out it's not OK, and you don't want to do anything. You have to be flexible, because you learn. I mean, before I knew the question was asked by Bill, and the next day, or the couple of days later, the question was asked by, by -- you know -- I was asked by a number of people, actually. I was asked by Sean, but I was asked by a number of people. But by that time, the number had increased significantly.
-
-The next day. But I had learned. I mean, nobody had ever asked me the question. This was brand new. But -- and I really mean it. You have to show a degree of flexibility. If you're going to be one way and you think it's wrong, does that mean the rest of your life you have to go in the wrong direction because you don't want to change?
-
-That's not right.
-
-And, by the way, just so you understand.
-
-This is a case I could have settled very easily, but I don't settle cases very easily when I'm right. Ninety-eight percent approval rating, we have an "A" from the Better Business Bureau.
-
-We have a 98 percent approval rating from the people who took the course. We have an "A" from the Better Business Bureau. And, people like it. Now, he's saying they didn't learn.
-
-We have many, many people that will be witnesses. Again, I don't settle cases. I don't do it because that's why I don't get sued very often, because I don't settle, unlike a lot of other people.
-
-We have a situation where we will win in court. But, many of the people that are witnesses did tremendously well, and made a lot of money.
-
-By taking the course.
-
-You're going to see, you don't know.
-
-You're going to see, you're going to see.
-
-No, no. Before they had the information.
-
-Before they had the information.
-
-Before they had the information it got -- it is right now an "A", once they had the information.
-
-The only reason that it was a "D" was because we didn't care -- we didn't give them the information.
-
-When they got the information it became an "A".
-
-Marco, you don't know.
-
-Yes.
-
-But it was elevated to an "A".
-
-I can give it to you. I can give it to you tomorrow.
-
-It was elevated to an "A".
-
-Small business.
-
-Right.
-
-The lead plaintiff is now getting out of the case because it's so bad for her.
-
-Excuse me, the lead plaintiff signed a letter saying how great it was, and is on tape saying how great it was.
-
-She's trying to get out of the case. She's trying to get out of the case.
-
-Oh, give me a break.
-
-Give me a break.
-
-Give me a break.
-
-You know what, let's see what happens in court. This is a civil case. Very easy to have settled. Could settle it now. Very easy to have settled. Let's see what happens at the end of a couple years when this case is over, OK?
-
-Yes, it has been going for a long time.
-
-We'll win the case.
-
-One, one of the victims.
-
-I gave many people their money back.
-
-We will see who's right at the end of a few years. But all of the -- almost all of the people, many, many people signed what's called the report card at the end, did you like the course, how did you like it.
-
-Almost all of them said it was terrific, OK? With letters, with this. Some of them are on tape saying it was terrific. Let's see what happens at the end of three years.
-
-I gave some refunds to people because if they asked for the refunds in a certain period of time, and we gave refunds to people.
-
-But let's see what happens at the end of three years. Let's see who's right.
-
-It's called pending litigation.
-
-Let me tell you the real con artist. Excuse me. Excuse me. The real con artist is Senator Marco Rubio who was elected in Florida and who has the worst voting record in the United States Senate.
-
-He doesn't go to vote. He's absent. He doesn't go. Now, the people of Florida can't stand him. He couldn't get elected dogcatcher. The people of Florida -- the people of Florida -- and by the way, I know he's going to spend $25 million on ads. Without that he wouldn't have a chance. He's 20 points south.
-
-The people in Florida wouldn't elect him dogcatcher. He couldn't get any -- he's right now 21 points down to me. And, you know. Again, there will be a lot of advertising. It's the only thing that might save him. But I doubt it.
-
-He scammed the people of Florida. He scammed people. He doesn't vote. He doesn't show up for the U.S. Senate. He doesn't vote. He scammed the people. He defrauded the people of Florida.
-
-You defrauded the people of Florida, little Marco.
-
-That was licensing.
-
-Oh, stop it.
-
-It's just a minor case. It's a minor case.
-
-It's a minor civil case.
-
-Give me a break.
-
-It's a minor civil case.
-
-There are many, many civil cases.
-
-Give me a break.
-
-I don't believe these politicians. All talk, no action. I'm standing here listening to -- I'm hearing him say about a percentage. CNN, he gets 15. That means 85 percent, based on what you're saying, of the people don't dig you, number one, number one. Is that a correct statement? How do you get -- are you at 15 in the new CNN poll? Do you believe in CNN? I mean, I know we're with FOX. But CNN spent a lot of money on a poll, just came out. I'm at 49. He's at 15. He tells me about 65 percent of the people. It's not 65 percent of the people. If you go by that, 85 percent of the people.
-
-Then he goes, we have five. And -- well, excuse me, I won 10. I won 10 states. If you listen to him, it's like -- I won 10 states. Everybody knows that on Super Tuesday Trump was the winner. There wasn't one person that didn't say that. Even the two people on your left and right said we did a great job. So how does he take -- how does he take five and say it's better than 10?
-
-I am by far the leader. But if you listen to a politician, he'll try and convince you otherwise.
-
-No, I don't. No, I don't.
-
-I know, but your recent polls have me beating Hillary Clinton, and very, very easily.
-
-I have nothing to say. I mean, generally speaking, I agree with what he said. I would certainly have rather left it to the states. I was always in favor -- I was very surprised when they came up with that decision.
-
-I would have certainly -- I would have preferred had it been left to the states and I think most people would have preferred that.
-
-No, I'm a big defender of the Second Amendment. And if you look at what's happened, whether it's in California, where you had the 14 people killed, whether it's in Paris -- which, by the way, has the toughest gun laws in the world and 130 people killed. Many, many people in the hospital gravely injured. They will be dying. Many people will be dying in addition.
-
-If we had guns, or if they had guns on the other side of the room, with the bullets going in the opposite direction, you would not have had 130 people killed. That I can tell you right now.
-
-So I'm a very, very big supporter of the Second Amendment.
-
-I don't support it anymore. I do not support the ban on assault weapons.
-
-I -- I did not say that. I did not say that.
-
-I did not say that.
-
-So we're listening to the all-talk, no-action politician, and he was the primary supporter of John Roberts, who gave us Obamacare.
-
-No, it's not. You take a look. He was the primary supporter. He pushed John Roberts, and pushed him, and pushed him, and Bush ultimately appointed him. He got appointed. And when it came his time to raise his hand and kill Obamacare, not once, but twice, he let us down, and he did the wrong thing.
-
-This is the man that was the primary supporter. And you can read law journals, you can read whatever you want to read -- I've read plenty of it. There was no stronger supporter of John Roberts than him. And it was a very, very big mistake.
-
-Not what you say in the op-ed.
-
-That is not what you said in the op-ed.
-
-Yeah, I know it is. But it's not what you said in the op-ed.
-
-Lyin' Ted.
-
-Well, let me just say this. I've gotten to know Marco over a period of time, believe me, he is not a leader. Believe me.
-
-He didn't answer -- he's not a leader. And, frankly, when I say they'll do as I tell them, they'll do as I tell them. And that's very -- it's very simple. It's very simple.
-
-We are in a very dangerous place. We have a depleted military. Totally depleted. We have -- by the way, our vets are treated horribly. We're going to take care of our vets. We're going to start taking care of our vets, properly, like we should.
-
-But we're going to build up our military, and we're going to get the equipment we want, not the equipment that's sold to us by somebody that gave him and him and not the governor campaign contributions. OK? We're going to get the equipment that the generals and the soldiers want.
-
-I will prove to be a great leader. And, you know, it's very interesting, we talk about the polls. Every single poll when it comes to ISIS and the military and the border says, by far, Trump is the best.
-
-Wrong. Wrong.
-
-Wrong.
-
-Wrong.
-
-He said very good things about me
-
-Yeah, finish.
-
-Let me just tell you, first of all, I've been hearing this man so long talking about Putin. Putin said about me -- I didn't say about Putin -- Putin said very nice things about me. And I say very nicely, wouldn't it be nice if actually we could get along with Russia, we could get along with foreign countries, instead of spending trillions and trillions of dollars?
-
-You're talking about Flint, Michigan. You're talking about places -- we need to rebuild the infrastructure of our country. Wouldn't it be nice if we got along with the world, and maybe Russia could help us in our quest to get rid of ISIS, et cetera, et cetera?
-
-I think I'd get along very well with Vladimir Putin. I just think so.
-
-Even if it's not me?
-
-OK -- that I'm very, very proud of -- millions and millions of people have come to the Republican Party over the last little while. They've come to the Republican Party. And by the way, the Democrats are losing people. This is a trend that's taking place. It's the biggest thing happening in politics, and I'm very proud to be a part of it. And I'm going to give them some credit, too, even though they don't deserve it. But the answer is: Yes, I will.
-
-Yes, I will. Yes. I will.
-
-Thank you. I am going to bring jobs back to the United States like nobody else can. We're going to fix our very depleted military. We're going to take care of our vets. We're going to strengthen our borders.
And you're going to be very, very proud of this country in just a few years if I'm elected president. Thank you.
-
-Thank you. My whole theme is make America great again. We don't win anymore as a country. We don't win with trade, we don't win with the military. ISIS, we can't even knock out ISIS, and we will, believe me. We will.
-
-We don't win in any capacity with healthcare. We have terrible health care, Obamacare is going to be repealed and replaced. We just don't win.
-
-You look at our borders, they're like Swiss cheese, everybody pours in.
-
-We're going to make a great country again. We're going to start winning again. We're going to win a lot, it's going to be a big difference, believe me. It's going to be a big difference.
-
-First of all, he was in charge of amnesty, he was the leader, and you can ask Marco because they've been debating this every debate that we've had.
-
-As far as coming back in, number one, you wouldn't even be talking, and you wouldn't have asked that as the first question if it weren't for me when, in my opening, I talked about illegal immigration. It wouldn't even be a big subject.
-
-But, we either have a country, or we don't have a country. We have at least 11 million people in this country that came in illegally. They will go out. They will come back -- some will come back, the best, through a process. They have to come back legally. They have to come back through a process, and it may not be a very quick process, but I think that's very fair, and very fine.
-
-They're going to get in line with other people. The best of them will come back, but they're going to come back through a process.
-
-Well, I'm very glad that Ted mentioned Arizona because probably the toughest man on borders is Sheriff Joe Arpaio, and two days ago he totally endorsed me, so, thank you.
-
-Well, first of all, self-deportation is people are going to leave as soon as they see others going out. If you look at Dwight Eisenhower in the 1950s, they started moving people out and the rest of them left.
-
-Self-deportation, as I really define it, and that's the way I define it, is you're going to get some to go, and the rest are going to go out.
-
-As far as the people that I've hired in various parts of Florida during the absolute prime season, like Palm Beach and other locations, you could not get help. It's the up season. People didn't want to have part-time jobs. There were part-time jobs, very seasonal, 90-day jobs, 120-day jobs, and you couldn't get help.
-
-Everybody agrees with me on that. They were part-time jobs. You needed them, or we just might as well close the doors, because you couldn't get help in those hot, hot sections of Florida.
-
-I criticized Mitt Romney for losing the election. He should have won that election. He had a failed president. He ran a terrible campaign. He was a terrible candidate. That's what I criticize Mitt Romney
-
-Excuse me. He ran one terrible campaign. That's an election that should have been won.
-
-No, no, I'm the only one on the stage that's hired people. You haven't hired anybody.
-
-And by the way, I've hired -- and by the way, I've hired tens of thousands of people over at my job. You've hired nobody.
-
-You've had nothing but problems with your credit cards, et cetera. So don't tell me about that.
-
-You haven't hired one person, you liar.
-
-That's wrong. That's wrong. Totally wrong.
-
-I've hired tens of thousands of people over my lifetime. Tens of thousands.
-
-Be quiet. Just be quiet.
-
-Let me talk.
I've hired tens of thousands of people. He brings up something from 30 years ago, it worked out very well. Everybody was happy. - -And by the way, the laws were totally different. That was a whole different world. - -But I've hired people. Nobody up here has hired anybody. - -I can only say this, and I've said it loud and clear and I've said it for years. And many of these people are sitting right in the audience right now -- your lobbyist and your special interest and your donors, because the audience is packed with them, and they're packed with you. - -I've had an amazing relationship with politicians -- with politicians both Democrat, Republican, because I was a businessman. As one magazine said, he's a world-class businessman; he was friendly with everybody. I got along with everybody. - -You get along with nobody. You don't have one Republican -- you don't have one Republican senator, and you work with them every day of your life, although you skipped a lot of time. These are minor details. But you don't have one Republican senator backing you; not one. You don't have the endorsement of one Republican senator and you work with these people. You should be ashamed of yourself. - -Here's a man -- Robin Hood. This is Robin Hood over here. He talks about corruption. On his financial disclosure form, he didn't even put that he's borrowed money from Citibank and from Goldman Sachs, which is a total violation. He didn't talk about the fact that he pays almost no interest. He just left it off, and now he's going to protect the people from the big bad banks. - -Give me a break. - -Correct. - -I will, and the wall just got 10 feet taller, believe me. - -It just got 10 feet taller. I saw him make that -- I saw him make the statement. I saw him use the word that he used. I can only tell you, if I would have used even half of that word, it would have been national scandal. - -This guy used a filthy, disgusting word on television, and he should be ashamed of himself, and he should apologize, OK? Number one. Number two, we have a trade deficit with Mexico of $58 billion a year. And that doesn't include all the drugs that are pouring across and destroying our country. - -We're going to make them pay for that wall. Now, the wall is $10 billion to $12 billion, if I do it. If these guys do it, it'll end up costing $200 billion. - -But the wall is $10 billion to $12 billion. You need 1,000 -- you need 1,000 miles. The Great Wall of China, built 2,000 years ago -- 2,000, is 13,000 miles. We need 1,000, because we have a lot of natural barriers. - -We can do it for $10 billion to $12 billion, and it's a real wall. This is a wall that's a heck of a lot higher than the ceiling you're looking at. This is a wall that's going to work. - -Mexico will pay for it, because they are not doing us any favors. They could stop all of this illegal trade if they wanted to immediately. Mexico will pay for the wall. It's a small portion of the kind of money that we lose and the deficits that we have with Mexico. - -Well, you know, I don't mind trade wars when we're losing $58 billion a year, you want to know the truth. We're losing so much. - -We're losing so much with Mexico and China -- with China, we're losing $500 billion a year. And then people say, "don't we want to trade?" I don't mind trading, but I don't want to lose $500 billion. I don't want to lose $58 billion. - -Mexico just took Carrier Corporation, maker of air conditioners. They just took Ford. They're building a $2.5 billion plant. 
They just took Nabisco out of Chicago. - -And I always say I'm not having Oreos anymore, which is true, by the way. But they just took a big plant from Nabisco into Mexico. They're taking our businesses. I don't mind. - -Such a cute sound bite. - -All right, you know what? - -Because they devalue their currency -- they devalue their currencies that makes it -- well, you don't know a thing about business. You lose on everything. - -Let me just tell you -- they de-value their currency. They de-value their currencies. - -That makes it -- well, you don't know a thing about business. You lose on everything you do. - -Let me just tell you, they de-value their currencies. China, Mexico, everybody. Japan with the cars. They de-value their currencies to such an extent that our businesses cannot compete with them, our workers lose their jobs. - -But you wouldn't know anything about it because you're a lousy businessman. - -No, I -- and you know why? You know why? - -You know why? - -And by the way I've won most of the lawsuits. - -And they actually did a very good job, but I've won most of the lawsuits. - -Excuse me. Hey Wolf, let me ask you. Am I allowed to respond to this? - -OK. Well let -- no, I haven't. I really haven't. - -Here's a guy -- here's a guy that buys a house for $179,000, he sells it to a lobbyist who's probably here for $380,000 and then legislation is passed. You tell me about this guy. This is what we're going to have as president. - -No, no, no. - -I borrowed $1 million, I turned it into $10 billion, more than $10 billion. - -I have to say, he lied this time. He lied. 100 percent. 100 percent. - -Yes, yes, yes. 38 years ago. - -True. - -True. - -No. - -First of all, I don't believe anything Telemundo says. - -Number one. Number two, I currently employ thousands of Hispanics, and over the years, I've employed tens of thousands of Hispanics. They're incredible people. They know, and the reason I won in Nevada, not only won the big one, but I also won subs, like, as an example, I won with women. - -I won with every single category. I won with men, I won with high-income, low-income, I won with Hispanics. And I got 46 percent. Nobody else was close. Because they know I'm going to bring jobs back from China, from Japan, from so many other places. - -They get it. They're incredible people. They're incredible workers. They get it. And I've won many of the polls with Hispanics. I didn't maybe win the Telemundo poll. - -But one thing I'm also going to do, I'm going to be getting -- bringing a lot of people in who are Democrats, who are independents, and you're seeing that with the polls, because if you look at anywhere, look at any of the elections, every single election, it has been record-setting. - -And the good news is, for the Republican Party, the Democrats are getting very poor numbers in terms of bringing them in. We're getting record-setting numbers. I think I have something to do with that. - -We're getting record-setting numbers. And I won every one -- the three of them that I won, I won with record-setting numbers. - -New people are coming into the Republican Party. We are building a new Republican Party, a lot of new people are coming in. - -I love them. I love them. - -They're fine. Do you know what? They're fine. - -Why did they take the poll? Why did they - -I'm just telling you, I'm doing very well with Hispanics. And by the way, I settled my suit, as you know, with Univision. It was settled. We're good friends now. It was all settled up. - -Very happy, very happy. 
Very good people. - -I'm just telling you -- I'm just telling you that I will do really well with Hispanics. I will do better than anybody on this stage. I have respect for the people on the stage, but I will do very well with Hispanics. But I'm telling you also, I'm bringing people, Democrats over and I'm bringing independents over, and we're building a much bigger, much stronger Republican Party. - -Yes, I would. And I've been there. And I've been there very strongly. I do have to say something, and this is interesting and it's not anybody's fault. It's not Ted's fault. Justice Roberts was strongly recommended and pushed by Ted. Justice Roberts gave us Obamacare. Might as well be called Roberts-care. Two times of the Supreme Court, Justice Roberts approved something that he should have never raised his hand to approve. And we ended up with Obamacare. - -That is a rough thing. And I know Ted feels badly about it. And I think he probably still respects the judge. But that judge has been a disaster in terms of everything we stand for because there is no way -- no way that he should have approved Obamacare. - -Now, with that being said, these are the things that happen. But Ted very, very strongly pushed Judge Roberts, and Justice Roberts gave us something that we don't want. - -Well, let -- let me -- let me just say -- let me just say this. Look, I watched Ted -- and I respected it, but he gets nowhere -- stand on the Senate floor for a day or two days, and talk and talk and talk. - -I watched the other senators laughing and smiling. And when Ted was totally exhausted, he left the Senate floor, and they went back to work. OK? We have to have somebody that's going to make deals. - -It's wonderful to stand up for two days and do that. Now, Ted's been very critical -- I have a sister who's a brilliant. - -excuse me. She's a brilliant judge. He's been criticizing -- he's been criticizing my sister for signing a certain bill. You know who else signed that bill? Justice Samuel Alito, a very conservative member of the Supreme Court, with my sister, signed that bill. - -So I think that maybe we should get a little bit of an apology from Ted. What do you think? - -When you say crazy zealot, are you talking about you? Crazy zealot -- give me a break. - -Well, let -- let me just say -- let me just say, first of all, I have great respect for Justice Scalia. I thought he was terrific. And if you talk about evolving, Ronald Reagan was a somewhat liberal Democrat. Ronald Reagan evolved into a somewhat strong conservative -- more importantly, he was a great president. A great president. - -As far as Planned Parenthood is concerned, I'm pro-life. I'm totally against abortion, having to do with Planned Parenthood. But millions and millions of women -- cervical cancer, breast cancer -- are helped by Planned Parenthood. - -So you can say whatever you want, but they have millions of women going through Planned Parenthood that are helped greatly. And I wouldn't fund it. - -I would defund it because of the abortion factor, which they say is 3 percent. I don't know what percentage it is. They say it's 3 percent. But I would defund it, because I'm pro-life. But millions of women are helped by Planned Parenthood. - -I just want to say, I agree with that 100 percent, except pre-existing conditions, I would absolutely get rid of Obamacare. 
We're going to have something much better, but pre-existing conditions, when I'm referring to that, and I was referring to that very strongly on the show with Anderson Cooper, I want to keep pre-existing conditions. - -I think we need it. I think it's a modern age. And I think we have to have it. - -I think they're wrong 100 percent. What we need -- look, the insurance companies take care of the politicians. The insurance companies get what they want. We should have gotten rid of the lines around each state so we can have real competition. - -We thought that was gone, we thought those lines were going to be gone, so something happened at the last moment where Obamacare got approved, and all of that was thrown out the window. - -The reason is some of the people in the audience are insurance people, and insurance lobbyists, and special interests. They got -- I'm not going to point to these gentlemen, of course, they're part of the problem, other than Ben, in all fairness. - -And, actually, the Governor too, let's just talk about these too, OK? - -Because I don't think the Governor had too much to do with this. - -But, we should have gotten rid of the borders, we should have gotten rid of the lines around the state so there's great competition. The insurance companies are making a fortune on every single thing they do. - -I'm self-funding my campaign. I'm the only one in either party self-funding my campaign. I'm going to do what's right. We have to get rid of the lines around the states so that there's serious, serious competition. - -And, you're going to see -- excuse me. You're going to see pre-existing conditions and everything else be part of it, but the price will be done, and the insurance companies can pay. Right now they're making a fortune. - -That's going to solve the problem. And, the insurance companies aren't going to say that, they want to keep it. They want to say -- they say whatever they have to say to keep it the way it is. I know the insurance companies, they're friends of mine. The top guys, they're friends of mine. I shouldn't tell you guys, you'll say it's terrible, I have a conflict of interest. They're friends of mine, there's some right in the audience. One of them was just waving to me, he was laughing and smiling. He's not laughing so much anymore. - -Hi. - -Look, the insurance companies are making an absolute fortune. Yes, they will keep pre-existing conditions, and that would be a great thing. Get rid of Obamacare, we'll come up with new plans. But, we should keep pre-existing conditions. - -And, you don't know what it means. - -You don't know. - -The biggest problem, I'll have you know. - -You know, I watched him melt down two weeks ago with Chris Christie. I got to tell you, the biggest problem he's got is he really doesn't know about the lines. The biggest thing we've got, and the reason we've got no competition, is because we have lines around the state, and you have essentially. - -You don't know much. - -The lines around the states and, it was almost done -- not now - -Excuse me. Excuse me. - -You get rid of the lines, it brings in competition. So, instead of having one insurance company taking care of New York, or Texas, you'll have many. They'll compete, and it'll be a beautiful thing. - -The nice part of the plan -- you'll have many different plans. You'll have competition, you'll have so many different plans. - -No, no, no. - -I watched him repeat himself five times four weeks ago.
- -I watched him melt down on the stage like that, I've never seen it in anybody. - -I thought he came out of the swimming pool. - -We're going to have many different plans because competition - -There is going to be competition among all of the states, and the insurance companies. They're going to have many, many different plans. - -BASH: Is there anything else you would like to add to that? - -No, there's nothing to add. - -What is to add? - -I do not want socialized medicine, just so you understand. He goes around saying oh, he wants it. I do not want socialized medicine. I do agree with him that it's going to be a disaster, Obamacare, for the economy. - -In 2017, it will be impossible for us to pay for it if you look at what's going on. That's why it has to be repealed, for a lot of reasons. Number one, it doesn't work, number two, premiums. You look at premiums going up, 25, 35, even 45 percent, and more. We have to get rid of Obamacare. It is going to destroy our economy completely. Our economy is not doing well. It is going to destroy our economy greatly. And on that, I agree. - -That's false. - -No, I said it worked in a couple of countries. - -No, I did not. No, I did not. - -Correct. I will not let people die on the streets if I'm president. - -Excuse me. Let me talk. If people -- my plan is very simple. I will not -- we're going to have private -- we are going to have health care, but I will not allow people to die on the sidewalks and the streets of our country if I'm president. You may let it and you may be fine with it. - -I'm not fine with it. We are going to take those people. - -Excuse me. We are going to take those people and those people are going to be serviced by doctors and hospitals. We're going to make great deals on it, but we're not going to let them die in the streets. - -You know what? Call it what you want. - -Call it what you want, people are not going to be dying on the sidewalk. - -Because the country will become a dynamic economy. We'll be dynamic again. If you look at what's going on, we have the highest taxes anywhere in the world. We pay more business tax, we pay more personal tax. We have the highest taxes in the world. - -It's shutting off our economy. It's shutting off our country. We have trillions of dollars outside that we can't get in. Yes, we will do my tax plan, and it will be great. We will have a dynamic economy again. - -We're going to make many cuts in business. We're getting rid of -- we're going to get rid of so many different things. Department of Education -- Common Core is out. We're going local. Have to go local. - -Environmental protection -- we waste all of this money. We're going to bring that back to the states. And we're going to have other many things. -We are going to cut many of the agencies, we will balance our budget, and we will be dynamic again. - -Waste, fraud and abuse all over the place. Waste, fraud and abuse. - -You look at what's happening with Social Security, you look -- look at what's happening with every agency -- waste, fraud and abuse. We will cut so much, your head will spin. - -I just want to say -- and I'm a big fan of the governor, but they also struck oil, OK, so that helped Iowa a lot. - -All right. First of all, let me just explain. I was the first one to file a financial disclosure form -- almost 100 pages. You don't learn anything about somebody's wealth with a tax return. You learn it from statements. - -I filed -- which shows that I'm worth over $10 billion. I built a great company with very little debt.
People were shocked, the people in the back, the reporters, they were shocked when they went down. And I filed it on time. I didn't ask for five 45-day extensions, which I would have been entitled to. - -So as far as that's concerned, I filed it. And that's where you find out what kind of a company. You don't learn anything from a tax return. - -I will say this. Mitt Romney looked like a fool when he delayed and delayed and delayed. And Harry Reid baited him so beautifully. And Mitt Romney didn't file his return until a September 21st of 2012, about a month-and-a-half before the election. And it cost him big league. - -As far as my return, I want to file it, except for many years, I've been audited every year. Twelve years, or something like that. Every year they audit me, audit me, audit me. - -Nobody gets audited -- I have friends that are very wealthy people. They never get audited. I get audited every year. I will absolutely give my return, but I'm being audited now for two or three years, so I can't do it until the audit is finished, obviously. And I think people would understand that. - -Are you going to ask anybody else that question? - -Every single question comes to me? - -I know I'm here for the ratings, but it's a little bit ridiculous. - -True. - -No, I'm not. First of all, very few people listen to your radio show. That's the good news. - -Let me just tell you, let me just -- which happens to be true. Check out the ratings. - -Look, let me just tell you something. Let me just tell you something. I want to release my tax returns but I can't release it while I'm under an audit. We're under a routine audit. I've had it for years, I get audited. - -And obviously if I'm being audited, I'm not going to release a return. As soon as the audit is done, I love it. - -Eighty-five percent say you, big difference. - -So at the beginning, I said openly to everybody that I contribute to many, many politicians, both Republican and Democrat. And I have, over the years. I'm a businessman. I have, over the years. - -And I sort of have to laugh when Ted makes a big deal out of the fact that he's doing well in the polls. Well, I'm beating him in virtually every poll. I'm tied in Texas, by the way, which I shouldn't be. But I think I'll do very well. - -But a poll just came out -- a Bloomberg poll -- where I am beating him so badly that it's, like, embarrassing even for me to say I'm beating him that badly. - -And -- and here's the thing -- it was sort of funny -- 65 percent of the people don't like you -- I just got 36 percent of the vote, right? I just got 46 percent on another one. I got 38 percent on another one. That means -- and he got 20 and 22, and he lost in South Carolina so badly -- that was going to be his stronghold. He said a year ago, "I can't lose South Carolina." I beat him in a landslide. - -Last week in Nevada, I beat him in a landslide, and he sang (ph) about the polls. One other thing -- Hillary Clinton -- take a look at USA Today, take a look at the Q poll. I beat her, and I beat her badly. And I -- and I haven't even started at her. I only had one little interchange. - -I only had one little interchange, and that was four weeks ago, when she said I was sexist. And believe me, they had a rough weekend that weekend, between Bill and Hillary. They had a rough weekend. - -Nothing. - -I'm not afraid. - -First of all, he's talking about the polls. I'm beating him awfully badly in the polls. 
- -Well, then, if I can't -- if -- hey, if I can't beat her, you're really going to get killed, aren't you? - -I'm being audited 12 years in a row, at least. - -Shocking. - -Right. - -Well, first of all, I don't think they do under President Obama because I think he's treated Israel horribly, all right? I think he's treated Israel horribly. - -I was the grand marshal down 5th Avenue a number of years ago for the Israeli Day Parade, I have very close ties to Israel. I've received the Tree of Life Award and many of the greatest awards given by Israel. - -As president, however, there's nothing that I would rather do than to bring peace to Israel and its neighbors generally. And I think it serves no purpose to say that you have a good guy and a bad guy. - -Now, I may not be successful in doing it. It's probably the toughest negotiation anywhere in the world of any kind. OK? But it doesn't help if I start saying, "I am very pro-Israel, very pro, more than anybody on this stage." But it doesn't do any good to start demeaning the neighbors, because I would love to do something with regard to negotiating peace, finally, for Israel and for their neighbors. - -And I can't do that as well -- as a negotiator, I cannot do that as well if I'm taking big, big sides. With that being said, I am totally pro-Israel. - -Well, I can only say -- look, I can only say I've been a big contributor to Israel over the years. I've received many, many awards from Israel, as I've said before. I have a great relationship with Israel. And I'm going to keep it that way. And if I could bring peace, that would be a fantastic thing. It would be one of my greatest achievements as president. - -I'm a negotiator. I've done very well over the years through negotiation. It's very important that we do that. In all fairness, Marco is not a negotiator. I watched him melt down and I'll tell you, it was one of the saddest things I've ever seen. He's not going down -- excuse me, wait a minute, and these people may even be tougher than Chris Christie. OK? - -OK, no, no, no -- a deal is a deal. Let me tell you that. I learned a long time ago. - -You are not a negotiator. You are not a negotiator. - -And, with your thinking, you will never bring peace. You will never bring peace. - -Excuse me, I want to be able to bring peace. - -He will never be able to do it. I think I may be able to do it, although I will say this. Probably the toughest deal of any kind is that particular deal. - -One thing I'd like to add to what the Governor's saying, I think that we are now in a position -- are $19 trillion dollars because of the horrible omnibus budget that was approved six weeks ago, it's going to be $21 trillion dollars. We can no longer defend all of these countries, Japan, Germany, South Korea. - -You order televisions, you order almost anything, you're getting it from these countries. Whether it's a Mercedes-Benz, or whether it's an air conditioning unit. They're coming out of these countries. They are making a fortune. Saudi Arabia, we are defending Saudi Arabia. Before the oil went down, now they're making less, but they're making plenty. They were making $1 billion dollars a day. - -We defend all of these countries for peanuts. You talk about budgets. We have to start getting reimbursed for taking care of the military services for all of these countries. - -I really don't because it's not working and the countries aren't agreeing to it and the rebels aren't agreeing and Syria is not agreeing. So it's a meaningless ceasefire.
- -I love the idea of a ceasefire. I love the idea of -- with a total cessation. But it's not working, as you know very well. It's not working. If -- we can do what we want with Russia but nobody else is adhering to it. - -So I certainly support it, I would certainly love it, but all parties have to be part of it. - -Again, I think I gave them both checks to be exactly honest. I think they both liked me very much. - -Well, I think Bush did a hell of a bad job as far as that's concerned. You know it and so do I. - -Be honest. Be honest. No, this was before. The check came early. - -But let me just tell you, Syria, he's saying that I was in favor of Syria. He said I was in favor of Libya? I never discussed that subject. I was in favor of Libya? We would be so much better off if Gadhafi were in charge right now. - -If these politicians went to the beach and didn't do a thing, and we had Saddam Hussein and if we had Gadhafi in charge, instead of having terrorism all over the place, we'd be -- at least they killed terrorists, all right? - -And I'm not saying they were good because they were bad, they were really bad, but we don't know what we're getting. You look at Libya right now, ISIS, as we speak, is taking over their oil. As we speak, it's a total mess. - -We would have been better off if the politicians took a day off instead of going into war. - -I never said walk away. I wouldn't want to walk away. I want them to pay us much more money. We cannot afford to subsidize a lot. I'll negotiate a lot more money than you'll ever get. - -As far as John Kerry is concerned, there has been no tougher critic of this man, I think he negotiated one of the worst deals in the history of our country, the Iran deal, where they get their $150 billion and all of the other things that take place. - -It is a disaster for this country, and speaking of Israel, it's a disaster for Israel. I'm no fan of John Kerry. - -Well, look, my response is very simple. There is nobody on this stage that has done more for Israel than I have. Nobody. You might say, you might talk, you're politicians, all talk, no action. - -I've been watching it all my life. You are all talk and no action. - -What I've seen up here -- I mean, first of all, this guy is a choke artist, and this guy is a liar. You have a combination. - -You have a combination of factors. He can't do it for the obvious reason, and he can't do it because he doesn't know how to tell the truth. Other than that, I rest my case. - -I watched -- I watched the lobbyists. I watched what this man did to Dr. Ben Carson, who I respect, in Iowa, where he said that Ben Carson is out of the race -- he has left Iowa and he's out of the race. And I thought it was disgraceful. - -And got a lot of votes because of that -- a lot of votes. Took them away from Ben Carson. I watched that. Probably took them away from me, too. But I watched it. - -I also watched where he did a form that looked like it came right out of a government agency, and it said on top, "Voter Violation," and then it graded you and it scared the hell out of people, and it said the only way you clear up the violation, essentially, is to go and vote for Ted Cruz. I watched that fraudulent document, and I said it's the worst thing I've ever seen in politics. - -To me, that was even worse than what he did to Ben. - -I know politicians -- I know politicians, believe it or not, better than you do. And it's not good. - -I funded you. I funded him. Can you believe it? - -I funded this guy. I gave him a check.
- -I gave him a check. He never funded me. - -You know why? I didn't want to, but he sent me his book with his autograph. - -"Mr. Trump, you're doing a great job." I have his book. - -Thank you -- thank you for the book. Go ahead. - -This is a lot of fun up here tonight, I have to tell you. - -Go ahead. I'm relaxed. You're the basket case. - -Go ahead. Don't get nervous. - -Go ahead. - -I've seen you. - -You're losing so badly. - -You don't know what's happening. - -First of all, you're talking about a border that's many, many times longer. You're talking about a massive border. - -We have far less problem with that border than we do with our Southern border, and tremendous amounts -- you know, I won, I had the privilege of winning by a landslide, by the way, New Hampshire. - -You go to New Hampshire, the first thing they talk about is heroin and drugs pouring in. And, you wouldn't think this beautiful place -- it's beautiful. With the trees and the roads, and the countryside. Their biggest problem is heroin, and it's such a shame to see it. - -They're pouring in from the Southern border, so I'm talking about great security. I'm talking about a wall that can absolutely be built, and I'll build it on time, on budget. It'll be a very high wall, a great wall. It's going to be built, it's going to be built. It's going to be paid for by Canada, by the way -- maybe I'll get Canada to pay? Got to be paid for by Mexico. - -The problem with Canada, you're talking about a massively long piece. You're talking about a border that would be about four times longer. It would be very, very hard to do, and we -- it is not our biggest problem. I don't care what anyone says. It is not our big problem. Our big problem is not only people coming in, and in many cases the wrong people, it's the tremendous amount of drugs that are coming in. - -Thank you. - -Nobody knows politicians better than I do. They're all talk, they're no action, nothing gets done. I've watched it for years. Take a look at what's happening to our country. - -All of the things that I've been talking about, whether it's trade, whether it's building up our depleted military, whether it's taking care of our vets, whether it's getting rid of Common Core, which is a disaster, or knocking out Obamacare and coming up with something so much better, I will get it done. Politicians will never, ever get it done. And we will make America great again. Thank you. - -Now, until that audit's done, and I don't think anybody would blame me, I'm not giving it. - -Well, I can say this. If the President, and if I were President now, I would certainly want to try and nominate a justice. I’m sure that, frankly, I’m absolutely sure that President Obama will try and do it. I hope that our Senate is going to be able — Mitch, and the entire group, is going to be able to do something about it. -In times of delay, we could have a Diane Sykes, or you could have a Bill Pryor, we have some fantastic people. But this is a tremendous blow to conservatism. It’s a tremendous blow, frankly, to our country. - -I think he’s going to do it whether I’m OK with it or not. I think it’s up to Mitch McConnell, and everybody else to stop it. It’s called delay, delay, delay. - -What we want to do, when we want to do it, and how hard do we want to hit? Because we are going to have to hit very, very hard to knock out ISIS. -We’re going to also have to learn who our allies are.
We have allies, so-called allies, we’re spending billions and billions of dollars supporting people — we have no idea who they are in Syria. Do we want to stay that route, or do we want to go and make something with Russia? -I hate to say Iran, but with Russia, because we — and the Iran deal is one of the worst deals I have ever seen negotiated in my entire life. It’s a disgrace that this country negotiated that deal. But very important not only a disgrace, it’s a disgrace and an embarrassment. But very important, who are we fighting with? Who are we fighting for? What are we doing? We have to rebuild our country. But we have to — I’m the only one on this stage that said, “Do not go into Iraq. Do not attack Iraq.” Nobody else on this stage said that. And I said it loud and strong. And I was in the private sector. I wasn’t a politician, fortunately. -But I said it, and I said it loud and clear, “You’ll destabilize the Middle East.” That’s exactly what happened. -I also said, by the way, four years ago, three years ago, attack the oil, take the wealth away, attack the oil and keep the oil. They didn’t listen. They just started that a few months ago. - -Called me a genius, I like him so far, I have to tell you. Let me just tell you this. -Jeb is so wrong. Jeb is absolutely self — just so you understand, you know what that is? That’s Jeb’s special interest and lobbyist talking. -Look, let me just tell you something, Jeb — Jeb is so wrong. You got to fight ISIS first. You fight ISIS first. Right now you have Russia, you have Iran, you have them with Assad, and you have them with Syria. You have to knock out ISIS. They’re chopping off heads. These are animals. You have to knock em out. You have to knock them off strong. You decide what to do after, you can’t fight two wars at one time. -If you listen to him, and you listen to some of the folks that I’ve been listening to, that’s why we’ve been in the Middle East for 15 years, and we haven’t won anything. We’ve spent $5 trillion dollars in the Middle East with thinking like that. We’ve spent $5 - -Lindsey Graham, who backs him, had zero on his polls. Let me just say something — we’ve spent — we’ve spent. -I only tell the truth, lobbyists. -We’ve spent $5 trillion dollars all over the — we have to rebuild our country. We have to rebuild our infrastructure. you listen to that you’re going to be there for another 15. - -You’ll end up with world war three. - -We’re supporting troops that we don’t even know who they are. - -We’re supporting troops that we don’t even know who they are. - -We have no idea who they are. - -Oh, yeah, yeah. - -Let 44 million in New Hampshire, it was practically 44 million — give me a break. - -First of all, I have to say, as a businessman I get along with everybody. I have business all over the world. - -I know so many of the people in the audience. And by the way, I’m a self-funder. I don’t have — I have my wife and I have my son. That’s all I have. I don’t have this. - -So let me just tell you, I get along with everybody, which is my obligation to my company, to myself, et cetera. -Obviously, the war in Iraq was a big, fat mistake. All right? Now, you can take it any way you want, and it took — it took Jeb Bush, if you remember at the beginning of his announcement, when he announced for president, it took him five days. -He went back, it was a mistake, it wasn’t a mistake. 
It took him five days before his people told him what to say, and he ultimately said, “It was a mistake.” The war in Iraq, we spent $2 trillion, thousands of lives, we don’t even have it. Iran has taken over Iraq with the second-largest oil reserves in the world. -Obviously, it was a mistake. - -George Bush made a mistake. We can make mistakes. But that one was a beauty. We should have never been in Iraq. We have destabilized the Middle East. - -I’m being nice. - -The World Trade Center came down during your brother’s reign, remember that. - -That’s not keeping us safe. - -She should be running. - -How did he keep us safe when the World Trade Center — the World — excuse me. I lost hundreds of friends. The World Trade Center came down during the reign of George Bush. He kept us safe? That is not safe. That is not safe, Marco. That is not safe. - -And George Bush — by the way, George Bush had the chance, also, and he didn’t listen to the advice of his CIA. - -I don’t want to go. - -Yes. - -First of all, the — when you say I’m the only candidate, if you listen to the Democrats, they want to do many things to Social Security and I want to do them on its own merit. You listen to them, what they want to do to Social Security, none of these folks are getting elected, OK, whether they can do it or not. I’m going to save Social Security. I’m going to bring jobs back from China. I’m going to bring jobs back from Mexico and from Japan, where they’re all — every country throughout the world — now Vietnam, that’s the new one. -They are taking our jobs. They are taking our wealth. They are taking our base. And you and I have had this discussion. We’re going to make our economy strong again. I’m lowering taxes. We have $2.5 trillion offshore. We have 2.5 trillion that I think is actually five trillion because the government has no idea when they say 2.5, they have no idea what they’re doing or saying, as they’ve proven very well. -We’re going to bring that money back. You take a look at what happened just this week, China bought the Chicago Stock Exchange, China, a Chinese company. Carrier is moving to Mexico, air conditioning company. Not only the ones I talk about all the time, Nabisco and Ford and — they’re all moving out. -We have an economy that last quarter, GDP didn’t grow. It was flat. We have to make our economy grow again. We’re dying. This country is dying. And our workers are losing their jobs, and you’re going. - -I’m the only one who is going to save Social Security, believe me. - -Because you have tremendous waste. I’ll tell you. - -You have tremendous waste, fraud and abuse. That we’re taking care of. That we’re taking care of. It’s tremendous. We have in Social Security right now thousands and thousands of people that are over 106 years old. Now, you know they don’t exist. They don’t exist. There’s tremendous waste, fraud and abuse, and we’re going to get it. But we’re not going to hurt the people who have been paying into Social Security their whole life and then all of a sudden they’re supposed to get less. We’re bringing our jobs back. We’re going to make our economy great again. - -I want everybody taken care of, but we have to take care of our people in this country. We’re not taking care of our people. We have no border. We have no control. People are flooding across. We can’t have it. We either have a border, and I’m very strongly — I’m not proposing. I will build a wall. I will build a wall. -Remember this, the wall will be paid for by Mexico. We are not being treated right.
- -We are not being treated properly. If we don’t have borders, if we don’t have strength, we don’t have a country. People are flowing across. We have to take care of our people. Believe me. - -Look, when I announced that I was running for president on June 16th, illegal immigration wasn’t even a subject. If I didn’t bring it up, we wouldn’t even be talking. - -Now I don’t often agree with Marco, and I don’t often agree with Ted, but I can in this case. The weakest person on this stage by far on illegal immigration is Jeb Bush. They come out of an act of love, whether you like it or not. He is so weak on illegal immigration it’s laughable, and everybody knows it. - -Spend a little more money on the commercials. - -I don’t know what you’re talking about. - -I never called him — I don’t call him. - -He also said about language. Two days ago he said he would take his pants off and moon everybody, and that’s fine. Nobody reports that. He gets up and says that, and then he tells me, oh, my language was a little bit rough. - -My language. Give me a break. - -You did say it. You did say it. It’s been reported in 10 different news stories. - -Or a tax. - -I would build consensus with Congress and Congress would agree with me. I’ll give you an example because I don’t like the idea of using executive orders like our president. It is a disaster what he’s doing. I would build consensus, but consensus means you have to work hard. You have to cajole. You have to get them into the Oval Office and get them all together, and you have to make deals. -Let me just tell you, I mentioned before, China — big Chinese company bought the Chicago Exchange. Carrier is moving — and if you saw the people, because they have a video of the announcement that Carrier is moving to Mexico, OK? -Well, I’ll tell you what. I would go right now to Carrier and I would say I am going to work awfully hard. You’re going to make air conditioners now in Mexico. You’re going to get all of these 1400 people that are being laid off — they’re laid off. They were crying. They were — it was a very sad situation. You’re going to go to Mexico. You’re going to make air conditioners in Mexico, you’re going to put them across our border with no tax. -I’m going to tell them right now, I am going to get consensus from Congress and we’re going to tax you when those air conditioners come. So stay where you are or build in the United States because we are killing ourselves with trade pacts that are no good for us and no good for our workers. - -John, in life you have flexibility. You do have flexibility. When you’re fighting wars, you’re going one way, you have a plan. It’s a beautiful plan. It can’t lose. The enemy makes a change, and all of a sudden you have to change. -You have to have flexibility. And Ronald Reagan, though, in terms of what we’re talking about, was the great example. He was a somewhat liberal Democrat who became a somewhat, pretty strong conservative. He became — most importantly he became a great president. He made many of the changes that I’ve made — I mean, I’ve seen as I grew up, I’ve seen, and as I get older and wiser, and I feel that I am a conservative. -Now, I also feel I’m a common-sense conservative, because some of the views I don’t agree with. And I think a lot of people agree with me, obviously, based on what’s happening. - -Well, I think these people always hit me with eminent domain, and frankly, I’m not in love with eminent domain. But eminent domain is something you need very strongly.
-When Jeb had said, “You used eminent domain privately for a parking lot.” It wasn’t for a parking lot. The state of New Jersey — too bad Chris Christie is not here, he could tell you — the state of New Jersey went to build a very large tower that was going to employ thousands of people. -I mean, it was going to really do a big job in terms of economic development. Now, just so you understand, I got hit very hard. It’s private, it’s private eminent domain. You understand that they took over a stadium in Texas, and they used private eminent domain, but he just found that out after he made the charge. - -Yeah. Well, Jeb wouldn’t have known about it. - -You shouldn’t have used it then, Jeb. - -Thank you very much, I appreciate it. - -You probably are worse than Jeb Bush. You are the single biggest liar. This guy lied – let me just tell you, this guy lied about Ben Carson when he took votes away from Ben Carson in Iowa and he just continues. Today, we had robo-calls saying, “Donald Trump is not going to run in South Carolina” — where I’m leading by a lot. -I’m not going to vote for Ted Cruz. This is the same thing he did to Ben Carson. This guy will say anything, nasty guy. Now I know why he doesn’t have one endorsement from any of his colleagues. - -He’s a nasty guy. - -Where did I support it? Where did I. - -Again, where did I support it? - -Hey Ted, where did I support it? - -Where did I support? - -That’s a lot of lies. - -It does do wonderful things but not as it relates to abortion. - -Excuse me. Excuse me, there are wonderful things having to do with women’s health. - -But not when it comes to abortion. - -Hold on. - -Ted Cruz told your brother that he wanted John Roberts to be on the United States Supreme Court. They both pushed him, he twice approved Obamacare. - -OK, governor. - -You pushed him. You pushed him. - -You worked with him and you pushed him. Why do you lie? - -Why do you lie? - -You pushed him. - -Yeah, yeah, I know, you’re an adult. - - -Well, I would say my wife tells me I’m wrong all the time. And I listen. - -Oh, let me just say — look, I am very open — I hired top people. I’ve had great success. I built a great, great company. I don’t need to do this. I’m self-funding. I’m spending a lot of money. I’ve spent — like in New Hampshire, I spent $3 million. Jeb Bush spent $44 million. He came in five, and I came in No. 1. -That’s what the country needs, folks. I spent $3 million, he spent $42 million of their money, of special interest money. And it’s just — this is not going to make — excuse me. This is not going to make our country great again. -This is not what we need in our country. We need people that know what the hell they’re doing. And politicians, they’re all talk, they’re no action. And that’s why people are supporting me. -I do listen to people. I hire experts. I hire top, top people. And I do listen. And you know what? Sometimes they’re wrong. You have to know what to do, when to do it. But sometimes they’re wrong. - -Well, I’ll tell you — over the years, I’ve made many speeches. People have asked me, big companies have asked me to make speeches, and friends of mine that run big companies on success. -And on occasion, in order to sort of really highlight something, I’ll use a profanity. One of the profanities that I got credited with using, that I didn’t use, was a very bad word, two weeks ago, that I never used. -I said, “You.” And everybody said “Oh, he didn’t say anything wrong.” But you bleeped it, so everyone thinks I said the — I didn’t say anything. I never said the word.
-It is very unfair, that criticism. Now, I will say this, with all of that being said, I have said I will not do it at all, because if I say a word that’s a little bit off color, a little bit, it ends up being a headline. -I will not do it again. I was a very good student at a great school not using — by the way — not using profanity is very easy. - -That’s not — let me respond. That’s another lie. I never went bankrupt! - -No, but it’s another lie. - -No, but it’s another lie. This guy doesn’t know what he’s talking about. Just a lie. - -Let me just tell you. Jeb goes around saying, just like the biggest business leaders in this country, I’ve used the laws of the land to chapter — I bought a company, I threw it immediately into a chapter, I made a great deal. I used the laws to my benefit, because I run a company. - -Excuse me, Jeb! - -I never went bankrupt, never. Now — but you don’t want to say that. Now, let me just say, I’ve used it, just like the biggest leaders in the country. Let me tell you something — Florida. - -Florida, he put so much debt on Florida. You know, we keep saying he’s a wonderful governor, wonderful governor. He put so much debt on Florida, and he increased spending so much that as soon as he got out of office, Florida crashed. -I happened to be there. It’s my second home. Florida crashed. He didn’t do a good job as governor. - -And you haven’t — excuse me, you haven’t heard that. You listen to the good record in Florida. You take a look at what happened, as soon as that year ended he got out, Florida crashed. Too much debt. -He loaded it up with debt, and his spending went through the roof. - -By the way, he was not a good governor. - -Take a look at your numbers. - -Florida went down the tubes right after he got out of office. - -Went right down because of what he did to it. - -Thank you. -Politicians are all talk, no action. You’ve seen where they’ve taken you to. We are 19 trillion dollars right now. It’s going to be increased with that horrible budget from a month ago that was just approved by politicians. -We need a change. We need a very big change. We’re going to make our country great again. -I say this every night, every day, every afternoon and it’s so true – we don’t win anymore. We don’t win with healthcare, we don’t win with ISIS and the military, we don’t take care of our vets, we don’t take care of our borders, we don’t win. We are going to start winning again. We are not going to be controlled by people that are special interests and lobbyists that everybody here has contributed to. And you know what, they do exactly what those folks want them to do. -We are going to make our country great and we’re going to do the right thing. I’m working for you. I’m not working for anybody else. -Thank you very much. - -I actually think I have the best temperament. I built a massive corporation. I employ thousands and thousands of people. I’ve gotten along with people for years and years, have tremendous relationships with many people, including politicians on both sides. And no matter how you cut it, when I — when I came out, I hit immigration, I hit it very hard. Everybody said, “Oh, the temperament,” because I talked about illegal immigration. - -Now, everybody’s coming to me, they’re all trying to say, well, he’s right, we have to come to him. I hit other things. I talked about Muslims. We have a problem. Nobody else wanted to mention the problem, I brought it up. I took a lot of heat.
We have to have a temporary something, because there’s something going on that’s not good. And remember this, I’m the only one up here, when the war of Iraq — in Iraq, I was the one that said, “Don’t go, don’t do it, you’re going to destabilize the Middle East.” So, I’m not one with a trigger. I’m not one with a trigger. Other people up here, believe me, would be a lot faster. - -But I’ll build the military stronger, bigger, better than anybody up here, and nobody is going to mess with us. That, I can tell you. - -Am I allowed to respond? I have to respond. - -First of all, I respect what Ted just said, but if you noticed, he didn’t answer your question. And that’s what’s going to happen — OK. - -That’s what’s going to happen with our enemies and the people we compete against. We’re going to win with Trump. We’re going to win. We don’t win anymore. Our country doesn’t win anymore. We’re going to win with Trump. And people back down with Trump. And that’s what I like and that’s what the country is going to like. - -Well, let me say a couple of things. First of all, Marco said earlier on that President Obama knows exactly what he’s doing, like we have this president that really knows. I disagree, respectfully, with Marco. - -I think we have a president who, as a president, is totally incompetent, and he doesn’t know what he’s doing. - -I think he has no idea what he’s doing. And our country is going to hell. So, I just want to say, we disagree on that. Is that okay? - -Good. - -As to North Korea? - -We have — tremendous — has been just sucked out of our country by China. China says they don’t have that good of control over North Korea. They have tremendous control. I deal with the Chinese all of the time. I do tremendous — the largest bank in the world is in one of my buildings in Manhattan. - -I deal with them. They tell me. They have total, absolute control, practically, of North Korea. They are sucking trillions of dollars out of our country — they’re rebuilding China with the money they take out of our country. I would get on with China, let China solve that problem. - -They can do it quickly and surgically. That’s what we should do with North Korea. - -Good evening. - -Yes. - -I don’t think I am. I think I’m closer to common sense. We are going to repeal Obamacare. - -We’re going to repeal Obamacare. We are going to replace Obamacare with something so much better. And there are so many examples of it. And I will tell you, part of the reason we have some people laughing, because you have insurance people that take care of everybody up here. - -In addition to that, you have the health care savings plans, which are excellent. What I do say is, there will be a certain number of people that will be on the street dying and as a Republican, I don’t want that to happen. We’re going to take care of people that are dying on the street because there will be a group of people that are not going to be able to even think in terms of private or anything else and we’re going to take care of those people. - -And I think everybody on this stage would have to agree you’re not going to let people die, sitting in the middle of a street in any city in this country. - -Well, let me just tell you about eminent domain because almost all of these people actually criticize it, but so many people have hit me with commercials and other things about eminent domain. - -Eminent domain is an absolute necessity for a country, for our country.
Without it, you wouldn’t have roads, you wouldn’t have hospitals, you wouldn’t have anything. You wouldn’t have schools, you wouldn’t have bridges. You need eminent domain. And a lot of the big conservatives that tell me how conservative they are — I think I’m more than they are — they tell me, oh — well, they all want the Keystone Pipeline. The Keystone Pipeline, without eminent domain, it wouldn’t go 10 feet, OK? You need eminent domain. And eminent domain is a good thing, not a bad thing. - -And what a lot of people don’t know because they were all saying, oh, you’re going to take their property. When somebody — when eminent domain is used on somebody’s property, that person gets a fortune. They get at least fair market value, and if they are smart, they’ll get two or three times the value of their property. But without eminent domain, you don’t have roads, highways, schools, bridges or anything. - -So eminent domain — it’s not that I love it, but eminent domain is absolutely — it’s a necessity for a country. And certainly it’s a necessity for our country. - -Yes. - -Jeb wants to be — he wants to be a tough guy tonight. I didn’t take the property. - -I didn’t take the property. - -The woman ultimately didn’t want to do that. I walked away. - -Well, let me just — you know, he wants to be a tough guy. A lot of times, you’ll have — you’ll have — and it didn’t work very well. - -A lot of times — let me talk. Quiet. A lot of times — a lot of times... - -BUSH: How tough is it to take away a property from an elderly woman? - -You — let me talk. Let me talk. Quiet. A lot of times that’s all of his donors and special interests out there. - -So — it’s what it is. That’s what — and by the way, let me just tell you, we needed tickets. You can’t get them. You know who has the tickets for the — I’m talking about, to the television audience? Donors, special interests, the people that are putting up the money. - -That’s who it is. The RNC told us. We have all donors in the audience. And the reason they’re not loving me the reason they’re not — excuse me. The reason they’re not loving me is, I don’t want their money. I’m going to do the right thing for the American public. I don’t want their money. I don’t need their money. And I’m the only one up here that can say that. - -Eminent domain, the Keystone pipeline — do you consider that a private job? Do you — do you consider that. - -No — no, let me ask you, Jeb. - -Do you consider the Keystone pipeline private? - -Is it public or private? - -Real — a public use? - -No, it’s a private job. - -It’s a private job. - -You wouldn’t have the Keystone pipeline that you want so badly without eminent domain. - -You wouldn’t have massive — excuse me, Josh — you wouldn’t have massive factories without eminent domain. - -Well, I think I am, and to me, I view the word conservative as a derivative I — of — of the word conserve. We want to conserve our money. We want to conserve our wealth. We want to conserve. We want to be smart. We want to be smart where we go, where we spend, how we spend. We want to conserve our country. We want to save our country. And we have people that have no idea how to do that and they are not doing it, and it’s a very important word and it’s something I believe in very, very strongly. - -Well, before I go there, I will tell you, I will bring jobs back from China. I will bring jobs back from Japan. I will bring jobs back from Mexico, where New Hampshire, by the way, has been virtually wiped out.
They’ve lost so many businesses going to Mexico because of horrible trade deals. And now we’re about to sign another trade deal, TPP, which is going to be a disaster for this country because they don’t talk about monetary manipulation. It is going to be a disaster. - -I’m going to bring jobs back and I’ll start bringing them back very fast. Under my tax plan — right now, we’re the highest taxed country in the world. Under my plan, we cut not only taxes for the middle class, but we cut taxes for corporations. We will bring back trillions of dollars that’s offshore. Right now, they have $2.5 trillion, and in my opinion, it’s much more than that. That’s what the government says. All of that money is going to come back. - -And we’re not going to lose Pfizer, which is now leaving, and other great companies, which are now leaving. And they’re all leaving. We have many, many companies that are leaving this country. We’re not going to lose them anymore because we’re going to have a tax structure that is going to keep them in our country. - -Well, four years ago, I said, bomb the oil and take the oil. And if we did that, they wouldn’t have the wealth they have right now. Now, I still say the same thing, because we’re doing little pinpricks. We’re not even bombing — if somebody’s driving a truck, they give notice to the person driving the truck, “we’re going to bomb.” If they don’t get out of the truck, the truck sails away with the oil. - -We actually have a case where we don’t want to bomb the oil, because we don’t want to hurt — pollute the atmosphere. Can you imagine General Douglas MacArthur or General Patton saying we can’t bomb because we’re gonna hurt the atmosphere? - -You have to knock the hell out of the oil. You have to take the oil. And you have also back channels of banking. You have people that you think are our great allies, our friends, in the Middle East, that are paying tremendous numbers of — tremendous amounts of money to ISIS. - -So we have to stop those circuits. Nobody knows banking better than I do. They have back circuits, back channels. Tremendous amounts of money is coming in through the banking system. So between the oil and the banking, you will dry them up. But it should have been done four years ago, not now. - -You have to go in — first of all, when you take away their money, when you take away their wealth, that’ll very much weaken — and it will happen fairly fast. - -They’ll last for about a year, based on all of the wealth they’ve accumulated. But when you stop the banking channels and when you stop the oil and take the oil — not just bomb it, take it — when you do that, it’s going to dry up very quickly. They’re going to become a very weakened power, quickly. Thank you. - -Well, I’ll tell you what. In the Middle East, we have people chopping the heads off Christians, we have people chopping the heads off many other people. We have things that we have never seen before — as a group, we have never seen before, what’s happening right now. - -The medieval times — I mean, we studied medieval times — not since medieval times have people seen what’s going on. I would bring back waterboarding and I’d bring back a hell of a lot worse than waterboarding. - -No, a good deal maker will make great deals, but we’ll do it the way our founders thought it should be done. People get together, they make deals. Ronald Reagan did it with Tip O’Neill very successfully, you didn’t hear so much about executive orders, if you heard about it at all. You have to be able to get a consensus.
- -Now, the real person like it was mentioned about the deal with Iran, how bad a deal is that? It doesn’t get any more amateurish than that. A good deal maker would never make a deal like that. With Congress, you have to get everybody in a room, and you have to get them to agree. But, you have to get them to agree what you want, and that’s part of being a deal maker. You can’t leave the White House, go to Hawaii and play golf for three weeks and be a real deal maker. It doesn’t work that way. You have to get people in, grab them, hug them, kiss them, and get the deal done. But, it’s got to be the deal that you want. - -Some? - -KASICH: The problem with executive authority for the president, it’s really bad news for this reason. Since he’s given up on working with Congress, he thinks he can impose anything he wants. He’s not a king. He’s a president. An executive order should be used frankly in consultation and with consulting with the leadership in the — in the Congress. - -I’ve done it in Ohio. I consult. I could use executive orders, but I don’t trump the legislature, because if you do, you aggravate them, you anger them and then the long-term prospects get bleak. We have to solve problems in America by coming together, Republicans and Democrats, Americans first, party and ideology second — in the second back seat of this country. That’s what we need to do. - -And we can do it. And we can do it. - -Yes. OK, good. It looked like he was looking right at me, right there. - -I think that — I look at what’s going on, I look at all of the polls, I do very, very well against Hillary Clinton. I can tell you, I’m the last person that she wants to run against. - -And I think you can see what we’ve done in terms of galvanizing. I’ve been all over the country. We’re — last night, I was in South Carolina, we had 12,000 people. It was set up in about four days. We have galvanized and we’ve created a movement. A lot of it has to do with — as an example, Josh’s question on drugs. - -I’m the first person that said, “Build a wall.” But I mean, a real wall, not a toy wall like they have right now. A real wall. And you’ll solve lots of problems. - -But we will galvanize the people of this country, and we will beat Hillary Clinton. Because — assuming that she runs, by the way, how she gets away with the e-mail stuff is hard to believe. So, I don’t know that she’s going to be running. But on the assumption she runs. - -I mean, look. And speaking of that, if she runs, she’s running for one reason. She’s going to be able to run for one reason, and that’s because the Democrats are protecting her. Because so many people have done so much less than her, and they were absolutely — their lives have been destroyed. - -But on the assumption they do protect her, I will win the election and we will win it by a lot. We will win it handily. We cannot have another four years of essentially Barack Obama. - -Well, there is a divide, but I have to say that the police are absolutely mistreated and misunderstood, and if there is an incident, whether it’s an incident done purposely — which is a horror, and you should really take very strong action — or if it is a mistake, it’s on your newscasts all night, all week, all month, and it never ends.
- -The police in this country have done an unbelievable job of keeping law and order, and they’re afraid for their jobs, they’re afraid of the mistreatment they get, and I’m telling you that not only, me speaking, minorities all over the country, they respect the police of this country and we have to give them more respect. - -They can’t act. They can’t act. They’re afraid of losing their pension, their job. They don’t know what to do. And I deal with them all the time. We have to give great respect, far greater than we are giving right now, to our really fantastic police. - -Well, they do. And, you know, they sue. Everybody sues, right? They see excessive — I mean, they go out, they sue. We have so much litigation — I see the courts, I see what they’re doing. They sue, and you know what? We don’t want excessive force. But at what point — you know, either you’re going to have a police force that can do its job... - -I was just up in Manchester, I met with the police officers yesterday. Tremendous people. They love the area, they love the people, they love all the people. They want to do their job. And you’re going to have abuse and you’re going to have problems, and you’ve got to solve the problems and you have to weed out the problems. But the police in this country are absolutely amazing people. - -Well, I — I know Diane Foley very well. Her husband and — these are tremendous people. I spoke for them, I raised a lot of money for the foundation. I fully understand, James, one of — that was really the first that we saw, really visually saw — it was so horrible. - -And I will tell you, though, with all of that being said, you cannot negotiate this way with terrorists. If you do, you are going to have many, many more James Foleys. - -James Foley was a great young man. His parents are incredible people. They’ve done such a good job, since his — since his death. But you just cannot negotiate that way with terrorists, or you’re gonna have so many other James Foleys. - -And one thing on the vets — during the last debate, I raised $6 million for the vets, and I will tell you something. - -Carolina. - -That’s because he got Ben Carson’s votes, by the way, but we won’t (inaudible). Our country that we love so much doesn’t win anymore. We don’t win with the military, we don’t win on the border. You look at New Hampshire with the tremendous problem we have with heroin. Number one thing I hear from the people of New Hampshire, who I love, and developed such relationships, we don’t win with healthcare. We don’t win with trade. - -You look at what other countries are doing to us. China. Everyone, they’re killing us on trade. If I’m elected president, we will win, and we will win, and we will win. Thank you, thank you very much. - -Thank you. -I began this journey six months ago. My total focus was on building up our military, building up our strength, building up our borders, making sure that China, Japan, Mexico, both at the border and in trade, no longer takes advantage of our country. -Certainly would never have made that horrible, disgusting, absolutely incompetent deal with Iran where they get $150 billion. They’re a terrorist nation. But I began it talking about other things. -And those things are things that I’m very good at and maybe that’s why I’m center stage. People saw it. People liked it. People respected it. -A month ago things changed. Radical Islamic terrorism came into effect even more so than it has been in the past. People like what I say. People respect what I say.
And we’ve opened up a very big discussion that needed to be opened up. -Thank you very much. - -We are not talking about isolation. We’re talking about security. We’re not talking about religion. We’re talking about security. Our country is out of control. People are pouring across the southern border. I will build a wall. It will be a great wall. People will not come in unless they come in legally. Drugs will not pour through that wall. -As far as other people like in the migration, where they’re going, tens of thousands of people having cell phones with ISIS flags on them? I don’t think so, Wolf. They’re not coming to this country. And if I’m president and if Obama has brought some to this country, they are leaving. They’re going. They’re gone. - -Jeb doesn’t really believe I’m unhinged. He said that very simply because he has failed in this campaign. It’s been a total disaster. Nobody cares. And frankly, I’m the most solid person up here. I built a tremendous company and all I want to do is make America great again. -I don’t want our country to be taken away from us, and that’s what’s happening. The policies that we’ve suffered under other presidents have been a disaster for our country. We want to make America great again. And Jeb, in all fairness, he doesn’t believe that. - -Well, look, this is so easy to answer. ISIS is recruiting through the Internet. ISIS is using the Internet better than we are using the Internet, and it was our idea. What I wanted to do is I wanted to get our brilliant people from Silicon Valley and other places and figure out a way that ISIS cannot do what they’re doing. -You talk freedom of speech. You talk freedom of anything you want. I don’t want them using our Internet to take our young, impressionable youth and watching the media talking about how they’re masterminds — these are masterminds. They shouldn’t be using the word “mastermind.” These are thugs. These are terrible people in ISIS, not masterminds. And we have to change it from every standpoint. But we should be using our brilliant people, our most brilliant minds to figure a way that ISIS cannot use the Internet. And then, second, we should be able to penetrate the Internet and find out exactly where ISIS is and everything about ISIS. And we can do that if we use our good people. - -I would certainly be open to closing areas where we are at war with somebody. I sure as hell don’t want to let people that want to kill us and kill our nation use our Internet. Yes, sir, I am. - -We have to be much tougher. We have to be much stronger than we’ve been. We have people that know what is going on. You take a look at just the attack in California the other day. There were numerous people, including the mother, that knew what was going on. -They saw a pipe bomb sitting all over the floor. They saw ammunition all over the place. They knew exactly what was going on. -When you had the World Trade Center go, people were put into planes that were friends, family, girlfriends, and they were put into planes and they were sent back, for the most part, to Saudi Arabia. -They knew what was going on. They went home and they wanted to watch their boyfriends on television. I would be very, very firm with families. Frankly, that will make people think because they may not care much about their lives, but they do care, believe it or not, about their families’ lives. - -Look, the problem is we need toughness. Honestly, I think Jeb is a very nice person. He’s a very nice person. But we need tough people. We need toughness.
We need intelligence and we need tough. -Jeb said when they come across the southern border they come as an act of love. - -Am I talking or are you talking, Jeb? - -You can go back. You’re not talking. You interrupted me. - -Are you going to apologize, Jeb? No. Am I allowed to finish? - -Excuse me, am I allowed to finish? - -I know you’re trying to build up your energy, Jeb, but it’s not working very well. - -Look, look, look. We need a toughness. We need strength. We’re not respected, you know, as a nation anymore. We don’t have that level of respect that we need. And if we don’t get it back fast, we’re just going to go weaker, weaker and just disintegrate. -We can’t allow that to happen. We need strength. We don’t have it. When Jeb comes out and he talks about the border, and I saw it and I was witness to it, and so was everyone else, and I was standing there, “they come across as an act of love,” he’s saying the same thing right now with radical Islam. -And we can’t have that in our country. It just won’t work. We need strength. - -With Jeb’s attitude, we will never be great again, that I can tell you. We will never be great again. - -So, they can kill us, but we can’t kill them? That’s what you’re saying. And as far as the Internet is concerned, we’re not talking about closing the Internet. I’m talking about parts of Syria, parts of Iraq, where ISIS is, spotting it. -Now, you could close it. What I like even better than that is getting our smartest and getting our best to infiltrate their Internet, so that we know exactly where they’re going, exactly where they’re going to be. I like that better. - -But we have to — who would be — I just can’t imagine somebody booing. These are people that want to kill us, folks, and you’re — you’re objecting to us infiltrating their conversations? I don’t think so. I don’t think so. - -In my opinion, we’ve spent $4 trillion trying to topple various people that frankly, if they were there and if we could’ve spent that $4 trillion in the United States to fix our roads, our bridges, and all of the other problems; our airports and all of the other problems we’ve had, we would’ve been a lot better off. I can tell you that right now. -We have done a tremendous disservice, not only to the Middle East, we’ve done a tremendous disservice to humanity. The people that have been killed, the people that have been wiped away, and for what? It’s not like we had victory. -It’s a mess. The Middle East is totally destabilized. A total and complete mess. I wish we had the $4 trillion or $5 trillion. I wish it were spent right here in the United States, on our schools, hospitals, roads, airports, and everything else that are all falling apart. - -Well, there’s nothing to respond to. Well, people feel differently. I mean, the fact is Benghazi was a disaster because of Libya, everything just fell into place. It could not have been worse. -What do we have now? We have nothing. We’ve spent $3 trillion and probably much more – I have no idea what we’ve spent. Thousands and thousands of lives, we have nothing. Wounded warriors all over the place who I love, we have nothing for it. -And by the way – and Ben said incorrectly – and I’m not saying this as a knock – he’s one of the finest men. You’re not going to find a finer man. -But I’ve been talking about oil for three years. I’ve been saying, “take the oil, take the oil.” I didn’t say, “just bomb it,” I said, “take it and use it and distribute it so that the wounded warriors -” People, I’ve been saying this now for many years.
- -Now, all of a sudden everybody’s saying, “take the oil.” It wasn’t so fashionable to take the oil six months ago. I’ve been saying it for years. - -I think Assad is a bad guy, a very bad guy, all right? Lots of people killed. I think we are backing people we have no idea who they are. The rebels, we call them the rebels, the patriotic rebels. We have no idea. A lot of people think, Hugh, that they are ISIS. -We have to do one thing at a time. We can’t be fighting ISIS and fighting Assad. Assad is fighting ISIS. He is fighting ISIS. Russia is fighting now ISIS. And Iran is fighting ISIS. -We have to do one thing at a time. We can’t go — and I watched Lindsey Graham, he said, I have been here for 10 years fighting. Well, he will be there with that thinking for another 50 years. He won’t be able to solve the problem. -We have to get rid of ISIS first. After we get rid of ISIS, we’ll start thinking about it. But we can’t be fighting Assad. And when you’re fighting Assad, you are fighting Russia, you’re fighting — you’re fighting a lot of different groups. -But we can’t be fighting everybody at one time. - -I think it’s very sad that CNN leads Jeb Bush, Governor Bush, down a road by starting off virtually all the questions, “Mr. Trump this, Mister” — I think it’s very sad. And, frankly, I watched — I think it’s very sad. And, frankly, I watched the first debate, and the first long number of questions were, “Mr. Trump said this, Mr. Trump said that. Mr. Trump” — these poor guys — although, I must tell you, Santorum, good guy. Governor Huckabee, good guy. They were very nice, and I respect them greatly. But I thought it was very unfair that virtually the entire early portion of the debate was Trump this, Trump that, in order to get ratings, I guess. In order to get ratings, I guess. - -I just think it’s very — excuse me. - -Excuse me. I think it’s very unprofessional. - -Well, I think it’s very unprofessional. - -OK, fine. - -This isn’t tough and easy. I wish it was always as easy as you, Jeb. - -Oh, yeah. - -Oh, I know. You’re a tough guy, Jeb. I know. - -You’re tough. - -Well, let’s see. I’m at 42, and you’re at 3. So, so far, I’m doing better. - -So far, I’m doing better. You know, you started off over here, Jeb. You’re moving over further and further. Pretty soon you’re going to be off the end. - -I believe I did. - -I have a very hardline position, we have a country or we don’t have a country. People that have come into our country illegally, they have to go. They have to come back in through a legal process. -I want a strong border. I do want a wall. Walls do work, you just have to speak to the folks in Israel. Walls work if they’re properly constructed. I know how to build, believe me, I know how to build. -I feel a very, very strong bind, and really I’m bound to this country, we either have a border or we don’t. People can come into the country, we welcome people to come but they have to come in legally. - -Well, first of all, I think we need somebody absolutely that we can trust, who is totally responsible; who really knows what he or she is doing. That is so powerful and so important. And one of the things that I’m frankly most proud of is that in 2003, 2004, I was totally against going into Iraq because you’re going to destabilize the Middle East. I called it. I called it very strongly. And it was very important. -But we have to be extremely vigilant and extremely careful when it comes to nuclear. Nuclear changes the whole ball game.
Frankly, I would have said get out of Syria; get out — if we didn’t have the power of weaponry today. The power is so massive that we can’t just leave areas that 50 years ago or 75 years ago we wouldn’t care. It was hand-to-hand combat. -The biggest problem this world has today is not President Obama with global warming, which is inconceivable, this is what he’s saying. The biggest problem we have is nuclear — nuclear proliferation and having some maniac, having some madman go out and get a nuclear weapon. That’s in my opinion, that is the single biggest problem that our country faces right now. - -I think — I think, for me, nuclear is just the power, the devastation is very important to me. - -I did. - -Let me just say that I have gotten to know him over the last three or four days. He has a wonderful temperament. - -He’s just fine. Don’t worry about it. - -You better not attack. - -I really am. I’ll be honest, I really am. - -I mean, the people have been putting me. - -I really am. - -Let me just. - -I’ve gained great respect for the Republican leadership. I’ve gained great respect for many — and I’m going to even say — I mean, in different forms for the people on the dais, in different forms. - -In different forms. -But I have great respect for the people I have met through this process. I’ve never done this process before. I’ve never been a politician. I mean, for the last six months I’ve been a politician. -But I will tell you, I am totally committed to the Republican Party. I feel very honored to be the front runner. - -And I think I’ll do very well if I’m chosen. If I’m so fortunate to be chosen, I think I’ll do very well. -Polls have come out recently saying I would beat Hillary. I will do everything in my power to beat Hillary Clinton, I promise you. - -Our country doesn’t win anymore. We don’t win on trade. We don’t win on the military. We can’t defeat ISIS. We’re not taking care of our great people, the veterans. We’re not taking care of them. -We have to change our whole way, our health care system is a disaster. It’s going to implode in 2017, just like you’re sitting there. It doesn’t work. Nothing works in our country. If I’m elected president, we will win again. We will win a lot. And we’re going to have a great, great country, greater than ever before. -Thank you. - -I can’t be Neil. And the reason I can’t be is that we are a country that is being beaten on every front economically, militarily. There is nothing that we do now to win. We don’t win anymore. Our taxes are too high. I’ve come up with a tax plan that many, many people like very much. It’s going to be a tremendous plan. I think it’ll make our country and our economy very dynamic. - -But, taxes too high, wages too high, we’re not going to be able to compete against the world. I hate to say it, but we have to leave it the way it is. People have to go out, they have to work really hard and have to get into that upper stratum. But we cannot do this if we are going to compete with the rest of the world. We just can’t do it. - -I would not do it. - -I was so happy yesterday when I saw that decision come down. That was an unbelievable decision. - -And we don’t have enough of those decisions coming down. He of the executive order, because nobody wants to listen to him, including the Democrats, so he just goes around signing executive orders. - -That was a great day. And, frankly, we have to stop illegal immigration. It’s hurting us economically. It’s hurting us from every standpoint.
It’s causing tremendous difficulty with respect to drugs and what that does to many of our inner cities in particular. - -And it really is — was such an unbelievable moment because the courts have not been ruling in our favor. And it was a 2-1 decision. And it was a terrific thing that happened. - -And I will tell you, we are a country of laws. We need borders. We will have a wall. The wall will be built. The wall will be successful. And if you think walls don’t work, all you have to do is ask Israel. The wall works, believe me. Properly done. Believe me. - -You are going to have to bring people — you are going to have to send people out. Look, we’re a country. - -Maria, we’re a country of laws. We either have a country or we don’t have a country. We are a country of laws. They are going to have to go out and they will come back, but they are going to have to go out and hopefully they get back. - -But we have no choice if we’re going to run our country properly and if we’re going to be a country. - -All I can say is, you’re lucky in Ohio that you struck oil. That is for one thing. - -Moved them again beyond the border, they came back. Didn’t like it. Moved them way south. They never came back. - -No, it’s unfair. - -built an unbelievable company worth billions and billions of dollars. I don’t have to hear from this man, believe me. I don’t have to hear from him. - -We have millions of people right now on line trying to come into this country. Very, very unfair to the people that want to come into our country legally. They’ve gone through the process. They’re on line. They’re waiting. Very, very unfair to them. That I can tell you. - -Yes. - -No, I’m sorry. No, excuse me. I was there. - -We have to make our military bigger, better, stronger than ever before so that nobody messes with us, and in the long run, it’s going to save us. I agree with Marco, I agree with Ted, we have no choice. And, I can tell you this with certainty. We all have a different tax plan. Some I don’t totally agree with. - -One thing we understand, each one of those tax plans is better than the mess that we have right now. - -Yes. - -Yeah. - -It’s a horrible deal. - -The TPP is a horrible deal. It is a deal that is going to lead to nothing but trouble. It’s a deal that was designed for China to come in, as they always do, through the back door and totally take advantage of everyone. It’s 5,600 pages long. So complex that nobody’s read it. It’s like Obamacare; nobody ever read it. They passed it; nobody read it. And look at the mess we have right now. And it will be repealed. - -But this is one of the worst trade deals. And I would, yes, rather not have it. With all of these countries, and all of the bad ones getting advantage and taking advantage of what the good ones would normally get, I’d rather make individual deals with individual countries. We will do much better. - -We lose a fortune on trade. The United States loses with everybody. We’re losing now over $500 billion in terms of imbalance with China, $75 billion a year imbalance with Japan. By the way, Mexico, $50 billion a year imbalance. - -So I must say, Gerard, I just think it’s a terrible deal. I love trade. I’m a free trader, 100 percent. But we need smart people making the deals, and we don’t have smart people making the deals. - -Yes. Well, the currency manipulation they don’t discuss in the agreement, which is a disaster. If you look at the way China and India and almost everybody takes advantage of the United States — China in particular, because they’re so good.
It’s the number-one abuser of this country. And if you look at the way they take advantage, it’s through currency manipulation. It’s not even discussed in the almost 6,000-page agreement. It’s not even discussed. - -And as you understand, I mean, you understand very well from the Wall Street Journal, currency manipulation is the single great weapon people have. They don’t even discuss it in this agreement. - -So I say, it’s a very bad deal, should not be approved. If it is approved, it will just be more bad trade deals, more loss of jobs for our country. We are losing jobs like nobody’s ever lost jobs before. I want to bring jobs back into this country. - -Well, first of all, it’s not only Russia. We have problems with North Korea where they actually have nuclear weapons. You know, nobody talks about it, we talk about Iran, and that’s one of the worst deals ever made. One of the worst contracts ever signed, ever, in anything, and it’s a disgrace. But, we have somebody over there, a madman, who already has nuclear weapons. We don’t talk about that. That’s a problem. - -China is a problem, both economically in what they’re doing in the South China Sea, I mean, they are becoming a very, very major force. So, we have more than just Russia. But, as far as the Ukraine is concerned, and you could say Syria — as far as Syria, I like — if Putin wants to go in, and I got to know him very well because we were both on 60 Minutes, we were stablemates, and we did very well that night. - -But, you know that. - -But, if Putin wants to go and knock the hell out of ISIS, I am all for it, 100%, and I can’t understand how anybody would be against it. - -They blew up — hold it. - -They blew up, wait a minute. - -They blew up a Russian airplane. He cannot be in love with these people. He’s going in, and we can go in, and everybody should go in. As far as the Ukraine is concerned, we have a group of people, and a group of countries, including Germany — tremendous economic behemoth — why are we always doing the work? - -We are — I’m all for protecting Ukraine and working — but, we have countries that are surrounding the Ukraine that aren’t doing anything. They say, “Keep going, keep going, you dummies, keep going. Protect us.” - -And we have to get smart. We can’t continue to be the policeman of the world. We are $19 trillion dollars, we have a country that’s going to hell, we have an infrastructure that’s falling apart. Our roads, our bridges, our schools, our airports, and we have to start investing money in our country. - -Assad is a bad guy, but we have no idea who the so-called rebels — I read about the rebels, nobody even knows who they are. I spoke to a general two weeks ago, he said — he was very up on exactly what we’re talking about. He said, “You know, Mr. Trump? We’re giving hundreds of millions of dollars of equipment to these people, we have no idea who they are.” - -So, I don’t like Assad. Who’s going to like Assad? But, we have no idea who these people are, and what they’re going to be, and what they’re going to represent. They may be far worse than Assad. Look at Libya. Look at Iraq. Look at the mess we have after spending $2 trillion dollars, thousands of lives, wounded warriors all over the place — who I love, OK? All over. - -We have nothing. And, I said, keep the oil. And we should have kept the oil, believe me. We should have kept the oil. And, you know what? We should have given the oil.
We should’ve given big chunks to the people that lost their arms, their legs, and their families, and their sons, and daughters, because right now, you know who has a lot of that oil? Iran, and ISIS. - -Why does she keep interrupting everybody? - -Terrible. - -We are not. - -No, no, no. - -Well, what’s happening right now, Neil, is something that has not been a subject of conversation by politicians. As primarily the only politician, I guess other than Carly on the stage, they haven’t talked about a corporate inversion. A corporate inversion — companies are leaving. You know, we used to leave New York to go to Florida. We got better taxes, we got, maybe, something else. - -Now, they’re leaving the United States to go to other countries. They have trillions of dollars in those other countries. They’re going for two reasons, they can’t get their money back in. It’s something where the Democrats and the Republicans both agree, it’s the only thing I can think of. They both agree, let the money come back in. - -Three and a half years, they still can’t make a deal. They can’t get the money in. It’s probably two and a half trillion, but, I think it’s much more than that. All of that money could become — could come right in and be used to rebuild our country, and investments in our country. They can’t do it. What we have to do, and what I’ve done, is made the tax rate — and one of the reasons they don’t is the taxes are so obnoxious, they can’t do it. - -Where, I made it a 10% number, as you know. I’ve been very highly praised for it. A lot of money’s going to come back in, we’re going to get rid of the bureaucratic problems, and roadblocks, because that’s also a problem. And, we’re going to have all of this money pour back into the United States. It’s going to be used to build businesses, for jobs, and everything else. - -And, as I say, my expression is, let’s make America great again. - -Thank you. Over the years, I’ve created tens of thousands of jobs and a great company. It’s a company I’m very proud of. Some of the most iconic assets anywhere in the world. And I will tell you, I don’t have to give you a website because I’m self-funding my campaign. I’m putting up my own money. - -I want to do something really special. I want to make our country greater than it’s ever been. I think we have that potential. We cannot lose this election. We cannot let Hillary Clinton, who is the worst secretary of state in the history of our country, win this election. - -We will fight. We will win. And we truly will make this even more special. We have to make it better than ever before. And I will tell you, the United States can actually be better than ever before. Thank you. - -I think maybe my greatest weakness is that I trust people too much. I’m too trusting. And when they let me down, if they let me down, I never forgive. I find it very, very hard to forgive people that deceived me. So I don’t know if you would call that a weakness, but my wife said “let up.” - -Right. - -Right. - -That’s right. - -No, not a comic book, and it’s not a very nicely asked question the way you say that. -Larry Kudlow is an example, who I have a lot of respect for, who loves my tax plan. We’re reducing taxes to 15 percent. We’re bringing corporate taxes down, bringing money back in, corporate inversions. We have $2.5 trillion outside of the United States which we want to bring back in. -As far as the wall is concerned, we’re going to build a wall. We’re going to create a border. We’re going to let people in, but they’re going to come in legally.
They’re going to come in legally. And it’s something that can be done, and I get questioned about that. They built the Great Wall of China. That’s 13,000 miles. Here, we actually need 1,000 because we have natural barriers. So we need 1,000. - -We can do a wall. We’re going to have a big, fat beautiful door right in the middle of the wall. We’re going to have people come in, but they’re coming in legally. And Mexico’s going to pay for the wall because Mexico — I love the Mexican people; I respect the Mexican leaders — but the leaders are much sharper, smarter and more cunning than our leaders. -And just to finish, people say, how will you get Mexico to pay? A politician other than the people in the states — I don’t want to — a politician cannot get them to pay. I can. We lose. We have a trade imbalance, excuse me, John, of $50 billion. - -believe me the world is peanuts by comparison. - -Right. Dynamically. - -Then you have to get rid of Larry Kudlow, who sits on your panel, who’s a great guy, who came out the other day and said, I love Trump’s tax plan. - -First of all, John got lucky with a thing called fracking, OK? He hit oil. He got lucky with fracking. Believe me, that is why Ohio is doing well. Number — and that is important for you to know. -Number two, this was the man that was a managing general partner at Lehman Brothers when it went down the tubes and almost took every one of us with it, including Ben and myself, because I was there and I watched what happened. -And Lehman Brothers started it all. He was on the board. And he was a managing general partner. -And just thirdly, he was so nice. He was such a nice guy. And he said, oh, I’m never going to attack. But then his poll numbers tanked. He has got — that is why he is on the end. - -And he got nasty. And he got nasty. So you know what? You can have him. - -Well, first of all, like many other very big businessmen, I could name them here, but I’m not going to do that for a lot of obvious reasons, but the biggest, and almost all of them, they’ve all used the chapter laws, the bankruptcy laws to their own benefit. -Before this, I was a very successful person as a developer and as a businessman. Atlantic City has gone bad. I mean, Chris will know about that. I’m not blaming Chris, by the way, but he will know about that. Caesars — excuse me — Caesars, the Rolls-Royce, as you know, is in bankruptcy. Almost every hotel in Atlantic City has either been in bankruptcy or will be in bankruptcy — the biggest. -But also the biggest people (ph) — now I’ve used that to my advantage as a businessman, for my family, for myself. I never filed for bankruptcy. But many, many people did. What happened with Atlantic City is very, very disgraceful. -Now hundreds of companies I’ve opened. I’ve used it three times, maybe four times. Came out great. But I guess I’m supposed to come out great. That is what I could do for the country. We owe $19 trillion, boy am I good at solving debt problems. Nobody can solve it like me. -But I will tell you this, Atlantic City, you’re using that, hundreds of companies that I have opened have thrived. I built a net worth of way over $10 billion, and I have done it four times out of hundreds. And I’m glad I did it. -I used the laws of the country to my benefit, I’m sorry. - -Thank you. - -I was not at all critical of him. I was not at all. In fact, frankly, he’s complaining about the fact that we’re losing some of the most talented people. They go to Harvard. They go to Yale. They go to Princeton.
They come from another country and they’re immediately sent out. -I am all in favor of keeping these talented people here so they can go to work in Silicon Valley. - -So I have nothing at all critical of him. - -Probably, I don’t know — you people write the stuff. I don’t know where you. - -And if I could say just one thing. I am the only person in either campaign that’s self-funding. I’m putting up 100 percent of my own money. And right now, I will be putting up a tremendous — so far, I’ve put up less than anybody and I have the best results. Wouldn’t that be nice if the country could do that? -But I will be putting — I will be putting up, you know, tremendous amounts of money. Super PACs are a disaster. They’re a scam. They cause dishonesty. And you better get rid of them because they are causing a lot of bad decisions to be made by some very good people. And I’m not blaming these folks — well, I guess I could. - -Very good people are making very bad decisions right now. And if anything comes out of this whole thing with some of these nasty and ridiculous questions, I will tell you, you better get rid of the super PACs because they’re causing a big problem with this country, not only in dishonesty and what’s going on, but also in a lot of bad decisions that have been made for the benefit of lobbyists and special interests. - -I never said that. I never said that. - -You’ve got another gentleman in Florida, who happens to be a very nice guy, but not. - -he’s really doing some bad - -I’m in favor of people coming into this country legally. And you know what? They can have it any way you want. You can call it visas, you can call it work permits, you can call it anything you want. I’ve created tens of thousands of jobs, and in all due respect — and actually some of these folks I really like a lot — but I’m the only one that can say that. I have created tens of thousands of jobs, and I’ll be creating many millions of jobs if I’m given — if I’m given the opportunity to be president. -As far as Mark is concerned, as far as the visas are concerned, if we need people, they have — it’s fine. They have to come into this country legally. We have a country of borders. We have a country of laws. We have to obey the laws. It’s fine if they come in, but they have to come in legally. - -Yes. - -Or somebody else. Right. - -Yes, I might feel more comfortable. I would say that I would and I have a permit, which is very unusual in New York — a permit to carry. And I do carry on occasion, sometimes a lot. But I like to be unpredictable so that people don’t know exactly. - -By the way, unlike our country where we’re totally predictable and the enemy, whether it’s ISIS or anybody else, they know exactly what we’re doing because we have the wrong leadership. - -I would change them. I would change them. - -Such a nasty — such a nasty question, but thank you, Governor. - -Yes, it’s very simple. We’re going to make a really dynamic economy from what we have right now, which is not at all dynamic. We’re going to bring jobs back from Japan, we’re going to bring jobs back from China, we’re going to bring, frankly, jobs back from Mexico where, as you probably saw, Nabisco is leaving Chicago with one of their biggest plants, and they’re moving it to Mexico. -We’re going to bring jobs and manufacturing back. We’re going to cut costs.
We’re going to save Social Security, and we’re going to save Medicare. - -Our country doesn’t win anymore. We used to win, we don’t win anymore. We lose on trade. We lose with ISIS. We lose with one of the worst deals I’ve ever seen negotiated of any kind, that’s our recent catastrophe with Iran. We don’t win. -Let me give you one quick example. These folks, CNBC, they had it down at three, three and a half hours. I just read today in the New York Times, $250,000 for a 30-second ad. I went out and said, it’s ridiculous. Nobody — I could stand up here all night. Nobody wants to watch three and a half, or three hours. It was a big sacrifice, and I have to hand it to Ben. -We called Ben, he was with me 100%. We called in, we said, that’s it. We’re not doing it. They lost a lot of money, everybody said it couldn’t be done. Everybody said it was going to be three hours, three and a half, including them, and in about two minutes I renegotiated it so we can get the hell out of here. Not bad. - -And, I’ll do that with the country. We will make America great again. And, thank you everybody. Just for the record. - -That’s not right. That is absolutely not right. You know that. That is not right. - -I'm Donald Trump. I wrote "The Art of the Deal". I say not in a braggadocious way, I've made billions and billions of dollars dealing with people all over the world, and I want to put whatever that talent is to work for this country so we have great trade deals, we make our country rich again, we make it great again. We build our military, we take care of our vets, we get rid of Obamacare, and we have a great life altogether. - -Thank you. Thank you. - -Well, first of all, Rand Paul shouldn't even be on this stage. He's number 11, he's got 1 percent in the polls, and how he got up here, there's far too many people anyway. - -As far as temperament -- and we all know that -- as far as temperament, I think I have a great temperament. I built a phenomenal business with incredible, iconic assets, one of the really truly great real-estate businesses. - -And I may be an entertainer, because I've had tremendous success with number-one bestsellers all over the place, with "The Apprentice" and everything else I've done. - -But I will tell you this: What I am far and away greater than an entertainer is a businessman, and that's the kind of mindset this country needs to bring it back, because we owe $19 trillion right now, $19 trillion, and you need this kind of thinking to bring our country back. - -And believe me, my temperament is very good, very calm. But we will be respected outside of this country. We are not respected now. - -I never attacked him on his look, and believe me, there's plenty of subject matter right there. - -That I can tell you. - -I've actually been in politics all my life, although I've been on that side as opposed to this side. I'm now a politician for about three months. Obviously, I'm doing pretty well. I'm number one in every polls (sic) by a lot. - -But the qualification is that I've dealt with people all over the world, been successful all over the world. Everything I've done virtually has been a tremendous success. - -When markets changed, when things turned, I heard Governor Pataki, who, by the way, was a failed governor in New York, a very seriously failed -- he wouldn't be elected dog catcher right now. I heard what he had to say. - -And I will tell you this: Atlantic City, I've made a tremendous amount of money in Atlantic City.
I left seven years ago, I've gotten great credit for my timing, and that's what I'm all about. - -I'm a businessman, did really well, really well, and Jeb, what I want to do is put that ability into this country to make our country rich again. And I can do that, and I'm not sure that anybody else in the group will be able to do that. - -But I have to say. - -Well, in Wisconsin. - -Excuse me. - -In Wisconsin, you're losing $2.2 billion right now. - -I would do so much better than that. - -No. - -I'm using facts. - -Every major business leader has used the -- I never went bankrupt, by the way, as you know, everybody knows. But we -- hundreds of companies, hundreds of deals, I've used the bankruptcy laws. That's what's wrong with politicians in Washington right now. They think we can take a country into bankruptcy. - -Every major business leader has used the -- I never went bankrupt, by the way, as you know, everybody knows. But -- hundreds of companies, hundreds of deals, I used the law four times and made a tremendous thing. I'm in business. I did a very good job. - -But I will say this, and people are very, very impressed with what I've done, the business people. But when the folks of Iowa found out the true facts of the job that you've done in Wisconsin, all of a sudden you tubed (ph) -- he was No. 1 and now he's No. 6 or 7 in the polls. - -So, look, we brought it out, you were supposed to make a billion dollars in the state. You lost 2.2 -- you have right now, a huge budget deficit. That's not a Democratic point. That's a point. That's a fact. And when the people of Iowa found that out, I went to No. 1 and you went down the tubes. - -I didn't. - -Totally false. - -I would have gotten it. - -I promise I would have gotten it. - -I promise if I wanted it, I would have gotten it. - -I know my people. - -I know my people. - -No. I just will tell you that, you know, Jeb made the statement. I'm not only referring to him. I -- a lot of money was raised by a lot of different people that are standing up here. And the donors, the special interests, the lobbyists have very strong power over these people. - -I'm spending all of my money, I'm not spending -- I'm not getting any -- I turned down -- I turn down so much, I could have right now from special interests and donors, I could have double and triple what he's got. I've turned it down. I turned down last week $5 million from somebody. - -So I will tell you I understand the game, I've been on the other side all of my life. And they have a lot of control over our politicians. And I don't say that favorably, and I'm not sure if there's another system, but I say this. I am not accepting any money from anybody. Nobody has control of me other than the people of this country. I'm going to do the right thing. - -That's true. That's true. - -I was -- excuse me, Jeb. - -I was a businessman, I got along with Clinton, I got along with everybody. That was my job, to get along with people. - -I didn't want to -- excuse me. One second. - -OK, more energy tonight. I like that. - -I didn't want -- it was my obligation as a businessman to my family, to my company, to my employees, to get along with all politicians. I get along with all of them, and I did a damn good job in doing it. Go ahead. - -Got along with everybody. - -Wrong. - -Don't make things up. Jeb, don't make things up. Come on. - -Don't make things up. - -So, number one, they have to respect you. He has absolutely no respect for President Obama. Zero. - -Syria's a mess.
You look at what's going on with ISIS in there, now think of this: we're fighting ISIS. ISIS wants to fight Syria. Why are we fighting ISIS in Syria? Let them fight each other and pick up the remnants. - -I would talk to him. I would get along with him. I believe -- and I may be wrong, in which case I'd probably have to take a different path, but I would get along with a lot of the world leaders that this country is not getting along with. - -We don't get along with China. We don't get along with the heads of Mexico. We don't get along with anybody, and yet, at the same time, they rip us left and right. They take advantage of us economically and every other way. We get along with nobody. - -I will get along -- I think -- with Putin, and I will get along with others, and we will have a much more stable -- stable world. - -I believe that I will get along -- we will do -- between that, Ukraine, all of the other problems, we won't have the kind of problems that our country has right now with Russia and many other nations. - -TAPPER: Senator Rubio, you've taken a very different approach to the -- the question of Russia. You've called Vladimir Putin a, quote, "gangster." Why would President Rubio's approach be more effective than President Trump's? - -I wouldn't have drawn the line, but once he drew it, he had no choice but to go across. They do bear some responsibility, but I think he probably didn't do it, not for that reason. - -Somehow, he just doesn't have courage. There is something missing from our president. Had he crossed the line and really gone in with force, done something to Assad -- if he had gone in with tremendous force, you wouldn't have millions of people displaced all over the world. - -They had a responsibility, absolutely. I think we have three of them here. - -I think they had a responsibility, yes. - -I think it will haunt him. I think it's terrible. I think it's going to haunt him absolutely. He came back later and he said he misspoke. There was no question because I heard when he said the statement. I was watching and he said the statement. - -And I said, wow, I can't believe it. I will take care of women. I respect women. I will take care of women. - -One thing we will say and I would like to get back to the Iran situation. We're talking about Iran. The agreement was terrible. It was incompetent. I've never seen anything like it. One of the worst contracts of any kind I've ever seen. - -And nobody ever mentions North Korea where you have this maniac sitting there and he actually has nuclear weapons and somebody better start thinking about North Korea and perhaps a couple of other places. But certainly North Korea. - -And Ted and I have spoken. We've -- a lot of us have spoken. We're talking about Iran. They are bad actors, bad things are going to happen. But in the meantime, you have somebody right now in North Korea who has got nuclear weapons and who is saying almost every other week, I'm ready to use them. And we don't even mention it. - -So why didn't you say it? Why didn't you say it? - -I know, but why did you say it? I heard it myself. Why did you say it? - -You said you're going to cut funding for women's health. You said it. - -You said it. - -I think she's got a beautiful face, and I think she's a beautiful woman. - -Correct. First of all, I want to build a wall, a wall that works. So important, and it's a big part of it. - -Second of all, we have a lot of really bad dudes in this country from outside, and I think Chris knows that, maybe as well as anybody.
- -They go, if I get elected, first day they're gone. Gangs all over the place. Chicago, Baltimore, no matter where you look. - -We have a country based on laws. I will make sure that those laws are adhered to. These are illegal immigrants. I don't think you'd even be asking this question if I didn't run because when I ran, and I brought this up, my opening remarks at Trump Tower, I took heat like nobody has taken heat in a long time. And, then they found out with the killing of Katie, from San Francisco, and so many other crimes, they found out that I was right. - -And, most people, many people, apologized to me. I don't think you'd even be talking about illegal immigration if it weren't for me. So, we have a country of laws, they're going to go out, and they'll come back if they deserve to come back. If they've had a bad record, if they've been arrested, if they've been in jail, they're never coming back. We're going to have a country again. Right now, we don't have a country, we don't have a border, and we're going to do something about it, and it can be done with proper management, and it can be done with heart. - -By the way, I agree with -- with what Chris is saying, but, I will say this. Illegal immigration is costing us more than $200 billion dollars a year just to maintain what we have. - -Correct. - -Well, I have to tell you, I hear phenomenal things. I hear your wife is a lovely woman. - -I don't know her, and this is a total mischaracterization. - -Good. - -No, I won't do that, because I've said nothing wrong. - -But I do hear she's a lovely woman. - -Jeb said that they come into our country as an act of love. - -With all of the problems that we have, in so many instances -- we have wonderful people coming in. But with all of the problems -- this is not an act of love. He's weak on immigration -- by the way, in favor of Common Core, which is also a disaster, but weak on immigration. - -He doesn't get my vote. - -Not with this intensity. - -As I said, we are spending $200 billion -- we are spending $200 billion a year on maintaining what we have. We will move them out. The great ones will come back, the good ones will come back. - -They'll be expedited, they'll be back, they'll come back legally. We'll have a country -- they'll come back, legally. - -Well, I think it's wonderful and all, but I did it a little bit half-heartedly, but I do mean it to a large extent. - -We have a country, where, to assimilate, you have to speak English. And I think that where he was, and the way it came out didn't sound right to me. We have to have assimilation -- to have a country, we have to have assimilation. - -I'm not the first one to say this, Dana. We've had many people over the years, for many, many years, saying the same thing. This is a country where we speak English, not Spanish. - -This is a reporter, not a high school kid. - -Well, first of all, the -- the 14th Amendment says very, very clearly to a lot of great legal scholars -- not television scholars, but legal scholars -- that it is wrong. It can be corrected with an act of Congress, probably doesn't even need that. - -A woman gets pregnant. She's nine months, she walks across the border, she has the baby in the United States, and we take care of the baby for 85 years. I don't think so. - -And by the way, Mexico and almost every other country anywhere in the world doesn't have that. We're the only ones dumb enough, stupid enough to have it. And people -- and by the way, this is not just with respect to Mexico.
They are coming from Asia to have babies here, and all of a sudden, we have to take care of the babies for the life of the baby. - -The 14th Amendment, it reads properly, you can go and -- it's probably going to have to be checked -- go through a process of court, probably ends up at the Supreme Court, but there are a lot of great legal scholars that say that is not correct. - -And in my opinion, it makes absolutely no -- we're the only -- one of the only countries, we're going to take care of those babies for 70, 75, 80, 90 years? I don't think so. - -I agree 100 percent, by the way, with Carly on the fact that the Democrats do not want to solve this problem, for the obvious reasons, but they do not. - -But I believe that a reading of the 14th Amendment allows you to have an interpretation where this is not legal and where it can't be done. I've seen both sides, but some of the greatest scholars agree with me, without having to go through Congress. - -If you do go through Congress, you can absolutely solve the problem. - -That's true, sure. - -Let me just explain. The head of the Yale Business School, Jeffrey Sonnenfeld, wrote a paper recently, one of the worst tenures for a CEO that he has ever seen, ranked one of the top 20 in the history of business. The company is a disaster and continues to be a disaster. They still haven't recovered. In fact, today, on the front page of the Wall Street Journal, they fired another 25 or 30,000 people, saying we still haven't recovered from the catastrophe. - -When Carly says the revenues went up, that's because she bought Compaq, it was a terrible deal, and it really led to the destruction of the company. - -Now one other company before that was Lucent. Carly was at Lucent before that. And Lucent turned out to be a catastrophe also. So I only say this. She can't run any of my companies. That I can tell you. - -I never filed for bankruptcy. - -I'll tell you why; it's very simple. - -I've made over $10 billion. I had a casino company -- Caesars just filed for bankruptcy. Chris will tell you -- it's not Chris' fault either -- but almost everybody in Atlantic City is either in trouble or filed for -- maybe I'll blame Chris. - -But Atlantic City is a disaster. - -Wait a minute, Carly. Wait. I let you speak. Atlantic City is a disaster, and I did great in Atlantic City. I knew when to get out. My timing was great. And I got a lot of credit for it. - -Many of the great business people that you know -- and Carl Icahn is going to work with me on making great deals for this country. But whether it's Carl or so many others that we read about all the time. They have used the laws of the land. - -Well, I'd like to respond, I'd like to respond. - -Well, I think the thing about the flat tax, I know it very well. What I don't like is that if you make $200 million a year, you pay ten percent, you're paying very little relative to somebody that's making $50,000 a year, and has to hire H&R Block to do the -- because it's so complicated. - -One thing I'll say to Ben is that we've had a graduated tax system for many years, so it's not a socialistic thing. What I'd like to do, and I'll be putting in the plan in about two weeks, and I think people are going to like it, it's a major reduction in taxes. It's a major reduction for the middle class. The hedge fund guys won't like me as much as they like me right now. I know them all, but they'll pay more. - -I know people that are making a tremendous amount of money and paying virtually no tax, and I think it's unfair.
- -Well, I heard Hugh Hewitt, a nice man, he apologized because he actually said that we had a misunderstanding. And he said today that Donald Trump is maybe the best interview there is anywhere that he has ever done. - -Now unless he was just saying that on CNN to be nice, but he did say that. - -And we had a legitimate misunderstanding in terms of his pronunciation of a word. - -But I would say just. - -Well, I think it was. And he actually said that. Did you say that? - -OK. So I will say this, though, Hugh was giving me name after name, Arab name, Arab name, and there are few people anywhere, anywhere that would have known those names. I think he was reading them off a sheet. - -And frankly I will have -- and I told him, I will have the finest team that anybody has put together and we will solve a lot of problems. - -You know, right now they know a lot and look at what is happening. The world is blowing up around us. We will have great teams and great people. - -I hope that answers your question. I mean, you are in the Senate, but I hope that answers your question. - -No, I don't think he's suggesting that at all. - -I don't think he's suggesting that at all. - -Well, you have to understand, I am not sitting in the United States Senate with, by the way, the worst voting record there is today. Number one. I am not sitting in the United States Senate. I'm a businessman doing business transactions. - -I am doing business transactions. I will know more about this -- and, as you said, that was very acceptable, and when you listen to that whole interview, it's a great interview, you said it, I didn't. Well, now I did. - -Listen, just one second. Just one second. - -I will know more about the problems of this world by the time I sit, and you look at what's going on in this world right now by people that supposedly know, this world is a mess. - -TAPPER: Senator Rubio, he did invoke your absentee record in the Senate. - -I'm -- and I'm meeting with people that are terrific people, but I have to say something because it's about judgment. - -I am the only person on this dais -- the only person -- that fought very, very hard against us going into Iraq, and I wasn't a sitting politician, because I said going into Iraq -- that was in 2003, you can check it out, check out -- I'll give you 25 different stories. - -In fact, a delegation was sent to my office to see me because I was so vocal about it. I'm a very militaristic person, but you have to know when to use the military. I'm the only person up here that fought against going into Iraq. - -Just excuse me, one second, Rand. - -If you don't mind, Rand -- you know, you are on last -- you do have your 1 percent. - -I would like -- and I think it's very important. I think it's important, because it's about judgment. It's about judgment. - -I didn't want to go into Iraq, and I fought it, because what I said -- what I said was you're going to -- you're going to destabilize the Middle East, and that's what happened. - -If you think about it, your brother -- and your brother's administration gave us Barack Obama, because it was such a disaster, those last three months, that Abraham Lincoln couldn't have been elected. - -I don't know. You feel safe right now? I don't feel so safe. - -Or the collapse of the economy. - -Speaking for myself, I'm OK with it. I think there's a certain truth to it. I know people that, frankly, it has no impact on their life whatsoever. There are many people.
- -I would almost say leave it up to them, but I would be willing to check it off, and say I will not get Social Security. - -As a policy, I would almost leave it up to the people. Don't forget they pay in and they pay in, and maybe they do well, and maybe some people want it. But the fact is that there are people that truly don't need it, and there are many people that do need it very, very badly. And I would be willing to write mine off 100 percent, Dana. - -Well, I -- I -- I'd like to respond. - -I'd like to respond. - -Autism has become an epidemic. Twenty-five years ago, 35 years ago, you look at the statistics, not even close. It has gotten totally out of control. - -I am totally in favor of vaccines. But I want smaller doses over a longer period of time. Because you take a baby in -- and I've seen it -- and I've seen it, and I had my children taken care of over a long period of time, over a two or three year period of time. - -Same exact amount, but you take this little beautiful baby, and you pump -- I mean, it looks just like it's meant for a horse, not for a child, and we've had so many instances, people that work for me. - -Just the other day, two years old, two and a half years old, a child, a beautiful child went to have the vaccine, and came back, and a week later got a tremendous fever, got very, very sick, now is autistic. - -I only say it's not -- I'm in favor of vaccines, do them over a longer period of time, same amount. - -But just in -- in little sections. - -I think -- and I think you're going to have -- I think you're going to see a big impact on autism. - -And that's all I'm saying, Jake. That's all I'm saying. - -Well, because she's been sitting for three hours, I think my daughter, Ivanka, who's right here. - -Other than that we'll go with Rosa Parks. I like that. - -Humble. - -If I become president, we will do something really special. We will make this country greater than ever before. We'll have more jobs. We'll have more of everything. - -We were discussing disease, we were discussing all sorts of things tonight, many of which will just be words, it will just pass on. I don't want to say politicians, all talk, no action. But a lot of what we talked about is words and it will be forgotten very quickly. - -If I'm president, many of the things that we discussed tonight will not be forgotten. We'll find solutions. And the world will respect us. They will respect us like never before. And it will be actually a friendlier world. - -And I have to say, it is a great honor to be here tonight. - -I fully understand. - -I fully understand. - -I cannot say. I have to respect the person that, if it’s not me, the person that wins, if I do win, and I’m leading by quite a bit, that’s what I want to do. I can totally make that pledge. If I’m the nominee, I will pledge I will not run as an independent. But — and I am discussing it with everybody, but I’m, you know, talking about a lot of leverage. We want to win, and we will win. But I want to win as the Republican. I want to run as the Republican nominee. - -Well, I’ve given him plenty of money. - -I will not make the pledge at this time. - -Only Rosie O’Donnell. - -Thank you. - -Yes, I’m sure it was. - -I think the big problem this country has is being politically correct. - -I’ve been challenged by so many people, and I don’t frankly have time for total political correctness.
And to be honest with you, this country doesn’t have time either. This country is in big trouble. We don’t win anymore. We lose to China. We lose to Mexico both in trade and at the border. We lose to everybody. - -And frankly, what I say, and oftentimes it’s fun, it’s kidding. We have a good time. What I say is what I say. And honestly Megyn, if you don’t like it, I’m sorry. I’ve been very nice to you, although I could probably maybe not be, based on the way you have treated me. But I wouldn’t do that. - -But you know what, we — we need strength, we need energy, we need quickness and we need brain in this country to turn it around. That, I can tell you right now. - -So, if it weren’t for me, you wouldn’t even be talking about illegal immigration, Chris. You wouldn’t even be talking about it. - -This was not a subject that was on anybody’s mind until I brought it up at my announcement. And I said, Mexico is sending. Except the reporters, because they’re a very dishonest lot, generally speaking, in the world of politics, they didn’t cover my statement the way I said it. - -The fact is, since then, many killings, murders, crime, drugs pouring across the border, our money going out and the drugs coming in. And I said we need to build a wall, and it has to be built quickly. - -And I don’t mind having a big beautiful door in that wall so that people can come into this country legally. But we need, Jeb, to build a wall, we need to keep illegals out. - -Border Patrol, I was at the border last week. Border Patrol, people that I deal with, that I talk to, they say this is what’s happening. Because our leaders are stupid. Our politicians are stupid. - -And the Mexican government is much smarter, much sharper, much more cunning. And they send the bad ones over because they don’t want to pay for them. They don’t want to take care of them. - -Why should they when the stupid leaders of the United States will do it for them? And that’s what is happening whether you like it or not. - -A complete disaster, yes. - -Correct. - -First of all, I’d like to just go back to one. In July of 2004, I came out strongly against the war with Iraq, because it was going to destabilize the Middle East. And I’m the only one on this stage that knew that and had the vision to say it. And that’s exactly what happened. - -And the Middle East became totally destabilized. So I just want to say. - -As far as single payer, it works in Canada. It works incredibly well in Scotland. It could have worked in a different age, which is the age you’re talking about here. - -What I’d like to see is a private system without the artificial lines around every state. I have a big company with thousands and thousands of employees. And if I’m negotiating in New York or in New Jersey or in California, I have like one bidder. Nobody can bid. - -You know why? - -Because the insurance companies are making a fortune because they have control of the politicians, of course, with the exception of the politicians on this stage. - -But they have total control of the politicians. They’re making a fortune. - -Get rid of the artificial lines and you will have yourself great plans. And then we have to take care of the people that can’t take care of themselves. And I will do that through a different system. - -I’m not — I’m not are — I don’t think you heard me. You’re having a hard time tonight. - -You’d better believe it. - -If I ask them, if I need them, you know, most of the people on this stage I’ve given to, just so you understand, a lot of money.
- -Many of them. - -Not much. - -Good. - -Sounds good. Sounds good to me, Governor. - -I will tell you that our system is broken. I gave to many people, before this, before two months ago, I was a businessman. I give to everybody. When they call, I give. - -And do you know what? - -When I need something from them two years later, three years later, I call them, they are there for me. - -And that’s a broken system. - -Well, I’ll tell you what, with Hillary Clinton, I said be at my wedding and she came to my wedding. - -You know why? - -She didn’t have a choice because I gave. I gave to a foundation that, frankly, that foundation is supposed to do good. I didn’t know her money would be used on private jets going all over the world. It was. - -Because I have used the laws of this country just like the greatest people that you read about every day in business have used the laws of this country, the chapter laws, to do a great job for my company, for myself, for my employees, for my family, et cetera. - -I have never gone bankrupt, by the way. I have never. - -But out of hundreds of deals. - -Excuse me. Excuse me. - -Excuse me, what am I saying? Out of hundreds of deals that I’ve done, hundreds, on four occasions I’ve taken advantage of the laws of this country, like other people. I’m not going to name their names because I’m not going to embarrass, but virtually every person that you read about on the front page of the business sections, they’ve used the law. - -The difference is, when somebody else uses those laws, nobody writes about it. When I use it, they say, “Trump, Trump, Trump.” The fact is, I built a net worth of more than $10 billion. I have a great, great company. I employ thousands of people. And I’m very proud of the job I did. - -Again Chris, hundreds and hundreds of deals. Four times, I’ve taken advantage of the laws. And frankly, so has everybody else in my position. - -Let me just tell you about the lenders. First of all, these lenders aren’t babies. These are total killers. These are not the nice, sweet little people that you think, OK? - -And by the way, this country right now owes $19 trillion. And they need somebody like me to straighten out that mess. - -I don’t think they like me very much. I’ll tell you what. I’ve evolved on many issues over the years. And you know who else has? Is Ronald Reagan evolved on many issues. - -And I am pro-life. And if you look at the question, I was in business. They asked me a question as to pro-life or choice. And I said if you let it run, that I hate the concept of abortion. I hate the concept of abortion. And then since then, I’ve very much evolved. - -And what happened is friends of mine years ago were going to have a child, and it was going to be aborted. And it wasn’t aborted. And that child today is a total superstar, a great, great child. And I saw that. And I saw other instances. - -And I am very, very proud to say that I am pro-life. - -As far as being a Republican is concerned, I come from a place, New York City, which is virtually, I mean, it is almost exclusively Democrat. And I have really started to see some of the negatives — as an example, and I have a lot of liking for this man, but the last number of months of his brother’s administration were a catastrophe. And unfortunately, those few months gave us President Obama. And you can’t be happy about that. - -First of all, Jeb, I am very happy that you denied that, and I appreciate that very much. He is a true gentleman. He really is. - -One thing he did say, and I mean that.
The one thing he did say about me, however, was my tone. And I also understand that. But when you have people that are cutting Christians’ heads off, when you have a world that the border and at so many places, that it is medieval times, we’ve never — it almost has to be as bad as it ever was in terms of the violence and the horror, we don’t have time for tone. We have to go out and get the job done. - -I would be so different from what you have right now. Like, the polar opposite. We have a president who doesn’t have a clue. I would say he’s incompetent, but I don’t want to do that because that’s not nice. - -But if you look at the deals we make, whether it’s the nuclear deal with 24 hour periods — and by the way, before you get to the 24 hours, you have to go through a system. You look at Sergeant Bergdahl, we get Bergdahl, a traitor, and they get five of the big, great killer leaders that they want. We have people in Washington that don’t know what they’re doing. Now I agree. - -Now, with Iran, we’re making a deal, you would say, we want him. We want out our prisoners. We want all these things, and we don’t get anything. We’re giving them $150 billion plus, they are going to be — I’ll tell you what, if Iran was a stock, you folks should go out and buy it right now because you’ll quadruple — this, what’s happening in Iran, is a disgrace, and it’s going to lead to destruction in large portions of the world. - -Our country is in serious trouble. We don’t win anymore. - -We don’t beat China in trade. We don’t beat Japan, with their millions and millions of cars coming into this country, in trade. We can’t beat Mexico, at the border or in trade. - -We can’t do anything right. Our military has to be strengthened. Our vets have to be taken care of. We have to end Obamacare, and we have to make our country great again, and I will do that. - -Thank you. - -This book is dedicated to my parents, Mary and Fred C. Trump, and my brothers and sisters—Maryanne, Robert, Elizabeth, and Fred. Also, my wonderful wife, Melania, and my incredibly supportive children, Don Jr., Ivanka, Eric, Tiffany, and Barron. - -And importantly, to the people who are ready to Make America Great Again! - -YOU GOTTA BELIEVE - - -SOME READERS MAY BE wondering why the picture we used on the cover of this book is so angry and so mean looking. I had some beautiful pictures taken in which I had a big smile on my face. I looked happy, I looked content, I looked like a very nice person, which in theory I am. My family loved those pictures and wanted me to use one of them. The photographer did a great job. But I decided it wasn’t appropriate. In this book we’re talking about Crippled America—that’s a tough title. Unfortunately, there’s very little that’s nice about it. Hence, the picture on the cover. - -So I wanted a picture where I wasn’t happy, a picture that reflected the anger and unhappiness that I feel, rather than joy. There’s nothing to be joyful about. Because we are not in a joyous situation right now. We’re in a situation where we have to go back to work to make America great again. All of us. That’s why I’ve written this book. People say that I have self-confidence. Who knows? When I began speaking out, I was a realist. I knew the relentless and incompetent naysayers of the status quo would anxiously line up against me, and they have: - -The politicians who talk a great game in campaigns—and play like total losers when they try to actually govern because they can’t govern; they don’t know how to govern.
- -The lobbyists and special interests with their hands in our pockets on behalf of their clients or others. - -The members of the media who are so far lost when it comes to being fair that they have no concept of the difference between “fact” and “opinion.” - -The illegal immigrants who have taken jobs that should go to people here legally, while over 20 percent of Americans are currently unemployed or underemployed. Believe me, they’re all over the place. I see them. I talk to them. I hug them. I hold them. They are all over the place. - -Congress, which has been deadlocked for years and virtually unable to deal with any of our most pressing domestic problems, or even the most basic ones, such as passing a budget. Think of it: a little thing like passing the budget. They don’t even have a clue. - -Meanwhile, the bedrock of this country—the middle class—and those 45 million Americans stuck in poverty have seen their incomes decline over the past 20 years. Understandably, their disenchantment and frustration at what’s happening grows every day, and it gets worse and worse and worse. - -And even our lawyers and judges, the reflective “wise men,” have been stepping all over the US Constitution, the bulwark of our democracy. They have recklessly appointed themselves to be policy makers, because our actual elected officials are paralyzed by partisanship. They can’t move; they can’t act. They are totally impotent. - -As for the presidency and the executive branch, the incompetence is beyond belief. - -As I write this, Russian president Vladimir Putin is totally outmaneuvering our president by putting together a coalition in Syria that will make Putin the only effective leader in the world. He and his allies—most notably Iran—have positioned themselves exactly where President Obama and our military have failed miserably for years. They are total failures. They are not leaders. We are no longer a leader. Putin has become the leader, and it’s an embarrassment to our country. - -We’ve wasted literally trillions of dollars in the Middle East, with virtually nothing to show for it except for alienating our best ally, Israel. To make matters worse, we’ve negotiated a worthless and costly nuclear treaty with Iran (now Russia’s best friend) on the supposition that it will lead to greater harmony and world peace, which it won’t. It will lead to just the opposite. - -The idea of American Greatness, of our country as the leader of the free and unfree world, has vanished. - -Despite all of these challenges—and actually because of the challenges—I decided to do something about it. I couldn’t stand to see what was happening to our great country. This mess calls for leadership in the worst way. It needs someone with common sense and business acumen, someone who can truly lead America back to what has made us great in the past. - -We need someone with a proven track record in business who understands greatness, someone who can rally us to the standard of excellence we once epitomized and explain what needs to be done. - -When I started speaking out, I had no idea what the reaction would be. I know I’m a great builder, I’ve built buildings all over the world. I’ve had tremendous success. But I hadn’t fully exposed my political thoughts and ideas to restore America’s greatness. - -I also knew that the Trump brand is one of the world’s great icons of quality and excellence. Everybody talks about it. Everybody knows about it. It’s very very special. I’m very proud of it. 
Our buildings and resorts now stand very proudly (and beautifully) all over the United States and in many other countries. - -I started with the issue of illegal immigration, and proposed building a major wall that would be very high and completely impervious to the flood of immigrants who we don’t want or need here illegally. We love people coming in, but not when it’s done illegally. - -Suddenly, Americans started to wake up to what was going on with regard to illegal immigration. Despite the large number of candidates who were running for the Republican nomination, what I was saying started to really hit home with people, and everybody picked it up and they picked it up gladly. - -I started drawing crowds so large that we had to move our rallies into football stadiums and convention centers. The first national debate drew 24 million viewers, which set a record for cable television. Despite some of the ridiculous, antagonistic questions—or maybe because of them—I fought back as I always do and began to explain my vision. As a result, most people thought I won the debate. - -People were applauding. All of a sudden, people who had never cared about elections or never voted were rushing to our rallies. The rallies became massive. The crowds were unbelievable. The enthusiasm was based on pure love and love of what we were doing. - -The media, the politicians, and the so-called leaders of our country reacted in horror. But I persevered and went directly to the people, because I don’t need anyone’s financial support, nor do I need anyone’s approval of what I say or do. I just had to do the right thing. I had to do it. I had no choice. I see what’s happening to our country; it’s going to hell. I had to do it. - -I have now begun to fill in some of the details of my vision. I’ve released a tax plan that gives the middle class and those with lower incomes a chance to keep more of what they earn, while restructuring how the richest Americans will be paying taxes. - -I’ve committed to a truly more powerful military, one prepared and equipped to stand up to any and all of our foes. When we draw a line in the sand, it needs to mean something to all—especially our enemies. - -I’ve introduced a whole new approach to job creation by encouraging companies to bring more of their jobs and manufacturing back to America (home where it belongs), along with the trillions of dollars currently being held in foreign banks overseas. We’re bringing that money back. It’s a massive amount of money. And guess what? Lots of good things are going to happen. They’re going to spend that money on roads, on bridges, on companies, on jobs. It’s going to be amazing. - -I’ve explained why Obamacare is a costly, ludicrous solution to our health care woes and one which must be repealed and replaced with a much better option. We need to fix the problem by creating competition in the private sector between insurance companies, and by allowing patients to choose the family doctors they want. This will be a much better plan, a much less costly plan—better doctors, better service. It will be something really special. And think of it: the United States will save a fortune as a country. People will be better served. A combination that cannot be beat. - -Competition is a magic word in education as well. Parents should have the right to choose the schools where their kids can get the best education. The weaker schools will be closed, and ineffective teachers will be fired. One-size-fits-all education—Common Core—is bad. 
It’s not going to happen. We don’t want our children to be educated from Washington. We want local education. Education should be locally based. - -Domestically, we need to undertake a massive rebuilding of our infrastructure. Too many bridges have become dangerous, our roads are decaying and full of potholes, while traffic jams are costing millions in lost income for drivers who have jobs in congested cities. Public transit is overcrowded and unreliable and our airports must be rebuilt. You go to countries like China and many others and you look at their train systems and their public transport. It’s so much better. We’re like a third-world country. - -I could go on and on regarding many of the ideas I’ve written about in this book, and more that will be forthcoming. But let me add that while my critics are pushing their policy agendas, the last thing we need is more plans that evaporate after the elections. - -What we need is leadership that can deal with our mess and begin to apply practical solutions to our problems. My goal is not to design hundreds of pages of government regulation and red tape like others propose. We need to outline commonsense policies and then knock some heads together if necessary to make them work. The fact is we are over-regulated. People can’t move. They’re stymied. Companies can’t be built. We’re over-regulated. - -I know how to deal with complex issues and how to bring together all the various elements necessary for success. I’ve done it for years and have built a great company and a massive net worth. - -This book is designed to give the reader a better understanding of me and my ideas for our future. I’m a really nice guy, believe me, I pride myself on being a nice guy, but I’m also passionate and determined to make our country great again. - -It’s time we turn America around from despair and anger to joy and accomplishment. It can happen, and it will happen. - -Our best days still lie ahead. There is so much untapped greatness in our country. We’re rich in natural resources, and we’re rich in human talent. - -Enjoy this book—and together, let’s make America great again! - - - - - - - - - - - -WINNING AGAIN - - -AMERICA NEEDS TO START winning again. - -Nobody likes a loser and nobody likes to be bullied. Yet, here we stand today, the greatest superpower on Earth, and everyone is eating our lunch. That’s not winning. - -We have a president who tries to get tough and draw a line in the sand, but when that line gets crossed, there are no repercussions. - -And when we try to negotiate with foreign countries? We don’t stand up. We don’t threaten to walk away. And, more important, we don’t walk away. We make concession after concession. That’s not winning. - -If I ran my business that way, I’d fire myself. - -Take one of the worst agreements in our history—the nuclear “treaty” with Iran—which John Kerry negotiated and President Obama rammed through and around Congress. (Or, rather, he convinced his party to support it and filibuster any debate or vote on it.) This is probably the most important treaty of our time, and our very stupid leaders in Washington, DC, couldn’t even bring themselves to hold a discussion and vote on it. - -Ronald Reagan said, “Trust but verify”—but in this case we aren’t following either piece of advice. How can we trust a man like the Ayatollah Khamenei?
Just a month before we approved the treaty, he reiterated that his country was pledged to destroy and eliminate Israel, our most important ally and longtime partner in preserving some semblance of stability in the region. And as for verification, we don’t even know what side-deals the International Atomic Energy Agency has struck with Iran. Or if we do know, they haven’t been made public. - -That’s not winning—that’s criminal negligence, in my view. - -Then when every Senate Republican criticized this deal (and some of the Democrats did as well), the president compared his critics to our adversaries. - -In other words, he sells out his friends and allies, and then defends his treaty by comparing his critics to our enemies. - -That’s what we call successful diplomacy? - -Now we’re going to open the gates to refugees from places like Syria, which is like extending a personal invitation to ISIS members to come live here and try to destroy our country from within. - -This is America today, the shining city on a hill, which other countries used to admire and try to be like. - -So what can be done about it? How do we start winning again? - -To start with, we need a government that is committed to winning and has experience in winning. This book is about how we do that. - - - -In early September 2015, I spoke at a major rally in Washington, DC. I told them that we need a military that will be so strong that we won’t have to use it. And then I asked, “Are you listening, President Obama?” Almost everyone in the crowd cheered, but I understand why some of them were skeptical. Americans are used to hearing the same old promises from the same tired politicians who never produce any results, let alone any victories. I should know. For years I gave money—lots of money—to candidates from both parties who made personal pleas for my support for their campaigns. They promised to change things with new ideas and bring government back to its original, more limited purpose of protecting our country and putting our people first. - -Candidate after candidate made all kinds of pledges like this, and very little, if anything, was done. How many of those problems have been solved? Nothing seemed to move forward in Washington. - -Look at Congress, which has an understandably negative reputation among Americans. - -And why not? They do nothing. - -They can’t even pass an annual budget. They constantly bicker, which means that they just throw all our problems and our huge debt on to our children and possibly our grandchildren. - -This has to stop. - -Finally, I realized that America doesn’t need more “all-talk, no-action” politicians running things. It needs smart businesspeople who understand how to manage. We don’t need more political rhetoric—we need more common sense. “If it ain’t broke, don’t fix it”—but if it is broke, let’s stop talking about it and fix it. - -I know how to fix it. - -A lot of people were encouraging me to speak out, and I realized that with my well-known success story and record of building residential and office buildings and developing public spaces—all the while accumulating personal wealth—I could inspire people to help create the most massive turnaround in American history. - -Of course, there were doubters. Between journalists who sell newspapers by creating controversy, and established politicians eager to preserve the status quo that in turn preserves their jobs, there were many “experts” predicting my demise. 
They’ve been reading the “polls.” They’ve been listening to all the lobbyists and special interests saying “Trump is a threat to our well-being.” They’ve even been saying I was a bully or that I was prejudiced or that I hated women or hated Hispanics. Some of them even said—and this is the cardinal sin in politics—I was willing to take on even the richest people in America with all their tax benefits. - -I have proven everybody wrong. - -EVERYBODY! - -Suddenly, those same newspapers and “experts” were only talking about my ideas. And even as I’ve had to respond to some of the toughest and dumbest questions from supposedly nonpartisan journalists, people continue to listen to me and support my ideas—and guess what? Women are flocking to my message because they’re just as tired as men are about how little is being accomplished in Washington. - -Likewise, Hispanics are climbing on board because they’ve heard—from Hispanic employees who’ve actually worked for me and know me as a boss and leader—that Donald Trump builds businesses. - -Donald Trump builds buildings. - -Donald Trump develops magnificent golf courses. - -Donald Trump makes investments that create jobs. - -And Donald Trump creates jobs for legal immigrants and all Americans. - -Even the most jaded journalists are realizing that Donald Trump is for real and that the people are responding to someone who is completely different from every other politician. - -No one is paying me to say these things. I am paying my own way, and I’m not beholden to any special interests and lobbyists. - -I’m not playing by the usual status-quo rules. - -I’m not a politician taking polls to see what I should “believe” or be saying. - -I am telling it like it is and going to the heart of what I think will make America great again. - -I’m not a diplomat who wants everybody else to be happy. I’m a practical businessman who has learned that when you believe in something, you never stop, you never quit, and if you get knocked down, you climb right back up and keep fighting until you win. That’s been my strategy all my life, and I’ve been very successful following it. - -Winning matters. Being the best matters. - -I’m going to keep fighting for our country until our country is great again. - -Too many people think the American dream is dead, but we can bring it back bigger, better, and stronger than ever before. But we must start now. - -We need to ensure America starts winning once again. - - - - - - - - - - - -OUR “UNBIASED” POLITICAL MEDIA - - -FOR A LONG TIME I’ve been the man the media loves to hate. - -It hasn’t taken me long to learn how truly dishonest the political media can be. At the first Republican debate, Fox journalist Megyn Kelly was clearly out to get me. And of course, at the second debate, virtually everyone was attacking me because most of their poll numbers were sinking while mine were surging. - -I’m perhaps a controversial person. I say what’s on my mind. I don’t wait to hear what a pollster has to say because I don’t use pollsters. The media loves my candor. They know I’m not going to dodge or ignore their questions. I have no problem telling it like it is. These presidential debates would normally have attracted a couple million viewers, but the first night we had 24 million tune in, and the second debate drew a similar number. These were the largest audiences in Fox News’ and CNN’s history—bigger than the NBA Finals, the World Series, and most NFL telecasts. - -Why do you think people tuned in? To hear the nasty questions? 
To watch a bunch of politicians trying to pretend they are outsiders (like I truly am) so they can be more successful? The fact is I give people what they need and deserve to hear—exactly what they don’t get from politicians—and that is The Truth. Our country is a mess right now and we don’t have time to pretend otherwise. We don’t have time to waste on being politically correct. - -You listen to the politicians and it’s as if they are speaking from a script titled “How Boring Can I Possibly Be?” Watching some of these people being interviewed is about as exciting as watching paint dry. They’re so afraid of tripping on their own words, terrified that they’re going to say something unscripted and go off message—that’s the phrase they use, “go off message”—that they are verbally paralyzed. They’ll do anything they can to avoid answering a question—and the media plays the game with them. - -The object of this game is to appear thoughtful while still looking like a regular guy (or gal) who would be fun to have a beer with. The pollsters tell them how to be everything to everybody without alienating anyone. These same politicians who boldly promise they are going to stand up to our enemies won’t even give direct answers to reporters. I don’t play that game, because I’m a very successful businessman and my mind-set is that this country needs to bring itself back from the depths of all our problems and the $19 trillion we owe. - -At the first debate, I responded to Megyn Kelly’s adversarial question by telling her, “I think the big problem this country has is being politically correct. I’ve been challenged by so many people, and I don’t frankly have time for total political correctness. And to be honest with you, this country doesn’t have time either. This country is in big trouble. We don’t win anymore. We lose to China. We lose to Mexico both in trade and at the border. We lose to Russia and Iran and Saudi Arabia.” - -I’m not bragging when I say that I’m a winner. I have experience in winning. That’s what we call leadership. That means that people will follow me and be inspired by what I do. How do I know? I’ve been a leader my whole life. Thousands of my employees know that I’ll deliver and help them deliver. Sometimes I can be self-effacing, injecting a little humor, having some fun, and kidding around. We have a good time. What I say is what I say, and everyone that knows me really appreciates it. - -With the problems we’re facing, these debates have become “Trump versus The Others.” The attacks are coming at me from all directions, because they all know I am the only one talking about really changing this country and making America great again. The moderators read some quote of mine (or misinterpret a quote of mine) and then ask someone else to comment. Do I have the right temperament? Would I run the country like a business? When did I “actually become a Republican?” These exchanges make great TV. Sadly, they’re almost like watching a sporting event. - -And guess what? Few, if any, of these questions get to the heart of what is wrong with our country and what really matters to Americans. It’s all very personal, because politicians (and their journalist cronies) know that the public doesn’t want to hear the details of our nuclear sellout to Iran or what we’re going to do about all the federal red ink bleeding the American taxpayer dry these days. The personal exchanges between me and the others become the big story of the debate and the focus of news coverage for weeks. 
You’d like to think that Fox News and CNN could do better. For the record, I think CNN and Fox treated me badly. Still, you’d think a major news network would take their responsibilities more seriously and use these debates to help the public determine who has the best plan to make our country great again. - -But they missed that opportunity. - -The whole debate format has worked out fine for me. The American people are smart and figured out pretty quickly what the real motives are for turning up the personal attacks against me. And I get more minutes, more front-page coverage, more requests for interviews than anyone else—and most important for America—the opportunity to speak directly to the people. - -There are many reporters whom I have a lot of respect for, especially in the financial media. When the financial journalists interview you they know what they’re doing, and they ask direct questions that can provide important information to their viewers. There’s money at stake and they don’t play the same silly “gotcha” games as the political media do. They can’t afford to. - -I don’t mind being attacked. I use the media the way the media uses me—to attract attention. Once I have that attention, it’s up to me to use it to my advantage. I learned a long time ago that if you’re not afraid to be outspoken, the media will write about you or beg you to come on their shows. If you do things a little differently, if you say outrageous things and fight back, they love you. So sometimes I make outrageous comments and give them what they want—viewers and readers—in order to make a point. I’m a businessman with a brand to sell. When was the last time you saw a sign hanging outside a pizzeria claiming “The fourth best pizza in the world”?! But now I am using those talents, honed through years of tremendous success, to inspire people to think that our country can get better and be great again and that we can turn things around. - -The cost of a full-page ad in the New York Times can be more than $100,000. But when they write a story about one of my deals, it doesn’t cost me a cent, and I get more important publicity. I have a mutually profitable two-way relationship with the media—we give each other what we need. And now I am using that relationship to talk about the future of America. - -Many people believe I do well with the press. Maybe I do, sometimes, but anyone who believes I can use the media is absolutely wrong. Nobody can use the press. It’s too big, too widespread. For me, it has been absolutely necessary to try to build relationships with reporters. There are many journalists I respect. Some of the finest people I know are journalists. They are honest, decent, and hardworking; they bring honor to their profession. If I do something wrong or make a mistake, they report it accurately. I’ve got no problem with that. The mistake bothers me, not the reporting. - -But there also are a lot of times I believe that the media is abusive, both to people like me and to the process. The key word is “accurately.” Like in every other profession, there are people who are not good. There is no question that considering all the press I’ve had, both good and bad, I’ve definitely met people at both the very top as well as the lowest end of the food chain. I mean, the very bottom: They are horrible human beings, they are dishonest. I’ve seen these so-called journalists flat-out lie. I say that because incompetence doesn’t begin to explain the inaccurate stories they have written. There is no other explanation. 
- -The image I created through the media enabled me to build one of the greatest luxury brands in the world. People buy my apartments, buy my label, and play on my golf courses, because they know if I put my name on it, it has to be top quality. Why do you think NBC gave me my own show, The Apprentice? They did it because I set myself apart to be a target, the big, tough employer. The result was one of the most successful shows in television history. I’m the only boss in the world who boosts a person’s future status by firing them. - -Sometimes the truth hurts, but sometimes that is the only way to get better. And a lot of the viewers told me that by watching my show they learned how to be more effective in their jobs so they wouldn’t get fired. - -I don’t mind criticism. People call me thin-skinned, but I have thick skin. I have a wonderful and beautiful wife. I’ve got billions of dollars. My children are highly intelligent and accomplished executives who work with me. I’ve got a pile of potentially huge projects sitting on my desk. I can’t walk into a room or down a street without people racing toward me and telling me that they are excited for our country to win again. So criticism doesn’t bother me, and it can’t hurt me. I’ve had power and I’ve had profits, but now it’s time to help the people have a voice and to make sure the people are heard. I am doing this to make our country great again. - -Not too long ago, a lot of the pundits kept asking me if I was serious. I thought they were asking the wrong question. What they should have been asking was if I was serious about the future of our country. I have never been more serious about anything in my life. - -In the quest for ratings, every show is trying to make news. The problem is that they aren’t doing their job. They aren’t interested in informing the public. Instead, they play their own game, the “gotcha” game. As I’ve said, some of the political media are very dishonest. They don’t care about printing the truth, they don’t want to repeat my entire remarks, and they don’t want to be bothered explaining what I meant. They know what I said, they know what I meant, and they edit it or interpret it to have a different meaning. - -I was reminded of this behavior when I announced that I was running for president on June 16 in New York. I spoke at great length about a lot of different topics. I listed a lot of the problems we were facing: illegal immigration, underemployment, a shrinking gross domestic product, an aging nuclear arsenal, and Islamic terrorism. I went through them all. What did the media focus on? They concentrated on the fact that I said Mexico was sending its worst people over our southern border. “They’re sending people that have lots of problems,” I said. “And they’re bringing those problems to us.” - -The next thing you heard was that Trump said all immigrants were criminals. That wasn’t what I said at all, but it made a better story for the media. It gave them some headlines. What I said was that among all the illegal immigrants coming from Mexico were some pretty bad people, some of them are rapists, some of them are drug dealers, some of them are coming here to live off the system, and we’d better take immediate and tough measures to close our borders to “illegals.” - -People who know me know I would never insult Hispanics or any group of people. I’ve done business with many Hispanics. I’ve lived in New York all my life. I know how wonderful the Latino culture can be. 
I know the contributions they make to our country. I’ve employed many hardworking Hispanic people through the years. I have great respect for Hispanic people, but that’s not what the media reported. - -Here’s what the media reported: TRUMP CALLS ALL IMMIGRANTS CRIMINALS and TRUMP CALLS ALL MEXICANS RAPISTS! - -Completely ridiculous. - -One of the problems the political media has with me is that I’m not afraid of them. Others run around practically begging for attention. I don’t. People respond to my ideas. These media types sell more magazines when my face is on the cover, or when I bring a bigger audience to their television show than they normally attract, and by far. And what’s funny is that it turns out the best way for them to get that attention is to criticize me. - -But the American people are beginning to understand that. They have finally figured out that a lot of the political media aren’t trying to give the people a fair representation of the important issues. Instead, they are trying to manipulate the people—and the election—in favor of the candidates they want to see elected. These media companies are owned by billionaires. These are smart people who know which candidates are going to be best for them, and they find a way to support the person they want. - -It would be impossible for me to even estimate how many times I’ve been interviewed by how many reporters. I couldn’t even tell you how many magazine covers I’ve been on. - -Recently, I was interviewed by conservative radio host Hugh Hewitt. “Best interview in America,” he called me. Here’s what happened: - -During the show, he started asking me a series of questions about an Iranian general and various terrorist leaders. “I’m looking for the next commander in chief to know who Hassan Nasrallah is, and Zawahiri, and al-Julani, and al-Baghdadi. Do you know the players without a scorecard yet?” - -What a ridiculous question. I don’t think knowing the names of each terrorist leader more than a year before the election is a test of whether someone is qualified. We’re not playing Trivial Pursuit. Every question Hugh asked me was like that—although I noticed he didn’t ask too many questions about our economic policy or about reforming the tax system—things I’ve spent my life mastering. Instead, he asked these “gotcha” questions that proved nothing except that he was able to read some names and pronounce them correctly. Does anybody believe George W. Bush and Barack Obama could name the leaders of all terrorist organizations? (Not that they are the standard!) - -People see through this nonsense. We have real problems and I am talking about how to fix them, and the media continues to play these same old games. In the end though, Hugh Hewitt was just fine, and has since said some great things about me. - -Every question was “gotcha, gotcha, gotcha.” I gave Hewitt the best possible answer: Those people probably won’t even be there in a year. I should have added that if America doesn’t do the right things, we won’t be here much longer either. - -Let me tell you something: When I need to know something, I know it. When I decided to build the most magnificent golf resort in the world in Aberdeen, Scotland, I didn’t know the names of the Scottish officials who would be involved in this project—but by the time we went to work, I knew every person it was necessary to know. I’d probably met most of them, too.
At the beginning of any kind of project I know what I need to know—and then I get the information to make sure the project gets done to my satisfaction. And I have strong executives who know how to—as their title suggests—execute. - -So here’s the way I work: I find the people who are the best in the world at what needs to be done, then I hire them to do it, and then I let them do it . . . but I always watch over them. - -We have great military leaders in this country. We produce the finest officers and soldiers anywhere in the world. And we have some really smart men and women working in our intelligence community. These people spend all day, every day, working on serious problems. These professionals are the real experts. They know all the players. - -One reason that I have been successful in business is that I hire the best people. I pay them well, and I keep them working for me. There are times when I meet someone working on the other side of the deal. Maybe they don’t beat me, but they give me a tough time. I respect that. In fact, I respect that so much that sometimes I hire them away from the company they were negotiating for. - -Truthfully though, I can’t really blame Hugh Hewitt for doing what he did. Just like Megyn Kelly, he figured out that the best way to get attention is to attack Donald Trump. This guy got more headlines from our little exchange than he probably ever got in his whole career. It wasn’t the names of terrorist leaders that he cared about—it was his own name. And it worked for him. - -It’s just the same old game, where the people come last. That needs to change, too. - -Begging for attention really sums up the problem we face in this country with our media. There is such competition that they’re more interested in entertaining their audience than educating them. They like me because I help them attract more viewers. They hate me because they know I don’t need them. I learned a long time ago how to talk directly to the people who matter—to regular Americans who are fed up with the career politicians. - -That’s probably you—the real Americans—which is why I’ve written this book. - - - - - - - - - - - -IMMIGRATION: GOOD WALLS MAKE GOOD NEIGHBORS - - -WHEN I ANNOUNCED MY candidacy I spoke for almost an hour, covering just about every challenge that we’re facing. But the subject that got the most attention was my focus on our immigration policy. Or, in fact, our lack of any coherent immigration policy. I was pretty tough on illegal immigrants, and a lot of people didn’t like that. I said that many countries are dumping their worst people on our border and that it has to stop. A country that doesn’t control its borders can’t survive—especially with what’s going on right now. - -What I said only makes common sense. I speak to border patrol guards, and they tell us who we’re letting across our border. The countries south of us are not sending us their best people. The bad people are coming from places other than just Mexico. They’re coming from all over Central and South America, and they’re coming probably—probably—from the Middle East. Let me add now: Allowing tens of thousands of Syrian refugees in the door will certainly bring a lot of problems. But we won’t know how bad, because we have no protection and we have no competence. We don’t know what’s happening. It’s got to stop, and it’s got to stop quickly. - -Later in my announcement I added, “I would build a great wall, and nobody builds walls better than me, believe me, and I’ll build it very inexpensively. 
I will build a great wall on our southern border. And I will have Mexico pay for that wall. Mark my words.” I spoke for quite a while that day. I covered just about all the problems our country is facing. But what did the media report about that speech? “Trump is anti-immigration.” “Trump calls immigrants rapists.” “Trump is starting a war with Mexico.” You want to know why we aren’t solving our problems? Why nothing changes? It’s because we’re not facing the problems and taking action. - -The flow of illegal immigrants into this country is one of the most serious problems we face. It’s killing us. But until I made that point during my speech, nobody was talking about it honestly. And instead of saying, “Trump’s right and we’d better do something to stop illegal immigration right now or we’re going to lose our country,” they said, “Oh, what a terrible thing Trump said about the nice people who live south of our borders. I hope they don’t get upset at us because of that. Maybe he’ll apologize.” I understand why that happened. It’s a lot easier to criticize me for being blunt than it is to actually admit this immigration situation is a dangerous problem and then to find a way to deal with it. - -Let me state this clearly: I am not against immigration. - -My mother emigrated to this country from Scotland in 1918 and married my father, whose parents had come here from Germany in 1885. My parents were two of the best people who ever lived, and it was millions of people like them who made this country so wonderful and so successful. - -I love immigration. - -Immigrants come to this country, they want to work hard, be successful, raise their kids, and share in the American dream. It’s a beautiful story. I can close my eyes and just imagine what my relatives must have been thinking when they sailed past the Statue of Liberty into New York and their new lives. And if they could only see the results of their risk and sacrifice! How can anyone not appreciate the courage it took for these people to leave their families and come here? - -What I don’t love is the concept of illegal immigration. - -It’s not fair to everyone else, including people who have been waiting on line for years to come into our country legally. And the flood of illegal immigrants coming across our borders has become a dangerous problem. We don’t protect our borders. We don’t know who’s here, but I bet wherever they came from knows that they are gone. Yet those governments do nothing to help us. The estimate is that there are 11 million illegal immigrants in America, but the fact is that nobody knows how many there really are. We have no way of tracking them. - -What we do know is that some of those immigrants are a source of real crime. In 2011, the Government Accountability Office reported that there were three million arrests that could be attributed to the incarcerated alien population, including tens of thousands of violent criminals. There were 351,000 criminal illegal aliens in our prisons—that number does not include the crime of crossing our borders. It costs us more than a billion dollars a year just to keep these people in prison. - -I understand that the vast majority of these people are honest, decent, hardworking people who came here to improve their own lives and their children’s lives. America holds so much promise, and what honest person wouldn’t want to come here to try to make a better life for himself and his children? 
But illegal immigration is a problem that must be confronted by the United States government, which, in turn, must confront other countries. I feel as sorry for these individuals as anyone else does. Conditions in some of their countries are deplorable. - -Nonetheless, illegal immigration has to stop. A country that can’t protect its borders isn’t a country. We are the only country in the world whose immigration system places the needs of other nations ahead of our own. - -There is a word to describe people who do that: fools. - -I have great respect for the people of Mexico. The people have tremendous spirit. I’ve been involved in deals with Mexican businessmen. But those businessmen aren’t the people the Mexican government is sending us. Too many people have forgotten the Mariel boatlift. In 1980, Fidel Castro told the Cuban people that anyone who wanted to leave Cuba was free to do so. President Carter opened our borders to anyone who came here. Except Castro was too smart for him. He emptied Cuba’s prisons and insane asylums and sent his biggest problems here. He got rid of the worst people in that country, and we were left to deal with them. More than 125,000 Cubans came here, and despite there being many, many great ones, some were criminals or had mental problems. More than thirty years later we’re still dealing with that. - -Does anybody really believe that the Mexican government—for that matter, all the governments in South and Central America—didn’t get that message? The Mexican government has published pamphlets explaining how to illegally emigrate to the United States. Which makes my point—this is not about a few individuals seeking a better life; this is about foreign governments behaving badly and our own career politicians and “leaders” not doing their jobs. - -And who can blame these foreign governments? It’s a great way for those governments to get rid of their worst people without paying any price for their bad behavior. Instead of putting these bad people in their prisons, they send them to us. And the bad guys are bringing the drug business and other criminal activity with them. Some of them are rapists, as a matter of fact, and as we have now seen in San Francisco, some of them are killers. The man who shot and killed a beautiful young woman had been pushed out of Mexico five times. He should have been in jail there, but instead they sent him here. - -The price we’re paying for illegal immigration is enormous. - -It has to stop. - -The first thing we need to do is secure our southern border—and we need to do it now. We have to stop that flood, and the best way to do that is to build a wall. People say you can’t do it—how do you build a wall across the whole border? - -Believe me, it can be done. - -Nobody can build a wall like me. I will build a great wall on our southern border. It doesn’t have to cover the entire border. Some areas are already secured with physical barriers. In other areas the terrain is too difficult for people to cross. It’s probably about 1,000 miles we will need to secure with the new wall. - -There are people who say it can’t be done, that it’s not possible to build a wall 1,000 miles long. Except beginning more than 2,000 years ago the Chinese built a wall that eventually stretched almost 13,000 miles that could never be breached. It was a combination of massive walls, impassable trenches and ditches, and rugged natural terrain, as well as an estimated 25,000 watchtowers. Believe me, our wall-building technology has improved a lot in 2,000 years.
What we don’t have that the Chinese had is the commitment to do it. They understood the danger of leaving their border unprotected and they did something about it. We talk about it and do nothing. - -Walls work. The Israelis spent $2 million per kilometer to build a wall—which has been hugely successful in stopping terrorists from getting into the country. Ironically, some of the same people who claim we shouldn’t build this wall cite the success of Israel’s wall. While obviously we don’t face the same level of terrorist threat as our closest Middle East ally, there is no question about the value of a wall in the fight against terrorism. - -Many people don’t know that even Mexico has built its own wall on its southern border—to keep out illegal immigrants. - -It wouldn’t even be that difficult. We already have a model: Yuma, Arizona, for example, built three walls separated by a 75-yard no-man’s-land that allows border agents to patrol within that area with their vehicles. They installed cameras, radio communications, radar, and a great lighting system. After it was built, the 120-mile-long stretch known as the Yuma sector saw an incredible 72 percent decrease in the number of people apprehended trying to get into this country illegally—and mine will be much better. - -Construction of the wall needs to start as soon as possible. And Mexico has to pay for it. - -Let me repeat that, one way or another: Mexico will pay for it. - -How? We could increase the various border fees we charge. We could increase the fees on temporary visas. We could even impound remittance payments derived from illegal wages. Foreign governments could tell their embassies to start helping, otherwise they risk troubled relations with America. - -If necessary we could pay for the wall through a tariff or cut foreign aid to Mexico or simply make it clear to the Mexican government that it is to the benefit of their very profitable—for them—relationship with the United States to pay for it. - -But one way or another, they are going to pay for it. - -I don’t mind putting a big, beautiful door in that wall so people can come in and out . . . LEGALLY. - -The wall will be a good start, but by itself it won’t be enough. Without the wall, however, everything else is more of the same old big talk we hear from the politicians. - -We’ve been trying to get this problem under control for more than 75 years. We’ve tried a lot of different solutions, and the result is that now illegal immigration is worse than ever. One of the solutions that did show promise was President Eisenhower’s attempt to deal with illegal immigration on our southern border, which had become known as the truly terribly named “Operation Wetback.” But even with that awful name the program was successful. It was a joint effort between the INS and the Mexican government. Special immigration teams were created to quickly process and deport illegal immigrants. One of the reasons it worked is that people who were caught were given to Mexican government agents, who moved them into central Mexico, where they could find jobs. In the first year, more than one million people were sent back. - -What we need is the comprehensive program I have outlined that will enable us to get our immigration system under control. It starts with enforcing the existing laws. A country either has laws or it doesn’t. But having laws that we don’t enforce makes no sense to me. And in addition to keeping bad people from coming in, we’ve got to get the criminals out. 
When you break our laws you get thrown out. It’s simple. Why should we absorb the expense of keeping criminals in prisons? Let their countries of origin deal with the problems they sent us. If they refuse to take them back, we can stop issuing visas to those countries, preventing their citizens from legally visiting the United States. - -I also would triple the number of immigration officers we currently employ until the wall is built. We are asking these people to do a job that would be difficult even if they had all the support they need, and they don’t. Think of it this way: Currently there are about 5,000 officers attempting to enforce the existing immigration laws against the more than 11 million illegal aliens. Compare that to the 10,000 members of the Los Angeles Police Department or the 35,000 officers in the New York Police Department. Since 9/11 we have tripled the size of the border patrol but haven’t substantially increased the number of ICE officers—the officers who enforce immigration laws. - -The career politicians love to talk about having a nationwide “E-verify system” so potential employers will be able to determine who is here legally and eligible for work and who isn’t. Certainly, this will help protect the jobs for unemployed Americans. But let’s not kid ourselves. Our “leaders” must lead on this, and engage with foreign governments to stop illegal immigration, and not simply impose something on our businesses and think that some Internet verification system alone will solve the problem. - -We have to cut off federal grants to sanctuary cities—those places that refuse to cooperate with federal law enforcement and actually abet criminal behavior—we have to end them. I repeat, we either are a nation of laws or we’re not. - -We also need to do what is necessary to enforce our visa regulations. People get a visa and come here legally, and when that visa expires, many stay here illegally. If they get caught, nothing happens to them. That’s got to change. We need to have real penalties for people who overstay their visas. I am sick and tired of hearing politicians who are all talk and no action. President Obama and his people are great at sending letters and press releases, but they never seem to have any consequences for foreign governments that don’t listen to them. - -Most important is ending or curtailing so-called birthright citizenship, or anchor babies. American citizenship is an extraordinary gift. Its value over a lifetime can’t be measured. So the fact that the Fourteenth Amendment has been interpreted to mean that any child born in the United States automatically is an American citizen—and that baby can be used as an anchor to keep its family here—is the single biggest magnet attracting illegal immigrants. - -The Fourteenth Amendment was never intended to be used that way. The original purpose of the Fourteenth Amendment, which was ratified in 1868, following the Civil War, was to guarantee all rights granted to citizens in the Constitution to freed slaves. No serious historian could possibly interpret any of the supporting language in the Congressional Record that the birthright citizenship was intended for anyone other than the freed slaves. - -It wasn’t until 1898 that the Supreme Court ruled that, with certain specific exceptions, the provisions of the Fourteenth Amendment granted citizenship to the children of those lawfully here who gave birth on American soil. By a huge margin, Americans want to change that policy. 
Even Democrat Harry Reid admitted that “no sane country” would grant citizenship to the children of illegal immigrants. It’s estimated that about 300,000 of these children are born here annually. That’s 300,000 children who are entitled to all the rights and privileges granted to American citizens because their mothers entered this country illegally by walking over the border for a day in the south or by flying in from another country under fraudulent documentation. There are businesses that specialize in making this happen! They call it “birth tourism”—pregnant foreign women travel to this country just so that they can give birth here to babies who then automatically become American citizens. - -Citizenship is not a gift we can afford to keep giving away, and I will find a legal way of stopping this policy. A lot of really smart people and lawyers believe the Fourteenth Amendment was never intended to create a whole new path to citizenship. We’re going to test it every possible way. We will win in court and we will win in Congress. - -I don’t want to stop legal immigration to this country. In fact, I would like to reform and increase immigration in some important ways. Our current immigration laws are upside down—they make it tough on the people we need to have here, and easy for the people we don’t want here. - -This country is a magnet for many of the smartest, hardest-working people born in other countries, yet we make it difficult for these bright people who follow the laws to settle here. - -It’s amazing that people who come here to earn a master’s degree and who demonstrate wonderful skills are forced to wait on a very long line when they want to stay and contribute to this country. In fact, for a lot of them, their number may never be called. Bright young kids come here from all over the world to study in our colleges. They get the best education in the world. They graduate with honors and we hand them a diploma and a plane ticket. Their mistake is that they are honest people—they follow the law. They want to stay here, but we send them back to their countries, and ultimately they use the knowledge they gained here to compete against us. - -If you’re a criminal, though, or an unskilled worker, or someone escaping criminal charges in another country, you are able to sneak into our country and in many cases get some benefits and never leave. These “enforcement” policies and this backward approach to immigration have to change. Our immigration policy needs to work to make America great again. - -My immigration policy is actually pretty simple. We need to make changes to our laws to make it easier for those people who can contribute to this country to come here legally while making it impossible for criminal elements and other people to get here illegally. I want good people to come here from all over the world, but I want them to do so legally. We can expedite the process, we can reward achievement and excellence, but we have to respect the legal process. And those people who take advantage of the system and come here illegally should never enjoy the benefits of being a resident—or citizen—of this nation. So I am against any path to citizenship for undocumented workers or anyone else who is in this country illegally. - -They should—and need to—go home and get in line. - -And you know who agrees with me? 
The Mexicans, the Chinese, and all the people from other countries who want to be here legally and can’t get a visa or fit into a quota, yet see millions of people living here illegally. They don’t understand how we can undermine our own interests. - -If you have laws that you don’t enforce, then you don’t have laws. This leads to lawlessness. - -We can be generous and do all of this humanely. But the security and prosperity of American citizens have to come first. - -Our country, our people, and our laws have to be our top priority. - - - - - - - - - - - -FOREIGN POLICY: FIGHTING FOR PEACE - - -THE CAREER DIPLOMATS WHO got us into many foreign policy messes say I have no experience in foreign policy. They think that successful diplomacy requires years of experience and an understanding of all the nuances that have to be carefully considered before reaching a conclusion. Only then do these pinstriped bureaucrats consider taking action. - -Look at the state of the world right now. It’s a terrible mess, and that’s putting it kindly. - -There has never been a more dangerous time. The so-called insiders within the Washington ruling class are the people who got us into this trouble. So why should we continue to pay attention to them? - -Some of these so-called “experts” are trying to scare people by saying that my approach would make the world more dangerous. - -More dangerous? More dangerous than what? More dangerous than where we are now? - -Here’s what I know—what we are doing now isn’t working. And years ago, when I was just starting out in business, I figured out a pretty simple approach that has always worked well for me: - -When you’re digging yourself deeper and deeper into a hole, stop digging. - -My approach to foreign policy is built on a strong foundation: Operate from strength. That means we have to maintain the strongest military in the world, by far. We have to demonstrate a willingness to use our economic strength to reward those countries that work with us and punish those countries that don’t. That means going after the banks and financial institutions that launder money for our enemies, then move it around to facilitate terrorism. And we have to create alliances with our allies that reveal mutual benefits. - -If we’re going to continue to be the policemen of the world, we ought to be paid for it. - -Teddy Roosevelt always believed we should “speak softly and carry a big stick.” I’ve never been afraid to speak up to protect my interests and, truthfully, I don’t understand why we don’t speak more loudly about the ways we are losing around the world. If we don’t speak up, how is anything ever going to get better? How are we ever going to win? - -America is the most powerful country in the world and we shouldn’t be afraid to say it. “Iron Mike” Tyson, the famous fighter, once explained his philosophy, saying, “Everybody has a plan until they get punched in the mouth.” - -The first thing we need to do is build up our ability to throw that punch. We need to spend whatever it takes to completely fund our military properly. Fifteen years ago I wrote, “We can’t pursue forward military and foreign-policy objectives on a backward military budget.” - -The best way not to have to use your military power is to make sure that power is visible. - -When people know that we will use force if necessary and that we really mean it, we’ll be treated differently. - -With respect. 
- -Right now, no one believes us because we’ve been so weak with our approach to military policy in the Middle East and elsewhere. - -Building up our military is cheap when you consider the alternative. We’re buying peace and we’re locking in our national security. Right now we are in bad shape militarily. We’re decreasing the size of our forces and we’re not giving them the best equipment. Recruiting the best people has fallen off, and we can’t get the people we have trained to the level they need to be. There are a lot of questions about the state of our nuclear weapons. When I read reports of what is going on, I’m shocked. - -It’s no wonder nobody respects us. It’s no surprise that we never win. - -Spending money on our military is also smart business. Who do people think build our airplanes and ships, and all the equipment that our troops should have? American workers, that’s who. So building up our military also makes economic sense because it allows us to put real money into the system and put thousands of people back to work. - -There is another way to pay to modernize our military forces. If other countries are depending on us to protect them, shouldn’t they be willing to make sure we have the capability to do it? Shouldn’t they be willing to pay for the servicemen and servicewomen and the equipment we’re providing? - -Depending on the price of oil, Saudi Arabia earns somewhere between half a billion and a billion dollars every day. They wouldn’t exist, let alone have that wealth, without our protection. We get nothing from them. Nothing. - -We defend Germany. We defend Japan. We defend South Korea. These are powerful and wealthy countries. We get nothing from them. - -It’s time to change all that. It’s time to win again. - -We’ve got 28,500 wonderful American soldiers on South Korea’s border with North Korea. They’re in harm’s way every single day. They’re the only thing that is protecting South Korea. And what do we get from South Korea for it? They sell us products—at a nice profit. They compete with us. - -We spent two trillion dollars doing whatever we did in Iraq. I still don’t know why we did it, but we did. Iraq is sitting on an ocean of oil. Is it out of line to suggest that they should contribute to their own future? And after the blood and the money we spent trying to bring some semblance of stability to the Iraqi people, maybe they should be willing to make sure we can rebuild the army that fought for them. - -When Kuwait was attacked by Saddam Hussein, all the wealthy Kuwaitis ran to Paris. They didn’t just rent suites—they took up whole buildings, entire hotels. They lived like kings while their country was occupied. - -Who did they turn to for help? Who else? Uncle Sucker. That’s us. - -We spent billions of dollars sending our army to win back Kuwait. Our people were killed and wounded, but the Iraqis went back to their country. - -About two months after the war, several Kuwaitis came up to my office to discuss a deal I wanted to do with them. Believe me, they would not have lost money on this deal. They told me, “No, no, no, we do not like the United States for investment purposes. We have great respect for you, but we want to invest outside of the United States.” - -We had just handed them back their country! - -They were watching TV in the best hotel rooms in Paris while our kids were fighting for them. And they didn’t want to invest in this country? - -How stupid are we?! 
- -Why didn’t the United States make a deal with them that outlined how they would pay for us to get their country back for them? They would have paid anything if just asked. - -The point is, we’re spending trillions of dollars to safeguard other countries. We’re paying for the privilege of fighting their battles. It makes no sense to me. - -It really is time the rest of the world paid their fair share, and if I have anything to say about it, they will! - - - -The biggest question people ask about foreign policy is at what point do we put boots on the ground? We can’t be afraid to use our military, but sending our sons and daughters should be the very last resort. I’ve seen what wars do to our kids. I’ve seen their broken bodies, know all about the horrors that live in their heads, and the enormous effects of trauma. We cannot commit American troops to battle without a real and tangible objective. - -My rules of engagement have always been pretty simple—if we are going to intervene in a conflict, there had better be a direct threat to our national interests. The threat should be so obvious that most Americans will know where the hot spot is on the globe and will quickly understand why we are getting involved. Also, we’d better have an airtight plan to win and get out. - -In other words, my strategy would be the exact opposite of our strategy in going to war with Iraq. - -Iraq was no threat to us. The American people had no idea why the Bush administration decided to attack. - -Our brilliant strategists had to twist our intelligence reports and drum up reasons for an invasion. We targeted Saddam Hussein’s alleged weapons of mass destruction as a justification. There was no plan (or a very flawed one) to win and leave. Before the war started I came out very strongly against it. It made no sense to me. I said then that it would be a disaster and would destabilize the Middle East. I said that without Iraq to hold them back, Iran would attempt to take over the Middle East. - -And that’s exactly what has happened. - -There are some places in the world where massive force is necessary. The threat from ISIS is real. It is a new kind of enemy and it has to be stopped. The longer we wait before doing that, the more dangerous it will become. We don’t need another 9/11 to understand that these people want to kill us, and we’re not doing enough to prevent them from spreading their vicious brand of terrorism. The headlines and videos tell us what we’re dealing with: rapes, kidnapping, and lining up civilians in order to cut their heads off. There is also strong evidence that ISIS is resorting to chemical warfare. - -It’s time to get serious about our response. Either we’re fighting to win or we’re going to continue to be big losers. - -Unfortunately, it may require boots on the ground to fight the Islamic State. I don’t think it’s necessary to broadcast our strategy. (In fact, one of the most ridiculous policy blunders President Obama has committed was to announce our timetable for withdrawal from Iraq and Afghanistan.) If military advisers recommend it, we should commit a limited—but sufficient—number of troops to fight on the ground. We could also easily expand air operations to make it impossible for ISIS to ever find safe haven anywhere in the region. Our policy of trying to be “advisers” in the field has certainly been a failure. - -However, I have a unique perspective on what action we should take. 
While ISIS is our most violent enemy, they ended up with oil in Iraq and Syria that we should have taken. That oil, along with ransom and extortion, is funding their army. I’ve advocated bombing the hell out of those oil fields to cut off the source of their money. This would barely affect the world oil supply, but it would dramatically reduce their ability to fund terrorism. - -We have to take that oil because it is the source of their wealth. We would hit them so hard and so fast in so many different ways they wouldn’t know what happened. And then we’d hit them again and again until ISIS ceased to exist as a threat to anybody. - -We don’t have a choice. These people are medieval barbarians. They cut off heads, they drown people, they torture people, and we can’t allow them to ever gain a safe foothold anywhere. - -The number of ISIS troops is relatively small. Our intelligence community has estimated that there are no more than 30,000 to 50,000 ISIS fighters. People are usually surprised by that number. ISIS has done such a good job promoting fear that people assume it to be a much larger force. It isn’t. The entire ISIS force probably wouldn’t even fill Yankee Stadium. So defeating them requires a real commitment to go after them relentlessly wherever they are, without stopping, until every one of them is dead—and always bringing in other countries to help out. - -Iran is a much more complex problem. - -I am not afraid to criticize President Obama when he gets it wrong. When he was running for president in 2008, he correctly said, “Iran is a grave threat. It has an illicit nuclear program, it supports terrorism across the region and militias in Iraq, it threatens Israel’s existence, and it denies the Holocaust.” - -So why when Iran was struggling financially would he agree to a nuclear deal that releases billions of dollars’ worth of assets, which will further subsidize their terrorism business? It makes no sense. - -Iran was a powerful nation until the religious fanatics took over. As long as those people remain in power, Iran will be our enemy and a threat to Israel’s existence. Their supreme leader, Ayatollah Khamenei, has promised that Israel won’t exist in 25 years. We have to take that threat seriously and act accordingly. - -I’ve always loved and admired the Jewish people and supported the special relationship we have with Israel. The next president has to restore our traditionally strong partnership. We have been there for Israel and will continue to be there for Israel, because it is the one stable democracy in that region. It has become a fair-trading partner and a fellow pioneer on the frontiers of medicine, communications, technology, and energy development, which will benefit both of our nations well into the future. - -The miles that separate us right now from Iran are only a temporary barrier for them. If, or when, they develop missiles that can reach this country they will become a much greater threat. Meanwhile, they are financially supporting terrorist groups all over the world—and those groups are a real threat to our country and to our military serving overseas. Our enemies no longer need huge armies or billion-dollar missile systems to attack this country. Technology has made it possible for one or two terrorists to inflict terrible damage on us. We’ve got to stop Iran from sponsoring these murderers. - -But instead, we continue losing. - -The deal President Obama negotiated with Iran was the worst I have ever seen. We couldn’t have done worse. 
- -Iran was boxed in and the sanctions were hurting them. President Obama put his “legacy” on the line and before we walked into negotiations, the mullahs knew he had to have a deal or end up looking even more incompetent, so they fleeced him. - -Disgraceful. - -We did everything wrong in those negotiations. Instead of removing the sanctions that forced the Iranians to negotiate, we should have doubled or tripled the sanctions. - -Remember the principal strategy of negotiation: The side that needs the deal the most is the one that should walk away with the least. - -I would have increased the sanctions until the conditions there were so terrible that the Iranian leaders were begging for a deal. - -I would have laid down certain conditions that had to be agreed to, starting with the release of our four prisoners. - -I wouldn’t have settled for less than a complete dismantling of all their nuclear facilities, destruction of all their centrifuges, and on-site inspections anytime, anywhere. - -We didn’t get any of that—none of it—and then we released billions of dollars that had been frozen. - -We literally paid them to force us to accept a terrible deal. That would be like me beginning negotiations to build another magnificent skyscraper along the Hudson with 50-mile views in all directions, and walking out with approval to put up a small three-story building facing a wall. - -Iran got what it wanted (the release of their seized assets) and in return gave up what might have seemed like huge concessions, only to find out that there were so many loopholes that it will be nearly impossible to enforce anything meaningful. - -The possibility of Iran defying the world and developing a nuclear weapon is still very real. If the Iranians decide to prevent us (or the International Atomic Energy Agency) from inspecting their facilities, there isn’t too much that we can do about it other than take military action. The coalition of countries that enforced those sanctions is finished. Those countries—and several of them couldn’t care less about Israel—had people in Tehran talking business before the ink had dried on the side agreements. - -And then President Obama wouldn’t let Congress look at the deal. Once the new Iranian “partners” start making money there is no way the sanctions can ever be put back into place. - -Unfortunately, the deal is done. Once the sanctions are removed there is no going back, no “snapback.” Putting sanctions back in place unilaterally won’t do any good. I am especially good at reading a contract. There is always a loophole, we need to find it and, if necessary, they will pay big-league dollars. - -Whatever it takes, whatever we have to do, Iran cannot be allowed to build a nuclear weapon. - -There are many different ways to make sure that Iran is never armed with nuclear weapons. I’d be happy to sit down with the Iranian leaders when they understand that the best course for them, if they want to be a major player in the civilized world, is to close down their entire nuclear program. An Iran with a nuclear weapon would start a nuclear arms race in the Middle East with potentially devastating consequences. The situation would rapidly escalate to being the most dangerous threat Israel has ever faced. And it would force us to use extreme measures in defense of Israel and other allies in the region. - -That’s not going to happen, whatever Iran might think right now. - - - -Today the world has to deal with two “sets” of China. 
- -The good China is the one that has built great cities and provided housing and education for millions of people. The good China allows its citizens to travel around the world and get an education, and has helped create a growing middle class. - -The bad China is the one that’s mostly hidden to outsiders. It’s the government that controls Internet access for its citizens, cracks down on political dissent, closes newspapers, jails dissidents, restricts individual freedoms, launches cyber-attacks, and uses its clout around the world to manipulate economies. - -And all the while it is building up its military strength. - -There is no question that dealing with China, along with Russia, is going to continue to be our biggest challenge long-term. - -Our competition with China right now is economic, and we’ve been losing that battle for a long time. China has become our third-largest trading partner, behind only our neighbors Canada and Mexico. Yet China holds more of our American debt—more than $1.5 trillion—than any other country. (Although Japan is close.) As we saw in the summer of 2015 when the Chinese stock markets collapsed, our economies are tied together in a very negative way. - -Many years ago, there was an adage that “When General Motors sneezes, the stock market catches a cold.” In those days, GM was such a big player in the economy that if it stumbled, our economy suffered, too. The recent precipitous decline of the Chinese stock market caused our own Dow Jones average to plummet 1,000 points in a couple of days as investors ran for cover. Likewise, our trade deficit has been a dangerous drag on our economy. When China devalues its currency, this upsets the already tenuous balance of trade. - -We know that we have become dependent on the emerging Chinese markets—but they have become dependent on us, too. In 2014, we imported 17 percent more Chinese goods than any other country in the world. Hong Kong, which is a wholly owned subsidiary of China, was second and Japan a distant third. The health of the Chinese economy depends on us. They need our trade more than we need them. - -Foolishly, we don’t use that to our advantage. - -For the last few decades, China’s economy has been growing at a phenomenal 9 to 10 percent each year, although more recently there are signs of a cooling off. Despite these recent upheavals, economists have made predictions that within the next decade, China will replace the United States as the world’s largest economy. What have we done to make sure we will be able to compete with them? What have we done to beat them? - -I’ll tell you what we’ve done: We’ve rolled over. - -There are people who wish I wouldn’t refer to China as our enemy. But that’s exactly what they are. They have destroyed entire industries by utilizing low-wage workers, cost us tens of thousands of jobs, spied on our businesses, stolen our technology, and have manipulated and devalued their currency, which makes importing our goods more expensive—and sometimes, impossible. - -I know from my own experience that this is a difficult problem. The Chinese are very savvy businesspeople, and they have great advantages over our manufacturers. I’ve had several Trump-brand products made there. - -That’s a good example of the difference between a politician and a businessman. To stay in business I have to be smarter than my competition. I could make a very important point if I refused to have my goods manufactured there. 
- -As long as we’re playing under these conditions American companies don’t have a choice. Third-world countries have substantially lower production costs. They have lower overhead and pay their workers a lot less. As a businessman, I have an obligation to all of my employees and to consumers and stockholders to produce the best product at the lowest possible price. - -However, as a matter of American global policy, we want to take away China’s advantages. Last year, President Obama went to China and they held a beautiful banquet for him. Before Chinese president Xi Jinping made a reciprocal visit here, the White House announced plans for a lavish dinner. I made the point that hosting a state dinner in his honor was about the last thing I would do. Instead I’d tell him it was time we got down to business, and we would go to work. For starters, the Chinese regime must stop devaluing their currency because doing so makes it even harder for the rest of the world to compete. - -The reality is that China needs a strong American economy as much as we need their business. In May 2015, for example, Americans bought $1 out of every $5 worth of products China exported that month. We buy almost 20 percent of all their exports, considerably more than the EU does, which is the second-biggest consumer of Chinese goods. And that American percentage is increasing every year, making China more and more dependent on the American consumer for its own prosperity. - -As Steve Forbes wrote in his magazine, “China’s holdings in US Treasuries, which reached record levels in 2013, are setting off alarm bells. They shouldn’t. They underscore that Beijing is becoming more dependent on the US and the rest of the world for its strength and prosperity.” - -Remember: The Chinese need us as much as we need them. - -Maybe even more. - -So what should we do about it? We are going to use the leverage we have to change the situation so that it favors America and our people. We have to start by getting tough with the Chinese. I’ve negotiated with Chinese companies. I know how they do business. I’m actually landlord to China’s largest bank, which has its offices in Trump Tower. We’ve successfully negotiated several leases. It hasn’t always been easy. These are skilled people but I never backed down. - -Believe me, I know the best negotiators in this country, and a lot of them would be ready to go to work creating a fair balance of trade. If people like Carl Icahn were representing America, we would see a big difference in our trading policy. - -We actually hold a very strong hand. Unfortunately, our politicians are either too stupid or too foolish to understand this. Maybe they are both. We have several very good options, but it is always important to be flexible—and never reveal our cards. Our politicians talk too much. - -President Obama makes strong statements and promises us vigorous actions then nothing happens. - -So what happens when he makes those promises and never follows through? He loses all his credibility. I wonder what our great generals, men like MacArthur and Patton, would say if they heard a president revealing our plans for the Middle East or daring our enemies to cross a line. - -A very good story recently quoted a businessman describing me as “unpredictable,” noting it was one of my better qualities and helped me make a lot of money. 
Now that I am running for president, which so many experts predicted I would not do, that same trait has made it really hard for all my critics to figure out how to compete with my message. They’re all busy playing nicely, following all the establishment rules, taking every predictable step, trying to fit inside the conventional wisdom—and when I don’t play that game, they don’t know how to respond. - -Tipping your hand is one of the dumbest mistakes you can make in a military confrontation. I’ve read a lot of history and I don’t recall reading that General George Washington made hotel reservations in Valley Forge, or that he sent ahead his best wishes to the Hessians in Trenton. The element of surprise wins battles. So I don’t tell the other side what I’m doing, I don’t warn them, and I don’t let them fit me comfortably into a predictable pattern. I don’t want people to know exactly what I’m doing—or thinking. I like being unpredictable. - -It keeps them off balance. - -As a leader, I also know there are times when you should keep your cards close to the vest. When I was assembling property to build a skyscraper, for example, I had to buy many small lots so I could combine them into one very large and valuable buildable location, and total secrecy was an absolute necessity. If the owners of those properties had found out what I was doing they would have been able to squeeze considerably more money out of me for their properties. - -My point is that right now we’re doing too much talking. - -When dealing with China we need to stand up to them and remind them that it’s bad business to take advantage of your best customer. And then we should sit down and figure out how to make this a more equitable relationship. - -There is no one-size-fits-all foreign policy. We need to make our beliefs very clear and let them form the framework of our policy. - -Everything begins with a strong military. Everything. - -We will have the strongest military in our history, and our people will be equipped with the best weaponry and protection available. - -Period. - -That means the best missile systems, the best cyber-warfare training and equipment, and the best-trained soldiers. And when they come home after a war, battered and bruised, our troops won’t have to wait months for treatment. - -We owe those who serve us the best and the fastest care. It’s ridiculous how long our vets have to wait to get the help they deserve. They are our heroes, and the present administration has forgotten them. - -So how do we turn the tide and start winning again? - -As I’ve said, it starts with the most advanced and muscular military in the world, the most mobile one as well. We need to put some of the bill for this transformation on the Saudi Arabians, the South Koreans, the Germans, the Japanese, and the British. We’re protecting them, after all, and they should share in the costs. - -Next, we need to operate from a position of economic strength. We have the most powerful consumer engine in the world. We just need to start using it to our full advantage. - -Nobody likes to do business more than I do, but every deal I make will have one objective: America wins. - -We need to use the economic strength of American markets and the American consumer to assist our friends and remind our enemies about the benefits of cooperation. - -We need to use those strengths to form stronger alliances with our natural allies, but we need to expect them to be there when they are needed. 
I still don’t understand why Germany and other countries watched impassively as Putin marched into Ukraine. You can be sure Israel can be counted on to stand tall with us in the Middle East. - -And finally, we need to pay special attention to the Chinese. Their days of undercutting us with protectionist policies and cyber-theft are over. - -The new dawn of America has just begun. - - - - - - - - - - - -EDUCATION: A FAILING GRADE - - -MY FATHER DID NOT graduate from college. He was too busy working and building his business, but he understood and appreciated the value of an education. He had great respect for people with college degrees, even though he had built a large real estate business and earned many times more than most of them. With my father’s financial assistance, his younger brother, John, earned his master’s degree in physics from Columbia and his PhD from the Massachusetts Institute of Technology, one of the most prestigious universities in America. John became a noted professor at MIT and invented one of the first million-volt X-ray generators that was used to save the lives of cancer patients. During World War II, he played an important role in the development of radar. President Truman awarded him the President’s Certificate of Merit, and he was a recipient of the National Medal of Science. - -From my father and my uncle I learned the value of work and the value of a good education. From my own experience I learned what happens when you put them together. I went to the Wharton School of Finance at the University of Pennsylvania, which is, in my opinion, the best business school in America—and arguably the hardest there is to get into. - -There is one thing I know that even the professional politicians will support—education is good. It’s the easiest statement for a politician to support. But the question is, how do we make sure the best education possible is available for the most American kids? - -Because right now that is not the situation. - -Like so many other areas that our so-called leaders have wreaked their havoc upon, the American educational system is failing. We’re 26th in the world—26th! That’s an embarrassment. We spend more money on education, per capita, than any other nation—but 25 countries in the developed world provide a better education for their kids than we do for ours. This is simply unacceptable. - -Part of the problem is the politicians! They are unable to run a national education system with a top-down, one-size-fits-all approach. Our states and local districts are doing just fine making their own decisions on how best to educate our children. Now the federal Department of Education has been dictating educational policy for too long, and that needs to stop. Common Core doesn’t work. - -A lot of people believe the Department of Education should just be eliminated. Get rid of it. If we don’t eliminate it completely, we certainly need to cut its power and reach. Education has to be run locally. Common Core, No Child Left Behind, and Race to the Top are all programs that take decisions away from parents and local school boards. These programs allow the progressives in the Department of Education to indoctrinate, not educate, our kids. What they are doing does not fit the American model of governance. - -I am totally against these programs and the Department of Education. It’s a disaster. We cannot continue to fail our children—the very future of this nation. - -I went to a military school, New York Military Academy. It was a tough, tough place. 
There were ex-drill sergeants all over the place. and these people liked to scream and, above all, they liked to fight! Our instructors were demanding about everything from academics to personal hygiene. I learned American history and I learned how to neatly fold my clothing so it could be stacked. That might not be a skill that has had much application in my life, but it was part of teaching my fellow cadets and me discipline, focus, and self-reliance. - -The main rule was pretty simple: Do it right or do it again. One of my roommates from school told a reporter recently, “The school taught you how to be a leader. It taught you, ‘show me a sore loser, and I’ll show you a loser.’ . . . Honesty and straightforwardness were the rule of law. It got ingrained in us that you don’t lie, cheat, or steal, or tolerate those who do.” - -This may be why I never became a politician (until now)! - -Our national educational system was never intended to be limited to the three R’s, history, and science. It was designed to produce well-rounded young people capable of prospering in the world. In addition to an education, kids were supposed to graduate with some basic values, self-discipline, and life skills. A little common sense wouldn’t hurt either. Our schools don’t teach that anymore. Instead we’re more concerned about kids having self-esteem and feeling good about themselves than we are about preparing them for real life. The politically correct crowd has taken over our schools, and as a result we are failing our children. And our children will fail America if we don’t do something about it. Educators are worried that kids will feel bad if they flunk a test. You know what makes a kid feel good? - -Winning. - -Succeeding. - -We’ve dumbed down the curriculum to the lowest common denominator; in many schools, we’ve eliminated grading entirely and diplomas have been practically devalued into certificates of attendance. - -Our schools, our teachers, and our kids are capable of more. A lot more. - -The problem is we’re taking the easy way out. Instead of creating high standards and demanding more, we’re expecting less. We have to get tougher. Forget that self-esteem stuff; we need to start challenging kids. We need to allow them to fail when they don’t work hard. - -Anyone who has succeeded in business has survived a lot of failure—but they were tough enough to get back up and try again and again. Kids need to learn that success requires persistence. Self-esteem should come from overcoming challenges and surviving the hard knocks of trying to be better. - -Yet today, some teachers and school administrators are more concerned about hurting their students’ feelings or about hearing complaints from parents that they’re being too tough. Instead of becoming more competitive, we’re actually eliminating competition. That’s incredible—and wrong. - -Competition makes you stronger, it forces you to work harder, to do more. Corporations that can’t compete with other companies go out of business, no matter how nice they are or how good they feel about themselves. Small businesses have the same challenge. The owners have to work hard and compete for their survival or they won’t make it. - -Competition is why I’m very much in favor of school choice. Let schools compete for kids. I guarantee that if you forced schools to get better or close because parents didn’t want to enroll their kids there, they would get better. Those schools that weren’t good enough to attract students would close, and that’s a good thing. 
- -For two decades I’ve been urging politicians to open the schoolhouse doors and let parents decide which schools are best for their children. Professional educators look to options such as school choice, charter schools, voucher programs, magnet schools, and opportunity scholarships. - -Call them what you want—they all come down to the same thing: fostering competition. - -Those people who are against offering parents choices claim that doing so would be the end of good public schools. Better charter or magnet schools would drain the top kids out of that system, or hurt the morale of those left behind. - -Suddenly, the excellence that comes from competition is being criticized. - -Let’s look at the facts. While the number of charter schools has grown substantially, they are still a small percentage of our public schools. But it looks like they are making a difference, especially in urban areas. Stanford University’s Center for Research on Education Outcomes looked at the impact charter schools have made in 41 urban areas. They report that charter school students, compared to students in public schools, learn 40 days more advanced in math, and 28 more days in reading. That is significant, no matter how you look at it. - -Look, I know that people both for and against school choice can roll out endless arguments and statistics showing charter schools are either very successful or make no difference at all. This is a legitimate debate. But anyone except a politician running for office and looking for support from the teacher unions has to realize that smaller class sizes, more individualized instruction, and stricter discipline all make a huge positive difference. Making teachers accountable is important, but we should stop measuring their performance with mindless standardized tests. We should be embracing the success stories and using them as a model for improving the others. - -I’m not as concerned about the kids growing up in wealthy communities, where high property taxes have allowed them to build great schools, hire the best teachers, and provide all the supplies they need. Those schools are doing fine. - -In many urban areas, however, schools must fight for every tax dollar and are forced to have teachers and students bring in their own basic supplies such as pencils and paper. That’s a national tragedy. - -The problem with public schools is that in many places there is no way to take an honest measurement of how they’re doing. If a charter school isn’t doing the job, it closes. That’s the type of accountability we need throughout our educational system. - -One huge obstacle is the strength of the teacher unions. Teacher unions don’t want school choice because it means a potential reduction in union-protected jobs. In New York, for example, the unions have been so powerful for so long that, more than four decades ago, Woody Allen had a scene in his movie Sleeper in which a man wakes up in the future and is told that the world he’d known had been destroyed when the president of the powerful teachers union “got hold of a nuclear warhead.” Thanks to strong contracts negotiated by the New York City teacher union, it’s become almost impossible to discipline a teacher, much less actually fire one. - -When there is a legitimate complaint against a teacher in the New York system, rather than having a quick hearing to determine the validity of the complaint, teachers are assigned to an area known as “the rubber room” while they wait for their hearing. - -And they wait. 
They sit in empty classrooms or converted closets and do nothing—but they still get paid their whole salary. Some teachers spend several years waiting. No wonder they call it the rubber room—the whole concept is insane. But it’s the result of the contracts that strong unions have forced on New York and other cities. When teacher unions fight against school choice the unions are saying that their product isn’t good enough to compete in a free marketplace. Maybe they are right. And what about the good teachers? They can get stuck too and are at the mercy of the union. - -These unions have a nice monopoly going, so why wouldn’t they want to protect their turf? By the way, the teachers are not the only ones with troublesome unions. In New York City, the janitors don’t arrive in the morning until exactly the same time as the students. That means the boiler might not be fired up yet, or doors might not be unlocked, so students have to wait outside. - -To be upfront, I’m not a fan of the teacher unions, but I have great admiration and respect for teachers. Most of us can name a teacher or two who had a profound influence on our lives. But we’ve made teaching a tough profession. Good teachers love to teach. They respect and honor their profession. In too many classrooms, though, we’ve taken away their right to discipline disruptive kids, turning the teachers into babysitters as much as educators. - -And a lot of good teachers aren’t paid enough. It’s an interesting choice we’ve made as a society. We entrust our kids to teachers for most of the daytime, where they’ll have a really big impact on how their students will grow up. But we don’t pay enough to attract the best people to the profession. - -Unfortunately, teachers are not paid on merit. The standard for advancement is mostly the number of years of service—seniority. The really good and inspirational teachers burn out under the painful conditions found in too many schools. The bad teachers tend to hang around since they have nowhere else to go. Thus, the paychecks tend to be bigger for the less capable. - -That’s exactly the opposite of what we should be doing. - -One way of making the profession more attractive is to put some discipline back in the school. A lot of our schools aren’t safe. Putting metal detectors at the door may prevent kids from bringing in weapons, but it still doesn’t prevent them from causing problems. We need to get a lot tougher on troublemakers. We need to stop feeling sorry for them. They are robbing other kids of time to learn. - -I’m not saying we should go back to the days when teachers would get physical with students, but we need to restore rules about behavior in the classroom and hire trained security officers who can help enforce those rules. The parents or guardians must be brought into the process as well. - -Most disciplinary problems among students begin in the home. All parents should ask themselves: What kind of example am I setting? - -At the same time, there is nothing more important to the future of this country than our colleges and universities. We have the best higher-education system in the world. There is a reason that young people from all over the world come here to study at our schools. - -The problem is that the cost of higher education is skyrocketing, making it so far out of reach that many potential students either can’t afford it or have to take out huge loans to pay the tuition. 
Instead of making it easier for more of our young people to get the education they need, we’re making it harder to access, and thus available to only the wealthier families. - -My father succeeded without a college degree, but that would be much harder to do today. According to the Census Bureau, people with a bachelor’s degree earn an average of $51,000 a year. That’s $23,000 more a year than people with just high school diplomas and almost three times as much as high school dropouts. - -When I speak at a college, the students surround me and ask me two questions: First, can I give or get them a job? And second, what can we do about their loans? They haven’t even graduated from school, they haven’t yet started working, and already they’ve mortgaged their future. - -A four-year degree today can be expensive enough to create six-figure debt. - -Getting an advanced degree or a medical education can put a young professional well over $100,000 to $200,000 in debt. - -If the students can’t get enough scholarships or loan support, the parents have to step in, despite the risks to their own retirement funds. They may have to borrow the money, often by taking out a second mortgage if they have sufficient value in their home. - -We can’t forgive these loans, but we should take steps to help them. - -The big problem is the federal government. There is no reason the federal government should profit from student loans. This only makes an already difficult problem worse. The Federal Student Loan Program turned a $41.3 billion profit in 2013. - -These student loans are probably one of the only things that the government shouldn’t make money from and yet it does. - -And do you think this has anything to do with why schools continue to raise their tuition every year? Those loans should be viewed as an investment in America’s future. - -In the end, we have no choice. We have to change the way we educate our children. We should return the basic control and responsibility for our schools to the states and local communities. They need to set standards for their teachers and students that reward competitive quality and excellence. Our communities have to make education a priority, with flexibility in the property taxes and other funding involved. And most important, the parents have to instill a spirit of discipline, focus, and passion for learning in their children because the schools can’t do it alone. - -We are living in a very competitive world. If we study how the Asian countries have taken over in so many of the technology-based industries, the handwriting is on the wall. - -The future of our country is studying in our classrooms right now. - -Making our education system work is an important step toward making America great again. - - - - - - - - - - - -THE ENERGY DEBATE: A LOT OF HOT AIR - - -AS OFTEN ATTRIBUTED TO Mark Twain, “Everybody talks about the weather, but nobody does anything about it.” Apparently we’re trying to prove him wrong. - -We are actually blaming weather patterns on man-made causes. First, the so-called “experts” told us we were responsible for global warming, but then, when temperatures started dropping, scientists began referring to these variations as “climate change.” - -Now these “experts” can’t figure out whether it’s getting too hot or too cold, so the new term is “extreme weather conditions.” That covers everything from boiling heat to frigid ice. 
However, the point is the same: By sending the by-products of burning fossil fuels into the atmosphere, we have supposedly changed the natural weather patterns. - -In his 2015 State of the Union speech, President Obama declared the biggest threat on the planet today is climate change. The biggest threat?! We have ISIS troops chopping off the heads of innocent Christian missionaries. We have a coalition of adversaries in Syria supporting a dictator who uses chemical weapons on his own people. We have millions of Americans who have mortgages greater than the value of their property, while middle-class incomes are stagnant and more than 40 million citizens are living at poverty levels. - -And our president is most concerned about climate change? - -If you go back in history, you’ll find that the biggest tornadoes we’ve had in this country took place in the 1890s, and the most hurricanes occurred in the 1860s and ’70s. Violent climate “changes” are nothing new. - -We have even had ice ages. - -I just don’t happen to believe they are man-made. - -I do agree that so-called global climate change is causing us some problems: It’s causing us to waste billions of dollars to develop technologies we don’t need to fulfill our energy needs. - -President Obama introduced a program known as “cap and trade,” which sets a ceiling, or cap, on annual carbon dioxide emissions for companies. This would have forced them to reduce those emissions or pay a tax for the excess released above their cap. Because he could not get this legislation through the Congress, he has had his minions at the Environmental Protection Agency try to impose this plan through rule-making. - -This plan has succeeded mostly in doing one thing—keeping oil at an inflated price. Even after oil has dropped to $50 a barrel, we still live with prices at the pump that are too high. - -The truth is, we have sufficient energy supplies in this country to power us into the next century—all we have to do is develop them. Among all the gifts that God gave to America was an abundant supply of natural energy. According to the Department of Energy, the natural gas reserves we have in the ground could supply our energy needs for centuries. - -For example, the Marcellus Shale Fields lying under New York, Pennsylvania, Ohio, and West Virginia could produce the equivalent of tens of billions of barrels of oil, giving us plenty of time to develop sensible and cheaper alternative forms of energy. - -Right now, we are greatly dependent on oil. The cost of energy is one of the driving forces of our economy. Job creation is tied directly to the cost of oil. The more it costs to get it out of the ground and to the consumer, the fewer jobs that are created in all the industries that run on oil. We don’t even know how much oil is sitting buried under your feet as you read this book right now. - -Researchers at Rice University in Houston, Texas, have estimated we might have two trillion barrels of recoverable oil, enough to last the next 285 years. Technology has changed so much in the last few years that a Goldman Sachs study has estimated that by 2017 or 2018, we could overtake both Saudi Arabia and Russia to become the world’s largest oil producer. - -The oil is there for the taking; we just have to take it. - -I’ve never understood why, with all of our own reserves, we’ve allowed this country to be held hostage by OPEC, the cartel of oil-producing countries, some of which are hostile to America. 
For the last few decades, the leaders of OPEC have been sitting around their conference table, setting the price of oil and laughing at us. - -They know we have no leadership and we’ll pay whatever price they conspire to create. For years I’ve been urging our politicians to have the guts to bust the OPEC cartel, but then I remember something else Twain said: “Suppose you were an idiot. And suppose you were a member of Congress. But I repeat myself.” - -We can’t be fooled or lulled into a sense of security by the current drop in oil prices, which is unpredictable and still insufficient, given the amount of oil out there. Those oil prices are like the weather: guaranteed to change. We need to be prepared to drill our own oil. And we need to take advantage of every opportunity, including approving the Keystone XL Pipeline. - -It’s an outrage that Obama has delayed and probably even killed the 1,179-mile-long pipeline that would carry oil from Canada’s tar sands to Nebraska, where it would connect to existing pipelines that would take it all the way to Texas, and at the same time create thousands of construction jobs. The excess of oil on the market, which has caused a great drop in prices, has made it seem less vital today, but eventually the world will need that oil, and we will need the good jobs that it will create. - -One of the main criticisms of the pipeline has been the possibility of oil spills. Even the State Department has said the pipeline will be safe, and far better and safer than the existing system of transport. But mere possibilities shouldn’t prevent progress. You prepare for these situations, taking as many precautions as possible, and when they occur, you clean them up. - -We need to expand our own sources of oil, because the Middle East, our largest external source, is becoming more and more unstable. We still need Saudi Arabian oil, although we’re less dependent on their product than we were only a few years ago. - -But Saudi Arabia is a main target of or in some cases the home of terrorists. Given the Saudi overreliance on oil exports and their lack of a sustainable economy outside of oil, they are probably going to need our help at some point to stay in business. That’s a real threat, which is why we need to reduce our foreign oil dependence considerably. - -Our first priorities need to be approving the Keystone XL Pipeline and starting to drill everywhere oil is accessible. - -There has been a big push to develop alternative forms of energy—so-called green energy—from renewable sources. That’s another big mistake. To begin with, the whole push for renewable energy is being driven by the wrong motivation, the mistaken belief that global climate change is being caused by carbon emissions. If you don’t buy that—and I don’t—then what we have is really just an expensive way of making the tree-huggers feel good about themselves. - -The most popular source of green energy is solar panels. They work, but they don’t make economic sense. They don’t provide enough energy savings to cover the cost of installing and using them. They are the most highly subsidized form of green energy in America. - -Some estimates claim it takes as long as several decades after installing solar panels to get your money back. That’s not exactly what I would call a sound investment. - -Even if that number is only half right, what kind of investment do you make that takes 20 years before you break even? 
I understand solar energy is eventually going to become more efficient and maybe even cost-effective. Maybe. When it proves to be affordable and reliable in providing a substantial percent of our energy needs, then maybe it’ll be worth discussing. Meanwhile, we have to keep our cars and trucks running and our homes and buildings heated. There are much more efficient, cost-effective, and reliable ways of doing that. - -It’s no secret that I’ve had serious personal issues with the supporters of wind turbines. For several years I battled the Scottish government over its plan to construct a really ugly wind farm consisting of eleven giant turbines right offshore of one of the most beautiful golf resorts in the world in Aberdeen. - -The Trump International Golf Links Scotland resort in Aberdeen is a great tourist attraction that will benefit the Scottish economy and create jobs, while these turbines destroy some of the great beauty of the world. - -There isn’t sufficient wind power anyplace else? - -To me, this policy never made sense. Even at the wind farm’s peak output, the Scottish government was going to have to spend millions of pounds a year subsidizing it. We held up the project in court for almost five years, and during that time the price of oil fell so drastically that the project no longer makes economic sense. So it is never going to be built. I did Scotland a big favor. - -Like other countries, Scotland is trying to completely fulfill its energy needs from renewable sources within the next decade, but there is considerable skepticism about that plan. Bill Gates said flatly in 2015, “Renewable energy can’t do the job. Governments should switch green subsidies into R&D.” The cost to generate that much power from solar and wind energy would be, he said, “beyond astronomical.” He told the Financial Times that the answer to supplying our future energy needs is going to come from technological breakthroughs yet to be achieved. Gates said he intended to invest as much as $2 billion in renewable energy research—but not in the development of wind and solar energy. - -There are also a lot of questions about the damage that solar and wind power do to the environment. A recent study reported by a British think tank concluded that wind energy is “inordinately expensive and ineffective at cutting CO2 emissions.” Not only that, it added, “wind power, backed by conventional gas-fired generation, can emit more CO2 than the most efficient gas turbines running alone”—and building these steel monsters, mostly in China, produces a lot of pollutants. - -Ironically, at the same time the wind farm in Scotland was going ahead, a similar project was denied approval in Doonbeg, Ireland, where I am building another beautiful resort. The plan there was to spoil the lavish views with nine 413-foot turbines—that’s like lining up nine vertical football fields, including both end zones. - -Fortunately, this plan was denied because the turbines might harm the estimated 7,000 freshwater pearl mussels, an endangered species on the European Union list, that were living in the Doonbeg River, and also be bad for tourism. - -This magnificent golf course resort, absolutely one of the best in the world, was offering huge benefits to the local economy. - -We were saved by mussels. - -The bottom line is that we are going to remain dependent on oil and natural gas to fill our energy needs for a long time into the future. So if we are going to become energy independent, we need to keep drilling.
The good news is that we have tremendous supplies of fossil fuels. We just need to decide to go after them. - -We need to use every cost-effective method we have available to retrieve these resources. That includes fracking. For those who don’t know, fracking is a technology that involves injecting fluids into shale beds at a very high pressure to free locked-in resources. It makes it possible to recover vast amounts of oil and gas that otherwise can’t be reached through traditional methods. - -While New York governor Andrew Cuomo has banned fracking, this technology has created an economic boom in North Dakota, Pennsylvania, and Ohio. There were more jobs created and lower unemployment in those areas than practically anywhere else in the country. Upstate New Yorkers would like to replicate that boom in their region, lower taxes, and pay off massive New York State debt. - -The bottom line on energy is that until there is a better “alternative” or “green” way of supplying our energy needs, we must put our resources to work for us, and now. - - - - - - - - - - - -HEALTH CARE IS MAKING US ALL SICK - - -THE BASIC DIFFERENCE BETWEEN the politicians’ way and my way is that I’ve actually had to do the things that politicians only talk about doing. - -I’ve hired thousands of employees. I’ve had to negotiate with contractors and unions. I’ve had to provide health care coverage for my workers. I know what the real costs are, I know what the problems are. I know what works and what doesn’t work. - -Most important, I know where the waste is and how to provide good medical coverage at reasonable costs. - -Politicians don’t want to hear the truth, nor do they want to tell you the truth. They’re total hypocrites, especially when campaigning for reelection. They love to take to the stump and condemn “reckless government spending” and “government waste.” And yet virtually every bill passed by Congress is loaded with special goodies for their districts. - -We call this the “pork barrel” approach, which is a real disservice to pigs, who are only eating to survive. The pork barrel in politics is creating government waste in order to reward some special donor or interest group or to mollify a cranky member of Congress in return for his or her vote. - -And we’re paying for it. - -I get very angry when I think about how our “Affordable Care” Act was rammed down a lot of sore throats by the Democrats. - -Even Nancy Pelosi, the Democratic Speaker of the House at the time, conceded that most supporters of the bill had not actually read it. - -Clearly, the public didn’t understand what “Obamacare” was providing: its complexity, its concessions to the insurance lobby, its taking away of the right to keep your current physicians, and, naturally, the hidden, escalating costs of health care, especially for state treasuries and businesses of all sizes. And for individuals who are young and healthy, there’s no way out of it without paying a fine. - -Virtually all Republicans—and a growing number of Democrats—realize this is already a disaster that will only get worse. Premiums are skyrocketing—up 30 percent to 50 percent—and they will keep climbing. - -Look, I’m lucky. I’m able to afford the best health care in the world for myself and my family and my employees. I know that, but I also know that most people can’t do that and need some help. This is a subject that has been really important to me for a very long time. - -There’s no question. Obamacare is a catastrophe, and it has to be repealed and replaced.
And it was only approved because President Obama lied, 28 times, saying you could keep your doctor and your plan. It was a fraud, and the Republicans should have sued—and meant it. As the different provisions kick in over the next few years, individual deductibles are going to continue to rise. People will have to get hit by a truck to be eligible for coverage because those deductibles are going to be so high. - -Medical people hate it. - -Doctors are quitting all over the place. - -I have a friend who is one of the best doctors in the country. You would know the names of many of his patients. He told me, “Donald, I’ve never seen anything like this. I can’t practice medicine the way I want to anymore. I have more accountants and computer programmers working for me than I have nurses.” He’s right. There are now more than 100 codes for doctors to get reimbursement from insurance companies. - -We’ve turned the “paperwork” or “computer folders” in our medical system into the same nightmare as our 80,000-page tax code. - -As I’ve repeatedly said, the “un-Affordable” Care Act has to be replaced. Where I differ from what others say—as usual—is in the way I would change it. Many years ago, long before anybody else was talking about it, I knew we had to make changes in the system. I knew it because I saw what effect health care costs were having on the bottom line. I knew it because at that time we had more than 40 million Americans without any insurance at all, and now we are forcing “part-time” jobs onto the system. - -I said then that we needed to find a plan for everyone that was affordable and well-administered, and that provided freedom of choice. You know, a plan that actually allows you to keep your doctor if you want to. At that time I talked about a single-payer plan, which, in our then much less complicated system, might have had a chance of working. But it was only one of several suggestions from a nonpolitician at a time when many different concepts and ideas also were discussed. This was 15 years ago, but it still gets brought up a lot by other people. I guess they have nothing new to complain about. As usual, because they have no solutions of their own, they resort to “gotcha politics,” which gets us no closer to solving this problem or any other. They are all talk and no action. The Affordable Care Act is a clear example of that. - -To succeed in business, you have to be flexible and you have to change with the realities of the world. The world has changed; I’ve changed. I don’t think a single-payer system makes sense anymore. If I did, I would say it; I wouldn’t need anyone else to say it for me. Maybe a single-payer system works in other countries. It works incredibly well in Scotland, for example, and maybe it could have worked here at a different time. - -But not anymore. - -So what can we do about it? There’s no question we need real health care reform. We can’t let Americans go without health care because they don’t have the right resources. Sadly, that statement might cost me—but I still believe Republicans have big, beautiful “hearts” and want to help the poor and the sick—and can do so at the right price. I can’t even imagine what it must be like to be sick and unable to go to a doctor. This only throws people back into emergency rooms that are already overcrowded and inefficient. - -The Census Bureau has reported that 10 million people have now been added to the system. We have to find a way to take care of those people who can’t take care of themselves.
I believe that very strongly—even if it costs me. - -I know Americans agree with me, because wherever I go in Ohio, Florida, Iowa, South Carolina, and New Hampshire, when I say it, people give me a standing ovation. The real argument is how do we take care of those who cannot take care of themselves? How do we make sure Americans have access to good health care so that our kids get everything they need, and that even people who can’t afford the basic programs get at least reasonable care? - -To me, for politicians to claim that we have an answer to every problem is silly. When you listen to some politicians reeling off their prepared answers, you almost fall for it. They’re so smart that they already have a solution to every problem, and it’s always better than everyone else’s solutions. How convenient. But not for our country, because nothing gets done. Nothing gets solved, and we don’t win. What I hear is a lot of ridiculous promises from politicians about how they intend to fix everything. They’re all experts. But nothing ever happens. They’re all talk and no action. - -Most of them have gotten really good at saying absolutely nothing. They’ve all got some kind of program, but when you listen to them, you still don’t know what they’re talking about. - -My approach is completely different. I approach complicated problems such as how to provide health care for most Americans at a price we can afford the same way I solve the toughest business problems. We should hire the most knowledgeable people in the world on this subject and lock them in a room—and not unlock the door until they’ve agreed on the steps we need to take. - -A lot of times when I speak, people say I don’t provide specific policies that some pollster has determined are what people want to hear. I know that’s not the way the professional politicians do it—they seem to poll and focus-group every word. But there’s nobody like me. - -Nobody. - -I ask people to look at what I’ve done throughout my whole career. Look at how successful I’ve been doing things my way. So they have a choice: They can pretend some impossible solution is actually going to happen, or they can listen to the person who has proved that he can solve problems. - -I started in a relatively small real estate company based in Brooklyn and made more than $10 billion. I now live on what is considered the best block of real estate anywhere in the world—Fifth Avenue between 56th Street and 57th Street, right next to Tiffany’s in the heart of New York City. - -That doesn’t mean I don’t have some ideas about the right approach to take. First of all, we cannot cut either Social Security or Medicare benefits. That’s off the table. Those programs can be saved by growing the economy. Second, there are some simple changes that would provide real benefits. - -As I’ve said, I’d like to see a private insurance system without artificial lines drawn between states. We need to get rid of those lines and let people and companies cross state lines to purchase the best plan for them. The government should get out of the way and let insurance companies compete for your business. - -I have a big company. I have thousands of employees. If I’m negotiating for health insurance for my people in New York or California or Texas, I usually have one bidder in each state. Competition brings down prices, and the way the law is now, it discourages real competition between insurance companies for customers. They have virtual monopolies within the states. That makes no sense. 
It’s very stupid and unfair to us. - -You know who loves a lack of competition? Those insurance companies, who are making a fortune because they control the politicians. They’ve paid for them with their contributions, and it’s a good investment from their perspective. For our country, not so much. They give money to almost all the politicians. I’m using my own money so I am free to do what’s right, and serve the people, not the lobbyists. - -Nobody understands business better than I do. You want better plans at a better price? Increase competition for customers. - -The government doesn’t belong in health care except as the very last resort. The main way the government should be involved is to make sure the insurance companies are financially strong so that if there is a catastrophic event or they make some kind of miscalculation, they have the resources they’ll need to handle it. - -If we follow my logic, our health care system, and our economy, will be well again very soon. - - - - - - - - - - - -IT’S STILL THE ECONOMY, STUPID - - -ALL THE PUNDITS, AND just about everyone else, said I would never really run for the presidency. When I announced I was a candidate for president, some of those same people predicted it wasn’t really going to happen. They were sure I would drop out of the race before submitting my financial disclosures. - -Apparently they thought I would be embarrassed to admit that I was not as wealthy as most people thought. But after I filed those papers, they found out I was worth much more. - -I’m rich. I mean, I’m really rich. I’ve earned more money than even I thought I would—and I’ve had some pretty big dreams. - -You know, I hear politicians talk, and they say things like “I was a constitutional law professor, so I’m an expert on the Constitution.” Or maybe they say, “I was on the Senate Foreign Relations committee for 25 years, so that makes me an expert on foreign policy.” They point out how “successful” they were when they were CEO of a great company—where they cut 30,000 jobs, many of which ended up overseas, thus making them experts on job creation—experts on sending jobs outside of America to replace jobs inside of America. - -I listen to these people talking about how they are going to fix our economy, how they are going to create jobs, how they are going to lower taxes and balance the budget. I shake my head and I think, You wouldn’t have even qualified to be a contestant on The Apprentice. - -We shouldn’t take any fiscal advice from members of a Congress that can’t pass a budget, nor should we expect them to keep their job-creating promises. We need someone who is a tough negotiator and a real leader. Sadly, the Republican majority doesn’t possess the leadership or the negotiating skills necessary to pass a budget that would eliminate programs that ought to be entirely in private hands, or even eliminated completely. - -The only time they really stand up to Obama is in the final days, when spending authorizations are running out, and then they fold. Where were they this summer, when the real work could have been done and a consensus built? - -They’re going to screw up the lives of millions of Americans—and destroy our credit rating—because they don’t have the leadership skills needed to make our country great again and to look out for Americans. - -What we are confronted with is a mixture of bad management and bad politics.
- -We need leadership in the White House that will keep government functioning while getting the feds out of all the areas where they don’t belong. If the government is properly sized and properly focused, we won’t need to go from crisis to crisis. - -We need to start with the United States Congress. We’ve had presidents (Lyndon Johnson for one; Ronald Reagan for another) who have managed to build consensus and get things done. When President Reagan fired the air traffic controllers during his seventh month in office, he sent a signal to the unions that they heard loud and clear. When President Johnson twisted arms to get enough votes for the passage of a civil rights bill, he took on the far left and the far right and threatened them in order to get his way. - -It can be done. - -President Obama is big on playing golf. But he doesn’t play with the right people. He should be playing with those smart people who can help our country, establishing bonds to get things done—and not just his friends. - -Believe me, I know how to use a golf course—and golf clubs—to make deals. The only things that work are having a clear point of view and knowing how to get your message across to the country so that the people support and understand your mission. This way we’re not divided, and special interest groups cannot buy the outcomes they want and rip us apart. - -It all comes down to leadership. I don’t think many people would disagree that I tell it like it is. When you see the coverage of me on television, in newspapers, and on social media, you’d have to agree that I get more attention for my opinions than all the other Republican candidates put together. Hopefully, that’s respect and not pure entertainment—but it may be a little of both. - -I manage to blast through the ridiculous liberal bias of the media and speak right to the hearts of the people—or at least I try. Even New York magazine, hardly a conservative outlet, has given me credit on its cover for shaking up the status quo. - -Again, we’re talking leadership. - -When it comes to creating jobs and straightening out our economy, I am the only expert who isn’t talking in “theory.” I talk common sense and practical realism learned from the school of hard knocks. I’ve been there, done that, suffered through adversity, gone into debt, fought back, and come out on top, much bigger and stronger than ever before. During the recession of 1990, many of my friends went bankrupt and never recovered. I never went bankrupt. I survived, and learned so much about how to deal with bad times. Our country is going through a bad time—I get it, and I know how to solve it. - -I’m a fighter. Knock me down, and I come back even stronger. I love it! - -I’ve spent my entire life not just making money but, more importantly, learning how to manage my resources and share them with the thousands who have worked for me. To hear our left-wing critics tell it, we need socialism to make this country move forward, and we need a president who can make up the rules as he goes along. If he can’t get Congress to do something, he needs to rule by executive order. - -I say that’s complete nonsense. - -The free market works—it just needs leadership, not dictatorship. Our government needs to adhere strongly to the Constitution and maintain social programs that inspire and reward achievement and that are constantly accountable for their spending and outcomes.
I’m very concerned about the 46.5 million people living in poverty, and the great majority of middle-class Americans who can barely afford their homes (or have lost them). I am very concerned for the people who can’t pay for the education of their children. In short, I am concerned for the people who can’t buy into the American dream because the financial programs of this country are so tilted in favor of the rich. - -That’s why one of my strongest ideas is to look at the tax code in both its complexity and its obvious bias toward the rich. Hedge fund and money managers are important for our pension funds and the 401(k) plans that help millions of Americans—but far less important than they think. But financial advisers should pay taxes at the highest levels when they’re earning money at those levels. Often, these financial engineers are “flipping” companies, laying people off, and making billions—yes, billions—of dollars by “downsizing” and destroying people’s lives and sometimes entire companies. Believe me, I know the value of a billion dollars—but I also know the importance of a single dollar. - -The money I’ve earned was the result of my own work—projects I created, deals I made, companies I bought and turned around. I understand what it means for my employees to work in construction, one of the toughest and most dangerous jobs in the world. - -Those who spend their days sweating at their job should not have to sweat about their lives at night. - -I’ve never had the “security” of being on the government payroll. I was the guy who made out the payroll. It hasn’t always been so easy either. In the 1990s, the government changed the real estate tax laws and made those changes retroactive. It was very unfair, but I fought through it and thrived. It absolutely killed the construction industry. It put a lot of people out of business. The misguided passion of environmentalists today makes building anything much more difficult. Now we have crazy overregulation. You can barely buy a paper clip without being in violation of some governmental policy. - -It’s no surprise that stress in our society is at an all-time high. Let good and fair-minded businessmen and businesswomen run their companies, especially small businesses, without so much interference. Then they can make more money, put more people to work—and not just part-timers forced in by Obamacare—and have happier lives for themselves. - -Right now this country is in serious financial trouble. Our national debt is more than $19 trillion, and we’re on our way to $20 trillion. Even the most liberal economists warn that as we head past the $20+ trillion debt levels, we’ll be in big, big trouble. That’s when our financial system really starts to falter and diminish our borrowing capacity as well as drive up the interest costs on our debt. - -That’s when we will lose a lot of credibility in the world markets. For the past year, the United States has been the one country that has maintained financial stability while Europe and Asia faltered. Our debt is a very dangerous burden to carry around. There are overwhelming numbers of Americans who have not participated in the economic growth of the past year, or of the past 20 years, for that matter. They are being forced to mortgage their dreams—their American dreams—just to maintain where they are—just to get by. They have little or no hope of getting ahead. - -This is a case where our system is broken, and we need to fix it. 
We’ve got to do something to change the way we’re developing policy, and we’ve got to start right now. We need people who understand the scope of the problems and know how to turn the ship of state around. - -We need leadership! - -Some of the proposed solutions make no sense. There are politicians who think one way of reducing the national debt is to cut Social Security or other entitlement programs. We have to tread very carefully here. Since our “great” depression more than 80 years ago, America has always provided a social safety net for those who fall off the economic chart. Retired seniors in particular rely on pensions and Social Security, as well as Medicare. - -We have to be very careful about changing the rules for those whose monthly checks make a big difference in their survival. A lot of people live from check to check. There’s no way I’m letting those payments be reduced. No way. This country made a deal with our citizens. That’s their money. They paid it into the system their whole working lives so that older people could get their monthly checks. - -Now it’s their turn. - -We should not touch Social Security. It’s off the table. - -But you know what? There are a lot of wealthy people who don’t need it. So if the government offered me the opportunity to give it up, I would check that box. I’m sure there are other wealthy individuals who would do the same thing. Even so, the impact that would have on solving the financial crisis we face would be minimal. - -Changing the tax code to be more fair for all income classes is a much better answer to this bigger problem. - -There are certainly “entitlements” that can be reviewed for waste and misguided direction or wasteful execution. I discuss immigration policies elsewhere, but I question whether illegal immigrants—or their children—should be receiving the same benefits as bona fide citizens or those who are here lawfully. - -At the same time, government largesse for many businesses and industries—“entitlements for the rich”—needs to be examined. I am very suspicious of income-supplement programs that seem to expand for industries with large lobbying teams or for companies run by major contributors to election campaigns. - -To solve our overall economic problem, we have to start rebuilding our industries to meet the challenge from foreign competitors and create real jobs. Government statistics are made to look very positive, but in real life the situation is terrible. - -When you look at the unemployment situation, there are two very significant variables. One is the percentage of people who give up and drop out of the labor market. They aren’t included in the unemployment sample. Our so-called labor participation rate—those who have stayed in the job market—is the lowest it’s been in almost 40 years. It hasn’t been this low since President Jimmy Carter was running the country, and he presided over an inflationary spiral in which interest rates exceeded 20 percent. - -When you also take into account the large number of jobholders who are underemployed, the real unemployment rate soars to the high teens or even 20 percent. I know many wise financial heads question the government’s assessment of the job market and the statistics it puts out. In our daily lives, we see from our friends and neighbors that the job market is still very troubled, as downsizing continues to be a popular buzzword for corporations trying to hype their stock. - -It’s not just jobs that are being lost to other countries. 
We are seeing whole industries vanish overseas. - -Americans want to work. We have a great work ethic in this country. The problem is that when young people look for their first good jobs, or people who have lost their jobs look for new ones, they can’t find any. - -The jobs aren’t there. They’ve vanished! - -I’ve certainly done my part in my businesses. I know how to create jobs. I have created tens of thousands of jobs in my career. Thousands of people currently work for me and many thousands more are employed by my partnerships. I’m involved in literally hundreds of companies, almost all of which are working beautifully, and setting new standards and records. - -They include everything from a bottled springwater company to a vineyard. We manage ice-skating rinks, we produce TV shows, we make leather goods, we create fragrances, and we own beautiful restaurants. - -Of course, our bread-and-butter is bricks-and-mortar real estate. We own, build, manage, and/or license many beautiful buildings of all types. - -There is only one thing that every single one of my many different businesses has in common: They all help provide jobs for people. When I construct a building or develop a golf resort, it creates jobs for construction workers and for all the companies supplying the materials, from the flooring to the lighting fixtures. - -These are good jobs. - -When a building is finished and occupied, or when people are playing on one of my golf courses, or staying at a hotel, we supply the service personnel who keep these businesses running. - -More good jobs. - -The same thing is true with having my products made in China or Mexico or other countries. Some have attacked me for urging that we complain about these countries at the same time I’m having goods manufactured there. - -My response: I’m a realist. I’m a competitor. - -When I am working on a business deal, I make the best deal. But we should be changing the business climate so that manufacturers can get the best deal right here in the US. Right now it doesn’t work that way. - -We need legislation that gives American companies the tax priorities and financial support to create more of their technology and to redirect more of their manufacturing here at home. - -We must stop certain countries from devaluing their currency at the drop of a hat. - -We’re the home team, and we should come first. - -So how do we get back the jobs we’ve lost to other countries? - -Answer: Start by negotiating better trade agreements with our “friendly” partners. - -We have to bring jobs back from places like China, Japan, and Mexico. We have to stand up and be tough. In too many ways we’re giving away the greatest market in the world—the American consumer. - -Ford recently announced that it’s building a $2.5 billion plant in Mexico. Nabisco is moving a big plant from Chicago to Mexico. A German auto company was all set to build a plant in Tennessee, but then it changed its mind and is building it in Mexico instead. - -How does that happen? How many good jobs did we lose in just those three deals? How many more deals like that have slipped through our fingers without our even realizing it? Hundreds, maybe even thousands, but no more! - -It’s ridiculous. We all know that the American labor force is the best there is. We just have to allow them to compete. - -But we sit there while we’re getting beaten in trade agreements. In my companies, we fight for every deal.
We fight for the best price on cleaning materials for the restaurants and the best price for the printing of the labels on our wine bottles. - -I fight for my people every day. - -Now I am fighting for America. I want our country to start winning again. And we can! - -All it takes is a commitment to winning and making “Made in America” a badge of honor just like it used to be. - - - - - - - - - - - -NICE GUYS CAN FINISH FIRST - - -I’M A NICE GUY. I really am. But I have a nasty habit that most career politicians don’t have: I tell the truth. I’m not afraid to say exactly what I believe. When I’m asked a question, I don’t answer with a speech that ignores a controversial subject. I answer the question. - -Sometimes people don’t like my answers. Too bad. - -So they attack me. And when someone attacks me, I fight back. Hard. - -That has always been my philosophy: If my critics attack me, then I’ll fight back. Let’s be honest and truthful with one another. I’m confident my answer makes the most sense. - -You know who really appreciates this approach? The American people. - -They’re not used to hearing the truth from politicians, but they love it, and they love hearing it from me. - -They have never seen anyone like me in politics. They have never seen anyone who is willing to stand up to the lobbyists, the PACs, the special interests, who all have way too much influence over Washington politicians. I am paying my own way so I can say whatever I want. I will only do what is right for our country, which I love. - -Sometimes there is a price I pay for that. Loyalty is extremely important to me. My family and close friends will say that I am loyal to a fault. That’s why, when I announced that I was running, I was very interested to see which of my so-called friends would remain loyal to me. - -In politics, 55 percent of the vote is considered a landslide—but that means 45 percent of the people are against you. I’ve never had 45 percent against me. When I went to events, people would cheer, I would hear very few boos or hecklers. But when you run for political office, suddenly you hear some boos in the background. One night, at a charity event where I had made a major contribution, my wife, Melania, was with me as I was cheered loudly. But we were surprised to hear a small number of people booing in the background. Melania said to me, “Darling, do you know what? You’ve never been booed before.” I looked at her and said, “Welcome to the world of politics.” - -In fact, I have been surprised by some people I once considered friends. One of my biggest surprises was Macy’s. I’ve had a long and good relationship with the chairman and CEO, Terry Lundgren—a very nice guy and good executive. I’ve sold shirts, ties, cuff links, and fragrances at Macy’s. We’ve done very well. I like the fact that Trump was the only brand that could sell a $50 million apartment and a $37 tie. - -Terry Lundgren was a good friend. We spent a lot of time together at Mar-a-Lago and at many Trump golf courses. I’ve introduced him to people who have become good friends of his. I got a call from him in August 2015 when I was receiving a lot of bad press regarding my statements about illegal immigration. I was getting ready to speak to a large crowd in New Hampshire when my cell phone rang. The emcee on the dais had already started introducing me—he was talking about some of my buildings, how well I was doing in the polls. But when I saw Terry—a friend—was calling, I answered. 
- -“Donald, Donald, I have to speak to you,” he said in a rushed and nervous tone. “We’re receiving calls from Mexicans. They’re going to picket Macy’s.” - -I said, “That’s no big deal. They’ll be there for an hour.” - -“I can’t let this happen,” he said. “It wouldn’t be good for our company’s reputation.” - -I told him I was getting ready to make a speech and couldn’t talk to him, but said pointedly, “If you do this, it would truly be an act of disloyalty because you’re getting a little bit of heat over selling my ties and shirts. Aside from that, it wouldn’t make me look very good.” - -Terry said, “I’ve got to do something. We’re putting out a press release that we’re terminating you.” Wow, I thought to myself, and this is a company that just paid a massive fine for some terrible acts to its customers. Not nice! - -As he read the release the emcee announced my name and the crowd roared. “Wait a second. You’re reading this while I have to speak to this packed house? Can’t it wait until tomorrow?” - -“We have to do it now,” he said. “It can’t wait.” - -“Wow. What a great act of disloyalty. I’m telling you that if they picket, they’ll be there for an hour. Nobody cares.” - -My ties, shirts, cuff links, and fragrances are now available at Trump Tower, not at Macy’s. I’ve been told that many thousands of people cut up their Macy’s credit cards and mailed them back to the store because of this. The public gets it. - -I’ve also heard that other companies have stopped doing business with Macy’s. And at least one prominent businessman told me, “I can’t believe how disloyal Terry Lundgren was.” He added jokingly, “He used Mar-a-Lago more than you do!” - -Likewise, NBC and Univision refused to broadcast the Miss Universe/Miss USA Pageants. I sued NBC, but settled after buying its half of the company and selling the whole thing to IMG. Currently I am suing Univision for a substantial amount of money. - -I’d had a long and very successful relationship with NBC, which made millions broadcasting my top-rated show, The Apprentice. But before this happened I’d told them that if I ran for president, because of the equal-time regulations, I would not be doing the show anymore. The Apprentice had already been renewed and top executives of NBC and Comcast came to my office to try to convince me to change my mind. - -Steve Burke of Comcast, NBC’s president Bob Greenblatt, and Paul Telegdy, head of reality television, are great guys, and my relationship with all of them has been an amazing experience. I’m so glad we settled our litigation, and life goes on. - -My lawsuit against Univision continues though, and at some point I expect to win a lot of money from them. They broke a contract and for that they must pay. It’s sad because I had such a great liking for the two top executives, Randy Falco and Beau Ferrari. Who knows? At some point we’ll probably have that relationship again. - -The publicity about severing ties those first few weeks was relentless: ESPN BREAKS TIES WITH TRUMP—even though I never had a deal with ESPN. They were using my golf course on the Pacific Ocean, Trump National Los Angeles, for a golf outing. NASCAR CUTS ALL TIES WITH TRUMP—but I had no ties with NASCAR, they were renting a ballroom at Trump National Doral for their annual banquet. And, in fact, I kept their substantial deposits and will rent those places to someone else—hopefully for more money. - -Things have calmed down and people are now giving me great credit for raising the problem of illegal immigration. 
I made that issue so important because it is so important to the future of America. I wasn’t surprised it caused a lot of problems. Most politicians don’t want to get too close to something that controversial. I don’t care. I learned how to be direct, how to be honest, and how to stand up for my beliefs from my father. - -Fred Trump, my wonderful, tough but loving father, built, owned, and managed buildings in Queens and Brooklyn. He made enough money to just sit back and relax, but that wasn’t who he was. Even on weekends he’d be walking through a building, a house, or a construction site. If the halls were dirty or a bulb was out, the people working there would know about it. My father wasn’t overly concerned with hurting someone’s feelings—he wanted the floors to be cleaned or, as he would often say, in “mint condition.” If the person responsible couldn’t keep them clean, he was gone. My father believed he had an obligation to his tenants. His motto was simple: You do your job, you keep your job. Do it well, you get a better job. That always made sense to me. - -Unfortunately, politics doesn’t work that way. In politics, once someone gets elected, it’s tough to get them out. There’s no motivation to try to get anything done. If the American public had any idea what really goes on, they’d be much angrier than they are already. Congress’s approval rating would be even lower than it is now. Career politicians like it this way; being a politician is their career. I know many of them; believe me, they couldn’t get a job in private industry. They don’t want anyone taking away their great pension plan and health benefits—that you are paying for. - -The special interests and lobbyists also like it this way. They’re earning a lot of money selling influence—and giving away money is a lot easier than cleaning floors. Believe me, I know how it works, I’ve made a lot of campaign contributions. - -I’m not taking a penny from those people. I’m paying my own way. So the old rules don’t apply to me—and those people who benefit from those rules don’t know how to react. At first they hoped if they ignored me I would go away. The American people certainly proved them wrong. They love the fact that someone is finally standing up for their interests! - -They couldn’t ignore me, so they started attacking me. These veteran politicians looked for the place I was most vulnerable—which is why they attacked my hair, which is mine, by the way. They showed a lot of courage attacking my hair; this resulted in what might be the strangest political headline ever written when NBC News reported: TRUMP DEFENDS HAIR, ATTACKS MEDIA AT CAMPAIGN RALLY! - -Recently though, they have been claiming I haven’t put out enough specifics. There’s a good reason for this, and it fits perfectly with my overall philosophy of leadership: Many of our problems, caused by years of stupid decisions or no decisions at all, have grown into a huge mess. If I could wave a magic wand and fix them, I’d do it. But there are a lot of different voices—and interests—that have to be considered when working toward solutions. This involves getting people into a room and negotiating compromises until everyone walks out of that room on the same page. - -No one likes to compromise. Believe me, I will never compromise on the basic principles I’m discussing in this book. Yet every party to a decision needs to feel his position is understood. 
The hardest part of putting up a building is getting the city officials, the city council, the environmentalists, local zoning boards, and the ever-critical media to agree that this was an acceptable project. Then we have to bring in the banks, the contractors, and the unions to make sure the project is financially feasible. - -If I’d said at the beginning, “This is exactly the way we’re going to construct this building,” the headlines would have announced: MAJOR OPPOSITION TO NEW TRUMP PROJECT! Nothing would get done. - -The same principles apply to management of the federal government. Congress can’t pass a budget because no one knows how to negotiate with the various interests involved in funding our government. Most of the time Congress simply accepts last year’s spending, which was a continuation of the previous year’s spending. That is followed by an agreement on an emergency temporary stopgap measure. There is no final resolution, so the same broken process is repeated year after year. - -We need to find the best people, including experts in various fields and economists, as well as congressional leaders to provide perspective and determine which programs are working and should be kept or expanded, which programs should be cut, and what new programs might be added to deal with the changing world. Career politicians always claim to have these answers—but how is that possible when they haven’t properly analyzed the situation? - -A great leader has to be flexible, holding his ground on the major principles but finding room for compromises that can bring people together. A great leader has to be savvy at negotiations so we don’t drown every bill in pork barrel bridges to nowhere. I know how to stand my ground—but I also know that Republicans and Democrats need to find common ground to stand on as well. - -We need to see more real achievements in the first 100 days of the next administration than we’ve seen in the seven years of the Obama presidency. Washington needs to get moving in the right direction again. Hopefully you will understand that is more important than all the wonky details of grand plans that will never be enacted. - -And by the way, I have outlined plenty of policy initiatives. This is not “the politics of hope.” This is “the politics of reality,” which only a strong businessman like me can develop. - -Another favorite gimmick my opponents use to attack my ideas is to claim I’m not a conservative, or not even a Republican. Or worse, I’m not a politician! They claim this makes it impossible for me to get things done in Washington. - -I’ve got news for them: Washington doesn’t work. - -Ironically, it was this type of criticism that helped my ideas attract attention and gain popularity in the first place. The contrast reminded Americans what they really think of career politicians. - -As for being a Republican and conservative, let me tell you a story about how our political system really works. In May 2015 the president of a major conservative advocacy group, the Club for Growth, came up to my office in Trump Tower. He seemed like a very nice, reasonable guy. During that meeting he said some very complimentary things about my business success and told me that people like me were needed in Washington. - -A week later we received a letter from him reading, “As we both know, it is business owners who create jobs—not the government.” Then he asked for a million-dollar donation. - -A million dollars! - -When I turned him down he attacked me in the press. 
I was not a real candidate, he said, “and it would be unfortunate if I took away a spot at even one Republican debate.” - -Take away a spot from whom? Someone, I suspect, who gave them that big donation. - -When I pulled ahead in the polls, this group spent a million dollars on ads attacking me in Iowa. This is one smart group; they come to my office asking for a million-dollar donation—and it ends up costing them a million dollars. - -Meanwhile they’re bad-mouthing me to their followers: “Donald Trump is the worst kind of politician who will say anything to get elected.” Saying anything to them means telling the truth to me. - -This demonstrates everything that is wrong with our political system. We look at politicians and think: This one’s owned by this millionaire. That one’s owned by that millionaire, or lobbyist, or special interest group. - -Me? I speak for the people. - -So the establishment attacks me. They can’t own me, they can’t dictate to me, so they search for ways to dismiss me. They point out (accurately, for once) that at one time I was a registered Democrat. I grew up and worked in New York, where virtually everyone is a Democrat. - -You know who else was a Democrat? Ronald Reagan. He switched, and I switched years ago, when I began to see what liberal Democrats were doing to our country. Now I’m a conservative Republican with a big heart. I didn’t decide to become a Republican. That’s who I have always been. - -By nature, I’m a conservative person. I believe in a strong work ethic, traditional values, being frugal in many ways and aggressive in military and foreign policy. I support a tight interpretation of the Constitution, which means judges should stick to precedent and not write social policy. - -I represent traditional conservative values. I get up every morning and go to work. I work hard, I’ve been honest and I’m very successful. The billions I have? I earned every penny. When I was beginning my career my father never gave me much money, but he gave me a great work ethic. I always know a hater when they say my father gave me $200 million when I was starting out. I only wish! - -Number one: He didn’t have that kind of money. In those days, all of Brooklyn wasn’t worth $200 million. And number two: If he did, he would never have given it to me. - -When I wanted to leave Brooklyn and Queens and venture into Manhattan, he thought I was crazy. Nevertheless he had confidence in me. I’ll never forget when he told my incredible mother, “Look, I don’t know if he is right or wrong, but I’ve got to let him do it. He has great ability and talent, and who knows? He may be able to pull it off.” My father was a tough cookie, but he had a warm heart. He was a man who truly loved his wife and five children: Maryanne, Elizabeth, Robert, Fred, and me. He always wanted what was best for us. - -He loaned me a small amount of money—loaned, not gave—around $1 million—money that I probably could have gotten from a bank—and the biggest part of my journey began. I paid my father back a few years later, with full interest, after my Manhattan deals started to come in—and very successfully. One of them, the Grand Hyatt Hotel, was a big hit, built by me—on time and under budget. I made a lot of money. He was very happy and even more proud of me than ever before. - -When my father passed away at the age of 93, he left his estate to his children. By that time, I had already built a massive and internationally recognized company. 
After the family split the assets and estate taxes, the money I got was—relative to what I had built—not that consequential. Nice to have but not a big-money factor. What he left me, much more importantly, were the best “genes” that anybody could get. He was a special man and father. - -Let’s review the conservative scorecard and check my grades: - -Affordable health care? Here’s my word—and I never go back on my word: Obamacare needs to be repealed ASAP—and replaced with something far better. - -Immigration reform? Has anybody been more of a leader on this issue than me? My plan is simple: We build a wall and take back control of our country. Massive law enforcement on the borders. Legal immigrants should speak or learn English; without it they can never assimilate. - -Anchor babies? They’re here for one day and the child is entitled to a lifetime of benefits when others have spent a lifetime, or their lives, earning them. This needs to end! - -The Iran deal? Iran cannot be allowed to build a nuclear weapon. That’s not a threat. It’s a statement of fact. Our allies and foes alike should take heed. - -The Second Amendment? I believe the rights of law-abiding gun owners must be fully protected. - -Defense of religious freedom? I believe religious freedom is the most fundamental constitutional right we have and must be protected. - -Fix our broken tax system? There is no politician who understands our tax system like I do. It has to be changed to make it fair for all Americans—and simplified. - -I am a strong, proud conservative. The biggest difference between me and all the do-nothing politicians who are all talk, no action? Those people constantly claiming they are more conservative than anyone else? I don’t talk about things, I get things done. - -I am standing up for this country because our so-called leaders haven’t been able to. So the next time someone questions my conservative credentials, show them this list! - - - - - - - - - - - -LUCKY TO BE AN AMERICAN - - -I KNOW HOW LUCKY I am. The day I was born I had already won the greatest lottery on Earth. I was born in the United States of America. With that came the amazing opportunities that every American citizen has: The right to become the best possible person you can be. The right to be treated equally with all other Americans. The right to speak freely (and by the way I take that right very seriously). The right to practice the religion of your choice the way you choose. The right to achieve as much as your own hard work and talent allow. The right to be secure in your home thanks to the greatest law enforcement agencies anywhere, and the privilege of raising your family knowing that you are protected by the men and women of the finest military forces in the world. - -I think my parents must have known how proud I would be to be an American: I was born on Flag Day, June 14! - -I’ll tell you how proud I am to be an American. You may have heard that I own a house in Palm Beach, Florida. It’s called Mar-a-Lago, which means “Sea to Lake.” It has 128 rooms. It’s listed as a National Historic Landmark because it is one of the most beautiful homes ever built. It was built by E. F. Hutton and his cereal heiress wife, Marjorie Merriweather Post, in 1927. - -The land it sits on is reportedly the most valuable 20 acres of land in Florida. 
After I bought it, I wanted people to know how proud and grateful I am to be an American, so I decided to fly an American flag in front of my house, an American flag that nobody could miss, a flag fitting for this beautiful house. - -So I raised an extra-large flag, 15 feet by 25 feet, on an 80-foot-high flagpole. - -Watching that flag catch the wind and fly proudly was a beautiful sight. Except the town of Palm Beach decided my flag was too big. They claimed it exceeded zoning regulations. Who knew there was a law about the size of flag you are allowed to fly? When I politely informed them I had no intention of taking down my American flag, they began fining me $250 a day until I removed it. - -As I said at the time, “The town council of Palm Beach should be ashamed of itself. They’re fining me for putting up the American flag. The day you need a permit to put up an American flag, that will be a very sad day for this country.” - -My guess is you know what I did next. I filed a lawsuit against the town for $25 million, claiming my First, Eighth, and Fourteenth Amendment rights were being violated. As we wrote in that lawsuit, “A smaller flag and pole on Mar-a-Lago’s property would be lost given the property’s massive size, look silly instead of making a statement, and most importantly would fail to appropriately express the magnitude of Donald J. Trump’s and the club members’ patriotism.” - -Those fines added up to $120,000 by the time we had worked out a deal with the town. Rather than paying the fine, I donated $100,000 to Iraq War veterans’ charities. - -I actually thought that issue was done, but in 2014 the city of Rancho Palos Verdes, California, wanted me to lower the 70-foot-high flagpole flying over my golf course on the Pacific Ocean. One of the officials who wanted me to lower it admitted, “This flag now has become a symbol, and to the people in this community this flag symbolizes patriotism.” So we won that fight! - -As we all know, the flag is much more than a red, white, and blue cloth rectangle. It is a symbol to me, to you, and to people around the world. It represents equality, hope, and fairness. It represents great courage and sacrifice. - -Everyone has heard me talking about our immigration problem. Well, there is an important reason that people are willing to risk their lives to get into this country. In 2015, more than 4.4 million people had applied and were waiting to legally immigrate to the United States—that list even includes more than 50,000 Iranians. For people coming from some countries, the estimated waiting period is 33 years. We also have somewhere between 12 and 15 million people here legally on green cards or temporary visas. Nobody knows how many illegal immigrants are here, but the usual estimate is more than 11 million people. - -For the last several years I’ve been watching things change. Like most of you, I don’t like what has been happening. I was asked by Chuck Todd on Meet the Press when the last time was that I thought America was living up to its promise. During the administration of Ronald Reagan, I said. It was a time when we felt so proud to be Americans. - -I’ve spent my entire career standing up for this country. There is a writer on a conservative site who doesn’t like me at all. I understand that—these people all have their favorite politicians. But even while he was calling me some nasty names he wrote, “And tell me: Why is Donald Trump . . .
the only candidate willing to unambiguously state that the first duty of American politicians is to American citizens? Would those who disagree kindly provide us with a list of their priorities, showing us exactly where they think American citizens fall?” - -I believe in always putting the interests of American citizens first—always. There aren’t any second or third places. That level of commitment is what has been missing for so long in our foreign policy, in our trade policy, in our immigration policy. Somewhere we started worrying too much about what other countries thought about us. Does anybody reading this believe that I’m concerned about making other countries feel good? They used to fear us. They used to want to be us. We were respected. - -Many years ago my daughter Ivanka went to what was then Czechoslovakia to visit her mother’s family. At that time it was a Communist country. She told me that the Czechs would tape American currency to the windshield of their cars, even if it was just a dollar bill, to show how proud they were to have anything from America. Even a one-dollar bill—they just wanted that association with America. Now? Now they’re laughing at us. There’s an old phrase that, sadly, you rarely hear anymore: “Made in America.” We will start saying this again—in spades. We’re unique. In case there’s any doubt, that is exactly what I believe. - -One way I have always shown my patriotism is by strongly supporting our military. We haven’t been doing such a good job of that lately, but that needs to change. Our military must have all the manpower and the tools it needs to fulfill any mission. I like to say that the United States military should be so strong that we will never have to use it. - -I was absolutely horrified to find out that we have been sending our soldiers into combat situations without the best available protection. It wasn’t so long ago that parents were raising money at home to buy additional available protection and sending it to their kids in combat. I couldn’t believe it. We need to make this promise to our fighting forces: No American will ever go into the field unless he or she has the best equipment available and as much of it as is needed. And when our troops come home, we are going to take good care of them. They are going to have the medical care they’ve earned. They are going to be respected for their service. The way we treat our veterans today is a disgrace, and that needs to change. - -Unlike a lot of politicians, my active involvement with our veterans began more than two decades ago, when only about one hundred spectators turned out to watch New York’s annual Veterans Day parade. In that case this country was “celebrating” the 50th anniversary of the end of World War II. - -A hundred spectators? It was humiliating. It was an insult to those men and women who had literally saved the world for democracy. One hundred people. - -Mayor Rudy Giuliani and I decided to do something about it. I donated a one-million-dollar matching grant to finance a second parade. On November 11, I walked down Fifth Avenue with 25,000 veterans, many of them dressed in their uniforms, as an estimated 1.4 million spectators cheered them. That was a parade worthy of their sacrifice, and one of N.Y.’s biggest ever. - -A month later, I was honored at the Pentagon at a lunch attended by the secretary of defense and the entire Joint Chiefs of Staff. Since that time I’ve actively supported veterans’ causes and hired veterans throughout my organization. 
- -Currently the biggest crisis our veterans are facing is getting the medical care they were promised. We’ve got young men and women who come back from Iraq and Afghanistan and have to fight to get the treatment they need. We made a contract with all our veterans and we’re not delivering. How in the world can we talk about how much we love this country when we’re not taking care of the people who protect us? In September I said we need to take the existing system apart. We need to create a whole new system. We have to, and it will really work. - -The Department of Veterans Affairs (VA) is probably the most incompetently run agency in the United States government. And that’s saying something. If it were one of my companies, the people running it would have been fired a long time ago. The problem is that there are too many political people involved in its operation. It is astonishing that illegal immigrants in many cases are treated better than our veterans. The taxpayers pay more than $150 billion a year for the VA, and what do we get for that? - -The Las Vegas Review-Journal summed it up correctly in 2014, saying, “The Department of Veterans Affairs finally is under intense scrutiny for its bogus waiting lists and the unconscionable treatment delays that have caused an untold number of preventable patient deaths. But new information shows that malfeasance, malpractice, and outright corruption within the VA is worse than Americans could have imagined—much worse.” - -That needs to end. Right now the VA is being run by people who don’t know what they’re doing. They’re getting more money from the government than ever before and yet the care gets worse. The list of men and women waiting for care is growing and their wait times are longer. How can the VA possibly be so inefficient? We need to put people in charge who know how to run big operations. We have to get the best managers and give them the power, the money, and the tools to get the job done. We owe our veterans nothing less. - -One way or another, we are going to take care of our veterans. If the VA hospitals can’t do the job, then the veterans go to private doctors, private hospitals. The government will reimburse those doctors and those hospitals because we must fulfill our obligation to our veterans. - -Finally, jobs: What kind of country sends its young men and women off to fight for it and then, when they come back, tells them, “Sorry, but while you were gone other people got all the jobs”? - -Getting a good job is hard, but it’s even more difficult for a veteran. Too many veterans find themselves struggling to find an opportunity. They have been out of the job market, often for several years. So we need a program that recognizes the sacrifices they made for all of us and puts them right back in the middle of the job market. - -Being born in this country is a matter of luck. Being grateful and proud of this country and what it represents and honoring the people who have protected it is a privilege I am proud to share with all Americans. - - - - - - - - - - - -THE RIGHT TO BEAR ARMS - - -THE SECOND AMENDMENT IS clear to me: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” - -Period. 
- -The fact that the Founding Fathers made it the Second Amendment, second only to our First Amendment freedoms of speech, religion, the press, and the right of assembly and to petition the government, shows that they understood how important the right to bear arms would be for all Americans. - -James Madison pointed out that this right was a unique historical protection when he said that the Constitution preserves “the advantage of being armed, which the Americans possess over the people of almost every other nation . . . [where] the governments are afraid to trust the people with arms.” - -We all enjoy this fundamental right in order to defend ourselves and our families. The Founding Fathers knew it was essential to a free society and passed this amendment to make sure the government could never take it (or our arms) away. Throughout history, we’ve seen oppressive governments consolidate and ensure their control over those they govern by taking away the means necessary for citizens to defend themselves. - -I own guns. Fortunately, I have never had to use them, but, believe me, I feel a lot safer knowing that they are there. - -I also have a permit that allows me to carry a concealed weapon. - -I took the time and effort to get that permit because the constitutional right to defend yourself doesn’t stop at the end of your driveway. That doesn’t apply just to me either. It applies to everyone’s driveway and front door. - -That’s why I’m very much in favor of making all concealed-carry permits valid in every state. - -Every state has its own driving test that residents have to pass before becoming licensed to drive. Those tests are different in many states, but once a state licenses you to drive, every other state recognizes that license as valid. - -If we can do that for driving—which is a privilege, not a right—then surely we can do that for concealed carry, which is a right, not a privilege. That seems logical to me. - -The Second Amendment has been under attack for a long time. Throughout the years, state governments have chipped away at it, adding restrictions. No other right in the Bill of Rights has been attacked as often as the Second Amendment. Some of these restrictions obviously make sense. For example, felons and mentally ill people should not have access to guns. - -One purpose of a gun is to offer protection, to warn those who would try to harm us that we are carrying a weapon and that we will use it. - -In order to protect the Second Amendment, there are several significant steps we need to take. Most important, we need to start getting serious about prosecuting violent criminals. Sometimes it looks to me like the Obama administration has made only a token effort to take violent offenders off our streets. - -The problem is compounded by the pressure being put on police departments by community organizations that seek to make our police do their jobs with one hand tied behind their backs. - -Violent crime in our inner cities is out of control. Murder rates are way up. There are far too many hardened drug dealers and gang members who are repeatedly involved in burglaries and drive-by killings. We need to get them off the streets so that they don’t continue to terrorize their neighborhoods and ruin more lives. - -Here’s an example of what can work. In 1997, a program called Project Exile was started in Richmond, Virginia. 
It mandated that if a criminal was caught committing a crime with a gun, he had to be tried in federal court rather than city or state court. If convicted, there was a mandatory minimum sentence of five years in a federal prison without a chance of parole or early release. - -This was such a sensible program that it was supported by both the NRA and the Brady Campaign, sponsors of the Brady Bill, which had fought for restricted gun ownership. - -The Project Exile program was enacted and it worked. This message was posted on billboards around the city: “An Illegal Gun Gets You Five Years in Federal Prison.” In the first year, homicides and armed robberies declined by about a third, and 350 armed criminals were taken off the streets. - -A decade later, when the primary elements of the program had been supplemented by a somewhat less tough state law, the number of homicides in Richmond had still been cut by more than half. - -Why is this important to law-abiding gun owners? First of all, it offers an intelligent approach to reducing crime, something we all want. Second, it clearly shows that guns are not the problem—dangerous, unstable criminals are the problem. - -The antigun lobby still seems to be confused about this distinction. - -We don’t need to keep guns out of the hands of law-abiding citizens. We need to crack down harder on the career criminals who traffic in guns illegally. Programs like Project Exile will help make our communities safer. - -Another important way to fight crime is to create an environment where our law enforcement officers are appreciated for all the good work they do as opposed to being singled out and criticized for the few bad officers who give police a bad name. I realize—and deeply regret—those situations where a police officer acted poorly under pressure and used unnecessary force. - -These incidents always draw much more attention than the exemplary police work that goes on day-to-day. - -Let’s be clear about one thing: Our police do an amazing job in dealing with all the potentially explosive situations they face on a daily basis. We know, for example, that most crime is committed locally, within a neighborhood or even a household, where an argument can escalate into violent anger and action. - -Who gets called into these situations? The police, of course. It is their job to rush in and calm things down. They are protecting neighborhood residents from the criminals in their midst. Detectives have to pick up the pieces when a robbery or murder occurs, so that the perpetrators of crimes can be brought to justice. Our law enforcement officers are very professional and well trained. - -Ultimately, protecting ourselves and our families is our own responsibility. I know that. We have to be alert and report suspicious strangers or packages. We have to create community boards that can work in tandem, not in “gotcha mode,” when dealing with local authorities. As relatives and friends, we have to be vigilant when someone close to us is suddenly showing deep signs of depression or erratic behavior while posting threats on social networks. - -We also have the right to protect ourselves with gun ownership. It’s as fundamental as choosing our type of religious worship or allowing the press to be critical of our government. - -What is foolish and unnecessary is the media criticism that immediately ties a well-publicized crime to the gun rather than to the criminal. 
- -There are a number of steps we can take that will benefit all Americans, including the millions of law-abiding gun owners as well as those people who wrongly believe that guns are the source of our crime problems. - -We have to keep guns out of the hands of people with mental health issues. The fact that people with mental health problems can obtain guns is not right. We all agree about that and we have to stop it, but there are some big hurdles. - -Let’s deal with reality: Our mental health system is broken, and it needs to be fixed. Politicians have ignored this issue because it is such a complex problem, and it might cost some big money. - -But the fact is we need to fix this problem now. - -Many of the mass murders that have taken place in this country over the last several years have one glaring fact in common: There were red flags and warning signs about the future murderers, and they were ignored. Parents and close friends, even Facebook friends, chose to say nothing or to look the other way. Denial is not responsible behavior. - -Most people with mental health problems aren’t violent; they just need help. We have to invest the money and resources to expand treatment programs that can provide that help. But there are people who are violent. They are a danger to the community and they are a danger to themselves. - -There are people who should be institutionalized and not living on the streets. Judges say they are entitled to their rights, which of course is true. They are entitled up to the point when they become dangerous to others and themselves. Then the situation changes. Then we have to protect the rights of young children going innocently to school or families out for a relaxing evening at a movie. - -Why is this important to law-abiding gun owners? Because you are the citizens the antigun movement and the media blame when a deranged madman uses a gun to commit a horrific act. When one of these tragedies occurs, you can be sure two things are going to follow. First, opponents of gun rights will immediately exploit the situation to push their antigun agenda, and second, none of their proposed restrictions would have prevented the tragedy from taking place. - -We need real solutions to solve real problems. We don’t need advocates of useless gun restrictions taking advantage of emotional situations to push their agenda. - -So how can we protect and extend the rights of law-abiding gun owners? We accomplish that by educating all Americans about the facts. For example, there has been a long and expensive campaign to find different ways to ban guns or gun hardware. In effect, just get rid of guns. That’s the answer gun control advocates give. - -This tactic is a road to nowhere. - -Opponents of gun rights often use a lot of scary descriptive phrases when proposing legislative action against various types of weapons. Ban “assault weapons,” they say, or “military-style weapons,” or “high-capacity magazines.” - -Those all do sound a little ominous, until you understand that what they are actually talking about are common, popular semiautomatic rifles and standard magazines that are owned and used by tens of millions of Americans. - -I worry when our social-policy makers, looking for a “cause,” pick on guns. The Supreme Court has made it clear that the government simply has no business and, in fact, no right to dictate to gun owners what types of firearms law-abiding Americans are allowed to own. 
Gun owners should be allowed to purchase the best type of weapon for their needs, whether it’s for self-protection, sport shooting, or any other purpose. - -There has been a lot of speculation about background checks, as if researching the background of everyone attempting to legally purchase a gun will somehow keep guns out of the hands of criminals. The national background-check system has been in place since 1998. Every time a gun is purchased from a federally licensed gun dealer, which is how the overwhelming majority of all gun purchases take place, the buyer has to go through a federal background check. - -Unfortunately, as expected, bringing more government regulation into the situation has accomplished very little. The main “benefit” has been to make it difficult for a law-abiding American to buy a gun. As study after study has proven, few criminals are stupid enough to try to pass a background check, and few have their names in any kind of system. - -So they get their guns the same way bad guys have always gotten their guns—by stealing them, by buying them from an unlicensed source, or by getting them from family and friends. - -This system is another example of federal regulation that has turned into a complete failure. When the system was put in place, gun owners were promised it would be instant, accurate, and fair. That isn’t what has happened at all. - -One final point. We need to allow our military members to carry firearms on bases and at recruiting centers. As we have seen, our current policies leave our military members—and their families—defenseless on their own bases. They can be sitting ducks for one crazy person with a machine gun. - -In the end, we must understand and appreciate why the right to keep and bear arms is so essential for law-abiding citizens. And we must recognize that the red tape proposed to infringe upon that right is a tremendous waste and possible danger to us all. My sons Donald and Eric are members of the NRA—and so am I—and proud of it! - - - - - - - - - - - -OUR INFRASTRUCTURE IS CRUMBLING - - -THERE ARE SOME THINGS so obvious that even Joe Biden can see them. - -Take, for example, the state of our country’s infrastructure. Vice President Biden once said, “If I blindfolded someone and took him at two o’clock in the morning into the airport in Hong Kong and said, ‘Where do you think you are?’ they’d say, ‘This must be America. It’s a modern airport.’ But if I blindfolded you and took you to LaGuardia Airport in New York, you’d think, ‘I must be in some third-world country.’ ” - -The good news is that London Bridge isn’t falling down. But that bridge, which is now located in Lake Havasu City, Arizona, may be the only bridge in America that isn’t in danger of falling down. - -Our airports, bridges, water tunnels, power grids, rail systems—our nation’s entire infrastructure—is crumbling, and we aren’t doing anything about it. Former secretary of transportation Ray LaHood knows all about this and got it right when he said, “If we are going to have safe transportation systems in America, you have to invest in them. We haven’t done that.” - -He described our way of dealing with this problem as the “limp along, go along” system. “There’s no vision. No leadership in Washington to fix it, and they are trying to put Band-Aids and duct tape and other things on these fixes and they simply do not work.” - -This country’s infrastructure is falling apart. 
According to engineers, one out of every nine bridges in this country is structurally deficient, approximately a quarter of them are already functionally obsolete, and almost a third of them have exceeded their design lives. - -Some of these bridges have already collapsed. Barry LePatner, who wrote a book about this topic, said the following: “Since 1989 we’ve had more than 600 bridge failures in this country and . . . a large number of bridges in every state are really a danger to the traveling public.” - -Our infrastructure is terrible, and it’s only getting worse and more expensive to fix. It’s already costing the American people an estimated $200 billion a year in reduced productivity. That number is increasing annually. Instead of being at the office or in the factory getting work done, Americans waste countless hours every day sitting in traffic jams or waiting for stalled trains. We depend on our truckers to deliver the goods we need, and they end up wasting an unbelievable amount of time because our highway system is falling apart. - -I used to think the traffic jams in New York were the worst anywhere in the country; they’re not even close anymore. The problems are everywhere. Our roads are riddled with potholes. Our airports? Are you kidding me? A disgrace. - -When Joe Biden sees it, you know it’s bad. - -If you land at LaGuardia, it feels like the wheels of the aircraft have come off. - -I fly in from China or from Qatar, and it’s as if I’ve come from a different world. It isn’t just LaGuardia, which by the way is finally getting billions of dollars to rebuild; this is a coast-to-coast problem. Los Angeles International Airport is an entirely different kind of disaster. - -Our power grid, the infrastructure for electricity that keeps everything operating, is way out of date. There is simply no way it will be able to meet our power needs in the future. Our high-speed Internet access is only 16th best in the world. When I travel internationally, I see magnificent places you wouldn’t believe. I see properly maintained bridges, tunnels, and airports. I see great highways and unbelievably efficient power systems. - -Then I come home and I get caught in traffic, and when the car moves, it bangs over potholes. It never seems to get better. - -I wonder, Why can’t we get these problems fixed? The answer is that the people we put in charge don’t know how to fix them. - -We’re spending billions of dollars protecting countries that should be paying us to do the job, yet we can’t build roads in our own cities. We can’t build schools in our own communities. I’ve been to China numerous times, and everywhere you look there are cranes reaching toward the sky. The Chinese build new cities over there in about 12 minutes, while we take years to get the permits to add a dormer window to our own homes. - -The World Economic Forum ranks the US infrastructure as only the 12th best in the world, behind countries like Spain, the Netherlands, and the United Arab Emirates. Part of the reason is that we don’t spend enough to fix, build, or maintain our “plant.” Europe and China spend as much as 9 percent of their GDP on infrastructure projects. We spend 2.4 percent. - -When you talk about building, you had better talk about Trump. There is no other builder in this country whose name is on as great a range of projects as mine. - -New York City wasted seven years trying to get a skating rink done. I did it in less than four months—and got it done under budget. 
There was a huge railroad yard overlooking the Hudson River that nobody could figure out how to develop. Drive by there now and you’ll see thousands of magnificent apartments, all with the same name on the buildings—Trump. - -Think about 40 Wall Street, one of the greatest buildings in New York City. For a brief time, along with the Chrysler Building, it was one of the two tallest buildings in the world. But it had fallen into disrepair. It was awful. They couldn’t rent office space there. - -I bought 40 Wall Street and completely redid it. Now it is a classic—and by the way, 100 percent rented and a very profitable building. My home in Palm Beach, Mar-a-Lago, was once the greatest mansion in the country, but its previous owner, the United States government, had let it deteriorate. Nobody had the vision to see what it could be once again. I restored it, rebuilt it, and now—go online and you can see what I’ve accomplished there. We brought the property back to the greatness it once was—and then made it better! - -The same can be true of our country. - -In Washington, DC, I’m converting the Old Post Office Building on Pennsylvania Avenue into one of the world’s greatest hotels. I got the building from the General Services Administration (GSA). Many people wanted to buy it, but the GSA wanted to make sure whoever they sold it to had the ability to turn it into something special, so they sold it to me. I got it for four reasons. Number one—we’re really good. Number two—we had a really great plan. Number three—we had a great financial statement. Number four—we’re EXCELLENT, not just very good, at fulfilling or even exceeding our agreements. The GSA, who are true professionals, saw that from the beginning. - -That’s the way the country should be run. - -Fixing our infrastructure will be one of the biggest projects this country has ever undertaken. There isn’t going to be a second chance to get it right. Let me ask you, if your own house were falling down and you had to hire someone to fix it before it completely collapsed, who would you hire? A guy who tells you what he’s planning to do, or a guy who has proven what he can do countless times before? - -In America, our house is falling down. I’ve developed project after project. I raise the money, solve endless problems, bring in the right people, and get it done. Those are four words politicians can’t use: I get it done. - -There isn’t any doubt that we are going to have to find a way to deal with our infrastructure problems if we want to be the greatest economy in the world. Our economy requires movement, literally and figuratively, and we need the infrastructure that can support and promote that movement. - -When you are getting ready to start the greatest long-term building project in American history, you’d better have the right person in charge. You need someone who has done it before and who isn’t afraid to take on that tremendous responsibility. You need someone who knows how to deal with unions and suppliers and, without any doubt, lawyers. I deal with them all each day, and I don’t lose to them. - -Different people might approach complex problems like this differently. There are people who look at a problem like this and shake their heads, thinking that it can’t be done. There is a name for people like that: Governor. Then there are people who talk about the issue, throw around other people’s money, and maybe even show you drawings. There’s a name for those people too: Senator. 
- -For me, fixing the country’s infrastructure would be a major priority. I was speaking to thousands of people in New Hampshire when a nice young man asked me what I thought about the project being planned to send humans to Mars. - -“I think it’s wonderful,” I told him. “But I want to rebuild our infrastructure first on Earth, okay?” I mean, I don’t understand how we can put a man on the moon but we can’t fix the potholes on the way to O’Hare International Airport. - -Where are our priorities? - -Before we build bridges to Mars, let’s make sure the bridges over the Mississippi River aren’t going to fall down. - -I love difficult challenges. Nobody responds better than I do when I’m told that something can’t be done. What other people see as a terrible problem, I see as a great opportunity. There is nothing, absolutely nothing, that stimulates the economy better than construction. - -A few years ago, Moody’s, the credit-rating agency, calculated that every $1 of federal money invested in improving the infrastructure for highways and public schools would return $1.44 to the economy. The Congressional Budget Office said that infrastructure investments have one of the strongest direct economic impacts. - -You know why that is? Jobs. - -These projects put people to work—not just the people doing the work but also the manufacturers, the suppliers, the designers, and, yes, even the lawyers. The Senate Budget Committee estimates that rebuilding America will create 13 million jobs. - -Our economy needs more available jobs. I know what the unemployment rate supposedly is, but I also know there is no such thing as the Easter Bunny. Ask the construction unions and trade unions how many of their members are looking for jobs. Ask the unemployed electricians, plumbers, and masons how hard it is to find a good job. - -If we do what we have to do correctly, we can create the biggest economic boom in this country since the New Deal, when our vast infrastructure was first put into place. It’s a no-brainer. It’s so obvious that even the Democrats can figure it out. - -The biggest questions are “How much is it going to cost?” and “Where is that money going to come from?” Financing a project is far too complex for most politicians to understand. These projects require real-world dollars, not figures on paper. Experience is required to understand how to budget properly. - -I think we can all agree, after watching our politicians waste our tax dollars, that the last thing we want to do is to put them in charge of a trillion-dollar rebuilding program. - -When I build a project, I watch the money. At least some of it is coming directly out of my pocket—and if I do the job right, a lot more is going back into that same pocket. I know what things cost, I know where the money goes, I know who is doing a good job, and I know who is just phoning it in. Our government should, too. - -On the federal level, this is going to be an expensive investment, no question about that. But in the long run it will more than pay for itself. It will stimulate our economy while it is being built and make it a lot easier to do business when it’s done—and it can be done on time and under budget. - -There are a lot of different ways to finance these projects. We need to put together a variety of sources to get it done. In some places, bonds will need to be issued. The money is there—we just have to get it into place. 
The beauty of this is that every city and state has needs, which means that we can truly make this a national effort, controlled at the local level. - -If we are serious about making America great again, this is where we have to start. Not only will repairing our infrastructure create jobs and stimulate the economy, it’ll make it easier for us all to get home at the end of a long day. And in this case we can make America beautiful again. - - - - - - - - - - - -VALUES - - -THE ONE QUESTION I get asked all the time is, “Mr. Trump, how do I get rich?” - -What people are really asking me is, “How do I achieve happiness?” - -Most people believe that once they’re rich they’ll automatically become happy. I’m not going to pretend that being rich doesn’t offer a lot of wonderful opportunities, but it doesn’t necessarily make you happy. I’ve learned that wealth and happiness are two completely different things. - -I know the richest people in the world. Many of them are great negotiators and great businesspeople. But they’re not necessarily nice people, nor are they the happiest people. They’re rich, they’re smart—I’d hire them to negotiate for me anytime, yet their personal lives may leave something to be desired. - -The happiest people I know are those who have great families and real values. I’ve seen it. I know it. People who have a loving spouse and have children they really love are happy people. Religion also plays a very large part in happiness. People who have God in their lives receive a tremendous amount of joy and satisfaction from their faith. - -Those who have watched me fire people on The Apprentice, who have read my bestselling books, or who have attended my Learning Annex seminars think they know me. Well, they know part of me—my business side. The professional part. I usually don’t speak much about my personal life or my personal values or about how I came to be who I am today. - -To begin with, my father and my mother were enormous influences on me. Fred Trump was a rich man, but he made sure his kids worked hard. Believe me, he didn’t hand us anything—we had to work for what we got. He would drag me around with him while he collected small rents in tough sections of Brooklyn. It’s not fun being a landlord. You have to be tough. - -I’d see him ring the bell and then stand way over to the side of the door. - -“How come you’re over there?” I asked once. - -“Because sometimes they shoot right through the door,” he replied. Rent collectors usually did this work, but the methods were the same. - -My work ethic came from my father. I don’t know anybody who works harder than I do. I’m working all the time. It’s not about money—I just don’t know a different way of life, and I love it. - -I raised my own kids the same way my parents raised me. I have five great kids. While my older ones were growing up, I’d have dinner with my kids almost every night. When they needed me, I was there for them. - -Truthfully, I was a much better father than I was a husband, always working too much to be the husband my wives wanted me to be. I blame myself. I was making my mark in real estate and business, and it was very hard for a relationship to compete with that aspect of my life. - -My kids were a different story. I was always there for them. My two oldest sons claim they’re the only sons of a billionaire who know how to run a Caterpillar D10. While my daughter Ivanka’s friends were vacationing in the South of France, she was in New York working. - -My children have great mothers. 
My kids were raised to become hardworking, respectful adults. I could not be prouder of them. We never had any of the drug or alcohol problems that some of my friends’ families have had to deal with. Hopefully it stays that way! Now I see my kids becoming great parents. - -Growing up in Queens, I was a pretty tough kid. I wanted to be the toughest kid in the neighborhood and had a habit of mouthing off to everybody while backing down to no one. Honestly, I was a bit of a troublemaker. My parents finally took me out of school and sent me upstate to the New York Military Academy. I had my share of run-ins there as well. - -While I wasn’t afraid to fight, eventually I got the message. I learned respect for other people. I learned self-discipline. By the time I was a senior, I was made cadet captain—one of the highest ranking cadets. - -My religious values were instilled in me by my mother. The first church I belonged to was the First Presbyterian Church in Jamaica, Queens. I went there every Sunday for Bible class. The church had a strong influence on me. Later I went to Reverend Norman Vincent Peale’s Marble Collegiate Church when I was in New York, and joined Bethesda-by-the-Sea in Palm Beach, Florida. - -Reverend Peale was the type of minister that I liked, and I liked him personally as well. I especially loved his sermons. He would instill a very positive feeling about God that also made me feel positive about myself. I would literally leave that church feeling like I could listen to another three sermons. - -I learned a lot from Norman Vincent Peale, who wrote the classic The Power of Positive Thinking. - -I think people are shocked when they find out that I am a Christian, that I am a religious person. They see me with all the surroundings of wealth so they sometimes don’t associate that with being religious. That’s not accurate. I go to church, I love God, and I love having a relationship with Him. - -I’ve said it before—I think the Bible is the most important book ever written—not even close. - -Perhaps The Art of the Deal is second. (Just kidding!) - -I’ve had a good relationship with the church over the years—God is in my life every day. I don’t get to church every Sunday, but I do go as often as I can. A lot of Sundays, when there’s a special occasion, and always on the major holidays, I make sure I am there. People like to give me Bibles, which I love. - -Jimmy Fallon asked me a question on his show one night: “Have you ever apologized? Ever, in your whole life?” I told him that I think apologizing is a great thing—but you have to have been wrong. Then I promised, “I will apologize in the distant future if I am ever wrong.” The audience laughed, as they should have. If you want to know if I’ve ever been wrong, the best thing to do would be to ask my kids. They’ll tell you the truth about that. - -Of course I’ve done things wrong. Show me a human being who hasn’t. But when I do, I go out and try to make things right. I try to do a better job going forward. - -I have been asked if I thought the Gospels would have a bearing on my public policy choices. That question has been asked of candidates for political office since Al Smith, a Catholic, ran for president in 1928. Many people thought JFK ended the discussion in 1960 when he said he would be president of all Americans. I am who I am, and deep down the Gospels helped make me that person. In business, I don’t actively make decisions based on my religious beliefs, but those beliefs are there—big-time. 
- -What does offend me is the way our religious beliefs are being treated in public. There are restrictions on what you can say and what you can’t say, as well as what you can put up in a beautiful public area. The fact is that our deep-rooted religious beliefs have made this country great. That belief in the lessons of the Bible has had a lot to do with our growth and success. - -That’s our tradition, and for more than 200 years it has worked very well. For years you’d have beautiful mangers in public spaces and nobody complained about it. - -Now? Mary and the baby Jesus are seldom shown. Even the word “Christmas” has somehow become controversial. - -Who in the world could be offended by someone saying “Merry Christmas”?! That greeting isn’t critical of any other religion, and it isn’t being disrespectful to those who practice another religion. It’s a wonderful tradition. - -I don’t understand why the same people who demand respect for their beliefs often don’t show respect for the beliefs of others. It seems like every week there is a negative ruling on some issue having to do with Christianity. I think it’s outrageous, totally outrageous. The president should do something about it. If the president has to go through the court system to do it, the president should do it. But this president won’t. - -It’s well-known I am not fond of President Obama. I think he has been an awful president. His inexperience and arrogance have been very costly to this country. He’s weakened our military, alienated our allies, and emboldened our enemies. He’s abused his power by taking executive actions that he had no right to take. The next president is going to have to reverse and repeal many of the actions he’s taken. - -I did take a lot of criticism for not responding when an individual made what some people considered to be an anti-Muslim comment at an event in New Hampshire. People have their beliefs and their opinions. It’s not my job to defend the president. President Obama would never defend me. - -Anybody who wonders how I feel about women should just take a good look at the Trump Organization. - -My positive feelings about women are reflected in the number of women who have worked in my organizations. I placed women in important leadership positions in the Trump Organization long before anybody else gave them that opportunity because I knew they could handle it. I was the first developer ever to put a woman in charge of a major construction project in New York City. - -On The Apprentice, I was always pointing out the business skills of women. Talk to any of the women who worked for me and they will all tell you the same thing—I am a tough, demanding boss. I reward success and I penalize failure. I treat women no differently than I treat the men who work for me. I give women the responsibility they earn with their performance, I pay them the same, promote them accordingly, and, when they mess up, fire them the same. - -I couldn’t be more proud of my record with women. - -Maybe my spokesperson on this subject should be my daughter Ivanka. I take a tremendous amount of pride in the fact that my children not only work with me, but when I’m criticized, they are the first to defend me. - - - - - - - - - - - -A NEW GAME IN TOWN - - -CONTRARY TO THE JOKES, I don’t think the White House needs any bright neon signs on the roof. No need to add additional wings or to sell the air rights. - -I do, however, think we need to bring some business acumen to the White House. 
- -The one thing you can be certain about is that, unlike the Obama administration, I stand up for this country, proudly and loudly. I continue to be exactly what I have been—the greatest cheerleader for America—the America that won rather than constantly lost. - -As has become evident throughout my life, I am not afraid to look my opponents right in the eyes and say exactly what I believe. - -I never worry about being politically correct. I don’t need to read the polls to make my decisions. - -And I don’t see any reason to change my approach. - -The issues facing our country are too important for anything less than an honest assessment of where we are and what needs to be done. - -We are unique among the nations of the world, and we should be leading, not following. - -Winning, not losing. - -We have an amazing history. America is the greatest country that has ever existed on the Earth, and yet for some reason our leaders are reluctant to press our advantage. - -I’ve successfully built one of the most respected brands in the world by representing it in everything that I do. I realized a long time ago that if I’m not proud of what I’m selling, then there is no reason for anybody else to feel that pride. - -I put my own name on my buildings and on my products, and I stand behind them. People have come to expect top quality from anything that carries my name. - -There is nothing in this world in which I take more pride than the United States of America. I will always be its best defender, and the best salesperson and cheerleader we’ve ever had. - -America is the leader of the free world—we’ve earned the right to boast and make it clear that we are ready and willing to do whatever is necessary to defend this country as well as liberty anywhere in the world. - -Our national anthem gets it right: This is the land of the free and the home of the brave. It’s time we lived that message and let the world know we’re willing to back it up. - -Making America Great Again means standing by our word. We’ve watched President Obama draw a line in the sand, then another line in the sand, then no line at all. We’ve become an embarrassment to ourselves and to our history. - -When your allies don’t trust you and your enemies don’t fear you, you have zero credibility in the world. Right now, our allies don’t know what to believe about us, or how, or if, to value our word. President Obama has been talking into the wind for a long time. - -We’ve seen Putin ignore him. We’ve seen just about every faction warring in Syria pay no attention to him. We’ve seen the Chinese taking tremendous advantage of our trade policies. We’ve seen the Iranians leave the conference table where we were negotiating a nuclear treaty (where a “new era” of cooperation was proclaimed), and then a few weeks later the Ayatollah was threatening again to destroy Israel—and laughing at the US. - -Closer to home, only blocks away from the White House, Congress is preparing to decide whether or not to shut the government down. This happens almost every other year. - -We need a leader who is going to restore the respect this country enjoyed in the past. I’m criticized for not issuing elaborate, detailed policy statements. What good are detailed plans if your country doesn’t have the credibility to carry them out? But I issue them anyway. - -Let’s go back to the basics, back to the America our citizens embraced, when we were recognized as the major force for progress and peace. 
- -Many of the lessons I’ve learned in business are applicable to our current situation. The most important lesson is this—Stand behind your word, and make sure your word stands up. People who have done business with me will tell you that I never say something unless I mean it. - -I don’t make promises I can’t keep. I don’t make threats without following through. Don’t ever make the mistake of thinking you can bully me. My business partners and employees know that my word is as good as any contract—and that had better go for the other side’s word as well. - -I stand behind my commitments, and our commitments as a nation. - -I stand—without question—behind the Constitution at home, and I stand, without question, behind our allies abroad. - -No friendly country, and no allied leader, should question our ironclad support again. - -No enemy, and no enemy leader, should misinterpret our resolve to fight to the death—their death. - -We won’t need the prime minister of Israel to come to our shores and explain to Congress what we used to stand for. - -Making America Great Again means never taking another step backward. Yes, we’ll take inspiration from the heroism of our past, but we’re only going to charge ahead now. When I played sports growing up, there was a saying in the locker room: If you take the first step backward, you might as well just keep going. - -To put it another way: If you can accept losing, then you’ve already lost. - -There have been times in business when it has made sense to change strategy or even walk away from a deal. You must never be afraid to walk away from a bad deal. - -Someone should explain that principle to President Obama and John Kerry. - -It’s only by being willing to say “enough” that you gather strength and force your adversaries to modify their behavior. - -I understand that you can’t be totally rigid in negotiations. But on our core principles and core strengths, there can be no backing up or retreat. That’s why we need to rebuild our military, so that no one will have any doubt about our strength or our intentions. If we are challenged, we will meet that challenge, and other leaders and other countries will think seriously before they doubt us again. - -My own style of conducting business is straightforward. I think big. I aim very high, and then I keep pushing and pushing toward that goal—and beyond it. In the end I may not get everything I want—I understand that—but I never compromise on the basic goals I set out to achieve. - -Making America Great Again means convincing the smartest and the best people to come to Washington and join in putting our country first. The truth is that politicians have given government a bad name. It’s a shame. The best people don’t want to get involved in a bureaucracy where nothing ever seems to get done. - -Who can blame them? - -The kinds of people we need in government are executives who know how to get things done. These are the types of workers and executives who already are, or who will become, stars in any industry. There are also many workers in our civil service who are waiting for good leaders to inspire them. - -Years ago these types of people wanted to be in government because they had faith that government was there to help people and that they could contribute to our nation by doing a good job. They believed in the service part of public service. - -Now those inside Washington circles are demoralized. Those outside don’t want to go there. 
So many people go to Washington aiming to change it only to see themselves changed instead. And not for the better. - -Ambitious government workers can’t break through the red tape, and that drives them to leave government to go into private industry. You end up with mostly careerist “lifers” doing the day-to-day work. - -It’s a terrible cycle. The government is filled with good people who are stymied in trying to get things done, and because nothing gets done, the best people outside Washington don’t want to go into government, so nothing ever gets done or improves. - -We need to create an exciting atmosphere and put good people in the right positions to Make America Great Again. One reason we rarely have difficulty recruiting the people we want for the Trump Organization is that they know they are going to play a key role in an aggressive company that exists to make big things happen. It is an exciting place to work. - -People want to have a stake in that type of organization. They know they are going to be respected and judged on their accomplishments. They know that life in my company is never going to be boring and that they are going to be well rewarded for their hard work and share in the success. - -Really talented outsiders will relish the thought of becoming part of the future. Of course, due to necessary budget controls, there will be fewer employees in the government overall. This just means the competition to become one of the best will be even fiercer. - -Making America Great Again means restoring law and order, both on the street and in our courtrooms. Our police officers are doing an incredible job keeping us as safe as possible, but the job is getting harder for them because they aren’t getting the support they need. Like our military, they must have whatever equipment they need to protect themselves and all our honest, hardworking citizens. - -Government must be on their side instead of coddling criminals. - -Obviously that means putting judges on the bench who will uphold the law rather than look for loopholes or try to make law. - -We need to appoint justices—not just on the Supreme Court, but throughout the entire federal system—who will leave lawmaking to the legislators, as specified in the Constitution. The next president may well have the opportunity to appoint two or more Supreme Court justices. These appointments could determine the direction of the court for several decades. We need the right caliber of judge sitting in the highest courts. - -Making America Great Again begins at home. It means restoring a sense of dignity to the White House, and to our country in general. The president of the United States is the most powerful person in the world. The president is the spokesperson for democracy and liberty. Isn’t it time we brought back the pomp and circumstance, and the sense of awe for that office that we all once held? - -That means everyone working in the administration should look and act professionally at all times—especially the president. The way you dress and the way you act are important ways of showing respect for the people you are representing and the people you are dealing with. Impressions matter. - -Making America Great Again means taking our country back from the big-money interests. We have a country where the big problems can’t be solved by consensus because the small-minded lobbyists and special interests are clogging the halls of Congress with their “special access.” - -Everyone talks about listening to the voice of the people. 
But how can you hear that voice when no one represents that voice? I am listening. - -Let’s restore trust and pride in our country by Making America Great Again. - - - - - - - - - - - -TEACHING THE MEDIA DOLLARS AND SENSE - - -“I HOPE DONALD TRUMP, the pompous host of Celebrity Apprentice, runs for president,” wrote Washington Post columnist Michelle Singletary in April 2015. She continued: “Then we’ll get a certified look at his income, investments, and debts. But here’s a Trump-like prediction, which is like the various pronouncements made by the real estate developer that aren’t backed by any credible evidence: Trump will not run. He won’t officially declare his candidacy because the Ethics in Government Act requires those running for federal office to file disclosures of their personal finances.” - -Kyle Smith, resident genius at Rupert Murdoch’s New York Post, also had it all figured out. - -He wrote—“Big news coming from Donald Trump. Big, huge. I have the news before anyone else. Donald Trump is running for president . . . of the Donald Trump Love & Admiration Society. He’s sure to be elected in a landslide. Oh, that other thing? Nah. No chance. When Trump declared to the Republican Party of Iowa’s Lincoln Dinner that he is going to make an announcement in June ‘that’s going to surprise a lot of people,’ he wasn’t preparing to launch his long-awaited candidacy. He was simply doing what he always does: Promote the Donald. Generate headlines. Get people talking.” - -The truly odious Jonah Goldberg of the National Review was his usual incompetent self when he wrote—“Arguing with Trump is sort of like dressing up an adorable toddler in a Viking outfit and listening to him say he will raid my village and slaughter all in his path. It’s cute. It’s funny. Maybe it’s even vaguely disturbing if he goes on too long . . . But, just as with Trump’s ranting, the one thing you don’t ever do is take it seriously.” - -This is the sad and often pathetic state of our “objective” media today. The people who are supposed to be reporting the news have no concept of fairness, because they believe themselves to be the experts. They “know better”—they have the inside scoop. - -They never get embarrassed, but they should be. They must think their readers are idiots who forget how often they get it wrong. After I declared that I was running, a lot of them still didn’t believe it. - -Somehow they all “knew” that I wouldn’t file the financial disclosures—because maybe Trump isn’t as rich as people think he is. As it turned out, after the filing I was much richer than people thought. - -As the “brilliant” Goldberg wrote (getting it completely wrong again), “In the past, Trump always pulled back from the brink. Why risk his beloved TV show? Why endure the embarrassment of revealing he’s not as rich as he claims . . . But something changed . . . And Trump took the leap—though he hasn’t provided the required financial disclosures yet, which inclines me to think that he will either suddenly find an excuse to retreat or that he has a team of accountants trying to figure out how he can simultaneously save face and avoid perjury.” - -It’s incredible to me how dishonest the media in this country really is. People sometimes forget that the newspapers and television stations are profit-making businesses—or at least they’re trying to be. If they have to choose between honest reporting and making a profit, which choice do you think they will make? 
- -The sad thing is that all it does is prove that both liberal and conservative news outlets can lie and distort the news shamelessly. I’ve had meetings with reporters who faithfully recorded what I said, then changed the words and the meaning. - -Reporters have been writing about me and talking about me, even interviewing me, in newspapers, magazines, and on television for almost four decades. A lot of my press has been good and fair, but some of it has been incredibly dishonest and just terrible. I get along pretty well with a lot of the good reporters; it’s those people trying to get attention by writing inaccurate stories about me and the Trump Organization that I really object to. There are some experiences I’ve never forgotten. I had a so-called journalist from a well-known publication come up to my office and interview me and several of my executives. We gave him a pile of paperwork, we gave him financial reports and statements, anything he asked for—then he wrote one of the most inaccurate stories I’ve ever read. The public pays attention to a story for less than a week, especially when you get as many stories as I do. But the impression a bad story leaves lasts a lot longer. - -For a long time I had decided to ignore most of these attacks; I had buildings and golf clubs going up all over the world, my TV show was in the top ten, and I’ve got a great family. I didn’t want to give them any more attention than they deserved. But then my cousin, John Walter, called and started complaining about a particular story he’d heard, claiming that I hadn’t built a building since 1992, and told me I had to set the record straight. I couldn’t continue to let reporters get it so wrong. I hadn’t built a building since 1992? That’s just bizarre. You’d have to be blind as well as ignorant to say something like that. It’s got to be the easiest thing in the world to check the things I’ve accomplished. I’m bringing it up because it makes the point that you can’t believe everything you read or hear—especially about someone like me. - -I could list literally dozens of major projects I’ve done since 1992 (and I do in the appendix). Just for example, the award-winning 52-story Trump International Hotel and Tower opened in 1996. The 5-star Trump International Hotel and Tower opened in Chicago in 2009. The $1.3 billion Trump International Hotel in Las Vegas opened in 2008. The completely renovated “Crown Jewel of Wall Street,” 40 Wall Street, which for a brief time was the tallest building in the world, reopened in 1996 and is totally rented. The renovated 555 California Street, the second-tallest building in San Francisco, reopened in 1996. The list goes on for pages. The 35-story Trump Park Avenue. Trump World Tower. I’ve built the best golf courses in the world, everywhere from Palm Beach to Aberdeen, Scotland. The Trump National Doral, Miami. I have three iconic golf courses in Scotland and Ireland with many hundreds of hotel and residential units that I have built or restored to be better than they originally were. I haven’t even slowed down. The beautiful Old Post Office building in Washington, DC, will soon be the Trump International Hotel, Washington DC. I won a fierce bidding war for the opportunity to renovate this magnificent building, which is due to open in 2016. And many, many more. I’ve obviously been an extremely busy man since 1992! 
I think it would bother anyone who works as hard as I do, and the amazing people who work for the company, when a reporter writes something that inaccurate. The next time you read or hear something about me that doesn’t seem right, take a good look at the person who wrote it or said it on TV and see if you respect them. - -Another reporter wrote in a major publication that my father gave me $200 million when I started out. Don’t I wish! This reporter didn’t even give me the courtesy of calling me to ask if that was true. He read it in an old book that had it wrong and wrote about it. There is no man in the world I loved or respected more than my father. He was my best friend and my mentor. He gave me his knowledge, his work ethic, and his drive to succeed. He built his own wonderful company in Queens and Brooklyn from nothing. But we worked in different times, on a different scale. He built good housing, and I’ve built major buildings and resorts in New York City and around the world. I took what I learned from my father and built my own business—and no one was more proud of me for doing it than my father. He told a business magazine once, “Everything Donald touches turns to gold!” - -I’m proud of what I’ve built, so when so-called journalists get it wrong, I have to respond. - -The problem is that it’s getting worse. I know that every poll shows that the public doesn’t trust the media. The irony is that the media itself conducts those polls. - -Even they have to admit people don’t trust them. - -Maybe the journalists’ most embarrassing moment so far came when I filed my financial statement. I am the richest presidential candidate in history. I’m the only billionaire ever to run. I’m not accepting donations from my rich friends, special interests, or lobbyists. When was the last time someone running for political office didn’t take money? The voters know it—and love it. - -So maybe I shouldn’t have been surprised at the response when I filed my 92-page-long financial-disclosure forms. My net worth is in excess of $10 billion—even more than people thought. - -As any accountant will tell you, it’s actually almost impossible to put down a specific number because large assets are always in flux. The total value not only changes every day—it changes every hour. - -I also have significant foreign investments that are difficult to value. Plus, the forms we had to fill out weren’t designed for someone like me. There were many places on the form where the only box I could check was “$50,000,000 or more.” For example, one of my buildings is worth about $1.5 billion, but on the forms it is valued at only “$50,000,000 or more.” - -We checked off a lot of boxes. Wherever possible, we were accurate to the dollar. - -I am never shy about creating news by being controversial and fighting back. Remember, we need to make sure this country stands up and fights back. - -I’ve held more well-attended news conferences in the past few months than any other candidate. I always draw a large crowd of journalists who are like sharks, hoping I’ll put some blood in the water. - -I try to oblige. - -I participated in the first Republican debate, and Fox drew the biggest audience for a news event in its history. In the second debate, CNN had the biggest audience in its history. I wonder how many people would have watched if I hadn’t been involved. Not many! - - - - - - - - - - - -A TAX CODE THAT WORKS - - -THE ONE THING THAT everybody agrees about is that our tax system doesn’t work. The current code is crazy. 
The federal tax code is 74,608 pages long. Nobody can really understand it, not even the accountants who try to help taxpayers fill out their forms. An entire industry springs up every year just to help Americans figure out how much money they owe to the government. - -The reality is that the current tax code takes too much money from the people who need it most, while allowing others to find ways to reduce their tax burden. It discourages major corporations from reinvesting foreign profits here at home and makes it hard for small businesses to grow. It absolutely destroys jobs rather than helping create them. - -A sensible tax plan would provide tax relief for middle-class Americans, allowing hardworking people to keep more of their own money; it would reduce the taxpayers’ annual anxiety and frustration by simplifying the whole tax code; it would grow the economy and create jobs by discouraging corporate inversions and making America competitive around the world; and it wouldn’t add to either our debt or deficit. - -The tax reforms I’m proposing address all those problems by simplifying the tax code for everybody. My goal is to put H&R Block out of business. - -I described these solutions in an op-ed piece called “Tax Reform for Security and Prosperity,” which appeared in the Wall Street Journal in late September 2015. - -I wrote that the top priority for the government of the United States should be to provide security for its people. That security includes removing uncertainty and making sure that the economic future of the country is assured through better deals, smarter trade agreements, and tax policies that unburden the middle class and unleash the private sector. - -My approach to tax policy will do just what needs to be done. The uncertainty and complexity of a tax code written for special interests and the very rich will be removed, and a clear future will be available for all Americans. - -The plan has several goals. Let me make it clear that this set of policies takes dead aim at eliminating deductions and loopholes available to special interests and the very rich, as well as those deductions made redundant or unnecessary by the much lower tax rates every person and business will be paying. - -In particular, I am proposing ending the current treatment of carried interest for hedge funds and other speculative partnerships that do not grow businesses or create jobs. - -The first goal of the plan will be to provide tax relief. If you are single and earning less than $25,000 or married and earning less than $50,000, you will not owe any income tax. This will immediately remove nearly 75 million households from the income tax rolls. - -Second, the tax code will be simplified. Instead of multiple tax brackets with multiple variations, there will be only four brackets: 0%, 10%, 20%, and 25%. This new code eliminates the marriage penalty and the Alternative Minimum Tax while providing the lowest tax rates since before World War II. Further, this plan eliminates the death tax, thus allowing families to keep what has been earned. - -The proposed policies will allow the middle class to keep most of their deductions while eliminating many of the deductions for the very rich. With more money in middle-class pockets, consumer spending will increase, college savings will grow, and personal debt will decline. - -Third, we need to grow the American economy. For the past seven years our economy has been at a virtual standstill.
Growth in the Gross Domestic Product of less than 2 percent per year is pathetic. We need to spur production, bring home jobs, and make it easier to invest in America. - -The plan states that any business of any size will pay no more than 15 percent of its business income in taxes. This low rate will make corporate inversions unnecessary and will make America one of the most competitive markets in the world. This plan will also require companies with offshore capital to bring that money back to the United States at a repatriation rate of only 10 percent. Right now that money is not being brought back because the tax rate is so high. - -Finally, this plan will not add to our deficits or our national debt. With disciplined budget management and elimination of waste, fraud, and abuse, this plan will allow us to balance the budget, grow the economy at record levels, clear the backlog of workers sitting at home, and begin the process of reducing our debt. With moderate growth, this plan will be revenue neutral. These changes will ensure huge economic growth, and this country will be on the road to extraordinary prosperity. - -This tax policy has the economic well-being of the country and its citizens at the forefront. This plan is bold, but it is also cast in reality and common sense. Growing the economy will provide the security we need to make America great again. - -While my op-ed summed it up, there are other important points to make: When the income tax was first introduced, only one percent of Americans were taxed. It was never intended to be a tax on most Americans. Under this plan the income tax will be eliminated for nearly 75 million households, and 42 million households that now have to file complex forms, often needing expert advice just to figure out that they don’t owe anything, will instead file a one-page form that will save them time, anxiety, frustration, and an average of more than $100 in preparation costs. More than 31 million households will also use the simplified form—and they get to keep almost $1,000 of the money they have worked for. - -The great reduction in rates will make a lot of the current exemptions and deductions—part of the reason the forms are so complicated—unnecessary and redundant. But the deductions for charitable donations and mortgage interest won’t be touched. Those deductions have been very successful in accomplishing their objectives—assisting America’s charities and helping people own their own homes. The plan also eliminates the death tax, because you earned that money and already paid taxes on it. You saved it for your family. The government already took its bite; it isn’t entitled to more of it. - -Our current tax code actually discourages business growth and penalizes success. Too many companies, from our biggest brands to innovative start-ups, are moving their headquarters out of the country, either directly or through corporate inversions. In an inversion a company moves its legal headquarters to a lower-tax nation and pays taxes there. It isn’t illegal or dishonest or even unpatriotic; it’s just good business. Any business that does not take advantage of the opportunity to increase revenue by lowering its taxes isn’t being properly managed. The Democrats want to make inversions illegal, but that isn’t going to work. Whatever laws they pass, with literally billions of dollars at stake, corporations will find methods to get around them. It makes a lot more sense to create an environment that welcomes business.
- -Under Ronald Reagan we had the best corporate tax rate in the industrialized world. Now we’ve got the worst. Rather than working with our corporations to rebuild our economy and create millions of jobs, we’re practically forcing them to relocate. This tax plan would cut the corporate tax rate to 15 percent—for our small businesses and freelancers as well as the big corporations. Small businesses are the true engine of our economy. According to the Council of Economic Advisers, American small businesses create 60 percent of our new jobs. But when tax credits and deductions are included, most small businesses are actually taxed at a higher rate than large corporations. Under existing tax law, sole proprietors, freelancers, unincorporated small businesses, and pass-through entities are taxed at the higher personal income tax rates. In reality, they are often taxed at twice the rate corporations actually pay. With the Internet changing the structure of the business world and encouraging start-ups, there are more of these than ever before. This is where our economic future is being built, where every dollar counts, and our tax code makes it tough for them to survive. - -As long as these businesses are unfairly taxed at personal income rates, they will be at a huge disadvantage. The right plan would create a new 15 percent business tax rate within the personal income tax code that would substantially reduce taxes and help these businesses succeed and grow. - -As you read this, American-owned corporations have as much as $2.5 trillion in cash sitting overseas. Just imagine what would happen if our corporations brought that money home. How many jobs would be created? Currently, they don’t bring it home because the tax rate here is much higher than they are paying in other countries. A key component of this plan is a onetime repatriation of corporate cash now held overseas at a 10 percent tax rate. Under this plan, corporations would profit tremendously by bringing home that $2.5 trillion and putting it to work—while benefitting from the globally competitive, newly lowered corporate rate. - -The big question everybody will ask is, How do you pay for this wonderful plan? The good news is that it is revenue neutral—and that’s before the economic growth that will be triggered by putting more money in your pocket and by the new jobs that will be created. This plan will be paid for by reducing or eliminating most of the deductions and loopholes that allow the very rich to pay lower taxes, by the repatriation of corporate cash held overseas, by putting an end to allowing corporations to defer taxes on income earned outside this country, and by cutting down or eliminating those corporate loopholes catering to special interests—in addition to those deductions that are made redundant or unnecessary by lowering the tax rate on corporations and business income. A reasonable cap on the deductibility of business interest expenses will also be phased in. - -We also must finally reduce waste. Billions and billions of dollars are wasted annually, and nobody seems accountable. All politicians in every election cycle promise to reduce waste in spending. When was the last time you heard of government actually doing that? I’ll answer that: Never. In business you learn that small savings very quickly become large savings. When you’re spending your own money, you learn how to eliminate waste. The next president has to stop throwing your money away.
Save a few billion here, a few billion there, and before you know it, you’ve made a real dent in spending. - -This waste isn’t difficult to find. In 2013, Business Insider’s Walter Hickey went through the reports of each government agency’s Inspector General and pretty easily identified $15 billion in quick savings, ranging from the $42 million the Department of Education gave to a college that was ineligible to receive any federal funds, to the $2.7 billion that the Department of Health and Human Services could save just by reexamining the price that Medicaid and Medicare should pay for wholesale prescription drugs. - -In 2015, Citizens Against Government Waste issued its Prime Cuts report, showing how $648 billion could be cut from the 2016 budget without causing harm. $9.6 billion could be saved by ending the Rural Utilities Service program that makes loans and grants to utilities in underserved parts of the country; in one rural Arkansas town the government spent $5,500 of your tax dollars per resident to provide broadband access. The report also points out the cost of a lack of supervision of different programs, noting that there are 6.5 million active Social Security accounts issued to people who are supposedly 112 years old or older, although only 35 people of that age were known to be living. And a lot of people have estimated that there is more than $100 billion in waste in the Medicare program. - -The point is that we throw away billions of dollars every single year, and the next president has to finally do something to stop it. - -It’s time we finally brought our tax system up to date by reducing the burden on most Americans, simplifying the system for everyone, providing a sensible policy for large corporations and small businesses, and cutting out the billions of your dollars we waste every year . . . and then, to top it all off, bringing our jobs back home where they belong. - - - - - - - - - - - -MAKING AMERICA GREAT AGAIN - - -I WAS TWENTY-EIGHT YEARS old in 1974 when I got involved with my first major construction project. The once great Commodore Hotel, located right next to Grand Central Station, was a total mess. There had been a time when the Commodore was one of the greatest hotels in the world, but the hotel and the whole neighborhood had become run-down. - -A lot of the buildings in the area were already in foreclosure, and many of the stores were boarded up. The exterior of the Commodore was filthy, and the inside was so dark and dingy that it felt like the building was on the verge of becoming a welfare hotel. - -It was a dying building, in a dying neighborhood, in a struggling city. - -I probably was still too young to know better. But you know what—I was the same person then that I am today, up for any challenge. I had total confidence in my ability to get great things done. But today I have the added benefit of truly great experience. - -When I looked at the Commodore, I saw its potential—it would be the largest hotel renovation in New York City during the latter part of the twentieth century. - -The neighborhood still had possibilities as well. Right in the heart of the Grand Central area, there were thousands of people walking by the hotel every day. I didn’t have enough money to finance the deal myself, and I might not have risked it even if I had the money to do so. - -All kinds of very smart real estate investors told me it wouldn’t work. - -And yet I had a vision of what could be done, so I never gave up.
My enthusiasm and meticulous planning brought others to the table. I’m an unstoppable force when I’m excited, and I was in full Trump mode on this project—and many others since then. - -During the years it took me to put this deal together, I learned a lot about working with the city and the banks, the construction industry and the unions. I could have just refurbished the existing structure, but that’s not the way I think. - -There were detractors all along the way. For instance, the preservationists were angry about my creating a new and beautiful glass exterior façade. Inside, I gutted all of the floors and replaced them with the best available materials. - -The hotel, the Grand Hyatt, has been successful since the day it opened in 1980. It became the foundation for the restoration of the entire Grand Central neighborhood as well as my calling card—introducing the Trump quality brand to the people of New York. - -That project marked the first time I took a large-scale failing property and made it great again. As part of that deal I fixed up the great Grand Central Terminal itself—it looked beautiful and clean again. I’ve done it over and over again in the thirty-five years since—and now for the really big and important one: our country. - - - -We can take a crippled country and make it great again. Our country has been allowed to languish and become a tarnished, second-class place in the eyes of the world. - -The challenges ahead are many. The naysayers from the media and the political establishment are out there because they fear any changes to the status quo from which they benefit. - -But guess what? I have a vision and I understand the process by which we’re going to accomplish our goals. We need to strengthen our military, help our vets, stand up to our enemies, deter illegal immigration, rebuild our infrastructure, revamp our tax code and educational system, and rip apart the ridiculous policies of the past, including Obamacare and the Iran nuclear “agreement.” - -Most important, we need to reinvigorate the American dream and give our country back to the millions of people who have labored so hard for so little. Too many Americans are wondering (and who can blame them) what happened to this nation’s great promise and the idea that each generation makes things better for their children. - -Don’t bet against what I am saying—I understand odds very well—because I’ve always tackled the hardest challenges and come out on top. My name has become one of the greatest brands in the world. I know how to win. I like what Jay Leno said at the ceremony to unveil my star on the Hollywood Walk of Fame. “It is now official,” he proclaimed. “There is no place in America that doesn’t have the Trump name on it.” - -Candidates for political office always say they’re running on their record. Unfortunately, their records are made up of them talking about what they’re going to do, rather than them getting things done. - -Our nation’s capital has become the center of gridlock. It seems like these days most of the energy in Washington is being spent deciding whether we’re going to keep the government operating or not. No surprise there: Washington’s been running a going-out-of-business sale for a long time. - -It’s no wonder that our president and Congress have such low ratings in the polls. No wonder that we’ve lost our influence and the basic respect of both our allies and enemies throughout the world. 
- -Meanwhile, the Supreme Court has decided in its infinite wisdom to fill the breach by making social policy rather than defending our most precious historic assets, the US Constitution and the Bill of Rights. - -We have three branches of government, but the trunk of that tree is rotting away. - -For years I had thought about—but resisted—running for the presidency. Friends, colleagues, and customers encouraged me to do something. I thought, “I’m not a politician, and I have a huge, successful business to run.” - -But then I realized I couldn’t stand what I was seeing. I couldn’t believe the hypocrisy and inaction of Washington “insiders” who wanted to keep the gravy train flowing in their direction, while outside the Beltway, Americans were suffering and rightfully angry about the lack of leadership and creativity. - -So when I spoke up, the media squawked, the politicians cringed, and the special interests realized that their days of influence were numbered. - -A lot of people tried very hard to paint a bleak picture of what would happen. - -Then the American people spoke. - -The crowds started coming to my rallies in droves. We had to move our rallies into football stadiums and basketball arenas, while my competitors could barely fill small rooms. The national debates drew huge audiences—more than 24 million viewers—because our citizens felt hopeful again and wanted to hear what I had to say. And what I have been saying is that it is time to do everything necessary to Make America Great Again. - -It begins by creating millions of good jobs for hardworking Americans. The Economic Policy Institute estimates we’ve lost more than five million jobs since 1997 because of the terrible trade deals we’ve made. Those jobs are coming back home. We’ve created too many jobs—in other countries. - -Our military must be by far the greatest in the world, so when we negotiate deals with countries like Iran, we do it from strength. And when our soldiers come home, they must receive the care they have earned. This is the one national debt we should be thrilled to pay. - -A great wall on our southern border must be built. It needs beautiful doors in it to welcome LEGAL immigration, but the flood of illegal immigrants must end. And we need to legally stop the practice of birthright citizenship; the Fourteenth Amendment was never intended to create a technical path to citizenship. Most Native Americans, for example, although they were born here, were not automatically granted citizenship—and it took almost 150 years before a law was passed making them citizens if they wanted to be. - -And the Second Amendment was created to make sure Americans could protect themselves from tyranny. There is no way we will change it. - -A revised revenue-neutral tax code—which conservative writer and commentator Wayne Root described as “close to perfect”—will put money back in the pockets of the people who need it most; and when you spend it—instead of the government—you’ll be creating American jobs. It will encourage corporations to spend their earnings here, resulting in even more new jobs. - -Our educational system needs to better prepare our children and retrain adults to succeed in the new digital marketplace. No one knows how to do that better than local communities. The federal government should not be telling local schools how to educate our children. Common Core will be dead.
- -Obamacare needs to be repealed and replaced with a sensible health care system that creates a competitive marketplace, which will reduce costs while providing for the medical needs of all Americans. - -We can create tens of thousands of new jobs by rebuilding our collapsing infrastructure. These are the real shovel-ready projects: Roads, bridges, tunnels, and tracks have to be replaced or repaired before they crumble, and this will also put many thousands of people to work. - -The most powerful people in Washington are lobbyists and special interest groups, whose money funds most elections and buys them influence. That has to stop, and electing someone who doesn’t take their donations is a good first step. - -We must have a viable energy policy that uses our abundance of resources to power America back to economic prosperity. - -You can believe what I say, because to see what I’ve accomplished, all you need to do is take a nice walk through the greatest cities of the world—and look up. Look up, and you’ll see the Trump buildings rising skyward. - -I’ve done things that nobody else has done. The 68-story Trump Tower, on Fifth Avenue right next to Tiffany’s, was the tallest entirely glass-exteriored building in Manhattan when it opened in 1983. That helped pioneer the modern luxury-building industry. - -One of the things I’m most proud of about that building is that the person I put in charge of overseeing construction was a 33-year-old woman. I made that decision in 1983, when the fight for gender equality in business was really beginning. - -None of the people who whine about the way I talk to women mention the fact that I voluntarily promoted gender equality in a male-dominated industry. The women who work and have worked for me will vouch for the fact that I was as demanding of them as I was of their male counterparts. - -That’s the kind of “gender equality” we need: Leadership that inspires the best in people, male or female, not a wishy-washy former secretary of state who doesn’t understand the lunacy of having her own private e-mail server. - -Laying off thousands of workers and leaving companies in a mess is also not an accomplishment, at least not one to be proud of or to pretend qualifies you to run our country. - -I always think big. I start with a plan to build the biggest, the most beautiful, and the highest-quality projects. If you don’t begin with big dreams, you can never fulfill them. Trump buildings are all over New York, from 40 Wall Street to the West Side Railway Yards. From Columbus Circle to the Trump Palace on the East Side, and downtown to the SoHo Condominiums. - -That’s just for starters. - -Eventually, we started building outside the city, and currently the Trump name is on buildings in nine states, from New York to Hawaii, from Florida to Washington, and in ten other countries, from Uruguay to India. Many large-scale and even massive projects are in the pipeline and ready to roll. - -At 52 stories tall, 555 California Street is the second-tallest building in San Francisco, and the largest in terms of usable floor space. Originally the Bank of America’s world headquarters, that building has been used in films, including Dirty Harry and The Towering Inferno. - -It would certainly make anyone’s day to be in it. - -Trump World Seoul consists of six condominium buildings throughout Seoul and its neighboring cities. The glass-clad Trump Tower at Century City, featuring 220 condos, is one of the tallest buildings in Manila, Philippines. 
- -The 72-story Trump Ocean Club in Panama City is Panama’s first five-star development. We’re building luxury hotels and residences all over the world. We’re representing the best of America throughout the world with some of our best hotels and projects ready to be announced. - -I understand “foreign policy” from the practical standpoint: I know how to make deals, bring foreign governments to the table, and negotiate deals that don’t give everything away. In fact, the Chinese have their biggest bank in Trump Tower. They want to be part of Trump wherever they can. - -That’s why when I hear politicians talking about some trade bill they voted for or how they balanced the budget, I really have to laugh. Maybe they have political experience, but they certainly don’t have common sense or real-world experience. - -Every construction project, every deal, is totally unique. Each project is an unbelievable balancing act; I have to bring together the business community, the financial community, and the local officials. I’ve learned to work with great architects and designers. I’ve worked out deals with the unions and the trades. - -I care about every detail. I read the small print, unlike the negotiators of the Iran nuclear “agreement,” who don’t seem to know what’s in the “side deals,” which Iran struck with the agency that is supposed to be responsible for verifying Iran’s compliance. - -When it came time to expand, I got interested in golf resorts. When I was a kid, my father would take me golfing with him. He didn’t play much, but he had a beautiful swing. I looked around, and who did I see on the golf courses? Successful people; great businesspeople. - -What were they doing as they played? They were talking about deals. I couldn’t begin to guess how many great deals were made on a golf course. So I decided to build the best golf courses and resorts in the world, and that’s what we’ve done. - -People think it’s hard to create a building? Try putting in a new golf course in New York City. In 2015 we opened Trump Golf Links at Ferry Point in the Bronx, and it instantly became one of the greatest public golf courses in the world. It’s the first public golf course opened in New York in over half a century. - -It was under construction for many years—a real mess. - -Nobody else could get it done. People were running in the other direction. Nobody wanted to take on this project’s completion. - -The city had wanted a golf course built for decades, but nobody could figure out how to do it. After the politicians had screwed up this deal for many years, they brought in a businessman, me, to clean up their mess. - -I created a magnificent golf course. - -It’s been a huge success for the city and for the Trump Organization. I promised that golfers would come from all over the world to play a round of golf in the Bronx, and that’s exactly what has happened. - -I don’t just want to bring golfers to America. We need to bring all kinds of businesses back to America, especially those that are American-owned. - -If we create the proper tax climate and cut the endless red tape restricting American businesses large and small, then we’ll have a real job resurgence, which will help to create “full employment.” - -Full employment means we don’t have 20 percent of the population either out of work or underemployed. Full employment means that every new worker can feel good about going home to his or her family with the pride of having done a hard day’s work. 
- -Full employment benefits unions and employers; together they can rebuild our country’s infrastructure. - -Full employment means that people who are now mortgaged beyond their means can get out from under the oppressive burden of worrying if their homes are secure. As credit loosens up from banks, the new and renovated housing industries will boom as well. - -We are at a critical turning point in our history, not only for you and me but for our children as well. America may be struggling, it may be crippled, but we can rise again. Our time has not passed, it is here, and the potential is amazing. - -America’s best days are still to come. Why? Because of our people. Together we can Make America Great Again. - -[Photo insert captions follow.] - -My wonderful family. - -At my confirmation—First Presbyterian Church, Jamaica, New York. I’m top row, second from the right. - -As a little boy. - -My father and mother, Fred and Mary, at my graduation from New York Military Academy. - -Dancing with my daughter Tiffany at Mar-a-Lago. - -With Ivanka, Don, and Eric. - -Trump International Hotel on Pennsylvania Avenue in Washington DC, under construction. Formerly this was the Old Post Office. - -The Trump Building at 40 Wall Street opposite the New York Stock Exchange. - -Trump Palace. - -Trump International Hotel & Tower, One Central Park West. - -Trump Tower, adjoining Tiffany’s (whose air rights I purchased), between 56th Street and 57th Street on Fifth Avenue. - -Trump International Hotel & Tower on the river in Chicago. - -The Bank of America Building, San Francisco. - -Trump World Tower—90 stories, opposite the United Nations. - -With my sisters and brothers. Left to right: Robert, Elizabeth, Fred Jr., me, and Maryanne. - -Trump National Doral, Miami. - -Trump Golf Links at Ferry Point. - -Trump International Hotel in Las Vegas—Las Vegas’s tallest building. - -With President Ronald Reagan, a great guy, at the White House. - -With my beautiful wife, Melania. - -At the office with Don, Ivanka, and Eric. - - - -ACKNOWLEDGMENTS - - -I would like to thank David Fisher, Bill Zanker, Corey Lewandowski, David Cohen, Rhona Graff, Meredith McIver, Hope Hicks, and Amanda Miller for their enthusiasm and assistance throughout the writing of this book. Additionally, Byrd Leavell and Scott Waxman at Waxman Leavell Literary Agency, Don McGahn, and Carolyn Reidy, Louise Burke, Mitchell Ivers, Jeremie Ruby-Strauss, Irene Kheradi, Lisa Litwack, John Paul Jones, Al Madocs, Jaime Putorti, Jennifer Robinson, Jean Anne Rose, and Nina Cordes at Simon & Schuster brought their expertise to the table and delivered a finished product in record time. Thanks to all for your hard work—it is very much appreciated. - - - -MY PERSONAL FINANCIALS - - -My net worth has increased since I released (at my presidential announcement) the attached financial statement, which is dated as of June 2014. The value of my real estate in New York, San Francisco, Miami, Washington, DC, Europe, and many other places has gone up considerably. I have very little debt, and even that is at low interest rates. My current net worth is more than ten billion dollars. - -My income for 2014, as I reported in the PFD statement, was $362 million—not including dividends, interest, capital gains, rents, and royalties. Income for 2015 will exceed $600 million. I also did well in the stock market.
While that isn’t something that I’ve focused on in the past, and is only a small part of my net worth, 40 of the 45 stocks I bought rose substantially in a short period of time, resulting in a $27,021,471 gain on sale—the stocks remaining in the portfolio have an unrealized gain of more than $22 million. - -On the financial disclosure forms I included more than 500 business entities, 91 percent of which I own completely. I also included the royalties from my book The Art of the Deal, one of the bestselling business books of all time, which is still selling after three decades, as well as my many other bestsellers. - -I also reported receipts from my TV show, The Apprentice. NBC/Universal announced it was being renewed, and they were very disappointed when I informed them that I would not be available to be in the boardroom for our fifteenth season because of my run for president. They tried to talk me into it, but eventually hired Arnold Schwarzenegger—who will do a great job—to sit in my chair. During the 14 seasons of The Apprentice and Celebrity Apprentice, which is now being broadcast around the world, I made $213,606,575. - -I was very pleased to file this disclosure and proud of what I’ve accomplished. - - - -As always, - -I dedicate this book to my parents, - -Mary and Fred Trump - - - - - -ONE - - - -GET TOUGH - - - -I’ve written this book because the country I love is a total economic disaster right now. - -For starters, we are $15 trillion in debt, and that number is soaring. Let me help you wrap your mind around it. If by some miracle the so-called leaders in Washington could find a way to save one billion of your tax dollars every single day, it would still take about forty-one years (fifteen thousand days) to pay off the debt. And that’s not even taking into account the interest. - -We don’t have forty-one years to turn this thing around. The way I see it, we have four, maybe eight years tops. - -Every day in business I see America getting ripped off and abused. We have become a laughingstock, the world’s whipping boy, blamed for everything, credited for nothing, given no respect. You see and feel it all around you, and so do I. - -To take one example, China is bilking us for hundreds of billions of dollars by manipulating and devaluing its currency. Despite all the happy talk in Washington, the Chinese leaders are not our friends. I’ve been criticized for calling them our enemy. But what else do you call the people who are destroying your children’s and grandchildren’s future? What name would you prefer me to use for the people who are hell-bent on bankrupting our nation and stealing our jobs, who spy on us to steal our technology, who are undermining our currency, and who are ruining our way of life? To my mind, that’s an enemy. If we’re going to make America number one again, we’ve got to have a president who knows how to get tough with China, how to out-negotiate the Chinese, and how to keep them from screwing us at every turn. - -Then there’s the oil crisis. The idea of $85 a barrel for oil used to be unthinkable. Now OPEC yawns at that figure and jacks the price higher, laughing all the way to the bank. The result: you and your family are paying $3 a gallon, $4 a gallon, $5 a gallon, and soaring. Excuse me, but OPEC—these twelve guys sitting around a table—wouldn’t even be in existence if it weren’t for the United States saving and protecting those Middle Eastern countries! Where is our president in all this? Where’s the accountability?
What is the point of executive leadership if our executive is weak and doesn’t lead? What excuse is there for a president whose answer to the oil crisis is not to get tough with OPEC, not to free our own domestic oil companies to do their job and drill, but to release our strategic reserve? That’s not leadership, that’s an abdication of leadership. - -Whether we like it or not, oil is the axis on which the world’s economies spin. It just is. When the price of oil goes up, so does the price of just about everything else. Think about it. You buy a loaf of bread. How did it get to the store? What powered the bread truck? What equipment did the farmer use to harvest the grain? Equipment and vehicles don’t fuel themselves. They need oil. And when a producer’s prices go up, they pass the cost along to you in the form of higher prices. I was privileged to be educated at the finest business school in the world, the Wharton School of Business. But it doesn’t take some prestigious business diploma to realize what’s going on here. It’s basic math. - -And yet, with China beating us like a punching bag daily, OPEC vacuuming our wallets clean, and jobs nowhere in sight, what does President Obama do? He makes his NCAA basketball picks. He hosts lavish parties at the White House. Now look, I like basketball and lavish parties like the next person. But when you’re the president of the United States and your country is burning to the ground right before your eyes, your first instinct should not be to party. It’s no wonder America is flat broke. - -Did you know that one in seven Americans is now on food stamps?1 Think of it. In the United States, the most prosperous nation in the history of human civilization, our people are going hungry. In March 2011, we saw the steepest spike in food prices in almost four decades.2 Combine that with skyrocketing energy costs, double-digit unemployment, Obama’s massively wasteful spending spree, and the federal government’s annexation of the health-care system, and the outcome is painfully clear—we’re headed for economic disaster. If we keep on this path, if we reelect Barack Obama, the America we leave our kids and grandkids won’t look like the America we were blessed to grow up in. The American Dream will be in hock. The shining city on the hill will start to look like an inner-city wreck. It won’t be morning in America, as President Reagan put it. We’ll be mourning for America, an America that was lost on Obama’s watch. The dollar will fall as the world’s international currency. Our economy will collapse again (something I believe is a very real danger and risk: a double-dip recession that could turn into a depression). And China will replace America as the world’s number one economic power. - -But it doesn’t have to be this way. If we get tough and make the hard choices, we can make America a rich nation—and respected—once again. The right president can actually make America money by brokering big deals. We don’t always think of our presidents as jobs and business negotiators, but they are. Presidents are our dealmakers in chief. But a deal is only as good as the person brokering it. Constitutionally, a president is the commander in chief, appoints judges, and can veto or sign bills. What’s his job the rest of the time? Well, I can tell you one important job: he serves as America’s chief negotiator and dealmaker. He is supposed to broker deals with other nations that protect and benefit us.
The president’s duty is to create an environment where free and fair markets can flourish, private sector jobs can be created, and our economy can boom. If a president is a strong negotiator and makes the right deals, America wins. If he wimps out and makes the wrong deals, you and your children pay the price. - -Now consider the embarrassing and anemic deals Obama has pulled off. I’m for free and fair trade. After all, I do business all over the world. But look at the deal Obama cut with South Korea. It was so bad, so embarrassing, that you can hardly believe anyone would sign such a thing. In theory, the agreement was supposed to boost American exports to South Korea. In reality, the agreement Obama signed will do next to nothing to even out the trade imbalance, will further erode American manufacturing and kill more American jobs, and will wipe away the tariffs South Korea presently pays us to sell their stuff in our country. Why would Obama agree to these terms, especially when we hold all the cards? The South Koreans like our military defending them against North Korea. But they don’t need us to do their dirty work—South Korea’s armed forces number between 600,000 and 700,000. And yet we still have 28,500 American troops in South Korea.3 Why? - -Even if you think it’s a good idea for us to keep troops in South Korea, why isn’t South Korea footing the whole bill for our defending them? (Currently they only cover a portion of the costs.) Better still, why is our president signing the trade bill that the South Koreans want him to sign instead of the one that gives us maximum advantage? He may have been a good “community organizer,” but the man is a lousy international dealmaker. This is hardly a surprise—he’s never built or run a business in his life. His entire career of dealmaking, such as it is, has been finding ways for government to shake down taxpayers to reward his special interest groups. That’s not the kind of dealmaker we need. - -Then look at China. There are four Chinese people for every American. China’s population is massive, and its economic power is huge and growing. China is now the second-largest economy in the world. We are building China’s wealth by buying all their products, even though we make better products in America. I know. I buy a lot of products. Windows, sheetrock, you name it, I buy it by the truckload. I buy American whenever I can. Unfortunately, a lot of times American businesses can’t buy American products because, with the Chinese screwing around with their currency rates, American manufacturers can’t be competitive on price. If China didn’t play games with its currency and we played on a level economic playing field, we could easily out-compete China. But the Chinese cheat with currency manipulation and with industrial espionage—and our alleged commander in chief lets them cheat. The whole thing is a scandal and unfair to our workers and businesses. There’s no way America can become rich again if we continue down the path we’re on. - -Yet with all this, in January 2011, Barack Obama kowtowed to China’s president Hu Jintao and welcomed him to the White House. He even gave the Communist leader the high honor of an official State Dinner. China’s economy enjoys double-digit growth at our expense, while China screws us with every turn of its currency, is the biggest commercial espionage threat we face, and continues its deplorable human rights abuses, and Obama’s response is to roll out the red carpet? It’s incompetence that borders on betrayal.
- -Obama legitimized China on the world stage. So what did he get in return? Export deals amounting to a measly $45 billion. Obama’s team immediately declared him a master negotiator. In 2009, our trade deficit with China was nearly $230 billion. A pathetic $45 billion in trade contracts is an insulting joke. But when Hu Jintao looks across the negotiating table, he sees the kind of spinelessness and amateurism that lets him know he can buy us off by whisking a few crumbs our way. I believe America’s honor shouldn’t be for sale. We shouldn’t entertain Communists and beg for a few tiny contracts. Instead, a true commander in chief would sit down with the Chinese and demand a real deal, a far better deal. Either China plays by the rules or we slap tariffs on Chinese goods. End of story. This year, by the way, our deficit with China will be more than $350 billion—they are laughing at us. - -I love America. And when you love something, you protect it passionately—fiercely, even. We are the greatest country the world has ever known. I make no apologies for this country, my pride in it, or my desire to see us become strong and rich again. After all, wealth funds our freedom. But for too long we’ve been pushed around, used by other countries, and ill-served by politicians in Washington who measure their success by how rapidly they can expand the federal debt, and your tax burden, with their favorite government programs. - -America can do better. I think we deserve the best. That’s why I decided to write this book. The decisions we face are too monumental, too consequential, to just let slide. I have answers for the problems that confront us. I know how to make America rich again. I’ve built businesses across the globe. I’ve dealt with foreign leaders. I’ve created tens of thousands of American jobs. My whole life has been about executing deals and making real money—massive money. That’s what I do for a living: make big things happen, and now I am worth more than $7 billion. - -Restoring American wealth will require that we get tough. The next president must understand that America’s business is business. We need a president who knows how to get things done, who can keep America strong, safe, and free, and who can negotiate deals that benefit America, not the countries on the other side of the table. A president doesn’t “create” jobs; only businesses can do that. But he can help create an environment that allows the rest of us—entrepreneurs, small businessmen, big businessmen—to make America rich. - -The damage that Democrats, weak Republicans, and this disaster of a president have inflicted on America has put us in a mess like we’ve never seen before in our lifetimes. To fix the problem we’ve got to be smart and get tough. There’s no time to waste. - - - - - -TWO - - - -TAKE THE OIL - - -When you do someone a favor, they say thank you. When you give someone a loan, they pay you back. And when a nation like the United States sacrifices thousands of lives of its own young servicemen and women and more than a trillion dollars to bring freedom to the people of Iraq, the least—the absolute least—the Iraqis should do is pick up the tab for their own liberation. - -How much is it worth to them to be rid of the bloodthirsty dictatorship of Saddam Hussein and to have gained a democracy in which they can vote and have a freely elected parliament? In reality, that’s a priceless gift, although after being blown to pieces, many people think that they were better off before.
When I say they should pay us back, I’m not even talking about cash out of their pockets. All I’m asking is that they give us, temporarily, a few flows of oil—enough to help pay us back and help take care of the tens of thousands of families and children whose brave loved ones died or were injured while securing Iraqi freedom. - -But does Iraq do that? No. In fact, they’ve made it clear they have no intention of doing so. Ever. - - - - - -To the Victor Go the Spoils - - - -In June 2011, Republican Congressman Dana Rohrabacher of California visited Iraq and told Iraqi Prime Minister Nouri al-Maliki that he hoped Iraq would someday consider repaying America for all our sacrifices on Iraq’s behalf. The Prime Minister’s response was to have his press spokesman, Ali al-Dabbagh, call up the U.S. embassy and say that they wanted the congressman to get out of their country and that his remarks were “inappropriate.” - -Excuse me? Inappropriate? What’s “inappropriate” is the fact that America puts up with this garbage. We’ve spent blood and treasure defending the people of the Middle East, from Iraq to Kuwait to Saudi Arabia and the small Gulf states. And if any country in the Middle East won’t sell us their oil at a fair market price—oil that we discovered, we pumped, and we made profitable for the countries of the Middle East in the first place—we have every right to take it. - -The ingratitude of Iraq’s leadership is breathtaking. This year, the Baghdad city government even had the audacity to demand that America pay $1 billion for the aesthetic damage caused by blast walls we erected to protect the people of Baghdad from bombs. That’s like a drowning man charging a lifeguard for having torn his swimsuit in the process of saving his life. - -Granted, eight years ago when we were told that we would be greeted in the streets by the Iraqi people with flowers and welcomed as liberators, I didn’t buy it. But as far as I’m concerned, Iraq can keep its flowers—the oil is a different matter. We should take the oil. And here’s why: because the Iraqis won’t be able to keep it themselves. Their military, even as we try to rebuild it, is incompetent, and the minute we leave, Iran will take over Iraq and its great oil reserves, the second largest after Saudi Arabia. If that happens, all of our brave men and women will have died in vain and $1.5 trillion will have been squandered. - -So, if Iran is going to take over the oil, I say we take over the oil first by hammering out a cost-sharing plan with Iraq. If we protect and control the oil fields, Iraq will get to keep a good percentage of its oil—not to mention its independence from Iran—and we will recoup some of the cost of liberating the Iraqis and also pay back the nations that fought with us in the war. And I want to repay the families of the soldiers who died or were terribly wounded. Of course, nothing can ever replace a lost life or a lost limb, but we can send the children of dead or badly wounded veterans to college, provide compensation to the spouses of our service members killed in Iraq, and make sure that wounded veterans are properly looked after. It’s common sense, and peanuts compared to what is lying under Iraq’s land. Each American family who lost a loved one in Iraq should be given $5 million, and our wounded veterans should be given money, perhaps $2 million each plus medical costs. 
- -Call me old school, but I believe in the old warrior’s credo that “to the victor go the spoils.” In other words, we don’t fight a war, hand over the keys to people who hate us, and leave. We win a war, take the oil to repay the financial costs we’ve incurred, and in so doing, treat Iraq and everybody else fairly. As General Douglas MacArthur said, “There is no substitute for victory.” From the very beginning of Operation Iraqi Freedom, I believed we should have hammered out the repayment plan with the Iraqis—through exiled Iraqi dissidents—before we launched the war and rid the people of Iraq of their murderous dictator, Saddam Hussein. And back then, there were a few smart people who agreed with me and said the same thing. One of them was the director of the Defense Department’s Office of Net Assessment, Andrew Marshall. He recommended that oil revenues should be used to reduce the sticker price of the occupation.1 Of course, that hasn’t happened. Still, there’s no reason we can’t or shouldn’t implement a cost-sharing arrangement with Iraq. Do not take no for an answer. - -It’s hardly a radical idea. In September 2010, our own Government Accountability Office (GAO) and others studied the issue in depth and concluded that a cost-sharing plan is feasible and wise. All the know-nothings in the White House need to do is read the cover of the report: “Iraqi-U.S. Cost-Sharing: Iraq Has a Cumulative Budget Surplus, Offering the Potential for Further Cost-Sharing.” That’s literally the title. And if they actually read the first line of the report, they would know the GAO found that the Iraqi government is running a budget surplus of $52.1 billion.2 Iraq just came through a lengthy war, and they’re already back in business and flush with cash. Why are we footing the bill and getting nothing in return? - -I’ll give you the answer. It’s because our so-called “leaders” in Washington know absolutely nothing about negotiation and dealmaking. Look, I do deals—big deals—all the time. I know and work with all the toughest operators in the world of high-stakes global finance. These are hard-driving, vicious, cutthroat financial killers, the kind of people who leave blood all over the boardroom table and fight to the bitter end to gain maximum advantage. And guess what? Those are exactly the kind of negotiators the United States needs, not these cream puff “diplomats” Obama sends around the globe to play patty cake with foreign governments. No, we need smart people with titanium spines and big brains who love America enough to fight fiercely for our interests. Ronald Reagan’s Secretary of State George Shultz used to call diplomats into his office and, standing before a map, ask them what country they represented. When they pointed to their assigned country, he’d correct them and say, “No, that’s not your country, you represent the United States.” Leadership starts with the person at the top. The president sets the tone. Ronald Reagan put America first, and he knew how to negotiate. Barack Obama is no Ronald Reagan—not even close. And that’s why we’re in the mess we’re in and why our nation is on the wrong track and doing so badly. - -Until we get a new president, our congressmen will continue to be treated with contempt by the Iraqi government, that government will continue to run a surplus at our expense, and we will continue to suffer economically because the Iraqi government, and everyone else, knows Obama is weak and won’t stand up for America’s interests.
The man’s natural instinct is to bow before every foreign leader he can find. - -We don’t owe the Middle East any apologies. America is not what’s wrong with the world. We’re an example of freedom to the world. No one can match America. We have big hearts—and the courage to do what’s right. But we’re not the world’s policemen. And if we have to take on that role, we need to send a clear message that protection comes at a price. If other countries benefit from our armed forces protecting them, those countries should cover the costs. Period. - - - - - -Leadership Is Down, Gas Prices Are Up - - -Beyond simple justice, and beyond reducing our national debt, another advantage of taking the oil is that it will significantly bring down the price of gas. Gas prices are crippling our economy. In the first two years of the Obama administration, gas prices leapt a shocking 104 percent. That’s hardly the “hope and change” Americans voted for. That said, there are many environmentalists who are cheering and applauding higher prices. Their logic, if you can call it that, is that if we drive less we will emit less carbon, which will allegedly help alleviate the make-believe problem of global warming. Don’t forget, when he was a United States senator, Obama himself suggested that higher gas prices would be a good thing, but that he would prefer a “gradual adjustment.”3 - -Then look at the person Obama appointed as his Energy Secretary—Steven Chu, a guy who actually told the Wall Street Journal, “Somehow we have to figure out how to boost the price of gasoline to the levels in Europe.”4 So the fact that we’ve seen a 104 percent jump in the price of a gallon of gas since Obama was elected president should hardly come as a surprise to anyone who was paying attention. He and his supporters telegraphed as much all along. As crazy as it sounds, these folks want higher energy prices because they believe that will force Americans to drive less and businesses to slow down on production and transportation, which they think is a good thing, but which in fact will only cost us more jobs and put us at a greater economic disadvantage against China. Whose side are they on, anyway? - -Here’s another one: Cap and Tax (or as they called it, Cap and Trade). Remember that? When he was campaigning to become president, Obama outright admitted that his plan to tax businesses on carbon emissions that exceeded his arbitrary cap would drive energy prices sky high. Here’s exactly what he said: - -Under my plan of a cap and trade system, electricity rates would necessarily skyrocket, even regardless of what I say about whether coal is good or bad, because I’m capping greenhouse gases, coal-powered plants, you know, natural gas, you name it. Whatever the plants were, whatever the industry was, they would have to, uh, retrofit their operations. That will cost money. They will pass that money on to consumers.5 - - - - - -Most of us shake our heads in disbelief at this stuff. But you really have to understand the fringe Left’s radical mindset, just how extreme this president and his dwindling group of supporters are, and how out of touch they are with the rest of the country. They want us to have higher energy prices; they want to deprive our economy of the fuel it needs to grow; they intentionally put the pseudo-science of global warming and socialist management of our economy—the two go together—ahead of making our economy competitive and creating real private sector jobs for the American people.
- -The fact is, you’re not going to see real growth or create real jobs until we get these exorbitant energy costs under control. Someone needs to tell this president that business owners are not the enemy; they’re the people who create jobs. Government can’t create jobs. All it can do is put more people on the taxpayer’s dime. All it can do is sap our nation’s wealth. - -The real way to help the 14.4 million unemployed Americans get their jobs back is not through “stimulus spending” that only has you, the taxpayer, cutting the check for yet more government employees. The real way is limiting taxes, slashing crippling and unnecessary regulations, and keeping commodity and fuel costs low. - -If our “community organizer in chief” took the time to study the marketplace, he would know that over the past year, things like fruit, pasta, coffee, and bacon, along with lots of other foods, have registered price spikes as high as 40 percent, and there’s no end in sight—in large part because of the price of oil, which has spiked transportation and fertilizer costs.6 Until we get this country’s lifeblood—oil—back down to reasonable rates, America’s economy will continue to slump, jobs won’t get created, and American consumers will face ever-rising prices. - -We can talk all day about windmills, nuclear power, and solar, geothermal, and other alternative fuels. I’m all for developing alternatives to oil, but that’s for the long term. The fact is, right now and for the foreseeable future, the planet runs on oil—and that means we need to get the price of a barrel of oil down—way down, maybe even to $20 a barrel—and boy would our economy rock. - -Does Obama do that? No. He goes around the country lecturing everyone that they need to buy hybrid vehicles, before hopping in his carbon-spewing presidential limousine and Air Force One. If he’s really concerned about carbon emissions and air pollution, then maybe he should have grounded his wife before she jetted off with forty of her “closest friends” on a lavish vacation to Spain on the taxpayers’ dime. I’ve got a private jet and love taking my wife and kids on expensive trips too, but there are two differences: I pay for it myself, and I don’t go around waving my finger in people’s faces lecturing them on the evils of travel and restricting their economic freedoms. - -Obama promised he was going to create millions of so-called “green collar” jobs. He used that promise to justify his massive government giveaway of billions and billions of taxpayers’ dollars to green energy companies. We’re now seeing the results of Obama’s promise and big government scheme. Solyndra, a U.S. solar panel company, turned out to be a total bust. They were selling $6 solar panels for $3. It doesn’t take a genius to realize that’s a loser of a business model. But Solyndra’s owner, billionaire George Kaiser, had an inside connection with Obama: Kaiser was a big Obama donor and one of the president’s campaign fundraiser “bundlers.” So the Obama administration fast-tracked a $535 million federally guaranteed loan. Obama believed so much in Kaiser and Solyndra that he staged a big public relations event at Solyndra to deliver a speech singing the praises of Solyndra and green jobs and justifying why taxpayers should foot the bill to stimulate green companies. Predictably, the company went bankrupt, its 1,100 workers lost their jobs, and the American taxpayer got the shaft, to the tune of over half a billion dollars.
- -Obama has played off the Solyndra scandal, saying he has no regrets and that the company “went through the regular review process.”7 However, in the wake of FBI investigations, the truth is now leaking out. According to the Washington Post, emails have now been released revealing that “evidence is mounting that there was something irregular about the way the Solyndra deal got greenlighted.”8 I predict that there will be many more “Solyndra-style” revelations in the months to come. But Solyndra just shows you that this bunch is engaged in the very crony capitalism and insider deal-cutting that they are always accusing others of. Worse, it shows that the millions of green jobs Obama promised were completely bogus. - -But even more shocking than the hypocrisy of it all is the total cluelessness it reveals. At one of the president’s speaking events, a man told Obama that he and his wife need a bigger vehicle because they have eight kids. So what did Obama do? He told the guy, “Buy a hybrid van.” Just one problem: they don’t exist in America. This president cannot even speak intelligently without a teleprompter. It’s embarrassing and sad! - -When he’s not hectoring people about hybrids, he’s appointing his Attorney General Eric Holder to conduct criminal investigations of gas stations engaging in “price gouging.” This is a silly attempt to scapegoat and deflect attention away from how ineffective and weak he is on energy policy. As anyone with a brain knows, the reason gas prices are through the roof is because OPEC controls supply and therefore massively inflates crude oil prices.9 - -America doesn’t have time for games. This country is in huge trouble. It’s time to get serious and look at the facts. Currently, we’re paying over $85 a barrel for oil. The United States uses about 7 billion barrels of oil a year. Do the math. We’re singlehandedly transferring hundreds of billions of dollars a year to OPEC countries that hate our guts. And again, we’re giving all this money to governments who seethe with anti-American hatred. It’s stupid policy. - - - - - -Take On the Oil Thugs - - - -With proper leadership, we can get that price down to $40–$50 a barrel, if not the $20 that I have previously suggested. But to get there we need a president who will get tough with the real price gougers—not your local gas station, but the illegal cartel that’s holding American wealth hostage, OPEC. OPEC stands for the Organization of the Petroleum Exporting Countries. It was created at the Baghdad Conference in September 1960 by our good buddies Iran, Venezuela, Saudi Arabia, Iraq, and Kuwait. Since then OPEC has added as members Angola, Ecuador, Qatar, Algeria, the United Arab Emirates, Nigeria, and our dear friend Libya. So here you have twelve men (in this case they’re all men) sitting around a table determining and fixing the price of oil. Now, if you have a store and I have a store and we collude to set prices, we go to jail. But that’s what these guys do, and no one lifts a finger. And the worst part of it is that these twelve OPEC countries control 80 percent of the world’s accessible oil.10 - -Let your eyes dart back up to that list of OPEC’s founding members. First up, Iran. Iranian President Mahmoud Ahmadinejad has called for wiping our close ally Israel off the map. He said that the September 11 terrorist attacks on New York were a plot by the United States government. He believes the Holocaust is a “myth.” His regime is developing nuclear weapons in violation of the Nuclear Non-Proliferation Treaty. 
Next, Hugo Chavez’s Venezuela. During one of his rambling United Nations speeches, Chavez called President George W. Bush “the devil.” His mouthpiece in Venezuela, ViVe TV, issued a press release in January 2010, saying the 200,000 innocent victims of the awful Haiti earthquake were really killed by an American “earthquake weapon.”11 Then look at Saudi Arabia. It is the world’s biggest funder of terrorism.12 Saudi Arabia funnels our petro dollars—our very own money—to fund the terrorists that seek to destroy our people, while the Saudis rely on us to protect them! Then there is Kuwait, which would not even exist had we and our allies not fought the First Gulf War against Saddam’s aggression. And of course we have Iraq, whose freedom we’ve paid for to the tune of more than a trillion dollars and more than 4,000 dead servicemen and women. These countries do us no favors. Through OPEC they squeeze us for every penny they can get out of us. - -Two years ago, Amy Myers Jaffe, an energy expert from the James A. Baker III Institute for Public Policy at Rice University, did a study to determine the real product cost of a barrel of oil. The price of a barrel of oil back then was $60. Jaffe found that the actual cost to produce a barrel of oil then was $15, exactly a quarter of the actual market price.13 That means the oil sells for four times what it costs to produce before it even gets to the refinery to be turned into gas. Again, if you or I did this, we would be thrown in jail, because it’s illegal to collude and fix prices. But these petro thugs do this year in and year out and laugh all the way to the bank. They claim they’re not restricting oil production to jack up the prices, but that’s a lie. In 1973, OPEC produced 30 million barrels a day. Guess how much they produced in 2011? That’s right, the same amount. Production hasn’t moved an inch. The reason for this is not that OPEC countries have reached peak oil output. After all, as Robert Zubrin points out, as recently as April 2011, the Saudis announced they were going to cut production by 800,000 barrels a day, so they’re nowhere near running at full capacity.14 Instead, OPEC is squeezing production so oil prices skyrocket and America pays. - -The OPEC countries wouldn’t even exist if it weren’t for us—it’s our money that makes them rich and our troops that have made Iraq free and kept Kuwait, Qatar, and Saudi Arabia from being gobbled up by Saddam Hussein (or now, potentially, by Iran). A smart negotiator would use the leverage of our dollars, our laws, and our armed forces to get a better deal from OPEC. It’s time to get tough. And smart! - - - - - Sue OPEC - - - We can start by suing OPEC for violating antitrust laws. - -Currently, bringing a lawsuit against OPEC is difficult. It’s been made even more complicated by a 2002 federal court ruling, upheld by subsequent appeals courts, that “under the current state of our federal laws the individual member states of OPEC are afforded immunity from suit brought for damage caused by their commercial activities when they act through OPEC.”15 The way to fix this is to make sure that Congress passes and the president signs the “No Oil Producing and Exporting Cartels Act” (NOPEC) (S.394), which would amend the Sherman Antitrust Act and make it illegal for any foreign governments to act collectively to limit production or set prices. If we get it passed, the bill would clear the way for the United States to sue member nations of OPEC for price-fixing and anti-competitive behavior.
One of the smart people in this debate is Iowa Republican Senator Chuck Grassley, a co-sponsor of the bill. “It’s time to get it passed,” says Grassley. “OPEC needs to know we are committed to stopping anti-competitive behavior.” - -Here’s the good news: since 2000, this bill has passed the Senate Judiciary Committee four times with bipartisan support, and in May 2008, the NOPEC bill passed in the House when Democrats were in control. Now the bad news: President George W. Bush got spooked and threatened to veto the bill because he was afraid that, with the wars in Iraq and Afghanistan raging, NOPEC might spark “retaliatory action.” Bush’s fear was misguided. First of all, these oil shakedown artists need and want our money. What are they going to do? Fold their arms, throw a temper tantrum, and refuse to sell us their oil and be out billions and billions of dollars? Give me a break. And second, they already engage in “retaliatory action”: it’s called a 104 percent spike in the price of gas since Obama took office, and that’s with him going around practically kissing their feet. - -Thomas W. Evans was an adviser to Presidents George H. W. Bush and Ronald Reagan. Evans says that once OPEC or its member nations realized the likelihood of the huge damages they would face and how their illegal actions would be curtailed, they would be forced to seek a settlement on production goals that would put prices in much closer alignment with actual costs. The net effect, says Evans, would be price reductions for heating fuel and gas at the pump so large they might exceed the $168 billion the government spent on the 2008 federal stimulus package. As for concern over any potential fallout, he says what I say: getting tough is getting smart. Suing OPEC “would undoubtedly anger political leaders in the Middle East,” writes Evans. “But how stable is the Middle East right now? And isn’t starting a lawsuit better than starting a war?”16 - -Imagine how much money the average American would save if we busted the OPEC cartel. Imagine how much stronger economic shape we would be in if we made the Iraqi government agree to a cost-sharing plan that paid us back the $1.5 trillion we’ve dropped on liberating Iraq so it could have a democratic government. Just those two acts of leadership alone would represent a huge leap forward for our country. And by the way, it would also make us respected again in the world. It’s sad—truly sad and disgraceful—the way Obama has allowed America to be abused and kicked around. All we have to do is be smart and show some backbone to begin setting things right. - - - - - Use America’s Resources and Create Jobs - - - So number one, we take the oil through the cost-sharing plans that even the GAO says are smart and feasible. Two, we hit OPEC in the wallet and rein them in by signing the bipartisan NOPEC bill into law. And the third thing we need to do is to take advantage of one of our country’s chief assets—natural gas. We are the “Saudi Arabia” of natural gas, but we don’t use it. Abu Dhabi recently converted all of its transportation to natural gas so it can sell its expensive oil to us.17 Even they recognize how efficient natural gas is. It’s cleaner, cheaper, and better. So why aren’t we using it to our advantage? - -Did you know that with the natural gas reserves we have in the United States we could power America’s energy needs for the next 110 years?
Those aren’t my estimates; that’s what the United States Energy Department’s Energy Information Administration says. In fact, one of the larger mother lodes of natural gas, the Marcellus Shale, could produce the energy equivalent of 87 billion barrels of oil.18 Some critics believe those numbers might be inflated. Fine. Let’s say the real number is fifty-five years of energy, or that we only get 43 billion barrels’ worth of energy. So what? That buys us more time to innovate and develop newer, more efficient, cleaner, and cheaper forms of energy. - -The point is that sitting around handwringing all day accomplishes nothing. Yes, I want us to extract the shale gas safely and responsibly. Who doesn’t? But too often, environmental extremists take things so far that they will never be pleased. They’re for nuclear energy, then they’re against it. They like natural gas, then they don’t like it because of new drilling techniques. They want windmills everywhere, then they oppose them because they hack birds to pieces and create “visual pollution” (about this, I agree!). They love ethanol, then they don’t anymore because it eats up vast amounts of farmland and sparks food riots in Africa when the price of corn goes up. They like electric cars, then they don’t because they realize that half of our electricity comes from coal, and they hate coal. On and on, back and forth it goes. Meanwhile, our country’s economy is sinking like a stone. - -What people need to know is what the great conservative economist and writer Thomas Sowell taught us: in the world of economics, there are no such things as “solutions,” only tradeoffs. Every action has a consequence. Every decision has an upside and a downside. So you make smart decisions that minimize harm and maximize freedom. One of the many reasons why I’m a conservative is that I believe in the so-called Law of Unintended Consequences—the idea that, no matter how good government’s intentions, when you start social engineering or messing around with the free market, more often than not you open a Pandora’s box of negatives you didn’t see coming. - -So, in terms of energy, we need to be exploring and developing numerous approaches ... and I also include in that drilling for oil right here at home. We have oil all over the place in America. It’s incredible how much oil is right under our own land and water. But the Obama administration refuses to get tough with the environmental lobby and liberate our oil companies to drill for domestic oil. - -Yes, the BP oil spill was bad, but it was no reason to put tighter clamps on domestic drilling. That showed no leadership at all. What it showed was that the Obama administration is driven more by hysteria than facts. - -You want some facts? Here’s one that anyone who has ever studied oceanic oil supplies already knows: “Tens of millions of gallons of crude oil leak into the ocean every day. Naturally, from the sea floor,” as David Ropeik from Harvard University, hardly a right-wing institution, has written.19 I also read from the U.S. National Academy of Sciences that the ocean itself is to blame for contributing “the highest amount of oil to the marine environment.”20 So if the extreme environmental crazies have a bone to pick with anyone, perhaps it should be with Mother Earth herself. - -The real issue, of course, is that those who oppose drilling in the United States simply don’t want the drilling to occur in their own backyard.
What they ignore is the fact that the holes are going to get drilled into the planet anyway. We should drill them on our soil and create our own jobs and keep the revenue here instead of exporting it to the Middle East. Remember when Obama gave his 2008 speech at the Democratic National Convention and said that he would “invest” $150 billion in renewable energy over the next ten years and create “five million new jobs”? How did that turn out? He spent $80 billion of your money and mine and, by his own Council of Economic Advisers’ admission, “created or saved” just 225,000 jobs. Now run those numbers: that’s roughly $355,000 for each so-called “green collar” job we created or “saved,” whatever that means.21 - -Sadly, when it comes to using the energy industry to create American jobs, Obama has been a total disaster. And that’s a shame, because he’s missing a huge opportunity that could give a lot of people good quality jobs while helping get our country back on solid economic footing. Just look at how he’s mismanaged offshore oil drilling. Here at home, he’s kept in place the bans on drilling off our coasts. But he goes to Brazil, gives them $2 billion through the U.S. Export-Import Bank, and brags that he’s proud and excited to make America one of Brazil’s “best customers.” Pull it up on YouTube and watch it for yourself, if you can stomach it. It’s the most ludicrous, anemic leadership anyone could imagine. Think about it. If Obama supports offshore drilling in Brazil, and puts billions of our dollars in their hands to do it, why can’t we drill in America and create more jobs and less dependence on foreign sources of oil? - -The fact that Obama decided to tap into our nation’s Strategic Petroleum Reserve—a stockpile of 727 million barrels of emergency oil, or about thirty-four days’ worth of America’s oil usage—and used up 30 million barrels to lower summertime gas prices so he could goose his sinking approval ratings is a national disgrace. But ironically, his decision only proves what everyone knows: more domestically produced oil on the market will drive down gas prices. Period. - -So let’s drill already. And let’s do it in America. It’s not only economically smart, it’s strategic—the Middle East needs to get the message loud and clear that we’re done coming to them on bended knee. We’re waking up, getting up, and making America the powerhouse we once were. - -Take the oil, sue OPEC, and drill domestically—if we do these three big things, we’ll be on the right track to rebuild American strength, wealth, jobs, and opportunity. Will it be tough? Sure. But that’s what makes us Americans: we do hard things, and we do them well . . . if we have the right leadership. - - - - - THREE - - - TAX CHINA TO SAVE AMERICAN JOBS - - - - When it comes to China, Barack Obama practices “pretty please” diplomacy. He begs and pleads and bows—and it’s been a colossal failure. - -Get it straight: China is not our friend. They see us as the enemy. Washington better wake up fast, because China is stealing our jobs, sending a wrecking ball through our manufacturing industry, and ripping off our technology and military capabilities at Mach speed. If America doesn’t get wise soon, the damage will be irreversible. - -There is a lot that Obama and his globalist pals don’t want you to know about China’s strength.
But no one who knows the truth can sit back and ignore how dangerous this economic powerhouse will be if our so-called leaders in Washington don’t get their acts together and start standing up for American jobs and stop outsourcing them to China. It’s been predicted that by 2027, China will overtake the United States as the world’s biggest economy—much sooner if the Obama economy’s disastrous trends continue.2 That means in a handful of years, America will be engulfed by the economic tsunami that is the People’s Republic of China—my guess is by 2016 if we don’t act fast. - -This didn’t happen overnight or in a vacuum. We’ve been kicking the can down the road and ignoring the warning signs for years. Truth be told, we took a strong jobs beating from China during President George W. Bush’s term. Even before the Obama-led employment disaster we’re stuck in now, from 2001 to 2008, America lost 2.4 million jobs to China.3 - -For the past thirty years, China’s economy has grown an average of 9 to 10 percent each year. But under President Barack Obama, China has experienced unusually fast gains and America unusually fast losses. In the first quarter of 2011 alone, China’s economy grew a robust 9.7 percent. America’s first quarter growth rate? An embarrassing and humiliating 1.9 percent.4 It’s a national disgrace, and Barack Obama’s inept policies and weak response to China’s manipulation of its currency, assault on our jobs, and attack on our manufacturing base have made things worse—far worse—than they would otherwise have been. And yet, every time you turn on the television, what do you hear from Obama? Happy-talk rhetoric. It’s like that old “prosperity is right around the corner” mantra Herbert Hoover repeated when America was in the throes of the Great Depression. It’s a lot of hot air. We’ve got 14.4 million of our people out of work. We need action. - -America’s relationship with China is at a crossroads. We only have a short window of time to make the tough decisions necessary to keep our standing in the world. Roughly every seven years, the Chinese economy doubles in size. That’s a tremendous economic achievement, and it’s also why they clean our clocks year in and year out on trade. Right now, we are running a massive $300 billion trade deficit with China.5 That means every year China is making almost $300 billion off the United States. When I go on television talk shows and news programs, I say that number and people can’t even wrap their minds around a figure like that, but it’s true. Just on the trade imbalance alone, China is banking almost a trillion of our dollars every three years. And sadly, whereas American manufacturing used to rule the day, now, because China cheats with its currency, American companies can’t compete on price, despite the fact that we make a far better product. So China is now the world’s top manufacturer and exporter. By the way, they also hold more than $3 trillion of foreign reserves.6 That’s enough money for China to buy a controlling interest in every large company in the Dow Jones Industrial Average—companies like Alcoa, Caterpillar, Exxon Mobil, and Walmart—and still have billions left in the bank.7 - -One out of every six people on the planet is Chinese. Their population of 1.3 billion people outnumbers us roughly four to one. That’s a huge pool of talent from which to build businesses, staff manufacturing facilities, fill elite educational institutions, and build an enormous and lethal military.
- -The other great concern is the fact that China graduates 7 million university students every single year. So far America still remains way ahead of China on college graduation rates as a percentage of our total population, but you have to ask whether our colleges are graduating students with the skills they need to compete. I read too many stories about corporations that have to offer remedial education classes for their employees. And when you look at test scores for middle school and high schools, there’s cause for alarm. In a 2010 authoritative international study of 15-year-olds, Americans ranked twenty-fifth out of thirty-four nations in math. China’s rank? Number one.8 In fact, the Shanghai students who were studied not only were number one in math, but in reading and science as well. They just absolutely ate our lunch—and everyone else’s. Sure, maybe the study was a little skewed because they sampled kids from Shanghai where many of China’s smartest students go to school. But as even liberal TIME magazine points out, when you consider the enormous demographic changes America is undergoing, there’s educational danger on the horizon. In a generation we will be a majority-minority nation, and currently a heartbreaking 40 percent of African-American and Latino-American children don’t even graduate from high school (to say nothing of college).9 - - - - - -In China’s Crosshairs - - - -Where do you think Communist Chinese President Hu Jintao plans to direct most of China’s educational and economic edge? That’s right, the military and weapons industries. A new report from the Pentagon reveals that China is rapidly beefing up its army and navy and pouring billions of dollars into developing its first stealth fighter jet, advanced attack submarines, sophisticated air defense systems, high-tech space warfare systems, and adding to its ballistic missile stockpile.10 In response to China’s military buildup, Chairman of the Joint Chiefs of Staff Admiral Michael Mullen said this: “The Chinese have every right to develop the military they want. What I just have not been able to crack is why on some of these capabilities, whether it’s [the J-20 stealth fighter], whether it’s anti-satellite, whether it’s anti-ship, many of these capabilities seem to be focused very specifically on the United States.”11 - -What China is doing on the cyber warfare front is equally alarming. In his congressional commission testimony, Vice Chairman of the Joint Chiefs of Staff General James Cartwright said that China is heavily involved in cyber reconnaissance of American corporate and government networks. General Cartwright explained that cyber spying can isolate network weaknesses and allow the Chinese to steal valuable intelligence.12 - -So what should we do? - -China presents three big threats to the United States in its outrageous currency manipulation, its systematic attempt to destroy our manufacturing base, and its industrial espionage and cyber warfare against America. The Chinese have been running roughshod over us for years. But the Obama administration, in its incredible weakness, seems almost complicit in wanting to help the Chinese trample us. Obama claims we can’t do what’s in our interests because it might spark a “trade war”—as if we’re not in one now. And if we are in trade war, Obama’s policies amount to virtual economic treason. Still, I believe we can overcome China’s threats with a smart strategy and a strong negotiator. 
- -China’s massive manipulation of its currency is designed to boost its exports and wreck our domestic industries. When the Chinese government manipulates the yuan (China’s currency, sometimes also called the renminbi) and undervalues it, they are able to sell to other countries at a far, far lower price than a U.S. company, because our currency is valued at a more accurate market rate. That means our products are priced higher, which makes them less competitive. - -Many analysts have tried to determine the actual value of China’s currency, but it’s hard to say for sure, since valuations change all the time. There does, however, seem to be a consensus that the yuan is likely undervalued by somewhere in the neighborhood of 40 to 50 percent.13 That means the Chinese can charge up to half the price an American manufacturer would for a similar good or service. That spells job losses for American workers, and that’s exactly what’s happening right now. - -Just look at what China’s monetary manipulation did to our steel industry. As a builder of huge luxury buildings, I can tell you that the steel industry has been vital to our economic strength, and steel is an important cost in any building. According to the American Iron and Steel Institute (AISI), China’s currency undervaluation represents “the single-largest subsidy” to Chinese manufacturers, is the “key” to China’s explosive export-driven growth, and is “a major cause” of global structural imbalances that helped bring about America’s recent financial collapse. - -China’s currency manipulation and other unfair trade practices helped China’s crude steel production jump from 15 percent of world production in 2002 to a jaw-dropping 47 percent in 2008. In 2002, the United States imported just 600,000 tons of steel (3 percent of our steel imports) from China. By 2008, China had us buying 5 million tons of steel.14 And again, much of this they achieved by undervaluing the yuan. - -Economist Alan Tonelson got it right when he wrote: - -For eight long years, Washington’s China lobby—lavishly funded by multinational companies whose China facilities benefit from this 50 percent subsidy [from the undervalued yuan]—has trotted out rationalizations for inaction. The disastrous costs already incurred of following the China lobby’s advice amply justify ignoring its latest ploy.... American factories have kept closing, survivors’ profits have kept shriveling and even vanishing, job losses have kept mounting, and wages have kept sagging. Worse, U.S.-centric global economic imbalances kept mounting until they triggered the biggest American and worldwide downturn since the Great Depression.15 - - - - - Other observers, like Republican Senator Richard Shelby of Alabama, have their eyes wide open too. “There is no question that China manipulates its currency to subsidize its exports,” said Shelby. As for China buying U.S. Treasury bonds, Shelby said, “It may be time for new legislation to ensure that Treasury looks out for American workers, not Chinese creditors.”16 - -As the world’s leading economy, we get hurt most by China’s abusive trading practices—and anyone who knows anything about economics knows I’m right.
As CNN Money reported, “Most economists would agree with Trump’s logic that China is holding down the value of its currency to give its manufacturers an advantage when selling goods to the U.S.”17 - -Of course, back in 2008 during the presidential campaign, Barack Obama was more than happy to sound off on the negative effects of currency manipulation. As a candidate, he even endorsed a bill that would have changed the current law to “define currency manipulation as a subsidy subject to the imposition of countervailing duties.”18 Fast forward to 2011. Today, Obama is all nicey-nice on the subject and engaged in his usual “pretty please” diplomacy with the Chinese. Just listen to what the president is saying now about the Chinese undervaluing their currency to rip us off: “So we’ll continue to look for the value of China’s currency to be increasingly driven by the market, which will help ensure that no nation has an undue economic advantage.” - -That statement is drenched in weakness. “We’ll continue to look” for the Chinese to magically turn from their wicked ways? Is this a joke? As if by some miracle the Communist regime that’s making $300 billion off us each year is going to wake up tomorrow and decide, “You know what, we really ought to play more fairly with the Americans and stop poaching all their jobs and companies and billions of dollars.” It’s ludicrous. - -And by the way, shouldn’t our president be looking out for our economic interests instead of protecting other nations’ economic standing so that “no nation has an undue economic advantage”? Let’s get real. China’s economy is on track this year to enjoy 10.5 percent growth. The rest of the world is on pace for an average of 4.8 percent growth. America? In September 2011, the U.S. GDP growth rate was an embarrassing 1.3 percent.19 Our president should stop trying to be an economist to the world and start fighting for our economy. Instead he’s putting us farther behind. He even has the audacity to brag about our one-sided trading relationship with China. - -“We’re now exporting more than $100 billion a year to China in goods and services,” Obama said. “And as a result of deals we completed this week, we’ll be increasing U.S. exports to China by more than $45 billion and China’s investment in America by several billion dollars. Most important, these deals will support some 235,000 American jobs, and that includes a lot of manufacturing jobs.”20 - -How can the president even say this with a straight face? Yes, we’re exporting $100 billion in products to China, but the point is that they are exporting four times as much and banking $300 billion off us because they lie about their currency! But does he mention that? No. And notice how he says his negotiated $45 billion in exports to Communist China will “support” 235,000 American jobs. That means we’re not creating new jobs; we’re just “supporting” jobs not yet destroyed by Obamanomics. So if you’re lucky enough to have a manufacturing job in aviation you might get to keep it—you’ll just be building planes for Hu Jintao. - -The president needs to get serious with the Chinese and threaten serious sanctions if they won’t play by market rules. He shouldn’t be bragging about pitiful “deals” to “support” American jobs; he should be negotiating hard for real reform that would give American manufacturers a level playing field with their Chinese opponents. Then we’ll see who can really clean whose clocks and create real, new private sector jobs. - - - - - Made in the U.S.A.
- - - I’m sick of always reading about outsourcing. Why aren’t we talking about “onshoring”? We need to bring manufacturing jobs back home where they belong. Onshoring, or “repatriation,” is a way for us to take back the jobs China is stealing. We know that China’s wages are increasing. Also, China lacks certain natural resources that we have in abundance. If we exploit those two key facts, we can begin making the case to companies that they should bring their manufacturing facilities home to America. - -Some smart people are already working on this. Harry Moser, a former CEO of a U.S. manufacturing technology supplier, has started something called the Reshoring Initiative, a group that shows businesses and the government how they can make more money and build a better business through onshoring. “This trend is real,” says Moser, “and it’s more than a trickle, it’s a steady stream.”21 Moser is right. I recently read an article in NewsMax magazine about a chopstick company in Americus, Georgia, called Georgia Chopsticks. The company’s owners, David Hughes and Jae Lee, realized that southern Georgia has tons of the special kind of wood you have to use to make chopsticks. They realized they could make their chopsticks in America more cheaply than they could in China. Better still, they knew they could create more American jobs that way. So they make the chopsticks in Georgia and ship them to China! How great is that? Right now they make 4 million chopsticks a day—and they’re about to up production to 10 million a day, which will create 150 new American jobs. “I’m proud to be a part of this,” said Susan White, a Georgia Chopsticks employee. “It seems like everything you see in the United States these days is made in China, from clothes to even American flags. We’re giving back. It’s awesome.”22 - -Onshoring has huge potential. But Harry Moser says the Obama administration isn’t interested. “It’s been a challenge getting [Obama] to embrace this. All his chips are on exporting.”23 That’s why Congress needs to pass Virginia Congressman Frank Wolf’s bill, the “Bring Jobs Back to America Act” (H.R. 516), to help expand the onshoring movement and get American jobs back where they belong—here in America. Look, if we can make chopsticks in America and sell them to the Chinese, we can compete on hundreds of other fronts as well. We just have to get tough, get smart, and get a president willing to stand up for America and stick it to the Chinese. - -Right now we’re simply getting hustled by the Chinese—and most Chinese people I deal with on a business level know it and are amazed at what Obama lets the Chinese government get away with. A tough negotiator can make the Chinese back off. We’ve done it before. A great example was when the Bush administration spent two years pressuring China to increase the value of the yuan relative to the dollar.24 It worked. Between 2005 and 2008, the yuan’s value rose 21 percent.25 Since then, however, China has stopped allowing its money to appreciate, and we’re in terrible shape because of it. The point is: the Chinese are smart—they respond to economic pressure, and they know they’re not going to get any from Obama. - -Getting China to stop playing its currency charades can begin whenever we elect a president ready to take decisive action. He could start by signing into law a bill the U.S. House of Representatives approved on a 348 to 79 vote in September 2010.
It would allow our government to calculate taxes on imports based on how much the manufacturing country’s currency is undervalued. Sounds like a great idea, right? But no sooner did the bill pass the House than Obama’s Treasury Secretary Tim Geithner warned us that we had to be nice to China. “It’s important to recognize that we’re not going to have a trade war,” Geithner said. “We’re not going to have a currency war. I would say that a substantial fraction of the Chinese leadership understands it is very important to them economically to let this exchange rate move.” Then why don’t we make them do something about it, Secretary Geithner? It’s the utter weakness and failure to fight for American interests from Geithner and Obama that have left us underwriting China’s economic rise and our own economic collapse. - -Open markets are the ideal, but if one guy is cheating the whole time, how is that free trade? Just look at the classical laws of economics, derived from that great Scotsman Adam Smith. People who know very little about capitalism summarize Adam Smith’s epic book, The Wealth of Nations, as saying, in essence, that “greed is good,” as the old line from the movie Wall Street put it. Like most people, I think that line is witty and made for Hollywood, but that’s not what Adam Smith said in that book, nor is it what he really meant. That’s why most people who bash capitalism and Adam Smith never took the time to read the book he wrote before The Wealth of Nations, which laid out the moral ground rules for markets, business, and life. It was a book called The Theory of Moral Sentiments, and it’s definitely worth picking up. As Smith writes, “The man who barely abstains from violating either the person, or the estate, or the reputation of his neighbors, has surely very little positive merit.”26 - - - - - -No More Currency Manipulation - - - -It’s a plain fact: free trade requires having fair rules that apply to everyone. And if we had a president who pressed the Chinese to abide by the rules, the benefits to our economy would be enormous. The Peterson Institute for International Economics has studied the Chinese currency issue extensively and concluded that a revaluation of just 20 percent (less than half the presumed fair market rate) would create 300,000 to 700,000 American jobs over the next two to three years.27 Think about that. Right now we have a president and a Treasury secretary who shrug while China tears away hundreds of thousands of manufacturing jobs from the United States. That’s leadership? The problem is so bad and the solution so obvious that even New York Times columnist (and radical lefty “economist”) Paul Krugman has had to concede the point: “In normal times, I’d be among the first people to reject claims that China is stealing other peoples’ jobs, but right now it’s the simple truth,” writes Krugman. “Something must be done about China’s currency.” When an Obama worshipper like Paul Krugman is forced to admit there’s a problem, you know America’s in deep trouble.28 - -Some take the Obama approach and simply shrug at China’s systematic destruction of American manufacturing. They think there’s no way to revitalize that sector of our economy—and the millions of jobs that go with it. They think we can do just fine as a service-based economy. But that’s just wrong. There’s no reason to sacrifice millions of jobs and the future of important American industries to China just because our leaders won’t get tough and defend our interests. - -Here’s the solution: get tough. 
Slap a 25 percent tax on China’s products if they don’t set a real market value on their currency. End of story. You think the Chinese wouldn’t respond constructively? No businessman I know would want to turn his back on the U.S. market—and the Chinese wouldn’t either. But it would help close the outrageous trade deficit driven by China’s cheating. CNBC analyst and UC Irvine business professor Peter Navarro points out that our trade deficit is costing us roughly 1 percent of GDP growth each year, which is a loss of almost 1 million jobs annually. “That’s millions of jobs we have failed to create over the last decade,” writes Navarro. “And if we had those jobs now, we wouldn’t see continuing high unemployment numbers, padlocked houses under foreclosure and empty factories pushing up weeds.... When a mercantilist China uses unfair trade practices to wage war on our manufacturing base, the American economy is the big loser.”29 - -It’s hardly any wonder that our country’s manufacturing dominance has evaporated. We have a president who has a vendetta against businesspeople and considers them the enemy. He’s also clueless about manufacturing. And he seems to have no regard for how China is conducting massive industrial espionage against the United States. - - - - - Stop Stealing Our Technology - - - American corporations and entrepreneurs are masters of technological and business innovation, but the Chinese are equally expert at stealing our trade secrets and technology. American investors and companies can pour millions of dollars into creating and developing a new product, only to have the Chinese, through industrial espionage, steal all that information for nothing. The Chinese laugh at how weak and pathetic our government is in combating intellectual property theft. That would be bad enough, but our government also stands by and does nothing while China demands that any American company that wants to enter the Chinese market has to transfer its technology to China. Such forced technology transfers are actually banned by the World Trade Organization as an unfair trade practice, but Obama lets China get away with it.30 - -Josh Kraushaar of the National Journal has noted that Obama’s economic cluelessness has hurt him with blue collar workers. While Obama is obsessed with “green collar jobs,” blue collar workers aren’t buying it. “Clean-energy jobs may be the future, but they’re not seen by displaced workers as a panacea.”31 The reason blue collar workers dismiss Obama’s happy-talk rhetoric is that they’re smart. They know that anytime you hear this guy talk about how innovations in green technology are going to spark huge job opportunities, it’s all meaningless, because Obama lacks the spine and the guts to take on China’s wholesale thievery of U.S. technology and trade secrets. - -And it could easily get worse, threatening not only our economy but our national security. China is a major aggressor in the field of cyber espionage and cyber warfare. It has the capacity not only to steal highly classified U.S. military technology, but to unleash crippling computer viruses on our networks. About twelve years ago, I wrote a book called The America We Deserve. For somebody who has written many bestsellers, including many #1 bestsellers, it was probably my least successful book. The fact is, people didn’t want to hear from Donald Trump about politics but about business. That’s why The Art of the Deal and many of my other books were huge successes.
In fact, The Art of the Deal is said to be the biggest-selling business book of all time. Nevertheless, I was proud of The America We Deserve for a number of reasons. First, I strongly predicted terrorism in this country, something which happened, unfortunately, and which could have been avoided or minimized. I even mentioned Osama bin Laden by name. Second, I predicted the crash of the economy. There were too many signs, too many signals, too many factors that I thought made the coming crash obvious. So while it was probably my least successful book because it didn’t discuss business, I have been given great credit for the book’s powerful and accurate predictions. In this book, I’m not looking to make predictions; I’m looking to make a difference and warn about other potential threats. - -I fear that a similar but different type of long-term threat exists with China’s rapidly expanding military technology developments. According to the Pentagon, China’s military has also made “steady progress” in developing online warfare tactics.32 - -For a country like China, being able to steal our military designs represents hundreds of billions in savings on research and development costs. After all, why spend trillions building and testing complex weapons systems when you can just poach the blueprints for free with a click of a mouse? - -Just look at what’s already happening right now. In 2009, the Wall Street Journal reported that cyber-intruders successfully copied several terabytes of highly classified data on our $300 billion Joint Strike Fighter project, which would make it far easier to defeat the new fighter, the F-35 Lightning II.33 Not surprisingly, U.S. officials have concluded with a “high level of certainty” that the attack came from—you guessed it—China.34 - -We also now know that the People’s Liberation Army (PLA) has adopted a new doctrine known as Integrated Network Electronic Warfare (INEW). The Communist government’s new plan involves “training and equipping its force to use a variety of IW [Information Warfare] tools for intelligence gathering and to establish information dominance over its adversaries during a conflict.”35 Before a congressional commission, General James Cartwright testified that China is actively engaging in “cyber reconnaissance” and is penetrating the computer networks of American government agencies as well as private companies.36 For those China apologists who might claim that these cyber attacks may have been carried out by Chinese hackers operating independently of the Communist government, RAND’s extensive study proved exactly the opposite: - -A review of the scale, focus, and complexity of the overall campaign directed against the United States and, increasingly, a host of other countries around the world strongly suggest that these operations are state-sponsored or supported. The operators appear to have access to financial, personnel, and analytic resources that exceed what organized cybercriminal operations or multiple hacker groups operating independently could likely access consistently over several years. Furthermore, the categories of data stolen do not have inherent monetary value like credit card numbers or bank account information that is often the focus of cybercriminal organizations.
Highly technical defense engineering information, military related information, or government policy analysis documents are not easily monetized by cybercriminals unless they have a nation-state customer.37 - - - - - The military threat from China is gigantic—and it’s no surprise that the Communist Chinese government lies about how big its military budget is. The Chinese claim that it’s $553 billion a year, which is about one-fifth the size of our own. But regional security experts believe that China’s real military budget is much higher. One way the Chinese hide their military spending is by assigning it to other departments of government. That way their rapid military expansion can be kept secret from other nations, which, if they knew China’s true military budget, might feel alarmed enough to ramp up their own spending.38 As leaked 2009 cables revealed, Beijing’s tactic of deception follows the admonition of Deng Xiaoping, the grandfather of modern China, that China hide its capabilities while biding its time.39 - -Look, when it comes to China, America better stop messing around. China sees us as a naïve, gullible, foolish enemy. And every day Obama remains in office, they take huge strides to overtake us economically. They manipulate their currency in a way that steals a million American jobs and inflates an utterly unfair trade imbalance by $300 billion. They rip off our businesses’ trade secrets so they can save billions in research and development costs and shave years off the time it takes to get a new product to market. And to top it all off, China is leading the way in developing advanced new cyber warfare techniques to serve as a force multiplier for their already massive military, which currently stands at 2,285,000 active troops with another 800,000 reserves. But remember one thing when we go to the negotiating table with China: Japan, a much smaller country with far fewer people and soldiers, kicked China’s ass in war—not a good sign for China’s warrior-like future. - -We need a president who will sign the bipartisan legislation to force a proper valuation of China’s currency. We need a president who will slap the Chinese with a 25 percent tax on all their products entering America if they don’t stop undervaluing the yuan. We need a president who will crack down on China’s massive and blatant intellectual property theft that allows China to pirate our products (maybe if Obama didn’t view entrepreneurs and businesspeople as the enemy he’d be more aggressive about this). Most of all, we need a president who is smart and tough enough to recognize the national security threat China poses in the new frontier of cyber warfare. - -It may seem to many that I speak very badly about China and its representatives. The truth is I have great respect for the people of China. I also have great respect for the people who represent China. What I don’t respect is the way that we negotiate and deal with China. Over the years, I have done many deals and transactions with the Chinese. I have made a tremendous amount of money. I have sold apartments for $53 million, $33 million, and many for smaller sums. I built one of the largest jobs in Manhattan with Chinese partners and made a great deal of money. So I know the Chinese, and understand and respect the Chinese. - -Whenever I speak badly of what they are doing to us, I am not blaming them—I am blaming our leaders and representatives. If we could get away with it against them, I would strongly encourage us to do so.
Unfortunately, they are too smart and our leaders are not smart enough. - -I have many friends in China who cannot believe that their leaders are able to make such unbelievably favorable deals. I can understand it more easily than they can. Our leaders are, to put it succinctly, stupid. The amazing thing is, despite all of the hard rhetoric and strong words I use against China, Bloomberg Businessweek recently did an article about the thing the Chinese most want. Notable is a quote from real estate president Asher Alcobi about his Chinese clients’ preferences: “Anything that has the Trump name is good.”40 - -So, I speak badly of China, but I speak the truth. And what do the consumers in China want? They want Trump. You know what that means? That means that they respect people who tell it like it is and speak the truth, even if that truth may not be so nice towards them. In fact, it is my respect for the Chinese that leads me to tell our leaders to be careful. The Chinese will take and take and take until we have nothing left—and who can blame them if they can get away with it? - -China is our enemy. It’s time we start acting like it . . . and if we do our job correctly, China will gain a whole new respect for the United States, and we can then happily travel the highway to the future with China as our friend. - - - - - FOUR - - - IT’S YOUR MONEY–YOU SHOULD KEEP MORE OF IT - - - - - The first sixteen hours of your forty-hour workweek you work for free. Put another way, the first four and a half months of the entire year, you work for absolutely nothing—the government confiscates every last penny of your hard-earned money in the form of taxes. - -That’s terrible. The economic robbery of it all is offensive enough, but equally infuriating is the amount of freedom and time the government is stealing from you as well. Imagine having sixteen hours more each week to spend with your family, or volunteering sixteen more hours every week at your favorite charity, or spending sixteen additional hours each week working on your business or next entrepreneurial venture. Imagine your paycheck was 40 percent higher than it currently is. What could you do with 40 percent more wealth? How many jobs and opportunities for others could you create? The longer you really think about it, the madder you will get, especially when you consider the waste, fraud, and abuse the federal government traffics in as it inflicts its self-defeating policies on hard-working Americans. - -But does that stop Obama and his “progressive” pals? No. In fact, they think the real problem isn’t that your taxes are too high but that they are too low. If only those stingy wage-earners would cough up more cash, the administration reasons, benevolent government bureaucrats could redistribute it more fairly and wisely. - -Look, paying taxes is a part of life, and we need to fund the things individuals can’t do for themselves, like national defense and infrastructure, and yes, Social Security, Medicare, and Medicaid. “Render unto Caesar the things which are Caesar’s, and unto God the things that are God’s,” the Gospel of Matthew reminds us. But as one fellow Christian told me, “God only asks me to tithe 10 percent to do His good works. Obama wants far, far more.” - -Judging from their actions, progressives don’t even believe their own hype. Anyone who thinks they should pay higher taxes is free to send more money to the federal government. There’s no law that says you can’t pay additional taxes.
In 1843 the Treasury Department established a special fund that remains to this day, called the “Gifts to the United States Government Fund,” for “individuals wishing to express their patriotism to the United States.”2 Citizens can send in their checks at any time. But when the average American already spends more in taxes than they do on food, shelter, and clothing combined, it’s not hard to see why very few would do such a thing. Making mountains of money and creating millions of jobs would be a far more “patriotic” gesture than fattening an already morbidly obese federal government. - -In 2002, at the state level, an enterprising Virginia Delegate, Republican Kirkland Cox, set up a “Tax Me More Fund” to see if the people who scream loudest about wanting higher taxes would put their money where their mouths were. Over the last eight years, the fund has netted a laughable $12,887, an amount so tiny it can’t even fund the salary of a single part-time state worker.3 Bottom line: if liberals really thought giving more of their hard-earned money to government was a great idea, they would do it. But they don’t. - -No doubt you work hard for your money—I know I do—and you should be permitted to keep more of it. Anything less creates a disincentive for a strong national work ethic. President Ronald Reagan saw it the same way: - -The more government takes in taxes, the less incentive people have to work. What coal miner or assembly-line worker jumps at the offer of overtime when he knows Uncle Sam is going to take sixty percent or more of his extra pay? . . . Any system that penalizes success and accomplishment is wrong. Any system that discourages work, discourages productivity, discourages economic progress, is wrong. - - - - - If, on the other hand, you reduce tax rates and allow people to spend or save more of what they earn, they’ll be more industrious; they’ll have more incentive to work hard, and money they earn will add fuel to the great economic machine that energizes our national progress. The result: more prosperity for all—and more revenue for government.4 - - - - - As with most things, President Reagan had it right. But Reagan wasn’t the only president who understood that lower taxes yield higher revenues by unleashing economic growth and job creation. To many Democrats’ chagrin, Reagan was merely echoing the economic thoughts of President John F. Kennedy, who had already said, in 1962, “The paradoxical truth is that the tax rates are too high today and tax revenues are too low and the soundest way to raise revenues in the long run is to cut rates now.”5 - -Reagan and Kennedy’s views prove that smart tax policy shouldn’t be a partisan issue. It should be common sense. If you tax something, you get less of it. It’s as simple as that. The more you tax work, the less people are willing to work. The more you tax investments, the fewer investments you’ll get. This isn’t rocket science. - -But Obama isn’t interested in common sense. He’s too busy using class warfare rhetoric to try to make you forget the disaster of his first term and give him a second. Take, for example, his rants and temper tantrums about making evil corporate jet owners pay higher taxes. If I thought for a minute that this was the solution to our $15 trillion debt, I would endorse it. But calling for higher taxes on private jet owners is a political joke.
In fact, as the Washington Post points out, it’s such an embarrassing idea that the White House couldn’t even come up with an estimate for the amount of revenue such a proposal would generate. Still, others estimate that the amount it would take in over ten years would be $3 billion.6 That’s 0.02 percent of the national debt. Not only would a jet tax do absolutely nothing to put a dent in the debt, it would also have a negative economic impact on the workers who manufacture and maintain those aircraft. Don’t forget all of the many jobs provided by the private jet industry . . . including the much-needed manufacturing jobs. - -That’s the kind of empty, unserious rhetoric we get from this president. Obama bashes rich people and then vacations with them at Martha’s Vineyard before jetting around the country doing million-dollar campaign fundraisers with rich liberals. All this from a guy who lectured Americans about tightening their belts, eating their peas, and not vacationing in Vegas to gamble at casinos. (I can’t believe he can win the state of Nevada after statements like that, given its very high unemployment.) - - - - - Obama’s Clueless on Taxes - - - Obama needs to wake up, stop taking so many vacations (I’ve never seen anything like it), and quit messing around. He also needs to learn how the real world of business operates. Everyone knows that the worst thing to do during a recession or an economic downturn is to raise taxes on anyone. It may be an inconvenient truth for the president and his horrific economic team, but business owners are the people who create jobs. Two-thirds of all jobs created in America are created by small business owners. With unemployment soaring under this president, you would think he would want to do everything he can to get unemployment down and hiring up. But he doesn’t. Instead, he and his political advisors think trashing wealth creators and companies will score political points and somehow spare him defeat in 2012. - -He’s wrong. People are smart. They know you can’t be “for” jobs but against those who create them. It doesn’t work. All raising taxes on businesses does is force business owners to lay off employees they can no longer afford. It also drives up prices, encourages businessmen and women to move their businesses (and their jobs) to other countries that have far lower tax rates and regulatory costs, and sends people scrambling for tax shelters. The kid on the side of the street with a lemonade stand knows that, but not this guy. He’s never worked in the private sector or made a payroll. And for a president who likes to showcase how hip and tech savvy he is, Obama also appears surprisingly clueless about how easy it is now for anyone to outsource jobs to foreign workers with just the click of a mouse. In our broadband, high-speed Internet world, the old brick-and-mortar barriers of business have vanished. That means capital can now pivot instantly to dodge ever-increasing government regulations and taxes. - -Obama isn’t the only one who’s hazy-eyed about the reality of taxes in America. In fact, lots of people have bought into the liberal lies we’ve heard for decades. The first of these is the one about how the middle and lower classes pay the overwhelming majority of the tax burden, letting the rich off the tax hook. If people would just stop and think about how loony this is, they would see that the very notion defies the laws of math.
For starters, half of America doesn’t even pay a single penny in federal income taxes.7 That may shock you, but it’s true. That’s one of the reasons soaring federal spending is so dangerous: half the country shrugs its shoulders and says, “Who cares? It’s not my money they’re spending.” So the idea that the lower class is shouldering the tax burden is absurd, because the bottom half of Americans pay no federal income tax at all.

There’s more. The top 1 percent of wage-earners in America pay more in federal income taxes than the entire bottom 95 percent—combined. And the top 10 percent of income earners foot 71 percent of the federal income tax bill.8 “To put this in perspective,” says Scott Hodge at the Tax Foundation, “the top 1 percent is comprised of just 1.4 million taxpayers and they pay a larger share of the income tax burden now than the bottom 134 million taxpayers.”9 The always business-savvy Neil Cavuto from Fox News puts it this way:

It’d be like going out to dinner with friends. Your buddy at the table picks up the bill, and some knucklehead has the audacity to say, “Joe, you should have left a bigger tip.” Now, some Democrats promoting the class war say, “Good, that’s the way it should be. And yeah, Joe, you should have left a bigger tip.” But when you realize that the richest among us are paying for the bounty of the government for us. . . . We should at least, now and then, try a thank-you.10

I don’t need a thank-you note from anyone. I make lots of money and pay lots of taxes. That’s fine. But the misinformation and lies so-called “progressives” spew are ridiculous. Why demonize rich people? Who doesn’t want to get rich? How do these people think charities get funded? Who do they think creates jobs? Rich people, business people, people who work very, very hard!

But here’s the really fascinating part—the part liberals remain clueless about: if the federal government really wants to “stick it” to rich folks and confiscate more of their hard-earned money to fund their insane spending sprees on counterproductive social programs, then they should lower, not raise, tax rates. As my friend Steve Forbes explains, before President Reagan instituted the Reagan tax cuts, the richest 1 percent of Americans paid 18 percent of all federal income taxes. The top marginal rates then went from a suffocating 70 percent down to 28 percent. And what was the result? Their portion of the national tax bill actually doubled—they paid 36 percent of federal income taxes and produced 23 percent of the nation’s income.11 As President Reagan explained, “A few economists call this principle supply-side economics. I just call it common sense.”12

The reason this country is an economic disaster right now is that Barack Obama doesn’t understand how wealth is created—and how the federal government can destroy it. He also doesn’t understand just how mobile wealth is today. People now have options. Individuals and businesses can play ball anywhere in the world. For example, Ireland’s corporate tax rate is 12.5 percent. America’s? We’re the second highest in the world, just behind Japan, at a ridiculous 39 percent. That means businessmen can save up to 26.5 percent in taxes just by relocating their business abroad. And they are—in droves. In fact, the international average corporate tax rate is 26 percent.13 Even socialist economies understand that high corporate taxes are a death knell for jobs and economic growth.
High tax rates are literally transferring wealth and jobs abroad, which only reduces the revenues the federal government would have otherwise collected.

The other thing about high corporate tax rates is that, in the end, companies aren’t the ones who foot the bill; consumers do. The Tax Foundation ran the numbers and found that in 2007, the federal corporate income tax collected $370 billion. They further concluded that the average American household pays $3,190 in corporate income taxes each year.14 Again, Barack Obama doesn’t understand what Ronald Reagan understood. Here’s how President Reagan explained the corrosive influence of corporate taxes on the average American:

Some say shift the tax burden to business and industry, but business doesn’t pay taxes. Oh, don’t get the wrong idea. Business is being taxed, so much so that we’re being priced out of the world market. But business must pass its costs of operations—and that includes taxes—on to the customer in the price of the product. Only people pay taxes, all the taxes. Government just uses business in a kind of sneaky way to help collect the taxes. They’re hidden in the price; we aren’t aware of how much tax we actually pay.15

Reagan was right. If Americans understood just how many hidden government fees and taxes are absorbed into the prices of the goods and services they buy, they would be irate. Consider the fact that for every gallon of gas you put in your car, you pay 45.8 cents in state, local, and federal taxes. So if you fill up your tank and pump twenty gallons, you just blew $9.16 on taxes. Hidden fees affect everything, even recreational and leisure activities. For example, a fisherman pays 10 percent of the sales price on sport-fishing equipment in hidden taxes, and archers foot a federal tax on arrows of 45 cents per shaft and another 11 percent on quivers. If you book a seat on a domestic flight, you pay a 7.5 percent tax on your ticket. You’ll get hit with another $3.60 tax, plus an additional $2.50 security tax for each leg of your trip. If you travel abroad, there’s a $16.10 international arrival/departure tax, as well as a $4.50 fee for a “passenger-facility charge.” This is why the price you’re quoted for an airline ticket suddenly jumps when you pay the bill.16

Some people have less of a problem with so-called “sin taxes” on items government wants to discourage you from using. The federal tax on cigarettes is $1.01 a pack; on a six-pack of beer it’s 33 cents. Some people say, “Well, those aren’t good for you anyhow, so we should tax those things higher.” Similarly, heating oil, which ensures that people up north can keep their homes warm during the winter, gets taxed by most states. The point is that all these sneaky taxes are nickel-and-diming Americans to death. Worse, they mask the real costs associated with big government. If the average American were aware of just how much money government poaches from their pockets each year—an estimated 40 percent of your paycheck—there would be a tax revolt that would make the Boston Tea Party look like amateur hour.17

It’s unfair and wrong. It’s also bad economic policy. When taxes go up, what do people do? Many smart people shift their money into tax-free municipal bonds. And guess what? The government doesn’t get the money it thinks it’s going to get. If Obama knew more about economics, he’d know about something called Hauser’s Law, named after W. Kurt Hauser, a chairman emeritus at the Hoover Institution at Stanford University.
As Hauser explains, the top marginal personal tax rate over the last sixty years has swung wildly, ranging from as high as 92 percent in 1952–1953 all the way down to 28 percent in 1988–1990. Yet regardless of the tax rate, tax revenues as a percentage of GDP have stayed roughly the same, averaging just under 19 percent.18 That’s because when taxes get too painful, people simply move their money away from the federal government’s greedy hands and into tax-free havens. High tax rates don’t increase government revenues; all they do is take money out of the productive economy that creates jobs and lock it into less dynamic investments like bonds. Only a fool would advocate such a disastrous plan. But that’s precisely the path Barack Obama has pursued.

None of this should have come as a surprise to anyone who was paying attention in 2008. Remember Joe the Plumber? Then-candidate Barack Obama made his intentions crystal clear: “I believe that when you spread the wealth around, it’s good for everybody.” So we knew where this was heading all along, because it’s not government’s job to spread your money around. You spread it around yourself when you decide how you want to spend it, invest it, or donate it. Obama supports higher taxes because he believes government should decide more and you should decide less.

Based on their words and policies, Michelle and Barack Obama apparently believe that capitalism and entrepreneurship are bad. The way they see it, raising taxes is a way to punish people for having the audacity to work hard and get rich. As First Lady Michelle Obama put it in a speech in Ohio to a women’s group: “Don’t go into corporate America. You know, become teachers. Work for the community. Be social workers. Be a nurse. . . . Make that choice, as we did, to move out of the money-making industry into the helping industry.”19 Teachers and nurses are great, but to tell people that being in business is somehow illegitimate and not part of the “helping industry” is a horrible message to send, especially to young people interested in business and entrepreneurship. By her logic (if you can call it that), creating a company that creates tens of thousands of jobs and provides employees an honest way to feed their families and send their kids to college is somehow not part of the “helping industry.” But again, the Obamas telegraphed their anti-wealth message all along. As President Obama confessed, “I do think at a certain point you’ve made enough money,” as if it’s his or the government’s place to decide how hard you work and how much wealth and opportunity you create. It’s shameful and sad. It’s no wonder he’s turned America into a huge train wreck.

Time to Get Smart on Taxes

We need a tax system that is fair and smart—one that encourages growth, savings, and investment. It’s time to stop punishing hard work and entrepreneurship. Specifically, we need to do five things. First, the death tax needs to die. It’s immoral for the government to tax you after you’re dead, to seize a portion of the money and property that you spent your life building up, and on which you already paid taxes. Your children deserve your estate, not the federal government. President George W. Bush eliminated the death tax (sometimes called the estate tax) for one year. But after 2010, under Obama, it rose from the grave. Now estates above an exempted level will be taxed at rates of up to 35 percent.
“It doesn’t seem to matter that the vast majority of the money in an estate was already taxed when the money was earned,” reports the Wall Street Journal. “This ignores that much of the long-term saving and small business investment in America is motivated by the ability to pass on wealth to the next generation. . . . What all this means is that the higher the estate tax, the lower the incentive to reinvest in family businesses.”20

A study by former Congressional Budget Office director Douglas Holtz-Eakin found that moving the death tax from 0 percent to 45 percent (the rate Obama wants) is a proven jobs killer, because it will strip $1.6 trillion of small business capital out of the hands of job creators. That, says Holtz-Eakin, means a loss of 1.5 million new jobs. How can we sit back and let that happen at a time when 25 million Americans can’t find enough work to take care of their families?21 The death tax raises only a tiny 1 percent of all federal revenue.22 Plus, heirs already have to pay capital gains taxes on the assets they acquire from any estate. This president is willing to sacrifice 1.5 million jobs just for the pleasure of “sticking it to rich people.” That’s simply wrong.

Obama says that “when we think about tax reform we should be thinking about fairness. What’s fair?”23 Well, I’ll tell you what’s not fair, Mr. President: killing 1.5 million jobs and strangling economic growth just so you can feel warm and fuzzy about taking money from family businesses and spreading it around as you and your bureaucrats see fit. If we repeal the death tax, we get 1.5 million jobs, boost small business capital by more than $1.6 trillion, increase payrolls by 2.6 percent, improve the probability of businesses hiring new employees by 8.6 percent, and expand investment by 3 percent.24 It’s a no-brainer. It’s time to kill the death tax once and for all. More than a million jobs depend on it.

Second, we need to lower tax rates on capital gains and dividends—two more taxes that are proven jobs and investment killers. Naturally, President Obama wants to do the opposite. He wants to raise the capital gains tax rate from 15 percent to 20 percent.25 He also wants to jack up the dividend tax rate by the same amount. Again, in Obama’s world, it’s all about punishing success and redistributing wealth. As economist J. D. Foster pointed out, “Obama was very clear in his campaign debate with then-Senator Clinton that raising revenues was not his primary reason for suggesting the capital gains tax hike.” Even the president’s own budget numbers show that a minuscule 0.01 percentage point drop in annual economic growth—which is inevitable if Obama’s tax policies are followed—would totally wipe out the money he hopes to cream off with his capital gains tax increase. J. D. Foster concludes, “The President should set aside his ideological preferences and press Congress to maintain the current 15 percent tax rates for capital gains and dividend tax rates until the economy reaches full employment.”26 To raise these tax rates now (or ever) is shortsighted and economically foolish.

Capitalism requires capital. When government robs capital from investors, it takes away the money that creates jobs—real private sector jobs that contribute to the health of our economy. For a guy who claims that creating jobs is the first thing he thinks about when he wakes up and the last thing he thinks about before he goes to sleep, you would think he would know better. But he doesn’t.
That’s why we need a new president, one who will keep capital gains rates low.

The third thing we need to do is lower the U.S. corporate tax rate from 39 percent to zero. As I stated, America’s corporate tax rate is the second highest on the planet. The international average is 26 percent. How can we expect companies to hire American workers and locate their businesses in America when our government taxes them at exorbitant rates for doing so? That’s crazy. I want to encourage American companies to stay here and hire American workers, and I want foreign companies to relocate their businesses to the United States and create jobs here. We are the greatest country on planet earth—the world’s companies want to be here. A zero percent corporate tax would create an unprecedented jobs boom. Millions of jobs would materialize. This isn’t brain surgery. You cut the corporate tax and companies stay in America or relocate to America, and that produces jobs. Who doesn’t understand that?

The problem is that we have a president who is more concerned with pursuing some sort of bizarre ideological mission that flies in the face of America’s free-market tradition. Look, we don’t have time to play games. Our people are hurting badly. Here’s my message to Obama: America is a capitalist country. Get over it and get on with it! Unleash job creators and we will put Americans back to work in big numbers. Cut the corporate tax and create millions of new jobs while stimulating our limping economy.

Fourth, it’s time to get tough on those who outsource jobs overseas and reward companies that stay loyal to America. If an American company outsources its work, it gets hit with a 20 percent tax. Companies that made the mistake of sending their businesses overseas but have seen the light and are ready to come home and bring jobs with them pay zero tax. Bottom line: hire American workers and you win. Send jobs overseas, and you may be fine, but you will pay a tax. Also, I want foreign countries to finally start forking over cash in order to have access to our markets. So here’s the deal: any foreign country shipping goods into the United States pays a 20 percent tax. If they want a piece of the American market, they’re going to pay for it. No more free admission into the biggest show in town—and that especially includes China.

The fifth and final part of my tax plan involves reforming the income tax. The government confiscates way too much of your paycheck. The tax code is also a very, very complicated system that forces Americans to waste 6.1 billion hours a year trying to figure it out.27 Americans also waste billions hiring accountants to try to make sense of the tax code. You can hire 100 accountants to do your taxes and they’ll all come up with different numbers. What does that tell you? It tells me that it’s time we restore simplicity and sanity to the income tax. Here’s my income tax plan:

• Up to $30,000, you pay 1 percent

• From $30,000 to $100,000, you pay 5 percent

• From $100,000 to $1 million, you pay 10 percent

• On $1 million or above, you pay 15 percent

It’s clear and fair. Best of all, it can be filled out on the back of a postcard and will save Americans big bucks on accountants and massive amounts of time wasted attempting to decipher the tax code.

Our country is hungry for real tax reform. That’s why we should implement the 1-5-10-15 income tax plan. Let China, OPEC, and others pay the tax, not us. It’s about time . . . and they have all the money.

I believe the government already takes enough of your hard-earned money. Obama thinks the opposite. If we want jobs in America, we need to enact my five-part tax policy: kill the death tax, lower the tax rates on capital gains and dividends, eliminate corporate taxes in order to create more American jobs, mandate a 20 percent tax for outsourcing jobs and a 20 percent tax for importing goods, and enact the 1-5-10-15 income tax plan.

Government needs to stop pickpocketing your wallet. Every time it does, it slows growth and kills jobs. It’s also immoral. We need to get back to doing what we know works. President Reagan had it right: lower taxes produce more freedom and opportunity for all. Everyone knows that—except in Washington. It’s time we send the politicians a big message, loud and clear. As Senator Everett Dirksen once said, “When they feel the heat they’ll see the light.” It’s time we turn up the heat.

FIVE

A GOVERNMENT WE CAN AFFORD

Every day, your government takes in $6 billion in revenue and spends $10 billion. That means every day the federal government has to borrow $4 billion more than it takes in.2

To state the obvious, if any business operated the way the government does, it would go under. But in the absurd world of Washington, politicians just kick the can down the road and shrug. There’s just one problem: the can has finally hit a $15 trillion debt wall. For the first time since the founding of the Republic, we’ve lost our AAA credit rating, and now even our enemy China is having second thoughts about lending us money to bankroll Barack Obama’s endless spending spree.

Americans understand that the U.S. has a spending problem, not a revenue problem. In September 2011, Gallup asked Americans how much money they think the federal government wastes. On average, citizens put the figure at 51 cents out of every dollar. That’s probably being too kind.

We need more grown-ups in Washington, people who will shoot straight and level with the American people about our nation’s top budget busters. The biggest slices of the budgetary pie are eaten up by Social Security, Medicare, and Medicaid. Social Security makes up 20 percent of the budget ($707 billion). Medicare and federal Medicaid account for 22 percent of the budget ($724 billion). As everyone knows, health-care costs are skyrocketing, and Medicaid has massively expanded its role in the health-care system. When Medicaid was created in 1965, only one in fifty citizens used the program. Today, it’s one in six Americans.

Save Social Security and Medicaid

Social Security faces a similar problem. Soon there will be more people inside the cart than there are pulling the cart. Right now, 53 million people collect Social Security benefits that average $1,067 a month. In seventy-five years, that number will jump to 122 million, roughly one out of every four citizens.3 That’s why, with 77 million baby boomers set to retire and begin collecting benefits, these two programs—a combined 42 percent of the U.S. budget—are in danger of becoming insolvent. We can’t let that happen.

Now I know there are some Republicans who would be just fine with allowing these programs to wither and die on the vine. The way they see it, Social Security and Medicare are wasteful “entitlement programs.” But people who think this way need to rethink their position.
It’s not unreasonable for people who paid into a system for decades to expect to get their money’s worth—that’s not an “entitlement,” that’s honoring a deal. We as a society must also make an ironclad commitment to providing a safety net for those who can’t make one for themselves. At least that was President Reagan’s stance. On April 20, 1983, Reagan signed a bill to preserve Social Security. At that bill signing, the president said words every Republican should heed:

This bill demonstrates for all time our nation’s ironclad commitment to Social Security. It assures the elderly that America will always keep the promises made in troubled times a half a century ago. It assures those who are still working that they, too, have a pact with the future. From this day forward, they have our pledge that they will get their fair share of benefits when they retire.4

President Reagan had it right: Social Security is here to stay. To be sure, we must reform it, root out the fraud, make it more efficient, and ensure that the program is solvent beyond the baby boomers. But to listen to some Republicans vilify a system that’s been around for over seventy-six years and that taxpayers have paid into for decades makes me think they should go back and watch President Reagan’s speech again.

Same goes for Medicare. Again, people have lived up to their end of the bargain and paid into the program in good faith. Of course they believe they’re “entitled” to receive the benefits they paid for—they are!

The question is, how do we pay for Medicare, Medicaid, and Social Security when costs are ballooning and deficits are soaring? Here again, both sides fumble the ball badly. Democrats pretend that the answer is raising taxes. But anyone with a brain knows all that will do is kill economic growth. That’s the exact opposite of what needs to happen. Economic growth is the secret to making the entire pie grow larger. When that happens, millions of new workers will become new taxpayers and revenues will rise. As Senator Marco Rubio of Florida put it: “Let’s stop talking about new taxes and start talking about creating new taxpayers, which basically means jobs.”5 And that’s what economic growth will do.

But many Republicans also miss the mark. They pretend we can just nibble around the edges by eliminating waste, fraud, and abuse and somehow magically make these programs solvent and pay off our massive $15 trillion debt. Neither side is being totally honest.

Our country doesn’t need cowardice; it needs courage. Here’s the first part of the solution: our leaders need to get tough with the big players like China and OPEC that are ripping us off, so we can recapture hundreds of billions of dollars to pay our bills, take care of our people, and get on a path toward serious debt reduction. We must take care of our own people—we must make our country strong and rich again so that Social Security, Medicare, and Medicaid will no longer be thought of as a problem. We must save these programs through strength, power, and wealth.

As I explained earlier, China takes us for $300 billion a year, and OPEC is even worse. Washington is so busy squabbling over peanuts that it’s completely missing the mountains of money staring it in the face. Obama and Republicans spent weeks bickering over $60 billion of spending cuts in the president’s budget. Excuse me, but we have a $15 trillion debt. We need to get serious and get tough with the big rip-off artists who abuse this country regularly.
If we do that first, the remaining cuts and reforms we need to make will be substantially smaller, more manageable, and much less painful.

Stop and think about it: even just leveling the playing field with China for a decade would be the equivalent of one-fifth of our national debt (and would have been one-third of our debt had we not elected the community organizer). Add in several hundred billion a year from putting OPEC in line, hundreds of billions from negotiating properly with the many other countries that are ripping us off, and the hundreds of billions recovered by rooting out the incredible fraud that occurs every year (more on that later), and now we have a debt problem America can manage—one where we can attack waste and abuse and whittle down the remaining debt to get our fiscal house in order. So that’s the first step: bringing home the hundreds of billions of dollars that the petro thugs at OPEC and our enemy China steal from us every single year—and then going after all of the others.

Next, we need a president who realizes that your money belongs to you, not him. A real president should take pride in saving and spending your money wisely, not funneling it to his cronies and political backers in the form of so-called “stimulus.” But unfortunately, that’s not the kind of president we currently have in the Oval Office. This guy wouldn’t save the American taxpayer $100 million if it landed on his front doorstep. I should know. I tried to make a $100 million gift to the United States government, but Barack Obama wouldn’t even return my phone call.

My $100 Million Gift to the U.S. Goes Uncollected

If you want a small example of just how uninterested your government is in saving and spending your money wisely, read on. One day I was watching television and I saw that President Obama was hosting a dinner for various leaders at the White House. But every time they had one of these events, I noticed that they put up an old, broken, rotten-looking tent out on the White House grounds that they probably paid some local guy a fortune for every time they needed it. That’s no way for America to host important meetings and dinners with world leaders and dignitaries. We should project our nation’s power and beauty with a proper facility and ballroom. If there’s one thing I know how to build, it’s a grand ballroom. At my private Mar-a-Lago Club in Palm Beach, Florida, I built what many consider to be the single greatest ballroom in the world . . . but I own many beautiful and very successful ballrooms.

So I called up the White House and they put me on with President Obama’s top senior strategist, David Axelrod. We had a very nice conversation, and I told David: “I will build you, free of charge, one of the great ballrooms of the world so that the president and all future American presidents can host events at the White House in a proper manner. To do it to the highest standards, it will cost anywhere from $50 to $100 million. I will cover the expenses and give the ballroom to the U.S. government as a gift. What I will do is hire the top ten vying architects in the world—I hope they’ll be American architects, but I’ll hire the best, whoever they are. We’ll then have a review committee set up. We’ll pick the architect that everybody agrees on, because it’s a little delicate in that it’s the White House we’re talking about. And I will build the greatest ballroom there is, even better than the Mar-a-Lago ballroom, so that Americans can be proud when our presidents host world leaders on the White House grounds.”

“Wow,” Axelrod said. “That’s very interesting.” He then said he would talk it over and get back to me. No one ever called back. And that’s what’s wrong with this country. When Rush Limbaugh invited me to come on his show, I told him that story, and Rush said that they probably didn’t get back to me because I’m a lifelong Republican. Rush is probably right, but I’m sure it’s also just the way business is done in Washington: billions of dollars are squandered and people just don’t care. I really thought David would take me up on my offer, but it’s not too late. My offer still stands. If someone wants to give America—a nation that is flat broke—a nice gift, you call them back, regardless of what party they belong to. It’s just one small example of how the Obama administration isn’t fiscally wise and certainly doesn’t care about taking advantage of ways to give Americans the most for less. To the Obama administration, saving money isn’t the point—expanding government and spending more taxpayer dollars is. Sometimes they call it “investment” or “stimulus,” but a lot of it is sheer unadulterated waste.

We need a dealmaker in the White House who knows how to think innovatively and make smart deals.

As an example, in a fairly recent, well-documented Florida deal, I purchased a house in Palm Beach at a bankruptcy sale (sadly, a very rich man lost everything) for $41 million, and everybody thought I was crazy. But I knew better. It was a great parcel of land fronting the ocean—and a short time later I sold it to a Russian for approximately $100 million. Had I listened to all the geniuses, I wouldn’t have made that deal. It’s all about seeing the unseen. This is the kind of thinking we need to turn this country around—and fast.

We also need someone who can save money through common sense. When I opened Trump National Golf Club at Rancho Palos Verdes in Los Angeles, I was immediately told that I would need to build a new and costly ballroom. The current ballroom was gorgeous, but it only sat 200 people, and we were losing business because people needed a larger space for their events. Building a new ballroom would take years to get approval and permits (since it’s on the Pacific Ocean) and cost about $5 million. I took one look at the ballroom and saw immediately what needed to be done. The problem wasn’t the size of the room; it was the size of the chairs. They were huge, heavy, and unwieldy. We didn’t need a bigger ballroom, we needed smaller chairs! So I had them replaced with high-end, smaller chairs. I then had our people sell the old chairs and got more money for them than the cost of the new chairs. In the end, the ballroom went from seating 200 people to seating 320 people. Our visitors got the space they desired, and I spared everyone the hassle of years of construction and $5 million of expense. It’s amazing what you can accomplish with a little common sense.

Washington Wastes Your Money

To have a government we can afford, we need to eliminate the tremendous waste clogging the system. Almost every week a new story comes out reporting another gross example of government waste. The GAO reports that every year the federal government spends billions of dollars on dozens of wasteful, overlapping programs.
One simple fix—streamlining and consolidating 2,100 data centers—would save $200 billion over the next decade.6

Another example of federal government incompetence with your money: over the last five years, the Office of Personnel Management sent out $601 million in retirement benefits to people who are dead!7 The list of insane federal expenditures is almost endless: in 2010, $700,000 of your tax dollars went to research cow burps, $600,000 was spent on creating a wolf video game, and $250,000 was spent to research Internet romance.8 And of course, who can forget the $1,442,515 that the National Institutes of Health has allocated to be spent from 2008 to 2012 to study male prostitutes in Vietnam.9 On and on it goes, your hard-earned money blown on ridiculous junk as far as the eye can see.

Obama doesn’t respect the fact that the money he wastes belongs to us. He thinks that the wealth you create belongs to the government. That’s why he doesn’t care whether it gets wasted or mismanaged. I, on the other hand, think wasting money is offensive and foolish. That’s why I make lots of money—I manage projects tightly and put a premium on efficiency.

Case in point: the Wollman Ice Skating Rink in Central Park. My apartment in Trump Tower overlooks the skating rink, which is more than an acre in size, making it the largest man-made ice skating rink in the United States. For seven straight years, the rink was closed on account of New York City’s management fiasco. The city of New York wasted seven years and $21 million and was still unable to get the rink open—it was a political nightmare and a great embarrassment to the city.

Essentially, all this bureaucracy and wasting of taxpayers’ money really got to me, so I asked to take over the project and even put up the construction money myself. Furthermore, I said that if the project went over budget, I would personally pick up the overruns. I told the city I would have Wollman Rink finished in six months. I was wrong. I did it in four. And I only spent $1.8 million—and a big portion of that was demolishing all of the incompetent work that was done before I took over. Am I an expert in building ice skating rinks? No, I build luxury towers, hotels, clubs, etc. But I’ve never forgotten what my father used to tell me. He said, “Know everything you can about what you’re doing.” So I went out and found the best ice skating rink builder in America and then managed the details to a successful completion. To this day, it remains a case study in many of the leading business schools on private versus government projects. Better still, Wollman Rink provides thousands of children, families, and visitors to our great city a wonderful experience that brings lots of smiles and great memories. That’s what can happen when you actually work to save, not waste, money.

Crack Down on Massive Fraud

Beyond eliminating wasteful spending, we need to get tough in cracking down on the hundreds of billions of dollars we lose to massive fraud committed in government programs every year. The FBI estimates that Medicare fraud alone costs you, the taxpayer, between $70 billion and $234 billion every single year!10 Typically, this fraud involves fake billing scams.
For example, in September 2011, officials uncovered a Medicare fraud ring involving 91 individuals charged with filing $295 million in phony billings.11 In 2010, Medicare paid out more than $35 million to 118 “phantom” medical clinics that were allegedly created by criminal gangs as part of a reimbursement racket. As 60 Minutes revealed, South Florida has become “ground zero” for Medicare fraud because so many elderly people live there. It’s become so bad down there that law enforcement says Medicare crimes have now replaced cocaine as the number one criminal enterprise in South Florida.12

Now stop and do the math. If the FBI’s top estimates are correct, that’s $2,340,000,000,000 in Medicare fraud over a decade—or 16 percent of America’s entire national debt! And by the way, we haven’t even started with Obamacare yet—a trillion-dollar government boondoggle sure to unleash unbelievable corruption and criminality on the American taxpayer.

Then there’s the disability racket. Did you know that one out of every twenty people in America now claims disability? That adds up to $170 billion a year in disability checks. Between 2005 and 2009, it is estimated that $25 billion was eaten up in fraudulent Social Security Disability Insurance filings.13 Then there’s the $116 million in fraud from the Low-Income Home Energy Assistance Program.14 And the $112 million the Internal Revenue Service doled out in tax refunds to prisoners who filed fraudulent tax returns. On and on, scam after scam it goes . . . and as always, taxpayers are the ones getting stiffed.

Negotiate Smarter

A lot of Republicans I know look at all this waste, fraud, and abuse and wonder why the GOP hasn’t been better at reforming the system and getting America’s fiscal house in order. Well, the sad truth is that some Republicans in Congress are clueless when it comes to negotiation. Now I know this will ruffle some of my fellow conservatives’ feathers, but I’m going to say it anyway. I’m sure Congressman Paul Ryan is a nice guy, but I can tell you this much: he is one lousy poker player. In an effort to talk about how he would balance the budget and rein in Washington’s spending addiction, he came out with his plan to overhaul Medicare. It was an absolutely unbelievable blunder . . . I’m talking about his total lack of negotiating skills.

Congressman Ryan and the Republicans committed two fatal errors. First, anyone who knows anything about negotiation knows that you always make the other guy go first. Republicans should have waited the president out and forced him to go first in naming where cuts would come from and how he planned to get the budget under control and protect America’s credit rating. But they didn’t. Instead, Congressman Ryan committed a major mistake. He went out and put a huge target on Republicans while Obama sat back and let the GOP commit political suicide. The second mistake Ryan made was that he scared the heck out of seniors. Like it or not, the majority of seniors love Medicare. And I like it for them. When you start talking in ways that make older Americans nervous, it’s bad politics.

So what did the Democrats do? They turned Paul Ryan and his Medicare proposal into a punching bag, and Republicans lost a special congressional election in upstate New York that they should have won handily. The Democratic candidate, Kathy Hochul, bludgeoned her Republican opponent, Jane Corwin, with a Mediscare campaign of TV ads that featured an old lady in a wheelchair being shoved off a cliff.
The ad explained that grandma was being tossed over the ledge because of “Paul Ryan and his friends in Congress.” Unfair? You bet. Good politics? Absolutely. The GOP needs to learn how to get tough and out-negotiate Obama and his big-spending allies in Washington. They also need to learn the art of using the right tone and language.

That’s certainly the case when it comes to the debate surrounding how best to fix and save Social Security. Conservatives have to be smart in the way we speak. Using crazy language that terrifies seniors accomplishes nothing. It simply hands Democrats another weapon with which to demonize Republicans as heartless and stingy. Again, when someone has worked for forty years and seen the government deduct 6 percent out of each of the 480 paychecks they received over those years, it’s perfectly understandable that they would want the money they are owed. It’s only fair.

So the first thing we need to do is remind seniors that their Social Security is safe, secure, and will not be touched in any way whatsoever. Period. We have the funds to pay them the money they are due, and we will. Then, we need to look at the next seventy-five years and address the projected $5.3 trillion shortfall. The Democrats’ solution is the same solution they have for everything—tax, tax, tax. Just one problem: it doesn’t work! All that ends up happening is the government’s big spenders raid the Social Security trust funds and blow the dough on junk programs we don’t need. Bottom line: raising taxes to shore up the funding gap isn’t the way to give America a government it can afford, but making the economy strong again is.

The Solution

So what should we do? The first thing we need to realize is that, thanks to advancements in medicine and health, Americans live and work longer than they did in the days when Social Security began. In fact, since Social Security was created in 1935, Americans’ life expectancy has increased to seventy-eight, up 26 percent, whereas the retirement age to receive full benefits has gone up only 3 percent, to sixty-seven.15 Today people work well into their seventies, which is absolutely wonderful. So if we slowly increased the full retirement age to even just seventy, one-third of the $5.3 trillion shortfall would be eliminated right away. And we don’t have to do it now; we can phase it in down the road.16

The fastest way we can start saving Social Security is to get Americans back to work. More citizens earning a paycheck means more workers paying into the system. It also means that we will save on the explosion of unemployment benefits we’ve seen under Barack Obama. For example, extended unemployment benefits in just the next two years will cost American taxpayers $34 billion.17 If the goal is getting our deficits and debt under control, the quickest road to get there is to spark economic growth and let job creators do what they do best—create jobs.

The final part of restoring fiscal sanity to America is the most obvious, and that’s to control Obama-style runaway spending. It’s hard for most folks to wrap their minds around just how out-of-step and radical this president truly is when it comes to spending. Here’s how the Wall Street Journal tried to paint the picture:

As for the deficit, CBO [the Congressional Budget Office] shows that over the first three years of the Obama Presidency, 2009–2011, the federal government will borrow an estimated $3.7 trillion. That is more than the entire accumulated national debt for the first 225 years of U.S. history. By 2019, the interest payments on this debt will be larger than the budget for education, roads and all other nondefense discretionary spending.18

The economic idiocy of this presidency has been truly astounding. And that’s why America desperately needs a president who understands and appreciates the businesses and entrepreneurs that create opportunity and jobs. But Obama spits in the face of job creators every chance he gets. Just look at the absurd tactics the Obama administration unleashed on Gibson Guitars. They raided the guitar company’s factories to see if they were using certain types of wood that Obama doesn’t want them to use. Is this seriously how we want America to operate? Allowing the federal government to treat businesses like drug dealers because someone may have filled an order improperly is ridiculous. It’s also a terrible misuse of limited resources. The fact that it took this guy only three years to blow a hole in the national debt that’s equivalent to the debt accrued in 225 years of American history shows just how radical and outside the mainstream Barack Obama is.

That said, let me be clear: I was very, very critical of President George W. Bush. I thought he betrayed his principles of fiscal conservatism by spending excessively. Furthermore, I thought that his mismanagement of Hurricane Katrina was horrible, and I questioned his judgment in launching the war in Iraq, which cost us trillions of dollars and, worse, thousands of lives. But President Bush’s spending excesses were nothing compared to Obama’s. In just three years, Obama has exploded our debt so that we have to borrow $4 billion every day. By comparison, under President George W. Bush, over all eight years in office, that figure was $1.6 billion a day.19 Not great, but a lot better.

Of course, anyone who was paying attention in 2008 should have known that Obama wasn’t interested in debt and deficit reduction. But the fact that he completely ignored his own debt commission’s findings in the Bowles-Simpson Report proves that this president has no shame and no intention of slowing down his spending spree. Every American, regardless of party, needs to think long and hard about what another four years of Barack Obama would mean for the national debt and the solvency of Social Security, Medicare, and Medicaid. If he had no shame in adding more to the national debt in three years than almost all other United States presidents combined, can you imagine the kind of damage he would do if given another four years without the worry of reelection? It’s a horrifying thought for anyone who loves our country and wants to see her survive and thrive again.

Look, here’s the deal: Barack Obama has been a total disaster. He has spent this country into the ground and destroyed jobs and economic growth. If something isn’t done soon, programs Americans depend on, like Medicare, Medicaid, and Social Security, are going to go up in flames. It doesn’t have to be this way. We can return America to her former greatness if we get tough and act smart.

It starts with China and OPEC. The hundreds of billions of dollars they steal from us each year must end right away. We need a president with a titanium spine who will stand up to these shakedown artists and demand that they get their greedy hands out of our pockets, effective immediately. That one action alone will result in a windfall of hundreds of billions of dollars to help us pay down our debt and meet our commitments.
Next, we enforce a zero-tolerance policy for the kind of brainless government waste that we’ve all become far too accustomed to from Washington. That means we streamline our systems and end the waste. Third, we go after the criminals and con artists who are defrauding taxpayers of up to $234 billion every year in Medicare fraud and billions more in other kinds of fraud, such as the disability racket. Sitting back while these crooks steal from hard-working people and rob deserving Americans of the benefits they paid for is vile. We must prosecute these thugs to the fullest extent of the law and recoup the hundreds of billions they take from us year in and year out. Fourth, we must save Social Security through economic success. Fifth, we need to put Americans back to work and kick the community organizer out of office so we can instill some fiscal sanity in Washington.

We do those five things and we will pass along to our kids and grandkids not only a government they can afford, but also one they can be proud of.

SIX

STRENGTHEN AMERICAN MUSCLE

Your civil liberties mean nothing if you’re dead. That’s why the single most important function of the federal government is national defense.

Our Founding Fathers got it. They understood that nothing good in life—religious freedom, economic freedom, freedom of speech—can be enjoyed if people fear for their physical safety. But unfortunately, we live in a dangerous world that’s getting more dangerous by the day. China is in the midst of a massive military buildup and the creation of cyber-warfare weapons capable of bringing America to its knees. Russia is rising. Iran, which funds terrorists all over the world, is inching closer to the creation of an operational nuclear weapon. Pakistan has been exposed as the nation that harbored Osama bin Laden next to its equivalent of West Point, and its intelligence agency is assisting the Haqqani Network, a terrorist group more dangerous than al Qaeda. Afghanistan is still a mess and a terrorist hotbed. Syria is on the verge of civil war, and Libya is already engaged in one. And of course, there are always the certifiably insane dictators of Venezuela, Cuba, and North Korea.

In short, national security threats are everywhere and growing. That’s why I have so much admiration and respect for the 2.4 million men and women of our Armed Forces. Every single day, our soldiers, sailors, airmen, and Marines wake up, put on a uniform, and honor their solemn pledge to defend America against our enemies. They know their lives are on the line, but they love America so much they’re willing to die for her. That’s a level of commitment most civilians will never experience—most of us don’t have jobs that require a willingness to die for our fellow citizens. In fact, I believe we owe our veterans more than we could ever repay them. That’s why I was honored to play a major role in the New York Vietnam Veterans Memorial Commission to honor our warriors with a proper memorial and help them land jobs. I put up over a million dollars to see to it that the effort was a success. I was so moved and proud to be associated with the project, because our heroes deserve the very best.

America deserves a commander in chief who respects the challenges and realities our Armed Forces face in our dangerous world. Specifically, our military deserves the best equipment, the best training, and the best weapons. They also deserve to be paid well for the dangerous and heroic work they do. They more than earn it.
If history teaches us anything, it’s that strong nations require strong leaders with clearly defined national security principles. Realities change at warp speed; international events can turn on a dime. The 9-11 terrorist attacks, the wars in Iraq, Afghanistan, and Libya, the Arab Spring—all these happened in the blink of an eye. A president can’t always predict where the next national security “fire” will erupt, but he can and must have a steady and reliable compass to guide his decisions. Citizens need to know the values and principles their president will rely on to lead America through whatever unknown threats lie over the horizon. I believe that any credible American foreign policy doctrine should be defined by at least seven core principles:

1. American interests come first. Always. No apologies.

2. Maximum firepower and military preparedness.

3. Only go to war to win.

4. Stay loyal to your friends and suspicious of your enemies.

5. Keep the technological sword razor sharp.

6. See the unseen. Prepare for threats before they materialize.

7. Respect and support our present and past warriors.

Sadly, President Obama has undermined each of these core principles. First, no sooner had he been sworn into office than he went on an apology tour to the Arab world. Did you know that the very first interview Obama gave as president was with the Arabic news channel Al Arabiya?2 I’ve got news for President Obama: America is not what’s wrong with the world. I don’t believe we need to apologize for being hated by Islamic radical terrorists who hate our religion, hate our freedom, and hate that we extend human rights to women. Second, even as Obama’s blown trillions of our tax dollars on his “stimulus” schemes, he’s proposed cutting $400 billion from our defense budget. Third, by announcing the time and date for withdrawal from Afghanistan and not clearly defining our objectives in Libya’s civil war, Obama has completely blown it, making it virtually impossible for us to define what victory is and achieve it. Fourth, the president sold out our dear friend and ally Israel. He’s also thrown other allies, like Poland and the Czech Republic, under the bus by bowing to Russian demands that we not build missile defenses to protect our friends. Fifth, by slashing military budgets Obama has threatened our ability to keep our technological edge in weapons systems. Sixth, Obama was caught flat-footed by China’s development of the J-20 fighter jet, something his administration didn’t think would happen for years to come. And finally, by raiding the defense budget to pay for his failed social programs, Obama continues to weaken our ability to honor our present and past warriors.

When our military and intelligence officers located Osama bin Laden, right smack in the middle of Pakistan, they went to the president to inform him and asked whether he should be taken out by a missile or in a raid (either solution being okay). The only other option would have been to let him be. Well, Obama had a decision to make. We have bin Laden—do we leave him alone? I can’t believe that anybody sitting in the Oval Office would have said, “Let’s do nothing.” So he really had only one choice to make: kill him with a missile or kill him in a raid. He made the call (either option would have been okay), and Osama bin Laden is dead.

It’s wonderful that we got him, but what sane person would have decided otherwise? Why does Obama get so much credit?
I know that’s not politically correct to say, but if somebody can explain that to me, I would be very grateful. Our military deserves all the credit, not Obama.

Obama’s violations of these seven principles are bad enough, but they are much worse when you consider the epic foreign policy failures he has committed in his first three years in office. Most Americans have been so focused on all of Barack Obama’s economic failures and the disastrous effects of the Obama economy that they haven’t had the time to pay close attention to how much he’s screwed up America’s national security. But a closer look uncovers some alarming realities.

A commander in chief has to possess the right instincts. That’s one of the biggest problems with Obama: his national security instincts are almost always wrong. On the campaign trail in 2008, Obama promised he would shut down the terrorist detention facility at Guantanamo Bay, Cuba. Then he got elected president, met the grown-ups in the military and intelligence worlds, and was forced to come to grips with the reality that Guantanamo serves a purpose, just as President George W. Bush and Vice President Dick Cheney maintained all along.

Then there was Obama’s foolish instinct to treat terrorists as criminals (instead of the enemy combatants they are), giving them civilian trials rather than military tribunals. As everyone knows, civilian trials don’t give prosecutors the latitude they need to put away dangerous terrorists and keep the country safe. But Obama and his attorney general, Eric Holder, thought otherwise. That is, until reality smacked them in the face again. Case in point was the painful education Obama got when Ahmed Ghailani was acquitted of more than 224 counts of murder in a civilian court for his part in the U.S. embassy bombings in Africa. “It was a near disaster,” said Texas Republican Congressman Lamar Smith. “If Ghailani had been acquitted of just one more count, he would have been considered innocent of these heinous crimes.”3

The blunder was reminiscent of Obama and Holder’s asinine foot-dragging on whether to hold the trial of 9-11 mastermind Khalid Sheikh Mohammed in New York City, of all places. Why Obama and Eric Holder would want to give one of America’s biggest enemies a public relations platform and the biggest media megaphone in the world at the site of the Twin Towers terrorist attacks is beyond comprehension. But after a year of bumbling and tons of international humiliation, Obama and Holder finally decided to do what every clear-thinking American wanted to do in the first place, which was to try Khalid Sheikh Mohammed at Guantanamo.

Then there is Obama’s recent decision to gut the U.S. military by cutting $400 billion from our defense budget, a figure more than double what then-Secretary of Defense Robert Gates identified as prudent. Now here’s Obama, a guy who never met a spending bill he doesn’t love and who has blown through more deficit spending than all presidents in 225 years combined. But when it comes to funding our troops and giving them the equipment, training, and support they need, Obama is MIA. As former Defense Secretary Gates said when he heard about his boss’s brainless decision, such a move would degrade “force structure and military capability.”4

Here’s the deal: when your secretary of defense tells you that your proposed cuts will erode America’s military capability, you pay attention. But not Obama. He thinks he knows how to run the military better than the guys in the fight.
He’s wrong. The reason conservatives support a strong and well-funded military is that they know all freedoms flow from national security. That’s why we need a new president. It’s also why we need to get tough in foreign policy to deal with the threats and challenges America faces from rival and enemy nations.

CHINA

Even as Obama is busy degrading our military might by slashing the defense budget by a crippling $400 billion, the Communist Chinese are laughing their heads off and using the billions they make off us each year to jack up their military spending by 13 percent—every year for the last twenty years!5

Of course, because China’s leadership is sneaky and underhanded, they significantly underreport their actual defense budget and technological advancement. It’s actually part of their culture. As I mentioned earlier, they follow the words of Premier Deng, who said China must “hide our capacities and bide our time.” So they lie about their military spending and downplay their military sophistication every chance they get. For example, China claims its defense budget is just $78.6 billion a year. The Pentagon, however, believes the real number is over $150 billion. And when you factor in the purchasing power parity exchange rate, the real Chinese military budget is closer to $300 billion (the second largest in the world)—an amount identical to the amount they rip us off for every single year.6

China is also a master at head-faking us when it comes to their weapons development. After the head of the People’s Liberation Army (PLA) delegation, General Chen Bingde, visited America’s National Defense University, he said, “To be honest, I feel very sad after visiting, because I think I feel, and I know how poor our equipments are and how underdeveloped we remain.”7 Only a fool would fall for such garbage. As the Wall Street Journal has reported, “Beijing has the most ambitious missile program in the world—including an anti-ship ballistic missile that threatens U.S. aircraft carriers.”8 We also know that China is busy building a fleet of nuclear submarines so large that it will soon overtake ours in size, is planning to build numerous aircraft carriers, and has significantly ramped up its cyber-warfare program and anti-satellite weapons. “If the United States can light a fire in China’s backyard,” said Colonel Dai Xu of the PLA, “we can also light a fire in their backyard.”9

Then, in 2011, just one week before Chinese President Hu Jintao visited America, the PLA successfully tested its new stealth fighter jet, the J-20, an advanced medium bomber that the Obama administration thought the Chinese were still years away from flying.10 As one defense expert put it, “It was a middle-finger welcome salute to Defense Secretary Robert Gates,”11 who was then in China on an official visit. And what did Obama do? Not wanting to mess up his chance to bow down to yet another foreign leader, the president did what he always does when our enemies take a swipe at us—nothing. Instead, he let Hu Jintao waltz into our country the very next week, make a total joke out of us, and showcase Barack Obama’s weakness. Worse, Obama groveled at the feet of the Communist he depends on to loan him the money to fund his disastrous spending programs. As Hillary Clinton put it privately, “How do you deal toughly with your banker?”12

Here’s my answer: you wake up and realize that money is itself a weapon. Hu Jintao gets that. Most Americans get that.
But the clueless bunch in the White House seems not to understand that, or maybe they just don’t care. Either way, the Communist Chinese know that collecting our debt allows them to hold us hostage with the threat that they will dump our debt and send interest rates skyrocketing. That’s also why China is snatching up minerals, oil, and food in Africa, South America, and the Middle East.13 When you combine this economic “weaponry” with China’s aggressive military buildup, it’s crystal clear that America should be strengthening our military muscle, not weakening it. Specifically, defense experts believe that meeting China’s military challenge will require that we deploy more submarines and more 5th generation aircraft like the F-22 Raptor and F-35 Lightning, bolster our anti-submarine and anti-mining capabilities, add missile and cruise-missile defense systems, beef up our cyber-warfare technologies, sharpen our reconnaissance platforms, and add longer-range precision-strike platforms.14 Will Barack Obama do those things? Fat chance. We need a president who will. - - - - - -RUSSIA - - - -Obama’s popularity in America may be at rock-bottom levels, but I know one place his ratings are likely sky-high: the Kremlin. Russia’s leaders can hardly believe their luck. Never in a million years did they think America would elect a guy as ineffective as this. Obama’s pretty-please diplomacy and endless American apology tours have served Russian interests extremely well. Russian Prime Minister Vladimir Putin, of whom I often speak highly for his intelligence and no-nonsense way, is a former KGB officer. No sooner did Obama move into 1600 Pennsylvania Avenue than he began making concessions and sacrificing American power on the altar of “improving relations” with Russia. - -According to Barack Obama’s favorite newspaper, the New York Times, within weeks of being sworn in as president of the United States, Obama sent a top U.S. official to Moscow to hand-deliver a secret letter to Russia’s then-President Dmitry Medvedev. According to the Times, the secret letter said that Obama “would back off deploying a new missile defense system in Eastern Europe if Moscow would help stop Iran from developing long-range weapons.” It’s so outrageous I hardly believed it until I read it myself. Obama had barely moved his stuff into the White House residence and already the guy was just itching to start degrading America’s power and undermining our allies. - -Not surprisingly, Putin was ecstatic: “The latest decision by President Obama . . . has positive implications,” said Putin. “I very much hope that this very right and brave decision will be followed by others.”15 - -But it gets even worse. Incredibly, the Obama administration made the decision to throw our friends Poland and the Czech Republic under the bus and leave them naked to missile attacks “despite having no public guarantees” that Moscow would help crack down on Iran’s missile programs.16 Many in the intelligence world were baffled by Obama’s reckless and foolish move. U.S. senators piped up too. “This is going to be seen as a capitulation to the Russians, who had no real basis to object to what we were doing,” warned Republican Senator Lindsey Graham of South Carolina. “And at the end of the day you empowered the Russians, you made Iran happy and you made the people in Eastern Europe wonder who we are as Americans.”17 What was Barack Obama’s response?
“If the byproduct of it is that the Russians feel a little less paranoid and are now willing to work more effectively with us to deal with threats like ballistic missiles from Iran or nuclear development in Iran, you know, then that’s a bonus.” - -The results of Obama’s pandering to the Russians have been a total disaster. In 2010, the Russians outsmarted Obama by promising to play nice and not sell Iran anti-aircraft missiles. The administration proudly hailed the announcement as a big success and praised Medvedev for having “shown leadership in holding Iran accountable for its actions, from start to finish.” Then, even as Obama was busy cheerleading the Russians’ actions, the Los Angeles Times reported that “Russian diplomats were quietly recruiting other countries . . . to undercut tougher penalties imposed on the Islamic Republic.”18 It was an incredible coup for Russia: they got Obama to give up missile defense for absolutely nothing in return and stuck it to America by secretly convincing other nations to back Iran. - -Putin has big plans for Russia. He wants to edge out its neighbors so that Russia can dominate oil supplies to all of Europe.19 Putin has also announced his grand vision: the creation of a “Eurasian Union” made up of former Soviet nations that can dominate the region. I respect Putin and the Russians but cannot believe our leader allows them to get away with so much—I am sure that Vladimir Putin is even more surprised than I am. Hats off to the Russians. - - - - - -IRAN - - - -Obama’s plan to have Russia stand up to Iran was a horrible failure that turned America into a laughingstock. Unfortunately, our current foreign policy toward Iran has been just as embarrassing and disastrous. - -First, there was the epic and inexplicable failure of Obama to speak out strongly for freedom during Iran’s so-called “Green Revolution.” As the world watched, Iranian college kids and dissidents took to the streets to peacefully protest for democratic reforms and human rights, only to be violently suppressed by the regime’s thugs. What did Obama do? As incredible and outrageous as it might seem, he sat silent. We’re talking about an Iranian regime led by Mahmoud Ahmadinejad, a guy who has declared Iran’s desire to see one of our greatest allies, Israel, “wiped off the map.” But did Obama stand up for the voices of freedom and against the anti-Israel forces of Iran’s Islamic Revolutionary Guard? Not a chance. Had Obama stepped out to help the protesters early, the regime could have easily been overthrown and we would not have our biggest problem today. When it comes to defending human rights in the Islamic world, Obama shies away because he thinks America should be apologizing to Muslim countries rather than speaking out. It’s a disgrace. - -The greatest outrage, however, has been Obama’s unwillingness to stand strong in the face of Ahmadinejad’s nuclear weapon ambitions. Iran is the most sanctioned member of the United Nations. Since 2006, Iran has been the focus of five Security Council resolutions demanding that it stop its uranium enrichment.20 And yet, knowing all this, Obama continues to concoct his kindergarten-style “solutions” for dealing with the Iranian threat. For example, even as the adults in the intelligence world are wracking their brains about how to stop Iran from developing an operational nuclear weapon, Barack Obama proposed something so childish I’m almost embarrassed to write about it. Obama wanted to create a telephone hotline between America and Iran. 
I kid you not. Obama’s solution to thwarting a nuclear Iran is to set up a little telephone line that our military can use to talk nicely with the Iranian terrorist regime that threatens to destroy America. - -As pathetic and ridiculous as that is, here’s the most humiliating part: Iran laughed at him and rejected the plan outright. Worse, once they heard Obama’s proposal and realized what a joke the guy is, they were emboldened to get tough. “In addition to rejecting the hot line,” reported the Wall Street Journal, “Iranian military officers have threatened to deploy Iranian naval forces in the Western Hemisphere, including potentially the Gulf of Mexico”21 (emphasis mine). - -How did the White House respond? Obama sent his press secretary out with this message of strength: “We don’t take these statements seriously, given that they do not reflect at all Iran’s naval capabilities.”22 How reassuring! - -The point isn’t that Iran’s navy is incapable of anchoring its ships off the coast of Florida. The point is that Iran’s government has so little fear, so little respect for America’s leadership, that it feels free to make the threat. The Iranians know our president will sit back and do nothing, just like he did during Iran’s Green Revolution. They know Obama’s instincts are to apologize, grovel, and retreat. As the Wall Street Journal pointed out, “Tehran appears to be taking a more aggressive posture in the Persian Gulf, in part as a response to the scheduled drawdown of American forces in Iraq and Afghanistan.”23 In other words, because Obama made the horrible decision to announce a date of withdrawal, Iran now feels emboldened to throw its weight around. By the way, in 2011, U.S. defense officials reported that there have been several “near-misses” involving Islamic Revolutionary Guards Corps (IRGC) speedboats challenging U.S. and allied warships.24 Way to go, Mr. President. - -America’s primary goal with Iran must be to destroy its nuclear ambitions. Let me put this as plainly as I know how: Iran’s nuclear program must be stopped—by any and all means necessary. Period. We cannot allow this radical regime to acquire a nuclear weapon that they will either use or hand off to terrorists. Better now than later! - -At the end of his second term, President George W. Bush authorized a covert program to “undermine the electrical and computer systems” at Natanz, Iran’s uranium enrichment facility.25 What came out of that initiative was the most advanced cyber-weapon the world had ever seen. With technical support from Israel, as well as technology from other allies, the Stuxnet cyber worm was unleashed against Iran’s nuclear centrifuges and made them spin so fast they destroyed themselves. The operation was very successful and destroyed roughly one-fifth of Iran’s centrifuges. No one knows for sure how many months or years we put back on Iran’s nuclear clock. Some analysts say six months, others one or two years. But that’s the point: the clock is still ticking. - -Many experts believe the only way to eliminate the Iranian nuclear threat is to bomb their facilities. Israel has used airstrikes to knock out nuclear facilities twice: once in 1981 on an Iraqi nuclear site, and again in 2007 to destroy a nuclear bomb plant in Syria. It’s clear that Iran is preparing itself for this possibility. In September 2011, Iran moved its most important nuclear fuel production to a “heavily defended underground military facility” to guard its supplies from a possible air or cyber-attack.
The White House spokesman for the National Security Council said the move was a direct violation of the UN security requirements and was “another provocative act.”26 But, as usual, Obama will do nothing. He’s too busy trying to get reelected, going to fundraisers, and vacationing. - -Worse, we know Obama’s instincts on Iran are horrible. On May 18, 2008, during a campaign speech, then-candidate Obama made this breathtakingly ignorant statement: “I mean, think about it. Iran, Cuba, Venezuela—these countries are tiny, compared to the Soviet Union. They don’t pose a serious threat to us the way the Soviet Union posed a threat to us. . . . You know, Iran, they spend one-one hundredth of what we spend on the military. If Iran ever tried to pose a serious threat to us, they wouldn’t stand a chance. And we should use that position of strength that we have, to be bold enough to go ahead and listen.” Then, after his advisors told him what a moronic statement he’d made, Obama went out two days later and reversed his stance: “Iran is a grave threat. It has an illicit nuclear program, it supports terrorism across the region and militias in Iraq, it threatens Israel’s existence, it denies the Holocaust.”27 Once again, the guy’s initial instincts are always wrong. And in this case, they endangered America and our ally Israel. - -Obviously we must listen to our intelligence experts to decide the best way to thwart Iran’s nuclear ambitions. But here’s the reality: because the clock is ticking down, the next president America elects will in all likelihood be the president who either stops Iran from obtaining a nuclear weapon or who sits back and lets it happen. Given Obama’s track record of weakness, that’s not a risk America can afford to take. - - - - - -PAKISTAN - - - -When our tremendous Navy SEALs took out Osama bin Laden, they didn’t find him in some obscure hole in the ground or in a remote mountainside cave. No, they found him in Pakistan right next door to one of Pakistan’s most prestigious military academies. What does that tell you? It tells me that Pakistan knew where Osama was all along. - -Get it straight: Pakistan is not our friend. We’ve given them billions and billions of dollars, and what did we get? Betrayal and disrespect—and much worse. When one of our helicopters was downed during the Osama bin Laden raid, Pakistan handed it over to China so that Chinese engineers could study it and steal the technology we spent billions of dollars developing. The Pakistanis think we’re a bunch of dopes. They don’t respect us and they never will as long as Obama is our commander in chief. And it’s much, much worse than just disrespect. In May 2011, Pakistan actually fired on American Apache helicopter crews. As one military official stated, “We’re not allowed to return fire to coordinates inside the Pakistan border. We know it’s the Pakistani military in many cases. Pakistan has been instigating.”28 - -The fact that our rules of engagement (ROE) don’t allow our military to defend themselves and return fire is absolute lunacy. We need to remove the handcuffs and get tough. You shoot at our troops, our troops shoot at you. End of story. - -But there’s an even graver threat emerging out of Pakistan. I’m talking about the rise of the so-called Haqqani Network, a terrorist network estimated to be 15,000 fighters strong. The Haqqani Network is closely allied with al Qaeda. The Haqqanis originated in Afghanistan but have now holed up in Pakistan. They are considered bigger and better funded than al Qaeda.
Here’s the worst part: Pakistan’s Inter-Services Intelligence (ISI) is helping the Haqqanis. Former chairman of the Joint Chiefs of Staff Admiral Michael Mullen has worked more closely with Pakistan than most. He says that the Haqqani Network has become “a strategic arm” of Pakistan’s intelligence agency and is responsible for the attacks on the U.S. embassy in Kabul, the Inter-Continental Hotel in Kabul, and the truck bomb attack that injured seventy-seven U.S. soldiers.29 - -And get this: according to intelligence experts, “Pakistan is preparing to replace the billions of dollars of critical military aid it has been receiving from the U.S. by courting China and soliciting help from Islamic ally Saudi Arabia.”30 - -When are we going to wake up and realize that we are funding our enemies? And when are we going to let our troops hit back? Right now we ban our forces from using Predator drones inside the city of Miram Shah, where the Haqqanis are headquartered. The reason? Obama didn’t want to “offend” the Pakistanis. That’s absurd—they’re killing our soldiers! We need to get tough, give our troops permission to return fire, and tell Pakistan that we will sever all economic activity with them until they cut ties with the Haqqani Network. If the Pakistani intelligence services work with terrorists, we should declare their military a terrorist organization.31 - - - - - -LIBYA - - - -Obama ran for president on a platform that he wouldn’t start any more “illegal wars.” Guess what? He started an “illegal war.” He never went before Congress to ask for a declaration of war with Libya. Instead, Obama launched one by himself and thrust America into a bloody civil war. Isn’t that what Obama bashed George W. Bush for doing, even though Bush got rid of Saddam Hussein? - -Now Qaddafi is dead and gone. So what? We have spent more than $1 billion on the Libya operation. And what are we getting in return? A huge bill, that’s what. It’s incredible how foolish the Obama administration is. Libya has enormous oil reserves. When the so-called “rebels” came to NATO (which is really the U.S.) and asked for help to defeat Qaddafi, we should have said, “Sure, we don’t like the guy either. We will help you take out Qaddafi. But in exchange, you give us 50 percent of your oil for the next twenty-five years to pay for our military support and to say thank you for the United States doing what you could never have done on your own.” The “rebels” would have jumped at the offer and said yes. After all, they didn’t stand a chance—they were being routed—it was over. But did we do that? No. Our leaders are too brainless to negotiate a deal like that. - -Imagine the amount of oil we could have secured for America. Think about how much economic relief we would have secured for our people and our businesses. A deal like that would have been so easy to broker. But our diplomats are pansies. They don’t want to “offend” anyone. Guess what? The American people are offended! Our policy should be: no oil, no military support. No exceptions. - -Even with Qaddafi gone now, unfortunately, the price we will pay for our stupid Libyan policy may end up being far higher and more dire than the billion dollars we’ve already blown there. In September 2011, up to 20,000 shoulder-fired anti-aircraft missiles went missing in Libya.
According to the left-leaning group Human Rights Watch, the reason this happened was that Barack Obama refused to provide proper protection to guard the weapons stockpiles.32 When weapons went missing in Iraq, the liberal media made a massive story out of it and used the issue to try and defeat George W. Bush. But now, on Obama’s watch, 20,000 shoulder-fired missiles—the kind that can take down a commercial jetliner—are nowhere to be found, and the mainstream media yawns. - -There’s no telling how much money those missiles will be sold for on the black market. But there’s one thing you can bet your bottom dollar on: every terrorist organization will be standing in line to buy them. We know that al Qaeda is already in Libya. Former White House counterterrorism advisor Richard Clarke says that the probability of al Qaeda successfully smuggling the missiles out of Libya is “pretty high.”33 When the story surfaced, as usual, the White House shrugged its shoulders. “We have ... worked closely with the [Libyan rebel leaders] as well as NATO in investigating and dealing with the issue of conventional weapons in Libya,” said Press Secretary Jay Carney. “We are exploring every option to expand our support.”34 - -Nice! - -Now here’s the worst of it: guess who “discreetly” provided the Libyan rebels with “humanitarian aid” before the fall of Libya’s capital, Tripoli? That’s right: Iran. When the rebels seized the capital, Iran “congratulated the Muslim people of Libya.”35 - -Like everyone else, I’m glad Qaddafi is gone. But if we had been smart and negotiated shrewdly, we would have taken 50 percent of Libya’s oil for twenty-five years before we spent mountains of American money. Once again, Obama has proven to be a horrible negotiator and an expert at missing huge opportunities for America. And guess who gets much of that oil from Libya—that’s right, it’s China, not the U.S. - - - - - -Americans have been too busy fighting the ravages of the Obama economy to notice what a colossal disaster the community organizer has been as our commander in chief. The damage Obama has done to our military and to our standing in the world can only be repaired by electing a new president, one who respects our men and women in uniform and pursues a national security doctrine that puts America first. - - - - - -SEVEN - - - -A SAFETY NET, NOT A HAMMOCK - - - - -In 1964, President Lyndon Baines Johnson declared “War on Poverty.” Guess what? Poverty won. Big time. - -Since Johnson launched his mythical quest for a government-run utopia, welfare spending has skyrocketed to 13 times the amount spent in 1964 (in inflation-adjusted dollars). Back then, welfare spending accounted for 1.2 percent of GDP. Today, it’s almost 6 percent.1 That means taxpayers have paid—are you ready for this?—a jaw-dropping $16 trillion on public-assistance programs.2 That’s a totally outrageous sum—until you realize what Obama wants to spend over the next decade. - -In 2011, Obama jacked welfare spending up 42 percent over 2008 levels. This huge increase means America is paying $953 billion a year on welfare.3 America is flat broke. We cannot afford to spend $10 trillion over the next decade on dependency-inducing welfare schemes that have created an underclass, demoralized it, and drained taxpayers who are paying for programs that not only make poverty worse but that are notoriously rife with fraud and abuse. - -You want an example?
In 2010, the Los Angeles Times reported that welfare recipients in California were using their welfare cards to get cash from ATMs at strip clubs. Taxpayers should not be paying for some guy’s lap dance!4 And over in Virginia, taxpayers were outraged when it was revealed that their tax dollars were going to subsidize welfare recipients living in luxury apartments, complete with “resort-style swimming pools with fountains and heated spas, billiard rooms, granite counter tops, indoor basketball courts, and stainless steel appliances.” “These are resort-style amenities that the majority of the taxpayers that are subsidizing it don’t have in their own [homes],” said supervisor Pat Herrity. “Luxury has no place in subsidized housing.”5 - -Look, I believe deeply that America must maintain a sturdy safety net. We have an obligation to take care of those who can’t take care of themselves, whether due to age or illness. Our country has a big heart. And it’s a point of national pride that we take care of our own. It’s one of the things that makes us so great. And certainly our people need a lot more help given that President Obama has been such a total disaster. Today, under this administration, more people than ever in America’s history—a staggering 46.2 million—live under the federal poverty line. Many of these individuals are out of work. They need temporary assistance as they search for the few jobs that remain in the Obama economy. We should help these folks and their kids, no question about it. But it is counterproductive and cruel to allow America’s safety net to morph into a hammock. It is simply immoral for the government to encourage able-bodied Americans to think that a life on welfare, of being supported by taxpayers, is an acceptable lifestyle. - -Our Founding Fathers understood that self-reliance is the axis on which freedom spins. The American work ethic is what led generations of Americans to create our once prosperous nation. The idea that working hard was a spiritual act of doing one’s work “as unto the Lord” spurred us to give our very best day in and day out. And because we believed that work was a virtue, we produced massive wealth, plentiful jobs, and a self-sufficient society. - -That’s what I find so morally offensive about welfare dependency: it robs people of the chance to improve. Work gives every day a sense of purpose. A job well done provides a sense of pride and accomplishment. I love to work. In fact, I like working so much that I seldom take vacations. Because I work so hard, I’ve been privileged to create jobs for tens of thousands of people. And on my hit show The Apprentice, I get to work with people from all walks of life. I’m known for my famous line, “You’re fired!” But the truth is, I don’t like firing people. Sometimes you have to do it, but it’s never fun or easy. One of my favorite parts of business is seeing how work transforms people into better, more confident, more competent individuals. It’s inspiring and beautiful to watch. - -America became a powerhouse because of our deep belief in the virtue of self-reliance. As Thomas Jefferson said, “I predict future happiness for Americans if they can prevent the government from wasting the labors of the people under the pretense of taking care of them.” Government wasn’t created to take care of us. Generations of Americans believed they should be responsible for themselves. When hard times hit, churches and neighbors pitched in and pulled together to help. 
But in the end, the Founders believed that government should only do those few things individuals couldn’t do for themselves. We are rapidly losing that self-reliant spirit that made America great. - - - - - -Proper Perspective on Poverty - - - -Real economic pain exists in America. No doubt about that. And we need pro-growth, pro-jobs policies. But it’s also important for us not to lose sight of the bigger picture. Obama tries to justify his massive spending programs in part based on the idea that they’re needed to eradicate poverty in America, but as Dinesh D’Souza, author of the bestselling book What’s So Great About America, points out, America is one of the few places in the world where a “poor” person can still be obese.6 “Poor” is a relative term. By global standards, poor people in America are rich. And even by American standards, poor people today are better off than average people were in our parents’ lifetimes. According to a Heritage Foundation study, “Today, poor boys at ages 18 and 19 are actually taller and heavier than boys of similar age in the general U.S. population in the late 1950s. They are one inch taller and some 10 pounds heavier than GIs of similar age during World War II.”7 Poor people in America have comforts most of the world’s poor have never seen, as the Heritage Foundation reports: - -• 80 percent of poor households have air conditioning. In 1970, only 36 percent of the entire U.S. population enjoyed air conditioning. - - - -• 92 percent of poor households have a microwave. - - - -• Nearly three-fourths have a car or truck, and 31 percent have two or more cars or trucks. - - - -• Nearly two-thirds have cable or satellite TV. - - - -• Two-thirds have at least one DVD player, and 70 percent have a VCR. - - - -• Half have a personal computer, and one in seven have two or more computers. - - - -• More than half of poor families with children have a video game system, such as an Xbox or PlayStation. - - - -• 43 percent have Internet access. - - - -• One-third have a wide-screen plasma or LCD TV. - - - -• One-fourth have a digital video recorder system, such as a TiVo.8 - - - -Does this mean that poor Americans aren’t in need of help, most especially a job? No, of course not. But it does mean that Americans should never lose sight of the fact that we are incredibly blessed to live in a nation where 97 percent of those considered poor own a color television and have the electricity to power it.9 - - - - - -Childhood Poverty Is a Tragedy - - - -The innocent bystanders of American poverty are kids. Yet two-thirds of childhood poverty in America would be absolutely preventable if individuals did just one thing: get married before they have children. As someone once put it, “Marriage is the greatest ‘anti-poverty’ program God ever created.” - -An out-of-wedlock child is six times more likely to live in poverty than a child born in a two-parent home. The reason for this is painfully obvious: two paychecks are twice as much as one. This isn’t brain surgery. Two people working full-time at Walmart put a family above the federal poverty line (defined as a family of four earning less than $22,314, not including in-kind benefits). The key thing is for the father to stick around, which is what marriage is meant to ensure. Both parents don’t necessarily have to hold down a job. One paycheck from a gainfully employed dad, with mom at home taking care of the kids, is better than a single mother living off welfare. - -The explosion of out-of-wedlock births in America is staggering.
This is a total departure from American history—one that is reshaping our country, and not for the better. Back when LBJ began engineering his “Great Society” and declaring his “War on Poverty,” only 7 percent of kids were born out of wedlock. Today, 40 percent of all births in America are to unwed mothers. Government is now the “father” in far too many homes. But here’s the thing: kids don’t just need a wallet—they need a dad who will teach boys how to be responsible men and show daughters what it means to be respected and protected. - -Out-of-wedlock birth rates are one of the greatest generators not only of poverty but of inequality in America. Twenty-nine percent of white children are born to a single mother (a figure that’s far too high), but 72 percent of black children are born out of wedlock. Beyond the economic consequences, we know that kids without a dad are also exponentially more likely to abuse drugs, drop out of school, commit crime, and be incarcerated.10 Kids who grow up in homes where a magic check appears each month from the government believe there’s nothing wrong with sitting at home doing nothing while taxpayers bust their humps working to fund them. For an entire generation, government welfare programs are eradicating the virtues of responsibility, hard work, and self-reliance that built America. - -Luis Lopez is a Democrat and youth counselor in Florida. He tells the story of an exchange he had with a 13-year-old pregnant girl he met in an inner-city, low-income housing project. He asked who was going to pay for her baby. Smiling, she said, “Medicaid and Social Security will pay for it.” “What about the father?” “We broke up,” she said. The girl went on to explain that her grandmother would raise her child. Then Lopez asked the pregnant teen what her mom thought about the fact that she was so young and pregnant. “My mom had me when she was 14,” the girl replied. “So what’s the problem?”11 - -It wasn’t always this way. A lot of us remember a time when there was a social stigma and sense of shame attached to living on the public dole. There’s a great scene in the movie Cinderella Man with Russell Crowe that illustrates how radically our entitlement culture has changed America. The movie is based on the true story of boxer James J. Braddock, a fighter during the Great Depression who goes on to become heavyweight champion of the world. As Braddock struggles to establish his boxing career, he eventually has to turn to public assistance to feed his wife and kids. He’s deeply embarrassed and ashamed, but he has no other options, so he accepts the money. Later, as his boxing career takes off and the prize money starts rolling in, Braddock returns to the welfare office and stands in line patiently. When he reaches the front of the line, he hands the welfare worker a stack of cash to pay back the government the money he had received to support his kids. That really happened. But today, given our entitlement culture, we can hardly imagine something like that except in the movies. - -We have to combat the welfare mentality that says individuals are entitled to live off taxpayers. We need to reaffirm that mothers and fathers have a responsibility to their children—and that it starts with getting married before they have them. But unfortunately our welfare system has created monetary incentives to avoid marriage and to have more out-of-wedlock children in order to get bigger welfare benefits.
Each year, taxpayers shell out $300 billion to unmarried parents.12 That’s almost a third of a trillion dollars that could easily be saved if we could restore personal responsibility and the importance of marriage before childbearing. Your tax dollars in the form of Medicaid also pick up the delivery costs for 40 percent of all children born in America, most of those children being born to never-married mothers.13 - -For too many of these mothers and their children, living off welfare becomes a way of life. Consider these numbers: since becoming president, Obama has added 8 million more Americans to the food stamp rolls,14 and food stamp spending has more than doubled since 2007, going from $33 billion to $77 billion.15 But even more shocking than these figures is that half of food stamps go to people who have been on public assistance for eight and a half years or more.16 The only good thing about this for Obama, and he knows it . . . they will all be voting for him. - - - - - -Obama’s “Food Stamp Crime Wave” - - - -The food stamp program was originally created as temporary assistance for families facing momentary times of need. And it shouldn’t be needed often. Thankfully, 96 percent of America’s poor parents say their children never suffer even a day of hunger.17 But when half of food stamp recipients have been on the dole for nearly a decade, something is clearly wrong, and some of it has to do with fraud. - -The Wall Street Journal has reported that Obama’s food stamp policies are ushering in a massive “food stamp crime wave.”18 That’s been accompanied by fewer prosecutions of illegal food stamp transactions involving alcohol or other non-eligible items.19 And “millionaires are now legally entitled to collect food stamps as long as they have little or no monthly income.”20 - -As the Wall Street Journal notes, “The Obama administration is far more enthusiastic about boosting food-stamp enrollment than about preventing fraud.” Under Obama’s rapid expansion of food stamps, recipients are selling welfare benefit cards on Facebook and Craigslist and using the money to buy drugs,21 food stamp checks are going to prison inmates,22 a $2 million lottery winner qualified for food stamps (and complained that he still deserved food stamps because the government took half his winnings in taxes),23 and the program is rife with incredibly costly scams including one enterprising crook who created more than 1,000 fraudulent food stamp claims and pocketed $8 million.24 And that’s just scratching the surface of the program’s waste, fraud, and abuse. The really infuriating thing is that the Obama administration doesn’t seem to care about how taxpayers are being shaken down by this outrageously mismanaged government program. - -The blatant waste of taxpayers’ dollars doesn’t bother Obama, because it’s all part of his broader nanny-state agenda. It seems he believes the more voters he gives welfare goodies to, the more votes he’ll rack up for reelection. Perhaps that’s why his administration doesn’t give a rip about policing fraud or administering responsible oversight—he’s buying votes! And as any good leftist knows, the bigger you grow the welfare state, the bigger you grow your electoral army. It’s an outrageous betrayal of the American taxpayer and of the twin pillars of hard work and self-reliance that support the American Dream of freedom, progress, and bettering oneself and one’s family.
- -We see the same trend in public housing, where, since Barack Obama’s election, massive crowds have been lining up to get Section 8 housing. In Atlanta, for example, 30,000 people showed up in the hopes of getting government housing applications or vouchers.25 There’s no doubt that some of those individuals are truly in need, whether due to age or disability, but the fact is that able-bodied, non-elderly individuals without children routinely enter the program and spend on average nearly eight years in public housing.26 That’s outrageous. - -People who have the ability to work should. But with the government happy to send checks, too many of them don’t. On average, able-bodied welfare recipients work just sixteen hours a week. How can anyone expect to climb out of poverty working just over three hours a day in a five-day work week?27 More hours at work equals more income. But our government’s welfare trap has built a system that creates a disincentive for work. The more hours you work, the fewer welfare goodies you get. So what do you think people are going to do? They keep their work hours artificially low to keep their welfare checks artificially high. And once again, America’s twin virtues of hard work and self-reliance take a beating. - -When you realize that every seventh person you pass on the sidewalk now receives food stamps, and that Obama has upped welfare spending to just under $1 trillion a year, it becomes painfully clear that this president’s rapid expansion of the welfare industry is part of a much broader effort to “fundamentally transform America,” as Obama put it early in his presidency. - -I’ve got a newsflash for you, Mr. President: America likes America the way the Founding Fathers built her—as a nation that deeply values hard work and self-reliance. The next president America elects must be committed to serious welfare reforms that overhaul the system and roll back Obama’s disastrous public assistance policies. - -We know how to reform welfare because we’ve done it before. In 1996, then-Speaker Newt Gingrich and congressional Republicans passed the 1996 Welfare Reform Act and pushed President Clinton to sign it. In the wake of the bill’s passage, the liberal New York Times ran a breathless op-ed with the headline: “A Sad Day for Poor Children.” “This is not reform, it is punishment,” read the article. “The effect on cities will be devastating.”28 As usual, the New York Times could not have been more wrong. The results were as dramatic as they were hopeful: welfare caseloads went down 60 percent, 2.8 million families transitioned from welfare to work, and 1.6 million kids climbed out of poverty.29 - - - - - -Welfare to Work - - - -The secret to the 1996 Welfare Reform Act’s success was that it tied welfare to work. To get your check, you had to prove that you were enrolled in job training or trying to find work. But here’s the rub: the 1996 Welfare Reform Act only dealt with one program, Aid to Families with Dependent Children (AFDC), not the other seventy-six welfare programs, which today cost taxpayers more than $900 billion annually.30 We need to take a page from the 1996 reform and do the same for other welfare programs. Benefits should have strings attached to them. After all, if it’s our money recipients are getting, we the people should have a say in how it’s spent. - -The way forward is to do what we did with AFDC and attach welfare benefits to work.
The Welfare Reform Act of 2011—proposed by Republican Congressmen Jim Jordan of Ohio, Tim Scott of South Carolina, and Scott Garrett of New Jersey—does just that.31 Their bill, if enacted, would make sure that welfare programs would serve only those who truly need them, place a cap on welfare expenditures to prevent bureaucrats from endlessly expanding the programs, give more authority to the states over welfare spending, prevent federal funding of abortions through welfare programs, and enforce work requirements, among other reforms.32 It’s a serious plan that deserves to be passed and signed into law. - -Of course, just as with the 1996 Welfare Reform Act, liberals will cry, kick, scream, and throw temper tantrums. But let them. It’s far more important that we help poor people to become independent, self-sufficient individuals who gain the benefits of work. Let’s get it done. - -Next, I believe that the state of Florida made a smart move when in 2011 it became the only state to require drug testing of all recipients of the welfare program Temporary Assistance for Needy Families (TANF). As Florida Governor Rick Scott said, “While there are certainly legitimate needs for public assistance, it is unfair for Florida taxpayers to subsidize drug addiction. This new law will encourage personal accountability and will help to prevent the misuse of tax dollars.”33 The governor is right. It’s common sense. By the way, Rick Scott is doing a great job and not getting the credit he deserves. - -Look, millions of employees have to get drug tested for their jobs. Do they make a big stink about it? No. It’s only smart. But leave it to the know-nothings at the American Civil Liberties Union (ACLU) to whine and cry about a requirement that millions of hard-working taxpayers go through year in and year out. “The wasteful program created by this law subjects Floridians who are impacted by the economic downturn, as well as their families, to a humiliating search of their urine and body fluids,” said a foolish Howard Simon, executive director of ACLU Florida.34 Humiliating? Excuse me? How is it “humiliating” to make sure that taxpayers aren’t funding a drug addict’s next hit? And how is it “humiliating” to take a drug screen that millions of working people take with no problem? It’s not. It’s just one more example of liberals’ attempting to erode personal responsibility and waste taxpayers’ money. - -The bill requires that TANF recipients take and pass a drug test. If it’s a two-parent household, both individuals get tested. Anyone who tests positive for drugs is ineligible for benefits for a year. If they fail it a second time, they are ineligible for three years. Recipients cover the cost of the screening, which they later recoup through benefits.35 If parents fail the drug test, benefits for children can be awarded to a third-party recipient acting as a guardian, provided he or she passes a drug test.36 - -This common-sense approach should be a no-brainer. It’s insane to ask taxpayers to foot the bill for some junkie’s drug habit when America is already $15 trillion in the hole and many Americans are fighting to survive in the Obama economy. Bottom line: you do drugs, no welfare check. End of story. - -Finally, it’s time to get tough on those who cheat and defraud taxpayers. The Obama-fueled welfare “crime wave” must end fast. Otherwise, it will further spread the mindset that says, “Who cares if I cheat the system, it’s not my money.
I deserve free stuff.” That means punishing violators, not turning a blind eye like the Obama administration has done. And that includes punishing corrupt bureaucrats who run scams and leave taxpayers holding the bill. Also, no more millionaires getting welfare checks. That’s outrageous and must be stopped immediately. - - - - - -America has a big heart. We believe in helping our fellow citizens when they are down on their luck, become seriously disabled, or reach an age when they can’t care for themselves. For those folks, the safety net is necessary and totally appropriate. - -Yet for too many people, welfare has become a way of life. There’s nothing “compassionate” about allowing welfare dependency to be passed from generation to generation. Kids deserve better. America deserves better. - -President Reagan put it best: “Welfare’s purpose should be to eliminate, as far as possible, the need for its own existence.” - - - - - -EIGHT - - - -REPEAL OBAMACARE - - - - - - - -Former Speaker of the House Nancy Pelosi said Congress had to pass Obamacare so we could find out what’s in it. Now we have. And what’s inside those 2,733 pages is a job-killing, health care-destroying monstrosity. It can’t be reformed, salvaged, or fixed. It’s that bad. Obamacare has to be killed now before it grows into an even bigger mess, as it inevitably will. Obamacare takes full effect in 2014. If it’s not repealed before then, it will be more than just another failed government entitlement program—it will be the trillion-ton weight that finally takes down our economy forever. - -Polls show that more than 80 percent of Americans are reasonably pleased with their current health insurance plan.1 That’s an impressive number. Still, everyone agrees we need to take steps to reduce the rising costs of health care and make insurance more affordable. But socialized medicine is not the solution. That’s why the majority of Americans are against Obamacare. They know that giving our inept, bumbling federal government control over health care is an invitation to disaster. Obamacare is a heat-seeking missile that will destroy jobs and small businesses; it will explode health-care costs; and it will lead to health care that is far less innovative than it is today. Every argument that you’d make against socialism you can make against socialized health care, and any candidate who isn’t 100 percent committed to scrapping Obamacare is not someone America should elect president. Repealing Obamacare may be one of the most important and consequential actions our next president takes. - - - - - -Obamacare Puts Small Businesses on Life Support - - - -It’s sad to see just how many citizens—some of them smart people—got duped into believing Obama’s bait and switch sales pitch on Obamacare. Take, for instance, Starbucks CEO Howard Schultz. Schultz did a terrific job of turning Starbucks around. But when it came to Obamacare, he took the bait hook, line, and sinker. “When I was invited to the White House prior to health care being reformed, I was very supportive of the president’s plan,” Schultz said. However, after Schultz and his team studied the massive bill more closely, he changed his tune. “As the bill is currently written and if it was going to land in 2014 under the current guidelines, the pressure on small businesses, because of the [individual] mandate, is too great.”2 - -That’s putting it mildly. 
A September 2011 report by UBS, the highly respected financial services company, said, “Arguably the biggest impediment to hiring (particularly hiring of less skilled workers) is healthcare reform, which has the added drawback of straining state and federal budgets.”3 The report went on to explain in simple language why Obamacare is such a job killer: - -The new law requires most businesses to provide a generous “essential” package of benefits, which is beyond what many small businesses provide today. It subjects businesses to highly complex rules that increase the cost, risk, and “hassle factor” of adding to payrolls. Companies that do offer insurance can be fined if low-income employees take a government-subsidized plan. All firms with more than 50 workers must provide benefits, which creates an incentive for smaller firms to stay “under the limit” by expanding overseas, outsourcing, or dividing into two companies.4 - - - - - -And liberals scratch their heads and wonder why businesses don’t want to hire? - -Simple: companies know Obama is anti-business, and his government-run health-care takeover has created a major disincentive to hire new workers. So business leaders aren’t hiring. Instead, they will just ship more jobs overseas or automate their systems with machines. Just do some simple math. If you have a business with fifty employees, would you hire that fatal fifty-first employee and instantly subject yourself to a $100,000+ penalty ($2,000 for every employee in your company) for having the audacity to create more jobs, enlarge your business, and stimulate the economy? No. You would either put a freeze on hiring (in the hopes Obamacare will be repealed or overturned by the Supreme Court), outsource jobs to other countries, or create a second company (which will grow more slowly than if you concentrated your resources) to avoid the penalty. That’s where we are today, and that’s why we have record unemployment. This isn’t rocket science. - -Obamacare also slaps companies that already insure their employees with $3,000 fines per employee if the health-care benefits they offer aren’t up to Obama’s standards. The White Castle hamburger chain ran the numbers and discovered that these new regulations will eat up 55 percent of their net income after 2014.5 How in the world can anyone expect businesses to hire new workers under these kinds of insane requirements? - -Not surprisingly, the instant businesses began crunching the numbers, thousands of businesses and states began asking for “waivers” from Obamacare. So far just under 1,500 waivers have been granted.6 And guess who the big winners have been? President Obama’s biggest backers who championed Obamacare! More than 50 percent of the waivers have gone to union members. And in the recent round of new waivers, 20 percent of them went to Nancy Pelosi’s district.7 You just can’t make this stuff up. How is it fair to let Obama’s pals off the hook and grant them waivers but force the rest of America to be stuck with Obamacare? Mr. President, you need to give all Americans a waiver!
During the health-care debate, Obama swore that passing Obamacare would “bring down the cost of health care for families, for businesses, and for the federal government.” He also said that passing his plan would “lower premiums for the typical family by $2,500 a year.”8 In September 2011, the nonprofit Kaiser Family Foundation, which tracks annual employer health insurance, released a study revealing that health insurance premiums leapt 9 percent in 2011. As Senator Orrin Hatch put it, “The president’s promise that his partisan health law would lower costs was just empty rhetoric.”9 - -Liberals could hardly believe it—they couldn’t understand how health-care costs could have risen so much when their hero Barack Obama had promised that they wouldn’t. Obama claimed his socialized medicine plan would immediately “bend the cost curve downward.” He said the bill’s pre-2014 requirements, such as forcing employers to cover millions of adult “children” up to twenty-six years old on their parents’ health plans, would push costs down. Well, the Kaiser report found that 2.3 million adult “kids” have been added so far in the wake of Obamacare passing. And guess what happened? Under Obama, the average family’s health insurance premiums have risen $2,393. That’s almost the exact opposite of what the president promised. How’s that for “hope and change”?10 - -As business and economics columnist Robert Samuelson concluded, “The study reminds us that runaway costs are the health system’s core problem; [Obamacare] does nothing to solve it—and would actually make it worse.... If roughly 30 million or so Americans get insurance and no basic changes are made in the delivery system, then added demand will lead to higher costs, longer waiting periods, or both.... [Obamacare] was also bound to raise the costs of hiring workers by compelling employers to provide expensive coverage. That prospect can’t be helping job creation.”11 - -Looking back, it’s incredible that anyone believed Obama and Nancy Pelosi’s wild rhetoric. Remember when Pelosi promised us that passing Obamacare would magically create jobs? Her exact words were even bolder. Pelosi said, “It’s about jobs. In its life, it [Obamacare] will create four million jobs—400,000 jobs almost immediately.”12 400,000 jobs almost immediately . . . incredible, isn’t it? Liberals were fools to have believed such garbage, especially when there were so many small business owners pleading with the government not to crush their ability to create jobs. - - - - - -Obamacare Is Killing Jobs - - - -Instead of creating new jobs, Obamacare is destroying jobs. And the worst part is yet to come, since the truly painful provisions don’t kick in until 2014. Businesses like Boeing, Caterpillar, and Deere & Company are already tallying up the job-killing costs of Obamacare. The numbers are ugly. These companies will now have to find $150 million, $100 million, and $150 million respectively—and that’s just the cost to meet one provision in the new law.13 Where does Obama think these sums will come from? Does he not understand that businesses exist to make a profit? Every time government adds a cost to a business, that company either has to pass the cost along to consumers, fire or stop hiring workers, or both. These three companies are big enough to absorb Obamacare’s body blow and still survive. But what about small businesses that are struggling to grow and would love to hire more workers? Those are the companies that will suffer the most under Obamacare. 
And don’t forget, small businesses are our biggest job creators. In fact, over the last fifteen years, small businesses have been responsible for 64 percent of net new jobs.14 - -How many jobs will Obamacare kill? A study from the National Federation of Independent Business found that Obamacare could mean the loss of 1.6 million jobs, 66 percent of which would be from small businesses.15 Obama will probably dismiss that study because it comes from an organization that has the word “business” in its name. Fine. Then maybe he should listen to the director of the nonpartisan Congressional Budget Office, who said during congressional testimony that the bill would kill 800,000 full-time jobs in the first decade alone.16 Bottom line: as Minnesota Congressman John Kline put it, “To suggest [Obamacare] doesn’t undermine job creation is to deny reality.”17 - - - - - -Obamacare Will Destroy Patient Choice and Explode Spending - - - -In addition to killing jobs, Obamacare also destroys a patient’s right to choose the insurance and doctor he wants. Whole books have been written about what’s wrong with Obamacare and how we can improve health care without wrecking our economy, like The Truth About Obamacare by Pacific Research Institute President Sally C. Pipes. But even a casual observer can see that Obama’s rhetoric doesn’t align with reality. Remember when Obama promised us that “if you’ve got health insurance, you like your doctors, you like your plan, you can keep your doctor, you can keep your plan. Nobody is talking about taking that away from you”?18 Yeah, well that was a flat-out lie. Here’s why: once the law goes into full effect in 2014, one out of three employers plans to drop employee health benefits entirely and just pay the government penalty.19 That means those workers will be shoved into the government’s subsidized insurance exchanges. And nothing will make liberals happier. - -As liberals see it, pushing businesses to dump their current health-care plans and funnel their workers into the government-run health-care plans is a backdoor way to drag America closer to a so-called “single payer system,” otherwise known as total government-run health care. As Howard Dean joyfully said, “Most small businesses are not going to be in the health insurance business anymore after this thing goes into effect.”20 So the line Obama sold the country about everyone being able to keep the plan they have was a total con job. As the president knew all along, millions of workers’ current insurance plans will be scrapped entirely. Obamacare is nothing more than a lurch toward total government-controlled health care. - -It’s crazy that this plan was even proposed—America is a debtor nation. How in the world does it make sense to create a budget-busting government program like Obamacare when the United States is already $15 trillion in the hole? It’s financial suicide. - -The original price tag Obama and his liberal supporters quoted us was also a total sham. Not wanting to quote a price that used the word “trillion,” Obama and Pelosi made sure to jigger the numbers so that he could claim Obamacare would cost $940 billion over the decade. Yet as Dr. Jeffrey H. Anderson points out, “Even this colossal tally is like the introductory price quoted by a cell phone provider.
It’s the price before you pay for minutes, fees, and overcharges—and before the price balloons after the introductory offer expires.” Using the CBO’s numbers, Anderson calculates that Obamacare’s actual cost from 2014 (when the plan fully kicks in) to 2023 will be $2.0 trillion, more than double what Obama and Pelosi claimed, in order to insure the 30 million Americans Obama says are uninsured.21 As usual, liberals play a shell game with how much they’re planning to screw taxpayers for. The Obama administration said they would pay for the program by slashing $575 billion from Medicare and make up the rest in tax hikes. I think we can count on tax hikes—lots of them. - -Now take a closer look at that number of uninsured Americans—30 million. Throughout the health-care debate, Obama chronically talked about the “46 million uninsured Americans.” Over and over we had that number pounded into us like a nail. Then, all of a sudden, Obama decided that, no, the actual number of people who couldn’t get health-care coverage was 30 million. That’s quite a drop! But of course he used the smaller number after having blasted the inflated number far and wide. - -But pretend for a moment that the 46 million was real. According to the ultra right-wing New York Times, here’s how that number breaks down “with the caveat that there is overlap in these numbers” (which is why they don’t exactly add up to 46 million). One out of five of the people he claimed were uninsured weren’t even U.S. citizens! Another 13.7 million have plenty of money to buy health care (they make more than $75,000 a year) but choose not to get it. Eleven million poorer Americans are Medicaid or SCHIP eligible but just haven’t enrolled yet. That leaves 13 million young people (ages nineteen to twenty-nine) who are fresh out of college, who can afford insurance but think they’re invincible, who are in between jobs, or who are searching for jobs.22 There’s no doubt some of these folks need a safety net under them until they start their careers. The question is, was it worth it to jeopardize the world’s greatest health-care system and shackle America with $2 trillion of additional debt to address the temporary health-care needs of 4 percent of the country? Or could we have devised a smarter, more efficient, less expensive solution that would have accomplished the same goal? Only a fool would choose the former over the latter. - -We may get lucky and have the Supreme Court declare Obamacare unconstitutional. After all, there’s no doubt that the government forcing all citizens to buy a product is a direct violation of the Commerce Clause. That would set a very dangerous precedent. “If Congress may require that individuals purchase a particular good or service,” says Utah Senator Orrin Hatch, “we could simply require that Americans buy certain cars.... For that matter, we could attack the obesity problem by requiring Americans to buy fruits and vegetables.”23 Hatch is right. The individual mandate is a massive federal overreach and is clearly unconstitutional. But as every conservative knows, the Supreme Court tramples on the Constitution all the time. So it’s anyone’s guess what they will do. - -Still, I think we’ve got an even chance that the Supreme Court may strike down Obamacare’s so-called “individual mandate” to buy health insurance. If that happens, even Obama’s supporters concede it will all but kill Obamacare, because the whole thing hinges on the government forcing everyone to buy insurance whether they want to or not.
But it’s anyone’s guess if the Supreme Court will rule properly. - - - - - -Bring Down Costs through Competition - - - -Regardless of what happens in the Supreme Court, and even if we elect a real president who will get tough and repeal Obamacare, we still need a plan to bring down health-care costs and make health-care insurance more affordable for everyone. It starts with increasing competition between insurance companies. Competition makes everything better and more affordable. When I build a building, I let various builders and architects compete for the contract. Why? Because it sharpens their game, makes them bid competitively on price, and encourages them to give me the best quality product possible. That’s true for any service or product. That’s why Americans need more options when it comes to purchasing health-care insurance. - -One way to infuse more competition into the market is to let citizens purchase health-care plans across state lines. Health-care costs vary drastically from state to state. For example, a 25-year-old in California can buy an HMO plan that costs him $260 a month. But for a New Yorker to buy a similar plan with equivalent benefits, it will cost him $1,228 a month.24 Why not allow people to buy health insurance across state lines and make companies compete to offer the best plans at the best rates? - -This could be easily accomplished if Congress got some guts and did the right thing. The U.S. Constitution gives Congress control over interstate commerce. But for whatever reason, Congress has never exercised this power regarding health insurance. Bills for interstate insurance compacts have been proposed for over six years. As usual, though, the politicians in Washington have done nothing about it. They need to. As former Florida Congressman Thomas Feeney points out, creating a national market for health-care plans would help bring costs down for lower-income Americans—such as those 19- to 29-year-olds without coverage—and give them more affordable options.25 - -The reason prices vary so much from state to state is that states differ wildly on the kinds of mandates they require in their coverage plans. As Devon Herrick, a senior fellow with the National Center for Policy Analysis, puts it: “If consumers do not want expensive ‘Cadillac’ health plans that pay for acupuncture, fertility treatments or hairpieces, they could buy from insurers in a state that does not mandate such benefits.”26 Increasing competition is common sense. We need to pass laws that encourage it. - - - - - -Real Tort Reform Right Now - - - -The other way we can drive down costs is by recognizing that doctors today are practicing “defensive medicine.” In other words, doctors often order unnecessary tests and procedures to avoid being sued. PricewaterhouseCoopers did a study to see how much defensive medicine adds to overall medical costs. They found that this phenomenon accounts for at least 10 percent of all medical costs.27 That’s huge. - -It’s not hard to understand why doctors engage in defensive medicine. Just look at disgraced Democratic vice presidential nominee John Edwards. In his former life, Edwards was a world-class ambulance chaser. In just twelve years, Edwards won $175 million in malpractice judgments by suing doctors, insurance companies, and hospitals for causing infant cerebral palsy. 
And this despite the fact that the American College of Obstetricians and Gynecologists has stated that the “vast majority” of cerebral palsy cases have nothing to do with the way a baby is delivered.28 It’s just one more example of what a disgraceful human being John Edwards is. - -“The courts are clogged up with these cases,” says Dr. Cecil Wilson of the American Medical Association. “Physicians are afraid of being hauled into court and as a result order tests they ordinarily would not order.”29 With sleazy characters like Edwards lurking around every hospital corner, it’s no wonder doctors feel forced to add all those expensive tests to protect themselves. Doing so, however, jacks up our health-care costs by at least 10 percent. That’s why we need serious tort reform. Specifically, we need to cap damages for so-called “pain and suffering” at $100,000. We also need “loser pays” laws that make the loser pay the legal bills of the winner if the charges are deemed baseless—a system followed by almost all other Western democracies. This will help cut down on frivolous suits that artificially raise health-care costs and clog up our courts. The state of Texas recently passed “loser pays” legislation. Other states should do the same. - - - - - -There’s a reason most Americans oppose Obamacare: it’s a total disaster. Barack Obama has put us so deep in the debt hole that America can’t afford another one of his $2 trillion spending programs. Obamacare is already making health-care costs rise, and the thing hasn’t even gone into full effect yet. Worse, it’s absolutely slaughtering jobs. No businessperson with a brain would consider serious expansion with this regulatory nightmare hanging over them. Whether through a Supreme Court ruling or a presidential election, America must repeal Obamacare once and for all. - -Destroying the world’s finest health-care system so that Obama can have his socialized medicine program is reckless and foolish. The proper way to bring the cost of health care down is to make insurance companies compete nationally and get defensive medicine under control through serious tort reform that includes “loser pays” provisions. - -We need a president who will get tough and repeal Obamacare on day one. When they do, they will have accomplished more with one stroke of the pen than Obama has accomplished in his abysmal presidency. 2012 can’t come soon enough. - - - - - -NINE - - - -IT’S CALLED ILLEGAL IMMIGRATION FOR A REASON - - - - - -Illegal immigration is a wrecking ball aimed at U.S. taxpayers. Washington needs to get tough and fight for “We the People,” not for the special interests who want cheap labor and a minority voting bloc. Every year taxpayers are getting stuck with a $113 billion bill to pay for the costs of illegal immigration.2 That’s a bill we can’t afford and wouldn’t have to pay if people in Washington did their jobs and upheld our nation’s laws. - -Too many Republicans in Washington turn a blind eye to illegal immigration because some of their business supporters want artificially cheap labor. Liberal Democrats, on the other hand, look on illegal immigrants as another potential Democrat voting bloc eager for their big government agenda of welfare handouts, class warfare, and “affirmative action.” What do taxpayers get? They get the shaft. - - - - - -Illegal Criminals Have Got to Go - - - -Both sides need to grow up and put America’s interests first—and that means doing what’s right for our economy, our national security, and our public safety. 
According to a Government Accountability Office (GAO) 2011 report, America’s prisons house 351,000 criminal aliens who committed a crime after having already broken the law by entering America illegally. Making taxpayers pay for 351,000 criminals who should never have been here in the first place is ridiculous. The GAO says that the annual price tag to incarcerate these thugs is $1.1 billion. And get this: criminal aliens have an average of seven arrests.3 That’s at least seven crimes committed against American citizens by each of these criminals who should never have been allowed across our borders. - -According to the New York Times, one out of every three federal prison inmates is a Latino, and three quarters of these are here illegally.4 As one Phoenix, Arizona, assistant federal defender put it, “I have Anglo and Native American clients who tell me about being the only non-Spanish speaker in their pod. Ten years ago, it just wasn’t that way.... A lot of times the guards don’t speak the language. How do you safely guard people who may not understand your orders?”5 A better question is why should we have to guard them at all? Have we suddenly become an annex of Mexico’s prison system? If so, Mexico should pay for it. I actually have a theory that Mexico is sending their absolute worst, possibly including prisoners, in order for us to bear the cost, both financial and social. This would account for the fact that there is so much crime and violence. - -We shouldn’t want lawbreakers as citizens—and that’s what illegal immigrants are by definition: lawbreakers. Yes, America is a nation of immigrants, but that doesn’t mean we have to offer citizenship to everyone who crosses our borders. I’d guess just about every poor person in the world wants to come here. Who wouldn’t want to come to the greatest nation on earth? But that obviously is crazy. What is not crazy is having an immigration policy where we decide which potential immigrants are entitled to citizenship, where we choose the best and most productive people who want to come here for that honor. We should not let ourselves become the dumping ground for other countries’ undesirables. Instead we should roll out a welcome mat only to those who can make our country better—and illegal immigrant criminals don’t do that. - -The illegal immigrant crime problem is far more serious and threatening than most people understand. Along our southern border, our citizens, police, and border patrol agents are being attacked with increasing brutality and regularity. Did you know that three border patrol agents are assaulted every day along America’s southern border? And it’s getting worse. According to the Justice Department, assaults against U.S. border patrol agents have spiked 46 percent.6 - -Then there is the “most dangerous gang in the world,” the Mara Salvatrucha, more commonly known as the MS-13 gang. The gang, comprised mostly of Central American immigrants, is known for its extreme viciousness. Besides smuggling (and abusing) illegal immigrants into the United States, MS-13 might be conspiring with terrorists. 
Al Qaeda is always looking for a way to smuggle terrorists into our country, and American officials know that a top al Qaeda lieutenant (who had also been in Canada seeking nuclear material for a so-called “dirty bomb”) met with leaders of MS-13 about ways to infiltrate America through our border with Mexico.7 Intelligence agencies have also spotted several known members of the Somalia-based Al Shabaab Islamic terror group in Mexico and have warned that they are planning to penetrate the United States.8 - -MS-13 represents a lethal threat to both our citizens and illegal immigrants. The gang brags that they are “immigrant hunters.” They lie in wait at immigration checkpoints knowing that illegals will jump off trains. MS-13 then holds the stranded illegal aliens for ransom. With 22,000 illegal immigrant kidnappings occurring each year, it’s estimated that gangs like MS-13 could be raking in upwards of $50 million annually.9 - -Obviously not all illegal immigrants are members of violent gangs. Many aliens are just seeking a better life for their families. Who could fault them? But again, we cannot become a repository for all the poor and desperate people of the world. For America to change its culture and way of life, to give away American jobs at a time of high unemployment to non-citizens who have broken the law to come here, is to commit economic and cultural suicide. - -And not enforcing our laws leads directly to the deaths of American citizens. Here is a poignant example. In 2010, Carlos Montano, an illegal immigrant, killed a 66-year-old nun, Sister Denise Mosier, and critically injured two others when Montano was driving drunk in a Virginia suburb. Incredibly, Carlos Montano had been arrested not once but twice before on drunk-driving charges and had other traffic-related arrests. But when Montano got handed off to Immigration and Customs Enforcement (ICE) for deportation, the Department of Homeland Security inexplicably let Montano walk. “We handed him over to the feds assuming he would be deported,” said Corey Stewart, chairman of the Prince William County Board of Supervisors. “But instead federal authorities released him back into the neighborhood and he killed a nun. . . . Blood is on the hands of Congress for not properly funding immigration enforcement.”10 - -The needless death of Sister Denise Mosier is hardly an isolated case. There are countless stories of driving fatalities and serious injuries caused by people who should not have been on American roads in the first place. When liberals like Barack Obama hear tragic stories like that of Sister Mosier, they come back with, “Yes, and that’s precisely why we should grant ‘undocumented workers’”—that’s illegals to you and me—“driver’s licenses and teach them the rules of our roads!” It’s a level of cluelessness that borders on delusion. - -Look, my wife is an immigrant—a legal immigrant. Did she have to jump through legal hoops? Of course. Did she complain about it? No, she didn’t. She is grateful for the chance to live in America. So she complied with the laws of the land. She worked hard to become a U.S. citizen—and the U.S. got a good one. - - - - - -Illegals Are Breaking Our Bank - - - -In purely economic terms, however, one of illegal immigration’s biggest costs to taxpayers involves the monies paid to educate the children of illegal aliens. Illegal immigrant children often require special classes and language specialists, and take time and resources away from our own students. 
On this point I strongly disagree with Governor Rick Perry. The Federation for American Immigration Reform (FAIR) reports that U.S. taxpayers shell out $52 billion annually to educate illegal aliens. Liberals like to say that illegal aliens pay taxes too in the form of sales taxes and the fees and taxes that get folded into the costs of things like gasoline. But this argument fails—big time. According to FAIR, on average, less than 5 percent of the public costs associated with illegals are regained through taxes paid by illegal aliens.11 - -The fact is, when it comes to taxpayer-provided social services and welfare, illegal aliens have elbowed their way to the front of the line. In 2011, the Houston Chronicle reported that 70 percent of the illegal immigrant families living in Texas received welfare assistance. That’s compared to the already too high 39 percent of native-born Americans who receive welfare.12 That’s insane. People who broke into the country use our social safety net with greater regularity than our own citizens! How can we ever expect to get a handle on the illegal immigration crisis when we incentivize and reward it with free welfare checks and health care? - -“We can no longer afford to be HMO to the world,” says Los Angeles County Supervisor Michael Antonovich. He says that the total cost to taxpayers for illegal immigrants in Los Angeles County is $1.6 billion, “not including the hundreds of millions of dollars for education.”13 - -The root cause of all the welfare payments to illegal aliens is the so-called “anchor baby” phenomenon, which is when illegal immigrant mothers have a baby on American soil. The child automatically becomes an American citizen, though this was never the intention of the Fourteenth Amendment, which states, “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.” The clear purpose of the Fourteenth Amendment, ratified in 1868, three years after the end of the Civil War, was to guarantee full citizenship rights to now emancipated former slaves. It was not intended to guarantee untrammeled immigration to the United States. - -Some 4 million anchor babies are now officially U.S. citizens. This has to stop. The only other major country in the world that issues citizenship based on where one’s mother delivers her child is Canada. The rest of the world bases citizenship on who the kid’s parents are, which is of course the only sane standard.14 If a pregnant American mother is traveling to Egypt on business and goes into delivery, do we instantly declare her child an Egyptian? Of course not. But that’s precisely what goes on every day in America: women who have zero connection to the United States cross the border, deliver a baby, and their kid magically becomes an American citizen eligible to receive all the rights and benefits of those who have lived, worked, and paid taxes in our country. - -Republican Senators Jon Kyl of Arizona and Lindsey Graham of South Carolina have discussed introducing a constitutional amendment to clarify and restore the original intent of the Fourteenth Amendment. It’s long past time that America joins the rest of the world in granting citizenship along rational lines. - - - - - -Liberal Myths - - - -But in restoring sanity to the interpretation and enforcement of our laws, we’ll have to fight liberal myths every step of the way. 
We’ve all heard it a million times: “We need illegal immigrants because they are willing to do jobs Americans just won’t.” To that one I say, “Says who?” We have 25 million citizens who need jobs, and 7 million illegal immigrants holding American jobs. Do the math. If illegal aliens weren’t holding these jobs, American citizens would, because these jobs need to be filled, and guess what? Those jobs would pay more than they do now, because illegal low-wage workers drive down wage rates. Even the Washington Post has conceded that “an influx of immigrants has helped depress the incomes of low-skilled workers in recent decades, many economists agree.”15 As research by Harvard University economist George J. Borjas has shown, “the primary losers in this country are workers who do not have high school diplomas, particularly blacks and native-born Hispanics.” Borjas found that from 1980 to 2000, illegal immigrants lowered the nation’s average wages some 7.4 percent for America’s 10 million native-born men who lack a high school diploma.16 You would think that Obama, who talks a good game about caring for the poor, would try to help raise wages for people at the bottom of the economic ladder. But with black teenage unemployment now at a staggering 46.5 percent, and with the overall black underemployment rate at a breathtaking 18.8 percent, it’s outrageous that the president continues to mock Republican efforts to reduce illegal immigration and boost wages.17 - -“All the stuff they [Republicans] ask for, we’ve done,” said Obama at a 2011 immigration rally in El Paso, Texas. “Maybe they’ll need a moat,” Obama said to laughter. “Maybe they want alligators in the moat! They’ll never be satisfied.”18 - -Mr. President, you might think the border deaths, narco terrorists, and waves of violent illegal criminals pouring into America are a joke, but the people who live along the border and the communities under siege do not. We need a president who will get tough, enforce our laws, protect our people, and pull wages up. - -One of the biggest myths we have been told is that illegal immigrants actually produce a net gain economically. This is a cute argument, but it’s a complete joke. It assumes, among other things, that illegal workers keep their money here in America. But they don’t. In 2006, 73 percent of Latino immigrants regularly sent money back to their home countries, amounting to $45 billion. For countries like Mexico, illegal immigrants in the United States are a cash cow. In fact, Mexico’s second biggest source of foreign income, just behind oil exports, comes from—you guessed it—remittances from illegal aliens. In 2008, Mexico got $25.1 billion in money sent back home. Remittances have skyrocketed over the last decade. They went from $9 billion in 2001 to $26 billion in 2007.19 That’s money that American workers could be earning, saving, and spending here in the United States. - -So what to do? - - - - - -Reform Our Illegal Immigration System - - - -Before I lay out what needs to be done to get our illegal immigration mess fixed, it’s worth first discussing what America needs to do with our legal immigration system. It, too, is backwards and in need of a total overhaul. Thankfully, our neighbors to the north, Canada, have a smart, merit-based plan that America should adopt. - -Canada’s legal immigration plan starts with a simple and smart question: How will any immigrant applying for citizenship “support the development of a strong, prosperous Canadian economy”? Economic benefit should be our chief aim. 
America doesn’t need freeloaders who come here to live off our welfare system. We need legal immigrants who bring skills, prosperity, and intellectual capital. In Canada, aliens applying for permanent residence are awarded points based on their skills and how they will benefit the Canadian economy. Only 40 percent of the overall determination on whether permanent residence will be granted depends on family relationships or refugee status. The remaining 60 percent of the decision hinges on how the immigrant will add value to Canada’s economy. Our system is almost exactly the opposite. In fact, it’s worse. Seventy percent of the one million permanent resident admissions the United States grants every year are based on family relations. Only 13 percent depend on employment (the remainder are for refugees and diversity visas).20 This makes no sense whatsoever. - -For an applicant to be considered for Canadian permanent residency, he must score a minimum of 67 points out of 100. He must also have a minimum of one year of full-time work experience in a desired skill area within the last ten years. The better the immigrant’s attributes, the higher the score. If the alien doesn’t earn 67 points and is serious about wanting to live in Canada, he can work on developing his marketable skills until he does qualify. For example, if the applicant isn’t a college graduate, he can go home, get a college degree, add 25 points to his total, and reapply. As a result, roughly half of Canada’s immigrants have a bachelor’s degree.21 - -Canada’s legal immigration system also requires that before an immigrant qualifies for Canada’s equivalent of Social Security, he has to have been resident in the country for at least ten of his adult years. In America, we only require five years.22 - - - - - -Work Visas - - - -Our country’s leaders are just so plain stupid. As an example, foreign students come over to our colleges, learn everything there is to learn about physics, finance, mathematics, and computers, and graduate with honors. They would love to stay in this country, but we don’t allow them to. We immediately ship them back to use all of the knowledge they learned at the best colleges in the United States in their own countries rather than keeping it here in ours. - -When we have gifted people in this country we should cherish them and let them stay. But instead we fling our arms wide open to the lowlifes, the criminals, the people who have no intention of contributing to our country. We spend billions of dollars taking care of them as they, in many cases, run rampant through our streets, doing many things you’re not supposed to do. But the great ones, we immediately expel. - -Wouldn’t it be better if we invited foreign students graduating from our colleges to stay to build American companies, instead of foreign companies that will be wreaking havoc on Boeing, Caterpillar, and many of our other great American companies in the future? - -If we adopted this commonsense merit-based approach, our immigration policy would be guided by what benefits America. That’s the way it ought to be. If American businesses need immigrants with particular technical skills, by all means, let’s hire them. The privilege of becoming an American citizen should be about the value an immigrant brings to our country, not about an open door for anyone and everyone who wants to come here. - -Bottom line: living in America is the greatest blessing a person could ever receive. 
If people want to live and work here, they should bring something to the table, not just be feasting off it. - - - - - -The 5-Point Trump Plan - - - -Now, as for what to do about illegal immigration, we should follow the repeal of the anchor baby provisions with a five-point program—a smart and humane plan to get illegal immigration under control. It starts with securing our borders. Look, if a nation can’t protect its own borders, it ceases to be a country. We’re not just some landmass that anyone who wants to can trample on at will. I believe America is an exceptional nation worthy of protection. That requires getting tough on border enforcement. We can and should have a robust debate over whether that means continuing to build the physical border fence or utilizing “virtual fences” that use lasers as trip wires to monitor illegal border crossings. - -From the research my people have shown me, I’m not impressed with the mediocre success rates of the current crop of virtual fences that have been developed and tested. I am, however, impressed with the success of the double- and triple-layered fence in places like Yuma, Arizona. The wall there is a serious 20-foot wall. It has three walls separated by 75-yard “no man’s lands” for border agents to zoom up and down in vehicles. It also has cameras, radio systems, radar, and pole-topped lights.23 “This wall works,” says U.S. border patrol agent Michael Bernacke. “A lot of people have the misconception that it is a waste of time and money, but the numbers of apprehensions show that it works.” After the triple-layered fence was installed, the 120-mile stretch of the U.S.-Mexican border known as the Yuma sector experienced a 72 percent plunge in illegal immigrant apprehensions. Before the fence was installed, 800 people were apprehended attempting to enter America each day. Post-fence, that number was 50 or fewer.24 - -Some say Yuma’s flat terrain makes it a special case and that other parts of the border aren’t conducive to that kind of fence. In that case, we just need to be ready to build other kinds of fences, too. The point is that properly built walls work. We just need the political will to finish the job. And by the way, finishing the job will employ a lot of construction workers. Moreover, I call on Congress and the president to hire another 25,000 border patrol agents and give them the aerial equipment they need, such as Predator drones, to provide real-time aerial reconnaissance information to agents guarding the border wall. - -Second, we need a president who will enforce our laws. Right now, in a sneaky attempt to appease the strong and well-organized pro-amnesty lobby, the Department of Homeland Security has, on Obama’s orders, put a freeze on the deportation of 300,000 illegal immigrants. The administration says it wants to review each case individually and will only deport illegal aliens with criminal records, and that “no enforcement resources will be expended on those who do not pose a threat to public safety.”25 - -This wholesale abdication of a president’s constitutional duties is as shocking as it is foolish. It’s political pandering of the worst kind. Worse, Obama has said these aliens who were slated for deportation can obtain work permits!26 So in Obama we have a president who is not only not enforcing our laws, he is helping illegal immigrants to break them further! Obama wants to reward illegal immigrants by giving them the chance to take yet another American job. “The lesson for illegal aliens,” says James R. 
Edwards Jr., coauthor of The Congressional Politics of Immigration Reform, is that if they get “caught, they can escape immigration trouble, win legal status and seek a work permit.”27 - -How can we ask our brave U.S. border agents to risk their lives when the commander in chief is just going to shrug his shoulders and let 300,000 illegals make a mockery of our laws? It’s a total disgrace. Obama should be ashamed to play politics on an issue of such national importance. But he’s not. He thinks it’s cute and makes jokes about it, and he thinks it will win him votes on the insulting assumption that Latino Americans don’t care about America’s laws. The evidence is clear that President Obama certainly doesn’t care about America’s immigration laws. After all, two of his relatives—his uncle Omar Onyango Obama (arrested for drunk driving in Massachusetts) and his aunt Zeituni Onyango—are illegal aliens who have magically avoided deportation, with his aunt having finally been awarded asylum. Republican Congressman Steve King of Iowa has called for congressional hearings into whether the White House intervened on behalf of President Obama’s relatives. But of course the bigger scandal is that it is Obama administration policy to give special treatment to all illegal aliens—to treat them as if they are legal. - -You just can’t make this stuff up. Can you imagine the national firestorm the liberal media would have stoked had President George W. Bush had not one but two illegal immigrant family members who had received special treatment and been permitted to stay in America?28 Or what if President Bush had failed to enforce environmental laws and gave orders to federal agencies to help businesses break such laws? Democrats would have called for Bush’s impeachment. But not this president. The liberal media protect Obama every way they can. - -The third thing we need to do is overturn Obama’s insane new ICE recommendations for illegal immigrant detention facilities. In an effort to coddle illegal aliens, officials at nine detention facilities have now been instructed to make the following changes: - -• Soften the look for the facility with hanging plants, flower baskets, new paint colors . . . wall graphics and framed pictures on the walls, and enhance the aesthetics of the living areas.... - - - -• Expand programming for detainees to include movie nights, bingo, arts and crafts, dance, walk and exercise classes, health and welfare classes, basic cooking classes, tutoring and self-paced computer training on portable computer stations.... - - - -• Provide celebrations of special occasions and [allow] a detainee to receive outside, packaged food for celebrations.... - - - -• Provide fresh carrot sticks and celery or other vegetables in a bar format.... - - - -• Provide self-serve beverage bars.... - - - -• Offer water and tea in the housing area at all times. - - - -• Provide a unit manager so detainees have someone available to talk to and to solve problems in the facility other than the immediate guard.... - - - -• Survey community-based immigration advocacy groups and immigration attorneys for suggestions that may improve communication and ease of access.... - - - -• Increase availability of legal supplies and postage ... for legal correspondence. - - - -• Add research resources at the law libraries.... - - - -• [Provide] non-penal clothing for detainees to wear. - - - -• Eliminate lock downs and lights out.... - - - -• Reduce the frequency of and . . . wholly eliminate pat down searches.... 
- - - -• Provide four or more hours of recreation in a natural setting. . . . - - - -• Provide internet-based free phone service. - - - -• Provide email access for detainees....29 - - - -That’s right, your government now requires resort-like accommodations—paid for by you, the American taxpayer—to reward the flood of people entering our country illegally. Obama has turned America into a laughingstock. Our next president must stop this insanity. - -The next part of my plan involves opposing the so-called DREAM Act, which grants in-state tuition benefits at public colleges and universities to illegal immigrant college students. The Development, Relief, and Education for Alien Minors (DREAM) proposal is yet another attempt by Obama and his pro-amnesty pals to create new anchors and rewards for those who defy our laws. - -So get this: under the DREAM Act, if you’re not a citizen but a child of illegal immigrants, then you get in-state tuition benefits, but if you’re a legal citizen living out of state, you have to pay higher tuition. So an American student in Texas who wants to go to college in Arizona will have to pay more in tuition than a non-citizen student living illegally in Arizona. How fair is that? The fact that legislation like the DREAM Act has even seen the light of day shows you just how upside down our immigration policies have become—and just how far politicians are willing to pander to what they see as a Latino voting bloc. Sacrificing American laws on the altar of political expediency is immoral. If Congress is ever foolish enough to pass legislation that grants tuition breaks for illegal aliens, America’s next president must have the political courage and constitutional conviction to veto it. - - - - - -Democrats need to respect our laws, respect the fact that Latino Americans are as interested in the rule of law as anyone else, respect the immigrants who are patiently and lawfully standing in line for legal citizenship, and most of all respect our own citizens who should not have the rule of law, their jobs, even their lives and their nation’s future put at risk by irresponsible Washington politicians. That’s the sort of “hope and change” we need, not a commander in chief who thinks border security and the rule of law is a joke. - - - - - -TEN - - - -THE AMERICA OUR CHILDREN DESERVE - - - - - - -Barack Obama has done an incredible job of tarnishing our kids’ futures. - -He’s saddled our children with more debt than we accumulated in 225 years in America. He’s bowed down to China and allowed them to steal our economic future through currency manipulation and ripping off our technological and military secrets. He’s failed to stand up to the Middle Eastern oil mobsters known as OPEC who think they can hold us hostage through higher prices at the pump. He’s unleashed Obamacare on our small businesses and brought job creation and hiring to a screeching halt. - -And he’s created an economic climate where young people out of college face incredible uncertainty. Three years ago, 90 percent of college graduates landed a job right out of school. Today, under the community organizer, that figure is a depressing 56 percent.1 - -I love America. I’m saddened by what I see happening to our country. We’re being humiliated, disrespected, and badly abused. Obama was a leftist experiment that has failed and gone horribly wrong, and everyone knows it. Even friends of mine who voted for the guy privately admit that he’s been a huge disappointment. We cannot afford four more years of this mess. 
Our children’s futures are on the line—and we have to come through for them. We have to get tough so that our country can be great again. - -We have to get tough on OPEC. These oil thugs rip us off year after year. We’ve had no leadership in Washington willing to stand up to them and put a stop to it. We have spent hundreds of billions of dollars and thousands of lives in Iraq, and now Libya, and gotten nothing in return but disrespect and ingratitude. That must end. Now. I say we take the oil. No more free military support. Either you pay us to defend you or we take the oil. It’s fair and smart, which is probably why the politicians in Washington haven’t implemented it. - -We have to get tough on China. For every one American child there are four Chinese. China is out to steal our kids’ jobs, and so far they’re doing a tremendous job of it. They’ve manipulated their currency to such an unbelievable degree that they have destroyed our manufacturing sector. It’s time to bring American manufacturing back to life. It’s encouraging that the U.S. finally did as I have been saying for over a year and got tough on China’s currency manipulation. The president should sign that measure into law effective immediately. Unfair trade is not free trade. China cheats and wins to the tune of more than $300 billion a year. No more. They must either play by the rules or pay the price. End of story. - -It’s time our leaders in Washington wake up and realize that China’s massive military buildup is producing weapons with our names on them. Our kids are the ones who will be facing down the Chinese in the years to come. If we don’t get tough and put a stop to their rampant theft of our military and technological secrets, we will have failed the next generation of Americans miserably. Those who pretend China is our friend are either naïve, incompetent, or both. The Chinese can be reined in easily—we are their biggest customer. All we need is a president willing to stand up, not bow down, to China. - -I believe we are at a monumental fork in the American road. I always say that the next election is the “most important election” in our lifetimes. Our national debt is at $15 trillion. Just imagine what Obama will do if he knows he’s not facing reelection. What kind of damage to our national security and international standing will we suffer over the next four years with Obama bowing down to a rising Russia and China? It’s horrifying to think about. - - - - - -America the Exceptional - - - -But maybe my biggest beef with Obama is his view that there’s nothing special or exceptional about America—that we’re no different than any other country. If a guy is that clueless about the character of America, he has no business being the leader of America. Our country is the greatest force for freedom the world has ever known. We have big hearts, big brains, and big guts—and we use all three. In the past our free market capitalist system has created more wealth and prosperity than any government-controlled economy could ever dream of doing. Because of that wealth, we give more in charity than any other country, and twice as much as the second most generous nation.2 - -Obama thinks America would be better off if we acted more like European socialist countries—many of which are in default and economic freefall. I think America would be better off if we ditched the community organizer experiment and got back to being the America we’ve been since our Founding Fathers risked their lives to create our country. 
- -In 2012, our nation needs to send Barack Obama a message. We need to say loud and clear: Mr. President, we’re not interested in your utopian vision of “fundamentally transforming America.” We like the vision our Founding Fathers and the Constitution created just fine. If that’s what you meant by “hope and change,” you can keep the change. We’re not interested. - -We need a leader who will get tough, get smart, and get America working again. I believe America is worth fighting for. I believe America has nothing to apologize for. I believe America can get back her greatness. But we need a tough leader for tough times—someone who isn’t afraid to do hard things. We must find and elect that leader so our children and grandchildren will inherit an America as free and safe as the one we grew up in. The price of failure is too great—we have to succeed. The fate of freedom rests on it. - -We have to get tough on the notion that government is the solution to every problem. It’s not. As President Reagan said, “Government is not the solution to our problem; government is the problem.” Barack Obama is the most liberal president the United States has ever dared to elect. When he was running for office, Obama warned the country that his goal was to “fundamentally transform America” and that he believed in “spreading the wealth around.” Now, with three years under his belt, America looks like an economic wasteland. One out of every five men you pass on your way to work is out of work. Every seventh person you pass on the sidewalk is now on food stamps. Forty-six million Americans—more than at any time ever in the history of this country—now live below the poverty line. Businesses are being shuttered. Foreclosure rates are at historic highs. For the first time in American history, the United States has lost its triple-A credit rating. Gas prices have doubled. Our national debt has exploded. Jobs and economic growth are nowhere in sight. - -How can we feel good about handing over this mess to our children and grandchildren? How can we think about the hundreds of thousands of soldiers, sailors, airmen, and Marines who have died for our freedom and way of life and not be ashamed of how we’ve allowed their gift to be trashed and abused? It’s a total disgrace. If we’re going to turn this thing around, we have to do it fast. - -It’s time to get tough. The time is now. - - - - - -AFTERWORD - - - -THE PRESS AND THE PRESIDENCY - - - - - -In the spring of 2011, Lally Weymouth, daughter of the late, great Katharine Graham, publisher of the Washington Post, invited me to go to the White House Correspondents’ Dinner. I had turned down so many of Lally’s invitations in the past that I thought accepting her invite would be the right thing to do. I knew I was probably being set up by the media, but that’s okay as long as you’re prepared for it. - -When I arrived at the event in Washington, thousands of people were packed into DC’s biggest ballroom. The White House Correspondents’ Dinner is the Academy Awards of politics. News reporters, political operatives, celebrities—you name it, they’re all there. As I walked in, the paparazzi and press were going crazy. “Mr. Trump, Mr. Trump,” they shouted, “do you think the president will mention you in his speech?” I said, “I have absolutely no idea. I never thought about it, I sincerely doubt it, and why would he mention me?” I said this honestly despite the fact that I was at the top of the polls without even campaigning. 
The truth is, if I were in Obama’s position, I probably wouldn’t have mentioned the name Donald Trump, especially since I was hitting him hard on his birth certificate and asking why he wouldn’t just show it and get on with dealing with the serious issues our country faces today on debt, unemployment, and China, among others. - -In any event, the festivities started, people went to the dais and made speeches, and eventually a third-rate comedian named Seth Meyers (somebody who in my opinion has absolutely no talent) got up and spoke. He was nervous, shaking, and sounded like he had marbles in his mouth. He made a crack that Donald Trump’s candidacy was a joke or something to that effect. It was quite nasty, but I’ve had a lot worse said about me. - -Then the president got up. As part of his routine they had a picture of the so-called birth certificate blown up on a large screen. And while the president was smiling, I knew inside he wasn’t. Then, they showed a picture of the White House with “Trump White House” written on top of it like a hotel sign, which was cute. The president spent a lot of time telling jokes about me. I didn’t quite know how to react. Should I be laughing? Smiling? Frowning? I wasn’t sure, so I decided to keep a straight face, with a few little smiles every once in a while because I knew the cameras were on me. The fact is, I loved the evening and I loved what the president was saying because even though they were jokes, he was telling them in a nice and respectful way and he did a good job telling them. And while I shouldn’t admit this, I don’t mind being the center of attention, especially on such an evening. - -Sitting at another table was a beautiful blonde woman who turned out to be supermodel Brooklyn Decker, wife of Andy Roddick (a wonderful guy and a terrific tennis player who has never received his fair due). Brooklyn was not happy. Lally Weymouth was laughing her head off and other people were laughing like crazy. They thought it was hilarious that I was being roasted, but Brooklyn Decker actually looked angry. Months later, Brooklyn and I met at Anna Wintour’s fabulous dinner at the Metropolitan Museum of Art. I thanked Brooklyn for her classy attitude and she knew exactly what I was saying. She is a terrific person and will continue to go far. - -In any event, as the president was telling joke after joke, I tapped my wonderful wife, Melania, on the knee and said under my breath, “Baby, do you believe this? This is amazing. The president of the United States is doing nothing but talking about me.” I loved it! I was having a great time! In fact, walking out of the ballroom, people were high-fiving me. They couldn’t believe what they had just witnessed. It was a stellar night. - -The next morning, I picked up the newspapers. The press was brutal. They said I was ridiculed, refused to smile, and was deeply embarrassed. I realized then and there that political life is not real life. The media can distort the truth, and everyone thinks that’s what really happened. I had a great time, but the press made it seem just the opposite. So for the record, the White House Correspondents’ Dinner was a real highlight for me, and I loved it immensely. - - - - - -The Press - - - -What I don’t love are wannabe “journalists” who are obsessed with protecting Obama, and “reporters” who try to ride my coattails to make a name for themselves and compensate for their lack of talent. Take, for example, MSNBC. 
They have this guy called Lawrence O’Donnell whom I seldom watch (and neither does anybody else). His ratings are terrible. So bad, in fact, that they just moved him from the 8:00 p.m. time slot because Bill O’Reilly was absolutely killing him. - -This O’Donnell guy’s hatred of me was absolutely laughable. He would rant and rave about me like a total lunatic. I don’t think he has a huge career in television—at least I would be very surprised. A year ago, he had strongly picked Tim Pawlenty to win the Republican nomination. Obviously, that turned out wrong. His media bookers, who are very nice, keep calling my office begging me to do his show. Our response is simple: I only do shows that get good ratings. I don’t want to waste my time. - -One bright spot on MSNBC is Joe Scarborough and Mika Brzezinski on a show called Morning Joe. I don’t always agree with what they say, but it’s a vibrant, entertaining show. They have a great future in television or anything they want to do. My only suggestion for Joe and Mika is that they be more forthcoming, because often last year Mika and Joe would call me to say they were making a speech to a big audience and wanted to know if it would be okay if they could call me on my cell phone so I could say a few words during their speech. Whenever possible, I would agree to it. Mika usually would be the one to call and I would be speaking by cell phone to hundreds or thousands of people. Mika said to me, and I am sure she won’t deny it, that every time they make a speech all the people ask about is Donald Trump. They want to hear about Donald Trump, they want to know what it is Donald Trump thinks. The only problem is they don’t say that on the air. In fact, recently, there was a beautiful picture of Mika making a speech holding up her cell phone and Joe said, “Oh, look at the cell phone . . . I wonder what is going on there.” Mika, knowing exactly what Joe was alluding to, said, “Uh, well something . . . ” and that was the end of it and they went on to the next topic. That doesn’t change my love for Joe and Mika. They are great people and they are very talented, but they should be a little bit more open about how I help them. - -One guy I find somewhat irritating, but have actually come to like in terms of viewing, is Bob Beckel, the resident liberal on Fox News. I don’t know the guy, but every time in the past when my name was mentioned, Beckel would say, “Well, what does he know? He went bankrupt.” - -When I heard this, I asked one of my people to call Fox and explain that I never went bankrupt. Over the years, I, Carl Icahn, Henry Kravis, Cerberus, Apollo, and many of the biggest names in business have used the nation’s laws to do and turn great deals. I have used the laws for certain companies to reduce debt and turn them around. I have also bought companies and immediately thrown them into Chapter 11 in order to renegotiate with banks, and made great deals because of it—I followed the law, and what’s wrong with that? Now, if they change the laws, I will find another way to gain maximum advantage. But the bottom line is I never went bankrupt. Anyway, I have to give credit to Beckel, because after we set him straight, he has stopped making his mistake and now I enjoy him much more. - -Someone I don’t enjoy is the clown called Bryant Gumbel. He has failed on so many shows I’ve lost count. At any rate, Gumbel goes on HBO to cover the amazing story of a golf course I am building in Scotland. 
Trump International Golf Links is being built on the largest dunes in the world. When completed, there won’t be anything like it. In fact, I and others predict it will be the greatest golf course anywhere in the world. It’s a spectacular project, but it’s been controversial because some environmental groups are opposed to developing the Great Dunes of Scotland, which I happen to own. So Bryant gets on HBO to do a story on the amazing golf course and goes off on a rant about me and Obama and tries to paint me as some kind of racist, which I am the furthest thing from being. Here he gets this big story on almost 2,000 acres along the North Sea in Scotland being transformed into the world’s greatest golf course, and what does he do? He launches into a temper tantrum about Donald Trump. What a jerk! About the only thing I admire about the guy is his taste in real estate—he bought a couple of apartments in one of my many buildings. - -Far better than Bryant Gumbel is Piers Morgan. After winning Celebrity Apprentice, Piers Morgan became a star and took over for Larry King on CNN. His show is terrific. One day Piers called me and asked me whether I would call in to his show, a privilege I receive that few others get because they’d rather have me on the show by phone than not at all. So I told him I would. - -It turned out that he had a guest that night who was, of all people, then-Congressman Anthony Weiner. Interestingly, this was shortly before he imploded with his death-wish antics of sending nude photos of himself to women he had never met. Think of this: a celebrity politician, well known, doing such a thing. What a loser! - -About a month prior to doing the show, Anthony called me. I knew him somewhat. He asked me to support his bid for New York City mayor. I told him strongly that I liked Mike Bloomberg, who would not be running again, and that it was a little early to start thinking about it, but sure, he could come see me if he wanted to. We had a very pleasant conversation. In fact, it could not have been nicer. - -So when I called in to Piers at about 9:10 p.m., I learned that Anthony was in studio and would participate in our conversation. They asked me if that was okay and I said it was fine. - -Piers started off by asking me a question and then all of a sudden, out of nowhere, I heard this maniac Weiner ranting and raving, with great anger, about all sorts of things. He said I would never be president. I said to myself, Wow, is this the same guy that just called me about campaign contributions? Then I attacked him, because I always believe when attacked, hit your opponent back harder and meaner and ideally right between the eyes. Weiner said in a snide voice that he was on pace to be New York City’s next mayor, whether I liked it or not. I told him that wasn’t happening, at least not according to the polls and what people in the city were saying about him. He rambled for a bit and then I said, “A lot of people are leaving the city if you end up winning.” - -Little did I realize that a few weeks later this moron would explode, and my prediction that Anthony Weiner would not be mayor of New York City would be so prophetic and be proven correct so quickly. - -As you probably know, my show Celebrity Apprentice has been one of NBC’s biggest hit shows and a huge moneymaker for eleven seasons. I have a lot of rich friends who tell me they would kill to have their own hot reality show. 
Not for the money, mind you, but because it creates such a powerful brand presence and is a lot of fun to do. I tell them to go for it, but for whatever reason—personality, looks, stage fright, lots of reasons—they say they could never do it. But they give me a lot of credit for being able to pull it off. - -Last season, The Apprentice was usually the #1 show in the 10:00 p.m. time slot, which is the most important time slot because it leads into the local news. It’s been a winner right from the beginning. Here, for instance, is a Variety ratings chart on its first season. - - - - - -Right now we’re shooting the twelfth season, which will debut right after the Super Bowl. And let me tell you, NBC is going to be happy, because we have the best cast I think we’ve ever had. - -Nevertheless, when I announced I was thinking of running for president, some of NBC’s news people absolutely smashed me. One of them was Chuck Todd. I call him “Sleepy Eyes” because he looks like he is falling asleep when he speaks. No matter what I did or how hot I was in the polls, Sleepy Eyes Chuck Todd refused to say it. I would call him and say, “Chuck, it’s not fair what you’re doing.” He would say, “Okay, I’ll change it,” but he wouldn’t change. The thing I find most offensive about Chuck Todd is the fact that he pretends to be an objective journalist, when in reality the guy is a partisan hack. I was very disappointed in Chuck Todd. Needless to say, he’s no Tim Russert. - -Look, I love NBC. They were the ones who really understood how big The Apprentice was going to be. ABC made an offer too, but NBC had the vision. I give Bob Wright and Jeff Zucker tremendous credit. They wouldn’t let Mark Burnett or me go anywhere else. They practically locked the doors at 30 Rock until we signed. - -I also think Bob Greenblatt will do a fantastic job with prime time, but they need a lot of help. Steve Burke and Brian Roberts of Comcast are going to be amazing. I already see a big difference. So I love NBC. They are very special to me and I want to see them succeed. I’m sure they will—in spite of lightweights like Lawrence O’Donnell, Sleepy Eyes Chuck Todd, and that goof Ed Schultz. I don’t say that with mean-spiritedness. It’s just that lackluster talent offends me. People like Matt Lauer and Jim Bell’s Today show team are great. I hope NBC will be able to re-sign Lauer when his contract comes up so he will continue there for many years to come. I also think David Gregory is doing a fine job filling some awfully big shoes over at Meet the Press. David’s been tough but fair—and that’s all you can ask. - -I do, however, think NBC made two big mistakes recently. One, they let CNN steal Erin Burnett away. I don’t think Erin will be as successful on CNN, because it’s very hard to do well in certain corporate cultures. Letting Erin get away was a big loss for NBC. Second, they named Brian Williams’s new show Rock Center, a horrible name for a show—and names of shows really matter. Brian is fantastic. But Rock Center will never work, and if they get four or five million viewers a night it will be a lot. I hope it works out, but I think it’s going to be a very long, hard road. Now, if they did it in Trump Tower and called it Trump Tower, it would, of course, be a smash hit! - -One person who was very critical of me last spring but who hasn’t spoken up lately is Karl Rove. I didn’t know Rove, but he asked to see me quite some time ago, way before I talked about running for president. 
He came to my office and asked for money for his PAC. I think I gave him $100,000 or maybe more. When I was giving him money, he was a very nice guy. “There’s nobody like you, Mr. Trump,” he said. But then I decided—without consulting him, I guess—to consider a presidential run. I was quickly #1 in the polls, and Rove said something to the effect that mine was a clown candidacy. He already had a favorite candidate, so he felt he had to torpedo me because I was a threat. I really went after Rove. Since then, he’s become nice and respectful. But I would say this: if he attacks me again I’ll go after him like nobody has ever gone after him before. I didn’t mind his statements about me, but I thought it was a terrible move after I gave him a six-figure donation. Life doesn’t work that way—not in my world. Very stupid, very disloyal. - -Plenty of media guys, like George Stephanopoulos, are big Obama fans. I like George a lot. But it was incredible to see how overprotective reporters got toward Obama when I simply said what everyone in America was thinking: “Where’s the birth certificate?” I didn’t actually bring up the whole birth certificate question at first—I wanted to talk about how China and OPEC are ripping us off and how we need to get tough on Iran, taxes, reckless spending, and repealing Obamacare. But when George brought it up during an interview on Good Morning America, he literally sprang out of his chair and started screaming at me for even questioning Obama. It was amazing. If the president were a Republican, the press wouldn’t be so protective. But Obama? He must be guarded and treated with kid gloves. - -I never understood why Obama would allow the question to hang around. Why not just produce the birth certificate and be done with it? Get it out there and move on. So I was very proud that I was able to finally get him to do something that no one else had been able to do. For the record, I’m not saying Obama wasn’t born in the United States. However, multiple questions still surround the hospital records, his grandmother’s statement that he was born out of the country, and his family members’ statements that they weren’t sure which hospital he was born in. As for the birth certificate I got him to produce, some people have questioned whether it’s authentic. Maybe it is, maybe it isn’t. That’s for experts to determine. But if Obama’s liberal media pals don’t like my answer, stop asking the question. - -Nothing irritates me more than a double standard, and yet that’s what we see with liberal media types. Take Jon Stewart. I actually enjoy the guy, but when he did a segment mocking presidential candidate Herman Cain, and used a very racist and degrading tone that was insulting to the African American community, did he get booted off the air like Don Imus? No. Where was the Reverend Jesse Jackson? Where was the Reverend Al Sharpton? Where was Sleepy Eyes Chuck Todd to provide hard-hitting journalistic “analysis”? Nowhere. Stewart should have lost his job—at least temporarily. But he didn’t and he won’t because liberals in the media always get a free pass, no matter how bad their behavior. - -Disappointing behavior by people in the press occurs on both sides of the aisle. A conservative commentator on Fox News, Charles Krauthammer, was really hitting me hard last spring. He couldn’t believe I was #1 in the polls and kept knocking me. Now, you have to understand, he didn’t know me, he never met me. But one day on Bill O’Reilly’s show, Krauthammer hit me so hard it was ridiculous. 
He said mine was a joke candidacy or something to that effect. So O’Reilly sent Jesse Watters to New Hampshire to get my response. I let Krauthammer have it. I was very tough, some would say vicious, but I was tough because Krauthammer had been tough to me. - -The next day I turned on the show and they didn’t air my response. I called O’Reilly and said, “Bill, what happened? Krauthammer can talk about me but I can’t talk about him?” Bill gave me what I considered a weak reason as to why he wouldn’t play my response and I left it at that. I think Bill O’Reilly is terrific, and I think Greta Van Susteren, Sean Hannity, and Neil Cavuto are as well. These are outstanding people who get big ratings and do a fantastic job. But in this case I thought Bill was wrong. I should have been allowed to rebut Krauthammer as a matter of fairness. - -In any case, there’s a reason Fox News has such high quality programs and phenomenal ratings. His name is Roger Ailes. Whether some people like it or not, Roger Ailes, the creator of Fox News, working together with Rupert Murdoch, is one of the great geniuses in television history. Roger can look at a person and instantly tell whether that individual will grab ratings. In addition to O’Reilly, Van Susteren, Hannity, and Cavuto, Roger has numerous others who do amazing work. People like Bret Baier. I also love the team on Fox and Friends with Gretchen Carlson, Steve Doocy, and Brian Kilmeade. They’re smart, quick, funny, and really know what’s going on. The Fox morning show is a tremendous success due to its three talented hosts and the wonderful Roger Ailes. I really enjoy being on it. - -Guys like Ailes understand that ratings rule. When I have friends with television shows that aren’t doing well, they just can’t understand why they’re being canceled. I tell them this: I have learned that entertainment is a very simple business. You can be a horrible human being, you can be a truly terrible person, but if you get ratings, you are a king. If you don’t get ratings, you are immediately canceled and nothing else will matter. - -I happen to get ratings, and always have. Larry King used to tell me, “You get the highest ratings.” Everybody wants me to be on their show, not because they like me, not because I’m handsome and have great hair, but because I get ratings. To tell you the truth, I’m not exactly sure why. I don’t want to be provocative, and in many cases I try not to be provocative. But I think the reason millions of people follow my views on world events is because they know I understand that our country is being ripped off by OPEC, China, and other countries. They know America is in big trouble if we don’t get back on the right track. And they know I’m not afraid to tell it like it is. It’s not that they like or love me—it’s that they respect what I have to say, believe the same thing themselves, and know that I’m right. - -I’m also told that many people have a general interest in the details of my life and the people I work with in show business, particularly since they’ve seen all the amazing talent I’ve had on Celebrity Apprentice. It’s always fun to see the kinds of questions people ask in the letters and emails my office receives. I enjoy working with stars and seeing their careers grow. - -One of the most interesting and special people I’ve gotten to know is Lady Gaga. About five years ago, when she was a total unknown, Lady Gaga was the entertainment for the Miss Universe Pageant, which was held that year in Vietnam. 
I own the Miss Universe Pageant and have made it very, very successful. One day my people came to me and told me about a young woman they called Lady Gaga who nobody had ever heard of. We put her on as the entertainer in the middle of the pageant, which is broadcast internationally. I thought, “Wow, she is really, really good.” The next day, it was crazy. Everybody was talking about how good Lady Gaga was—“Who is she, where is she? She’s going to be someone big, she’s amazing!” Well, she became a big star and maybe she became a star because I put her on the Miss Universe Pageant. It’s very possible, who knows what would have happened without it, because she caused a sensation. - -A couple years later, she opened in New York and was hotter than ever at Radio City Music Hall. I was there and happened to be sitting with a large group of very major celebrities. I won’t mention their names because I’m not looking to embarrass anybody. Gaga gave a fantastic performance and, after she was finished, her manager came and shouted, “Mr. Trump, Gaga wants to see you, but only you, nobody else can come.” Now, here you have major singers, musicians, and television personalities and the manager is shouting to me, “Mr. Trump, only you and nobody else.” I went back with my wife, Melania, and talked to Lady Gaga for about forty-five minutes. She’s a fantastic person, solid as a rock, and I’m very proud of her success because I really believe I had at least something to do with it. - -No matter if you’re talking about media from the entertainment world or news shows, the media bookers all try to get me on their programs to help boost their ratings. Because I operate in both entertainment TV and current events shows, I have a keen understanding of how various moves affect ratings. For instance, I told Jeff Zucker, who previously ran NBC, “Jeff, don’t move Jay Leno. He’s #1 in the evening and when you are #1, you don’t move. In fact, not only is he #1, he is a strong #1. Don’t move Jay Leno—it is a terrible mistake.” - -I warned them that it would be the first time in history somebody’s going to be taken out of the #1 position and moved and told them it would turn Leno into the equivalent of a lame duck president. In any event, they did it and Conan went on. To put it mildly, it didn’t work. Jay went back to his original time and has never been the same again. His show’s ratings are way down from what they were—he has never fully recovered. - -I was actually doing the Jay Leno Show the night he was told that this move was going to be made and, even though it wasn’t going to take place for five years, I could see that he was devastated, confused, and didn’t know why they were doing it. I didn’t either. It turned out to be possibly the greatest mistake in broadcast history. - -Politics and television are nasty businesses. When the two collide, things get even nastier. As an example, Jay Leno—he knocks the hell out of me on the show but always wants me to be on. The interesting thing is, even the ones that really go after me want me on the show for one reason and one reason only: I am a ratings machine. - -Still, no matter how good your ratings are, sometimes you can’t stop the press from running stories that are totally false but that they know will grab viewers or readers. To show you how dishonest the press is, I recently sold a house for $7.15 million. It was a house I had built at my great Trump National Golf Club in Los Angeles. 
The home is in a beautiful location fronting the Pacific Ocean with views of the course. This house was originally built for someone else who was unable to get financing from our country’s wonderful banks and defaulted on $1.5 million. The house, which is one of seventy-five lots I own facing the ocean, cost me very little above that amount, so I had the house for almost nothing. - -I listed the house for $12 million knowing I couldn’t get anywhere near that but figuring it’s a great way to negotiate. The buyer paid me $7.15 million, which was a substantial profit on that individual parcel. - -The dishonest press smelled blood. Headlines raged that I had taken a major haircut on my home, as if I were selling my own personal house, not one of many in the development. In actuality, I had only been inside the house one time for five minutes to check it out. But it didn’t matter. We tried to correct the newspapers, but the Los Angeles Times and others got the story totally wrong. In fact, one reporter told one of my lawyers that he knew we were right, that it wasn’t my personal house, and that he knew the sale was almost all profit. “So why did you write it that way?” my lawyer asked. “Because it doesn’t make for a good story,” the reporter told him. That’s how dishonest the press can be. - - - - - -The Presidency - - - -In all my years in business and participating in politics I’ve never seen the country as divided as it is right now—and I’ve seen bad times. Voters’ hatred of both Democrats and Republicans is beyond anything I have ever witnessed. A great leader can bring America together. But unfortunately for us, Barack Obama is not a leader. So who can the country turn to? - -I have been saying for a long time that it is very hard for a truly successful person to run for political office. Your rivals and the press will take every deal you’ve ever made, even the best of them, and make them look bad. You could have built a $7 billion+ net worth, but it doesn’t make any difference, because they will make you look as foolish as possible. A guy like Obama has it much easier. He had never done a deal before except for the purchase of his house which, in my opinion, was not an honest transaction. A smart investigative reporter should definitely look into that because any objective examination of the facts reveals there was definitely something fishy going on. But that was the only deal Obama ever did. He hasn’t done hundreds of deals like very successful people do, where we employ thousands of people and have to manage numerous complex enterprises. So he had an easier go of it. - -Mark Burnett, my good friend, partner, and the best producer in television, really wanted me to continue with The Apprentice and not run for president. Mark’s big shows are The Apprentice, Survivor, and now the hit show The Voice. He said, “Donald, I think you would be an incredible president, but you are far too successful to run. You’ve done too many deals and too many things. They’ll go after every single deal you’ve ever done and, even on the best of them, will try to make you look bad.” - -So essentially, Mark was voicing what I had been saying for the last two years—that a very successful person cannot run for political office (especially the presidency) and isn’t that sad, because that’s the kind of person and thinking we need to bring the country back. - -Either way, when I was leading in the polls, I committed an unforced error. I was asked by a friend to make a speech in Las Vegas in front of a small group. 
I agreed. A couple hundred people were expected, mostly Republican women, and it was no big deal. Or so I thought. But when they announced that I was going to speak, thousands of people showed up. The owner of the hotel, a great man named Phil Ruffin, one of the smartest investors around, told me it was the most people they had ever packed into the ballroom at the hotel. The place was mobbed. Everybody was happy. They were thrilled, and in the room you had lots of good, tough Las Vegas people who I can’t believe will ever vote for Obama, especially after he told people not to go to Las Vegas. - -We had thousands of people there, it turned out to be wild, and I made a mistake. I catered to that crowd. They absolutely loved the speech and I used some foul language which, with that crowd, went over phenomenally well. But unfortunately they had cameras in the room, which I didn’t see, and only those parts of the speech where I used strong language ended up being shown throughout our nation. - -I wish I hadn’t done it. It got a lot of press but some people were turned off by it. I’m not a big curser but it did take place, and I will say the people in the room loved that speech, because we’re not living in a baby world. It’s a rough, mean world where everybody’s out to get everybody else and where other countries are out to get the United States, and they are doing a pretty good job of it. So I got fired up and the crowd did too. - -Of course, Joe Biden dropped the f-word in front of the entire media on a stage with the president. But Biden gets a pass because he’s with Obama, and as we all know, Obama can do no wrong in the media’s eyes. - -In my opinion, our president is totally overrated both as a person and as a campaigner. The press has given a false impression of him as a brilliant student (which he was not), a brilliant leader (which he is not), and a campaigner the likes of which we have not seen in many years. Yet now many Democrats are suffering buyer’s remorse and wish they had elected Hillary Clinton instead. - -Regardless, the Republicans are going to have a very tough race. Obama is harnessing all of the negativity he created and flipping it back on the people—a very smart, if cynical, strategy. I’ve never seen anything like it. The guy is willing to rip the country in half to win. Sadly, it may prove to be a winning strategy. If I were doing as badly as he is, I would realize it is my only road to victory. - -I love my life and businesses, so I would rather not run for president. When people say I should run as an Independent, I remind them that it’s very hard for an Independent to win, though perhaps easier than ever before. Still, if the economy continues to be bad (which I think it will, due to incompetent leadership) and Republicans pick the wrong candidate (which I hope they won’t), I can’t completely rule out a run. Most people have never heard of a very stupid law—called equal time—that prevents someone with a major television show from running for political office. So Obama is allowed to go on television every day and can fly around the country any time he wants at taxpayer expense, but I’m not allowed to do The Apprentice and run for office at the same time. You tell me, is that right? Were it not for that ridiculous law I would probably be running for president right now and having a good time doing it—because America has tremendous potential, unbelievable potential, and it is being wasted.
- -I distinctly remember when I made my decision to sign for another season of The Apprentice, which put my run for the presidency on hold. It was a Friday about 7:00 p.m., and Melania was watching Entertainment Tonight, Access Hollywood, Extra, or one of the various entertainment shows she enjoys and frankly so do I. I sat down at the dinner table with the television blaring and watched as some of the biggest actors and actresses in Hollywood were hoping that their networks would pick up their show for another season. You see, the following Monday, NBC was having its big “Upfront” where they and the other networks announce their schedules for the year. So this was a tough time for actors because they wanted to know if their shows were going to be renewed. - -As Melania and I were watching, I’m seeing these big stars saying, “I hope they renew our show, our show is so great, our cast is so amazing, the ratings are okay.” Everything was, “I hope, I hope, I hope,” and I’m watching these major actors almost begging. That’s when I said to my wife, “You know, baby, it’s amazing. I have a show that is a big success and I have the president of the network calling me all day long saying, ‘Donald, Donald, we want you, we love you.’” In addition to that, I had a great guy named Steve Burke at Comcast saying to me something like, “Donald, we’d like you to renew, we’d like you to go for another season or whatever you want, please renew.” So I am saying to myself, here these network executives are calling me on an hourly basis wanting me to renew my contract for another season of a two-hour hit show on primetime Sunday, and I am telling them no and yet I come home and I watch the entertainment shows and all of these big name actors and actresses are hoping beyond hope that they are going to be renewed. At that precise moment I got a call from Steve Burke reiterating the fact that they would love to have me sign the contract. Right then and there I said to my wife, “Baby, you know what? This is ridiculous. I’m going to sign the contract with NBC.” - -My wife, Melania, who is considered by many, including me, to be one of the most beautiful women in the world, has amazing instincts. For years I would ask her whether or not I could run and win. And she would say, “Donald, people love you, but they wouldn’t vote for you for president.” When I asked her why, she said, “You’re a little wild and a little too controversial. They respect you, they think you’re really smart—the smartest of all—but enough people just wouldn’t vote for you.” - -So she told me this for a long period of time and then recently, as she’s watching political news on television and seeing all the things that are wrong with our country, she looks at me and says, “Darling, you know you’d win if you ran, don’t you?” I said, “What do you mean? You always told me I couldn’t win.” She said, “But now you could win, and maybe even easily. People really want you. I see it on the streets. People want you and they really need you.” - -This was a great compliment coming from a very smart woman. - -Some people have yet to realize how serious I was and am about running for the White House. In fact I was so close that I had already prepared the Public Financial Disclosure Report required of a presidential candidate. That’s a big deal because the Trump Organization is a private company, and people don’t know what I’m really worth. 
So I had the independent firm Predictive, which is used by government agencies and top companies like GM, Visa, Pfizer, and others, prepare valuations on branding, and we filled out the other areas of the long and complex presidential Public Financial Disclosure Report. So my forms were already completed when I told NBC I’d renew. I was ready to sign and submit the papers, which were completed in strict compliance with the instructions. Rather than waste the forms (and who knows, I may be filing them sometime later), I thought I would share the most important three pages with you in this book. These are three of the many pages of the completed submittal. The third summary page is probably the most important. - -My primary reason for running for the presidency would be to straighten out the mess Obama has made of our country. I have built a truly great company, one with unbelievable assets and locations that I believe are about as good as it gets. We have great asset value, cash flow, and very little debt. I want the American people to see this, because ultimately our country is, in a certain way, the exact opposite of my company. And whether it’s me or someone else, we need the kind of thinking that can produce this kind of success. For the sophisticated financial people who already know me, these numbers come as no surprise. For the miserable, petty, jealous wannabes who knowingly fabricate stories about me, maybe this will shut them up. - - - - - -Executive Branch Personnel PUBLIC FINANCIAL DISCLOSURE REPORT - - - - - -Donald J. Trump - -Summary of Net Worth - -As of June 30, 2010 - - - - - -By the way, in the spirit of transparency, these forms were completed before the very public purchase of the late billionaire John Kluge’s winery, which became embroiled in controversy and tens of millions of dollars of debt after his divorce. Now called the Trump Vineyard Estates, the winery is located in one of the best areas in the United States, Charlottesville, Virginia. Trump Vineyard Estates is more than 1,000 acres and has already received a great amount of publicity in the Washington and Virginia press and was featured on the cover of Town & Country magazine. Originally, it cost around $150 million to build and assemble. I bought it at auction for $6.2 million in cash. I pride myself on being liquid when few others are. That’s one of the reasons I was able to buy the Kluge estate for such a terrific price—cash. There were many people at the foreclosure auction who knew what an amazing asset it was, but they didn’t have the cash or would need bank financing at a much higher amount to close the deal. By the way, the reason I have so much cash is that, among other things, I’ve made some of the best branding deals around, especially recently. If our government were as wise with our nation’s cash, we wouldn’t be in the big mess we are in today. - -Some people think the presidency no longer matters, that the United States is finished. But let me tell you, the president makes all the difference in the world. If we get the right president, our country can become stronger and better and more successful than ever before. - -The Republican field has several good candidates in the race—most of whom have come to see me at my office in Trump Tower. The reason they come to see me isn’t just because I am a nice person but because millions of people listen to what I say and know I “get it.” Some magazines have said I am the single most important endorsement a presidential candidate can have. 
I don’t know if that’s true but it wouldn’t surprise me. I don’t say that to brag, I just tell it like it is. - -It started when Sarah Palin came to Trump Tower. She is a terrific woman and gets an unfair shake from the media. We had a great conversation. She said, “Hey, let’s go out for pizza.” We did and it was bedlam, with tons of people swarming us. I got criticized because I ate my pizza with a fork. (The truth is, I know how to eat pizza but I was trying to eat as little as possible because I hate gaining weight!) But I really enjoyed my time with Sarah and her family. We caused quite a stir on the streets of New York and especially in front of that pizza parlor. It was wild! - -Michele Bachmann came up to my office more than once. She is a real worker bee. She started low, shot to the top of the polls, and then dropped down again, probably because Rick Perry came in and stole a lot of her thunder. But Michele is a wonderful person and no matter what happens with her run for the White House she’s got a great political future ahead of her. She’s passionate about America and a strong protector of traditional values. - -When Rick Perry came to see me at Trump Tower we had a great discussion, and then went to Jean Georges Restaurant, probably the best restaurant in New York, which is located on the street level of Trump International Hotel and Tower at One Central Park West. We had an incredible conversation and I found him to be a good and personable guy, much different from what you see in the debates. Since then, I have spoken to him on numerous occasions, and every time I speak to him he is so forceful and strong that I have actually said to him: “Rick, why can’t you act this way during the debates?” He said, “Donald, the debates are just not my thing.” So I said, “Why don’t you pretend you are someplace else? You gotta act different. You are getting killed in the debates.” But he repeated, “Donald, they are just not for me.” Fair enough. But Rick was severely hurt by what took place in the debates. It was sad to see. The debates are turning out to be much more important in this presidential cycle than in past primaries, and if you don’t do well in the debates, it’s a long climb back to the top. But again, Rick is a terrific guy with some solid ideas. It will be interesting to see if he can regain his footing. - -Mitt Romney came up to Trump Tower. I had never met Mitt before and, not having met him, maybe I was inclined not to be in his corner. The fact is, when you meet him in person, he is a much different guy than he is in public. He is warm and engaging. The public has to get to know him better. He gets criticized for changing his opinions, or “flip flopping,” but over a lifetime I’ve seen many people who don’t change and they always get left behind. Smart people learn things, so they change their minds. Only stupid people never change their minds. - -In the debates, Mitt has been spectacular. He’s sharp, highly educated, and looks like a president. The amazing thing is, no matter how well he does, no matter who endorses him, he seems to stay at about the same numbers in the polls. So far, although he remains at the top or close to the top in the polls, he seems frozen at around 25 percent of primary voters. As other candidates drop out of the race, those numbers may break in his favor. Only time will tell. - -Herman Cain is a real piece of work. He came up to my office and immediately I liked him and I believe he liked me. 
He’s a terrific guy with a magnetic personality (he also happens to be a great singer). When Herman left Trump Tower, the press swarmed over him and I was told he said something like, “Look, I don’t know if I am going to get Donald’s support or endorsement but I wanted to get to know him and I wanted him to like me because he’s got the most vicious mouth for anybody he doesn’t like and I didn’t want him badmouthing me.” I thought it was extremely cute and honest and I do indeed like Herman Cain. He’s run an amazing campaign with very little staff. That has some advantages. There are plenty of political bloodsuckers who leech on to candidates and get millions of dollars and do nothing but give the candidates bad ideas and bad advice. - -One thing I told Herman is that no matter what happens, he has elevated his stock. If he doesn’t win, which is a very distinct possibility, he can run for another office and walk in. Whether it’s the Senate or a governorship or even running a company, Herman has built a great resume and done it for peanuts. He didn’t waste money—and I really admire that. - -As this book goes to press, there are some vicious rumors swirling around Herman. These kinds of charges are to be expected in any political race, but we will see if he can weather them. - -One mistake I made was with Jon Huntsman who really seems like a nice guy. He called me a number of times and I was unbelievably busy doing a deal and I didn’t get back to him. Then, the time got long enough that I decided maybe I shouldn’t bother him. - -Jon Huntsman and his family have done an amazing job for the Wharton School of Finance, which is the best business school in the world. We both went there and I respect him and the job his family has done. Nevertheless, when all the candidates were saying they were coming up to see me, he said just the opposite. He said, “I don’t have to see Donald Trump. I don’t need Donald Trump. I don’t want to see Donald Trump.” What he didn’t say was that he called me to have a meeting. While I like Jon Huntsman, he should not have said he didn’t call me when he did. In fact, he left me his number and the person’s name to call to set up a meeting. If you want, I can give you both. I know he won’t lie if directly asked about this. I should have called him back—it wasn’t polite and to him I apologize. If he ever calls again I promise to take his call and I would look forward to meeting him. With that said, for many more reasons which are fairly obvious, he can’t and won’t win the presidency in 2012. - -One group that has already won is the Tea Party. The Tea Party has done a great service to the United States. They have made all politicians look seriously at what’s wrong with our country, including America’s $15 trillion of debt. - -The media continuously bash the Tea Party. Nothing could be more unfair. In fact, when the Tea Party held a rally recently in Richmond, Virginia, they were forced to put up $10,000 to take care of insurance and barricades—and they gladly did it. When Occupy Wall Street marched, their gathering caused much more disturbance and disruption and they weren’t asked to put up any money. The press constantly maligns, ridicules, and mocks the Tea Party folks. The fact is the Tea Party is made up of great citizens of this country. And in the end, I think the Tea Party patriots will get the last laugh because they will go down as having done more to change the country than any other group. 
They are terrific people, great Americans, and I am proud to have such a good relationship with them. - -As for the Occupy Wall Street protesters, I am certainly not opposed to them or their concerns, some of which are legitimate. They are angry at the banks and they should be. They are angry at the government and they have every right to be. But, as I tell them all the time when they call my office, they need to move their protest over to the White House and get the community organizer out of his 747 and into the Oval Office so he can get to work making good deals with other countries and stopping other nations from ripping us off. If we could take back our jobs and money from China, OPEC, and all of the other places that are ripping us off, we wouldn’t have to decimate our safety net and leave those who really need help stranded. That’s a cause worth fighting for. - -Of course, while there are some Occupy protesters who are serious and sincere people, there are a lot of them who are just there to meet people and have a party. And there are still others who are bad people who are involved for bad reasons. What started as a protest is becoming dangerous to the protesters. How long it will last is anyone’s guess. - -One thing the Tea Party folks and the Occupy Wall Street people can and should agree on is tackling the rampant problem in the Obama administration of crony capitalism. We’ve already seen with Solyndra and Fisker how the president’s pals and big time campaign donors all got sweetheart loans and deals and stuck taxpayers with the bill. I predict we haven’t heard the last of it and that the Obama administration engaged in many more cases of funneling money to companies connected to the president and his donors. Mark my word. - -I love capitalism enough to protect it. There has to be a level playing field where everyone can compete fairly. The guy swinging a hammer all day shouldn’t have the government reaching in his pocket and handing his taxes to Obama’s big shot donors. It’s wrong and unfair. Teachers, nurses, police officers, and firefighters have no business bailing out Wall Street bankers and billion-dollar companies. - -Likewise, I think the Occupy people and the Tea Party can agree to get rid of the corporate welfare that gives tax subsidies to oil companies. How does that make any sense? Oil companies make billions. Why should the taxpayers have money taken out of their hard-earned paycheck to hand over to the oil companies, many of whom are in cahoots with OPEC? That’s stupid and unfair as anyone can clearly see. - - - - - -I believe America can restore herself to greatness. But we need the right kind of change to tap the massive potential locked inside our great country. The so-called “ruling class” in Washington needs to be replaced with people committed to the Constitution and the values of fair play, hard work, and sparking the innovation and entrepreneurship that has made America great. - -We just lost a great innovator and American entrepreneur in Steve Jobs. Like his politics or not, Jobs changed the world with his technological innovations. Interestingly, Jobs kept Washington money largely out of Apple—he wanted his company to stand and fight on its own two feet. Jobs, who was an Obama supporter, had a great idea that he offered to the president. 
“Put together a group of six or seven CEOs who could really explain the innovation challenges in America.” But according to Walter Isaacson’s biography of Jobs, once the White House officials got involved they messed it up, trying to micromanage things and turning it into a much larger event, and Steve Jobs pulled out.1 - -We need more innovators, dreamers, and entrepreneurs. America used to be #1 in producing all three. We can restore America and unleash the incredible potential of our great land and people. All it takes is the wisdom to return to our core principles, the resolve to keep the faith, and the willingness to get tough and innovate. - -Take, for example, the X-PRIZE Foundation. This entrepreneurial group hosts competitions with cash prizes for the most innovative idea in Education, Exploration, Energy, and Life Sciences. The first $10 million reward was given in 2004 to whatever team could launch a manned spacecraft twice in two weeks. The X-PRIZE motto is totally American: “Revolution Through Competition.”2 That’s the way Americans like to think. That’s the American Dream in motion. We’re going to have to invent our way out of the mess our country is in. It starts with doing something I’ve always done, which is to think big. - -Americans dream big and do hard things. It’s who we are. It’s what we do. When our country is unchained, we’re unstoppable. But we need smart leaders, people who understand how the world works and have the guts to get tough. With proper leadership, we can rebuild the shining city on a hill we once were. When we do, we should boldly and proudly celebrate America’s power and dominance in the world. The way I see it, greatness need not apologize for itself. Ever. - -If we do that, we can, together, make America #1 again. - - - - - -ACKNOWLEDGMENTS - - - -The team at Regnery Publishing has been terrific to work with in every way, and I’d like to thank Wynton Hall, Peter Schweizer, Marji Ross, Jeff Carneal, and Harry Crocker for doing such a great job. They’ve been a pleasure to work with and their professionalism was apparent from the start. At the Trump Organization I would like to thank Rhona Graff, Meredith McIver, Michael Cohen, Kacey Kennedy, and Thuy Colayco for their enthusiasm and careful work. Thanks to all for a job well done. - - - - - diff --git a/examples/text_generation/data/trump/trump_twitter.txt b/examples/text_generation/data/trump/trump_twitter.txt deleted file mode 100644 index 4727ef989..000000000 --- a/examples/text_generation/data/trump/trump_twitter.txt +++ /dev/null @@ -1,3434 +0,0 @@ -Thank you for joining me today. - -This was going to be a speech on Hillary Clinton and how bad a President, especially in these times of Radical Islamic Terrorism, she would be. - -Even her former Secret Service Agent, who has seen her under pressure and in times of stress, has stated that she lacks the temperament and integrity to be president. - -There will be plenty of opportunity to discuss these important issues at a later time, and I will deliver that speech soon. - -But today there is only one thing to discuss: the growing threat of terrorism inside of our borders. - -The attack on the Pulse Nightclub in Orlando, Florida, was the worst terrorist strike on our soil since September 11th, and the worst mass shooting in our country’s history. - -So many people dead, so many people gravely injured, so much carnage, such a disgrace. - -The horror is beyond description. - -The families of these wonderful people are totally devastated. 
Likewise, our whole nation, and indeed the whole world, is devastated. - -We express our deepest sympathies to the victims, the wounded, and their families. - -We mourn, as one people, for our nation’s loss – and pledge our support to any and all who need it. - -I would like to ask now that we all observe a moment of silence for the victims of the attack. - -Our nation stands together in solidarity with the members of Orlando's LGBT Community. - -This is a very dark moment in America’s history. - -A radical Islamic terrorist targeted the nightclub not only because he wanted to kill Americans, but in order to execute gay and lesbian citizens because of their sexual orientation. - -It is a strike at the heart and soul of who we are as a nation. - -It is an assault on the ability of free people to live their lives, love who they want and express their identity. - -It is an attack on the right of every single American to live in peace and safety in their own country. - -We need to respond to this attack on America as one united people – with force, purpose and determination. - -But the current politically correct response cripples our ability to talk and think and act clearly. - -If we don't get tough, and we don't get smart – and fast – we're not going to have a country anymore -- there will be nothing left. - -The killer, whose name I will not use, or ever say, was born to Afghan parents who immigrated to the United States. His father published support for the Afghan Taliban, a regime which murders those who don’t share its radical views. The father even said he was running for President of that country. - -The bottom line is that the only reason the killer was in America in the first place was because we allowed his family to come here. - -That is a fact, and it's a fact we need to talk about. - -We have a dysfunctional immigration system which does not permit us to know who we let into our country, and it does not permit us to protect our citizens. - -We have an incompetent administration, and if I am not elected President, that will not change over the next four years -- but it must change, and it must change now. - -With fifty people dead, and dozens more wounded, we cannot afford to talk around the issue anymore -- we have to address it head on. - -I called for a ban after San Bernardino, and was met with great scorn and anger but now, many are saying I was right to do so -- and although the pause is temporary, we must find out what is going on. The ban will be lifted when we as a nation are in a position to properly and perfectly screen those people coming into our country. - -The immigration laws of the United States give the President the power to suspend entry into the country of any class of persons that the President deems detrimental to the interests or security of the United States, as he deems appropriate. - -I will use this power to protect the American people. When I am elected, I will suspend immigration from areas of the world when there is a proven history of terrorism against the United States, Europe or our allies, until we understand how to end these threats. - -After a full, impartial and long overdue security assessment, we will develop a responsible immigration policy that serves the interests and values of America. - -We cannot continue to allow thousands upon thousands of people to pour into our country, many of whom have the same thought process as this savage killer. - -Many of the principles of Radical Islam are incompatible with Western values and institutions. 
- -Radical Islam is anti-woman, anti-gay and anti-American. - -I refuse to allow America to become a place where gay people, Christian people, and Jewish people, are the targets of persecution and intimidation by Radical Islamic preachers of hate and violence. - -It’s not just a national security issue. It is a quality of life issue. - -If we want to protect the quality of life for all Americans – women and children, gay and straight, Jews and Christians and all people – then we need to tell the truth about Radical Islam. - -We need to tell the truth, also, about how Radical Islam is coming to our shores. - -We are importing Radical Islamic Terrorism into the West through a failed immigration system -- and through an intelligence community held back by our president. - -Even our own FBI Director has admitted that we cannot effectively check the backgrounds of the people we are letting into America. - -All of the September 11th hijackers were issued visas. - -Large numbers of Somali refugees in Minnesota have tried to join ISIS. - -The Boston Bombers came here through political asylum. - -The male shooter in San Bernardino – again, whose name I won't mention -- was the child of immigrants from Pakistan, and he brought his wife – the other terrorist - from Saudi Arabia, through another one of our easily exploited visa programs. - -Immigration from Afghanistan into the United States has increased nearly five-fold in just one year. According to Pew Research, 99% of people in Afghanistan support oppressive Sharia Law. - -We admit many more from other countries in the region who share these same oppressive views. - -If we want to remain a free and open society, then we have to control our borders. - -Yet, Hillary Clinton – for months and despite so many attacks – repeatedly refused to even say the words “radical Islam,” until I challenged her yesterday to say the words or leave the race. - -However, Hillary Clinton – who has been forced to say the words today after policies she supports have caused us so much damage – still has no clue what Radical Islam is, and won’t speak honestly about what it is. - -She is in total denial, and her continuing reluctance to ever name the enemy broadcasts weakness across the world. - -In fact, just a few weeks before the San Bernardino slaughter, Hillary Clinton explained her refusal to say the words Radical Islam. Here is what she said: “Muslims are peaceful and tolerant people, and have nothing whatsoever to do with terrorism.” - -Hillary Clinton says the solution is to ban guns. They tried that in France, which has among the toughest gun laws in the world, and 130 were brutally murdered by Islamic terrorists in cold blood. Her plan is to disarm law-abiding Americans, abolishing the 2nd amendment, and leaving only the bad guys and terrorists with guns. She wants to take away Americans’ guns, then admit the very people who want to slaughter us. - -I will be meeting with the NRA, which has given me their earliest endorsement in a Presidential race, to discuss how to ensure Americans have the means to protect themselves in this age of terror. - -The bottom line is that Hillary supports the policies that bring the threat of Radical Islam into America, and allow it to grow overseas. - -In fact, Hillary Clinton’s catastrophic immigration plan will bring vastly more Radical Islamic immigration into this country, threatening not only our security but our way of life. - -When it comes to Radical Islamic terrorism, ignorance is not bliss – it's deadly. 
- -The Obama Administration, with the support of Hillary Clinton and others, has also damaged our security by restraining our intelligence-gathering and failing to support law enforcement. They have put political correctness above common sense, above your safety, and above all else. - -I refuse to be politically correct. - -I will do the right thing--I want to straighten things out and to Make America Great Again. - -The days of deadly ignorance will end, and they will end soon. - -As President I will give our intelligence community, law enforcement and military the tools they need to prevent terrorist attacks. - -We need an intelligence-gathering system second to none. That includes better cooperation between state, local and federal officials – and with our allies. - -I will have an Attorney General, a Director of National Intelligence, and a Secretary of Defense who will know how to fight the war on Radical Islamic Terrorism – and who will have the support they require to get the job done. - -We also must ensure the American people are provided the information they need to understand the threat. - -The Senate Subcommittee on Immigration has already identified hundreds of immigrants charged with terrorist activities inside the United States since September 11th. - -Nearly a year ago, the Senate Subcommittee asked President Obama's Departments of Justice, State and Homeland Security to provide the immigration history of all terrorists inside the United States. - -These Departments refused to comply. - -President Obama must release the full and complete immigration histories of all individuals implicated in terrorist activity of any kind since 9/11. - -The public has a right to know how these people got here. - -We have to screen applicants to know whether they are affiliated with, or support, radical groups and beliefs. - -We have to control the amount of future immigration into this country to prevent large pockets of radicalization from forming inside America. - -Even a single individual can be devastating, just look at what happened in Orlando. Can you imagine large groups? - -Truly, our President doesn't know what he is doing. He has failed us, and failed us badly, and under his leadership, this situation will not get any better -- it will only get worse. - -Each year, the United States permanently admits more than 100,000 immigrants from the Middle East, and many more from Muslim countries outside the Middle East. Our government has been admitting ever-growing numbers, year after year, without any effective plan for our security. - -In fact, Clinton's State Department was in charge of the admissions process for people applying to enter from overseas. - -Having learned nothing from these attacks, she now plans to massively increase admissions without a screening plan, including a 500% increase in Syrian refugees. - -This could be a better, bigger version of the legendary Trojan Horse. - -We can't let this happen. - -Altogether, under the Clinton plan, you'd be admitting hundreds of thousands of refugees from the Middle East with no system to vet them, or to prevent the radicalization of their children. - -The burden is on Hillary Clinton to tell us why she believes immigration from these dangerous countries should be increased without any effective system to screen who we are bringing in. - -The burden is on Hillary Clinton to tell us why we should admit anyone into our country who supports violence of any kind against gay and lesbian Americans. 
- -The burden is also on Hillary Clinton to tell us how she will pay for it. Her plan will cost Americans hundreds of billions of dollars long-term. - -Wouldn't this money be better spent on rebuilding America for our current population, including the many poor people already living here? - -We have to stop the tremendous flow of Syrian refugees into the United States – we don't know who they are, they have no documentation, and we don't know what they're planning. - -What I want is common sense. I want a mainstream immigration policy that promotes American values. - -That is the choice I put before the American people: a mainstream immigration policy designed to benefit America, or Hillary Clinton's radical immigration policy designed to benefit politically-correct special interests. - -We've got to get smart, and tough, and vigilant, and we've got to do it now, because later is too late. - -Ask yourself, who is really the friend of women and the LGBT community, Donald Trump with his actions, or Hillary Clinton with her words? Clinton wants to allow Radical Islamic terrorists to pour into our country—they enslave women, and murder gays. - -I don’t want them in our country. - -The terrorist attack on the Pulse Night Club demands a full and complete investigation into every aspect of the assault. - -In San Bernardino, as an example, people knew what was going on, but they used the excuse of racial profiling for not reporting it. - -We need to know what the killer discussed with his relatives, parents, friends and associates. - -We need to know if he was affiliated with any radical Mosques or radical activists and what, if any, is their immigration status. - -We need to know if he travelled anywhere, and who he travelled with. - -We need to make sure every single last person involved in this plan – including anyone who knew something but didn't tell us – is brought to justice. - -If it can be proven that somebody had information about any attack, and did not give this information to authorities, they must serve prison time. - -America must do more – much more – to protect its citizens, especially people who are potential victims of crimes based on their backgrounds or sexual orientations. - -It also means we must change our foreign policy. - -The decision to overthrow the regime in Libya, then pushing for the overthrow of the regime in Syria, among other things, without plans for the day after, have created space for ISIS to expand and grow. - -These actions, along with our disastrous Iran deal, have also reduced our ability to work in partnership with our Muslim allies in the region. - -That is why our new goal must be to defeat Islamic terrorism, not nation-building. - -For instance, the last major NATO mission was Hillary Clinton's war in Libya. That mission helped unleash ISIS on a new continent. - -I've said NATO needs to change its focus to stopping terrorism. Since I've raised that criticism, NATO has since announced a new initiative focused on just that. - -America must unite the whole civilized world in the fight against Islamic terrorism, just like we did against communism in the Cold War. - -We've tried it President Obama's way. He gave the world his apology tour, we got ISIS, and many other problems, in return. - -I'd like to conclude my remarks today by again expressing our solidarity with the people of Orlando who have come under attack. - -When I am President, I pledge to protect and defend all Americans who live inside of our borders.
Wherever they come from, wherever they were born, all Americans living here and following our laws will be protected. - -America will be a tolerant and open society. - -America will also be a safe society. - -We will protect our borders at home. - -We will defeat ISIS overseas. - -We will ensure every parent can raise their children in peace and safety. - -We will make America rich again. - -We will make America safe again. - -We will make America Great Again. - -Thank you. - -The media talks about “homegrown” terrorism, but Islamic radicalism, and the networks that nurture it, are imports from overseas. - -Yes, there are many radicalized people already inside our country as a result of the poor policies of the past. But the whole point is that it will be much, much easier to deal with our current problem if we don’t keep on bringing in people who add to the problem. - -For instance, the controversial Mosque attended by the Boston Bombers had as its founder an immigrant from overseas charged in an assassination plot. - -This shooter in Orlando was the child of an immigrant father who supported one of the most repressive regimes on Earth. Why would we admit people who support violent hatred? - -Hillary Clinton can never claim to be a friend of the gay community as long as she continues to support immigration policies that bring Islamic extremists to our country who suppress women, gays and anyone who doesn’t share their views. - -She can’t have it both ways. She can’t claim to be supportive of these communities while trying to increase the number of people coming in who want to oppress them. - -How does this kind of immigration make our life better? How does this kind of immigration make our country better? - -Why does Hillary Clinton want to bring people here—in vast numbers—who reject our values? - -Immigration is a privilege, and we should not let anyone into this country who doesn’t support our communities – all of our communities. - -America has already admitted four times more immigrants than any country on earth, and we continue to admit millions more with no real checks or scrutiny. - -Not surprisingly, wages for our workers haven’t budged in many years. - -So whether it’s a matter of national security, or financial security, we can’t afford to keep on going like this. We owe $19 trillion in debt, and no longer have options. - -All our communities, from all backgrounds, are ready for some relief. This is not an act of offense against anyone; it is an act of defense. - -I want us all to work together, including in partnership with our Muslim communities. But Muslim communities must cooperate with law enforcement and turn in the people who they know are bad – and they do know where they are. - -I want to fix our schools, roads, bridges and job market. I want every American to succeed. Hillary Clinton wants to empty out the Treasury to bring people into the country that include individuals who preach hate against our own citizens. - -I want to protect our citizens – all of our citizens. - -Last night, our nation was attacked by a radical Islamic terrorist. It was the worst terrorist attack on our soil since 9/11, and the second of its kind in 6 months. My deepest sympathy and support goes out to the victims, the wounded, and their families. - -In his remarks today, President Obama disgracefully refused to even say the words 'Radical Islam'. For that reason alone, he should step down.
If Hillary Clinton, after this attack, still cannot say the two words 'Radical Islam', she should get out of this race for the Presidency. - -If we do not get tough and smart real fast, we are not going to have a country anymore. Because our leaders are weak, I said this was going to happen – and it is only going to get worse. I am trying to save lives and prevent the next terrorist attack. We can't afford to be politically correct anymore. - -The terrorist, Omar Mir Saddique Mateen, is the son of an immigrant from Afghanistan who openly published his support for the Afghanistani Taliban and even tried to run for President of Afghanistan. According to Pew, 99% of people in Afghanistan support oppressive Sharia Law. - -We admit more than 100,000 lifetime migrants from the Middle East each year. Since 9/11, hundreds of migrants and their children have been implicated in terrorism in the United States. - - -Hillary Clinton wants to dramatically increase admissions from the Middle East, bringing in many hundreds of thousands during a first term – and we will have no way to screen them, pay for them, or prevent the second generation from radicalizing. - -We need to protect all Americans, of all backgrounds and all beliefs, from Radical Islamic Terrorism - which has no place in an open and tolerant society. Radical Islam advocates hate for women, gays, Jews, Christians and all Americans. I am going to be a President for all Americans, and I am going to protect and defend all Americans. We are going to make America safe again and great again for everyone. - -It is unfortunate that my comments have been misconstrued as a categorical attack against people of Mexican heritage. I am friends with and employ thousands of people of Mexican and Hispanic descent. The American justice system relies on fair and impartial judges. All judges should be held to that standard. I do not feel that one’s heritage makes them incapable of being impartial, but, based on the rulings that I have received in the Trump University civil case, I feel justified in questioning whether I am receiving a fair trial. - -Over the past few weeks, I have watched as the media has reported one inaccuracy after another concerning the ongoing litigation involving Trump University. There are several important facts the public should know and that the media has failed to report. - -Throughout the litigation my attorneys have continually demonstrated that students who participated in Trump University were provided a substantive, valuable education based upon a curriculum developed by professors from Northwestern University, Columbia Business School, Stanford University and other respected institutions. And, the response from students was overwhelming. Over a five year period, more than 10,000 paying students filled out surveys giving the courses high marks and expressing their overwhelming satisfaction with Trump University’s programs. For example: - -Former student Tarla Makaeff, the original plaintiff in the litigation, not only completed multiple surveys rating Trump University’s three-day seminar “excellent” in every category, but also praised Trump University’s mentorship program in a glowing 5 plus minute video testimonial. When asked “how could Trump University help to meet [her] goals”, she simply stated “[c]ontinue to offer great classes.” Once the plaintiffs’ lawyers realized how disastrous a witness she was, they asked to have her removed from the case.
Over my lawyers’ objections, the judge granted the plaintiffs’ motion, but allowed the case to continue. -Art Cohen, a lead plaintiff in the litigation, completed a survey in which he not only rated Trump University’s three-day seminar “excellent” in virtually every category, but went so far as to indicate that he would “attend another Trump University seminar” and even “recommend Trump University seminars to a friend.” When asked how Trump University could improve the seminar, Mr. Cohen’s only suggestion was to “[h]ave lunch sandwiches brought in” and make the lunch break 45 minutes. -Former student Bob Giullo, who has been critical of Trump University in numerous interviews and negative advertisements from my political opponents, also expressed his satisfaction, rating Trump University’s programs “excellent” in every category. When asked how Trump University could improve its programs, Mr. Giullo simply asked that students be provided “more comfortable chairs.” - -Indeed, these are just a few of literally thousands of positive surveys, all of which can be viewed online at www.98percentapproval.com. - -For those students who decided that Trump University’s programs were not for them, the company had a generous refund policy, offering a full refund to any student who asked for their money back within 3 days of signing up for a program or by the end of the first day of any multi-day program, whichever came later. - -Normally, legal issues in a civil case would be heard in a neutral environment. However, given my unique circumstances as nominee of the Republican Party and the core issues of my campaign that focus on illegal immigration, jobs and unfair trade, I have concerns as to my ability to receive a fair trial. - -I am fighting hard to bring jobs back to the United States. Many companies – like Ford, Nabisco, Carrier – are moving production to Mexico. Drugs and illegal immigrants are also pouring across our border. This is bad for all Americans, regardless of their heritage. - -Due to what I believe are unfair and mistaken rulings in this case and the Judge’s reported associations with certain professional organizations, questions were raised regarding the Obama-appointed Judge’s impartiality. It is a fair question. I hope it is not the case. - -While this lawsuit should have been dismissed, it is now scheduled for trial in November. I do not intend to comment on this matter any further. With all of the thousands of people who have given the courses such high marks and accolades, we will win this case! - -Based on the fact that the Democratic nominating process is totally rigged and Crooked Hillary Clinton and Deborah Wasserman Schultz will not allow Bernie Sanders to win, and now that I am the presumptive Republican nominee, it seems inappropriate that I would debate the second place finisher. Likewise, the networks want to make a killing on these events and are not proving to be too generous to charitable causes, in this case, women’s health issues. Therefore, as much as I want to debate Bernie Sanders - and it would be an easy payday - I will wait to debate the first place finisher in the Democratic Party, probably Crooked Hillary Clinton, or whoever it may be. - -I’m delighted to be in North Dakota, a state at the forefront of a new energy revolution. - -Oil and natural gas production is up significantly in the last decade. Our oil imports have been cut in half. - -But all this occurred in spite of massive new bureaucratic and political barriers.
- -President Obama has done everything he can to get in the way of American energy. He’s made life much more difficult for North Dakota, as costly regulation makes it harder and harder to turn a profit. - -If Hillary Clinton is in charge, things will get much worse. She will shut down energy production across this country. - -Millions of jobs, and trillions of dollars of wealth, will be destroyed as a result. - -That is why our choice this November is so crucial. - -Here’s what it comes down to. - -Wealth versus poverty. - -North Dakota shows how energy exploration creates shared prosperity. Better schools. More funding for infrastructure. Higher wages. Lower unemployment. - -Things we’ve been missing. - -It’s a choice between sharing in this great energy wealth, or sharing in the poverty promised by Hillary Clinton. - -You don’t have to take my word for it. Just listen to Hillary Clinton’s own words. She has declared war on the American worker. - -Here is what Hillary Clinton said earlier this year: “We are going to put a lot of coal miners and coal companies out of work.” - -She wants to shut down the coal mines. - -And if Crooked Hillary can shut down the mines, she can shut down your business too. - -Let me tell you how President Obama undermined our middle class. - -President Obama’s stated intent is to eliminate oil and natural gas production in America. - -His policy is death by a thousand cuts through an onslaught of regulations. - -The Environmental Protection Agency’s use of totalitarian tactics forces energy operators in North Dakota into paying unprecedented multi-billion dollar fines before a penalty is even confirmed. - -Government misconduct goes on and on: - -The Department of Justice filed a lawsuit against seven North Dakota oil companies for the deaths of 28 birds while the Administration fast-tracked wind projects that kill more than 1 million birds a year. -The U.S. Fish and Wildlife Service abuses the Endangered Species Act to restrict oil and gas exploration. -Adding to the pain, President Obama now proposes a $10-per-barrel tax on American-produced oil in the middle of a downturn. - -At the same time President Obama lifts economic sanctions on Iran, he imposes economic sanctions on America. He has allowed this country to hit the lowest oil rig count since 1999, producing thousands of layoffs. -America’s incredible energy potential remains untapped. It is a totally self-inflicted wound. - -Under my presidency, we will accomplish complete American energy independence. - -Imagine a world in which our foes, and the oil cartels, can no longer use energy as a weapon. - -But President Obama has done everything he can to keep us dependent on others. Let me list some of the good energy projects he killed. - -He rejected the Keystone XL Pipeline despite the fact that: - - -It would create and support more than 42,000 jobs. -His own State Department concluded that it would be the safest pipeline ever built in the United States. -And it would have no significant impact on the environment. -Yet, even as he rejected this America-Canada pipeline, he made a deal that allows Iran to transport more oil through its pipeline than would ever have flowed through Keystone – with no environmental review. - -President Obama has done everything he can to kill the coal industry. Here are a few of President Obama’s decrees: - - -Regulations that shut down hundreds of coal-fired power plants and block the construction of new ones. - - -A prohibition against coal production on federal land.
- -Draconian climate rules that, unless stopped, would effectively bypass Congress to impose job-killing cap-and-trade. - -President Obama has aggressively blocked the production of oil & natural gas: - - -He’s taken a huge percentage of the Alaska National Petroleum Reserve off the table. -Oil and natural gas production on federal lands is down 10%. -87% of available land in the Outer Continental Shelf has been put off limits. -Atlantic Lease sales were closed down too – despite the fact that they would create 280,000 jobs and $23.5 billion in economic activity. -President Obama entered the United States into the Paris Climate Accords – unilaterally, and without the permission of Congress. This agreement gives foreign bureaucrats control over how much energy we use right here in America. -These actions have denied millions of Americans access to the energy wealth sitting under our feet. - -This is your treasure, and you – the American People – are entitled to share in the riches. - -President Obama’s anti-energy orders have also weakened our security by keeping us reliant on foreign sources of energy. - -Every dollar of energy we don’t explore here is a dollar of energy that makes someone else rich over there. - -If President Obama wanted to weaken America, he couldn’t have done a better job. - -As bad as President Obama is, Hillary Clinton will be worse. - -She will escalate the war against American energy, and unleash the EPA to control every aspect of our lives. -She declared that “we’ve got to move away from coal and all the other fossil fuels,” locking away trillions in American wealth. -In March, Hillary Clinton said: “by the time we get through all of my conditions, I do not think there will be many places in America where fracking will continue to take place.” Keep in mind, shale energy production could add 2 million jobs in 7 years. - -Yet, while Hillary Clinton doesn’t want American energy, she is strongly in favor of foreign energy. Here is what she told China as Secretary of State: - -“American experts and Chinese experts will work to develop China’s natural gas resources. Imagine what it would mean for China if China unleashed its own natural gas resources so you are not dependent on foreign oil.” -Hillary Clinton has her priorities wrong. But we are going to turn all of that around. - - -A Trump Administration will develop an America First energy plan. Here is how this plan will make America Wealthy Again: - -American energy dominance will be declared a strategic economic and foreign policy goal of the United States. -America has 1.5 times as much oil as the combined proven resources of all OPEC countries; we have more natural gas than Russia, Iran, Qatar and Saudi Arabia combined; we have three times more coal than Russia. Our total untapped oil and gas reserves on federal lands equal an estimated $50 trillion. -We will become, and stay, totally independent of any need to import energy from the OPEC cartel or any nations hostile to our interests. -At the same time, we will work with our Gulf allies to develop a positive energy relationship as part of our anti-terrorism strategy. -We will use the revenues from energy production to rebuild our roads, schools, bridges and public infrastructure. Cheaper energy will also boost American agriculture. -We will get the bureaucracy out of the way of innovation, so we can pursue all forms of energy. This includes renewable energies and the technologies of the future.
It includes nuclear, wind and solar energy – but not to the exclusion of other energy. The government should not pick winners and losers. Instead, it should remove obstacles to exploration. Any market has ups and downs, but lifting these draconian barriers will ensure that we are no longer at the mercy of global markets. - -A Trump Administration will focus on real environmental challenges, not phony ones: - -We will reject Hillary Clinton’s poverty-expansion agenda that enriches her friends and makes everyone else poor. -We’ll solve real environmental problems in our communities like the need for clean and safe drinking water. President Obama actually tried to cut the funding for our drinking water infrastructure – even as he pushed to increase funding for his EPA bureaucrats. -American workers will be the ones building this new infrastructure. - -Here is my 100-day action plan: - -We’re going to rescind all the job-destroying Obama executive actions including the Climate Action Plan and the Waters of the U.S. rule. -We’re going to save the coal industry and other industries threatened by Hillary Clinton’s extremist agenda. -I’m going to ask Trans Canada to renew its permit application for the Keystone Pipeline. -We’re going to lift moratoriums on energy production in federal areas. -We’re going to revoke policies that impose unwarranted restrictions on new drilling technologies. These technologies create millions of jobs with a smaller footprint than ever before. -We’re going to cancel the Paris Climate Agreement and stop all payments of U.S. tax dollars to U.N. global warming programs. -Any regulation that is outdated, unnecessary, bad for workers, or contrary to the national interest will be scrapped. We will also eliminate duplication, provide regulatory certainty, and trust local officials and local residents. -Any future regulation will go through a simple test: is this regulation good for the American worker? If it doesn’t pass this test, the rule will not be approved. -Policy decisions will be public and transparent. They won’t be made on Hillary’s private email account. - -We’re going to do all this while taking proper regard for rational environmental concerns. We are going to conserve our beautiful natural habitats, reserves and resources. - -In a Trump Administration, political activists with extreme agendas will no longer write the rules. Instead, we will work with conservationists whose only agenda is protecting nature. - -From an environmental standpoint, my priorities are very simple: clean air and clean water. - -My America First energy plan will do for the American People what Hillary Clinton will never do: create real jobs and real wage growth. - -According to the Institute for Energy Research, lifting the restrictions on American energy will create a flood of new jobs: - -Almost a $700 billion increase in annual economic output over the next 30 years. -More than a $30 billion increase in annual wages over the next 7 years. -Over the next four decades, more than $20 trillion in additional economic activity and $6 trillion in new tax revenue. - -The oil and natural gas industry supports 10 million high-paying American jobs and can create another 400,000 new jobs per year. This exploration will also create a resurgence in American manufacturing -- dramatically reducing both our trade deficit and our budget deficit. - -Compare this future to Hillary Clinton’s Venezuela-style politics of poverty.
- -If you think about it, not one idea Hillary Clinton has will actually create a single net job or create a single new dollar to put in workers’ pockets. - -In fact, every idea Hillary has will make jobs disappear. - -Hillary Clinton’s agenda is job destruction. My agenda is job creation. - -She wants to tax and regulate our workers to the point of extinction. - -She wants terrible trade deals, like NAFTA, signed by her husband, that will empty out our manufacturing. - -During her time as Secretary of State, she surrendered to China – allowing them to steal hundreds of billions of dollars in our intellectual property. - -She let them devalue their currency and add more than a trillion dollars to our trade deficit. - -Then there was Libya. - -Secretary Clinton’s reckless Libya invasion handed the country over to ISIS, which now controls the oil. - -The Middle East that Clinton inherited was far less dangerous than the Middle East she left us with today. - -Her reckless decisions in Iraq, Libya, Iran, Egypt and Syria have made the Middle East more unstable than ever before. - -The Hillary Clinton foreign policy legacy is chaos. - -Hillary Clinton also wants totally open borders in America, which would further plunge our workers into poverty. - -Hillary’s open borders agenda means a young single mom living in poverty would have to compete for a job or a raise against millions of lower-wage workers rushing into the country, but she doesn’t care. - -My agenda will be accomplished through a series of reforms that put America First: - -Energy reform that creates trillions in new wealth. -Immigration reform that protects our borders and defends our workers. -Tax reform that brings millions of new jobs to America. -Regulation reform that eliminates stupid rules that send our jobs overseas. -Welfare reform that requires employers to recruit from the unemployment office – not the immigration office. -Trade reform that brings back our manufacturing jobs and stands up to countries that cheat. -There is one more thing we must do to make America wealthy again: we have to make our communities safe again. - -Violent crime is rising in major cities across the country. This is unacceptable. Every parent has the right to raise their kids in safety. - -When we put political correctness before justice, we hurt those who have the least. It undermines their schools, slashes the value of their homes, and drives away their jobs. - -Crime is a stealth tax on the poor. - -To those living in fear, I say: help is coming. A Trump Administration will return law and order to America. Security is not something that should only be enjoyed by the rich and powerful. - -By the way, I was endorsed by the National Rifle Association, and we are not going to let Hillary Clinton abolish the 2nd Amendment, either. - -My reform agenda is going to bring wealth and security to the poorest communities in this country. - -What does Hillary have to offer the poor but more of the same? - -In Chicago, for instance, one-fourth of young Hispanics and one-third of young African-Americans are unemployed. - -My message today to all the people trapped in poverty is this: politicians like Hillary Clinton have failed you. - -They have used you. - -You need something new. I am the only one who will deliver it. - -We are going to put America back to work. - -We are going to put people before government. - -We are going to rebuild our inner cities. - -We are going to make you and your family safe, secure and prosperous.
- -The choice in November is a choice between a Clinton Agenda that puts Donors First – or a new agenda that puts America First. - -It is a choice between a Clinton government of, by and for the powerful – or a return to government of, by and for the people. - -It is a choice between certain decline, or a revival of America’s promise. - -The people in charge of our government say things can’t change. - -I am here to tell you that things have to change. - -They want you to keep trusting the same people who’ve betrayed you. - -I am here to tell you that if you keep supporting those who’ve let you down, then you will keep getting let down for the rest of your life. - -I am prepared to kick the special interests out of Washington, D.C. and to hand their seat of power over to you. - -It’s about time. - -Together, we will put the American people first again. - -We will make our communities wealthy again. - -We will make our cities safe again. - -We will make our country strong again. - -Ladies and Gentlemen: We will make America Great Again. - -The fact that Hillary thinks the temporary Muslim ban, which she calls the "Muslim ban", promotes terrorism, proves Bernie Sanders was correct when he said she is not qualified to be President. - -Look at the carnage all over the world, including the World Trade Center, San Bernardino, Paris, the USS Cole, Brussels and an unlimited number of other places. She and our totally ignorant President won't even use the term Radical Islamic Terrorism. And by the way, ask Hillary who blew up the plane last night - another terrible, but preventable tragedy. She has bad judgment and is unfit to serve as President at this delicate and difficult time in our country's history. - -Justice Scalia was a remarkable person and a brilliant Supreme Court Justice. His career was defined by his reverence for the Constitution and his legacy of protecting Americans’ most cherished freedoms. He was a Justice who did not believe in legislating from the bench, and a person whom I held in the highest regard; I will always greatly respect his intelligence and his conviction to uphold the Constitution of our country. The following list of potential Supreme Court justices is representative of the kind of constitutional principles I value and, as President, I plan to use this list as a guide to nominate our next United States Supreme Court Justices. - -We are pleased to have this partnership in place with the national party. By working together with the RNC to raise support for Republicans everywhere, we are going to defeat Hillary Clinton, keep Republican majorities in Congress and in the states, and Make America Great Again. - -I filed my PFD, which I am proud to say is the largest in the history of the FEC. Despite the fact that I am allowed extensions, I have again filed my report, which is 104 pages, on time. Bernie Sanders has requested, on the other hand, an extension for his small report. This is the difference between a businessman and the all talk, no action politicians that have failed the American people for far too long. I have built an incredible company and have accumulated one of the greatest portfolios of real estate assets, many of which are considered to be among the finest and most iconic properties in the world. This is the kind of thinking the country needs. - -I am proud to receive the endorsement of former Congressman Lightfoot. I have tremendous respect for him and I greatly appreciate his support.
- -It is tremendous to be working with these leaders and their colleagues on winning solutions that will really move us forward. A strong House Republican Majority is imperative to fixing the problems facing America and making our country better and stronger than ever before. - -The United States cannot afford another four years of the Obama White House, which is what Hillary Clinton represents. That is why it’s critical that Republicans unite around our shared principles, advance a conservative agenda, and do all we can to win this fall. With that focus, we had a great conversation this morning. While we were honest about our few differences, we recognize that there are also many important areas of common ground. We will be having additional discussions, but remain confident there’s a great opportunity to unify our party and win this fall, and we are totally committed to working together to achieve that goal. We are extremely proud of the fact that many millions of new voters have entered the primary system, far more than ever before in the Republican Party's history. This was our first meeting, but it was a very positive step toward unification. - -It is a great honor to have won both West Virginia and Nebraska, especially by such massive margins. My time spent in both states was a wonderful and enlightening experience for me. I learned a lot, and that knowledge will be put to good use towards the creation of businesses, jobs, and the strengthening and revival of their economies. I look forward to returning to West Virginia and Nebraska soon, and hope to win both states in the general election. Likewise, my time spent last week with the great people of Oregon will hopefully lead to another victory next Tuesday. - -Governor Christie is an extremely knowledgeable and loyal person with the tools and resources to put together an unparalleled Transition Team, one that will be prepared to take over the White House when we win in November. I am grateful to Governor Christie for his contributions to this movement. - -I want to thank Senator Bob Dole for his endorsement. He is a wonderful man and it is a great honor to have his support. - -I fully understand why Lindsey Graham cannot support me. If I got beaten as badly as I beat him, and all the other candidates he endorsed, I would not be able to give my support either. Every time I see Lindsey Graham spew hate during interviews I ask why the media never questions how I single-handedly destroyed his hapless run for President. As a candidate who did not receive 1% in his own state - compared to my victory at nearly 40% with many others in the race - he has zero credibility. He was a poor representative and an embarrassment to the great people of South Carolina. Judging by the incompetent way he ran his campaign, it is easy to see why his military strategies have failed so badly --- we can’t even beat ISIS! - -While I will unify the party, Lindsey Graham has shown himself to be beyond rehabilitation. And like the voters who rejected him, so will I! - -I am not ready to support Speaker Ryan's agenda. Perhaps in the future we can work together and come to an agreement about what is best for the American people. They have been treated so badly for so long that it is about time for politicians to put them first! - -Steven is a professional at the highest level with an extensive and very successful financial background. 
He brings unprecedented experience and expertise to a fundraising operation that will benefit the Republican Party and ultimately defeat Hillary Clinton. - -Ted Cruz is a desperate candidate trying to save his failing campaign. It is no surprise he has resorted to his usual tactics of over-the-top rhetoric that nobody believes. Over the last week, I have watched Lyin’ Ted become more and more unhinged as he is unable to react under the pressure and stress of losing, in all cases by landslides, the last six primary elections --- in fact, coming in last place in all but one of them. Today’s ridiculous outburst only proves what I have been saying for a long time, that Ted Cruz does not have the temperament to be President of the United States. - -I am pleased to have the support of Representative Duncan (TN), who is one of the most fiscally conservative Members of the House. If more Members voted like Rep. Duncan, we wouldn't be wasting trillions of taxpayer dollars in foreign countries. - -After massive defeats in Arizona, New York, Pennsylvania, Rhode Island, Delaware, Connecticut and Maryland (in addition to twenty other contests), and given the fact that Senator Cruz has millions of votes fewer than me and is being clobbered on the delegate front, this is a pure waste of time. It reminds me very much of the already failed Kasich 'collusion' – a desperate attempt to save a failing campaign by an all talk, no action politician. The people of Indiana are very smart – and they will see through this just like they saw through the already failed Kasich alliance. Cruz has no path to victory - he is only trying to stay relevant. - -Thank you for the opportunity to speak to you, and thank you to the Center for the National Interest for honoring me with this invitation. - -I would like to talk today about how to develop a new foreign policy direction for our country – one that replaces randomness with purpose, ideology with strategy, and chaos with peace. - -It is time to shake the rust off of America’s foreign policy. It's time to invite new voices and new visions into the fold. - -The direction I will outline today will also return us to a timeless principle. My foreign policy will always put the interests of the American people, and American security, above all else. That will be the foundation of every decision that I will make. - -America First will be the major and overriding theme of my administration. - -But to chart our path forward, we must first briefly look back. - -We have a lot to be proud of. In the 1940s we saved the world. The Greatest Generation beat back the Nazis and the Japanese Imperialists. - -Then we saved the world again, this time from totalitarian Communism. The Cold War lasted for decades, but we won. - -Democrats and Republicans working together got Mr. Gorbachev to heed the words of President Reagan when he said: “tear down this wall.” - -History will not forget what we did. - -Unfortunately, after the Cold War, our foreign policy veered badly off course. We failed to develop a new vision for a new time. In fact, as time went on, our foreign policy began to make less and less sense. - -Logic was replaced with foolishness and arrogance, and this led to one foreign policy disaster after another. - -We went from mistakes in Iraq to Egypt to Libya, to President Obama’s line in the sand in Syria. Each of these actions has helped to throw the region into chaos and has given ISIS the space it needs to grow and prosper.
- -It all began with the dangerous idea that we could make Western democracies out of countries that had no experience or interest in becoming a Western democracy. - -We tore up what institutions they had and then were surprised at what we unleashed: civil war and religious fanaticism. Thousands of American lives, and many trillions of dollars, were lost as a result. That created the vacuum that ISIS would fill. Iran, too, would rush in and fill the void, much to its unjust enrichment. - -Our foreign policy is a complete and total disaster. - -No vision, no purpose, no direction, no strategy. - -Today, I want to identify five main weaknesses in our foreign policy. - - -First, our resources are overextended. - -President Obama has weakened our military by weakening our economy. He’s crippled us with wasteful spending, massive debt, low growth, a huge trade deficit and open borders. - -Our manufacturing trade deficit with the world is now approaching $1 trillion a year. We’re rebuilding other countries while weakening our own. - -Ending the theft of American jobs will give us the resources we need to rebuild our military and regain our financial independence and strength. - -I am the only person running for the Presidency who understands this problem and knows how to fix it. - - -Second, our allies are not paying their fair share. - -Our allies must contribute toward the financial, political and human costs of our tremendous security burden. But many of them are simply not doing so. They look at the United States as weak and forgiving and feel no obligation to honor their agreements with us. - -In NATO, for instance, only 4 of 28 other member countries, besides America, are spending the minimum required 2% of GDP on defense. - -We have spent trillions of dollars over time – on planes, missiles, ships, equipment – building up our military to provide a strong defense for Europe and Asia. The countries we are defending must pay for the cost of this defense – and, if not, the U.S. must be prepared to let these countries defend themselves. - -The whole world will be safer if our allies do their part to support our common defense and security. - -A Trump Administration will lead a free world that is properly armed and funded. - - -Third, our friends are beginning to think they can’t depend on us. - -We’ve had a president who dislikes our friends and bows to our enemies. - -He negotiated a disastrous deal with Iran, and then we watched them ignore its terms, even before the ink was dry. - -Iran cannot be allowed to have a nuclear weapon and, under a Trump Administration, will never be allowed to have a nuclear weapon. - -All of this without even mentioning the humiliation of the United States with Iran’s treatment of our ten captured sailors. - -In negotiation, you must be willing to walk. The Iran deal, like so many of our worst agreements, is the result of not being willing to leave the table. When the other side knows you’re not going to walk, it becomes absolutely impossible to win. - -At the same time, your friends need to know that you will stick by the agreements that you have with them. - -President Obama gutted our missile defense program, then abandoned our missile defense plans with Poland and the Czech Republic. - -He supported the ouster of a friendly regime in Egypt that had a longstanding peace treaty with Israel – and then helped bring the Muslim Brotherhood to power in its place.
- -Israel, our great friend and the one true democracy in the Middle East, has been snubbed and criticized by an Administration that lacks moral clarity. Just a few days ago, Vice President Biden again criticized Israel – a force for justice and peace – for acting as an impediment to peace in the region. - -President Obama has not been a friend to Israel. He has treated Iran with tender love and care and made it a great power in the Middle East – all at the expense of Israel, our other allies in the region and, critically, the United States. - -We’ve picked fights with our oldest friends, and now they’re starting to look elsewhere for help. - -Fourth, our rivals no longer respect us. - -In fact, they are just as confused as our allies, but an even bigger problem is that they don’t take us seriously anymore. - -When President Obama landed in Cuba on Air Force One, no leader was there to meet or greet him – perhaps an incident without precedent in the long and prestigious history of Air Force One. - -Then, amazingly, the same thing happened in Saudi Arabia -- it's called no respect. - -Do you remember when the President made a long and expensive trip to Copenhagen, Denmark, to get the Olympics for our country, and, after this unprecedented effort, it was announced that the United States came in fourth place? - -He should have known the result before making such an embarrassing commitment. - -The list of humiliations goes on and on. - -President Obama watches helplessly as North Korea increases its aggression and expands its nuclear reach even further. - -Our president has allowed China to continue its economic assault on American jobs and wealth, refusing to enforce trade rules – or apply the leverage on China necessary to rein in North Korea. - -He has even allowed China to steal government secrets with cyber attacks and engage in industrial espionage against the United States and its companies. - -We’ve let our rivals and challengers think they can get away with anything. - -If President Obama’s goal had been to weaken America, he could not have done a better job. - -Finally, America no longer has a clear understanding of our foreign policy goals. - -Since the end of the Cold War and the break-up of the Soviet Union, we’ve lacked a coherent foreign policy. - -One day we’re bombing Libya and getting rid of a dictator to foster democracy for civilians; the next day we are watching the same civilians suffer while that country falls apart. - -We're a humanitarian nation. But the legacy of the Obama-Clinton interventions will be weakness, confusion, and disarray. - -We have made the Middle East more unstable and chaotic than ever before. - -We left Christians subject to intense persecution and even genocide. - -Our actions in Iraq, Libya and Syria have helped unleash ISIS. - -And we’re in a war against radical Islam, but President Obama won’t even name the enemy! - -Hillary Clinton also refuses to say the words “radical Islam,” even as she pushes for a massive increase in refugees. - -After Secretary Clinton’s failed intervention in Libya, Islamic terrorists in Benghazi took down our consulate and killed our ambassador and three brave Americans. Then, instead of taking charge that night, Hillary Clinton decided to go home and sleep! Incredible. - -Clinton blames it all on a video, an excuse that was a total lie. Our Ambassador was murdered and our Secretary of State misled the nation – and by the way, she was not awake to take that call at 3 o'clock in the morning.
- -And now ISIS is making millions of dollars a week selling Libyan oil. - -This will change when I am president. - -To all our friends and allies, I say America is going to be strong again. America is going to be a reliable friend and ally again. - -We’re going to finally have a coherent foreign policy based upon American interests, and the shared interests of our allies. - -We are getting out of the nation-building business, and instead focusing on creating stability in the world. - -Our moments of greatest strength came when politics ended at the water’s edge. - -We need a new, rational American foreign policy, informed by the best minds and supported by both parties, as well as by our close allies. - -This is how we won the Cold War, and it’s how we will win our new and future struggles. - -First, we need a long-term plan to halt the spread and reach of radical Islam. - -Containing the spread of radical Islam must be a major foreign policy goal of the United States. - -Events may require the use of military force. But it’s also a philosophical struggle, like our long struggle in the Cold War. - -In this, we’re going to be working very closely with our allies in the Muslim world, all of which are at risk from radical Islamic violence. - -We should work together with any nation in the region that is threatened by the rise of radical Islam. But this has to be a two-way street – they must also be good to us and remember us and all we are doing for them. - -The struggle against radical Islam also takes place in our homeland. There are scores of recent migrants inside our borders charged with terrorism. For every case known to the public, there are dozens more. - -We must stop importing extremism through senseless immigration policies. - -A pause for reassessment will help us to prevent the next San Bernardino or worse -- all you have to do is look at the World Trade Center and September 11th. - -And then there’s ISIS. I have a simple message for them. Their days are numbered. I won’t tell them where and I won’t tell them how. We must, as a nation, be more unpredictable. But they’re going to be gone. And soon. - -Second, we have to rebuild our military and our economy. - -The Russians and Chinese have rapidly expanded their military capability, but look what’s happened to us! - -Our nuclear weapons arsenal – our ultimate deterrent – has been allowed to atrophy and is desperately in need of modernization and renewal. - -Our active-duty armed forces have shrunk from 2 million in 1991 to about 1.3 million today. - -The Navy has shrunk from over 500 ships to 272 ships during that time. - -The Air Force is about 1/3 smaller than it was in 1991. Pilots are flying B-52s in combat missions today that are older than most people in this room. - -And what are we doing about this? President Obama has proposed a 2017 defense budget that, in real dollars, cuts nearly 25% from what we were spending in 2011. - -Our military is depleted, and we’re asking our generals and military leaders to worry about global warming. - -We will spend what we need to rebuild our military. It is the cheapest investment we can make. We will develop, build and purchase the best equipment known to mankind. Our military dominance must be unquestioned. - -But we will look for savings and spend our money wisely. In this time of mounting debt, not one dollar can be wasted. - -We are also going to have to change our trade, immigration and economic policies to make our economy strong again – and to put Americans first again.
This will ensure that our own workers, right here in America, get the jobs and higher pay that will grow our tax revenue and increase our economic might as a nation. - -We need to think smarter about areas where our technological superiority gives us an edge. This includes 3-D printing, artificial intelligence and cyberwarfare. - -A great country also takes care of its warriors. Our commitment to them is absolute. A Trump Administration will give our service men and women the best equipment and support in the world when they serve, and the best care in the world when they return as veterans to civilian life. - -Finally, we must develop a foreign policy based on American interests. - - -Businesses do not succeed when they lose sight of their core interests and neither do countries. - -Look at what happened in the 1990s. Our embassies in Kenya and Tanzania were attacked and seventeen brave sailors were killed on the USS Cole. And what did we do? It seemed we put more effort into adding China to the World Trade Organization – which has been a disaster for the United States – than into stopping Al Qaeda. - -We even had an opportunity to take out Osama Bin Laden, and didn’t do it. And then, we got hit at the World Trade Center and the Pentagon, the worst attack on our country in its history. - -Our foreign policy goals must be based on America’s core national security interests, and the following will be my priorities. - -In the Middle East, our goals must be to defeat terrorists and promote regional stability, not radical change. We need to be clear-sighted about the groups that will never be anything other than enemies. - -And we must only be generous to those that prove they are our friends. - -We desire to live peacefully and in friendship with Russia and China. We have serious differences with these two nations, and must regard them with open eyes. But we are not bound to be adversaries. We should seek common ground based on shared interests. Russia, for instance, has also seen the horror of Islamic terrorism. - -I believe an easing of tensions and improved relations with Russia – from a position of strength – is possible. Common sense says this cycle of hostility must end. Some say the Russians won’t be reasonable. I intend to find out. If we can’t make a good deal for America, then we will quickly walk from the table. - -Fixing our relations with China is another important step towards a prosperous century. China respects strength, and by letting them take advantage of us economically, we have lost all of their respect. We have a massive trade deficit with China, a deficit we must find a way, quickly, to balance. - -A strong and smart America is an America that will find a better friend in China. We can both benefit or we can both go our separate ways. - -After I am elected President, I will also call for a summit with our NATO allies, and a separate summit with our Asian allies. In these summits, we will not only discuss a rebalancing of financial commitments, but take a fresh look at how we can adopt new strategies for tackling our common challenges. - -For instance, we will discuss how we can upgrade NATO’s outdated mission and structure – grown out of the Cold War – to confront our shared challenges, including migration and Islamic terrorism. - -I will not hesitate to deploy military force when there is no alternative. But if America fights, it must fight to win. I will never send our finest into battle unless necessary – and will only do so if we have a plan for victory. 
- -Our goal is peace and prosperity, not war and destruction. - -The best way to achieve those goals is through a disciplined, deliberate and consistent foreign policy. - -With President Obama and Secretary Clinton we’ve had the exact opposite: a reckless, rudderless and aimless foreign policy – one that has blazed a path of destruction in its wake. - -After losing thousands of lives and spending trillions of dollars, we are in far worse shape now in the Middle East than ever before. - -I challenge anyone to explain the strategic foreign policy vision of Obama-Clinton – it has been a complete and total disaster. - -I will also be prepared to deploy America’s economic resources. Financial leverage and sanctions can be very persuasive – but we need to use them selectively and with determination. Our power will be used if others do not play by the rules. - -Our friends and enemies must know that if I draw a line in the sand, I will enforce it. - -However, unlike other candidates for the presidency, war and aggression will not be my first instinct. You cannot have a foreign policy without diplomacy. A superpower understands that caution and restraint are signs of strength. - -Although not in government service, I was totally against the War in Iraq, saying for many years that it would destabilize the Middle East. Sadly, I was correct, and the biggest beneficiary was Iran, which is systematically taking over Iraq and gaining access to its rich oil reserves – something it has wanted to do for decades. And now, to top it all off, we have ISIS. - -My goal is to establish a foreign policy that will endure for several generations. - -That is why I will also look for talented experts with new approaches, and practical ideas, rather than surrounding myself with those who have perfect resumes but very little to brag about except responsibility for a long history of failed policies and continued losses at war. - -Finally, I will work with our allies to reinvigorate Western values and institutions. Instead of trying to spread “universal values” that not everyone shares, we should understand that strengthening and promoting Western civilization and its accomplishments will do more to inspire positive reforms around the world than military interventions. - -These are my goals, as president. - -I will seek a foreign policy that all Americans, whatever their party, can support, and which our friends and allies will respect and welcome. - -The world must know that we do not go abroad in search of enemies, that we are always happy when old enemies become friends, and when old friends become allies. - -To achieve these goals, Americans must have confidence in their country and its leadership again. - -Many Americans must wonder why our politicians seem more interested in defending the borders of foreign countries than their own. - -Americans must know that we are putting the American people first again. On trade, on immigration, on foreign policy – the jobs, incomes and security of the American worker will always be my first priority. - -No country has ever prospered that failed to put its own interests first. Both our friends and enemies put their countries above ours and we, while being fair to them, must do the same. - -We will no longer surrender this country, or its people, to the false song of globalism. - -The nation-state remains the true foundation for happiness and harmony.
I am skeptical of international unions that tie us up and bring America down, and will never enter America into any agreement that reduces our ability to control our own affairs. - -NAFTA, as an example, has been a total disaster for the U.S. and has emptied our states of our manufacturing and our jobs. Never again. Only the reverse will happen. We will keep our jobs and bring in new ones. There will be consequences for companies that leave the U.S. only to exploit it later. - -Under a Trump Administration, no American citizen will ever again feel that their needs come second to the citizens of foreign countries. - -I will view the world through the clear lens of American interests. - -I will be America’s greatest defender and most loyal champion. We will not apologize for becoming successful again, but will instead embrace the unique heritage that makes us who we are. - -The world is most peaceful, and most prosperous, when America is strongest. - -America will continually play the role of peacemaker. - -We will always help to save lives and, indeed, humanity itself. But to play that role, we must make America strong again. - -We must make America respected again. And we must make America great again. - -If we do that, perhaps this century can be the most peaceful and prosperous the world has ever known. Thank you. - -Ken has a proven track record in winning state political races. He will support our delegate operations team and bolster our ground game efforts. He brings tremendous experience to the job, and I know he is up to the task of working with my team. - -It is sad that two grown politicians have to collude against one person who has only been a politician for ten months in order to try to stop that person from getting the Republican nomination. - - -Senator Cruz has done very poorly. After his New York performance, which was a total disaster, he is in free fall, and, as everyone has seen, he does not react well under pressure. Also, approximately 80% of the Republican Party is against him. Governor Kasich has only won 1 state out of 41 – in other words, he is 1 for 41 – and he is not even doing as well as other candidates who could have stubbornly stayed in the race like him but chose not to do so. Marco Rubio, as an example, has more delegates than Kasich and yet suspended his campaign one month ago. Others, likewise, have done much better than Kasich, who would get slaughtered by Hillary Clinton once the negative ads against him begin. 85% of Republican voters are against Kasich. - - -Collusion is often illegal in many other industries and yet these two Washington insiders have had to resort to collusion in order to stay alive. They are mathematically dead and this act only shows, as puppets of donors and special interests, how truly weak they and their campaigns are. I have brought millions of voters into the Republican primary system and have received many millions of votes more than Cruz or Kasich. Additionally, I am far ahead of both candidates with delegates and would be receiving in excess of 60% of the vote except for the fact that there were so many candidates running against me. - - -Because of me, everyone now sees that the Republican primary system is totally rigged. When two candidates who have no path to victory get together to stop a candidate who is expanding the party by millions of voters (all of whom will drop out if I am not in the race), it is yet another example of everything that is wrong in Washington and our political system.
This horrible act of desperation, from two campaigns that have totally failed, makes me even more determined, for the good of the Republican Party and our country, to prevail! - -I am honored to be invited to speak at an organization founded by former President Richard Nixon, and look forward to sharing my views on the many serious foreign policy issues facing our country and our allies around the world. Trade, immigration and security policies are critical concerns of all Americans, and we must develop a clear, consistent long-term foreign policy for making America safe and prosperous. - -Rick is a seasoned political expert with a very successful career in winning elections. He brings decades of experience, and his deep ties to political leaders and activists across the country will be a tremendous asset as we enter the final phase of securing the nomination. - -I am pleased to bring Tim on board to organize what is a very important state. I know he will be an asset to the team and ultimately deliver a win in California. - -Thank you to the great people of Missouri who voted for me and the state officials who worked to ensure the votes of the people mattered. It is great to have yet another victory as we look forward to the upcoming primary in New York. - -My campaign continues to receive tremendous support from voters all across the country. We have won far more states, far more delegates, and millions more votes than any other candidate. The nomination process has reached a point that requires someone familiar with the complexities involved in the final stages. I am organizing these responsibilities under someone who has done this job successfully in many campaigns. This will allow the rest of my team to deal with the increasing needs of a national campaign for both the pre-Convention phase and, most importantly, the general election. Paul is a well-respected expert in this regard and we are pleased to have him join the efforts to Make America Great Again. - -New York is my home and I am so proud to have been able to assemble such an incredible team. I have watched and known these people for so many years. They love New York and our country. Together we will Make America Great Again. - -Congressman Hunter and Congressman Collins are conservative stalwarts. I am honored to have the support of these two well-respected Members of Congress who share my vision of securing our borders, strengthening our military, treating our veterans with the respect and care they deserve and putting Americans first again. - -If Congress were to pass legislation making abortion illegal and the federal courts upheld this legislation, or any state were permitted to ban abortion under state and federal law, the doctor or any other person performing this illegal act upon a woman would be held legally responsible, not the woman. The woman is a victim in this case, as is the life in her womb. My position has not changed - like Ronald Reagan, I am pro-life with exceptions. - -I am deeply grateful for this extraordinary and historic endorsement from America's Border Patrol Agents, and especially honored that they would break with their past precedent of not endorsing in presidential primaries in order to endorse my candidacy. This endorsement represents a total rejection of the corrupt politicians who have allowed transnational gangs and cartels to terrorize American communities.
- -The National Border Patrol Council is the official body representing America's front-line Border Patrol Agents who sacrifice every day, under intolerable political restraints, to keep America safe. America's 16,500 border patrol agents represented by the NBPC are the first line of defense for our nation. The NBPC provides the vital outlet to learn the truth - not the political spin from bureaucrats - about what is really happening on our border. And the NBPC has been the one outlet these agents have to prevent their voice from being drowned out by big money special interests. - -As President, I will work tirelessly with the NBPC and their rank-and-file agents to secure our border once and for all. I will ensure that every rank-and-file officer has the resources, tools and support they need to protect this nation and stop the influx of drugs, gangs and cartel violence. Together, we will save thousands of American lives, millions of American jobs, and billions of American tax dollars. - -I am deeply privileged to have the official support of America's border patrol agents, and will never let them down. - -This is our chance to make our country safe again, and ultimately, Make America Great Again. - -Paul is a great asset and an important addition as we consolidate the tremendous support we have received in the primaries and caucuses, garnering millions more votes than any other candidate. Paul Manafort, and the team I am building, bring the needed skill sets to ensure that the will of the Republican voters, not the Washington political establishment, determines who will be the nominee for the Republican Party. I look forward to winning the nomination, and ultimately the presidency in order to Make America Great Again. - -I have no idea whether the cover story about Ted Cruz in this week's issue of the National Enquirer is true or not, but I had absolutely nothing to do with it, did not know about it, and have not, as yet, read it. Likewise, I have nothing to do with the National Enquirer and, unlike Lyin’ Ted Cruz, I do not surround myself with political hacks and henchmen and then pretend total innocence. Ted Cruz’s problem with the National Enquirer is his and his alone, and while they were right about O.J. Simpson, John Edwards, and many others, I certainly hope they are not right about Lyin’ Ted Cruz. I look forward to spending the week in Wisconsin, winning the Republican nomination and ultimately the Presidency in order to Make America Great Again. - -Good evening. I speak to you today as a lifelong supporter and true friend of Israel. I am a newcomer to politics but not to backing the Jewish state. - -In late 2001, weeks after the attacks on New York City and Washington – attacks perpetrated by Islamic fundamentalists – Mayor Giuliani visited Israel to show solidarity with terror victims. I sent him in my plane because I backed the mission 100%. - -In Spring 2004, at the height of violence in the Gaza Strip, I was the Grand Marshal of the 40th Salute to Israel Parade, the largest single gathering in support of the Jewish state. - -It was a very dangerous time for Israel and frankly for anyone supporting Israel - many people turned down this honor – I did not; I took the risk. - -I didn't come here tonight to pander to you about Israel. That's what politicians do: all talk, no action.
I came here to speak to you about where I stand on the future of American relations with our strategic ally, our unbreakable friendship, and our cultural brother, the only democracy in the Middle East, the State of Israel. - -My number one priority is to dismantle the disastrous deal with Iran. I have been in business a long time. I know deal-making, and let me tell you, this deal is catastrophic - for America, for Israel, and for the whole Middle East. - -The problem here is fundamental. We have rewarded the world's leading state sponsor of terror with $150 billion and we received absolutely nothing in return. - -I've studied this issue in greater detail than almost anybody. The biggest concern with the deal is not necessarily that Iran is going to violate it, although it already has; the bigger problem is that they can keep the terms and still get to the bomb by simply running out the clock, and, of course, they keep the billions. - -The deal doesn’t even require Iran to dismantle its military nuclear capability! Yes, it places limits on its military nuclear program for only a certain number of years. But when those restrictions expire, Iran will have an industrial-size military nuclear capability ready to go, and with zero provision for delay no matter how bad Iran's behavior is. When I am president, I will adopt a strategy that focuses on three things when it comes to Iran. - -First, we will stand up to Iran’s aggressive push to destabilize and dominate the region. Iran is a very big problem and will continue to be, but if I'm elected President, I know how to deal with trouble. Iran is a problem in Iraq, a problem in Syria, a problem in Lebanon, a problem in Yemen, and will be a very major problem for Saudi Arabia. Literally every day, Iran provides more and better weapons to its puppet states. - -Hezbollah in Lebanon has received sophisticated anti-ship weapons, anti-aircraft weapons, and GPS systems on rockets. Now they're in Syria trying to establish another front against Israel from the Syrian side of the Golan Heights. - -In Gaza, Iran is supporting Hamas and Islamic Jihad - and in the West Bank they are openly offering Palestinians $7,000 per terror attack and $30,000 for every Palestinian terrorist's home that’s been destroyed. - -Iran is financing military forces throughout the Middle East and it is absolutely indefensible that we handed them over $150 billion to facilitate even more acts of terror. - -Second, we will totally dismantle Iran’s global terror network. Iran has seeded terror groups all over the world. During the last five years, Iran has perpetrated terror attacks in 25 different countries on five continents. They’ve got terror cells everywhere, including in the western hemisphere very close to home. Iran is the biggest sponsor of terrorism around the world and we will work to dismantle that reach. - -Third, at the very least, we must hold Iran accountable by restructuring the terms of the previous deal. Iran has already - since the deal has been in place - test-fired ballistic missiles three times. Those ballistic missiles, with a range of 1,250 miles, were designed to intimidate not only Israel, which is only 600 miles away, but also Europe and, someday, the United States. - -Do you want to hear something really shocking? As many of the great people in this room know, painted on those missiles – in both Hebrew and Farsi - were the words “Israel must be wiped off the face of the earth.” - -What kind of demented minds write that in Hebrew?
And here's another twisted part - testing these missiles does not even violate the horrible deal that we made! - -The deal is silent on missile tests, but those tests DO violate UN Security Council Resolutions. The problem is, no one has done anything about it. Which brings me to my next point – the utter weakness and incompetence of the United Nations. - -The United Nations is not a friend of democracy. It's not a friend to freedom. It's not a friend even to the United States of America, where, as all know, it has its home. And it surely isn’t a friend to Israel. - -With President Obama in his final year, discussions have been swirling about an attempt to bring a Security Council resolution on the terms of an eventual agreement between Israel and Palestine. Let me be clear: An agreement imposed by the UN would be a total and complete disaster. The United States must oppose this resolution and use the power of our veto. Why? Because that's not how you make a deal. - -Deals are made when parties come to the table and negotiate. Each side must give up something it values in exchange for something it requires. A deal that imposes conditions on Israel and the Palestinian Authority will do nothing to bring peace. It would only further delegitimize Israel and reward Palestinian terrorism, because every day they are stabbing Israelis – and even Americans. - -Just last week, American Taylor Allen Force, a West Point grad who served in Iraq and Afghanistan, was murdered in the street by a knife-wielding Palestinian. You don't reward that behavior, you confront it! - -It's not up to the United Nations to impose a solution. The parties must negotiate a resolution themselves. The United States can be useful as a facilitator of negotiations, but no one should be telling Israel it must abide by some agreement made by others thousands of miles away who don't even really know what's happening. - -When I'm president, believe me, I will veto any attempt by the UN to impose its will on the Jewish state. You see, I know about deal-making - that's what I do. I wrote The Art of the Deal, one of the all-time best-selling books about deals and deal making. To make a great deal, you need two willing participants. - -We know Israel is willing to deal. Israel has been trying to sit down at the negotiating table, without pre-conditions, for years. You had Camp David in 2000, where Prime Minister Barak made an incredible offer – maybe even too generous. Arafat rejected it. - -In 2008, Prime Minister Olmert made an equally generous offer. The Palestinian Authority rejected it. Then John Kerry tried to come up with a framework and Abbas didn't even respond, not even to the Secretary of State of the United States of America! - -When I become President, the days of treating Israel like a second-class citizen will end on Day One. I will meet with Prime Minister Netanyahu immediately. I have known him for many years and we will be able to work closely together to help bring stability and peace to Israel and to the entire region. - -Meanwhile, every single day, you have rampant incitement and children being taught to hate Israel and hate the Jews. When you live in a society where the firefighters are the heroes, little kids want to be firefighters. - -When you live in a society where athletes and movie stars are heroes, little kids want to be athletes and movie stars. In Palestinian society, the heroes are those who murder Jews - we can't let this continue. You cannot achieve peace if terrorists are treated as martyrs.
Glorifying terrorists is a tremendous barrier to peace. - -In Palestinian textbooks and mosques, you've got a culture of hatred that has been festering there for years, and if we want to achieve peace, they've got to end this indoctrination of hatred. There is no moral equivalency. Israel does not name public squares after terrorists. Israel does not pay its children to stab random Palestinians. - -You see, what President Obama gets wrong about deal-making is that he constantly applies pressure to our friends and rewards our enemies. That pattern, practiced by the President and his administration, including former Secretary of State Hillary Clinton, has repeated itself over and over and has done nothing but embolden those who hate America. We saw that with the release of $150 billion to Iran in the hope that they would magically join the world community - it's the same with Israel and Palestine. - -President Obama thinks that applying pressure to Israel will force the issue, but it's precisely the opposite. Already, half the population of Palestine has been taken over by the Palestinian ISIS, Hamas, and the other half refuses to confront the first half, so it's a very difficult situation, but when the United States stands with Israel, the chances of peace actually rise. That's what will happen when I'm president. - -We will move the American embassy to the eternal capital of the Jewish people, Jerusalem - and we will send a clear signal that there is no daylight between America and our most reliable ally, the state of Israel. - -The Palestinians must come to the table knowing that the bond between the United States and Israel is unbreakable. They must come to the table willing and able to stop the terror being committed on a daily basis against Israel, and they must come to the table willing to accept that Israel is a Jewish State and it will forever exist as a Jewish State. - -Thank you very much, it's been a great honor to be with you. - -It is my great honor to receive the endorsement from a leader as highly respected as Attorney General Pam Bondi. I love the people of Florida, where over the years I have invested my time and hundreds of millions of dollars, and employed thousands of people. Pam is one of the many individuals I have formed a great relationship with and I am very proud to receive her support. - -Ben is one of the truly great people I know. It is my honor to receive his endorsement and enthusiasm behind this incredible movement we are all building together. With Ben's help, we will continue to grow the Republican Party by bringing new people into the process to ensure we defeat Hillary Clinton in November. - -Record rates of immigration have produced lower wages and higher unemployment for U.S. workers. Pew polling shows 83 percent of all voters - Democrats, Republicans and Independents - think immigration should be frozen or reduced. The biggest beneficiaries of allowing fewer foreign workers into our country would be minority workers, including all immigrants now living here, who are competing for jobs, benefits and community resources against record waves of foreign workers. Limiting job competition would reopen pathways to middle-class stability and shrink welfare rolls. In addition, it would relieve the overcrowding in our schools and hospitals that afflicts our poorest communities. Yet, Senators Cruz and Rubio have led the charge for even higher immigration rates - a policy supported by only 7 percent of the Republican electorate.
When I am President, we will listen to the people - not the special interests - and get immigration numbers under control, as the voters have demanded. - -Lightweight Senator Marco Rubio is a dishonest person. He has cheated with credit cards, and does favors for lobbyists. In my opinion, he is a total crook, and I am doing the people of Florida a great favor by further exposing him. In addition to everything else, he is an absentee Senator with one of the worst voting records in the history of the United States Senate, instead preferring to spend his time begging for campaign contributions. He takes his orders from the Republican establishment and Super PACs that are spending millions of dollars to keep their puppet alive and well so he will continue to do what they say. Former Prosecutor and now Governor of New Jersey, Chris Christie, exposed him on the debate stage for being what he is - a choke artist. We are now going a step further. - -Megyn Kelly asked about highly-skilled immigration. The H-1B program is neither high-skilled nor immigration: these are temporary foreign workers, imported from abroad, for the explicit purpose of substituting for American workers at lower pay. I remain totally committed to eliminating rampant, widespread H-1B abuse and ending outrageous practices such as those that occurred at Disney in Florida when Americans were forced to train their foreign replacements. I will end forever the use of the H-1B as a cheap labor program, and institute an absolute requirement to hire American workers first for every visa and immigration program. No exceptions. - -It is an honor to have Jeff as a member of the team. I have such great respect for him and I look forward to working with him on the issues most important to Americans. - -Thanks to the efforts of these talented staff members and the support of millions who understand my vision to make our country better and stronger than ever before, we have had definitive victories in most of the early state elections and on Super Tuesday. As we look forward to more primary election victories, we are expanding their roles within the campaign. I am proud to have assembled a team of staff and supporters who are loyal only to the American people, not special interests, and truly want to Make America Great Again! - -It is truly an honor to receive the endorsement of two individuals I hold in the highest regard. They are smart, successful and the kind of business leaders our country needs to help negotiate trade deals, create jobs and spur economic development. I am proud to have their support and the support of other business leaders, including Carl Icahn. - -I am proud to receive the endorsement of such an iconic brand and a quality person such as Brian. Brian has a wonderful family and is an incredibly successful business person. I have great respect for Brian and I am grateful for his support and that of Bill Elliott, one of the best drivers in history, and active stock car racers, including his son Chase Elliott, Ryan Newman and David Lee Regan. - -Trump University has a 98% approval rating and an "A" rating from the Better Business Bureau. New York Attorney General Eric Schneiderman continues to waste taxpayer money trying to smear me, but the fact is that the overwhelming majority of students had a great experience. It's a minor civil case I have not settled out of principle. Lightweight Marco Rubio is grasping at straws and produced terrible ads featuring three people who all provided written statements praising the program.
I demand an immediate retraction of these false and libelous ads. It just shows how low a failing campaign will go to help its failing candidate. - -I am deeply honored to have the endorsement of Senator Jeff Sessions, leader of congressional conservatives. He has been called the Senate's indispensable man and the gold standard. He led the fight against the Gang of Eight, against Obama's trade deal, against Obama's judges, and for American sovereignty. He has stood up to special interests as few have. There is no more respected man in Congress, and we are closely aligned on many issues, including trade and illegal immigration. I am proud to consider Jeff Sessions an advisor, friend and ally. - -I am truly honored to have the support of these American heroes, the best of their generation. The American people can know with certainty that I will always place their interests above all else. I am the most militaristic person, and it is so important to me to strengthen our military and protect American families and freedoms. - -I love the state of Arizona and have received incredible support throughout the state. I am leading in all the polls and we have had amazing events with tremendous crowds. I am honored to receive this endorsement from Governor Brewer. - -I am proud to receive Governor LePage's endorsement. He will be a great asset as we continue to campaign across the country, and I'm grateful for his support. - -It is my great honor to receive the endorsement of the Governor. We have had a wonderful relationship for many years. He is a solid person that I have tremendous respect for. I am really proud to receive the support of the Governor and his family. - -I have great respect for Governor Mike Huckabee and we have a mutual admiration for our wonderful families. It is great to have his daughter, Sarah, join the campaign. - -If and when the Vatican is attacked by ISIS, which as everyone knows is ISIS's ultimate trophy, I can promise you that the Pope would have only wished and prayed that Donald Trump would have been President, because this would not have happened. ISIS would have been eradicated, unlike what is happening now with our all talk, no action politicians. - -The Mexican government and its leadership have made many disparaging remarks about me to the Pope, because they want to continue to rip off the United States, both on trade and at the border, and they understand I am totally wise to them. The Pope only heard one side of the story - he didn't see the crime, the drug trafficking and the negative economic impact the current policies have on the United States. He doesn't see how Mexican leadership is outsmarting President Obama and our leadership in every aspect of negotiation. - -For a religious leader to question a person's faith is disgraceful. I am proud to be a Christian, and as President I will not allow Christianity to be consistently attacked and weakened, unlike what is happening now, with our current President. No leader, especially a religious leader, should have the right to question another man's religion or faith. They are using the Pope as a pawn and they should be ashamed of themselves for doing so, especially when so many lives are involved and when illegal immigration is so rampant. - -Despite Senator Ted Cruz attempting to smear me and totally lie about my beliefs and positions on almost all of the issues, I am a conservative person and I believe in conservative values. Like Ronald Reagan, on many issues, I have evolved.
I am pro-life and have been for a long time. - -Let me be clear - I am pro-life. I support that position with exceptions allowed for rape, incest or the life of the mother being at risk. I did not always hold this position, but I had a significant personal experience that brought the precious gift of life into perspective for me. My story is well documented, so I will not retell it here. However, what I will do with the remaining space is express my feelings about life, and the culture of life, as we approach the 43rd anniversary of Roe v. Wade. - -I build things. There is a process involved in building things. We tap into a lot of disciplines, with engineering being one of the most important. The rules for putting structures together are as strict as the rules of physics. These rules have stood the test of time and have become the path to putting together structures that endure and are beautiful. America, when it is at its best, follows a set of rules that have worked since our founding. One of those rules is that we, as Americans, revere life and have done so since our Founders made it the first, and most important, of our "unalienable" rights. - -Over time, our culture of life in this country has started sliding toward a culture of death. Perhaps the most significant piece of evidence to support this assertion is that since Roe v. Wade was decided by the Supreme Court 43 years ago, over 50 million Americans never had the chance to enjoy the opportunities offered by this country. They never had the chance to become doctors, musicians, farmers, teachers, husbands, fathers, sons or daughters. They never had the chance to enrich the culture of this nation or to bring their skills, lives, loves or passions into the fabric of this country. They are missing, and they are missed. - -The Supreme Court in 1973 based its decision on imagined rights and liberties in the Constitution that are nowhere to be found. Even if we take the court at its word, that abortion is a matter of privacy, we should then extend the argument to the logical conclusion that private funds should subsidize this choice rather than the half billion dollars given to abortion providers every year by Congress. Public funding of abortion providers is an insult to people of conscience at the least and an affront to good governance at worst. - -If using taxpayer money to facilitate our slide to a culture of death were not enough, the 1973 ruling became a landmark decision demonstrating the utter contempt the court had for federalism and the 10th Amendment. Roe v. Wade gave the court an excuse to dismantle the decisions of state legislatures and the votes of the people. This is a pattern that the court has repeated over and over again since that decision. Perhaps Roe v. Wade became yet another instance of disconnect between the people and their government. - -We are in the middle of a presidential political cycle and votes will be cast in just days. The citizens of this nation will have the chance to vote for candidates that are aligned with their individual worldviews. It is my hope that they will choose the builder, the man who has the ability to imagine the greatness of this nation. The next President must follow those principles that work best and that reinforce the reverence Americans hold for life. A culture of life is too important to let slip away for convenience or political correctness. It is by preserving our culture of life that we will Make America Great Again. - -Ted Cruz is a totally unstable individual.
He is the single biggest liar I've ever come across, in politics or otherwise, and I have seen some of the best of them. His statements are totally untrue and completely outrageous. It is hard to believe a person who proclaims to be a Christian could be so dishonest and lie so much. - -Cruz said I would be appointing a liberal judge when in fact I will appoint a great conservative, and I am the only candidate who has gone so far, at the debate, as to suggest two individuals I feel would best represent the conservative values we need to protect: William "Bill" Pryor Jr. and Diane Sykes. - -Cruz says I am pro-choice, when in fact I am staunchly pro-life and have been for a long time. Like Ronald Reagan, on many issues, I have evolved. - -Cruz says I am in favor of ObamaCare, when in fact I have spoken about repealing and replacing this disaster of a system at every speech throughout my campaign and since its inception. Meanwhile, Cruz was responsible for getting Bush to put in the judge who twice failed to vote against ObamaCare. - -Cruz says I will try to take away your Second Amendment rights, when I am one of the strongest proponents of the right to bear arms and I say so in every speech that I have made for years. I am a proud member of the NRA and so are my sons. - -Cruz has become unhinged and is lying in the hope that his statements will go unchecked until after the election and save his failing campaign. - -In Iowa, Cruz told thousands of Ben Carson voters that Dr. Carson had left the race and to instead vote for Ted Cruz. He apologized when the race was over. Likewise, he sent a fraudulent voter violation form to Iowa voters. If Ted is going to continue to lie with such desperation, I have no choice but to fight back. - -One of the ways I can fight back is to bring a lawsuit against him relative to the fact that he was born in Canada and therefore cannot be President. If he doesn't take down his false ads and retract his lies, I will do so immediately. Additionally, the RNC should intervene, and if they don't, they are in default of their pledge to me. - -I am the strongest on the borders and I will build a wall, and it will be a real wall. I am strongest on illegal immigration, strongest on ISIS, strongest on the military, and I will take care of our Vets. I will end Common Core and preserve the Second Amendment. I will renegotiate our trade deals and bring our jobs back to our country. I am the only person who will Make America Great Again. - -I would like to offer my sincerest condolences to the Scalia family after the passing of Justice Scalia. Justice Scalia was a remarkable person and a brilliant Supreme Court Justice, one of the best of all time. His career was defined by his reverence for the Constitution and his legacy of protecting Americans' most cherished freedoms. He was a Justice who did not believe in legislating from the bench, a person whom I held in the highest regard, and I will always greatly respect his intelligence and his conviction to uphold the Constitution of our country. My thoughts and prayers are with his family during this time. - -The RNC, which is probably not on my side, just illegally put out a fundraising notice saying 'Trump wants you to contribute to the RNC.' The RNC does not treat me well, and then they use my name, without my knowledge, to raise money for themselves. At my insistence, they have withdrawn their request.
I am self-funding my campaign, and this totally unauthorized notice is yet another example of deceptive Washington tricks used to take advantage of the voters and get money from the hard-working people the politicians have failed. I will not stand for it and neither should you. - -People wouldn't be talking about illegal immigration had I not brought it up when I announced I was running for President. It is a massive problem in our country and now everybody agrees with me --- bad for crime and the economy. A wonderful young man, Jamiel Shaw Jr., whose father has become a friend of mine, was shot in the face for no reason by an illegal immigrant. He was getting ready to go to college on a football scholarship and his sole fault was walking home to see his father. Because of the relationship I have established with his father, Jamiel's death is very personal to me. We must stop illegal immigration. - -I am proud to be endorsed by such accomplished and well-respected military experts. Additionally, to be endorsed by a Gold Star mother is such a wonderful honor. Homeland security and strengthening our military are two of the most important issues and cornerstones of my campaign. Jim and Gary's support is so important to me, and Susan is an incredible person; with their help we will win Florida and Make America Great Again! - -It is my great honor to receive these coveted and influential endorsements from tremendous people in the state of Georgia. I have visited many times and had great crowds and poll numbers. I look forward to being in Georgia again soon and working with each of these local leaders to Make America Great Again. - -I am delighted Bill Stern has accepted a leadership role in my campaign. He is a successful real estate developer, and for eight years he served as Chair of the South Carolina Ports Authority, which operates one of the most important commercial ports in the world. I look forward to working closely with Bill --- with his support we will Make America Great Again. - -This is a movement. People believe in my vision to make our country better and stronger than ever before, and I am so honored to have their support. I am going to make you so proud--- we will take our country back and Make America Great Again! - -Our Veterans have been treated like third-class citizens, and it is my great honor to support them with this $1 million contribution - they are truly incredible people. We are going to strengthen our military, take care of our Vets and Make America Great Again. - -We have such tremendous support all over the country; the people of Georgia have been amazing. We are leading in all the polls, with 33% in the most recent Georgia survey. The support from this state is so important to me --- with their help we will Make America Great Again! - -Lt. Governor McMaster has a distinguished record of service to the people of South Carolina and the country, and his endorsement is a great honor. I'm proud to have Henry and his wife Peggy join the team. The people of South Carolina are amazing and I continue to lead all state polls by wide, double digit margins. - -I have great respect for Sheriff Arpaio. We must restore law and order on the border and respect the men and women of our police forces. I thank him for his support of my policies and candidacy for President. - -It is truly an honor to receive Jerry's endorsement.
Not only is he a high-quality person, with a wonderful family, whom I have great respect for - I also consider him a very good friend, and his support means so much to me. - -It was a great honor to be introduced by the legendary Jerry Falwell Jr. He has built a tremendous institution and is a really terrific person with a beautiful family. We have always had a wonderful relationship and I am proud to share his kind words with the people of Iowa and South Carolina. - -Ted Cruz is a total hypocrite and, until recently, a Canadian citizen who may not even have a legal right to run for President. He didn't disclose loans, pretending he's Robin Hood, when he's just another all talk, no action politician. Had I not brought up the subject of illegal immigration, an issue which Ted Cruz is very weak on, nobody would even be talking about it. I will build a great wall, and Mexico will pay for it. - -I am truly honored to receive Willie's endorsement. He is a great person, has had tremendous success and has a really terrific family. He believes in my message and knows that I am the only one who will Make America Great Again! - -Jeff is a terrific guy. He has been supportive of my campaign since the very beginning, and it is an honor to receive the endorsement of such an intelligent, high-quality individual, committed to my vision to Make America Great Again! - -I am greatly honored to receive Sarah's endorsement. She is a friend, and a high-quality person whom I have great respect for. I am proud to have her support. - -I am proud to announce our Leadership Team in Louisiana, where we have tremendous grassroots support and a great team in place. I look forward to visiting soon and working with these individuals, whose support is so important, to share my vision to Make America Great Again! - -My family has been so supportive of me and my candidacy, and I am so proud of Ivanka. She is a terrific person, a devoted mother and an exceptional entrepreneur. Ivanka is doing a fantastic job running my company, alongside her brothers, and is building many of my biggest jobs. It is so important to have the support of all of my children and I'm really proud of this ad. - -It is such an honor to have the support of so many grassroots leaders across Virginia, where we are leading in the polls by substantial margins. The people of Virginia are ready to Make America Great Again! - -I have tremendous crowds, we are leading in all the polls, and there are so many people that want to see our country be greater than ever before. My life has been about winning, and Iowa and New Hampshire are so important to me. This is where we will begin to Make America Great Again! - -It is my great honor to receive the endorsement of the highly respected Sue Everhart in addition to so many other terrific people in Georgia. We have had a tremendous response across the state and I look forward to visiting again soon to share my message to Make America Great Again! - -It is my great honor to receive endorsements from each of these incredible people. Their support for my message and endorsement of my candidacy for President of the United States means so much to me, and with their help, and the help of so many great people in Florida and all over the country, we will Make America Great Again! - -It is my pleasure to have Joe serve as our Honorary Chairman in the great state of Rhode Island.
Securing our border and solving the issue of illegal immigration have become cornerstones of my campaign, and I have great respect for Joe's passion with regard to solving this problem. With the help of Joe and so many other supporters in Rhode Island, we will Make America Great Again. - -I am honored to receive these important endorsements from such accomplished businessmen in the great state of New Hampshire. Their support, and the support of so many incredible people all over the state, means so much to me. It is time to Make America Great Again, and with your help, on February 9th, we will do just that. - -It is my honor to be on the ballot in West Virginia. We have received tremendous support from so many people in the Mountain State and are leading in all the polls by double digits. In fact, my support in West Virginia is higher than in any other state in the nation. I look forward to visiting soon and sharing more about my message and my plans to Make America Great Again! - -The people of Georgia are incredible and the response to my message has been amazing. We have built a great team, and with their help, we will Make America Great Again! - -We already have tremendous grassroots support across the state of Louisiana, and with the help of Ryan and our Leadership Team we will continue to build a top-tier operation. I look forward to visiting soon and sharing my vision to Make America Great Again! - -I love the people of Texas and I am proud to have such a strong team in place in this important state as we work together to Make America Great Again! - -I love the people of New Hampshire and have made many great friends over the years. We have a great team in place, have the biggest crowds, and continue to lead all the polls. Andrew is a great addition, and with his help we will win New Hampshire and Make America Great Again! - -It is my honor to be on the ballot in Illinois and to file a full slate of delegates. We have received tremendous support from so many people in Illinois; we are leading in all the polls and have had the largest crowds. With the support of the people of Illinois, and around the country, we will Make America Great Again. - -I am leading in every poll by wide, double digit margins. We have tremendous crowds, incredible support from all over the country, and I am $35 million under budget. We have spent the least amount of money and have the best results, and this is the kind of thinking the country needs. I am very proud of this ad. I don't know if I need it, but I don't want to take any chances, because if I win we are going to Make America Great Again. - -I love Texas and have visited many times. We have had tremendous crowds, are winning all the polls and have met some terrific people who truly want to Make America Great Again. With their support, we will make the country better than ever before. - -I am pleased to welcome Ryan to the team. We already have tremendous grassroots support across the state of Louisiana and I look forward to visiting soon and sharing my vision to Make America Great Again! - -It is a great honor to have such incredible grassroots support across the state of New Hampshire. We have tremendous poll results and crowds, and I look forward to visiting many times over the next several weeks. With the support of each Town Chair and so many others, we will Make America Great Again! - -I am excited to return to Michigan where we had an incredible rally attended by over 5,000 people in August.
Scott will be a great asset to our team as we continue to build infrastructure beyond the early primary states and share my vision to Make America Great Again. - -If anyone needs more evidence of why the American people are suffering at the hands of their own government, look no further than the budget deal announced by Speaker Ryan. In order to avoid a government shutdown, a cowardly threat from an incompetent President, the elected Republicans in Congress threw in the towel and showed absolutely no budget discipline. - -The American people will have to absorb higher deficits, greater debt, less economic liberty and more corporate welfare. Congress cannot seem to help itself in bending to every whim of special interests. How can they face their constituents when they continue to burden our children and grandchildren with debts they will never be able to repay? Our government is failing us, so we must do something about it. Who knows how bad things will be when the next administration comes in and has to pick up the pieces? - -The only special interest not being served by our government is the American people. It is time we imposed budget discipline by holding the line on spending, getting rid of waste, fraud and abuse, and by taking on our debt. To do these things, we need a President who can lead the fight to hold Congress and the rest of government accountable. Together, we can Make America Great Again. - -I am proud to share this report, written by the highly respected Dr. Harold Bornstein of Lenox Hill Hospital, stating that I am in excellent health. I am fortunate to have been blessed with great genes --- both of my parents had very long and productive lives. I have truly enjoyed working on the campaign trail with one objective in mind, to Make America Great Again! People have been impressed by my stamina, but to me it has been easy because I am truly doing something that I love. Our country will soon be better and stronger than ever before. - -I am incredibly honored to receive this endorsement. My entire life has been spent defending the police and the incredible job they do. Especially today, they will play an increasingly vital part in making our nation safe. With their support and hard work, together we will Make America Great Again. - -I am so proud of this tremendous showing of support in New Hampshire and Tennessee. We are winning all the polls by huge, double digit margins and have the biggest crowds. With support from each of these incredible delegates and supporters across both states, we will Make America Great Again. - -It is a great honor to receive the endorsement of so many strong, accomplished women in the great state of Texas. Their leadership will help us win this important state on March 1st, 2016. With their support and the support of so many other women across the country, we are going to Make America Great Again. - -I am proud to have the support of so many community leaders in Oklahoma, and it is an honor to be on the ballot. I visited in September and was overwhelmed by the enthusiasm at the State Fair, where over 20,000 people attended my speech. With the support of those people and my team, we will Make America Great Again! - -Without looking at the various polling data, it is obvious to anybody that the hatred is beyond comprehension. Where this hatred comes from, and why, we will have to determine.
Until we are able to determine and understand this problem and the dangerous threat it poses, our country cannot be the victim of horrendous attacks by people that believe only in Jihad, and have no sense of reason or respect for human life. If I win the election for President, we are going to Make America Great Again. - -I love Massachusetts and have many great friends there, with a recent poll showing me in first place with 48%. The support has been incredible and I look forward to being there again soon and visiting Mississippi for the first time. With support from these two important states, we will Make America Great Again! - -It is my honor to be on the ballot in Georgia, Louisiana, Missouri, Tennessee and Utah. I have visited Tennessee and Georgia many times, and we have tremendous crowds and support from the great people there. I look forward to campaigning in Louisiana, Missouri, and Utah soon. We are leading in all the polls, and with your help we will Make America Great Again! - -This was an extraordinary meeting of religious leaders, many of whom have decided to endorse me--- such a great honor! I look forward to future meetings with the Coalition of African American Ministers. - -It is my honor to be on the ballot in Arizona, the Commonwealth of the Northern Mariana Islands, the District of Columbia, Vermont and the U.S. Virgin Islands. We are going to make this country better than it has ever been before by building a wall to secure our southern border, creating jobs, strengthening our military, taking care of our Vets and protecting our country. We will Make America Great Again! - -It is a great honor to receive Kat's endorsement. She is a genuine patriot who has worked tirelessly in service to our country and her fellow Veterans. With her support and the support of so many other Veterans across the country, we are going to Make America Great Again. - -Serge Kovaleski must think a lot of himself if he thinks I remember him from decades ago - if I ever met him at all, which I doubt I did. He should stop using his disability to grandstand and get back to reporting for a paper that is rapidly going down the tubes. - -I am proud to announce these fantastic additions to my Virginia leadership team. We have had tremendous crowds and enthusiasm from supporters across the state. I will do a great job for the people of Virginia, and with their help we will Make America Great Again! - -I have created thousands of jobs and own some of the most iconic assets in the state. I love the people of Florida and I am proud to have such overwhelming support and a great staff in place. I look forward to visiting often and working with my team to share my vision to Make America Great Again. - -It is great to announce the additions of Earl and Taylor, who will be valuable members of our operation in North Carolina, where I have been leading in every poll for many months. I look forward to being back in North Carolina soon as I continue to share my vision to Make America Great Again! - -It is an honor to be on the ballot in the state of Virginia, where we have a great staff and team of volunteers in place. I am pleased to announce the Southwest Virginia Leadership Team as we continue to be successful in Virginia and across the country. - -We must address Islamic terrorism and protect our country first. I will lead by example, as I always have, by vowing to defeat ISIS, stop illegal immigration and the Syrian refugee program, secure our border and bring real change to Washington, D.C.
I am the only one who can Make America Great Again. - -November is a great month for Americans. In New York City, we get to look out on Central Park and see the leaves change as we move from the beauty of Fall to a magnificent city cloaked in the chill and briskness of Winter. What we Americans often gloss over is that November is a time for celebrating the very reason we enjoy the freedom, opportunity and prosperity found nowhere else in the world. The reason we are able to live like we do is due to the courage, dedication and sacrifice of those who serve in the armed forces of the United States and to those who have given the last full measure. - -This November we should pay particular attention to the world situation. All over the world, young men and women who have volunteered to serve this nation are standing watch to protect us from the evil that is focused on taking away all that we value. We have turmoil in the Middle East, unrest in Europe, aggressive expansion in the Pacific Rim and stateless terrorism that threatens every aspect of freedom all across the globe. At no time in history have things been so precarious. Yet, we are able to find the best among us to step forward and take on the traditions and values of generations past, humbly putting on the uniform of this nation. They do so to support and defend the Constitution, not the President, the Congress or even the flag. They swear to defend an idea and an ideal. As is so subtly stated in the Preamble of our Founding Document, they take on the task of securing "the Blessings of Liberty to ourselves and our Posterity." - -On November 10th, the Marine Corps celebrated its 240th birthday. The Marines were, are and will remain unique in the world, a force that is first to fight and last to come home. The Marines have maintained the highest standards, and "once a Marine, always a Marine" captures the sense of duty and honor that permeates every pore of those who have worn the Eagle, Globe and Anchor. - -Today, we honor all veterans who are serving and have served in the armed forces. This day has special meaning to so many. Over the years, I have hired thousands of veterans and have found them to be among the very best. They bring no attention to themselves except in their extraordinary professionalism and dedication to whatever they choose to do. Quietly, humbly, and without fanfare, they go about their daily lives being great citizens, good moms and dads, and loving, devoted companions. Bearing up under the terrible damage of war, so many carry on with dignity and grace. We should be so proud of them and we should honor them every day. - -November is also the month we celebrate the Military Family. Too many of us have never had to be away from family and friends on holidays, birthdays and anniversaries. We have never been asked to risk life and limb in places we cannot even find on a map. We go about our daily lives enjoying freedom, most of the time without ever thinking about what it is like to have a loved one in harm's way. The sacrifice and strength of military families, those who also serve, is so important to the morale and welfare of those so far from home. - -I have lived a full life and have benefited more than most from the opportunities found in this country. I have a loving, healthy family and I am running for President -- only in America. To be where I am, I thank God every day for those who stand on the ramparts of liberty with courage, strength, humility and dignity.
So, this November, let's celebrate our veterans and their families and keep them in our prayers. Because of these dedicated Americans, we will be able to Make America Great Again! - -I greatly appreciate the professional service shown by the representatives of the states of Nevada and Kentucky. It is my great honor to be on the ballot in these important states, and I look forward to Making America Great Again. - -I am self-funding my campaign and therefore I will not be controlled by the donors, special interests and lobbyists who have corrupted our politics and politicians for far too long. I have disavowed all Super PACs, requested the return of all donations made to said PACs, and I am calling on all Presidential candidates to do the same. The character of our country is only as strong as our leaders---the only special interest I am beholden to is the American people, and together we will Make America Great Again! - -I am grateful to have the endorsement of Senator Zaun, who understands what is at stake in this election. Our country faces tough challenges. But America can be bigger and better than ever before. It will require leadership and a commitment to do what is right, even when it may not be popular among the elites in Washington, D.C., who have created more problems than they have solved. With the support of Senator Zaun and so many other conservatives across the country, we will Make America Great Again. - -I am grateful to have the support of so many leaders in Oklahoma. These individuals share my concern for our country and know that our challenges are not going to be solved by career politicians in Washington, D.C. With their support and the support of so many other conservatives across the country, we are going to Make America Great Again. - -My message to Make America Great Again has been so overwhelmingly well received, drawing record crowds and putting me in first place in all the polls. America is crippled right now and Washington, D.C. is broken. Our politicians are not capable or competent enough to fix our problems. We will continue to build out a substantial campaign team that allows us to take our message across the country and continue to share my vision to Make America Great Again. - -We are in this to win it. These staff additions are the continuation of our plan to have a strategic and significant presence across the country. I am pleased that my vision to Make America Great Again has generated so much support and such a positive response that we are leading in all the polls. By adding to our team in these critical states, we will be able to build on the tremendous support we have received and share our message with even more voters in these states. I look forward to being in these states even more as I continue to share my ideas about how to put America back on top! - -It is great to welcome Seth and Darren as we continue to build our team around the country and organize in early states. We have received great support in Georgia and Tennessee, where my message to create jobs, secure our border, strengthen our military and take care of our vets resonates strongly with voters who are ready to Make America Great Again. - -Have you seen today's poll? What about this Fox poll? We are in first place! All the polls confirm it! We continue to prove we are ready to take our country back and, more than any other candidate, we have the support of the people. - -Yesterday I traveled to Laredo, Texas, to visit the border and meet with local law enforcement officials.
We discussed the need for stronger border security and the importance of continuing the national discussion on this issue. Strengthening our border will most benefit legal immigrants and working-class Americans. Continuing to allow illegal immigrants to cross a weak border is hurting America's economy. - -Many politicians in Washington talk about the border but have no idea how dangerous it really is. I went to the border to make sure the American people have the opportunity to see the truth. - -Earlier this week I hosted my South Carolina campaign kickoff. Hundreds of veterans came to watch my speech. I announced the launch of a new hotline, 855-VETS-352, and an email address, veterans@donaldtrump.com, for Veterans to share their stories about the need to reform our Veterans Administration. The way veterans are being treated in our country is a disgrace, and I am the candidate who will fix it. - -I made a campaign stop in Nevada to speak at Freedom Fest. In front of 2,000 energized conservatives, I made a pledge that I will remake today; as President I will restore the American free market and ensure that companies are incentivized to bring factories and jobs back to American soil. - -I also traveled to Phoenix to host a rally. 15,000 people showed up. It was an incredible testament to the deep commitment of the American people to restoring American greatness. We have stayed silent for far too long, watching bad policies wreak havoc on our economy and our nation. I will take care of our veterans, rebuild our military, and secure our borders. - -Most politicians would have backed down after being relentlessly targeted by the media. I will never stop speaking out on behalf of all Americans. - -We have been bringing our message across America over the past several weeks. In California I met with the families of six victims who were killed by illegal aliens. Each family shared their story. It was a heartbreaking reminder of why we must secure our border - to make sure no more Americans are senselessly killed by illegal aliens. - -Our unprotected border is a threat to the stability of our economy, the personal safety of Americans, and our national security. Too many American lives have been lost because our leaders refuse to secure the border. - -I also filed my FEC financial disclosure form. I have spent my life building businesses in large American cities and small American towns. I will use my experience to put Americans back to work - and the polls prove that the people know I am the only one who can do it. - -This campaign is about changing Washington. We need to once again have a government that is of the people, for the people and by the people. - -We will Make America Great Again! - -Our Veterans are incredibly important, and I'm proud to have the support of this coalition, especially in New Hampshire, where if I am elected I will build a full-service, first-class VA hospital to ensure all New Hampshire Veterans receive the care they deserve. I love all Veterans and will help them finally lead the kind of lives that they should be leading. - -First, people said I would never run, and I did. Then they said I would never file my statement of candidacy with the FEC, and I did. Next, they said I would never file my personal financial disclosure forms. I filed them early, despite the fact that I am allowed two 45-day extensions. Now I have surged in the polls and am fighting to Make America Great Again.
I look forward to the challenge of winning the presidency and doing a fantastic job for our country. I will make the United States rich and strong and respected again, but also a country with a 'big heart' toward the care of our people. - -The Obama Administration's agreement with Iran is very dangerous. Iran developing a nuclear weapon, either through uranium or nuclear fuel, and defying the world is still a very real possibility. The inspections will not be followed, and Iran will no longer have any sanctions. Iran gets everything and loses nothing. - -Every promise the Obama Administration made at the beginning of negotiations, including the vow to get our great American prisoners returned to the U.S., has been broken. This is a bad deal that sets a dangerous precedent. - -This deal sets off a nuclear arms race in the Middle East, which is the most unstable region in the world. It is a horrible and perhaps catastrophic event for Israel. - -Furthermore, we should have kept the billions of dollars we have agreed to pay them. Any great dealmaker would know this is a perfect example of "tapping along," and because they have been unchecked for so long throughout this extremely lengthy process, I guarantee they are much closer to producing a nuclear weapon than they were at the start of negotiations. - -The fact is, the U.S. has incompetent leaders and even more incompetent negotiators. We must do better for America and the world. We have to Make America Great Again. - -Failing candidate Hillary Clinton, who is desperately trying to hold on to her lead in the Democratic primary against Bernie Sanders, is knowingly putting out lies about my stance on illegal immigration. I said "Mexico is sending"--- I'm not knocking immigration or immigrants, but rather am very critical of the country of Mexico for sending us people that they don't want. Likewise, I am very critical of illegal immigration and the tremendous problems, including crime, that it causes. - -She is desperate, she is sad, and she is obviously very nervous when she has to revert to issues that have already been settled, given the absolute accuracy of my statement. She speaks about "my tone" and that's the problem with our country's leaders. They are more worried about tone than results! It's not about being nice--- it's about being competent. - -Hillary should spend more time producing her illegally hidden emails and less time trying to obfuscate a statement by me that is totally clear and obviously very much accepted by the public as true. I am honored, however, that she is attacking me, instead of Jeb Bush. Obviously she knows that JEB is no longer her real competition. The last person she wants to face is Donald Trump. - -Stephen is a great addition to our New Hampshire leadership team, and I am proud to have his endorsement for the Republican presidential primary nomination. He understands what is at stake with our economy and will be an asset as we continue to solidify our position at the top of the field, both in New Hampshire and nationally. - -I am pleased to submit this filing to the Federal Election Commission, formalizing my campaign for President of the United States. I can rebuild the American Dream so that it is stronger, bigger and better than ever before. Together we will Make America Great Again! - -The tragic events that occurred on Wednesday evening should be our nation's primary focus for the foreseeable future. - -This is a time for healing, not politics.
- -I look forward to returning to South Carolina and continuing our discussion on how we can best move our country forward. Until that time, our prayers and deepest condolences are with the people of Charleston and the families of those who have been torn apart by this senseless act of violence and hate. - -Quite simply, it is time to bring real leadership to Washington. The fact is, the American Dream is dead -- but if I win, I will bring it back bigger and better and stronger than ever before. Together we will Make America Great Again! - -I have great respect for the people of New Hampshire --- people who work hard and love this country. We are always greeted by massive crowds and immense support, which is reflected in these results. If I run, and if I win, the people of New Hampshire will be very proud--- we will Make America Great Again! - -Are these the people you want negotiating for you? Our country is in big trouble. We are not utilizing our best. It's time we fix Washington with outsiders who actually know what they're doing. It's time we Make America Great Again! - -The American Dream is dead, but if I run, and if I win, we will bring it back stronger, bigger, and better than ever before! - -I'm proud to announce these individuals who share my passion for the country and understand that now is the time for executive leadership and real action. Together we can Make America Great Again! - -Yet again, the politicians are allowing our president to reinforce the lack of respect countries like China and Japan now have for the United States. They will devalue their currency, exploit our trade agreements, continue to destroy our economy and put Americans out of work. Politicians are all talk and no action. Instead of fast-tracking TPP, Congress should pass legislation that holds China and Japan accountable for currency manipulation. This would send a message to the world that there are consequences for cheating the United States. It's time for action. It's time to Make America Great Again! - -I've always enjoyed my time in New Hampshire and have great respect for the people of New Hampshire --- people who work hard and love this country. We are always greeted by huge crowds and overwhelming support, which is reflected in these results. I have a great love for our country, but it is a country that is in serious trouble. Americans deserve better than what they get from their politicians --- who are all talk and no action! I am the only one who can make America truly great again - I know how, and the politicians don't. - -I have a great love for our country, but it is a country that is in serious trouble. We have lost the respect of the entire world. Americans deserve better than what they get from their politicians --- who are all talk and no action! I have built a great company, created thousands of jobs and built a tremendous net worth with some of the finest and most prestigious assets in the world --- and very little debt! All Americans deserve the same opportunity. Our real unemployment rate is staggering, while our manufacturing base is eroding on a daily basis. We must rebuild our infrastructure, control our borders, support local control of education, greatly strengthen our military, care for our veterans and put Americans back to work! We must stop other countries from totally taking advantage of our representatives, who are being out-negotiated at every turn. I am the only one who can make America truly great again!
- -One of the biggest political events anywhere in the world is happening right now with the Republican Party. Millions and millions of people are going out to the polls and they're voting. They're voting out of enthusiasm. They're voting out of love. Some of these people, frankly, have never voted before - 50 years old, 60 years old, 70 years old - never voted before. -We're taking people from the Democrat Party. We're taking people as independents, and they're all coming out and the whole world is talking about it. It's very exciting. I think, frankly, the Republican establishment, or whatever you want to call it, should embrace what's happening. -We're having millions of extra people join. We are going to beat the Democrats. We are going to beat Hillary or whoever it may be. And we're going to beat them soundly. - -Because nobody knows the system better than me. I know the H-1B. I know the H-2B. Nobody knows it better than me. I'm a businessman. These are laws. These are regulations. These are rules. We're allowed to do it. And frankly, because of the devaluations that other countries - the monetary devaluations that other countries are constantly doing and brilliantly doing against us, it's very, very hard for our companies in this country, in our country, to compete. -So I will take advantage of it; they're the laws. But I'm the one that knows how to change it. Nobody else on this dais knows how to change it like I do, believe me. - -I will. First of all, I think and I know the H-1B very well. And it's something that I frankly use and I shouldn't be allowed to use it. We shouldn't have it. Very, very bad for workers. And second of all, I think it's very important to say, well, I'm a businessman and I have to do what I have to do. -When it's sitting there waiting for you, but it's very bad. It's very bad for business in terms of - and it's very bad for our workers and it's unfair for our workers. And we should end it. Very importantly, the Disney workers endorsed me, as you probably read. -And I got a full endorsement because they are the ones that said, and they had a news conference, and they said, he's the only one that's going to be able to fix it. Because it is a mess. I think for a period of a year to two years we have to look back and we have to see, just to answer the second part of your question, where we are, where we stand, what's going on. -We have to sort of take a strong, good, hard look and come up with plans that work. And we're rushing into things, and we're just - we're leading with the chin. - -We're leading with people that don't know what they are doing in terms of our leadership. I'd say a minimum of one year, maybe two years. - -Education through Washington, D.C. I don't want that. I want local education. I want the parents, and I want all of the teachers, and I want everybody to get together around a school and to make education great. -And it was very interesting, I was with Dr. Ben Carson today, who is endorsing me, by the way, tomorrow morning, and he is - -We were talking. We spoke for over an hour on education. And he has such a great handle on it. He wants competitive schools. He wants a lot of different things that are terrific, including charter schools, by the way, that the unions are fighting like crazy. But charter schools work and they work very well. -So there are a lot of things. But I'm going to have Ben very involved with education, something that's an expertise of his. - -You're right, Jake. But it has been taken over by the federal government.
It was originally supposed to be that way. And it certainly sounds better that way. But it has all been taken over now by the bureaucrats in Washington, and they are not interested in what's happening in Miami or in Florida, in many cases. -Now in some cases they would be. But in many cases they are more interested in their paycheck and the big bureaucracy than they are in taking care of the children. - -Well, first of all, I want you to understand that the Democrats, and I've watched them very intensely, even though it's a very, very boring thing to watch, that the Democrats are doing nothing with Social Security. They're leaving it the way it is. In fact, they want to increase it. They want to actually give more. -And that's what we're up against. And whether we like it or not, that is what we're up against. -I will do everything within my power not to touch Social Security, to leave it the way it is; to make this country rich again; to bring back our jobs; to get rid of deficits; to get rid of waste, fraud and abuse, which is rampant in this country, rampant, totally rampant. - -And it's my absolute intention to leave Social Security the way it is. Not increase the age, and to leave it as is. -You have 22 years, you have a long time to go. It's not long in terms of what we're talking about, but it's still a long time to go, and I want to leave Social Security as is, I want to make our country rich again so we can afford it. I want to bring back our jobs, I want to do things that will make us, that will bring back GDP. -I mean, as an example, GDP was zero essentially for the last two quarters. If that ever happened in China, you would have had a depression like nobody's ever seen before. They go down to 7 percent, 8 percent, and it's a - it's a national tragedy. We're at zero, we're not doing anything. -We've lost our jobs. We've lost everything. We're losing everything. Our jobs are gone, our businesses are being taken out of the country. I want to make America great again and I want to leave Social Security as is. We're going to get rid of waste, fraud, abuse and bring back business. - -Because they don't cover most of the subjects. We're the policemen of the world. We take care of the entire world. We're going to have a stronger military, much stronger. Our military is depleted. But we take care of Germany, we take care of Saudi Arabia, we take care of Japan, we take care of South Korea. We take - every time this maniac from North Korea does anything, we immediately send our ships. We get virtually nothing. -We have 28,000 soldiers on the line, on the border between North and South Korea. We have so many places. Saudi Arabia was making a billion dollars a day, and we were getting virtually nothing to protect them. We are going to be in a different world. We're going to negotiate real deals now, and we're going to bring the wealth back to our country. We owe $19 trillion. We're going to bring wealth back to our country. - -Well, I don't know if he's saying that. Look, I'm just saying very simply we have a country that I've never seen anything like it. I've been going over budgets and looking at budgets. We don't bid things out. We don't bid out, as an example, the drug industry, pharmaceutical industry. They don't go out to bid. They just pay almost as if you walk into a drug store. That's what they're paying. -And the reason is they have a fantastic lobby. They take care of all of the senators, the Congressmen. They have great power and they don't bid out. The military is never properly bid.
When we go out to military bids, it's not properly bid. And the people that really sell us the product are oftentimes the product we don't want, only because that particular company has political juice, OK? -I'm self-funding my campaign. Nobody is going to be taking care of me. I don't want anybody's money. I will tell you something. We're going to go out to bid in virtually every different facet of our government. We're going to save a fortune. - -Yes. If you look back to Iowa, Ted did change his view and his stance on ethanol quite a bit. He did and - at the end. Not full on, but he did change his view in the hopes of maybe doing well. And you know, I think everybody knows that. It was a front page story all over the place, and he did make a change. - -Well, that's fine. First of all, Ted was in favor of amnesty. So there's no question about that. And Sheriff Joe Arpaio recently endorsed me and there's nobody tougher on the borders than Sheriff Joe. And Jeff Sessions, one of the most respected Senator in Washington, an incredible man, also endorsed me. -And there's nobody that knows more about the borders than Senator Jeff Sessions. I would say this. We're all in this together. We're going to come up with solutions. We're going to find the answers to things. And so far I cannot believe how civil it's been up here. - -Well, first of all, I don't really think that. I think that I hold views that are very similar to many of the people. We are more inclusive. And if you look at the polls and if you look at the millions of people that have been pouring into the polls, it's, again, the biggest story. -You look at all of these people that are coming in, something is happening. I am different in one primary respect, and that's trade. I feel that we have had horrible negotiators, horrible trade deals. The jobs in this country are disappearing, and especially the good jobs. -You look at the recent jobs reports, which are really done so that presidents and politicians look good, because all of these people looking for jobs, when they give up, they go home, they give up, and they are considered statistically employed. So that's that. -But I will say, trade deals are absolutely killing our country. The devaluations of their currencies by China and Japan and many, many other countries, and we don't do it because we don't play the game. and the only way we're going to be able to do it is we're going to have to do taxes unless they behave. -If you don't tax certain products coming into this country from certain countries that are taking advantage of the United States and laughing at our stupidity, we're going to continue to lose businesses and we're going to continue to lose jobs. -And if you look at the average worker over the last 12 years, their salary and their pay have gone down, not up. It has gone down. And I think that's why there has been such an outpouring of love to what I'm saying. - -The 45 percent tax is a threat. It was not a tax, it was a threat. It will be a tax if they don't behave. Take China as an example. I have many friends, great manufacturers, they want to go into China. They can't. China won't let them. We talk about free trade. It's not tree free trade, it's stupid trade. -China dumps everything that they have over here. No tax, no nothing, no problems, no curfews (ph), no anything. We can't get into China. I have the best people, manufacturers, they can't get in. When they get in, they have to pay a tremendous tax. 
-The 45 percent is a threat that if they don't behave, if they don't follow the rules and regulations so that we can have it equal on both sides, we will tax you. It doesn't have to be 45, it could be less. But it has to be something because our country and our trade and our deals and most importantly our jobs are going to hell. - -Jake, I have to say - honestly, it's just the opposite. What will happen if they don't behave, we will put on a tax of some amount, and it could be a large amount, and we will start building those factories and those plants. Instead of in China, we'll build them here. And people will buy products from here, rather than buying it through China where we're being ripped off. And we have a $505 billion trade deficit right now. -So we'll build our factories here and we'll make our own products. And that's the way it should be done. And the way we've been doing it for the last long period of time is our country - our country is in serious, serious trouble. It's a bubble and it's going to explode, believe me. - -I mean a lot of them. I mean a lot of them. - -Well, you know, I've been watching the debate today. And they're talking about radical Islamic terrorism or radical Islam. But I will tell you this. There's something going on that maybe you don't know about, maybe a lot of other people don't know about, but there's tremendous hatred. And I will stick with exactly what I said to Anderson Cooper. - -Marco talks about consequences. Well, we've had a lot of consequences, including airplanes flying into the World Trade Center, the Pentagon and could have been the White House. There have been a lot of problems. -Now you can say what you want, and you can be politically correct if you want. I don't want to be so politically correct. I like to solve problems. We have a serious, serious problem of hate. - -There is tremendous hate. There is tremendous hate. Where large portions of a group of people, Islam, large portions want to use very, very harsh means. Let me go a step further. Women are treated horribly. You know that. You do know that. Women are treated horribly, and other things are happening that are very, very bad. - -Now I will say this, there is tremendous hatred. The question was asked, what do you think? I said, there is hatred. Now it would be very easy for me to say something differently. And everybody would say, oh, isn't that wonderful. - -We better solve the problem before it's too late. - -Let me go back to the other just for a second. In large mosques, all over the Middle East, you have people chanting "death to the USA." Now, that does not sound like a friendly act to me. -As far as the families are concerned, and as far as the law is concerned, we have a law - this all started with your question on water boarding. We have a law that doesn't allow right now water boarding. They have no laws. They have no rules. They have no regulations. They chop off heads. They drown 40, 50, 60 people at a time in big steel cages, pull them up an hour later, everyone dead. And we're working on a different set of parameters. -Now, we have to obey the laws. Have to obey the laws. But we have to expand those laws, because we have to be able to fight on at least somewhat of an equal footing or we will never ever knock out ISIS and all of the others that are so bad. -We better expand our laws or we're being a bunch of suckers, and they are laughing at us. They are laughing at us, believe me. - -First of all, there's nobody on this stage that's more pro Israel than I am. 
OK. There's nobody.
-
-I am pro-Israel.
-
-I was the grand marshal, not so long ago, of the Israeli Day Parade down 5th Avenue. I've made massive contributions to Israel. I have a lot of - I have tremendous love for Israel. I happen to have a son-in-law and a daughter that are Jewish, OK? And two grandchildren that are Jewish.
-
-But I will tell you, I think if we're going to ever negotiate a peace settlement, which every Israeli wants, and I've spoken to the toughest and the sharpest, they all want peace, I think it would be much more helpful is - I'm a negotiator. If I go in, I'll say I'm pro-Israel and I've told that to everybody and anybody that would listen.
-But I would like to at least have the other side think I'm somewhat neutral as to them, so that we can maybe get a deal done. Maybe we can get a deal. I think it's probably the toughest negotiation of all time. But maybe we can get a deal done.
-
-And, by the way, just so you understand, as far as Iran, I would have never made that deal. I think it's maybe the worst deal I've ever seen. I think it's the worst deal I've ever seen negotiated. I will be so tough on them and ultimately that deal will be broken unless they behave better than they've ever behaved in their lives, which is probably unlikely. That deal will be broken.
-
-If I become president of the United States, one of the things that will be an absolute priority is number one, protection of Israel, but also seeing if a deal can be made, the toughest deal, the toughest negotiation there probably is of any kind no matter where you look, no matter how hard you look.
-
-We really have no choice. We have to knock out ISIS. We have to knock the hell out of them. We have to get rid of it. And then come back and rebuild our country, which is falling apart. We have no choice.
-
-I would listen to the generals, but I'm hearing numbers of 20,000 to 30,000. We have to knock them out fast. Look, we're not allowed to fight. We can't fight. We're not knocking out the oil because they don't want to create environmental pollution up in the air.
-I mean, these are things that nobody even believes. They think we're kidding. They didn't want to knock out the oil because of what it's going to do to the carbon footprint. We don't fight like we used to fight. We used to fight to win. Now we fight for no reason whatsoever. We don't even know what we're doing.
-
-So, the answer is we have to knock them out. We have to knock them out fast. And we have to get back home. And we have to rebuild our country which is falling apart.
-
-Well, I don't really agree with President Obama. I think I'm somewhere in the middle. What I want is I want a much better deal to be made because right now, Cuba is making - as usual with our country, we don't make good deals. We don't have the right people negotiating, we have people that don't have a clue.
-As an example, I heard recently where the threat was made that they want reparations for years of abuse by the United States, and nobody's talking about it and they'll end up signing a deal and then we'll get sued for $400 billion or $1 trillion.
-All that stuff has to be agreed to now. We don't want to get sued after the deal is made. So I don't agree with President Obama, I do agree something should be - should take place. After 50 years, it's enough time, folks. But we have to make a good deal and we have to get rid of all the litigation that's going to happen.
-
-This was just a little story but it was a big story to me because I said oh, here we go, we make a deal, then get sued for a tremendous amount of money for reparations. So I want to do something, but it's got to be done intelligently. We have to make a good deal.
-
-I would want to make a good deal, I would want to make a strong, solid, good deal because right now, everything is in Cuba's favor. Right now, everything, every single aspect of this deal is in Cuba's favor. It's the same way as the Iran deal.
-We never walked - we never - all we do is keep giving. We give and give and give.
-
-I would probably have the embassy closed until such time as a really good deal was made and struck by the United States.
-
-Well, if Ted was listening, he would have heard me say something very similar. I said we would not do the deal unless it was going to be a very good deal for us. And I think I said it loud and I think I said it very clear. And I think after 50 years, and I have many friends, I own many properties in Miami, many, many, and I have many people that know and they feel exactly the way I do, make a deal, it would be great, but it's got to be a great deal for the United States, not a bad deal for the United States. As far as Iran is concerned, I would have never made that deal. That is one of the worst deals ever, ever made by this country. It is a disaster. So for Ted to say I agree with this deal, I mean, it's a staple in my speeches that that may be the worst single deal I've ever seen negotiated. So don't try to put it on me like it's wonderful, like I love it.
-
-I was against the giving of the money at all cost. I said don't negotiate at all until you get the prisoners back. If the prisoners don't come back early - three years ago. One of the longest negotiations I've ever seen, by the way. If they don't come back early, I was saying don't negotiate. They come back early.
-What you do is you take it back and you say, either give us the prisoners or we double up the sanctions. What we should have done is doubled up the sanctions and made a much better deal. 'Cause that deal is a disaster.
-Ted, the money is largely gone because of incompetent and very, very poor negotiators. But that money, the $150 billion, is largely gone and already spent everywhere but the United States.
-
-That doesn't mean I was endorsing that. I was not endorsing it. I said that is a strong, powerful government that put it down with strength. And then they kept down the riot. It was a horrible thing. It doesn't mean at all I was endorsing it.
-As far as Putin is concerned, I think Putin has been a very strong leader for Russia. I think he has been a lot stronger than our leader, that I can tell you. I mean, for Russia, that doesn't mean I'm endorsing Putin.
-
-I used to think Merkel was a great leader until she did what she did to Germany. Germany is a disaster right now. So I used to think that.
-
-And strong doesn't mean good. Putin is a strong leader, absolutely. I could name many strong leaders. I could name very many very weak leaders. But he is a strong leader. Now I don't say that in a good way or a bad way. I say it as a fact.
-
-I hope not. I truly hope not. I will say this. We have 25 (thousand), 30,000 people - you've seen it yourself. People come with tremendous passion and love for the country, and when they see protest - in some cases - you know, you're mentioning one case, which I haven't seen, I heard about it, which I don't like. But when they see what's going on in this country, they have anger that's unbelievable. They have anger.
-They love this country. They don't like seeing bad trade deals, they don't like seeing higher taxes, they don't like seeing a loss of their jobs where our jobs have just been devastated. And I know - I mean, I see it. There is some anger. There's also great love for the country. It's a beautiful thing in many respects. But I certainly do not condone that at all, Jake.
-
-We have some protesters who are bad dudes, they have done bad things. They are swinging, they are really dangerous and they get in there and they start hitting people. And we had a couple big, strong, powerful guys doing damage to people, not only the loudness, the loudness I don't mind. But doing serious damage. And if they've got to be taken out, to be honest, I mean, we have to run something.
-And it's not me. It's usually the municipal government, the police because I don't have guards all over these stadiums. I mean, we fill up stadiums. It's usually the police - and, by the way, speaking of the police, we should pay our respects to the police because they are taking tremendous abuse in this country and they do a phenomenal job.
-
-So we should pay - we should truly give our police. They're incredible people, we should give them a great deal more respect than they receive.
-
-It shows the total dishonesty of the press. We were having - on a few occasions, again massive crowds. And we're talking and I'm saying who is going to vote on Tuesday? Who is going to vote? The place goes crazy. Then I say, hey, do me a favor. Raise your right hand. Do you swear you're going to vote for Donald Trump?
-Everyone's laughing, we're all having a good time. That's why I have much bigger crowds than Ted, because we have a good time at mine.
-
-But we're all having a good time and the next day, on the Today Show and a couple of other places, not too many. Because when you look at it, everyone's smiling, laughing. Their arms are raised like this. They had pictures, still pictures of people and they tried to equate it to Nazi Germany.
-
-It is a disgrace. It was a total disgrace. And I've had reporters, people that you know, come up to me and said that - what they did on the Today Show was a disgrace.
-
-I think that what should happen, getting back maybe a little bit to your first question, I think that whoever - first of all, I think I'm going to have the delegates. OK? I think. Let's see what happens.
-
-But if somebody doesn't have the delegates, and I guess there's two of us up here that can and there are two of us that cannot at this moment. But if - no, that's just - by the way, that is not meant to be a criticism. That's just a mathematical fact. OK?
-If two of us get up there, I would say this, if - if Marco, if the governor, if Ted had more votes than me in the form of delegates, I think whoever gets to the top position as opposed to solving that artificial number that was set by somebody, which is a very random number, I think that whoever gets the most delegates should win. That's what I think.
-
-HEWITT: Senator Cruz, if you - if you overtake Donald Trump at the convention, what will you do to take his very passionate supporters and keep them from bolting the convention and sabotaging the fall election?
-
-Make me president.
-
-You know, I listen and I watch Ted on television and when he speaks, and he's always saying, "I'm the only one that beat Donald in six contests; and I beat him." But I beat him in 13 contests. He never mentions that.
-
-And let me just tell you another little fact, little minor fact. I have about 1.6 million votes during this primary season, more votes than Ted. The other thing is, I beat Hillary, and I will give you the list, I beat Hillary in many of the polls that have been taken. And each week, I get better and better. And believe me, I haven't even started on her yet.
-
-I have not made that decision yet. I will make a decision on that, but I have not made that decision. My decision was that I would go through the entire primary season and I have turned down probably $275 million worth. I have many, many friends that come up all day long, $5 million, $10 million, I'm turning down money. I feel sort of foolish to be honest with you. I don't know if I get any credit for it but I'm self-funding my campaign.
-
-And other than - and by the way, other than very small donations where people are sending in $200, $15, $20, and we have some of that, but it's not a large amount. No, I'm self-funding my campaign, and the reason is that I've been in this business a long time and I was on the other side - until eight months ago I was on the other side. I made massive contributions, large contributions to politicians, both Democrats and Republicans. I was liked by everybody, which is an important thing.
-I will say this - people control special interests, lobbyists, donors, they make large contributions to politicians and they have total control over those politicians. I don't want anybody to control me but the people right out there. And I'm going to do the right thing.
-
-Ted was given to PACs. I mean, PACs - you know, these super PACs are a disaster, by the way, folks. Very corrupt. It's going to lead to lots of disasters. But Ted has super PACs and you have to look at the people that are giving to those super PACs, number one. It's very important to do that.
-There is total control of the candidates, I know it better than anybody that probably ever lived. And I will tell you this, I know the system far better than anybody else and I know the system is broken. And I'm the one, because I know it so well because I was on both sides of it, I was on the other side all my life and I've always made large contributions.
-And frankly, I know the system better than anybody else and I'm the only one up here that's going to be able to fix that system because that system is wrong.
-
-It depends on what comes up. You never know. It depends on what comes up. Look, look, we had a great president, Ronald Reagan. We had Tip O'Neill, speaker. And what do we do, we take these two men that are very, very different men, they got along, they had relationships, and they got things done, and very beautifully.
-Nobody is complaining about the deals that Ronald Reagan made. And he made it with Tip O'Neill. We need to have people get together and work good deals out, good deals out from our standpoint. And I'll tell you this, it can be done.
-We don't want to continue to watch people signing executive orders because that was not what the Constitution and the brilliant designers of this incredible document had in mind. We need people that can make deals and can work, because right now in Washington there's total, absolute gridlock.
-
-Thank you very much.
-The Republican Party has a great chance to embrace millions of people that it's never known before. They're coming by the millions. We should seize that opportunity. These are great people. These are fantastic people. These are people that love our country. These are people that want to see America be great again.
-These are people that will win us the election and win it easily. These are people that once the election is won will be able to put Supreme Court justices up that will do a fabulous job. Because let me tell you, if we lose this election, you're going to have three, four or maybe even five justices and this country will never, ever recover. It will take centuries to recover.
-So I just say embrace these millions of people that now for the first time ever love the Republican Party. And unify. Be smart and unify.
-
-Well look, he was a failed candidate, he should have beaten President Obama very easy.
-
-He failed miserably, and it was an embarrassment to everybody, including the Republican Party. It looked like he went away on a vacation the last month. So, I don't take that, and I guess, obviously, he wants to be relevant. He wants to be back in the game.
-
-As far as domestic policy and trade which is killing our country, he said free trade and I believe in free trade also. But, if you look at China, and you look at Japan, and if you look at Mexico, both at the border, by the way, where they're killing us.
-
-Both at the border, and with trade -- and every other country we do business with we are getting absolutely crushed on trade. And, he said free trade, I say free trade, great. But, not when they're beating us so badly.
-
-With China we're going to lose $505 billion in terms of trade. You just can't do it.
-
-Mexico, $58 billion.
-
-Japan, probably about, they don't know it yet, but about $109 billion.
-
-Every country we lose money with. As far as I'm concerned, we've got to reduce -- we have to redo our trade deals 100 percent. I have the greatest business people in the world lined up to do it. We will make great trade deals.
-
-I totally disavow the Ku Klux Klan. I totally disavow David Duke. I've been doing it now for two weeks, this is your -- you're probably about the 18th person that's asked me the question. It was very clear, that question was also talked about in the form of groups. Groups, I want to know which groups are you talking about? You have to tell me which groups?
-
-Ultimately, he got to the Ku Klux Klan, which obviously I'm going to disavow. And, by the way, if you look on my Twitter account, almost immediately after the program they were disavowed again.
-
-You know, it's amazing. When I do something on Twitter, everybody picks it up, goes all over the place. But, when I did this one nobody ever picks it up. Take a look at my Twitter account.
-
-Thank you. Thanks.
-
-And we will.
-
-Well, I also happened to call him a lightweight, OK? And I have said that. So I would like to take that back. He is really not that much of a lightweight. And as far as -- and I have to say this, I have to say this. He hit my hands. Nobody has ever hit my hands. I have never heard of this. Look at those hands. Are they small hands?
-
-And he referred to my hands, if they are small, something else must be small. I guarantee you there is no problem. I guarantee.
-
-I have heard Ted say that over and over again on television, that he is the only one that can beat me. Just, for the record, I have won 10. He has won three or four. Last week, in fact, on Tuesday, I was a half a million votes higher than him. I was a million votes higher than Marco, 1 million votes. That's a lot of votes. And was by far in first place.
-
-So I keep hearing that he is the only one that can beat me but he is getting beaten very, very badly. So where does this come from? Where does it come from?
-
-Very nice words, but happens to be wrong. CNN just came out with a poll two days ago that
-
-That national poll -- excuse me
-
-The national poll -- a national poll where he's at 15, he's at 14
-
-And, I'm at 49, so when he says 75 percent, that would mean that 80 percent of the people don't dig you, and I'm back down to 50
-
-Wrong
-
-I beat Hillary Clinton. I beat Hillary Clinton in many polls
-
-I beat Hillary Clinton in many polls
-
-I think I'm talking
-
-I beat Hillary Clinton
-
-I hope you think
-
-I beat Hillary Clinton in many polls. The Cue (ph) poll just came out. I beat Hillary Clinton in a recent Fox poll, I beat Hillary Clinton in USA Today, I beat her today in a poll in Ohio. I beat -- I'm the only one that beats Hillary Clinton.
-
-I beat -- and I have not started on Hillary yet. Believe me, I will start soon. I haven't even started.
-
-In one poll
-
-Wrong. Wrong.
-
-This little guy has lied so much about my record.
-
-He has lied so much about my record.
-
-And I will tell you this. First of all, I got a call from my sister and brother tonight, and they said we had no idea Dad gave you $200 million. Believe me, I started off with $1 million. I built a company that's worth more than $10 billion. And I say it not in a bragging way, but that's the kind of thinking we need.
-
-Very low debt, tremendous cash flow. My financials are all -- they're all in there with the federal elections. You've seen them. Everybody has seen them. I say it only because that's the kind of thinking this country needs with $19 trillion in debt. Believe me.
-
-They devalue their currencies. I will do that. And by the way, I have been doing it more and more. But they devalue their currencies, in particular China. Mexico is doing a big number now, also. Japan is unbelievable what they're doing.
-
-They devalue their currencies, and they make it impossible for clothing-makers in this country to do clothing in this country. And if you look at what's happened on Seventh Avenue, and you look at what's happened in New York with the garment industry, so much of the clothing now comes out from Vietnam, China, and other places. And it's all because of devaluation.
-
-By the way, the Trans-Pacific, if you look at the TPP, a total disaster, which, by the way, Marco is in favor of, they need -- it is a disaster for our country. It's trying to be approved by various people, including President Obama. And I'll tell you something. The biggest problem with that is: They don't take into concurrence the devaluation. They're devaluing their currency.
-
-And they're killing
-
-No, no. I have very good answers.
-
-I know what's happening with the economy. You don't know a thing.
-
-You haven't employed in your life one person.
-
-I have employed tens of thousands of people.
-
-You haven't employed one person.
-
-Oh, you know what? You know what? Take a look at Trump Steaks.
-
-By the way, that's the other thing
-
-Mitt Romney
-
-false, totally false. And now the funny thing is he didn't talk about the hundreds of really successful jobs, the buildings all over the world that have made a fortune.
-
-I will. Don't worry about it, Marco. Don't worry about it. Don't worry about it little Marco, I will.
-
-Don't worry about it, little Marco.
-
-This guy has a number one -- the number one absentee record in the United States
-
-He doesn't show up to vote.
-
-That's why the people in Florida do not like him.
-
-Correct.
-
-Department of Education. We're cutting Common Core. We're getting rid of Common Core. We're bringing education locally. Department of Environmental Protection. We are going to get rid of it in almost every form. We're going to have little tidbits left but we're going to take a tremendous amount out.
-
-We have various other things. If you look at the IRS, if you look at every single agency, we can cut it down, and I mean really cut it down and save. The waste, fraud, and abuse is massive.
-
-Larry Kudlow, great guy, everybody respects him, said my plan for taxes and tax cutting is the best by far of everybody.
-
-Let me explain something. Because of the fact that the pharmaceutical companies -- because of the fact that the pharmaceutical companies are not mandated to bid properly, they have hundreds of billions of dollars in waste.
-
-We don't bid properly. We don't have proper bidding procedures. The reason we don't is because they take care of all of the senators, all of the congressmen, and they don't bid. They don't go out to bid.
-
-Take a look -- excuse me. You are talking about hundreds of billions of dollars.
-
-if we went out to the proper bid. Of course you are.
-
-I'm saying saving through negotiation throughout the economy, you will save $300 billion a year.
-
-And that's a huge -- of course it is. We are going to buy things for less money. Of course it is. That works out.
-
-I'm not only talking about drugs, I'm talking about other things. We will save $300 billion a year if we properly negotiate. We don't do that. We don't negotiate. We don't negotiate anything.
-
-Well, all of a sudden, I hear for 40 years I've been involved in Washington. I have been supporting people for many years. And these people have been politicians, and they've been on both sides, Democrats, Republicans, liberals, conservatives. I've supported everybody, because, until recently, I wasn't a politician, and I hope maybe you don't all consider me a politician right now. I hate the term politician.
-
-But I've been supporting politicians. A recent article somewhere said Donald Trump is a world-class businessman who goes out and he does get along with everybody. I've supported Democrats, and I've supported Republicans. And as a businessman, I owed that to my company, to my family, to my workers, to everybody to get along.
-
-Part of the problem we have in Washington, Chris.
-
-is it's total gridlock. Nobody gets along. We need people to get along. We need to be able to get things done.
-
-Actually, it was for business. It was. It was. It was for business. I pride myself, including outside of the United States. I'm doing almost 120 deals outside of the -- which I hope to be able to stop very soon and let my children handle it -- but we're doing many, many deals outside of the United States.
-
-I support politicians. In 2008, I supported Hillary Clinton. I supported many other people, by the way. And that was because of the fact that I'm in business. I did support very heavily Ronald Reagan. I also supported George Bush, by the way.
-
-Let me tell you something, Ted. The last person that Hillary Clinton wants to face is Donald Trump. That I can tell you.
-
-Hello.
-
-Nice to be with you, Megyn.
-
-You're looking well. You're looking well.
-
-I don't know exactly what -- when you talk about off the record. First of all, Buzzfeed? They were the ones that said under no circumstances will I run for president. And were they wrong. But a lot of people said that.
-
-Then, I did have a meeting with the editorial board of the New York Times, a very nice meeting. Many of those things were off the record, I think at their suggestion and my suggestion. And I think being off the record is a very important thing. I think it's a very, very powerful thing.
-
-And I will say this. These three gentlemen have gone off the record many times with reporters. And I think they want to honor it, and I would always honor that.
-
-I will say, though, in terms of immigration -- and almost anything else -- there always has to be some, you know, tug and pull and deal. And, you know, when I watch Ted stand on the Senate floor, I had great respect for what he did. He stood there for a day-and-a-half or something. In the meantime, what came of it? Nothing. You have to be able to have some flexibility, some negotiation.
-
-Now, sometimes you ask for more than you want and you negotiate down to the point. I may have discussed something like that with the New York Times, but I would never release off-the-record conversations. I don't think it's fair, frankly, to do that to anybody.
-
-Not very flexible. No, not very flexible. I give the example -- I'm going to build a wall. I'm the one that wants the wall. I'm the one that can build the wall.
-
-It's going to get built. And by the way, Mexico is going to pay for the wall. I can tell you that. Mexico is going to pay for the wall.
-
-But -- and I used an example. And this isn't necessarily what was said, but whatever was said, the wall's 50 feet high. Is it going to be 45 feet or 40 feet? That could very well be. That could very well -- he wants it to be higher.
-
-That could very well be. But there's always give and take. There's always negotiation. And the best negotiator that knows what he's doing will make a great deal. But we need give and take in government. If you don't have give and take, you're never going to agree on anything.
-
-Fine.
-
-I will say one thing, what Marco said is -- I understand it. He is talking about a little give and take and a little negotiation. And you know what? That's OK. That's not the worst thing in the world.
-
-There is nothing wrong with that. I happen to be much stronger on illegal immigration. Sheriff Joe Arpaio endorsed me. And if he endorses you, believe me, you are the strongest, from Arizona.
-
-But give and take is OK. And I thought what he said is OK. We may differ on the degree. But what he said to me is OK.
-
-No. I never do that. I would not do that. I don't think -- I have too much respect -- if I deal with you off the record, if I deal with Bret or Chris off the record, I have too much respect for that process to say, just release everything. I would not do that.
-
-I'm changing. I'm changing. We need highly skilled people in this country, and if we can't do it, we'll get them in. But, and we do need in Silicon Valley, we absolutely have to have.
-
-So, we do need highly skilled, and one of the biggest problems we have is people go to the best colleges. They'll go to Harvard, they'll go to Stanford, they'll go to Wharton, as soon as they're finished they'll get shoved out. They want to stay in this country. They want to stay here desperately, they're not able to stay here. For that purpose, we absolutely have to be able to keep the brain power in this country.
-
-I'm changing it, and I'm softening the position because we have to have talented people in this country.
-
-That is correct.
-
-No, I'm not playing.
-
-I'm not playing to anybody's fantasies, I'm playing to the fact that our country is in trouble, that we have a tremendous problem with crime. The border is a disaster, it's like a piece of Swiss cheese. We're going to stop it, we're going to stop people from coming into our country illegally. We're going to stop it.
-
-First of all, I've had tens of thousands of people working for me, most of which are -- 98, 97, 98 percent of the people in this country, from this country. I'm very proud of it. You have a club in Palm Beach, Florida called the Mar-a-Lago Club, it's a very, very successful club. It has a very short season, it's called the Season, and it goes from November until March.
-
-It's a few months, five months at the most. People don't want a short-term job. They don't want -- so, we will bring people in, and we will send the people out. All done legally, all done with the process that's approved by government in Palm Beach, or West Palm Beach. We bring people in, we bring them out. We want to hire as many Americans as we can, but they don't want part-time, very short part-time jobs.
-
-Wrong.
-
-That's wrong.
-
-Wrong.
-
-Wrong.
-
-The -- the -- the other hotels during the season, they do the same thing. They take in a lot of people, because you can't get them. They take in a lot of people. Long-term employees, we don't do that, but short-term employees, we have no choice but to do it, and other hotels in that very, very hot area. It is a very hot area.
-
-It's very, very hard to get people. But other hotels do the exact same thing. And just so you understand, just again, this is a legal process. This is a procedure. It's part of the law. I take advantage of that. There's nothing wrong with it. We have no choice.
-
-This wasn't on the subject.
-
-Tapes were not on the subject.
-
-No, no. You're the liar. You're the lying guy up here.
-
-You're the -- you're the one. You're the one.
-
-You're the one. Now, let me just tell you. Let me just tell you.
-
-Excuse me. Excuse me. I've given my answer, Lyin' Ted. I've given my answer.
-
-They won't refuse. They're not going to refuse me. Believe me.
-
-Let me just tell you, you look at the Middle East. They're chopping off heads. They're chopping off the heads of Christians and anybody else that happens to be in the way. They're drowning people in steel cages. And he -- now we're talking about waterboarding.
-
-This really started with Ted, a question was asked of Ted last -- two debates ago about waterboarding. And Ted was, you know, having a hard time with that question, to be totally honest with you. They then came to me, what do you think of waterboarding? I said it's fine. And if we want to go stronger, I'd go stronger, too, because, frankly, that's the way I feel. Can you imagine -- can you imagine these people, these animals over in the Middle East, that chop off heads, sitting around talking and seeing that we're having a hard problem with waterboarding? We should go for waterboarding and we should go tougher than waterboarding. That's my opinion.
-
-And -- and -- and -- I'm a leader. I'm a leader. I've always been a leader. I've never had any problem leading people. If I say do it, they're going to do it. That's what leadership is all about.
- -Well, look, you know, when a family flies into the World Trade Center, a man flies into the World Trade Center, and his family gets sent back to where they were going -- and I think most of you know where they went -- and, by the way, it wasn't Iraq -- but they went back to a certain territory, they knew what was happening. The wife knew exactly what was happening. - -They left two days early, with respect to the World Trade Center, and they went back to where they went, and they watched their husband on television flying into the World Trade Center, flying into the Pentagon, and probably trying to fly into the White House, except we had some very, very brave souls on that third plane. All right? - -I have no problem with it. - -I think Richard Haass is excellent. I have a lot of respect for him. I think General Keane is excellent. I think that there are -- I like Colonel Jacobs very much. I see him. I know him. I have many people that I think are really excellent but in the end it's going to be my decision. - -When you just asked the question about Snowden, I will tell you right from the beginning, I said he was a spy and we should get him back. And if Russia respected our country, they would have sent him back immediately, but he was a spy. It didn't take me a long time to figure that one out. Believe me. - -But I would get the best people, people that I'd be comfortable with. And we will do the right thing. - -We've made a terrible mistake getting involved there in the first place. That thing will collapse about two seconds after they leave. Just as I said that Iraq was going to collapse after we leave. - -We made a mistake going into Iraq. I've never said we made a mistake - -Well, OK, I never said that. - -OK. Wouldn't matter. I never said it. - -Should I respond to that first? - -You'll be here a long time. - -I hate the concept of it, but on a humanitarian basis, with what's happening, you have to. It's living in Hell in Syria; there's no question about it. They're living in Hell. - -Look, from a humanitarian standpoint, I'd love to help, but we have our own problems. We have so many problems that we have to solve. - -They lied. They said there were weapons of mass destruction; there were none. And they knew there were none. - -I don't know if he lied or not. He could have lied. Maybe he did. Maybe he didn't. I guess you'd have to ask him. - -Well, on Afghanistan, I did mean Iraq. I think you have to stay in Afghanistan for a while, because of the fact that you're right next to Pakistan, which has nuclear weapons, and we have to protect that. Nuclear weapons change the game. - -And I was always against going into Iraq. In fact, I -- believe me, I was always against it. There was some cases where I sort of -- in one interview with a great friend of mine, and yours, Howard Stern -- said that -- said that I said very meekly, long before we went in, I said very meekly, well, maybe, maybe, I don't know. By the time it got to that point, I was always against Iraq. But Afghanistan, I felt -- and in that one, if you notice, I corrected it the second day. OK? Second question? - -No, no. - -Now on -- let me explain that. You're right. Let me explain. First time the question had been put to me, it was very early on. The migration had just started. And I had heard that the number was a very, very small number. - -By the second day, two or three days later, I heard the number was going to be thousands and thousands of people. 
You know, when they originally heard about it, they were talking about bringing very, very small numbers in.
-
-And I said, begrudgingly, well, I guess maybe that's OK. It was not like, "Let's bring them in," because I think we should build a safe zone in -- we should really -- what we should be doing is building safe zones so they can stay in their own country and not go all over, and at least this way we're not going to have the problem. That's what we have to do.
-
-But just -- just to set -- because I fully understand what you're asking. When I first heard the question, first time the question was ever asked to me, first time I really had known about the question, the migration had just started. I was very much like, OK, by the time I went back and studied it, and they were talking about bringing thousands and thousands, I changed my tune. And I don't think there's anything wrong with that.
-
-Megyn, I have a very strong core.
-
-I have a very strong core. But I've never seen a successful person who wasn't flexible, who didn't have a certain degree of flexibility. You have to have a certain degree of flexibility.
-
-You can't -- for instance, let's say, on -- on the second question, you can't say it's OK, and then you find out it's not OK, and you don't want to do anything. You have to be flexible, because you learn. I mean, before I knew the question was asked by Bill, and the next day, or the couple of days later, the question was asked by, by -- you know -- I was asked by a number of people, actually. I was asked by Sean, but I was asked by a number of people. But by that time, the number had increased significantly.
-
-The next day. But I had learned. I mean, nobody had ever asked me the question. This was brand new. But -- and I really mean it. You have to show a degree of flexibility. If you're going to be one way and you think it's wrong, does that mean the rest of your life you have to go in the wrong direction because you don't want to change?
-
-That's not right.
-
-And, by the way, just so you understand.
-
-This is a case I could have settled very easily, but I don't settle cases very easily when I'm right. Ninety-eight percent approval rating, we have an "A" from the Better Business Bureau.
-
-We have a 98 percent approval rating from the people who took the course. We have an "A" from the Better Business Bureau. And, people like it. Now, he's saying they didn't learn.
-
-We have many, many people that will be witnesses. Again, I don't settle cases. I don't do it because that's why I don't get sued very often, because I don't settle, unlike a lot of other people.
-
-We have a situation where we will win in court. But, many of the people that are witnesses did tremendously well, and made a lot of money.
-
-By taking the course.
-
-You're going to see, you don't know.
-
-You're going to see, you're going to see.
-
-No, no. Before they had the information.
-
-Before they had the information.
-
-Before they had the information it got -- it is right now an "A", once they had the information.
-
-The only reason that it was a "D" was because we didn't care -- we didn't give them the information.
-
-When they got the information it became an "A".
-
-Marco, you don't know.
-
-Yes.
-
-But it was elevated to an "A".
-
-I can give it to you. I can give it to you tomorrow.
-
-It was elevated to an "A".
-
-Small business.
-
-Right.
-
-The lead plaintiff is now getting out of the case because it's so bad for her.
-
-Excuse me, the lead plaintiff signed a letter saying how great it was, and is on tape saying how great it was.
-
-She's trying to get out of the case. She's trying to get out of the case.
-
-Oh, give me a break.
-
-Give me a break.
-
-Give me a break.
-
-You know what, let's see what happens in court. This is a civil case. Very easy to have settled. Could settle it now. Very easy to have settled. Let's see what happens at the end of a couple years when this case is over, OK?
-
-Yes, it has been going for a long time.
-
-We'll win the case.
-
-One, one of the victims.
-
-I gave many people their money back.
-
-We will see who's right at the end of a few years. But all of the -- almost all of the people, many, many people signed what's called the report card at the end, did you like the course, how did you like it.
-
-Almost all of them said it was terrific, OK? With letters, with this. Some of them are on tape saying it was terrific. Let's see what happens at the end of three years.
-
-I gave some refunds to people because, if they asked for the refunds in a certain period of time, we gave refunds to people.
-
-But let's see what happens at the end of three years. Let's see who's right.
-
-It's called pending litigation.
-
-Let me tell you the real con artist. Excuse me. Excuse me. The real con artist is Senator Marco Rubio who was elected in Florida and who has the worst voting record in the United States Senate.
-
-He doesn't go to vote. He's absent. He doesn't go. Now, the people of Florida can't stand him. He couldn't get elected dogcatcher. The people of Florida -- the people of Florida -- and by the way, I know he's going to spend $25 million on ads. Without that he wouldn't have a chance. He's 20 points south.
-
-The people in Florida wouldn't elect him dogcatcher. He couldn't get any -- he's right now 21 points down to me. And, you know. Again, there will be a lot of advertising. It's the only thing that might save him. But I doubt it.
-
-He scammed the people of Florida. He scammed people. He doesn't vote. He doesn't show up for the U.S. Senate. He doesn't vote. He scammed the people. He defrauded the people of Florida.
-
-You defrauded the people of Florida, little Marco.
-
-That was licensing.
-
-Oh, stop it.
-
-It's just a minor case. It's a minor case.
-
-It's a minor civil case.
-
-Give me a break.
-
-It's a minor civil case.
-
-There are many, many civil cases.
-
-Give me a break.
-
-I don't believe these politicians. All talk, no action. I'm standing here listening to -- I'm hearing him say about a percentage. CNN, he gets 15. That means 85 percent, based on what you're saying, of the people don't dig you, number one, number one. Is that a correct statement? How do you get -- are you at 15 in the new CNN poll? Do you believe in CNN? I mean, I know we're with FOX. But CNN spent a lot of money on a poll, just came out. I'm at 49. He's at 15. He tells me about 65 percent of the people. It's not 65 percent of the people. If you go by that, 85 percent of the people.
-
-Then he goes, we have five. And -- well, excuse me, I won 10. I won 10 states. If you listen to him, it's like -- I won 10 states. Everybody knows that on Super Tuesday Trump was the winner. There wasn't one person that didn't say that. Even the two people on your left and right said we did a great job. So how does he take -- how does he take five and say it's better than 10?
-
-I am by far the leader. But if you listen to a politician, he'll try and convince you otherwise.
-
-No, I don't. No, I don't.
-
-I know, but your recent polls have me beating Hillary Clinton, and very, very easily.
-
-I have nothing to say. I mean, generally speaking, I agree with what he said. I would have certainly rather left it to the states. I was always in favor -- I was very surprised when they came up with that decision.
-
-I would have certainly -- I would have preferred had it been left to the states and I think most people would have preferred that.
-
-No, I'm a big defender of the Second Amendment. And if you look at what's happened, whether it's in California, where you had the 14 people killed, whether it's in Paris -- which, by the way, has the toughest gun laws in the world and 130 people killed. Many, many people in the hospital gravely injured. They will be dying. Many people will be dying in addition.
-
-If we had guns, or if they had guns on the other side of the room, with the bullets going in the opposite direction, you would not have had 130 people killed. That I can tell you right now.
-
-So I'm a very, very big supporter of the Second Amendment.
-
-I don't support it anymore. I do not support the ban on assault weapons.
-
-I -- I did not say that. I did not say that.
-
-I did not say that.
-
-So we're listening to the all-talk, no-action politician, and he was the primary supporter of John Roberts, who gave us Obamacare.
-
-No, it's not. You take a look. He was the primary supporter. He pushed John Roberts, and pushed him, and pushed him, and Bush ultimately appointed him. He got appointed. And when it came his time to raise his hand and kill Obamacare, not once, but twice, he let us down, and he did the wrong thing.
-
-This is the man that was the primary supporter. And you can read law journals, you can read whatever you want to read -- I've read plenty of it. There was no stronger supporter of John Roberts than him. And it was a very, very big mistake.
-
-Not what you say in the op-ed.
-
-That is not what you said in the op-ed.
-
-Yeah, I know it is. But it's not what you said in the op-ed.
-
-Lyin' Ted.
-
-Well, let me just say this. I've gotten to know Marco over a period of time, believe me, he is not a leader. Believe me.
-
-He didn't answer -- he's not a leader. And, frankly, when I say they'll do as I tell them, they'll do as I tell them. And that's very -- it's very simple. It's very simple.
-
-We are in a very dangerous place. We have a depleted military. Totally depleted. We have -- by the way, our vets are treated horribly. We're going to take care of our vets. We're going to start taking care of our vets, properly, like we should.
-
-But we're going to build up our military, and we're going to get the equipment we want, not the equipment that's sold to us by somebody that gave him and him and not the governor campaign contributions. OK? We're going to get the equipment that the generals and the soldiers want.
-
-I will prove to be a great leader. And, you know, it's very interesting, we talk about the polls. Every single poll when it comes to ISIS and the military and the border says, by far, Trump is the best.
-
-Wrong. Wrong.
-
-Wrong.
-
-Wrong.
-
-He said very good things about me
-
-Yeah, finish.
-
-Let me just tell you, first of all, I've been hearing this man so long talking about Putin. Putin said about me -- I didn't say about Putin -- Putin said very nice things about me. And I say very nicely, wouldn't it be nice if actually we could get along with Russia, we could get along with foreign countries, instead of spending trillions and trillions of dollars?
-
-You're talking about Flint, Michigan. You're talking about places -- we need to rebuild the infrastructure of our country. Wouldn't it be nice if we got along with the world, and maybe Russia could help us in our quest to get rid of ISIS, et cetera, et cetera?
-
-I think I'd get along very well with Vladimir Putin. I just think so.
-
-Even if it's not me?
-
-OK -- that I'm very, very proud of -- millions and millions of people have come to the Republican Party over the last little while. They've come to the Republican Party. And by the way, the Democrats are losing people. This is a trend that's taking place. It's the biggest thing happening in politics, and I'm very proud to be a part of it. And I'm going to give them some credit, too, even though they don't deserve it. But the answer is: Yes, I will.
-
-Yes, I will. Yes. I will.
-
-Thank you. I am going to bring jobs back to the United States like nobody else can. We're going to fix our very depleted military. We're going to take care of our vets. We're going to strengthen our borders. And you're going to be very, very proud of this country in just a few years if I'm elected president. Thank you.
-
-Thank you. My whole theme is make America great again. We don't win anymore as a country. We don't win with trade, we don't win with the military. ISIS, we can't even knock out ISIS, and we will, believe me. We will.
-
-We don't win in any capacity with health care. We have terrible health care, Obamacare is going to be repealed and replaced. We just don't win.
-
-You look at our borders, they're like Swiss cheese, everybody pours in.
-
-We're going to make a great country again. We're going to start winning again. We're going to win a lot, it's going to be a big difference, believe me. It's going to be a big difference.
-
-First of all, he was in charge of amnesty, he was the leader, and you can ask Marco because they've been debating this every debate that we've had.
-
-As far as coming back in, number one, you wouldn't even be talking, and you wouldn't have asked that as the first question if it weren't for me, when in my opening I talked about illegal immigration. It wouldn't even be a big subject.
-
-But, we either have a country, or we don't have a country. We have at least 11 million people in this country that came in illegally. They will go out. They will come back -- some will come back, the best, through a process. They have to come back legally. They have to come back through a process, and it may not be a very quick process, but I think that's very fair, and very fine.
-
-They're going to get in line with other people. The best of them will come back, but they're going to come back through a process.
-
-Well, I'm very glad that Ted mentioned Arizona because probably the toughest man on borders is Sheriff Joe Arpaio, and two days ago he totally endorsed me, so, thank you.
-
-Well, first of all, self-deportation is people are going to leave as soon as they see others going out. If you look at Dwight Eisenhower in the 1950s, they started moving people out and the rest of them left.
-Self-deportation, as I really define it, and that's the way I define it, is you're going to get some to go, and the rest are going to go out.
-
-As far as the people that I've hired in various parts of Florida during the absolute prime season, like Palm Beach and other locations, you could not get help. It's the up season. People didn't want to have part-time jobs. There were part-time jobs, very seasonal, 90-day jobs, 120-day jobs, and you couldn't get help.
-
-Everybody agrees with me on that. They were part-time jobs. You needed them, or we just might as well close the doors, because you couldn't get help in those hot, hot sections of Florida.
-
-I criticized Mitt Romney for losing the election. He should have won that election. He had a failed president. He ran a terrible campaign. He was a terrible candidate. That's what I criticize Mitt Romney
-
-Excuse me. He ran one terrible campaign. That's an election that should have been won.
-
-No, no, I'm the only one on the stage that's hired people. You haven't hired anybody.
-
-And by the way, I've hired -- and by the way, I've hired tens of thousands of people over at my job. You've hired nobody.
-
-You've had nothing but problems with your credit cards, et cetera. So don't tell me about that.
-
-You haven't hired one person, you liar.
-
-That's wrong. That's wrong. Totally wrong.
-
-I've hired tens of thousands of people over my lifetime. Tens of thousands.
-
-Be quiet. Just be quiet.
-
-Let me talk. I've hired tens of thousands of people. He brings up something from 30 years ago, it worked out very well. Everybody was happy.
-
-And by the way, the laws were totally different. That was a whole different world.
-
-But I've hired people. Nobody up here has hired anybody.
-
-I can only say this, and I've said it loud and clear and I've said it for years. And many of these people are sitting right in the audience right now -- your lobbyists and your special interests and your donors, because the audience is packed with them, and they're packed with you.
-
-I've had an amazing relationship with politicians -- with politicians both Democrat, Republican, because I was a businessman. As one magazine said, he's a world-class businessman; he was friendly with everybody. I got along with everybody.
-
-You get along with nobody. You don't have one Republican -- you don't have one Republican senator, and you work with them every day of your life, although you skipped a lot of time. These are minor details. But you don't have one Republican senator backing you; not one. You don't have the endorsement of one Republican senator and you work with these people. You should be ashamed of yourself.
-
-Here's a man -- Robin Hood. This is Robin Hood over here. He talks about corruption. On his financial disclosure form, he didn't even put that he's borrowed money from Citibank and from Goldman Sachs, which is a total violation. He didn't talk about the fact that he pays almost no interest. He just left it off, and now he's going to protect the people from the big bad banks.
-
-Give me a break.
-
-Correct.
-
-I will, and the wall just got 10 feet taller, believe me.
-
-It just got 10 feet taller. I saw him make that -- I saw him make the statement. I saw him use the word that he used. I can only tell you, if I would have used even half of that word, it would have been a national scandal.
-
-This guy used a filthy, disgusting word on television, and he should be ashamed of himself, and he should apologize, OK? Number one. Number two, we have a trade deficit with Mexico of $58 billion a year. And that doesn't include all the drugs that are pouring across and destroying our country.
-
-We're going to make them pay for that wall. Now, the wall is $10 billion to $12 billion, if I do it. If these guys do it, it'll end up costing $200 billion.
-
-But the wall is $10 billion to $12 billion. You need 1,000 -- you need 1,000 miles.
The Great Wall of China, built 2,000 years ago -- 2,000, is 13,000 miles. We need 1,000, because we have a lot of natural barriers.

We can do it for $10 billion to $12 billion, and it's a real wall. This is a wall that's a heck of a lot higher than the ceiling you're looking at. This is a wall that's going to work.

Mexico will pay for it, because they are not doing us any favors. They could stop all of this illegal trade if they wanted to immediately. Mexico will pay for the wall. It's a small portion of the kind of money that we lose and the deficits that we have with Mexico.

Well, you know, I don't mind trade wars when we're losing $58 billion a year, you want to know the truth. We're losing so much.

We're losing so much with Mexico and China -- with China, we're losing $500 billion a year. And then people say, "don't we want to trade?" I don't mind trading, but I don't want to lose $500 billion. I don't want to lose $58 billion.

Mexico just took Carrier Corporation, maker of air conditioners. They just took Ford. They're building a $2.5 billion plant. They just took Nabisco out of Chicago.

And I always say I'm not having Oreos anymore, which is true, by the way. But they just took a big plant from Nabisco into Mexico. They're taking our businesses. I don't mind.

Such a cute sound bite.

All right, you know what?

Because they devalue their currency -- they devalue their currencies, and that makes it -- well, you don't know a thing about business. You lose on everything.

Let me just tell you -- they devalue their currency. They devalue their currencies.

That makes it -- well, you don't know a thing about business. You lose on everything you do.

Let me just tell you, they devalue their currencies. China, Mexico, everybody. Japan with the cars. They devalue their currencies to such an extent that our businesses cannot compete with them, our workers lose their jobs.

But you wouldn't know anything about it because you're a lousy businessman.

No, I -- and you know why? You know why?

You know why?

And by the way I've won most of the lawsuits.

And they actually did a very good job, but I've won most of the lawsuits.

Excuse me. Hey Wolf, let me ask you. Am I allowed to respond to this?

OK. Well let -- no, I haven't. I really haven't.

Here's a guy -- here's a guy that buys a house for $179,000, he sells it to a lobbyist who's probably here for $380,000, and then legislation is passed. You tell me about this guy. This is what we're going to have as president.

No, no, no.

I borrowed $1 million, I turned it into $10 billion, more than $10 billion.

I have to say, he lied this time. He lied. 100 percent. 100 percent.

Yes, yes, yes. 38 years ago.

True.

True.

No.

First of all, I don't believe anything Telemundo says.

Number one. Number two, I currently employ thousands of Hispanics, and over the years, I've employed tens of thousands of Hispanics. They're incredible people. They know, and the reason I won in Nevada, not only won the big one, but I also won subs, like, as an example, I won with women.

I won with every single category. I won with men, I won with high-income, low-income, I won with Hispanics. And I got 46 percent. Nobody else was close. Because they know I'm going to bring jobs back from China, from Japan, from so many other places.

They get it. They're incredible people. They're incredible workers. They get it. And I've won many of the polls with Hispanics. I didn't maybe win the Telemundo poll.
But one thing I'm also going to do, I'm going to be getting -- bringing a lot of people in who are Democrats, who are independents, and you're seeing that with the polls, because if you look at anywhere, look at any of the elections, every single election, it has been record-setting.

And the good news is, for the Republican Party, the Democrats are getting very poor numbers in terms of bringing them in. We're getting record-setting numbers. I think I have something to do with that.

We're getting record-setting numbers. And I won every one -- the three of them that I won, I won with record-setting numbers.

New people are coming into the Republican Party. We are building a new Republican Party, a lot of new people are coming in.

I love them. I love them.

They're fine. Do you know what? They're fine.

Why did they take the poll? Why did they --

I'm just telling you, I'm doing very well with Hispanics. And by the way, I settled my suit, as you know, with Univision. It was settled. We're good friends now. It was all settled up.

Very happy, very happy. Very good people.

I'm just telling you -- I'm just telling you that I will do really well with Hispanics. I will do better than anybody on this stage. I have respect for the people on the stage, but I will do very well with Hispanics. But I'm telling you also, I'm bringing people, Democrats, over and I'm bringing independents over, and we're building a much bigger, much stronger Republican Party.

Yes, I would. And I've been there. And I've been there very strongly. I do have to say something, and this is interesting and it's not anybody's fault. It's not Ted's fault. Justice Roberts was strongly recommended and pushed by Ted. Justice Roberts gave us Obamacare. Might as well be called Roberts-care. Two times on the Supreme Court, Justice Roberts approved something that he should have never raised his hand to approve. And we ended up with Obamacare.

That is a rough thing. And I know Ted feels badly about it. And I think he probably still respects the judge. But that judge has been a disaster in terms of everything we stand for because there is no way -- no way that he should have approved Obamacare.

Now, with that being said, these are the things that happen. But Ted very, very strongly pushed Judge Roberts, and Justice Roberts gave us something that we don't want.

Well, let -- let me -- let me just say -- let me just say this. Look, I watched Ted -- and I respected it, but he gets nowhere -- stand on the Senate floor for a day or two days, and talk and talk and talk.

I watched the other senators laughing and smiling. And when Ted was totally exhausted, he left the Senate floor, and they went back to work. OK? We have to have somebody that's going to make deals.

It's wonderful to stand up for two days and do that. Now, Ted's been very critical -- I have a sister who's a brilliant --

Excuse me. She's a brilliant judge. He's been criticizing -- he's been criticizing my sister for signing a certain bill. You know who else signed that bill? Justice Samuel Alito, a very conservative member of the Supreme Court, with my sister, signed that bill.

So I think that maybe we should get a little bit of an apology from Ted. What do you think?

When you say crazy zealot, are you talking about you? Crazy zealot -- give me a break.

Well, let -- let me just say -- let me just say, first of all, I have great respect for Justice Scalia. I thought he was terrific.
And if you talk about evolving, Ronald Reagan was a somewhat liberal Democrat. Ronald Reagan evolved into a somewhat strong conservative -- more importantly, he was a great president. A great president.

As far as Planned Parenthood is concerned, I'm pro-life. I'm totally against abortion, having to do with Planned Parenthood. But millions and millions of women -- cervical cancer, breast cancer -- are helped by Planned Parenthood.

So you can say whatever you want, but they have millions of women going through Planned Parenthood that are helped greatly. And I wouldn't fund it.

I would defund it because of the abortion factor, which they say is 3 percent. I don't know what percentage it is. They say it's 3 percent. But I would defund it, because I'm pro-life. But millions of women are helped by Planned Parenthood.

I just want to say, I agree with that 100 percent, except pre-existing conditions. I would absolutely get rid of Obamacare. We're going to have something much better, but pre-existing conditions, when I'm referring to that, and I was referring to that very strongly on the show with Anderson Cooper, I want to keep pre-existing conditions.

I think we need it. I think it's a modern age. And I think we have to have it.

I think they're wrong 100 percent. What we need -- look, the insurance companies take care of the politicians. The insurance companies get what they want. We should have gotten rid of the lines around each state so we can have real competition.

We thought that was gone, we thought those lines were going to be gone, so something happened at the last moment where Obamacare got approved, and all of that was thrown out the window.

The reason is some of the people in the audience are insurance people, and insurance lobbyists, and special interests. They got -- I'm not going to point to these gentlemen, of course, they're part of the problem, other than Ben, in all fairness.

And, actually, the Governor too, let's just talk about these two, OK?

Because I don't think the Governor had too much to do with this.

But, we should have gotten rid of the borders, we should have gotten rid of the lines around the states so there's great competition. The insurance companies are making a fortune on every single thing they do.

I'm self-funding my campaign. I'm the only one in either party self-funding my campaign. I'm going to do what's right. We have to get rid of the lines around the states so that there's serious, serious competition.

And, you're going to see -- excuse me. You're going to see pre-existing conditions and everything else be part of it, but the price will be done, and the insurance companies can pay. Right now they're making a fortune.

That's going to solve the problem. And, the insurance companies aren't going to say that, they want to keep it. They want to say -- they say whatever they have to say to keep it the way it is. I know the insurance companies, they're friends of mine. The top guys, they're friends of mine. I shouldn't tell you guys, you'll say it's terrible, I have a conflict of interest. They're friends of mine, there's some right in the audience. One of them was just waving to me, he was laughing and smiling. He's not laughing so much anymore.

Hi.

Look, the insurance companies are making an absolute fortune. Yes, they will keep pre-existing conditions, and that would be a great thing. Get rid of Obamacare, we'll come up with new plans. But, we should keep pre-existing conditions.
And, you don't know what it means.

You don't know.

The biggest problem, I'll have you know.

You know, I watched him melt down two weeks ago with Chris Christie. I got to tell you, the biggest problem he's got is he really doesn't know about the lines. The biggest thing we've got, and the reason we've got no competition, is because we have lines around the states, and you have essentially.

You don't know much.

The lines around the states and, it was almost done -- not now --

Excuse me. Excuse me.

You get rid of the lines, it brings in competition. So, instead of having one insurance company taking care of New York, or Texas, you'll have many. They'll compete, and it'll be a beautiful thing.

The nice part of the plan -- you'll have many different plans. You'll have competition, you'll have so many different plans.

No, no, no.

I watched him repeat himself five times four weeks ago.

I watched him melt down on the stage like that, I've never seen it in anybody.

I thought he came out of the swimming pool.

We're going to have many different plans because competition --

There is going to be competition among all of the states, and the insurance companies. They're going to have many, many different plans.

BASH: Is there anything else you would like to add to that?

No, there's nothing to add.

What is to add?

I do not want socialized medicine, just so you understand. He goes around saying oh, he wants it. I do not want socialized medicine. I do agree with him that it's going to be a disaster, Obamacare, for the economy.

In 2017, it will be impossible for us to pay for it if you look at what's going on. That's why it has to be repealed, for a lot of reasons. Number one, it doesn't work; number two, premiums. You look at premiums going up, 25, 35, even 45 percent, and more. We have to get rid of Obamacare. It is going to destroy our economy completely. Our economy is not doing well. It is going to destroy our economy greatly. And on that, I agree.

That's false.

No, I said it worked in a couple of countries.

No, I did not. No I did not.

Correct. I will not let people die on the streets if I'm president.

Excuse me. Let me talk. If people -- my plan is very simple. I will not -- we're going to have private -- we are going to have health care, but I will not allow people to die on the sidewalks and the streets of our country if I'm president. You may let it and you may be fine with it.

I'm not fine with it. We are going to take those people.

Excuse me. We are going to take those people and those people are going to be serviced by doctors and hospitals. We're going to make great deals on it, but we're not going to let them die in the streets.

You know what? Call it what you want.

Call it what you want, people are not going to be dying on the sidewalk.

Because the country will become a dynamic economy. We'll be dynamic again. If you look at what's going on, we have the highest taxes anywhere in the world. We pay more business tax, we pay more personal tax. We have the highest taxes in the world.

It's shutting off our economy. It's shutting off our country. We have trillions of dollars outside that we can't get in. Yes, we will do my tax plan, and it will be great. We will have a dynamic economy again.

We're going to make many cuts in business. We're getting rid of -- we're going to get rid of so many different things. Department of Education -- Common Core is out. We're going local. Have to go local.
Environmental protection -- we waste all of this money. We're going to bring that back to the states. And we're going to have many other things.

We are going to cut many of the agencies, we will balance our budget, and we will be dynamic again.

Waste, fraud and abuse all over the place. Waste, fraud and abuse.

You look at what's happening with Social Security, you look -- look at what's happening with every agency -- waste, fraud and abuse. We will cut so much, your head will spin.

I just want to say -- and I'm a big fan of the governor, but they also struck oil, OK, so that helped Iowa a lot.

All right. First of all, let me just explain. I was the first one to file a financial disclosure form -- almost 100 pages. You don't learn anything about somebody's wealth with a tax return. You learn it from statements.

I filed -- which shows that I'm worth over $10 billion. I built a great company with very little debt. People were shocked, the people in the back, the reporters, they were shocked when they went down. And I filed it on time. I didn't ask for five 45-day extensions, which I would have been entitled to.

So as far as that's concerned, I filed it. And that's where you find out what kind of a company it is. You don't learn anything from a tax return.

I will say this. Mitt Romney looked like a fool when he delayed and delayed and delayed. And Harry Reid baited him so beautifully. And Mitt Romney didn't file his return until September 21st of 2012, about a month-and-a-half before the election. And it cost him big league.

As far as my return, I want to file it, except for many years, I've been audited every year. Twelve years, or something like that. Every year they audit me, audit me, audit me.

Nobody gets audited -- I have friends that are very wealthy people. They never get audited. I get audited every year. I will absolutely give my return, but I'm being audited now for two or three years, so I can't do it until the audit is finished, obviously. And I think people would understand that.

Are you going to ask anybody else that question?

Every single question comes to me?

I know I'm here for the ratings, but it's a little bit ridiculous.

True.

No, I'm not. First of all, very few people listen to your radio show. That's the good news.

Let me just tell you, let me just -- which happens to be true. Check out the ratings.

Look, let me just tell you something. Let me just tell you something. I want to release my tax returns but I can't release them while I'm under an audit. We're under a routine audit. I've had it for years, I get audited.

And obviously if I'm being audited, I'm not going to release a return. As soon as the audit is done, I love it.

Eighty-five percent say you, big difference.

So at the beginning, I said openly to everybody that I contribute to many, many politicians, both Republican and Democrat. And I have, over the years. I'm a businessman. I have, over the years.

And I sort of have to laugh when Ted makes a big deal out of the fact that he's doing well in the polls. Well, I'm beating him in virtually every poll. I'm tied in Texas, by the way, which I shouldn't be. But I think I'll do very well.

But a poll just came out -- a Bloomberg poll -- where I am beating him so badly that it's, like, embarrassing even for me to say I'm beating him that badly.

And -- and here's the thing -- it was sort of funny -- 65 percent of the people don't like you -- I just got 36 percent of the vote, right?
I just got 46 percent on another one. I got 38 percent on another one. That means -- and he got 20 and 22, and he lost in South Carolina so badly -- that was going to be his stronghold. He said a year ago, "I can't lose South Carolina." I beat him in a landslide.

Last week in Nevada, I beat him in a landslide, and he sang (ph) about the polls. One other thing -- Hillary Clinton -- take a look at USA Today, take a look at the Q poll. I beat her, and I beat her badly. And I -- and I haven't even started at her. I only had one little interchange.

I only had one little interchange, and that was four weeks ago, when she said I was sexist. And believe me, they had a rough weekend that weekend, between Bill and Hillary. They had a rough weekend.

Nothing.

I'm not afraid.

First of all, he's talking about the polls. I'm beating him awfully badly in the polls.

Well, then, if I can't -- if -- hey, if I can't beat her, you're really going to get killed, aren't you?

I'm being audited 12 years in a row, at least.

Shocking.

Right.

Well, first of all, I don't think they do under President Obama because I think he's treated Israel horribly, all right? I think he's treated Israel horribly.

I was the grand marshal down 5th Avenue a number of years ago for the Israeli Day Parade. I have very close ties to Israel. I've received the Tree of Life Award and many of the greatest awards given by Israel.

As president, however, there's nothing that I would rather do than bring peace to Israel and its neighbors generally. And I think it serves no purpose to say that you have a good guy and a bad guy.

Now, I may not be successful in doing it. It's probably the toughest negotiation anywhere in the world of any kind. OK? But it doesn't help if I start saying, "I am very pro-Israel, very pro, more than anybody on this stage." But it doesn't do any good to start demeaning the neighbors, because I would love to do something with regard to negotiating peace, finally, for Israel and for their neighbors.

And I can't do that as well -- as a negotiator, I cannot do that as well if I'm taking big, big sides. With that being said, I am totally pro-Israel.

Well, I can only say -- look, I can only say I've been a big contributor to Israel over the years. I've received many, many awards from Israel, as I've said before. I have a great relationship with Israel. And I'm going to keep it that way. And if I could bring peace, that would be a fantastic thing. It would be one of my greatest achievements as president.

I'm a negotiator. I've done very well over the years through negotiation. It's very important that we do that. In all fairness, Marco is not a negotiator. I watched him melt down and I'll tell you, it was one of the saddest things I've ever seen. He's not going down -- excuse me, wait a minute, and these people may even be tougher than Chris Christie. OK?

OK, no, no, no -- a deal is a deal. Let me tell you that. I learned a long time ago.

You are not a negotiator. You are not a negotiator.

And, with your thinking, you will never bring peace. You will never bring peace.

Excuse me, I want to be able to bring peace.

He will never be able to do it. I think I may be able to do it, although I will say this. Probably the toughest deal of any kind is that particular deal.
One thing I'd like to add to what the Governor's saying: I think that we are now in a position -- we are $19 trillion because of the horrible omnibus budget that was approved six weeks ago, and it's going to be $21 trillion. We can no longer defend all of these countries, Japan, Germany, South Korea.

You order televisions, you order almost anything, you're getting it from these countries. Whether it's a Mercedes-Benz, or whether it's an air conditioning unit. They're coming out of these countries. They are making a fortune. Saudi Arabia, we are defending Saudi Arabia. Before the oil went down, now they're making less, but they're making plenty. They were making $1 billion a day.

We defend all of these countries for peanuts. You talk about budgets. We have to start getting reimbursed for taking care of the military services for all of these countries.

I really don't, because it's not working and the countries aren't agreeing to it and the rebels aren't agreeing and Syria is not agreeing. So it's a meaningless ceasefire.

I love the idea of a ceasefire. I love the idea of -- with a total cessation. But it's not working, as you know very well. It's not working. If -- we can do what we want with Russia but nobody else is adhering to it.

So I certainly support it, I would certainly love it, but all parties have to be part of it.

Again, I think I gave them both checks, to be exactly honest. I think they both liked me very much.

Well, I think Bush did a hell of a bad job as far as that's concerned. You know it and so do I.

Be honest. Be honest. No, this was before. The check came early.

But let me just tell you, Syria, he's saying that I was in favor of Syria. He said I was in favor of Libya? I never discussed that subject. I was in favor of Libya? We would be so much better off if Gadhafi were in charge right now.

If these politicians went to the beach and didn't do a thing, and we had Saddam Hussein and if we had Gadhafi in charge, instead of having terrorism all over the place, we'd be -- at least they killed terrorists, all right?

And I'm not saying they were good because they were bad, they were really bad, but we don't know what we're getting. You look at Libya right now, ISIS, as we speak, is taking over their oil. As we speak, it's a total mess.

We would have been better off if the politicians took a day off instead of going into war.

I never said walk away. I wouldn't want to walk away. I want them to pay us much more money. We cannot afford to subsidize a lot. I'll negotiate a lot more money than you'll ever get.

As far as John Kerry is concerned, there has been no tougher critic of this man. I think he negotiated one of the worst deals in the history of our country, the Iran deal, where they get their $150 billion and all of the other things that take place.

It is a disaster for this country, and speaking of Israel, it's a disaster for Israel. I'm no fan of John Kerry.

Well, look, my response is very simple. There is nobody on this stage that has done more for Israel than I have. Nobody. You might say, you might talk, you're politicians, all talk, no action.

I've been watching it all my life. You are all talk and no action.

What I've seen up here -- I mean, first of all, this guy is a choke artist, and this guy is a liar. You have a combination.

You have a combination of factors. He can't do it for the obvious reason, and he can't do it because he doesn't know how to tell the truth. Other than that, I rest my case.
I watched -- I watched the lobbyists. I watched what this man did to Dr. Ben Carson, who I respect, in Iowa, where he said that Ben Carson is out of the race -- he has left Iowa and he's out of the race. And I thought it was disgraceful.

And got a lot of votes because of that -- a lot of votes. Took them away from Ben Carson. I watched that. Probably took them away from me, too. But I watched it.

I also watched where he did a form that looked like it came right out of a government agency, and it said on top, "Voter Violation," and then it graded you and it scared the hell out of people, and it said the only way you clear up the violation, essentially, is to go and vote for Ted Cruz. I watched that fraudulent document, and I said it's the worst thing I've ever seen in politics.

To me, that was even worse than what he did to Ben.

I know politicians -- I know politicians, believe it or not, better than you do. And it's not good.

I funded you. I funded him. Can you believe it?

I funded this guy. I gave him a check.

I gave him a check. He never funded me.

You know why? I didn't want to, but he sent me his book with his autograph: "Mr. Trump, you're doing a great job." I have his book.

Thank you -- thank you for the book. Go ahead.

This is a lot of fun up here tonight, I have to tell you.

Go ahead. I'm relaxed. You're the basket case.

Go ahead. Don't get nervous.

Go ahead.

I've seen you.

You're losing so badly.

You don't know what's happening.

First of all, you're talking about a border that's many, many times longer. You're talking about a massive border.

We have far less problem with that border than we do with our Southern border, and tremendous amounts -- you know, I won, I had the privilege of winning by a landslide, by the way, New Hampshire.

You go to New Hampshire, the first thing they talk about is heroin and drugs pouring in. And, you wouldn't think this beautiful place -- it's beautiful. With the trees and the roads, and the countryside. Their biggest problem is heroin, and it's such a shame to see it.

They're pouring in from the Southern border, so I'm talking about great security. I'm talking about a wall that can absolutely be built, and I'll build it on time, on budget. It'll be a very high wall, a great wall. It's going to be built, it's going to be built. It's going to be paid for by Canada, by the way -- maybe I'll get Canada to pay? Got to be paid for by Mexico.

The problem with Canada, you're talking about a massively long piece. You're talking about a border that would be about four times longer. It would be very, very hard to do, and we -- it is not our biggest problem. I don't care what anyone says. It is not our big problem. Our big problem is not only people coming in, and in many cases the wrong people, it's the tremendous amount of drugs that are coming in.

Thank you.

Nobody knows politicians better than I do. They're all talk, they're no action, nothing gets done. I've watched it for years. Take a look at what's happening to our country.

All of the things that I've been talking about, whether it's trade, whether it's building up our depleted military, whether it's taking care of our vets, whether it's getting rid of Common Core, which is a disaster, or knocking out Obamacare and coming up with something so much better, I will get it done. Politicians will never, ever get it done. And we will make America great again. Thank you.
Now, until that audit's done, and I don't think anybody would blame me, I'm not giving it.

Well, I can say this. If the President — and if I were President now, I would certainly want to try and nominate a justice. I’m sure that, frankly, I’m absolutely sure that President Obama will try and do it. I hope that our Senate is going to be able — Mitch, and the entire group, is going to be able to do something about it.

In times of delay, we could have a Diane Sykes, or you could have a Bill Pryor; we have some fantastic people. But this is a tremendous blow to conservatism. It’s a tremendous blow, frankly, to our country.

I think he’s going to do it whether I’m OK with it or not. I think it’s up to Mitch McConnell, and everybody else, to stop it. It’s called delay, delay, delay.

What we want to do, when we want to do it, and how hard do we want to hit? Because we are going to have to hit very, very hard to knock out ISIS.

We’re going to also have to learn who our allies are. We have allies, so-called allies; we’re spending billions and billions of dollars supporting people — we have no idea who they are in Syria. Do we want to stay that route, or do we want to go and make something with Russia?

I hate to say Iran, but with Russia, because we — and the Iran deal is one of the worst deals I have ever seen negotiated in my entire life. It’s a disgrace that this country negotiated that deal. Not only a disgrace, it’s a disgrace and an embarrassment. But very important: who are we fighting with? Who are we fighting for? What are we doing? We have to rebuild our country. But we have to — I’m the only one on this stage that said, “Do not go into Iraq. Do not attack Iraq.” Nobody else on this stage said that. And I said it loud and strong. And I was in the private sector. I wasn’t a politician, fortunately.

But I said it, and I said it loud and clear, “You’ll destabilize the Middle East.” That’s exactly what happened.

I also said, by the way, four years ago, three years ago, attack the oil, take the wealth away, attack the oil and keep the oil. They didn’t listen. They just started that a few months ago.

Called me a genius. I like him so far, I have to tell you. Let me just tell you this.

Jeb is so wrong. Jeb is absolutely self — just so you understand, you know what that is? That’s Jeb’s special interests and lobbyists talking.

Look, let me just tell you something, Jeb — Jeb is so wrong. You got to fight ISIS first. You fight ISIS first. Right now you have Russia, you have Iran, you have them with Assad, and you have them with Syria. You have to knock out ISIS. They’re chopping off heads. These are animals. You have to knock ’em out. You have to knock them off strong. You decide what to do after; you can’t fight two wars at one time.

If you listen to him, and you listen to some of the folks that I’ve been listening to, that’s why we’ve been in the Middle East for 15 years, and we haven’t won anything. We’ve spent $5 trillion in the Middle East with thinking like that. We’ve spent $5 —

Lindsey Graham, who backs him, had zero in his polls. Let me just say something — we’ve spent — we’ve spent.

I only tell the truth, lobbyists.

We’ve spent $5 trillion all over the — we have to rebuild our country. We have to rebuild our infrastructure. If you listen to that, you’re going to be there for another 15.

You’ll end up with World War III.

We’re supporting troops that we don’t even know who they are.
We’re supporting troops that we don’t even know who they are.

We have no idea who they are.

Oh, yeah, yeah.

$44 million in New Hampshire — it was practically $44 million — give me a break.

First of all, I have to say, as a businessman I get along with everybody. I have business all over the world.

I know so many of the people in the audience. And by the way, I’m a self-funder. I don’t have — I have my wife and I have my son. That’s all I have. I don’t have this.

So let me just tell you, I get along with everybody, which is my obligation to my company, to myself, et cetera.

Obviously, the war in Iraq was a big, fat mistake. All right? Now, you can take it any way you want, and it took — it took Jeb Bush, if you remember at the beginning of his announcement, when he announced for president, it took him five days.

He went back, it was a mistake, it wasn’t a mistake. It took him five days before his people told him what to say, and he ultimately said, “it was a mistake.” The war in Iraq — we spent $2 trillion, thousands of lives, we don’t even have it. Iran has taken over Iraq, with the second-largest oil reserves in the world.

Obviously, it was a mistake.

George Bush made a mistake. We can make mistakes. But that one was a beauty. We should have never been in Iraq. We have destabilized the Middle East.

I’m being nice.

The World Trade Center came down during your brother’s reign, remember that.

That’s not keeping us safe.

She should be running.

How did he keep us safe when the World Trade Center — the World — excuse me. I lost hundreds of friends. The World Trade Center came down during the reign of George Bush. He kept us safe? That is not safe. That is not safe, Marco. That is not safe.

And George Bush — by the way, George Bush had the chance, also, and he didn’t listen to the advice of his CIA.

I don’t want to go.

Yes.

First of all, the — when you say I’m the only candidate, if you listen to the Democrats, they want to do many things to Social Security, and I want to do them on its own merit. You listen to them, what they want to do to Social Security — none of these folks are getting elected, OK, whether they can do it or not. I’m going to save Social Security. I’m going to bring jobs back from China. I’m going to bring jobs back from Mexico and from Japan, where they’re all — every country throughout the world — now Vietnam, that’s the new one.

They are taking our jobs. They are taking our wealth. They are taking our base. And you and I have had this discussion. We’re going to make our economy strong again. I’m lowering taxes. We have $2.5 trillion offshore. We have $2.5 trillion that I think is actually $5 trillion, because the government has no idea when they say 2.5 — they have no idea what they’re doing or saying, as they’ve proven very well.

We’re going to bring that money back. You take a look at what happened just this week: China bought the Chicago Stock Exchange — China, a Chinese company. Carrier is moving to Mexico — air conditioning company. Not only the ones I talk about all the time — Nabisco and Ford — they’re all moving out.

We have an economy that, last quarter, GDP didn’t grow. It was flat. We have to make our economy grow again. We’re dying. This country is dying. And our workers are losing their jobs, and you’re going.

I’m the only one who is going to save Social Security, believe me.

Because you have tremendous waste. I’ll tell you.

You have tremendous waste, fraud and abuse. That we’re taking care of.
That we’re taking care of. It’s tremendous. We have in Social Security right now thousands and thousands of people that are over 106 years old. Now, you know they don’t exist. They don’t exist. There’s tremendous waste, fraud and abuse, and we’re going to get it. But we’re not going to hurt the people who have been paying into Social Security their whole life and then all of a sudden they’re supposed to get less. We’re bringing our jobs back. We’re going to make our economy great again.

I want everybody taken care of, but we have to take care of our people in this country. We’re not taking care of our people. We have no border. We have no control. People are flooding across. We can’t have it. We either have a border, and I’m very strongly — I’m not proposing. I will build a wall. I will build a wall.

Remember this: the wall will be paid for by Mexico. We are not being treated right.

We are not being treated properly. If we don’t have borders, if we don’t have strength, we don’t have a country. People are flowing across. We have to take care of our people. Believe me.

Look, when I announced that I was running for president on June 16th, illegal immigration wasn’t even a subject. If I didn’t bring it up, we wouldn’t even be talking.

Now I don’t often agree with Marco, and I don’t often agree with Ted, but I can in this case. The weakest person on this stage by far on illegal immigration is Jeb Bush. They come out of an act of love, whether you like it or not. He is so weak on illegal immigration it’s laughable, and everybody knows it.

Spend a little more money on the commercials.

I don’t know what you’re talking about.

I never called him — I don’t call him.

He also talked about language. Two days ago he said he would take his pants off and moon everybody, and that’s fine. Nobody reports that. He gets up and says that, and then he tells me, oh, my language was a little bit rough.

My language. Give me a break.

You did say it. You did say it. It’s been reported in 10 different news reports.

Or a tax.

I would build consensus with Congress and Congress would agree with me. I’ll give you an example, because I don’t like the idea of using executive orders like our president. It is a disaster what he’s doing. I would build consensus, but consensus means you have to work hard. You have to cajole. You have to get them into the Oval Office and get them all together, and you have to make deals.

Let me just tell you — I mentioned before, China — a big Chinese company bought the Chicago Exchange. Carrier is moving — and if you saw the people, because they have a video of the announcement that Carrier is moving to Mexico, OK?

Well, I’ll tell you what. I would go right now to Carrier and I would say I am going to work awfully hard. You’re going to make air conditioners now in Mexico. You’re going to get all of these 1,400 people that are being laid off — they’re laid off. They were crying. They were — it was a very sad situation. You’re going to go to Mexico. You’re going to make air conditioners in Mexico, you’re going to put them across our border with no tax.

I’m going to tell them right now: I am going to get consensus from Congress and we’re going to tax you when those air conditioners come. So stay where you are, or build in the United States, because we are killing ourselves with trade pacts that are no good for us and no good for our workers.

John, in life you have flexibility. You do have flexibility. When you’re fighting wars, you’re going one way, you have a plan.
It’s a beautiful plan. It can’t lose. The enemy makes a change, and all of a sudden you have to change.

You have to have flexibility. Ronald Reagan, though, in terms of what we’re talking about, was the great example. He was a somewhat liberal Democrat who became a somewhat, pretty strong conservative. He became — most importantly he became a great president. He made many of the changes that I’ve made — I mean, I’ve seen, as I grew up, I’ve seen, and as I get older and wiser, and I feel that I am a conservative.

Now, I also feel I’m a common-sense conservative, because some of the views I don’t agree with. And I think a lot of people agree with me, obviously, based on what’s happening.

Well, I think these people always hit me with eminent domain, and frankly, I’m not in love with eminent domain. But eminent domain is something you need very strongly.

When Jeb had said, “You used eminent domain privately for a parking lot.” It wasn’t for a parking lot. The state of New Jersey — too bad Chris Christie is not here, he could tell you — the state of New Jersey went to build a very large tower that was going to employ thousands of people.

I mean, it was going to really do a big job in terms of economic development. Now, just so you understand, I got hit very hard. It’s private — it’s private eminent domain. You understand that they took over a stadium in Texas, and they used private eminent domain, but he just found that out after he made the charge.

Yeah. Well, Jeb wouldn’t have known about it.

You shouldn’t have used it then, Jeb.

Thank you very much, I appreciate it.

You probably are worse than Jeb Bush. You are the single biggest liar. This guy lied — let me just tell you, this guy lied about Ben Carson when he took votes away from Ben Carson in Iowa, and he just continues. Today, we had robo-calls saying, “Donald Trump is not going to run in South Carolina” — where I’m leading by a lot. “I’m not going to vote for Ted Cruz.” This is the same thing he did to Ben Carson. This guy will say anything. Nasty guy. Now I know why he doesn’t have one endorsement from any of his colleagues.

He’s a nasty guy.

Where did I support it? Where did I —

Again, where did I support it?

Hey Ted, where did I support it?

Where did I support it?

That’s a lot of lies.

It does do wonderful things, but not as it relates to abortion.

Excuse me. Excuse me, there are wonderful things having to do with women’s health.

But not when it comes to abortion.

Hold on.

Ted Cruz told your brother that he wanted John Roberts to be on the United States Supreme Court. They both pushed him. He twice approved Obamacare.

OK, governor.

You pushed him. You pushed him.

You worked with him and you pushed him. Why do you lie?

Why do you lie?

You pushed him.

Yeah, yeah, I know, you’re an adult.

Well, I would say my wife tells me I’m wrong all the time. And I listen.

Oh, let me just say — look, I am very open — I hired top people. I’ve had great success. I built a great, great company. I don’t need to do this. I’m self-funding. I’m spending a lot of money. I’ve spent — like in New Hampshire, I spent $3 million. Jeb Bush spent $44 million. He came in five, and I came in No. 1.

That’s what the country needs, folks. I spent $3 million, he spends $42 million of their money, of special interest money. And it’s just — this is not going to make — excuse me. This is not going to make our country great again.

This is not what we need in our country.
We need people that know what the hell they’re doing. And politicians, they’re all talk, they’re no action. And that’s why people are supporting me.

I do listen to people. I hire experts. I hire top, top people. And I do listen. And you know what? Sometimes they’re wrong. You have to know what to do, when to do it. But sometimes they’re wrong.

Well, I’ll tell you — over the years, I’ve made many speeches. People have asked me, big companies have asked me to make speeches, and friends of mine that run big companies, on success.

And on occasion, in order to sort of really highlight something, I’ll use a profanity. One of the profanities that I got credited with using, that I didn’t use, was a very bad word, two weeks ago, that I never used.

I said, “You.” And everybody said, “Oh, he didn’t say anything wrong.” But you bleeped it, so everyone thinks I said the — I didn’t say anything. I never said the word.

It is very unfair, that criticism. Now, I will say this, with all of that being said: I have said I will not do it at all, because if I say a word that’s a little bit off color, a little bit, it ends up being a headline.

I will not do it again. I was a very good student at a great school. Not using — by the way — not using profanity is very easy.

That’s not — let me respond. That’s another lie. I never went bankrupt!

No, but it’s another lie.

No, but it’s another lie. This guy doesn’t know what he’s talking about. Just a lie.

Let me just tell you. Jeb goes around saying — just like the biggest business leaders in this country, I’ve used the laws of the land to chapter — I bought a company, I threw it immediately into a chapter, I made a great deal. I used the laws to my benefit, because I run a company.

Excuse me, Jeb!

I never went bankrupt, never. Now — but you don’t want to say that. Now, let me just say, I’ve used it, just like the biggest leaders in the country. Let me tell you something — Florida.

Florida — he put so much debt on Florida. You know, we keep saying he’s a wonderful governor, wonderful governor. He put so much debt on Florida, and he increased spending so much, that as soon as he got out of office, Florida crashed.

I happened to be there. It’s my second home. Florida crashed. He didn’t do a good job as governor.

And you haven’t — excuse me, you haven’t heard that. You listen to the good record in Florida. You take a look at what happened: as soon as that year ended and he got out, Florida crashed. Too much debt.

He loaded it up with debt, and his spending went through the roof.

By the way, he was not a good governor.

Take a look at your numbers.

Florida went down the tubes right after he got out of office.

Went right down because of what he did to it.

Thank you.

Politicians are all talk, no action. You’ve seen where they’ve taken you to. We are $19 trillion right now. It’s going to be increased with that horrible budget from a month ago that was just approved by politicians.

We need a change. We need a very big change. We’re going to make our country great again.

I say this every night, every day, every afternoon, and it’s so true — we don’t win anymore. We don’t win with healthcare, we don’t win with ISIS and the military, we don’t take care of our vets, we don’t take care of our borders, we don’t win. We are going to start winning again. We are not going to be controlled by people that are special interests and lobbyists that everybody here has contributed to. And you know what, they do exactly what those folks want them to do.
We are going to make our country great and we’re going to do the right thing. I’m working for you. I’m not working for anybody else.

Thank you very much.

I actually think I have the best temperament. I built a massive corporation. I employ thousands and thousands of people. I’ve gotten along with people for years and years, have tremendous relationships with many people, including politicians on both sides. And no matter how you cut it, when I — when I came out, I hit immigration, I hit it very hard. Everybody said, “Oh, the temperament,” because I talked about illegal immigration.

Now, everybody’s coming to me, they’re all trying to say, well, he’s right, we have to come to him. I hit other things. I talked about Muslims. We have a problem. Nobody else wanted to mention the problem, I brought it up. I took a lot of heat. We have to have a temporary something, because there’s something going on that’s not good. And remember this, I’m the only one up here, when the war of Iraq — in Iraq, I was the one that said, “Don’t go, don’t do it, you’re going to destabilize the Middle East.” So, I’m not one with a trigger. I’m not one with a trigger. Other people up here, believe me, would be a lot faster.

But I’ll build the military stronger, bigger, better than anybody up here, and nobody is going to mess with us. That, I can tell you.

Am I allowed to respond? I have to respond.

First of all, I respect what Ted just said, but if you noticed, he didn’t answer your question. And that’s what’s going to happen — OK.

That’s what’s going to happen with our enemies and the people we compete against. We’re going to win with Trump. We’re going to win. We don’t win anymore. Our country doesn’t win anymore. We’re going to win with Trump. And people back down with Trump. And that’s what I like and that’s what the country is going to like.

Well, let me say a couple of things. First of all, Marco said earlier on that President Obama knows exactly what he’s doing, like we have this president that really knows. I disagree, respectfully, with Marco.

I think we have a president who, as a president, is totally incompetent, and he doesn’t know what he’s doing.

I think he has no idea what he’s doing. And our country is going to hell. So, I just want to say, we disagree on that. Is that okay?

Good.

As to North Korea?

We have — tremendous money has been just sucked out of our country by China. China says they don’t have that good of control over North Korea. They have tremendous control. I deal with the Chinese all of the time. I do tremendous — the largest bank in the world is in one of my buildings in Manhattan.

I deal with them. They tell me. They have total, absolute control, practically, of North Korea. They are sucking trillions of dollars out of our country — they’re rebuilding China with the money they take out of our country. I would get on with China; let China solve that problem.

They can do it quickly and surgically. That’s what we should do with North Korea.

Good evening.

Yes.

I don’t think I am. I think I’m closer to common sense. We are going to repeal Obamacare.

We’re going to repeal Obamacare. We are going to replace Obamacare with something so much better. And there are so many examples of it. And I will tell you, part of the reason we have some people laughing is because you have insurance people that take care of everybody up here.

In addition to that, you have the health care savings plans, which are excellent.
What I do say is, there will be a certain number of people that will be on the street dying, and as a Republican, I don’t want that to happen. We’re going to take care of people that are dying on the street, because there will be a group of people that are not going to be able to even think in terms of private or anything else, and we’re going to take care of those people.

And I think everybody on this stage would have to agree: you’re not going to let people die, sitting in the middle of a street, in any city in this country.

Well, let me just tell you about eminent domain, because almost all of these people actually criticize it, but so many people have hit me with commercials and other things about eminent domain.

Eminent domain is an absolute necessity for a country, for our country. Without it, you wouldn’t have roads, you wouldn’t have hospitals, you wouldn’t have anything. You wouldn’t have schools, you wouldn’t have bridges. You need eminent domain. And a lot of the big conservatives that tell me how conservative they are — I think I’m more than they are — they tell me, oh — well, they all want the Keystone Pipeline. The Keystone Pipeline, without eminent domain, wouldn’t go 10 feet, OK? You need eminent domain. And eminent domain is a good thing, not a bad thing.

And what a lot of people don’t know, because they were all saying, oh, you’re going to take their property — when somebody — when eminent domain is used on somebody’s property, that person gets a fortune. They get at least fair market value, and if they are smart, they’ll get two or three times the value of their property. But without eminent domain, you don’t have roads, highways, schools, bridges or anything.

So eminent domain — it’s not that I love it, but eminent domain is absolutely — it’s a necessity for a country. And certainly it’s a necessity for our country.

Yes.

Jeb wants to be — he wants to be a tough guy tonight. I didn’t take the property.

I didn’t take the property.

The woman ultimately didn’t want to do that. I walked away.

Well, let me just — you know, he wants to be a tough guy. A lot of times, you’ll have — you’ll have — and it didn’t work very well.

A lot of time — let me talk. Quiet. A lot of times — a lot of times...

BUSH: How tough is it to take away a property from an elderly woman?

You — let me talk. Let me talk. Quiet. A lot of times that’s all of his donors and special interests out there.

So — it’s what it is. That’s what — and by the way, let me just tell you, we needed tickets. You can’t get them. You know who has the tickets for the — I’m talking about, to the television audience? Donors, special interests, the people that are putting up the money.

That’s who it is. The RNC told us. We have all donors in the audience. And the reason they’re not loving me — the reason they’re not — excuse me. The reason they’re not loving me is, I don’t want their money. I’m going to do the right thing for the American public. I don’t want their money. I don’t need their money. And I’m the only one up here that can say that.

Eminent domain, the Keystone pipeline — do you consider that a private job? Do you — do you consider that —

No — no, let me ask you, Jeb.

Do you consider the Keystone pipeline private?

Is it public or private?

Real — a public use?

No, it’s a private job.

It’s a private job.

You wouldn’t have the Keystone pipeline that you want so badly without eminent domain.
You wouldn’t have massive — excuse me, Josh — you wouldn’t have massive factories without eminent domain.

Well, I think I am, and to me, I view the word conservative as a derivative — of — of the word conserve. We want to conserve our money. We want to conserve our wealth. We want to conserve. We want to be smart. We want to be smart where we go, where we spend, how we spend. We want to conserve our country. We want to save our country. And we have people that have no idea how to do that, and they are not doing it, and it’s a very important word and it’s something I believe in very, very strongly.

Well, before I go there, I will tell you, I will bring jobs back from China. I will bring jobs back from Japan. I will bring jobs back from Mexico, where New Hampshire, by the way, has been virtually wiped out. They’ve lost so many businesses going to Mexico because of horrible trade deals. And now we’re about to sign another trade deal, TPP, which is going to be a disaster for this country, because they don’t talk about monetary manipulation. It is going to be a disaster.

I’m going to bring jobs back, and I’ll start bringing them back very fast. Under my tax plan — right now, we’re the highest taxed country in the world. Under my plan, we cut not only taxes for the middle class, but we cut taxes for corporations. We will bring back trillions of dollars that’s offshore. Right now, they have $2.5 trillion, and in my opinion, it’s much more than that. That’s what the government says. All of that money is going to come back.

And we’re not going to lose Pfizer, which is now leaving, and other great companies, which are now leaving. And they’re all leaving. We have many, many companies that are leaving this country. We’re not going to lose them anymore, because we’re going to have a tax structure that is going to keep them in our country.

Well, four years ago, I said, bomb the oil and take the oil. And if we did that, they wouldn’t have the wealth they have right now. Now, I still say the same thing, because we’re doing little pinpricks. We’re not even bombing — if somebody’s driving a truck, they give notice to the person driving the truck, “we’re going to bomb.” If they don’t get out of the truck, the truck sails away with the oil.

We actually have a case where we don’t want to bomb the oil, because we don’t want to hurt — pollute the atmosphere. Can you imagine General Douglas MacArthur or General Patton saying we can’t bomb because we’re gonna hurt the atmosphere?

You have to knock the hell out of the oil. You have to take the oil. And you also have back channels of banking. You have people that you think are our great allies, our friends, in the Middle East, that are paying tremendous numbers of — tremendous amounts of money to ISIS.

So we have to stop those circuits. Nobody knows banking better than I do. They have back circuits, back channels. Tremendous amounts of money is coming in through the banking system. So between the oil and the banking, you will dry them up. But it should have been done four years ago, not now.

You have to go in — first of all, when you take away their money, when you take away their wealth, that’ll very much weaken — and it will happen fairly fast.

They’ll last for about a year, based on all of the wealth they’ve accumulated. But when you stop the banking channels and when you stop the oil and take the oil — not just bomb it, take it — when you do that, it’s going to dry up very quickly.
They’re going to become a very weakened power, quickly. Thank you.

Well, I’ll tell you what. In the Middle East, we have people chopping the heads off Christians, we have people chopping the heads off many other people. We have things that we have never seen before — as a group, we have never seen before, what’s happening right now.

The medieval times — I mean, we studied medieval times — not since medieval times have people seen what’s going on. I would bring back waterboarding, and I’d bring back a hell of a lot worse than waterboarding.

No, a good deal maker will make great deals, but we’ll do it the way our founders thought it should be done. People get together, they make deals. Ronald Reagan did it with Tip O’Neill very successfully; you didn’t hear so much about executive orders, if you heard about it at all. You have to be able to get a consensus.

Now, the real person — like it was mentioned about the deal with Iran, how bad a deal is that? It doesn’t get any more amateurish than that. A good deal maker would never make a deal like that. With Congress, you have to get everybody in a room, and you have to get them to agree. But, you have to get them to agree to what you want, and that’s part of being a deal maker. You can’t leave the White House, go to Hawaii and play golf for three weeks and be a real deal maker. It doesn’t work that way. You have to get people in, grab them, hug them, kiss them, and get the deal done. But, it’s got to be the deal that you want.

Some?

The problem with executive authority for the president — it’s really bad news for this reason. Since he’s given up on working with Congress, he thinks he can impose anything he wants. He’s not a king. He’s a president. An executive order should be used, frankly, in consultation with the leadership in the — in the Congress.

I’ve done it in Ohio. I consult. I could use executive orders, but I don’t trump the legislature, because if you do, you aggravate them, you anger them, and then the long-term prospects get bleak. We have to solve problems in America by coming together, Republicans and Democrats, Americans first, party and ideology second — in the back seat of this country. That’s what we need to do.

And we can do it. And we can do it.

Yes. OK, good. It looked like he was looking right at me, right there.

I think that — I look at what’s going on, I look at all of the polls, I do very, very well against Hillary Clinton. I can tell you, I’m the last person that she wants to run against.

And I think you can see what we’ve done in terms of galvanizing. I’ve been all over the country. We’re — last night, I was in South Carolina, we had 12,000 people. It was set up in about four days. We have galvanized and we’ve created a movement. A lot of it has to do with — as an example, Josh’s question on drugs.

I’m the first person that said, “Build a wall.” But I mean, a real wall, not a toy wall like they have right now. A real wall. And you’ll solve lots of problems.

But we will galvanize the people of this country, and we will beat Hillary Clinton. Because — assuming that she runs, by the way, how she gets away with the e-mail stuff is hard to believe. So, I don’t know that she’s going to be running. But on the assumption she runs.

I mean, look. And speaking of that, if she runs, she’s running for one reason. She’s going to be able to run for one reason, and that’s because the Democrats are protecting her.
Because so many people have done so much less than her, and they were absolutely — their lives have been destroyed.
-
-But on the assumption they do protect her, I will win the election and we will win it by a lot. We will win it handily. We cannot have another four years of essentially Barack Obama.
-
-Well, there is a divide, but I have to say that the police are absolutely mistreated and misunderstood, and if there is an incident, whether it’s an incident done purposely — which is a horror, and you should really take very strong action — or if it is a mistake, it’s on your newscasts all night, all week, all month, and it never ends.
-
-The police in this country have done an unbelievable job of keeping law and order, and they’re afraid for their jobs, they’re afraid of the mistreatment they get, and I’m telling you that not only, me speaking, minorities all over the country, they respect the police of this country and we have to give them more respect.
-
-They can’t act. They can’t act. They’re afraid of losing their pension, their job. They don’t know what to do. And I deal with them all the time. We have to give great respect, far greater than we are right now, to our really fantastic police.
-
-Well, they do. And, you know, they sue. Everybody sues, right? They see excessive — I mean, they go out, they sue. We have so much litigation — I see the courts, I see what they’re doing. They sue, and you know what? We don’t want excessive force. But at what point — you know, either you’re going to have a police force that can do its job...
-
-I was just up in Manchester, I met with the police officers yesterday. Tremendous people. They love the area, they love the people, they love all the people. They want to do their job. And you’re going to have abuse and you’re going to have problems, and you’ve got to solve the problems and you have to weed out the problems. But the police in this country are absolutely amazing people.
-
-Well, I — I know Diane Foley very well. Her husband and — these are tremendous people. I spoke for them, I raised a lot of money for the foundation. I fully understand, James, one of — that was really the first that we saw, really visually saw — it was so horrible.
-
-And I will tell you, though, with all of that being said, you cannot negotiate this way with terrorists. If you do, you are going to have many, many more James Foleys.
-
-James Foley was a great young man. His parents are incredible people. They’ve done such a good job, since his — since his death. But you just cannot negotiate that way with terrorists, or you’re gonna have so many other James Foleys.
-
-And one thing on the vets — during the last debate, I raised $6 million for the vets, and I will tell you something.
-
-Carolina.
-
-That’s because he got Ben Carson’s votes, by the way, but we won’t (inaudible). Our country that we love so much doesn’t win anymore. We don’t win with the military, we don’t win on the border. You look at New Hampshire with the tremendous problem we have with heroin. Number one thing I hear from the people of New Hampshire, who I love, and developed such relationships, we don’t win with healthcare. We don’t win with trade.
-
-You look at what other countries are doing to us. China. Everyone, they’re killing us on trade. If I’m elected president, we will win, and we will win, and we will win. Thank you, thank you very much.
-
-Thank you.
-I began this journey six months ago.
My total focus was on building up our military, building up our strength, building up our borders, making sure that China, Japan, Mexico, both at the border and in trade, no longer takes advantage of our country. -Certainly would never have made that horrible, disgusting, absolutely incompetent deal with Iran where they get $150 billion. They’re a terrorist nation. But I began it talking about other things. -And those things are things that I’m very good at and maybe that’s why I’m center stage. People saw it. People liked it. People respected it. -A month ago things changed. Radical Islamic terrorism came into effect even more so than it has been in the past. People like what I say. People respect what I say. And we’ve opened up a very big discussion that needed to be opened up. -Thank you very much. - -We are not talking about isolation. We’re talking about security. We’re not talking about religion. We’re talking about security. Our country is out of control. People are pouring across the southern border. I will build a wall. It will be a great wall. People will not come in unless they come in legally. Drugs will not pour through that wall. -As far as other people like in the migration, where they’re going, tens of thousands of people having cell phones with ISIS flags on them? I don’t think so, Wolf. They’re not coming to this country. And if I’m president and if Obama has brought some to this country, they are leaving. They’re going. They’re gone. - -Jeb doesn’t really believe I’m unhinged. He said that very simply because he has failed in this campaign. It’s been a total disaster. Nobody cares. And frankly, I’m the most solid person up here. I built a tremendous company and all I want to do is make America great again. -I don’t want our country to be taken away from us, and that’s what’s happening. The policies that we’ve suffered under other presidents have been a disaster for our country. We want to make America great again. And Jeb, in all fairness, he doesn’t believe that. - -Well, look, this is so easy to answer. ISIS is recruiting through the Internet. ISIS is using the Internet better than we are using the Internet, and it was our idea. What I wanted to do is I wanted to get our brilliant people from Silicon Valley and other places and figure out a way that ISIS cannot do what they’re doing. -You talk freedom of speech. You talk freedom of anything you want. I don’t want them using our Internet to take our young, impressionable youth and watching the media talking about how they’re masterminds — these are masterminds. They shouldn’t be using the word “mastermind.” These are thugs. These are terrible people in ISIS, not masterminds. And we have to change it from every standpoint. But we should be using our brilliant people, our most brilliant minds to figure a way that ISIS cannot use the Internet. And then on second, we should be able to penetrate the Internet and find out exactly where ISIS is and everything about ISIS. And we can do that if we use our good people. - -I would certainly be open to closing areas where we are at war with somebody. I sure as hell don’t want to let people that want to kill us and kill our nation use our Internet. Yes, sir, I am. - -We have to be much tougher. We have to be much stronger than we’ve been. We have people that know what is going on. You take a look at just the attack in California the other day. There were numerous people, including the mother, that knew what was going on. -They saw a pipe bomb sitting all over the floor. 
They saw ammunition all over the place. They knew exactly what was going on. -When you had the World Trade Center go, people were put into planes that were friends, family, girlfriends, and they were put into planes and they were sent back, for the most part, to Saudi Arabia. -They knew what was going on. They went home and they wanted to watch their boyfriends on television. I would be very, very firm with families. Frankly, that will make people think because they may not care much about their lives, but they do care, believe it or not, about their families’ lives. - -Look, the problem is we need toughness. Honestly, I think Jeb is a very nice person. He’s a very nice person. But we need tough people. We need toughness. We need intelligence and we need tough. -Jeb said when they come across the southern border they come as an act of love. - -Am I talking or are you talking, Jeb? - -You can go back. You’re not talking. You interrupted me. - -Are you going to apologize, Jeb? No. Am I allowed to finish? - -Excuse me, am I allowed to finish? - -I know you’re trying to build up your energy, Jeb, but it’s not working very well. - -Look, look, look. We need a toughness. We need strength. We’re not respected, you know, as a nation anymore. We don’t have that level of respect that we need. And if we don’t get it back fast, we’re just going to go weaker, weaker and just disintegrate. -We can’t allow that to happen. We need strength. We don’t have it. When Jeb comes out and he talks about the border, and I saw it and I was witness to it, and so was everyone else, and I was standing there, “they come across as an act of love,” he’s saying the same thing right now with radical Islam. -And we can’t have that in our country. It just won’t work. We need strength. - -With Jeb’s attitude, we will never be great again, that I can tell you. We will never be great again. - -So, they can kill us, but we can’t kill them? That’s what you’re saying. And as far as the Internet is concerned, we’re not talking about closing the Internet. I’m talking about parts of Syria, parts of Iraq, where ISIS is, spotting it. -Now, you could close it. What I like even better than that is getting our smartest and getting our best to infiltrate their Internet, so that we know exactly where they’re going, exactly where they’re going to be. I like that better. - -But we have to — who would be — I just can’t imagine somebody booing. These are people that want to kill us, folks, and you’re — you’re objecting to us infiltrating their conversations? I don’t think so. I don’t think so. - -In my opinion, we’ve spent $4 trillion trying to topple various people that frankly, if they were there and if we could’ve spent that $4 trillion in the United States to fix our roads, our bridges, and all of the other problems; our airports and all of the other problems we’ve had, we would’ve been a lot better off. I can tell you that right now. -We have done a tremendous disservice, not only to Middle East, we’ve done a tremendous disservice to humanity. The people that have been killed, the people that have wiped away, and for what? It’s not like we had victory. -It’s a mess. The Middle East is totally destabilized. A total and complete mess. I wish we had the $4 trillion or $5 trillion. I wish it were spent right here in the United States, on our schools, hospitals, roads, airports, and everything else that are all falling apart. - -Well, there’s nothing to respond to. Well, people feel differently. 
I mean, the fact is Benghazi was a disaster because of Libya, everything just fell into place. It could not have been worse.
-What do we have now? We have nothing. We’ve spent $3 trillion and probably much more – I have no idea what we’ve spent. Thousands and thousands of lives, we have nothing. Wounded warriors all over the place who I love, we have nothing for it.
-And by the way – and Ben said incorrectly – and I’m not saying this as a knock – he’s one of the finest men. You’re not going to find a finer man.
-But I’ve been talking about oil for three years. I’ve been saying, “take the oil, take the oil.” I didn’t say, “just bomb it,” I said, “take it and use it and distribute it so that the wounded warriors -” People, I’ve been saying this now for many years.
-
-Now, all of a sudden everybody’s saying, “take the oil.” It wasn’t so fashionable to take the oil six months ago. I’ve been saying it for years.
-
-I think Assad is a bad guy, a very bad guy, all right? Lots of people killed. I think we are backing people we have no idea who they are. The rebels, we call them the rebels, the patriotic rebels. We have no idea. A lot of people think, Hugh, that they are ISIS.
-We have to do one thing at a time. We can’t be fighting ISIS and fighting Assad. Assad is fighting ISIS. He is fighting ISIS. Russia is fighting now ISIS. And Iran is fighting ISIS.
-We have to do one thing at a time. We can’t go — and I watched Lindsey Graham, he said, I have been here for 10 years fighting. Well, he will be there with that thinking for another 50 years. He won’t be able to solve the problem.
-We have to get rid of ISIS first. After we get rid of ISIS, we’ll start thinking about it. But we can’t be fighting Assad. And when you’re fighting Assad, you are fighting Russia, you’re fighting — you’re fighting a lot of different groups.
-But we can’t be fighting everybody at one time.
-
-I think it’s very sad that CNN leads Jeb Bush, Governor Bush, down a road by starting off virtually all the questions, “Mr. Trump this, Mister” — I think it’s very sad. And, frankly, I watched — I think it’s very sad. And, frankly, I watched the first debate, and the first long number of questions were, “Mr. Trump said this, Mr. Trump said that. Mr. Trump” — these poor guys — although, I must tell you, Santorum, good guy. Governor Huckabee, good guy. They were very nice, and I respect them greatly. But I thought it was very unfair that virtually the entire early portion of the debate was Trump this, Trump that, in order to get ratings, I guess. In order to get ratings, I guess.
-
-I just think it’s very — excuse me.
-
-Excuse me. I think it’s very unprofessional.
-
-Well, I think it’s very unprofessional.
-
-OK, fine.
-
-This isn’t tough and easy. I wish it was always this easy as you, Jeb.
-
-Oh, yeah.
-
-Oh, I know. You’re a tough guy, Jeb. I know.
-
-You’re tough.
-
-Well, let’s see. I’m at 42, and you’re at 3. So, so far, I’m doing better.
-
-So far, I’m doing better. You know, you started off over here, Jeb. You’re moving over further and further. Pretty soon you’re going to be off the end.
-
-I believe I did.
-
-I have a very hardline position, we have a country or we don’t have a country. People that have come into our country illegally, they have to go. They have to come back in through a legal process.
-I want a strong border. I do want a wall. Walls do work, you just have to speak to the folks in Israel. Walls work if they’re properly constructed. I know how to build, believe me, I know how to build.
-I feel a very, very strong bind, and really I’m bound to this country, we either have a border or we don’t. People can come into the country, we welcome people to come but they have to come in legally. - -Well, first of all, I think we need somebody absolutely that we can trust, who is totally responsible; who really knows what he or she is doing. That is so powerful and so important. And one of the things that I’m frankly most proud of is that in 2003, 2004, I was totally against going into Iraq because you’re going to destabilize the Middle East. I called it. I called it very strongly. And it was very important. -But we have to be extremely vigilant and extremely careful when it comes to nuclear. Nuclear changes the whole ball game. Frankly, I would have said get out of Syria; get out — if we didn’t have the power of weaponry today. The power is so massive that we can’t just leave areas that 50 years ago or 75 years ago we wouldn’t care. It was hand-to-hand combat. -The biggest problem this world has today is not President Obama with global warming, which is inconceivable, this is what he’s saying. The biggest problem we have is nuclear — nuclear proliferation and having some maniac, having some madman go out and get a nuclear weapon. That’s in my opinion, that is the single biggest problem that our country faces right now. - -I think — I think, for me, nuclear is just the power, the devastation is very important to me. - -I did. - -Let me just say that I have gotten to know him over the last three or four days. He has a wonderful temperament. - -He’s just fine. Don’t worry about it. - -You better not attack. - -I really am. I’ll be honest, I really am. - -I mean, the people have been putting me. - -I really am. - -Let me just. - -I’ve gained great respect for the Republican leadership. I’ve gained great respect for many — and I’m going to even say — I mean, in different forms for the people on the dais, in different forms. - -In different forms. -But I have great respect for the people I have met through this process. I’ve never done this process before. I’ve never been a politician. I mean, for the last six months I’ve been a politician. -But I will tell you, I am totally committed to the Republican Party. I feel very honored to be the front runner. - -And I think I’ll do very well if I’m chosen. If I’m so fortunate to be chosen, I think I’ll do very well. -Polls have come out recently saying I would beat Hillary. I will do everything in my power to beat Hillary Clinton, I promise you. - -Our country doesn’t win anymore. We don’t win on trade. We don’t win on the military. We can’t defeat ISIS. We’re not taking care of our great people, the veterans. We’re not taking care of them. -We have to change our whole way, our health care system is a disaster. It’s going to implode in 2017, just like you’re sitting there. It doesn’t work. Nothing works in our country. If I’m elected president, we will win again. We will win a lot. And we’re going to have a great, great country, greater than ever before. -Thank you. - -I can’t be Neil. And the and the reason I can’t be is that we are a country that is being beaten on every front economically, militarily. There is nothing that we do now to win. We don’t win anymore. Our taxes are too high. I’ve come up with a tax plan that many, many people like very much. It’s going to be a tremendous plan. I think it’ll make our country and our economy very dynamic. 
-
-But, taxes too high, wages too high, we’re not going to be able to compete against the world. I hate to say it, but we have to leave it the way it is. People have to go out, they have to work really hard and have to get into that upper stratum. But we cannot do this if we are going to compete with the rest of the world. We just can’t do it.
-
-I would not do it.
-
-I was so happy yesterday when I saw that decision come down. That was an unbelievable decision.
-
-And we don’t have enough of those decisions coming down. He of the executive order, because nobody wants to listen to him, including the Democrats, so he just goes around signing executive orders.
-
-That was a great day. And, frankly, we have to stop illegal immigration. It’s hurting us economically. It’s hurting us from every standpoint. It’s causing tremendous difficulty with respect to drugs and what that does to many of our inner cities in particular.
-
-And it really is — was such an unbelievable moment because the courts have not been ruling in our favor. And it was a 2-1 decision. And it was a terrific thing that happened.
-
-And I will tell you, we are a country of laws. We need borders. We will have a wall. The wall will be built. The wall will be successful. And if you think walls don’t work, all you have to do is ask Israel. The wall works, believe me. Properly done. Believe me.
-
-You are going to have to bring people — you are going to have to send people out. Look, we’re a country.
-
-Maria, we’re a country of laws. We either have a country or we don’t have a country. We are a country of laws. Going to have to go out and they will come back but they are going to have to go out and hopefully they get back.
-
-But we have no choice if we’re going to run our country properly and if we’re going to be a country.
-
-All I can say is, you’re lucky in Ohio that you struck oil. That is for one thing.
-
-Moved them again beyond the border, they came back. Didn’t like it. Moved them way south. They never came back.
-
-No, it’s unfair.
-
-built an unbelievable company worth billions and billions of dollars. I don’t have to hear from this man, believe me. I don’t have to hear from him.
-
-We have millions of people right now on line trying to come into this country. Very, very unfair to the people that want to come into our country legally. They’ve gone through the process. They’re on line. They’re waiting. Very, very unfair to them. That I can tell you.
-
-Yes.
-
-No, I’m sorry. No, excuse me. I was there.
-
-We have to make our military bigger, better, stronger than ever before so that nobody messes with us, and in the long run, it’s going to save us. I agree with Marco, I agree with Ted, we have no choice. And, I can tell you this with certainty. We all have a different tax plan. Some I don’t totally agree with.
-
-One thing we understand, each one of those tax plans is better than the mess that we have right now.
-
-Yes.
-
-Yeah.
-
-It’s a horrible deal.
-
-The TPP is a horrible deal. It is a deal that is going to lead to nothing but trouble. It’s a deal that was designed for China to come in, as they always do, through the back door and totally take advantage of everyone. It’s 5,600 pages long. So complex that nobody’s read it. It’s like Obamacare; nobody ever read it. They passed it; nobody read it. And look at the mess we have right now. And it will be repealed.
-
-But this is one of the worst trade deals. And I would, yes, rather not have it.
With all of these countries, and all of the bad ones getting advantage and taking advantage of what the good ones would normally get, I’d rather make individual deals with individual countries. We will do much better.
-
-We lose a fortune on trade. The United States loses with everybody. We’re losing now over $500 billion in terms of imbalance with China, $75 billion a year imbalance with Japan. By the way, Mexico, $50 billion a year imbalance.
-
-So I must say, Gerard, I just think it’s a terrible deal. I love trade. I’m a free trader, 100 percent. But we need smart people making the deals, and we don’t have smart people making the deals.
-
-Yes. Well, the currency manipulation they don’t discuss in the agreement, which is a disaster. If you look at the way China and India and almost everybody takes advantage of the United States — China in particular, because they’re so good. It’s the number-one abuser of this country. And if you look at the way they take advantage, it’s through currency manipulation. It’s not even discussed in the almost 6,000-page agreement. It’s not even discussed.
-
-And as you understand, I mean, you understand very well from the Wall Street Journal, currency manipulation is the single great weapon people have. They don’t even discuss it in this agreement.
-
-So I say, it’s a very bad deal, should not be approved. If it is approved, it will just be more bad trade deals, more loss of jobs for our country. We are losing jobs like nobody’s ever lost jobs before. I want to bring jobs back into this country.
-
-Well, first of all, it’s not only Russia. We have problems with North Korea where they actually have nuclear weapons. You know, nobody talks about it, we talk about Iran, and that’s one of the worst deals ever made. One of the worst contracts ever signed, ever, in anything, and it’s a disgrace. But, we have somebody over there, a madman, who already has nuclear weapons. We don’t talk about that. That’s a problem.
-
-China is a problem, both economically in what they’re doing in the South China Sea, I mean, they are becoming a very, very major force. So, we have more than just Russia. But, as far as the Ukraine is concerned, and you could Syria — as far as Syria, I like — if Putin wants to go in, and I got to know him very well because we were both on 60 Minutes, we were stablemates, and we did very well that night.
-
-But, you know that.
-
-But, if Putin wants to go and knock the hell out of ISIS, I am all for it, 100%, and I can’t understand how anybody would be against it.
-
-They blew up — hold it.
-
-They blew up, wait a minute.
-
-They blew up a Russian airplane. He cannot be in love with these people. He’s going in, and we can go in, and everybody should go in. As far as the Ukraine is concerned, we have a group of people, and a group of countries, including Germany — tremendous economic behemoth — why are we always doing the work?
-
-We are — I’m all for protecting Ukraine and working — but, we have countries that are surrounding the Ukraine that aren’t doing anything. They say, “Keep going, keep going, you dummies, keep going. Protect us.”
-
-And we have to get smart. We can’t continue to be the policeman of the world. We are $19 trillion dollars, we have a country that’s going to hell, we have an infrastructure that’s falling apart. Our roads, our bridges, our schools, our airports, and we have to start investing money in our country.
-
-Assad is a bad guy, but we have no idea who the so-called rebels — I read about the rebels, nobody even knows who they are. I spoke to a general two weeks ago, he said — he was very up on exactly what we’re talking about. He said, “You know, Mr. Trump? We’re giving hundreds of millions of dollars of equipment to these people, we have no idea who they are.”
-
-So, I don’t like Assad. Who’s going to like Assad? But, we have no idea who these people are, and what they’re going to be, and what they’re going to represent. They may be far worse than Assad. Look at Libya. Look at Iraq. Look at the mess we have after spending $2 trillion dollars, thousands of lives, wounded warriors all over the place — who I love, OK? All over.
-
-We have nothing. And, I said, keep the oil. And we should have kept the oil, believe me. We should have kept the oil. And, you know what? We should have given the oil. We should’ve given big chunks to the people that lost their arms, their legs, and their families, and their sons, and daughters, because right now, you know who has a lot of that oil? Iran, and ISIS.
-
-Why does she keep interrupting everybody?
-
-Terrible.
-
-We are not.
-
-No, no, no.
-
-Well, what’s happening right now, Neil, is something that has not been a subject of conversation by politicians. As primarily the only politician, I guess other than Carly on the stage, they haven’t talked about a corporate inversion. A corporate inversion — companies are leaving. You know, we used to leave New York to go to Florida. We got better taxes, we got, maybe, something else.
-
-Now, they’re leaving the United States to go to other countries. They have trillions of dollars in those other countries. They’re going for two reasons, they can’t get their money back in. It’s something where the Democrats and the Republicans both agree, it’s the only thing I can think of. They both agree, let the money come back in.
-
-Three and a half years, they still can’t make a deal. They can’t get the money in. It’s probably two and a half trillion, but, I think it’s much more than that. All of that money could become — could come right in and be used to rebuild our country, and investments in our country. They can’t do it. What we have to do, and what I’ve done, is made the tax rate — and one of the reasons they don’t: the taxes are so obnoxious, they can’t do it.
-
-Where, I made it a 10% number, as you know. I’ve been very highly praised for it. A lot of money’s going to come back in, we’re going to get rid of the bureaucratic problems, and roadblocks, because that’s also a problem. And, we’re going to have all of this money pour back into the United States. It’s going to be used to build businesses, for jobs, and everything else.
-
-And, as I say, my expression is, let’s make America great again.
-
-Thank you. Over the years, I’ve created tens of thousands of jobs and a great company. It’s a company I’m very proud of. Some of the most iconic assets anywhere in the world. And I will tell you, I don’t have to give you a website because I’m self-funding my campaign. I’m putting up my own money.
-
-I want to do something really special. I want to make our country greater than it’s ever been. I think we have that potential. We cannot lose this election. We cannot let Hillary Clinton, who is the worst secretary of state in the history of our country, win this election.
-
-We will fight. We will win. And we truly will make this even more special. We have to make it better than ever before.
And I will tell you, the United States can actually be better than ever before. Thank you.
-
-I think maybe my greatest weakness is that I trust people too much. I’m too trusting. And when they let me down, if they let me down, I never forgive. I find it very, very hard to forgive people that deceived me. So I don’t know if you would call that a weakness, but my wife said “let up.”
-
-Right.
-
-Right.
-
-That’s right.
-
-No, not a comic book, and it’s not a very nicely asked question the way you say that.
-Larry Kudlow is an example, who I have a lot of respect for, who loves my tax plan. We’re reducing taxes to 15 percent. We’re bringing corporate taxes down, bringing money back in, corporate inversions. We have $2.5 trillion outside of the United States which we want to bring back in.
-As far as the wall is concerned, we’re going to build a wall. We’re going to create a border. We’re going to let people in, but they’re going to come in legally. They’re going to come in legally. And it’s something that can be done, and I get questioned about that. They built the Great Wall of China. That’s 13,000 miles. Here, we actually need 1,000 because we have natural barriers. So we need 1,000.
-
-We can do a wall. We’re going to have a big, fat beautiful door right in the middle of the wall. We’re going to have people come in, but they’re coming in legally. And Mexico’s going to pay for the wall because Mexico — I love the Mexican people; I respect the Mexican leaders — but the leaders are much sharper, smarter and more cunning than our leaders.
-And just to finish, people say, how will you get Mexico to pay? A politician other than the people in the states — I don’t want to — a politician cannot get them to pay. I can. We lose, we have a trade imbalance, excuse me, John, of $50 billion.
-
-believe me the world is peanuts by comparison.
-
-Right. Dynamically.
-
-Then you have to get rid of Larry Kudlow, who sits on your panel, who’s a great guy, who came out the other day and said, I love Trump’s tax plan.
-
-First of all, John got lucky with a thing called fracking, OK? He hit oil. He got lucky with fracking. Believe me, that is why Ohio is doing well. Number — and that is important for you to know.
-Number two, this was the man that was a managing general partner at Lehman Brothers when it went down the tubes and almost took every one of us with it, including Ben and myself, because I was there and I watched what happened.
-And Lehman Brothers started it all. He was on the board. And he was a managing general partner.
-And just thirdly, he was so nice. He was such a nice guy. And he said, oh, I’m never going to attack. But then his poll numbers tanked. He has got — that is why he is on the end.
-
-And he got nasty. And he got nasty. So you know what? You can have him.
-
-Well, first of all, like many other very big businessmen, I could name them here, but I’m not going to do that for a lot of obvious reasons, but the biggest, and almost all of them, they’ve all used the chapter laws, the bankruptcy laws to their own benefit.
-Before this, I was a very successful person as a developer and as a businessman. Atlantic City has gone bad. I mean, Chris will know about that. I’m not blaming Chris, by the way, but he will know about that. Caesar’s — excuse me — Caesar’s, the Rolls-Royce, as you know, is in bankruptcy. Almost every hotel in Atlantic City has either been in bankruptcy or will be in bankruptcy — the biggest.
-
-But also the biggest people (ph) — now I’ve used that to my advantage as a businessman, for my family, for myself. I never filed for bankruptcy. But many, many people did. What happened with Atlantic City is very, very disgraceful.
-Now hundreds of companies I’ve opened. I’ve used it three times, maybe four times. Came out great. But I guess I’m supposed to come out great. That is what I could do for the country. We owe $19 trillion, boy am I good at solving debt problems. Nobody can solve it like me.
-But I will tell you this, Atlantic City, you’re using that, hundreds of companies that I have opened have thrived. I built a net worth of way over $10 billion, and I have done it four times out of hundreds. And I’m glad I did it.
-I used the laws of the country to my benefit, I’m sorry.
-
-Thank you.
-
-I was not at all critical of him. I was not at all. In fact, frankly, he’s complaining about the fact that we’re losing some of the most talented people. They go to Harvard. They go to Yale. They go to Princeton. They come from another country and they’re immediately sent out.
-I am all in favor of keeping these talented people here so they can go to work in Silicon Valley.
-
-So I have nothing at all critical of him.
-
-Probably, I don’t know — you people write the stuff. I don’t know where you.
-
-And if I could say just one thing. I am the only person in either campaign that’s self-funding. I’m putting up 100 percent of my own money. And right now, I will be putting up a tremendous — so far, I’ve put up less than anybody and I have the best results. Wouldn’t that be nice if the country could do that?
-But I will be putting — I will be putting up, you know, tremendous amounts of money. Super PACs are a disaster. They’re a scam. They cause dishonesty. And you better get rid of them because they are causing a lot of bad decisions to be made by some very good people. And I’m not blaming these folks — well, I guess I could.
-
-Very good people are making very bad decisions right now. And if anything comes out of this whole thing with some of these nasty and ridiculous questions, I will tell you, you better get rid of the super PACs because they’re causing a big problem with this country, not only in dishonesty and what’s going on, but also in a lot of bad decisions that have been made for the benefit of lobbyists and special interests.
-
-I never said that. I never said that.
-
-You’ve got another gentleman in Florida, who happens to be a very nice guy, but not.
-
-he’s really doing some bad
-
-I never said that. I never said that.
-
-He has got another gentleman in Florida, who happens to be a very nice guy, but not.
-
-Everybody is really doing some bad fact.
-
-I’m in favor of people coming into this country legally. And you know what? They can have it any way you want. You can call it visas, you can call it work permits, you can call it anything you want. I’ve created tens of thousands of jobs, and in all due respect — and actually some of these folks I really like a lot — but I’m the only one that can say that. I have created tens of thousands of jobs, and I’ll be creating many millions of jobs if I’m given — if I’m given the opportunity to be president.
-As far as Mark is concerned, as far as the visas are concerned, if we need people, they have — it’s fine. They have to come into this country legally. We have a country of borders. We have a country of laws. We have to obey the laws. It’s fine if they come in, but they have to come in legally.
-
-Yes.
-
-Or somebody else. Right.
-
-Yes, I might feel more comfortable. I would say that I would and I have a permit, which is very unusual in New York — a permit to carry. And I do carry on occasion, sometimes a lot. But I like to be unpredictable so that people don’t know exactly.
-
-By the way, unlike our country where we’re totally predictable and the enemy, whether it’s ISIS or anybody else, they know exactly what we’re doing because we have the wrong leadership.
-
-I would change them. I would change them.
-
-Such a nasty — such a nasty question, but thank you, Governor.
-
-Yes, it’s very simple. We’re going to make a really dynamic economy from what we have right now, which is not at all dynamic. We’re going to bring jobs back from Japan, we’re going to bring jobs back from China, we’re going to bring, frankly, jobs back from Mexico where, as you probably saw, Nabisco is leaving Chicago with one of their biggest plants, and they’re moving it to Mexico.
-We’re going to bring jobs and manufacturing back. We’re going to cut costs. We’re going to save Social Security, and we’re going to save Medicare.
-
-Our country doesn’t win anymore. We used to win, we don’t win anymore. We lose on trade. We lose with ISIS. We lose with one of the worst deals I’ve ever seen negotiated of any kind, that’s our recent catastrophe with Iran. We don’t win.
-Let me give you one quick example. These folks, CNBC, they had it down at three, three and a half hours. I just read today in the New York Times, $250,000 for a 30-second ad. I went out and said, it’s ridiculous. Nobody — I could stand up here all night. Nobody wants to watch three and a half, or three hours. It was a big sacrifice, and I have to hand it to Ben.
-We called Ben, he was with me 100%. We called in, we said, that’s it. We’re not doing it. They lost a lot of money, everybody said it couldn’t be done. Everybody said it was going to be three hours, three and a half, including them, and in about two minutes I renegotiated it so we can get the hell out of here. Not bad.
-
-And, I’ll do that with the country. We will make America great again. And, thank you everybody. Just for the record.
-
-That’s not right. That is absolutely not right. You know that. That is not right.
-
-I'm Donald Trump. I wrote "The Art of the Deal". I say not in a braggadocious way, I've made billions and billions of dollars dealing with people all over the world, and I want to put whatever that talent is to work for this country so we have great trade deals, we make our country rich again, we make it great again. We build our military, we take care of our vets, we get rid of Obamacare, and we have a great life altogether.
-
-Thank you. Thank you.
-
-Well, first of all, Rand Paul shouldn't even be on this stage. He's number 11, he's got 1 percent in the polls, and how he got up here, there's far too many people anyway.
-
-As far as temperament -- and we all know that -- as far as temperament, I think I have a great temperament. I built a phenomenal business with incredible, iconic assets, one of the really truly great real-estate businesses.
-
-And I may be an entertainer, because I've had tremendous success with number-one bestsellers all over the place, with "The Apprentice" and everything else I've done.
-
-But I will tell you this: What I am far and away greater than an entertainer is a businessman, and that's the kind of mindset this country needs to bring it back, because we owe $19 trillion right now, $19 trillion, and you need this kind of thinking to bring our country back.
-
-And believe me, my temperament is very good, very calm. But we will be respected outside of this country. We are not respected now.
-
-I never attacked him on his look, and believe me, there's plenty of subject matter right there.
-
-That I can tell you.
-
-I've actually been in politics all my life, although I've been on that side as opposed to this side. I'm now a politician for about three months. Obviously, I'm doing pretty well. I'm number one in every polls (sic) by a lot.
-
-But the qualification is that I've dealt with people all over the world, been successful all over the world. Everything I've done virtually has been a tremendous success.
-
-When markets changed, when things turned, I heard Governor Pataki, who, by the way, was a failed governor in New York, a very seriously failed -- he wouldn't be elected dog catcher right now. I heard what he had to say.
-
-And I will tell you this: Atlantic City, I've made a tremendous amount of money in Atlantic City. I left seven years ago, I've gotten great credit for my timing, and that's what I'm all about.
-
-I'm a businessman, did really well, really well, and Jeb, what I want to do is put that ability into this country to make our country rich again. And I can do that, and I'm not sure that anybody else in the group will be able to do that.
-
-But I have to say.
-
-Well, in Wisconsin.
-
-Excuse me.
-
-In Wisconsin, you're losing $2.2 billion right now.
-
-I would do so much better than that.
-
-No.
-
-I'm using facts.
-
-Every major business leader has used the -- I never went bankrupt, by the way, as you know, everybody knows. But we -- hundreds of companies, hundreds of deals, I've used into bankruptcy. That's what's wrong with politicians in Washington right now. They think we can take a country into bankruptcy.
-
-Every major business leader has used the -- I never went bankrupt, by the way, as you know, everybody knows. But -- hundreds of companies, hundreds of deals, I used the law four times and made a tremendous thing. I'm in business. I did a very good job.
-
-But I will say this, and people are very, very impressed with what I've done, the business people. But when the folks of Iowa found out the true facts of the job that you've done in Wisconsin, all of a sudden you, tubed (ph), he was No. 1 and now he's No. 6 or 7 in the polls.
-
-So, look, we brought it out, you were supposed to make a billion dollars in the state. You lost 2.2 -- you have right now, a huge budget deficit. That's not a Democratic point. That's a point. That's a fact. And when the people of Iowa found that out, I went to No. 1 and you went down the tubes.
-
-I didn't.
-
-Totally false.
-
-I would have gotten it.
-
-I promise I would have gotten it.
-
-I promise if I wanted it, I would have gotten it.
-
-I know my people.
-
-I know my people.
-
-No. I just will tell you that, you know, Jeb made the statement. I'm not only referring to him. I -- a lot of money was raised by a lot of different people that are standing up here. And the donors, the special interests, the lobbyists have very strong power over these people.
-
-I'm spending all of my money, I'm not spending -- I'm not getting any -- I turned down -- I turn down so much, I could have right now from special interests and donors, I could have double and triple what he's got. I've turned it down. I turned down last week $5 million from somebody.
-
-So I will tell you I understand the game, I've been on the other side all of my life. And they have a lot of control over our politicians.
And I don't say that favorably, and I'm not sure if there's another system, but I say this. I am not accepting any money from anybody. Nobody has control of me other than the people of this country. I'm going to do the right thing. - -That's true. That's true. - -I was -- excuse me, Jeb. - -I was a businessman, I got along with Clinton, I got along with everybody. That was my job, to get along with people. - -I didn't want to -- excuse me. One second. - -OK, more energy tonight. I like that. - -I didn't want -- it was my obligation as a businessman to my family, to my company, to my employees, to get along with all politicians. I get along with all of them, and I did a damn good job in doing it. Go ahead. - -Got along with everybody. - -Wrong. - -Don't make things up. Jeb, don't make things up. Come on. - -Don't make things up. - -So, number one, they have to respect you. He has absolutely no respect for President Obama. Zero. - -Syria's a mess. You look at what's going on with ISIS in there, now think of this: we're fighting ISIS. ISIS wants to fight Syria. Why are we fighting ISIS in Syria? Let them fight each other and pick up the remnants. - -I would talk to him. I would get along with him. I believe -- and I may be wrong, in which case I'd probably have to take a different path, but I would get along with a lot of the world leaders that this country is not getting along with. - -We don't get along with China. We don't get along with the heads of Mexico. We don't get along with anybody, and yet, at the same time, they rip us left and right. They take advantage of us economically and every other way. We get along with nobody. - -I will get along -- I think -- with Putin, and I will get along with others, and we will have a much more stable -- stable world. - -I believe that I will get along -- we will do -- between that, Ukraine, all of the other problems, we won't have the kind of problems that our country has right now with Russia and many other nations. TAPPER: Senator Rubio, you've taken a very different approach to the -- the question of Russia. You've called Vladimir Putin a, quote, "gangster." - -Why would President Rubio's approach be more effective than President Trump's? - -I wouldn't have drawn the line, but once he drew it, he had no choice but to go across. They do bear some responsibility, but I think he probably didn't do it, not for that reason. - -Somehow, he just doesn't have courage. There is something missing from our president. Had he crossed the line and really gone in with force, done something to Assad -- if he had gone in with tremendous force, you wouldn't have millions of people displaced all over the world. - -They had a responsibility, absolutely. I think we have three of them here. - -I think they had a responsibility, yes. - -I think it will haunt him. I think it's a terrible. I think it's going to haunt him absolutely. He came back later and he said he misspoke. There was no question because I heard when he said the statement. I was watching and he said the statement. - -And I said, wow, I can't believe it. I will take care of women. I respect women. I will take care of women. - -One thing we will say and I would like to get back to the Iran situation. We're talking about Iran. The agreement was terrible. It was incompetent. I've never seen anything like it. One of the worst contracts of any kind I've ever seen. 
-
-And nobody ever mentions North Korea where you have this maniac sitting there and he actually has nuclear weapons and somebody better start thinking about North Korea and perhaps a couple of other places. But certainly North Korea.
-
-And Ted and I have spoken. We've -- a lot of us have spoken. We're talking about Iran. They are bad actors, bad things are going to happen. But in the meantime, you have somebody right now in North Korea who has got nuclear weapons and who is saying almost every other week, I'm ready to use them. And we don't even mention it.
-
-So why didn't you say it? Why didn't you say it?
-
-I know, but why did you say it? I heard it myself. Why did you say it?
-
-You said you're going to cut funding for women's health. You said it.
-
-You said it.
-
-I think she's got a beautiful face, and I think she's a beautiful woman.
-
-Correct. First of all, I want to build a wall, a wall that works. So important, and it's a big part of it.
-
-Second of all, we have a lot of really bad dudes in this country from outside, and I think Chris knows that, maybe as well as anybody.
-
-They go, if I get elected, first day they're gone. Gangs all over the place. Chicago, Baltimore, no matter where you look.
-
-We have a country based on laws. I will make sure that those laws are adhered to. These are illegal immigrants. I don't think you'd even be asking this question if I didn't run because when I ran, and I brought this up, my opening remarks at Trump Tower, I took heat like nobody has taken heat in a long time. And then they found out with the killing of Katie, from San Francisco, and so many other crimes, they found out that I was right.
-
-And, most people, many people, apologized to me. I don't think you'd even be talking about illegal immigration if it weren't for me. So, we have a country of laws, they're going to go out, and they'll come back if they deserve to come back. If they've had a bad record, if they've been arrested, if they've been in jail, they're never coming back. We're going to have a country again. Right now, we don't have a country, we don't have a border, and we're going to do something about it, and it can be done with proper management, and it can be done with heart.
-
-By the way, I agree with -- with what Chris is saying, but, I will say this. Illegal immigration is costing us more than $200 billion dollars a year just to maintain what we have.
-
-Correct.
-
-Well, I have to tell you, I hear phenomenal things. I hear your wife is a lovely woman.
-
-I don't know her, and this is a total mischaracterization.
-
-Good.
-
-No, I won't do that, because I've said nothing wrong.
-
-But I do hear she's a lovely woman.
-
-Jeb said that they come into our country as an act of love.
-
-With all of the problems that we have, in so many instances -- we have wonderful people coming in. But with all of the problems -- this is not an act of love. He's weak on immigration -- by the way, in favor of Common Core, which is also a disaster, but weak on immigration.
-
-He doesn't get my vote.
-
-Not with this intensity.
-
-As I said, we are spending $200 billion -- we are spending $200 billion a year on maintaining what we have. We will move them out. The great ones will come back, the good ones will come back.
-
-They'll be expedited, they'll be back, they'll come back legally. We'll have a country -- they'll come back, legally.
-
-Well, I think it's wonderful and all, but I did it a little bit half-heartedly, but I do mean it to a large extent.
-
-We have a country, where, to assimilate, you have to speak English. And I think that where he was, and the way it came out didn't sound right to me. We have to have assimilation -- to have a country, we have to have assimilation.
-
-I'm not the first one to say this, Dana. We've had many people over the years, for many, many years, saying the same thing. This is a country where we speak English, not Spanish.
-
-This is a reporter, not a high school kid.
-
-Well, first of all, the -- the 14th Amendment says very, very clearly to a lot of great legal scholars -- not television scholars, but legal scholars -- that it is wrong. It can be corrected with an act of Congress, probably doesn't even need that.
-
-A woman gets pregnant. She's nine months, she walks across the border, she has the baby in the United States, and we take care of the baby for 85 years. I don't think so.
-
-And by the way, Mexico and almost every other country anywhere in the world doesn't have that. We're the only ones dumb enough, stupid enough to have it. And people -- and by the way, this is not just with respect to Mexico. They are coming from Asia to have babies here, and all of a sudden, we have to take care of the babies for the life of the baby.
-
-The 14th Amendment, it reads properly, you can go and -- it's probably going to have to be checked -- go through a process of court, probably ends up at the Supreme Court, but there are a lot of great legal scholars that say that is not correct.
-
-And in my opinion, it makes absolutely no -- we're the only -- one of the only countries, we're going to take care of those babies for 70, 75, 80, 90 years? I don't think so.
-
-I agree 100 percent, by the way, with Carly on the fact that the Democrats do not want to solve this problem, for the obvious reasons, but they do not.
-
-But I believe that a reading of the 14th Amendment allows you to have an interpretation where this is not legal and where it can't be done. I've seen both sides, but some of the greatest scholars agree with me, without having to go through Congress.
-
-If you do go through Congress, you can absolutely solve the problem.
-
-That's true, sure.
-
-Let me just explain. The head of the Yale Business School, Jeffrey Sonnenfeld, wrote a paper recently, one of the worst tenures for a CEO that he has ever seen, ranked one of the top 20 in the history of business. The company is a disaster and continues to be a disaster. They still haven't recovered. In fact, today, on the front page of the Wall Street Journal, they fired another 25 or 30,000 people saying we still haven't recovered from the catastrophe.
-
-When Carly says the revenues went up, that's because she bought Compaq, it was a terrible deal, and it really led to the destruction of the company.
-
-Now one other company before that was Lucent. Carly was at Lucent before that. And Lucent turned out to be a catastrophe also. So I only say this. She can't run any of my companies. That I can tell you.
-
-I never filed for bankruptcy.
-
-I'll tell you why; it's very simple.
-
-I've made over $10 billion. I had a casino company -- Caesars just filed for bankruptcy. Chris will tell you -- it's not Chris' fault either -- but almost everybody in Atlantic City is either in trouble or filed for -- maybe I'll blame Chris.
-
-But Atlantic City is a disaster.
-
-Wait a minute, Carly. Wait. I let you speak. Atlantic City is a disaster, and I did great in Atlantic City. I knew when to get out. My timing was great. And I got a lot of credit for it.
-
-Many of the great business people that you know -- and Carl Icahn is going to work with me on making great deals for this country. But whether it's Carl or so many others that we read about all the time. They have used the laws of the land.
-
-Well, I'd like to respond, I'd like to respond.
-
-Well, I think the thing about the flat tax, I know it very well. What I don't like is that if you make $200 million a year, you pay 10 percent, you're paying very little relatively to somebody that's making $50,000 a year, and has to hire H&R Block to do the -- because it's so complicated.
-
-One thing I'll say to Ben is that we've had a graduated tax system for many years, so it's not a socialistic thing. What I'd like to do, and I'll be putting in the plan in about two weeks, and I think people are going to like it, it's a major reduction in taxes. It's a major reduction for the middle class. The hedge fund guys won't like me as much as they like me right now. I know them all, but they'll pay more.
-
-I know people that are making a tremendous amount of money and paying virtually no tax, and I think it's unfair.
-
-Well, I heard Hugh Hewitt, a nice man, he apologized because he actually said that we had a misunderstanding. And he said today that Donald Trump is maybe the best interview there is anywhere that he has ever done.
-
-Now unless he was just saying that on CNN to be nice, but he did say that.
-
-And we had a legitimate misunderstanding in terms of his pronunciation of a word.
-
-But I would say just.
-
-Well, I think it was. And he actually said that. Did you say that?
-
-OK. So I will say this, though, Hugh was giving me name after name, Arab name, Arab name, and there are few people anywhere, anywhere that would have known those names. I think he was reading them off a sheet.
-
-And frankly I will have -- and I told him, I will have the finest team that anybody has put together and we will solve a lot of problems.
-
-You know, right now they know a lot and look at what is happening. The world is blowing up around us. We will have great teams and great people.
-
-I hope that answers your question. I mean, you are in the Senate, but I hope that answers your question.
-
-No, I don't think he's suggesting that at all.
-
-I don't think he's suggesting that at all.
-
-Well, you have to understand, I am not sitting in the United States Senate with, by the way, the worst voting record there is today. Number one. I am not sitting in the United States Senate. I'm a businessman doing business transactions.
-
-I am doing business transactions. I will know more about this -- and, as you said, that was very acceptable, and when you listen to that whole interview, it's a great interview, you said it, I didn't. Well, now I did.
-
-Listen, just one second. Just one second.
-
-I will know more about the problems of this world by the time I sit, and you look at what's going on in this world right now by people that supposedly know, this world is a mess. TAPPER: Senator Rubio, he did invoke your absentee record in the Senate.
-
-I'm -- and I'm meeting with people that are terrific people, but I have to say something because it's about judgment.
-
-I am the only person on this dais -- the only person -- that fought very, very hard against us (ph), and I wasn't a sitting politician going into Iraq, because I said going into Iraq -- that was in 2003, you can check it out, check out -- I'll give you 25 different stories.
- -In fact, a delegation was sent to my office to see me because I was so vocal about it. I'm a very militaristic person, but you have to know when to use the military. I'm the only person up here that fought against going into Iraq. - -Just excuse me, one second, Rand. - -If you don't mind, Rand -- you know, you are on last -- you do have your 1 percent. - -I would like -- and I think it's very important. I think it's important, because it's about judgment. It's about judgment. - -I didn't want to go into Iraq, and I fought it, because what I said -- what I said was you're going to -- you're going to destabilize the Middle East, and that's what happened. - -If you think about it, your brother -- and your brother's administration gave us Barack Obama, because it was such a disaster, those last three months, that Abraham Lincoln couldn't have been elected. - -I don't know. You feel safe right now? I don't feel so safe. - -Or the collapse of the economy. - -Speaking for myself, I'm OK with it. I think there's a certain truth to it. I know people that, frankly, it has no impact on their life whatsoever. There are many people. - -I would almost say leave it up to them, but I would be willing to check it off, and say I will not get Social Security. - -As a policy, I would almost leave it up to the people. Don't forget they pay in and they pay in, and maybe they do well, and maybe some people want it. But the fact is that there are people that truly don't need it, and there are many people that do need it very, very badly. And I would be willing to write mine off 100 percent, Dana. BASH: So is a voluntary program the way to get the Social Security system solvent again like that. - -Well, I -- I -- I'd like to respond. - -I'd like to respond. - -Autism has become an epidemic. Twenty-five years ago, 35 years ago, you look at the statistics, not even close. It has gotten totally out of control. - -I am totally in favor of vaccines. But I want smaller doses over a longer period of time. Because you take a baby in -- and I've seen it -- and I've seen it, and I had my children taken care of over a long period of time, over a two or three year period of time. - -Same exact amount, but you take this little beautiful baby, and you pump -- I mean, it looks just like it's meant for a horse, not for a child, and we've had so many instances, people that work for me. - -Just the other day, two years old, two and a half years old, a child, a beautiful child went to have the vaccine, and came back, and a week later got a tremendous fever, got very, very sick, now is autistic. - -I only say it's not -- I'm in favor of vaccines, do them over a longer period of time, same amount. - -But just in -- in little sections. - -I think -- and I think you're going to have -- I think you're going to see a big impact on autism. - -And that's all I'm saying, Jake. That's all I'm saying. - -Well, because she's been sitting for three hours, I think my daughter, Ivanka, who's right here. - -Other than that we'll go with Rosa Parks. I like that. - -Humble. - -If I become president, we will do something really special. We will make this country greater than ever before. We'll have more jobs. We'll have more of everything. - -We were discussing disease, we were discussing all sorts of things tonight, many of which will just be words, it will just pass on. I don't want to say politicians, all talk, no action. But a lot of what we talked about is words and it will be forgotten very quickly. 
- -If I'm president, many of the things that we discussed tonight will not be forgotten. We'll find solutions. And the world will respect us. They will respect us like never before. And it will be actually a friendlier world. - -And I have to say, it is a great honor to be here tonight. - -I fully understand. - -I fully understand. - -I cannot say. I have to respect the person that, if it’s not me, the person that wins, if I do win, and I’m leading by quite a bit, that’s what I want to do. I can totally make that pledge. If I’m the nominee, I will pledge I will not run as an independent. But — and I am discussing it with everybody, but I’m, you know, talking about a lot of leverage. We want to win, and we will win. But I want to win as the Republican. I want to run as the Republican nominee. - -Well, I’ve given him plenty of money. - -I will not make the pledge at this time. - -Only Rosie O’Donnell. - -Thank you. - -Yes, I’m sure it was. - -I think the big problem this country has is being politically correct. - -I’ve been challenged by so many people, and I don’t frankly have time for total political correctness. And to be honest with you, this country doesn’t have time either. This country is in big trouble. We don’t win anymore. We lose to China. We lose to Mexico both in trade and at the border. We lose to everybody. - -And frankly, what I say, and oftentimes it’s fun, it’s kidding. We have a good time. What I say is what I say. And honestly Megyn, if you don’t like it, I’m sorry. I’ve been very nice to you, although I could probably maybe not be, based on the way you have treated me. But I wouldn’t do that. - -But you know what, we — we need strength, we need energy, we need quickness and we need brain in this country to turn it around. That, I can tell you right now. - -So, if it weren’t for me, you wouldn’t even be talking about illegal immigration, Chris. You wouldn’t even be talking about it. - -This was not a subject that was on anybody’s mind until I brought it up at my announcement. And I said, Mexico is sending. Except the reporters, because they’re a very dishonest lot, generally speaking, in the world of politics, they didn’t cover my statement the way I said it. - -The fact is, since then, many killings, murders, crime, drugs pouring across the border, our money going out and the drugs coming in. And I said we need to build a wall, and it has to be built quickly. - -And I don’t mind having a big beautiful door in that wall so that people can come into this country legally. But we need, Jeb, to build a wall, we need to keep illegals out. - -Border Patrol, I was at the border last week. Border Patrol, people that I deal with, that I talk to, they say this is what’s happening. Because our leaders are stupid. Our politicians are stupid. - -And the Mexican government is much smarter, much sharper, much more cunning. And they send the bad ones over because they don’t want to pay for them. They don’t want to take care of them. - -Why should they when the stupid leaders of the United States will do it for them? And that’s what is happening whether you like it or not. - -A complete disaster, yes. - -Correct. - -First of all, I’d like to just go back to one. In July of 2004, I came out strongly against the war with Iraq, because it was going to destabilize the Middle East. And I’m the only one on this stage that knew that and had the vision to say it. And that’s exactly what happened. - -And the Middle East became totally destabilized. So I just want to say. 
- -As far as single payer, it works in Canada. It works incredibly well in Scotland. It could have worked in a different age, which is the age you’re talking about here. - -What I’d like to see is a private system without the artificial lines around every state. I have a big company with thousands and thousands of employees. And if I’m negotiating in New York or in New Jersey or in California, I have like one bidder. Nobody can bid. - -You know why? - -Because the insurance companies are making a fortune because they have control of the politicians, of course, with the exception of the politicians on this stage. - -But they have total control of the politicians. They’re making a fortune. - -Get rid of the artificial lines and you will have yourself great plans. And then we have to take care of the people that can’t take care of themselves. And I will do that through a different system. - -I’m not — I’m not are — I don’t think you heard me. You’re having a hard time tonight. - -You’d better believe it. - -If I ask them, if I need them, you know, most of the people on this stage I’ve given to, just so you understand, a lot of money. - -Many of them. - -Not much. - -Good. - -Sounds good. Sounds good to me, Governor. - -I will tell you that our system is broken. I gave to many people, before this, before two months ago, I was a businessman. I give to everybody. When they call, I give. - -And do you know what? - -When I need something from them two years later, three years later, I call them, they are there for me. - -And that’s a broken system. - -Well, I’ll tell you what, with Hillary Clinton, I said be at my wedding and she came to my wedding. - -You know why? - -She didn’t have a choice because I gave. I gave to a foundation that, frankly, that foundation is supposed to do good. I didn’t know her money would be used on private jets going all over the world. It was. - -Because I have used the laws of this country just like the greatest people that you read about every day in business have used the laws of this country, the chapter laws, to do a great job for my company, for myself, for my employees, for my family, et cetera. - -I have never gone bankrupt, by the way. I have never. - -But out of hundreds of deals. - -Excuse me. Excuse me. - -Excuse me, what am I saying? Out of hundreds of deals that I’ve done, hundreds, on four occasions I’ve taken advantage of the laws of this country, like other people. I’m not going to name their names because I’m not going to embarrass, but virtually every person that you read about on the front page of the business sections, they’ve used the law. - -The difference is, when somebody else uses those laws, nobody writes about it. When I use it, they say, “Trump, Trump, Trump.” The fact is, I built a net worth of more than $10 billion. I have a great, great company. I employ thousands of people. And I’m very proud of the job I did. - -Again Chris, hundreds and hundreds of deals. Four times, I’ve taken advantage of the laws. And frankly, so has everybody else in my position. - -Let me just tell you about the lenders. First of all, these lenders aren’t babies. These are total killers. These are not the nice, sweet little people that you think, OK? - -And by the way, this country right now owes $19 trillion. And they need somebody like me to straighten out that mess. - -I don’t think they like me very much. I’ll tell you what. I’ve evolved on many issues over the years. And you know who else has? Is Ronald Reagan evolved on many issues. - -And I am pro-life. 
And if you look at the question, I was in business. They asked me a question as to pro-life or choice. And I said if you let it run, that I hate the concept of abortion. I hate the concept of abortion. And then since then, I’ve very much evolved. - -And what happened is friends of mine years ago were going to have a child, and it was going to be aborted. And it wasn’t aborted. And that child today is a total superstar, a great, great child. And I saw that. And I saw other instances. - -And I am very, very proud to say that I am pro-life. - -As far as being a Republican is concerned, I come from a place, New York City, which is virtually, I mean, it is almost exclusively Democrat. And I have really started to see some of the negatives — as an example, and I have a lot of liking for this man, but the last number of months of his brother’s administration were a catastrophe. And unfortunately, those few months gave us President Obama. And you can’t be happy about that. - -First of all, Jeb, I am very happy that you denied that, and I appreciate that very much. He is a true gentleman. He really is. - -One thing he did say, and I mean that. The one thing he did say about me, however, was my tone. And I also understand that. But when you have people that are cutting Christians’ heads off, when you have a world that the border and at so many places, that it is medieval times, we’ve never — it almost has to be as bad as it ever was in terms of the violence and the horror, we don’t have time for tone. We have to go out and get the job done. - -I would be so different from what you have right now. Like, the polar opposite. We have a president who doesn’t have a clue. I would say he’s incompetent, but I don’t want to do that because that’s not nice. - -But if you look at the deals we make, whether it’s the nuclear deal with 24 hour periods — and by the way, before you get to the 24 hours, you have to go through a system. You look at Sergeant Bergdahl, we get Bergdahl, a traitor, and they get five of the big, great killer leaders that they want. We have people in Washington that don’t know what they’re doing. Now I agree. - -Now, with Iran, we’re making a deal, you would say, we want him. We want out our prisoners. We want all these things, and we don’t get anything. We’re giving them $150 billion dollars plus, they are going to be — I’ll tell you what, if Iran was a stock, you folks should go out and buy it right now because you’ll quadruple — this, what’s happening in Iran, is a disgrace, and it’s going to lead to destruction in large portions of the world. - -Our country is in serious trouble. We don’t win anymore. - -We don’t beat China in trade. We don’t beat Japan, with their millions and millions of cars coming into this country, in trade. We can’t beat Mexico, at the border or in trade. - -We can’t do anything right. Our military has to be strengthened. Our vets have to be taken care of. We have to end Obamacare, and we have to make our country great again, and I will do that. - -Thank you. \ No newline at end of file diff --git a/examples/text_generation/data/word_counts.txt b/examples/text_generation/data/word_counts.txt deleted file mode 100755 index f41e74126..000000000 --- a/examples/text_generation/data/word_counts.txt +++ /dev/null @@ -1,11519 +0,0 @@ -a 969108 -</S> 586368 -<S> 586368 -. 
440479 -on 213612 -of 202290 -the 196219 -in 182598 -with 152984 -and 139109 -is 97322 -man 72712 -to 67506 -sitting 52259 -an 49463 -two 47993 -, 43921 -standing 42264 -at 42204 -people 41672 -are 40768 -next 36794 -white 35874 -woman 33849 -street 30173 -table 29681 -that 27969 -holding 27648 -it 26574 -person 24540 -large 24417 -some 24130 -down 22912 -top 21994 -group 21582 -up 20742 -field 20614 -small 19884 -tennis 19485 -near 19450 -his 19304 -front 19296 -black 19187 -train 18217 -dog 18107 -plate 18081 -riding 18081 -room 18076 -red 17185 -young 16971 -cat 16933 -by 16864 -water 16374 -baseball 15472 -has 14974 -while 14551 -playing 14504 -walking 14492 -bathroom 14339 -sign 14028 -blue 13618 -kitchen 13244 -food 13049 -grass 12917 -there 12678 -bus 12554 -green 12484 -parked 12436 -pizza 12360 -side 12328 -building 12309 -other 12082 -bed 11869 -looking 11826 -snow 11734 -beach 11249 -ball 11105 -three 11014 -couple 11013 -for 10929 -boy 10814 -men 10688 -toilet 10515 -clock 10326 -city 10322 -flying 10141 -road 10137 -wearing 9905 -out 9852 -skateboard 9755 -her 9744 -player 9659 -over 9572 -several 9415 -game 9400 -girl 9331 -laying 9259 -from 9236 -sits 9200 -picture 8844 -wooden 8808 -bench 8772 -bear 8705 -area 8691 -through 8603 -their 8593 -one 8553 -laptop 8498 -around 8408 -horse 8336 -eating 8321 -brown 8320 -yellow 8263 -cake 8243 -phone 8054 -frisbee 8053 -computer 8005 -sink 7945 -board 7939 -giraffe 7848 -outside 7770 -as 7628 -air 7514 -living 7494 -truck 7490 -window 7321 -motorcycle 7241 -desk 7188 -'s 7154 -umbrella 7135 -car 7050 -tree 7019 -trees 6986 -covered 6942 -wall 6927 -each 6926 -open 6917 -elephant 6867 -park 6854 -many 6831 -close 6765 -behind 6680 -this 6679 -very 6666 -old 6577 -under 6474 -filled 6440 -little 6412 -fire 6349 -stop 6333 -court 6213 -sky 6144 -together 6081 -child 6035 -into 6023 -surfboard 5910 -its 5857 -kite 5845 -background 5747 -skis 5698 -inside 5617 -sheep 5612 -boat 5563 -bat 5515 -back 5513 -bowl 5502 -stands 5470 -big 5470 -photo 5464 -chair 5454 -view 5416 -light 5410 -bunch 5347 -ocean 5343 -couch 5306 -bird 5300 -glass 5281 -traffic 5241 -cell 5208 -airplane 5182 -hydrant 5139 -zebra 5116 -fence 5064 -mirror 5022 -teddy 5021 -shirt 4974 -counter 4949 -orange 4933 -women 4924 -sandwich 4898 -hand 4888 -another 4865 -sidewalk 4863 -plane 4834 -different 4812 -wave 4778 -floor 4681 -lot 4676 -stand 4661 -tall 4635 -parking 4635 -giraffes 4596 -flowers 4593 -cars 4584 -horses 4546 -vase 4538 -tracks 4535 -racket 4462 -baby 4449 -tower 4446 -ground 4373 -grassy 4306 -tie 4304 -vegetables 4301 -zebras 4264 -off 4242 -being 4231 -elephants 4215 -day 4169 -bananas 4153 -along 4087 -full 4074 -middle 4052 -ready 3990 -image 3967 -hill 3932 -dirt 3919 -station 3884 -taking 3870 -bike 3836 -sit 3835 -signs 3833 -four 3785 -slope 3771 -driving 3742 -stuffed 3710 -head 3676 -piece 3632 -above 3618 -broccoli 3589 -grazing 3571 -cows 3565 -skiing 3562 -across 3549 -beside 3535 -luggage 3525 -long 3508 -wine 3504 -snowy 3472 -skate 3408 -them 3388 -wii 3378 -ski 3372 -hanging 3365 -hat 3359 -during 3355 -glasses 3318 -mountain 3304 -refrigerator 3302 -holds 3297 -children 3295 -camera 3288 -doing 3285 -pink 3258 -display 3232 -herd 3228 -suit 3218 -hot 3199 -cow 3192 -fruit 3173 -buildings 3159 -pole 3159 -corner 3097 -going 3064 -empty 3057 -looks 3033 -umbrellas 3028 -cutting 3018 -watching 3016 -oven 3013 -kites 3011 -pair 3004 -trick 2997 -jumping 2976 -stove 2958 -track 2957 -smiling 2955 -dogs 2942 -keyboard 2938 -chairs 2937 
-posing 2904 -talking 2899 -boats 2898 -double 2874 -airport 2869 -door 2867 -television 2866 -box 2865 -soccer 2856 -colorful 2837 -crowd 2833 -traveling 2813 -animals 2795 -swinging 2791 -video 2775 -tv 2771 -surf 2770 -topped 2748 -various 2743 -getting 2741 -birds 2735 -using 2718 -lady 2714 -who 2687 -plates 2687 -body 2667 -against 2634 -set 2632 -hit 2630 -all 2620 -paper 2583 -banana 2582 -guy 2566 -motorcycles 2550 -coffee 2540 -wood 2540 -carrying 2539 -brick 2520 -lots 2515 -river 2504 -cup 2483 -someone 2471 -bedroom 2458 -cheese 2457 -something 2451 -night 2445 -lights 2422 -waiting 2415 -restaurant 2413 -house 2407 -be 2398 -bears 2385 -walk 2368 -players 2364 -shower 2346 -skateboarder 2345 -metal 2342 -meat 2341 -runway 2338 -skier 2326 -bicycle 2304 -remote 2294 -snowboard 2291 -racquet 2286 -face 2282 -about 2279 -home 2273 -running 2268 -high 2251 -items 2251 -surfer 2248 -jet 2240 -busy 2239 -line 2219 -ramp 2205 -intersection 2159 -lying 2153 -passenger 2150 -dressed 2149 -hands 2144 -male 2127 -tray 2094 -like 2080 -surfing 2076 -mouth 2036 -book 2031 -he 2024 -suitcase 2013 -decker 2005 -animal 2001 -him 2000 -slice 1982 -preparing 1976 -store 1972 -shown 1972 -rides 1967 -cut 1963 -bridge 1940 -pulling 1933 -made 1933 -bottle 1921 -scissors 1919 -batter 1913 -screen 1906 -gray 1899 -bag 1898 -sleeping 1877 -donuts 1872 -half 1869 -look 1867 -zoo 1862 -kids 1850 -dark 1842 -way 1834 -number 1833 -enclosure 1832 -row 1826 -surrounded 1808 -microwave 1803 -tub 1802 -knife 1801 -sand 1799 -jacket 1795 -showing 1787 -between 1787 -carrots 1768 -play 1763 -adult 1753 -colored 1750 -decorated 1741 -toy 1741 -pile 1732 -silver 1728 -lake 1728 -few 1713 -boys 1702 -cabinets 1699 -forest 1698 -lined 1698 -buses 1690 -walks 1686 -mouse 1684 -skiers 1680 -older 1671 -meal 1665 -seat 1663 -purple 1652 -girls 1646 -bread 1646 -past 1642 -`` 1631 -oranges 1631 -furniture 1626 -hair 1605 -grey 1604 -swing 1590 -outdoor 1584 -kid 1579 -have 1575 -cloudy 1572 -waves 1567 -coming 1560 -displayed 1548 -drink 1538 -photograph 1534 -'' 1524 -throwing 1523 -attached 1515 -can 1500 -chocolate 1497 -leaning 1496 -onto 1492 -crossing 1488 -monitor 1485 -fork 1484 -scene 1478 -making 1476 -painted 1471 -shelf 1454 -dining 1453 -pan 1442 -rocks 1439 -cats 1439 -meter 1435 -stone 1434 -hitting 1424 -drinking 1423 -seen 1421 -salad 1421 -walls 1413 -no 1412 -lush 1408 -lit 1407 -apples 1406 -towards 1402 -birthday 1402 -watch 1397 -female 1391 -resting 1391 -cross 1387 -office 1373 -market 1364 -fruits 1359 -rain 1355 -windows 1351 -public 1349 -bright 1347 -apple 1339 -sunny 1337 -blanket 1330 -dish 1324 -leaves 1322 -clean 1321 -tables 1318 -flower 1305 -catch 1305 -fries 1303 -plastic 1303 -bikes 1302 -sun 1300 -clear 1293 -stopped 1290 -been 1288 -edge 1283 -beautiful 1279 -mountains 1278 -surfboards 1278 -books 1277 -moving 1276 -statue 1271 -teeth 1268 -setting 1266 -pictures 1263 -snowboarder 1262 -rail 1252 -helmet 1252 -ride 1251 -dress 1249 -trying 1247 -working 1245 -underneath 1242 -slices 1229 -branch 1229 -uniform 1223 -donut 1220 -rock 1219 -yard 1210 -platform 1203 -bath 1193 -time 1186 -controller 1173 -shot 1172 -motor 1172 -eat 1170 -or 1164 -pieces 1161 -shows 1160 -nice 1156 -cellphone 1152 -perched 1147 -having 1146 -skateboarding 1145 -placed 1136 -cart 1135 -country 1133 -catcher 1130 -wet 1128 -shore 1126 -basket 1125 -computers 1124 -passing 1121 -pitch 1119 -case 1118 -police 1114 -path 1113 -sandy 1111 -surface 1110 -vases 1107 -base 1107 -cooking 1105 -family 1095 
-they 1091 -vehicle 1090 -hotel 1089 -dinner 1078 -eaten 1076 -modern 1071 -pizzas 1071 -types 1070 -sauce 1059 -lap 1058 -just 1057 -multiple 1053 -boards 1051 -town 1051 -doughnut 1049 -tiled 1048 -others 1048 -brushing 1047 -nintendo 1046 -reading 1045 -plant 1043 -beer 1042 -trains 1040 -watches 1032 -pitcher 1023 -doughnuts 1022 -single 1021 -toppings 1019 -laptops 1018 -benches 1017 -post 1012 -variety 1011 -engine 1010 -mounted 1007 -lamp 1007 -plants 1007 -trucks 1003 -guys 1003 -enjoying 999 -gear 996 -bowls 995 -tarmac 990 -curb 987 -jump 986 -appliances 985 -distance 984 -fresh 983 -passengers 982 -gathered 982 -graffiti 976 -woods 975 -pretty 974 -phones 973 -tricks 971 -bathtub 971 -end 969 -performing 967 -cute 964 -rice 960 -pen 957 -place 952 -shoes 949 -candles 947 -flies 941 -shop 940 -toothbrush 931 -containing 930 -pasture 928 -left 925 -five 922 -fenced 921 -carriage 919 -cattle 919 -poles 918 -chicken 918 -bottles 915 -feeding 910 -brush 907 -dirty 906 -pot 897 -match 897 -railroad 896 -concrete 893 -below 892 -after 889 -right 888 -bags 888 -school 886 -tile 885 -fly 885 -dock 881 -neck 879 -take 875 -drinks 873 -including 872 -steel 871 -bar 871 -fireplace 870 -takes 869 -vintage 868 -she 861 -pillows 856 -striped 853 -sofa 853 -pulled 850 -crowded 849 -nearby 848 -huge 845 -sinks 845 -sandwiches 844 -rack 840 -control 840 -well 839 -catching 838 -dry 837 -sliced 834 -alone 834 -center 833 -fridge 833 -vehicles 832 -boxes 831 -polar 829 -shaped 828 -planes 826 -reaching 824 -equipment 822 -trunk 821 -beds 818 -container 818 -skateboards 818 -flat 813 -giant 813 -church 811 -floating 807 -both 803 -bicycles 799 -suitcases 798 -served 796 -feet 795 -where 793 -poses 792 -arm 791 -bushes 787 -not 786 -plays 786 -away 782 -legs 781 -serve 778 -atop 777 -asian 776 -tomatoes 776 -clothes 774 -taken 770 -towel 768 -putting 768 -staring 767 -sticking 767 -hay 766 -airplanes 765 -smiles 764 -prepares 764 -potatoes 762 -lays 761 -space 761 -cream 759 -pose 758 -surfers 756 -commercial 755 -style 755 -professional 752 -foods 750 -serving 748 -dishes 747 -spoon 746 -wild 746 -trail 745 -run 735 -subway 734 -which 734 -mother 733 -show 731 -work 731 -cabinet 731 -cement 729 -christmas 725 -shorts 724 -painting 722 -soup 716 -breakfast 715 -wire 715 -lone 712 -round 711 -stacked 710 -square 709 -pool 708 -same 708 -toward 707 -backpack 707 -cooked 706 -highway 703 -reflection 696 -swimming 689 -games 688 -dessert 685 -umpire 684 -blender 681 -stall 679 -french 679 -pillow 678 -seated 676 -flag 675 -throw 675 -swings 675 -docked 674 -outdoors 674 -adults 670 -couches 669 -go 668 -cluttered 667 -wedding 667 -get 665 -boarding 661 -hotdog 661 -clocks 659 -rug 659 -assortment 659 -van 659 -controllers 656 -business 656 -drives 655 -garden 654 -sunglasses 652 -team 647 -bun 646 -hillside 641 -ledge 636 -among 634 -military 633 -flock 632 -rocky 631 -wooded 629 -skies 629 -jumps 628 -low 627 -restroom 626 -facing 626 -ice 621 -trash 621 -does 620 -doors 619 -foot 618 -stainless 618 -shelves 618 -lawn 617 -mid 617 -desktop 616 -kitten 613 -stairs 613 -onions 613 -eyes 612 -drawn 609 -assorted 608 -gate 608 -desert 604 -see 602 -event 600 -pointing 598 -fish 597 -closeup 596 -things 595 -cups 595 -eats 594 -clouds 593 -ceiling 593 -race 592 -coat 590 -eggs 588 -before 587 -steps 586 -monitors 583 -toddler 583 -arms 583 -landing 577 -picnic 574 -construction 573 -curtain 573 -says 573 -rackets 570 -land 569 -floors 569 -electronic 567 -party 566 -snowboarding 566 -broken 566 -glove 566 
-friends 565 -arranged 565 -turn 565 -lunch 562 -kneeling 562 -cage 560 -boarder 554 -overlooking 554 -tied 554 -graze 553 -pond 552 -rider 552 -closed 552 -bottom 551 -new 550 -overhead 548 -type 547 -vegetable 547 -narrow 546 -tan 545 -colors 543 -cakes 542 -skating 539 -wide 539 -reads 538 -kind 537 -pots 537 -sale 537 -roof 536 -object 535 -messy 535 -fancy 535 -himself 534 -gold 533 -hold 527 -machine 526 -suits 524 -trailer 522 -sides 521 -pasta 520 -plain 519 -towels 516 -pastries 516 -appears 515 -foreground 515 -sunset 514 -bite 514 -cloth 514 -features 509 -walkway 509 -pants 509 -veggies 508 -urban 508 -ties 508 -sea 507 -pie 506 -leather 505 -photos 503 -transit 503 -shade 503 -net 502 -make 502 -short 500 -rest 500 -what 500 -was 499 -loaded 496 -blurry 496 -carrot 495 -clothing 494 -action 493 -scooter 492 -commuter 491 -palm 490 -ear 490 -smoke 488 -fashioned 488 -pepperoni 488 -leash 488 -antique 486 -hole 486 -device 485 -baked 483 -ship 482 -partially 482 -still 481 -beneath 480 -doorway 479 -pastry 479 -signal 479 -toys 479 -toilets 478 -lies 477 -wrapped 475 -held 474 -harbor 473 -houses 471 -pedestrians 471 -kinds 470 -gets 470 -petting 468 -fighter 468 -tour 467 -giving 467 -decorative 466 -chips 464 -bow 463 -racing 463 -do 462 -structure 462 -pier 461 -tomato 458 -outfit 457 -used 455 -contains 455 -grill 454 -island 453 -rests 453 -farm 452 -winter 450 -railing 449 -containers 448 -includes 446 -vanity 446 -lettuce 443 -peppers 442 -bunches 442 -part 441 -streets 440 -hard 440 -beans 439 -opened 438 -sheet 438 -model 438 -cap 438 -prepared 437 -leading 437 -electric 436 -six 435 -steam 435 -garage 434 -use 434 -apartment 433 -lid 432 -residential 430 -makes 429 -smart 429 -waits 429 -stack 429 -deck 429 -shopping 428 -lift 426 -ripe 426 -produce 425 -cover 425 -sheets 424 -heads 424 -rural 424 -bacon 424 -papers 422 -doll 422 -meters 421 -tooth 420 -sail 420 -barn 420 -i 419 -directions 417 -happy 416 -but 416 -here 416 -turned 415 -hats 415 -brightly 415 -tiny 414 -mound 414 -remotes 414 -these 414 -branches 414 -juice 413 -writing 413 -pavement 412 -upside 410 -snowboards 409 -disc 409 -rainy 408 -frosting 406 -nose 405 -alongside 405 -soda 404 -patio 404 -color 403 -displaying 401 -smaller 401 -parade 401 -riders 400 -pans 400 -tea 400 -touching 399 -close-up 398 -candle 397 -flight 397 -log 396 -ducks 396 -milk 396 -mug 395 -officer 395 -mustard 395 -carpet 395 -laid 394 -costume 393 -sized 393 -itself 392 -loading 391 -only 391 -wetsuit 391 -personal 390 -sort 390 -purse 390 -smile 390 -sill 389 -persons 389 -kicking 388 -ketchup 388 -bush 387 -ladies 386 -decorations 386 -roll 385 -uniforms 385 -noodles 385 -urinals 385 -course 384 -flags 383 -american 382 -mushrooms 382 -dresser 381 -platter 380 -sliding 380 -enclosed 380 -pipe 379 -paddle 379 -wheel 378 -pick 378 -goats 377 -leans 377 -stuff 376 -trays 376 -lines 376 -pickup 375 -beige 375 -gas 375 -utensils 373 -deep 373 -elderly 373 -egg 373 -goes 371 -sailing 371 -throws 370 -fried 370 -approaching 369 -sweater 368 -vest 368 -tent 367 -ornate 366 -fun 366 -sausage 366 -attempting 365 -crosswalk 365 -greens 365 -runs 364 -growing 363 -tow 363 -terminal 362 -rear 362 -uses 362 -bride 362 -hangs 360 -written 359 -rows 359 -propeller 358 -fake 357 -passes 357 -jets 356 -indoor 356 -built 355 -stick 354 -direction 351 -pulls 351 -collection 351 -beverage 350 -art 348 -posted 347 -shoe 347 -neatly 346 -wind 346 -roadway 345 -rolling 345 -curtains 344 -leg 344 -if 344 -chef 344 -relaxing 344 -paint 343 -rope 
343 -bull 343 -power 342 -toast 342 -wires 341 -bending 340 -sticks 339 -baskets 339 -airliner 339 -balls 339 -museum 339 -missing 338 -potted 338 -curled 338 -shirtless 338 -falling 338 -aircraft 338 -mitt 337 -tank 335 -sports 335 -stadium 334 -frame 333 -glazed 333 -supplies 333 -tops 333 -spectators 333 -siting 332 -lamps 331 -sculpture 331 -roses 330 -performs 330 -wagon 330 -how 330 -telephone 330 -stunt 329 -without 329 -paved 329 -mirrors 328 -locomotive 328 -tiles 327 -wheels 327 -talks 326 -practicing 326 -guitar 325 -good 325 -also 324 -toothbrushes 324 -block 324 -selfie 323 -gather 323 -toaster 323 -electronics 322 -jeans 321 -cupcakes 321 -shallow 321 -formation 321 -packed 320 -we 320 -wait 320 -2 319 -eye 319 -landscape 319 -opposite 318 -stream 318 -bank 317 -tail 317 -flown 317 -garbage 317 -chain 316 -pass 316 -shoulder 316 -jetliner 316 -piled 316 -steep 315 -travels 315 -blankets 315 -ingredients 315 -waters 315 -bucket 312 -homemade 312 -cones 312 -you 312 -motion 312 -hotdogs 311 -images 310 -so 310 -selling 310 -dead 309 -pedestrian 309 -floral 309 -spread 308 -hits 307 -competition 307 -put 307 -puppy 306 -holder 306 -groom 306 -fallen 305 -covering 305 -fluffy 305 -workers 304 -downhill 303 -flip 302 -freezer 302 -semi 302 -skateboarders 301 -objects 300 -sprinkles 299 -fast 299 -headphones 298 -gathering 298 -pictured 297 -marble 297 -tongue 297 -snowboarders 297 -icing 296 -strawberries 295 -stuck 295 -pitching 295 -pushing 295 -faces 295 -horseback 294 -counters 294 -visible 294 -neon 293 -featuring 293 -handle 293 -asleep 293 -mixed 293 -surrounding 293 -blonde 292 -pack 291 -rolls 291 -habitat 290 -drive 290 -cargo 290 -frosted 290 -urinal 289 -entrance 289 -pouring 288 -shirts 287 -life 287 -patch 286 -wears 285 -slopes 285 -climbing 285 -microphone 285 -golden 284 -checkered 284 -london 284 -whole 283 -speed 283 -string 283 -duck 283 -arrangement 282 -design 281 -beard 281 -skirt 280 -range 279 -officers 279 -shiny 279 -item 279 -reaches 278 -haired 278 -travel 277 -carries 277 -stickers 276 -delicious 274 -desserts 274 -grilled 274 -name 274 -far 274 -napkin 272 -liquid 272 -covers 272 -beef 271 -feeder 271 -interior 271 -moves 270 -batting 270 -butter 270 -someones 270 -position 270 -opening 269 -hills 269 -ben 269 -cleaning 269 -bare 269 -blowing 268 -goat 268 -first 267 -controls 267 -laughing 267 -advertisement 266 -checking 266 -straw 266 -mobile 266 -lighting 265 -basketball 265 -olives 265 -displays 264 -shape 264 -cookies 264 -great 264 -onion 263 -gravel 262 -rusty 262 -cardboard 261 -fountain 261 -ahead 260 -corn 259 -rusted 259 -plaid 259 -ham 257 -condiments 257 -potato 257 -miniature 257 -newspaper 256 -wing 256 -hang 255 -located 255 -muddy 255 -seats 255 -catches 255 -carts 254 -fishing 254 -lane 253 -tabby 253 -thin 253 -rainbow 252 -flooring 252 -foil 251 -clay 251 -fan 251 -bookshelf 250 -mini 250 -hood 249 -trolley 249 -tools 249 -unmade 248 -turning 248 -bunk 248 -jar 247 -bent 247 -travelling 246 -exhibit 245 -pet 245 -either 245 -posed 245 -shadow 244 -hospital 244 -forward 244 -cafe 243 -herself 243 -watering 243 -hallway 243 -sharing 243 -words 243 -following 242 -gloves 242 -library 242 -stoplight 242 -n't 242 -attire 242 -grapes 241 -meadow 240 -pickle 240 -series 240 -stares 240 -santa 239 -driver 239 -aerial 239 -nicely 238 -size 238 -sleeps 238 -comforter 238 -seems 238 -steeple 238 -calf 238 -stage 237 -frisbees 237 -pancakes 236 -us 235 -crust 235 -helping 235 -more 234 -dishwasher 234 -chasing 234 -backs 234 -burger 234 
-almost 234 -double-decker 234 -seagulls 234 -industrial 234 -mostly 233 -thrown 233 -advertising 233 -chili 233 -brushes 233 -wings 233 -league 233 -return 233 -heading 232 -chinese 232 -lay 232 -teams 231 -accessories 231 -multicolored 231 -coach 231 -freshly 231 -paw 231 -ceramic 230 -statues 230 -tasty 230 -balcony 229 -baking 229 -matching 229 -paws 228 -trunks 228 -steak 228 -picking 227 -tunnel 227 -railway 226 -countryside 226 -designed 226 -collar 226 -multi 226 -cowboy 226 -iron 226 -hugging 226 -larger 225 -leaving 225 -neighborhood 225 -system 225 -marina 225 -horns 224 -cuts 224 -biting 223 -bouquet 223 -twin 222 -funny 222 -jumbo 222 -owner 221 -alley 221 -gentleman 220 -mat 220 -amongst 220 -cycle 220 -chopped 219 -booth 219 -buffet 218 -father 218 -story 218 -mask 218 -heavy 217 -dusk 217 -individual 217 -hose 217 -hamburger 217 -cupcake 216 -lamb 216 -tries 216 -driveway 215 -sugar 215 -abandoned 215 -blow 215 -foreign 214 -prepare 214 -formal 214 -bakery 214 -downtown 214 -metallic 213 -audience 213 -shining 213 -poster 213 -sniffing 213 -resort 213 -calm 213 -sets 212 -helmets 211 -motorbike 211 -tractor 211 -rails 211 -baggage 211 -worker 211 -devices 211 -sunlight 210 -busses 210 -dryer 210 -rose 210 -seagull 209 -smoking 209 -cold 209 -heart 209 -circular 209 -connected 209 -propped 208 -cases 208 -mans 208 -boots 208 -typing 207 -folded 207 -chest 207 -hiding 207 -sub 206 -t-shirt 206 -backyard 205 -stool 205 -framed 205 -motorcyclist 204 -washing 204 -mud 204 -candy 204 -natural 204 -fans 204 -focus 204 -entertainment 203 -bearded 203 -mattress 203 -chewing 203 -castle 203 -onlookers 203 -dimly 203 -bay 203 -bars 203 -contents 202 -weather 202 -section 202 -decor 202 -neat 202 -magnets 202 -raised 202 -organized 201 -shaking 201 -reflected 200 -furry 200 -cliff 199 -kick 199 -lemon 199 -cook 199 -upon 199 -cupboards 199 -ears 198 -cigarette 198 -bikers 198 -thick 198 -evening 197 -storage 197 -point 196 -sailboat 196 -males 196 -dump 196 -lounging 195 -bell 195 -racquets 195 -chefs 195 -porch 195 -lighthouse 194 -3 194 -spinach 194 -progress 194 -wilderness 193 -unique 193 -kitty 193 -collage 193 -skiis 193 -hardwood 193 -tossing 193 -' 192 -adorable 192 -; 192 -card 192 -strip 192 -balancing 192 -canopy 192 -silverware 191 -grazes 191 -lambs 191 -tablet 191 -piano 191 -word 191 -taxi 190 -bats 190 -daytime 190 -bikini 190 -besides 190 -lemons 190 -really 190 -grocery 190 -parrot 190 -finger 189 -move 189 -scarf 189 -separate 189 -son 188 -crosses 188 -simple 188 -pony 188 -safety 187 -jersey 187 -leafy 187 -straight 187 -belt 186 -jungle 186 -peeking 186 -football 186 -saying 184 -ring 184 -fall 184 -cookie 184 -moped 184 -amount 184 -stretching 184 -tag 184 -ladder 183 -indoors 183 -scattered 183 -warning 183 -suspended 183 -odd 183 -apron 182 -delivery 182 -cabin 182 -break 182 -towering 182 -tourists 182 -nuts 181 -balloons 181 -lounge 181 -fighting 181 -pickles 181 -goggles 181 -circle 181 -quiet 180 -turkey 180 -obstacle 180 -spot 180 -sizes 180 -peeled 180 -countertop 180 -ribbon 180 -licking 180 -panda 179 -motorcyclists 179 -classroom 179 -decoration 178 -tire 178 -cellphones 178 -screens 178 -themselves 177 -golf 177 -strange 177 -hung 177 -numerous 177 -sticker 177 -topping 176 -spray 176 -enjoy 176 -logs 176 -scooters 176 -drawers 176 -shoreline 176 -magazine 175 -plaza 175 -pigeons 175 -vegetation 175 -freight 175 -courtyard 175 -crane 175 -fixing 174 -skyline 174 -feed 173 -clutter 173 -leaf 173 -closet 173 -beverages 173 -surfs 172 -st 172 -canoe 172 
-complete 172 -army 172 -stops 172 -cereal 171 -snack 171 -monkey 171 -own 171 -toothpaste 170 -healthy 170 -played 170 -muffin 170 -tusks 170 -foggy 170 -fabric 169 -students 169 -jockey 169 -talk 169 -furnished 169 -comes 168 -berries 168 -tin 168 -photographs 168 -shops 168 -blond 167 -figure 167 -slightly 167 -headed 167 -buns 167 -waterway 166 -cop 166 -teenager 166 -service 165 -pine 165 -dispenser 165 -crossed 165 -softball 165 -process 164 -positioned 164 -than 164 -arrow 163 -cone 163 -faucet 163 -rustic 163 -den 163 -sideways 163 -wear 163 -nap 163 -lighted 163 -observing 163 -points 162 -speakers 162 -shrimp 162 -airlines 162 -oriental 162 -groups 161 -illuminated 161 -pairs 160 -approaches 160 -married 160 -slicing 160 -designs 160 -bedspread 160 -goods 160 -leaping 160 -pull 159 -flooded 159 -seven 159 -cauliflower 159 -catchers 159 -bathing 159 -reflecting 159 -consisting 159 -grabbing 159 -friend 158 -wading 158 -granite 158 -pedestal 158 -vendor 158 -puts 158 -cushion 158 -fixtures 158 -valley 158 -village 157 -classic 157 -meeting 157 -bin 157 -sailboats 156 -herding 156 -patterned 156 -cheesy 156 -para 156 -paddling 156 -load 156 -beak 156 -reach 156 -forks 155 -port 155 -unusual 155 -desks 155 -war 155 -attempts 155 -music 154 -interesting 154 -fully 154 -works 154 -layer 154 -drawing 154 -arena 154 -trough 154 -wit 154 -snacks 154 -caught 153 -soap 153 -bald 153 -sport 153 -leads 153 -freeway 153 -numbers 152 -level 152 -dried 152 -mall 152 -guard 152 -waving 151 -tropical 150 -peanut 150 -keys 150 -stopping 150 -pad 150 -piles 150 -spoons 150 -much 149 -basin 149 -terrain 149 -tags 149 -transportation 148 -spraying 148 -pens 148 -heard 148 -within 148 -helicopter 148 -photographer 148 -conversation 148 -had 147 -kept 147 -greenery 147 -come 147 -tasting 146 -jars 146 -notebook 146 -strapped 146 -cellular 146 -portrait 146 -monument 146 -diamond 146 -costumes 146 -step 145 -boogie 145 -individuals 144 -turns 144 -jackets 144 -roaming 144 -deer 144 -overpass 144 -pin 144 -pepper 143 -grinding 143 -cans 143 -doubles 143 -utility 143 -celebrating 143 -split 142 -conference 142 -posts 142 -skater 142 -buggy 142 -canal 142 -owl 142 -second 142 -handing 142 -order 142 -button 142 -soft 142 -wake 141 -language 141 -? 
141 -pattern 141 -soldiers 141 -quilt 141 -speaking 140 -extended 140 -elegant 140 -massive 140 -gives 139 -airborne 139 -knives 139 -site 139 -hooked 139 -thing 139 -console 139 -backdrop 139 -upright 139 -bit 139 -link 138 -curve 138 -human 138 -crib 138 -chained 138 -factory 138 -tape 137 -love 137 -indicating 137 -stance 137 -tents 137 -english 137 -parachute 136 -seating 136 -tablecloth 136 -include 136 -hall 136 -print 136 -roman 136 -signals 136 -learning 136 -winding 136 -practice 136 -bagel 136 -boarders 135 -raft 135 -customers 135 -peel 134 -shades 134 -elevated 134 -hangar 134 -figurine 134 -mom 134 -teenage 133 -message 133 -cartoon 133 -cleaned 133 -world 132 -wrapper 132 -hauling 132 -fashion 132 -drivers 132 -knee 132 -savannah 132 -themed 132 -main 132 -s 131 -products 131 -adjusting 131 -daughter 131 -geese 131 -frying 131 -farmers 131 -toasted 131 -kissing 131 -will 131 -pathway 130 -weird 130 -mashed 130 -thumbs 130 -drying 130 -overcast 130 -drawer 130 -trio 130 -depot 130 -windowsill 129 -figurines 129 -meats 129 -babies 129 -pug 129 -sneakers 129 -squatting 129 -naked 129 -metro 129 -festival 128 -polka 128 -third 128 -stir 128 -depicting 128 -fat 128 -china 127 -star 127 -storm 127 -digital 127 -knees 127 -photographed 127 -scenic 126 -circus 126 -plains 126 -stroller 126 -package 126 -headboard 126 -such 126 -grown 126 -swim 125 -chopsticks 125 -multi-colored 125 -read 125 -british 125 -grab 125 -warehouse 125 -returning 125 -bookcase 125 -cooks 124 -printer 124 -gravy 124 -towing 124 -carry 124 -wicker 124 -stools 124 -dot 124 -disk 124 -marked 124 -balloon 124 -keyboards 124 -backpacks 124 -rowing 124 -competing 123 -placing 123 -sails 123 -fed 123 -barbed 123 -crate 123 -most 123 -too 123 -lobby 123 -note 123 -cruise 123 -african 123 -commode 123 -am 122 -staircase 122 -shuttle 122 -bicyclist 122 -cab 122 -puddle 122 -shines 122 -roadside 122 -worn 122 -places 121 -barrier 121 -carousel 121 -uncooked 121 -fur 121 -calico 121 -chrome 120 -ostrich 120 -balances 120 -cloud 120 -stable 120 -pears 120 -adorned 120 -listening 120 -deserted 120 -medium 120 -rocking 120 -major 120 -electrical 120 -cluster 120 -stores 119 -strawberry 119 -dough 119 -enter 119 -european 119 -float 119 -stationary 119 -powdered 119 -intently 118 -muffins 118 -coke 118 -fair 118 -tube 118 -refrigerators 118 -foliage 118 -asphalt 118 -tight 118 -directly 118 -map 118 -cathedral 118 -stepping 118 -stretched 118 -stripes 118 -lie 117 -shoulders 117 -planter 117 -names 117 -rings 117 -band 117 -environment 117 -entering 116 -closely 116 -were 116 -eight 116 -runner 116 -shrubs 116 -logo 116 -help 116 -opponent 116 -class 116 -midair 116 -leaned 116 -policeman 116 -suv 115 -sweet 115 -frozen 115 -wildlife 115 -towers 115 -starting 115 -boardwalk 115 -cubicle 115 -shapes 115 -conveyor 115 -black-and-white 115 -limb 115 -rectangular 115 -crates 114 -barrel 114 -spacious 114 -stomach 114 -tagged 114 -soldier 114 -podium 114 -fingers 114 -force 114 -stretches 114 -hind 114 -slide 113 -outfits 113 -figures 113 -diner 113 -checks 113 -floats 113 -ships 113 -dual 113 -appear 113 -signage 113 -tourist 113 -ferry 113 -racks 113 -silhouette 113 -speeding 113 -necks 112 -fog 112 -dressing 112 -partly 112 -thru 112 -operating 112 -stalls 112 -paintings 112 -spotted 112 -suite 112 -uniformed 112 -unit 112 -ones 112 -dusty 111 -touch 111 -pies 111 -tissue 111 -skillet 111 -when 111 -blurred 111 -rubbing 111 -tee 111 -dipping 111 -tiger 111 -raw 111 -key 110 -4 110 -grandfather 110 -serves 110 -texting 110 
-younger 110 -artwork 110 -dozen 109 -mountainous 109 -skinny 109 -now 109 -n 109 -maroon 109 -bicyclists 109 -mixture 109 -peering 109 -cupboard 109 -elaborate 109 -pocket 109 -radio 109 -lodge 109 -artistic 109 -crouching 109 -curved 109 -macaroni 109 -alcohol 108 -newly 108 -needs 108 -cobblestone 108 -shots 108 -array 108 -angle 108 -blinds 108 -kettle 108 -mushroom 108 -stunts 108 -feeds 107 -sofas 107 -dip 107 -washer 107 -apart 107 -try 107 -flipping 107 -cool 107 -gourmet 107 -nightstand 107 -sad 107 -loaf 107 -infant 107 -clearing 107 -syrup 107 -weeds 106 -start 106 -teenagers 106 -somewhere 106 -sprinkled 106 -tidy 106 -hitter 106 -letting 106 -movie 106 -spots 106 -crackers 106 -japanese 106 -balance 106 -sleep 106 -craft 106 -fry 105 -billboard 105 -self 105 -bowling 105 -porcelain 105 -biker 105 -volley 105 -else 105 -remodeled 105 -bamboo 105 -nest 105 -indian 105 -rubber 105 -identical 105 -shrubbery 104 -extra 104 -cub 104 -cherry 104 -farmer 104 -avenue 104 -wash 104 -ha 104 -pineapple 104 -jelly 104 -awaiting 103 -engaged 103 -unfinished 103 -skiier 103 -upward 103 -suburban 103 -treats 103 -goal 103 -carriages 103 -bulls 103 -stump 103 -stair 102 -entree 102 -menu 102 -cast 102 -any 102 -united 102 -glowing 102 -done 102 -attractive 102 -viewing 102 -viewed 102 -rough 102 -chandelier 102 -lifting 102 -sushi 102 -selection 102 -stacks 102 -opens 101 -maker 101 -gaming 101 -blood 101 -pigeon 101 -crouched 101 -pajamas 101 -appliance 101 -plush 101 -notes 101 -locked 100 -spatula 100 -parts 100 -moored 100 -tuxedo 100 -traditional 100 -tying 100 -cooling 100 -banner 100 -buttons 100 -motel 100 -roller 100 -teaching 100 -lime 100 -lining 100 -cubs 99 -possibly 99 -altered 99 -hiking 99 -caption 99 -breaking 99 -pads 99 -old-fashioned 99 -navy 99 -alarm 99 -coast 99 -forested 99 -swung 99 -youth 99 -dancing 99 -barren 98 -harness 98 -pork 98 -falls 98 -care 98 -mural 98 -warm 98 -iced 98 -hoodie 97 -celery 97 -stripped 97 -local 97 -wheelchair 97 -ridding 97 -blocks 97 -interacting 97 -walled 97 -cutter 97 -bigger 97 -happily 97 -engines 97 -sharp 97 -grouped 97 -compact 96 -stripe 96 -towed 96 -ad 96 -transport 96 -pitchers 96 -fuzzy 96 -yogurt 96 -unloading 96 -playground 96 -adjacent 96 -trim 96 -letters 95 -separated 95 -ottoman 95 -equipped 95 -heels 95 -hut 95 -females 95 -junk 95 -sausages 95 -jeep 95 -chickens 95 -aged 95 -hanger 95 -pitched 94 -windshield 94 -afternoon 94 -eagle 94 -theme 94 -horned 94 -hipster 94 -steer 94 -dozens 94 -flowered 94 -pottery 94 -wrap 94 -coats 94 -firetruck 94 -furnishings 94 -bunny 94 -money 94 -cycles 94 -special 94 -powered 94 -whipped 94 -ram 94 -followed 93 -blocking 93 -socks 93 -feathers 93 -bends 93 -zone 93 -form 93 -skates 93 -halves 93 -donkey 93 -necktie 93 -peas 93 -clown 93 -oil 93 -handles 93 -pets 93 -silly 93 -perches 93 -called 93 -parasailing 93 -york 93 -company 92 -led 92 -check 92 -pipes 92 -grasses 92 -shaggy 92 -dim 92 -vandalized 92 -claim 92 -scale 92 -text 92 -aluminum 92 -dugout 92 -flatbed 91 -plan 91 -relish 91 -accents 91 -call 91 -unripe 91 -wandering 90 -gun 90 -wheeled 90 -hilly 90 -strings 90 -ovens 90 -panel 90 -outstretched 90 -poised 90 -bedding 90 -gym 90 -garlic 90 -members 90 -mean 90 -swans 90 -tram 90 -frisbe 89 -distant 89 -guests 89 -toiletries 89 -crest 89 -grizzly 89 -real 89 -swan 89 -pointed 89 -delta 89 -comfortable 89 -antelope 89 -directing 89 -setup 89 -character 89 -folding 89 -plenty 89 -quickly 89 -poking 89 -exit 89 -beyond 89 -portion 89 -cyclist 89 -sweatshirt 89 -driven 89 
-provide 88 -decked 88 -peeling 88 -lego 88 -herbs 88 -waterfall 88 -cabbage 88 -sewing 88 -bricks 88 -ipod 88 -barefoot 88 -diving 87 -backwards 87 -cops 87 -raising 87 -interactive 87 -ave 87 -mixing 87 -kneels 87 -unable 87 -vast 87 -roast 87 -snow-covered 87 -steering 87 -sleek 87 -rim 87 -lead 87 -watched 86 -could 86 -cards 86 -peace 86 -shaded 86 -situated 86 -location 86 -compartment 86 -excited 86 -dinning 86 -& 86 -german 86 -crouches 86 -helps 86 -carved 86 -teen 86 -airline 86 -free 86 -trip 86 -chopping 85 -inflatable 85 -mopeds 85 -carrier 85 -t.v 85 -biplane 85 -those 85 -private 85 -slides 85 -leave 85 -holes 85 -stocked 85 -price 85 -stained 85 -t 85 -repair 85 -treat 85 -pay 85 -measuring 85 -share 85 -cooler 84 -news 84 -leaps 84 -corral 84 -knitted 84 -heavily 84 -saddle 84 -slab 84 -finished 84 -rolled 84 -gliding 84 -bites 84 -windsurfing 84 -cucumbers 84 -prop 84 -cages 83 -motorbikes 83 -holiday 83 -cyclists 83 -dolls 83 -oval 83 -handbag 83 -areas 83 -midst 83 -slowly 83 -portable 83 -champagne 83 -though 83 -o 83 -pillar 83 -shed 83 -briefcase 83 -mid-air 83 -ramps 83 -bbq 83 -anchored 82 -speaker 82 -tucked 82 -bend 82 -robe 82 -information 82 -mantle 82 -coleslaw 82 -cords 82 -perform 82 -completely 82 -page 82 -materials 82 -nearly 82 -stating 82 -travelers 82 -training 82 -meals 82 -threw 82 -pit 82 -smelling 81 -moon 81 -pirate 81 -route 81 -plugged 81 -filling 81 -layered 81 -ten 81 -strike 81 -numerals 81 -incoming 81 -m 81 -melted 81 -smartphone 81 -kayak 81 -shake 81 -liquor 81 -slow 81 -smiley 80 -combination 80 -cross-country 80 -plated 80 -parasol 80 -takeoff 80 -soaked 80 -bidet 80 -capped 80 -stew 80 -tulips 80 -outfield 80 -rabbit 80 -sculptures 80 -installed 79 -machines 79 -arch 79 -sled 79 -sparse 79 -times 79 -squash 79 -landed 79 -airfield 79 -king 79 -protective 79 -cola 79 -asparagus 79 -keep 79 -historic 79 -mess 79 -convention 79 -creek 79 -mixer 79 -material 79 -athlete 79 -roads 79 -club 79 -singing 79 -powder 79 -hydrants 79 -faced 79 -best 78 -skatepark 78 -wallpaper 78 -pushed 78 -cord 78 -presentation 78 -caution 78 -streetlight 78 -washington 78 -huddled 78 -iphone 78 -nothing 78 -rams 78 -tending 78 -state 78 -mugs 78 -caged 77 -thats 77 -stare 77 -studio 77 -basil 77 -buffalo 77 -casserole 77 -celebration 77 -labeled 77 -stems 77 -give 77 -medical 77 -grassland 77 -polo 77 -scratching 77 -angry 77 -volleyball 77 -curious 77 -waffles 77 -camping 77 -rodeo 77 -canadian 77 -dad 77 -chains 77 -rise 77 -printed 76 -emergency 76 -tiered 76 -nine 76 -comfortably 76 -coin 76 -soaring 76 -lower 76 -spaghetti 76 -wax 76 -makeup 76 -cheesecake 76 -surprised 76 -opposing 76 -taped 76 -yarn 75 -brocolli 75 -shelving 75 -dragon 75 -mustache 75 -instruments 75 -limit 75 -napkins 75 -cable 75 -mac 75 -lonely 75 -tires 75 -veggie 75 -touches 75 -summer 75 -st. 
75 -colorfully 74 -yet 74 -amid 74 -cabinetry 74 -dashboard 74 -relaxes 74 -clearly 74 -bullet 74 -bleachers 74 -ends 74 -president 74 -preparation 74 -extremely 74 -glides 74 -penguin 74 -lab 74 -follows 74 -characters 74 -wool 74 -jam 73 -oncoming 73 -lifts 73 -somebody 73 -saucer 73 -posters 73 -fields 73 -sauces 73 -jockeys 73 -surround 73 -perch 73 -bookshelves 73 -draped 73 -ultimate 73 -everything 73 -darkened 73 -blueberries 73 -may 73 -even 73 -honey 73 -parents 72 -grand 72 -similar 72 -dresses 72 -tethered 72 -shredded 72 -states 72 -sporting 72 -choppy 72 -gazing 72 -featured 72 -quite 72 -canoes 72 -steamed 71 -trainer 71 -department 71 -watermelon 71 -parallel 71 -sold 71 -windy 71 -directional 71 -olive 71 -policemen 71 -taxis 71 -cloudless 71 -barrels 71 -carpeted 71 -file 71 -stretch 71 -mountainside 71 -barbecue 71 -nighttime 71 -lens 71 -pig 71 -siamese 71 -countertops 70 -consists 70 -beginning 70 -mexican 70 -extends 70 -everyone 70 -competitive 70 -strewn 70 -damaged 70 -greeting 70 -picked 70 -kit 70 -frames 70 -referee 70 -varieties 70 -say 70 -upper 70 -early 70 -need 70 -parent 70 -styled 70 -flowing 69 -stars 69 -crew 69 -begins 69 -carnival 69 -today 69 -relax 69 -bundle 69 -cafeteria 69 -exposed 69 -skiiers 69 -college 69 -burning 69 -crashing 69 -tortilla 68 -musical 68 -telling 68 -artificial 68 -arriving 68 -microwaves 68 -bad 68 -change 68 -arched 68 -border 68 -storefront 68 -icy 68 -automobile 68 -motorized 68 -camper 68 -buckets 68 -regular 68 -! 68 -projector 68 -student 68 -cameras 68 -marker 68 -somewhat 68 -morning 68 -cozy 68 -plow 67 -breast 67 -deli 67 -rounding 67 -vacant 67 -hello 67 -stones 67 -propellers 67 -safari 67 -peers 67 -mix 67 -paddles 67 -jumped 67 -tarp 67 -curly 67 -cemetery 67 -deserts 67 -airways 67 -pregnant 67 -enjoys 66 -vests 66 -queen 66 -rising 66 -mote 66 -dust 66 -finish 66 -strong 66 -sword 66 -wallet 66 -tug 66 -filed 66 -tool 66 -everywhere 66 -debris 66 -avocado 66 -milking 66 -wireless 66 -kiwi 66 -gated 66 -spices 66 -protest 66 -crown 65 -historical 65 -businesses 65 -obama 65 -exiting 65 -striking 65 -rotten 65 -metropolitan 65 -underwear 65 -curvy 65 -rises 65 -horn 65 -tide 65 -nestled 65 -overlooks 65 -bins 65 -waffle 65 -district 65 -stirring 65 -attention 65 -sanding 65 -5 65 -passed 65 -know 65 -project 64 -noses 64 -batters 64 -lovely 64 -winds 64 -warming 64 -let 64 -then 64 -balanced 64 -sexy 64 -speeds 64 -legged 64 -racer 64 -mirrored 64 -fill 64 -grabs 64 -lion 64 -sells 64 -butterfly 64 -sniffs 64 -presents 64 -skills 64 -mail 64 -splashing 63 -smothered 63 -garnish 63 -skyscrapers 63 -blows 63 -wok 63 -kicks 63 -depicts 63 -turquoise 63 -sprouts 63 -stem 63 -belly 63 -random 63 -shadows 63 -easy 63 -lanes 63 -skyscraper 63 -limes 63 -liner 63 -homes 63 -brand 63 -removed 63 -faded 62 -dilapidated 62 -submerged 62 -kiss 62 -splash 62 -kiteboarding 62 -cucumber 62 -accident 62 -digging 62 -cuddling 62 -guiding 62 -must 62 -serious 62 -plaque 61 -decorating 61 -styrofoam 61 -compete 61 -livestock 61 -meet 61 -hummingbird 61 -rather 61 -paste 61 -dorm 61 -bagels 61 -buying 61 -backhand 61 -garnished 61 -italian 61 -kittens 61 -present 61 -trailing 61 -snuggling 61 -gentlemen 61 -mouths 61 -beers 61 -oddly 61 -stalk 61 -paperwork 60 -knocked 60 -indicates 60 -oak 60 -nursing 60 -crowds 60 -pours 60 -expression 60 -gone 60 -bumper 60 -pushes 60 -retro 60 -smoothie 60 -ropes 60 -flavored 60 -linens 60 -pale 60 -wetsuits 60 -crafts 60 -me 60 -shaving 60 -picks 60 -architecture 60 -dance 60 -lounges 60 
-ridden 60 -gazelle 60 -pumpkins 59 -cherries 59 -ascending 59 -caboose 59 -burnt 59 -tilted 59 -sepia 59 -grouping 59 -wheeler 59 -goose 59 -hook 59 -holders 59 -rugs 59 -frizbee 59 -slaw 59 -televisions 59 -belongings 59 -except 59 -: 59 -wrapping 59 -stoplights 58 -lettering 58 -tangerines 58 -dotted 58 -security 58 -late 58 -sandals 58 -hundreds 58 -mesh 58 -cinnamon 58 -bronze 58 -facility 58 -raises 58 -hawk 58 -should 58 -rooms 58 -daylight 58 -medicine 58 -tabletop 58 -boxing 58 -shady 58 -necklace 58 -citrus 58 -crystal 58 -embracing 58 -attending 58 -bricked 58 -milling 58 -destination 58 -autumn 58 -otherwise 58 -earth 58 -comb 58 -amidst 58 -fit 57 -describe 57 -foam 57 -pretending 57 -squirrel 57 -examining 57 -bottled 57 -docks 57 -wiping 57 -busted 57 -washed 57 -already 57 -vendors 57 -twilight 57 -lifted 57 -camouflage 57 -arrows 57 -canada 57 -littered 57 -tired 57 -napping 57 -partial 57 -your 57 -skull 57 -raspberries 57 -mill 57 -bean 57 -cracked 57 -incline 57 -bundled 56 -handicap 56 -climb 56 -sided 56 -stairway 56 -seeds 56 -exotic 56 -mama 56 -retail 56 -hairy 56 -blocked 56 -visitors 56 -entry 56 -magazines 56 -squares 56 -alert 56 -carton 56 -awning 56 -buss 56 -pita 56 -captured 56 -custom 56 -horizon 56 -hovering 56 -wines 56 -fine 56 -inspecting 56 -transporting 56 -ways 56 -pencils 56 -sectional 56 -formally 55 -platters 55 -backed 55 -pilot 55 -equestrian 55 -pillars 55 -walked 55 -chop 55 -washroom 55 -olympic 55 -monster 55 -admiring 55 -buy 55 -sunflowers 55 -torn 55 -appearing 55 -returns 55 -whit 55 -paneled 55 -typical 55 -patrons 55 -symbol 55 -fixture 55 -markers 55 -because 55 -starbucks 54 -basic 54 -cane 54 -stormy 54 -ceremony 54 -frog 54 -job 54 -goalie 54 -fight 54 -rv 54 -enormous 54 -escalator 54 -flips 54 -mass 54 -found 54 -remains 54 -waterfront 54 -seem 54 -hooded 54 -archway 54 -decorate 54 -hammer 54 -trashcan 54 -dome 54 -accompanied 54 -bells 54 -extending 54 -kits 54 -tip 54 -grain 54 -lean 54 -sandwhich 54 -tattoo 53 -lapse 53 -bandana 53 -flamingos 53 -bulldog 53 -stored 53 -dairy 53 -tournament 53 -peacock 53 -surfboarding 53 -aisle 53 -pump 53 -evergreen 53 -spire 53 -record 53 -deckered 53 -advertisements 53 -savanna 53 -amusement 53 -lite 53 -biking 53 -puppies 53 -hazy 53 -vane 53 -divided 53 -thumb 53 -hovers 53 -processor 53 -pc 53 -employees 53 -gates 53 -taxiing 53 -seafood 53 -grinds 53 -renovated 52 -tanker 52 -robot 52 -examines 52 -vending 52 -reclining 52 -beached 52 -instructions 52 -teal 52 -sesame 52 -waling 52 -packaged 52 -sunshine 52 -cramped 52 -got 52 -scissor 52 -dots 52 -owners 52 -spring 52 -mannequin 52 -athletic 52 -presented 52 -removing 52 -seed 52 -vans 52 -radishes 52 -grapefruit 52 -scenery 52 -ambulance 52 -swims 52 -handicapped 52 -beanie 52 -conductor 52 -recording 52 -climbs 51 -analog 51 -ancient 51 -guide 51 -mozzarella 51 -machinery 51 -marching 51 -calves 51 -hidden 51 -recliner 51 -hockey 51 -clad 51 -packages 51 -heater 51 -cityscape 51 -businessman 51 -chews 51 -drain 51 -shelter 51 -raining 51 -mascot 51 -pumpkin 51 -futon 51 -workstation 51 -raquet 51 -bouncing 51 -ornament 51 -approach 51 -shepherd 51 -shell 51 -reflective 51 -instrument 51 -triple 51 -tier 51 -airstrip 51 -radiator 51 -shaved 51 -dense 51 -handed 51 -trey 51 -dragging 51 -native 51 -sidecar 51 -overgrown 51 -thread 51 -adjusts 51 -ornaments 50 -west 50 -ofa 50 -baseman 50 -galloping 50 -would 50 -trolly 50 -parrots 50 -knit 50 -letter 50 -cd 50 -live 50 -haircut 50 -expanse 50 -downward 50 -alpine 50 -contemporary 50 
-named 50 -laugh 50 -jug 50 -barber 50 -available 50 -octopus 50 -wig 50 -personnel 50 -fist 50 -candies 50 -political 50 -athletes 50 -settings 50 -snowsuit 50 -residence 50 -barely 50 -ponies 50 -flowery 50 -creepy 50 -beat 50 -peeks 50 -ticket 50 -sparsely 49 -chin 49 -interstate 49 -tofu 49 -dvd 49 -u.s. 49 -markings 49 -huddle 49 -gadgets 49 -rod 49 -amtrak 49 -official 49 -oxen 49 -reflects 49 -halloween 49 -patiently 49 -florets 49 -camp 49 -conversing 49 -peak 49 -hamburgers 49 -fedora 49 -neutral 49 -courts 49 -stockings 49 -speech 49 -salmon 49 -burner 49 -dropping 49 -garb 49 -krispy 49 -hooks 49 -pro 49 -wheelie 49 -currently 49 -calculator 49 -delivering 49 -casual 49 -roasted 49 -weathered 49 -finishing 49 -biscuits 49 -noodle 49 -boot 49 -trails 48 -wired 48 -pencil 48 -wedge 48 -length 48 -stuffing 48 -coca 48 -languages 48 -camel 48 -carried 48 -levels 48 -slanted 48 -concert 48 -practices 48 -carefully 48 -moment 48 -clocktower 48 -complex 48 -gift 48 -plantains 48 -came 48 -sleigh 48 -crab 48 -underground 48 -drapes 48 -crocheted 48 -spinning 48 -laundry 48 -licks 48 -beautifully 48 -poured 48 -waste 48 -probably 48 -vanilla 48 -gown 47 -positions 47 -engaging 47 -strips 47 -descending 47 -peach 47 -gang 47 -utensil 47 -cole 47 -focused 47 -captivity 47 -bill 47 -charging 47 -western 47 -upwards 47 -trekking 47 -peaches 47 -standard 47 -foraging 47 -shine 47 -attempt 47 -husky 47 -hedge 47 -berry 47 -snowing 47 -cushions 47 -push 47 -salt 47 -broadway 47 -showroom 47 -begin 47 -fleet 47 -fencing 47 -warns 47 -cactus 47 -sour 47 -median 47 -barriers 47 -solar 47 -blazer 47 -fences 47 -crawling 47 -readies 47 -idle 46 -spewing 46 -product 46 -breads 46 -makeshift 46 -marketplace 46 -sound 46 -) 46 -sidewalks 46 -herded 46 -household 46 -grove 46 -filthy 46 -yummy 46 -croissant 46 -hug 46 -burners 46 -vertical 46 -foal 46 -headlights 46 -tails 46 -vegetarian 46 -pressing 46 -shells 46 -hunched 46 -reins 46 -sections 46 -kreme 46 -mosaic 46 -peacefully 46 -compartments 46 -views 46 -daisies 46 -extreme 46 -bunt 46 -affixed 46 -biscuit 46 -snuggled 45 -wrong 45 -bubble 45 -year 45 -tongs 45 -ugly 45 -waist 45 -enough 45 -packing 45 -rhino 45 -peaking 45 -aboard 45 -refridgerator 45 -trimmed 45 -contest 45 -sunlit 45 -teens 45 -supermarket 45 -squats 45 -fedex 45 -barge 45 -reception 45 -theater 45 -coaster 45 -flowering 45 -speaks 45 -sleepy 45 -buried 45 -int 45 -nasty 45 -carving 45 -diaper 45 -eyed 45 -film 45 -populated 45 -cheeses 45 -consumption 45 -instead 45 -mice 45 -unused 45 -stylish 45 -patriotic 45 -charger 45 -channel 45 -cables 45 -pokes 45 -nature 44 -wooly 44 -customer 44 -captive 44 -lips 44 -ginger 44 -stoves 44 -horse-drawn 44 -groceries 44 -loft 44 -instructor 44 -claw 44 -central 44 -hoop 44 -fell 44 -skin 44 -lands 44 -needle 44 -might 44 -leashes 44 -salon 44 -bridle 44 -( 44 -important 44 -zucchini 44 -virgin 44 -turtle 44 -blooming 44 -added 44 -bib 44 -glow 44 -master 43 -boarded 43 -memorial 43 -grooming 43 -architectural 43 -awaits 43 -stay 43 -stereo 43 -spaces 43 -diced 43 -substance 43 -helmeted 43 -sight 43 -housing 43 -caps 43 -mp3 43 -signing 43 -took 43 -ox 43 -depicted 43 -ornamental 43 -claus 43 -races 43 -retriever 43 -woven 43 -server 43 -brass 43 -macbook 43 -grilling 43 -plank 43 -contraption 43 -motorboat 43 -lamppost 43 -label 43 -roam 43 -intricate 43 -cloths 43 -styles 43 -ferris 43 -banquet 43 -numeral 42 -overturned 42 -herds 42 -rust 42 -columns 42 -perfect 42 -tossed 42 -tosses 42 -rink 42 -omelet 42 -controlled 42 -royal 42 
[word-frequency vocabulary list omitted: several thousand removed `word count` entries (e.g. `rundown 42`, `frisby 38`, `cursive 5`), with counts descending from 42 to 5 — the raw contents of a dataset vocabulary file spilled into the diff]
-priority 5 -trespassing 5 -monsters 5 -backback 5 -wonders 5 -pasty 5 -torch 5 -garbanzo 5 -shielding 5 -miller 5 -dork 5 -financial 5 -shopper 5 -reclined 5 -paddington 5 -contestants 5 -writting 5 -joust 5 -lushly 5 -cellophane 5 -dealer 5 -actor 5 -visual 5 -dwarfs 5 -suggesting 5 -managing 5 -clawing 5 -carolina 5 -wielding 5 -bros 5 -skied 5 -clam 5 -joyfully 5 -congress 5 -cpu 5 -accompanying 5 -sync 5 -threatening 5 -appreciation 5 -marriage 5 -plae 5 -condos 5 -vitamins 5 -roofing 5 -dotting 5 -choking 5 -pikachu 5 -shoestring 5 -crooks 5 -bu 5 -bp 5 -jewish 5 -results 5 -index 5 -residents 5 -mariners 5 -boxy 5 -drug 5 -ct 5 -cradling 5 -variegated 5 -swirls 5 -sellers 5 -hairdo 5 -glance 5 -brightly-colored 5 -chilled 5 -wide-open 5 -incorporated 5 -strength 5 -ds 5 -1971 5 -fairground 5 -scaffold 5 -waterski 5 -foreclosure 5 -domes 5 -seuss 5 -veteran 5 -dressage 5 -free-standing 5 -coffeemaker 5 -motorcylces 5 -puree 5 -recorded 5 -bathtubs 5 -stealth 5 -splitting 5 -headgear 5 -stirfry 5 -injury 5 -lacks 5 -perfume 5 -scanning 5 -outboard 5 -manning 5 -shortcake 5 -climbers 5 -rulers 5 -brakes 5 -trendy 5 -chromed 5 -11th 5 -rotted 5 -missile 5 -graffit 5 -cots 5 -mode 5 -fingerling 5 -attach 5 -kitties 5 -plugging 5 -ripen 5 -flexing 5 -decline 5 -hd 5 -mechanics 5 -teacups 5 -expose 5 -shelters 5 -motoring 5 -wiith 5 -collector 5 -reference 5 -backhoe 5 -sprinklers 5 -azure 5 -carting 5 -stapled 5 -seawall 5 -armchairs 5 -unfrosted 5 -streeet 5 -sittingon 5 -tun 5 -bmx 5 -inbetween 5 -backround 5 -chamber 5 -seconds 5 -autism 5 -stemware 5 -compare 5 -potatos 5 -drainer 5 -petite 5 -florescent 5 -seeking 5 -clamp 5 -participants 5 -bistro 5 -entrancing 5 -corkscrew 5 -utilizes 5 -plows 5 -peple 5 -commodes 5 -rowed 5 -understand 5 -itch 5 -assignment 5 -peg 5 -foot-long 5 -kayaker 5 -oils 5 -thatch 5 -daring 5 -winners 5 -lg 5 -poeple 5 -brownstone 5 -airports 5 -treys 5 -ashtray 5 -weapon 5 -raven 5 -wise 5 -fishbowl 5 -deers 5 -entertainer 5 -taxiway 5 -sparkles 5 -geek 5 -twice 5 -gladiator 5 -360 5 -uncommon 5 -kingfisher 5 -utilitarian 5 -papaya 5 -sanitary 5 -umping 5 -ne 5 -dappled 5 -towl 5 -conversations 5 -opera 5 -begun 5 -attracted 5 -cloves 5 -sidecars 5 -littering 5 -applauding 5 -simmering 5 -puncher 5 -buddies 5 -waiving 5 -communicate 5 -idiot 5 -tablecloths 5 -hoisting 5 -knelling 5 -hexagonal 5 -enticing 5 -securing 5 -korea 5 -flier 5 -hooding 5 -associated 5 -kitche 5 -skyteam 5 -reds 5 -boasts 5 -fez 5 -cocked 5 -emerald 5 -phrases 5 -rigs 5 -sanitizers 5 -shedding 5 -bronx 5 -high-rise 5 -raincoats 5 -herons 5 -supple 5 -foul 5 -foundation 5 -tempting 5 -lifejacket 5 -insects 5 -tolet 5 -craning 5 -trombone 5 -canopied 5 -otter 5 -3-way 5 -lollipops 5 -'a 5 -cleaver 5 -bicycler 5 -detached 5 -stole 5 -sw 5 -televison 5 -roosts 5 -israeli 5 -winters 5 -mand 5 -snowbank 5 -intel 5 -inter 5 -mna 5 -hooking 5 -sucker 5 -compartmentalized 5 -coors 5 -bulky 5 -composting 5 -contentedly 5 -readers 5 -sprint 5 -storing 5 -highways 5 -delicous 5 -richmond 5 -midday 5 -2010 5 -prarie 5 -expectantly 5 -bare-chested 5 -carraige 5 -vace 5 -courtesy 5 -smash 5 -astroturf 5 -rhode 5 -dhl 5 -cedar 5 -shoppe 5 -vignette 5 -chill 5 -bubbling 5 -offshore 5 -vents 5 -spigot 5 -townhouses 5 -latter 5 -ocean.. 
5 -cardinals 5 -wo 5 -garnishing 5 -plateful 5 -manuever 5 -eying 5 -beating 5 -accepts 5 -wan 5 -true 5 -computing 5 -white-tiled 5 -baggie 5 -sequential 5 -muslim 5 -cam 5 -ducati 5 -stevens 5 -maine 5 -figuring 5 -juggles 5 -unattached 5 -taker 5 -sneak 5 -streetlamp 5 -dominates 5 -labs 5 -banans 5 -udders 5 -restuarant 5 -randy 5 -meandering 5 -canals 5 -handy 5 -explores 5 -moths 5 -sanwich 5 -kings 5 -journal 5 -stumps 5 -smack 5 -graces 5 -stealing 5 -mind 5 -mine 5 -explain 5 -stabbing 5 -monica 5 -decides 5 -gushes 5 -cieling 5 -opportunity 5 -impending 5 -mouthed 5 -forked 5 -mussels 5 -somersault 5 -arbor 5 -participates 5 -mermaid 5 -cocktails 5 -teething 5 -red-headed 5 -cabana 5 -shingled 5 -carring 5 -gril 5 -romantic 5 -worms 5 -sunk 5 -hideous 5 -grasshopper 5 -hunching 5 -sphere 5 -preening 5 -gravelly 5 -muzzled 5 -29 5 -unbrellas 5 -rainforest 5 -dont 5 -idles 5 -majority 5 -contorted 5 -collapsed 5 -barrack 5 -reverse 5 -gag 5 -salutes 5 -coil 5 -motorboats 5 -31 5 -gateway 5 -reserved 5 -butternut 5 -marshmallow 5 -tubing 5 -interaction 5 -chomping 5 -completes 5 -humped 5 -signifying 5 -dryers 5 -well-worn 5 -propel 5 -although 5 -in-between 5 -42 5 -limited 5 -formica 5 -tweezers 5 -snarling 5 -oysters 5 -swirling 5 -toa 5 -10th 5 -buildings.. 5 -paddleboarding 5 -bellowing 5 -kentucky 5 -boater 5 -greenwich 5 -trashed 5 -hunk 5 -doorstep 5 -select 5 -wildflower 5 -sheds 5 -counting 5 -oasis 5 -calories 5 -nathan 5 -phillips 5 -nectarine 5 -nissan 5 -swerving 5 -cheesey 5 -faint 5 -life-sized 5 -glad 5 -catalog 5 -nights 5 -skyward 5 -collaboration 5 -carport 5 -tabled 5 -entangled 5 -strands 5 -x-ray 5 -breathing 5 -headless 5 -strategically 5 -blustery 5 -flew 5 -rockets 5 -lilac 5 -80 5 -maintains 5 -snowcapped 5 -ashbury 5 -roadwork 5 -mike 5 -gutted 5 -miami 5 -tumble 5 -detergent 5 -yankee 5 -nigh 5 -sittting 5 -hoe 5 -haphazardly 5 -snakes 5 -ranging 5 -paddled 5 -result 5 -in-flight 5 -medallion 5 -t.v.v 5 -kneading 5 -toppled 5 -peopl 5 -oregon 5 -evidently 5 -fluted 5 -router 5 -artifact 5 -twinkies 5 -sas 5 -peice 5 -tears 5 -dormitory 5 -loafs 5 -coasting 5 -dressy 5 -ghostly 5 -remolded 5 -leroy 5 -loosely 5 -unmanned 5 -oyster 5 -defender 5 -weaved 5 -flatware 5 -imposed 5 -sneaks 5 -illustrated 5 -dainty 5 -life-size 5 -involves 5 -dicing 5 -oh 5 -pyramids 5 -frittata 5 -grandpa 5 -well-decorated 5 -autos 5 -scrolls 5 -cut-up 5 -whtie 5 -zips 5 -furred 5 -overlapping 5 -goalkeeper 5 -egrets 5 -tasble 5 -clocked 5 -walsk 5 -oxygen 5 -bunkbed 5 -speedboats 5 -circled 5 -sewn 5 -speared 5 -termite 5 -humor 5 -freestanding 5 -boombox 5 -peeler 5 -nemo 5 -umpires 5 -expertly 5 -whippet 5 -enjoyment 5 -parasurfer 5 -charged 5 -sweatpants 5 -staging 5 -squirt 5 -kichen 5 -restraunt 5 -condensation 5 -rain-covered 5 -modular 5 -shuffle 5 -rotini 5 -woodsy 5 -malnourished 5 -reservoir 5 -roomful 5 -contorts 5 -bowed 5 -scotland 5 -flights 5 -winking 5 -buisness 5 -roaster 5 -reel 5 -motorcyclers 5 -william 5 -expressway 5 -brook 5 -seniors 5 -grayish 5 -creamer 5 -scrolling 5 -allover 5 -taping 5 -prohibited 5 -torsos 5 -staked 5 -eyebrows 5 -curiosity 5 -waited 5 -carve 5 -workout 5 -sparkly 5 -swiping 5 -scrubbing 5 -ollies 5 -unenthused 5 -sepia-toned 5 -servicemen 5 -arguing 5 -bolt 5 -themes 5 -villa 5 -phillies 5 -nyc 5 -shower/tub 5 -americans 5 -occupies 5 -cologne 5 -pebble 5 -turtles 5 -chocolate-covered 5 -meanders 5 -clawfoot 5 -conducting 5 -practical 5 -washers 5 -departs 5 -parcel 5 -earing 5 -flute 5 -tarmack 5 -tarmacs 5 -annual 5 -pavers 5 
-entre 5 -backflip 5 -wayland 5 -'ve 5 -saab 5 -migrating 5 -liners 5 -2009 5 -blaze 5 -ingredient 5 -unpacking 5 -fringe 5 -loved 5 -wetland 5 -monstrosity 5 -homer 5 -cereals 5 -stuffs 5 -highlighting 5 -corpse 5 -awarded 5 -editing 5 -surreal 5 -d'oeuvres 5 -assembles 5 -hating 5 -coasters 5 -bloomed 5 -tampa 5 -henry 5 -glob 5 -confident 5 -workings 5 -charlie 5 -presence 5 -carafe 5 -patrols 5 -moms 5 -molded 5 -moutains 5 -blue-eyed 5 -skillets 5 -fiddling 5 -onesie 5 -screenshot 5 -'do 5 -violent 5 -intact 5 -severed 5 -cornfield 5 -thrift 5 -forcefully 5 -crested 5 -berth 5 -sunscreen 5 -nonchalantly 5 -speedometer 5 -script 5 -gleefully 5 -option 5 -cleverly 5 -knocking 5 -whoa 5 -engineers 5 -shamrock 5 -negotiate 5 -old-time 5 -fremont 5 -littel 5 -chars 5 -dill 5 -headbands 5 -afraid 5 -oozing 5 -clogged 5 -graveled 5 -rocker 5 -surge 5 -rear-view 5 -bratwurst 5 -'ll 5 -wheelers 5 -illegally 5 -chins 4 -spotty 4 -yahoo 4 -cookers 4 -snugly 4 -undeveloped 4 -handcuffs 4 -wholesome 4 -altogether 4 -suticase 4 -cordoned 4 -lookin 4 -paraglider 4 -marches 4 -boiler 4 -petaled 4 -bellow 4 -slivers 4 -maxwell 4 -tame 4 -denoting 4 -ointment 4 -whoever 4 -mailboxes 4 -quartet 4 -saving 4 -veldt 4 -sprawls 4 -simplistic 4 -pimp 4 -straighten 4 -tracking 4 -steamer 4 -lively 4 -rumbles 4 -wits 4 -witb 4 -passerby 4 -corked 4 -hounds 4 -denotes 4 -airway 4 -burried 4 -sometimes 4 -cassette 4 -iphones 4 -treatment 4 -bouncy 4 -measurements 4 -concealed 4 -melbourne 4 -chirping 4 -iin 4 -carelessly 4 -babe 4 -placard 4 -tightrope 4 -bilingual 4 -houston 4 -frowns 4 -snow-packed 4 -backless 4 -receding 4 -vigorous 4 -quarry 4 -potties 4 -incense 4 -mover 4 -innings 4 -sunshades 4 -shotgun 4 -wee 4 -stooped 4 -tick 4 -tubular 4 -searches 4 -progression 4 -ocean-side 4 -jamaica 4 -otters 4 -bared 4 -fastening 4 -firehose 4 -ems 4 -dustbin 4 -sissors 4 -tony 4 -pees 4 -enthused 4 -franklin 4 -sake 4 -varnished 4 -disgusted 4 -grandson 4 -mr. 4 -landline 4 -soldering 4 -obscuring 4 -african-american 4 -tilling 4 -sawdust 4 -bagpipes 4 -headpiece 4 -assemble 4 -snooze 4 -monte 4 -dumpling 4 -decide 4 -steadying 4 -learned 4 -ninja 4 -gree 4 -randomly 4 -kales 4 -locale 4 -zipped 4 -lives 4 -smacked 4 -communal 4 -composition 4 -yolk 4 -unbrella 4 -taillights 4 -muffs 4 -vampire 4 -manage 4 -kitesurfer 4 -visibility 4 -fare 4 -stenciled 4 -paddleboard 4 -bluetooth 4 -polka-dotted 4 -tolkien 4 -brighten 4 -tens 4 -ken 4 -keg 4 -hurrying 4 -run-down 4 -propelled 4 -hurls 4 -painters 4 -collaborate 4 -snow-filled 4 -sittng 4 -glazes 4 -gargoyle 4 -skins 4 -ihop 4 -zealand 4 -saturn 4 -presumably 4 -tudor 4 -roosting 4 -confrontation 4 -seabird 4 -sprigs 4 -braking 4 -utilities 4 -weekend 4 -pasenger 4 -freestyle 4 -grouo 4 -memo 4 -voodoo 4 -enthusiasm 4 -blends 4 -judging 4 -slapping 4 -sights 4 -raptor 4 -zero 4 -residue 4 -all-way 4 -rainstorm 4 -unsliced 4 -emaciated 4 -messes 4 -water.. 4 -embraces 4 -fiction 4 -tricycles 4 -trudges 4 -converge 4 -choco 4 -forty-five 4 -suede 4 -screws 4 -parfait 4 -timber 4 -cud 4 -mrs. 4 -motivational 4 -restoration 4 -spicket 4 -beasts 4 -varies 4 -iguana 4 -sapling 4 -backboard 4 -dixie 4 -workbook 4 -vagina 4 -1960s 4 -ganache 4 -gingerly 4 -placidly 4 -equal 4 -garish 4 -posses 4 -topiary 4 -top.. 
4 -parkland 4 -supplied 4 -beck 4 -tenders 4 -refection 4 -staining 4 -variation 4 -primary 4 -traditionally 4 -thoroughly 4 -derek 4 -peddle 4 -undressed 4 -hopefully 4 -blimp 4 -vulcan 4 -greeted 4 -focaccia 4 -pursues 4 -tshirts 4 -ppk 4 -abed 4 -striding 4 -sometime 4 -eerie 4 -sacks 4 -vietnam 4 -fasten 4 -impression 4 -boast 4 -variously 4 -cased 4 -cornflakes 4 -unloads 4 -woodpeckers 4 -stiing 4 -smokestack 4 -fury 4 -impaled 4 -claw-foot 4 -ninth 4 -nectarines 4 -humping 4 -aeroplanes 4 -longboarder 4 -porter 4 -chowing 4 -beaches 4 -burton 4 -pullman 4 -grasp 4 -semicircle 4 -cascading 4 -underhanded 4 -camcorder 4 -65th 4 -israel 4 -flan 4 -birdcages 4 -alighting 4 -retrievers 4 -commemorative 4 -knive 4 -aman 4 -lumbers 4 -cosplay 4 -precision 4 -parasurfing 4 -pegboard 4 -summertime 4 -departments 4 -leveled 4 -youngest 4 -distressed 4 -eagles 4 -antennas 4 -kitcken 4 -ricotta 4 -republican 4 -chaos 4 -sculpting 4 -snowplow 4 -hairstyle 4 -spirits 4 -fear 4 -sponsors 4 -neutrals 4 -coffeepot 4 -pasties 4 -transfer 4 -federer 4 -intimidating 4 -coffees 4 -toasts 4 -swivel 4 -avery 4 -vanities 4 -dumpy 4 -bater 4 -instruct 4 -shoveled 4 -videotaping 4 -white-walled 4 -caped 4 -'no 4 -lurches 4 -shadyside 4 -disaster 4 -girrafes 4 -vain 4 -toboggan 4 -ewes 4 -foo 4 -freesby 4 -mighty 4 -supporters 4 -veg 4 -veterans 4 -teat 4 -minion 4 -fingernail 4 -apparent 4 -locamotive 4 -decades 4 -disguised 4 -spam 4 -ski-lift 4 -lurking 4 -lettered 4 -bombs 4 -goup 4 -scears 4 -litte 4 -clap 4 -playin 4 -forest-like 4 -mimic 4 -installations 4 -yellowed 4 -westmark 4 -delapidated 4 -corrugated 4 -slipped 4 -livery 4 -abuilding 4 -muted 4 -splatters 4 -drunk 4 -d.c. 4 -tobacco 4 -recent 4 -clearance 4 -clutched 4 -ba 4 -bt 4 -specifically 4 -mediterranean 4 -crisscrossing 4 -seatbelt 4 -david 4 -spaceship 4 -buttocks 4 -soak 4 -shaven 4 -fronds 4 -superhero 4 -grafiti 4 -dazzling 4 -lust 4 -command 4 -teriyaki 4 -dyrgas 4 -thier 4 -alliance 4 -95th 4 -marlboro 4 -fireplaces 4 -bewildered 4 -churches 4 -studs 4 -chooses 4 -groves 4 -cabanas 4 -i. 4 -manned 4 -hydration 4 -riverbed 4 -du 4 -cog 4 -flexible 4 -defending 4 -hungrily 4 -basset 4 -sends 4 -dines 4 -theirs 4 -evidence 4 -fragile 4 -retired 4 -frightened 4 -e. 4 -comprised 4 -fruity 4 -sterilized 4 -garments 4 -abraham 4 -fanned 4 -concern 4 -developing 4 -hens 4 -pitbull 4 -barbecuing 4 -silverwear 4 -salem 4 -baloons 4 -trudging 4 -attemping 4 -peole 4 -unicycle 4 -sardines 4 -child-sized 4 -wildwood 4 -cemented 4 -dire 4 -bask 4 -knots 4 -scrolled 4 -bnsf 4 -atable 4 -kindle 4 -macaw 4 -u.s 4 -necking 4 -wow 4 -woo 4 -buzz 4 -liked 4 -linen-covered 4 -minced 4 -toucan 4 -willed 4 -heats 4 -evenly 4 -attired 4 -ukulele 4 -waveland 4 -edifice 4 -traintracks 4 -1/2 4 -pansies 4 -commerical 4 -deaker 4 -bedsheets 4 -yong 4 -beg 4 -affair 4 -slows 4 -shy 4 -pointer 4 -seperating 4 -kkk 4 -scrubland 4 -reptile 4 -mississippi 4 -daniels 4 -projecting 4 -curbed 4 -notepads 4 -solution 4 -drew 4 -oman 4 -ratty 4 -thirds 4 -pebbled 4 -vows 4 -quantities 4 -exception 4 -sanitized 4 -romp 4 -separately 4 -fore 4 -shin 4 -alerts 4 -primed 4 -flowerbed 4 -stapler 4 -forages 4 -upraised 4 -oom 4 -capturing 4 -scalloped 4 -aviator 4 -greenfield 4 -finance 4 -fuschia 4 -streetsign 4 -impersonators 4 -contrasts 4 -nun-chuck 4 -dates 4 -thighs 4 -swabs 4 -spills 4 -deluxe 4 -alerting 4 -cylce 4 -des 4 -drained 4 -sleepily 4 -dignitaries 4 -settlers 4 -approximately 4 -polishing 4 -kk 4 -generators 4 -honking 4 -halve 4 -matters 4 -outside.. 
4 -televsion 4 -hard-sided 4 -fishes 4 -loungers 4 -graves 4 -flagstone 4 -semitrailer 4 -parkway 4 -regions 4 -ln 4 -ls 4 -pitts 4 -rec 4 -rung 4 -gravity 4 -areal 4 -instant 4 -buster 4 -winchester 4 -backpacker 4 -tests 4 -smartly 4 -preen 4 -flaky 4 -doesnt 4 -wish 4 -collies 4 -skiboard 4 -pilaf 4 -clasps 4 -tweed 4 -streer 4 -mischievous 4 -stared 4 -chateau 4 -entertained 4 -onit 4 -ms 4 -acrobat 4 -oriented 4 -shellfish 4 -grassless 4 -rinsing 4 -bashed 4 -swept 4 -boss 4 -improve 4 -sanitation 4 -muddied 4 -decanter 4 -spongebob 4 -clubs 4 -shards 4 -na 4 -mildly 4 -lapels 4 -twentieth 4 -houseboats 4 -fangs 4 -coordinated 4 -circa 4 -require 4 -ant 4 -frig 4 -jeter 4 -weapons 4 -votive 4 -stylishly 4 -buys 4 -events 4 -modes 4 -engulfed 4 -virginia 4 -schedules 4 -om 4 -og 4 -lovers 4 -igloo 4 -saxophone 4 -rate 4 -1900 4 -oat 4 -paradise 4 -wisconsin 4 -swarm 4 -perusing 4 -bohemian 4 -condo 4 -ranchers 4 -snowshoeing 4 -pa 4 -knack 4 -abilities 4 -peephole 4 -standalone 4 -mustached 4 -mustaches 4 -asain 4 -stalking 4 -associates 4 -overseeing 4 -issues 4 -slated 4 -heir 4 -udder 4 -planning 4 -pastoral 4 -aplate 4 -hunters 4 -painter 4 -buzzards 4 -bunkbeds 4 -dressings 4 -capabilities 4 -groomer 4 -certificate 4 -rickety 4 -guitarist 4 -meadows 4 -bile 4 -pollution 4 -currency 4 -pocketknife 4 -volcano 4 -drove 4 -shipped 4 -speedy 4 -rotary 4 -wipeout 4 -beech 4 -rockers 4 -someplace 4 -bock 4 -melt 4 -hoding 4 -si 4 -tore 4 -neighbourhood 4 -footwear 4 -drywall 4 -ironically 4 -backyards 4 -freezing 4 -sited 4 -aluminium 4 -jarred 4 -catchup 4 -laboratory 4 -partitioned 4 -kneck 4 -escaping 4 -restaruant 4 -morgan 4 -subtitles 4 -supine 4 -glitter 4 -tortellini 4 -worktable 4 -downpour 4 -mountain.. 4 -bernard 4 -scallop 4 -representative 4 -error 4 -nautical 4 -freighter 4 -graduating 4 -poverty 4 -technical 4 -bullhorn 4 -dominos 4 -bitty 4 -truth 4 -maciel 4 -yello 4 -fouling 4 -amphibious 4 -segment 4 -locust 4 -brightly-painted 4 -blowdrying 4 -buffalos 4 -strides 4 -raquets 4 -whispering 4 -bookends 4 -footbridge 4 -roomy 4 -software 4 -backcountry 4 -ing 4 -capers 4 -watchers 4 -total 4 -aware 4 -interview 4 -cherubs 4 -castle-like 4 -bang 4 -dishwashers 4 -seater 4 -hollandaise 4 -insulated 4 -maiden 4 -duckling 4 -valance 4 -voting 4 -stove/oven 4 -bras 4 -warmth 4 -sheltered 4 -hispanic 4 -jekyll 4 -lashes 4 -gearing 4 -juiced 4 -paces 4 -kitteh 4 -goslings 4 -pictorial 4 -accordian 4 -urinates 4 -whacked 4 -trumpets 4 -intrigued 4 -hand-held 4 -mascara 4 -browser 4 -commander 4 -statuette 4 -lob 4 -somerset 4 -wrestlers 4 -computerized 4 -10:20 4 -vet 4 -ver 4 -handwriting 4 -soot 4 -nestles 4 -anticipates 4 -wolverine 4 -squirts 4 -withered 4 -experienced 4 -reminiscent 4 -full-length 4 -olden 4 -cocks 4 -rummage 4 -referees 4 -crap 4 -astronomical 4 -photoed 4 -lettuces 4 -watermark 4 -follow-through 4 -detector 4 -boatyard 4 -brinks 4 -demo 4 -frequently 4 -magician 4 -immediately 4 -equally 4 -serveral 4 -stonework 4 -acrylic 4 -orchestra 4 -wiimotes 4 -100 4 -ignoring 4 -suitable 4 -seventy 4 -leaks 4 -bug-gee 4 -croup 4 -seek 4 -doe 4 -warrior 4 -entitled 4 -differing 4 -hummingbirds 4 -repurposed 4 -coney 4 -petersburg 4 -cleaners 4 -bruises 4 -tiolet 4 -ayi 4 -stitting 4 -soundboard 4 -cascade 4 -kabobs 4 -goth 4 -shipyard 4 -up-close 4 -chalet 4 -diversion 4 -macro 4 -pacman 4 -cribs 4 -blackbird 4 -oral 4 -cigars 4 -rooting 4 -radar 4 -filters 4 -23 4 -brewers 4 -disneyland 4 -believe 4 -sweatshirts 4 -frisco 4 -kitchens 4 -mystic 4 -clack 4 -tourbus 4 
-gordon 4 -asians 4 -agricultural 4 -min 4 -gibson 4 -monochromatic 4 -enhanced 4 -bicyclers 4 -ac 4 -convection 4 -knotty 4 -tilled 4 -iamge 4 -parted 4 -fattening 4 -hydran 4 -deciduous 4 -install 4 -suvs 4 -rank 4 -tog 4 -advising 4 -q-tips 4 -refreshment 4 -disgust 4 -habit 4 -springtime 4 -mid-leap 4 -seasonal 4 -collard 4 -banannas 4 -51 4 -candlelit 4 -old-style 4 -hawaii 4 -lawns 4 -fuselage 4 -legally 4 -reno 4 -masai 4 -workman 4 -recumbent 4 -pom 4 -relection 4 -coupled 4 -candied 4 -actress 4 -thimble 4 -rosy 4 -confines 4 -bodacious 4 -promote 4 -n. 4 -nightlife 4 -avacado 4 -briefly 4 -hwy 4 -quizzical 4 -birdseed 4 -side-view 4 -stephen 4 -ding 4 -76 4 -70 4 -compiled 4 -fastest 4 -railyard 4 -rollers 4 -batroom 4 -mid-century 4 -miniture 4 -porridge 4 -dormant 4 -sudsy 4 -filet 4 -post-it 4 -junior 4 -swats 4 -kimonos 4 -snails 4 -stone-paved 4 -hippie 4 -undecorated 4 -bungee 4 -badger 4 -edging 4 -89 4 -weaves 4 -4th 4 -echo 4 -walk/do 4 -kegs 4 -teapots 4 -drips 4 -refg 4 -kissed 4 -recline 4 -concourse 4 -vat 4 -sandles 4 -piling 4 -swining 4 -flare 4 -hop 4 -offloaded 4 -airbus 4 -bitter 4 -nations 4 -nineteenth 4 -castro 4 -harmony 4 -gameboy 4 -ole 4 -consulting 4 -get-together 4 -grandparents 4 -sheeting 4 -slush 4 -snapple 4 -maintaining 4 -boyfriend 4 -sam 4 -cloak 4 -roger 4 -discolored 4 -back.. 4 -scooped 4 -mammal 4 -agitated 4 -forwards 4 -florist 4 -frock 4 -motocycles 4 -rite 4 -turrets 4 -half-empty 4 -stylist 4 -apiece 4 -raging 4 -energetic 4 -occurred 4 -vinegar 4 -showboat 4 -traces 4 -scare 4 -printers 4 -umberellas 4 -fielded 4 -ikea 4 -hauler 4 -brilliantly 4 -solider 4 -belgium 4 -contently 4 -schedule 4 -planting 4 -forests 4 -montrose 4 -inning 4 -told 4 -spooky 4 -ither 4 -lan 4 -satisfied 4 -rooom 4 -contrasted 4 -powers 4 -forced 4 -phase 4 -observer 4 -received 4 -ill 4 -homebase 4 -skaeboard 4 -90th 4 -drip 4 -spotlessly 4 -columbus 4 -howling 4 -reality 4 -switches 4 -jogger 4 -steeply 4 -bording 4 -clue 4 -scaling 4 -snapshots 4 -yourself 4 -karate 4 -telivision 4 -gratified 4 -amplifier 4 -plate.. 4 -bedtime 4 -elementary 4 -multitasking 4 -signalling 4 -linden 4 -charlottesville 4 -pho 4 -concentration 4 -promenade 4 -songbird 4 -steele 4 -chariots 4 -tufts 4 -albino 4 -ariel 4 -info 4 -mosque 4 -beach-goers 4 -soggy 4 -griaffe 4 -umbrells 4 -bart 4 -terracotta 4 -floodway 4 -awe 4 -plumber 4 -scrambles 4 -creamed 4 -pirched 4 -two-person 4 -eclairs 4 -twitter 4 -humorously 4 -usaf 4 -offset 4 -strategy 4 -poinsettia 4 -hosting 4 -intersects 4 -penne 4 -obstruction 4 -trousers 4 -coin-operated 4 -permit 4 -pecan 4 -gasoline 4 -scoring 4 -fiery 4 -giraffees 4 -streaked 4 -serrated 4 -microscope 4 -vacuuming 4 -claims 4 -room.. 
4 -specials 4 -overlaid 4 -clerk 4 -stallion 4 -high-tech 4 -massage 4 -pidgeons 4 -yorkshire 4 -pebbly 4 -gage 4 -trader 4 -slum 4 -romping 4 -movements 4 -harsh 4 -tatoo 4 -manufacturer 4 -whited 4 -sandpiper 4 -guadalajara 4 -rain-wet 4 -chested 4 -particularly 4 -fins 4 -bet 4 -upgraded 4 -tanding 4 -interlocking 4 -rattan 4 -horton 4 -multilevel 4 -hopes 4 -lend 4 -greenville 4 -rinse 4 -glases 4 -atlas 4 -sharks 4 -semi-formal 4 -billed 4 -chief 4 -octopuses 4 -shreds 4 -stances 4 -'stop 4 -turbans 4 -establishments 4 -minimally 4 -occupant 4 -phases 4 -rowan 4 -nudges 4 -tremendous 4 -novels 4 -pinwheels 4 -buttoned 4 -buckle 4 -reminds 4 -j 4 -brocclie 4 -potholders 4 -claiming 4 -alright 4 -8th 4 -humps 4 -woodpecker 4 -baggies 4 -sickly 4 -wanted 4 -classes 4 -hammertime 4 -1970 4 -crazing 4 -drawbridge 4 -betting 4 -carryout 4 -parched 4 -furled 4 -willows 4 -steadily 4 -versions 4 -latch 4 -navigation 4 -light-colored 4 -jetblue 4 -living-room 4 -secures 4 -exclamation 4 -dirtbikes 4 -trenchcoat 4 -blue/white 4 -elphant 4 -blob 4 -scrapbook 4 -racehorses 4 -flat-bread 4 -trundle 4 -pensively 4 -fiddles 4 -scowling 4 -relaxation 4 -boneless 4 -fiber 4 -rollerblades 4 -churning 4 -coasts 4 -skatebord 4 -gala 4 -para-sails 4 -firewood 4 -movers 4 -cronuts 4 -slideshow 4 -stolen 4 -smirking 4 -rib 4 -shephard 4 -ignore 4 -casks 4 -people.. 4 -z 4 -scroll 4 -mardi 4 -mits 4 -coverlet 4 -tax 4 -interrupted 4 -institution 4 -behinds 4 -tommy 4 -ghetto 4 -cleavage 4 -generations 4 -panning 4 -hippos 4 -octagon 4 -leader 4 -hitchhiking 4 -dozing 4 -kerouac 4 -moguls 4 \ No newline at end of file diff --git a/examples/text_generation/tutorial_generate_text.py b/examples/text_generation/tutorial_generate_text.py deleted file mode 100644 index f17440b62..000000000 --- a/examples/text_generation/tutorial_generate_text.py +++ /dev/null @@ -1,332 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -# Copyright 2019 TensorLayer. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Example of Synced sequence input and output. - -Generate text using LSTM. 
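In a synced sequence task the target sequence is simply the input sequence shifted by one step, so at every position the model learns to predict the next word. A minimal sketch of that pairing (the `words` list here is hypothetical):

>>> words = ['it', 'is', 'a', 'nice', 'day']
>>> x, y = words[:-1], words[1:]
>>> list(zip(x, y))  # y[t] is the word to predict after seeing x[t]
[('it', 'is'), ('is', 'a'), ('a', 'nice'), ('nice', 'day')]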
- -Data: https://github.com/tensorlayer/tensorlayer/tree/master/example/data/ - -""" - -import os -import re -import time - -import nltk -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - -_UNK = "_UNK" - - -def basic_clean_str(string): - """Tokenization/string cleaning for a dataset.""" - string = re.sub(r"\n", " ", string) # '\n' --> ' ' - string = re.sub(r"\'s", " \'s", string) # it's --> it 's - string = re.sub(r"\’s", " \'s", string) - string = re.sub(r"\'ve", " have", string) # they've --> they have - string = re.sub(r"\’ve", " have", string) - string = re.sub(r"\'t", " not", string) # can't --> can not - string = re.sub(r"\’t", " not", string) - string = re.sub(r"\'re", " are", string) # they're --> they are - string = re.sub(r"\’re", " are", string) - string = re.sub(r"\'d", "", string) # I'd (I had, I would) --> I - string = re.sub(r"\’d", "", string) - string = re.sub(r"\'ll", " will", string) # I'll --> I will - string = re.sub(r"\’ll", " will", string) - string = re.sub(r"\“", " ", string) # “a” --> “ a ” - string = re.sub(r"\”", " ", string) - string = re.sub(r"\"", " ", string) # "a" --> " a " - string = re.sub(r"\'", " ", string) # they' --> they ' - string = re.sub(r"\’", " ", string) # they’ --> they ’ - string = re.sub(r"\.", " . ", string) # they. --> they . - string = re.sub(r"\,", " , ", string) # they, --> they , - string = re.sub(r"\!", " ! ", string) - string = re.sub(r"\-", " ", string) # "low-cost" --> low cost - string = re.sub(r"\(", " ", string) # (they) --> ( they) - string = re.sub(r"\)", " ", string) # ( they) --> ( they ) - string = re.sub(r"\]", " ", string) # they] --> they ] - string = re.sub(r"\[", " ", string) # they[ --> they [ - string = re.sub(r"\?", " ", string) # they? --> they ?
- string = re.sub(r"\>", " ", string) # they> --> they > - string = re.sub(r"\<", " ", string) # they< --> they < - string = re.sub(r"\=", " ", string) # easier= --> easier = - string = re.sub(r"\;", " ", string) # easier; --> easier ; - string = re.sub(r"\;", " ", string) - string = re.sub(r"\:", " ", string) # easier: --> easier : - string = re.sub(r"\"", " ", string) # easier" --> easier " - string = re.sub(r"\$", " ", string) # $380 --> $ 380 - string = re.sub(r"\_", " ", string) # _100 --> _ 100 - string = re.sub(r"\s{2,}", " ", string) # Akara is handsome --> Akara is handsome - return string.strip().lower() # lowercase - - -def customized_clean_str(string): - """Tokenization/string cleaning for a datasets.""" - string = re.sub(r"\n", " ", string) # '\n' --> ' ' - string = re.sub(r"\'s", " \'s", string) # it's --> it 's - string = re.sub(r"\’s", " \'s", string) - string = re.sub(r"\'ve", " have", string) # they've --> they have - string = re.sub(r"\’ve", " have", string) - string = re.sub(r"\'t", " not", string) # can't --> can not - string = re.sub(r"\’t", " not", string) - string = re.sub(r"\'re", " are", string) # they're --> they are - string = re.sub(r"\’re", " are", string) - string = re.sub(r"\'d", "", string) # I'd (I had, I would) --> I - string = re.sub(r"\’d", "", string) - string = re.sub(r"\'ll", " will", string) # I'll --> I will - string = re.sub(r"\’ll", " will", string) - string = re.sub(r"\“", " “ ", string) # “a” --> “ a ” - string = re.sub(r"\”", " ” ", string) - string = re.sub(r"\"", " “ ", string) # "a" --> " a " - string = re.sub(r"\'", " ' ", string) # they' --> they ' - string = re.sub(r"\’", " ' ", string) # they’ --> they ' - string = re.sub(r"\.", " . ", string) # they. --> they . - string = re.sub(r"\,", " , ", string) # they, --> they , - string = re.sub(r"\-", " ", string) # "low-cost"--> lost cost - string = re.sub(r"\(", " ( ", string) # (they) --> ( they) - string = re.sub(r"\)", " ) ", string) # ( they) --> ( they ) - string = re.sub(r"\!", " ! ", string) # they! --> they ! - string = re.sub(r"\]", " ] ", string) # they] --> they ] - string = re.sub(r"\[", " [ ", string) # they[ --> they [ - string = re.sub(r"\?", " ? ", string) # they? --> they ? - string = re.sub(r"\>", " > ", string) # they> --> they > - string = re.sub(r"\<", " < ", string) # they< --> they < - string = re.sub(r"\=", " = ", string) # easier= --> easier = - string = re.sub(r"\;", " ; ", string) # easier; --> easier ; - string = re.sub(r"\;", " ; ", string) - string = re.sub(r"\:", " : ", string) # easier: --> easier : - string = re.sub(r"\"", " \" ", string) # easier" --> easier " - string = re.sub(r"\$", " $ ", string) # $380 --> $ 380 - string = re.sub(r"\_", " _ ", string) # _100 --> _ 100 - string = re.sub(r"\s{2,}", " ", string) # Akara is handsome --> Akara is handsome - return string.strip().lower() # lowercase - - -def customized_read_words(input_fpath): # , dictionary): - with open(input_fpath, "r", encoding="utf8") as f: - words = f.read() - # Clean the data - words = customized_clean_str(words) - # Split each word - return words.split() - - -def main_restore_embedding_layer(): - """How to use Embedding layer, and how to convert IDs to vector, - IDs to words, etc. - """ - # Step 1: Build the embedding matrix and load the existing embedding matrix. 
- vocabulary_size = 50000 - embedding_size = 128 - model_file_name = "model_word2vec_50k_128" - batch_size = None - - if not os.path.exists(model_file_name + ".npy"): - raise Exception( - "Pretrained embedding matrix not found. " - "Hint: Please pre-train the default model in " - "`examples/text_word_embedding/tutorial_word2vec_basic.py`." - ) - - print("Load existing embedding matrix and dictionaries") - all_var = tl.files.load_npy_to_any(name=model_file_name + '.npy') - data = all_var['data'] - count = all_var['count'] - dictionary = all_var['dictionary'] - reverse_dictionary = all_var['reverse_dictionary'] - - tl.nlp.save_vocab(count, name='vocab_' + model_file_name + '.txt') - - del all_var, data, count - - class Embedding_Model(Model): - - def __init__(self): - super(Embedding_Model, self).__init__() - self.embedding = Embedding(vocabulary_size, embedding_size) - - def forward(self, inputs): - return self.embedding(inputs) - - model = Embedding_Model() - model.eval() - - # TODO: assign certain parameters to model - model.load_weights(model_file_name + ".hdf5", skip=True, in_order=False) - - # Step 2: Input word(s), output the word vector(s). - word = 'hello' - word_id = dictionary[word] - print('word_id:', word_id) - - words = ['i', 'am', 'tensor', 'layer'] - word_ids = tl.nlp.words_to_word_ids(words, dictionary, _UNK) - context = tl.nlp.word_ids_to_words(word_ids, reverse_dictionary) - print('word_ids:', word_ids) - print('context:', context) - - vector = model(word_id) - print('vector:', vector.shape) - print(vector) - - vectors = model(word_ids) - print('vectors:', vectors.shape) - print(vectors) - - -class Text_Generation_Net(Model): - - def __init__(self, vocab_size, hidden_size, init): - super(Text_Generation_Net, self).__init__() - - self.embedding = Embedding(vocab_size, hidden_size, init, name='embedding') - self.lstm = tl.layers.RNN( - cell=tf.keras.layers.LSTMCell(hidden_size), return_last_output=False, return_last_state=True, - return_seq_2d=True, in_channels=hidden_size - ) - self.out_dense = Dense(vocab_size, in_channels=hidden_size, W_init=init, b_init=init, act=None, name='output') - - def forward(self, inputs, initial_state=None): - embedding_vector = self.embedding(inputs) - lstm_out, final_state = self.lstm(embedding_vector, initial_state=initial_state) - logits = self.out_dense(lstm_out) - return logits, final_state - - -def main_lstm_generate_text(): - """Generate text by Synced sequence input and output.""" - # RNN model and update (description: see tutorial_ptb_lstm.py) - init_scale = 0.1 - learning_rate = 1e-3 - sequence_length = 20 - hidden_size = 200 - max_epoch = 100 - batch_size = 16 - - top_k_list = [1, 3, 5, 10] - print_length = 30 - - model_file_name = "model_generate_text.hdf5" - - # ===== Prepare Data - words = customized_read_words(input_fpath="data/trump/trump_text.txt") - - vocab = tl.nlp.create_vocab([words], word_counts_output_file='vocab.txt', min_word_count=1) - vocab = tl.nlp.Vocabulary('vocab.txt', unk_word="") - vocab_size = vocab.unk_id + 1 - train_data = [vocab.word_to_id(word) for word in words] - - # Set the seed to generate a sentence.
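# The generation loop further below draws each next word from only the k most
# probable tokens rather than taking a greedy argmax. The idea, sketched in
# plain numpy with a hypothetical 5-word distribution (a simplification, not
# the actual tl.nlp.sample_top implementation):
#
#   probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
#   top = np.argsort(probs)[-3:]           # ids of the 3 most likely words
#   p = probs[top] / probs[top].sum()      # renormalise over those ids
#   next_id = np.random.choice(top, p=p)   # words outside the top 3 never occur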
- seed = "it is a" - # seed = basic_clean_str(seed).split() - seed = nltk.tokenize.word_tokenize(seed) - print('seed : %s' % seed) - - init = tl.initializers.random_uniform(-init_scale, init_scale) - - net = Text_Generation_Net(vocab_size, hidden_size, init) - - train_weights = net.trainable_weights - optimizer = tf.optimizers.Adam(lr=learning_rate) - - # ===== Training - - print("\nStart learning a model to generate text") - for i in range(max_epoch): - - print("Epoch: %d/%d" % (i + 1, max_epoch)) - epoch_size = ((len(train_data) // batch_size) - 1) // sequence_length - - start_time = time.time() - costs = 0.0 - iters = 0 - - net.train() - # reset all states at the begining of every epoch - lstm_state = None - for step, (x, y) in enumerate(tl.iterate.ptb_iterator(train_data, batch_size, sequence_length)): - with tf.GradientTape() as tape: - ## compute outputs - logits, lstm_state = net(x, initial_state=lstm_state) - ## compute loss and update model - cost = tl.cost.cross_entropy(logits, tf.reshape(y, [-1]), name='train_loss') - - grad = tape.gradient(cost, train_weights) - optimizer.apply_gradients(zip(grad, train_weights)) - - costs += cost - iters += 1 - - if step % (epoch_size // 10) == 1: - print( - "%.3f perplexity: %.3f speed: %.0f wps" % ( - step * 1.0 / epoch_size, np.exp(costs / iters), - iters * batch_size * sequence_length * batch_size / (time.time() - start_time) - ) - ) - train_perplexity = np.exp(costs / iters) - # print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity)) - print("Epoch: %d/%d Train Perplexity: %.3f" % (i + 1, max_epoch, train_perplexity)) - - net.eval() - # for diversity in diversity_list: - # testing: sample from top k words - for top_k in top_k_list: - # Testing, generate some text from a given seed. - lstm_state = None - outs_id = [vocab.word_to_id(w) for w in seed] - # feed the seed to initialize the state for generation. - for ids in outs_id[:-1]: - a_id = np.asarray(ids).reshape(1, 1) - _, lstm_state = net(a_id, initial_state=lstm_state) - - # feed the last word in seed, and start to generate sentence. - a_id = outs_id[-1] - for _ in range(print_length): - a_id = np.asarray(a_id).reshape(1, 1) - logits, lstm_state = net(a_id, initial_state=lstm_state) - out = tf.nn.softmax(logits) - # Without sampling - # a_id = np.argmax(out[0]) - # Sample from all words, if vocab_size is large, - # this may have numeric error. - # a_id = tl.nlp.sample(out[0], diversity) - # Sample from the top k words. - a_id = tl.nlp.sample_top(out[0].numpy(), top_k=top_k) - outs_id.append(a_id) - sentence = [vocab.id_to_word(w) for w in outs_id] - sentence = " ".join(sentence) - # print(diversity, ':', sentence) - print(top_k, ':', sentence) - - print("Save model") - net.save_weights(model_file_name) - - -if __name__ == '__main__': - # Restore a pretrained embedding matrix - # main_restore_embedding_layer() - - # How to generate text from a given context - main_lstm_generate_text() diff --git a/examples/text_ptb/README.md b/examples/text_ptb/README.md deleted file mode 100644 index d3fd9c9e7..000000000 --- a/examples/text_ptb/README.md +++ /dev/null @@ -1 +0,0 @@ -### Language modeling on Penn Tree Bank (PTB) dataset \ No newline at end of file diff --git a/examples/text_ptb/tutorial_ptb_lstm.py b/examples/text_ptb/tutorial_ptb_lstm.py deleted file mode 100644 index 6f215abba..000000000 --- a/examples/text_ptb/tutorial_ptb_lstm.py +++ /dev/null @@ -1,523 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -r"""Example of Synced sequence input and output. 
- -This is a reimplementation of the TensorFlow official PTB example in: -tensorflow/models/rnn/ptb - -The batch_size can be seen as the number of concurrent computations.\n -As the following example shows, the first batch learns the sequence information by using 0 to 9.\n -The second batch learns the sequence information by using 10 to 19.\n -So it ignores the information from 9 to 10!\n -Only if we set batch_size = 1 will it consider all information from 0 to 20.\n - -The meaning of batch_size here is not the same as in the MNIST example. In the MNIST example, -batch_size reflects how many examples we consider in each iteration, while in the -PTB example, batch_size is the number of concurrent processes (segments) -used to speed up computation. - -Some information will be ignored if batch_size > 1; however, if your dataset -is "long" enough (a text corpus usually has billions of words), the ignored -information would not affect the final result. - -In the PTB tutorial, we set batch_size = 20, so we cut the dataset into 20 segments. -At the beginning of each epoch, we initialize (reset) the 20 RNN states for the 20 -segments, then go through the 20 segments separately. - -The training data will be generated as follows:\n - ->>> train_data = [i for i in range(20)] ->>> for batch in tl.iterate.ptb_iterator(train_data, batch_size=2, num_steps=3): ->>> x, y = batch ->>> print(x, '\n', y) -... [[ 0 1 2] <---x 1st subset/ iteration -... [10 11 12]] -... [[ 1 2 3] <---y -... [11 12 13]] -... -... [[ 3 4 5] <--- 1st batch input 2nd subset/ iteration -... [13 14 15]] <--- 2nd batch input -... [[ 4 5 6] <--- 1st batch target -... [14 15 16]] <--- 2nd batch target -... -... [[ 6 7 8] 3rd subset/ iteration -... [16 17 18]] -... [[ 7 8 9] -... [17 18 19]] - -Hao Dong: This example can also be considered as pre-training of the word -embedding matrix. - -About RNN ---------- -$ Karpathy Blog : http://karpathy.github.io/2015/05/21/rnn-effectiveness/ - -More TensorFlow official RNN examples can be found here --------------------------------------------------------- -$ RNN for PTB : https://www.tensorflow.org/versions/master/tutorials/recurrent/index.html#recurrent-neural-networks -$ Seq2seq : https://www.tensorflow.org/versions/master/tutorials/seq2seq/index.html#sequence-to-sequence-models -$ translation : tensorflow/models/rnn/translate - -Example / benchmark for building a PTB LSTM model. - -Trains the model described in: -(Zaremba et al.) Recurrent Neural Network Regularization -http://arxiv.org/abs/1409.2329 - -There are 3 supported model configurations: -=========================================== -| config | epochs | train | valid | test -=========================================== -| small | 13 | 37.99 | 121.39 | 115.91 -| medium | 39 | 48.45 | 86.16 | 82.07 -| large | 55 | 37.87 | 82.62 | 78.29 -The exact results may vary depending on the random initialization.
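To make the segmenting concrete, an iterator with the same input/output contract as ``tl.iterate.ptb_iterator`` can be sketched in a few lines (a simplified illustration using the file's ``numpy as np`` import, not the library implementation):

>>> def ptb_style_iterator(data, batch_size, num_steps):
...     data = np.asarray(data)
...     segment_len = len(data) // batch_size
...     # cut the corpus into batch_size parallel segments
...     segments = data[:batch_size * segment_len].reshape(batch_size, segment_len)
...     for i in range((segment_len - 1) // num_steps):
...         x = segments[:, i * num_steps:(i + 1) * num_steps]
...         y = segments[:, i * num_steps + 1:(i + 1) * num_steps + 1]
...         yield x, y

Called with ``train_data = [i for i in range(20)]``, ``batch_size=2`` and ``num_steps=3``, it reproduces exactly the batches shown above.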
- -The hyperparameters used in the model: -- init_scale - the initial scale of the weights -- learning_rate - the initial value of the learning rate -- max_grad_norm - the maximum permissible norm of the gradient -- num_layers - the number of LSTM layers -- num_steps - the number of unrolled steps of LSTM -- hidden_size - the number of LSTM units -- max_epoch - the number of epochs trained with the initial learning rate -- max_max_epoch - the total number of epochs for training -- keep_prob - the probability of keeping weights in the dropout layer -- lr_decay - the decay of the learning rate for each epoch after "max_epoch" -- batch_size - the batch size - -The data required for this example is in the data/ dir of the -PTB dataset from Tomas Mikolov's webpage: - -$ wget http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz -$ tar xvf simple-examples.tgz - - -A) use the zero_state function on the cell object - -B) for an rnn, all time steps share weights. We use one matrix to keep all -gate weights. Split by column into 4 parts to get the 4 gate weight matrices. - -""" -import argparse -import sys -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.models import Model - -tl.logging.set_verbosity(tl.logging.DEBUG) - - -def process_args(args): - parser = argparse.ArgumentParser() - - parser.add_argument( - '--model', default='small', choices=['small', 'medium', 'large'], - help="A type of model. Possible options are: small, medium, large." - ) - parameters = parser.parse_args(args) - return parameters - - -class PTB_Net(Model): - - def __init__(self, vocab_size, hidden_size, init, keep): - super(PTB_Net, self).__init__() - - self.embedding = tl.layers.Embedding(vocab_size, hidden_size, init) - self.dropout1 = tl.layers.Dropout(keep=keep) - self.lstm1 = tl.layers.RNN( - cell=tf.keras.layers.LSTMCell(hidden_size), return_last_output=False, return_last_state=True, - return_seq_2d=False, in_channels=hidden_size - ) - self.dropout2 = tl.layers.Dropout(keep=keep) - self.lstm2 = tl.layers.RNN( - cell=tf.keras.layers.LSTMCell(hidden_size), return_last_output=False, return_last_state=True, - return_seq_2d=True, in_channels=hidden_size - ) - self.dropout3 = tl.layers.Dropout(keep=keep) - self.out_dense = tl.layers.Dense(vocab_size, in_channels=hidden_size, W_init=init, b_init=init, act=None) - - def forward(self, inputs, lstm1_initial_state=None, lstm2_initial_state=None): - inputs = self.embedding(inputs) - inputs = self.dropout1(inputs) - lstm1_out, lstm1_state = self.lstm1(inputs, initial_state=lstm1_initial_state) - inputs = self.dropout2(lstm1_out) - lstm2_out, lstm2_state = self.lstm2(inputs, initial_state=lstm2_initial_state) - inputs = self.dropout3(lstm2_out) - logits = self.out_dense(inputs) - return logits, lstm1_state, lstm2_state - - -def main(): - """ - The core of the model consists of an LSTM cell that processes one word at - a time and computes probabilities of the possible continuations of the - sentence. The memory state of the network is initialized with a vector - of zeros and gets updated after reading each word. Also, for computational - reasons, we will process data in mini-batches of size batch_size. 
- - """ - param = process_args(sys.argv[1:]) - - if param.model == "small": - init_scale = 0.1 - learning_rate = 1e-3 - max_grad_norm = 5 - num_steps = 20 - hidden_size = 200 - max_epoch = 4 - max_max_epoch = 13 - keep_prob = 1.0 - lr_decay = 0.5 - batch_size = 20 - vocab_size = 10000 - elif param.model == "medium": - init_scale = 0.05 - learning_rate = 1e-3 - max_grad_norm = 5 - # num_layers = 2 - num_steps = 35 - hidden_size = 650 - max_epoch = 6 - max_max_epoch = 39 - keep_prob = 0.5 - lr_decay = 0.8 - batch_size = 20 - vocab_size = 10000 - elif param.model == "large": - init_scale = 0.04 - learning_rate = 1e-3 - max_grad_norm = 10 - # num_layers = 2 - num_steps = 35 - hidden_size = 1500 - max_epoch = 14 - max_max_epoch = 55 - keep_prob = 0.35 - lr_decay = 1 / 1.15 - batch_size = 20 - vocab_size = 10000 - else: - raise ValueError("Invalid model: %s", param.model) - - # Load PTB dataset - train_data, valid_data, test_data, vocab_size = tl.files.load_ptb_dataset() - # train_data = train_data[0:int(100000/5)] # for fast testing - print('len(train_data) {}'.format(len(train_data))) # 929589 a list of int - print('len(valid_data) {}'.format(len(valid_data))) # 73760 a list of int - print('len(test_data) {}'.format(len(test_data))) # 82430 a list of int - print('vocab_size {}'.format(vocab_size)) # 10000 - - # One int represents one word, the meaning of batch_size here is not the - # same with MNIST example, it is the number of concurrent processes for - # computational reasons. - - init = tf.random_uniform_initializer(-init_scale, init_scale) - net = PTB_Net(hidden_size=hidden_size, vocab_size=vocab_size, init=init, keep=keep_prob) - - # Truncated Backpropagation for training - lr = tf.Variable(0.0, trainable=False) - train_weights = net.weights - optimizer = tf.optimizers.Adam(lr=lr) - - print(net) - - print("\nStart learning a language model by using PTB dataset") - for i in range(max_max_epoch): - # decreases the initial learning rate after several - # epoachs (defined by ``max_epoch``), by multipling a ``lr_decay``. 
new_lr_decay = lr_decay**max(i - max_epoch, 0.0) - lr.assign(learning_rate * new_lr_decay) - - # Training - net.train() - print("Epoch: %d/%d Learning rate: %.3f" % (i + 1, max_max_epoch, lr.value())) - epoch_size = ((len(train_data) // batch_size) - 1) // num_steps - start_time = time.time() - costs = 0.0 - iters = 0 - # reset all states at the beginning of every epoch - lstm1_state = None - lstm2_state = None - - for step, (x, y) in enumerate(tl.iterate.ptb_iterator(train_data, batch_size, num_steps)): - - with tf.GradientTape() as tape: - ## compute outputs - logits, lstm1_state, lstm2_state = net( - x, lstm1_initial_state=lstm1_state, lstm2_initial_state=lstm2_state - ) - ## compute loss and update model - cost = tl.cost.cross_entropy(logits, tf.reshape(y, [-1]), name='train_loss') - - grad, _ = tf.clip_by_global_norm(tape.gradient(cost, train_weights), max_grad_norm) - optimizer.apply_gradients(zip(grad, train_weights)) - - costs += cost - iters += 1 - - if step % (epoch_size // 10) == 10: - print( - "%.3f perplexity: %.3f speed: %.0f wps" % ( - step * 1.0 / epoch_size, np.exp(costs / iters), iters * batch_size * num_steps / - (time.time() - start_time) - ) - ) - train_perplexity = np.exp(costs / iters) - print("Epoch: %d/%d Train Perplexity: %.3f" % (i + 1, max_max_epoch, train_perplexity)) - - # Validation - net.eval() - start_time = time.time() - costs = 0.0 - iters = 0 - # reset all states at the beginning of every epoch - lstm1_state = None - lstm2_state = None - for step, (x, y) in enumerate(tl.iterate.ptb_iterator(valid_data, batch_size, num_steps)): - ## compute outputs - logits, lstm1_state, lstm2_state = net(x, lstm1_initial_state=lstm1_state, lstm2_initial_state=lstm2_state) - ## compute loss (no weight update during validation) - cost = tl.cost.cross_entropy(logits, tf.reshape(y, [-1]), name='valid_loss') - costs += cost - iters += 1 - valid_perplexity = np.exp(costs / iters) - print("Epoch: %d/%d Valid Perplexity: %.3f" % (i + 1, max_max_epoch, valid_perplexity)) - - print("Evaluation") - # Testing - net.eval() - # go through the test set step by step, it will take a while.
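# Perplexity is the exponential of the average per-word cross-entropy: a
# perplexity of N means the model is, on average, as uncertain as a uniform
# choice among N words. A quick sanity check with made-up numbers:
#   costs = 4.76 * 100     # summed cross-entropy over 100 steps (hypothetical)
#   iters = 100
#   np.exp(costs / iters)  # ~116.7
# This is how the train, valid and test figures here are computed.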
- start_time = time.time() - costs = 0.0 - iters = 0 - # reset all states at the beginning - lstm1_state = None - lstm2_state = None - for step, (x, y) in enumerate(tl.iterate.ptb_iterator(test_data, batch_size=1, num_steps=1)): - ## compute outputs - logits, lstm1_state, lstm2_state = net(x, lstm1_initial_state=lstm1_state, lstm2_initial_state=lstm2_state) - ## compute loss (no weight update during testing) - cost = tl.cost.cross_entropy(logits, tf.reshape(y, [-1]), name='test_loss') - costs += cost - iters += 1 - test_perplexity = np.exp(costs / iters) - print("Test Perplexity: %.3f took %.2fs" % (test_perplexity, time.time() - start_time)) - - print( - "More examples: Text generation using Trump's speech data: https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_generate_text.py -- def main_lstm_generate_text():" - ) - - -if __name__ == "__main__": - main() - -# log of SmallConfig -# Start learning a language model by using PTB dataset -# Epoch: 1 Learning rate: 1.000 -# 0.004 perplexity: 5512.735 speed: 4555 wps -# 0.104 perplexity: 841.289 speed: 8823 wps -# 0.204 perplexity: 626.273 speed: 9292 wps -# 0.304 perplexity: 505.628 speed: 9472 wps -# 0.404 perplexity: 435.580 speed: 9551 wps -# 0.504 perplexity: 390.108 speed: 9555 wps -# 0.604 perplexity: 351.379 speed: 9546 wps -# 0.703 perplexity: 324.846 speed: 9579 wps -# 0.803 perplexity: 303.824 speed: 9574 wps -# 0.903 perplexity: 284.468 speed: 9551 wps -# Epoch: 1 Train Perplexity: 269.981 -# Epoch: 1 Valid Perplexity: 178.561 -# Epoch: 2 Learning rate: 1.000 -# 0.004 perplexity: 211.632 speed: 7697 wps -# 0.104 perplexity: 151.509 speed: 9488 wps -# 0.204 perplexity: 158.947 speed: 9674 wps -# 0.304 perplexity: 153.963 speed: 9806 wps -# 0.404 perplexity: 150.938 speed: 9817 wps -# 0.504 perplexity: 148.413 speed: 9824 wps -# 0.604 perplexity: 143.763 speed: 9765 wps -# 0.703 perplexity: 141.616 speed: 9731 wps -# 0.803 perplexity: 139.618 speed: 9781 wps -# 0.903 perplexity: 135.880 speed: 9735 wps -# Epoch: 2 Train Perplexity: 133.771 -# Epoch: 2 Valid Perplexity: 142.595 -# Epoch: 3 Learning rate: 1.000 -# 0.004 perplexity: 146.902 speed: 8345 wps -# 0.104 perplexity: 105.647 speed: 9572 wps -# 0.204 perplexity: 114.261 speed: 9585 wps -# 0.304 perplexity: 111.237 speed: 9586 wps -# 0.404 perplexity: 110.181 speed: 9605 wps -# 0.504 perplexity: 109.383 speed: 9601 wps -# 0.604 perplexity: 106.722 speed: 9635 wps -# 0.703 perplexity: 106.075 speed: 9597 wps -# 0.803 perplexity: 105.481 speed: 9624 wps -# 0.903 perplexity: 103.262 speed: 9618 wps -# Epoch: 3 Train Perplexity: 102.272 -# Epoch: 3 Valid Perplexity: 131.884 -# Epoch: 4 Learning rate: 1.000 -# 0.004 perplexity: 118.127 speed: 7867 wps -# 0.104 perplexity: 85.530 speed: 9330 wps -# 0.204 perplexity: 93.559 speed: 9399 wps -# 0.304 perplexity: 91.141 speed: 9386 wps -# 0.404 perplexity: 90.668 speed: 9462 wps -# 0.504 perplexity: 90.366 speed: 9516 wps -# 0.604 perplexity: 88.479 speed: 9477 wps -# 0.703 perplexity: 88.275 speed: 9533 wps -# 0.803 perplexity: 88.091 speed: 9560 wps -# 0.903 perplexity: 86.430 speed: 9516 wps -# Epoch: 4 Train Perplexity: 85.839 -# Epoch: 4 Valid Perplexity: 128.408 -# Epoch: 5 Learning rate: 1.000 -# 0.004 perplexity: 100.077 speed: 7682 wps -# 0.104 perplexity: 73.856 speed: 9197 wps -# 0.204 perplexity: 81.242 speed: 9266 wps -# 0.304 perplexity: 79.315 speed: 9375 wps -# 0.404 perplexity: 79.009 speed: 9439 wps -# 0.504 perplexity: 78.874 speed: 9377 wps -# 0.604 perplexity: 77.430 speed: 9436 wps -# 0.703 perplexity: 77.415
speed: 9417 wps -# 0.803 perplexity: 77.424 speed: 9407 wps -# 0.903 perplexity: 76.083 speed: 9407 wps -# Epoch: 5 Train Perplexity: 75.719 -# Epoch: 5 Valid Perplexity: 127.057 -# Epoch: 6 Learning rate: 0.500 -# 0.004 perplexity: 87.561 speed: 7130 wps -# 0.104 perplexity: 64.202 speed: 9753 wps -# 0.204 perplexity: 69.518 speed: 9537 wps -# 0.304 perplexity: 66.868 speed: 9647 wps -# 0.404 perplexity: 65.766 speed: 9538 wps -# 0.504 perplexity: 64.967 speed: 9537 wps -# 0.604 perplexity: 63.090 speed: 9565 wps -# 0.703 perplexity: 62.415 speed: 9544 wps -# 0.803 perplexity: 61.751 speed: 9504 wps -# 0.903 perplexity: 60.027 speed: 9482 wps -# Epoch: 6 Train Perplexity: 59.127 -# Epoch: 6 Valid Perplexity: 120.339 -# Epoch: 7 Learning rate: 0.250 -# 0.004 perplexity: 72.069 speed: 7683 wps -# 0.104 perplexity: 53.331 speed: 9526 wps -# 0.204 perplexity: 57.897 speed: 9572 wps -# 0.304 perplexity: 55.557 speed: 9491 wps -# 0.404 perplexity: 54.597 speed: 9483 wps -# 0.504 perplexity: 53.817 speed: 9471 wps -# 0.604 perplexity: 52.147 speed: 9511 wps -# 0.703 perplexity: 51.473 speed: 9497 wps -# 0.803 perplexity: 50.788 speed: 9521 wps -# 0.903 perplexity: 49.203 speed: 9515 wps -# Epoch: 7 Train Perplexity: 48.303 -# Epoch: 7 Valid Perplexity: 120.782 -# Epoch: 8 Learning rate: 0.125 -# 0.004 perplexity: 63.503 speed: 8425 wps -# 0.104 perplexity: 47.324 speed: 9433 wps -# 0.204 perplexity: 51.525 speed: 9653 wps -# 0.304 perplexity: 49.405 speed: 9520 wps -# 0.404 perplexity: 48.532 speed: 9487 wps -# 0.504 perplexity: 47.800 speed: 9610 wps -# 0.604 perplexity: 46.282 speed: 9554 wps -# 0.703 perplexity: 45.637 speed: 9536 wps -# 0.803 perplexity: 44.972 speed: 9493 wps -# 0.903 perplexity: 43.506 speed: 9496 wps -# Epoch: 8 Train Perplexity: 42.653 -# Epoch: 8 Valid Perplexity: 122.119 -# Epoch: 9 Learning rate: 0.062 -# 0.004 perplexity: 59.375 speed: 7158 wps -# 0.104 perplexity: 44.223 speed: 9275 wps -# 0.204 perplexity: 48.269 speed: 9459 wps -# 0.304 perplexity: 46.273 speed: 9564 wps -# 0.404 perplexity: 45.450 speed: 9604 wps -# 0.504 perplexity: 44.749 speed: 9604 wps -# 0.604 perplexity: 43.308 speed: 9619 wps -# 0.703 perplexity: 42.685 speed: 9647 wps -# 0.803 perplexity: 42.022 speed: 9673 wps -# 0.903 perplexity: 40.616 speed: 9678 wps -# Epoch: 9 Train Perplexity: 39.792 -# Epoch: 9 Valid Perplexity: 123.170 -# Epoch: 10 Learning rate: 0.031 -# 0.004 perplexity: 57.333 speed: 7183 wps -# 0.104 perplexity: 42.631 speed: 9592 wps -# 0.204 perplexity: 46.580 speed: 9518 wps -# 0.304 perplexity: 44.625 speed: 9569 wps -# 0.404 perplexity: 43.832 speed: 9576 wps -# 0.504 perplexity: 43.153 speed: 9571 wps -# 0.604 perplexity: 41.761 speed: 9557 wps -# 0.703 perplexity: 41.159 speed: 9524 wps -# 0.803 perplexity: 40.494 speed: 9527 wps -# 0.903 perplexity: 39.111 speed: 9558 wps -# Epoch: 10 Train Perplexity: 38.298 -# Epoch: 10 Valid Perplexity: 123.658 -# Epoch: 11 Learning rate: 0.016 -# 0.004 perplexity: 56.238 speed: 7190 wps -# 0.104 perplexity: 41.771 speed: 9171 wps -# 0.204 perplexity: 45.656 speed: 9415 wps -# 0.304 perplexity: 43.719 speed: 9472 wps -# 0.404 perplexity: 42.941 speed: 9483 wps -# 0.504 perplexity: 42.269 speed: 9494 wps -# 0.604 perplexity: 40.903 speed: 9530 wps -# 0.703 perplexity: 40.314 speed: 9545 wps -# 0.803 perplexity: 39.654 speed: 9580 wps -# 0.903 perplexity: 38.287 speed: 9597 wps -# Epoch: 11 Train Perplexity: 37.477 -# Epoch: 11 Valid Perplexity: 123.523 -# Epoch: 12 Learning rate: 0.008 -# 0.004 perplexity: 55.552 speed: 7317 wps -# 
0.104 perplexity: 41.267 speed: 9234 wps -# 0.204 perplexity: 45.119 speed: 9461 wps -# 0.304 perplexity: 43.204 speed: 9519 wps -# 0.404 perplexity: 42.441 speed: 9453 wps -# 0.504 perplexity: 41.773 speed: 9536 wps -# 0.604 perplexity: 40.423 speed: 9555 wps -# 0.703 perplexity: 39.836 speed: 9576 wps -# 0.803 perplexity: 39.181 speed: 9579 wps -# 0.903 perplexity: 37.827 speed: 9554 wps -# Epoch: 12 Train Perplexity: 37.020 -# Epoch: 12 Valid Perplexity: 123.192 -# Epoch: 13 Learning rate: 0.004 -# 0.004 perplexity: 55.124 speed: 8234 wps -# 0.104 perplexity: 40.970 speed: 9391 wps -# 0.204 perplexity: 44.804 speed: 9525 wps -# 0.304 perplexity: 42.912 speed: 9512 wps -# 0.404 perplexity: 42.162 speed: 9536 wps -# 0.504 perplexity: 41.500 speed: 9630 wps -# 0.604 perplexity: 40.159 speed: 9591 wps -# 0.703 perplexity: 39.574 speed: 9575 wps -# 0.803 perplexity: 38.921 speed: 9613 wps -# 0.903 perplexity: 37.575 speed: 9629 wps -# Epoch: 13 Train Perplexity: 36.771 -# Epoch: 13 Valid Perplexity: 122.917 -# Evaluation -# Test Perplexity: 116.723 took 124.06s - -# MediumConfig -# Epoch: 1 Learning rate: 1.000 -# 0.008 perplexity: 5173.547 speed: 6469 wps -# 0.107 perplexity: 1219.527 speed: 6453 wps -# 0.206 perplexity: 866.163 speed: 6441 wps -# 0.306 perplexity: 695.163 speed: 6428 wps -# 0.405 perplexity: 598.464 speed: 6420 wps -# 0.505 perplexity: 531.875 speed: 6422 wps -# 0.604 perplexity: 477.079 speed: 6425 wps -# 0.704 perplexity: 438.297 speed: 6428 wps -# 0.803 perplexity: 407.928 speed: 6425 wps -# 0.903 perplexity: 381.264 speed: 6429 wps -# Epoch: 1 Train Perplexity: 360.795 -# Epoch: 1 Valid Perplexity: 208.854 -# ... -# Epoch: 39 Learning rate: 0.001 -# 0.008 perplexity: 56.618 speed: 6357 wps -# 0.107 perplexity: 43.375 speed: 6341 wps -# 0.206 perplexity: 47.873 speed: 6336 wps -# 0.306 perplexity: 46.408 speed: 6337 wps -# 0.405 perplexity: 46.327 speed: 6337 wps -# 0.505 perplexity: 46.115 speed: 6335 wps -# 0.604 perplexity: 45.323 speed: 6336 wps -# 0.704 perplexity: 45.286 speed: 6337 wps -# 0.803 perplexity: 45.174 speed: 6336 wps -# 0.903 perplexity: 44.334 speed: 6336 wps -# Epoch: 39 Train Perplexity: 44.021 -# Epoch: 39 Valid Perplexity: 87.516 -# Evaluation -# Test Perplexity: 83.858 took 167.58s diff --git a/examples/text_ptb/tutorial_ptb_lstm_state_is_tuple.py b/examples/text_ptb/tutorial_ptb_lstm_state_is_tuple.py deleted file mode 100644 index 0021a7bfc..000000000 --- a/examples/text_ptb/tutorial_ptb_lstm_state_is_tuple.py +++ /dev/null @@ -1,618 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""Example of Synced sequence input and output. - -This is a reimplementation of the TensorFlow official PTB example in: -tensorflow/models/rnn/ptb - -The batch_size can be seen as the number of concurrent computations.\n -As the following example shows, the first batch learns the sequence information by using words 0 to 9.\n -The second batch learns the sequence information by using words 10 to 19.\n -So it ignores the information from 9 to 10!\n -Only if we set batch_size = 1 will it consider all the information from 0 to 20.\n - -The meaning of batch_size here is not the same as in the MNIST example. In the MNIST example, -batch_size reflects how many examples we consider in each iteration, while in -the PTB example, batch_size is the number of concurrent processes (segments) -used to speed up computation.
- -Some information will be ignored if batch_size > 1; however, if your dataset -is "long" enough (a text corpus usually has billions of words), the ignored -information would not affect the final result. - -In the PTB tutorial, we set batch_size = 20, so we cut the dataset into 20 segments. -At the beginning of each epoch, we initialize (reset) the 20 RNN states for the 20 -segments, then go through the 20 segments separately. - -The training data will be generated as follows:\n - ->>> train_data = [i for i in range(20)] ->>> for batch in tl.iterate.ptb_iterator(train_data, batch_size=2, num_steps=3): ->>> x, y = batch ->>> print(x, '\n', y) -... [[ 0 1 2] <---x 1st subset/ iteration -... [10 11 12]] -... [[ 1 2 3] <---y -... [11 12 13]] -... -... [[ 3 4 5] <--- 1st batch input 2nd subset/ iteration -... [13 14 15]] <--- 2nd batch input -... [[ 4 5 6] <--- 1st batch target -... [14 15 16]] <--- 2nd batch target -... -... [[ 6 7 8] 3rd subset/ iteration -... [16 17 18]] -... [[ 7 8 9] -... [17 18 19]] - -Hao Dong: This example can also be considered as pre-training of the word -embedding matrix. - -About RNN ---------- -$ Karpathy Blog : http://karpathy.github.io/2015/05/21/rnn-effectiveness/ - -More TensorFlow official RNN examples can be found here --------------------------------------------------------- -$ RNN for PTB : https://www.tensorflow.org/versions/master/tutorials/recurrent/index.html#recurrent-neural-networks -$ Seq2seq : https://www.tensorflow.org/versions/master/tutorials/seq2seq/index.html#sequence-to-sequence-models -$ translation : tensorflow/models/rnn/translate - -tensorflow (0.9.0) - -Example / benchmark for building a PTB LSTM model. - -Trains the model described in: -(Zaremba et al.) Recurrent Neural Network Regularization -http://arxiv.org/abs/1409.2329 - -There are 3 supported model configurations: =========================================== | config | epochs | train | valid | test =========================================== | small | 13 | 37.99 | 121.39 | 115.91 | medium | 39 | 48.45 | 86.16 | 82.07 | large | 55 | 37.87 | 82.62 | 78.29 The exact results may vary depending on the random initialization. - -The hyperparameters used in the model: -- init_scale - the initial scale of the weights -- learning_rate - the initial value of the learning rate -- max_grad_norm - the maximum permissible norm of the gradient -- num_layers - the number of LSTM layers -- num_steps - the number of unrolled steps of LSTM -- hidden_size - the number of LSTM units -- max_epoch - the number of epochs trained with the initial learning rate -- max_max_epoch - the total number of epochs for training -- keep_prob - the probability of keeping weights in the dropout layer -- lr_decay - the decay of the learning rate for each epoch after "max_epoch" -- batch_size - the batch size - -The data required for this example is in the data/ dir of the -PTB dataset from Tomas Mikolov's webpage: - -$ wget http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz -$ tar xvf simple-examples.tgz - -A) use the zero_state function on the cell object - -B) for an RNN, all time steps share weights. We use one matrix to keep all -gate weights. Split by column into 4 parts to get the 4 gate weight matrices. - -""" - -import sys -import time - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl - -tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - -flags = tf.app.flags - -flags.DEFINE_string("model", "small", "A type of model.
Possible options are: small, medium, large.") - -if (tf.VERSION >= '1.5'): - # parse flags - flags.FLAGS(sys.argv, known_only=True) - flags.ArgumentParser() - -FLAGS = flags.FLAGS - -tf.logging.set_verbosity(tf.logging.DEBUG) - - -def main(_): - """ - The core of the model consists of an LSTM cell that processes one word at - a time and computes probabilities of the possible continuations of the - sentence. The memory state of the network is initialized with a vector - of zeros and gets updated after reading each word. Also, for computational - reasons, we will process data in mini-batches of size batch_size. - """ - if FLAGS.model == "small": - init_scale = 0.1 - learning_rate = 1. - max_grad_norm = 5 - num_steps = 20 - hidden_size = 200 - max_epoch = 4 - max_max_epoch = 13 - keep_prob = 1.0 - lr_decay = 0.5 - batch_size = 20 - vocab_size = 10000 - elif FLAGS.model == "medium": - init_scale = 0.05 - learning_rate = 1.0 - max_grad_norm = 5 - # num_layers = 2 - num_steps = 35 - hidden_size = 650 - max_epoch = 6 - max_max_epoch = 39 - keep_prob = 0.5 - lr_decay = 0.8 - batch_size = 20 - vocab_size = 10000 - elif FLAGS.model == "large": - init_scale = 0.04 - learning_rate = 1.0 - max_grad_norm = 10 - # num_layers = 2 - num_steps = 35 - hidden_size = 1500 - max_epoch = 14 - max_max_epoch = 55 - keep_prob = 0.35 - lr_decay = 1 / 1.15 - batch_size = 20 - vocab_size = 10000 - else: - raise ValueError("Invalid model: %s" % FLAGS.model) - - # Load PTB dataset - train_data, valid_data, test_data, vocab_size = tl.files.load_ptb_dataset() - # train_data = train_data[0:int(100000/5)] # for fast testing - print('len(train_data) {}'.format(len(train_data))) # 929589 a list of int - print('len(valid_data) {}'.format(len(valid_data))) # 73760 a list of int - print('len(test_data) {}'.format(len(test_data))) # 82430 a list of int - print('vocab_size {}'.format(vocab_size)) # 10000 - - sess = tf.InteractiveSession() - - # One int represents one word; the meaning of batch_size here is not the - # same as in the MNIST example, it is the number of concurrent processes for - # computational reasons. - - # Training and Validation - input_data = tf.placeholder(tf.int32, [batch_size, num_steps]) - targets = tf.placeholder(tf.int32, [batch_size, num_steps]) - # Testing (Evaluation) - input_data_test = tf.placeholder(tf.int32, [1, 1]) - targets_test = tf.placeholder(tf.int32, [1, 1]) - - def inference(x, is_training, num_steps, reuse=None): - """If reuse is True, the inferences use the existing parameters, - then different inferences share the same parameters. - - Note : - - For DynamicRNNLayer, you can set dropout and the number of RNN layers internally.
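For instance (a sketch using the names defined later in this script), the training and validation graphs can share one set of weights by building the model once and then reusing its variable scope:

>>> # build the graph and create the variables
>>> net, lstm1, lstm2 = inference(input_data, is_training=True, num_steps=num_steps, reuse=None)
>>> # reuse the same variables for validation (dropout disabled)
>>> net_val, lstm1_val, lstm2_val = inference(input_data, is_training=False, num_steps=num_steps, reuse=True)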
- """ - print("\nnum_steps : %d, is_training : %s, reuse : %s" % (num_steps, is_training, reuse)) - init = tf.random_uniform_initializer(-init_scale, init_scale) - with tf.variable_scope("model", reuse=reuse): - net = tl.layers.EmbeddingInputlayer(x, vocab_size, hidden_size, init, name='embedding') - net = tl.layers.DropoutLayer(net, keep=keep_prob, is_fix=True, is_train=is_training, name='drop1') - net = tl.layers.RNNLayer( - net, - cell_fn=tf.contrib.rnn.BasicLSTMCell, # tf.nn.rnn_cell.BasicLSTMCell, - cell_init_args={ - 'forget_bias': 0.0, - 'state_is_tuple': True - }, - n_hidden=hidden_size, - initializer=init, - n_steps=num_steps, - return_last=False, - name='basic_lstm1' - ) - lstm1 = net - net = tl.layers.DropoutLayer(net, keep=keep_prob, is_fix=True, is_train=is_training, name='drop2') - net = tl.layers.RNNLayer( - net, - cell_fn=tf.contrib.rnn.BasicLSTMCell, # tf.nn.rnn_cell.BasicLSTMCell, - cell_init_args={ - 'forget_bias': 0.0, - 'state_is_tuple': True - }, - n_hidden=hidden_size, - initializer=init, - n_steps=num_steps, - return_last=False, - return_seq_2d=True, - name='basic_lstm2' - ) - lstm2 = net - # Alternatively, if return_seq_2d=False, in the above RNN layer, - # you can reshape the outputs as follow: - # net = tl.layers.ReshapeLayer(net, - # shape=[-1, int(net.outputs._shape[-1])], name='reshape') - net = tl.layers.DropoutLayer(net, keep=keep_prob, is_fix=True, is_train=is_training, name='drop3') - net = tl.layers.DenseLayer(net, vocab_size, W_init=init, b_init=init, act=None, name='output') - return net, lstm1, lstm2 - - # Inference for Training - net, lstm1, lstm2 = inference(input_data, is_training=True, num_steps=num_steps, reuse=None) - # Inference for Validating - net_val, lstm1_val, lstm2_val = inference(input_data, is_training=False, num_steps=num_steps, reuse=True) - # Inference for Testing (Evaluation) - net_test, lstm1_test, lstm2_test = inference(input_data_test, is_training=False, num_steps=1, reuse=True) - - # sess.run(tf.global_variables_initializer()) - sess.run(tf.global_variables_initializer()) - - def loss_fn(outputs, targets, batch_size): - # See tl.cost.cross_entropy_seq() - # Returns the cost function of Cross-entropy of two sequences, implement - # softmax internally. - # outputs : 2D tensor [batch_size*num_steps, n_units of output layer] - # targets : 2D tensor [batch_size, num_steps], need to be reshaped. - # batch_size : RNN batch_size, number of concurrent processes. - # n_examples = batch_size * num_steps - # so - # cost is the averaged cost of each mini-batch (concurrent process). 
- loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example( - [outputs], [tf.reshape(targets, [-1])], [tf.ones_like(tf.reshape(targets, [-1]), dtype=tf.float32)] - ) - # [tf.ones([batch_size * num_steps])]) - cost = tf.reduce_sum(loss) / batch_size - return cost - - # Cost for Training - cost = loss_fn(net.outputs, targets, batch_size) - # Cost for Validating - cost_val = loss_fn(net_val.outputs, targets, batch_size) - # Cost for Testing (Evaluation) - cost_test = loss_fn(net_test.outputs, targets_test, 1) - - # Truncated Backpropagation for training - with tf.variable_scope('learning_rate'): - lr = tf.Variable(0.0, trainable=False) - tvars = tf.trainable_variables() - grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), max_grad_norm) - optimizer = tf.train.GradientDescentOptimizer(lr) - train_op = optimizer.apply_gradients(zip(grads, tvars)) - - # sess.run(tf.global_variables_initializer()) - sess.run(tf.global_variables_initializer()) - - net.print_params() - net.print_layers() - tl.layers.print_all_variables() - - print("\nStart learning a language model by using PTB dataset") - for i in range(max_max_epoch): - # decrease the initial learning rate after several - # epochs (defined by ``max_epoch``), by multiplying by ``lr_decay``. - new_lr_decay = lr_decay**max(i - max_epoch, 0.0) - sess.run(tf.assign(lr, learning_rate * new_lr_decay)) - - # Training - print("Epoch: %d/%d Learning rate: %.3f" % (i + 1, max_max_epoch, sess.run(lr))) - epoch_size = ((len(train_data) // batch_size) - 1) // num_steps - start_time = time.time() - costs = 0.0 - iters = 0 - # reset all states at the beginning of every epoch - state1 = tl.layers.initialize_rnn_state(lstm1.initial_state) - state2 = tl.layers.initialize_rnn_state(lstm2.initial_state) - for step, (x, y) in enumerate(tl.iterate.ptb_iterator(train_data, batch_size, num_steps)): - feed_dict = { - input_data: x, - targets: y, - lstm1.initial_state.c: state1[0], - lstm1.initial_state.h: state1[1], - lstm2.initial_state.c: state2[0], - lstm2.initial_state.h: state2[1], - } - # For training, enable dropout - feed_dict.update(net.all_drop) - _cost, state1_c, state1_h, state2_c, state2_h, _ = sess.run( - [cost, lstm1.final_state.c, lstm1.final_state.h, lstm2.final_state.c, lstm2.final_state.h, train_op], - feed_dict=feed_dict - ) - state1 = (state1_c, state1_h) - state2 = (state2_c, state2_h) - - costs += _cost - iters += num_steps - - if step % (epoch_size // 10) == 10: - print( - "%.3f perplexity: %.3f speed: %.0f wps" % - (step * 1.0 / epoch_size, np.exp(costs / iters), iters * batch_size / (time.time() - start_time)) - ) - train_perplexity = np.exp(costs / iters) - print("Epoch: %d/%d Train Perplexity: %.3f" % (i + 1, max_max_epoch, train_perplexity)) - - # Validation - start_time = time.time() - costs = 0.0 - iters = 0 - # reset all states at the beginning of every epoch - state1 = tl.layers.initialize_rnn_state(lstm1_val.initial_state) - state2 = tl.layers.initialize_rnn_state(lstm2_val.initial_state) - for step, (x, y) in enumerate(tl.iterate.ptb_iterator(valid_data, batch_size, num_steps)): - feed_dict = { - input_data: x, - targets: y, - lstm1_val.initial_state.c: state1[0], - lstm1_val.initial_state.h: state1[1], - lstm2_val.initial_state.c: state2[0], - lstm2_val.initial_state.h: state2[1], - } - _cost, state1_c, state1_h, state2_c, state2_h, _ = sess.run( - [ - cost_val, lstm1_val.final_state.c, lstm1_val.final_state.h, lstm2_val.final_state.c, - lstm2_val.final_state.h, - tf.no_op() - ], feed_dict=feed_dict - ) - state1 = (state1_c,
state1_h) - state2 = (state2_c, state2_h) - costs += _cost - iters += num_steps - valid_perplexity = np.exp(costs / iters) - print("Epoch: %d/%d Valid Perplexity: %.3f" % (i + 1, max_max_epoch, valid_perplexity)) - - print("Evaluation") - # Testing - # go through the test set step by step, it will take a while. - start_time = time.time() - costs = 0.0 - iters = 0 - # reset all states at the begining - state1 = tl.layers.initialize_rnn_state(lstm1_test.initial_state) - state2 = tl.layers.initialize_rnn_state(lstm2_test.initial_state) - for step, (x, y) in enumerate(tl.iterate.ptb_iterator(test_data, batch_size=1, num_steps=1)): - feed_dict = { - input_data_test: x, - targets_test: y, - lstm1_test.initial_state.c: state1[0], - lstm1_test.initial_state.h: state1[1], - lstm2_test.initial_state.c: state2[0], - lstm2_test.initial_state.h: state2[1], - } - _cost, state1_c, state1_h, state2_c, state2_h = sess.run( - [ - cost_test, - lstm1_test.final_state.c, - lstm1_test.final_state.h, - lstm2_test.final_state.c, - lstm2_test.final_state.h, - ], feed_dict=feed_dict - ) - state1 = (state1_c, state1_h) - state2 = (state2_c, state2_h) - costs += _cost - iters += 1 - test_perplexity = np.exp(costs / iters) - print("Test Perplexity: %.3f took %.2fs" % (test_perplexity, time.time() - start_time)) - - print( - "More example: Text generation using Trump's speech data: https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_generate_text.py -- def main_lstm_generate_text():" - ) - - -if __name__ == "__main__": - tf.app.run() - -# log of SmallConfig -# Start learning a language model by using PTB dataset -# Epoch: 1 Learning rate: 1.000 -# 0.004 perplexity: 5512.735 speed: 4555 wps -# 0.104 perplexity: 841.289 speed: 8823 wps -# 0.204 perplexity: 626.273 speed: 9292 wps -# 0.304 perplexity: 505.628 speed: 9472 wps -# 0.404 perplexity: 435.580 speed: 9551 wps -# 0.504 perplexity: 390.108 speed: 9555 wps -# 0.604 perplexity: 351.379 speed: 9546 wps -# 0.703 perplexity: 324.846 speed: 9579 wps -# 0.803 perplexity: 303.824 speed: 9574 wps -# 0.903 perplexity: 284.468 speed: 9551 wps -# Epoch: 1 Train Perplexity: 269.981 -# Epoch: 1 Valid Perplexity: 178.561 -# Epoch: 2 Learning rate: 1.000 -# 0.004 perplexity: 211.632 speed: 7697 wps -# 0.104 perplexity: 151.509 speed: 9488 wps -# 0.204 perplexity: 158.947 speed: 9674 wps -# 0.304 perplexity: 153.963 speed: 9806 wps -# 0.404 perplexity: 150.938 speed: 9817 wps -# 0.504 perplexity: 148.413 speed: 9824 wps -# 0.604 perplexity: 143.763 speed: 9765 wps -# 0.703 perplexity: 141.616 speed: 9731 wps -# 0.803 perplexity: 139.618 speed: 9781 wps -# 0.903 perplexity: 135.880 speed: 9735 wps -# Epoch: 2 Train Perplexity: 133.771 -# Epoch: 2 Valid Perplexity: 142.595 -# Epoch: 3 Learning rate: 1.000 -# 0.004 perplexity: 146.902 speed: 8345 wps -# 0.104 perplexity: 105.647 speed: 9572 wps -# 0.204 perplexity: 114.261 speed: 9585 wps -# 0.304 perplexity: 111.237 speed: 9586 wps -# 0.404 perplexity: 110.181 speed: 9605 wps -# 0.504 perplexity: 109.383 speed: 9601 wps -# 0.604 perplexity: 106.722 speed: 9635 wps -# 0.703 perplexity: 106.075 speed: 9597 wps -# 0.803 perplexity: 105.481 speed: 9624 wps -# 0.903 perplexity: 103.262 speed: 9618 wps -# Epoch: 3 Train Perplexity: 102.272 -# Epoch: 3 Valid Perplexity: 131.884 -# Epoch: 4 Learning rate: 1.000 -# 0.004 perplexity: 118.127 speed: 7867 wps -# 0.104 perplexity: 85.530 speed: 9330 wps -# 0.204 perplexity: 93.559 speed: 9399 wps -# 0.304 perplexity: 91.141 speed: 9386 wps -# 0.404 perplexity: 90.668 speed: 
9462 wps -# 0.504 perplexity: 90.366 speed: 9516 wps -# 0.604 perplexity: 88.479 speed: 9477 wps -# 0.703 perplexity: 88.275 speed: 9533 wps -# 0.803 perplexity: 88.091 speed: 9560 wps -# 0.903 perplexity: 86.430 speed: 9516 wps -# Epoch: 4 Train Perplexity: 85.839 -# Epoch: 4 Valid Perplexity: 128.408 -# Epoch: 5 Learning rate: 1.000 -# 0.004 perplexity: 100.077 speed: 7682 wps -# 0.104 perplexity: 73.856 speed: 9197 wps -# 0.204 perplexity: 81.242 speed: 9266 wps -# 0.304 perplexity: 79.315 speed: 9375 wps -# 0.404 perplexity: 79.009 speed: 9439 wps -# 0.504 perplexity: 78.874 speed: 9377 wps -# 0.604 perplexity: 77.430 speed: 9436 wps -# 0.703 perplexity: 77.415 speed: 9417 wps -# 0.803 perplexity: 77.424 speed: 9407 wps -# 0.903 perplexity: 76.083 speed: 9407 wps -# Epoch: 5 Train Perplexity: 75.719 -# Epoch: 5 Valid Perplexity: 127.057 -# Epoch: 6 Learning rate: 0.500 -# 0.004 perplexity: 87.561 speed: 7130 wps -# 0.104 perplexity: 64.202 speed: 9753 wps -# 0.204 perplexity: 69.518 speed: 9537 wps -# 0.304 perplexity: 66.868 speed: 9647 wps -# 0.404 perplexity: 65.766 speed: 9538 wps -# 0.504 perplexity: 64.967 speed: 9537 wps -# 0.604 perplexity: 63.090 speed: 9565 wps -# 0.703 perplexity: 62.415 speed: 9544 wps -# 0.803 perplexity: 61.751 speed: 9504 wps -# 0.903 perplexity: 60.027 speed: 9482 wps -# Epoch: 6 Train Perplexity: 59.127 -# Epoch: 6 Valid Perplexity: 120.339 -# Epoch: 7 Learning rate: 0.250 -# 0.004 perplexity: 72.069 speed: 7683 wps -# 0.104 perplexity: 53.331 speed: 9526 wps -# 0.204 perplexity: 57.897 speed: 9572 wps -# 0.304 perplexity: 55.557 speed: 9491 wps -# 0.404 perplexity: 54.597 speed: 9483 wps -# 0.504 perplexity: 53.817 speed: 9471 wps -# 0.604 perplexity: 52.147 speed: 9511 wps -# 0.703 perplexity: 51.473 speed: 9497 wps -# 0.803 perplexity: 50.788 speed: 9521 wps -# 0.903 perplexity: 49.203 speed: 9515 wps -# Epoch: 7 Train Perplexity: 48.303 -# Epoch: 7 Valid Perplexity: 120.782 -# Epoch: 8 Learning rate: 0.125 -# 0.004 perplexity: 63.503 speed: 8425 wps -# 0.104 perplexity: 47.324 speed: 9433 wps -# 0.204 perplexity: 51.525 speed: 9653 wps -# 0.304 perplexity: 49.405 speed: 9520 wps -# 0.404 perplexity: 48.532 speed: 9487 wps -# 0.504 perplexity: 47.800 speed: 9610 wps -# 0.604 perplexity: 46.282 speed: 9554 wps -# 0.703 perplexity: 45.637 speed: 9536 wps -# 0.803 perplexity: 44.972 speed: 9493 wps -# 0.903 perplexity: 43.506 speed: 9496 wps -# Epoch: 8 Train Perplexity: 42.653 -# Epoch: 8 Valid Perplexity: 122.119 -# Epoch: 9 Learning rate: 0.062 -# 0.004 perplexity: 59.375 speed: 7158 wps -# 0.104 perplexity: 44.223 speed: 9275 wps -# 0.204 perplexity: 48.269 speed: 9459 wps -# 0.304 perplexity: 46.273 speed: 9564 wps -# 0.404 perplexity: 45.450 speed: 9604 wps -# 0.504 perplexity: 44.749 speed: 9604 wps -# 0.604 perplexity: 43.308 speed: 9619 wps -# 0.703 perplexity: 42.685 speed: 9647 wps -# 0.803 perplexity: 42.022 speed: 9673 wps -# 0.903 perplexity: 40.616 speed: 9678 wps -# Epoch: 9 Train Perplexity: 39.792 -# Epoch: 9 Valid Perplexity: 123.170 -# Epoch: 10 Learning rate: 0.031 -# 0.004 perplexity: 57.333 speed: 7183 wps -# 0.104 perplexity: 42.631 speed: 9592 wps -# 0.204 perplexity: 46.580 speed: 9518 wps -# 0.304 perplexity: 44.625 speed: 9569 wps -# 0.404 perplexity: 43.832 speed: 9576 wps -# 0.504 perplexity: 43.153 speed: 9571 wps -# 0.604 perplexity: 41.761 speed: 9557 wps -# 0.703 perplexity: 41.159 speed: 9524 wps -# 0.803 perplexity: 40.494 speed: 9527 wps -# 0.903 perplexity: 39.111 speed: 9558 wps -# Epoch: 10 Train Perplexity: 
38.298 -# Epoch: 10 Valid Perplexity: 123.658 -# Epoch: 11 Learning rate: 0.016 -# 0.004 perplexity: 56.238 speed: 7190 wps -# 0.104 perplexity: 41.771 speed: 9171 wps -# 0.204 perplexity: 45.656 speed: 9415 wps -# 0.304 perplexity: 43.719 speed: 9472 wps -# 0.404 perplexity: 42.941 speed: 9483 wps -# 0.504 perplexity: 42.269 speed: 9494 wps -# 0.604 perplexity: 40.903 speed: 9530 wps -# 0.703 perplexity: 40.314 speed: 9545 wps -# 0.803 perplexity: 39.654 speed: 9580 wps -# 0.903 perplexity: 38.287 speed: 9597 wps -# Epoch: 11 Train Perplexity: 37.477 -# Epoch: 11 Valid Perplexity: 123.523 -# Epoch: 12 Learning rate: 0.008 -# 0.004 perplexity: 55.552 speed: 7317 wps -# 0.104 perplexity: 41.267 speed: 9234 wps -# 0.204 perplexity: 45.119 speed: 9461 wps -# 0.304 perplexity: 43.204 speed: 9519 wps -# 0.404 perplexity: 42.441 speed: 9453 wps -# 0.504 perplexity: 41.773 speed: 9536 wps -# 0.604 perplexity: 40.423 speed: 9555 wps -# 0.703 perplexity: 39.836 speed: 9576 wps -# 0.803 perplexity: 39.181 speed: 9579 wps -# 0.903 perplexity: 37.827 speed: 9554 wps -# Epoch: 12 Train Perplexity: 37.020 -# Epoch: 12 Valid Perplexity: 123.192 -# Epoch: 13 Learning rate: 0.004 -# 0.004 perplexity: 55.124 speed: 8234 wps -# 0.104 perplexity: 40.970 speed: 9391 wps -# 0.204 perplexity: 44.804 speed: 9525 wps -# 0.304 perplexity: 42.912 speed: 9512 wps -# 0.404 perplexity: 42.162 speed: 9536 wps -# 0.504 perplexity: 41.500 speed: 9630 wps -# 0.604 perplexity: 40.159 speed: 9591 wps -# 0.703 perplexity: 39.574 speed: 9575 wps -# 0.803 perplexity: 38.921 speed: 9613 wps -# 0.903 perplexity: 37.575 speed: 9629 wps -# Epoch: 13 Train Perplexity: 36.771 -# Epoch: 13 Valid Perplexity: 122.917 -# Evaluation -# Test Perplexity: 116.723 took 124.06s - -# MediumConfig -# Epoch: 1 Learning rate: 1.000 -# 0.008 perplexity: 5173.547 speed: 6469 wps -# 0.107 perplexity: 1219.527 speed: 6453 wps -# 0.206 perplexity: 866.163 speed: 6441 wps -# 0.306 perplexity: 695.163 speed: 6428 wps -# 0.405 perplexity: 598.464 speed: 6420 wps -# 0.505 perplexity: 531.875 speed: 6422 wps -# 0.604 perplexity: 477.079 speed: 6425 wps -# 0.704 perplexity: 438.297 speed: 6428 wps -# 0.803 perplexity: 407.928 speed: 6425 wps -# 0.903 perplexity: 381.264 speed: 6429 wps -# Epoch: 1 Train Perplexity: 360.795 -# Epoch: 1 Valid Perplexity: 208.854 -# ... -# Epoch: 39 Learning rate: 0.001 -# 0.008 perplexity: 56.618 speed: 6357 wps -# 0.107 perplexity: 43.375 speed: 6341 wps -# 0.206 perplexity: 47.873 speed: 6336 wps -# 0.306 perplexity: 46.408 speed: 6337 wps -# 0.405 perplexity: 46.327 speed: 6337 wps -# 0.505 perplexity: 46.115 speed: 6335 wps -# 0.604 perplexity: 45.323 speed: 6336 wps -# 0.704 perplexity: 45.286 speed: 6337 wps -# 0.803 perplexity: 45.174 speed: 6336 wps -# 0.903 perplexity: 44.334 speed: 6336 wps -# Epoch: 39 Train Perplexity: 44.021 -# Epoch: 39 Valid Perplexity: 87.516 -# Evaluation -# Test Perplexity: 83.858 took 167.58s diff --git a/examples/text_word_embedding/tutorial_word2vec_basic.py b/examples/text_word_embedding/tutorial_word2vec_basic.py deleted file mode 100644 index d7bc63fbc..000000000 --- a/examples/text_word_embedding/tutorial_word2vec_basic.py +++ /dev/null @@ -1,373 +0,0 @@ -# Copyright 2019 TensorLayer. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Vector Representations of Words. - -This is the minimalistic reimplementation of -tensorflow/examples/tutorials/word2vec/word2vec_basic.py -This basic example contains the code needed to download some data, -train on it a bit and visualize the result by using t-SNE. - -Once you get comfortable with reading and running the basic version, -you can graduate to -tensorflow/models/embedding/word2vec.py -which is a more serious implementation that showcases some more advanced -TensorFlow principles about how to efficiently use threads to move data -into a text model, how to checkpoint during training, etc. - -If your model is no longer I/O bound but you want still more performance, you -can take things further by writing your own TensorFlow Ops, as described in -Adding a New Op. Again we've provided an example of this for the Skip-Gram case -tensorflow/models/embedding/word2vec_optimized.py. - -Link ------- -https://www.tensorflow.org/versions/r0.9/tutorials/word2vec/index.html#vector-representations-of-words - -""" - -import argparse -import os -import time - -import numpy as np -import tensorflow as tf -from six.moves import xrange # pylint: disable=redefined-builtin - -import tensorlayer as tl -import wget - -parser = argparse.ArgumentParser() - -parser.add_argument( - "--model", default='one', type=str, required=False, help="The model name. It can be 'one', 'two', 'three', 'four'." -) - -FLAGS = parser.parse_args() - - -def main_word2vec_basic(): - - # Step 1: Download the data, read the context into a list of strings. - # Set hyperparameters. - words = tl.files.load_matt_mahoney_text8_dataset() - data_size = len(words) - print(data_size) # 17005207 - print(words[0:10]) # ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first', 'used', 'against'] - # exit() - - resume = False # load existing model, data and dictionaries - _UNK = "_UNK" - - if FLAGS.model == "one": - # toy setting (tensorflow/examples/tutorials/word2vec/word2vec_basic.py) - vocabulary_size = 50000 # maximum number of word in vocabulary - batch_size = 128 - embedding_size = 128 # Dimension of the embedding vector (hidden layer). - skip_window = 1 # How many words to consider left and right. - num_skips = 2 # How many times to reuse an input to generate a label. - # (should be double of 'skip_window' so as to - # use both left and right words) - num_sampled = 64 # Number of negative examples to sample. 
- # more negative samples, higher loss - learning_rate = 1.0 - n_epoch = 20 - model_file_name = "model_word2vec_50k_128" - # Eval 2084/15851 accuracy = 15.7% - elif FLAGS.model == "two": - # (tensorflow/models/embedding/word2vec.py) - vocabulary_size = 80000 - batch_size = 20 # Note: small batch_size need more steps for a Epoch - embedding_size = 200 - skip_window = 5 - num_skips = 10 - num_sampled = 100 - learning_rate = 0.2 - n_epoch = 15 - model_file_name = "model_word2vec_80k_200" - # 7.9% - elif FLAGS.model == "three": - # (tensorflow/models/embedding/word2vec_optimized.py) - vocabulary_size = 80000 - batch_size = 500 - embedding_size = 200 - skip_window = 5 - num_skips = 10 - num_sampled = 25 - learning_rate = 0.025 - n_epoch = 20 - model_file_name = "model_word2vec_80k_200_opt" - # bad 0% - elif FLAGS.model == "four": - # see: Learning word embeddings efficiently with noise-contrastive estimation - vocabulary_size = 80000 - batch_size = 100 - embedding_size = 600 - skip_window = 5 - num_skips = 10 - num_sampled = 25 - learning_rate = 0.03 - n_epoch = 200 * 10 - model_file_name = "model_word2vec_80k_600" - # bad - else: - raise Exception("Invalid model: %s" % FLAGS.model) - - num_steps = int((data_size / batch_size) * n_epoch) # total number of iteration - - print('%d Steps in a Epoch, total Epochs %d' % (int(data_size / batch_size), n_epoch)) - print(' learning_rate: %f' % learning_rate) - print(' batch_size: %d' % batch_size) - - # Step 2: Build the dictionary and replace rare words with 'UNK' token. - print() - if resume: - print("Load existing data and dictionaries" + "!" * 10) - all_var = tl.files.load_npy_to_any(name=model_file_name + '.npy') - data = all_var['data'] - count = all_var['count'] - dictionary = all_var['dictionary'] - reverse_dictionary = all_var['reverse_dictionary'] - else: - data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True, _UNK) - - print( - 'Most 5 common words (+UNK)', count[:5] - ) # [['UNK', 418391], (b'the', 1061396), (b'of', 593677), (b'and', 416629), (b'one', 411764)] - print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]]) - # [5243, 3081, 12, 6, 195, 2, 3135, 46, 59, 156] [b'anarchism', b'originated', b'as', b'a', b'term', b'of', b'abuse', b'first', b'used', b'against'] - - del words # Hint to reduce memory. - - # Step 3: Function to generate a training batch for the Skip-Gram model. - print() - - batch, labels, data_index = tl.nlp.generate_skip_gram_batch( - data=data, batch_size=8, num_skips=4, skip_window=2, data_index=0 - ) - for i in range(8): - print(batch[i], reverse_dictionary[batch[i]], '->', labels[i, 0], reverse_dictionary[labels[i, 0]]) - - batch, labels, data_index = tl.nlp.generate_skip_gram_batch( - data=data, batch_size=8, num_skips=2, skip_window=1, data_index=0 - ) - for i in range(8): - print(batch[i], reverse_dictionary[batch[i]], '->', labels[i, 0], reverse_dictionary[labels[i, 0]]) - - # Step 4: Build a Skip-Gram model. - print() - - # We pick a random validation set to sample nearest neighbors. Here we limit the - # validation samples to the words that have a low numeric ID, which by - # construction are also the most frequent. - valid_size = 16 # Random set of words to evaluate similarity on. - valid_window = 100 # Only pick dev samples in the head of the distribution. 
- valid_examples = np.random.choice(valid_window, valid_size, replace=False) - # a list of 'valid_size' integers smaller than 'valid_window' - # print(valid_examples) # [90 85 20 33 35 62 37 63 88 38 82 58 83 59 48 64] - # n_epoch = int(num_steps / batch_size) - - # train_inputs is a row vector, a input is an integer id of single word. - # train_labels is a column vector, a label is an integer id of single word. - # valid_dataset is a column vector, a valid set is an integer id of single word. - valid_dataset = tf.constant(valid_examples, dtype=tf.int32) - - # Look up embeddings for inputs. - inputs = tl.layers.Input([batch_size], dtype=tf.int32) - labels = tl.layers.Input([batch_size, 1], dtype=tf.int32) - - emb_net = tl.layers.Word2vecEmbedding( - vocabulary_size=vocabulary_size, - embedding_size=embedding_size, - num_sampled=num_sampled, - activate_nce_loss=True, # nce loss is activated - nce_loss_args={}, - E_init=tl.initializers.random_uniform(minval=-1.0, maxval=1.0), - nce_W_init=tl.initializers.truncated_normal(stddev=float(1.0 / np.sqrt(embedding_size))), - nce_b_init=tl.initializers.constant(value=0.0), - name='word2vec_layer', - ) - emb, nce = emb_net([inputs, labels]) - - model = tl.models.Model(inputs=[inputs, labels], outputs=[emb, nce], name="word2vec_model") - - # Compute the average NCE loss for the batch. - # tf.nce_loss automatically draws a new sample of the negative labels - # each time we evaluate the loss. - - # Construct the optimizer. Note: AdamOptimizer is very slow in this case - optimizer = tf.optimizers.Adagrad(learning_rate, initial_accumulator_value=0.1) - - # normalized embedding - normalized_embeddings = emb_net.normalized_embeddings - - # Step 5: Start training. - model.train() - - if resume: - print("Load existing model" + "!" * 10) - model.load_weights(filepath=model_file_name + '.hdf5') - - # save vocabulary to txt - tl.nlp.save_vocab(count, name='vocab_text8.txt') - - average_loss = 0 - step = 0 - print_freq = 2000 - while step < num_steps: - start_time = time.time() - batch_inputs, batch_labels, data_index = tl.nlp.generate_skip_gram_batch( - data=data, batch_size=batch_size, num_skips=num_skips, skip_window=skip_window, data_index=data_index - ) - - # We perform one update step by evaluating the train_op (including it - # in the list of returned values for sess.run() - - with tf.GradientTape() as tape: - outputs, nce_cost = model([batch_inputs, batch_labels]) - - grad = tape.gradient(nce_cost, model.trainable_weights) - optimizer.apply_gradients(zip(grad, model.trainable_weights)) - - average_loss += nce_cost - - if step % print_freq == 0: - if step > 0: - average_loss /= print_freq - print("Average loss at step %d/%d. loss: %f took: %fs/per step" % \ - (step, num_steps, average_loss, time.time() - start_time)) - average_loss = 0 - - # Prints out nearby words given a list of words. - # Note that this is expensive (~20% slowdown if computed every 500 steps) - if step % (print_freq * 5) == 0: - - # Compute the cosine similarity between minibatch examples and all embeddings. - # For simple visualization of validation set. - valid_embed = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset) - sim = tf.matmul(valid_embed, normalized_embeddings, transpose_b=True) - sim = sim.numpy() - # multiply all valid word vector with all word vector. - # transpose_b=True, normalized_embeddings is transposed before multiplication. 
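# Shape sketch (using the defaults above): valid_embed is [16, embedding_size] and
# normalized_embeddings is [vocabulary_size, embedding_size], so sim has shape
# [16, vocabulary_size]; because both operands are L2-normalized, each entry of
# sim is the cosine similarity between one validation word and one vocabulary
# word, which is why the argsort on -sim[i, :] below ranks nearest neighbours.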
- - for i in xrange(valid_size): - valid_word = reverse_dictionary[valid_examples[i]] - top_k = 8 # number of nearest neighbors to print - nearest = (-sim[i, :]).argsort()[1:top_k + 1] - log_str = "Nearest to %s:" % valid_word - for k in xrange(top_k): - close_word = reverse_dictionary[nearest[k]] - log_str = "%s %s," % (log_str, close_word) - print(log_str) - - if (step % (print_freq * 20) == 0) and (step != 0): - print("Save model, data and dictionaries" + "!" * 10) - model.save_weights(filepath=model_file_name + ".hdf5") - tl.files.save_any_to_npy( - save_dict={ - 'data': data, - 'count': count, - 'dictionary': dictionary, - 'reverse_dictionary': reverse_dictionary - }, name=model_file_name + '.npy' - ) - - # if step == num_steps-1: - # keeptrain = input("Training %d finished enter 1 to keep training: " % num_steps) - # if keeptrain == '1': - # step = 0 - # learning_rate = float(input("Input new learning rate: ")) - # train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) - step += 1 - - # Step 6: Visualize the normalized embedding matrix by t-SNE. - print() - - final_embeddings = normalized_embeddings #.eval() - tl.visualize.tsne_embedding(final_embeddings, reverse_dictionary, plot_only=500, \ - second=5, saveable=False, name='word2vec_basic') - - # Step 7: Evaluate by analogy questions. see tensorflow/models/embedding/word2vec_optimized.py - print() - model.eval() - - # from tensorflow/models/embedding/word2vec.py - if not os.path.exists("questions-words.txt"): - print("Downloading file 'questions-words.txt'") - wget.download('http://download.tensorflow.org/data/questions-words.txt') - - analogy_questions = tl.nlp.read_analogies_file(eval_file='questions-words.txt', word2id=dictionary) - # For each question (row in dist), find the top 'n_answer' words. - n_answer = 4 - - def predict(analogy): - # The eval feeds three vectors of word ids for a, b, c, each of - # which is of size N, where N is the number of analogies we want to - # evaluate in one batch. - analogy_a = analogy[:, 0] # [N] - analogy_b = analogy[:, 1] # [N] - analogy_c = analogy[:, 2] # [N] - # Each row of a_emb, b_emb, c_emb is a word's embedding vector. - # They all have the shape [N, emb_dim] - a_emb = tf.gather(normalized_embeddings, analogy_a) # a's embs - b_emb = tf.gather(normalized_embeddings, analogy_b) # b's embs - c_emb = tf.gather(normalized_embeddings, analogy_c) # c's embs - # We expect that d's embedding vectors on the unit hyper-sphere is - # near: c_emb + (b_emb - a_emb), which has the shape [N, emb_dim]. - # Bangkok Thailand Tokyo Japan -> Thailand - Bangkok = Japan - Tokyo - # Japan = Tokyo + (Thailand - Bangkok) - # d = c + (b - a) - target = c_emb + (b_emb - a_emb) - # Compute cosine distance between each pair of target and vocab. - # dist has shape [N, vocab_size]. - dist = tf.matmul(target, normalized_embeddings, transpose_b=True) - """Predict the top 4 answers for analogy questions.""" - _, pred_idx = tf.nn.top_k(dist, n_answer) - - return pred_idx - - # Evaluate analogy questions and reports accuracy. - # i.e. How many questions we get right at precision@1. 
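# Worked example (reusing the analogy from the comments in predict() above):
# for the question [italy, rome, france, paris], predict() returns the 4 words
# nearest to france + (rome - italy); the question counts as correct only if
# paris appears among them before any word other than italy, rome or france.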
- correct = 0 - total = analogy_questions.shape[0] - start = 0 - while start < total: - limit = start + 2500 - sub = analogy_questions[start:limit, :] # question - idx = predict(sub) # 4 answers for each question - # print('question:', tl.nlp.word_ids_to_words(sub[0], reverse_dictionary)) - # print('answers:', tl.nlp.word_ids_to_words(idx[0], reverse_dictionary)) - start = limit - for question in xrange(sub.shape[0]): - for j in xrange(n_answer): - # if one of the top 4 answers in correct, win ! - if idx[question, j] == sub[question, 3]: - # Bingo! We predicted correctly. E.g., [italy, rome, france, paris]. - print( - j + 1, tl.nlp.word_ids_to_words([idx[question, j]], reverse_dictionary), ':', - tl.nlp.word_ids_to_words(sub[question, :], reverse_dictionary) - ) - correct += 1 - break - elif idx[question, j] in sub[question, :3]: - # We need to skip words already in the question. - continue - else: - # The correct label is not the precision@1 - break - print("Eval %4d/%d accuracy = %4.1f%%" % (correct, total, correct * 100.0 / total)) - - -if __name__ == '__main__': - main_word2vec_basic() diff --git a/examples/text_word_embedding/word2vec_basic.pdf b/examples/text_word_embedding/word2vec_basic.pdf deleted file mode 100644 index 6dc5b9221..000000000 Binary files a/examples/text_word_embedding/word2vec_basic.pdf and /dev/null differ diff --git a/examples/tutorial_work_with_onnx.py b/examples/tutorial_work_with_onnx.py deleted file mode 100644 index 4d9de2cf8..000000000 --- a/examples/tutorial_work_with_onnx.py +++ /dev/null @@ -1,343 +0,0 @@ -#! /usr/bin/env python3 -# -*- coding: utf-8 -*- -r""" -Play with ONNX models in TensorLayer. - -This tutorial is corresponding to the onnx-tf tutorial: -https://github.com/onnx/tutorials/blob/7b549ae622ff8d74a5f5e0c32e109267f4c9ccae/tutorials/OnnxTensorflowExport.ipynb - -Introduction ----------------- -ONNX is an open-source specification for neural models. It has the following components: -- A definition of an extensible computation graph model. -- Definitions of standard data types. -- Definitions of built-in operators -Caffe2, PyTorch, Microsoft Cognitive Toolkit, Apache MXNet and other tools are developing ONNX support. Enabling interoperability between different frameworks and streamlining the path from research to production will increase the speed of innovation in the AI community. - -To run this script, you shall have the following pre-requisites: ----------------------------- -- Install ONNX and onnx-tf package: ->>> pip install onnx ->>> pip install onnx-tf -Note: When installing in a non-Anaconda environment, make sure to install the Protobuf compiler before running the pip installation of onnx. For example, on Ubuntu: ->>>sudo apt-get install protobuf-compiler libprotoc-dev ->>>pip install onnx -More details please go to ONNX official website: https://github.com/onnx/onnx - -- Testing environment configuration: -Ubuntu:16.04.4 LTS -Python:3.6.5 -TensorLayer:1.8.6rc2 -TensorFlow-gpu:1.8.0 -onnx:1.2.2 -onnx-tf:1.1.2 - -Tutorial structure ------------------- - -1.Training ----------- -Firstly, we can initiate the training script by issuing the command on your terminal. ->>>python tutorial_work_with_onnx.py - Shortly, we should obtain a trained MNIST model. The training process needs no special instrumentation. 
However, to successfully convert the trained model, onnx-tensorflow requires three pieces of information, all of which can be obtained after training is complete: - -- Graph definition: -You need to obtain information about the graph definition in the form of GraphProto. The easiest way to achieve this is to use the following snippet of code as shown in the example training script: ->>>with open("graph.proto", "wb") as file: ->>> graph = tf.get_default_graph().as_graph_def(add_shapes=True) ->>> file.write(graph.SerializeToString()) -Place this code after the point where you build your network architecture in your function. - -- Shape information: By default, as_graph_def does not serialize any information about the shapes of the intermediate tensors, and such information is required by onnx-tensorflow. Thus we request TensorFlow to serialize the shape information by adding the keyword argument add_shapes=True as demonstrated above. - -- Checkpoint: TensorFlow checkpoint files contain information about the learned weights; thus they are needed to convert the trained model to ONNX format. - -2.Graph Freezing ---------------- -Secondly, we freeze the graph. Here we use the freeze_graph tool shipped with TensorFlow and execute it with the information about where the GraphProto is, where the checkpoint file is, and where to put the frozen graph. ->>>python3 -m tensorflow.python.tools.freeze_graph \ - --input_graph=/root/graph.proto \ - --input_checkpoint=/root/model/model.ckpt \ - --output_graph=/root/frozen_graph.pb \ - --output_node_names=output/bias_add\ - --input_binary=True - -note: -input_graph is the path of your proto file -input_checkpoint is the path of your checkpoint file -output_graph is the path where you want to put the frozen graph -output_node_names is the name of the output node you want in your frozen graph: -you can run this code to print all node names and find the one you want: ->>>print([n.name for n in tf.get_default_graph().as_graph_def().node]) - -Note that now we have obtained the frozen_graph.pb with graph definition as well as weight information in one file. - -3.Model Conversion ----------------- -Thirdly, we convert the model to ONNX format using onnx-tensorflow, via tensorflow_graph_to_onnx_model from the onnx-tensorflow API (documentation available at https://github.com/onnx/onnx-tensorflow/blob/master/onnx_tf/doc/API.md). ->>>import tensorflow as tf ->>>from onnx_tf.frontend import tensorflow_graph_to_onnx_model - ->>>with tf.gfile.GFile("frozen_graph.pb", "rb") as f: ->>> graph_def = tf.GraphDef() ->>> graph_def.ParseFromString(f.read()) ->>> onnx_model = tensorflow_graph_to_onnx_model(graph_def, ->>> "output/bias_add", ->>> opset=6) - ->>> file = open("mnist.onnx", "wb") ->>> file.write(onnx_model.SerializeToString()) ->>> file.close() - -Then you will get the first node info: ->>>input: "cnn1/kernel" ->>>output: "cnn1/kernel/read" ->>>name: "cnn1/kernel/read" ->>>op_type: "Identity" - -4.Inference using Backend (this part of onnx-tf is still under implementation!) ------------------------------------------------------------------- -In this tutorial, we continue our demonstration by performing inference using the obtained ONNX model. Here, we exported an image representing a handwritten 7 and stored the numpy array as image.npz. Using the onnx-tf backend, we will classify this image using the converted ONNX model.
->>>import onnx ->>>import numpy as np ->>>from onnx_tf.backend import prepare - ->>>model = onnx.load('mnist.onnx') ->>>tf_rep = prepare(model) ->>>#Image Path ->>>img = np.load("./assets/image.npz", allow_pickle=True) ->>>output = tf_rep.run(img.reshape([1, 784])) ->>>print "The digit is classified as ", np.argmax(output) - -You will get the information in your console: ->>>The digit is classified as 7 - -""" - -import time - -import numpy as np -import tensorflow as tf -from tensorflow.python.tools.freeze_graph import freeze_graph as _freeze_graph - -import onnx -import tensorlayer as tl -from onnx_tf.backend import prepare -from onnx_tf.frontend import tensorflow_graph_to_onnx_model - -tf.logging.set_verbosity(tf.logging.DEBUG) -tl.logging.set_verbosity(tl.logging.DEBUG) - - -def generate_graph_and_checkpoint(graph_output_path, checkpoint_output_path): - """ - Reimplementation of the TensorFlow official MNIST CNN tutorials and generate the graph and checkpoint for this model: - - https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html - - https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/mnist/convolutional.py - - - For simplified CNN layer see "Convolutional layer (Simplified)" - - Parameters - ----------- - graph_output_path : string - the path of the graph where you want to save. - checkpoint_output_path : string - the path of the checkpoint where you want to save. - - References - ----------- - - `onnx-tf exporting tutorial `__ - - """ - X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1)) - - sess = tf.InteractiveSession() - - batch_size = 128 - x = tf.placeholder(tf.float32, shape=[batch_size, 28, 28, 1]) # [batch_size, height, width, channels] - y_ = tf.placeholder(tf.int64, shape=[batch_size]) - - net = tl.layers.InputLayer(x, name='input') - - # Simplified conv API (the same with the above layers) - net = tl.layers.Conv2d(net, 32, (5, 5), (1, 1), act=tf.nn.relu, padding='SAME', name='cnn1') - net = tl.layers.MaxPool2d(net, (2, 2), (2, 2), padding='SAME', name='pool1') - net = tl.layers.Conv2d(net, 64, (5, 5), (1, 1), act=tf.nn.relu, padding='SAME', name='cnn2') - net = tl.layers.MaxPool2d(net, (2, 2), (2, 2), padding='SAME', name='pool2') - # end of conv - net = tl.layers.FlattenLayer(net, name='flatten') - net = tl.layers.DropoutLayer(net, keep=0.5, name='drop1') - net = tl.layers.DenseLayer(net, 256, act=tf.nn.relu, name='relu1') - net = tl.layers.DropoutLayer(net, keep=0.5, name='drop2') - net = tl.layers.DenseLayer(net, 10, act=None, name='output') - - y = net.outputs - - print([n.name for n in tf.get_default_graph().as_graph_def().node]) - - # To string Graph - with open(graph_output_path, "wb") as file: - graph = tf.get_default_graph().as_graph_def(add_shapes=True) - file.write(graph.SerializeToString()) - - cost = tl.cost.cross_entropy(y, y_, 'cost') - - correct_prediction = tf.equal(tf.argmax(y, 1), y_) - acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) - - # train - n_epoch = 200 - learning_rate = 0.0001 - print_freq = 10 - - train_params = net.all_params - train_op = tf.train.AdamOptimizer(learning_rate).minimize(cost, var_list=train_params) - - tl.layers.initialize_global_variables(sess) - net.print_params() - net.print_layers() - - print(' learning_rate: %f' % learning_rate) - print(' batch_size: %d' % batch_size) - - for epoch in range(n_epoch): - start_time = time.time() - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, 
shuffle=True): - feed_dict = {x: X_train_a, y_: y_train_a} - feed_dict.update(net.all_drop) # enable noise layers - sess.run(train_op, feed_dict=feed_dict) - # Save the checkpoint every 10 eopchs - if epoch % 10 == 0: - tl.files.save_ckpt(sess, mode_name='model.ckpt', save_dir=checkpoint_output_path, printable=True) - if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: - print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time)) - train_loss, train_acc, n_batch = 0, 0, 0 - for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True): - dp_dict = tl.utils.dict_to_one(net.all_drop) # disable noise layers - feed_dict = {x: X_train_a, y_: y_train_a} - feed_dict.update(dp_dict) - err, ac = sess.run([cost, acc], feed_dict=feed_dict) - train_loss += err - train_acc += ac - n_batch += 1 - print(" train loss: %f" % (train_loss / n_batch)) - print(" train acc: %f" % (train_acc / n_batch)) - val_loss, val_acc, n_batch = 0, 0, 0 - for X_val_a, y_val_a in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=True): - dp_dict = tl.utils.dict_to_one(net.all_drop) # disable noise layers - feed_dict = {x: X_val_a, y_: y_val_a} - feed_dict.update(dp_dict) - err, ac = sess.run([cost, acc], feed_dict=feed_dict) - val_loss += err - val_acc += ac - n_batch += 1 - print(" val loss: %f" % (val_loss / n_batch)) - print(" val acc: %f" % (val_acc / n_batch)) - - # Evaluation - print('Evaluation') - test_loss, test_acc, n_batch = 0, 0, 0 - for X_test_a, y_test_a in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=True): - dp_dict = tl.utils.dict_to_one(net.all_drop) # disable noise layers - feed_dict = {x: X_test_a, y_: y_test_a} - feed_dict.update(dp_dict) - err, ac = sess.run([cost, acc], feed_dict=feed_dict) - test_loss += err - test_acc += ac - n_batch += 1 - print(" test loss: %f" % (test_loss / n_batch)) - print(" test acc: %f" % (test_acc / n_batch)) - - -def freeze_graph(graph_path, checkpoint_path, output_path, end_node_names, is_binary_graph): - """Reimplementation of the TensorFlow official freeze_graph function to freeze the graph and checkpoint together: - - Parameters - ----------- - graph_path : string - the path where your graph file save. - checkpoint_output_path : string - the path where your checkpoint save. - output_path : string - the path where you want to save the output proto buff - end_node_names : string - the name of the end node in your graph you want to get in your proto buff - is_binary_graph : boolean - declare your file whether is a binary graph - - References - ---------- - - `onnx-tf exporting tutorial `__ - - `tensorflow freeze_graph ` - """ - _freeze_graph( - input_graph=graph_path, input_saver='', input_binary=is_binary_graph, input_checkpoint=checkpoint_path, - output_graph=output_path, output_node_names=end_node_names, restore_op_name='save/restore_all', - filename_tensor_name='save/Const:0', clear_devices=True, initializer_nodes=None - ) - - -def convert_model_to_onnx(frozen_graph_path, end_node_names, onnx_output_path): - """Reimplementation of the TensorFlow-onnx official tutorial convert the proto buff to onnx file: - - Parameters - ----------- - frozen_graph_path : string - the path where your frozen graph file save. - end_node_names : string - the name of the end node in your graph you want to get in your proto buff - onnx_output_path : string - the path where you want to save the onnx file. 
- - References - ----------- - - `onnx-tf exporting tutorial ` - """ - with tf.gfile.GFile(frozen_graph_path, "rb") as f: - graph_def = tf.GraphDef() - graph_def.ParseFromString(f.read()) - onnx_model = tensorflow_graph_to_onnx_model(graph_def, end_node_names, opset=6) - file = open(onnx_output_path, "wb") - file.write(onnx_model.SerializeToString()) - file.close() - - -def convert_onnx_to_model(onnx_input_path): - """Reimplementation of the TensorFlow-onnx official tutorial convert the onnx file to specific: model - - Parameters - ----------- - onnx_input_path : string - the path where you save the onnx file. - - References - ----------- - - `onnx-tf exporting tutorial `__ - """ - model = onnx.load(onnx_input_path) - tf_rep = prepare(model) - # Image Path - img = np.load("./assets/image.npz", allow_pickle=True) - output = tf_rep.run(img.reshape([1, 784])) - print("The digit is classified as ", np.argmax(output)) - - -if __name__ == '__main__': - - # 1. Train the CNN network and output the graph and checkpoints - generate_graph_and_checkpoint(graph_output_path='graph.proto', checkpoint_output_path='./') - - # 2. Freeze the graph with checkpoints - freeze_graph( - graph_path='graph.proto', is_binary_graph=True, checkpoint_path='model.ckpt', output_path='frozen_graph.pb', - end_node_names='output/bias_add' - ) - - # 3. Convert the tensorflow protobuf file to ONNX file - convert_model_to_onnx( - frozen_graph_path='frozen_graph.pb', end_node_names='output/bias_add', onnx_output_path='mnist.onnx' - ) - - # 4. Convert thr ONNX file to specific model - # the following step is not working by far as the tensorflow-onnx project has a bug at the time of writing. - # convert_onnx_to_model(onnx_input_path='mnist.onnx') diff --git a/requirements/requirements_paddle.txt b/requirements/requirements_paddle.txt new file mode 100644 index 000000000..96b189ace --- /dev/null +++ b/requirements/requirements_paddle.txt @@ -0,0 +1 @@ +paddlepaddle>=2.0.2 \ No newline at end of file diff --git a/requirements/requirements_test.txt b/requirements/requirements_test.txt index 9642a41a4..e47c0ed72 100644 --- a/requirements/requirements_test.txt +++ b/requirements/requirements_test.txt @@ -6,6 +6,4 @@ pytest-cache>=1.0,<1.1 pytest-cov>=2.7.1 pytest-xdist>=1.28.0 sphinx==2.0.1 -yapf==0.29.0 -autoflake==1.3.1 -isort==4.3.21 +yapf>=0.27.0 diff --git a/tensorlayer/__init__.py b/tensorlayer/__init__.py index f89eebfff..be46822de 100644 --- a/tensorlayer/__init__.py +++ b/tensorlayer/__init__.py @@ -2,6 +2,12 @@ # -*- coding: utf-8 -*- """Deep learning and Reinforcement learning library for Researchers and Engineers""" +# import backend +from .backend import * +# from .backend import ops +# import dataflow +# from .dataflow import * + import os from distutils.version import LooseVersion @@ -30,7 +36,6 @@ " - `pip install --upgrade tensorflow-gpu`" ) - from tensorlayer import activation from tensorlayer import array_ops from tensorlayer import cost from tensorlayer import decorators @@ -44,6 +49,9 @@ from tensorlayer import optimizers from tensorlayer import rein from tensorlayer import utils + from tensorlayer import dataflow + from tensorlayer import metric + from tensorlayer import vision from tensorlayer.lazy_imports import LazyImport @@ -56,7 +64,6 @@ visualize = LazyImport("tensorlayer.visualize") # alias - act = activation vis = visualize alphas = array_ops.alphas diff --git a/tensorlayer/activation.py b/tensorlayer/activation.py deleted file mode 100644 index e2d3ac3b9..000000000 --- a/tensorlayer/activation.py +++ 
/dev/null @@ -1,366 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""A file containing various activation functions.""" - -import tensorflow as tf - -from tensorlayer.decorators import deprecated - -__all__ = [ - 'leaky_relu', - 'leaky_relu6', - 'leaky_twice_relu6', - 'lrelu', - 'lrelu6', - 'ltrelu6', - 'ramp', - 'swish', - 'sign', - 'htanh', - 'hard_tanh', - 'pixel_wise_softmax', - 'mish', -] - - -def ramp(x, v_min=0, v_max=1, name=None): - """Ramp activation function. - - Reference: [tf.clip_by_value] - - Parameters - ---------- - x : Tensor - input. - v_min : float - cap input to v_min as a lower bound. - v_max : float - cap input to v_max as a upper bound. - name : str - The function name (optional). - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - """ - return tf.clip_by_value(x, clip_value_min=v_min, clip_value_max=v_max, name=name) - - -# @deprecated(date="2018-09-30", instructions="This API is deprecated. Please use as `tf.nn.leaky_relu`") -def leaky_relu(x, alpha=0.2, name="leaky_relu"): - """leaky_relu can be used through its shortcut: :func:`tl.act.lrelu`. - - This function is a modified version of ReLU, introducing a nonzero gradient for negative input. Introduced by the paper: - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__ - - The function return the following results: - - When x < 0: ``f(x) = alpha_low * x``. - - When x >= 0: ``f(x) = x``. - - Parameters - ---------- - x : Tensor - Support input type ``float``, ``double``, ``int32``, ``int64``, ``uint8``, ``int16``, or ``int8``. - alpha : float - Slope. - name : str - The function name (optional). - - Examples - -------- - >>> import tensorlayer as tl - >>> net = tl.layers.Input([10, 200]) - >>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.lrelu(x, 0.2), name='dense')(net) - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - References - ---------- - - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__ - - """ - if not (0 < alpha <= 1): - raise ValueError("`alpha` value must be in [0, 1]`") - - with tf.name_scope(name) as name_scope: - x = tf.convert_to_tensor(x, name="features") - return tf.maximum(x, alpha * x, name=name_scope) - - -def leaky_relu6(x, alpha=0.2, name="leaky_relu6"): - """:func:`leaky_relu6` can be used through its shortcut: :func:`tl.act.lrelu6`. - - This activation function is a modified version :func:`leaky_relu` introduced by the following paper: - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__ - - This activation function also follows the behaviour of the activation function :func:`tf.nn.relu6` introduced by the following paper: - `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__ - - The function return the following results: - - When x < 0: ``f(x) = alpha_low * x``. - - When x in [0, 6]: ``f(x) = x``. - - When x > 6: ``f(x) = 6``. - - Parameters - ---------- - x : Tensor - Support input type ``float``, ``double``, ``int32``, ``int64``, ``uint8``, ``int16``, or ``int8``. - alpha : float - Slope. - name : str - The function name (optional). - - Examples - -------- - >>> import tensorlayer as tl - >>> net = tl.layers.Input([10, 200]) - >>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.leaky_relu6(x, 0.2), name='dense')(net) - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. 
- - References - ---------- - - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__ - - `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__ - """ - if not isinstance(alpha, tf.Tensor) and not (0 < alpha <= 1): - raise ValueError("`alpha` value must be in [0, 1]`") - - with tf.name_scope(name) as name_scope: - x = tf.convert_to_tensor(x, name="features") - return tf.minimum(tf.maximum(x, alpha * x), 6, name=name_scope) - - -def leaky_twice_relu6(x, alpha_low=0.2, alpha_high=0.2, name="leaky_relu6"): - """:func:`leaky_twice_relu6` can be used through its shortcut: :func:`:func:`tl.act.ltrelu6`. - - This activation function is a modified version :func:`leaky_relu` introduced by the following paper: - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__ - - This activation function also follows the behaviour of the activation function :func:`tf.nn.relu6` introduced by the following paper: - `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__ - - This function push further the logic by adding `leaky` behaviour both below zero and above six. - - The function return the following results: - - When x < 0: ``f(x) = alpha_low * x``. - - When x in [0, 6]: ``f(x) = x``. - - When x > 6: ``f(x) = 6 + (alpha_high * (x-6))``. - - Parameters - ---------- - x : Tensor - Support input type ``float``, ``double``, ``int32``, ``int64``, ``uint8``, ``int16``, or ``int8``. - alpha_low : float - Slope for x < 0: ``f(x) = alpha_low * x``. - alpha_high : float - Slope for x < 6: ``f(x) = 6 (alpha_high * (x-6))``. - name : str - The function name (optional). - - Examples - -------- - >>> import tensorlayer as tl - >>> net = tl.layers.Input([10, 200]) - >>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.leaky_twice_relu6(x, 0.2, 0.2), name='dense')(net) - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - References - ---------- - - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__ - - `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__ - - """ - if not isinstance(alpha_high, tf.Tensor) and not (0 < alpha_high <= 1): - raise ValueError("`alpha_high` value must be in [0, 1]`") - - if not isinstance(alpha_low, tf.Tensor) and not (0 < alpha_low <= 1): - raise ValueError("`alpha_low` value must be in [0, 1]`") - - with tf.name_scope(name) as name_scope: - x = tf.convert_to_tensor(x, name="features") - - x_is_above_0 = tf.minimum(x, 6 * (1 - alpha_high) + alpha_high * x) - x_is_below_0 = tf.minimum(alpha_low * x, 0) - - return tf.maximum(x_is_above_0, x_is_below_0, name=name_scope) - - -def swish(x, name='swish'): - """Swish function. - - See `Swish: a Self-Gated Activation Function `__. - - Parameters - ---------- - x : Tensor - input. - name: str - function name (optional). - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - """ - # TODO: in this case, the beta = 1, but the beta can either be a constant or a trainable parameter - with tf.name_scope(name): - x = tf.nn.sigmoid(x) * x - return x - - -# @tf.RegisterGradient("QuantizeGrad") -# def _sign_grad(unused_op, grad): -# return tf.clip_by_value(grad, -1, 1) - - -@tf.custom_gradient -def sign(x): - """Sign function. - - Clip and binarize tensor using the straight through estimator (STE) for the gradient, usually be used for - quantizing values in `Binarized Neural Networks`: https://arxiv.org/abs/1602.02830. 
- - Parameters - ---------- - x : Tensor - input. - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - References - ---------- - - `Rectifier Nonlinearities Improve Neural Network Acoustic Models, Maas et al. (2013)` - http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf - - - `BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, Courbariaux et al. (2016)` - https://arxiv.org/abs/1602.02830 - - """ - - def grad(dy): - return tf.clip_by_value(dy, -1, 1) - - return tf.sign(x, name='sign'), grad - - -# if tf.__version__ > "1.7": -# @tf.custom_gradient -# def sign(x): # https://www.tensorflow.org/versions/master/api_docs/python/tf/custom_gradient?hl=ES#top_of_page -# """Differentiable sign function using sigmoid as the derivation function, -# see `tf.sign `__ and `tf.custom_gradient -# `__. -# -# Parameters -# ---------- -# x : Tensor -# input. -# -# Returns -# ------- -# Tensor -# A ``Tensor`` in the same type as ``x``. -# -# """ -# tao = tf.nn.sigmoid(x) -# def grad(): -# return tao * (1 - tao) -# return tf.sign(x), grad - - -def hard_tanh(x, name='htanh'): - """Hard tanh activation function. - - Which is a ramp function with low bound of -1 and upper bound of 1, shortcut is `htanh`. - - Parameters - ---------- - x : Tensor - input. - name : str - The function name (optional). - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - """ - # with tf.variable_scope("hard_tanh"): - return tf.clip_by_value(x, -1, 1, name=name) - - -@deprecated(date="2018-06-30", instructions="This API will be deprecated soon as tf.nn.softmax can do the same thing") -def pixel_wise_softmax(x, name='pixel_wise_softmax'): - """Return the softmax outputs of images, every pixels have multiple label, the sum of a pixel is 1. - - Usually be used for image segmentation. - - Parameters - ---------- - x : Tensor - input. - - For 2d image, 4D tensor (batch_size, height, weight, channel), where channel >= 2. - - For 3d image, 5D tensor (batch_size, depth, height, weight, channel), where channel >= 2. - name : str - function name (optional) - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - Examples - -------- - >>> outputs = pixel_wise_softmax(network.outputs) - >>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5) - - References - ---------- - - `tf.reverse `__ - - """ - with tf.name_scope(name): - return tf.nn.softmax(x) - - -def mish(x): - """Mish activation function. - - Reference: [Mish: A Self Regularized Non-Monotonic Neural Activation Function .Diganta Misra, 2019] - - Parameters - ---------- - x : Tensor - input. - - Returns - ------- - Tensor - A ``Tensor`` in the same type as ``x``. - - """ - return x * tf.math.tanh(tf.math.softplus(x)) - - -# Alias -lrelu = leaky_relu -lrelu6 = leaky_relu6 -ltrelu6 = leaky_twice_relu6 -htanh = hard_tanh diff --git a/tensorlayer/backend/__init__.py b/tensorlayer/backend/__init__.py new file mode 100644 index 000000000..4533f5b82 --- /dev/null +++ b/tensorlayer/backend/__init__.py @@ -0,0 +1,6 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +# load ops +from .ops import * +from tensorlayer.backend import ops diff --git a/tensorlayer/backend/ops/__init__.py b/tensorlayer/backend/ops/__init__.py new file mode 100644 index 000000000..f5eea8684 --- /dev/null +++ b/tensorlayer/backend/ops/__init__.py @@ -0,0 +1,142 @@ +#! 
/usr/bin/python +# -*- coding: utf-8 -*- + +# load nn ops +from .load_backend import padding_format +from .load_backend import preprocess_1d_format +from .load_backend import preprocess_2d_format +from .load_backend import preprocess_3d_format +from .load_backend import nchw_to_nhwc +from .load_backend import nhwc_to_nchw +from .load_backend import relu +from .load_backend import relu6 +from .load_backend import leaky_relu +from .load_backend import softplus +from .load_backend import tanh +from .load_backend import sigmoid +from .load_backend import softmax +from .load_backend import bias_add +from .load_backend import conv1d +from .load_backend import conv2d +from .load_backend import conv3d +from .load_backend import lrn +from .load_backend import moments +from .load_backend import max_pool +from .load_backend import avg_pool +from .load_backend import max_pool3d +from .load_backend import avg_pool3d +from .load_backend import pool +from .load_backend import depthwise_conv2d +from .load_backend import Conv1d_transpose +from .load_backend import Conv2d_transpose +from .load_backend import Conv3d_transpose +from .load_backend import GroupConv2D +from .load_backend import BinaryConv2D +from .load_backend import DorefaConv2D + +from .load_backend import ReLU +from .load_backend import ReLU6 +from .load_backend import LeakyReLU +from .load_backend import Softplus +from .load_backend import Tanh +from .load_backend import Sigmoid +from .load_backend import Softmax +from .load_backend import Conv1D +from .load_backend import Conv2D +from .load_backend import Conv3D +from .load_backend import BiasAdd +from .load_backend import MaxPool1d +from .load_backend import MaxPool +from .load_backend import MaxPool3d +from .load_backend import AvgPool1d +from .load_backend import AvgPool +from .load_backend import AvgPool3d +from .load_backend import Dropout +from .load_backend import BatchNorm +from .load_backend import DepthwiseConv2d +from .load_backend import SeparableConv1D +from .load_backend import SeparableConv2D +from .load_backend import AdaptiveMeanPool1D +from .load_backend import AdaptiveMeanPool2D +from .load_backend import AdaptiveMeanPool3D +from .load_backend import AdaptiveMaxPool1D +from .load_backend import AdaptiveMaxPool2D +from .load_backend import AdaptiveMaxPool3D +from .load_backend import Floor +from .load_backend import Ceil + +# load ops +from .load_backend import Variable +from .load_backend import matmul +from .load_backend import add +from .load_backend import dtypes +from .load_backend import minimum +from .load_backend import reshape +from .load_backend import concat +from .load_backend import convert_to_tensor +from .load_backend import sqrt +from .load_backend import reduce_mean +from .load_backend import reduce_min +from .load_backend import reduce_max +from .load_backend import pad +from .load_backend import stack +from .load_backend import meshgrid +from .load_backend import range +from .load_backend import expand_dims +from .load_backend import tile +from .load_backend import cast +from .load_backend import transpose +from .load_backend import gather_nd +from .load_backend import clip_by_value +from .load_backend import split +from .load_backend import get_tensor_shape +from .load_backend import set_context +from .load_backend import resize +from .load_backend import floor +from .load_backend import gather +from .load_backend import linspace +from .load_backend import slice +from .load_backend import add_n +from .load_backend import ceil +from .load_backend import 
multiply
+from .load_backend import divide
+from .load_backend import identity
+
+# dtype
+from .load_backend import (DType, float16, float32, float64, int8, int16, int32, int64, uint8, uint16, uint32, uint64)
+# initializers
+from .load_backend import (zeros, ones, constant, random_uniform, random_normal, truncated_normal, he_normal)
+# backend
+from .load_backend import BACKEND
+from .load_backend import BACKEND_VERSION
+
+from .load_backend import Reshape
+from .load_backend import ReduceSum
+from .load_backend import ReduceMax
+from .load_backend import ReduceMean
+from .load_backend import OneHot
+from .load_backend import L2Normalize
+from .load_backend import EmbeddingLookup
+from .load_backend import NCELoss
+from .load_backend import NotEqual
+from .load_backend import Cast
+from .load_backend import ExpandDims
+from .load_backend import CountNonzero
+from .load_backend import FlattenReshape
+from .load_backend import Transpose
+from .load_backend import MatMul
+from .load_backend import Tile
+from .load_backend import Concat
+from .load_backend import ZeroPadding1D
+from .load_backend import ZeroPadding2D
+from .load_backend import ZeroPadding3D
+from .load_backend import Stack
+from .load_backend import Unstack
+from .load_backend import Sign
+from .load_backend import Resize
+from .load_backend import Pad
+from .load_backend import Minimum
+from .load_backend import Maximum
+from .load_backend import Meshgrid
+from .load_backend import BatchToSpace
+from .load_backend import DepthToSpace
diff --git a/tensorlayer/backend/ops/load_backend.py b/tensorlayer/backend/ops/load_backend.py
new file mode 100644
index 000000000..72e120db6
--- /dev/null
+++ b/tensorlayer/backend/ops/load_backend.py
@@ -0,0 +1,74 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import json
+import os
+import sys
+
+# BACKEND = 'tensorflow'
+# BACKEND = 'mindspore'
+BACKEND = 'paddle'
+
+# Check for a backend.json configuration file
+tl_backend_dir = os.path.expanduser('~')
+if not os.access(tl_backend_dir, os.W_OK):
+    tl_backend_dir = '/tmp'
+tl_dir = os.path.join(tl_backend_dir, '.tl')
+
+config = {
+    'backend': BACKEND,
+}
+if not os.path.exists(tl_dir):
+    path = os.path.join(tl_dir, 'tl_backend.json')
+    os.makedirs(tl_dir)
+    with open(path, "w") as f:
+        json.dump(config, f)
+    BACKEND = config['backend']
+    sys.stderr.write("Created the backend configuration file: " + path + '\n')
+else:
+    path = os.path.join(tl_dir, 'tl_backend.json')
+    with open(path, 'r') as load_f:
+        load_dict = json.load(load_f)
+    # compare the strings by value; `is not` checks object identity
+    if load_dict['backend'] != config['backend']:
+        BACKEND = config['backend']
+    else:
+        BACKEND = load_dict['backend']
+
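The saved configuration can be overridden per process with the `TL_BACKEND` environment variable, which the block below consults before importing a backend. A minimal sketch (assuming TensorLayer is installed with the TensorFlow backend available):

```python
# Override the configured backend for this process only. This must run
# before `import tensorlayer`, because load_backend.py reads TL_BACKEND
# at import time.
import os
os.environ['TL_BACKEND'] = 'tensorflow'

import tensorlayer as tl  # prints "Using TensorFlow backend." to stderr
```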
+# Set backend based on the TL_BACKEND environment variable.
+if 'TL_BACKEND' in os.environ:
+    backend = os.environ['TL_BACKEND']
+    if backend:
+        BACKEND = backend
+
+# import backend functions
+if BACKEND == 'tensorflow':
+    from .tensorflow_backend import *
+    from .tensorflow_nn import *
+    import tensorflow as tf
+    BACKEND_VERSION = tf.__version__
+    sys.stderr.write('Using TensorFlow backend.\n')
+
+elif BACKEND == 'mindspore':
+    from .mindspore_backend import *
+    from .mindspore_nn import *
+    import mindspore as ms
+    BACKEND_VERSION = ms.__version__
+    # set context
+    import mindspore.context as context
+    os.environ['DEVICE_ID'] = '0'
+    context.set_context(mode=context.PYNATIVE_MODE, device_target='GPU')
+    # context.set_context(mode=context.GRAPH_MODE, device_target='CPU',
+    #                     enable_task_sink=True, enable_loop_sink=True)
+    # context.set_context(mode=context.GRAPH_MODE, backend_policy='ms',
+    #                     device_target='Ascend', enable_task_sink=True, enable_loop_sink=True)
+    sys.stderr.write('Using MindSpore backend.\n')
+
+elif BACKEND == 'paddle':
+    from .paddle_backend import *
+    from .paddle_nn import *
+    import paddle as pd
+    BACKEND_VERSION = pd.__version__
+    sys.stderr.write('Using Paddle backend.\n')
+else:
+    raise NotImplementedError("This backend is not supported")
diff --git a/tensorlayer/backend/ops/mindspore_backend.py b/tensorlayer/backend/ops/mindspore_backend.py
new file mode 100644
index 000000000..59187cba9
--- /dev/null
+++ b/tensorlayer/backend/ops/mindspore_backend.py
@@ -0,0 +1,1306 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+from __future__ import absolute_import, division, print_function
+from .mindspore_nn import nchw_to_nhwc, nhwc_to_nchw
+from mindspore._c_expression.typing import Type
+from mindspore.common import dtype as mstype
+
+from mindspore.common.parameter import Parameter
+from mindspore.common.initializer import (
+    initializer, Constant, Normal, TruncatedNormal, Initializer, _assignment, _calculate_in_and_out, One, Zero
+)
+from mindspore.common.tensor import Tensor
+from mindspore.ops import operations as P
+from mindspore.ops import functional as F
+from mindspore.ops import composite as C
+import mindspore.context as context
+from mindspore.nn import Cell
+from mindspore.ops import count_nonzero
+import mindspore.numpy as msnp
+
+import numpy as np
+from scipy.stats import truncnorm
+import random
+
+_dtypeDict = {
+    'DType': Type,
+    'float16': mstype.float16,
+    'float32': mstype.float32,
+    'float64': mstype.float64,
+    'int8': mstype.int8,
+    'int16': mstype.int16,
+    'int32': mstype.int32,
+    'int64': mstype.int64,
+    'uint8': mstype.uint8,
+    'uint16': mstype.uint16,
+    'uint32': mstype.uint32,
+    'uint64': mstype.uint64
+}
+
+DType = Type
+float16 = mstype.float16
+float32 = mstype.float32
+float64 = mstype.float64
+int8 = mstype.int8
+int16 = mstype.int16
+int32 = mstype.int32
+int64 = mstype.int64
+uint8 = mstype.uint8
+uint16 = mstype.uint16
+uint32 = mstype.uint32
+uint64 = mstype.uint64
+
+# isinstance input output
+# TensorLike = Tensor_
+
+
+def set_context(**kwargs):
+    return context.set_context(**kwargs)
+
+
+def get_tensor_shape(x):
+    return list(P.Shape()(x))
+
+
+# initializers
+def zeros(shape, dtype=mstype.float32):
+    """
+    Creates a tensor with all elements set to zero.
+
+    Parameters
+    ----------
+    shape : A list of integers
+        a tuple of integers, or a 1-D Tensor of type int32.
+    dtype : tensor
+        The DType of an element in the resulting Tensor
+
+    Returns
+    -------
+    A Tensor with all elements set to zero.
+ + """ + # shape = shape[::-1] + arr = np.ndarray(shape) + init_obj = Zero() + init_obj(arr) + return Tensor(arr, dtype=dtype) + + +def ones(shape, dtype=mstype.float32): + """ + Creates a tensor with all elements set to ones. + + Parameters + ---------- + shape : A list of integers + a tuple of integers, or a 1-D Tensor of type int32. + dtype : tensor + The DType of an element in the resulting Tensor + + Returns + ------- + A Tensor with all elements set to zero. + + """ + # shape = shape[::-1] + arr = np.ndarray(shape) + init_obj = One() + init_obj(arr) + return Tensor(arr, dtype=dtype) + + +def constant(value, dtype=mstype.float32, shape=None): + """ + Creates a constant tensor from a tensor-like object. + + Parameters + ---------- + value : list + A constant value (or list) of output type dtype. + dtype : tensor + The type of the elements of the resulting tensor. + shape : tuple + Optional dimensions of resulting tensor. + + Returns + ------- + A Constant Tensor. + + """ + # shape = shape[::-1] + arr = np.ndarray(shape) + Constant(value)(arr=arr) + return Tensor(arr, dtype=dtype) + + +class Uniform(Initializer): + """ + Initialize a uniform array, and obtain values U(-scale, scale) from the uniform distribution + to fill the input tensor. + + Args: + minval : int + The lower bound on the range of random values to generate (inclusive). Defaults to 0. + maxval : int + The upper bound on the range of random values to generate (exclusive). Defaults to 1 if dtype is floating point. + seed : int + Used in combination with tf.random.set_seed to create a reproducible sequence of tensors across multiple calls. + + Returns: + Array, uniform array. + """ + + def __init__(self, minval=0, maxval=None, seed=None): + super(Uniform, self).__init__(minval=minval, maxval=maxval, seed=seed) + self.minval = minval + self.maxval = maxval + self.seed = seed + + def _initialize(self, arr): + random.seed(self.seed) + tmp = np.random.uniform(self.minval, self.maxval, arr.shape) + _assignment(arr, tmp) + + +def random_uniform(shape, minval=0, maxval=None, dtype=mstype.float32, seed=None): + """ + Outputs random values from a uniform distribution. + + Parameters + ---------- + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. + minval : int + The lower bound on the range of random values to generate (inclusive). Defaults to 0. + maxval : int + The upper bound on the range of random values to generate (exclusive). Defaults to 1 if dtype is floating point. + dtype : tensor + The type of the output: float16, float32, float64, int32, or int64. + seed : int + Used in combination with tf.random.set_seed to create a reproducible sequence of tensors across multiple calls. + Returns + ------- + A tensor of the specified shape filled with random uniform values. + + """ + # shape = shape[::-1] + arr = np.ndarray(shape) + init_obj = Uniform(minval=minval, maxval=maxval, seed=seed) + init_obj(arr) + return Tensor(arr, dtype=dtype) + + +class Normal(Initializer): + """ + Initialize a normal array, and obtain values N(0, sigma) from the uniform distribution + to fill the input tensor. + + Parameters + ---------- + mean : float + The mean of the normal distribution + stddev : float + The standard deviation of the normal distribution. + seed : A Python integer + Used to create a random seed for the distribution + + Returns: + Array, normal array. 
+ """ + + def __init__(self, mean=0.0, stddev=0.01, seed=None): + super(Normal, self).__init__(mean=mean, stddev=stddev) + self.mean = mean + self.stddev = stddev + self.seed = seed + + def _initialize(self, arr): + random.seed(self.seed) + tmp = np.random.normal(self.mean, self.stddev, arr.shape) + _assignment(arr, tmp) + + +class RandomNormal(Cell): + + def __init__(self, mean=0.0, stddev=0.01, seed=None): + super(RandomNormal, self).__init__() + self.normal = Normal(mean=mean, stddev=stddev, seed=seed) + + def construct(self, shape): + arr = np.ndarray(shape) + outputs = self.normal(arr) + return outputs + + +def random_normal(shape, mean=0.0, stddev=1.0, dtype=mstype.float32, seed=None): + """ + Outputs random values from a normal distribution. + + Parameters + ---------- + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. + mean : float + The mean of the normal distribution + stddev : float + The standard deviation of the normal distribution. + dtype : tensor + The type of the output. + seed : A Python integer + Used to create a random seed for the distribution + + Returns + ------- + A tensor of the specified shape filled with random normal values. + + """ + # shape = shape[::-1] + arr = np.ndarray(shape) + init_obj = Normal(mean=mean, stddev=stddev, seed=seed) + init_obj(arr) + return Tensor(arr, dtype=dtype) + + +class TruncatedNormal(Initializer): + """ + Initialize a truncated normal distribution which is a bounded normal distribution within N(low, high). + + Args: + sigma (float): The sigma of the array. Default: 0.01. + + Returns: + Array, truncated normal array. + """ + + def __init__(self, mean=0.0, stddev=0.01, seed=None): + super(TruncatedNormal, self).__init__(mean=mean, stddev=stddev, seed=seed) + self.mean = mean + self.stddev = stddev + self.seed = seed + + def _initialize(self, arr): + tmp = truncnorm.rvs(-2, 2, loc=self.mean, scale=self.stddev, size=arr.shape, random_state=None) + _assignment(arr, tmp) + + +def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=mstype.float32, seed=None): + """ + Outputs random values from a truncated normal distribution. + + Parameters + ---------- + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. + mean : float + The mean of the normal distribution + stddev : float + The standard deviation of the normal distribution. + dtype : tensor + The type of the output. + seed : A Python integer + Used to create a random seed for the distribution + + Returns + ------- + A tensor of the specified shape filled with random truncated normal values. + + """ + # shape = shape[::-1] + arr = np.ndarray(shape) + init_obj = TruncatedNormal(mean=mean, stddev=stddev, seed=seed) + init_obj(arr) + return Tensor(arr, dtype=dtype) + + +class HeNormal(Initializer): + r""" + he_normal: It draws samples from a truncated normal distribution centered on 0 with + stddev = sqrt(2 / fan_in) where fan_in is the number of input units in the weight tensor. + + Args: + arr (Array): The array to be assigned. + + Returns: + Array, assigned array. + """ + + def __init__(self, seed=None): + super(HeNormal, self).__init__(seed=seed) + self.seed = seed + + def _initialize(self, arr): + n_in, _ = _calculate_in_and_out(arr) + boundary = np.sqrt(2.0 / n_in) + random.seed(self.seed) + data = np.random.normal(-boundary, boundary, arr.shape) + _assignment(arr, data) + + +def he_normal(shape, dtype, seed=None): + """ + He normal initializer. + + Parameters + ---------- + seed : A Python integer. 
+ Used to seed the random generator. + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. + dtype : tensor + The type of the output. + + Returns + ------- + A tensor of the specified shape filled with he normal values. + """ + # shape = shape[::-1] + arr = np.ndarray(shape) + init_obj = HeNormal(seed) + init_obj(arr) + return Tensor(arr, dtype=dtype) + + +def Variable(initial_value, name, trainable=True): + """ + Creates a new variable with value initial_value. + + Parameters + ---------- + initial_value : tensor + A Tensor, or Python object convertible to a Tensor + name : str + Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically. + Returns + ------- + Variable + """ + + var = Parameter(initial_value, name=name, requires_grad=trainable) + return var + + +class MatMul(Cell): + + def __init__(self): + super(MatMul, self).__init__() + self.matmul = P.MatMul() + + def construct(self, a, b): + return self.matmul(a, b) + + +def matmul(a, b): + """ + Multiplies matrix a by matrix b, producing a * b. + + Parameters + ---------- + a : tensor + type float16, float32, float64, int32, complex64, complex128 and rank > 1. + b : tensor + with same type and rank as a. + + Returns + ------- + A Tensor of the same type as a and b + """ + matmul_obj = P.MatMul() + outputs = matmul_obj(a, b) + return outputs + + +def add(value, bias): + """ + Returns x + y element-wise. + + Parameters + ---------- + value : tensor. + Must be one of the following types: bfloat16, half, float32, float64, + uint8, int8, int16, int32, int64, complex64, complex128, string. + bias : tensor + Must have the same type as a + name : str + A name for the operation + + Returns + ------- + A Tensor. Has the same type as a. + """ + + add_obj = P.TensorAdd() + outputs = add_obj(value, bias) + return outputs + + +def dtypes(dt): + """ + Data dtypes. + + Parameters + ---------- + dt : string + It could be 'uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', + 'int32', 'int64', 'float16', 'float32', 'float64', 'DType'. + + Returns + ------- + Data dtypes + """ + + if dt not in _dtypeDict.keys(): + raise Exception("Unsupported dtype: {}".format(dt)) + return _dtypeDict[dt] + + +class Maximum(Cell): + + def __init__(self): + super(Maximum, self).__init__() + self.maximum = P.Maximum() + + def construct(self, x, y): + return self.maximum(x, y) + + +class Minimum(Cell): + + def __init__(self): + super(Minimum, self).__init__() + self.minimum = P.Minimum() + + def construct(self, x, y): + return self.minimum(x, y) + + +def minimum(x, y): + """ + Returns the min of x and y (i.e. x < y ? x : y) element-wise. + + Parameters + ---------- + x : tensor. + Must be one of the following types: bfloat16, half, float32, float64, int32, int64. + y : A Tensor. + Must have the same type as x. + name : str + A name for the operation (optional). + + Returns + ------- + A Tensor. 
Has the same type as x + """ + minimum_obj = P.Minimum() + outputs = minimum_obj(x, y) + return outputs + + +class FlattenReshape(Cell): + + def __init__(self): + super(FlattenReshape, self).__init__() + self.shape = P.Shape() + self.reshape = P.Reshape() + + def construct(self, inputs): + dim = 1 + for d in self.shape(inputs)[1:]: + dim *= d + return self.reshape(inputs, (-1, dim)) + + +class Reshape(Cell): + + def __init__(self, shape): + super(Reshape, self).__init__() + self.reshape = P.Reshape() + self.shape = tuple(shape) + + def construct(self, tensor): + return self.reshape(tensor, self.shape) + + +def reshape(tensor, shape): + """ + Reshapes a tensor. + + Parameters + ---------- + tensor : tensor + A Tensor. + shape : tensor + Defines the shape of the output tensor. + Returns + ------- + A Tensor. Has the same type as tensor + """ + reshape_obj = P.Reshape() + outputs = reshape_obj(tensor, tuple(shape)) + return outputs + + +class Concat(Cell): + + def __init__(self, axis): + super(Concat, self).__init__() + self.concat = P.Concat(axis) + + def construct(self, values): + return self.concat(values) + + +def concat(values, axis): + """ + Concatenates tensors along one dimension. + + Parameters + ---------- + values : list + A list of Tensor objects or a single Tensor + axis : int + 0-D int32 Tensor. Dimension along which to concatenate + Returns + ------- + A Tensor resulting from concatenation of the input tensors. + """ + # TODO testing axis + concat_obj = P.Concat(axis) + outputs = concat_obj(values) + return outputs + + +def convert_to_tensor(value, dtype=None): + """ + Converts the given value to a Tensor. + + Parameters + ---------- + value : object + An object whose type has a registered Tensor conversion function. + dtype : optional + Optional element type for the returned tensor. If missing, the type is inferred from the type of value. + + Returns + ------- + A Tensor based on value. + """ + #todo testing value + return Tensor(value, dtype=dtype) + + +def sqrt(x): + """ + Computes square root of x element-wise. + + Parameters + ---------- + x : tensor + Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. + + Returns + ------- + A Tensor. Has the same type as x. + """ + sqrt_obj = P.Sqrt() + outputs = sqrt_obj(x) + return outputs + + +class ReduceSum(Cell): + + def __init__(self, axis): + super(ReduceSum, self).__init__() + self.axis = axis + self.reduce_sum = P.ReduceSum(keep_dims=False) + + def construct(self, input): + return self.reduce_sum(input, self.axis) + + +class ReduceMean(Cell): + + def __init__(self, axis): + super(ReduceMean, self).__init__() + self.axis = axis + self.reducemean = P.ReduceMean(keep_dims=False) + + def construct(self, inputs): + output = self.reducemean(inputs, self.axis) + return output + + +def reduce_mean(input_tensor, axis=None): + """ + Computes the mean of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. 
+ """ + + Rmean_obj = P.ReduceMean(keep_dims=False) + outputs = Rmean_obj(input_tensor, axis) + return outputs + + +class ReduceMax(Cell): + + def __init__(self, axis): + super(ReduceMax, self).__init__() + self.axis = axis + self.reducemax = P.ReduceMax(keep_dims=False) + + def construct(self, inputs): + output = self.reducemax(inputs, self.axis) + return output + + +def reduce_max(input_tensor, axis=None): + """ + Computes the maximum of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have real numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. + """ + + Rmax_obj = P.ReduceMax(keep_dims=False) + outputs = Rmax_obj(input_tensor, axis) + return outputs + + +def reduce_min(input_tensor, axis=None): + """ + Computes the minimum of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have real numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. + """ + + Rmin_obj = P.ReduceMin(keep_dims=False) + outputs = Rmin_obj(input_tensor, axis) + return outputs + + +class Pad(Cell): + + def __init__(self, paddings, mode="REFLECT"): + super(Pad, self).__init__() + if mode not in ["REFLECT", "SYMMETRIC"]: + raise Exception("Unsupported mode: {}".format(mode)) + self.pad = P.MirrorPad(mode=mode) + self.paddings = Tensor(paddings) + + def construct(self, x): + return self.pad(x, self.paddings) + + +def pad(tensor, paddings, mode='CONSTANT', constant_values=0): + """ + Pads a tensor. + + Parameters + ---------- + tensor : tensor + A Tensor. + paddings : tuple + A tuple of type int32. + mode : str + One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive) + constant_values : int + In "CONSTANT" mode, the scalar pad value to use. Must be same type as tensor. + + Returns + ------- + A Tensor. Has the same type as tensor. + """ + raise NotImplementedError + + +class Unstack(Cell): + + def __init__(self, axis, num=None): + super(Unstack, self).__init__() + if num is not None: + raise ("The num Parameters do not need to be set.") + self.unstack = P.Unpack(axis=axis) + + def construct(self, values): + return self.unstack(values) + + +class Stack(Cell): + + def __init__(self, axis=0): + super(Stack, self).__init__() + self.stack = P.Pack(axis=axis) + + def construct(self, values): + return self.stack(values) + + +def stack(values, axis=0): + """ + Stacks a list of rank-R tensors into one rank-(R+1) tensor. + + Parameters + ---------- + values : list + A list of Tensor objects with the same shape and type. + axis : int + An int. The axis to stack along. Defaults to the first dimension. + Negative values wrap around, so the valid range is [-(R+1), R+1). + + Returns + ------- + A stacked Tensor with the same type as values. 
+ """ + _stack = P.Pack(axis=axis) + return _stack(values) + + +class Meshgrid(Cell): + + def __init__(self, indexing='xy'): + super(Meshgrid, self).__init__() + self._meshgrid = P.Meshgrid(indexing=indexing) + + def construct(self, *args): + inputs = tuple(*args) + return self._meshgrid(inputs) + + +def meshgrid(*args, **kwargs): + """ + Broadcasts parameters for evaluation on an N-D grid. + + Parameters + ---------- + x : tensor + Tensors with rank 1. + y : tensor + Tensors with rank 1. + + Returns + ------- + A list of N Tensors with rank N. + """ + + _meshgrid = P.Meshgrid(**kwargs) + return _meshgrid(*args) + + +def range(start, limit=None, delta=1, dtype=None): + """ + Creates a sequence of numbers. + + Parameters + ---------- + start : tensor + A 0-D Tensor (scalar). Acts as first entry in the range if limit is not None; + otherwise, acts as range limit and first entry defaults to 0. + limit : tensor + A 0-D Tensor (scalar). Upper limit of sequence, exclusive. If None, + defaults to the value of start while the first entry of the range defaults to 0. + delta : tensor + A 0-D Tensor (scalar). Number that increments start. Defaults to 1. + dtype : type + The type of the elements of the resulting tensor. + + Returns + ------- + An 1-D Tensor of type dtype. + """ + + pass + + +class ExpandDims(Cell): + + def __init__(self, axis): + super(ExpandDims, self).__init__() + self.axis = axis + self.expand_dims = P.ExpandDims() + + def construct(self, input): + output = self.expand_dims(input, self.axis) + return output + + +def expand_dims(input, axis): + """ + Inserts a dimension of 1 into a tensor's shape. + + Parameters + ---------- + input : tensor + A Tensor. + axis : int + 0-D (scalar). Specifies the dimension index at which to expand the shape of input. + Must be in the range [-rank(input) - 1, rank(input)]. + + Returns + ------- + A Tensor with the same data as input, but its shape has an additional dimension of size 1 added. + """ + + expand_obj = P.ExpandDims() + outputs = expand_obj(input, axis) + return outputs + + +class Tile(Cell): + + def __init__(self): + super(Tile, self).__init__() + self.tile = P.Tile() + + def construct(self, input, multiples): + return self.tile(input, tuple(multiples)) + + +def tile(input, multiples): + """ + Constructs a tensor by tiling a given tensor. + + Parameters + ---------- + input : tensor + A Tensor. 1-D or higher. + multiples : tensor + Must be one of the following types: int32, int64. 1-D. + Length must be the same as the number of dimensions in input + + Returns + ------- + A Tensor. Has the same type as input. + """ + tile_obj = P.Tile() + outputs = tile_obj(input, multiples) + return outputs + + +class Cast(Cell): + + def __init__(self, dtype): + super(Cast, self).__init__() + self.dtype = dtype + self.cast = P.Cast() + + def construct(self, input): + return self.cast(input, self.dtype) + + +def cast(x, dtype): + """ + Casts a tensor to a new type. + + Parameters + ---------- + x : tensor + A Tensor or SparseTensor or IndexedSlices of numeric type. + It could be uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64. + dtype : dtpye + The destination type. The list of supported dtypes is the same as x + + Returns + ------- + A Tensor or SparseTensor or IndexedSlices with same shape as x and same type as dtype. 
+ """ + cast_obj = P.Cast() + outputs = cast_obj(x, dtype) + return outputs + + +class Transpose(Cell): + + def __init__(self, perm, conjugate=False): + super(Transpose, self).__init__() + self.perm = tuple(perm) + self.conjugate = conjugate + self.transpose = P.Transpose() + if self.conjugate: + raise NotImplementedError("conjugate not implemented") + + def construct(self, a): + return self.transpose(a, self.perm) + + +def transpose(a, perm=None, conjugate=False): + """ + Transposes a. + + Parameters + ---------- + a : tensor + A Tensor. + perm : int + A permutation of the dimensions of a. + conjugate : bool + Setting it to True is mathematically equivalent to ms.math.conj(ms.transpose(input)). + + Returns + ------- + A transposed Tensor. + """ + # TODO conjugate + trans_obj = P.Transpose() + outputs = trans_obj(a, perm) + print(outputs) + + +def gather_nd(params, indices, batch_dims=0): + """ + Gather slices from params into a Tensor with shape specified by indices. + + Parameters + ---------- + params : tensor + The tensor from which to gather values. + indices : tensor + Must be one of the following types: int32, int64. Index tensor. + batch_dims : int + An integer or a scalar 'Tensor'. The number of batch dimensions. + + Returns + ------- + A Tensor. Has the same type as params. + """ + + pass + + +def clip_by_value(t, clip_value_min, clip_value_max): + """ + Clips tensor values to a specified min and max. + + Parameters + ---------- + t : tensor + A Tensor or IndexedSlices + clip_value_min : tensor + A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The minimum value to clip by + clip_value_max : tensor + A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The minimum value to clip by + + Returns + ------- + A clipped Tensor or IndexedSlices. + """ + min_value = Tensor(clip_value_min, mstype.float32) + max_value = Tensor(clip_value_max, mstype.float32) + output = C.clip_by_value(t, min_value, max_value) + return output + + +def split(value, num_or_size_splits, axis=0, num=None): + """ + Splits a tensor into sub tensors. + + Parameters + ---------- + value : tensor + The Tensor to split. + num_or_size_splits : list + Either an integer indicating the number of splits along split_dim or a 1-D integer Tensor or + Python list containing the sizes of each output tensor along split_dim. + axis : int + The dimension along which to split. Must be in the range [-rank(value), rank(value)). Defaults to 0. + num : int + used to specify the number of outputs when it cannot be inferred from the shape of size_splits. + + Returns + ------- + Tensor objects resulting from splitting value. 
+ """ + pass + + +class Floor(Cell): + + def __call__(self, *args, **kwargs): + raise NotImplementedError + + +def floor(x): + return NotImplementedError + + +def gather(params, indices): + return NotImplementedError + + +def linspace(start, stop, num): + return NotImplementedError + + +def slice(inputs, starts, sizes): + return NotImplementedError + + +def add_n(inputs): + return NotImplementedError + + +class OneHot(Cell): + + def __init__(self, axis=-1, depth=1, on_value=1.0, off_value=0.0, dtype=mstype.float32): + super(OneHot, self).__init__() + self.onehot = P.OneHot(axis) + self.depth = depth + self.dtype = dtype + self.on_value = F.cast(on_value, self.dtype) + self.off_value = F.cast(off_value, self.dtype) + + def construct(self, indices): + return self.onehot(indices, self.depth, self.on_value, self.off_value) + + +class L2Normalize(Cell): + + def __init__(self, axis=None, epsilon=1e-12): + super(L2Normalize, self).__init__() + pass + + def construct(self, input, *args, **kwargs): + pass + + +class EmbeddingLookup(Cell): + + def __init__(self, max_norm=0): + super(EmbeddingLookup, self).__init__() + self.max_norm = max_norm + self.embedding_lookup = P.EmbeddingLookup() + + def construct(self, params, ids, *args, **kwargs): + return self.embedding_lookup(params, ids, self.max_norm) + + +class NCELoss(Cell): + + def __init__(self, num_true=1, sampled_values=None, remove_accidental_hits=False): + super(NCELoss, self).__init__() + pass + + def construct(self, weights, biases, labels, inputs, num_sampled, num_classes): + raise NotImplementedError + + +class NotEqual(Cell): + + def __init__(self): + super(NotEqual, self).__init__() + self.not_equal = P.NotEqual() + + def construct(self, x, y): + outputs = self.not_equal(x, y) + return outputs + + +class CountNonzero(object): + + def __init__(self, keepdims=None, dtype=int64): + self.keepdims = keepdims + self.dtype = dtype + + def __call__(self, input, axis=None): + input = self.convert_dtype(input) + return count_nonzero(x=input, axis=axis, keep_dims=self.keepdims, dtype=self.dtype) + + def bool_convert_to_tensor(self, x): + x = x.asnumpy() + shapes = x.shape + b = np.ones(shapes) + if len(shapes) == 1: + for i in range(shapes - 1): + if x[i] ==True: + b[i] = 1 + else: + b[i] = 0 + if len(shapes) == 2: + for i in range(shapes[0] - 1): + for j in range(shapes[1] - 1): + if x[i][j] ==True: + b[i][j] = 1 + else: + b[i][j] = 0 + return Tensor(b, dtype=float32) + + def convert_dtype(self, input): + if input.shape == 1 and type(input[0]) is bool: + output = self.bool_convert_to_tensor(input) + elif input.shape == 2 and type(input[0][0]) is bool: + output = self.bool_convert_to_tensor(input) + else: + output = input + return output + + +class Resize(Cell): + + def __init__(self, scale, method, antialias=False, data_format='channels_last', ksize=None): + super(Resize, self).__init__() + self.data_format = data_format + if method not in ['nearest', 'bilinear']: + raise ('The method must be "nearest" or "bilinear".') + self.method = method + + if ksize is None: + raise ('The "bilinear" and "nearest" method must enter ksize. 
The dimension of size must be 2 (H, W).') + + out_seize = (int(ksize[0] * scale[0]), int(ksize[1] * scale[1])) + if self.method == 'nearest': + self.resize = P.ResizeNearestNeighbor(size=out_seize, align_corners=antialias) + elif self.method == 'bilinear': + + self.resize = P.ResizeBilinear(size=out_seize) + + def construct(self, inputs): + if self.data_format == 'channels_last': + inputs = nhwc_to_nchw(inputs) + outputs = self.resize(inputs) + if self.data_format == 'channels_last': + outputs = nchw_to_nhwc(outputs) + return outputs + + +def resize(inputs, output_size, method, antialias): + raise NotImplementedError + + +class ZeroPadding1D(Cell): + + def __init__(self, padding): + super(ZeroPadding1D, self).__init__() + if np.size(padding) == 2: + self.pad = P.Pad(paddings=padding) + else: + raise ("The shape of parameter paddings is (N, 2). N is the rank of input data.") + + def construct(self, inputs): + return self.pad(inputs) + + +class ZeroPadding2D(Cell): + + def __init__(self, padding): + super(ZeroPadding2D, self).__init__() + if np.size(padding) == 4: + self.pad = P.Pad(paddings=padding) + else: + raise ("The shape of parameter paddings is (N, 2). N is the rank of input data.") + + def construct(self, inputs): + return self.pad(inputs) + + +class ZeroPadding3D(Cell): + + def __init__(self, padding): + super(ZeroPadding3D, self).__init__() + if np.size(padding) == 6: + self.pad = P.Pad(paddings=padding) + else: + raise ("The shape of parameter paddings is (N, 2). N is the rank of input data.") + + def construct(self, inputs): + return self.pad(inputs) + + +class Sign(Cell): + + def __init__(self): + super(Sign, self).__init__() + self.sign = P.Sign() + + def construct(self, x): + return self.sign(x) + + +class Ceil(Cell): + + def __init__(self): + super(Ceil, self).__init__() + self.ceil = P.Ceil() + + def construct(self, x): + return self.ceil(x) + + +def ceil(x): + _ceil = P.Ceil() + return _ceil(x) + + +def multiply(x, y): + raise NotImplementedError + + +def divide(x, y): + return msnp.divide(x, y) + + +def identity(x): + raise NotImplementedError + + +class BatchToSpace(Cell): + + def __init__(self, block_size, crops): + super(BatchToSpace, self).__init__() + self.batch_to_space = P.BatchToSpace(block_size=block_size, crops=crops) + + def __call__(self, input_x): + return self.batch_to_space(input_x) + + +class DepthToSpace(Cell): + + def __init__(self, block_size, data_format='NHWC'): + super(DepthToSpace, self).__init__() + self.data_format = data_format + self.depth_to_space = P.DepthToSpace(block_size=block_size) + + def __call__(self, input): + if self.data_format == 'NHWC': + input = nhwc_to_nchw(input) + + output = self.depth_to_space(input) + + if self.data_format == 'NHWC': + output = nchw_to_nhwc(output) + + return output diff --git a/tensorlayer/backend/ops/mindspore_nn.py b/tensorlayer/backend/ops/mindspore_nn.py new file mode 100644 index 000000000..1babad3bc --- /dev/null +++ b/tensorlayer/backend/ops/mindspore_nn.py @@ -0,0 +1,1761 @@ +#! 
+# -*- coding: utf-8 -*-
+from __future__ import absolute_import, division, print_function
+
+import itertools
+import mindspore as ms
+import mindspore.ops as P
+from mindspore import context
+from mindspore.nn.cell import Cell
+from mindspore._checkparam import Rel
+from mindspore.ops import functional as F
+from mindspore.communication import management
+from mindspore.ops.operations import _inner_ops as inner
+from mindspore._extends import cell_attr_register
+from mindspore.ops._grad.grad_base import bprop_getters
+from mindspore._checkparam import Validator as validator
+from mindspore.communication.management import get_group_size, get_rank
+
+
+def padding_format(padding):
+    """
+    Checks that the padding corresponds to a supported format.
+
+    Parameters
+    ----------
+    padding : str
+        Must be one of the following: "same", "SAME", "VALID", "valid"
+
+    Returns
+    -------
+    str "same" or "valid"
+    """
+
+    if padding in ["SAME", "same"]:
+        padding = "same"
+    elif padding in ["VALID", "valid"]:
+        padding = "valid"
+    elif padding is None:
+        padding = None
+    else:
+        raise Exception("Unsupported padding: " + str(padding))
+    return padding
+
+
+def preprocess_1d_format(data_format, padding):
+    """
+    Checks that the 1-D data format corresponds to a supported format.
+
+    Parameters
+    ----------
+    data_format : str
+        Must be one of the following: "channels_last", "NWC", "NCW", "channels_first"
+    padding : str
+        Must be one of the following: "same", "valid", "SAME", "VALID"
+
+    Returns
+    -------
+    str "NWC" or "NCW" and "same" or "valid"
+    """
+
+    if data_format in ["channels_last", "NWC"]:
+        data_format = "NWC"
+    elif data_format in ["channels_first", "NCW"]:
+        data_format = "NCW"
+    elif data_format is None:
+        data_format = None
+    else:
+        raise Exception("Unsupported data format: " + str(data_format))
+    padding = padding_format(padding)
+    return data_format, padding
+
+
+def preprocess_2d_format(data_format, padding):
+    """
+    Checks that the 2-D data format corresponds to a supported format.
+
+    Parameters
+    ----------
+    data_format : str
+        Must be one of the following: "channels_last", "NHWC", "NCHW", "channels_first"
+    padding : str
+        Must be one of the following: "same", "valid", "SAME", "VALID"
+
+    Returns
+    -------
+    str "NHWC" or "NCHW" and "same" or "valid"
+    """
+
+    if data_format in ["channels_last", "NHWC", "nhwc"]:
+        data_format = "NHWC"
+    elif data_format in ["channels_first", "NCHW", "nchw"]:
+        data_format = "NCHW"
+    elif data_format is None:
+        data_format = None
+    else:
+        raise Exception("Unsupported data format: " + str(data_format))
+    padding = padding_format(padding)
+    return data_format, padding
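These helpers normalize the user-facing layout and padding strings before they are handed to MindSpore primitives; a small sketch of the expected normalization:

```python
# padding_format lower-cases the padding mode; the preprocess_* helpers
# additionally canonicalize the data-format string.
print(padding_format("SAME"))                           # 'same'
print(preprocess_1d_format("channels_first", "valid"))  # ('NCW', 'valid')
print(preprocess_2d_format("channels_last", "SAME"))    # ('NHWC', 'same')
```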
+
+
+def preprocess_3d_format(data_format, padding):
+    """
+    Checks that the 3-D data format corresponds to a supported format.
+
+    Parameters
+    ----------
+    data_format : str
+        Must be one of the following: "channels_last", "NDHWC", "NCDHW", "channels_first"
+    padding : str
+        Must be one of the following: "same", "valid", "SAME", "VALID"
+
+    Returns
+    -------
+    str "NDHWC" or "NCDHW" and "same" or "valid"
+    """
+
+    if data_format in ['channels_last', 'NDHWC']:
+        data_format = 'NDHWC'
+    elif data_format in ['channels_first', 'NCDHW']:
+        data_format = 'NCDHW'
+    elif data_format is None:
+        data_format = None
+    else:
+        raise Exception("Unsupported data format: " + str(data_format))
+    padding = padding_format(padding)
+    return data_format, padding
+
+
+def nchw_to_nhwc(x):
+    """
+    Channels first to channels last
+
+    Parameters
+    ----------
+    x : tensor
+        channels first tensor data
+
+    Returns
+    -------
+    channels last tensor data
+    """
+
+    if len(P.Shape()(x)) == 3:
+        x = P.Transpose()(x, (0, 2, 1))
+    elif len(P.Shape()(x)) == 4:
+        x = P.Transpose()(x, (0, 2, 3, 1))
+    elif len(P.Shape()(x)) == 5:
+        x = P.Transpose()(x, (0, 2, 3, 4, 1))
+    # else:
+    #     raise Exception("Unsupported dimensions")
+    return x
+
+
+def nhwc_to_nchw(x):
+    """
+    Channels last to channels first
+
+    Parameters
+    ----------
+    x : tensor
+        channels last tensor data
+
+    Returns
+    -------
+    channels first tensor data
+    """
+
+    if len(P.Shape()(x)) == 3:
+        x = P.Transpose()(x, (0, 2, 1))
+    elif len(P.Shape()(x)) == 4:
+        x = P.Transpose()(x, (0, 3, 1, 2))
+    elif len(P.Shape()(x)) == 5:
+        x = P.Transpose()(x, (0, 4, 1, 2, 3))
+    # else:
+    #     raise Exception("Unsupported dimensions")
+    return x
+
+
+class ReLU(Cell):
+
+    def __init__(self):
+        super(ReLU, self).__init__()
+        self.relu = P.ReLU()
+
+    def construct(self, x):
+        return self.relu(x)
+
+
+def relu(x):
+    """
+    Computes rectified linear: max(features, 0).
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: float32, float64, int32, uint8, int16,
+        int8, int64, bfloat16, uint16, half, uint32, uint64, qint8.
+
+    Returns
+    -------
+    A Tensor. Has the same type as features.
+    """
+    outputs = P.ReLU()
+    return outputs(x)
+
+
+class ReLU6(Cell):
+
+    def __init__(self):
+        super(ReLU6, self).__init__()
+        self.relu6 = P.ReLU6()
+
+    def construct(self, x):
+        return self.relu6(x)
+
+
+def relu6(x):
+    """
+    Computes Rectified Linear 6: min(max(features, 0), 6).
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: float32, float64, int32, uint8, int16,
+        int8, int64, bfloat16, uint16, half, uint32, uint64, qint8.
+
+    Returns
+    -------
+    A Tensor with the same type as features.
+    """
+    outputs = P.ReLU6()
+    return outputs(x)
+
+
+class LeakyReLU(Cell):
+
+    def __init__(self, alpha=0.2):
+        super(LeakyReLU, self).__init__()
+        self.leakyrelu = ms.nn.LeakyReLU(alpha=alpha)
+
+    def construct(self, x):
+        return self.leakyrelu(x)
+
+
+def leaky_relu(x, alpha=0.2):
+    """
+    Compute the Leaky ReLU activation function.
+
+    Parameters
+    ----------
+    x : tensor
+        representing preactivation values. Must be one of the following types:
+        float16, float32, float64, int32, int64.
+
+    Returns
+    -------
+    The activation value.
+    """
+
+    leaky_relu_cell = LeakyReLU(alpha=alpha)
+    output = leaky_relu_cell(x)
+    # the original version returned the Cell object rather than the output
+    return output
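A sketch of the functional activation wrappers above, assuming the MindSpore backend:

```python
import numpy as np
from mindspore import Tensor

x = Tensor(np.array([-2.0, 0.5, 8.0], dtype=np.float32))
print(relu(x))        # [0.  0.5 8. ]
print(relu6(x))       # [0.  0.5 6. ]
print(leaky_relu(x))  # [-0.4  0.5  8. ] with the default alpha=0.2
```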
+
+
+class Softplus(Cell):
+
+    def __init__(self):
+        super(Softplus, self).__init__()
+        self.softplus = P.Softplus()
+
+    def construct(self, x):
+        return self.softplus(x)
+
+
+def softplus(x):
+    """
+    Computes softplus: log(exp(features) + 1).
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: half, bfloat16, float32, float64.
+
+    Returns
+    -------
+    A Tensor. Has the same type as features.
+    """
+
+    obj = Softplus()
+    return obj(x)
+
+
+class Tanh(Cell):
+
+    def __init__(self):
+        super(Tanh, self).__init__()
+        self.tanh = P.Tanh()
+
+    def construct(self, x):
+        return self.tanh(x)
+
+
+def tanh(x):
+    """
+    Computes hyperbolic tangent of x element-wise.
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
+
+    Returns
+    -------
+    A Tensor. Has the same type as x.
+    """
+
+    _tanh = Tanh()
+    return _tanh(x)
+
+
+class Sigmoid(Cell):
+
+    def __init__(self):
+        super(Sigmoid, self).__init__()
+        self.sigmoid = P.Sigmoid()
+
+    def construct(self, x):
+        return self.sigmoid(x)
+
+
+def sigmoid(x):
+    """
+    Computes sigmoid of x element-wise.
+
+    Parameters
+    ----------
+    x : tensor
+        A Tensor with type float16, float32, float64, complex64, or complex128.
+
+    Returns
+    -------
+    A Tensor with the same type as x.
+    """
+    outputs = P.Sigmoid()
+    return outputs(x)
+
+
+class Softmax(Cell):
+
+    def __init__(self):
+        super(Softmax, self).__init__()
+        self.softmax = P.Softmax()
+
+    def construct(self, x):
+        return self.softmax(x)
+
+
+def softmax(logits, axis=-1):
+    """
+    Computes softmax activations.
+
+    Parameters
+    ----------
+    logits : tensor
+        Must be one of the following types: half, float32, float64.
+    axis : int
+        The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
+
+    Returns
+    -------
+    A Tensor. Has the same type and shape as logits.
+    """
+    outputs = P.Softmax(axis)
+    return outputs(logits)
+
+
+class Dropout(Cell):
+
+    def __init__(self, keep, seed=0):
+        super(Dropout, self).__init__()
+        self.dropout = P.Dropout(keep_prob=keep)
+        self.is_gpu = context.get_context('device_target') in ["GPU"]
+        self.get_shape = P.Shape()
+        self.dropout_gen_mask = P.DropoutGenMask(Seed0=seed, Seed1=0)
+        self.dropout_do_mask = P.DropoutDoMask()
+        self.cast = P.Cast()
+        self.keep_prob = keep
+
+    def construct(self, inputs):
+        if self.is_gpu:
+            outputs, _ = self.dropout(inputs)
+            return outputs
+        if self.keep_prob == 1:
+            return inputs
+        shape = self.get_shape(inputs)
+        dtype = P.DType()(inputs)
+        if self._is_float_dtype(dtype):
+            keep_prob = self.cast(self.keep_prob, dtype=dtype)
+        else:
+            keep_prob = self.cast(self.keep_prob, ms.float16)
+        output = self.dropout_gen_mask(shape, keep_prob)
+        return self.dropout_do_mask(inputs, output, keep_prob)
+
+    @staticmethod
+    def _is_float_dtype(dtype):
+        # the original definition was missing `self`/`staticmethod`, so calling
+        # it from construct raised a TypeError
+        return dtype in [ms.float32, ms.float16]
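A usage sketch for the Dropout Cell; note that `keep` follows MindSpore's keep-probability convention (the fraction of activations retained), and the GPU path delegates to `P.Dropout`:

```python
import numpy as np
from mindspore import Tensor

drop = Dropout(keep=0.9, seed=42)              # retains ~90% of activations
x = Tensor(np.ones((2, 4), dtype=np.float32))
y = drop(x)                                    # calling a Cell invokes construct()
print(y.shape)                                 # (2, 4)
```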
+ """ + + def __init__(self, data_format='channels_first'): + super(BiasAdd, self).__init__() + self.bias_add = P.BiasAdd() + if data_format in ['channels_first', 'NCW', 'NCHW', 'NCDHW']: + self.data_format = 'channels_first' + elif data_format in ['channels_last', 'NWC', 'NHWC', 'NDHWC']: + self.data_format = 'channels_last' + else: + raise ("Unsupported data format: " + str(data_format)) + + def construct(self, x, bias): + if self.data_format == 'channels_last': + x = nhwc_to_nchw(x) + outputs = self.bias_add(x, bias) + if self.data_format == 'channels_last': + outputs = nchw_to_nhwc(outputs) + return outputs + + +def bias_add(x, bias): + """ + Adds bias to value. + + Parameters + ---------- + x : tensor + A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128. + bias : tensor + Must be the same type as value unless value is a quantized type, + in which case a different quantized type may be used. + data_format : A string. + 'N...C' and 'NC...' are supported. + name : str + A name for the operation (optional). + Returns + ------- + A Tensor with the same type as value. + """ + raise NotImplementedError + + +class Conv1D(Cell): + + def __init__(self, stride, padding, data_format='NWC', dilations=None, out_channel=None, k_size=None): + super(Conv1D, self).__init__() + self.data_format, self.padding = preprocess_1d_format(data_format, padding) + self.stride = (1, stride) + self.dilations = (1, dilations) + self.k_size = (1, k_size) + self.out_channel = out_channel + + self.conv2d = P.Conv2D( + out_channel=self.out_channel, kernel_size=self.k_size, pad_mode=self.padding, stride=self.stride, + dilation=self.dilations, mode=1, group=1 + ) + + self.expand_dims = P.ExpandDims() + self.squeeze = P.Squeeze(2) + + def construct(self, x, filters): + if self.data_format == 'NWC': + x = nhwc_to_nchw(x) + + x = self.expand_dims(x, 2) + filters = self.expand_dims(filters, 2) + + output = self.conv2d(x, filters) + output = self.squeeze(output) + + if self.data_format == 'NWC': + output = nchw_to_nhwc(output) + return output + + +def conv1d(input, filters, stride, padding, data_format='NWC', dilations=None, name=None): + """ + Computes a 1-D convolution given 3-D input and filter tensors. + + Parameters + ---------- + input : tensor + A 3D Tensor. Must be of type float16, float32, or float64 + filters : tensor + A 3D Tensor. Must have the same type as input. + stride : int of list + An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step. + padding : string + 'SAME' or 'VALID' + data_format : string + An optional string from "NWC", "NCW". Defaults to "NWC", the data is stored in the order of + [batch, in_width, in_channels]. The "NCW" format stores data as [batch, in_channels, in_width]. + dilations : int or list + An int or list of ints that has length 1 or 3 which defaults to 1. + The dilation factor for each dimension of input. If set to k > 1, + there will be k-1 skipped cells between each filter element on that dimension. + Dilations in the batch and depth dimensions must be 1. + name : string + A name for the operation (optional). + Returns + ------- + A Tensor. Has the same type as input. 
+    """
+
+    pass
+
+
+class Conv2D(Cell):
+
+    def __init__(self, strides, padding, data_format='NHWC', dilations=None, out_channel=None, k_size=None):
+        super(Conv2D, self).__init__()
+        self.data_format, self.padding = preprocess_2d_format(data_format, padding)
+
+        if self.data_format == 'NHWC':
+            self.ms_stride = strides[1]
+            self.ms_dilation = dilations[1]
+        elif self.data_format == 'NCHW':
+            self.ms_stride = strides[2]
+            self.ms_dilation = dilations[2]
+
+        self.conv2d = P.Conv2D(
+            out_channel=out_channel, kernel_size=k_size, pad_mode=self.padding, stride=self.ms_stride,
+            dilation=self.ms_dilation, mode=1, group=1, data_format=self.data_format
+        )
+
+    def construct(self, inputs, filters):
+        outputs = self.conv2d(inputs, filters)
+        return outputs
+
+
+def conv2d(input, filters, strides, padding, data_format='NCHW', dilations=None):
+    """
+    Computes a 2-D convolution given 4-D input and filters tensors.
+
+    Parameters
+    ----------
+    input : tensor
+        Must be one of the following types: half, bfloat16, float32, float64. A 4-D tensor.
+        The dimension order is interpreted according to the value of data_format, see below for details.
+    filters : tensor
+        Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]
+    strides : int or list
+        The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension.
+        By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
+    padding : string
+        "SAME" or "VALID"
+    data_format : string
+        "NHWC", "NCHW". Defaults to "NCHW".
+    dilations : int or list of ints
+        A list of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input.
+
+    Returns
+    -------
+        A Tensor. Has the same type as input.
+    """
+    raise NotImplementedError
+
+
+class Conv3D(Cell):
+
+    def __init__(self, strides, padding, data_format='NDHWC', dilations=None, out_channel=None, k_size=None):
+        super(Conv3D, self).__init__()
+        self.data_format, self.padding = preprocess_3d_format(data_format, padding)
+
+        if self.data_format == 'NDHWC':
+            raise NotImplementedError("MindSpore Conv3D currently only supports the 'NCDHW' data format.")
+        elif self.data_format == 'NCDHW':
+            self.ms_stride = strides[2]
+            self.ms_dilation = dilations[2]
+
+        self.conv3d = P.Conv3D(
+            out_channel=out_channel, kernel_size=k_size, pad_mode=self.padding, stride=self.ms_stride,
+            dilation=self.ms_dilation, data_format=self.data_format
+        )
+
+    def construct(self, input, filters):
+        outputs = self.conv3d(input, filters)
+        return outputs
+
+
+def conv3d(input, filters, strides, padding, data_format='NDHWC', dilations=None, name=None):
+    """
+    Computes a 3-D convolution given 5-D input and filters tensors.
+
+    Parameters
+    ----------
+    input : tensor
+        Must be one of the following types: half, bfloat16, float32, float64.
+        Shape [batch, in_depth, in_height, in_width, in_channels].
+    filters : tensor
+        Must have the same type as input. Shape [filter_depth, filter_height, filter_width, in_channels, out_channels].
+        in_channels must match between input and filters.
+    strides : list of ints
+        A list of ints that has length >= 5. 1-D tensor of length 5.
+        The stride of the sliding window for each dimension of input.
+        Must have strides[0] = strides[4] = 1.
+    padding : string
+        A string from: "SAME", "VALID". The type of padding algorithm to use.
+ data_format : string + An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. + With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. + Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width]. + dilations : list of ints + Defaults to [1, 1, 1, 1, 1]. 1-D tensor of length 5. The dilation factor for each dimension of input. + If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. + The dimension order is determined by the value of data_format, see above for details. + Dilations in the batch and depth dimensions must be 1. + name : string + A name for the operation (optional). + + Returns + ------- + A Tensor. Has the same type as input. + """ + + raise NotImplementedError + + +def lrn(inputs, depth_radius, bias, alpha, beta): + """ + Local Response Normalization. + + Parameters + ---------- + inputs : tensor + Must be one of the following types: half, bfloat16, float32. 4-D. + depth_radius : int + Defaults to 5. 0-D. Half-width of the 1-D normalization window. + bias : float + Defaults to 1. An offset (usually positive to avoid dividing by 0). + alpha : float + Defaults to 1. A scale factor, usually positive. + beta : float + Defaults to 0.5. An exponent. + + Returns + ------- + A Tensor. Has the same type as input. + """ + pass + + +def moments(x, axes, shift=None, keepdims=False): + """ + Calculates the mean and variance of x. + + Parameters + ---------- + x : tensor + A Tensor + axes : ints + Axes along which to compute mean and variance. + shift : int + Not used in the current implementation. + keepdims : bool + produce moments with the same dimensionality as the input. + + Returns + ------- + Two Tensor objects: mean and variance. + """ + + pass + + +class MaxPool1d(Cell): + + def __init__(self, ksize, strides, padding, data_format=None): + super(MaxPool1d, self).__init__() + self.data_format, padding = preprocess_1d_format(data_format=data_format, padding=padding) + self.expand = P.ExpandDims() + _strides = (1, strides[0]) + _ksize = (1, ksize[0]) + if self.data_format == 'NWC': + self.squeeze = P.Squeeze(1) + _data_format = 'NHWC' + if self.data_format == 'NCW': + self.squeeze = P.Squeeze(2) + _data_format = 'NCHW' + + self.max_pool = P.MaxPool(kernel_size=_ksize, strides=_strides, pad_mode=padding, data_format=_data_format) + + def construct(self, inputs): + if self.data_format == 'NWC': + x = self.expand(inputs, 1) + if self.data_format == 'NCW': + x = self.expand(inputs, 2) + output = self.max_pool(x) + output = self.squeeze(output) + return output + + +class MaxPool(Cell): + + def __init__(self, ksize, strides, padding, data_format=None): + super(MaxPool, self).__init__() + data_format, padding = preprocess_2d_format(data_format=data_format, padding=padding) + + if data_format == 'NHWC': + _strides = (strides[1], strides[2]) + if data_format == 'NCHW': + _strides = (strides[2], strides[3]) + + self.maxpool = P.MaxPool(kernel_size=ksize, strides=_strides, pad_mode=padding, data_format=data_format) + + def construct(self, inputs): + outputs = self.maxpool(inputs) + return outputs + + +def max_pool(input, ksize, strides, padding, data_format=None): + """ + Performs the max pooling on the input. 
+
+    Parameters
+    ----------
+    input : tensor
+        Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] if data_format does not start
+        with "NC" (default), or [batch_size, num_channels] + input_spatial_shape if data_format starts with "NC".
+        Pooling happens over the spatial dimensions only.
+    ksize : int or list of ints
+        An int or list of ints that has length 1, N or N+2.
+        The size of the window for each dimension of the input tensor.
+    strides : int or list of ints
+        An int or list of ints that has length 1, N or N+2.
+        The stride of the sliding window for each dimension of the input tensor.
+    padding : string
+        'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details.
+
+    Returns
+    -------
+        A Tensor of format specified by data_format. The max pooled output tensor.
+    """
+    data_format, padding = preprocess_2d_format(data_format=data_format, padding=padding)
+    if data_format == 'NHWC':
+        _strides = (strides[1], strides[2])
+    if data_format == 'NCHW':
+        _strides = (strides[2], strides[3])
+    outputs = P.MaxPool(kernel_size=ksize, strides=_strides, pad_mode=padding, data_format=data_format)(input)
+    return outputs
+
+
+class AvgPool1d(Cell):
+
+    def __init__(self, ksize, strides, padding, data_format=None):
+        super(AvgPool1d, self).__init__()
+        self.data_format, self.padding = preprocess_1d_format(data_format=data_format, padding=padding)
+        self.kernel_size = (1, ksize[0])
+        self.stride = (1, strides[0])
+
+        if self.data_format == 'NWC':
+            _data_format = 'NHWC'
+            self.squeeze = P.Squeeze(1)
+        if self.data_format == 'NCW':
+            _data_format = 'NCHW'
+            self.squeeze = P.Squeeze(2)
+
+        self.avg_pool = P.AvgPool(
+            kernel_size=self.kernel_size, strides=self.stride, pad_mode=self.padding, data_format=_data_format
+        )
+        self.reduce_mean = P.ReduceMean(keep_dims=True)
+        self.slice = P.Slice()
+        self.expand = P.ExpandDims()
+        self.shape = P.Shape()
+
+    def construct(self, inputs):
+        x = inputs
+        batch, channel, width = self.shape(inputs)
+        if width == self.kernel_size[1]:
+            x = self.reduce_mean(x, 2)
+        elif width - self.kernel_size[1] < self.stride[1]:
+            x = self.slice(x, (0, 0, 0), (batch, channel, self.kernel_size[1]))
+            x = self.reduce_mean(x, 2)
+        else:
+            if self.data_format == 'NCW':
+                x = self.expand(x, 2)
+            if self.data_format == 'NWC':
+                x = self.expand(x, 1)
+            x = self.avg_pool(x)
+            x = self.squeeze(x)
+        return x
+
+
+class AvgPool(Cell):
+
+    def __init__(self, ksize, strides, padding, data_format=None):
+        super(AvgPool, self).__init__()
+        self.data_format, self.padding = preprocess_2d_format(data_format=data_format, padding=padding)
+        ms_ksize = ksize[1]
+        ms_strides = strides[1]
+        self.avgpool = P.AvgPool(kernel_size=ms_ksize, strides=ms_strides, pad_mode=self.padding, data_format=self.data_format)
+
+    def construct(self, inputs):
+        outputs = self.avgpool(inputs)
+        return outputs
+
+
+def avg_pool(input, ksize, strides, padding):
+    """
+    Performs the avg pooling on the input.
+
+    Parameters
+    ----------
+    input : tensor
+        Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels]
+        if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape
+        if data_format starts with "NC". Pooling happens over the spatial dimensions only.
+    ksize : int or list of ints
+        An int or list of ints that has length 1, N or N+2.
+        The size of the window for each dimension of the input tensor.
+    strides : int or list of ints
+        An int or list of ints that has length 1, N or N+2.
+        The stride of the sliding window for each dimension of the input tensor.
+    padding : string
+        'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details.
+
+    Returns
+    -------
+        A Tensor of format specified by data_format. The average pooled output tensor.
+    """
+    padding = padding_format(padding)
+    ms_ksize = ksize[1]
+    ms_strides = strides[1]
+    outputs = P.AvgPool(kernel_size=ms_ksize, strides=ms_strides, pad_mode=padding)
+    return outputs(input)
+
+
+class MaxPool3d(Cell):
+
+    def __init__(self, ksize, strides, padding, data_format=None):
+        super(MaxPool3d, self).__init__()
+        self.data_format, self.padding = preprocess_3d_format(data_format, padding)
+        if self.data_format == 'NDHWC':
+            _strides = (strides[1], strides[2], strides[3])
+        if self.data_format == 'NCDHW':
+            _strides = (strides[2], strides[3], strides[4])
+        self.max_pool3d = P.MaxPool3D(
+            kernel_size=ksize, strides=_strides, pad_mode=self.padding, data_format=self.data_format
+        )
+
+    def __call__(self, inputs):
+        outputs = self.max_pool3d(inputs)
+        return outputs
+
+
+def max_pool3d(input, ksize, strides, padding, data_format=None, name=None):
+    """
+    Performs the max pooling on the input.
+
+    Parameters
+    ----------
+    input : tensor
+        A 5-D Tensor of the format specified by data_format.
+    ksize : int or list of ints
+        An int or list of ints that has length 1, 3 or 5.
+        The size of the window for each dimension of the input tensor.
+    strides : int or list of ints
+        An int or list of ints that has length 1, 3 or 5.
+        The stride of the sliding window for each dimension of the input tensor.
+    padding : string
+        'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details.
+    data_format : string
+        "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data.
+        With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels].
+        Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
+    name : string
+        A name for the operation (optional).
+
+    Returns
+    -------
+        A Tensor of format specified by data_format. The max pooled output tensor.
+    """
+    pass
+
+
+class AvgPool3d(Cell):
+
+    def __init__(self, ksize, strides, padding, data_format=None):
+        super(AvgPool3d, self).__init__()
+        self.data_format, self.padding = preprocess_3d_format(data_format, padding)
+        if self.data_format == 'NDHWC':
+            _strides = (strides[1], strides[2], strides[3])
+        if self.data_format == 'NCDHW':
+            _strides = (strides[2], strides[3], strides[4])
+        raise NotImplementedError
+
+    def __call__(self, inputs):
+        pass
+
+
+def avg_pool3d(input, ksize, strides, padding, data_format=None, name=None):
+    """
+    Performs the average pooling on the input.
+
+    Parameters
+    ----------
+    input : tensor
+        A 5-D Tensor of shape [batch, height, width, channels] and type float32, float64, qint8, quint8, or qint32.
+    ksize : int or list of ints
+        An int or list of ints that has length 1, 3 or 5. The size of the window for each dimension of the input tensor.
+    strides : int or list of ints
+        An int or list of ints that has length 1, 3 or 5.
+        The stride of the sliding window for each dimension of the input tensor.
+    padding : string
+        'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details.
+    data_format : string
+        'NDHWC' and 'NCDHW' are supported.
+    name : string
+        Optional name for the operation.
+ + Returns + ------- + A Tensor with the same type as value. The average pooled output tensor. + """ + pass + + +def pool(input, window_shape, pooling_type, strides=None, padding='VALID', data_format=None, dilations=None, name=None): + """ + Performs an N-D pooling operation. + + Parameters + ---------- + input : tensor + Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] + if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape + if data_format starts with "NC". Pooling happens over the spatial dimensions only. + window_shape : int + Sequence of N ints >= 1. + pooling_type : string + Specifies pooling operation, must be "AVG" or "MAX". + strides : ints + Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1. + padding : string + The padding algorithm, must be "SAME" or "VALID". Defaults to "SAME". + See the "returns" section of tf.ops.convolution for details. + data_format : string + Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), + or the second dimension (if data_format starts with "NC"). + For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". + For N=3, the valid values are "NDHWC" (default) and "NCDHW". + dilations : list of ints + Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1. + name : string + Optional. Name of the op. + + Returns + ------- + Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels] + """ + pass + + +class DepthwiseConv2d(Cell): + + def __init__(self, strides, padding, data_format=None, dilations=None, ksize=None, channel_multiplier=1): + super(DepthwiseConv2d, self).__init__() + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.ms_stride = strides[1] + self.ms_dilation = dilations[1] + self.depthwise_conv2d = P.DepthwiseConv2dNative( + channel_multiplier=channel_multiplier, kernel_size=ksize, stride=self.ms_stride, dilation=self.ms_dilation + ) + + def construct(self, input, filter): + if self.data_format == 'NHWC': + input = nhwc_to_nchw(input) + outputs = self.depthwise_conv2d(input, filter) + if self.data_format == 'NHWC': + outputs = nchw_to_nhwc(outputs) + return outputs + + +def depthwise_conv2d(input, filter, strides, padding, data_format=None, dilations=None, name=None): + """ + Depthwise 2-D convolution. + + Parameters + ---------- + input : tensor + 4-D with shape according to data_format. + filter : tensor + 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier]. + strides : list + 1-D of size 4. The stride of the sliding window for each dimension of input. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + The data format for input. Either "NHWC" (default) or "NCHW". + dilations : list + 1-D of size 2. The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. + If it is greater than 1, then all values of strides must be 1. + name : string + A name for this operation (optional). + + Returns + ------- + A 4-D Tensor with shape according to data_format. + E.g., for "NHWC" format, shape is [batch, out_height, out_width, in_channels * channel_multiplier]. 
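+
+    Examples
+    --------
+    Intended usage sketch (this backend entry is still a stub; shapes and
+    names are illustrative):
+
+    >>> import numpy as np
+    >>> import mindspore as ms
+    >>> x = ms.Tensor(np.ones((1, 28, 28, 3)), ms.float32)  # NHWC input
+    >>> w = ms.Tensor(np.ones((3, 3, 3, 2)), ms.float32)    # channel_multiplier = 2
+    >>> y = depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
+    >>> # y would have shape [1, 28, 28, 6], i.e. in_channels * channel_multiplier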
+ """ + + pass + + +class Conv1d_transpose(Cell): + + def __init__(self, strides, padding, data_format, dilations=None, out_channel=None, k_size=None, in_channels=None): + super(Conv1d_transpose, self).__init__() + self.data_format, self.padding = preprocess_1d_format(data_format, padding) + self.in_channels = in_channels + self.out_channel = out_channel + self.strides = (1, strides) + self.dilations = (1, dilations) + self.k_size = (1, k_size) + + self.conv2d_transpose = P.Conv2DBackpropInput( + out_channel=self.in_channels, kernel_size=self.k_size, pad_mode=self.padding, stride=self.strides, + dilation=self.dilations, mode=1, group=1 + ) + self.shape = P.Shape() + self.expand_dims = P.ExpandDims() + self.squeeze = P.Squeeze(2) + + def _deconv_output_length(self, input_length, filter_size, stride_size, dilation_size): + length = 0 + filter_size = filter_size + (filter_size - 1) * (dilation_size - 1) + + if self.padding == 'same': + length = input_length * stride_size + elif self.padding == 'valid': + length = input_length * stride_size + max(filter_size - stride_size, 0) + + return length + + def construct(self, x, filters): + if self.data_format == 'NWC': + x = nhwc_to_nchw(x) + x = self.expand_dims(x, 2) + filters = self.expand_dims(filters, 2) + n, _, h, w = self.shape(x) + + h_out = self._deconv_output_length(h, self.k_size[0], self.strides[0], self.dilations[0]) + w_out = self._deconv_output_length(w, self.k_size[1], self.strides[1], self.dilations[1]) + output = self.conv2d_transpose(x, filters, (n, self.out_channel, h_out, w_out)) + output = self.squeeze(output) + + if self.data_format == 'NWC': + output = nchw_to_nhwc(output) + return output + + +def conv1d_transpose( + input, filters, output_shape, strides, padding='SAME', data_format='NWC', dilations=None, name=None +): + """ + The transpose of conv1d. + + Parameters + ---------- + input : tensor + A 3-D Tensor of type float and shape [batch, in_width, in_channels] + for NWC data format or [batch, in_channels, in_width] for NCW data format. + filters : tensor + A 3-D Tensor with the same type as value and shape [filter_width, output_channels, in_channels]. + filter's in_channels dimension must match that of value. + output_shape : tensor + A 1-D Tensor, containing three elements, representing the output shape of the deconvolution op. + strides : list + An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NWC' and 'NCW' are supported. + dilations : list + An int or list of ints that has length 1 or 3 which defaults to 1. + The dilation factor for each dimension of input. If set to k > 1, + there will be k-1 skipped cells between each filter element on that dimension. + Dilations in the batch and depth dimensions must be 1. + name : string + Optional name for the returned tensor. + + Returns + ------- + A Tensor with the same type as value. 
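+
+    Examples
+    --------
+    Intended usage sketch (the body below is still a stub; values are
+    illustrative):
+
+    >>> import numpy as np
+    >>> import mindspore as ms
+    >>> x = ms.Tensor(np.ones((8, 50, 16)), ms.float32)  # [batch, in_width, in_channels]
+    >>> w = ms.Tensor(np.ones((5, 8, 16)), ms.float32)   # [filter_width, output_channels, in_channels]
+    >>> y = conv1d_transpose(x, w, output_shape=[8, 100, 8], strides=2, padding='SAME')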
+ """ + pass + + +class Conv2d_transpose(Cell): + + def __init__(self, strides, padding, data_format, dilations=None, out_channel=None, k_size=None, in_channels=None): + super(Conv2d_transpose, self).__init__() + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.in_channels = in_channels + self.out_channel = out_channel + + self.k_size = k_size + if self.data_format == 'NHWC': + self.strides = (strides[1], strides[2]) + self.dilations = (dilations[1], dilations[2]) + elif self.data_format == 'NCHW': + self.strides = (strides[2], strides[3]) + self.dilations = (dilations[2], dilations[3]) + + self.conv2d_transpose = P.Conv2DBackpropInput( + out_channel=self.in_channels, kernel_size=self.k_size, pad_mode=self.padding, stride=self.strides, + dilation=self.dilations, mode=1, group=1 + ) + self.shape = P.Shape() + + def _deconv_output_length(self, input_length, filter_size, stride_size, dilation_size): + length = 0 + filter_size = filter_size + (filter_size - 1) * (dilation_size - 1) + + if self.padding == 'same': + length = input_length * stride_size + elif self.padding == 'valid': + length = input_length * stride_size + max(filter_size - stride_size, 0) + + return length + + def construct(self, x, filters): + if self.data_format == 'NHWC': + x = nhwc_to_nchw(x) + + n, _, h, w = self.shape(x) + + h_out = self._deconv_output_length(h, self.k_size[0], self.strides[0], self.dilations[0]) + w_out = self._deconv_output_length(w, self.k_size[1], self.strides[1], self.dilations[1]) + + output = self.conv2d_transpose(x, filters, (n, self.out_channel, h_out, w_out)) + + if self.data_format == 'NHWC': + output = nchw_to_nhwc(output) + + return output + + +def conv2d_transpose( + input, filters, output_shape, strides, padding='SAME', data_format='NHWC', dilations=None, name=None +): + """ + The transpose of conv2d. + + Parameters + ---------- + input : tensor + A 4-D Tensor of type float and shape [batch, height, width, in_channels] + for NHWC data format or [batch, in_channels, height, width] for NCHW data format. + filters : tensor + A 4-D Tensor with the same type as input and shape [height, width, + output_channels, in_channels]. filter's in_channels dimension must match that of input. + output_shape : tensor + A 1-D Tensor representing the output shape of the deconvolution op. + strides : list + An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. + If a single value is given it is replicated in the H and W dimension. + By default the N and C dimensions are set to 0. + The dimension order is determined by the value of data_format, see below for details. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NHWC' and 'NCHW' are supported. + dilations : list + An int or list of ints that has length 1, 2 or 4, defaults to 1. + name : string + Optional name for the returned tensor. + + Returns + ------- + A Tensor with the same type as input. 
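+
+    Examples
+    --------
+    Intended usage sketch (the body below is still a stub; values are
+    illustrative):
+
+    >>> import numpy as np
+    >>> import mindspore as ms
+    >>> x = ms.Tensor(np.ones((8, 14, 14, 32)), ms.float32)  # NHWC input
+    >>> w = ms.Tensor(np.ones((3, 3, 16, 32)), ms.float32)   # [h, w, output_channels, in_channels]
+    >>> y = conv2d_transpose(x, w, output_shape=[8, 28, 28, 16], strides=[1, 2, 2, 1], padding='SAME')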
+ """ + pass + + +class Conv3d_transpose(Cell): + + def __init__( + self, strides, padding, data_format='NDHWC', dilations=None, name=None, out_channel=None, k_size=None, + in_channels=None + ): + super(Conv3d_transpose, self).__init__() + self.data_format, self.padding = preprocess_3d_format(data_format, padding) + if self.data_format == 'NDHWC': + self.strides = (strides[1], strides[2], strides[3]) + self.dilations = (dilations[1], dilations[2], dilations[3]) + elif self.data_format == 'NCDHW': + self.strides = (strides[2], strides[3], strides[4]) + self.dilations = (dilations[2], dilations[3], dilations[4]) + + self.conv3d_transpose = P.Conv3DTranspose( + in_channel=in_channels, out_channel=out_channel, kernel_size=k_size, mode=1, pad_mode=padding, + stride=self.strides, dilation=self.dilations, data_format=self.data_format + ) + + def construct(self, input, filters): + output = self.conv3d_transpose(input, filters) + return output + + +def conv3d_transpose( + input, filters, output_shape, strides, padding='SAME', data_format='NDHWC', dilations=None, name=None +): + """ + The transpose of conv3d. + + Parameters + ---------- + input : tensor + A 5-D Tensor of type float and shape [batch, height, width, in_channels] for + NHWC data format or [batch, in_channels, height, width] for NCHW data format. + filters : tensor + A 5-D Tensor with the same type as value and shape [height, width, output_channels, in_channels]. + filter's in_channels dimension must match that of value. + output_shape : tensor + A 1-D Tensor representing the output shape of the deconvolution op. + strides : list + An int or list of ints that has length 1, 3 or 5. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NDHWC' and 'NCDHW' are supported. + dilations : list of ints + An int or list of ints that has length 1, 3 or 5, defaults to 1. + name : string + Optional name for the returned tensor. + + Returns + ------- + A Tensor with the same type as value. 
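+
+    Examples
+    --------
+    Intended usage sketch (the body below is still a stub; values are
+    illustrative):
+
+    >>> import numpy as np
+    >>> import mindspore as ms
+    >>> x = ms.Tensor(np.ones((4, 8, 8, 8, 16)), ms.float32)  # NDHWC input
+    >>> w = ms.Tensor(np.ones((3, 3, 3, 8, 16)), ms.float32)  # [d, h, w, output_channels, in_channels]
+    >>> y = conv3d_transpose(x, w, output_shape=[4, 16, 16, 16, 8], strides=[1, 2, 2, 2, 1])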
+ """ + + pass + + +class BatchNorm(Cell): + """Batch Normalization base class.""" + + @cell_attr_register + def __init__( + self, num_features, epsilon=1e-5, decay=0.9, gamma=None, beta=None, moving_mean=None, moving_var=None, + is_train=None, device_num_each_group=1, process_groups=0, data_format='NCHW' + ): + super(BatchNorm, self).__init__() + if data_format in ["channels_last", "NHWC", "nhwc"]: + data_format = "NHWC" + elif data_format in ["channels_first", "NCHW", "nchw"]: + data_format = "NCHW" + validator.check_value_type('num_features', num_features, [int], self.cls_name) + if num_features < 1: + raise ValueError("num_features must be at least 1") + + if decay < 0 or decay > 1: + raise ValueError("momentum should be a number in range [0, 1], but got {}".format(decay)) + self.format = validator.check_string(data_format, ['NCHW', 'NHWC'], 'format', self.cls_name) + if context.get_context("device_target") != "GPU" and self.format == "NHWC": + raise ValueError("NHWC format only support in GPU target.") + self.use_batch_statistics = is_train + self.num_features = num_features + self.eps = epsilon + self.moving_mean = moving_mean + self.moving_variance = moving_var + self.gamma = gamma + self.beta = beta + self.group_device_num = validator.check_positive_int(device_num_each_group) + self.process_groups = process_groups + self.is_global = False + self.parallel_mode = context.get_auto_parallel_context("parallel_mode") + global SYNC_BN_GROUP_NAME + # for GlobalBatchNorm + if self.group_device_num != 1: + self.rank_id = get_rank() + self.rank_size = get_group_size() + self.device_list = [i for i in range(0, self.rank_size)] + self.rank_list = self.list_group(self.device_list, self.group_device_num) + self.rank_list_idx = len(self.rank_list) + for i in range(self.rank_list_idx): + if self.rank_id in self.rank_list[i]: + self.is_global = True + if SYNC_BN_GROUP_NAME == "": + SYNC_BN_GROUP_NAME = "sync_bn_group" + str(i) + management.create_group(SYNC_BN_GROUP_NAME, self.rank_list[i]) + # for SyncBatchNorm + if self.process_groups != 0: + self.rank_id = get_rank() + self.rank_size = get_group_size() + if self.process_groups is not None: + validator.check_isinstance("process_groups", self.process_groups, list) + self._check_rank_ids(self.process_groups, self.rank_size) + for i in range(len(self.process_groups)): + validator.check_isinstance("process_groups[" + str(i) + "]", self.process_groups[i], list) + self.group_device_num = len(self.process_groups[i]) + if self.rank_id in self.process_groups[i] and self.group_device_num > 1: + self.is_global = True + if SYNC_BN_GROUP_NAME == "": + SYNC_BN_GROUP_NAME = "sync_bn_group" + str(i) + management.create_group(SYNC_BN_GROUP_NAME, self.process_groups[i]) + elif self.rank_size > 1: + self.is_global = True + self.group_device_num = self.rank_size + self.device_list = [i for i in range(0, self.rank_size)] + if SYNC_BN_GROUP_NAME == "": + SYNC_BN_GROUP_NAME = "sync_bn_group0" + management.create_group(SYNC_BN_GROUP_NAME, self.device_list) + + self.shape = P.Shape() + self.reduce_mean = P.ReduceMean(keep_dims=True) + self.square = P.Square() + self.sqrt = P.Sqrt() + self.cast = P.Cast() + self.dtype = P.DType() + self.reshape = P.Reshape() + self._target = context.get_context("device_target") + self.is_graph_mode = context.get_context("mode") == context.GRAPH_MODE + self.momentum = 1.0 - decay + if context.get_context("enable_ge"): + self.is_ge_backend = True + else: + self.is_ge_backend = False + + self.bn_train = P.BatchNorm(is_training=True, 
epsilon=self.eps, momentum=self.momentum, data_format=self.format) + if self.is_global: + self.bn_train = inner.SyncBatchNorm( + epsilon=self.eps, momentum=self.momentum, group=SYNC_BN_GROUP_NAME, device_num=self.group_device_num + ) + + self.bn_infer = P.BatchNorm(is_training=False, epsilon=self.eps, data_format=self.format) + + data_parallel_strategy = ((1, ), (1, )) + data_parallel_strategy_one = ((1, ), ()) + self.sub_mean = P.Sub().shard(data_parallel_strategy) + self.sub_var = P.Sub().shard(data_parallel_strategy) + self.mul_mean = P.Mul().shard(data_parallel_strategy_one) + self.mul_var = P.Mul().shard(data_parallel_strategy_one) + self.assign_sub_mean = P.AssignSub().shard(data_parallel_strategy) + self.assign_sub_var = P.AssignSub().shard(data_parallel_strategy) + + def list_group(self, world_rank, group_size): + if group_size > get_group_size(): + raise ValueError( + "group size can not be greater than local rank size, group size is {}, " + "local_rank_size is {}".format(group_size, get_group_size()) + ) + if len(world_rank) % group_size != 0: + raise ValueError("please make your group size correct.") + world_rank_list = zip(*(iter(world_rank), ) * group_size) + group_list = [list(i) for i in world_rank_list] + return group_list + + def _check_rank_ids(self, process_groups, rank_size): + seen = set() + for rid in itertools.chain(*process_groups): + validator.check_int_range(rid, 0, rank_size, Rel.INC_LEFT, "rank id in process_groups") + if rid in seen: + raise ValueError("rank id in process_groups should not be duplicated.") + seen.add(rid) + + def construct(self, inputs): + x_shape = F.shape(inputs) + if len(x_shape) == 5: + inputs = self.reshape(inputs, (x_shape[0], x_shape[1], x_shape[2] * x_shape[3], x_shape[4])) + + flag = self.use_batch_statistics + + if flag: + output = self.bn_train(inputs, self.gamma, self.beta, self.moving_mean, self.moving_variance)[0] + + if len(x_shape) == 5: + output = self.reshape(output, x_shape) + return output + + output = self.bn_infer(inputs, self.gamma, self.beta, self.moving_mean, self.moving_variance)[0] + if len(x_shape) == 5: + output = self.reshape(output, x_shape) + return output + + def extend_repr(self): + return 'num_features={}, eps={}, momentum={}, gamma={}, beta={}, moving_mean={}, moving_variance={}'.format( + self.num_features, self.eps, self.momentum, self.gamma, self.beta, self.moving_mean, self.moving_variance + ) + + +class GroupConv2D(Cell): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, groups): + super(GroupConv2D, self).__init__() + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + + if self.data_format is 'NHWC': + self.ms_stride = strides[1] + self.ms_dilation = dilations[1] + + elif self.data_format is 'NCHW': + self.ms_stride = strides[2] + self.ms_dilation = dilations[2] + + self.conv2d = P.Conv2D( + out_channel=out_channel, kernel_size=k_size, pad_mode=self.padding, stride=self.ms_stride, + dilation=self.ms_dilation, mode=1, group=groups, data_format=self.data_format + ) + + def construct(self, inputs, filters): + outputs = self.conv2d(inputs, filters) + return outputs + + +class SeparableConv1D(Cell): + + def __init__(self, stride, padding, data_format, dilations, out_channel, k_size, in_channel, depth_multiplier): + super(SeparableConv1D, self).__init__() + self.data_format, self.padding = preprocess_1d_format(data_format, padding) + self.stride = (1, stride) + self.dilations = (1, dilations) + self.k_size = (1, k_size) + self.out_channel = 
out_channel + self.in_channel = in_channel + self.depth_multiplier = depth_multiplier + self.depthwise_conv = P.Conv2D( + out_channel=self.in_channel * self.depth_multiplier, kernel_size=self.k_size, pad_mode=self.padding, + stride=self.stride, dilation=self.dilations, mode=1, group=self.in_channel + ) + + self.pointwise_conv = P.Conv2D( + out_channel=self.out_channel, kernel_size=(1, 1), pad_mode=self.padding, stride=(1, 1), dilation=(1, 1), + mode=1, group=1 + ) + + self.expand_dims = P.ExpandDims() + self.squeeze = P.Squeeze(2) + + def construct(self, x, depthwise_filters, pointwise_filters): + + if self.data_format == 'NWC': + x = nhwc_to_nchw(x) + + x = self.expand_dims(x, 2) + depthwise_filters = self.expand_dims(depthwise_filters, 2) + pointwise_filters = self.expand_dims(pointwise_filters, 2) + + outputs = self.depthwise_conv(x, depthwise_filters) + outputs = self.pointwise_conv(outputs, pointwise_filters) + + outputs = self.squeeze(outputs) + + if self.data_format == 'NWC': + outputs = nchw_to_nhwc(outputs) + return outputs + + +class SeparableConv2D(Cell): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, in_channel, depth_multiplier): + super(SeparableConv2D, self).__init__() + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.k_size = k_size + self.out_channel = out_channel + self.in_channel = in_channel + self.depth_multiplier = depth_multiplier + + if self.data_format is 'NHWC': + self.ms_stride = strides[1] + self.ms_dilation = dilations[1] + elif self.data_format is 'NCHW': + self.ms_stride = strides[2] + self.ms_dilation = dilations[2] + + self.depthwise_conv = P.Conv2D( + out_channel=self.in_channel * self.depth_multiplier, kernel_size=self.k_size, pad_mode=self.padding, + stride=self.ms_stride, dilation=self.ms_dilation, mode=1, group=self.in_channel, + data_format=self.data_format + ) + + self.pointwise_conv = P.Conv2D( + out_channel=self.out_channel, kernel_size=(1, 1), pad_mode=self.padding, stride=(1, 1), dilation=(1, 1), + mode=1, group=1, data_format=self.data_format + ) + + def construct(self, x, depthwise_filters, pointwise_filters): + outputs = self.depthwise_conv(x, depthwise_filters) + outputs = self.pointwise_conv(outputs, pointwise_filters) + return outputs + + +class AdaptiveMeanPool1D(Cell): + + def __init__(self, output_size, data_format): + super(AdaptiveMeanPool1D, self).__init__() + self.data_format, _ = preprocess_1d_format(data_format, None) + self.output_size = output_size + self.expand_dims = P.ExpandDims() + self.squeeze = P.Squeeze(2) + + def construct(self, inputs): + + if self.data_format == 'NWC': + n, w, c = inputs.shape + inputs = nhwc_to_nchw(inputs) + else: + n, c, w = inputs.shape + inputs = self.expand_dims(inputs, 2) + + stride = (1, w // self.output_size) + kernel = (1, w - (self.output_size - 1) * stride[1]) + outputs = P.AvgPool(kernel_size=kernel, strides=stride, pad_mode='VALID')(inputs) + outputs = self.squeeze(outputs) + + if self.data_format == 'NWC': + outputs = nchw_to_nhwc(outputs) + + return outputs + + +class AdaptiveMeanPool2D(Cell): + + def __init__(self, output_size, data_format): + super(AdaptiveMeanPool2D, self).__init__() + self.data_format, _ = preprocess_2d_format(data_format, None) + self.output_size = output_size + + def construct(self, inputs): + + if self.data_format == 'NHWC': + n, h, w, c = inputs.shape + inputs = nhwc_to_nchw(inputs) + else: + n, c, h, w = inputs.shape + + out_h, out_w = self.output_size + stride_h = h // out_h + kernel_h = 
h - (out_h - 1) * stride_h + stride_w = w // out_w + kernel_w = w - (out_w - 1) * stride_w + outputs = P.AvgPool(kernel_size=(kernel_h, kernel_w), strides=(stride_h, stride_w), pad_mode='VALID')(inputs) + + if self.data_format == 'NHWC': + outputs = nchw_to_nhwc(outputs) + + return outputs + + +class AdaptiveMeanPool3D(Cell): + + pass + + +class AdaptiveMaxPool1D(Cell): + + def __init__(self, output_size, data_format): + super(AdaptiveMaxPool1D, self).__init__() + self.data_format, _ = preprocess_1d_format(data_format, None) + self.output_size = output_size + self.expand_dims = P.ExpandDims() + self.squeeze = P.Squeeze(2) + + def construct(self, inputs): + + if self.data_format == 'NWC': + n, w, c = inputs.shape + inputs = nhwc_to_nchw(inputs) + else: + n, c, w = inputs.shape + inputs = self.expand_dims(inputs, 2) + + stride = (1, w // self.output_size) + kernel = (1, w - (self.output_size - 1) * stride[1]) + outputs = P.MaxPool(kernel_size=kernel, strides=stride, pad_mode='VALID')(inputs) + outputs = self.squeeze(outputs) + + if self.data_format == 'NWC': + outputs = nchw_to_nhwc(outputs) + + return outputs + + +class AdaptiveMaxPool2D(Cell): + + def __init__(self, output_size, data_format): + super(AdaptiveMaxPool2D, self).__init__() + self.data_format, _ = preprocess_2d_format(data_format, None) + self.output_size = output_size + + def construct(self, inputs): + + if self.data_format == 'NHWC': + n, h, w, c = inputs.shape + inputs = nhwc_to_nchw(inputs) + else: + n, c, h, w = inputs.shape + + out_h, out_w = self.output_size + stride_h = h // out_h + kernel_h = h - (out_h - 1) * stride_h + stride_w = w // out_w + kernel_w = w - (out_w - 1) * stride_w + outputs = P.MaxPool( + kernel_size=(kernel_h, kernel_w), strides=(stride_h, stride_w), pad_mode='VALID', + data_format=self.data_format + )(inputs) + + return outputs + + +class AdaptiveMaxPool3D(Cell): + + pass + + +class BinaryConv2D(Cell): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, in_channel): + super(BinaryConv2D, self).__init__() + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + if self.data_format is 'NHWC': + self.ms_stride = strides[1] + self.ms_dilation = dilations[1] + elif self.data_format is 'NCHW': + self.ms_stride = strides[2] + self.ms_dilation = dilations[2] + + self.conv2d = P.Conv2D( + out_channel=out_channel, kernel_size=k_size, pad_mode=self.padding, stride=self.ms_stride, + dilation=self.ms_dilation, mode=1, group=1, data_format=self.data_format + ) + + @bprop_getters.register(P.Sign) + def get_bprop_Sign(self): + + def bprop(x, out, dout): + + grad = P.clip_by_value(dout, -1, 1) + return (grad, ) + + return bprop + + self.sign = P.Sign() + + def construct(self, inputs, filters): + + filters = self.sign(filters) + outputs = self.conv2d(inputs, filters) + + return outputs + + +class DorefaConv2D(Cell): + + def __init__(self, bitW, bitA, strides, padding, data_format, dilations, out_channel, k_size, in_channel): + super(DorefaConv2D, self).__init__() + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.bitW = ms.Tensor(bitW) + self.bitA = ms.Tensor(bitA) + if self.data_format is 'NHWC': + self.ms_stride = strides[1] + self.ms_dilation = dilations[1] + # self.transpose = P.Transpose() + elif self.data_format is 'NCHW': + self.ms_stride = strides[2] + self.ms_dilation = dilations[2] + + self.conv2d = P.Conv2D( + out_channel=out_channel, kernel_size=k_size, pad_mode=self.padding, stride=self.ms_stride, + 
dilation=self.ms_dilation, mode=1, group=1
+        )
+
+        @bprop_getters.register(P.Round)
+        def get_bprop_Round(self):
+
+            def bprop(x, out, dout):
+                return (dout, )
+
+            return bprop
+
+        @bprop_getters.register(P.Sign)
+        def get_bprop_Sign(self):
+
+            def bprop(x, out, dout):
+                return (dout, )
+
+            return bprop
+
+        self.minimum = P.Minimum()
+        self.abs = P.Abs()
+        self.round = P.Round()
+        self.reducemean = P.ReduceMean()
+        self.sign = P.Sign()
+        self.pow = P.Pow()
+        self.sub = P.Sub()
+        self.oneslike = P.OnesLike()
+
+    def cabs(self, inputs):
+        a = P.stop_gradient(self.oneslike(inputs))
+        return self.minimum(self.abs(inputs), a)
+
+    def _quantize_dorefa(self, x, k):
+        n = self.sub(self.pow(2.0, k), 1)
+        return self.round(x * n) / n
+
+    def quantize_active(self, x, bitA):
+        if bitA == 32:
+            return x
+        return self._quantize_dorefa(x, bitA)
+
+    def quantize_weight(self, x, bitW, force_quantization=False):
+        if bitW == 32 and not force_quantization:
+            return x
+        if bitW == 1:
+            E = P.stop_gradient(self.reducemean(self.abs(x)))
+            return self.sign(x / E) * E
+        x = P.clip_by_value(x * 0.5 + 0.5, 0.0, 1.0)
+        return 2 * self._quantize_dorefa(x, bitW) - 1
+
+    def construct(self, inputs, filters):
+        if self.data_format == 'NHWC':
+            inputs = nhwc_to_nchw(inputs)
+        inputs = self.quantize_active(self.cabs(inputs), self.bitA)
+        filters = self.quantize_weight(filters, self.bitW)
+        outputs = self.conv2d(inputs, filters)
+        if self.data_format == 'NHWC':
+            outputs = nchw_to_nhwc(outputs)
+        return outputs
diff --git a/tensorlayer/backend/ops/paddle_backend.py b/tensorlayer/backend/ops/paddle_backend.py
new file mode 100644
index 000000000..3d8e75c88
--- /dev/null
+++ b/tensorlayer/backend/ops/paddle_backend.py
@@ -0,0 +1,987 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+from __future__ import absolute_import, division, print_function
+import paddle as pd
+import paddle.nn as nn
+
+_dtypeDict = ["float16", "float32", "float64", "int8", "int16", "int32", "int64", "uint8", "uint16", "uint32", "uint64"]
+# TODO NotImplemented
+DType = None
+float16 = "float16"
+float32 = "float32"
+float64 = "float64"
+int8 = "int8"
+int16 = "int16"
+int32 = "int32"
+int64 = "int64"
+uint8 = "uint8"
+uint16 = "uint16"
+uint32 = "uint32"
+uint64 = "uint64"
+
+
+def _getter(init_fn, **kwargs):
+    """Return a named eager tensor."""
+    raise NotImplementedError
+
+
+def set_context(**kwargs):
+    raise Exception("Using the Paddle backend, you don't need to set the context.")
+
+
+def get_tensor_shape(x):
+    return pd.shape(x)
+
+
+# initializers
+def zeros(shape, dtype="float32"):
+    """
+    Creates a tensor with all elements set to zero.
+
+    Parameters
+    ----------
+    shape : A list of integers
+        a tuple of integers, or a 1-D Tensor of type int32.
+    dtype : tensor
+        The DType of an element in the resulting Tensor
+
+    Returns
+    -------
+        A Tensor with all elements set to zero.
+
+    """
+    return pd.zeros(shape=shape, dtype=dtype)
+
+
+def ones(shape, dtype="float32"):
+    """
+    Creates a tensor with all elements set to one.
+
+    Parameters
+    ----------
+    shape : A list of integers
+        a tuple of integers, or a 1-D Tensor of type int32.
+    dtype : tensor
+        The DType of an element in the resulting Tensor
+
+    Returns
+    -------
+        A Tensor with all elements set to one.
+
+    """
+    return pd.ones(shape=shape, dtype=dtype)
+
+
+def constant(value, shape, dtype="float32"):
+    """
+    Creates a constant tensor from a tensor-like object.
+
+    Parameters
+    ----------
+    value : list
+        A constant value (or list) of output type dtype.
+    dtype : tensor
+        The type of the elements of the resulting tensor.
+    shape : tuple
+        Optional dimensions of resulting tensor.
+
+    Returns
+    -------
+        A Constant Tensor.
+
+    """
+    # paddle.full builds a dtype tensor of the given shape filled with value
+    return pd.full(shape=shape, fill_value=value, dtype=dtype)
+
+
+def random_uniform(shape, minval=0, maxval=None, dtype="float32", seed=None):
+    """
+    Outputs random values from a uniform distribution.
+
+    Parameters
+    ----------
+    shape : tuple
+        A 1-D integer Tensor or Python array. The shape of the output tensor.
+    minval : int
+        The lower bound on the range of random values to generate (inclusive). Defaults to 0.
+    maxval : int
+        The upper bound on the range of random values to generate (exclusive). Defaults to 1 if dtype is floating point.
+    dtype : tensor
+        The type of the output: float16, float32, float64, int32, or int64.
+    seed : int
+        Used in combination with dragon.random.set_seed to create a reproducible sequence of tensors across multiple calls.
+
+    Returns
+    -------
+        A tensor of the specified shape filled with random uniform values.
+
+    """
+    raise NotImplementedError
+
+
+def random_normal(shape, mean=0.0, stddev=1.0, dtype="float32", seed=None):
+    """
+    Outputs random values from a normal distribution.
+
+    Parameters
+    ----------
+    shape : tuple
+        A 1-D integer Tensor or Python array. The shape of the output tensor.
+    mean : float
+        The mean of the normal distribution
+    stddev : float
+        The standard deviation of the normal distribution.
+    dtype : tensor
+        The type of the output.
+    seed : A Python integer
+        Used to create a random seed for the distribution
+
+    Returns
+    -------
+        A tensor of the specified shape filled with random normal values.
+
+    """
+    raise NotImplementedError
+
+
+def truncated_normal(shape, mean=0.0, stddev=1.0, dtype="float32", seed=None):
+    """
+    Outputs random values from a truncated normal distribution.
+
+    Parameters
+    ----------
+    shape : tuple
+        A 1-D integer Tensor or Python array. The shape of the output tensor.
+    mean : float
+        The mean of the normal distribution
+    stddev : float
+        The standard deviation of the normal distribution.
+    dtype : tensor
+        The type of the output.
+    seed : A Python integer
+        Used to create a random seed for the distribution
+
+    Returns
+    -------
+        A tensor of the specified shape filled with random truncated normal values.
+
+    """
+    raise NotImplementedError
+
+
+def he_normal(shape, dtype, seed=None):
+    """
+    He normal initializer.
+
+    Parameters
+    ----------
+    seed : A Python integer.
+        Used to seed the random generator.
+    shape : tuple
+        A 1-D integer Tensor or Python array. The shape of the output tensor.
+    dtype : tensor
+        The type of the output.
+
+    Returns
+    -------
+        A tensor of the specified shape filled with he normal values.
+    """
+    raise NotImplementedError
+
+
+def Variable(initial_value, name, trainable=None):
+    """
+    Creates a new variable with value initial_value.
+
+    Parameters
+    ----------
+    initial_value : tensor
+        A Tensor, or Python object convertible to a Tensor
+    name : str
+        Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically.
+
+    Returns
+    -------
+        Variable
+    """
+    raise NotImplementedError
+
+
+class MatMul(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, a, b):
+        return pd.matmul(x=a, y=b)
+
+
+def matmul(a, b):
+    """
+    Multiplies matrix a by matrix b, producing a * b.
+
+    Parameters
+    ----------
+    a : tensor
+        type float16, float32, float64, int32, complex64, complex128 and rank > 1.
+    b : tensor
+        with same type and rank as a.
+ + Returns + ------- + A Tensor of the same type as a and b + """ + raise NotImplementedError + + +def add(value, bias): + """ + Returns x + y element-wise. + + Parameters + ---------- + value : tensor. + Must be one of the following types: bfloat16, half, float32, float64, + uint8, int8, int16, int32, int64, complex64, complex128, string. + bias : tensor + Must have the same type as a + name : str + A name for the operation + + Returns + ------- + A Tensor. Has the same type as a. + """ + + raise NotImplementedError + + +def dtypes(dt): + """ + Data dtypes. + + Parameters + ---------- + dt : string + It could be 'uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', + 'int32', 'int64', 'float16', 'float32', 'float64', 'DType'. + + Returns + ------- + Data dtypes + """ + raise NotImplementedError + + +class Maximum(object): + + def __init__(self): + pass + + def __call__(self, x, y): + raise NotImplementedError + + +class Minimum(object): + + def __init__(self): + pass + + def __call__(self, x, y): + raise NotImplementedError + + +def minimum(x, y): + """ + Returns the min of x and y (i.e. x < y ? x : y) element-wise. + + Parameters + ---------- + x : tensor. + Must be one of the following types: bfloat16, half, float32, float64, int32, int64. + y : A Tensor. + Must have the same type as x. + name : str + A name for the operation (optional). + + Returns + ------- + A Tensor. Has the same type as x + """ + raise NotImplementedError + + +class FlattenReshape(object): + + def __init__(self): + pass + + def __call__(self, inputs): + return pd.flatten(x=inputs, start_axis=1, stop_axis=-1) + + +class Reshape(object): + + def __init__(self, shape): + self.shape = shape + + def __call__(self, tensor): + raise NotImplementedError + + +def reshape(tensor, shape): + """ + Reshapes a tensor. + + Parameters + ---------- + tensor : tensor + A Tensor. + shape : tensor + Defines the shape of the output tensor. + Returns + ------- + A Tensor. Has the same type as tensor + """ + raise NotImplementedError + + +class Concat(object): + + def __init__(self, axis): + super(Concat, self).__init__() + self.axis = axis + + def __call__(self, values): + raise NotImplementedError + + +def concat(values, axis): + """ + Concatenates tensors along one dimension. + + Parameters + ---------- + values : list + A list of Tensor objects or a single Tensor + axis : int + 0-D int32 Tensor. Dimension along which to concatenate + Returns + ------- + A Tensor resulting from concatenation of the input tensors. + """ + raise NotImplementedError + + +def convert_to_tensor(value, dtype=None): + """ + Converts the given value to a Tensor. + + Parameters + ---------- + value : object + An object whose type has a registered Tensor conversion function. + dtype : optional + Optional element type for the returned tensor. If missing, the type is inferred from the type of value. + + Returns + ------- + A Tensor based on value. + """ + raise NotImplementedError + + +def sqrt(x): + """ + Computes square root of x element-wise. + + Parameters + ---------- + x : tensor + Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. + + Returns + ------- + A Tensor. Has the same type as x. 
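+
+    Examples
+    --------
+    A sketch of the intended behaviour (the wrapper below is still a stub;
+    ``paddle.sqrt`` would back it):
+
+    >>> import paddle as pd
+    >>> x = pd.to_tensor([4.0, 9.0])
+    >>> pd.sqrt(x)  # -> [2.0, 3.0]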
+ """ + raise NotImplementedError + + +class ReduceSum(object): + + def __init__(self, axis): + pass + + def construct(self, input): + pass + + +class ReduceMean(object): + + def __init__(self, axis): + self.axis = axis + + def __call__(self, inputs): + return pd.mean(inputs, axis=self.axis) + + +def reduce_mean(input_tensor, axis=None): + """ + Computes the mean of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. + """ + + raise NotImplementedError + + +class ReduceMax(object): + + def __init__(self, axis): + self.axis = axis + + def __call__(self, inputs): + return pd.max(inputs, axis=self.axis) + + +def reduce_max(input_tensor, axis=None): + """ + Computes the maximum of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have real numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. + """ + + raise NotImplementedError + + +def reduce_min(input_tensor, axis=None): + """ + Computes the minimum of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have real numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. + """ + raise NotImplementedError + + +class Pad(object): + + def __init__(self, paddings, mode="REFLECT"): + if mode not in ['CONSTANT', 'REFLECT', 'SYMMETRIC']: + raise Exception("Unsupported mode: {}".format(mode)) + if mode == 'SYMMETRIC': + mode = 'EDGE' + self.paddings = paddings + self.mode = mode + + def __call__(self, x): + raise NotImplementedError + + +def pad(tensor, paddings, mode='CONSTANT', constant_values=0): + """ + Pads a tensor. + + Parameters + ---------- + tensor : tensor + A Tensor. + paddings : tuple + A tuple of type int32. + mode : str + One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive) + constant_values : int + In "CONSTANT" mode, the scalar pad value to use. Must be same type as tensor. + + Returns + ------- + A Tensor. Has the same type as tensor. + """ + raise NotImplementedError + + +class Unstack(object): + + def __init__(self, axis, num=None): + self.axis = axis + self.num = num + + def __call__(self, values): + raise NotImplementedError + + +class Stack(object): + + def __init__(self, axis): + self.axis = axis + + def __call__(self, values): + raise NotImplementedError + + +def stack(values, axis=0): + """ + Stacks a list of rank-R tensors into one rank-(R+1) tensor. + + Parameters + ---------- + values : list + A list of Tensor objects with the same shape and type. + axis : int + An int. The axis to stack along. Defaults to the first dimension. + Negative values wrap around, so the valid range is [-(R+1), R+1). + + Returns + ------- + A stacked Tensor with the same type as values. 
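+
+    Examples
+    --------
+    A sketch of the intended behaviour (the wrapper below is still a stub;
+    ``paddle.stack`` would back it):
+
+    >>> import paddle as pd
+    >>> a = pd.to_tensor([1.0, 2.0])
+    >>> b = pd.to_tensor([3.0, 4.0])
+    >>> pd.stack([a, b], axis=0)  # shape [2, 2]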
+ """ + raise NotImplementedError + + +class Meshgrid(object): + + def __init__(self, indexing='xy'): + super(Meshgrid, self).__init__() + self.index = indexing + + def __call__(self, inputs): + pass + + +def meshgrid(x, y): + """ + Broadcasts parameters for evaluation on an N-D grid. + + Parameters + ---------- + x : tensor + Tensors with rank 1. + y : tensor + Tensors with rank 1. + + Returns + ------- + A list of N Tensors with rank N. + """ + + pass + + +def range(start, limit=None, delta=1, dtype=None): + """ + Creates a sequence of numbers. + + Parameters + ---------- + start : tensor + A 0-D Tensor (scalar). Acts as first entry in the range if limit is not None; + otherwise, acts as range limit and first entry defaults to 0. + limit : tensor + A 0-D Tensor (scalar). Upper limit of sequence, exclusive. If None, + defaults to the value of start while the first entry of the range defaults to 0. + delta : tensor + A 0-D Tensor (scalar). Number that increments start. Defaults to 1. + dtype : type + The type of the elements of the resulting tensor. + + Returns + ------- + An 1-D Tensor of type dtype. + """ + raise NotImplementedError + + +class ExpandDims(object): + + def __init__(self, axis): + pass + + def construct(self, input): + pass + + +def expand_dims(input, axis): + """ + Inserts a dimension of 1 into a tensor's shape. + + Parameters + ---------- + input : tensor + A Tensor. + axis : int + 0-D (scalar). Specifies the dimension index at which to expand the shape of input. + Must be in the range [-rank(input) - 1, rank(input)]. + + Returns + ------- + A Tensor with the same data as input, but its shape has an additional dimension of size 1 added. + """ + + raise NotImplementedError + + +class Tile(object): + + def __init__(self): + pass + + def __call__(self, input, multiples): + raise NotImplementedError + + +def tile(input, multiples): + """ + Constructs a tensor by tiling a given tensor. + + Parameters + ---------- + input : tensor + A Tensor. 1-D or higher. + multiples : tensor + Must be one of the following types: int32, int64. 1-D. + Length must be the same as the number of dimensions in input + + Returns + ------- + A Tensor. Has the same type as input. + """ + raise NotImplementedError + + +class Cast(object): + + def __init__(self, dtype): + pass + + def __call__(self, input): + pass + + +def cast(x, dtype): + """ + Casts a tensor to a new type. + + Parameters + ---------- + x : tensor + A Tensor or SparseTensor or IndexedSlices of numeric type. + It could be uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64. + dtype : dtpye + The destination type. The list of supported dtypes is the same as x + + Returns + ------- + A Tensor or SparseTensor or IndexedSlices with same shape as x and same type as dtype. + """ + raise NotImplementedError + + +class Transpose(object): + + def __init__(self, perm, conjugate=False): + self.perm = perm + if conjugate: + raise ("The conjugate Parameters not supported") + + def __call__(self, a): + raise NotImplementedError + + +def transpose(a, perm=None, conjugate=False): + """ + Transposes a. + + Parameters + ---------- + a : tensor + A Tensor. + perm : int + A permutation of the dimensions of a. + conjugate : bool + Setting it to True is mathematically equivalent to ms.math.conj(ms.transpose(input)). + + Returns + ------- + A transposed Tensor. + """ + + raise NotImplementedError + + +def gather_nd(params, indices, batch_dims=0): + """ + Gather slices from params into a Tensor with shape specified by indices. 
+
+    Parameters
+    ----------
+    params : tensor
+        The tensor from which to gather values.
+    indices : tensor
+        Must be one of the following types: int32, int64. Index tensor.
+    batch_dims : int
+        An integer or a scalar 'Tensor'. The number of batch dimensions.
+
+    Returns
+    -------
+    A Tensor. Has the same type as params.
+    """
+
+    pass
+
+
+def clip_by_value(t, clip_value_min, clip_value_max):
+    """
+    Clips tensor values to a specified min and max.
+
+    Parameters
+    ----------
+    t : tensor
+        A Tensor or IndexedSlices
+    clip_value_min : tensor
+        A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The minimum value to clip by
+    clip_value_max : tensor
+        A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The maximum value to clip by
+
+    Returns
+    -------
+    A clipped Tensor or IndexedSlices.
+    """
+
+    pass
+
+
+def split(value, num_or_size_splits, axis=0, num=None):
+    """
+    Splits a tensor into sub tensors.
+
+    Parameters
+    ----------
+    value : tensor
+        The Tensor to split.
+    num_or_size_splits : list
+        Either an integer indicating the number of splits along split_dim or a 1-D integer Tensor or
+        Python list containing the sizes of each output tensor along split_dim.
+    axis : int
+        The dimension along which to split. Must be in the range [-rank(value), rank(value)). Defaults to 0.
+    num : int
+        Used to specify the number of outputs when it cannot be inferred from the shape of size_splits.
+
+    Returns
+    -------
+    Tensor objects resulting from splitting value.
+    """
+    pass
+
+
+class Floor(object):
+
+    def __call__(self, *args, **kwargs):
+        raise NotImplementedError
+
+
+def floor(x):
+    raise NotImplementedError
+
+
+def gather(params, indices):
+    raise NotImplementedError
+
+
+def linspace(start, stop, num):
+    raise NotImplementedError
+
+
+def slice(inputs, starts, sizes):
+    raise NotImplementedError
+
+
+def add_n(inputs):
+    raise NotImplementedError
+
+
+class OneHot(object):
+
+    def __init__(self, axis=-1, depth=1, on_value=1.0, off_value=0.0, dtype="float32"):
+        self.depth = depth
+        self.dtype = dtype
+
+    def __call__(self, indices):
+        raise NotImplementedError
+
+
+class L2Normalize(object):
+
+    def __init__(self, axis=None, epsilon=1e-12):
+        super(L2Normalize, self).__init__()
+        pass
+
+    def __call__(self, input, *args, **kwargs):
+        pass
+
+
+class EmbeddingLookup(object):
+
+    def __init__(self, max_norm=None):
+        self.max_norm = max_norm
+
+    def __call__(self, params, ids, *args, **kwargs):
+        pass
+
+
+class NCELoss(object):
+
+    def __init__(self, num_true=1, sampled_values=None, remove_accidental_hits=False):
+        super(NCELoss, self).__init__()
+
+    def __call__(self, weights, biases, labels, inputs, num_sampled, num_classes):
+        pass
+
+
+class NotEqual(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x, y):
+        pass
+
+
+class CountNonzero(object):
+
+    def __init__(self, keepdims=None, dtype="int64"):
+        pass
+
+    def __call__(self, *args, **kwargs):
+        pass
+
+
+class Resize:
+
+    def __init__(self, scale, method, antialias=False, data_format='channels_last', ksize=None):
+        if method not in ['nearest', 'linear', 'bilinear']:
+            raise Exception('Current resize does not support this method.')
+        if method == 'bilinear':
+            method = 'linear'
+        self.method = method
+        self.antialias = antialias
+        self.scale = scale
+        if data_format != 'channels_last':
+            raise Exception("UpSampling2d resize_images only supports channels_last")
+
+    def __call__(self, inputs):
+        raise NotImplementedError
+
+
+def resize(inputs, output_size, method, antialias):
+    raise NotImplementedError
+
+
+class 
ZeroPadding1D(object): + + def __init__(self): + pass + + def __call__(self, padding): + raise NotImplementedError + + +class ZeroPadding2D(object): + + def __init__(self): + pass + + def __call__(self, padding): + raise NotImplementedError + + +class ZeroPadding3D(object): + + def __init__(self): + pass + + def __call__(self, padding): + raise NotImplementedError + + +class Sign(object): + + def __init__(self): + pass + + def __call__(self, x): + raise NotImplementedError + + +class Ceil(object): + + def __call__(self, *args, **kwargs): + raise NotImplementedError + + +def ceil(x): + raise NotImplementedError + + +def multiply(x, y): + raise NotImplementedError + + +def divide(x, y): + raise NotImplementedError + + +def identity(x): + raise NotImplementedError + + +class BatchToSpace(object): + + def __init__(self, block_size, crops): + super(BatchToSpace, self).__init__() + pass + + def __call__(self, input_x): + raise NotImplementedError + + +class DepthToSpace(object): + + def __init__(self, block_size, data_format='NHWC'): + pass + + def __call__(self, input): + raise NotImplementedError diff --git a/tensorlayer/backend/ops/paddle_nn.py b/tensorlayer/backend/ops/paddle_nn.py new file mode 100644 index 000000000..2a6ec2d2f --- /dev/null +++ b/tensorlayer/backend/ops/paddle_nn.py @@ -0,0 +1,1259 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +import paddle as pd +import paddle.nn.functional as F + + +def padding_format(padding): + """ + Checks that the padding format correspond format. + + Parameters + ---------- + padding : str + Must be one of the following:"same", "SAME", "VALID", "valid" + + Returns + ------- + str "SAME" or "VALID" + """ + + if padding in ["SAME", "same"]: + padding = "SAME" + elif padding in ["VALID", "valid"]: + padding = "VALID" + elif padding == None: + padding = None + else: + raise Exception("Unsupported padding: " + str(padding)) + return padding + + +def preprocess_1d_format(data_format, padding): + """ + Checks that the 1-D dataformat format correspond format. + + Parameters + ---------- + data_format : str + Must be one of the following:"channels_last","NWC","NCW","channels_first" + padding : str + Must be one of the following:"same","valid","SAME","VALID" + + Returns + ------- + str "NWC" or "NCW" and "SAME" or "VALID" + """ + + if data_format in ["channels_last", "NWC", "NLC"]: + data_format = "NLC" + elif data_format in ["channels_first", "NCW", "NCL"]: + data_format = "NCL" + elif data_format == None: + data_format = None + else: + raise Exception("Unsupported data format: " + str(data_format)) + padding = padding_format(padding) + return data_format, padding + + +def preprocess_2d_format(data_format, padding): + """ + Checks that the 2-D dataformat format correspond format. + + Parameters + ---------- + data_format : str + Must be one of the following:"channels_last","NHWC","NCHW","channels_first" + padding : str + Must be one of the following:"same","valid","SAME","VALID" + + Returns + ------- + str "NHWC" or "NCHW" and "SAME" or "VALID" + """ + + if data_format in ["channels_last", "NHWC", "nhwc"]: + data_format = "NHWC" + elif data_format in ["channels_first", "NCHW", "nchw"]: + data_format = "NCHW" + elif data_format == None: + data_format = None + else: + raise Exception("Unsupported data format: " + str(data_format)) + padding = padding_format(padding) + return data_format, padding + + +def preprocess_3d_format(data_format, padding): + """ + Checks that the 3-D dataformat format correspond format. 
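+    For example, ``('channels_last', 'same')`` is normalised to ``('NDHWC', 'SAME')``.
+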
+ + Parameters + ---------- + data_format : str + Must be one of the following:"channels_last","NDHWC","NCDHW","channels_first" + padding : str + Must be one of the following:"same","valid","SAME","VALID" + + Returns + ------- + str "NDHWC" or "NCDHW" and "SAME" or "VALID" + """ + + if data_format in ['channels_last', 'NDHWC']: + data_format = 'NDHWC' + elif data_format in ['channels_first', 'NCDHW']: + data_format = 'NCDHW' + elif data_format == None: + data_format = None + else: + raise Exception("Unsupported data format: " + str(data_format)) + padding = padding_format(padding) + return data_format, padding + + +def nchw_to_nhwc(x): + """ + Channels first to channels last + + Parameters + ---------- + x : tensor + channels first tensor data + + Returns + ------- + channels last tensor data + """ + + if len(x.shape) == 3: + x = pd.transpose(x, (0, 2, 1)) + elif len(x.shape) == 4: + x = pd.transpose(x, (0, 2, 3, 1)) + elif len(x.shape) == 5: + x = pd.transpose(x, (0, 2, 3, 4, 1)) + else: + raise Exception("Unsupported dimensions") + return x + + +def nhwc_to_nchw(x): + """ + Channles last to channels first + + Parameters + ---------- + x : tensor + channels last tensor data + + Returns + ------- + channels first tensor data + """ + + if len(x.shape) == 3: + x = pd.transpose(x, (0, 2, 1)) + elif len(x.shape) == 4: + x = pd.transpose(x, (0, 3, 1, 2)) + elif len(x.shape) == 5: + x = pd.transpose(x, (0, 4, 1, 2, 3)) + else: + raise Exception("Unsupported dimensions") + return x + + +class ReLU(object): + + def __init__(self): + pass + + def __call__(self, x): + return F.relu(x) + + +def relu(x): + """ + Computes rectified linear: max(features, 0). + + Parameters + ---------- + x : tensor + Must be one of the following types: float32, float64, int32, uint8, int16, + int8, int64, bfloat16, uint16, half, uint32, uint64, qint8. + + Returns + ------- + A Tensor. Has the same type as features. + """ + return F.relu(x) + + +class ReLU6(object): + + def __init__(self): + pass + + def __call__(self, x): + return F.relu6(x) + + +def relu6(x): + """ + Computes Rectified Linear 6: min(max(features, 0), 6). + + Parameters + ---------- + x : tensor + Must be one of the following types: float32, float64, int32, uint8, int16, + int8, int64, bfloat16, uint16, half, uint32, uint64, qint8. + + Returns + ------- + A Tensor with the same type as features. + """ + return F.relu6(x) + + +class LeakyReLU(object): + + def __init__(self, alpha=0.2): + self.alpha = alpha + + def __call__(self, x): + return F.leaky_relu(x, negative_slope=self.alpha) + + +def leaky_relu(x): + """ + Compute the Leaky ReLU activation function. + + Parameters + ---------- + x : tensor + representing preactivation values. Must be one of the following types: + float16, float32, float64, int32, int64. + + Returns + ------- + The activation value. + """ + + return F.leaky_relu(x) + + +class Softplus(object): + + def __init__(self): + pass + + def __call__(self, x): + return F.softplus(x) + + +def softplus(x): + """ + Computes softplus: log(exp(features) + 1). + + Parameters + ---------- + x : tensor + Must be one of the following types: half, bfloat16, float32, float64. + + Returns + ------- + A Tensor. Has the same type as features. + """ + + return F.softplus(x) + + +class Tanh(object): + + def __init__(self): + pass + + def __call__(self, x): + return F.tanh(x) + + +def tanh(x): + """ + Computes hyperbolic tangent of x element-wise. 
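+
+    A quick sketch of the expected behaviour (values are approximate):
+
+    >>> import paddle as pd
+    >>> tanh(pd.to_tensor([-1.0, 0.0, 1.0]))
+    # ~ [-0.7616, 0.0000, 0.7616]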
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
+
+    Returns
+    -------
+    A Tensor. Has the same type as x.
+    """
+
+    return F.tanh(x)
+
+
+class Sigmoid(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return F.sigmoid(x)
+
+
+def sigmoid(x):
+    """
+    Computes sigmoid of x element-wise.
+
+    Parameters
+    ----------
+    x : tensor
+        A Tensor with type float16, float32, float64, complex64, or complex128.
+
+    Returns
+    -------
+    A Tensor with the same type as x.
+    """
+    return F.sigmoid(x)
+
+
+class Softmax(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return F.softmax(x)
+
+
+def softmax(logits, axis=-1):
+    """
+    Computes softmax activations.
+
+    Parameters
+    ----------
+    logits : tensor
+        Must be one of the following types: half, float32, float64.
+    axis : int
+        The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
+
+    Returns
+    -------
+    A Tensor. Has the same type and shape as logits.
+    """
+    return F.softmax(logits, axis=axis)
+
+
+class Dropout(object):
+
+    def __init__(self, keep, seed=1):
+        # `keep` is the keep probability; paddle's F.dropout expects the drop probability.
+        self.p = 1 - keep
+        self.seed = seed
+
+    def __call__(self, inputs):
+        output = F.dropout(
+            inputs,
+            p=self.p,
+            mode='upscale_in_train')
+        return output
+
+
+class BiasAdd(object):
+    """
+    Adds bias to value.
+
+    Parameters
+    ----------
+    x : tensor
+        A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128.
+    bias : tensor
+        Must be the same type as value unless value is a quantized type,
+        in which case a different quantized type may be used.
+    Returns
+    -------
+    A Tensor with the same type as value.
+    """
+    def __init__(self, data_format='channels_first'):
+        super(BiasAdd, self).__init__()
+        if data_format in ['channels_first', 'NCW', 'NCHW', 'NCDHW']:
+            self.data_format = 'channels_first'
+        elif data_format in ['channels_last', 'NWC', 'NHWC', 'NDHWC']:
+            self.data_format = 'channels_last'
+        else:
+            raise Exception("Unsupported data format: " + str(data_format))
+
+    def __call__(self, x, bias):
+        if self.data_format == 'channels_first':
+            x = nchw_to_nhwc(x)
+        outputs = pd.add(x, bias)
+        if self.data_format == 'channels_first':
+            outputs = nhwc_to_nchw(outputs)
+        return outputs
+
+
+def bias_add(x, bias):
+    """
+    Adds bias to value.
+
+    Parameters
+    ----------
+    x : tensor
+        A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128.
+    bias : tensor
+        Must be the same type as value unless value is a quantized type,
+        in which case a different quantized type may be used.
+    Returns
+    -------
+    A Tensor with the same type as value.
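+
+    Examples
+    --------
+    A minimal sketch; this backend's bias_add currently assumes a
+    channels-last input:
+
+    >>> import paddle as pd
+    >>> x = pd.ones([2, 4, 4, 3])      # NHWC input
+    >>> b = pd.zeros([3])              # one bias per output channel
+    >>> y = bias_add(x, b)             # same shape as x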
+ """ + + #TODO the bias_add only supports channels_last + outputs = pd.add(x, bias) + return outputs + + +class Conv1D(object): + def __init__(self, stride, padding, data_format='NWC', dilations=None, out_channel=None, k_size=None): + super(Conv1D, self).__init__() + self.data_format, self.padding = preprocess_1d_format(padding=padding, data_format=data_format) + self.stride = stride + self.dilations = dilations + + def __call__(self, input, filters): + output = F.conv1d(x=input, + weight=filters, + stride=self.stride, + dilation=self.dilations, + data_format=self.data_format, + padding=self.padding) + return output + + +def conv1d(input, filters, stride, padding, data_format='NWC', dilations=None, name=None): + """ + Computes a 1-D convolution given 3-D input and filter tensors. + + Parameters + ---------- + input : tensor + A 3D Tensor. Must be of type float16, float32, or float64 + filters : tensor + A 3D Tensor. Must have the same type as input. + stride : int of list + An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step. + padding : string + 'SAME' or 'VALID' + data_format : string + An optional string from "NWC", "NCW". Defaults to "NWC", the data is stored in the order of + [batch, in_width, in_channels]. The "NCW" format stores data as [batch, in_channels, in_width]. + dilations : int or list + An int or list of ints that has length 1 or 3 which defaults to 1. + The dilation factor for each dimension of input. If set to k > 1, + there will be k-1 skipped cells between each filter element on that dimension. + Dilations in the batch and depth dimensions must be 1. + name : string + A name for the operation (optional). + Returns + ------- + A Tensor. Has the same type as input. + """ + + outputs = F.conv1d(x=input, + weight=filters, + stride=stride, + padding=padding, + data_format=data_format, + dilation=dilations, + name=name) + return outputs + + +class Conv2D(object): + + def __init__(self, strides, padding, data_format='NHWC', dilations=None, out_channel=None, k_size=None): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + if self.data_format is 'NHWC': + self._stride = (strides[1], strides[2]) + self._dilation = (dilations[1], dilations[2]) + elif self.data_format is 'NCHW': + self._stride = (strides[2], strides[3]) + self._dilation = (dilations[2], dilations[3]) + + + def __call__(self, inputs, filters): + outputs = F.conv2d(x=inputs, + weight=filters, + stride=self._stride, + dilation=self._dilation, + padding=self.padding, + data_format=self.data_format) + return outputs + + +def conv2d(input, filters, strides, padding, data_format='NCHW', dilations=None): + """ + Computes a 2-D convolution given 4-D input and filters tensors. + + Parameters + ---------- + input : tensor + Must be one of the following types: half, bfloat16, float32, float64. A 4-D tensor. + The dimension order is interpreted according to the value of data_format, see below for details. + filters : tensor + Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels] + strides : int of list + The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension. + By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details. + padding : string + "SAME" or "VALID" + data_format : string + "NHWC", "NCHW". Defaults to "NCHW". 
+
+
+def conv2d(input, filters, strides, padding, data_format='NCHW', dilations=None):
+    """
+    Computes a 2-D convolution given 4-D input and filters tensors.
+
+    Parameters
+    ----------
+    input : tensor
+        Must be one of the following types: half, bfloat16, float32, float64. A 4-D tensor.
+        The dimension order is interpreted according to the value of data_format, see below for details.
+    filters : tensor
+        Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]
+    strides : int or list
+        The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension.
+        By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
+    padding : string
+        "SAME" or "VALID"
+    data_format : string
+        "NHWC", "NCHW". Defaults to "NCHW".
+    dilations : list of ints
+        List of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input.
+
+    Returns
+    -------
+    A Tensor. Has the same type as input.
+    """
+    data_format, padding = preprocess_2d_format(data_format, padding)
+    if data_format == 'NHWC':
+        _stride = (strides[1], strides[2])
+        _dilation = (dilations[1], dilations[2])
+    elif data_format == 'NCHW':
+        _stride = (strides[2], strides[3])
+        _dilation = (dilations[2], dilations[3])
+    outputs = F.conv2d(x=input,
+                       weight=filters,
+                       stride=_stride,
+                       dilation=_dilation,
+                       padding=padding,
+                       data_format=data_format)
+    return outputs
+
+
+class Conv3D(object):
+    def __init__(self, strides, padding, data_format='NDHWC', dilations=None, out_channel=None, k_size=None):
+        self.data_format, self.padding = preprocess_3d_format(data_format, padding)
+        if self.data_format == 'NDHWC':
+            self._strides = (strides[1], strides[2], strides[3])
+            self._dilations = (dilations[1], dilations[2], dilations[3])
+        elif self.data_format == 'NCDHW':
+            self._strides = (strides[2], strides[3], strides[4])
+            self._dilations = (dilations[2], dilations[3], dilations[4])
+
+    def __call__(self, input, filters):
+        outputs = F.conv3d(x=input,
+                           weight=filters,
+                           stride=self._strides,
+                           dilation=self._dilations,
+                           data_format=self.data_format,
+                           padding=self.padding)
+        return outputs
+
+
+def conv3d(input, filters, strides, padding, data_format='NDHWC', dilations=None, name=None):
+    """
+    Computes a 3-D convolution given 5-D input and filters tensors.
+
+    Parameters
+    ----------
+    input : tensor
+        Must be one of the following types: half, bfloat16, float32, float64.
+        Shape [batch, in_depth, in_height, in_width, in_channels].
+    filters : tensor
+        Must have the same type as input. Shape [filter_depth, filter_height, filter_width, in_channels, out_channels].
+        in_channels must match between input and filters.
+    strides : tuple of ints
+        A list of ints that has length >= 5. 1-D tensor of length 5.
+        The stride of the sliding window for each dimension of input.
+        Must have strides[0] = strides[4] = 1.
+    padding : string
+        A string from: "SAME", "VALID". The type of padding algorithm to use.
+    data_format : string
+        An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data.
+        With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels].
+        Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
+    dilations : tuple of ints
+        Defaults to [1, 1, 1, 1, 1]. 1-D tensor of length 5. The dilation factor for each dimension of input.
+        If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension.
+        The dimension order is determined by the value of data_format, see above for details.
+        Dilations in the batch and depth dimensions must be 1.
+    name : string
+        A name for the operation (optional).
+
+    Returns
+    -------
+    A Tensor. Has the same type as input.
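+
+    Examples
+    --------
+    A minimal sketch (shapes are illustrative; paddle's F.conv3d expects
+    kernels laid out as [out_channels, in_channels, kd, kh, kw], which is
+    what this wrapper forwards):
+
+    >>> x = pd.ones([1, 8, 8, 8, 3])     # NDHWC input
+    >>> w = pd.ones([16, 3, 2, 2, 2])    # 16 output channels, 2x2x2 kernel
+    >>> y = conv3d(x, w, strides=(1, 1, 1, 1, 1), padding='SAME', dilations=(1, 1, 1, 1, 1))
+    >>> y.shape                          # [1, 8, 8, 8, 16]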
+ """ + data_format, padding = preprocess_3d_format(data_format, padding) + if data_format is 'NDHWC': + _strides = (strides[1], strides[2], strides[3]) + _dilations = (dilations[1], dilations[2], dilations[3]) + elif data_format is 'NCDHW': + _strides = (strides[2], strides[3], strides[4]) + _dilations = (dilations[2], dilations[3], dilations[4]) + outputs = F.conv3d(x=input, + weight=filters, + stride=_strides, + dilation=_dilations, + data_format=data_format, + padding=padding, + name=name) + return outputs + + +def lrn(inputs, depth_radius, bias, alpha, beta): + """ + Local Response Normalization. + + Parameters + ---------- + inputs : tensor + Must be one of the following types: half, bfloat16, float32. 4-D. + depth_radius : int + Defaults to 5. 0-D. Half-width of the 1-D normalization window. + bias : float + Defaults to 1. An offset (usually positive to avoid dividing by 0). + alpha : float + Defaults to 1. A scale factor, usually positive. + beta : float + Defaults to 0.5. An exponent. + + Returns + ------- + A Tensor. Has the same type as input. + """ + pass + + +def moments(x, axes, shift=None, keepdims=False): + """ + Calculates the mean and variance of x. + + Parameters + ---------- + x : tensor + A Tensor + axes : ints + Axes along which to compute mean and variance. + shift : int + Not used in the current implementation. + keepdims : bool + produce moments with the same dimensionality as the input. + + Returns + ------- + Two Tensor objects: mean and variance. + """ + + pass + + +class MaxPool1d(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_1d_format(data_format=data_format, padding=padding) + self.ksize = ksize + self.strides = strides + + def __call__(self, inputs): + if self.data_format == 'NLC': + inputs = nhwc_to_nchw(inputs) + outputs = F.max_pool1d(inputs, self.ksize, self.strides, self.padding) + if self.data_format == 'NLC': + outputs = nchw_to_nhwc(outputs) + return outputs + + +class MaxPool(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.ksize = ksize + if self.data_format is 'NHWC': + self._stride = (strides[1], strides[2]) + elif self.data_format is 'NCHW': + self._stride = (strides[2], strides[3]) + + def __call__(self, inputs): + outputs = F.max_pool2d(x=inputs, + kernel_size=self.ksize, + stride=self._stride, + padding=self.padding, + data_format=self.data_format) + return outputs + + +def max_pool(input, ksize, strides, padding, data_format=None): + """ + Performs the max pooling on the input. + + Parameters + ---------- + input : tensor + Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] if data_format does not start + with "NC" (default), or [batch_size, num_channels] + input_spatial_shape if data_format starts with "NC". + Pooling happens over the spatial dimensions only. + ksize : int or list of ints + An int or list of ints that has length 1, N or N+2. + The size of the window for each dimension of the input tensor. + strides : int or list of ints + An int or list of ints that has length 1, N or N+2. + The stride of the sliding window for each dimension of the input tensor. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + + Returns + ------- + A Tensor of format specified by data_format. The max pooled output tensor. 
+ """ + pass + + +class AvgPool1d(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_1d_format(data_format=data_format, padding=padding) + self.ksize = ksize + self.strides = strides + + def __call__(self, inputs): + if self.data_format == 'NLC': + inputs = nhwc_to_nchw(inputs) + outputs = F.avg_pool1d(inputs, self.ksize, self.strides, self.padding) + if self.data_format == 'NLC': + outputs = nchw_to_nhwc(outputs) + return outputs + + +class AvgPool(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.filter_size = ksize + if self.data_format is 'NHWC': + self._stride = (strides[1], strides[2]) + elif self.data_format is 'NCHW': + self._stride = (strides[2], strides[3]) + + def __call__(self, inputs): + outputs = F.avg_pool2d( + inputs, + kernel_size=self.filter_size, + stride=self._stride, + padding=self.padding, + data_format=self.data_format) + return outputs + + +def avg_pool(input, ksize, strides, padding): + """ + Performs the avg pooling on the input. + + Parameters + ---------- + input : tensor + Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] + if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape + if data_format starts with "NC". Pooling happens over the spatial dimensions only. + ksize : int or list of ints + An int or list of ints that has length 1, N or N+2. + The size of the window for each dimension of the input tensor. + strides : int or list of ints + An int or list of ints that has length 1, N or N+2. + The stride of the sliding window for each dimension of the input tensor. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + + Returns + ------- + A Tensor of format specified by data_format. The average pooled output tensor. + """ + pass + + +class MaxPool3d(object): + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_3d_format(data_format, padding) + self.ksize = ksize + if self.data_format == 'NCDHW': + self.strides = (strides[2], strides[3], strides[4]) + if self.data_format == 'NDHWC': + self.strides = (strides[1], strides[2], strides[3]) + + def __call__(self, inputs): + outputs = F.max_pool3d( + inputs, + kernel_size=self.ksize, + stride=self.strides, + padding=self.padding, + data_format=self.data_format) + return outputs + + +def max_pool3d(input, ksize, strides, padding, data_format=None, name=None): + """ + Performs the max pooling on the input. + + Parameters + ---------- + input : tensor + A 5-D Tensor of the format specified by data_format. + ksize : int or list of ints + An int or list of ints that has length 1, 3 or 5. + The size of the window for each dimension of the input tensor. + strides : int or list of ints + An int or list of ints that has length 1, 3 or 5. + The stride of the sliding window for each dimension of the input tensor. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. + With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. 
+ Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width]. + name : string + A name for the operation (optional). + + Returns + ------- + A Tensor of format specified by data_format. The max pooled output tensor. + """ + pass + + +class AvgPool3d(object): + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_3d_format(data_format, padding) + self.ksize = ksize + if self.data_format == 'NCDHW': + self.strides = (strides[2], strides[3], strides[4]) + if self.data_format == 'NDHWC': + self.strides = (strides[1], strides[2], strides[3]) + + def __call__(self, inputs): + outputs = F.avg_pool3d( + inputs, + kernel_size=self.ksize, + stride=self.strides, + padding=self.padding, + data_format=self.data_format) + return outputs + + +def avg_pool3d(input, ksize, strides, padding, data_format=None, name=None): + """ + Performs the average pooling on the input. + + Parameters + ---------- + input : tensor + A 5-D Tensor of shape [batch, height, width, channels] and type float32, float64, qint8, quint8, or qint32. + ksize : int or list of ints + An int or list of ints that has length 1, 3 or 5. The size of the window for each dimension of the input tensor. + strides : int or list of ints + An int or list of ints that has length 1, 3 or 5. + The stride of the sliding window for each dimension of the input tensor. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NDHWC' and 'NCDHW' are supported. + name : string + Optional name for the operation. + + Returns + ------- + A Tensor with the same type as value. The average pooled output tensor. + """ + pass + + +def pool(input, window_shape, pooling_type, strides=None, padding='VALID', data_format=None, dilations=None, name=None): + """ + Performs an N-D pooling operation. + + Parameters + ---------- + input : tensor + Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] + if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape + if data_format starts with "NC". Pooling happens over the spatial dimensions only. + window_shape : int + Sequence of N ints >= 1. + pooling_type : string + Specifies pooling operation, must be "AVG" or "MAX". + strides : ints + Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1. + padding : string + The padding algorithm, must be "SAME" or "VALID". Defaults to "SAME". + See the "returns" section of tf.ops.convolution for details. + data_format : string + Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"), + or the second dimension (if data_format starts with "NC"). + For N=1, the valid values are "NWC" (default) and "NCW". For N=2, the valid values are "NHWC" (default) and "NCHW". + For N=3, the valid values are "NDHWC" (default) and "NCDHW". + dilations : list of ints + Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1. + name : string + Optional. Name of the op. 
+ + Returns + ------- + Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels] + """ + pass + + +class DepthwiseConv2d(object): + + def __init__(self, strides, padding, data_format=None, dilations=None, ksize=None, channel_multiplier=1): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.stride = strides + self.dilations = dilations + + def __call__(self, input, filter): + raise NotImplementedError("Not implemented depthwiseconv2d") + + +def depthwise_conv2d(input, filter, strides, padding, data_format=None, dilations=None, name=None): + """ + Depthwise 2-D convolution. + + Parameters + ---------- + input : tensor + 4-D with shape according to data_format. + filter : tensor + 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier]. + strides : list + 1-D of size 4. The stride of the sliding window for each dimension of input. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + The data format for input. Either "NHWC" (default) or "NCHW". + dilations : list + 1-D of size 2. The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. + If it is greater than 1, then all values of strides must be 1. + name : string + A name for this operation (optional). + + Returns + ------- + A 4-D Tensor with shape according to data_format. + E.g., for "NHWC" format, shape is [batch, out_height, out_width, in_channels * channel_multiplier]. + """ + + pass + + +class Conv1d_transpose(object): + + def __init__( + self, strides, padding, data_format='NWC', dilations=None, out_channel=None, k_size=None, in_channels=None + ): + self.strides = strides + self.dilations = dilations + self.data_format, self.padding = preprocess_1d_format(data_format, padding) + + def __call__(self, input, filters): + raise NotImplementedError + + +def conv1d_transpose( + input, filters, output_shape, strides, padding='SAME', data_format='NWC', dilations=None, name=None +): + """ + The transpose of conv1d. + + Parameters + ---------- + input : tensor + A 3-D Tensor of type float and shape [batch, in_width, in_channels] + for NWC data format or [batch, in_channels, in_width] for NCW data format. + filters : tensor + A 3-D Tensor with the same type as value and shape [filter_width, output_channels, in_channels]. + filter's in_channels dimension must match that of value. + output_shape : tensor + A 1-D Tensor, containing three elements, representing the output shape of the deconvolution op. + strides : list + An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NWC' and 'NCW' are supported. + dilations : list + An int or list of ints that has length 1 or 3 which defaults to 1. + The dilation factor for each dimension of input. If set to k > 1, + there will be k-1 skipped cells between each filter element on that dimension. + Dilations in the batch and depth dimensions must be 1. + name : string + Optional name for the returned tensor. + + Returns + ------- + A Tensor with the same type as value. 
+ """ + pass + + +class Conv2d_transpose(object): + + def __init__( + self, strides, padding, data_format='NHWC', dilations=None, name=None, out_channel=None, k_size=None, + in_channels=None + ): + self.strides = strides + self.dilations = dilations + self.name = name + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + + def __call__(self, input, filters): + raise NotImplementedError + + +def conv2d_transpose( + input, filters, output_shape, strides, padding='SAME', data_format='NHWC', dilations=None, name=None +): + """ + The transpose of conv2d. + + Parameters + ---------- + input : tensor + A 4-D Tensor of type float and shape [batch, height, width, in_channels] + for NHWC data format or [batch, in_channels, height, width] for NCHW data format. + filters : tensor + A 4-D Tensor with the same type as input and shape [height, width, + output_channels, in_channels]. filter's in_channels dimension must match that of input. + output_shape : tensor + A 1-D Tensor representing the output shape of the deconvolution op. + strides : list + An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input. + If a single value is given it is replicated in the H and W dimension. + By default the N and C dimensions are set to 0. + The dimension order is determined by the value of data_format, see below for details. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NHWC' and 'NCHW' are supported. + dilations : list + An int or list of ints that has length 1, 2 or 4, defaults to 1. + name : string + Optional name for the returned tensor. + + Returns + ------- + A Tensor with the same type as input. + """ + pass + + +class Conv3d_transpose(object): + + def __init__( + self, strides, padding, data_format='NDHWC', dilations=None, name=None, out_channel=None, k_size=None, + in_channels=None + ): + self.strides = strides + self.dilations = dilations + self.name = name + self.out_channel = out_channel + self.data_format, self.padding = preprocess_3d_format(data_format, padding) + + def __call__(self, input, filters): + raise NotImplementedError + + +def conv3d_transpose( + input, filters, output_shape, strides, padding='SAME', data_format='NDHWC', dilations=None, name=None +): + """ + The transpose of conv3d. + + Parameters + ---------- + input : tensor + A 5-D Tensor of type float and shape [batch, height, width, in_channels] for + NHWC data format or [batch, in_channels, height, width] for NCHW data format. + filters : tensor + A 5-D Tensor with the same type as value and shape [height, width, output_channels, in_channels]. + filter's in_channels dimension must match that of value. + output_shape : tensor + A 1-D Tensor representing the output shape of the deconvolution op. + strides : list + An int or list of ints that has length 1, 3 or 5. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NDHWC' and 'NCDHW' are supported. + dilations : list of ints + An int or list of ints that has length 1, 3 or 5, defaults to 1. + name : string + Optional name for the returned tensor. + + Returns + ------- + A Tensor with the same type as value. 
+ """ + + pass + + +class BatchNorm(object): + + def __init__(self, decay=0.9, epsilon=0.00001, beta=None, gamma=None, moving_mean=None, moving_var=None, num_features=None, + data_format='channels_last', is_train=False): + self.decay = decay + self.epsilon = epsilon + self.data_format = data_format + self.beta = beta + self.gamma = gamma + self.moving_mean = moving_mean + self.moving_var = moving_var + self.num_features = num_features + self.is_train = is_train + self.axes = None + + + def __call__(self, inputs): + data_format = self.channel_format(inputs) + outputs = pd.nn.functional.batch_norm( + inputs, + self.moving_mean, + self.moving_var, + weight=self.gamma, + bias=self.beta, + training=self.is_train, + momentum=self.decay, + epsilon=self.epsilon, + data_format=data_format + ) + return outputs + + def channel_format(self, inputs): + """ return "NC", "NCL", "NCHW", "NCDHW", "NLC", "NHWC" or "NDHWC". """ + len_in_shape = len(inputs.shape) + if len_in_shape == 2: + return 'NC' + if self.data_format == 'channels_last': + if len_in_shape == 3: + return 'NLC' + if len_in_shape == 4: + return 'NHWC' + if len_in_shape == 5: + return 'NDHWC' + if self.data_format == 'channels_first': + if len_in_shape == 3: + return 'NCL' + if len_in_shape == 4: + return 'NCHW' + if len_in_shape == 5: + return 'NCDHW' + + +class GroupConv2D(object): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, groups): + pass + + def __call__(self, input, filters): + raise NotImplementedError + + +class SeparableConv1D(object): + + def __init__(self, stride, padding, data_format, dilations, out_channel, k_size, in_channel, depth_multiplier): + pass + + def __call__(self, inputs, depthwise_filters, pointwise_filters): + raise NotImplementedError + + +class SeparableConv2D(object): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, in_channel, depth_multiplier): + pass + + def __call__(self, inputs, depthwise_filters, pointwise_filters): + raise NotImplementedError + + +class AdaptiveMeanPool1D(object): + + def __init__(self, output_size, data_format): + pass + + def __call__(self, input): + + raise NotImplementedError + + +class AdaptiveMeanPool2D(object): + + def __init__(self, output_size, data_format): + pass + + def __call__(self, inputs): + + raise NotImplementedError + + +class AdaptiveMeanPool3D(object): + + def __init__(self, output_size, data_format): + pass + + def __call__(self, inputs): + raise NotImplementedError + + +class AdaptiveMaxPool1D(object): + + def __init__(self, output_size, data_format): + pass + + def __call__(self, input): + + raise NotImplementedError + + +class AdaptiveMaxPool2D(object): + + def __init__(self, output_size, data_format): + pass + + def __call__(self, inputs): + raise NotImplementedError + + +class AdaptiveMaxPool3D(object): + + def __init__(self, output_size, data_format): + pass + + def __call__(self, inputs): + raise NotImplementedError + + +class BinaryConv2D(object): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, in_channel): + pass + + def __call__(self, inputs, filters): + raise NotImplementedError + + +class DorefaConv2D(object): + + def __init__(self, bitW, bitA, strides, padding, data_format, dilations, out_channel, k_size, in_channel): + pass + + def __call__(self, inputs, filters): + raise NotImplementedError diff --git a/tensorlayer/backend/ops/tensorflow_backend.py b/tensorlayer/backend/ops/tensorflow_backend.py new file mode 100644 index 
000000000..9e45c569b --- /dev/null +++ b/tensorlayer/backend/ops/tensorflow_backend.py @@ -0,0 +1,1045 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function +from .tensorflow_nn import nchw_to_nhwc, nhwc_to_nchw +import tensorflow as tf + +_dtypeDict = { + 'DType': tf.DType, + 'float16': tf.float16, + 'float32': tf.float32, + 'float64': tf.float64, + 'int8': tf.int8, + 'int16': tf.int16, + 'int32': tf.int32, + 'int64': tf.int64, + 'uint8': tf.uint8, + 'uint16': tf.uint16, + 'uint32': tf.uint32, + 'uint64': tf.uint64 +} + +DType = tf.DType +float16 = tf.float16 +float32 = tf.float32 +float64 = tf.float64 +int8 = tf.int8 +int16 = tf.int16 +int32 = tf.int32 +int64 = tf.int64 +uint8 = tf.uint8 +uint16 = tf.uint16 +uint32 = tf.uint32 +uint64 = tf.uint64 + +# isinstance input output +# TensorLike = tf_ops._TensorLike + + +def set_context(**kwargs): + raise Exception("Using TenosrFlow backend,You don't need to set context") + + +def get_tensor_shape(x): + return x.get_shape().as_list() + + +# initializers +def zeros(shape, dtype=tf.float32): + """ + Creates a tensor with all elements set to zero. + + Parameters + ---------- + shape : A list of integers + a tuple of integers, or a 1-D Tensor of type int32. + dtype : tensor + The DType of an element in the resulting Tensor + + Returns + ------- + A Tensor with all elements set to zero. + + """ + return tf.zeros(shape=shape, dtype=dtype) + + +def ones(shape, dtype=tf.float32): + """ + Creates a tensor with all elements set to ones. + + Parameters + ---------- + shape : A list of integers + a tuple of integers, or a 1-D Tensor of type int32. + dtype : tensor + The DType of an element in the resulting Tensor + + Returns + ------- + A Tensor with all elements set to zero. + + """ + return tf.ones(shape=shape, dtype=dtype) + + +def constant(value, dtype=tf.float32, shape=None): + """ + Creates a constant tensor from a tensor-like object. + + Parameters + ---------- + value : list + A constant value (or list) of output type dtype. + dtype : tensor + The type of the elements of the resulting tensor. + shape : tuple + Optional dimensions of resulting tensor. + + Returns + ------- + A Constant Tensor. + + """ + return tf.constant(value=value, dtype=dtype, shape=shape) + + +def random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None): + """ + Outputs random values from a uniform distribution. + + Parameters + ---------- + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. + minval : int + The lower bound on the range of random values to generate (inclusive). Defaults to 0. + maxval : int + The upper bound on the range of random values to generate (exclusive). Defaults to 1 if dtype is floating point. + dtype : tensor + The type of the output: float16, float32, float64, int32, or int64. + seed : int + Used in combination with tf.random.set_seed to create a reproducible sequence of tensors across multiple calls. + Returns + ------- + A tensor of the specified shape filled with random uniform values. + + """ + outputs = tf.random.uniform(shape=shape, minval=minval, maxval=maxval, dtype=dtype, seed=seed) + return outputs + + +def random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.dtypes.float32, seed=None): + """ + Outputs random values from a normal distribution. + + Parameters + ---------- + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. 
+ mean : float + The mean of the normal distribution + stddev : float + The standard deviation of the normal distribution. + dtype : tensor + The type of the output. + seed : A Python integer + Used to create a random seed for the distribution + + Returns + ------- + A tensor of the specified shape filled with random normal values. + + """ + outputs = tf.random.normal(shape=shape, mean=mean, stddev=stddev, dtype=dtype, seed=seed) + return outputs + + +def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None): + """ + Outputs random values from a truncated normal distribution. + + Parameters + ---------- + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. + mean : float + The mean of the normal distribution + stddev : float + The standard deviation of the normal distribution. + dtype : tensor + The type of the output. + seed : A Python integer + Used to create a random seed for the distribution + + Returns + ------- + A tensor of the specified shape filled with random truncated normal values. + + """ + outputs = tf.random.truncated_normal(shape=shape, mean=mean, stddev=stddev, dtype=dtype, seed=seed) + return outputs + + +def he_normal(shape, dtype, seed=None): + """ + He normal initializer. + + Parameters + ---------- + seed : A Python integer. + Used to seed the random generator. + shape : tuple + A 1-D integer Tensor or Python array. The shape of the output tensor. + dtype : tensor + The type of the output. + + Returns + ------- + A tensor of the specified shape filled with he normal values. + """ + return tf.initializers.he_normal(seed)(shape=shape, dtype=dtype) + + +def Variable(initial_value, name, trainable=True): + """ + Creates a new variable with value initial_value. + + Parameters + ---------- + initial_value : tensor + A Tensor, or Python object convertible to a Tensor + name : str + Optional name for the variable. Defaults to 'Variable' and gets uniquified automatically. + Returns + ------- + Variable + """ + + var = tf.Variable(initial_value=initial_value, name=name, trainable=trainable) + return var + + +class MatMul(object): + + def __init__(self): + pass + + def __call__(self, a, b): + return tf.matmul(a, b) + + +def matmul(a, b): + """ + Multiplies matrix a by matrix b, producing a * b. + + Parameters + ---------- + a : tensor + type float16, float32, float64, int32, complex64, complex128 and rank > 1. + b : tensor + with same type and rank as a. + + Returns + ------- + A Tensor of the same type as a and b + """ + + outputs = tf.matmul(a, b) + return outputs + + +def add(value, bias): + """ + Returns x + y element-wise. + + Parameters + ---------- + value : tensor. + Must be one of the following types: bfloat16, half, float32, float64, + uint8, int8, int16, int32, int64, complex64, complex128, string. + bias : tensor + Must have the same type as a + + Returns + ------- + A Tensor. Has the same type as a. + """ + + outputs = tf.add(value, bias) + return outputs + + +def dtypes(dt): + """ + Data dtypes. + + Parameters + ---------- + dt : string + It could be 'uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', + 'int32', 'int64', 'float16', 'float32', 'float64', 'DType'. 
+ + Returns + ------- + Data dtypes + """ + + if dt not in _dtypeDict.keys(): + raise Exception("Unsupported dtype: {}".format(dt)) + return _dtypeDict[dt] + + +class Maximum(object): + + def __init__(self): + pass + + def __call__(self, x, y): + return tf.maximum(x=x, y=y) + + +class Minimum(object): + + def __init__(self): + pass + + def __call__(self, x, y): + return tf.minimum(x=x, y=y) + + +def minimum(x, y): + """ + Returns the min of x and y (i.e. x < y ? x : y) element-wise. + + Parameters + ---------- + x : tensor. + Must be one of the following types: bfloat16, half, float32, float64, int32, int64. + y : A Tensor. + Must have the same type as x. + + Returns + ------- + A Tensor. Has the same type as x + """ + + outputs = tf.minimum(x=x, y=y) + return outputs + + +class FlattenReshape(object): + + def __init__(self): + pass + + def __call__(self, inputs): + dim = 1 + for d in get_tensor_shape(inputs)[1:]: + dim *= d + return tf.reshape(inputs, [-1, dim]) + + +class Reshape(object): + + def __init__(self, shape): + self.shape = shape + + def __call__(self, tensor): + return tf.reshape(tensor, self.shape) + + +def reshape(tensor, shape): + """ + Reshapes a tensor. + + Parameters + ---------- + tensor : tensor + A Tensor. + shape : tensor + Defines the shape of the output tensor. + Returns + ------- + A Tensor. Has the same type as tensor + """ + + return tf.reshape(tensor, shape) + + +class Concat(object): + + def __init__(self, axis): + super(Concat, self).__init__() + self.axis = axis + + def __call__(self, values): + return tf.concat(values=values, axis=self.axis) + + +def concat(values, axis): + """ + Concatenates tensors along one dimension. + + Parameters + ---------- + values : list + A list of Tensor objects or a single Tensor + axis : int + 0-D int32 Tensor. Dimension along which to concatenate + Returns + ------- + A Tensor resulting from concatenation of the input tensors. + """ + + return tf.concat(values, axis) + + +def convert_to_tensor(value, dtype=None): + """ + Converts the given value to a Tensor. + + Parameters + ---------- + value : object + An object whose type has a registered Tensor conversion function. + dtype : optional + Optional element type for the returned tensor. If missing, the type is inferred from the type of value. + + Returns + ------- + A Tensor based on value. + """ + + return tf.convert_to_tensor(value, dtype) + + +def sqrt(x): + """ + Computes square root of x element-wise. + + Parameters + ---------- + x : tensor + Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128. + + Returns + ------- + A Tensor. Has the same type as x. + """ + return tf.sqrt(x) + + +class ReduceSum(object): + + def __init__(self, axis=None): + self.axis = axis + + def __call__(self, input): + return tf.reduce_sum(input, axis=self.axis) + + +class ReduceMean(object): + + def __init__(self, axis): + self.axis = axis + + def __call__(self, inputs): + output = tf.reduce_mean(inputs, self.axis) + return output + + +def reduce_mean(input_tensor, axis=None): + """ + Computes the mean of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have numeric type. + axis : list + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. 
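+
+    Examples
+    --------
+    A small sketch of typical usage:
+
+    >>> import tensorflow as tf
+    >>> x = tf.constant([[1., 2.], [3., 4.]])
+    >>> reduce_mean(x)            # 2.5
+    >>> reduce_mean(x, axis=0)    # [2., 3.]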
+ """ + + return tf.reduce_mean(input_tensor, axis=axis) + + +class ReduceMax(object): + + def __init__(self, axis): + self.axis = axis + + def __call__(self, inputs): + output = tf.reduce_max(inputs, self.axis) + return output + + +def reduce_max(input_tensor, axis=None): + """ + Computes the maximum of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have real numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. + """ + + return tf.reduce_max(input_tensor, axis=axis) + + +def reduce_min(input_tensor, axis=None): + """ + Computes the minimum of elements across dimensions of a tensor. + + Parameters + ---------- + input_tensor : tensor + The tensor to reduce. Should have real numeric type. + axis : int + The dimensions to reduce. If None (the default), reduces all dimensions. + Must be in the range [-rank(input_tensor), rank(input_tensor)). + name : str + A name for the operation (optional). + + Returns + ------- + The reduced tensor. + """ + + return tf.reduce_min(input_tensor, axis=axis) + + +class Pad(object): + + def __init__(self, paddings, mode="REFLECT"): + if mode not in ['CONSTANT', 'REFLECT', 'SYMMETRIC']: + raise Exception("Unsupported mode: {}".format(mode)) + self.paddings = paddings + self.mode = mode + + def __call__(self, x): + outputs = tf.pad(x, self.paddings, mode=self.mode, constant_values=0) + return outputs + + +def pad(tensor, paddings, mode='CONSTANT', constant_values=0): + """ + Pads a tensor. + + Parameters + ---------- + tensor : tensor + A Tensor. + paddings : tensor + A Tensor of type int32. + mode : str + One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive) + constant_values : int + In "CONSTANT" mode, the scalar pad value to use. Must be same type as tensor. + + Returns + ------- + A Tensor. Has the same type as tensor. + """ + + if mode not in ['CONSTANT', 'REFLECT', 'SYMMETRIC']: + raise Exception("Unsupported mode: {}".format(mode)) + outputs = tf.pad(tensor, paddings, mode=mode, constant_values=constant_values) + return outputs + + +class Unstack(object): + + def __init__(self, axis, num=None): + self.axis = axis + self.num = num + + def __call__(self, values): + return tf.unstack(values, num=self.num, axis=self.axis) + + +class Stack(object): + + def __init__(self, axis=0): + self.axis = axis + + def __call__(self, values): + return tf.stack(values, axis=self.axis) + + +def stack(values, axis=0): + """ + Stacks a list of rank-R tensors into one rank-(R+1) tensor. + + Parameters + ---------- + values : list + A list of Tensor objects with the same shape and type. + axis : int + An int. The axis to stack along. Defaults to the first dimension. + Negative values wrap around, so the valid range is [-(R+1), R+1). + + Returns + ------- + A stacked Tensor with the same type as values. + """ + + return tf.stack(values, axis=axis) + + +class Meshgrid(object): + + def __init__(self, indexing='xy'): + super(Meshgrid, self).__init__() + self.index = indexing + + def __call__(self, inputs): + return tf.meshgrid(inputs) + + +def meshgrid(*args, **kwargs): + """ + Broadcasts parameters for evaluation on an N-D grid. + + Parameters + ---------- + x : tensor + Tensors with rank 1. + y : tensor + Tensors with rank 1. + + Returns + ------- + A list of N Tensors with rank N. 
+ """ + + return tf.meshgrid(*args, **kwargs) + + +def range(start, limit=None, delta=1, dtype=None): + """ + Creates a sequence of numbers. + + Parameters + ---------- + start : tensor + A 0-D Tensor (scalar). Acts as first entry in the range if limit is not None; + otherwise, acts as range limit and first entry defaults to 0. + limit : tensor + A 0-D Tensor (scalar). Upper limit of sequence, exclusive. If None, + defaults to the value of start while the first entry of the range defaults to 0. + delta : tensor + A 0-D Tensor (scalar). Number that increments start. Defaults to 1. + dtype : type + The type of the elements of the resulting tensor. + + Returns + ------- + An 1-D Tensor of type dtype. + """ + + if limit is None: + outputs = tf.range(start, delta=delta, dtype=dtype) + else: + outputs = tf.range(start, limit, delta=delta, dtype=dtype) + return outputs + + +class ExpandDims(object): + + def __init__(self, axis): + self.axis = axis + + def __call__(self, input): + return tf.expand_dims(input, axis=self.axis) + + +def expand_dims(input, axis): + """ + Inserts a dimension of 1 into a tensor's shape. + + Parameters + ---------- + input : tensor + A Tensor. + axis : int + 0-D (scalar). Specifies the dimension index at which to expand the shape of input. + Must be in the range [-rank(input) - 1, rank(input)]. + + Returns + ------- + A Tensor with the same data as input, but its shape has an additional dimension of size 1 added. + """ + + return tf.expand_dims(input, axis) + + +class Tile(object): + + def __init__(self): + pass + + def __call__(self, input, multiples): + return tf.tile(input, multiples) + + +def tile(input, multiples): + """ + Constructs a tensor by tiling a given tensor. + + Parameters + ---------- + input : tensor + A Tensor. 1-D or higher. + multiples : tensor + Must be one of the following types: int32, int64. 1-D. + Length must be the same as the number of dimensions in input + + Returns + ------- + A Tensor. Has the same type as input. + """ + + return tf.tile(input, multiples) + + +class Cast(object): + + def __init__(self, dtype): + self.dtype = dtype + + def __call__(self, x): + return tf.cast(x, dtype=self.dtype) + + +def cast(x, dtype): + """ + Casts a tensor to a new type. + + Parameters + ---------- + x : tensor + A Tensor or SparseTensor or IndexedSlices of numeric type. + It could be uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64. + dtype : dtpye + The destination type. The list of supported dtypes is the same as x + + Returns + ------- + A Tensor or SparseTensor or IndexedSlices with same shape as x and same type as dtype. + """ + + return tf.cast(x, dtype=dtype) + + +class Transpose(object): + + def __init__(self, perm, conjugate=False): + self.perm = perm + self.conjugate = conjugate + + def __call__(self, a): + return tf.transpose(a, self.perm, self.conjugate) + + +def transpose(a, perm=None, conjugate=False): + """ + Transposes a. + + Parameters + ---------- + a : tensor + A Tensor. + perm : list / int + A permutation of the dimensions of a. + conjugate : bool + Setting it to True is mathematically equivalent to tf.math.conj(tf.transpose(input)). + + Returns + ------- + A transposed Tensor. + """ + + return tf.transpose(a, perm, conjugate) + + +def gather_nd(params, indices, batch_dims=0): + """ + Gather slices from params into a Tensor with shape specified by indices. + + Parameters + ---------- + params : tensor + The tensor from which to gather values. 
+    indices : tensor
+        Must be one of the following types: int32, int64. Index tensor.
+    batch_dims : int
+        An integer or a scalar 'Tensor'. The number of batch dimensions.
+
+    Returns
+    -------
+    A Tensor. Has the same type as params.
+    """
+
+    return tf.gather_nd(params, indices, batch_dims)
+
+
+def clip_by_value(t, clip_value_min, clip_value_max):
+    """
+    Clips tensor values to a specified min and max.
+
+    Parameters
+    ----------
+    t : tensor
+        A Tensor or IndexedSlices
+    clip_value_min : tensor
+        A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The minimum value to clip by.
+    clip_value_max : tensor
+        A 0-D (scalar) Tensor, or a Tensor with the same shape as t. The maximum value to clip by.
+
+    Returns
+    -------
+    A clipped Tensor or IndexedSlices.
+    """
+
+    return tf.clip_by_value(t, clip_value_min, clip_value_max)
+
+
+def split(value, num_or_size_splits, axis=0, num=None):
+    """
+    Splits a tensor into sub tensors.
+
+    Parameters
+    ----------
+    value : tensor
+        The Tensor to split.
+    num_or_size_splits : list
+        Either an integer indicating the number of splits along split_dim or a 1-D integer Tensor or
+        Python list containing the sizes of each output tensor along split_dim.
+    axis : int
+        The dimension along which to split. Must be in the range [-rank(value), rank(value)). Defaults to 0.
+    num : int
+        Used to specify the number of outputs when it cannot be inferred from the shape of size_splits.
+
+    Returns
+    -------
+    Tensor objects resulting from splitting value.
+    """
+
+    return tf.split(value=value, num_or_size_splits=num_or_size_splits, axis=axis, num=num)
+
+
+class Floor(object):
+
+    def __call__(self, x):
+        return tf.floor(x)
+
+
+def floor(x):
+    return tf.floor(x)
+
+
+def gather(params, indices):
+    return tf.gather(params, indices)
+
+
+def linspace(start, stop, num):
+    return tf.linspace(start, stop, num)
+
+
+def slice(inputs, starts, sizes):
+    return tf.slice(inputs, starts, sizes)
+
+
+def add_n(inputs):
+    return tf.add_n(inputs)
+
+
+class OneHot(object):
+
+    def __init__(self, depth, on_value, off_value, axis, dtype):
+        self.depth = depth
+        self.on_value = on_value
+        self.off_value = off_value
+        self.axis = axis
+        self.dtype = dtype
+
+    def __call__(self, inputs, *args, **kwargs):
+        outputs = tf.one_hot(
+            inputs, self.depth, on_value=self.on_value, off_value=self.off_value, axis=self.axis, dtype=self.dtype
+        )
+        return outputs
+
+
+class L2Normalize(object):
+
+    def __init__(self, axis=None, epsilon=1e-12):
+        self.axis = axis
+        self.epsilon = epsilon
+
+    def __call__(self, input, *args, **kwargs):
+        outputs = tf.math.l2_normalize(input, axis=self.axis, epsilon=self.epsilon)
+        return outputs
+
+
+class EmbeddingLookup(object):
+
+    def __init__(self, max_norm=None):
+        self.max_norm = max_norm
+
+    def __call__(self, params, ids, *args, **kwargs):
+        outputs = tf.nn.embedding_lookup(params=params, ids=ids, max_norm=self.max_norm)
+        return outputs
+
+
+class NCELoss(object):
+
+    def __init__(self, num_true=1, sampled_values=None, remove_accidental_hits=False):
+        self.num_true = num_true
+        self.sampled_values = sampled_values
+        self.remove_accidental_hits = remove_accidental_hits
+
+    def __call__(self, weights, biases, labels, inputs, num_sampled, num_classes):
+        # forward the options stored in __init__ so they actually take effect
+        outputs = tf.nn.nce_loss(
+            weights=weights, biases=biases, inputs=inputs, labels=labels, num_sampled=num_sampled,
+            num_classes=num_classes, num_true=self.num_true, sampled_values=self.sampled_values,
+            remove_accidental_hits=self.remove_accidental_hits
+        )
+        return outputs
+
+
+class NotEqual(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x, y):
+        return tf.not_equal(x, y)
+
+
+class CountNonzero(object):
+
+    def __init__(self, keepdims=None, dtype=tf.int64):
+        self.keepdims = keepdims
+        self.dtype = dtype
+
+    def __call__(self, input, axis=None):
+        return tf.math.count_nonzero(input, axis=axis, keepdims=self.keepdims, dtype=self.dtype)
+
+
+class Resize:
+
+    def __init__(self, scale, method, antialias=False, data_format='channels_last', ksize=None):
+        self.method = method
+        self.antialias = antialias
+        self.scale = scale
+        self.data_format = data_format
+
+    def __call__(self, inputs):
+        if self.data_format == 'channels_first':
+            inputs = nchw_to_nhwc(inputs)
+        if len(get_tensor_shape(inputs)) == 4:
+            output_size = [int(inputs.shape[1] * self.scale[0]), int(inputs.shape[2] * self.scale[1])]
+        else:
+            raise ValueError("The input must be a 4-D Tensor.")
+        outputs = tf.image.resize(inputs, size=output_size, method=self.method, antialias=self.antialias)
+        if self.data_format == 'channels_first':
+            outputs = nhwc_to_nchw(outputs)
+        return outputs
+
+
+def resize(inputs, output_size, method, antialias):
+    return tf.image.resize(inputs, size=output_size, method=method, antialias=antialias)
+
+
+class ZeroPadding1D(object):
+
+    def __init__(self, padding):
+        self.zeropad = tf.keras.layers.ZeroPadding1D(padding=padding)
+
+    def __call__(self, inputs):
+        return self.zeropad(inputs)
+
+
+class ZeroPadding2D(object):
+
+    def __init__(self, padding):
+        self.zeropad = tf.keras.layers.ZeroPadding2D(padding=padding)
+
+    def __call__(self, inputs):
+        return self.zeropad(inputs)
+
+
+class ZeroPadding3D(object):
+
+    def __init__(self, padding):
+        self.zeropad = tf.keras.layers.ZeroPadding3D(padding=padding)
+
+    def __call__(self, inputs):
+        return self.zeropad(inputs)
+
+
+class Sign(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return tf.sign(x)
+
+
+class Ceil(object):
+
+    def __call__(self, x):
+        return tf.math.ceil(x)
+
+
+def ceil(x):
+    return tf.math.ceil(x)
+
+
+def multiply(x, y):
+    return tf.multiply(x, y)
+
+
+def divide(x, y):
+    return tf.divide(x, y)
+
+
+def identity(x):
+    return tf.identity(x)
+
+
+class BatchToSpace(object):
+
+    def __init__(self, block_size, crops):
+        self.block_size = block_size
+        self.crops = crops
+
+    def __call__(self, input_x):
+        return tf.batch_to_space(input=input_x, block_shape=self.block_size, crops=self.crops)
+
+
+class DepthToSpace(object):
+
+    def __init__(self, block_size, data_format='NHWC'):
+        self.block_size = block_size
+        self.data_format = data_format
+
+    def __call__(self, input):
+        return tf.nn.depth_to_space(input, block_size=self.block_size, data_format=self.data_format)
diff --git a/tensorlayer/backend/ops/tensorflow_nn.py b/tensorlayer/backend/ops/tensorflow_nn.py
new file mode 100644
index 000000000..1e8ac1142
--- /dev/null
+++ b/tensorlayer/backend/ops/tensorflow_nn.py
@@ -0,0 +1,1913 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import tensorflow as tf
+from tensorflow.python.framework import ops
+from tensorflow.python.ops import math_ops
+from tensorflow.python.training import moving_averages
+from math import floor, ceil
+# loss function
+sparse_softmax_cross_entropy_with_logits = tf.nn.sparse_softmax_cross_entropy_with_logits
+sigmoid_cross_entropy_with_logits = tf.nn.sigmoid_cross_entropy_with_logits
+
+
+def padding_format(padding):
+    """
+    Checks that the given padding is supported and normalizes it.
+
+    Parameters
+    ----------
+    padding : str
+        Must be one of the following: "same", "SAME", "VALID", "valid"
+
+    Returns
+    -------
+    str "SAME" or "VALID"
+    """
+
+    if padding in ["SAME", "same"]:
+        padding = "SAME"
+    elif padding in ["VALID", "valid"]:
+        padding = "VALID"
+    elif padding is None:
+        padding = None
+    else:
+        raise Exception("Unsupported padding: " + str(padding))
+    return padding
+
+
+def preprocess_1d_format(data_format, padding):
+    """
+    Checks that the given 1-D data format and padding are supported and normalizes them.
+
+    Parameters
+    ----------
+    data_format : str
+        Must be one of the following: "channels_last", "NWC", "NCW", "channels_first"
+    padding : str
+        Must be one of the following: "same", "valid", "SAME", "VALID"
+
+    Returns
+    -------
+    str "NWC" or "NCW" and "SAME" or "VALID"
+    """
+    if data_format in ["channels_last", "NWC"]:
+        data_format = "NWC"
+    elif data_format in ["channels_first", "NCW"]:
+        data_format = "NCW"
+    elif data_format is None:
+        data_format = None
+    else:
+        raise Exception("Unsupported data format: " + str(data_format))
+    padding = padding_format(padding)
+    return data_format, padding
+
+
+def preprocess_2d_format(data_format, padding):
+    """
+    Checks that the given 2-D data format and padding are supported and normalizes them.
+
+    Parameters
+    ----------
+    data_format : str
+        Must be one of the following: "channels_last", "NHWC", "NCHW", "channels_first"
+    padding : str
+        Must be one of the following: "same", "valid", "SAME", "VALID"
+
+    Returns
+    -------
+    str "NHWC" or "NCHW" and "SAME" or "VALID"
+    """
+
+    if data_format in ["channels_last", "NHWC"]:
+        data_format = "NHWC"
+    elif data_format in ["channels_first", "NCHW"]:
+        data_format = "NCHW"
+    elif data_format is None:
+        data_format = None
+    else:
+        raise Exception("Unsupported data format: " + str(data_format))
+    padding = padding_format(padding)
+    return data_format, padding
+
+
+def preprocess_3d_format(data_format, padding):
+    """
+    Checks that the given 3-D data format and padding are supported and normalizes them.
+
+    Parameters
+    ----------
+    data_format : str
+        Must be one of the following: "channels_last", "NDHWC", "NCDHW", "channels_first"
+    padding : str
+        Must be one of the following: "same", "valid", "SAME", "VALID"
+
+    Returns
+    -------
+    str "NDHWC" or "NCDHW" and "SAME" or "VALID"
+    """
+
+    if data_format in ['channels_last', 'NDHWC']:
+        data_format = 'NDHWC'
+    elif data_format in ['channels_first', 'NCDHW']:
+        data_format = 'NCDHW'
+    elif data_format is None:
+        data_format = None
+    else:
+        raise Exception("Unsupported data format: " + str(data_format))
+    padding = padding_format(padding)
+    return data_format, padding
+
+
+def nchw_to_nhwc(x):
+    """
+    Channels first to channels last
+
+    Parameters
+    ----------
+    x : tensor
+        channels first tensor data
+
+    Returns
+    -------
+    channels last tensor data
+    """
+
+    if len(x.shape) == 3:
+        x = tf.transpose(x, (0, 2, 1))
+    elif len(x.shape) == 4:
+        x = tf.transpose(x, (0, 2, 3, 1))
+    elif len(x.shape) == 5:
+        x = tf.transpose(x, (0, 2, 3, 4, 1))
+    else:
+        raise Exception("Unsupported dimensions")
+    return x
+
+
+def nhwc_to_nchw(x):
+    """
+    Channels last to channels first
+
+    Parameters
+    ----------
+    x : tensor
+        channels last tensor data
+
+    Returns
+    -------
+    channels first tensor data
+    """
+
+    if len(x.shape) == 3:
+        x = tf.transpose(x, (0, 2, 1))
+    elif len(x.shape) == 4:
+        x = tf.transpose(x, (0, 3, 1, 2))
+    elif len(x.shape) == 5:
+        x = tf.transpose(x, (0, 4, 1, 2, 3))
+    else:
+        raise Exception("Unsupported dimensions")
+    return x
+
+
+class ReLU(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return tf.nn.relu(x)
+
+
+def relu(x):
+    """
+    Computes rectified linear: max(features, 0).
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: float32, float64, int32, uint8, int16,
+        int8, int64, bfloat16, uint16, half, uint32, uint64, qint8.
+
+    Returns
+    -------
+    A Tensor. Has the same type as features.
+    """
+
+    return tf.nn.relu(x)
+
+
+class ReLU6(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return tf.nn.relu6(x)
+
+
+def relu6(x):
+    """
+    Computes Rectified Linear 6: min(max(features, 0), 6).
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: float32, float64, int32, uint8, int16,
+        int8, int64, bfloat16, uint16, half, uint32, uint64, qint8.
+
+    Returns
+    -------
+    A Tensor with the same type as features.
+    """
+
+    return tf.nn.relu6(x)
+
+
+class LeakyReLU(object):
+
+    def __init__(self, alpha=0.2):
+        self.alpha = alpha
+
+    def __call__(self, x):
+        return tf.nn.leaky_relu(x, alpha=self.alpha)
+
+
+def leaky_relu(x, alpha=0.2):
+    """
+    Compute the Leaky ReLU activation function.
+
+    Parameters
+    ----------
+    x : tensor
+        representing preactivation values. Must be one of the following types:
+        float16, float32, float64, int32, int64.
+    alpha : float
+        Slope of the activation function at x < 0.
+
+    Returns
+    -------
+    The activation value.
+    """
+
+    return tf.nn.leaky_relu(x, alpha=alpha)
+
+
+class Softplus(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return tf.nn.softplus(x)
+
+
+def softplus(x):
+    """
+    Computes softplus: log(exp(features) + 1).
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: half, bfloat16, float32, float64.
+
+    Returns
+    -------
+    A Tensor. Has the same type as features.
+    """
+
+    return tf.nn.softplus(x)
+
+
+class Tanh(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return tf.nn.tanh(x)
+
+
+def tanh(x):
+    """
+    Computes hyperbolic tangent of x element-wise.
+
+    Parameters
+    ----------
+    x : tensor
+        Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
+
+    Returns
+    -------
+    A Tensor. Has the same type as x.
+    """
+
+    return tf.nn.tanh(x)
+
+
+class Sigmoid(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return tf.nn.sigmoid(x)
+
+
+def sigmoid(x):
+    """
+    Computes sigmoid of x element-wise.
+
+    Parameters
+    ----------
+    x : tensor
+        A Tensor with type float16, float32, float64, complex64, or complex128.
+
+    Returns
+    -------
+    A Tensor with the same type as x.
+    """
+
+    return tf.nn.sigmoid(x)
+
+
+class Softmax(object):
+
+    def __init__(self):
+        pass
+
+    def __call__(self, x):
+        return tf.nn.softmax(x)
+
+
+def softmax(logits, axis=None):
+    """
+    Computes softmax activations.
+
+    Parameters
+    ----------
+    logits : tensor
+        Must be one of the following types: half, float32, float64.
+    axis : int
+        The dimension softmax would be performed on. The default is -1 which indicates the last dimension.
+
+    Returns
+    -------
+    A Tensor. Has the same type and shape as logits.
+    """
+
+    return tf.nn.softmax(logits, axis)
+
+
+class Dropout(object):
+
+    def __init__(self, keep, seed=0):
+        self.keep = keep
+        self.seed = seed
+
+    def __call__(self, inputs, *args, **kwargs):
+        outputs = tf.nn.dropout(inputs, rate=1 - (self.keep), seed=self.seed)
+        return outputs
+
+
+class BiasAdd(object):
+    """
+    Adds bias to value.
+
+    Parameters
+    ----------
+    x : tensor
+        A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128.
+    bias : tensor
+        Must be the same type as value unless value is a quantized type,
+        in which case a different quantized type may be used.
+    Returns
+    -------
+    A Tensor with the same type as value.
+    """
+
+    def __init__(self, data_format=None):
+        self.data_format = data_format
+
+    def __call__(self, x, bias):
+        return tf.nn.bias_add(x, bias, data_format=self.data_format)
+
+
+def bias_add(x, bias, data_format=None, name=None):
+    """
+    Adds bias to value.
+
+    Parameters
+    ----------
+    x : tensor
+        A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128.
+    bias : tensor
+        Must be the same type as value unless value is a quantized type,
+        in which case a different quantized type may be used.
+    data_format : A string.
+        'N...C' and 'NC...' are supported.
+    name : str
+        A name for the operation (optional).
+    Returns
+    -------
+    A Tensor with the same type as value.
+    """
+
+    x = tf.nn.bias_add(x, bias, data_format=data_format, name=name)
+    return x
+
+
+class Conv1D(object):
+
+    def __init__(self, stride, padding, data_format='NWC', dilations=None, out_channel=None, k_size=None):
+        self.stride = stride
+        self.dilations = dilations
+        self.data_format, self.padding = preprocess_1d_format(data_format, padding)
+
+    def __call__(self, input, filters):
+        outputs = tf.nn.conv1d(
+            input=input,
+            filters=filters,
+            stride=self.stride,
+            padding=self.padding,
+            data_format=self.data_format,
+            dilations=self.dilations,
+            # name=name
+        )
+        return outputs
+
+
+def conv1d(input, filters, stride, padding, data_format='NWC', dilations=None):
+    """
+    Computes a 1-D convolution given 3-D input and filter tensors.
+
+    Parameters
+    ----------
+    input : tensor
+        A 3D Tensor. Must be of type float16, float32, or float64
+    filters : tensor
+        A 3D Tensor. Must have the same type as input.
+    stride : int or list
+        An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step.
+    padding : string
+        'SAME' or 'VALID'
+    data_format : string
+        An optional string from "NWC", "NCW". Defaults to "NWC", the data is stored in the order of
+        [batch, in_width, in_channels]. The "NCW" format stores data as [batch, in_channels, in_width].
+    dilations : int or list
+        An int or list of ints that has length 1 or 3 which defaults to 1.
+        The dilation factor for each dimension of input. If set to k > 1,
+        there will be k-1 skipped cells between each filter element on that dimension.
+        Dilations in the batch and depth dimensions must be 1.
+
+    Returns
+    -------
+    A Tensor. Has the same type as input.
+    """
+
+    data_format, padding = preprocess_1d_format(data_format, padding)
+    outputs = tf.nn.conv1d(
+        input=input,
+        filters=filters,
+        stride=stride,
+        padding=padding,
+        data_format=data_format,
+        dilations=dilations,
+        # name=name
+    )
+    return outputs
+
+
+class Conv2D(object):
+
+    def __init__(self, strides, padding, data_format='NHWC', dilations=None, out_channel=None, k_size=None):
+        self.strides = strides
+        self.dilations = dilations
+        self.data_format, self.padding = preprocess_2d_format(data_format, padding)
+
+    def __call__(self, input, filters):
+        outputs = tf.nn.conv2d(
+            input=input,
+            filters=filters,
+            strides=self.strides,
+            padding=self.padding,
+            data_format=self.data_format,
+            dilations=self.dilations,
+        )
+        return outputs
+
+
+def conv2d(input, filters, strides, padding, data_format='NHWC', dilations=None):
+    """
+    Computes a 2-D convolution given 4-D input and filters tensors.
+
+    Parameters
+    ----------
+    input : tensor
+        Must be one of the following types: half, bfloat16, float32, float64. A 4-D tensor.
+        The dimension order is interpreted according to the value of data_format, see below for details.
+    filters : tensor
+        Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]
+    strides : int or list
+        The stride of the sliding window for each dimension of input. If a single value is given it is replicated in the H and W dimension.
+        By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
+    padding : string
+        "SAME" or "VALID"
+    data_format : string
+        "NHWC", "NCHW". Defaults to "NHWC".
+    dilations : list of ints
+        list of ints that has length 1, 2 or 4, defaults to 1. The dilation factor for each dimension of input.
+
+    Returns
+    -------
+    A Tensor. Has the same type as input.
+    """
+
+    data_format, padding = preprocess_2d_format(data_format, padding)
+    outputs = tf.nn.conv2d(
+        input=input,
+        filters=filters,
+        strides=strides,
+        padding=padding,
+        data_format=data_format,
+        dilations=dilations,
+    )
+    return outputs
+
+
+class Conv3D(object):
+
+    def __init__(self, strides, padding, data_format='NDHWC', dilations=None, out_channel=None, k_size=None):
+        self.strides = strides
+        self.dilations = dilations
+        self.data_format, self.padding = preprocess_3d_format(data_format, padding)
+
+    def __call__(self, input, filters):
+        outputs = tf.nn.conv3d(
+            input=input,
+            filters=filters,
+            strides=self.strides,
+            padding=self.padding,
+            data_format=self.data_format,
+            dilations=self.dilations,
+        )
+        return outputs
+
+
+def conv3d(input, filters, strides, padding, data_format='NDHWC', dilations=None):
+    """
+    Computes a 3-D convolution given 5-D input and filters tensors.
+ + Parameters + ---------- + input : tensor + Must be one of the following types: half, bfloat16, float32, float64. + Shape [batch, in_depth, in_height, in_width, in_channels]. + filters : tensor + Must have the same type as input. Shape [filter_depth, filter_height, filter_width, in_channels, out_channels]. + in_channels must match between input and filters. + strides : list of ints + A list of ints that has length >= 5. 1-D tensor of length 5. + The stride of the sliding window for each dimension of input. + Must have strides[0] = strides[4] = 1. + padding : string + A string from: "SAME", "VALID". The type of padding algorithm to use. + data_format : string + An optional string from: "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data. + With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels]. + Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width]. + dilations : list of ints + Defaults to [1, 1, 1, 1, 1]. 1-D tensor of length 5. The dilation factor for each dimension of input. + If set to k > 1, there will be k-1 skipped cells between each filter element on that dimension. + The dimension order is determined by the value of data_format, see above for details. + Dilations in the batch and depth dimensions must be 1. + name : string + A name for the operation (optional). + + Returns + ------- + A Tensor. Has the same type as input. + """ + + data_format, padding = preprocess_3d_format(data_format, padding) + outputs = tf.nn.conv3d( + input=input, + filters=filters, + strides=strides, + padding=padding, + data_format=data_format, # 'NDHWC', + dilations=dilations, # [1, 1, 1, 1, 1], + # name=name, + ) + return outputs + + +def lrn(inputs, depth_radius, bias, alpha, beta): + """ + Local Response Normalization. + + Parameters + ---------- + inputs : tensor + Must be one of the following types: half, bfloat16, float32. 4-D. + depth_radius : int + Defaults to 5. 0-D. Half-width of the 1-D normalization window. + bias : float + Defaults to 1. An offset (usually positive to avoid dividing by 0). + alpha : float + Defaults to 1. A scale factor, usually positive. + beta : float + Defaults to 0.5. An exponent. + + Returns + ------- + A Tensor. Has the same type as input. + """ + + outputs = tf.nn.lrn(inputs, depth_radius=depth_radius, bias=bias, alpha=alpha, beta=beta) + return outputs + + +def moments(x, axes, shift=None, keepdims=False): + """ + Calculates the mean and variance of x. + + Parameters + ---------- + x : tensor + A Tensor + axes : list or ints + Axes along which to compute mean and variance. + shift : int + Not used in the current implementation. + keepdims : bool + produce moments with the same dimensionality as the input. + + Returns + ------- + Two Tensor objects: mean and variance. 
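+
+    Examples
+    --------
+    A minimal sketch (TensorFlow backend assumed); reducing over axis 0
+    yields one mean/variance pair per column:
+
+    >>> x = tf.constant([[1., 2.], [3., 4.]])
+    >>> mean, var = moments(x, axes=[0])
+    >>> # mean == [2., 3.], var == [1., 1.]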
+ """ + + outputs = tf.nn.moments(x, axes, shift, keepdims) + return outputs + + +class MaxPool1d(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_1d_format(data_format=data_format, padding=padding) + self.ksize = ksize + self.strides = strides + + def __call__(self, inputs): + outputs = tf.nn.max_pool( + input=inputs, ksize=self.ksize, strides=self.strides, padding=self.padding, data_format=self.data_format + ) + return outputs + + +class MaxPool(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.ksize = ksize + self.strides = strides + self.data_format = data_format + self.padding = padding + + def __call__(self, inputs): + if inputs.ndim == 3: + self.data_format, self.padding = preprocess_1d_format(data_format=self.data_format, padding=self.padding) + elif inputs.ndim == 4: + self.data_format, self.padding = preprocess_2d_format(data_format=self.data_format, padding=self.padding) + elif inputs.ndim == 5: + self.data_format, self.padding = preprocess_3d_format(data_format=self.data_format, padding=self.padding) + + outputs = tf.nn.max_pool( + input=inputs, ksize=self.ksize, strides=self.strides, padding=self.padding, data_format=self.data_format + ) + return outputs + + +def max_pool(input, ksize, strides, padding, data_format=None): + """ + Performs the max pooling on the input. + + Parameters + ---------- + input : tensor + Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] if data_format does not start + with "NC" (default), or [batch_size, num_channels] + input_spatial_shape if data_format starts with "NC". + Pooling happens over the spatial dimensions only. + ksize : int or list of ints + An int or list of ints that has length 1, N or N+2. + The size of the window for each dimension of the input tensor. + strides : int or list of ints + An int or list of ints that has length 1, N or N+2. + The stride of the sliding window for each dimension of the input tensor. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + name : string + A name for the operation (optional). + + Returns + ------- + A Tensor of format specified by data_format. The max pooled output tensor. 
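+
+    Examples
+    --------
+    A minimal sketch (TensorFlow backend assumed); a 4-D NHWC input pooled
+    with a 2x2 window and stride 2 halves each spatial dimension:
+
+    >>> x = tf.random.uniform([1, 4, 4, 3])
+    >>> y = max_pool(x, ksize=2, strides=2, padding='VALID')
+    >>> # y.shape == (1, 2, 2, 3)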
+ """ + + if input.ndim == 3: + data_format, padding = preprocess_1d_format(data_format=data_format, padding=padding) + elif input.ndim == 4: + data_format, padding = preprocess_2d_format(data_format=data_format, padding=padding) + elif input.ndim == 5: + data_format, padding = preprocess_3d_format(data_format=data_format, padding=padding) + + outputs = tf.nn.max_pool(input=input, ksize=ksize, strides=strides, padding=padding, data_format=data_format) + return outputs + + +class AvgPool1d(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_1d_format(data_format=data_format, padding=padding) + self.ksize = ksize + self.strides = strides + + def __call__(self, inputs): + outputs = tf.nn.pool( + input=inputs, + window_shape=self.ksize, + pooling_type="AVG", + strides=self.strides, + padding=self.padding, + data_format=self.data_format, + ) + return outputs + + +class AvgPool(object): + + def __init__(self, ksize, strides, padding, data_format=None): + self.ksize = ksize + self.strides = strides + self.data_format = data_format + self.padding = padding_format(padding) + + def __call__(self, inputs): + outputs = tf.nn.avg_pool( + input=inputs, ksize=self.ksize, strides=self.strides, padding=self.padding, data_format=self.data_format + ) + return outputs + + +def avg_pool(input, ksize, strides, padding): + """ + Performs the avg pooling on the input. + + Parameters + ---------- + input : tensor + Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels] + if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape + if data_format starts with "NC". Pooling happens over the spatial dimensions only. + ksize : int or list of ints + An int or list of ints that has length 1, N or N+2. + The size of the window for each dimension of the input tensor. + strides : int or list of ints + An int or list of ints that has length 1, N or N+2. + The stride of the sliding window for each dimension of the input tensor. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + name : string + Optional name for the operation. + + Returns + ------- + A Tensor of format specified by data_format. The average pooled output tensor. + """ + + padding = padding_format(padding) + outputs = tf.nn.avg_pool( + input=input, + ksize=ksize, + strides=strides, + padding=padding, + ) + return outputs + + +class MaxPool3d(object): + def __init__(self, ksize, strides, padding, data_format=None): + self.data_format, self.padding = preprocess_3d_format(data_format, padding) + self.ksize = ksize + self.strides = strides + + def __call__(self, inputs): + outputs = tf.nn.max_pool3d( + input=inputs, + ksize=self.ksize, + strides=self.strides, + padding=self.padding, + data_format=self.data_format, + ) + return outputs + + +def max_pool3d(input, ksize, strides, padding, data_format=None): + """ + Performs the max pooling on the input. + + Parameters + ---------- + input : tensor + A 5-D Tensor of the format specified by data_format. + ksize : int or list of ints + An int or list of ints that has length 1, 3 or 5. + The size of the window for each dimension of the input tensor. + strides : int or list of ints + An int or list of ints that has length 1, 3 or 5. + The stride of the sliding window for each dimension of the input tensor. + padding : string + 'VALID' or 'SAME'. The padding algorithm. 
See the "returns" section of tf.ops.convolution for details.
+    data_format : string
+        "NDHWC", "NCDHW". Defaults to "NDHWC". The data format of the input and output data.
+        With the default format "NDHWC", the data is stored in the order of: [batch, in_depth, in_height, in_width, in_channels].
+        Alternatively, the format could be "NCDHW", the data storage order is: [batch, in_channels, in_depth, in_height, in_width].
+
+    Returns
+    -------
+    A Tensor of format specified by data_format. The max pooled output tensor.
+    """
+
+    data_format, padding = preprocess_3d_format(data_format, padding)
+    outputs = tf.nn.max_pool3d(
+        input=input,
+        ksize=ksize,
+        strides=strides,
+        padding=padding,
+        data_format=data_format,
+    )
+    return outputs
+
+
+class AvgPool3d(object):
+
+    def __init__(self, ksize, strides, padding, data_format=None):
+        self.data_format, self.padding = preprocess_3d_format(data_format, padding)
+        self.ksize = ksize
+        self.strides = strides
+
+    def __call__(self, inputs):
+        outputs = tf.nn.avg_pool3d(
+            input=inputs,
+            ksize=self.ksize,
+            strides=self.strides,
+            padding=self.padding,
+            data_format=self.data_format,
+        )
+        return outputs
+
+
+def avg_pool3d(input, ksize, strides, padding, data_format=None):
+    """
+    Performs the average pooling on the input.
+
+    Parameters
+    ----------
+    input : tensor
+        A 5-D Tensor of shape [batch, depth, height, width, channels] and type float32, float64, qint8, quint8, or qint32.
+    ksize : int or list of ints
+        An int or list of ints that has length 1, 3 or 5. The size of the window for each dimension of the input tensor.
+    strides : int or list of ints
+        An int or list of ints that has length 1, 3 or 5.
+        The stride of the sliding window for each dimension of the input tensor.
+    padding : string
+        'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details.
+    data_format : string
+        'NDHWC' and 'NCDHW' are supported.
+
+    Returns
+    -------
+    A Tensor with the same type as value. The average pooled output tensor.
+    """
+
+    data_format, padding = preprocess_3d_format(data_format, padding)
+    outputs = tf.nn.avg_pool3d(
+        input=input,
+        ksize=ksize,
+        strides=strides,
+        padding=padding,
+        data_format=data_format,
+    )
+    return outputs
+
+
+def pool(input, window_shape, pooling_type, strides=None, padding='VALID', data_format=None, dilations=None, name=None):
+    """
+    Performs an N-D pooling operation.
+
+    Parameters
+    ----------
+    input : tensor
+        Tensor of rank N+2, of shape [batch_size] + input_spatial_shape + [num_channels]
+        if data_format does not start with "NC" (default), or [batch_size, num_channels] + input_spatial_shape
+        if data_format starts with "NC". Pooling happens over the spatial dimensions only.
+    window_shape : int
+        Sequence of N ints >= 1.
+    pooling_type : string
+        Specifies pooling operation, must be "AVG" or "MAX".
+    strides : ints
+        Sequence of N ints >= 1. Defaults to [1]*N. If any value of strides is > 1, then all values of dilation_rate must be 1.
+    padding : string
+        The padding algorithm, must be "SAME" or "VALID". Defaults to "VALID".
+        See the "returns" section of tf.ops.convolution for details.
+    data_format : string
+        Specifies whether the channel dimension of the input and output is the last dimension (default, or if data_format does not start with "NC"),
+        or the second dimension (if data_format starts with "NC").
+        For N=1, the valid values are "NWC" (default) and "NCW".
For N=2, the valid values are "NHWC" (default) and "NCHW". + For N=3, the valid values are "NDHWC" (default) and "NCDHW". + dilations : list of ints + Dilation rate. List of N ints >= 1. Defaults to [1]*N. If any value of dilation_rate is > 1, then all values of strides must be 1. + name : string + Optional. Name of the op. + + Returns + ------- + Tensor of rank N+2, of shape [batch_size] + output_spatial_shape + [num_channels] + """ + if pooling_type in ["MAX", "max"]: + pooling_type = "MAX" + elif pooling_type in ["AVG", "avg"]: + pooling_type = "AVG" + else: + raise ValueError('Unsupported pool_mode: ' + str(pooling_type)) + padding = padding_format(padding) + outputs = tf.nn.pool( + input=input, + window_shape=window_shape, + pooling_type=pooling_type, + strides=strides, + padding=padding, + data_format=data_format, + dilations=dilations, + name=name, + ) + return outputs + + +class DepthwiseConv2d(object): + + def __init__(self, strides, padding, data_format=None, dilations=None, ksize=None, channel_multiplier=1): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.strides = strides + self.dilations = dilations + + def __call__(self, input, filter): + outputs = tf.nn.depthwise_conv2d( + input=input, + filter=filter, + strides=self.strides, + padding=self.padding, + data_format=self.data_format, + dilations=self.dilations, + ) + return outputs + + +def depthwise_conv2d(input, filter, strides, padding, data_format=None, dilations=None, name=None): + """ + Depthwise 2-D convolution. + + Parameters + ---------- + input : tensor + 4-D with shape according to data_format. + filter : tensor + 4-D with shape [filter_height, filter_width, in_channels, channel_multiplier]. + strides : list + 1-D of size 4. The stride of the sliding window for each dimension of input. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + The data format for input. Either "NHWC" (default) or "NCHW". + dilations : list + 1-D of size 2. The dilation rate in which we sample input values across the height and width dimensions in atrous convolution. + If it is greater than 1, then all values of strides must be 1. + name : string + A name for this operation (optional). + + Returns + ------- + A 4-D Tensor with shape according to data_format. + E.g., for "NHWC" format, shape is [batch, out_height, out_width, in_channels * channel_multiplier]. 
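+
+    Examples
+    --------
+    A minimal sketch (TensorFlow backend assumed); with 8 input channels and
+    channel_multiplier=2 the output has 8 * 2 = 16 channels:
+
+    >>> x = tf.random.uniform([1, 28, 28, 8])
+    >>> flt = tf.random.uniform([3, 3, 8, 2])  # [height, width, in_channels, channel_multiplier]
+    >>> y = depthwise_conv2d(x, flt, strides=[1, 1, 1, 1], padding='SAME')
+    >>> # y.shape == (1, 28, 28, 16)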
+ """ + + data_format, padding = preprocess_2d_format(data_format, padding) + outputs = tf.nn.depthwise_conv2d( + input=input, + filter=filter, + strides=strides, + padding=padding, + data_format=data_format, + dilations=dilations, + name=name, + ) + return outputs + + +class Conv1d_transpose(object): + + def __init__( + self, strides, padding, data_format='NWC', dilations=None, out_channel=None, k_size=None, in_channels=None + ): + self.strides = strides + self.dilations = dilations + self.data_format, self.padding = preprocess_1d_format(data_format, padding) + + def __call__(self, input, filters): + batch_size = input.shape[0] + if self.data_format == 'NWC': + w_axis, c_axis = 1, 2 + else: + w_axis, c_axis = 2, 1 + + input_shape = input.shape.as_list() + filters_shape = filters.shape.as_list() + input_w = input_shape[w_axis] + filters_w = filters_shape[0] + output_channels = filters_shape[1] + dilations_w = 1 + + if isinstance(self.strides, int): + strides_w = self.strides + else: + strides_list = list(self.strides) + strides_w = strides_list[w_axis] + + if self.dilations is not None: + if isinstance(self.dilations, int): + dilations_w = self.dilations + else: + dilations_list = list(self.dilations) + dilations_w = dilations_list[w_axis] + + filters_w = filters_w + (filters_w - 1) * (dilations_w - 1) + assert self.padding in {'SAME', 'VALID'} + if self.padding == 'VALID': + output_w = input_w * strides_w + max(filters_w - strides_w, 0) + elif self.padding == 'SAME': + output_w = input_w * strides_w + + if self.data_format == 'NCW': + output_shape = (batch_size, output_channels, output_w) + else: + output_shape = (batch_size, output_w, output_channels) + output_shape = tf.stack(output_shape) + outputs = tf.nn.conv1d_transpose( + input=input, + filters=filters, + output_shape=output_shape, + strides=self.strides, + padding=self.padding, + data_format=self.data_format, + dilations=self.dilations, + ) + return outputs + + +def conv1d_transpose( + input, filters, output_shape, strides, padding='SAME', data_format='NWC', dilations=None, name=None +): + """ + The transpose of conv1d. + + Parameters + ---------- + input : tensor + A 3-D Tensor of type float and shape [batch, in_width, in_channels] + for NWC data format or [batch, in_channels, in_width] for NCW data format. + filters : tensor + A 3-D Tensor with the same type as value and shape [filter_width, output_channels, in_channels]. + filter's in_channels dimension must match that of value. + output_shape : tensor + A 1-D Tensor, containing three elements, representing the output shape of the deconvolution op. + strides : list + An int or list of ints that has length 1 or 3. The number of entries by which the filter is moved right at each step. + padding : string + 'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details. + data_format : string + 'NWC' and 'NCW' are supported. + dilations : list + An int or list of ints that has length 1 or 3 which defaults to 1. + The dilation factor for each dimension of input. If set to k > 1, + there will be k-1 skipped cells between each filter element on that dimension. + Dilations in the batch and depth dimensions must be 1. + name : string + Optional name for the returned tensor. + + Returns + ------- + A Tensor with the same type as value. 
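+
+    Examples
+    --------
+    A minimal sketch (TensorFlow backend assumed); with 'SAME' padding and
+    stride 2 the width dimension is upsampled by a factor of 2:
+
+    >>> x = tf.random.uniform([8, 16, 32])  # [batch, in_width, in_channels]
+    >>> flt = tf.random.uniform([5, 64, 32])  # [filter_width, output_channels, in_channels]
+    >>> y = conv1d_transpose(x, flt, output_shape=[8, 32, 64], strides=2)
+    >>> # y.shape == (8, 32, 64)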
+    """
+
+    data_format, padding = preprocess_1d_format(data_format, padding)
+    outputs = tf.nn.conv1d_transpose(
+        input=input,
+        filters=filters,
+        output_shape=output_shape,
+        strides=strides,
+        padding=padding,
+        data_format=data_format,
+        dilations=dilations,
+        name=name,
+    )
+    return outputs
+
+
+class Conv2d_transpose(object):
+
+    def __init__(
+        self, strides, padding, data_format='NHWC', dilations=None, name=None, out_channel=None, k_size=None,
+        in_channels=None
+    ):
+        self.strides = strides
+        self.dilations = dilations
+        self.name = name
+        self.data_format, self.padding = preprocess_2d_format(data_format, padding)
+
+    def __call__(self, input, filters):
+        if self.data_format == 'NHWC':
+            h_axis, w_axis = 1, 2
+        else:
+            h_axis, w_axis = 2, 3
+
+        input_shape = input.shape.as_list()
+        filters_shape = filters.shape.as_list()
+        batch_size = input.shape[0]
+        input_h, input_w = input_shape[h_axis], input_shape[w_axis]
+        kernel_h, kernel_w = filters_shape[0], filters_shape[1]
+        output_channels = filters_shape[2]
+        dilations_h, dilations_w = 1, 1
+
+        if isinstance(self.strides, int):
+            strides_h = self.strides
+            strides_w = self.strides
+        else:
+            strides_list = list(self.strides)
+            if len(strides_list) != 4:
+                strides_h = strides_list[0]
+                strides_w = strides_list[1]
+            else:
+                strides_h = strides_list[h_axis]
+                strides_w = strides_list[w_axis]
+
+        if self.dilations is not None:
+            if isinstance(self.dilations, int):
+                dilations_h = self.dilations
+                dilations_w = self.dilations
+            else:
+                dilations_list = list(self.dilations)
+                if len(dilations_list) != 4:
+                    dilations_h = dilations_list[0]
+                    dilations_w = dilations_list[1]
+                else:
+                    dilations_h = dilations_list[h_axis]
+                    dilations_w = dilations_list[w_axis]
+
+        kernel_h = kernel_h + (kernel_h - 1) * (dilations_h - 1)
+        kernel_w = kernel_w + (kernel_w - 1) * (dilations_w - 1)
+
+        assert self.padding in {'SAME', 'VALID'}
+        if self.padding == 'VALID':
+            output_h = input_h * strides_h + max(kernel_h - strides_h, 0)
+            output_w = input_w * strides_w + max(kernel_w - strides_w, 0)
+        elif self.padding == 'SAME':
+            output_h = input_h * strides_h
+            output_w = input_w * strides_w
+
+        if self.data_format == 'NCHW':
+            out_shape = (batch_size, output_channels, output_h, output_w)
+        else:
+            out_shape = (batch_size, output_h, output_w, output_channels)
+
+        output_shape = tf.stack(out_shape)
+
+        outputs = tf.nn.conv2d_transpose(
+            input=input, filters=filters, output_shape=output_shape, strides=self.strides, padding=self.padding,
+            data_format=self.data_format, dilations=self.dilations, name=self.name
+        )
+        return outputs
+
+
+def conv2d_transpose(
+    input, filters, output_shape, strides, padding='SAME', data_format='NHWC', dilations=None, name=None
+):
+    """
+    The transpose of conv2d.
+
+    Parameters
+    ----------
+    input : tensor
+        A 4-D Tensor of type float and shape [batch, height, width, in_channels]
+        for NHWC data format or [batch, in_channels, height, width] for NCHW data format.
+    filters : tensor
+        A 4-D Tensor with the same type as input and shape [height, width,
+        output_channels, in_channels]. filter's in_channels dimension must match that of input.
+    output_shape : tensor
+        A 1-D Tensor representing the output shape of the deconvolution op.
+    strides : list
+        An int or list of ints that has length 1, 2 or 4. The stride of the sliding window for each dimension of input.
+        If a single value is given it is replicated in the H and W dimension.
+        By default the N and C dimensions are set to 1. The dimension order is determined by the value of data_format, see below for details.
+    padding : string
+        'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details.
+    data_format : string
+        'NHWC' and 'NCHW' are supported.
+    dilations : list
+        An int or list of ints that has length 1, 2 or 4, defaults to 1.
+    name : string
+        Optional name for the returned tensor.
+
+    Returns
+    -------
+    A Tensor with the same type as input.
+    """
+
+    data_format, padding = preprocess_2d_format(data_format, padding)
+    outputs = tf.nn.conv2d_transpose(
+        input=input,
+        filters=filters,
+        output_shape=output_shape,
+        strides=strides,
+        padding=padding,
+        data_format=data_format,
+        dilations=dilations,
+        name=name,
+    )
+    return outputs
+
+
+class Conv3d_transpose(object):
+
+    def __init__(
+        self, strides, padding, data_format='NDHWC', dilations=None, name=None, out_channel=None, k_size=None,
+        in_channels=None
+    ):
+        self.strides = strides
+        self.dilations = dilations
+        self.name = name
+        self.out_channel = out_channel
+        self.data_format, self.padding = preprocess_3d_format(data_format, padding)
+
+    def __call__(self, input, filters):
+        if self.data_format == 'NDHWC':
+            d_axis, h_axis, w_axis = 1, 2, 3
+        else:
+            d_axis, h_axis, w_axis = 2, 3, 4
+
+        input_shape = input.shape.as_list()
+        filters_shape = filters.shape.as_list()
+        batch_size = input_shape[0]
+        input_d, input_h, input_w = input_shape[d_axis], input_shape[h_axis], input_shape[w_axis]
+        kernel_d, kernel_h, kernel_w = filters_shape[0], filters_shape[1], filters_shape[2]
+        dilations_d, dilations_h, dilations_w = 1, 1, 1
+
+        if isinstance(self.strides, int):
+            # a single int stride is broadcast to the depth, height and width dimensions
+            strides_d = strides_h = strides_w = self.strides
+        else:
+            strides_list = list(self.strides)
+            if len(strides_list) != 5:
+                strides_d, strides_h, strides_w = \
+                    strides_list[0], \
+                    strides_list[1], \
+                    strides_list[2]
+            else:
+                strides_d, strides_h, strides_w = \
+                    strides_list[d_axis], \
+                    strides_list[h_axis], \
+                    strides_list[w_axis]
+
+        if self.dilations is not None:
+            if isinstance(self.dilations, int):
+                # likewise, broadcast a single int dilation to all three spatial dimensions
+                dilations_d = dilations_h = dilations_w = self.dilations
+            else:
+                dilations_list = list(self.dilations)
+                if len(dilations_list) != 5:
+                    dilations_d, dilations_h, dilations_w = \
+                        dilations_list[0], \
+                        dilations_list[1], \
+                        dilations_list[2]
+                else:
+                    dilations_d, dilations_h, dilations_w = \
+                        dilations_list[d_axis], \
+                        dilations_list[h_axis], \
+                        dilations_list[w_axis]
+
+        assert self.padding in {'VALID', 'SAME'}
+
+        kernel_d = kernel_d + (kernel_d - 1) * (dilations_d - 1)
+        kernel_h = kernel_h + (kernel_h - 1) * (dilations_h - 1)
+        kernel_w = kernel_w + (kernel_w - 1) * (dilations_w - 1)
+
+        if self.padding == 'VALID':
+            output_d = input_d * strides_d + max(kernel_d - strides_d, 0)
+            output_h = input_h * strides_h + max(kernel_h - strides_h, 0)
+            output_w = input_w * strides_w + max(kernel_w - strides_w, 0)
+        elif self.padding == 'SAME':
+            output_d = input_d * strides_d
+            output_h = input_h * strides_h
+            output_w = input_w * strides_w
+
+        if self.data_format == 'NDHWC':
+            output_shape = (batch_size, output_d, output_h, output_w, self.out_channel)
+        else:
+            output_shape = (batch_size, self.out_channel, output_d, output_h, output_w)
+
+        output_shape = tf.stack(output_shape)
+        outputs = tf.nn.conv3d_transpose(
+            input=input, filters=filters, output_shape=output_shape, strides=self.strides, padding=self.padding,
+            data_format=self.data_format, dilations=self.dilations, name=self.name
+        )
+
+        return outputs
+
+
+def conv3d_transpose(
+    input, filters, output_shape, strides, padding='SAME', data_format='NDHWC', dilations=None, name=None
+):
+    """
+    The transpose of conv3d.
+
+    Parameters
+    ----------
+    input : tensor
+        A 5-D Tensor of type float and shape [batch, depth, height, width, in_channels]
+        for NDHWC data format or [batch, in_channels, depth, height, width] for NCDHW data format.
+    filters : tensor
+        A 5-D Tensor with the same type as value and shape [depth, height, width, output_channels, in_channels].
+        filter's in_channels dimension must match that of value.
+    output_shape : tensor
+        A 1-D Tensor representing the output shape of the deconvolution op.
+    strides : list
+        An int or list of ints that has length 1, 3 or 5.
+    padding : string
+        'VALID' or 'SAME'. The padding algorithm. See the "returns" section of tf.ops.convolution for details.
+    data_format : string
+        'NDHWC' and 'NCDHW' are supported.
+    dilations : list of ints
+        An int or list of ints that has length 1, 3 or 5, defaults to 1.
+    name : string
+        Optional name for the returned tensor.
+
+    Returns
+    -------
+    A Tensor with the same type as value.
+    """
+
+    data_format, padding = preprocess_3d_format(data_format, padding)
+    outputs = tf.nn.conv3d_transpose(
+        input=input, filters=filters, output_shape=output_shape, strides=strides, padding=padding,
+        data_format=data_format, dilations=dilations, name=name
+    )
+    return outputs
+
+
+def depthwise_conv2d(input, filters, strides, padding='SAME', data_format='NHWC', dilations=None, name=None):
+    """
+    Depthwise 2-D convolution.
+
+    Parameters
+    ----------
+    input : tensor
+        4-D with shape according to data_format.
+    filters : tensor
+        4-D with shape [filter_height, filter_width, in_channels, channel_multiplier].
+    strides : tuple
+        1-D of size 4. The stride of the sliding window for each dimension of input.
+    padding : string
+        'VALID' or 'SAME'
+    data_format : string
+        "NHWC" (default) or "NCHW".
+    dilations : tuple
+        The dilation rate in which we sample input values across the height and width dimensions in atrous convolution.
+        If it is greater than 1, then all values of strides must be 1.
+    name : string
+        A name for this operation (optional).
+
+    Returns
+    -------
+    A 4-D Tensor with shape according to data_format.
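+
+    Examples
+    --------
+    A minimal sketch (TensorFlow backend assumed); a dilated depthwise kernel
+    keeps the spatial size under 'SAME' padding, and strides must stay 1
+    whenever dilations > 1:
+
+    >>> x = tf.random.uniform([1, 32, 32, 4])
+    >>> flt = tf.random.uniform([3, 3, 4, 1])
+    >>> y = depthwise_conv2d(x, flt, strides=(1, 1, 1, 1), padding='SAME', dilations=(2, 2))
+    >>> # y.shape == (1, 32, 32, 4)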
+    """
+
+    data_format, padding = preprocess_2d_format(data_format, padding)
+    outputs = tf.nn.depthwise_conv2d(
+        input=input,
+        filter=filters,
+        strides=strides,
+        padding=padding,
+        data_format=data_format,
+        dilations=dilations,
+        name=name,
+    )
+    return outputs
+
+
+def _to_channel_first_bias(b):
+    """Reshape [c] to [c, 1, 1]."""
+    channel_size = int(b.shape[0])
+    new_shape = (channel_size, 1, 1)
+    return tf.reshape(b, new_shape)
+
+
+def _bias_scale(x, b, data_format):
+    """The multiplication counterpart of tf.nn.bias_add."""
+    if data_format == 'NHWC':
+        return x * b
+    elif data_format == 'NCHW':
+        return x * _to_channel_first_bias(b)
+    else:
+        raise ValueError('invalid data_format: %s' % data_format)
+
+
+def _bias_add(x, b, data_format):
+    """Alternative implementation of tf.nn.bias_add which is compatible with tensorRT."""
+    if data_format == 'NHWC':
+        return tf.add(x, b)
+    elif data_format == 'NCHW':
+        return tf.add(x, _to_channel_first_bias(b))
+    else:
+        raise ValueError('invalid data_format: %s' % data_format)
+
+
+def batch_normalization(x, mean, variance, offset, scale, variance_epsilon, data_format, name=None):
+    """Data format aware version of tf.nn.batch_normalization."""
+    if data_format == 'channels_last':
+        mean = tf.reshape(mean, [1] * (len(x.shape) - 1) + [-1])
+        variance = tf.reshape(variance, [1] * (len(x.shape) - 1) + [-1])
+        offset = tf.reshape(offset, [1] * (len(x.shape) - 1) + [-1])
+        scale = tf.reshape(scale, [1] * (len(x.shape) - 1) + [-1])
+    elif data_format == 'channels_first':
+        mean = tf.reshape(mean, [1] + [-1] + [1] * (len(x.shape) - 2))
+        variance = tf.reshape(variance, [1] + [-1] + [1] * (len(x.shape) - 2))
+        offset = tf.reshape(offset, [1] + [-1] + [1] * (len(x.shape) - 2))
+        scale = tf.reshape(scale, [1] + [-1] + [1] * (len(x.shape) - 2))
+    else:
+        raise ValueError('invalid data_format: %s' % data_format)
+
+    with ops.name_scope(name, 'batchnorm', [x, mean, variance, scale, offset]):
+        inv = math_ops.rsqrt(variance + variance_epsilon)
+        if scale is not None:
+            inv *= scale
+
+        a = math_ops.cast(inv, x.dtype)
+        b = math_ops.cast(offset - mean * inv if offset is not None else -mean * inv, x.dtype)
+        # Return a * x + b with customized data_format.
+        # Currently TF doesn't have bias_scale, and tensorRT has a bug in converting tf.nn.bias_add.
+        # So we reimplemented them to make the model work with tensorRT.
+        # See https://github.com/tensorlayer/openpose-plus/issues/75 for more details.
+        # df = {'channels_first': 'NCHW', 'channels_last': 'NHWC'}
+        # return _bias_add(_bias_scale(x, a, df[data_format]), b, df[data_format])
+        return a * x + b
+
+
+class BatchNorm(object):
+    """
+    The :class:`BatchNorm` is a batch normalization layer for both fully-connected and convolution outputs.
+    See ``tf.nn.batch_normalization`` and ``tf.nn.moments``.
+
+    Parameters
+    ----------
+    decay : float
+        A decay factor for `ExponentialMovingAverage`.
+        A large value is suggested for a large dataset.
+    epsilon : float
+        Epsilon.
+    act : activation function
+        The activation function of this layer.
+    is_train : boolean
+        Is being used for training or inference.
+    beta_init : initializer or None
+        The initializer for initializing beta, if None, skip beta.
+        Usually you should not skip beta unless you know what you are doing.
+    gamma_init : initializer or None
+        The initializer for initializing gamma, if None, skip gamma.
+        When the batch normalization layer is used instead of 'biases', or the next layer is linear, this can be
+        disabled since the scaling can be done by the next layer. see `Inception-ResNet-v2 `__
+    moving_mean_init : initializer or None
+        The initializer for initializing moving mean, if None, skip moving mean.
+    moving_var_init : initializer or None
+        The initializer for initializing moving var, if None, skip moving var.
+    num_features : int
+        Number of features for input tensor. Useful to build layer if using BatchNorm1d, BatchNorm2d or BatchNorm3d,
+        but should be left as None if using BatchNorm. Default None.
+    data_format : str
+        Either 'channels_last' (default) or 'channels_first'.
+    name : None or str
+        A unique layer name.
+
+    Examples
+    ---------
+    With TensorLayer
+
+    >>> net = tl.layers.Input([None, 50, 50, 32], name='input')
+    >>> net = tl.layers.BatchNorm()(net)
+
+    Notes
+    -----
+    The :class:`BatchNorm` is universally suitable for 3D/4D/5D input in static model, but should not be used
+    in dynamic model where layer is built upon class initialization. So the argument 'num_features' should only be used
+    for subclasses :class:`BatchNorm1d`, :class:`BatchNorm2d` and :class:`BatchNorm3d`. All the three subclasses are
+    suitable under all kinds of conditions.
+
+    References
+    ----------
+    - `Source `__
+    - `stackoverflow `__
+
+    """
+
+    def __init__(
+        self, decay=0.9, epsilon=0.00001, beta=None, gamma=None, moving_mean=None, moving_var=None, num_features=None,
+        data_format='channels_last', is_train=False
+    ):
+        self.decay = decay
+        self.epsilon = epsilon
+        self.data_format = data_format
+        self.beta = beta
+        self.gamma = gamma
+        self.moving_mean = moving_mean
+        self.moving_var = moving_var
+        self.num_features = num_features
+        self.is_train = is_train
+        self.axes = None
+
+        if self.decay < 0.0 or 1.0 < self.decay:
+            raise ValueError("decay should be between 0 and 1")
+
+    def _get_param_shape(self, inputs_shape):
+        if self.data_format == 'channels_last':
+            axis = -1
+        elif self.data_format == 'channels_first':
+            axis = 1
+        else:
+            raise ValueError('data_format should be either %s or %s' % ('channels_last', 'channels_first'))
+
+        channels = inputs_shape[axis]
+        params_shape = [channels]
+
+        return params_shape
+
+    def _check_input_shape(self, inputs):
+        if inputs.ndim <= 1:
+            raise ValueError('expected input at least 2D, but got {}D input'.format(inputs.ndim))
+
+    def __call__(self, inputs):
+        self._check_input_shape(inputs)
+        self.channel_axis = len(inputs.shape) - 1 if self.data_format == 'channels_last' else 1
+        if self.axes is None:
+            self.axes = [i for i in range(len(inputs.shape)) if i != self.channel_axis]
+
+        mean, var = tf.nn.moments(inputs, self.axes, keepdims=False)
+        if self.is_train:
+            # update moving_mean and moving_var
+            self.moving_mean = moving_averages.assign_moving_average(
+                self.moving_mean, mean, self.decay, zero_debias=False
+            )
+            self.moving_var = moving_averages.assign_moving_average(self.moving_var, var, self.decay, zero_debias=False)
+            outputs = batch_normalization(inputs, mean, var, self.beta, self.gamma, self.epsilon, self.data_format)
+        else:
+            outputs = batch_normalization(
+                inputs, self.moving_mean, self.moving_var, self.beta, self.gamma, self.epsilon, self.data_format
+            )
+
+        return outputs
+
+
+class GroupConv2D(object):
+
+    def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, groups):
+        self.data_format, self.padding = preprocess_2d_format(data_format, padding)
+        self.strides = strides
+        self.dilations = dilations
+        
self.groups = groups + if self.data_format == 'NHWC': + self.channels_axis = 3 + else: + self.channels_axis = 1 + + def __call__(self, input, filters): + + if self.groups == 1: + outputs = tf.nn.conv2d( + input=input, + filters=filters, + strides=self.strides, + padding=self.padding, + data_format=self.data_format, + dilations=self.dilations, + ) + else: + inputgroups = tf.split(input, num_or_size_splits=self.groups, axis=self.channels_axis) + weightsgroups = tf.split(filters, num_or_size_splits=self.groups, axis=self.channels_axis) + convgroups = [] + for i, k in zip(inputgroups, weightsgroups): + convgroups.append( + tf.nn.conv2d( + input=i, + filters=k, + strides=self.strides, + padding=self.padding, + data_format=self.data_format, + dilations=self.dilations, + ) + ) + outputs = tf.concat(axis=self.channels_axis, values=convgroups) + + return outputs + + +class SeparableConv1D(object): + + def __init__(self, stride, padding, data_format, dilations, out_channel, k_size, in_channel, depth_multiplier): + self.data_format, self.padding = preprocess_1d_format(data_format, padding) + + if self.data_format == 'NWC': + self.spatial_start_dim = 1 + self.strides = (1, stride, stride, 1) + self.data_format = 'NHWC' + else: + self.spatial_start_dim = 2 + self.strides = (1, 1, stride, stride) + self.data_format = 'NCHW' + self.dilation_rate = (1, dilations) + + def __call__(self, inputs, depthwise_filters, pointwise_filters): + inputs = tf.expand_dims(inputs, axis=self.spatial_start_dim) + depthwise_filters = tf.expand_dims(depthwise_filters, 0) + pointwise_filters = tf.expand_dims(pointwise_filters, 0) + + outputs = tf.nn.separable_conv2d( + inputs, depthwise_filters, pointwise_filters, strides=self.strides, padding=self.padding, + dilations=self.dilation_rate, data_format=self.data_format + ) + + outputs = tf.squeeze(outputs, axis=self.spatial_start_dim) + + return outputs + + +class SeparableConv2D(object): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, in_channel, depth_multiplier): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.strides = strides + self.dilations = (dilations[2], dilations[2]) + + def __call__(self, inputs, depthwise_filters, pointwise_filters): + + outputs = tf.nn.separable_conv2d( + inputs, depthwise_filters, pointwise_filters, strides=self.strides, padding=self.padding, + dilations=self.dilations, data_format=self.data_format + ) + + return outputs + + +class AdaptiveMeanPool1D(object): + + def __init__(self, output_size, data_format): + self.data_format, _ = preprocess_1d_format(data_format, None) + self.output_size = output_size + + def __call__(self, input): + + if self.data_format == 'NWC': + n, w, c = input.shape + else: + n, c, w = input.shape + + stride = floor(w / self.output_size) + kernel = w - (self.output_size - 1) * stride + output = tf.nn.avg_pool1d(input, ksize=kernel, strides=stride, data_format=self.data_format, padding='VALID') + + return output + + +class AdaptiveMeanPool2D(object): + + def __init__(self, output_size, data_format): + self.data_format, _ = preprocess_2d_format(data_format, None) + self.output_size = output_size + + def __call__(self, inputs): + + if self.data_format == 'NHWC': + n, h, w, c = inputs.shape + else: + n, c, h, w = inputs.shape + + out_h, out_w = self.output_size + stride_h = floor(h / out_h) + kernel_h = h - (out_h - 1) * stride_h + stride_w = floor(w / out_w) + kernel_w = w - (out_w - 1) * stride_w + + outputs = tf.nn.avg_pool2d( + inputs, 
ksize=(kernel_h, kernel_w), strides=(stride_h, stride_w), data_format=self.data_format, + padding='VALID' + ) + + return outputs + + +class AdaptiveMeanPool3D(object): + + def __init__(self, output_size, data_format): + self.data_format, _ = preprocess_3d_format(data_format, None) + self.output_size = output_size + + def __call__(self, inputs): + + if self.data_format == 'NDHWC': + n, d, h, w, c = inputs.shape + else: + n, c, d, h, w = inputs.shape + + out_d, out_h, out_w = self.output_size + stride_d = floor(d / out_d) + kernel_d = d - (out_d - 1) * stride_d + stride_h = floor(h / out_h) + kernel_h = h - (out_h - 1) * stride_h + stride_w = floor(w / out_w) + kernel_w = w - (out_w - 1) * stride_w + + outputs = tf.nn.avg_pool3d( + inputs, ksize=(kernel_d, kernel_h, kernel_w), strides=(stride_d, stride_h, stride_w), + data_format=self.data_format, padding='VALID' + ) + + return outputs + + +class AdaptiveMaxPool1D(object): + + def __init__(self, output_size, data_format): + self.data_format, _ = preprocess_1d_format(data_format, None) + self.output_size = output_size + + def __call__(self, input): + + if self.data_format == 'NWC': + n, w, c = input.shape + else: + n, c, w = input.shape + + stride = floor(w / self.output_size) + kernel = w - (self.output_size - 1) * stride + output = tf.nn.max_pool1d(input, ksize=kernel, strides=stride, data_format=self.data_format, padding='VALID') + + return output + + +class AdaptiveMaxPool2D(object): + + def __init__(self, output_size, data_format): + self.data_format, _ = preprocess_2d_format(data_format, None) + self.output_size = output_size + + def __call__(self, inputs): + + if self.data_format == 'NHWC': + n, h, w, c = inputs.shape + else: + n, c, h, w = inputs.shape + + out_h, out_w = self.output_size + stride_h = floor(h / out_h) + kernel_h = h - (out_h - 1) * stride_h + stride_w = floor(w / out_w) + kernel_w = w - (out_w - 1) * stride_w + + outputs = tf.nn.max_pool2d( + inputs, ksize=(kernel_h, kernel_w), strides=(stride_h, stride_w), data_format=self.data_format, + padding='VALID' + ) + + return outputs + + +class AdaptiveMaxPool3D(object): + + def __init__(self, output_size, data_format): + self.data_format, _ = preprocess_3d_format(data_format, None) + self.output_size = output_size + + def __call__(self, inputs): + + if self.data_format == 'NDHWC': + n, d, h, w, c = inputs.shape + else: + n, c, d, h, w = inputs.shape + + out_d, out_h, out_w = self.output_size + stride_d = floor(d / out_d) + kernel_d = d - (out_d - 1) * stride_d + stride_h = floor(h / out_h) + kernel_h = h - (out_h - 1) * stride_h + stride_w = floor(w / out_w) + kernel_w = w - (out_w - 1) * stride_w + + outputs = tf.nn.max_pool3d( + inputs, ksize=(kernel_d, kernel_h, kernel_w), strides=(stride_d, stride_h, stride_w), + data_format=self.data_format, padding='VALID' + ) + + return outputs + + +class BinaryConv2D(object): + + def __init__(self, strides, padding, data_format, dilations, out_channel, k_size, in_channel): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.strides = strides + self.dilations = dilations + + # @tf.RegisterGradient("TL_Sign_QuantizeGrad") + # def _quantize_grad(op, grad): + # """Clip and binarize tensor using the straight through estimator (STE) for the gradient.""" + # return tf.clip_by_value(grad, -1, 1) + + def quantize(self, x): + # ref: https://github.com/AngusG/tensorflow-xnor-bnn/blob/master/models/binary_net.py#L70 + # https://github.com/itayhubara/BinaryNet.tf/blob/master/nnUtils.py + with 
tf.compat.v1.get_default_graph().gradient_override_map({"Sign": "TL_Sign_QuantizeGrad"}): + return tf.sign(x) + + def __call__(self, inputs, filters): + + filters = self.quantize(filters) + + outputs = tf.nn.conv2d( + input=inputs, filters=filters, strides=self.strides, padding=self.padding, data_format=self.data_format, + dilations=self.dilations + ) + + return outputs + + +class DorefaConv2D(object): + + def __init__(self, bitW, bitA, strides, padding, data_format, dilations, out_channel, k_size, in_channel): + self.data_format, self.padding = preprocess_2d_format(data_format, padding) + self.strides = strides + self.dilations = dilations + self.bitW = bitW + self.bitA = bitA + + def _quantize_dorefa(self, x, k): + G = tf.compat.v1.get_default_graph() + n = float(2**k - 1) + with G.gradient_override_map({"Round": "Identity"}): + return tf.round(x * n) / n + + def cabs(self, x): + return tf.minimum(1.0, tf.abs(x), name='cabs') + + def quantize_active(self, x, bitA): + if bitA == 32: + return x + return self._quantize_dorefa(x, bitA) + + def quantize_weight(self, x, bitW, force_quantization=False): + + G = tf.compat.v1.get_default_graph() + if bitW == 32 and not force_quantization: + return x + if bitW == 1: # BWN + with G.gradient_override_map({"Sign": "Identity"}): + E = tf.stop_gradient(tf.reduce_mean(input_tensor=tf.abs(x))) + return tf.sign(x / E) * E + x = tf.clip_by_value( + x * 0.5 + 0.5, 0.0, 1.0 + ) # it seems as though most weights are within -1 to 1 region anyways + return 2 * self._quantize_dorefa(x, bitW) - 1 + + def __call__(self, inputs, filters): + + inputs = self.quantize_active(self.cabs(inputs), self.bitA) + + filters = self.quantize_weight(filters, self.bitW) + + outputs = tf.nn.conv2d( + input=inputs, + filters=filters, + strides=self.strides, + padding=self.padding, + data_format=self.data_format, + dilations=self.dilations, + ) + + return outputs diff --git a/tensorlayer/cost/__init__.py b/tensorlayer/cost/__init__.py new file mode 100644 index 000000000..9a7cbdd89 --- /dev/null +++ b/tensorlayer/cost/__init__.py @@ -0,0 +1,13 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from tensorlayer.backend import BACKEND + +if BACKEND == 'tensorflow': + from .tensorflow_cost import * +elif BACKEND == 'mindspore': + from .mindspore_cost import * +elif BACKEND == 'paddle': + from .paddle_cost import * +else: + raise NotImplementedError("This backend is not supported") diff --git a/tensorlayer/cost/mindspore_cost.py b/tensorlayer/cost/mindspore_cost.py new file mode 100644 index 000000000..d2d054b10 --- /dev/null +++ b/tensorlayer/cost/mindspore_cost.py @@ -0,0 +1,764 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from mindspore import nn +from mindspore.nn import Cell +import mindspore.ops as P + +__all__ = [ + 'softmax_cross_entropy_with_logits', + 'sigmoid_cross_entropy', + 'binary_cross_entropy', + 'mean_squared_error', + 'normalized_mean_square_error', + 'absolute_difference_error', + 'dice_coe', + 'dice_hard_coe', + 'iou_coe', + 'cross_entropy_seq', + 'cross_entropy_seq_with_mask', + 'cosine_similarity', + 'li_regularizer', + 'lo_regularizer', + 'maxnorm_regularizer', + 'maxnorm_o_regularizer', + 'maxnorm_i_regularizer', +] + + +softmax_cross_entropy_with_logits = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean') + + +def sigmoid_cross_entropy(output, target, name=None): + """Sigmoid cross-entropy operation, see ``tf.ops.sigmoid_cross_entropy_with_logits``. 
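+
+    In this MindSpore backend the loss is computed with
+    ``P.SigmoidCrossEntropyWithLogits`` and reduced to its scalar mean.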
+
+    Parameters
+    ----------
+    output : Tensor
+        A batch of distribution with shape: [batch_size, num of classes].
+    target : Tensor
+        A batch of index with shape: [batch_size, ].
+    name : string
+        Name of this loss.
+
+    """
+    outputs = P.ReduceMean()(P.SigmoidCrossEntropyWithLogits()(output, target))
+    return outputs
+
+
+def binary_cross_entropy(output, target, epsilon=1e-8, name='bce_loss'):
+    """Binary cross entropy operation.
+
+    Parameters
+    ----------
+    output : Tensor
+        Tensor with type of `float32` or `float64`.
+    target : Tensor
+        The target distribution, format the same with `output`.
+    epsilon : float
+        A small value to avoid the output being zero.
+    name : str
+        An optional name to attach to this function.
+
+    References
+    -----------
+    - `ericjang-DRAW `__
+
+    """
+
+    # return tf.reduce_mean(
+    #     tf.reduce_sum(
+    #         -(target * tf.math.log(output + epsilon) + (1. - target) * tf.math.log(1. - output + epsilon)), axis=1
+    #     ), name=name
+    # )
+    raise NotImplementedError("Not Implemented.")
+
+
+def mean_squared_error(output, target, is_mean=False, axis=-1, name="mean_squared_error"):
+    """Return the mean-square-error (L2) of two batches of data.
+
+    Parameters
+    ----------
+    output : Tensor
+        2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].
+    target : Tensor
+        The target distribution, format the same with `output`.
+    is_mean : boolean
+        Whether compute the mean or sum for each example.
+        - If True, compute the mean of the squared error for each example.
+        - If False, compute the sum (default).
+    axis : int or list of int
+        The dimensions to reduce.
+    name : str
+        An optional name to attach to this function.
+
+    References
+    ------------
+    - `Wiki Mean Squared Error `__
+
+    """
+    # with tf.name_scope(name):
+    #     if len(output.shape) == 2:  # [batch_size, n_feature]
+    #         axis = 1
+    #     elif len(output.shape) == 3:  # [batch_size, w, h]
+    #         axis = [1, 2]
+    #     elif len(output.shape) == 4:  # [batch_size, w, h, c]
+    #         axis = [1, 2, 3]
+    #     else:
+    #         raise Exception("Unknown dimension")
+
+    return nn.MSELoss(reduction='mean' if is_mean else 'sum')(output, target)
+
+
+def normalized_mean_square_error(output, target, axis=-1, name="normalized_mean_squared_error_loss"):
+    """Return the normalized mean-square-error of two distributions.
+
+    Parameters
+    ----------
+    output : Tensor
+        2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].
+    target : Tensor
+        The target distribution, format the same with `output`.
+    axis : int or list of int
+        The dimensions to reduce.
+    name : str
+        An optional name to attach to this function.
+
+    """
+    # with tf.name_scope("normalized_mean_squared_error_loss"):
+    #     nmse_a = tf.sqrt(tf.reduce_sum(tf.math.squared_difference(output, target), axis=axis))
+    #     nmse_b = tf.sqrt(tf.reduce_sum(tf.square(target), axis=axis))
+    #     nmse = tf.reduce_mean(nmse_a / nmse_b, name=name)
+    raise NotImplementedError("Not Implemented.")
+
+
+def absolute_difference_error(output, target, is_mean=False, axis=-1, name="absolute_difference_error_loss"):
+    """Return the absolute difference error (L1) of two batches of data.
+
+    Parameters
+    ----------
+    output : Tensor
+        2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].
+    target : Tensor
+        The target distribution, format the same with `output`.
+ is_mean : boolean + Whether compute the mean or sum for each example. + - If True, use ``tf.reduce_mean`` to compute the loss between one target and predict data. + - If False, use ``tf.reduce_sum`` (default). + axis : int or list of int + The dimensions to reduce. + name : str + An optional name to attach to this function. + + """ + + # if is_mean: + # loss = tf.reduce_mean(tf.reduce_mean(tf.abs(output - target), axis), name=name) + # else: + # loss = tf.reduce_mean(tf.reduce_sum(tf.abs(output - target), axis), name=name) + raise NotImplementedError("Not Implemented.") + + +def dice_coe(output, target, loss_type='jaccard', axis=(1, 2, 3), smooth=1e-5): + """Soft dice (Sørensen or Jaccard) coefficient for comparing the similarity + of two batch of data, usually be used for binary image segmentation + i.e. labels are binary. The coefficient between 0 to 1, 1 means totally match. + + Parameters + ----------- + output : Tensor + A distribution with shape: [batch_size, ....], (any dimensions). + target : Tensor + The target distribution, format the same with `output`. + loss_type : str + ``jaccard`` or ``sorensen``, default is ``jaccard``. + axis : tuple of int + All dimensions are reduced, default ``[1,2,3]``. + smooth : float + This small value will be added to the numerator and denominator. + - If both output and target are empty, it makes sure dice is 1. + - If either output or target are empty (all pixels are background), dice = ```smooth/(small_value + smooth)``, then if smooth is very small, dice close to 0 (even the image values lower than the threshold), so in this case, higher smooth can have a higher dice. + + Examples + --------- + >>> import tensorlayer as tl + >>> outputs = tl.act.pixel_wise_softmax(outputs) + >>> dice_loss = 1 - tl.cost.dice_coe(outputs, y_) + + References + ----------- + - `Wiki-Dice `__ + + """ + # inse = tf.reduce_sum(output * target, axis=axis) + # if loss_type == 'jaccard': + # l = tf.reduce_sum(output * output, axis=axis) + # r = tf.reduce_sum(target * target, axis=axis) + # elif loss_type == 'sorensen': + # l = tf.reduce_sum(output, axis=axis) + # r = tf.reduce_sum(target, axis=axis) + # else: + # raise Exception("Unknow loss_type") + # dice = (2. * inse + smooth) / (l + r + smooth) + # ## + # dice = tf.reduce_mean(dice, name='dice_coe') + raise NotImplementedError("Not Implemented.") + + +def dice_hard_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-5): + """Non-differentiable Sørensen–Dice coefficient for comparing the similarity + of two batch of data, usually be used for binary image segmentation i.e. labels are binary. + The coefficient between 0 to 1, 1 if totally match. + + Parameters + ----------- + output : tensor + A distribution with shape: [batch_size, ....], (any dimensions). + target : tensor + The target distribution, format the same with `output`. + threshold : float + The threshold value to be true. + axis : tuple of integer + All dimensions are reduced, default ``(1,2,3)``. + smooth : float + This small value will be added to the numerator and denominator, see ``dice_coe``. + + References + ----------- + - `Wiki-Dice `__ + + """ + # output = tf.cast(output > threshold, dtype=tf.float32) + # target = tf.cast(target > threshold, dtype=tf.float32) + # inse = tf.reduce_sum(tf.multiply(output, target), axis=axis) + # l = tf.reduce_sum(output, axis=axis) + # r = tf.reduce_sum(target, axis=axis) + # hard_dice = (2. 
* inse + smooth) / (l + r + smooth) + # ## + # hard_dice = tf.reduce_mean(hard_dice, name='hard_dice') + raise NotImplementedError("Not Implemented.") + + +def iou_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-5): + """Non-differentiable Intersection over Union (IoU) for comparing the + similarity of two batch of data, usually be used for evaluating binary image segmentation. + The coefficient between 0 to 1, and 1 means totally match. + + Parameters + ----------- + output : tensor + A batch of distribution with shape: [batch_size, ....], (any dimensions). + target : tensor + The target distribution, format the same with `output`. + threshold : float + The threshold value to be true. + axis : tuple of integer + All dimensions are reduced, default ``(1,2,3)``. + smooth : float + This small value will be added to the numerator and denominator, see ``dice_coe``. + + Notes + ------ + - IoU cannot be used as training loss, people usually use dice coefficient for training, IoU and hard-dice for evaluating. + + """ + # pre = tf.cast(output > threshold, dtype=tf.float32) + # truth = tf.cast(target > threshold, dtype=tf.float32) + # inse = tf.reduce_sum(tf.multiply(pre, truth), axis=axis) # AND + # union = tf.reduce_sum(tf.cast(tf.add(pre, truth) >= 1, dtype=tf.float32), axis=axis) # OR + # batch_iou = (inse + smooth) / (union + smooth) + # iou = tf.reduce_mean(batch_iou, name='iou_coe') + raise NotImplementedError("Not Implemented.") + + +def sequence_loss_by_example( + logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None +): + """Weighted cross-entropy loss for a sequence of logits (per example). see original tensorflow code : + + + Parameters + ---------- + logits: List + List of 2D Tensors of shape [batch_size x num_decoder_symbols]. + targets: List + List of 1D batch-sized int32 Tensors of the same length as logits. + weights: List + List of 1D batch-sized float-Tensors of the same length as logits. + average_across_timesteps: Boolean + If set, divide the returned cost by the total label weight. + softmax_loss_function: None or Function + Function (labels, logits) -> loss-batch to be used instead of the standard softmax (the default if this is None). + **Note that to avoid confusion, it is required for the function to accept named arguments.** + name: None or str + Optional name for this operation, default: "sequence_loss_by_example". + + Returns + ------- + 1D batch-sized float Tensor: The log-perplexity for each sequence. + + Raises + ------ + ValueError: If len(logits) is different from len(targets) or len(weights). + + """ + # if len(targets) != len(logits) or len(weights) != len(logits): + # raise ValueError( + # "Lengths of logits, weights, and targets must be the same " + # "%d, %d, %d." % (len(logits), len(weights), len(targets)) + # ) + # with ops.name_scope(name, "sequence_loss_by_example", logits + targets + weights): + # log_perp_list = [] + # for logit, target, weight in zip(logits, targets, weights): + # if softmax_loss_function is None: + # # TODO(irving,ebrevdo): This reshape is needed because + # # sequence_loss_by_example is called with scalars sometimes, which + # # violates our general scalar strictness policy. 
+ # target = array_ops.reshape(target, [-1]) + # crossent = nn_ops.sparse_softmax_cross_entropy_with_logits(labels=target, logits=logit) + # else: + # crossent = softmax_loss_function(labels=target, logits=logit) + # log_perp_list.append(crossent * weight) + # log_perps = math_ops.add_n(log_perp_list) + # if average_across_timesteps: + # total_size = math_ops.add_n(weights) + # total_size += 1e-12 # Just to avoid division by 0 for all-0 weights. + # log_perps /= total_size + raise NotImplementedError("Not Implemented.") + + +def cross_entropy_seq(logits, target_seqs, batch_size=None): + """Returns the expression of cross-entropy of two sequences, implement + softmax internally. Normally be used for fixed length RNN outputs, see `PTB example `__. + + Parameters + ---------- + logits : Tensor + 2D tensor with shape of `[batch_size * n_steps, n_classes]`. + target_seqs : Tensor + The target sequence, 2D tensor `[batch_size, n_steps]`, if the number of step is dynamic, please use ``tl.cost.cross_entropy_seq_with_mask`` instead. + batch_size : None or int. + Whether to divide the cost by batch size. + - If integer, the return cost will be divided by `batch_size`. + - If None (default), the return cost will not be divided by anything. + + Examples + -------- + >>> import tensorlayer as tl + >>> # see `PTB example `__.for more details + >>> # outputs shape : (batch_size * n_steps, n_classes) + >>> # targets shape : (batch_size, n_steps) + >>> cost = tl.cost.cross_entropy_seq(outputs, targets) + + """ + # sequence_loss_by_example_fn = sequence_loss_by_example + # + # loss = sequence_loss_by_example_fn( + # [logits], [tf.reshape(target_seqs, [-1])], [tf.ones_like(tf.reshape(target_seqs, [-1]), dtype=tf.float32)] + # ) + # # [tf.ones([batch_size * num_steps])]) + # cost = tf.reduce_sum(loss) # / batch_size + # if batch_size is not None: + # cost = cost / batch_size + raise NotImplementedError("Not Implemented.") + + +def cross_entropy_seq_with_mask(logits, target_seqs, input_mask, return_details=False, name=None): + """Returns the expression of cross-entropy of two sequences, implement + softmax internally. Normally be used for Dynamic RNN with Synced sequence input and output. + + Parameters + ----------- + logits : Tensor + 2D tensor with shape of [batch_size * ?, n_classes], `?` means dynamic IDs for each example. + - Can be get from `DynamicRNNLayer` by setting ``return_seq_2d`` to `True`. + target_seqs : Tensor + int of tensor, like word ID. [batch_size, ?], `?` means dynamic IDs for each example. + input_mask : Tensor + The mask to compute loss, it has the same size with `target_seqs`, normally 0 or 1. + return_details : boolean + Whether to return detailed losses. + - If False (default), only returns the loss. + - If True, returns the loss, losses, weights and targets (see source code). + + Examples + -------- + >>> import tensorlayer as tl + >>> import tensorflow as tf + >>> import numpy as np + >>> batch_size = 64 + >>> vocab_size = 10000 + >>> embedding_size = 256 + >>> ni = tl.layers.Input([batch_size, None], dtype=tf.int64) + >>> net = tl.layers.Embedding( + ... vocabulary_size = vocab_size, + ... embedding_size = embedding_size, + ... name = 'seq_embedding')(ni) + >>> net = tl.layers.RNN( + ... cell =tf.keras.layers.LSTMCell(units=embedding_size, dropout=0.1), + ... return_seq_2d = True, + ... 
name = 'dynamicrnn')(net) + >>> net = tl.layers.Dense(n_units=vocab_size, name="output")(net) + >>> model = tl.models.Model(inputs=ni, outputs=net) + >>> input_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64) + >>> target_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64) + >>> input_mask = np.random.randint(0, 2, size=(batch_size, 10), dtype=np.int64) + >>> outputs = model(input_seqs, is_train=True) + >>> loss = tl.cost.cross_entropy_seq_with_mask(outputs, target_seqs, input_mask) + + """ + # targets = tf.reshape(target_seqs, [-1]) # to one vector + # weights = tf.cast(tf.reshape(input_mask, [-1]), dtype=tf.float32) # to one vector like targets + # losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets, name=name) * weights + # # losses = tf.reduce_mean(tf.ops.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets, name=name)) # for TF1.0 and others + # + # loss = tf.divide( + # tf.reduce_sum(losses), # loss from mask. reduce_sum before element-wise mul with mask !! + # tf.reduce_sum(weights), + # name="seq_loss_with_mask" + # ) + # + # if return_details: + # return loss, losses, weights, targets + # else: + # return loss + raise NotImplementedError("Not Implemented.") + + +def cosine_similarity(v1, v2): + """Cosine similarity [-1, 1]. + + Parameters + ---------- + v1, v2 : Tensor + Tensor with the same shape [batch_size, n_feature]. + + References + ---------- + - `Wiki `__. + + """ + + # return tf.reduce_sum(tf.multiply(v1, v2), 1) / \ + # (tf.sqrt(tf.reduce_sum(tf.multiply(v1, v1), 1)) * + # tf.sqrt(tf.reduce_sum(tf.multiply(v2, v2), 1))) + raise NotImplementedError("Not Implemented.") + + +# Regularization Functions +def li_regularizer(scale, scope=None): + """Li regularization removes the neurons of previous layer. The `i` represents `inputs`. + Returns a function that can be used to apply group li regularization to weights. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + scope: str + An optional scope name for this function. + + Returns + -------- + A function with signature `li(weights, name=None)` that apply Li regularization. + + Raises + ------ + ValueError : if scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + # if isinstance(scale, numbers.Integral): + # raise ValueError('scale cannot be an integer: %s' % scale) + # if isinstance(scale, numbers.Real): + # if scale < 0.: + # raise ValueError('Setting a scale less than 0 on a regularizer: %g' % scale) + # if scale >= 1.: + # raise ValueError('Setting a scale greater than 1 on a regularizer: %g' % scale) + # if scale == 0.: + # logging.info('Scale of 0 disables regularizer.') + # return lambda _, name=None: None + # + # def li(weights): + # """Applies li regularization to weights.""" + # with tf.name_scope('li_regularizer') as scope: + # my_scale = ops.convert_to_tensor(scale, dtype=weights.dtype.base_dtype, name='scale') + # # if tf.__version__ <= '0.12': + # # standard_ops_fn = standard_ops.mul + # # else: + # standard_ops_fn = standard_ops.multiply + # return standard_ops_fn( + # my_scale, standard_ops.reduce_sum(standard_ops.sqrt(standard_ops.reduce_sum(tf.square(weights), 1))), + # name=scope + # ) + + raise NotImplementedError("Not Implemented.") + + +def lo_regularizer(scale): + """Lo regularization removes the neurons of current layer. 
The `o` represents `outputs` + Returns a function that can be used to apply group lo regularization to weights. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + + Returns + ------- + A function with signature `lo(weights, name=None)` that apply Lo regularization. + + Raises + ------ + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + # if isinstance(scale, numbers.Integral): + # raise ValueError('scale cannot be an integer: %s' % scale) + # + # if isinstance(scale, numbers.Real): + # if scale < 0.: + # raise ValueError('Setting a scale less than 0 on a regularizer: %g' % scale) + # if scale >= 1.: + # raise ValueError('Setting a scale greater than 1 on a regularizer: %g' % scale) + # if scale == 0.: + # logging.info('Scale of 0 disables regularizer.') + # return lambda _, name=None: None + # + # def lo(weights, name='lo_regularizer'): + # """Applies group column regularization to weights.""" + # with tf.name_scope(name) as scope: + # my_scale = ops.convert_to_tensor(scale, dtype=weights.dtype.base_dtype, name='scale') + # # if tf.__version__ <= '0.12': + # # standard_ops_fn = standard_ops.mul + # # else: + # standard_ops_fn = standard_ops.multiply + # return standard_ops_fn( + # my_scale, standard_ops.reduce_sum(standard_ops.sqrt(standard_ops.reduce_sum(tf.square(weights), 0))), + # name=scope + # ) + + raise NotImplementedError("Not Implemented.") + + +def maxnorm_regularizer(scale=1.0): + """Max-norm regularization returns a function that can be used to apply max-norm regularization to weights. + + More about max-norm, see `wiki-max norm `_. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + + Returns + --------- + A function with signature `mn(weights, name=None)` that apply Lo regularization. + + Raises + -------- + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + # if isinstance(scale, numbers.Integral): + # raise ValueError('scale cannot be an integer: %s' % scale) + # + # if isinstance(scale, numbers.Real): + # if scale < 0.: + # raise ValueError('Setting a scale less than 0 on a regularizer: %g' % scale) + # # if scale >= 1.: + # # raise ValueError('Setting a scale greater than 1 on a regularizer: %g' % + # # scale) + # if scale == 0.: + # logging.info('Scale of 0 disables regularizer.') + # return lambda _, name=None: None + # + # def mn(weights, name='max_regularizer'): + # """Applies max-norm regularization to weights.""" + # with tf.name_scope(name) as scope: + # my_scale = ops.convert_to_tensor(scale, dtype=weights.dtype.base_dtype, name='scale') + # # if tf.__version__ <= '0.12': + # # standard_ops_fn = standard_ops.mul + # # else: + # standard_ops_fn = standard_ops.multiply + # return standard_ops_fn(my_scale, standard_ops.reduce_max(standard_ops.abs(weights)), name=scope) + # + # return mn + raise NotImplementedError("Not Implemented.") + + +def maxnorm_o_regularizer(scale): + """Max-norm output regularization removes the neurons of current layer. + Returns a function that can be used to apply max-norm regularization to each column of weight matrix. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. 
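+
+    The returned function computes the penalty ``scale * sum_j max_i |W[i, j]|``
+    (the largest absolute weight in each output column, summed over columns),
+    matching the commented reference implementation below.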
+ + Returns + --------- + A function with signature `mn_o(weights, name=None)` that apply Lo regularization. + + Raises + --------- + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + # if isinstance(scale, numbers.Integral): + # raise ValueError('scale cannot be an integer: %s' % scale) + # + # if isinstance(scale, numbers.Real): + # if scale < 0.: + # raise ValueError('Setting a scale less than 0 on a regularizer: %g' % scale) + # # if scale >= 1.: + # # raise ValueError('Setting a scale greater than 1 on a regularizer: %g' % + # # scale) + # if scale == 0.: + # logging.info('Scale of 0 disables regularizer.') + # return lambda _, name=None: None + # + # def mn_o(weights, name='maxnorm_o_regularizer'): + # """Applies max-norm regularization to weights.""" + # with tf.name_scope(name) as scope: + # my_scale = ops.convert_to_tensor(scale, dtype=weights.dtype.base_dtype, name='scale') + # if tf.__version__ <= '0.12': + # standard_ops_fn = standard_ops.mul + # else: + # standard_ops_fn = standard_ops.multiply + # return standard_ops_fn( + # my_scale, standard_ops.reduce_sum(standard_ops.reduce_max(standard_ops.abs(weights), 0)), name=scope + # ) + # + # return mn_o + raise NotImplementedError("Not Implemented.") + + +def maxnorm_i_regularizer(scale): + """Max-norm input regularization removes the neurons of previous layer. + Returns a function that can be used to apply max-norm regularization to each row of weight matrix. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + + Returns + --------- + A function with signature `mn_i(weights, name=None)` that apply Lo regularization. + + Raises + --------- + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + # if isinstance(scale, numbers.Integral): + # raise ValueError('scale cannot be an integer: %s' % scale) + # + # if isinstance(scale, numbers.Real): + # if scale < 0.: + # raise ValueError('Setting a scale less than 0 on a regularizer: %g' % scale) + # # if scale >= 1.: + # # raise ValueError('Setting a scale greater than 1 on a regularizer: %g' % + # # scale) + # if scale == 0.: + # logging.info('Scale of 0 disables regularizer.') + # return lambda _, name=None: None + # + # def mn_i(weights, name='maxnorm_i_regularizer'): + # """Applies max-norm regularization to weights.""" + # with tf.name_scope(name) as scope: + # my_scale = ops.convert_to_tensor(scale, dtype=weights.dtype.base_dtype, name='scale') + # if tf.__version__ <= '0.12': + # standard_ops_fn = standard_ops.mul + # else: + # standard_ops_fn = standard_ops.multiply + # return standard_ops_fn( + # my_scale, standard_ops.reduce_sum(standard_ops.reduce_max(standard_ops.abs(weights), 1)), name=scope + # ) + # + # return mn_i + raise NotImplementedError("Not Implemented.") + + +def huber_loss( + output, target, is_mean=True, delta=1.0, dynamichuber=False, reverse=False, axis=-1, epsilon=0.00001, name=None +): + """Huber Loss operation, see ``https://en.wikipedia.org/wiki/Huber_loss`` . + Reverse Huber Loss operation, see ''https://statweb.stanford.edu/~owen/reports/hhu.pdf''. + Dynamic Reverse Huber Loss operation, see ''https://arxiv.org/pdf/1606.00373.pdf''. + + Parameters + ---------- + output : Tensor + A distribution with shape: [batch_size, ....], (any dimensions). + target : Tensor + The target distribution, format the same with `output`. 
+ is_mean : boolean + Whether compute the mean or sum for each example. + - If True, use ``tf.reduce_mean`` to compute the loss between one target and predict data (default). + - If False, use ``tf.reduce_sum``. + delta: float + The point where the huber loss function changes from a quadratic to linear. + dynamichuber: boolean + Whether compute the coefficient c for each batch. + - If True, c is 20% of the maximal per-batch error. + - If False, c is delta. + reverse: boolean + Whether compute the reverse huber loss. + axis : int or list of int + The dimensions to reduce. + epsilon: + Eplison. + name : string + Name of this loss. + + """ + # if reverse: + # if dynamichuber: + # huber_c = 0.2 * tf.reduce_max(tf.abs(output - target)) + # else: + # huber_c = delta + # if is_mean: + # loss = tf.reduce_mean( + # tf.where( + # tf.less_equal(tf.abs(output - target), huber_c), tf.abs(output - target), + # tf.multiply( + # tf.pow(output - target, 2.0) + tf.pow(huber_c, 2.0), + # tf.math.divide_no_nan(.5, huber_c + epsilon) + # ) + # ), name=name + # ) + # else: + # loss = tf.reduce_mean( + # tf.reduce_sum( + # tf.where( + # tf.less_equal(tf.abs(output - target), huber_c), tf.abs(output - target), + # tf.multiply( + # tf.pow(output - target, 2.0) + tf.pow(huber_c, 2.0), + # tf.math.divide_no_nan(.5, huber_c + epsilon) + # ) + # ), axis + # ), name=name + # ) + # elif is_mean: + # loss = tf.reduce_mean( + # tf.where( + # tf.less_equal(tf.abs(output - target), delta), 0.5 * tf.pow(output - target, 2), + # delta * (tf.abs(output - target) - 0.5 * delta) + # ), name=name + # ) + # else: + # loss = tf.reduce_mean( + # tf.reduce_sum( + # tf.where( + # tf.less_equal(tf.abs(output - target), delta), 0.5 * tf.pow(output - target, 2), + # delta * (tf.abs(output - target) - 0.5 * delta) + # ), axis + # ), name=name + # ) + # return loss + raise NotImplementedError("Not Implemented.") diff --git a/tensorlayer/cost/paddle_cost.py b/tensorlayer/cost/paddle_cost.py new file mode 100644 index 000000000..3464b8cf1 --- /dev/null +++ b/tensorlayer/cost/paddle_cost.py @@ -0,0 +1,603 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +import paddle.nn.functional as F +import paddle as pd + +__all__ = [ + 'softmax_cross_entropy_with_logits', + 'sigmoid_cross_entropy', + 'binary_cross_entropy', + 'mean_squared_error', + 'normalized_mean_square_error', + 'absolute_difference_error', + 'dice_coe', + 'dice_hard_coe', + 'iou_coe', + 'cross_entropy_seq', + 'cross_entropy_seq_with_mask', + 'cosine_similarity', + 'li_regularizer', + 'lo_regularizer', + 'maxnorm_regularizer', + 'maxnorm_o_regularizer', + 'maxnorm_i_regularizer', +] + + +def softmax_cross_entropy_with_logits(output, target): + """Softmax cross-entropy operation, returns the TensorFlow expression of cross-entropy for two distributions, + it implements softmax internally. See ``tf.ops.sparse_softmax_cross_entropy_with_logits``. + + Parameters + ---------- + output : Tensor + A batch of distribution with shape: [batch_size, num of classes]. + target : Tensor + A batch of index with shape: [batch_size, ]. + name : string + Name of this loss. + + Examples + -------- + >>> import tensorlayer as tl + >>> ce = tl.cost.softmax_cross_entropy_with_logits(y_logits, y_target_logits) + + References + ----------- + - About cross-entropy: ``__. + - The code is borrowed from: ``__. + + """ + + return F.cross_entropy(input=output, label=target) + + +def sigmoid_cross_entropy(output, target): + """Sigmoid cross-entropy operation, see ``tf.ops.sigmoid_cross_entropy_with_logits``. 
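+
+    When the label shape differs from ``output``, integer class labels are first
+    one-hot encoded to match ``output``; the loss is then averaged over the batch.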
+
+    Parameters
+    ----------
+    output : Tensor
+        A batch of distribution with shape: [batch_size, num of classes].
+    target : Tensor
+        A batch of index with shape: [batch_size, ].
+    name : string
+        Name of this loss.
+
+    """
+
+    if output.shape[-1] == target.shape[-1]:
+        label = target
+    else:
+        depth = output.shape[-1]
+        label = pd.fluid.layers.one_hot(target, depth=depth)
+    out = pd.fluid.layers.sigmoid_cross_entropy_with_logits(x=output, label=label)
+    out = pd.fluid.layers.reduce_mean(out)
+    return out
+
+
+def binary_cross_entropy(output, target, epsilon=1e-8):
+    """Binary cross entropy operation.
+
+    Parameters
+    ----------
+    output : Tensor
+        Tensor with type of `float32` or `float64`.
+    target : Tensor
+        The target distribution, format the same with `output`.
+    epsilon : float
+        A small value to avoid the output being zero.
+
+    References
+    -----------
+    - `ericjang-DRAW `__
+
+    """
+
+    if output.shape[-1] == target.shape[-1]:
+        pass
+    else:
+        depth = output.shape[-1]
+        target = pd.fluid.layers.one_hot(target, depth=depth)
+    out = pd.fluid.layers.reduce_sum(
+        -(target * pd.log(output + epsilon) + (1. - target) * pd.log(1. - output + epsilon))
+    )
+    return out
+
+
+def mean_squared_error(output, target, is_mean=False, axis=-1, name="mean_squared_error"):
+    """Return the mean-square-error (L2) of two batches of data.
+
+    Parameters
+    ----------
+    output : Tensor
+        2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].
+    target : Tensor
+        The target distribution, format the same with `output`.
+    is_mean : boolean
+        Whether compute the mean or sum for each example.
+        - If True, compute the mean of the squared error for each example.
+        - If False, compute the sum (default).
+    axis : int or list of int
+        The dimensions to reduce.
+    name : str
+        An optional name to attach to this function.
+
+    References
+    ------------
+    - `Wiki Mean Squared Error `__
+
+    """
+
+    if output.shape[-1] == target.shape[-1]:
+        pass
+    else:
+        depth = output.shape[-1]
+        target = pd.fluid.layers.one_hot(target, depth=depth)
+
+    if is_mean:
+        mse = F.mse_loss(input=output, label=target, reduction='mean')
+    else:
+        mse = F.mse_loss(input=output, label=target, reduction='sum')
+    return mse
+
+
+def normalized_mean_square_error(output, target, axis=-1, name="normalized_mean_squared_error_loss"):
+    """Return the normalized mean-square-error of two distributions.
+
+    Parameters
+    ----------
+    output : Tensor
+        2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].
+    target : Tensor
+        The target distribution, format the same with `output`.
+    axis : int or list of int
+        The dimensions to reduce.
+    name : str
+        An optional name to attach to this function.
+
+    """
+
+    if output.shape[-1] == target.shape[-1]:
+        pass
+    else:
+        depth = output.shape[-1]
+        target = pd.fluid.layers.one_hot(target, depth=depth)
+
+    nmse_a = pd.sqrt(pd.fluid.layers.reduce_sum(pd.fluid.layers.square_error_cost(output, target), dim=axis))
+    nmse_b = pd.sqrt(pd.fluid.layers.reduce_sum(pd.square(target), dim=axis))
+    nmse = pd.fluid.layers.reduce_mean(nmse_a / nmse_b)
+    return nmse
+
+
+def absolute_difference_error(output, target, is_mean=False, axis=-1, name="absolute_difference_error_loss"):
+    """Return the absolute difference error (L1) of two batches of data.
+ + Parameters + ---------- + output : Tensor + 2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel]. + target : Tensor + The target distribution, format the same with `output`. + is_mean : boolean + Whether compute the mean or sum for each example. + - If True, use ``tf.reduce_mean`` to compute the loss between one target and predict data. + - If False, use ``tf.reduce_sum`` (default). + axis : int or list of int + The dimensions to reduce. + name : str + An optional name to attach to this function. + + """ + + if is_mean: + loss = pd.fluid.layers.reduce_mean(pd.fluid.layers.reduce_mean(pd.abs(output - target), axis)) + else: + loss = pd.fluid.layers.reduce_mean(pd.fluid.layers.reduce_sum(pd.abs(output - target), axis)) + return loss + + +def dice_coe(output, target, loss_type='jaccard', axis=(1, 2, 3), smooth=1e-5): + """Soft dice (Sørensen or Jaccard) coefficient for comparing the similarity + of two batch of data, usually be used for binary image segmentation + i.e. labels are binary. The coefficient between 0 to 1, 1 means totally match. + + Parameters + ----------- + output : Tensor + A distribution with shape: [batch_size, ....], (any dimensions). + target : Tensor + The target distribution, format the same with `output`. + loss_type : str + ``jaccard`` or ``sorensen``, default is ``jaccard``. + axis : tuple of int + All dimensions are reduced, default ``[1,2,3]``. + smooth : float + This small value will be added to the numerator and denominator. + - If both output and target are empty, it makes sure dice is 1. + - If either output or target are empty (all pixels are background), dice = ```smooth/(small_value + smooth)``, then if smooth is very small, dice close to 0 (even the image values lower than the threshold), so in this case, higher smooth can have a higher dice. + + Examples + --------- + >>> import tensorlayer as tl + >>> outputs = tl.act.pixel_wise_softmax(outputs) + >>> dice_loss = 1 - tl.cost.dice_coe(outputs, y_) + + References + ----------- + - `Wiki-Dice `__ + + """ + + axis = list(axis) + inse = pd.fluid.layers.reduce_sum(output * target, dim=axis) + if loss_type == 'jaccard': + l = pd.fluid.layers.reduce_sum(output * output, dim=axis) + r = pd.fluid.layers.reduce_sum(target * target, dim=axis) + elif loss_type == 'sorensen': + l = pd.fluid.layers.reduce_sum(output, dim=axis) + r = pd.fluid.layers.reduce_sum(target, dim=axis) + else: + raise Exception("Unknow loss_type") + + dice = (2. * inse + smooth) / (l + r + smooth) + dice = pd.fluid.layers.reduce_mean(dice) + return dice + + +def dice_hard_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-5): + """Non-differentiable Sørensen–Dice coefficient for comparing the similarity + of two batch of data, usually be used for binary image segmentation i.e. labels are binary. + The coefficient between 0 to 1, 1 if totally match. + + Parameters + ----------- + output : tensor + A distribution with shape: [batch_size, ....], (any dimensions). + target : tensor + The target distribution, format the same with `output`. + threshold : float + The threshold value to be true. + axis : tuple of integer + All dimensions are reduced, default ``(1,2,3)``. + smooth : float + This small value will be added to the numerator and denominator, see ``dice_coe``. 
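+
+    Examples
+    ---------
+    >>> import tensorlayer as tl
+    >>> # `outputs` and `y_` as in ``dice_coe``; values are thresholded internally
+    >>> hard_dice = tl.cost.dice_hard_coe(outputs, y_)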
+ + References + ----------- + - `Wiki-Dice `__ + + """ + + output = pd.cast(output > threshold, dtype='float32') + target = pd.cast(target > threshold, dtype='float32') + inse = pd.fluid.layers.reduce_sum(pd.multiply(output, target), dim=list(axis)) + l = pd.fluid.layers.reduce_sum(output, dim=list(axis)) + r = pd.fluid.layers.reduce_sum(target, dim=list(axis)) + + hard_dice = (2. * inse + smooth) / (l + r + smooth) + ## + hard_dice = pd.fluid.layers.reduce_mean(hard_dice) + return hard_dice + + +def iou_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-5): + """Non-differentiable Intersection over Union (IoU) for comparing the + similarity of two batch of data, usually be used for evaluating binary image segmentation. + The coefficient between 0 to 1, and 1 means totally match. + + Parameters + ----------- + output : tensor + A batch of distribution with shape: [batch_size, ....], (any dimensions). + target : tensor + The target distribution, format the same with `output`. + threshold : float + The threshold value to be true. + axis : tuple of integer + All dimensions are reduced, default ``(1,2,3)``. + smooth : float + This small value will be added to the numerator and denominator, see ``dice_coe``. + + Notes + ------ + - IoU cannot be used as training loss, people usually use dice coefficient for training, IoU and hard-dice for evaluating. + + """ + + pre = pd.cast(output > threshold, dtype='float32') + truth = pd.cast(target > threshold, dtype='float32') + inse = pd.fluid.layers.reduce_sum(pd.multiply(pre, truth), dim=axis) # AND + union = pd.fluid.layers.reduce_sum(pd.cast(pd.add(pre, truth) >= 1, dtype='float32'), dim=axis) # OR + batch_iou = (inse + smooth) / (union + smooth) + iou = pd.fluid.layers.reduce_mean(batch_iou, name='iou_coe') + return iou + + +def sequence_loss_by_example( + logits, targets, weights, average_across_timesteps=True, softmax_loss_function=None, name=None +): + """Weighted cross-entropy loss for a sequence of logits (per example). see original tensorflow code : + + + Parameters + ---------- + logits: List + List of 2D Tensors of shape [batch_size x num_decoder_symbols]. + targets: List + List of 1D batch-sized int32 Tensors of the same length as logits. + weights: List + List of 1D batch-sized float-Tensors of the same length as logits. + average_across_timesteps: Boolean + If set, divide the returned cost by the total label weight. + softmax_loss_function: None or Function + Function (labels, logits) -> loss-batch to be used instead of the standard softmax (the default if this is None). + **Note that to avoid confusion, it is required for the function to accept named arguments.** + name: None or str + Optional name for this operation, default: "sequence_loss_by_example". + + Returns + ------- + 1D batch-sized float Tensor: The log-perplexity for each sequence. + + Raises + ------ + ValueError: If len(logits) is different from len(targets) or len(weights). + + """ + + raise NotImplementedError("Not Implemented.") + + +def cross_entropy_seq(logits, target_seqs, batch_size=None): + """Returns the expression of cross-entropy of two sequences, implement + softmax internally. Normally be used for fixed length RNN outputs, see `PTB example `__. + + Parameters + ---------- + logits : Tensor + 2D tensor with shape of `[batch_size * n_steps, n_classes]`. + target_seqs : Tensor + The target sequence, 2D tensor `[batch_size, n_steps]`, if the number of step is dynamic, please use ``tl.cost.cross_entropy_seq_with_mask`` instead. + batch_size : None or int. 
+ Whether to divide the cost by batch size. + - If integer, the return cost will be divided by `batch_size`. + - If None (default), the return cost will not be divided by anything. + + Examples + -------- + >>> import tensorlayer as tl + >>> # see `PTB example `__.for more details + >>> # outputs shape : (batch_size * n_steps, n_classes) + >>> # targets shape : (batch_size, n_steps) + >>> cost = tl.cost.cross_entropy_seq(outputs, targets) + + """ + + raise NotImplementedError("Not Implemented.") + + +def cross_entropy_seq_with_mask(logits, target_seqs, input_mask, return_details=False, name=None): + """Returns the expression of cross-entropy of two sequences, implement + softmax internally. Normally be used for Dynamic RNN with Synced sequence input and output. + + Parameters + ----------- + logits : Tensor + 2D tensor with shape of [batch_size * ?, n_classes], `?` means dynamic IDs for each example. + - Can be get from `DynamicRNNLayer` by setting ``return_seq_2d`` to `True`. + target_seqs : Tensor + int of tensor, like word ID. [batch_size, ?], `?` means dynamic IDs for each example. + input_mask : Tensor + The mask to compute loss, it has the same size with `target_seqs`, normally 0 or 1. + return_details : boolean + Whether to return detailed losses. + - If False (default), only returns the loss. + - If True, returns the loss, losses, weights and targets (see source code). + + Examples + -------- + >>> import tensorlayer as tl + >>> import tensorflow as tf + >>> import numpy as np + >>> batch_size = 64 + >>> vocab_size = 10000 + >>> embedding_size = 256 + >>> ni = tl.layers.Input([batch_size, None], dtype=tf.int64) + >>> net = tl.layers.Embedding( + ... vocabulary_size = vocab_size, + ... embedding_size = embedding_size, + ... name = 'seq_embedding')(ni) + >>> net = tl.layers.RNN( + ... cell =tf.keras.layers.LSTMCell(units=embedding_size, dropout=0.1), + ... return_seq_2d = True, + ... name = 'dynamicrnn')(net) + >>> net = tl.layers.Dense(n_units=vocab_size, name="output")(net) + >>> model = tl.models.Model(inputs=ni, outputs=net) + >>> input_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64) + >>> target_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64) + >>> input_mask = np.random.randint(0, 2, size=(batch_size, 10), dtype=np.int64) + >>> outputs = model(input_seqs, is_train=True) + >>> loss = tl.cost.cross_entropy_seq_with_mask(outputs, target_seqs, input_mask) + + """ + + raise NotImplementedError("Not Implemented.") + + +def cosine_similarity(v1, v2): + """Cosine similarity [-1, 1]. + + Parameters + ---------- + v1, v2 : Tensor + Tensor with the same shape [batch_size, n_feature]. + + References + ---------- + - `Wiki `__. + + """ + + return pd.fluid.layers.reduce_sum(pd.multiply(v1, v2), 1) / \ + (pd.sqrt(pd.fluid.layers.reduce_sum(pd.multiply(v1, v1), 1)) * + pd.sqrt(pd.fluid.layers.reduce_sum(pd.multiply(v2, v2), 1))) + + +# Regularization Functions +def li_regularizer(scale, scope=None): + """Li regularization removes the neurons of previous layer. The `i` represents `inputs`. + Returns a function that can be used to apply group li regularization to weights. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + scope: str + An optional scope name for this function. + + Returns + -------- + A function with signature `li(weights, name=None)` that apply Li regularization. 
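+
+    The penalty follows the group-L2 form of the reference implementation:
+    ``scale * sum_i sqrt(sum_j W[i, j]^2)``, i.e. the L2 norm of each input row,
+    summed over rows.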
+ + Raises + ------ + ValueError : if scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + + raise NotImplementedError("Not Implemented.") + + +def lo_regularizer(scale): + """Lo regularization removes the neurons of current layer. The `o` represents `outputs` + Returns a function that can be used to apply group lo regularization to weights. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + + Returns + ------- + A function with signature `lo(weights, name=None)` that apply Lo regularization. + + Raises + ------ + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + + raise NotImplementedError("Not Implemented.") + + +def maxnorm_regularizer(scale=1.0): + """Max-norm regularization returns a function that can be used to apply max-norm regularization to weights. + + More about max-norm, see `wiki-max norm `_. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + + Returns + --------- + A function with signature `mn(weights, name=None)` that apply Lo regularization. + + Raises + -------- + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + + raise NotImplementedError("Not Implemented.") + + +def maxnorm_o_regularizer(scale): + """Max-norm output regularization removes the neurons of current layer. + Returns a function that can be used to apply max-norm regularization to each column of weight matrix. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + + Returns + --------- + A function with signature `mn_o(weights, name=None)` that apply Lo regularization. + + Raises + --------- + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + + raise NotImplementedError("Not Implemented.") + + +def maxnorm_i_regularizer(scale): + """Max-norm input regularization removes the neurons of previous layer. + Returns a function that can be used to apply max-norm regularization to each row of weight matrix. + The implementation follows `TensorFlow contrib `__. + + Parameters + ---------- + scale : float + A scalar multiplier `Tensor`. 0.0 disables the regularizer. + + Returns + --------- + A function with signature `mn_i(weights, name=None)` that apply Lo regularization. + + Raises + --------- + ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float. + + """ + + raise NotImplementedError("Not Implemented.") + + +def huber_loss( + output, target, is_mean=True, delta=1.0, dynamichuber=False, reverse=False, axis=-1, epsilon=0.00001, name=None +): + """Huber Loss operation, see ``https://en.wikipedia.org/wiki/Huber_loss`` . + Reverse Huber Loss operation, see ''https://statweb.stanford.edu/~owen/reports/hhu.pdf''. + Dynamic Reverse Huber Loss operation, see ''https://arxiv.org/pdf/1606.00373.pdf''. + + Parameters + ---------- + output : Tensor + A distribution with shape: [batch_size, ....], (any dimensions). + target : Tensor + The target distribution, format the same with `output`. + is_mean : boolean + Whether compute the mean or sum for each example. + - If True, use ``tf.reduce_mean`` to compute the loss between one target and predict data (default). + - If False, use ``tf.reduce_sum``. 
+ delta: float + The point where the huber loss function changes from a quadratic to linear. + dynamichuber: boolean + Whether compute the coefficient c for each batch. + - If True, c is 20% of the maximal per-batch error. + - If False, c is delta. + reverse: boolean + Whether compute the reverse huber loss. + axis : int or list of int + The dimensions to reduce. + epsilon: + Eplison. + name : string + Name of this loss. + + """ + + raise NotImplementedError("Not Implemented.") diff --git a/tensorlayer/cost.py b/tensorlayer/cost/tensorflow_cost.py similarity index 97% rename from tensorlayer/cost.py rename to tensorlayer/cost/tensorflow_cost.py index 9ccf5eeca..1cab86baf 100644 --- a/tensorlayer/cost.py +++ b/tensorlayer/cost/tensorflow_cost.py @@ -10,7 +10,7 @@ from tensorlayer import logging __all__ = [ - 'cross_entropy', + 'softmax_cross_entropy_with_logits', 'sigmoid_cross_entropy', 'binary_cross_entropy', 'mean_squared_error', @@ -30,9 +30,9 @@ ] -def cross_entropy(output, target, name=None): +def softmax_cross_entropy_with_logits(output, target, name=None): """Softmax cross-entropy operation, returns the TensorFlow expression of cross-entropy for two distributions, - it implements softmax internally. See ``tf.nn.sparse_softmax_cross_entropy_with_logits``. + it implements softmax internally. See ``tf.ops.sparse_softmax_cross_entropy_with_logits``. Parameters ---------- @@ -46,7 +46,7 @@ def cross_entropy(output, target, name=None): Examples -------- >>> import tensorlayer as tl - >>> ce = tl.cost.cross_entropy(y_logits, y_target_logits, 'my_loss') + >>> ce = tl.cost.softmax_cross_entropy_with_logits(y_logits, y_target_logits, 'my_loss') References ----------- @@ -60,7 +60,7 @@ def cross_entropy(output, target, name=None): def sigmoid_cross_entropy(output, target, name=None): - """Sigmoid cross-entropy operation, see ``tf.nn.sigmoid_cross_entropy_with_logits``. + """Sigmoid cross-entropy operation, see ``tf.ops.sigmoid_cross_entropy_with_logits``. Parameters ---------- @@ -236,7 +236,7 @@ def dice_coe(output, target, loss_type='jaccard', axis=(1, 2, 3), smooth=1e-5): Examples --------- >>> import tensorlayer as tl - >>> outputs = tl.act.pixel_wise_softmax(outputs) + >>> outputs = tl.ops.softmax(outputs) >>> dice_loss = 1 - tl.cost.dice_coe(outputs, y_) References @@ -492,27 +492,28 @@ def cross_entropy_seq_with_mask(logits, target_seqs, input_mask, return_details= >>> vocab_size = 10000 >>> embedding_size = 256 >>> ni = tl.layers.Input([batch_size, None], dtype=tf.int64) - >>> net = tl.layers.Embedding( + >>> net_lits = [] + >>> net_list.append(tl.layers.Embedding( ... vocabulary_size = vocab_size, ... embedding_size = embedding_size, - ... name = 'seq_embedding')(ni) - >>> net = tl.layers.RNN( + ... name = 'seq_embedding')) + >>> net_list.append(tl.layers.RNN( ... cell =tf.keras.layers.LSTMCell(units=embedding_size, dropout=0.1), ... return_seq_2d = True, - ... name = 'dynamicrnn')(net) - >>> net = tl.layers.Dense(n_units=vocab_size, name="output")(net) - >>> model = tl.models.Model(inputs=ni, outputs=net) + ... 
name = 'dynamicrnn')) + >>> net_list.append(tl.layers.Dense(n_units=vocab_size, name="output")) + >>> model = tl.layers.SequentialLayer(net_list) >>> input_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64) >>> target_seqs = np.random.randint(0, 10, size=(batch_size, 10), dtype=np.int64) >>> input_mask = np.random.randint(0, 2, size=(batch_size, 10), dtype=np.int64) - >>> outputs = model(input_seqs, is_train=True) + >>> outputs = model(input_seqs) >>> loss = tl.cost.cross_entropy_seq_with_mask(outputs, target_seqs, input_mask) """ targets = tf.reshape(target_seqs, [-1]) # to one vector weights = tf.cast(tf.reshape(input_mask, [-1]), dtype=tf.float32) # to one vector like targets losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets, name=name) * weights - # losses = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets, name=name)) # for TF1.0 and others + # losses = tf.reduce_mean(tf.ops.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets, name=name)) # for TF1.0 and others loss = tf.divide( tf.reduce_sum(losses), # loss from mask. reduce_sum before element-wise mul with mask !! diff --git a/tensorlayer/dataflow/__init__.py b/tensorlayer/dataflow/__init__.py new file mode 100644 index 000000000..3eb12829c --- /dev/null +++ b/tensorlayer/dataflow/__init__.py @@ -0,0 +1,20 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function + +from tensorlayer.backend.ops.load_backend import BACKEND + +if BACKEND == 'tensorflow': + from .tensorflow_data import * + +elif BACKEND == 'mindspore': + from .mindspore_data import * + +elif BACKEND == 'paddle': + from .paddle_data import * + +elif BACKEND == 'dragon': + pass + +else: + raise NotImplementedError("This backend is not supported") diff --git a/tensorlayer/dataflow/mindspore_data.py b/tensorlayer/dataflow/mindspore_data.py new file mode 100644 index 000000000..9c12d87a7 --- /dev/null +++ b/tensorlayer/dataflow/mindspore_data.py @@ -0,0 +1,128 @@ +#! 
/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import mindspore.dataset as ds
+import mindspore as ms
+__all__ = [
+    'Batch',
+    'Concat',
+    'FromGenerator',
+    'FromSlices',
+    'Map',
+    'Repeat',
+    'Shuffle',
+    'Zip',
+    'Dataloader',
+    'Dataset',
+    'IterableDataset',
+]
+
+
+class Dataset(object):
+
+    def __init__(self):
+        pass
+
+    def __getitem__(self, idx):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__getitem__', self.__class__.__name__))
+
+    def __len__(self):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__len__', self.__class__.__name__))
+
+
+class IterableDataset(object):
+
+    def __init__(self):
+        pass
+
+    def __iter__(self):
+        raise NotImplementedError("'{}' not implemented in class " \
+                                  "{}".format('__iter__', self.__class__.__name__))
+
+
+def Batch(dataset, batch_size, drop_last=False):
+    '''Combine batch_size consecutive rows into batches.
+
+    Parameters
+    ----------
+    dataset :
+        A dataset.
+    batch_size : int
+        Sample number in a mini-batch.
+    drop_last : boolean
+        Whether to drop the last incomplete batch when the dataset size is not divisible by the batch size.
+
+    Returns
+    -------
+        A batched dataset.
+
+    '''
+    return dataset.batch(batch_size=batch_size, drop_remainder=drop_last)
+
+
+def Concat(datasets):
+
+    datasets = list(datasets)
+    dataset = ds.Dataset.concat(datasets)
+    return dataset
+
+
+def FromGenerator(generator, output_types, column_names):
+
+    output_types = list(output_types)
+    column_names = list(column_names)
+    return ds.GeneratorDataset(source=generator, column_names=column_names, column_types=output_types)
+
+
+def FromSlices(datas, column_names):
+
+    return ds.NumpySlicesDataset(data=datas, column_names=column_names)
+
+
+def Map(dataset, map_func, input_columns=None):
+    """ Maps map_func across the elements of this dataset.
+
+    Parameters
+    ----------
+    dataset : DataFlow
+        input DataFlow
+    map_func : function
+        A function mapping a dataset element to another dataset element.
+    input_columns : list
+        List of column names of the dataset to map.
+
+    Returns
+    -------
+        A mapped dataset.
+
+    """
+    return dataset.map(operations=map_func, input_columns=input_columns)
+
+
+def Repeat(dataset, count=None):
+
+    return dataset.repeat(count)
+
+
+def Shuffle(dataset, buffer_size):
+
+    return dataset.shuffle(buffer_size)
+
+
+def Zip(datasets):
+    '''Creates a dataset by zipping together the given datasets.
+
+    Parameters
+    ----------
+    datasets :
+        A tuple of datasets to be zipped together.
+
+    Returns
+    -------
+        A zipped dataset.
+
+    '''
+    datasets = tuple(datasets)
+    return ds.zip(datasets)
+
+
+def Dataloader(dataset, batch_size, shuffle=False, drop_last=False, shuffle_buffer_size=10000):
+
+    if shuffle:
+        dataset = Shuffle(dataset, buffer_size=shuffle_buffer_size)
+    dataset = Batch(dataset, batch_size=batch_size, drop_last=drop_last)
+
+    return dataset
diff --git a/tensorlayer/dataflow/paddle_data.py b/tensorlayer/dataflow/paddle_data.py
new file mode 100644
index 000000000..d442a8fd7
--- /dev/null
+++ b/tensorlayer/dataflow/paddle_data.py
@@ -0,0 +1,98 @@
+#!
/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import numpy as np
+import paddle
+from paddle.io import Dataset as dataset
+from paddle.io import IterableDataset as iterabledataset
+from paddle.io import DataLoader
+__all__ = [
+    'Batch',
+    'Concat',
+    'FromGenerator',
+    'FromSlices',
+    'Map',
+    'Repeat',
+    'Shuffle',
+    'Zip',
+    'Dataloader',
+    'Dataset',
+    'IterableDataset',
+]
+
+
+class Dataset(dataset):
+
+    def __init__(self):
+        pass
+
+    def __getitem__(self, idx):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__getitem__', self.__class__.__name__))
+
+    def __len__(self):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__len__', self.__class__.__name__))
+
+
+class IterableDataset(iterabledataset):
+
+    def __init__(self):
+        pass
+
+    def __iter__(self):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__iter__', self.__class__.__name__))
+
+    def __getitem__(self, idx):
+        raise RuntimeError("'{}' should not be called for IterableDataset " \
+                           "{}".format('__getitem__', self.__class__.__name__))
+
+    def __len__(self):
+        raise RuntimeError("'{}' should not be called for IterableDataset " \
+                           "{}".format('__len__', self.__class__.__name__))
+
+
+def FromGenerator(generator, output_types=None, column_names=None):
+
+    return generator
+
+
+def FromSlices(datas, column_names=None):
+
+    datas = list(datas)
+    return paddle.io.TensorDataset(datas)
+
+
+def Concat(datasets):
+
+    return paddle.io.ChainDataset(list(datasets))
+
+
+def Zip(datasets):
+
+    return paddle.io.ComposeDataset(list(datasets))
+
+
+def Dataloader(dataset, batch_size=None, shuffle=False, drop_last=False, shuffle_buffer_size=0):
+
+    return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last, return_list=True)
+
+
+def Batch(dataset, batch_size, drop_last=False):
+
+    raise NotImplementedError('This function is not implemented in the Paddle backend.')
+
+
+def Shuffle(dataset, buffer_size, seed=None):
+
+    raise NotImplementedError('This function is not implemented in the Paddle backend.')
+
+
+def Repeat(dataset, count=None):
+
+    raise NotImplementedError('This function is not implemented in the Paddle backend.')
+
+
+def Map(dataset, map_func, input_columns=None):
+
+    raise NotImplementedError('This function is not implemented in the Paddle backend.')
diff --git a/tensorlayer/dataflow/tensorflow_data.py b/tensorlayer/dataflow/tensorflow_data.py
new file mode 100644
index 000000000..312565078
--- /dev/null
+++ b/tensorlayer/dataflow/tensorflow_data.py
@@ -0,0 +1,358 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import tensorflow as tf
+import tensorlayer as tl
+import numpy as np
+__all__ = [
+    'Batch',
+    'Concat',
+    'FromGenerator',
+    'FromSlices',
+    'Map',
+    'Repeat',
+    'Shuffle',
+    'Zip',
+    'Dataloader',
+    'Dataset',
+    'IterableDataset',
+]
+
+
+class Dataset(object):
+    """An abstract class to encapsulate methods and behaviors of datasets.
+    All map-style datasets (datasets whose samples can be fetched by a given key) should be a subclass of 'tensorlayer.dataflow.Dataset'.
+    All subclasses should implement the following methods:
+    :code:`__getitem__`: get a sample from the dataset with a given index.
+    :code:`__len__`: return the number of samples in the dataset.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> from tensorlayer.dataflow import Dataset
+    >>> class mnistdataset(Dataset):
+    >>>     def __init__(self, data, label, transform):
+    >>>         self.data = data
+    >>>         self.label = label
+    >>>         self.transform = transform
+    >>>     def __getitem__(self, index):
+    >>>         data = self.data[index].astype('float32')
+    >>>         data = self.transform(data)
+    >>>         label = self.label[index].astype('int64')
+    >>>         return data, label
+    >>>     def __len__(self):
+    >>>         return len(self.data)
+    >>> train_dataset = mnistdataset(data=X_train, label=y_train, transform=transform)
+
+    """
+
+    def __init__(self):
+        pass
+
+    def __call__(self):
+
+        return self
+
+    def __getitem__(self, idx):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__getitem__', self.__class__.__name__))
+
+    def __len__(self):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__len__', self.__class__.__name__))
+
+
+class IterableDataset(object):
+    """An abstract class to encapsulate methods and behaviors of iterable datasets.
+    All iterable-style datasets (samples can only be fetched one by one, sequentially, like a Python iterator) should be a subclass of `tensorlayer.dataflow.IterableDataset`.
+    All subclasses should implement the following method:
+    :code:`__iter__`: yield samples sequentially.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> class mnistdataset(IterableDataset):
+    >>>     def __init__(self, data, label, transform):
+    >>>         self.data = data
+    >>>         self.label = label
+    >>>         self.transform = transform
+    >>>     def __iter__(self):
+    >>>         for i in range(len(self.data)):
+    >>>             data = self.data[i].astype('float32')
+    >>>             data = self.transform(data)
+    >>>             label = self.label[i].astype('int64')
+    >>>             yield data, label
+    >>> train_dataset = mnistdataset(data=X_train, label=y_train, transform=transform)
+
+    """
+
+    def __init__(self):
+        pass
+
+    def __call__(self):
+
+        return self
+
+    def __iter__(self):
+        raise NotImplementedError("'{}' not implemented in class "\
+                                  "{}".format('__iter__', self.__class__.__name__))
+
+
+def FromGenerator(generator, output_types, column_names=None):
+    """Creates a `Dataset` whose elements are generated by `generator`.
+
+    Parameters
+    ----------
+    generator : Callable or Iterable
+        A generator callable object or an iterable Python object.
+    output_types : list or tuple
+        Set the output data types. This parameter is not supported in the MindSpore and Paddle backends.
+    column_names : list or tuple
+        Column names of the dataset. This parameter is not supported in the TensorFlow and Paddle backends.
+
+    Returns
+    -------
+    Dataset
+        A Dataset.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> train_dataset = mnistdataset(data=X_train, label=y_train, transform=transform)
+    >>> train_dataset = tl.dataflow.FromGenerator(train_dataset, output_types=[tl.float32, tl.int64], column_names=['data', 'label'])
+
+    """
+    output_types = tuple(output_types)
+    return tf.data.Dataset.from_generator(generator, output_types=output_types)
+
+
+def Batch(dataset, batch_size, drop_last=False):
+    """Combine batch_size consecutive rows into batches. This function is not implemented in the Paddle backend.
+
+    Parameters
+    ----------
+    dataset :
+        A dataset.
+    batch_size : int
+        Sample number in a mini-batch.
+    drop_last : boolean
+        Whether to drop the last incomplete batch when the dataset size is not divisible by the batch size.
+
+    Returns
+    -------
+    Dataset
+        A batched dataset.
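+
+    Examples
+    ----------
+    With TensorLayer (a minimal usage sketch; assumes `dataset` was created by ``tl.dataflow.FromGenerator`` as in the example above)
+
+    >>> dataset = tl.dataflow.Batch(dataset, batch_size=128, drop_last=True)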
+ """ + + return dataset.batch(batch_size=batch_size, drop_remainder=drop_last) + + +def Concat(datasets): + """Concatenate the datasets in the input list of datasets. + + Parameters + ---------- + datasets: dataset + A list of datasets. + + Returns + ------- + Dataset + datasets concatenated. + + Examples + ---------- + With TensorLayer + + >>> dataset = tl.dataflow.Concat([dataset1, dataset2]) + + """ + + dataset_num = len(datasets) + dataset = datasets[0] + for i in range(1, dataset_num): + dataset.concatenate(datasets[i]) + return dataset + + +def FromSlices(datas, column_names=None): + """Creates a dataset with given data slices. + + Parameters + ---------- + datas: list or tuple + Each data should be in shape of [N, …], while N is the sample number. + Input data will be sliced along the first dimension and generate additional rows + column_names: list + List of column names of the dataset. This parameter not support in TensorFlow backend and Paddle backend. + + Returns + ------- + Dataset + A dataset. + + Examples + ---------- + With TensorLayer + + >>> dataset = tl.dataflow.FromSlices([data1, data2]) + + """ + + return tf.data.Dataset.from_tensor_slices(datas) + + +def Map(dataset, map_func, input_columns=None): + """ Maps map_func across the elements of this dataset. This function not implement in Paddle backend. + + Parameters + ---------- + dataset : Dataset + A dataset to map. + map_func : function + A function mapping a dataset element to another dataset element. + input_columns: list + List of column names of the dataset to map. This parameter not support in TensorFlow backend. + + Returns + ------- + Dataset + A mapped dataset. + + Examples + ---------- + With TensorLayer + + >>> dataset = tl.dataflow.Map(dataset, map_func) + + """ + return dataset.map(map_func) + + +def Repeat(dataset, count=None): + """ Repeat this dataset count times. This function not implement in Paddle backend. + + Parameters + ---------- + dataset : Dataset + A dataset to repeat. + count : int + The number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset be repeated indefinitely. + + Returns + ------- + Dataset + A repeated dataset. + + Examples + ---------- + With TensorLayer + + >>> dataset = tl.dataflow.Repeat(dataset, 2) + + """ + return dataset.repeat(count=count) + + +def Shuffle(dataset, buffer_size): + """ Randomly shuffles the elements of this dataset.This function not implement in Paddle backend. + + Parameters + ---------- + dataset : Dataset + A dataset to shuffle. + buffer_size : int + The number of elements from this dataset from which the new dataset will sample. + + Returns + ------- + Dataset + A shuffled dataset. + + Examples + ---------- + With TensorLayer + + >>> dataset = tl.dataflow.Shuffle(dataset, 2000) + + """ + return dataset.shuffle(buffer_size, seed=None, reshuffle_each_iteration=True) + + +def Zip(datasets): + """ Creates a Dataset by zipping together the given datasets.This function not implement in Paddle backend. + + Parameters + ---------- + datasets : list + A list of datasets to zip. + + Returns + ------- + Dataset + A zip dataset. + + Examples + ---------- + With TensorLayer + + >>> dataset = tl.dataflow.Zip([dataset1, dataset2]) + + """ + return tf.data.Dataset.zip(datasets) + + +def Dataloader(dataset, batch_size, shuffle=False, drop_last=False, shuffle_buffer_size=10000): + """ Creates a Datasetloader to trian network. We recommend using this function. 
+
+    Parameters
+    ----------
+    dataset : Dataset
+        The dataset to load data from.
+    batch_size : int or None
+        Sample number in a mini-batch.
+    shuffle : boolean
+        Whether to shuffle the order of the indices before generating batch indices.
+    drop_last : boolean
+        Whether to drop the last incomplete batch when the dataset size is not divisible by the batch size.
+    shuffle_buffer_size : int
+        The number of elements from this dataset from which the new dataset will sample. This parameter is not supported in the Paddle backend.
+
+    Returns
+    -------
+    DataLoader
+        An iterable object for iterating over the data; each element of the generated data is a Tensor.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> from tensorlayer.dataflow import Dataset
+    >>> class mnistdataset(Dataset):
+    >>>     def __init__(self, data, label, transform):
+    >>>         self.data = data
+    >>>         self.label = label
+    >>>         self.transform = transform
+    >>>     def __getitem__(self, index):
+    >>>         data = self.data[index].astype('float32')
+    >>>         data = self.transform(data)
+    >>>         label = self.label[index].astype('int64')
+    >>>         return data, label
+    >>>     def __len__(self):
+    >>>         return len(self.data)
+    >>> train_dataset = mnistdataset(data=X_train, label=y_train, transform=transform)
+    >>> train_dataset = tl.dataflow.FromGenerator(train_dataset, output_types=[tl.float32, tl.int64], column_names=['data', 'label'])
+    >>> train_dataloader = tl.dataflow.Dataloader(train_dataset, batch_size=128, shuffle=True, drop_last=False, shuffle_buffer_size=2000)
+
+    """
+
+    if shuffle:
+        dataset = Shuffle(dataset, buffer_size=shuffle_buffer_size)
+
+    dataset = Batch(dataset, batch_size=batch_size, drop_last=drop_last)
+    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
+
+    return dataset
diff --git a/tensorlayer/decorators/__init__.py b/tensorlayer/decorators/__init__.py
index 2a289862a..ba8d5eb9f 100644
--- a/tensorlayer/decorators/__init__.py
+++ b/tensorlayer/decorators/__init__.py
@@ -5,7 +5,7 @@
 various benchmarks and domain-specific problems. In addition, we also support
 transparent access to native TensorFlow parameters.
 For example, we provide not only layers for local response normalization, but also
-layers that allow user to apply ``tf.nn.lrn`` on ``network.outputs``.
+layers that allow users to apply ``tf.nn.lrn`` on ``network.outputs``.
 More functions can be found in `TensorFlow API `__.
 """
diff --git a/tensorlayer/files/__init__.py b/tensorlayer/files/__init__.py
index 0de8a9737..8d985afff 100644
--- a/tensorlayer/files/__init__.py
+++ b/tensorlayer/files/__init__.py
@@ -5,7 +5,7 @@
 various benchmarks and domain-specific problems. In addition, we also support
 transparent access to native TensorFlow parameters.
 For example, we provide not only layers for local response normalization, but also
-layers that allow user to apply ``tf.nn.lrn`` on ``network.outputs``.
+layers that allow users to apply ``tf.nn.lrn`` on ``network.outputs``.
 More functions can be found in `TensorFlow API `__.
""" @@ -68,10 +68,10 @@ 'save_ckpt', 'save_npz', 'save_npz_dict', + 'load_and_assign_ckpt', + 'ckpt_to_npz_dict' #'save_graph', #'load_graph', #'save_graph_and_params', #'load_graph_and_params', - 'load_and_assign_ckpt', - 'ckpt_to_npz_dict' ] diff --git a/tensorlayer/files/utils.py b/tensorlayer/files/utils.py index ff3c84cb9..13505770b 100644 --- a/tensorlayer/files/utils.py +++ b/tensorlayer/files/utils.py @@ -19,6 +19,7 @@ import cloudpickle import h5py import numpy as np +import progressbar import scipy.io as sio import tensorflow as tf from six.moves import cPickle @@ -28,11 +29,16 @@ from tensorflow.python.util.tf_export import keras_export from tensorflow.python import pywrap_tensorflow -import progressbar import tensorlayer as tl from tensorlayer import logging, nlp, utils, visualize -# from six.moves import zip +if tl.BACKEND == 'mindspore': + from mindspore.ops.operations import Assign + from mindspore.nn import Cell + from mindspore import Tensor + import mindspore as ms +if tl.BACKEND == 'paddle': + import paddle as pd if sys.version_info[0] == 2: from urllib import urlretrieve @@ -67,7 +73,10 @@ 'save_npz', 'save_npz_dict', 'tf_variables_to_numpy', + 'ms_variables_to_numpy', 'assign_tf_variable', + 'assign_ms_variable', + 'assign_pd_variable', 'save_weights_to_hdf5', 'load_hdf5_to_weights_in_order', 'load_hdf5_to_weights', @@ -78,7 +87,7 @@ # 'save_pkl_graph', # 'load_pkl_graph', 'load_and_assign_ckpt', - 'ckpt_to_npz_dict', + 'ckpt_to_npz_dict' ] @@ -1950,7 +1959,13 @@ def save_npz(save_list=None, name='model.npz'): if save_list is None: save_list = [] - save_list_var = tf_variables_to_numpy(save_list) + if tl.BACKEND == 'tensorflow': + save_list_var = tf_variables_to_numpy(save_list) + elif tl.BACKEND == 'mindspore': + save_list_var = ms_variables_to_numpy(save_list) + else: + raise NotImplementedError("This backend is not supported") + # print(name, save_list_var) np.savez(name, params=save_list_var) save_list_var = None del save_list_var @@ -2015,8 +2030,26 @@ def assign_weights(weights, network): """ ops = [] - for idx, param in enumerate(weights): - ops.append(network.all_weights[idx].assign(param)) + if tl.BACKEND == 'tensorflow': + for idx, param in enumerate(weights): + ops.append(network.all_weights[idx].assign(param)) + + elif tl.BACKEND == 'mindspore': + + class Assign_net(Cell): + + def __init__(self, y): + super(Assign_net, self).__init__() + self.y = y + + def construct(self, x): + Assign()(self.y, x) + + for idx, param in enumerate(weights): + assign_param = Tensor(param, dtype=ms.float32) + # net = Assign_net(network.all_weights[idx]) + # net(assign_param) + Assign()(network.all_weights[idx], assign_param) return ops @@ -2064,7 +2097,14 @@ def save_npz_dict(save_list=None, name='model.npz'): save_list = [] save_list_names = [tensor.name for tensor in save_list] - save_list_var = tf_variables_to_numpy(save_list) + if tl.BACKEND == 'tensorflow': + save_list_var = tf_variables_to_numpy(save_list) + elif tl.BACKEND == 'mindspore': + save_list_var = ms_variables_to_numpy(save_list) + elif tl.BACKEND == 'paddle': + save_list_var = pd_variables_to_numpy(save_list) + else: + raise NotImplementedError('Not implemented') save_var_dict = {save_list_names[idx]: val for idx, val in enumerate(save_list_var)} np.savez(name, **save_var_dict) save_list_var = None @@ -2108,7 +2148,16 @@ def load_and_assign_npz_dict(name='model.npz', network=None, skip=False): "if you want to skip redundant or mismatch weights." 
% key ) else: - assign_tf_variable(network.all_weights[net_weights_name.index(key)], weights[key]) + if tl.BACKEND == 'tensorflow': + assign_tf_variable(network.all_weights[net_weights_name.index(key)], weights[key]) + elif tl.BACKEND == 'mindspore': + assign_param = Tensor(weights[key], dtype=ms.float32) + assign_ms_variable(network.all_weights[net_weights_name.index(key)], assign_param) + elif tl.BACKEND == 'paddle': + assign_pd_variable(network.all_weights[net_weights_name.index(key)], weights[key]) + else: + raise NotImplementedError('Not implemented') + logging.info("[*] Model restored from npz_dict %s" % name) @@ -2544,11 +2593,52 @@ def tf_variables_to_numpy(variables): return results +def ms_variables_to_numpy(variables): + """Convert MS tensor or list of tensors into a list of numpy array""" + if not isinstance(variables, list): + var_list = [variables] + else: + var_list = variables + + results = [v.data.asnumpy() for v in var_list] + return results + + +def pd_variables_to_numpy(variables): + if not isinstance(variables, list): + var_list = [variables] + else: + var_list = variables + + results = [v.numpy() for v in var_list] + return results + + def assign_tf_variable(variable, value): """Assign value to a TF variable""" variable.assign(value) +def assign_ms_variable(variable, value): + + class Assign_net(Cell): + + def __init__(self, y): + super(Assign_net, self).__init__() + self.y = y + + def construct(self, x): + Assign()(self.y, x) + + # net = Assign_net(variable) + # net(value) + Assign()(variable, value) + + +def assign_pd_variable(variable, value): + pd.assign(value, variable) + + def _save_weights_to_hdf5_group(f, layers): """ Save layer/model weights into hdf5 group recursively. @@ -2780,46 +2870,6 @@ def load_hdf5_to_weights(filepath, network, skip=False): logging.info("[*] Load %s SUCCESS!" 
% filepath)
 
 
-def check_ckpt_file(model_dir):
-    model_dir = model_dir
-    model_path = None
-    count_extension = 0
-    for root, dirs, files in os.walk(model_dir):
-        for file in files:
-            filename, extension = os.path.splitext(file)
-            if extension in ['.data-00000-of-00001', '.index', '.meta']:
-                count_extension += 1
-        if count_extension == 3:
-            model_path = model_dir + '/' + filename
-        else:
-            raise Exception("Check the file extension for missing .data-00000-of-00001, .index, .meta")
-    if model_path is None:
-        raise Exception('The ckpt file is not found')
-    return model_path, filename
-
-
-def rename_weight_or_biases(variable_name):
-    if variable_name is None:
-        return variable_name
-    split_var = variable_name.split('/')
-
-    str_temp = ''
-    for i in range(len(split_var)):
-        if 'w' in split_var[i]:
-            split_var[i] = 'filters:0'
-        elif 'b' in split_var[i]:
-            split_var[i] = 'biases:0'
-        else:
-            pass
-
-        if i < len(split_var) - 1:
-            str_temp = str_temp + split_var[i] + '/'
-        else:
-            str_temp = str_temp + split_var[i]
-
-    return str_temp
-
-
 def load_and_assign_ckpt(model_dir, network=None, skip=True):
     """Load weights by name from a given file of ckpt format
 
@@ -2838,7 +2888,15 @@ def load_and_assign_ckpt(model_dir, network=None, skip=True):
     -------
 
     """
-    model_path, filename = check_ckpt_file(model_dir)
+    model_path = None
+    for root, dirs, files in os.walk(model_dir):
+        for file in files:
+            filename, extension = os.path.splitext(file)
+            if extension in ['.data-00000-of-00001', '.index', '.meta']:
+                model_path = model_dir + '/' + filename
+                break
+    if model_path is None:
+        raise Exception('The ckpt file is not found')
 
     reader = pywrap_tensorflow.NewCheckpointReader(model_path)
     var_to_shape_map = reader.get_variable_to_shape_map()
@@ -2859,7 +2918,7 @@ def load_and_assign_ckpt(model_dir, network=None, skip=True):
     logging.info("[*] Model restored from ckpt %s" % filename)
 
 
-def ckpt_to_npz_dict(model_dir, save_name='model.npz', rename_key=False):
+def ckpt_to_npz_dict(model_dir, save_name='model.npz'):
     """ Save ckpt weights to npz file
 
     Parameters
     ----------
@@ -2869,27 +2928,27 @@
     Examples: model_dir = /root/cnn_model/
     save_name : str
         The save_name of the `.npz` file.
-    rename_key : bool
-        Modify parameter naming, used to match TL naming rule.
-        Examples: conv1_1/b_b --> conv1_1/biases:0 ; conv1_1/w_w --> conv1_1/filters:0
 
     Returns
     -------
 
     """
-    model_path, _ = check_ckpt_file(model_dir)
+    model_path = None
+    for root, dirs, files in os.walk(model_dir):
+        for file in files:
+            filename, extension = os.path.splitext(file)
+            if extension in ['.data-00000-of-00001', '.index', '.meta']:
+                model_path = model_dir + '/' + filename
+                break
+    if model_path is None:
+        raise Exception('The ckpt file is not found')
 
     reader = pywrap_tensorflow.NewCheckpointReader(model_path)
     var_to_shape_map = reader.get_variable_to_shape_map()
 
     parameters_dict = {}
-    if rename_key is False:
-        for key in sorted(var_to_shape_map):
-            parameters_dict[key] = reader.get_tensor(key)
-    elif rename_key is True:
-        for key in sorted(var_to_shape_map):
-            parameters_dict[rename_weight_or_biases(key)] = reader.get_tensor(key)
-
+    for key in sorted(var_to_shape_map):
+        parameters_dict[key] = reader.get_tensor(key)
     np.savez(save_name, **parameters_dict)
     parameters_dict = None
     del parameters_dict
diff --git a/tensorlayer/initializers/__init__.py b/tensorlayer/initializers/__init__.py
new file mode 100644
index 000000000..ef8c65fe0
--- /dev/null
+++ b/tensorlayer/initializers/__init__.py
@@ -0,0 +1,25 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+# __all__ = [
+#     'Initializer', 'Zeros', 'Ones', 'Constant', 'RandomUniform', 'RandomNormal', 'TruncatedNormal',
+#     'deconv2d_bilinear_upsampling_initializer', 'He_Normal'
+# ]
+from .load_initializers_backend import Initializer
+from .load_initializers_backend import Zeros
+from .load_initializers_backend import Ones
+from .load_initializers_backend import Constant
+from .load_initializers_backend import RandomUniform
+from .load_initializers_backend import RandomNormal
+from .load_initializers_backend import TruncatedNormal
+from .load_initializers_backend import deconv2d_bilinear_upsampling_initializer
+from .load_initializers_backend import HeNormal
+
+# Alias
+zeros = Zeros
+ones = Ones
+constant = Constant
+random_uniform = RandomUniform
+random_normal = RandomNormal
+truncated_normal = TruncatedNormal
+he_normal = HeNormal
diff --git a/tensorlayer/initializers/load_initializers_backend.py b/tensorlayer/initializers/load_initializers_backend.py
new file mode 100644
index 000000000..3f5492da4
--- /dev/null
+++ b/tensorlayer/initializers/load_initializers_backend.py
@@ -0,0 +1,14 @@
+#!
/usr/bin/python +# -*- coding: utf-8 -*- + +from __future__ import absolute_import, division, print_function +from tensorlayer.backend.ops.load_backend import BACKEND + +if BACKEND == 'tensorflow': + from .tensorflow_initializers import * +elif BACKEND == 'mindspore': + from .mindspore_initializers import * +elif BACKEND == 'paddle': + from .paddle_initializers import * +else: + raise NotImplementedError("This backend is not supported") diff --git a/tensorlayer/initializers.py b/tensorlayer/initializers/mindspore_initializers.py similarity index 78% rename from tensorlayer/initializers.py rename to tensorlayer/initializers/mindspore_initializers.py index aaf4f37ac..4c0ef8450 100644 --- a/tensorlayer/initializers.py +++ b/tensorlayer/initializers/mindspore_initializers.py @@ -2,11 +2,13 @@ # -*- coding: utf-8 -*- import numpy as np -import tensorflow as tf +import tensorlayer as tl +from mindspore import Tensor +from mindspore.common import initializer __all__ = [ 'Initializer', 'Zeros', 'Ones', 'Constant', 'RandomUniform', 'RandomNormal', 'TruncatedNormal', - 'deconv2d_bilinear_upsampling_initializer' + 'deconv2d_bilinear_upsampling_initializer', 'HeNormal' ] @@ -22,7 +24,7 @@ def __call__(self, shape, dtype=None): shape : tuple of int. The shape of the tensor. dtype : Optional dtype of the tensor. - If not provided will return tensor of `tf.float32`. + If not provided will return tensor of `tl.float32`. Returns ------- @@ -61,16 +63,26 @@ class Zeros(Initializer): """Initializer that generates tensors initialized to 0. """ - def __call__(self, shape, dtype=tf.float32): - return tf.zeros(shape, dtype=dtype) + def __init__(self): + self.zero = initializer.Zero() + + def __call__(self, shape, dtype=tl.float32): + arr = np.ndarray(shape) + self.zero(arr) + return Tensor(arr, dtype=dtype) class Ones(Initializer): """Initializer that generates tensors initialized to 1. 
""" - def __call__(self, shape, dtype=tf.float32): - return tf.ones(shape, dtype=dtype) + def __init__(self): + self.one = initializer.One() + + def __call__(self, shape, dtype=tl.float32): + arr = np.ndarray(shape) + self.one(arr) + return Tensor(arr, dtype=dtype) class Constant(Initializer): @@ -85,9 +97,12 @@ class Constant(Initializer): def __init__(self, value=0): self.value = value + self.constant = initializer.Constant(value=value) - def __call__(self, shape, dtype=None): - return tf.constant(self.value, shape=shape, dtype=dtype) + def __call__(self, shape, dtype=tl.float32): + arr = np.ndarray(shape) + self.constant(arr) + return Tensor(arr, dtype=dtype) def get_config(self): return {"value": self.value} @@ -112,8 +127,8 @@ def __init__(self, minval=-0.05, maxval=0.05, seed=None): self.maxval = maxval self.seed = seed - def __call__(self, shape, dtype=tf.float32): - return tf.random.uniform(shape, self.minval, self.maxval, dtype=dtype, seed=self.seed) + def __call__(self, shape, dtype=tl.float32): + return tl.random_uniform(shape, self.minval, self.maxval, dtype=dtype, seed=self.seed) def get_config(self): return {"minval": self.minval, "maxval": self.maxval, "seed": self.seed} @@ -137,8 +152,8 @@ def __init__(self, mean=0.0, stddev=0.05, seed=None): self.stddev = stddev self.seed = seed - def __call__(self, shape, dtype=tf.float32): - return tf.random.normal(shape, self.mean, self.stddev, dtype=dtype, seed=self.seed) + def __call__(self, shape, dtype=tl.float32): + return tl.random_normal(shape, self.mean, self.stddev, dtype=dtype, seed=self.seed) def get_config(self): return {"mean": self.mean, "stddev": self.stddev, "seed": self.seed} @@ -168,13 +183,33 @@ def __init__(self, mean=0.0, stddev=0.05, seed=None): self.stddev = stddev self.seed = seed - def __call__(self, shape, dtype=tf.float32): - return tf.random.truncated_normal(shape, self.mean, self.stddev, dtype=dtype, seed=self.seed) + def __call__(self, shape, dtype=tl.float32): + return tl.truncated_normal(shape, self.mean, self.stddev, dtype=dtype, seed=self.seed) def get_config(self): return {"mean": self.mean, "stddev": self.stddev, "seed": self.seed} +class HeNormal(Initializer): + """He normal initializer. + + Parameters + ---------- + seed : A Python integer. + Used to seed the random generator. + + """ + + def __init__(self, seed=None): + self.seed = seed + + def __call__(self, shape, dtype=tl.float32): + return tl.he_normal(seed=self.seed, shape=shape, dtype=dtype) + + def get_config(self): + return {"seed", self.seed} + + def deconv2d_bilinear_upsampling_initializer(shape): """Returns the initializer that can be passed to DeConv2dLayer for initializing the weights in correspondence to channel-wise bilinear up-sampling. @@ -220,13 +255,4 @@ def deconv2d_bilinear_upsampling_initializer(shape): weights[:, :, i, i] = bilinear_kernel # assign numpy array to constant_initalizer and pass to get_variable - return tf.constant_initializer(value=weights) - - -# Alias -zeros = Zeros -ones = Ones -constant = Constant -random_uniform = RandomUniform -random_normal = RandomNormal -truncated_normal = TruncatedNormal + return Constant(value=weights) diff --git a/tensorlayer/initializers/paddle_initializers.py b/tensorlayer/initializers/paddle_initializers.py new file mode 100644 index 000000000..d332be15a --- /dev/null +++ b/tensorlayer/initializers/paddle_initializers.py @@ -0,0 +1,222 @@ +#! 
+# -*- coding: utf-8 -*-
+
+from paddle.fluid.initializer import ConstantInitializer
+from paddle.fluid.initializer import UniformInitializer
+from paddle.fluid.initializer import NormalInitializer
+from paddle.fluid.initializer import TruncatedNormalInitializer
+from paddle.fluid.initializer import MSRAInitializer
+import paddle
+
+__all__ = [
+    'Initializer', 'Zeros', 'Ones', 'Constant', 'RandomUniform', 'RandomNormal', 'TruncatedNormal',
+    'deconv2d_bilinear_upsampling_initializer', 'HeNormal'
+]
+
+
+class Initializer(object):
+    """Initializer base class: all initializers inherit from this class.
+    """
+
+    def __call__(self, shape, dtype=None):
+        """Returns a tensor object initialized as specified by the initializer.
+
+        Parameters
+        ----------
+        shape : tuple of int.
+            The shape of the tensor.
+        dtype : Optional dtype of the tensor.
+            If not provided, will return a tensor of `tl.float32`.
+
+        Returns
+        -------
+
+        """
+        raise NotImplementedError
+
+    def get_config(self):
+        """Returns the configuration of the initializer as a JSON-serializable dict.
+
+        Returns
+        -------
+        A JSON-serializable Python dict.
+        """
+        return {}
+
+    @classmethod
+    def from_config(cls, config):
+        """Instantiates an initializer from a configuration dictionary.
+
+        Parameters
+        ----------
+        config : A python dictionary.
+            It will typically be the output of `get_config`.
+
+        Returns
+        -------
+        An Initializer instance.
+        """
+        if 'dtype' in config:
+            config.pop('dtype')
+        return cls(**config)
+
+
+class Zeros(ConstantInitializer):
+    """Initializer that generates tensors initialized to 0.
+    """
+
+    def __init__(self):
+        super(Zeros, self).__init__(value=0.0, force_cpu=False)
+
+
+class Ones(object):
+    """Initializer that generates tensors initialized to 1.
+    """
+
+    def __init__(self):
+        pass
+
+    def __call__(self, shape, dtype):
+        return paddle.ones(shape=shape, dtype=dtype)
+
+
+class Constant(ConstantInitializer):
+    """Initializer that generates tensors initialized to a constant value.
+
+    Parameters
+    ----------
+    value : A python scalar or a numpy array.
+        The assigned value.
+
+    """
+
+    def __init__(self, value=0.0):
+        if value is None:
+            raise ValueError("value must not be None.")
+        super(Constant, self).__init__(value=value, force_cpu=False)
+        self.value = value
+
+    def get_config(self):
+        return {"value": self.value}
+
+
+class RandomUniform(UniformInitializer):
+    """Initializer that generates tensors with a uniform distribution.
+
+    Parameters
+    ----------
+    minval : A python scalar or a scalar tensor.
+        Lower bound of the range of random values to generate.
+    maxval : A python scalar or a scalar tensor.
+        Upper bound of the range of random values to generate.
+    seed : A Python integer.
+        Used to seed the random generator.
+
+    """
+
+    def __init__(self, minval=-0.05, maxval=0.05, seed=0):
+        assert minval is not None, 'low should not be None'
+        assert maxval is not None, 'high should not be None'
+        assert maxval >= minval, 'high should be greater than or equal to low'
+        super(RandomUniform, self).__init__(low=minval, high=maxval, seed=seed, diag_num=0, diag_step=0, diag_val=1.0)
+        self.minval = minval
+        self.maxval = maxval
+        self.seed = seed
+
+    def get_config(self):
+        return {"minval": self.minval, "maxval": self.maxval, "seed": self.seed}
+
+
+class RandomNormal(NormalInitializer):
+    """Initializer that generates tensors with a normal distribution.
+
+    Parameters
+    ----------
+    mean : A python scalar or a scalar tensor.
+        Mean of the random values to generate.
+    stddev : A python scalar or a scalar tensor.
+        Standard deviation of the random values to generate.
+    seed : A Python integer.
+        Used to seed the random generator.
+    """
+
+    def __init__(self, mean=0.0, stddev=0.05, seed=0):
+        assert mean is not None, 'mean should not be None'
+        assert stddev is not None, 'std should not be None'
+        super(RandomNormal, self).__init__(loc=mean, scale=stddev, seed=seed)
+        self.mean = mean
+        self.stddev = stddev
+        self.seed = seed
+
+    def get_config(self):
+        return {"mean": self.mean, "stddev": self.stddev, "seed": self.seed}
+
+
+class TruncatedNormal(TruncatedNormalInitializer):
+    """Initializer that generates a truncated normal distribution.
+
+    These values are similar to values from a `RandomNormal`
+    except that values more than two standard deviations from the mean
+    are discarded and re-drawn. This is the recommended initializer for
+    neural network weights and filters.
+
+
+    Parameters
+    ----------
+    mean : A python scalar or a scalar tensor.
+        Mean of the random values to generate.
+    stddev : A python scalar or a scalar tensor.
+        Standard deviation of the random values to generate.
+    seed : A Python integer.
+        Used to seed the random generator.
+    """
+
+    def __init__(self, mean=0.0, stddev=0.05, seed=0):
+        assert mean is not None, 'mean should not be None'
+        assert stddev is not None, 'std should not be None'
+        super(TruncatedNormal, self).__init__(loc=mean, scale=stddev, seed=seed)
+        self.mean = mean
+        self.stddev = stddev
+        self.seed = seed
+
+    def get_config(self):
+        return {"mean": self.mean, "stddev": self.stddev, "seed": self.seed}
+
+
+class HeNormal(MSRAInitializer):
+    """He normal initializer.
+
+    Parameters
+    ----------
+    seed : A Python integer.
+        Used to seed the random generator.
+
+    """
+
+    def __init__(self, seed=0):
+        super(HeNormal, self).__init__(uniform=False, fan_in=None, seed=seed)
+        self.seed = seed
+
+    def get_config(self):
+        return {"seed": self.seed}
+
+
+def deconv2d_bilinear_upsampling_initializer(shape):
+    """Returns the initializer that can be passed to DeConv2dLayer for initializing the
+    weights in correspondence to channel-wise bilinear up-sampling.
+    Used in segmentation approaches such as [FCN](https://arxiv.org/abs/1605.06211)
+
+    Parameters
+    ----------
+    shape : tuple of int
+        The shape of the filters, [height, width, output_channels, in_channels].
+        It must match the shape passed to DeConv2dLayer.
+
+    Returns
+    -------
+    ``Constant``
+        A constant initializer with weights set to correspond to per-channel bilinear upsampling
+        when passed as W_init in DeConv2dLayer
+
+    """
+    raise NotImplementedError
diff --git a/tensorlayer/initializers/tensorflow_initializers.py b/tensorlayer/initializers/tensorflow_initializers.py
new file mode 100644
index 000000000..5009969b7
--- /dev/null
+++ b/tensorlayer/initializers/tensorflow_initializers.py
@@ -0,0 +1,298 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import numpy as np
+import tensorlayer as tl
+
+__all__ = [
+    'Initializer', 'Zeros', 'Ones', 'Constant', 'RandomUniform', 'RandomNormal', 'TruncatedNormal',
+    'deconv2d_bilinear_upsampling_initializer', 'HeNormal'
+]
+
+
+class Initializer(object):
+    """Initializer base class: all initializers inherit from this class.
+    """
+
+    def __call__(self, shape, dtype=None):
+        """Returns a tensor object initialized as specified by the initializer.
+
+        Parameters
+        ----------
+        shape : tuple of int.
+            The shape of the tensor.
+        dtype : Optional dtype of the tensor.
+ If not provided will return tensor of `tl.float32`. + + Returns + ------- + + """ + raise NotImplementedError + + def get_config(self): + """Returns the configuration of the initializer as a JSON-serializable dict. + + Returns + ------- + A JSON-serializable Python dict. + """ + return {} + + @classmethod + def from_config(cls, config): + """Instantiates an initializer from a configuration dictionary. + + Parameters + ---------- + config : A python dictionary. + It will typically be the output of `get_config`. + + Returns + ------- + An Initializer instance. + """ + if 'dtype' in config: + config.pop('dtype') + return cls(**config) + + +class Zeros(Initializer): + """Initializer that generates tensors initialized to 0. + + Examples + -------- + + >>> import tensorlayer as tl + >>> init = tl.initializers.zeros() + >>> print(init(shape=(5, 10), dtype=tl.float32)) + + """ + + def __call__(self, shape, dtype=tl.float32): + return tl.zeros(shape, dtype=dtype) + + +class Ones(Initializer): + """Initializer that generates tensors initialized to 1. + + Examples + -------- + + >>> import tensorlayer as tl + >>> init = tl.initializers.ones() + >>> print(init(shape=(5, 10), dtype=tl.float32)) + + """ + + def __call__(self, shape, dtype=tl.float32): + return tl.ones(shape, dtype=dtype) + + +class Constant(Initializer): + """Initializer that generates tensors initialized to a constant value. + + Parameters + ---------- + value : A python scalar or a numpy array. + The assigned value. + + Examples + -------- + + >>> import tensorlayer as tl + >>> init = tl.initializers.constant(value=10) + >>> print(init(shape=(5, 10), dtype=tl.float32)) + + """ + + def __init__(self, value=0): + self.value = value + + def __call__(self, shape, dtype=tl.float32): + return tl.constant(self.value, shape=shape, dtype=dtype) + + def get_config(self): + return {"value": self.value} + + +class RandomUniform(Initializer): + """Initializer that generates tensors with a uniform distribution. + + Parameters + ---------- + minval : A python scalar or a scalar tensor. + Lower bound of the range of random values to generate. + maxval : A python scalar or a scalar tensor. + Upper bound of the range of random values to generate. + seed : A Python integer. + Used to seed the random generator. + + Examples + -------- + + >>> import tensorlayer as tl + >>> init = tl.initializers.random_uniform(minval=-0.05, maxval=0.05) + >>> print(init(shape=(5, 10), dtype=tl.float32)) + + """ + + def __init__(self, minval=-0.05, maxval=0.05, seed=None): + self.minval = minval + self.maxval = maxval + self.seed = seed + + def __call__(self, shape, dtype=tl.float32): + return tl.random_uniform(shape, self.minval, self.maxval, dtype=dtype, seed=self.seed) + + def get_config(self): + return {"minval": self.minval, "maxval": self.maxval, "seed": self.seed} + + +class RandomNormal(Initializer): + """Initializer that generates tensors with a normal distribution. + + Parameters + ---------- + mean : A python scalar or a scalar tensor. + Mean of the random values to generate. + stddev : A python scalar or a scalar tensor. + Standard deviation of the random values to generate. + seed : A Python integer. + Used to seed the random generator. 
+
+    Examples
+    --------
+
+    >>> import tensorlayer as tl
+    >>> init = tl.initializers.random_normal(mean=0.0, stddev=0.05)
+    >>> print(init(shape=(5, 10), dtype=tl.float32))
+
+    """
+
+    def __init__(self, mean=0.0, stddev=0.05, seed=None):
+        self.mean = mean
+        self.stddev = stddev
+        self.seed = seed
+
+    def __call__(self, shape, dtype=tl.float32):
+        return tl.random_normal(shape, self.mean, self.stddev, dtype=dtype, seed=self.seed)
+
+    def get_config(self):
+        return {"mean": self.mean, "stddev": self.stddev, "seed": self.seed}
+
+
+class TruncatedNormal(Initializer):
+    """Initializer that generates a truncated normal distribution.
+
+    These values are similar to values from a `RandomNormal`
+    except that values more than two standard deviations from the mean
+    are discarded and re-drawn. This is the recommended initializer for
+    neural network weights and filters.
+
+
+    Parameters
+    ----------
+    mean : A python scalar or a scalar tensor.
+        Mean of the random values to generate.
+    stddev : A python scalar or a scalar tensor.
+        Standard deviation of the random values to generate.
+    seed : A Python integer.
+        Used to seed the random generator.
+
+    Examples
+    --------
+
+    >>> import tensorlayer as tl
+    >>> init = tl.initializers.truncated_normal(mean=0.0, stddev=0.05)
+    >>> print(init(shape=(5, 10), dtype=tl.float32))
+
+    """
+
+    def __init__(self, mean=0.0, stddev=0.05, seed=None):
+        self.mean = mean
+        self.stddev = stddev
+        self.seed = seed
+
+    def __call__(self, shape, dtype=tl.float32):
+        return tl.truncated_normal(shape, self.mean, self.stddev, dtype=dtype, seed=self.seed)
+
+    def get_config(self):
+        return {"mean": self.mean, "stddev": self.stddev, "seed": self.seed}
+
+
+class HeNormal(Initializer):
+    """He normal initializer.
+
+    Parameters
+    ----------
+    seed : A Python integer.
+        Used to seed the random generator.
+
+    Examples
+    --------
+
+    >>> import tensorlayer as tl
+    >>> init = tl.initializers.he_normal()
+    >>> print(init(shape=(5, 10), dtype=tl.float32))
+
+    """
+
+    def __init__(self, seed=None):
+        self.seed = seed
+
+    def __call__(self, shape, dtype=tl.float32):
+        return tl.he_normal(seed=self.seed, shape=shape, dtype=dtype)
+
+    def get_config(self):
+        return {"seed": self.seed}
+
+
+def deconv2d_bilinear_upsampling_initializer(shape):
+    """Returns the initializer that can be passed to DeConv2dLayer for initializing the
+    weights in correspondence to channel-wise bilinear up-sampling.
+    Used in segmentation approaches such as [FCN](https://arxiv.org/abs/1605.06211)
+
+    Parameters
+    ----------
+    shape : tuple of int
+        The shape of the filters, [height, width, output_channels, in_channels].
+        It must match the shape passed to DeConv2dLayer.
+ + Returns + ------- + ``tf.constant_initializer`` + A constant initializer with weights set to correspond to per channel bilinear upsampling + when passed as W_int in DeConv2dLayer + + """ + if shape[0] != shape[1]: + raise Exception('deconv2d_bilinear_upsampling_initializer only supports symmetrical filter sizes') + + if shape[3] < shape[2]: + raise Exception( + 'deconv2d_bilinear_upsampling_initializer behaviour is not defined for num_in_channels < num_out_channels ' + ) + + filter_size = shape[0] + num_out_channels = shape[2] + num_in_channels = shape[3] + + # Create bilinear filter kernel as numpy array + bilinear_kernel = np.zeros([filter_size, filter_size], dtype=np.float32) + scale_factor = (filter_size + 1) // 2 + if filter_size % 2 == 1: + center = scale_factor - 1 + else: + center = scale_factor - 0.5 + for x in range(filter_size): + for y in range(filter_size): + bilinear_kernel[x, y] = (1 - abs(x - center) / scale_factor) * (1 - abs(y - center) / scale_factor) + weights = np.zeros((filter_size, filter_size, num_out_channels, num_in_channels), dtype=np.float32) + for i in range(num_out_channels): + weights[:, :, i, i] = bilinear_kernel + + # assign numpy array to constant_initalizer and pass to get_variable + return Constant(value=weights) diff --git a/tensorlayer/layers/activation.py b/tensorlayer/layers/activation.py index 31abaeaba..f4298478d 100644 --- a/tensorlayer/layers/activation.py +++ b/tensorlayer/layers/activation.py @@ -1,24 +1,17 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - from tensorlayer import logging -from tensorlayer.activation import leaky_relu6, leaky_twice_relu6 -from tensorlayer.decorators import deprecated_alias +import tensorlayer as tl from tensorlayer.initializers import truncated_normal -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import LayersConfig +from tensorlayer.layers.core import Module __all__ = [ - 'PRelu', - 'PRelu6', - 'PTRelu6', + 'PRelu', 'PRelu6', 'PTRelu6', 'LeakyReLU', 'LeakyReLU6', 'LeakyTwiceRelu6', 'Ramp', 'Swish', 'HardTanh', 'Mish' ] -class PRelu(Layer): +class PRelu(Module): """ The :class:`PRelu` class is Parametric Rectified Linear layer. 
It follows f(x) = alpha * x for x < 0, f(x) = x for x >= 0, @@ -39,12 +32,7 @@ class PRelu(Layer): Examples ----------- >>> inputs = tl.layers.Input([10, 5]) - >>> prelulayer = tl.layers.PRelu(channel_shared=True) - >>> print(prelulayer) - PRelu(channel_shared=True,in_channels=None,name=prelu) - >>> prelu = prelulayer(inputs) - >>> model = tl.models.Model(inputs=inputs, outputs=prelu) - >>> out = model(data, is_train=True) + >>> prelulayer = tl.layers.PRelu(channel_shared=True, in_channels=5)(inputs) References ----------- @@ -54,17 +42,16 @@ class PRelu(Layer): """ def __init__( - self, - channel_shared=False, - in_channels=None, - a_init=truncated_normal(mean=0.0, stddev=0.05), - name=None # "prelu" + self, channel_shared=False, in_channels=None, a_init=truncated_normal(mean=0.0, stddev=0.05), name=None, + data_format='channels_last', dim=2 ): super(PRelu, self).__init__(name) self.channel_shared = channel_shared self.in_channels = in_channels self.a_init = a_init + self.data_format = data_format + self.dim = dim if self.channel_shared: self.build((None, )) @@ -86,22 +73,35 @@ def __repr__(self): def build(self, inputs_shape): if self.channel_shared: w_shape = (1, ) - else: - w_shape = (inputs_shape[-1], ) + elif self.data_format == 'channels_last': + w_shape = (self.in_channels, ) + elif self.data_format == 'channels_first': + if self.dim == 2: + w_shape = (1, self.in_channels, 1, 1) + elif self.dim == 1: + w_shape = (1, self.in_channels, 1) + elif self.dim == 3: + w_shape = (1, self.in_channels, 1, 1, 1) + else: + raise Exception("Dim should be equal to 1, 2 or 3") self.alpha_var = self._get_weights("alpha", shape=w_shape, init=self.a_init) - self.alpha_var_constrained = tf.nn.sigmoid(self.alpha_var, name="constraining_alpha_var_in_0_1") + self.relu = tl.ops.ReLU() + self.sigmoid = tl.ops.Sigmoid() - # @tf.function def forward(self, inputs): - - pos = tf.nn.relu(inputs) - self.alpha_var_constrained = tf.nn.sigmoid(self.alpha_var, name="constraining_alpha_var_in_0_1") - neg = -self.alpha_var_constrained * tf.nn.relu(-inputs) - + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + pos = self.relu(inputs) + self.alpha_var_constrained = self.sigmoid(self.alpha_var) + neg = -self.alpha_var_constrained * self.relu(-inputs) return pos + neg -class PRelu6(Layer): +class PRelu6(Module): """ The :class:`PRelu6` class is Parametric Rectified Linear layer integrating ReLU6 behaviour. @@ -132,6 +132,11 @@ class PRelu6(Layer): name : None or str A unique layer name. 
+ Examples + ----------- + >>> inputs = tl.layers.Input([10, 5]) + >>> prelulayer = tl.layers.PRelu6(channel_shared=True, in_channels=5)(inputs) + References ----------- - `Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification `__ @@ -145,13 +150,17 @@ def __init__( channel_shared=False, in_channels=None, a_init=truncated_normal(mean=0.0, stddev=0.05), - name=None # "prelu6" + name=None, # "prelu6" + data_format='channels_last', + dim=2 ): super(PRelu6, self).__init__(name) self.channel_shared = channel_shared self.in_channels = in_channels self.a_init = a_init + self.data_format = data_format + self.dim = dim if self.channel_shared: self.build((None, )) @@ -173,21 +182,37 @@ def __repr__(self): def build(self, inputs_shape): if self.channel_shared: w_shape = (1, ) - else: - w_shape = (inputs_shape[-1], ) + elif self.data_format == 'channels_last': + w_shape = (self.in_channels, ) + elif self.data_format == 'channels_first': + if self.dim == 2: + w_shape = (1, self.in_channels, 1, 1) + elif self.dim == 1: + w_shape = (1, self.in_channels, 1) + elif self.dim == 3: + w_shape = (1, self.in_channels, 1, 1, 1) + else: + raise Exception("Dim should be equal to 1, 2 or 3") self.alpha_var = self._get_weights("alpha", shape=w_shape, init=self.a_init) - self.alpha_var_constrained = tf.nn.sigmoid(self.alpha_var, name="constraining_alpha_var_in_0_1") + self.sigmoid = tl.ops.Sigmoid() + self.relu = tl.ops.ReLU() # @tf.function def forward(self, inputs): - pos = tf.nn.relu(inputs) - pos_6 = -tf.nn.relu(inputs - 6) - neg = -self.alpha_var_constrained * tf.nn.relu(-inputs) - + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + alpha_var_constrained = self.sigmoid(self.alpha_var) + pos = self.relu(inputs) + pos_6 = -self.relu(inputs - 6) + neg = -alpha_var_constrained * self.relu(-inputs) return pos + pos_6 + neg -class PTRelu6(Layer): +class PTRelu6(Module): """ The :class:`PTRelu6` class is Parametric Rectified Linear layer integrating ReLU6 behaviour. @@ -220,6 +245,11 @@ class PTRelu6(Layer): name : None or str A unique layer name. 
+    Examples
+    -----------
+    >>> inputs = tl.layers.Input([10, 5])
+    >>> prelulayer = tl.layers.PTRelu6(channel_shared=True, in_channels=5)(inputs)
+
     References
     -----------
     - `Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification `__
@@ -232,6 +262,8 @@
         self,
         channel_shared=False,
         in_channels=None,
+        data_format='channels_last',
+        dim=2,
         a_init=truncated_normal(mean=0.0, stddev=0.05),
         name=None  # "ptrelu6"
     ):
@@ -239,6 +270,8 @@ def __init__(
         super(PTRelu6, self).__init__(name)
         self.channel_shared = channel_shared
         self.in_channels = in_channels
+        self.data_format = data_format
+        self.dim = dim
         self.a_init = a_init
 
         if self.channel_shared:
@@ -261,21 +293,331 @@ def __repr__(self):
     def build(self, inputs_shape):
         if self.channel_shared:
             w_shape = (1, )
-        else:
-            w_shape = (inputs_shape[-1], )
+        elif self.data_format == 'channels_last':
+            w_shape = (self.in_channels, )
+        elif self.data_format == 'channels_first':
+            if self.dim == 2:
+                w_shape = (1, self.in_channels, 1, 1)
+            elif self.dim == 1:
+                w_shape = (1, self.in_channels, 1)
+            elif self.dim == 3:
+                w_shape = (1, self.in_channels, 1, 1, 1)
+            else:
+                raise Exception("Dim should be equal to 1, 2 or 3")
 
         # Alpha for outputs lower than zeros
         self.alpha_low = self._get_weights("alpha_low", shape=w_shape, init=self.a_init)
-        self.alpha_low_constrained = tf.nn.sigmoid(self.alpha_low, name="constraining_alpha_low_in_0_1")
-
+        self.sigmoid = tl.ops.Sigmoid()
+        self.relu = tl.ops.ReLU()
         # Alpha for outputs higher than 6
         self.alpha_high = self._get_weights("alpha_high", shape=w_shape, init=self.a_init)
-        self.alpha_high_constrained = tf.nn.sigmoid(self.alpha_high, name="constraining_alpha_high_in_0_1")
 
     # @tf.function
     def forward(self, inputs):
-        pos = tf.nn.relu(inputs)
-        pos_6 = -tf.nn.relu(inputs - 6) + self.alpha_high_constrained * tf.nn.relu(inputs - 6)
-        neg = -self.alpha_low_constrained * tf.nn.relu(-inputs)
+        if self._forward_state == False:
+            if self._built == False:
+                self.build(tl.get_tensor_shape(inputs))
+                self._built = True
+            self._forward_state = True
+
+        alpha_low_constrained = self.sigmoid(self.alpha_low)
+        alpha_high_constrained = self.sigmoid(self.alpha_high)
+        pos = self.relu(inputs)
+        pos_6 = -self.relu(inputs - 6) + alpha_high_constrained * self.relu(inputs - 6)
+        neg = -alpha_low_constrained * self.relu(-inputs)
 
         return pos + pos_6 + neg
+
+
+class Ramp(Module):
+    """Ramp activation function.
+
+    Reference: [tf.clip_by_value]
+
+    Parameters
+    ----------
+    x : Tensor
+        input.
+    v_min : float
+        cap input to v_min as a lower bound.
+    v_max : float
+        cap input to v_max as an upper bound.
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` in the same type as ``x``.
+
+    Examples
+    -----------
+    >>> inputs = tl.layers.Input([10, 5])
+    >>> ramplayer = tl.layers.Ramp()(inputs)
+
+    """
+
+    def __init__(self, v_min=0, v_max=1):
+        super(Ramp, self).__init__()
+        self._built = True
+        self.v_min = v_min
+        self.v_max = v_max
+
+    def forward(self, x):
+        return tl.ops.clip_by_value(x, clip_value_min=self.v_min, clip_value_max=self.v_max)
+
+
+class LeakyReLU(Module):
+    """
+
+    This function is a modified version of ReLU, introducing a nonzero gradient for negative input. Introduced by the paper:
+    `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__
+
+    The function returns the following results:
+        - When x < 0: ``f(x) = alpha * x``.
+        - When x >= 0: ``f(x) = x``.
+
+    Parameters
+    ----------
+    x : Tensor
+        Support input type ``float``, ``double``, ``int32``, ``int64``, ``uint8``, ``int16``, or ``int8``.
+    alpha : float
+        Slope.
+    name : str
+        The function name (optional).
+
+    Examples
+    --------
+    >>> net = tl.layers.Input([10, 200])
+    >>> net = tl.layers.LeakyReLU(alpha=0.5)(net)
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` in the same type as ``x``.
+
+    References
+    ----------
+    - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__
+
+    """
+
+    def __init__(self, alpha=0.2):
+        super(LeakyReLU, self).__init__()
+        self._built = True
+        self.alpha = alpha
+        self._leakyrelu = tl.ops.LeakyReLU(alpha=alpha)
+
+    def forward(self, x):
+        return self._leakyrelu(x)
+
+
+class LeakyReLU6(Module):
+    """
+    This activation function is a modified version of :func:`leaky_relu` introduced by the following paper:
+    `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__
+
+    This activation function also follows the behaviour of the activation function :func:`tf.nn.relu6` introduced by the following paper:
+    `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__
+
+    The function returns the following results:
+        - When x < 0: ``f(x) = alpha * x``.
+        - When x in [0, 6]: ``f(x) = x``.
+        - When x > 6: ``f(x) = 6``.
+
+    Parameters
+    ----------
+    x : Tensor
+        Support input type ``float``, ``double``, ``int32``, ``int64``, ``uint8``, ``int16``, or ``int8``.
+    alpha : float
+        Slope.
+    name : str
+        The function name (optional).
+
+    Examples
+    --------
+    >>> net = tl.layers.Input([10, 200])
+    >>> net = tl.layers.LeakyReLU6(alpha=0.5)(net)
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` in the same type as ``x``.
+
+    References
+    ----------
+    - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__
+    - `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__
+    """
+
+    def __init__(self, alpha=0.2):
+        super(LeakyReLU6, self).__init__()
+        self._built = True
+        if not (0 < alpha <= 1):
+            raise ValueError("`alpha` value must be in (0, 1]")
+
+        self.alpha = alpha
+        self.minimum = tl.ops.Minimum()
+        self.maximum = tl.ops.Maximum()
+
+    def forward(self, x):
+        return self.minimum(self.maximum(x, self.alpha * x), 6)
+
+
+class LeakyTwiceRelu6(Module):
+    """
+
+    This activation function is a modified version of :func:`leaky_relu` introduced by the following paper:
+    `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__
+
+    This activation function also follows the behaviour of the activation function :func:`tf.nn.relu6` introduced by the following paper:
+    `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__
+
+    This function pushes the logic further by adding `leaky` behaviour both below zero and above six.
+
+    The function returns the following results:
+        - When x < 0: ``f(x) = alpha_low * x``.
+        - When x in [0, 6]: ``f(x) = x``.
+        - When x > 6: ``f(x) = 6 + (alpha_high * (x-6))``.
+
+    Parameters
+    ----------
+    x : Tensor
+        Support input type ``float``, ``double``, ``int32``, ``int64``, ``uint8``, ``int16``, or ``int8``.
+    alpha_low : float
+        Slope for x < 0: ``f(x) = alpha_low * x``.
+    alpha_high : float
+        Slope for x > 6: ``f(x) = 6 + (alpha_high * (x-6))``.
+    name : str
+        The function name (optional).
+
+
+class LeakyTwiceRelu6(Module):
+    """
+    This activation function is a modified version of :func:`leaky_relu` introduced by the following paper:
+    `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__
+
+    This activation function also follows the behaviour of the activation function :func:`tf.ops.relu6` introduced by the following paper:
+    `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__
+
+    This function pushes the logic further by adding `leaky` behaviour both below zero and above six.
+
+    The function returns the following results:
+    - When x < 0: ``f(x) = alpha_low * x``.
+    - When x in [0, 6]: ``f(x) = x``.
+    - When x > 6: ``f(x) = 6 + (alpha_high * (x-6))``.
+
+    Parameters
+    ----------
+    x : Tensor
+        Support input type ``float``, ``double``, ``int32``, ``int64``, ``uint8``, ``int16``, or ``int8``.
+    alpha_low : float
+        Slope for x < 0: ``f(x) = alpha_low * x``.
+    alpha_high : float
+        Slope for x > 6: ``f(x) = 6 + (alpha_high * (x-6))``.
+    name : str
+        The function name (optional).
+
+    Examples
+    --------
+    >>> net = tl.layers.Input([10, 200])
+    >>> net = tl.layers.LeakyTwiceRelu6(alpha_low=0.5, alpha_high=0.2)(net)
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` in the same type as ``x``.
+
+    References
+    ----------
+    - `Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013] `__
+    - `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] `__
+
+    """
+
+    def __init__(self, alpha_low=0.2, alpha_high=0.2):
+        super(LeakyTwiceRelu6, self).__init__()
+        self._built = True
+        if not (0 < alpha_high <= 1):
+            raise ValueError("`alpha_high` value must be in (0, 1]")
+
+        if not (0 < alpha_low <= 1):
+            raise ValueError("`alpha_low` value must be in (0, 1]")
+
+        self.alpha_low = alpha_low
+        self.alpha_high = alpha_high
+        self.minimum = tl.ops.Minimum()
+        self.maximum = tl.ops.Maximum()
+
+    def forward(self, x):
+        x_is_above_0 = self.minimum(x, 6 * (1 - self.alpha_high) + self.alpha_high * x)
+        x_is_below_0 = self.minimum(self.alpha_low * x, 0)
+        return self.maximum(x_is_above_0, x_is_below_0)
+
+
+class Swish(Module):
+    """Swish function.
+
+    See `Swish: a Self-Gated Activation Function `__.
+
+    Parameters
+    ----------
+    x : Tensor
+        input.
+    name : str
+        function name (optional).
+
+    Examples
+    --------
+    >>> net = tl.layers.Input([10, 200])
+    >>> net = tl.layers.Swish()(net)
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` in the same type as ``x``.
+
+    """
+
+    def __init__(self):
+        super(Swish, self).__init__()
+        self.sigmoid = tl.ops.Sigmoid()
+        self._built = True
+
+    def forward(self, x):
+        return self.sigmoid(x) * x
+
+
+class HardTanh(Module):
+    """Hard tanh activation function.
+
+    This is a ramp function with a lower bound of -1 and an upper bound of 1; the shortcut is `htanh`.
+
+    Parameters
+    ----------
+    x : Tensor
+        input.
+    name : str
+        The function name (optional).
+
+    Examples
+    --------
+    >>> net = tl.layers.Input([10, 200])
+    >>> net = tl.layers.HardTanh()(net)
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` in the same type as ``x``.
+
+    """
+
+    def __init__(self):
+        super(HardTanh, self).__init__()
+        self._built = True
+
+    def forward(self, x):
+        return tl.ops.clip_by_value(x, -1, 1)
+
+
+class Mish(Module):
+    """Mish activation function.
+
+    Reference: [Mish: A Self Regularized Non-Monotonic Neural Activation Function, Diganta Misra, 2019]
+
+    Parameters
+    ----------
+    x : Tensor
+        input.
+
+    Examples
+    --------
+    >>> net = tl.layers.Input([10, 200])
+    >>> net = tl.layers.Mish()(net)
+
+    Returns
+    -------
+    Tensor
+        A ``Tensor`` in the same type as ``x``.
+
+    """
+
+    def __init__(self):
+        super(Mish, self).__init__()
+        self._tanh = tl.ops.Tanh()
+        self._softplus = tl.ops.Softplus()
+        self._built = True
+
+    def forward(self, x):
+        return x * self._tanh(self._softplus(x))
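For intuition only: a numeric sanity check of the Mish formula ``f(x) = x * tanh(softplus(x))`` using the standard ``math`` module; ``mish_reference`` is a hypothetical helper, not part of the TensorLayer API.

import math

def mish_reference(x):
    softplus = math.log1p(math.exp(x))  # softplus(x) = ln(1 + e^x)
    return x * math.tanh(softplus)

print(round(mish_reference(1.0), 4))   # ~0.8651
print(round(mish_reference(-1.0), 4))  # ~-0.3034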
""" @@ -13,14 +13,14 @@ from .deformable_conv import * from .depthwise_conv import * from .dorefa_conv import * -from .expert_conv import * -from .expert_deconv import * +# from .expert_conv import * +# from .expert_deconv import * from .group_conv import * from .quan_conv import * from .quan_conv_bn import * from .separable_conv import * from .simplified_conv import * -from .simplified_deconv import * +# from .simplified_deconv import * from .super_resolution import * from .ternary_conv import * @@ -32,18 +32,19 @@ 'Conv3d', # simplified deconv + 'DeConv1d', 'DeConv2d', 'DeConv3d', # expert conv - 'Conv1dLayer', - 'Conv2dLayer', - 'Conv3dLayer', + # 'Conv1dLayer', + # 'Conv2dLayer', + # 'Conv3dLayer', # expert conv - 'DeConv1dLayer', - 'DeConv2dLayer', - 'DeConv3dLayer', + # 'DeConv1dLayer', + # 'DeConv2dLayer', + # 'DeConv3dLayer', # atrous # 'AtrousConv1dLayer', diff --git a/tensorlayer/layers/convolution/binary_conv.py b/tensorlayer/layers/convolution/binary_conv.py index 92929ae92..f949a48ce 100644 --- a/tensorlayer/layers/convolution/binary_conv.py +++ b/tensorlayer/layers/convolution/binary_conv.py @@ -1,18 +1,16 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer -from tensorlayer.layers.utils import quantize +from tensorlayer.layers.core import Module -__all__ = ['BinaryConv2d'] +__all__ = [ + 'BinaryConv2d', +] -class BinaryConv2d(Layer): +class BinaryConv2d(Module): """ The :class:`BinaryConv2d` class is a 2D binary CNN layer, which weights are either -1 or 1 while inference. @@ -31,9 +29,6 @@ class BinaryConv2d(Layer): The activation function of this layer. padding : str The padding algorithm type: "SAME" or "VALID". - use_gemm : boolean - If True, use gemm instead of ``tf.matmul`` for inference. - TODO: support gemm data_format : str "channels_last" (NHWC, default) or "channels_first" (NCHW). dilation_rate : tuple of int @@ -52,35 +47,23 @@ class BinaryConv2d(Layer): With TensorLayer >>> net = tl.layers.Input([8, 100, 100, 32], name='input') - >>> binaryconv2d = tl.layers.QuanConv2d( - ... n_filter=64, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, in_channels=32, name='binaryconv2d' - ... )(net) + >>> binaryconv2d = tl.layers.BinaryConv2d( + ... 
n_filter=64, filter_size=(3, 3), strides=(2, 2), act=tl.ReLU, in_channels=32, name='binaryconv2d')(net) >>> print(binaryconv2d) >>> output shape : (8, 50, 50, 64) """ def __init__( - self, - n_filter=32, - filter_size=(3, 3), - strides=(1, 1), - act=None, - padding='SAME', - use_gemm=False, - data_format="channels_last", - dilation_rate=(1, 1), - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - in_channels=None, - name=None # 'binary_cnn2d', + self, n_filter=32, filter_size=(3, 3), strides=(1, 1), act=None, padding='VALID', data_format="channels_last", + dilation_rate=(1, 1), W_init=tl.initializers.truncated_normal(stddev=0.02), + b_init=tl.initializers.constant(value=0.0), in_channels=None, name=None ): - super().__init__(name, act=act) + super(BinaryConv2d, self).__init__(name, act=act) self.n_filter = n_filter self.filter_size = filter_size - self.strides = self._strides = strides + self._strides = self.strides = strides self.padding = padding - self.use_gemm = use_gemm self.data_format = data_format self._dilation_rate = self.dilation_rate = dilation_rate self.W_init = W_init @@ -94,18 +77,12 @@ def __init__( logging.info( "BinaryConv2d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( self.name, n_filter, str(filter_size), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) - if use_gemm: - raise Exception("TODO. The current version use tf.matmul for inferencing.") - - if len(self.strides) != 2: - raise ValueError("len(strides) should be 2.") - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', strides={strides}, padding={padding}' @@ -139,21 +116,38 @@ def build(self, inputs_shape): self.filter_shape = (self.filter_size[0], self.filter_size[1], self.in_channels, self.n_filter) self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) + + self.b_init_flag = False if self.b_init: self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True - def forward(self, inputs): + self.act_init_flag = False + if self.act: + self.act_init_flag = True + + self.binaryconv2d = tl.ops.BinaryConv2D( + strides=self._strides, + padding=self.padding, + data_format=self.data_format, + dilations=self._dilation_rate, + out_channel=self.n_filter, + k_size=self.filter_size, + in_channel=self.in_channels, + ) - _W = quantize(self.W) + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True - outputs = tf.nn.conv2d( - input=inputs, filters=_W, strides=self._strides, padding=self.padding, data_format=self.data_format, - dilations=self._dilation_rate, name=self.name - ) + outputs = self.binaryconv2d(inputs, self.W) - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') - if self.act: + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: outputs = self.act(outputs) - return outputs diff --git a/tensorlayer/layers/convolution/deformable_conv.py 
b/tensorlayer/layers/convolution/deformable_conv.py index 3a8038c39..9de896cd9 100644 --- a/tensorlayer/layers/convolution/deformable_conv.py +++ b/tensorlayer/layers/convolution/deformable_conv.py @@ -1,25 +1,22 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias, private_method -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'DeformableConv2d', ] -class DeformableConv2d(Layer): +class DeformableConv2d(Module): """The :class:`DeformableConv2d` class is a 2D `Deformable Convolutional Networks `__. Parameters ---------- - offset_layer : tf.Tensor + offset_layer : tl.Tensor To predict the offset of convolution operations. The shape is (batchsize, input height, input width, 2*(number of element in the convolution kernel)) e.g. if apply a 3*3 kernel, the number of the last dimension should be 18 (2*3*3) @@ -43,8 +40,7 @@ class DeformableConv2d(Layer): Examples -------- With TensorLayer - - >>> net = tl.layers.InputLayer([5, 10, 10, 16], name='input') + >>> net = tl.layers.Input([5, 10, 10, 16], name='input') >>> offset1 = tl.layers.Conv2d( ... n_filter=18, filter_size=(3, 3), strides=(1, 1), padding='SAME', name='offset1' ... )(net) @@ -61,7 +57,6 @@ class DeformableConv2d(Layer): References ---------- - The deformation operation was adapted from the implementation in `here `__ - Notes ----- - The padding is fixed to 'SAME'. @@ -100,78 +95,12 @@ def __init__( logging.info( "DeformableConv2d %s: n_filter: %d, filter_size: %s act: %s" % ( self.name, self.n_filter, str(self.filter_size - ), self.act.__name__ if self.act is not None else 'No Activation' + ), self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) - # try: - # pre_channel = int(prev_layer.outputs.get_shape()[-1]) - # except Exception: # if pre_channel is ?, it happens when using Spatial Transformer Net - # pre_channel = 1 - # logging.info("[warnings] unknow input channels, set to 1") - # shape = (filter_size[0], filter_size[1], pre_channel, n_filter) - - # with tf.compat.v1.variable_scope(name): - # offset = self.offset_layer # .outputs - # - # # if offset.get_shape()[-1] != 2 * shape[0] * shape[1]: - # # raise AssertionError("offset.get_shape()[-1] is not equal to: %d" % 2 * shape[0] * shape[1]) - # - # # Grid initialisation - # input_h = int(self.inputs.get_shape()[1]) - # input_w = int(self.inputs.get_shape()[2]) - # # kernel_n = shape[0] * shape[1] - # initial_offsets = tf.stack( - # tf.meshgrid(tf.range(shape[0]), tf.range(shape[1]), indexing='ij') - # ) # initial_offsets --> (kh, kw, 2) - # initial_offsets = tf.reshape(initial_offsets, (-1, 2)) # initial_offsets --> (n, 2) - # initial_offsets = tf.expand_dims(initial_offsets, 0) # initial_offsets --> (1, n, 2) - # initial_offsets = tf.expand_dims(initial_offsets, 0) # initial_offsets --> (1, 1, n, 2) - # initial_offsets = tf.tile(initial_offsets, [input_h, input_w, 1, 1]) # initial_offsets --> (h, w, n, 2) - # initial_offsets = tf.cast(initial_offsets, 'float32') - # grid = tf.meshgrid( - # tf.range(-int((shape[0] - 1) / 2.0), int(input_h - int((shape[0] - 1) / 2.0)), 1), - # tf.range(-int((shape[1] - 1) / 2.0), int(input_w - int((shape[1] - 1) / 2.0)), 1), indexing='ij' - # ) - # - # grid = tf.stack(grid, axis=-1) - # grid = tf.cast(grid, 'float32') # grid --> (h, w, 2) - # grid = tf.expand_dims(grid, 2) # grid --> (h, w, 1, 2) - # grid = tf.tile(grid, [1, 1, self.kernel_n, 
1]) # grid --> (h, w, n, 2) - # grid_offset = grid + initial_offsets # grid_offset --> (h, w, n, 2) - # - # input_deform = self._tf_batch_map_offsets(self.inputs, offset, grid_offset) - # - # # W = tf.compat.v1.get_variable( - # # name='W_deformableconv2d', shape=[1, 1, shape[0] * shape[1], shape[-2], shape[-1]], initializer=W_init, - # # dtype=LayersConfig.tf_dtype, - # # ) - # - # # _tensor = tf.nn.conv3d(input_deform, W, strides=[1, 1, 1, 1, 1], padding='VALID', name=None) - # # _tensor = tf.nn.conv3d( - # # input=input_deform, - # # filters=W, - # # strides=[1, 1, 1, 1, 1], - # # padding='VALID', - # # name=None - # # ) - # - # # if b_init: - # # b = tf.compat.v1.get_variable( - # # name='b_deformableconv2d', shape=(shape[-1]), initializer=b_init, # dtype=LayersConfig.tf_dtype, - # # ) - # # - # # _tensor = tf.nn.bias_add(_tensor, b, name='bias_add') - # - # # self.outputs = tf.reshape( - # # tensor=self._apply_activation(_tensor), - # # shape=[tf.shape(input=self.inputs)[0], input_h, input_w, shape[-1]] - # # ) - # - # # self._add_layers(self.outputs) - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', padding={padding}' @@ -190,29 +119,29 @@ def build(self, inputs_shape): self.input_h = int(inputs_shape[1]) self.input_w = int(inputs_shape[2]) - initial_offsets = tf.stack( - tf.meshgrid(tf.range(self.filter_size[0]), tf.range(self.filter_size[1]), indexing='ij') + initial_offsets = tl.ops.stack( + tl.ops.meshgrid(tl.ops.range(self.filter_size[0]), tl.ops.range(self.filter_size[1]), indexing='ij') ) # initial_offsets --> (kh, kw, 2) - initial_offsets = tf.reshape(initial_offsets, (-1, 2)) # initial_offsets --> (n, 2) - initial_offsets = tf.expand_dims(initial_offsets, 0) # initial_offsets --> (1, n, 2) - initial_offsets = tf.expand_dims(initial_offsets, 0) # initial_offsets --> (1, 1, n, 2) - initial_offsets = tf.tile( + initial_offsets = tl.ops.reshape(initial_offsets, (-1, 2)) # initial_offsets --> (n, 2) + initial_offsets = tl.ops.expand_dims(initial_offsets, 0) # initial_offsets --> (1, n, 2) + initial_offsets = tl.ops.expand_dims(initial_offsets, 0) # initial_offsets --> (1, 1, n, 2) + initial_offsets = tl.ops.tile( initial_offsets, [self.input_h, self.input_w, 1, 1] ) # initial_offsets --> (h, w, n, 2) - initial_offsets = tf.cast(initial_offsets, 'float32') - grid = tf.meshgrid( - tf.range( + initial_offsets = tl.ops.cast(initial_offsets, 'float32') + grid = tl.ops.meshgrid( + tl.ops.range( -int((self.filter_size[0] - 1) / 2.0), int(self.input_h - int((self.filter_size[0] - 1) / 2.0)), 1 ), - tf.range( + tl.ops.range( -int((self.filter_size[1] - 1) / 2.0), int(self.input_w - int((self.filter_size[1] - 1) / 2.0)), 1 ), indexing='ij' ) - grid = tf.stack(grid, axis=-1) - grid = tf.cast(grid, 'float32') # grid --> (h, w, 2) - grid = tf.expand_dims(grid, 2) # grid --> (h, w, 1, 2) - grid = tf.tile(grid, [1, 1, self.kernel_n, 1]) # grid --> (h, w, n, 2) + grid = tl.ops.stack(grid, axis=-1) + grid = tl.ops.cast(grid, 'float32') # grid --> (h, w, 2) + grid = tl.ops.expand_dims(grid, 2) # grid --> (h, w, 1, 2) + grid = tl.ops.tile(grid, [1, 1, self.kernel_n, 1]) # grid --> (h, w, n, 2) self.grid_offset = grid + initial_offsets # grid_offset --> (h, w, n, 2) self.filter_shape = (1, 1, self.kernel_n, self.in_channels, self.n_filter) @@ -222,43 +151,54 @@ def 
build(self, inputs_shape): if self.b_init: self.b = self._get_weights("b_deformableconv2d", shape=(self.n_filter, ), init=self.b_init) + self.conv3d = tl.ops.Conv3D(strides=[1, 1, 1, 1, 1], padding='VALID') + self.bias_add = tl.ops.BiasAdd() + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + # shape = (filter_size[0], filter_size[1], pre_channel, n_filter) offset = self.offset_layer grid_offset = self.grid_offset input_deform = self._tf_batch_map_offsets(inputs, offset, grid_offset) - outputs = tf.nn.conv3d(input=input_deform, filters=self.W, strides=[1, 1, 1, 1, 1], padding='VALID', name=None) - outputs = tf.reshape(tensor=outputs, shape=[outputs.get_shape()[0], self.input_h, self.input_w, self.n_filter]) + outputs = self.conv3d(input=input_deform, filters=self.W) + outputs = tl.ops.reshape( + tensor=outputs, shape=[outputs.get_shape()[0], self.input_h, self.input_w, self.n_filter] + ) if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, name='bias_add') + outputs = self.bias_add(outputs, self.b) if self.act: outputs = self.act(outputs) return outputs def _to_bc_h_w(self, x, x_shape): """(b, h, w, c) -> (b*c, h, w)""" - x = tf.transpose(a=x, perm=[0, 3, 1, 2]) - x = tf.reshape(x, (-1, x_shape[1], x_shape[2])) + x = tl.ops.transpose(a=x, perm=[0, 3, 1, 2]) + x = tl.ops.reshape(x, (-1, x_shape[1], x_shape[2])) return x def _to_b_h_w_n_c(self, x, x_shape): """(b*c, h, w, n) -> (b, h, w, n, c)""" - x = tf.reshape(x, (-1, x_shape[4], x_shape[1], x_shape[2], x_shape[3])) - x = tf.transpose(a=x, perm=[0, 2, 3, 4, 1]) + x = tl.ops.reshape(x, (-1, x_shape[4], x_shape[1], x_shape[2], x_shape[3])) + x = tl.ops.transpose(a=x, perm=[0, 2, 3, 4, 1]) return x def tf_flatten(self, a): """Flatten tensor""" - return tf.reshape(a, [-1]) + return tl.ops.reshape(a, [-1]) def _get_vals_by_coords(self, inputs, coords, idx, out_shape): - indices = tf.stack( + indices = tl.ops.stack( [idx, self.tf_flatten(coords[:, :, :, :, 0]), self.tf_flatten(coords[:, :, :, :, 1])], axis=-1 ) - vals = tf.gather_nd(inputs, indices) - vals = tf.reshape(vals, out_shape) + vals = tl.ops.gather_nd(inputs, indices) + vals = tl.ops.reshape(vals, out_shape) return vals def _tf_repeat(self, a, repeats): @@ -268,50 +208,46 @@ def _tf_repeat(self, a, repeats): if len(a.get_shape()) != 1: raise AssertionError("This is not a 1D Tensor") - a = tf.expand_dims(a, -1) - a = tf.tile(a, [1, repeats]) + a = tl.ops.expand_dims(a, -1) + a = tl.ops.tile(a, [1, repeats]) a = self.tf_flatten(a) return a def _tf_batch_map_coordinates(self, inputs, coords): """Batch version of tf_map_coordinates - Only supports 2D feature maps - Parameters ---------- - inputs : ``tf.Tensor`` + inputs : ``tl.Tensor`` shape = (b*c, h, w) - coords : ``tf.Tensor`` + coords : ``tl.Tensor`` shape = (b*c, h, w, n, 2) - Returns ------- - ``tf.Tensor`` + ``tl.Tensor`` A Tensor with the shape as (b*c, h, w, n) - """ inputs_shape = inputs.get_shape() coords_shape = coords.get_shape() - batch_channel = tf.shape(input=inputs)[0] + batch_channel = tl.get_tensor_shape(inputs)[0] input_h = int(inputs_shape[1]) input_w = int(inputs_shape[2]) kernel_n = int(coords_shape[3]) n_coords = input_h * input_w * kernel_n - coords_lt = tf.cast(tf.floor(coords), 'int32') - coords_rb = tf.cast(tf.math.ceil(coords), 'int32') - coords_lb = tf.stack([coords_lt[:, :, :, :, 0], coords_rb[:, :, :, :, 1]], axis=-1) - coords_rt = tf.stack([coords_rb[:, :, :, :, 0], 
coords_lt[:, :, :, :, 1]], axis=-1) + coords_lt = tl.ops.cast(tl.ops.Floor()(coords), 'int32') + coords_rb = tl.ops.cast(tl.ops.Ceil()(coords), 'int32') + coords_lb = tl.ops.stack([coords_lt[:, :, :, :, 0], coords_rb[:, :, :, :, 1]], axis=-1) + coords_rt = tl.ops.stack([coords_rb[:, :, :, :, 0], coords_lt[:, :, :, :, 1]], axis=-1) - idx = self._tf_repeat(tf.range(batch_channel), n_coords) + idx = self._tf_repeat(tl.ops.range(batch_channel), n_coords) vals_lt = self._get_vals_by_coords(inputs, coords_lt, idx, (batch_channel, input_h, input_w, kernel_n)) vals_rb = self._get_vals_by_coords(inputs, coords_rb, idx, (batch_channel, input_h, input_w, kernel_n)) vals_lb = self._get_vals_by_coords(inputs, coords_lb, idx, (batch_channel, input_h, input_w, kernel_n)) vals_rt = self._get_vals_by_coords(inputs, coords_rt, idx, (batch_channel, input_h, input_w, kernel_n)) - coords_offset_lt = coords - tf.cast(coords_lt, 'float32') + coords_offset_lt = coords - tl.ops.cast(coords_lt, 'float32') vals_t = vals_lt + (vals_rt - vals_lt) * coords_offset_lt[:, :, :, :, 0] vals_b = vals_lb + (vals_rb - vals_lb) * coords_offset_lt[:, :, :, :, 0] @@ -321,24 +257,21 @@ def _tf_batch_map_coordinates(self, inputs, coords): def _tf_batch_map_offsets(self, inputs, offsets, grid_offset): """Batch map offsets into input - Parameters ------------ - inputs : ``tf.Tensor`` + inputs : ``tl.Tensor`` shape = (b, h, w, c) - offsets: ``tf.Tensor`` + offsets: ``tl.Tensor`` shape = (b, h, w, 2*n) - grid_offset: `tf.Tensor`` + grid_offset: `tl.Tensor`` Offset grids shape = (h, w, n, 2) - Returns ------- - ``tf.Tensor`` + ``tl.Tensor`` A Tensor with the shape as (b, h, w, c) - """ inputs_shape = inputs.get_shape() - batch_size = tf.shape(input=inputs)[0] + batch_size = tl.get_tensor_shape(inputs)[0] kernel_n = int(int(offsets.get_shape()[3]) / 2) input_h = inputs_shape[1] input_w = inputs_shape[2] @@ -348,21 +281,19 @@ def _tf_batch_map_offsets(self, inputs, offsets, grid_offset): inputs = self._to_bc_h_w(inputs, inputs_shape) # offsets (b, h, w, 2*n) --> (b, h, w, n, 2) - offsets = tf.reshape(offsets, (batch_size, input_h, input_w, kernel_n, 2)) - # offsets (b, h, w, n, 2) --> (b*c, h, w, n, 2) - # offsets = tf.tile(offsets, [channel, 1, 1, 1, 1]) + offsets = tl.ops.reshape(offsets, (batch_size, input_h, input_w, kernel_n, 2)) - coords = tf.expand_dims(grid_offset, 0) # grid_offset --> (1, h, w, n, 2) - coords = tf.tile(coords, [batch_size, 1, 1, 1, 1]) + offsets # grid_offset --> (b, h, w, n, 2) + coords = tl.ops.expand_dims(grid_offset, 0) # grid_offset --> (1, h, w, n, 2) + coords = tl.ops.tile(coords, [batch_size, 1, 1, 1, 1]) + offsets # grid_offset --> (b, h, w, n, 2) # clip out of bound - coords = tf.stack( + coords = tl.ops.stack( [ - tf.clip_by_value(coords[:, :, :, :, 0], 0.0, tf.cast(input_h - 1, 'float32')), - tf.clip_by_value(coords[:, :, :, :, 1], 0.0, tf.cast(input_w - 1, 'float32')) + tl.ops.clip_by_value(coords[:, :, :, :, 0], 0.0, tl.ops.cast(input_h - 1, 'float32')), + tl.ops.clip_by_value(coords[:, :, :, :, 1], 0.0, tl.ops.cast(input_w - 1, 'float32')) ], axis=-1 ) - coords = tf.tile(coords, [channel, 1, 1, 1, 1]) + coords = tl.ops.tile(coords, [channel, 1, 1, 1, 1]) mapped_vals = self._tf_batch_map_coordinates(inputs, coords) # (b*c, h, w, n) --> (b, h, w, n, c) diff --git a/tensorlayer/layers/convolution/depthwise_conv.py b/tensorlayer/layers/convolution/depthwise_conv.py index 4f963d317..e84e0d062 100644 --- a/tensorlayer/layers/convolution/depthwise_conv.py +++ 
b/tensorlayer/layers/convolution/depthwise_conv.py @@ -1,21 +1,17 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import LayersConfig +from tensorlayer.layers.core import Module +from tensorlayer.backend import BACKEND __all__ = [ 'DepthwiseConv2d', ] -class DepthwiseConv2d(Layer): +class DepthwiseConv2d(Module): """Separable/Depthwise Convolutional 2D layer, see `tf.nn.depthwise_conv2d `__. Input: @@ -54,7 +50,7 @@ class DepthwiseConv2d(Layer): >>> net = tl.layers.Input([8, 200, 200, 32], name='input') >>> depthwiseconv2d = tl.layers.DepthwiseConv2d( - ... filter_size=(3, 3), strides=(1, 1), dilation_rate=(2, 2), act=tf.nn.relu, depth_multiplier=2, name='depthwise' + ... filter_size=(3, 3), strides=(1, 1), dilation_rate=(2, 2), act=tl.ReLU, depth_multiplier=2, name='depthwise' ... )(net) >>> print(depthwiseconv2d) >>> output shape : (8, 200, 200, 64) @@ -100,12 +96,12 @@ def __init__( logging.info( "DepthwiseConv2d %s: filter_size: %s strides: %s pad: %s act: %s" % ( self.name, str(filter_size), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', strides={strides}, padding={padding}' @@ -128,35 +124,47 @@ def build(self, inputs_shape): if self.in_channels is None: self.in_channels = inputs_shape[-1] self._strides = [1, self._strides[0], self._strides[1], 1] - self._dilation_rate = [1, self._dilation_rate[0], self._dilation_rate[1], 1] elif self.data_format == 'channels_first': self.data_format = 'NCHW' if self.in_channels is None: self.in_channels = inputs_shape[1] self._strides = [1, 1, self._strides[0], self._strides[1]] - self._dilation_rate = [1, 1, self._dilation_rate[0], self._dilation_rate[1]] else: raise Exception("data_format should be either channels_last or channels_first") self.filter_shape = (self.filter_size[0], self.filter_size[1], self.in_channels, self.depth_multiplier) - self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) + # Set the size of kernel as (K1,K2), then the shape is (K,Cin,K1,K2), K must be 1. 
+ if BACKEND == 'mindspore': + self.filter_shape = (self.filter_size[0], self.filter_size[1], self.in_channels, 1) - if self.b_init: - self.b = self._get_weights("biases", shape=(self.in_channels * self.depth_multiplier), init=self.b_init) + self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init, transposed=True) - def forward(self, inputs): - outputs = tf.nn.depthwise_conv2d( - input=inputs, - filter=self.W, - strides=self._strides, - padding=self.padding, - data_format=self.data_format, - dilations=self.dilation_rate, - name=self.name, + self.depthwise_conv2d = tl.ops.DepthwiseConv2d( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate, + ksize=self.filter_size, channel_multiplier=self.depth_multiplier ) + + self.b_init_flag = False if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') + self.b = self._get_weights("biases", shape=(self.in_channels * self.depth_multiplier, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.act_init_flag = False if self.act: + self.act_init_flag = True + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.depthwise_conv2d(input=inputs, filter=self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/convolution/dorefa_conv.py b/tensorlayer/layers/convolution/dorefa_conv.py index bc80f5e3a..de82b50c3 100644 --- a/tensorlayer/layers/convolution/dorefa_conv.py +++ b/tensorlayer/layers/convolution/dorefa_conv.py @@ -1,18 +1,16 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer -from tensorlayer.layers.utils import cabs, quantize_active, quantize_weight +from tensorlayer.layers.core import Module -__all__ = ['DorefaConv2d'] +__all__ = [ + 'DorefaConv2d', +] -class DorefaConv2d(Layer): +class DorefaConv2d(Module): """The :class:`DorefaConv2d` class is a 2D quantized convolutional layer, which weights are 'bitW' bits and the output of the previous layer are 'bitA' bits while inferencing. @@ -35,9 +33,6 @@ class DorefaConv2d(Layer): The activation function of this layer. padding : str The padding algorithm type: "SAME" or "VALID". - use_gemm : boolean - If True, use gemm instead of ``tf.matmul`` for inferencing. - TODO: support gemm data_format : str "channels_last" (NHWC, default) or "channels_first" (NCHW). dilation_rate : tuple of int @@ -56,8 +51,8 @@ class DorefaConv2d(Layer): With TensorLayer >>> net = tl.layers.Input([8, 12, 12, 32], name='input') - >>> dorefaconv2d = tl.layers.QuanConv2d( - ... n_filter=32, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='dorefaconv2d' + >>> dorefaconv2d = tl.layers.DorefaConv2d( + ... n_filter=32, filter_size=(5, 5), strides=(1, 1), act=tl.ReLU, padding='SAME', name='dorefaconv2d' ... 
)(net) >>> print(dorefaconv2d) >>> output shape : (8, 12, 12, 32) @@ -73,7 +68,6 @@ def __init__( strides=(1, 1), act=None, padding='SAME', - use_gemm=False, data_format="channels_last", dilation_rate=(1, 1), W_init=tl.initializers.truncated_normal(stddev=0.02), @@ -88,7 +82,6 @@ def __init__( self.filter_size = filter_size self.strides = self._strides = strides self.padding = padding - self.use_gemm = use_gemm self.data_format = data_format self.dilation_rate = self._dilation_rate = dilation_rate self.W_init = W_init @@ -102,18 +95,12 @@ def __init__( logging.info( "DorefaConv2d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( self.name, n_filter, str(filter_size), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) - if self.use_gemm: - raise Exception("TODO. The current version use tf.matmul for inferencing.") - - if len(self.strides) != 2: - raise ValueError("len(strides) should be 2.") - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', strides={strides}, padding={padding}' @@ -147,23 +134,35 @@ def build(self, inputs_shape): self.filter_shape = (self.filter_size[0], self.filter_size[1], self.in_channels, self.n_filter) self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) + + self.b_init_flag = False if self.b_init: self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True - def forward(self, inputs): + self.act_init_flag = False + if self.act: + self.act_init_flag = True - inputs = quantize_active(cabs(inputs), self.bitA) # Do not remove + self.dorefaconv2d = tl.ops.DorefaConv2D( + bitW=self.bitW, bitA=self.bitA, strides=self._strides, padding=self.padding, data_format=self.data_format, + dilations=self._dilation_rate, out_channel=self.n_filter, k_size=self.filter_size, + in_channel=self.in_channels + ) - W_ = quantize_weight(self.W, self.bitW) + def forward(self, inputs): - outputs = tf.nn.conv2d( - input=inputs, filters=W_, strides=self._strides, padding=self.padding, data_format=self.data_format, - dilations=self._dilation_rate, name=self.name - ) + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') - if self.act: - outputs = self.act(outputs) + outputs = self.dorefaconv2d(inputs, self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: + outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/convolution/expert_conv.py b/tensorlayer/layers/convolution/expert_conv.py deleted file mode 100644 index 062a2738c..000000000 --- a/tensorlayer/layers/convolution/expert_conv.py +++ /dev/null @@ -1,372 +0,0 @@ -#! 
/usr/bin/python -# -*- coding: utf-8 -*- - -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import LayersConfig - -__all__ = [ - 'Conv1dLayer', - 'Conv2dLayer', - 'Conv3dLayer', -] - - -class Conv1dLayer(Layer): - """ - The :class:`Conv1dLayer` class is a 1D CNN layer, see `tf.nn.conv1d `__. - - Parameters - ---------- - act : activation function - The activation function of this layer. - shape : tuple of int - The shape of the filters: (filter_length, in_channels, out_channels). - stride : int - The number of entries by which the filter is moved right at a step. - padding : str - The padding algorithm type: "SAME" or "VALID". - data_format : str - 'NWC' or 'NCW', Default is 'NWC' as it is a 1D CNN. - dilation_rate : int - Filter up-sampling/input down-sampling rate. - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip biases. - name : None or str - A unique layer name - - Notes - ----- - - shape = [w, the number of output channel of previous layer, the number of output channels] - - the number of output channel of a layer is its last dimension. - - Examples - -------- - With TensorLayer - - >>> net = tl.layers.Input([8, 100, 1], name='input') - >>> conv1d = tl.layers.Conv1dLayer(shape=(5, 1, 32), stride=2, b_init=None, name='conv1d_1') - >>> print(conv1d) - >>> tensor = tl.layers.Conv1dLayer(shape=(5, 1, 32), stride=2, act=tf.nn.relu, name='conv1d_2')(net) - >>> print(tensor) - - """ - - def __init__( - self, - act=None, - shape=(5, 1, 5), - stride=1, - padding='SAME', - data_format='NWC', - dilation_rate=1, - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - name=None # 'cnn1d_layer', - ): - super().__init__(name, act=act) - self.n_filter = shape[-1] - self.filter_size = shape[0] - self.shape = shape - self.stride = stride - self.dilation_rate = dilation_rate - self.padding = padding - self.data_format = data_format - self.W_init = W_init - self.b_init = b_init - self.in_channels = shape[-2] - - self.build(None) - self._built = True - - logging.info( - "Conv1dLayer %s: shape: %s stride: %s pad: %s act: %s" % ( - self.name, str(shape), str(stride), padding, - self.act.__name__ if self.act is not None else 'No Activation' - ) - ) - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', stride={stride}, padding={padding}' - ) - if self.dilation_rate != 1: - s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format(classname=self.__class__.__name__, **self.__dict__) - - def build(self, inputs_shape): - self.W = self._get_weights("filters", shape=self.shape, init=self.W_init) - if self.b_init: - self.b = self._get_weights("biases", shape=(self.n_filter), init=self.b_init) - - def forward(self, inputs): - - outputs = tf.nn.conv1d( - input=inputs, - filters=self.W, - stride=self.stride, - padding=self.padding, - dilations=[ - self.dilation_rate, - ], - data_format=self.data_format, - name=self.name, - ) - - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') 
- if self.act: - outputs = self.act(outputs) - return outputs - - -class Conv2dLayer(Layer): - """ - The :class:`Conv2dLayer` class is a 2D CNN layer, see `tf.nn.conv2d `__. - - Parameters - ---------- - act : activation function - The activation function of this layer. - shape : tuple of int - The shape of the filters: (filter_height, filter_width, in_channels, out_channels). - strides : tuple of int - The sliding window strides of corresponding input dimensions. - It must be in the same order as the ``shape`` parameter. - padding : str - The padding algorithm type: "SAME" or "VALID". - data_format : str - "NHWC" or "NCHW", default is "NHWC". - dilation_rate : tuple of int - Filter up-sampling/input down-sampling rate. - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip biases. - name : None or str - A unique layer name. - - Notes - ----- - - shape = [h, w, the number of output channel of previous layer, the number of output channels] - - the number of output channel of a layer is its last dimension. - - Examples - -------- - With TensorLayer - - >>> net = tl.layers.Input([8, 28, 28, 1], name='input') - >>> conv2d = tl.layers.Conv2dLayer(shape=(5, 5, 1, 32), strides=(1, 1, 1, 1), b_init=None, name='conv2d_1') - >>> print(conv2d) - >>> tensor = tl.layers.Conv2dLayer(shape=(5, 5, 1, 32), strides=(1, 1, 1, 1), act=tf.nn.relu, name='conv2d_2')(net) - >>> print(tensor) - - """ - - def __init__( - self, - act=None, - shape=(5, 5, 1, 100), - strides=(1, 1, 1, 1), - padding='SAME', - data_format='NHWC', - dilation_rate=(1, 1, 1, 1), - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - name=None # 'cnn2d_layer', - ): - super().__init__(name, act=act) - self.n_filter = shape[-1] - self.filter_size = (shape[0], shape[1]) - self.shape = shape - self.strides = strides - self.dilation_rate = dilation_rate - self.padding = padding - self.data_format = data_format - self.W_init = W_init - self.b_init = b_init - self.in_channels = shape[-2] - - self.build(None) - self._built = True - - logging.info( - "Conv2dLayer %s: shape: %s strides: %s pad: %s act: %s" % ( - self.name, str(shape), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' - ) - ) - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' - ) - if self.dilation_rate != [ - 1, - ] * len(self.dilation_rate): - s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format(classname=self.__class__.__name__, **self.__dict__) - - def build(self, inputs): - self.W = self._get_weights("filters", shape=self.shape, init=self.W_init) - if self.b_init: - self.b = self._get_weights("biases", shape=(self.n_filter), init=self.b_init) - - def forward(self, inputs): - outputs = tf.nn.conv2d( - input=inputs, - filters=self.W, - strides=self.strides, - padding=self.padding, - data_format=self.data_format, - dilations=list(self.dilation_rate), - name=self.name, - ) - - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') - if self.act: - outputs = self.act(outputs) - return outputs - - -class Conv3dLayer(Layer): - """ - The 
:class:`Conv3dLayer` class is a 3D CNN layer, see `tf.nn.conv3d `__. - - Parameters - ---------- - act : activation function - The activation function of this layer. - shape : tuple of int - Shape of the filters: (filter_depth, filter_height, filter_width, in_channels, out_channels). - strides : tuple of int - The sliding window strides for corresponding input dimensions. - Must be in the same order as the shape dimension. - padding : str - The padding algorithm type: "SAME" or "VALID". - data_format : str - "NDHWC" or "NCDHW", default is "NDHWC". - dilation_rate : tuple of int - Filter up-sampling/input down-sampling rate. - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip biases. - name : None or str - A unique layer name. - - Notes - ----- - - shape = [d, h, w, the number of output channel of previous layer, the number of output channels] - - the number of output channel of a layer is its last dimension. - - Examples - -------- - With TensorLayer - - >>> net = tl.layers.Input([8, 100, 100, 100, 3], name='input') - >>> conv3d = tl.layers.Conv3dLayer(shape=(2, 2, 2, 3, 32), strides=(1, 2, 2, 2, 1), b_init=None, name='conv3d_1') - >>> print(conv3d) - >>> tensor = tl.layers.Conv3dLayer(shape=(2, 2, 2, 3, 32), strides=(1, 2, 2, 2, 1), act=tf.nn.relu, name='conv3d_2')(net) - >>> print(tensor) - - """ - - def __init__( - self, - act=None, - shape=(2, 2, 2, 3, 32), - strides=(1, 2, 2, 2, 1), - padding='SAME', - data_format='NDHWC', - dilation_rate=(1, 1, 1, 1, 1), - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - name=None # 'cnn3d_layer' - ): - super().__init__(name, act=act) - self.n_filter = shape[-1] - self.filter_size = (shape[0], shape[1], shape[2]) - self.shape = shape - self.strides = strides - self.padding = padding - self.data_format = data_format - self.dilation_rate = dilation_rate - self.W_init = W_init - self.b_init = b_init - self.in_channels = shape[-2] - - self.build(None) - self._built = True - - logging.info( - "Conv3dLayer %s: shape: %s strides: %s pad: %s act: %s" % ( - self.name, str(shape), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' - ) - ) - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' - ) - if self.dilation_rate != [ - 1, - ] * len(self.dilation_rate): - s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format(classname=self.__class__.__name__, **self.__dict__) - - def build(self, inputs): - - self.W = self._get_weights("filters", shape=self.shape, init=self.W_init) - if self.b_init: - self.b = self._get_weights("biases", shape=(self.n_filter), init=self.b_init) - - def forward(self, inputs): - outputs = tf.nn.conv3d( - input=inputs, - filters=self.W, - strides=self.strides, - padding=self.padding, - data_format=self.data_format, #'NDHWC', - dilations=list(self.dilation_rate), #[1, 1, 1, 1, 1], - name=self.name, - ) - - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') - if self.act: - outputs = self.act(outputs) - return outputs diff --git a/tensorlayer/layers/convolution/expert_deconv.py 
b/tensorlayer/layers/convolution/expert_deconv.py deleted file mode 100644 index ace1f221b..000000000 --- a/tensorlayer/layers/convolution/expert_deconv.py +++ /dev/null @@ -1,397 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- - -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import LayersConfig - -__all__ = [ - 'DeConv1dLayer', - 'DeConv2dLayer', - 'DeConv3dLayer', -] - - -class DeConv1dLayer(Layer): - """A de-convolution 1D layer. - - See `tf.nn.conv1d_transpose `__. - - Parameters - ---------- - act : activation function or None - The activation function of this layer. - shape : tuple of int - Shape of the filters: (height, width, output_channels, in_channels). - The filter's ``in_channels`` dimension must match that of value. - outputs_shape : tuple of int - Output shape of the deconvolution, - strides : tuple of int - The sliding window strides for corresponding input dimensions. - padding : str - The padding algorithm type: "SAME" or "VALID". - data_format : str - "NWC" or "NCW", default is "NWC". - dilation_rate : int - Filter up-sampling/input down-sampling rate. - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip biases. - name : None or str - A unique layer name. - - Notes - ----- - - shape = [w, the number of output channels of this layer, the number of output channel of the previous layer]. - - outputs_shape = [batch_size, any, the number of output channels of this layer]. - - the number of output channel of a layer is its last dimension. - - Examples - -------- - >>> input_layer = Input([8, 25, 32], name='input_layer') - >>> deconv1d = tl.layers.DeConv1dLayer( - ... shape=(5, 64, 32), outputs_shape=(8, 50, 64), strides=(1, 2, 1), name='deconv1dlayer' - ... ) - >>> print(deconv1d) - >>> tensor = tl.layers.DeConv1dLayer( - ... shape=(5, 64, 32), outputs_shape=(8, 50, 64), strides=(1, 2, 1), name='deconv1dlayer' - ... 
)(input_layer) - >>> print(tensor) - >>> output shape : (8, 50, 64) - - """ - - def __init__( - self, - act=None, - shape=(3, 128, 256), - outputs_shape=(1, 256, 128), - strides=(1, 2, 1), - padding='SAME', - data_format='NWC', - dilation_rate=(1, 1, 1), - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - name=None # 'decnn1d_layer', - ): - super().__init__(name, act=act) - self.shape = shape - self.outputs_shape = outputs_shape - self.strides = strides - self.padding = padding - self.data_format = data_format - self.dilation_rate = dilation_rate - self.W_init = W_init - self.b_init = b_init - self.in_channels = self.shape[-1] - - self.build(None) - self._built = True - - logging.info( - "DeConv1dLayer %s: shape: %s out_shape: %s strides: %s pad: %s act: %s" % ( - self.name, str(shape), str(outputs_shape), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' - ) - ) - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' - ) - if self.dilation_rate != (1, ) * len(self.dilation_rate): - s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format( - classname=self.__class__.__name__, n_filter=self.shape[-2], filter_size=self.shape[0], **self.__dict__ - ) - - def build(self, inputs): - self.W = self._get_weights("filters", shape=self.shape, init=self.W_init) - if self.b_init: - self.b = self._get_weights("biases", shape=(self.shape[-2]), init=self.b_init) - - def forward(self, inputs): - outputs = tf.nn.conv1d_transpose( - input=inputs, - filters=self.W, - output_shape=self.outputs_shape, - strides=list(self.strides), - padding=self.padding, - data_format=self.data_format, - dilations=list(self.dilation_rate), - name=self.name, - ) - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') - if self.act: - outputs = self.act(outputs) - return outputs - - -class DeConv2dLayer(Layer): - """A de-convolution 2D layer. - - See `tf.nn.conv2d_transpose `__. - - Parameters - ---------- - act : activation function or None - The activation function of this layer. - shape : tuple of int - Shape of the filters: (height, width, output_channels, in_channels). - The filter's ``in_channels`` dimension must match that of value. - outputs_shape : tuple of int - Output shape of the deconvolution, - strides : tuple of int - The sliding window strides for corresponding input dimensions. - padding : str - The padding algorithm type: "SAME" or "VALID". - data_format : str - "NHWC" or "NCHW", default is "NHWC". - dilation_rate : tuple of int - Filter up-sampling/input down-sampling rate. - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip biases. - name : None or str - A unique layer name. - - Notes - ----- - - shape = [h, w, the number of output channels of this layer, the number of output channel of the previous layer]. - - outputs_shape = [batch_size, any, any, the number of output channels of this layer]. - - the number of output channel of a layer is its last dimension. 
- - Examples - -------- - With TensorLayer - - TODO: Add the example code of a part of the generator in DCGAN example - - U-Net - - >>> .... - >>> conv10 = tl.layers.Conv2dLayer( - ... act=tf.nn.relu, - ... shape=(3, 3, 1024, 1024), strides=(1, 1, 1, 1), padding='SAME', - ... W_init=w_init, b_init=b_init, name='conv10' - ... )(conv9) - >>> print(conv10) - (batch_size, 32, 32, 1024) - >>> deconv1 = tl.layers.DeConv2dLayer( - ... act=tf.nn.relu, - ... shape=(3, 3, 512, 1024), strides=(1, 2, 2, 1), outputs_shape=(batch_size, 64, 64, 512), - ... padding='SAME', W_init=w_init, b_init=b_init, name='devcon1_1' - ... )(conv10) - - """ - - def __init__( - self, - act=None, - shape=(3, 3, 128, 256), - outputs_shape=(1, 256, 256, 128), - strides=(1, 2, 2, 1), - padding='SAME', - data_format='NHWC', - dilation_rate=(1, 1, 1, 1), - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - name=None # 'decnn2d_layer', - ): - super().__init__(name, act=act) - self.shape = shape - self.outputs_shape = outputs_shape - self.strides = strides - self.padding = padding - self.data_format = data_format - self.dilation_rate = dilation_rate - self.W_init = W_init - self.b_init = b_init - self.in_channels = self.shape[-1] - - self.build(None) - self._built = True - - logging.info( - "DeConv2dLayer %s: shape: %s out_shape: %s strides: %s pad: %s act: %s" % ( - self.name, str(shape), str(outputs_shape), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' - ) - ) - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' - ) - if self.dilation_rate != (1, ) * len(self.dilation_rate): - s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format( - classname=self.__class__.__name__, n_filter=self.shape[-2], filter_size=(self.shape[0], self.shape[1]), - **self.__dict__ - ) - - def build(self, inputs): - self.W = self._get_weights("filters", shape=self.shape, init=self.W_init) - if self.b_init: - self.b = self._get_weights("biases", shape=(self.shape[-2]), init=self.b_init) - - def forward(self, inputs): - outputs = tf.nn.conv2d_transpose( - input=inputs, - filters=self.W, - output_shape=self.outputs_shape, - strides=self.strides, - padding=self.padding, - data_format=self.data_format, - dilations=list(self.dilation_rate), - name=self.name, - ) - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') - if self.act: - outputs = self.act(outputs) - return outputs - - -class DeConv3dLayer(Layer): - """A de-convolution 3D layer. - - See `tf.nn.conv3d_transpose `__. - - Parameters - ---------- - act : activation function or None - The activation function of this layer. - shape : tuple of int - The shape of the filters: (depth, height, width, output_channels, in_channels). - The filter's in_channels dimension must match that of value. - outputs_shape : tuple of int - The output shape of the deconvolution. - strides : tuple of int - The sliding window strides for corresponding input dimensions. - padding : str - The padding algorithm type: "SAME" or "VALID". - data_format : str - "NDHWC" or "NCDHW", default is "NDHWC". 
- dilation_rate : tuple of int - Filter up-sampling/input down-sampling rate. - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip biases. - name : None or str - A unique layer name. - - Notes - ----- - - shape = [d, h, w, the number of output channels of this layer, the number of output channel of the previous layer]. - - outputs_shape = [batch_size, any, any, any, the number of output channels of this layer]. - - the number of output channel of a layer is its last dimension. - - Examples - -------- - >>> input_layer = Input([8, 10, 10, 10 32], name='input_layer') - >>> deconv3d = tl.layers.DeConv3dLayer( - ... shape=(2, 2, 2, 128, 32), outputs_shape=(8, 20, 20, 20, 128), strides=(1, 2, 2, 2, 1), name='deconv3dlayer' - ... ) - >>> print(deconv3d) - >>> tensor = tl.layers.DeConv1dLayer( - ... shape=(2, 2, 2, 128, 32), outputs_shape=(8, 20, 20, 20, 128), strides=(1, 2, 2, 2, 1), name='deconv3dlayer' - ... )(input_layer) - >>> print(tensor) - >>> output shape : (8, 20, 20, 20, 128) - - """ - - def __init__( - self, - act=None, - shape=(2, 2, 2, 128, 256), - outputs_shape=(1, 12, 32, 32, 128), - strides=(1, 2, 2, 2, 1), - padding='SAME', - data_format='NDHWC', - dilation_rate=(1, 1, 1, 1, 1), - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - name=None # 'decnn3d_layer', - ): - super().__init__(name, act=act) - self.shape = shape - self.outputs_shape = outputs_shape - self.strides = strides - self.padding = padding - self.data_format = data_format - self.dilation_rate = dilation_rate - self.W_init = W_init - self.b_init = b_init - self.in_channels = self.shape[-1] - - self.build(None) - self._built = True - - logging.info( - "DeConv3dLayer %s: shape: %s out_shape: %s strides: %s pad: %s act: %s" % ( - self.name, str(shape), str(outputs_shape), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' - ) - ) - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' - ) - if self.dilation_rate != (1, ) * len(self.dilation_rate): - s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format( - classname=self.__class__.__name__, n_filter=self.shape[-2], - filter_size=(self.shape[0], self.shape[1], self.shape[2]), **self.__dict__ - ) - - def build(self, inputs): - self.W = self._get_weights("filters", shape=self.shape, init=self.W_init) - if self.b_init: - self.b = self._get_weights("biases", shape=(self.shape[-2]), init=self.b_init) - - def forward(self, inputs): - outputs = tf.nn.conv3d_transpose( - input=inputs, filters=self.W, output_shape=self.outputs_shape, strides=self.strides, padding=self.padding, - data_format=self.data_format, dilations=list(self.dilation_rate), name=self.name - ) - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') - if self.act: - outputs = self.act(outputs) - return outputs diff --git a/tensorlayer/layers/convolution/group_conv.py b/tensorlayer/layers/convolution/group_conv.py index 78b7b17fa..079961e69 100644 --- a/tensorlayer/layers/convolution/group_conv.py +++ b/tensorlayer/layers/convolution/group_conv.py @@ -1,86 +1,71 @@ 
 #! /usr/bin/python
 # -*- coding: utf-8 -*-

-import tensorflow as tf
-
 import tensorlayer as tl
 from tensorlayer import logging
-from tensorlayer.decorators import deprecated_alias
-from tensorlayer.layers.core import Layer
-
-# from tensorlayer.layers.core import LayersConfig
+from tensorlayer.layers.core import Module
+from tensorlayer.backend import BACKEND

 __all__ = [
     'GroupConv2d',
 ]


-class GroupConv2d(Layer):
+class GroupConv2d(Module):
     """The :class:`GroupConv2d` class is 2D grouped convolution, see `here `__.

-    Parameters
-    --------------
-    n_filter : int
-        The number of filters.
-    filter_size : tuple of int
-        The filter size.
-    strides : tuple of int
-        The stride step.
-    n_group : int
-        The number of groups.
-    act : activation function
-        The activation function of this layer.
-    padding : str
-        The padding algorithm type: "SAME" or "VALID".
-    data_format : str
-        "channels_last" (NHWC, default) or "channels_first" (NCHW).
-    dilation_rate : tuple of int
-        Specifying the dilation rate to use for dilated convolution.
-    W_init : initializer
-        The initializer for the weight matrix.
-    b_init : initializer or None
-        The initializer for the bias vector. If None, skip biases.
-    in_channels : int
-        The number of in channels.
-    name : None or str
-        A unique layer name.
-
-    Examples
-    ---------
-    With TensorLayer
-
-    >>> net = tl.layers.Input([8, 24, 24, 32], name='input')
-    >>> groupconv2d = tl.layers.QuanConv2d(
-    ...     n_filter=64, filter_size=(3, 3), strides=(2, 2), n_group=2, name='group'
-    ... )(net)
-    >>> print(groupconv2d)
-    >>> output shape : (8, 12, 12, 64)
-
-    """
+    Parameters
+    --------------
+    n_filter : int
+        The number of filters.
+    filter_size : tuple of int
+        The filter size.
+    strides : tuple of int
+        The stride step.
+    n_group : int
+        The number of groups.
+    act : activation function
+        The activation function of this layer.
+    padding : str
+        The padding algorithm type: "SAME" or "VALID".
+    data_format : str
+        "channels_last" (NHWC, default) or "channels_first" (NCHW).
+    dilation_rate : tuple of int
+        Specifying the dilation rate to use for dilated convolution.
+    W_init : initializer
+        The initializer for the weight matrix.
+    b_init : initializer or None
+        The initializer for the bias vector. If None, skip biases.
+    in_channels : int
+        The number of in channels.
+    name : None or str
+        A unique layer name.
+
+    Examples
+    ---------
+    With TensorLayer
+    >>> net = tl.layers.Input([8, 24, 24, 32], name='input')
+    >>> groupconv2d = tl.layers.GroupConv2d(
+    ...     n_filter=64, filter_size=(3, 3), strides=(2, 2), n_group=2, name='group'
+    ... )(net)
+    >>> print(groupconv2d)
+    >>> output shape : (8, 12, 12, 64)
+
+    """
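For intuition only: a sketch of the shape bookkeeping that ``build`` enforces below. Each of the ``n_group`` branches convolves ``in_channels / n_group`` input channels with its own filter slice, so both channel counts must divide evenly by ``n_group``; the numbers here are illustrative.

in_channels, n_filter, n_group = 32, 64, 2
assert in_channels % n_group == 0 and n_filter % n_group == 0
filter_shape = (3, 3, in_channels // n_group, n_filter)  # (3, 3, 16, 64)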
)(net) + >>> print(groupconv2d) + >>> output shape : (8, 12, 12, 64) + + """ def __init__( - self, - n_filter=32, - filter_size=(3, 3), - strides=(2, 2), - n_group=2, - act=None, - padding='SAME', - data_format='channels_last', - dilation_rate=(1, 1), - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - in_channels=None, - name=None # 'groupconv', - ): # Windaway + self, n_filter=32, filter_size=(1, 1), strides=(1, 1), n_group=1, act=None, padding='SAME', + data_format="channels_last", dilation_rate=(1, 1), W_init=tl.initializers.truncated_normal(stddev=0.02), + b_init=tl.initializers.constant(value=0.0), in_channels=None, name=None + ): super().__init__(name, act=act) self.n_filter = n_filter self.filter_size = filter_size - self.strides = self._strides = strides + self._strides = self.strides = strides self.n_group = n_group self.padding = padding self.data_format = data_format - self.dilation_rate = self._dilation_rate = dilation_rate + self._dilation_rate = self.dilation_rate = dilation_rate self.W_init = W_init self.b_init = b_init self.in_channels = in_channels @@ -90,30 +75,29 @@ def __init__( self._built = True logging.info( - "GroupConv2d %s: n_filter: %d size: %s strides: %s n_group: %d pad: %s act: %s" % ( + "GroupConv2d %s: n_filter: %d filter_size: %s strides: %s n_group: %d pad: %s act: %s" % ( self.name, n_filter, str(filter_size), str(strides), n_group, padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else "No Activation" s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' + ', strides={strides}, n_group = {n_group}, padding={padding}' ) if self.dilation_rate != (1, ) * len(self.dilation_rate): - s += ', dilation={dilation_rate}' + s += ', dilation = {dilation_rate}' if self.b_init is None: s += ', bias=False' s += (', ' + actstr) if self.name is not None: s += ', name=\'{name}\'' s += ')' return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - if self.data_format == 'channels_last': self.data_format = 'NHWC' if self.in_channels is None: @@ -129,29 +113,55 @@ def build(self, inputs_shape): else: raise Exception("data_format should be either channels_last or channels_first") - self.groupConv = lambda i, k: tf.nn.conv2d( - i, k, strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self. - _dilation_rate, name=self.name - ) + if self.n_group < 1: + raise ValueError( + "The n_group must be an integer greater than or equal to 1, but we got: {}".format(self.n_group) + ) + + if self.in_channels % self.n_group != 0: + raise ValueError( + "The channels of input must be divisible by n_group, but we got: the channels of input " + "is {}, the n_group is {}.".format(self.in_channels, self.n_group) + ) + + if self.n_filter % self.n_group != 0: + raise ValueError( + "The number of filters must be divisible by n_group, but we got: the number of filters " + "is {}, the n_group is {}. 
".format(self.n_filter, self.n_group) + ) + # TODO channels first filter shape [out_channel, in_channel/n_group, filter_h, filter_w] self.filter_shape = ( self.filter_size[0], self.filter_size[1], int(self.in_channels / self.n_group), self.n_filter ) - self.We = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) - if self.b_init: - self.b = self._get_weights("biases", shape=self.n_filter, init=self.b_init) + self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) - def forward(self, inputs): - if self.n_group == 1: - outputs = self.groupConv(inputs, self.We) - else: - inputGroups = tf.split(axis=3, num_or_size_splits=self.n_group, value=inputs) - weightsGroups = tf.split(axis=3, num_or_size_splits=self.n_group, value=self.We) - convGroups = [self.groupConv(i, k) for i, k in zip(inputGroups, weightsGroups)] - outputs = tf.concat(axis=3, values=convGroups) + self.b_init_flag = False if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') + self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.group_conv2d = tl.ops.GroupConv2D( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate, + out_channel=self.n_filter, k_size=(self.filter_size[0], self.filter_size[1]), groups=self.n_group + ) + + self.act_init_flag = False if self.act: + self.act_init_flag = True + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.group_conv2d(inputs, self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/convolution/quan_conv.py b/tensorlayer/layers/convolution/quan_conv.py index 6d17376c8..f89c64851 100644 --- a/tensorlayer/layers/convolution/quan_conv.py +++ b/tensorlayer/layers/convolution/quan_conv.py @@ -1,18 +1,15 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module from tensorlayer.layers.utils import (quantize_active_overflow, quantize_weight_overflow) __all__ = ['QuanConv2d'] -class QuanConv2d(Layer): +class QuanConv2d(Module): """The :class:`QuanConv2d` class is a quantized convolutional layer without BN, which weights are 'bitW' bits and the output of the previous layer are 'bitA' bits while inferencing. Note that, the bias vector would not be binarized. @@ -58,7 +55,7 @@ class QuanConv2d(Layer): >>> net = tl.layers.Input([8, 12, 12, 64], name='input') >>> quanconv2d = tl.layers.QuanConv2d( - ... n_filter=32, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='quancnn2d' + ... n_filter=32, filter_size=(5, 5), strides=(1, 1), act=tl.ReLU, padding='SAME', name='quancnn2d' ... 
)(net) >>> print(quanconv2d) >>> output shape : (8, 12, 12, 32) @@ -103,7 +100,7 @@ def __init__( logging.info( "QuanConv2d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( self.name, n_filter, str(filter_size), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) @@ -150,20 +147,27 @@ def build(self, inputs_shape): self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) if self.b_init: self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(data_format=self.data_format) + + self.conv2d = tl.ops.Conv2D( + strides=self.strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate + ) def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True - inputs = quantize_active_overflow(inputs, self.bitA) # Do not remove + inputs = quantize_active_overflow(inputs, self.bitA) W_ = quantize_weight_overflow(self.W, self.bitW) - outputs = tf.nn.conv2d( - input=inputs, filters=W_, strides=self.strides, padding=self.padding, data_format=self.data_format, - dilations=self._dilation_rate, name=self.name - ) + outputs = self.conv2d(inputs, W_) if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') + outputs = self.bias_add(outputs, self.b) if self.act: outputs = self.act(outputs) diff --git a/tensorlayer/layers/convolution/quan_conv_bn.py b/tensorlayer/layers/convolution/quan_conv_bn.py index df20a6835..cec940d52 100644 --- a/tensorlayer/layers/convolution/quan_conv_bn.py +++ b/tensorlayer/layers/convolution/quan_conv_bn.py @@ -1,21 +1,18 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import numpy as np import tensorflow as tf -from tensorflow.python.training import moving_averages - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module +from tensorflow.python.training import moving_averages from tensorlayer.layers.utils import (quantize_active_overflow, quantize_weight_overflow) - -# from tensorlayer.layers.core import LayersConfig +from tensorlayer.backend import BACKEND __all__ = ['QuanConv2dWithBN'] -class QuanConv2dWithBN(Layer): +class QuanConv2dWithBN(Module): """The :class:`QuanConv2dWithBN` class is a quantized convolutional layer with BN, which weights are 'bitW' bits and the output of the previous layer are 'bitA' bits while inferencing. 
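For reference, the quantized forward pass in QuanConv2d above reduces to quantizing the activations and the weights before an ordinary convolution. A hedged sketch of that order of operations; the two quantize helpers are the library's own, while `x`, `W`, `bitA`, `bitW` and `conv2d` stand in for the layer's attributes (illustrative only):
>>> from tensorlayer.layers.utils import quantize_active_overflow, quantize_weight_overflow
>>> x_q = quantize_active_overflow(x, bitA)    # clip/round activations to bitA bits
>>> W_q = quantize_weight_overflow(W, bitW)    # clip/round weights to bitW bits
>>> y = conv2d(x_q, W_q)                       # then a plain 2D convolution (plus bias/act)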
@@ -118,10 +115,13 @@ def __init__( logging.info( "QuanConv2dWithBN %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s " % ( self.name, n_filter, filter_size, str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) + if BACKEND == 'mindspore': + raise NotImplementedError("MindSpore backend does not implement this method") + if self.in_channels: self.build(None) self._built = True @@ -133,7 +133,7 @@ def __init__( raise ValueError("len(strides) should be 2.") def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', strides={strides}, padding={padding}' + actstr @@ -187,6 +187,12 @@ def build(self, inputs_shape): ) def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + x = inputs inputs = quantize_active_overflow(inputs, self.bitA) # Do not remove outputs = tf.nn.conv2d( diff --git a/tensorlayer/layers/convolution/separable_conv.py b/tensorlayer/layers/convolution/separable_conv.py index 156a5f80d..fe721b65d 100644 --- a/tensorlayer/layers/convolution/separable_conv.py +++ b/tensorlayer/layers/convolution/separable_conv.py @@ -1,14 +1,10 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import numpy as np -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer -from tensorlayer.layers.utils import get_collection_trainable +from tensorlayer.layers.core import Module +from tensorlayer.backend import BACKEND __all__ = [ 'SeparableConv1d', @@ -16,9 +12,8 @@ ] -class SeparableConv1d(Layer): +class SeparableConv1d(Module): """The :class:`SeparableConv1d` class is a 1D depthwise separable convolutional layer. - This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. Parameters @@ -29,6 +24,8 @@ class SeparableConv1d(Layer): Specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions. strides : int Specifying the stride of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. + act : activation function + The activation function of this layer. padding : str One of "valid" or "same" (case-insensitive). 
data_format : str @@ -51,43 +48,23 @@ class SeparableConv1d(Layer): Examples -------- With TensorLayer - >>> net = tl.layers.Input([8, 50, 64], name='input') - >>> separableconv1d = tl.layers.Conv1d(n_filter=32, filter_size=3, strides=2, padding='SAME', act=tf.nn.relu, name='separable_1d')(net) + >>> net = tl.layers.Input([8, 50, 64], name='input') + >>> separableconv1d = tl.layers.SeparableConv1d(n_filter=32, filter_size=3, stride=2, padding='SAME', act=tl.ReLU, name='separable_1d')(net) >>> print(separableconv1d) >>> output shape : (8, 25, 32) """ - # @deprecated_alias(layer='prev_layer', end_support_version=1.9) # TODO remove this line for the 1.9 release def __init__( - self, - n_filter=100, - filter_size=3, - strides=1, - act=None, - padding='valid', - data_format='channels_last', - dilation_rate=1, - depth_multiplier=1, - depthwise_init=None, - pointwise_init=None, - b_init=tl.initializers.constant(value=0.0), - # depthwise_regularizer=None, - # pointwise_regularizer=None, - # bias_regularizer=None, - # activity_regularizer=None, - # depthwise_constraint=None, - # pointwise_constraint=None, - # W_init=tf.truncated_normal_initializer(stddev=0.1), - # b_init=tf.constant_initializer(value=0.0), - in_channels=None, - name=None # 'seperable1d', + self, n_filter=32, filter_size=1, stride=1, act=None, padding="SAME", data_format="channels_last", + dilation_rate=1, depth_multiplier=1, depthwise_init=tl.initializers.truncated_normal(stddev=0.02), + pointwise_init=tl.initializers.truncated_normal(stddev=0.02), b_init=tl.initializers.constant(value=0.0), + in_channels=None, name=None ): - super().__init__(name, act=act) + super(SeparableConv1d, self).__init__(name, act=act) self.n_filter = n_filter self.filter_size = filter_size - self.strides = strides + self.stride = stride self.padding = padding self.data_format = data_format self.dilation_rate = dilation_rate @@ -97,15 +74,19 @@ def __init__( self.b_init = b_init self.in_channels = in_channels + if self.in_channels: + self.build(None) + self._built = True + logging.info( "SeparableConv1d %s: n_filter: %d filter_size: %s strides: %s depth_multiplier: %d act: %s" % ( - self.name, n_filter, str(filter_size), str(strides), depth_multiplier, - self.act.__name__ if self.act is not None else 'No Activation' + self.name, n_filter, str(filter_size), str(stride), depth_multiplier, + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', stride={stride}, padding={padding}' ) @@ -121,143 +102,143 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - self.layer = tf.keras.layers.SeparableConv1D( - filters=self.n_filter, - kernel_size=self.filter_size, - strides=self.strides, - padding=self.padding, - data_format=self.data_format, - dilation_rate=self.dilation_rate, - depth_multiplier=self.depth_multiplier, - activation=self.act, - use_bias=(True if self.b_init is not None else False), - depthwise_initializer=self.depthwise_init, - pointwise_initializer=self.pointwise_init, - bias_initializer=self.b_init, - # depthwise_regularizer=None, - # pointwise_regularizer=None, - # bias_regularizer=None, - # activity_regularizer=None, - # depthwise_constraint=None, - # pointwise_constraint=None, - # bias_constraint=None, - trainable=True, - 
name=self.name - ) - if self.data_format == "channels_first": - self.in_channels = inputs_shape[1] + if self.data_format == 'channels_last': + self.data_format = 'NWC' + if self.in_channels is None: + self.in_channels = inputs_shape[-1] + elif self.data_format == 'channels_first': + self.data_format = 'NCW' + if self.in_channels is None: + self.in_channels = inputs_shape[1] else: - self.in_channels = inputs_shape[-1] + raise Exception("data_format should be either channels_last or channels_first") + + if BACKEND == 'tensorflow': + self.depthwise_filter_shape = (self.filter_size, self.in_channels, self.depth_multiplier) + elif BACKEND == 'mindspore': + self.depthwise_filter_shape = (self.filter_size, 1, self.depth_multiplier * self.in_channels) + + self.pointwise_filter_shape = (1, self.depth_multiplier * self.in_channels, self.n_filter) - # _out = self.layer(np.random.uniform([1] + list(inputs_shape))) # initialize weights - _out = self.layer( - tf.convert_to_tensor(np.random.uniform(size=list(inputs_shape)), dtype=np.float) - ) # initialize weights - outputs_shape = _out.shape - # self._add_weights(self.layer.weights) - self._trainable_weights = self.layer.weights + self.depthwise_W = self._get_weights( + 'depthwise_filters', shape=self.depthwise_filter_shape, init=self.depthwise_init + ) + self.pointwise_W = self._get_weights( + 'pointwise_filters', shape=self.pointwise_filter_shape, init=self.pointwise_init + ) + + self.b_init_flag = False + if self.b_init: + self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.act_init_flag = False + if self.act: + self.activate = self.act + self.act_init_flag = True + + self.separable_conv1d = tl.ops.SeparableConv1D( + stride=self.stride, padding=self.padding, data_format=self.data_format, dilations=self.dilation_rate, + out_channel=self.n_filter, k_size=self.filter_size, in_channel=self.in_channels, + depth_multiplier=self.depth_multiplier + ) def forward(self, inputs): - outputs = self.layer(inputs) + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.separable_conv1d(inputs, self.depthwise_W, self.pointwise_W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: + outputs = self.act(outputs) return outputs -class SeparableConv2d(Layer): +class SeparableConv2d(Module): """The :class:`SeparableConv2d` class is a 2D depthwise separable convolutional layer. + This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. - This layer performs a depthwise convolution that acts separately on channels, followed by a pointwise convolution that mixes channels. - While :class:`DepthwiseConv2d` performs depthwise convolution only, which allow us to add batch normalization between depthwise and pointwise convolution. - - Parameters - ------------ - n_filter : int - The dimensionality of the output space (i.e. the number of filters in the convolution). - filter_size : tuple/list of 2 int - Specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions. - strides : tuple/list of 2 int - Specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. 
Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. - padding : str - One of "valid" or "same" (case-insensitive). - data_format : str - One of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). - dilation_rate : integer or tuple/list of 2 int - Specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. - depth_multiplier : int - The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier. - depthwise_init : initializer - for the depthwise convolution kernel. - pointwise_init : initializer - For the pointwise convolution kernel. - b_init : initializer - For the bias vector. If None, ignore bias in the pointwise part only. - in_channels : int - The number of in channels. - name : None or str - A unique layer name. - - Examples - -------- - With TensorLayer + Parameters + ------------ + n_filter : int + The dimensionality of the output space (i.e. the number of filters in the convolution). + filter_size : tuple of int + Specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions. + strides : tuple of int + Specifying the stride of the convolution. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. + act : activation function + The activation function of this layer. + padding : str + One of "valid" or "same" (case-insensitive). + data_format : str + One of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). + dilation_rate : tuple of int + Specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1. + depth_multiplier : int + The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to num_filters_in * depth_multiplier. + depthwise_init : initializer + for the depthwise convolution kernel. + pointwise_init : initializer + For the pointwise convolution kernel. + b_init : initializer + For the bias vector. If None, ignore bias in the pointwise part only. + in_channels : int + The number of in channels. + name : None or str + A unique layer name. 
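Before the examples, a note on what SeparableConv2d computes: a depthwise convolution that acts on each channel separately, followed by a 1x1 pointwise convolution that mixes channels. A minimal sketch of that factorization, assuming a TensorFlow backend and hypothetical shapes (illustrative only, not part of this patch):
>>> import tensorflow as tf
>>> x = tf.random.uniform([8, 50, 50, 64])                         # NHWC input
>>> dw = tf.random.uniform([3, 3, 64, 1])                          # depth_multiplier = 1
>>> pw = tf.random.uniform([1, 1, 64, 32])                         # pointwise mixes channels
>>> y = tf.nn.depthwise_conv2d(x, dw, strides=[1, 2, 2, 1], padding='SAME')
>>> y = tf.nn.conv2d(y, pw, strides=[1, 1, 1, 1], padding='SAME')  # (8, 25, 25, 32)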
- >>> net = tl.layers.Input([8, 50, 50, 64], name='input') - >>> separableconv2d = tl.layers.Conv1d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, padding='VALID', name='separableconv2d')(net) - >>> print(separableconv2d) - >>> output shape : (8, 24, 24, 32) + Examples + -------- + With TensorLayer + >>> net = tl.layers.Input([8, 50, 50, 64], name='input') + >>> separableconv2d = tl.layers.SeparableConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), depth_multiplier=3, padding='SAME', act=tl.ReLU, name='separable_2d')(net) + >>> print(separableconv2d) + >>> output shape : (8, 25, 25, 32) - """ + """ - # @deprecated_alias(layer='prev_layer', end_support_version=1.9) # TODO remove this line for the 1.9 release def __init__( - self, - n_filter=100, - filter_size=(3, 3), - strides=(1, 1), - act=None, - padding='valid', - data_format='channels_last', - dilation_rate=(1, 1), - depth_multiplier=1, - depthwise_init=None, - pointwise_init=None, - b_init=tl.initializers.constant(value=0.0), - # depthwise_regularizer=None, - # pointwise_regularizer=None, - # bias_regularizer=None, - # activity_regularizer=None, - # depthwise_constraint=None, - # pointwise_constraint=None, - # W_init=tf.truncated_normal_initializer(stddev=0.1), - # b_init=tf.constant_initializer(value=0.0), - in_channels=None, - name=None # 'seperable2d', + self, n_filter=32, filter_size=(1, 1), strides=(1, 1), act=None, padding="VALID", data_format="channels_last", + dilation_rate=(1, 1), depth_multiplier=1, depthwise_init=tl.initializers.truncated_normal(stddev=0.02), + pointwise_init=tl.initializers.truncated_normal(stddev=0.02), b_init=tl.initializers.constant(value=0.0), + in_channels=None, name=None ): - super().__init__(name, act=act) + super(SeparableConv2d, self).__init__(name, act=act) self.n_filter = n_filter self.filter_size = filter_size - self.strides = strides + self._strides = self.strides = strides self.padding = padding self.data_format = data_format - self.dilation_rate = dilation_rate + self._dilation_rate = self.dilation_rate = dilation_rate self.depth_multiplier = depth_multiplier self.depthwise_init = depthwise_init self.pointwise_init = pointwise_init self.b_init = b_init self.in_channels = in_channels + if self.in_channels: + self.build(None) + self._built = True + logging.info( - "SeparableConv2d %s: n_filter: %d filter_size: %s filter_size: %s depth_multiplier: %d act: %s" % ( + "SeparableConv2d %s: n_filter: %d filter_size: %s strides: %s depth_multiplier: %d act: %s" % ( self.name, n_filter, str(filter_size), str(strides), depth_multiplier, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', stride={strides}, padding={padding}' ) - if self.dilation_rate != 1: + if self.dilation_rate != (1, ) * len(self.dilation_rate): s += ', dilation={dilation_rate}' if self.b_init is None: s += ', bias=False' @@ -268,40 +249,67 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - self.layer = tf.keras.layers.SeparableConv2D( - filters=self.n_filter, - kernel_size=self.filter_size, - strides=self.strides, - padding=self.padding, 
- data_format=self.data_format, - dilation_rate=self.dilation_rate, - depth_multiplier=self.depth_multiplier, - activation=self.act, - use_bias=(True if self.b_init is not None else False), - depthwise_initializer=self.depthwise_init, - pointwise_initializer=self.pointwise_init, - bias_initializer=self.b_init, - # depthwise_regularizer=None, - # pointwise_regularizer=None, - # bias_regularizer=None, - # activity_regularizer=None, - # depthwise_constraint=None, - # pointwise_constraint=None, - # bias_constraint=None, - trainable=True, - name=self.name - ) - if self.data_format == "channels_first": - self.in_channels = inputs_shape[1] + if self.data_format == 'channels_last': + self.data_format = 'NHWC' + if self.in_channels is None: + self.in_channels = inputs_shape[-1] + self._strides = [1, self._strides[0], self._strides[1], 1] + self._dilation_rate = [1, self._dilation_rate[0], self._dilation_rate[1], 1] + elif self.data_format == 'channels_first': + self.data_format = 'NCHW' + if self.in_channels is None: + self.in_channels = inputs_shape[1] + self._strides = [1, 1, self._strides[0], self._strides[1]] + self._dilation_rate = [1, 1, self._dilation_rate[0], self._dilation_rate[1]] else: - self.in_channels = inputs_shape[-1] - # _out = self.layer(np.random.uniform([1] + list(inputs_shape))) # initialize weights - _out = self.layer( - tf.convert_to_tensor(np.random.uniform(size=list(inputs_shape)), dtype=np.float) - ) # initialize weights - outputs_shape = _out.shape - self._trainable_weights = self.layer.weights + raise Exception("data_format should be either channels_last or channels_first") + + if BACKEND == 'tensorflow': + self.depthwise_filter_shape = ( + self.filter_size[0], self.filter_size[1], self.in_channels, self.depth_multiplier + ) + self.pointwise_filter_shape = (1, 1, self.depth_multiplier * self.in_channels, self.n_filter) + + elif BACKEND == 'mindspore': + self.depthwise_filter_shape = ( + self.filter_size[0], self.filter_size[1], 1, self.depth_multiplier * self.in_channels + ) + self.pointwise_filter_shape = (1, 1, self.depth_multiplier * self.in_channels, self.n_filter) + + self.depthwise_W = self._get_weights( + 'depthwise_filters', shape=self.depthwise_filter_shape, init=self.depthwise_init + ) + + self.pointwise_W = self._get_weights( + 'pointwise_filters', shape=self.pointwise_filter_shape, init=self.pointwise_init + ) + + self.b_init_flag = False + if self.b_init: + self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.act_init_flag = False + if self.act: + self.act_init_flag = True + + self.separable_conv2d = tl.ops.SeparableConv2D( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate, + out_channel=self.n_filter, k_size=self.filter_size, in_channel=self.in_channels, + depth_multiplier=self.depth_multiplier + ) def forward(self, inputs): - outputs = self.layer(inputs) + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.separable_conv2d(inputs, self.depthwise_W, self.pointwise_W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: + outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/convolution/simplified_conv.py b/tensorlayer/layers/convolution/simplified_conv.py index fab3d5817..1c8d13adb 100644 --- 
a/tensorlayer/layers/convolution/simplified_conv.py +++ b/tensorlayer/layers/convolution/simplified_conv.py @@ -1,22 +1,21 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - +from tensorlayer.layers.core import Module import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer -from tensorlayer.layers.utils import get_collection_trainable __all__ = [ 'Conv1d', 'Conv2d', 'Conv3d', + 'DeConv1d', + 'DeConv2d', + 'DeConv3d', ] -class Conv1d(Layer): +class Conv1d(Module): """Simplified version of :class:`Conv1dLayer`. Parameters @@ -51,7 +50,7 @@ class Conv1d(Layer): >>> net = tl.layers.Input([8, 100, 1], name='input') >>> conv1d = tl.layers.Conv1d(n_filter=32, filter_size=5, stride=2, b_init=None, in_channels=1, name='conv1d_1') >>> print(conv1d) - >>> tensor = tl.layers.Conv1d(n_filter=32, filter_size=5, stride=2, act=tf.nn.relu, name='conv1d_2')(net) + >>> tensor = tl.layers.Conv1d(n_filter=32, filter_size=5, stride=2, act=tl.ReLU, name='conv1d_2')(net) >>> print(tensor) """ @@ -88,12 +87,12 @@ def __init__( logging.info( "Conv1d %s: n_filter: %d filter_size: %s stride: %d pad: %s act: %s" % ( self.name, n_filter, filter_size, stride, padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', stride={stride}, padding={padding}' @@ -125,27 +124,38 @@ def build(self, inputs_shape): # TODO : check self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) + self.b_init_flag = False if self.b_init: - self.b = self._get_weights("biases", shape=(self.n_filter), init=self.b_init) + self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True - def forward(self, inputs): - outputs = tf.nn.conv1d( - input=inputs, - filters=self.W, - stride=self.stride, - padding=self.padding, - data_format=self.data_format, - dilations=self.dilation_rate, - name=self.name, + self.conv1d = tl.ops.Conv1D( + stride=self.stride, padding=self.padding, data_format=self.data_format, dilations=self.dilation_rate, + out_channel=self.n_filter, k_size=self.filter_size ) - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') + + self.act_init_flag = False if self.act: + self.act_init_flag = True + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.conv1d(inputs, self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: outputs = self.act(outputs) + return outputs -class Conv2d(Layer): +class Conv2d(Module): """Simplified version of :class:`Conv2dLayer`. 
Parameters @@ -181,7 +191,7 @@ class Conv2d(Layer): >>> net = tl.layers.Input([8, 400, 400, 3], name='input') >>> conv2d = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), b_init=None, in_channels=3, name='conv2d_1') >>> print(conv2d) - >>> tensor = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, name='conv2d_2')(net) + >>> tensor = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tl.ReLU, name='conv2d_2')(net) >>> print(tensor) """ @@ -198,9 +208,9 @@ def __init__( W_init=tl.initializers.truncated_normal(stddev=0.02), b_init=tl.initializers.constant(value=0.0), in_channels=None, - name=None # 'conv2d', + name=None, # 'conv2d', ): - super().__init__(name, act=act) + super(Conv2d, self).__init__(name, act=act) self.n_filter = n_filter self.filter_size = filter_size self._strides = self.strides = strides @@ -218,12 +228,12 @@ def __init__( logging.info( "Conv2d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( self.name, n_filter, str(filter_size), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', strides={strides}, padding={padding}' @@ -254,35 +264,47 @@ def build(self, inputs_shape): else: raise Exception("data_format should be either channels_last or channels_first") + #TODO channels first filter shape [out_channel, in_channel, filter_h, filter_w] self.filter_shape = (self.filter_size[0], self.filter_size[1], self.in_channels, self.n_filter) - self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) + self.b_init_flag = False if self.b_init: self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True - def forward(self, inputs): - outputs = tf.nn.conv2d( - input=inputs, - filters=self.W, - strides=self._strides, - padding=self.padding, - data_format=self.data_format, #'NHWC', - dilations=self._dilation_rate, #[1, 1, 1, 1], - name=self.name, + self.conv2d = tl.ops.Conv2D( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate, + out_channel=self.n_filter, k_size=(self.filter_size[0], self.filter_size[1]) ) - if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') + + self.act_init_flag = False if self.act: + self.act_init_flag = True + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.conv2d(inputs, self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: outputs = self.act(outputs) return outputs -class Conv3d(Layer): +class Conv3d(Module): """Simplified version of :class:`Conv3dLayer`. 
Parameters - ---------- + ---AppData\Local\Continuum\anaconda3\envs\ms_tf\lib\site-packages\mindspore\common\api.py", line 412, in compile + result = self._executor.compile(obj, args_list, phase, use_vm) +RuntimeError: Unable to cast from non-held to held instance (T& to Holder) of type 'std:------- n_filter : int The number of filters. filter_size : tuple of int @@ -312,9 +334,9 @@ class Conv3d(Layer): With TensorLayer >>> net = tl.layers.Input([8, 20, 20, 20, 3], name='input') - >>> conv3d = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), b_init=None, in_channels=3, name='conv3d_1') + >>> conv3d = tl.layers.Conv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), b_init=None, in_channels=3, name='conv3d_1') >>> print(conv3d) - >>> tensor = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), act=tf.nn.relu, name='conv3d_2')(net) + >>> tensor = tl.layers.Conv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), act=tl.ReLU, name='conv3d_2')(net) >>> print(tensor) """ @@ -351,12 +373,12 @@ def __init__( logging.info( "Conv3d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( self.name, n_filter, str(filter_size), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ( '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' ', strides={strides}, padding={padding}' @@ -393,21 +415,476 @@ def build(self, inputs_shape): self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) + self.b_init_flag = False if self.b_init: self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.conv3d = tl.ops.Conv3D( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate, + out_channel=self.n_filter, k_size=(self.filter_size[0], self.filter_size[1], self.filter_size[2]) + ) + + self.act_init_flag = False + if self.act: + self.act_init_flag = True def forward(self, inputs): - outputs = tf.nn.conv3d( - input=inputs, - filters=self.W, - strides=self._strides, + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.conv3d(inputs, self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: + outputs = self.act(outputs) + return outputs + + +class DeConv1d(Module): + """Simplified version of :class:`Deconv1dlayer`. + + Parameters + ---------- + n_filter : int + The number of filters + filter_size : int + The filter size + strides : int or list + An int or list of `ints` that has length `1` or `3`. The number of entries by which the filter is moved right at each step. + output_shape : a 1-D Tensor + containing three elements, representing the output shape of the deconvolution op. + dilation_rate : int or list + Specifying the dilation rate to use for dilated convolution. + act : activation function + The function that is applied to the layer activations + padding : str + The padding algorithm type: "SAME" or "VALID". + data_format : str + "channel_last" (NWC, default) or "channels_first" (NCW). 
+ W_init : initializer + The initializer for the weight matrix. + b_init : initializer or None + The initializer for the bias vector. If None, skip biases. + in_channels : int + The number of in channels. + name : None or str + A unique layer name + + Examples + -------- + With TensorLayer + + >>> net = tl.layers.Input([8, 100, 1], name='input') + >>> conv1d = tl.layers.DeConv1d(n_filter=32, filter_size=5, stride=2, b_init=None, in_channels=1, name='Deonv1d_1') + >>> print(conv1d) + >>> tensor = tl.layers.DeConv1d(n_filter=32, filter_size=5, stride=2, act=tl.ReLU, name='Deconv1d_2')(net) + >>> print(tensor) + + """ + + def __init__( + self, + n_filter=32, + filter_size=15, + strides=1, + act=None, + padding='SAME', + data_format="channels_last", + dilation_rate=1, + W_init=tl.initializers.truncated_normal(stddev=0.02), + b_init=tl.initializers.constant(value=0.0), + in_channels=None, + name=None # 'conv1d_transpose' + ): + super(DeConv1d, self).__init__(name, act=act) + self.n_filter = n_filter + self.filter_size = filter_size + self.strides = strides + self.padding = padding + self.data_format = data_format + self.dilation_rate = dilation_rate + self.W_init = W_init + self.b_init = b_init + self.in_channels = in_channels + + if self.in_channels: + self.build(None) + self._built = True + + logging.info( + "DeConv1d %s: n_filter: %d filter_size: %s stride: %d pad: %s act: %s" % ( + self.name, n_filter, filter_size, strides, padding, + self.act.__class__.__name__ if self.act is not None else 'No Activation' + ) + ) + + def __repr__(self): + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' + s = ( + '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' + ', strides={strides}, padding={padding}' + ) + if self.dilation_rate != 1: + s += ', dilation={dilation_rate}' + if self.b_init is None: + s += ', bias=False' + s += (', ' + actstr) + if self.name is not None: + s += ', name=\'{name}\'' + s += ')' + return s.format(classname=self.__class__.__name__, **self.__dict__) + + def build(self, inputs_shape): + if self.data_format == 'channels_last': + self.data_format = 'NWC' + if self.in_channels is None: + self.in_channels = inputs_shape[-1] + elif self.data_format == 'channels_first': + self.data_format = 'NCW' + if self.in_channels is None: + self.in_channels = inputs_shape[1] + else: + raise Exception("data_format should be either channels_last or channels_first") + + self.filter_shape = (self.filter_size, self.n_filter, self.in_channels) + + # TODO : check + self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) + + self.b_init_flag = False + if self.b_init: + self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.conv1d_transpose = tl.ops.Conv1d_transpose( + strides=self.strides, padding=self.padding, - data_format=self.data_format, #'NDHWC', - dilations=self._dilation_rate, #[1, 1, 1, 1, 1], - name=self.name, + data_format=self.data_format, + dilations=self.dilation_rate, + out_channel=self.n_filter, + k_size=self.filter_size, + in_channels=self.in_channels, ) + + self.act_init_flag = False + if self.act: + self.act_init_flag = True + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.conv1d_transpose(inputs, self.W) + if self.b_init_flag: 
+ outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: + outputs = self.act(outputs) + return outputs + + +class DeConv2d(Module): + """Simplified version of :class:`DeConv2dLayer`. + + Parameters + ---------- + + n_filter : int + The number of filters. + filter_size : tuple of int + The filter size. + strides : tuple of int + The sliding window strides of corresponding input dimensions. + It must be in the same order as the ``shape`` parameter. + dilation_rate : tuple of int + Specifying the dilation rate to use for dilated convolution. + act : activation function + The activation function of this layer. + padding : str + The padding algorithm type: "SAME" or "VALID". + data_format : str + "channels_last" (NHWC, default) or "channels_first" (NCHW). + W_init : initializer + The initializer for the weight matrix. + b_init : initializer or None + The initializer for the bias vector. If None, skip biases. + in_channels : int + The number of in channels. + name : None or str + A unique layer name. + + Examples + -------- + With TensorLayer + + >>> net = tl.layers.Input([8, 400, 400, 3], name='input') + >>> conv2d_transpose = tl.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), b_init=None, in_channels=3, name='conv2d_transpose_1') + >>> print(conv2d_transpose) + >>> tensor = tl.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tl.ReLU, name='conv2d_transpose_2')(net) + >>> print(tensor) + + """ + + def __init__( + self, + n_filter=32, + filter_size=(3, 3), + strides=(1, 1), + act=None, + padding='SAME', + data_format='channels_last', + dilation_rate=(1, 1), + W_init=tl.initializers.truncated_normal(stddev=0.02), + b_init=tl.initializers.constant(value=0.0), + in_channels=None, + name=None, # 'conv2d_transpose', + ): + super(DeConv2d, self).__init__(name, act=act) + self.n_filter = n_filter + self.filter_size = filter_size + self._strides = self.strides = strides + self.padding = padding + self.data_format = data_format + self._dilation_rate = self.dilation_rate = dilation_rate + self.W_init = W_init + self.b_init = b_init + self.in_channels = in_channels + + if self.in_channels: + self.build(None) + self._built = True + + logging.info( + "DeConv2d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( + self.name, n_filter, str(filter_size), str(strides), padding, + self.act.__class__.__name__ if self.act is not None else 'No Activation' + ) + ) + + def __repr__(self): + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' + s = ( + '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' + ', strides={strides}, padding={padding}' + ) + if self.dilation_rate != (1, ) * len(self.dilation_rate): + s += ', dilation={dilation_rate}' + if self.b_init is None: + s += ', bias=False' + s += (', ' + actstr) + if self.name is not None: + s += ', name=\'{name}\'' + s += ')' + return s.format(classname=self.__class__.__name__, **self.__dict__) + + def build(self, inputs_shape): + if self.data_format == 'channels_last': + self.data_format = 'NHWC' + if self.in_channels is None: + self.in_channels = inputs_shape[-1] + self._strides = [1, self._strides[0], self._strides[1], 1] + self._dilation_rate = [1, self._dilation_rate[0], self._dilation_rate[1], 1] + elif self.data_format == 'channels_first': + self.data_format = 'NCHW' + if self.in_channels is None: + self.in_channels = inputs_shape[1] + 
self._strides = [1, 1, self._strides[0], self._strides[1]] + self._dilation_rate = [1, 1, self._dilation_rate[0], self._dilation_rate[1]] + else: + raise Exception("data_format should be either channels_last or channels_first") + + #TODO channels first filter shape [out_channel, in_channel, filter_h, filter_w] + self.filter_shape = (self.filter_size[0], self.filter_size[1], self.n_filter, self.in_channels) + self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init, transposed=True) + + self.b_init_flag = False if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') + self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.conv2d_transpose = tl.ops.Conv2d_transpose( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate, + out_channel=self.n_filter, k_size=(self.filter_size[0], self.filter_size[1]), in_channels=self.in_channels + ) + + self.act_init_flag = False if self.act: + self.act_init_flag = True + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.conv2d_transpose(inputs, self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: + outputs = self.act(outputs) + return outputs + + +class DeConv3d(Module): + """Simplified version of :class:`DeConv3dLayer`. + + Parameters + ---------- + n_filter : int + The number of filters. + filter_size : tuple of int + The filter size (depth, height, width). + strides : tuple of int + The sliding window strides of corresponding input dimensions. + It must be in the same order as the ``shape`` parameter. + dilation_rate : tuple of int + Specifying the dilation rate to use for dilated convolution. + act : activation function + The activation function of this layer. + padding : str + The padding algorithm type: "SAME" or "VALID". + data_format : str + "channels_last" (NDHWC, default) or "channels_first" (NCDHW). + W_init : initializer + The initializer for the weight matrix. + b_init : initializer or None + The initializer for the bias vector. If None, skip biases. + in_channels : int + The number of in channels. + name : None or str + A unique layer name. 
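For intuition about the output size of the transposed convolution: with 'SAME' padding each spatial dimension is roughly multiplied by its stride. A hedged TensorFlow sketch with hypothetical shapes, where the filters follow the (d, h, w, out_channels, in_channels) layout used above (illustrative only, not part of this patch):
>>> import tensorflow as tf
>>> x = tf.random.uniform([8, 20, 20, 20, 3])    # NDHWC input
>>> w = tf.random.uniform([3, 3, 3, 32, 3])      # (d, h, w, out_channels, in_channels)
>>> y = tf.nn.conv3d_transpose(x, w, output_shape=[8, 40, 40, 40, 32],
...                            strides=[1, 2, 2, 2, 1], padding='SAME')
>>> print(y.shape)                               # (8, 40, 40, 40, 32)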
+ + Examples + -------- + With TensorLayer + + >>> net = tl.layers.Input([8, 20, 20, 20, 3], name='input') + >>> deconv3d = tl.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), b_init=None, in_channels=3, name='deconv3d_1') + >>> print(deconv3d) + >>> tensor = tl.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), act=tl.ReLU, name='deconv3d_2')(net) + >>> print(tensor) + + """ + + def __init__( + self, + n_filter=32, + filter_size=(3, 3, 3), + strides=(1, 1, 1), + act=None, + padding='SAME', + data_format='channels_last', + dilation_rate=(1, 1, 1), + W_init=tl.initializers.truncated_normal(stddev=0.02), + b_init=tl.initializers.constant(value=0.0), + in_channels=None, + name=None # 'deconv3d', + ): + super(DeConv3d, self).__init__(name, act=act) + self.n_filter = n_filter + self.filter_size = filter_size + self._strides = self.strides = strides + self.padding = padding + self.data_format = data_format + self._dilation_rate = self.dilation_rate = dilation_rate + self.W_init = W_init + self.b_init = b_init + self.in_channels = in_channels + + if self.in_channels: + self.build(None) + self._built = True + + logging.info( + "DeConv3d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( + self.name, n_filter, str(filter_size), str(strides), padding, + self.act.__class__.__name__ if self.act is not None else 'No Activation' + ) + ) + + def __repr__(self): + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' + s = ( + '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' + ', strides={strides}, padding={padding}' + ) + if self.dilation_rate != (1, ) * len(self.dilation_rate): + s += ', dilation={dilation_rate}' + if self.b_init is None: + s += ', bias=False' + s += (', ' + actstr) + if self.name is not None: + s += ', name=\'{name}\'' + s += ')' + return s.format(classname=self.__class__.__name__, **self.__dict__) + + def build(self, inputs_shape): + if self.data_format == 'channels_last': + self.data_format = 'NDHWC' + if self.in_channels is None: + self.in_channels = inputs_shape[-1] + self._strides = [1, self._strides[0], self._strides[1], self._strides[2], 1] + self._dilation_rate = [1, self.dilation_rate[0], self.dilation_rate[1], self.dilation_rate[2], 1] + elif self.data_format == 'channels_first': + self.data_format = 'NCDHW' + if self.in_channels is None: + self.in_channels = inputs_shape[1] + self._strides = [1, 1, self._strides[0], self._strides[1], self._strides[2]] + self._dilation_rate = [1, 1, self._dilation_rate[0], self._dilation_rate[1], self._dilation_rate[2]] + else: + raise Exception("data_format should be either channels_last or channels_first") + + self.filter_shape = ( + self.filter_size[0], self.filter_size[1], self.filter_size[2], self.n_filter, self.in_channels + ) + + self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init, transposed=True) + + self.b_init_flag = False + if self.b_init: + self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(self.data_format) + self.b_init_flag = True + + self.conv3d_transpose = tl.ops.Conv3d_transpose( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate, + out_channel=self.n_filter, k_size=(self.filter_size[0], self.filter_size[1], self.filter_size[2]), + 
in_channels=self.in_channels + ) + + self.act_init_flag = False + if self.act: + self.act_init_flag = True + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self.conv3d_transpose(inputs, self.W) + if self.b_init_flag: + outputs = self.bias_add(outputs, self.b) + if self.act_init_flag: outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/convolution/simplified_deconv.py b/tensorlayer/layers/convolution/simplified_deconv.py deleted file mode 100644 index 8e967c114..000000000 --- a/tensorlayer/layers/convolution/simplified_deconv.py +++ /dev/null @@ -1,273 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer -from tensorlayer.layers.utils import get_collection_trainable - -__all__ = [ - # 'DeConv1d' # TODO: Shall be implemented - 'DeConv2d', - 'DeConv3d', -] - - -class DeConv2d(Layer): - """Simplified version of :class:`DeConv2dLayer`, see `tf.nn.conv3d_transpose `__. - - Parameters - ---------- - n_filter : int - The number of filters. - filter_size : tuple of int - The filter size (height, width). - strides : tuple of int - The stride step (height, width). - padding : str - The padding algorithm type: "SAME" or "VALID". - act : activation function - The activation function of this layer. - data_format : str - "channels_last" (NHWC, default) or "channels_first" (NCHW). - dilation_rate : int of tuple of int - The dilation rate to use for dilated convolution - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip biases. - in_channels : int - The number of in channels. - name : None or str - A unique layer name. - - Examples - -------- - With TensorLayer - - >>> net = tl.layers.Input([5, 100, 100, 32], name='input') - >>> deconv2d = tl.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), in_channels=32, name='DeConv2d_1') - >>> print(deconv2d) - >>> tensor = tl.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), name='DeConv2d_2')(net) - >>> print(tensor) - - """ - - def __init__( - self, - n_filter=32, - filter_size=(3, 3), - strides=(2, 2), - act=None, - padding='SAME', - dilation_rate=(1, 1), - data_format='channels_last', - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - in_channels=None, - name=None # 'decnn2d' - ): - super().__init__(name, act=act) - self.n_filter = n_filter - self.filter_size = filter_size - self.strides = strides - self.padding = padding - self.data_format = data_format - self.dilation_rate = dilation_rate - self.W_init = W_init - self.b_init = b_init - self.in_channels = in_channels - - # Attention: To build, we need not only the in_channels! Solved. 
- if self.in_channels is not None: - self.build(None) - self._built = True - - logging.info( - "DeConv2d {}: n_filters: {} strides: {} padding: {} act: {} dilation: {}".format( - self.name, - str(n_filter), - str(strides), - padding, - self.act.__name__ if self.act is not None else 'No Activation', - dilation_rate, - ) - ) - - if len(strides) != 2: - raise ValueError("len(strides) should be 2, DeConv2d and DeConv2dLayer are different.") - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' - ) - if self.dilation_rate != (1, ) * len(self.dilation_rate): - s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format(classname=self.__class__.__name__, **self.__dict__) - - def build(self, inputs_shape): - self.layer = tf.keras.layers.Conv2DTranspose( - filters=self.n_filter, - kernel_size=self.filter_size, - strides=self.strides, - padding=self.padding, - data_format=self.data_format, - dilation_rate=self.dilation_rate, - activation=self.act, - use_bias=(True if self.b_init is not None else False), - kernel_initializer=self.W_init, - bias_initializer=self.b_init, - # dtype=tf.float32, - name=self.name, - ) - if inputs_shape is not None: - self.in_channels = inputs_shape[1 if self.data_format == "channels_first" else -1] - elif self.in_channels is not None: - inputs_shape = [1, self.in_channels, 1, 1 - ] if self.data_format == "channels_first" else [1, 1, 1, self.in_channels] - else: - raise ValueError("Either inputs_shape or in_channels must be specified for build.") - _out = self.layer( - tf.convert_to_tensor(np.random.uniform(size=inputs_shape), dtype=np.float32) - ) #np.random.uniform([1] + list(inputs_shape))) # initialize weights - outputs_shape = _out.shape - self._trainable_weights = self.layer.weights - - def forward(self, inputs): - outputs = self.layer(inputs) - return outputs - - -class DeConv3d(Layer): - """Simplified version of :class:`DeConv3dLayer`, see `tf.nn.conv3d_transpose `__. - - Parameters - ---------- - n_filter : int - The number of filters. - filter_size : tuple of int - The filter size (depth, height, width). - strides : tuple of int - The stride step (depth, height, width). - padding : str - The padding algorithm type: "SAME" or "VALID". - act : activation function - The activation function of this layer. - data_format : str - "channels_last" (NDHWC, default) or "channels_first" (NCDHW). - W_init : initializer - The initializer for the weight matrix. - b_init : initializer or None - The initializer for the bias vector. If None, skip bias. - in_channels : int - The number of in channels. - name : None or str - A unique layer name. 
- - Examples - -------- - With TensorLayer - - >>> net = tl.layers.Input([5, 100, 100, 100, 32], name='input') - >>> deconv3d = tl.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), in_channels=32, name='DeConv3d_1') - >>> print(deconv3d) - >>> tensor = tl.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), name='DeConv3d_2')(net) - >>> print(tensor) - - """ - - def __init__( - self, - n_filter=32, - filter_size=(3, 3, 3), - strides=(2, 2, 2), - padding='SAME', - act=None, - data_format='channels_last', - W_init=tl.initializers.truncated_normal(stddev=0.02), - b_init=tl.initializers.constant(value=0.0), - in_channels=None, - name=None # 'decnn3d' - ): - super().__init__(name, act=act) - self.n_filter = n_filter - self.filter_size = filter_size - self.strides = strides - self.padding = padding - self.data_format = data_format - self.W_init = W_init - self.b_init = b_init - self.in_channels = in_channels - - # Attention: To build, we need not only the in_channels! Solved. - if self.in_channels is not None: - self.build(None) - self._built = True - - logging.info( - "DeConv3d %s: n_filters: %s strides: %s pad: %s act: %s" % ( - self.name, str(n_filter), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' - ) - ) - - if len(strides) != 3: - raise ValueError("len(strides) should be 3, DeConv3d and DeConv3dLayer are different.") - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ( - '{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}' - ', strides={strides}, padding={padding}' - ) - # if self.dilation_rate != (1,) * len(self.dilation_rate): - # s += ', dilation={dilation_rate}' - if self.b_init is None: - s += ', bias=False' - s += (', ' + actstr) - if self.name is not None: - s += ', name=\'{name}\'' - s += ')' - return s.format(classname=self.__class__.__name__, **self.__dict__) - - def build(self, inputs_shape): - self.layer = tf.keras.layers.Conv3DTranspose( - filters=self.n_filter, - kernel_size=self.filter_size, - strides=self.strides, - padding=self.padding, - data_format=self.data_format, - activation=self.act, - use_bias=(True if self.b_init is not None else False), - kernel_initializer=self.W_init, - bias_initializer=self.b_init, - name=self.name, - ) - if inputs_shape is not None: - self.in_channels = inputs_shape[1 if self.data_format == "channels_first" else -1] - elif self.in_channels is not None: - inputs_shape = [1, self.in_channels, 1, 1, 1 - ] if self.data_format == "channels_first" else [1, 1, 1, 1, self.in_channels] - else: - raise ValueError("Either inputs_shape or in_channels must be specified for build.") - _out = self.layer( - tf.convert_to_tensor(np.random.uniform(size=inputs_shape), dtype=np.float32) - ) #self.layer(np.random.uniform([1] + list(inputs_shape))) # initialize weights - outputs_shape = _out.shape - self._trainable_weights = self.layer.weights - - def forward(self, inputs): - outputs = self.layer(inputs) - return outputs diff --git a/tensorlayer/layers/convolution/super_resolution.py b/tensorlayer/layers/convolution/super_resolution.py index 5bdbd24c7..0b9339fe6 100644 --- a/tensorlayer/layers/convolution/super_resolution.py +++ b/tensorlayer/layers/convolution/super_resolution.py @@ -1,12 +1,9 @@ #! 
/usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias, private_method -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'SubpixelConv1d', @@ -14,7 +11,7 @@ ] -class SubpixelConv1d(Layer): +class SubpixelConv1d(Module): """It is a 1D sub-pixel up-sampling layer. Calls a TensorFlow function that directly implements this functionality. @@ -56,7 +53,7 @@ def __init__( super().__init__(name, act=act) self.scale = scale self.in_channels = in_channels - self.out_channels = int(self.in_channels / self.scale) + # self.out_channels = int(self.in_channels / self.scale) if self.in_channels is not None: self.build(None) @@ -64,11 +61,11 @@ def __init__( logging.info( "SubpixelConv1d %s: scale: %d act: %s" % - (self.name, scale, self.act.__name__ if self.act is not None else 'No Activation') + (self.name, scale, self.act.__class__.__name__ if self.act is not None else 'No Activation') ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ('{classname}(in_channels={in_channels}, out_channels={out_channels}') s += (', ' + actstr) if self.name is not None: @@ -80,21 +77,29 @@ def build(self, inputs_shape): if inputs_shape is not None: self.in_channels = inputs_shape[-1] self.out_channels = int(self.in_channels / self.scale) + self.transpose = tl.ops.Transpose(perm=[2, 1, 0]) + self.batch_to_space = tl.ops.BatchToSpace(block_size=[self.scale], crops=[[0, 0]]) def forward(self, inputs): - outputs = self._PS(inputs, r=self.scale) + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + outputs = self._PS(inputs) if self.act is not None: outputs = self.act(outputs) return outputs - def _PS(self, I, r): - X = tf.transpose(a=I, perm=[2, 1, 0]) # (r, w, b) - X = tf.batch_to_space(input=X, block_shape=[r], crops=[[0, 0]]) # (1, r*w, b) - X = tf.transpose(a=X, perm=[2, 1, 0]) + def _PS(self, I): + X = self.transpose(I) # (r, w, b) + X = self.batch_to_space(X) # (1, r*w, b) + X = self.transpose(X) return X -class SubpixelConv2d(Layer): +class SubpixelConv2d(Module): """It is a 2D sub-pixel up-sampling layer, usually be used for Super-Resolution applications, see `SRGAN `__ for example. @@ -119,17 +124,17 @@ class SubpixelConv2d(Layer): >>> # examples here just want to tell you how to set the n_out_channel. 
>>> net = tl.layers.Input([2, 16, 16, 4], name='input1') - >>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=2, n_out_channel=1, name='subpixel_conv2d1')(net) + >>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=2, n_out_channels=1, name='subpixel_conv2d1')(net) >>> print(subpixelconv2d) >>> output shape : (2, 32, 32, 1) >>> net = tl.layers.Input([2, 16, 16, 4*10], name='input2') - >>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=2, n_out_channel=10, name='subpixel_conv2d2')(net) + >>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=2, n_out_channels=10, name='subpixel_conv2d2')(net) >>> print(subpixelconv2d) >>> output shape : (2, 32, 32, 10) >>> net = tl.layers.Input([2, 16, 16, 25*10], name='input3') - >>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=5, n_out_channel=10, name='subpixel_conv2d3')(net) + >>> subpixelconv2d = tl.layers.SubpixelConv2d(scale=5, n_out_channels=10, name='subpixel_conv2d3')(net) >>> print(subpixelconv2d) >>> output shape : (2, 80, 80, 10) @@ -158,11 +163,11 @@ def __init__( self._built = True logging.info( "SubpixelConv2d %s: scale: %d act: %s" % - (self.name, scale, self.act.__name__ if self.act is not None else 'No Activation') + (self.name, scale, self.act.__class__.__name__ if self.act is not None else 'No Activation') ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ('{classname}(in_channels={in_channels}, out_channels={n_out_channels}') s += (', ' + actstr) if self.name is not None: @@ -180,8 +185,15 @@ def build(self, inputs_shape): "SubpixelConv2d: The number of input channels == (scale x scale) x The number of output channels" ) self.n_out_channels = int(self.in_channels / (self.scale**2)) + self.depth_to_space = tl.ops.DepthToSpace(block_size=self.scale) def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + outputs = self._PS(X=inputs, r=self.scale, n_out_channels=self.n_out_channels) if self.act is not None: outputs = self.act(outputs) @@ -195,7 +207,7 @@ def _PS(self, X, r, n_out_channels): if int(X.get_shape()[-1]) != (r**2) * n_out_channels: raise Exception(_err_log) - X = tf.compat.v1.depth_to_space(input=X, block_size=r) + X = self.depth_to_space(input=X) else: raise RuntimeError(_err_log) diff --git a/tensorlayer/layers/convolution/ternary_conv.py b/tensorlayer/layers/convolution/ternary_conv.py index a75630a9f..74e96ecee 100644 --- a/tensorlayer/layers/convolution/ternary_conv.py +++ b/tensorlayer/layers/convolution/ternary_conv.py @@ -1,18 +1,15 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module from tensorlayer.layers.utils import compute_alpha, ternary_operation __all__ = ['TernaryConv2d'] -class TernaryConv2d(Layer): +class TernaryConv2d(Module): """ The :class:`TernaryConv2d` class is a 2D ternary CNN layer, which weights are either -1 or 1 or 0 while inference. @@ -52,8 +49,8 @@ class TernaryConv2d(Layer): With TensorLayer >>> net = tl.layers.Input([8, 12, 12, 32], name='input') - >>> ternaryconv2d = tl.layers.QuanConv2d( - ... 
n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='ternaryconv2d' + >>> ternaryconv2d = tl.layers.TernaryConv2d( + ... n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tl.ReLU, padding='SAME', name='ternaryconv2d' ... )(net) >>> print(ternaryconv2d) >>> output shape : (8, 12, 12, 64) @@ -94,7 +91,7 @@ def __init__( logging.info( "TernaryConv2d %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s" % ( self.name, n_filter, str(filter_size), str(strides), padding, - self.act.__name__ if self.act is not None else 'No Activation' + self.act.__class__.__name__ if self.act is not None else 'No Activation' ) ) @@ -141,21 +138,28 @@ def build(self, inputs_shape): self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init) if self.b_init: self.b = self._get_weights("biases", shape=(self.n_filter, ), init=self.b_init) + self.bias_add = tl.ops.BiasAdd(data_format=self.data_format) + + self.conv2d = tl.ops.Conv2D( + strides=self._strides, padding=self.padding, data_format=self.data_format, dilations=self._dilation_rate + ) def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True alpha = compute_alpha(self.W) W_ = ternary_operation(self.W) - W_ = tf.multiply(alpha, W_) + W_ = tl.ops.multiply(alpha, W_) - outputs = tf.nn.conv2d( - input=inputs, filters=W_, strides=self._strides, padding=self.padding, data_format=self.data_format, - dilations=self._dilation_rate, name=self.name - ) + outputs = self.conv2d(inputs, W_) if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, data_format=self.data_format, name='bias_add') + outputs = self.bias_add(outputs, self.b) if self.act: outputs = self.act(outputs) diff --git a/tensorlayer/layers/core.py b/tensorlayer/layers/core.py deleted file mode 100644 index 023d510a2..000000000 --- a/tensorlayer/layers/core.py +++ /dev/null @@ -1,730 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- - -import inspect -from abc import abstractmethod - -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer import logging -from tensorlayer.decorators import (deprecated_alias, private_method, protected_method) -from tensorlayer.files import utils -from tensorlayer.layers.utils import (get_variable_with_initializer, list_remove_repeat) - -__all__ = ['Layer', 'ModelLayer', 'LayerList'] - -_global_layer_name_dict = {} # TODO: better implementation? - -_act_dict = { - "relu": tf.nn.relu, - "relu6": tf.nn.relu6, - "leaky_relu": tf.nn.leaky_relu, - "lrelu": tf.nn.leaky_relu, - "softplus": tf.nn.softplus, - "tanh": tf.nn.tanh, - "sigmoid": tf.nn.sigmoid, -} - - -def str2act(act): - if len(act) > 5 and act[0:5] == "lrelu": - try: - alpha = float(act[5:]) - return lambda x: tf.nn.leaky_relu(x, alpha=alpha) - except Exception as e: - raise Exception("{} can not be parsed as a float".format(act[5:])) - - if len(act) > 10 and act[0:10] == "leaky_relu": - try: - alpha = float(act[10:]) - return lambda x: tf.nn.leaky_relu(x, alpha=alpha) - except Exception as e: - raise Exception("{} can not be parsed as a float".format(act[10:])) - - if act not in _act_dict.keys(): - raise Exception("Unsupported act: {}".format(act)) - return _act_dict[act] - - -class Layer(object): - """The basic :class:`Layer` class represents a single layer of a neural network. - - It should be subclassed when implementing new types of layers. - - Parameters - ---------- - name : str or None - A unique layer name. 
If None, a unique name will be automatically assigned. - - Methods - --------- - __init__() - Initializing the Layer. - __call__() - (1) Building the Layer if necessary. (2) Forwarding the computation. - all_weights() - Return a list of Tensor which are all weights of this Layer. - trainable_weights() - Return a list of Tensor which are all trainable weights of this Layer. - nontrainable_weights() - Return a list of Tensor which are all nontrainable weights of this Layer. - build() - Abstract method. Build the Layer. All trainable weights should be defined in this function. - forward() - Abstract method. Forward computation and return computation results. - - """ - - def __init__(self, name=None, act=None, *args, **kwargs): - """ - Initializing the Layer. - - :param name: str or None - :param name: str or function or None - """ - - # Layer constants - # for key in kwargs.keys(): - # setattr(self, key, self._argument_dict_checkup(kwargs[key])) - - # Auto naming if the name is not given - global _global_layer_name_dict - if name is None: - prefix = self.__class__.__name__.lower() - - if _global_layer_name_dict.get(prefix) is not None: - _global_layer_name_dict[prefix] += 1 - name = prefix + '_' + str(_global_layer_name_dict[prefix]) - else: - _global_layer_name_dict[prefix] = 0 - name = prefix - while True: - if _global_layer_name_dict.get(name) is None: - break - _global_layer_name_dict[prefix] += 1 - name = prefix + '_' + str(_global_layer_name_dict[prefix]) - else: - if _global_layer_name_dict.get(name) is not None: - pass - # raise ValueError( - # 'Layer name \'%s\' has already been used by another layer. Please change the layer name.' % name - # ) - else: - _global_layer_name_dict[name] = 0 - - self.name = name - if isinstance(act, str): - self.act = str2act(act) - else: - self.act = act - - # Layer building state - self._built = False - - # Layer nodes state - self._nodes = [] - self._nodes_fixed = False - - # Layer weight state - self._all_weights = None - self._trainable_weights = [] - self._nontrainable_weights = [] - - # nested layers - self._layers = None - - # Layer training state - self.is_train = True - - # layer config and init_args - self._config = None - self.layer_args = self._get_init_args(skip=3) - - @staticmethod - def _compute_shape(tensors): - if isinstance(tensors, list): - shape_mem = [t.get_shape().as_list() for t in tensors] - else: - shape_mem = tensors.get_shape().as_list() - return shape_mem - - @property - def config(self): - # if not self._nodes_fixed: - # raise RuntimeError("Model can not be saved when nodes are not fixed.") - if self._config is not None: - return self._config - else: - _config = {} - _config.update({'class': self.__class__.__name__.split('.')[-1]}) - self.layer_args.update(self.get_args()) - self.layer_args["name"] = self.name - _config.update({"args": self.layer_args}) - if self.__class__.__name__ in tl.layers.inputs.__all__: - _config.update({'prev_layer': None}) - else: - _config.update({'prev_layer': []}) - for node in self._nodes: - in_nodes = node.in_nodes - if not isinstance(in_nodes, list): - prev_name = in_nodes.name - else: - prev_name = [in_node.name for in_node in in_nodes] - if len(prev_name) == 1: - prev_name = prev_name[0] - _config['prev_layer'].append(prev_name) - if self._nodes_fixed: - self._config = _config - return _config - - @property - def all_weights(self): - if self._all_weights is not None and len(self._all_weights) > 0: - pass - else: - self._all_weights = self.trainable_weights + self.nontrainable_weights - 
return self._all_weights - - @property - def trainable_weights(self): - nested = self._collect_sublayers_attr('trainable_weights') - return self._trainable_weights + nested - - @property - def nontrainable_weights(self): - nested = self._collect_sublayers_attr('nontrainable_weights') - return self._nontrainable_weights + nested - - @property - def weights(self): - raise Exception( - "no property .weights exists, do you mean .all_weights, .trainable_weights, or .nontrainable_weights ?" - ) - - def _collect_sublayers_attr(self, attr): - if attr not in ['trainable_weights', 'nontrainable_weights']: - raise ValueError( - "Only support to collect some certain attributes of nested layers," - "e.g. 'trainable_weights', 'nontrainable_weights', but got {}".format(attr) - ) - if self._layers is None: - return [] - nested = [] - for layer in self._layers: - value = getattr(layer, attr) - if value is not None: - nested.extend(value) - return nested - - def __call__(self, inputs, *args, **kwargs): - """ - (1) Build the Layer if necessary. - (2) Forward the computation and return results. - (3) Add LayerNode if necessary - - :param prev_layer: np.ndarray, Tensor, Layer, list of Layers - :param kwargs: - :return: Layer - """ - if self.__class__.__name__ in tl.layers.inputs.__all__: - input_tensors = tf.convert_to_tensor(inputs) - else: - input_tensors = inputs - - if not self._built: - if isinstance(self, LayerList): - self._input_tensors = input_tensors - inputs_shape = self._compute_shape(input_tensors) - self.build(inputs_shape) - self._built = True - - outputs = self.forward(input_tensors, *args, **kwargs) - - if not self._nodes_fixed: - self._add_node(input_tensors, outputs) - - return outputs - - def _add_node(self, input_tensors, output_tensors): - """Add a LayerNode for this layer given input_tensors, output_tensors. - - WARINING: This function should not be called from outside, it should only be called - in layer.__call__ when building static model. - - Parameters - ---------- - input_tensors : Tensor or a list of tensors - Input tensors to this layer. - output_tensors : Tensor or a list of tensors - Output tensors to this layer. - - """ - inputs_list = tolist(input_tensors) - outputs_list = tolist(output_tensors) - - if self.__class__.__name__ in tl.layers.inputs.__all__: - # for InputLayer, there should be no in_nodes - in_nodes = [] - in_tensor_idxes = [0] - else: - in_nodes = [tensor._info[0] for tensor in inputs_list] - in_tensor_idxes = [tensor._info[1] for tensor in inputs_list] - node_index = len(self._nodes) - - new_node = LayerNode(self, node_index, in_nodes, inputs_list, outputs_list, in_tensor_idxes) - self._nodes.append(new_node) - for idx, tensor in enumerate(outputs_list): - tensor._info = (new_node, idx) # FIXME : modify tensor outside layers? how to deal? - - def _release_memory(self): - """ - WARINING: This function should be called with great caution. - - self.inputs and self.outputs will be set as None but not deleted in order to release memory. - """ - # FIXME : not understand why saving inputs/outputs shape - for node in self._nodes: - node.in_tensors = None - node.out_tensors = None - - def _set_mode_for_layers(self, is_train): - """ Set training/evaluation mode for the Layer""" - self.is_train = is_train - - def _fix_nodes_for_layers(self): - """ fix LayerNodes to stop growing for this layer""" - self._nodes_fixed = True - - def _get_weights(self, var_name, shape, init=tl.initializers.random_normal(), trainable=True): - """ Get trainable variables. 
""" - weight = get_variable_with_initializer(scope_name=self.name, var_name=var_name, shape=shape, init=init) - if trainable is True: - if self._trainable_weights is None: - self._trainable_weights = list() - self._trainable_weights.append(weight) - else: - if self._nontrainable_weights is None: - self._nontrainable_weights = list() - self._nontrainable_weights.append(weight) - return weight - - @abstractmethod - def build(self, inputs_shape): - """ - An abstract method which should be overwritten in derived classes - to define all necessary trainable weights of the layer. - - self.built should be set as True after self.build() is called. - - :param inputs_shape: tuple - """ - raise Exception("The build(self, inputs_shape) method must be implemented by inherited class") - - @abstractmethod - def forward(self, inputs): - """ - An abstract method which should be overwritten in derived classes - to define forward feeding operations of the layer. - - :param inputs: Tensor - :return: Tensor - """ - raise Exception("The forward method must be implemented by inherited class") - - @abstractmethod - def __repr__(self): - reprstr = "Layer" - return reprstr - - def __setitem__(self, key, item): - raise TypeError("The Layer API does not allow to use the method: `__setitem__`") - - def __delitem__(self, key): - raise TypeError("The Layer API does not allow to use the method: `__delitem__`") - - def __setattr__(self, key, value): - if isinstance(value, Layer): - value._fix_nodes_for_layers() - if self._layers is None: - self._layers = [] - self._layers.append(value) - super().__setattr__(key, value) - - def __delattr__(self, name): - value = getattr(self, name, None) - if isinstance(value, Layer): - self._layers.remove(value) - super().__delattr__(name) - - @protected_method - def get_args(self): - init_args = {"layer_type": "normal"} - return init_args - - @protected_method - def _get_init_args(self, skip=3): - """Get all arguments of current layer for saving the graph.""" - stack = inspect.stack() - - if len(stack) < skip + 1: - raise ValueError("The length of the inspection stack is shorter than the requested start position.") - - args, _, _, values = inspect.getargvalues(stack[skip][0]) - - params = {} - - for arg in args: - - # some args dont need to be saved into the graph. e.g. the input placeholder - if values[arg] is not None and arg not in ['self', 'prev_layer', 'inputs']: - - val = values[arg] - - if arg == "dtype" and isinstance(val, tf.DType): - params[arg] = repr(val) - continue - - # change function (e.g. act) into dictionary of module path and function name - if inspect.isfunction(val): - if ("__module__" in dir(val)) and (len(val.__module__) > 10) and (val.__module__[0:10] - == "tensorflow"): - params[arg] = val.__name__ - else: - params[arg] = ('is_Func', utils.func2str(val)) - # ignore more args e.g. TL initializer - elif arg.endswith('init'): - continue - # for other data type, save them directly - else: - params[arg] = val - - return params - - -class LayerNode(object): - """ - The class :class:`LayerNode` class represents a conceptional node for a layer. - - LayerNode is used for building static model and it is actually a light weighted - wrapper over Layer. Specifically, it is used for building static computational graph - (see _construct_graph() in tl.models.Model). In static model, each layer relates to - one or more LayerNode, and the connection relationship between layers is built upon - LayerNode. In addition, LayerNode eases layer reuse and weights sharing. 
- - Parameters - ---------- - layer : tl.layers.Layer - A tl layer that wants to create a node. - node_index : int - Index of this node in layer._nodes. - in_nodes :a list of LayerNode - Father nodes to this node. - in_tensors : a list of tensors - Input tensors to this node. - out_tensors : a list of tensors - Output tensors to this node. - in_tensor_idxes : a list of int - Indexes of each input tensor in its corresponding node's out_tensors. - - Methods - --------- - __init__() - Initializing the LayerNode. - __call__() - (1) Forwarding through the layer. (2) Update its input/output tensors. - """ - - def __init__(self, layer, node_index, in_nodes, in_tensors, out_tensors, in_tensor_idxes): - """ - - Parameters - ---------- - layer - node_index - in_nodes - in_tensors - out_tensors - in_tensor_idxes - """ - self.layer = layer - self.node_index = node_index - self.in_nodes = in_nodes - self.out_nodes = [] - self.in_tensors = in_tensors - self.out_tensors = out_tensors - self.name = layer.name + "_node_{}".format(node_index) - - self.in_tensors_idxes = in_tensor_idxes - - self.visited = False - - def __call__(self, inputs, **kwargs): - """(1) Forwarding through the layer. (2) Update its input/output tensors.""" - outputs = self.layer.forward(inputs, **kwargs) - self.in_tensors = tolist(inputs) - self.out_tensors = tolist(outputs) - return self.out_tensors - - -class ModelLayer(Layer): - """ - The class :class:`ModelLayer` converts a :class:`Model` to a :class:`Layer` instance. - - Note that only a :class:`Model` with specified inputs and outputs can be converted to a :class:`ModelLayer`. - For example, a customized model in dynamic eager mode normally does NOT have specified inputs and outputs so the - customized model in dynamic eager mode can NOT be converted to a :class:`ModelLayer`. - - Parameters - ---------- - model: tl.models.Model - A model. - name : str or None - A unique layer name. If None, a unique name will be automatically assigned. - - Methods - --------- - __init__() - Initializing the ModelLayer. - weights() - Same as the weights of the given model. - build() - Do nothing because the given model has already been built. - forward() - Forward the computation. Simply call the forward() of the given model. - """ - - def __init__(self, model, name=None): - """ - Initializing the ModelLayer given a instance of Model. - - :param model: tl.models.Model - """ - super(ModelLayer, self).__init__(name=name) - - self.model = model - - # Layer building state - self._built = True - - # Layer weight state - self._all_weights = model.all_weights - self._trainable_weights = model.trainable_weights - self._nontrainable_weights = model.nontrainable_weights - - # Layer training state - self.is_train = True - - logging.info("ModelLayer %s from Model: %s" % (self.name, self.model.name)) - - def __repr__(self): - tmpstr = 'ModelLayer' + '(\n' - - modstr = self.model.__repr__() - modstr = _addindent(modstr, 2) - - tmpstr += modstr + ')' - return tmpstr - - def build(self, inputs_shape): - pass - - def forward(self, inputs): - return self.model.forward(inputs) - - def _set_mode_for_layers(self, is_train): - """ Set training/evaluation mode for the ModelLayer.""" - self.is_train = is_train - return self.model._set_mode_for_layers(is_train) - - def _fix_nodes_for_layers(self): - """ fix LayerNodes to stop growing for this ModelLayer.""" - self._nodes_fixed = True - self.model._fix_nodes_for_layers() - - def _release_memory(self): - """ - WARINING: This function should be called with great caution. 
- - self.inputs and self.outputs will be set as None but not deleted in order to release memory. - """ - - super(ModelLayer, self)._release_memory() - self.model.release_memory() - - def get_args(self): - init_args = {} - init_args.update({"layer_type": "modellayer"}) - # init_args["model"] = utils.net2static_graph(self.layer_args["model"]) - init_args["model"] = self.layer_args["model"].config - return init_args - - -class LayerList(Layer): - """ - The class :class:`LayerList` is a linear stack of layers. - - The :class:`LayerList` can be created by passing a list of layer instances. - The given layer instances will be automatically connected one by one. - - Parameters - ---------- - layers: list of Layer - A list of layers. - name : str or None - A unique layer name. If None, a unique name will be automatically assigned. - - Methods - --------- - __init__() - Initializing the LayerList. - weights() - A collection of weights of all the layer instances. - build() - Build the LayerList. The layer instances will be connected automatically one by one. - forward() - Forward the computation. The computation will go through all layer instances. - """ - - def __init__(self, layers, name=None): - """ - Initializing the LayerList given a list of Layer. - - :param layers: list of Layer - :param name: str or None - """ - - super(LayerList, self).__init__(name=name) - self.layers = layers - - is_built = True - for layer in self.layers: - self._trainable_weights.extend(layer.trainable_weights) - self._nontrainable_weights.extend(layer.nontrainable_weights) - if layer._built is False: - is_built = False - if layer._built and layer.all_weights is not None: - # some layers in the list passed in have already been built - # e.g. using input shape to construct layers in dynamic eager - if self._all_weights is None: - self._all_weights = list() - self._all_weights.extend(layer.all_weights) - if is_built: - self._built = True - - logging.info( - "LayerList %s including layers [%s]" % (self.name, ', '.join([layer.name for layer in self.layers])) - ) - - # check layer name uniqueness in LayerList - local_layer_name_set = set() - for layer in self.layers: - if layer.name not in local_layer_name_set: - local_layer_name_set.add(layer.name) - else: - raise ValueError( - 'Layer name \'%s\' has already been used by another layer. Please change the layer name.' % - layer.name - ) - - def __getitem__(self, idx): - if isinstance(idx, slice): - return LayerList(list(self.layers)[idx]) - else: - return self.layers[idx] - - def __len__(self): - return len(self.layers) - - def __repr__(self): - tmpstr = 'LayerList' + '(\n' - for idx, layer in enumerate(self.layers): - modstr = layer.__repr__() - modstr = _addindent(modstr, 2) - tmpstr = tmpstr + ' (' + str(idx) + '): ' + modstr + '\n' - - tmpstr = tmpstr + ')' - return tmpstr - - def build(self, inputs_shape): - """ - Build the LayerList. The layer instances will be connected automatically one by one. - """ - in_tensor = self._input_tensors - # in_layer = self._input_layer - for layer in self.layers: - is_build = layer._built - out_tensor = layer(in_tensor) - # nlayer = layer(in_layer) - if is_build is False and layer.all_weights is not None: - if self._all_weights is None: - self._all_weights = list() - self._all_weights.extend(layer.all_weights) - layer._built = True - in_tensor = out_tensor - # in_layer = nlayer - - def forward(self, inputs): - """ - Forward the computation. The computation will go through all layer instances. 
-        """
-        z = inputs
-        for layer in self.layers:
-            z = layer.forward(z)
-        return z
-
-    def _set_mode_for_layers(self, is_train):
-        """Set training/evaluation mode for all layer instances."""
-        self.is_train = is_train
-        for layer in self.layers:
-            if isinstance(layer, ModelLayer):
-                layer._set_mode_for_layers(is_train)
-            elif isinstance(layer, LayerList):
-                layer._set_mode_for_layers(is_train)
-            else:
-                layer.is_train = is_train
-
-    def _fix_nodes_for_layers(self):
-        """ fix LayerNodes to stop growing for this LayerList."""
-        self._nodes_fixed = True
-        for layer in self.layers:
-            layer._fix_nodes_for_layers()
-
-    def _release_memory(self):
-        """
-        WARINING: This function should be called with great caution.
-
-        self.inputs and self.outputs will be set as None but not deleted.
-        """
-        super(LayerList, self)._release_memory()
-        for layer in self.layers:
-            layer._release_memory()
-
-    def get_args(self):
-        init_args = {}
-        layers = self.layer_args["layers"]
-        init_args["layers"] = [layer.config for layer in layers]
-        init_args.update({"layer_type": "layerlist"})
-        return init_args
-
-
-def _addindent(s_, numSpaces):
-    s = s_.split('\n')
-    # don't do anything for single-line stuff
-    if len(s) == 1:
-        return s_
-    first = s.pop(0)
-    s = [(numSpaces * ' ') + line for line in s]
-    s = '\n'.join(s)
-    s = first + '\n' + s
-    return s
-
-
-def tolist(tensors):
-    if isinstance(tensors, list) or isinstance(tensors, tuple):
-        ntensors = list()
-        for t in tensors:
-            ntensors += tolist(t)
-        return ntensors
-    else:
-        return [tensors]
diff --git a/tensorlayer/layers/core/__init__.py b/tensorlayer/layers/core/__init__.py
new file mode 100644
index 000000000..d9d96891e
--- /dev/null
+++ b/tensorlayer/layers/core/__init__.py
@@ -0,0 +1,13 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+from tensorlayer.backend import BACKEND
+
+if BACKEND == 'mindspore':
+    from .core_mindspore import *
+elif BACKEND == 'tensorflow':
+    from .core_tensorflow import *
+elif BACKEND == 'paddle':
+    from .core_paddle import *
+else:
+    raise NotImplementedError("Unsupported backend: {}".format(BACKEND))
diff --git a/tensorlayer/layers/core/common.py b/tensorlayer/layers/core/common.py
new file mode 100644
index 000000000..839908cdf
--- /dev/null
+++ b/tensorlayer/layers/core/common.py
@@ -0,0 +1,180 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+import tensorlayer as tl
+from tensorlayer.files import utils
+from tensorlayer import logging
+
+_act_dict = {
+    "relu": tl.ops.ReLU,
+    "relu6": tl.ops.ReLU6,
+    "leaky_relu": tl.ops.LeakyReLU,
+    "lrelu": tl.ops.LeakyReLU,
+    "softplus": tl.ops.Softplus,
+    "tanh": tl.ops.Tanh,
+    "sigmoid": tl.ops.Sigmoid,
+    "softmax": tl.ops.Softmax
+}
+
+
+def str2act(act):
+    if len(act) > 5 and act[0:5] == "lrelu":
+        try:
+            alpha = float(act[5:])
+            return tl.ops.LeakyReLU(alpha=alpha)
+        except Exception as e:
+            raise Exception("{} can not be parsed as a float".format(act[5:]))
+
+    if len(act) > 10 and act[0:10] == "leaky_relu":
+        try:
+            alpha = float(act[10:])
+            return tl.ops.LeakyReLU(alpha=alpha)
+        except Exception as e:
+            raise Exception("{} can not be parsed as a float".format(act[10:]))
+
+    if act not in _act_dict.keys():
+        raise Exception("Unsupported act: {}".format(act))
+    return _act_dict[act]
+
+
+def _save_weights(net, file_path, format=None):
+    """Input file_path, save model weights into a file of given format.
+       Use net.load_weights() to restore.
+
+    Parameters
+    ----------
+    file_path : str
+        Filename to which the model weights will be saved.
+    format : str or None
+        Saved file format.
+ Value should be None, 'hdf5', 'npz', 'npz_dict' or 'ckpt'. Other format is not supported now. + 1) If this is set to None, then the postfix of file_path will be used to decide saved format. + If the postfix is not in ['h5', 'hdf5', 'npz', 'ckpt'], then file will be saved in hdf5 format by default. + 2) 'hdf5' will save model weights name in a list and each layer has its weights stored in a group of + the hdf5 file. + 3) 'npz' will save model weights sequentially into a npz file. + 4) 'npz_dict' will save model weights along with its name as a dict into a npz file. + 5) 'ckpt' will save model weights into a tensorflow ckpt file. + + Default None. + + Examples + -------- + 1) Save model weights in hdf5 format by default. + >>> net = vgg16() + >>> optimizer = tl.optimizers.Adam(learning_rate=0.001) + >>> metric = tl.metric.Accuracy() + >>> model = tl.models.Model(network=net, loss_fn=tl.cost.cross_entropy, optimizer=optimizer, metrics=metric) + >>> model.save_weights('./model.h5') + ... + >>> model.load_weights('./model.h5') + + 2) Save model weights in npz/npz_dict format + >>> model.save_weights('./model.npz') + >>> model.save_weights('./model.npz', format='npz_dict') + + """ + + if net.all_weights is None or len(net.all_weights) == 0: + logging.warning("Model contains no weights or layers haven't been built, nothing will be saved") + return + + if format is None: + postfix = file_path.split('.')[-1] + if postfix in ['h5', 'hdf5', 'npz', 'ckpt']: + format = postfix + else: + format = 'hdf5' + + if format == 'hdf5' or format == 'h5': + utils.save_weights_to_hdf5(file_path, net) + elif format == 'npz': + utils.save_npz(net.all_weights, file_path) + elif format == 'npz_dict': + utils.save_npz_dict(net.all_weights, file_path) + elif format == 'ckpt': + # TODO: enable this when tf save ckpt is enabled + raise NotImplementedError("ckpt load/save is not supported now.") + else: + raise ValueError( + "Save format must be 'hdf5', 'npz', 'npz_dict' or 'ckpt'." + "Other format is not supported now." + ) + + +def _load_weights(net, file_path, format=None, in_order=True, skip=False): + """Load model weights from a given file, which should be previously saved by net.save_weights(). + + Parameters + ---------- + file_path : str + Filename from which the model weights will be loaded. + format : str or None + If not specified (None), the postfix of the file_path will be used to decide its format. If specified, + value should be 'hdf5', 'npz', 'npz_dict' or 'ckpt'. Other format is not supported now. + In addition, it should be the same format when you saved the file using net.save_weights(). + Default is None. + in_order : bool + Allow loading weights into model in a sequential way or by name. Only useful when 'format' is 'hdf5'. + If 'in_order' is True, weights from the file will be loaded into model in a sequential way. + If 'in_order' is False, weights from the file will be loaded into model by matching the name + with the weights of the model, particularly useful when trying to restore model in eager(graph) mode from + a weights file which is saved in graph(eager) mode. + Default is True. + skip : bool + Allow skipping weights whose name is mismatched between the file and model. Only useful when 'format' is + 'hdf5' or 'npz_dict'. If 'skip' is True, 'in_order' argument will be ignored and those loaded weights + whose name is not found in model weights (net.all_weights) will be skipped. If 'skip' is False, error will + occur when mismatch is found. + Default is False. 
+ + Examples + -------- + 1) load model from a hdf5 file. + >>> net = vgg16() + >>> optimizer = tl.optimizers.Adam(learning_rate=0.001) + >>> metric = tl.metric.Accuracy() + >>> model = tl.models.Model(network=net, loss_fn=tl.cost.cross_entropy, optimizer=optimizer, metrics=metric) + >>> model.load_weights('./model_graph.h5', in_order=False, skip=True) # load weights by name, skipping mismatch + >>> model.load_weights('./model_eager.h5') # load sequentially + + 2) load model from a npz file + >>> model.load_weights('./model.npz') + + 3) load model from a npz file, which is saved as npz_dict previously + >>> model.load_weights('./model.npz', format='npz_dict') + + Notes + ------- + 1) 'in_order' is only useful when 'format' is 'hdf5'. If you are trying to load a weights file which is + saved in a different mode, it is recommended to set 'in_order' be True. + 2) 'skip' is useful when 'format' is 'hdf5' or 'npz_dict'. If 'skip' is True, + 'in_order' argument will be ignored. + + """ + if not os.path.exists(file_path): + raise FileNotFoundError("file {} doesn't exist.".format(file_path)) + + if format is None: + format = file_path.split('.')[-1] + + if format == 'hdf5' or format == 'h5': + if skip ==True or in_order == False: + # load by weights name + utils.load_hdf5_to_weights(file_path, net, skip) + else: + # load in order + utils.load_hdf5_to_weights_in_order(file_path, net) + elif format == 'npz': + utils.load_and_assign_npz(file_path, net) + elif format == 'npz_dict': + utils.load_and_assign_npz_dict(file_path, net, skip) + elif format == 'ckpt': + # TODO: enable this when tf save ckpt is enabled + raise NotImplementedError("ckpt load/save is not supported now.") + else: + raise ValueError( + "File format must be 'hdf5', 'npz', 'npz_dict' or 'ckpt'. " + "Other format is not supported now." + ) diff --git a/tensorlayer/layers/core/core_mindspore.py b/tensorlayer/layers/core/core_mindspore.py new file mode 100644 index 000000000..a19a49eaf --- /dev/null +++ b/tensorlayer/layers/core/core_mindspore.py @@ -0,0 +1,385 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from .common import str2act, _save_weights, _load_weights +from mindspore.nn import Cell +import tensorlayer as tl +from collections import OrderedDict + +from mindspore import log as logger +import inspect +from mindspore import context +import numpy +import mindspore as ms +from mindspore.common.api import _pynative_exec +from mindspore.common.parameter import Parameter + +__all__ = ['Module', 'SequentialLayer'] + +_global_layer_name_dict = {} # TODO: better implementation? 
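
Each of the per-backend core files implements the same user-facing contract: parameters are created lazily in `build()` once the first input's shape is known, and `forward()` carries out the computation. As a rough sketch of a custom layer written against this contract (the `Scale` layer here is hypothetical and for illustration only; it assumes the `tl.ops.multiply`, `tl.get_tensor_shape`, and `tl.initializers` helpers behave as they do in the layers elsewhere in this patch):

```python
import tensorlayer as tl
from tensorlayer.layers.core import Module


class Scale(Module):
    """Hypothetical layer that multiplies its input by a trainable per-channel weight."""

    def __init__(self, act=None, name=None):
        super().__init__(name, act=act)

    def build(self, inputs_shape):
        # create the trainable weight only once the channel dimension is known
        self.alpha = self._get_weights("alpha", shape=(inputs_shape[-1], ), init=tl.initializers.constant(value=1.0))

    def forward(self, inputs):
        # lazy build on the first call, mirroring the pattern used by the patched layers
        if self._forward_state == False:
            if self._built == False:
                self.build(tl.get_tensor_shape(inputs))
                self._built = True
            self._forward_state = True

        outputs = tl.ops.multiply(inputs, self.alpha)
        if self.act:
            outputs = self.act(outputs)
        return outputs


net = tl.layers.Input([8, 16], name='input')
print(Scale(act='relu', name='scale_1')(net))
```

Because the same `Module` name is re-exported for every backend, a layer written this way should run unchanged on TensorFlow, MindSpore, or PaddlePaddle, with `tensorlayer.backend.BACKEND` deciding which base class is actually used.
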
+ + +class Module(Cell): + + def __init__(self, name=None, act=None, *args, **kwargs): + super().__init__(*args, **kwargs) + + global _global_layer_name_dict + if name is None: + prefix = self.__class__.__name__.lower() + + if _global_layer_name_dict.get(prefix) is not None: + _global_layer_name_dict[prefix] += 1 + name = prefix + '_' + str(_global_layer_name_dict[prefix]) + else: + _global_layer_name_dict[prefix] = 0 + name = prefix + while True: + if _global_layer_name_dict.get(name) is None: + break + _global_layer_name_dict[prefix] += 1 + name = prefix + '_' + str(_global_layer_name_dict[prefix]) + else: + if _global_layer_name_dict.get(name) is not None: + pass + else: + _global_layer_name_dict[name] = 0 + + self.name = name + + if isinstance(act, str): + str_act = str2act(act) + + if act: + if isinstance(act, str) and (len(act) > 5 and act[0:5] == "lrelu" or + len(act) > 10 and act[0:10] == "leaky_relu"): + self.act = str_act + elif isinstance(act, str): + self.act = str_act() + else: + self.act = act() + else: + self.act = act + + # Layer building state + self._built = False + + # Layer nodes state + self._nodes = [] + self._nodes_fixed = False + + # Layer weight state + self._all_weights = [] + self._trainable_weights = [] + self._nontrainable_weights = [] + + # Layer training state + self.is_train = True + + # layer forward state + self._forward_state = False + + # data_format + self.data_format = "NCHW" + + def forward(self, *inputs, **kwargs): + raise Exception("The forward method must be implemented by inherited class") + + def construct(self, *inputs, **kwargs): + return self.forward(*inputs, **kwargs) + + def build(self, inputs_shape): + raise Exception("The build(self, inputs_shape) method must be implemented by inherited class") + + def _get_weights(self, var_name, shape, init=tl.initializers.random_normal(), trainable=True, transposed=False): + """ Get trainable variables. """ + var_name = self.name + "/" + var_name + # TODO 2D mindspore weights shape : [out_channel, in_channel, kernel_h, kernel_w] + # TODO 2D mindspore transposed shape [in_channel, out_channel, kernel_h, kernel_w] + if len(shape) == 3: + shape = shape[::-1] + if len(shape) == 4: + if not transposed and self.data_format == 'NHWC': + shape = (shape[3], shape[0], shape[1], shape[2]) + else: + shape = (shape[3], shape[2], shape[0], shape[1]) + if len(shape) == 5: + shape = (shape[4], shape[3], shape[0], shape[1], shape[2]) + + initial_value = init(shape=shape) + var = tl.Variable(initial_value=initial_value, name=var_name, trainable=trainable) + self.trainable = trainable + return var + + def save_weights(self, file_path, format=None): + """Input file_path, save model weights into a file of given format.""" + _save_weights(self, file_path, format) + + def load_weights(self, file_path, format=None, in_order=True, skip=False): + """Load model weights from a given file, which should be previously saved by self.save_weights().""" + _load_weights(self, file_path, format, in_order, skip) + + @staticmethod + def _compute_shape(tensors): + if isinstance(tensors, list): + shape_mem = [tl.get_tensor_shape(t) for t in tensors] + else: + shape_mem = tl.get_tensor_shape(tensors) + return shape_mem + + def __call__(self, *inputs, **kwargs): + if self.__class__.construct is Cell.construct: + logger.warning( + f"The '{self.__class__}' does not override the method 'construct', " + f"will call the super class(Cell) 'construct'." 
+            )
+        if kwargs:
+            bound_args = inspect.signature(self.construct).bind(*inputs, **kwargs)
+            inputs = bound_args.args
+            kwargs = bound_args.kwargs
+
+        if context.get_context("mode") == context.GRAPH_MODE:
+            raise NotImplementedError(
+                "GRAPH MODE is not supported, please select PYNATIVE MODE."
+            )
+
+        # if context.get_context("mode") == context.GRAPH_MODE:
+        #     if kwargs:
+        #         raise ValueError("For 'graph' mode, the outermost network does not support passing "
+        #                          "variable key-value pair parameters.")
+        #     if self.enable_hook:
+        #         raise ValueError("The graph mode does not support hook function.")
+        #     out = self.compile_and_run(*inputs)
+        #     return out
+
+        self.do_parameter_broadcast()
+        for item in inputs:
+            if isinstance(item, numpy.ndarray):
+                raise TypeError("cell inputs should not be numpy array.")
+        origin_grad = []
+        if self.requires_grad is True:
+            _pynative_exec.set_grad_flag(True)
+            _pynative_exec.new_graph(self, *inputs, **kwargs)
+            for cell in self.cells():
+                origin_grad.append(cell.requires_grad)
+                cell.set_grad(True)
+        else:
+            _pynative_exec.set_grad_flag(False)
+        cast_inputs = list()
+        if hasattr(self, "_mindspore_flags"):
+            if self._mindspore_flags.get('fp16'):
+                cast_inputs = self._cast_mixed_precision_inputs(inputs, ms.float16)
+            if self._mindspore_flags.get('fp32'):
+                cast_inputs = self._cast_mixed_precision_inputs(inputs, ms.float32)
+        if not cast_inputs:
+            cast_inputs = inputs
+        output = self.run_construct(cast_inputs, kwargs)
+        if isinstance(output, Parameter):
+            output = output.data
+        if self.requires_grad is True:
+            _pynative_exec.end_graph(self, output, *inputs, **kwargs)
+            for i, cell in enumerate(self.cells()):
+                cell.set_grad(origin_grad[i])
+        return output
+
+    def _add_node(self, input_tensors, output_tensors):
+        """Add a LayerNode for this layer given input_tensors, output_tensors.
+
+        WARNING: This function should not be called from outside; it should only be called
+        in layer.__call__ when building a static model.
+
+        Parameters
+        ----------
+        input_tensors : Tensor or a list of tensors
+            Input tensors to this layer.
+        output_tensors : Tensor or a list of tensors
+            Output tensors to this layer.
+
+        """
+        raise NotImplementedError
+
+    def set_train(self):
+        """
+        Sets the cell to training mode.
+
+        The cell itself and all children cells will be set to training mode.
+        """
+        self._phase = 'train'
+        self.add_flags_recursive(training=True)
+        return self
+
+    def set_eval(self):
+        """Set this network in evaluation mode. After calling this method,
+        all layers in the network are in evaluation mode, in particular, BatchNorm, Dropout, etc.
+
+        Examples
+        --------
+        >>> import tensorlayer as tl
+        >>> net = tl.models.vgg16()
+        >>> net.set_eval()
+        # do evaluation
+
+        """
+        self._phase = 'predict'
+        self.add_flags_recursive(training=False)
+        return self
+
+    def test(self):
+        """Set this network in evaluation mode."""
+        self.set_eval()
+
+    def infer(self):
+        """Set this network in evaluation mode."""
+        self.set_eval()
+
+    @property
+    def trainable_weights(self):
+        """
+        Returns all trainable weights.
+
+        Returns a list of all trainable parameters.
+
+        Returns:
+            List, the list of trainable weights.
+        """
+        self._trainable_weights = list(filter(lambda x: x.requires_grad, self.get_parameters(expand=True)))
+        return self._trainable_weights
+
+    @property
+    def nontrainable_weights(self):
+        """
+        Returns all untrainable weights.
+
+        Returns a list of all untrainable weights.
+
+        Returns:
+            List, the list of untrainable weights.
+        """
+        return list(filter(lambda x: not x.requires_grad, self.get_parameters(expand=True)))
+
+    @property
+    def all_weights(self):
+        return list(filter(lambda x: x.requires_grad, self.get_parameters(expand=True))) \
+               + list(filter(lambda x: not x.requires_grad, self.get_parameters(expand=True)))
+
+
+class SequentialLayer(Module):
+    """
+    The class :class:`SequentialLayer` is a linear stack of layers.
+
+    The :class:`SequentialLayer` can be created by passing a list of layer instances.
+    The given layer instances will be automatically connected one by one.
+
+    Parameters
+    ----------
+    layers: list of Layer
+        A list of layers.
+    name : str or None
+        A unique layer name. If None, a unique name will be automatically assigned.
+
+    Methods
+    ---------
+    __init__()
+        Initializing the SequentialLayer.
+    weights()
+        A collection of weights of all the layer instances.
+    build()
+        Build the SequentialLayer. The layer instances will be connected automatically one by one.
+    forward()
+        Forward the computation. The computation will go through all layer instances.
+
+    Examples
+    ---------
+    >>> conv = tl.layers.Conv2d(3, 2, 3, pad_mode='valid')
+    >>> bn = tl.layers.BatchNorm2d(2)
+    >>> seq = tl.layers.SequentialLayer([conv, bn])
+    >>> x = tl.layers.Input((1, 3, 4, 4))
+    >>> seq(x)
+
+    """
+
+    def __init__(self, *args):
+        super(SequentialLayer, self).__init__()
+        # self._built = True
+        if len(args) == 1:
+            layers = args[0]
+            if isinstance(layers, list):
+                for index, layer in enumerate(layers):
+                    self.insert_child_to_layer(str(index), layer)
+            elif isinstance(layers, OrderedDict):
+                for name, layer in layers.items():
+                    self.insert_child_to_layer(name, layer)
+            else:
+                raise TypeError('Layers must be a list or an OrderedDict')
+        else:
+            for index, layer in enumerate(args):
+                self.insert_child_to_layer(str(index), layer)
+        self.layer_list = list(self._layers.values())
+
+    def __getitem__(self, index):
+        if isinstance(index, slice):
+            return self.__class__(OrderedDict(list(self._layers.items())[index]))
+        index = self._valid_index(len(self), index)
+        return list(self._layers.values())[index]
+
+    def __setitem__(self, index, layer):
+        if self._valid_module(layer):
+            index = self._valid_index(len(self), index)
+            key = list(self._layers.keys())[index]
+            self._layers[key] = layer
+            self.layer_list = list(self._layers.values())
+
+    def __delitem__(self, index):
+        if isinstance(index, int):
+            index = self._valid_index(len(self), index)
+            key = list(self._layers.keys())[index]
+            del self._layers[key]
+        elif isinstance(index, slice):
+            keys = list(self._layers.keys())[index]
+            for key in keys:
+                del self._layers[key]
+        else:
+            raise TypeError('Index {} is not int type or slice type'.format(index))
+        self.layer_list = list(self._layers.values())
+
+    def __len__(self):
+        return len(self._layers)
+
+    def set_grad(self, flag=True):
+        self.requires_grad = flag
+        for layer in self._layers.values():
+            layer.set_grad(flag)
+
+    def append(self, layer):
+        if self._valid_module(layer):
+            self._layers[str(len(self))] = layer
+        self.layer_list = list(self._layers.values())
+        return self
+
+    def build(self, inputs_shape):
+        pass
+
+    def forward(self, input_data):
+        for layer in self.layer_list:
+            input_data = layer(input_data)
+        return input_data
+
+    def _valid_index(self, layer_num, index):
+        if not isinstance(index, int):
+            raise TypeError("Index {} is not an int".format(index))
+        if not -layer_num <= index < layer_num:
+            raise IndexError(
+                "Index should be a number in range [{}, {}), but got {}".format(-layer_num, layer_num, index)
+            )
+        return index % layer_num
+
+    def _valid_module(self, layer):
+        if issubclass(layer.__class__, Module):
+            return True
+        raise TypeError('Module {} is not a subclass of Module'.format(layer))
diff --git a/tensorlayer/layers/core/core_paddle.py b/tensorlayer/layers/core/core_paddle.py
new file mode 100644
index 000000000..ba0c855cb
--- /dev/null
+++ b/tensorlayer/layers/core/core_paddle.py
@@ -0,0 +1,254 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import copy, six
+from .common import str2act
+from .common import _save_weights, _load_weights
+from paddle.fluid import framework
+from paddle.fluid.dygraph import Layer
+from paddle.fluid.framework import in_dygraph_mode
+from paddle.fluid.dygraph.base import program_desc_tracing_guard, param_guard
+from paddle.fluid.dygraph import parallel_helper
+import paddle as pd
+
+_global_layer_name_dict = {}
+
+
+class Module(Layer):
+
+    def __init__(self, name=None, act=None, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+
+        global _global_layer_name_dict
+        if name is None:
+            prefix = self.__class__.__name__.lower()
+
+            if _global_layer_name_dict.get(prefix) is not None:
+                _global_layer_name_dict[prefix] += 1
+                name = prefix + '_' + str(_global_layer_name_dict[prefix])
+            else:
+                _global_layer_name_dict[prefix] = 0
+                name = prefix
+            while True:
+                if _global_layer_name_dict.get(name) is None:
+                    break
+                _global_layer_name_dict[prefix] += 1
+                name = prefix + '_' + str(_global_layer_name_dict[prefix])
+        else:
+            if _global_layer_name_dict.get(name) is not None:
+                pass
+            else:
+                _global_layer_name_dict[name] = 0
+
+        self.name = name
+
+        if isinstance(act, str):
+            str_act = str2act(act)
+
+        if act:
+            if isinstance(act, str) and (len(act) > 5 and act[0:5] == "lrelu" or
+                                         len(act) > 10 and act[0:10] == "leaky_relu"):
+                self.act = str_act
+            elif isinstance(act, str):
+                self.act = str_act()
+            else:
+                self.act = act()
+        else:
+            self.act = act
+
+        # Layer building state
+        self._built = False
+
+        # paddle built state
+        self._paddle_built = False
+
+        # Layer nodes state
+        self._nodes = []
+        self._nodes_fixed = False
+
+        # Layer weight state
+        self._all_weights = []
+        self._trainable_weights = []
+        self._nontrainable_weights = []
+
+        # Layer training state
+        self.is_train = True
+
+        # layer forward state
+        self._forward_state = False
+
+    def set_train(self):
+        """
+        Sets this Layer and all its sublayers to training mode.
+        This only affects certain modules like `Dropout` and `BatchNorm`.
+
+        Returns:
+            None
+
+        Example::
+            ..
code-block:: python + + import paddle + + class MyLayer(paddle.nn.Layer): + def __init__(self): + super(MyLayer, self).__init__() + self._linear = paddle.nn.Linear(1, 1) + self._dropout = paddle.nn.Dropout(p=0.5) + + def forward(self, input): + temp = self._linear(input) + temp = self._dropout(temp) + return temp + + x = paddle.randn([10, 1], 'float32') + mylayer = MyLayer() + mylayer.eval() # set mylayer._dropout to eval mode + out = mylayer(x) + mylayer.train() # set mylayer._dropout to train mode + out = mylayer(x) + + """ + # global setting in dygraph + # NOTE(chenweihang): nn.Layer also can be used in static mode, + # but _dygraph_tracer() can not be called in static mode + if in_dygraph_mode(): + framework._dygraph_tracer().train_mode() + # Layer-level setting + self.training = True + for layer in self.sublayers(): + layer.training = True + + def set_eval(self): + """ + Sets this Layer and all its sublayers to evaluation mode. + This only effects certain modules like `Dropout` and `BatchNorm`. + + Returns: + None + + Example:: + .. code-block:: python + + import paddle + + class MyLayer(paddle.nn.Layer): + def __init__(self): + super(MyLayer, self).__init__() + self._linear = paddle.nn.Linear(1, 1) + self._dropout = paddle.nn.Dropout(p=0.5) + + def forward(self, input): + temp = self._linear(input) + temp = self._dropout(temp) + return temp + + x = paddle.randn([10, 1], 'float32') + mylayer = MyLayer() + mylayer.eval() # set mylayer._dropout to eval mode + out = mylayer(x) + print(out) + + """ + # global setting in dygraph + # NOTE(chenweihang): nn.Layer also can be used in static mode, + # but _dygraph_tracer() can not be called in static mode + if in_dygraph_mode(): + framework._dygraph_tracer().eval_mode() + # Layer-level setting + self.training = False + for layer in self.sublayers(): + layer.training = False + + def build(self, inputs_shape): + raise Exception("The build(self, inputs_shape) method must be implemented by inherited class") + + def forward(self, *inputs, **kwargs): + raise Exception("The forward method must be implemented by inherited class") + + def __call__(self, *inputs, **kwargs): + with param_guard(self._parameters), param_guard(self._buffers): + for forward_pre_hook in self._forward_pre_hooks.values(): + hook_result = forward_pre_hook(self, inputs) + if hook_result is not None: + if not isinstance(hook_result, tuple): + hook_result = (hook_result, ) + inputs = hook_result + + if not self._paddle_built: + with program_desc_tracing_guard(False): + self._build_once(*inputs, **kwargs) + if parallel_helper._is_data_parallel_mode(): + parallel_helper._broadcast_parameters(self._parameters.values()) + self._paddle_built = True + + outputs = self.forward(*inputs, **kwargs) + + for forward_post_hook in self._forward_post_hooks.values(): + hook_result = forward_post_hook(self, inputs, outputs) + if hook_result is not None: + outputs = hook_result + + return outputs + + def _get_weights(self, var_name, shape, init=None, trainable=True, transposed=None): + # TODO 2D mindspore weights shape : [out_channel, in_channel, kernel_h, kernel_w] + # TODO 2D mindspore transposed shape [in_channel, out_channel, kernel_h, kernel_w] + if len(shape) == 3: + shape = shape[::-1] + if len(shape) == 4: + if transposed: + shape = (shape[3], shape[0], shape[1], shape[2]) + else: + shape = (shape[3], shape[2], shape[0], shape[1]) + if len(shape) == 5: + shape = (shape[4], shape[3], shape[0], shape[1], shape[2]) + + # if var_name in ["filters", "weights"]: + # var_name = self.name + "/" + 
var_name + # w_tmp = self.create_parameter(shape=shape, attr=init, is_bias=False, trainable=trainable, var_name=var_name) + # elif var_name in ["biases"]: + # var_name = self.name + "/" + var_name + # w_tmp = self.create_parameter(shape=shape, attr=init, is_bias=True, trainable=trainable, var_name=var_name) + # else: + var_name = self.name + "/" + var_name + w_tmp = self.create_parameter(shape=shape, attr=init, var_name=var_name, trainable=trainable) + self.trainable = trainable + + return w_tmp + + def create_parameter(self, shape, attr=None, dtype=None, is_bias=False, default_initializer=None, trainable=True, var_name=None): + """Create parameters for this layer.""" + init_attr = pd.ParamAttr( + name=var_name, + initializer=attr, + trainable=trainable, + do_model_average=True) + temp_attr = copy.deepcopy(init_attr) + if isinstance(temp_attr, six.string_types) and temp_attr == "": + temp_attr = None + return self._helper.create_parameter(temp_attr, shape, dtype, is_bias, default_initializer) + + @property + def all_weights(self): + ret = [param for _, param in self.named_parameters(include_sublayers=True)] + return ret + + @property + def trainable_weights(self): + return self.parameters() + + def init_build(self, *inputs, **kwargs): + """ + (1) This method must be called when the Layer has no input in_channels. + (2) Automatic shape inference when the user does not enter inchannels. + """ + + self.forward(*inputs, **kwargs) + + def save_weights(self, file_path, format=None): + _save_weights(net=self, file_path=file_path, format=format) + + def load_weights(self, file_path, format=None, in_order=True, skip=False): + """Load model weights from a given file, which should be previously saved by self.save_weights().""" + _load_weights(net=self, file_path=file_path, format=format, in_order=in_order, skip=skip) diff --git a/tensorlayer/layers/core/core_tensorflow.py b/tensorlayer/layers/core/core_tensorflow.py new file mode 100644 index 000000000..46bdc5fc1 --- /dev/null +++ b/tensorlayer/layers/core/core_tensorflow.py @@ -0,0 +1,777 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from .common import str2act, _save_weights, _load_weights +from collections import OrderedDict +import time +import tensorlayer as tl +import tensorflow as tf +from tensorlayer.layers.utils import (get_variable_with_initializer) +from tensorlayer import logging + +__all__ = ['Module', 'SequentialLayer', 'LayerList'] + +_global_layer_name_dict = {} +Parameter_ = tf.Variable + + +class Module(object): + """The basic :class:`Module` class represents a single layer of a neural network. + It should be subclassed when implementing new types of layers. + Parameters + ---------- + name : str or None + A unique layer name. If None, a unique name will be automatically assigned. + Methods + --------- + __init__() + Initializing the Layer. + __call__() + Forwarding the computation. + all_weights() + Return a list of Tensor which are all weights of this Layer. + trainable_weights() + Return a list of Tensor which are all trainable weights of this Layer. + nontrainable_weights() + Return a list of Tensor which are all nontrainable weights of this Layer. + build() + Abstract method. Build the Layer. All trainable weights should be defined in this function. + forward() + Abstract method. Forward computation and return computation results. 
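+ + Examples + --------- + A minimal subclass sketch (a hypothetical ``Scale`` layer; it assumes only the ``tl.ops.multiply`` wrapper used elsewhere in this patch): + + >>> class Scale(Module): + >>> def __init__(self, factor=2.0, name=None): + >>> super(Scale, self).__init__(name) + >>> self.factor = factor + >>> self._built = True # no weights, nothing to build + >>> def forward(self, inputs): + >>> return tl.ops.multiply(inputs, self.factor)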
+ + """ + + def __init__(self, name=None, act=None, *args, **kwargs): + self._params = OrderedDict() + self._layers = OrderedDict() + self._params_status = OrderedDict() + self._parameter_layout_dict = {} + self._create_time = int(time.time() * 1e9) + + global _global_layer_name_dict + if name is None: + prefix = self.__class__.__name__.lower() + + if _global_layer_name_dict.get(prefix) is not None: + _global_layer_name_dict[prefix] += 1 + name = prefix + '_' + str(_global_layer_name_dict[prefix]) + else: + _global_layer_name_dict[prefix] = 0 + name = prefix + while True: + if _global_layer_name_dict.get(name) is None: + break + _global_layer_name_dict[prefix] += 1 + name = prefix + '_' + str(_global_layer_name_dict[prefix]) + else: + if _global_layer_name_dict.get(name) is not None: + pass + else: + _global_layer_name_dict[name] = 0 + + self.name = name + + if isinstance(act, str): + str_act = str2act(act) + + if act: + if isinstance(act, str) and (len(act) > 5 and act[0:5] == "lrelu" or + len(act) > 10 and act[0:10] == "leaky_relu"): + self.act = str_act + elif isinstance(act, str): + self.act = str_act() + else: + self.act = act() + else: + self.act = act + + # Layer building state + self._built = False + + # Layer nodes state + self._nodes = [] + self._nodes_fixed = False + + # Layer weight state + self._all_weights = None + self._trainable_weights = None + self._nontrainable_weights = None + + # layer forward state + self._forward_state = False + + # Layer training state + self.is_train = True + + def extend_repr(self): + """ + Sets the extended representation of the Module. + + To print customized extended information, re-implement this method in your own Layers. + + """ + + return '' + + def __repr__(self): + extra_str = self.extend_repr() + info_str = self.__class__.__name__ + '<' + if self._layers: + sub_str = '\n' + if extra_str: + sub_str += '{}\n'.format(self.extend_repr()) + for key, value in self._layers.items(): + sub_str += '({}): {}\n'.format(key, repr(value)) + sub_str = sub_str.replace('\n', '\n ') + '>' + info_str += sub_str + else: + info_str += extra_str + '>' + return info_str + + def __setattr__(self, name, value): + layers = self.__dict__.get('_layers') + params = self.__dict__.get('_params') + + if isinstance(value, Parameter_): + if params is None: + raise AttributeError("Can not assign params before Module.__init__() call.") + if name in self.__dict__: + if self.__dict__[name] is not None: + raise TypeError("Expected type is not in (Parameter, Module), but got Parameter.") + del self.__dict__[name] + if layers and name in layers: + raise TypeError("Expected type is Module, but got Parameter.") + self.insert_param_to_layer(name, value) + + elif isinstance(value, Module): + if layers is None: + raise AttributeError("Can not assign layers before Module.__init__() call.") + if name in self.__dict__: + del self.__dict__[name] + if params and name in params: + raise TypeError("Expected type is Parameter, but got Module.") + # TODO Automatic shape inference when the user does not enter inchannels. + # if value._built is False: + # raise AttributeError( + # "The registered layer `{}` should be built in advance. " + # "Do you forget to pass the keyword argument 'in_channels'? 
".format(value.name) + # ) + layers[name] = value + else: + object.__setattr__(self, name, value) + + def __call__(self, inputs, *args, **kwargs): + + output = self.forward(inputs, *args, **kwargs) + + return output + + def forward(self, *inputs, **kwargs): + raise Exception("The forward method must be implemented by inherited class") + + def build(self, inputs_shape): + raise Exception("The build(self, inputs_shape) method must be implemented by inherited class") + + def _get_weights(self, var_name, shape, init=tl.initializers.random_normal(), trainable=True, transposed=None): + """ Get trainable variables. """ + + weight = get_variable_with_initializer( + scope_name=self.name, var_name=var_name, shape=shape, init=init, trainable=trainable + ) + self.trainable = trainable + return weight + + def save_weights(self, file_path, format=None): + """Input file_path, save model weights into a file of given format.""" + + _save_weights(self, file_path, format) + + def load_weights(self, file_path, format=None, in_order=True, skip=False): + """Load model weights from a given file, which should be previously saved by self.save_weights().""" + + _load_weights(self, file_path, format, in_order, skip) + + def _set_mode_for_layers(self, is_train): + """Set all layers of this network to a given mode. + + Parameters + ---------- + is_train : boolean + Network's mode. True means training mode while False means evaluation mode. + + """ + + layers = self.layers_and_names(name_prefix='') + for layer_name, layer in layers: + if isinstance(layer, Module): + layer.is_train = is_train + + def set_train(self): + """Set this network in training mode. After calling this method, + all layers in network are in training mode, in particular, BatchNorm, Dropout, etc. + TODO It is not possible to modify the parameter state after initialization, and a better way needs to be found. + Examples + -------- + >>> import tensorlayer as tl + >>> net = tl.vgg16() + >>> net.set_train() + + """ + + if self.is_train !=True: + self.is_train = True + self._set_mode_for_layers(True) + + def set_eval(self): + """Set this network in evaluation mode. After calling this method, + all layers in network are in evaluation mode, in particular, BatchNorm, Dropout, etc. + TODO It is not possible to modify the parameter state after initialization, and a better way needs to be found. + Examples + -------- + >>> import tensorlayer as tl + >>> net = tl.vgg16() + >>> net.set_eval() + # do evaluation + + """ + + if self.is_train != False: + self.is_train = False + self._set_mode_for_layers(False) + + @staticmethod + def _compute_shape(tensors): + if isinstance(tensors, list): + shape_mem = [tl.get_tensor_shape(t) for t in tensors] + else: + shape_mem = tl.get_tensor_shape(tensors) + return shape_mem + + def insert_param_to_layer(self, param_name, param, check_name=True): + """ + Adds a parameter to the current layer. + + Inserts a parameter with given name to the layer. Please refer to the usage in + source code of `tensorlayer.layer.Module.__setattr__`. + + Parameters + ---------- + param_name : str + Name of the parameter. + param : Parameter + Parameter to be inserted to the layer. + check_name : bool + Determines whether the name input is compatible. Default: True. + + """ + + if not param_name: + raise KeyError("The name of parameter should not be null.") + if check_name and '.' 
in param_name: + raise KeyError("The name of parameter should not contain \".\"") + if '_params' not in self.__dict__: + raise AttributeError("You need to call __init__() first.") + if hasattr(self, param_name) and param_name not in self._params: + raise KeyError("Duplicated parameter name '{}'.".format(param_name)) + if not isinstance(param, Parameter_) and param is not None: + raise TypeError("The type of parameter should be 'Parameter' if not None.") + self._params[param_name] = param + try: + self._params_status[param_name] = self.trainable + except AttributeError: + pass + + def _add_node(self, input_tensors, output_tensors): + """Add a LayerNode for this layer given input_tensors, output_tensors. + + WARNING: This function should not be called from outside, it should only be called + in layer.__call__ when building a static model. + + Parameters + ---------- + input_tensors : Tensor or a list of tensors + Input tensors to this layer. + output_tensors : Tensor or a list of tensors + Output tensors to this layer. + + """ + + raise NotImplementedError + + @property + def create_time(self): + return self._create_time + + def __getattr__(self, name): + if '_params' in self.__dict__: + params = self.__dict__['_params'] + if name in params: + return params[name] + if '_layers' in self.__dict__: + layers = self.__dict__['_layers'] + if name in layers: + return layers[name] + if '_params_status' in self.__dict__: + params_status = self.__dict__['_params_status'] + if name in params_status: + return params_status[name] + raise AttributeError("'{}' object has no attribute '{}'.".format(type(self).__name__, name)) + + def __delattr__(self, name): + if name in self._params: + del self._params[name] + elif name in self._layers: + del self._layers[name] + else: + object.__delattr__(self, name) + + @property + def trainable_weights(self): + """ + Returns all trainable weights. + Returns a list of all trainable parameters. + + """ + + if self._trainable_weights is not None and len(self._trainable_weights) > 0: + # self._trainable_weights already extracted, so do nothing + pass + else: + self._trainable_weights = [] + layers = self.layers_and_names(name_prefix='') + for layer_name, layer in layers: + params = layer._params.items() + params_status = layer._params_status.items() + params_zip = zip(params, params_status) + for params, params_status in params_zip: + if params_status[1] == True: + self._trainable_weights.append(params[1]) + return self._trainable_weights + + @property + def nontrainable_weights(self): + """ + Returns all nontrainable weights. + Returns a list of all nontrainable weights. + + """ + + if self._nontrainable_weights is not None and len(self._nontrainable_weights) > 0: + # self._nontrainable_weights already extracted, so do nothing + pass + else: + self._nontrainable_weights = [] + layers = self.layers_and_names(name_prefix='') + for layer_name, layer in layers: + params = layer._params.items() + params_status = layer._params_status.items() + params_zip = zip(params, params_status) + for params, params_status in params_zip: + if params_status[1] == False: + self._nontrainable_weights.append(params[1]) + return self._nontrainable_weights + + @property + def all_weights(self): + """ + Returns all weights. + Returns a list of all weights.
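+ + A short sketch (assuming the lazy build in ``Dense.forward`` defined later in this patch): + + >>> net = tl.layers.Input([8, 3]) + >>> d = tl.layers.Dense(n_units=4) + >>> _ = d(net) # first call triggers build(), creating weights and biases + >>> len(d.all_weights) + 2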
+ + """ + + if self._all_weights is not None and len(self._all_weights) > 0: + # self._all_weights already extracted, so do nothing + pass + else: + self._all_weights = [] + layers = self.layers_and_names(name_prefix='') + for layer_name, layer in layers: + params = layer._params.items() + for par, val in params: + self._all_weights.append(val) + return self._all_weights + + def get_weights(self, expand=True): + """ + Returns an iterator over layer weights. + Yields weights of this layer. If `expand` is True, yield parameters of this layer and all sublayers. + + Parameters + ---------- + expand : bool + If True, yields parameters of this layer and all sublayers. Otherwise, yields only parameters + that are direct members of this layer. Default: True. + + Examples + --------- + >>> net = Net() + >>> for item in net.get_weights(): + >>> print(item) + + """ + + for _, param in self.parameters_and_names(expand=expand): + yield param + + def check_names(self): + names = set("") + for value, param in self.parameters_and_names(): + if param.name in names: + raise ValueError( + "The value of {} is {}, its name '{}' already exists.".format(value, param, param.name) + ) + names.add(param.name) + + def insert_child_to_layer(self, child_name, child): + """ + Adds a child layer to the current layer. + + Parameters + ---------- + child_name : str + Name of the child layer. + child : Module + The child layer to be inserted. + + """ + + if not child_name or '.' in child_name: + raise KeyError("Child layer name is incorrect.") + if hasattr(self, child_name) and child_name not in self._layers: + raise KeyError("Duplicate child name '{}'.".format(child_name)) + if not isinstance(child, Module) and child is not None: + raise TypeError("Child layer type is incorrect.") + self._layers[child_name] = child + + def parameters_and_names(self, name_prefix='', expand=True): + """ + Returns an iterator over layer parameters. + + Includes the parameter's name and itself. + + Parameters + ---------- + name_prefix : str + Namespace. Default: ''. + expand : bool + If True, yields parameters of this layer and all sublayers. Otherwise, yields only parameters + that are direct members of this layer. Default: True. + + Examples + --------- + >>> n = Net() + >>> names = [] + >>> for m in n.parameters_and_names(): + >>> if m[0]: + >>> names.append(m[0]) + + """ + + layers = [] + if expand: + layers = self.layers_and_names(name_prefix=name_prefix) + else: + layers.append((name_prefix, self)) + + params_set = set() + for layer_name, layer in layers: + params = layer._params.items() + for par_name, par in params: + if par.inited_param is not None: + par = par.inited_param + if par is not None and id(par) not in params_set: + params_set.add(id(par)) + par_new_name = par_name + if layer_name: + par_new_name = layer_name + '.' + par_new_name + + yield par_new_name, par + + def layers_and_names(self, layers=None, name_prefix=''): + """ + Returns an iterator over all layers in the network. + + Includes the layer's name and itself. + + Parameters + ---------- + layers : str + layers to iterate over. Default: None. + name_prefix : str + Namespace. Default: ''. 
+ + Examples + --------- + >>> n = Net() + >>> names = [] + >>> for m in n.layers_and_names(): + >>> if m[0]: + >>> names.append(m[0]) + + """ + + t_layers = layers if layers else set() + if self in t_layers: + return + + t_layers.add(self) + yield name_prefix, self + + for name, layer in self._layers.items(): + if layer: + layers_name_prefix = name + if name_prefix: + layers_name_prefix = name_prefix + '.' + layers_name_prefix + for ele in layer.layers_and_names(t_layers, layers_name_prefix): + yield ele + + def layers(self): + """Returns an iterator over immediate layers.""" + + return self.name_layers().values() + + def name_layers(self): + """ + Returns an iterator over all layers in the network. + + Includes the name of the layer and the layer itself. + """ + + value_set = set() + layers = OrderedDict() + for name, layer in self._layers.items(): + if layer is not None and layer not in value_set: + value_set.add(layer) + layers[name] = layer + return layers + + def init_build(self, *inputs, **kwargs): + """ + (1) This method must be called when the Layer has no input in_channels. + (2) Automatic shape inference when the user does not enter in_channels. + """ + + self.forward(*inputs, **kwargs) + + +class SequentialLayer(Module): + """ + The class :class:`SequentialLayer` is a linear stack of layers. + The :class:`SequentialLayer` can be created by passing a list of layer instances. + The given layer instances will be automatically connected one by one. + Parameters + ---------- + layers: list of Layer + A list of layers. + name : str or None + A unique layer name. If None, a unique name will be automatically assigned. + Methods + --------- + __init__() + Initializing the SequentialLayer. + weights() + A collection of weights of all the layer instances. + build() + Build the SequentialLayer. The layer instances will be connected automatically one by one. + forward() + Forward the computation. The computation will go through all layer instances.
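+ + Layers are executed in the order they are given: ``forward`` feeds the input through each layer in turn and returns the last output, as the example below shows.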
+ + Examples + --------- + >>> conv = tl.layers.Conv2d(3, 2, 3, pad_mode='valid') + >>> bn = tl.layers.BatchNorm2d(2) + >>> seq = tl.layers.SequentialLayer([conv, bn]) + >>> x = tl.layers.Input((1, 3, 4, 4)) + >>> seq(x) + """ + + def __init__(self, *args): + super(SequentialLayer, self).__init__() + self._built = True + if len(args) == 1: + layers = args[0] + if isinstance(layers, list): + for index, layer in enumerate(layers): + self.insert_child_to_layer(str(index), layer) + elif isinstance(layers, OrderedDict): + for name, layer in layers.items(): + self.insert_child_to_layer(name, layer) + else: + raise TypeError('Layers must be a list or an OrderedDict') + else: + for index, layer in enumerate(args): + self.insert_child_to_layer(str(index), layer) + self.layer_list = list(self._layers.values()) + + def __getitem__(self, index): + if isinstance(index, slice): + return self.__class__(OrderedDict(list(self._layers.items())[index])) + index = _valid_index(len(self), index) + return list(self._layers.values())[index] + + def __setitem__(self, index, layer): + if _valid_module(layer): + index = _valid_index(len(self), index) + key = list(self._layers.keys())[index] + self._layers[key] = layer + self.layer_list = list(self._layers.values()) + + def __delitem__(self, index): + if isinstance(index, int): + index = _valid_index(len(self), index) + key = list(self._layers.keys())[index] + del self._layers[key] + elif isinstance(index, slice): + keys = list(self._layers.keys())[index] + for key in keys: + del self._layers[key] + else: + raise TypeError('Index {} is not int type or slice type'.format(index)) + self.layer_list = list(self._layers.values()) + + def __len__(self): + return len(self._layers) + + def append(self, layer): + if _valid_module(layer): + self._layers[str(len(self))] = layer + self.layer_list = list(self._layers.values()) + return self + + def build(self, inputs_shape): + pass + + def forward(self, input_data): + for layer in self.layer_list: + input_data = layer(input_data) + return input_data + + +class LayerList(Module): + """ + Holds Modules in a list. + + LayerList can be used like a regular Python list. It supports '__getitem__', '__setitem__', '__delitem__', '__len__', '__iter__' and '__iadd__', and the modules it contains are properly registered and visible to all Module methods. + + Parameters + ---------- + args : list + List of subclass of Module. + Methods + --------- + __init__() + Initializing the LayerList. + insert() + Inserts a given layer before a given index in the list. + extend() + Appends layers from a Python iterable to the end of the list. + append() + Appends a given layer to the end of the list. + + Examples + ---------
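+ A short sketch (reusing the ``Conv2d``/``BatchNorm2d`` layers from the ``SequentialLayer`` example above): + + >>> conv = tl.layers.Conv2d(3, 2, 3, pad_mode='valid') + >>> bn = tl.layers.BatchNorm2d(2) + >>> layers = tl.layers.LayerList([conv, bn]) + >>> layers.append(tl.layers.BatchNorm2d(2)) + >>> len(layers) + 3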
+ + """ + def __init__(self, args): + super(LayerList, self).__init__() + self.extend(args) + + def __getitem__(self, index): + if isinstance(index, slice): + return self.__class__(list(self._layers.values())[index]) + if isinstance(index, int): + index = _valid_index(len(self), index) + return self._layers[str(index)] + raise TypeError('Index {} is not int type or slice type'.format(index)) + + def __setitem__(self, index, layer): + if not isinstance(index, int): + raise TypeError('Index {} is not int type'.format(index)) + _valid_module(layer) + index = _valid_index(len(self), index) + self._layers[str(index)] = layer + + def __delitem__(self, index): + if isinstance(index, int): + index = _valid_index(len(self), index) + del self._layers[str(index)] + elif isinstance(index, slice): + keys = list(self._layers.keys())[index] + for key in keys: + del self._layers[key] + else: + raise TypeError('Index {} is not int type or slice type'.format(index)) + temp_dict = OrderedDict() + for idx, layer in enumerate(self._layers.values()): + temp_dict[str(idx)] = layer + self._layers = temp_dict + + def __len__(self): + return len(self._layers) + + def __iter__(self): + return iter(self._layers.values()) + + def __iadd__(self, layers): + self.extend(layers) + return self + + def insert(self, index, layer): + """ + Inserts a given layer before a given index in the list. + + """ + + idx = _valid_index(len(self), index) + _valid_module(layer) + length = len(self) + while length > idx: + self._layers[str(length)] = self._layers[str(length - 1)] + length -= 1 + self._layers[str(idx)] = layer + + def extend(self, layers): + """ + Appends layers from a Python iterable to the end of the list. + + """ + + if not isinstance(layers, list): + raise TypeError('Modules {} should be a list of sublayers'.format(layers)) + for layer in layers: + if _valid_module(layer): + self._layers[str(len(self))] = layer + return self + + def append(self, layer): + """ + Appends a given layer to the end of the list. + + """ + + if _valid_module(layer): + self._layers[str(len(self))] = layer + + def forward(self, *inputs): + raise NotImplementedError + + +def _valid_index(layer_num, index): + if not isinstance(index, int): + raise TypeError("Index {} is not int type".format(index)) + if not -layer_num <= index < layer_num: + raise IndexError( + "Index should be a number in range [{}, {}), but got {}".format(-layer_num, layer_num, index) + ) + return index % layer_num + + +def _valid_module(layer): + if issubclass(layer.__class__, Module): + return True + raise TypeError('Module {} is not a subclass of Module'.format(layer)) \ No newline at end of file diff --git a/tensorlayer/layers/dense/__init__.py b/tensorlayer/layers/dense/__init__.py index 557fbd070..2291b949e 100644 --- a/tensorlayer/layers/dense/__init__.py +++ b/tensorlayer/layers/dense/__init__.py @@ -5,7 +5,7 @@ various benchmarks and domain-specific problems. In addition, we also support transparent access to native TensorFlow parameters. For example, we provide not only layers for local response normalization, but also -layers that allow user to apply ``tf.nn.lrn`` on ``network.outputs``. +layers that allow the user to apply ``tl.ops.lrn`` on ``network.outputs``. More functions can be found in `TensorFlow API `__. """ diff --git a/tensorlayer/layers/dense/base_dense.py b/tensorlayer/layers/dense/base_dense.py index c24080432..acc3447af 100644 --- a/tensorlayer/layers/dense/base_dense.py +++ b/tensorlayer/layers/dense/base_dense.py @@ -1,22 +1,16 @@ #!
/usr/bin/python # -*- coding: utf-8 -*- -import numpy as np -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import LayersConfig +from tensorlayer.layers.core import Module __all__ = [ 'Dense', ] -class Dense(Layer): +class Dense(Module): """The :class:`Dense` class is a fully connected layer. Parameters @@ -40,10 +34,10 @@ class Dense(Layer): With TensorLayer >>> net = tl.layers.Input([100, 50], name='input') - >>> dense = tl.layers.Dense(n_units=800, act=tf.nn.relu, in_channels=50, name='dense_1') + >>> dense = tl.layers.Dense(n_units=800, act=tl.ReLU, in_channels=50, name='dense_1') >>> print(dense) Dense(n_units=800, relu, in_channels='50', name='dense_1') - >>> tensor = tl.layers.Dense(n_units=800, act=tf.nn.relu, name='dense_2')(net) + >>> tensor = tl.layers.Dense(n_units=800, act=tl.ReLU, name='dense_2')(net) >>> print(tensor) tf.Tensor([...], shape=(100, 800), dtype=float32) @@ -76,11 +70,11 @@ def __init__( logging.info( "Dense %s: %d %s" % - (self.name, self.n_units, self.act.__name__ if self.act is not None else 'No Activation') + (self.name, self.n_units, self.act.__class__.__name__ if self.act is not None else 'No Activation') ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ('{classname}(n_units={n_units}, ' + actstr) if self.in_channels is not None: s += ', in_channels=\'{in_channels}\'' @@ -97,15 +91,31 @@ def build(self, inputs_shape): else: self.in_channels = inputs_shape[1] shape = [inputs_shape[1], self.n_units] + self.W = self._get_weights("weights", shape=tuple(shape), init=self.W_init) + + self.b_init_flag = False if self.b_init: self.b = self._get_weights("biases", shape=(self.n_units, ), init=self.b_init) + self.b_init_flag = True + self.bias_add = tl.ops.BiasAdd() - # @tf.function - def forward(self, inputs): - z = tf.matmul(inputs, self.W) - if self.b_init: - z = tf.add(z, self.b) + self.act_init_flag = False if self.act: + self.act_init_flag = True + + self.matmul = tl.ops.MatMul() + + def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + z = self.matmul(inputs, self.W) + if self.b_init_flag: + z = self.bias_add(z, self.b) + if self.act_init_flag: z = self.act(z) return z diff --git a/tensorlayer/layers/dense/binary_dense.py b/tensorlayer/layers/dense/binary_dense.py index d4d152ac0..24fab5cf1 100644 --- a/tensorlayer/layers/dense/binary_dense.py +++ b/tensorlayer/layers/dense/binary_dense.py @@ -1,12 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module from tensorlayer.layers.utils import quantize __all__ = [ @@ -14,7 +11,7 @@ ] -class BinaryDense(Layer): +class BinaryDense(Module): """The :class:`BinaryDense` class is a binary fully connected layer, which weights are either -1 or 1 while inferencing. Note that, the bias vector would not be binarized. @@ -37,6 +34,14 @@ class BinaryDense(Layer): name : None or str A unique layer name. 
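+ + Note: the stored float weights are passed through ``quantize`` on every forward call, so binarization happens on the fly (see the rewritten ``forward`` below).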
+ Examples + -------- + >>> net = tl.layers.Input([10, 784], name='input') + >>> net = tl.layers.BinaryDense(n_units=800, act=tl.ReLU, name='relu1')(net) + >>> output shape :(10, 800) + >>> net = tl.layers.BinaryDense(n_units=10, name='output')(net) + >>> output shape : (10, 10) + """ def __init__( @@ -47,7 +52,7 @@ def __init__( W_init=tl.initializers.truncated_normal(stddev=0.05), b_init=tl.initializers.constant(value=0.0), in_channels=None, - name=None, #'binary_dense', + name=None, ): super().__init__(name, act=act) self.n_units = n_units @@ -62,11 +67,11 @@ def __init__( logging.info( "BinaryDense %s: %d %s" % - (self.name, n_units, self.act.__name__ if self.act is not None else 'No Activation') + (self.name, n_units, self.act.__class__.__name__ if self.act is not None else 'No Activation') ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ('{classname}(n_units={n_units}, ' + actstr) if self.in_channels is not None: s += ', in_channels=\'{in_channels}\'' @@ -89,17 +94,22 @@ def build(self, inputs_shape): self.W = self._get_weights("weights", shape=(n_in, self.n_units), init=self.W_init) if self.b_init is not None: self.b = self._get_weights("biases", shape=(self.n_units), init=self.b_init) + self.bias_add = tl.ops.BiasAdd() + + self.matmul = tl.ops.MatMul() def forward(self, inputs): - # W = tl.act.sign(W) # dont update ... - W_ = quantize(self.W) - # W = tf.Variable(W) + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True - outputs = tf.matmul(inputs, W_) - # self.outputs = xnor_gemm(self.inputs, W) # TODO + W_ = quantize(self.W) + outputs = self.matmul(inputs, W_) if self.b_init is not None: - outputs = tf.nn.bias_add(outputs, self.b, name='bias_add') + outputs = self.bias_add(outputs, self.b) if self.act: outputs = self.act(outputs) diff --git a/tensorlayer/layers/dense/dorefa_dense.py b/tensorlayer/layers/dense/dorefa_dense.py index 4bc4f40df..f54a228d3 100644 --- a/tensorlayer/layers/dense/dorefa_dense.py +++ b/tensorlayer/layers/dense/dorefa_dense.py @@ -1,12 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module from tensorlayer.layers.utils import cabs, quantize_active, quantize_weight __all__ = [ @@ -14,7 +11,7 @@ ] -class DorefaDense(Layer): +class DorefaDense(Module): """The :class:`DorefaDense` class is a binary fully connected layer, which weights are 'bitW' bits and the output of the previous layer are 'bitA' bits while inferencing. @@ -42,6 +39,14 @@ class DorefaDense(Layer): name : a str A unique layer name. 
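+ + Note: at forward time the activations are quantized to ``bitA`` bits via ``quantize_active(cabs(inputs), bitA)`` and the weights to ``bitW`` bits via ``quantize_weight``; the bias (if any) stays full precision.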
+ Examples + -------- + >>> net = tl.layers.Input([10, 784], name='input') + >>> net = tl.layers.DorefaDense(n_units=800, act=tl.ReLU, name='relu1')(net) + >>> output shape :(10, 800) + >>> net = tl.layers.DorefaDense(n_units=10, name='output')(net) + >>> output shape :(10, 10) + """ def __init__( @@ -99,15 +104,21 @@ def build(self, inputs_shape): self.W = self._get_weights("weights", shape=(n_in, self.n_units), init=self.W_init) if self.b_init is not None: self.b = self._get_weights("biases", shape=(self.n_units), init=self.b_init) + self.bias_add = tl.ops.BiasAdd() + self.matmul = tl.ops.MatMul() def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + inputs = quantize_active(cabs(inputs), self.bitA) W_ = quantize_weight(self.W, self.bitW) - outputs = tf.matmul(inputs, W_) - # self.outputs = xnor_gemm(self.inputs, W) # TODO + outputs = self.matmul(inputs, W_) if self.b_init is not None: - outputs = tf.nn.bias_add(outputs, self.b, name='bias_add') - # self.outputs = xnor_gemm(self.inputs, W) + b # TODO + outputs = self.bias_add(outputs, self.b) if self.act: outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/dense/dropconnect.py b/tensorlayer/layers/dense/dropconnect.py index 43c3a144a..3e0e3e2ef 100644 --- a/tensorlayer/layers/dense/dropconnect.py +++ b/tensorlayer/layers/dense/dropconnect.py @@ -2,20 +2,16 @@ # -*- coding: utf-8 -*- import numbers - -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'DropconnectDense', ] -class DropconnectDense(Layer): +class DropconnectDense(Module): """ The :class:`DropconnectDense` class is :class:`Dense` with DropConnect behaviour which randomly removes connections between this layer and the previous @@ -42,13 +38,13 @@ class DropconnectDense(Layer): Examples -------- - >>> net = tl.layers.Input([None, 784], name='input') - >>> net = tl.layers.DropconnectDense(keep=0.8, - ... n_units=800, act=tf.nn.relu, name='relu1')(net) - >>> net = tl.layers.DropconnectDense(keep=0.5, - ... n_units=800, act=tf.nn.relu, name='relu2')(net) - >>> net = tl.layers.DropconnectDense(keep=0.5, - ... 
n_units=10, name='output')(net) + >>> net = tl.layers.Input([10, 784], name='input') + >>> net = tl.layers.DropconnectDense(keep=0.8, n_units=800, act=tl.ReLU, name='relu1')(net) + >>> output shape :(10, 800) + >>> net = tl.layers.DropconnectDense(keep=0.5, n_units=800, act=tl.ReLU, name='relu2')(net) + >>> output shape :(10, 800) + >>> net = tl.layers.DropconnectDense(keep=0.5, n_units=10, name='output')(net) + >>> output shape :(10, 10) References ---------- @@ -83,7 +79,7 @@ def __init__( logging.info( "DropconnectDense %s: %d %s" % - (self.name, n_units, self.act.__name__ if self.act is not None else 'No Activation') + (self.name, n_units, self.act.__class__.__name__ if self.act is not None else 'No Activation') ) def __repr__(self): @@ -109,11 +105,21 @@ def build(self, inputs_shape): if self.b_init: self.b = self._get_weights("biases", shape=(self.n_units), init=self.b_init) + self.dropout = tl.ops.Dropout(keep=self.keep) + self.matmul = tl.ops.MatMul() + self.bias_add = tl.ops.BiasAdd() + def forward(self, inputs): - W_dropcon = tf.nn.dropout(self.W, 1 - (self.keep)) - outputs = tf.matmul(inputs, W_dropcon) + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + W_dropcon = self.dropout(self.W) + outputs = self.matmul(inputs, W_dropcon) if self.b_init: - outputs = tf.nn.bias_add(outputs, self.b, name='bias_add') + outputs = self.bias_add(outputs, self.b) if self.act: outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/dense/quan_dense.py b/tensorlayer/layers/dense/quan_dense.py index 67ca73074..a055675f9 100644 --- a/tensorlayer/layers/dense/quan_dense.py +++ b/tensorlayer/layers/dense/quan_dense.py @@ -1,12 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module from tensorlayer.layers.utils import (quantize_active_overflow, quantize_weight_overflow) __all__ = [ @@ -14,7 +11,7 @@ ] -class QuanDense(Layer): +class QuanDense(Module): """The :class:`QuanDense` class is a quantized fully connected layer with BN, which weights are 'bitW' bits and the output of the previous layer are 'bitA' bits while inferencing. @@ -40,6 +37,14 @@ class QuanDense(Layer): name : None or str A unique layer name. 
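+ + Note: ``forward`` quantizes the activations with ``quantize_active_overflow`` and the weights with ``quantize_weight_overflow`` before the matrix multiply (see the hunk below).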
+ Examples + -------- + >>> net = tl.layers.Input([10, 784], name='input') + >>> net = tl.layers.QuanDense(n_units=800, act=tl.ReLU, name='relu1')(net) + >>> output shape :(10, 800) + >>> net = tl.layers.QuanDense(n_units=10, name='output')(net) + >>> output shape :(10, 10) + + """ + def __init__( @@ -97,18 +102,26 @@ def build(self, inputs_shape): self.W = self._get_weights("weights", shape=(n_in, self.n_units), init=self.W_init) if self.b_init is not None: self.b = self._get_weights("biases", shape=int(self.n_units), init=self.b_init) + self.bias_add = tl.ops.BiasAdd() + + self.matmul = tl.ops.MatMul() def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True inputs = quantize_active_overflow(inputs, self.bitA) W_ = quantize_weight_overflow(self.W, self.bitW) # outputs = tf.matmul(inputs, self.W) - outputs = tf.matmul(inputs, W_) # hao dong change to this + outputs = self.matmul(inputs, W_) # hao dong change to this if self.b_init is not None: - outputs = tf.nn.bias_add(outputs, self.b, name='bias_add') + outputs = self.bias_add(outputs, self.b) if self.act: outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/dense/quan_dense_bn.py b/tensorlayer/layers/dense/quan_dense_bn.py index 9270f548d..0c40c7dff 100644 --- a/tensorlayer/layers/dense/quan_dense_bn.py +++ b/tensorlayer/layers/dense/quan_dense_bn.py @@ -1,25 +1,23 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf -# from tensorlayer.layers.core import LayersConfig -from tensorflow.python.training import moving_averages - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer -from tensorlayer.layers.utils import (quantize_active_overflow, quantize_weight_overflow) +from tensorlayer.layers.core import Module +from tensorflow.python.training import moving_averages +from tensorlayer.layers.utils import ( + quantize_active_overflow, quantize_weight_overflow, mean_var_with_update, w_fold, bias_fold +) __all__ = [ 'QuanDenseWithBN', ] -class QuanDenseWithBN(Layer): +class QuanDenseWithBN(Module): """The :class:`QuanDenseWithBN` class is a quantized fully connected layer with BN, which weights are 'bitW' bits and the output of the previous layer are 'bitA' bits while inferencing. - + # TODO The QuanDenseWithBN only supports TensorFlow backend.
Parameters ---------- n_units : int @@ -59,9 +57,7 @@ class QuanDenseWithBN(Layer): >>> import tensorlayer as tl >>> net = tl.layers.Input([50, 256]) >>> layer = tl.layers.QuanDenseWithBN(128, act='relu', name='qdbn1')(net) - >>> print(layer) >>> net = tl.layers.QuanDenseWithBN(256, act='relu', name='qdbn2')(net) - >>> print(net) """ def __init__( @@ -100,11 +96,11 @@ def __init__( logging.info( "QuanDenseLayerWithBN %s: %d %s" % - (self.name, n_units, self.act.__name__ if self.act is not None else 'No Activation') + (self.name, n_units, self.act.__class__.__name__ if self.act is not None else 'No Activation') ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ('{classname}(n_units={n_units}, ' + actstr) s += ', bitW={bitW}, bitA={bitA}' if self.in_channels is not None: @@ -146,11 +142,17 @@ def build(self, inputs_shape): ) def forward(self, inputs): + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + x = inputs inputs = quantize_active_overflow(inputs, self.bitA) - mid_out = tf.matmul(x, self.W) + mid_out = tl.ops.matmul(x, self.W) - mean, variance = tf.nn.moments(x=mid_out, axes=list(range(len(mid_out.get_shape()) - 1))) + mean, variance = tl.ops.moments(x=mid_out, axes=list(range(len(mid_out.get_shape()) - 1))) update_moving_mean = moving_averages.assign_moving_average( self.moving_mean, mean, self.decay, zero_debias=False @@ -161,19 +163,19 @@ def forward(self, inputs): ) # if zero_debias=True, has bias if self.is_train: - mean, var = self.mean_var_with_update(update_moving_mean, update_moving_variance, mean, variance) + mean, var = mean_var_with_update(update_moving_mean, update_moving_variance, mean, variance) else: mean, var = self.moving_mean, self.moving_variance - w_fold = self._w_fold(self.W, self.scale_para, var, self.epsilon) + _w_fold = w_fold(self.W, self.scale_para, var, self.epsilon) - W = quantize_weight_overflow(w_fold, self.bitW) + W = quantize_weight_overflow(_w_fold, self.bitW) - outputs = tf.matmul(inputs, W) + outputs = tl.ops.matmul(inputs, W) if self.beta_init: - bias_fold = self._bias_fold(self.offset_para, self.scale_para, mean, var, self.epsilon) - outputs = tf.nn.bias_add(outputs, bias_fold, name='bias_add') + _bias_fold = bias_fold(self.offset_para, self.scale_para, mean, var, self.epsilon) + outputs = tl.ops.bias_add(outputs, _bias_fold) else: outputs = outputs @@ -182,13 +184,3 @@ def forward(self, inputs): else: outputs = outputs return outputs - - def mean_var_with_update(self, update_moving_mean, update_moving_variance, mean, variance): - with tf.control_dependencies([update_moving_mean, update_moving_variance]): - return tf.identity(mean), tf.identity(variance) - - def _w_fold(self, w, gama, var, epsilon): - return tf.compat.v1.div(tf.multiply(gama, w), tf.sqrt(var + epsilon)) - - def _bias_fold(self, beta, gama, mean, var, epsilon): - return tf.subtract(beta, tf.compat.v1.div(tf.multiply(gama, mean), tf.sqrt(var + epsilon))) diff --git a/tensorlayer/layers/dense/ternary_dense.py b/tensorlayer/layers/dense/ternary_dense.py index 49479df7c..5cf4457e5 100644 --- a/tensorlayer/layers/dense/ternary_dense.py +++ b/tensorlayer/layers/dense/ternary_dense.py @@ -1,12 +1,9 @@ #! 
 
/usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module from tensorlayer.layers.utils import compute_alpha, ternary_operation __all__ = [ @@ -14,8 +11,9 @@ ] -class TernaryDense(Layer): +class TernaryDense(Module): """The :class:`TernaryDense` class is a ternary fully connected layer, whose weights are either -1, 1 or 0 during inference. + # TODO The TernaryDense only supports TensorFlow backend. Note that, the bias vector would not be ternarized. @@ -92,17 +90,20 @@ def build(self, inputs_shape): self.b = self._get_weights(var_name="biases", shape=(self.n_units), init=self.b_init) def forward(self, inputs): - # W = tl.act.sign(W) # dont update ... + if self._forward_state == False: + if self._built == False: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + alpha = compute_alpha(self.W) W_ = ternary_operation(self.W) - W_ = tf.multiply(alpha, W_) - # W = tf.Variable(W) + W_ = tl.ops.multiply(alpha, W_) - outputs = tf.matmul(inputs, W_) - # self.outputs = xnor_gemm(self.inputs, W) # TODO + outputs = tl.ops.matmul(inputs, W_) if self.b_init is not None: - outputs = tf.nn.bias_add(outputs, self.b, name='bias_add') + outputs = tl.ops.bias_add(outputs, self.b) if self.act: + outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/deprecated.py b/tensorlayer/layers/deprecated.py index 2cb6699c0..cc44742d1 100644 --- a/tensorlayer/layers/deprecated.py +++ b/tensorlayer/layers/deprecated.py @@ -15,7 +15,7 @@ class NonExistingLayerError(Exception): 'PTRelu6Layer', ] -__log__ = '\n Hint: 1) downgrade TF and TL from version 2.x to 1.x. 2) check the documentation of TF and TL version 2.x' +__log__ = '\n Hint: 1) downgrade TL from version 3.x to 2.x. 2) check the documentation of TF version 2.x and TL version 3.x' def PReluLayer(*args, **kwargs): @@ -414,3 +414,26 @@ def UnStackLayer(*args, **kwargs): def TimeDistributedLayer(*args, **kwargs): # raise NonExistingLayerError("TimeDistributedLayer(x1, x2, name='a') --> TimeDistributed(name='a')(x1, x2)") raise NonExistingLayerError("TimeDistributedLayer is removed for TF 2.0, please use eager mode instead." + __log__) + + +__all__ += ['ModelLayer'] + + +def ModelLayer(*args, **kwargs): + raise NonExistingLayerError("ModelLayer is removed for TensorLayer 3.0.") + + +__all__ += ['Seq2seqLuongAttention'] + + +def Seq2seqLuongAttention(*args, **kwargs): + raise NonExistingLayerError("Seq2seqLuongAttention is removed for TensorLayer 3.0.") + + +__all__ += ['cross_entropy'] + + +def cross_entropy(*args, **kwargs): + raise NonExistingLayerError( + "cross_entropy(output, target) --> softmax_cross_entropy_with_logits(output, target)" + __log__ + ) diff --git a/tensorlayer/layers/dropout.py b/tensorlayer/layers/dropout.py index 3724d8b43..54c9ba5fd 100644 --- a/tensorlayer/layers/dropout.py +++ b/tensorlayer/layers/dropout.py @@ -1,20 +1,16 @@ #!
/usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - +import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import LayersConfig +from tensorlayer.layers.core import Module __all__ = [ 'Dropout', ] -class Dropout(Layer): +class Dropout(Module): """ The :class:`Dropout` class is a noise layer which randomly set some activations to zero according to a keeping probability. @@ -29,9 +25,14 @@ class Dropout(Layer): name : None or str A unique layer name. + Examples + -------- + >>> net = tl.layers.Input([10, 200]) + >>> net = tl.layers.Dropout(keep=0.2)(net) + """ - def __init__(self, keep, seed=None, name=None): #"dropout"): + def __init__(self, keep, seed=0, name=None): #"dropout"): super(Dropout, self).__init__(name) self.keep = keep self.seed = seed @@ -49,12 +50,12 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass + self.dropout = tl.ops.Dropout(keep=self.keep, seed=self.seed) # @tf.function def forward(self, inputs): if self.is_train: - outputs = tf.nn.dropout(inputs, rate=1 - (self.keep), seed=self.seed, name=self.name) + outputs = self.dropout(inputs) else: outputs = inputs return outputs diff --git a/tensorlayer/layers/embedding.py b/tensorlayer/layers/embedding.py index 9d0d882d1..84e4b56c1 100644 --- a/tensorlayer/layers/embedding.py +++ b/tensorlayer/layers/embedding.py @@ -1,24 +1,14 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import numpy as np -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import LayersConfig +from tensorlayer.layers.core import Module -__all__ = [ - 'OneHot', - 'Word2vecEmbedding', - 'Embedding', - 'AverageEmbedding', -] +__all__ = ['OneHot', 'Word2vecEmbedding', 'Embedding', 'AverageEmbedding'] -class OneHot(Layer): +class OneHot(Module): """ The :class:`OneHot` class is the starting layer of a neural network, see ``tf.one_hot``. Useful link: `https://www.tensorflow.org/api_docs/python/tf/one_hot`. @@ -34,26 +24,23 @@ class OneHot(Layer): axis : None or int The axis. dtype : None or TensorFlow dtype - The data type, None means tf.float32. + The data type, None means tl.float32. name : str A unique layer name. 
Examples --------- - >>> import tensorflow as tf - >>> import tensorlayer as tl - >>> net = tl.layers.Input([32], dtype=tf.int32) + >>> net = tl.layers.Input([32], dtype=tl.int32) >>> onehot = tl.layers.OneHot(depth=8) >>> print(onehot) OneHot(depth=8, name='onehot') >>> tensor = tl.layers.OneHot(depth=8)(net) >>> print(tensor) - tf.Tensor([...], shape=(32, 8), dtype=float32) + Tensor([...], shape=(32, 8), dtype=float32) """ - def __init__(self, depth=None, on_value=None, off_value=None, axis=None, dtype=None, name=None): #'input'): - + def __init__(self, depth=None, on_value=1.0, off_value=0.0, axis=-1, dtype=tl.float32, name=None): super(OneHot, self).__init__(name) self.depth = depth self.on_value = on_value @@ -62,9 +49,8 @@ def __init__(self, depth=None, on_value=None, off_value=None, axis=None, dtype=N self.dtype = dtype logging.info("OneHotInput %s" % (self.name)) - if not self._built: - self.build(tuple()) - self._built = True + self.build() + self._built = True if self.depth is None: raise RuntimeError(self.__class__.__name__ + ": depth == None the number of output units is undefined") @@ -82,10 +68,11 @@ def __repr__(self): s += ')' return s.format(classname=self.__class__.__name__, **self.__dict__) - def build(self, inputs_shape): - pass + def build(self, inputs_shape=None): + self.onehot = tl.ops.OneHot( + depth=self.depth, on_value=self.on_value, off_value=self.off_value, axis=self.axis, dtype=self.dtype + ) - # @tf.function def forward(self, inputs): """ Parameters @@ -93,13 +80,11 @@ def forward(self, inputs): inputs : input tensor The inputs are indices. The locations represented by indices in indices take value on_value, while all other locations take value off_value. """ - outputs = tf.one_hot( - inputs, self.depth, on_value=self.on_value, off_value=self.off_value, axis=self.axis, dtype=self.dtype - ) + outputs = self.onehot(inputs) return outputs -class Word2vecEmbedding(Layer): +class Word2vecEmbedding(Module): """ The :class:`Word2vecEmbedding` class is a fully connected layer. For Word Embedding, words are input as integer index. @@ -128,7 +113,7 @@ class Word2vecEmbedding(Layer): In a static model, once the model is constructed, the computation of nce loss cannot be changed (always computed or not computed). nce_loss_args : dictionary - The arguments for tf.nn.nce_loss() + The arguments for tf.ops.nce_loss() E_init : initializer The initializer for initializing the embedding matrix nce_W_init : initializer @@ -153,12 +138,11 @@ class Word2vecEmbedding(Layer): -------- Word2Vec With TensorLayer (Example in `examples/text_word_embedding/tutorial_word2vec_basic.py`) - >>> import tensorflow as tf >>> import tensorlayer as tl >>> batch_size = 8 >>> embedding_size = 50 - >>> inputs = tl.layers.Input([batch_size], dtype=tf.int32) - >>> labels = tl.layers.Input([batch_size, 1], dtype=tf.int32) + >>> inputs = tl.layers.Input([batch_size], dtype=tl.int32) + >>> labels = tl.layers.Input([batch_size, 1], dtype=tl.int32) >>> emb_net = tl.layers.Word2vecEmbedding( >>> vocabulary_size=10000, >>> embedding_size=embedding_size, @@ -248,7 +232,7 @@ def build(self, inputs_shape): init=self.E_init, ) - self.normalized_embeddings = tf.nn.l2_normalize(self.embeddings, 1) + self.normalized_embeddings = tl.L2Normalize(axis=1)(self.embeddings) if self.activate_nce_loss: # Construct the variables for the NCE loss (i.e. 
negative sampling) @@ -264,7 +248,11 @@ def build(self, inputs_shape): init=self.nce_b_init, ) - # @tf.function + self.embedding_lookup = tl.EmbeddingLookup() + + if self.activate_nce_loss: + self.nce_loss = tl.NCELoss(**self.nce_loss_args) + def forward(self, inputs, use_nce_loss=None): """ Parameters @@ -284,8 +272,10 @@ def forward(self, inputs, use_nce_loss=None): The nce_cost is returned only if the nce_loss is used. """ - ids = inputs[0] if isinstance(inputs, list) else inputs - outputs = tf.nn.embedding_lookup(params=self.embeddings, ids=ids) + if isinstance(inputs, list): + outputs = self.embedding_lookup(params=self.embeddings, ids=inputs[0]) + else: + outputs = self.embedding_lookup(params=self.embeddings, ids=inputs) if use_nce_loss is True and not self.activate_nce_loss: raise AttributeError( @@ -297,10 +287,10 @@ def forward(self, inputs, use_nce_loss=None): if not isinstance(inputs, list): raise ValueError("If nce loss is used, the labels of inputs must be provided.") - nce_cost = tf.reduce_mean( - input_tensor=tf.nn.nce_loss( + nce_cost = tl.reduce_mean( + input_tensor=self.nce_loss( weights=self.nce_weights, biases=self.nce_biases, inputs=outputs, labels=inputs[1], - num_sampled=self.num_sampled, num_classes=self.vocabulary_size, **self.nce_loss_args + num_sampled=self.num_sampled, num_classes=self.vocabulary_size ) ) @@ -309,7 +299,7 @@ def forward(self, inputs, use_nce_loss=None): return outputs -class Embedding(Layer): +class Embedding(Module): """ The :class:`Embedding` class is a look-up table for word embedding. @@ -337,15 +327,14 @@ class Embedding(Layer): Examples -------- - >>> import tensorflow as tf >>> import tensorlayer as tl - >>> input = tl.layers.Input([8, 100], dtype=tf.int32) + >>> input = tl.layers.Input([8, 100], dtype=tl.int32) >>> embed = tl.layers.Embedding(vocabulary_size=1000, embedding_size=50, name='embed') >>> print(embed) Embedding(vocabulary_size=1000, embedding_size=50) >>> tensor = embed(input) >>> print(tensor) - tf.Tensor([...], shape=(8, 100, 50), dtype=float32) + Tensor([...], shape=(8, 100, 50), dtype=float32) """ @@ -387,8 +376,8 @@ def build(self, inputs_shape): shape=(self.vocabulary_size, self.embedding_size), init=self.E_init, ) + self.embedding_lookup = tl.EmbeddingLookup() - # @tf.function def forward(self, inputs): """ Parameters @@ -396,11 +385,11 @@ def forward(self, inputs): inputs : Tensor The input of a network. """ - outputs = tf.nn.embedding_lookup(params=self.embeddings, ids=inputs) + outputs = self.embedding_lookup(params=self.embeddings, ids=inputs) return outputs -class AverageEmbedding(Layer): +class AverageEmbedding(Module): """The :class:`AverageEmbedding` averages over embeddings of inputs. This is often used as the input layer for models like DAN[1] and FastText[2]. 
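The forward pass masks out ``pad_value`` positions, sums the remaining word embeddings, and divides by the per-sentence count of non-padding words (plus a small epsilon), as the rewritten ``forward`` in the next hunk shows.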
@@ -429,17 +418,16 @@ class AverageEmbedding(Layer): Examples --------- - >>> import tensorflow as tf >>> import tensorlayer as tl >>> batch_size = 8 >>> length = 5 - >>> input = tl.layers.Input([batch_size, length], dtype=tf.int32) + >>> input = tl.layers.Input([batch_size, length], dtype=tl.int32) >>> avgembed = tl.layers.AverageEmbedding(vocabulary_size=1000, embedding_size=50, name='avg') >>> print(avgembed) AverageEmbedding(vocabulary_size=1000, embedding_size=50, pad_value=0) >>> tensor = avgembed(input) >>> print(tensor) - tf.Tensor([...], shape=(8, 50), dtype=float32) + Tensor([...], shape=(8, 50), dtype=float32) """ @@ -487,8 +475,13 @@ def build(self, inputs_shape): shape=(self.vocabulary_size, self.embedding_size), init=self.E_init, ) + self.embedding_lookup = tl.EmbeddingLookup() + self.not_equal = tl.NotEqual() + self.cast = tl.Cast(tl.float32) + self.expand_dims = tl.ExpandDims(axis=-1) + self.reduce_sum = tl.ReduceSum(axis=1) + self.count_nonzero = tl.CountNonzero(keepdims=True, dtype=tl.float32) - # @tf.function def forward(self, inputs): """ Parameters @@ -497,30 +490,19 @@ def forward(self, inputs): The network input. For word inputs, please use integer index format, 2D tensor: (batch_size, sentence_length). """ - word_embeddings = tf.nn.embedding_lookup( - params=self.embeddings, - ids=inputs, - name='word_embeddings', - ) + word_embeddings = self.embedding_lookup(params=self.embeddings, ids=inputs) # Zero out embeddings of pad value - masks = tf.not_equal(inputs, self.pad_value, name='masks') - word_embeddings *= tf.cast(tf.expand_dims(masks, axis=-1), dtype=tf.float32) - sum_word_embeddings = tf.reduce_sum(input_tensor=word_embeddings, axis=1) + masks = self.not_equal(inputs, self.pad_value) + word_embeddings *= self.cast(self.expand_dims(masks)) + sum_word_embeddings = self.reduce_sum(input=word_embeddings) # Count number of non-padding words in each sentence - sentence_lengths = tf.math.count_nonzero( - masks, - axis=1, - keepdims=True, - dtype=tf.float32, - name='sentence_lengths', - ) - - sentence_embeddings = tf.divide( + sentence_lengths = self.count_nonzero(masks, axis=1) + sentence_embeddings = tl.ops.divide( sum_word_embeddings, sentence_lengths + 1e-8, # Add epsilon to avoid dividing by 0 - name='sentence_embeddings' ) outputs = sentence_embeddings diff --git a/tensorlayer/layers/extend.py b/tensorlayer/layers/extend.py index c34815e97..9c48da201 100644 --- a/tensorlayer/layers/extend.py +++ b/tensorlayer/layers/extend.py @@ -1,11 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - +import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'ExpandDims', @@ -13,7 +11,7 @@ ] -class ExpandDims(Layer): +class ExpandDims(Module): """ The :class:`ExpandDims` class inserts a dimension of 1 into a tensor's shape, see `tf.expand_dims() `__ .
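A short sketch (mirroring the ``Tile`` example in this file; the ``axis`` keyword is assumed to match the ``self.axis`` attribute used in ``build`` below): >>> x = tl.layers.Input([10, 3], name='in') >>> y = tl.layers.ExpandDims(axis=-1)(x) >>> output shape : [10, 3, 1]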
@@ -78,7 +76,7 @@ class Tile(Layer): -------- >>> x = tl.layers.Input([10, 3], name='in') >>> y = tl.layers.Tile(multiples=[2, 3])(x) - [20, 9] + >>> output shape : [20, 9] """ def __init__(self, multiples=None, name=None): #'tile'): @@ -99,9 +97,9 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - pass + self.tile = tl.ops.Tile() # @tf.function def forward(self, inputs): - outputs = tf.tile(inputs, multiples=self.multiples, name=self.name) + outputs = self.tile(inputs, multiples=self.multiples) return outputs diff --git a/tensorlayer/layers/image_resampling.py b/tensorlayer/layers/image_resampling.py index b327901a7..f0173883c 100644 --- a/tensorlayer/layers/image_resampling.py +++ b/tensorlayer/layers/image_resampling.py @@ -1,11 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - +import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'UpSampling2d', @@ -13,7 +11,7 @@ ] -class UpSampling2d(Layer): +class UpSampling2d(Module): """The :class:`UpSampling2d` class is a up-sampling 2D layer. See `tf.image.resize_images `__. @@ -39,36 +37,29 @@ class UpSampling2d(Layer): --------- With TensorLayer - >>> ni = tl.layers.Input([None, 50, 50, 32], name='input') + >>> ni = tl.layers.Input([10, 50, 50, 32], name='input') >>> ni = tl.layers.UpSampling2d(scale=(2, 2))(ni) - >>> output shape : [None, 100, 100, 32] + >>> output shape : [10, 100, 100, 32] """ - def __init__( - self, - scale, - method='bilinear', - antialias=False, - data_format='channel_last', - name=None, - ): + def __init__(self, scale, method='bilinear', antialias=False, data_format='channels_last', name=None, ksize=None): super(UpSampling2d, self).__init__(name) self.method = method self.antialias = antialias self.data_format = data_format + self.ksize = ksize logging.info( "UpSampling2d %s: scale: %s method: %s antialias: %s" % (self.name, scale, self.method, self.antialias) ) - self.build(None) - self._built = True - if isinstance(scale, (list, tuple)) and len(scale) != 2: raise ValueError("scale must be int or tuple/list of length 2") self.scale = (scale, scale) if isinstance(scale, int) else scale + self.build(None) + self._built = True def __repr__(self): s = '{classname}(scale={scale}, method={method}' @@ -78,8 +69,10 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, scale=self.scale, method=self.method, name=self.name) def build(self, inputs_shape): - if self.data_format != 'channel_last': - raise Exception("UpSampling2d tf.image.resize_images only support channel_last") + self.resize = tl.ops.Resize( + scale=self.scale, method=self.method, antialias=self.antialias, data_format=self.data_format, + ksize=self.ksize + ) def forward(self, inputs): """ Parameters ---------- inputs : :class:`Tensor` Inputs tensors with 4-D Tensor of the shape (batch, height, width, channels) """ - output_size = [int(inputs.shape[1] * self.scale[0]), int(inputs.shape[2] * self.scale[1])] - outputs = tf.image.resize(inputs, size=output_size, method=self.method, antialias=self.antialias) + outputs = self.resize(inputs) return outputs -class DownSampling2d(Layer): +class DownSampling2d(Module): """The :class:`DownSampling2d` class is down-sampling 2D layer. See `tf.image.resize_images `__.
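`DownSampling2d`, whose hunks follow, reuses the same `tl.ops.Resize` op as `UpSampling2d` by passing the reciprocal scale. The shape arithmetic behind both docstring examples, as a standalone pure-Python sketch (no framework required):

```python
# UpSampling2d multiplies the spatial dims by `scale`;
# DownSampling2d divides, implemented as Resize with scale = [1/s0, 1/s1].
def resized_hw(height, width, scale):
    return int(height * scale[0]), int(width * scale[1])

assert resized_hw(50, 50, (2, 2)) == (100, 100)        # UpSampling2d(scale=(2, 2))
assert resized_hw(50, 50, (1 / 2, 1 / 2)) == (25, 25)  # DownSampling2d(scale=(2, 2))
```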
@@ -120,37 +112,30 @@ class DownSampling2d(Layer): --------- With TensorLayer - >>> ni = tl.layers.Input([None, 50, 50, 32], name='input') + >>> ni = tl.layers.Input([10, 50, 50, 32], name='input') >>> ni = tl.layers.DownSampling2d(scale=(2, 2))(ni) - >>> output shape : [None, 25, 25, 32] + >>> output shape : [10, 25, 25, 32] """ - def __init__( - self, - scale, - method='bilinear', - antialias=False, - data_format='channel_last', - name=None, - ): + def __init__(self, scale, method='bilinear', antialias=False, data_format='channels_last', name=None, ksize=None): super(DownSampling2d, self).__init__(name) self.method = method self.antialias = antialias self.data_format = data_format - + self.ksize = ksize logging.info( "DownSampling2d %s: scale: %s method: %s antialias: %s" % (self.name, scale, self.method, self.antialias) ) - self.build(None) - self._built = True - if isinstance(scale, (list, tuple)) and len(scale) != 2: raise ValueError("scale must be int or tuple/list of length 2") self.scale = (scale, scale) if isinstance(scale, int) else scale + self.build(None) + self._built = True + def __repr__(self): s = '{classname}(scale={scale}, method={method}' if self.name is not None: @@ -159,8 +144,10 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, scale=self.scale, method=self.method, name=self.name) def build(self, inputs_shape): - if self.data_format != 'channel_last': - raise Exception("DownSampling2d tf.image.resize_images only support channel_last") + scale = [1.0 / self.scale[0], 1.0 / self.scale[1]] + self.resize = tl.ops.Resize( + scale=scale, method=self.method, antialias=self.antialias, data_format=self.data_format, ksize=self.ksize + ) def forward(self, inputs): """ @@ -170,6 +157,6 @@ def forward(self, inputs): inputs : :class:`Tensor` Inputs tensors with 4-D Tensor of the shape (batch, height, width, channels) """ - output_size = [int(inputs.shape[1] * 1.0 / self.scale[0]), int(inputs.shape[2] * 1.0 / self.scale[1])] - outputs = tf.image.resize(inputs, size=output_size, method=self.method, antialias=self.antialias) + + outputs = self.resize(inputs) return outputs diff --git a/tensorlayer/layers/inputs.py b/tensorlayer/layers/inputs.py index 9d537a33d..cbcd76f0a 100644 --- a/tensorlayer/layers/inputs.py +++ b/tensorlayer/layers/inputs.py @@ -1,19 +1,14 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import numpy as np -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.layers.core import Layer, LayerNode - -# from tensorlayer.layers.core import LayersConfig +from tensorlayer.layers.core import Module __all__ = ['Input', '_InputLayer'] -class _InputLayer(Layer): +class _InputLayer(Module): """ The :class:`Input` class is the starting layer of a neural network. @@ -28,29 +23,19 @@ class _InputLayer(Layer): """ - def __init__(self, shape, dtype=tf.float32, name=None): #'input'): - # super(InputLayer, self).__init__(prev_layer=inputs, name=name) + def __init__(self, shape, dtype=tl.float32, name=None, init=None): super(_InputLayer, self).__init__(name) - if isinstance(dtype, str): - try: - dtype = eval(dtype) - except Exception as e: - raise RuntimeError("%s is not a valid dtype for InputLayer." % (dtype)) - if not isinstance(dtype, tf.DType): - raise RuntimeError("%s is not a valid dtype for InputLayer." 
% (dtype)) - logging.info("Input %s: %s" % (self.name, str(shape))) - self.shape = shape # shape is needed in __repr__ - - shape_without_none = [_ if _ is not None else 1 for _ in shape] - # self.outputs = self.forward(tl.initializers.random_normal()(shape_without_none)) - outputs = self.forward(tl.initializers.ones()(shape_without_none, dtype=dtype)) - + self.shape = shape + self.dtype = dtype + self.shape_without_none = [_ if _ is not None else 1 for _ in shape] + if init is None: + self.outputs = tl.initializers.ones()(self.shape_without_none, dtype=self.dtype) + else: + self.outputs = init(self.shape_without_none, dtype=self.dtype) self._built = True - self._add_node(outputs, outputs) - def __repr__(self): s = 'Input(shape=%s' % str(self.shape) if self.name is not None: @@ -58,17 +43,17 @@ def __repr__(self): s += ')' return s - def __call__(self, inputs, *args, **kwargs): - return super(_InputLayer, self).__call__(inputs) + def __call__(self, *args, **kwargs): + return self.outputs def build(self, inputs_shape): pass - def forward(self, inputs): - return inputs + def forward(self): + return self.outputs -def Input(shape, dtype=tf.float32, name=None): +def Input(shape, init=tl.initializers.ones(), dtype=tl.float32, name=None): """ The :class:`Input` class is the starting layer of a neural network. @@ -79,7 +64,14 @@ def Input(shape, dtype=tf.float32, name=None): name : None or str A unique layer name. + Examples + --------- + With TensorLayer + + >>> ni = tl.layers.Input([10, 50, 50, 32], name='input') + >>> output shape : [10, 50, 50, 32] + """ - input_layer = _InputLayer(shape, dtype=dtype, name=name) - outputs = input_layer._nodes[0].out_tensors[0] + input_layer = _InputLayer(shape, dtype=dtype, name=name, init=init) + outputs = input_layer() return outputs diff --git a/tensorlayer/layers/lambda_layers.py b/tensorlayer/layers/lambda_layers.py index c650f233c..75f95c19b 100644 --- a/tensorlayer/layers/lambda_layers.py +++ b/tensorlayer/layers/lambda_layers.py @@ -2,13 +2,9 @@ # -*- coding: utf-8 -*- import tensorflow as tf - from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias from tensorlayer.files import utils -from tensorlayer.layers.core import Layer - -# from tensorlayer.layers.core import TF_GRAPHKEYS_VARIABLES +from tensorlayer.layers.core import Module __all__ = [ 'Lambda', @@ -16,7 +12,7 @@ ] -class Lambda(Layer): +class Lambda(Module): """A layer that takes a user-defined function using Lambda. If the function has trainable weights, the weights should be provided. Remember to make sure the weights provided when the layer is constructed are SAME as @@ -57,7 +53,7 @@ class Lambda(Layer): Please avoid using Model.save() / Model.load() to save / load models that contain such Lambda layer. Instead, you may use Model.save_weights() / Model.load_weights() to save / load model weights. Note: In this case, fn_weights should be a list, and then the trainable weights in this Lambda layer can be added into the weights of the whole model. - >>> a = tf.Variable(1.0) + >>> a = tl.ops.Variable(1.0) >>> def func(x): >>> return x + a >>> x = tl.layers.Input([8, 3], name='input') @@ -68,15 +64,15 @@ class Lambda(Layer): This case is supported in the Model.save() / Model.load() to save / load the whole model architecture and weights(optional). 
>>> layers = [ - >>> tf.keras.layers.Dense(10, activation=tf.nn.relu), - >>> tf.keras.layers.Dense(5, activation=tf.nn.sigmoid), - >>> tf.keras.layers.Dense(1, activation=tf.identity) + >>> tl.layers.Dense(10, act=tl.ReLU), + >>> tl.layers.Dense(5, act=tl.ReLU), + >>> tl.layers.Dense(1, act=tl.identity) >>> ] - >>> perceptron = tf.keras.Sequential(layers) - >>> # in order to compile keras model and get trainable_variables of the keras model + >>> perceptron = tl.layers.SequentialLayer(layers) + >>> # in order to build the model and get its trainable weights >>> _ = perceptron(np.random.random([100, 5]).astype(np.float32)) >>> - >>> class CustomizeModel(tl.models.Model): + >>> class CustomizeModel(tl.layers.Module): >>> def __init__(self): >>> super(CustomizeModel, self).__init__() >>> self.dense = tl.layers.Dense(in_channels=1, n_units=5) @@ -87,9 +83,9 @@ class Lambda(Layer): >>> z = self.lambdalayer(z) >>> return z >>> - >>> optimizer = tf.optimizers.Adam(learning_rate=0.1) + >>> optimizer = tl.optimizers.Adam(learning_rate=0.1) >>> model = CustomizeModel() - >>> model.train() + >>> model.set_train() >>> >>> for epoch in range(50): >>> with tf.GradientTape() as tape: @@ -165,7 +161,7 @@ def get_args(self): return init_args -class ElementwiseLambda(Layer): +class ElementwiseLambda(Module): """A layer that use a custom function to combine multiple :class:`Layer` inputs. If the function has trainable weights, the weights should be provided. Remember to make sure the weights provided when the layer is constructed are SAME as @@ -188,7 +184,6 @@ class ElementwiseLambda(Layer): Non-parametric and with args case This case is supported in the Model.save() / Model.load() to save / load the whole model architecture and weights(optional). - >>> # z = mean + noise * tf.exp(std * 0.5) + foo >>> def func(noise, mean, std, foo=42): >>> return mean + noise * tf.exp(std * 0.5) + foo >>> noise = tl.layers.Input([100, 1]) @@ -200,7 +195,6 @@ Non-parametric and non-args case This case is supported in the Model.save() / Model.load() to save / load the whole model architecture and weights(optional). - >>> # z = mean + noise * tf.exp(std * 0.5) >>> noise = tl.layers.Input([100, 1]) >>> mean = tl.layers.Input([100, 1]) >>> std = tl.layers.Input([100, 1]) @@ -212,7 +206,6 @@ Please avoid using Model.save() / Model.load() to save / load models that contain such ElementwiseLambda layer. Instead, you may use Model.save_weights() / Model.load_weights() to save / load model weights. Note: In this case, fn_weights should be a list, and then the trainable weights in this ElementwiseLambda layer can be added into the weights of the whole model. - >>> # z = mean + noise * tf.exp(std * 0.5) + vara >>> vara = [tf.Variable(1.0)] >>> def func(noise, mean, std): >>> return mean + noise * tf.exp(std * 0.5) + vara diff --git a/tensorlayer/layers/merge.py b/tensorlayer/layers/merge.py index 3191d9db1..3fc5737cc 100644 --- a/tensorlayer/layers/merge.py +++ b/tensorlayer/layers/merge.py @@ -1,10 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - +import tensorlayer as tl from tensorlayer import logging -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'Concat', @@ -12,7 +11,7 @@ ] -class Concat(Layer): +class Concat(Module): """A layer that concats multiple tensors according to given axis.
Parameters @@ -24,11 +23,11 @@ class Concat(Layer): Examples ---------- - >>> class CustomModel(tl.models.Model): + >>> class CustomModel(Module): >>> def __init__(self): >>> super(CustomModel, self).__init__(name="custom") - >>> self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu1_1') - >>> self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu2_1') + >>> self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu1_1') + >>> self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu2_1') >>> self.concat = tl.layers.Concat(concat_dim=1, name='concat_layer') >>> def forward(self, inputs): @@ -58,7 +57,7 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - pass + self.concat = tl.ops.Concat(self.concat_dim) # @tf.function def forward(self, inputs): @@ -67,12 +66,11 @@ def forward(self, inputs): prev_layer : list of :class:`Layer` List of layers to concatenate. """ - outputs = tf.concat(inputs, self.concat_dim, name=self.name) - + outputs = self.concat(inputs) return outputs -class Elementwise(Layer): +class Elementwise(Module): """A layer that combines multiple :class:`Layer` that have the same output shapes according to an element-wise operation. If the element-wise operation is complicated, please consider to use :class:`ElementwiseLambda`. @@ -80,7 +78,7 @@ class Elementwise(Layer): Parameters ---------- combine_fn : a TensorFlow element-wise combine function - e.g. AND is ``tf.minimum`` ; OR is ``tf.maximum`` ; ADD is ``tf.add`` ; MUL is ``tf.multiply`` and so on. + e.g. AND is ``tl.minimum`` ; OR is ``tl.maximum`` ; ADD is ``tl.add`` ; MUL is ``tl.multiply`` and so on. See `TensorFlow Math API `__ . If the combine function is more complicated, please consider to use :class:`ElementwiseLambda`. 
act : activation function @@ -93,9 +91,9 @@ - >>> class CustomModel(tl.models.Model): + >>> class CustomModel(Module): >>> def __init__(self): >>> super(CustomModel, self).__init__(name="custom") - >>> self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu1_1') - >>> self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu2_1') - >>> self.element = tl.layers.Elementwise(combine_fn=tf.minimum, name='minimum', act=tf.identity) + >>> self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu1_1') + >>> self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu2_1') + >>> self.element = tl.layers.Elementwise(combine_fn=tl.minimum, name='minimum', act=tl.identity) >>> def forward(self, inputs): >>> d1 = self.dense1(inputs) @@ -106,25 +104,26 @@ def __init__( self, - combine_fn=tf.minimum, + combine_fn=tl.ops.minimum, act=None, name=None, #'elementwise', ): super(Elementwise, self).__init__(name, act=act) self.combine_fn = combine_fn + self.combine_fn_str = str(combine_fn).split(' ')[1] self.build(None) self._built = True logging.info( "Elementwise %s: fn: %s act: %s" % - (self.name, combine_fn.__name__, ('No Activation' if self.act is None else self.act.__name__)) + (self.name, combine_fn.__name__, ('No Activation' if self.act is None else self.act.__class__.__name__)) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = ('{classname}(combine_fn={combine_fn}, ' + actstr) + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' + s = ('{classname}(combine_fn={combine_fn_str}, ' + actstr) if self.name is not None: s += ', name=\'{name}\'' s += ')' @@ -137,7 +136,7 @@ def build(self, inputs_shape): def forward(self, inputs): outputs = inputs[0] for input in inputs[1:]: - outputs = self.combine_fn(outputs, input, name=self.name) + outputs = self.combine_fn(outputs, input) if self.act: outputs = self.act(outputs) return outputs diff --git a/tensorlayer/layers/noise.py b/tensorlayer/layers/noise.py index 1a6e85463..65e12fcaf 100644 --- a/tensorlayer/layers/noise.py +++ b/tensorlayer/layers/noise.py @@ -1,19 +1,16 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'GaussianNoise', ] -class GaussianNoise(Layer): +class GaussianNoise(Module): """ - The :class:`GaussianNoise` class is noise layer that adding noise with gaussian distribution to the activation. + The :class:`GaussianNoise` class is a noise layer that adds noise with a Gaussian distribution to the activation.
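For the `Elementwise` layer in merge.py above, `combine_fn` is applied element by element across all input tensors. A NumPy sketch of the "fuzzy AND" semantics of the `minimum` example, assuming `tl.ops.minimum` behaves like `np.minimum` on tensors:

```python
import numpy as np

d1 = np.array([[0.5, 2.0], [3.0, 0.1]], dtype=np.float32)
d2 = np.ones((2, 2), dtype=np.float32)
out = np.minimum(d1, d2)  # element-wise minimum, i.e. AND of the two activations
assert np.allclose(out, [[0.5, 1.0], [1.0, 0.1]])
```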
@@ -36,7 +33,7 @@ class GaussianNoise(Layer): With TensorLayer >>> net = tl.layers.Input([64, 200], name='input') - >>> net = tl.layers.Dense(n_units=100, act=tf.nn.relu, name='dense')(net) + >>> net = tl.layers.Dense(in_channels=200, n_units=100, act=tl.ReLU, name='dense')(net) >>> gaussianlayer = tl.layers.GaussianNoise(name='gaussian')(net) >>> print(gaussianlayer) >>> output shape : (64, 100) @@ -76,7 +73,8 @@ def forward(self, inputs): if (self.is_train or self.is_always) is False: return inputs else: - # noise = np.random.normal(0.0 , sigma , tf.to_int64(self.inputs).get_shape()) - noise = tf.random.normal(shape=inputs.get_shape(), mean=self.mean, stddev=self.stddev, seed=self.seed) + shapes = tl.get_tensor_shape(inputs) + noise = tl.ops.random_normal(shape=shapes, mean=self.mean, stddev=self.stddev, seed=self.seed) outputs = inputs + noise return outputs diff --git a/tensorlayer/layers/normalization.py b/tensorlayer/layers/normalization.py index 161d6e018..613a19f71 100644 --- a/tensorlayer/layers/normalization.py +++ b/tensorlayer/layers/normalization.py @@ -1,152 +1,28 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf -from tensorflow.python.framework import ops -from tensorflow.python.ops import math_ops -from tensorflow.python.training import moving_averages - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ - 'LocalResponseNorm', - 'BatchNorm', # FIXME: wthether to keep BatchNorm + 'BatchNorm', 'BatchNorm1d', 'BatchNorm2d', 'BatchNorm3d', - 'InstanceNorm', - 'InstanceNorm1d', - 'InstanceNorm2d', - 'InstanceNorm3d', - 'LayerNorm', - 'GroupNorm', - 'SwitchNorm', ] - - -class LocalResponseNorm(Layer): - """The :class:`LocalResponseNorm` layer is for Local Response Normalization. - See ``tf.nn.local_response_normalization`` or ``tf.nn.lrn`` for new TF version. - The 4-D input tensor is a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. - Within a given vector, each component is divided by the weighted square-sum of inputs within depth_radius. - - Parameters - ----------- - depth_radius : int - Depth radius. 0-D. Half-width of the 1-D normalization window. - bias : float - An offset which is usually positive and shall avoid dividing by 0. - alpha : float - A scale factor which is usually positive. - beta : float - An exponent. - name : None or str - A unique layer name. - - """ - - def __init__( - self, - depth_radius=None, - bias=None, - alpha=None, - beta=None, - name=None, #'lrn', - ): - # super(LocalResponseNorm, self).__init__(prev_layer=prev_layer, name=name) - super().__init__(name) - self.depth_radius = depth_radius - self.bias = bias - self.alpha = alpha - self.beta = beta - - logging.info( - "LocalResponseNorm %s: depth_radius: %s, bias: %s, alpha: %s, beta: %s" % - (self.name, str(depth_radius), str(bias), str(alpha), str(beta)) - ) - - def build(self, inputs): - pass - - def forward(self, inputs): - """ - prev_layer : :class:`Layer` - The previous layer with a 4D output shape.
- """ - outputs = tf.nn.lrn(inputs, depth_radius=self.depth_radius, bias=self.bias, alpha=self.alpha, beta=self.beta) - return outputs - - -def _to_channel_first_bias(b): - """Reshape [c] to [c, 1, 1].""" - channel_size = int(b.shape[0]) - new_shape = (channel_size, 1, 1) - # new_shape = [-1, 1, 1] # doesn't work with tensorRT - return tf.reshape(b, new_shape) - - -def _bias_scale(x, b, data_format): - """The multiplication counter part of tf.nn.bias_add.""" - if data_format == 'NHWC': - return x * b - elif data_format == 'NCHW': - return x * b - else: - raise ValueError('invalid data_format: %s' % data_format) - - -def _bias_add(x, b, data_format): - """Alternative implementation of tf.nn.bias_add which is compatiable with tensorRT.""" - if data_format == 'NHWC': - return tf.add(x, b) - elif data_format == 'NCHW': - return tf.add(x, b) - else: - raise ValueError('invalid data_format: %s' % data_format) - - -def _compute_shape(tensors): - if isinstance(tensors, list): - shape_mem = [t.get_shape().as_list() for t in tensors] - else: - shape_mem = tensors.get_shape().as_list() - return shape_mem - - -def batch_normalization(x, mean, variance, offset, scale, variance_epsilon, data_format, name=None): - """Data Format aware version of tf.nn.batch_normalization.""" - if data_format == 'channels_last': - mean = tf.reshape(mean, [1] * (len(x.shape) - 1) + [-1]) - variance = tf.reshape(variance, [1] * (len(x.shape) - 1) + [-1]) - offset = tf.reshape(offset, [1] * (len(x.shape) - 1) + [-1]) - scale = tf.reshape(scale, [1] * (len(x.shape) - 1) + [-1]) - elif data_format == 'channels_first': - mean = tf.reshape(mean, [1] + [-1] + [1] * (len(x.shape) - 2)) - variance = tf.reshape(variance, [1] + [-1] + [1] * (len(x.shape) - 2)) - offset = tf.reshape(offset, [1] + [-1] + [1] * (len(x.shape) - 2)) - scale = tf.reshape(scale, [1] + [-1] + [1] * (len(x.shape) - 2)) - else: - raise ValueError('invalid data_format: %s' % data_format) - - with ops.name_scope(name, 'batchnorm', [x, mean, variance, scale, offset]): - inv = math_ops.rsqrt(variance + variance_epsilon) - if scale is not None: - inv *= scale - - a = math_ops.cast(inv, x.dtype) - b = math_ops.cast(offset - mean * inv if offset is not None else -mean * inv, x.dtype) - - # Return a * x + b with customized data_format. - # Currently TF doesn't have bias_scale, and tensorRT has bug in converting tf.nn.bias_add - # So we reimplemted them to allow make the model work with tensorRT. - # See https://github.com/tensorlayer/openpose-plus/issues/75 for more details. - df = {'channels_first': 'NCHW', 'channels_last': 'NHWC'} - return _bias_add(_bias_scale(x, a, df[data_format]), b, df[data_format]) - - -class BatchNorm(Layer): +# TODO Layers that needs to be updated +# ['InstanceNorm', +# 'InstanceNorm1d', +# 'InstanceNorm2d', +# 'InstanceNorm3d', +# 'LayerNorm', +# 'GroupNorm', +# 'SwitchNorm', +# ] + + +class BatchNorm(Module): """ The :class:`BatchNorm` is a batch normalization layer for both fully-connected and convolution outputs. See ``tf.nn.batch_normalization`` and ``tf.nn.moments``. 
@@ -185,7 +61,7 @@ class BatchNorm(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 50, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 50, 32], name='input') >>> net = tl.layers.BatchNorm()(net) Notes ----- @@ -207,7 +83,7 @@ def __init__( decay=0.9, epsilon=0.00001, act=None, - is_train=False, + is_train=True, beta_init=tl.initializers.zeros(), gamma_init=tl.initializers.random_normal(mean=1.0, stddev=0.002), moving_mean_init=tl.initializers.zeros(), @@ -225,10 +101,17 @@ def __init__( self.moving_mean_init = moving_mean_init self.moving_var_init = moving_var_init self.num_features = num_features + self.is_train = is_train self.axes = None - if num_features is not None: + if self.num_features is not None: self.build(None) self._built = True @@ -236,12 +119,14 @@ raise ValueError("decay should be between 0 to 1") logging.info( - "BatchNorm %s: decay: %f epsilon: %f act: %s is_train: %s" % - (self.name, decay, epsilon, self.act.__name__ if self.act is not None else 'No Activation', is_train) + "BatchNorm %s: decay: %f epsilon: %f act: %s is_train: %s" % ( + self.name, decay, epsilon, self.act.__class__.__name__ if self.act is not None else 'No Activation', + is_train + ) ) def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' + actstr = self.act.__class__.__name__ if self.act is not None else 'No Activation' s = ('{classname}(num_features={num_features}, decay={decay}' ', epsilon={epsilon}') s += (', ' + actstr) if self.name is not None: @@ -263,8 +148,7 @@ def _get_param_shape(self, inputs_shape): return params_shape def _check_input_shape(self, inputs): - inputs_shape = _compute_shape(inputs) - if len(inputs_shape) <= 1: + if inputs.ndim <= 1: raise ValueError('expected input at least 2D, but got {}D input'.format(inputs.ndim)) def build(self, inputs_shape): @@ -272,38 +156,43 @@ self.beta, self.gamma = None, None if self.beta_init: - self.beta = self._get_weights("beta", shape=params_shape, init=self.beta_init) + self.beta = self._get_weights(var_name="beta", shape=params_shape, init=self.beta_init) if self.gamma_init: - self.gamma = self._get_weights("gamma", shape=params_shape, init=self.gamma_init) + self.gamma = self._get_weights(var_name="gamma", shape=params_shape, init=self.gamma_init) self.moving_mean = self._get_weights( - "moving_mean", shape=params_shape, init=self.moving_mean_init, trainable=False + var_name="moving_mean", shape=params_shape, init=self.moving_mean_init, trainable=False ) self.moving_var = self._get_weights( - "moving_var", shape=params_shape, init=self.moving_var_init, trainable=False + var_name="moving_var", shape=params_shape, init=self.moving_var_init, trainable=False ) - def forward(self, inputs): - self._check_input_shape(inputs) + self.batchnorm = tl.ops.BatchNorm( + decay=self.decay, epsilon=self.epsilon, beta=self.beta, gamma=self.gamma, moving_mean=self.moving_mean, + moving_var=self.moving_var, num_features=self.num_features, data_format=self.data_format, + is_train=self.is_train + ) - self.channel_axis = len(inputs.shape) - 1 if self.data_format == 'channels_last' else 1 - if self.axes is None: - self.axes = [i for i in range(len(inputs.shape)) if i != self.channel_axis] + self.act_init_flag = False + if self.act: + self.act_init_flag = True - mean, var =
tf.nn.moments(inputs, self.axes, keepdims=False) - if self.is_train: - # update moving_mean and moving_var - self.moving_mean = moving_averages.assign_moving_average( - self.moving_mean, mean, self.decay, zero_debias=False - ) - self.moving_var = moving_averages.assign_moving_average(self.moving_var, var, self.decay, zero_debias=False) - outputs = batch_normalization(inputs, mean, var, self.beta, self.gamma, self.epsilon, self.data_format) - else: - outputs = batch_normalization( - inputs, self.moving_mean, self.moving_var, self.beta, self.gamma, self.epsilon, self.data_format + def forward(self, inputs): + self._check_input_shape(inputs) + if not self._forward_state: + if not self._built: + self.build(tl.get_tensor_shape(inputs)) + self._built = True + self._forward_state = True + + if not self.is_train: + self.batchnorm = tl.ops.BatchNorm( + decay=self.decay, epsilon=self.epsilon, beta=self.beta, gamma=self.gamma, moving_mean=self.moving_mean, + moving_var=self.moving_var, num_features=self.num_features, data_format=self.data_format, is_train=False ) - if self.act: + outputs = self.batchnorm(inputs=inputs) + if self.act_init_flag: outputs = self.act(outputs) return outputs @@ -318,7 +207,7 @@ class BatchNorm1d(BatchNorm): With TensorLayer >>> # in static model, no need to specify num_features - >>> net = tl.layers.Input([None, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 32], name='input') >>> net = tl.layers.BatchNorm1d()(net) >>> # in dynamic model, build by specifying num_features >>> conv = tl.layers.Conv1d(32, 5, 1, in_channels=3) @@ -327,8 +216,7 @@ class BatchNorm1d(BatchNorm): """ def _check_input_shape(self, inputs): - inputs_shape = _compute_shape(inputs) - if len(inputs_shape) != 2 and len(inputs_shape) != 3: + if inputs.ndim != 2 and inputs.ndim != 3: raise ValueError('expected input to be 2D or 3D, but got {}D input'.format(inputs.ndim)) @@ -342,7 +230,7 @@ class BatchNorm2d(BatchNorm): With TensorLayer >>> # in static model, no need to specify num_features - >>> net = tl.layers.Input([None, 50, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 50, 32], name='input') >>> net = tl.layers.BatchNorm2d()(net) >>> # in dynamic model, build by specifying num_features >>> conv = tl.layers.Conv2d(32, (5, 5), (1, 1), in_channels=3) @@ -351,8 +239,7 @@ class BatchNorm2d(BatchNorm): """ def _check_input_shape(self, inputs): - inputs_shape = _compute_shape(inputs) - if len(inputs_shape) != 4: + if inputs.ndim != 4: raise ValueError('expected input to be 4D, but got {}D input'.format(inputs.ndim)) @@ -366,7 +253,7 @@ class BatchNorm3d(BatchNorm): With TensorLayer >>> # in static model, no need to specify num_features - >>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 50, 50, 32], name='input') >>> net = tl.layers.BatchNorm3d()(net) >>> # in dynamic model, build by specifying num_features >>> conv = tl.layers.Conv3d(32, (5, 5, 5), (1, 1), in_channels=3) @@ -375,505 +262,5 @@ class BatchNorm3d(BatchNorm): """ def _check_input_shape(self, inputs): - inputs_shape = _compute_shape(inputs) - if len(inputs_shape) != 5: + if inputs.ndim != 5: raise ValueError('expected input to be 5D, but got {}D input'.format(inputs.ndim)) - - -class InstanceNorm(Layer): - """ - The :class:`InstanceNorm` is an instance normalization layer for both fully-connected and convolution outputs. - See ``tf.nn.batch_normalization`` and ``tf.nn.moments``. - - Parameters - ----------- - act : activation function.
- The activation function of this layer. - epsilon : float - Eplison. - beta_init : initializer or None - The initializer for initializing beta, if None, skip beta. - Usually you should not skip beta unless you know what happened. - gamma_init : initializer or None - The initializer for initializing gamma, if None, skip gamma. - When the instance normalization layer is use instead of 'biases', or the next layer is linear, this can be - disabled since the scaling can be done by the next layer. see `Inception-ResNet-v2 `__ - num_features: int - Number of features for input tensor. Useful to build layer if using InstanceNorm1d, InstanceNorm2d or InstanceNorm3d, - but should be left as None if using InstanceNorm. Default None. - data_format : str - channels_last 'channel_last' (default) or channels_first. - name : None or str - A unique layer name. - - - Examples - --------- - With TensorLayer - - >>> net = tl.layers.Input([None, 50, 50, 32], name='input') - >>> net = tl.layers.InstanceNorm()(net) - - Notes - ----- - The :class:`InstanceNorm` is universally suitable for 3D/4D/5D input in static model, but should not be used - in dynamic model where layer is built upon class initialization. So the argument 'num_features' should only be used - for subclasses :class:`InstanceNorm1d`, :class:`InstanceNorm2d` and :class:`InstanceNorm3d`. All the three subclasses are - suitable under all kinds of conditions. - """ - - def __init__( - self, act=None, epsilon=0.00001, beta_init=tl.initializers.zeros(), - gamma_init=tl.initializers.random_normal(mean=1.0, stddev=0.002), num_features=None, - data_format='channels_last', name=None - ): - super(InstanceNorm, self).__init__(name=name, act=act) - self.epsilon = epsilon - self.beta_init = beta_init - self.gamma_init = gamma_init - self.num_features = num_features - self.data_format = data_format - - if num_features is not None: - if not isinstance(self, InstanceNorm1d) and not isinstance(self, InstanceNorm2d) and not isinstance( - self, InstanceNorm3d): - raise ValueError( - "Please use InstanceNorm1d or InstanceNorm2d or InstanceNorm3d instead of InstanceNorm " - "if you want to specify 'num_features'." 
- ) - self.build(None) - self._built = True - - logging.info( - "InstanceNorm %s: epsilon: %f act: %s " % - (self.name, epsilon, self.act.__name__ if self.act is not None else 'No Activation') - ) - - def __repr__(self): - actstr = self.act.__name__ if self.act is not None else 'No Activation' - s = '{classname}(num_features=num_features, epsilon={epsilon}' + actstr - if self.name is not None: - s += ', name="{name}"' - s += ')' - return s.format(classname=self.__class__.__name__, **self.__dict__) - - def _get_param_shape(self, inputs_shape): - if self.data_format == 'channels_last': - axis = len(inputs_shape) - 1 - elif self.data_format == 'channels_first': - axis = 1 - else: - raise ValueError('data_format should be either %s or %s' % ('channels_last', 'channels_first')) - - channels = inputs_shape[axis] - params_shape = [1] * len(inputs_shape) - params_shape[axis] = channels - - axes = [i for i in range(len(inputs_shape)) if i != 0 and i != axis] - return params_shape, axes - - def build(self, inputs_shape): - params_shape, self.axes = self._get_param_shape(inputs_shape) - - self.beta, self.gamma = None, None - if self.beta_init: - self.beta = self._get_weights("beta", shape=params_shape, init=self.beta_init) - - if self.gamma_init: - self.gamma = self._get_weights("gamma", shape=params_shape, init=self.gamma_init) - - def forward(self, inputs): - mean, var = tf.nn.moments(inputs, self.axes, keepdims=True) - outputs = batch_normalization(inputs, mean, var, self.beta, self.gamma, self.epsilon, self.data_format) - if self.act: - outputs = self.act(outputs) - return outputs - - -class InstanceNorm1d(InstanceNorm): - """The :class:`InstanceNorm1d` applies Instance Normalization over 3D input (a mini-instance of 1D - inputs with additional channel dimension), of shape (N, L, C) or (N, C, L). - See more details in :class:`InstanceNorm`. - - Examples - --------- - With TensorLayer - - >>> # in static model, no need to specify num_features - >>> net = tl.layers.Input([None, 50, 32], name='input') - >>> net = tl.layers.InstanceNorm1d()(net) - >>> # in dynamic model, build by specifying num_features - >>> conv = tl.layers.Conv1d(32, 5, 1, in_channels=3) - >>> bn = tl.layers.InstanceNorm1d(num_features=32) - - """ - - def _get_param_shape(self, inputs_shape): - if self.data_format == 'channels_last': - axis = 2 - elif self.data_format == 'channels_first': - axis = 1 - else: - raise ValueError('data_format should be either %s or %s' % ('channels_last', 'channels_first')) - - if self.num_features is None: - channels = inputs_shape[axis] - else: - channels = self.num_features - params_shape = [1] * 3 - params_shape[axis] = channels - - axes = [i for i in range(3) if i != 0 and i != axis] - return params_shape, axes - - -class InstanceNorm2d(InstanceNorm): - """The :class:`InstanceNorm2d` applies Instance Normalization over 4D input (a mini-instance of 2D - inputs with additional channel dimension) of shape (N, H, W, C) or (N, C, H, W). - See more details in :class:`InstanceNorm`. 
- - Examples - --------- - With TensorLayer - - >>> # in static model, no need to specify num_features - >>> net = tl.layers.Input([None, 50, 50, 32], name='input') - >>> net = tl.layers.InstanceNorm2d()(net) - >>> # in dynamic model, build by specifying num_features - >>> conv = tl.layers.Conv2d(32, (5, 5), (1, 1), in_channels=3) - >>> bn = tl.layers.InstanceNorm2d(num_features=32) - - """ - - def _get_param_shape(self, inputs_shape): - if self.data_format == 'channels_last': - axis = 3 - elif self.data_format == 'channels_first': - axis = 1 - else: - raise ValueError('data_format should be either %s or %s' % ('channels_last', 'channels_first')) - - if self.num_features is None: - channels = inputs_shape[axis] - else: - channels = self.num_features - params_shape = [1] * 4 - params_shape[axis] = channels - - axes = [i for i in range(4) if i != 0 and i != axis] - return params_shape, axes - - -class InstanceNorm3d(InstanceNorm): - """The :class:`InstanceNorm3d` applies Instance Normalization over 5D input (a mini-instance of 3D - inputs with additional channel dimension) with shape (N, D, H, W, C) or (N, C, D, H, W). - See more details in :class:`InstanceNorm`. - - Examples - --------- - With TensorLayer - - >>> # in static model, no need to specify num_features - >>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input') - >>> net = tl.layers.InstanceNorm3d()(net) - >>> # in dynamic model, build by specifying num_features - >>> conv = tl.layers.Conv3d(32, (5, 5, 5), (1, 1), in_channels=3) - >>> bn = tl.layers.InstanceNorm3d(num_features=32) - - """ - - def _get_param_shape(self, inputs_shape): - if self.data_format == 'channels_last': - axis = 4 - elif self.data_format == 'channels_first': - axis = 1 - else: - raise ValueError('data_format should be either %s or %s' % ('channels_last', 'channels_first')) - - if self.num_features is None: - channels = inputs_shape[axis] - else: - channels = self.num_features - params_shape = [1] * 5 - params_shape[axis] = channels - - axes = [i for i in range(5) if i != 0 and i != axis] - return params_shape, axes - - -# FIXME : not sure about the correctness, need testing -class LayerNorm(Layer): - """ - The :class:`LayerNorm` class is for layer normalization, see `tf.contrib.layers.layer_norm `__. - - Parameters - ---------- - prev_layer : :class:`Layer` - The previous layer. - act : activation function - The activation function of this layer. - others : _ - `tf.contrib.layers.layer_norm `__. 
- - """ - - def __init__( - self, #prev_layer, - center=True, - scale=True, - act=None, - # reuse=None, - # variables_collections=None, - # outputs_collections=None, - # trainable=True, - epsilon=1e-12, - begin_norm_axis=1, - begin_params_axis=-1, - beta_init=tl.initializers.zeros(), - gamma_init=tl.initializers.ones(), - data_format='channels_last', - name=None, - ): - - # super(LayerNorm, self).__init__(prev_layer=prev_layer, act=act, name=name) - super(LayerNorm, self).__init__(name, act=act) - self.center = center - self.scale = scale - self.epsilon = epsilon - self.begin_norm_axis = begin_norm_axis - self.begin_params_axis = begin_params_axis - self.beta_init = beta_init - self.gamma_init = gamma_init - self.data_format = data_format - - logging.info( - "LayerNorm %s: act: %s" % (self.name, self.act.__name__ if self.act is not None else 'No Activation') - ) - - def build(self, inputs_shape): - params_shape = inputs_shape[self.begin_params_axis:] - self.beta, self.gamma = None, None - if self.center: - self.beta = self._get_weights("beta", shape=params_shape, init=self.beta_init) - if self.scale: - self.gamma = self._get_weights("gamma", shape=params_shape, init=self.gamma_init) - - self.norm_axes = range(self.begin_norm_axis, len(inputs_shape)) - - def forward(self, inputs): - mean, var = tf.nn.moments(inputs, self.norm_axes, keepdims=True) - # compute layer normalization using batch_normalization function - outputs = batch_normalization( - inputs, mean, var, self.beta, self.gamma, self.epsilon, data_format=self.data_format - ) - if self.act: - outputs = self.act(outputs) - return outputs - - # with tf.compat.v1.variable_scope(name) as vs: - # self.outputs = tf.contrib.layers.layer_norm( - # self.inputs, - # center=center, - # scale=scale, - # activation_fn=self.act, - # reuse=reuse, - # variables_collections=variables_collections, - # outputs_collections=outputs_collections, - # trainable=trainable, - # begin_norm_axis=begin_norm_axis, - # begin_params_axis=begin_params_axis, - # scope='var', - # ) - # - # variables = tf.compat.v1.get_collection("TF_GRAPHKEYS_VARIABLES", scope=vs.name) - # - # self._add_layers(self.outputs) - # self._add_params(variables) - - -class GroupNorm(Layer): - """The :class:`GroupNorm` layer is for Group Normalization. - See `tf.contrib.layers.group_norm `__. - - Parameters - ----------- - # prev_layer : :class:`Layer` - # The previous layer. - groups : int - The number of groups - act : activation function - The activation function of this layer. - epsilon : float - Eplison. - data_format : str - channels_last 'channel_last' (default) or channels_first. 
- name : None or str - A unique layer name - - """ - - def __init__(self, groups=32, epsilon=1e-06, act=None, data_format='channels_last', name=None): #'groupnorm'): - # super(GroupNorm, self).__init__(prev_layer=prev_layer, act=act, name=name) - super().__init__(name, act=act) - self.groups = groups - self.epsilon = epsilon - self.data_format = data_format - - logging.info( - "GroupNorm %s: act: %s" % (self.name, self.act.__name__ if self.act is not None else 'No Activation') - ) - - def build(self, inputs_shape): - # shape = inputs.get_shape().as_list() - if len(inputs_shape) != 4: - raise Exception("This GroupNorm only supports 2D images.") - - if self.data_format == 'channels_last': - channels = inputs_shape[-1] - self.int_shape = tf.concat( - [#tf.shape(input=self.inputs)[0:3], - inputs_shape[0:3], - tf.convert_to_tensor(value=[self.groups, channels // self.groups])], axis=0 - ) - elif self.data_format == 'channels_first': - channels = inputs_shape[1] - self.int_shape = tf.concat( - [ - # tf.shape(input=self.inputs)[0:1], - inputs_shape[0:1], - tf.convert_to_tensor(value=[self.groups, channels // self.groups]), - # tf.shape(input=self.inputs)[2:4] - inputs_shape[2:4], - ], - axis=0 - ) - else: - raise ValueError("data_format must be 'channels_last' or 'channels_first'.") - - if self.groups > channels: - raise ValueError('Invalid groups %d for %d channels.' % (self.groups, channels)) - if channels % self.groups != 0: - raise ValueError('%d channels is not commensurate with %d groups.' % (channels, self.groups)) - - if self.data_format == 'channels_last': - # mean, var = tf.nn.moments(x, [1, 2, 4], keep_dims=True) - self.gamma = self._get_weights("gamma", shape=channels, init=tl.initializers.ones()) - # self.gamma = tf.compat.v1.get_variable('gamma', channels, initializer=tf.compat.v1.initializers.ones()) - self.beta = self._get_weights("beta", shape=channels, init=tl.initializers.zeros()) - # self.beta = tf.compat.v1.get_variable('beta', channels, initializer=tf.compat.v1.initializers.zeros()) - elif self.data_format == 'channels_first': - # mean, var = tf.nn.moments(x, [2, 3, 4], keep_dims=True) - self.gamma = self._get_weights("gamma", shape=[1, channels, 1, 1], init=tl.initializers.ones()) - # self.gamma = tf.compat.v1.get_variable('gamma', [1, channels, 1, 1], initializer=tf.compat.v1.initializers.ones()) - self.beta = self._get_weights("beta", shape=[1, channels, 1, 1], init=tl.initializers.zeros()) - # self.beta = tf.compat.v1.get_variable('beta', [1, channels, 1, 1], initializer=tf.compat.v1.initializers.zeros()) - # self.add_weights([self.gamma, self.bata]) - - def forward(self, inputs): - x = tf.reshape(inputs, self.int_shape) - if self.data_format == 'channels_last': - mean, var = tf.nn.moments(x=x, axes=[1, 2, 4], keepdims=True) - elif self.data_format == 'channels_first': - mean, var = tf.nn.moments(x=x, axes=[2, 3, 4], keepdims=True) - else: - raise Exception("unknown data_format") - x = (x - mean) / tf.sqrt(var + self.epsilon) - - outputs = tf.reshape(x, tf.shape(input=inputs)) * self.gamma + self.beta - if self.act: - outputs = self.act(outputs) - return outputs - - -class SwitchNorm(Layer): - """ - The :class:`SwitchNorm` is a switchable normalization. - - Parameters - ---------- - act : activation function - The activation function of this layer. - epsilon : float - Eplison. - beta_init : initializer or None - The initializer for initializing beta, if None, skip beta. - Usually you should not skip beta unless you know what happened. 
- gamma_init : initializer or None - The initializer for initializing gamma, if None, skip gamma. - When the batch normalization layer is use instead of 'biases', or the next layer is linear, this can be - disabled since the scaling can be done by the next layer. see `Inception-ResNet-v2 `__ - moving_mean_init : initializer or None - The initializer for initializing moving mean, if None, skip moving mean. - data_format : str - channels_last 'channel_last' (default) or channels_first. - name : None or str - A unique layer name. - - References - ---------- - - `Differentiable Learning-to-Normalize via Switchable Normalization `__ - - `Zhihu (CN) `__ - - """ - - def __init__( - self, - act=None, - epsilon=1e-5, - beta_init=tl.initializers.constant(0.0), - gamma_init=tl.initializers.constant(1.0), - moving_mean_init=tl.initializers.zeros(), - # beta_init=tf.compat.v1.initializers.constant(0.0), - # gamma_init=tf.compat.v1.initializers.constant(1.0), - # moving_mean_init=tf.compat.v1.initializers.zeros(), - data_format='channels_last', - name=None, #'switchnorm', - ): - # super(SwitchNorm, self).__init__(prev_layer=prev_layer, act=act, name=name) - super().__init__(name, act=act) - self.epsilon = epsilon - self.beta_init = beta_init - self.gamma_init = gamma_init - self.moving_mean_init = moving_mean_init - self.data_format = data_format - - logging.info( - "SwitchNorm %s: epsilon: %f act: %s" % - (self.name, epsilon, self.act.__name__ if self.act is not None else 'No Activation') - ) - - def build(self, inputs_shape): - if len(inputs_shape) != 4: - raise Exception("This SwitchNorm only supports 2D images.") - if self.data_format != 'channels_last': - raise Exception("This SwitchNorm only supports channels_last.") - ch = inputs_shape[-1] - self.gamma = self._get_weights("gamma", shape=[ch], init=self.gamma_init) - # self.gamma = tf.compat.v1.get_variable("gamma", [ch], initializer=gamma_init) - self.beta = self._get_weights("beta", shape=[ch], init=self.beta_init) - # self.beta = tf.compat.v1.get_variable("beta", [ch], initializer=beta_init) - - self.mean_weight_var = self._get_weights("mean_weight", shape=[3], init=tl.initializers.constant(1.0)) - # self.mean_weight_var = tf.compat.v1.get_variable("mean_weight", [3], initializer=tf.compat.v1.initializers.constant(1.0)) - self.var_weight_var = self._get_weights("var_weight", shape=[3], init=tl.initializers.constant(1.0)) - # self.var_weight_var = tf.compat.v1.get_variable("var_weight", [3], initializer=tf.compat.v1.initializers.constant(1.0)) - - # self.add_weights([self.gamma, self.beta, self.mean_weight_var, self.var_weight_var]) - - def forward(self, inputs): - - batch_mean, batch_var = tf.nn.moments(x=inputs, axes=[0, 1, 2], keepdims=True) - ins_mean, ins_var = tf.nn.moments(x=inputs, axes=[1, 2], keepdims=True) - layer_mean, layer_var = tf.nn.moments(x=inputs, axes=[1, 2, 3], keepdims=True) - - mean_weight = tf.nn.softmax(self.mean_weight_var) - var_weight = tf.nn.softmax(self.var_weight_var) - - mean = mean_weight[0] * batch_mean + mean_weight[1] * ins_mean + mean_weight[2] * layer_mean - var = var_weight[0] * batch_var + var_weight[1] * ins_var + var_weight[2] * layer_var - - inputs = (inputs - mean) / (tf.sqrt(var + self.epsilon)) - outputs = inputs * self.gamma + self.beta - if self.act: - outputs = self.act(outputs) - return outputs diff --git a/tensorlayer/layers/padding.py b/tensorlayer/layers/padding.py index ae89035bc..84695b713 100644 --- a/tensorlayer/layers/padding.py +++ b/tensorlayer/layers/padding.py @@ -1,12 +1,9 @@ #! 
/usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'PadLayer', @@ -16,7 +13,7 @@ ] -class PadLayer(Layer): +class PadLayer(Module): """The :class:`PadLayer` class is a padding layer for any mode and dimension. Please see `tf.pad `__ for usage. @@ -33,10 +30,10 @@ class PadLayer(Layer): -------- With TensorLayer - >>> net = tl.layers.Input([None, 224, 224, 3], name='input') + >>> net = tl.layers.Input([10, 224, 224, 3], name='input') >>> padlayer = tl.layers.PadLayer([[0, 0], [3, 3], [3, 3], [0, 0]], "REFLECT", name='inpad')(net) >>> print(padlayer) - >>> output shape : (None, 106, 106, 3) + >>> output shape : (10, 230, 230, 3) """ @@ -50,7 +47,7 @@ def __init__( self.padding = padding self.mode = mode - logging.info("PadLayer %s: padding: %s mode: %s" % (self.name, list(self.padding), self.mode)) + logging.info("PadLayer %s: padding: %s mode: %s" % (self.name, self.padding, self.mode)) if self.padding is None: raise Exception( @@ -68,14 +65,14 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass + self.pad = tl.ops.Pad(paddings=self.padding, mode=self.mode) def forward(self, inputs): - outputs = tf.pad(tensor=inputs, paddings=self.padding, mode=self.mode, name=self.name) + outputs = self.pad(inputs) return outputs -class ZeroPad1d(Layer): +class ZeroPad1d(Module): """ The :class:`ZeroPad1d` class is a 1D padding layer for signal [batch, length, channel]. @@ -91,10 +88,10 @@ class ZeroPad1d(Layer): -------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 1], name='input') - >>> pad1d = tl.layers.ZeroPad1d(padding=(2, 3))(net) + >>> net = tl.layers.Input([10, 100, 1], name='input') + >>> pad1d = tl.layers.ZeroPad1d(padding=(3, 3))(net) >>> print(pad1d) - >>> output shape : (None, 106, 1) + >>> output shape : (10, 106, 1) """ @@ -121,20 +118,20 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - self.layer = tf.keras.layers.ZeroPadding1D(padding=self.padding, name=self.name) + self.layer = tl.ops.ZeroPadding1D(padding=self.padding) def forward(self, inputs): outputs = self.layer(inputs) return outputs -class ZeroPad2d(Layer): +class ZeroPad2d(Module): """ The :class:`ZeroPad2d` class is a 2D padding layer for image [batch, height, width, channel]. Parameters ---------- - padding : int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. + padding : tuple of 2 ints or int, or tuple of 2 tuples of 2 ints. - If int, the same symmetric padding is applied to width and height. - If tuple of 2 ints, interpreted as two different symmetric padding values for height and width as ``(symmetric_height_pad, symmetric_width_pad)``. - If tuple of 2 tuples of 2 ints, interpreted as ``((top_pad, bottom_pad), (left_pad, right_pad))``. 
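The corrected output shapes in the padding docstrings nearby all follow from `size + pad_before + pad_after` per dimension; a quick pure-Python check against the `ZeroPad2d` and `ZeroPad3d` examples:

```python
def padded_dim(size, pad):  # pad = (before, after)
    return size + pad[0] + pad[1]

assert padded_dim(100, (3, 3)) == 106  # first spatial dim
assert padded_dim(100, (4, 4)) == 108  # second spatial dim
assert padded_dim(100, (5, 5)) == 110  # third spatial dim (ZeroPad3d)
```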
@@ -145,10 +142,10 @@ class ZeroPad2d(Layer): -------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 100, 3], name='input') + >>> net = tl.layers.Input([10, 100, 100, 3], name='input') >>> pad2d = tl.layers.ZeroPad2d(padding=((3, 3), (4, 4)))(net) >>> print(pad2d) - >>> output shape : (None, 106, 108, 3) + >>> output shape : (10, 106, 108, 3) """ @@ -176,14 +173,14 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - self.layer = tf.keras.layers.ZeroPadding2D(padding=self.padding, name=self.name) + self.layer = tl.ops.ZeroPadding2D(padding=self.padding) def forward(self, inputs): outputs = self.layer(inputs) return outputs -class ZeroPad3d(Layer): +class ZeroPad3d(Module): """ The :class:`ZeroPad3d` class is a 3D padding layer for volume [batch, depth, height, width, channel]. @@ -192,7 +189,8 @@ padding : int, or tuple of 2 ints, or tuple of 2 tuples of 2 ints. - If int, the same symmetric padding is applied to width and height. - If tuple of 2 ints, interpreted as two different symmetric padding values for height and width as ``(symmetric_dim1_pad, symmetric_dim2_pad, symmetric_dim3_pad)``. - - If tuple of 2 tuples of 2 ints, interpreted as ``((left_dim1_pad, right_dim1_pad), (left_dim2_pad, right_dim2_pad), (left_dim3_pad, right_dim3_pad))``. + - If tuple of 3 tuples of 2 ints, interpreted as + ``((left_dim1_pad, right_dim1_pad), (left_dim2_pad, right_dim2_pad), (left_dim3_pad, right_dim3_pad))``. name : None or str A unique layer name. @@ -200,10 +198,10 @@ -------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 100, 100, 3], name='input') + >>> net = tl.layers.Input([10, 100, 100, 100, 3], name='input') >>> pad3d = tl.layers.ZeroPad3d(padding=((3, 3), (4, 4), (5, 5)))(net) >>> print(pad3d) - >>> output shape : (None, 106, 108, 110, 3) + >>> output shape : (10, 106, 108, 110, 3) """ @@ -231,7 +229,7 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - self.layer = tf.keras.layers.ZeroPadding3D(padding=self.padding, name=self.name) + self.layer = tl.ops.ZeroPadding3D(padding=self.padding) def forward(self, inputs): outputs = self.layer(inputs) diff --git a/tensorlayer/layers/pooling.py b/tensorlayer/layers/pooling.py index d9deedecd..988471f24 100644 --- a/tensorlayer/layers/pooling.py +++ b/tensorlayer/layers/pooling.py @@ -1,12 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'PoolLayer', @@ -22,15 +19,21 @@ 'GlobalMeanPool2d', 'GlobalMaxPool3d', 'GlobalMeanPool3d', + 'AdaptiveMeanPool1d', + 'AdaptiveMeanPool2d', + 'AdaptiveMeanPool3d', + 'AdaptiveMaxPool1d', + 'AdaptiveMaxPool2d', + 'AdaptiveMaxPool3d', 'CornerPool2d', ] -class PoolLayer(Layer): +class PoolLayer(Module): """ The :class:`PoolLayer` class is a Pooling layer. - You can choose ``tf.nn.max_pool`` and ``tf.nn.avg_pool`` for 2D input or - ``tf.nn.max_pool3d`` and ``tf.nn.avg_pool3d`` for 3D input. + You can choose ``tl.ops.MaxPool`` and ``tl.ops.AvgPool`` for 2D input or + ``tl.ops.MaxPool3d`` and ``tl.ops.AvgPool3d`` for 3D input. Parameters ---------- @@ -43,7 +46,7 @@ class PoolLayer(Layer): padding : str The padding algorithm type: "SAME" or "VALID".
pool : pooling function - One of ``tf.nn.max_pool``, ``tf.nn.avg_pool``, ``tf.nn.max_pool3d`` and ``f.nn.avg_pool3d``. + One of ``tl.ops.MaxPool``, ``tl.ops.AvgPool``, ``tl.ops.MaxPool3d`` and ``tl.ops.AvgPool3d``. See `TensorFlow pooling APIs `__ name : None or str A unique layer name. Examples --------- With TensorLayer - >>> net = tl.layers.Input([None, 50, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 50, 32], name='input') >>> net = tl.layers.PoolLayer()(net) - >>> output shape : [None, 25, 25, 32] + >>> output shape : [10, 25, 25, 32] """ def __init__( self, filter_size=(1, 2, 2, 1), strides=(1, 2, 2, 1), padding='SAME', - pool=tf.nn.max_pool, + pool=tl.ops.MaxPool, name=None # 'pool_pro', ): super().__init__(name) @@ -88,14 +91,14 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, poolname=self.pool.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass + self._pool = self.pool(ksize=self.filter_size, strides=self.strides, padding=self.padding) def forward(self, inputs): - outputs = self.pool(inputs, ksize=self.filter_size, strides=self.strides, padding=self.padding, name=self.name) + outputs = self._pool(inputs) return outputs -class MaxPool1d(Layer): +class MaxPool1d(Module): """Max pooling for 1D signal. Parameters @@ -115,9 +118,9 @@ class MaxPool1d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 32], name='input') >>> net = tl.layers.MaxPool1d(filter_size=3, strides=2, padding='SAME', name='maxpool1d')(net) - >>> output shape : [None, 25, 32] + >>> output shape : [10, 25, 32] """ @@ -127,7 +130,6 @@ def __init__( self, filter_size=3, strides=2, padding='SAME', data_format='channels_last', - dilation_rate=1, name=None # 'maxpool1d' ): super().__init__(name) @@ -135,7 +137,6 @@ def __init__( self.strides = self._strides = strides self.padding = padding self.data_format = data_format - self.dilation_rate = self._dilation_rate = dilation_rate self.build() self._built = True @@ -147,8 +148,6 @@ def __repr__(self): s = ('{classname}(filter_size={filter_size}' ', strides={strides}, padding={padding}') - if self.dilation_rate != 1: - s += ', dilation={dilation_rate}' if self.name is not None: s += ', name=\'{name}\'' s += ')' @@ -164,23 +163,16 @@ def build(self, inputs_shape=None): raise Exception("unsupported data format") self._filter_size = [self.filter_size] self._strides = [self.strides] - self._dilation_rate = [self.dilation_rate] + self.max_pool = tl.ops.MaxPool1d( + ksize=self._filter_size, strides=self._strides, padding=self.padding, data_format=self.data_format + ) def forward(self, inputs): - outputs = tf.nn.pool( - input=inputs, - window_shape=self._filter_size, - pooling_type="MAX", - strides=self._strides, - padding=self.padding, - data_format=self.data_format, - dilations=self._dilation_rate, - name=self.name, - ) + outputs = self.max_pool(inputs) return outputs -class MeanPool1d(Layer): +class MeanPool1d(Module): """Mean pooling for 1D signal.
Parameters @@ -200,9 +192,9 @@ class MeanPool1d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 32], name='input') >>> net = tl.layers.MeanPool1d(filter_size=3, strides=2, padding='SAME')(net) - >>> output shape : [None, 25, 32] + >>> output shape : [10, 25, 32] """ @@ -220,7 +212,6 @@ def __init__( self.strides = self._strides = strides self.padding = padding self.data_format = data_format - self.dilation_rate = self._dilation_rate = dilation_rate self.build() self._built = True @@ -232,15 +223,12 @@ def __init__( def __repr__(self): s = ('{classname}(filter_size={filter_size}' ', strides={strides}, padding={padding}') - if self.dilation_rate != 1: - s += ', dilation={dilation_rate}' if self.name is not None: s += ', name=\'{name}\'' s += ')' return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - # pass # https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/nn/pool if self.data_format == 'channels_last': self.data_format = 'NWC' @@ -250,23 +238,16 @@ def build(self, inputs_shape=None): raise Exception("unsupported data format") self._filter_size = [self.filter_size] self._strides = [self.strides] - self._dilation_rate = [self.dilation_rate] + self.avg_pool = tl.ops.AvgPool1d( + ksize=self._filter_size, strides=self._strides, padding=self.padding, data_format=self.data_format + ) def forward(self, inputs): - outputs = tf.nn.pool( - input=inputs, - window_shape=self._filter_size, - pooling_type="AVG", - padding=self.padding, - dilations=None, # TODO: support dilations - strides=self._strides, - name=self.name, - data_format=self.data_format - ) + outputs = self.avg_pool(inputs) return outputs -class MaxPool2d(Layer): +class MaxPool2d(Module): """Max pooling for 2D image. Parameters @@ -286,9 +267,9 @@ class MaxPool2d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 50, 50, 32], name='input') + >>> net = tl.layers.Input([10, 50, 50, 32], name='input') >>> net = tl.layers.MaxPool2d(filter_size=(3, 3), strides=(2, 2), padding='SAME')(net) - >>> output shape : [None, 25, 25, 32] + >>> output shape : [10, 25, 25, 32] """ @@ -325,23 +306,24 @@ def __repr__(self): def build(self, inputs_shape=None): if self.data_format == 'channels_last': - self._strides = [1, self.strides[0], self.strides[1], 1] self.data_format = 'NHWC' + self._strides = [1, self.strides[0], self.strides[1], 1] elif self.data_format == 'channels_first': self.data_format = 'NCHW' self._strides = [1, 1, self.strides[0], self.strides[1]] else: raise Exception("unsupported data format") - def forward(self, inputs): - outputs = tf.nn.max_pool( - input=inputs, ksize=self.filter_size, strides=self._strides, padding=self.padding, name=self.name, - data_format=self.data_format + self.max_pool = tl.ops.MaxPool( + ksize=self.filter_size, strides=self._strides, padding=self.padding, data_format=self.data_format ) + + def forward(self, inputs): + outputs = self.max_pool(inputs) return outputs -class MeanPool2d(Layer): +class MeanPool2d(Module): """Mean pooling for 2D image [batch, height, width, channel]. 
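The hunks above all follow the same migration pattern: configuration that used to be passed to a ``tf.nn`` call on every forward pass is now baked into a backend-neutral op object once, in ``build()``, and ``forward()`` simply applies it. A minimal framework-free sketch of that pattern (the lambda stands in for a ``tl.ops`` op object; it is not a real API):

```python
class PoolingLayerSketch:
    """Illustrates the build-once / apply-many structure of the rewritten layers."""

    def __init__(self, filter_size=(3, 3), strides=(2, 2), padding='SAME'):
        self.filter_size = filter_size
        self.strides = strides
        self.padding = padding
        self.build()  # TensorLayer layers call build() eagerly once shapes are known

    def build(self, inputs_shape=None):
        # In TensorLayer 3.0 this line would read something like:
        #   self.pool = tl.ops.MaxPool(ksize=self.filter_size, strides=self.strides, padding=self.padding)
        self.pool = lambda x: (x, self.filter_size, self.strides, self.padding)

    def forward(self, inputs):
        # forward() carries no configuration of its own; it only applies the prebuilt op
        return self.pool(inputs)
```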
    Parameters
@@ -361,9 +343,9 @@ class MeanPool2d(Layer):
    ---------
    With TensorLayer

-    >>> net = tl.layers.Input([None, 50, 50, 32], name='input')
+    >>> net = tl.layers.Input([10, 50, 50, 32], name='input')
    >>> net = tl.layers.MeanPool2d(filter_size=(3, 3), strides=(2, 2), padding='SAME')(net)
-    >>> output shape : [None, 25, 25, 32]
+    >>> output shape : [10, 25, 25, 32]

    """

@@ -407,16 +389,16 @@ def build(self, inputs_shape=None):
            self._strides = [1, 1, self.strides[0], self.strides[1]]
        else:
            raise Exception("unsupported data format")
+        self.avg_pool = tl.ops.AvgPool(
+            ksize=self.filter_size, strides=self._strides, padding=self.padding, data_format=self.data_format
+        )

    def forward(self, inputs):
-        outputs = tf.nn.avg_pool(
-            input=inputs, ksize=self.filter_size, strides=self._strides, padding=self.padding, name=self.name,
-            data_format=self.data_format
-        )
+        outputs = self.avg_pool(inputs)
        return outputs


-class MaxPool3d(Layer):
+class MaxPool3d(Module):
    """Max pooling for 3D volume.

    Parameters
    ----------
@@ -441,9 +423,9 @@
    ---------
    With TensorLayer

-    >>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input')
+    >>> net = tl.layers.Input([10, 50, 50, 50, 32], name='input')
    >>> net = tl.layers.MaxPool3d(filter_size=(3, 3, 3), strides=(2, 2, 2), padding='SAME')(net)
-    >>> output shape : [None, 25, 25, 25, 32]
+    >>> output shape : [10, 25, 25, 25, 32]

    """

@@ -485,20 +467,17 @@ def build(self, inputs_shape=None):
            self._strides = [1, 1, self.strides[0], self.strides[1], self.strides[2]]
        else:
            raise Exception("unsupported data format")
+        self.max_pool3d = tl.ops.MaxPool3d(ksize=self.filter_size,
+                                           strides=self._strides,
+                                           padding=self.padding,
+                                           data_format=self.data_format)

    def forward(self, inputs):
-        outputs = tf.nn.max_pool3d(
-            input=inputs,
-            ksize=self.filter_size,
-            strides=self._strides,
-            padding=self.padding,
-            data_format=self.data_format,
-            name=self.name,
-        )
+        outputs = self.max_pool3d(inputs)
        return outputs


-class MeanPool3d(Layer):
+class MeanPool3d(Module):
    """Mean pooling for 3D volume.

    Parameters
    ----------
@@ -523,9 +502,9 @@
    ---------
    With TensorLayer

-    >>> net = tl.layers.Input([None, 50, 50, 50, 32], name='input')
+    >>> net = tl.layers.Input([10, 50, 50, 50, 32], name='input')
    >>> net = tl.layers.MeanPool3d(filter_size=(3, 3, 3), strides=(2, 2, 2), padding='SAME')(net)
-    >>> output shape : [None, 25, 25, 25, 32]
+    >>> output shape : [10, 25, 25, 25, 32]

    """

@@ -559,28 +538,25 @@ def __repr__(self):
        return s.format(classname=self.__class__.__name__, **self.__dict__)

    def build(self, inputs_shape=None):
        if self.data_format == 'channels_last':
            self.data_format = 'NDHWC'
            self._strides = [1, self.strides[0], self.strides[1], self.strides[2], 1]
        elif self.data_format == 'channels_first':
            self.data_format = 'NCDHW'
            self._strides = [1, 1, self.strides[0], self.strides[1], self.strides[2]]
        else:
            raise Exception("unsupported data format")
+        self.avg_pool3d = tl.ops.AvgPool3d(ksize=self.filter_size,
+                                           strides=self._strides,
+                                           padding=self.padding,
+                                           data_format=self.data_format
+                                           )

    def forward(self, inputs):
-        outputs = tf.nn.avg_pool3d(
-            input=inputs,
-            ksize=self.filter_size,
-            strides=self._strides,
-            padding=self.padding,
-            data_format=self.data_format,
-            name=self.name,
-        )
+        outputs = self.avg_pool3d(inputs)
        return outputs


-class GlobalMaxPool1d(Layer):
+class GlobalMaxPool1d(Module):
    """The :class:`GlobalMaxPool1d` class is a 1D Global Max Pooling layer.
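One detail worth calling out in the 3D pooling hunks above: the five-element strides vector handed to the backend op must be laid out to match the data format, with the batch and channel positions pinned to 1 (the MeanPool3d build has been kept per-branch for that reason). A pure-Python rendering of the mapping that ``build()`` performs:

```python
def pool3d_strides(strides, data_format):
    """Expand a (depth, height, width) stride tuple to the 5-D vector the op expects."""
    d, h, w = strides
    if data_format == 'channels_last':   # NDHWC: batch, depth, height, width, channel
        return [1, d, h, w, 1]
    if data_format == 'channels_first':  # NCDHW: batch, channel, depth, height, width
        return [1, 1, d, h, w]
    raise ValueError("unsupported data format")

assert pool3d_strides((2, 2, 2), 'channels_last') == [1, 2, 2, 2, 1]
assert pool3d_strides((2, 2, 2), 'channels_first') == [1, 1, 2, 2, 2]
```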
Parameters @@ -594,9 +570,9 @@ class GlobalMaxPool1d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 30], name='input') + >>> net = tl.layers.Input([10, 100, 30], name='input') >>> net = tl.layers.GlobalMaxPool1d()(net) - >>> output shape : [None, 30] + >>> output shape : [10, 30] """ @@ -622,21 +598,21 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass - - def forward(self, inputs): if self.data_format == 'channels_last': - outputs = tf.reduce_max(input_tensor=inputs, axis=1, name=self.name) + self.reduce_max = tl.ReduceMax(axis=1) elif self.data_format == 'channels_first': - outputs = tf.reduce_max(input_tensor=inputs, axis=2, name=self.name) + self.reduce_max = tl.ReduceMax(axis=2) else: raise ValueError( "`data_format` should have one of the following values: [`channels_last`, `channels_first`]" ) + + def forward(self, inputs): + outputs = self.reduce_max(inputs) return outputs -class GlobalMeanPool1d(Layer): +class GlobalMeanPool1d(Module): """The :class:`GlobalMeanPool1d` class is a 1D Global Mean Pooling layer. Parameters @@ -650,9 +626,9 @@ class GlobalMeanPool1d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 30], name='input') + >>> net = tl.layers.Input([10, 100, 30], name='input') >>> net = tl.layers.GlobalMeanPool1d()(net) - >>> output shape : [None, 30] + >>> output shape : [10, 30] """ @@ -677,21 +653,21 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass - - def forward(self, inputs): if self.data_format == 'channels_last': - outputs = tf.reduce_mean(input_tensor=inputs, axis=1, name=self.name) + self.reduce_mean = tl.ReduceMean(axis=1) elif self.data_format == 'channels_first': - outputs = tf.reduce_mean(input_tensor=inputs, axis=2, name=self.name) + self.reduce_mean = tl.ReduceMean(axis=2) else: raise ValueError( "`data_format` should have one of the following values: [`channels_last`, `channels_first`]" ) + + def forward(self, inputs): + outputs = self.reduce_mean(inputs) return outputs -class GlobalMaxPool2d(Layer): +class GlobalMaxPool2d(Module): """The :class:`GlobalMaxPool2d` class is a 2D Global Max Pooling layer. Parameters @@ -705,9 +681,9 @@ class GlobalMaxPool2d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 100, 30], name='input') + >>> net = tl.layers.Input([10, 100, 100, 30], name='input') >>> net = tl.layers.GlobalMaxPool2d()(net) - >>> output shape : [None, 30] + >>> output shape : [10, 30] """ @@ -732,21 +708,21 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass - - def forward(self, inputs): if self.data_format == 'channels_last': - outputs = tf.reduce_max(input_tensor=inputs, axis=[1, 2], name=self.name) + self.reduce_max = tl.ReduceMax(axis=[1, 2]) elif self.data_format == 'channels_first': - outputs = tf.reduce_max(input_tensor=inputs, axis=[2, 3], name=self.name) + self.reduce_max = tl.ReduceMax(axis=[2, 3]) else: raise ValueError( "`data_format` should have one of the following values: [`channels_last`, `channels_first`]" ) + + def forward(self, inputs): + outputs = self.reduce_max(inputs) return outputs -class GlobalMeanPool2d(Layer): +class GlobalMeanPool2d(Module): """The :class:`GlobalMeanPool2d` class is a 2D Global Mean Pooling layer. 
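As the rewritten ``build()`` methods above make explicit, global pooling is just a reduction over the spatial axes, with the axis indices chosen from the data format. A quick numpy check of the shapes promised in the doctests (numpy only, no TensorLayer required):

```python
import numpy as np

x = np.random.normal(size=(10, 100, 30)).astype('float32')  # [batch, width, channel]

# GlobalMeanPool1d, channels_last: reduce over axis 1 -> [10, 30]
assert x.mean(axis=1).shape == (10, 30)

# channels_first layout [batch, channel, width]: the same reduction moves to axis 2
x_cf = np.transpose(x, (0, 2, 1))
assert x_cf.max(axis=2).shape == (10, 30)
```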
Parameters @@ -760,9 +736,9 @@ class GlobalMeanPool2d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 100, 30], name='input') + >>> net = tl.layers.Input([10, 100, 100, 30], name='input') >>> net = tl.layers.GlobalMeanPool2d()(net) - >>> output shape : [None, 30] + >>> output shape : [10, 30] """ @@ -788,21 +764,21 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass - - def forward(self, inputs): if self.data_format == 'channels_last': - outputs = tf.reduce_mean(input_tensor=inputs, axis=[1, 2], name=self.name) + self.reduce_mean = tl.ReduceMean(axis=[1, 2]) elif self.data_format == 'channels_first': - outputs = tf.reduce_mean(input_tensor=inputs, axis=[2, 3], name=self.name) + self.reduce_mean = tl.ReduceMean(axis=[2, 3]) else: raise ValueError( "`data_format` should have one of the following values: [`channels_last`, `channels_first`]" ) + + def forward(self, inputs): + outputs = self.reduce_mean(inputs) return outputs -class GlobalMaxPool3d(Layer): +class GlobalMaxPool3d(Module): """The :class:`GlobalMaxPool3d` class is a 3D Global Max Pooling layer. Parameters @@ -816,9 +792,9 @@ class GlobalMaxPool3d(Layer): --------- With TensorLayer - >>> net = tl.layers.Input([None, 100, 100, 100, 30], name='input') + >>> net = tl.layers.Input([10, 100, 100, 100, 30], name='input') >>> net = tl.layers.GlobalMaxPool3d()(net) - >>> output shape : [None, 30] + >>> output shape : [10, 30] """ @@ -844,21 +820,21 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass - - def forward(self, inputs): if self.data_format == 'channels_last': - outputs = tf.reduce_max(input_tensor=inputs, axis=[1, 2, 3], name=self.name) + self.reduce_max = tl.ReduceMax(axis=[1, 2, 3]) elif self.data_format == 'channels_first': - outputs = tf.reduce_max(input_tensor=inputs, axis=[2, 3, 4], name=self.name) + self.reduce_max = tl.ReduceMax(axis=[2, 3, 4]) else: raise ValueError( "`data_format` should have one of the following values: [`channels_last`, `channels_first`]" ) + + def forward(self, inputs): + outputs = self.reduce_max(inputs) return outputs -class GlobalMeanPool3d(Layer): +class GlobalMeanPool3d(Module): """The :class:`GlobalMeanPool3d` class is a 3D Global Mean Pooling layer. 
    Parameters
@@ -872,9 +848,9 @@
    ---------
    With TensorLayer

-    >>> net = tl.layers.Input([None, 100, 100, 100, 30], name='input')
+    >>> net = tl.layers.Input([10, 100, 100, 100, 30], name='input')
    >>> net = tl.layers.GlobalMeanPool3d()(net)
-    >>> output shape : [None, 30]
+    >>> output shape : [10, 30]

    """

@@ -899,21 +875,21 @@ def __repr__(self):
        return s.format(classname=self.__class__.__name__, **self.__dict__)

    def build(self, inputs_shape=None):
-        pass
-
-    def forward(self, inputs):
        if self.data_format == 'channels_last':
-            outputs = tf.reduce_mean(input_tensor=inputs, axis=[1, 2, 3], name=self.name)
+            self.reduce_mean = tl.ReduceMean(axis=[1, 2, 3])
        elif self.data_format == 'channels_first':
-            outputs = tf.reduce_mean(input_tensor=inputs, axis=[2, 3, 4], name=self.name)
+            self.reduce_mean = tl.ReduceMean(axis=[2, 3, 4])
        else:
            raise ValueError(
                "`data_format` should have one of the following values: [`channels_last`, `channels_first`]"
            )
+
+    def forward(self, inputs):
+        outputs = self.reduce_mean(inputs)
        return outputs


-class CornerPool2d(Layer):
+class CornerPool2d(Module):
    """Corner pooling for 2D image [batch, height, width, channel], see `here `__.

    Parameters
    ----------
@@ -928,9 +904,9 @@
    ---------
    With TensorLayer

-    >>> net = tl.layers.Input([None, 32, 32, 8], name='input')
+    >>> net = tl.layers.Input([10, 32, 32, 8], name='input')
    >>> net = tl.layers.CornerPool2d(mode='TopLeft',name='cornerpool2d')(net)
-    >>> output shape : [None, 32, 32, 8]
+    >>> output shape : [10, 32, 32, 8]

    """

@@ -957,29 +933,371 @@ def build(self, inputs_shape=None):
        pass

    def forward(self, inputs):
-        input_width = inputs.shape[2]
-        input_height = inputs.shape[1]
-        batch_min = tf.reduce_min(inputs)
+        _, input_height, input_width, _ = tl.get_tensor_shape(inputs)
+        batch_min = tl.reduce_min(inputs)
        if self.mode == 'TopLeft':
-            temp_bottom = tf.pad(
-                inputs, tf.constant([[0, 0], [0, input_height - 1], [0, 0], [0, 0]]), constant_values=batch_min
+            temp_bottom = tl.pad(
+                inputs, tl.constant([[0, 0], [0, input_height - 1], [0, 0], [0, 0]]), constant_values=batch_min
            )
-            temp_right = tf.pad(
-                inputs, tf.constant([[0, 0], [0, 0], [0, input_width - 1], [0, 0]]), constant_values=batch_min
+            temp_right = tl.pad(
+                inputs, tl.constant([[0, 0], [0, 0], [0, input_width - 1], [0, 0]]), constant_values=batch_min
            )
-            temp_bottom = tf.nn.max_pool(temp_bottom, ksize=(input_height, 1), strides=(1, 1), padding='VALID')
-            temp_right = tf.nn.max_pool(temp_right, ksize=(1, input_width), strides=(1, 1), padding='VALID')
-            outputs = tf.add(temp_bottom, temp_right, name=self.name)
+            temp_bottom = tl.ops.max_pool(temp_bottom, ksize=(input_height, 1), strides=(1, 1), padding='VALID')
+            temp_right = tl.ops.max_pool(temp_right, ksize=(1, input_width), strides=(1, 1), padding='VALID')
+            outputs = tl.add(temp_bottom, temp_right)
        elif self.mode == 'BottomRight':
-            temp_top = tf.pad(
-                inputs, tf.constant([[0, 0], [input_height - 1, 0], [0, 0], [0, 0]]), constant_values=batch_min
+            temp_top = tl.pad(
+                inputs, tl.constant([[0, 0], [input_height - 1, 0], [0, 0], [0, 0]]), constant_values=batch_min
            )
-            temp_left = tf.pad(
-                inputs, tf.constant([[0, 0], [0, 0], [input_width - 1, 0], [0, 0]]), constant_values=batch_min
+            temp_left = tl.pad(
+                inputs, tl.constant([[0, 0], [0, 0], [input_width - 1, 0], [0, 0]]), constant_values=batch_min
            )
-            temp_top = tf.nn.max_pool(temp_top, ksize=(input_height, 1), strides=(1, 1), padding='VALID')
-            temp_left = tf.nn.max_pool(temp_left, ksize=(1, input_width), strides=(1, 1), padding='VALID')
-            outputs = tf.add(temp_top, temp_left, name=self.name)
+            temp_top = tl.ops.max_pool(temp_top, ksize=(input_height, 1), strides=(1, 1), padding='VALID')
+            temp_left = tl.ops.max_pool(temp_left, ksize=(1, input_width), strides=(1, 1), padding='VALID')
+            outputs = tl.add(temp_top, temp_left)
+        else:
+            outputs = tl.identity(inputs)
+        return outputs
+
+
+class AdaptiveMeanPool1d(Module):
+    """The :class:`AdaptiveMeanPool1d` class is a 1D Adaptive Mean Pooling layer.
+
+    Parameters
+    ------------
+    output_size : int
+        The target output size. It must be an integer.
+    data_format : str
+        One of channels_last (default, [batch, width, channel]) or channels_first. The ordering of the dimensions in the inputs.
+    name : None or str
+        A unique layer name.
+
+    Examples
+    ---------
+    With TensorLayer
+
+    >>> net = tl.layers.Input([10, 32, 3], name='input')
+    >>> net = tl.layers.AdaptiveMeanPool1d(output_size=16)(net)
+    >>> output shape : [10, 16, 3]
+
+    """
+
+    def __init__(self, output_size, data_format='channels_last', name=None):
+        super(AdaptiveMeanPool1d, self).__init__(name)
+        self.output_size = output_size
+        self.data_format = data_format
+
+        self.build()
+        self._built = True
+
+        logging.info("AdaptiveMeanPool1d %s: output_size: %s " % (self.name, str(output_size)))
+
+    def __repr__(self):
+        s = ('{classname}(output_size={output_size}')
+        if self.name is not None:
+            s += ', name=\'{name}\''
+        s += ')'
+        return s.format(classname=self.__class__.__name__, **self.__dict__)
+
+    def build(self, inputs_shape=None):
+        if self.data_format == 'channels_last':
+            self.data_format = 'NWC'
+        elif self.data_format == 'channels_first':
+            self.data_format = 'NCW'
+        else:
+            raise Exception("unsupported data format")
+
+        self.adaptivemeanpool1d = tl.ops.AdaptiveMeanPool1D(output_size=self.output_size, data_format=self.data_format)
+
+    def forward(self, inputs):
+
+        outputs = self.adaptivemeanpool1d(inputs)
+        return outputs
+
+
+class AdaptiveMeanPool2d(Module):
+    """The :class:`AdaptiveMeanPool2d` class is a 2D Adaptive Mean Pooling layer.
+
+    Parameters
+    ------------
+    output_size : int or list or tuple
+        The target output size. It could be an int, a list of 2 ints ``[int, int]`` or a tuple of 2 ints ``(int, int)``.
+    data_format : str
+        One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.
+    name : None or str
+        A unique layer name.
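Adaptive pooling inverts the usual contract: the caller fixes ``output_size`` and the layer derives the windows, so a 32-step input always yields exactly 16 steps regardless of the input length. The kernel itself is provided by the backend via ``tl.ops``; one plausible reference semantics, using the floor/ceil window split that PyTorch also uses (an assumption about the backend kernel, not a quote of it), is:

```python
import math
import numpy as np

def adaptive_mean_pool1d(x, output_size):
    """x: [batch, width, channel] (channels_last) -> [batch, output_size, channel]."""
    n = x.shape[1]
    out = np.empty((x.shape[0], output_size, x.shape[2]), dtype=x.dtype)
    for i in range(output_size):
        start = math.floor(i * n / output_size)     # window boundaries stretch so that
        end = math.ceil((i + 1) * n / output_size)  # every input position is covered
        out[:, i, :] = x[:, start:end, :].mean(axis=1)
    return out

x = np.random.normal(size=(10, 32, 3)).astype('float32')
assert adaptive_mean_pool1d(x, 16).shape == (10, 16, 3)  # matches the doctests here
```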
+
+    Examples
+    ---------
+    With TensorLayer
+
+    >>> net = tl.layers.Input([10, 32, 32, 3], name='input')
+    >>> net = tl.layers.AdaptiveMeanPool2d(output_size=16)(net)
+    >>> output shape : [10, 16, 16, 3]
+
+    """
+
+    def __init__(self, output_size, data_format='channels_last', name=None):
+        super(AdaptiveMeanPool2d, self).__init__(name)
+        self.output_size = output_size
+        self.data_format = data_format
+
+        self.build()
+        self._built = True
+
+        logging.info("AdaptiveMeanPool2d %s: output_size: %s " % (self.name, str(output_size)))
+
+    def __repr__(self):
+        s = ('{classname}(output_size={output_size}')
+        if self.name is not None:
+            s += ', name=\'{name}\''
+        s += ')'
+        return s.format(classname=self.__class__.__name__, **self.__dict__)
+
+    def build(self, inputs_shape=None):
+        if self.data_format == 'channels_last':
+            self.data_format = 'NHWC'
+        elif self.data_format == 'channels_first':
+            self.data_format = 'NCHW'
+        else:
+            raise Exception("unsupported data format")
+
+        if isinstance(self.output_size, int):
+            self.output_size = (self.output_size, ) * 2
+
+        self.adaptivemeanpool2d = tl.ops.AdaptiveMeanPool2D(output_size=self.output_size, data_format=self.data_format)
+
+    def forward(self, inputs):
+
+        outputs = self.adaptivemeanpool2d(inputs)
+        return outputs
+
+
+class AdaptiveMeanPool3d(Module):
+    """The :class:`AdaptiveMeanPool3d` class is a 3D Adaptive Mean Pooling layer.
+
+    Parameters
+    ------------
+    output_size : int or list or tuple
+        The target output size. It could be an int, a list of 3 ints ``[int, int, int]`` or a tuple of 3 ints ``(int, int, int)``.
+    data_format : str
+        One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.
+    name : None or str
+        A unique layer name.
+
+    Examples
+    ---------
+    With TensorLayer
+
+    >>> net = tl.layers.Input([10, 32, 32, 32, 3], name='input')
+    >>> net = tl.layers.AdaptiveMeanPool3d(output_size=16)(net)
+    >>> output shape : [10, 16, 16, 16, 3]
+
+    """
+
+    def __init__(self, output_size, data_format='channels_last', name=None):
+        super(AdaptiveMeanPool3d, self).__init__(name)
+        self.output_size = output_size
+        self.data_format = data_format
+
+        self.build()
+        self._built = True
+
+        logging.info("AdaptiveMeanPool3d %s: output_size: %s " % (self.name, str(output_size)))
+
+    def __repr__(self):
+        s = ('{classname}(output_size={output_size}')
+        if self.name is not None:
+            s += ', name=\'{name}\''
+        s += ')'
+        return s.format(classname=self.__class__.__name__, **self.__dict__)
+
+    def build(self, inputs_shape=None):
+        if self.data_format == 'channels_last':
+            self.data_format = 'NDHWC'
+        elif self.data_format == 'channels_first':
+            self.data_format = 'NCDHW'
+        else:
+            raise Exception("unsupported data format")
+
+        if isinstance(self.output_size, int):
+            self.output_size = (self.output_size, ) * 3
+
+        self.adaptivemeanpool3d = tl.ops.AdaptiveMeanPool3D(output_size=self.output_size, data_format=self.data_format)
+
+    def forward(self, inputs):
+
+        outputs = self.adaptivemeanpool3d(inputs)
+        return outputs
+
+
+class AdaptiveMaxPool1d(Module):
+    """The :class:`AdaptiveMaxPool1d` class is a 1D Adaptive Max Pooling layer.
+
+    Parameters
+    ------------
+    output_size : int
+        The target output size. It must be an integer.
+    data_format : str
+        One of channels_last (default, [batch, width, channel]) or channels_first. The ordering of the dimensions in the inputs.
+    name : None or str
+        A unique layer name.
+
+    Examples
+    ---------
+    With TensorLayer
+
+    >>> net = tl.layers.Input([10, 32, 3], name='input')
+    >>> net = tl.layers.AdaptiveMaxPool1d(output_size=16)(net)
+    >>> output shape : [10, 16, 3]
+
+    """
+
+    def __init__(self, output_size, data_format='channels_last', name=None):
+        super(AdaptiveMaxPool1d, self).__init__(name)
+        self.output_size = output_size
+        self.data_format = data_format
+
+        self.build()
+        self._built = True
+
+        logging.info("AdaptiveMaxPool1d %s: output_size: %s " % (self.name, str(output_size)))
+
+    def __repr__(self):
+        s = ('{classname}(output_size={output_size}')
+        if self.name is not None:
+            s += ', name=\'{name}\''
+        s += ')'
+        return s.format(classname=self.__class__.__name__, **self.__dict__)
+
+    def build(self, inputs_shape=None):
+        if self.data_format == 'channels_last':
+            self.data_format = 'NWC'
+        elif self.data_format == 'channels_first':
+            self.data_format = 'NCW'
+        else:
+            raise Exception("unsupported data format")
+
+        self.adaptivemaxpool1d = tl.ops.AdaptiveMaxPool1D(output_size=self.output_size, data_format=self.data_format)
+
+    def forward(self, inputs):
+
+        outputs = self.adaptivemaxpool1d(inputs)
+        return outputs
+
+
+class AdaptiveMaxPool2d(Module):
+    """The :class:`AdaptiveMaxPool2d` class is a 2D Adaptive Max Pooling layer.
+
+    Parameters
+    ------------
+    output_size : int or list or tuple
+        The target output size. It could be an int, a list of 2 ints ``[int, int]`` or a tuple of 2 ints ``(int, int)``.
+    data_format : str
+        One of channels_last (default, [batch, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.
+    name : None or str
+        A unique layer name.
+
+    Examples
+    ---------
+    With TensorLayer
+
+    >>> net = tl.layers.Input([10, 32, 32, 3], name='input')
+    >>> net = tl.layers.AdaptiveMaxPool2d(output_size=16)(net)
+    >>> output shape : [10, 16, 16, 3]
+
+    """
+
+    def __init__(self, output_size, data_format='channels_last', name=None):
+        super(AdaptiveMaxPool2d, self).__init__(name)
+        self.output_size = output_size
+        self.data_format = data_format
+
+        self.build()
+        self._built = True
+
+        logging.info("AdaptiveMaxPool2d %s: output_size: %s " % (self.name, str(output_size)))
+
+    def __repr__(self):
+        s = ('{classname}(output_size={output_size}')
+        if self.name is not None:
+            s += ', name=\'{name}\''
+        s += ')'
+        return s.format(classname=self.__class__.__name__, **self.__dict__)
+
+    def build(self, inputs_shape=None):
+        if self.data_format == 'channels_last':
+            self.data_format = 'NHWC'
+        elif self.data_format == 'channels_first':
+            self.data_format = 'NCHW'
+        else:
+            raise Exception("unsupported data format")
+        if isinstance(self.output_size, int):
+            self.output_size = (self.output_size, ) * 2
+
+        self.adaptivemaxpool2d = tl.ops.AdaptiveMaxPool2D(output_size=self.output_size, data_format=self.data_format)
+
+    def forward(self, inputs):
+
+        outputs = self.adaptivemaxpool2d(inputs)
+        return outputs
+
+
+class AdaptiveMaxPool3d(Module):
+    """The :class:`AdaptiveMaxPool3d` class is a 3D Adaptive Max Pooling layer.
+
+    Parameters
+    ------------
+    output_size : int or list or tuple
+        The target output size. It could be an int, a list of 3 ints ``[int, int, int]`` or a tuple of 3 ints ``(int, int, int)``.
+    data_format : str
+        One of channels_last (default, [batch, depth, height, width, channel]) or channels_first. The ordering of the dimensions in the inputs.
+    name : None or str
+        A unique layer name.
+ + Examples + --------- + With TensorLayer + + >>> net = tl.layers.Input([10,32, 32, 32, 3], name='input') + >>> net = tl.layers.AdaptiveMaxPool3d(output_size=16)(net) + >>> output shape : [10, 16, 16, 16, 3] + + """ + + def __init__(self, output_size, data_format='channels_last', name=None): + super(AdaptiveMaxPool3d, self).__init__(name) + self.output_size = output_size + self.data_format = data_format + + self.build() + self._built = True + + logging.info("AdaptiveMaxPool3d %s: output_size: %s " % (self.name, str(output_size))) + + def __repr__(self): + s = ('{classname}(output_size={output_size}') + if self.name is not None: + s += ', name=\'{name}\'' + s += ')' + return s.format(classname=self.__class__.__name__, **self.__dict__) + + def build(self, inputs_shape=None): + if self.data_format == 'channels_last': + self.data_format = 'NDHWC' + elif self.data_format == 'channels_first': + self.data_format = 'NCDHW' else: - outputs = tf.identity(inputs, name=self.name) + raise Exception("unsupported data format") + + if isinstance(self.output_size, int): + self.output_size = (self.output_size, ) * 3 + + self.adaptivemaxpool3d = tl.ops.AdaptiveMaxPool3D(output_size=self.output_size, data_format=self.data_format) + + def forward(self, inputs): + + outputs = self.adaptivemaxpool3d(inputs) return outputs diff --git a/tensorlayer/layers/quantize.py b/tensorlayer/layers/quantize.py index fd19c9fa4..02107c721 100644 --- a/tensorlayer/layers/quantize.py +++ b/tensorlayer/layers/quantize.py @@ -1,11 +1,8 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module from tensorlayer.layers.utils import quantize __all__ = [ @@ -13,7 +10,7 @@ ] -class Sign(Layer): +class Sign(Module): """The :class:`SignLayer` class is for quantizing the layer outputs to -1 or 1 while inferencing. Parameters @@ -45,8 +42,5 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def forward(self, inputs): - # with tf.variable_scope(name): - ## self.outputs = tl.act.sign(self.inputs) - # self.outputs = quantize(self.inputs) outputs = quantize(inputs) return outputs diff --git a/tensorlayer/layers/recurrent.py b/tensorlayer/layers/recurrent.py index 2d3558af4..5434cec6e 100644 --- a/tensorlayer/layers/recurrent.py +++ b/tensorlayer/layers/recurrent.py @@ -7,9 +7,9 @@ import tensorlayer as tl from tensorlayer import logging from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module -# TODO: uncomment +# TODO: Need to update to version 3.0 __all__ = [ 'RNN', 'SimpleRNN', @@ -26,7 +26,7 @@ ] -class RNN(Layer): +class RNN(Module): """ The :class:`RNN` class is a fixed length recurrent layer for implementing simple RNN, LSTM, GRU and etc. @@ -76,7 +76,7 @@ class RNN(Layer): For synced sequence input and output, see `PTB example `__ A simple regression model below. - + >>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size]) >>> rnn_out, lstm_state = tl.layers.RNN( >>> cell=tf.keras.layers.LSTMCell(units=hidden_size, dropout=0.1), @@ -88,7 +88,7 @@ class RNN(Layer): >>> # If LSTMCell is applied, the rnn_state is [h, c] where h the hidden state and c the cell state of LSTM. A stacked RNN model. 
- + >>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size]) >>> rnn_out1 = tl.layers.RNN( >>> cell=tf.keras.layers.SimpleRNNCell(units=hidden_size, dropout=0.1), @@ -247,7 +247,9 @@ def forward(self, inputs, sequence_length=None, initial_state=None, **kwargs): "but got an actual length of a sequence %d" % i ) - sequence_length = [i - 1 if i >= 1 else 0 for i in sequence_length] + sequence_length = tl.layers.retrieve_seq_length_op3(inputs) + + sequence_length = [i - 1 if i >= 1 else 0 for i in sequence_length] # set warning # if (not self.return_last_output) and sequence_length is not None: @@ -559,7 +561,7 @@ def __init__( ) -class BiRNN(Layer): +class BiRNN(Module): """ The :class:`BiRNN` class is a fixed length Bidirectional recurrent layer. @@ -591,7 +593,7 @@ class BiRNN(Layer): Examples -------- A simple regression model below. - + >>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size]) >>> # the fw_cell and bw_cell can be different >>> rnnlayer = tl.layers.BiRNN( @@ -609,7 +611,7 @@ class BiRNN(Layer): >>> rnn_model = tl.models.Model(inputs=inputs, outputs=[outputs, rnn_out, rnn_fw_state[0], rnn_bw_state[0]]) A stacked BiRNN model. - + >>> inputs = tl.layers.Input([batch_size, num_steps, embedding_size]) >>> rnn_out1 = tl.layers.BiRNN( >>> fw_cell=tf.keras.layers.SimpleRNNCell(units=hidden_size, dropout=0.1), @@ -733,7 +735,6 @@ def forward(self, inputs, fw_initial_state=None, bw_initial_state=None, **kwargs self.bw_cell.reset_recurrent_dropout_mask() for time_step in range(total_steps): - fw_cell_output, fw_states = self.fw_cell.call(inputs[:, time_step, :], fw_states, training=self.is_train) bw_cell_output, bw_states = self.bw_cell.call( inputs[:, -time_step - 1, :], bw_states, training=self.is_train @@ -928,14 +929,14 @@ def _conv_linear(args, filter_size, num_features, bias, bias_start=0.0, scope=No return res + bias_term -class ConvLSTM(Layer): +class ConvLSTM(Module): """A fixed length Convolutional LSTM layer. See this `paper `__ . Parameters ---------- - prev_layer : :class:`Layer` + prev_layer : :class:`Module` Previous layer cell_shape : tuple of int The shape of each cell width * height diff --git a/tensorlayer/layers/scale.py b/tensorlayer/layers/scale.py index 3e14e462a..b86dcaee3 100644 --- a/tensorlayer/layers/scale.py +++ b/tensorlayer/layers/scale.py @@ -1,18 +1,16 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - +import tensorlayer as tl from tensorlayer import logging -from tensorlayer.initializers import constant -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'Scale', ] -class Scale(Layer): +class Scale(Module): """The :class:`Scale` class is to multiple a trainable scale value to the layer outputs. Usually be used on the output of binary net. 
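Scale's forward pass is a single broadcast multiply by the one-element trainable weight initialised from ``init_scale``, so its numerics are easy to pin down. A numpy sketch of the computation the docstring example performs (the training of ``scale`` itself is left to the optimizer):

```python
import numpy as np

init_scale = 0.5
scale = np.array([init_scale], dtype='float32')  # the layer's single trainable weight

dense_out = np.random.normal(size=(8, 10)).astype('float32')
scaled = dense_out * scale                       # broadcast multiply, shape unchanged

assert scaled.shape == (8, 10)
assert np.allclose(scaled, 0.5 * dense_out)
```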
Parameters @@ -25,10 +23,8 @@ class Scale(Layer): Examples ---------- >>> inputs = tl.layers.Input([8, 3]) - >>> dense = tl.layers.Dense(n_units=10)(inputs) + >>> dense = tl.layers.Dense(n_units=10, in_channels=3)(inputs) >>> outputs = tl.layers.Scale(init_scale=0.5)(dense) - >>> model = tl.models.Model(inputs=inputs, outputs=[dense, outputs]) - >>> dense_out, scale_out = model(data, is_train=True) """ @@ -53,7 +49,7 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - self.scale = self._get_weights("scale", shape=[1], init=constant(value=self.init_scale)) + self.scale = self._get_weights("scale", shape=[1], init=tl.initializers.constant(value=self.init_scale)) # @tf.function def forward(self, inputs): diff --git a/tensorlayer/layers/shape.py b/tensorlayer/layers/shape.py index f8e7b47db..8ecdad0e6 100644 --- a/tensorlayer/layers/shape.py +++ b/tensorlayer/layers/shape.py @@ -1,12 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer -from tensorlayer.layers.utils import flatten_reshape +from tensorlayer.layers.core import Module +import tensorlayer as tl __all__ = [ 'Flatten', @@ -16,7 +13,7 @@ ] -class Flatten(Layer): +class Flatten(Module): """A layer that reshapes high-dimension input into a vector. Then we often apply Dense, RNN, Concat and etc on the top of a flatten layer. @@ -50,15 +47,15 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass + self.flatten_reshape = tl.ops.FlattenReshape() # @tf.function def forward(self, inputs): - outputs = flatten_reshape(inputs, name=self.name) + outputs = self.flatten_reshape(inputs) return outputs -class Reshape(Layer): +class Reshape(Module): """A layer that reshapes a given tensor. Parameters @@ -93,15 +90,14 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass + self.reshape = tl.ops.Reshape(self.shape) - # @tf.function def forward(self, inputs): - outputs = tf.reshape(inputs, shape=self.shape, name=self.name) + outputs = self.reshape(inputs) return outputs -class Transpose(Layer): +class Transpose(Module): """A layer that transposes the dimension of a tensor. See `tf.transpose() `__ . @@ -144,15 +140,15 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape=None): - pass + self.transpose = tl.ops.Transpose(perm=self.perm, conjugate=self.conjugate) # @tf.function def forward(self, inputs): - outputs = tf.transpose(a=inputs, perm=self.perm, conjugate=self.conjugate, name=self.name) + outputs = self.transpose(a=inputs) return outputs -class Shuffle(Layer): +class Shuffle(Module): """A layer that shuffle a 2D image [batch, height, width, channel], see `here `__. 
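The Shuffle rewrite below implements the classic ShuffleNet channel shuffle as reshape, transpose, reshape. The effect is easiest to see on a toy tensor: with ``group=2``, channels ``[0, 1, 2, 3]`` become ``[0, 2, 1, 3]``, interleaving the two groups. The same three steps in numpy:

```python
import numpy as np

def channel_shuffle(x, group):
    """x: [batch, h, w, c] with c divisible by group (channels_last)."""
    b, h, w, c = x.shape
    x = x.reshape(b, h, w, c // group, group)  # split the channel axis into two sub-axes
    x = x.transpose(0, 1, 2, 4, 3)             # swap the sub-axes, interleaving the groups
    return x.reshape(b, h, w, c)               # flatten back to a single channel axis

x = np.arange(4).reshape(1, 1, 1, 4)           # channels [0, 1, 2, 3]
print(channel_shuffle(x, group=2)[0, 0, 0])    # -> [0 2 1 3]
```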
    Parameters
@@ -170,9 +166,10 @@ class Shuffle(Layer):

    """

-    def __init__(self, group, name=None):  #'reshape'):
+    def __init__(self, group, in_channels=None, name=None):  #'reshape'):
        super(Shuffle, self).__init__(name)
        self.group = group
+        self.inchannels = in_channels

        logging.info("Shuffle %s" % (self.name))

@@ -187,18 +184,31 @@ def __repr__(self):
        return s.format(classname=self.__class__.__name__, **self.__dict__)

    def build(self, inputs_shape=None):
-        pass
+        self.transpose = tl.ops.Transpose([0, 1, 2, 4, 3])
+        inputs_shape = self.inchannels
+        if tl.BACKEND == 'mindspore' and inputs_shape is None:
+            raise ValueError("Did you forget to pass the keyword argument 'in_channels'?")
+        if tl.BACKEND == 'mindspore':
+            h, w, in_channel = inputs_shape[1:]
+            if in_channel % self.group != 0:
+                raise ValueError(
+                    "The in_channel must be a multiple of the number of groups. The in_channel got %d and the number of groups is %d."
+                    % (in_channel, self.group)
+                )
+            self.reshape1 = tl.ops.Reshape([-1, h, w, in_channel // self.group, self.group])
+            self.reshape2 = tl.ops.Reshape([-1, h, w, in_channel])

-    # @tf.function
    def forward(self, inputs):
-        in_shape = inputs.get_shape().as_list()
-        h, w, in_channel = in_shape[1:]
-        if in_channel % self.group != 0:
-            raise ValueError(
-                "The in_channel must be a multiple of the number of groups. The in_channel got %d and the number of groups is %d."
-                % (in_channel, self.group)
-            )
-        temp = tf.reshape(inputs, [-1, h, w, in_channel // self.group, self.group])
-        temp = tf.transpose(temp, [0, 1, 2, 4, 3])
-        outputs = tf.reshape(temp, [-1, h, w, in_channel], name=self.name)
+        if tl.BACKEND == 'tensorflow':
+            in_shape = tl.get_tensor_shape(inputs)
+            h, w, in_channel = in_shape[1:]
+            reshape1 = tl.ops.Reshape([-1, h, w, in_channel // self.group, self.group])
+            temp = reshape1(inputs)
+            temp = self.transpose(temp)
+            reshape2 = tl.ops.Reshape([-1, h, w, in_channel])
+            outputs = reshape2(temp)
+        else:
+            temp = self.reshape1(inputs)
+            temp = self.transpose(temp)
+            outputs = self.reshape2(temp)
        return outputs
diff --git a/tensorlayer/layers/spatial_transformer.py b/tensorlayer/layers/spatial_transformer.py
index 74822d565..23c94eb75 100644
--- a/tensorlayer/layers/spatial_transformer.py
+++ b/tensorlayer/layers/spatial_transformer.py
@@ -2,18 +2,10 @@
 # -*- coding: utf-8 -*-

 import numpy as np
-import tensorflow as tf
 from six.moves import xrange
-from tensorflow.python.ops import array_ops
-
 import tensorlayer as tl
 from tensorlayer import logging
-from tensorlayer.decorators import deprecated_alias
-from tensorlayer.layers.core import Layer
-from tensorlayer.layers.utils import flatten_reshape
-
-# from tensorlayer.layers.core import LayersConfig
-# from tensorlayer.layers.core import TF_GRAPHKEYS_VARIABLES
+from tensorlayer.layers.core import Module

 __all__ = [
     'transformer',
@@ -61,47 +53,43 @@ def transformer(U, theta, out_size, name='SpatialTransformer2dAffine'):
    """

    def _repeat(x, n_repeats):
-        rep = tf.transpose(a=tf.expand_dims(tf.ones(shape=tf.stack([
+        rep = tl.transpose(a=tl.expand_dims(tl.ones(shape=tl.stack([
            n_repeats,
-        ])), 1), perm=[1, 0])
-        rep = tf.cast(rep, 'int32')
-        x = tf.matmul(tf.reshape(x, (-1, 1)), rep)
-        return tf.reshape(x, [-1])
+        ])), axis=1), perm=[1, 0])
+        rep = tl.cast(rep, 'int32')
+        x = tl.matmul(tl.reshape(x, (-1, 1)), rep)
+        return tl.reshape(x, [-1])

    def _interpolate(im, x, y, out_size):
        # constants
-        num_batch = tf.shape(input=im)[0]
-        height = tf.shape(input=im)[1]
-        width = tf.shape(input=im)[2]
-        channels = tf.shape(input=im)[3]
-
-        x =
tf.cast(x, 'float32') - y = tf.cast(y, 'float32') - height_f = tf.cast(height, 'float32') - width_f = tf.cast(width, 'float32') + num_batch, height, width, channels = tl.get_tensor_shape(im) + x = tl.cast(x, 'float32') + y = tl.cast(y, 'float32') + height_f = tl.cast(height, 'float32') + width_f = tl.cast(width, 'float32') out_height = out_size[0] out_width = out_size[1] - zero = tf.zeros([], dtype='int32') - max_y = tf.cast(tf.shape(input=im)[1] - 1, 'int32') - max_x = tf.cast(tf.shape(input=im)[2] - 1, 'int32') + zero = tl.zeros([], dtype='int32') + max_y = tl.cast(height - 1, 'int32') + max_x = tl.cast(width - 1, 'int32') # scale indices from [-1, 1] to [0, width/height] x = (x + 1.0) * (width_f) / 2.0 y = (y + 1.0) * (height_f) / 2.0 # do sampling - x0 = tf.cast(tf.floor(x), 'int32') + x0 = tl.cast(tl.floor(x), 'int32') x1 = x0 + 1 - y0 = tf.cast(tf.floor(y), 'int32') + y0 = tl.cast(tl.floor(y), 'int32') y1 = y0 + 1 - x0 = tf.clip_by_value(x0, zero, max_x) - x1 = tf.clip_by_value(x1, zero, max_x) - y0 = tf.clip_by_value(y0, zero, max_y) - y1 = tf.clip_by_value(y1, zero, max_y) + x0 = tl.clip_by_value(x0, zero, max_x) + x1 = tl.clip_by_value(x1, zero, max_x) + y0 = tl.clip_by_value(y0, zero, max_y) + y1 = tl.clip_by_value(y1, zero, max_y) dim2 = width dim1 = width * height - base = _repeat(tf.range(num_batch) * dim1, out_height * out_width) + base = _repeat(tl.range(num_batch) * dim1, out_height * out_width) base_y0 = base + y0 * dim2 base_y1 = base + y1 * dim2 idx_a = base_y0 + x0 @@ -111,23 +99,23 @@ def _interpolate(im, x, y, out_size): # use indices to lookup pixels in the flat image and restore # channels dim - im_flat = tf.reshape(im, tf.stack([-1, channels])) - im_flat = tf.cast(im_flat, 'float32') - Ia = tf.gather(im_flat, idx_a) - Ib = tf.gather(im_flat, idx_b) - Ic = tf.gather(im_flat, idx_c) - Id = tf.gather(im_flat, idx_d) + im_flat = tl.reshape(im, tl.stack([-1, channels])) + im_flat = tl.cast(im_flat, 'float32') + Ia = tl.gather(im_flat, idx_a) + Ib = tl.gather(im_flat, idx_b) + Ic = tl.gather(im_flat, idx_c) + Id = tl.gather(im_flat, idx_d) # and finally calculate interpolated values - x0_f = tf.cast(x0, 'float32') - x1_f = tf.cast(x1, 'float32') - y0_f = tf.cast(y0, 'float32') - y1_f = tf.cast(y1, 'float32') - wa = tf.expand_dims(((x1_f - x) * (y1_f - y)), 1) - wb = tf.expand_dims(((x1_f - x) * (y - y0_f)), 1) - wc = tf.expand_dims(((x - x0_f) * (y1_f - y)), 1) - wd = tf.expand_dims(((x - x0_f) * (y - y0_f)), 1) - output = tf.add_n([wa * Ia, wb * Ib, wc * Ic, wd * Id]) + x0_f = tl.cast(x0, 'float32') + x1_f = tl.cast(x1, 'float32') + y0_f = tl.cast(y0, 'float32') + y1_f = tl.cast(y1, 'float32') + wa = tl.expand_dims(((x1_f - x) * (y1_f - y)), 1) + wb = tl.expand_dims(((x1_f - x) * (y - y0_f)), 1) + wc = tl.expand_dims(((x - x0_f) * (y1_f - y)), 1) + wd = tl.expand_dims(((x - x0_f) * (y - y0_f)), 1) + output = tl.add_n([wa * Ia, wb * Ib, wc * Ic, wd * Id]) return output def _meshgrid(height, width): @@ -136,44 +124,43 @@ def _meshgrid(height, width): # np.linspace(-1, 1, height)) # ones = np.ones(np.prod(x_t.shape)) # grid = np.vstack([x_t.flatten(), y_t.flatten(), ones]) - x_t = tf.matmul( - tf.ones(shape=tf.stack([height, 1])), - tf.transpose(a=tf.expand_dims(tf.linspace(-1.0, 1.0, width), 1), perm=[1, 0]) + x_t = tl.matmul( + tl.ones(shape=tl.stack([height, 1])), + tl.transpose(a=tl.expand_dims(tl.linspace(-1.0, 1.0, width), 1), perm=[1, 0]) ) - y_t = tf.matmul(tf.expand_dims(tf.linspace(-1.0, 1.0, height), 1), tf.ones(shape=tf.stack([1, width]))) + y_t = 
tl.matmul(tl.expand_dims(tl.linspace(-1.0, 1.0, height), 1), tl.ones(shape=tl.stack([1, width]))) - x_t_flat = tf.reshape(x_t, (1, -1)) - y_t_flat = tf.reshape(y_t, (1, -1)) + x_t_flat = tl.reshape(x_t, (1, -1)) + y_t_flat = tl.reshape(y_t, (1, -1)) - ones = tf.ones_like(x_t_flat) - grid = tf.concat(axis=0, values=[x_t_flat, y_t_flat, ones]) + ones = tl.ones(shape=tl.get_tensor_shape(x_t_flat)) + grid = tl.concat(axis=0, values=[x_t_flat, y_t_flat, ones]) return grid def _transform(theta, input_dim, out_size): - num_batch = tf.shape(input=input_dim)[0] - num_channels = tf.shape(input=input_dim)[3] - theta = tf.reshape(theta, (-1, 2, 3)) - theta = tf.cast(theta, 'float32') + num_batch, _, _, num_channels = tl.get_tensor_shape(input_dim) + theta = tl.reshape(theta, (-1, 2, 3)) + theta = tl.cast(theta, 'float32') # grid of (x_t, y_t, 1), eq (1) in ref [1] out_height = out_size[0] out_width = out_size[1] grid = _meshgrid(out_height, out_width) - grid = tf.expand_dims(grid, 0) - grid = tf.reshape(grid, [-1]) - grid = tf.tile(grid, tf.stack([num_batch])) - grid = tf.reshape(grid, tf.stack([num_batch, 3, -1])) + grid = tl.expand_dims(grid, 0) + grid = tl.reshape(grid, [-1]) + grid = tl.tile(grid, tl.stack([num_batch])) + grid = tl.reshape(grid, tl.stack([num_batch, 3, -1])) # Transform A x (x_t, y_t, 1)^T -> (x_s, y_s) - T_g = tf.matmul(theta, grid) - x_s = tf.slice(T_g, [0, 0, 0], [-1, 1, -1]) - y_s = tf.slice(T_g, [0, 1, 0], [-1, 1, -1]) - x_s_flat = tf.reshape(x_s, [-1]) - y_s_flat = tf.reshape(y_s, [-1]) + T_g = tl.matmul(theta, grid) + x_s = tl.slice(T_g, [0, 0, 0], [-1, 1, -1]) + y_s = tl.slice(T_g, [0, 1, 0], [-1, 1, -1]) + x_s_flat = tl.reshape(x_s, [-1]) + y_s_flat = tl.reshape(y_s, [-1]) input_transformed = _interpolate(input_dim, x_s_flat, y_s_flat, out_size) - output = tf.reshape(input_transformed, tf.stack([num_batch, out_height, out_width, num_channels])) + output = tl.reshape(input_transformed, tl.stack([num_batch, out_height, out_width, num_channels])) return output output = _transform(theta, U, out_size) @@ -200,14 +187,14 @@ def batch_transformer(U, thetas, out_size, name='BatchSpatialTransformer2dAffine Tensor of size [batch * num_transforms, out_height, out_width, num_channels] """ - with tf.compat.v1.variable_scope(name): - num_batch, num_transforms = map(int, thetas.get_shape().as_list()[:2]) - indices = [[i] * num_transforms for i in xrange(num_batch)] - input_repeated = tf.gather(U, tf.reshape(indices, [-1])) - return transformer(input_repeated, thetas, out_size) + # with tf.compat.v1.variable_scope(name): + num_batch, num_transforms = map(int, thetas.get_shape().as_list()[:2]) + indices = [[i] * num_transforms for i in xrange(num_batch)] + input_repeated = tl.gather(U, tl.reshape(indices, [-1])) + return transformer(input_repeated, thetas, out_size) -class SpatialTransformer2dAffine(Layer): +class SpatialTransformer2dAffine(Module): """The :class:`SpatialTransformer2dAffine` class is a 2D `Spatial Transformer Layer `__ for `2D Affine Transformation `__. @@ -280,14 +267,14 @@ def forward(self, inputs): n_channels is identical to that of U. """ theta_input, U = inputs - theta = tf.nn.tanh(tf.matmul(theta_input, self.W) + self.b) + theta = tl.tanh(tl.matmul(theta_input, self.W) + self.b) outputs = transformer(U, theta, out_size=self.out_size) # automatically set batch_size and channels # e.g. [?, 40, 40, ?] 
--> [64, 40, 40, 1] or [64, 20, 20, 4] batch_size = theta_input.shape[0] n_channels = U.shape[-1] if self.data_format == 'channel_last': - outputs = tf.reshape(outputs, shape=[batch_size, self.out_size[0], self.out_size[1], n_channels]) + outputs = tl.reshape(outputs, shape=[batch_size, self.out_size[0], self.out_size[1], n_channels]) else: raise Exception("unimplement data_format {}".format(self.data_format)) return outputs diff --git a/tensorlayer/layers/stack.py b/tensorlayer/layers/stack.py index 4e37d1f9a..6c84291d7 100644 --- a/tensorlayer/layers/stack.py +++ b/tensorlayer/layers/stack.py @@ -1,11 +1,9 @@ #! /usr/bin/python # -*- coding: utf-8 -*- -import tensorflow as tf - +import tensorlayer as tl from tensorlayer import logging -from tensorlayer.decorators import deprecated_alias -from tensorlayer.layers.core import Layer +from tensorlayer.layers.core import Module __all__ = [ 'Stack', @@ -13,7 +11,7 @@ ] -class Stack(Layer): +class Stack(Module): """ The :class:`Stack` class is a layer for stacking a list of rank-R tensors into one rank-(R+1) tensor, see `tf.stack() `__. @@ -26,14 +24,13 @@ class Stack(Layer): Examples --------- - >>> import tensorflow as tf >>> import tensorlayer as tl - >>> ni = tl.layers.Input([None, 784], name='input') + >>> ni = tl.layers.Input([10, 784], name='input') >>> net1 = tl.layers.Dense(10, name='dense1')(ni) >>> net2 = tl.layers.Dense(10, name='dense2')(ni) >>> net3 = tl.layers.Dense(10, name='dense3')(ni) >>> net = tl.layers.Stack(axis=1, name='stack')([net1, net2, net3]) - (?, 3, 10) + (10, 3, 10) """ @@ -57,14 +54,14 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - pass + self.stack = tl.ops.Stack(axis=self.axis) def forward(self, inputs): - outputs = tf.stack(inputs, axis=self.axis, name=self.name) + outputs = self.stack(inputs) return outputs -class UnStack(Layer): +class UnStack(Module): """ The :class:`UnStack` class is a layer for unstacking the given dimension of a rank-R tensor into rank-(R-1) tensors., see `tf.unstack() `__. @@ -84,9 +81,9 @@ class UnStack(Layer): Examples -------- - >>> ni = Input([4, 10], name='input') - >>> nn = Dense(n_units=5)(ni) - >>> nn = UnStack(axis=1)(nn) # unstack in channel axis + >>> ni = tl.layers.Input([4, 10], name='input') + >>> nn = tl.layers.Dense(n_units=5)(ni) + >>> nn = tl.layers.UnStack(axis=1)(nn) # unstack in channel axis >>> len(nn) # 5 >>> nn[0].shape # (4,) @@ -109,8 +106,8 @@ def __repr__(self): return s.format(classname=self.__class__.__name__, **self.__dict__) def build(self, inputs_shape): - pass + self.unstack = tl.ops.Unstack(num=self.num, axis=self.axis) def forward(self, inputs): - outputs = tf.unstack(inputs, num=self.num, axis=self.axis, name=self.name) + outputs = self.unstack(inputs) return outputs diff --git a/tensorlayer/layers/utils.py b/tensorlayer/layers/utils.py index e5dd154b1..18888c25d 100644 --- a/tensorlayer/layers/utils.py +++ b/tensorlayer/layers/utils.py @@ -1,13 +1,13 @@ #! 
/usr/bin/python # -*- coding: utf-8 -*- -import numpy as np import tensorflow as tf from tensorflow.python.ops.rnn_cell import LSTMStateTuple import tensorlayer as tl from tensorlayer import logging from tensorlayer.decorators import deprecated, deprecated_alias +from tensorlayer.backend.ops.load_backend import BACKEND __all__ = [ 'cabs', @@ -74,9 +74,9 @@ def flatten_reshape(variable, name='flatten'): """ dim = 1 - for d in variable.get_shape()[1:].as_list(): + for d in tl.get_tensor_shape(variable)[1:]: # variable.get_shape()[1:].as_list(): dim *= d - return tf.reshape(variable, shape=[-1, dim], name=name) + return tl.reshape(variable, shape=[-1, dim]) def get_collection_trainable(name=''): @@ -129,22 +129,23 @@ def get_layers_with_name(net, name="", verbose=False): return layers -def get_variable_with_initializer(scope_name, var_name, shape, init=tl.initializers.random_normal()): +def get_variable_with_initializer(scope_name, var_name, shape, init=tl.initializers.random_normal(), trainable=True): # FIXME: documentation needed - # if tf.executing_eagerly(): var_name = scope_name + "/" + var_name - # if init_args is not None and len(init_args) != 0: - # initial_value = init(**init_args)(shape=shape) - # else: - # initial_value = init()(shape=shape) - # var = tf.Variable(initial_value=initial_value, name=var_name) # FIXME: not sure whether this is correct? + # TODO mindspore weights shape : [out_channel, in_channel, kernel_h, kernel_w] + if BACKEND == 'mindspore': + if len(shape) == 2: + pass + else: + shape = shape[::-1] + initial_value = init(shape=shape) - var = tf.Variable(initial_value=initial_value, name=var_name) #, **init_args) - # else: - # with tf.variable_scope(scope_name, reuse=tf.AUTO_REUSE): - # var = tf.get_variable(name=var_name, initializer=tf.zeros(shape), trainable=train) + if BACKEND == 'dragon': + return initial_value + + var = tl.Variable(initial_value=initial_value, name=var_name, trainable=trainable) return var @@ -426,3 +427,16 @@ def _compute_threshold(x): threshold = tf.math.divide(x_sum, tf.cast(tf.size(input=x), tf.float32), name=None) threshold = tf.multiply(0.7, threshold, name=None) return threshold + + +def mean_var_with_update(update_moving_mean, update_moving_variance, mean, variance): + with tf.control_dependencies([update_moving_mean, update_moving_variance]): + return tf.identity(mean), tf.identity(variance) + + +def w_fold(w, gama, var, epsilon): + return tf.compat.v1.div(tf.multiply(gama, w), tf.sqrt(var + epsilon)) + + +def bias_fold(beta, gama, mean, var, epsilon): + return tf.subtract(beta, tf.compat.v1.div(tf.multiply(gama, mean), tf.sqrt(var + epsilon))) diff --git a/tensorlayer/logging/__init__.py b/tensorlayer/logging/__init__.py index 274eef069..e3c0dac3d 100644 --- a/tensorlayer/logging/__init__.py +++ b/tensorlayer/logging/__init__.py @@ -5,7 +5,7 @@ various benchmarks and domain-specific problems. In addition, we also support transparent access to native TensorFlow parameters. For example, we provide not only layers for local response normalization, but also -layers that allow user to apply ``tf.nn.lrn`` on ``network.outputs``. +layers that allow user to apply ``tf.ops.lrn`` on ``network.outputs``. More functions can be found in `TensorFlow API `__. """ diff --git a/tensorlayer/logging/contrib/__init__.py b/tensorlayer/logging/contrib/__init__.py index dfb2f18f6..69a3ccb47 100644 --- a/tensorlayer/logging/contrib/__init__.py +++ b/tensorlayer/logging/contrib/__init__.py @@ -5,7 +5,7 @@ various benchmarks and domain-specific problems. 
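The ``w_fold`` and ``bias_fold`` helpers added to ``utils.py`` above implement standard batch-norm folding: ``w' = gamma * w / sqrt(var + eps)`` and ``b' = beta - gamma * mean / sqrt(var + eps)``, which lets a convolution followed by batch normalization collapse into one affine op at inference time. A numpy check that the folded form matches conv-then-BN on a single output unit:

```python
import numpy as np

eps = 1e-5
w = np.random.normal(size=(3,))      # one output unit's weights
gamma, beta = 1.3, 0.2               # batch-norm scale and shift
mean, var = 0.7, 2.1                 # batch-norm moving statistics

x = np.random.normal(size=(3,))
y_bn = gamma * (x @ w - mean) / np.sqrt(var + eps) + beta  # conv -> batch norm

w_fold = gamma * w / np.sqrt(var + eps)                    # folded weights
b_fold = beta - gamma * mean / np.sqrt(var + eps)          # folded bias
y_fold = x @ w_fold + b_fold                               # single affine op

assert np.allclose(y_bn, y_fold)
```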
In addition, we also support transparent access to native TensorFlow parameters. For example, we provide not only layers for local response normalization, but also -layers that allow user to apply ``tf.nn.lrn`` on ``network.outputs``. +layers that allow user to apply ``tf.ops.lrn`` on ``network.outputs``. More functions can be found in `TensorFlow API `__. """ diff --git a/tensorlayer/metric/__init__.py b/tensorlayer/metric/__init__.py new file mode 100644 index 000000000..75a03a345 --- /dev/null +++ b/tensorlayer/metric/__init__.py @@ -0,0 +1,13 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from tensorlayer.backend import BACKEND + +if BACKEND == 'tensorflow': + from .tensorflow_metric import * +elif BACKEND == 'mindspore': + from .mindspore_metric import * +elif BACKEND == 'paddle': + from .paddle_metric import * +else: + raise NotImplementedError("This backend is not supported") diff --git a/tensorlayer/metric/mindspore_metric.py b/tensorlayer/metric/mindspore_metric.py new file mode 100644 index 000000000..710ed4e88 --- /dev/null +++ b/tensorlayer/metric/mindspore_metric.py @@ -0,0 +1,87 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +import mindspore.nn as nn +from mindspore.nn.metrics.metric import Metric +__all__ = [ + 'Accuracy', + 'Auc', + 'Precision', + 'Recall', +] + + +class Accuracy(object): + + def __init__(self, topk=1): + + self.accuracy = nn.TopKCategoricalAccuracy(k=topk) + + def update(self, y_pred, y_true): + + self.accuracy.update(y_pred, y_true) + + def result(self): + + return self.accuracy.eval() + + def reset(self): + + self.accuracy.clear() + + +class Auc(object): + + def __init__(self): + + pass + + def update(self, y_pred, y_true): + + raise Exception('Auc metric function not implemented') + + def result(self): + + pass + + def reset(self): + + pass + + +class Precision(object): + + def __init__(self): + + self.precision = nn.Precision(eval_type="classification") + + def update(self, y_pred, y_true): + + self.precision.update(y_pred, y_true) + + def result(self): + + return self.precision.eval() + + def reset(self): + + self.precision.clear() + + +class Recall(object): + + def __init__(self): + + self.recall = nn.Recall(eval_type="classification") + + def update(self, y_pred, y_true): + + self.recall.update(y_pred, y_true) + + def result(self): + + return self.recall.eval() + + def reset(self): + + self.recall.clear() diff --git a/tensorlayer/metric/paddle_metric.py b/tensorlayer/metric/paddle_metric.py new file mode 100644 index 000000000..b6b3f3257 --- /dev/null +++ b/tensorlayer/metric/paddle_metric.py @@ -0,0 +1,89 @@ +#! 
/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import paddle
+from paddle.metric.metrics import Metric
+
+__all__ = [
+    'Accuracy',
+    'Auc',
+    'Precision',
+    'Recall',
+]
+
+
+class Accuracy(object):
+
+    def __init__(
+        self,
+        topk=1,
+    ):
+
+        self.topk = topk
+        self.accuracy = paddle.metric.Accuracy(topk=(self.topk, ))
+
+    def update(self, y_pred, y_true):
+
+        self.accuracy.update(self.accuracy.compute(y_pred, y_true))
+
+    def result(self):
+
+        return self.accuracy.accumulate()
+
+    def reset(self):
+
+        self.accuracy.reset()
+
+
+class Auc(object):
+
+    def __init__(self, curve='ROC', num_thresholds=4095):
+
+        self.auc = paddle.metric.Auc(curve=curve, num_thresholds=num_thresholds)
+
+    def update(self, y_pred, y_true):
+
+        self.auc.update(y_pred, y_true)
+
+    def result(self):
+
+        return self.auc.accumulate()
+
+    def reset(self):
+
+        self.auc.reset()
+
+
+class Precision(object):
+
+    def __init__(self):
+
+        self.precision = paddle.metric.Precision()
+
+    def update(self, y_pred, y_true):
+
+        self.precision.update(y_pred, y_true)
+
+    def result(self):
+
+        return self.precision.accumulate()
+
+    def reset(self):
+
+        self.precision.reset()
+
+
+class Recall(object):
+
+    def __init__(self):
+
+        self.recall = paddle.metric.Recall()
+
+    def update(self, y_pred, y_true):
+        self.recall.update(y_pred, y_true)
+
+    def result(self):
+        return self.recall.accumulate()
+
+    def reset(self):
+        self.recall.reset()
diff --git a/tensorlayer/metric/tensorflow_metric.py b/tensorlayer/metric/tensorflow_metric.py
new file mode 100644
index 000000000..d7398ffcc
--- /dev/null
+++ b/tensorlayer/metric/tensorflow_metric.py
@@ -0,0 +1,98 @@
+#! /usr/bin/python
+# -*- coding: utf-8 -*-
+
+import tensorflow as tf
+from tensorflow.keras.metrics import Metric
+
+__all__ = [
+    'Accuracy',
+    'Auc',
+    'Precision',
+    'Recall',
+]
+
+
+class Accuracy(object):
+
+    def __init__(self, topk=1):
+        self.topk = topk
+        if topk == 1:
+            self.accuracy = tf.keras.metrics.Accuracy()
+        else:
+            self.accuracy = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=topk)
+
+    def update(self, y_pred, y_true):
+
+        if self.topk == 1:
+            y_pred = tf.argmax(y_pred, axis=1)
+            self.accuracy.update_state(y_true, y_pred)
+        else:
+            self.accuracy.update_state(y_true, y_pred)
+
+    def result(self):
+
+        return self.accuracy.result()
+
+    def reset(self):
+
+        self.accuracy.reset_states()
+
+
+class Auc(object):
+
+    def __init__(
+        self,
+        curve='ROC',
+        num_thresholds=200,
+    ):
+        self.auc = tf.keras.metrics.AUC(num_thresholds=num_thresholds, curve=curve)
+
+    def update(self, y_pred, y_true):
+
+        self.auc.update_state(y_true, y_pred)
+
+    def result(self):
+
+        return self.auc.result()
+
+    def reset(self):
+
+        self.auc.reset_states()
+
+
+class Precision(object):
+
+    def __init__(self):
+
+        self.precision = tf.keras.metrics.Precision()
+
+    def update(self, y_pred, y_true):
+
+        self.precision.update_state(y_true, y_pred)
+
+    def result(self):
+
+        return self.precision.result()
+
+    def reset(self):
+
+        self.precision.reset_states()
+
+
+class Recall(object):
+
+    def __init__(self):
+
+        self.recall = tf.keras.metrics.Recall()
+
+    def update(self, y_pred, y_true):
+
+        self.recall.update_state(y_true, y_pred)
+
+    def result(self):
+
+        return self.recall.result()
+
+    def reset(self):
+
+        self.recall.reset_states()
diff --git a/tensorlayer/models/__init__.py b/tensorlayer/models/__init__.py
index 7e54c8a4b..e0b60ca14 100644
--- a/tensorlayer/models/__init__.py
+++ b/tensorlayer/models/__init__.py
@@ -3,10 +3,10 @@
 #
 """A collections of pre-defined well known models."""

-from .core import *
diff --git a/tensorlayer/models/__init__.py b/tensorlayer/models/__init__.py index 7e54c8a4b..e0b60ca14 100644 --- a/tensorlayer/models/__init__.py +++ b/tensorlayer/models/__init__.py @@ -3,10 +3,10 @@ # """A collections of pre-defined well known models.""" -from .core import * -from .mobilenetv1 import MobileNetV1 -from .resnet import ResNet50 -from .seq2seq import Seq2seq -from .seq2seq_with_attention import Seq2seqLuongAttention -from .squeezenetv1 import SqueezeNetV1 -from .vgg import * +# from .resnet import ResNet50 +# from .mobilenetv1 import MobileNetV1 +# from .squeezenetv1 import SqueezeNetV1 +# from .vgg import * +# from .seq2seq import Seq2seq +# from .seq2seq_with_attention import Seq2seqLuongAttention +from .core import Model diff --git a/tensorlayer/models/core.py b/tensorlayer/models/core.py index 514db708f..1d016cc25 100644 --- a/tensorlayer/models/core.py +++ b/tensorlayer/models/core.py @@ -1,852 +1,140 @@ -import os -from abc import abstractmethod -from queue import Queue - -import tensorflow as tf -from tensorflow.python.framework import ops as tf_ops +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +from collections.abc import Iterable +from tensorlayer.layers.core.common import _save_weights, _load_weights import tensorlayer as tl -from tensorlayer import logging -from tensorlayer.files import utils -from tensorlayer.layers import Layer, ModelLayer - -__all__ = [ - 'Model', -] +from tensorlayer.layers.core import Module +import numpy as np +import time -_global_model_name_dict = {} # TODO: better implementation? -_global_model_name_set = set() +if tl.BACKEND == 'tensorflow': + import tensorflow as tf +if tl.BACKEND == 'mindspore': + from mindspore.ops import composite + from mindspore.ops import operations as P + from mindspore.common import ParameterTuple +if tl.BACKEND == 'paddle': + import paddle as pd -class Model(object): - """The :class:`Model` class represents a neural network. +class Model: + """ + High-Level API for Training or Testing. - It should be subclassed when implementing a dynamic model, - where 'forward' method must be overwritten. - Otherwise, please specify 'inputs' tensor(s) and 'outputs' tensor(s) - to create a static model. In that case, 'inputs' tensors should come - from tl.layers.Input(). + `Model` groups layers into an object with training and inference features. Parameters ----------- - inputs : a Layer or list of Layer - The input(s) to the model. - outputs : a Layer or list of Layer - The output(s) to the model. - name : None or str - The name of the model. + ---------- + network : tensorlayer model + The training or testing network. + loss_fn : function + Objective function. + optimizer : class + Optimizer for updating the weights. + metrics : class + Dict or set of metrics to be evaluated by the model during training and testing. Methods --------- - __init__(self, inputs=None, outputs=None, name=None) - Initializing the Model. - inputs() - Get input tensors to this network (only avaiable for static model). - outputs() - Get output tensors to this network (only avaiable for static model). - __call__(inputs, is_train=None, **kwargs) - Forward input tensors through this network. - all_layers() - Get all layer objects of this network in a list of layers. - weights() - Get the weights of this network in a list of tensors. - train() - Set this network in training mode. (affect layers e.g. Dropout, BatchNorm). + train() + Model training. eval() - Set this network in evaluation mode. - as_layer() - Set this network as a ModelLayer so that it can be integrated into another Model. - release_memory() - Release the memory that was taken up by tensors which are maintained by this network. - save_weights(self, filepath, format='hdf5') - Save the weights of this network in a given format.
- load_weights(self, filepath, format=None, in_order=True, skip=False) - Load weights into this network from a specified file. - save(self, filepath, save_weights=True) - Save the network with/without weights. - load(filepath, save_weights=True) - Load the network with/without weights. - - Examples - --------- - >>> import tensorflow as tf - >>> import numpy as np - >>> from tensorlayer.layers import Input, Dense, Dropout - >>> from tensorlayer.models import Model - - Define static model - - >>> class CustomModel(Model): - >>> def __init__(self): - >>> super(CustomModel, self).__init__() - >>> self.dense1 = Dense(n_units=800, act=tf.nn.relu, in_channels=784) - >>> self.dropout1 = Dropout(keep=0.8) - >>> self.dense2 = Dense(n_units=10, in_channels=800) - >>> def forward(self, x): - >>> z = self.dense1(x) - >>> z = self.dropout1(z) - >>> z = self.dense2(z) - >>> return z - >>> M_dynamic = CustomModel() - - Define static model - - >>> ni = Input([None, 784]) - >>> nn = Dense(n_units=800, act=tf.nn.relu)(ni) - >>> nn = Dropout(keep=0.8)(nn) - >>> nn = Dense(n_units=10, act=tf.nn.relu)(nn) - >>> M_static = Model(inputs=ni, outputs=nn, name="mlp") - - Get network information - - >>> print(M_static) - ... Model( - ... (_inputlayer): Input(shape=[None, 784], name='_inputlayer') - ... (dense): Dense(n_units=800, relu, in_channels='784', name='dense') - ... (dropout): Dropout(keep=0.8, name='dropout') - ... (dense_1): Dense(n_units=10, relu, in_channels='800', name='dense_1') - ... ) - - Forwarding through this network - - >>> data = np.random.normal(size=[16, 784]).astype(np.float32) - >>> outputs_d = M_dynamic(data) - >>> outputs_s = M_static(data) - - Save and load weights - - >>> M_static.save_weights('./model_weights.h5') - >>> M_static.load_weights('./model_weights.h5') - - Save and load the model - - >>> M_static.save('./model.h5') - >>> M = Model.load('./model.h5') - - Convert model to layer - - >>> M_layer = M_static.as_layer() + Model prediction. + save_weights() + Input file_path, save model weights into a file of given format. + Use load_weights() to restore. + load_weights() + Load model weights from a given file, which should be previously saved by save_weights(). + + Examples: + >>> import tensorlayer as tl + >>> class Net(Module): + >>> def __init__(self): + >>> super(Net, self).__init__() + >>> self.conv = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), in_channels=5, name='conv2d') + >>> self.bn = tl.layers.BatchNorm2d(num_features=32, act=tl.ReLU) + >>> self.flatten = tl.layers.Flatten() + >>> self.fc = tl.layers.Dense(n_units=12, in_channels=32*224*224) # padding=0 + >>> + >>> def construct(self, x): + >>> x = self.conv(x) + >>> x = self.bn(x) + >>> x = self.flatten(x) + >>> out = self.fc(x) + >>> return out + >>> + >>> net = Net() + >>> loss = tl.cost.softmax_cross_entropy_with_logits + >>> optim = tl.optimizers.Momentum(params=net.trainable_weights, learning_rate=0.1, momentum=0.9) + >>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None) + >>> dataset = get_dataset() + >>> model.train(2, dataset) """ - @property - def inputs(self): - return self._inputs - - @property - def outputs(self): - return self._outputs - - def __init__(self, inputs=None, outputs=None, name=None): - """ - Initializing the Model. 
- - Parameters - ---------- - inputs : Tensor or list of tensors - Input tensor(s), which must come from tl.layers.Input() - outputs : Tensor or list of tensors - Output tensor(s), which must be the output(s) of some TL layers - name : str or None - Name for this network - """ - # Auto naming if the name is not given - self._NameNone = False - global _global_model_name_dict - global _global_model_name_set - if name is None: - self._NameNone = True - prefix = self.__class__.__name__.lower() - if _global_model_name_dict.get(prefix) is not None: - _global_model_name_dict[prefix] += 1 - name = prefix + '_' + str(_global_model_name_dict[prefix]) - else: - _global_model_name_dict[prefix] = 0 - name = prefix - while name in _global_model_name_set: - _global_model_name_dict[prefix] += 1 - name = prefix + '_' + str(_global_model_name_dict[prefix]) - _global_model_name_set.add(name) - else: - if name in _global_model_name_set: - raise ValueError( - 'Model name \'%s\' has already been used by another model. Please change the model name.' % name - ) - _global_model_name_set.add(name) - _global_model_name_dict[name] = 0 - - # Model properties - self.name = name - - # Model state: train or test - self.is_train = None - - # Model weights - self._all_weights = None - self._trainable_weights = None - self._nontrainable_weights = None - - # Model args of all layers, ordered by all_layers - self._config = None - - # Model inputs and outputs - # TODO: note that in dynamic network, inputs and outputs are both None, may cause problem, test needed - self._inputs = inputs - self._outputs = outputs - - # Model converted into a Layer - self._model_layer = None - - # Layer Node status - self._nodes_fixed = False - - # Model layers - self._all_layers = None - - if inputs is None and outputs is None: - pass - - else: - # check type of inputs and outputs - check_order = ['inputs', 'outputs'] - for co, check_argu in enumerate([inputs, outputs]): - if isinstance(check_argu, - (tf.Tensor, tf.SparseTensor, tf.Variable)) or tf_ops.is_dense_tensor_like(check_argu): - pass - elif isinstance(check_argu, list): - if len(check_argu) == 0: - raise ValueError( - "The argument `%s` is detected as an empty list. " % check_order[co] + - "It should be either Tensor or a list of Tensor." - ) - for idx in range(len(check_argu)): - if not isinstance(check_argu[idx], - (tf.Tensor, tf.SparseTensor, tf.Variable)) or not tf_ops.is_dense_tensor_like( - check_argu[idx]): - raise TypeError( - "The argument `%s` should be either Tensor or a list of Tensor " % (check_order[co]) + - "but the %s[%d] is detected as %s" % (check_order[co], idx, type(check_argu[idx])) - ) - else: - raise TypeError( - "The argument `%s` should be either Tensor or a list of Tensor but received %s" % - (check_order[co], type(check_argu)) - ) - - if not _check_tl_layer_tensors(inputs): - raise TypeError( - "The argument `inputs` should be either Tensor or a list of Tensor " - "that come from TensorLayer's Input layer: tl.layers.Input(shape). " - ) - if not _check_tl_layer_tensors(outputs): - raise TypeError( - "The argument `outputs` should be either Tensor or a list of Tensor " - "that is/are outputs from some TensorLayer's layers, e.g. tl.layers.Dense, tl.layers.Conv2d." - ) - - # build network graph - self._node_by_depth, self._all_layers = self._construct_graph() - - self._fix_nodes_for_layers() - - def __call__(self, inputs, is_train=None, **kwargs): - """Forward input tensors through this network by calling. 
- - Parameters - ---------- - inputs : Tensor or list of Tensors, numpy.ndarray of list of numpy.ndarray - Inputs for network forwarding - is_train : boolean - Network's mode for this time forwarding. If 'is_train' == True, this network is set as training mode. - If 'is_train' == False, this network is set as evaluation mode - kwargs : - For other keyword-only arguments. - - """ - - self._check_mode(is_train) - - # FIXME: this may cause inefficiency, this is used to check if every layer is built - self.all_layers - - # fix LayerNodes when first calling - if self._nodes_fixed is False: - self._fix_nodes_for_layers() - - # set training / inference mode if necessary - if is_train is not None: - self._set_mode_for_layers(is_train) - - # if self._input is a list, then it must be a static network - if isinstance(self._inputs, list): - if not isinstance(inputs, list): - raise ValueError("The argument `inputs` should be a list of values but detected as %s." % type(inputs)) - elif len(inputs) != len(self._inputs): - raise ValueError( - "The argument `inputs` should be a list with len=%d but detected as len=%d." % - (len(self._inputs), len(inputs)) - ) - - # convert inputs to tensor if it is originally not - # FIXME: not sure convert_to_tensor here or ask user to do it - if isinstance(inputs, list): - for idx in range(len(inputs)): - inputs[idx] = tf.convert_to_tensor(inputs[idx]) - else: - inputs = tf.convert_to_tensor(inputs) - - return self.forward(inputs, **kwargs) - - @abstractmethod - def forward(self, *inputs, **kwargs): - """Network forwarding given input tensors - - Parameters - ---------- - inputs : Tensor or list of Tensors - input tensor(s) - kwargs : - For other keyword-only arguments. - - Returns - ------- - output tensor(s) : Tensor or list of Tensor(s) - - """ - # FIXME: currently using self._outputs to judge static network or dynamic network - if self._outputs is None: - raise ValueError( - "Outputs not defined. Please define inputs and outputs when the model is created. Or overwrite forward() function." 
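To make the control flow of the rewritten class concrete before its implementation below, here is a hedged end-to-end sketch. The cost, optimizer and metric names follow the docstring and examples elsewhere in this diff; the `MLP` network, the import path for `Module`/`Dense`, and the random stand-in dataset are illustrative assumptions, and `forward` is used as the forward hook, as `GradWrap` further down does:

```python
import numpy as np
import tensorlayer as tl
from tensorlayer.layers import Module, Dense  # assumed export path

class MLP(Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.dense1 = Dense(n_units=64, act=tl.ReLU, in_channels=784)
        self.dense2 = Dense(n_units=10, in_channels=64)

    def forward(self, x):
        return self.dense2(self.dense1(x))

# Any iterable of (X_batch, y_batch) pairs satisfies train()'s check;
# random batches stand in for a real data pipeline here.
train_data = [
    (np.random.randn(32, 784).astype(np.float32),
     np.random.randint(0, 10, size=(32,)).astype(np.int64))
    for _ in range(10)
]

model = tl.models.Model(
    network=MLP(),
    loss_fn=tl.cost.softmax_cross_entropy_with_logits,
    optimizer=tl.optimizers.Adam(learning_rate=0.001),
    metrics=tl.metric.Accuracy(),
)
model.train(n_epoch=2, train_dataset=train_data, print_freq=1)  # dispatches on tl.BACKEND
model.save_weights('./mlp.h5')   # delegates to _save_weights
model.load_weights('./mlp.h5')
```

Because `train()` only checks that `train_dataset` is an Iterable, any generator or dataflow pipeline yielding `(X_batch, y_batch)` pairs can be substituted.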
+ def __init__(self, network, loss_fn=None, optimizer=None, metrics=None, **kwargs): + self.network = network + self.loss_fn = loss_fn + self.optimizer = optimizer + self.metrics = metrics + self.all_weights = network.all_weights + self.train_weights = self.network.trainable_weights + + def train(self, n_epoch, train_dataset=None, test_dataset=False, print_train_batch=False, print_freq=5): + if not isinstance(train_dataset, Iterable): + raise TypeError("Expected `train_dataset` to be an Iterable, but got {}.".format(type(train_dataset))) + + if tl.BACKEND == 'tensorflow': + self.tf_train( + n_epoch=n_epoch, train_dataset=train_dataset, network=self.network, loss_fn=self.loss_fn, + train_weights=self.train_weights, optimizer=self.optimizer, metrics=self.metrics, + print_train_batch=print_train_batch, print_freq=print_freq, test_dataset=test_dataset + ) + elif tl.BACKEND == 'mindspore': + self.ms_train( + n_epoch=n_epoch, train_dataset=train_dataset, network=self.network, loss_fn=self.loss_fn, + train_weights=self.train_weights, optimizer=self.optimizer, metrics=self.metrics, + print_train_batch=print_train_batch, print_freq=print_freq, test_dataset=test_dataset + ) + elif tl.BACKEND == 'paddle': + self.pd_train( + n_epoch=n_epoch, train_dataset=train_dataset, network=self.network, loss_fn=self.loss_fn, + train_weights=self.train_weights, optimizer=self.optimizer, metrics=self.metrics, + print_train_batch=print_train_batch, print_freq=print_freq, test_dataset=test_dataset ) - memory = dict() - - # get each layer's output by going through the graph in depth order - for depth, nodes in enumerate(self._node_by_depth): - if depth == 0: - if isinstance(self.inputs, list): - assert len(inputs[0]) == len(nodes) - for idx, node in enumerate(nodes): - memory[node.name] = node(inputs[0][idx]) - else: - memory[nodes[0].name] = nodes[0](inputs[0]) - else: - for node in nodes: - in_nodes = node.in_nodes - in_tensors_idxes = node.in_tensors_idxes - if len(in_nodes) == 1: - node_input = memory[in_nodes[0].name][in_tensors_idxes[0]] - else: - node_input = [memory[inode.name][idx] for inode, idx in zip(in_nodes, in_tensors_idxes)] - memory[node.name] = node(node_input) - - if not isinstance(self._outputs, list): - return memory[self._outputs._info[0].name][self._outputs._info[1]] - else: - return [memory[tensor._info[0].name][tensor._info[1]] for tensor in self._outputs] - - @property - def all_layers(self): - """Return all layers of this network in a list.""" - if self._all_layers is not None: - return self._all_layers - - if self._inputs is not None and self._outputs is not None: - # static model - return self._all_layers - else: - # dynamic model - self._all_layers = list() - attr_list = [attr for attr in dir(self) if attr[:2] != "__"] - attr_list.remove("all_weights") - attr_list.remove("trainable_weights") - attr_list.remove("nontrainable_weights") - attr_list.remove("_all_weights") - attr_list.remove("_trainable_weights") - attr_list.remove("_nontrainable_weights") - attr_list.remove("all_layers") - attr_list.remove("_all_layers") - attr_list.remove("n_weights") - for idx, attr in enumerate(attr_list): - try: - if isinstance(getattr(self, attr), Layer): - nowlayer = getattr(self, attr) - if not nowlayer._built: - raise AttributeError("Layer %s not built yet."
% repr(nowlayer)) - self._all_layers.append(nowlayer) - elif isinstance(getattr(self, attr), Model): - nowmodel = getattr(self, attr) - self._all_layers.append(nowmodel) - elif isinstance(getattr(self, attr), list): - self._all_layers.extend(_add_list_to_all_layers(getattr(self, attr))) - # TODO: define customised exception for TL - except AttributeError as e: - raise e - except Exception: - pass - - # check layer name uniqueness - local_layer_name_dict = set() - for layer in self._all_layers: - if layer.name in local_layer_name_dict: - raise ValueError( - 'Layer name \'%s\' has already been used by another layer. Please change the layer name.' % - layer.name - ) - else: - local_layer_name_dict.add(layer.name) - return self._all_layers - - @property - def trainable_weights(self): - """Return trainable weights of this network in a list.""" - if self._trainable_weights is not None and len(self._trainable_weights) > 0: - # self._trainable_weights already extracted, so do nothing - pass - else: - self._trainable_weights = [] - for layer in self.all_layers: - if layer.trainable_weights is not None: - self._trainable_weights.extend(layer.trainable_weights) - - return self._trainable_weights.copy() - - @property - def nontrainable_weights(self): - """Return nontrainable weights of this network in a list.""" - if self._nontrainable_weights is not None and len(self._nontrainable_weights) > 0: - # self._nontrainable_weights already extracted, so do nothing - pass - else: - self._nontrainable_weights = [] - for layer in self.all_layers: - if layer.nontrainable_weights is not None: - self._nontrainable_weights.extend(layer.nontrainable_weights) - - return self._nontrainable_weights.copy() - - @property - def all_weights(self): - """Return all weights of this network in a list.""" - if self._all_weights is not None and len(self._all_weights) > 0: - # self._all_weights already extracted, so do nothing - pass - else: - self._all_weights = [] - for layer in self.all_layers: - if layer.all_weights is not None: - self._all_weights.extend(layer.all_weights) - - return self._all_weights.copy() - - @property - def n_weights(self): - """Return the number of weights (parameters) in this network.""" - n_weights = 0 - for i, w in enumerate(self.all_weights): - n = 1 - # for s in p.eval().shape: - for s in w.get_shape(): + def eval(self, test_dataset): + self.network.set_eval() + test_loss, test_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in test_dataset: + _logits = self.network(X_batch) + test_loss += self.loss_fn(_logits, y_batch) + if self.metrics: try: - s = int(s) + test_acc += self.metrics(_logits, y_batch) except: - s = 1 - if s: - n = n * s - n_weights = n_weights + n - # print("num of weights (parameters) %d" % n_weights) - return n_weights - - @property - def config(self): - if self._config is not None and len(self._config) > 0: - return self._config - else: - # _config = [] - _config = {} - if self._NameNone is True: - _config.update({"name": None}) - else: - _config.update({"name": self.name}) - version_info = { - "tensorlayer_version": tl.__version__, - "backend": "tensorflow", - "backend_version": tf.__version__, - "training_device": "gpu", - "save_date": None, - } - _config["version_info"] = version_info - # if self.outputs is None: - # raise RuntimeError( - # "Dynamic mode does not support config yet." 
- # ) - model_architecture = [] - for layer in self.all_layers: - model_architecture.append(layer.config) - _config["model_architecture"] = model_architecture - if self.inputs is not None: - if not isinstance(self.inputs, list): - _config.update({"inputs": self.inputs._info[0].name}) - else: - config_inputs = [] - for config_input in self.inputs: - config_inputs.append(config_input._info[0].name) - _config.update({"inputs": config_inputs}) - if self.outputs is not None: - if not isinstance(self.outputs, list): - _config.update({"outputs": self.outputs._info[0].name}) - else: - config_outputs = [] - for config_output in self.outputs: - config_outputs.append(config_output._info[0].name) - _config.update({"outputs": config_outputs}) - if self._nodes_fixed or self.outputs is None: - self._config = _config - - return _config - - def train(self): - """Set this network in training mode. After calling this method, - all layers in network are in training mode, in particular, BatchNorm, Dropout, etc. - - Examples - -------- - >>> import tensorlayer as tl - >>> net = tl.models.vgg16() - >>> net.train() - - """ - if self.is_train !=True: - self.is_train = True - self._set_mode_for_layers(True) - - def eval(self): - """Set this network in evaluation mode. After calling this method, - all layers in network are in evaluation mode, in particular, BatchNorm, Dropout, etc. - - Examples - -------- - >>> import tensorlayer as tl - >>> net = tl.models.vgg16() - >>> net.eval() - # do evaluation - - """ - if self.is_train != False: - self.is_train = False - self._set_mode_for_layers(False) - - def test(self): - """Set this network in evaluation mode.""" - self.eval() - - def infer(self): - """Set this network in evaluation mode.""" - self.eval() - - def as_layer(self): - """Return this network as a ModelLayer so that it can be integrated into another Model. - - Examples - -------- - >>> from tensorlayer.layers import Input, Dense, Dropout - >>> from tensorlayer.models import Model - >>> ni = Input([None, 784]) - >>> nn = Dense(n_units=800, act=tf.nn.relu)(ni) - >>> nn = Dropout(keep=0.8)(nn) - >>> nn = Dense(n_units=10, act=tf.nn.relu)(nn) - >>> M_hidden = Model(inputs=ni, outputs=nn, name="mlp").as_layer() - >>> nn = M_hidden(ni) # use previously constructed model as layer - >>> nn = Dropout(keep=0.8)(nn) - >>> nn = Dense(n_units=10, act=tf.nn.relu)(nn) - >>> M_full = Model(inputs=ni, outputs=nn, name="mlp") - - """ - if self._outputs is None: - raise AttributeError("Dynamic network cannot be converted to Layer.") - - if self._model_layer is None: - self._model_layer = ModelLayer(self) - - return self._model_layer - - def _check_mode(self, is_train): - """Check whether this network is in a given mode. - - Parameters - ---------- - is_train : boolean - Network's mode. True means training mode while False means evaluation mode. - - """ - # contradiction test - if is_train is None and self.is_train is None: - raise ValueError( - "Training / inference mode not defined. Argument `is_train` should be set as True / False. Otherwise please use `Model.train()` / `Model.eval()` to switch the mode." - ) - elif is_train is not None and self.is_train is not None: - if is_train == self.is_train: - logging.warning( - "Training / inference mode redefined redundantly. Please EITHER use the argument `is_train` OR `Model.train()` / `Model.eval()` to define the mode." - ) - else: - raise AttributeError( - "Training / inference mode mismatch. 
The argument `is_train` is set as %s, " % is_train + - "but the mode is currently set as %s. " % - ('Training by Model.train()' if self.is_train else 'Inference by Model.eval()') + - "Please EITHER use the argument `is_train` OR `Model.train()` / `Model.eval()` to define the mode." - ) - - def _set_mode_for_layers(self, is_train): - """Set all layers of this network to a given mode. - - Parameters - ---------- - is_train : boolean - Network's mode. True means training mode while False means evaluation mode. - - """ - for layer in self.all_layers: - if isinstance(layer, Model): - layer.is_train = is_train - layer._set_mode_for_layers(is_train) - - def _fix_nodes_for_layers(self): - """Fix each Layer's LayerNode to stop growing, see LayerNode for more.""" - for layer in self.all_layers: - layer._fix_nodes_for_layers() - self._nodes_fixed = True - - def __setattr__(self, key, value): - if isinstance(value, Layer): - if value._built is False: - raise AttributeError( - "The registered layer `{}` should be built in advance. " - "Do you forget to pass the keyword argument 'in_channels'? ".format(value.name) - ) - super().__setattr__(key, value) - - def __repr__(self): - # tmpstr = self.__class__.__name__ + '(\n' - tmpstr = self.name + '(\n' - for idx, layer in enumerate(self.all_layers): - modstr = layer.__repr__() - modstr = _addindent(modstr, 2) - tmpstr = tmpstr + ' (' + layer.name + '): ' + modstr + '\n' - tmpstr = tmpstr + ')' - return tmpstr - - ## raise Exceptions for old version codes - def print_all_layers(self): - raise Exception("please change net.print_all_layers --> print(net)") - - def count_params(self, **kwargs): - raise Exception("please change count_params --> count_weights") - - def print_params(self, **kwargs): - raise Exception("please change print_params --> print_weights") - - @property - def all_params(self): - raise Exception("please change all_params --> weights") - - @property - def all_drop(self): - raise Exception("all_drop is deprecated") - - def get_layer(self, name=None, index=None): - """Network forwarding given input tensors - - Parameters - ---------- - name : str or None - Name of the requested layer. Default None. - index : int or None - Index of the requested layer. Default None. - - Returns - ------- - layer : The requested layer - - Notes - ----- - Either a layer name or a layer index should be given. - - """ - if index is not None: - if len(self.all_layers) <= index: - raise ValueError( - 'model only has ' + str(len(self.all_layers)) + ' layers, but ' + str(index) + - '-th layer is requested.' - ) + self.metrics.update(_logits, y_batch) + test_acc += self.metrics.result() + self.metrics.reset() else: - return self.all_layers[index] - elif name is not None: - for layer in self.all_layers: - if layer.name == name: - return layer - raise ValueError('Model has no layer named ' + name + '.') - else: - raise ValueError('Either a layer name or a layer index should be given.') - - def _construct_graph(self): - """construct computation graph for static model using LayerNode object""" - all_layers = [] - node_by_depth = [] # [[node0, node1], [node2, node3], ...] 
- - input_tensors_list = self.inputs if isinstance(self.inputs, list) else [self.inputs] - - queue_node = Queue() - - # BFS to visit all nodes that should be involved in the computation graph - output_tensors_list = self.outputs if isinstance(self.outputs, list) else [self.outputs] - output_nodes = [tensor._info[0] for tensor in output_tensors_list] - - visited_node_names = set() - for out_node in output_nodes: - if out_node.visited: - continue - queue_node.put(out_node) - - while not queue_node.empty(): - cur_node = queue_node.get() - in_nodes = cur_node.in_nodes - - for node in in_nodes: - node.out_nodes.append(cur_node) - if not node.visited: - queue_node.put(node) - node.visited = True - if node.name not in visited_node_names: - visited_node_names.add(node.name) - # else have multiple layers with the same name - else: - raise ValueError( - 'Layer name \'%s\' has already been used by another layer. Please change the layer name.' - % node.layer.name - ) - - # construct the computation graph in top-sort order - cur_depth = [tensor._info[0] for tensor in input_tensors_list] - next_depth = [] - indegrees = {} - - visited_layer_names = [] - while not len(cur_depth) == 0: - node_by_depth.append(cur_depth) - for node in cur_depth: - if node.layer.name not in visited_layer_names: - all_layers.append(node.layer) - visited_layer_names.append(node.layer.name) - for out_node in node.out_nodes: - if out_node.name not in indegrees.keys(): - indegrees[out_node.name] = len(out_node.in_nodes) - indegrees[out_node.name] -= 1 - if indegrees[out_node.name] == 0: - next_depth.append(out_node) - - cur_depth = next_depth - next_depth = [] - - return node_by_depth, all_layers - - def release_memory(self): - ''' - WARNING: This function should be called with great caution. - - Release objects that MAY NOT be necessary such as layer.outputs (if in a tf.GradientTape() scope). - For each layer in the model, layer.inputs and layer.outputs will be set as None but not deleted. - - A void function. + test_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) + n_iter += 1 + print(" test loss: {}".format(test_loss / n_iter)) + print(" test acc: {}".format(test_acc / n_iter)) - Examples - -------- - >>> import tensorlayer as tl - >>> vgg = tl.models.vgg16() - ... # training preparation - ... # ... - ... # back propagation - >>> with tf.GradientTape() as tape: - >>> _logits = vgg(x_batch) - >>> ## compute loss and update model - >>> _loss = tl.cost.cross_entropy(_logits, y_batch, name='train_loss') - >>> ## release unnecessary objects (layer.inputs, layer.outputs) - >>> ## this function should be called with great caution - >>> ## within the scope of tf.GradientTape(), using this function should be fine - >>> vgg.release_memory() - - ''' - for layer in self.all_layers: - layer._release_memory() - - def save(self, filepath, save_weights=True, customized_data=None): - """ - Save model into a given file. - This function save can save both the architecture of neural networks and weights (optional). - WARNING: If the model contains Lambda / ElementwiseLambda layer, please check the documentation of Lambda / ElementwiseLambda layer and find out the cases that have / have not been supported by Model.save(). - - Parameters - ---------- - filepath : str - Filename into which the model will be saved. - save_weights : bool - Whether to save model weights. - customized_data : dict - The user customized meta data. 
- - Examples - -------- - >>> net = tl.models.vgg16() - >>> net.save('./model.h5', save_weights=True) - >>> new_net = Model.load('./model.h5', load_weights=True) - - """ - # TODO: support saving LambdaLayer that includes parametric self defined function with outside variables - if self.outputs is None: - raise RuntimeError( - "Model save() not support dynamic mode yet.\nHint: you can use Model save_weights() to save the weights in dynamic mode." - ) - utils.save_hdf5_graph( - network=self, filepath=filepath, save_weights=save_weights, customized_data=customized_data - ) - - @staticmethod - def load(filepath, load_weights=True): - """ - Load model from a given file, which should be previously saved by Model.save(). - This function load can load both the architecture of neural networks and weights (optional, and needs to be saved in Model.save()). - When a model is loaded by this function load, there is no need to reimplement or declare the architecture of the model explicitly in code. - WARNING: If the model contains Lambda / ElementwiseLambda layer, please check the documentation of Lambda / ElementwiseLambda layer and find out the cases that have / have not been supported by Model.load(). - - Parameters - ---------- - filepath : str - Filename from which the model will be loaded. - load_weights : bool - Whether to load model weights. - - Examples - -------- - >>> net = tl.models.vgg16() - >>> net.save('./model.h5', save_weights=True) - >>> new_net = Model.load('./model.h5', load_weights=True) - """ - # TODO: support loading LambdaLayer that includes parametric self defined function with outside variables - M = utils.load_hdf5_graph(filepath=filepath, load_weights=load_weights) - return M - - def save_weights(self, filepath, format=None): - """Input filepath, save model weights into a file of given format. + def save_weights(self, file_path, format=None): + """Input file_path, save model weights into a file of given format. Use self.load_weights() to restore. Parameters ---------- - filepath : str + file_path : str Filename to which the model weights will be saved. format : str or None Saved file format. Value should be None, 'hdf5', 'npz', 'npz_dict' or 'ckpt'. Other format is not supported now. - 1) If this is set to None, then the postfix of filepath will be used to decide saved format. + 1) If this is set to None, then the postfix of file_path will be used to decide saved format. If the postfix is not in ['h5', 'hdf5', 'npz', 'ckpt'], then file will be saved in hdf5 format by default. 2) 'hdf5' will save model weights name in a list and each layer has its weights stored in a group of the hdf5 file. @@ -859,52 +147,31 @@ def save_weights(self, filepath, format=None): Examples -------- 1) Save model weights in hdf5 format by default. - >>> net = tl.models.vgg16() - >>> net.save_weights('./model.h5') + >>> net = vgg16() + >>> optimizer = tl.optimizers.Adam(learning_rate=0.001) + >>> metric = tl.metric.Accuracy() + >>> model = tl.models.Model(network=net, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer, metrics=metric) + >>> model.save_weights('./model.h5') ... 
- >>> net.load_weights('./model.h5') + >>> model.load_weights('./model.h5') 2) Save model weights in npz/npz_dict format - >>> net = tl.models.vgg16() - >>> net.save_weights('./model.npz') - >>> net.save_weights('./model.npz', format='npz_dict') + >>> model.save_weights('./model.npz') + >>> model.save_weights('./model.npz', format='npz_dict') """ - if self.all_weights is None or len(self.all_weights) == 0: - logging.warning("Model contains no weights or layers haven't been built, nothing will be saved") - return - - if format is None: - postfix = filepath.split('.')[-1] - if postfix in ['h5', 'hdf5', 'npz', 'ckpt']: - format = postfix - else: - format = 'hdf5' - - if format == 'hdf5' or format == 'h5': - utils.save_weights_to_hdf5(filepath, self) - elif format == 'npz': - utils.save_npz(self.all_weights, filepath) - elif format == 'npz_dict': - utils.save_npz_dict(self.all_weights, filepath) - elif format == 'ckpt': - # TODO: enable this when tf save ckpt is enabled - raise NotImplementedError("ckpt load/save is not supported now.") - else: - raise ValueError( - "Save format must be 'hdf5', 'npz', 'npz_dict' or 'ckpt'." - "Other format is not supported now." - ) - def load_weights(self, filepath, format=None, in_order=True, skip=False): + _save_weights(net=self, file_path=file_path, format=format) + + def load_weights(self, file_path, format=None, in_order=True, skip=False): """Load model weights from a given file, which should be previously saved by self.save_weights(). Parameters ---------- - filepath : str + file_path : str Filename from which the model weights will be loaded. format : str or None - If not specified (None), the postfix of the filepath will be used to decide its format. If specified, + If not specified (None), the postfix of the file_path will be used to decide its format. If specified, value should be 'hdf5', 'npz', 'npz_dict' or 'ckpt'. Other format is not supported now. In addition, it should be the same format when you saved the file using self.save_weights(). Default is None. @@ -925,15 +192,18 @@ def load_weights(self, filepath, format=None, in_order=True, skip=False): Examples -------- 1) load model from a hdf5 file. - >>> net = tl.models.vgg16() - >>> net.load_weights('./model_graph.h5', in_order=False, skip=True) # load weights by name, skipping mismatch - >>> net.load_weights('./model_eager.h5') # load sequentially + >>> net = vgg16() + >>> optimizer = tl.optimizers.Adam(learning_rate=0.001) + >>> metric = tl.metric.Accuracy() + >>> model = tl.models.Model(network=net, loss_fn=tl.cost.softmax_cross_entropy_with_logits, optimizer=optimizer, metrics=metric) + >>> model.load_weights('./model_graph.h5', in_order=False, skip=True) # load weights by name, skipping mismatch + >>> model.load_weights('./model_eager.h5') # load sequentially 2) load model from a npz file - >>> net.load_weights('./model.npz') + >>> model.load_weights('./model.npz') - 2) load model from a npz file, which is saved as npz_dict previously - >>> net.load_weights('./model.npz', format='npz_dict') + 3) load model from a npz file, which is saved as npz_dict previously + >>> model.load_weights('./model.npz', format='npz_dict') Notes ------- @@ -943,77 +213,197 @@ def load_weights(self, filepath, format=None, in_order=True, skip=False): 'in_order' argument will be ignored. 
""" - if not os.path.exists(filepath): - raise FileNotFoundError("file {} doesn't exist.".format(filepath)) - if format is None: - format = filepath.split('.')[-1] + _load_weights(net=self, file_path=file_path, format=format, in_order=in_order, skip=skip) + + def tf_train( + self, n_epoch, train_dataset, network, loss_fn, train_weights, optimizer, metrics, print_train_batch, + print_freq, test_dataset + ): + for epoch in range(n_epoch): + start_time = time.time() + + train_loss, train_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in train_dataset: + network.set_train() + + with tf.GradientTape() as tape: + # compute outputs + _logits = network(X_batch) + # compute loss and update model + _loss_ce = loss_fn(_logits, y_batch) + + grad = tape.gradient(_loss_ce, train_weights) + optimizer.apply_gradients(zip(grad, train_weights)) + + train_loss += _loss_ce + if metrics: + metrics.update(_logits, y_batch) + train_acc += metrics.result() + metrics.reset() + else: + train_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) + n_iter += 1 + + if print_train_batch: + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + + if test_dataset: + # use training and evaluation sets to evaluate the model every print_freq epoch + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + network.set_eval() + val_loss, val_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in test_dataset: + _logits = network(X_batch) # is_train=False, disable dropout + val_loss += loss_fn(_logits, y_batch, name='eval_loss') + if metrics: + metrics.update(_logits, y_batch) + val_acc += metrics.result() + metrics.reset() + else: + val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) + n_iter += 1 + print(" val loss: {}".format(val_loss / n_iter)) + print(" val acc: {}".format(val_acc / n_iter)) + + def ms_train( + self, n_epoch, train_dataset, network, loss_fn, train_weights, optimizer, metrics, print_train_batch, + print_freq, test_dataset + ): + net_with_criterion = WithLoss(network, loss_fn) + train_network = GradWrap(net_with_criterion) + train_network.set_train() + for epoch in range(n_epoch): + start_time = time.time() + train_loss, train_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in train_dataset: + output = network(X_batch) + loss_output = loss_fn(output, y_batch) + grads = train_network(X_batch, y_batch) + success = optimizer.apply_gradients(zip(grads, train_weights)) + loss = loss_output.asnumpy() + train_loss += loss + if metrics: + metrics.update(output, y_batch) + train_acc += metrics.result() + metrics.reset() + else: + train_acc += np.mean((P.Equal()(P.Argmax(axis=1)(output), y_batch).asnumpy())) + n_iter += 1 + + if print_train_batch: + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + + if test_dataset: + # use training and evaluation 
sets to evaluate the model every print_freq epoch + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + network.set_eval() + val_loss, val_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in test_dataset: + _logits = network(X_batch) + val_loss += loss_fn(_logits, y_batch, name='eval_loss') + if metrics: + metrics.update(_logits, y_batch) + val_acc += metrics.result() + metrics.reset() + else: + val_acc += np.mean((P.Equal()(P.Argmax(axis=1)(_logits), y_batch).asnumpy())) + n_iter += 1 + print(" val loss: {}".format(val_loss / n_iter)) + print(" val acc: {}".format(val_acc / n_iter)) + + def pd_train( + self, n_epoch, train_dataset, network, loss_fn, train_weights, optimizer, metrics, print_train_batch, + print_freq, test_dataset + ): + for epoch in range(n_epoch): + start_time = time.time() + + train_loss, train_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in train_dataset: + network.set_train() + + output = network(X_batch) + loss = loss_fn(output, y_batch) + loss_ce = loss.numpy() + params_grads = optimizer.gradient(loss, train_weights) + optimizer.apply_gradients(params_grads) + + train_loss += loss_ce + if metrics: + metrics.update(output, y_batch) + train_acc += metrics.result() + metrics.reset() + else: + train_acc += pd.metric.accuracy(output, y_batch) + n_iter += 1 + + if print_train_batch: + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + print("Epoch {} of {} took {}".format(epoch + 1, n_epoch, time.time() - start_time)) + print(" train loss: {}".format(train_loss / n_iter)) + print(" train acc: {}".format(train_acc / n_iter)) + + if test_dataset: + # use training and evaluation sets to evaluate the model every print_freq epoch + if epoch + 1 == 1 or (epoch + 1) % print_freq == 0: + network.set_eval() + val_loss, val_acc, n_iter = 0, 0, 0 + for X_batch, y_batch in test_dataset: + _logits = network(X_batch) # is_train=False, disable dropout + val_loss += loss_fn(_logits, y_batch, name='eval_loss') + if metrics: + metrics.update(_logits, y_batch) + val_acc += metrics.result() + metrics.reset() + else: + val_acc += np.mean(np.equal(np.argmax(_logits, 1), y_batch)) + n_iter += 1 + print(" val loss: {}".format(val_loss / n_iter)) + print(" val acc: {}".format(val_acc / n_iter)) - if format == 'hdf5' or format == 'h5': - if skip ==True or in_order == False: - # load by weights name - utils.load_hdf5_to_weights(filepath, self, skip) - else: - # load in order - utils.load_hdf5_to_weights_in_order(filepath, self) - elif format == 'npz': - utils.load_and_assign_npz(filepath, self) - elif format == 'npz_dict': - utils.load_and_assign_npz_dict(filepath, self, skip) - elif format == 'ckpt': - # TODO: enable this when tf save ckpt is enabled - raise NotImplementedError("ckpt load/save is not supported now.") - else: - raise ValueError( - "File format must be 'hdf5', 'npz', 'npz_dict' or 'ckpt'. " - "Other format is not supported now." 
- ) - # TODO: not supported now - # def save_ckpt(self, sess=None, mode_name='model.ckpt', save_dir='checkpoint', global_step=None, printable=False): - # # TODO: Documentation pending - # """""" - # if not os.path.exists(save_dir): - # raise FileNotFoundError("Save directory {} doesn't exist.".format(save_dir)) - # utils.save_ckpt(sess, mode_name, save_dir, self.weights, global_step, printable) - # - # def load_ckpt(self, sess=None, mode_name='model.ckpt', save_dir='checkpoint', is_latest=True, printable=False): - # # TODO: Documentation pending - # """""" - # utils.load_ckpt(sess, mode_name, save_dir, self.weights, is_latest, printable) - - -def _addindent(s_, numSpaces): - s = s_.split('\n') - # don't do anything for single-line stuff - if len(s) == 1: - return s_ - first = s.pop(0) - s = [(numSpaces * ' ') + line for line in s] - s = '\n'.join(s) - s = first + '\n' + s - return s - - -def _check_tl_layer_tensors(tensors): - if not isinstance(tensors, list): - return hasattr(tensors, '_info') - else: - for t in tensors: - if not hasattr(t, '_info'): - return False - return True - - -def _add_list_to_all_layers(list_member): - temp_all_layers = list() - for component in list_member: - if isinstance(component, Layer): - temp_all_layers.append(component) - if not component._built: - raise AttributeError("Layer %s not built yet." % repr(component)) - elif isinstance(component, Model): - temp_all_layers.append(component) - elif isinstance(component, list): - temp_all_layers.extend(_add_list_to_all_layers(component)) - return temp_all_layers +class WithLoss(Module): + + def __init__(self, backbone, loss_fn): + super(WithLoss, self).__init__() + self._backbone = backbone + self._loss_fn = loss_fn + + def construct(self, data, label): + out = self._backbone(data) + return self._loss_fn(out, label) + + @property + def backbone_network(self): + return self._backbone + + +class GradWrap(Module): + """ GradWrap definition """ + + def __init__(self, network): + super(GradWrap, self).__init__(auto_prefix=False) + self.network = network + self.weights = ParameterTuple(network.trainable_weights) + + def forward(self, x, label): + return composite.GradOperation(get_by_list=True)(self.network, self.weights)(x, label) diff --git a/tensorlayer/models/imagenet_class_index.json b/tensorlayer/models/imagenet_class_index.json deleted file mode 100644 index 5fe0dfefc..000000000 --- a/tensorlayer/models/imagenet_class_index.json +++ /dev/null @@ -1 +0,0 @@ -{"0": ["n01440764", "tench"], "1": ["n01443537", "goldfish"], "2": ["n01484850", "great_white_shark"], "3": ["n01491361", "tiger_shark"], "4": ["n01494475", "hammerhead"], "5": ["n01496331", "electric_ray"], "6": ["n01498041", "stingray"], "7": ["n01514668", "cock"], "8": ["n01514859", "hen"], "9": ["n01518878", "ostrich"], "10": ["n01530575", "brambling"], "11": ["n01531178", "goldfinch"], "12": ["n01532829", "house_finch"], "13": ["n01534433", "junco"], "14": ["n01537544", "indigo_bunting"], "15": ["n01558993", "robin"], "16": ["n01560419", "bulbul"], "17": ["n01580077", "jay"], "18": ["n01582220", "magpie"], "19": ["n01592084", "chickadee"], "20": ["n01601694", "water_ouzel"], "21": ["n01608432", "kite"], "22": ["n01614925", "bald_eagle"], "23": ["n01616318", "vulture"], "24": ["n01622779", "great_grey_owl"], "25": ["n01629819", "European_fire_salamander"], "26": ["n01630670", "common_newt"], "27": ["n01631663", "eft"], "28": ["n01632458", "spotted_salamander"], "29": ["n01632777", "axolotl"], "30": ["n01641577", "bullfrog"], "31": ["n01644373", 
"tree_frog"], "32": ["n01644900", "tailed_frog"], "33": ["n01664065", "loggerhead"], "34": ["n01665541", "leatherback_turtle"], "35": ["n01667114", "mud_turtle"], "36": ["n01667778", "terrapin"], "37": ["n01669191", "box_turtle"], "38": ["n01675722", "banded_gecko"], "39": ["n01677366", "common_iguana"], "40": ["n01682714", "American_chameleon"], "41": ["n01685808", "whiptail"], "42": ["n01687978", "agama"], "43": ["n01688243", "frilled_lizard"], "44": ["n01689811", "alligator_lizard"], "45": ["n01692333", "Gila_monster"], "46": ["n01693334", "green_lizard"], "47": ["n01694178", "African_chameleon"], "48": ["n01695060", "Komodo_dragon"], "49": ["n01697457", "African_crocodile"], "50": ["n01698640", "American_alligator"], "51": ["n01704323", "triceratops"], "52": ["n01728572", "thunder_snake"], "53": ["n01728920", "ringneck_snake"], "54": ["n01729322", "hognose_snake"], "55": ["n01729977", "green_snake"], "56": ["n01734418", "king_snake"], "57": ["n01735189", "garter_snake"], "58": ["n01737021", "water_snake"], "59": ["n01739381", "vine_snake"], "60": ["n01740131", "night_snake"], "61": ["n01742172", "boa_constrictor"], "62": ["n01744401", "rock_python"], "63": ["n01748264", "Indian_cobra"], "64": ["n01749939", "green_mamba"], "65": ["n01751748", "sea_snake"], "66": ["n01753488", "horned_viper"], "67": ["n01755581", "diamondback"], "68": ["n01756291", "sidewinder"], "69": ["n01768244", "trilobite"], "70": ["n01770081", "harvestman"], "71": ["n01770393", "scorpion"], "72": ["n01773157", "black_and_gold_garden_spider"], "73": ["n01773549", "barn_spider"], "74": ["n01773797", "garden_spider"], "75": ["n01774384", "black_widow"], "76": ["n01774750", "tarantula"], "77": ["n01775062", "wolf_spider"], "78": ["n01776313", "tick"], "79": ["n01784675", "centipede"], "80": ["n01795545", "black_grouse"], "81": ["n01796340", "ptarmigan"], "82": ["n01797886", "ruffed_grouse"], "83": ["n01798484", "prairie_chicken"], "84": ["n01806143", "peacock"], "85": ["n01806567", "quail"], "86": ["n01807496", "partridge"], "87": ["n01817953", "African_grey"], "88": ["n01818515", "macaw"], "89": ["n01819313", "sulphur-crested_cockatoo"], "90": ["n01820546", "lorikeet"], "91": ["n01824575", "coucal"], "92": ["n01828970", "bee_eater"], "93": ["n01829413", "hornbill"], "94": ["n01833805", "hummingbird"], "95": ["n01843065", "jacamar"], "96": ["n01843383", "toucan"], "97": ["n01847000", "drake"], "98": ["n01855032", "red-breasted_merganser"], "99": ["n01855672", "goose"], "100": ["n01860187", "black_swan"], "101": ["n01871265", "tusker"], "102": ["n01872401", "echidna"], "103": ["n01873310", "platypus"], "104": ["n01877812", "wallaby"], "105": ["n01882714", "koala"], "106": ["n01883070", "wombat"], "107": ["n01910747", "jellyfish"], "108": ["n01914609", "sea_anemone"], "109": ["n01917289", "brain_coral"], "110": ["n01924916", "flatworm"], "111": ["n01930112", "nematode"], "112": ["n01943899", "conch"], "113": ["n01944390", "snail"], "114": ["n01945685", "slug"], "115": ["n01950731", "sea_slug"], "116": ["n01955084", "chiton"], "117": ["n01968897", "chambered_nautilus"], "118": ["n01978287", "Dungeness_crab"], "119": ["n01978455", "rock_crab"], "120": ["n01980166", "fiddler_crab"], "121": ["n01981276", "king_crab"], "122": ["n01983481", "American_lobster"], "123": ["n01984695", "spiny_lobster"], "124": ["n01985128", "crayfish"], "125": ["n01986214", "hermit_crab"], "126": ["n01990800", "isopod"], "127": ["n02002556", "white_stork"], "128": ["n02002724", "black_stork"], "129": ["n02006656", "spoonbill"], "130": 
["n02007558", "flamingo"], "131": ["n02009229", "little_blue_heron"], "132": ["n02009912", "American_egret"], "133": ["n02011460", "bittern"], "134": ["n02012849", "crane"], "135": ["n02013706", "limpkin"], "136": ["n02017213", "European_gallinule"], "137": ["n02018207", "American_coot"], "138": ["n02018795", "bustard"], "139": ["n02025239", "ruddy_turnstone"], "140": ["n02027492", "red-backed_sandpiper"], "141": ["n02028035", "redshank"], "142": ["n02033041", "dowitcher"], "143": ["n02037110", "oystercatcher"], "144": ["n02051845", "pelican"], "145": ["n02056570", "king_penguin"], "146": ["n02058221", "albatross"], "147": ["n02066245", "grey_whale"], "148": ["n02071294", "killer_whale"], "149": ["n02074367", "dugong"], "150": ["n02077923", "sea_lion"], "151": ["n02085620", "Chihuahua"], "152": ["n02085782", "Japanese_spaniel"], "153": ["n02085936", "Maltese_dog"], "154": ["n02086079", "Pekinese"], "155": ["n02086240", "Shih-Tzu"], "156": ["n02086646", "Blenheim_spaniel"], "157": ["n02086910", "papillon"], "158": ["n02087046", "toy_terrier"], "159": ["n02087394", "Rhodesian_ridgeback"], "160": ["n02088094", "Afghan_hound"], "161": ["n02088238", "basset"], "162": ["n02088364", "beagle"], "163": ["n02088466", "bloodhound"], "164": ["n02088632", "bluetick"], "165": ["n02089078", "black-and-tan_coonhound"], "166": ["n02089867", "Walker_hound"], "167": ["n02089973", "English_foxhound"], "168": ["n02090379", "redbone"], "169": ["n02090622", "borzoi"], "170": ["n02090721", "Irish_wolfhound"], "171": ["n02091032", "Italian_greyhound"], "172": ["n02091134", "whippet"], "173": ["n02091244", "Ibizan_hound"], "174": ["n02091467", "Norwegian_elkhound"], "175": ["n02091635", "otterhound"], "176": ["n02091831", "Saluki"], "177": ["n02092002", "Scottish_deerhound"], "178": ["n02092339", "Weimaraner"], "179": ["n02093256", "Staffordshire_bullterrier"], "180": ["n02093428", "American_Staffordshire_terrier"], "181": ["n02093647", "Bedlington_terrier"], "182": ["n02093754", "Border_terrier"], "183": ["n02093859", "Kerry_blue_terrier"], "184": ["n02093991", "Irish_terrier"], "185": ["n02094114", "Norfolk_terrier"], "186": ["n02094258", "Norwich_terrier"], "187": ["n02094433", "Yorkshire_terrier"], "188": ["n02095314", "wire-haired_fox_terrier"], "189": ["n02095570", "Lakeland_terrier"], "190": ["n02095889", "Sealyham_terrier"], "191": ["n02096051", "Airedale"], "192": ["n02096177", "cairn"], "193": ["n02096294", "Australian_terrier"], "194": ["n02096437", "Dandie_Dinmont"], "195": ["n02096585", "Boston_bull"], "196": ["n02097047", "miniature_schnauzer"], "197": ["n02097130", "giant_schnauzer"], "198": ["n02097209", "standard_schnauzer"], "199": ["n02097298", "Scotch_terrier"], "200": ["n02097474", "Tibetan_terrier"], "201": ["n02097658", "silky_terrier"], "202": ["n02098105", "soft-coated_wheaten_terrier"], "203": ["n02098286", "West_Highland_white_terrier"], "204": ["n02098413", "Lhasa"], "205": ["n02099267", "flat-coated_retriever"], "206": ["n02099429", "curly-coated_retriever"], "207": ["n02099601", "golden_retriever"], "208": ["n02099712", "Labrador_retriever"], "209": ["n02099849", "Chesapeake_Bay_retriever"], "210": ["n02100236", "German_short-haired_pointer"], "211": ["n02100583", "vizsla"], "212": ["n02100735", "English_setter"], "213": ["n02100877", "Irish_setter"], "214": ["n02101006", "Gordon_setter"], "215": ["n02101388", "Brittany_spaniel"], "216": ["n02101556", "clumber"], "217": ["n02102040", "English_springer"], "218": ["n02102177", "Welsh_springer_spaniel"], "219": ["n02102318", 
"cocker_spaniel"], "220": ["n02102480", "Sussex_spaniel"], "221": ["n02102973", "Irish_water_spaniel"], "222": ["n02104029", "kuvasz"], "223": ["n02104365", "schipperke"], "224": ["n02105056", "groenendael"], "225": ["n02105162", "malinois"], "226": ["n02105251", "briard"], "227": ["n02105412", "kelpie"], "228": ["n02105505", "komondor"], "229": ["n02105641", "Old_English_sheepdog"], "230": ["n02105855", "Shetland_sheepdog"], "231": ["n02106030", "collie"], "232": ["n02106166", "Border_collie"], "233": ["n02106382", "Bouvier_des_Flandres"], "234": ["n02106550", "Rottweiler"], "235": ["n02106662", "German_shepherd"], "236": ["n02107142", "Doberman"], "237": ["n02107312", "miniature_pinscher"], "238": ["n02107574", "Greater_Swiss_Mountain_dog"], "239": ["n02107683", "Bernese_mountain_dog"], "240": ["n02107908", "Appenzeller"], "241": ["n02108000", "EntleBucher"], "242": ["n02108089", "boxer"], "243": ["n02108422", "bull_mastiff"], "244": ["n02108551", "Tibetan_mastiff"], "245": ["n02108915", "French_bulldog"], "246": ["n02109047", "Great_Dane"], "247": ["n02109525", "Saint_Bernard"], "248": ["n02109961", "Eskimo_dog"], "249": ["n02110063", "malamute"], "250": ["n02110185", "Siberian_husky"], "251": ["n02110341", "dalmatian"], "252": ["n02110627", "affenpinscher"], "253": ["n02110806", "basenji"], "254": ["n02110958", "pug"], "255": ["n02111129", "Leonberg"], "256": ["n02111277", "Newfoundland"], "257": ["n02111500", "Great_Pyrenees"], "258": ["n02111889", "Samoyed"], "259": ["n02112018", "Pomeranian"], "260": ["n02112137", "chow"], "261": ["n02112350", "keeshond"], "262": ["n02112706", "Brabancon_griffon"], "263": ["n02113023", "Pembroke"], "264": ["n02113186", "Cardigan"], "265": ["n02113624", "toy_poodle"], "266": ["n02113712", "miniature_poodle"], "267": ["n02113799", "standard_poodle"], "268": ["n02113978", "Mexican_hairless"], "269": ["n02114367", "timber_wolf"], "270": ["n02114548", "white_wolf"], "271": ["n02114712", "red_wolf"], "272": ["n02114855", "coyote"], "273": ["n02115641", "dingo"], "274": ["n02115913", "dhole"], "275": ["n02116738", "African_hunting_dog"], "276": ["n02117135", "hyena"], "277": ["n02119022", "red_fox"], "278": ["n02119789", "kit_fox"], "279": ["n02120079", "Arctic_fox"], "280": ["n02120505", "grey_fox"], "281": ["n02123045", "tabby"], "282": ["n02123159", "tiger_cat"], "283": ["n02123394", "Persian_cat"], "284": ["n02123597", "Siamese_cat"], "285": ["n02124075", "Egyptian_cat"], "286": ["n02125311", "cougar"], "287": ["n02127052", "lynx"], "288": ["n02128385", "leopard"], "289": ["n02128757", "snow_leopard"], "290": ["n02128925", "jaguar"], "291": ["n02129165", "lion"], "292": ["n02129604", "tiger"], "293": ["n02130308", "cheetah"], "294": ["n02132136", "brown_bear"], "295": ["n02133161", "American_black_bear"], "296": ["n02134084", "ice_bear"], "297": ["n02134418", "sloth_bear"], "298": ["n02137549", "mongoose"], "299": ["n02138441", "meerkat"], "300": ["n02165105", "tiger_beetle"], "301": ["n02165456", "ladybug"], "302": ["n02167151", "ground_beetle"], "303": ["n02168699", "long-horned_beetle"], "304": ["n02169497", "leaf_beetle"], "305": ["n02172182", "dung_beetle"], "306": ["n02174001", "rhinoceros_beetle"], "307": ["n02177972", "weevil"], "308": ["n02190166", "fly"], "309": ["n02206856", "bee"], "310": ["n02219486", "ant"], "311": ["n02226429", "grasshopper"], "312": ["n02229544", "cricket"], "313": ["n02231487", "walking_stick"], "314": ["n02233338", "cockroach"], "315": ["n02236044", "mantis"], "316": ["n02256656", "cicada"], "317": ["n02259212", 
"leafhopper"], "318": ["n02264363", "lacewing"], "319": ["n02268443", "dragonfly"], "320": ["n02268853", "damselfly"], "321": ["n02276258", "admiral"], "322": ["n02277742", "ringlet"], "323": ["n02279972", "monarch"], "324": ["n02280649", "cabbage_butterfly"], "325": ["n02281406", "sulphur_butterfly"], "326": ["n02281787", "lycaenid"], "327": ["n02317335", "starfish"], "328": ["n02319095", "sea_urchin"], "329": ["n02321529", "sea_cucumber"], "330": ["n02325366", "wood_rabbit"], "331": ["n02326432", "hare"], "332": ["n02328150", "Angora"], "333": ["n02342885", "hamster"], "334": ["n02346627", "porcupine"], "335": ["n02356798", "fox_squirrel"], "336": ["n02361337", "marmot"], "337": ["n02363005", "beaver"], "338": ["n02364673", "guinea_pig"], "339": ["n02389026", "sorrel"], "340": ["n02391049", "zebra"], "341": ["n02395406", "hog"], "342": ["n02396427", "wild_boar"], "343": ["n02397096", "warthog"], "344": ["n02398521", "hippopotamus"], "345": ["n02403003", "ox"], "346": ["n02408429", "water_buffalo"], "347": ["n02410509", "bison"], "348": ["n02412080", "ram"], "349": ["n02415577", "bighorn"], "350": ["n02417914", "ibex"], "351": ["n02422106", "hartebeest"], "352": ["n02422699", "impala"], "353": ["n02423022", "gazelle"], "354": ["n02437312", "Arabian_camel"], "355": ["n02437616", "llama"], "356": ["n02441942", "weasel"], "357": ["n02442845", "mink"], "358": ["n02443114", "polecat"], "359": ["n02443484", "black-footed_ferret"], "360": ["n02444819", "otter"], "361": ["n02445715", "skunk"], "362": ["n02447366", "badger"], "363": ["n02454379", "armadillo"], "364": ["n02457408", "three-toed_sloth"], "365": ["n02480495", "orangutan"], "366": ["n02480855", "gorilla"], "367": ["n02481823", "chimpanzee"], "368": ["n02483362", "gibbon"], "369": ["n02483708", "siamang"], "370": ["n02484975", "guenon"], "371": ["n02486261", "patas"], "372": ["n02486410", "baboon"], "373": ["n02487347", "macaque"], "374": ["n02488291", "langur"], "375": ["n02488702", "colobus"], "376": ["n02489166", "proboscis_monkey"], "377": ["n02490219", "marmoset"], "378": ["n02492035", "capuchin"], "379": ["n02492660", "howler_monkey"], "380": ["n02493509", "titi"], "381": ["n02493793", "spider_monkey"], "382": ["n02494079", "squirrel_monkey"], "383": ["n02497673", "Madagascar_cat"], "384": ["n02500267", "indri"], "385": ["n02504013", "Indian_elephant"], "386": ["n02504458", "African_elephant"], "387": ["n02509815", "lesser_panda"], "388": ["n02510455", "giant_panda"], "389": ["n02514041", "barracouta"], "390": ["n02526121", "eel"], "391": ["n02536864", "coho"], "392": ["n02606052", "rock_beauty"], "393": ["n02607072", "anemone_fish"], "394": ["n02640242", "sturgeon"], "395": ["n02641379", "gar"], "396": ["n02643566", "lionfish"], "397": ["n02655020", "puffer"], "398": ["n02666196", "abacus"], "399": ["n02667093", "abaya"], "400": ["n02669723", "academic_gown"], "401": ["n02672831", "accordion"], "402": ["n02676566", "acoustic_guitar"], "403": ["n02687172", "aircraft_carrier"], "404": ["n02690373", "airliner"], "405": ["n02692877", "airship"], "406": ["n02699494", "altar"], "407": ["n02701002", "ambulance"], "408": ["n02704792", "amphibian"], "409": ["n02708093", "analog_clock"], "410": ["n02727426", "apiary"], "411": ["n02730930", "apron"], "412": ["n02747177", "ashcan"], "413": ["n02749479", "assault_rifle"], "414": ["n02769748", "backpack"], "415": ["n02776631", "bakery"], "416": ["n02777292", "balance_beam"], "417": ["n02782093", "balloon"], "418": ["n02783161", "ballpoint"], "419": ["n02786058", "Band_Aid"], "420": 
["n02787622", "banjo"], "421": ["n02788148", "bannister"], "422": ["n02790996", "barbell"], "423": ["n02791124", "barber_chair"], "424": ["n02791270", "barbershop"], "425": ["n02793495", "barn"], "426": ["n02794156", "barometer"], "427": ["n02795169", "barrel"], "428": ["n02797295", "barrow"], "429": ["n02799071", "baseball"], "430": ["n02802426", "basketball"], "431": ["n02804414", "bassinet"], "432": ["n02804610", "bassoon"], "433": ["n02807133", "bathing_cap"], "434": ["n02808304", "bath_towel"], "435": ["n02808440", "bathtub"], "436": ["n02814533", "beach_wagon"], "437": ["n02814860", "beacon"], "438": ["n02815834", "beaker"], "439": ["n02817516", "bearskin"], "440": ["n02823428", "beer_bottle"], "441": ["n02823750", "beer_glass"], "442": ["n02825657", "bell_cote"], "443": ["n02834397", "bib"], "444": ["n02835271", "bicycle-built-for-two"], "445": ["n02837789", "bikini"], "446": ["n02840245", "binder"], "447": ["n02841315", "binoculars"], "448": ["n02843684", "birdhouse"], "449": ["n02859443", "boathouse"], "450": ["n02860847", "bobsled"], "451": ["n02865351", "bolo_tie"], "452": ["n02869837", "bonnet"], "453": ["n02870880", "bookcase"], "454": ["n02871525", "bookshop"], "455": ["n02877765", "bottlecap"], "456": ["n02879718", "bow"], "457": ["n02883205", "bow_tie"], "458": ["n02892201", "brass"], "459": ["n02892767", "brassiere"], "460": ["n02894605", "breakwater"], "461": ["n02895154", "breastplate"], "462": ["n02906734", "broom"], "463": ["n02909870", "bucket"], "464": ["n02910353", "buckle"], "465": ["n02916936", "bulletproof_vest"], "466": ["n02917067", "bullet_train"], "467": ["n02927161", "butcher_shop"], "468": ["n02930766", "cab"], "469": ["n02939185", "caldron"], "470": ["n02948072", "candle"], "471": ["n02950826", "cannon"], "472": ["n02951358", "canoe"], "473": ["n02951585", "can_opener"], "474": ["n02963159", "cardigan"], "475": ["n02965783", "car_mirror"], "476": ["n02966193", "carousel"], "477": ["n02966687", "carpenter's_kit"], "478": ["n02971356", "carton"], "479": ["n02974003", "car_wheel"], "480": ["n02977058", "cash_machine"], "481": ["n02978881", "cassette"], "482": ["n02979186", "cassette_player"], "483": ["n02980441", "castle"], "484": ["n02981792", "catamaran"], "485": ["n02988304", "CD_player"], "486": ["n02992211", "cello"], "487": ["n02992529", "cellular_telephone"], "488": ["n02999410", "chain"], "489": ["n03000134", "chainlink_fence"], "490": ["n03000247", "chain_mail"], "491": ["n03000684", "chain_saw"], "492": ["n03014705", "chest"], "493": ["n03016953", "chiffonier"], "494": ["n03017168", "chime"], "495": ["n03018349", "china_cabinet"], "496": ["n03026506", "Christmas_stocking"], "497": ["n03028079", "church"], "498": ["n03032252", "cinema"], "499": ["n03041632", "cleaver"], "500": ["n03042490", "cliff_dwelling"], "501": ["n03045698", "cloak"], "502": ["n03047690", "clog"], "503": ["n03062245", "cocktail_shaker"], "504": ["n03063599", "coffee_mug"], "505": ["n03063689", "coffeepot"], "506": ["n03065424", "coil"], "507": ["n03075370", "combination_lock"], "508": ["n03085013", "computer_keyboard"], "509": ["n03089624", "confectionery"], "510": ["n03095699", "container_ship"], "511": ["n03100240", "convertible"], "512": ["n03109150", "corkscrew"], "513": ["n03110669", "cornet"], "514": ["n03124043", "cowboy_boot"], "515": ["n03124170", "cowboy_hat"], "516": ["n03125729", "cradle"], "517": ["n03126707", "crane"], "518": ["n03127747", "crash_helmet"], "519": ["n03127925", "crate"], "520": ["n03131574", "crib"], "521": ["n03133878", "Crock_Pot"], "522": 
["n03134739", "croquet_ball"], "523": ["n03141823", "crutch"], "524": ["n03146219", "cuirass"], "525": ["n03160309", "dam"], "526": ["n03179701", "desk"], "527": ["n03180011", "desktop_computer"], "528": ["n03187595", "dial_telephone"], "529": ["n03188531", "diaper"], "530": ["n03196217", "digital_clock"], "531": ["n03197337", "digital_watch"], "532": ["n03201208", "dining_table"], "533": ["n03207743", "dishrag"], "534": ["n03207941", "dishwasher"], "535": ["n03208938", "disk_brake"], "536": ["n03216828", "dock"], "537": ["n03218198", "dogsled"], "538": ["n03220513", "dome"], "539": ["n03223299", "doormat"], "540": ["n03240683", "drilling_platform"], "541": ["n03249569", "drum"], "542": ["n03250847", "drumstick"], "543": ["n03255030", "dumbbell"], "544": ["n03259280", "Dutch_oven"], "545": ["n03271574", "electric_fan"], "546": ["n03272010", "electric_guitar"], "547": ["n03272562", "electric_locomotive"], "548": ["n03290653", "entertainment_center"], "549": ["n03291819", "envelope"], "550": ["n03297495", "espresso_maker"], "551": ["n03314780", "face_powder"], "552": ["n03325584", "feather_boa"], "553": ["n03337140", "file"], "554": ["n03344393", "fireboat"], "555": ["n03345487", "fire_engine"], "556": ["n03347037", "fire_screen"], "557": ["n03355925", "flagpole"], "558": ["n03372029", "flute"], "559": ["n03376595", "folding_chair"], "560": ["n03379051", "football_helmet"], "561": ["n03384352", "forklift"], "562": ["n03388043", "fountain"], "563": ["n03388183", "fountain_pen"], "564": ["n03388549", "four-poster"], "565": ["n03393912", "freight_car"], "566": ["n03394916", "French_horn"], "567": ["n03400231", "frying_pan"], "568": ["n03404251", "fur_coat"], "569": ["n03417042", "garbage_truck"], "570": ["n03424325", "gasmask"], "571": ["n03425413", "gas_pump"], "572": ["n03443371", "goblet"], "573": ["n03444034", "go-kart"], "574": ["n03445777", "golf_ball"], "575": ["n03445924", "golfcart"], "576": ["n03447447", "gondola"], "577": ["n03447721", "gong"], "578": ["n03450230", "gown"], "579": ["n03452741", "grand_piano"], "580": ["n03457902", "greenhouse"], "581": ["n03459775", "grille"], "582": ["n03461385", "grocery_store"], "583": ["n03467068", "guillotine"], "584": ["n03476684", "hair_slide"], "585": ["n03476991", "hair_spray"], "586": ["n03478589", "half_track"], "587": ["n03481172", "hammer"], "588": ["n03482405", "hamper"], "589": ["n03483316", "hand_blower"], "590": ["n03485407", "hand-held_computer"], "591": ["n03485794", "handkerchief"], "592": ["n03492542", "hard_disc"], "593": ["n03494278", "harmonica"], "594": ["n03495258", "harp"], "595": ["n03496892", "harvester"], "596": ["n03498962", "hatchet"], "597": ["n03527444", "holster"], "598": ["n03529860", "home_theater"], "599": ["n03530642", "honeycomb"], "600": ["n03532672", "hook"], "601": ["n03534580", "hoopskirt"], "602": ["n03535780", "horizontal_bar"], "603": ["n03538406", "horse_cart"], "604": ["n03544143", "hourglass"], "605": ["n03584254", "iPod"], "606": ["n03584829", "iron"], "607": ["n03590841", "jack-o'-lantern"], "608": ["n03594734", "jean"], "609": ["n03594945", "jeep"], "610": ["n03595614", "jersey"], "611": ["n03598930", "jigsaw_puzzle"], "612": ["n03599486", "jinrikisha"], "613": ["n03602883", "joystick"], "614": ["n03617480", "kimono"], "615": ["n03623198", "knee_pad"], "616": ["n03627232", "knot"], "617": ["n03630383", "lab_coat"], "618": ["n03633091", "ladle"], "619": ["n03637318", "lampshade"], "620": ["n03642806", "laptop"], "621": ["n03649909", "lawn_mower"], "622": ["n03657121", "lens_cap"], "623": 
["n03658185", "letter_opener"], "624": ["n03661043", "library"], "625": ["n03662601", "lifeboat"], "626": ["n03666591", "lighter"], "627": ["n03670208", "limousine"], "628": ["n03673027", "liner"], "629": ["n03676483", "lipstick"], "630": ["n03680355", "Loafer"], "631": ["n03690938", "lotion"], "632": ["n03691459", "loudspeaker"], "633": ["n03692522", "loupe"], "634": ["n03697007", "lumbermill"], "635": ["n03706229", "magnetic_compass"], "636": ["n03709823", "mailbag"], "637": ["n03710193", "mailbox"], "638": ["n03710637", "maillot"], "639": ["n03710721", "maillot"], "640": ["n03717622", "manhole_cover"], "641": ["n03720891", "maraca"], "642": ["n03721384", "marimba"], "643": ["n03724870", "mask"], "644": ["n03729826", "matchstick"], "645": ["n03733131", "maypole"], "646": ["n03733281", "maze"], "647": ["n03733805", "measuring_cup"], "648": ["n03742115", "medicine_chest"], "649": ["n03743016", "megalith"], "650": ["n03759954", "microphone"], "651": ["n03761084", "microwave"], "652": ["n03763968", "military_uniform"], "653": ["n03764736", "milk_can"], "654": ["n03769881", "minibus"], "655": ["n03770439", "miniskirt"], "656": ["n03770679", "minivan"], "657": ["n03773504", "missile"], "658": ["n03775071", "mitten"], "659": ["n03775546", "mixing_bowl"], "660": ["n03776460", "mobile_home"], "661": ["n03777568", "Model_T"], "662": ["n03777754", "modem"], "663": ["n03781244", "monastery"], "664": ["n03782006", "monitor"], "665": ["n03785016", "moped"], "666": ["n03786901", "mortar"], "667": ["n03787032", "mortarboard"], "668": ["n03788195", "mosque"], "669": ["n03788365", "mosquito_net"], "670": ["n03791053", "motor_scooter"], "671": ["n03792782", "mountain_bike"], "672": ["n03792972", "mountain_tent"], "673": ["n03793489", "mouse"], "674": ["n03794056", "mousetrap"], "675": ["n03796401", "moving_van"], "676": ["n03803284", "muzzle"], "677": ["n03804744", "nail"], "678": ["n03814639", "neck_brace"], "679": ["n03814906", "necklace"], "680": ["n03825788", "nipple"], "681": ["n03832673", "notebook"], "682": ["n03837869", "obelisk"], "683": ["n03838899", "oboe"], "684": ["n03840681", "ocarina"], "685": ["n03841143", "odometer"], "686": ["n03843555", "oil_filter"], "687": ["n03854065", "organ"], "688": ["n03857828", "oscilloscope"], "689": ["n03866082", "overskirt"], "690": ["n03868242", "oxcart"], "691": ["n03868863", "oxygen_mask"], "692": ["n03871628", "packet"], "693": ["n03873416", "paddle"], "694": ["n03874293", "paddlewheel"], "695": ["n03874599", "padlock"], "696": ["n03876231", "paintbrush"], "697": ["n03877472", "pajama"], "698": ["n03877845", "palace"], "699": ["n03884397", "panpipe"], "700": ["n03887697", "paper_towel"], "701": ["n03888257", "parachute"], "702": ["n03888605", "parallel_bars"], "703": ["n03891251", "park_bench"], "704": ["n03891332", "parking_meter"], "705": ["n03895866", "passenger_car"], "706": ["n03899768", "patio"], "707": ["n03902125", "pay-phone"], "708": ["n03903868", "pedestal"], "709": ["n03908618", "pencil_box"], "710": ["n03908714", "pencil_sharpener"], "711": ["n03916031", "perfume"], "712": ["n03920288", "Petri_dish"], "713": ["n03924679", "photocopier"], "714": ["n03929660", "pick"], "715": ["n03929855", "pickelhaube"], "716": ["n03930313", "picket_fence"], "717": ["n03930630", "pickup"], "718": ["n03933933", "pier"], "719": ["n03935335", "piggy_bank"], "720": ["n03937543", "pill_bottle"], "721": ["n03938244", "pillow"], "722": ["n03942813", "ping-pong_ball"], "723": ["n03944341", "pinwheel"], "724": ["n03947888", "pirate"], "725": ["n03950228", "pitcher"], 
"726": ["n03954731", "plane"], "727": ["n03956157", "planetarium"], "728": ["n03958227", "plastic_bag"], "729": ["n03961711", "plate_rack"], "730": ["n03967562", "plow"], "731": ["n03970156", "plunger"], "732": ["n03976467", "Polaroid_camera"], "733": ["n03976657", "pole"], "734": ["n03977966", "police_van"], "735": ["n03980874", "poncho"], "736": ["n03982430", "pool_table"], "737": ["n03983396", "pop_bottle"], "738": ["n03991062", "pot"], "739": ["n03992509", "potter's_wheel"], "740": ["n03995372", "power_drill"], "741": ["n03998194", "prayer_rug"], "742": ["n04004767", "printer"], "743": ["n04005630", "prison"], "744": ["n04008634", "projectile"], "745": ["n04009552", "projector"], "746": ["n04019541", "puck"], "747": ["n04023962", "punching_bag"], "748": ["n04026417", "purse"], "749": ["n04033901", "quill"], "750": ["n04033995", "quilt"], "751": ["n04037443", "racer"], "752": ["n04039381", "racket"], "753": ["n04040759", "radiator"], "754": ["n04041544", "radio"], "755": ["n04044716", "radio_telescope"], "756": ["n04049303", "rain_barrel"], "757": ["n04065272", "recreational_vehicle"], "758": ["n04067472", "reel"], "759": ["n04069434", "reflex_camera"], "760": ["n04070727", "refrigerator"], "761": ["n04074963", "remote_control"], "762": ["n04081281", "restaurant"], "763": ["n04086273", "revolver"], "764": ["n04090263", "rifle"], "765": ["n04099969", "rocking_chair"], "766": ["n04111531", "rotisserie"], "767": ["n04116512", "rubber_eraser"], "768": ["n04118538", "rugby_ball"], "769": ["n04118776", "rule"], "770": ["n04120489", "running_shoe"], "771": ["n04125021", "safe"], "772": ["n04127249", "safety_pin"], "773": ["n04131690", "saltshaker"], "774": ["n04133789", "sandal"], "775": ["n04136333", "sarong"], "776": ["n04141076", "sax"], "777": ["n04141327", "scabbard"], "778": ["n04141975", "scale"], "779": ["n04146614", "school_bus"], "780": ["n04147183", "schooner"], "781": ["n04149813", "scoreboard"], "782": ["n04152593", "screen"], "783": ["n04153751", "screw"], "784": ["n04154565", "screwdriver"], "785": ["n04162706", "seat_belt"], "786": ["n04179913", "sewing_machine"], "787": ["n04192698", "shield"], "788": ["n04200800", "shoe_shop"], "789": ["n04201297", "shoji"], "790": ["n04204238", "shopping_basket"], "791": ["n04204347", "shopping_cart"], "792": ["n04208210", "shovel"], "793": ["n04209133", "shower_cap"], "794": ["n04209239", "shower_curtain"], "795": ["n04228054", "ski"], "796": ["n04229816", "ski_mask"], "797": ["n04235860", "sleeping_bag"], "798": ["n04238763", "slide_rule"], "799": ["n04239074", "sliding_door"], "800": ["n04243546", "slot"], "801": ["n04251144", "snorkel"], "802": ["n04252077", "snowmobile"], "803": ["n04252225", "snowplow"], "804": ["n04254120", "soap_dispenser"], "805": ["n04254680", "soccer_ball"], "806": ["n04254777", "sock"], "807": ["n04258138", "solar_dish"], "808": ["n04259630", "sombrero"], "809": ["n04263257", "soup_bowl"], "810": ["n04264628", "space_bar"], "811": ["n04265275", "space_heater"], "812": ["n04266014", "space_shuttle"], "813": ["n04270147", "spatula"], "814": ["n04273569", "speedboat"], "815": ["n04275548", "spider_web"], "816": ["n04277352", "spindle"], "817": ["n04285008", "sports_car"], "818": ["n04286575", "spotlight"], "819": ["n04296562", "stage"], "820": ["n04310018", "steam_locomotive"], "821": ["n04311004", "steel_arch_bridge"], "822": ["n04311174", "steel_drum"], "823": ["n04317175", "stethoscope"], "824": ["n04325704", "stole"], "825": ["n04326547", "stone_wall"], "826": ["n04328186", "stopwatch"], "827": ["n04330267", 
"stove"], "828": ["n04332243", "strainer"], "829": ["n04335435", "streetcar"], "830": ["n04336792", "stretcher"], "831": ["n04344873", "studio_couch"], "832": ["n04346328", "stupa"], "833": ["n04347754", "submarine"], "834": ["n04350905", "suit"], "835": ["n04355338", "sundial"], "836": ["n04355933", "sunglass"], "837": ["n04356056", "sunglasses"], "838": ["n04357314", "sunscreen"], "839": ["n04366367", "suspension_bridge"], "840": ["n04367480", "swab"], "841": ["n04370456", "sweatshirt"], "842": ["n04371430", "swimming_trunks"], "843": ["n04371774", "swing"], "844": ["n04372370", "switch"], "845": ["n04376876", "syringe"], "846": ["n04380533", "table_lamp"], "847": ["n04389033", "tank"], "848": ["n04392985", "tape_player"], "849": ["n04398044", "teapot"], "850": ["n04399382", "teddy"], "851": ["n04404412", "television"], "852": ["n04409515", "tennis_ball"], "853": ["n04417672", "thatch"], "854": ["n04418357", "theater_curtain"], "855": ["n04423845", "thimble"], "856": ["n04428191", "thresher"], "857": ["n04429376", "throne"], "858": ["n04435653", "tile_roof"], "859": ["n04442312", "toaster"], "860": ["n04443257", "tobacco_shop"], "861": ["n04447861", "toilet_seat"], "862": ["n04456115", "torch"], "863": ["n04458633", "totem_pole"], "864": ["n04461696", "tow_truck"], "865": ["n04462240", "toyshop"], "866": ["n04465501", "tractor"], "867": ["n04467665", "trailer_truck"], "868": ["n04476259", "tray"], "869": ["n04479046", "trench_coat"], "870": ["n04482393", "tricycle"], "871": ["n04483307", "trimaran"], "872": ["n04485082", "tripod"], "873": ["n04486054", "triumphal_arch"], "874": ["n04487081", "trolleybus"], "875": ["n04487394", "trombone"], "876": ["n04493381", "tub"], "877": ["n04501370", "turnstile"], "878": ["n04505470", "typewriter_keyboard"], "879": ["n04507155", "umbrella"], "880": ["n04509417", "unicycle"], "881": ["n04515003", "upright"], "882": ["n04517823", "vacuum"], "883": ["n04522168", "vase"], "884": ["n04523525", "vault"], "885": ["n04525038", "velvet"], "886": ["n04525305", "vending_machine"], "887": ["n04532106", "vestment"], "888": ["n04532670", "viaduct"], "889": ["n04536866", "violin"], "890": ["n04540053", "volleyball"], "891": ["n04542943", "waffle_iron"], "892": ["n04548280", "wall_clock"], "893": ["n04548362", "wallet"], "894": ["n04550184", "wardrobe"], "895": ["n04552348", "warplane"], "896": ["n04553703", "washbasin"], "897": ["n04554684", "washer"], "898": ["n04557648", "water_bottle"], "899": ["n04560804", "water_jug"], "900": ["n04562935", "water_tower"], "901": ["n04579145", "whiskey_jug"], "902": ["n04579432", "whistle"], "903": ["n04584207", "wig"], "904": ["n04589890", "window_screen"], "905": ["n04590129", "window_shade"], "906": ["n04591157", "Windsor_tie"], "907": ["n04591713", "wine_bottle"], "908": ["n04592741", "wing"], "909": ["n04596742", "wok"], "910": ["n04597913", "wooden_spoon"], "911": ["n04599235", "wool"], "912": ["n04604644", "worm_fence"], "913": ["n04606251", "wreck"], "914": ["n04612504", "yawl"], "915": ["n04613696", "yurt"], "916": ["n06359193", "web_site"], "917": ["n06596364", "comic_book"], "918": ["n06785654", "crossword_puzzle"], "919": ["n06794110", "street_sign"], "920": ["n06874185", "traffic_light"], "921": ["n07248320", "book_jacket"], "922": ["n07565083", "menu"], "923": ["n07579787", "plate"], "924": ["n07583066", "guacamole"], "925": ["n07584110", "consomme"], "926": ["n07590611", "hot_pot"], "927": ["n07613480", "trifle"], "928": ["n07614500", "ice_cream"], "929": ["n07615774", "ice_lolly"], "930": ["n07684084", 
"French_loaf"], "931": ["n07693725", "bagel"], "932": ["n07695742", "pretzel"], "933": ["n07697313", "cheeseburger"], "934": ["n07697537", "hotdog"], "935": ["n07711569", "mashed_potato"], "936": ["n07714571", "head_cabbage"], "937": ["n07714990", "broccoli"], "938": ["n07715103", "cauliflower"], "939": ["n07716358", "zucchini"], "940": ["n07716906", "spaghetti_squash"], "941": ["n07717410", "acorn_squash"], "942": ["n07717556", "butternut_squash"], "943": ["n07718472", "cucumber"], "944": ["n07718747", "artichoke"], "945": ["n07720875", "bell_pepper"], "946": ["n07730033", "cardoon"], "947": ["n07734744", "mushroom"], "948": ["n07742313", "Granny_Smith"], "949": ["n07745940", "strawberry"], "950": ["n07747607", "orange"], "951": ["n07749582", "lemon"], "952": ["n07753113", "fig"], "953": ["n07753275", "pineapple"], "954": ["n07753592", "banana"], "955": ["n07754684", "jackfruit"], "956": ["n07760859", "custard_apple"], "957": ["n07768694", "pomegranate"], "958": ["n07802026", "hay"], "959": ["n07831146", "carbonara"], "960": ["n07836838", "chocolate_sauce"], "961": ["n07860988", "dough"], "962": ["n07871810", "meat_loaf"], "963": ["n07873807", "pizza"], "964": ["n07875152", "potpie"], "965": ["n07880968", "burrito"], "966": ["n07892512", "red_wine"], "967": ["n07920052", "espresso"], "968": ["n07930864", "cup"], "969": ["n07932039", "eggnog"], "970": ["n09193705", "alp"], "971": ["n09229709", "bubble"], "972": ["n09246464", "cliff"], "973": ["n09256479", "coral_reef"], "974": ["n09288635", "geyser"], "975": ["n09332890", "lakeside"], "976": ["n09399592", "promontory"], "977": ["n09421951", "sandbar"], "978": ["n09428293", "seashore"], "979": ["n09468604", "valley"], "980": ["n09472597", "volcano"], "981": ["n09835506", "ballplayer"], "982": ["n10148035", "groom"], "983": ["n10565667", "scuba_diver"], "984": ["n11879895", "rapeseed"], "985": ["n11939491", "daisy"], "986": ["n12057211", "yellow_lady's_slipper"], "987": ["n12144580", "corn"], "988": ["n12267677", "acorn"], "989": ["n12620546", "hip"], "990": ["n12768682", "buckeye"], "991": ["n12985857", "coral_fungus"], "992": ["n12998815", "agaric"], "993": ["n13037406", "gyromitra"], "994": ["n13040303", "stinkhorn"], "995": ["n13044778", "earthstar"], "996": ["n13052670", "hen-of-the-woods"], "997": ["n13054560", "bolete"], "998": ["n13133613", "ear"], "999": ["n15075141", "toilet_tissue"]} \ No newline at end of file diff --git a/tensorlayer/models/mobilenetv1.py b/tensorlayer/models/mobilenetv1.py deleted file mode 100644 index fd169b025..000000000 --- a/tensorlayer/models/mobilenetv1.py +++ /dev/null @@ -1,118 +0,0 @@ -#! 
/usr/bin/python -# -*- coding: utf-8 -*- -"""MobileNet for ImageNet.""" - -import os - -import tensorflow as tf - -from tensorlayer import logging -from tensorlayer.files import (assign_weights, load_npz, maybe_download_and_extract) -from tensorlayer.layers import (BatchNorm, Conv2d, DepthwiseConv2d, Flatten, GlobalMeanPool2d, Input, Reshape) -from tensorlayer.models import Model - -__all__ = [ - 'MobileNetV1', -] - -layer_names = [ - 'conv', 'depth1', 'depth2', 'depth3', 'depth4', 'depth5', 'depth6', 'depth7', 'depth8', 'depth9', 'depth10', - 'depth11', 'depth12', 'depth13', 'globalmeanpool', 'reshape', 'out' -] -n_filters = [32, 64, 128, 128, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024] - - -def conv_block(n, n_filter, filter_size=(3, 3), strides=(1, 1), name='conv_block'): - # ref: https://github.com/keras-team/keras/blob/master/keras/applications/mobilenet.py - n = Conv2d(n_filter, filter_size, strides, b_init=None, name=name + '.conv')(n) - n = BatchNorm(decay=0.99, act=tf.nn.relu6, name=name + '.batchnorm')(n) - return n - - -def depthwise_conv_block(n, n_filter, strides=(1, 1), name="depth_block"): - n = DepthwiseConv2d((3, 3), strides, b_init=None, name=name + '.depthwise')(n) - n = BatchNorm(decay=0.99, act=tf.nn.relu6, name=name + '.batchnorm1')(n) - n = Conv2d(n_filter, (1, 1), (1, 1), b_init=None, name=name + '.conv')(n) - n = BatchNorm(decay=0.99, act=tf.nn.relu6, name=name + '.batchnorm2')(n) - return n - - -def restore_params(network, path='models'): - logging.info("Restore pre-trained parameters") - maybe_download_and_extract( - 'mobilenet.npz', path, 'https://github.com/tensorlayer/pretrained-models/raw/master/models/', - expected_bytes=25600116 - ) # ls -al - params = load_npz(name=os.path.join(path, 'mobilenet.npz')) - # for idx, net_weight in enumerate(network.all_weights): - # if 'batchnorm' in net_weight.name: - # params[idx] = params[idx].reshape(1, 1, 1, -1) - assign_weights(params[:len(network.all_weights)], network) - del params - - -def MobileNetV1(pretrained=False, end_with='out', name=None): - """Pre-trained MobileNetV1 model (static mode). Input shape [?, 224, 224, 3], value range [0, 1]. - - Parameters - ---------- - pretrained : boolean - Whether to load pretrained weights. Default False. - end_with : str - The end point of the model [conv, depth1, depth2 ... depth13, globalmeanpool, reshape, out]. Default ``out`` i.e. the whole model. - name : None or str - Name for this model. - - Examples - --------- - Classify ImageNet classes, see `tutorial_models_mobilenetv1.py `__ - - >>> # get the whole model with pretrained weights - >>> mobilenetv1 = tl.models.MobileNetV1(pretrained=True) - >>> # use for inference - >>> output = mobilenetv1(img1, is_train=False) - >>> prob = tf.nn.softmax(output)[0].numpy() - - Extract features and train a classifier with 100 classes - - >>> # get model without the last layer - >>> cnn = tl.models.MobileNetV1(pretrained=True, end_with='reshape').as_layer() - >>> # add one more layer and build new model - >>> ni = Input([None, 224, 224, 3], name="inputs") - >>> nn = cnn(ni) - >>> nn = Conv2d(100, (1, 1), (1, 1), name='out')(nn) - >>> nn = Flatten(name='flatten')(nn) - >>> model = tl.models.Model(inputs=ni, outputs=nn) - >>> # train your own classifier (only update the last layer) - >>> train_params = model.get_layer('out').trainable_weights - - Returns - ------- - static MobileNetV1.
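All of these static model builders share the same ``end_with`` early-exit pattern: layers are appended in order and construction stops as soon as the named end point is reached. The sketch below is a minimal illustration of that pattern, not the file's exact code; ``build_until`` and ``blocks`` are hypothetical names.

```python
# Minimal sketch of the ``end_with`` early-exit pattern (illustrative names).
def build_until(ni, blocks, end_with):
    """``blocks`` is an ordered list of (layer_name, constructor) pairs."""
    n = ni
    for name, make_layer in blocks:
        n = make_layer(n)      # stack the next block on top
        if name == end_with:   # stop early to expose intermediate features
            break
    return n
```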
- """ - ni = Input([None, 224, 224, 3], name="input") - - for i in range(len(layer_names)): - if i == 0: - n = conv_block(ni, n_filters[i], strides=(2, 2), name=layer_names[i]) - elif layer_names[i] in ['depth2', 'depth4', 'depth6', 'depth12']: - n = depthwise_conv_block(n, n_filters[i], strides=(2, 2), name=layer_names[i]) - elif layer_names[i] == 'globalmeanpool': - n = GlobalMeanPool2d(name='globalmeanpool')(n) - elif layer_names[i] == 'reshape': - n = Reshape([-1, 1, 1, 1024], name='reshape')(n) - elif layer_names[i] == 'out': - n = Conv2d(1000, (1, 1), (1, 1), name='out')(n) - n = Flatten(name='flatten')(n) - else: - n = depthwise_conv_block(n, n_filters[i], name=layer_names[i]) - - if layer_names[i] == end_with: - break - - network = Model(inputs=ni, outputs=n, name=name) - - if pretrained: - restore_params(network) - - return network diff --git a/tensorlayer/models/resnet.py b/tensorlayer/models/resnet.py deleted file mode 100644 index 458f25912..000000000 --- a/tensorlayer/models/resnet.py +++ /dev/null @@ -1,203 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""ResNet for ImageNet. - -# Reference: -- [Deep Residual Learning for Image Recognition]( - https://arxiv.org/abs/1512.03385) (CVPR 2016 Best Paper Award) - -""" - -import os - -import tensorflow as tf - -from tensorlayer import logging -from tensorlayer.files import (assign_weights, load_npz, maybe_download_and_extract) -from tensorlayer.layers import (BatchNorm, Conv2d, Dense, Elementwise, GlobalMeanPool2d, Input, MaxPool2d) -from tensorlayer.models import Model - -__all__ = [ - 'ResNet50', -] - - -def identity_block(input, kernel_size, n_filters, stage, block): - """The identity block where there is no conv layer at shortcut. - - Parameters - ---------- - input : tf tensor - Input tensor from above layer. - kernel_size : int - The kernel size of middle conv layer at main path. - n_filters : list of integers - The numbers of filters for 3 conv layer at main path. - stage : int - Current stage label. - block : str - Current block label. - - Returns - ------- - Output tensor of this block. - - """ - filters1, filters2, filters3 = n_filters - conv_name_base = 'res' + str(stage) + block + '_branch' - bn_name_base = 'bn' + str(stage) + block + '_branch' - - x = Conv2d(filters1, (1, 1), W_init=tf.initializers.he_normal(), name=conv_name_base + '2a')(input) - x = BatchNorm(name=bn_name_base + '2a', act='relu')(x) - - ks = (kernel_size, kernel_size) - x = Conv2d(filters2, ks, padding='SAME', W_init=tf.initializers.he_normal(), name=conv_name_base + '2b')(x) - x = BatchNorm(name=bn_name_base + '2b', act='relu')(x) - - x = Conv2d(filters3, (1, 1), W_init=tf.initializers.he_normal(), name=conv_name_base + '2c')(x) - x = BatchNorm(name=bn_name_base + '2c')(x) - - x = Elementwise(tf.add, act='relu')([x, input]) - return x - - -def conv_block(input, kernel_size, n_filters, stage, block, strides=(2, 2)): - """The conv block where there is a conv layer at shortcut. - - Parameters - ---------- - input : tf tensor - Input tensor from above layer. - kernel_size : int - The kernel size of middle conv layer at main path. - n_filters : list of integers - The numbers of filters for 3 conv layer at main path. - stage : int - Current stage label. - block : str - Current block label. - strides : tuple - Strides for the first conv layer in the block. - - Returns - ------- - Output tensor of this block. 
- - """ - filters1, filters2, filters3 = n_filters - conv_name_base = 'res' + str(stage) + block + '_branch' - bn_name_base = 'bn' + str(stage) + block + '_branch' - - x = Conv2d(filters1, (1, 1), strides=strides, W_init=tf.initializers.he_normal(), name=conv_name_base + '2a')(input) - x = BatchNorm(name=bn_name_base + '2a', act='relu')(x) - - ks = (kernel_size, kernel_size) - x = Conv2d(filters2, ks, padding='SAME', W_init=tf.initializers.he_normal(), name=conv_name_base + '2b')(x) - x = BatchNorm(name=bn_name_base + '2b', act='relu')(x) - - x = Conv2d(filters3, (1, 1), W_init=tf.initializers.he_normal(), name=conv_name_base + '2c')(x) - x = BatchNorm(name=bn_name_base + '2c')(x) - - shortcut = Conv2d(filters3, (1, 1), strides=strides, W_init=tf.initializers.he_normal(), - name=conv_name_base + '1')(input) - shortcut = BatchNorm(name=bn_name_base + '1')(shortcut) - - x = Elementwise(tf.add, act='relu')([x, shortcut]) - return x - - -block_names = ['2a', '2b', '2c', '3a', '3b', '3c', '3d', '4a', '4b', '4c', '4d', '4e', '4f', '5a', '5b', '5c' - ] + ['avg_pool', 'fc1000'] -block_filters = [[64, 64, 256], [128, 128, 512], [256, 256, 1024], [512, 512, 2048]] - - -def ResNet50(pretrained=False, end_with='fc1000', n_classes=1000, name=None): - """Pre-trained MobileNetV1 model (static mode). Input shape [?, 224, 224, 3]. - To use pretrained model, input should be in BGR format and subtracted from ImageNet mean [103.939, 116.779, 123.68]. - - Parameters - ---------- - pretrained : boolean - Whether to load pretrained weights. Default False. - end_with : str - The end point of the model [conv, depth1, depth2 ... depth13, globalmeanpool, out]. - Default ``out`` i.e. the whole model. - n_classes : int - Number of classes in final prediction. - name : None or str - Name for this model. - - Examples - --------- - Classify ImageNet classes, see `tutorial_models_resnet50.py` - - >>> # get the whole model with pretrained weights - >>> resnet = tl.models.ResNet50(pretrained=True) - >>> # use for inferencing - >>> output = resnet(img1, is_train=False) - >>> prob = tf.nn.softmax(output)[0].numpy() - - Extract the features before fc layer - >>> resnet = tl.models.ResNet50(pretrained=True, end_with='5c') - >>> output = resnet(img1, is_train=False) - - Returns - ------- - ResNet50 model. 
- - """ - ni = Input([None, 224, 224, 3], name="input") - n = Conv2d(64, (7, 7), strides=(2, 2), padding='SAME', W_init=tf.initializers.he_normal(), name='conv1')(ni) - n = BatchNorm(name='bn_conv1', act='relu')(n) - n = MaxPool2d((3, 3), strides=(2, 2), name='max_pool1')(n) - - for i, block_name in enumerate(block_names): - if len(block_name) == 2: - stage = int(block_name[0]) - block = block_name[1] - if block == 'a': - strides = (1, 1) if stage == 2 else (2, 2) - n = conv_block(n, 3, block_filters[stage - 2], stage=stage, block=block, strides=strides) - else: - n = identity_block(n, 3, block_filters[stage - 2], stage=stage, block=block) - elif block_name == 'avg_pool': - n = GlobalMeanPool2d(name='avg_pool')(n) - elif block_name == 'fc1000': - n = Dense(n_classes, name='fc1000')(n) - - if block_name == end_with: - break - - network = Model(inputs=ni, outputs=n, name=name) - - if pretrained: - restore_params(network) - - return network - - -def restore_params(network, path='models'): - logging.info("Restore pre-trained parameters") - maybe_download_and_extract( - 'resnet50_weights_tf_dim_ordering_tf_kernels.h5', - path, - 'https://github.com/fchollet/deep-learning-models/releases/download/v0.2/', - ) # ls -al - try: - import h5py - except Exception: - raise ImportError('h5py not imported') - - f = h5py.File(os.path.join(path, 'resnet50_weights_tf_dim_ordering_tf_kernels.h5'), 'r') - - for layer in network.all_layers: - if len(layer.all_weights) == 0: - continue - w_names = list(f[layer.name]) - params = [f[layer.name][n][:] for n in w_names] - # if 'bn' in layer.name: - # params = [x.reshape(1, 1, 1, -1) for x in params] - assign_weights(params, layer) - del params - - f.close() diff --git a/tensorlayer/models/seq2seq.py b/tensorlayer/models/seq2seq.py deleted file mode 100644 index 0473eeffc..000000000 --- a/tensorlayer/models/seq2seq.py +++ /dev/null @@ -1,163 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Dense, Dropout, Input -from tensorlayer.layers.core import Layer -from tensorlayer.models import Model - -__all__ = ['Seq2seq'] - - -class Seq2seq(Model): - """vanilla stacked layer Seq2Seq model. - - Parameters - ---------- - decoder_seq_length: int - The length of your target sequence - cell_enc : TensorFlow cell function - The RNN function cell for your encoder stack, e.g tf.keras.layers.GRUCell - cell_dec : TensorFlow cell function - The RNN function cell for your decoder stack, e.g. tf.keras.layers.GRUCell - n_layer : int - The number of your RNN layers for both encoder and decoder block - embedding_layer : tl.Layer - A embedding layer, e.g. tl.layers.Embedding(vocabulary_size=voc_size, embedding_size=emb_dim) - name : str - The model name - - Examples - --------- - Classify stacked-layer Seq2Seq model, see `chatbot `__ - - Returns - ------- - static stacked-layer Seq2Seq model. 
- """ - - def __init__(self, decoder_seq_length, cell_enc, cell_dec, n_units=256, n_layer=3, embedding_layer=None, name=None): - super(Seq2seq, self).__init__(name=name) - self.embedding_layer = embedding_layer - self.vocabulary_size = embedding_layer.vocabulary_size - self.embedding_size = embedding_layer.embedding_size - self.n_layer = n_layer - self.enc_layers = [] - self.dec_layers = [] - for i in range(n_layer): - if (i == 0): - self.enc_layers.append( - tl.layers.RNN( - cell=cell_enc(units=n_units), in_channels=self.embedding_size, return_last_state=True - ) - ) - else: - self.enc_layers.append( - tl.layers.RNN(cell=cell_enc(units=n_units), in_channels=n_units, return_last_state=True) - ) - - for i in range(n_layer): - if (i == 0): - self.dec_layers.append( - tl.layers.RNN( - cell=cell_dec(units=n_units), in_channels=self.embedding_size, return_last_state=True - ) - ) - else: - self.dec_layers.append( - tl.layers.RNN(cell=cell_dec(units=n_units), in_channels=n_units, return_last_state=True) - ) - - self.reshape_layer = tl.layers.Reshape([-1, n_units]) - self.dense_layer = tl.layers.Dense(n_units=self.vocabulary_size, in_channels=n_units) - self.reshape_layer_after = tl.layers.Reshape([-1, decoder_seq_length, self.vocabulary_size]) - self.reshape_layer_individual_sequence = tl.layers.Reshape([-1, 1, self.vocabulary_size]) - - def inference(self, encoding, seq_length, start_token, top_n): - """Inference mode""" - """ - Parameters - ---------- - encoding : input tensor - The source sequences - seq_length : int - The expected length of your predicted sequence. - start_token : int - : The token of "start of sequence" - top_n : int - Random search algorithm based on the top top_n words sorted by the probablity. - """ - feed_output = self.embedding_layer(encoding[0]) - state = [None for i in range(self.n_layer)] - - for i in range(self.n_layer): - feed_output, state[i] = self.enc_layers[i](feed_output, return_state=True) - batch_size = len(encoding[0].numpy()) - decoding = [[start_token] for i in range(batch_size)] - feed_output = self.embedding_layer(decoding) - for i in range(self.n_layer): - feed_output, state[i] = self.dec_layers[i](feed_output, initial_state=state[i], return_state=True) - - feed_output = self.reshape_layer(feed_output) - feed_output = self.dense_layer(feed_output) - feed_output = self.reshape_layer_individual_sequence(feed_output) - feed_output = tf.argmax(feed_output, -1) - # [B, 1] - final_output = feed_output - - for i in range(seq_length - 1): - feed_output = self.embedding_layer(feed_output) - for i in range(self.n_layer): - feed_output, state[i] = self.dec_layers[i](feed_output, initial_state=state[i], return_state=True) - feed_output = self.reshape_layer(feed_output) - feed_output = self.dense_layer(feed_output) - feed_output = self.reshape_layer_individual_sequence(feed_output) - ori_feed_output = feed_output - if (top_n is not None): - for k in range(batch_size): - idx = np.argpartition(ori_feed_output[k][0], -top_n)[-top_n:] - probs = [ori_feed_output[k][0][i] for i in idx] - probs = probs / np.sum(probs) - feed_output = np.random.choice(idx, p=probs) - feed_output = tf.convert_to_tensor([[feed_output]], dtype=tf.int64) - if (k == 0): - final_output_temp = feed_output - else: - final_output_temp = tf.concat([final_output_temp, feed_output], 0) - feed_output = final_output_temp - else: - feed_output = tf.argmax(feed_output, -1) - final_output = tf.concat([final_output, feed_output], 1) - - return final_output, state - - def forward(self, inputs, seq_length=20, 
start_token=None, return_state=False, top_n=None): - - state = [None for i in range(self.n_layer)] - if (self.is_train): - encoding = inputs[0] - enc_output = self.embedding_layer(encoding) - - for i in range(self.n_layer): - enc_output, state[i] = self.enc_layers[i](enc_output, return_state=True) - - decoding = inputs[1] - dec_output = self.embedding_layer(decoding) - - for i in range(self.n_layer): - dec_output, state[i] = self.dec_layers[i](dec_output, initial_state=state[i], return_state=True) - - dec_output = self.reshape_layer(dec_output) - denser_output = self.dense_layer(dec_output) - output = self.reshape_layer_after(denser_output) - else: - encoding = inputs - output, state = self.inference(encoding, seq_length, start_token, top_n) - - if (return_state): - return output, state - else: - return output diff --git a/tensorlayer/models/seq2seq_with_attention.py b/tensorlayer/models/seq2seq_with_attention.py deleted file mode 100644 index 800bbaa61..000000000 --- a/tensorlayer/models/seq2seq_with_attention.py +++ /dev/null @@ -1,210 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- - -import numpy as np -import tensorflow as tf - -import tensorlayer as tl -from tensorlayer.layers import Dense, Dropout, Input -from tensorlayer.layers.core import Layer -from tensorlayer.models import Model - -__all__ = ['Seq2seqLuongAttention'] - - -class Encoder(Layer): - - def __init__(self, hidden_size, cell, embedding_layer, name=None): - super(Encoder, self).__init__(name) - self.cell = cell(hidden_size) - self.hidden_size = hidden_size - self.embedding_layer = embedding_layer - self.build((None, None, self.embedding_layer.embedding_size)) - self._built = True - - def build(self, inputs_shape): - self.cell.build(input_shape=tuple(inputs_shape)) - self._built = True - if self._trainable_weights is None: - self._trainable_weights = list() - - for var in self.cell.trainable_variables: - self._trainable_weights.append(var) - - def forward(self, src_seq, initial_state=None): - - states = initial_state if initial_state is not None else self.cell.get_initial_state(src_seq) - encoding_hidden_states = list() - total_steps = src_seq.get_shape().as_list()[1] - for time_step in range(total_steps): - if not isinstance(states, list): - states = [states] - output, states = self.cell.call(src_seq[:, time_step, :], states, training=self.is_train) - encoding_hidden_states.append(states[0]) - return output, encoding_hidden_states, states[0] - - -class Decoder_Attention(Layer): - - def __init__(self, hidden_size, cell, embedding_layer, method, name=None): - super(Decoder_Attention, self).__init__(name) - self.cell = cell(hidden_size) - self.hidden_size = hidden_size - self.embedding_layer = embedding_layer - self.method = method - self.build((None, hidden_size + self.embedding_layer.embedding_size)) - self._built = True - - def build(self, inputs_shape): - self.cell.build(input_shape=tuple(inputs_shape)) - self._built = True - if self.method == "concat": - self.W = self._get_weights("W", shape=(2 * self.hidden_size, self.hidden_size)) - self.V = self._get_weights("V", shape=(self.hidden_size, 1)) - elif self.method == "general": - self.W = self._get_weights("W", shape=(self.hidden_size, self.hidden_size)) - if self._trainable_weights is None: - self._trainable_weights = list() - - for var in self.cell.trainable_variables: - self._trainable_weights.append(var) - - def score(self, encoding_hidden, hidden, method): - # encoding = [B, T, H] - # hidden = [B, H] - # combined = [B,T,2H] - if method == "concat": - # hidden
= [B,H]->[B,1,H]->[B,T,H] - hidden = tf.expand_dims(hidden, 1) - hidden = tf.tile(hidden, [1, encoding_hidden.shape[1], 1]) - # combined = [B,T,2H] - combined = tf.concat([hidden, encoding_hidden], 2) - combined = tf.cast(combined, tf.float32) - score = tf.tensordot(combined, self.W, axes=[[2], [0]]) # score = [B,T,H] - score = tf.nn.tanh(score) # score = [B,T,H] - score = tf.tensordot(self.V, score, axes=[[0], [2]]) # score = [1,B,T] - score = tf.squeeze(score, axis=0) # score = [B,T] - - elif method is "dot": - # hidden = [B,H]->[B,H,1] - hidden = tf.expand_dims(hidden, 2) - score = tf.matmul(encoding_hidden, hidden) - score = tf.squeeze(score, axis=2) - elif method is "general": - # hidden = [B,H]->[B,H,1] - score = tf.matmul(hidden, self.W) - score = tf.expand_dims(score, 2) - score = tf.matmul(encoding_hidden, score) - score = tf.squeeze(score, axis=2) - - score = tf.nn.softmax(score, axis=-1) # score = [B,T] - return score - - def forward(self, dec_seq, enc_hiddens, last_hidden, method, return_last_state=False): - # dec_seq = [B, T_, V], enc_hiddens = [B, T, H], last_hidden = [B, H] - total_steps = dec_seq.get_shape().as_list()[1] - states = last_hidden - cell_outputs = list() - for time_step in range(total_steps): - attention_weights = self.score(enc_hiddens, last_hidden, method) - attention_weights = tf.expand_dims(attention_weights, 1) #[B, 1, T] - context = tf.matmul(attention_weights, enc_hiddens) #[B, 1, H] - context = tf.squeeze(context, 1) #[B, H] - inputs = tf.concat([dec_seq[:, time_step, :], context], 1) - if not isinstance(states, list): - states = [states] - cell_output, states = self.cell.call(inputs, states, training=self.is_train) - cell_outputs.append(cell_output) - last_hidden = states[0] - - cell_outputs = tf.convert_to_tensor(cell_outputs) - cell_outputs = tf.transpose(cell_outputs, perm=[1, 0, 2]) - if (return_last_state): - return cell_outputs, last_hidden - return cell_outputs - - -class Seq2seqLuongAttention(Model): - """Luong Attention-based Seq2Seq model. Implementation based on https://arxiv.org/pdf/1508.04025.pdf. - - Parameters - ---------- - hidden_size: int - The hidden size of both encoder and decoder RNN cells - cell : TensorFlow cell function - The RNN function cell for your encoder and decoder stack, e.g. tf.keras.layers.GRUCell - embedding_layer : tl.Layer - A embedding layer, e.g. tl.layers.Embedding(vocabulary_size=voc_size, embedding_size=emb_dim) - method : str - The three alternatives to calculate the attention scores, e.g. "dot", "general" and "concat" - name : str - The model name - - - Returns - ------- - static single layer attention-based Seq2Seq model. - """ - - def __init__(self, hidden_size, embedding_layer, cell, method, name=None): - super(Seq2seqLuongAttention, self).__init__(name) - self.enc_layer = Encoder(hidden_size, cell, embedding_layer) - self.dec_layer = Decoder_Attention(hidden_size, cell, embedding_layer, method=method) - self.embedding_layer = embedding_layer - self.dense_layer = tl.layers.Dense(n_units=self.embedding_layer.vocabulary_size, in_channels=hidden_size) - self.method = method - - def inference(self, src_seq, encoding_hidden_states, last_hidden_states, seq_length, sos): - """Inference mode""" - """ - Parameters - ---------- - src_seq : input tensor - The source sequences - encoding_hidden_states : a list of tensor - The list of encoder's hidden states at each time step - last_hidden_states: tensor - The last hidden_state from encoder - seq_length : int - The expected length of your predicted sequence. 
- sos : int - : The token of "start of sequence" - """ - - batch_size = src_seq.shape[0] - decoding = [[sos] for i in range(batch_size)] - dec_output = self.embedding_layer(decoding) - outputs = [[0] for i in range(batch_size)] - for step in range(seq_length): - dec_output, last_hidden_states = self.dec_layer( - dec_output, encoding_hidden_states, last_hidden_states, method=self.method, return_last_state=True - ) - dec_output = tf.reshape(dec_output, [-1, dec_output.shape[-1]]) - dec_output = self.dense_layer(dec_output) - dec_output = tf.reshape(dec_output, [batch_size, -1, dec_output.shape[-1]]) - dec_output = tf.argmax(dec_output, -1) - outputs = tf.concat([outputs, dec_output], 1) - dec_output = self.embedding_layer(dec_output) - - return outputs[:, 1:] - - def forward(self, inputs, seq_length=20, sos=None): - src_seq = inputs[0] - src_seq = self.embedding_layer(src_seq) - enc_output, encoding_hidden_states, last_hidden_states = self.enc_layer(src_seq) - encoding_hidden_states = tf.convert_to_tensor(encoding_hidden_states) - encoding_hidden_states = tf.transpose(encoding_hidden_states, perm=[1, 0, 2]) - last_hidden_states = tf.convert_to_tensor(last_hidden_states) - - if (self.is_train): - dec_seq = inputs[1] - dec_seq = self.embedding_layer(dec_seq) - dec_output = self.dec_layer(dec_seq, encoding_hidden_states, last_hidden_states, method=self.method) - batch_size = dec_output.shape[0] - dec_output = tf.reshape(dec_output, [-1, dec_output.shape[-1]]) - dec_output = self.dense_layer(dec_output) - dec_output = tf.reshape(dec_output, [batch_size, -1, dec_output.shape[-1]]) - else: - dec_output = self.inference(src_seq, encoding_hidden_states, last_hidden_states, seq_length, sos) - - return dec_output diff --git a/tensorlayer/models/squeezenetv1.py b/tensorlayer/models/squeezenetv1.py deleted file mode 100644 index b38d42dc8..000000000 --- a/tensorlayer/models/squeezenetv1.py +++ /dev/null @@ -1,111 +0,0 @@ -#! /usr/bin/python -# -*- coding: utf-8 -*- -"""SqueezeNet for ImageNet.""" - -import os - -import tensorflow as tf - -from tensorlayer import logging -from tensorlayer.files import (assign_weights, load_npz, maybe_download_and_extract) -from tensorlayer.layers import (Concat, Conv2d, Dropout, GlobalMeanPool2d, Input, Lambda, MaxPool2d) -from tensorlayer.models import Model - -__all__ = [ - 'SqueezeNetV1', -] - -layer_names = [ - 'conv1', 'maxpool1', 'fire2', 'fire3', 'fire4', 'fire5', 'fire6', 'fire7', 'fire8', 'fire9', 'drop1', 'out' -] -n_filters = [16, 16, 32, 32, 48, 48, 64, 64] - - -def fire_block(n, n_filter, max_pool=False, name='fire_block'): - n = Conv2d(n_filter, (1, 1), (1, 1), tf.nn.relu, 'SAME', name=name + '.squeeze1x1')(n) - n1 = Conv2d(n_filter * 4, (1, 1), (1, 1), tf.nn.relu, 'SAME', name=name + '.expand1x1')(n) - n2 = Conv2d(n_filter * 4, (3, 3), (1, 1), tf.nn.relu, 'SAME', name=name + '.expand3x3')(n) - n = Concat(-1, name=name + '.concat')([n1, n2]) - if max_pool: - n = MaxPool2d((3, 3), (2, 2), 'VALID', name=name + '.max')(n) - return n - - -def restore_params(network, path='models'): - logging.info("Restore pre-trained parameters") - maybe_download_and_extract( - 'squeezenet.npz', path, 'https://github.com/tensorlayer/pretrained-models/raw/master/models/', - expected_bytes=7405613 - ) # ls -al - params = load_npz(name=os.path.join(path, 'squeezenet.npz')) - assign_weights(params[:len(network.all_weights)], network) - del params - - -def SqueezeNetV1(pretrained=False, end_with='out', name=None): - """Pre-trained SqueezeNetV1 model (static mode). 
Input shape [?, 224, 224, 3], value range [0, 1]. - - Parameters - ------------ - pretrained : boolean - Whether to load pretrained weights. Default False. - end_with : str - The end point of the model [conv1, maxpool1, fire2, fire3, fire4, ..., out]. Default ``out`` i.e. the whole model. - name : None or str - Name for this model. - - Examples - --------- - Classify ImageNet classes, see `tutorial_models_squeezenetv1.py `__ - - >>> # get the whole model - >>> squeezenet = tl.models.SqueezeNetV1(pretrained=True) - >>> # use for inference - >>> output = squeezenet(img1, is_train=False) - >>> prob = tf.nn.softmax(output)[0].numpy() - - Extract features and train a classifier with 100 classes - - >>> # get model without the last layer - >>> cnn = tl.models.SqueezeNetV1(pretrained=True, end_with='drop1').as_layer() - >>> # add one more layer and build new model - >>> ni = Input([None, 224, 224, 3], name="inputs") - >>> nn = cnn(ni) - >>> nn = Conv2d(100, (1, 1), (1, 1), padding='VALID', name='conv10')(nn) - >>> nn = GlobalMeanPool2d(name='globalmeanpool')(nn) - >>> model = tl.models.Model(inputs=ni, outputs=nn) - >>> # train your own classifier (only update the last layer) - >>> train_params = model.get_layer('conv10').trainable_weights - - Returns - ------- - static SqueezeNetV1. - - """ - ni = Input([None, 224, 224, 3], name="input") - n = Lambda(lambda x: x * 255, name='scale')(ni) - - for i in range(len(layer_names)): - if layer_names[i] == 'conv1': - n = Conv2d(64, (3, 3), (2, 2), tf.nn.relu, 'SAME', name='conv1')(n) - elif layer_names[i] == 'maxpool1': - n = MaxPool2d((3, 3), (2, 2), 'VALID', name='maxpool1')(n) - elif layer_names[i] == 'drop1': - n = Dropout(keep=0.5, name='drop1')(n) - elif layer_names[i] == 'out': - n = Conv2d(1000, (1, 1), (1, 1), padding='VALID', name='conv10')(n) # 13, 13, 1000 - n = GlobalMeanPool2d(name='globalmeanpool')(n) - elif layer_names[i] in ['fire3', 'fire5']: - n = fire_block(n, n_filters[i - 2], max_pool=True, name=layer_names[i]) - else: - n = fire_block(n, n_filters[i - 2], max_pool=False, name=layer_names[i]) - - if layer_names[i] == end_with: - break - - network = Model(inputs=ni, outputs=n, name=name) - - if pretrained: - restore_params(network) - - return network diff --git a/tensorlayer/optimizers/__init__.py b/tensorlayer/optimizers/__init__.py index e74b38801..9d654bbfb 100644 --- a/tensorlayer/optimizers/__init__.py +++ b/tensorlayer/optimizers/__init__.py @@ -5,8 +5,21 @@ various benchmarks and domain-specific problems. In addition, we also support transparent access to native TensorFlow parameters. For example, we provide not only layers for local response normalization, but also -layers that allow user to apply ``tf.nn.lrn`` on ``network.outputs``. +layers that allow users to apply ``tf.ops.lrn`` on ``network.outputs``. More functions can be found in `TensorFlow API `__.
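The optimizer classes introduced below are dispatched per backend, so a training loop can stay backend-agnostic. A minimal sketch under the TensorFlow backend, assuming its ``Adam`` wrapper keeps the ``apply_gradients(grads_and_vars)`` interface that the MindSpore and Paddle wrappers below expose; the model and loss names are illustrative:

```python
import tensorflow as tf
import tensorlayer as tl

optimizer = tl.optimizers.Adam(learning_rate=0.001)

def train_step(model, x, y, loss_fn):
    # Differentiate with the backend's autodiff, then hand (grad, var)
    # pairs to the backend-dispatched optimizer.
    with tf.GradientTape() as tape:
        loss = loss_fn(model(x), y)
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss
```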
""" from .amsgrad import AMSGrad + +# ['Adadelta', 'Adagrad', 'Adam', 'Adamax', 'Ftrl', 'Nadam', 'RMSprop', 'SGD', 'Momentum', 'Lamb', 'LARS'] +from .load_optimizers_backend import Adadelta +from .load_optimizers_backend import Adagrad +from .load_optimizers_backend import Adam +from .load_optimizers_backend import Adamax +from .load_optimizers_backend import Ftrl +from .load_optimizers_backend import Nadam +from .load_optimizers_backend import RMSprop +from .load_optimizers_backend import SGD +from .load_optimizers_backend import Momentum +from .load_optimizers_backend import Lamb +from .load_optimizers_backend import LARS diff --git a/tensorlayer/optimizers/load_optimizers_backend.py b/tensorlayer/optimizers/load_optimizers_backend.py new file mode 100644 index 000000000..0fc4c0892 --- /dev/null +++ b/tensorlayer/optimizers/load_optimizers_backend.py @@ -0,0 +1,14 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from __future__ import absolute_import, division, print_function +from tensorlayer.backend.ops.load_backend import BACKEND + +if BACKEND == 'tensorflow': + from .tensorflow_optimizers import * +elif BACKEND == 'mindspore': + from .mindspore_optimizers import * +elif BACKEND == 'paddle': + from .paddle_optimizers import * +else: + raise NotImplementedError("This backend is not supported") diff --git a/tensorlayer/optimizers/mindspore_optimizers.py b/tensorlayer/optimizers/mindspore_optimizers.py new file mode 100644 index 000000000..6472d4e7d --- /dev/null +++ b/tensorlayer/optimizers/mindspore_optimizers.py @@ -0,0 +1,158 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from __future__ import absolute_import, division, print_function +from mindspore.nn import optim as optimizer +import mindspore as ms +from mindspore.nn import Cell + +__all__ = ['Adadelta', 'Adagrad', 'Adam', 'Adamax', 'Ftrl', 'Nadam', 'RMSprop', 'SGD', 'Momentum', 'Lamb', 'LARS'] + + +class Adadelta(Cell): + + def __init__(self): + pass + + def app_gradients(self): + raise Exception('Adadelta optimizer function not implemented') + + +class Adagrad(Cell): + + def __init__(self): + pass + + def apply_gradients(self): + raise Exception('Adagrad optimizer function not implemented') + + +class Adam(Cell): + + def __init__( + self, + learning_rate=0.001, + beta_1=0.9, + beta_2=0.999, + epsilon=1e-8, + ): + self.adam = optimizer.Adam + self.learn_rate = learning_rate + self.beta_1 = beta_1 + self.beta_2 = beta_2 + self.epsilon = epsilon + + def apply_gradients(self, grads_and_vars): + grads, vars = list(zip(*grads_and_vars)) + optimizer_adam = self.adam( + vars, learning_rate=self.learn_rate, beta1=self.beta_1, beta2=self.beta_2, eps=self.epsilon + ) + optimizer_adam(grads) + + +class Adamax(Cell): + + def __init__(self): + pass + + def apply_gradients(self): + raise Exception('Adamax optimizer function not implemented') + + +class Ftrl(Cell): + + def __init__(self): + pass + + def apply_gradients(self): + raise Exception('Ftrl optimizer function not implemented') + + +class Nadam(Cell): + + def __init__(self): + pass + + def apply_gradients(self): + raise Exception('Nadam optimizer function not implemented') + + +class RMSprop(Cell): + + def __init__(self): + pass + + def apply_gradients(self): + raise Exception('RMSprop optimizer function not implemented') + + +class RMSprop(Cell): + + def __init__(self): + pass + + def apply_gradients(self): + raise Exception('RMSprop optimizer function not implemented') + + +class SGD(Cell): + + def __init__(self, learning_rate, momentum): + self.sgd = optimizer.SGD + 
self.learn_rate = learning_rate + self.momentum = momentum + + def apply_gradients(self, grads_and_vars): + grads, vars = list(zip(*grads_and_vars)) + optimizer_sgd = self.sgd(vars, learning_rate=self.learn_rate, momentum=self.momentum) + optimizer_sgd(grads) + + +class Momentum(Cell): + + def __init__(self, learning_rate, momentum): + self.mom = optimizer.Momentum + self.learn_rate = learning_rate + self.momentum = momentum + + def apply_gradients(self, grads_and_vars, **kwargs): + grads, vars = list(zip(*grads_and_vars)) + optimizer_mom = self.mom(vars, learning_rate=self.learn_rate, momentum=self.momentum, **kwargs) + optimizer_mom(grads) + + +class Lamb(Cell): + + def __init__( + self, decay_steps, warmup_steps=0, start_learning_rate=0.1, end_learning_rate=0.0001, power=1.0, beta1=0.9, + beta2=0.999, eps=1e-06, weight_decay=0.0 + ): + self.lamb = optimizer.Lamb + self.decay_steps = decay_steps + self.warmup_steps = warmup_steps + self.start_learning_rate = start_learning_rate + self.end_learning_rate = end_learning_rate + self.power = power + self.beta1 = beta1 + self.beta2 = beta2 + self.eps = eps + self.weight_decay = weight_decay + + def apply_gradients(self, grads_and_vars): + grads, vars = list(zip(*grads_and_vars)) + optimizer_lamb = self.lamb( + params=vars, decay_steps=self.decay_steps, warmup_steps=self.warmup_steps, + start_learning_rate=self.start_learning_rate, end_learning_rate=self.end_learning_rate, power=self.power, + beta1=self.beta1, beta2=self.beta2, eps=self.eps, weight_decay=self.weight_decay + ) + optimizer_lamb(grads) + + +class LARS(object): + + def __init__(self, optimizer, **kwargs): + self.lars = ms.nn.LARS(optimizer=optimizer, **kwargs) + + def apply_gradients(self, grads_and_vars): + grads, _ = list(zip(*grads_and_vars)) + self.lars(grads) diff --git a/tensorlayer/optimizers/paddle_optimizers.py b/tensorlayer/optimizers/paddle_optimizers.py new file mode 100644 index 000000000..c963b5a48 --- /dev/null +++ b/tensorlayer/optimizers/paddle_optimizers.py @@ -0,0 +1,347 @@ +#! 
/usr/bin/python +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function +import paddle +from paddle.optimizer import Optimizer + +__all__ = ['Adadelta', 'Adagrad', 'Adam', 'Adamax', 'Ftrl', 'Nadam', 'RMSprop', 'SGD', 'Momentum', 'Lamb', 'LARS'] + + +class Adadelta(Optimizer): + + def __init__(self, learning_rate=0.001, epsilon=1.0e-6, rho=0.95): + if learning_rate is None: + raise ValueError('learning_rate is not set.') + if epsilon is None: + raise ValueError('epsilon is not set.') + if rho is None: + raise ValueError('rho is not set.') + self.learning_rate = learning_rate + self.epsilon = epsilon + self.rho = rho + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + + self.adadelta = paddle.optimizer.Adadelta( + learning_rate=self.learning_rate, epsilon=self.epsilon, rho=self.rho, parameters=weights + ) + loss.backward() + weights_and_grads = self.adadelta.backward(loss=loss, parameters=weights) + + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.adadelta._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.adadelta.clear_grad() + + +class Adagrad(Optimizer): + + def __init__(self, learning_rate, initial_accumulator_value=0.0, epsilon=1.0e-6): + + if learning_rate is None: + raise ValueError('learning_rate is not set.') + if initial_accumulator_value is None: + raise ValueError('initial_accumulator_value is not set.') + if epsilon is None: + raise ValueError('epsilon is not set.') + + self.learning_rate = learning_rate + self.initial_accumulator_value = initial_accumulator_value + self.epsilon = epsilon + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + self.adagrad = paddle.optimizer.Adagrad( + learning_rate=self.learning_rate, epsilon=self.epsilon, + initial_accumulator_value=self.initial_accumulator_value, parameters=weights + ) + loss.backward() + weights_and_grads = self.adagrad.backward(loss=loss, parameters=weights) + + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.adagrad._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.adagrad.clear_grad() + + +class Adam(Optimizer): + + def __init__(self, learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1.0e-8): + + if learning_rate is None: + raise ValueError('learning_rate is not set.') + if beta_1 is None: + raise ValueError('beta_1 is not set.') + if beta_2 is None: + raise ValueError('beta_2 is not set.') + if epsilon is None: + raise ValueError('epsilon is not set.') + + if not 0 <= beta_1 < 1: + raise ValueError("Invalid value of beta1, expect beta1 in [0,1).") + if not 0 <= beta_2 < 1: + raise ValueError("Invalid value of beta2, expect beta2 in [0,1).") + + self.learning_rate = learning_rate + self.beta_1 = beta_1 + self.beta_2 = beta_2 + self.epsilon = epsilon + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + self.adam = paddle.optimizer.Adam( + learning_rate=self.learning_rate, beta1=self.beta_1, beta2=self.beta_2, epsilon=self.epsilon, + parameters=weights + ) +
loss.backward() + weights_and_grads = self.adam.backward(loss, parameters=weights) + + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.adam._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.adam.clear_grad() + + +class Adamax(Optimizer): + + def __init__(self, learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1.0e-8): + + if learning_rate is None: + raise ValueError('learning_rate is not set.') + if beta_1 is None: + raise ValueError('beta_1 is not set.') + if beta_2 is None: + raise ValueError('beta_2 is not set.') + if epsilon is None: + raise ValueError('epsilon is not set.') + + if not 0 <= beta_1 < 1: + raise ValueError("Invalid value of beta1, expect beta1 in [0,1).") + if not 0 <= beta_2 < 1: + raise ValueError("Invalid value of beta2, expect beta2 in [0,1).") + + self.learning_rate = learning_rate + self.beta_1 = beta_1 + self.beta_2 = beta_2 + self.epsilon = epsilon + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + self.adamax = paddle.optimizer.Adamax( + learning_rate=self.learning_rate, beta1=self.beta_1, beta2=self.beta_2, epsilon=self.epsilon, + parameters=weights + ) + loss.backward() + weights_and_grads = self.adamax.backward(loss=loss, parameters=weights) + + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.adamax._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.adamax.clear_grad() + + +class Ftrl(Optimizer): + + def __init__(self): + + raise Exception('Ftrl optimizer function not implemented') + + +class Nadam(Optimizer): + + def __init__(self): + + raise Exception('Nadam optimizer function not implemented') + + +class RMSprop(Optimizer): + + def __init__(self, learning_rate=0.001, rho=0.95, epsilon=1.0e-6, momentum=0.0, centered=False): + if learning_rate is None: + raise ValueError("learning_rate is not set.") + if rho is None: + raise ValueError("rho is not set.") + if epsilon is None: + raise ValueError("epsilon is not set.") + if momentum is None: + raise ValueError("momentum is not set.") + if not 0.0 <= epsilon: + raise ValueError("Invalid value of epsilon, expect epsilon >= 0.") + if not 0.0 <= momentum: + raise ValueError("Invalid value of momentum, expect momentum >= 0.") + if not 0.0 <= rho: + raise ValueError("Invalid value of rho, expect rho >= 0.") + + self.learning_rate = learning_rate + self.epsilon = epsilon + self.rho = rho + self.momentum = momentum + self.centered = centered + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + + self.rmsprop = paddle.optimizer.RMSProp( + learning_rate=self.learning_rate, epsilon=self.epsilon, rho=self.rho, momentum=self.momentum, + parameters=weights + ) + loss.backward() + weights_and_grads = self.rmsprop.backward(loss=loss, parameters=weights) + + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.rmsprop._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.rmsprop.clear_grad() + + +class SGD(Optimizer): + + def __init__(self,
learning_rate=0.001): + if learning_rate is None: + raise ValueError("learning_rate is not set.") + + self.learning_rate = learning_rate + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + + self.sgd = paddle.optimizer.SGD(learning_rate=self.learning_rate, parameters=weights) + loss.backward() + weights_and_grads = self.sgd.backward(loss=loss, parameters=weights) + + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.sgd._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.sgd.clear_grad() + + +class Momentum(Optimizer): + + def __init__(self, learning_rate=0.001, momentum=0.9, nesterov=False): + if learning_rate is None: + raise ValueError("learning_rate is not set") + if momentum is None: + raise ValueError("momentum is not set") + + self.learning_rate = learning_rate + self.momentum = momentum + self.nesterov = nesterov + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + + self.moment = paddle.optimizer.Momentum( + learning_rate=self.learning_rate, momentum=self.momentum, parameters=weights, use_nesterov=self.nesterov + ) + loss.backward() + weights_and_grads = self.moment.backward(loss=loss, parameters=weights) + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.moment._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.moment.clear_grad() + + +class Lamb(Optimizer): + + def __init__(self, learning_rate=0.001, lamb_weight_decay=0.01, beta_1=0.9, beta_2=0.999, epsilon=1.0e-6): + + if learning_rate is None: + raise ValueError('learning_rate is not set.') + if lamb_weight_decay is None: + raise ValueError('lamb_weight_decay is not set.') + if beta_1 is None: + raise ValueError('beta_1 is not set.') + if beta_2 is None: + raise ValueError('beta_2 is not set.') + if epsilon is None: + raise ValueError('epsilon is not set.') + + if not 0 <= beta_1 < 1: + raise ValueError("Invalid value of beta1, expect beta1 in [0,1).") + if not 0 <= beta_2 < 1: + raise ValueError("Invalid value of beta2, expect beta2 in [0,1).") + + self.learning_rate = learning_rate + self.lamb_weight_decay = lamb_weight_decay + self.beta_1 = beta_1 + self.beta_2 = beta_2 + self.epsilon = epsilon + + def gradient(self, loss, weights): + if loss is None: + raise ValueError('loss is not set.') + if weights is None: + raise ValueError('weights is not set.') + + self.lamb = paddle.optimizer.Lamb( + learning_rate=self.learning_rate, lamb_weight_decay=self.lamb_weight_decay, beta1=self.beta_1, + beta2=self.beta_2, epsilon=self.epsilon, parameters=weights + ) + loss.backward() + weights_and_grads = self.lamb.backward(loss=loss, parameters=weights) + + return weights_and_grads + + def apply_gradients(self, weights_and_grads): + if weights_and_grads is None: + raise ValueError('weights_and_grads is not set.') + self.lamb._apply_optimize(loss=None, startup_program=None, params_grads=weights_and_grads) + self.lamb.clear_grad() + + +class LARS(Optimizer): + + def __init__(self): + + pass + + def gradient(self): + + pass + + def apply_gradients(self, weights_and_grads): + + raise Exception('LARS optimizer function not
implemented') diff --git a/tensorlayer/optimizers/tensorflow_optimizers.py b/tensorlayer/optimizers/tensorflow_optimizers.py new file mode 100644 index 000000000..971df3826 --- /dev/null +++ b/tensorlayer/optimizers/tensorflow_optimizers.py @@ -0,0 +1,45 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +from __future__ import absolute_import, division, print_function +import tensorflow as tf + +__all__ = ['Adadelta', 'Adagrad', 'Adam', 'Adamax', 'Ftrl', 'Nadam', 'RMSprop', 'SGD', 'Momentum', 'Lamb', 'LARS'] + +# Add module aliases + +# learning_rate=0.001, rho=0.95, epsilon=1e-07, name='Adadelta' +Adadelta = tf.optimizers.Adadelta + +# learning_rate=0.001, initial_accumulator_value=0.1, epsilon=1e-07,name='Adagrad' +Adagrad = tf.optimizers.Adagrad + +# learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False,name='Adam' +Adam = tf.optimizers.Adam + +# learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Adamax' +Adamax = tf.optimizers.Adamax + +# learning_rate=0.001, learning_rate_power=-0.5, initial_accumulator_value=0.1, +# l1_regularization_strength=0.0, l2_regularization_strength=0.0, name='Ftrl',l2_shrinkage_regularization_strength=0.0 +Ftrl = tf.optimizers.Ftrl + +# learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Nadam', +Nadam = tf.optimizers.Nadam + +# learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-07, centered=False,name='RMSprop' +RMSprop = tf.optimizers.RMSprop + +# learning_rate=0.01, momentum=0.0, nesterov=False, name='SGD' +SGD = tf.optimizers.SGD + +# learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False +Momentum = tf.compat.v1.train.MomentumOptimizer + + +def Lamb(**kwargs): + raise Exception('Lamb optimizer function not implemented') + + +def LARS(**kwargs): + raise Exception('LARS optimizer function not implemented') diff --git a/tensorlayer/package_info.py b/tensorlayer/package_info.py index e21969abd..65cf30614 100644 --- a/tensorlayer/package_info.py +++ b/tensorlayer/package_info.py @@ -2,10 +2,10 @@ # -*- coding: utf-8 -*- """Deep learning and Reinforcement learning library for Researchers and Engineers.""" -MAJOR = 2 -MINOR = 2 -PATCH = 3 -PRE_RELEASE = '' +MAJOR = 3 +MINOR = 0 +PATCH = 0 +PRE_RELEASE = 'alpha' # Use the following formatting: (major, minor, patch, prerelease) VERSION = (MAJOR, MINOR, PATCH, PRE_RELEASE) diff --git a/tensorlayer/prepro.py b/tensorlayer/prepro.py index 3ba2f308c..def019971 100644 --- a/tensorlayer/prepro.py +++ b/tensorlayer/prepro.py @@ -21,6 +21,7 @@ from skimage.morphology import binary_erosion as _binary_erosion from skimage.morphology import disk from skimage.morphology import erosion as _erosion +from skimage.transform import resize import tensorlayer as tl from tensorlayer.lazy_imports import LazyImport @@ -1840,11 +1841,13 @@ def imresize(x, size=None, interp='bicubic', mode=None): if x.shape[-1] == 1: # greyscale - x = scipy.misc.imresize(x[:, :, 0], size, interp=interp, mode=mode) + # x = scipy.misc.imresize(x[:, :, 0], size, interp=interp, mode=mode) + x = resize(x[:, :, 0], size) return x[:, :, np.newaxis] else: # rgb, bgr, rgba - return scipy.misc.imresize(x, size, interp=interp, mode=mode) + return resize(x, output_shape=size) + # return scipy.misc.imresize(x, size, interp=interp, mode=mode) # value scale diff --git a/tensorlayer/rein.py b/tensorlayer/rein.py index e5cbe6bd4..bd884d51a 100644 --- a/tensorlayer/rein.py +++ b/tensorlayer/rein.py @@ -81,10 +81,10 @@ def cross_entropy_reward_loss(logits, actions, rewards, name=None): 
---------- >>> states_batch_pl = tf.placeholder(tf.float32, shape=[None, D]) >>> network = InputLayer(states_batch_pl, name='input') - >>> network = DenseLayer(network, n_units=H, act=tf.nn.relu, name='relu1') + >>> network = DenseLayer(network, n_units=H, act=tf.ops.relu, name='relu1') >>> network = DenseLayer(network, n_units=3, name='out') >>> probs = network.outputs - >>> sampling_prob = tf.nn.softmax(probs) + >>> sampling_prob = tf.ops.softmax(probs) >>> actions_batch_pl = tf.placeholder(tf.int32, shape=[None]) >>> discount_rewards_batch_pl = tf.placeholder(tf.float32, shape=[None]) >>> loss = tl.rein.cross_entropy_reward_loss(probs, actions_batch_pl, discount_rewards_batch_pl) diff --git a/tensorlayer/vision/__init__.py b/tensorlayer/vision/__init__.py new file mode 100644 index 000000000..9f0fc8e48 --- /dev/null +++ b/tensorlayer/vision/__init__.py @@ -0,0 +1,3 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- +from . import transforms diff --git a/tensorlayer/vision/functional_cv2.py b/tensorlayer/vision/functional_cv2.py new file mode 100644 index 000000000..de8e18e42 --- /dev/null +++ b/tensorlayer/vision/functional_cv2.py @@ -0,0 +1,667 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import numpy as np +from numpy import sin, cos, tan +import math +import numbers +import importlib + + +def try_import(module_name): + """Try importing a module, with an informative error message on failure.""" + install_name = module_name + + if module_name.find('.') > -1: + install_name = module_name.split('.')[0] + + if module_name == 'cv2': + install_name = 'opencv-python' + + try: + mod = importlib.import_module(module_name) + return mod + except ImportError: + err_msg = ( + "Failed importing {}. This likely means that some TensorLayer modules " + "require additional dependencies that have to be " + "manually installed (usually with `pip install {}`). 
" + ).format(module_name, install_name) + raise ImportError(err_msg) + + +def crop(image, offset_height, offset_width, target_height, target_width): + image_height, image_width = image.shape[0:2] + if offset_width < 0: + raise ValueError('offset_width must be >0.') + if offset_height < 0: + raise ValueError('offset_height must be >0.') + if target_height < 0: + raise ValueError('target_height must be >0.') + if target_width < 0: + raise ValueError('target_width must be >0.') + if offset_width + target_width > image_width: + raise ValueError('offset_width + target_width must be <= image width.') + if offset_height + target_height > image_height: + raise ValueError('offset_height + target_height must be <= image height.') + + return image[offset_height:offset_height + target_height, offset_width:offset_width + target_width] + + +def center_crop(image, size, central_fraction): + + image_height, image_width = image.shape[0:2] + if size is not None: + if not isinstance(size, (int, list, tuple)) or (isinstance(size, (list, tuple)) and len(size) != 2): + raise TypeError( + "Size should be a single integer or a list/tuple (h, w) of length 2.But" + "got {}.".format(size) + ) + + if isinstance(size, int): + target_height = size + target_width = size + else: + target_height = size[0] + target_width = size[1] + + elif central_fraction is not None: + if central_fraction <= 0.0 or central_fraction > 1.0: + raise ValueError('central_fraction must be within (0, 1]') + + target_height = int(central_fraction * image_height) + target_width = int(central_fraction * image_width) + + crop_top = int(round((image_height - target_height) / 2.)) + crop_left = int(round((image_width - target_width) / 2.)) + + return crop(image, crop_top, crop_left, target_height, target_width) + + +def pad(image, padding, padding_value, mode): + + if isinstance(padding, int): + top = bottom = left = right = padding + + elif isinstance(padding, (tuple, list)): + if len(padding) == 2: + left = right = padding[0] + top = bottom = padding[1] + elif len(padding) == 4: + left = padding[0] + top = padding[1] + right = padding[2] + bottom = padding[3] + else: + raise TypeError("The size of the padding list or tuple should be 2 or 4." "But got {}".format(padding)) + else: + raise TypeError("Padding can be any of: a number, a tuple or list of size 2 or 4." "But got {}".format(padding)) + + if mode not in ['constant', 'edge', 'reflect', 'symmetric']: + raise ValueError("Padding mode should be 'constant', 'edge', 'reflect', or 'symmetric'.") + cv2 = try_import('cv2') + _cv2_pad_from_str = { + 'constant': cv2.BORDER_CONSTANT, + 'edge': cv2.BORDER_REPLICATE, + 'reflect': cv2.BORDER_REFLECT_101, + 'symmetric': cv2.BORDER_REFLECT + } + + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.copyMakeBorder( + image, top=top, bottom=bottom, left=left, right=right, borderType=_cv2_pad_from_str[mode], + value=padding_value + )[:, :, np.newaxis] + else: + return cv2.copyMakeBorder( + image, top=top, bottom=bottom, left=left, right=right, borderType=_cv2_pad_from_str[mode], + value=padding_value + ) + + +def resize(image, size, method): + + if not (isinstance(size, int) or (isinstance(size, (list, tuple)) and len(size) == 2)): + raise TypeError('Size should be a single number or a list/tuple (h, w) of length 2.' 'Got {}.'.format(size)) + if method not in ('nearest', 'bilinear', 'area', 'bicubic' 'lanczos'): + raise ValueError( + "Unknown resize method! 
resize method must be in " + "(\'nearest\',\'bilinear\',\'bicubic\',\'area\',\'lanczos\')" + ) + cv2 = try_import('cv2') + _cv2_interp_from_str = { + 'nearest': cv2.INTER_NEAREST, + 'bilinear': cv2.INTER_LINEAR, + 'area': cv2.INTER_AREA, + 'bicubic': cv2.INTER_CUBIC, + 'lanczos': cv2.INTER_LANCZOS4 + } + + h, w = image.shape[:2] + + if isinstance(size, int): + if (w <= h and w == size) or (h <= w and h == size): + return image + if w < h: + target_w = size + target_h = int(size * h / w) + else: + target_h = size + target_w = int(size * w / h) + size = (target_h, target_w) + output = cv2.resize(image, dsize=(size[1], size[0]), interpolation=_cv2_interp_from_str[method]) + if len(image.shape) == 3 and image.shape[2] == 1: + return output[:, :, np.newaxis] + else: + return output + + +def transpose(image, order): + + if not (isinstance(order, (list, tuple)) and len(order) == 3): + raise TypeError("Order must be a list/tuple of length 3. " "But got {}.".format(order)) + + image_shape = image.shape + if len(image_shape) == 2: + image = image[..., np.newaxis] + + return image.transpose(order) + + +def hwc_to_chw(image): + + image_shape = image.shape + if len(image_shape) == 2: + image = image[..., np.newaxis] + + return image.transpose((2, 0, 1)) + + +def chw_to_hwc(image): + + image_shape = image.shape + if len(image_shape) == 2: + image = image[..., np.newaxis] + + return image.transpose((1, 2, 0)) + + +def rgb_to_hsv(image): + + cv2 = try_import('cv2') + image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV) + return image + + +def hsv_to_rgb(image): + + cv2 = try_import('cv2') + image = cv2.cvtColor(image, cv2.COLOR_HSV2RGB) + return image + + +def rgb_to_gray(image, num_output_channels): + + cv2 = try_import('cv2') + + if num_output_channels == 1: + image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)[:, :, np.newaxis] + elif num_output_channels == 3: + image = np.broadcast_to(cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)[:, :, np.newaxis], image.shape) + else: + raise ValueError('num_output_channels should be either 1 or 3') + + return image + + +def adjust_brightness(image, brightness_factor): + if brightness_factor < 0: + raise ValueError('brightness_factor ({}) is not non-negative.'.format(brightness_factor)) + cv2 = try_import('cv2') + + table = np.array([i * brightness_factor for i in range(0, 256)]).clip(0, 255).astype('uint8') + + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.LUT(image, table)[:, :, np.newaxis] + else: + return cv2.LUT(image, table) + + +def adjust_contrast(image, contrast_factor): + """Adjusts contrast of an image. + + Args: + image (np.array): Image to be adjusted. + contrast_factor (float): How much to adjust the contrast. Can be any + non negative number. 0 gives a solid gray image, 1 gives the + original image while 2 increases the contrast by a factor of 2. + + Returns: + np.array: Contrast adjusted image. + + """ + if contrast_factor < 0: + raise ValueError('contrast_factor ({}) is not non-negative.'.format(contrast_factor)) + cv2 = try_import('cv2') + + table = np.array([(i - 127) * contrast_factor + 127 for i in range(0, 256)]).clip(0, 255).astype('uint8') + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.LUT(image, table)[:, :, np.newaxis] + else: + return cv2.LUT(image, table)
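Both cv2 adjustments above are implemented as a 256-entry lookup table applied per pixel; `cv2.LUT` simply indexes the table with each pixel value. A numpy-only sketch of the same contrast mapping, with illustrative names not taken from this diff:

```python
import numpy as np

def contrast_lut(contrast_factor):
    # Pixel value i maps to (i - 127) * factor + 127, clipped to [0, 255]:
    # the intensity range is stretched or squeezed around mid-gray.
    return np.array([(i - 127) * contrast_factor + 127 for i in range(256)]).clip(0, 255).astype('uint8')

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
out = contrast_lut(2.0)[img]  # fancy indexing plays the role of cv2.LUT
```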
+ + +def adjust_hue(image, hue_factor): + """Adjusts hue of an image. + + The image hue is adjusted by converting the image to HSV and + cyclically shifting the intensities in the hue channel (H). + The image is then converted back to original image mode. + + `hue_factor` is the amount of shift in H channel and must be in the + interval `[-0.5, 0.5]`. + + Args: + image (np.array): Image to be adjusted. + hue_factor (float): How much to shift the hue channel. Should be in + [-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in + HSV space in positive and negative direction respectively. + 0 means no shift. Therefore, both -0.5 and 0.5 will give an image + with complementary colors while 0 gives the original image. + + Returns: + np.array: Hue adjusted image. + + """ + if not (-0.5 <= hue_factor <= 0.5): + raise ValueError('hue_factor ({}) is not in [-0.5, 0.5].'.format(hue_factor)) + cv2 = try_import('cv2') + + dtype = image.dtype + image = image.astype(np.uint8) + hsv_img = cv2.cvtColor(image, cv2.COLOR_RGB2HSV_FULL) + h, s, v = cv2.split(hsv_img) + + alpha = hue_factor # np.random.uniform(hue_factor, hue_factor) always returns hue_factor + h = h.astype(np.uint8) + # uint8 addition takes care of rotation across boundaries + with np.errstate(over="ignore"): + h += np.uint8(alpha * 255) + hsv_img = cv2.merge([h, s, v]) + return cv2.cvtColor(hsv_img, cv2.COLOR_HSV2RGB_FULL).astype(dtype) + + +def adjust_saturation(image, saturation_factor): + """Adjusts color saturation of an image. + + Args: + image (np.array): Image to be adjusted. + saturation_factor (float): How much to adjust the saturation. 0 will + give a black and white image, 1 will give the original image while + 2 will enhance the saturation by a factor of 2. + + Returns: + np.array: Saturation adjusted image. + + """ + if saturation_factor < 0: + raise ValueError('saturation_factor ({}) is not non-negative.'.format(saturation_factor)) + cv2 = try_import('cv2') + + dtype = image.dtype + image = image.astype(np.float32) + alpha = saturation_factor # blend weight between the color image and its grayscale version + gray_img = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) + gray_img = gray_img[..., np.newaxis] + img = image * alpha + gray_img * (1 - alpha) + return img.clip(0, 255).astype(dtype) + + +def hflip(image): + """Horizontally flips the given image. + + Args: + image (np.array): Image to be flipped. + + Returns: + np.array: Horizontally flipped image. + + """ + cv2 = try_import('cv2') + + return cv2.flip(image, 1) + + +def vflip(image): + """Vertically flips the given np.array. + + Args: + image (np.array): Image to be flipped. + + Returns: + np.array: Vertically flipped image. + + """ + cv2 = try_import('cv2') + + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.flip(image, 0)[:, :, np.newaxis] + else: + return cv2.flip(image, 0)
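`padtoboundingbox` (defined next) places the source image at `(offset_height, offset_width)` on a `target_height` x `target_width` canvas and fills the remainder with `padding_value`. A minimal sketch of a call; the module path comes from this diff but whether it is meant to be imported directly is an assumption, and the values are illustrative:

```python
import numpy as np
import tensorlayer.vision.functional_cv2 as F  # assumed import path

img = np.full((4, 6, 3), 255, dtype=np.uint8)
# Place the 4x6 image 2 rows down and 1 column right on a 10x10 black canvas.
out = F.padtoboundingbox(img, offset_height=2, offset_width=1,
                         target_height=10, target_width=10, padding_value=0)
assert out.shape[:2] == (10, 10)
```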
+ + +def padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value): + ''' + + Parameters + ---------- + image: + A np.array image to be padded to size (target_height, target_width) + offset_height: + Number of rows of padding_values to add on top. + offset_width: + Number of columns of padding_values to add on the left. + target_height: + Height of output image. + target_width: + Width of output image. + padding_value: + value to pad + + Returns: + np.array image: padded image + ------- + + ''' + if offset_height < 0: + raise ValueError('offset_height must be >= 0') + if offset_width < 0: + raise ValueError('offset_width must be >= 0') + + height, width = image.shape[:2] + after_padding_width = target_width - offset_width - width + after_padding_height = target_height - offset_height - height + if after_padding_height < 0: + raise ValueError('image height must be <= target - offset') + if after_padding_width < 0: + raise ValueError('image width must be <= target - offset') + + return pad( + image, padding=(offset_width, offset_height, after_padding_width, after_padding_height), + padding_value=padding_value, mode='constant' + ) + + +def rotate(img, angle, interpolation, expand, center, fill): + """Rotates the image by angle. + + Args: + img (np.array): Image to be rotated. + angle (float or int): Rotation angle in degrees, counter-clockwise. + interpolation (int|str, optional): Interpolation method. If omitted, or if the + image has only one channel, it is set to cv2.INTER_NEAREST. + When using the cv2 backend, supported methods are as follows: + - "nearest": cv2.INTER_NEAREST, + - "bilinear": cv2.INTER_LINEAR, + - "bicubic": cv2.INTER_CUBIC + expand (bool, optional): Optional expansion flag. + If true, expands the output image to make it large enough to hold the entire rotated image. + If false or omitted, make the output image the same size as the input image. + Note that the expand flag assumes rotation around the center and no translation. + center (2-tuple, optional): Optional center of rotation. + Origin is the upper left corner. + Default is the center of the image. + fill (3-tuple or int): RGB pixel fill value for area outside the rotated image. + If int, it is used for all channels. + + Returns: + np.array: Rotated image. + + """ + cv2 = try_import('cv2') + _cv2_interp_from_str = { + 'nearest': cv2.INTER_NEAREST, + 'bilinear': cv2.INTER_LINEAR, + 'area': cv2.INTER_AREA, + 'bicubic': cv2.INTER_CUBIC, + 'lanczos': cv2.INTER_LANCZOS4 + } + h, w, c = img.shape + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'fill should be a single number or a list/tuple with length of image channels. '
+ 'But got {}'.format(fill) + ) + + if center is None: + center = (w / 2.0, h / 2.0) + M = cv2.getRotationMatrix2D(center, angle, 1) + + if expand: + + def transform(x, y, matrix): + (a, b, c, d, e, f) = matrix + return a * x + b * y + c, d * x + e * y + f + + # calculate output size + xx = [] + yy = [] + + angle = -math.radians(angle) + expand_matrix = [ + round(math.cos(angle), 15), + round(math.sin(angle), 15), + 0.0, + round(-math.sin(angle), 15), + round(math.cos(angle), 15), + 0.0, + ] + + post_trans = (0, 0) + expand_matrix[2], expand_matrix[5] = transform( + -center[0] - post_trans[0], -center[1] - post_trans[1], expand_matrix + ) + expand_matrix[2] += center[0] + expand_matrix[5] += center[1] + + for x, y in ((0, 0), (w, 0), (w, h), (0, h)): + x, y = transform(x, y, expand_matrix) + xx.append(x) + yy.append(y) + nw = math.ceil(max(xx)) - math.floor(min(xx)) + nh = math.ceil(max(yy)) - math.floor(min(yy)) + + M[0, 2] += (nw - w) * 0.5 + M[1, 2] += (nh - h) * 0.5 + + w, h = int(nw), int(nh) + + if len(img.shape) == 3 and img.shape[2] == 1: + return cv2.warpAffine(img, M, (w, h), flags=_cv2_interp_from_str[interpolation], borderValue=fill)[:, :, + np.newaxis] + else: + return cv2.warpAffine(img, M, (w, h), flags=_cv2_interp_from_str[interpolation], borderValue=fill) + + +def get_affine_matrix(center, angle, translate, scale, shear): + + rot = math.radians(angle) + sx, sy = [math.radians(s) for s in shear] + + cx, cy = center + tx, ty = translate + + # RSS without scaling + a = math.cos(rot - sy) / math.cos(sy) + b = -math.cos(rot - sy) * math.tan(sx) / math.cos(sy) - math.sin(rot) + c = math.sin(rot - sy) / math.cos(sy) + d = -math.sin(rot - sy) * math.tan(sx) / math.cos(sy) + math.cos(rot) + + # Inverted rotation matrix with scale and shear + # det([[a, b], [c, d]]) == 1, since det(rotation) = 1 and det(shear) = 1 + matrix = [d, -b, 0.0, -c, a, 0.0] + matrix = [x / scale for x in matrix] + + # Apply inverse of translation and of center translation: RSS^-1 * C^-1 * T^-1 + matrix[2] += matrix[0] * (-cx - tx) + matrix[1] * (-cy - ty) + matrix[5] += matrix[3] * (-cx - tx) + matrix[4] * (-cy - ty) + + # Apply center translation: C * RSS^-1 * C^-1 * T^-1 + matrix[2] += cx + matrix[5] += cy + + return matrix + + +def random_shear(image, degrees, interpolation, fill): + + cv2 = try_import('cv2') + _cv2_interp_from_str = { + 'nearest': cv2.INTER_NEAREST, + 'bilinear': cv2.INTER_LINEAR, + 'area': cv2.INTER_AREA, + 'bicubic': cv2.INTER_CUBIC, + 'lanczos': cv2.INTER_LANCZOS4 + } + + h, w, c = image.shape + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' 
+ 'But got {}'.format(fill) + ) + + center = (w / 2.0, h / 2.0) + shear = [-np.random.uniform(degrees[0], degrees[1]), -np.random.uniform(degrees[2], degrees[3])] + + matrix = get_affine_matrix(center=center, angle=0, translate=(0, 0), scale=1.0, shear=shear) + matrix = np.asarray(matrix).reshape((2, 3)) + + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], + borderValue=fill)[:, :, np.newaxis] + else: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], borderValue=fill) + + +def random_shift(image, shift, interpolation, fill): + + cv2 = try_import('cv2') + _cv2_interp_from_str = { + 'nearest': cv2.INTER_NEAREST, + 'bilinear': cv2.INTER_LINEAR, + 'area': cv2.INTER_AREA, + 'bicubic': cv2.INTER_CUBIC, + 'lanczos': cv2.INTER_LANCZOS4 + } + + h, w, c = image.shape + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + hrg = shift[0] + wrg = shift[1] + tx = -np.random.uniform(-hrg, hrg) * w + ty = -np.random.uniform(-wrg, wrg) * h + center = (w / 2.0, h / 2.0) + + matrix = get_affine_matrix(center=center, angle=0, translate=(tx, ty), scale=1.0, shear=(0, 0)) + matrix = np.asarray(matrix).reshape((2, 3)) + + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], + borderValue=fill)[:, :, np.newaxis] + else: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], borderValue=fill) + + +def random_zoom(image, zoom, interpolation, fill): + cv2 = try_import('cv2') + _cv2_interp_from_str = { + 'nearest': cv2.INTER_NEAREST, + 'bilinear': cv2.INTER_LINEAR, + 'area': cv2.INTER_AREA, + 'bicubic': cv2.INTER_CUBIC, + 'lanczos': cv2.INTER_LANCZOS4 + } + + h, w, c = image.shape + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + + scale = 1 / np.random.uniform(zoom[0], zoom[1]) + center = (w / 2.0, h / 2.0) + + matrix = get_affine_matrix(center=center, angle=0, translate=(0, 0), scale=scale, shear=(0, 0)) + matrix = np.asarray(matrix).reshape((2, 3)) + + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], + borderValue=fill)[:, :, np.newaxis] + else: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], borderValue=fill) + + +def random_affine(image, degrees, shift, zoom, shear, interpolation, fill): + cv2 = try_import('cv2') + _cv2_interp_from_str = { + 'nearest': cv2.INTER_NEAREST, + 'bilinear': cv2.INTER_LINEAR, + 'area': cv2.INTER_AREA, + 'bicubic': cv2.INTER_CUBIC, + 'lanczos': cv2.INTER_LANCZOS4 + } + h, w, c = image.shape + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' 
+ 'But got {}'.format(fill) + ) + center = (w / 2.0, h / 2.0) + + angle = -float(np.random.uniform(degrees[0], degrees[1])) + + if shift is not None: + max_dx = float(shift[0] * h) + max_dy = float(shift[1] * w) + tx = -int(round(np.random.uniform(-max_dx, max_dx))) + ty = -int(round(np.random.uniform(-max_dy, max_dy))) + shift = [tx, ty] + else: + shift = [0, 0] + + if zoom is not None: + scale = 1 / np.random.uniform(zoom[0], zoom[1]) + else: + scale = 1.0 + + shear_x = shear_y = 0.0 + if shear is not None: + shear_x = float(np.random.uniform(shear[0], shear[1])) + if len(shear) == 4: + shear_y = float(np.random.uniform(shear[2], shear[3])) + shear = (-shear_x, -shear_y) + + matrix = get_affine_matrix(center=center, angle=angle, translate=shift, scale=scale, shear=shear) + matrix = np.asarray(matrix).reshape((2, 3)) + + if len(image.shape) == 3 and image.shape[2] == 1: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], + borderValue=fill)[:, :, np.newaxis] + else: + return cv2.warpAffine(image, matrix, (w, h), flags=_cv2_interp_from_str[interpolation], borderValue=fill) diff --git a/tensorlayer/vision/functional_pil.py b/tensorlayer/vision/functional_pil.py new file mode 100644 index 000000000..124b870d9 --- /dev/null +++ b/tensorlayer/vision/functional_pil.py @@ -0,0 +1,554 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import PIL +from PIL import Image, ImageOps, ImageEnhance +import numpy as np +import colorsys +import random +import math +from numpy import sin, cos, tan +import numbers + +_pil_interp_from_str = { + 'nearest': Image.NEAREST, + 'bilinear': Image.BILINEAR, + 'bicubic': Image.BICUBIC, + 'box': Image.BOX, + 'lanczos': Image.LANCZOS, + 'hamming': Image.HAMMING +} + + +def crop(image, offset_height, offset_width, target_height, target_width): + image_width, image_height = image.size + if offset_width < 0: + raise ValueError('offset_width must be >= 0.') + if offset_height < 0: + raise ValueError('offset_height must be >= 0.') + if target_height < 0: + raise ValueError('target_height must be >= 0.') + if target_width < 0: + raise ValueError('target_width must be >= 0.') + if offset_width + target_width > image_width: + raise ValueError('offset_width + target_width must be <= image width.') + if offset_height + target_height > image_height: + raise ValueError('offset_height + target_height must be <= image height.') + + return image.crop((offset_width, offset_height, offset_width + target_width, offset_height + target_height)) + + +def center_crop(image, size, central_fraction): + + image_width, image_height = image.size + if size is not None: + if not isinstance(size, (int, list, tuple)) or (isinstance(size, (list, tuple)) and len(size) != 2): + raise TypeError( + "Size should be a single integer or a list/tuple (h, w) of length 2. " + "But got {}.".format(size) + ) + + if isinstance(size, int): + target_height = size + target_width = size + else: + target_height = size[0] + target_width = size[1] + + elif central_fraction is not None: + if central_fraction <= 0.0 or central_fraction > 1.0: + raise ValueError('central_fraction must be within (0, 1]') + + target_height = int(central_fraction * image_height) + target_width = int(central_fraction * image_width) + + crop_top = int(round((image_height - target_height) / 2.)) + crop_left = int(round((image_width - target_width) / 2.)) + + return crop(image, crop_top, crop_left, target_height, target_width) + + +def pad(image, padding, padding_value, mode): + + if isinstance(padding, int): + top
= bottom = left = right = padding + + elif isinstance(padding, (tuple, list)): + if len(padding) == 2: + left = right = padding[0] + top = bottom = padding[1] + elif len(padding) == 4: + left = padding[0] + top = padding[1] + right = padding[2] + bottom = padding[3] + else: + raise TypeError("The size of the padding list or tuple should be 2 or 4." "But got {}".format(padding)) + else: + raise TypeError("Padding can be any of: a number, a tuple or list of size 2 or 4." "But got {}".format(padding)) + + if mode not in ['constant', 'edge', 'reflect', 'symmetric']: + raise TypeError("Padding mode should be 'constant', 'edge', 'reflect', or 'symmetric'.") + + if mode == 'constant': + if image.mode == 'P': + palette = image.getpalette() + image = ImageOps.expand(image, border=padding, fill=padding_value) + image.putpalette(palette) + return image + return ImageOps.expand(image, border=padding, fill=padding_value) + + if image.mode == 'P': + palette = image.getpalette() + image = np.asarray(image) + image = np.pad(image, ((top, bottom), (left, right)), mode) + image = Image.fromarray(image) + image.putpalette(palette) + return image + + image = np.asarray(image) + # RGB image + if len(image.shape) == 3: + image = np.pad(image, ((top, bottom), (left, right), (0, 0)), mode) + # Grayscale image + if len(image.shape) == 2: + image = np.pad(image, ((top, bottom), (left, right)), mode) + + return Image.fromarray(image) + + +def resize(image, size, method): + + if not (isinstance(size, int) or (isinstance(size, (list, tuple)) and len(size) == 2)): + raise TypeError('Size should be a single number or a list/tuple (h, w) of length 2.' 'Got {}.'.format(size)) + + if method not in ('nearest', 'bilinear', 'bicubic', 'box', 'lanczos', 'hamming'): + raise ValueError( + "Unknown resize method! resize method must be in " + "(\'nearest\',\'bilinear\',\'bicubic\',\'box\',\'lanczos\',\'hamming\')" + ) + if isinstance(size, int): + w, h = image.size + if (w <= h and w == size) or (h <= w and h == size): + return image + if w < h: + ow = size + oh = int(size * h / w) + return image.resize((ow, oh), _pil_interp_from_str[method]) + else: + oh = size + ow = int(size * w / h) + return image.resize((ow, oh), _pil_interp_from_str[method]) + else: + return image.resize(size[::-1], _pil_interp_from_str[method]) + + +def transpose(image, order): + + image = np.asarray(image) + if not (isinstance(order, (list, tuple)) and len(order) == 3): + raise TypeError("Order must be a list/tuple of length 3." 
"But got {}.".format(order)) + + image_shape = image.shape + if len(image_shape) == 2: + image = image[..., np.newaxis] + + image = image.transpose(order) + image = Image.fromarray(image) + return image + + +def hwc_to_chw(image): + + image_shape = image.shape + if len(image_shape) == 2: + image = image[..., np.newaxis] + + image = image.transpose((2, 0, 1)) + image = Image.fromarray(image) + return image + + +def chw_to_hwc(image): + + image_shape = image.shape + if len(image_shape) == 2: + image = image[..., np.newaxis] + + image = image.transpose((1, 2, 0)) + image = Image.fromarray(image) + return image + + +def rgb_to_hsv(image): + + return image.convert('HSV') + + +def hsv_to_rgb(image): + + return image.convert('RGB') + + +def rgb_to_gray(image, num_output_channels): + + if num_output_channels == 1: + img = image.convert('L') + elif num_output_channels == 3: + img = image.convert('L') + np_img = np.array(img, dtype=np.uint8) + np_img = np.dstack([np_img, np_img, np_img]) + img = Image.fromarray(np_img, 'RGB') + else: + raise ValueError('num_output_channels should be either 1 or 3') + + return img + + +def adjust_brightness(image, brightness_factor): + """Adjusts brightness of an Image. + + Args: + image (PIL.Image): PIL Image to be adjusted. + brightness_factor (float): How much to adjust the brightness. Can be + any non negative number. 0 gives a black image, 1 gives the + original image while 2 increases the brightness by a factor of 2. + + Returns: + PIL.Image: Brightness adjusted image. + + """ + if brightness_factor < 0: + raise ValueError('brightness_factor ({}) is not non-negative.'.format(brightness_factor)) + + enhancer = ImageEnhance.Brightness(image) + image = enhancer.enhance(brightness_factor) + return image + + +def adjust_contrast(image, contrast_factor): + """Adjusts contrast of an Image. + + Args: + image (PIL.Image): PIL Image to be adjusted. + contrast_factor (float): How much to adjust the contrast. Can be any + non negative number. 0 gives a solid gray image, 1 gives the + original image while 2 increases the contrast by a factor of 2. + + Returns: + PIL.Image: Contrast adjusted image. + + """ + if contrast_factor < 0: + raise ValueError('contrast_factor ({}) is not non-negative.'.format(contrast_factor)) + + enhancer = ImageEnhance.Contrast(image) + image = enhancer.enhance(contrast_factor) + return image + + +def adjust_hue(image, hue_factor): + """Adjusts hue of an image. + + The image hue is adjusted by converting the image to HSV and + cyclically shifting the intensities in the hue channel (H). + The image is then converted back to original image mode. + + `hue_factor` is the amount of shift in H channel and must be in the + interval `[-0.5, 0.5]`. + + Args: + image (PIL.Image): PIL Image to be adjusted. + hue_factor (float): How much to shift the hue channel. Should be in + [-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in + HSV space in positive and negative direction respectively. + 0 means no shift. Therefore, both -0.5 and 0.5 will give an image + with complementary colors while 0 gives the original image. + + Returns: + PIL.Image: Hue adjusted image. 
+ + """ + if not (-0.5 <= hue_factor <= 0.5): + raise ValueError('hue_factor ({}) is not in [-0.5, 0.5].'.format(hue_factor)) + + input_mode = image.mode + if input_mode in {'L', '1', 'I', 'F'}: + return image + h, s, v = image.convert('HSV').split() + + np_h = np.array(h, dtype=np.uint8) + # uint8 addition take cares of rotation across boundaries + with np.errstate(over='ignore'): + np_h += np.uint8(hue_factor * 255) + h = Image.fromarray(np_h, 'L') + + image = Image.merge('HSV', (h, s, v)).convert(input_mode) + return image + + +def adjust_saturation(image, saturation_factor): + """Adjusts color saturation of an image. + + Args: + image (PIL.Image): PIL Image to be adjusted. + saturation_factor (float): How much to adjust the saturation. 0 will + give a black and white image, 1 will give the original image while + 2 will enhance the saturation by a factor of 2. + + Returns: + PIL.Image: Saturation adjusted image. + + """ + if saturation_factor < 0: + raise ValueError('saturation_factor ({}) is not non-negative.'.format(saturation_factor)) + enhancer = ImageEnhance.Color(image) + image = enhancer.enhance(saturation_factor) + return image + + +def hflip(image): + """Horizontally flips the given PIL Image. + + Args: + img (PIL.Image): Image to be flipped. + + Returns: + PIL.Image: Horizontall flipped image. + + """ + + return image.transpose(Image.FLIP_LEFT_RIGHT) + + +def vflip(image): + """Vertically flips the given PIL Image. + + Args: + img (PIL.Image): Image to be flipped. + + Returns: + PIL.Image: Vertically flipped image. + + """ + + return image.transpose(Image.FLIP_TOP_BOTTOM) + + +def padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value): + ''' + + Parameters + ---------- + image: + A PIL image to be padded size of (target_width, target_height) + offset_height: + Number of rows of padding_values to add on top. + offset_width: + Number of columns of padding_values to add on the left. + target_height: + Height of output image. + target_width: + Width of output image. + padding_value: + value to pad + + Returns: + PIL.Image: padded image + ------- + + ''' + if offset_height < 0: + raise ValueError('offset_height must be >= 0') + if offset_width < 0: + raise ValueError('offset_width must be >= 0') + + width, height = image.size + after_padding_width = target_width - offset_width - width + after_padding_height = target_height - offset_height - height + if after_padding_height < 0: + raise ValueError('image height must be <= target - offset') + if after_padding_width < 0: + raise ValueError('image width must be <= target - offset') + + return pad( + image, padding=(offset_width, offset_height, after_padding_width, after_padding_height), + padding_value=padding_value, mode='constant' + ) + + +def rotate(image, angle, interpolation, expand, center, fill): + """Rotates the image by angle. + + Args: + img (PIL.Image): Image to be rotated. + angle (float or int): In degrees degrees counter clockwise order. + interpolation (str, optional): Interpolation method. If omitted, or if the + image has only one channel, it is set to PIL.Image.NEAREST . when use pil backend, + support method are as following: + - "nearest": Image.NEAREST, + - "bilinear": Image.BILINEAR, + - "bicubic": Image.BICUBIC + expand (bool, optional): Optional expansion flag. + If true, expands the output image to make it large enough to hold the entire rotated image. + If false or omitted, make the output image the same size as the input image. 
+ Note that the expand flag assumes rotation around the center and no translation. + center (2-tuple, optional): Optional center of rotation. + Origin is the upper left corner. + Default is the center of the image. + fill (3-tuple or int): RGB pixel fill value for area outside the rotated image. + If int, it is used for all channels respectively. + + Returns: + PIL.Image: Rotated image. + + """ + c = 1 if image.mode == 'L' else 3 + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + + return image.rotate(angle, _pil_interp_from_str[interpolation], expand, center, fillcolor=fill) + + +def get_affine_matrix(center, angle, translate, scale, shear): + + rot = math.radians(angle) + sx, sy = [math.radians(s) for s in shear] + + cx, cy = center + tx, ty = translate + + # RSS without scaling + a = math.cos(rot - sy) / math.cos(sy) + b = -math.cos(rot - sy) * math.tan(sx) / math.cos(sy) - math.sin(rot) + c = math.sin(rot - sy) / math.cos(sy) + d = -math.sin(rot - sy) * math.tan(sx) / math.cos(sy) + math.cos(rot) + + # Inverted rotation matrix with scale and shear + # det([[a, b], [c, d]]) == 1, since det(rotation) = 1 and det(shear) = 1 + matrix = [d, -b, 0.0, -c, a, 0.0] + matrix = [x / scale for x in matrix] + + # Apply inverse of translation and of center translation: RSS^-1 * C^-1 * T^-1 + matrix[2] += matrix[0] * (-cx - tx) + matrix[1] * (-cy - ty) + matrix[5] += matrix[3] * (-cx - tx) + matrix[4] * (-cy - ty) + + # Apply center translation: C * RSS^-1 * C^-1 * T^-1 + matrix[2] += cx + matrix[5] += cy + + return matrix + + +def random_shear(image, degrees, interpolation, fill): + + c = 1 if image.mode == 'L' else 3 + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + + w, h = image.size + center = (w / 2.0, h / 2.0) + shear = [np.random.uniform(degrees[0], degrees[1]), np.random.uniform(degrees[2], degrees[3])] + + interpolation = _pil_interp_from_str[interpolation] + matrix = get_affine_matrix(center=center, angle=0, translate=(0, 0), scale=1.0, shear=shear) + output_size = (w, h) + kwargs = {"fillcolor": fill} + return image.transform(output_size, Image.AFFINE, matrix, interpolation, **kwargs) + + +def random_shift(image, shift, interpolation, fill): + + c = 1 if image.mode == 'L' else 3 + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' 
+ 'But got {}'.format(fill) + ) + + w, h = image.size + center = (w / 2.0, h / 2.0) + hrg = shift[0] + wrg = shift[1] + tx = np.random.uniform(-hrg, hrg) * h + ty = np.random.uniform(-wrg, wrg) * w + matrix = get_affine_matrix(center=center, angle=0, translate=(tx, ty), scale=1.0, shear=(0, 0)) + + interpolation = _pil_interp_from_str[interpolation] # map the method name to a PIL resample flag, as random_shear does + output_size = (w, h) + kwargs = {"fillcolor": fill} + return image.transform(output_size, Image.AFFINE, matrix, interpolation, **kwargs) + + +def random_zoom(image, zoom, interpolation, fill): + + c = 1 if image.mode == 'L' else 3 + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'fill should be a single number or a list/tuple with length of image channels. ' + 'But got {}'.format(fill) + ) + w, h = image.size + scale = np.random.uniform(zoom[0], zoom[1]) + center = (w / 2.0, h / 2.0) + + matrix = get_affine_matrix(center=center, angle=0, translate=(0, 0), scale=scale, shear=(0, 0)) + + interpolation = _pil_interp_from_str[interpolation] + output_size = (w, h) + kwargs = {"fillcolor": fill} + return image.transform(output_size, Image.AFFINE, matrix, interpolation, **kwargs) + + +def random_affine(image, degrees, shift, zoom, shear, interpolation, fill): + + c = 1 if image.mode == 'L' else 3 + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'fill should be a single number or a list/tuple with length of image channels. ' + 'But got {}'.format(fill) + ) + + w, h = image.size + angle = float(np.random.uniform(float(degrees[0]), float(degrees[1]))) + center = (w / 2.0, h / 2.0) + if shift is not None: + max_dx = float(shift[0] * w) + max_dy = float(shift[1] * h) + tx = int(round(np.random.uniform(-max_dx, max_dx))) + ty = int(round(np.random.uniform(-max_dy, max_dy))) + translations = (tx, ty) + else: + translations = (0, 0) + + if zoom is not None: + scale = float(np.random.uniform(zoom[0], zoom[1])) + else: + scale = 1.0 + + shear_x = shear_y = 0 + if shear is not None: + shear_x = float(np.random.uniform(shear[0], shear[1])) + if len(shear) == 4: + shear_y = float(np.random.uniform(shear[2], shear[3])) + shear = (shear_x, shear_y) + matrix = get_affine_matrix(center=center, angle=angle, translate=translations, scale=scale, shear=shear) + + interpolation = _pil_interp_from_str[interpolation] + output_size = (w, h) + kwargs = {"fillcolor": fill} + return image.transform(output_size, Image.AFFINE, matrix, interpolation, **kwargs) diff --git a/tensorlayer/vision/load_vision_backend.py b/tensorlayer/vision/load_vision_backend.py new file mode 100644 index 000000000..c816d3de8 --- /dev/null +++ b/tensorlayer/vision/load_vision_backend.py @@ -0,0 +1,16 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +from __future__ import absolute_import, division, print_function +from tensorlayer.backend.ops.load_backend import BACKEND + +if BACKEND == 'tensorflow': + from .tensorflow_vision import * +elif BACKEND == 'mindspore': + from .mindspore_vision import * +elif BACKEND == 'dragon': + pass +elif BACKEND == 'paddle': + from .paddle_vision import * +else: + raise NotImplementedError("This backend is not supported") diff --git a/tensorlayer/vision/mindspore_vision.py b/tensorlayer/vision/mindspore_vision.py new file mode 100644 index 000000000..b9d70a260 --- /dev/null +++ b/tensorlayer/vision/mindspore_vision.py @@ -0,0 +1,610 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import mindspore as ms +from . import functional_cv2 as F_cv2 +from .
import functional_pil as F_pil +import mindspore.ops as P +from mindspore.numpy import std +from PIL import Image +import PIL +import numpy as np +import numbers +import random +import math + +__all__ = [ + 'central_crop', + 'to_tensor', + 'crop', + 'pad', + 'resize', + 'transpose', + 'hwc_to_chw', + 'chw_to_hwc', + 'rgb_to_hsv', + 'hsv_to_rgb', + 'rgb_to_gray', + 'adjust_brightness', + 'adjust_contrast', + 'adjust_hue', + 'adjust_saturation', + 'normalize', + 'hflip', + 'vflip', + 'padtoboundingbox', + 'standardize', + 'random_brightness', + 'random_contrast', + 'random_saturation', + 'random_hue', + 'random_crop', + 'random_resized_crop', + 'random_vflip', + 'random_hflip', + 'random_rotation', + 'random_shear', + 'random_shift', + 'random_zoom', + 'random_affine', +] + + +def _is_pil_image(image): + return isinstance(image, Image.Image) + + +def _is_tensor_image(image): + return isinstance(image, ms.Tensor) + + +def _is_numpy_image(image): + return isinstance(image, np.ndarray) and (image.ndim in {2, 3}) + + +def _get_image_size(img): + if _is_pil_image(img): + return img.size[::-1] + elif _is_numpy_image(img): + return img.shape[:2] + else: + raise TypeError("Unexpected type {}".format(type(img))) + + +def random_factor(factor, name, center=1, bound=(0, float('inf')), non_negative=True): + if isinstance(factor, numbers.Number): + if factor < 0: + raise ValueError('The input value of {} cannot be negative.'.format(name)) + factor = [center - factor, center + factor] + if non_negative: + factor[0] = max(0, factor[0]) + elif isinstance(factor, (tuple, list)) and len(factor) == 2: + if not bound[0] <= factor[0] <= factor[1] <= bound[1]: + raise ValueError( + "Please check your value range of {} is valid and " + "within the bound {}.".format(name, bound) + ) + else: + raise TypeError("Input of {} should be either a single value, or a list/tuple of " "length 2.".format(name)) + factor = np.random.uniform(factor[0], factor[1]) + return factor + + +def to_tensor(image, data_format='HWC'): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray. Got {}'.format(type(image))) + + image = np.asarray(image).astype('float32') + + if image.ndim == 2: + image = image[:, :, None] + + if data_format == 'CHW': + + image = np.transpose(image, (2, 0, 1)) + image = image / 255. + else: + image = image / 255. + + return image + + +def central_crop(image, size=None, central_fraction=None): + + if size is None and central_fraction is None: + raise ValueError('central_fraction and size can not be both None') + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + + return F_pil.center_crop(image, size, central_fraction) + + else: + + return F_cv2.center_crop(image, size, central_fraction) + + +def crop(image, offset_height, offset_width, target_height, target_width): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. 
Got {}'.format(type(image))) + + if _is_pil_image(image): + + return F_pil.crop(image, offset_height, offset_width, target_height, target_width) + + else: + + return F_cv2.crop(image, offset_height, offset_width, target_height, target_width) + + +def pad(image, padding, padding_value, mode): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.pad(image, padding, padding_value, mode) + else: + return F_cv2.pad(image, padding, padding_value, mode) + + +def resize(image, size, method): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.resize(image, size, method) + else: + return F_cv2.resize(image, size, method) + + +def transpose(image, order): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.transpose(image, order) + else: + return F_cv2.transpose(image, order) + + +def hwc_to_chw(image): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.hwc_to_chw(image) + else: + return F_cv2.hwc_to_chw(image) + + +def chw_to_hwc(image): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.chw_to_hwc(image) + else: + return F_cv2.chw_to_hwc(image) + + +def rgb_to_hsv(image): + if not (_is_pil_image(image) or isinstance(image, np.ndarray) and (image.ndim == 3)): + raise TypeError('image should be PIL Image or ndarray with dim=3. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.rgb_to_hsv(image) + else: + return F_cv2.rgb_to_hsv(image) + + +def hsv_to_rgb(image): + if not (_is_pil_image(image) or isinstance(image, np.ndarray) and (image.ndim == 3)): + raise TypeError('image should be PIL Image or ndarray with dim=3. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.hsv_to_rgb(image) + else: + return F_cv2.hsv_to_rgb(image) + + +def rgb_to_gray(image, num_output_channels): + if not (_is_pil_image(image) or isinstance(image, np.ndarray) and (image.ndim == 3)): + raise TypeError('image should be PIL Image or ndarray with dim=3. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.rgb_to_gray(image, num_output_channels) + else: + return F_cv2.rgb_to_gray(image, num_output_channels) + + +def adjust_brightness(image, brightness_factor): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_brightness(image, brightness_factor) + else: + return F_cv2.adjust_brightness(image, brightness_factor) + + +def adjust_contrast(image, contrast_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. 
Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_contrast(image, contrast_factor) + else: + return F_cv2.adjust_contrast(image, contrast_factor) + + +def adjust_hue(image, hue_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_hue(image, hue_factor) + else: + return F_cv2.adjust_hue(image, hue_factor) + + +def adjust_saturation(image, saturation_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_saturation(image, saturation_factor) + else: + return F_cv2.adjust_saturation(image, saturation_factor) + + +def hflip(image): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.hflip(image) + else: + return F_cv2.hflip(image) + + +def vflip(image): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.vflip(image) + else: + return F_cv2.vflip(image) + + +def padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value) + else: + return F_cv2.padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value) + + +def normalize(image, mean, std, data_format): + + if _is_pil_image(image): + image = np.asarray(image) + + image = image.astype('float32') + + if data_format == 'CHW': + num_channels = image.shape[0] + elif data_format == 'HWC': + num_channels = image.shape[2] + + if isinstance(mean, numbers.Number): + mean = (mean, ) * num_channels + elif isinstance(mean, (list, tuple)): + if len(mean) != num_channels: + raise ValueError("Length of mean must be 1 or equal to the number of channels({0}).".format(num_channels)) + if isinstance(std, numbers.Number): + std = (std, ) * num_channels + elif isinstance(std, (list, tuple)): + if len(std) != num_channels: + raise ValueError("Length of std must be 1 or equal to the number of channels({0}).".format(num_channels)) + mean = np.array(mean, dtype=image.dtype) + std = np.array(std, dtype=image.dtype) + + if data_format == 'CHW': + image = (image - mean[None, None, :]) / std[None, None, :] + elif data_format == 'HWC': + image = (image - mean[None, None, :]) / std[None, None, :] + + return image + + +def standardize(image): + ''' + Reference to tf.image.per_image_standardization(). + Linearly scales each image in image to have mean 0 and variance 1. 
+    '''
+
+    if _is_pil_image(image):
+        image = np.asarray(image)
+
+    image = image.astype('float32')
+
+    num_pixels = image.size
+    image_mean = np.mean(image, keepdims=False)
+    stddev = np.std(image, keepdims=False)
+    # Guard against division by zero for (near-)constant images, following
+    # tf.image.per_image_standardization().
+    min_stddev = 1.0 / np.sqrt(num_pixels)
+    adjusted_stddev = np.maximum(stddev, min_stddev)
+
+    return (image - image_mean) / adjusted_stddev
+
+
+def random_brightness(image, brightness_factor):
+    '''
+    Perform a random brightness adjustment on the input image.
+
+    Parameters
+    ----------
+    image:
+        Input image to adjust.
+    brightness_factor:
+        Brightness adjustment factor (default=(1, 1)). Cannot be negative.
+        If it is a float, the factor is uniformly chosen from the range [max(0, 1-brightness), 1+brightness].
+        If it is a sequence, it should be [min, max] for the range.
+
+    Returns
+    -------
+        Adjusted image.
+
+    '''
+    if not (_is_pil_image(image) or _is_numpy_image(image)):
+        raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image)))
+
+    brightness_factor = random_factor(brightness_factor, name='brightness')
+
+    if _is_pil_image(image):
+        return F_pil.adjust_brightness(image, brightness_factor)
+    else:
+        return F_cv2.adjust_brightness(image, brightness_factor)
+
+
+def random_contrast(image, contrast_factor):
+    if not (_is_pil_image(image) or _is_numpy_image(image)):
+        raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image)))
+
+    contrast_factor = random_factor(contrast_factor, name='contrast')
+
+    if _is_pil_image(image):
+        return F_pil.adjust_contrast(image, contrast_factor)
+    else:
+        return F_cv2.adjust_contrast(image, contrast_factor)
+
+
+def random_saturation(image, saturation_factor):
+    if not (_is_pil_image(image) or _is_numpy_image(image)):
+        raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image)))
+
+    saturation_factor = random_factor(saturation_factor, name='saturation')
+
+    if _is_pil_image(image):
+        return F_pil.adjust_saturation(image, saturation_factor)
+    else:
+        return F_cv2.adjust_saturation(image, saturation_factor)
+
+
+def random_hue(image, hue_factor):
+    if not (_is_pil_image(image) or _is_numpy_image(image)):
+        raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image)))
+
+    hue_factor = random_factor(hue_factor, name='hue', center=0, bound=(-0.5, 0.5), non_negative=False)
+
+    if _is_pil_image(image):
+        return F_pil.adjust_hue(image, hue_factor)
+    else:
+        return F_cv2.adjust_hue(image, hue_factor)
+
+
+def random_crop(image, size, padding, pad_if_needed, fill, padding_mode):
+
+    if not (_is_pil_image(image) or _is_numpy_image(image)):
+        raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image)))
+
+    if isinstance(size, int):
+        size = (size, size)
+    elif isinstance(size, (tuple, list)) and len(size) == 2:
+        size = size
+    else:
+        raise ValueError('Size should be an int or a list/tuple with length of 2. 
' 'But got {}'.format(size)) + + height, width = _get_image_size(image) + if padding is not None: + image = pad(image, padding, fill, padding_mode) + + if pad_if_needed and height < size[0]: + image = pad(image, (0, height - size[0]), fill, padding_mode) + + if pad_if_needed and width < size[1]: + image = pad(image, (width - size[1], 0), fill, padding_mode) + + height, width = _get_image_size(image) + target_height, target_width = size + + if height < target_height or width < target_width: + raise ValueError( + 'Crop size {} should be smaller than input image size {}. '.format( + (target_height, target_width), (height, width) + ) + ) + + offset_height = random.randint(0, height - target_height) + offset_width = random.randint(0, width - target_width) + + return crop(image, offset_height, offset_width, target_height, target_width) + + +def random_resized_crop(image, size, scale, ratio, interpolation): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if isinstance(size, int): + size = (size, size) + elif isinstance(size, (list, tuple)) and len(size) == 2: + size = size + else: + raise TypeError('Size should be a int or a list/tuple with length of 2.' 'But got {}.'.format(size)) + if not (isinstance(scale, (list, tuple)) and len(scale) == 2): + raise TypeError('Scale should be a list/tuple with length of 2.' 'But got {}.'.format(scale)) + if not (isinstance(ratio, (list, tuple)) and len(ratio) == 2): + raise TypeError('Scale should be a list/tuple with length of 2.' 'But got {}.'.format(ratio)) + + if (scale[0] > scale[1]) or (ratio[0] > ratio[1]): + raise ValueError("Scale and ratio should be of kind (min, max)") + + def _get_param(image, scale, ratio): + height, width = _get_image_size(image) + area = height * width + log_ratio = tuple(math.log(x) for x in ratio) + for _ in range(10): + target_area = np.random.uniform(*scale) * area + aspect_ratio = math.exp(np.random.uniform(*log_ratio)) + + w = int(round(math.sqrt(target_area * aspect_ratio))) + h = int(round(math.sqrt(target_area / aspect_ratio))) + + if 0 < w <= width and 0 < h <= height: + i = random.randint(0, height - h) + j = random.randint(0, width - w) + return i, j, h, w + + # Fallback to central crop + in_ratio = float(width) / float(height) + if in_ratio < min(ratio): + w = width + h = int(round(w / min(ratio))) + elif in_ratio > max(ratio): + h = height + w = int(round(h * max(ratio))) + else: + # return whole image + w = width + h = height + i = (height - h) // 2 + j = (width - w) // 2 + return i, j, h, w + + offset_height, offset_width, target_height, target_width = _get_param(image, scale, ratio) + + image = crop(image, offset_height, offset_width, target_height, target_width) + image = resize(image, size, interpolation) + + return image + + +def random_vflip(image, prob): + + if random.random() < prob: + return vflip(image) + return image + + +def random_hflip(image, prob): + + if random.random() < prob: + return hflip(image) + return image + + +def random_rotation(image, degrees, interpolation, expand, center, fill): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if isinstance(degrees, numbers.Number): + if degrees < 0: + raise ValueError('If degrees is a single number, it must be positive.' 
'But got {}'.format(degrees)) + degrees = (-degrees, degrees) + elif not (isinstance(degrees, (list, tuple)) and len(degrees) == 2): + raise ValueError('If degrees is a list/tuple, it must be length of 2.' 'But got {}'.format(degrees)) + else: + if degrees[0] > degrees[1]: + raise ValueError('if degrees is a list/tuple, it should be (min, max).') + + angle = np.random.uniform(degrees[0], degrees[1]) + + if _is_pil_image(image): + return F_pil.rotate(image, angle, interpolation, expand, center, fill) + else: + return F_cv2.rotate(image, angle, interpolation, expand, center, fill) + + +def random_shear(image, degrees, interpolation, fill): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if isinstance(degrees, numbers.Number): + degrees = (-degrees, degrees, 0, 0) + elif isinstance(degrees, (list, tuple)) and (len(degrees) == 2 or len(degrees) == 4): + if len(degrees) == 2: + degrees = (degrees[0], degrees[1], 0, 0) + else: + raise ValueError( + 'degrees should be a single number or a list/tuple with length in (2 ,4).' + 'But got {}'.format(degrees) + ) + + if _is_pil_image(image): + return F_pil.random_shear(image, degrees, interpolation, fill) + else: + return F_cv2.random_shear(image, degrees, interpolation, fill) + + +def random_shift(image, shift, interpolation, fill): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if not (isinstance(shift, (tuple, list)) and len(shift) == 2): + + raise ValueError('Shift should be a list/tuple with length of 2.' 'But got {}'.format(shift)) + + if _is_pil_image(image): + return F_pil.random_shift(image, shift, interpolation, fill) + else: + return F_cv2.random_shift(image, shift, interpolation, fill) + + +def random_zoom(image, zoom, interpolation, fill): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if not (isinstance(zoom, (tuple, list)) and len(zoom) == 2): + + raise ValueError('Zoom should be a list/tuple with length of 2.' 'But got {}'.format(zoom)) + if not (0 <= zoom[0] <= zoom[1]): + + raise ValueError('Zoom values should be positive, and zoom[1] should be greater than zoom[0].') + + if _is_pil_image(image): + return F_pil.random_zoom(image, zoom, interpolation, fill) + else: + return F_cv2.random_zoom(image, zoom, interpolation, fill) + + +def random_affine(image, degrees, shift, zoom, shear, interpolation, fill): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.random_affine(image, degrees, shift, zoom, shear, interpolation, fill) + else: + return F_cv2.random_affine(image, degrees, shift, zoom, shear, interpolation, fill) diff --git a/tensorlayer/vision/paddle_vision.py b/tensorlayer/vision/paddle_vision.py new file mode 100644 index 000000000..4c188efe4 --- /dev/null +++ b/tensorlayer/vision/paddle_vision.py @@ -0,0 +1,608 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import paddle +from . import functional_cv2 as F_cv2 +from . 
import functional_pil as F_pil +import sys +import math +import numbers +import warnings +import collections +import numpy as np +from PIL import Image +from numpy import sin, cos, tan +import paddle +import random + +__all__ = [ + 'central_crop', + 'to_tensor', + 'crop', + 'pad', + 'resize', + 'transpose', + 'hwc_to_chw', + 'chw_to_hwc', + 'rgb_to_hsv', + 'hsv_to_rgb', + 'rgb_to_gray', + 'adjust_brightness', + 'adjust_contrast', + 'adjust_hue', + 'adjust_saturation', + 'normalize', + 'hflip', + 'vflip', + 'padtoboundingbox', + 'standardize', + 'random_brightness', + 'random_contrast', + 'random_saturation', + 'random_hue', + 'random_crop', + 'random_resized_crop', + 'random_vflip', + 'random_hflip', + 'random_rotation', + 'random_shear', + 'random_shift', + 'random_zoom', + 'random_affine', +] + + +def _is_pil_image(img): + return isinstance(img, Image.Image) + + +def _is_tensor_image(img): + return isinstance(img, paddle.Tensor) + + +def _is_numpy_image(img): + return isinstance(img, np.ndarray) and (img.ndim in {2, 3}) + + +def to_tensor(img, data_format='HWC'): + + return paddle.vision.functional.to_tensor(img, data_format=data_format) + + +def _get_image_size(img): + if _is_pil_image(img): + return img.size[::-1] + elif _is_numpy_image(img): + return img.shape[:2] + else: + raise TypeError("Unexpected type {}".format(type(img))) + + +def random_factor(factor, name, center=1, bound=(0, float('inf')), non_negative=True): + if isinstance(factor, numbers.Number): + if factor < 0: + raise ValueError('The input value of {} cannot be negative.'.format(name)) + factor = [center - factor, center + factor] + if non_negative: + factor[0] = max(0, factor[0]) + elif isinstance(factor, (tuple, list)) and len(factor) == 2: + if not bound[0] <= factor[0] <= factor[1] <= bound[1]: + raise ValueError( + "Please check your value range of {} is valid and " + "within the bound {}.".format(name, bound) + ) + else: + raise TypeError("Input of {} should be either a single value, or a list/tuple of " "length 2.".format(name)) + factor = np.random.uniform(factor[0], factor[1]) + return factor + + +def central_crop(image, size=None, central_fraction=None): + + if size is None and central_fraction is None: + raise ValueError('central_fraction and size can not be both None') + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + + return F_pil.center_crop(image, size, central_fraction) + + else: + + return F_cv2.center_crop(image, size, central_fraction) + + +def crop(image, offset_height, offset_width, target_height, target_width): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + + return F_pil.crop(image, offset_height, offset_width, target_height, target_width) + + else: + + return F_cv2.crop(image, offset_height, offset_width, target_height, target_width) + + +def pad(image, padding, padding_value, mode): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. 
Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.pad(image, padding, padding_value, mode) + else: + return F_cv2.pad(image, padding, padding_value, mode) + + +def resize(image, size, method): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.resize(image, size, method) + else: + return F_cv2.resize(image, size, method) + + +def transpose(image, order): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.transpose(image, order) + else: + return F_cv2.transpose(image, order) + + +def hwc_to_chw(image): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.hwc_to_chw(image) + else: + return F_cv2.hwc_to_chw(image) + + +def chw_to_hwc(image): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.chw_to_hwc(image) + else: + return F_cv2.chw_to_hwc(image) + + +def rgb_to_hsv(image): + + if not (_is_pil_image(image) or isinstance(image, np.ndarray) and (image.ndim == 3)): + raise TypeError('image should be PIL Image or ndarray with dim=3. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.rgb_to_hsv(image) + else: + return F_cv2.rgb_to_hsv(image) + + +def hsv_to_rgb(image): + if not (_is_pil_image(image) or isinstance(image, np.ndarray) and (image.ndim == 3)): + raise TypeError('image should be PIL Image or ndarray with dim=3. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.hsv_to_rgb(image) + else: + return F_cv2.hsv_to_rgb(image) + + +def rgb_to_gray(image, num_output_channels): + if not (_is_pil_image(image) or isinstance(image, np.ndarray) and (image.ndim == 3)): + raise TypeError('image should be PIL Image or ndarray with dim=3. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.rgb_to_gray(image, num_output_channels) + else: + return F_cv2.rgb_to_gray(image, num_output_channels) + + +def adjust_brightness(image, brightness_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_brightness(image, brightness_factor) + else: + return F_cv2.adjust_brightness(image, brightness_factor) + + +def adjust_contrast(image, contrast_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_contrast(image, contrast_factor) + else: + return F_cv2.adjust_contrast(image, contrast_factor) + + +def adjust_hue(image, hue_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. 
Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_hue(image, hue_factor) + else: + return F_cv2.adjust_hue(image, hue_factor) + + +def adjust_saturation(image, saturation_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.adjust_saturation(image, saturation_factor) + else: + return F_cv2.adjust_saturation(image, saturation_factor) + + +def hflip(image): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.hflip(image) + else: + return F_cv2.hflip(image) + + +def vflip(image): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.vflip(image) + else: + return F_cv2.vflip(image) + + +def padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value) + else: + return F_cv2.padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value) + + +def normalize(image, mean, std, data_format): + + if not _is_tensor_image(image): + if _is_pil_image(image): + image = np.asarray(image) + image = paddle.to_tensor(image) + + image = image.astype('float32') + + if data_format == 'CHW': + num_channels = image.shape[0] + elif data_format == 'HWC': + num_channels = image.shape[2] + + if isinstance(mean, numbers.Number): + mean = (mean, ) * num_channels + elif isinstance(mean, (list, tuple)): + if len(mean) != num_channels: + raise ValueError("Length of mean must be 1 or equal to the number of channels({0}).".format(num_channels)) + if isinstance(std, numbers.Number): + std = (std, ) * num_channels + elif isinstance(std, (list, tuple)): + if len(std) != num_channels: + raise ValueError("Length of std must be 1 or equal to the number of channels({0}).".format(num_channels)) + if data_format == 'CHW': + std = np.array(std).reshape((-1, 1, 1)) + mean = np.array(mean).reshape((-1, 1, 1)) + elif data_format == 'HWC': + mean = np.array(mean) + std = np.array(std) + + mean = paddle.to_tensor(mean).astype('float32') + std = paddle.to_tensor(std).astype('float32') + + return (image - mean) / std + + +def standardize(image): + ''' + Reference to tf.image.per_image_standardization(). + Linearly scales each image in image to have mean 0 and variance 1. + ''' + if not _is_tensor_image(image): + if _is_pil_image(image): + image = np.asarray(image) + image = paddle.to_tensor(image) + + image = image.astype('float32') + num_pixels = paddle.to_tensor(image.size, dtype='float32') + image_mean = paddle.mean(image) + + stddev = paddle.std(image) + min_stddev = 1.0 / paddle.sqrt(num_pixels) + adjusted_stddev = paddle.maximum(stddev, min_stddev) + + return (image - image_mean) / adjusted_stddev + + +def random_brightness(image, brightness_factor): + ''' + Perform a random brightness on the input image. 
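+    For example, brightness_factor=0.5 draws a factor uniformly from
+    [0.5, 1.5], while brightness_factor=(0.8, 1.2) draws from [0.8, 1.2]
+    (see random_factor above).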
+ Parameters + ---------- + image: + Input images to adjust random brightness + brightness_factor: + Brightness adjustment factor (default=(1, 1)). Cannot be negative. + If it is a float, the factor is uniformly chosen from the range [max(0, 1-brightness), 1+brightness]. + If it is a sequence, it should be [min, max] for the range. + + Returns: + Adjusted image. + ------- + + ''' + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + brightness_factor = random_factor(brightness_factor, name='brightness') + + if _is_pil_image(image): + return F_pil.adjust_brightness(image, brightness_factor) + else: + return F_cv2.adjust_brightness(image, brightness_factor) + + +def random_contrast(image, contrast_factor): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + contrast_factor = random_factor(contrast_factor, name='contrast') + + if _is_pil_image(image): + return F_pil.adjust_contrast(image, contrast_factor) + else: + return F_cv2.adjust_contrast(image, contrast_factor) + + +def random_saturation(image, saturation_factor): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + saturation_factor = random_factor(saturation_factor, name='saturation') + + if _is_pil_image(image): + return F_pil.adjust_saturation(image, saturation_factor) + else: + return F_cv2.adjust_saturation(image, saturation_factor) + + +def random_hue(image, hue_factor): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + hue_factor = random_factor(hue_factor, name='hue', center=0, bound=(-0.5, 0.5), non_negative=False) + + if _is_pil_image(image): + return F_pil.adjust_hue(image, hue_factor) + else: + return F_cv2.adjust_hue(image, hue_factor) + + +def random_crop(image, size, padding, pad_if_needed, fill, padding_mode): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if isinstance(size, int): + size = (size, size) + elif isinstance(size, (tuple, list)) and len(size) == 2: + size = size + else: + raise ValueError('Size should be a int or a list/tuple with length of 2. ' 'But got {}'.format(size)) + + if padding is not None: + + image = pad(image, padding, fill, padding_mode) + + h, w = _get_image_size(image) + + # pad the width if needed + if pad_if_needed and w < size[1]: + image = pad(image, (size[1] - w, 0), fill, padding_mode) + # pad the height if needed + if pad_if_needed and h < size[0]: + image = pad(image, (0, size[0] - h), fill, padding_mode) + + h, w = _get_image_size(image) + target_height, target_width = size + + if h < target_height or w < target_width: + raise ValueError( + 'Crop size {} should be smaller than input image size {}. 
'.format((target_height, target_width), (h, w)) + ) + + offset_height = random.randint(0, h - target_height) + offset_width = random.randint(0, w - target_width) + + return crop(image, offset_height, offset_width, target_height, target_width) + + +def random_resized_crop(image, size, scale, ratio, interpolation): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if isinstance(size, int): + size = (size, size) + elif isinstance(size, (list, tuple)) and len(size) == 2: + size = size + else: + raise TypeError('Size should be a int or a list/tuple with length of 2.' 'But got {}.'.format(size)) + if not (isinstance(scale, (list, tuple)) and len(scale) == 2): + raise TypeError('Scale should be a list/tuple with length of 2.' 'But got {}.'.format(scale)) + if not (isinstance(ratio, (list, tuple)) and len(ratio) == 2): + raise TypeError('Scale should be a list/tuple with length of 2.' 'But got {}.'.format(ratio)) + + if (scale[0] > scale[1]) or (ratio[0] > ratio[1]): + raise ValueError("Scale and ratio should be of kind (min, max)") + + def _get_param(image, scale, ratio): + height, width = _get_image_size(image) + area = height * width + log_ratio = tuple(math.log(x) for x in ratio) + for _ in range(10): + target_area = np.random.uniform(*scale) * area + aspect_ratio = math.exp(np.random.uniform(*log_ratio)) + + w = int(round(math.sqrt(target_area * aspect_ratio))) + h = int(round(math.sqrt(target_area / aspect_ratio))) + + if 0 < w <= width and 0 < h <= height: + i = random.randint(0, height - h) + j = random.randint(0, width - w) + return i, j, h, w + + # Fallback to central crop + in_ratio = float(width) / float(height) + if in_ratio < min(ratio): + w = width + h = int(round(w / min(ratio))) + elif in_ratio > max(ratio): + h = height + w = int(round(h * max(ratio))) + else: + # return whole image + w = width + h = height + i = (height - h) // 2 + j = (width - w) // 2 + return i, j, h, w + + offset_height, offset_width, target_height, target_width = _get_param(image, scale, ratio) + + image = crop(image, offset_height, offset_width, target_height, target_width) + image = resize(image, size, interpolation) + + return image + + +def random_vflip(image, prob): + + if random.random() < prob: + return vflip(image) + return image + + +def random_hflip(image, prob): + + if random.random() < prob: + return hflip(image) + return image + + +def random_rotation(image, degrees, interpolation, expand, center, fill): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if isinstance(degrees, numbers.Number): + if degrees < 0: + raise ValueError('If degrees is a single number, it must be positive.' 'But got {}'.format(degrees)) + degrees = (-degrees, degrees) + elif not (isinstance(degrees, (list, tuple)) and len(degrees) == 2): + raise ValueError('If degrees is a list/tuple, it must be length of 2.' 
'But got {}'.format(degrees)) + else: + if degrees[0] > degrees[1]: + raise ValueError('if degrees is a list/tuple, it should be (min, max).') + + angle = np.random.uniform(degrees[0], degrees[1]) + + if _is_pil_image(image): + return F_pil.rotate(image, angle, interpolation, expand, center, fill) + else: + return F_cv2.rotate(image, angle, interpolation, expand, center, fill) + + +def random_shear(image, degrees, interpolation, fill): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if isinstance(degrees, numbers.Number): + degrees = (-degrees, degrees, 0, 0) + elif isinstance(degrees, (list, tuple)) and (len(degrees) == 2 or len(degrees) == 4): + if len(degrees) == 2: + degrees = (degrees[0], degrees[1], 0, 0) + else: + raise ValueError( + 'degrees should be a single number or a list/tuple with length in (2 ,4).' + 'But got {}'.format(degrees) + ) + + if _is_pil_image(image): + return F_pil.random_shear(image, degrees, interpolation, fill) + else: + return F_cv2.random_shear(image, degrees, interpolation, fill) + + +def random_shift(image, shift, interpolation, fill): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if not (isinstance(shift, (tuple, list)) and len(shift) == 2): + + raise ValueError('Shift should be a list/tuple with length of 2.' 'But got {}'.format(shift)) + + if _is_pil_image(image): + return F_pil.random_shift(image, shift, interpolation, fill) + else: + return F_cv2.random_shift(image, shift, interpolation, fill) + + +def random_zoom(image, zoom, interpolation, fill): + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. Got {}'.format(type(image))) + + if not (isinstance(zoom, (tuple, list)) and len(zoom) == 2): + + raise ValueError('Zoom should be a list/tuple with length of 2.' 'But got {}'.format(zoom)) + if not (0 <= zoom[0] <= zoom[1]): + + raise ValueError('Zoom values should be positive, and zoom[1] should be greater than zoom[0].') + + if _is_pil_image(image): + return F_pil.random_zoom(image, zoom, interpolation, fill) + else: + return F_cv2.random_zoom(image, zoom, interpolation, fill) + + +def random_affine(image, degrees, shift, zoom, shear, interpolation, fill): + + if not (_is_pil_image(image) or _is_numpy_image(image)): + raise TypeError('image should be PIL Image or ndarray with dim=[2 or 3]. 
Got {}'.format(type(image))) + + if _is_pil_image(image): + return F_pil.random_affine(image, degrees, shift, zoom, shear, interpolation, fill) + else: + return F_cv2.random_affine(image, degrees, shift, zoom, shear, interpolation, fill) diff --git a/tensorlayer/vision/tensorflow_vision.py b/tensorlayer/vision/tensorflow_vision.py new file mode 100644 index 000000000..cc9595ce3 --- /dev/null +++ b/tensorlayer/vision/tensorflow_vision.py @@ -0,0 +1,1395 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + +import tensorflow as tf +import numpy as np +from tensorflow.python.ops import math_ops +from tensorflow.python.ops import array_ops, random_ops +from tensorflow.python.framework import ops +from tensorflow.python.ops.image_ops_impl import _AssertAtLeast3DImage +from tensorflow.python.framework import dtypes +from tensorflow.python.ops.image_ops_impl import convert_image_dtype +import numbers +import PIL +from PIL import Image +import math +import scipy +from scipy import ndimage +__all__ = [ + 'central_crop', + 'to_tensor', + 'crop', + 'pad', + 'resize', + 'transpose', + 'hwc_to_chw', + 'chw_to_hwc', + 'rgb_to_hsv', + 'hsv_to_rgb', + 'rgb_to_gray', + 'adjust_brightness', + 'adjust_contrast', + 'adjust_hue', + 'adjust_saturation', + 'normalize', + 'hflip', + 'vflip', + 'padtoboundingbox', + 'standardize', + 'random_brightness', + 'random_contrast', + 'random_saturation', + 'random_hue', + 'random_crop', + 'random_resized_crop', + 'random_vflip', + 'random_hflip', + 'random_rotation', + 'random_shear', + 'random_shift', + 'random_zoom', + 'random_affine', +] + + +def _is_pil_image(image): + return isinstance(image, Image.Image) + + +def _is_numpy_image(image): + return isinstance(image, np.ndarray) and (image.ndim in {2, 3}) + + +def _get_image_size(image): + image_shape = image.get_shape() + if image_shape.ndims == 3: + height, width, channels = image_shape + return height, width + elif image_shape.ndims == 4: + batch, height, width, channels = image_shape + return height, width + + +def random_factor(factor, name, center=1, bound=(0, float('inf')), non_negative=True): + if isinstance(factor, numbers.Number): + if factor < 0: + raise ValueError('The input value of {} cannot be negative.'.format(name)) + factor = [center - factor, center + factor] + if non_negative: + factor[0] = max(0, factor[0]) + elif isinstance(factor, (tuple, list)) and len(factor) == 2: + if not bound[0] <= factor[0] <= factor[1] <= bound[1]: + raise ValueError( + "Please check your value range of {} is valid and " + "within the bound {}.".format(name, bound) + ) + else: + raise TypeError("Input of {} should be either a single value, or a list/tuple of " "length 2.".format(name)) + factor = np.random.uniform(factor[0], factor[1]) + return factor + + +def central_crop(image, size=None, central_fraction=None): + ''' + + Parameters + ---------- + image : + input Either a 3-D float Tensor of shape [height, width, depth], + or a 4-D Tensor of shape [batch_size, height, width, depth]. + central_fraction : + float (0, 1], fraction of size to crop + size: + size (Union[int, sequence]) – The output size of the cropped image. If size is an integer, a square crop of size (size, size) is returned. + If size is a sequence of length 2, it should be (height, width). + Returns : + 3-D / 4-D float Tensor, as per the input. 
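+
+    Examples
+    --------
+    Illustrative sketch (input values assumed):
+
+    >>> import tensorflow as tf
+    >>> img = tf.random.uniform((224, 224, 3))
+    >>> central_crop(img, size=112).shape              # (112, 112, 3)
+    >>> central_crop(img, central_fraction=0.5).shape  # (112, 112, 3)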
+ ------- + + ''' + if size is None and central_fraction is None: + raise ValueError('central_fraction and size can not be both None') + + if size is not None: + if not isinstance(size, (int, list, tuple)) or (isinstance(size, (list, tuple)) and len(size) != 2): + raise ValueError( + "Size should be a single integer or a list/tuple (h, w) of length 2.But" + "got {}.".format(type(size)) + ) + if isinstance(size, int): + target_height = size + target_width = size + else: + target_height = size[0] + target_width = size[1] + image = ops.convert_to_tensor(image, name='image') + rank = image.get_shape().ndims + if rank != 3 and rank != 4: + raise ValueError( + '`image` should either be a Tensor with rank = 3 or ' + 'rank = 4. Had rank = {}.'.format(rank) + ) + + def _get_dim(tensor, idx): + static_shape = tensor.get_shape().dims[idx].value + if static_shape is not None: + return static_shape, False + return array_ops.shape(tensor)[idx], True + + if rank == 3: + img_h, dynamic_h = _get_dim(image, 0) + img_w, dynamic_w = _get_dim(image, 1) + img_d = image.get_shape()[2] + else: + img_bs = image.get_shape()[0] + img_h, dynamic_h = _get_dim(image, 1) + img_w, dynamic_w = _get_dim(image, 2) + img_d = image.get_shape()[3] + + bbox_h_size = target_height + bbox_w_size = target_width + + if dynamic_h: + img_hd = math_ops.cast(img_h, dtypes.float64) + target_height = math_ops.cast(target_height, dtypes.float64) + bbox_h_start = math_ops.cast((img_hd - target_height) / 2, dtypes.int32) + else: + img_hd = float(img_h) + target_height = float(target_height) + bbox_h_start = int((img_hd - target_height) / 2) + + if dynamic_w: + img_wd = math_ops.cast(img_w, dtypes.float64) + target_width = math_ops.cast(target_width, dtypes.float64) + bbox_w_start = math_ops.cast((img_wd - target_width) / 2, dtypes.int32) + else: + img_wd = float(img_w) + target_width = float(target_width) + bbox_w_start = int((img_wd - target_width) / 2) + + if rank == 3: + bbox_begin = array_ops.stack([bbox_h_start, bbox_w_start, 0]) + bbox_size = array_ops.stack([bbox_h_size, bbox_w_size, -1]) + else: + bbox_begin = array_ops.stack([0, bbox_h_start, bbox_w_start, 0]) + bbox_size = array_ops.stack([-1, bbox_h_size, bbox_w_size, -1]) + + image = array_ops.slice(image, bbox_begin, bbox_size) + + if rank == 3: + image.set_shape([None if dynamic_h else bbox_h_size, None if dynamic_w else bbox_w_size, img_d]) + else: + image.set_shape([img_bs, None if dynamic_h else bbox_h_size, None if dynamic_w else bbox_w_size, img_d]) + return image + + elif central_fraction is not None: + return tf.image.central_crop(image, central_fraction) + + +def to_tensor(img, data_format): + '''Converts a ``image`` to tf.Tensor. + + Parameters + ---------- + img: + Image to be converted to tensor. + data_format: + Data format of output tensor, should be 'HWC' or + 'CHW'. Default: 'HWC'. + + Returns: + Tensor: Converted image. + ------- + + ''' + if not (_is_pil_image(img) or _is_numpy_image(img)): + raise TypeError('img should be PIL Image or ndarray. 
But got {}'.format(type(img))) + + if _is_pil_image(img): + # PIL Image + if img.mode == 'I': + image = tf.convert_to_tensor(np.array(img, np.int32, copy=False)) + elif img.mode == 'I;16': + # cast and reshape not support int16 + image = tf.convert_to_tensor(np.array(img, np.int32, copy=False)) + elif img.mode == 'F': + image = tf.convert_to_tensor(np.array(img, np.float32, copy=False)) + elif img.mode == '1': + image = 255 * tf.convert_to_tensor(np.array(img, np.uint8, copy=False)) + else: + image = tf.convert_to_tensor(np.array(img, copy=False)) + + if img.mode == 'YCbCr': + nchannel = 3 + elif img.mode == 'I;16': + nchannel = 1 + else: + nchannel = len(img.mode) + + dtype = image.dtype + if dtype == 'tf.uint8': + image = tf.cast(image, tf.float32) / 255. + + image = tf.reshape(image, shape=[img.size[1], img.size[0], nchannel]) + if data_format == 'CHW': + image = tf.transpose(image, perm=[2, 0, 1]) + return image + else: + if img.ndim == 2: + img = img[:, :, None] + + if data_format == 'CHW': + img = tf.convert_to_tensor(img.transpose((2, 0, 1))) + else: + img = tf.convert_to_tensor(img) + + dtype = img.dtype + if dtype == 'tf.uint8': + img = tf.cast(img, tf.float32) / 255. + return img + + +def crop(image, offset_height, offset_width, target_height, target_width): + + return tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width) + + +def pad(image, padding, padding_value, mode): + ''' + + Parameters + ---------- + image: + A 3-D or 4-D Tensor. + padding: + An integer or a list/tuple. If a single number is provided, pad all borders with this value. + If a tuple or list of 2 values is provided, pad the left and right with the first value and the top and bottom with the second value. + If 4 values are provided as a list or tuple, pad the (left , top, right, bottom) respectively. + padding_value: + In "CONSTANT" mode, the scalar pad value to use. Must be same type as tensor. + mode: + One of "CONSTANT", "REFLECT", or "SYMMETRIC" (case-insensitive) + Returns: + A padded Tensor. Has the same type as tensor. + ------- + + ''' + image = ops.convert_to_tensor(image, name='image') + image_shape = image.get_shape() + if len(image_shape) == 3: + batch_size = 0 + elif len(image_shape) == 4: + batch_size = image_shape[0] + else: + raise TypeError('Image must be a 3-D tensor or 4-D tensor.') + + if isinstance(padding, int): + padding = ((padding, padding), (padding, padding)) + elif isinstance(padding, list) or isinstance(padding, tuple): + if len(padding) == 2: + padding = ((padding[1], padding[1]), (padding[0], padding[0])) + elif len(padding) == 4: + padding = ((padding[1], padding[3]), (padding[0], padding[2])) + else: + raise ValueError('The length of padding should be 2 or 4, but got {}.'.format(len(padding))) + else: + raise TypeError('Padding should be an integer or a list/tuple, but got {}.'.format(type(padding))) + + if batch_size == 0: + padding = (padding[0], padding[1], (0, 0)) + else: + padding = ((0, 0), padding[0], padding[1], (0, 0)) + + return tf.pad(image, padding, mode=mode, constant_values=padding_value) + + +def resize(image, size, method): + ''' + + Parameters + ---------- + images: + Input images to resize + size: + The output size of the resized image. + If size is an integer, smaller edge of the image will be resized to this value with + the same image aspect ratio. + If size is a sequence of (height, width), this will be the desired output size. 
+ method: + An image.ResizeMethod, or string equivalent shoulid be in + (bilinear, lanczos3, lanczos5, bicubic, gaussian, nearest, area, mitchellcubic). + Defaults to bilinear. + preserve_aspect_ratio: + Whether to preserve the aspect ratio. + Returns: + resized images + ------- + + ''' + if not (isinstance(size, int) or (isinstance(size, (list, tuple)) and len(size) == 2)): + raise TypeError('Size should be a single number or a list/tuple (h, w) of length 2.' 'Got {}.'.format(size)) + image = ops.convert_to_tensor(image) + orig_dtype = image.dtype + if orig_dtype not in [dtypes.float16, dtypes.float32]: + image = convert_image_dtype(image, dtypes.float32) + + if image.get_shape().ndims == 3: + h, w, _ = image.get_shape().as_list() + elif image.get_shape().ndims == 4: + _, h, w, _ = image.get_shape().as_list() + + if isinstance(size, int): + if (w <= h and w == size) or (h <= w and h == size): + size = (h, w) + if w < h: + target_w = size + target_h = int(size * h / w) + size = (target_h, target_w) + else: + target_h = size + target_w = int(size * w / h) + size = (target_h, target_w) + image = tf.image.resize(image, size, method, preserve_aspect_ratio=False) + return convert_image_dtype(image, orig_dtype, saturate=True) + + +def transpose(image, order): + image = ops.convert_to_tensor(image) + shape = image.get_shape() + if shape.ndims == 3 or shape.ndims is None: + if len(order) != 3: + raise ValueError('if image is 3-D tensor, order should be a list/tuple with length of 3') + return array_ops.transpose(image, order) + elif shape.ndims == 4: + if len(order) != 4: + raise ValueError('if image is 4-D tensor, order should be a list/tuple with length of 4') + return array_ops.transpose(image, order) + else: + raise ValueError('\'image\' must have either 3 or 4 dimensions.') + + +def hwc_to_chw(image): + + if (len(image.shape) == 3): + return transpose(image, (2, 0, 1)) + elif (len(image.shape) == 4): + return transpose(image, (0, 3, 1, 2)) + else: + raise ValueError('\'image\' must have either 3 or 4 dimensions.') + + +def chw_to_hwc(image): + + if (len(image.shape) == 3): + return transpose(image, (1, 2, 0)) + elif (len(image.shape) == 4): + return transpose(image, (0, 2, 3, 1)) + else: + raise ValueError('\'image\' must have either 3 or 4 dimensions.') + + +def rgb_to_hsv(image): + + return tf.image.rgb_to_hsv(image) + + +def hsv_to_rgb(image): + + return tf.image.hsv_to_rgb(image) + + +def rgb_to_gray(image, num_output_channels): + + if num_output_channels not in (1, 3): + raise ValueError('num_output_channels should be either 1 or 3') + + image = ops.convert_to_tensor(image, name='image') + orig_dtype = image.dtype + flt_image = convert_image_dtype(image, dtypes.float32) + rgb_weights = [0.2989, 0.5870, 0.1140] + gray_float = math_ops.tensordot(flt_image, rgb_weights, [-1, -1]) + gray_float = array_ops.expand_dims(gray_float, -1) + if num_output_channels == 3: + gray_float = array_ops.stack([gray_float, gray_float, gray_float], axis=2) + return convert_image_dtype(gray_float, orig_dtype) + + +def adjust_brightness(image, brightness_factor): + ''' + Parameters + ---------- + images: + Input images to adjust brightness + brightness_factor(float): How much to adjust the brightness. Can be + any non negative number. 0 gives a black image, 1 gives the + original image while 2 increases the brightness by a factor of 2. 
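+        For instance, brightness_factor=1.2 is applied as
+        1.2 * image + (1 - 1.2) * zeros_like(image), clipped to [0, 1] in
+        float space and converted back to the input dtype.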
+ Returns: + adjusted images + ------- + ''' + if brightness_factor < 0: + raise ValueError('brightness_factor ({}) is not non-negative.'.format(brightness_factor)) + + image = ops.convert_to_tensor(image, name='image') + image = _AssertAtLeast3DImage(image) + + orig_dtype = image.dtype + if orig_dtype not in [dtypes.float16, dtypes.float32]: + image = convert_image_dtype(image, dtypes.float32) + + brightness_factor = math_ops.cast(brightness_factor, image.dtype) + image_zeros = tf.zeros_like(image) + adjusted = brightness_factor * image + (1.0 - brightness_factor) * image_zeros + adjusted = tf.clip_by_value(adjusted, clip_value_min=0, clip_value_max=1.0) + return convert_image_dtype(adjusted, orig_dtype, saturate=True) + + +def adjust_contrast(image, contrast_factor): + ''' + Parameters + ---------- + images: + Input images to adjust contrast + contrast_factor(float): How much to adjust the contrast. Can be + any non negative number. 0 gives a gray image, 1 gives the + original image while 2 increases the contrast by a factor of 2. + Returns: + adjusted images + ------- + ''' + if contrast_factor < 0: + raise ValueError('contrast_factor ({}) is not non-negative.'.format(contrast_factor)) + + image = ops.convert_to_tensor(image, name='image') + image = _AssertAtLeast3DImage(image) + + orig_dtype = image.dtype + if orig_dtype not in [dtypes.float16, dtypes.float32]: + image = convert_image_dtype(image, dtypes.float32) + + contrast_factor = math_ops.cast(contrast_factor, image.dtype) + mean = tf.math.reduce_mean(tf.image.rgb_to_grayscale(image), keepdims=True) + adjusted = contrast_factor * image + (1 - contrast_factor) * mean + adjusted = tf.clip_by_value(adjusted, clip_value_min=0, clip_value_max=1.0) + return convert_image_dtype(adjusted, orig_dtype, saturate=True) + + +def adjust_hue(image, hue_factor): + ''' + Parameters + ---------- + images(Tensor): + Input images to adjust hue + hue_factor(float): How much to shift the hue channel. Should be in + [-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in + HSV space in positive and negative direction respectively. + 0 means no shift. Therefore, both -0.5 and 0.5 will give an image + with complementary colors while 0 gives the original image. + Returns(Tensor): + Adjusted images + ------- + ''' + if not (-0.5 <= hue_factor <= 0.5): + raise ValueError('hue_factor ({}) is not in [-0.5, 0.5].'.format(hue_factor)) + + image = ops.convert_to_tensor(image, name='image') + image = _AssertAtLeast3DImage(image) + + orig_dtype = image.dtype + if orig_dtype not in [dtypes.float16, dtypes.float32]: + image = convert_image_dtype(image, dtypes.float32) + + hue_factor = math_ops.cast(hue_factor, image.dtype) + image = tf.image.rgb_to_hsv(image) + h, s, v = tf.split(image, num_or_size_splits=[1, 1, 1], axis=2) + h = (h + hue_factor) % 1.0 + image = tf.concat((h, s, v), axis=2) + adjusted = tf.image.hsv_to_rgb(image) + + return convert_image_dtype(adjusted, orig_dtype, saturate=True) + + +def adjust_saturation(image, saturation_factor): + ''' + Parameters + ---------- + images(Tensor): + Input images to adjust saturation + contrast_factor(float): How much to adjust the saturation. 0 will + give a black and white image, 1 will give the original image while + 2 will enhance the saturation by a factor of 2. 
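+        For instance, saturation_factor=2.0 is applied as
+        2.0 * image + (1 - 2.0) * rgb_to_grayscale(image), clipped to [0, 1].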
+ Returns(Tensor): + Adjusted images + ------- + ''' + if saturation_factor < 0: + raise ValueError('saturation_factor ({}) is not non-negative.'.format(saturation_factor)) + + image = ops.convert_to_tensor(image, name='image') + image = _AssertAtLeast3DImage(image) + + orig_dtype = image.dtype + if orig_dtype not in [dtypes.float16, dtypes.float32]: + image = convert_image_dtype(image, dtypes.float32) + + saturation_factor = math_ops.cast(saturation_factor, image.dtype) + gray_image = tf.image.rgb_to_grayscale(image) + adjusted = saturation_factor * image + (1 - saturation_factor) * gray_image + adjusted = tf.clip_by_value(adjusted, clip_value_min=0, clip_value_max=1.0) + return convert_image_dtype(adjusted, orig_dtype, saturate=True) + + +def hflip(image): + ''' + + Parameters + ---------- + image(Tensor): + Input images to flip an image horizontally (left to right) + + Returns(Tensor): + Flipped images + ------- + + ''' + return tf.image.flip_left_right(image) + + +def vflip(image): + ''' + + Parameters + ---------- + image(Tensor): + Input images to flip an image vertically (up to down) + + Returns(Tensor): + Flipped images + ------- + + ''' + return tf.image.flip_up_down(image) + + +def padtoboundingbox(image, offset_height, offset_width, target_height, target_width, padding_value): + ''' + + Parameters + ---------- + image: + 4-D Tensor of shape `[batch, height, width, channels]` or 3-D Tensor + of shape `[height, width, channels]`. + offset_height: + Number of rows of padding_values to add on top. + offset_width: + Number of columns of padding_values to add on the left. + target_height: + Height of output image. + target_width: + Width of output image. + padding_value: + value to pad + + Returns: + If `image` was 4-D, a 4-D float Tensor of shape + `[batch, target_height, target_width, channels]` + If `image` was 3-D, a 3-D float Tensor of shape + `[target_height, target_width, channels]` + ------- + + ''' + image = ops.convert_to_tensor(image, name='image') + + if offset_height < 0: + raise ValueError('offset_height must be >= 0') + if offset_width < 0: + raise ValueError('offset_width must be >= 0') + + image_shape = image.get_shape() + if image_shape.ndims == 3: + height, width, channels = image.get_shape() + elif image_shape.ndims == 4: + batch, height, width, channels = image.get_shape() + else: + raise ValueError('\'image\' (shape %s) must have either 3 or 4 dimensions.' % image_shape) + + after_padding_width = target_width - offset_width - width + after_padding_height = target_height - offset_height - height + if after_padding_height < 0: + raise ValueError('image height must be <= target - offset') + if after_padding_width < 0: + raise ValueError('image width must be <= target - offset') + + return pad( + image, padding=(offset_width, offset_height, after_padding_width, after_padding_height), + padding_value=padding_value, mode='constant' + ) + + +def normalize(image, mean, std, data_format): + ''' + Parameters + ---------- + image: + An n-D Tensor with at least 3 dimensions, the last 3 of which are the dimensions of each image. + mean: + List or tuple of mean values for each channel, with respect to channel order. + std: + List or tuple of standard deviations for each channel. + channel_mode: + Decide to implement standardization on whole image or each channel of image. + Returns: + A Tensor with the same shape and dtype as image. 
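+
+    Examples
+    --------
+    Illustrative sketch with conventional ImageNet statistics (values assumed,
+    not defaults of this function):
+
+    >>> import tensorflow as tf
+    >>> img = tf.random.uniform((224, 224, 3), maxval=255.)
+    >>> out = normalize(img, mean=(123.675, 116.28, 103.53),
+    ...                 std=(58.395, 57.12, 57.375), data_format='HWC')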
+ ------- + ''' + image = ops.convert_to_tensor(image, name='image') + image = math_ops.cast(image, dtype=tf.float32) + image = _AssertAtLeast3DImage(image) + + if data_format == 'CHW': + num_channels = image.shape[0] + elif data_format == 'HWC': + num_channels = image.shape[2] + + if isinstance(mean, numbers.Number): + mean = (mean, ) * num_channels + elif isinstance(mean, (list, tuple)): + if len(mean) != num_channels: + raise ValueError("Length of mean must be 1 or equal to the number of channels({0}).".format(num_channels)) + if isinstance(std, numbers.Number): + std = (std, ) * num_channels + elif isinstance(std, (list, tuple)): + if len(std) != num_channels: + raise ValueError("Length of std must be 1 or equal to the number of channels({0}).".format(num_channels)) + + if data_format == 'CHW': + std = np.float32(np.array(std).reshape((-1, 1, 1))) + mean = np.float32(np.array(mean).reshape((-1, 1, 1))) + elif data_format == 'HWC': + mean = np.float32(np.array(mean).reshape((1, 1, -1))) + std = np.float32(np.array(std).reshape((1, 1, -1))) + + mean = ops.convert_to_tensor(mean) + mean = math_ops.cast(mean, dtype=tf.float32) + std = ops.convert_to_tensor(std) + std = math_ops.cast(std, dtype=tf.float32) + image -= mean + image = math_ops.divide(image, std) + return image + +def standardize(image): + ''' + Reference to tf.image.per_image_standardization(). + Linearly scales each image in image to have mean 0 and variance 1. + + Parameters + ---------- + image: + An n-D Tensor with at least 3 dimensions, the last 3 of which are the dimensions of each image. + + Returns: + A Tensor with the same shape as image and its dtype is float32. + ------- + + ''' + image = ops.convert_to_tensor(image, name='image') + image = math_ops.cast(image, dtype=tf.float32) + return tf.image.per_image_standardization(image) + + +def random_brightness(image, brightness_factor): + ''' + Perform a random brightness on the input image. + Parameters + ---------- + image: + Input images to adjust random brightness + brightness_factor: + Brightness adjustment factor (default=(1, 1)). Cannot be negative. + If it is a float, the factor is uniformly chosen from the range [max(0, 1-brightness), 1+brightness]. + If it is a sequence, it should be [min, max] for the range. + + Returns: + Adjusted image. + ------- + + ''' + brightness_factor = random_factor(brightness_factor, name='brightness') + + return adjust_brightness(image, brightness_factor) + + +def random_contrast(image, contrast_factor): + ''' + Perform a random contrast on the input image. + Parameters + ---------- + image: + Input images to adjust random contrast + contrast_factor: + Contrast adjustment factor (default=(1, 1)). Cannot be negative. + If it is a float, the factor is uniformly chosen from the range [max(0, 1-contrast), 1+contrast]. + If it is a sequence, it should be [min, max] for the range. + + Returns: + Adjusted image. + ------- + + ''' + contrast_factor = random_factor(contrast_factor, name='contrast') + + return adjust_contrast(image, contrast_factor) + + +def random_saturation(image, saturation_factor): + ''' + Perform a random saturation on the input image. + Parameters + ---------- + image: + Input images to adjust random saturation + saturation_factor: + Saturation adjustment factor (default=(1, 1)). Cannot be negative. + If it is a float, the factor is uniformly chosen from the range [max(0, 1-saturation), 1+saturation]. + If it is a sequence, it should be [min, max] for the range. + + Returns: + Adjusted image. 
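+
+    Examples
+    --------
+    >>> # Illustrative: a factor is drawn uniformly from [0.7, 1.3]
+    >>> out = random_saturation(image, saturation_factor=0.3)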
+ ------- + + ''' + + saturation_factor = random_factor(saturation_factor, name='saturation') + + return adjust_saturation(image, saturation_factor) + + +def random_hue(image, hue_factor): + ''' + Perform a random contrast on the input image. + Parameters + ---------- + image: + Input images to adjust random contrast + brightness_factor: + Contrast adjustment factor (default=(1, 1)). Cannot be negative. + If it is a float, the factor is uniformly chosen from the range [max(0, 1-contrast), 1+contrast]. + If it is a sequence, it should be [min, max] for the range. + + Returns: + Adjusted image. + ------- + + ''' + hue_factor = random_factor(hue_factor, name='hue', center=0, bound=(-0.5, 0.5), non_negative=False) + + return adjust_hue(image, hue_factor) + + +def random_crop(image, size, padding, pad_if_needed, fill, padding_mode): + ''' + + Parameters + ---------- + image: + Input images to crop and pad if needed. + size: + Desired output size of the crop. If size is an int instead of sequence like (h, w), + a square crop (size, size) is made. If provided a sequence of length 1, + it will be interpreted as (size[0], size[0]). + padding: + Optional, padding on each border of the image. Default is None. + If a single int is provided this is used to pad all borders. + If sequence of length 2 is provided this is the padding on left/right and top/bottom respectively. + If a sequence of length 4 is provided this is the padding for the left, top, right and bottom borders respectively. + pad_if_needed: + It will pad the image if smaller than the desired size to avoid raising an exception. + Since cropping is done after padding, the padding seems to be done at a random offset. + fill: + Pixel fill value for constant fill. Default is 0. + padding_mode: + Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant. + + Returns: + cropped images. + ------- + + ''' + image = ops.convert_to_tensor(image, name='image') + _AssertAtLeast3DImage(image) + + if isinstance(size, int): + size = (size, size) + elif isinstance(size, (tuple, list)) and len(size) == 2: + size = size + else: + raise ValueError('Size should be a int or a list/tuple with length of 2. ' 'But got {}'.format(size)) + + size = ops.convert_to_tensor(size, dtype=dtypes.int32, name='size') + if padding is not None: + image = pad(image, padding, fill, padding_mode) + + image_shape = image.get_shape() + if image_shape.ndims == 3: + height, width, channels = image_shape + elif image_shape.ndims == 4: + batch, height, width, channels = image_shape + + if pad_if_needed and height < size[0]: + image = pad(image, (0, size[0] - height), fill, padding_mode) + if pad_if_needed and width < size[1]: + image = pad(image, (size[1] - width, 0), fill, padding_mode) + + image_shape = image.get_shape() + if image_shape.ndims == 3: + height, width, channels = image_shape + elif image_shape.ndims == 4: + batch, height, width, channels = image_shape + + target_height, target_width = size + if height < target_height or width < target_width: + raise ValueError( + 'Crop size {} should be smaller than input image size {}. 
'.format( + (target_height, target_width), (height, width) + ) + ) + + if target_height == height and target_width == width: + return crop(image, 0, 0, target_height, target_width) + + offset_height = random_ops.random_uniform([], minval=0, maxval=height - target_height + 1, dtype=size.dtype) + + offset_width = random_ops.random_uniform([], minval=0, maxval=width - target_width + 1, dtype=size.dtype) + + return crop(image, offset_height, offset_width, target_height, target_width) + + +def random_resized_crop(image, size, scale, ratio, interpolation): + '''Crop the given image to random size and aspect ratio. + + Parameters + ---------- + image: + 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels]. + size: + Target size of output image, with (height, width) shape. if size is int, target size will be (size, size). + scale: + Range of size of the origin size cropped. Default: (0.08, 1.0) + ratio: + Range of aspect ratio of the origin aspect ratio cropped. Default: (0.75, 1.33) + interpolation: + Interpolation method. Default: 'bilinear'. + + Returns: + Randomly cropped and resized image. + ------- + + ''' + + if isinstance(size, int): + size = (size, size) + elif isinstance(size, (list, tuple)) and len(size) == 2: + size = size + else: + raise TypeError('Size should be a int or a list/tuple with length of 2.' 'But got {}.'.format(size)) + if not (isinstance(scale, (list, tuple)) and len(scale) == 2): + raise TypeError('Scale should be a list/tuple with length of 2.' 'But got {}.'.format(scale)) + if not (isinstance(ratio, (list, tuple)) and len(ratio) == 2): + raise TypeError('Scale should be a list/tuple with length of 2.' 'But got {}.'.format(ratio)) + + if (scale[0] > scale[1]) or (ratio[0] > ratio[1]): + raise ValueError("Scale and ratio should be of kind (min, max)") + image = ops.convert_to_tensor(image, name='image') + image = _AssertAtLeast3DImage(image) + + def get_param(image, scale, ratio): + height, width = _get_image_size(image) + area = math_ops.cast(height * width, dtype=dtypes.float32) + ratio = ops.convert_to_tensor(ratio, dtype=dtypes.float32) + log_ratio = math_ops.log(ratio) + for _ in range(10): + target_area = area * random_ops.random_uniform([], minval=scale[0], maxval=scale[1], dtype=dtypes.float32) + aspect_ratio = math_ops.exp( + random_ops.random_uniform([], minval=log_ratio[0], maxval=log_ratio[1], dtype=dtypes.float32) + ) + + target_width = math_ops.to_int32(math_ops.round(math_ops.sqrt(target_area * aspect_ratio))) + + target_height = math_ops.to_int32(math_ops.round(math_ops.sqrt(target_area / aspect_ratio))) + + if 0 < target_width <= width and 0 < target_height <= height: + offset_height = random_ops.random_uniform( + [], minval=0, maxval=height - target_height + 1, dtype=dtypes.int32 + ) + + offset_width = random_ops.random_uniform( + [], minval=0, maxval=width - target_width + 1, dtype=dtypes.int32 + ) + + return offset_height, offset_width, target_height, target_width + + height = ops.convert_to_tensor(height, dtype=dtypes.float32) + width = ops.convert_to_tensor(width, dtype=dtypes.float32) + in_ratio = width / height + if in_ratio < ratio[0]: + target_width = width + target_height = math_ops.to_int32(math_ops.round(target_width / ratio[0])) + elif in_ratio > ratio[1]: + target_height = height + target_width = math_ops.to_int32(math_ops.round(target_height / ratio[1])) + else: + target_height = height + target_width = width + offset_height = (height - target_height) // 2 + offset_width = (width - 
target_width) // 2
+        return offset_height, offset_width, target_height, target_width
+
+    offset_height, offset_width, target_height, target_width = get_param(image, scale, ratio)
+    image = crop(image, offset_height, offset_width, target_height, target_width)
+    image = resize(image, size, interpolation)
+    return image
+
+
+def random_vflip(image, prob):
+    '''Vertically flip the input image randomly with a given probability.
+
+    Parameters
+    ----------
+    image:
+        4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels].
+    prob:
+        probability of the image being flipped. Default value is 0.5
+
+    Returns
+    -------
+        A tensor of the same type and shape as image.
+
+    '''
+    image = ops.convert_to_tensor(image, name='image')
+    image = _AssertAtLeast3DImage(image)
+    random_prob = random_ops.random_uniform([], minval=0, maxval=1.0, dtype=dtypes.float32)
+    flip_flag = math_ops.less(random_prob, prob)
+    if flip_flag:
+        return vflip(image)
+    return image
+
+
+def random_hflip(image, prob):
+    '''Horizontally flip the input image randomly with a given probability.
+
+    Parameters
+    ----------
+    image:
+        4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels].
+    prob:
+        probability of the image being flipped. Default value is 0.5
+
+    Returns
+    -------
+        A tensor of the same type and shape as image.
+
+    '''
+    image = ops.convert_to_tensor(image, name='image')
+    image = _AssertAtLeast3DImage(image)
+    random_prob = random_ops.random_uniform([], minval=0, maxval=1.0, dtype=dtypes.float32)
+    flip_flag = math_ops.less(random_prob, prob)
+    if flip_flag:
+        return hflip(image)
+    return image
+
+
+def random_rotation(image, degrees, interpolation, expand, center, fill):
+    '''Rotate the image by a random angle.
+
+    Parameters
+    ----------
+    image:
+        Input tensor. Must be 3D.
+    degrees:
+        Range of degrees to select from. If degrees is a number instead of sequence like (min, max),
+        the range of degrees will be (-degrees, +degrees).
+    interpolation:
+        Points outside the boundaries of the input are filled according to the given mode
+        (one of {'nearest', 'bilinear'}).
+    expand:
+        Optional expansion flag.
+        If true, expands the output to make it large enough to hold the entire rotated image.
+        If false or omitted, make the output image the same size as the input image.
+        Note that the expand flag assumes rotation around the center and no translation.
+    center:
+        Optional center of rotation, (x, y). Origin is the upper left corner.
+        Default is the center of the image.
+    fill:
+        Pixel fill value for the area outside the rotated image.
+        Default is ``0``. If given a number, the value is used for all bands respectively.
+
+    Returns
+    -------
+        Rotated image tensor.
+
+    '''
+    if isinstance(image, (tf.Tensor, np.ndarray)) and len(image.shape) == 3:
+        image = np.asarray(image)
+    else:
+        raise ValueError('Image should be a 3d tensor or np.ndarray.')
+    h, w, c = image.shape[0], image.shape[1], image.shape[2]
+
+    if isinstance(degrees, numbers.Number):
+        if degrees < 0:
+            raise ValueError('If degrees is a single number, it must be positive. ' 'But got {}'.format(degrees))
+        degrees = (-degrees, degrees)
+    elif not (isinstance(degrees, (list, tuple)) and len(degrees) == 2):
+        raise ValueError('If degrees is a list/tuple, it must be of length 2. '
'But got {}'.format(degrees)) + else: + if degrees[0] > degrees[1]: + raise ValueError('if degrees is a list/tuple, it should be (min, max).') + + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + + if interpolation not in ('nearest', 'bilinear'): + raise ValueError('Interpolation only support {\'nearest\', \'bilinear\'} .') + + orig_dtype = image.dtype + image = np.asarray(image, dtype=np.float) + theta = np.random.uniform(degrees[0], degrees[1]) + angle = -math.radians(theta) + rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]]) + + if center is None: + rotn_center = (w / 2.0, h / 2.0) + else: + rotn_center = center + + matrix = [ + round(math.cos(angle), 15), + round(math.sin(angle), 15), + 0.0, + round(-math.sin(angle), 15), + round(math.cos(angle), 15), + 0.0, + ] + + def transform(x, y, matrix): + (a, b, c, d, e, f) = matrix + return a * x + b * y + c, d * x + e * y + f + + matrix[2], matrix[5] = transform(-rotn_center[0] - 0, -rotn_center[1] - 0, matrix) + matrix[2] += rotn_center[0] + matrix[5] += rotn_center[1] + + if expand: + # calculate output size + xx = [] + yy = [] + for x, y in ((0, 0), (w, 0), (w, h), (0, h)): + x, y = transform(x, y, matrix) + xx.append(x) + yy.append(y) + nw = math.ceil(max(xx)) - math.floor(min(xx)) + nh = math.ceil(max(yy)) - math.floor(min(yy)) + matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix) + w, h = nw, nh + + image = np.rollaxis(image, 2, 0) + dummy = np.ones((1, image.shape[1], image.shape[2]), dtype=image.dtype) + image = np.concatenate((image, dummy), axis=0) + final_offset = np.array([matrix[5], matrix[2]]) + + channel_images = [ + ndimage.interpolation.affine_transform( + x_channel, rotation_matrix, final_offset, output_shape=(h, w), order=3, mode='constant', cval=0 + ) for x_channel in image + ] + image = np.stack(channel_images, axis=0) + image = np.rollaxis(image, 0, 3) + mask = image[:, :, -1:] + image = image[:, :, :-1] + mask = np.tile(mask, (1, 1, image.shape[2])) + fill = np.tile(fill, (image.shape[0], image.shape[1], 1)) + if interpolation == 'nearest': + mask = mask < 0.5 + image[mask] = fill[mask] + else: + image = image * mask + (1.0 - mask) * fill + image = np.asarray(image, dtype=orig_dtype) + image = ops.convert_to_tensor(image) + return image + + +def transform_matrix_offset_center(matrix, x, y): + o_x = float(x) / 2 + 0.5 + o_y = float(y) / 2 + 0.5 + offset_matrix = np.array([[1, 0, o_x], [0, 1, o_y], [0, 0, 1]]) + reset_matrix = np.array([[1, 0, -o_x], [0, 1, -o_y], [0, 0, 1]]) + transform_matrix = np.dot(np.dot(offset_matrix, matrix), reset_matrix) + return transform_matrix + + +def random_shear(image, degrees, interpolation, fill): + + if isinstance(image, (tf.Tensor, np.ndarray)) and len(image.shape) == 3: + image = np.asarray(image) + else: + 'Image should be a 3d tensor or np.ndarray.' 
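+        raise ValueError('Image should be a 3d tensor or np.ndarray.')
+    # random_shear builds a 3x3 affine matrix from two random shear angles,
+    # re-centers it on the image with transform_matrix_offset_center, and applies
+    # it per channel via scipy's affine_transform. An all-ones "dummy" channel is
+    # warped alongside the image and reused as a mask to blend the fill colour
+    # into the regions that fall outside the input.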
+ h, w, c = image.shape[0], image.shape[1], image.shape[2] + + if interpolation not in ('nearest', 'bilinear'): + raise ValueError('Interpolation only support {\'nearest\', \'bilinear\'} .') + + if isinstance(degrees, numbers.Number): + degrees = (-degrees, degrees, 0, 0) + elif isinstance(degrees, (list, tuple)) and (len(degrees) == 2 or len(degrees) == 4): + if len(degrees) == 2: + degrees = (degrees[0], degrees[1], 0, 0) + else: + raise ValueError( + 'degrees should be a single number or a list/tuple with length in (2 ,4).' + 'But got {}'.format(degrees) + ) + + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + + orig_dtype = image.dtype + image = np.asarray(image, dtype=np.float) + shear = [np.random.uniform(degrees[0], degrees[1]), np.random.uniform(degrees[2], degrees[3])] + shear = np.deg2rad(shear) + shear_matrix = np.array( + [[math.cos(shear[1]), math.sin(shear[1]), 0], [math.sin(shear[0]), math.cos(shear[0]), 0], [0, 0, 1]] + ) + transform_matrix = shear_matrix + transform_matrix = transform_matrix_offset_center(transform_matrix, h, w) + + shear_matrix = transform_matrix[:2, :2] + offset = transform_matrix[:2, 2] + image = np.rollaxis(image, 2, 0) + + dummy = np.ones((1, image.shape[1], image.shape[2]), dtype=image.dtype) + image = np.concatenate((image, dummy), axis=0) + + channel_images = [ + ndimage.interpolation.affine_transform(x_channel, shear_matrix, offset, order=3, mode='constant', cval=0) + for x_channel in image + ] + + image = np.stack(channel_images, axis=0) + image = np.rollaxis(image, 0, 3) + mask = image[:, :, -1:] + image = image[:, :, :-1] + mask = np.tile(mask, (1, 1, c)) + fill = np.tile(fill, (h, w, 1)) + if interpolation == 'nearest': + mask = mask < 0.5 + image[mask] = fill[mask] + else: + image = image * mask + (1.0 - mask) * fill + image = np.asarray(image, dtype=orig_dtype) + image = ops.convert_to_tensor(image) + return image + + +def random_shift(image, shift, interpolation, fill): + + if isinstance(image, (tf.Tensor, np.ndarray)) and len(image.shape) == 3: + image = np.asarray(image) + else: + 'Image should be a 3d tensor or np.ndarray.' + h, w, c = image.shape[0], image.shape[1], image.shape[2] + + if interpolation not in ('nearest', 'bilinear'): + raise ValueError('Interpolation only support {\'nearest\', \'bilinear\'} .') + + if not (isinstance(shift, (tuple, list)) and len(shift) == 2): + + raise ValueError('Shift should be a list/tuple with length of 2.' 'But got {}'.format(shift)) + + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' 
+ 'But got {}'.format(fill) + ) + + orig_dtype = image.dtype + image = np.asarray(image, dtype=np.float) + hrg = shift[0] + wrg = shift[1] + tx = -np.random.uniform(-hrg, hrg) * w + ty = -np.random.uniform(-wrg, wrg) * h + + shift_matrix = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]]) + + transform_matrix = transform_matrix_offset_center(shift_matrix, h, w) + shift_matrix = transform_matrix[:2, :2] + offset = transform_matrix[:2, 2] + image = np.rollaxis(image, 2, 0) + + dummy = np.ones((1, image.shape[1], image.shape[2]), dtype=image.dtype) + image = np.concatenate((image, dummy), axis=0) + + channel_images = [ + ndimage.interpolation.affine_transform(x_channel, shift_matrix, offset, order=3, mode='constant', cval=0) + for x_channel in image + ] + + image = np.stack(channel_images, axis=0) + image = np.rollaxis(image, 0, 3) + mask = image[:, :, -1:] + image = image[:, :, :-1] + mask = np.tile(mask, (1, 1, c)) + fill = np.tile(fill, (h, w, 1)) + if interpolation == 'nearest': + mask = mask < 0.5 + image[mask] = fill[mask] + else: + image = image * mask + (1.0 - mask) * fill + image = np.asarray(image, dtype=orig_dtype) + image = ops.convert_to_tensor(image) + return image + + +def random_zoom(image, zoom, interpolation, fill): + + if isinstance(image, (tf.Tensor, np.ndarray)) and len(image.shape) == 3: + image = np.asarray(image) + else: + 'Image should be a 3d tensor or np.ndarray.' + h, w, c = image.shape[0], image.shape[1], image.shape[2] + + if interpolation not in ('nearest', 'bilinear'): + raise ValueError('Interpolation only support {\'nearest\', \'bilinear\'} .') + + if not (isinstance(zoom, (tuple, list)) and len(zoom) == 2): + + raise ValueError('Zoom should be a list/tuple with length of 2.' 'But got {}'.format(zoom)) + if not (0 <= zoom[0] <= zoom[1]): + + raise ValueError('Zoom values should be positive, and zoom[1] should be greater than zoom[0].') + + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + + orig_dtype = image.dtype + image = np.asarray(image, dtype=np.float) + zoom_factor = 1 / np.random.uniform(zoom[0], zoom[1]) + zoom_matrix = np.array([[zoom_factor, 0, 0], [0, zoom_factor, 0], [0, 0, 1]]) + transform_matrix = transform_matrix_offset_center(zoom_matrix, h, w) + zoom_matrix = transform_matrix[:2, :2] + offset = transform_matrix[:2, 2] + + image = np.rollaxis(image, 2, 0) + + dummy = np.ones((1, image.shape[1], image.shape[2]), dtype=image.dtype) + image = np.concatenate((image, dummy), axis=0) + + channel_images = [ + ndimage.interpolation.affine_transform(x_channel, zoom_matrix, offset, order=3, mode='constant', cval=0) + for x_channel in image + ] + + image = np.stack(channel_images, axis=0) + image = np.rollaxis(image, 0, 3) + mask = image[:, :, -1:] + image = image[:, :, :-1] + mask = np.tile(mask, (1, 1, c)) + fill = np.tile(fill, (h, w, 1)) + if interpolation == 'nearest': + mask = mask < 0.5 + image[mask] = fill[mask] + else: + image = image * mask + (1.0 - mask) * fill + image = np.asarray(image, dtype=orig_dtype) + image = ops.convert_to_tensor(image) + return image + + +def random_affine(image, degrees, shift, zoom, shear, interpolation, fill): + + if isinstance(image, (tf.Tensor, np.ndarray)) and len(image.shape) == 3: + image = np.asarray(image) + else: + 'Image should be a 3d tensor or np.ndarray.' 
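+        raise ValueError('Image should be a 3d tensor or np.ndarray.')
+    # random_affine composes one 3x3 matrix per enabled component in a fixed
+    # order (rotation, then shift, then shear, then zoom) and re-centers the
+    # product on the image, so the combined transformation keeps the image
+    # center invariant (matching the RandomAffine class in transforms.py).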
+ h, w, c = image.shape[0], image.shape[1], image.shape[2] + + if isinstance(fill, numbers.Number): + fill = (fill, ) * c + elif not (isinstance(fill, (list, tuple)) and len(fill) == c): + raise ValueError( + 'If fill should be a single number or a list/tuple with length of image channels.' + 'But got {}'.format(fill) + ) + orig_dtype = image.dtype + image = np.asarray(image, dtype=np.float) + theta = np.random.uniform(degrees[0], degrees[1]) + theta = np.deg2rad(theta) + rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]]) + transform_matrix = rotation_matrix + + if shift is not None: + max_dx = float(shift[0] * w) + max_dy = float(shift[1] * h) + tx = -int(round(np.random.uniform(-max_dx, max_dx))) + ty = -int(round(np.random.uniform(-max_dy, max_dy))) + shift_matrix = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]]) + transform_matrix = np.dot(transform_matrix, shift_matrix) + + if shear is not None: + shear_x = shear_y = 0 + shear_x = float(np.random.uniform(shear[0], shear[1])) + if len(shear) == 4: + shear_y = float(np.random.uniform(shear[2], shear[3])) + shear_x = np.deg2rad(shear_x) + shear_y = np.deg2rad(shear_y) + shear_matrix = np.array( + [[math.cos(shear_y), math.sin(shear_y), 0], [math.sin(shear_x), math.cos(shear_x), 0], [0, 0, 1]] + ) + transform_matrix = np.dot(transform_matrix, shear_matrix) + + if zoom is not None: + zoom = 1 / float(np.random.uniform(zoom[0], zoom[1])) + zoom_matrix = np.array([[zoom, 0, 0], [0, zoom, 0], [0, 0, 1]]) + + transform_matrix = np.dot(transform_matrix, zoom_matrix) + + transform_matrix = transform_matrix_offset_center(transform_matrix, h, w) + image = np.rollaxis(image, 2, 0) + finale_affine_matrix = transform_matrix[:2, :2] + finale_offset = transform_matrix[:2, 2] + dummy = np.ones((1, h, w), dtype=image.dtype) + image = np.concatenate((image, dummy), axis=0) + + channel_images = [ + ndimage.interpolation.affine_transform( + x_channel, finale_affine_matrix, finale_offset, order=3, mode='constant', cval=0 + ) for x_channel in image + ] + + image = np.stack(channel_images, axis=0) + image = np.rollaxis(image, 0, 3) + mask = image[:, :, -1:] + image = image[:, :, :-1] + mask = np.tile(mask, (1, 1, c)) + fill = np.tile(fill, (h, w, 1)) + if interpolation == 'nearest': + mask = mask < 0.5 + image[mask] = fill[mask] + else: + image = image * mask + (1.0 - mask) * fill + image = np.asarray(image, dtype=orig_dtype) + image = ops.convert_to_tensor(image) + return image diff --git a/tensorlayer/vision/transforms.py b/tensorlayer/vision/transforms.py new file mode 100644 index 000000000..89f1ca4ef --- /dev/null +++ b/tensorlayer/vision/transforms.py @@ -0,0 +1,1256 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- + +import tensorlayer as tl +from . 
import load_vision_backend as F
+import numbers
+import numpy as np
+
+__all__ = [
+    'Crop',
+    'CentralCrop',
+    'HsvToRgb',
+    'AdjustBrightness',
+    'AdjustContrast',
+    'AdjustHue',
+    'AdjustSaturation',
+    'FlipHorizontal',
+    'FlipVertical',
+    'RgbToGray',
+    'PadToBoundingbox',
+    'Pad',
+    'Normalize',
+    'StandardizePerImage',
+    'RandomBrightness',
+    'RandomContrast',
+    'RandomHue',
+    'RandomSaturation',
+    'RandomCrop',
+    'Resize',
+    'RgbToHsv',
+    'Transpose',
+    'RandomRotation',
+    'RandomShift',
+    'RandomShear',
+    'RandomZoom',
+    'RandomFlipVertical',
+    'RandomFlipHorizontal',
+    'HWC2CHW',
+    'CHW2HWC',
+    'ToTensor',
+    'Compose',
+    'RandomResizedCrop',
+    'RandomAffine',
+    'ColorJitter',
+]
+
+
+class ToTensor(object):
+    """Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.
+
+    Parameters
+    ----------
+    data_format : str
+        Data format of output tensor, should be 'HWC' or 'CHW'. Default: 'HWC'.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.ToTensor(data_format='HWC')
+    >>> image = transform(image)
+    >>> print(image)
+
+    """
+
+    def __init__(self, data_format='HWC'):
+
+        if data_format not in ['CHW', 'HWC']:
+            raise ValueError('data_format should be CHW or HWC. Got {}'.format(data_format))
+
+        self.data_format = data_format
+
+    def __call__(self, image):
+
+        return F.to_tensor(image, self.data_format)
+
+
+class CentralCrop(object):
+    """Crops the given image at the center. If size is given, the image will be cropped to that size.
+    If central_fraction is given, the image will be cropped to (H * central_fraction, W * central_fraction).
+    Size has a higher priority.
+
+    Parameters
+    ----------
+    size : int or sequence of int
+        The output size of the cropped image.
+        If size is an integer, a square crop of size (size, size) is returned.
+        If size is a sequence of length 2, it should be (height, width).
+    central_fraction : float
+        float (0, 1], fraction of size to crop
+
+    Examples
+    ----------
+    With TensorLayer
+
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.CentralCrop(size = (50, 50))
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (50, 50, 3)
+
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.CentralCrop(central_fraction=0.5)
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (112, 112, 3)
+
+    """
+
+    def __init__(self, size=None, central_fraction=None):
+
+        self.central_fraction = central_fraction
+        self.size = size
+
+    def __call__(self, image):
+
+        return F.central_crop(image, self.size, self.central_fraction)
+
+
+class Compose(object):
+    """Composes several transforms together.
+
+    Parameters
+    ----------
+    transforms : list of 'transform' objects
+        list of transforms to compose.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.Compose([tl.vision.transforms.ToTensor(data_format='HWC'), tl.vision.transforms.CentralCrop(size = 100)])
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (100, 100, 3)
+
+    """
+
+    def __init__(self, transforms):
+
+        self.transforms = transforms
+
+    def __call__(self, data):
+
+        for t in self.transforms:
+
+            data = t(data)
+
+        return data
+
+
+class Crop(object):
+    """Crops an image to a specified bounding box. 
+ + Parameters + ---------- + offset_height : int + Vertical coordinate of the top-left corner of the bounding box in image. + offset_width: int + Horizontal coordinate of the top-left corner of the bounding box in image. + target_height: int + Height of the bounding box. + target_width: int + Width of the bounding box. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.Crop(offset_height=10, offset_width=10, target_height=100, target_width=100) + >>> image = transform(image) + >>> print(image) + >>> image shape : (100, 100, 3) + + """ + + def __init__(self, offset_height, offset_width, target_height, target_width): + + self.offset_height = offset_height + self.offset_width = offset_width + self.target_height = target_height + self.target_width = target_width + + def __call__(self, image): + + return F.crop(image, self.offset_height, self.offset_width, self.target_height, self.target_width) + + +class Pad(object): + """Pad the given image on all sides with the given "pad" value. + + Parameters + ---------- + padding : int or sequenece + Padding on each border. + If a single int is provided, this is used to pad all borders. + If sequence of length 2 is provided, this is the padding on left/right and top/bottom respectively. + If a sequence of length 4 is provided, this is the padding for the left, top, right and bottom borders respectively. + padding_value : number or sequenece + Pixel fill value for constant fill. Default is 0. + If a tuple of length 3, it is used to fill R, G, B channels respectively. + This value is only used when the mode is constant. + mode : str + Type of padding. Default is constant. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.Pad(padding=10, padding_value=0, mode='constant') + >>> image = transform(image) + >>> print(image) + >>> image shape : (244, 244, 3) + + """ + + def __init__(self, padding, padding_value=0, mode='constant'): + + self.padding = padding + self.padding_value = padding_value + self.mode = mode + + def __call__(self, image): + + return F.pad(image, self.padding, self.padding_value, self.mode) + + +class Resize(object): + """Resize the input image to the given size. + + Parameters + ---------- + size : int or sequenece + Desired output size. + If size is a sequence like (h, w), output size will be matched to this. + If size is an int, smaller edge of the image will be matched to this number. + i.e, if height > width, then image will be rescaled to (size * height / width, size). + interpolation : str + Interpolation method. Default: 'bilinear'. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.Resize(size = (100,100), interpolation='bilinear') + >>> image = transform(image) + >>> print(image) + >>> image shape : (100, 100, 3) + + """ + + def __init__(self, size, interpolation='bilinear'): + + self.size = size + self.interpolation = interpolation + + def __call__(self, image): + + return F.resize(image, self.size, self.interpolation) + + +class Transpose(object): + """Transpose image(s) by swapping dimension. + + Parameters + ---------- + order : sequenece of int + Desired output dimension order. 
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.Transpose(order=(2, 0, 1))
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (3, 224, 224)
+
+    """
+
+    def __init__(self, order):
+
+        self.order = order
+
+    def __call__(self, image):
+
+        return F.transpose(image, self.order)
+
+
+class HWC2CHW(object):
+    """Transpose an image from shape (H, W, C) to shape (C, H, W).
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.HWC2CHW()
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (3, 224, 224)
+
+    """
+
+    def __call__(self, image):
+
+        return F.hwc_to_chw(image)
+
+
+class CHW2HWC(object):
+    """Transpose an image from shape (C, H, W) to shape (H, W, C).
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(3, 224, 224) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.CHW2HWC()
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (224, 224, 3)
+
+    """
+
+    def __call__(self, image):
+
+        return F.chw_to_hwc(image)
+
+
+class RgbToHsv(object):
+    """Converts an image from RGB to HSV.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.RgbToHsv()
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (224, 224, 3)
+
+    """
+
+    def __call__(self, image):
+
+        return F.rgb_to_hsv(image)
+
+
+class HsvToRgb(object):
+    """Converts an image from HSV to RGB.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.HsvToRgb()
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (224, 224, 3)
+
+    """
+
+    def __call__(self, image):
+
+        return F.hsv_to_rgb(image)
+
+
+class RgbToGray(object):
+    """Converts an image from RGB to grayscale.
+
+    Parameters
+    ----------
+    num_output_channels: int
+        (1 or 3) number of channels desired for output image. Default is 1.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.RgbToGray(num_output_channels=1)
+    >>> image = transform(image)
+    >>> print(image)
+    >>> image shape : (224, 224, 1)
+
+    """
+
+    def __init__(self, num_output_channels=1):
+
+        self.num_output_channels = num_output_channels
+
+    def __call__(self, image):
+
+        return F.rgb_to_gray(image, self.num_output_channels)
+
+
+class AdjustBrightness(object):
+    """Adjust brightness of the image.
+
+    Parameters
+    ----------
+    brightness_factor: float
+        How much to adjust the brightness. Can be any non negative number. 1 gives the original image.
+        Default is 1.
+
+    Examples
+    ----------
+    With TensorLayer
+
+    >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8)
+    >>> transform = tl.vision.transforms.AdjustBrightness(brightness_factor=1)
+    >>> image = transform(image)
+    >>> print(image)
+
+    """
+
+    def __init__(self, brightness_factor=1):
+        self.brightness_factor = brightness_factor
+
+    def __call__(self, image):
+
+        return F.adjust_brightness(image, self.brightness_factor)
+
+
+class AdjustContrast(object):
+    """Adjust contrast of the image.
+
+    Parameters
+    ----------
+    contrast_factor: float
+        How much to adjust the contrast. Can be any non negative number. 1 gives the original image.
+        Default is 1. 
+ + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.AdjustContrast(contrast_factor=1) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, contrast_factor=1): + + self.contrast_factor = contrast_factor + + def __call__(self, image): + + return F.adjust_contrast(image, self.contrast_factor) + + +class AdjustHue(object): + """Adjust hue of the image. + + Parameters + ---------- + hue_factor: float + How much to shift the hue channel. Should be in [-0.5, 0.5]. + 0.5 and -0.5 give complete reversal of hue channel in HSV space in positive and negative direction respectively. + 0 means no shift. Therefore, both -0.5 and 0.5 will give an image with complementary colors while 0 gives the original image. + Default is 0. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.AdjustHue(hue_factor=0) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, hue_factor=0): + + self.hue_factor = hue_factor + + def __call__(self, image): + + return F.adjust_hue(image, self.hue_factor) + + +class AdjustSaturation(object): + """Adjust saturation of the image. + + Parameters + ---------- + saturation_factor: float + How much to adjust the saturation. Can be any non negative number. 1 gives the original image. + Default is 1. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.AdjustSaturation(saturation_factor=1) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, saturation_factor=1): + + self.saturation_factor = saturation_factor + + def __call__(self, image): + + return F.adjust_saturation(image, self.saturation_factor) + + +class FlipHorizontal(object): + """Flip an image horizontally. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.FlipHorizontal() + >>> image = transform(image) + >>> print(image) + + """ + + def __call__(self, image): + + return F.hflip(image) + + +class FlipVertical(object): + """Flip an image vertically. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.FlipVertical() + >>> image = transform(image) + >>> print(image) + + """ + + def __call__(self, image): + + return F.vflip(image) + + +class PadToBoundingbox(object): + """Pad image with the specified height and width to target size. + + Parameters + ---------- + offset_height: int + Number of rows to add on top. + offset_width: int + Number of columns to add on the left. + target_height: int + Height of output image. + target_width: int + Width of output image. + padding_value: int or sequence + value to pad. 
+ + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand( 224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.PadToBoundingbox(offset_height=10, offset_width=10, target_height=300, target_width=300, padding_value=0) + >>> image = transform(image) + >>> print(image) + >>> image shape : (300, 300, 3) + """ + + def __init__(self, offset_height, offset_width, target_height, target_width, padding_value=0): + self.offset_height = offset_height + self.offset_width = offset_width + self.target_height = target_height + self.target_width = target_width + self.padding_value = padding_value + + def __call__(self, image): + + return F.padtoboundingbox( + image, self.offset_height, self.offset_width, self.target_height, self.target_width, self.padding_value + ) + + +class Normalize(object): + """Normalize a tensor image with mean and standard deviation. + + Parameters + ---------- + mean: number or sequence + If mean is a number, mean will be applied for all channels. Sequence of means for each channel. + std: number or sequnece + If std is a number, std will be applied for all channels.Sequence of standard deviations for each channel. + data_format: str + Data format of input image, should be 'HWC' or 'CHW'. Default: 'HWC'. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand( 224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.Normalize(mean = (155.0, 155.0, 155.0), std = (75.0, 75.0, 75.0),data_format='HWC') + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, mean, std, data_format='HWC'): + + self.mean = mean + self.std = std + self.data_format = data_format + + def __call__(self, image): + + return F.normalize(image, self.mean, self.std, self.data_format) + + +class StandardizePerImage(object): + """For each 3-D image x in image, computes (x - mean) / adjusted_stddev, where mean is the average of all values in x. + adjusted_stddev = max(stddev, 1.0/sqrt(N)) is capped away from 0 to protect against division by 0 when handling uniform images. + N is the number of elements in x. stddev is the standard deviation of all values in x + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand( 224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.StandardizePerImage() + >>> image = transform(image) + >>> print(image) + + """ + + def __call__(self, image): + + return F.standardize(image) + + +class RandomBrightness(object): + """Random adjust brightness of the image. + + Parameters + ---------- + brightness_factor: float or sequence + Brightness adjustment factor (default=(1, 1)). + If it is a float, the factor is uniformly chosen from the range [max(0, 1-brightness_factor), 1+brightness_factor]. + If it is a sequence, it should be [min, max] for the range.Should be non negative numbers. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomBrightness(brightness_factor=(0.5, 2)) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, brightness_factor=(1, 1)): + self.brighthness_factor = brightness_factor + + def __call__(self, image): + + return F.random_brightness(image, self.brighthness_factor) + + +class RandomContrast(object): + """Random adjust contrast of the image. + + Parameters + ---------- + contrast_factor: float or sequence + Contrast adjustment factor (default=(1, 1)). 
+ If it is a float, the factor is uniformly chosen from the range [max(0, 1-contrast_factor), 1+contrast_factor]. + If it is a sequence, it should be [min, max] for the range.Should be non negative numbers. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomContrast(contrast_factor=(0.5, 2)) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, contrast_factor=(1, 1)): + + self.contrast_factor = contrast_factor + + def __call__(self, image): + + return F.random_contrast(image, self.contrast_factor) + + +class RandomSaturation(object): + """Random adjust saturation of the image. + + Parameters + ---------- + saturation_factor: float or sequence + Saturation adjustment factor (default=(1, 1)). + If it is a float, the factor is uniformly chosen from the range [max(0, 1-saturation_factor), 1+saturation_factor]. + If it is a sequence, it should be [min, max] for the range.Should be non negative numbers. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomSaturation(saturation_factor=(0.5, 2)) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, saturation_factor=(1, 1)): + + self.saturation_factor = saturation_factor + + def __call__(self, image): + + return F.random_saturation(image, self.saturation_factor) + + +class RandomHue(object): + """Random adjust hue of the image. + + Parameters + ---------- + hue_factor: float or sequence + Hue adjustment factor (default=(0, 0)). + If it is a float, the factor is uniformly chosen from the range [-hue_factor, hue_factor]. + If it is a sequence, it should be [min, max] for the range.Should have 0<= hue <= 0.5 or -0.5 <= min <= max <= 0.5. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomHue(hue_factor=(-0.5, 0.5)) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, hue_factor=(0, 0)): + + self.hue_factor = hue_factor + + def __call__(self, image): + + return F.random_hue(image, self.hue_factor) + + +class RandomCrop(object): + """Crop the given image at a random location. + + Parameters + ---------- + size: int or sequence + Desired output size of the crop. + If size is an int instead of sequence like (h, w), a square crop (size, size) is made. + If provided a sequence of length 1, it will be interpreted as (size[0], size[0]). + padding: int or sequence, optional + Optional padding on each border of the image. + If a single int is provided this is used to pad all borders. + If sequence of length 2 is provided this is the padding on left/right and top/bottom respectively. + If a sequence of length 4 is provided, it is used to pad left, top, right, bottom borders respectively. + Default: 0. + pad_if_needed: boolean + It will pad the image if smaller than the desired size to avoid raising an exception. + Since cropping is done after padding, the padding seems to be done at a random offset. + fill: number or sequence + Pixel fill value for constant fill. Default is 0. + If a tuple of length 3, it is used to fill R, G, B channels respectively. + padding_mode: str + Type of padding. Default is constant. 
+ + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomCrop(size=50, padding=10, pad_if_needed=False, fill=0, padding_mode='constant') + >>> image = transform(image) + >>> print(image) + >>> image shape : (70,70,3) + + """ + + def __init__(self, size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant'): + + self.size = size + self.padding = padding + self.pad_if_needed = pad_if_needed + self.fill = fill + self.padding_mode = padding_mode + + def __call__(self, image): + + return F.random_crop( + image, + size=self.size, + padding=self.padding, + pad_if_needed=self.pad_if_needed, + fill=self.fill, + padding_mode=self.padding_mode, + ) + + +class RandomResizedCrop(object): + """Crop the given image to random size and aspect ratio. + + Parameters + ---------- + size: int or sequence + Desired output size of the crop. + If size is an int instead of sequence like (h, w), a square crop (size, size) is made. + If provided a sequence of length 1, it will be interpreted as (size[0], size[0]). + scale: tuple of float + scale range of the cropped image before resizing, relatively to the origin image. + ratio: tuple of float + aspect ratio range of the cropped image before resizing. + interpolation: str + Type of interpolation. Default is bilinear. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomResizedCrop(size = (100, 100), scale = (0.08, 1.0), ratio = (3./4.,4./3.), interpolation = 'bilinear') + >>> image = transform(image) + >>> print(image) + >>> image shape : (100,100,3) + + """ + + def __init__(self, size, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.), interpolation='bilinear'): + self.size = size + self.scale = scale + self.ratio = ratio + self.interpolation = interpolation + + def __call__(self, image): + + return F.random_resized_crop(image, self.size, self.scale, self.ratio, self.interpolation) + + +class RandomFlipVertical(object): + """Vertically flip the given image randomly with a given probability. + + Parameters + ---------- + prob: float + probability of the image being flipped. Default value is 0.5 + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomFlipVertical(prob = 0.5) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, prob=0.5): + + self.prob = prob + + def __call__(self, image): + + return F.random_vflip(image, self.prob) + + +class RandomFlipHorizontal(object): + """Horizontally flip the given image randomly with a given probability. + + Parameters + ---------- + prob: float + probability of the image being flipped. Default value is 0.5 + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomFlipHorizontal(prob = 0.5) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, prob=0.5): + + self.prob = prob + + def __call__(self, image): + + return F.random_hflip(image, self.prob) + + +class RandomRotation(object): + """Rotate the image by random angle. + + Parameters + ---------- + degrees: number or sequnence + Range of degrees to select from. + If degrees is a number, the range of degrees will be (-degrees, +degrees). 
+ If degrees is a sequence, the range of degrees will (degrees[0], degrees[1]). + interpolation: str + Interpolation method. Default is 'bilinear'. + expand: boolean + If true, expands the output to make it large enough to hold the entire rotated image. + If false or omitted, make the output image the same size as the input image. + Note that the expand flag assumes rotation around the center and no translation. + center: sequence or None + Optional center of rotation, (x, y). Origin is the upper left corner. + Default is the center of the image. + fill: number or sequence + Pixel fill value for the area outside the rotated image. Default is 0. + + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomRotation(degrees=30, interpolation='bilinear', expand=False, center=None, fill=0) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, degrees, interpolation='bilinear', expand=False, center=None, fill=0): + + self.degrees = degrees + self.interpolation = interpolation + self.expand = expand + self.center = center + self.fill = fill + + def __call__(self, image): + + return F.random_rotation(image, self.degrees, self.interpolation, self.expand, self.center, self.fill) + + +class RandomShear(object): + """Shear the image by random angle. + + Parameters + ---------- + degrees: number or sequnence + Range of degrees to select from. + If degrees is a number, a shear parallel to the x axis in the range (-shear, +shear) will be applied. + If shear is a sequence of 2 values a shear parallel to the x axis in the range (shear[0], shear[1]) will be applied. + If shear is a sequence of 4 values, a x-axis shear in (shear[0], shear[1]) and y-axis shear in (shear[2], shear[3]) will be applied. + interpolation: str + Interpolation method. Default is 'bilinear'. + fill: number or sequence + Pixel fill value for the area outside the sheared image. Default is 0. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomShear(degrees=30, interpolation='bilinear', fill=0) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, degrees, interpolation='bilinear', fill=0): + + self.degrees = degrees + self.interpolation = interpolation + self.fill = fill + + def __call__(self, image): + + return F.random_shear(image, self.degrees, self.interpolation, self.fill) + + +class RandomShift(object): + """Shift the image by random translations. + + Parameters + ---------- + shift: list or tuple + Maximum absolute fraction for horizontal and vertical translations. + shift=(a, b), then horizontal shift is randomly sampled in the range -img_width * a < dx < img_width * a. + vertical shift is randomly sampled in the range -img_height * b < dy < img_height * b. + interpolation: str + Interpolation method. Default is 'bilinear'. + fill: number or sequence + Pixel fill value for the area outside the sheared image. Default is 0. 
+ + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomShift(shift=(0.2, 0.2), interpolation='bilinear', fill=0) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, shift, interpolation='bilinear', fill=0): + + self.shift = shift + self.interpolation = interpolation + self.fill = fill + + def __call__(self, image): + + return F.random_shift(image, self.shift, self.interpolation, self.fill) + + +class RandomZoom(object): + """Zoom the image by random scale. + + Parameters + ---------- + zoom: list or tuple + Scaling factor interval, e.g (a, b), then scale is randomly sampled from the range a <= scale <= b. + interpolation: str + Interpolation method. Default is 'bilinear'. + fill: number or sequence + Pixel fill value for the area outside the sheared image. Default is 0. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomZoom(zoom=(0.2, 0.5), interpolation='bilinear', fill=0) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, zoom, interpolation='bilinear', fill=0): + + self.zoom = zoom + self.interpolation = interpolation + self.fill = fill + + def __call__(self, image): + + return F.random_zoom(image, self.zoom, self.interpolation, self.fill) + + +class RandomAffine(object): + """Random affine transformation of the image keeping center invariant. + + Parameters + ---------- + degrees: number or sequnence + Range of degrees to select from. + If degrees is a number, the range of degrees will be (-degrees, +degrees). + If degrees is a sequence, the range of degrees will (degrees[0], degrees[1]). + Set to 0 to deactivate rotations. + shift: sequence or None + Maximum absolute fraction for horizontal and vertical translations. + shift=(a, b), then horizontal shift is randomly sampled in the range -img_width * a < dx < img_width * a. + vertical shift is randomly sampled in the range -img_height * b < dy < img_height * b. + Will not shift by default. + shear: number or sequnence or None + Range of degrees to select from. + If degrees is a number, a shear parallel to the x axis in the range (-shear, +shear) will be applied. + If shear is a sequence of 2 values a shear parallel to the x axis in the range (shear[0], shear[1]) will be applied. + If shear is a sequence of 4 values, a x-axis shear in (shear[0], shear[1]) and y-axis shear in (shear[2], shear[3]) will be applied. + Will not apply shear by default. + zoom: sequence or None + Scaling factor interval, e.g (a, b), then scale is randomly sampled from the range a <= scale <= b. + Will not zoom by default. + interpolation: str + Interpolation method. Default is 'bilinear'. + fill: number or sequence + Pixel fill value for the area outside the sheared image. Default is 0. + + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.RandomAffine(degrees=30, shift=(0.2,0.2), zoom=(0.2, 0.5), shear=30, interpolation='bilinear', fill=0) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, degrees, shift=None, zoom=None, shear=None, interpolation='bilinear', fill=0): + + if isinstance(degrees, numbers.Number): + if degrees < 0: + raise ValueError('If degrees is a single number, it must be positive.' 
'But got {}.'.format(degrees))
+            degrees = [-degrees, degrees]
+        elif not (isinstance(degrees, (list, tuple)) and len(degrees) == 2):
+            raise TypeError('If degrees is a list or tuple, it should be of length 2. ' 'But got {}'.format(degrees))
+
+        self.degrees = tuple(float(x) for x in degrees)
+
+        if shift is not None:
+            if not (isinstance(shift, (list, tuple)) and len(shift) == 2):
+                raise TypeError("shift should be a list or tuple of length 2. " "But got {}.".format(shift))
+
+            for s in shift:
+                if not (0.0 <= s <= 1.0):
+                    raise ValueError('shift values should be between 0 and 1. ' 'But got {}.'.format(shift))
+        self.shift = shift
+
+        if zoom is not None:
+            if not (isinstance(zoom, (list, tuple)) and len(zoom) == 2):
+                raise TypeError("zoom should be a list or tuple of length 2. " "But got {}.".format(zoom))
+
+            if not (0 <= zoom[0] <= zoom[1]):
+                raise ValueError("zoom values should be non negative, and zoom[0] should be less than or equal to zoom[1].")
+
+        self.zoom = zoom
+
+        if shear is not None:
+            if isinstance(shear, numbers.Number):
+                if shear < 0:
+                    raise ValueError("If shear is a single number, it must be positive.")
+                shear = [-shear, shear]
+            elif not (isinstance(shear, (list, tuple)) and len(shear) in (2, 4)):
+                raise TypeError('shear should be a list or tuple of length 2 or 4.')
+            shear = tuple(float(x) for x in shear)
+        self.shear = shear
+
+        self.interpolation = interpolation
+
+        if fill is None:
+            fill = 0
+        elif not isinstance(fill, (list, tuple, numbers.Number)):
+            raise TypeError("Fill should be either a sequence or a number.")
+
+        self.fill = fill
+
+    def __call__(self, image):
+
+        return F.random_affine(image, self.degrees, self.shift, self.zoom, self.shear, self.interpolation, self.fill)
+
+
+class ColorJitter(object):
+    """Randomly change the brightness, contrast, saturation and hue of an image.
+
+    Parameters
+    ----------
+    brightness: float or sequence
+        Brightness adjustment factor (default=(1, 1)).
+        If it is a float, the factor is uniformly chosen from the range [max(0, 1-brightness_factor), 1+brightness_factor].
+        If it is a sequence, it should be [min, max] for the range. Should be non negative numbers.
+    contrast: float or sequence
+        Contrast adjustment factor (default=(1, 1)).
+        If it is a float, the factor is uniformly chosen from the range [max(0, 1-contrast_factor), 1+contrast_factor].
+        If it is a sequence, it should be [min, max] for the range. Should be non negative numbers.
+    saturation: float or sequence
+        Saturation adjustment factor (default=(1, 1)).
+        If it is a float, the factor is uniformly chosen from the range [max(0, 1-saturation_factor), 1+saturation_factor].
+        If it is a sequence, it should be [min, max] for the range. Should be non negative numbers.
+    hue: float or sequence
+        Hue adjustment factor (default=(0, 0)).
+        If it is a float, the factor is uniformly chosen from the range [-hue_factor, hue_factor].
+        If it is a sequence, it should be [min, max] for the range. Should have 0 <= hue <= 0.5 or -0.5 <= min <= max <= 0.5. 
+ + Examples + ---------- + With TensorLayer + + >>> image = (np.random.rand(224, 224, 3) * 255.).astype(np.uint8) + >>> transform = tl.vision.transforms.ColorJitter(brightness=(1,5), contrast=(1,5), saturation=(1,5), hue=(-0.2,0.2)) + >>> image = transform(image) + >>> print(image) + + """ + + def __init__(self, brightness=0, contrast=0, saturation=0, hue=0): + + self.brightness = self._check_input(brightness, 'brightness') + self.contrast = self._check_input(contrast, 'contrast') + self.saturation = self._check_input(saturation, 'saturation') + self.hue = self._check_input(hue, 'hue', center=0, bound=(-0.5, 0.5), clip_first_on_zero=False) + + def _check_input(self, value, name, center=1, bound=(0, float('inf')), clip_first_on_zero=True): + if isinstance(value, numbers.Number): + if value < 0: + raise ValueError("If {} is a single number, it must be non negative.".format(name)) + value = [center - float(value), center + float(value)] + if clip_first_on_zero: + value[0] = max(value[0], 0.0) + elif isinstance(value, (tuple, list)) and len(value) == 2: + if not bound[0] <= value[0] <= value[1] <= bound[1]: + raise ValueError("{} values should be between {}".format(name, bound)) + else: + raise TypeError("{} should be a single number or a list/tuple with lenght 2.".format(name)) + + if value[0] == value[1] == center: + value = None + return value + + @staticmethod + def get_params(brightness, contrast, saturation, hue): + fn_idx = np.random.permutation(np.arange(4)) + + b = None if brightness is None else float(np.random.uniform(brightness[0], brightness[1])) + c = None if contrast is None else float(np.random.uniform(contrast[0], contrast[1])) + s = None if saturation is None else float(np.random.uniform(saturation[0], saturation[1])) + h = None if hue is None else float(np.random.uniform(hue[0], hue[1])) + + return fn_idx, b, c, s, h + + def __call__(self, image): + + fn_idx, brightness_factor, contrast_factor, saturation_factor, hue_factor = \ + self.get_params(self.brightness, self.contrast, self.saturation, self.hue) + + for fn_id in fn_idx: + if fn_id == 0 and brightness_factor is not None: + image = F.adjust_brightness(image, brightness_factor) + elif fn_id == 1 and contrast_factor is not None: + image = F.adjust_contrast(image, contrast_factor) + elif fn_id == 2 and saturation_factor is not None: + image = F.adjust_saturation(image, saturation_factor) + elif fn_id == 3 and hue_factor is not None: + image = F.adjust_hue(image, hue_factor) + + return image diff --git a/tensorlayer/visualize.py b/tensorlayer/visualize.py index 72c1b184c..ad05acffe 100644 --- a/tensorlayer/visualize.py +++ b/tensorlayer/visualize.py @@ -5,9 +5,9 @@ import imageio import numpy as np - import tensorlayer as tl from tensorlayer.lazy_imports import LazyImport +import colorsys, random cv2 = LazyImport("cv2") @@ -16,18 +16,9 @@ # matplotlib.use('Agg') __all__ = [ - 'read_image', - 'read_images', - 'save_image', - 'save_images', - 'draw_boxes_and_labels_to_image', - 'draw_mpii_people_to_image', - 'frame', - 'CNN2d', - 'images2d', - 'tsne_embedding', - 'draw_weights', - 'W', + 'read_image', 'read_images', 'save_image', 'save_images', 'draw_boxes_and_labels_to_image', + 'draw_mpii_people_to_image', 'frame', 'CNN2d', 'images2d', 'tsne_embedding', 'draw_weights', 'W', + 'draw_boxes_and_labels_to_image_with_json' ] @@ -662,3 +653,66 @@ def draw_weights(W=None, second=10, saveable=True, shape=None, name='mnist', fig W = draw_weights + + +def draw_boxes_and_labels_to_image_with_json(image, json_result, class_list, 
save_name=None):
+    """Draw bboxes and class labels on image. Return the image with bboxes.
+
+    Parameters
+    -----------
+    image : numpy.array
+        The RGB image [height, width, channel].
+    json_result : list of dict
+        The object detection result in json format.
+    class_list : list of str
+        For converting ID to string on image.
+    save_name : None or str
+        The name of image file (e.g. image.png); if None, the image is not saved.
+
+    Returns
+    -------
+    numpy.array
+        The image with bboxes drawn.
+
+    References
+    -----------
+    - OpenCV rectangle and putText.
+    - scikit-image.
+
+    """
+    image_h, image_w, _ = image.shape
+    num_classes = len(class_list)
+    hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
+    colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
+    colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), colors))
+    random.seed(0)
+    random.shuffle(colors)
+    random.seed(None)
+    bbox_thick = int(0.6 * (image_h + image_w) / 600)
+    fontScale = 0.5
+
+    for bbox_info in json_result:
+        image_name = bbox_info['image']
+        category_id = bbox_info['category_id']
+        if category_id < 0 or category_id >= num_classes: continue
+        bbox = bbox_info['bbox']  # the order of coordinates is [x1, y1, x2, y2]
+        score = bbox_info['score']
+
+        bbox_color = colors[category_id]
+        c1, c2 = (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3]))
+        cv2.rectangle(image, c1, c2, bbox_color, bbox_thick)
+
+        bbox_mess = '%s: %.2f' % (class_list[category_id], score)
+        t_size = cv2.getTextSize(bbox_mess, 0, fontScale, thickness=bbox_thick // 2)[0]
+        c3 = (c1[0] + t_size[0], c1[1] - t_size[1] - 3)
+        cv2.rectangle(image, c1, (int(c3[0]), int(c3[1])), bbox_color, -1)
+
+        cv2.putText(
+            image, bbox_mess, (c1[0], int(c1[1] - 2)), cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 0, 0),
+            bbox_thick // 2, lineType=cv2.LINE_AA
+        )
+
+    if save_name is not None:
+        save_image(image, save_name)
+
+    return image
diff --git a/examples/text_generation/README.md b/tests/dataflow/__init__.py
similarity index 100%
rename from examples/text_generation/README.md
rename to tests/dataflow/__init__.py
diff --git a/tests/dataflow/test_dataflow_image.py b/tests/dataflow/test_dataflow_image.py
new file mode 100644
index 000000000..dcdf64db4
--- /dev/null
+++ b/tests/dataflow/test_dataflow_image.py
@@ -0,0 +1,279 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os
+import unittest
+
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
+import tensorlayer as tl
+
+from tests.utils import CustomTestCase
+
+
+class Dataflow_Image_Test(CustomTestCase):
+
+    @classmethod
+    def setUpClass(self):
+        self.input_shape = [1, 100, 100, 3]
+        self.input_layer = tl.layers.Input(self.input_shape, name='input_layer')
+        self.input_shape_1 = [100, 100, 3]
+        self.input_layer_1 = tl.layers.Input(self.input_shape_1, name='input_layer_1')
+
+        self.centralcrop_1 = tl.dataflow.image.CentralCrop(self.input_layer, central_fraction=0.5)
+        self.centralcrop_2 = tl.dataflow.image.CentralCrop(self.input_layer, size=60)
+
+        self.hsvtorgb = tl.dataflow.image.HsvToRgb(self.input_layer)
+
+        self.adjustbrightness = tl.dataflow.image.AdjustBrightness(self.input_layer, factor=0.5)
+        self.adjustconstrast = tl.dataflow.image.AdjustContrast(self.input_layer, factor=0.5)
+        self.adjusthue = tl.dataflow.image.AdjustHue(self.input_layer, factor=0.5)
+        self.adjustsaturation = tl.dataflow.image.AdjustSaturation(self.input_layer, factor=0.5)
+
+        self.crop = tl.dataflow.image.Crop(
+            self.input_layer, offset_height=20, 
offset_width=20, target_height=60, target_width=60 + ) + + self.fliphorizontal = tl.dataflow.image.FlipHorizontal(self.input_layer) + self.flipvertical = tl.dataflow.image.FlipVertical(self.input_layer) + + self.rgbtogray = tl.dataflow.image.RgbToGray(self.input_layer) + self.graytorgb = tl.dataflow.image.GrayToRgb(self.rgbtogray) + + self.padtoboundingbox = tl.dataflow.image.PadToBoundingbox( + self.input_layer, offset_height=20, offset_width=20, target_height=150, target_width=150 + ) + + self.pad_1 = tl.dataflow.image.Pad(self.input_layer, padding=10, padding_value=1, mode='constant') + self.pad_2 = tl.dataflow.image.Pad(self.input_layer, padding=(10, 10), mode='REFLECT') + self.pad_3 = tl.dataflow.image.Pad(self.input_layer, padding=(10, 20, 30, 40), mode='SYMMETRIC') + + self.standardization_1 = tl.dataflow.image.Standardization( + self.input_layer, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5) + ) + self.standardization_2 = tl.dataflow.image.Standardization(self.input_layer, channel_mode=False) + self.standardization_3 = tl.dataflow.image.Standardization(self.input_layer, channel_mode=True) + + self.randombrightness = tl.dataflow.image.RandomBrightness(self.input_layer, factor=0.5) + self.randomcontrast = tl.dataflow.image.RandomContrast(self.input_layer, lower=0.2, upper=0.5) + self.randomhue = tl.dataflow.image.RandomHue(self.input_layer, factor=0.5) + self.randomsaturation = tl.dataflow.image.RandomSaturation(self.input_layer, lower=0.2, upper=0.5) + + self.randomcrop_1 = tl.dataflow.image.RandomCrop(self.input_layer, size=50) + self.randomcrop_2 = tl.dataflow.image.RandomCrop(self.input_layer, size=(50, 60)) + + self.resize_1 = tl.dataflow.image.Resize( + self.input_layer, size=46, method='bilinear', preserve_aspect_ratio=False, antialias=True + ) + + self.resize_2 = tl.dataflow.image.Resize( + self.input_layer, size=(32, 45), method='bilinear', preserve_aspect_ratio=True, antialias=False + ) + + self.croporpad = tl.dataflow.image.CropOrPad(self.input_layer, target_height=50, target_width=150) + self.resizeandpad = tl.dataflow.image.ResizeAndPad( + self.input_layer, target_height=50, target_width=150, method='bilinear' + ) + self.rgbtohsv = tl.dataflow.image.RgbToHsv(self.input_layer) + self.transpose = tl.dataflow.image.Transpose(self.input_layer, order=(3, 2, 1, 0)) + self.randomrotation = tl.dataflow.image.RandomRotation( + self.input_layer_1, degrees=60, fill_mode='nearest', fill_value=1 + ) + self.randomshift_1 = tl.dataflow.image.RandomShift( + self.input_layer_1, shift=0.5, fill_mode='nearest', fill_value=0 + ) + self.randomshift_2 = tl.dataflow.image.RandomShift( + self.input_layer_1, shift=(0.5, 0.4), fill_mode='nearest', fill_value=0 + ) + + self.randomshear = tl.dataflow.image.RandomShear( + self.input_layer_1, degree=30, fill_mode='nearest', fill_value=1 + ) + + self.randomzoom_1 = tl.dataflow.image.RandomZoom( + self.input_layer_1, zoom_range=0.5, fill_mode='nearest', fill_value=1 + ) + self.randomzoom_2 = tl.dataflow.image.RandomZoom( + self.input_layer_1, zoom_range=(0.5, 0.4), fill_mode='nearest', fill_value=1 + ) + + self.rescale = tl.dataflow.image.Rescale(self.input_layer, scale=3, offset=4) + self.randomflipvertical = tl.dataflow.image.RandomFlipVertical(self.input_layer) + self.randomfliphorizontal = tl.dataflow.image.RandomFlipHorizontal(self.input_layer) + self.hwc2chw = tl.dataflow.image.HWC2CHW(self.input_layer) + self.chw2hwc = tl.dataflow.image.CHW2HWC(self.hwc2chw) + + @classmethod + def tearDownClass(self): + pass + + def test_centralcrop_1(self): + + 
self.assertEqual(tl.get_tensor_shape(self.centralcrop_1), [1, 50, 50, 3]) + + def test_centralcrop_2(self): + + self.assertEqual(tl.get_tensor_shape(self.centralcrop_2), [1, 60, 60, 3]) + + def test_hsvtorgb(self): + + self.assertEqual(tl.get_tensor_shape(self.hsvtorgb), [1, 100, 100, 3]) + + def test_adjustbrightness(self): + + self.assertEqual(tl.get_tensor_shape(self.adjustbrightness), [1, 100, 100, 3]) + + def test_adjustconstrast(self): + + self.assertEqual(tl.get_tensor_shape(self.adjustconstrast), [1, 100, 100, 3]) + + def test_adjusthue(self): + + self.assertEqual(tl.get_tensor_shape(self.adjusthue), [1, 100, 100, 3]) + + def test_adjustsaturation(self): + + self.assertEqual(tl.get_tensor_shape(self.adjustsaturation), [1, 100, 100, 3]) + + def test_crop(self): + + self.assertEqual(tl.get_tensor_shape(self.crop), [1, 60, 60, 3]) + + def test_fliphorizontal(self): + + self.assertEqual(tl.get_tensor_shape(self.fliphorizontal), [1, 100, 100, 3]) + + def test_flipvertical(self): + + self.assertEqual(tl.get_tensor_shape(self.flipvertical), [1, 100, 100, 3]) + + def test_rgbtogray(self): + + self.assertEqual(tl.get_tensor_shape(self.rgbtogray), [1, 100, 100, 1]) + + def test_graytorgb(self): + + self.assertEqual(tl.get_tensor_shape(self.graytorgb), [1, 100, 100, 3]) + + def test_padtoboundingbox(self): + + self.assertEqual(tl.get_tensor_shape(self.padtoboundingbox), [1, 150, 150, 3]) + + def test_pad_1(self): + + self.assertEqual(tl.get_tensor_shape(self.pad_1), [1, 120, 120, 3]) + + def test_pad_2(self): + + self.assertEqual(tl.get_tensor_shape(self.pad_2), [1, 120, 120, 3]) + + def test_pad_3(self): + + self.assertEqual(tl.get_tensor_shape(self.pad_3), [1, 130, 170, 3]) + + def test_standardization_1(self): + + self.assertEqual(tl.get_tensor_shape(self.standardization_1), [1, 100, 100, 3]) + + def test_standardization_2(self): + + self.assertEqual(tl.get_tensor_shape(self.standardization_2), [1, 100, 100, 3]) + + def test_standardization_3(self): + + self.assertEqual(tl.get_tensor_shape(self.standardization_3), [1, 100, 100, 3]) + + def test_randomcontrast(self): + + self.assertEqual(tl.get_tensor_shape(self.randomcontrast), [1, 100, 100, 3]) + + def test_randomhue(self): + + self.assertEqual(tl.get_tensor_shape(self.randomhue), [1, 100, 100, 3]) + + def test_randomsaturation(self): + + self.assertEqual(tl.get_tensor_shape(self.randomsaturation), [1, 100, 100, 3]) + + def test_randomcrop_1(self): + + self.assertEqual(tl.get_tensor_shape(self.randomcrop_1), [1, 50, 50, 3]) + + def test_randomcrop_2(self): + + self.assertEqual(tl.get_tensor_shape(self.randomcrop_2), [1, 50, 60, 3]) + + def test_resize_1(self): + + self.assertEqual(tl.get_tensor_shape(self.resize_1), [1, 46, 46, 3]) + + def test_resize_2(self): + + self.assertEqual(tl.get_tensor_shape(self.resize_2), [1, 32, 32, 3]) + + def test_croporpad(self): + + self.assertEqual(tl.get_tensor_shape(self.croporpad), [1, 50, 150, 3]) + + def test_resizeandpad(self): + + self.assertEqual(tl.get_tensor_shape(self.resizeandpad), [1, 50, 150, 3]) + + def test_rgbtohsv(self): + + self.assertEqual(tl.get_tensor_shape(self.rgbtohsv), [1, 100, 100, 3]) + + def test_transpose(self): + + self.assertEqual(tl.get_tensor_shape(self.transpose), [3, 100, 100, 1]) + + def test_randomrotation(self): + + self.assertEqual(tl.get_tensor_shape(self.randomrotation), [100, 100, 3]) + + def test_randomshift_1(self): + + self.assertEqual(tl.get_tensor_shape(self.randomshift_1), [100, 100, 3]) + + def test_randomshift_2(self): + + 
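The three `Pad` expectations above follow from simple arithmetic; note the 4-tuple case only yields `[1, 130, 170, 3]` if it is read as `(top, bottom, left, right)`, an ordering inferred from the test itself rather than documented:

```python
# pure-arithmetic check of the Pad shape expectations
h = w = 100
print([h + 10 + 10, w + 10 + 10])  # padding=10 or (10, 10)      -> [120, 120]
print([h + 10 + 20, w + 30 + 40])  # padding=(10, 20, 30, 40)    -> [130, 170]
```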
self.assertEqual(tl.get_tensor_shape(self.randomshift_2), [100, 100, 3]) + + def test_randoshear(self): + + self.assertEqual(tl.get_tensor_shape(self.randomshear), [100, 100, 3]) + + def test_randomzoom_1(self): + + self.assertEqual(tl.get_tensor_shape(self.randomzoom_1), [100, 100, 3]) + + def test_randomzoom_2(self): + + self.assertEqual(tl.get_tensor_shape(self.randomzoom_2), [100, 100, 3]) + + def test_rescale(self): + + self.assertEqual(tl.get_tensor_shape(self.rescale), [1, 100, 100, 3]) + + def test_randomflipvertical(self): + + self.assertEqual(tl.get_tensor_shape(self.randomflipvertical), [1, 100, 100, 3]) + + def test_randomfliphorizontal(self): + + self.assertEqual(tl.get_tensor_shape(self.randomfliphorizontal), [1, 100, 100, 3]) + + def test_hwc2chw(self): + + self.assertEqual(tl.get_tensor_shape(self.hwc2chw), [1, 3, 100, 100]) + + def test_chw2hwc(self): + + self.assertEqual(tl.get_tensor_shape(self.chw2hwc), [1, 100, 100, 3]) + + +if __name__ == '__main__': + + tl.logging.set_verbosity(tl.logging.DEBUG) + + unittest.main() diff --git a/tests/files/test_utils_saveload.py b/tests/files/test_utils_saveload.py index ea51b0ff4..58a1d374a 100644 --- a/tests/files/test_utils_saveload.py +++ b/tests/files/test_utils_saveload.py @@ -4,15 +4,15 @@ import os import unittest +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' + import numpy as np import tensorflow as tf - import tensorlayer as tl from tensorlayer.layers import * from tensorlayer.models import * -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase def basic_static_model(): diff --git a/tests/layers/test_layernode.py b/tests/layers/test_layernode.py deleted file mode 100644 index 957857f9a..000000000 --- a/tests/layers/test_layernode.py +++ /dev/null @@ -1,238 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -import os -import unittest - -import numpy as np -import tensorflow as tf -from tensorflow.python.ops.rnn_cell import LSTMCell - -import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import Model -from tests.utils import CustomTestCase - -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' - - -class LayerNode_Test(CustomTestCase): - - @classmethod - def setUpClass(cls): - pass - - @classmethod - def tearDownClass(cls): - pass - - def test_net1(self): - print('-' * 20, 'test_net1', '-' * 20) - - def get_model(input_shape): - ni = Input(input_shape) - - nii = Conv2d(32, filter_size=(3, 3), strides=(1, 1), name='conv1')(ni) - nn = Dropout(keep=0.9, name='drop1')(nii) - - conv = Conv2d(32, filter_size=(3, 3), strides=(1, 1), name='conv2') - tt = conv(nn) # conv2_node_0 - nn = conv(nn) # conv2_node_1 - - # a branch - na = Conv2d(64, filter_size=(3, 3), strides=(1, 1), name='conv3')(nn) - na = MaxPool2d(name='pool1')(na) - - # b branch - nb = MaxPool2d(name='pool2')(nn) - nb = conv(nb) # conv2_node_2 - - out = Concat(name='concat')([na, nb]) - M = Model(inputs=ni, outputs=[out, nn, nb]) - - gg = conv(nii) # this node will not be added since model fixed - - return M - - net = get_model([None, 24, 24, 3]) - - for k, v in enumerate(net._node_by_depth): - print(k, [x.name for x in v], [x.in_tensors_idxes for x in v]) - - all_node_names = [] - for k, v in enumerate(net._node_by_depth): - all_node_names.extend([x.name for x in v]) - - self.assertNotIn('conv2_node_0', all_node_names) - self.assertNotIn('conv2_node_3', all_node_names) - - self.assertEqual(len(net.all_layers), 8) - print(net.all_layers) - - data = np.random.normal(size=[2, 24, 
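The import reshuffle in `test_utils_saveload.py` above is load-bearing, not cosmetic: TensorFlow's C++ backend reads `TF_CPP_MIN_LOG_LEVEL` once at import time, so the variable must be set before the first `import tensorflow` to have any effect.

```python
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # '3' hides INFO, WARNING and ERROR logs

import tensorflow as tf  # noqa: E402  deliberately imported after the env var is set
```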
24, 3]).astype(np.float32) - out, nn, nb = net(data, is_train=True) - - self.assertEqual(nn.shape, [2, 24, 24, 32]) - self.assertEqual(nb.shape, [2, 12, 12, 32]) - - def test_net2(self): - print('-' * 20, 'test_net2', '-' * 20) - - def get_unstack_model(input_shape): - ni = Input(input_shape) - - nn = Dropout(keep=0.9)(ni) - - a, b, c = UnStack(axis=-1)(nn) - - b = Flatten()(b) - b = Dense(10)(b) - - c = Flatten()(c) - - M = Model(inputs=ni, outputs=[a, b, c]) - return M - - net = get_unstack_model([None, 24, 24, 3]) - - for k, v in enumerate(net._node_by_depth): - print(k, [x.name for x in v], [x.in_tensors_idxes for x in v]) - - data = np.random.normal(size=[2, 24, 24, 3]).astype(np.float32) - out = net(data, is_train=True) - - self.assertEqual(len(out), 3) - - def test_word2vec(self): - print('-' * 20, 'test_word2vec', '-' * 20) - - def get_word2vec(): - vocabulary_size = 800 - batch_size = 10 - embedding_size = 60 - num_sampled = 25 - inputs = tl.layers.Input([batch_size], dtype=tf.int32) - labels = tl.layers.Input([batch_size, 1], dtype=tf.int32) - - emb_net = tl.layers.Word2vecEmbedding( - vocabulary_size=vocabulary_size, - embedding_size=embedding_size, - num_sampled=num_sampled, - activate_nce_loss=True, # nce loss is activated - nce_loss_args={}, - E_init=tl.initializers.random_uniform(minval=-1.0, maxval=1.0), - nce_W_init=tl.initializers.truncated_normal(stddev=float(1.0 / np.sqrt(embedding_size))), - nce_b_init=tl.initializers.constant(value=0.0), - name='word2vec_layer', - ) - emb, nce = emb_net([inputs, labels]) - - model = tl.models.Model(inputs=[inputs, labels], outputs=[emb, nce]) - return model - - net = get_word2vec() - - for k, v in enumerate(net._node_by_depth): - print(k, [x.name for x in v], [x.in_tensors_idxes for x in v]) - - x = tf.ones(shape=(10, ), dtype=tf.int32) - y = tf.ones(shape=(10, 1), dtype=tf.int32) - out = net([x, y], is_train=True) - - self.assertEqual(len(out), 2) - - def test_layerlist(self): - print('-' * 20, 'layerlist', '-' * 20) - - class MyModel(Model): - - def __init__(self): - super(MyModel, self).__init__() - self.layers = LayerList([Dense(50, in_channels=100), Dropout(0.9), Dense(10, in_channels=50)]) - - def forward(self, x): - return self.layers(x) - - net = MyModel() - self.assertEqual(net._nodes_fixed, False) - - data = np.random.normal(size=[4, 100]).astype(np.float32) - out = net(data, is_train=False) - - self.assertEqual(net._nodes_fixed, True) - self.assertEqual(net.layers._nodes_fixed, True) - self.assertEqual(net.layers[0]._nodes_fixed, True) - self.assertEqual(net.layers[1]._nodes_fixed, True) - self.assertEqual(net.layers[2]._nodes_fixed, True) - - def test_ModelLayer(self): - print('-' * 20, 'ModelLayer', '-' * 20) - - def MyModel(): - nii = Input(shape=[None, 100]) - nn = Dense(50, in_channels=100)(nii) - nn = Dropout(0.9)(nn) - nn = Dense(10)(nn) - M = Model(inputs=nii, outputs=nn) - return M - - mlayer = MyModel().as_layer() - - ni = Input(shape=[None, 100]) - nn = mlayer(ni) - nn = Dense(5)(nn) - net = Model(inputs=ni, outputs=nn) - - self.assertEqual(net._nodes_fixed, True) - - data = np.random.normal(size=[4, 100]).astype(np.float32) - out = net(data, is_train=False) - - self.assertEqual(net._nodes_fixed, True) - self.assertEqual(net.all_layers[1]._nodes_fixed, True) - self.assertEqual(net.all_layers[1].model._nodes_fixed, True) - self.assertEqual(net.all_layers[1].model.all_layers[0]._nodes_fixed, True) - - def test_STN(self): - print('-' * 20, 'test STN', '-' * 20) - - def get_model(inputs_shape): - ni = 
Input(inputs_shape) - - ## 1. Localisation network - # use MLP as the localisation net - nn = Flatten()(ni) - nn = Dense(n_units=20, act=tf.nn.tanh)(nn) - nn = Dropout(keep=0.8)(nn) - # you can also use CNN instead for MLP as the localisation net - - ## 2. Spatial transformer module (sampler) - stn = SpatialTransformer2dAffine(out_size=(40, 40), in_channels=20) - # s = stn((nn, ni)) - nn = stn((nn, ni)) - s = nn - - ## 3. Classifier - nn = Conv2d(16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME')(nn) - nn = Conv2d(16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME')(nn) - nn = Flatten()(nn) - nn = Dense(n_units=1024, act=tf.nn.relu)(nn) - nn = Dense(n_units=10, act=tf.identity)(nn) - - M = Model(inputs=ni, outputs=[nn, s]) - return M - - net = get_model([None, 40, 40, 1]) - - inputs = np.random.randn(2, 40, 40, 1).astype(np.float32) - o1, o2 = net(inputs, is_train=True) - self.assertEqual(o1.shape, (2, 10)) - self.assertEqual(o2.shape, (2, 40, 40, 1)) - - self.assertEqual(len(net._node_by_depth), 10) - - -if __name__ == '__main__': - - tl.logging.set_verbosity(tl.logging.DEBUG) - - unittest.main() diff --git a/tests/layers/test_layers_activation.py b/tests/layers/test_layers_activation.py index cb04233b3..fcbe690e6 100644 --- a/tests/layers/test_layers_activation.py +++ b/tests/layers/test_layers_activation.py @@ -4,212 +4,120 @@ import os import unittest -import numpy as np -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Activation_Layer_Test(CustomTestCase): @classmethod - def setUpClass(cls): - cls.data = (10 + 10) * np.random.random(size=[10, 5]).astype(np.float32) - 10 - cls.data2 = (10 + 10) * np.random.random(size=[10, 10, 5]).astype(np.float32) - 10 + def setUpClass(self): + self.inputs = tl.layers.Input([10, 5]) @classmethod - def tearDownClass(cls): + def tearDownClass(self): pass def test_prelu_1(self): - inputs = tl.layers.Input([10, 5]) prelulayer = tl.layers.PRelu(channel_shared=True) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data, is_train=True) + class prelu_model(tl.layers.Module): + def __init__(self): + super(prelu_model, self).__init__() + self.prelu = prelulayer - print(prelulayer) + def forward(self, inputs): + return self.prelu(inputs) + net = prelu_model() - gt = np.zeros(shape=self.data.shape) - for i in range(len(gt)): - for j in range(len(gt[i])): - if self.data[i][j] >= 0: - gt[i][j] = self.data[i][j] - else: - gt[i][j] = prelulayer.alpha_var_constrained.numpy() * self.data[i][j] - - self.assertTrue(np.array_equal(out.numpy(), gt)) + self.assertTrue(tl.get_tensor_shape(net(self.inputs)), [10, 5]) def test_prelu_2(self): - inputs = tl.layers.Input([10, 5]) prelulayer = tl.layers.PRelu(in_channels=5) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data, is_train=True) + prelu = prelulayer(self.inputs) - print(prelulayer) + self.assertTrue(tl.get_tensor_shape(prelu), [10, 5]) - gt = np.zeros(shape=self.data.shape) - for i in range(len(gt)): - for j in range(len(gt[i])): - if self.data[i][j] >= 0: - gt[i][j] = self.data[i][j] - else: - gt[i][j] = prelulayer.alpha_var_constrained.numpy()[j] * self.data[i][j] + def test_prelu6_1(self): + prelu6layer = tl.layers.PRelu6(in_channels=5) + prelu6 = prelu6layer(self.inputs) - self.assertTrue(np.array_equal(out.numpy(), gt)) + 
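The rewritten activation tests above all use one recurring idiom: wrap a layer in a `tl.layers.Module` subclass with a `forward` method, then call the module on a tensor. A minimal standalone sketch of that idiom; the class and variable names here are illustrative only:

```python
import tensorlayer as tl

class PReluWrap(tl.layers.Module):  # illustrative name

    def __init__(self):
        super(PReluWrap, self).__init__()
        self.act = tl.layers.PRelu(channel_shared=True)

    def forward(self, inputs):
        return self.act(inputs)

net = PReluWrap()
out = net(tl.layers.Input([10, 5]))
print(tl.get_tensor_shape(out))  # [10, 5]
```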
self.assertTrue(tl.get_tensor_shape(prelu6), [10, 5]) - def test_prelu_3(self): - inputs = tl.layers.Input([10, 10, 5]) - prelulayer = tl.layers.PRelu(in_channels=5) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data2, is_train=True) - print(prelulayer) + def test_prelu6_2(self): + prelu6layer = tl.layers.PRelu6(channel_shared=True) - gt = np.zeros(shape=self.data2.shape) - for i in range(len(gt)): - for k in range(len(gt[i])): - for j in range(len(gt[i][k])): - if self.data2[i][k][j] >= 0: - gt[i][k][j] = self.data2[i][k][j] - else: - gt[i][k][j] = prelulayer.alpha_var_constrained.numpy()[j] * self.data2[i][k][j] + class prelu6_model(tl.layers.Module): + def __init__(self): + super(prelu6_model, self).__init__() + self.prelu = prelu6layer - self.assertTrue(np.array_equal(out.numpy(), gt)) + def forward(self, inputs): + return self.prelu(inputs) - def test_prelu6_1(self): - inputs = tl.layers.Input([10, 5]) - prelulayer = tl.layers.PRelu6(channel_shared=True) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data, is_train=True) - - print(prelulayer) - - gt = np.zeros(shape=self.data.shape) - for i in range(len(gt)): - for j in range(len(gt[i])): - if self.data[i][j] >= 0 and self.data[i][j] <= 6: - gt[i][j] = self.data[i][j] - elif self.data[i][j] > 6: - gt[i][j] = 6 - else: - gt[i][j] = prelulayer.alpha_var_constrained.numpy() * self.data[i][j] - - self.assertTrue(np.array_equal(out.numpy(), gt)) + net = prelu6_model() - def test_prelu6_2(self): - inputs = tl.layers.Input([10, 5]) - prelulayer = tl.layers.PRelu6(in_channels=5) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data, is_train=True) - - print(prelulayer) - - gt = np.zeros(shape=self.data.shape) - for i in range(len(gt)): - for j in range(len(gt[i])): - if self.data[i][j] >= 0 and self.data[i][j] <= 6: - gt[i][j] = self.data[i][j] - elif self.data[i][j] > 6: - gt[i][j] = 6 - else: - gt[i][j] = prelulayer.alpha_var_constrained.numpy()[j] * self.data[i][j] - - self.assertTrue(np.array_equal(out.numpy(), gt)) - - def test_prelu6_3(self): - inputs = tl.layers.Input([10, 10, 5]) - prelulayer = tl.layers.PRelu6(in_channels=5) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data2, is_train=True) - - print(prelulayer) - - gt = np.zeros(shape=self.data2.shape) - for i in range(len(gt)): - for k in range(len(gt[i])): - for j in range(len(gt[i][k])): - if self.data2[i][k][j] >= 0 and self.data2[i][k][j] <= 6: - gt[i][k][j] = self.data2[i][k][j] - elif self.data2[i][k][j] > 6: - gt[i][k][j] = 6 - else: - gt[i][k][j] = prelulayer.alpha_var_constrained.numpy()[j] * self.data2[i][k][j] - - self.assertTrue(np.array_equal(out.numpy(), gt)) + self.assertTrue(tl.get_tensor_shape(net(self.inputs)), [10, 5]) def test_ptrelu6_1(self): - inputs = tl.layers.Input([10, 5]) - prelulayer = tl.layers.PTRelu6(channel_shared=True) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data, is_train=True) - - print(prelulayer) - - gt = np.zeros(shape=self.data.shape) - for i in range(len(gt)): - for j in range(len(gt[i])): - if self.data[i][j] >= 0 and self.data[i][j] <= 6: - gt[i][j] = self.data[i][j] - elif self.data[i][j] > 6: - gt[i][j] = 6 + prelulayer.alpha_high_constrained.numpy() * (self.data[i][j] - 6) - else: - gt[i][j] = prelulayer.alpha_low_constrained.numpy() * 
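The deleted element-wise loops above encode the PTRelu6 ground truth that the new shape-only tests no longer spell out. For reference, the same definition as a vectorized NumPy sketch:

```python
import numpy as np

def ptrelu6_reference(x, alpha_low, alpha_high):
    # x in [0, 6] -> x;  x > 6 -> 6 + alpha_high * (x - 6);  x < 0 -> alpha_low * x
    return np.where(x < 0, alpha_low * x, np.where(x > 6, 6 + alpha_high * (x - 6), x))

x = np.array([-2.0, 3.0, 9.0])
print(ptrelu6_reference(x, alpha_low=0.2, alpha_high=0.1))  # [-0.4  3.   6.3]
```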
self.data[i][j] - - # FIXME: Figure out why this assert randomly fail in CI. - # self.assertTrue(np.array_equal(out.numpy(), gt)) + ptrelu6layer = tl.layers.PTRelu6(channel_shared=True) + ptrelu6 = ptrelu6layer(self.inputs) + + self.assertTrue(tl.get_tensor_shape(ptrelu6), [10, 5]) def test_ptrelu6_2(self): - inputs = tl.layers.Input([10, 5]) - prelulayer = tl.layers.PTRelu6(in_channels=5) - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data, is_train=True) - - print(prelulayer) - - gt = np.zeros(shape=self.data.shape) - for i in range(len(gt)): - for j in range(len(gt[i])): - if self.data[i][j] >= 0 and self.data[i][j] <= 6: - gt[i][j] = self.data[i][j] - elif self.data[i][j] > 6: - gt[i][j] = 6 + prelulayer.alpha_high_constrained.numpy()[j] * (self.data[i][j] - 6) - else: - gt[i][j] = prelulayer.alpha_low_constrained.numpy()[j] * self.data[i][j] - - self.assertTrue(np.allclose(out.numpy(), gt)) - - def test_ptrelu6_3(self): - inputs = tl.layers.Input([3, 2, 5]) - prelulayer = tl.layers.PTRelu6() - prelu = prelulayer(inputs) - model = tl.models.Model(inputs=inputs, outputs=prelu) - out = model(self.data2, is_train=True) - - print(prelulayer) - - gt = np.zeros(shape=self.data2.shape) - for i in range(len(gt)): - for k in range(len(gt[i])): - for j in range(len(gt[i][k])): - if self.data2[i][k][j] >= 0 and self.data2[i][k][j] <= 6: - gt[i][k][j] = self.data2[i][k][j] - elif self.data2[i][k][j] > 6: - gt[i][k][j] = 6 + prelulayer.alpha_high_constrained.numpy()[j] * (self.data2[i][k][j] - 6) - else: - gt[i][k][j] = prelulayer.alpha_low_constrained.numpy()[j] * self.data2[i][k][j] - - self.assertTrue(np.allclose(out.numpy(), gt)) + ptrelu6layer = tl.layers.PTRelu6(in_channels=5) + + class ptrelu6_model(tl.layers.Module): + def __init__(self): + super(ptrelu6_model, self).__init__() + self.prelu = ptrelu6layer + + def forward(self, inputs): + return self.prelu(inputs) + + net = ptrelu6_model() + + self.assertTrue(tl.get_tensor_shape(net(self.inputs)), [10, 5]) + + def test_lrelu(self): + lrelulayer = tl.layers.LeakyReLU(alpha=0.5) + lrelu = lrelulayer(self.inputs) + + self.assertTrue(tl.get_tensor_shape(lrelu), [5, 10]) + + def test_lrelu6(self): + lrelu6layer = tl.layers.LeakyReLU6(alpha=0.5) + lrelu6 = lrelu6layer(self.inputs) + + self.assertTrue(tl.get_tensor_shape(lrelu6), [5, 10]) + + def test_ltrelu6(self): + ltrelu6layer = tl.layers.LeakyTwiceRelu6() + ltrelu6 = ltrelu6layer(self.inputs) + + self.assertTrue(tl.get_tensor_shape(ltrelu6), [5, 10]) + + def test_swish(self): + swishlayer = tl.layers.Swish() + swish = swishlayer(self.inputs) + + self.assertTrue(tl.get_tensor_shape(swish), [5, 10]) + + def test_hardtanh(self): + hardtanhlayer = tl.layers.HardTanh() + hardtanh = hardtanhlayer(self.inputs) + + self.assertTrue(tl.get_tensor_shape(hardtanh), [5, 10]) + + def test_mish(self): + mishlayer = tl.layers.Mish() + mish = mishlayer(self.inputs) + + self.assertTrue(tl.get_tensor_shape(mish), [5, 10]) if __name__ == '__main__': diff --git a/tests/layers/test_layers_convolution.py b/tests/layers/test_layers_convolution.py index 6787c592a..df2f69c36 100644 --- a/tests/layers/test_layers_convolution.py +++ b/tests/layers/test_layers_convolution.py @@ -4,218 +4,133 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import * -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from 
tests.utils import CustomTestCase class Layer_Convolution_1D_Test(CustomTestCase): @classmethod - def setUpClass(cls): - print("\n#################################") + def setUpClass(self): - cls.batch_size = 8 - cls.inputs_shape = [cls.batch_size, 100, 1] - cls.input_layer = Input(cls.inputs_shape, name='input_layer') + self.batch_size = 8 + self.inputs_shape = [self.batch_size, 100, 1] + self.input_layer = tl.layers.Input(self.inputs_shape, name='input_layer') - cls.n1 = tl.layers.Conv1dLayer(shape=(5, 1, 32), stride=2)(cls.input_layer) + self.conv1dlayer1 = tl.layers.Conv1d(in_channels=1, n_filter=32, filter_size=5, stride=2) + self.n1 = self.conv1dlayer1(self.input_layer) - cls.n2 = tl.layers.Conv1d(n_filter=32, filter_size=5, stride=2)(cls.n1) + self.conv1dlayer2 = tl.layers.Conv1d(in_channels=32, n_filter=32, filter_size=5, stride=2) + self.n2 = self.conv1dlayer2(self.n1) - cls.n3 = tl.layers.DeConv1dLayer( - shape=(5, 64, 32), outputs_shape=(cls.batch_size, 50, 64), strides=(1, 2, 1), name='deconv1dlayer' - )(cls.n2) + self.dconv1dlayer1 = tl.layers.DeConv1d(n_filter=64, in_channels=32, filter_size=5, name='deconv1dlayer') + self.n3 = self.dconv1dlayer1(self.n2) - cls.n4 = tl.layers.SeparableConv1d( - n_filter=32, filter_size=3, strides=2, padding='SAME', act='relu', name='separable_1d' - )(cls.n3) + self.separableconv1d1 = tl.layers.SeparableConv1d(in_channels=1, n_filter=16, filter_size=3, stride=2) + self.n4 = self.separableconv1d1(self.input_layer) - cls.n5 = tl.layers.SubpixelConv1d(scale=2, act=tf.nn.relu, in_channels=32, name='subpixel_1d')(cls.n4) + self.separableconv1d2 = tl.layers.SeparableConv1d( + in_channels=1, n_filter=16, filter_size=3, stride=2, depth_multiplier=4 + ) + self.n5 = self.separableconv1d2(self.input_layer) - cls.model = Model(inputs=cls.input_layer, outputs=cls.n5) - print("Testing Conv1d model: \n", cls.model) + self.separableconv1d3 = tl.layers.SeparableConv1d( + in_channels=1, n_filter=16, filter_size=3, stride=2, depth_multiplier=4, b_init=None + ) + self.n6 = self.separableconv1d3(self.input_layer) @classmethod - def tearDownClass(cls): + def tearDownClass(self): pass - # tf.reset_default_graph() def test_layer_n1(self): - - # self.assertEqual(len(self.n1.all_layers), 2) - # self.assertEqual(len(self.n1.all_params), 2) - # self.assertEqual(self.n1.count_params(), 192) - self.assertEqual(len(self.n1._info[0].layer.all_weights), 2) - self.assertEqual(self.n1.get_shape().as_list()[1:], [50, 32]) + self.assertEqual(len(self.conv1dlayer1.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n1), [self.batch_size, 50, 32]) def test_layer_n2(self): - - # self.assertEqual(len(self.n2.all_layers), 3) - # self.assertEqual(len(self.n2.all_params), 4) - # self.assertEqual(self.n2.count_params(), 5344) - self.assertEqual(len(self.n2._info[0].layer.all_weights), 2) - self.assertEqual(self.n2.get_shape().as_list()[1:], [25, 32]) + self.assertEqual(len(self.conv1dlayer2.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n2), [self.batch_size, 25, 32]) def test_layer_n3(self): - - # self.assertEqual(len(self.n2.all_layers), 3) - # self.assertEqual(len(self.n2.all_params), 4) - # self.assertEqual(self.n2.count_params(), 5344) - self.assertEqual(len(self.n3._info[0].layer.all_weights), 2) - self.assertEqual(self.n3.get_shape().as_list()[1:], [50, 64]) + self.assertEqual(len(self.dconv1dlayer1.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n3), [self.batch_size, 25, 64]) def test_layer_n4(self): - - # 
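The expected Conv1d lengths above (100 -> 50 -> 25) come from the usual SAME-padding rule: output length = ceil(input / stride), independent of filter size. The expected shapes imply SAME is the layer default here.

```python
import math

length = 100
for stride in (2, 2):  # two stacked stride-2 convolutions
    length = math.ceil(length / stride)
    print(length)  # 50, then 25
```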
self.assertEqual(len(self.n2.all_layers), 3) - # self.assertEqual(len(self.n2.all_params), 4) - # self.assertEqual(self.n2.count_params(), 5344) - self.assertEqual(len(self.n4._info[0].layer.all_weights), 3) - self.assertEqual(self.n4.get_shape().as_list()[1:], [25, 32]) + self.assertEqual(len(self.separableconv1d1.all_weights), 3) + self.assertEqual(tl.get_tensor_shape(self.n4), [self.batch_size, 50, 16]) def test_layer_n5(self): + self.assertEqual(len(self.separableconv1d2.all_weights), 3) + self.assertEqual(tl.get_tensor_shape(self.n5), [self.batch_size, 50, 16]) - # self.assertEqual(len(self.n2.all_layers), 3) - # self.assertEqual(len(self.n2.all_params), 4) - # self.assertEqual(self.n2.count_params(), 5344) - self.assertEqual(self.n5.get_shape().as_list()[1:], [50, 16]) - - # def test_layer_n3(self): - # - # self.assertEqual(len(self.n3.all_layers), 4) - # self.assertEqual(len(self.n3.all_params), 7) - # self.assertEqual(self.n3.count_params(), 6496) - # self.assertEqual(self.n3.outputs.get_shape().as_list()[1:], [23, 32]) - - -# FIXME: TF2.0 only supports NHWC now -# class Layer_Convolution_1D_NCW_Test(CustomTestCase): -# -# @classmethod -# def setUpClass(cls): -# print("\n#################################") -# -# cls.batch_size = 8 -# cls.inputs_shape = [cls.batch_size, 1, 100] -# cls.input_layer = Input(cls.inputs_shape, name='input_layer') -# -# cls.n1 = tl.layers.Conv1dLayer( -# shape=(5, 1, 32), stride=2, data_format="NCW" -# )(cls.input_layer) -# cls.n2 = tl.layers.Conv1d( -# n_filter=32, filter_size=5, stride=2, data_format='channels_first' -# )(cls.n1) -# cls.model = Model(inputs=cls.input_layer, outputs=cls.n2) -# print("Testing Conv1d model: \n", cls.model) -# -# # cls.n3 = tl.layers.SeparableConv1d( -# # cls.n2, n_filter=32, filter_size=3, strides=1, padding='VALID', act=tf.nn.relu, name='separable_1d' -# # ) -# -# @classmethod -# def tearDownClass(cls): -# pass -# # tf.reset_default_graph() -# -# def test_layer_n1(self): -# -# # self.assertEqual(len(self.n1.all_layers), 2) -# # self.assertEqual(len(self.n1.all_params), 2) -# # self.assertEqual(self.n1.count_params(), 192) -# self.assertEqual(len(self.n1._info[0].layer.all_weights), 2) -# self.assertEqual(self.n1.get_shape().as_list()[1:], [50, 32]) -# -# def test_layer_n2(self): -# -# # self.assertEqual(len(self.n2.all_layers), 3) -# # self.assertEqual(len(self.n2.all_params), 4) -# # self.assertEqual(self.n2.count_params(), 5344) -# self.assertEqual(len(self.n2._info[0].layer.all_weights), 2) -# self.assertEqual(self.n2.get_shape().as_list()[1:], [25, 32]) -# -# # def test_layer_n3(self): -# # -# # self.assertEqual(len(self.n3.all_layers), 4) -# # self.assertEqual(len(self.n3.all_params), 7) -# # self.assertEqual(self.n3.count_params(), 6496) -# # self.assertEqual(self.n3.outputs.get_shape().as_list()[1:], [23, 32]) + def test_layer_n6(self): + self.assertEqual(len(self.separableconv1d3.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n6), [self.batch_size, 50, 16]) class Layer_Convolution_2D_Test(CustomTestCase): @classmethod - def setUpClass(cls): - print("\n#################################") - - cls.batch_size = 5 - cls.inputs_shape = [cls.batch_size, 400, 400, 3] - cls.input_layer = Input(cls.inputs_shape, name='input_layer') - - cls.n1 = tl.layers.Conv2dLayer( - act=tf.nn.relu, shape=(5, 5, 3, 32), strides=(1, 2, 2, 1), padding='SAME', - b_init=tf.constant_initializer(value=0.0), name='conv2dlayer' - )(cls.input_layer) - - cls.n2 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), 
act=None, name='conv2d')(cls.n1) - - cls.n3 = tl.layers.Conv2d( - n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, b_init=None, name='conv2d_no_bias' - )(cls.n2) - - cls.n4 = tl.layers.DeConv2dLayer( - shape=(5, 5, 32, 32), outputs_shape=(cls.batch_size, 100, 100, 32), strides=(1, 2, 2, 1), - name='deconv2dlayer' - )(cls.n3) - - cls.n5 = tl.layers.DeConv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), name='DeConv2d')(cls.n4) - - cls.n6 = tl.layers.DepthwiseConv2d( - filter_size=(3, 3), strides=(1, 1), dilation_rate=(2, 2), act=tf.nn.relu, depth_multiplier=2, + def setUpClass(self): + + self.batch_size = 5 + self.inputs_shape = [self.batch_size, 400, 400, 3] + self.input_layer = tl.layers.Input(self.inputs_shape, name='input_layer') + + self.conv2dlayer1 = tl.layers.Conv2d( + n_filter=32, in_channels=3, strides=(2, 2), filter_size=(5, 5), padding='SAME', + b_init=tl.initializers.truncated_normal(0.01), name='conv2dlayer' + ) + self.n1 = self.conv2dlayer1(self.input_layer) + + self.conv2dlayer2 = tl.layers.Conv2d( + n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), act=None, name='conv2d' + ) + self.n2 = self.conv2dlayer2(self.n1) + + self.conv2dlayer3 = tl.layers.Conv2d( + in_channels=32, n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tl.ReLU, b_init=None, + name='conv2d_no_bias' + ) + self.n3 = self.conv2dlayer3(self.n2) + + self.dconv2dlayer = tl.layers.DeConv2d( + n_filter=32, in_channels=32, filter_size=(5, 5), strides=(2, 2), name='deconv2dlayer' + ) + self.n4 = self.dconv2dlayer(self.n3) + + self.dwconv2dlayer = tl.layers.DepthwiseConv2d( + in_channels=32, filter_size=(3, 3), strides=(1, 1), dilation_rate=(2, 2), act=tl.ReLU, depth_multiplier=2, name='depthwise' - )(cls.n5) - - cls.n7 = tl.layers.Conv2d( - n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, in_channels=64, name='conv2d2' - )(cls.n6) - - cls.n8 = tl.layers.BinaryConv2d( - n_filter=64, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, in_channels=32, name='binaryconv2d' - )(cls.n7) - - cls.n9 = tl.layers.SeparableConv2d( - n_filter=32, filter_size=(3, 3), strides=(2, 2), act=tf.nn.relu, name='separableconv2d' - )(cls.n8) - - cls.n10 = tl.layers.GroupConv2d(n_filter=64, filter_size=(3, 3), strides=(2, 2), n_group=2, - name='group')(cls.n9) - - cls.n11 = tl.layers.DorefaConv2d( - n_filter=32, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='dorefaconv2d' - )(cls.n10) - - cls.n12 = tl.layers.TernaryConv2d( - n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='ternaryconv2d' - )(cls.n11) - - cls.n13 = tl.layers.QuanConv2d( - n_filter=32, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='quancnn2d' - )(cls.n12) - - cls.n14 = tl.layers.SubpixelConv2d(scale=2, act=tf.nn.relu, name='subpixelconv2d')(cls.n13) - - cls.n15 = tl.layers.QuanConv2dWithBN( - n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='quancnnbn2d' - )(cls.n14) - - cls.model = Model(cls.input_layer, cls.n15) - print("Testing Conv2d model: \n", cls.model) - - # cls.n12 = tl.layers.QuanConv2d(cls.n11, 64, (5, 5), (1, 1), act=tf.nn.relu, padding='SAME', name='quancnn') + ) + self.n5 = self.dwconv2dlayer(self.n4) + + self.separableconv2d = tl.layers.SeparableConv2d( + in_channels=3, filter_size=(3, 3), strides=(2, 2), dilation_rate=(2, 2), act=tl.ReLU, depth_multiplier=3, + name='separableconv2d' + ) + self.n6 = self.separableconv2d(self.input_layer) + + self.groupconv2d = 
tl.layers.GroupConv2d( + in_channels=3, n_filter=18, filter_size=(3, 3), strides=(2, 2), dilation_rate=(3, 3), n_group=3, + act=tl.ReLU, name='groupconv2d' + ) + self.n7 = self.groupconv2d(self.input_layer) + + self.binaryconv2d = tl.layers.BinaryConv2d( + in_channels=3, n_filter=32, filter_size=(3, 3), strides=(2, 2), dilation_rate=(2, 2), act=tl.ReLU, + name='binaryconv2d' + ) + self.n8 = self.binaryconv2d(self.input_layer) + + self.dorefaconv2d = tl.layers.DorefaConv2d( + bitA=2, bitW=8, in_channels=3, n_filter=16, filter_size=(3, 3), strides=(2, 2), dilation_rate=(2, 2), + act=tl.ReLU, name='dorefaconv2d' + ) + self.n9 = self.dorefaconv2d(self.input_layer) @classmethod def tearDownClass(cls): @@ -223,274 +138,79 @@ def tearDownClass(cls): # tf.reset_default_graph() def test_layer_n1(self): - - # self.assertEqual(len(self.n1.all_layers), 2) - # self.assertEqual(len(self.n1.all_params), 2) - # self.assertEqual(self.n1.count_params(), 2432) - self.assertEqual(len(self.n1._info[0].layer.all_weights), 2) - self.assertEqual(self.n1.get_shape().as_list()[1:], [200, 200, 32]) + self.assertEqual(len(self.conv2dlayer1.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n1), [self.batch_size, 200, 200, 32]) def test_layer_n2(self): - - # self.assertEqual(len(self.n2.all_layers), 3) - # self.assertEqual(len(self.n2.all_params), 4) - # self.assertEqual(self.n2.count_params(), 11680) - self.assertEqual(len(self.n2._info[0].layer.all_weights), 2) - self.assertEqual(self.n2.get_shape().as_list()[1:], [100, 100, 32]) + self.assertEqual(len(self.conv2dlayer2.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n2), [self.batch_size, 100, 100, 32]) def test_layer_n3(self): - - # self.assertEqual(len(self.n3.all_layers), 4) - # self.assertEqual(len(self.n3.all_params), 5) - # self.assertEqual(self.n3.count_params(), 20896) - self.assertEqual(len(self.n3._info[0].layer.all_weights), 1) # b_init is None - self.assertEqual(self.n3.get_shape().as_list()[1:], [50, 50, 32]) + self.assertEqual(len(self.conv2dlayer3.all_weights), 1) # b_init is None + self.assertEqual(tl.get_tensor_shape(self.n3), [self.batch_size, 50, 50, 32]) def test_layer_n4(self): - - # self.assertEqual(len(self.n4.all_layers), 5) - # self.assertEqual(len(self.n4.all_params), 7) - # self.assertEqual(self.n4.count_params(), 46528) - self.assertEqual(len(self.n4._info[0].layer.all_weights), 2) - self.assertEqual(self.n4.get_shape().as_list()[1:], [100, 100, 32]) + self.assertEqual(len(self.dconv2dlayer.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n4), [self.batch_size, 100, 100, 32]) def test_layer_n5(self): - - # self.assertEqual(len(self.n5.all_layers), 6) - # self.assertEqual(len(self.n5.all_params), 9) - # self.assertEqual(self.n5.count_params(), 55776) - self.assertEqual(len(self.n5._info[0].layer.all_weights), 2) - self.assertEqual(self.n5.get_shape().as_list()[1:], [200, 200, 32]) + self.assertEqual(len(self.dwconv2dlayer.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n5), [self.batch_size, 100, 100, 64]) def test_layer_n6(self): - - # self.assertEqual(len(self.n6.all_layers), 7) - # self.assertEqual(len(self.n6.all_params), 11) - # self.assertEqual(self.n6.count_params(), 56416) - self.assertEqual(len(self.n6._info[0].layer.all_weights), 2) - self.assertEqual(self.n6.get_shape().as_list()[1:], [200, 200, 64]) + self.assertEqual(len(self.separableconv2d.all_weights), 3) + self.assertEqual(tl.get_tensor_shape(self.n6), [self.batch_size, 198, 198, 32]) def test_layer_n7(self): - - # 
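The setup above shows the TL3-style eager construction these tests standardize on: pass `in_channels` so the layer can build its weights immediately, then call it on a tensor, with no `Model` graph in between. A sketch using only calls already present above:

```python
import tensorlayer as tl

x = tl.layers.Input([5, 400, 400, 3])
conv = tl.layers.Conv2d(n_filter=32, in_channels=3, filter_size=(5, 5), strides=(2, 2), padding='SAME')
y = conv(x)

print(len(conv.all_weights))   # 2: one kernel plus one bias
print(tl.get_tensor_shape(y))  # [5, 200, 200, 32]
```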
self.assertEqual(len(self.n7.all_layers), 8) - # self.assertEqual(len(self.n7.all_params), 13) - # self.assertEqual(self.n7.count_params(), 74880) - self.assertEqual(len(self.n7._info[0].layer.all_weights), 2) - self.assertEqual(self.n7.get_shape().as_list()[1:], [100, 100, 32]) + self.assertEqual(len(self.groupconv2d.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n7), [self.batch_size, 200, 200, 18]) def test_layer_n8(self): - - # self.assertEqual(len(self.n7.all_layers), 8) - # self.assertEqual(len(self.n7.all_params), 13) - # self.assertEqual(self.n7.count_params(), 74880) - self.assertEqual(len(self.n8._info[0].layer.all_weights), 2) - self.assertEqual(self.n8.get_shape().as_list()[1:], [50, 50, 64]) + self.assertEqual(len(self.binaryconv2d.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n8), [self.batch_size, 198, 198, 32]) def test_layer_n9(self): - - # self.assertEqual(len(self.n7.all_layers), 8) - # self.assertEqual(len(self.n7.all_params), 13) - # self.assertEqual(self.n7.count_params(), 74880) - self.assertEqual(len(self.n9._info[0].layer.all_weights), 3) - self.assertEqual(self.n9.get_shape().as_list()[1:], [24, 24, 32]) - - def test_layer_n10(self): - # self.assertEqual(len(self.n7.all_layers), 8) - # self.assertEqual(len(self.n7.all_params), 13) - # self.assertEqual(self.n7.count_params(), 74880) - self.assertEqual(len(self.n10._info[0].layer.all_weights), 2) - self.assertEqual(self.n10.get_shape().as_list()[1:], [12, 12, 64]) - - def test_layer_n11(self): - # self.assertEqual(len(self.n7.all_layers), 8) - # self.assertEqual(len(self.n7.all_params), 13) - # self.assertEqual(self.n7.count_params(), 74880) - self.assertEqual(len(self.n11._info[0].layer.all_weights), 2) - self.assertEqual(self.n11.get_shape().as_list()[1:], [12, 12, 32]) - - def test_layer_n12(self): - # self.assertEqual(len(self.n7.all_layers), 8) - # self.assertEqual(len(self.n7.all_params), 13) - # self.assertEqual(self.n7.count_params(), 74880) - self.assertEqual(len(self.n12._info[0].layer.all_weights), 2) - self.assertEqual(self.n12.get_shape().as_list()[1:], [12, 12, 64]) - - def test_layer_n13(self): - # self.assertEqual(len(self.n7.all_layers), 8) - # self.assertEqual(len(self.n7.all_params), 13) - # self.assertEqual(self.n7.count_params(), 74880) - self.assertEqual(len(self.n13._info[0].layer.all_weights), 2) - self.assertEqual(self.n13.get_shape().as_list()[1:], [12, 12, 32]) - - def test_layer_n14(self): - self.assertEqual(self.n14.get_shape().as_list()[1:], [24, 24, 8]) - - def test_layer_n15(self): - self.assertEqual(len(self.n15._info[0].layer.all_weights), 5) - self.assertEqual(self.n15.get_shape().as_list()[1:], [24, 24, 64]) - - # def test_layer_n8(self): - # - # self.assertEqual(len(self.n8.all_layers), 9) - # self.assertEqual(len(self.n8.all_params), 15) - # self.assertEqual(self.n8.count_params(), 79520) - # self.assertEqual(self.n8.outputs.get_shape().as_list()[1:], [50, 50, 32]) - # - # def test_layer_n9(self): - # - # self.assertEqual(len(self.n9.all_layers), 10) - # self.assertEqual(len(self.n9.all_params), 18) - # self.assertEqual(self.n9.count_params(), 80864) - # self.assertEqual(self.n9.outputs.get_shape().as_list()[1:], [48, 48, 32]) - # - # def test_layer_n10(self): - # - # self.assertEqual(len(self.n10.all_layers), 11) - # self.assertEqual(len(self.n10.all_params), 20) - # self.assertEqual(self.n10.count_params(), 132128) - # self.assertEqual(self.n10.outputs.get_shape().as_list()[1:], [48, 48, 64]) - # - # def test_layer_n11(self): - # - # 
self.assertEqual(len(self.n11.all_layers), 12) - # self.assertEqual(len(self.n11.all_params), 22) - # self.assertEqual(self.n11.count_params(), 150592) - # self.assertEqual(self.n11.outputs.get_shape().as_list()[1:], [96, 96, 32]) - # - # def test_layer_n12(self): - # - # self.assertEqual(len(self.n12.all_layers), 13) - # self.assertEqual(len(self.n12.all_params), 24) - # self.assertEqual(self.n12.count_params(), 201856) - # self.assertEqual(self.n12.outputs.get_shape().as_list()[1:], [96, 96, 64]) + self.assertEqual(len(self.dorefaconv2d.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n9), [self.batch_size, 200, 200, 16]) class Layer_Convolution_3D_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): print("\n#################################") - cls.batch_size = 5 - cls.inputs_shape = [cls.batch_size, 20, 20, 20, 3] - cls.input_layer = Input(cls.inputs_shape, name='input_layer') + self.batch_size = 5 + self.inputs_shape = [self.batch_size, 20, 20, 20, 3] + self.input_layer = tl.layers.Input(self.inputs_shape, name='input_layer') - cls.n1 = tl.layers.Conv3dLayer(shape=(2, 2, 2, 3, 32), strides=(1, 2, 2, 2, 1))(cls.input_layer) + self.conv3dlayer1 = tl.layers.Conv3d(n_filter=32, in_channels=3, filter_size=(2, 2, 2), strides=(2, 2, 2)) + self.n1 = self.conv3dlayer1(self.input_layer) - cls.n2 = tl.layers.DeConv3dLayer( - shape=(2, 2, 2, 128, 32), outputs_shape=(cls.batch_size, 20, 20, 20, 128), strides=(1, 2, 2, 2, 1) - )(cls.n1) + self.deconv3dlayer = tl.layers.DeConv3d(n_filter=128, in_channels=32, filter_size=(2, 2, 2), strides=(2, 2, 2)) + self.n2 = self.deconv3dlayer(self.n1) - cls.n3 = tl.layers.Conv3d( - n_filter=64, filter_size=(3, 3, 3), strides=(3, 3, 3), act=tf.nn.relu, b_init=None, in_channels=128, + self.conv3dlayer2 = tl.layers.Conv3d( + n_filter=64, in_channels=128, filter_size=(3, 3, 3), strides=(3, 3, 3), act=tl.ReLU, b_init=None, name='conv3d_no_bias' - )(cls.n2) - - cls.n4 = tl.layers.DeConv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2))(cls.n3) - - cls.model = Model(inputs=cls.input_layer, outputs=cls.n4) - print("Testing Conv3d model: \n", cls.model) + ) + self.n3 = self.conv3dlayer2(self.n2) @classmethod - def tearDownClass(cls): + def tearDownClass(self): pass - # tf.reset_default_graph() def test_layer_n1(self): - - # self.assertEqual(len(self.n1.all_layers), 2) - # self.assertEqual(len(self.n1.all_params), 2) - # self.assertEqual(self.n1.count_params(), 800) - self.assertEqual(len(self.n1._info[0].layer.all_weights), 2) - self.assertEqual(self.n1.get_shape().as_list()[1:], [10, 10, 10, 32]) + self.assertEqual(len(self.conv3dlayer1.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n1), [self.batch_size, 10, 10, 10, 32]) def test_layer_n2(self): - - # self.assertEqual(len(self.n2.all_layers), 3) - # self.assertEqual(len(self.n2.all_params), 4) - # self.assertEqual(self.n2.count_params(), 33696) - self.assertEqual(len(self.n2._info[0].layer.all_weights), 2) - self.assertEqual(self.n2.get_shape().as_list()[1:], [20, 20, 20, 128]) + self.assertEqual(len(self.deconv3dlayer.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.n2), [self.batch_size, 20, 20, 20, 128]) def test_layer_n3(self): - - # self.assertEqual(len(self.n3.all_layers), 4) - # self.assertEqual(len(self.n3.all_params), 6) - # self.assertEqual(self.n3.count_params(), 144320) - self.assertEqual(len(self.n3._info[0].layer.all_weights), 1) # b_init is None - self.assertEqual(self.n3.get_shape().as_list()[1:], [7, 7, 7, 64]) - - def 
test_layer_n4(self): - - # self.assertEqual(len(self.n3.all_layers), 4) - # self.assertEqual(len(self.n3.all_params), 6) - # self.assertEqual(self.n3.count_params(), 144320) - self.assertEqual(len(self.n4._info[0].layer.all_weights), 2) - self.assertEqual(self.n4.get_shape().as_list()[1:], [14, 14, 14, 32]) - - -# class Layer_DeformableConvolution_Test(CustomTestCase): -# -# @classmethod -# def setUpClass(cls): -# -# cls.batch_size = 5 -# cls.inputs_shape = [cls.batch_size, 299, 299, 3] -# cls.input_layer = Input(cls.inputs_shape, name='input_layer') -# -# offset1 = tl.layers.Conv2d( -# 18, (3, 3), (1, 1), act=tf.nn.relu, padding='SAME', name='offset1' -# )(cls.input_layer) -# cls.net1 = tl.layers.DeformableConv2d( -# offset1, 32, (3, 3), act=tf.nn.relu, name='deformable1' -# )(cls.input_layer) -# -# offset2 = tl.layers.Conv2d( -# 18, (3, 3), (1, 1), act=tf.nn.relu, padding='SAME', name='offset2' -# )(cls.net1) -# cls.net2 = tl.layers.DeformableConv2d( -# offset2, 64, (3, 3), act=tf.nn.relu, name='deformable2' -# )(cls.net1) -# -# @classmethod -# def tearDownClass(cls): -# pass -# -# def test_layer_n1(self): -# -# self.assertEqual(len(self.net1.all_layers), 2) -# self.assertEqual(len(self.net1.all_params), 2) -# self.assertEqual(self.net1.count_params(), 896) -# self.assertEqual(self.net1.outputs.get_shape().as_list()[1:], [299, 299, 32]) -# -# def test_layer_n2(self): -# -# self.assertEqual(len(self.net2.all_layers), 3) -# self.assertEqual(len(self.net2.all_params), 4) -# self.assertEqual(self.net2.count_params(), 19392) -# self.assertEqual(self.net2.outputs.get_shape().as_list()[1:], [299, 299, 64]) - - -class Exception_test(CustomTestCase): - - @classmethod - def setUpClass(cls): - print("##### begin testing exception in activation #####") - - def test_exception(cls): - - cls.batch_size = 5 - cls.inputs_shape = [cls.batch_size, 400, 400, 3] - cls.input_layer = Input(cls.inputs_shape, name='input_layer') - - try: - cls.n1 = tl.layers.Conv2dLayer( - act='activation', shape=(5, 5, 3, 32), strides=(1, 2, 2, 1), padding='SAME', - b_init=tf.constant_initializer(value=0.0), name='conv2dlayer' - )(cls.input_layer) - except Exception as e: - cls.assertIsInstance(e, Exception) - print(e) + self.assertEqual(len(self.conv3dlayer2.all_weights), 1) # b_init is None + self.assertEqual(tl.get_tensor_shape(self.n3), [self.batch_size, 7, 7, 7, 64]) if __name__ == '__main__': diff --git a/tests/layers/test_layers_core_act.py b/tests/layers/test_layers_core_act.py index 549a192ab..71a85d787 100644 --- a/tests/layers/test_layers_core_act.py +++ b/tests/layers/test_layers_core_act.py @@ -3,103 +3,98 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import * -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Convolution_2D_Test(CustomTestCase): @classmethod - def setUpClass(cls): - print("##### begin testing activation #####") - - @classmethod - def tearDownClass(cls): - pass - # tf.reset_default_graph() - - def test_layer_core_act(cls): - - cls.batch_size = 5 - cls.inputs_shape = [cls.batch_size, 400, 400, 3] - cls.input_layer = Input(cls.inputs_shape, name='input_layer') + def setUpClass(self): + self.batch_size = 5 + self.inputs_shape = [self.batch_size, 400, 400, 3] + self.input_layer = tl.layers.Input(self.inputs_shape, name='input_layer') + + self.conv2dlayer1 = tl.layers.Conv2d(n_filter=32, 
in_channels=3, act=tl.ReLU, filter_size=(5, 5), + strides=(2, 2), + padding='SAME', b_init=tl.initializers.constant(value=0.0), + name='conv2dlayer' + ) + self.n1 = self.conv2dlayer1(self.input_layer) + + self.conv2dlayer2 = tl.layers.Conv2d(n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), + act="relu", name='conv2d') + self.n2 = self.conv2dlayer2(self.n1) + + self.conv2dlayer3 = tl.layers.Conv2d(n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), + act="leaky_relu", b_init=None) + self.n3 = self.conv2dlayer3(self.n2) + + self.conv2dlayer4 = tl.layers.Conv2d(n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), + act="lrelu", b_init=None) + self.n4 = self.conv2dlayer4(self.n3) + + self.conv2dlayer5 = tl.layers.Conv2d(n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), + act="sigmoid") + self.n5 = self.conv2dlayer5(self.n4) + + self.conv2dlayer6 = tl.layers.Conv2d(n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), + act="tanh") + self.n6 = self.conv2dlayer6(self.n5) + + self.conv2dlayer7 = tl.layers.Conv2d( + n_filter=32, filter_size=(3, 3), strides=(2, 2), act="leaky_relu0.22", in_channels=32 + ) + self.n7 = self.conv2dlayer7(self.n6) - cls.n1 = tl.layers.Conv2dLayer( - act=tf.nn.relu, shape=(5, 5, 3, 32), strides=(1, 2, 2, 1), padding='SAME', - b_init=tf.constant_initializer(value=0.0), name='conv2dlayer' - )(cls.input_layer) + self.conv2dlayer8 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="lrelu0.22", + in_channels=32) + self.n8 = self.conv2dlayer8(self.n7) - cls.n2 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="relu", name='conv2d')(cls.n1) + self.conv2dlayer9 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="softplus", + in_channels=32) + self.n9 = self.conv2dlayer9(self.n8) - cls.n3 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="leaky_relu", - b_init=None)(cls.n2) + self.conv2dlayer10 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="relu6", + in_channels=32) + self.n10 = self.conv2dlayer10(self.n9) - cls.n4 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="lrelu", b_init=None)(cls.n2) + @classmethod + def tearDownClass(self): + pass - cls.n5 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="sigmoid", - in_channels=32)(cls.n4) + def test_relu(self): + self.assertEqual(tl.get_tensor_shape(self.n1), [5, 200, 200, 32]) - cls.n6 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="tanh", in_channels=32)(cls.n5) + def test_relu_str(self): + self.assertEqual(tl.get_tensor_shape(self.n2), [5, 100, 100, 32]) - cls.n7 = tl.layers.Conv2d( - n_filter=32, filter_size=(3, 3), strides=(2, 2), act="leaky_relu0.22", in_channels=32 - )(cls.n6) + def test_leaky_relu_str(self): + self.assertEqual(tl.get_tensor_shape(self.n3), [5, 50, 50, 32]) - cls.n8 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="lrelu0.22", - in_channels=32)(cls.n7) + def test_lrelu_str(self): + self.assertEqual(tl.get_tensor_shape(self.n4), [5, 25, 25, 32]) - cls.n9 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="softplus", - in_channels=32)(cls.n8) + def test_sigmoid_str(self): + self.assertEqual(tl.get_tensor_shape(self.n5), [5, 13, 13, 32]) - cls.n10 = tl.layers.Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), act="relu6", in_channels=32)(cls.n9) + def test_tanh_str(self): + self.assertEqual(tl.get_tensor_shape(self.n6), 
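Besides callables such as `tl.ReLU`, the tests above pass `act` as a plain string, and the `leaky_relu0.22` / `lrelu0.22` forms appear to carry the negative slope in the numeric suffix (the deleted exception test for the malformed `leaky_relu0.2x` points the same way). Treat that parsing rule as inferred from the tests rather than documented:

```python
import tensorlayer as tl

# name only
conv_a = tl.layers.Conv2d(n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), act="relu")
# name plus slope: presumably resolved to a leaky ReLU with alpha=0.22
conv_b = tl.layers.Conv2d(n_filter=32, in_channels=32, filter_size=(3, 3), strides=(2, 2), act="leaky_relu0.22")
```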
[5, 7, 7, 32]) - cls.model = Model(cls.input_layer, cls.n8) + def test_leaky_relu_float_str(self): + self.assertEqual(tl.get_tensor_shape(self.n7), [5, 4, 4, 32]) + def test_lrelu_float_str(self): + self.assertEqual(tl.get_tensor_shape(self.n8), [5, 2, 2, 32]) -class Exception_test(CustomTestCase): + def test_softplus_str(self): + self.assertEqual(tl.get_tensor_shape(self.n9), [5, 1, 1, 32]) - @classmethod - def setUpClass(cls): - print("##### begin testing exception in activation #####") - - def test_exception(cls): - - cls.batch_size = 5 - cls.inputs_shape = [cls.batch_size, 400, 400, 3] - cls.input_layer = Input(cls.inputs_shape, name='input_layer') - - try: - cls.n1 = tl.layers.Conv2dLayer( - act='activation', shape=(5, 5, 3, 32), strides=(1, 2, 2, 1), padding='SAME', - b_init=tf.constant_initializer(value=0.0), name='conv2dlayer' - )(cls.input_layer) - except Exception as e: - cls.assertIsInstance(e, Exception) - print(e) - - try: - cls.n2 = tl.layers.Conv2dLayer( - act='leaky_relu0.2x', shape=(5, 5, 3, 32), strides=(1, 2, 2, 1), padding='SAME', - b_init=tf.constant_initializer(value=0.0), name='conv2dlayer' - )(cls.input_layer) - except Exception as e: - cls.assertIsInstance(e, Exception) - print(e) - - try: - cls.n3 = tl.layers.Conv2dLayer( - act='lrelu0.2x', shape=(5, 5, 3, 32), strides=(1, 2, 2, 1), padding='SAME', - b_init=tf.constant_initializer(value=0.0), name='conv2dlayer' - )(cls.input_layer) - except Exception as e: - cls.assertIsInstance(e, Exception) - print(e) + def test_relu6_str(self): + self.assertEqual(tl.get_tensor_shape(self.n10), [5, 1, 1, 32]) if __name__ == '__main__': diff --git a/tests/layers/test_layers_core_basedense_dropout.py b/tests/layers/test_layers_core_basedense_dropout.py index c3ecfebc5..a926a745b 100644 --- a/tests/layers/test_layers_core_basedense_dropout.py +++ b/tests/layers/test_layers_core_basedense_dropout.py @@ -4,168 +4,69 @@ import os import unittest -import numpy as np -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import * -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Core_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): + + self.batch_size = 8 - cls.batch_size = 8 + self.inputs_shape = [self.batch_size, 784] + self.input = tl.layers.Input(self.inputs_shape) + self.dense1 = tl.layers.Dense(n_units=800, act=tl.ReLU, in_channels=784, name='test_dense') + self.n1 = self.dense1(self.input) - # ============== Layer ============== + self.dropout1 = tl.layers.Dropout(keep=0.8) + self.n2 = self.dropout1(self.n1) - cls.base_layer = Layer(what=None) + self.dense2 = tl.layers.Dense(n_units=10, act='relu', b_init=None, in_channels=800) + self.n3 = self.dense2(self.n2) - # ============== DenseLayer ============== + self.dense3 = tl.layers.Dense(n_units=10, act='relu', b_init=None, in_channels=10) + self.n4 = self.dense3(self.n3) - cls.inputs_shape = [None, 784] - cls.innet = Input(cls.inputs_shape) - cls.dense1 = Dense(n_units=800, act=tf.nn.relu, in_channels=784, name='test_dense')(cls.innet) - cls.dropout1 = Dropout(keep=0.8)(cls.dense1) - cls.dense2 = Dense(n_units=10, act=tf.nn.relu, b_init=None)(cls.dropout1) - cls.dense3 = Dense(n_units=10, act=tf.nn.relu, b_init=None) - cls.concat = Concat(concat_dim=-1)([cls.dense2, cls.dropout1]) + self.concat = tl.layers.Concat(concat_dim=-1)([self.n2, self.n3]) + + class 
get_model(tl.layers.Module): + def __init__(self): + super(get_model, self).__init__() + self.layer1 = tl.layers.Dense(n_units=800, act=tl.ReLU, in_channels=784, name='test_dense') + self.dp = tl.layers.Dropout(keep=0.8) + self.layer2 = tl.layers.Dense(n_units=10, act='relu', b_init=None, in_channels=800) + self.layer3 = tl.layers.Dense(n_units=10, act='relu', b_init=None, in_channels=10) + + def forward(self, inputs): + z = self.layer1(inputs) + z = self.dp(z) + z = self.layer2(z) + z = self.layer3(z) + return z + + self.net = get_model() - cls.model = Model(inputs=cls.innet, outputs=cls.dense2) @classmethod def tearDownClass(cls): pass - def test_net1(self): + def test_dense(self): + self.assertEqual(tl.get_tensor_shape(self.n1), [self.batch_size, 800]) - # test exceptional cases - try: - self.base_layer.build(None) - except Exception as e: - print(e) - - try: - self.base_layer.forward(None) - except Exception as e: - print(e) - - try: - self.base_layer[4] = 1 - except Exception as e: - print(e) - - try: - del self.base_layer[4] - except Exception as e: - print(e) - - try: - Layer(what=1) - except Exception as e: - print(e) - - def test_net2(self): - - # test weights - self.assertEqual(self.innet._info[0].layer.all_weights, []) - self.assertEqual(self.dropout1._info[0].layer.all_weights, []) - self.assertEqual(self.dense1._info[0].layer.all_weights[0].get_shape().as_list(), [784, 800]) - self.assertEqual(self.dense1._info[0].layer.all_weights[1].get_shape().as_list(), [ - 800, - ]) - self.assertEqual(self.dense2._info[0].layer.all_weights[0].get_shape().as_list(), [800, 10]) - self.assertEqual(len(self.dense1._info[0].layer.all_weights), 2) - self.assertEqual(len(self.dense2._info[0].layer.all_weights), 1) - - self.assertEqual(len(self.model.all_weights), 3) - - # a special case - self.model.release_memory() - - # test printing - # print(self.innet) - # print(self.dense1) - # print(self.dropout1) - # print(self.dense2) - # print(self.dense3) - - def test_special_cases(self): - try: - innet = Input([121]) - dense1 = Dense(n_units=800, act=tf.nn.relu)(innet) - except Exception as e: - print(e) - - def test_modellayer(self): - - data = np.random.normal(size=[self.batch_size, self.inputs_shape[1]]).astype(np.float32) - - origin_results_train = self.model(data, is_train=True) - origin_results_test = self.model(data, is_train=False) - - new_innet = Input(self.inputs_shape) - new_mlayer = ModelLayer(self.model)(new_innet) - - newmodel = Model(inputs=new_innet, outputs=new_mlayer) - - new_results_train = newmodel(data, is_train=True) - new_results_test = newmodel(data, is_train=False) - - self.assertEqual(origin_results_train.shape, new_results_train.shape) - self.assertTrue(np.array_equal(origin_results_test.shape, new_results_test.shape)) - - newmodel.release_memory() - - def test_layerlist(self): - innet = Input(self.inputs_shape) - hlayer = LayerList( - [ - ModelLayer(self.model), - LayerList([Dense(n_units=100), Dense(n_units=10)]), - Dense(n_units=5), - Dense(n_units=4) - ] - )(innet) - model = Model(inputs=innet, outputs=hlayer) - - # for w in model.all_weights: - # print(w.name) - - data = np.random.normal(size=[self.batch_size, self.inputs_shape[1]]).astype(np.float32) - pred = model(data, is_train=False) - self.assertEqual(pred.get_shape().as_list(), [self.batch_size, 4]) - - print(model) - - model.release_memory() - - def test_duplicate_names(self): - dense1 = tl.layers.Dense(n_units=10, name='test_densehh') - print(dense1) - try: - dense2 = tl.layers.Dense(n_units=10, 
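The weight count that `test_model` checks further down follows directly from the four layers of `get_model` above: a biased Dense owns two tensors, a Dense with `b_init=None` owns one, and Dropout owns none.

```python
dense_with_bias = 2   # layer1: W [784, 800] and b [800]
dropout = 0           # dp: no trainable tensors
dense_no_bias = 1     # layer2 and layer3: W only
print(dense_with_bias + dropout + dense_no_bias + dense_no_bias)  # 4
```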
name='test_densehh') - print(dense2) - except Exception as e: - print(e) - dense1 = tl.layers.Dense(n_units=10, name='test_densehh1') - dense2 = tl.layers.Dense(n_units=10, name='test_densehh2') - print(dense1) - print(dense2) + def test_dense_nonbias(self): + self.assertEqual(len(self.dense2.all_weights), 1) def test_dropout(self): - data_x = np.random.random([10, 784]).astype(np.float32) - pred_y_1 = self.model(data_x, is_train=True) - pred_y_2 = self.model(data_x, is_train=True) - self.assertFalse(np.allclose(pred_y_1, pred_y_2)) - pred_y_1 = self.model(data_x, is_train=False) - pred_y_2 = self.model(data_x, is_train=False) - self.assertTrue(np.allclose(pred_y_1, pred_y_2)) + self.assertEqual(len(self.dropout1.all_weights), 0) + + def test_model(self): + self.assertEqual(len(self.net.all_weights), 4) if __name__ == '__main__': diff --git a/tests/layers/test_layers_core_nested.py b/tests/layers/test_layers_core_nested.py index 1c5ef5908..167d7af3a 100644 --- a/tests/layers/test_layers_core_nested.py +++ b/tests/layers/test_layers_core_nested.py @@ -3,13 +3,12 @@ import os import unittest -import numpy as np -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tests.utils import CustomTestCase +import numpy as np -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_nested(CustomTestCase): @@ -21,11 +20,10 @@ def setUpClass(cls): @classmethod def tearDownClass(cls): pass - # tf.reset_default_graph() def test_nested_layer_with_inchannels(cls): - class MyLayer(tl.layers.Layer): + class MyLayer(tl.layers.Module): def __init__(self, name=None): super(MyLayer, self).__init__(name=name) @@ -38,10 +36,10 @@ def build(self, inputs_shape=None): def forward(self, inputs): inputs = self.input_layer(inputs) - output = tf.matmul(inputs, self.W) + output = tl.ops.matmul(inputs, self.W) return output - class model(tl.models.Model): + class model(tl.layers.Module): def __init__(self, name=None): super(model, self).__init__(name=name) @@ -50,70 +48,15 @@ def __init__(self, name=None): def forward(self, inputs): return self.layer(inputs) - input = tf.random.normal(shape=(100, 50)) + input = tl.layers.Input(shape=(100, 50)) model_dynamic = model() - model_dynamic.train() + model_dynamic.set_train() cls.assertEqual(model_dynamic(input).shape, (100, 10)) cls.assertEqual(len(model_dynamic.all_weights), 3) cls.assertEqual(len(model_dynamic.trainable_weights), 3) - model_dynamic.layer.input_layer.b.assign_add(tf.ones((20, ))) - cls.assertEqual(np.sum(model_dynamic.all_weights[-1].numpy() - tf.ones(20, ).numpy()), 0) - - ni = tl.layers.Input(shape=(100, 50)) - nn = MyLayer(name='mylayer1')(ni) - model_static = tl.models.Model(inputs=ni, outputs=nn) - model_static.eval() - cls.assertEqual(model_static(input).shape, (100, 10)) - cls.assertEqual(len(model_static.all_weights), 3) - cls.assertEqual(len(model_static.trainable_weights), 3) - model_static.get_layer('mylayer1').input_layer.b.assign_add(tf.ones((20, ))) - cls.assertEqual(np.sum(model_static.all_weights[-1].numpy() - tf.ones(20, ).numpy()), 0) - - def test_nested_layer_without_inchannels(cls): - - class MyLayer(tl.layers.Layer): - - def __init__(self, name=None): - super(MyLayer, self).__init__(name=name) - self.input_layer = tl.layers.Dense(n_units=20) # no need for in_channels here - self.build(None) - self._built = True - - def build(self, inputs_shape=None): - self.W = self._get_weights('weights', shape=(20, 10)) - def forward(self, inputs): - inputs = 
self.input_layer(inputs) - output = tf.matmul(inputs, self.W) - return output - - class model(tl.models.Model): - - def __init__(self, name=None): - super(model, self).__init__(name=name) - self.layer = MyLayer() - - def forward(self, inputs): - return self.layer(inputs) - - input = tf.random.normal(shape=(100, 50)) - model_dynamic = model() - model_dynamic.train() - cls.assertEqual(model_dynamic(input).shape, (100, 10)) - cls.assertEqual(len(model_dynamic.all_weights), 3) - cls.assertEqual(len(model_dynamic.trainable_weights), 3) - model_dynamic.layer.input_layer.b.assign_add(tf.ones((20, ))) - cls.assertEqual(np.sum(model_dynamic.all_weights[-1].numpy() - tf.ones(20, ).numpy()), 0) - - ni = tl.layers.Input(shape=(100, 50)) - nn = MyLayer(name='mylayer2')(ni) - model_static = tl.models.Model(inputs=ni, outputs=nn) - model_static.eval() - cls.assertEqual(model_static(input).shape, (100, 10)) - cls.assertEqual(len(model_static.all_weights), 3) - cls.assertEqual(len(model_static.trainable_weights), 3) - model_static.get_layer('mylayer2').input_layer.b.assign_add(tf.ones((20, ))) - cls.assertEqual(np.sum(model_static.all_weights[-1].numpy() - tf.ones(20, ).numpy()), 0) + model_dynamic.layer.input_layer.b.assign_add(tl.ops.ones((20, ))) + cls.assertEqual(np.sum(model_dynamic.all_weights[-1].numpy() - tl.ops.ones(20, ).numpy()), 0) if __name__ == '__main__': diff --git a/tests/layers/test_layers_deformable_convolution.py b/tests/layers/test_layers_deformable_convolution.py index 8c5df8e8d..a449b420d 100644 --- a/tests/layers/test_layers_deformable_convolution.py +++ b/tests/layers/test_layers_deformable_convolution.py @@ -4,53 +4,44 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import * from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' - - class Layer_Convolution_2D_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): print("\n#################################") - cls.batch_size = 5 - cls.inputs_shape = [cls.batch_size, 10, 10, 16] - cls.input_layer = Input(cls.inputs_shape, name='input_layer') - - cls.offset1 = tl.layers.Conv2d(n_filter=18, filter_size=(3, 3), strides=(1, 1), padding='SAME', - name='offset1')(cls.input_layer) - cls.deformconv1 = tl.layers.DeformableConv2d( - offset_layer=cls.offset1, n_filter=32, filter_size=(3, 3), act=tf.nn.relu, name='deformable1' - )(cls.input_layer) - cls.offset2 = tl.layers.Conv2d(n_filter=18, filter_size=(3, 3), strides=(1, 1), padding='SAME', - name='offset2')(cls.deformconv1) - cls.deformconv2 = tl.layers.DeformableConv2d( - offset_layer=cls.offset2, n_filter=64, filter_size=(3, 3), act=tf.nn.relu, name='deformable2' - )(cls.deformconv1) - - cls.model = Model(cls.input_layer, cls.deformconv2) - print("Testing Deformable Conv2d model: \n", cls.model) + self.batch_size = 5 + self.inputs_shape = [self.batch_size, 10, 10, 16] + self.input_layer = tl.layers.Input(self.inputs_shape, name='input_layer') + + self.offset1 = tl.layers.Conv2d(n_filter=18, filter_size=(3, 3), strides=(1, 1), padding='SAME', + name='offset1')(self.input_layer) + self.init_deformconv1 = tl.layers.DeformableConv2d( + offset_layer=self.offset1, n_filter=32, filter_size=(3, 3), act='relu', name='deformable1' + ) + self.deformconv1 = self.init_deformconv1(self.input_layer) + self.offset2 = tl.layers.Conv2d(n_filter=18, filter_size=(3, 3), strides=(1, 1), padding='SAME', + 
name='offset2')(self.deformconv1) + self.deformconv2 = tl.layers.DeformableConv2d( + offset_layer=self.offset2, n_filter=64, filter_size=(3, 3), act='relu', name='deformable2' + )(self.deformconv1) @classmethod - def tearDownClass(cls): + def tearDownClass(self): pass def test_layer_n1(self): - self.assertEqual(len(self.deformconv1._info[0].layer.all_weights), 2) - self.assertEqual(self.deformconv1.get_shape().as_list()[1:], [10, 10, 32]) + self.assertEqual(len(self.init_deformconv1.all_weights), 2) + self.assertEqual(tl.get_tensor_shape(self.deformconv1)[1:], [10, 10, 32]) def test_layer_n2(self): - - self.assertEqual(len(self.deformconv2._info[0].layer.all_weights), 2) - self.assertEqual(self.deformconv2.get_shape().as_list()[1:], [10, 10, 64]) + self.assertEqual(tl.get_tensor_shape(self.deformconv2)[1:], [10, 10, 64]) if __name__ == '__main__': diff --git a/tests/layers/test_layers_dense.py b/tests/layers/test_layers_dense.py index b6f76c1c9..c8cc32682 100644 --- a/tests/layers/test_layers_dense.py +++ b/tests/layers/test_layers_dense.py @@ -3,41 +3,29 @@ import os import unittest -import numpy as np -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tensorlayer.layers import * -from tensorlayer.models import * -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase +import numpy as np class Layer_BinaryDense_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): print("-" * 20, "Layer_BinaryDense_Test", "-" * 20) - cls.batch_size = 4 - cls.inputs_shape = [cls.batch_size, 10] + self.batch_size = 4 + self.inputs_shape = [self.batch_size, 10] - cls.ni = Input(cls.inputs_shape, name='input_layer') - cls.layer1 = BinaryDense(n_units=5) - nn = cls.layer1(cls.ni) - cls.layer1._nodes_fixed = True - cls.M = Model(inputs=cls.ni, outputs=nn) + self.ni = tl.layers.Input(self.inputs_shape, name='input_layer') + self.layer1 = tl.layers.BinaryDense(n_units=5) - cls.layer2 = BinaryDense(n_units=5, in_channels=10) - cls.layer2._nodes_fixed = True + self.layer2 = tl.layers.BinaryDense(n_units=5, in_channels=10) - cls.inputs = tf.ones((cls.inputs_shape)) - cls.n1 = cls.layer1(cls.inputs) - cls.n2 = cls.layer2(cls.inputs) - cls.n3 = cls.M(cls.inputs, is_train=True) - - print(cls.layer1) - print(cls.layer2) + self.n1 = self.layer1(self.ni) + self.n2 = self.layer2(self.ni) @classmethod def tearDownClass(cls): @@ -45,57 +33,27 @@ def tearDownClass(cls): def test_layer_n1(self): print(self.n1[0]) - self.assertEqual(tf.reduce_sum(self.n1).numpy() % 1, 0.0) # should be integer + self.assertEqual(tl.ops.ReduceSum()(self.n1).numpy() % 1, 0.0) # should be integer def test_layer_n2(self): print(self.n2[0]) - self.assertEqual(tf.reduce_sum(self.n2).numpy() % 1, 0.0) # should be integer - - def test_model_n3(self): - print(self.n3[0]) - self.assertEqual(tf.reduce_sum(self.n3).numpy() % 1, 0.0) # should be integer - - def test_exception(self): - try: - layer = BinaryDense(n_units=5) - inputs = Input([4, 10, 5], name='ill_inputs') - out = layer(inputs) - self.fail('ill inputs') - except Exception as e: - print(e) - - try: - layer = BinaryDense(n_units=5, use_gemm=True) - out = layer(self.ni) - self.fail('use gemm') - except Exception as e: - print(e) + self.assertEqual(tl.ops.ReduceSum()(self.n2).numpy() % 1, 0.0) # should be integer class Layer_DorefaDense_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): print("-" * 20, "Layer_DorefaDense_Test", 
"-" * 20) - cls.batch_size = 4 - cls.inputs_shape = [cls.batch_size, 10] - - cls.ni = Input(cls.inputs_shape, name='input_layer') - cls.layer1 = DorefaDense(n_units=5) - nn = cls.layer1(cls.ni) - cls.layer1._nodes_fixed = True - cls.M = Model(inputs=cls.ni, outputs=nn) - - cls.layer2 = DorefaDense(n_units=5, in_channels=10) - cls.layer2._nodes_fixed = True + self.batch_size = 4 + self.inputs_shape = [self.batch_size, 10] - cls.inputs = tf.ones((cls.inputs_shape)) - cls.n1 = cls.layer1(cls.inputs) - cls.n2 = cls.layer2(cls.inputs) - cls.n3 = cls.M(cls.inputs, is_train=True) + self.ni = tl.layers.Input(self.inputs_shape, name='input_layer') + self.layer1 = tl.layers.DorefaDense(n_units=5) + self.layer2 = tl.layers.DorefaDense(n_units=5, in_channels=10) - print(cls.layer1) - print(cls.layer2) + self.n1 = self.layer1(self.ni) + self.n2 = self.layer2(self.ni) @classmethod def tearDownClass(cls): @@ -103,57 +61,26 @@ def tearDownClass(cls): def test_layer_n1(self): print(self.n1[0]) - # self.assertEqual(tf.reduce_sum(self.n1).numpy() % 1, 0.0) # should be integer def test_layer_n2(self): print(self.n2[0]) - # self.assertEqual(tf.reduce_sum(self.n2).numpy() % 1, 0.0) # should be integer - - def test_model_n3(self): - print(self.n3[0]) - # self.assertEqual(tf.reduce_sum(self.n3).numpy() % 1, 0.0) # should be integer - - def test_exception(self): - try: - layer = DorefaDense(n_units=5) - inputs = Input([4, 10, 5], name='ill_inputs') - out = layer(inputs) - self.fail('ill inputs') - except Exception as e: - print(e) - - try: - layer = DorefaDense(n_units=5, use_gemm=True) - out = layer(self.ni) - self.fail('use gemm') - except Exception as e: - print(e) class Layer_DropconnectDense_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): print("-" * 20, "Layer_DropconnectDense_Test", "-" * 20) - cls.batch_size = 4 - cls.inputs_shape = [cls.batch_size, 10] - - cls.ni = Input(cls.inputs_shape, name='input_layer') - cls.layer1 = DropconnectDense(n_units=5, keep=1.0) - nn = cls.layer1(cls.ni) - cls.layer1._nodes_fixed = True - cls.M = Model(inputs=cls.ni, outputs=nn) + self.batch_size = 4 + self.inputs_shape = [self.batch_size, 10] - cls.layer2 = DropconnectDense(n_units=5, in_channels=10, keep=0.01) - cls.layer2._nodes_fixed = True + self.ni = tl.layers.Input(self.inputs_shape, name='input_layer') + self.layer1 = tl.layers.DropconnectDense(n_units=5, keep=1.0) - cls.inputs = tf.ones((cls.inputs_shape)) - cls.n1 = cls.layer1(cls.inputs) - cls.n2 = cls.layer2(cls.inputs) - cls.n3 = cls.M(cls.inputs, is_train=True) + self.layer2 = tl.layers.DropconnectDense(n_units=5, in_channels=10, keep=0.01) - print(cls.layer1) - print(cls.layer2) + self.n1 = self.layer1(self.ni) + self.n2 = self.layer2(self.ni) @classmethod def tearDownClass(cls): @@ -163,55 +90,24 @@ def test_layer_n1(self): print(self.n1[0]) def test_layer_n2(self): - zero_rate = tf.reduce_mean(tf.cast(tf.equal(self.n2, 0.0), tf.float32)) - print(zero_rate) - self.assertGreater(zero_rate, 0.0) print(self.n2[0]) - def test_model_n3(self): - print(self.n3[0]) - - def test_exception(self): - try: - layer = DropconnectDense(n_units=5) - inputs = Input([4, 10, 5], name='ill_inputs') - out = layer(inputs) - self.fail('ill inputs') - except Exception as e: - print(e) - - try: - layer = DropconnectDense(n_units=5, keep=0.0) - self.fail('keep no elements') - except Exception as e: - self.assertIsInstance(e, ValueError) - print(e) - class Layer_QuanDense_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): 
print("-" * 20, "Layer_QuanDense_Test", "-" * 20) - cls.batch_size = 4 - cls.inputs_shape = [cls.batch_size, 10] - - cls.ni = Input(cls.inputs_shape, name='input_layer') - cls.layer1 = QuanDense(n_units=5) - nn = cls.layer1(cls.ni) - cls.layer1._nodes_fixed = True - cls.M = Model(inputs=cls.ni, outputs=nn) + self.batch_size = 4 + self.inputs_shape = [self.batch_size, 10] - cls.layer2 = QuanDense(n_units=5, in_channels=10) - cls.layer2._nodes_fixed = True + self.ni = tl.layers.Input(self.inputs_shape, name='input_layer') + self.layer1 = tl.layers.QuanDense(n_units=5) - cls.inputs = tf.random.uniform((cls.inputs_shape)) - cls.n1 = cls.layer1(cls.inputs) - cls.n2 = cls.layer2(cls.inputs) - cls.n3 = cls.M(cls.inputs, is_train=True) + self.layer2 = tl.layers.QuanDense(n_units=5, in_channels=10) - print(cls.layer1) - print(cls.layer2) + self.n1 = self.layer1(self.ni) + self.n2 = self.layer2(self.ni) @classmethod def tearDownClass(cls): @@ -223,50 +119,21 @@ def test_layer_n1(self): def test_layer_n2(self): print(self.n2[0]) - def test_model_n3(self): - print(self.n3[0]) - - def test_exception(self): - try: - layer = QuanDense(n_units=5) - inputs = Input([4, 10, 5], name='ill_inputs') - out = layer(inputs) - self.fail('ill inputs') - except Exception as e: - print(e) - - try: - layer = QuanDense(n_units=5, use_gemm=True) - out = layer(self.ni) - self.fail('use gemm') - except Exception as e: - print(e) - class Layer_QuanDenseWithBN_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): print("-" * 20, "Layer_QuanDenseWithBN_Test", "-" * 20) - cls.batch_size = 4 - cls.inputs_shape = [cls.batch_size, 10] - - cls.ni = Input(cls.inputs_shape, name='input_layer') - cls.layer1 = QuanDenseWithBN(n_units=5) - nn = cls.layer1(cls.ni) - cls.layer1._nodes_fixed = True - cls.M = Model(inputs=cls.ni, outputs=nn) + self.batch_size = 4 + self.inputs_shape = [self.batch_size, 10] - cls.layer2 = QuanDenseWithBN(n_units=5, in_channels=10) - cls.layer2._nodes_fixed = True + self.inputs = tl.initializers.TruncatedNormal()(shape=self.inputs_shape) + self.layer1 = tl.layers.QuanDenseWithBN(n_units=5) + self.layer2 = tl.layers.QuanDenseWithBN(n_units=5, in_channels=10) - cls.inputs = tf.random.uniform((cls.inputs_shape)) - cls.n1 = cls.layer1(cls.inputs) - cls.n2 = cls.layer2(cls.inputs) - cls.n3 = cls.M(cls.inputs, is_train=True) - - print(cls.layer1) - print(cls.layer2) + self.n1 = self.layer1(self.inputs) + self.n2 = self.layer2(self.inputs) @classmethod def tearDownClass(cls): @@ -278,50 +145,21 @@ def test_layer_n1(self): def test_layer_n2(self): print(self.n2[0]) - def test_model_n3(self): - print(self.n3[0]) - - def test_exception(self): - try: - layer = QuanDenseWithBN(n_units=5) - inputs = Input([4, 10, 5], name='ill_inputs') - out = layer(inputs) - self.fail('ill inputs') - except Exception as e: - print(e) - - try: - layer = QuanDenseWithBN(n_units=5, use_gemm=True) - out = layer(self.ni) - self.fail('use gemm') - except Exception as e: - print(e) - class Layer_TernaryDense_Test(CustomTestCase): @classmethod - def setUpClass(cls): + def setUpClass(self): print("-" * 20, "Layer_BinaryDense_Test", "-" * 20) - cls.batch_size = 4 - cls.inputs_shape = [cls.batch_size, 10] + self.batch_size = 4 + self.inputs_shape = [self.batch_size, 10] - cls.ni = Input(cls.inputs_shape, name='input_layer') - cls.layer1 = TernaryDense(n_units=5) - nn = cls.layer1(cls.ni) - cls.layer1._nodes_fixed = True - cls.M = Model(inputs=cls.ni, outputs=nn) + self.inputs = tl.layers.Input(self.inputs_shape, 
name='input_layer') + self.layer1 = tl.layers.TernaryDense(n_units=5) + self.layer2 = tl.layers.TernaryDense(n_units=5, in_channels=10) - cls.layer2 = TernaryDense(n_units=5, in_channels=10) - cls.layer2._nodes_fixed = True - - cls.inputs = tf.ones((cls.inputs_shape)) - cls.n1 = cls.layer1(cls.inputs) - cls.n2 = cls.layer2(cls.inputs) - cls.n3 = cls.M(cls.inputs, is_train=True) - - print(cls.layer1) - print(cls.layer2) + self.n1 = self.layer1(self.inputs) + self.n2 = self.layer2(self.inputs) @classmethod def tearDownClass(cls): @@ -335,26 +173,6 @@ def test_layer_n2(self): print(np.unique(self.n2.numpy().reshape(-1))) print(self.n2[0]) - def test_model_n3(self): - print(np.unique(self.n3.numpy().reshape(-1))) - print(self.n3[0]) - - def test_exception(self): - try: - layer = TernaryDense(n_units=5) - inputs = Input([4, 10, 5], name='ill_inputs') - out = layer(inputs) - self.fail('ill inputs') - except Exception as e: - print(e) - - try: - layer = TernaryDense(n_units=5, use_gemm=True) - out = layer(self.ni) - self.fail('use gemm') - except Exception as e: - print(e) - if __name__ == '__main__': diff --git a/tests/layers/test_layers_embedding.py b/tests/layers/test_layers_embedding.py index 4377b79a7..832e47bbe 100644 --- a/tests/layers/test_layers_embedding.py +++ b/tests/layers/test_layers_embedding.py @@ -4,13 +4,12 @@ import os import unittest -import numpy as np -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tests.utils import CustomTestCase +import numpy as np -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Embed_Test(CustomTestCase): @@ -24,37 +23,34 @@ def tearDownClass(cls): pass def test_onehot(self): - input = tl.layers.Input([32], dtype=tf.int32) + input = tl.layers.Input([32], dtype=tl.int32) onehot = tl.layers.OneHot(depth=8, on_value=1, off_value=0, axis=-1) print(onehot) tensor = tl.layers.OneHot(depth=8)(input) self.assertEqual(tensor.get_shape().as_list(), [32, 8]) - model = tl.models.Model(inputs=input, outputs=tensor) def test_embed(self): - input = tl.layers.Input([8, 100], dtype=tf.int32) + input = tl.layers.Input([8, 100], dtype=tl.int32) embed = tl.layers.Embedding(vocabulary_size=1000, embedding_size=50, name='embed') print(embed) tensor = embed(input) self.assertEqual(tensor.get_shape().as_list(), [8, 100, 50]) - model = tl.models.Model(inputs=input, outputs=tensor) def test_avg_embed(self): batch_size = 8 length = 5 - input = tl.layers.Input([batch_size, length], dtype=tf.int32) + input = tl.layers.Input([batch_size, length], dtype=tl.int32) avgembed = tl.layers.AverageEmbedding(vocabulary_size=1000, embedding_size=50, name='avg') print(avgembed) tensor = avgembed(input) # print(tensor) self.assertEqual(tensor.get_shape().as_list(), [batch_size, 50]) - model = tl.models.Model(inputs=input, outputs=tensor) def test_word2vec_nce(self): batch_size = 8 embedding_size = 50 - inputs = tl.layers.Input([batch_size], dtype=tf.int32) - labels = tl.layers.Input([batch_size, 1], dtype=tf.int32) + inputs = tl.layers.Input([batch_size], dtype=tl.int32) + labels = tl.layers.Input([batch_size, 1], dtype=tl.int32) emb_net = tl.layers.Word2vecEmbedding( vocabulary_size=10000, embedding_size=embedding_size, @@ -66,38 +62,21 @@ def test_word2vec_nce(self): nce_b_init=tl.initializers.constant(value=0.0), ) print(emb_net) - try: - embed_tensor, embed_nce_loss = emb_net(inputs) - except ValueError as e: - print(e) - try: - embed_tensor = emb_net(inputs, use_nce_loss=False) - print("Not use 
NCE without labels") - except Exception as e: - print(e) + embed_tensor = emb_net([inputs, labels], use_nce_loss=False) embed_tensor, embed_nce_loss = emb_net([inputs, labels], use_nce_loss=True) embed_tensor, embed_nce_loss = emb_net([inputs, labels]) self.assertEqual(embed_tensor.get_shape().as_list(), [batch_size, embedding_size]) - outputs = tl.layers.Dense(n_units=10)(embed_tensor) - model = tl.models.Model(inputs=[inputs, labels], outputs=[outputs, embed_nce_loss]) - out, nce = model( - [np.random.randint(0, 1, size=[batch_size]), - np.random.randint(0, 1, size=[batch_size, 1])], is_train=True - ) - self.assertEqual(out.get_shape().as_list(), [batch_size, 10]) - print(nce) - def test_word2vec_no_nce(self): batch_size = 8 embedding_size = 50 - inputs = tl.layers.Input([batch_size], dtype=tf.int32) + inputs = tl.layers.Input([batch_size], dtype=tl.int32) emb_net = tl.layers.Word2vecEmbedding( vocabulary_size=10000, embedding_size=embedding_size, num_sampled=100, - activate_nce_loss=False, # the nce loss is activated + activate_nce_loss=False, nce_loss_args={}, E_init=tl.initializers.random_uniform(minval=-1.0, maxval=1.0), nce_W_init=tl.initializers.truncated_normal(stddev=float(1.0 / np.sqrt(embedding_size))), @@ -111,7 +90,6 @@ def test_word2vec_no_nce(self): except AttributeError as e: print(e) self.assertEqual(embed_tensor.get_shape().as_list(), [batch_size, embedding_size]) - model = tl.models.Model(inputs=inputs, outputs=embed_tensor) if __name__ == '__main__': diff --git a/tests/layers/test_layers_extend.py b/tests/layers/test_layers_extend.py index 5d4decc60..22b685618 100644 --- a/tests/layers/test_layers_extend.py +++ b/tests/layers/test_layers_extend.py @@ -4,12 +4,11 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Extend_Test(CustomTestCase): @@ -26,15 +25,13 @@ def test_expand_dims(self): x = tl.layers.Input([8, 3]) expandlayer = tl.layers.ExpandDims(axis=-1) y = expandlayer(x) - print(expandlayer) - self.assertEqual(y.get_shape().as_list(), [8, 3, 1]) + self.assertEqual(tl.get_tensor_shape(y), [8, 3, 1]) def test_tile(self): x = tl.layers.Input([8, 3]) tilelayer = tl.layers.Tile(multiples=[2, 3]) y = tilelayer(x) - print(tilelayer) - self.assertEqual(y.get_shape().as_list(), [16, 9]) + self.assertEqual(tl.get_tensor_shape(y), [16, 9]) if __name__ == '__main__': diff --git a/tests/layers/test_layers_lambda.py b/tests/layers/test_layers_lambda.py index cb487e86f..d2ab4a107 100644 --- a/tests/layers/test_layers_lambda.py +++ b/tests/layers/test_layers_lambda.py @@ -3,14 +3,14 @@ import os import unittest - import numpy as np -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' + +import tensorflow as tf import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Lambda_Test(CustomTestCase): @@ -34,7 +34,7 @@ def test_lambda_keras(self): # in order to get trainable_variables of keras _ = perceptron(np.random.random([100, 5]).astype(np.float32)) - class CustomizeModel(tl.models.Model): + class CustomizeModel(tl.layers.Module): def __init__(self): super(CustomizeModel, self).__init__() @@ -51,7 +51,7 @@ def forward(self, x): model = CustomizeModel() print(model.lambdalayer) - model.train() + model.set_train() for epoch in range(10): with tf.GradientTape() as tape: 
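The migration pattern in these `Lambda` hunks is always the same: subclass `tl.layers.Module` instead of `tl.models.Model`, and call `set_train()` instead of `train()`. A minimal runnable sketch of the wrapped-function usage these tests exercise (assuming the TensorLayer 3.x API with the TensorFlow backend; names are illustrative):

```python
import numpy as np
import tensorflow as tf
import tensorlayer as tl

class DoubleModel(tl.layers.Module):
    def __init__(self):
        super(DoubleModel, self).__init__()
        # Lambda wraps a plain callable as a TensorLayer layer
        self.lambdalayer = tl.layers.Lambda(lambda x: 2 * x)

    def forward(self, x):
        return self.lambdalayer(x)

model = DoubleModel()
model.set_train()
data = tf.convert_to_tensor(np.arange(6, dtype=np.float32).reshape(2, 3))
out = model(data)
assert np.array_equal(out.numpy(), data.numpy() * 2)
```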
@@ -73,7 +73,7 @@ def customize_func(x, foo=42):
         else:
             return tf.identity(x)

-        class CustomizeModel(tl.models.Model):
+        class CustomizeModel(tl.layers.Module):

             def __init__(self):
                 super(CustomizeModel, self).__init__()
@@ -90,7 +90,7 @@ def forward(self, x, bar):

         model = CustomizeModel()
         print(model.lambdalayer)
-        model.train()
+        model.set_train()

         out, out2 = model(self.data_x, bar=-1)
         self.assertTrue(np.array_equal(out2.numpy(), tf.nn.relu(out).numpy()))
@@ -108,7 +108,7 @@ def test_lambda_func_with_weight(self):
         def customize_fn(x):
             return x + a

-        class CustomizeModel(tl.models.Model):
+        class CustomizeModel(tl.layers.Module):

             def __init__(self):
                 super(CustomizeModel, self).__init__()
@@ -122,14 +122,14 @@ def forward(self, x):

         model = CustomizeModel()
         print(model.lambdalayer)
-        model.train()
+        model.set_train()

         out = model(self.data_x)
         print(out.shape)

     def test_lambda_func_without_args(self):

-        class CustomizeModel(tl.models.Model):
+        class CustomizeModel(tl.layers.Module):

             def __init__(self):
                 super(CustomizeModel, self).__init__()
@@ -143,7 +143,7 @@ def forward(self, x):

         model = CustomizeModel()
         print(model.lambdalayer)
-        model.train()
+        model.set_train()

         out, out2 = model(self.data_x)
         self.assertTrue(np.array_equal(out2.numpy(), out.numpy() * 2))
@@ -153,7 +153,7 @@ def test_elementwiselambda_func_with_args(self):
         def customize_func(noise, mean, std, foo=42):
             return mean + noise * tf.exp(std * 0.5) + foo

-        class CustomizeModel(tl.models.Model):
+        class CustomizeModel(tl.layers.Module):

             def __init__(self):
                 super(CustomizeModel, self).__init__()
@@ -174,7 +174,7 @@ def forward(self, x, bar=None):

         model = CustomizeModel()
         print(model.lambdalayer)
-        model.train()
+        model.set_train()

         noise, mean, std, out = model(self.data_x)
         self.assertTrue(np.allclose(out.numpy(), customize_func(noise, mean, std, foo=1024).numpy()))
@@ -186,7 +186,7 @@ def test_elementwiselambda_func_without_args(self):
         def customize_func(noise, mean, std):
             return mean + noise * tf.exp(std * 0.5)

-        class CustomizeModel(tl.models.Model):
+        class CustomizeModel(tl.layers.Module):

             def __init__(self):
                 super(CustomizeModel, self).__init__()
@@ -204,7 +204,7 @@ def forward(self, x):

         model = CustomizeModel()
         print(model.lambdalayer)
-        model.train()
+        model.set_train()

         noise, mean, std, out = model(self.data_x)
         self.assertTrue(np.array_equal(out.numpy(), customize_func(noise, mean, std).numpy()))
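The `ElementwiseLambda` tests above fuse several input tensors with a single custom function. A standalone sketch of that usage, assuming the `fn=` keyword these tests exercise (data values are arbitrary):

```python
import numpy as np
import tensorflow as tf
import tensorlayer as tl

# the reparameterisation function used by the tests above
def reparameterise(noise, mean, std):
    return mean + noise * tf.exp(std * 0.5)

noise = tf.random.normal(shape=(4, 5))
mean = tf.convert_to_tensor(np.zeros((4, 5), dtype=np.float32))
std = tf.convert_to_tensor(np.ones((4, 5), dtype=np.float32))

# ElementwiseLambda combines a list of tensors with one custom function
sample = tl.layers.ElementwiseLambda(fn=reparameterise)([noise, mean, std])
print(sample.shape)  # expected: (4, 5)
```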
diff --git a/tests/layers/test_layers_merge.py b/tests/layers/test_layers_merge.py
index 75e711054..0aeb763aa 100644
--- a/tests/layers/test_layers_merge.py
+++ b/tests/layers/test_layers_merge.py
@@ -4,13 +4,12 @@
 import os
 import unittest

-import numpy as np
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

+import numpy as np
 import tensorlayer as tl
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_Merge_Test(CustomTestCase):
@@ -25,12 +24,12 @@ def tearDownClass(cls):

     def test_concat(self):

-        class CustomModel(tl.models.Model):
+        class CustomModel(tl.layers.Module):

             def __init__(self):
                 super(CustomModel, self).__init__()
-                self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu1_1')
-                self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu2_1')
+                self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu1_1')
+                self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu2_1')
                 self.concat = tl.layers.Concat(concat_dim=1, name='concat_layer')

             def forward(self, inputs):
@@ -40,8 +39,8 @@ def forward(self, inputs):
             return outputs

         model = CustomModel()
-        model.train()
-        inputs = tf.convert_to_tensor(np.random.random([4, 20]).astype(np.float32))
+        model.set_train()
+        inputs = tl.ops.convert_to_tensor(np.random.random([4, 20]).astype(np.float32))
         outputs = model(inputs)
         print(model)

@@ -49,13 +48,13 @@ def forward(self, inputs):

     def test_elementwise(self):

-        class CustomModel(tl.models.Model):
+        class CustomModel(tl.layers.Module):

             def __init__(self):
                 super(CustomModel, self).__init__()
-                self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu1_1')
-                self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tf.nn.relu, name='relu2_1')
-                self.element = tl.layers.Elementwise(combine_fn=tf.minimum, name='minimum', act=tf.identity)
+                self.dense1 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu1_1')
+                self.dense2 = tl.layers.Dense(in_channels=20, n_units=10, act=tl.ReLU, name='relu2_1')
+                self.element = tl.layers.Elementwise(combine_fn=tl.ops.minimum, name='minimum', act=None)

             def forward(self, inputs):
                 d1 = self.dense1(inputs)
@@ -64,12 +63,12 @@ def forward(self, inputs):
             return outputs, d1, d2

         model = CustomModel()
-        model.train()
-        inputs = tf.convert_to_tensor(np.random.random([4, 20]).astype(np.float32))
+        model.set_train()
+        inputs = tl.ops.convert_to_tensor(np.random.random([4, 20]).astype(np.float32))
         outputs, d1, d2 = model(inputs)
         print(model)

-        min = tf.minimum(d1, d2)
+        min = tl.ops.minimum(d1, d2)
         self.assertEqual(outputs.get_shape().as_list(), [4, 10])
         self.assertTrue(np.array_equal(min.numpy(), outputs.numpy()))
diff --git a/tests/layers/test_layers_noise.py b/tests/layers/test_layers_noise.py
index 056410ba1..2a8b4fe1e 100644
--- a/tests/layers/test_layers_noise.py
+++ b/tests/layers/test_layers_noise.py
@@ -4,13 +4,12 @@
 import os
 import unittest

-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

 import tensorlayer as tl
 from tensorlayer.layers import *
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_Convolution_1D_Test(CustomTestCase):
@@ -23,12 +22,10 @@ def setUpClass(cls):
         cls.inputs_shape = [cls.batch_size, 200]
         cls.input_layer = Input(cls.inputs_shape, name='input_layer')

-        cls.dense = tl.layers.Dense(n_units=100, act=tf.nn.relu, in_channels=200)(cls.input_layer)
+        cls.dense = tl.layers.Dense(n_units=100, act=tl.ReLU, in_channels=200)(cls.input_layer)

         cls.noiselayer = tl.layers.GaussianNoise(name='gaussian')(cls.dense)

-        print("Testing GaussianNoise: \n", cls.noiselayer._info[0].layer)
-
     @classmethod
     def tearDownClass(cls):
         pass
@@ -39,6 +36,6 @@ def test_layer_n1(self):


 if __name__ == '__main__':
-    tl.logging.set_verbosity(tl.logging.DEBUG)
+    # tl.logging.set_verbosity(tl.logging.DEBUG)

     unittest.main()
diff --git a/tests/layers/test_layers_normalization.py b/tests/layers/test_layers_normalization.py
index b6bb30ad2..51cd41387 100644
--- a/tests/layers/test_layers_normalization.py
+++ b/tests/layers/test_layers_normalization.py
@@ -4,15 +4,12 @@
 import os
 import unittest

-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

 import tensorlayer as tl
 from tensorlayer.layers import *
-from tensorlayer.models import Model
 from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
-

 class Laye_BatchNorm_Test(CustomTestCase):
@@ -25,31 +22,28 @@ def setUpClass(cls):
         x_3_input_shape = [None, 100, 100, 100, 3]
batchsize = 2 - cls.x0 = tf.random.normal([batchsize] + x_0_input_shape[1:]) - cls.x1 = tf.random.normal([batchsize] + x_1_input_shape[1:]) - cls.x2 = tf.random.normal([batchsize] + x_2_input_shape[1:]) - cls.x3 = tf.random.normal([batchsize] + x_3_input_shape[1:]) + cls.x0 = tl.ops.truncated_normal(shape=[batchsize] + x_0_input_shape[1:]) + cls.x1 = tl.ops.truncated_normal([batchsize] + x_1_input_shape[1:]) + cls.x2 = tl.ops.truncated_normal([batchsize] + x_2_input_shape[1:]) + cls.x3 = tl.ops.truncated_normal([batchsize] + x_3_input_shape[1:]) ## Base ni_1 = Input(x_1_input_shape, name='test_ni1') nn_1 = Conv1d(n_filter=32, filter_size=5, stride=2, name='test_conv1d')(ni_1) n1_b = BatchNorm(name='test_conv')(nn_1) cls.n1_b = n1_b - cls.base_1d = Model(inputs=ni_1, outputs=n1_b, name='test_base_1d') ni_2 = Input(x_2_input_shape, name='test_ni2') nn_2 = Conv2d(n_filter=32, filter_size=(3, 3), strides=(2, 2), name='test_conv2d')(ni_2) n2_b = BatchNorm(name='test_bn2d')(nn_2) cls.n2_b = n2_b - cls.base_2d = Model(inputs=ni_2, outputs=n2_b, name='test_base_2d') ni_3 = Input(x_3_input_shape, name='test_ni2') nn_3 = Conv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), name='test_conv3d')(ni_3) n3_b = BatchNorm(name='test_bn3d')(nn_3) cls.n3_b = n3_b - cls.base_3d = Model(inputs=ni_3, outputs=n3_b, name='test_base_3d') - class bn_0d_model(Model): + class bn_0d_model(tl.layers.Module): def __init__(self): super(bn_0d_model, self).__init__() @@ -61,7 +55,7 @@ def forward(self, x): return x dynamic_base = bn_0d_model() - cls.n0_b = dynamic_base(cls.x0, is_train=True) + cls.n0_b = dynamic_base(cls.x0) ## 0D ======================================================================== @@ -72,9 +66,7 @@ def forward(self, x): cls.n0 = n0 - cls.static_0d = Model(inputs=nin_0, outputs=n0) - - class bn_0d_model(Model): + class bn_0d_model(tl.layers.Module): def __init__(self): super(bn_0d_model, self).__init__(name='test_bn_0d_model') @@ -87,10 +79,6 @@ def forward(self, x): cls.dynamic_0d = bn_0d_model() - print("Printing BatchNorm0d") - print(cls.static_0d) - print(cls.dynamic_0d) - ## 1D ======================================================================== nin_1 = Input(x_1_input_shape, name='test_in1') @@ -100,9 +88,7 @@ def forward(self, x): cls.n1 = n1 - cls.static_1d = Model(inputs=nin_1, outputs=n1) - - class bn_1d_model(Model): + class bn_1d_model(tl.layers.Module): def __init__(self): super(bn_1d_model, self).__init__(name='test_bn_1d_model') @@ -115,10 +101,6 @@ def forward(self, x): cls.dynamic_1d = bn_1d_model() - print("Printing BatchNorm1d") - print(cls.static_1d) - print(cls.dynamic_1d) - ## 2D ======================================================================== nin_2 = Input(x_2_input_shape, name='test_in2') @@ -128,9 +110,7 @@ def forward(self, x): cls.n2 = n2 - cls.static_2d = Model(inputs=nin_2, outputs=n2) - - class bn_2d_model(Model): + class bn_2d_model(tl.layers.Module): def __init__(self): super(bn_2d_model, self).__init__(name='test_bn_2d_model') @@ -143,22 +123,16 @@ def forward(self, x): cls.dynamic_2d = bn_2d_model() - print("Printing BatchNorm1d") - print(cls.static_2d) - print(cls.dynamic_2d) - ## 3D ======================================================================== nin_3 = Input(x_3_input_shape, name='test_in3') n3 = Conv3d(n_filter=32, filter_size=(3, 3, 3), strides=(2, 2, 2), name='test_conv3d')(nin_3) - n3 = BatchNorm3d(name='test_bn3d', act=tf.nn.relu)(n3) + n3 = BatchNorm3d(name='test_bn3d', act=tl.ReLU)(n3) cls.n3 = n3 - cls.static_3d = 
Model(inputs=nin_3, outputs=n3) - - class bn_3d_model(Model): + class bn_3d_model(tl.layers.Module): def __init__(self): super(bn_3d_model, self).__init__(name='test_bn_3d_model') @@ -173,9 +147,6 @@ def forward(self, x): cls.dynamic_3d = bn_3d_model() - print("Printing BatchNorm1d") - print(cls.static_3d) - print(cls.dynamic_3d) @classmethod def tearDownClass(cls): @@ -184,37 +155,25 @@ def tearDownClass(cls): def test_BatchNorm(self): self.assertEqual(self.n1_b.shape[1:], (50, 32)) - out = self.base_1d(self.x1, is_train=True) self.assertEqual(self.n2_b.shape[1:], (50, 50, 32)) - out = self.base_2d(self.x2, is_train=True) self.assertEqual(self.n3_b.shape[1:], (50, 50, 50, 32)) - out = self.base_3d(self.x3, is_train=True) self.assertEqual(self.n0_b.shape[1:], (32)) print("test_BatchNorm OK") def test_BatchNorm0d(self): self.assertEqual(self.n0.shape[1:], (32)) - out = self.static_0d(self.x0, is_train=True) - out = self.dynamic_0d(self.x0, is_train=True) def test_BatchNorm1d(self): self.assertEqual(self.n1.shape[1:], (50, 32)) - out = self.static_1d(self.x1, is_train=True) - out = self.dynamic_1d(self.x1, is_train=True) def test_BatchNorm2d(self): self.assertEqual(self.n2.shape[1:], (50, 50, 32)) - out = self.static_2d(self.x2, is_train=True) - out = self.dynamic_2d(self.x2, is_train=True) - out = self.dynamic_2d(self.x2, is_train=False) def test_BatchNorm3d(self): self.assertEqual(self.n3.shape[1:], (50, 50, 50, 32)) - out = self.static_3d(self.x3, is_train=True) - out = self.dynamic_3d(self.x3, is_train=True) def test_dataformat(self): bn1d = BatchNorm1d(data_format='channels_first', num_features=32) @@ -242,26 +201,6 @@ def test_exception(self): self.assertIsInstance(e, ValueError) print(e) - def test_input_shape(self): - try: - bn = BatchNorm1d(num_features=32) - out = bn(self.x2) - except Exception as e: - self.assertIsInstance(e, ValueError) - print(e) - try: - bn = BatchNorm2d(num_features=32) - out = bn(self.x3) - except Exception as e: - self.assertIsInstance(e, ValueError) - print(e) - try: - bn = BatchNorm3d(num_features=32) - out = bn(self.x1) - except Exception as e: - self.assertIsInstance(e, ValueError) - print(e) - if __name__ == '__main__': diff --git a/tests/layers/test_layers_padding.py b/tests/layers/test_layers_padding.py index a92da5197..fa3e13bb1 100644 --- a/tests/layers/test_layers_padding.py +++ b/tests/layers/test_layers_padding.py @@ -4,12 +4,11 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Padding_Test(CustomTestCase): @@ -23,9 +22,6 @@ def setUpClass(cls): n1 = tl.layers.ZeroPad1d(padding=1)(cls.input_layer1) n2 = tl.layers.ZeroPad1d(padding=(2, 3))(cls.input_layer1) - print(n1._info[0].layer) - print(n2._info[0].layer) - cls.n1_shape = n1.get_shape().as_list() cls.n2_shape = n2.get_shape().as_list() @@ -37,13 +33,7 @@ def setUpClass(cls): n4 = tl.layers.ZeroPad2d(padding=(2, 3))(cls.input_layer2) n5 = tl.layers.ZeroPad2d(padding=((3, 3), (4, 4)))(cls.input_layer2) - print(n0._info[0].layer) - print(n3._info[0].layer) - print(n4._info[0].layer) - print(n5._info[0].layer) - cls.n0_shape = n0.get_shape().as_list() - print(cls.n0_shape) cls.n3_shape = n3.get_shape().as_list() cls.n4_shape = n4.get_shape().as_list() cls.n5_shape = n5.get_shape().as_list() @@ -55,10 +45,6 @@ def setUpClass(cls): n7 = tl.layers.ZeroPad3d(padding=(2, 3, 4))(cls.input_layer3) n8 = 
tl.layers.ZeroPad3d(padding=((3, 3), (4, 4), (5, 5)))(cls.input_layer3) - print(n6._info[0].layer) - print(n7._info[0].layer) - print(n8._info[0].layer) - cls.n6_shape = n6.get_shape().as_list() cls.n7_shape = n7.get_shape().as_list() cls.n8_shape = n8.get_shape().as_list() diff --git a/tests/layers/test_layers_pooling.py b/tests/layers/test_layers_pooling.py index 5ab3e3e98..de61d52bc 100644 --- a/tests/layers/test_layers_pooling.py +++ b/tests/layers/test_layers_pooling.py @@ -4,13 +4,12 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl from tensorlayer.layers import * -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Pooling_Test(CustomTestCase): @@ -31,24 +30,18 @@ def setUpClass(cls): n16 = tl.layers.MaxPool1d(filter_size=3, strides=1, padding='VALID', dilation_rate=2, name='test_maxpool1d')(n1) n17 = tl.layers.MeanPool1d(filter_size=3, strides=1, padding='VALID', dilation_rate=2, name='test_meanpool1d')(n1) - - cls.n1_shape = n1.get_shape().as_list() - cls.n2_shape = n2.get_shape().as_list() - cls.n3_shape = n3.get_shape().as_list() - cls.n4_shape = n4.get_shape().as_list() - cls.n5_shape = n5.get_shape().as_list() - cls.n16_shape = n16.get_shape().as_list() - cls.n17_shape = n17.get_shape().as_list() - - print("Printing Pool1d") - print(nin_1._info[0].layer) - print(n1._info[0].layer) - print(n2._info[0].layer) - print(n3._info[0].layer) - print(n4._info[0].layer) - print(n5._info[0].layer) - print(n16._info[0].layer) - print(n17._info[0].layer) + n19 = tl.layers.AdaptiveMeanPool1d(output_size=44, name='test_adaptivemeanpool1d')(n1) + n20 = tl.layers.AdaptiveMaxPool1d(output_size=44, name='test_adaptivemaxpool1d')(n1) + + cls.n1_shape = tl.get_tensor_shape(n1) + cls.n2_shape = tl.get_tensor_shape(n2) + cls.n3_shape = tl.get_tensor_shape(n3) + cls.n4_shape = tl.get_tensor_shape(n4) + cls.n5_shape = tl.get_tensor_shape(n5) + cls.n16_shape = tl.get_tensor_shape(n16) + cls.n17_shape = tl.get_tensor_shape(n17) + cls.n19_shape = tl.get_tensor_shape(n19) + cls.n20_shape = tl.get_tensor_shape(n20) ## 2D ======================================================================== @@ -61,25 +54,18 @@ def setUpClass(cls): n9 = tl.layers.GlobalMaxPool2d(name='test_maxpool2d')(n6) n10 = tl.layers.GlobalMeanPool2d(name='test_meanpool2d')(n6) n15 = tl.layers.PoolLayer(name='test_pool2d')(n6) - n18 = tl.layers.CornerPool2d('TopLeft', name='test_cornerpool2d')(n6) - - cls.n6_shape = n6.get_shape().as_list() - cls.n7_shape = n7.get_shape().as_list() - cls.n8_shape = n8.get_shape().as_list() - cls.n9_shape = n9.get_shape().as_list() - cls.n10_shape = n10.get_shape().as_list() - cls.n15_shape = n15.get_shape().as_list() - cls.n18_shape = n18.get_shape().as_list() - - print("Printing Pool2d") - print(nin_2._info[0].layer) - print(n6._info[0].layer) - print(n7._info[0].layer) - print(n8._info[0].layer) - print(n9._info[0].layer) - print(n10._info[0].layer) - print(n15._info[0].layer) - print(n18._info[0].layer) + # n18 = tl.layers.CornerPool2d('TopLeft', name='test_cornerpool2d')(n6) + n21 = tl.layers.AdaptiveMeanPool2d(output_size=(45, 32), name='test_adaptivemeanpool2d')(n6) + n22 = tl.layers.AdaptiveMaxPool2d(output_size=(45, 32), name='test_adaptivemaxpool2d')(n6) + + cls.n6_shape = tl.get_tensor_shape(n6) + cls.n7_shape = tl.get_tensor_shape(n7) + cls.n8_shape = tl.get_tensor_shape(n8) + cls.n9_shape = tl.get_tensor_shape(n9) + cls.n10_shape = 
tl.get_tensor_shape(n10)
+        cls.n15_shape = tl.get_tensor_shape(n15)
+        cls.n21_shape = tl.get_tensor_shape(n21)
+        cls.n22_shape = tl.get_tensor_shape(n22)

         ## 3D ========================================================================

@@ -93,17 +79,17 @@ def setUpClass(cls):
         n14 = tl.layers.MaxPool3d(filter_size=(3, 3, 3), strides=(2, 2, 2), padding='SAME',
                                   name='test_maxpool3d')(nin_3)

+        n23 = tl.layers.AdaptiveMeanPool3d(output_size=(45, 32, 55), name='test_adaptivemeanpool3d')(nin_3)
+        n24 = tl.layers.AdaptiveMaxPool3d(output_size=(45, 32, 55), name='test_adaptivemaxpool3d')(nin_3)
+
         cls.n11_shape = n11.get_shape().as_list()
         cls.n12_shape = n12.get_shape().as_list()
         cls.n13_shape = n13.get_shape().as_list()
         cls.n14_shape = n14.get_shape().as_list()
-
-        print("Printing Pool3d")
-        print(nin_3._info[0].layer)
-        print(n11._info[0].layer)
-        print(n12._info[0].layer)
-        print(n13._info[0].layer)
-        print(n14._info[0].layer)
+        cls.n23_shape = n23.get_shape().as_list()
+        cls.n24_shape = n24.get_shape().as_list()

     @classmethod
     def tearDownClass(cls):
@@ -159,10 +145,28 @@ def test_n16_shape(self):
         self.assertEqual(self.n16_shape[1:4], [46, 32])

     def test_n17_shape(self):
-        self.assertEqual(self.n17_shape[1:4], [48, 32])
+        self.assertEqual(self.n17_shape[1:4], [46, 32])
+
+    def test_n19_shape(self):
+        self.assertEqual(self.n19_shape[1:3], [44, 32])
+
+    def test_n20_shape(self):
+        self.assertEqual(self.n20_shape[1:3], [44, 32])
+
+    def test_n21_shape(self):
+        self.assertEqual(self.n21_shape[1:4], [45, 32, 32])
+
+    def test_n22_shape(self):
+        self.assertEqual(self.n22_shape[1:4], [45, 32, 32])
+
+    def test_n23_shape(self):
+        self.assertEqual(self.n23_shape[1:5], [45, 32, 55, 3])
+
+    def test_n24_shape(self):
+        self.assertEqual(self.n24_shape[1:5], [45, 32, 55, 3])

-    def test_n18_shape(self):
-        self.assertEqual(self.n18_shape[1:], [50, 50, 32])
+    # def test_n18_shape(self):
+    #     self.assertEqual(self.n18_shape[1:], [50, 50, 32])


 if __name__ == '__main__':
diff --git a/tests/layers/test_layers_recurrent.py b/tests/layers/test_layers_recurrent.py
index 6f9eff3ea..011327e71 100644
--- a/tests/layers/test_layers_recurrent.py
+++ b/tests/layers/test_layers_recurrent.py
@@ -4,13 +4,12 @@
 import os
 import unittest

-import numpy as np
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

+import numpy as np
 import tensorlayer as tl
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_RNN_Test(CustomTestCase):
diff --git a/tests/layers/test_layers_resampling.py b/tests/layers/test_layers_resampling.py
index 643303558..4f0bbc903 100644
--- a/tests/layers/test_layers_resampling.py
+++ b/tests/layers/test_layers_resampling.py
@@ -1,18 +1,17 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-

-import os
 import sys
+sys.path.append("/home/wurundi/workspace/tensorlayer2")
+
+import os
 import unittest

-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

 import tensorlayer as tl
 from tensorlayer.layers import *
-from tests.utils import CustomTestCase

-sys.path.append("/home/wurundi/workspace/tensorlayer2")
-
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_Pooling_Test(CustomTestCase):
@@ -43,14 +42,6 @@ def setUpClass(cls):
         cls.n9_shape = n9.get_shape().as_list()
         cls.n10_shape = n10.get_shape().as_list()

-        print("Printing UpSampling2d")
-        print(nin_2._info[0].layer)
-        print(n6._info[0].layer)
-        print(n7._info[0].layer)
-        print(n8._info[0].layer)
-        print(n9._info[0].layer)
-        print(n10._info[0].layer)
-
     @classmethod
     def tearDownClass(cls):
         pass
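The new `Adaptive*Pool` layers tested above fix the *output* spatial size and derive the kernel and stride from the input, unlike `MaxPool2d`, which fixes the kernel. A quick sketch in the same direct-call style as the tests (assuming the TL 3.x layers added in this patch):

```python
import tensorlayer as tl

# pool a [batch, 50, 50, 32] map down to a fixed 25x10 output
ni = tl.layers.Input([8, 50, 50, 32], name='in')
out = tl.layers.AdaptiveMaxPool2d(output_size=(25, 10))(ni)
print(tl.get_tensor_shape(out))  # expected: [8, 25, 10, 32]
```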
diff --git a/tests/layers/test_layers_scale.py b/tests/layers/test_layers_scale.py
index fdf5228ed..c87e560af 100644
--- a/tests/layers/test_layers_scale.py
+++ b/tests/layers/test_layers_scale.py
@@ -4,13 +4,11 @@
 import os
 import unittest

-import numpy as np
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

 import tensorlayer as tl
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_Scale_Test(CustomTestCase):
@@ -24,16 +22,22 @@ def tearDownClass(cls):
         pass

     def test_scale(self):
-        inputs = tl.layers.Input([8, 3])
-        dense = tl.layers.Dense(n_units=10)(inputs)
-        scalelayer = tl.layers.Scale(init_scale=0.5)
-        outputs = scalelayer(dense)
-        model = tl.models.Model(inputs=inputs, outputs=[dense, outputs])
-
-        print(scalelayer)

-        data = np.random.random(size=[8, 3]).astype(np.float32)
-        dout, fout = model(data, is_train=True)
+        class model(tl.layers.Module):

+            def __init__(self):
+                super(model, self).__init__()
+                self.dense = tl.layers.Dense(n_units=10)
+                self.scalelayer = tl.layers.Scale(init_scale=0.5)
+
+            def forward(self, inputs):
+                output1 = self.dense(inputs)
+                output2 = self.scalelayer(output1)
+                return output1, output2
+
+        input = tl.layers.Input((8, 3), init=tl.initializers.random_normal())
+        net = model()
+        net.set_train()
+        dout, fout = net(input)

         for i in range(len(dout)):
             for j in range(len(dout[i])):
diff --git a/tests/layers/test_layers_shape.py b/tests/layers/test_layers_shape.py
index 2ece6b0b7..139a3f8ed 100644
--- a/tests/layers/test_layers_shape.py
+++ b/tests/layers/test_layers_shape.py
@@ -3,22 +3,21 @@
 import os
 import unittest

-
 import numpy as np
-import tensorflow as tf
+
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

 import tensorlayer as tl
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_Shape_Test(CustomTestCase):

     @classmethod
     def setUpClass(cls):
-        cls.data = np.random.random(size=[8, 4, 3]).astype(np.float32)
-        cls.imgdata = np.random.random(size=[2, 16, 16, 8]).astype(np.float32)
+        cls.data = tl.layers.Input(shape=(8, 4, 3), init=tl.initializers.random_normal())
+        cls.imgdata = tl.layers.Input(shape=(2, 16, 16, 8), init=tl.initializers.random_normal())

     @classmethod
     def tearDownClass(cls):
@@ -26,7 +25,7 @@ def tearDownClass(cls):

     def test_flatten(self):

-        class CustomizeModel(tl.models.Model):
+        class CustomizeModel(tl.layers.Module):

             def __init__(self):
super(CustomizeModel, self).__init__() @@ -78,15 +77,16 @@ def __init__(self): def forward(self, x): return self.transpose1(x), self.transpose2(x), self.transpose3(x), self.transpose4(x) - real = np.random.random([8, 4, 3]).astype(np.float32) - comp = np.random.random([8, 4, 3]).astype(np.float32) - complex_data = real + 1j * comp + real = tl.layers.Input(shape=(8, 4, 3), init=tl.initializers.random_normal()) + comp = tl.layers.Input(shape=(8, 4, 3), init=tl.initializers.random_normal()) + import tensorflow as tf + complex_data = tf.dtypes.complex(real, comp) model = CustomizeModel() print(model.transpose1) print(model.transpose2) print(model.transpose3) print(model.transpose4) - model.train() + model.set_train() out1, out2, out3, out4 = model(self.data) self.assertEqual(out1.get_shape().as_list(), [3, 4, 8]) self.assertEqual(out2.get_shape().as_list(), [3, 4, 8]) @@ -103,7 +103,7 @@ def forward(self, x): def test_shuffle(self): - class CustomizeModel(tl.models.Model): + class CustomizeModel(tl.layers.Module): def __init__(self, x): super(CustomizeModel, self).__init__() @@ -114,18 +114,9 @@ def forward(self, x): model = CustomizeModel(2) print(model.shuffle) - model.train() + model.set_train() out = model(self.imgdata) self.assertEqual(out.get_shape().as_list(), [2, 16, 16, 8]) - try: - model_fail = CustomizeModel(3) - print(model_fail.shuffle) - model_fail.train() - out = model_fail(self.imgdata) - self.assertEqual(out.get_shape().as_list(), [2, 16, 16, 8]) - except Exception as e: - self.assertIsInstance(e, ValueError) - print(e) if __name__ == '__main__': diff --git a/tests/layers/test_layers_stack.py b/tests/layers/test_layers_stack.py index 4005c61e8..6a703cb82 100644 --- a/tests/layers/test_layers_stack.py +++ b/tests/layers/test_layers_stack.py @@ -3,14 +3,12 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorlayer as tl from tensorlayer.layers import * -from tensorlayer.models import * -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Stack_Test(CustomTestCase): @@ -22,16 +20,29 @@ def setUpClass(cls): cls.inputs_shape = [cls.batch_size, 10] cls.ni = Input(cls.inputs_shape, name='input_layer') + class model(tl.layers.Module): + def __init__(self): + super(model, self).__init__() + self.a = Dense(n_units=5) + self.b = Dense(n_units=5) + self.stack = Stack(axis=1) + + def forward(self, inputs): + output1 = self.a(inputs) + output2 = self.b(inputs) + output = self.stack([output1, output2]) + return output + a = Dense(n_units=5)(cls.ni) b = Dense(n_units=5)(cls.ni) cls.layer1 = Stack(axis=1) cls.n1 = cls.layer1([a, b]) - cls.M = Model(inputs=cls.ni, outputs=cls.n1) - cls.inputs = tf.random.uniform(cls.inputs_shape) - cls.n2 = cls.M(cls.inputs, is_train=True) + net = model() + net.set_train() + cls.inputs = Input(cls.inputs_shape) + cls.n2 = net(cls.inputs) - print(cls.layer1) @classmethod def tearDownClass(cls): @@ -54,12 +65,25 @@ def setUpClass(cls): cls.ni = Input(cls.inputs_shape, name='input_layer') a = Dense(n_units=5)(cls.ni) - cls.layer1 = UnStack(axis=1) # unstack in channel axis + cls.layer1 = UnStack(axis=1) cls.n1 = cls.layer1(a) - cls.M = Model(inputs=cls.ni, outputs=cls.n1) - cls.inputs = tf.random.uniform(cls.inputs_shape) - cls.n2 = cls.M(cls.inputs, is_train=True) + class model(tl.layers.Module): + def __init__(self): + super(model, self).__init__() + self.a = Dense(n_units=5) + self.unstack = UnStack(axis=1) + + def 
forward(self, inputs):
+                output1 = self.a(inputs)
+                output = self.unstack(output1)
+                return output
+
+        cls.inputs = Input(cls.inputs_shape)
+        net = model()
+        net.set_train()
+        cls.n2 = net(cls.inputs)

         print(cls.layer1)
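The `Stack`/`UnStack` round trip exercised by these tests can be sketched standalone (a minimal sketch in the tests' direct-call style; shapes are illustrative):

```python
import tensorlayer as tl

# Stack merges a list of tensors along a new axis; UnStack reverses it.
ni = tl.layers.Input([4, 10])
a = tl.layers.Dense(n_units=5)(ni)
b = tl.layers.Dense(n_units=5)(ni)
stacked = tl.layers.Stack(axis=1)([a, b])   # shape [4, 2, 5]
parts = tl.layers.UnStack(axis=1)(stacked)  # a list of two [4, 5] tensors
print(tl.get_tensor_shape(stacked), len(parts))
```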
diff --git a/tests/models/test_auto_naming.py b/tests/models/test_auto_naming.py
index 65337a8c9..fb8f03720 100644
--- a/tests/models/test_auto_naming.py
+++ b/tests/models/test_auto_naming.py
@@ -3,15 +3,15 @@
 import os
 import unittest

+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import numpy as np
 import tensorflow as tf
-
 import tensorlayer as tl
 from tensorlayer.layers import *
 from tensorlayer.models import *
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 def basic_static_model(name=None, conv1_name="conv1", conv2_name="conv2"):
diff --git a/tests/models/test_keras_save.py b/tests/models/test_keras_save.py
index caadd6574..2d40b31ef 100644
--- a/tests/models/test_keras_save.py
+++ b/tests/models/test_keras_save.py
@@ -1,8 +1,8 @@
-import tensorflow as tf
-from tensorflow.python.keras import Model
 from tensorflow.python.keras.applications import VGG16
-from tensorflow.python.keras.layers import Conv2D, Dense
+from tensorflow.python.keras.layers import Dense, Conv2D
+from tensorflow.python.keras import Model
 from tensorflow.python.training import saver
+import tensorflow as tf

 # get the whole model
 # vgg = VGG16(weights=None)
diff --git a/tests/models/test_model_core.py b/tests/models/test_model_core.py
index 0a98e154d..3db470f9d 100644
--- a/tests/models/test_model_core.py
+++ b/tests/models/test_model_core.py
@@ -3,15 +3,15 @@
 import os
 import unittest

+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import numpy as np
 import tensorflow as tf
-
 import tensorlayer as tl
 from tensorlayer.layers import *
 from tensorlayer.models import *
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 def basic_static_model():
diff --git a/tests/models/test_model_save.py b/tests/models/test_model_save.py
index 001e9a3df..ba224ee25 100644
--- a/tests/models/test_model_save.py
+++ b/tests/models/test_model_save.py
@@ -3,15 +3,15 @@
 import os
 import unittest

+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import numpy as np
 import tensorflow as tf
-
 import tensorlayer as tl
 from tensorlayer.layers import *
 from tensorlayer.models import *
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 def basic_static_model(include_top=True):
diff --git a/tests/models/test_model_save_graph.py b/tests/models/test_model_save_graph.py
index 1e9b898a1..3e527159d 100644
--- a/tests/models/test_model_save_graph.py
+++ b/tests/models/test_model_save_graph.py
@@ -4,15 +4,15 @@
 import os
 import unittest

+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import numpy as np
 import tensorflow as tf
-
 import tensorlayer as tl
 from tensorlayer.layers import *
 from tensorlayer.models import *
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 def RemoveDateInConfig(config):
diff --git a/tests/models/test_seq2seq_model.py b/tests/models/test_seq2seq_model.py
index 52939e764..d77aa47ba 100644
--- a/tests/models/test_seq2seq_model.py
+++ b/tests/models/test_seq2seq_model.py
@@ -4,17 +4,16 @@
 import os
 import unittest

+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import numpy as np
 import tensorflow as tf
-from sklearn.utils import shuffle
-from tqdm import tqdm
-
 import tensorlayer as tl
-from tensorlayer.cost import cross_entropy_seq
+from tqdm import tqdm
+from sklearn.utils import shuffle
 from tensorlayer.models.seq2seq import Seq2seq
 from tests.utils import CustomTestCase
-
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tensorlayer.cost import cross_entropy_seq


 class Model_SEQ2SEQ_Test(CustomTestCase):
diff --git a/tests/models/test_seq2seq_with_attention.py b/tests/models/test_seq2seq_with_attention.py
index 9cfc07cec..d7dbeae34 100644
--- a/tests/models/test_seq2seq_with_attention.py
+++ b/tests/models/test_seq2seq_with_attention.py
@@ -4,17 +4,16 @@
 import os
 import unittest

+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import numpy as np
 import tensorflow as tf
-from sklearn.utils import shuffle
-from tqdm import tqdm
-
 import tensorlayer as tl
-from tensorlayer.cost import cross_entropy_seq
+from tqdm import tqdm
+from sklearn.utils import shuffle
 from tensorlayer.models.seq2seq_with_attention import Seq2seqLuongAttention
 from tests.utils import CustomTestCase
-
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tensorlayer.cost import cross_entropy_seq


 class Model_SEQ2SEQ_WITH_ATTENTION_Test(CustomTestCase):
diff --git a/tests/pending/test_array_ops.py b/tests/pending/test_array_ops.py
index 7813e286e..56b80d485 100644
--- a/tests/pending/test_array_ops.py
+++ b/tests/pending/test_array_ops.py
@@ -4,13 +4,14 @@
 import os
 import unittest

-import numpy as np
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+import numpy as np
+
+from tests.utils import CustomTestCase


 class Array_Op_Alphas_Test(CustomTestCase):
diff --git a/tests/pending/test_decorators.py b/tests/pending/test_decorators.py
index fbe91b2ba..cc8878543 100644
--- a/tests/pending/test_decorators.py
+++ b/tests/pending/test_decorators.py
@@ -4,13 +4,14 @@
 import os
 import unittest

-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

+import tensorflow as tf
 import tensorlayer as tl
+
 from tensorlayer.decorators import private_method
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_Pooling_Test(CustomTestCase):
diff --git a/tests/pending/test_documentation.py b/tests/pending/test_documentation.py
index 332a5cb03..211142e8d 100755
--- a/tests/pending/test_documentation.py
+++ b/tests/pending/test_documentation.py
@@ -4,10 +4,10 @@
 import os
 import unittest

-from sphinx.application import Sphinx
-
 os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

+from sphinx.application import Sphinx
+

 class DocTest(unittest.TestCase):

     source_dir = u'docs/'
diff --git a/tests/pending/test_layers_basic.py b/tests/pending/test_layers_basic.py
index 209663bd2..2771f961a 100644
--- a/tests/pending/test_layers_basic.py
+++ b/tests/pending/test_layers_basic.py
@@ -4,12 +4,12 @@
 import os
 import unittest

-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase

-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase


 class Layer_Basic_Test(CustomTestCase):
diff --git a/tests/pending/test_layers_flow_control.py b/tests/pending/test_layers_flow_control.py
b/tests/pending/test_layers_flow_control.py index b82c460b6..d86eb217a 100644 --- a/tests/pending/test_layers_flow_control.py +++ b/tests/pending/test_layers_flow_control.py @@ -4,12 +4,12 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +import tensorflow as tf import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Flow_Control_Test(CustomTestCase): diff --git a/tests/pending/test_layers_importer.py b/tests/pending/test_layers_importer.py index c5a2f0d3c..1c1321acb 100644 --- a/tests/pending/test_layers_importer.py +++ b/tests/pending/test_layers_importer.py @@ -4,17 +4,20 @@ import os import unittest -import tensorflow as tf -from tensorflow.contrib.slim.python.slim.nets.inception_v3 import (inception_v3, inception_v3_arg_scope) +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' -import tensorlayer as tl -from tests.utils import CustomTestCase +import tensorflow as tf -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3 +from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3_arg_scope slim = tf.contrib.slim keras = tf.keras +import tensorlayer as tl + +from tests.utils import CustomTestCase + class Layer_Importer_Test(CustomTestCase): diff --git a/tests/pending/test_layers_normalization.py b/tests/pending/test_layers_normalization.py index e6fd8bd81..d0891abf1 100644 --- a/tests/pending/test_layers_normalization.py +++ b/tests/pending/test_layers_normalization.py @@ -4,12 +4,12 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +import tensorflow as tf import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase def model(x, is_train=True, reuse=False): diff --git a/tests/pending/test_layers_padding.py b/tests/pending/test_layers_padding.py index 163838cb5..ab6f6b54d 100644 --- a/tests/pending/test_layers_padding.py +++ b/tests/pending/test_layers_padding.py @@ -4,12 +4,12 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +import tensorflow as tf import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase class Layer_Padding_Test(CustomTestCase): diff --git a/tests/pending/test_layers_spatial_transformer.py b/tests/pending/test_layers_spatial_transformer.py index b585f6032..4c6d81b44 100644 --- a/tests/pending/test_layers_spatial_transformer.py +++ b/tests/pending/test_layers_spatial_transformer.py @@ -4,12 +4,12 @@ import os import unittest -import tensorflow as tf +os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +import tensorflow as tf import tensorlayer as tl -from tests.utils import CustomTestCase -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' +from tests.utils import CustomTestCase def model(x, is_train, reuse): @@ -21,8 +21,8 @@ def model(x, is_train, reuse): nt = tl.layers.DenseLayer(nt, n_units=20, act=tf.nn.tanh, name='dense1') nt = tl.layers.DropoutLayer(nt, keep=0.8, is_fix=True, is_train=is_train, name='drop1') # you can also use CNN instead for MLP as the localisation net - # nt = Conv2d(nin, 16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME', name='tc1') - # nt = Conv2d(nt, 8, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME', name='tc2') + # nt = Conv2d(nin, 16, (3, 3), (2, 2), act=tf.ops.relu, 
diff --git a/tests/pending/test_layers_spatial_transformer.py b/tests/pending/test_layers_spatial_transformer.py
index b585f6032..4c6d81b44 100644
--- a/tests/pending/test_layers_spatial_transformer.py
+++ b/tests/pending/test_layers_spatial_transformer.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 def model(x, is_train, reuse):
@@ -21,8 +21,8 @@ def model(x, is_train, reuse):
         nt = tl.layers.DenseLayer(nt, n_units=20, act=tf.nn.tanh, name='dense1')
         nt = tl.layers.DropoutLayer(nt, keep=0.8, is_fix=True, is_train=is_train, name='drop1')
         # you can also use CNN instead for MLP as the localisation net
-        # nt = Conv2d(nin, 16, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME', name='tc1')
-        # nt = Conv2d(nt, 8, (3, 3), (2, 2), act=tf.nn.relu, padding='SAME', name='tc2')
+        # nt = Conv2d(nin, 16, (3, 3), (2, 2), act=tf.ops.relu, padding='SAME', name='tc1')
+        # nt = Conv2d(nt, 8, (3, 3), (2, 2), act=tf.ops.relu, padding='SAME', name='tc2')
         ## 2. Spatial transformer module (sampler)
         n = tl.layers.SpatialTransformer2dAffineLayer(nin, theta_layer=nt, out_size=(40, 40), name='spatial')
         s = n
diff --git a/tests/pending/test_layers_stack.py b/tests/pending/test_layers_stack.py
index c223b0553..0745a834d 100644
--- a/tests/pending/test_layers_stack.py
+++ b/tests/pending/test_layers_stack.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Layer_Stack_Test(CustomTestCase):
diff --git a/tests/pending/test_layers_super_resolution.py b/tests/pending/test_layers_super_resolution.py
index f60986700..9b359cb99 100644
--- a/tests/pending/test_layers_super_resolution.py
+++ b/tests/pending/test_layers_super_resolution.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Layer_Super_Resolution_Test(CustomTestCase):
diff --git a/tests/pending/test_layers_time_distributed.py b/tests/pending/test_layers_time_distributed.py
index bb2f33fc0..a97c51117 100644
--- a/tests/pending/test_layers_time_distributed.py
+++ b/tests/pending/test_layers_time_distributed.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 def model(x, is_train=True, reuse=False, name_scope="env1"):
diff --git a/tests/pending/test_logging.py b/tests/pending/test_logging.py
index 59f171b21..fffdf7cc5 100644
--- a/tests/pending/test_logging.py
+++ b/tests/pending/test_logging.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class TL_Logger_Test(CustomTestCase):
diff --git a/tests/pending/test_logging_hyperdash.py b/tests/pending/test_logging_hyperdash.py
index 6616bd1c9..c39e66160 100644
--- a/tests/pending/test_logging_hyperdash.py
+++ b/tests/pending/test_logging_hyperdash.py
@@ -2,16 +2,18 @@
 # -*- coding: utf-8 -*-
 
 import os
-import time
 import unittest
 
-import tensorflow as tf
+import time
+
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
+
 from tensorlayer.logging.contrib import hyperdash as hd
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class TL_Logger_Test(CustomTestCase):
diff --git a/tests/pending/test_mnist_simple.py b/tests/pending/test_mnist_simple.py
index 90fa18b36..ec14c390a 100644
--- a/tests/pending/test_mnist_simple.py
+++ b/tests/pending/test_mnist_simple.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Simple_MNIST_Test(CustomTestCase):
@@ -31,7 +31,7 @@ def setUpClass(cls):
 
         # the softmax is implemented internally in tl.cost.cross_entropy(y, y_) to
        # speed up computation, so we use identity here.
-        # see tf.nn.sparse_softmax_cross_entropy_with_logits()
+        # see tf.ops.sparse_softmax_cross_entropy_with_logits()
         cls.network = tl.layers.DenseLayer(network, n_units=10, name='output')
 
         # define cost function and metric.
@@ -41,7 +41,7 @@ def setUpClass(cls):
         correct_prediction = tf.equal(tf.argmax(y, 1), cls.y_)
         cls.acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
 
-        # y_op = tf.argmax(tf.nn.softmax(y), 1)
+        # y_op = tf.argmax(tf.ops.softmax(y), 1)
 
         # define the optimizer
         train_params = cls.network.trainable_weights
diff --git a/tests/pending/test_models.py b/tests/pending/test_models.py
index dd0e07cbd..ecaf036fc 100644
--- a/tests/pending/test_models.py
+++ b/tests/pending/test_models.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class VGG_Model_Test(CustomTestCase):
@@ -26,7 +26,7 @@ def setUpClass(cls):
         # sess = tf.InteractiveSession()
         # vgg.restore_params(sess)
         # use for inferencing
-        # probs = tf.nn.softmax(vgg1.outputs)
+        # probs = tf.ops.softmax(vgg1.outputs)
 
         cls.vgg1_layers = vgg1.all_layers
         cls.vgg1_params = vgg1.all_params
diff --git a/tests/pending/test_optimizer_amsgrad.py b/tests/pending/test_optimizer_amsgrad.py
index 919881c41..0ceb8b372 100644
--- a/tests/pending/test_optimizer_amsgrad.py
+++ b/tests/pending/test_optimizer_amsgrad.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Layer_Pooling_Test(CustomTestCase):
diff --git a/tests/pending/test_pydocstyle.py b/tests/pending/test_pydocstyle.py
index 5a7143d1d..b93bf74db 100755
--- a/tests/pending/test_pydocstyle.py
+++ b/tests/pending/test_pydocstyle.py
@@ -4,10 +4,12 @@
 import os
 import unittest
 
-from pydocstyle.checker import check, violations
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 from tests.utils import list_all_py_files
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from pydocstyle.checker import check
+from pydocstyle.checker import violations
 
 registry = violations.ErrorRegistry
diff --git a/tests/pending/test_reuse_mlp.py b/tests/pending/test_reuse_mlp.py
index 5992b8bda..3ca435b38 100644
--- a/tests/pending/test_reuse_mlp.py
+++ b/tests/pending/test_reuse_mlp.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 # define the network
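The comments being edited in test_mnist_simple record a real design decision: the final `DenseLayer` has no softmax because `tl.cost.cross_entropy` fuses softmax and cross-entropy internally for speed and numerical stability. The plain TensorFlow equivalent of that fused computation, as an illustrative sketch:

```python
import numpy as np
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])    # raw outputs of the identity output layer
labels = tf.constant([0], dtype=tf.int64)  # integer class ids, not one-hot

# Fused softmax + negative log-likelihood: numerically stable, one op.
fused = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

# The naive two-step version it replaces (less stable for extreme logits).
manual = -tf.math.log(tf.nn.softmax(logits)[0, 0])

np.testing.assert_allclose(fused.numpy()[0], manual.numpy(), rtol=1e-6)
```
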
diff --git a/tests/pending/test_tf_layers.py b/tests/pending/test_tf_layers.py
index 3ba11820c..dc04a06ff 100644
--- a/tests/pending/test_tf_layers.py
+++ b/tests/pending/test_tf_layers.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Layer_Convolution_1D_Test(CustomTestCase):
diff --git a/tests/pending/test_timeout.py b/tests/pending/test_timeout.py
index 914c0bdf6..9b5dda621 100644
--- a/tests/pending/test_timeout.py
+++ b/tests/pending/test_timeout.py
@@ -3,15 +3,21 @@
 
 import os
 import time
+
 import unittest
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import (CustomTestCase, TimeoutContext, TimeoutError, WindowsError)
-from tests.utils.custom_networks import InceptionV4_Network
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import WindowsError
+from tests.utils import TimeoutError
+
+from tests.utils import TimeoutContext
+from tests.utils import CustomTestCase
+
+from tests.utils.custom_networks import InceptionV4_Network
 
 if os.getenv("TRAVIS", None) is not None:
     NETWORK_CREATION_TIMEOUT = 120  # Seconds before timeout
diff --git a/tests/pending/test_utils_predict.py b/tests/pending/test_utils_predict.py
index bea7eb99e..ec751e275 100644
--- a/tests/pending/test_utils_predict.py
+++ b/tests/pending/test_utils_predict.py
@@ -4,13 +4,14 @@
 import os
 import unittest
 
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import numpy as np
-import tensorflow as tf
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Util_Predict_Test(CustomTestCase):
diff --git a/tests/pending/test_yapf_format.py b/tests/pending/test_yapf_format.py
index 2dc790ea9..05ff6f699 100644
--- a/tests/pending/test_yapf_format.py
+++ b/tests/pending/test_yapf_format.py
@@ -4,9 +4,10 @@
 import sys
 import unittest
 
-from yapf.yapflib.yapf_api import FormatCode
+from tests.utils import list_all_py_files
+from tests.utils import CustomTestCase
 
-from tests.utils import CustomTestCase, list_all_py_files
+from yapf.yapflib.yapf_api import FormatCode
 
 
 def _read_utf_8_file(filename):
diff --git a/tests/performance_test/vgg/keras_test.py b/tests/performance_test/vgg/keras_test.py
index fdb0b89d6..4b77cbea1 100644
--- a/tests/performance_test/vgg/keras_test.py
+++ b/tests/performance_test/vgg/keras_test.py
@@ -1,14 +1,12 @@
-import os
 import time
-
+import os
 import psutil
-import tensorflow as tf
-
 import keras
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
 from keras.applications.vgg16 import VGG16
 from keras.backend.tensorflow_backend import set_session
 from keras.utils import to_categorical
+import tensorflow as tf
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 config = tf.ConfigProto()
 config.gpu_options.allow_growth = True
diff --git a/tests/performance_test/vgg/pytorch_test.py b/tests/performance_test/vgg/pytorch_test.py
index aaf278d4f..a81aa0be3 100644
--- a/tests/performance_test/vgg/pytorch_test.py
+++ b/tests/performance_test/vgg/pytorch_test.py
@@ -1,14 +1,12 @@
-import os
-import time
-
-import numpy as np
-import psutil
 import torch
 import torch.nn.functional as F
 import torch.optim as optim
-
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
 from torchvision.models import vgg16
+import time
+import os
+import psutil
+import numpy as np
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 # set gpu_id 0
 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
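The VGG benchmark scripts above all pull their knobs from `exp_config`, which this patch does not show; the following reconstruction of the shared measurement loop is therefore an assumption, with `random_input_generator`, `MONITOR_INTERVAL`, `NUM_ITERS` and `BATCH_SIZE` stubbed with plausible values. The one concrete piece is the `psutil` call, which is the usual way such scripts sample resident memory:

```python
import os
import time

import numpy as np
import psutil

# Hypothetical stand-ins for the values imported from exp_config (not shown in this patch).
MONITOR_INTERVAL = 50
NUM_ITERS = 300
BATCH_SIZE = 32


def random_input_generator(num_iters, batch_size, shape=(224, 224, 3), n_classes=1000):
    """Assumed behaviour: yield random image batches with integer labels."""
    for _ in range(num_iters):
        x = np.random.rand(batch_size, *shape).astype('float32')
        y = np.random.randint(0, n_classes, size=batch_size)
        yield x, y


process = psutil.Process(os.getpid())
start = time.time()
for step, (x_batch, y_batch) in enumerate(random_input_generator(NUM_ITERS, BATCH_SIZE)):
    # the framework-specific train step would run here
    if step % MONITOR_INTERVAL == 0:
        mem_mib = process.memory_info().rss / 1024**2  # resident set size in MiB
        print('step %d: %.1fs elapsed, %.0f MiB resident' % (step, time.time() - start, mem_mib))
```
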
diff --git a/tests/performance_test/vgg/tf2-autograph.py b/tests/performance_test/vgg/tf2-autograph.py
index 220196d34..90d2ccf0d 100644
--- a/tests/performance_test/vgg/tf2-autograph.py
+++ b/tests/performance_test/vgg/tf2-autograph.py
@@ -1,11 +1,9 @@
-import os
 import time
-
+import os
 import psutil
-import tensorflow as tf
 from tensorflow.python.keras.applications import VGG16
-
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
+import tensorflow as tf
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 gpus = tf.config.experimental.list_physical_devices('GPU')
 if gpus:
diff --git a/tests/performance_test/vgg/tf2-eager.py b/tests/performance_test/vgg/tf2-eager.py
index 800d4421d..d4c78088f 100644
--- a/tests/performance_test/vgg/tf2-eager.py
+++ b/tests/performance_test/vgg/tf2-eager.py
@@ -1,11 +1,9 @@
-import os
 import time
-
+import os
 import psutil
-import tensorflow as tf
 from tensorflow.python.keras.applications import VGG16
-
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
+import tensorflow as tf
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 gpus = tf.config.experimental.list_physical_devices('GPU')
 if gpus:
diff --git a/tests/performance_test/vgg/tl2-autograph.py b/tests/performance_test/vgg/tl2-autograph.py
index 1bfd6fb8c..63f553960 100644
--- a/tests/performance_test/vgg/tl2-autograph.py
+++ b/tests/performance_test/vgg/tl2-autograph.py
@@ -1,11 +1,9 @@
-import os
 import time
-
+import os
 import psutil
 import tensorflow as tf
-
 import tensorlayer as tl
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 gpus = tf.config.experimental.list_physical_devices('GPU')
 if gpus:
diff --git a/tests/performance_test/vgg/tl2-eager.py b/tests/performance_test/vgg/tl2-eager.py
index 9f0699fd3..fd2ef4085 100644
--- a/tests/performance_test/vgg/tl2-eager.py
+++ b/tests/performance_test/vgg/tl2-eager.py
@@ -1,11 +1,9 @@
-import os
 import time
-
+import os
 import psutil
 import tensorflow as tf
-
 import tensorlayer as tl
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 gpus = tf.config.experimental.list_physical_devices('GPU')
 if gpus:
diff --git a/tests/performance_test/vgg/tl2-static-autograph.py b/tests/performance_test/vgg/tl2-static-autograph.py
index 4c42a0616..0af20adb8 100644
--- a/tests/performance_test/vgg/tl2-static-autograph.py
+++ b/tests/performance_test/vgg/tl2-static-autograph.py
@@ -1,11 +1,9 @@
-import os
 import time
-
+import os
 import psutil
 import tensorflow as tf
-
 import tensorlayer as tl
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 gpus = tf.config.experimental.list_physical_devices('GPU')
 if gpus:
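Every one of these benchmark scripts cuts off at the same `if gpus:` guard. For readers without the full files at hand, the body that conventionally follows this TF 2.x idiom enables on-demand GPU memory allocation, so parallel benchmark runs don't each reserve the whole card. This is the standard pattern; the exact body in these files is not shown in the patch:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            # Grow GPU memory allocation on demand instead of
            # reserving all device memory at startup.
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        # Memory growth must be configured before any GPU is initialized.
        print(e)
```
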
diff --git a/tests/performance_test/vgg/tl2-static-eager.py b/tests/performance_test/vgg/tl2-static-eager.py
index 003ed5f41..b6d5287ba 100644
--- a/tests/performance_test/vgg/tl2-static-eager.py
+++ b/tests/performance_test/vgg/tl2-static-eager.py
@@ -1,11 +1,9 @@
-import os
 import time
-
+import os
 import psutil
 import tensorflow as tf
-
 import tensorlayer as tl
-from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)
+from exp_config import random_input_generator, MONITOR_INTERVAL, NUM_ITERS, BATCH_SIZE, LERANING_RATE
 
 gpus = tf.config.experimental.list_physical_devices('GPU')
 if gpus:
diff --git a/tests/test_activations.py b/tests/test_activations.py
index e168bd91e..39097a63b 100644
--- a/tests/test_activations.py
+++ b/tests/test_activations.py
@@ -4,12 +4,12 @@
 import os
 import unittest
 
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
 import tensorflow as tf
-import numpy as np
 import tensorlayer as tl
-from tests.utils import CustomTestCase
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Test_Leaky_ReLUs(CustomTestCase):
@@ -116,14 +116,6 @@ def test_swish(self):
 
             self.assertAlmostEqual(computed_output.numpy(), good_output, places=5)
 
-    def test_mish(self):
-        for i in range(-5, 15):
-            good_output = i * np.tanh(np.math.log(1 + np.math.exp(i)))
-
-            computed_output = tl.act.mish(float(i))
-
-            self.assertAlmostEqual(computed_output.numpy(), good_output, places=5)
-
 
 if __name__ == '__main__':
diff --git a/tests/test_initializers.py b/tests/test_initializers.py
index a5c978251..df86fd834 100644
--- a/tests/test_initializers.py
+++ b/tests/test_initializers.py
@@ -4,13 +4,13 @@
 import os
 import unittest
 
-import numpy as np
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
+import numpy as np
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils import CustomTestCase
 
 
 class Test_Leaky_ReLUs(CustomTestCase):
diff --git a/tests/test_nlp.py b/tests/test_nlp.py
index a8ca6dd21..680eeb83b 100644
--- a/tests/test_nlp.py
+++ b/tests/test_nlp.py
@@ -4,15 +4,14 @@
 import os
 import unittest
 
-import nltk
-import tensorflow as tf
-from tensorflow.python.platform import gfile
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils import CustomTestCase
-
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tensorflow.python.platform import gfile
+from tests.utils import CustomTestCase
+import nltk
 
 nltk.download('punkt')
diff --git a/tests/utils/__init__.py b/tests/utils/__init__.py
index 323329d63..15d4814c2 100644
--- a/tests/utils/__init__.py
+++ b/tests/utils/__init__.py
@@ -1,8 +1,9 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from tests.utils.custom_layers import *
-from tests.utils.custom_networks import *
 from tests.utils.custom_testcase import *
 from tests.utils.list_py_files import *
 from tests.utils.timeout_utils import *
+
+from tests.utils.custom_layers import *
+from tests.utils.custom_networks import *
\ No newline at end of file
diff --git a/tests/utils/custom_layers/__init__.py b/tests/utils/custom_layers/__init__.py
index d9abe0d59..995a053ce 100644
--- a/tests/utils/custom_layers/__init__.py
+++ b/tests/utils/custom_layers/__init__.py
@@ -2,4 +2,4 @@
 # -*- coding: utf-8 -*-
 
 from tests.utils.custom_layers.basic_layers import *
-from tests.utils.custom_layers.inception_blocks import *
+from tests.utils.custom_layers.inception_blocks import *
\ No newline at end of file
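For reference, the `test_mish` case deleted above verified `tl.act.mish` against the definition mish(x) = x · tanh(softplus(x)). A standalone NumPy reference equivalent to the removed check:

```python
import math

import numpy as np


def mish_reference(x):
    """Mish activation: x * tanh(softplus(x)), the identity the removed test checked."""
    return x * np.tanh(np.log1p(np.exp(x)))  # log1p(exp(x)) == softplus(x)


for i in range(-5, 15):
    expected = i * math.tanh(math.log(1 + math.exp(i)))  # the test's original formula
    assert np.isclose(mish_reference(float(i)), expected)
```
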
diff --git a/tests/utils/custom_layers/basic_layers.py b/tests/utils/custom_layers/basic_layers.py
index 27ce5c1fc..83f320aec 100644
--- a/tests/utils/custom_layers/basic_layers.py
+++ b/tests/utils/custom_layers/basic_layers.py
@@ -2,7 +2,6 @@
 # -*- coding: utf-8 -*-
 
 import tensorflow as tf
-
 import tensorlayer as tl
 
 __all__ = [
@@ -62,9 +61,10 @@ def activation_module(layer, activation_fn, leaky_relu_alpha=0.2, name=None):
 
 def conv_module(
-    prev_layer, n_out_channel, filter_size, strides, padding, is_train=True, use_batchnorm=True, activation_fn=None,
-    conv_init=tl.initializers.random_uniform(), batch_norm_init=tl.initializers.truncated_normal(mean=1., stddev=0.02),
-    bias_init=tf.zeros_initializer(), name=None
+    prev_layer, n_out_channel, filter_size, strides, padding, is_train=True, use_batchnorm=True, activation_fn=None,
+    conv_init=tl.initializers.random_uniform(),
+    batch_norm_init=tl.initializers.truncated_normal(mean=1.,
+                                                     stddev=0.02), bias_init=tf.zeros_initializer(), name=None
 ):
 
     if activation_fn not in ["ReLU", "ReLU6", "Leaky_ReLU", "PReLU", "PReLU6", "PTReLU6", "CReLU", "ELU", "SELU",
@@ -98,8 +98,10 @@ def conv_module(
 
 def dense_module(
-    prev_layer, n_units, is_train, use_batchnorm=True, activation_fn=None, dense_init=tl.initializers.random_uniform(),
-    batch_norm_init=tl.initializers.truncated_normal(mean=1., stddev=0.02), bias_init=tf.zeros_initializer(), name=None
+    prev_layer, n_units, is_train, use_batchnorm=True, activation_fn=None,
+    dense_init=tl.initializers.random_uniform(),
+    batch_norm_init=tl.initializers.truncated_normal(mean=1.,
+                                                     stddev=0.02), bias_init=tf.zeros_initializer(), name=None
 ):
 
     if activation_fn not in ["ReLU", "ReLU6", "Leaky_ReLU", "PReLU", "PReLU6", "PTReLU6", "CReLU", "ELU", "SELU",
diff --git a/tests/utils/custom_layers/inception_blocks.py b/tests/utils/custom_layers/inception_blocks.py
index 90c38a9a3..89d2640d4 100644
--- a/tests/utils/custom_layers/inception_blocks.py
+++ b/tests/utils/custom_layers/inception_blocks.py
@@ -2,8 +2,8 @@
 # -*- coding: utf-8 -*-
 
 import tensorflow as tf
-
 import tensorlayer as tl
+
 from tests.utils.custom_layers.basic_layers import conv_module
 
 __all__ = [
diff --git a/tests/utils/custom_networks/__init__.py b/tests/utils/custom_networks/__init__.py
index e245d6ac1..81dd159ba 100644
--- a/tests/utils/custom_networks/__init__.py
+++ b/tests/utils/custom_networks/__init__.py
@@ -1,4 +1,4 @@
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
-from tests.utils.custom_networks.inceptionv4 import *
+from tests.utils.custom_networks.inceptionv4 import *
\ No newline at end of file
diff --git a/tests/utils/custom_networks/inceptionv4.py b/tests/utils/custom_networks/inceptionv4.py
index e9895eec0..bac2ae897 100644
--- a/tests/utils/custom_networks/inceptionv4.py
+++ b/tests/utils/custom_networks/inceptionv4.py
@@ -3,15 +3,20 @@
 
 import os
 
-import tensorflow as tf
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
 
+import tensorflow as tf
 import tensorlayer as tl
-from tests.utils.custom_layers.basic_layers import conv_module, dense_module
-from tests.utils.custom_layers.inception_blocks import (
-    block_inception_a, block_inception_b, block_inception_c, block_reduction_a, block_reduction_b
-)
 
-os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+from tests.utils.custom_layers.basic_layers import conv_module
+from tests.utils.custom_layers.basic_layers import dense_module
+
+from tests.utils.custom_layers.inception_blocks import block_inception_a
+from tests.utils.custom_layers.inception_blocks import block_inception_b
+from tests.utils.custom_layers.inception_blocks import block_inception_c
+
+from tests.utils.custom_layers.inception_blocks import block_reduction_a
+from tests.utils.custom_layers.inception_blocks import block_reduction_b
 
 __all__ = ['InceptionV4_Network']
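The context lines above show that `conv_module` and `dense_module` validate `activation_fn` against a whitelist of layer names; only the beginning of that list ("ReLU", "ReLU6", "Leaky_ReLU", ...) is visible in the hunk. Abstracted away from TensorLayer, with illustrative names, the guard pattern is simply:

```python
# Illustrative reconstruction of the guard used by conv_module/dense_module;
# only the start of the real whitelist is visible in the hunk above.
SUPPORTED_ACTIVATIONS = [
    "ReLU", "ReLU6", "Leaky_ReLU", "PReLU", "PReLU6", "PTReLU6", "CReLU", "ELU", "SELU",
    None,  # no activation is also acceptable
]


def check_activation_fn(activation_fn):
    """Fail fast on unknown activation names (they are matched case-sensitively)."""
    if activation_fn not in SUPPORTED_ACTIVATIONS:
        raise ValueError("Unknown activation_fn: %r" % (activation_fn,))
    return activation_fn


check_activation_fn("ReLU")  # ok
check_activation_fn(None)    # ok
# check_activation_fn("relu")  # would raise ValueError
```
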