Round 2
Signed-off-by: AlexDBlack <[email protected]>
AlexDBlack committed Dec 11, 2019
1 parent e01ffcd commit 97e80d8
Showing 1 changed file with 56 additions and 6 deletions.
## Highlights - 1.0.0-beta6 Release

* SameDiff optimizations
* Deeplearning4j UI - Play framework replaced with Vertx; the deeplearning4j-ui dependency no longer has a Scala dependency or Scala version suffix [Link](https://github.com/KonduitAI/deeplearning4j/pull/68)
* Note: No API changes, only artifact ID change: replace `deeplearning4j-ui_2.1x` with `deeplearning4j-ui`
* OpenMP replaced with a thread-pool C++ parallelism framework, enabling C++ parallelism on platforms without OpenMP support
* ND4J namespaces
  * ND4J namespace operation methods: Nd4j.math, Nd4j.random, Nd4j.bitwise, Nd4j.nn (neural network) [Link](https://github.com/KonduitAI/deeplearning4j/pull/83)
* Added support for CUDA 10.2. 1.0.0-beta6 released with CUDA 9.2, 10.0, 10.1 and 10.2 support



Expand All @@ -87,21 +89,39 @@ redirect_from: "/releasenotes"
### Deeplearning4J: Features and Enhancements

* DNNL (MKL-DNN) upgraded to version 1.1
* Added causal convolution mode for Convolution1D layer (ConvolutionMode.Causal) and added causal conv1d support for Keras import [Link](https://github.com/KonduitAI/deeplearning4j/pull/107)
* Keras import now supports scaled identity weight initialization [Link](https://github.com/eclipse/deeplearning4j/issues/8395)
* Added Mish activation function [Link](https://github.com/eclipse/deeplearning4j/issues/8417), [Link](https://github.com/KonduitAI/deeplearning4j/pull/55)
* BertIterator now has a `BertIterator.featurizeSentences(List<String>)` method for inference [Link](https://github.com/KonduitAI/deeplearning4j/pull/71), [Link](https://github.com/eclipse/deeplearning4j/issues/8415)
* BertIterator now supports sentence pairs for supervised training [Link](https://github.com/KonduitAI/deeplearning4j/pull/108)
* Added sparse multi-class cross entropy for both Deeplearning4j and Keras import [Link](https://github.com/KonduitAI/deeplearning4j/pull/72), [Link](https://github.com/KonduitAI/deeplearning4j/pull/73)
* Deeplearning4j UI: migrated from Play to Vertx for the web serving backend, also removing the dependency on Scala libraries; no API changes, only an artifact ID change - replace `deeplearning4j-ui_2.1x` with `deeplearning4j-ui` [Link](https://github.com/KonduitAI/deeplearning4j/pull/68), [Link](https://github.com/KonduitAI/deeplearning4j/pull/79)
* Added TimeDistributed wrapper layer [Link](https://github.com/KonduitAI/deeplearning4j/pull/78)
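For reference, the Mish activation added above is defined as mish(x) = x * tanh(softplus(x)). A minimal plain-Java sketch of the formula, for illustration only (this is not the DL4J/libnd4j implementation):

```java
// Standalone sketch of the Mish activation: mish(x) = x * tanh(softplus(x)).
// Illustrates the formula only; not the actual DL4J implementation.
public class MishSketch {

    // softplus(x) = ln(1 + e^x); log1p is used for better precision near zero
    static double softplus(double x) {
        return Math.log1p(Math.exp(x));
    }

    static double mish(double x) {
        return x * Math.tanh(softplus(x));
    }

    public static void main(String[] args) {
        System.out.println(mish(0.0));  // 0.0 - mish is zero at the origin
        System.out.println(mish(2.0));  // close to 2, since tanh(softplus(2)) is near 1
    }
}
```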


### Deeplearning4J: Bug Fixes and Optimizations

* KDTree implementation optimized [Link](https://github.com/KonduitAI/deeplearning4j/pull/7)
* Deeplearning4j zoo models and datasets hosting location updated [Link](https://github.com/eclipse/deeplearning4j/pull/8292)
* Fixed nIn validation for Deconv2D layer [Link](https://github.com/eclipse/deeplearning4j/issues/8225)
* Fixed an issue with incorrect Deconvolution2d results for Keras import models [Link](https://github.com/eclipse/deeplearning4j/issues/8298)
* Added DNNL/MKLDNN support for batch normalization layer [Link](https://github.com/KonduitAI/deeplearning4j/pull/14), [Link](https://github.com/eclipse/deeplearning4j/issues/8172)
* Fixed various integer casts to avoid overflows for very large arrays (with dimensions or length > Integer.MAX_VALUE) [Link](https://github.com/KonduitAI/deeplearning4j/pull/15)
* Fixed an issue with UNet non-pretrained model architecture (last layer kernel size) [Link](https://github.com/eclipse/deeplearning4j/issues/8214)
* Deeplearning4j SameDiff layers now use DL4J workspaces for better performance and reduced memory consumption [Link](https://github.com/KonduitAI/deeplearning4j/pull/23)
* Updated broken links in a few error messages [Link](https://github.com/eclipse/deeplearning4j/issues/8308)
* Cleaned up a few unused dependencies in various modules [Link](https://github.com/KonduitAI/deeplearning4j/pull/43)
* Cleaned up duplicate SamplingDataSetIterator class [Link](https://github.com/eclipse/deeplearning4j/issues/8352)
* Fixed an issue where ComputationGraph instances with a single input going into multiple embedding layers could throw a NullPointerException [Link](https://github.com/KonduitAI/deeplearning4j/pull/52)
* Fixed an issue where loss function weights were not automatically cast to the network datatype, resulting in an exception if they were not already the correct type [Link](https://github.com/eclipse/deeplearning4j/issues/8431)
* Shaded Jackson version upgraded from 2.9.9/2.9.9.3 to 2.10.1 [Link](https://github.com/KonduitAI/deeplearning4j/pull/82)
* Fixed an issue with KNN where getMostPopulatedClusters actually returned the least populated clusters [Link](https://github.com/eclipse/deeplearning4j/issues/8383)
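The integer-cast fixes above concern arrays whose length or dimensions exceed Integer.MAX_VALUE, where 32-bit arithmetic silently wraps before the result is widened. A self-contained illustration of the failure mode (hypothetical helper names, not DL4J code):

```java
// Sketch of the bug class fixed above: computing an array length as int
// overflows once shapes multiply out past Integer.MAX_VALUE.
public class LongIndexSketch {

    // BUG: rows * cols is evaluated as a 32-bit int multiply, wraps, and
    // only THEN is the wrong value widened to long.
    static long lengthInt(int rows, int cols) {
        return rows * cols;
    }

    // FIX: widen one operand first so the multiply happens in 64-bit.
    static long lengthLong(int rows, int cols) {
        return (long) rows * cols;
    }

    public static void main(String[] args) {
        int rows = 100_000, cols = 100_000;          // 10 billion elements
        System.out.println(lengthInt(rows, cols));   // 1410065408 - wrapped, wrong
        System.out.println(lengthLong(rows, cols));  // 10000000000 - correct
    }
}
```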



### Deeplearning4j: Transition Guide, 1.0.0-beta5 to 1.0.0-beta6

* Deeplearning4j UI artifact ID has changed: replace `deeplearning4j-ui_2.1x` (beta5 and earlier) with `deeplearning4j-ui`
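Concretely, for a Maven build the change looks like the following sketch (assuming the standard `org.deeplearning4j` group ID; the `_2.11` suffix is shown as an example of the old Scala-versioned artifact):

```xml
<!-- beta5 and earlier (remove) -->
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-ui_2.11</artifactId>
    <version>1.0.0-beta5</version>
</dependency>

<!-- beta6: no Scala version suffix -->
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-ui</artifactId>
    <version>1.0.0-beta6</version>
</dependency>
```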


### Deeplearning4j: 1.0.0-beta6 Known Issues
Expand All @@ -111,10 +131,12 @@ redirect_from: "/releasenotes"

### ND4J/SameDiff: Features and Enhancements

* Added support for CUDA 10.2 [Link](https://github.com/KonduitAI/deeplearning4j/pull/89)
* DNNL (MKL-DNN) upgraded to version 1.1 [Link](https://github.com/KonduitAI/deeplearning4j/pull/62)
* Added ND4J namespaces to match SameDiff: Nd4j.math, Nd4j.random, Nd4j.bitwise, Nd4j.nn (neural network) [Link](https://github.com/KonduitAI/deeplearning4j/pull/83)
* Added SameDiff.calculateGradientsAndOutputs method [Link](https://github.com/eclipse/deeplearning4j/issues/8318), [Link](https://github.com/KonduitAI/deeplearning4j/pull/21/)
* Additional SameDiff single batch .output method overloads for DataSet/MultiDataSet added [Link](https://github.com/SkymindIO/deeplearning4j/pull/253)
* TensorFlow import ops coverage enhanced (significant number of additional ops supported) [Link](https://github.com/SkymindIO/deeplearning4j/pull/254), [Link](https://github.com/eclipse/deeplearning4j/pull/8341), [Link](https://github.com/KonduitAI/deeplearning4j/pull/25), [Link](https://github.com/KonduitAI/deeplearning4j/pull/49), [Link](https://github.com/KonduitAI/deeplearning4j/pull/65)
* PRelu op added [Link](https://github.com/eclipse/deeplearning4j/pull/8247)
* adjust_contrast, igamma and igammac ops added [Link](https://github.com/KonduitAI/deeplearning4j/pull/1)
* ND4J/SameDiff: BitCast, CompareAndBitpack, DivideNoNan, DrawBoundingBoxes, FakeQuantWithMinMaxVarsPerChannel ops added [Link](https://github.com/KonduitAI/deeplearning4j/pull/2)
Expand All @@ -124,11 +146,18 @@ redirect_from: "/releasenotes"
* Added Gamma and Poisson RNG distributions [Link](https://github.com/KonduitAI/deeplearning4j/pull/27)
* SameDiff's use of DeviceLocal for variables/constants etc is now configurable [Link](https://github.com/KonduitAI/deeplearning4j/pull/32)
* Uniform distribution op now supports random integer generation, not just random floating point generation [Link](https://github.com/KonduitAI/deeplearning4j/pull/30)
* SameDiff: Added simple OpBenchmarkListener for benchmarking purposes [Link](https://github.com/KonduitAI/deeplearning4j/pull/42)
* Added the ability to disable platform helpers (DNNL/MKLDNN etc) via `Nd4jCPU.Environment.getInstance().allowHelpers(false);` and `Nd4jCuda.Environment.getInstance().allowHelpers(false);` [Link](https://github.com/KonduitAI/deeplearning4j/pull/44)
* Added draw_bounding_boxes operation [Link](https://github.com/KonduitAI/deeplearning4j/pull/61)
* Added resize_bicubic operation [Link](https://github.com/KonduitAI/deeplearning4j/pull/56)
* Added causal padding mode to conv1d operation [Link](https://github.com/KonduitAI/deeplearning4j/pull/90)
* DNNL (MKLDNN) is included and enabled by default for non-AVX builds [Link](https://github.com/KonduitAI/deeplearning4j/pull/104)
* Added SameDiff ArraySavingListener for debugging purposes [Link](https://github.com/KonduitAI/deeplearning4j/pull/114)
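The causal padding mode added for the conv1d operation pads on the left only, so each output step depends only on the current and earlier input steps. A standalone plain-Java sketch of the idea (illustrative only, not the libnd4j kernel; method names are hypothetical):

```java
import java.util.Arrays;

// Sketch of causal 1D convolution padding: the input is left-padded by
// (kernelSize - 1) * dilation, giving "same"-length output where out[t]
// only sees inputs at steps <= t.
public class CausalConv1dSketch {

    static double[] causalConv1d(double[] input, double[] kernel, int dilation) {
        int pad = (kernel.length - 1) * dilation;     // left padding only
        double[] padded = new double[input.length + pad];
        System.arraycopy(input, 0, padded, pad, input.length);

        double[] out = new double[input.length];      // same length as input
        for (int t = 0; t < input.length; t++) {
            double sum = 0;
            for (int k = 0; k < kernel.length; k++) {
                sum += kernel[k] * padded[t + k * dilation];
            }
            out[t] = sum;
        }
        return out;
    }

    public static void main(String[] args) {
        // Kernel {1, 1} = moving sum of previous and current element;
        // note out[0] sees only input[0] (the "past" is zero-padded).
        double[] out = causalConv1d(new double[]{1, 2, 3}, new double[]{1, 1}, 1);
        System.out.println(Arrays.toString(out));  // [1.0, 3.0, 5.0]
    }
}
```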

### ND4J/SameDiff: Bug Fixes and Optimizations

* OpenMP replaced with ThreadPool abstraction, enables parallelism for platforms without OpenMP support [Link](https://github.com/KonduitAI/deeplearning4j/pull/8)
* SameDiff memory management overhauled for (in some cases significantly) reduced memory consumption and improved performance [Link](https://github.com/KonduitAI/deeplearning4j/pull/10), [Link](https://github.com/KonduitAI/deeplearning4j/pull/39)
* Switched to Clang instead of gcc for OSX compilation to avoid compiler-related issues [Link](https://github.com/KonduitAI/deeplearning4j/pull/8)
* Removed `SameDiff.outputs()` "best guess" output inference due to being unreliable, in favor of explicit `SameDiff.setOutputs(String...)` call [Link](https://github.com/eclipse/deeplearning4j/issues/8265)
* Fixed an issue with Nd4j.hstack on 1D arrays [Link](https://github.com/eclipse/deeplearning4j/issues/8218)
Expand All @@ -146,6 +175,20 @@ redirect_from: "/releasenotes"
* Fixed an issue with biasadd_bp operation and NHWC data format [Link](https://github.com/eclipse/deeplearning4j/issues/8280)
* Fixed an issue with certain strided slice backprop configurations [Link](https://github.com/eclipse/deeplearning4j/issues/8342), [Link](https://github.com/KonduitAI/deeplearning4j/pull/29)
* Fixed an issue with LogSumExp reduction operation backprop for the along-dimension case [Link](https://github.com/KonduitAI/deeplearning4j/pull/35), [Link](https://github.com/eclipse/deeplearning4j/issues/8360)
* INDArray.toString() now has correct brackets for rank 1+ scalars to avoid ambiguity [Link](https://github.com/eclipse/deeplearning4j/issues/8382)
* Fixed an issue where some ND4J methods could fail when the library is compiled on Java 9+ but run on Java 8 [Link](https://github.com/KonduitAI/deeplearning4j/pull/59)
* Fixed empty array input case for is_strictly_increasing, non_decreasing and non_max_suppression ops [Link](https://github.com/KonduitAI/deeplearning4j/pull/63), [Link](https://github.com/KonduitAI/deeplearning4j/pull/67)
* Fixed empty input arrays for legacy ops (transform, scalar, pairwise, broadcast) [Link](https://github.com/KonduitAI/deeplearning4j/pull/66)
* CUDA compute capability 3.0 is supported again [Link](https://github.com/KonduitAI/deeplearning4j/commit/7f90930e7a5cec6eaed87121c6deaf3209b932f3)
* Improved performance for Scatter operations (1D case) + index validation [Link](https://github.com/KonduitAI/deeplearning4j/pull/84)
* Fixed an issue where SameDiff TrainingConfig serialization would fail if evaluation instances are set [Link](https://github.com/KonduitAI/deeplearning4j/pull/93), [Link](https://github.com/eclipse/deeplearning4j/issues/8470)
* SameDiff execution will now throw an exception when assertion operations in the graph fail [Link](https://github.com/KonduitAI/deeplearning4j/pull/96)
* PolyGamma function now returns NaN when passed a double for arguments requiring integer values [Link](https://github.com/KonduitAI/deeplearning4j/pull/98)
* Fixed some issues for pad and mirror_pad ops to ensure they conform with TensorFlow for imported networks [Link](https://github.com/KonduitAI/deeplearning4j/pull/100)
* Updated and fixed some issues for TensorFlow graph runner [Link](https://github.com/KonduitAI/deeplearning4j/pull/87)
* Improved performance for Reverse operation [Link](https://github.com/KonduitAI/deeplearning4j/pull/115)
* Removed / cleaned up unused ND4J list functionality [Link](https://github.com/eclipse/deeplearning4j/pull/8262)
* Fixed reduce bool operation results (such as any, all, IsInf, etc) for empty array inputs [Link](https://github.com/KonduitAI/deeplearning4j/pull/118)
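For context on the LogSumExp fix above: the reduction is conventionally computed in a max-shifted form so that large inputs do not overflow exp(). A plain-Java sketch of the forward computation only (illustrative; the fix itself concerns the C++ backprop kernel):

```java
// Numerically stable log-sum-exp: logSumExp(x) = max(x) + log(sum(exp(x - max(x)))).
// Naive sum(exp(x)) would overflow to infinity for inputs like 1000.
public class LogSumExpSketch {

    static double logSumExp(double[] x) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : x) max = Math.max(max, v);

        double sum = 0;
        for (double v : x) sum += Math.exp(v - max);  // shifted: exp args are <= 0

        return max + Math.log(sum);
    }

    public static void main(String[] args) {
        // = 1000 + ln(2), roughly 1000.693; the naive form would print Infinity
        System.out.println(logSumExp(new double[]{1000.0, 1000.0}));
    }
}
```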



Expand All @@ -164,6 +207,7 @@ redirect_from: "/releasenotes"

### DataVec: Bug Fixes and Optimizations

* NativeImageLoader now checks for empty input streams and throws an exception instead of crashing [Link](https://github.com/KonduitAI/deeplearning4j/pull/121)
* NDArrayScalarOpTransform now supports modulus operator [Link](https://github.com/eclipse/deeplearning4j/pull/8330)


Expand All @@ -173,6 +217,7 @@ redirect_from: "/releasenotes"

* Added AsyncTrainingListener [Link](https://github.com/eclipse/deeplearning4j/pull/8072)
* Replaced multiple uses of java.util.Random with ND4J Random [Link](https://github.com/eclipse/deeplearning4j/pull/8282)
* Added Observable and LegacyMDPWrapper [Link](https://github.com/eclipse/deeplearning4j/pull/8368)

### RL4J: Bug Fixes and Optimizations


* PyDataVec TransformProcess now supports non-inplace operations [Link](https://github.com/eclipse/deeplearning4j/pull/8326)

### PyDataVec: Bug Fixes and Optimizations

* Fixed various issues with PyDataVec [Link](https://github.com/KonduitAI/deeplearning4j/pull/86)
* Fixed an issue with data locality that could cause incorrect results under some circumstances when running on CUDA [Link](https://github.com/KonduitAI/deeplearning4j/pull/113)


---
