Merge pull request Theano#5545 from notoraptor/prepare-rc-0-9
Prepare release 0.9.0rc1.
nouiz authored Feb 20, 2017
2 parents cc93c29 + c707a74 commit 6bfc2ac
Showing 8 changed files with 169 additions and 20 deletions.
3 changes: 3 additions & 0 deletions .mailmap
@@ -33,6 +33,8 @@ Benjamin Scellier <[email protected]> Benjamin Scellier <scellier@bart4.
Benjamin Scellier <[email protected]> Benjamin Scellier <scellier@bart5>
Benjamin Scellier <[email protected]> Benjamin Scellier <[email protected]>
Benjamin Scellier <[email protected]> Benjamin Scellier <[email protected]>
Benjamin Scellier <[email protected]> Benjamin Scellier <[email protected]>
Benjamin Scellier <[email protected]> bscellier <[email protected]>
Bogdan Budescu <[email protected]> bbudescu <[email protected]>
Brian Cheung <[email protected]> briancheung <[email protected]>
Caglar <[email protected]> Caglar <[email protected]>
@@ -61,6 +63,7 @@ Ethan Buchman <[email protected]> ebuchman <[email protected]>
Evelyn Mitchell <[email protected]> evelynmitchell <[email protected]>
Faruk Ahmed <[email protected]> Faruk Ahmed <[email protected]>
Faruk Ahmed <[email protected]> Faruk Ahmed <[email protected]>
Faruk Ahmed <[email protected]> Faruk Ahmed <[email protected]>
Fei Wang <[email protected]> fay <[email protected]>
Francesco Visin <[email protected]> Francesco <[email protected]>
Francesco Visin <[email protected]> fvisin <[email protected]>
85 changes: 78 additions & 7 deletions NEWS.txt
@@ -3,12 +3,82 @@ Release Notes
=============


Theano 0.9.0rc1 (20th of February, 2017)
========================================

This release extends 0.9.0beta1 and announces the upcoming final release 0.9.

Highlights (since 0.9.0beta1):
- Better integration of Theano+libgpuarray packages into conda distribution
 - Better handling of Windows line endings in C code
- Better compatibility with NumPy 1.12
- Faster scan optimizations
- Fixed broadcast checking in scan
- Bug fixes related to merge optimizer and shape inference
 - Many other bug fixes and improvements
- Updated documentation

- New GPU back-end:

- Value of a shared variable is now set inplace

A total of 26 people contributed to this release since 0.9.0beta1 and 118 since 0.8.0; see the list at the bottom.

Interface changes:
- In MRG, replaced method `multinomial_wo_replacement()` with new method `choice()`
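
A minimal sketch of the renamed sampler, assuming the 0.9 `MRG_RandomStreams.choice()` signature
(keywords `size`, `replace` and `p`, with only sampling without replacement expected to be supported)::

    # Hypothetical usage sketch; choice() replaces multinomial_wo_replacement().
    import numpy as np
    import theano
    from theano.sandbox.rng_mrg import MRG_RandomStreams

    rng = MRG_RandomStreams(seed=1234)
    # One row of probabilities over 4 possible outcomes.
    pvals = theano.shared(np.array([[0.1, 0.2, 0.3, 0.4]],
                                   dtype=theano.config.floatX))

    # Draw 2 distinct indices per row, without replacement.
    idx = rng.choice(size=2, replace=False, p=pvals)

    f = theano.function([], idx)
    print(f())  # e.g. [[3, 1]]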

Convolution updates:
- Implement conv2d_transpose convenience function
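
A rough sketch of the new convenience wrapper, assuming the `theano.tensor.nnet.conv2d_transpose`
signature documented for 0.9 (`input`, `filters`, `output_shape`, `border_mode`, `input_dilation`)::

    # Illustration only: upsample 4x4 feature maps to 8x8, i.e. the gradient
    # of a strided valid convolution. Filter shape is assumed to be
    # (input channels, output channels, kernel rows, kernel cols).
    import numpy as np
    import theano
    import theano.tensor as T

    x = T.tensor4('x')   # (batch, channels, rows, cols) maps to upsample
    w = T.tensor4('w')   # transposed-convolution filters

    y = T.nnet.conv2d_transpose(x, w, output_shape=(3, 1, 8, 8),
                                border_mode='valid', input_dilation=(2, 2))

    f = theano.function([x, w], y)
    out = f(np.zeros((3, 16, 4, 4), dtype=theano.config.floatX),
            np.zeros((16, 1, 2, 2), dtype=theano.config.floatX))
    print(out.shape)  # expected (3, 1, 8, 8)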

GPU:
- GPUMultinomialFromUniform op now supports multiple dtypes
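
A hedged sketch of requesting a non-default output dtype from the MRG multinomial sampler, which
the new back-end is expected to lower to GPUMultinomialFromUniform; the `dtype` keyword follows
the `MRG_RandomStreams.multinomial` docstring::

    import numpy as np
    import theano
    from theano.sandbox.rng_mrg import MRG_RandomStreams

    rng = MRG_RandomStreams(seed=42)
    # 5 rows, each a uniform distribution over 10 outcomes.
    pvals = theano.shared(np.full((5, 10), 0.1, dtype=theano.config.floatX))

    # One draw per row, returned as float32 one-hot rows instead of the
    # default int64.
    samples = rng.multinomial(pvals=pvals, n=1, dtype='float32')

    f = theano.function([], samples)
    print(f().dtype, f().shape)  # expected: float32 (5, 10)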

New features:
 - OpFromGraph now allows gradient overriding for every input (see the first sketch after this list)
 - Added Abstract Ops for batch normalization that use cuDNN when available and pure Theano CPU/GPU alternatives otherwise (see the second sketch after this list)
- Added new Theano flag cuda.enabled
- Added new Theano flag print_global_stats to print some global statistics (time spent) at the end
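
A sketch of per-input gradient overriding, assuming the 0.9 `OpFromGraph` `grad_overrides` keyword
(a list with one entry per input, each callable taking `(inputs, output_grads)` and returning the
gradient for that input)::

    import theano
    import theano.tensor as T

    x, y = T.scalars('x', 'y')
    out = x * y

    # Hypothetical override: force the gradient w.r.t. x to zero and keep the
    # automatically derived gradient w.r.t. y.
    def zero_grad_x(inputs, output_grads):
        return T.zeros_like(inputs[0])

    op = theano.OpFromGraph([x, y], [out],
                            grad_overrides=[zero_grad_x, 'default'])

    a, b = T.scalars('a', 'b')
    ga, gb = T.grad(op(a, b), [a, b])
    f = theano.function([a, b], [ga, gb])
    print(f(2.0, 3.0))  # expected [0.0, 2.0] with the override above

And a sketch of the abstract batch-normalization interface, assuming the
`theano.tensor.nnet.bn.batch_normalization_train` signature of the 0.9 docs; it should select the
cuDNN kernels when available and fall back to the pure Theano implementation otherwise::

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.nnet import bn

    x = T.tensor4('x')        # (batch, channels, rows, cols)
    gamma = T.vector('gamma')  # per-channel scale
    beta = T.vector('beta')    # per-channel shift

    # Broadcast the per-channel parameters over batch and spatial axes.
    g4 = gamma.dimshuffle('x', 0, 'x', 'x')
    b4 = beta.dimshuffle('x', 0, 'x', 'x')

    out, mean, invstd = bn.batch_normalization_train(x, g4, b4, axes='spatial')

    f = theano.function([x, gamma, beta], out)
    res = f(np.random.randn(8, 3, 4, 4).astype(theano.config.floatX),
            np.ones(3, dtype=theano.config.floatX),
            np.zeros(3, dtype=theano.config.floatX))
    print(res.shape)  # expected (8, 3, 4, 4)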

Others:
- Split op now has C code for CPU and GPU
- "theano-cache list" now includes compilation times


Committers since 0.9.0beta1:
- Frederic Bastien
- Benjamin Scellier
- khaotik
- Steven Bocco
- Arnaud Bergeron
- Pascal Lamblin
- Gijs van Tulder
- Reyhane Askari
- Chinnadhurai Sankar
- Vincent Dumoulin
- Alexander Matyasko
- Cesar Laurent
- Nicolas Ballas
- affanv14
- Faruk Ahmed
- Anton Chechetka
- Alexandre de Brebisson
- Amjad Almahairi
- Dimitar Dimitrov
- Fuchai
- Jan Schlüter
- Jonas Degrave
- Mathieu Germain
- Rebecca N. Palmer
- Simon Lefrancois
- valtron


Theano 0.9.0beta1 (24th of January, 2017)
=========================================

This release contains many bug fixes, improvements and new features to prepare the upcoming release candidate.

Highlight:
Highlights:
- Many computation and compilation speed up
 - More numerical stability by default for some graphs
- Jenkins (gpu tests run on PR in addition to daily buildbot)
@@ -23,14 +93,15 @@ Highlight:
 - scan with checkpoint (trade-off between speed and memory usage, useful for long sequences; see the sketch after this list)
- Added a bool dtype

- New back-end:
- New GPU back-end:

- float16 storage
 - Better mapping between Theano device numbers and nvidia-smi numbers, using the PCI bus ID of graphics cards
- More pooling support on GPU when cuDNN isn't there.
- ignore_border=False is now implemented for pooling.
- More pooling support on GPU when cuDNN isn't there
- ignore_border=False is now implemented for pooling
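
A hedged sketch of the checkpointed scan mentioned above, assuming `theano.scan_checkpoints` with a
`save_every_N` argument and a scan-like `(outputs, updates)` return value::

    # Keep only every 10th intermediate state; the others are recomputed
    # during the backward pass, trading compute time for memory.
    import numpy as np
    import theano
    import theano.tensor as T

    coeffs = T.vector('coeffs')

    def step(c, acc):
        return acc * 2 + c

    outputs, updates = theano.scan_checkpoints(step,
                                               sequences=[coeffs],
                                               outputs_info=T.zeros(()),
                                               save_every_N=10)

    loss = outputs.sum()
    g = T.grad(loss, coeffs)
    f = theano.function([coeffs], [loss, g], updates=updates)
    print(f(np.arange(100, dtype=theano.config.floatX)))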


A total of 112 people contributed to this release, see the list at the bottom.
A total of 112 people contributed to this release since 0.8.0, see the list at the bottom.


Interface changes:
@@ -41,7 +112,7 @@ Interface changes:
- Move softsign out of sandbox to theano.tensor.nnet.softsign
 - Roll now makes the shift modulo the size of the axis we roll on
- Merge CumsumOp/CumprodOp into CumOp
- round() default to the same as NumPy: half_to_even.
 - round() defaults to the same as NumPy: half_to_even

Convolution updates:
- Multi-cores convolution and pooling on CPU
@@ -57,7 +128,7 @@ GPU:
- Support for solve (using cusolver), erfinv and erfcinv
 - cublas gemv workaround when we reduce on an axis with a dimension size of 0
- Warn user that some cuDNN algorithms may produce unexpected results in certain environments
for convolution backward filter operations.
for convolution backward filter operations

New features:
 - Add gradient of solve, tensorinv (CPU), tensorsolve (CPU), searchsorted (CPU)
85 changes: 77 additions & 8 deletions NEWS_DEV.txt
@@ -15,7 +15,15 @@ git shortlog -sn rel-0.8.0..

TODO: better Theano conv doc

Highlight:
Highlights:
- Better integration of Theano+libgpuarray packages into conda distribution
 - Better handling of Windows line endings in C code
- Better compatibility with NumPy 1.12
- Faster scan optimizations
- Fixed broadcast checking in scan
- Bug fixes related to merge optimizer and shape inference
 - Many other bug fixes and improvements
- Updated documentation
- Many computation and compilation speed up
 - More numerical stability by default for some graphs
- Jenkins (gpu tests run on PR in addition to daily buildbot)
@@ -30,29 +38,34 @@ Highlight:
- scan with checkpoint (trade off between speed and memory usage, useful for long sequences)
- Added a bool dtype

- New back-end:
- New GPU back-end:

- Value of a shared variable is now set inplace
- float16 storage
 - Better mapping between Theano device numbers and nvidia-smi numbers, using the PCI bus ID of graphics cards
- More pooling support on GPU when cuDNN isn't there.
- ignore_border=False is now implemented for pooling.
- More pooling support on GPU when cuDNN isn't there
- ignore_border=False is now implemented for pooling

Interface changes:
- In MRG, replaced method `multinomial_wo_replacement()` with new method `choice()`
 - New pooling interface (see the sketch after this list)
 - Pooling parameters can change at run time
- When converting empty list/tuple, now we use floatX dtype
 - The MRG random generator now tries to infer the broadcast pattern of its output
- Move softsign out of sandbox to theano.tensor.nnet.softsign
 - Roll now makes the shift modulo the size of the axis we roll on
- Merge CumsumOp/CumprodOp into CumOp
- round() default to the same as NumPy: half_to_even.
 - round() defaults to the same as NumPy: half_to_even
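
A brief sketch of the new pooling interface referenced above, assuming the 0.9
`theano.tensor.signal.pool.pool_2d` keywords (`ws`, `stride`, `pad`); per the note above, the
pooling parameters are inputs of the op, so they can also change at run time::

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.signal.pool import pool_2d

    x = T.tensor4('x')
    # 2x2 max pooling; `ws` replaces the old `ds` keyword.
    y = pool_2d(x, ws=(2, 2), ignore_border=True, mode='max')

    f = theano.function([x], y)
    img = np.arange(16, dtype=theano.config.floatX).reshape(1, 1, 4, 4)
    print(f(img).shape)  # expected (1, 1, 2, 2)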

Convolution updates:
- Implement conv2d_transpose convenience function
- Multi-cores convolution and pooling on CPU
- New abstract 3d convolution interface similar to the 2d convolution interface
- Dilated convolution


GPU:
- GPUMultinomialFromUniform op now supports multiple dtypes
 - cuDNN: support version 5.1 and wrap batch normalization (2d and 3d) and RNN functions
 - Multi-GPU, synchronous update (via platoon, using NCCL)
- GpuAdvancedSubtensor in new back-end
@@ -61,9 +74,13 @@ GPU:
- Support for solve (using cusolver), erfinv and erfcinv
 - cublas gemv workaround when we reduce on an axis with a dimension size of 0
- Warn user that some cuDNN algorithms may produce unexpected results in certain environments
for convolution backward filter operations.
for convolution backward filter operations

New features:
- OpFromGraph now allows gradient overriding for every input
- Added Abstract Ops for batch normalization that use cuDNN when available and pure Theano CPU/GPU alternatives otherwise
- Added new Theano flag cuda.enabled
- Added new Theano flag print_global_stats to print some global statistics (time spent) at the end
 - Add gradient of solve, tensorinv (CPU), tensorsolve (CPU), searchsorted (CPU)
- Add Multinomial Without Replacement
- conv3d2d support full and half mode (REMOVE?)
@@ -77,15 +94,15 @@ New features:


Others:
- Split op now has C code for CPU and GPU
- "theano-cache list" now includes compilation times
- Speed up argmax only on gpu (without also needing the max)
 - A few infrequent bug fixes
 - More stack traces in error messages
 - Speed up cholesky grad
 - log(sum(exp(...))) now gets stability optimized




Other more detailed changes:
 - Allow more than one output to be a destructive inplace
- Add flag profiling.ignore_first_call, useful to profile the new gpu back-end
@@ -100,6 +117,58 @@ Other more detailed changes:


ALL THE PRS BELOW HAVE BEEN CHECKED
* https://github.com/Theano/Theano/pull/5559
* https://github.com/Theano/Theano/pull/5568
* https://github.com/Theano/Theano/pull/5553
* https://github.com/Theano/Theano/pull/5561
* https://github.com/Theano/Theano/pull/5558
* https://github.com/Theano/Theano/pull/5544
* https://github.com/Theano/Theano/pull/5552
* https://github.com/Theano/Theano/pull/5547
* https://github.com/Theano/Theano/pull/5542
* https://github.com/Theano/Theano/pull/5451
* https://github.com/Theano/Theano/pull/5520
* https://github.com/Theano/Theano/pull/5539
* https://github.com/Theano/Theano/pull/5532
* https://github.com/Theano/Theano/pull/5457
* https://github.com/Theano/Theano/pull/5477
* https://github.com/Theano/Theano/pull/5499
* https://github.com/Theano/Theano/pull/5518
* https://github.com/Theano/Theano/pull/5527
* https://github.com/Theano/Theano/pull/5522
* https://github.com/Theano/Theano/pull/5505
* https://github.com/Theano/Theano/pull/5523
* https://github.com/Theano/Theano/pull/5516
* https://github.com/Theano/Theano/pull/5511
* https://github.com/Theano/Theano/pull/5443
* https://github.com/Theano/Theano/pull/5255
* https://github.com/Theano/Theano/pull/5508
* https://github.com/Theano/Theano/pull/5479
* https://github.com/Theano/Theano/pull/5462
* https://github.com/Theano/Theano/pull/5490
* https://github.com/Theano/Theano/pull/5480
* https://github.com/Theano/Theano/pull/5497
* https://github.com/Theano/Theano/pull/5489
* https://github.com/Theano/Theano/pull/5474
* https://github.com/Theano/Theano/pull/5454
* https://github.com/Theano/Theano/pull/5469
* https://github.com/Theano/Theano/pull/5458
* https://github.com/Theano/Theano/pull/5481
* https://github.com/Theano/Theano/pull/5190
* https://github.com/Theano/Theano/pull/5473
* https://github.com/Theano/Theano/pull/5456
* https://github.com/Theano/Theano/pull/5398
* https://github.com/Theano/Theano/pull/5468
* https://github.com/Theano/Theano/pull/5459
* https://github.com/Theano/Theano/pull/5452
* https://github.com/Theano/Theano/pull/5298
* https://github.com/Theano/Theano/pull/5442
* https://github.com/Theano/Theano/pull/5450
* https://github.com/Theano/Theano/pull/5435
* https://github.com/Theano/Theano/pull/5446
* https://github.com/Theano/Theano/pull/5447
* https://github.com/Theano/Theano/pull/5445
* https://github.com/Theano/Theano/pull/5323
* https://github.com/Theano/Theano/pull/5421
* https://github.com/Theano/Theano/pull/5343
* https://github.com/Theano/Theano/pull/5437
2 changes: 1 addition & 1 deletion doc/conf.py
@@ -74,7 +74,7 @@
# The short X.Y version.
version = '0.9'
# The full version, including alpha/beta/rc tags.
release = '0.9.0beta1'
release = '0.9.0rc1'

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
2 changes: 2 additions & 0 deletions doc/index.txt
@@ -21,6 +21,8 @@ learning/machine learning <https://mila.umontreal.ca/en/cours/>`_ classes).
News
====

* 2017/02/20: Release of Theano 0.9.0rc1, with many improvements and bug fixes; the final release is coming.

* 2017/01/24: Release of Theano 0.9.0beta1, with many improvements and bug fixes; the release candidate is coming.

* 2016/05/09: New technical report on Theano:
6 changes: 5 additions & 1 deletion doc/introduction.txt
@@ -165,11 +165,13 @@ Note: There is no short term plan to support multi-node computation.
Theano Vision State
===================

Here is the state of that vision as of January 24th, 2017 (after Theano 0.9.0beta1):
Here is the state of that vision as of February 20th, 2017 (after Theano 0.9.0rc1):

* We support tensors using the `numpy.ndarray` object and we support many operations on them.
* We support sparse types by using the `scipy.{csc,csr,bsr}_matrix` object and support some operations on them.
* We have implemented/wrapped more advanced linear algebra operations. Many more are still possible.
* We have basic support for creating new operations from graphs at runtime. It handles gradient overriding
  for every input and inlining at the start of compilation well; the non-inlined case is not yet well covered.
* We have many graph transformations that cover the 4 categories listed above.
* We can improve the graph transformation with better storage optimization
and instruction selection.
@@ -195,7 +197,9 @@

* No multi-node support.
* Most, but not all NumPy functions/aliases are implemented.

* https://github.com/Theano/Theano/issues/1080

* Wrapping an existing Python function is easy and documented.
* We know how to separate the shared variable memory
storage location from its object type (tensor, sparse, dtype, broadcast
2 changes: 1 addition & 1 deletion setup.py
@@ -53,7 +53,7 @@
MAJOR = 0
MINOR = 9
MICRO = 0
SUFFIX = "beta1" # Should be blank except for rc's, betas, etc.
SUFFIX = "rc1" # Should be blank except for rc's, betas, etc.
ISRELEASED = False

VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
4 changes: 2 additions & 2 deletions theano/gpuarray/__init__.py
@@ -45,9 +45,9 @@ def init_dev(dev, name=None):
global pygpu_activated
if not config.cxx:
raise RuntimeError("The new gpu-backend need a c++ compiler.")
if (pygpu.version.major, pygpu.version.minor) < (0, 6):
if (pygpu.version.major, pygpu.version.minor, pygpu.version.patch) < (0, 6, 1):
raise ValueError(
"Your installed version of pygpu is too old, please upgrade to 0.6 or later")
"Your installed version of pygpu is too old, please upgrade to 0.6.1 or later")
# This is for the C headers API, we need to match the exact version.
if pygpu.gpuarray.api_version()[0] != 1:
raise ValueError(
