- `LearnerTorchModel` can now be parallelized and trained with encapsulation activated.
- `jit_trace` now works in combination with batch normalization.
- Ensures compatibility with `R6` version 2.6.0.
- Removed some optimizers for which no fast ('ignite') variant exists.
- The default optimizer is now AdamW instead of Adam.
- The private `LearnerTorch$.dataloader()` method no longer operates on the `task` but on the `dataset` generated by the private `LearnerTorch$.dataset()` method.
- The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep issues where the data is sorted.
- Optimizers now use their faster 'ignite' implementations, which leads to considerable speed improvements.
- The `jit_trace` parameter was added to `LearnerTorch`; setting it to `TRUE` can lead to significant speedups. This should only be enabled for 'static' models (see the torch tutorial for more information); a sketch follows below.
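A minimal sketch, assuming the `classif.mlp` learner; the parameter values are illustrative:

```r
library(mlr3torch)

# jit_trace = TRUE traces the network once with torch's JIT and reuses the
# optimized graph for subsequent batches (only safe for 'static' models).
learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  jit_trace = TRUE
)
learner$train(tsk("sonar"))
```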
- Added parameter `num_interop_threads` to `LearnerTorch`.
- The `tensor_dataset` parameter was added, which allows stacking all batches at the beginning of training to make subsequent batch loading faster; see the sketch below.
- Use a faster default image loader.
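A hedged sketch combining the new speed-related settings; the values are illustrative and `tensor_dataset` is assumed to be a logical flag here:

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  tensor_dataset = TRUE,    # stack all batches once up front for faster loading
  num_interop_threads = 2   # new LearnerTorch parameter from the entry above
)
learner$train(tsk("sonar"))
```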
- Added a `PipeOp` for adaptive average pooling.
- The `n_layers` parameter was added to the MLP learner; a sketch follows below.
- Added the multimodal melanoma and CIFAR-{10, 100} example tasks.
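A sketch of the new parameter, assuming `n_layers` sets the network depth in combination with `neurons`:

```r
library(mlr3torch)

# Three hidden layers with 64 units each (assumed semantics of n_layers).
learner = lrn("classif.mlp",
  neurons = 64,
  n_layers = 3,
  epochs = 10,
  batch_size = 32
)
learner$train(tsk("sonar"))
```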
- Added a callback to iteratively unfreeze parameters for finetuning.
- Added different learning rate schedulers as callbacks.
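A hedged sketch of attaching one of these callbacks; the key `"lr_step"` and its arguments are assumptions modeled on torch's step scheduler:

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  # decay the learning rate by a factor of 0.1 every 5 epochs (assumed key/args)
  callbacks = t_clbk("lr_step", step_size = 5, gamma = 0.1)
)
learner$train(tsk("sonar"))
```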
- Torch learners can now be used with `AutoTuner`; a combined sketch follows after the next item.
- Early stopping now uses `epochs - patience` for the internally tuned values instead of the trained number of `epochs`, which was used before.
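A sketch for both entries, assuming the mlr3tuning API and illustrative values: `epochs` is tuned internally via early stopping while `AutoTuner` handles the outer search.

```r
library(mlr3torch)
library(mlr3tuning)

learner = lrn("classif.mlp",
  batch_size = 32,
  patience = 5,                                    # stop after 5 epochs without improvement
  epochs = to_tune(upper = 100, internal = TRUE),  # internally tuned number of epochs
  p = to_tune(0.1, 0.9)                            # also tune the dropout probability
)
set_validate(learner, validate = 0.3)  # validation split used for early stopping

at = auto_tuner(
  tuner = tnr("random_search"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.acc"),
  term_evals = 10
)
at$train(tsk("sonar"))
```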
- The `dataset` of a learner no longer has to return the tensors on the specified `device`, which allows for parallel dataloading on GPUs.
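A hedged sketch of what this enables, assuming the standard `device` and `num_workers` parameters of `LearnerTorch`; the values are illustrative:

```r
library(mlr3torch)

# Batches can now be assembled by parallel dataloader workers on the CPU and
# moved to the GPU afterwards, since the dataset itself no longer has to
# place tensors on the device.
learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  device = "cuda",
  num_workers = 4
)
```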
- `PipeOpBlock` should no longer create ID clashes with other `PipeOp`s in the graph (#260).
- Don't use deprecated `data_formats` anymore.
- Added `CallbackSetTB`, which allows logging that can be viewed with TensorBoard.
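A hedged sketch; the callback key `"tb"` and its argument names are assumptions about how `CallbackSetTB` is constructed:

```r
library(mlr3torch)

learner = lrn("classif.mlp",
  epochs = 10,
  batch_size = 32,
  callbacks = t_clbk("tb",
    path = file.path(tempdir(), "tb_logs"),  # directory read by TensorBoard
    log_train_loss = TRUE                    # assumed argument name
  )
)
learner$train(tsk("sonar"))
# afterwards, inspect with: tensorboard --logdir <path>
```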
- fix(preprocessing): fixed the construction of some `PipeOp`s such as `po("trafo_resize")`, which failed in some cases.
- fix(ci): tests were not run in the CI.
- fix(learner): `LearnerTabResnet` now works correctly.
- Fix that tests were not run in the CI.
- feat: added the `nn()` helper function to simplify the creation of neural network layers; a sketch follows below.
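A sketch of the helper, assuming `nn("linear")` is shorthand for the corresponding `po("nn_linear")` pipeop:

```r
library(mlr3torch)

# Build a small network graph; nn("linear") is assumed to abbreviate
# po("nn_linear"), and likewise for the other layers.
graph = po("torch_ingress_num") %>>%
  nn("linear", out_features = 32) %>>%
  nn("relu") %>>%
  nn("head")
```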
- Initial CRAN release