Tags: HumanCompatibleAI/imitation
Running benchmarks (#812)
* Add script to run benchmarks on Slurm.
* Increase memory and decrease QoS for benchmark runs.
* Add bash script to run the entire benchmark without any parallelization.
* Add scripts to generate benchmark summary information and compute the probability of improvement (see the rliable sketch below).
* Speed up the numpy transform when loading trajectories from a HuggingFace dataset.
* Add asserts and an explanation for the sample matrix.
* Add script to export sacred runs to a CSV file (see the export sketch below).
* Add mean/std/IQM and confidence intervals to the markdown summary script.
* Explain how to run the entire benchmarking suite and how to compare a new algorithm to the benchmark runs.
* Switch from choco's deprecated --side-by-side option to --allow-downgrade.
* Split up the Python and openssl/ffmpeg installation steps so that the step properly fails when the installation of one of the packages fails.
* Add tests for sacred file parsing.
* Add a link to the rliable library to the benchmarking README.
* Add a no-cover pragma for warnings about incomplete runs.
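The CSV export item refers to a script added in the PR; as a rough illustration of the idea, here is a minimal sketch assuming the runs were recorded with sacred's FileStorageObserver, which writes one numbered directory per run containing `config.json` and `run.json`. The `output/sacred` path and the `env_name` config key are hypothetical.

```python
"""Minimal sketch: export sacred FileStorageObserver runs to a CSV file."""
import csv
import json
from pathlib import Path


def export_runs(log_dir: Path, out_csv: Path) -> None:
    rows = []
    for run_dir in sorted(log_dir.iterdir()):
        run_json = run_dir / "run.json"
        config_json = run_dir / "config.json"
        if not (run_json.exists() and config_json.exists()):
            continue  # skip `_sources` and other non-run entries
        run = json.loads(run_json.read_text())
        config = json.loads(config_json.read_text())
        rows.append({
            "run_id": run_dir.name,
            "experiment": run["experiment"]["name"],
            "status": run["status"],            # e.g. COMPLETED / FAILED
            "result": run.get("result"),        # final return value, if any
            "env_name": config.get("env_name"), # hypothetical config key
        })
    if not rows:
        return
    with out_csv.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)


export_runs(Path("output/sacred"), Path("runs.csv"))
```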
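For the probability of improvement and the IQM/confidence-interval summaries, the entry points to the rliable library. A rough sketch of that comparison, using random placeholder score matrices of shape (runs, tasks) in place of the exported benchmark results:

```python
"""Rough sketch of an rliable-style comparison between two algorithms."""
import numpy as np
from rliable import library as rly
from rliable import metrics

rng = np.random.default_rng(0)
baseline = rng.random((5, 10))   # 5 seeds x 10 environments, placeholder data
candidate = rng.random((5, 10))

# Probability that `candidate` improves on `baseline`, with bootstrap CIs.
pairs = {"candidate,baseline": (candidate, baseline)}
prob, prob_cis = rly.get_interval_estimates(
    pairs, metrics.probability_of_improvement, reps=2000
)

# Aggregate mean and IQM with stratified-bootstrap confidence intervals.
def aggregate(scores: np.ndarray) -> np.ndarray:
    return np.array(
        [metrics.aggregate_mean(scores), metrics.aggregate_iqm(scores)]
    )

point, cis = rly.get_interval_estimates(
    {"candidate": candidate}, aggregate, reps=2000
)
print(prob, prob_cis, point, cis)
```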
Automate PyPI uploads on release (#489)
* Automate the PyPI release.
* Use our own PyPI release of sacred (yuck!).
* Bump version number (#490).
* Switch to generating the version number from setuptools_scm (see the sketch below).
* Fix typo.
* Use the v1 release.
* Add backwards-compatibility logic.
* Disable the local version scheme.
* Fetch tags so setuptools_scm can see them.
* Use v3, not master.
* Bump to Python 3.8 everywhere.
* Add no-cover pragma for a clause not expected to be hit.
* Work around flaky Windows tests.
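The versioning items fit together: setuptools_scm derives the version from git tags (hence "fetch tags" in shallow CI checkouts), the local scheme is disabled because PyPI rejects local version identifiers like `+gHASH`, and a fallback covers builds without git metadata. A minimal sketch of such a `setup.py`; the exact options imitation uses may differ:

```python
"""Minimal sketch of setuptools_scm-based versioning in setup.py."""
from setuptools import setup

setup(
    name="imitation",
    use_scm_version={
        # Disable the local scheme: no +gHASH suffixes, which PyPI rejects.
        "local_scheme": "no-local-version",
        # Backwards compatibility: version used when no git tag is visible.
        "fallback_version": "0.0.0",
    },
    setup_requires=["setuptools_scm"],
)
```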
Clean up for PyPI Upload (#187)
* Update setup.py to prepare for the PyPI release.
* Remove stray __init__ file (it was breaking pytype, among other problems).
* Seals, not benchmark_environments.
* Rename benchmark_environments to seals.
* Fix seals version.
* Update configuration guidelines (now horribly out of date).
* Clarify contribution guidelines in the README and make the code coverage check for project/main stricter.
* Add PyPI badge in anticipation of our glorious indexed future.
* Update Ray to 0.8.x and make the dependency optional (see the sketch below).
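Making Ray optional is typically done with setuptools extras. A minimal sketch under that assumption; the extra name "parallel" and the version pin are illustrative, not imitation's actual setup.py contents:

```python
"""Minimal sketch: an optional Ray dependency via setuptools extras."""
from setuptools import setup

setup(
    name="imitation",
    install_requires=[
        # core dependencies only; Ray is no longer required by default
    ],
    extras_require={
        # installed with `pip install imitation[parallel]`
        "parallel": ["ray[tune]~=0.8.0"],
    },
)
```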