-> Jenkinsfile.prebuilt-cache
Compile all dependencies of DeepDetect and keep them as Jenkins artefacts here:
/var/lib/jenkins/jobs/deepdetect-prebuilt-cache/branches/master/builds/<BUILD ID>/archive/build/
- Triggered manually
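For reference, a minimal sketch of how to peek at the latest archived cache on a CI server. Selecting the highest numeric build id is an assumption for illustration only; the Jenkinsfiles may pick the build differently.

# List the dependency artefacts archived by the most recent prebuilt-cache build.
CACHE=/var/lib/jenkins/jobs/deepdetect-prebuilt-cache/branches/master/builds
# Highest numeric directory name = latest build (assumption, illustration only).
LAST=$(ls -1 "$CACHE" | grep -E '^[0-9]+$' | sort -n | tail -1)
ls "$CACHE/$LAST/archive/build/"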
Job that keeps /var/lib/jenkins/jobs/deepdetect-prebuilt-cache/ in sync
between the CI servers.
- Triggered manually
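A minimal sketch of what such a sync amounts to, assuming a plain rsync over SSH; <OTHER_CI_SERVER> is a placeholder, and the actual job may use different tooling or options.

# Mirror the prebuilt cache from this CI server to another one (illustrative only).
rsync -a --delete \
  /var/lib/jenkins/jobs/deepdetect-prebuilt-cache/ \
  jenkins@<OTHER_CI_SERVER>:/var/lib/jenkins/jobs/deepdetect-prebuilt-cache/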
-> Jenkinsfile.unittests
Build and run all tests.
Everything is done inside a docker image: ci/devel.Dockerfile
The docker container mounts the prebuilt directory as a copy-on-write volume.
- Triggered on pull requests only
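One way to obtain a copy-on-write view of the prebuilt directory is an overlay mount; this is only a sketch of the idea with made-up temporary paths, not necessarily how Jenkinsfile.unittests does it (dd-dev is the devel image built from ci/devel.Dockerfile, see below).

# Overlay the archived prebuilt build dir (read-only lower layer) with a throwaway
# upper layer, then hand the merged directory to the container as its build dir.
PREBUILT=/var/lib/jenkins/jobs/deepdetect-prebuilt-cache/branches/master/builds/<BUILD ID>/archive/build
mkdir -p /tmp/dd-upper /tmp/dd-work /tmp/dd-merged
sudo mount -t overlay overlay \
  -o lowerdir="$PREBUILT",upperdir=/tmp/dd-upper,workdir=/tmp/dd-work /tmp/dd-merged
docker run -it -v $(pwd):/dd -v /tmp/dd-merged:/dd/build -w /dd dd-dev /bin/bash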
-> Jenkinsfile-jetson-nano.unittests
Build the TensorRT backend and run the TensorRT tests on a Jetson Nano.
Everything is done inside a docker image: ci/devel-jetson-nano.Dockerfile
- Triggered on pull requests with ci:embedded only
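A hedged sketch of reproducing this by hand on a Jetson Nano; the image tag, the --runtime nvidia flag and the USE_TENSORRT cmake option are assumptions to be checked against the actual Jenkinsfile.

# Build the Jetson devel image and open a shell in it with GPU access.
docker build -t dd-dev-jetson -f ci/devel-jetson-nano.Dockerfile .
docker run -it --runtime nvidia -v $(pwd):/dd -w /dd dd-dev-jetson /bin/bash
# Inside the container: build only the TensorRT backend and its tests.
mkdir -p build && cd build
cmake .. -DUSE_TENSORRT=ON -DBUILD_TESTS=ON
make -j$(nproc)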
Build all docker images, push them to Docker Hub, and keep the Docker Hub README in sync with the GitHub README.
- Triggered every night on the master branch
- Triggered manually on release tags
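Conceptually the job boils down to something like the following for each image; the image name and Dockerfile path below are examples only, and the README sync step is not shown.

# Build and push one of the public images (names/paths are illustrative).
docker build -t jolibrain/deepdetect_cpu:latest -f docker/cpu.Dockerfile .
docker push jolibrain/deepdetect_cpu:latest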
Release procedure: on a clean master branch with all tags fetched:
$ git fetch --tags
$ git checkout master
$ git reset --hard origin/master
$ yarn
$ ci/release.sh
If the result is OK, publish the release note on GitHub and push tags:
$ git push --follow-tags origin master
The script ci/release.sh updates CHANGELOG.md, commits it, creates a tag, and creates the GitHub release.
On Jenkins, in the deepdetect-docker-build job, open the Tags tab and run the released version.
When the docker images have been released, platform_ui and dd_platform_docker can be released:
$ git fetch --tags
$ git checkout master
$ git reset --hard origin/master
$ yarn
$ ci/release.sh
$ git push --follow-tags origin master
Build the development image with the default versions:
$ docker build -t dd-dev -f ci/devel.Dockerfile --progress=plain .
or with explicit Ubuntu/CUDA/cuDNN/TensorRT versions:
$ docker build -t dd-dev -f ci/devel.Dockerfile --progress=plain \
    --build-arg DD_UBUNTU_VERSION=20.04 \
    --build-arg DD_CUDA_VERSION=11.1 \
    --build-arg DD_CUDNN_VERSION=8 \
    --build-arg DD_TENSORRT_VERSION=7.2.1-1+cuda11.1 \
    .
Then configure and build DeepDetect inside the container:
$ docker run -it -v $(pwd):/dd -w /dd dd-dev /bin/bash
root@4b4d72e9c8b4:/dd# mkdir build
root@4b4d72e9c8b4:/dd# cd build
root@4b4d72e9c8b4:/dd/build# cmake .. -DBUILD_TESTS=ON
root@4b4d72e9c8b4:/dd/build# make -j$(nproc)
To run the tests:
$ docker run -it -v $(pwd):/dd -w /dd dd-dev /bin/bash
root@4b4d72e9c8b4:/dd# cd build && make tests
or run a single test binary directly:
root@4b4d72e9c8b4:/dd# cd build/tests && ./ut_caffeapi
To add a new slave node, run the following on the new node as root:
apt install -y openjdk-11-jre
adduser jenkins --shell /bin/bash --disabled-password --home /var/lib/jenkins
usermod jenkins -a -G docker
mkdir /var/lib/jenkins/.ssh
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHmrxyMYsQL8HSKjq4ASmxtWUXl4395XswKmGXDtQpvk jenkins@jenkins" > /var/lib/jenkins/.ssh/authorized_keys
chown -R jenkins:jenkins /var/lib/jenkins/.ssh
chmod 500 /var/lib/jenkins/.ssh/authorized_keys
On x86 GPU nodes, also make sure the NVIDIA CUDA drivers and docker are installed.
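A quick sanity check once the drivers and docker are in place; the CUDA image tag is only an example, and --gpus all assumes the NVIDIA container toolkit is installed.

# The driver should be visible on the host and from inside a container.
nvidia-smi
docker run --rm --gpus all nvidia/cuda:11.1.1-base-ubuntu20.04 nvidia-smi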
On the Jenkins master node:
sudo -u jenkins -i
ssh-keyscan 10.10.77.72 >> /var/lib/jenkins/.ssh/known_hosts
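Optionally, still as the jenkins user on the master, check that passwordless SSH to the new node works before adding it in the UI (reusing the example IP above).

# Should log in without a password prompt and reach the node's docker daemon.
ssh jenkins@10.10.77.72 'id && docker ps'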
On the Jenkins UI:
- Click on Manage Jenkins -> Manage Nodes and Clouds -> New Node
- Set the Node name, select Permanent Agent and click on Add
- Set Remote root directory to /var/lib/jenkins
- In Labels, add gpu for x86 nodes, nano for jetson nano nodes
- In Usage, select "Only build jobs with label expressions matching this node"
- In Launch method, select "Launch agents via SSH"
- Set Host to the machine hostname or IP
- Use the jenkins Credentials
- On x86, you can increase # of executors depending on the RAM available
- Click Save
- Click Relaunch Agent
When you see "Agent successfully connected and online", you're good.
For x86 GPU nodes only:
- Click on Manage Jenkins -> Configure System
- In the Lockable Resources Manager section, add all the GPUs the node has. The naming is important: the Jenkinsfile.unittests job uses it to reserve GPUs. For each GPU, create a resource with:
  - name and description: <UPPERCASE NODE NAME> GPU <GPU_INDEX> (example: NEPTUNE05T GPU 0)
  - labels: <NODE NAME>-gpu (example: neptune05t-gpu)
- Click Save
Job dispatch uses Jenkins labels. We have master, gpu and nano:
- master is used mainly for the prebuilt cache sync and the docker images jobs
- gpu is used to run unit tests on pull requests
- nano is used for jetson nano related jobs
In a Jenkins job file, the node is selected by the agent section, e.g.:
pipeline {
    agent { node { label 'gpu' } }
    ...
}