Using cargo-chef increased my build time from 5 minutes to 25 minutes #273
Adding sccache brings the time of my non-cargo-chef build down from 5 minutes to 3 minutes. But if I put cargo-chef back in, it goes up from 3 minutes to 9 minutes. So I guess for my monorepo-style Rust setup cargo-chef is not suitable.
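For context, sccache is usually wired in by pointing `RUSTC_WRAPPER` at the sccache binary; the storage backend is then selected by environment variables. A minimal Dockerfile sketch, assuming sccache is already installed in the image and an S3 backend is used (the bucket name and region here are hypothetical placeholders, not from this thread):

```dockerfile
# Route all rustc invocations through sccache
ENV RUSTC_WRAPPER=sccache
# Hypothetical bucket/region; SCCACHE_BUCKET selects the S3 storage backend
ENV SCCACHE_BUCKET=my-sccache-bucket
ENV SCCACHE_REGION=us-east-1
# --show-stats reports cache hits/misses after the build
RUN cargo build --release && sccache --show-stats
```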
You need to pass the
Thanks :) - I tried that, but it's still a lot slower at around 6 minutes compared to 3 minutes without cargo-chef. It seems to compile my service twice - once in the cook and then again in the build - and it seems to have a lot more dependencies to build as well.
without cargo-chef: (build timing screenshot)
with cargo-chef: (build timing screenshot)
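One common cause of this double-compilation pattern in a workspace is that `cargo chef cook` builds the dependency set for every workspace member while the final `cargo build --bin` targets only one of them. `cargo chef cook` accepts cargo-style target-selection flags such as `--bin`, which narrows the cook to one binary's dependencies. A minimal sketch, assuming the binary name `image-categorisation-service` used later in this thread (verify the exact flag behaviour against the cargo-chef README for your layout):

```dockerfile
FROM lukemathwalker/cargo-chef:latest-rust-1 AS chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Restrict the cook to one binary's dependency set so the cached
# layer does not build every workspace member
RUN cargo chef cook --release --bin image-categorisation-service --recipe-path recipe.json
COPY . .
RUN cargo build --release --bin image-categorisation-service
```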
It's expected that it takes a comparable amount of time if you're starting without a cache.
This is running in my CI build in GitHub Actions - do you expect the caching would work like that on the free runners on GitHub? Running it several times, I didn't see any improvement in the time.
I've added this to my build-push-action in the GitHub Actions YAML
and now I can see the layers being cached - unfortunately the time taken for the layers to be exported/imported from GitHub Actions negates the time saved :( so it's still significantly faster to not use cargo-chef with GitHub Actions in my case.
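The export/import overhead of a local cache directory shuttled through `actions/cache` can be avoided: buildx also has a GitHub Actions cache backend (`type=gha`) that talks to the cache service directly. A sketch, assuming `docker/build-push-action` with a recent buildx (the tag name here is a placeholder):

```yaml
- name: Build and push Docker image
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ghcr.io/${{ github.repository_owner }}/service:latest
    # "gha" uses the GitHub Actions cache service directly, so there is
    # no separate export/import step for a local cache directory
    cache-from: type=gha
    cache-to: type=gha,mode=max
```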
Maybe I can also cache in the S3 bucket for the build-push-action to speed up the import/export part.
It's a bit faster with the S3 cache - but even with no code changes, the build step still compiles quite a lot of dependencies. I was expecting that with no code changes nothing would be compiled, since everything would be cached.
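One likely explanation for dependencies being recompiled despite a warm cache: buildx's default cache mode (`mode=min`) only exports the layers of the final image, so in a multi-stage build the intermediate builder-stage layer that runs `cargo chef cook` is never saved. Adding `mode=max` to `cache-to` exports intermediate-stage layers as well. A sketch against the local-cache paths mentioned earlier in this thread:

```yaml
- name: Build and push Docker image
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ghcr.io/${{ github.repository_owner }}/service:latest
    cache-from: type=local,src=/tmp/.buildx-cache
    # mode=max exports layers from ALL stages, including the builder
    # stage that runs `cargo chef cook`; the default mode=min keeps
    # only the final image's layers
    cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
```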
What are you referring to here as "build step"?
I was referring to the last part of the Dockerfile where `cargo build --release` is called - i.e. after the cook. I only changed some lines in the Dockerfile and not any of the code - but the `cargo build --release` still compiled some things that had already been compiled in the cook stage.
Understanding why that is the case would require a reproducible example for me to analyse.
Hey! @kingsleyh I'm just now learning GitHub Actions but I had the same trouble as you and ended up finding a solution:

```yaml
build-and-push:
  runs-on: ubuntu-latest
  steps:
    # Checkout the code so we can build the Docker image
    - name: Checkout code
      uses: actions/checkout@v4
    # Set up Docker Buildx for multi-platform builds
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
    # Log in to GitHub Container Registry
    - name: Login to GitHub Container Registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    # Cache the Cargo dependencies to speed up the build
    - uses: actions/cache@v4
      with:
        path: /tmp/.buildx-cache
        key: service-${{ runner.os }}-buildx-cache
    # Save the latest commit hash to an environment variable so we can use it in the Dockerfile
    - name: Set commit hash
      run: |
        echo "COMMIT_HASH=$(git rev-parse --short HEAD)"
        echo "COMMIT_HASH=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
    # Build and push the Docker image for the changed service
    - name: Build and push Docker image
      id: build-push # needed so the digest step below can read this step's outputs
      uses: docker/build-push-action@v6
      with:
        context: services/${{ matrix.service }}
        push: true
        tags: |
          ghcr.io/${{ github.repository_owner }}/service:latest
          ghcr.io/${{ github.repository_owner }}/service:${{ env.COMMIT_HASH }}
        cache-from: type=local,src=/tmp/.buildx-cache
        cache-to: type=local,dest=/tmp/.buildx-cache-new
    # Move the new cache to the original location
    - name: Move cache to original location
      run: |
        rm -rf /tmp/.buildx-cache
        mv /tmp/.buildx-cache-new /tmp/.buildx-cache
    # Output the image digest so we can use it in other jobs
    - name: Image digest
      run: echo ${{ steps.build-push.outputs.digest }}
```

And here's my Dockerfile (it's the same as the examples):

```dockerfile
# Start from the official rust image and install cargo-chef
FROM lukemathwalker/cargo-chef:latest-rust-1 AS chef
WORKDIR /app

# Plan the build. If this has already been done, no need to re-do it
FROM chef AS planner
COPY . .
# Print all the files in the current directory
RUN ls
RUN cargo chef prepare --recipe-path recipe.json

# Build the dependencies
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo build --release --bin market-price-scrapper

# Use a minimal base image for the final build
FROM ubuntu:latest AS runtime
# Set working directory
WORKDIR /app
# Copy the built binary from the builder stage
COPY --from=builder /app/target/release/market-price-scrapper /app/market-price-scrapper
# Run the binary
ENTRYPOINT ["/app/market-price-scrapper"]
```
Hi

I have a monorepo with a workspace at the root and each micro-service as a cargo project.

When I copy all the files from the root into Docker, e.g.

COPY . .

and then run the cargo-chef commands as shown in the README, the `chef cook --release` compiles and builds my entire workspace including all the binaries, which takes a long time. Then it does the release build for the binary I actually want:

RUN cargo build --release --bin image-categorisation-service

Is there a way cargo-chef can avoid building the entire workspace when I just want to release the image-categorisation-service binary in a Docker image? Using cargo-chef has increased the build time for releasing just the image-categorisation-service from 5 minutes to 25 minutes because it's building the entire workspace.