
DeepSeek-R1 on Amazon Bedrock

This repository guides you through importing and using DeepSeek-R1-Distill-Llama-8B — a distilled model built on the Llama-3.1-8B base — from Hugging Face on Amazon Bedrock.

To learn more about DeepSeek-R1, please visit DeepSeek.

For a detailed walkthrough of the paper "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning", check out the paper read by Umar Jamil.

DeepSeek-R1-Distill-Llama-8B

Prerequisites

  • AWS Account with Bedrock access
  • Python environment with the following packages:
    • huggingface_hub
    • boto3
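You can verify the package prerequisites with a short snippet. This is a sketch; the `missing_packages` helper is hypothetical, not part of this repository:

```python
import importlib.util

def missing_packages(pkgs):
    """Return the subset of pkgs that cannot be imported in this environment."""
    return [p for p in pkgs if importlib.util.find_spec(p) is None]

# Packages this guide relies on
for pkg in missing_packages(["huggingface_hub", "boto3"]):
    print(f"missing: {pkg} (install with `pip install {pkg}`)")
```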

Setup Process

  1. Download Model Weights

    • The model weights are downloaded from Hugging Face Hub
    • Model used: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  2. Upload to S3

    • Model weights are uploaded to an S3 bucket
    • Target path: s3://[your-bucket]/models/DeepSeek-R1-Distill-Llama-8B/
  3. Import to Amazon Bedrock

    • Navigate to AWS Console > Bedrock > Foundation Models > Imported Models
    • Click "Import Model"
    • Name the model (e.g., my-DeepSeek-R1-Distill-Llama-8B)
    • Provide the S3 location of the model weights
    • Wait for successful import
    • Note down the Model ARN for API calls
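Steps 1 and 2 above can be sketched in Python. Assumptions: huggingface_hub and boto3 are installed, AWS credentials are configured, and the bucket name is a placeholder you replace with your own; the `s3_key` and `upload_model` helpers are illustrative, not from this repository:

```python
import os
from pathlib import Path

def s3_key(prefix: str, local_dir: str, file_path: str) -> str:
    """Map a local file under local_dir to its S3 object key under prefix."""
    rel = Path(file_path).relative_to(local_dir).as_posix()
    return f"{prefix}/{rel}"

def upload_model(model_id: str, bucket: str, prefix: str) -> None:
    # Imported lazily so the pure helpers above work without these packages
    from huggingface_hub import snapshot_download
    import boto3

    # 1. Download the weights from Hugging Face Hub
    local_dir = snapshot_download(repo_id=model_id)

    # 2. Upload every file to s3://bucket/prefix/
    s3 = boto3.client("s3")
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            s3.upload_file(path, bucket, s3_key(prefix, local_dir, path))

if __name__ == "__main__":
    upload_model("deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
                 "your-bucket",  # placeholder: replace with your S3 bucket
                 "models/DeepSeek-R1-Distill-Llama-8B")
```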

Import Model

Usage

Run the Jupyter notebook deepseek-bedrock.ipynb for the full implementation.

AWS Guide: https://community.aws/content/2sECf0xbpgEIaUpAJcwbrSnIGfu/deploying-deepseek-r1-model-on-amazon-bedrock
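Once imported, the model can be invoked with the Model ARN noted earlier. This is a minimal sketch, not the notebook's exact code: the ARN and region are placeholders, and the request/response shape assumes the Llama-style `prompt`/`generation` format that Llama-based imported models generally use:

```python
import json

def build_request(prompt: str, max_gen_len: int = 512,
                  temperature: float = 0.6) -> str:
    """Serialize a Llama-style request body for invoke_model."""
    return json.dumps({"prompt": prompt,
                       "max_gen_len": max_gen_len,
                       "temperature": temperature})

def invoke(prompt: str, model_arn: str, region: str = "us-east-1") -> str:
    import boto3  # imported lazily; requires configured AWS credentials
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.invoke_model(modelId=model_arn, body=build_request(prompt))
    return json.loads(resp["body"].read())["generation"]

# Example (placeholder ARN — use the one from the Imported Models page):
# print(invoke("Why is the sky blue?",
#              "arn:aws:bedrock:us-east-1:111122223333:imported-model/abc123"))
```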
