# MArk

Exploiting Cloud Services for Cost-Effective, SLO-Aware Machine Learning Inference Serving.

## Instructions

### Pre-launch instances

- Set the instance type and model type in `constants.py` (`INS_SOURCE` and `MODEL`); a hedged sketch of these settings follows this list.
- Launch the instances: `./bin/start_server.sh launch $tag` (`$tag` is optional and defaults to `0`).
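
As a rough illustration, the two settings this step reads from `constants.py` might look like the sketch below. The names `INS_SOURCE` and `MODEL` come from the step above, but the value formats are assumptions for illustration, not the repository's actual defaults.

```python
# Illustrative sketch only: constants.py does define INS_SOURCE and MODEL,
# but the value formats below are assumed for the sake of example.

# Which instance type to pre-launch (assumed to be an EC2 instance type string).
INS_SOURCE = "c5.xlarge"

# Which model the launched instances will serve (assumed identifier).
MODEL = "resnet50"
```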

### Run the experiment

- Start the frontend: `./bin/start_server.sh start $tag` (`$tag` is optional and defaults to `0`).
- Choose which sender to use in `experiment/request_sender.py`; a hypothetical sender sketch follows this list.
- Start the request-sending process: `./bin/start_server.sh send $burst` (`$burst` is optional).
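
To make the "choose which sender" step concrete, here is a minimal, hypothetical sketch of a request sender. The frontend URL, payload shape, and the steady/burst sender functions are illustrative assumptions; the repository's actual `experiment/request_sender.py` defines its own senders.

```python
# Hypothetical stand-in for experiment/request_sender.py. Everything here
# (endpoint URL, payload shape, sender names) is an assumption for illustration.
import json
import time
import urllib.request

FRONTEND_URL = "http://localhost:8080/predict"  # assumed frontend endpoint


def send_one(payload: dict) -> None:
    """POST a single inference request to the frontend."""
    req = urllib.request.Request(
        FRONTEND_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()


def steady_sender(rate_per_sec: float, duration_sec: int) -> None:
    """Send requests at a fixed rate for a fixed duration."""
    interval = 1.0 / rate_per_sec
    for _ in range(int(rate_per_sec * duration_sec)):
        send_one({"input": "example"})
        time.sleep(interval)


def burst_sender(burst_size: int, pause_sec: float, bursts: int) -> None:
    """Send back-to-back bursts of requests separated by idle gaps."""
    for _ in range(bursts):
        for _ in range(burst_size):
            send_one({"input": "example"})
        time.sleep(pause_sec)


if __name__ == "__main__":
    # "Choose which sender to use" then amounts to switching which call runs here,
    # e.g. swap in burst_sender(...) for a bursty workload.
    steady_sender(rate_per_sec=10, duration_sec=60)
```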

### Collect logs

- Move the logs to their assigned directory: `./bin/start_server.sh move $tag` (`$tag` is the prefix of the log directory name, e.g. `$tag-v1`).
- Parse latencies from the logs: `python3 experiment/parser/parse_latency.py $path_to_log_dir`; a hedged parsing sketch follows this list.
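
A minimal sketch of what latency parsing could look like, assuming each log line carries a `latency: <value>` field; the actual log format and the real `experiment/parser/parse_latency.py` may differ.

```python
# Illustrative latency parser; the "latency: <number>" line format is an
# assumption, not the repository's actual log format.
import re
import statistics
import sys
from pathlib import Path

LATENCY_RE = re.compile(r"latency:\s*([0-9.]+)")  # assumed log-line pattern


def parse_dir(log_dir: str) -> list[float]:
    """Collect every latency value found in the .log files under log_dir."""
    latencies: list[float] = []
    for log_file in Path(log_dir).glob("*.log"):
        for line in log_file.read_text().splitlines():
            match = LATENCY_RE.search(line)
            if match:
                latencies.append(float(match.group(1)))
    return latencies


if __name__ == "__main__":
    values = parse_dir(sys.argv[1])
    if values:
        print(f"count={len(values)}  mean={statistics.mean(values):.2f}  "
              f"p99={statistics.quantiles(values, n=100)[98]:.2f}")
```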
