LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks
Authors: Woosung Kang, Kilho Lee, Jinkyu Lee, Insik Shin, Hoon Sung Chwa
In the 42nd IEEE Real-Time Systems Symposium (RTSS 2021), Dortmund, Germany, December 2021

Requirements

CUDA: >= 10.2
cuDNN: >= 8.0.2
PyTorch: 1.4.0
Python: >= 3.6
CMake: >= 3.10.2

How to use

PyTorch modification

  1. Install PyTorch 1.4.0.
  2. Go to the installation directory (typically /home/{username}/.local/lib/python{version}/site-packages/torch).
  3. Replace the nn and quantization directories with the modified versions from this repository (see the sketch below).
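As a rough illustration, the following Python sketch locates the installed torch package and copies the modified nn and quantization directories over it. The location of the modified directories inside the repository checkout (MODIFIED_ROOT) is an assumption; adjust it to the actual layout.

```python
import os
import shutil
import torch

# Locate the installed torch package (e.g. ~/.local/lib/python3.x/site-packages/torch).
torch_dir = os.path.dirname(torch.__file__)

# Path to the modified directories in this repository -- assumed layout,
# adjust to wherever nn/ and quantization/ actually live in the checkout.
MODIFIED_ROOT = "/path/to/LaLaRAND"

for pkg in ("nn", "quantization"):
    src = os.path.join(MODIFIED_ROOT, pkg)
    dst = os.path.join(torch_dir, pkg)
    shutil.rmtree(dst)         # remove the stock package directory
    shutil.copytree(src, dst)  # install the modified one
    print(f"Replaced {dst} with {src}")
```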

Scheduler

  1. Run the scheduler before launching the DNN tasks.
  2. Provide the resource configuration of each DNN task as a txt file (currently read from /tmp/{pid of task}.txt); see the sketch below.
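As a minimal sketch, a task could write its own configuration file before starting inference. The file path follows the /tmp/{pid of task}.txt convention above; the contents shown (one line per layer with a CPU/GPU assignment) are purely hypothetical, since the actual format is defined by the scheduler in this repository.

```python
import os

# Configuration file path follows the README convention: /tmp/{pid of task}.txt
config_path = f"/tmp/{os.getpid()}.txt"

# Hypothetical per-layer CPU/GPU assignment -- the real format is defined
# by the LaLaRAND scheduler; this is only a placeholder illustration.
layer_assignment = ["CPU", "GPU", "GPU", "CPU"]

with open(config_path, "w") as f:
    for idx, device in enumerate(layer_assignment):
        f.write(f"{idx} {device}\n")

print(f"Wrote resource configuration to {config_path}")
```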

DNN task

  1. Before running the inference code, do the following.
  2. Call {model}.set_rt() to set the real-time priority of the task.
  3. Call {model}.hetero() to enable heterogeneous resource allocation.
  4. hetero() requires an inference function and sample inputs for input calibration (see the sketch below).
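A minimal usage sketch, assuming a standard torchvision model and that set_rt() takes no arguments and hetero() takes the inference function and sample inputs in that order (the exact signatures may differ in the modified PyTorch build):

```python
import torch
import torchvision.models as models

# Load a model; any nn.Module built on the modified PyTorch should work.
model = models.resnet18(pretrained=True).eval()

# Sample inputs used by hetero() for input calibration.
sample_inputs = torch.randn(1, 3, 224, 224)

# Inference function handed to hetero(); assumed to take the inputs and return the outputs.
def inference(inputs):
    with torch.no_grad():
        return model(inputs)

# Set the real-time priority of this task.
model.set_rt()

# Enable heterogeneous (layer-by-layer CPU/GPU) resource allocation.
# The argument order (inference function, sample inputs) is an assumption.
model.hetero(inference, sample_inputs)

# Normal inference afterwards.
output = inference(sample_inputs)
print(output.shape)
```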
