submission-artefacts/evaluation

Benchmark for prediction accuracy and performance comparison

cd benchmark
python benchmark.py -s default

Plot results:

cd benchmark/eval/default
python plot_default.py
python plot_comparison.py
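
The steps above can be chained into a single helper script. Below is a minimal sketch, assuming it is run from the repository root and that the paths and commands are exactly those listed above; the script name run_default.sh is illustrative, not part of the repository.

#!/usr/bin/env bash
# run_default.sh (illustrative name): run the default accuracy/performance
# benchmark and generate its plots; assumes the repository root as CWD.
set -euo pipefail

# Run the benchmark in a subshell so the working directory is restored.
(
  cd benchmark
  python benchmark.py -s default
)

# Generate the plots from the benchmark output.
(
  cd benchmark/eval/default
  python plot_default.py
  python plot_comparison.py
)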

Benchmark for evolving dataset

cd benchmark
python benchmark_evolving_ds.py -s medium

Plot results:

cd benchmark/eval/evolving_ds
python plot_evolving_dataset.py -s medium

Benchmark for cost restriction

cd benchmark
python benchmark_cost_restriction.py

Plot results:

cd benchmark/eval/cost_restriction
python plot_cost_restriction.py

Benchmark for number of training workloads

cd benchmark
python benchmark.py -s n_jobs
cd benchmark/eval/n_jobs
python plot_job_n.py

Benchmark for row density across different numbers of workloads

Steady state: all 90 workloads

cd benchmark
python benchmark.py -s steady_state
cd benchmark/eval/row_density
python plot_row_den.py -s steady_state

Cold start: R = 2.5, i.e. 16 * 2.5 = 40 workloads

cd benchmark
python benchmark.py -s cold_start
cd benchmark/eval/row_density
python plot_row_den.py -s cold_start
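
For convenience, the remaining suites can be run back to back with the same subshell pattern. The sketch below only sequences the commands from the sections above and assumes the repository root as the working directory; the script name run_all_benchmarks.sh is illustrative, not part of the repository.

#!/usr/bin/env bash
# run_all_benchmarks.sh (illustrative name): run the evolving-dataset,
# cost-restriction, n_jobs, and row-density benchmarks with their plots.
set -euo pipefail

# Evolving dataset (medium setting)
( cd benchmark && python benchmark_evolving_ds.py -s medium )
( cd benchmark/eval/evolving_ds && python plot_evolving_dataset.py -s medium )

# Cost restriction
( cd benchmark && python benchmark_cost_restriction.py )
( cd benchmark/eval/cost_restriction && python plot_cost_restriction.py )

# Number of training workloads
( cd benchmark && python benchmark.py -s n_jobs )
( cd benchmark/eval/n_jobs && python plot_job_n.py )

# Row density: steady state (all 90 workloads) and cold start (40 workloads)
( cd benchmark && python benchmark.py -s steady_state )
( cd benchmark/eval/row_density && python plot_row_den.py -s steady_state )
( cd benchmark && python benchmark.py -s cold_start )
( cd benchmark/eval/row_density && python plot_row_den.py -s cold_start )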
