vroom

vroom is a userspace NVMe driver written in Rust. As a userspace driver that (optionally) uses VFIO, it can run without root privileges. It aims to be as fast as the SPDK NVMe driver while minimizing unsafe code and offering a simplified API. vroom currently serves as a proof of concept; many features are yet to be implemented.

For further details, take a look at @bootreer's thesis on vroom, or at my thesis for the VFIO implementation.

Build instructions

vroom needs to be compiled from source using Rust's package manager, cargo. vroom uses hugepages; enable them using:

cd vroom
sudo ./scripts/setup-hugetlbfs.sh
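
The script presumably reserves hugepages and mounts a hugetlbfs instance. If you prefer to do this by hand, the effect can be approximated with the following sketch (the page count and mount point here are assumptions; check scripts/setup-hugetlbfs.sh for the values it actually uses):

# reserve 2 MiB hugepages (the count is an assumed example)
echo 512 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# mount a hugetlbfs instance (the mount point is an assumed example)
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge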

To build the driver, run:

cargo build --release --all-targets

An example can be run using:

cargo run --example <example>

To re-bind the kernel NVMe driver after using vroom, run:

./scripts/bind-kernel-driver.sh <pci_address>
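
Conceptually, re-binding boils down to two sysfs writes (a sketch, assuming the device is currently bound to vfio-pci; the actual script may differ):

# release the device from its current driver (e.g. vfio-pci)
echo "$PCI_ADDRESS" | sudo tee /sys/bus/pci/devices/$PCI_ADDRESS/driver/unbind

# hand the device back to the kernel NVMe driver
echo "$PCI_ADDRESS" | sudo tee /sys/bus/pci/drivers/nvme/bind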

Using the IOMMU

By default, vroom needs root privileges to use DMA. By using the IOMMU through the Linux VFIO framework, the driver can be run without root privileges while also improving safety.

  1. Enable the IOMMU in the BIOS. On most Intel machines, the BIOS entry is called VT-d and has to be enabled in addition to any other virtualization features.
  2. Enable the IOMMU in the Linux kernel by adding intel_iommu=on to your kernel command line (if you are using GRUB, the file /etc/default/grub contains a GRUB_CMDLINE_LINUX variable to which you can append it; see the sketch after this list).
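
As a sketch of step 2 (the exact config-update command depends on your distribution):

# /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_iommu=on"

# regenerate the GRUB config and reboot
sudo update-grub   # Debian/Ubuntu; on other distros e.g. grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot

# after rebooting, verify that the IOMMU is active
dmesg | grep -i -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/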

From step 3 on, you can either use the provided scripts or continue manually. To bind the vfio-pci driver using the script, execute:

./scripts/bind-vfio-driver.sh <pci_address> 

To make the driver run without root, change the ownership of the VFIO device files:

chown $USER:$GROUP /dev/vfio/*

To unbind the vfio-pci driver, use:

./scripts/unbind-vfio-driver.sh <pci_address>

To set it up manually (a consolidated sketch follows these steps):

  1. Get the PCI address and the vendor and device IDs: lspci -nn | grep NVM returns something like 00:01.0 Non-Volatile memory controller [0108]: Red Hat, Inc. QEMU NVM Express Controller [1b36:0010] (rev 02). In this case, 0000:00:01.0 is our PCI address, and 1b36 and 0010 are the vendor and device IDs, respectively.
  2. Unbind the device from the Linux NVMe driver. echo $PCI_ADDRESS > /sys/bus/pci/devices/$PCI_ADDRESS/driver/unbind
  3. Enable the vfio-pci driver. modprobe vfio-pci
  4. Bind the device to the vfio-pci driver. echo $VENDOR_ID $DEVICE_ID > /sys/bus/pci/drivers/vfio-pci/new_id
  5. Chown the device to the user. chown $USER:$GROUP /dev/vfio/*
  6. That's it! Now you can compile and run vroom as stated above!
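
Strung together, steps 1 to 5 look roughly like this (using the QEMU example IDs from step 1; substitute the values for your own device, user, and group):

PCI_ADDRESS=0000:00:01.0   # from lspci -nn
VENDOR_ID=1b36
DEVICE_ID=0010

# step 2: unbind the device from the Linux NVMe driver
echo "$PCI_ADDRESS" | sudo tee /sys/bus/pci/devices/$PCI_ADDRESS/driver/unbind

# step 3: enable the vfio-pci driver
sudo modprobe vfio-pci

# step 4: bind the device to the vfio-pci driver
echo "$VENDOR_ID $DEVICE_ID" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id

# step 5: chown the VFIO device files to your user ($USER and $GROUP are placeholders)
sudo chown $USER:$GROUP /dev/vfio/*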

Testing

Currently, a few integration tests are implemented. First, set an environment variable containing the NVMe PCI address, e.g. for 0000:00:01.0:

export NVME_ADDR="0000:00:01.0"

Then, run the tests using:

cargo test
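
Equivalently, the variable can be scoped to a single invocation:

NVME_ADDR="0000:00:01.0" cargo test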
