quicR is an open-source R package for the analysis of real-time quaking-induced conversion (RT-QuIC) assay data—a powerful technique used in the diagnosis of protein misfolding disorders like Creutzfeldt-Jakob disease (CJD) and Parkinson’s disease.
As RT-QuIC becomes increasingly adopted in research and clinical settings, standardized methods for data processing, quality control, and high-throughput analysis remain limited. quicR fills this gap by offering a reproducible and flexible framework for RT-QuIC data curation, metric calculation, and visualization.
Built for integration into existing R workflows, quicR helps labs streamline RT-QuIC analysis and improve reproducibility across studies.
Workflow hierarchy of the quicR package. Blue nodes indicate steps where BMG software is needed. Purple nodes indicate functions dedicated to handling metadata. Red nodes are functions that acquire and manipulate raw data. Orange nodes are functions which calculate some metric. Finally, yellow nodes represent data analysis endpoints.
Much of the functionality is designed to be integrated with the Excel output files generated by the BMG analysis software, MARS. For best results, in the MARS software select the following options from the MARS export window:
Excel export settings in MARS. Ensure that "Microplate View", "Table view", "Transpose table", and "Add test run information" are selected. Having both the microplate view and the table view helps with integrating plate layouts and the real-time data. For many of the functions, it is important to have a table labelled “Sample IDs” on the microplate view sheet of the Excel file.
quicR has functions for calculating time-to-threshold (TtT), maxpoint ratio (MPR), and max slope (MS). There is no dedicated function for rate of amyloid formation (RAF) since it can be expressed as the inverse of TtT, and can therefore be calculated separately.
TtT is calculated by iterating through each sample until a value exceeds the user-supplied threshold. It then linearly interpolates between the previous and current read times to find where the curve crosses the threshold. If no value exceeds the threshold, the total reaction run-time is returned.
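The interpolation step can be sketched in base R as follows. This is an illustrative sketch only, not the package's implementation (`calculate_TtT`); the function and variable names here are hypothetical.

```r
# Illustrative time-to-threshold with linear interpolation between reads.
# 'times' and 'rfu' are hypothetical vectors of read times (h) and fluorescence.
time_to_threshold <- function(times, rfu, threshold) {
  above <- which(rfu > threshold)
  # Never crossed: return the total reaction run-time.
  if (length(above) == 0) return(max(times))
  i <- above[1]
  # Already above threshold at the first read.
  if (i == 1) return(times[1])
  # Interpolate between the previous and current read times.
  frac <- (threshold - rfu[i - 1]) / (rfu[i] - rfu[i - 1])
  times[i - 1] + frac * (times[i] - times[i - 1])
}

times <- c(0, 1, 2, 3, 4)
rfu   <- c(1.0, 1.1, 1.5, 3.0, 4.0)
time_to_threshold(times, rfu, 2)  # crosses between hours 2 and 3, ~2.33
```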
MPR is defined as the maximum fluorescence divided by the background fluorescence. Thus, to calculate MPR, the raw data must first be normalized against the background: the user chooses a cycle for background determination, and each read is divided by that cycle's value. The MPR is taken as the maximum value of the normalized data.
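In base R, this amounts to one division and one maximum. Again, this is a sketch under the definition above, not the package's `calculate_MPR`; the names are hypothetical.

```r
# Illustrative maxpoint ratio: normalize each read against a chosen
# background cycle, then take the maximum of the normalized values.
maxpoint_ratio <- function(rfu, bg_cycle = 1) {
  normalized <- rfu / rfu[bg_cycle]
  max(normalized)
}

rfu <- c(2000, 2100, 5000, 9000, 8500)
maxpoint_ratio(rfu)  # 9000 / 2000 = 4.5
```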
MS is determined by approximating the maximum of the derivative of the raw data and is typically reported in units of ΔRFU/h (i.e., the change in relative fluorescent units per hour). Slopes are calculated from the differences between two data points within the range of a sliding window. While this slightly reduces the accuracy of the approximation, the improvement in computation time outweighs the loss in resolution.
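The sliding-window difference can be sketched in base R as below. This is an illustration of the approach, not the package's `calculate_MS` (which uses the slider package); the window width `w` and the function name are hypothetical.

```r
# Illustrative max slope: approximate the derivative as the slope over a
# sliding window of 'w' reads, then take the maximum slope (delta-RFU/h).
max_slope <- function(times, rfu, w = 2) {
  n <- length(rfu)
  slopes <- (rfu[(1 + w):n] - rfu[1:(n - w)]) /
            (times[(1 + w):n] - times[1:(n - w)])
  max(slopes)
}

times <- c(0, 0.5, 1.0, 1.5, 2.0)
rfu   <- c(1, 1.2, 2.5, 4.0, 4.2)
max_slope(times, rfu)  # steepest rise spans reads 2 through 4: 2.8
```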
Example graph highlighting the calculated metrics described above. The red curve represents a raw data curve that has been normalized against background. The maxpoint ratio is calculated as the maximum fluorescent value achieved in the normalized raw data. Time-to-threshold is determined as the time required to cross a given threshold (in this example, the threshold is set at 0.2). The blue curve represents the approximate derivative of the raw data, and max slope is determined as the maximum of the derivative.
BMG_format("plate_layout.csv", write_file = TRUE, save_path = "", save_name = "formatted.txt")
For an example CSV file, see inst/extdata/BMG_formatting/plate_layout.csv
The resulting file, “formatted.txt”, saved in the same folder, shows how the export should look before importing into MARS.
Getting the real-time data from the MARS export is quite simple. Normalization can also be performed on the data.
# Import the raw data
df_ <- get_real("file.xlsx")[[1]]
# Normalize the data based on the initial background reading.
df_norm <- normalize_RFU(df_)
The MARS software exports microplate views of the samples with whatever information the user chose to include. Ensure that "Sample IDs" was included in that export. To import the metadata into your environment, run the following:
# Import the metadata as a named list of tables.
tabs <- organize_tables("file.xlsx", plate = 96)
# Convert the tables into dataframe columns
df_ <- convert_tables(tabs)
Additionally, MARS will export the run's metadata, which is not sample-dependent. This includes information such as the date, run ID, user ID, etc. To get this information, run the following:
get_meta("file.xlsx")
If you need to know which wells were used in the plate, run:
get_wells("file.xlsx")
The functions get_sample_locations and plate_view are used in tandem to generate plate-level overviews of an RT-QuIC assay. Together, they plot every sample in an 8x12 or 16x24 faceted grid for 96-well and 384-well microplates, respectively.
sample_locations <- get_sample_locations(
"file.xlsx",
dilution_bool = TRUE,
dilution_fun = function(x) -log10(x)
)
plate_view(df_, sample_locations)
Plate‐view analog of a 96‐well microplate. Each facet in the 8x12 grid shows the real‐time curves of an RT‐QuIC reaction. The numbers underneath the sample IDs are the dilution factors.
quicR provides functions for calculating kinetic information from real-time data. These metrics include:
- Maxpoint ratio: calculate_MPR
- Maximum slope: calculate_MS
- Time-to-threshold: calculate_TtT
There is no function included for RAF since that can be calculated as the inverse of time-to-threshold.
Additionally, a useful function, calculate_metrics, has been included which automatically calculates all of these metrics, including RAF (assuming TtT was selected), and adds them to a single data frame. These metrics can be plotted automatically with plot_metrics, which generates a faceted figure.
df_norm <- get_real("file.xlsx")[[1]] |>
normalize_RFU()
meta <- organize_tables("file.xlsx") |>
convert_tables()
# Calculate each metric individually (mutate() requires dplyr).
library(dplyr)
data.frame("Sample IDs" = meta$`Sample IDs`) |>
mutate(
Dilutions = -log10(meta$dilutions),
MPR = calculate_MPR(df_norm, start_col = 3, data_is_norm = TRUE),
MS = calculate_MS(df_norm, data_is_norm = TRUE),
TtT = calculate_TtT(df_norm, threshold = 2, start_col = 3),
RAF = 1 / TtT
)
# Optionally, calculate all metrics at once.
analyzed <- calculate_metrics(df_norm, meta)
# Use plot_metrics() to automatically generate a facet plot of the metrics.
plot_metrics(analyzed)
Example XLSX files can be found in inst/extdata/input_files/
Example scripts can be found in .example_scripts/
library(quicR)
file <- "file.xlsx"
raw <- get_real(file)[[1]]
normal <- normalize_RFU(raw, transposed = FALSE)
meta <- organize_tables(file) |>
convert_tables()
analyzed <- calculate_metrics(normal, meta)
This package requires the following dependencies: dplyr, ggplot2, stringr, tidyr, janitor, openxlsx, readxl, and slider. The openxlsx and readxl packages handle the initial import of raw data from Excel files. The tidyverse packages (dplyr, ggplot2, stringr, and tidyr) support readable data-manipulation code and data visualization. The janitor package supplies data-cleaning functions used when importing data from Excel. The slider package provides tools for applying a function over a moving window, which is central to approximating the derivative of the raw data.
This work was funded through the Legislative-Citizen Commission on Minnesota Resources (LCCMR).
# For latest release
install.packages("quicR")
# For development version
devtools::install_github("gage1145/quicR")
Gage Rowden^a,b,c,+
a University of Minnesota Twin Cities
b Minnesota Center for Prion Research & Outreach
c Priogen Corp.
+ Corresponding author