LibEER establishes a unified evaluation framework with standardized experimental settings, enabling unbiased evaluation of over ten representative deep learning-based EER models across the four most commonly used datasets.
- Standardized Benchmark: LibEER provides a unified benchmark for fair comparisons in EER research, addressing inconsistencies in datasets, settings, and metrics, making it easier to evaluate various models.
- Comprehensive Algorithm Library: The framework includes implementations of over ten deep learning models, covering a wide range of architectures (CNN, RNN, GNN, and Transformers), making it highly versatile for EEG analysis.
- Efficient Preprocessing and Training: LibEER offers various preprocessing techniques and customizable settings, enabling efficient model fine-tuning, lowering the entry barrier for researchers, and boosting research efficiency.
- Extensive Dataset Support: LibEER gives standardized access to major datasets like SEED, SEED-IV, DEAP, and MAHNOB-HCI, supporting both subject-dependent and cross-subject evaluations, with plans to add more datasets in the future.
To run this project, you'll need the following dependencies:
- Python 3.x recommended
- Dependencies: You can install the required Python packages by running:
pip install -r requirements.txt
To install LibEER via pip, use the following command. All reproduced models are integrated and ready for direct use; please refer to the chapter "use model via pip" for more information.
pip install LibEER
Experimental settings and results (accuracy) of the reproduced methods, compared with the originally reported numbers.

Method | Dataset | Preprocessing | Task | Splitting (Train : Test) | Evaluation | Reported (%) | Ours (%) | Gap (%) |
--- | --- | --- | --- | --- | --- | --- | --- | --- |
DGCNN | SEED | B, R, DE, 1s | dependent | 3 : 2 | ACC | 90.40±8.49 | 89.48±8.49 | 0.92↓ |
RGNN | SEED | B, R, DE, 1s | dependent | 3 : 2 | ACC | 94.24±5.95 | 84.66±10.74 | 9.58↓ |
EEGNet | SEED | B, R, DE, 1s | dependent | 3 : 2 | ACC | —— | 68.15±12.32 | —— |
DBN | SEED | B, R, DE, 1s | dependent | 3 : 2 | ACC | 86.91±7.62 | 81.18±8.13 | 5.73↓ |
BiDANN | SEED | B, R, DE, 9s | dependent | 3 : 2 | ACC | 92.38±7.04 | 89.06±9.42 | 3.32↓ |
R2G-STNN | SEED | B, R, DE, 9s | dependent | 3 : 2 | ACC | 93.38±5.96 | 84.11±8.47 | 9.27↓ |
MS-MDA | SEED | B, DE, 1s | cross | 14 : 1 | ACC | 89.63 | 93.97 | 4.34↑ |
GCBNet | SEED | B, R, DE, 1s | dependent | 3 : 2 | ACC | 92.30±7.40 | 89.04±8.03 | 3.26↓ |
GCBNet_BLS | SEED | B, R, DE, 1s | dependent | 3 : 2 | ACC | 94.24±6.70 | 88.80±9.54 | 5.44↓ |
CDCN | SEED | B, R, DE, 1s | dependent | 3 : 2 | ACC | 90.63 | 85.10±8.80 | 5.53↓ |
CDCN | DEAP-V | B, R, 1s | dependent | 9 : 1 | ACC | 92.24 | 92.30±11.33 | 0.06↑ |
CDCN | DEAP-A | B, R, 1s | dependent | 9 : 1 | ACC | 92.92 | 91.99±12.20 | 0.93↓ |
ACRNN | DEAP-V | B, R, DE, 3s | dependent | 9 : 1 | ACC | 93.72 | 86.03±9.20 | 7.69↓ |
ACRNN | DEAP-A | B, R, DE, 3s | dependent | 9 : 1 | ACC | 93.38 | 88.31±7.77 | 5.07↓ |
HSLT | DEAP-V | B, R, 6s | cross | 14 : 1 | ACC, F1 | 66.51 | 69.18 | 2.67↑ |
HSLT | DEAP-A | B, R, 6s | cross | 14 : 1 | ACC, F1 | 65.75 | 68.81 | 3.06↑ |
HSLT | DEAP-VA | B, R, 6s | cross | 14 : 1 | ACC, F1 | 56.93 | 49.57 | 7.36↓ |
LibEER implements three main modules: data loading, data splitting, and model training and evaluation. It also incorporates many representative algorithms in the field of EEG-based emotion recognition; their usage is detailed below. Additionally, to make things easier for users, we provide several one-step methods for common data processing and data splitting tasks. Every reproduced model has a corresponding main file named $MODEL_NAME$_train.py for reference. For more details, please refer to the quick start in this chapter.
LibEER supports the use of four EEG emotion recognition datasets. If you wish to conduct experiments on these datasets, please visit their respective official websites to apply for and download the datasets:
To facilitate easy use, we implemented the Setting class, which allows one-stop data access through parameter configuration. We have also preconfigured many common experimental settings to help users get started quickly. Data is obtained through the Setting class (see the guide about setting parameters):
from models.Models import Model
from config.setting import Setting, preset_setting
from data_utils.load_data import get_data
from data_utils.split import merge_to_part, index_to_data, get_split_index
from utils.args import get_args_parser
from utils.store import make_output_dir
from utils.utils import result_log, setup_seed
from Trainer.training import train
from models.DGCNN import NewSparseL2Regularization
import torch
import torch.optim as optim
import torch.nn as nn
def main(args):
setting = Setting(dataset='deap', # Select the dataset
dataset_path='DEAP/data_preprocessed_python', # Specify the path to the corresponding dataset.
pass_band=[0.3, 50], # use a band-pass filter with a range of 0.3 to 50 Hz,
extract_bands=[[0.5, 4], [4, 8], [8, 14], [14, 30], [30, 50]],
# Set the frequency bands for extracting frequency features.
time_window=1, # Set the time window for feature extraction to 1 second.
overlap=0, # The overlap length of the time window for feature extraction.
sample_length=1,
# Use a sliding window to extract the features with a window size of sample_length set to 1 and a step size of 1.
stride=1,
seed=2024, # set up the random seed
feature_type='de_lds', # set the feature type to extract
label_used=['valence'], # specify the label used
bounds=[5,5], # The bounds parameter is used to define the thresholds for high and low. Values below bounds[0] are considered negative samples, while values above bounds[1] are considered positive samples.
experiment_mode="subject-dependent",
split_type='train-val-test',
test_size=0.2,
val_size=0.2)
setup_seed(2024) # Set the random seed for the experiment to ensure reproducibility.
data, label, channels, feature_dim, num_classes = get_data(setting) # Get the corresponding data and information based on the setting class.
# The organization of data and label is [session(1), subject(32), trial(40), sample(XXX)].
data, label = merge_to_part(data, label, setting) # Merge the data based on the experiment task specified in the setting class.
# After merge_to_part() with the subject-dependent task specified above, data and label are organized as 32 per-subject partitions, each shaped [trial(40), sample(xxx)].
device = torch.device(args.device) # Set the device based on the args command-line parameters.
best_metrics = [] # Prepare to record the experimental results.
for rridx, (data_i, label_i) in enumerate(zip(data, label), 1): # Under the subject-dependent task, this loop runs 32 times, once per subject.
tts = get_split_index(data_i, label_i, setting) # Get the split indexes for the experiment based on the Setting class (here, the train-val-test split specified above).
# Here, in tts:
# train indexes:[2, 15, 4, 17, 5, 22, 39, 20, 23, 7, 18, 14, 35, 28, 12, 3, 33, 31, 36, 11, 32, 13, 9, 24], val indexes:[1, 19, 25, 16, 27, 29, 8, 6], test indexes:[0, 21, 26, 30, 10, 38, 37, 34]
# train indexes:[0, 19, 1, 23, 8, 13, 10, 17, 18, 3, 11, 2, 24, 22, 29, 38, 26, 33, 28, 37, 34, 36, 5, 20], val indexes:[35, 39, 14, 15, 6, 21, 32, 4], test indexes:[25, 7, 16, 12, 27, 9, 31, 30]
# ...
for ridx, (train_indexes, test_indexes, val_indexes) in enumerate(zip(tts['train'], tts['test'], tts['val']), 1):
setup_seed(args.seed) # Set the random seed again to ensure reproducibility.
if val_indexes[0] == -1:
print(f"train indexes:{train_indexes}, test indexes:{test_indexes}")
else:
print(f"train indexes:{train_indexes}, val indexes:{val_indexes}, test indexes:{test_indexes}")
# Retrieve the corresponding data based on the indexes. train_data contains data from 24 trials, val_data contains data from 8 trials, and test_data contains data from the other 8 trials.
train_data, train_label, val_data, val_label, test_data, test_label = \
index_to_data(data_i, label_i, train_indexes, test_indexes, val_indexes)
# model to train
if len(val_data) == 0:
val_data = test_data
val_label = test_label
# Choose a model. Alternatively, you can use the method below to import the DGCNN model:
# model = DGCNN(channels, feature_dim, num_classes)
# You can configure the model parameters in model_param/DGCNN.yaml
model = Model['DGCNN'](channels, feature_dim, num_classes)
# Prepare the corresponding dataloader.
dataset_train = torch.utils.data.TensorDataset(torch.Tensor(train_data), torch.Tensor(train_label))
dataset_val = torch.utils.data.TensorDataset(torch.Tensor(val_data), torch.Tensor(val_label))
dataset_test = torch.utils.data.TensorDataset(torch.Tensor(test_data), torch.Tensor(test_label))
# Select an appropriate optimizer.
optimizer = optim.AdamW(model.parameters(), lr=args.lr, weight_decay=1e-4, eps=1e-4)
# Select appropriate loss functions. The first is a classification loss function, and the second is the L2 regularization loss in DGCNN.
criterion = nn.CrossEntropyLoss()
loss_func = NewSparseL2Regularization(0.01).to(device)
# Specify the output_dir, mainly for saving intermediate results during model training. It is set based on args, but this step may currently produce errors.
output_dir = make_output_dir(args, "DGCNN")
# Call the training function to train. Batch size, epochs, etc., can be set via command-line parameters, or manually if desired.
round_metric = train(model=model, dataset_train=dataset_train, dataset_val=dataset_val, dataset_test=dataset_test, device=device,
output_dir=output_dir, metrics=args.metrics, metric_choose=args.metric_choose, optimizer=optimizer,
batch_size=args.batch_size, epochs=args.epochs, criterion=criterion, loss_func=loss_func, loss_param=model)
best_metrics.append(round_metric)
# best metrics: every round metrics dict
result_log(args, best_metrics)
if __name__ == '__main__':
args = get_args_parser()
args = args.parse_args()
main(args)
Data can also be obtained through a preset setting:
from config.setting import preset_setting
def main(args):
setting = preset_setting["deap_sub_dependent_train_val_test_setting"](args)
# ...
if __name__ == '__main__':
args = get_args_parser()
args = args.parse_args()
main(args)
The currently supported preset settings are listed in Preset Setting in LibEER.
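If you want to see which presets are available from code, the preset_setting mapping used above can be inspected directly. A minimal sketch, assuming preset_setting is a plain dict-like mapping (as its string-key lookup above suggests):

from config.setting import preset_setting

# Print the names of all registered preset settings.
for name in preset_setting:
    print(name)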
To give users more precise control over intermediate results, this section presents the detailed usage of the three main modules. If the Setting class does not meet the requirements of your experiment, refer to the usage methods below.
In the data loader, LibEER supports four EEG emotion recognition datasets: SEED, SEED-IV, DEAP, and HCI. It also provides various data preprocessing methods and a range of feature extraction techniques. The following example demonstrates how to use LibEER to load a dataset and preprocess the data: it extracts 1-second DE (differential entropy, with LDS smoothing) features across five frequency bands from the DEAP dataset, after baseline removal and band-pass filtering between 0.3 and 50 Hz.
# get data, baseline, label, sample rate of data, channels of data using get_uniform_data() function
unified_data, baseline, label, sample_rate, channels = get_uniform_data(dataset="deap", dataset_path="DEAP/data_preprocessed_python")
# remove baseline
data = baseline_removal(unified_data, baseline)
# using a 0.3-50 Hz bandpass filter to process the data
data = bandpass_filter(data, sample_rate, pass_band=[0.3, 50])
# a 1-second non-overlapping preprocess window to extract de_lds features on specified extract bands
data = feature_extraction(data, sample_rate, extract_bands=[[0.5,4],[4,8],[8,14],[14,30],[30,50]], time_window=1, overlap=0, feature_type="de_lds")
# sliding window with a size of 1 and a step size of 1 to segment the samples.
data, feature_dim = segment_data(data, sample_length=1, stride=1)
# data format: (session, subject, trial, sample)
In LibEER, the Data Split module is mainly responsible for data partitioning under different experimental tasks and split settings. It supports three mainstream experimental tasks: subject-dependent, cross-subject, and cross-session, and offers various data splitting methods. The following example demonstrates how to split the dataset into training, validation, and testing sets in a subject-dependent task, with a ratio of 0.6, 0.2, and 0.2, respectively.
from data_utils.split import merge_to_part, get_split_index, index_to_data
data, label = merge_to_part(data, label, experiment_mode="subject_dependent")
# further split each subject's subtask
for idx, (data_i, label_i) in enumerate(zip(data,label)):
# according to the data format and label, the test size is 0.2 and the validation size is 0.2
spi = get_split_index(data_i, label_i, split_type="train-val-test", test_size=0.2, val_size=0.2)
for jdx, (train_indexes, test_indexes, val_indexes) in enumerate(zip(spi['train'],spi['test'], spi['val'])):
# organize the data according to the resulting index
(train_data, train_label, val_data, val_label, test_data, test_label) = index_to_data(data_i, label_i, train_indexes, test_indexes, val_indexes)
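The same helpers cover the cross-subject and cross-session tasks mentioned above; only the experiment mode changes. A minimal sketch, assuming a mode string of "cross_subject" (check the preset settings for the exact value accepted by merge_to_part):

from data_utils.split import merge_to_part, get_split_index, index_to_data

# Cross-subject sketch: merge_to_part() reorganizes the data for the chosen task,
# and the split/index helpers are used exactly as in the subject-dependent example above.
# The mode string "cross_subject" is assumed; check LibEER's presets for the exact value.
data, label = merge_to_part(data, label, experiment_mode="cross_subject")
for data_i, label_i in zip(data, label):
    spi = get_split_index(data_i, label_i, split_type="train-val-test", test_size=0.2, val_size=0.2)
    for train_indexes, test_indexes, val_indexes in zip(spi['train'], spi['test'], spi['val']):
        train_data, train_label, val_data, val_label, test_data, test_label = \
            index_to_data(data_i, label_i, train_indexes, test_indexes, val_indexes)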
LibEER supports various mainstream emotion recognition methods; for details, please refer to the Supported Methods section. Here we select DGCNN for training and testing.
from models.Models import Model
from Trainer.training import train
model = Model['DGCNN'](num_electrodes=channels, feature_dim=5, num_classes=3, k=2, layers=[64], dropout_rate=0.5)
# train and evaluate model, then output the metric
round_metric = train(model, train_data, train_label, val_data, val_label, test_data, test_label)
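The train() call returns a per-round metrics dict (the full example above appends one such dict per round to best_metrics and passes them to result_log). A minimal sketch of aggregating results manually; the "acc" key is an assumption and should match the metric names you requested:

import numpy as np

# Aggregate per-round results; "acc" is an assumed key name, use the metric
# names you actually requested (e.g. via args.metrics).
best_metrics = [round_metric]  # in practice, append one dict per round/subject
mean_acc = np.mean([m["acc"] for m in best_metrics])
std_acc = np.std([m["acc"] for m in best_metrics])
print(f"accuracy over {len(best_metrics)} rounds: {mean_acc:.2f} ± {std_acc:.2f}")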
If you are only interested in the reproduced models, install LibEER via pip and follow the instructions below.
# import the reproduced model and its dedicated trainer
# (the class and function names are assumed to match their module names)
from LibEER.models.MsMda import MsMda
# use the training method provided by LibEER or your own
from LibEER.Trainer.msmdaTrain import train
# MS-MDA is a multi-source domain adaptation model, so its trainer takes one training set per source
model = MsMda(channels, feature_dim, num_classes, number_of_source=samples_source)
# result dicts
round_metric = train(model=model, datasets_train=datasets_train, dataset_val=dataset_val, dataset_test=dataset_test, output_dir=output_dir, samples_source=samples_source, device=device, metrics=args.metrics, metric_choose=args.metric_choose, optimizer=optimizer,
                 batch_size=args.batch_size, epochs=args.epochs, criterion=criterion)
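For reproduced models that do not need a specialized trainer like MS-MDA's, the pip package should follow the same pattern as the quick-start example above. A minimal sketch, assuming the pip module paths mirror the repository layout (LibEER.models.Models and LibEER.Trainer.training are assumptions):

from LibEER.models.Models import Model     # assumed to mirror models.Models in the repository
from LibEER.Trainer.training import train  # assumed to mirror Trainer.training in the repository

# Build a reproduced model and hand it to the generic trainer, as in the quick-start example.
model = Model['DGCNN'](channels, feature_dim, num_classes)
round_metric = train(model, train_data, train_label, val_data, val_label, test_data, test_label)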
Mean accuracies and F1 scores (with standard deviations) using the proposed benchmark for the subject-dependent EER experiment. The top two methods in each scenario are highlighted in bold and underlined formatting.
Methods are grouped by architecture: SVM; DNN (DBN); Transformer (HSLT); CNN (EEGNet, CDCN, TSception); RNN (ACRNN); GNN (DGCNN, RGNN, GCBNet, GCBNet_BLS). Each cell reports mean (standard deviation).

Dataset | Metric | SVM | DBN | HSLT | EEGNet | CDCN | TSception | ACRNN | DGCNN | RGNN | GCBNet | GCBNet_BLS |
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
SEED | ACC | 75.08 (19.73) | 71.88 (19.02) | 64.83 (20.47) | 58.81 (16.22) | 68.23 (20.35) | 64.01 (16.44) | 49.71 (13.15) | 82.55 (15.61) | 76.55 (16.92) | 80.56 (16.98) | 76.64 (17.44) |
SEED | F1 | 70.82 (23.51) | 67.39 (22.81) | 58.82 (23.36) | 54.41 (17.59) | 63.76 (24.49) | 60.53 (18.51) | 45.78 (14.18) | 79.89 (18.93) | 72.52 (20.08) | 77.29 (20.92) | 72.52 (21.20) |
SEED-IV | ACC | 47.80 (23.03) | 45.56 (21.19) | 40.28 (23.80) | 29.89 (13.53) | 52.26 (21.97) | 36.06 (15.12) | 29.01 (7.10) | 52.39 (24.32) | 45.40 (22.90) | 53.28 (21.05) | 53.51 (22.45) |
SEED-IV | F1 | 40.17 (21.68) | 37.61 (20.68) | 30.92 (24.47) | 26.59 (13.58) | 45.26 (23.00) | 32.77 (15.08) | 19.80 (5.42) | 45.94 (24.17) | 38.24 (23.09) | 46.26 (22.27) | 46.91 (22.46) |
HCI-V | ACC | 64.83 (25.95) | 62.03 (24.90) | 64.00 (11.40) | 61.15 (16.76) | 60.48 (21.90) | 61.12 (15.52) | 60.51 (16.89) | 67.83 (22.40) | 64.86 (17.36) | 66.84 (21.42) | 69.60 (22.09) |
HCI-V | F1 | 55.99 (27.80) | 52.84 (27.04) | 55.77 (13.12) | 50.35 (17.28) | 51.73 (23.11) | 50.51 (16.69) | 49.39 (15.70) | 54.78 (26.69) | 50.41 (20.34) | 54.61 (25.08) | 57.78 (27.18) |
HCI-A | ACC | 63.61 (23.44) | 68.51 (21.72) | 67.74 (17.22) | 67.42 (21.71) | 71.82 (21.72) | 68.26 (23.10) | 66.26 (22.69) | 67.29 (27.73) | 70.96 (19.79) | 64.89 (27.12) | 69.60 (22.09) |
HCI-A | F1 | 50.99 (24.89) | 57.18 (26.63) | 58.42 (19.50) | 54.50 (20.05) | 62.89 (24.99) | 56.29 (23.79) | 55.17 (23.20) | 58.04 (31.41) | 57.66 (25.39) | 57.43 (29.50) | 57.78 (27.18) |
HCI-VA | ACC | 46.29 (28.42) | 44.38 (26.78) | 46.99 (20.76) | 38.32 (19.51) | 52.00 (26.05) | 40.00 (20.60) | 41.00 (21.58) | 53.06 (24.44) | 49.46 (23.20) | 49.15 (26.16) | 49.24 (27.97) |
HCI-VA | F1 | 33.91 (27.65) | 31.96 (27.87) | 34.76 (19.69) | 24.56 (13.71) | 38.04 (26.88) | 27.19 (13.65) | 27.10 (15.13) | 39.75 (26.01) | 35.97 (23.84) | 36.49 (27.58) | 36.68 (27.25) |
DEAP-V | ACC | 54.17 (18.67) | 56.08 (17.38) | 56.20 (18.14) | 51.50 (11.57) | 57.71 (14.72) | 51.52 (9.54) | 53.52 (9.29) | 56.07 (17.15) | 55.90 (16.24) | 56.49 (18.17) | 57.02 (15.07) |
DEAP-V | F1 | 49.73 (18.92) | 48.61 (19.33) | 48.21 (18.73) | 47.85 (11.70) | 53.41 (15.50) | 47.33 (9.58) | 48.31 (7.77) | 49.08 (17.50) | 47.25 (17.55) | 50.36 (19.57) | 51.40 (17.25) |
DEAP-A | ACC | 63.49 (16.72) | 64.60 (19.42) | 59.74 (18.82) | 61.30 (15.88) | 63.37 (14.18) | 57.49 (11.86) | 61.83 (14.32) | 62.68 (19.66) | 66.09 (13.91) | 65.95 (17.61) | 61.07 (16.56) |
DEAP-A | F1 | 53.31 (14.39) | 52.61 (19.85) | 50.10 (18.08) | 53.26 (13.05) | 53.94 (13.76) | 50.75 (11.30) | 49.68 (9.20) | 53.94 (20.10) | 49.27 (12.86) | 55.34 (17.86) | 50.43 (16.36) |
DEAP-VA | ACC | 37.32 (17.24) | 39.50 (13.99) | 43.34 (14.49) | 39.41 (11.53) | 38.08 (15.52) | 35.68 (12.08) | 38.20 (11.34) | 41.86 (11.57) | 44.53 (14.35) | 38.80 (16.23) | 37.51 (15.29) |
DEAP-VA | F1 | 25.55 (14.23) | 24.88 (10.79) | 23.47 (14.21) | 29.19 (10.21) | 28.90 (13.15) | 26.75 (8.30) | 21.05 (6.13) | 29.12 (10.92) | 25.88 (12.08) | 27.89 (15.94) | 27.00 (13.52) |
Mean accuracies and F1 scores using the proposed benchmark for the cross-subject EER experiment. The top two methods in each scenario are highlighted in bold and underlined formatting.
Methods are grouped by architecture: SVM; DNN (DBN, MS-MDA); Transformer (HSLT); CNN (EEGNet, CDCN, TSception); RNN (ACRNN); GNN (DGCNN, RGNN, GCBNet, GCBNet_BLS).

Dataset | Metric | SVM | DBN | MS-MDA | HSLT | EEGNet | CDCN | TSception | ACRNN | DGCNN | RGNN | GCBNet | GCBNet_BLS |
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
SEED | ACC | 37.07 | 36.16 | 64.00 | 56.00 | 38.19 | 57.72 | 45.60 | 45.39 | 60.87 | 57.20 | 56.32 | 56.32 |
SEED | F1 | 33.45 | 22.67 | 57.35 | 55.75 | 31.83 | 58.66 | 43.54 | 42.37 | 57.22 | 51.37 | 55.12 | 51.43 |
SEED-IV | ACC | 28.98 | 36.82 | 56.07 | 30.33 | 28.19 | 31.03 | 34.19 | 31.97 | 42.54 | 44.13 | 32.27 | 40.54 |
SEED-IV | F1 | 24.56 | 32.60 | 48.68 | 11.64 | 28.35 | 27.01 | 26.83 | 18.82 | 43.10 | 43.30 | 32.89 | 42.73 |
HCI-V | ACC | 70.33 | 69.27 | 67.64 | 66.94 | 57.06 | 67.69 | 57.36 | 54.53 | 63.19 | 65.89 | 65.16 | 71.06 |
HCI-V | F1 | 66.36 | 65.51 | 53.70 | 62.22 | 53.83 | 62.67 | 54.76 | 52.58 | 58.75 | 44.33 | 63.32 | 61.83 |
HCI-A | ACC | 54.41 | 57.50 | 57.23 | 49.48 | 54.70 | 55.93 | 52.30 | 51.23 | 59.42 | 57.26 | 52.21 | 60.85 |
HCI-A | F1 | 52.91 | 56.24 | 55.68 | 47.43 | 54.02 | 55.15 | 50.25 | 49.45 | 57.02 | 56.89 | 51.84 | 56.26 |
HCI-VA | ACC | 31.56 | 28.30 | 44.92 | 35.07 | 34.84 | 30.09 | 26.99 | 27.21 | 41.54 | 37.43 | 43.02 | 36.92 |
HCI-VA | F1 | 26.29 | 27.58 | 21.65 | 27.19 | 27.96 | 24.71 | 21.95 | 20.92 | 38.08 | 19.60 | 36.07 | 32.99 |
DEAP-V | ACC | 49.58 | 55.02 | 54.82 | 56.56 | 52.36 | 57.78 | 54.44 | 51.94 | 49.91 | 52.15 | 53.58 | 52.34 |
DEAP-V | F1 | 48.49 | 53.14 | 50.60 | 56.56 | 49.74 | 57.72 | 48.94 | 47.37 | 47.09 | 44.88 | 50.68 | 49.03 |
DEAP-A | ACC | 51.48 | 50.99 | 26.48 | 41.08 | 48.94 | 49.73 | 45.90 | 44.09 | 49.91 | 43.86 | 50.05 | 50.59 |
DEAP-A | F1 | 50.95 | 50.09 | 25.39 | 41.05 | 48.94 | 49.17 | 45.56 | 41.51 | 47.09 | 40.46 | 47.79 | 46.38 |
DEAP-VA | ACC | 24.58 | | 16.06 | 17.31 | 25.41 | 23.90 | 24.64 | 20.89 | 25.66 | 19.09 | 30.92 | 20.49 |
DEAP-VA | F1 | 24.70 | 24.36 | 15.19 | 16.87 | 24.44 | 22.58 | 23.24 | 15.16 | 24.95 | 13.02 | 30.98 | 18.41 |
@misc{liu2024libeercomprehensivebenchmarkalgorithm,
title={LibEER: A Comprehensive Benchmark and Algorithm Library for EEG-based Emotion Recognition},
author={Huan Liu and Shusen Yang and Yuzhe Zhang and Mengze Wang and Fanyu Gong and Chengxi Xie and Guanjian Liu and Zejun Liu and Yong-Jin Liu and Bao-Liang Lu and Dalin Zhang},
year={2024},
eprint={2410.09767},
archivePrefix={arXiv},
primaryClass={cs.HC},
url={https://arxiv.org/abs/2410.09767},
}
@article{Liu2024EEGBasedME,
title={EEG-Based Multimodal Emotion Recognition: A Machine Learning Perspective},
author={Huan Liu and Tianyu Lou and Yuzhe Zhang and Yixiao Wu and Yang Xiao and Christian S. Jensen and Dalin Zhang},
journal={IEEE Transactions on Instrumentation and Measurement},
year={2024},
volume={73},
pages={1-29},
url={https://api.semanticscholar.org/CorpusID:267978819} }