Various possible improvements · Issue #23 · lina-usc/eog-learn
Open
@christian-oreilly

Description

This is somewhat similar to #10, but is not limited to models to test. I want to collect here ideas for improvements that I don't have the time to pursue now, and that we may want to pursue when funding allows us to put resources on this project.

  • Activation function: A sinusoidal activation function could allow the LSTM to better capture high-frequency noise (a sketch combining this with the Bi-LSTM idea follows this list)
  • Overfitting control: Although we are not seeking generalizability at this point, we still need to make sure that the model does not capture irrelevant noise. So, although we don't necessarily want to use hold-out or cross-validation, we need to control overfitting by stopping the training when the validation loss starts increasing (an early-stopping sketch follows this list)
  • Bi-LSTM: A bidirectional LSTM might improve performance (see the activation-function sketch after this list, which also makes the LSTM bidirectional)
  • Filtering: I suspect some filtering issues, e.g., in Figure 3 of the ASPAI paper. Voltages seem to deflect before the saccade onset, which might be due to artifacts caused by filters not dealing well with the sharp transition associated with saccades (a small illustration follows this list)
  • For the analyses in the ASPAI paper, training/testing should be split within participants, not within recordings. We could also benchmark the effect of sample size on performance, as well as within- vs between-participant training/testing (a split sketch follows this list)
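A minimal sketch of the first and third points, assuming PyTorch (the framework, layer sizes, and the `omega` scaling are illustration choices, not anything already in the repo). `torch.nn.LSTM` does not expose its internal gate activations, so the sinusoidal non-linearity is applied to the LSTM outputs (SIREN-style) rather than inside the cell; a custom cell would be needed for the latter:

```python
import torch
import torch.nn as nn


class SinBiLSTM(nn.Module):
    def __init__(self, n_channels=1, hidden_size=64, omega=30.0):
        super().__init__()
        # bidirectional=True gives the Bi-LSTM variant suggested above
        self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True,
                            bidirectional=True)
        self.omega = omega  # frequency scaling of the sinusoidal activation
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, channels)
        h, _ = self.lstm(x)
        h = torch.sin(self.omega * h)  # sinusoidal activation on hidden states
        return self.head(h)            # (batch, time, 1)


# usage: y_hat = SinBiLSTM()(torch.randn(8, 500, 1))
```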
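For the overfitting-control point, a minimal early-stopping loop (again assuming PyTorch; `train_step()` and `val_loss()` are hypothetical placeholders for the project's own training and validation routines):

```python
import copy


def fit_with_early_stopping(model, n_epochs=200, patience=10):
    best_loss, best_state, epochs_without_improvement = float("inf"), None, 0
    for epoch in range(n_epochs):
        train_step(model)              # one pass over the training set (placeholder)
        loss = val_loss(model)         # loss on a held-out validation set (placeholder)
        if loss < best_loss:
            best_loss = loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                  # validation loss started increasing
    model.load_state_dict(best_state)  # restore the best checkpoint
    return model
```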
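A small illustration of the suspected filtering artifact, assuming SciPy/NumPy (sampling rate and cutoff are arbitrary illustration values): zero-phase (acausal) low-pass filtering of a step-like transition, such as a saccade onset, produces a deflection that starts before the transition itself.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                              # sampling rate (Hz), arbitrary
t = np.arange(0, 2.0, 1.0 / fs)
step = (t >= 1.0).astype(float)         # idealized saccade-like voltage step

b, a = butter(4, 30.0 / (fs / 2.0))     # 30 Hz low-pass, arbitrary choice
filtered = filtfilt(b, a, step)         # zero-phase filtering smears both ways

onset = np.argmax(t >= 1.0)
print("pre-onset deflection:", filtered[onset - 25])  # non-zero before the step
```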
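For the last point, a sketch contrasting the two evaluation schemes, assuming scikit-learn (`participant_id` and `recording_id` are hypothetical arrays labelling each training example; the array sizes are dummy values):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

X = np.random.randn(120, 10)                       # dummy feature matrix
participant_id = np.repeat(np.arange(6), 20)       # 6 participants
recording_id = np.tile(np.repeat([0, 1], 10), 6)   # 2 recordings per participant

# Within-participant: for each participant, train on one recording and test on
# the other, so generalization is measured across recordings of the same person.
for p in np.unique(participant_id):
    mask = participant_id == p
    for train_idx, test_idx in LeaveOneGroupOut().split(
            X[mask], groups=recording_id[mask]):
        pass  # fit on X[mask][train_idx], evaluate on X[mask][test_idx]

# Between-participant: hold out whole participants to measure transfer to
# unseen individuals.
for train_idx, test_idx in LeaveOneGroupOut().split(X, groups=participant_id):
    pass  # fit on X[train_idx], evaluate on X[test_idx]
```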
