
Auto Literature Review

Available at https://auto-literature-review.vercel.app/.

Fast and automated study selection for literature review with Scopus and OpenAI.

How it works

Auto Literature Review (ALR) follows a systematic literature review process consisting of three key stages: formulating a search string for the Scopus database, executing the literature search, and evaluating and selecting studies using natural language criteria and machine reasoning. The results from each stage are automatically aggregated into a final output, as shown in the figure below.

Figure: Simplified process flow of the artifact
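
For orientation, the data flowing between the three stages can be pictured with a few TypeScript types. This is a conceptual sketch only; the names are illustrative and do not come from ALR's source code.

```typescript
// Conceptual shape of the three-stage pipeline (illustrative names only).
interface ScopusEntry {
  title: string;
  abstract: string;
  doi?: string;
}

// Stage 1: a natural language topic becomes a Scopus search string.
interface SearchStringResult {
  topicDescription: string;
  scopusSearchString: string;
}

// Stage 2: the search string returns candidate papers from Scopus.
interface LiteratureSearchResult {
  query: string;
  entries: ScopusEntry[];
}

// Stage 3: each paper is judged against natural language criteria.
interface EvaluationResult {
  entry: ScopusEntry;
  accepted: boolean;
  reasoning: string;
}
```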

Configuration

ALR uses the Scopus and OpenAI APIs to perform the literature search and evaluation. To use the APIs, you need to create accounts and request API keys from Scopus and OpenAI. After obtaining a Scopus API key, you must also request an Institutional Token, as it is required to access the abstracts of articles.

You must enter your API keys at https://auto-literature-review.vercel.app/configuration to use the application.

Tip

Your API keys never leave your browser; they are stored in the browser's local storage and used only to make direct requests from your browser to the Scopus and OpenAI APIs.
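
As a minimal sketch, key handling in the browser could look like the following; the storage key names and helper functions are assumptions for illustration, not ALR's actual code.

```typescript
// Illustrative sketch: API keys live only in the browser's local storage.
function saveApiKeys(scopusApiKey: string, openaiApiKey: string): void {
  localStorage.setItem("scopusApiKey", scopusApiKey);
  localStorage.setItem("openaiApiKey", openaiApiKey);
}

function loadApiKey(name: "scopusApiKey" | "openaiApiKey"): string {
  const key = localStorage.getItem(name);
  if (key === null) {
    throw new Error(`${name} is not configured; enter it on the configuration page.`);
  }
  return key;
}
```

Because requests go directly from the browser to the APIs, no intermediate server ever sees the keys.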

Search String

The Search String stage enables users to input a natural language description of a topic or discipline they wish to explore, and the system responds with a reasoned, Scopus-compatible search string. As illustrated in the figure below, this process is guided by a predefined system prompt to ensure that responses strictly follow the Scopus search format.

Figure: Simplified flow chart of the Search String step
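
A hedged sketch of what such a call could look like against the OpenAI Chat Completions API is shown below; the model choice and prompt wording are assumptions, not ALR's actual implementation.

```typescript
// Illustrative: ask OpenAI for a Scopus-compatible search string.
async function generateSearchString(topic: string): Promise<string> {
  const apiKey = localStorage.getItem("openaiApiKey")!; // set on the configuration page
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // assumed model
      messages: [
        {
          role: "system",
          content:
            "You are an expert in systematic literature reviews. " +
            "Respond only with a valid Scopus advanced search string.",
        },
        { role: "user", content: topic },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```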

Literature Search

The Literature Search stage enables users to perform Boolean searches against the Scopus database and efficiently evaluate the returned results, as illustrated in the following figure.

Figure: Simplified flow chart of the Literature Search step
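
As a sketch, a Boolean query against the Scopus Search API could look like the following. The endpoint and headers follow Elsevier's public API documentation, but the view parameter, storage key names, and response fields are assumptions that may differ from what ALR actually requests.

```typescript
// Illustrative: run a Boolean search against the Scopus Search API.
async function searchScopus(query: string): Promise<unknown[]> {
  const url = new URL("https://api.elsevier.com/content/search/scopus");
  url.searchParams.set("query", query); // e.g. TITLE-ABS-KEY("machine learning")
  url.searchParams.set("view", "COMPLETE"); // COMPLETE view includes abstracts

  const response = await fetch(url, {
    headers: {
      Accept: "application/json",
      "X-ELS-APIKey": localStorage.getItem("scopusApiKey")!,
      "X-ELS-Insttoken": localStorage.getItem("scopusInstToken")!, // institutional token
    },
  });
  if (!response.ok) {
    throw new Error(`Scopus request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data["search-results"]?.entry ?? [];
}
```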

Literature Evaluation

The Literature Evaluation stage enables users to request AI-driven reasoning on a test set derived from the Literature Search stage output. Users can provide natural language selection criteria, accompanied by a predefined system prompt, to assess the relevance of the identified literature. In response, the system evaluates whether a paper meets the criteria and provides a reasoned justification, as illustrated in the following figure.

Figure: Simplified flow chart of the Literature Evaluation step
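
A minimal sketch of one such evaluation call is below; the prompt text, model, and JSON response shape are assumptions for illustration, not ALR's actual prompts.

```typescript
// Illustrative: judge one paper against the user's selection criteria.
interface Verdict {
  accepted: boolean;
  reasoning: string;
}

async function evaluatePaper(
  criteria: string,
  title: string,
  abstract: string
): Promise<Verdict> {
  const apiKey = localStorage.getItem("openaiApiKey")!;
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // assumed model
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content:
            "Decide whether the paper meets the selection criteria. " +
            'Reply as JSON: {"accepted": boolean, "reasoning": string}.',
        },
        {
          role: "user",
          content: `Criteria: ${criteria}\nTitle: ${title}\nAbstract: ${abstract}`,
        },
      ],
    }),
  });
  const data = await response.json();
  return JSON.parse(data.choices[0].message.content) as Verdict;
}
```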

Results

The Results stage processes all entries from the Literature Search stage, evaluates each based on the criteria defined in the Literature Evaluation stage, and compiles the final output into a standardized spreadsheet format, as illustrated in the following figure.

Figure: Simplified flow chart of the Results step
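
For illustration, compiling the evaluated entries into a spreadsheet-friendly CSV could look like this sketch; the column names are assumptions, not ALR's actual output schema.

```typescript
// Illustrative: flatten evaluation results into CSV for spreadsheet import.
interface ResultRow {
  title: string;
  accepted: boolean;
  reasoning: string;
}

function toCsv(rows: ResultRow[]): string {
  const escape = (value: string) => `"${value.replace(/"/g, '""')}"`;
  const header = "Title,Accepted,Reasoning";
  const lines = rows.map((row) =>
    [escape(row.title), String(row.accepted), escape(row.reasoning)].join(",")
  );
  return [header, ...lines].join("\n");
}
```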

Development

This project uses dev containers to ensure a consistent development environment. To get started, you need to have Docker and Visual Studio Code installed. Please refer to the official documentation on how to get started with dev containers.

If you prefer not to use dev containers, you can follow .devcontainer/Dockerfile to install the required dependencies on your local machine.

Running the project locally

Run npm run dev to start the development server. The server will be available at http://localhost:3000.

Running tests

Run npm run test to run tests.

Releasing

This project uses changesets to manage releases. Therefore, include a changeset in your pull request, following the guidelines.

License

This project is licensed under the MIT License - see the LICENSE file for details.
