HighMMT

HighMMT is a general-purpose model for high-modality scenarios (a large number of modalities beyond the prototypical language, visual, and acoustic ones) and partially-observable scenarios (many tasks, each defined over only a small subset of the modalities we are interested in modeling).

HighMMT uses multitask learning with shared unimodal and multimodal layers to keep parameter counts stable as modalities are added (addressing scalability), and cross-modal transfer learning to share information across modalities and tasks (addressing partial observability).

The same HighMMT model (architecture and parameters) can simultaneously encode joint representations across different modality subsets spanning images, text, audio, sets, time-series, and graphs.
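The weight-sharing idea can be sketched in a few lines of PyTorch. Everything below (class names, dimensions, the mean-pooled fusion) is an illustrative assumption rather than the actual HighMMT implementation, which lives in the Perceiver-based code under private_test_scripts/perceivers/:

```python
import torch
import torch.nn as nn

class SharedMultimodalModel(nn.Module):
    """Illustrative sketch of HighMMT-style parameter sharing: one shared
    unimodal encoder and one shared cross-attention block are reused for
    every modality and task; only small output heads are task-specific."""

    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        # Shared unimodal layer applied to every modality sequence.
        self.unimodal = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        # Shared cross-modal attention used to fuse modalities.
        self.crossmodal = nn.MultiheadAttention(
            embed_dim=dim, num_heads=num_heads, batch_first=True)
        self.heads = nn.ModuleDict()  # one lightweight classifier per task

    def add_task(self, name, num_classes, dim=128):
        self.heads[name] = nn.Linear(dim, num_classes)

    def forward(self, task, modalities):
        # modalities: list of (batch, seq_len, dim) tensors, one per modality.
        encoded = [self.unimodal(m) for m in modalities]
        fused = []
        for i, query in enumerate(encoded):
            # Attend from each modality to all the others (or itself if alone).
            others = [e for j, e in enumerate(encoded) if j != i] or [query]
            keys = torch.cat(others, dim=1)
            out, _ = self.crossmodal(query, keys, keys)
            fused.append(out.mean(dim=1))
        pooled = torch.stack(fused).mean(dim=0)
        return self.heads[task](pooled)

# Hypothetical usage: two modalities, one task, all dimensions made up.
model = SharedMultimodalModel()
model.add_task("sentiment", num_classes=2)
logits = model("sentiment", [torch.randn(8, 20, 128), torch.randn(8, 50, 128)])
```

Because the unimodal and cross-modal layers are shared, adding a modality or task only adds a small output head, which is the stable-parameter-count property described above.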

Paper

HighMMT: Towards Modality and Task Generalization for High-Modality Representation Learning
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, Ruslan Salakhutdinov
arXiv preprint arXiv:2203.01311, 2022.

If you find this repository useful, please cite our paper:

@article{liang2022highmmt,
  title={HighMMT: Towards Modality and Task Generalization for High-Modality Representation Learning},
  author={Liang, Paul Pu and Lyu, Yiwei and Fan, Xiang and Mo, Shentong and Yogatama, Dani and Morency, Louis-Philippe and Salakhutdinov, Ruslan},
  journal={arXiv preprint arXiv:2203.01311},
  year={2022}
}

Contributors

Correspondence to:

Usage

Data Download

This repo is built on top of the MultiBench repository; to download the datasets, follow the instructions at https://github.com/pliang279/MultiBench.git.
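Once a dataset is downloaded, MultiBench loads it through per-dataset get_dataloader helpers. A minimal sketch for AV-MNIST, assuming the MultiBench repo root is on PYTHONPATH and the loader's default arguments (check MultiBench's documentation for the exact signature and batch layout):

```python
# Assumes MultiBench is on PYTHONPATH and AV-MNIST was downloaded to
# /data/avmnist following MultiBench's instructions; the batch layout
# (one tensor per modality, then labels) is an assumption to verify.
from datasets.avmnist.get_data import get_dataloader

traindata, validdata, testdata = get_dataloader('/data/avmnist')
images, audio, labels = next(iter(traindata))
```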

Easy setting experiment code

From the root of this repo, run

python private_test_scripts/perceivers/roboticstasks.py model.pt

The model will be saved to model.pt.
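To reload the checkpoint for evaluation, a minimal sketch (this assumes the script saves the full model object with torch.save; if it saves a state_dict instead, instantiate the model first and call load_state_dict):

```python
import torch

# Assumes a pickled full model; on newer PyTorch versions you may need
# torch.load('model.pt', map_location='cpu', weights_only=False).
model = torch.load('model.pt', map_location='cpu')
model.eval()
```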

Medium setting experiment code

To run the medium-setting tasks, run

python private_test_scripts/perceivers/medium_tasks.py

Hard setting experiment code

To run multitask training on 1, 2, 3, or 4 tasks, run the corresponding script below (an illustrative sketch of the shared multitask loop follows the commands):

python private_test_scripts/perceivers/singletask.py
python private_test_scripts/perceivers/twomultitask.py
python private_test_scripts/perceivers/threemultitask.py
python private_test_scripts/perceivers/fourmultitask.py
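Each script trains one shared model on the listed number of tasks. The multitask recipe can be summarized as alternating batches from every task's dataloader through the same shared parameters; the sketch below is illustrative (all names and signatures are assumptions, not the scripts' actual code):

```python
import itertools
import torch

def multitask_train(model, task_loaders, optimizer, steps=1000):
    """Round-robin multitask training over {task_name: dataloader}.
    Every task updates the same shared unimodal/multimodal parameters,
    which is what enables cross-modal transfer at a fixed model size."""
    iters = {task: itertools.cycle(loader) for task, loader in task_loaders.items()}
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        for task, batches in iters.items():
            *modalities, labels = next(batches)   # unpack one batch for this task
            logits = model(task, modalities)      # shared trunk, task-specific head
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```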
