# Multi-level Mixture of Experts for Multimodal Entity Linking

This repo provides the source code & data of our paper: Multi-level Mixture of Experts for Multimodal Entity Linking (KDD 2025).

## Dependencies

- `conda create -n mmoe python=3.7 -y`
- `torch==1.11.0+cu113`
- `transformers==4.27.1`
- `torchmetrics==0.11.0`
- `tokenizers==0.12.1`
- `pytorch-lightning==1.7.7`
- `omegaconf==2.2.3`
- `pillow==9.3.0`
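A minimal setup sketch, assuming the pinned packages are installed with pip inside the conda environment (the `+cu113` torch wheel is served from PyTorch's cu113 index, so it needs the extra index URL):

```sh
# Create and activate the environment from the first bullet above.
conda create -n mmoe python=3.7 -y
conda activate mmoe

# Install the CUDA 11.3 build of torch from PyTorch's wheel index.
pip install torch==1.11.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

# Install the remaining pinned dependencies.
pip install transformers==4.27.1 torchmetrics==0.11.0 tokenizers==0.12.1 \
    pytorch-lightning==1.7.7 omegaconf==2.2.3 pillow==9.3.0
```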

## Running the code

### Dataset

1. Download the datasets from the MIMIC paper.
2. Download the data with WikiData description information from here and move it to the corresponding MIMIC dataset folder.
3. Create the root directory `./data` and put the datasets in it.
4. Download the pretrained weights from clip-vit-base-patch32.
5. Create the root directory `./checkpoint` and put the pretrained weights in it (see the sketch after this list).
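A hedged sketch of steps 3 and 5 as shell commands; the dataset folder name is a placeholder for whichever MIMIC dataset you downloaded, not a path confirmed by the repo:

```sh
# Create the two root directories expected by the code.
mkdir -p ./data ./checkpoint

# Move the downloaded MIMIC dataset (with the WikiData descriptions merged in)
# under ./data; <mimic_dataset_folder> is a placeholder name.
mv <mimic_dataset_folder> ./data/

# Move the pretrained CLIP weights under ./checkpoint.
mv clip-vit-base-patch32 ./checkpoint/
```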

### Training model

```sh
sh run.sh
```

Note: run.sh contains the training commands for all three datasets; switch between them by uncommenting the corresponding command, as in the sketch below.
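A hypothetical illustration of the uncomment-to-switch pattern (the script name, flag, and dataset identifiers below are placeholders; the actual commands are defined in run.sh):

```sh
# Only one command is active at a time; uncomment the dataset you want to train on.
python main.py --dataset wikimel
# python main.py --dataset richpediamel
# python main.py --dataset wikidiverse
```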

### Training logs

Note: We provide our training logs in the `logs` directory.

## Citation

If you find this code useful, please consider citing the following paper.

```bibtex
@inproceedings{hu2025multilevel,
  author    = {Zhiwei Hu and Víctor Gutiérrez-Basulto and Zhiliang Xiang and Ru Li and Jeff Z. Pan},
  title     = {Multi-level Mixture of Experts for Multimodal Entity Linking},
  booktitle = {ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  year      = {2025}
}
```

## Acknowledgement

Our code refers to the implementations of MIMIC and MEL-M3EL. Thanks for their contributions.
