This repository contains the code for our paper: Understanding Co-speech Gestures in-the-wild.
Authors: Sindhu Hegde, K R Prajwal, Taein Kwon, Andrew Zisserman
| 📝 Paper | 📑 Project Page | 📦 AVS-Spot Dataset | 🛠 Demo |
|---|---|---|---|
| Paper | Website | Dataset | Coming soon |
We present JEGAL, a Joint Embedding space for Gestures, Audio and Language. Our semantic gesture representations can be used for multiple downstream tasks, such as cross-modal retrieval, spotting gestured words, and identifying who is speaking using gestures alone.
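Once the trained encoders are released, retrieval in a joint embedding space typically reduces to nearest-neighbour search over L2-normalized embeddings. The sketch below illustrates that general idea only; `encode_text` / `encode_gestures` are hypothetical stand-ins and not the released JEGAL API.

```python
import numpy as np

def retrieve(query_emb: np.ndarray, gallery_embs: np.ndarray, top_k: int = 5):
    """Rank gallery items by cosine similarity to a query embedding.

    query_emb:    (D,) embedding of the query (e.g. a text phrase).
    gallery_embs: (N, D) embeddings of the gallery (e.g. gesture clips).
    Returns the indices of the top_k most similar gallery items.
    """
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q
    return np.argsort(-scores)[:top_k]

# Hypothetical usage once the JEGAL encoders are available:
# text_emb = encode_text("tiny")          # (D,)
# gesture_embs = encode_gestures(clips)   # (N, D)
# top_clips = retrieve(text_emb, gesture_embs, top_k=5)
```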
- [2025.03.31] 🔥 The paper has been released on arXiv.
- [2025.03.29] 🤗 Our new gesture-spotting dataset, AVS-Spot, has been released!
AVS-Spot is a gestured word-spotting dataset. Refer to the 🤗 datasets page and the dataset section for details on downloading and pre-processing the data.
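In the meantime, the snippet below shows a typical way to pull and inspect a dataset hosted on the Hugging Face Hub with the `datasets` library; the repository ID shown is a placeholder, not the actual AVS-Spot ID.

```python
from datasets import load_dataset  # pip install datasets

# NOTE: "ORG/AVS-Spot" is a placeholder repository ID; substitute the real ID
# from the 🤗 datasets link above.
avs_spot = load_dataset("ORG/AVS-Spot")

# Show the available splits, then peek at the first record of the first split.
print(avs_spot)
first_split = next(iter(avs_spot.values()))
print(first_split[0])
```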
Thank you for visiting; we appreciate your interest in our work! We plan to release the inference script along with the trained models soon, likely within the next few weeks. Until then, stay tuned and watch the repository for updates.