🪴
hard working
Fourth-year Ph.D. Student @ictnlp
- Beijing
- https://fangqingkai.github.io/
Highlights
- Pro
Pinned
- ictnlp/LLaMA-Omni: LLaMA-Omni is a low-latency, high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.
- ictnlp/DASpeech: Code for the NeurIPS 2023 paper "DASpeech: Directed Acyclic Transformer for Fast and High-quality Speech-to-Speech Translation".
- ictnlp/STEMM: Code for the ACL 2022 main conference paper "STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation".
- ictnlp/ComSpeech: Code for the ACL 2024 main conference paper "Can We Achieve High-quality Direct Speech-to-Speech Translation Without Parallel Speech Data?".
- ictnlp/CRESS: Code for the ACL 2023 main conference paper "Understanding and Bridging the Modality Gap for Speech Translation".