My SID and ASV results of the data2vec-base model are different from the benchmark #469
Comments
Hi, hmm, this is really different.
I have tried -u data2vec (result: 54.85) and -u data2vec_local (result: 56.57).
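For reference, a minimal sketch of the two runs being compared, assuming the standard s3prl run_downstream.py entry point, where -u selects the upstream and -k (for the *_local variants) points to a locally trained checkpoint; the experiment names and checkpoint path are placeholders:

```bash
# SID (VoxCeleb1 speaker identification) with the official data2vec upstream,
# which s3prl downloads automatically
python3 run_downstream.py -m train -n sid_data2vec -u data2vec -d voxceleb1

# Same downstream task, but loading a data2vec checkpoint pre-trained locally
python3 run_downstream.py -m train -n sid_data2vec_local \
    -u data2vec_local -k /path/to/my_data2vec.pt -d voxceleb1
```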
I believe the default learning rate for SID is too small; please try 1.0e-2 and 1.0e-3, and the results should be closer.
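A sketch of how that learning-rate change could be tried without editing config.yaml, assuming run_downstream.py's -o override option and the config.optimizer.lr field of the voxceleb1 downstream config; the experiment name is a placeholder:

```bash
# Re-run SID with a larger learning rate (try 1.0e-2 and 1.0e-3 as suggested),
# overriding the default value in downstream/voxceleb1/config.yaml
python3 run_downstream.py -m train -n sid_data2vec_lr1e-3 \
    -u data2vec -d voxceleb1 -o "config.optimizer.lr=1.0e-3"
```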
Ok, I will try your suggestion later, thanks a lot! I have another question: if I use DDP with multiple GPUs, may I adjust some hyperparameters? To stay consistent with the benchmark, which hyperparameters are allowed to be adjusted in DDP training: lr, total_steps, or any others?
Sure, I think only these two are adjustable.
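A rough sketch of a DDP run, assuming s3prl is launched through torch.distributed.launch and that multiple -o overrides are joined with ',,'; the lr and total_steps values here are placeholders, not recommended settings:

```bash
# DDP on 2 GPUs: the effective batch size doubles, so lr and total_steps
# are the two knobs mentioned above that may need adjusting
python3 -m torch.distributed.launch --nproc_per_node 2 run_downstream.py \
    -m train -n sid_data2vec_ddp -u data2vec -d voxceleb1 \
    -o "config.optimizer.lr=1.0e-3,,config.runner.total_steps=100000"
```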
Ok, I see, thank you!
Hello, thanks for your work. I mainly work on ASV, and I have two questions. The EER reported in the paper is 6.02% using a pre-trained wav2vec 2.0 Base model; I wonder whether the training data is VoxCeleb1 dev or VoxCeleb2 dev. The other question: when I tried to reproduce the results (wav2vec2 base + x-vector), the accuracy was quite low, still below 10% after several epochs. Could you share the training log for reference, if possible? Thank you.
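For completeness, a sketch of the ASV runs in question, assuming the s3prl downstream name sv_voxceleb1 for the x-vector speaker-verification recipe; the experiment name and checkpoint filename are placeholders:

```bash
# Train the ASV downstream with the wav2vec 2.0 Base upstream
python3 run_downstream.py -m train -n asv_wav2vec2 -u wav2vec2 -d sv_voxceleb1

# Compute EER on the VoxCeleb1 test trials from a saved state
# (replace the checkpoint name with an actual file under result/downstream/)
python3 run_downstream.py -m evaluate -e result/downstream/asv_wav2vec2/states-200000.ckpt
```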
Your email has been received, thank you.
I trained a data2vec-base model myself, then used the parameters in s3prl/s3prl/downstream/voxceleb1/config.yaml to reproduce the ASV and SID tasks of the SUPERB benchmark, but the results are quite different. I trained the downstream tasks on a single GPU.
Benchmark results:
SID: 70.21
ASV: 5.77
My results:
SID: 56.57
ASV: 6.77
Even if I use the official model from torch.hub, the downstream SID result is not reproducible (only 54.85).
May I ask whether you changed the parameters in config.yaml when training the downstream tasks for the different pre-trained models? If so, where can I find the list of parameters you actually used?