The memory footprint kept increasing during training and everything else was fine · Issue #1 · NJUNLP/njuqe · GitHub
Hello, your work is great, but during training my memory usage keeps increasing steadily, even though all the metrics update normally. I will soon exhaust my 40 GB of memory and be forced to stop. What could be the problem?
Hi! Thanks for your attention! You should not use --qe-meter during pre-training. To compute dataset-level metrics such as Pearson, MCC, and F1-MULT correctly, we have to save the predictions in the "reduce_metrics" function of the QE loss. Fairseq may record all training states, including these saved predictions, so memory usage keeps increasing.
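To illustrate the mechanism described above: this is a minimal, self-contained sketch, not the actual njuqe or Fairseq code. The names (`predictions_log`, `reduce_metrics_sketch`, `keep_predictions`) are hypothetical; the point is that dataset-level metrics need every prediction retained, and if the trainer's logging state keeps those lists across steps, memory grows without bound, whereas logging only scalars keeps it flat.

```python
# Hypothetical sketch of why saving predictions in a reduce_metrics-style
# hook leaks memory during long training runs.

predictions_log = []  # accumulator for dataset-level metrics (e.g. Pearson)

def reduce_metrics_sketch(step_preds, keep_predictions):
    """Accumulate per-step predictions only when dataset-level metrics
    are actually needed (e.g. evaluation with a QE meter)."""
    if keep_predictions:
        # Every step appends to the list, so over thousands of steps
        # the retained state grows without bound.
        predictions_log.extend(step_preds)
    # Otherwise, log only per-step scalars (loss, etc.) and the
    # retained state stays constant in size.

# Pre-training loop with prediction saving disabled: memory stays bounded.
for step in range(1000):
    reduce_metrics_sketch([0.1, 0.2], keep_predictions=False)

assert len(predictions_log) == 0
```

In other words, during pre-training it is enough to log scalar statistics per step; saving raw predictions should be reserved for evaluation, where the full list is needed once to compute the dataset-level scores.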