Description
While training, I noticed that although the training loss keeps decreasing, the validation loss barely decreases.
Strangely, though, checkpoints trained for more steps do perform better than earlier ones.
Has anyone run into the same phenomenon? How should it be explained?
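For what it's worth, one way to investigate is to evaluate each saved checkpoint on both validation loss and a task metric: validation loss can plateau (or even rise) while the task metric still improves, for example when the model becomes more confident on the examples it gets wrong even as it gets more examples right. Below is a minimal sketch, assuming a PyTorch classification setup; `evaluate`, the checkpoint paths, and `val_loader` are hypothetical placeholders, not anything from this repo.

```python
import torch
import torch.nn.functional as F

def evaluate(model, val_loader, device="cpu"):
    """Return (mean validation loss, accuracy) over the validation set."""
    model.eval()
    total_loss, correct, count = 0.0, 0, 0
    with torch.no_grad():
        for inputs, targets in val_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            logits = model(inputs)
            # Sum the loss so we can average over examples, not batches.
            total_loss += F.cross_entropy(logits, targets, reduction="sum").item()
            correct += (logits.argmax(dim=-1) == targets).sum().item()
            count += targets.numel()
    return total_loss / count, correct / count

# Compare checkpoints from different training stages: if accuracy rises
# while validation loss stays flat, the model is still improving in ways
# the loss alone does not show. (Paths below are hypothetical.)
# for path in ["ckpt_epoch05.pt", "ckpt_epoch50.pt"]:
#     model.load_state_dict(torch.load(path))
#     loss, acc = evaluate(model, val_loader)
#     print(f"{path}: val_loss={loss:.4f}  val_acc={acc:.4f}")
```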