In the implementation of w4a8 and w8a8, it appears that layer normalization is used in place of RMS normalization. Why does this direct substitution of the normalization method work? Additionally, after testing and comparing RMS_norm and layer_norm on the w4a8_per_chn model, layer_norm seems to perform better on GSM8K. Could you explain this in more detail?
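For context on the question: the two normalizations differ only in that LayerNorm subtracts the mean before scaling, while RMSNorm scales by the root-mean-square directly. A minimal NumPy sketch (not the repository's actual implementation; function names and the `eps` value are illustrative) showing that the two coincide exactly when the activations are zero-mean:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # LayerNorm: center by the mean, then divide by the standard deviation
    mu = x.mean(-1, keepdims=True)
    var = ((x - mu) ** 2).mean(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def rms_norm(x, eps=1e-6):
    # RMSNorm: divide by the root-mean-square, with no centering step
    rms = np.sqrt((x ** 2).mean(-1, keepdims=True) + eps)
    return x / rms

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))

# For zero-mean activations, std == RMS, so the two outputs are identical
x0 = x - x.mean(-1, keepdims=True)
assert np.allclose(layer_norm(x0), rms_norm(x0), atol=1e-5)

# For general inputs they differ by the centering term
assert not np.allclose(layer_norm(x), rms_norm(x), atol=1e-5)
```

So a direct substitution changes the result only through the mean-subtraction term; whether that term is negligible for the model's activations is presumably part of what the question is asking about.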