Bump `fla` to v0.2.1
[Token Mixing] Remove the `head_first` arg from token mixing layers (#347)
Bump `fla` to v0.1.2 (#264)
Bump `fla` to v0.1.1
[Misc.] Use pkg name `flash-linear-attention`