Fix erroneous quantization operations in TFLite model by puddly · Pull Request #39 · kahrendt/microWakeWord


Merged: 1 commit on Dec 21, 2024

Conversation

@puddly (Contributor) commented on Dec 21, 2024

#36 accidentally removed a critical flag for model export.

Without it, TensorFlow does not quantize the Stream pseudo-layer's state variables to int8, leaving the exported TFLite model littered with Quantize and Dequantize opcodes. Those opcodes are not registered with the microWakeWord ESPHome component, so the interpreter crashes on startup when the model runs on an ESP32.
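For context, microWakeWord exports models through the TFLite converter's post-training full-integer quantization path. Below is a minimal sketch of such an export, with a placeholder Keras model standing in for the real streaming architecture and hypothetical calibration data; the `_experimental_variable_quantization` flag shown is an assumption about which flag this PR restores, since it is what tells the converter to store variable state as int8 rather than wrapping each variable access in Quantize/Dequantize ops.

```python
import numpy as np
import tensorflow as tf

# Placeholder standing in for the real streaming wake word model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3, 40)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

def representative_dataset():
    # Hypothetical calibration data; the real pipeline feeds audio
    # feature windows from the training set.
    for _ in range(100):
        yield [np.random.rand(1, 3, 40).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

# Assumed to be the critical flag: without it, the converter keeps
# internal state variables in float32 and brackets every state
# read/write with Quantize/Dequantize ops instead of storing the
# state itself as int8.
converter._experimental_variable_quantization = True

tflite_model = converter.convert()
```

With the state fully quantized, the flatbuffer contains only builtin ops the ESPHome component already registers, so the Quantize/Dequantize opcodes never appear in the model in the first place.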

@kahrendt (Owner) left a comment


Thanks! I had just noticed this on a model where I was experimenting with the architecture a bit, but hadn't had a chance to investigate it further.
