
This page contains the author's fragmentary, immature notes and thoughts. Please read with your own judgement!

https://www.mathworks.com/company/newsletters/articles/what-is-int8-quantization-and-why-is-it-popular-for-deep-neural-networks.html
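The MathWorks article above explains int8 quantization as mapping floating-point values onto 8-bit integers via a scale factor. A minimal sketch of symmetric linear quantization (my own illustration, not code from the article):

```python
import numpy as np

def quantize_int8(x):
    # Symmetric linear quantization: map [-max|x|, max|x|]
    # onto the signed 8-bit range [-127, 127].
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale

x = np.array([0.1, -0.5, 0.9, -1.2], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
```

Rounding loses at most half a quantization step per element, so the reconstruction error is bounded by `scale / 2`.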

https://engineering.fb.com/ai-research/floating-point-math/

Rethinking floating point for deep learning

Training Deep Neural Networks with 8-bit Floating Point Numbers
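The paper above trains networks using 8-bit floating point numbers. As an illustration of what an 8-bit float encodes (my own sketch, assuming a 1-5-2 sign/exponent/mantissa layout with bias 15, and ignoring special values such as infinity and NaN, which the paper may handle differently):

```python
def decode_fp8_152(byte):
    # Unpack sign (1 bit), exponent (5 bits), mantissa (2 bits).
    s = (byte >> 7) & 0x1
    e = (byte >> 2) & 0x1F
    m = byte & 0x3
    bias = 15
    if e == 0:
        # Subnormal: no implicit leading 1, fixed exponent of 1 - bias.
        val = (m / 4.0) * 2.0 ** (1 - bias)
    else:
        # Normal: implicit leading 1 plus 2-bit fraction.
        val = (1.0 + m / 4.0) * 2.0 ** (e - bias)
    return -val if s else val
```

With only 2 mantissa bits, adjacent representable values differ by 25% of a binade, which is why such formats are paired with higher-precision accumulation during training.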

8-Bit Quantization and TensorFlow Lite: Speeding up mobile inference with low precision