Ben Chuanlong Du's Blog

It is never too late to learn.

Number Precision in Deep Learning

Things on this page are fragmentary and immature notes/thoughts of the author. Please read with your own judgement!

What is int8 quantization and why is it popular for deep neural networks? https://www.mathworks.com/company/newsletters/articles/what-is-int8-quantization-and-why-is-it-popular-for-deep-neural-networks.html
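The int8 quantization discussed in the MathWorks article maps a floating-point range onto 256 integer levels using a scale and a zero point. Below is a minimal sketch of affine (asymmetric) per-tensor quantization, assuming NumPy and simple min/max calibration; the function names are my own, not from the article:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization: map [x.min(), x.max()] onto [-128, 127]."""
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(np.round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize_int8(q, scale, zp)
# round-trip error is bounded by half a quantization step (scale / 2)
```

Storing weights and activations as int8 cuts memory traffic by 4x versus float32, at the cost of this bounded rounding error.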

https://engineering.fb.com/ai-research/floating-point-math/

Rethinking floating point for deep learning

Training Deep Neural Networks with 8-bit Floating Point Numbers
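The paper above trains with 8-bit floating-point numbers, which leave only a few mantissa bits. One way to build intuition for how coarse such formats are is to truncate the 23-bit mantissa of float32 values. The sketch below does exactly that; it uses truncation rather than round-to-nearest and ignores the reduced exponent range, so it is only a crude simulation, and the helper name is mine:

```python
import numpy as np

def truncate_mantissa(x, keep_bits: int):
    """Zero out the trailing (23 - keep_bits) mantissa bits of float32
    values, crudely simulating a float with `keep_bits` mantissa bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    drop = 23 - keep_bits
    mask = np.uint32((0xFFFFFFFF >> drop) << drop)
    return (bits & mask).view(np.float32)

pi = np.float32(np.pi)               # 3.1415927 in float32
fp8_like = truncate_mantissa(pi, 3)  # keep 3 mantissa bits
# with only 3 mantissa bits, 3.1415927 truncates to 3.0
```

The large gap between 3.1415927 and 3.0 illustrates why low-precision training needs extra machinery (loss scaling, higher-precision accumulation) rather than naive casting.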

8-Bit Quantization and TensorFlow Lite: Speeding up mobile inference with low precision
