- Machine Learning Projects for Mobile Applications
- Karthikeyan NG
TensorFlow Lite memory usage and performance
TensorFlow Lite uses FlatBuffers as its model format. FlatBuffers is a cross-platform, open source serialization library. Its main advantage is that data can be accessed directly from the serialized buffer, without a packing/unpacking step into a secondary representation; this also avoids per-object memory allocation. As a result, FlatBuffers keeps the memory footprint smaller than Protocol Buffers.
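The zero-copy idea can be sketched in Python using the standard `struct` module: fields are read directly at their offsets in the raw bytes, with no intermediate unpacked object. The two-field layout below is invented for illustration and is not the real FlatBuffers wire format.

```python
import struct

# A toy fixed-layout buffer: two float32 fields packed back to back.
buf = struct.pack("<ff", 0.25, 0.5)

def read_field(buffer, index):
    # FlatBuffers-style access: decode only the requested field,
    # straight out of the serialized bytes.
    return struct.unpack_from("<f", buffer, offset=4 * index)[0]

first = read_field(buf, 0)   # 0.25
second = read_field(buf, 1)  # 0.5
```

Generated FlatBuffers accessors work the same way at heart: each field read is an offset lookup into the original buffer, so loading a model never requires materializing a parsed copy.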
FlatBuffers was originally developed for gaming platforms and has since been adopted by other performance-sensitive applications. At conversion time, TensorFlow Lite pre-fuses activations and biases, allowing the resulting model to execute faster. The interpreter uses static memory allocation and a static execution plan, which lets it load faster. The operation kernels are optimized for ARM processors using the NEON instruction set.
TensorFlow Lite takes advantage of innovations happening at the silicon level on these devices. It supports the Android Neural Networks API (NNAPI); at the time of writing, a few original equipment manufacturers (OEMs) have started using the NNAPI. TensorFlow Lite also uses direct graphics acceleration: the Open Graphics Library (OpenGL) on Android and Metal on iOS.
To improve performance, quantization has been introduced: a technique for storing numbers, and performing calculations on them, in lower-precision fixed-point form. This helps in two ways. First, a smaller model is better suited to smaller devices. Second, many processors have specialized SIMD instruction sets that process fixed-point operands much faster than floating-point numbers. A very naive way to quantize would be to simply shrink the weights and activations after training is done, but this leads to suboptimal accuracy.
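The core arithmetic behind 8-bit quantization can be sketched in plain NumPy. This is a minimal illustration of affine (scale/zero-point) quantization, the scheme TensorFlow Lite's quantized kernels are built on; the helper names here are invented for the example.

```python
import numpy as np

def quantize(values, num_bits=8):
    """Affine quantization: map float32 values to uint8 via a scale and zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = float(values.min()), float(values.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the representable range must include 0.0
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = int(round(qmin - lo / scale))
    q = np.clip(np.round(values / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized representation."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.array([-0.7, 0.0, 0.3, 1.2], dtype=np.float32)
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)  # close to weights, within one quantization step
```

Each float is replaced by a single byte, shrinking storage 4x, and the rounding error is bounded by half a quantization step, which is why naive post-training shrinking works at all and why careful range calibration matters for accuracy.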
TensorFlow Lite gives roughly three times the performance of TensorFlow on MobileNet and Inception-v3. While TensorFlow Lite currently supports only inference, a training module is expected to be added as well. TensorFlow Lite supports around 50 commonly used operations.
It supports MobileNet, Inception-v3, ResNet50, SqueezeNet, DenseNet, Inception-v4, SmartReply, and others.
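The end-to-end flow described above, converting a model into the FlatBuffers format and running it through the interpreter, can be shown with the TensorFlow Lite Python API. The trivial add-one model here is a stand-in for a real network such as MobileNet, so the example stays self-contained.

```python
import numpy as np
import tensorflow as tf

# A trivial model wrapped as a tf.Module so the example needs no downloads.
class AddOne(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return x + 1.0

model = AddOne()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.__call__.get_concrete_function()], model)
tflite_bytes = converter.convert()  # a FlatBuffers model, ready to ship on-device

# Run the converted model with the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()  # static allocation happens up front
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

On Android or iOS the same `.tflite` bytes are loaded by the platform interpreter, where delegates such as the NNAPI or GPU delegates can take over supported operations.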
