STMicroelectronics has released STM32Cube.AI version 7.2.0, the first artificial intelligence (AI) development tool from a microcontroller (MCU) vendor to support ultra-efficient, deeply quantized neural networks.
STM32Cube.AI converts pre-trained neural networks into C code optimized for STM32 MCUs. It is an essential tool for developing cutting-edge AI solutions that make the most of the constrained memory and computing power of embedded products.
Moving AI to the edge, away from the cloud, offers substantial advantages for the application. These include privacy by design, deterministic real-time response, increased reliability, and lower power consumption. It also helps optimize cloud usage.
Now, with support for deeply quantized input formats such as QKeras and Larq, developers can further reduce network size, memory consumption, and latency. These benefits unlock more possibilities for AI at the edge, including cost-sensitive applications.
Developers can therefore create edge devices, such as self-powered IoT endpoints, that offer advanced functionality and performance with longer battery life. ST's STM32 family offers many suitable hardware platforms, ranging from ultra-low-power Arm Cortex-M0 MCUs to high-performance devices built on Cortex-M7, Cortex-M33, and Cortex-A7 cores.
STM32Cube.AI version 7.2.0 also adds support for TensorFlow 2.9 models, kernel performance improvements, new scikit-learn machine learning algorithms, and new Open Neural Network eXchange (ONNX) operators.