[Library] TensorFlowLite port is live!

Hey Folks,

A number of you are aware that I’ve been working to port the TensorFlow Lite Micro library over to the Particle platform for the last few months. I’m happy to share that v0.1.0 of the library is live and publicly available for installation and use! You can find the repo here: https://github.com/bsatrom/Particle_TensorFlowLite

In addition to the library, I’ve included a couple of working examples, as well as a complete end-to-end guide that goes from model creation with TensorFlow and Keras to on-device inferencing. If you’re looking to run your own models on MCUs, that’s a good place to start.
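To give a flavor of where that guide ends up, here's a minimal sketch of the on-device side, modeled on the hello_world (sine) example. The `g_model_data` array, the arena size, and the include paths are illustrative; check the library's examples for the exact layout your version uses.

```cpp
#include "Particle.h"

// Include paths follow the upstream TFLite Micro layout of this era;
// they may differ slightly between releases of the Particle library.
#include "tensorflow/lite/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Your trained model, converted to a .tflite flatbuffer and dumped to a
// C array (e.g. with `xxd -i model.tflite`). Defined elsewhere in the app.
extern const unsigned char g_model_data[];

namespace {
  tflite::MicroErrorReporter micro_error_reporter;
  tflite::ErrorReporter* error_reporter = &micro_error_reporter;
  const tflite::Model* model = nullptr;
  tflite::MicroInterpreter* interpreter = nullptr;
  TfLiteTensor* input = nullptr;
  TfLiteTensor* output = nullptr;

  // 2 KB suits the tiny sine model; size this to your own model's needs.
  constexpr int kTensorArenaSize = 2 * 1024;
  uint8_t tensor_arena[kTensorArenaSize];
}

void setup() {
  Serial.begin(9600);

  model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    error_reporter->Report("Model schema version mismatch");
    return;
  }

  // AllOpsResolver registers every op; a production build would register
  // only the ops the model actually uses, to save flash.
  static tflite::ops::micro::AllOpsResolver resolver;
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  interpreter->AllocateTensors();
  input = interpreter->input(0);
  output = interpreter->output(0);
}

void loop() {
  input->data.f[0] = 0.5f;  // one float in, as in the sine example
  if (interpreter->Invoke() == kTfLiteOk) {
    Serial.printlnf("y = %f", output->data.f[0]);  // one float out
  }
  delay(1000);
}
```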

Three of the examples (micro_speech, magic_wand, and person_detection) are not quite complete; I'm hoping to get those ported over the next few weeks.

For those chomping at the bit for some on-device ML, I hope this is useful! I’m also hoping to flesh this out with community input, so feel free to file issues and PRs as you work with this library.

Happy inferencing!

Brandon

This is very cool stuff!

A big step forward for microcontroller capabilities.

Well Done!
I thought you were blocked by lack of flash space - how did you overcome this?

Initially, I was. However, working with the DeviceOS team, we were able to free up quite a bit of flash space by flagging some internal dependencies as debug-build only. The net result was about a 15-20k smaller system part in 1.4.0+.

The second issue I was fighting with the last few months was some heap fragmentation and overflows in the TFLite core library itself. That was resolved by the Google team a few weeks ago, so I’ve been moving much faster since then.

Model size is still going to be the core constraint here (as it is for all MCUs), and I don't quite yet know what the model size limit is for a simple app (current working demo models go up to ~19k; see the sketch below for how that size shows up in a project). Hoping to get that figured out in the next few days as I port some of the other demos to the platform. Even so, there are a couple of things coming down the pike on the DeviceOS side that should make even large models workable.
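For anyone wondering how a model's "size" shows up in a project: the model is compiled into the binary as a C byte array (typically generated from the .tflite file with `xxd -i`), so its flash cost is literally the size of that array. The names and the 20k budget below are illustrative:

```cpp
// Generated with, e.g.: xxd -i sine_model.tflite > sine_model_data.h
// A real array holds the full flatbuffer; it's truncated here.
alignas(8) const unsigned char g_sine_model_data[] = {
    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // "TFL3" flatbuffer magic
    // ... roughly 19k of weights and graph structure ...
};

// Catch a retrained model outgrowing its budget at compile time. The 20k
// figure is an arbitrary example budget, not a platform limit.
static_assert(sizeof(g_sine_model_data) <= 20 * 1024,
              "model no longer fits the flash budget reserved for it");
```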

Either way, we’re inferencing with Particle and I’m pumped to keep exploring this space!

The net result was about a 15-20k smaller system part in 1.4.0+.

Is this something I will notice with other applications or specifically tied to the TFLite library?

For all apps on Gen3 devices. The change was mainlined in 1.4.0.

Wow! I missed that one in the release notes - that’s a massive chunk of application code space.

Could you recommend where to start with TensorFlow Lite, from your experience to date? Sorry if this is in the GitHub repo; I didn't see anything standing out. Also, for the examples, is there somewhere I could find more about how the ML models were selected and how they work?

For general getting started, I’d go here: https://www.tensorflow.org/lite/microcontrollers. I created a Particle-specific guide from model to MCU here as well.

My changes to the port haven’t been mainlined into TensorFlow yet, but I’m working with the Google team to make that happen. Once we do, I’m hoping we can land on that page above. :smiley:

@bsatrom,

This is awesome!

Amazing what you have done so far but, if I might wish for more… We have a hackathon coming up in the first week of December. Any chance the person detection example would be working / hack-worthy by then?

Thanks,

Chip

Hey Chip, thanks! I actually did port the person detection example yesterday. That said, I cannot run it on our hardware at the moment because the model is 250k and the binary would overflow flash by about 80k. I'm blocked on this one until there are some options for storing and loading models from other locations.

That said, even if it did work, it's important to note that the inference time of this demo on other hardware is ~19 seconds. I know that CV is an exciting use case in the ML space, but I don't believe it's a good fit for ML on MCUs at present. It's a nice example of what's possible, but not quite yet what's practical.

And thus the reason for the creation of the Kendryte K210 AI chip, which can offload a lot of the ML work from a host MCU such as a Particle device.

Chip, you could look at something like the OMRON B5T-0007001-010 human vision component. It is expensive, but it contains a camera and offloads all the processing from the Particle device, providing face, hand, and body detection and counting; it also does face recognition plus age, gender, and mood detection! I made a rig with one for a show, for a bit of fun: can I guess your mood? It interfaces via UART (see the generic sketch below).

If you haven't the budget but fancy some real hacking, the ESP32-Cam can be programmed via the Arduino IDE and can communicate with a Particle device in a number of ways. They are incredibly cheap (about $6) and can be loaded with some basic CV facial recognition software.
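If anyone wants a feel for the Particle side of a serial module like that, it's just a UART conversation over Serial1. The framing bytes here are placeholders, not the actual OMRON protocol; the datasheet has the real commands:

```cpp
#include "Particle.h"

// Generic pattern for polling a UART vision module on Serial1 (TX/RX).
void setup() {
  Serial.begin(9600);   // USB serial, for logging what comes back
  Serial1.begin(9600);  // many modules default to 9600 baud; check yours
}

void loop() {
  // Hypothetical "run detection" command; replace with the module's real framing.
  const uint8_t kDetectCommand[] = {0xFE, 0x01, 0x00};
  Serial1.write(kDetectCommand, sizeof(kDetectCommand));

  // Give the module time to work, then drain and log its response bytes.
  delay(500);
  while (Serial1.available()) {
    Serial.printf("0x%02X ", Serial1.read());
  }
  Serial.println();

  delay(2000);  // poll every couple of seconds
}
```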

@armor,

Thank you for the suggestion. I will take a look. I also have an OpenMV Cam and have backed the Husky Lens project on Kickstarter. We are on the cusp of this technology delivering the hat trick of low cost, low power and simplicity. Can’t wait!

Chip

P.S. - Implicit in my excitement is that this technology will be used for good. We can all hope.

This is great work!

I have been trying to get the Micro-Speech example up and running, but I am having some trouble.

Everything compiles, flashes, and runs fine. It is flashing the D7 LED, showing it is doing inference; if I comment out that section in particle_command_responder.cpp, the LED stops blinking. It doesn't look like it is recognizing any spoken words, though.

I am using the Adafruit electret microphone. If I run the FFT demo from the ADDAC library, it shows it can read audio from the microphone fine. I have the output from the microphone hooked to A0, which seems to match particle_audio_provider.

One oddity is that no messages are output to the serial monitor; if I run the TF hello_world example, I do see messages.

Is there anything obvious I should try? I am using a Xenon and DeviceOS 1.4.2, which I updated from the Particle CLI. I am working in Particle Workbench, doing local compiles and local flashing over USB.

Hey @Robotastic, thanks for posting and welcome to the Particle community! Sorry about the trouble; as you've seen, inferencing itself is working fine, but there's a bug in the audio pre-processing that I recently found and am working to fix. I'm hoping to get it working and pushed in the next few days. I'm also wrapping up the magic_wand accelerometer demo right now and hoping to push that soon. Thanks for your patience!
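One side note on the missing serial output, separate from the pre-processing bug: on Particle devices, anything printed before the USB serial monitor attaches is simply lost, so it can help to gate setup() on the connection. A minimal sketch of the pattern (the 10-second timeout is arbitrary):

```cpp
#include "Particle.h"

void setup() {
  Serial.begin(9600);
  // Wait (up to 10 s) for the host to open the USB serial port, so
  // early log lines from the example aren't dropped.
  waitFor(Serial.isConnected, 10000);
  Serial.println("micro_speech starting...");
}

void loop() {
}
```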

No problem - thanks for your work on it! The magic wand demo could be a lot of fun to integrate with IFTTT to control lights.

If you are able to add a few details on how you set the gain on the Adafruit Mic board, that would be awesome too. Thanks!

Is there any way to use a PDM mic with the Xenon to do the recording? It looks like the nRF52840 supports it, and Adafruit has extended their Arduino port for it.

I am looking to build a Feather board with a Mic on it and using PDM would be a bit easier.

It's possible, but we don't currently have a PDM library that works with the nRF52840. I started porting one, but set it aside for a bit to focus on getting the TFLite examples fully supported. Right now, I'm using the electret mic because @rickkas7 has created a library that makes it easier to work with than starting from scratch with a new library and a PDM mic.
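For anyone itching to experiment before an official library lands, here's a rough sketch of what a capture loop might look like using Nordic's nrfx PDM driver. Big caveats: I haven't verified that nrfx_pdm.h is reachable from an application build on DeviceOS, and the GPIO numbers are placeholders for whatever your wiring uses:

```cpp
#include "Particle.h"
#include "nrfx_pdm.h"  // Nordic nrfx driver; availability on DeviceOS unverified

// Placeholder nRF GPIO numbers for the PDM clock and data lines.
static const uint8_t kPdmClkPin = 32;
static const uint8_t kPdmDinPin = 33;

// Double-buffered 16-bit PCM samples, filled by the PDM peripheral via DMA.
static int16_t pdm_buffers[2][256];
static volatile int next_buffer = 0;

static void pdm_event_handler(nrfx_pdm_evt_t const* evt) {
  if (evt->buffer_requested) {
    // Hand the driver the next buffer to fill.
    nrfx_pdm_buffer_set(pdm_buffers[next_buffer], 256);
    next_buffer = (next_buffer + 1) % 2;
  }
  if (evt->buffer_released != nullptr) {
    // evt->buffer_released now holds 256 samples; this is where they
    // would be copied into the micro_speech audio provider's ring buffer.
  }
}

void setup() {
  nrfx_pdm_config_t config = NRFX_PDM_DEFAULT_CONFIG(kPdmClkPin, kPdmDinPin);
  if (nrfx_pdm_init(&config, pdm_event_handler) == NRFX_SUCCESS) {
    nrfx_pdm_start();  // capture runs in the background from here on
  }
}

void loop() {
}
```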

First of all, thanks for all your efforts to create the TensorFlow Lite port for Particle. We want to use this library to execute an LSTM model, but we have run into some issues running this kind of model on microcontrollers. Do you know whether it is possible to run one on a Particle microcontroller using your library?
Thanks for your help.

Hello Brandon,

I am wondering if the TensorFlowLite Library is still available. When I try to include it in the Web IDE, it just says “Loading”.

Is it still active?
Thanks,
Mike