Has anyone played with running machine learning models locally on a Photon?
I ask because we are looking to run trained models at the edge for human activity recognition. We collect the data through RedBear now, and we can train the model in the cloud using TensorFlow, but once trained we want to run it locally.
We’re new to this and not even sure if it’s possible. Interested in any learning anyone has.
“machine learning” is a very wide set of functions - what exactly do you have in mind? TensorFlow suggests you are running some sort of deep NNW? Yes, I am running a “machine learning” algorithm on the P1 (not TensorFlow based), but as @rickkas7 points out, RAM is very limited (in practice much less than the 60K he points out). You can, however, connect an SD card fairly easily and swap vector sets at the cost of overall processing speed.
Thanks joost! Yeah, neural network. We’re trying to predict wake-up events via leg movement/actigraphy. We’re starting to use TensorFlow to train a model and aren’t far enough along to know the number of nodes and layers, but I was looking ahead to see how we might run this on our RedBear Duos, followed by the Argons/Borons.
First step is to prove it’s possible/accurate though. I may ping you to better understand what you were able to achieve.
@aronsemle I started modelling a sensor system (sorry, need to stay vague; company stuff, you know) in Python with the common Python libraries for machine learning, to understand whether the problem I faced could be modelled and automated at all. After that proved to work, I removed all the Python libraries and wrote the actual machine learning code myself, with the objective of writing it simply enough that it could be transferred to a small target. Once that was done, I moved it all over to the target (Particle P1) in C.
I think this is the standard “development plan” most will go through. It wasn’t easy or straightforward, but it has two benefits: a) you prove the theory without getting bogged down in low-level details, and b) you can optimize the actual machine learning methods for small targets once you know what is really needed, without bringing along a lot of stuff you don’t need.
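To make the “write it yourself simply so it transfers to a small target” step concrete, here is a minimal sketch of what such hand-rolled inference code might look like: one fully-connected layer with ReLU, written with plain arrays and loops only. All sizes and names are illustrative, not from the actual project.

```cpp
#include <cstddef>

// Minimal fully-connected layer: out = relu(W * in + b).
// No library dependencies, no dynamic allocation, so the same
// code compiles unchanged for a desktop test harness or an MCU.
void dense_relu(const float* W, const float* b,
                const float* in, float* out,
                size_t n_in, size_t n_out) {
    for (size_t o = 0; o < n_out; ++o) {
        float acc = b[o];
        for (size_t i = 0; i < n_in; ++i) {
            acc += W[o * n_in + i] * in[i];  // row-major weights
        }
        out[o] = acc > 0.0f ? acc : 0.0f;    // ReLU activation
    }
}
```

A full network is then just a chain of such calls, with the weights exported from the cloud-trained TensorFlow model as plain float arrays.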
Having said that, NNWs are a bit ‘heavy’ with all that matrix data, and that might require a bit of trickery. I too faced the memory limitations of the Particle device - ideally I needed 1 MB of RAM, but I realized I could swap in/out the pieces of data I need from an SD card. This slows down the entire analysis on my side, but not so much that it became a problem. So that is perhaps something to think about: if you know your NNWs are going to work but you’re not sure you can deal with small slices of memory, set up a mock app that ‘sort of’ does what you need and see how tough it is to create or how slow it is to operate.
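The swap-in/swap-out idea described above might be sketched like this: keep only one block of weights in RAM at a time and stream each block from storage right before it is used. Here `fopen`/`fread` stand in for an SD library (such as SdFat) on the real device, and the file layout (raw float blocks stored back to back) is an assumption for illustration.

```cpp
#include <cstdio>
#include <cstddef>

// Load the block_index-th block of block_len floats from a raw
// weights file into buf. Returns false on any I/O failure.
bool load_weight_block(const char* path, size_t block_index,
                       float* buf, size_t block_len) {
    FILE* f = fopen(path, "rb");
    if (!f) return false;
    long offset = (long)(block_index * block_len * sizeof(float));
    bool ok = fseek(f, offset, SEEK_SET) == 0 &&
              fread(buf, sizeof(float), block_len, f) == block_len;
    fclose(f);
    return ok;
}
```

The inference loop would then call this once per layer, reusing a single buffer, trading SD read time for RAM exactly as described.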
I am waiting for my mesh devices. What I have been doing is making a machine learning curriculum for high school students. See my website of examples at https://www.rocksetta.com/tensorflowjs/
Since TensorFlow.js uses JavaScript, it works fairly well on the RPi 3 and on mobile devices. Try https://hpssjellis.github.io/face-api.js-for-beginners/ on your cell phone. Whereas the Photon and mesh devices probably can’t do much machine learning on their own, with a websocket or database you could probably do a fair bit of machine learning data collection, with feedback.
A toy RC car converted to WiFi using a Particle.io Photon, with a websocket for a fast connection to an attached cell phone. The phone runs its webcam in the browser and uses TensorFlow.js, a machine learning JavaScript library, to run PoseNet, which identifies people’s poses to control the car.
The right knee location determines the car’s turning direction and speed, while both hands held near your shoulders determine whether it should go backwards.
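The pose-to-command mapping described above might look something like the following sketch on the receiving end. The function, thresholds, and coordinate convention (keypoints normalised to [0, 1]) are all hypothetical reconstructions from the description, not the project’s actual code.

```cpp
#include <cmath>

struct DriveCmd {
    float steer;   // -1 = full left, +1 = full right
    float speed;   // 0 .. 1
    bool  reverse; // go backwards when hands are near shoulders
};

// Map PoseNet keypoints (normalised to [0, 1]) to a drive command:
// the right knee's horizontal offset from frame centre sets steering
// and speed; a hand raised to shoulder height selects reverse.
DriveCmd pose_to_drive(float kneeX, float handY, float shoulderY) {
    DriveCmd cmd;
    cmd.steer   = (kneeX - 0.5f) * 2.0f;                // centre = straight
    cmd.speed   = std::fabs(cmd.steer);                 // bigger lean, faster turn
    cmd.reverse = std::fabs(handY - shoulderY) < 0.1f;  // illustrative threshold
    return cmd;
}
```

On the car side, the Photon would just parse `steer`/`speed`/`reverse` from the websocket frame and drive the motors.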
Have you looked into any of the TensorFlow Lite Micro or uTensor stuff lately? I think we’re at the point with some of the newer Cortex-M4 devices where it’s becoming possible to perform inference on MCUs for some classes of models. I’m currently working on a port of TF Lite Micro to the Argon and hope to have something up and running in a few weeks.
Thanks @armor, I saw that as well. Interesting stuff. I was actually planning on porting the examples from those posts this week to see where we stand with a benchmark. Should be pretty straightforward. For the general Argon port, I’m working on a TF Lite Micro port as I think that will end up being the direction that the Merged TF-uTensor project ends up going. It’s slow thus far as I’m trying to fully grok the way that the TF source is built and structured, but I’m getting there.
I read that article also and thought it was pretty cool.
@bsatrom It will be very cool if you do get TensorFlow Lite up and running on the Particle devices. Thanks for spending the time trying to get it working.
Hey @armor, progress, but I don’t have a completed POC yet. The current issue is that there’s not enough APP_FLASH available for user firmware that includes the TensorFlow library. I’m working with the team on a few different approaches to address this, some stopgap and a few longer-term solutions.
Hey folks! Just wanted to drop into this thread and share that I’ve just published a Particle port of the TensorFlow Lite for Microcontrollers library! More details here: [Library] TensorFlowLite port is live!
@muneebr1 Thanks! The library itself works with Gen2 (Photon, Electron) and Gen3 (Argon, Boron, Xenon) devices, yes. Not all of the examples will work on the Photon, but a few do and I’ve noted those in the readme for the library.