Machine Learning on Photon?


#1

Has anyone played with running machine learning models locally on a Photon?

I ask because we are looking to run trained models at the edge for human activity recognition. We collect the data through RedBear now, and we can train the model in the cloud using TensorFlow, but once trained we want to run it locally.

We’re new to this and not even sure if it’s possible. Interested in anything anyone has learned.

Aron


#2

Likely not possible.

While some people have gotten TensorFlow models small enough to run on a Raspberry Pi, the Photon has a number of things running against it:

  • Very limited RAM (about 60 Kbytes free)
  • No hardware floating point
  • No GPU

I won’t say that it’s impossible, but it’s not very practical on the Photon.
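To put the RAM limit in perspective, here's a rough sketch (the matrix size is purely illustrative, not from any real model) showing how quickly static weight buffers eat into that ~60 KB; System.freeMemory() is the standard Device OS call for checking what's left at runtime:

```cpp
// Illustrative only: how fast static buffers consume the Photon's free RAM.
#include "Particle.h"

// A modest 100x100 float weight matrix already costs 40 KB -
// roughly two thirds of the free RAM on a Photon.
static float weights[100][100];

void setup() {
    Serial.begin(9600);
    weights[0][0] = 1.0f;   // touch the array so it isn't optimized away
}

void loop() {
    // System.freeMemory() reports the free heap at runtime.
    Serial.printlnf("Free RAM: %lu bytes (weights alone use %u bytes)",
                    (unsigned long)System.freeMemory(), (unsigned)sizeof(weights));
    delay(5000);
}
```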


#3

“Machine learning” is a very wide set of functions - what exactly do you have in mind? TensorFlow suggests you are running some sort of deep neural network (NNW)? Yes, I am running a “machine learning” algorithm on the P1 (not TensorFlow based), but as @rickkas7 points out, RAM is very limited (in practice much less than the 60K he points out). You can, however, connect an SD card fairly easily and swap vector sets in and out, at the cost of overall processing speed.


#4

Thanks joost! Yeah, a neural network. We’re trying to predict wake-up events via leg movement/actigraphy. We’re starting to use TensorFlow to train a model and aren’t far enough along to know the number of nodes and layers, but I was looking ahead to see how we might run this on our RedBear Duos, followed by the Argons/Borons.

First step is to prove it’s possible/accurate though. I may ping you to better understand what you were able to achieve.
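For illustration only - the 50 Hz rate, the 2-second window, and the readAccelMagnitude() helper below are made-up placeholders, not details from this thread - this is roughly the shape of the on-device buffering an actigraphy model would need before it can classify anything:

```cpp
// Hypothetical sketch: buffering a window of accelerometer samples for
// activity/actigraphy classification. Sample rate, window length and the
// sensor read are placeholders.
#include "Particle.h"

const size_t WINDOW = 100;              // 2 s at 50 Hz
float window[WINDOW];
size_t count = 0;
unsigned long lastSample = 0;

float readAccelMagnitude() {
    // Placeholder: replace with the real accelerometer driver.
    return analogRead(A0) / 4095.0f;
}

void classifyWindow(const float *w, size_t n) {
    // Placeholder: this is where the trained model (or a feature
    // extractor feeding it) would eventually run on-device.
}

void setup() {
    Serial.begin(9600);
}

void loop() {
    if (millis() - lastSample >= 20) {  // ~50 Hz
        lastSample = millis();
        window[count++] = readAccelMagnitude();
        if (count == WINDOW) {
            classifyWindow(window, WINDOW);
            count = 0;
        }
    }
}
```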


#5

@aronsemle I started modelling a sensor system (sorry, need to stay vague; company stuff, you know) with Python and the common Python libraries for machine learning, to understand if the problem I faced could be modelled and automated at all. After that proved to work, I removed all the Python libraries and wrote the actual machine learning code myself, with the objective of writing it simply enough that it could be transferred to a small target. Once that was done, I moved it all over to the target (Particle P1) in C.

I think this is the standard “development plan” most will go through. Not that it was easy and straightforward, but it has the benefit of a) proving the theory without getting bogged down in low-level details, and b) optimizing the actual machine learning methods for small targets once you know what is really needed, without bringing along a lot of stuff you don’t need.
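As a rough illustration of that last step (not my actual algorithm - the layer sizes and ReLU activation are just placeholders), a dependency-free forward pass ports to the P1 in a few dozen lines of plain C/C++:

```cpp
// Minimal, library-free feed-forward pass: the kind of code that moves
// easily from a Python prototype to a Particle P1. Sizes are illustrative;
// the weights would come from the trained model.
#include <stddef.h>

// out[j] = relu(bias[j] + sum_i in[i] * w[j*nIn + i])
void dense(const float *in, size_t nIn,
           const float *w, const float *bias,
           float *out, size_t nOut) {
    for (size_t j = 0; j < nOut; j++) {
        float acc = bias[j];
        for (size_t i = 0; i < nIn; i++) {
            acc += w[j * nIn + i] * in[i];
        }
        out[j] = acc > 0.0f ? acc : 0.0f;   // ReLU
    }
}

// Example: an 8 -> 4 -> 2 network, evaluated layer by layer.
void forward(const float in[8],
             const float w1[4 * 8], const float b1[4],
             const float w2[2 * 4], const float b2[2],
             float out[2]) {
    float hidden[4];
    dense(in, 8, w1, b1, hidden, 4);
    dense(hidden, 4, w2, b2, out, 2);
}
```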

Having said that, NNWs are a bit ‘heavy’ with all that matrix data, and that might require a bit of trickery. I too faced the memory limitations of the Particle device - ideally I needed 1 MB of RAM, but I realized I could swap the pieces of data I need in and out from an SD card. This slows down the entire analysis on my side, but not by so much that it became a problem. So that is perhaps something to think about: if you know your NNWs are going to work but you’re not sure you can deal with small slices of memory, set up a mock app that ‘sort of’ does what you need and see how tough it is to create or how slow it is to operate.
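If it helps, a stripped-down version of the swap idea looks roughly like this - the SdFat library, the A2 chip-select pin and the file layout are assumptions for illustration, not my actual setup:

```cpp
// Rough sketch: swap one layer's weights in from an SD card before use,
// and time how long the swap costs.
#include "Particle.h"
#include "SdFat.h"

SdFat sd;
const int SD_CS = A2;

// Reuse one buffer for whichever layer is currently being evaluated.
float layerBuf[2048];   // 8 KB instead of keeping every layer resident

bool loadLayer(const char *path, size_t nFloats) {
    File f = sd.open(path, O_READ);
    if (!f) return false;
    size_t bytes = nFloats * sizeof(float);
    bool ok = (f.read(layerBuf, bytes) == (int)bytes);
    f.close();
    return ok;
}

void setup() {
    Serial.begin(9600);
    if (!sd.begin(SD_CS)) {
        Serial.println("SD init failed");
    }
}

void loop() {
    unsigned long t0 = millis();
    if (loadLayer("layer1.bin", 2048)) {
        // ... run the layer-1 computation against layerBuf here ...
    }
    Serial.printlnf("layer swap took %lu ms", millis() - t0);
    delay(5000);
}
```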

Good luck!


#6

I am waiting for my mesh devices. What I have been doing is making a machine learning curriculum for high school students. See the website of examples at https://www.rocksetta.com/tensorflowjs/

Since TensorFlow.js uses JavaScript, it works fairly well on an RPi3 and on mobile devices. Try https://hpssjellis.github.io/face-api.js-for-beginners/ on your cell phone. Whereas the Photon and Mesh devices probably can’t do much machine learning on their own, with a websocket or a database you could probably do a fair bit of machine learning data collection, with feedback.
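As a sketch of that data-collection-with-feedback idea (the event names and payload format are invented; Particle.publish()/Particle.subscribe() are the standard Device OS calls):

```cpp
// Sketch of a Photon acting as an ML data-collection node with feedback.
#include "Particle.h"

void onFeedback(const char *event, const char *data) {
    // e.g. a label or a prediction sent back from the cloud-side model
    Serial.printlnf("feedback: %s", data);
}

void setup() {
    Serial.begin(9600);
    Particle.subscribe("ml/feedback", onFeedback);
}

void loop() {
    int raw = analogRead(A0);                        // stand-in for a real sensor
    Particle.publish("ml/sample", String(raw), PRIVATE);
    delay(10000);                                    // stay well under publish rate limits
}
```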

Will be interesting to see where all this goes.


#7

@bsatrom

So this has taken a few years.

A toy RC car converted to WiFi using a Particle.io Photon, with a websocket for a fast connection to an attached cell phone running its webcam in the browser, using TensorFlow.js (a machine learning JavaScript library) to run PoseNet to identify people’s poses and control the car.

The right knee location determines the car’s turning direction and speed, while holding both hands near your shoulders makes it go backwards.
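The Photon side of something like this can stay very simple. The sketch below is not the actual project code - it uses a plain TCPServer rather than the websocket library, and the command format and pins are invented - but it shows the idea of mapping pose-derived commands to motor PWM:

```cpp
// Hypothetical Photon-side sketch: receive "steer,throttle" pairs over TCP
// and map them to PWM outputs. Pins and protocol are made up.
#include "Particle.h"

TCPServer server(8080);
TCPClient client;

const int STEER_PIN = D0;     // PWM-capable pins on the Photon
const int DRIVE_PIN = D1;

void setup() {
    pinMode(STEER_PIN, OUTPUT);
    pinMode(DRIVE_PIN, OUTPUT);
    server.begin();
}

void loop() {
    if (!client.connected()) {
        client = server.available();
        return;
    }
    if (client.available()) {
        // Expected line: "<steer 0-255>,<throttle 0-255>\n"
        String line = client.readStringUntil('\n');
        int comma = line.indexOf(',');
        if (comma > 0) {
            int steer = constrain(line.substring(0, comma).toInt(), 0, 255);
            int drive = constrain(line.substring(comma + 1).toInt(), 0, 255);
            analogWrite(STEER_PIN, steer);   // e.g. knee position -> steering
            analogWrite(DRIVE_PIN, drive);   // e.g. hands up -> reverse/throttle
        }
    }
}
```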

It also has a bad attempt at using Speech Commands.

Cool, eh? I just got it working; it has several things that desperately need tweaking.


#8

@rocksetta this is very cool!

Have you looked into any of the TensorFlow Lite Micro or uTensor stuff lately? I think we’re at the point with some of the newer Cortex-M4 devices where it’s becoming possible to perform inference on MCUs for some classes of models. I’m currently working on a port of TF Lite Micro to the Argon and hope to have something up and running in a few weeks.
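For anyone curious what that looks like, the inference skeleton in TF Lite Micro is roughly as follows. The API has shifted between releases, and g_model_data and the arena size are placeholders; this just follows the shape of the library's hello_world example:

```cpp
// Rough skeleton of TensorFlow Lite Micro inference on a Cortex-M4 target.
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // flatbuffer, e.g. from `xxd -i model.tflite`

constexpr int kArenaSize = 16 * 1024;        // working memory for tensors
static uint8_t tensor_arena[kArenaSize];

void runInference(const float *features, int n, float *result, int m) {
    static const tflite::Model *model = tflite::GetModel(g_model_data);
    static tflite::AllOpsResolver resolver;  // or a MicroMutableOpResolver with only the ops you need
    static tflite::MicroInterpreter interpreter(model, resolver,
                                                tensor_arena, kArenaSize);
    static bool allocated = (interpreter.AllocateTensors() == kTfLiteOk);
    if (!allocated) return;

    TfLiteTensor *input = interpreter.input(0);
    for (int i = 0; i < n; i++) input->data.f[i] = features[i];

    if (interpreter.Invoke() == kTfLiteOk) {
        TfLiteTensor *output = interpreter.output(0);
        for (int j = 0; j < m; j++) result[j] = output->data.f[j];
    }
}
```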