@aronsemle I started modelling a sensor system (sorry, I need to stay vague; company stuff, you know) in Python with the common Python machine-learning libraries, to see whether the problem I faced could be modelled and automated at all. Once that proved to work, I removed all the Python libraries and wrote the actual machine-learning code myself, with the objective of keeping it simple enough to transfer to a small target. Once that was done, I ported it all to the target (a Particle P1) in C.
I think this is the standard “development plan” most people will go through. Not that it was easy or straightforward, but it has two benefits: a) you prove the theory without getting bogged down in low-level details, and b) once you know what is really needed, you can optimize the actual machine-learning methods for the small target without carrying along a lot of stuff you don’t need.
Having said that, NNs are a bit ‘heavy’ with all that matrix data, and that may require a bit of trickery. I too faced the memory limitations of the Particle device: ideally I needed 1 MB of RAM, but I realized I could swap the pieces of data I needed in and out from an SD card. This slows down the entire analysis on my side, but not by so much that it became a problem. So that is perhaps something to think about: if you know your NNs are going to work but you’re not sure you can deal with small slices of memory, set up a mock app that ‘sort of’ does what you need and see how tough it is to create or how slow it is to operate.