Slow OTA updates with threading on Photon


My OTA updates to my Photons have become particularly slow (around 4 minutes from the started event to the hash event). I suspect, from reading around, that it may be caused by some while loops that are called quite often. Every loop cycle the device measures current and temperature, and those functions use while loops to sample over a set period (e.g. 1 second):

while (count < currentsamplenumber) { /* take a sample, count++ */ }

I have SYSTEM_THREAD(ENABLED) because having some user firmware run while the device is offline is desirable. I’ve tried making additional threads as well, but I think that has made things worse. I think the suggested approach would be to stop sampling when an incoming OTA is detected? Would that be the correct approach, and how would I go about doing it? Thanks for any help.

Also, side note: is there a recommended delay to have in our loop functions? Or any at all?


Your code should be as non-blocking as possible.
Long running loops are bound to impact the performance of the cloud connection.
But if you can’t avoid them, try adding Particle.process(), which allows for extra cloud processing time.
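For example, applied to the sampling loop from the question, it could look like the sketch below. The ParticleStub and fakeRead() are only stand-ins for Device OS (Particle.h) and analogRead() so the sketch compiles off-device; on a Photon you would use the real Particle.process() and your actual sensor read.

```cpp
#include <cassert>

// Stand-in for the Particle object so this sketch compiles off-device.
// On the device, Particle.process() yields time to the cloud connection.
struct ParticleStub {
    int processCalls = 0;
    void process() { ++processCalls; }
} Particle;

static int fakeRead() { return 42; }  // stand-in for analogRead(A0)

const int currentsamplenumber = 100;  // variable name from the question

long sampleCurrent() {
    long sum = 0;
    int count = 0;
    while (count < currentsamplenumber) {
        sum += fakeRead();
        ++count;
        Particle.process();  // give the cloud connection some CPU each pass
    }
    return sum / currentsamplenumber;  // average reading
}
```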

While SYSTEM_THREAD(ENABLED) is my preferred mode of operation too, what you give as the reason for it can also be achieved with the correct SYSTEM_MODE().

The recommendation is no delay() when possible.

But if your code can’t be streamlined, subscribing to the firmware_update_pending event is probably what you want.
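A sketch of that pattern follows. System.on(), System.disableUpdates() and System.enableUpdates() are real Device OS calls; the SystemStub below only stands in for them so the sketch compiles off-device, and samplingAllowed is a made-up flag for this example.

```cpp
#include <cassert>

// Minimal stand-ins for the Device OS system-event API (Particle.h).
enum system_event_t { firmware_update_pending };
typedef void (*event_handler_t)(system_event_t, int);

struct SystemStub {
    bool updatesEnabled = true;
    event_handler_t handler = nullptr;
    void on(system_event_t, event_handler_t h) { handler = h; }
    void disableUpdates() { updatesEnabled = false; }
    void enableUpdates()  { updatesEnabled = true; }
} System;

bool samplingAllowed = true;  // made-up flag; loop() checks it before sampling

// Fired by Device OS when an OTA is waiting while updates are disabled.
void onUpdatePending(system_event_t event, int param) {
    samplingAllowed = false;   // stop entering the sampling while loop
    System.enableUpdates();    // let the pending OTA start right away
}

void setup() {
    System.disableUpdates();   // hold OTAs off while we are busy sampling
    System.on(firmware_update_pending, onUpdatePending);
}
```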


Awesome! That’s really helpful and pretty much exactly what I needed. Thanks a lot.

In general though, what counts as blocking? I think I know that while and for loops would. Would something like a large if statement block cloud connectivity? In addition, does having a large amount of blocking code slow down API calls to cloud functions? (Which I think may be happening in my situation.) Perhaps taking your suggestion to add Particle.process() here and there might improve performance.


There is no hard limit, but anything that does not execute in just a couple of milliseconds can be considered blocking.
And in single-threaded mode, anything that prevents loop() from finishing within a few ms directly impacts the cloud’s responsiveness. With multi-threaded mode things are not as straightforward, but actions that need to run on the application thread (like Particle.function() and Particle.subscribe() callbacks) will be impacted.
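One common way to keep loop() within a few ms is to take one sample per pass instead of blocking for the whole 1-second window. A sketch of that idea, under the assumption of a fixed sample count and spacing (millis() is stubbed with std::chrono here so it compiles off-device):

```cpp
#include <chrono>

// Stand-in for the device's millis() so this compiles off-device.
static unsigned long millis() {
    using namespace std::chrono;
    static const auto t0 = steady_clock::now();
    return (unsigned long)duration_cast<milliseconds>(
        steady_clock::now() - t0).count();
}

// Non-blocking sampler: call update() once per loop() pass; each call
// returns quickly, so loop() itself stays within a few milliseconds.
struct Sampler {
    int target;                // samples that make up one window
    unsigned long intervalMs;  // spacing between samples
    int count = 0;
    long sum = 0;
    unsigned long last = 0;

    Sampler(int target, unsigned long intervalMs)
        : target(target), intervalMs(intervalMs) {}

    // Returns true once the window is complete.
    bool update(int reading) {
        if (count < target && millis() - last >= intervalMs) {
            last = millis();
            sum += reading;
            ++count;
        }
        return count >= target;
    }

    double average() const { return count ? (double)sum / count : 0.0; }
};
```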


I’m seeing the exact same thing here, running a 2000+ line project using SYSTEM_THREAD(ENABLED). The OTA updates are slowed for me also.

I learned that calling Particle.process() in functions that can take more than a few milliseconds is a good idea.

Is there any negative impact in calling Particle.process() multiple times?




Alright, looks like putting in some Particle.process() calls is a good idea. I forgot to ask, though: what is the difference between firmware_update and firmware_update_pending in terms of the handler? I’m using just firmware_update at the moment and it seems to do what I want, but I’m not sure about the differences.

Also, with regard to multi-threading, the way I thought of it (and it’s not very educated) was that the system firmware (like cloud and networking) would run in between function calls or something like that. E.g.:

void loop() { function1(); function2(); }

So I thought that even though the loop function may be quite long and does not actually execute within a few milliseconds, cloud handling would still happen, in between function1() and function2() for example, or even in between lines of code. Forgive me if I have no idea what I’m talking about.


For me, the OTA update happens in between my functions running when in threaded mode.


The former tells you that there is an ongoing update, while the latter tells you that there is an update about to start.
But if you look at the docs I’ve linked, you’ll get a more elaborate description of the latter, also with regard to disabled updates - in which case the former wouldn’t fire at all.

It’s a bit more complicated than that. FreeRTOS assigns 1 ms timeslices to each thread and does not care about function boundaries when it takes control of the µC to let the next thread have its share. But there are instances where thread switching is not permitted (e.g. during single-threaded or atomic blocks, or when shared resources, like the UART, are occupied by one thread), and hence one thread can hog the µC.
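For reference, such a no-switch region looks like this in user firmware. SINGLE_THREADED_BLOCK() is a real Device OS macro; it is stubbed as a plain scope below so the sketch compiles off-device, and sharedCounter is a made-up example resource.

```cpp
#include <cassert>

// Stub: on the Photon this macro (from Particle.h) suspends thread
// switching for the enclosed scope; here it is just a plain scope.
#define SINGLE_THREADED_BLOCK() if (true)

volatile int sharedCounter = 0;  // example resource shared between threads

void bumpCounter() {
    SINGLE_THREADED_BLOCK() {
        // On-device, no thread switch can happen in here, so this
        // read-modify-write cannot be interrupted by another thread.
        // Keep such blocks short: while inside, the system thread
        // (and the OTA transfer) gets no CPU time.
        sharedCounter = sharedCounter + 1;
    }
}
```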