System thread and subscribe with multipart reply

I noticed strange behavior of the subscribe callback when SYSTEM_THREAD is enabled.

I use a webhook to send some configuration data back to the Electron after it publishes a “config” topic.
This works great as long as the configuration data are short enough to fit in one packet. When the data are longer and there are two reply packets (index 0 and index 1), things break. Each received packet triggers the callback, and it seems the second callback is called before the first one finishes. As a result, the data are corrupted about 90% of the time, with the content of the second packet overwriting the first. In the remaining 10% of cases the data arrive intact, so I assume this really depends on timing.

If SYSTEM_THREAD is disabled, the callbacks are called sequentially rather than concurrently, and everything seems to be OK. However, I cannot disable SYSTEM_THREAD because I need my loop() up and running even in the offline state.

I would very much appreciate any tips and tricks to overcome this issue. Any suggestions?

That will depend on how you deal with your data inside the handler.
There is only one buffer, but if you immediately pull a copy of the data and only work on that copy, you shouldn’t have problems.

But if you do and still see the same issue, then this might call for a bug report.
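The copy-on-entry pattern ScruffR describes might look something like this. On the Electron the subscribe handler has the signature `void handler(const char *event, const char *data)`, and the system owns the single receive buffer behind `data`; the buffer size and names below are illustrative assumptions, and the Particle-specific parts are only noted in comments so the sketch stays plain C++:

```cpp
#include <cstring>

// Hypothetical sketch: snapshot the subscribe payload before doing
// anything else, because the system's single receive buffer can be
// overwritten by the next packet while the handler is still running.

const size_t COPY_BUF_SIZE = 256;     // sized for the expected payload
char copyBuf[COPY_BUF_SIZE];          // buffer we own, not the system's
volatile bool dataReady = false;      // set by handler, cleared by loop()

void configHandler(const char *event, const char *data) {
    // First action in the handler: copy the payload out of the
    // shared receive buffer into memory we control.
    strncpy(copyBuf, data, COPY_BUF_SIZE - 1);
    copyBuf[COPY_BUF_SIZE - 1] = '\0';
    dataReady = true;
    // Any slow processing happens later, in loop(), on copyBuf only.
}
```

The key point is that the handler does nothing but the copy; parsing or acting on the data is deferred to loop(), which checks `dataReady`.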


Thx a lot ScruffR. That was it! Now I copy the data to a local buffer as soon as the subscribe callback is called. No more overruns. Great!


For future readers.

I recently stumbled upon the same issue in my code again. My conclusion is that whenever your Electron is busy with other important things, like polling inputs or reading sensors, it cannot reliably receive multipart replies from the cloud.

Even when you copy the data out of the buffer as the first operation in the handler, this does not work as expected, because the handler is called too late. By the time the handler code runs, the receive buffer already holds the last part of the multipart reply, so you end up processing that last part multiple times, once for each part of the reply. The workaround in my case was to refrain from any blocking actions whenever a multipart reply is expected.
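One way to avoid blocking in loop() is to replace delay()-style waits with an elapsed-time check, so each pass through loop() returns quickly and the system gets a chance to service each incoming reply part. This is a sketch of that pattern; the function name, interval, and the idea of passing in a Wiring-style millis() value are my own assumptions:

```cpp
#include <cstdint>

// Hypothetical non-blocking poll timer. Instead of delay(1000) between
// sensor reads -- which can starve event handling while a multipart
// reply is arriving -- loop() calls timeToPoll(millis()) each pass and
// only does the slow work when the interval has elapsed.

uint32_t lastPoll = 0;
const uint32_t POLL_INTERVAL_MS = 1000;

// Returns true when the poll interval has elapsed; `now` is the
// current millisecond counter. Unsigned subtraction handles wraparound.
bool timeToPoll(uint32_t now) {
    if (now - lastPoll >= POLL_INTERVAL_MS) {
        lastPoll = now;
        return true;
    }
    return false;
}
```

In loop() this becomes `if (timeToPoll(millis())) { readSensors(); }`, so loop() never blocks for a full second at a time.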

It’s hard to say if this kind of behavior is a feature or a bug :wink:


I believe this is fixed in 0.7.0: