Deep update leads to SOS Hard Fault on previously working code

Thanks for the feedback. I am aware of the garbled information occasionally appearing with Flashee. Whilst the Flashee libraries appear to be compromising my own code, I think it is related to their size. I have now built a Core totally separate from the cloud, running the Flashee wear-levelling libraries, to see if I can isolate particular problems such as string size and the speed of read/write operations. I look forward to mdma’s return, as Flashee is critical for those of us with embedded data-logging requirements.
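For reference, my isolation test looks roughly like this: a minimal sketch assuming the flashee-eeprom API from mdma’s library (Devices::createWearLevelErase() and the buffer-based read()/write() calls from its README); the 64-byte size is an arbitrary test value, and SYSTEM_MODE(MANUAL) is just one way to keep the Core off the cloud:

```cpp
#include "application.h"
#include "flashee-eeprom.h"
#include <string.h>

using namespace Flashee;

SYSTEM_MODE(MANUAL);   // keep this test Core off the cloud

FlashDevice* flash;

void setup() {
    Serial.begin(9600);
    // Wear-levelled region in the external flash, per the flashee README.
    flash = Devices::createWearLevelErase();
}

void loop() {
    // Time one write/read round trip of a fixed-size string
    // (64 bytes is an arbitrary test size).
    char out[64];
    char in[64];
    memset(out, 'A', sizeof(out) - 1);
    out[sizeof(out) - 1] = '\0';

    unsigned long t0 = millis();
    flash->write(out, 0, sizeof(out));
    flash->read(in, 0, sizeof(in));
    unsigned long t1 = millis();

    Serial.print("64-byte round trip took (ms): ");
    Serial.println(t1 - t0);
    delay(1000);
}
```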


Hi @Dave,

I’m the colleague of Henk (@nika8991). The publish behavior looks the same after the deep update. But Spark.function() and OTA are only possible with a time delay of about 5 seconds between Spark.connect() and the start of publishing. In our code, the order after Spark.connect() is: first publishing, then Spark.function(), and then the OTA.

Without that delay between Spark.connect() and the start of publishing, Spark.function() and OTA do not work. Before the deep update, that extra delay was not needed.

In fact, Spark.connected() comes too early: after Spark.connected() we assume that the Spark Cloud connection is fully established, but apparently it is not.

Our assumption is that the connection from the Spark Cloud to the Spark Core is still not fully established when Spark.connected() is reported. Publishing only uses the Spark Core-to-Spark Cloud direction, which is established. But Spark.function() and OTA need the other, not yet fully established direction (from the Spark Cloud to the Spark Core)…
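For illustration, the ordering and workaround look roughly like this. This is a minimal sketch assuming the Spark firmware API of that era (SYSTEM_MODE, Spark.connect(), Spark.connected(), Spark.process(), Spark.publish(), Spark.function()); the event name, function name, and handler are hypothetical placeholders:

```cpp
SYSTEM_MODE(SEMI_AUTOMATIC);   // we call Spark.connect() ourselves

// Placeholder handler for our Spark.function() registration.
int resetHandler(String args) {
    return 0;
}

void setup() {
    Spark.connect();

    // Wait until the firmware reports a cloud connection.
    while (!Spark.connected()) {
        Spark.process();
    }

    // Workaround after the deep update: without this ~5 s delay,
    // Spark.function() and OTA stop working.
    delay(5000);

    Spark.publish("startup");               // core-to-cloud: works either way
    Spark.function("reset", resetHandler);  // needs the cloud-to-core direction
}

void loop() {
    Spark.process();
}
```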

BTW: we notice that sometimes the first published messages are lost, even though we are sure we have sent them out. But this problem also existed before the deep update.

Regards.
Albert.

Hi @elnavdo,

I haven’t forgotten about you! We’re pushing out another big round of firmware improvements to the build IDE today, and I’m hoping we can see if they improve the hard-fault-after-connection issue we’re seeing here and on the local cloud.

Thanks,
David