Latest firmware released to Spark Build

I know your app is kind of intense, so perhaps you are running into an issue where it’s running out of memory (flash or RAM). I’m guessing, since you are seeing the SOS, that debugging is turned on by default; it adds about 20 KB of flash usage. In the local build environment you can disable the debugging and see if that helps. If you go this route, grab the compile-server2 branch for a proper comparison.

Maybe @david_s5 can shed some light on how to debug the code via Serial1 or Serial. I think you have to add a debug routine to your app that will enable it… then you should be able to tell what’s happening.


We just bumped the compile-server2 branches up and deployed to the build server, so the changes through the end of February are now available in :spark: Build.


@zachary So to be clear, we have to change our sketches to get them to compile with the new firmware, right? Or can we just recompile our saved code without changes and then flash to get the new firmware?

@zachary We need to stop the WD (watchdog) from killing the PANIC. It is a simple fix: kick the dog twice in the panic loop, so you get two full panic cycles, then let the WD clobber the thing.
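In other words, something like this (a minimal sketch of the idea; the function and macro names are assumptions, not the actual core-firmware source):

```cpp
// Feed the watchdog during the first two panic blink cycles, then stop
// feeding it so the watchdog reset finally fires. Names are illustrative.
extern void blink_panic_code(void);  // flashes the SOS pattern once
extern void kick_watchdog(void);     // reloads the IWDG counter

void panic_loop(void)
{
    for (int cycle = 0; ; ++cycle) {
        blink_panic_code();
        if (cycle < 2) {
            kick_watchdog();         // allow two full panic cycles
        }
        // after the second cycle: no more kicks, the WD clobbers the thing
    }
}
```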

@luz Most likely it is out of RAM. The buffers for TCP/UDP were increased to 512 bytes each.

If you can build locally, build with debugging. It will disable the watchdog and you will see the SOS.
If you add the debug_output code shown above, you can see the panic printed out on the TX/RX (3.3 V) pins.
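For reference, the hook in question looks roughly like this (a minimal sketch; the exact name debug_output_ is an assumption based on david_s5's debug builds, so check the actual branch):

```cpp
// Route the firmware's debug/panic output to Serial1 on the TX/RX pins.
// The hook name and signature here are assumptions, not verbatim source.
void debug_output_(const char *p)
{
    static bool serial1Started = false;
    if (!serial1Started) {
        serial1Started = true;
        Serial1.begin(115200);   // 3.3 V TTL serial on the TX/RX pins
    }
    Serial1.print(p);
}
```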

@david_s5 Was there anything new of yours included in this latest firmware update they just pushed out to the Web IDE today?

I am not sure what got pushed. There is a really nice graph that shows the merge activity, but for the life of me I cannot find it on GitHub.

@RWB EOSM: https://github.com/spark/core-firmware/network (EOSM = End Of Senior Moment)

@RWB Yup, always have to change something to get new code.

@david_s5 Pull request to prevent the watchdog from killing panic?

@zachary PR done. I did it online in GH; not tested because I do not have time at the moment. If it compiles, it should work. :smile:

Yup, saw it. Thanks! Cool feature, that online editing and pull-request flow, eh? Love it.

You were right, the problem was out-of-memory! Thanks for the hint!

I reduced the number of LEDs to 120 (instead of 240), which requires 320 bytes less RAM, and now the app works again.

However, I’m surprised RAM should be so tight, given that there’s a total of 20 KB of RAM and my app’s usage is only around 1 KB, plus maybe another 700 bytes in case the const static byte[] with the dot-matrix font is still copied to RAM (I don’t know; using the Web IDE for now, I can’t see what the compiler really does).

Is the heap size particularly restricted? So far I allocate the LED buffer (900 bytes) on the heap in my WS2812 lib for flexibility, but maybe I should try using a static array instead…
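To make the comparison concrete, here’s a minimal sketch of the two strategies, using the 900-byte figure from above (the variable names are illustrative, not the actual lib code):

```cpp
#include <stdint.h>
#include <stdlib.h>

// a) heap allocation (current approach): flexible at runtime, but the
//    buffer competes with the stack and the cloud handshake for the
//    RAM left over after .data and .bss
uint8_t *ledBufferHeap = (uint8_t *)malloc(900);

// b) static allocation (the alternative tried in the update below):
//    fixed at compile time and counted in .bss, so it shows up in the
//    linker's size output up front
static uint8_t ledBufferStatic[900];
```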

[Update: a static array did not really make a difference; it still crashes, a bit differently than before, but still unusable.]

Yes, it was out of RAM. Reducing the number of LEDs reduced the frame buffer by ~300 bytes and made the app start again. However, I wonder why the mere 1 KB of RAM my app uses with all LEDs enabled (static + heap combined) already exhausts the Core, with its 20 KB of RAM?

This is a good problem to understand, because in the very recent past… we could globally allocate 1 KB buffers, no problem. We were even locally/dynamically allocating up to 10 KB buffers without a crash.

I know the cloud handshake takes up a large amount of dynamically allocated RAM… so I’m guessing that, between then and now, something else is globally/statically taking up something like 9 KB of RAM. :eyes:

We need a way to remove the debugging code via the Web IDE :wink: @satishgn @Dave Maybe another #include option? Perhaps debug should be removed by default, and you #include it to have it compiled in.

@luz can you build locally and remove the debugging options?

See here:

Hi @luz and @BDub

It would be very interesting to see the numbers coming out of gcc after a local compile. There can be a lot of flash used by the new firmware if you don't throw the right panic switch, but there was only a very modest increase in RAM used.

RAM usage is data + bss, since the data segment holds variables with initializer values (copied from flash at startup) and bss holds the zero-initialized ones.
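For illustration, roughly where different kinds of variables land with GCC on the STM32 (standard toolchain behavior; the names are made up):

```cpp
#include <stdint.h>

int counter = 42;        // .data: uses RAM, plus flash for the initializer
int scratch[64];         // .bss:  uses RAM only, zero-filled at startup
const uint8_t font[] =   // .rodata: stays in flash on the STM32, so a
    {0x7E, 0x81, 0x81};  //   const font table is not copied to RAM
```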

Debug should not be active on a RELEASE_BUILD

From the Web IDE:

../../core-common-lib/SPARK_Firmware_Driver/inc/config.h:12:2: warning: #warning "Defaulting to Release Build" [-Wcpp]

I am not sure if USE_ONLY_PANIC was defined in the lib build that the Web IDE is using.

I pushed a commit today that will ensure USE_ONLY_PANIC on release builds.
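Presumably something along these lines in config.h (an assumption based on the description, not the verbatim commit):

```cpp
// Force USE_ONLY_PANIC whenever the build is (or defaults to) a
// release build, keeping the full debug output out of the binary.
#ifdef RELEASE_BUILD
  #ifndef USE_ONLY_PANIC
    #define USE_ONLY_PANIC
  #endif
#endif
```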

All that being said, TCPClient and UDP are now using 512-byte buffers per object.
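Which matters for RAM because the buffers are per object, roughly like this (an illustrative layout, not the verbatim firmware source):

```cpp
#include <stdint.h>

// Each TCPClient/UDP instance now embeds its own 512-byte buffer, so
// every object you create costs about half a kilobyte of RAM.
class TCPClient {
    // ... other members ...
    uint8_t _buffer[512];   // per-object buffer, up from a smaller size
};
```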

So, with the latest master (as of this morning), I compiled my code locally with panic-only (USE_ONLY_PANIC) and got the following results:

   text    data     bss     dec     hex filename
 101404    3024   14328  118756   1cfe4 core-firmware.elf

I get the red flash of death after flashing green, and my code does not run. data + bss = 17,352 bytes. How can I tell if there is not enough RAM?
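For what it’s worth, one classic newlib-style trick to check headroom at runtime (a sketch, not an official Spark API) is to measure the gap between the top of the heap and the current stack pointer:

```cpp
#include <stdint.h>

extern "C" char *sbrk(int incr);   // newlib's heap-break call

uint32_t freeRam(void)
{
    char stackVar;                            // lives near the current SP
    return (uint32_t)(&stackVar - sbrk(0));   // stack pointer minus heap end
}
```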

OK, so far I was using the Web IDE for laziness-related reasons :slight_smile: I have so many different gcc-whatevers already installed for my zoo of gadgets that I was happy to use the Spark without yet another one…

Of course, it’s no problem to install the gcc-arm toolchain if I need to. I just did, and here’s the output of the messagetorch compilations:

a) number of LEDs tuned down such that it just works:

   text	   data	    bss	    dec	    hex	filename
  89948	   3032	  13040	 106020	  19e24	core-firmware.elf

b) original version that no longer works (immediately panics after starting):

   text	   data	    bss	    dec	    hex	filename
  89948	   3032	  13768	 106748	  1a0fc	core-firmware.elf

As expected, the bss is smaller in the working version by exactly the amount I reduced the buffers.
Concretely: (a) gives data + bss = 3032 + 13040 = 16,072 bytes and works, while (b) gives 3032 + 13768 = 16,800 bytes and crashes, so the limit sits almost exactly at data + bss = 16 K (0x4000). I have no idea how the total 20 K are mapped, but hitting a power-of-two boundary like 2^14 looks somehow plausible as a limit…

To actually get a number for how much RAM my particular app is using, I did a third test:

c) a completely empty app (just setup() and loop(), both doing nothing):

   text	   data	    bss	    dec	    hex	filename
  81476	   2620	  11496	  95592	  17568	core-firmware.elf

Which means that my app needs (3032 + 13768) − (2620 + 11496) = 2684 bytes of RAM. Apparently, that’s too much :frowning:


luz, an empty app should compile to about 67 K or so. You may have debugging enabled. I am going to test the 16 K boundary idea to see if I see the same thing.

UPDATE: I believe you’ve hit it. If my RAM use goes over 16 KB, I get a panic; anything below works just fine.

Yo Spark team… what can be done here?


Yes, I had USE_ONLY_PANIC commented out.
It makes a difference of about 18 K for the code (text: 81,476 → 63,784 bytes), but almost nothing for RAM. Here’s the same empty app with USE_ONLY_PANIC enabled:

   text	   data	    bss	    dec	    hex	filename
  63784	   2564	  11496	  77844	  13014	core-firmware.elf