Understanding local compile output

I am locally compiling a project and suspect I am going over the maximum code size when I include all the necessary libraries. First, to review:

  1. The maximum user code size, which must include the core firmware, is 108KB (0x1B000).
  2. The maximum RAM size is 20KB total.
  3. The present compile/dfu-util setup (as per the core firmware instructions) does not place any “const” data or PROGMEM-declared data in the external flash (not yet, at least).

The output of my compile (not all libraries included) shows:

arm-none-eabi-size --format=berkeley core-firmware.elf
   text    data     bss     dec     hex filename
  97932    3024   13044  114000   1bd50 core-firmware.elf

These numbers represent:

The code size (text), initialized data (data), and zero-initialized global or static variables (bss), all in decimal. The sum of text+data+bss is shown in decimal (dec) and hexadecimal (hex).

If I understand correctly, text+data = flash used and bss = RAM used. Correct? So in this case, there are 100,956 (0x18A5C) bytes of flash and 13,044 (0x32F4) bytes of RAM used. :question:


Spark team… comments please :smile:

I found this link to be good:

Basically text+data=flash, but data+bss=RAM since everything in data gets copied into RAM at startup.

I thought that const globals counted only as text, not as data, obviating most of the need for PROGMEM, but I could be wrong.


bko, thanks for the link. I forgot about initialized vars! What concerns me is that const vars can take a lot of room, and there is limited flash (108KB) for code. Ideally that data could be moved to external flash, but there is no mechanism for that. Since there is no exposed functionality for working with the external flash, the alternatives are something like SD or, like what timb is working on, external FRAM. Either way, it makes the stability of the core-firmware SPI code critical.

Perhaps the Spark team can clarify whether const globals need to go to RAM, because that makes a huge difference.

I’m hoping @zachary or @satishgn can provide some feedback on this one, I’m less familiar with what the exact heap/stack/build numbers look like firmware wise.

Working on a full answer to this right now, but it will take me a while. Overview: (1) try defining USE_ONLY_PANIC, because @david_s5’s debugging changes recently made the text segment about 20KB bigger, and (2) there are external flash usage functions I posted in the community recently.

Zachary, I had not noticed; does the latest local makefile default to USE_ONLY_PANIC? I will check out your posts on the external flash before commenting further. :smile:

Make Your Binary Smaller By Removing Debug Logging

Building locally, I just went from

   text	   data	    bss	    dec	    hex	filename
  90156	   2968	  11484	 104608	  198a0	core-firmware.elf

to

   text	   data	    bss	    dec	    hex	filename
  68272	   2912	  11484	  82668	  142ec	core-firmware.elf

simply by uncommenting line 16, which defines USE_ONLY_PANIC, in core-common-lib/SPARK_Firmware_Driver/inc/config.h:
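For reference, the change is a one-liner (the exact line number and surrounding comments may differ between firmware versions; this is a sketch of the relevant define, not a verbatim copy of config.h):

```c
/* core-common-lib/SPARK_Firmware_Driver/inc/config.h, near line 16:
   the define ships commented out; uncomment it to strip the
   debug-logging strings while keeping the panic codes */
#define USE_ONLY_PANIC
```

Rebuild cleanly afterwards; in the output above, that single define shaved roughly 21KB (90,156 − 68,272 = 21,884 bytes) of text.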

There was a discussion last week in a community thread about making this more easily accessible from the command line, leading to this issue in the backlog:

Store Data On External Flash

Check out this thread with functions for reading from the external flash chip the same way we read keys and factory reset firmware:

Additionally in that thread is a discussion of how to use the CC3000 EEPROM.

@mdma was working on a library for :spark: SD/EEPROM access; his latest post in that thread, from a couple of weeks ago, says he was taking a break from it.

Also, in the libraries category, @mattande posted his NVM library that supports basic wear-leveling and CRC checking:

Lots of ways to read and write over 1.5MB of available flash on the Core!