What makes the firmware binary so huge?

As we know, around ~~94KB~~ 69KB of the available firmware space of 108KB is taken up by the basics, without any user code.

I’ve been trying to get the boilerplate down by eliminating library code that is unnecessary for my purposes - namely I2C and USART in their entirety - but my changes only managed to shave 2KB off the resulting binary. I can only assume that dead-code elimination (-fdce) was already taking care of most of it, because I thought the difference would be much more notable.

Any thoughts on what I could be doing more to minimize the existing boilerplate?

Hello @noora,

we have made good progress on this, and it will be available in the Web IDE once the team pushes the new firmware version.

If you are compiling locally, you can make the change in the linker script yourself and reduce RAM usage significantly.

See: https://github.com/spark/core-firmware/commit/73dc282c9daf71c43f8f212f1e7705a75d968a31
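If memory serves, the linked commit boils down to enabling newlib-nano, which swaps in much smaller printf/malloc implementations. A rough sketch of what that looks like in a makefile (see the commit itself for the exact change; the variable name here is illustrative):

```make
# Illustrative fragment, not the actual patch: link against newlib-nano
# (available in the GCC ARM Embedded toolchain, 4.7 and later).
LDFLAGS += --specs=nano.specs
```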

My understanding is that the core communication stack takes up the main bulk of the space.

The reduction work done by @satishgn is really incredible! :wink:

Yeah, I’m building locally. I’m following the master branch on all of the libraries; I assume all this reduction work will be merged there shortly?

am i missing something or is this just saving 2k by changing a compiler flag?

@noora, a little clarification is needed. The Core firmware (no tinker) takes less than 70KB (text = 66624) so I’m not sure how you got 94KB. RAM sits at 14,752 bytes (data + bss). This was on a local build, latest master, with an empty application.cpp file. :smile:

It’s a great feature provided by GCC to support more microcontrollers, and @satishgn did a great job of keeping up with all the new stuff going on and giving us more RAM with 1 extra line of code in the linker script!


We just ended our Sprint hangout and still have to figure out the minimum GCC version required for that magic to work.

If you are building locally, you should be able to pull in the branch updates, compile and see how it goes :slight_smile:

Alright, yeah, I now get 69KB with a clean compile. I guess my program was already up to 25KB… Damn scary, I barely have anything in there!

For giggles, here’s a toplist of the fattest functions in a current clean master build.
SHA1 is the winner, while AES encryption & decryption take the second and third place, respectively.

I believe the internal implementation of sprintf, _svprintf (iirc) would be the winner at around 2KB if it were used. In a clean build it doesn’t exist, since the base firmware never calls it and thus it gets eliminated by GCC.
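That elimination is easy to demonstrate on any host GCC, by the way (same flags apply to arm-none-eabi-gcc; note it relies on -ffunction-sections plus the linker’s --gc-sections, not -fdce alone):

```shell
# A function that is never called gets its own section, which the
# linker then discards, so it never reaches the final binary.
cat > demo.c <<'EOF'
int never_called(int x) { return x * 42; }
int main(void) { return 0; }
EOF
cc -ffunction-sections -Wl,--gc-sections demo.c -o demo
nm demo | grep never_called || echo "never_called was discarded"
```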


Alright, with my proprietary fat-free conditionals enabled, I get the clean build down to 61KB.
This is mostly by leaving out I2C and USART (external serial) support in their entirety.

I assume I’m not the only user out there with no need for these interfaces, and for whom the 8KB drop in fat content would be pretty welcome.

So, if I were to bake my stuff into a proper git fork, would the Spark team entertain the idea of pulling in the changes?
To the user, my changes are present simply as a makefile switch, when they want to leave out said features to free space. By default everything is still there.
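Concretely, what I have in mind is something like this in build.mk (the variable and file names here are made up for illustration, not the exact names in my branch):

```make
# Hypothetical opt-out switch: build with `make SPARK_OMIT_I2C=y`
ifeq ($(SPARK_OMIT_I2C),y)
CFLAGS += -DSPARK_OMIT_I2C        # lets the headers stub out the Wire API
else
CPPSRC += spark_wiring_i2c.cpp    # default: driver compiled in as before
endif
```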

so in theory if you #include spark_disable_cloud.h those should no longer be linked (or is it still needed for wifi?) and neither should all the spark_protocol stuff? if it doesn’t work that way then it should probably be fixed to do so.

same goes for usart, i2c, spi etc. they shouldn’t be automatically included just to make it easier for the clueless, the #includes should be moved out of application.h and build.mk into the user’s own application.

I’m sure your PR will be reviewed, so please submit one! :wink:

The crypto isn’t needed if you don’t use networking, so those could be left out, yes. But the include file merely sets a variable; something else could still change that variable back, so it isn’t enough for GCC to automatically leave out the networking code. The decision would have to be known at the preprocessor stage, so a preprocessor directive would have to be used instead - and in fact there is one, called SPARK_WLAN_ENABLE, but I can’t tell you how well disabling it is actually supported.
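To illustrate the difference (a toy sketch - `cloud_enabled` and `spark_protocol_loop` are made-up names, not the actual firmware symbols):

```cpp
// Toy sketch: why a runtime flag can't shrink the binary, but a
// preprocessor directive can.
bool cloud_enabled = true;   // roughly what spark_disable_cloud.h toggles

int calls = 0;
void spark_protocol_loop() { ++calls; }   // stand-in for the cloud code

void loop_runtime() {
    // The linker must keep spark_protocol_loop() in flash: the flag
    // could be flipped back to true at any moment at runtime.
    if (cloud_enabled) spark_protocol_loop();
}

#ifdef SPARK_WLAN_ENABLE
void loop_compile_time() {
    // Compiled only when the macro is defined; with it undefined, the
    // call never even reaches the object file.
    spark_protocol_loop();
}
#endif
```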

Surely the compiler should be smart enough to see which libraries your application actually needs, and recompile the core with only those libraries alongside your application.

Actually, I just noticed that the second-most expensive operation in the entire codebase, AES, is there twice.

In the table above, the function aes_crypt_ecb is from the tropicssl library and used by :spark:'s protocol layer. Meanwhile, the function aes_decr is a part of the CC3000 driver, and used internally for the WiFi encryption.

So… maybe something for devs to optimize right there. Modify the CC3000 driver to use tropicssl’s implementation, or use the (more naïve, I guess) implementation from CC3000 in the protocol layer. Both are in ECB mode anyway.

I guess this is one additional vote for compiling tropicssl as a separate library, so it could be shared by both the common & communications projects. @zachary ?

I think if you have GCC 4.7 or above, the newlib-nano optimization will work :slight_smile:

So just #include "spark_disable_cloud.h" won’t remove the code since you can turn the cloud back on. There is a separate #define SPARK_WLAN_ENABLE that I believe will remove the cloud code. That is in core-common-lib/SPARK_Firmware_Driver/inc/platform_config.h

Right now, the team is focusing on RAM optimization, not FLASH optimization.

but that will disable wifi entirely i expect, i’d rather just disable the cloud but still be able to use TCPServer() etc.

Hey @noora thanks for all the info! As @bko said, we’re more constrained on RAM than on flash, so optimizing RAM right now.

Keep in mind that the Spark Core with the CC3000 will not always be our only product, and we still need to be able to perform the same encryption on other systems.

Also, the :spark: protocol actually uses AES in CBC mode (which wraps around ECB), whereas the CC3000 directly uses ECB with no initialization vector. It’s true there might be a way to integrate them together and save some flash space, but I think the cost in terms of portability makes it not worthwhile, at least currently. If you’re hitting flash limits, there is much lower-hanging fruit.

:+1: to @kennethlimcp’s newlib-nano comment. It’s in core-firmware master as of today, and we’ll be releasing it in the next firmware version within a week or two.

Yeah, what I thought is that it’d be nice if the CC3000 driver shared the tropicssl implementation (which could be an external library for both to link against). But I guess it’d be a pain to go in and modify that driver, because then it wouldn’t be drop-in upgradeable when TI releases updates.

Hey all,
There are some very small sha1 implementations out there that are a bit less efficient than the one I see in the compile. Lemme go digging through my notes. I did a size-speed comparison at some point.