Spark v0.2.2 out of memory


I just pulled the new core firmware from the master repository and I get the red flash (out of memory) every time! Investigating further, I found that I cannot use the TCPClient library. I remember that someone in this community noticed that the UDP and TCPClient libraries cannot be used at the same time. So, dear Spark Core team: can you fix this issue ASAP, or should I (and everyone else) roll back?

I’m finding that I’m often out of memory as well when trying to use UDP.

Hi @Jeff and @markopraakli

The Spark team is aware that the RAM footprint has gone up again. Some of this is a known increase from the new Spark.subscribe() feature, but some of it is unknown and needs to be investigated. I know the team is on top of this issue, but they had a big event this weekend at Maker Faire in the Bay Area that took some time away from development.

If you can build locally, you can adjust the sizes of the buffers used for TCPClient and UDP. Some folks have had good luck doing that. Interestingly, one of the Spark team’s qualification tests apparently runs both TCPClient and UDP at the same time before every release, so it is not simply that combination that breaks things.

There are a lot of things in your code that have the potential to run you out of memory too. Creating and then throwing away lots of Arduino String objects can run you out of memory. You don’t need to copy the entire TCP or UDP buffer into your own buffer; you can parse the data as it comes in, and sometimes get away with smaller or even no buffers in your code.

If the Spark protocol for the cloud needs to reconnect, it generates new session keys and it needs a lot of temporary storage for the key protocol, but it then deallocates that storage when it is done. If you are turning WiFi or the cloud on and off, you need to be aware of this.

I know that the elites have all mentioned that RAM optimization needs to be a priority for the Spark team and I am sure they are going to continue working on it.

You’re completely right @bko – I found that as I’ve decreased buffer sizes things seem to run smoother.

I’m pretty new to the programming language, where can I find more information about parsing data as it comes in rather than storing it in a buffer?

This is generally known as FSM or finite state machine parsing. I use this technique to parse an XML weather stream and then once I find the part I am interested in, I use the C function strtok() to find the boundaries between things, in my case I use the double-quote character.

OK, I am going to post this, but this is not the most beautiful code I have ever written–I have been meaning to go back and clean this up.

The serialEvent function consumes one byte out of the client buffer and uses a series of boolean flags to know where it is in the stream, either tag or data–the title flag is for future use. When it finds something that starts with "&lt;yweather:forecast ", it gathers a line of data for the strtok() parsing part. I use under 200 bytes this way. The things called ptr are really just indexes–not the best name.

const char startMatch[] = "<yweather:forecast ";
const char titleStart[] = "<title>";
const char titleEnd[] = {'<', '/','\0'};

void serialEvent() {
    char inChar =;  // consume one byte from the TCPClient buffer
    if (tagFlag==false && dataFlag==false && inChar == startMatch[matchPtr]) {
        tagFlag = true;
        dataFlag = false;
        titleFlag = false;
        matchPtr++;
    } else if (tagFlag==true && inChar == startMatch[matchPtr]) {
        matchPtr++;
        if (matchPtr == strlen(startMatch)) {  //done with tag, start data
            dataPtr = 0;
            dataFlag = true;
            tagFlag = false;
            titleFlag = false;
            matchPtr = 0;
        }
    } else if (tagFlag == true) {  //stopped matching part way
        matchPtr = 0;
        tagFlag = false;
        if (inChar == startMatch[matchPtr]) {  //could be the start of a new match
            tagFlag = true;
            dataFlag = false;
            matchPtr++;
        }
    } else if (dataFlag==true && ( (inChar==char(10)) || (inChar==char(13)) ) ) {  // line-feed or carriage-return
        dataStr[dataPtr] = '\0';    //null term the string
        dataFlag = false;
        dataPtr = 0;
        parseForecast();    // call the next parse step
    } else if (dataFlag == true) {
        dataStr[dataPtr] = inChar;  // store data away
        if (dataPtr < MAX_DATA_STR_LEN-2) {
            dataPtr++;
        }
    }
}

Here’s a link to an older post I made about the parseForecast() part:

I know this is a bit unclear but I hope it gets the ideas flowing.

@bko Thanks for this information, but it brings a few questions to mind.

Referring to your statement “If you can build locally, you can adjust the sizes of buffers…”

What about the Web IDE? Can’t we adjust the buffers there?

@zachary @zach @Dave – why add functions to the Spark Core firmware at the cost of user-accessible RAM?

For options like the Spark.subscribe() feature (and others), could they be included manually by the user and then compiled into the firmware, instead of making the Spark Core “more limited”?


Since the officially supported interaction with the Spark Core is via the Spark cloud, why not use pointers or space on the cloud to pass code functions to the Spark Core when called by a user?

Neither approach I suggest would require a Spark Core II, but adding more “mandatory” code to the firmware will force an upgrade in the future as we run out of room; am I wrong?

Ok, I see how that can work for an online feed that gets pulled by the Spark, but would this method also work for parsing smaller packets of data being broadcast across the local network?

Essentially I’m looking for any packets of data coming in on port 6454 with a header of “Art-Net” and the Opcode “0x5000”, taking the data from the next few bytes, and then determining what values to give three outputs based on the data available here. Is this possible with this method?

For example, here is some of the packet data which Wireshark is sniffing:
Essentially I need the Spark to pick out the packets which have the blue highlighting and purple box in common, pull out the red, green, and blue boxed values, convert them to decimal, and apply each decimal to a PWM output.

Hey guys!

We are chatting about RAM right at this very moment while waiting to head to Maker Faire.

Any suggestions on how to reduce RAM usage?

We have a few ideas and will dig into it when we all get home! :slight_smile:

Hi @spydrop

I don’t think you can control the size of these buffers in the web IDE. They are defined at sizes that work for most use cases, but they can be too big if your own code uses a lot of RAM. Like I said, one of the standard tests the Spark team runs has both a TCPClient and a UDP connection open, so it can be done.

All features take some RAM. Lots of things, like UDP, don’t use much RAM unless you use them in your code, but the Spark cloud needs to reserve space since you could have a Spark.publish hidden away in your code. The space on the Core for Spark features is just enough to make those features work, so cloud storage really doesn’t help here.

RAM is always a problem on embedded processors. RAM is a problem on Arduino, that is one reason why there is a Teensy. RAM can be a problem on a RaspPi or Beaglebone if you try to do too much–we have that problem at work all the time. Projects always expand to use all available RAM and then effort gets applied to reduce RAM usage. It is just human nature.


Yes, this is a perfect example of something where this would work. Are the purple box values always the same? Do you always skip 8 bytes after the purple box?

So I like the if (booleanVar) style, but a switch with an integer or enum would use a bit less memory. Also, I split out three counter/index ints for readability, but you could combine them since they are used at different times.

This turned out to be a fair bit of code, and I have not tested it, but it should get you started. There are four states: searching, in the prefix, skipping the 8 bytes, and reading the three values.

const char prefix[] = { 'A','r','t','-','N','e','t', 0x0, 0x0, 0x50};
#define NSKIP 8
byte x;  // three values to update
byte y;
byte z;

int prefixIdx = 0;
int skipCount = 0;
int readCount = 0;
bool searching = true;
bool inPrefix = false;
bool skipping = false;
bool reading3 = false;
int nbytes = myUDP.parsePacket();
while (nbytes > 0) {
  byte databyte;, 1);  // consume one byte of the packet
  nbytes--;
  if (searching) {
    if (databyte == prefix[prefixIdx]) {
      searching = false;
      inPrefix = true;
      prefixIdx++;
    }
  } else if (inPrefix) {
    if (databyte == prefix[prefixIdx]) {
      prefixIdx++;
      if (prefixIdx == sizeof(prefix)) { //done
        inPrefix = false;
        skipping = true;
        skipCount = 0;
        prefixIdx = 0; // for next time
      }
    } else { // stopped matching part way, see if we start a new one
      prefixIdx = 0;
      if (databyte == prefix[prefixIdx]) {
        prefixIdx++;
      } else {
        inPrefix = false;
        searching = true;
      }
    }
  } else if (skipping) {
    if (++skipCount == NSKIP) {
      skipping = false;
      reading3 = true;
      skipCount = 0;
      readCount = 0;
    }
  } else if (reading3) {
    switch (readCount++) {
    case 0:
      x = databyte;
      break;
    case 1:
      y = databyte;
      break;
    case 2:
      z = databyte;
      reading3 = false;
      searching = true;  // go back to searching right away after last byte
      break;
    }
  }
}


[Minor edit: go back to searching right away after last byte (z)]


Guys, please stop. Has anyone tried to compile this code:

#include "application.h"

TCPClient client;

void setup() {
}

void loop(void) {
}
You get one guess as to what happens when I deploy this code on my Spark… and you’re correct, a red flashing LED…
When I remove the line “TCPClient client;” everything works… So I think there could be a problem with TCPClient itself, or with the new feature of updating time via UDP…

Something in the new update used up more RAM, and attempting to use anything else that needs a lot of RAM, like TCPClient, caused the issue.

We are looking into this! Really sorry about this.

OK, I just tried this and I do not get a panic and red flashing LED. I will keep looking around to see what it might be.

Have you made any other changes to the source?

Are you sure all three repositories are up-to-date?

@bko can you also jump in to see where we can turn down the buffers in the 3 repos, and what is causing the increase in RAM?


In file core-firmware/inc/spark_wiring_tcpclient.h


And in core-firmware/inc/spark_wiring_udp.h

#define RX_BUF_MAX_SIZE	512

You can try 256. If you are not using UDP, you don’t need to change it and similarly for TCPClient.

It is still a mystery why @markopraakli is having that problem. I know the Spark team has lots of tests that would have failed if you could not instantiate the TCP or UDP objects.

Hi @markopraakli – I can also run the code you posted without getting a memory error. I can even use UDP at the same time and it loads fine.

I know this might be a silly question, and I’m assuming you have already checked this, but is there any chance you are getting a different error?


Okay, I found another issue. I was using the SD library too; if I renamed the build file and recompiled, then my Spark works fine.

arm-none-eabi-size --format=berkeley core-firmware.elf
   text       data        bss        dec        hex    filename
  66876       2592      12644      82112      140c0    core-firmware.elf

and deployed to my spark:

Downloading to address = 0x08005000, size = 69472

After I re-enabled the SD library, I got this footprint and the red flashing (8 times) error.

arm-none-eabi-size --format=berkeley core-firmware.elf
   text       data        bss        dec        hex    filename
  70360       3124      13316      86800      15310    core-firmware.elf

and deploy:

Downloading to address = 0x08005000, size = 73488

P.S. Used the same plain code, just including TCPClient.

I’m pretty new to Spark, but I noticed a few things that may save a little memory. They may have been thought of and discarded already, but I figured I would throw them out just in case.

  1. TCPServer::available() returns a full TCPClient object by value, when the client object is already a class member and could easily be returned as a pointer instead. This would save >500 bytes on the stack per call, and I would argue it improves the object model as well, since currently we have two different TCPClient objects in RAM representing the same actual TCP client.

  2. A large-scale improvement could be using global network buffers rather than each UDP/TCPClient object storing its own. You’ll likely need to roll your own basic allocation system, but this way you have much more flexibility to adapt each object to how much data it’s actually receiving, and you reduce the RAM cost of additional network objects substantially. This becomes more necessary if you intend to expand TCPServer to allow multiple clients in the future.