Web IDE: App Size Limit

It doesn’t seem like the Web IDE prevents apps that are too large from being flashed: I had a buffer that was apparently too big (512 bytes), because after flashing began the Spark Core fell off the network and its RGB LED stopped flashing entirely, even across resets, so I had to factory-reset it back to Tinker.

I tried a smaller buffer size (128 bytes), which worked, but then later I was unable to reflash, and the Spark Core was unresponsive to Spark Cloud requests, even though I could see it was still running the app, so I had to reset it back to Tinker again. Perhaps it didn’t have enough spare RAM to download the new app?

Finally I made the buffer 64 bytes and it’s staying connected.

Can the Web IDE be made to flag apps that are too big to a) run, b) be subsequently updated?

I don’t think it will warn you about an app that is too large to flash to your Core… not sure. I do know that globally defined uint8_t buffers up to 1600 bytes work just fine. I tried one at 2048 globally and it failed… but buffers defined within your functions, setup(), or loop() can be quite large, as big as uint8_t buf[10000].

What type of buffer is it? uint8_t, uint16_t, uint32_t?

You can also run into problems if you write past the end of your buffer, beyond the memory allocated for it.
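For example, writing one byte past the end of a heap allocation silently corrupts whatever lives after the buffer (often the allocator’s bookkeeping). A bounds-checked append avoids that; here is an illustrative sketch in plain C++ (the function name is hypothetical, not a Spark API):

```cpp
#include <cstddef>

// Illustrative sketch: append a byte only if it still fits, instead of
// writing past the end of the allocated region (which silently corrupts
// whatever lives after the buffer).
bool appendByte(unsigned char *buf, std::size_t cap, std::size_t &len,
                unsigned char b) {
    if (len >= cap) {
        return false;  // buffer full: refuse rather than overflow
    }
    buf[len++] = b;
    return true;
}
```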

There is about 17 KB of RAM available… but a lot of that gets eaten up by background tasks… so 10 KB is probably the limit I’d stay under for total RAM usage.

Actually this is a case of PEBKAC: I created the 512 char buffer using calloc (see https://community.spark.io/t/simple-ring-buffer/2733), so it’s coming off the heap rather than statically included in the app. I should add a static-sizing option to that ring buffer class, since in most cases I’ll only need one and presumably the buffer can be much larger if it’s part of the app image rather than the heap.

Hmm, I updated the ring buffer so by default its size is defined at compile time, so that it doesn’t use the heap. I set it to 1024 (chars). The app runs but the Spark Cloud interface worked once then timed out. Yet I can see the app continues to send HTTP requests out every 10 seconds as I expect it to.

When I ran with the heap-allocated 64-byte buffer loop() had gone round 420,000 times without the Spark Cloud interface once dropping off. And yet now it timed out after ~1640 loops (I had curl polling a variable set to the loop count every 10 seconds). The HTTP part is unchanged between the two runs and doesn’t block loop() long enough to kick the cloud connection.

Dropped it to static 512 and it choked at ~1700 loops again.
Dropped it to static 256 and it choked at ~3000 loops.
Dropped it to static 128 and it choked at ~4200 loops.
Dropped it to static 64 and it choked at ~20,000 loops.

By choked I mean the cloud requests are returning:

  "error": "Timed out."

The main source file is pretty small:

#include "RingBuffer.h"

RingBuffer ringBuffer;

unsigned int loopCount = 0;
const unsigned long LOG_EVERY_MS = 10000;
unsigned long lastSendTime = 0;

TCPClient dweetClient;
const char *DWEET_ADDRESS = "dweet.io";

void setup()
{
    Spark.variable("log", (void*) ringBuffer.buffer(), STRING);
    Spark.variable("loopcount", &loopCount, INT);
}

void loop()
{
    loopCount++;

    if (!dweetClient.connected()) {
        if (!dweetClient.connect(DWEET_ADDRESS, 80)) {
            ringBuffer.log("connect failed.");
        }
    }

    if (dweetClient.connected()) {
        if (dweetClient.available()) {
            dweetClient.read();  // drain the server's response
        } else {
            unsigned long now = millis();
            if ((now - lastSendTime) > LOG_EVERY_MS) {
                lastSendTime = now;

                dweetClient.print("GET /dweet/for/mydweetname?loopCount=");
                dweetClient.print(loopCount);
                dweetClient.println(" HTTP/1.1");
                dweetClient.println("Host: dweet.io");
                dweetClient.println();
            }
        }
    }
}
And the ring buffer class isn’t big either, other than the actual buffer of course.

All the while the HTTP requests continue to go out.

Left it running and the HTTP part locked up after 17,922 loops. Power-cycled it and both Cloud and HTTP made it to 72,239 and then both choked.

Just seems to be generally flaky even with a 64-char buffer.

Is there a size limit on STRING Spark.variables, or I guess on responses from the core to the cloud in general?

Hi @finsprings,

Hmm, I think we determined that the Spark.variable response for STRINGs maxes out at around 230 characters (I think; we’re working on making that larger). I think right now user firmware is limited to exposing 4 functions and 4 variables, but I’m having trouble finding the exact number in the docs, so I’ll add that as a bug. :smile:
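Given a cap like that, one defensive pattern is to copy only the newest portion of a larger log into a small snapshot buffer and expose the snapshot instead. A sketch in plain C++, assuming the ~230-character figure (the helper name is hypothetical, not a Spark API):

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical helper (not a Spark API): copy at most maxLen characters
// from the end of `src` into `dst`, so the buffer exposed as a STRING
// Spark.variable stays under the cloud's response cap (~230 chars at
// the time of writing).
void snapshotTail(char *dst, std::size_t dstSize,
                  const char *src, std::size_t maxLen) {
    std::size_t srcLen = std::strlen(src);
    std::size_t copyLen = srcLen < maxLen ? srcLen : maxLen;
    if (copyLen >= dstSize) {
        copyLen = dstSize - 1;  // never overrun the destination
    }
    std::memcpy(dst, src + (srcLen - copyLen), copyLen);
    dst[copyLen] = '\0';
}
```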

The cloud will prevent binaries that are too large from being flashed, but it’s not checking / estimating how much ram your app will use in conjunction with the encrypted handshake. (Which would be helpful!)

I wonder if the disconnect / failures you’re seeing are actually just wifi stability issues and not code stability issues?


I’m exposing my ring buffer as a STRING variable, so that 230-character limit is good to know, and presumably explains part of why I’d see the Cloud part drop off while the TCPClient part would continue to work.

When they both drop off I agree that’s probably “just” general stability issues with the wifi.


Going through this thread, I never did find an answer to the question of how big my Photon program can be using the web IDE. Do I just keep growing the code until one day I brick the Photon?

Right now I have about 1200 lines of code, nine included libraries, a dozen ints, a dozen bytes, 80 floats, and a bunch of strings totalling maybe 100 characters. How close am I to the limit?

You won’t brick the Photon. The app just won’t build and you’d get an error message. But even if it built, you couldn’t upload it, so you’re safe.
AFAIR the Photon and Electron have about 60-80K of available RAM and 128K of flash.

You should be able to see the build statistics when clicking on the (i) symbol in the status line.

Ah ha! Thanks - that’s what I was looking for. That tiny little “i” in gray on a black background is easy to miss.

Looks like I still have a ways to go before I need to start worrying:

Output of arm-none-eabi-size:
   text    data     bss     dec     hex
  35156     236    3552   38944    9820

In a nutshell:
Flash used 35392 / 110592 32.0 %
RAM used 3788 / 20480 18.5 %
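For what it’s worth, the two “used” figures follow directly from the size columns: flash holds the code plus the initial values of globals, while RAM holds initialized globals plus zero-initialized ones (bss):

```cpp
// How the IDE summary is derived from the arm-none-eabi-size columns.
constexpr int text = 35156;  // code and constants (flash only)
constexpr int data = 236;    // initialized globals (flash copy + RAM copy)
constexpr int bss  = 3552;   // zero-initialized globals (RAM only)

constexpr int flashUsed = text + data;  // 35392, matching "Flash used"
constexpr int ramUsed   = data + bss;   // 3788, matching "RAM used"
```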

@Michele, I don’t believe those numbers correctly reflect the maximum flash and RAM amounts. I believe this is a known issue.

Oh! That’s good to know - thanks. So I guess I can start worrying again…

@Michele, assume the max for flash is 120K and RAM is 70K. From those numbers you posted, you are doing fine :wink:

Great! Thank you so much for your help.
