Delay killing core

Try this script:

// We name pin D7 as led
int led = D7;

// This routine runs only once upon reset
void setup() 
{
    // Initialize the D7 pin as an output
    pinMode(led, OUTPUT);
    
    Serial.begin(9600);
    Serial.println("Spark started");
}

void loop() {
    digitalWrite(led, HIGH);   // Turn ON the LED
    delay(3000);               // Wait for 3000 ms = 3 seconds
    digitalWrite(led, LOW);    // Turn OFF the LED
    delay(3000);               // Wait for 3 seconds
    
    Serial.println("Wait 50s");
    delay(50000); 
}

After a minute or two, the core will start blinking green and you cannot flash it again.

A factory reset helps after that, but clearly something is wrong here!

Could it maybe be this? https://community.spark.io/t/known-issue-long-delays-or-blocking-code-kills-the-connection-to-the-cloud/950/10

Hey Guys,

Right now the core is trying to ping the server every 15 seconds to make sure it’s still online. We have a bug (feature? :wink: ) right now where long delays make the core think it’s offline, and cause it to stop and try to reconnect. Short delays should make this go away while we find a better way to accomplish the heartbeating. :smile:
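
To make that concrete, here is the loop from the script at the top of the thread with the delays annotated against that ~15 second ping (the comments are just my reading of it, not a fix):

void loop() {
    digitalWrite(led, HIGH);
    delay(3000);               // ~3 s: well under the ~15 s ping interval
    digitalWrite(led, LOW);
    delay(3000);               // also fine

    Serial.println("Wait 50s");
    delay(50000);              // ~50 s: the core misses its check-ins,
                               // decides it is offline, and drops back
                               // to blinking green / reconnecting
}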

Thanks!
David

Hi @Dave,

What is going on in the background while our code is delaying? Is Spark’s firmware code still running, or does everything stop for the given seconds?

If everything stops, how can the core’s heart (LED) still be beating? If not, why can’t the core communicate with the server?

Lots of questions, thanks :smiley:

Hasan

I had an idea that the delay() function should be intercepted by the background tasks so the heartbeat can be processed while we are delaying… I still think that’s worth looking into :wink:

+1 on that, BDub. That was brought up in another topic, if I recall. delay() should appear as a delay to the user code but be transparent (so to speak) to the background tasks. Seems too logical!

:smile:

This is definitely something we want too, but it’s pretty tricky. The networking code and logic are blocking, and running them inside the timing interrupt is hard. We’re working on it, though! We tried hacking multithreading onto the core as one workaround, and now we’re experimenting with a non-blocking networking driver. We’ll get it!

Hi @triuman,

When you call delay(), the code sits in a while loop waiting for the delay to be whittled down by the timing interrupt, so no other code can run during the delay (except the stuff in the timing interrupt).
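
A rough sketch of that mechanism (this is not the actual firmware source, just an illustration of a busy-wait delay driven by a millisecond tick; the names are made up):

volatile unsigned long delay_remaining = 0;

// Called once per millisecond from the timing interrupt
void tick_handler(void) {
    if (delay_remaining > 0) {
        delay_remaining--;
    }
}

void my_delay(unsigned long ms) {
    delay_remaining = ms;
    while (delay_remaining > 0) {
        // Busy-wait: no other user code runs here; only interrupt
        // handlers (like the tick above) get to fire.
    }
}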

Couldn’t you do a version where, if the user sets a really long delay such as 50 seconds, it gets split up in the background into multiple shorter delays, which would let a heartbeat to the cloud trickle through? The time the heartbeat takes would need to be measured and subtracted from the total delay time so that we still get the same precision.


I think we could simply subtract the delay time from the heartbeat time and assume that if we were sleeping and missed one, it was okay. But I think the CC3000 will still drop the connection for lack of activity, so some extra juggling would be required; still, something like that should be possible.

If you run loop() as fast as it will go it takes about 5-6ms for the background tasks to complete.

For this idea to work, you would need to put a wrapper on the background tasks.

Pseudo code might look like this:

main() {
  backgroundTasks();
  userTasks();  // user code calls delay(50000);
}

delay(ms) {
  start = millis();
  if(ms > 1000) {  
    while( (millis() - start) < (ms - offset1) ) {
      backgroundTasks();  // this takes 5 - 6ms per call
    }
  }
  else {
    while( (millis() - start) < (ms - offset2) ) {
      // do nothing
    }
  }
  // millis() used just to make this clear, real code might be a timer.

  // offset1 and offset2 account for the delay of the millis() save and if-else().

  // The 1000ms threshold is just an arbitrary number much less than 10 sec
  // but greater than any small delay that might need to be more precise.
  // Keep in mind once you call backgroundTasks() it's going to block for 5-6ms

  // Could also create a new delayMilliseconds(); function that takes the place 
  // of the current delay() function for more precise small millisecond delays.
  // or just recommend use of delayMicroseconds();
}

Life is good :smile:

@dave @satishgn


After testing, I found that multiple shorter delays like 10000 ms will also kill the core, just not as fast.

Now I have replaced delay(50000); with something like this:

    if (millis() - lastAttempt < 50000)
        return;

And the core is still alive after 10 hours of running.
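
For reference, a self-contained sketch of that pattern applied to the original script could look like this (the variable name and the 3-second blink are just illustrative):

int led = D7;
unsigned long lastAttempt = 0;

void setup() {
    pinMode(led, OUTPUT);
    Serial.begin(9600);
}

void loop() {
    // Return immediately until 50 seconds have passed, so loop()
    // exits quickly and the background/cloud code keeps running.
    if (millis() - lastAttempt < 50000)
        return;
    lastAttempt = millis();

    digitalWrite(led, HIGH);
    delay(3000);               // short blocking delays are still fine
    digitalWrite(led, LOW);
    delay(3000);
    Serial.println("50 seconds elapsed");
}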

But of course, this is not a solution to the problem.
The core does not work well with delays, and this should be fixed.


Hi @sanwin,

This is a known issue, and it has to do with the heartbeating and socket timeouts in both the CC3000 and our code for sustaining the connection to the cloud. I agree, though, and we’re working on a way to not get tripped up by very long blocking code in loop().

Thanks!
David

Is there an approximate date for when an issue like this will be fixed?

Especially since there’s a workaround, this isn’t top priority for us internally. We’re planning the next two-week sprint today, and it’s not in our plans so far.

Of course, since the firmware’s open source, anyone is welcome to submit a solution as a pull request!

Are the plans for the next sprint public anywhere? It would be good to know what you guys are working on so that the community doesn’t double up on some of the work.

Nope, not currently. We’ve considered it, but right now we’re posting the results of finished sprints on the :spark: blog. You can expect a summary of the most recent sprint today.

I don’t really get the logic of not sharing the bug/task list and what the team is focusing on, so feel free to elaborate on this. Just be honest about what you are working on and hoping to solve, without promising that you will get there during the sprints.

For example, the work the community is doing on solving CFOD could technically be wasted if you have already solved it during the sprint and just not revealed it yet. The same goes for a lot of other work, like saving variables in the cloud, etc. It doesn’t feel like good teamwork, and it certainly doesn’t motivate me to do any work in my spare time.

It does feel like a waste to work on something when someone else is working on it as well, but that said… NOT knowing someone is working on the same thing keeps you focused and free to come up with your own unique solution that may be better. This is a tricky one to paint black or white. Certainly, I would hope any time we had the desire to work on something, we could ask here if the Spark Team has plans to work on it any time soon… and get a straight answer. At least that’s been the case so far :wink:

I agree with you, @BDub, but asking each time you plan to do something gets counterproductive. If Spark is going to answer anyway, why not just post a list of tasks they are working on and a list of tasks they see as high/mid/low priority, so people can pick from that list if they don’t know what to help with? We all know/hope that this list exists somewhere, so it is just about making it public and marking the tasks there that Spark is already working on.