Interrupt Spark.connect() and Spark.disconnect()?

Working on a little something that will get moved around and may not have a steady connection. I’ve been able to avoid connecting to the cloud on setup with:

#include "spark_disable_cloud.h"

Then I’d like to attempt to connect to the cloud when I’m not receiving input, but always be ready to cancel the cloud requests if I do receive input.

I have a switch with an interrupt, but it seems that if I start a Spark.connect() the interrupt won’t be called. It’s very possible there’s a better way to do this, but I’d like user input to take priority over the Spark connect calls. This way I can respond immediately and worry about syncing with the cloud later.

I’ve got a simplified example below that uses a switch on D4 to turn the LED on D7 on/off and connect to or disconnect from the cloud. After flipping the switch multiple times, the interrupt seems to stop working (the LED does not respond), while the RGB LED is indicating a successful connection to the cloud.

#include "spark_disable_cloud.h" // disable cloud on start

int ledPin = D7;
unsigned long lastSwitchedAt = 0;
int switchPin = D4;

void switchSwitched(void);

void setup() {
    pinMode(switchPin, INPUT_PULLDOWN);
    pinMode(ledPin, OUTPUT);
    attachInterrupt(switchPin, switchSwitched, CHANGE);
}

void loop() {
}

void switchSwitched() {
    // debounce
    if(millis() - lastSwitchedAt > 200) {
        lastSwitchedAt = millis();
        
        if(digitalRead(switchPin) == HIGH) {
            Spark.connect();
            digitalWrite(ledPin, HIGH);

        } else {
            Spark.disconnect();
            digitalWrite(ledPin, LOW);
        }
    }
}

There’s no way to interrupt a method like Spark.connect(). However, there is another way: rather than calling Spark.connect() directly in the ISR, set a flag that is then read in your main loop.

Something like this:


int switchPin = D4;
int ledPin = D7;
volatile unsigned long lastSwitchedAt = 0;

enum SparkAction {
    NONE, CONNECT, DISCONNECT
};

volatile SparkAction action;

void setup() {
    action = NONE;
    pinMode(switchPin, INPUT_PULLDOWN);
    pinMode(ledPin, OUTPUT);
    attachInterrupt(switchPin, switchSwitched, CHANGE);
}

void loop() {   
   SparkAction previous = action;
   action = NONE;
   switch (previous) {
     case CONNECT: Spark.connect(); break;
     case DISCONNECT: Spark.disconnect(); break;
   }
}

void switchSwitched() {
    // debounce
    if(millis() - lastSwitchedAt > 200) {
        lastSwitchedAt = millis();

        if(digitalRead(switchPin) == HIGH) {
            action = CONNECT;        // <==========
            digitalWrite(ledPin, HIGH);
        } else {
            action = DISCONNECT;     // <==========
            digitalWrite(ledPin, LOW);
        }
    }
}

The key point is that the interrupt service routine, switchSwitched(), should always complete quickly. Rather than calling a Spark cloud function from the ISR, which would hold it up, the ISR just sets a flag, and the main loop calls connect/disconnect as needed.

@mdma thanks for your help. It’s helpful to see how you think about these problems.

However, even with your code changes I’m seeing the same behavior as before, where the interrupt isn’t being called (the LED doesn’t change) after flipping the switch a couple of times.

I’m wondering if I’m just thinking about this problem incorrectly. How are other people dealing with establishing infrequent and possibly unsuccessful connections to the cloud? Is it just necessary for this to lock things up?

Oh! That’s a bit odd! It seems surprising to me that what your main loop is doing would have any bearing on how your interrupt code responds.

Does the situation improve if you remove the calls to Spark.connect() etc… in the main loop?

If you want to paste your entire code, feel free to do that.

Spark.connect() and Spark.disconnect() just set a flag for SPARK_WLAN_Loop() to attempt to connect (or disconnect) the next time your code goes through loop(). Connecting to the cloud can take some time (several seconds), which you don’t often see because it normally happens at core boot time. If you are throwing the switch quickly, I would count 10-20 seconds between flips and see if that is any better.

Regarding your original code: there is special-case handling for an empty loop(), and I wonder if something could be going wrong there. Maybe you should try putting a delay(20); in loop() just so it is not empty.

Ah, that’s great to hear! I was having kittens thinking about all the strangeness that would happen if someone actually tried to do the cloud connect/disconnect in an ISR!

@chap - you can ignore my suggestion, since it duplicates what Spark already does in the firmware - I remember now, as @bko mentions, that this is all done by setting flags below the surface. Sorry for the misdirection.

@bko thanks for your insight and @mdma for sticking with me.

I’m surprised the cloud connectivity has this blocking aspect to it. It seems common enough that a project would want to prioritize input over network status.

From my new understanding, the only options for consistently responding to input instantly are to have an unfailing cloud connection or to turn the cloud off completely. I can’t guarantee 100% connectivity (can anyone?), but I’d like to connect and send an update occasionally.

I’m open to other techniques or approaches if anyone has other suggestions.

I don’t understand why this isn’t working as you’d expect - the interrupt you set up should fire as soon as the button is pressed, unless the system is servicing a higher-priority interrupt.

I hope someone can shed some light on this - it’s quite confusing!

@chap, does it work as expected if you comment out the Spark.connect()/disconnect() calls?

@mdma yes, the code works fine without the Spark.connect() and disconnect(). From what I now understand, the networking stuff blocks all activity in your loop and overrides interrupts.

This is also confusing to me, because it means the interrupts aren’t really interrupts and there’s no way to reliably capture input. (I started a more specific topic to see how other people were dealing with it.)

P.S. I also realized in my code examples I was relying on millis() inside of interrupts which is a no-no, so I’ve rewritten things below:

#include "spark_disable_cloud.h" // disable cloud on start

int switchPin = D4;
int ledPin = D7;
unsigned long lastSwitchedAt = 0;
volatile bool switched = false; // written in the ISR, read in loop()
enum SparkAction {
    none, sparkConnect, sparkDisconnect
};
volatile SparkAction action = none;

void switchSwitched(void);

void setup() {
    pinMode(switchPin, INPUT_PULLDOWN);
    pinMode(ledPin, OUTPUT);
    attachInterrupt(switchPin, switchSwitched, CHANGE);
}

void loop() {   
    if (switched && (millis() - lastSwitchedAt > 200)) {
        lastSwitchedAt = millis();
        
        if(digitalRead(switchPin) == HIGH) {
            action = sparkConnect;
            digitalWrite(ledPin, HIGH);
        } else {
            action = sparkDisconnect;
            digitalWrite(ledPin, LOW);
        }
    }
    switched = false;
   
    SparkAction previous = action;
    action = none;
    switch (previous) {
        case sparkConnect: Spark.connect(); break;
        case sparkDisconnect: Spark.disconnect(); break;
    }
    
    delay(10);
}

void switchSwitched() {
    switched = true;
}

millis() inside an ISR is ok as long as you’re not expecting the value to change during any given invocation of the ISR.