Spark.publish from ISR - freeze, red flashes, reset [SOLVED]

Hi everyone,

Like many here, I’m new to electronics. I’m trying to set up a very simple device to send events when a door is opened and closed. I have a magnetic switch, an indicator LED, and the following code. The problem is that about 5 seconds after the ISR runs, the core freezes up, and about 10 seconds after that, the light starts blinking red, seemingly at random, like in the video in this question:

Using the spark CLI tools, I verified that each event was getting published (using ‘spark subscribe door-closed mine’ and ‘spark subscribe door-opened mine’), but only about three or four such events would get published before the core froze.

If I comment out the Spark.publish calls, the rest of this very simple program works perfectly; the indicator LED turns on and off correctly based on whether the switch is open or closed. Here’s the code, and I can post a schematic of the circuit if needed. Thanks so much! --Dan

int door_sensor = D0;
int indicator = D4;
volatile int last_interrupt_time = 0;
void door_change(void);

void setup() {
    pinMode(door_sensor, INPUT_PULLDOWN);
    pinMode(indicator, OUTPUT);
    digitalWrite(indicator, LOW);
    
    attachInterrupt(door_sensor, door_change, CHANGE);
}

void loop() {
    
}

void door_change() {
    int interrupt_time = millis();
    if (interrupt_time - last_interrupt_time > 300) {
        int door_reading = digitalRead(door_sensor);
        if (door_reading == HIGH) {
            Spark.publish("door-closed", NULL, 60, PRIVATE);
        } else {
            Spark.publish("door-opened", NULL, 60, PRIVATE);
        }
        digitalWrite(indicator, door_reading);  
        last_interrupt_time = interrupt_time;
    }
}

OTA-ing your code to my core now :wink:

  1. Verified that after publishing about 8 times, the core enters SOS mode.

  2. The rate-limiting for Spark.publish() should kick in if you are making too many requests.

It looked like some overflow to me.

Doing some debug now :slight_smile:

I suggest you don’t do so much in your ISR. Instead of actually calling Spark.publish, just set a global variable indicating what change has happened and call Spark.publish in loop().

Oh good point @erjm,

Try changing the code, @fajpunk.

Setting a flag which is checked in the main loop() should do the job! :smiley: :smiley:

One other point: the return type of millis() is unsigned long, not int, so the subtraction of interrupt_time and last_interrupt_time is not computing the correct answer. Both variables should be declared as unsigned long.
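
For example, something like this (just a sketch, keeping your 300 ms window; unsigned subtraction also stays correct when millis() rolls over):

volatile unsigned long last_interrupt_time = 0;

void door_change() {
    unsigned long interrupt_time = millis();
    // unsigned arithmetic keeps this comparison correct across rollover
    if (interrupt_time - last_interrupt_time > 300) {
        // ... handle the change ...
        last_interrupt_time = interrupt_time;
    }
}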

As others said above, you should probably rate limit your code to one event per second and pass a flag back to loop to do the publish.

The red flashing LED is not blinking at random! When the core has a serious error, called a panic, it blinks a red SOS (… — …), then a number of red flashes, and then SOS again. The number of flashes between the SOSs tells you the panic code: 8 flashes means out of memory, but there are other codes too.

Thanks so much, everyone! After implementing the suggestions, this works very well now. I should have seen the unsigned long return type for the millis() function in the docs. @bko, is there somewhere in the docs that describes the panic codes?

Here is the code that works:

int door_sensor = D0;
int indicator = D4;
int door_reading = 0;
volatile unsigned long last_interrupt_time = 0;
volatile int door_flag = 0;   // set by the ISR, cleared in loop()


void setup() {
    pinMode(door_sensor, INPUT_PULLDOWN);
    pinMode(indicator, OUTPUT);
    digitalWrite(indicator, LOW);
    
    attachInterrupt(door_sensor, door_change, CHANGE);
}


void loop() {
    // publishing happens here, outside the ISR
    if (door_flag) {
        door_reading = digitalRead(door_sensor);
        digitalWrite(indicator, door_reading);
        if (door_reading == HIGH) {
            Spark.publish("door-closed", NULL, 60, PRIVATE);
        } else {
            Spark.publish("door-opened", NULL, 60, PRIVATE);
        }
        door_flag = 0;
    }
}


void door_change() {
    // the ISR just records the time and sets a flag for loop() to handle
    unsigned long interrupt_time = millis();
    if (interrupt_time - last_interrupt_time > 1000) {
        door_flag = 1;
        last_interrupt_time = interrupt_time;
    }
}

Thanks again.

Hi @fajpunk

I created a new topic for this until it makes its way into the doc:

I too am looking into using the Spark.publish() call in my program. I saw mentions that this function should not be used outside the main loop. Is this still an issue? Here is my simple code to return some events on button pushes.

volatile int state = LOW;

int plungerButton = D0;
int switchPosA = D1;
int switchPosB = D2;
int led = D3;

void setup() {
    pinMode(plungerButton, INPUT_PULLUP);
    pinMode(switchPosA, INPUT_PULLUP);
    pinMode(switchPosB, INPUT_PULLUP);
    pinMode(led, OUTPUT);
    
    attachInterrupt(plungerButton, plungerButtonPushed, FALLING);
    attachInterrupt(switchPosA, switchPosASwitched, FALLING);
    attachInterrupt(switchPosB, switchPosBSwitched, FALLING);

    digitalWrite(led, HIGH);
}

void loop() {

    digitalWrite(led, state);
    //delay(500);

}

void blink()
{
  state = !state;
}

void plungerButtonPushed()
{
    Spark.publish("button-push", "plunger");
    blink(); 
}

void switchPosASwitched()
{
    Spark.publish("button-push", "switchA");
    blink();
}

void switchPosBSwitched()
{
    Spark.publish("button-push", "switchB");
    blink();
}

After a couple of button presses, the core flashes red. I get 11 flashes - Invalid Case.

Looking at my code, does anybody have any thoughts?

Follow-on note: I noticed when running this that my LED state would not change and that I would get double event messages. I know they were duplicates because they had different timestamps. I moved the blink() call before the publish and the double messages went away. We still crash at the 3-4 event mark.

Hi @cloris

I think your multiple messages are likely due to contact bounce in the switches, unless you have taken care of that somehow with external circuitry. When you push a button, it does not just make contact once cleanly; instead, multiple close/open sequences can happen for some time (typically 5 ms or so).
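
A simple software fix is to ignore edges that arrive too soon after the previous one; roughly like this (just a sketch for your plunger pin, with illustrative names, and the 10 ms window is a guess you may need to tune):

volatile unsigned long last_plunger_edge = 0;
volatile int plunger_flag = 0;

void plungerButtonPushed() {
    unsigned long now = millis();
    // treat edges closer than 10 ms together as contact bounce and ignore them
    if (now - last_plunger_edge > 10) {
        plunger_flag = 1;   // let loop() do the publish
    }
    last_plunger_edge = now;
}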

Spark.publish() is rate-limited to once per second, with a burst of up to four allowed as long as the average stays at one per second. If you go over that, you will start missing events. As you can imagine, contact bounce might be causing problems for you there as well.
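
Then loop() can do the publish and stay under the limit by sending at most one event per second; something along these lines (again just a sketch, reusing the flag from above):

unsigned long last_publish = 0;

void loop() {
    // only publish when the ISR has flagged an event, and at most once per second
    if (plunger_flag && (millis() - last_publish >= 1000)) {
        Spark.publish("button-push", "plunger");
        last_publish = millis();
        plunger_flag = 0;
    }
}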

As to your red flashes: the code is SOS followed by N flashes, then SOS and N flashes again. Are you getting 11 flashes after the SOS? Or could you be counting the SOS (which is 9 flashes for …—…) and then getting two flashes? That would make more sense, since two flashes is a non-maskable interrupt fault.

OK. I am going to build something to limit the sending. That kind of limits the usefulness of it, but I will take what I can get. What should it do if the message rate exceeds one per second?

The bouncing also seems to be a fair explanation. Need to hook up the serial port and verify.

I will try something and post here.

I also reconfirmed that it is the 9 SOS flashes followed by 11 flashes. I thought that was interesting, but the why is beyond my depth.

Thanks!