Can't flash from Spark IDE when using publish or UDP

Hi, my first post. I have had several Spark Cores sitting around for a while. I finally have those precious days to experiment and, inspired by my recently installed solar photovoltaic setup, decided to upgrade an Arduino/WiFly energy monitor to a Spark. Basic setup was easy, but getting internet connectivity to work the way I wanted has proved to be more of a challenge.

I can’t reliably reflash the devices from the web IDE. It does not appear to be a simple connectivity issue, as Spark.variable(), Spark.function(), Spark.publish() and UDP reads and writes all work reliably. I have seen similar problems reported on the community, but none that seem this invasive.

I have versions of the code that use Spark.publish() or broadcast local UDP packets, with the same effect: I can no longer reflash the device from the web IDE while the connections are active. I structured the code to keep the main loop short (around 200ms). A workaround for the UDP version is to register a Spark.function() which stops the UDP activity, which I can call before flashing the code.

I have the same problem with a slave device, which responds to the published event in one version and performs a UDP read in another. As it’s small, I have included the code for the UDP version below; a sketch of the subscribe-based version follows it.

Remote code update is a major reason for using the Spark, so I can experiment with devices in place. Am I missing something in the code to make the reflash reliable, or might this be a local network configuration problem?

// This #include statement was automatically added by the Spark IDE.
#include "SparkIntervalTimer/SparkIntervalTimer.h"

/****************************************************************
 * skeleton receiver for a UDP packet
 ****************************************************************/
UDP udp;

int           noPacketCount;
volatile bool tFlag;     // set in the timer ISR, cleared in loop()

IntervalTimer sTimer1;   // timer to trigger looking for a UDP packet
void udpRcv( void )
{
   tFlag = true;
}

void setup() {
    RGB.control(true);
    udp.begin(50001);
    
    noPacketCount = 0;
    tFlag = false;
    sTimer1.begin(udpRcv, 200, hmSec); // 200 half-millisecond ticks = 100ms between UDP polls
}

void loop() {
    char rBuff[16];
    int balance;
    if (tFlag){
        tFlag = false;
        if (udp.parsePacket() > 0) {
            noPacketCount = 0;
            int len = udp.read(rBuff, sizeof(rBuff) - 1);
            rBuff[len] = '\0';    // null-terminate before the String conversion
            balance = String(rBuff).toInt();
            if (balance > 0) {
                RGB.color(balance/4, 0, 0);     // positive balance: red
            } else {
                RGB.color(0, -balance/4, 0);    // negative or zero: green
            }
        } else {
            noPacketCount = min(100, noPacketCount+1);
            if (noPacketCount > 5) {
                RGB.color(0, 0, 255);  // failed to find UDP packets
            }
        }
    } 
}
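
For comparison, the publish/subscribe version of the slave has the same shape; below is a minimal sketch of it. The event name "energyBalance" and the signed-integer-as-text payload are illustrative rather than my exact code:

/****************************************************************
 * skeleton receiver for a published event (sketch only --
 * the event name and payload format are illustrative)
 ****************************************************************/
void balanceHandler(const char *event, const char *data)
{
    if (data == NULL) return;
    int balance = String(data).toInt();   // payload is a signed integer as text
    if (balance > 0) {
        RGB.color(balance/4, 0, 0);
    } else {
        RGB.color(0, -balance/4, 0);
    }
}

void setup() {
    RGB.control(true);
    Spark.subscribe("energyBalance", balanceHandler);
}

void loop() {
}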

Hi @nickgb

I have had this problem too and I don’t know what is causing it. Somehow the socket used by the cloud connection is getting clobbered, particularly when doing UDP broadcast.

There is an open issue on GitHub for a similar UDP broadcast problem. If you could add information (code samples, etc.) to that issue, I think it will help get it debugged.

Here is a work-around that I use: dedicate a spare pin as a reboot pin. When it is low, things work as they did before, but if I need to do an over-the-air flash, I tie the pin high and hit reset. When the core enters loop(), it will loop forever calling the Spark cloud handler, basically just waiting to be reflashed.

void loop() {
    int pin = digitalRead(D0);  // allow OTA flash by pulling D0 up and resetting
    if (pin == HIGH) {
        for (;;) {
            SPARK_WLAN_Loop();  // service only the cloud connection, waiting for a flash
        }
    }
...
}

Hi @bko, thanks for the quick response.

I think I have seen similar behaviour with Spark.publish() / Spark.subscribe(). I moved to UDP to see if the problem was with those functions (and because it should be lower latency?). I’ll see if I can add to the problem definition on GitHub.

I have a similar solution to yours, except I register a Spark.function() for turning the extra communications on and off so I can do it remotely. (I’m really lazy and hate to get out of my chair.)
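
Something like this, as a minimal sketch (the function name "comms" and the on/off argument convention are just my illustration):

// kill switch for the extra communications -- the function name and
// argument convention here are illustrative
bool commsEnabled = true;       // checked before any UDP / publish activity

int commsControl(String args)   // cloud-callable: returns the new state
{
    commsEnabled = (args != "off");
    return commsEnabled ? 1 : 0;
}

void setup() {
    Spark.function("comms", commsControl);
    // ... rest of setup ...
}

void loop() {
    if (commsEnabled) {
        // ... UDP reads/writes or Spark.publish() activity here ...
    }
    // with comms off, loop() is nearly empty and the OTA flash goes through
}

Calling the function through the cloud API (a POST to /v1/devices/{device id}/comms with an access token) flips the switch before I flash, without leaving the chair.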