New feature: control your connection!

I've been using #include "spark_disable_cloud.h" and #include "spark_disable_wlan.h". Will I still be able to use these though!?

If not...

I'm assuming I have to use Spark.process() extensively along with what you said?

The docs and the process have been clarified a bit more since I wrote that.

Those two includes are now deprecated and will no longer compile. (You’ll get an error and a polite request to use SYSTEM_MODE instead.)

If you don’t want control over the cloud processing, just the connection, use SEMI_AUTOMATIC mode. Then you decide when the cloud connects/disconnects, but otherwise it takes care of itself.
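For example, a minimal SEMI_AUTOMATIC sketch might look like this (connecting in setup() is just one choice; you could equally call Spark.connect() later, e.g. from a button handler):

    #include "application.h"

    SYSTEM_MODE(SEMI_AUTOMATIC);    // the cloud connects only when you ask

    void setup() {
        Spark.connect();            // explicitly bring up the cloud connection
    }

    void loop() {
        // per the above, once connected the system handles cloud processing for you
    }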


I’m still confused. According to the docs, spark_disable_cloud.h is essentially replaced by SYSTEM_MODE(MANUAL); there’s no mention of WiFi.connect().


How is connection loss / reconnection handled in mode SEMI_AUTOMATIC?

Will the Spark reconnect automatically if it loses connection to the cloud, or can I manually detect that it lost the connection and reconnect via Spark.connect()?

Due to issue https://github.com/spark/core-firmware/issues/278, I try to do

    if (!Spark.connected()) {
        Spark.connect();
        Spark.subscribe("myevent", myhandler, MY_DEVICES);
    }

in the main loop to reinstate my subscription on a reconnect event. Could that work?
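One sketch of an alternative that only re-subscribes when the connection comes back up, rather than on every loop pass (the wasConnected flag is a hypothetical helper, not part of the API; myhandler is the handler from the snippet above):

    bool wasConnected = false;

    void loop() {
        if (!Spark.connected()) {
            wasConnected = false;
            Spark.connect();        // request a (re)connection
        } else if (!wasConnected) {
            wasConnected = true;    // we have just (re)connected
            Spark.subscribe("myevent", myhandler, MY_DEVICES);
        }
    }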

Hi @michael1

With semi-automatic, my understanding is that you manage the connection, and once connected, the system takes care of the background processing, so no need to call Spark.process(). But I don’t know if it automatically takes care of reconnecting once the connection is lost. I can take some time to look at the code/write some tests and get back to you.

Thanks @mdma !

Do you know if it would be harmful to call Spark.subscribe() with the same, possibly already existing, subscription, e.g. every 10 minutes, to keep the subscription alive over reconnects?

Hi, is it strictly necessary that the button in semi-automatic mode is attached to D0? Could it be another pin (e.g. D6)?

Hi @afromero,

From my understanding, the button is just an example of what you can do in semi-automatic mode. You can use any other pin by adapting the code (attachInterrupt(D0, connect, FALLING);), or not use a button at all and call Spark.connect() in some other place in your code.
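As a sketch using D6 as asked, following the shape of the docs example (this assumes a button that pulls the pin low; check the docs for which pins support interrupts on the Core):

    SYSTEM_MODE(SEMI_AUTOMATIC);

    void connect() {
        if (!Spark.connected()) {
            Spark.connect();    // start the cloud connection on button press
        }
    }

    void setup() {
        pinMode(D6, INPUT_PULLUP);              // button wired from D6 to GND
        attachInterrupt(D6, connect, FALLING);  // fires when the button is pressed
    }

    void loop() {
    }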


@michael1 - let’s take this discussion back over on the original discussion thread for the subscribe problem.


Has anyone had a problem with SYSTEM_MODE(MANUAL) and OTA updates? If I set MANUAL and try to do an OTA update, the core LED goes solid magenta and then it resets. In my loop function I have this:

    if (Spark.connected()) {
        Spark.process();
        if (serverStatus == 1) {
            serverStatus = 2;
            Serial.println("HELLO");
        }
        if (serverStatus == 0) {
            serverStatus = 1;
        }
    } else {
        Spark.connect();
        serverStatus = 0;
    }

Am I missing something, or is it a bug?


Has anyone tried it? Or perhaps Spark employees can investigate it, @zach?

I can take a look at this in the next day or two. From what I know of how this works, this is likely a bug, since MANUAL mode expects user code to call Spark.process(), but that is of course not possible during an OTA update.

Until a fix is available, can you use SEMI_AUTOMATIC mode?
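For reference, the MANUAL-mode contract looks roughly like this (a sketch based on the docs, not @markopraakli's exact code):

    SYSTEM_MODE(MANUAL);

    void setup() {
        Spark.connect();        // you bring the connection up yourself
    }

    void loop() {
        if (Spark.connected()) {
            Spark.process();    // and you pump the cloud yourself
        }
        // ... application work ...
    }

If nothing calls Spark.process() once the OTA flash starts, the update stalls, which would match the solid magenta described above.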

I’m trying to help debug this, but my core will not even enter breathing cyan in SEMI_AUTOMATIC or MANUAL mode…

Weird!!


That's by design. With those two modes, the core starts off disconnected. (The change to semi-automatic was quite recent.)

The semi-automatic mode will not attempt to connect the Core to the Cloud automatically.

http://docs.spark.io/firmware/#advanced-system-modes

So you need to put Spark.connect() in setup() to get it to connect.

I wrote code to connect like @markopraakli, but my core keeps blinking green forever :open_mouth:

Update

Alright so let’s ignore my code. I got it working now (somehow). I just tested OTA in SEMI_AUTOMATIC and it’s flashing well now as we speak.

Will jump over to MANUAL next

Try dropping the WiFi calls - they shouldn’t be needed and could be interfering. (I agree this code should work as is, and that the extra WiFi calls should have no effect if not needed.)

I uncommented this to let SPARK_WLAN_Loop() handle Spark.process() instead and was able to perform OTA flashing successfully.

Seems like some flags are not being set in Spark.process()? Just my wild guess. :wink:

I tried the following, but it didn’t work either:

    SPARK_WLAN_Loop();
    Spark.process();

@mdma, I have found the issue :smiley:

Due to that new if condition for the MANUAL case, Spark.process() is not executed just before an OTA update.

See: https://github.com/spark/core-firmware/blob/c8fa2e0b79f9792f6dc9a6bd07697ca300cee9bc/src/spark_wiring.cpp#L666

I added a Spark.process() after the line and tada! OTA :smiley:


Nice! I am repeatedly clicking the "like" button; sadly, Discourse will only give one like!

@zachary has been working in this area recently so he might be interested in looking at the fix.

I think there’s more cleaning up needed than it looks, though…

But it’s probably due to my poor understanding of the entire firmware for OTA flashing.

The SPARK_FLASH_UPDATE flag gets set to 0 in the Spark.process() function even while OTA is happening. I wonder what the rationale is…

I have used all three modes and found some minor issues that appear to have been resolved. I initially used MANUAL mode and called Spark.process() every second or so in my main loop. However, when I tried Spark.connect() I had the breathing cyan LED but no connection. That was fine for working on a local network, but to synchronise the time I had to download a Unix timestamp over the local TCP/IP connection, as I couldn’t use Spark.syncTime().

Next, I started using SEMI_AUTOMATIC mode and this was perfect for my code, allowing it to work on a local network and then calling Spark.connect() when required to download new firmware or to use Spark cloud functionality such as Spark.variable() and Spark.syncTime(). However, I had to call Spark.process() regularly in my main loop - this is not made clear in the documentation.
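A sketch of that pattern (the one-second interval and the lastProcess helper are just illustrations; pick a cadence that suits your timing constraints):

    SYSTEM_MODE(SEMI_AUTOMATIC);

    unsigned long lastProcess = 0;   // hypothetical helper for throttling

    void loop() {
        // ... time-critical local work here ...

        // assumes Spark.connect() has been called when the cloud is required
        if (Spark.connected() && millis() - lastProcess > 1000) {
            lastProcess = millis();
            Spark.process();         // service the cloud connection regularly
        }
    }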

AUTOMATIC mode took care of everything but I prefer the above SEMI_AUTOMATIC to minimise any critical timing issues.

Overall I would say this functionality is a great step forward for those of us who want to dip in and out of the Cloud.