New feature: control your connection!


#1

This content is pulled from our blog post:

One of our goals with the Spark Core and Spark OS was to abstract away the connectivity layer. When you’re running a distributed OS where some of your software runs on the device and some of your software runs in the cloud, you want the connection between the two to “just work”.

However, sometimes you don’t want everything to be automatic; you want to take control of the connection, so you can decide when the device should try to connect and when it shouldn’t. This is particularly helpful when you want your application code to start running immediately as soon as the device is powered, and the connectivity stuff can happen later on.

As of today, the Spark Core has three modes: AUTOMATIC, SEMI_AUTOMATIC, and MANUAL. Let’s go through each of them in turn.

Automatic mode

The default mode of the Spark Core is “automatic mode”. This means that the Core will attempt to connect to Wi-Fi automatically. If you don’t explicitly define the connection mode, the Core will run in automatic mode. This is identical to how the Spark Core has worked up until now.

Behind the scenes, what’s running on the Spark Core looks something like this:

void main() {
  // First, connect to the internet
  Spark.connect();

  // Then run the user-defined setup function
  setup();

  while (1) {
    // Then alternate between processing messages to and from the Cloud...
    Spark.process();

    // ...and running the user-defined loop function
    loop();

  }
}

But the whole point of the automatic mode is you don’t really need to know that. The Wi-Fi connection just works. So let’s say your code looks like this:

// You don't have to add this, but if you want to be explicit:
SYSTEM_MODE(AUTOMATIC);

void setup() {
  pinMode(D7, OUTPUT);
}

void loop() {
  digitalWrite(D7, HIGH);
  delay(500);
  digitalWrite(D7, LOW);
  delay(500);
}

What’s actually happening is that first we’re calling Spark.connect(), which will connect the device to the Cloud. Once it’s connected, then your code will run, and your loop() will alternate with Spark.process() so that we can process incoming messages in something that resembles a background process. (Side note: Spark.process() also runs during delays).

Ok, that’s all well and good, but what if I don’t know whether my Spark Core will have an internet connection? I still want my LED to blink. So now we’ve got:

Semi-automatic mode

// Insert firearm metaphor here
SYSTEM_MODE(SEMI_AUTOMATIC);

void setup() {
  pinMode(D7, OUTPUT);
  attachInterrupt(D0, connect, FALLING);
}

void loop() {
  digitalWrite(D7, HIGH);
  delay(500);
  digitalWrite(D7, LOW);
  delay(500);
}

void connect() {
  if (Spark.connected() == false) {
    Spark.connect();
  }
}

In this version of the code, when the Spark Core is plugged in, the LED will immediately start blinking. When a button attached to D0 is pressed (bringing D0 LOW), Spark.connect() will be called. If the Spark Core already has Wi-Fi credentials in memory, it will attempt to connect; otherwise, it will enter listening mode and wait to receive your network name and password through the Spark mobile app or over USB.

The main difference between SEMI_AUTOMATIC mode and AUTOMATIC mode is that Spark.connect() is not called at the beginning of your code; you have to call it yourself. Let’s go deeper down the rabbit hole with:

Manual mode

The Spark Core’s manual mode puts everything in your hands. This mode gives you a lot of rope to hang yourself with, so tread cautiously.

Like SEMI_AUTOMATIC mode, in MANUAL mode you need to connect to the Cloud using Spark.connect() yourself. However, in manual mode, the Core will not call Spark.process() automatically; you have to call it yourself. So your code might look like this:

SYSTEM_MODE(MANUAL);

void setup() {
  pinMode(D7, OUTPUT);
  attachInterrupt(D0, connect, FALLING);
}

void loop() {
  digitalWrite(D7, HIGH);
  Spark.process();
  delay(500);
  digitalWrite(D7, LOW);
  Spark.process();
  delay(500);
}

void connect() {
  if (Spark.connected() == false) {
    Spark.connect();
  }
}

You must call Spark.process() as frequently as possible to process messages from the Wi-Fi module. If you do not do so, you will encounter erratic behavior, such as:

  • The Core losing its connection to the Cloud
  • The Core breathing cyan when in fact it is not connected
  • Long delays when a request is sent to the Core because the Core won’t respond until it’s processed the message

Sounds kinda terrible, right? Except this can be really useful when you’re writing code that is very sensitive to exact timing, and the Spark.process() call might interrupt your sensitive code. By turning on MANUAL mode, you can make sure that Spark.process() is called when you want, and not when the processor is busy with a time-sensitive task.

As Stan Lee once said: with great power comes great responsibility. Go forth and control the connection. Be careful. Good luck.


#2

Create a timeout for Spark.connect()?
#3

A lot of people will be very happy with this development. Yet another job very well done guys (and girls!), thanks a lot!


#4

How will WiFi.connect()/WiFi.disconnect() be handled in MANUAL mode?

So if I am in MANUAL mode and I need TCPClient/TCPServer, do I need to turn Wi-Fi on, then check the connection and proceed?


#5

the distinction between connecting to wifi and connecting to the cloud is unclear.

should we not have automatic modes:

  1. don’t connect to wifi
  2. connect to wifi but not the cloud
  3. connect to wifi and the cloud

and manual versions of each of the above?

or do we just add the usual #include spark_disable_cloud.h ?

my use case is i want to sleep wifi to preserve battery life, perhaps turning back on with an interrupt or timer but never connect to the cloud


#6

#include spark_disable_cloud.h is no longer necessary; now you do the same thing with system modes.

Also for a deeper explanation of things you can do, check out the new firmware docs, which have had a major update to include a whole new section for the WiFi class to control the Wi-Fi connection:

http://docs.spark.io/firmware/

(@sej7278 and @carsten4207 I think the docs will answer your questions, but if not please let me know)


#7

Don’t connect to Wi-Fi: just do SYSTEM_MODE(MANUAL) and never call Spark.connect() or WiFi.connect().

Connect to Wi-Fi but not the Cloud: do SYSTEM_MODE(MANUAL) and call WiFi.connect() but not Spark.connect()

Connect to Wi-Fi and the Cloud: do any of the system modes, and if it’s not automatic, then call Spark.connect()


#9

lol, i just deleted instead of edited my post!

anyway, @zach is there any chance of you guys looking into static ip? i’ve got it working from within an application here, but haven’t got a nice way to configure it or put it globally into the firmware.


#10

This is a great addition that allows for better control. Unfortunately my webserver project is having stability problems when I leave the cloud connection on. Without a connection the stability is much better. Thank you for this new feature!

P.S. After the latest update the Network functions were deprecated and now give compilation errors. I noticed that although the WiFi functions are now properly documented, some of the code examples (such as the IPAddress example) still mention the Network class instead of WiFi. This may hinder people that try to use the examples…


#11

well that’s bound to happen really isn’t it - if your webserver is using a lot of cpu/bandwidth, then having a side process communicating with the internet constantly is not going to help.

i had a similar problem when constantly printing to serial whilst running a telnet server without any delay() calls.

you can totally disable the cloud by adding this to the top of your sketch:

#include "spark_disable_cloud.h"

#12

Just a heads up - you don’t need this anymore! The new SYSTEM_MODE(MANUAL) followed by WiFi.connect() is the way to have Wi-Fi but disable the cloud now. (I imagine spark_disable_cloud.h / spark_disable_wlan.h will be deprecated/removed in future.)


#13

yeah maybe, although i prefer to just include the file than call a function and a macro


#14

From the docs, it looks like you get wifi but no cloud with the SEMI_AUTOMATIC mode. In this case you can just replace the include with SYSTEM_MODE(SEMI_AUTOMATIC), no need for the function call.

EDIT: the docs aren’t clear about whether wifi is enabled initially in SEMI_AUTOMATIC mode, or only enabled as part of connecting to the cloud.


#15

I’ve been using #include "spark_disable_cloud.h" and #include "spark_disable_wlan.h". Will I still be able to use these though!?

If not…

I’m assuming I have to use Spark.process() extensively along with what you said?


#16

The docs and the process have been clarified a bit more since I wrote that.

Those two includes are now deprecated and cannot be used. (You’ll get an error and a polite request to use SYSTEM_MODE instead.)

If you don’t want control over the spark processing, just the connection, use SEMI_AUTOMATIC as the mode. Then you decide when the cloud connects/disconnects, but otherwise it takes care of itself.


#17

i’m still confused. according to the docs, spark_disable_cloud.h is essentially replaced by SYSTEM_MODE(MANUAL); there’s no mention of WiFi.connect()


#18

How is connection loss / reconnection handled in mode SEMI_AUTOMATIC?

Will the Spark reconnect automatically if it loses connection to the cloud, or can I manually detect that it lost connection and reconnect via Spark.connect() ?

Due to issue https://github.com/spark/core-firmware/issues/278 I try to do

if (!Spark.connected()) {
  Spark.connect();
  Spark.subscribe("myevent", myhandler, MY_DEVICES);
}

in the main loop to reinstate my subscription on a reconnect event. Could that work?


#19

Hi @michael1

With semi-automatic, my understanding is that you manage the connection, and once connected, the system takes care of the background processing, so no need to call Spark.process(). But I don’t know if it automatically takes care of reconnecting once the connection is lost. I can take some time to look at the code/write some tests and get back to you.


#20

Thanks @mdma !

Do you know if it would be harmful to call Spark.subscribe() with the same, and possibly already existing, subscription e.g. every 10min to keep the subscription over reconnects?


#21

Hi, is it strictly necessary that the button in SEMI_AUTOMATIC mode is attached to D0? Could it be another pin (e.g. D6)?