If my delay is greater than 200 ms, the core “freezes” as described in all the cyan-blink and cloud-connection-lost posts: REST GETs are sometimes blocked, and events are published irregularly.
If it’s smaller than 200 ms, REST GETs work fine, but no events are published.
I’m not sure if this is what’s causing your problems, but there’s currently a limit on how many publishes can be made in a given time frame. At the moment that number is 1 per second, with bursts of up to 4 per second allowed. I haven’t found this in the docs, which might explain why you haven’t been able to find it. (@bko, could you perhaps confirm this / add it to the docs? It seems fairly relevant.)
Could you try an even greater delay, let’s say 1000+ ms?
It might just be that the Cloud is kicking you out since you’re “spamming” it with publishes. If it’s limited to a maximum of four per second, and your highest tolerated delay puts you above that (1 / 0.2 s = 5 publishes per second), then I guess it’s protecting itself. But that’s only a guess; I might be completely off. Just give it a try and see what it does. It won’t hurt.
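In the meantime, a minimal way to stay under that limit without one long blocking delay() is to pace publishes off millis(). Just a sketch, with a made-up event name:

unsigned long lastPublish = 0;

void loop()
{
    // Publish at most once per second, well inside the
    // 1/s-with-bursts-of-4 limit mentioned above.
    if (millis() - lastPublish >= 1000)
    {
        lastPublish = millis();
        Spark.publish("my_event", "hello");
    }
    // No delay() here: loop() returns quickly, so the core keeps
    // servicing the cloud connection between publishes.
}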
Depending on whether or not you actually require a resolution of 5 publishes per second, there are other options available (TCP springs to mind?).
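If you really do need 5+ samples a second, pushing them over a raw TCP socket to your own server would sidestep the publish limit entirely. A rough sketch, with a made-up listener address and no reconnect handling, just to show the idea:

TCPClient client;

void setup()
{
    // Hypothetical listener on your LAN; replace with your own server.
    client.connect(IPAddress(192, 168, 1, 100), 8080);
}

void loop()
{
    if (client.connected())
    {
        client.println(analogRead(A7)); // one text line per sample
    }
    delay(200); // ~5 samples per second
}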
Is there a way in the CLI to see what is being published?
For example:
C:\>spark list
Checking with the cloud...
Retrieving cores... (this might take a few seconds)
Spark Dev Unit A (**********************************) is online
Variables:
uptime (string)
ssid (string)
temperature (int32)
pressure (int32)
altitude (int32)
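For what it’s worth: if I remember right, newer builds of the CLI also have a subscribe command that tails your event stream, so you can watch events as they’re published. The exact name and flags may vary by version, so treat this as an assumption:
C:\>spark subscribe mine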
delay(1000) still hangs; delay(2000) loses the connection sometimes and has problems publishing and answering REST requests simultaneously (HTTP 408). I’ll try delay(15000) to have consistent time to use the REST API.
Update: a 15-second delay locks the Spark Core completely. Trying to access it from atomiot or REST Client gives HTTP 408.
This is the 4th time I’ve reset my device; it seems impossible to get publish and delay to work together.
double temperature = 0;
int rawtemperature = 0;
char temp1[64];
char temp2[64];

void setup()
{
    Serial.begin(9600);
    Serial.println("Starting...");

    // Register Spark variables here
    Spark.variable("temperature", &temperature, DOUBLE);
    Spark.variable("rawtemp", &rawtemperature, INT);

    // Connect the temperature sensor to A7 and configure it
    // to be an input
    pinMode(A7, INPUT);
}
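The sketch above ends before loop(). Assuming it follows the publish-plus-delay pattern under discussion, the missing part presumably looks something like this (conversion factor and event name are guesses):

void loop()
{
    // Read the sensor and update the variables registered in setup().
    rawtemperature = analogRead(A7);             // raw ADC reading, 0-4095
    temperature = rawtemperature * 3.3 / 4095.0; // placeholder conversion, sensor-dependent

    // Publish the raw reading, then wait; this delay is what the
    // whole thread is about.
    sprintf(temp1, "%d", rawtemperature);
    Spark.publish("rawtemp", temp1);
    delay(2000);
}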
@Barabba Hmm… Everything looks fine. I have an app that is working with Spark.publish and delay(2000). It reads a lot of the network / cloud settings, displays them on a LCD display and tracks cloud disconnects and WiFi disconnects. So far in the past 50 mins I have had 11 cloud disconnects and 1 WiFi disconnect.
I have occasionally had one-second hangs where my loop won’t execute, but I’m still trying to figure out what is going on when that happens. What version of the Core are you using, Black or White? Not sure if that makes a difference.
By REST calls you mean reading variables registered with Spark.variable() via the cloud API, right? I know that I can’t read variables over the cloud faster than about once per second without having failures. Part of that is that I am about 0.135 seconds round-trip from the cloud, and there are two round-trips for every variable read: PC to cloud, cloud to Spark Core, back to cloud, back to PC. That’s roughly 0.27 seconds of latency alone, before the core does any work.
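For reference, a variable read of this sort is just a GET against the cloud API; the device ID and access token below are placeholders:

curl "https://api.spark.io/v1/devices/YOUR_DEVICE_ID/temperature?access_token=YOUR_TOKEN"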
I’m checking my variable once while publishing every 2 seconds, and it returns a timeout message.
The HTTP GET runs for 30 seconds before timing out. I check again 5-6 seconds after the timeout, to let everything return to a consistent state, but it fails again.
Now, without publish and delay, I’m able to receive a GET response every second, so I’m pretty sure something is blocking the cloud connection.
@Barabba I have not tried to read the Spark variables from the core at the same time. I have a few variables, so I will run a test trying to read them every 15 s as well.
@Dave - yes, I have that running, so I see the data when it arrives; I was just hoping I could leverage the Google datastore, Drive, … and build some graphs and dashboards. Still researching whether I need to write an intermediate app or whether I can publish to Google directly.