Photon connected to internet, but no Particle cloud

I have a Photon that was working properly and then lost connectivity to Particle.
It's shown as disconnected in the console and, obviously, is not triggering webhooks.
But it is connected to the internet, and continues to publish periodic logs to Papertrail.

What could cause this? Is it possible to debug this on the Particle side?
On my side, there have been no changes to the firewall.

The device is located remotely, so I can't go connect to it over USB.

Get someone to power cycle it!

What Device OS are you using and what are the basic settings and checks you have put around Particle.publish()?
Are you sure that nothing has changed with the Particle meta-data - owner and the like?

I use Particle.connected() prior to any publish call.
But the problem is not with Particle.publish(). Why is the device shown as disconnected in the console when it is connected to the internet?

Because the cloud is not permanently pinging each device to see whether it is still there. That would cause a horrendous amount of data traffic (particularly for cellular devices).
Instead, the cloud expects the device to check in regularly at some interval (depending on platform), and only after two of these intervals have been missed does the cloud assume the device has gone offline.
A similar thing is true for being online but shown as offline. The device immediately knows whether it is online, but the cloud only learns that fact after the device makes itself known by actively “talking”.

However, we need some clarification.
When you say the device is connected to the internet but not the cloud: What did the RGB LED do at the time?
Being connected to the internet but not to the cloud is a perfectly valid state and may be used intentionally.

SYSTEM_MODE(SEMI_AUTOMATIC)
void setup() {
  WiFi.connect(); // connect to the internet only
}
void loop() {
  if (!(Particle.connected() || digitalRead(BTN))) // trigger cloud connection via MODE button (active low)
    Particle.connect();
}

Possible causes for a failure to reconnect to the cloud after a lost connection may (among others) be heap fragmentation or memory leaks. Either of these may be caused by your code but only show up after long periods and in multiple, unpredictable ways.


Thanks for the info, @ScruffR.
I was hoping it would be possible to trigger the re-connection from the cloud side.

My applications (as for many users, probably) involve hard-to-access devices, so maintaining remote connectivity is critical. I am focusing a lot of effort on understanding how to make the device as robust as possible.

Unfortunately there is no way to know what the LED did at the time; the device is locked up in a panel 200 miles away. Not so easy to reset the device either.

That’s definitely a possibility. But loop() is still running, as the device never stopped logging periodically.

Oct 19 10:56:19 27001c000d47353136383631 [app] WARN: -- MARK --
Oct 19 11:04:47 27001c000d47353136383631 [app] WARN: -- MARK --
Oct 19 11:13:15 27001c000d47353136383631 [app] WARN: -- MARK --
Oct 19 11:21:43 27001c000d47353136383631 [app] WARN: -- MARK --

I have tested the code below to deal with cloud connection loss; I would prefer not to continue running the code while trying to reconnect.

    // attempt reconnect if cloud connection lost
    if ( !Particle.connected() ) {
        if ( reconnTry++ == 0 ) {
            CloudLostMillis       = now;
            deltaCloudLostMillis  = 0;
            rc = CONN_LOST;
        }

        // attempt reconnection
        deltaCloudLostMillis = now - CloudLostMillis;

        // reconnection attempts timed out --> system reset
        if ( deltaCloudLostMillis > RECONN_TO_MS )
            *resetReason = PARTICLE_TO;
    }

Unfortunately that doesn’t count as evidence against the memory leak/heap fragmentation hypothesis. Well-written code would fail gracefully under these circumstances and hence keep loop() running but merely unable to do its job.

This can be done like this:

  if (!waitFor(Particle.connected, 60000)) {
    // deal with failure to connect
  }

I misspoke – should have said "I would prefer to continue to run the code while trying to reconnect".

And I agree with you, a leak or heap fragmentation caused by my imperfect code is probably the culprit! The code keeps getting better with every update and suggestion.
