Strange new event appearing in my Photons


#1
{"data":"{\"service\":{\"device\":{\"status\":\"unsupported\"},\"coap\":{\"round_trip\":105},\"cloud\":{\"uptime\":1,\"publish\":{\"sent\":1}}}}","ttl":60,"published_at":"2017-11-21T18:41:52.760Z","coreid":"24XXXXXXXXXXXXXXXXXXXXXXXXX","name":"spark/device/diagnostics/update"}

The Photons are both running firmware 0.6.3. This event started appearing this morning; it occurs apparently spontaneously as well as on power-up. The Photons appear to be functioning as expected otherwise. Does anyone know what this is about?
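For anyone inspecting these events: the "data" field is itself a JSON-encoded string, so it takes two decode passes to reach the diagnostics fields. A minimal Python sketch using the event above (with the device ID omitted):

```python
import json

# The raw event as delivered by the Particle event stream
# (device ID omitted, payload copied from the post above).
raw = ('{"data":"{\\"service\\":{\\"device\\":{\\"status\\":\\"unsupported\\"},'
       '\\"coap\\":{\\"round_trip\\":105},\\"cloud\\":{\\"uptime\\":1,'
       '\\"publish\\":{\\"sent\\":1}}}}","ttl":60,'
       '"published_at":"2017-11-21T18:41:52.760Z",'
       '"name":"spark/device/diagnostics/update"}')

event = json.loads(raw)              # first pass: the event envelope
payload = json.loads(event["data"])  # second pass: the embedded diagnostics

print(event["name"])                           # spark/device/diagnostics/update
print(payload["service"]["device"]["status"])  # unsupported
```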


The device keeps publishing status "unsupported" via spark/device/diagnostics/update, and my code is not running.
I'm also getting a weird message on my screen:
"Safe mode updater aborted." What causes this?
#2

I have 4 devices set up as a product, all running the same firmware. One of my devices is constantly going offline and coming back online, and I get this same event (spark/device/diagnostics/update) every time it rejoins. So I’m also looking for answers…


#3

We are getting the same/similar message on Raspberry Pi, and the firmware also hangs after 30–60 seconds. Devices configured last week are working; this only affects the two devices we set up yesterday.


#4

Same here.
Two Photons with 0.7.0-rc.3 and one with 0.6.3.
It also seems to be confusing my network at home…
Location: Germany


#5

Same here,

This is happening on one of seven company Electrons currently deployed in the field.

Running system firmware v0.6.2.

I can still OTA flash it if I catch it right after it connects.

An OTA flash of the same application firmware stops the connecting/disconnecting loop and re-registers my application’s Particle.variables and Particle.functions; however, the error message is still there.

@rickkas7 or @ScruffR - have you noticed these cryptic error events being generated all of a sudden or know what could cause such a message?

{"data":"{\"service\":{\"device\":{\"status\":\"unsupported\"},\"coap\":{\"round_trip\":1679},\"cloud\":{\"uptime\":2,\"publish\":{\"sent\":0}}}}","ttl":60,"published_at":"2017-11-23T16:32:32.016Z"


#6

Particle is on Thanksgiving break. These strange messages will be explained when they return, and my understanding is that they are nothing to panic about. :wink:


#7

Also, that event should be published from the server side, so there’s no need to worry about it. :slight_smile:


#8

Hi folks, same issue here.
Previously my Photon somehow went into safe mode permanently and published messages like “too many tries”. I then updated the Particle CLI and ran particle update. Now the device is flashing cyan (normal mode), but even my blinking-LED code is not working, and it keeps publishing this:
spark/device/diagnostics/update

{"data":"{\"service\":{\"device\":{\"status\":\"unsupported\"},\"coap\":{\"round_trip\":997},\"cloud\":{\"uptime\":2,\"publish\":{\"sent\":1}}}}","ttl":60,"published_at":"2017-11-23T09:46:22.244Z","coreid":"1c003b000247343337373738","name":"spark/device/diagnostics/update"}

If anyone knows what this condition is and how to fix it, please help.
Thanks in advance.


#9

This event is not an issue, but prep work for a new feature yet to be announced :wink:


#10

I hope you’re right, but it seems a few of us are having issues with publish and subscribe around the time this showed up. Hopefully it’s a coincidence that can be managed quickly.


#11

If this is an update that will let me remotely manage Particle system firmware just like I can manage my application firmware… I would be so happy.


#12

Super curious what the new features or maybe hardware that’s about to be announced might be :slight_smile:

I have no idea what’s coming :robot:


#13

I’m having the same issue with the Electron appearing to lose its cloud connection.

It runs perfectly for hours or days at a time, publishing every 10 minutes. Recently, though, it just disappears mysteriously for a few hours and reconnects at random times. The code appears to be running fine, and the reconnections are mostly on schedule, but the connection itself is unreliable. I sometimes get the same diagnostics message when it reconnects, usually without the published data I want.

I’m fairly new to using the Electron. Is it common that the service has issues like this, and is there a good way to make it more reliable?


#14

This event may not be an issue, but even so, my blinking-LED code is not working :frowning2:


#15

Maybe these two things are unrelated, or only indirectly related.

Can you post the exact code you are running that doesn’t work? My test devices behave as expected.


#17

My Photon breathes cyan for about a minute, then changes to breathing green, and periodically posts spark/device/diagnostics/update messages similar to the one in the OP:

{"data":"{\"service\":{\"device\":{\"status\":\"unsupported\"},\"coap\":{\"round_trip\":null},\"cloud\":{\"uptime\":44,\"publish\":{\"sent\":1}}}}","ttl":60,"published_at":"2017-11-27T02:25:30.559Z"

The code, which ran without issues for a couple of weeks before the messages started appearing a few days ago, doesn’t appear to be executing at all now. If it makes a difference, my code uses the SparkFun LSM9DS1 and MAX17043 libraries to transmit accelerometer/gyroscope and battery charge data over UDP. Running firmware v0.6.3.

I have another Photon at the office I can try to flash with the same code to see if it has the same issues, but I ran the exact same code on that one without any problems five or six days ago.


#18

Having the same issue: my Photon breathes cyan for just a moment but seems to never actually connect to the cloud before turning green and publishing the {“status”:“unsupported”} message. The code was running fine a few days ago, and the problem was not resolved even after running Device Doctor and reflashing via serial.


#19

I’m having the same issue with my devices. Now I’m trying to figure out what happens in my Raspberry Pi test environment: every time I publish an event, I get the “unsupported” event.

Here’s the code (very simple!):

unsigned long publishMillis;
const unsigned long publishTimer = 5 * 60 * 1000; // 5 minutes (was 1800000)

void setup() {
    Serial.begin(9600);

    //Particle.variable("SoC", currentSoc);
    Serial.println("");
    Serial.print(Time.timeStr());
    Serial.println(" System is ready!");

    delay(1000);

    // Start the timer now; the subtraction in loop() is rollover-safe
    // with unsigned arithmetic, so no offset is needed here.
    publishMillis = millis();
}

void loop() {
    if (millis() - publishMillis >= publishTimer) {
        Serial.print(Time.timeStr());
        Serial.println(" Fake publish");
        Particle.publish("energyMonitor", "99.99", PRIVATE, NO_ACK);
        publishMillis = millis();
    }
}
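As an aside, the "millis() - publishMillis >= publishTimer" comparison above is the rollover-safe timing pattern: with 32-bit unsigned arithmetic the subtraction stays correct even when millis() wraps after roughly 49.7 days. A quick Python simulation of the wraparound (masking with 0xFFFFFFFF stands in for unsigned long; the helper name is just for illustration):

```python
MASK = 0xFFFFFFFF  # simulate 32-bit unsigned long arithmetic

def elapsed(now, then):
    """Rollover-safe elapsed time, as in (millis() - publishMillis)."""
    return (now - then) & MASK

# Timer started 100 ms before millis() wraps; "now" is 200 ms after the wrap.
then = MASK - 100
now = 200
print(elapsed(now, then))  # 301
```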

and below is the event I got:

{"data":"{\"service\":{\"device\":{\"status\":\"unsupported\"},\"coap\":{\"round_trip\":119},\"cloud\":{\"uptime\":1,\"publish\":{\"sent\":1}}}}","ttl":60,"published_at":"2017-11-27T14:50:08.937Z","coreid":"7ab710a968bee0f481c8049a","name":"spark/device/diagnostics/update"}

with the exact timestamp of the energyMonitor event.

Any idea?


#20

Update: I’ve changed the code to remove the Particle.publish, and I still get the “unsupported” event on every flash or reset…


#21

Hey all,

Jeff here from the Particle team. Sorry that folks have been running into issues here, that’s frustrating! Hopefully I can clear things up a bit.

As @ScruffR mentioned, the new event that you’re seeing is part of a new feature we are starting to roll out. There’s nothing wrong with your device if you’re seeing this event — in fact, as it stands now, all devices will automatically publish this event on handshake with the cloud. However, this feature is not quite GA yet, and we should have been more proactive about communicating what this event means. We’ve decided that for now, we’re going to stop this event from being automatically published, to avoid confusion. We’ll work on a fix today and update here when it’s done.

As far as the issues reported here: the addition of this event should not impact any other device behaviors (connectivity, pub/sub, etc.). There’s also enough variability in the reports that my gut feeling says these issues are caused by something other than this new event. However, we always like to err on the side of caution. Once we remove the event from being published, I’d love your help reporting back on whether you’re still seeing problems.
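In the meantime, if the extra events are noisy for an integration that consumes the event stream, they can simply be ignored by name on the client side. A rough sketch in Python (the event-dict shape mirrors the JSON examples in this thread; the helper name is just for illustration):

```python
DIAGNOSTICS_EVENT = "spark/device/diagnostics/update"

def without_diagnostics(events):
    """Drop the automatic diagnostics events, keep everything else."""
    return [e for e in events if e.get("name") != DIAGNOSTICS_EVENT]

# Example: one diagnostics event mixed in with an application event.
events = [
    {"name": "spark/device/diagnostics/update", "data": "{...}"},
    {"name": "energyMonitor", "data": "99.99"},
]
print([e["name"] for e in without_diagnostics(events)])  # ['energyMonitor']
```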

Thanks, and I appreciate you all taking the time to report these problems!

Jeff