Incorrect value read for Spark.Variable via API

Hi,

I’m seeing an occasional incorrect Spark.variable() value when reading via the API. This has happened on more than one device/app, so it’s not something in my firmware. The variable appears to take on the value of a different variable.

The most recent case is a variable that is only ever assigned a value of 0 or 1, and should not have changed from 0 without outside interaction, yet it was read back as 4827. The JSON from the API call is below.

I’ve removed the device id for obvious reasons.

2015-09-15 01:21:40.6675|TRACE|Read variable state for , Value: {
  "cmd": "VarReturn",
  "name": "state",
  "result": 4827,
  "coreInfo": {
    "last_app": "",
    "last_heard": "2015-09-15T00:21:39.974Z",
    "connected": true,
    "last_handshake_at": "2015-09-14T10:49:16.182Z",
    "deviceID": "2d0.............739",
    "product_id": 6
  }
}

Yesterday I had something similar: the value of a variable named ‘humidity’ was returned for a request for the ‘temperature’ variable. The two values are identical to many decimal places, and it was not an incorrect sensor read, because I also publish the values when they are read from the sensor and receive them via the streaming API, so I knew exactly what had been read. See below: the first 2 lines are the published values (which are sensible, as the sensors are in a humidity-controlled environment), followed by the results of the variable reads.

2015-09-14 21:13:42.3825|TRACE|NewSparkEventEvent -> DeviceId: 290...739, EventName: Humidity		 Data: Measured humidity as: 21.450562

2015-09-14 21:13:42.4138|TRACE|NewSparkEventEvent -> DeviceId: 290...739, EventName: Temperature		 Data: Measured temperature as: 27.946829

2015-09-14 21:13:55.2888|TRACE|Read variable humidity for , Value: {
  "cmd": "VarReturn",
  "name": "humidity",
  "result": 21.4505615234375,
  "coreInfo": {
    "last_app": "",
    "last_heard": "2015-09-14T21:13:52.370Z",
    "connected": true,
    "last_handshake_at": "2015-09-14T20:00:02.562Z",
    "deviceID": "290...739",
    "product_id": 6
  }
}
2015-09-14 21:13:55.2888|TRACE|Read variable temperature for , Value: {
  "cmd": "VarReturn",
  "name": "temperature",
  "result": 21.4505615234375,
  "coreInfo": {
    "last_app": "",
    "last_heard": "2015-09-14T21:13:53.253Z",
    "connected": true,
    "last_handshake_at": "2015-09-14T20:00:02.562Z",
    "deviceID": "290.....739",
    "product_id": 6
  }
}

Both of these devices have been flashed recently (within the last 7 days or so), so they should be running the latest Particle firmware.

I also constantly see last_app as “”; I’m guessing that’s just broken?

Cheers.

Steve.

I noticed this peculiarity when I used a datatype of

bool lightOn = true;

and a variable like this:

Particle.variable("backLight", &lightOn, INT);

switching to:

int lightOn = 1;

solved it.

Are you using mixed datatypes?

Particle.variable() supports INT, DOUBLE, or STRING.


There is an issue about this in the core-communication-lib repo: https://github.com/spark/core-communication-lib/issues/26

Since the firmware is single-threaded, and the request and response are handled as one continuous flow, interleaving of requests on the device is impossible, so I can’t see how this could be a firmware issue. cc @Dave

I’ve pinged the cloud guys, so hopefully they’ll find time to look into this!


Hey all,

I’ll look into this, but generally speaking I think it tends to happen when many variable requests are sent very quickly. Based on your logs, the requests were sent nearly simultaneously, so that fits the scenario. As a short-term workaround, spacing the requests out slightly, even by 50–100 milliseconds, should fix it for now.

Thanks!
David

Hi,

Thanks for all the feedback and looking into this.

I have already refactored things to introduce delays, since I noticed it was going a bit too fast; funny how digging into a log file helps with that :smile:

Part of the problem was that it’s actually 2 sets of requests to read all the variables from that device, supposedly 10 s apart, but the first set of requests was slow (I’m guessing due to rate limiting) and so they ended up overlapping.

It would be really nice to have a single API request that reads all the variable values in one go. I’m a bit surprised you don’t have that already, as handling numerous requests per device is a lot of overhead for your platform, particularly when each device can expose up to 12 variables.

As it is now, it only takes 10 devices with 6 variables each, read every minute, to be up against the API rate limit (going by what I’ve read in other threads), which is easily achieved. Likewise, if the variable reads are spaced out and each has a 1 s round trip, a device with a dozen variables takes 12 s to read fully, which is a painfully long time.

Cheers,

Steve.


I’m not sure there’s a rate limit on variable requests; if I’m correct, SSEs and webhooks are the ones being limited.
Would it perhaps be possible to concatenate your variables on the device, expose them as a single variable, and parse them on the receiving end? That depends on your use case, but if the goal is to get them all at once, that would help. You can still keep separate variables for the times you need to check only one of them, although it shouldn’t make a difference if you’re parsing them on the receiving end.


Hi @Moors7

The big problem here is that this is for Tinamous.com, an IoT platform where members can add their own Particle devices through our integration with the API, hence I have very little control over how, what, or how many variables are exposed.

The other, minor issue is that trying to pack a number of variables into a single variable gets messy from a coding and string-handling perspective, and is easily broken if you’re trying to construct JSON with quotes all over the place! Tinamous does support the Spark.publish() route of getting data using the SenML format, and that’s obviously better for rapid updates and multiple variables, but stuffing a simple value into a Spark.variable()/Particle.variable() is such a nice, simple way to get up and running quickly.

From the API perspective, there’s a lot of overhead in terms of data and requests when each variable has to be read individually. I would imagine it would be preferable for the Particle platform to allow multiple variables to be read in a single request rather than multiple requests; anything that reduces requests and bandwidth is good, right :slight_smile:

Hey folks, we have identified a bug that would cause this behavior in rare scenarios. We haven’t deployed the fix yet, but I will update this thread when that happens.