Non-ASCII Characters in Spark.Variable()

Since Particle still has not created a function to retrieve multiple variables at once, I’m trying to pack a few variables into a String data type. This results in mostly non-ASCII characters. Older posts suggested there may be an issue with this. Can anyone confirm whether this is still a problem and whether a fix is in the works?

Example:

  char arr[] = {0x25, 0x75, 0xA5, 0xC5, 0x01};
  String tst_str = "";
  for (byte i = 0; i < sizeof(arr); i++)
    tst_str = tst_str + String(arr[i]);
  Particle.variable("tst_str", tst_str);

PHP Backend:

    $x = getRemoteVal($spark, "tst_str");
    error_log($x);

I have tested that this works in firmware and can pass the raw hex data out via a serial port, but my web application pulling the variable always gets back the Unicode character 0xFFFD. The web back end runs PHP, so that could be the issue, but I believe the problem is in the Particle Cloud.
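
For reference, the serial check was just dumping the packed buffer as hex, roughly like this (the buffer matches the example above; baud rate and timing are arbitrary):

    // Roughly the serial test mentioned above: print each packed byte in hex.
    char arr[] = {0x25, 0x75, 0xA5, 0xC5, 0x01};

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      for (size_t i = 0; i < sizeof(arr); i++) {
        Serial.print((uint8_t)arr[i], HEX);  // each byte in hex (not zero-padded)
        Serial.print(' ');
      }
      Serial.println();
      delay(1000);
    }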

Can anyone help?

My next test step may be to try getting the variable with Python, just to rule out PHP as the issue.

The easy way to get the variable is to use the CLI (if you have that installed). When I test your code and get the variable that way, I get the 0xFFFD character followed by a “T”. So it appears that the problem is with the cloud. Why do you need non-ASCII characters? How are you packing your variables into a string? How many variables do you have, and how long (what type) are they?

BTW, I don’t know why you would convert your char array to a String. You can use your char array directly:

char arr[] = {0x25, 0x75, 0xA5, 0xC5, 0x01};
Particle.variable("tst_str", arr);

When I do it this way, I get “%u” followed by two 0xFFFD characters (0x25 and 0x75 are the ASCII characters ‘%’ and ‘u’, so they come through intact), which unfortunately is still not what you’re hoping for.

I’ve been creating JSON strings as variables with good success:

Particle.variable("conditions", conditions, STRING);
...
...
  snprintf(conditions, sizeof(conditions),
           "{\"current\":{\"conditions\":\"%s\",\"temp\":%d,\"humidity\":%d,"
           "\"dewpoint\":%d,\"windDir\":\"%s\",\"windSpeed\":%d}}",
           observation, int(outdoorTemp), int(relHumidity), int(dewPoint),
           windDir, windSpeed);

and using the JavaScript/jQuery getJSON() method to retrieve it:

function getConditions() {
  var condURL = "https://api.spark.io/v1/devices/" + deviceID + "/" + deviceVar + "/?access_token=" + accessToken;
  $.getJSON(condURL, function(data) {
    var currentConditions = JSON.parse(data.result);
    console.log(currentConditions);
    document.getElementById("currentTemp").innerHTML = "Outside Temp: " + currentConditions.current.temp + "&deg;F";
    document.getElementById("currentDewPoint").innerHTML = "Dew Point: " + currentConditions.current.dewpoint + "&deg;F";
    document.getElementById("currentHumidity").innerHTML = "Humidity: " + currentConditions.current.humidity + "%";
    document.getElementById("currentConditions").innerHTML = currentConditions.current.conditions;
    document.getElementById("windSpeed").innerHTML = currentConditions.current.windSpeed;
    document.getElementById("windDir").innerHTML = currentConditions.current.windDir;
  });
}

Thanks for testing that out. I’m basically storing 8-bit values, one per char, and then unpacking them on the receiving end. This is to reduce overall data usage: it’s for a telemetry system my company is designing around the Electron module, so every byte we can shave off makes it cheaper.
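
The packing itself is nothing fancy; the field names and sizes below are just placeholders to show the idea:

    // Hypothetical telemetry fields packed into raw bytes (names and sizes are placeholders).
    uint8_t packet[3];

    void packTelemetry(int16_t tenthsDegC, uint8_t batteryPct) {
      packet[0] = (uint8_t)(((uint16_t)tenthsDegC) >> 8);  // temperature, high byte
      packet[1] = (uint8_t)(tenthsDegC & 0xFF);            // temperature, low byte
      packet[2] = batteryPct;                              // battery level
    }

    // The receiving end just reverses the same layout:
    int16_t unpackTemperature(const uint8_t *p) {
      return (int16_t)(((uint16_t)p[0] << 8) | p[1]);
    }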

After reading more posts on this topic, I believe I may switch to Base64 encoding. That way I at least get 6 usable bits per 8-bit character (75% efficiency).
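
In case it helps anyone else, a minimal hand-rolled encoder is only a few lines (this assumes no Base64 library in the firmware; the caller’s output buffer must hold 4 * ((len + 2) / 3) + 1 chars):

    // A minimal, hand-rolled Base64 encoder for a raw byte buffer.
    static const char b64tab[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    void base64Encode(const uint8_t *in, size_t len, char *out) {
      size_t o = 0;
      for (size_t i = 0; i < len; i += 3) {
        uint32_t n = (uint32_t)in[i] << 16;              // pack up to 3 bytes into 24 bits
        if (i + 1 < len) n |= (uint32_t)in[i + 1] << 8;
        if (i + 2 < len) n |= in[i + 2];
        out[o++] = b64tab[(n >> 18) & 0x3F];
        out[o++] = b64tab[(n >> 12) & 0x3F];
        out[o++] = (i + 1 < len) ? b64tab[(n >> 6) & 0x3F] : '=';  // '=' pads short groups
        out[o++] = (i + 2 < len) ? b64tab[n & 0x3F] : '=';
      }
      out[o] = '\0';
    }

Running the five bytes from the example at the top of the thread (0x25 0x75 0xA5 0xC5 0x01) through this gives “JXWlxQE=”, which is plain ASCII and should pass through the cloud untouched.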

Also, just to be complete, I did a Unicode lookup on what “\uFFFD” means. It turns out to be the designated replacement character (the character a UTF-8 decoder substitutes for invalid byte sequences), so that leads me to believe the Particle Cloud replaces non-ASCII chars on purpose.

You might also look at Base85 (Ascii85), which is more efficient than Base64 encoding: it uses 5 characters to encode 4 bytes (80% efficiency) instead of Base64’s 4 characters for 3 bytes (75%).
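
The core of the encoding is just repeated division by 85; a sketch for a single 4-byte group (ignoring partial final groups and the all-zero “z” shortcut) looks roughly like this:

    // Encode one 4-byte group as 5 printable Ascii85 characters ('!' through 'u').
    void ascii85EncodeGroup(const uint8_t in[4], char out[6]) {
      uint32_t n = ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
                   ((uint32_t)in[2] << 8)  | in[3];
      for (int i = 4; i >= 0; i--) {
        out[i] = (char)('!' + (n % 85));  // least-significant base-85 digit goes last
        n /= 85;
      }
      out[5] = '\0';
    }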
