Getting an Array of data from spark :)

So… I’m trying to retrieve some data from my Spark Core :smiley: The problem, though, is that I’d like to send it as an array if possible - can I use Spark.variable() to send an array of data?

I’ve also looked at the Librato example and I’ve managed to send some single values - is there any way I could send an array of data using the Librato example from earlier?

@Kevinruder, Spark.variable() does not support arrays directly. You could, however, use Spark.variable() by “serializing” your array into a string. There is a limit of 622 chars that can be used with Spark.variable(), so, assuming an array of 16-bit integer values (max value 65535, or 5 digits), you could store 622/(5+1), or about 103, array values. The +1 is for a comma separator between values. You would then parse and convert that string back to an array on the “receiver” side.
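Just to sketch the idea (the names here are placeholders and I’m assuming unsigned 16-bit values), the packing on the core could look something like this:

// Hypothetical sketch: pack a uint16_t array into a comma-separated
// string so it fits in a single 622-char Spark.variable().
const int NUM_VALUES = 100;
uint16_t sensorData[NUM_VALUES];   // filled in elsewhere
char packed[623];                  // 622 chars plus the zero terminator

void packArray() {
    int pos = 0;
    for (int i = 0; i < NUM_VALUES && pos < (int)sizeof(packed) - 7; i++) {
        // at most 5 digits plus a comma per value
        pos += snprintf(packed + pos, sizeof(packed) - pos, "%u,", (unsigned)sensorData[i]);
    }
    if (pos > 0) packed[pos - 1] = '\0';   // drop the trailing comma
}

void setup() {
    Spark.variable("data", packed, STRING);
    packArray();
}

void loop() {
}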

What kind of values are in your array and how large is the array? :smile:

1 Like

It’s actually a multidimensional array of 3x100 floats - but I could make it smaller, I guess. How would I serialize the array?

@Kevinruder, you will need to reconsider the use of Spark.variable() and instead use TCPClient/TCPServer to send your data.
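Something like this rough TCPServer sketch could work (the port number is arbitrary and the receiver has to know the array layout):

TCPServer server = TCPServer(8123);   // arbitrary port
float samples[3][100];                // the 3x100 array of floats

void setup() {
    server.begin();
}

void loop() {
    TCPClient client = server.available();
    if (client.connected()) {
        // send the whole array as raw bytes; the receiver must know
        // it is getting 3 x 100 little-endian 32-bit floats
        client.write((const uint8_t *)samples, sizeof(samples));
        client.stop();
    }
}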

I can see one way you could do it with four Spark variables or one Spark variable used four times. Here’s a quick sketch of one way to do it:

Step 1 is to allocate strings of hex digits for your 300 32-bit floats. This will take 2400 bytes, split as 622 + 622 + 622 + 534 across four variables, or just 600*4 to keep it simple, but you don’t have to fill them all at once if you have a Spark.function() that lets you know which variable is being read–see below. Don’t forget you need to allocate 601 bytes to hold 600 characters due to the zero terminator.

Step 2 is to convert each float to a char array like “DEADBEEF” or “00001234”. First you have to use a C union to change the 32-bit floats to 32-bit unsigned ints so you can work on them–this step does not change the stored bits of data, just the type. Then you can use a helper function from the string library to convert the unsigned 32-bit ints to hex: utoa(intFloat, charBuffer, 16).

Step 3 is to assemble the 8 hex digits into a longer Spark.variable string using strcpy or similar. Don’t use any separator between the 8 hex digits since they are always 8 chars long.
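A rough sketch of steps 2 and 3 together (I’m using sprintf() with "%08lX" here instead of utoa(), and the names are placeholders):

// Reinterpret the 32 bits of a float as an unsigned int without
// changing the stored bits.
union FloatBits {
    float    f;
    uint32_t u;
};

// Pack `count` floats into `out` as back-to-back 8-char hex fields.
// `out` must hold at least count*8 + 1 bytes for the terminator.
void packFloatsAsHex(const float *in, int count, char *out) {
    for (int i = 0; i < count; i++) {
        FloatBits fb;
        fb.f = in[i];
        sprintf(out + i * 8, "%08lX", (unsigned long)fb.u);   // always 8 chars, so no separator is needed
    }
}

// Example: 75 floats -> 600 hex chars, which fits one 622-char variable
float chunk[75];
char  hexString[601];
// packFloatsAsHex(chunk, 75, hexString);
// Spark.variable("chunk0", hexString, STRING);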

Wait for the first Spark.variable to be read by your web side and then have the web side call a spark function which causes your core code to load up the next batch of values.

You will have to unpack the hex bytes on the web side.

You could get more efficiency by using a radix larger than 16, like base64 encoding, but there will be more complexity to go along with it.

2 Likes

Gosh I love seeing @bko in action! Always learn new stuff :wink:

While a bit old, this topic still seems highly relevant. Is there anyone out there willing to share an example of a viable method to transfer arrays that are ~1000 elements in length (e.g., using JSON)? I’d very much like to transfer, upon request, a larger array of recorded values, but it feels like it is outside of my existing skill set and I can’t seem to find an example. Any assistance would be much appreciated.

Hi @Bhclowers

The limit for Particle.publish() has increased to 255 bytes of data, but variables remain at 622 bytes as described above.

There are lots of ways to compress data, but the simplest way is to know something special about that data. For instance, if you have 1000 temperature measurements from the same sensor over some long time interval, you know that the change from one measurement to another is likely to be small, so you might want to encode the data as one full value at the start and a series of offsets (using fewer bits/lower precision) from the initial value. Other data might have a different compression scheme that makes sense for its values.
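As a rough illustration (the field widths are just an example), here is one variant where each reading is stored as a signed 8-bit offset from the previous one:

// Illustration only: one full 16-bit reading followed by signed 8-bit
// offsets, so 1000 readings take about 1001 bytes instead of 2000.
// Only safe if consecutive readings never differ by more than +/-127.
void encodeDeltas(const int16_t *readings, int count, uint8_t *out) {
    out[0] = (uint8_t)(((uint16_t)readings[0]) >> 8);   // first value, high byte
    out[1] = (uint8_t)(readings[0] & 0xFF);             // first value, low byte
    for (int i = 1; i < count; i++) {
        out[1 + i] = (uint8_t)(int8_t)(readings[i] - readings[i - 1]);
    }
    // the receiver rebuilds the series by starting from the full first
    // value and adding each signed offset in turn
}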

Actual data compression schemes often use a “dictionary” in which both sides agree to a short-hand for values you need to send often. In things like GIF and JPEG this dictionary is dynamically generated, but you don’t have to do that. You can analyze your data to see which values are the most common and plan for a short representation for those values.
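A toy version of that idea (the dictionary contents would come from analyzing your own data first) could be:

// Toy dictionary scheme: the 255 most common 16-bit values get a
// single-byte code; anything else is escaped with 0xFF followed by
// the full two-byte value.
const int DICT_SIZE = 255;
uint16_t dictionary[DICT_SIZE];   // filled with the most common values

int encodeWithDictionary(const uint16_t *in, int count, uint8_t *out) {
    int pos = 0;
    for (int i = 0; i < count; i++) {
        int code = -1;
        for (int d = 0; d < DICT_SIZE; d++) {   // look the value up
            if (dictionary[d] == in[i]) { code = d; break; }
        }
        if (code >= 0) {
            out[pos++] = (uint8_t)code;         // common value: 1 byte
        } else {
            out[pos++] = 0xFF;                  // escape marker
            out[pos++] = in[i] >> 8;            // then the raw value
            out[pos++] = in[i] & 0xFF;
        }
    }
    return pos;   // number of bytes actually used
}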

You can also think about delimiters–JSON is fairly verbose compared to raw bytes. One of the advantages of using hex or similar schemes is that you can use the same number of characters per byte all the time, eliminating the need for any kind of delimiter.

Without knowing more about the particular problem you are solving, it is difficult to recommend an easy solution. How many bits are in each of your ~1000 elements? Are they 8-bit or 32-bit, for instance? Do you know something special about the data, as I described above? How fast do you want to send the data? Can you space it out over time and still achieve your goals?

1 Like

After pondering your questions and points I still don’t have a full solution, but the following code is where I’m at with this approach. As you can see, I’d like to acquire data rapidly in a “burst mode” and then transmit the time and ADC values either in chunks or in their entirety. I’m somewhat convinced from what I’ve read that some sort of chunking will be necessary. This is because, ultimately, I’d like to average these data on the Particle, which will then create an array of floats. For example, I’d like to take data for ~20 ms following an external interrupt, with data points <=10 us apart, and repeat this exercise multiple times and average the result. However, before I even attempt the averaging I need to be sure I can transmit the result. The notion of a JSON container sounds nice, but I can see that it could be bloated. Any suggestions?

As for the transmission time, I’m somewhat flexible, but the faster the better.

Granted, if I can be certain of the timing I can transmit just the ADC values but I’m still too unfamiliar with the system. Regardless, I’d very much like to find a way to transmit these data to a container/file online. Here is the code I’ve been putting together but as you can see it is very much unfinished and I’m entirely uncertain how to send and finally unpack the result. Any help would be appreciated.

// This #include statement was automatically added by the Particle IDE.
#include "SparkJson/SparkJson.h"

int const numPnts = 2048;
int16_t read_times[numPnts];
int16_t values[numPnts];
const int BUFFER_SIZE = JSON_OBJECT_SIZE(1) + JSON_ARRAY_SIZE(numPnts);

void setup(){
    Serial.begin(9600);
    //https://github.com/spark/firmware/blob/c8fa2e0b79f9792f6dc9a6bd07697ca300cee9bc/src/spark_wiring.cpp#L366
    setADCSampleTime(3.0);
    pinMode(A0, INPUT);
    // StaticJsonBuffer<BUFFER_SIZE> jsonBuffer;
    // JsonObject& root = jsonBuffer.createObject();

}

void daq_loop(){
  unsigned int i;
  unsigned long startTime;
  for (i=0;i<numPnts;i++){
    startTime = micros();
    //delayMicroseconds(2);
    values[i] += analogRead(A0);
    read_times[i] = micros()-startTime;
    /*Serial.print(i);*/
    /*Serial.print(values[i]);*/
    /*Serial.print(", ");*/
    /*Serial.print(", ");*/
    /*Serial.println(read_times[i]);*/
  }
}

void loop(){
    daq_loop();

    int j;
    // Note: a 1024-byte StaticJsonBuffer is far too small for a 1024-element
    // array; once the buffer is full, add() calls silently fail.
    StaticJsonBuffer<1024> xBuffer;
    JsonObject& xroot = xBuffer.createObject();
    StaticJsonBuffer<1024> yBuffer;
    JsonObject& yroot = yBuffer.createObject();

    JsonArray& readTimes = xroot.createNestedArray("readTimes");
    JsonArray& readVals = yroot.createNestedArray("readVals");
    Serial.println(BUFFER_SIZE);
    Serial.println(sizeof(xroot));
    Serial.println();
    for(j=0;j<1024;j++){
      readVals.add(values[j]);
      readTimes.add(read_times[j]);
    }

    Serial.println();
    xroot.prettyPrintTo(Serial);

    //Serial.write((uint8_t *)values, 1024);
    //Serial.write((uint8_t *)read_times, 1024);
}

In general, I endorse your approach.

I have no idea if this will work with the reporting cadence you desire, but in general the idea of conditioning the data on the photon (responding to the stimulus, averaging multiple readings, converting to float, formatting/organizing data) and reporting the result is a near-perfect use case, IMHO.

I do the same thing for an application where I read multiple thermocouple values at a fairly brisk cadence, display them locally on an attached OLED display, then upload to the cloud as a single JSON string every 5 minutes.

Another option would be to use the WebServer library. Requesting a page has no data limit. You can dump as much as you want. If the timing is unknown, you could publish to trigger the read. Otherwise, read at an interval.

Could not find a suitable thread to post my question, but this one seems related:

For a big project, I want to get a series of variables (integers and floating point) from one Photon into a second one.

You can’t read a Particle.variable() from another Particle. Or did I miss something?
If that would be possible, my problem is solved!

Till now I could not find a way to do this.
So, I try to solve my problem in another way:

With Particle.publish(), we can (only) make “strings” available to other Particles.
But with JSON formatting, we can send a string, containing variables, which can be decoded/parsed again in another Photon with Particle.subscribe().
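On the receiving Photon I picture something like this (the event name “ecoData” is just a placeholder):

#include "SparkJson/SparkJson.h"

// Hypothetical receiver: parse the published JSON string back into values.
void jsonHandler(const char *event, const char *data) {
    char buf[256];
    strncpy(buf, data, sizeof(buf) - 1);   // parseObject() modifies its input
    buf[sizeof(buf) - 1] = '\0';

    StaticJsonBuffer<300> jsonBuffer;
    JsonObject& root = jsonBuffer.parseObject(buf);
    if (!root.success()) return;

    int T1 = root["T1"];
    int Energy = root["Energy"];
    Serial.printlnf("T1=%d Energy=%d", T1, Energy);
}

void setup() {
    Serial.begin(9600);
    Particle.subscribe("ecoData", jsonHandler);
}

void loop() {
}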

Now, I try to use the “SparkJson.h” library to first create that string.
It works like a charm, but I have no idea which string variable is catching this JSON string…

In my sketch below I use “xxxxxx” as a placeholder for that string variable.

Can anybody help me?

----- Based on SparkJson library example “JsonGeneratorExample” -------

#include "SparkJson/SparkJson.h"
// We initiate all variables we want to package in a JSON string:
char Controller[ ] = "ECO system";
int T1=25;
int T2=40;
int T3=65;
int T4=78;
int T5=92;
int T6=103;
int T7=9999999;
int Energy=37;


void setup()
{
  Serial.begin(9600);

  StaticJsonBuffer<300> jsonBuffer;

  JsonObject& root = jsonBuffer.createObject();
  
  root["Controller"] = Controller;
  root["T1"] = T1;
  root["T2"] = T2;
  root["T3"] = T3;
  root["T4"] = T4;
  root["T5"] = T5;
  root["T6"] = T6;
  root["T7"] = T7;
  root["Energy"] = Energy;

  root.printTo(Serial);
// This prints: {"Controller":"ECO system","T1":25,"T2":40,"T3":65,"T4":78,"T5":92,"T6":103,"T7":9999999,"Energy":37}

  Serial.println();
  Serial.print("This is the string json created: ");
  Serial.println(xxxxxxx);// This is a temporary placeholder as I can't figure out what the variable is called...
}

void loop() 
{
}
1 Like

I don’t know the answer to your question, but another way to accomplish your goal without using JSON is to use sprintf to construct a string with a suitable separator between your data points (a comma in my example). In the receiving Photon, you would use strtok to unpack the string. The example below has the publish and subscribe in the same Photon for testing purposes, but of course, you would have the subscribe and its handler in the receiving Photon.

void setup() {
    Serial.begin(9600);
    delay(3000);
    char str[255];
    sprintf(str, "%.1f,%.1f,%.1f,%d,%d", 23.8, 47.1, 12.5, 61432, 99999999);
    Serial.println(str);
    Particle.subscribe("rdTestPub", stringParser);
    Particle.publish("rdTestPub", str);
    
}


void stringParser(const char *event, const char *data) {
    // strtok() modifies the string it parses, hence the cast away from const
    float first = atof(strtok((char*)data, ","));
    float second = atof(strtok(NULL, ","));
    float third = atof(strtok(NULL, ","));
    int fourth = atoi(strtok(NULL, ","));
    int fifth = atoi(strtok(NULL, ","));
    Serial.printlnf("first: %.1f  second: %.1f  third: %.1f  fourth: %d  fifth: %d", first, second, third, fourth, fifth);
}
1 Like

Fantastic alternative proposal @Ric Thanks!

I have also used the sprintf command before, but thought JSON would be more suitable for certain extra possibilities like monitoring these values in a Google sheet for example.

Probably this is also possible with printf.
I’ll study your method and try it out…


Still, I wonder about 2 questions:

1) Why is it so complex to read a variable of one Photon by another one?

It seems so logical to expect this, and I guess many users would welcome it!
Wouldn’t we?

2) What’s the variable used in the SparkJson library example “JsonGeneratorExample”?

I will still need it for other applications…

Greetz,
Filip

It's on the backlog :wink:

1 Like

:heart_eyes: Now that’s great news @Moors7 !
Any idea of the progress?

It certainly is. Unfortunately I’ve got no clue what the backlog looks like. With the Electron just having been released, I think some more time will have to be put in there. That said, it sounds like a useful and reasonable feature, so I too hope it won’t take too much longer :wink:

1 Like

@Ric : I’m trying to run your test sketch Ric, but I get an error in the Web IDE…
See screenshot below:

Any idea what could be wrong?
Tks!

I add the sketch below:

void setup()
{
    Serial.begin(9600);
    delay(3000);
    
    char str[255];
    sprintf(str, "%.1f,%.1f,%.1f,%d,%d", 23.8, 47.1, 12.5, 61432, 99999999);
    
    Serial.println(str);
    
    Particle.subscribe("rdTestPub", stringParser); // This must finally be in the subscriber's script!
    Particle.publish("rdTestPub", str); // This must finally be in the publisher's script!
}


void stringParser(const char *event, const char *data) // This must finally be in the subscriber's script!
{
    float first = atof(strtok((char*)data, ","));
    float second = atof(strtok(NULL, ","));
    float third = atof(strtok(NULL, ","));
    int fourth = atoi(strtok(NULL, ","));
    int fifth = atoi(strtok(NULL, ","));
    
    //Serial.printlnf("first: %.1f  second: %.1f  third: %.1f  fourth: %d  fifth: %d", first,second,third,fourth,fifth); // Debug: Is "printlnf" a valid command? Try with the usual "println"...
    Serial.println("first: %.1f  second: %.1f  third: %.1f  fourth: %d  fifth: %d", first,second,third,fourth,fifth);
}

Try to rename your sketch without the ampersand &

1 Like

Works! That’s a good (undocumented?) tip!

Thanks again @ScruffR for your great help!

1 Like