Spark.publish() Data Size Limit [SOLVED]

Hello!

What is (or will be) the Spark.publish() data size limit for the Photon? Will it be the same as on the Core? I’m trying to publish data from seven sensors and would like to do it with the fewest publishes per minute.

Thanks!

How big is the data you’re currently sending? Could you show us an example, perhaps we can provide ideas to save some space?
As to what the limits will be, I’m not sure. Let’s ping @dave for that.

1 Like

Sure! Below I’ve posted the code with a reduced publish.

// This #include statement was automatically added by the Spark IDE.
#include "Adafruit_TMP007.h"
#include "Adafruit_HTU21DF.h"
#include "Adafruit_BMP085.h"
#include "Adafruit_Sensor.h"
#include "math.h"

//Pin declarations
#define PHOTO_PIN A0
#define SOUND_PIN A1

//Timing & Counters  
unsigned long sensReadMillis = 0;    
unsigned long sensInterval = 5000;
const int8_t sampleWindow = 50;
unsigned long lastSync = 0;   
#define ONE_DAY_MILLIS (24 * 60 * 60 * 1000) 

//State and sensor variables
uint16_t photoVal = 0;
double tempVal, humidVal, pressureVal, soundVal, objt, diet;
uint16_t sample;

char sensorStr[125];    

//Class instantiation
Adafruit_HTU21DF htu = Adafruit_HTU21DF();
Adafruit_BMP085 bmp = Adafruit_BMP085();
Adafruit_TMP007 tmp = Adafruit_TMP007();

void setup() {
    pinMode(PHOTO_PIN, INPUT);
    pinMode(SOUND_PIN, INPUT);

    //Spark Cloud exposed variables and functions
    //Spark.variable("Lightlvl", &photoVal, INT);
    Spark.variable("Sensor_Data", sensorStr, STRING);   //pass the char array itself for STRING variables

    Serial.begin(9600); 

    //Catch if temp/humidity, pressure, and IR temp sensors can't be found
    if (!htu.begin()) {
        Serial.println("Couldn't find HTU21D-F sensor!");
        while (1);
    }
    if (!bmp.begin()) {
        Serial.println("Could not find BMP180 sensor!");
        while (1);
    }
    if (!tmp.begin()) {
        Serial.println("Couldn't find TMP007 sensor!");
        while (1);
    }

    //Sync Spark Core time with Cloud
    Spark.syncTime();
    Time.zone(-8);
}

void loop() {
    //Syncs Core time with Spark Cloud
    if (millis() - lastSync > ONE_DAY_MILLIS) {
        Spark.syncTime();
        lastSync = millis();
    }

    //Reads temp/humidity sensor after interval elapses
    if(millis() - sensReadMillis > sensInterval) {
        sensorRead();
        sensReadMillis = millis();
    }
}

//Read sensor values
void sensorRead() {
    tempVal = htu.readTemperature();
    humidVal = htu.readHumidity();
    pressureVal = bmp.readPressure()/100.0;   //Pa to hPa (mbar)
    photoVal = analogRead(PHOTO_PIN);
    objt = tmp.readObjTempC();
    //diet = tmp.readDieTempC();

    uint16_t peakToPeak = 0;
    uint16_t sigMax = 0;
    uint16_t sigMin = 4096;

    unsigned long sampleStartMillis = millis();
    while(millis() - sampleStartMillis < sampleWindow) {
        sample = analogRead(SOUND_PIN);
        if (sample < 4096) {
            if (sample > sigMax) {
                sigMax = sample;
            } else if (sample < sigMin) {
                sigMin = sample;
            }
        }
    }

    peakToPeak = sigMax - sigMin;
    soundVal =  (20 * log10((peakToPeak * (3.3 / 4096)) / .0063096)) + 20;

    //[Write sensor data to external flash memory]
    sensorPub();
}

//Publish sensor data to Cloud
void sensorPub() {
    char sensorPubString[150];
    sprintf(sensorPubString, "{\"t\":%3.2f,\"h\":%3.2f,\"p\":%3.2f,\"s\":%3.2f,\"rT\":%3.2f}", tempVal,humidVal, pressureVal, soundVal, objt);
    Spark.publish("Sensor_Data", sensorPubString);

    //Format string for exposed Spark.variable
    //sprintf(sensorStr, "{\"Temp\":%3.2f,\"Humidity\":%3.2f,\"Pressure\":%.2f,\"Sound\":%3.2f}", tempVal, humidVal, pressureVal, soundVal);
}

Ideally I’d like to keep complete (descriptive) property names in the Spark.publish() payload. I still have two sensors I’d like to add, but I’m already very close to the 63-byte limit. Any ideas on how to fit it all in would be greatly appreciated!

Thanks!

1 Like

I think you will have to re-frame your thinking on what publish is good for in order to get more out of it. The way I see it, publish was designed for short messages that alert your web-side code to something.

You are using publish like a core-to-human protocol and want everything human readable and easily digested, but if you think of publish as a core-to-web-page or core-to-program API, then you can get rid of a lot of baggage, like pre-translating your values from 12-bit integers to floating point numbers to make them human readable. Whatever is receiving these values could do the math and you could send the values encoded in a variety of non-human-readable ways that would save bytes in the publish stream.
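
Here’s a minimal sketch of what that could look like, reusing the globals from the sketch above. The short event name, field order, and x10/x100 scale factors are illustrative assumptions, not anything defined in the original code:

//Compact, machine-readable publish: integers only, short event name, no JSON keys.
//Scaling back to real units happens on the receiving end.
void sensorPubCompact() {
    char buf[64];
    snprintf(buf, sizeof(buf), "%d,%d,%d,%u,%d,%d",
             (int)(tempVal * 100),     //temperature, hundredths of a degree C
             (int)(humidVal * 100),    //relative humidity, hundredths of a percent
             (int)(pressureVal * 10),  //pressure, tenths of a hPa
             (unsigned)photoVal,       //raw 12-bit ADC count
             (int)(soundVal * 10),     //sound level, tenths of a dB
             (int)(objt * 100));       //object temperature, hundredths of a degree C
    Spark.publish("S", buf);           //comfortably under the 63-byte payload limit
}

The receiving code just splits on commas and divides each field by its scale factor to get the original values back.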

So there is a clear trade-off to be made: human-readable but few values, or machine-readable with more values. Either is fine depending on your goals; it is just a trade-off you can make.

Even better, you also have Spark.variable available to you, with a much larger possible length. So a good strategy is to poll your core with a variable, but the best strategy is a hybrid one where the core publishes an event saying “there’s data to pick up,” maybe with a few key values included, and the other side can decide if it wants to go read the variable with all the data. The way I see it, this is what the designers had in mind.
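
A rough sketch of that hybrid pattern, again reusing the globals from the sketch above (the variable and event names here are illustrative, and the variable registration belongs in setup()):

//Hybrid pattern: the full record lives in a Spark.variable (which allows much
//longer strings than a publish payload), and the publish is only a short
//"data ready" notification carrying a headline value or two.
char sensorData[256];    //register in setup() with: Spark.variable("sensorData", sensorData, STRING);

void notifyDataReady() {
    //Full, descriptive record goes into the variable...
    snprintf(sensorData, sizeof(sensorData),
             "{\"Temp\":%.2f,\"Humidity\":%.2f,\"Pressure\":%.2f,\"Sound\":%.2f,\"ObjTemp\":%.2f,\"Light\":%u}",
             tempVal, humidVal, pressureVal, soundVal, objt, (unsigned)photoVal);

    //...and the publish is just a compact heads-up.
    char note[32];
    snprintf(note, sizeof(note), "ready,t=%.1f", tempVal);
    Spark.publish("sensorDataReady", note);
}

The web side watches for the event (or polls) and, only when it cares, makes one GET request to read the variable with the complete data.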

4 Likes

Thank you for the great reply, @bko! Re-contextualizing publish and how I format its data, along with the hybrid use case, will definitely make accessing and utilizing the data much more manageable. I really appreciate the detail!

3 Likes

Glad you got this sorted! :slight_smile:

Just wanted to pop in and say we’ve been experimenting with larger sizes for the publish topic / contents; I’m hoping this will be included when we roll out all the awesome work that’s been happening in firmware land. :slight_smile:

Thanks,
David

1 Like

Awesome! That’s great to know, @Dave! Thanks!

I ended up implementing @bko’s suggestion: publishing key data and deciding what to do on the server side.

Thanks again!

3 Likes

Hello,

I am using the Particle Photon to log accelerometer and gyroscope data. I’m having a hard time figuring out the best way to store the large amount of data I’m collecting. I also need the device to sample at a fast rate, and it seems the cloud can’t keep up. What is the best way to save my data at a fast sampling rate?

Stream it to a server directly (over TCP or UDP, for example), or save it locally on an SD card and offload it later, either wirelessly or by moving the SD card.
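
For the streaming option, a rough sketch using the Photon’s TCPClient might look like the following; the server address, port, and CSV line format are placeholders for whatever your collector expects:

TCPClient client;
byte serverAddr[] = { 192, 168, 1, 100 };   //placeholder: your server's LAN address

void setup() {
    Serial.begin(9600);
    if (!client.connect(serverAddr, 5000)) {   //placeholder port; a real sketch should also handle reconnects
        Serial.println("Connection to server failed");
    }
}

void loop() {
    if (client.connected()) {
        //placeholder values: replace with real accelerometer/gyro reads
        int16_t ax = 0, ay = 0, az = 0;
        char line[48];
        int n = snprintf(line, sizeof(line), "%d,%d,%d\n", ax, ay, az);
        client.write((const uint8_t *)line, n);   //one CSV line per sample
    }
}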

2 Likes

With Photons, is there a way to write the measurement data to an SD card locally? Also, would you mind pointing me to any available instructions on how to stream data to a server using TCP/UDP?

There are several threads that can be found via the search feature: try “SD card” or “SdFat” (the name of a contributed library).

And for UDP/TCP the docs do have some dedicated sections too
https://docs.particle.io/reference/firmware/photon/#tcpserver
https://docs.particle.io/reference/firmware/photon/#tcpclient
https://docs.particle.io/reference/firmware/photon/#udp

And again, the search feature of this very forum also finds threads dealing with these.
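
As a rough illustration of the UDP route (the collector address, ports, and packet format below are placeholder assumptions): UDP avoids per-connection overhead, which helps at high sample rates, but packets can be dropped, so a sequence number lets the server spot gaps.

UDP udp;
IPAddress collector(192, 168, 1, 100);    //placeholder: your server's address
unsigned int seq = 0;

void setup() {
    udp.begin(8888);                      //local port; the value is arbitrary here
}

void sendSample(int16_t ax, int16_t ay, int16_t az) {
    char pkt[64];
    int n = snprintf(pkt, sizeof(pkt), "%u,%d,%d,%d\n", seq++, ax, ay, az);
    udp.beginPacket(collector, 5000);     //placeholder destination port
    udp.write((const uint8_t *)pkt, n);
    udp.endPacket();
}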

2 Likes