Meaning of ttl parameter in Spark.publish()

Hi,

Spark.publish() has an optional ttl parameter (time to live, 0–16777215 seconds, default 60).
What is the meaning of this ttl parameter?

Thanks,
Henk

@nika8991, the TTL parameter is the lifespan of the data in the cloud; the default is 60 seconds. After that lifespan the data should be deleted from the cloud.
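
For what it is worth, the ttl travels along with each published event, so a subscriber can see it in the raw event JSON. A sketch of the shape of one event as I understand it (every value below is made up for illustration):

// Illustrative shape of one event from the Spark Cloud event stream;
// all values here are invented for the example.
var exampleEvent = {
    "data": "my payload",                       // whatever was passed to Spark.publish()
    "ttl": "60",                                // the time to live that was set, or the 60 s default
    "published_at": "2014-07-01T12:00:00.000Z", // GMT/ISO 8601 timestamp added by the cloud
    "coreid": "0123456789abcdef01234567"        // the Core that published the event
};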

@krvarma, thanks.

But what does it mean for the receiving side? In my case I have a simple server running on node.js waiting for an event from the SparkCore. Most of the time it goes well, but sometimes I miss some published information. Does that mean the ttl is too short? What happens (in the cloud) when I increase it to the maximum value? I have added the event handling part of the node.js code.

var eventSource = new EventSource("https://api.spark.io/v1/events/?access_token=" + accessToken);
// ...
eventSource.addEventListener('Log', function(event){    // Wait for publish event
        var rawData = JSON.parse(event.data);               // Parse the main JSON data 
        var parsedData = JSON.parse(rawData.data);          // Parse the sub JSON data
        if(parsedData.V == "START") {
            lineNumber = 0;
            console.log("CoreId: " + rawData.coreid);
            console.log("Time: " + rawData.published_at);
        }
        lineNumber++;
        console.log(lineNumber + " Time: " + parsedData.T + " " + parsedData.N + ": " + parsedData.V); // Show the publish data on the console
        if(parsedData.N == "BoxID") {
            serialNumber = parsedData.V;
            getDuikelBox(serialNumber, function(duikelBox){  // Get the user from the MongoDB
                if(!duikelBox) {
                    userName = "DuikelBox gebruiker";
                }
                else {
                    userName = capitalize(duikelBox.name);  
                }
            });
        } 
        if(parsedData.N == "DuikelCode") {
            duikelCode = parsedData.V;
        } 
        if(parsedData.N == "OpenCode") {
            openCode = parsedData.V;
        } 
        if(parsedData.N == "LastOpenCode") {  // End of the published data
            lastOpenCode = parsedData.V;
            afleverBericht();                // Send a email with the publish information
            console.log("*************************************");   // Show the end of the published data
            lineNumber = 0;
        }
    },false);

Thanks,
Henk

@nika8991, what is the time delay between publishes? If we publish before the TTL is over, the old data should be overwritten. Can you check whether increasing the time delay between publishes solves your problem?

I understand what TTL would mean for a Spark.variable, I think. But for publish, I think it is just reserved for future expansion.

@bko, I am not sure whether this is implemented or not, but the docs say so.

At some point in the future, there will be a mechanism to retrieve persistent data stored in the Cloud; currently there is not, so as @bko says, this feature isn’t useful quite yet :)

@zach, so if we publish an event with some data, the data will not be stored on the server, am I correct?

If so, when we publish an event and nobody is subscribed, is the data discarded?

Yes, that’s correct; the data currently disappears if no one is subscribed.

Thank you @zach, that is new information.

@nika8991, so the TTL is not the cause of your problem. You are already subscribed and missing some events.

@zach, @bko, is there any requirement on the time delay between publishes?

Hi @krvarma

There is rate limiting such that an average of one event per second is allowed, with bursts of up to 4 events, I believe.

The cloud is shared and we should all try to respect that. If you have a temperature sensor in your house, I really don’t think you need to know the temperature every second. Every minute or even every 15 minutes is fine for something like that. Be a good neighbor.

@bko, thanks, and I totally agree with you. Also, one should not expect the Spark Cloud to store the data for a long time; it should only be able to re-route the data to other servers or services like Xively, etc. And I suppose WebHooks will be helpful in that case.

Thank you @krvarma @bko @zach,
It is clear that ttl has not yet been implemented for Spark.publish().

I send the messages from the SparkCore to my server running on node.js. The messages are published in bursts of 20 to 50 messages, with a pause of one second between each published message. After such a burst of publish messages there is a much longer pause, normally at least a few hours or maybe days.

At the moment I have 2 problems with the published messages:

  1. Sometimes I miss some published messages, either at the beginning or at the end of the sequence; I have not seen it happen in the middle. I will try to solve it by increasing the pause between the messages from 1 to 2 seconds.

  2. When there is a long pause between the published sequences, most of the time the server does not receive the messages. I have to restart the server, and then I only receive the messages that were sent after the restart. It seems that the connection between the node.js server and the Spark Cloud was lost. Is there maybe a timeout somewhere?

Thanks,
Henk

Can “unimplemented” please be added to the docs? It would have saved this thread, which contains wrong info before the correct info appears, and it would save everyone’s time. And the question will be asked again; it has been asked before: https://community.spark.io/t/publish-subscribe-semantics-documentation/4403

@nika8991, I am not sure about the timeout; someone from the Spark team or the Elite team should answer this.

Regarding the missing events, can you also try simply logging the received event data in Node.js without processing it, just to narrow things down?
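
A minimal sketch of that bare listener, assuming the npm eventsource package (the token is a placeholder):

var EventSource = require('eventsource');  // assumption: the npm 'eventsource' package
var accessToken = "YOUR_ACCESS_TOKEN";     // placeholder -- use your real token

var es = new EventSource("https://api.spark.io/v1/events/?access_token=" + accessToken);

// Log every 'Log' event exactly as received, with no JSON parsing at all
es.addEventListener('Log', function(event) {
    console.log(new Date().toISOString() + " " + event.data);
}, false);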

I think you should try logging to file with curl and see how that goes:

curl --no-buffer https://api.spark.io/v1/events/MyEventHere/?access_token=0123456789012345678901234567890123456789 > log.txt

The event stream sends keep-alives every 9 seconds, so as I recall that file should have lots of blank lines.

With a browser listening for events, the streams do crash for me too, and it looks like the event stream just stops. It will generally stay up for days to weeks. If you can measure the incoming bytes, including the keep-alives, then you could implement a timeout and reconnect.
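
A rough sketch of that watchdog idea, using Node’s core https module to watch the raw byte stream so the keep-alives count too; the 30-second timeout is a hypothetical figure (roughly three missed 9-second keep-alives):

var https = require('https');  // Node core module

var accessToken = "YOUR_ACCESS_TOKEN"; // placeholder -- use your real token
var url = "https://api.spark.io/v1/events/?access_token=" + accessToken;
var TIMEOUT_MS = 30 * 1000;            // hypothetical: ~3 missed 9 s keep-alives

function connect() {
    var watchdog = null;
    var finished = false;

    var req = https.get(url, function(res) {
        resetWatchdog();
        res.on('data', function(chunk) {
            resetWatchdog();             // any bytes, keep-alives included, count as life
            process.stdout.write(chunk); // or feed this into your own parser
        });
        res.on('end', reconnect);
    });
    req.on('error', reconnect);

    function resetWatchdog() {
        clearTimeout(watchdog);
        watchdog = setTimeout(function() {
            req.abort();                 // stream went quiet: force a reconnect
        }, TIMEOUT_MS);
    }

    function reconnect() {
        if (finished) return;            // guard: 'end' and 'error' may both fire
        finished = true;
        clearTimeout(watchdog);
        setTimeout(connect, 1000);       // small back-off, then open a fresh stream
    }
}

connect();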

Thanks @bko, I am using the curl command and I can see the published messages coming in. I will keep it running for a longer time to see if that is still the case after a few hours/days.

Thanks,
Henk

One more thing I forgot to say earlier: all of the Spark cloud JSON values that you get include some kind of timestamp, and for published events you get a “published_at” field with a GMT/Zulu-time ISO-format timestamp. So you could decode that, and knowing the publish rate (the core should phone home at least every hour, say), your node.js logic could compare it against the current time and reset the connection when the gap exceeds 1.5 or 2 times the publish interval (1 hour, perhaps).
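
A rough sketch of that staleness check, reusing the published_at field from the event handler earlier in the thread; the one-hour interval is just the hypothetical figure from this post:

var EXPECTED_INTERVAL_MS = 60 * 60 * 1000; // hypothetical: core phones home hourly
var lastSeenMs = Date.now();

// Call this from your event handler with rawData.published_at
function noteEvent(publishedAt) {
    lastSeenMs = Date.parse(publishedAt);  // ISO 8601 Zulu timestamps parse natively
}

// Check once a minute; reconnect when the stream looks stale
setInterval(function() {
    if (Date.now() - lastSeenMs > 1.5 * EXPECTED_INTERVAL_MS) {
        console.log("No events for too long, resetting the connection...");
        // close and recreate your EventSource (or raw stream) here
        lastSeenMs = Date.now();           // avoid reconnecting in a tight loop
    }
}, 60 * 1000);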