Something changed in the last few days. I have a Core that is pushing Spark.publish() events that sometimes exceed 4 events per second (the so-called burst), and sometimes the events come even more densely. Typically, the overflow events would be ignored (fine), but there were enough data points/events to achieve what I was looking for (pseudo-near-real-time events).
But recently, the server (and I have tested this on my local server as well as the Spark cloud) ignores many more messages. If there is a burst, the server is unresponsive for 4-5 seconds and then picks up the .publish() events again. It is as if the throttling has been increased (even on my personal server).
The only thing that has changed is that I am at a new location with a new internet connection, but I don't see how that would change anything. I have the .publish() events "shadowed" by a Serial.print(), so I know that data/activity is being read by the Core as it should be. Again, it's as if the throttling has been increased.
Any ideas or suggestions? BTW, yes, I would love to use websockets or UDP, but those options are currently buggy.
@jgeist, there is an outstanding PR on the docs regarding this. It was posted by @kennethlimcp:
-NOTE: Currently, a device can publish at a rate of about 1 event/sec, with bursts of up to 4 allowed in 1 second.
+NOTE: Currently, a device can publish at a rate of about 1 event/sec, with bursts of up to 4 allowed in 1 second. A back-to-back burst of 4 messages will take 4 seconds to recover from.
So your long bursts are causing the long delays you are seeing.
We haven't changed any event limits in the past few days. We did roll out some new infrastructure, but I don't think that would have caused what you're seeing.
@peekay123 is right in that the firmware is very aggressive about throttling. If you exceed the limit, the firmware hard-stops and drops all events until you fall back under the limit, whereas the cloud will continue letting you hit 60 events/minute and pass them through at that rate without a hard stop.
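To make the shape of that limit concrete, here is a minimal token-bucket sketch of the behaviour described above (one token refilled per second, burst capacity of 4). The constant names, the millis()-based timing, and the hard drop at the end are illustrative assumptions, not the actual firmware or cloud code:

// Illustrative token bucket: 1 token refilled per second, burst capacity of 4.
// This is NOT the real Spark firmware/cloud code, just a sketch of the limit.
const unsigned long TOKEN_INTERVAL_MS = 1000; // refill one token per second
const int BURST_CAPACITY = 4;                 // up to 4 back-to-back events

int tokens = BURST_CAPACITY;
unsigned long lastRefill = 0;

bool canPublish() {
    unsigned long now = millis();
    if (tokens >= BURST_CAPACITY) {
        lastRefill = now;  // bucket is full, so start counting from now
    }
    // Refill one token for every full second that has elapsed.
    while (tokens < BURST_CAPACITY && now - lastRefill >= TOKEN_INTERVAL_MS) {
        tokens++;
        lastRefill += TOKEN_INTERVAL_MS;
    }
    if (tokens > 0) {
        tokens--;          // spend a token on this publish
        return true;
    }
    return false;          // over the limit: the firmware hard-drops events here
}

A back-to-back burst of 4 drains the bucket immediately and then takes a full 4 seconds to refill, which matches the note in the docs PR quoted above.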
So, and I think this question has been posed earlier: is it possible to modify the firmware to allow more events through? Alas, probably not. But it is my server (and my firmware) that I would like to modify here, so I am the one paying for the bandwidth.
It's open source! If you're running a local server, you can definitely change the rate limiting. We can also work out increasing limits for you on the main cloud, depending on the case. Maybe @nexxy would be willing to help walk you through modifying the local cloud and the firmware to increase your limits?
I am happy to help if this sounds like the route you'd like to take. Another route to consider is buffering your events and publishing them in a "batch" on a fixed timer (see the sketch below).
Let me know if you'd like help with the firmware/server.
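To illustrate the batching route in application firmware: a rough sketch in Wiring, where the event name "sensor-batch", the 5-second flush interval, and the comma-separated payload are arbitrary choices for the example, not anything required by the API. Keep the combined payload under the publish data limit (the Core docs currently list 63 characters of data):

// Rough sketch: buffer readings and publish them as one event on a fixed
// timer, so the Core stays well under the 1 event/sec (burst of 4) limit.
// "sensor-batch", the 5 s interval, and the comma format are placeholders.
char batchBuffer[64];                         // keep under the publish data limit
unsigned long lastFlush = 0;
const unsigned long FLUSH_INTERVAL_MS = 5000;

void queueReading(int value) {
    // Append ",value" (or just "value" for the first entry) if it still fits.
    char item[13];
    snprintf(item, sizeof(item), "%s%d", batchBuffer[0] ? "," : "", value);
    if (strlen(batchBuffer) + strlen(item) < sizeof(batchBuffer)) {
        strcat(batchBuffer, item);
    }
}

void setup() {
}

void loop() {
    // ...read your sensor here and call queueReading(...) as often as needed...

    if (millis() - lastFlush >= FLUSH_INTERVAL_MS) {
        if (batchBuffer[0]) {
            Spark.publish("sensor-batch", batchBuffer);  // one event per flush
            batchBuffer[0] = '\0';                       // reset the buffer
        }
        lastFlush = millis();
    }
}

Flushed this way, several seconds' worth of readings cost a single event, so even dense data stays inside the current 1 event/sec (burst of 4) limit.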
onCoreSentEvent: function(msg, isPublic) {
if (!msg) {
logger.error("CORE EVENT - msg obj was empty?!");
return;
}
//TODO: if the core is publishing messages too fast:
//this.sendReply("EventSlowdown", msg.getId());
and a bit later, this:
try {
if (!global.publisher) {
return;
}
if (!global.publisher.publish(isPublic, obj.name, obj.userid, obj.data, obj.ttl, obj.published_at, this.getHexCoreID())) {
//this core is over its limit, and that message was not sent.
this.sendReply("EventSlowdown", msg.getId());
}
else {
this.sendReply("EventAck", msg.getId());
}
}
catch (ex) {
logger.error("onCoreSentEvent: failed writing to socket - " + ex);
}
},
What, where, when is "EventSlowdown" and can it be modified?
Cheers!
I've edited your post to properly format the code. Please check out this post, so you know how to do this yourself in the future. Thanks in advance! ~Jordy