Publish: rate limit / missing data

So I’ve read about the rate limit in the docs. Unfortunately, the system I’m integrating with occasionally sends bursts of data: only a few bytes each, but 6 or 7 events within a few seconds.

So what happens when I hit the rate limit? Does the publish method just return false? Do I need to implement my own buffer/queue and retry these events later?

What’s the accepted way of dealing with this scenario?

Hi @Nemiah

The rate limits for publishing events are such that you must average less than one per second, but a burst of up to four is allowed as long as your average remains less than one per second. This is out of fairness to all users of the service.

When you hit the limit, the publishes just don’t go through. They can be blocked at your device by system firmware or in the cloud.

I would advise you to gather your events into a packed format so that one published event can contain many of your events. JSON is a good format to consider, but there are many other ways to do it. If you sometimes need to burst faster, that option is available, but you need to either slow down afterwards or deal with the consequences of dropped publishes.
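To make that concrete, here is a minimal sketch of the packing idea: buffer a few samples, then send them as one JSON array in a single Particle.publish(). The event name "samples", the {"v": ...} field, and the 30-second cadence are made up for illustration.

```cpp
#include "Particle.h"

const int MAX_SAMPLES = 8;
int sampleBuf[MAX_SAMPLES];
int sampleCount = 0;

// Call this for each real-world event instead of publishing immediately
void addSample(int value) {
    if (sampleCount < MAX_SAMPLES) {
        sampleBuf[sampleCount++] = value;
    }
}

// Pack everything gathered so far into one JSON array and publish once
void publishSamples() {
    if (sampleCount == 0) return;

    char json[256];  // keep the payload well under the publish size limit
    size_t pos = snprintf(json, sizeof(json), "[");
    for (int i = 0; i < sampleCount; i++) {
        pos += snprintf(json + pos, sizeof(json) - pos,
                        "%s{\"v\":%d}", (i > 0) ? "," : "", sampleBuf[i]);
    }
    snprintf(json + pos, sizeof(json) - pos, "]");

    Particle.publish("samples", json, PRIVATE);
    sampleCount = 0;
}

void setup() {}

void loop() {
    // Publish the batch every 30 seconds (illustrative)
    static unsigned long last = 0;
    if (millis() - last >= 30000) {
        publishSamples();
        last = millis();
    }
}
```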


While I’m fully with @bko that the best way would be to pack as much data as you can into a single publish, there are already some libraries that will take your events and push them out one by one while adhering to the rate limit:

https://build.particle.io/libs/PublishQueue/0.0.11
https://build.particle.io/libs/PublishManager/1.0.0
https://build.particle.io/libs/PublishQueueAsyncRK/0.0.1

Don’t pay too much attention to the 0 usage count on the latter two libraries; I think the usage counter has been broken in the Web IDE for a while now.
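For example, here’s roughly what using PublishQueueAsyncRK looks like, based on its README at the time (the exact API may have changed since, so check the library docs):

```cpp
#include "PublishQueueAsyncRK.h"

SYSTEM_THREAD(ENABLED);

// Retained buffer so queued events can survive a reset
retained uint8_t publishQueueRetainedBuffer[2048];
PublishQueueAsync publishQueue(publishQueueRetainedBuffer, sizeof(publishQueueRetainedBuffer));

void setup() {
    publishQueue.setup();
}

void loop() {
    // Hand the event to the queue; the library sends it when the
    // rate limit allows, instead of blocking or dropping it here
    publishQueue.publish("myEvent", "some data", 60, PRIVATE);
    delay(30000);  // illustrative pacing only
}
```

The other libraries follow the same pattern: you hand events to the queue and it paces the actual publishes for you.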

Thanks for the info. It’s annoying because 99% of the time the data is within the limits, so implementing a system for bundling up events seems a bit excessive, and I also like the logic of 1 real-world event = 1 published event. I can see how you would do it, though.

I think the idea of building a queue and then emptying it at a maximum of 1 per second suits this scenario better.
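If you’d rather roll your own, a minimal sketch could look like this: a small ring buffer that loop() drains no faster than once per second, leaving an event queued if the publish reports failure. All names and sizes here are illustrative, not from any library.

```cpp
#include "Particle.h"

const int QUEUE_SIZE = 16;
String eventQueue[QUEUE_SIZE];   // ring buffer of pending event payloads
int head = 0;
int tail = 0;
int count = 0;
unsigned long lastPublish = 0;

// Call this when a real-world event fires
bool enqueue(const String &data) {
    if (count >= QUEUE_SIZE) return false;  // queue full, caller decides what to do
    eventQueue[tail] = data;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    return true;
}

void setup() {}

void loop() {
    // Drain at most one event per second to stay under the rate limit
    if (count > 0 && millis() - lastPublish >= 1000) {
        if (Particle.publish("myEvent", eventQueue[head].c_str(), PRIVATE)) {
            head = (head + 1) % QUEUE_SIZE;
            count--;
        }
        // On failure the event stays at the head and is retried next pass
        lastPublish = millis();
    }
}
```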

I am surprised, though, that there’s no feedback from the published event saying it hasn’t been accepted; it took me a while to figure out what was going on, as it was only occasionally hitting the limit.

Do you mean can't?

snprintf() is one way that many users use.
Or you could use a JSON library like this one

AFAIK it should return false; if it doesn't, I'd consider that a bug which may want to be reported as a GitHub issue.

No, I meant I can see how you would package up multiple events or data samples into a single Publish() event, e.g. using an array of JSON objects, and then have the webhook endpoint scan each published event for one or more events inside the JSON object. But I like the simplicity of having 1 real-world event = 1 published event.

My events contain very little data and usually fire once a minute at most. But occasionally, maybe once or twice a day, though not every day, there might be a burst of 5 or 6. I think the queuing option is best for me.

I’ve since refined how the events work, and now I doubt they will even hit the limit, but I’ve kept the queue in place to cover every eventuality.

Thanks for the help, it’s been really useful as always!