So I’ve read about the rate limit in the docs. Unfortunately, the system I’m integrating with occasionally sends bursts of data: only a few bytes each, but 6 or 7 events within a few seconds.
So what happens when I hit the rate limit? Does the publish method just return false? Do I need to implement my own buffer/queue and then retry these events later?
What’s the accepted way of dealing with this scenario?
The rate limit for publishing events is an average of less than one per second; a burst of up to four is allowed, as long as your longer-term average stays below one per second. This is for fairness to all users of the service.
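One way to picture that limit on the client side is as a token bucket: it holds up to four tokens, refills at one per second, and each publish spends one. Here’s a minimal sketch of that model (an assumption for pacing your own sends, not necessarily how the service enforces the limit):

```cpp
#include <algorithm>

// Client-side model of the documented limit: a bucket of up to 4 tokens,
// refilled at 1 token per second. This is an assumption for pacing your
// own sends, not the service's actual enforcement algorithm.
struct TokenBucket {
    double tokens = 4.0;       // start full: a burst of 4 is allowed
    unsigned long lastMs = 0;  // time of the last refill, in milliseconds

    // Returns true if a publish is allowed right now.
    bool tryConsume(unsigned long nowMs) {
        tokens = std::min(4.0, tokens + (nowMs - lastMs) / 1000.0);
        lastMs = nowMs;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
};
```

Before each publish, call tryConsume() with your platform’s millisecond clock; if it returns false, hold the event back.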
When you hit the limit, the publishes simply don’t go through. They can be blocked either on your device by the system firmware or in the cloud.
I would advise you to gather your events into a packed format so that one published event can carry many of your events. JSON is a good format to consider, but there are many other ways to do it. If you occasionally need to burst faster, you can, but you then need to either slow down afterwards or accept the consequences of dropped publishes.
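For example, here’s a minimal sketch of packing a burst of samples into one JSON array. publishEvent() is a hypothetical stand-in for whatever publish call your platform provides, and the payload size is an assumption you should adjust to your platform’s limit:

```cpp
#include <cstdio>
#include <cstring>

// Hypothetical stand-in for the platform's publish call;
// replace with the real API.
bool publishEvent(const char* name, const char* data) {
    std::printf("publish %s: %s\n", name, data);
    return true;
}

struct Sample {
    unsigned long ts;  // timestamp, in seconds
    int value;         // the reading itself
};

// Pack several samples into one JSON array so a whole burst costs a
// single publish. Returns how many samples fit (any remainder would
// need a follow-up publish).
int packSamples(const Sample* samples, int count, char* out, size_t outSize) {
    size_t used = std::snprintf(out, outSize, "[");
    int packed = 0;
    for (int i = 0; i < count; ++i) {
        char item[48];
        int n = std::snprintf(item, sizeof(item), "%s{\"t\":%lu,\"v\":%d}",
                              packed ? "," : "", samples[i].ts, samples[i].value);
        if (used + n + 2 > outSize) break;  // leave room for the closing "]"
        std::memcpy(out + used, item, n + 1);
        used += n;
        ++packed;
    }
    std::snprintf(out + used, outSize - used, "]");
    return packed;
}

int main() {
    Sample burst[] = { {100, 7}, {101, 9}, {103, 4} };
    char payload[256];  // assumed limit; adjust to your platform's
    packSamples(burst, 3, payload, sizeof(payload));
    publishEvent("samples", payload);  // one publish carries the burst
}
```

The webhook or subscriber on the other end then iterates over the array instead of expecting one event per publish.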
While I fully agree with @bko that the best approach is to pack as much data as you can into a single publish, there are already libraries that will take your events and push them out one by one while adhering to the rate limit.
Thanks for the info. It’s a little annoying because 99% of the time the data is within the limits, so implementing a system for bundling up events seems a bit excessive. I also like the logic of 1 real-world event = 1 published event, though I can see how you would do it.
I think the idea of building a queue and emptying it at a maximum of 1 event per second suits this scenario better (see the sketch below).
I am surprised there’s no feedback from the publish call saying the event hasn’t been accepted, though; it took me a while to figure out what was going on, since it was only occasionally hitting the limit.
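For what it’s worth, here’s a minimal sketch of what I mean by the queue, again with a hypothetical publishEvent() standing in for the real publish call:

```cpp
#include <chrono>
#include <cstdio>
#include <queue>
#include <string>
#include <thread>

// Hypothetical stand-in for the platform's publish call; swap in
// whatever your device SDK actually provides.
bool publishEvent(const std::string& data) {
    std::printf("publish: %s\n", data.c_str());
    return true;
}

// Events are queued as they occur and drained at most one per second,
// so a burst of 5 or 6 just takes a few extra seconds to go out.
class PublishQueue {
public:
    void push(std::string data) { pending_.push(std::move(data)); }
    bool empty() const { return pending_.empty(); }

    // Call frequently (e.g. from the main loop). Publishes at most one
    // queued event per second; a failed publish stays queued and is
    // retried on a later pass.
    void service() {
        auto now = std::chrono::steady_clock::now();
        if (!pending_.empty() && now - lastSend_ >= std::chrono::seconds(1)) {
            if (publishEvent(pending_.front())) {
                pending_.pop();
            }
            lastSend_ = now;  // throttle retries as well
        }
    }

private:
    std::queue<std::string> pending_;
    std::chrono::steady_clock::time_point lastSend_{};
};

int main() {
    PublishQueue q;
    for (int i = 0; i < 6; ++i)  // simulate a burst of six events
        q.push("event " + std::to_string(i));

    while (!q.empty()) {  // stand-in for the device's main loop
        q.service();
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}
```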
No, I meant that I can see how you would package up multiple events or data samples into a single Publish() event, e.g. using an array of JSON objects, and then have the webhook endpoint scan each published event for one or more events inside the JSON payload. But I like the simplicity of having 1 real-world event = 1 published event.
My events contain very little data and usually fire once a minute at most. But occasionally, maybe once or twice a day, and not even every day, there might be a burst of 5 or 6. I think the queuing option is best for me.
I’ve since refined how the events work and now doubt they will even hit the limit, but I’ve kept the queue in place to cover every eventuality.
Thanks for the help, it’s been really useful as always!