If one wants to set up a high availability/redundant receiving end of the Particle events SSE, are there any choices these days?
From this post, I get that the recommended mechanism is to use webhooks if we want high availability, or to avoid missing events when the SSE stream gets interrupted.
Are webhooks the way to go, or has anyone come across a way to run a redundant SSE library/server?
EDIT: I see this in the docs:
Wondering if anything has changed in this area. Thanks!
Webhooks are the way to go. They’re the most reliable way to get events off the Particle platform, and also allow you to load-balance or have redundant servers on your end.
The problem with SSE is that delivery is not guaranteed, and it can take up to a minute to detect that the connection has stopped working. If you open multiple streams from multiple servers, you must de-duplicate the events, because every client receives every event.
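To illustrate the de-duplication that redundant SSE streams require, here is a minimal sketch. The event dictionary fields (coreid, name, published_at) match what Particle's event stream delivers, but the class, its name, and the eviction window size are illustrative assumptions, not part of any Particle library:

```python
# Sketch: dropping duplicate Particle events when two or more
# redundant servers each hold their own SSE stream and forward
# everything they see to a shared consumer.
from collections import OrderedDict

class EventDeduplicator:
    """Remembers recently seen events so a duplicate arriving
    from a second stream can be dropped."""

    def __init__(self, max_entries=10_000):
        self.seen = OrderedDict()          # insertion-ordered "recently seen" set
        self.max_entries = max_entries

    def is_new(self, event):
        # Device ID + event name + publish timestamp identifies an
        # event well enough for de-duplication purposes.
        key = (event.get("coreid"), event.get("name"), event.get("published_at"))
        if key in self.seen:
            return False                   # already handled via another stream
        self.seen[key] = True
        if len(self.seen) > self.max_entries:
            self.seen.popitem(last=False)  # evict the oldest entry
        return True

dedup = EventDeduplicator()
e = {"coreid": "abc123", "name": "temperature",
     "published_at": "2020-01-01T00:00:00Z", "data": "21.5"}
print(dedup.is_new(e))  # first stream delivers it: True
print(dedup.is_new(e))  # second stream delivers the same event: False
```

The trade-off versus webhooks is visible here: you pay for redundancy with extra state and a heuristic event identity, whereas a load balancer in front of webhook receivers gets you redundancy without duplicates.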
The other option worth mentioning is the Google Cloud Platform integration, which maps Particle events into Google PubSub events. It has the same reliability benefits as webhooks, since it is essentially a special webhook on the Particle side. In addition, Google PubSub provides reliable, distributed, de-duplicated delivery, and does not require you to run an SSL server certificate or make firewall changes.
Follow up question:
Would there be an issue on the webhook or Particle side if I create webhooks that match the internal Particle events?
OK, last one:
Is there a webhook technique that allows me to “forward” all these events to my cloud infrastructure?
I’ve tried * (so it matches everything), but it did not work: nothing came through, not even the test message.
On the other hand, spark and particle worked.
You can make a webhook that triggers off internal events.
You can’t make a wildcard webhook that gets all events.
You can, however, take advantage of the prefix feature and make two webhooks, one for “spark” and one for “particle”, which together will catch all of the internal events.
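As a sketch, the two hook definitions could look like the following JSON files for the Particle CLI (particle webhook create hook.json). The target URL is a placeholder for your own endpoint; the event, url, requestType, and mydevices fields follow the documented webhook definition format:

```json
{
    "event": "spark",
    "url": "https://example.com/particle-events",
    "requestType": "POST",
    "mydevices": true
}
```

```json
{
    "event": "particle",
    "url": "https://example.com/particle-events",
    "requestType": "POST",
    "mydevices": true
}
```

Because the event field is a prefix match, the first hook catches events such as spark/status and spark/flash/status, and the second catches the particle/ internal events.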