Particle.publish() rate limits

I’m building a product, and each “product” contains 5-8 Photons.

At the moment I’m using one Photon as the “hub” Photon and routing all traffic to the Particle cloud through that one Photon.

I’m struggling with the 1 event per second limitation.

I need to transmit the data to the hub anyway for local state & logic, so it makes architectural sense to have all cloud communication go through the one Photon.

How about, instead of restricting messages to one per second per device, applying the same one-message-per-device-per-second budget across the whole account? So if you had 4 Photons in your account, one of them could transmit 4 messages per second, for example.

This seems fair, as it’s a more flexible way of distributing the “cloud capacity” you have “purchased” along with the Photons.

Can’t you just combine all the messages into one longer message?
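
For example (a rough sketch with made-up event and function names, and assuming the readings fit in the limited publish payload), the hub could pack several node readings into one publish:

```cpp
// Rough sketch: pack several readings into one comma-separated payload
// so a single Particle.publish() carries them all. Names are illustrative.
void publishBatch(const float readings[], int count)
{
    char payload[255];                        // publish data size is limited
    size_t pos = 0;
    for (int i = 0; i < count && pos < sizeof(payload) - 1; i++) {
        pos += snprintf(payload + pos, sizeof(payload) - pos,
                        i == 0 ? "%.2f" : ",%.2f", readings[i]);
    }
    Particle.publish("hub/batch", payload, PRIVATE);
}
```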

How are you routing that traffic to the one Photon? Can’t you just publish with all Photons and subscribe to that with the hub for any state/storage purposes?
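
As a sketch (the event name "node/state" is made up), the hub side could be as simple as:

```cpp
// Hub-side sketch: every node calls Particle.publish("node/state", data, PRIVATE)
// itself, and the hub subscribes to the same events for its local state/logic.
void nodeStateHandler(const char *event, const char *data)
{
    Serial.printlnf("got %s -> %s", event, data);   // update local state here
}

void setup()
{
    Serial.begin(9600);
    Particle.subscribe("node/state", nodeStateHandler, MY_DEVICES);
}
```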

The 1/s rate limit is baked into the system firmware on each device and is not a cloud-side limit. So neither the cloud nor any other device knows about another device hitting the limit or having "spare bandwidth" available.

@ScruffR Thanks for that information, I didn’t realise it worked that way. That explains why the cloud doesn’t send an error message when you exceed the limit - because it doesn’t know.

@Moors7 The local traffic uses CoAP, but not the Particle system one. The whole system needs to be able to keep working without internet access, so the local communication is necessary.

I can always have the Photons publish to the Particle cloud and to the local hub as a workaround, but I thought it was worth asking to understand more about the rate limit. Thanks everyone.

Just to follow up on the rate limits thing.

Can I suggest that the “is_rate_limited” function used by the system firmware to tell if publishing is rate limited be made part of the public API so we could check “is_rate_limited” before trying to publish an event?

The other way round, I guess (but am not sure): the return value of publish might indicate when the limit is hit.
Checking that and maybe repeating the same publish after a short delay might be feasible too.
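
Something along these lines, just as a sketch - it assumes (not yet confirmed) that a rate-limited publish returns false:

```cpp
// Sketch only: retry once if publish() reports failure.
// Assumes (not yet confirmed) that a rate-limited publish returns false.
bool publishWithRetry(const char *name, const char *data)
{
    if (Particle.publish(name, data, PRIVATE))
        return true;
    delay(1100);                              // a bit more than the 1 s window
    return Particle.publish(name, data, PRIVATE);
}
```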

That’s probably the better way anyhow, since is_rate_limited is not will_be_rate_limited after all. So you won’t know whether the next publish will be allowed until you’ve tried it.

According to the documentation, a return code of false from Particle.publish relates to whether the device was connected to the Particle cloud and succeeded in publishing or not. I don’t think you would want to change that.

After having another look at publisher.h I realised it’s pretty trivial to implement my own user-space version of is_rate_limited. Once I’ve done that and tested it, I’ll post it here for others if it’s of use.
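
In the meantime, here’s a rough sketch of what such a user-space check might look like - just an approximation of the documented limit (bursts of up to 4 publishes, averaging no more than 1 per second), not the actual publisher.h implementation:

```cpp
// Sketch of a user-space rate-limit check (approximation only).
// Assumes the documented limit: bursts of up to 4 publishes,
// averaging no more than 1 per second.
const int BURST = 4;
unsigned long publishTimes[BURST] = {0};
int publishIndex = 0;

bool wouldBeRateLimited()
{
    unsigned long oldest = publishTimes[publishIndex];
    if (oldest == 0)
        return false;                         // fewer than BURST publishes so far
    // allow at most BURST publishes in any BURST-second window
    return (millis() - oldest) < (BURST * 1000UL);
}

bool ratedPublish(const char *name, const char *data)
{
    if (wouldBeRateLimited())
        return false;
    publishTimes[publishIndex] = millis();
    publishIndex = (publishIndex + 1) % BURST;
    return Particle.publish(name, data, PRIVATE);
}
```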

Actually the return value can’t tell you whether the publish actually succeeded; it can only tell you whether the event could be queued for delivery (for whatever reason it might not be - including the rate limit, I’d guess) - hence no change in its behaviour is needed.

But to test my theory, a quick sketch hitting the limit and checking the retval might settle the matter :wink:
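
Something like this should do it (a quick test sketch, not production code):

```cpp
// Quick test: publish well above 1 event/s and log the return value,
// to see whether a rate-limited publish reports false.
void setup()
{
    Serial.begin(9600);
}

void loop()
{
    static int counter = 0;
    char data[16];
    snprintf(data, sizeof(data), "%d", counter++);
    bool ok = Particle.publish("rate-test", data, PRIVATE);
    Serial.printlnf("publish #%d returned %s", counter, ok ? "true" : "false");
    delay(200);                               // five attempts per second
}
```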