Particle API call rate limit

I have approximately 100 devices and I need to remotely update an output to a specific value, for which I am using an API call.

I need to update the value of all devices simultaneously, probably within 1 second. I understand the API call rate limit is approximately 10 per second.

Is there any way I can meet the above requirement? Any lead would be highly appreciated.

I don't fully understand your situation, but the limit is per call. Why not stuff more into each call?

Sorry if my question was not clear. The situation is simple: I have 100 Particle devices and need to set the LED light level to a specific value (say 80%). I need to set the value remotely, so I am using the Particle Cloud API.

The issue is that only 10 API calls per second are allowed, whereas I need to make 100 calls to set the value on 100 devices. And since I have to set the value within 1 second, I would need to make 100 API calls in a second, which is not permitted.

Also, the Particle API allows a maximum of 63 characters for the function argument, which rules out concatenating the device names.

This comment might help

Thanks, let me try.

If all devices need to be set to the same light level, then you could just have them all subscribe to the same Particle publish event (or variable) so they all receive the published event data at the same time and change the LED at the same time.
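A minimal sketch of the device side of that idea. The event name `set-level` and the payload format (the level as a decimal string, e.g. `"80"`) are assumptions for illustration; in real firmware the handler would be registered with `Particle.subscribe()` in `setup()`:

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical payload format: the light level as a decimal string ("80").
// Returns the level if it is a valid 0-100 value, -1 otherwise.
int parseLevel(const char *data) {
    if (data == nullptr || *data == '\0') return -1;
    char *end = nullptr;
    long level = strtol(data, &end, 10);
    if (end == data || *end != '\0' || level < 0 || level > 100) return -1;
    return (int)level;
}

// On the device this handler would be registered once in setup() with
//   Particle.subscribe("set-level", onSetLevel);
// so a single Particle.publish("set-level", "80") reaches every device.
void onSetLevel(const char *event, const char *data) {
    int level = parseLevel(data);
    if (level >= 0) {
        printf("setting LED to %d%%\n", level);
        // analogWrite(ledPin, level * 255 / 100);  // firmware-side action
    }
}
```

The key point is that one publish fans out to all subscribed devices in parallel, so the 10-calls-per-second limit is no longer the bottleneck.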


Since that is not true for all API calls, it would be good to know which API call you are referring to :wink:

Especially since none of the default cloud interfaces are limited to 63 bytes anymore (unless you are still using a Spark Core).

However, as RWB already said, the way to go would be a “broadcast” event that all devices subscribe to and will receive in parallel with a single publish.

Thanks for this input. I need to set the value for a subset/group of devices rather than “all” devices.

Are the groups predefined?
Do devices of one group share a common property?
If not, can such a thing be introduced?
Will 622 bytes still not suffice to package the calls?
What is more important when setting a new value: Latency or synchronicity of the switching?

At the moment the groups are dynamic in nature, as a user can select, say, 100 devices out of 1000 and set their light level to 80%. I probably need to explore the option of creating the groups via the API, publishing the data, and then ungrouping.

Not all properties of a group are the same; only a few properties are shared, e.g. the light level.

622 bytes would be fine, as I could combine device names using a delimiter and send them in a few calls. But then we would need to separate the names and set the value on the Particle side (I may keep this as a last option).
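For reference, the batching arithmetic works out well: a sketch of packing comma-delimited device names into as few argument strings as possible without exceeding the 622-byte limit mentioned above (the delimiter and limit value are assumptions; an oversized single name would still get its own batch):

```cpp
#include <string>
#include <vector>

// Pack names, comma-delimited, into batches no longer than maxLen bytes.
std::vector<std::string> packNames(const std::vector<std::string> &names,
                                   size_t maxLen = 622) {
    std::vector<std::string> batches;
    std::string current;
    for (const auto &name : names) {
        // +1 for the comma when current already holds at least one name
        size_t extra = current.empty() ? name.size() : name.size() + 1;
        if (!current.empty() && current.size() + extra > maxLen) {
            batches.push_back(current);   // batch full, start a new one
            current.clear();
        }
        if (!current.empty()) current += ',';
        current += name;
    }
    if (!current.empty()) batches.push_back(current);
    return batches;
}
```

With 24-character device IDs, each batch holds about 24 names, so 100 devices fit into roughly 5 API calls instead of 100.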

Latency is the key: if I select 100 or 200 devices (from a custom-developed application) and set the light level, all selected devices' light levels should change immediately.

So the follow-up question would be: can grouping and changing the light level be done as a two-step process, where you first assign devices to a group in a non-time-critical fashion and after that keep controlling that group with a single low-latency command?

Yes, it can be done as a two-step process and should work: first create the group, then control that group with a single command. That would be fine for my case.

In that case, have all your devices listen to one update event so that all of them (even the non-affected ones) receive the bite-sized update event, but have them check in the subscription handler whether they are actually meant to respond to it.

This can be done by sending (non-time-critical) group-assignment events to all devices of a group, containing a filter phrase they should store and check against when the update event arrives, to decide whether to respond or not.
With a clever layout of that filter phrase you could even have overlapping groups.
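One possible layout of that filter phrase (the names, tags, and `group:level` payload format are illustrative assumptions, not from the thread): each device stores a delimited list of the group tags it belongs to, and the update event carries a single tag. Because the stored phrase can list several tags, groups may overlap freely:

```cpp
#include <cstdio>
#include <cstring>

// Set by a non-time-critical group-assignment event; a device in both the
// "lobby" and "floor2" groups stores both tags.
const char *myGroups = ",lobby,floor2,";

// True if tag occurs as a whole entry in the stored phrase. Wrapping the
// tag in delimiters prevents "floor2" from matching "floor21".
bool inGroup(const char *groups, const char *tag) {
    char needle[64];
    snprintf(needle, sizeof(needle), ",%s,", tag);
    return strstr(groups, needle) != nullptr;
}

// Would be registered via Particle.subscribe("set-level", onSetLevel);
// the broadcast payload is assumed to look like "lobby:80".
void onSetLevel(const char *event, const char *data) {
    char group[32];
    int level = -1;
    if (sscanf(data, "%31[^:]:%d", group, &level) == 2 &&
        inGroup(myGroups, group) && level >= 0 && level <= 100) {
        printf("responding: LED -> %d%%\n", level);
    }
    // otherwise: the event was for another group, so ignore it
}
```

Every device still receives every update event; only the membership check decides whether it acts, which is what makes overlapping groups cheap.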

Alternatively, if the group assignment is somewhat “semi-static”, you could use unique subscription names and call Particle.unsubscribe() and then re-subscribe with the new filter term (which can also be stored in EEPROM to automatically rejoin the same group after power-up).

Thanks for your input, and for mentioning overlapping groups, as I do have overlapping groups.

I will work on this. Once again, thanks for your input and help.
