Argon - why is there a 5 mesh subscription handler limit?

argon

#1

Why is there a 5 mesh subscription handler limit for an Argon? This makes no sense and severely limits the potential of the 3G Particle system.

You can have dozens of Xenons in a Mesh with dozens of sensors, but an Argon can subscribe to only 5 of those sensors.


#2

Subscriptions listen for all events published under a specified prefix by devices on your Particle account and are not tied to individual devices. You do not need 5 subscriptions for 5 devices if they are all publishing with the same event prefix. When devices are publishing similar data it is best practice to use a single subscription handler instead of using several.

In your situation you could have the xenons publish events with identical prefixes but unique suffixes, containing uniformly formatted payloads holding the sensor data. The argon would parse the suffixes in the subscription handler, allowing it to know which xenon is reporting the incoming sensor data, and then do whatever you wish with that data.

As for why the number of Mesh subscriptions is capped at 5, that is unknown to me. I’d imagine that 5 is the most stable limit the Particle firmware engineers found, and frankly, if you need more than 5 subscriptions you are likely doing something wrong.

Also, you mention 3G potential but you are asking about the argon. Are you aware that the argon is the Wi-Fi + Mesh board?


Basic Mesh: How to identify each Xenon
#3

Got it. It just doesn’t seem like elegant coding.

On this forum there is a decent write-up on using JSON: Use JSON to send data between Particles

Just use Mesh.publish/subscribe instead of Particle.publish/subscribe.

3G - 3rd Generation Particle devices


#4

That’s open for debate.
You can elegantly code around that limit by having your subscription handler act as a dispatcher function: the common tasks for each of the five individual categories are concentrated in the dispatcher, which hands off the individual work to individual functions.


#5

About the subscription:

So if you have 10 devices that all pub a temp every 5 seconds (example), and your gateway is subbed to the “temp” event, can it handle all 10 if they pub at the same or near-same time?


#6

Short answer is yes. It should be able to handle many messages coming in very quickly. You’ll get a few milliseconds delay between all the readings. I have been testing with my heartbeat code with 4 endpoints and 1 gateway. The gateway publishes an event every 10 seconds to which all the endpoints respond simultaneously. There are usually a few milliseconds between each response received on the gateway. I’d be curious to test the results with 10 endpoints on the mesh.

To accommodate the possible flood of responses, you must adhere to good code design in your subscribe handler: record the data to a variable (an array or something) and then process the results in loop(), not in the callback function. I was a bit paranoid about that, so I rewrote the callback in v0.3.1 to do as little processing in the callback as possible.

With all that said, you might have issues doing OTA firmware updates when the mesh network is stressed. When an OTA firmware update starts, you may miss some messages published on the mesh, or the OTA update may timeout and report failure. It’s still something I hope Particle will be reviewing for future OS versions.

Edit: Here’s a typical response when all nodes respond simultaneously. Now I do realize that there may be a slight delay for each gateway heartbeat request to be received by the endpoint. That delay may be amplified when the endpoint sends the heartbeat response. But we’re probably talking microseconds.

[image: gateway serial log showing each endpoint’s heartbeat response arriving a few milliseconds apart]


#7

To add to that: while you should be safe when it comes to processing speed, the mesh communication is UDP based, so there is no guaranteed delivery of any given packet. Especially in complex topologies, you may need to distinguish between data that is vital and data that can be lost.
For vital data, you should incorporate some acknowledgement scheme.