This may be a question that has been asked before, but I just want to clarify.
I currently have a project which may involve a rather large number of Particle devices, be they Photons or Raspberry Pis (5,000+).
The devices are going to be grouped in batches of 500 in different buildings for various needs. One condition of the system is that it needs to work even without internet, and it relies on API calls very heavily.
So I have been looking at the possibility of setting up a local cloud in each building with 500 devices. This would run on its own dedicated server.
This should be fine; however, the API calls are a concern. Ideally I need responses from the API calls in ~200ms, as this is for controlling lights etc. To give more of an idea of the API call needs: we would need to make a request to a function at least twice a second (ideally every 100ms or so) to get status updates. The devices may then have multiple requests to functions being run per second. All in, I could easily see around 10 requests per second going to each device on a 24/7 basis. Right now I have a device running a very stripped-back version of the software, and I am aware that 4 requests a second seems to be the limit for the Particle cloud.
Does anyone know if the local cloud would be able to handle this sort of response time? The devices making the requests will be on the internal network with the server, so there should be very little latency between them to add extra time (hence the possibility of the Pi for Ethernet).
That limit only applies to Particle.publish() from the device, not to function calls, variable requests, or events originating from the cloud.
But your code may impact the response time by "starving" the cloud task of µC time.
You could use SYSTEM_THREAD(ENABLED), avoid delay(), and call Particle.process() (or drop out of loop()) as often as possible.
And if you need to publish more frequently, you'd need to tweak the system firmware and the local cloud server.
Thanks for your reply. As it stands, if I make a request to my Particle it's around a 1-second response time using the cloud; I'm not using any publish events, only calling functions and variables. If I then have 2 or 3 of the updates running, I can see the delay gets a lot longer and sometimes fails entirely due to taking longer than 5 seconds.
Right now I have 2 calls inside my loop going to an MCP23017, with a delay of 100ms to make sure I'm not requesting too much from the device (how often this can be run I'm not sure).
One thing I am wondering is if I remove the delay and instead put in a check using millis(), checking the MCP when the millisecond difference is greater than 100ms. This should allow the loop to run constantly without delay (and I believe call Particle.process() after every loop), in theory improving the response time.
I am aware that my current delay may be increasing the response time a bit, but I was under the impression that Particle.process() now ran within delay() as well?
In essence my code is all function calls, variable calls, and a tiny internal loop that checks for inputs being on to enable outputs. As I say, this is using I2C, so there may be slowdowns from that as I'm not sure how quickly I2C responds.
That's true, but my (dated) insight on that is that this happens once per 1000ms of accumulated delay time and not constantly.
And AFAIK only one pending function call is serviced per call to Particle.process() or between iterations of loop().
And you can call multiple functions before a previous call has returned.
So for safety then: if I run the millisecond check and then, before the end of the loop, run say 5 calls to Particle.process(), in theory that should remove the delay and handle the 5 manual calls plus 1 automatic call every time the loop runs. As long as the loop with these requests doesn't take huge lengths of time (>300ms), I should be fine to do it this way and handle very large numbers of requests per second.
It's been a while since I have worked with Particle (still running my Spark Core), so I'm catching up on everything as I go.
I'd rather go for a lightweight loop() that drops out at a similar rate to your 5 manual calls.
I've not measured the timing, but with the cloud connection on, the period for one loop() iteration (including the hidden tasks) is ~1ms, and I'd guess a manual call might be in the same range, which may introduce a 5ms extra delay to your loop() without any benefit to your actual work tasks.
If I read the loop-time variable I'm seeing an average time of 25 to 32ms with the 5 process calls. Following your suggestion, you would remove the 5 calls and allow it to just run as normal, if I'm correct?
That's a surprise.
Without knowing how long the mcp1 calls are taking, I would have expected a loop time of no more than 10ms. But as you correctly assumed, I'd just let the code leave loop(): when your millis() condition is not met you'll be in there for just a few clock cycles and attend to the cloud tasks immediately again anyway.
I will give that a try when I'm back, to see how long the loop takes without the process calls.
I will also add in a test to see how long the mcp1 calls take to run, as well as one to check the longest loop time. Obviously I can cut out a few calls to the device every 100ms, as I haven't been storing them to a variable, and I may move a Particle.process() in between the if statements and the updatepinstate function call to try and react faster.
The updatepinstate function actually makes 16 calls to the mcp1 device, so that may be adding a large slowdown to the loop.
The times are as follows, these are shown in the order (last time, maximum time, minimum time):
Loop Run - 0ms, 119ms, 0ms
MCP Read (2 pins) - 1ms, 1ms, 0ms
MCP Read (16 pins) - 7ms, 7ms, 6ms
Function Requests - 8ms, 8ms, 7ms
So it seems all my requests are very quick on the Spark Core (besides the loop). Worst case I can see 200ms to respond to a request, which is fine as long as the local cloud responds in a few milliseconds; right now it's 0.5 to 1.8 seconds for a response to a variable call, so I'm hoping the local cloud can respond much faster.