I’m not complaining that there are more; rather, I want to know if there’s a detrimental effect (as in a burden on Particle.process(), or something along those lines).
If I recall correctly, it’s a memory thing. Each registered variable takes up a bit of memory, and space is guaranteed for 20 variables with names of up to 12 characters each. It might be possible to register more by using shorter names, but memory limits might pop up at a certain point.
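If you want to probe it empirically, Particle.variable() returns a bool, so a failed registration is detectable. A minimal sketch (the variable names here are just illustrative):

```cpp
// Register cloud variables and log any registration that is refused.
int sensorValue = 0;
double sampleRate = 0.0;

void setup() {
    Serial.begin(9600);

    // Particle.variable() returns false when the variable
    // can't be registered (e.g., a limit is hit).
    if (!Particle.variable("sensorValue", sensorValue)) {
        Serial.println("sensorValue failed to register");
    }
    if (!Particle.variable("sampleRate", sampleRate)) {
        Serial.println("sampleRate failed to register");
    }
}

void loop() {
}
```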
Shouldn’t be harmful as far as I’m aware
Both Photons were in “connect” mode this morning and offline… I have a local unit on the desk and an installed unit on premise. They both went down at the same time early this morning, ~5:30 AM, with hard faults. The code has otherwise been built incrementally and has been rock-solid reliable for weeks.
I’m thinking there’s something afoot from pushing the variable limits…
I traced my reset issue down to a memory resource bleed (don’t init a struct via memset() if it has String members).
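For anyone else who hits this, here’s a sketch of the pattern that bit me (the struct and field names are made up). memset() zeroes the String’s internal heap pointer, so the buffer it owned leaks and the object is left corrupted:

```cpp
#include "Particle.h"

struct Reading {
    int value;
    String label;  // String manages a heap-allocated buffer internally
};

void badReset(Reading& r) {
    // BAD: zeroes String's internal pointer and length fields,
    // leaking its heap buffer and corrupting the object.
    memset(&r, 0, sizeof(r));
}

void goodReset(Reading& r) {
    // GOOD: clear members through their own assignment operators.
    r.value = 0;
    r.label = "";      // String frees/reuses its buffer safely
    // or simply: r = Reading();
}
```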
I’ve also had the grace of being able to employ a setup() delay, which provides a window to push in a new build should I find my remote Photon is hard-fault restarting as a result of a FOTA push…
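In case it’s useful, a minimal version of that safety window (the 30-second duration is just what I settled on): the loop services the cloud connection before anything that might hard fault runs, so there’s always a chance to flash a fix.

```cpp
void setup() {
    // OTA safety window: stay connected and responsive for ~30 s
    // before running anything that might hard fault, so a fixed
    // build can still be flashed after a bad FOTA push.
    uint32_t start = millis();
    while (millis() - start < 30000UL) {
        Particle.process();  // keep the cloud connection serviced
    }

    // ...normal initialization continues here...
}
```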
So, in essence, the question still looms… what exactly is the limit, and are there downsides?
I have since condensed the data results into a JSON payload, @BulldogLowell, as that was my ultimate goal…
In the interim, while acquiring analytic data and logic results, I’ve had to instantiate several cloud variables to characterize my sampling and analytics routines remotely, since I can’t replicate the process environment on my desk.
Now that I have bulletproofed that operation, I’m happily pushing JSON payloads. I am, however, using the SparkJson lib instead of the direct string building you illustrated; the gist is below.
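Roughly like this (field names and values are just placeholders; SparkJson follows the ArduinoJson v5 API, if memory serves):

```cpp
#include "SparkJson.h"

double temperature = 72.4;  // placeholder analytics results
int sampleCount = 128;
char payload[256];

void publishResults() {
    // Build the JSON document in a fixed-size buffer.
    StaticJsonBuffer<200> jsonBuffer;
    JsonObject& root = jsonBuffer.createObject();
    root["temp"] = temperature;
    root["samples"] = sampleCount;

    // Serialize and publish one event instead of exposing
    // many individual cloud variables.
    root.printTo(payload, sizeof(payload));
    Particle.publish("analytics", payload, PRIVATE);
}
```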