Something changed in the last few days. I have a Core that pushes Spark.publish() events, sometimes exceeding 4 events per second (the so-called burst), and sometimes the events come even more densely. Typically the overflow events would be dropped (fine), but enough data points/events still got through to achieve what I was after (pseudo-near-realtime events).
But recently the server (and I have tested this on my local server as well as the Spark cloud) drops many more messages. If there is a burst, the server is unresponsive for 4-5 seconds, and then picks up the .publish() events again. It is as if the throttling has been tightened (even on my personal server).
The only thing that has changed is that I am at a new location with a new internet connection, but I don't know how that would change anything. I have each .publish() event 'shadowed' by a Serial.print(), so I know that the data/activity is being read by the Core as it should. Again, it's as if the throttling had been increased.
Any ideas or suggestions? BTW, yes, I would love to use websockets or UDP, but those options are currently buggy.