As I continue working on a robust local cloud setup for a trio of test Cores, I’m encountering an interesting issue.
Sometimes the Cores drop out from spark-server, yet they continue a quasi-connected existence: the status LED keeps breathing cyan and I can still flash them OTA. In this state, however, I can no longer retrieve cloud variables, call functions, or receive SSEs from them, and they are no longer listed as 'online'. The only fix so far is either a manual reset or forcing an OTA flash to remotely reset the Core. The dropout can happen anywhere from half an hour in to overnight, but usually when you least expect it.
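(For what it's worth, a cloud-callable reset function along the lines of the sketch below would not help here either, since function calls are exactly what stops going through in this state. The sketch is a hypothetical illustration rather than my actual firmware, and it assumes System.reset() is available on this firmware version.)

// Hypothetical sketch: a cloud-callable reset, assuming System.reset() exists on this firmware.
int forceReset(String args) {              // Spark.function handlers take a String and return an int
    System.reset();                        // reboots the Core; never returns
    return 0;
}

void setup() {
    Spark.function("reset", forceReset);   // "reset" is a placeholder function name
}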
Below are the relevant logs for anyone who can help with this issue.
When everything is working smoothly, running particle list gives me the readout below, which is correct (Core IDs removed). The Cores are also happily publishing SSEs, by the way (I wonder if that could be the cause?); a simplified sketch of the firmware side follows the readout.
Checking with the cloud...
chicken_mighty (xxxx) is online
  Variables:
    humidity (double)
    tempC (double)
    dewPt (double)
    dewPtSlow (double)
cranky_banjo (xxxx) is online
  Variables:
    humidity (double)
    tempC (double)
    dewPt (double)
    dewPtSlow (double)
normal_ninja (xxxx) is online
  Variables:
    humidity (double)
    tempC (double)
    dewPt (double)
    dewPtSlow (double)
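As mentioned, the firmware side looks roughly like the sketch below. It is a simplified reconstruction rather than my exact code; the "readings" event name, the publish interval and the sensor-reading placeholder are stand-ins.

double humidity, tempC, dewPt, dewPtSlow;   // updated from the sensor each loop
char pubOut[63];                            // JSON payload buffer for Spark.publish()

void setup() {
    // Register the readings as cloud variables; these are what particle list reports
    Spark.variable("humidity",  &humidity,  DOUBLE);
    Spark.variable("tempC",     &tempC,     DOUBLE);
    Spark.variable("dewPt",     &dewPt,     DOUBLE);
    Spark.variable("dewPtSlow", &dewPtSlow, DOUBLE);
}

void loop() {
    // ...read the sensor and update the four values here...
    sprintf(pubOut, "{\"c\":\"%2.2f\",\"h\":\"%2.2f\", \"d\":\"%2.2f\",\"ds\":\"%2.2f\"}",
            tempC, humidity, dewPt, dewPtSlow);
    Spark.publish("readings", pubOut);      // goes out as an SSE; "readings" is a stand-in name
    delay(10000);                           // stand-in publish interval
}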
The corresponding spark-server log from when particle list was run didn't report anything amiss either:
ListDevices { userID: 'aaaa' }
isDeviceOnline: Pinging core... { coreID: 'xxxx' }
isDeviceOnline: Pinging core... { coreID: 'yyyy' }
isDeviceOnline: Pinging core... { coreID: 'zzzz' }
ListDevices... waiting for connected state to settle { userID: 'aaaa' }
isDeviceOnline: Device service thinks it is online... { coreID: 'xxxx' }
isDeviceOnline: Device service thinks it is online... { coreID: 'yyyy' }
isDeviceOnline: Device service thinks it is online... { coreID: 'zzzz' }
192.168.2.101 - - [Wed, 20 May 2015 07:26:36 GMT] "GET /v1/devices?access_token=bbbb HTTP/1.1" 200 387 "-" "-"
GetAttr { coreID: 'xxxx', userID: 'aaaa' }
192.168.2.101 - - [Wed, 20 May 2015 07:26:41 GMT] "GET /v1/devices/xxxx?access_token=bbbb HTTP/1.1" 200 224 "-" "-"
GetAttr { coreID: 'yyyy', userID: 'aaaa' }
192.168.2.101 - - [Wed, 20 May 2015 07:26:41 GMT] "GET /v1/devices/yyyy?access_token=bbbb HTTP/1.1" 200 224 "-" "-"
GetAttr { coreID: 'zzzz', userID: 'aaaa' }
192.168.2.101 - - [Wed, 20 May 2015 07:26:41 GMT] "GET /v1/devices/zzzz?access_token=bbbb HTTP/1.1" 200 226 "-" "-"
Now here comes the 'fun' part. After a period of time (x hours, overnight?!), one or more Cores start 'dropping out'. That in itself is perfectly normal, especially if they get an intermittent WiFi signal, and I've seen Cores on the Particle Cloud drop out and reconnect with no issue. On the local cloud, though, the spark-server logs report:
recovering from early data! { step: 5, data: '410102f58eb174', cache_key: '_1' }
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
recovering from early data! { step: 5, data: '4101b074bdb174', cache_key: '_0' }
recovering from early data! { step: 6,
data: '5002b075b1450d0d737061726b2f6363333030302d70617463682d76657273696f6eff312e3239',
cache_key: '_0' }
recovering from early data! { step: 5, data: '41012697cbb174', cache_key: '_3' }
recovering from early data! { step: 5, data: '4101f2a032b174', cache_key: '_4' }
recovering from early data! { step: 5, data: '4101d5ce54b174', cache_key: '_5' }
recovering from early data! { step: 5, data: '4101459fdfb174', cache_key: '_6' }
recovering from early data! { step: 5, data: '41010c467db174', cache_key: '_7' }
recovering from early data! { step: 5, data: '4101e34ef7b174', cache_key: '_9' }
recovering from early data! { step: 5, data: '4101613f92b174', cache_key: '_11' }
recovering from early data! { step: 5, data: '4101ed8b00b174', cache_key: '_12' }
recovering from early data! { step: 5, data: '4101b10507b174', cache_key: '_13' }
recovering from early data! { step: 5, data: '41013546f5b174', cache_key: '_14' }
recovering from early data! { step: 5, data: '41015e0be4b174', cache_key: '_15' }
recovering from early data! { step: 5, data: '41011dd828b174', cache_key: '_16' }
recovering from early data! { step: 5, data: '41010d21ffb174', cache_key: '_26' }
on ready { coreID: '54ff6e066667515129382367',
ip: '192.168.2.108',
product_id: 0,
firmware_version: 10,
cache_key: '_26' }
Core online!
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
at Object.addListener (events.js:160:15)
at Object.EventPublisher.subscribe (/home/pi/spark-server/node_modules/spark-protocol/lib/EventPublisher.js:52:14)
at Object.CoreController.subscribe (/home/pi/spark-server/lib/CoreController.js:145:20)
at EventsApi.get_events (/home/pi/spark-server/views/EventViews001.js:198:10)
at callbacks (/home/pi/spark-server/node_modules/express/lib/router/index.js:164:37)
at param (/home/pi/spark-server/node_modules/express/lib/router/index.js:138:11)
at param (/home/pi/spark-server/node_modules/express/lib/router/index.js:135:11)
at pass (/home/pi/spark-server/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/home/pi/spark-server/node_modules/express/lib/router/index.js:173:5)
at Object.router (/home/pi/spark-server/node_modules/express/lib/router/index.js:33:10)
Can someone share what the 'recovering from early data' message means?
And when I run particle list again, I get, for example:
Checking with the cloud...
chicken_mighty (xxxx) is offline
cranky_banjo (xxxx) is online
  Variables:
    humidity (double)
    tempC (double)
    dewPt (double)
    dewPtSlow (double)
normal_ninja (xxxx) is offline
The EventEmitter memory leak looks very suspect, and I wonder if it has anything to do with the Spark.publish() calls in my code, where I use sprintf to format my char pubOut[63] buffer as JSON output:
sprintf(pubOut, "{\"c\":\"%2.2f\",\"h\":\"%2.2f\", \"d\":\"%2.2f\",\"ds\":\"%2.2f\"}", tempC, humidity, dewPt, dewPtSlow);
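With typical two-digit readings that string comes to roughly 50 characters, so it fits inside pubOut[63]. As a defensive variant (just a sketch, not what I'm currently running), snprintf would guarantee the buffer can't overflow even if a reading ever went out of range:

// Defensive variant: snprintf never writes past the end of pubOut,
// so an out-of-range reading can't overflow the buffer.
snprintf(pubOut, sizeof(pubOut),
         "{\"c\":\"%.2f\",\"h\":\"%.2f\",\"d\":\"%.2f\",\"ds\":\"%.2f\"}",
         tempC, humidity, dewPt, dewPtSlow);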
Anyone with any idea what’s going on?