Run the flash command: spark flash --usb deep_update_2014_06
This installs the deep update from a binary that is packaged with the Spark CLI, so you don't have to download it.
TODO: DEPLOY THIS
Update: nothing … same error. It did not find the file.
But now I know, thanks to you, that it's packaged with the spark-cli and is not downloaded at the moment I try to flash the core.
So I did a quick
find / -name deep_update_2014_06.bin
and found the file (Raspbian) under:
/usr/local/lib/node_modules/spark-cli/binaries/deep_update_2014_06.bin
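(For reference: under the hood this flash is a dfu-util download. Done by hand it should look roughly like the line below, where the 1d50:607f device ID is the Core's usual DFU ID and the 0x08005000 address matches the CLI output that follows; treat both as my assumptions, not something stated in this thread.)
dfu-util -d 1d50:607f -a 0 -s 0x08005000:leave -D /usr/local/lib/node_modules/spark-cli/binaries/deep_update_2014_06.bin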
Now I was finally able to do the deep-flash-dance:
…
DfuSe interface name: "Internal Flash "
Downloading to address = 0x08005000, size = 93636
Download   [=========================] 100%       94208 bytes
Download done.
File downloaded successfully
Transitioning to dfuMANIFEST state
Flashed!
The last thing I'd like to know: the link in my first post has a troubleshooting section, or something like it, for checking whether the deep_update applied successfully. But my output looks quite different and there is no "version".
If you ran the command spark flash --usb deep_update_2014_06 and it didn't work as expected, then you have the wrong version of the CLI and the wrong version of deep_update. The local spark-protocol module doesn't store the cc3000 patch version, but if you're subscribing to your events you should see it when your core starts up; try a spark subscribe mine.
Hmm, the deep_update works by optionally patching the cc3000; then it connects to the cloud and does an auto-upgrade. That feature might not be working on the local cloud. It sounds like you're using the local cloud; can you try flashing the latest tinker via USB after the patch is done?
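In CLI terms that last suggestion would presumably be (with the core back in DFU mode, i.e. flashing yellow):
spark flash --usb tinker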
Um, sorry, what exactly does this mean?
I cannot subscribe because the CLI tells me to log in (which is the login to the Spark cloud) … so what exactly did I do wrong?
If you have the CLI pointed at your local cloud, then try the following:
# new feature! :)
# fix normal apiUrl
spark config spark apiUrl https://api.spark.io
# setup your local api url
spark config local apiUrl http://mylocalcloud:8080
# switch to your local setup
spark config local
# start the setup so you can create a new user, but hit ctrl-c after your user account is set up. :)
spark setup
# listen for local events
spark subscribe mine
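# and presumably (my reading of the profile pattern above, not something
# confirmed in this thread) you can switch back to the public cloud later with:
spark config spark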
~ clyde$ spark subscribe mine
Subscribing to all events from my personal stream (my cores only)
Listening to: /v1/devices/events
Now it stops. I have tried ctrl+c and then running the curl command again, with the same result. Then I tried subscribing in a screen and running the curl command outside the screen … same result …
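(The curl command itself isn't shown here; a publish against the local cloud would presumably look something like the following, with the host from the config above and a placeholder token:)
curl http://mylocalcloud:8080/v1/devices/events -d "name=test" -d "data=hello" -d "access_token=YOUR_TOKEN"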
Sadly I have had more problems since I started using the local cloud; I hoped the deep update would clear up some of them.
Connecting to the cloud takes about 20-30 tries.
Connection from: 192.168.178.xx, connId: 1
on ready { coreID: 'coreid',
 ip: '192.168.178.xx',
 product_id: 65535,
 firmware_version: 65535,
 cache_key: undefined }
Core online!
This game repeats a dozen times when I switch my core on. Most of the time it connects to the WiFi successfully, then searches for the cloud, blinks red, resets, and starts over from the beginning. I have found in the forum that I am not alone with this problem. After 20-30 tries the core connects and the program starts?! I can't find the problem; as you can see, the spark-server log says "Core online" every time, and there is no disconnect message. But something must be missing?
Is that the "deep update" problem we have discussed here?
Sure, I can put that code in the setup call of my project, but why? I thought the update was supposed to run out of the box? Give me a bit more detail on what I have to do; it feels like you have left me at the side of the road, if you know what I mean …
And currently I have the problem that I have to flash over USB, not OTA, which means I have to unplug my project and bring it to my computer; that's a little bit annoying.
Should I keep it there, or is this a "one timer"?
Deep_update doesn't work the same way on the local cloud. If you want to deep_update your core locally, try the following (a one-time thing):
1.) update your copy of the spark-cli: npm update -g spark-cli
2.) connect your core in dfu mode:
spark flash --factory tinker
spark flash --usb cc3000
# when your core starts flashing yellow again
spark flash --usb tinker
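As I read those three steps (my gloss, not official docs):
# --factory tinker : writes tinker into the factory-reset area of external flash
# --usb cc3000    : flashes the cc3000 patcher over DFU; it runs, patches the radio,
#                   then drops back into DFU mode (flashing yellow)
# --usb tinker    : restores the regular tinker firmware over DFU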
If you're building locally, make sure you've pulled the latest code from the repositories as well.
– A few other people have mentioned they're not seeing the local cloud API show published messages properly ( https://github.com/spark/spark-server/issues/22 ), so I'm looking into that.
– 20-30 connection attempts sounds pretty nuts; what else is happening? Does it take 20-30 tries with stock tinker?
Uninstalled the current version with npm uninstall -g spark-cli
Reinstalled it with npm install -g spark-cli
To be absolutely safe I updated with npm update -g spark-cli
Version info from the Readme: 0.3.95 (/usr/local/lib/node_modules/spark-cli/Readme.md)
I connected the core to my Raspberry Pi and started the spark-server in a screen.
I put the Core into DFU mode by pressing MODE + RST, releasing RST, and waiting for flashing yellow.
Now I tried spark flash --factory tinker
/usr/local/lib/node_modules/spark/bin/spark:14
   netBinding = process.binding('net'),
                        ^
Error: No such module
   at Object.<anonymous> (/usr/local/lib/node_modules/spark/bin/spark:14:26)
   at Module._compile (module.js:456:26)
   at Object.Module._extensions..js (module.js:474:10)
   at Module.load (module.js:356:32)
   at Function.Module._load (module.js:312:12)
   at Function.Module.runMain (module.js:497:10)
   at startup (node.js:119:16)
   at node.js:906:3
That's it. I tried to locate "tinker" or "cc3000" as a file but did not find them.
Sorry for causing such trouble … it's not my intention!
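(One detail stands out in that trace: the path says node_modules/spark, not node_modules/spark-cli, so a different npm package called spark may be shadowing the CLI binary. A quick way to check, purely as a guess:)
npm ls -g spark spark-cli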
Some checks later I located the files we need, I think, in the binaries folder of the spark-cli. So I first tried to move to the correct folder:
cd /usr/local/lib/node_modules/spark-cli/
spark flash --factory tinker
Again nothing, last try:
cd /binaries
spark flash --factory spark-tinker.bin