@Dave @zachary It would be good to add it in the toplevel makefile (like the USE_SWD_JTAG) and have a set of switches in the webIDE for Release / Debug (in Debug, add the debug_output code to the sketch), USE_SWD_JTAG, and VERBOSE_PANIC. For the short term this is needed because the WD will kill PANIC before it will flash the code, so seeing the text will help a user debug an out-of-heap error.
You guys might be interested in some statistics on DMA interrupts I’ve been accumulating. Basically, the CC3000 is generating 10 DMA interrupts when there should be only two. The system I’m using is my own implementation of the TI/Spark host drivers under a small RTOS. The problem is that TX completes before RX with SPI DMA, and the read-side interrupt code looks for read completion (as it should), so it ignores the interrupts until read completion occurs. I’ll try changing the code to use interrupts on SPI_RX_DMA_CHANNEL rather than SPI_TX_DMA_CHANNEL and see whether that fixes the problem:
/usr/local/bin/node app.js
========================================================================================
= Please ensure that you set the default write concern for the database by setting =
= one of the options =
= =
= w: (value of > -1 or the string 'majority'), where < 1 means =
= no write acknowledgement =
= journal: true/false, wait for flush to journal before acknowledgement =
= fsync: true/false, wait for flush to file system before acknowledgement =
= =
= For backward compatibility safe is still supported and =
= allows values of [true | false | {j:true} | {w:n, wtimeout:n} | {fsync:true}] =
= the default value is false which means the driver receives does not =
= return the information of the success/error of the insert/update/remove =
= =
= ex: new Db(new Server('localhost', 27017), {safe:false}) =
= =
= http://www.mongodb.org/display/DOCS/getLastError+Command =
= =
= The default of no acknowledgement will change in the very near future =
= =
= This message will disappear when the default safe is set on the driver Db =
Listening as TCP Server on Port 3001
Express server listening on port 3000
Connected to 'VO-96' database
devices collection created
statistics collection created
users collection created
presets collection created
Client has connected = ip:192.168.1.107
Spark msg received from - ip192.168.1.107 Data Length:18
msgtype:1
msg stype:0
msgLen:18
device id message received
Spark msg received from - ip192.168.1.107 Data Length:48
msgtype:4
msg stype:0
msgLen:48
device statistics message received
{ id: '48ff6f0650675550332615',
timestamp: Tue Feb 25 2014 23:23:02 GMT-0500 (EST),
idle: 100,
spiWrites: 15,
eventsReceived: 17,
unsolEventsReceived: 2,
dmaInterrupts: 156,
dmaReadIRQInts: 90,
dmaReadFirstInt: 0,
dmaReadEOTInt: 52,
dmaWriteEOTInt: 14,
dmaWriteFirstInt: 0,
dmaSpuriousInt: 111 }
Spark msg received from - ip192.168.1.107 Data Length:48
msgtype:4
msg stype:0
msgLen:48
device statistics message received
{ id: '48ff6f0650675550332615',
timestamp: Tue Feb 25 2014 23:23:32 GMT-0500 (EST),
idle: 99,
spiWrites: 286,
eventsReceived: 288,
unsolEventsReceived: 2,
dmaInterrupts: 2878,
dmaReadIRQInts: 1440,
dmaReadFirstInt: 0,
dmaReadEOTInt: 1152,
dmaWriteEOTInt: 286,
dmaWriteFirstInt: 0,
dmaSpuriousInt: 2016 }
It’s the Spark firmware from a couple of weeks ago (before you modded it) adapted to run under CoOS (part of CoIDE from CooCox). Under CoOS I have a number of tasks and a mutex lock on the HCI complex. I have a background task to do CC3000 event handling, and I also utilize CoOS’s memory manager. Incoming events are simply moved to a memory packet and queued for processing by the event handler, so time spent in the DMA interrupt routine is minimal.
It runs very reliably for me, but I’m only on my private network (no ARP issues) and it’s early days yet.
Right now I’m sending a stats packet to my node server every 30 seconds, and I’m doing about 8 CC3000 functions a second in my main work loop. Things like Delay() are handled in CoOS and just suspend the task, not the processor.
I downloaded it and zoomed in with preview. The legends become square blocks before they become readable. You can email higher res versions if you want!
David
That did the trick. The issue I was having is that the DMA interrupt occurs when TX completes. Because rx_complete was not yet set, the interrupt-pending flag was not cleared. So as soon as the interrupt routine exited, the DMA interrupted again. Since the pending flag is not cleared, it's impossible to tell from an external LA whether you have the issue or not. The fix is simple - do a "while(!rx_complete);" in the read sections of the interrupt routine.
Peter
@david_s5 The recent firmware updates are keeping my core from crashing and it’s staying connected to the web without any intervention from me. I’ll keep testing, but I think you deserve the reward for fixing this!
@RWB +1 to that! Given the amazing amount of fixes and improvements submitted by @david_s5 to the Core code base in a relatively short amount of time, I completely agree.
I know we’re still waiting on TI to get their, umm, stuff together but they’re not volunteers and obviously should have shipped better code with their CC3000 product.
@david_s5 Thanks for all your contributions so far. Stellar job.
@david_s5 I can also report in 5 happy breathing Cores on my desk after 6 days of use. Looking promising and is by far the longest session I have had for my Cores.
@zach @zachary We think we have Spark Cores that actually stay reliably connected to the internet now! And we know that’s because of a lot of hard work done by @david_s5 on getting the firmware fixed up after weeks of troubleshooting. @david_s5 certainly beat TI when it comes to creating a solution that should keep all the Spark users and their projects happy for now.
I say he deserves the reward that was promised. What do you guys think?
Let me preface this by saying that while CFOD isn’t fixed per se, we now have a solution in place where the Core consistently recovers from bad states - and the true fix can only come from TI, since the issue is buried deep within the CC3000. We expect a true fix to be coming from Texas Instruments in the very near future.
That said, there is clearly one person who deserves this bounty…
:drumroll:
I’m sure it will be no surprise that the $500 bounty will be awarded to @david_s5 for his incredible contributions to the Spark codebase, dramatically improving the reliability of the Spark Core. Congrats @david_s5! @steph will reach out to you to award the prize.
But wait: there’s more!
@david_s5 wouldn’t have been able to make all these great improvements without support from the rest of the community. So we’d like to thank everybody else who contributed to the cause with a free Spark Core, or two shields if you’d prefer.
Thank you Cores go out to the following, in no particular order:
If I missed anyone who should get a Core for their contributions, please let me know! This thread is extremely long and it took me a half hour to even come up with this list, so it’s likely I missed someone who made a meaningful contribution.
Thank you everybody for your support and your patience resolving this issue. We couldn’t ask for a better community.