Code help please - API calls crashing core

Hi,

I’ve written some code to control an LED strip. I am using a library kindly ported by @peekay123 and I have had it working fine with direct calls to the library routines within my firmware code.

Now I am trying to change an array variable through API calls to my core, but I’m finding that as soon as I try this, every time I make an API call the core just resets.

I’m not sure if this is just down to my lack of understanding of the Wiring language and arrays (and perhaps memory management).

I have pasted my code here: http://pastebin.com/V2fma4YC

The commented-out lines 45 to 57 do what I expect them to do if I uncomment them (ignore the incorrect comment about rack 4). What they do is assign a bank of LEDs on the strip to a rack in my data centre, and this code flashes each rack (1 to 9) in turn. So this proves to me that the various routines work properly.

Any assistance on this would really be appreciated.

Thanks.

P.S. Wasn’t sure which category in the forums this fit into, but I’m only having problems with the API calls, so this is why I picked “Cloud Software”

Although I haven’t nailed down your actual problem (due to some missing info, e.g. your command string for ledSet), I saw some ways to improve your code.
First, always make sure your indexes (e.g. those returned by indexOf() and toInt()) are valid and within the bounds of your array.
Second, try to do less work inside setLed() (e.g. move your call to setall() back into loop()) and only set an update flag inside setLed().

Maybe this already does away with your error :wink:

As another way to parse your command string, you could use strtok(), which might also help you avoid invalid indexes and would make your code a bit easier to maintain.
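For illustration, a strtok()-based parser with bounds checking might look something like this (a host-side sketch; the dash-separated “rack-r-g-b” format and the limits are assumptions here, since the exact command format isn’t shown in this thread):

```cpp
#include <cstdlib>
#include <cstring>

// Illustrative limits; the real values depend on the strip/rack setup.
const int MAX_RACK = 9;
const int MAX_LEVEL = 255;

// Parse a dash-separated command such as "4-32-32-32" into rack, r, g, b.
// Returns true only if all four fields are present and within bounds.
// (Host-side sketch: on the Core you'd copy the Wiring String into a
// char buffer first, e.g. with command.toCharArray(buf, sizeof(buf)).)
bool parseCommand(const char* command, int& rack, int& r, int& g, int& b) {
    char buf[64];
    strncpy(buf, command, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';          // strtok() modifies its input

    int values[4];
    char* token = strtok(buf, "-");
    for (int i = 0; i < 4; i++) {
        if (token == NULL) return false;  // too few fields
        values[i] = atoi(token);
        token = strtok(NULL, "-");
    }

    // Range-check every field before using it as an index or level.
    if (values[0] < 1 || values[0] > MAX_RACK) return false;
    for (int i = 1; i < 4; i++)
        if (values[i] < 0 || values[i] > MAX_LEVEL) return false;

    rack = values[0]; r = values[1]; g = values[2]; b = values[3];
    return true;
}
```

Because every field is range-checked before it is used, a malformed API call returns an error instead of indexing outside the array.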


Hi @jellifish

It would be really good to know what the main LED on your core is doing when this crashes; it is likely blinking red in a pattern that indicates what the fault is:

http://docs.spark.io/troubleshooting/#troubleshoot-by-color-flashing-red


@jellifish, @ScruffR is dead on in recommending that you keep your setLed() small by just setting a flag that you process in loop(). The only things to keep in setLed() would be the parsing and range checking of your key values, so you can return an error code (e.g. -1) if any of those values are out of range. Those variables could be globals.

Your loop() code would read the flag, process the variables and then reset the flag. Your setLed() could reject a request (return an error) until loop() has reset the flag.

BTW, the delay(50); at the end of setLed() does nothing since there is a return(200); before it that exits the function. :smile:
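Put together, the flag handshake could be sketched like this (host-side C++ with illustrative names; on the Core, ledSet() would be registered via Spark.function(), take a String argument to parse, and processLed() would be called from loop()):

```cpp
// Host-side sketch of the flag handshake. The handler only validates and
// records the request; the heavy lifting happens later in loop().
volatile bool updatePending = false;   // set by the handler, cleared by loop()
int pendingRack, pendingR, pendingG, pendingB;

// Cloud-function handler: range-check only, then raise the flag.
// (On the Core this would parse a command String instead of taking ints.)
int ledSet(int rack, int r, int g, int b) {
    if (updatePending) return -2;              // previous request not done yet
    if (rack < 1 || rack > 9) return -1;       // reject out-of-range values
    if (r < 0 || r > 255 || g < 0 || g > 255 || b < 0 || b > 255) return -1;
    pendingRack = rack; pendingR = r; pendingG = g; pendingB = b;
    updatePending = true;
    return 200;                                // accepted
}

// Called from loop(): do the real work, then clear the flag.
void processLed() {
    if (!updatePending) return;
    // ...here the real code would set the bank colours and call setall()...
    updatePending = false;
}
```

Returning -2 while the flag is still set gives the caller a clear signal that a request was rejected rather than silently dropped.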


Hi again @peekay123 and thanks to @ScruffR and @bko for your speedy and helpful responses too.

All of these are good and useful ideas. The problem with checking the blinking light pattern at the moment is that I’m 100 miles away from my core and watching it on a CCTV camera, which means I can see general things like flashes and so on, but getting a detailed, timed reading of the red light isn’t an easy option for me!

I’ve done the flag-setting idea and simplified the function. I haven’t done the strtok() change, but I appreciate the bounds issues with what I have at the moment if I were to call it incorrectly.

In fact, I’ve experimented with returning the values of the variables and am happy that my API call is correct when it works.

That brings me to the interesting thing… I can get it to work (although the LED strip doesn’t work, of course) by making my setup() contain only the Spark.function() definition, but if I add any other calls like strip.begin() to setup(), I get my reboot/reset problems.

I even tried taking all the function calls out of setup() and just setting an init flag to run the setup once at the start, but even then the API call makes the Core reboot. I have the latest code I’m talking about here: http://pastebin.com/aVrK5i6f

And with line 37 commented out, it responds as I’d expect:

curl https://api.spark.io/v1/devices/jjcore1/ledset -d access_token=$AUTHTOKEN -d "c=4-32-32-32-"
{
  "id": "55ff72065075555327161787",
  "name": "jjcore1",
  "last_app": null,
  "connected": true,
  "return_value": 3
}

but with line 37 in (or with the calls directly in setup() as before), it breaks and reboots the core on each API call.

Thanks again for the help.

@jellifish, can you rename your Spark.function() name to something different from “setLed”, like “LedSet”? Anything different from your actual function name.

So if you remove the Spark.function() stuff, does the LED stuff work? Have you tried setting the various variables normally set by setLed() and calling setall()? Does that work? It is important to isolate the LED stuff from the Spark.function() stuff to figure out which part is causing the problem. Perhaps you can try the Spark.function() stuff with no LED stuff and Serial.print() the flag in loop(). :smile:


@peekay123 Renaming the function doesn’t seem to make any difference. Also, yes, if I remove the Spark.function() then the LED stuff does work. I’ve been scratching my head over this for days!

I can’t use Serial.print() as I’m not physically next to the spark and it’s not plugged into a server via USB. I suppose I could use Spark.variable() or something?

It seems to be strip.begin() that causes the API function to fail. Is it something to do with the timer/interrupt code and the API code conflicting in some strange way?

@jellifish, if you are compiling on the IDE, remove and pull in the SparkIntervalTimer library again as I changed the timer interrupt priority to a level which will not interfere with important firmware interrupts. I will revisit the LED code to see what, if any, could be the problem. :smile:

Can you put some print statements in to see what line of code it is crashing in?

@peekay123 I have been ill, so have just come back to this again today. I did as you suggested and removed and re-added the SparkIntervalTimer library (using the cloud IDE); is that what you meant? I don’t have a local IDE, as I’m using Linux and couldn’t get that working. Is the version of SparkIntervalTimer in the cloud the new version? It didn’t seem to help, unfortunately.

@dpt I’m looking at my core remotely via CCTV and don’t have it plugged into a server via USB, so I’m not sure I can usefully do any printing (unless I’m missing something). I have narrowed it down to failing on LED library initialisation (and only when an API call is made), I think. At some point in the next few weeks I will be going back “on-site”, so I will be able to fiddle more with print statements and examine the LED a little more closely when it reboots, to see if there’s any panic code.

@jellifish, the IDE version of SparkIntervalTimer is v1.2.0 which is the latest. :smile:

Ok. I guess I don’t understand your environment. Are you able to upload code to it? Can you telnet to it? I’m not understanding how you can debug it at all, but don’t have the ability to monitor print statements.

@dpt, the link is in the first post of this topic :smile:

@jellifish, I have tested your code on my own Core (although without the external circuitry), but I can’t reproduce any resets, whatever I do.

So I would imagine that either your external circuitry triggers the reset, or you are not actually running the code you have posted (e.g. the OTA update fails to apply properly).
One possible external issue might be the power supply not delivering enough current for all your external devices.
What does your hardware look like?


Yes, I saw the code. That doesn’t explain the environment where you can modify the code but can’t view print statements.

@dpt, this was already posted previously:

And modifying code remotely is one of the fundamental things the Core provides via its OTA flash capability.
But for Serial.print() you’d need physical access. On the other hand, Spark.variable()/Spark.publish() might be useful for debugging.
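For example, a rolling debug log exposed through a Spark.variable could be sketched like this (host-side C++; the debugLog buffer, the appendLog() helper, and the 622-character limit are illustrative assumptions):

```cpp
#include <cstdio>
#include <cstring>

// Host-side sketch: keep a rolling debug log in a char buffer that the Core
// could expose with Spark.variable("debug", debugLog, STRING).
const size_t LOG_SIZE = 622;           // assumed STRING variable limit
char debugLog[LOG_SIZE + 1] = "";

// Append a line to the log, discarding the oldest text if it won't fit.
void appendLog(const char* line) {
    size_t need = strlen(line) + 1;              // +1 for the newline
    if (need > LOG_SIZE) return;                 // line too long for the buffer
    size_t used = strlen(debugLog);
    if (used + need > LOG_SIZE) {
        // Drop from the front to make room (simple, not the only option).
        size_t drop = used + need - LOG_SIZE;
        memmove(debugLog, debugLog + drop, used - drop + 1);  // keep the '\0'
        used -= drop;
    }
    snprintf(debugLog + used, LOG_SIZE + 1 - used, "%s\n", line);
}
```

Sprinkling appendLog() calls through setup() would then let you see, via a normal variable GET, how far the code got before the reset.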


Ok, but that’s why I mentioned telnet. I’m new to the Core and have more experience with the Arduino Yun, which you can telnet to in order to view debug messages output to the console. Can you not do that on a Core? Otherwise Scruff’s idea sounds great. The point is to be able to send output from the program so you can see its state and see where it’s crashing. Could you even use publish to send debug messages?

If you wanted to use telnet, you’d need a telnet server in the firmware on the Core, which @jellifish doesn’t have.

Another way to debug remotely is to use Spark.publish() to send debug messages out, much as you would use Serial.print(). You just have to be careful with the one-per-second rule.

Thanks @bko, I forgot to mention the one-per-second rule.
And there’s also the payload limit of 63 bytes per publish.
On the other hand, you can build up to four (I think) STRING Spark.variables with up to 622 (I think again) characters each.
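Those two constraints could be rolled into a small throttling helper, sketched here as host-testable C++ (debugPublish() and the injectable now_ms clock are illustrative; on the Core you’d pass millis() and call Spark.publish() where noted):

```cpp
#include <cstring>

// One-per-second rule and payload limit, per the discussion above.
const unsigned long PUBLISH_INTERVAL_MS = 1000;
const size_t MAX_PAYLOAD = 63;

unsigned long lastPublishMs = 0;
bool firstPublish = true;

// Returns true if the message was (or would be) published, false if dropped
// because a publish happened less than a second ago.
bool debugPublish(const char* msg, unsigned long now_ms,
                  char* out, size_t outSize) {
    if (!firstPublish && now_ms - lastPublishMs < PUBLISH_INTERVAL_MS)
        return false;                    // too soon: drop (or queue) it
    firstPublish = false;
    lastPublishMs = now_ms;

    // Truncate to the 63-byte payload limit before sending.
    size_t n = strlen(msg);
    if (n > MAX_PAYLOAD) n = MAX_PAYLOAD;
    if (n >= outSize) n = outSize - 1;
    memcpy(out, msg, n);
    out[n] = '\0';
    // On the Core: Spark.publish("debug", out);
    return true;
}
```

Dropping over-rate messages is the simplest policy; a small queue drained from loop() would be a friendlier alternative if no message may be lost.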
