Thank you for your reply.
Yes, I was using a fifth function, which was not listed.
I found that in spark_utilities.h, USER_FUNC_MAX_COUNT was #defined to 4.
I changed it to 10 and the fifth function was listed.
BTW, can you please point me to a doc that mentions these things? I have gone through several docs, but I never came across the four-function limit.
Yup! The way the firmware is written now, each function you register requires a small amount of RAM, because the firmware has to keep an array of function pointers in memory. So, to conserve RAM, we capped it at 4 to start. I think the plan is to change that system so it’s a bit more dynamic and flexible. If you compile locally and edit the firmware, you can change/increase that limit.
Sure - now ya tell us, after I engineered my multi-function function, where I include a 5-character function name (currently with 13), which is working great, btw.
I’m actually liking my multi-function function and the way it works. Have a look… Multi Spark Function
I’m curious, though. With a local compile and firmware edit, about how many hard-coded functions do you think the Spark could actually run without issues? Could I also use the FRAM/SD card shield to extend that capability?
Hmm, how many functions could you support… Essentially we’re talking about building a simple routing table, and let’s assume the cloud doesn’t care how many functions you’ve registered. If we’re naive about it and just make the table bigger… Looking at the function routing structure:
I'm pretty sure my byte counts are going to be wrong here
1 bool + 1 int + (1 * USER_FUNC_KEY_LENGTH) + (1 * USER_FUNC_ARG_LENGTH)
so let’s say 1 entry is roughly (1 + 2 + 12 + 64) = 79 bytes of RAM. So you could bump it up to 12 entries and use about 1 KB of RAM, etc. (Please feel free to correct me, I’m sure I got the size of something mixed up in there somewhere… )