OK, so I see two directions you could go from here:
Make this code more like the failing code: move the UDP part off into a function that takes the UDP client as an argument declared as the Print type, and see if that fails.
Try changing the send method to take the UDP client type rather than a Print type object, and see if that improves things. (A sketch of the two signatures follows below.)
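For reference, here is a minimal sketch of the two signatures being compared. It assumes the Spark's UDP class, like Arduino's, ultimately derives from Print; all function names here are hypothetical.

```cpp
#include "application.h"

// Direction 1: the client arrives as a generic Print reference, so only
// Print's interface (write) is visible inside the function.
void sendViaPrint(Print &out, const uint8_t *data, size_t len) {
    out.write(data, len);    // virtual dispatch lands in UDP's write()
}

// Direction 2: the client arrives as the concrete UDP type, so the full
// UDP interface (beginPacket/endPacket) is available as well.
void sendViaUdp(UDP &udp, IPAddress ip, uint16_t port,
                const uint8_t *data, size_t len) {
    udp.beginPacket(ip, port);
    udp.write(data, len);
    udp.endPacket();
}
```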
I don't know what is going on, but it sure seems like you are getting Print behavior that you don't want in the send method.
I think I found some sort of nasty bug in the Spark core development environment.
For a while I had suspected that my .cpp files were no longer getting compiled.
So I made an entirely new project and moved everything over and now I'm getting errors when I compile.
Apparently in that original project only the .ino file was getting recompiled.
This is the same project where I was having this problem, so I imagine the problems are connected:
So I at least know that the code I'm compiling is getting onto the Spark Core.
@bko Could you look at the send function I wrote in this code and see if you can give me some pointers (budum tsss)?
I have a uint8_t pointer to some space on the heap and I keep trying to realloc more space so I can add more data before I give it to the write function.
I haven't worked with pointers in a long time, so I'm not sure if I'm doing any of it right.
I think what I might try next is starting over on this function: first send only the OSC address as an array on the stack, then change that so it's on the heap, then attack the rest of it step by step.
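In case it helps, here's a minimal sketch of the grow-by-realloc pattern described above (the names are hypothetical, not the actual code in the repo). The main trap is assigning realloc's result straight back to the same pointer: if realloc fails it returns NULL, and the original block is leaked.

```cpp
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

// Appends len bytes to a heap buffer, growing it with realloc.
// Returns 0 on success, -1 on allocation failure (original buffer kept).
int appendBytes(uint8_t **buf, size_t *size, const uint8_t *data, size_t len) {
    uint8_t *grown = (uint8_t *)realloc(*buf, *size + len);
    if (grown == NULL) {
        return -1;               // *buf is still valid; caller must free it
    }
    memcpy(grown + *size, data, len);
    *buf = grown;
    *size += len;
    return 0;
}
```

Starting from `uint8_t *buf = NULL; size_t size = 0;` works too, since realloc(NULL, n) behaves like malloc(n).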
Oh man, so I'm making serious progress.
I made a little command line program to get into the flow of dealing with pointer memory allocation madness.
The current code I checked in makes it up to the comma.
It won't be long now till I have it working.
It appears to be working.
I only tested it on strings and ints and it's pretty ugly right now, but I will do more thorough testing and refactoring later.
Sure, it's on my GitHub; feel free to try it.
As of right now it can only send an OSCMessage from the Spark to some other UDP endpoint. It can't send bundles yet and it can't receive, but I'm going to try porting that stuff next.
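For anyone who wants to try it, a send loop would presumably look something like the CNMAT Oscuino examples this port is based on; the destination IP, port, and OSC address below are placeholders.

```cpp
#include "application.h"
#include "OSCMessage.h"

UDP Udp;
IPAddress outIp(192, 168, 1, 100);   // placeholder: the receiving machine
const uint16_t outPort = 8000;       // placeholder: the receiver's port

void setup() {
    Udp.begin(8888);                 // local port; arbitrary when only sending
}

void loop() {
    OSCMessage msg("/spark/a0");     // placeholder OSC address
    msg.add((int32_t)analogRead(A0));

    Udp.beginPacket(outIp, outPort);
    msg.send(Udp);                   // OSCMessage::send writes to any Print
    Udp.endPacket();
    msg.empty();                     // release the message's internal storage

    delay(100);
}
```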
So I'm running into an issue where at some point the Spark Core just stops sending messages.
Usually shortly after that it starts flashing cyan, then it gets back on the network, but it's still not sending.
However, itās definitely on the network, because I can ping it.
I'm not sure what's going on or how I would even go about debugging something like that. I managed to take a network capture leading up to the point where it happens. It usually takes between 5 and 10 minutes for this to happen.
When it gets into this mode, even if I unplug it and plug it back in, it doesn't start sending messages again. If I reprogram it, it starts sending again.
Usually I have to reprogram it a few times. I often get the dark LED of doom, where one second it's flashing magenta and then suddenly it just goes dark, and I have to unplug it, plug it back in, and start the programming over again. But usually after I do this a few times it eventually works.
Good question! You might have seen this already, but there was a big thread on a "connection dropping" problem here: https://community.spark.io/t/bug-bounty-kill-the-cyan-flash-of-death/1322/437
The current firmware should be better about recovering from this, which is what you might be seeing when it comes back onto the network. If it's resetting to older firmware, it's possible there is a bug in your code that causes it to fall back to an older version. Does that sound right @zachary / @satishgn ?
We're still working with TI directly to try to resolve an issue where the CC3000's buffers can be exhausted on busy networks and it can disconnect.
Yup, sounds correct. @jfenwick there's probably a bug in your app causing the Core to die. We are working on a way to define whether you want this "boot back into safe mode" behavior or not.
The current behavior: if there's a very serious problem, the Core may reboot and fall back to a previous working firmware copy, or even to the factory reset firmware. This means that even when there's a bug in your code, you will still be able to flash new code wirelessly, which is especially helpful if you can't get near your Core to reset or debug it.
The behavior you may want, which we'll be creating soon: the ability to declare that you never want your firmware overwritten, even if it hard faults. In that case you will not be able to recover without physically debugging your Core; however, unlike the above, you will more easily be able to catch your Core in the act of failing and figure out what's wrong.
Also, just wanted to say: awesome! I love OSC. And @jmej, the composition of mine that's gotten the most performances around the world is a sax-and-computer duo with the computer part built in Pd.
@jfenwick I see you malloc and realloc a lot in OSCData and OSCMessage. The heap management on the Core is super simple; see the stub for _sbrk, the underlying routine used by all the {m,c,re}alloc calls. There is no handling of fragmentation, so it's much better to manage your own buffer or only use the stack. Make sense?
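For context, this is not the Core's actual code, but the typical shape of a newlib-style _sbrk stub; it shows why the heap is just a flat region handed out linearly, with nothing underneath to compact or reclaim fragmented blocks.

```cpp
extern char _end;          // end of .bss, defined by the linker script
static char *heap_end = 0;

// Typical minimal _sbrk: malloc/calloc/realloc all ultimately call this
// to grow the heap. It just bumps a pointer; real stubs usually also
// check for collision with the stack.
extern "C" void *_sbrk(int incr) {
    if (heap_end == 0) {
        heap_end = &_end;
    }
    char *prev = heap_end;
    heap_end += incr;
    return prev;
}
```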
I can help with OSC; I've been using PD and Reaper and controlling synths and filters with my hands, but I'm very, very busy. Looking forward to your solution for the cyan flash of death, as it has currently stopped all my Spark projects, including the UDP client you plan to do.
This is not a solution, just a workaround, and a nasty one. It's only a matter of time before someone hacks the cloud and performs a DDoS. Wait, users now perform a DoS on their own. If the Spark was intended for network communication, it shouldn't hang and reset every few minutes.
Any chance you could explain or give an example of what you mean by manage your own buffer?
For using the stack only, I imagine I would create a bunch of arrays of data as I go, then at the end make one big array, put all the data in it, and send that array. That's probably what I'll do next unless you can give me a better idea.
Sure: the SparkProtocol has an internal buffer; the variable is called queue, and it's 640 bytes. All messages received from or sent to the Cloud have to fit within that buffer, as they're copied into it or built within it. In many ways the use case is very similar to OSC: sending and receiving variable-length network messages.
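A minimal sketch of that approach, assuming a fixed scratch buffer the same size as SparkProtocol's queue (all names here are hypothetical):

```cpp
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define MSG_BUFFER_SIZE 640          // same capacity as SparkProtocol's queue

static uint8_t msgBuffer[MSG_BUFFER_SIZE];
static size_t msgLength = 0;

// Start assembling a new message.
void msgReset(void) {
    msgLength = 0;
}

// Append bytes, refusing anything that would overflow the buffer.
// Returns 0 on success, -1 if the message would not fit.
int msgAppend(const uint8_t *data, size_t len) {
    if (msgLength + len > MSG_BUFFER_SIZE) {
        return -1;
    }
    memcpy(msgBuffer + msgLength, data, len);
    msgLength += len;
    return 0;
}
```

Once the message is assembled, hand msgBuffer and msgLength to the UDP write in a single call; the heap is never touched, so it can never fragment.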
We have been working on porting CNMAT's Oscuino library to the Spark Core as well, using suggestions from this thread.
We've found that (as of the commit "Enable analogwrite on digital pins", pushed on June 9th) the Oscuino code can be kept as-is; to get it to work, the only changes we applied were to the Spark's core-firmware code instead.
What we've added is a slightly modified UDP class (in application.cpp) that overrides the original UDP class, adding buffering to beginPacket(), endPacket(), and write(). (A rough sketch of the shape appears at the end of this post.)
There you'll find application.cpp, which provides the myUDP class with our modifications, and an example application for both reception and emission of OSCMessages and OSCBundles over UDP, as well as the associated PD and Max/MSP patches.
Along with it you will get OSCData.h and OSCMessage.h, modified to include the right files for the Spark Core.
Also, this code will only work with the most recent versions of the core-firmware (or the current online compiler of the Spark Cloud), since we use the CFLAGS += -DSPARK recently defined in the Spark's makefile.
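Since the modified application.cpp isn't reproduced in this thread, here's a hedged guess at the shape of such a buffering subclass, assuming the Spark UDP class follows the usual Arduino UDP signatures. The buffer size and class layout are assumptions; check the actual repo for the real myUDP.

```cpp
#include "application.h"
#include <string.h>

// Buffering UDP wrapper along the lines described above: write()
// accumulates bytes locally, and endPacket() flushes them in one
// underlying write, so each OSC packet leaves as a single datagram.
class myUDP : public UDP {
    static const size_t BUF_SIZE = 512;   // assumed capacity
    uint8_t _buf[BUF_SIZE];
    size_t _len;

public:
    myUDP() : _len(0) {}

    virtual int beginPacket(IPAddress ip, uint16_t port) {
        _len = 0;                         // start a fresh datagram
        return UDP::beginPacket(ip, port);
    }

    virtual size_t write(const uint8_t *data, size_t size) {
        if (_len + size > BUF_SIZE) return 0;  // refuse what won't fit
        memcpy(_buf + _len, data, size);
        _len += size;
        return size;
    }

    virtual size_t write(uint8_t b) {
        return write(&b, 1);
    }

    virtual int endPacket() {
        UDP::write(_buf, _len);           // flush buffered bytes at once
        return UDP::endPacket();
    }
};
```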