OSC (Open Sound Control) with Spark Core?

I think I found a nasty bug in the Spark Core development environment.
For a while I had suspected that my .cpp files were no longer getting compiled.
So I made an entirely new project and moved everything over, and now I’m getting errors when I compile.
Apparently, in the original project only the .ino file was getting recompiled.
This is the same project where I was having the earlier problem, so I imagine the two issues are connected.

So at least I know that the code I’m compiling is getting onto the Spark Core.

@bko Could you look at the send function I wrote in this code and see if you can give me some pointers (budum tsss)?

I have a uint8_t pointer to some space on the heap, and I keep trying to realloc more space so I can add more data before I hand it to the write function.
I haven’t worked with pointers in a long time, so I’m not sure I’m doing any of it right.
What I might try next is starting this function over: first send only the OSC address as an array on the stack, then change it so it’s on the heap, then attack the rest step by step.
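For reference, the grow-with-realloc pattern described above can be sketched like this. The function names and the padding helper are illustrative, not taken from the actual library code:

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Append `len` bytes to a heap buffer, growing it with realloc.
// Returns the (possibly moved) buffer, or NULL on allocation failure.
// *size is updated to the new total length.
uint8_t *appendBytes(uint8_t *buf, size_t *size, const uint8_t *data, size_t len) {
    uint8_t *grown = (uint8_t *)realloc(buf, *size + len);
    if (grown == NULL) {
        free(buf);  // realloc failure leaves the old block alive; release it
        return NULL;
    }
    memcpy(grown + *size, data, len);
    *size += len;
    return grown;
}

// OSC strings are NUL-terminated and zero-padded to a multiple of 4 bytes.
// For a string of `stringLen` characters, this is the on-the-wire size.
size_t oscPaddedLength(size_t stringLen) {
    return (stringLen + 4) & ~(size_t)3;  // +1 for the NUL, rounded up to 4
}
```

One classic pitfall this avoids: writing `buf = realloc(buf, n)` directly leaks the original block if realloc fails, since realloc returns NULL but leaves the old allocation in place.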

Hi @jfenwick,

I’m not totally sure based on your posts, but was there a case of the build IDE not including your project code in a build?

Thanks!
David

Oh man, so I’m making serious progress.
I made a little command line program to get into the flow of dealing with pointer memory allocation madness.
The current code I checked in makes it up to the comma (the start of the OSC type tag string).
It won’t be long now till I have it working.
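For anyone following along, the “comma” is where the OSC type tag string begins. Here is a minimal sketch of the wire layout with illustrative helper names, covering only the padded-string and big-endian int32 cases:

```cpp
#include <cstdint>
#include <cstring>

// Write s (NUL-terminated) into out, zero-padded to a 4-byte boundary.
// Returns the number of bytes written.
size_t writePaddedString(uint8_t *out, const char *s) {
    size_t len = strlen(s) + 1;  // include the NUL terminator
    size_t padded = (len + 3) & ~(size_t)3;
    memcpy(out, s, len);
    memset(out + len, 0, padded - len);
    return padded;
}

// Write a 32-bit int big-endian (OSC uses network byte order).
size_t writeInt32(uint8_t *out, int32_t v) {
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
    return 4;
}

// A complete one-argument message: padded address, then the type tag
// string starting with ',', then the argument data.
size_t buildMessage(uint8_t *out, const char *addr, int32_t v) {
    size_t n = 0;
    n += writePaddedString(out + n, addr);
    n += writePaddedString(out + n, ",i");  // the comma starts the type tags
    n += writeInt32(out + n, v);
    return n;
}
```

So for `buildMessage(buf, "/led", 1)` the packet is 16 bytes: 8 for the padded address, 4 for the padded `",i"`, and 4 for the integer.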

Go man go @jfenwick!

If you still need help, I will have some time this evening (GMT-5).

It appears to be working.
I’ve only tested it on strings and ints, and it’s pretty ugly right now, but I’ll do more thorough testing and refactoring later.

Super curious about this project - I’m hoping to use a Spark Core with Pd - a working OSC API/library would be great!!

Sure, it’s on my GitHub; feel free to try it.
Right now it can only send an OSCMessage from the Spark to another UDP endpoint. It can’t send bundles yet and it can’t receive, but I’m going to try porting that next.

So I’m running into an issue where at some point the Spark Core just stops sending messages.
Usually shortly after that it starts flashing cyan, then it gets back on the network, but it still isn’t sending.
It’s definitely on the network, though, because I can ping it.
I’m not sure what’s going on or how I would even go about debugging something like that. I managed to take a network capture leading up to the point where it happens. It usually takes 5-10 minutes for this to happen.
Once it gets into this mode, even unplugging it and plugging it back in doesn’t make it start sending again, but reprogramming it does.
Usually I have to reprogram it a few times. I often get the dark LED of doom: one second it’s flashing magenta, then suddenly it just goes dark, and I have to power-cycle it and start programming over again. But after a few tries it eventually works.

Hi @jfenwick,

Good question! You might have seen this already, but there was a big thread on a ‘connection dropping’ problem here: https://community.spark.io/t/bug-bounty-kill-the-cyan-flash-of-death/1322/437 – The current firmware should be better about recovering from this, which is what you might be seeing when it comes back onto the network. If it’s resetting to older firmware, it’s possible there could be a bug in your code that causes it to fail back to an older version. Does that sound right @zachary / @satishgn ?

We’re still working with TI directly to try and resolve an issue where the CC3000’s buffers can be exhausted on busy networks and it can disconnect.

Thanks!
David

Yup, sounds correct. @jfenwick there’s probably a bug in your app causing the Core to die. We are working on a way to define whether you want this “boot back into safe mode” behavior or not.

The current behavior: if there’s a very serious problem, the Core may reboot and fall back to a previous working copy of the firmware, or even to the factory reset firmware. This means that even when there’s a bug in your code, you will still be able to flash new code wirelessly, which is especially helpful if you can’t get near your Core to reset or debug it.

The behavior you may want, which we’ll be adding soon: the ability to declare that you never want your firmware overwritten, even if it hard faults. In that case you will not be able to recover without physically debugging your Core. However, unlike the above, you will more easily be able to catch your Core in the act of failing, to figure out what’s wrong.

Also, just wanted to say: awesome! I :heart: OSC. And @jmej, the composition of mine that’s gotten the most performances around the world is a sax-and-computer duo with the computer part built in Pd.

@jfenwick I see you malloc and realloc a lot in OSCData and OSCMessage. The heap management on the :spark: Core is super simple: see the stub for _sbrk, the underlying routine used by all the {m,c,re}alloc calls. There is no handling of fragmentation, so it’s much better to manage your own buffer or use only the stack. Make sense?
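A minimal sketch of the “manage your own buffer” idea: one fixed allocation that gets reused, so the heap never fragments. The size and names here are arbitrary, not from the firmware:

```cpp
#include <cstdint>
#include <cstring>

// A fixed-size message buffer: one static allocation, no realloc, and
// therefore no heap fragmentation. 256 bytes is an arbitrary choice;
// size it for the largest message you expect to build.
struct MessageBuffer {
    uint8_t data[256];
    size_t used = 0;

    // Append bytes; returns false (leaving the buffer unchanged) if the
    // message would overflow the fixed storage.
    bool append(const uint8_t *src, size_t len) {
        if (used + len > sizeof(data)) return false;
        memcpy(data + used, src, len);
        used += len;
        return true;
    }

    // Reuse the same storage for the next message.
    void reset() { used = 0; }
};
```

The trade-off versus realloc is that message size is bounded up front, but on a small embedded heap with no fragmentation handling that bound is exactly what you want.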

I can help with OSC; I’ve been using Pd and Reaper, controlling synths and filters with my hands, but I’m very, very busy. Looking forward to your solution for the cyan flash of death, as it has currently stopped all my Spark projects, including the UDP client you plan to do.

edit: hands via a Kinect and PC

This is not a solution, just a workaround, and a nasty one. It’s only a matter of time before someone hacks the cloud and performs a DDoS. Wait, users now perform a DoS on their own :smiley:. If the Spark was intended for network communication, it shouldn’t hang and reset every few minutes.

Any chance you could explain, or give an example of, what you mean by managing your own buffer?

For using only the stack, I imagine I would create a bunch of arrays of data as I go, then at the end make one big array, copy all the data into it, and send that. That’s probably what I’ll do next unless you can give me a better idea.
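That plan can be sketched roughly like this; the names are illustrative, and the final array lives in the caller’s stack frame:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Concatenate several separately-built pieces (address, type tags,
// argument data) into one output array supplied by the caller.
// Returns the total length, or 0 if the pieces would not fit.
size_t concatPieces(uint8_t *out, size_t outSize,
                    const uint8_t *pieces[], const size_t lens[], size_t count) {
    size_t total = 0;
    for (size_t i = 0; i < count; i++) total += lens[i];
    if (total > outSize) return 0;  // refuse to overflow the caller's array
    size_t off = 0;
    for (size_t i = 0; i < count; i++) {
        memcpy(out + off, pieces[i], lens[i]);
        off += lens[i];
    }
    return total;
}
```

Everything here is stack storage, so nothing touches the heap at all; the cost is that each piece has to be sized at compile time or bounded by a worst case.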

Sure: the SparkProtocol has an internal buffer; the variable is called queue. It’s 640 bytes. All messages received from or sent to the :spark: Cloud have to fit within that buffer, as they’re copied into it or built within it. In many ways the use case is very similar to OSC: sending and receiving variable-length network messages.

We have been working on porting CNMAT’s Oscuino library to the Spark Core as well, using suggestions from this thread.

We’ve found that (as of the commit “Enable analogwrite on digital pins”, pushed on June 9th) the Oscuino code can be kept as-is; to get it working, the only changes we applied were to the Spark’s core-firmware code.

As explained by @jfenwick, the UDP code for beginPacket(), endPacket(), and write() does not work the way it’s supposed to (nor even [as the documentation says](http://docs.spark.io/#/firmware/communication-udp)).

What we added is a slightly modified UDP class (in application.cpp) that subclasses the original UDP class and adds buffering to beginPacket(), endPacket(), and write().
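The buffering idea can be sketched like this. On the Core the real myUDP derives from the firmware’s UDP class; here a plain callback stands in for the transport so the sketch is self-contained, and all names are illustrative:

```cpp
#include <cstdint>
#include <cstring>

// Collect every write() into a local buffer and hand the whole datagram
// to the transport only at endPacket(), so each OSC message goes out as
// exactly one UDP packet instead of one packet per write() call.
class BufferedUDP {
public:
    typedef void (*SendFn)(const uint8_t *data, size_t len);

    explicit BufferedUDP(SendFn send) : send_(send), used_(0) {}

    void beginPacket() { used_ = 0; }  // start a fresh datagram

    size_t write(const uint8_t *data, size_t len) {  // buffer, don't transmit
        if (used_ + len > sizeof(buf_)) return 0;
        memcpy(buf_ + used_, data, len);
        used_ += len;
        return len;
    }

    void endPacket() {  // one transmit per datagram
        send_(buf_, used_);
        used_ = 0;
    }

private:
    SendFn send_;
    uint8_t buf_[512];
    size_t used_;
};

// Demo transport that just records the last datagram (stands in for the
// real UDP transmit on the Core).
static uint8_t lastPacket[512];
static size_t lastPacketLen = 0;
static void captureSend(const uint8_t *data, size_t len) {
    memcpy(lastPacket, data, len);
    lastPacketLen = len;
}
```

This matters for OSC because a message assembled by several small write() calls must still arrive as a single datagram for the receiver to parse it.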

Our code can be browsed in our GitHub repository and we’ve created a thread in the library section.

There you’ll find application.cpp, which provides the myUDP class with our modifications, plus an example application for both reception and emission of OSCMessages and OSCBundles over UDP, as well as the associated Pd and Max/MSP patches.

Along with it you will get OSCData.h and OSCMessage.h, modified to include the right files for the Spark Core.

Also, this code will only work with the most recent versions of the core-firmware (or with the current online compiler of the Spark Cloud), since we use the CFLAGS += -DSPARK flag recently defined in the Spark’s makefile.

@trublion - your application.cpp needs an update:

    // Get the IP address of the Spark Core and send it as an OSC Message
    coreIPAddress = Network.localIP();
    coreIPMessage.add(coreIPAddress[0]).add(coreIPAddress[1]).add(coreIPAddress[2]).add(coreIPAddress[3]);

Error message:

Network.localIP(); has been deprecated in favour of WiFi.localIP();

Change coreIPAddress = Network.localIP(); to coreIPAddress = WiFi.localIP();

Hi.
I can’t compile the code using make on macOS.

I get:
    collect2: error: ld returned 1 exit status
    make: *** [core-firmware.elf] Error 1

Any ideas how to resolve this?

I have exactly the same issue. Any news on this matter?