OSC (Open Sound Control) with Spark Core?

Would it be feasible to make a library that lets the Spark Core send and receive OSC messages?
I really like the simplicity of OSC and would love it if the Spark Core were OSC compatible. I can’t see what would prevent it; there is already a library for the Ethernet shield, but I’m not sure I’ll be able to rewrite it to work well with the Spark Core.
It would make communication with, for example, audio/video software very easy, as many such programs already support OSC, and I think many more will.


I don’t see why it can’t be done! I’ve done some OSC-controlled projects with my TouchOSC iPhone app, and it should be very possible to set your host to the Spark Core’s IP address once the TCPServer library is complete. This is a very cool way to give a simple project a fantastic interface that’s mostly drag and drop. You don’t have to write an iOS app or create a webapp if you go this route. There are a bunch more OSC apps for iPhone and other OSes as well. I think this is the library you were referring to: https://github.com/CNMAT/OSC


I’m already using OSC via PyOSC for a robotics project so it would be useful to me to have OSC support in the SparkCore firmware as well.

Yep this will definitely be possible!

OSC-encoded messages usually travel over UDP, or over SLIP on serial ports or USB serial. Both are possible, as CNMAT’s OSC library already supports ARM microcontrollers like the one used on the Teensy 3. I have a Spark, but I won’t be looking at contributing to porting our library until next year (busy with Yun, Teensy 3.1 and x-osc). If you want to contribute, here you go: https://github.com/CNMAT/OSC


I can see that Spark Core is in the goals for the next release of CNMAT OSC. Any idea what the ETA for that is? I’m just wondering whether it’s worth trying to port it myself, or if I should just wait.

I was looking at this repo yesterday. The last update was 4 months ago, so I would not be very optimistic about the next release. It depends on what you want to do with OSC. Nowadays you can also send BLOB data over OSC (for example, a bitmap picture). OSC is just a simple middleware layer over TCP or UDP (I use UDP for realtime VSTi automation). Receiving OSC messages really just amounts to simple pattern matching on the received address string.

Apart from being a dev, I produce drum & bass / neurofunk and have some releases as well. I use Kinect for Windows and control various parameters via OSC. The best thing about OSC compared to MIDI is network support with broadcast. This way I can use one quad-core machine just for synthesis and another for Kinect recognition. Link: http://www.kvraudio.com/developer/cyberluke-dsp

Oh man, cyberluke you sound so talented. Maybe you should write the library for me :slight_smile:

I’m busy experimenting with a Digole OLED display (a realtime wireless display showing the Windows desktop at 160x128), but I can help if you need to receive some basic commands from TouchOSC. I would define an array of string messages that the Spark Core would listen for and then call some method with the value as an argument. Something like Spark.function() for registering a public API.
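A registration scheme like the one described above could be sketched like this. Everything here (the Route table, the dispatch function, the /volume address) is hypothetical illustration, not part of the Spark firmware or the CNMAT library:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical sketch: map OSC address strings to callbacks, in the
// spirit of Spark.function() registration.
typedef void (*OscHandler)(float value);

struct Route {
    const char* address;   // OSC address to listen for, e.g. "/volume"
    OscHandler handler;    // called with the decoded argument
};

static float lastVolume = 0.0f;
static void setVolume(float v) { lastVolume = v; }

static Route routes[] = {
    { "/volume", setVolume },
};

// Look up the incoming address and invoke its handler.
// Returns true if a route matched.
bool dispatch(const char* address, float value) {
    for (const Route& r : routes) {
        if (std::strcmp(r.address, address) == 0) {
            r.handler(value);
            return true;
        }
    }
    return false;
}
```

On a real Spark Core you would call dispatch() with the address string parsed out of the received UDP payload; exact address matching is the simplest case, since full OSC also allows wildcard patterns in addresses.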


So I decided to take a shot at it myself.
Here is my code so far:

The code verifies and uploads to the Spark Core successfully.

However, I’m sending messages to Max/MSP and they’re coming out wrong.
This is what it’s saying:
OSC packet size (10) not a multiple of 4 bytes: dropping
OSC packet size (1) not a multiple of 4 bytes: dropping
OSC packet size (1) not a multiple of 4 bytes: dropping
OSC packet size (1) not a multiple of 4 bytes: dropping
OSC packet size (1) not a multiple of 4 bytes: dropping
OSC packet size (1) not a multiple of 4 bytes: dropping
OSC packet size (1) not a multiple of 4 bytes: dropping
OSC Bad message name string: DataAfterAlignedString: Incorrectly padded string. Dropping entire message.

It says this over and over again.
I noticed that if I insert an error into my program and then hit Verify, there are some warnings that show up.
Here are the warnings:
OSCMessage.cpp: In member function ‘int32_t OSCMessage::getInt(int)’:
OSCMessage.cpp:128:16: warning: converting to non-pointer type ‘int32_t {aka long int}’ from NULL [-Wconversion-null]
OSCMessage.cpp: In member function ‘uint64_t OSCMessage::getTime(int)’:
OSCMessage.cpp:136:16: warning: converting to non-pointer type ‘uint64_t {aka long long unsigned int}’ from NULL [-Wconversion-null]
OSCMessage.cpp: In member function ‘float OSCMessage::getFloat(int)’:
OSCMessage.cpp:144:16: warning: converting to non-pointer type ‘float’ from NULL [-Wconversion-null]
OSCMessage.cpp: In member function ‘double OSCMessage::getDouble(int)’:
OSCMessage.cpp:153:16: warning: converting to non-pointer type ‘double’ from NULL [-Wconversion-null]
OSCMessage.cpp: In member function ‘int OSCMessage::getString(int, char*, int)’:
OSCMessage.cpp:164:16: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null]
OSCMessage.cpp: In member function ‘int OSCMessage::getBlob(int, uint8_t*, int)’:
OSCMessage.cpp:175:16: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null]
OSCMessage.cpp: In member function ‘char OSCMessage::getType(int)’:
OSCMessage.cpp:184:16: warning: converting to non-pointer type ‘char’ from NULL [-Wconversion-null]
OSCMessage.cpp: In member function ‘void OSCMessage::decodeData(uint8_t)’:
OSCMessage.cpp:567:64: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
OSCMessage.cpp: In member function ‘void OSCMessage::decode(uint8_t)’:
OSCMessage.cpp:585:12: warning: enumeration value ‘DONE’ not handled in switch [-Wswitch]
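For what it’s worth, the -Wconversion-null warnings above come from error paths that return NULL from functions whose return type is not a pointer; replacing NULL with a plain 0 of the right type should silence them without changing behavior. A reduced, hypothetical example (not the actual OSCMessage code):

```cpp
#include <cstdint>

// GCC warns on this style because NULL is a pointer constant:
//   int32_t getInt(int i) { ... return NULL; }   // -Wconversion-null
// Returning 0 of the right type has the same value and no warning:
int32_t getIntFixed() {
    return 0;   // error sentinel as an integer, not NULL
}
```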

So that’s where I’ve gotten to.
Anyone who wants to help, take a look at the code, add to this thread, and maybe we can get OSC working soon!
Or maybe someone else will take what I’ve done so far as a base and run with it :slight_smile:

I don’t know anything about OSC :-), but I read through the code and have a suggestion or two. In general, this is very clean code and in most places the code is very careful about integer types and sizes, but there are other places I am not so sure about.

For instance, OSCData has overloaded constructors for both int and int32_t, but those are the same type on :spark:.

//overload the constructor to account for all the types and sizes
OSCData(const char * s);
OSCData (int);
OSCData (int32_t);
OSCData (float);
OSCData (double);
OSCData (uint8_t *, int);

Should that be changed to OSCData(int16_t) for :spark:? The int constructor casts its input to int32_t anyway, but you would have to look at the callers to be sure.

I think the code could benefit from a few thoughtful changes from int to int16_t for :spark:, but it is sometimes hard to know which way to go. It would be a lot easier for somebody who already understands what the code is doing.

Similarly for float and double: on Arduino, float is 32 bits, and double is usually 32 bits but sometimes 64 bits, as on the Due.

I would also look at the byte alignment code, calls to sizeof(), and big vs little endian stuff. I could not see anything I thought was a problem, but those are traditional problem areas porting code like this and worth further investigation.
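On the endianness point: OSC encodes 32-bit integer and float arguments big-endian, while the ARM core on :spark: is little-endian, so argument bytes must be swapped before sending. A minimal sketch (shift-based, so it works the same regardless of host byte order):

```cpp
#include <cstdint>

// Serialize a 32-bit value in OSC's big-endian (network) byte order.
void writeBigEndian32(uint8_t out[4], uint32_t v) {
    out[0] = (v >> 24) & 0xFF;   // most significant byte first
    out[1] = (v >> 16) & 0xFF;
    out[2] = (v >> 8)  & 0xFF;
    out[3] = v & 0xFF;           // least significant byte last
}
```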

Also there is a #define that changes the alignment code to a faster inlined version, so you should figure out which one you are using.

Can you try sending a test message that only uses char or uint8_t types?

Can you try sending a test message that is already 4-byte aligned using char or uint8_t?

That would remove some variables from your problem and might show you where to look next.

Good luck!




“The only thing we need to know about the OSC Packet is that it needs to be in multiples of 32 bits (4, 8 bit bytes). This explains why we need to pad out parts with NULLs (0 value bytes).”
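That padding rule is easy to check in code. A small helper (hypothetical, not from the CNMAT library) that computes the on-wire size of an OSC string, including its NUL terminator and padding:

```cpp
#include <cstddef>

// OSC strings are NUL-terminated and then padded with NUL bytes to
// the next multiple of 4. For a string of length n (not counting the
// terminator), the on-wire size is (n + 4) & ~3 -- i.e. n plus one to
// four NUL bytes.
size_t oscPaddedSize(size_t strLen) {
    return (strLen + 4) & ~(size_t)3;
}
```

For example, "/foo" (4 characters) occupies 8 bytes on the wire, and a bare "," type-tag string occupies 4.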


I changed it so I was just sending a message with the address /foo.
That should have been just 4 characters.
But these are the packets I’m seeing:
0000 2f 66 6f 6f 00 /foo.
0000 00 .
0000 00 .
0000 00 .
0000 2c ,
0000 00 .
0000 00 .
0000 00 .

Clearly none of these is a multiple of 4 bytes.
I tried inserting a Serial.println inside the send function of OSCMessage, and it completely froze the Spark Core; the cyan light didn’t even come on when I restarted it, just the dark blue LED, so I had to do a factory reset to get it working again.
I haven’t gotten back to hacking on it since then.
I need to track down where it’s inserting those stray bytes, which I imagine will amount to adding Serial.printlns and looking at the serial terminal output, but not doing it so frequently that it freezes like it did last time.

Hi @jfenwick

Looking at the data you show, it does not seem that far off from what I understand the format to be: there is a /foo that is null-terminated (so 5 bytes), then three zeros padding it to a 4-byte boundary, then a comma and three more zeros, so 4 bytes again.
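To make the expected bytes concrete, here is a sketch of building that minimal /foo message (the appendPadded helper is hypothetical, not the CNMAT API):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Append a string to an OSC buffer, NUL-terminated and padded to a
// 4-byte boundary (always at least one NUL).
void appendPadded(std::vector<uint8_t>& buf, const char* s) {
    size_t len = std::strlen(s);
    buf.insert(buf.end(), s, s + len);
    size_t pad = 4 - (len % 4);       // 1..4 NUL bytes
    buf.insert(buf.end(), pad, 0);
}

// Build the minimal message: address "/foo", type tags "," (no args).
std::vector<uint8_t> buildFoo() {
    std::vector<uint8_t> pkt;
    appendPadded(pkt, "/foo");
    appendPadded(pkt, ",");
    // expected: 2f 66 6f 6f 00 00 00 00 2c 00 00 00  (12 bytes)
    return pkt;
}
```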

One thing I noticed reading through the code is that it sometimes does a write for each byte rather than building up an array of bytes and then doing a UDPclient.write(bufferArray, bufferSize). So code like this:

    p.write((uint8_t *) address, addrLen);
    //add the padding

Fits the pattern you report above of

0000 2f 66 6f 6f 00 /foo.
0000 00 .
0000 00 .
0000 00 .

Could it be that the protocol somehow requires the entire OSC packet to be in a single UDP packet? If true, you could modify the send method to build up a packet and then write it all at once.

I think you’re right, this is what happens when you run the original code on an Arduino Ethernet:
2f 66 6f 6f 00 00 00 00 2c 00 00 00 /foo…,…

I imagine that it has to do with the way that the Print object works differently on the Spark Core.

Comparing the comments in the Arduino Ethernet UDP code:

To the Spark Core UDP code:

It’s clear that they do not work the same way.
Arduino Ethernet uses the write function to buffer data into a packet in preparation for sending it out with the endPacket function.
The Spark Core, on the other hand, sends data as soon as you call the write function, and its endPacket function appears to be just a dummy.
I feel like this is a subtle and possibly unfortunate API difference between the Spark Core and Arduino.

You can consider UDP to be a byte stream where packetization does not matter, OR you can consider it packetized with the packet boundaries being critical. It sounds like this protocol assumes the latter. The problem for :spark: is that the UDP buffer lives on the other side of the garden wall, in the driver’s _remoteSockAddr.

You can make this work by changing the OSC send method to do the buffering that is done by default in Arduino land, and then calling UDPClient.write(buf, len) to send the entire buffer.

In a perfect world, we would have both a low latency, send as soon as you can interface AND a buffered, wait for endPacket() interface for UDP.

I tried creating a dynamic array, adding all the bytes, and then sending them all at once in the send function:

But I got the same result of the packet being broken up in the same places!
I’m starting to wonder if there is something deeper in one of the libs that breaks it up based on null characters, like maybe in the CC3000 lib.
That would be kind of awful.

It is very strange that you have the same problem with the new code. I have been using the UDP client for NTP packets for some time now, with lots of zeros in them, without problems, so I don’t think that’s related. Maybe that was not the cause.

This code uses the base class Print as the type for the UDP object; I use a UDP* and pass in &udp in begin(), then do _localUDP->write(buffer, length). I don’t see a problem with Print, but it is a difference.

How are you currently getting the dumps you show?

Can you run a packet monitor like wireshark on the computer receiving these packets?

I’ve been using Wireshark.