I'm having a problem getting UDP traffic from my spark cores

I am running the following application on my spark core.
At the time of this writing, loop() has run 560 times, but I have seen exactly 5 of those packets in Wireshark, which is listening on the WiFi interface of my Mac on the same WiFi network as the Spark.

#include "application.h"

UDP Udp;
unsigned count = 0;

void setup (void)
{
    Udp.begin (12345);
    Serial.begin (38400);
}

void loop (void)
{
    Serial.print ("UDP ");
    Serial.println (count++);

    Udp.beginPacket (IPAddress (255,255,255,255), 12345);
    Udp.write ("Hello");
    Udp.endPacket();
    delay (1500);
}

Now, when I change the IPAddress to the WiFi address of my Mac, it works every time.

I’m pretty sure what I’m doing is legal; there’s no reason I know of why this shouldn’t work.

I have also used the subnet broadcast address (192.168.254.255) and I got the same results as a normal broadcast.

Would someone please explain this? Is it my WiFi AP? Is it the Spark Core?

Thanks,

Matt

Hi @mgssnr

I tried it and sure enough after about 14-17 packets my core does the cyan-flash-of-death and then goes directly to flashing green and reconnects with the previous (safe) firmware.

The problem is that you are not picking up your own broadcast packets (you get them too!) and so the UDP receive buffer is overflowing. Hey @david_s5, I didn’t get any red SOS codes for this–is that correct for the webIDE?

You could call Udp.begin() and Udp.stop() around each packet send, since Udp.stop() flushes the receive buffer, or you could do this (I commented out the serial port and added a flashing LED so I could tell when it died):

UDP Udp;
unsigned count = 0;

char discard[64];

void setup (void)
{
    Udp.begin (12345);
    //Serial.begin (38400);
    pinMode(D7,OUTPUT);
}

void loop (void)
{
    //Serial.print ("UDP ");
    //Serial.println (count++);

    digitalWrite(D7,HIGH);

    Udp.beginPacket (IPAddress (255,255,255,255), 12345);
    Udp.write ("Hello");
    Udp.endPacket();

    int32_t retries = 0;
    int32_t bytesrecv = Udp.parsePacket();
    while (bytesrecv == 0 && retries < 1000) {
        bytesrecv = Udp.parsePacket();
        retries++;
    }

    if (bytesrecv > 0) {
        Udp.read(discard, bytesrecv);
    }

    digitalWrite(D7,LOW);
    delay (1500);
}

You might want to take some action (Udp.stop()?) when retries hits 1000, or not, so I left that out. If you don’t know the packet size (64 bytes max for this code) then you will have to Udp.read() one byte at a time to discard them.
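For completeness, the first alternative mentioned above (wrapping each send in Udp.begin()/Udp.stop() so your own broadcast never accumulates in the receive buffer) might look roughly like this. This is only a sketch against the same firmware API used in the code above:

```cpp
UDP Udp;

void setup (void)
{
    // Nothing to do here; the socket is opened per-send in loop().
}

void loop (void)
{
    // Open the socket, send one broadcast, then close it again.
    // Udp.stop() discards anything queued in the receive buffer,
    // so the core's own broadcast packet cannot pile up.
    Udp.begin (12345);
    Udp.beginPacket (IPAddress (255,255,255,255), 12345);
    Udp.write ("Hello");
    Udp.endPacket();
    Udp.stop();
    delay (1500);
}
```

The trade-off is that you also throw away any legitimate incoming packets that arrive between sends, so this only suits a transmit-only node.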


@bko - The IWD watchdog is trouncing the panic. There is a pull request that fixes it; it will ensure two SOS-N-SOS cycles before the reset. Just be mindful that an IWD timeout may prevent your setup() from being called, so hit reset to retest.


While it’s legal, I’ll wager that the CC3000 isn’t the only embedded TCP/IP stack that doesn’t handle RFC 919 broadcast packets correctly.

You might want to shuffle over to the TI forum and see if they have an answer.

In the meantime, I recommend that you calculate the subnet broadcast address and use that.
I’m not clear whether you said that works or doesn’t, but it is probably worth listening for the packets yourself to avoid buffer problems, as others have suggested.

Thanks AndyW. The subnet broadcasts cause the same problem.

I will have to figure out some other way to alert devices on my network (perhaps a multicast like the core does, but with a different payload) that the spark core is alive and ready.

bko:

I did not think that the stack would keep broadcasts from itself. Thanks for the tip.
I could certainly read and throw away all UDP traffic; I’m not expecting any.

Thanks for the tip!

@bko Using similar code to yours I am able to make this work.
Thanks very much for your input.

I will still consider moving to some kind of multi-cast.

Hi @mgssnr

Based on your code, I had thought that you wanted to have a generic broadcast protocol where every :spark: core is running the same firmware but they can discover each other. It sounds like you might instead have a master/slave architecture, which is easier since the master runs different firmware from the slaves.

You are getting your own packets because the local port defined in Udp.begin() is the same as the remote port defined in Udp.beginPacket(), and you are using a broadcast address. So if you want to separate these ports, you can!

So in the master, you use Udp.begin(slaveTxPort), and since the master never transmits, you don’t call Udp.beginPacket() at all.

In the slaves you do Udp.begin(randomLargePort) and Udp.beginPacket(broadcastAddr, slaveTxPort);
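Put together, the port split described above might look like this sketch (slaveTxPort is an illustrative name and 33333/40123 are arbitrary example port numbers, not anything defined by the firmware):

```cpp
// --- Master (receiver) firmware ---
UDP Udp;
const uint16_t slaveTxPort = 33333;   // example port the slaves transmit to

char buf[64];

void setup (void)
{
    Udp.begin (slaveTxPort);          // listen where the slaves broadcast
}

void loop (void)
{
    int n = Udp.parsePacket();
    if (n > 0) {
        Udp.read (buf, n);            // handle a slave's announcement
    }
}

// --- Slave (sender) firmware, the corresponding calls ---
// Udp.begin (40123);                 // any large local port != slaveTxPort
// Udp.beginPacket (IPAddress (255,255,255,255), slaveTxPort);
// Udp.write ("Hello");
// Udp.endPacket();
```

Because the slave’s local port differs from slaveTxPort, the slave never receives its own broadcast, so its receive buffer cannot overflow.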

You might want to pick port numbers no one else is using–there are lots. Here is the registry. If you have to depend on a router to rebroadcast your packets, you will likely need to pick a well-known port number.

I hope this makes sense, but if not, let me know and I will try to write more details.

@bko , this makes PERFECT sense! D’oh!!!

What I’m really trying to do is broadcast some bit of information on a known UDP port, which tells systems on the network listening on that port that the core is up and running and able to communicate. The other nodes will then have the IP address of the core and will know how to communicate with it.

Using a well-known port sounds like a bad idea, though. Perhaps multi-cast will turn out to be a better use of the network, but I know that home routers are spotty as to how they operate with them. I’m really not sure yet.

Thanks.

Hi @mgssnr

Sounds like a great system! Can I ask if the “systems on the network” are other :spark: cores OR are they other full computers?

It makes a difference, since one way to get the IP address of your core to other full computer systems is to use a cloud variable. But because the cloud requires SSL/TLS (i.e., https:, not http:), a :spark: core cannot query it right now. So if it is other full computers on your local network, then the cloud is a great solution.

I have been working on using the :spark: core multi-cast startup messages to allow cores to find each other on a subnet, but until recently there were some UDP issues that made that hard. These issues are mostly fixed now. There is still the issue that cores only broadcast their start-up message at power on or reboot, but that seems pretty manageable.

I am very interested in having some :spark: cores that are sensor nodes and other :spark: cores that are view and control nodes, and letting them talk to each other on a local subnet, since it is hard for them to talk through the cloud today.

Well, this functionality is only for demoing.

What eventually will probably happen is I will use the Spark Cloud software on my own servers to talk to a NoSQL database which collects each node’s sensor information over time to present to the end customer.

Later on down the road, we’re going to be doing something similar to what you’re describing. There will be multiple sensor nodes, and then an iOS or Android app (or web page view) that will allow the customer to see everything that’s happening with the cores.

There are command and control aspects of this, but mostly what I need is a reasonably fast autonomous node with wireless network capabilities.

I like the idea of the network you’re describing. LonWorks and others had this idea a while back, but AFAIK not wireless. It will be interesting to see what people build around the :spark:.

I am not sure of the depth of the multicast support in the CC3000. If you have been able to make local subnet broadcasts work, I would declare success and move on, assuming it works well enough for your requirements.

Many a good project has foundered on the rocks of multicast.

@AndyW I have successfully made the UDP broadcast work. Thanks.


Hi, can you post your solution?