UDP Broadcast problems with simple application

I created what I thought was an extremely simple application that constantly sends a UDP broadcast of "Hello" to all devices on the network on port 3333. I used 255.255.255.255 as the destination IP so every device would receive the packet as a UDP broadcast. This is not working. I added Serial print-outs for testing to make sure everything was running; I see those prints, but nothing arrives via UDP. Would someone be so kind as to flash this firmware and let me know if it works?

UDP udp;

byte remoteIP[4] = {255, 255, 255, 255};
int remotePort = 3333;

unsigned char TxMsg[5] = { 'H', 'E', 'L', 'L', 'O'};

void setup() {
    udp.begin(3333);
    Serial.begin(9600);
}

void loop() {
    Serial.print("Running");
    udp.beginPacket(IPAddress(255,255,255,255), 3333);
    udp.write(TxMsg, 5);
    udp.endPacket();
    delay(1000);
}

Thank you

I don't have a spare Core, so if UDP has stopped working to the extent that it seems, I dare not flash your code and later re-flash mine, as I think mine would then no longer work. My app does what yours does except:

- the loop delay is 10,000
- it also reads a sensor each loop, reporting its value by UDP
- it uses different values for the local and remote port (although this ought not to matter, it does on the Spark after the first few datagrams)
- the sent string is typically 5 chars but is also null-terminated, i.e. 6 bytes are sent
- every 50 secs I udp.stop() and udp.begin() again, to work around an old bug in UDP, perhaps now fixed

Try listening on a different port than you are broadcasting on. You are receiving your own packets, and the TI CC3000 is likely running out of packet buffers since you never pick up your received data.
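The fix bko describes can be sketched as a small helper called from loop(). This is a hypothetical, untested Wiring-style sketch using the Spark UDP API's parsePacket()/read(); the helper name and buffer size are mine, not from the thread:

```cpp
// Drain any datagrams waiting on our listening socket each pass through
// loop(); otherwise the CC3000's small pool of packet buffers fills up
// with our own broadcasts and UDP stalls.
void drainReceived(UDP &udp) {
    unsigned char scratch[64];
    while (udp.parsePacket() > 0) {
        udp.read(scratch, sizeof scratch);  // read and discard the payload
    }
}
```

Calling this once per loop() iteration (or simply listening on a port other than the one you broadcast to, as suggested above) keeps the receive buffers from being exhausted.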

Thank you @bko

I actually just read the book you and @SomeFixItDude wrote about UDP not working haha.

Yes I tried changing the listen port. Here is my updated code:

UDP udp;

byte remoteIP[4] = {255, 255, 255, 255};
int remotePort = 3333;

unsigned char TxMsg[5] = { 'H', 'E', 'L', 'L', 'O'};

void setup() {
    udp.begin(5000);
    Serial.begin(9600);
}

void loop() {
    Serial.print("Running");
    udp.beginPacket(remoteIP, 3333);
    udp.println("Hello");
    udp.endPacket();
    delay(1000);
} 

Still no go. From what I read, it sounded like it might have to do with interference caused by the Spark Cloud. I was going to try turning that off in my setup, then try 20 broadcasts, turn the Spark Cloud back on, and try 20 more broadcasts. @SomeFixItDude seemed to indicate that turning the cloud service off made a difference.

I have this code stripped down to be absolutely as simple as possible. I am never transmitting anything to the board, just transmitting from the board and it does not work.

Thank you.

When I run this code, it works for me. One of the weird things about looking for broadcast packets is that netcat (nc) has a long-standing bug where it receives one packet and then switches out of broadcast mode; if you restart it, you can get one more, etc. tcpdump has no such bug. Here is the output I got:

17:35:05.536566 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:06.560879 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:10.656661 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:11.885344 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:12.704915 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:13.523800 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:14.547945 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:15.571853 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:16.595769 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:17.619777 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:18.643744 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:19.667900 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:20.691920 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:21.715824 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:22.739925 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:23.763845 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:24.583057 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:25.607077 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:26.631755 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5
17:35:27.655116 IP 10.0.0.2.5000 > 255.255.255.255.3333: UDP, length 5

@IOTrav welcome to the UDP party. :smile:

OK, what exactly is not working? Is your core freezing, CFOD, red blinks, or is the UDP message just not going anywhere? If it is the message, what device is not receiving it?

I ended up coming up with a solution for my particular problem; I'll explain in my next post. Things to try:

  1. Change the broadcast address to a single unicast IP address and see if the message is delivered. Some routers are not UDP-broadcast friendly, and some are not Wi-Fi UDP-broadcast friendly. Most routers are unmanaged and dumb, so this is most likely not your problem, but it is worth testing.

  2. The application/device that is the target of your message. Go unicast and send directly to an IP. Can you set up a very simple Node.js app that reads UDP datagrams from the port and logs the results to the console? I can provide you one if you need. Or you could use Wireshark on your PC... Have the core send the UDP straight to your PC. Verify it worked, then switch to broadcast and verify it doesn't work.

My problem was freezing, etc.; I could only get a few UDP packets to work. Anyway, the two tests above will verify that neither the target nor the router is your problem and that it is just your Spark Core.

Good Luck!! Try those tests out and let us know.

2 Likes

Thank you @bko and @SomeFixItDude

I am using Com Operator, which is a great program for testing communications. It's available here (Windows only, free for 30 days):
http://serialporttool.com/CommOpInfo.htm
There is also a free version here:
http://serialporttool.com/CommPalInfo.htm

I am quite experienced with these types of products. We actually already manufacture a push notification product line which sends out UDP packets when inputs are tripped here:
http://www.controlanything.com/Relay/Relay/Push_Notification
I am just stating this to say I am extremely experienced with the UDP and TCP protocols. That makes this a little frustrating since it seems I am doing everything right.

@SomeFixItDude I will try directing my packets to a particular IP (my computer), but I am 100% confident it is not router/network related, since I have done this so much in the past. I will let you know what I find.

@bko do you have a later install of the Spark Core firmware perhaps? I purchased these Spark Cores on September 15th of this year. Have any improvements been made to them that would have corrected this issue since then?

Thank you

Hi @IOTrav

I compiled and downloaded your code using the web IDE, since that represents the latest released version. Other than the bootloader and emergency backup firmware, every time you load new firmware into your Spark Core you replace the previous version completely, so what really matters is which branch you compiled against.

With the Spark cloud connected, I am able to send UDP broadcast packets any way I want from my core. There are some problems with UDP broadcast with the cloud turned off at startup, but I was able to broadcast to a subnet broadcast address.

As I am sure you know, the router can also make a difference. Where I work, no 255.255.255.255 packets will be routed due to a security rule, for instance.

What tools are you using to search for the broadcast packets?

@IOTrav,

That is what I mean by a managed switch. @IOTrav, it sounds like you are knowledgeable about UDP, so let's eliminate the network from the possibilities. Most managed switches block multicast, but those few tests will rule that out. Some switches will not block the subnet broadcast. At my work we set up a VLAN and isolate the ports that require multicast so they don't clobber our whole network. I am posting two Node.js programs to test the broadcast at the end of this post. I would try the send from one host and the receive on another host. A test where one or both hosts are on Wi-Fi would also be advisable. Another thing to try is simply changing the last number in your IP address to 255; for example, my IP address is 10.0.0.13, so I broadcast to 10.0.0.255. If all of that works, I'd say your network is in the clear.

Also what is your CC3K Patch level? You can look at it by going to https://api.spark.io/v1/devices/[your deviceid]/?access_token=[your token]

If your patch version ("cc3000_patch_version") is less than 1.28, I would do the deep update. Instructions here: http://docs.spark.io/troubleshooting/#deep-update There are some changes to UDP and ARP in that update.

adjust port and ip as needed below

udp_recv.js

"use strict";

var dgram = require('dgram');
var udpServer = dgram.createSocket('udp4');

udpServer.on('message', function(message, remote) {
    console.log(remote);
    console.log(message);
});

udpServer.bind(9000);

udp_send.js

var PORT = 9000;
var HOST = "255.255.255.255";
//var HOST = "10.0.0.255";

var dgram = require("dgram");
var message = new Buffer("Hello Sparky");

var client = dgram.createSocket("udp4");
client.bind(function() {
    // SO_BROADCAST must be enabled before sending to a broadcast address
    client.setBroadcast(true);
    client.send(message, 0, message.length, PORT, HOST, function(err, bytes) {
        if (err) throw err;
        console.log("UDP message (len = " + message.length + ") sent to " + HOST + ":" + PORT);
        client.close();
    });
});

Good Luck and let us know

So here is the latest @bko and @SomeFixItDude

I changed the last byte of the broadcast IP to 255 so all devices on the network get the broadcast. Our DHCP server gives out 192.168.2.XXX IPs, so I set the module to send the packets to 192.168.2.255. This works. The problem, however, is that I would have to know the user's default gateway, and things could get hairy there. I really need to set the UDP broadcast to 255.255.255.255 so it never fails. I believe the issue may be that this broadcast is also going to the Spark Cloud server, which is wreaking havoc on the whole system. Am I way out of line? Here is my latest code, which works:

UDP udp;

byte remoteIP[4] = {192, 168, 2, 255};
int remotePort = 3333;

unsigned char TxMsg[5] = { 'H', 'E', 'L', 'L', 'O'};

void setup() {
    udp.begin(5000);
    Serial.begin(9600);
}

void loop() {
    Serial.print("Running");
    udp.beginPacket(remoteIP, 3333);
    udp.println("Hello");
    udp.endPacket();
    delay(1000);
}

Am I crazy here? Every other network device I have ever used, and that is quite a few, worked with this method of sending a UDP broadcast to 255.255.255.255, so it works on everyone's network, no problem. I'm at a loss and grasping for reasons or ideas here.

As I said above, with your exact code above I was able to broadcast on 255.255.255.255, so it is not the Spark code that is holding you back here. Make sure you have done the "deep update" patch to the TI CC3000.

Can any other hosts on this same network broadcast on 255.255.255.255? What kind of router setup do you have?

Here at work, I cannot broadcast on 255.255.255.255; it is blocked by a rule in a router set up by the IT department. At home, on the el-cheapo Netgear router I use for Spark, broadcast works great.

If you want to use a subnet address, you can construct one by combining the first three bytes of the local IP address with 255 as the fourth byte, for testing on a class C subnet like you have in your code above. In general you should use the subnet mask to find out how many 1's to add to your address and not assume it is only 8 bits.

http://docs.spark.io/firmware/#wifi-subnetmask

1 Like

@IOTrav, for this problem

Spark has got a cure.

The Core firmware docs give you this

WiFi.gatewayIP() returns the gateway IP address of the network as an IPAddress.

Together with @bko 's WiFi.subnetMask() hint you might be able to get it running.
To me, broadcasting to the whole world just out of convenience and not out of actual need seems a bit hacky.

Nonetheless it should still work - as it does on @bko 's and my own home networks, too.

1 Like

Ok. So apparently I have completely lost my mind!! I thought I would give it one more try with my original code, as posted below, and it is working perfectly. I did get it to lock up (flashing red LED), so I did a reset. Then I ran this code.

UDP udp;

byte remoteIP[4] = {255, 255, 255, 255};
int remotePort = 3333;

unsigned char TxMsg[5] = { 'H', 'E', 'L', 'L', 'O'};

void setup() {
    udp.begin(5000);
    Serial.begin(9600);
}

void loop() {
    Serial.print("Running");
    udp.beginPacket(remoteIP, 3333);
    udp.println("Hello");
    udp.endPacket();
    delay(1000);
}

Been running for about 30 minutes now, no issue. Not sure what I did, but perhaps this was all for naught. I don't know whether to mark this as solved or not.

2 Likes

@IOTrav, lol, we have all been there. So if you don't need to receive any UDP packets, I highly recommend a reset like the following.

UDP Udp;  // assuming a global UDP instance, not shown in the original post

void sendUDP() {
    if (WiFi.ready()) {
        unsigned char TxMsg[12] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};

        /* removed logic here... read sensor, pack data into the array */

        Udp.begin(9000);
        Udp.beginPacket(IPAddress(10,0,0,21), 9000);
        Udp.write(TxMsg, 12);
        Udp.endPacket();
        Udp.stop();
    }
}

In the previous project where @bko and I were talking, I was trying to create a peer-to-peer network where any core could join in and share sensor readings with the others. Each core occasionally broadcast its data; the other cores picked up the broadcast and added the IP to their send list. Most data was sent via unicast to each IP. I gave up on that design, and now I have Spark Cores all sending data to a Raspberry Pi. I won't get too far into the details, but I found it very hard to implement with the current issues the core has with UDP.

You can't trust the remote IP address that comes in the packet (minor problem); you have to instead wrap your UDP datagram in your own custom envelope that contains all the vital info you want.

Sending UDP to IP addresses that no longer exist caused the core to freeze (major problem). Receiving a lot of datagrams quickly caused CFOD (major problem).

It seems that all communication has to use the cloud if you want it not to crash the core. I know other users have successfully received data not via the cloud, e.g. @bko's NTP library.

I see the Spark Photon has ditched the CC3000, so I am really hopeful many of the network issues will be worked out.

ANYWAY, if you don't need to receive data, resetting with the above method is very reliable. I currently run this way and do not experience any problems.

Good Luck on your project!

Hi All,
I'm getting similar results to what IOTrav initially had, but only in certain circumstances:

I can broadcast to 255.255.255.255 ONLY after connecting to the cloud.

  • It doesn't matter if I've connected 'automatically' or manually.
  • It doesn't matter if I manually disconnect after having connected to the cloud.
  • I can always broadcast to my local net (e.g. 192.168.1.255).
  • I can see the Spark Core sending broadcasts to 255.255.255.255 as part of the DHCP process, so I know this is not a network configuration issue.
  • I feel this all indicates the Spark Core is capable of doing what I want, but IS NOT DOING what I want.

Extra info:
I am detecting broadcast packets with tcpdump -nnvei, filtered by source MAC address.
I have applied 'deep_update', with no apparent change on this issue.

I've just discovered this thread - the info I have seems relevant and may help explain why IOTrav isn't going mad - I'll post code soon.

Any thoughts appreciated!

There are several UDP threads running in parallel at the moment.
As it seems, your problem might be fixable with @bko 's tip in post #21:

http://community.spark.io/t/spark-core-cant-send-udp-broadcast-packets-without-cloud-connection/8070/20

Sorry for the rotten formatting of the quote :wink:

1 Like

Hi @wmcelderry

I have posted a workaround for the problem of UDP not broadcasting to 255.255.255.255 - I see @ScruffR beat me to the link!

Try that and report back! We are still gathering data on this problem. There is also a github issue if you want to comment there.

Thanks for taking the time to reply!
Is it best to keep these threads separate or should I jump across for any future replies?

I'm not having any success with either the TCP or the ICMP ping workaround - both give the same results as I originally posted.

My code is getting very hacky, but I'm quite sure it's working as intended, based on the network output I'm seeing when I change the #define options. If you spot any errors that have crept in, that would be good to know!

I am compiling locally using the master branch of the spark firmware repo, then uploading with dfu-util. Could any of that be causing issues, given that a few others confirm the workaround works for them?

If there's any code that is known to work, I'd love a link so I can test whether it is my environment or something else...

My code is getting very messy now, sorry for that - but I hope it conveys that I'm trying all the options!

W.


#include "neopixel__spark_internet_button/neopixel__spark_internet_button.h"

// IMPORTANT: Set pixel COUNT, PIN and TYPE
#define PIXEL_PIN A7
#define PIXEL_COUNT 11
#define PIXEL_TYPE WS2812B




#define INITIALISE 0
#define CONNECTING 1
#define CONNECTED 2
#define DISCONNECTED 3


//config:
#define MAN_WIFI
#define EVERYONE
//#define MANUAL_CONNECT_TO_CLOUD
//#define DISCONNECT_FROM_CLOUD
#define TCP_WORKAROUND
#define WORKAROUND

#ifdef MAN_WIFI
	SYSTEM_MODE(MANUAL);
#else
	SYSTEM_MODE(AUTOMATIC);
#endif

void signal(int state);

#ifdef WORKAROUND
bool once = true;

#ifdef TCP_WORKAROUND
TCPClient client;
#endif
#endif

Adafruit_NeoPixel strip = Adafruit_NeoPixel(PIXEL_COUNT, PIXEL_PIN, PIXEL_TYPE);

unsigned int
	state = INITIALISE;


UDP udp;

int limit,localPort = 5050,packetCount=0;


unsigned char tgtIP[] =
#ifdef EVERYONE
	{255,255,255,255}
#else
	{192,168,1,255}
#endif
	;

void sendPacket()
{
	udp.begin(localPort);
	udp.beginPacket(tgtIP, 8080);
	udp.write((const unsigned char *)"Hello World!",12);

	udp.endPacket();
	udp.stop();
}

void setup() 
{
	pinMode(D0,INPUT);
	pinMode(D1, INPUT_PULLUP);

	strip.begin();
	strip.show(); // Initialize all pixels to 'off'

	limit=1;
	signal(10);
}

int togState=0;

void loop() 
{
#ifdef MAN_WIFI
	if(WiFi.connecting())
		return;

	if(!WiFi.ready())
	{
		//not ready or connecting - turn it on and start to connect...
		WiFi.on();
		WiFi.connect();
		signal(1);
		return;
	}

	
	#ifdef MANUAL_CONNECT_TO_CLOUD
	switch(state)
	{
		case INITIALISE:
			signal(2);
			Spark.connect();
			state = CONNECTING;
			return;
		case CONNECTING:
			if(Spark.connected())
			{
				signal(3);
				state = CONNECTED;
			}
			else
				return;
	}

	#else
	state = CONNECTED;
	#endif
#endif

#ifdef DISCONNECT_FROM_CLOUD
	if(state == CONNECTED && Spark.connected())
	{
		signal(4);
		Spark.disconnect();
		state = DISCONNECTED;
	}
#endif

#ifdef WORKAROUND
	if(once)
	{
		once = false;
		signal(5);
		#ifdef TCP_WORKAROUND
			client.connect("www.google.com",80);
			delay(1000);
			client.stop();
		#else 
			//WiFi.ping({192,168,27,210});
			WiFi.ping(WiFi.gatewayIP());
		#endif
		signal(6);
		delay(1000);
		signal(7);
	}
#endif

    if(digitalRead(D1) == 0)
    {
        limit=1;
        packetCount=1;
    }
    
    if(packetCount > 0)    
    {
        packetCount--;
        sendPacket();
    }
    

    if(limit > 0)
    {
	signal(8 + togState);
	togState = (togState ? 0 : 1);
        limit =0;
	delay(500);
    }
}

void blank()
{
	for(int i=0; i<strip.numPixels(); i++)
		strip.setPixelColor(i, 0);
}


unsigned int on = Adafruit_NeoPixel::Color(63,0,0);
void signal(int state)
{
	blank();
	strip.setPixelColor(state,on);
	strip.show();
}

Just for completeness:

I was struggling to get this to work for me, but I've got there in the end.
The problem may have been twofold:

  1. You should test the result of ping (or of the TCP connection) and retry if it fails.
  2. In another thread @bko points out that 'udp.endPacket()' is not a blocking call and that a call to 'udp.stop()' terminates transmission immediately. I've since modified my code to have another '#define' option that allows using UDP with either one begin and stop call per packet OR one begin call for the whole application.
1 Like

One of the problems is that WiFi.ping(), many of the UDP functions, and many other functions too, do actually have return codes, but there is nothing in the documentation text or examples to indicate that they do. They are documented, and used in the doc examples, as if they were declared void in the underlying code. I have raised this issue a number of times and would ask that others do too. As I see it, the never-ending need to refer to the cabal in the cathedral for information which should be readily available detracts from Spark considerably.

If I had known a return code was returned, and what the codes meant, I would already be testing them, and my code would react appropriately. I will now return again to yet another cycle of burn & test & deploy in this "how to get UDP broadcasts working reliably" exercise.

Similarly for the info that one ought not to call UDP.stop() immediately after UDP.endPacket(). Just how, exactly, dear gatekeepers, are we ordinary mortals ever supposed to know that? [For the avoidance of doubt, the answer is "documentation".]
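Where return codes do exist, defensive code can at least retry. A hedged Wiring-style sketch of the idea (the helper name is mine; it assumes, as with Arduino's EthernetUDP, that endPacket() returns a value <= 0 on failure - verify against the firmware branch you compile):

```cpp
// Hypothetical defensive send for the Spark/Wiring UDP API.
// Assumes endPacket() reports failure with a value <= 0,
// as Arduino's EthernetUDP does; check your firmware branch.
bool sendWithRetry(UDP &udp, IPAddress dest, int port,
                   const unsigned char *msg, size_t len, int tries) {
    for (int i = 0; i < tries; i++) {
        udp.beginPacket(dest, port);
        udp.write(msg, len);
        if (udp.endPacket() > 0) {
            return true;   // packet handed off to the stack
        }
        delay(100);        // brief back-off before retrying
    }
    return false;
}
```

Pairing this with a short delay before any udp.stop(), per the endPacket() note above, avoids cutting off a datagram that is still being transmitted.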

The #define referred to by @wmcelderry above becomes another of the several++ things to try among the 2^several magic incantations required to get one's code to work.