UDP broadcast: Spark doesn't reply after some hours

Hi community!
I have a problem with the code below. It works fine for several hours, but then it suddenly stops replying to my broadcasts.
I can send several UDP broadcasts per minute and the Spark replies to all of them; everything works fine. Then I stop broadcasting for some hours, and when I come back and broadcast again, the Spark no longer replies.

I added Serial output in loop() just to check that my code is still running and the Core has not rebooted into the Tinker app, and that looks correct. But the Serial output inside if (Udp.parsePacket() > 0) no longer appears, so UDP seems to have stopped working.
Could this be related to the known TI CC3000 UDP problems, or is it my code? I have installed the latest CC3000 patch.

// A UDP instance to let us send and receive packets over UDP
UDP Udp;

// UDP broadcast messages
const char UDP_ALL[] = "SPARK";            // Broadcast msg to all units
const char UDP_SET_ADDRESS[] = "SETADDR";  // Placeholder -- the original string was not shown

const int RX_BUFFER_SIZE = 20;
unsigned char mRxBuffer[RX_BUFFER_SIZE];

// The STM32's 96-bit unique device ID is memory-mapped at this address
const int UNIQUE_ID_SIZE = 12;
unsigned char *pUniqueID = (unsigned char *)0x1FFFF7E8;

const int UNIT_INFO_SIZE = 23;
unsigned char mUnitInfo[UNIT_INFO_SIZE] = { 'S', 'P', 'A', 'R', 'K', 1, 1, 1, 0, 0, 0, pUniqueID[0], pUniqueID[1], pUniqueID[2], pUniqueID[3], pUniqueID[4], pUniqueID[5], pUniqueID[6], pUniqueID[7], pUniqueID[8], pUniqueID[9], pUniqueID[10], pUniqueID[11]};

uint8_t mUnitAddress = 0;
IPAddress ipAddressRemote;
unsigned int portRemote;

// UDP port
unsigned int localPort = 8888;

enum unit_info {
    UNIT_INFO_TYPE    = 5,
    UNIT_INFO_ADDRESS = 8   // Index assumed -- the original enum was cut off
};

void setup() {
  Serial.begin(9600);

  // Start the UDP
  Udp.begin(localPort);

  // Print your device IP Address via serial
}

void loop() {
  Serial.print("loop is running ");
  // Check if data has been received
  if (Udp.parsePacket() > 0) {
    Serial.print("parsePacket: Ok ");
    int len = Udp.read(mRxBuffer, RX_BUFFER_SIZE - 1);
    if (len < 0) len = 0;
    mRxBuffer[len] = 0;  // Null-terminate before strcmp()

    // Store sender IP and port
    ipAddressRemote = Udp.remoteIP();
    portRemote = Udp.remotePort();

    if (strcmp((char*)mRxBuffer, UDP_ALL) == 0) {
        Serial.println("UDP Broadcast: Ok");
        mUnitInfo[UNIT_INFO_ADDRESS] = mUnitAddress;
        // Send UnitInfo data to sender
        Udp.beginPacket(ipAddressRemote, portRemote);
        Udp.write(mUnitInfo, UNIT_INFO_SIZE);
        Udp.endPacket();  // Complete the datagram -- this call was missing
    } else if (strcmp((char*)mRxBuffer, UDP_SET_ADDRESS) == 0) {
        Serial.println("Set address: Ok");
        // Send data to sender
        Udp.beginPacket(ipAddressRemote, portRemote);
        Udp.endPacket();
    } else {
        Serial.print("UDP Broadcast: Error");
    }
  }
}

I have experienced exactly the same problem. I have an up-to-date Spark Core running in MANUAL mode, no Cloud, which does UDP broadcasts, one every 15 seconds. After several hours the Core just stops sending broadcasts, with no other error indication.

I “fixed” the issue by forcing a reset/reboot every 15 minutes with a System.reset().

Fortunately the reset/reboots do not matter in my application - I am just broadcasting the reading of a TMP36 sensor.
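For reference, the timed-reset workaround can be sketched like this. The 15-minute interval and the System.reset() call are from the post above; the rollover-safe helper and its names are my own, and uint32_t stands in for the Core's 32-bit unsigned long millis() counter:

```cpp
#include <stdint.h>

// Rollover-safe elapsed-time check. Unsigned 32-bit subtraction wraps
// correctly even after millis() overflows (about every 49 days).
bool intervalElapsed(uint32_t nowMs, uint32_t lastMs, uint32_t intervalMs) {
    return (uint32_t)(nowMs - lastMs) >= intervalMs;
}

// In the sketch, something like:
//   const uint32_t RESET_INTERVAL_MS = 15UL * 60UL * 1000UL;  // 15 minutes
//   void loop() {
//       static uint32_t lastResetMs = 0;
//       if (intervalElapsed(millis(), lastResetMs, RESET_INTERVAL_MS)) {
//           System.reset();  // reboots the Core; never returns
//       }
//       // ... broadcast the TMP36 reading ...
//   }
```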

This code seems really fragile to me. Since you only call Udp.begin(localPort) once in setup(), you can’t recover from any problems.

What happens when, say, your DHCP lease expires and the Core gets a new IP address? The socket you opened with Udp.begin() on the TI CC3000 is probably closed, but your code never reacts to this, and every call to the Udp methods will fail from then on.

I think it would be better to detect failures on Udp.write(), then close and reopen the UDP socket.
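One way to structure that recovery is to count consecutive send failures and recycle the socket after a few in a row. A sketch, assuming (since the return values are undocumented) that Udp.endPacket() returns 0 on failure as it does on Arduino; the helper name and the threshold of 3 are my own:

```cpp
// Count consecutive UDP send failures; return true when the socket
// should be closed and reopened. The threshold is arbitrary -- tune it.
const int MAX_UDP_FAILURES = 3;
int udpFailures = 0;

bool recordSendResult(bool ok) {
    if (ok) {
        udpFailures = 0;
        return false;
    }
    return ++udpFailures >= MAX_UDP_FAILURES;
}

// In the sketch, something like:
//   Udp.beginPacket(ipAddressRemote, portRemote);
//   Udp.write(mUnitInfo, UNIT_INFO_SIZE);
//   if (recordSendResult(Udp.endPacket() != 0)) {
//       Udp.stop();             // close the CC3000 socket
//       Udp.begin(localPort);   // and open a fresh one
//       udpFailures = 0;
//   }
```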


The return codes of the UDP.* functions (or whether there are any) are not documented. The examples do not show them being tested. There was a promise to address this, but it all went quiet.

On my network addresses do not expire often, and when they do, the same IP address is allocated. In any event, owing to another Spark bug (perhaps now fixed, but once well known), I close and re-open the UDP connection once per minute.
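That once-per-minute recycle might look like the following. The helper and its names are my own, and uint32_t stands in for the Core's 32-bit unsigned long millis() value:

```cpp
#include <stdint.h>

// Returns true (and updates *lastMs) once every intervalMs milliseconds.
// Unsigned 32-bit arithmetic keeps this correct across millis() rollover.
bool everyMs(uint32_t nowMs, uint32_t *lastMs, uint32_t intervalMs) {
    if ((uint32_t)(nowMs - *lastMs) >= intervalMs) {
        *lastMs = nowMs;
        return true;
    }
    return false;
}

// In loop(), something like:
//   static uint32_t lastRecycle = 0;
//   if (everyMs(millis(), &lastRecycle, 60000)) {
//       Udp.stop();             // close the socket
//       Udp.begin(localPort);   // and reopen it
//   }
```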

You are certainly right that the return codes are not documented here on Spark or on Arduino. Still they are useful and are the key to getting robust networking.

I don’t know what the TI CC3000 does when your DHCP lease expires, but it cannot count on getting the same address again, since that is never guaranteed. Whatever the TI part does, it does it entirely inside its own software environment, at most sending a notification back to the Spark firmware. The Arduino Wiring libraries that Spark is designed to be compatible with do not have these kinds of ideas baked in; they present a primitive interface that lacks methods for dealing with asynchronous events like this.

DHCP lease expiration is just one of many possible failure modes that could cause a socket on the TI part to close unexpectedly. You have to program defensively if you want to recover from errors.

Thanks for the replies @psb777 and @bko !

@bko you are right, the code is fragile. But I don´t want to put so much work into the Spark if the CC3000 is that buggy. I need UDP and TCP to work; otherwise I have to look at a different solution :frowning: .

Thanks for pointing out the DHCP “problem”. I hadn’t thought of it, but at the moment I don´t have to handle it, since I have assigned a fixed IP for the Spark on my router. Is there any way to detect that the socket has been closed on the CC3000 side? I need to keep one UDP socket always open to detect broadcasts, so I can´t close it.

Could some failure when I answer the broadcast with Udp.write() be what closes the socket?
I think I have to look at the Udp.write() code to see the return codes “documented”.

I did some tests during the night. I ran the same code as above, but also added a TCP socket to it. Now, when I try to communicate with the Spark in the morning, UDP has stopped working but TCP works fine. I open both sockets at the same time, so both should be equally fragile when it comes to DHCP changes. Is TCP more stable than UDP on the CC3000?

if (client.connected()) {
    // Reply "OK\n" for every byte received from the client
    while (client.available()) {
        client.read();      // Consume the byte -- without this the loop never exits
        mRxBuffer[0] = 'O';
        mRxBuffer[1] = 'K';
        mRxBuffer[2] = 10;  // '\n'
        server.write(mRxBuffer, 3);
    }
} else {
    // If no client is connected yet, check for a new connection
    client = server.available();
}

The repeated effort spent noting that the return codes are not documented on the Spark (or on the Arduino) is now greater than the effort it would take to document them. That could easily be done, using the doc editing tools provided, by someone who knows what the return codes are, here for the Spark (though not for the Arduino).

I don’t find the Arduino comparison as apt as you seem to. It seems to me that the Arduino is now end of life, with Moore’s law and the IoT making single-threaded, single-process microcontrollers redundant; see the Spark Photon. As an aside, I am sure the next generation, the Photon++ (or the Raspberry Pi--), will be running full-blown Linux, either Android or the distribution of your choice.