UDP problems with #include "spark_disable_cloud.h"

Hello,

I’m trying to work with UDP broadcasts and I’d like to #include "spark_disable_cloud.h" to prevent attempts to connect to the cloud (eventually the core will be deployed on a network without internet access). The program below works fine until I include spark_disable_cloud.h, after which my computer no longer picks up any UDP traffic from the core.

I can put Spark.disconnect() in setup() instead of including the header, and it seems to work, but I’m worried that some attempt will be made to contact the Spark server before setup() is called which might have negative effects if there’s no internet.

Can anyone replicate my problem, or am I doing something else wrong? I’m building locally, libraries downloaded via Git on 5/16/14

(Apologies for the weird formatting, I can’t seem to figure out how to make a block of code in this forum)

```cpp
#include "application.h"
#include "spark_disable_cloud.h" // Comment this out, program works

UDP udp;
char discard[64];

void setup()
{
  udp.begin(60000);
}

void loop()
{
  static int i = 0;

  if (++i >= 100)
  {
    // Send UDP broadcast on port 60000
    udp.beginPacket(IPAddress(255, 255, 255, 255), 60000);
    udp.write("UDP Broadcast Test");
    udp.endPacket();
    i = 0;

    // Read and dump my own broadcast bytes
    int32_t retries = 0;
    int32_t bytesrecv = udp.parsePacket();
    while (bytesrecv == 0 && retries < 1000)
    {
      bytesrecv = udp.parsePacket();
      retries++;
    }

    if (bytesrecv > 0)
    {
      udp.read(discard, bytesrecv);
    }
  }
}
```

Thanks!

Hi @dpursell

The fix for formatting your code here in the forum is to use the special markup ```cpp before the code and ``` after the code. That character is not the normal single quote but the “other” one, the grave accent (backtick). I did a quick fix above to make it readable.

If you wait until setup() to call Spark.disconnect() the core will definitely connect to the cloud before setup, so that does not seem to be what you want.

The only thing that really jumped out at me from your code is that you read bytesrecv bytes (as returned by udp.parsePacket()) into a 64-byte buffer, but you never check that bytesrecv is less than 64, so you can overrun that buffer.

I assume you are watching the traffic on a separate computer since your code doesn’t have any debug output or anything. When you include the cloud disable, does it produce a few packets and then stop? Or is there never any output?

One thing that is different with the cloud turned off is that going around loop() is faster. I am not sure if that figures in or not here.

Hello @bko,

Thanks for the quick response!

You’re right, I should be checking the response length, that was just a mindless copy-paste. I’ve corrected it and also added a print to Serial1 for debugging:

```cpp
// in setup():
  Serial1.begin(115200);

// in loop(), replacing "if (bytesrecv > 0) { ... }":
    while (bytesrecv > 0)
    {
      // Read bytes
      int bytesToRead = (bytesrecv > 63 ? 63 : bytesrecv);
      udp.read(discard, bytesToRead);
      bytesrecv -= bytesToRead;

      // Null-terminate to prevent print overflow, just in case
      discard[63] = '\0';
      Serial1.println(discard);
    }
```

However, this does not seem to be the root of the issue; I’m getting the same behavior.

You are correct, I’m using a Python script to monitor UDP port 60000 from my computer, and once the disable file is included I do not receive any UDP packets at all. Additionally, now that I’m printing from Serial1, I see the same behavior on the SparkCore itself: when the file is included, nothing is ever printed, and when it’s commented out I get “UDP Broadcast Test” 2-3 times a second.

Regarding the loop speedup, you weren’t kidding: I had to change my delay counter i from 100 to 100000 to get roughly the same send timing! However, that also doesn’t seem to be the issue, as I still don’t see anything being sent despite slowing the send rate down to match.
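Incidentally, counting loop() iterations ties the send rate to how fast loop() spins, which is exactly why disabling the cloud changed the timing so much. An elapsed-time check avoids that. Here is a minimal sketch of the idea in plain C++ (a hypothetical helper, not part of the Spark firmware; on the core, `now` would come from millis()):

```cpp
#include <cstdint>

// True when at least intervalMs has elapsed since *lastMs; advances *lastMs.
// Unsigned subtraction makes the comparison safe across millis() wraparound.
bool shouldFire(uint32_t now, uint32_t* lastMs, uint32_t intervalMs) {
    if (now - *lastMs >= intervalMs) {
        *lastMs = now;
        return true;
    }
    return false;
}
```

In loop() this replaces the static counter: `static uint32_t last = 0; if (shouldFire(millis(), &last, 350)) { /* send */ }`, and the interval stays the same whether the cloud is on or off.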

I can clean up and attach my Python file if that would make it easier for anyone to verify this behavior.

Thanks

OK @dpursell I will try to figure out what is going on tonight.

As an aside, I would strongly recommend Wireshark for this type of debugging; it’s a great tool.

This doesn’t address your immediate issue, but while @bko Brian is correct that on Spark UDP you need to check you are not about to read more than will fit in your buffer, there would ordinarily be no need to do that in your code as originally written: your receive buffer is the same size as your sent packets, and your original code would work in any other UDP implementation I have ever seen.

You need to be aware (this remains undocumented) that the standard UDP semantics do not apply here, in Spark UDP: that one UDP read returns exactly zero or one sent packets, that packet boundaries are always respected, that unread bytes from a packet are discarded if not read on the original read for that packet, that you can reliably determine the sender IP, and so on. Spark’s so-called UDP is a significantly different beast from the one described in the standard networking references. I have managed to use it, but packet boundaries are something I have to manage myself, as if I were writing a TCP program. Your code will not work as you expect because packet boundaries will not be respected: if the sending program sends two short UDP packets, Spark UDP will read them as one, and if it sends one packet exceeding an intermediate buffer limit, the packet will arrive, according to Spark UDP, but incomplete, whereas UDP is supposed to deliver a packet intact or not at all.

Hi @dpursell

I am definitely seeing the same behavior but even with the cloud turned on, it eventually fails for me and the core reboots.

So I started testing with the cloud on and I changed your program to print a count of bytes sent versus bytes received and the rx side is definitely not keeping up. I see it sending 18 bytes and receiving 18 bytes for a while (both counts go up by 18) but then at some point rx falls behind and then the core crashes and reboots without any flashing red LED.

With the i>=500 I see around 4 seconds between packets (cloud on) and it tx/rx’ed about 3500 bytes before failing, but it did fail even at that slow rate.

There is a lot of other broadcast UDP traffic on my small network. Dropbox seems to broadcast looking for other Dropbox connected computers, for instance. None of it goes to your port, but I am not sure what is going on.

I will try to keep digging, but maybe @zachary or @satishgn could take a quick look too, in case I am missing something obvious. Here are my changes to your sketch:

```cpp
#include "application.h"
//#include "spark_disable_cloud.h" // Comment this out, program works for a while

UDP udp;
char discard[64];

#define LOOPCOUNT 500

int txCount = 0;
int rxCount = 0;

void setup()
{
  udp.begin(60000);
  Serial1.begin(9600);
}

void loop()
{
  static int i = 0;

  if (++i >= LOOPCOUNT)
  {
    // Send UDP broadcast on port 60000
    udp.beginPacket(IPAddress(255, 255, 255, 255), 60000);
    udp.write("UDP Broadcast Test");
    udp.endPacket();
    i = 0;
    txCount += 18;

    // Read and dump my own broadcast bytes
    int32_t bytesrecv = udp.parsePacket();
    while (bytesrecv > 0)
    {
      // Read at most 63 bytes at a time
      int bytesToRead = (bytesrecv > 63 ? 63 : bytesrecv);
      udp.read(discard, bytesToRead);
      bytesrecv -= bytesToRead;
      rxCount += bytesToRead;
    }
  }

  if (i == LOOPCOUNT-1) {
    Serial1.print(txCount);
    Serial1.print(":");
    Serial1.print(rxCount);
    Serial1.print(" ");
  }
}
```

Hi @bko and @psb777, thanks for the input and help, I appreciate it.

@bko, I installed your code and ran it a few times, and I’m also seeing some odd behavior, though I don’t think my core ever crashes and reboots. In general what I see is: the counts increase at a steady rate for a while and Wireshark sees all the packets; then after anywhere from 700 to 7000 bytes, the serial output and wireless packets stop. At this point the RGB LED generally flashes cyan and/or green rapidly for a few seconds, and once it starts breathing cyan again the serial TX count continues to increase, but the RX count stays where it is and Wireshark no longer sees any packets from the core.

I’ll keep poking around and report anything else that might be helpful, and hopefully someone with more core library knowledge than I will see a pattern emerge :smile:

Thanks!

Edit: After adding the parsePacket() loop back in, my core now seems able to recover from the momentary lapses. Serial printing will still stop for a few seconds, and when it resumes one or two TX packets will have been lost, but after that the SparkCore returns to broadcasting UDP packets successfully (4 out of 4 times I’ve tried it so far). This doesn’t help my original problem with spark_disable_cloud.h, but at least UDP seems somewhat more reliable now.

```cpp
// in loop(), before "while (bytesrecv > 0) { ... }"
int32_t retries = 0;
int32_t bytesrecv = udp.parsePacket();
while (bytesrecv == 0 && retries < 1000)
{
  bytesrecv = udp.parsePacket();
  retries++;
}
```

Hi @dpursell

This generally means you had a crash and you are now running the back-up firmware (usually the Tinker app if you haven’t gone out of your way to change it). This is a good safety feature to avoid any possibility of not being able to talk to your core.

Maybe there is something with the one-to-one nature of the code that gets messed up. I will switch back to parsePacket() in the while loop and try to look at the cloud.

When I work with the cloud off, I like to put an if-statement in the loop() function that looks at a spare input pin and if it is HIGH, I turn the cloud on via Spark.connect() and delay a bit. This lets me re-flash over-the-air but still try stuff with the cloud off at startup.

I’m still working on this, haven’t solved anything yet, but thought I would post my findings in case anyone else is having a similar problem:

  1. When including spark_disable_cloud.h, do not call UDP::begin() from setup(). If you do, the UDP object will be unable to send or receive packets. This post by @zach_l shows a good way to wait until you have a valid IP address before trying to start the UDP object. This only seems to be an issue when including spark_disable_cloud.h for some reason.

  2. I think that there is a bug somewhere in the UDP library or the CC3000 when you attempt to send UDP packets to IPs that don’t exist on the network. In my experience, doing so will either cause a hardfault or will slow the main loop down to the point where it’s unusable (like 1 call to loop() every 30s). Since the broadcast IP (255.255.255.255) doesn’t belong to any device, sending to it will trigger this condition. Again, this problem only shows itself when including spark_disable_cloud.h.

So right now my best solution is to a) wait until I have an IP address before starting UDP, and b) broadcast from my computer and have the SparkCore respond directly, rather than broadcasting from the SparkCore.
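For (b), one workaround that may be worth trying (untested against this particular bug) is the subnet-directed broadcast address, e.g. 192.168.1.255, instead of the limited broadcast 255.255.255.255. Computing it just means setting all the host bits; a sketch in plain C++ (hypothetical helpers, not part of the Spark firmware):

```cpp
#include <cstdint>

// Pack four octets into a 32-bit address.
uint32_t packIP(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) | ((uint32_t)c << 8) | d;
}

// Subnet-directed broadcast address: local IP with every host bit set to 1.
// e.g. 192.168.1.42 with mask 255.255.255.0 gives 192.168.1.255.
uint32_t directedBroadcast(uint32_t ip, uint32_t mask) {
    return ip | ~mask;
}
```

On the core the result’s four octets would go into the IPAddress passed to udp.beginPacket().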

When you have the cloud on at startup, the network connection and the cloud are always fully up before you enter setup(); otherwise your core will flash and retry until the cloud connects.

With the cloud off at startup, I wonder if the network is fully up by the time you get to setup(). I think it is better practice to have udp.begin() in loop() with the appropriate ifs to restart the connection anyway.

Back when we were all trying to help debug the CC3000 problem known as CFOD, I wrote UDP broadcast code that hit every address in a range I knew was empty in an attempt to induce the CFOD problem. Eventually we figured out that doing an ARP flood was a better trigger, but this worked sometimes.

I do wonder if you are seeing what a CFOD looks like when you don’t have the cloud turned on. The TI patch for the CC3000 is available, and Spark is working on a safe update process, but you can update it yourself if you want to and haven’t already.

```cpp
for (int i = STARTRANGE; i < ENDRANGE; i++) {
    tryAddr = IPAddress(localAddr[0], localAddr[1], localAddr[2], i);
    UDPClient.beginPacket(tryAddr, localPort);
    UDPClient.write(packetBuffer, BUFFERSIZE);
    UDPClient.endPacket();
    UDPClient.stop();
    delay(1);
}
```

Just letting everyone on this thread know I’m seeing this, and it’s super helpful. Look forward to nailing this down. Thank you all for your help!

FYI, this repo has some code which both sends the originating IP address in the packet (i.e., don’t trust the recvfrom address) and handles the offline/online loop:

It’s becoming somewhat specific to our project, but hopefully it’s helpful to see how we are solving this. We’ve gotten things to be fairly stable with this system.
