UDP + red LED restart loop

Hi everyone,

I’ve been working on getting the core to receive a message from an openFrameworks app via UDP and send one back. I got to the point where it sometimes works perfectly, but sometimes the core goes into a blinking red LED state (click for video). Even after I reset the core via the reset button, it goes right back into that red LED mode until I unplug the USB. Here is the code used:

#include "application.h"
#include "spark_disable_cloud.h"



class timer {

private:
  int lastFireTime;
  int  millisTimer;
  bool bFiredThisFrame;

public:
  void setup( int millisToFire ){
    lastFireTime = 0;
    bFiredThisFrame = false;
    millisTimer = millisToFire;
  }

  void update( int currentTimeMillis){
    int elapsedTime = currentTimeMillis - lastFireTime;
    bFiredThisFrame = false;
    if (elapsedTime > millisTimer){
      bFiredThisFrame = true;
      lastFireTime = currentTimeMillis;
    }
  }

  bool bTimerFired(){
    return bFiredThisFrame;
  }

};

//packet structs
typedef struct {
  float time;
  int frameNumber;    
} packet;

// local port to listen on
unsigned int localPort = 8888;
unsigned int outgoingPort = 7777;
int led = D0;

const int PACKET_SIZE = 12;
byte  packetBuffer[PACKET_SIZE]; 

// UDP instances to let us send and receive packets over UDP
UDP Udp, UdpOut;

timer t, t2;

void setup()
{
  pinMode(led, OUTPUT);
  // start the UDP
  Udp.begin(localPort);
  UdpOut.begin(outgoingPort);

  Serial.begin(9600);
  
  
  Serial.println(Network.localIP());
  Serial.println(Network.subnetMask());
  Serial.println(Network.gatewayIP());
  Serial.println(Network.SSID());

  t.setup(100);
  t2.setup(100);
}

void loop()
{
  t.update(millis());

  if (t.bTimerFired()){

    if (int nbytes = Udp.parsePacket()) {
      
      if (nbytes != sizeof(packet)){
        Serial.println("bad packet ???");
        Udp.flush();

      } else {

        memset(packetBuffer, 0, sizeof(packet));

        Udp.read(packetBuffer,nbytes);

        packet p;
        memset(&p, 0, sizeof(packet));
        memcpy(&p, packetBuffer, sizeof(packet));

        Serial.print(Udp.remoteIP());
        Serial.print(" : time = ");
        Serial.print(p.time);
        Serial.print(" : nFrame = ");
        Serial.println(p.frameNumber);

        Udp.flush();

        UdpOut.beginPacket(Udp.remoteIP(), outgoingPort);

        char buffer [50];
        int n=sprintf (buffer, "%lu", millis());

        UdpOut.write("I've been running for " );
        UdpOut.write(buffer);
        UdpOut.write(" milliseconds");
        UdpOut.endPacket();

        UdpOut.stop();
        delay(1);
        UdpOut.begin(outgoingPort);
      }

    }
  }

  t2.update(millis());
  if (t2.bTimerFired()){
    Serial.println("restarting!");
    Udp.stop();
    delay(1);
    Udp.begin(localPort);
    Udp.flush();

  }

  delay(5);

}

NOTE: I’m working on this project with the sparky command-line utility tool.

I looked up the meanings of the flashing red LED, but they mostly refer to cloud connections (which I’m obviously not using in this case). Any suggestions on what’s going on?
Thanks!

From your video, it seems that you’re getting a single red flash, which indicates a hard fault (the red light will flash morse code S-O-S before and after, but in the middle it looks to me like only a single flash).

Hard faults are usually a real pain to diagnose, unfortunately. When I’ve worked with ARM Cortex-M chips before, I’ve installed a custom hard fault handler, but I haven’t tried that on the SparkCore, so I’m not sure how much work it would take to port. I’m guessing the SOS is already called from a hard fault handler, so maybe you can hijack that. The information you can glean from the registers after a hard fault is very limited, though, and not always helpful.

I don’t see any buffer overflows in your code at first glance; your sizes all look to be in range. It’s not very sleek, but maybe try some print debugging: print a different character at each major step in your code, and when it crashes, check which character was printed last to get a feel for where the code is failing. I hate debugging that way, but it may be the fastest path to the breaking point in this case.
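For example, something like this around the UDP handling in loop(), reusing the globals from the posted sketch (the letters are arbitrary markers):

void loop()
{
  Serial.print("A");                     // entered loop()
  t.update(millis());

  if (t.bTimerFired()){
    Serial.print("B");                   // timer fired, about to poll UDP
    if (int nbytes = Udp.parsePacket()) {
      Serial.print("C");                 // got a packet, reading and replying
      // ... existing receive + reply code ...
      Serial.print("D");                 // made it past the reply
    }
  }

  delay(5);
}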

Not super helpful, sorry. I’ll try to take a more in-depth look tomorrow, especially since I’m intrigued that you got UDP to work while using spark_disable_cloud.h.


Hi @firmread,

I don’t think having two UDP object instances works right now, but you don’t need two: you can receive on one port and transmit on another port with a single UDP object.

The port number you pass into udp.begin(localPort) is your receive port and the one you pass into udp.beginPacket(ip, outgoingPort) is your transmit port.
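For example, a rough sketch of the single-object approach, reusing localPort, outgoingPort, PACKET_SIZE, and packetBuffer from your code (untested, so treat it as a sketch rather than a drop-in fix):

UDP Udp;   // one object handles both receive and transmit

void setup()
{
  Udp.begin(localPort);                      // receive on 8888
}

void loop()
{
  int nbytes = Udp.parsePacket();
  if (nbytes > 0 && nbytes <= PACKET_SIZE) {
    Udp.read(packetBuffer, nbytes);          // read the incoming bytes

    // reply to the sender; the destination port is chosen per packet here
    Udp.beginPacket(Udp.remoteIP(), outgoingPort);
    Udp.write("got it");
    Udp.endPacket();
  }
}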

The red LED flashes are documented here and don’t really have anything to do with the cloud; they are all general panic codes.


@dpursell One thing we’ve noticed (I’ve been working with @firmread) is that it works, but the UDP object is a bit flaky. We found some code on the forum that repeatedly restarts the UDP object to flush the buffer; it was originally meant to keep UDP from timing out, but it seems to help here. We were experimenting with timing (trying to make the restarts happen less often), hence the timer class and the obsessive number of UDP restarts, but it may be better to reset the UDP object only after it receives a message.

What’s odd to us is that we’ve seen the system go from stable to incredibly unstable: sometimes this code will run for minutes getting slammed with packets and answering back like a champ; other times it hits the red flash after only a few packets.

we’ll definitely keep posting if we make more progress. For our project we’re trying to develop a robust UDP / non-cloud solution.

@zach_l @firmread, I may have found something interesting. In testing your program, I’m finding that creating a UDP packet with an IP that doesn’t actually exist on the network is causing a hardfault. Try it yourselves and see if you can verify my findings:

// In loop(), replace this:
UdpOut.beginPacket(Udp.remoteIP(), outgoingPort);

// With this, hardcoded to something you know isn't present on the network:
UdpOut.beginPacket(IPAddress(192, 168, 1, 70), outgoingPort);

In particular, it seems to handle the first send OK, and then hardfaults right away on the second.

@psb777 has some good insights into the Spark’s UDP oddities and failings posted on these forums, and in particular, he has found that Spark’s UDP::remoteIP() function is not reliable. Although he mentions this in the context of Spark not following the rules of UDP packetization, I wouldn’t be too surprised to find that your call to UDP::remoteIP() is sometimes returning garbage, which you then pass to UDP::beginPacket(), which causes the core to hardfault.

I’m not 100% certain of my findings though, so verify them yourself if you can. If your tests agree with mine, you may want to implement some sort of IP verification. An easy first step might be to pass the server’s IP inside every UDP packet it sends, and then use that value instead of UDP::remoteIP().
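One possible shape for that, purely as a sketch (the senderIP field and its layout are an illustration rather than anything from the original code, and the openFrameworks side would have to write the same bytes in the same order):

// hypothetical packet layout that carries the sender's address
typedef struct {
  float time;
  int frameNumber;
  uint8_t senderIP[4];     // filled in by the OF app, e.g. {192, 168, 1, 50}
} packet;

// ...after copying the received bytes into p, build the reply from the
// carried address instead of Udp.remoteIP():
IPAddress replyAddr(p.senderIP[0], p.senderIP[1], p.senderIP[2], p.senderIP[3]);
UdpOut.beginPacket(replyAddr, outgoingPort);
UdpOut.write("reply built from the IP carried in the packet");
UdpOut.endPacket();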


So the issue with Udp.remoteIP() is that it might not correspond to the data you are reading if you do not fully keep up with the incoming bytes. If you have not yet read the data and the remote IP address when another packet arrives, the data is buffered but the remote IP address is not, and the new address overwrites the previous one.

I had not looked closely at the beginPacket(Udp.remoteIP()… call, and I definitely think you should test that address for all zeros before trying to use it.
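Something along these lines, as a rough sketch against the reply code above:

IPAddress remote = Udp.remoteIP();

// only build the reply if the reported address looks valid (not 0.0.0.0)
if (remote[0] != 0 || remote[1] != 0 || remote[2] != 0 || remote[3] != 0) {
  UdpOut.beginPacket(remote, outgoingPort);
  UdpOut.write("reply");
  UdpOut.endPacket();
} else {
  Serial.println("remoteIP() came back as 0.0.0.0, skipping the reply");
}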

Are you still trying to use two UDP objects? In principle that should work fine, but when I tried it, the object and/or the CC3000 driver did not work. Is using two UDP objects a requirement for what you are trying to do?


Thanks, that’s really interesting and very helpful. We’re already starting to put IP addresses in the packet, and this makes sense. UDP acting like a byte stream is sort of unfortunate, but in our case we’re going to make all the packets the same size, which should mitigate things.
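Roughly what we’re thinking, as a sketch only (this assumes, per the discussion above, that parsePacket() may report the bytes of several buffered fixed-size packets at once):

int nbytes = Udp.parsePacket();

// with every packet the same size, drain whole packet-sized chunks at a time
while (nbytes >= (int) sizeof(packet)) {
  packet p;
  Udp.read((unsigned char*) &p, sizeof(packet));
  nbytes -= sizeof(packet);
  // ... handle p ...
}

if (nbytes > 0) {
  Udp.flush();   // discard any trailing partial chunk
}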

One thing that’s helping stability is removing all the starting and stopping of UDP objects. I think that restart code is floating around the forum, but with the current firmware there is no timeout, and the main thing is just to start the UDP object once the device is online. Since we’ve disabled the cloud, setup() isn’t a great place to put code like starting UDP, since that code can run while the device doesn’t yet have an IP address (blinking green). I’ve taken to setting up the UDP objects the following way in loop():

void loop()
{
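    // bOnline is assumed to be a global bool declared elsewhere, initialized to false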

    //------------------------------------------------------------------------- are we just coming online
    
    bool bOnlinePrev = bOnline;
    
    IPAddress addr = Network.localIP();

    if  (addr[0] == 0 && addr[1] == 0 && addr[2] == 0 && addr[3] == 0){
        bOnline = false;
    } else {
        bOnline = true;
    }
        
    if (bOnline == true && bOnlinePrev == false){
        Udp.begin(localPort);
        UdpOut.begin(outgoingPort);
    }
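    // ... the rest of loop() (the existing receive / reply code) continues here ...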

This seems to help, and we just use flush() to deal with leftover data. It seems way more stable than restarting the UDP object, which was kind of voodoo.

@firmread has switched to one UDP object, which also seems to help.

thanks everyone for the tips and help. this community is great.

@dpursell @bko thanks for the super helpful feedback; I really love how active the community in this forum is! We (@zach_l and I) made good progress today with only one UDP object and now have pretty stable UDP send/receive between OF and the Spark Core (and yep, without the cloud!). It’s not really well documented yet (it will be), but feel free to take a look at our work in progress here: https://github.com/firmread/spark-OF-UDP :slight_smile:


Here's a software errata entry about the recvfrom() API and "incorrect remote address":

UDP transmission may fail after receiving UDP packet

Description

UDP data transmissions may not work after UDP data reception. Issue is related to incorrect remote IP
address and UDP port obtained by calling the recvfrom API.

Workaround

UDP transmission should not be based on address obtained from the recvfrom API.
