Broadcast Data to Multiple Cores

https://github.com/spark/docs/blob/master/docs/firmware.md is a good place to start, I suppose, but the docs are not great. In fairness I hadn't tried to write a broadcast packet from the Spark when I first wrote this! [I have now, and it works fine.] My app has *ix boxes broadcasting and the Spark reading (see the *ix sender sketch at the end of this post). Code for reading UDP on the Spark is complicated by a bug which means the usual UDP guarantee of zero or one complete datagrams per read does not hold. You need to read the buffer as if it were a TCP stream, looking for a packet delimiter which your sending app must append to each packet. Sending is trivially simple:

  // IPAddress remote_IP_addr( 192, 168, 7, 65); // one remote machine

  IPAddress remote_IP_addr( 255, 255, 255, 255); // to broadcast to the entire LAN
  unsigned int remote_port = 2222;

  UDP udp;
  char buf[] = "some payload\n"; // note the trailing delimiter, for the reader below

  udp.begin( 2390); // a local port must be bound before sending; 2390 is arbitrary
  udp.beginPacket( remote_IP_addr, remote_port);
  udp.write( buf);
  udp.endPacket();
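
Pieced together into a complete sketch, a minimal broadcasting sender looks something like this (the once-a-second cadence, the local port and the millis() payload are my own choices, not from the docs):

  #include "application.h"

  UDP udp;
  IPAddress broadcast_IP( 255, 255, 255, 255);
  unsigned int remote_port = 2222;

  void setup() {
    udp.begin( 2390); // bind any free local port before sending
  }

  void loop() {
    char buf[32];
    // '\n' is the delimiter the reading code below scans for
    int n = snprintf( buf, sizeof(buf), "%lu\n", (unsigned long) millis());
    udp.beginPacket( broadcast_IP, remote_port);
    udp.write( (const uint8_t *) buf, n);
    udp.endPacket();
    delay( 1000);
  }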

Reading should be as simple as this:

  UDP udp;
  int packetSize;
  char buf[64]; // sized to suit your packets

  udp.begin( 2222); // in setup() perhaps

  do {
    packetSize = udp.parsePacket();
  } while( 0 == packetSize);

  packetSize = min( sizeof(buf) - 1, (size_t) packetSize);
  udp.read( buf, packetSize); // any unread portion of the datagram is hereby discarded
  buf[packetSize] = '\0'; // assuming this is ASCII and you want it terminated

  udp.stop(); // when finished reading packets
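
As an aside, the busy-wait on parsePacket() blocks everything else in the sketch; in practice you would more likely poll from loop(). A sketch of that pattern (the buffer size is my assumption):

  UDP udp;
  char buf[64];

  void setup() {
    udp.begin( 2222);
  }

  void loop() {
    int packetSize = udp.parsePacket();
    if( packetSize > 0) {
      packetSize = min( sizeof(buf) - 1, (size_t) packetSize);
      udp.read( buf, packetSize);
      buf[packetSize] = '\0';
      // ... do something with buf ...
    }
  }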

But because of the bug I mentioned, you actually need to do something like this:

  UDP udp;
  udp.begin( 2222); // not for every packet, but until another bug is fixed you need to begin and stop at least once a minute

  char buf[64]; // sized to suit your packets
  int packetSize;
  int i;
  char c;
  char delim = '\n';
  char *p;

  do {
    packetSize = udp.parsePacket();
  } while( 0 == packetSize);
  packetSize = min( sizeof(buf) - 1, (size_t) packetSize);

  p = buf;
  for( i = 0; i < packetSize; i++) {
    udp.read( &c, 1); // the remainder of the "packet" is not discarded
    if( c == delim) {
      break; // found the end of one sender-delimited datagram
    }
    *p++ = c;
  }
  *p = '\0'; // if it's ASCII and you want it terminated (whether or not the delimiter was seen)

  udp.stop(); // at least every 60 secs
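
For completeness, here is a sketch of the *ix sending side in C. The port and the '\n' delimiter match the Spark examples above; the rest is standard BSD sockets. The one thing to remember is that SO_BROADCAST must be enabled before you can send to 255.255.255.255:

  #include <arpa/inet.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main( void) {
    int sock = socket( AF_INET, SOCK_DGRAM, 0);
    if( sock < 0) { perror( "socket"); return 1; }

    int on = 1; // ask for permission to send broadcast datagrams
    if( setsockopt( sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on)) < 0) {
      perror( "setsockopt");
      return 1;
    }

    struct sockaddr_in dest;
    memset( &dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons( 2222);                    // the port the Spark listens on
    dest.sin_addr.s_addr = htonl( INADDR_BROADCAST); // 255.255.255.255

    // the trailing '\n' is the delimiter the Spark reader scans for
    const char *msg = "hello from *ix\n";
    if( sendto( sock, msg, strlen(msg), 0,
                (struct sockaddr *) &dest, sizeof(dest)) < 0) {
      perror( "sendto");
      return 1;
    }

    close( sock);
    return 0;
  }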