Broadcast Data to Multiple Cores

As I’m wrapping up my first project with the Core, I’m wondering if it’s possible to broadcast data to multiple cores simultaneously, such that each core responds at (nearly) the same time. My project is a proximity lighting system with one core serving as the “switch” and two other cores acting as lighting installations with NeoPixels. I currently have a webpage that listens for SSEs from the switch core and POSTs to the NeoPixel cores. Each NeoPixel core is POSTed to sequentially and, as such, there is approximately a 0.5 second delay between the pixels toggling.

I’m wondering if it’s possible, through either a webpage or perhaps Node.js or Python, to broadcast to the NeoPixel cores so that they respond simultaneously?

Thank you for the help!

I think it’s a feature-in-progress for now.

How about the NeoPixel cores checking a website instead? Maybe the switch core could update a variable and the cores could all check the same variable periodically. :smiley:

Thanks for the reply, @kennethlimcp. That sounds like a great idea! So with that, how would I go about checking a variable on a website? I’m familiar with GETting variables from a core via terminal or a site, but not the other way around.

Thanks for the help.

The best example I can find is here:

It uses TCPClient to reach a URL.

Just be careful when you start testing this! Sometimes bad code will cause the core to hang (red flashes).

The URL you will be accessing looks like this :smile:

https://api.spark.io/v1/devices/core_id/variable_name?access_token=xxx

1 Like

Your problem is a classic use case for broadcast UDP. You broadcast one packet to the entire subnet. Receipt of a UDP datagram isn’t guaranteed, so you rebroadcast the current status every so many seconds. It’s only one packet, with I guess only “ON” or “OFF” in the data, so even including the packet header the total network traffic is negligible. On the very odd occasion a packet is not received, the temporarily deaf Core/NeoPixel will change state on the next broadcast. Note there is no need for acknowledgements or anything: it’s all one-way traffic, a very short packet every time the status changes and, for resilience, every 5 (or 1 or 10 or 30) seconds thereafter.

Implementation is trivially simple, much quicker than going through a web page, and UDP programming is easier than TCP for this type of problem domain. [TCP requires a connection to be established and does not allow for broadcasts.] A limitation of UDP broadcasts is that (usually) all the communicating devices must be on the same LAN. Oh, and owing to a bug in Spark UDP, I suggest you add a trailing newline to the ON or OFF so you can work out where the packet ends.

UDP might be fast enough and reliable enough that you don’t notice any skew between them, but if not you might try…

to implement an RTC on all the NeoPixel Cores, sync’d to an NTP server, and send your update messages with a near-future timestamp. You can broadcast round-robin however slowly you want, and the Cores will all update at exactly the same time.

2 Likes

I like that. I run the danger of you thinking I only have a hammer and everything looks like a nail, but the latency and simplicity of UDP will be the lowest of anything, and the programming will still be easier than round-robin. Combine your idea with a UDP broadcast: instead of “Y” or “N” you send “Y1427\n”, which means turn on the next time it is 14.27s into a minute, or “N2123\n”. Even easier would be to assume that one must turn ON or OFF at the next 1/10th-second clock tick. Then once again one need only transmit a one-char “Y” or “N” packet; not even the \n will be required for one-char datagrams.

1 Like

Thank you all for the replies! I’m looking forward to experimenting. In regard to UDP, can someone point me in the right direction for documentation/resources on how to set it up?

Thanks.

https://github.com/spark/docs/blob/master/docs/firmware.md is a good place to start, I suppose, but the docs are not great. In fairness I hadn’t tried to write a broadcast packet from the Spark yet! [I have now; it works fine.] My app has *nix boxes broadcasting and the Spark reading. Code for reading UDP on the Spark is complicated by a bug which means the UDP guarantee of one or zero complete datagrams per read does not hold. You need to read the buffer as if it were a TCP stream, looking for a packet delimiter which your sending app must append to the packets. Sending is trivially simple:

  // IPAddress remote_IP_addr( 192, 168, 7, 65); // one remote machine

  IPAddress remote_IP_addr( 255, 255, 255, 255); // broadcast to the entire LAN
  unsigned int remote_port = 2222;

  char buf[] = "ON\n"; // trailing newline marks the end of the packet

  UDP udp;
  udp.begin( 2222); // open a local port before sending

  udp.beginPacket( remote_IP_addr, remote_port);
  udp.write( buf);
  udp.endPacket();

Reading should be as simple as this:

  UDP udp;
  udp.begin( 2222); // in setup() perhaps

  char buf[32];
  int packetSize;

  do {
    packetSize = udp.parsePacket();
  } while( 0 == packetSize);

  packetSize = min( sizeof(buf)-1, packetSize);
  udp.read( buf, packetSize); // any unread portion of the datagram is hereby discarded
  buf[packetSize] = '\0'; // assuming this is ASCII and you want it terminated

  udp.stop(); // when finished reading packets

but because of the bug I referred to, you need to do something like this:

  UDP udp;
  udp.begin( 2222); // not for every packet, but until another bug is fixed you need to stop and restart at least once a minute

  char buf[32];
  char c;
  char delim = '\n';
  char *p;
  int packetSize, i;

  do {
    packetSize = udp.parsePacket();
  } while( 0 == packetSize);
  packetSize = min( sizeof(buf)-1, packetSize);

  p = buf;
  for( i = 0; i < packetSize; i++) {
    udp.read( &c, 1); // the remainder of the packet is not discarded
    if( c == delim) break;
    *p++ = c;
  }
  *p = '\0'; // if it's ASCII and you want it terminated

  udp.stop(); // at least every 60 secs

2 Likes

Totally agree, UDP on a local network will probably be the fastest way to broadcast an event locally, but we’re also working on a subscribe complement to publish :slight_smile:

Thanks,
David

3 Likes

Excellent! Thank you, again. I’ll try these suggestions out once I get off work and report back with my (hopeful) success.