Mangled data when publishing an event with a longer name from within an event handler

I’ve been running into mangled data when publishing events from within an event handler. (My original idea was to do a sort of printf-style debugging/error reporting over events instead of the Serial interface, at least for now.) As far as I can tell, this only happens when the name of the event being published is longer than the name of the event that was received. For example, with the following program:

// Echo the data back under a LONGER event name (16 chars vs. "signal1"'s 7)
void signal1(const char *event, const char *data) {
  Spark.publish("received_signal1", data, 60, PRIVATE);
}

// Echo the data back under a SHORTER event name (6 chars vs. "signal2"'s 7)
void signal2(const char *event, const char *data) {
  Spark.publish("rec_s2", data, 60, PRIVATE);
}

void setup() {
  Spark.subscribe("signal1", signal1, MY_DEVICES);
  Spark.subscribe("signal2", signal2, MY_DEVICES);
}

Here’s an example interaction where I sent the “signal2” event first and then the “signal1” event:

{"name":"signal2","data":"foo","ttl":"60","published_at":"2015-08-14T15:55:59.562Z","coreid":"001"}
{"name":"rec_s2","data":"foo","ttl":"60","published_at":"2015-08-14T15:55:59.579Z","coreid":"53ff..."}
{"name":"signal1","data":"foo","ttl":"60","published_at":"2015-08-14T15:56:03.115Z","coreid":"001"}
{"name":"received_signal1","data":"d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_signal1�d_sig","ttl":"60","published_at":"2015-08-14T15:56:03.138Z","coreid":"53ff..."}

It looks like publishing a longer event name overruns some internal buffer: the fragment repeated in the bad output, “d_signal1”, is “received_signal1” with its first seven characters missing, and seven is exactly the length of the original event name “signal1”. I’ve tried a factory reset and also tried another core just in case, but I’m still seeing this behavior. Any help would be greatly appreciated.
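
To make the hypothesis concrete, here is a purely illustrative sketch (I have no idea what the firmware’s actual internal layout is) of how a name and data packed back-to-back in one shared buffer would produce exactly the “d_signal1” fragment seen above:

#include <string.h>

char internalBuf[32]; // hypothetical shared buffer: [ name | data ]

void demo() {
  // Incoming event: name "signal1" (7 bytes), data immediately after it.
  memcpy(internalBuf, "signal1", 7);
  char *data = internalBuf + 7;   // the pointer a handler would receive
  memcpy(data, "foo", 4);         // 4 bytes copies the trailing zero

  // Publishing writes the new, longer name into the same buffer first...
  memcpy(internalBuf, "received_signal1", 17);
  // ...so `data` now reads "d_signal1": the exact fragment that repeats
  // in the mangled output above.
}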

Hi @sstrickl

I think you are running into a design decision in the Spark.publish/subscribe methods: they are not reentrant. When you get a subscribe event and the system hands you pointers to the char arrays representing the event name and data, it is really handing you pointers into its own internal data structure used for cloud communication. When you then do a publish, it overwrites those same memory locations, causing the problems you see. Can you try copying the data first with memcpy, like this:

char pubArray[64]; // our own buffer, out of reach of the cloud internals

void signal1(const char *event, const char *data) {
  memcpy(pubArray, data, strlen(data)+1); // +1 copies the trailing zero
  Spark.publish("received_signal1", pubArray, 60, PRIVATE);
}

void signal2(const char *event, const char *data) {
  memcpy(pubArray, data, strlen(data)+1);
  Spark.publish("rec_s2", pubArray, 60, PRIVATE);
}

void setup() {
  Spark.subscribe("signal1", signal1, MY_DEVICES);
  Spark.subscribe("signal2", signal2, MY_DEVICES);
}
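
One caveat about pubArray: memcpy with strlen(data)+1 trusts that the incoming data fits in 64 bytes. I believe the cloud caps publish data at 63 characters, so that should hold, but if you want to be defensive a bounded copy costs nothing. (publishCopy is just an illustrative helper name; it assumes the same pubArray declared above.)

// Defensive variant: never copy more than the buffer can hold.
void publishCopy(const char *name, const char *data) {
  strncpy(pubArray, data, sizeof(pubArray) - 1); // bounded copy
  pubArray[sizeof(pubArray) - 1] = '\0';         // strncpy may not terminate
  Spark.publish(name, pubArray, 60, PRIVATE);
}

void signal1(const char *event, const char *data) {
  publishCopy("received_signal1", data);
}

void signal2(const char *event, const char *data) {
  publishCopy("rec_s2", data);
}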

Thanks, @bko! That confirms my hypothesis about internal buffers, so I expect the copy will do the trick. I’ll try it when I next get a chance and report back.
