Sprintf() escape characters show up in output string?

I feel like this is a silly question, but I searched for a similar question but didn’t find it, so let’s see… By the way, I guess it doesn’t seem to hurt anything, after looking around at the examples. But I’m still curious as to what’s happening.

OK, so the following code embeds JSON within the JSON from Spark.publish() (modeled after the tutorial here):

sprintf(publishString, "{\"Timestamp\":\"%i\",\"delayTime\":\"%i\"}", lastTime, delayTime);
Spark.publish("button-pressed", publishString);

When I look at the output in my terminal using curl, I see all the escape characters (backslashes) like so (copied right out of the terminal window):

event: button-pressed
data: {"data":"{\"Timestamp\":\"1410055062\",\"delayTime\":\"20\"}","ttl":"60","published_at":"2014-09-07T01:57:42.283Z","coreid":"abc123"}

I thought that the backslashes don’t really show up; they just serve to let sprintf() know that the quotation marks are meant to be part of the string and not the end of it. But they are all there. And I guess they take up space in the string, too, right? Space is kind of at a premium there, so it seems like not the best thing…

Hi @squishyrobot

They don’t take up two characters in the published string - `\"` is just the standard C source syntax for putting a double quote inside a char array. The compiled string contains only the quote character itself.

Maybe @Dave could comment on how the cloud handles them.

If I may jump in here - it looks to me as if “data” is interpreted as a string and not json. This is probably the right thing to do unless there is an encoding flag or similar on the publish() call.

The “data” value is simply a carrier for whatever else you want to add to the event - it could be html, xml, csv, as well as json, as posted here. In other words the cloud doesn’t really care what you put there - it’s the event name that’s important to the cloud, the data is used by your own applications.

When you’re reading events from the cloud API and want to get your data string as json, you’ll need to first read the json event data as normal, fetch out the data string, and then json-decode that data string into another json object.

I guess you’re saying that it’s interpreted as a C string by the Spark Cloud or Spark.publish()? So the cloud or Spark.publish() puts those escape characters back in there?

In any case, it doesn’t seem to hurt anything. I was really just curious because this seemed to be challenging how I thought sprintf() worked.

Yes, the cloud is putting them back in as part of writing out the json - it’s part of the json spec that quotes in a literal string have to be escaped. Sprintf() is working as you expect - if you looked at what was being sent over the wire, there would be no escape characters, just the quotes.

Quotes at the start and end and escape characters only come into play when you turn the string from its physical form back into source form, such as when printing out json.

It’s hard to be clear about these things, but I hope that was at least clear enough! :slight_smile:


That explains it perfectly right there. I get it now and it all comes together with what @bko said earlier. Thanks.