When Subscribing to Public Events by Name (Cloud Code) I get all events Beginning with that Name

When I open up an SSE Stream using the following address:

https://api.spark.io/v1/events/temp?access_token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

to get all public events named ‘temp’, I receive messages for all public events beginning with ‘temp’. Is this by design or a bug? Is anybody else seeing this?

Event:  temperature
Data:   25.589744
TTL:    60
PubAt:  10/6/2014 3:39:20 PM
CoreID: 55ff6c065075555340251787
Event:  temp
Data:   {"temp": 22.000000 }
TTL:    60
PubAt:  10/6/2014 3:39:21 PM
CoreID: 53ff6d065075535140381687
Event:  temperature
Data:   25.589744
TTL:    60
PubAt:  10/6/2014 3:39:21 PM
CoreID: 55ff6c065075555340251787
Done!
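The behavior in the log above looks like a prefix (“begins with”) match rather than an exact match. A minimal sketch of that matching rule (a hypothetical illustration, not the actual cloud implementation):

```python
# Hypothetical sketch of how the cloud appears to match event names:
# a subscription name acts as a prefix, not an exact match.

def event_matches(subscribed_name: str, published_name: str) -> bool:
    """Return True if a subscription to `subscribed_name` would
    receive an event published under `published_name`."""
    return published_name.startswith(subscribed_name)

# Subscribing to "temp" receives both "temp" and "temperature":
print(event_matches("temp", "temperature"))  # True
print(event_matches("temp", "temp"))         # True
print(event_matches("temp", "humidity"))     # False
```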

I will be experimenting with private events by name as I complete my code library for subscribing to events; I will be curious to see whether the same issue exists there.

Thanks!

Hi @cloris

This is by design. The public stream is available to anyone with an access token, but you can always mark your events as private. You can also select just the messages from one core by using /devices/<<hex core id>> as part of the URL.

Hey @bko

My suggestion would be not to wildcard after the event name; the public event stream seems wild enough already.

Can you confirm that if I select events for the devices I own (i.e., /devices/events/temp), I will only get exact matches?

Can you confirm that if I select events for a specific device I own (i.e., /devices/mydeviceid/events/temp), I will only get exact matches?

This might be an important point to document: event names like temp and temperature should not co-exist without an understanding of the consequences when subscribing by event name.

Thanks again.

Hi @cloris

I think the Spark team designed it this way on purpose. It is actually really flexible but might take some getting used to.

If you use the /devices/<<hex core id>> field in the URL, you will only get events from that specific core. You would need a different URL and listener for each core that you want to monitor. With the device constrained, you can further constrain by event name and then you will get a precise match as you say.
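Put concretely, the two URL forms described above could be assembled like this (a sketch; the host and path layout follow the public Spark REST endpoints mentioned in this thread):

```python
# Sketch of building the SSE stream URLs discussed above.
# Without a core id you get the public firehose (prefix-matched by
# event name); with a core id you only get that core's events.

BASE = "https://api.spark.io/v1"

def event_stream_url(access_token, core_id=None, event_name=None):
    parts = [BASE]
    if core_id:
        parts.append("devices/" + core_id)   # constrain to one core
    parts.append("events")
    if event_name:
        parts.append(event_name)             # further constrain by name
    return "/".join(parts) + "?access_token=" + access_token
```

You would need one such URL (and one listener) per core you want to monitor.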

Also the JSON that you get when you subscribe to the event includes the core id hex number, so you can filter on that in your web program if you want to.
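Since the JSON carries the core id, client-side filtering is straightforward. A minimal sketch (the `coreid` field name is an assumption based on the payload shown earlier; adjust it to whatever your SSE client actually delivers):

```python
import json

# Sketch: filter incoming SSE events by the core id embedded in the
# JSON payload of each `data:` line. WANTED_CORE is the hypothetical
# core we care about.

WANTED_CORE = "53ff6d065075535140381687"

def from_wanted_core(sse_data_json: str) -> bool:
    """sse_data_json is the JSON body of one SSE `data:` line."""
    payload = json.loads(sse_data_json)
    return payload.get("coreid") == WANTED_CORE

sample = '{"data": "22.0", "ttl": "60", "coreid": "53ff6d065075535140381687"}'
print(from_wanted_core(sample))  # True
```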

You can also use the PRIVATE keyword when publishing and then only the account that has claimed that core can see the events. The other rules still apply with devices in the URL.

@bko… makes perfect sense, thank you for the insights. If I had three temp sensors, I could name their corresponding events temp1, temp2, and temp3 (or just place the sensor id in the data). This would let me watch one event stream for ‘temp’ and then sort it out further up in the code. I would still suggest a documentation update noting that event names act as a ‘begins with’ match. Any ideas on how to submit that to the Spark folks?
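That sorting-out step could look something like this (a sketch using the hypothetical temp1/temp2/temp3 names from the discussion, not any real firmware):

```python
# Sketch: subscribe once to the "temp" prefix, then route events
# named temp1, temp2, temp3 to per-sensor handling in your own code.

def route(event_name: str):
    """Return the sensor number an event belongs to, or None."""
    suffix = event_name[4:]
    if event_name.startswith("temp") and suffix.isdigit():
        return int(suffix)
    return None

print(route("temp1"))        # 1
print(route("temp3"))        # 3
print(route("temperature"))  # None (not one of our numbered sensors)
```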

I think the preferred method is to fork the documentation repository on GitHub, make a branch for your change, commit it to your fork, then submit a pull request.

Forked and Pulled my first GitHub repository…

:+1: