The docs say callback functionality will be ready sometime in March.
Is that still the plan, or should we just rely on making TCP connections on the device?
Yep, still the plan; it's coming very soon. It's already developed and in testing now. But in the meantime, try out Server-Sent Events!
Thanks for the fast response. I'm using Parse for my backend, and it looks like they don't support Server-Sent Events. Furthermore, I'm working on a project similar to your pizza-ordering button, so the callback might be more practical, unless you have a better suggestion?
@zach I got really curious, read the blog a few times, and wondered about this…
It seems to me that the name for a Server-Sent Event can be almost anything a user wants.
What if two users happen to be using the same name, e.g. "me/temperature"?
It sounds like it would be a rare occurrence, but I'm really curious about it.
Also, let's say my event TTL is 60s, and it expired before the event was triggered again, and someone else published with the same name. Won't I be getting data from that publish?
Sorry for the silly question, since I guess people would probably just say "use a unique and special name for your event".
Just wondering
Good question!
It all depends on which endpoint your program is listening to:
- All Public Events - api.spark.io/v1/events
- All your Events - api.spark.io/v1/devices/events
- All events from a particular core - api.spark.io/v1/devices/your_device_id/events
e.g.:
- All Events starting with Temperature - https://api.spark.io/v1/events/Temperature
- All your Events starting with Temperature - https://api.spark.io/v1/devices/events/Temperature
- All events from a particular core - https://api.spark.io/v1/devices/your_device_id/events/Temperature
etc.
So even if someone else is using the same event name, you can just filter to your cores, or a particular core. Also, any state persisted from a message you publish is stored only along with your core information, and is only accessible to you.
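To make the filtering concrete, here is a minimal sketch in Python using only the standard library. The function names and structure are my own (not an official client); the endpoint paths are the ones listed above, and the device ID and access token are placeholders you'd substitute:

```python
import urllib.request

API_BASE = "https://api.spark.io/v1"

def event_url(prefix=None, device_id=None):
    """Build the SSE endpoint URL for your own events, optionally
    narrowed to one core (device_id) and/or an event-name prefix."""
    url = API_BASE + "/devices"
    if device_id:
        url += "/" + device_id
    url += "/events"
    if prefix:
        url += "/" + prefix
    return url

def stream_events(url, access_token):
    """Open the event stream and yield raw lines as they arrive."""
    req = urllib.request.Request(
        url, headers={"Authorization": "Bearer " + access_token})
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            yield raw.decode("utf-8").rstrip("\r\n")
```

For example, `stream_events(event_url("Temperature", "your_device_id"), "your_access_token")` would stream just the Temperature events from that one core, so a name clash with someone else's public event never reaches you.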
Thanks!
David
That sounds interesting!
So… our events all go under api.spark.io/v1/devices/events
whether public or private?
Or does anyone's public event go to "All Public Events"?
Just the public events appear in the "All Public Events" firehose; if your event is flagged private, it will only show up for you in those feeds.
got it!
Isn't that going to cause some event name clashes in the public firehose when the Core community gets incredibly large (which is going to happen)?
Or… let's say 100 people use a public "temperature" event; can I filter mine with api.spark.io/v1/devices/events/Temperature?
Hi @Dave
First off, this is great and works wonderfully!
I do have a question, though: when I run a curl command to listen to the events, I get the event data as expected, but I also get occasional carriage returns between event data, sort of like a keep-alive is running. This is expected, right?
And to whoever is publishing "Temperature/Deck": wow, 3 degrees F is cold even by Boston standards!
@kennethlimcp, totally! api.spark.io/v1/devices/events/Temperature
would filter to just events from your cores starting with "Temperature".
@bko - Thanks! That would be my deck, it is cold! Also, yes, the Carriage Returns are an intentional keepalive, they should come in every 9 seconds when there is no other traffic.
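For anyone parsing the stream by hand, those keepalives are easy to handle: in the SSE wire format, each event is a block of "field: value" lines terminated by a blank line, so a bare keepalive newline simply produces no event. A rough parser sketch in Python (the helper name and the sample payload are mine, for illustration):

```python
def parse_sse(lines):
    """Group raw SSE lines into events.

    Yields one dict of field -> value ("event", "data", ...) per event.
    A blank line terminates an event; stray keepalive newlines with no
    preceding fields yield nothing at all.
    """
    fields = {}
    for line in lines:
        line = line.rstrip("\r\n")
        if not line:                    # end of event, or a bare keepalive
            if fields:
                yield fields
                fields = {}
        elif not line.startswith(":"):  # lines starting with ':' are comments
            name, _, value = line.partition(":")
            fields[name] = value.lstrip(" ")

sample = [
    "event: Temperature/Deck",
    'data: {"data":"3","ttl":"60"}',
    "",   # terminates the event
    "",   # keepalive, ignored
]
events = list(parse_sse(sample))  # one event with "event" and "data" fields
```

You could feed this generator the lines coming off the curl output (or any HTTP stream) and get clean event dictionaries out the other side.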
Ah, I'm still really curious, and having some hands-on experience makes me pick things up faster.
So @bko, could you share how you do the curl command, and how do you know about the "Temperature/Deck" events?
I have a Python script that uses the "requests" module to perform HTTP requests, which I wrote when I tried to help test the CFOD issue.
Also, I modified it slightly to check my access_token, delete it, etc.
That's about the most I've ever done with Python and an API, so I definitely need you veterans to start me off with this new feature!
I'll put up a tutorial in return, haha!
Hi @kennethlimcp,
One thing that's nice about these endpoints is that they use GET requests, so you can just open them in a browser window. If you're in Chrome, you have to "view source" on the page to see what's coming across:
view-source:https://api.spark.io/v1/events/?access_token=your_access_token
@Dave THATās REALLY FUN
Seriously i always loved microcontroller and the made it more fun!
What happens if a user accidentally calls the Spark.publish() too frequently?
I just did what David suggested and used:
curl -H "Authorization: Bearer <<hex number here>>" https://api.spark.io/v1/events/Temperature
You need to replace the entire <<hex number here>> part with your auth token.
Interestingly, this finds events whose names begin with Temperature, like Temperature/Deck and Temperature/Basement (go in the basement, David, it's much warmer!), but it does not find spark-hq/Temperature. I think that is a good way for this to work with hierarchical levels.
Then there will be a lot of events!
Is there something wrong here that causes a compile issue?
String data = 'ms';
void loop() {
    Spark.publish("Core_Uptime/Singapore", data);
}
Yup! In C++ / Arduino coding land, single quotes ('a') denote a single character, and double quotes ("abc") denote a string. So you want to change 'ms' to "ms".
I would recommend starting out by trying not to publish more than, say, 5-10 events a second; if you send too many too quickly, you may knock your core offline (I suspect it's possible to flood yourself offline currently).
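That pacing is easy to enforce with a tiny throttle that callers check before each publish. Here's the idea sketched in Python (the class name and the 5-per-second figure are mine, taken from the advice above, not part of the Spark API); the same pattern translates directly to a millis()-based check in the firmware loop:

```python
import time

class PublishThrottle:
    """Allow at most max_per_second publishes; callers check allow() first."""

    def __init__(self, max_per_second=5):
        self.min_interval = 1.0 / max_per_second
        self.last = 0.0  # monotonic timestamp of the last allowed publish

    def allow(self):
        """Return True if enough time has passed since the last publish."""
        now = time.monotonic()
        if now - self.last >= self.min_interval:
            self.last = now
            return True
        return False

throttle = PublishThrottle(max_per_second=5)
if throttle.allow():
    pass  # safe to publish the event here
```

Anything the throttle rejects can simply be dropped or deferred to the next loop iteration, which keeps the core well under the flood threshold.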
Hmm… Seems like a problem still.
Is this related to this issue?