My 1-page synopsis is gleaned from Particle Docs & my own programming. I’m sure some of this is sub-optimal if not outright mistaken. Comments please. (changes marked !!!)
Particle Cloud Cheat Sheet
Variable
Particle.variable("varName", labelOfValue)
up to 20 variables (per “access token” or per “device ID”? – !!! Answer: per device ID [per ScruffR, below])
up to 12-character variable names
3 value data types
INT
DOUBLE
STRING – à la C++ (maximum string length is 622 bytes)
return value is always a string
declare in setup()
access (read-only) from the WEB: curl https://api.particle.io/v1/devices/$id/$var?access_token=$ac
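For example, a minimal sketch (the names counter, tempC and status are just placeholders) registering one variable of each type, which can then be read with the curl line above:

#include "Particle.h"

int counter = 0;            // INT
double tempC = 21.5;        // DOUBLE
String status = "idle";     // STRING (the 622-byte limit applies to the value returned)

void setup() {
    // current API form; older firmware used Particle.variable("name", &var, INT) etc.
    Particle.variable("counter", counter);
    Particle.variable("tempC", tempC);
    Particle.variable("status", status);
}

void loop() {
    counter++;              // the cloud always reads the current value
    delay(1000);
}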
Function
Particle.function("cloud-name", functionName)
a value is passed in – not shown above (see curl statement, below)
up to 15 cloud function names
up to 12-character cloud-names
the program function must take a string ARG (max 63 char) and return an INT
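A minimal sketch of such a function, assuming a hypothetical cloud-name "setLevel" – per the rule above it takes a String and returns an int; the argument is passed in the body of the POST request (see the API docs):

#include "Particle.h"

int level = 0;

// cloud function handler: String arg (max 63 chars) in, int out
int setLevel(String arg) {
    level = arg.toInt();
    return level;           // this int is sent back in the cloud response
}

void setup() {
    // "setLevel" is the cloud-name (max 12 chars), setLevel is the C++ function
    Particle.function("setLevel", setLevel);
}

void loop() {
}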
Subscribe
Note: only between Particle devices; not directly WEB accessible
Particle.subscribe("pub-name", functionName)
name can be up to 63 char long
the program_function handles the published eventName and data
subscribe events have to be declared in the setup()
Examples
void setup() {
// "subscribes" persist as long as the device is connected (?)
Particle.subscribe("temperature", myHandler);
}
only allow events from my devices: Particle.subscribe("the_event_prefix", myHandler, MY_DEVICES);
// won’t get other Pcloud events of same name – like with PRIVATE, above
How to get a publish through to a subscriber (or out to a web page)
a) Publishing device: … Particle.publish("my-pub", CharData, 60, PRIVATE); …
b) Subscribing device:
void myHandler(const char *event, const char *data) { . . .
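Spelled out a little further (event name "my-pub" and the payload are only placeholders), the pair could look like this:

// a) Publishing device
void setup() {
}

void loop() {
    char charData[32];
    snprintf(charData, sizeof(charData), "%.1f", 23.4);   // whatever you want to send
    Particle.publish("my-pub", charData, 60, PRIVATE);
    delay(5000);                                           // stay well under the publish rate limit
}

// b) Subscribing device (a separate sketch)
void myHandler(const char *event, const char *data) {
    // event is "my-pub", data is the published string (may be NULL if no data was sent)
    Serial.printlnf("event=%s data=%s", event, data ? data : "(none)");
}

void setup() {
    Serial.begin(9600);
    Particle.subscribe("my-pub", myHandler, MY_DEVICES);   // matches the PRIVATE publish
}

void loop() {
}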
Any mention of "declare in setup()" or "have to be declared in the setup()" has to be put in perspective.
These registrations have to happen before, or at the latest a few seconds after, connecting to the cloud; otherwise the cloud won't know about them. They are also only allowed once for each individual name.
To further expand on what @ScruffR mentioned, this statement is just wrong. You can use the pub/sub system from native iOS and Android apps (in the docs here and here, respectively), as well as from web apps.
I had in mind getting subscribed data from the subscribing sketch via Particle.variable. The curl example in
"https://docs.particle.io/reference/api/#publish-an-event" is cool. Is there a matching subscribe? The particle CLI is no help on my server – I can’t install it.
Your 2 sentences starting with “Any…”: does “connect to the cloud” refer to the moment when rapid cyan changes to breathing cyan? And when does setup() execute?
That depends on SYSTEM_MODE() and SYSTEM_THREAD().
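A quick sketch to illustrate one combination (the event name is just a placeholder): with the default AUTOMATIC mode and no threading, setup() only runs once the device is breathing cyan; with the two lines below, setup() runs right after boot and the connection is only attempted when you ask for it.

#include "Particle.h"

SYSTEM_MODE(SEMI_AUTOMATIC);    // don't connect automatically
SYSTEM_THREAD(ENABLED);         // run application code on its own thread

void myHandler(const char *event, const char *data) {
}

void setup() {
    // register subscriptions/variables/functions first ...
    Particle.subscribe("temperature", myHandler, MY_DEVICES);
    // ... then request the cloud connection
    Particle.connect();
}

void loop() {
}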
Subscribing is done via SSE (Server-Sent Events) listeners, which are a common feature of most web "languages" and not actually the responsibility of the Particle cloud. The cloud pushes the event down the throat of any listener who asks for it.
You can just do it via your (modern) browser https://api.particle.io/v1/devices/events/?access_token=<yourAccessToken>
or for a specific event https://api.particle.io/v1/devices/<yourDeviceID>/events/?access_token=<yourAccessToken>
Actually, it seems to expire immediately. I tested this yesterday in response to another post, and the implication that you can subscribe to an event within that 60 second window does not seem to be true. I published a different event every second, then subscribed (via CLI) about 15 seconds later, and I only got the ones published after I subscribed, not any of the first 15 published before I subscribed. The statement in the docs about it not being implemented is unclear; I certainly thought (before testing) that it meant you couldn't change the ttl from the default 60 seconds, but that the event would persist in the cloud for those 60 seconds.
You may be uninterested, but I didn't want other users who might see this to get the idea that you couldn't do it with native phone apps.
The respective part in the docs means you can set any TTL in the call, but the event will not live on after it has been published (for now). If it's not caught there and then, it's gone immediately.
So the TTL feature is not implemented at all - yet (despite the fact that you can set "any" value for that field and override the also ignored default value of 60sec).
Re publish: events can currently only be intercepted by a pre-existing subscribe. Not so cool.
But (unlike what I wrote in my cheat sheet) you can publish from a non-Particle device by this:
$ curl https://api.particle.io/v1/devices/events -d "name=my_name" -d "data=$data" -d access_token=$ac
But you presumably can’t subscribe with curl because it won’t just wait for the publish.
Also, this seems like a bug:
$ curl https://api.particle.io/v1/devices/events -d "name=my_name" -d "data=$data" -d "private=true" -d access_token=$a
{
  "ok": true
}
This returns “true” but it doesn’t mean that the subscribe end receives it – you can have a PRIVATE/MY_DEVICES mismatch. BTW: seems to me that PRIVATE should be the default.
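To illustrate the mismatch on the receiving side (event name as in the curl call above): as far as I understand the scoping, a PRIVATE/private=true publish is only delivered to subscribers using MY_DEVICES on the same account, while a plain subscribe only sees the public stream.

void myHandler(const char *event, const char *data) {
}

void setup() {
    // receives the private publish above (same account):
    Particle.subscribe("my_name", myHandler, MY_DEVICES);

    // would NOT receive it - this form only sees public events with that prefix:
    // Particle.subscribe("my_name", myHandler);
}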
Slightly off topic: Particle devices could really use some persistent cloud data. It doesn’t need to be much – a paltry few hundred bytes. The problem is restoring “state” information after a power outage. Right now I have a Linux system reading via USB from a Photon that subscribes to published reboot messages; the Linux box then restores the data via curl to a Particle.function. But this is nasty and it can fail.
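For what it's worth, the device side of such a state-restore scheme can be as simple as a cloud function plus a reboot announcement – a rough sketch with hypothetical names ("restore", "reboot"), not the actual code:

int savedSetpoint = 0;

// called from the Linux box (via the cloud function API) after it sees the reboot event
int restoreState(String arg) {
    savedSetpoint = arg.toInt();
    return 0;
}

void setup() {
    Particle.function("restore", restoreState);
    // announce the reboot so the helper knows to push the state back
    Particle.publish("reboot", System.deviceID(), 60, PRIVATE);
}

void loop() {
}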
You are not the first to suggest that, and it's not that Particle won't do it – it's just not there yet, hence the wording used.
That's also not a bug.
The subscription for the SSE is registered with the broker; the fact that your curl (which is not a Particle app) does not stay connected to benefit from that subscription but immediately cancels it again is the expected behaviour. curl is not the tool for keeping the connection open and waiting for events; it's just a tool that can be used to test the endpoint.
And since the endpoint does its job, the response is perfectly valid – the subscribe will work for any valid event and connection.
There are means for that as well. EEPROM and retained may be keywords to look up in the docs.
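A minimal sketch of both (the struct layout, address 0 and the version marker are arbitrary choices of mine, not from the docs):

#include "Particle.h"

// Retained RAM: survives reset and deep sleep; surviving a real power outage
// needs backup power on VBAT (check the docs for your device).
retained int rebootCount = 0;

// Emulated EEPROM: survives power loss and reflashing (~2KB on the Photon).
struct PersistedState {
    uint8_t version;        // bump when the struct layout changes
    int     setpoint;
};

void setup() {
    rebootCount++;

    PersistedState state;
    EEPROM.get(0, state);            // read whatever is stored at address 0
    if (state.version != 1) {        // erased flash reads back as 0xFF
        state.version  = 1;          // first run: write defaults
        state.setpoint = 0;
        EEPROM.put(0, state);
    }
}

void loop() {
}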
I now recall reading (2 years ago?) the EEPROM bit. At the time I guess I thought it was too ugly to think about. Now I suppose I’ll give it a try. But since it’s tied to a particular Photon, it’s no substitute for cloud storage. Also, the docs don’t say what happens when a new sketch is downloaded – an EEPROM.clear() is probably in order. Better warn Raspberry Pi-ers that it doesn’t apply to those devices.
Particle’s metaphorical re-purposing of terms (like EEPROM, FLASH, etc.) is disconcerting.
@rch
I haven’t played around with a Raspberry Pi at all so the “re-purposing” of terms doesn’t alarm me.
I use the EEPROM on all my products made with Particle as a memory area for grabbing initialization variable settings – not as data storage. Given the 2048-byte size of the EEPROM area, continuously throwing data into it as a storage means is not ideal. You could always add an external memory chip or… use a cloud service like Ubidots. I just started my business account with them, and I’ve been super happy with how easy to use it is and how useful it is as well. Their support team is extremely eager and responsive, and I’m sure they can help you get started if you need help. They even have a ready-to-use API for the Raspberry Pi.
That’s correct. Not a bad idea to have a “ClearEEPROM” app. You could actually have it clear the EEPROM and then reenter DFU/Safe Mode so it’s 100% ready for reflashing. (would also indicate that it was done clearing the flash)
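Something along those lines could be as small as this (whether DFU or Safe Mode is the better end state is a matter of taste):

#include "Particle.h"

void setup() {
    EEPROM.clear();              // wipe the emulated EEPROM area
    // Drop into DFU mode, ready for reflashing - the yellow blink
    // doubles as the "done clearing" indicator.
    // System.enterSafeMode() would be the Safe Mode alternative.
    System.dfu();
}

void loop() {
}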
I’m wondering if the “only between Particle devices” statement might be a bit misleading to a newbie. Yes, you do provide some clarification about capabilities via cURL, but there are other avenues, such as PHP (or any other scripting language), through which data can be sent to Particle devices that have subscribed to a particular topic. Also, I don’t see a reference to webhooks, which provide a convenient means of getting data out of the Particle cloud via the Publish function with an associated Subscribe receiving the response.
I offer these not as criticisms, but rather as food for thought to consider for revisions to your cheat sheet.
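On the webhook point, the firmware side of that pattern is just a publish plus a subscribe to the matching hook-response event – a rough sketch, assuming a webhook named "get_weather" has been set up in the console:

// handler for the data the webhook sends back
void gotResponse(const char *event, const char *data) {
    // event is "hook-response/get_weather/<part>", data is the (possibly chunked) reply
    Serial.printlnf("%s: %s", event, data);
}

void setup() {
    Serial.begin(9600);
    Particle.subscribe("hook-response/get_weather", gotResponse, MY_DEVICES);
}

void loop() {
    Particle.publish("get_weather", PRIVATE);   // triggers the webhook
    delay(60000);
}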