[New Library] PublishManager

PublishManager does what it says: it manages your Particle.publish() calls so that you avoid breaking the 1-per-second rate limit and can generate publish events while offline.

PublishManager stores up to 10 publish events by default (more are possible) and sends them out when the cloud is ready to accept your messages.

Documentation is here:

PublishManager is published to Particle, so it can be installed via the CLI (particle library add PublishManager) or through the Cloud IDE.

Please let me know if you have any issues or if any improvements could be made.

Thanks.


The one second publish limit has been removed.

Queuing publish events when the device is offline sounds useful, though!

Not sure about that tho'

I know the device and user limits on the cloud side were lifted, but the limit built into the local firmware is still in place IIRC.


OK, so the cloud will not throw publish rate errors now but the firmware will throw 1 second publish errors?

What can we expect if we try to publish every 500ms?

AFAIK the firmware never did throw errors; it just muted the publish (it may have returned false for the call, but I'd not call that throwing errors).

Four events will go out, and then nothing more until you keep 4000 ms of radio silence.
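
To make that concrete, here is a minimal sketch (not part of PublishManager) of one way to stay inside that allowance: permit at most 4 publishes in any rolling 4000 ms window and refuse anything beyond that. The constants simply mirror the numbers above; the event name is made up.

// Rolling-window throttle: at most BURST_MAX publishes per WINDOW_MS.
// Illustration of the firmware limit described above, not part of PublishManager.
const int      BURST_MAX = 4;
const uint32_t WINDOW_MS = 4000;

uint32_t publishTimes[BURST_MAX] = {0};
int      publishIndex = 0;

bool throttledPublish(const char* name, const char* data) {
  uint32_t now = millis();
  // The slot about to be overwritten holds the oldest of the last 4 publishes
  if (publishTimes[publishIndex] != 0 && (now - publishTimes[publishIndex]) < WINDOW_MS) {
    return false;  // 4 publishes already went out in the last 4000 ms
  }
  publishTimes[publishIndex] = now;
  publishIndex = (publishIndex + 1) % BURST_MAX;
  return Particle.publish(name, data);
}

void setup() {}

void loop() {
  static uint32_t last = 0;
  if (millis() - last >= 500) {          // try every 500 ms...
    throttledPublish("burst-demo", "x"); // ...only 4 per 4 s are attempted
    last = millis();
  }
}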


Would a method in PublishManager for allowing a “burst” publish be useful? I'm trying to keep the library minimal, and I suppose if a user absolutely needed to send 4 messages really quickly, they could do so outside of PublishManager. In that case there could be a method for adding additional delay to the Software Timer.

This could be useful if you just have to have a burst of discrete messages that exceeds the rate limiting of Particle. Of course, another way would be to pack more into the 255 bytes you have and parse the values server-side.
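
As a quick illustration of that packing approach (the readings and event name here are hypothetical), several values can be combined into one payload and split apart again server-side, staying within a single event's 255-byte data limit:

// Hypothetical example: pack three sensor readings into one publish payload
// instead of three separate publishes.
void publishPackedReadings(int temperature_dC, int humidity_pct, int battery_mV) {
  char payload[255];
  // e.g. {"t":215,"h":40,"b":3712} -- temperature in tenths of a degree,
  // parsed back into separate fields server-side
  snprintf(payload, sizeof(payload),
           "{\"t\":%d,\"h\":%d,\"b\":%d}",
           temperature_dC, humidity_pct, battery_mV);
  Particle.publish("packed-readings", payload);
}

void setup() {}

void loop() {
  publishPackedReadings(215, 40, 3712);
  delay(10000);  // comfortably under the publish rate limit
}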

Some time ago, I was working on a version of a publish class that would do just that. My intent was to create a fairly large array, then break it up into max 255-byte chunks for publication (or whatever the current limit is), or break it into the size of the original publication attempt if the user so desired. I was trying to keep away from dynamic allocation if possible, and that's where I have a question: is there a way to pass in the size for the class member array in the constructor that wouldn't cause allocation on the heap?

Sure, C++ allows for that; this is one way:

template <size_t size>
class Publish {
  public:
    Publish() : arraySize(size) {}
    // Reports the compile-time size the object was instantiated with
    size_t getSize() {
      return arraySize;
    }
  private:
    uint16_t classArray[size];  // sized at compile time -- no heap allocation
    size_t arraySize;
};

Publish<16> publish;

void setup() {
  Serial.begin(9600);
  Serial.println(publish.getSize());
}

void loop() {

}

I saw something like that in my searches, but I couldn’t get the syntax correct when the constructor has some arguments (for instance, Publish(char* name)). Can you update your answer to show the correct way to do that?

Hmmm…

So, you want the constructor, for example, to accept the first argument in the publish() function? As in:

Particle.publish(const char *eventName, const char *data)

like this:

template <size_t size>
class Publish {
  public:
    Publish(const char* eventName) : arraySize(size), event(eventName) {}
    size_t getSize() {
      return arraySize;
    }
    const char* getEventName() {
      return event;
    }
  private:
    uint16_t classArray[size];  // sized at compile time -- no heap allocation
    size_t arraySize;
    const char* event;          // points at the caller-supplied name
};

constexpr const char* EVENT_NAME = "someEventName";  // alternate way to pass the pointer (compile time const)

Publish<16> publish("example");
Publish<8> otherPublish(EVENT_NAME);  //alternate


void setup() {
  Serial.begin(9600);
  Serial.println(publish.getSize());
  Serial.println(publish.getEventName());
  Serial.println(otherPublish.getSize());
  Serial.println(otherPublish.getEventName());
}

void loop() {

}

Thanks a lot, that clears some things up for me. I’m afraid my knowledge of C++ is still pretty superficial; too many holes in my understanding to grasp what the references I was looking at were telling me. Is there some advantage to using constexpr const char* EVENT_NAME? Since doing it that way doesn’t give the user flexibility in choosing a name, why would one do it that way?

It's just the C++ way of not having to use a #define, e.g.

#define EVENT_NAME "someEventName" 

and not having to worry about preprocessor #define collisions...
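
A small illustration of the difference (file and symbol names hypothetical): a #define is untyped text substitution with no scope, so two headers defining the same name can quietly fight each other, while a constexpr constant is an ordinary typed symbol that namespaces and the compiler can keep honest.

// Macro: pure text substitution -- no type, no scope. If another header also
// #defines EVENT_NAME, the preprocessor just warns and the later one wins.
#define EVENT_NAME "someEventName"

// constexpr: a typed, scoped compile-time constant. A clash is an ordinary
// redefinition error, and namespaces keep unrelated names apart.
namespace myApp {
  constexpr const char* kEventName = "someEventName";
}

void setup() {
  Serial.begin(9600);
  Serial.println(EVENT_NAME);          // expands to the string literal
  Serial.println(myApp::kEventName);   // normal scoped symbol
}

void loop() {}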


For the storing of publish events offline feature, could each individual event have a timestamp associated with it once it gets published to the Particle cloud?

For example, I can see this feature especially useful if you have a time-critical sensor application where you want to know the timestamp of each sensor measurement, but don’t want to turn on the modem to send them until you have 10 of them. This could save on battery life and data.

That’s something I’m doing in my current project and I do find it very useful. I’d rather keep that out of the library as it could be incompatible for some people. It could possibly be a separate method like .publishWithTimeStamp(). In either case, I’ll add an example to the library for saving timestamps.


v0.0.2 has been published to the Particle cloud.


v0.0.2 adds a .cacheSize() method that returns the current size of the cache. It also returns -1 if the cache is empty and at least 1 second has elapsed since the last publish, which indicates the next .publish() will go to the cloud immediately.
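
For example, a hedged sketch of how those return values might be used (this assumes a global PublishManager instance named publishManager, as elsewhere in this thread, and that .cacheSize() returns a plain signed integer behaving exactly as described above):

// Report the cache state using the v0.0.2 cacheSize() behaviour described above.
void reportCacheState() {
  int size = publishManager.cacheSize();
  if (size == -1) {
    // Cache is empty and >1 s has passed since the last publish:
    // the next publish() should go straight to the cloud.
    Serial.println("cache idle, next publish goes out immediately");
  } else {
    Serial.print(size);
    Serial.println(" event(s) still queued");
  }
}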

Working on a few other updates including an example with a timestamp.


@fishmastaflex v0.0.4 is available on the GitHub repo and includes an example for publishing with a timestamp.

I’m waiting to test v0.0.3 (which added compatibility for Core) more thoroughly before publishing this version to the cloud. I expect v0.0.4 to be available as a Particle Library tomorrow (4/24/18)

Here’s the key component of the example:

// Publishes "data" as a JSON char string called buffer, which contains the
//  original data and a timestamp.
//  ex: {"data": "test: 0", "time": 1524500000}
void publishWithTimeStamp(String eventName, String data){
  char buffer[255];

  // snprintf guards against overflowing buffer if data is long;
  // Time.now() is cast to long to match the %ld format
  snprintf(buffer, sizeof(buffer), "{\"data\": \"%s\", \"time\": %ld}", data.c_str(), (long)Time.now());

  publishManager.publish(eventName, buffer);
}

This formats your normal data string into a JSON string. You could change the snprintf() call to only add a comma (or anything else you want), but most likely, if you’re adding a timestamp, you’ll need some kind of post-processing via a server, and JSON should be readily accepted by most servers and databases.
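
For instance, the comma-only variant mentioned above could be as simple as swapping the snprintf() line in publishWithTimeStamp() for the line below (the field order and separator are just one choice):

  // "test: 0,1524500000" -- data and epoch time separated by a comma,
  // split apart again server-side
  snprintf(buffer, sizeof(buffer), "%s,%ld", data.c_str(), (long)Time.now());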

I chose not to add this as a method to the library in order to keep the library “format agnostic”.


This is most awesome, thanks @bveenema

v0.0.4 has been tested (compiled and ran all examples on the Photon, except the Core-only example) and is now available on the Particle Cloud.

Still looking for someone to test compatibility with the Core platform.

Thanks


v1.0.0 is now released.

This version replaces the dynamic std::queue with a statically allocated circular buffer and removes the Software Timer in favor of a .process() method.

Replacing the std::queue with the circular buffer is a trade-off of ease of use and memory size vs. stability. The circular buffer allocates all the memory it could ever use when the program begins, but then never resizes (or relocates), which avoids potential heap fragmentation. In order to keep PublishManager slightly flexible, the library is now instantiated using a template to define the buffer.
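
For readers unfamiliar with the pattern, here is a generic sketch of the idea (not PublishManager's actual implementation): a template parameter fixes the capacity at compile time, the storage lives inside the object, and push/pop only move indices, so nothing is allocated or freed at runtime.

// Generic fixed-capacity ring buffer -- an illustration of the pattern only.
template <typename T, size_t CAPACITY>
class RingBuffer {
  public:
    bool push(const T& item) {
      if (count == CAPACITY) return false;      // full: caller decides what to do
      buffer[(head + count) % CAPACITY] = item;
      count++;
      return true;
    }
    bool pop(T& item) {
      if (count == 0) return false;             // empty
      item = buffer[head];
      head = (head + 1) % CAPACITY;
      count--;
      return true;
    }
    size_t size() const { return count; }
  private:
    T buffer[CAPACITY];   // storage is part of the object -- no heap involved
    size_t head = 0;
    size_t count = 0;
};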

The default instantiation, PublishManager<> publishManager, will allocate a buffer that can hold 5 events with the maximum publishable eventName (63 characters) and data (255 bytes), which takes up about 1590 bytes of RAM. You can optimize the buffer however you like to use less memory and/or hold more events.

Removing the Software Timer does 2 things:

  1. Makes the actual publish safer (the Software Timer callback has a limited stack size) and
  2. Makes the library equally compatible with Core
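
Putting the pieces together, a minimal v1.0.0 usage sketch based on the description above (the header name is assumed from the library name, and the exact publish() argument types may differ slightly from what is shown):

#include "PublishManager.h"   // header name assumed from the library name

// Default template arguments: room for 5 queued events at full name/data size
PublishManager<> publishManager;

void setup() {
}

void loop() {
  // Queue an event every 10 s; PublishManager holds it until the cloud is
  // ready and observes the publish rate limit once it is.
  static uint32_t lastQueued = 0;
  if (millis() - lastQueued >= 10000) {
    publishManager.publish("pm-demo", "hello");
    lastQueued = millis();
  }

  // v1.0.0 replaced the Software Timer with process(); call it regularly so
  // queued events actually get sent.
  publishManager.process();
}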