Spark.publish() -- adjusting the 4 burst per second rule

OK, I’ve decided to start a new topic which is a fork of this earlier conversation . . . here:

To recap I am running my own server and want to change the throttling schedule . . . preferably to allow 4 .publish() events per second and ignore any overage (which there won’t be because I already impose throttling in my wiring code on the core).

So, I am combing through the source, looking for options, and lo and behold, in spark-protocol.cpp, around line 459, we see this . . .

// Returns true on success, false on sending timeout or rate-limiting failure

bool SparkProtocol::send_event(const char *event_name, const char *data,
                               int ttl, EventType::Enum event_type)
{
  if (updating)
  {
    return false;
  }

  static system_tick_t recent_event_ticks[5] = {
    (system_tick_t) -1000, (system_tick_t) -1000,
    (system_tick_t) -1000, (system_tick_t) -1000,
    (system_tick_t) -1000 };
  static int evt_tick_idx = 0;

  system_tick_t now = recent_event_ticks[evt_tick_idx] = callback_millis();
  evt_tick_idx++;
  evt_tick_idx %= 5;
  if (now - recent_event_ticks[evt_tick_idx] < 1000)
  {
    // exceeded allowable burst of 4 events per second   <<< interesting
    return false;
  }

  uint16_t msg_id = next_message_id();
  size_t msglen = event(queue + 2, msg_id, event_name, data, ttl, event_type);
  size_t wrapped_len = wrap(queue, msglen);

  return (0 <= blocking_send(queue, wrapped_len));
}

So, 3 lines of questioning:

  1. What is happening here? It appears to be taking 5 measurements against callback_millis() and saying “hey, if the cumulative time over which the last 5 events transpired is greater than 1000 millis, then return false”, i.e. send no message.

a) is callback_millis() the number of milliseconds since the spark connected?
b) if returning false, how long until it is reset to true, or what resets it to true ?

  2. Let’s say I wanted to change this to only allow the first 4 events of every second through (true) and discard the rest (false) until the top of the next second. How would I do this?

a) if I made these changes, how would I recompile the firmware? (Could I do this via the browser build, then download and flash via CLI?) I am not sure how to see the complete firmware in the browser editor.

  3. Finally, there is the cloud code: in SparkCore.js there is a reference to an EventSlowdown event, but I can’t find the code that actually does the throttling . . . or I don’t understand the C syntax very well! :smile:

Thank you Spark Community, this has been a wonderful adventure.


Hi @jgeist.

For 1: Not quite. Each time you call send_event(), the time of the call is captured, and if the time between the current call and the one four calls earlier is less than 1000 ms, the current event won’t be published and your calling routine is informed via a return false;.
But since the current time is captured even for a rejected call, further bursting events won’t make it through, even when the time since the last actually sent event exceeds one second. So after a burst (once send_event() has returned false), you have to keep silent for about one second.
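To make that concrete, here is the same mechanism re-created as a standalone sketch. The free-function form and the name allow_event() are my invention for illustration; in the firmware this logic lives inside SparkProtocol::send_event() itself.

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t system_tick_t;

// Ring buffer holding the timestamps of the last 5 send *attempts*.
// The initial (system_tick_t)-1000 entries read as "long ago" thanks
// to unsigned wraparound, so the first burst of 4 always gets through.
static system_tick_t recent_event_ticks[5] = {
    (system_tick_t) -1000, (system_tick_t) -1000,
    (system_tick_t) -1000, (system_tick_t) -1000,
    (system_tick_t) -1000 };
static int evt_tick_idx = 0;

bool allow_event(system_tick_t now_millis) {
  recent_event_ticks[evt_tick_idx] = now_millis; // record this attempt
  evt_tick_idx = (evt_tick_idx + 1) % 5;         // step to the oldest slot
  // Reject when the attempt four calls earlier is less than 1 s old.
  // Note: the rejected attempt was still recorded above, so hammering
  // the function keeps refilling the window and keeps you blocked.
  return (now_millis - recent_event_ticks[evt_tick_idx] >= 1000);
}
```

Calling allow_event() at 0, 1, 2, 3 ms succeeds; calls at 4, 5, 6, 7 ms are rejected but still recorded, so a call at 1003 ms is rejected too, even though the last successful send was a full second earlier.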

For 1.a: Not sure, but I guess it’s the time since the Core started. It doesn’t actually matter, though - the main point is that it’s milliseconds :wink:

For 1.b: false is the return value and doesn’t need setting back to true. The next call to send_event() will give you its own result.

For 2: There are several ways. If you want four per second with a burst of 16 you just exchange 1000 for 250. If you don’t want the burst logic, you additionally avoid using the times buffer recent_event_ticks[].
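For the “first four of every second, discard the rest” behaviour you asked about, a counter plus a window-start timestamp is enough; no times buffer at all. A hedged sketch (the function and variable names are made up, not firmware API):

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t system_tick_t;

// Hypothetical replacement policy: let the first 4 events of each whole
// second through (true) and silently drop the overage (false) until the
// top of the next second. Unlike the burst logic, rejected calls carry
// no lasting penalty - the counter simply resets when the second rolls over.
bool allow_event_windowed(system_tick_t now_millis) {
  static system_tick_t window_start = 0;
  static int events_in_window = 0;

  if (now_millis - window_start >= 1000) {
    window_start = now_millis - (now_millis % 1000); // align to whole seconds
    events_in_window = 0;                            // top of the next second
  }
  if (events_in_window >= 4)
    return false;          // discard overage until the window rolls over
  ++events_in_window;
  return true;
}
```

With this, calls at 0, 100, 200, 300 ms pass, everything else up to 999 ms is dropped, and the counter resets at 1000 ms.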

For 2.a: You’d need to compile locally with the local toolchain.

Haven’t had a look at SparkCore.js so no idea for 3 - but it’s not really C but JavaScript.


Thanks @ScruffR!

That makes sense. And yes, SparkCore.js is JavaScript . . . which I am just beginning to tinker with. So, I think I will bypass the ‘burst logic’ (what a great term) and compile with the local chain . . . I think I have seen some discussions about compiling locally around here somewhere . . .

Good stuff . . .

