Private events don't seem to work on the local cloud?

Hi guys, @Dave:

I just tested the API call below, which should subscribe to events scoped by userID (this is mentioned in GitHub - particle-iot/spark-server: UNMAINTAINED - An API compatible open source server for interacting with devices speaking the spark-protocol). Unfortunately, it didn't work.

/v1/devices/events

I found I can receive events using the API below :). But it doesn't meet my requirement, because it receives all events from all users...

/v1/events
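
For reference, here is roughly how I read that stream (a minimal sketch; the host, port, and access token are placeholders for my local setup, not values defined by spark-server):

var http = require('http');

http.get({
    host: 'localhost',                                  //placeholder: my local cloud host
    port: 8080,                                         //placeholder: my local cloud port
    path: '/v1/events?access_token=YOUR_ACCESS_TOKEN',  //placeholder token
    headers: { 'Accept': 'text/event-stream' }
}, function (res) {
    res.setEncoding('utf8');
    res.on('data', function (chunk) {
        //each server-sent event arrives as "event: <name>\ndata: <json>\n\n"
        console.log(chunk);
    });
});

Changing the path to /v1/devices/events is exactly the call that does not work for me.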

On the hardware side, I coded it as:

Spark.publish("my-event",publishString,60,PRIVATE);

In my test there is only one core and one user account. I have also reviewed the source code of both the server and the protocol, and it looks like the message from the hardware never reaches the logic that handles the userid. Below is what I found; could you read it and help me? My reading may not be correct, I am just trying to figure out whether the mistake is mine or in the system.

The event subscribe logic:
spark-server -> CoreController -> the subscribe method is used to register an event with the global publisher, and it clearly prepends the userid to the event name.

subscribe: function (isPublic, name, userid) {
    if (userid && (userid != "")) {
        name = userid + "/" + name;
    }


//    if (!sock) {
//        return false;
//    }

    //start permitting these messages through on this socket.
    global.publisher.subscribe(name, this);

    return false;
},
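
To make the naming concrete, here is a small standalone sketch of that key derivation (my own illustration, not code from the repo):

function subscriptionKey(name, userid) {
    //mirrors subscribe(): private events are namespaced per user
    return (userid && userid != "") ? userid + "/" + name : name;
}

console.log(subscriptionKey("my-event", "alice")); // "alice/my-event"
console.log(subscriptionKey("my-event", ""));      // "my-event"

So for a private event to reach its subscriber, the publish side has to supply the same userid when it calls global.publisher.publish().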

The event publish logic:
In spark-protocol -> SparkCore.js, the method named onCoreSentEvent (line 1055) publishes the event as below, but obj.userid is never set anywhere in the method.

onCoreSentEvent: function(msg, isPublic) {
    if (!msg) {
        logger.error("CORE EVENT - msg obj was empty?!");
        return;
    }

    //TODO: if the core is publishing messages too fast:
    //this.sendReply("EventSlowdown", msg.getId());


    //name: "/E/TestEvent", trim the "/e/" or "/E/" off the start of the uri path

    var obj = {
        name: msg.getUriPath().substr(3),
        is_public: isPublic,
        ttl: msg.getMaxAge(),
        data: msg.getPayload().toString(),
        published_by: this.getHexCoreID(),
        published_at: moment().toISOString()
    };

    //snap obj.ttl to the right value.
    obj.ttl = (obj.ttl > 0) ? obj.ttl : 60;

    //snap data to not incorrectly default to an empty string.
    if (msg.getPayloadLength() == 0) {
        obj.data = null;
    }

    var lowername = obj.name.toLowerCase();
    if (lowername.indexOf("spark") == 0) {
        //allow some kinds of message through.
        var eat_message = true;

        //if we do let these through, make them private.
        isPublic = false;

        if (lowername == "spark/cc3000-patch-version") {
        }

        if (eat_message) {
            //short-circuit
            this.sendReply("EventAck", msg.getId());
            return;
        }
    }


    try {
        if (!global.publisher) {
            return;
        }

        //NOTE: obj.userid is never assigned anywhere in this method,
        //so publish() always receives undefined here
        if (!global.publisher.publish(isPublic, obj.name, obj.userid, obj.data, obj.ttl, obj.published_at, this.getHexCoreID())) {
            //this core is over its limit, and that message was not sent.
            this.sendReply("EventSlowdown", msg.getId());
        }
        else {
            this.sendReply("EventAck", msg.getId());
        }
    }
    catch (ex) {
        logger.error("onCoreSentEvent: failed writing to socket - " + ex);
    }
},
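
If my reading is right, a fix would need to resolve the owning user before publishing, so the publish-side key can match the "userid/name" key built in subscribe(). Something like this sketch (getCoreAttributes and ownerID are names I am assuming for illustration, not the actual spark-server API):

var coreid = this.getHexCoreID();

//resolve the core's owner before handing the event to the publisher
var attrs = global.server ? global.server.getCoreAttributes(coreid) : null;
obj.userid = (attrs && attrs.ownerID) ? attrs.ownerID : null;

global.publisher.publish(isPublic, obj.name, obj.userid, obj.data,
    obj.ttl, obj.published_at, coreid);

Without something like this, publish() is always called with obj.userid undefined, so a private event can never match any "userid/name" subscription.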

@yuanetking, I don’t believe subscriptions are supported on the local cloud yet. Perhaps @kennethlimcp can clarify?

So pub is available but sub is not.

The local :cloud: assumes a simple scenario where the setup is probably one user, or a group, sharing a bunch of cores.

So the events firehose only works with the generic global API endpoint of http://your_ip_add/v1/events

I never tried /v1/devices/events, but I'm assuming it's a filter that only looks at the stream from your account. That said, the description I mentioned above tells you that it's not available :wink:

But I might be wrong :smiley: Gotta test it out first!


Seems like the API is stubbed in https://github.com/spark/spark-server/blob/master/js/views/EventViews001.js#L204

but this line looks weird to me: https://github.com/spark/spark-server/blob/master/js/views/EventViews001.js#L217

but let’s see what @dave has to say :smiley:

Hi All,

Sorry about the slow response, we’ve been really deep in a few projects the last few weeks, wow.

Hmm, private events should work (sounds like a bug), but there also isn’t the concept of ownership / distinct user privacy on the local cloud yet, so public / private events should be effectively the same, yeah? It might be a little while before I’m able to work on the local cloud again and fix bugs, but maybe I can talk @nexxy or @jtzemp into helping :slight_smile:

Thanks,
David

I’m definitely happy to help if this is a bug or something we need to implement for proper pub/sub support for spark-server :smile:


Hi @nexxy,

It seems this is not a tiny issue; some logic may be missing. Imagine an event arriving from a Spark Core: the local server should find the corresponding user information and then send the event to that user. So we need a hash table describing which hardware belongs to which user, like below:

Hash[Device, User].
Hash[device1, chris]
Hash[device2, linda]
Hash[device3, john]
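
In JavaScript terms, I mean something like this (an illustration only, not existing spark-server code):

var deviceOwners = {
    device1: "chris",
    device2: "linda",
    device3: "john"
};

//look up the owner on publish so the event can be scoped to that user
function ownerFor(coreid) {
    return deviceOwners[coreid] || null;   //null means an unclaimed device
}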

But the system seems to be missing this mapping; that is also what I described above. Maybe my assumption is wrong, or perhaps you can point me to the code that implements this logic.

@yuanetking,

Thank you!

I’ve been looking into this, and believe I have figured out a fix. I will update the thread when I’ve pushed an update. :smile:

Hi Nexxy,

Is that fixed, or are you still working on the solution? I have a similar issue right now in my local cloud.

Please let me know.

Thanks,
Satyen

@nexxy Same issue here, 45 days after the last message in this topic went unanswered :disappointed:

Do you have any rough estimate of when you are planning (or not) to add support for this feature to spark-server, and for the other really important feature I miss: Photon OTA upgrade support in the local :cloud:? I don't remember whether that one ever worked with the Cores, so it may simply be failing with the current Photon (with 0.4.x develop firmware).

I would be glad to collaborate with you as a beta tester for any improvements involving the Photon and the local :cloud: (aka spark-server).


Hi!

What I came to know from the Particle support group is that they are not working on any issues as of now. There is another issue we found: the local cloud does not work on Node.js 0.12, and I still have not seen any response on that issue…

Thanks,
Satyen

@satendra4u, @jrodas,

Hey!

Apologies for not responding to this thread sooner. We are in the process of refactoring the necessary code to allow us to better support spark-server (which will of course be renamed particle-server). I will make sure to communicate this clearly when the new version of the server (and protocol) is released. In the meantime I am happy to help facilitate any pull requests or assist with small updates on a 1-on-1 basis.

I did take a stab at fixing some of the eventing in the server, but there are definitely still some discrepancies between this implementation and the way things work in the cloud. They will become more congruent as time goes on :smile:

Thanks for pinging me on this again!

— nexxy


@nexxy A new particle-server fork sounds great to me :+1: Let’s see if it becomes a reality soon :smile:

I (and I imagine many others) am really interested in knowing what that particle-server will include and when you expect a first release. Please keep us informed.

I’m also particularly waiting to see OTA support working again with the Photons on the local :cloud:. Just the remote binary send/upgrade, not the online compiling, which is 30 dB less important to me :smile:, since we can compile manually and locally to generate new firmware releases.

Actually, I consider OTA support not working with the Photon a critical issue. I don't know whether you agree or not.

Thanks, Nexxy! Any update on which of the previous issues will be fixed in particle-server once you are done with the refactoring?


Any progress/timeline on particle-server or the OTA updates?


I made a pull request to solve this problem. It now works on the HTTP side, but not yet on the CoAP side.

Solved the CoAP side.
