Local cloud event stream

Hey,

I’ve installed a local cloud and got my cores to connect to it on a local network.

Now my Spark cores are publishing an event, which I can read successfully through the JavaScript API from the Spark cloud, but when I try to read it from my local cloud I get this error:

{
    "code": 400,
    "error": "invalid_grant",
    "error_description": "The access token provided is invalid."
}

Note that I can successfully log in to my local cloud with the JavaScript API, either with a username/password or an access token, and that this error occurs AFTER I have successfully logged in.

Also, if I do a GET /v1/devices/ request with the access token to my local cloud, I get an instant successful result, but with GET /v1/events/ the server doesn’t respond with anything (it doesn’t even log a request in the console).

Any ideas why this is happening?

Okay, so I had to move the event listener into the login callback, and I don’t get this error anymore. But I also don’t get any events.

I noticed that when I call getEventStream() with a specific core ID, I get this response:

{
    "error": "Permission Denied",
    "info": "I didn't recognize that core name or ID, try opening https://api.spark.io/v1/devices?access_token=df589dffa0ffcee3c1b432871690709ca395ffb2"
}

The core is definitely online and registered with my local cloud. It’s odd that the link provided here points to the Spark cloud, not my local one. I’m not sure whether that is a generic response or whether it is for some reason checking the wrong cloud. I have apiUrl set properly in my config file, and other things seem to work fine.

What happens if you modify the returned link to point to your local cloud? What response do you get back?

I have a similar situation. The odd thing is that this command:

 curl "http://10.0.0.11:8080/v1/devices/5......7/AllValues?access_token=c....5"

works on one system, my Mac, but fails with the "code": 400 error on the system hosting the spark-server.

BTW: I have two Spark cores, and only one of them is getting the error. I’m going to try re-keying the one that’s failing.

I had to reboot the Mac, and now it has the same "code": 400 error.
When I restart the spark-server, it finds the same cores twice.
I’m going to try deleting the cores and starting again.