Local cloud a.k.a. spark-server updates


#21

In our repo I just implemented the device_claims endpoint. I think it will be a bit of work to reverse-engineer how the Particle implementation actually works.
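For anyone following along, here's a minimal sketch of the endpoint shape we're targeting, assuming it mirrors the public Particle API (POST /v1/device_claims returning a claim_code). The handler and port here are placeholders, not our actual implementation:

```js
const express = require('express');
const crypto = require('crypto');

const app = express();

// Assumption: the claim code is just a random string the server hands
// out and remembers until a device presents it back.
app.post('/v1/device_claims', (req, res) => {
  const claimCode = crypto.randomBytes(48).toString('base64');
  res.json({ claim_code: claimCode });
});

app.listen(8080); // 8080 is the spark-server default API port
```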

I think we should have the claiming up and running by the end of next week.

Keep in mind – our server has been completely refactored from the original and isn’t ready for production. We are hoping to have it ready sometime in February but we haven’t written any tests to see how it handles load/concurrency.


#22

@sud335, your project sounds interesting :+1:

But since I’m not a Particle employee and haven’t been involved in the dev of the local cloud server in any way, I can’t help you there.
On the other hand, I’m pretty sure that there hasn’t been much work done on the local cloud server by Particle in the last two years anyway (which I’ve been moaning about to Particle staff frequently :wink: ).


#23

Are Particle engineers planning to make the full version of the Particle server available for setting up a local cloud, including the missing API implementations that already work with the Particle Cloud? Is there a reason why the spark-server repository is not being updated?


#24

When I discussed the matter with @bryce a while back, he suggested tagging him so he could get a feel for how much the community needs this.


#25

For me, as long as the basics worked I would be happy, i.e. I can link a device to it, call functions, read variables, publish events, and subscribe.

Ideally I would love to be able to program through it for OTA updates, even if that needed the Atom-based local IDE to do it (it can connect to the Spark cloud, so why not a local cloud?).

Jamie


#26

@bigjme, that’s easily done via the CLI without needing anything special. And since the CLI is open source, you could even strip out that part and add it to your own project. I think @Moors7 has done something in that direction already.

Scrap that - that’ll still require Particle cloud connectivity :blush:
But there are other threads that used FTP and an SD card to do “pseudo OTA”.

An open question, though, is how you want to build the binary (via Particle’s build farm or locally).


#27

For me it would be local. I have a few projects that require a local cloud managing hundreds of devices, which may all have no internet, so I would need to be able to compile locally and push updates to all ~500 devices with no internet. Realistically, the part building the code would need to be able to self-update libraries from the cloud and pull through new firmware versions if it ever gets internet.

Jamie


#28

We’re trying to completely understand how the claim code is generated, but as far as we can tell it’s just a (basically) random unique string that gets saved to the device attributes on the server and to the memory of the device itself during SoftAP.

I don’t think the claim codes are derived from any keys, but I could be completely wrong there.
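If that’s right, generating one would be as simple as the sketch below. This assumes the code really is nothing more than random bytes; the 63-character length is a guess based on what the Particle tools seem to accept:

```js
const crypto = require('crypto');

// Assumption: the claim code is plain random bytes, no key material.
function makeClaimCode() {
  return crypto.randomBytes(48).toString('base64').slice(0, 63);
}

// Server side: remember outstanding codes; the device presents the code
// back after SoftAP and the server matches it to the claiming user.
const pendingClaims = new Map(); // claimCode -> userId

function issueClaimCode(userId) {
  const code = makeClaimCode();
  pendingClaims.set(code, userId);
  return code;
}
```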


Anyway, during the particle setup process, all it really does is set up the device Wi-Fi and try to claim the device.

You need to set the server key and device key before all this in order to connect to the private cloud.

```
# Point the CLI at your private cloud's server public key and address
particle keys server ..\spark-server\default_key.pub.pem IP_ADDRESS

# Regenerate and upload new device keys for your core
particle keys doctor your_core_id
```

#29

@bigjme - we are planning on at least handling OTA updates in our fork.

When the device is about to be flashed, we will check the firmware version and compare it against a JSON file (which will need to be updated any time Particle updates their firmware). If the device firmware is older, it will grab the correct firmware update from a bin folder that you will need to keep in sync with Particle’s releases.

This isn’t super ideal, but it will make sure that the firmware is only updated when you actually want it to be.
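Roughly, the check would look like this; firmware-versions.json and the bin/ layout are illustrative names, not our final file structure:

```js
const fs = require('fs');
const path = require('path');

// Illustrative manifest mapping a platform to its newest system firmware,
// e.g. { "6": { "version": 108, "binary": "photon-system-part1.bin" } }.
const manifest = JSON.parse(
  fs.readFileSync(path.join(__dirname, 'firmware-versions.json'), 'utf8'),
);

function getSystemUpdate(platformId, deviceVersion) {
  const latest = manifest[platformId];
  if (!latest || deviceVersion >= latest.version) {
    return null; // device is already up to date
  }
  // Binaries live in a bin/ folder you keep in sync with Particle releases.
  return fs.readFileSync(path.join(__dirname, 'bin', latest.binary));
}
```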


For the local compile work – this shouldn’t be too hard to implement but it’s the lowest priority on our list. I think all you’d really need to do is pipe the request body from /v1/binary into a compiler and keep a copy of the most recent firmware on disk.
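As a very rough illustration of the piping idea (the endpoint name, raw-body handling, and compiler wrapper script are all assumptions here, not how Particle’s build farm actually works):

```js
const express = require('express');
const { execFile } = require('child_process');
const fs = require('fs');

const app = express();

// Assumed endpoint shape; the real build flow (multipart sources,
// multiple files) is more involved than this.
app.post('/v1/binaries', express.raw({ type: '*/*', limit: '5mb' }), (req, res) => {
  fs.writeFileSync('/tmp/input.ino', req.body);
  // compile-firmware.sh is a hypothetical wrapper around a locally
  // installed gcc-arm toolchain.
  execFile('./compile-firmware.sh', ['/tmp/input.ino', '/tmp/firmware.bin'], (err) => {
    if (err) {
      return res.status(400).json({ ok: false, errors: [err.message] });
    }
    res.sendFile('/tmp/firmware.bin');
  });
});

app.listen(8080);
```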

If you want to make a PR for this, we’d love the extra help.


#30

Would the bin folder be a manual update? If so, I don’t see that being an issue as long as the links to fetch the binaries from are in a readme file or something (for ease of use).

OTA would be great even if it partly relied on the Particle cloud initially and pushed locally (just as a first step if possible, better to have something than nothing :slight_smile: ). As for testing the local server under high demand, how much testing has been done? I may shortly be in a position to have 8 Photons connected to it, with requests to a variable and a function attempting to run every 100ms on each device for long periods of time, say a week or more, if that would help.

Jamie


#31
  1. Yeah, that’s the plan. I wish that I could automate a bunch of this to sync with the Particle APIs but I don’t think there are any endpoints to get the data.

  2. We are just wrapping up the device claims work and then will be moving on to the OTA updates work. Outside of figuring out what addresses and parameters need to be passed when sending the binary, the OTA updates code should be fairly straightforward.

  3. @wdimmit did some initial testing on scalability (he caught a few bugs) but I think we will probably just need to write some Node code that spins up a few thousand CoAP connections (rough skeleton below). I’m also worried about high-latency/low-bandwidth scenarios and we will try to account for that in the test script.
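Something along these lines, just at the raw connection level; real devices layer encrypted CoAP on top, so this only exercises connection handling, not the full protocol:

```js
const net = require('net');

const HOST = '127.0.0.1'; // assumption: server running locally
const PORT = 5683;        // spark-server's default device port
const CONNECTIONS = 2000;

let open = 0;
for (let i = 0; i < CONNECTIONS; i += 1) {
  const socket = net.connect(PORT, HOST, () => {
    open += 1;
    if (open % 100 === 0) {
      console.log(`${open} connections open`);
    }
  });
  socket.on('error', (err) => console.error('connect failed:', err.message));
}
```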


#32

More than happy to help stress test if I get the go-ahead for my project. My current project has timing in all the loops, and my curl requests are all timed to see how long they take, so I can likely provide some logged statistics at various distances and Wi-Fi signal strengths.

What’s the current response time for a variable query on the local cloud? I can provide some PHP cURL code to time the request if anyone wants it for testing.
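The idea is just timing the HTTP round-trip; sketched here in Node rather than PHP so it matches the rest of the thread (server address, port, and token are placeholders):

```js
const http = require('http');

// Placeholder URL; follows the public Particle API shape for variables.
const url = 'http://SERVER_IP:8080/v1/devices/DEVICE_ID/VAR_NAME?access_token=TOKEN';

const start = process.hrtime.bigint();
http.get(url, (res) => {
  res.resume(); // drain the response body
  res.on('end', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`variable query took ${ms.toFixed(1)} ms (status ${res.statusCode})`);
  });
});
```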

Jamie


#33

Awesome, we’ll take any testing we can get.

Make sure you are on the dev branch when testing. We are rapidly making improvements so that’s the best place to make sure you get all the latest fixes.


#34

Great to hear, looking forward to it over the coming months

Jamie


#35

Amazing. A few months ago there seemed to be no hope for a local server.


#36

Yeah, it’s looking like we should match the Particle API (outside of Organizations) by early February.

The only things we are currently missing are products, customers, local compile, and libraries. I don’t think it makes sense to implement libraries unless we are fetching them right from the Particle cloud, so it probably won’t get implemented unless we come up with a nice way of doing it.

The rest of the features are working great. If you just want webhooks and OTA system updates, the current version should have you covered. @bigjme - we actually figured out a way to fetch all the system firmware binaries so you don’t even have to think about how it works.

@sud335 - we also have the device_claims working in the dev branch if you want to check that out.


#37

Amazing work! Time to go order some photons and get some servers set up


#38

Awesome. @abhijit @synergylabs We need to try out the device_claims API from this version of the Node.js server. We will let you know if we face any issues. Thank you for the remarkable work @Brewskey.


#39

@Brewskey This looks great and I would really like to try this out now (even though your update is still in dev). I ran into some issues installing it on my server - just to make sure I’m not making anything harder on myself, what version of Node and which OS are you using?


#40

I am on Windows with the latest Node.

Can you post in the issues on the Brewskey/spark-server project? I think the issue you are seeing has been fixed as of Wednesday but I’m not 100% sure.