Local cloud a.k.a. spark-server updates

Any official Particle response as to the future of spark-server? I’d appreciate a response, even if it’s a “No, we don’t plan to support it further; it’s dead” response.

It is currently incompatible with the latest builds of the Particle CLI. All the steps that used to work for switching clouds on my Photons no longer do, because the CLI calls updated methods that do not exist on spark-server (example: see screenshot below).

Cheers

@bryce might be the one to ask. I had a chat about the future of spark-server a while ago, but it’s a low-priority thing unless the community pushes it up the list.

Thanks @ScruffR

I posted my case for more spark-server support on GitHub and to @Dave; I didn’t see a need to repeat it here, so here’s the link: https://github.com/spark/spark-server/issues/74

There are many reasons why a local deployment is necessary, and most of the time we are bound by security restrictions imposed by our clients. It’s been two years since the repository was last given a look.

@jgeist, @frlobo, @tslater – folks who have discussed local cloud matters here in the past – can we make a case for this to be heard?

Hi @chuank,

I’m aware of it, and I’m one of a few engineers at Particle who could quickly investigate and fix the issue. I’m just really slammed for time these days, so I’m trying to make space to get to it soon. Sorry about the wait!

Thanks,
David


I’m not at all schooled in these matters, but the ability to have a mobile local cloud without the need for the internet, say out in the country, would be cool. And as mentioned by @chuank, a local cloud could be more secure and reliable in some respects, depending of course on the person maintaining it. But I wouldn’t want to give up the Particle cloud either. And I know that firmware development takes up the lion’s share of Particle’s efforts, as it should.


@Dave thanks again for this.

If it helps, there were also some changes from @straccio’s PR for spark-server and spark-protocol that fixed a previous issue with credential authentication.

I’m not at the level, nor do I have the time, to commit PRs to spark-server, but I’ll be very willing to test and report back on updates.

What can I say! I wrote my own Node server for my application. However, it’s not that robust and doesn’t support encryption. I would love to have the official server updated to be fully capable and compliant!

I wrote my own because I gave up on waiting. I understand that this might be a low priority thing. So…

I sense this is more of a strategic decision, as providing a local server that works might severely undercut Particle’s own cloud-based offerings. But this decision is also walling off developers who need to build encrypted ‘off-the-grid’ setups. Some of us (like @frlobo) wrote custom implementations. I’ve moved to using MQTT over TLS (on alternative hardware platforms) for offline projects, but it’s just such a pity.
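For anyone weighing the same workaround, here is a minimal sketch of that MQTT-over-TLS approach in Node.js using the mqtt package. The broker hostname, topic names, and certificate paths below are placeholders, not anything from a real deployment:

    const fs = require('fs');
    const mqtt = require('mqtt');

    // Connect to a private broker over TLS. Everything here is
    // illustrative: substitute your own broker, certs and topics.
    const client = mqtt.connect('mqtts://broker.local:8883', {
      key: fs.readFileSync('client.key'),
      cert: fs.readFileSync('client.crt'),
      ca: fs.readFileSync('ca.crt'),
      rejectUnauthorized: true, // verify the broker's certificate
    });

    client.on('connect', () => {
      client.subscribe('sensors/#');
      client.publish('sensors/temperature', JSON.stringify({ c: 21.5 }));
    });

    client.on('message', (topic, payload) => {
      console.log(topic, payload.toString());
    });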

Not every project requires, or is even able to have, an internet connection. Many clients wall off their networks for security reasons, or require enterprise-level authentication (a separate issue discussed elsewhere).

I only hope that we get basic operability back, and that it is maintained, so that offline projects can continue to be developed using Particle products.

A suggestion: why not declare support for a bare-bones model of the local cloud? Allow connecting, claiming, rudimentary SSE (no wildcards, prefix filtering, or webhooks needed), and calling variables and functions.
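To illustrate how small that bare-bones surface could be, here is a hypothetical sketch of a single no-filter SSE stream in Node.js with Express. This is not spark-server code; every name in it is invented:

    const express = require('express');
    const EventEmitter = require('events');

    const app = express();
    const bus = new EventEmitter(); // device events would be published here

    // Rudimentary SSE: one firehose stream, no wildcards or prefix filters.
    app.get('/v1/events', (req, res) => {
      res.set({
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
      });
      const onEvent = (name, data) =>
        res.write('event: ' + name + '\ndata: ' + JSON.stringify(data) + '\n\n');
      bus.on('event', onEvent);
      req.on('close', () => bus.removeListener('event', onEvent));
    });

    // Elsewhere in the server, an event would be published like:
    // bus.emit('event', 'temperature', { data: '21.5', coreid: 'abc123' });
    app.listen(8080);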

If product cannibalisation is a concern, perhaps offer the local cloud as a single-seat licence that we can purchase and activate? I’d gladly do that if it guaranteed continued support for the local cloud.


Hi @chuank,

We’re not strategically trying to prevent the local server from working. We’re not worried about cannibalizing the production cloud; we’re committed to offering an open-source version of the cloud.

I’m just really crazy busy. I’m sorry about the wait; I said I’d fix it, and I just need to scrape together a few hours.

Thanks,
David


@Dave, I’m working on a stasis field generator that could help with the time thing. Unfortunately, it requires the local cloud to work! :smirk:


Well in that case:

Okay, so I just spent some time loading 0.6.0 onto my local Photon and testing it against the local server. I used the latest copy of the particle-cli and the latest system firmware, and my Photon connected no problem. I’m a little confused; maybe make sure you’ve pulled the latest copy from GitHub, and remove/reinstall your node_modules?

Please send me any extra info you can think of (Node version, CLI version, etc.) if reinstalling the modules from source and reflashing your server key doesn’t fix it.
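For reference, reinstalling from source amounts to roughly the following, assuming a git checkout of spark-server and its default main.js entry point:

  • cd spark-server
  • git pull
  • rm -rf node_modules
  • npm install
  • node main.js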

Thanks,
David

Are you responding to your parallel universe self? :wink:


Hi @Dave,

My apologies if I sounded pushy – we all understand that time is always short for all of us. The community feels like the best place to discuss such matters, and it’s great to hear that the local cloud will stay functional.

Perhaps @peekay123 did manage to get his stasis field generator working. More likely it was a brain unscrambler tuned to my cranial resonant frequency, because something worked today!

What fixed the issue was re-generating the device key and re-sending it to my local cloud. I manually un-provisioned the previous device key on the local cloud before repeating particle keys send. I’m aware that particle keys doctor streamlines this process, but I’ve never been successful with it on the local cloud.

My local cloud’s server key remained the same throughout. I’m using Node 4.4.2 on the server (I have to, because other services on the server require it), and the CLI is 1.16.0.
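To be explicit about the un-provisioning step: with spark-server’s default file-based key store, it amounted to deleting the device’s stale public key before sending the new one. Assuming the default core_keys directory (adjust if your layout differs):

  • rm core_keys/[devid].pub.pem
  • particle keys send [devid] [devid].pub.pem
  • restart spark-server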

In summary, this was my old process for switching clouds on devices:

  • bring device to DFU mode
  • particle keys server xyz.pub.der IP_ADDRESS
  • if never claimed on the local cloud, do the particle cloud claim / name thing
  • reset device, restart spark-server, and good to go

Normally that works, but I’m finding that I now need to do this first (or at least this fixes the issue if the above steps don’t work):

  • bring device to DFU mode
  • particle keys new [devid]
  • dfu-util -d 2b04:d006 -a 1 -s 34 -D [devid].der
  • particle keys send [devid] [devid].pub.pem
  • reset device, restart spark-server, and good to go

Hey all, I’m going to do a full post once I get closer to the finish line, but I have a large refactor of the protocol and server code about 70% finished. I have a programmer working on the server code and writing API tests, and I’m handling most of the work of simplifying the protocol code.

You can check it out here:
https://github.com/Brewskey/spark-server
https://github.com/Brewskey/spark-protocol


Hi @jlkalberer,

I’ve been following your project with interest on GitHub. Do you have any further information to share about your progress or goals?

Cheers.

Yeah, we’re still steadily working on this. I wouldn’t recommend using it right now, as we haven’t built out tests for everything and most of the code has been completely rewritten/refactored. I have one engineer working on it full-time, so it’s moving along.

We want to get this to parity with the main Particle cloud, except for organizations. I haven’t decided whether we want to add support for compiling source on the server.

The end goal for me is to deploy this on Azure alongside my other server code. With this rewrite you will be able to easily swap out the different stores (core keys, users, etc.) for a REST API or some other type of remote storage. In my case I’ll be using my existing REST API for user auth.
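To give a flavour of what that swappable-store idea could look like, here is an illustrative sketch; this is not the actual Brewskey interface, and every name below is invented:

    // An abstract store the server talks to; deployments plug in a
    // file-backed, database-backed, or REST-backed implementation.
    class DeviceKeyStore {
      async getPublicKey(deviceID) { throw new Error('not implemented'); }
      async setPublicKey(deviceID, pem) { throw new Error('not implemented'); }
    }

    // Example remote-storage variant using Node 18+'s built-in fetch.
    class RestDeviceKeyStore extends DeviceKeyStore {
      constructor(baseURL) {
        super();
        this.baseURL = baseURL;
      }
      async getPublicKey(deviceID) {
        const res = await fetch(this.baseURL + '/keys/' + deviceID);
        return res.ok ? res.text() : null;
      }
      async setPublicKey(deviceID, pem) {
        await fetch(this.baseURL + '/keys/' + deviceID, {
          method: 'PUT',
          body: pem,
        });
      }
    }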

Hi,

We are students and researchers from Carnegie Mellon University, Pittsburgh, working in Synergy Labs under Prof. Yuvraj Agarwal on an IoT project called Giotto (www.iotexpedition.org). We have built a sensor package around the Photon (P0) that streams high-frequency data from 12 different sensors over an encrypted channel. We have 100 devices manufactured and plan to scale this to thousands. Given the scale, the privacy requirements, and the throughput needed, we have to use a local nodejs server setup, including claiming devices on a per-user basis. We want to do the provisioning (key setup, claiming, etc.) over the air, without needing serial communication (DFU or USB).

In trying to do so, we are using a modified version of the Tinker app to configure and claim the device with the local nodejs server. We do this by changing the hostname the app uses for cloud APIs to our local server. However, we found issues with some of the cloud APIs on the local nodejs server.

  1. We want to know how to change just the Wi-Fi credentials of a Particle Photon (one that is configured and connected to the local cloud) using the Particle Tinker app or the Particle setup SDK.

  2. The /v1/device_claims API used for getting a claim code is not available in the open-source version of the spark server, which hinders the claiming process with the Tinker app. The same problem occurs when configuring and claiming a device using particle-cli setup with a local spark server. Can you direct us to the API definitions that would allow us to claim a device to a local spark-server instance using the Tinker app?

  3. As per our analysis, we need to generate the ‘magical’ claim code (a 64-byte string) the way the global Particle cloud does. So far we have instead been using the particle keys commands with the RSA public keys (.pem, .der) to claim the device with the local server. Is the claim code generated from these keys? If so, how can we generate it ourselves? (See the sketch after this list for one guess at how this might work.)
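One guess, to make question 3 concrete: if the claim code is just an opaque random token registered against a user when it is minted (rather than something derived from the device keys), a minimal /v1/device_claims stand-in in Node.js with Express might look like this; all names below are invented:

    const crypto = require('crypto');
    const express = require('express');

    const app = express();
    const pendingClaims = new Map(); // claimCode -> userID

    // Hypothetical stand-in for the missing /v1/device_claims route.
    // Assumes the claim code is random, NOT derived from the RSA keys.
    app.post('/v1/device_claims', (req, res) => {
      // 48 random bytes encode to a 64-character base64 string
      const claimCode = crypto.randomBytes(48).toString('base64');
      pendingClaims.set(claimCode, 'user-id-from-auth-middleware');
      res.json({ claim_code: claimCode });
    });

    // When a device later presents the code during its handshake, the
    // server would look it up here and attach the device to that user.
    app.listen(8080);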

@Dave @ScruffR It would be great if we could get an updated spark-server, since the repository is more than two years old. This would be immensely helpful for us.

@sud335 - I don’t know about #1, but for #2/#3 I have that about half implemented in my branch. Some other things came up, but I can have my engineer focus on finishing that next.

@Brewskey - Thanks for the quick response. You mentioned you have these about half implemented in your repository. Can you point us to the commit and file that contain the partial implementation of these APIs? Also, is there an ETA for a working version of spark-server with claiming?
Thanks again,
Synergy Labs