I’m guessing that the Particle team is hard at work fixing more pressing issues with the Photon.
There is basic functionality and connectivity in spark-server: device claiming, OTA for Cores, SSEs, calling cloud functions (though only up to four per device), and retrieving variables from devices.
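For anyone new to the local cloud, exercising those basics with the particle-cli looks roughly like this (the IP address, profile name, device name, and function/variable names below are just placeholders):

```
# point the CLI at the local spark-server instead of the Particle cloud
particle config local apiUrl http://192.168.1.10:8080
particle config local                      # switch to the 'local' profile

# exercise the basics against a claimed device
particle call my-device myFunction "on"    # call a cloud function
particle get my-device myVariable          # read a variable
particle subscribe mine                    # watch SSEs from your devices
```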
However, issues 53, 55 and 56 are crippling the functionality of the local cloud.
So, I’d like to post the following questions to see if anyone here might have a fix for these issues, which I have also raised on the spark-server (a.k.a. the local cloud) repository:
For those of us who have to run locally (corporate/institution WiFi security, latency, privacy), the current state of spark-server presents obstacles in deploying projects with those requirements.
It would be good to hear from the community on this. Am I the only one here clamouring for the local cloud to be fixed?
It looks like it’s not really a priority; they haven’t updated the local cloud in months. I’m guessing that’s because they want to monetize their own cloud? You can’t blame them, I guess. I’ve been hacking on my own branch a little and have it working with MySQL instead of the JSON files. I hope to contribute some of the other features back to the public repo soon.
I think it will get a look eventually, @nexxy recently labelled the issues as bugs so it’s just a matter of time before spark-server gets some attention. When that will be is anyone’s guess. @tslater I’ve wondered about your point too.
The plus side to all this is that it has forced me to dive into the source code and attempt quick (i.e. lousy) temporary fixes on my local deployment. It’s great if you enjoy learning this as a hobby, contribute back and have time to spare, but bad if you are pressed for time and want to push on with your actual project.
Not all of us tinker with Particle devices as weekend projects, and I worry that interest in Particle.io will wane once this realisation hits: the Particle.io cloud is great for prototyping (if you have a fast, reliable connection), but projects that impose access restrictions, for whatever reason, simply cannot be deployed against their cloud. This scenario comes up far more often than one might think!
Wondering if they will at least update spark-server to mirror the current cloud features on Particle.io.
I’m hopeful though - it’s a wonderful community and spark-server will probably become a priority when more folks start to handle locally-deployed projects and ask these same questions. At least I’m relieved to know it’s not just me.
I think it’s wonderful how much they’ve open-sourced and done, but you’re right, for an actual project with your own cloud there is still a ton to be done. To me, the biggest challenge will be making the cloud server highly available and able to scale horizontally. Right now it can only run as a single process.
@tslater I probably sounded like I’m complaining too much in my above posts, but only because of the potential that the Particle ecosystem has. I too have always been appreciative of their open-source model.
Indeed, there’s their sales contact for discussing options around private clouds, high availability and scaling, which is great if you’re developing a Particle-embedded product. But for developers looking to deploy smaller-scale setups (the proverbial ‘middle class’, I suppose) for one-off projects and deployments, we often end up sifting through the source code and rolling our own solutions. Which, to look at it positively, is still better than starting from scratch.
It provides support for clustering/load balancing of node.js scripts, but unfortunately clustering is not supported on node 0.10.x, which spark-server currently runs on. So I suppose when spark-server gets bumped up to node 0.12.x, we might be able to test this out.
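Assuming the tool in question is something along the lines of pm2, and assuming spark-server’s main.js as the entry point, the cluster-mode launch would presumably look roughly like this once the node version allows it (the instance count is arbitrary):

```
# rough sketch: run spark-server across several load-balanced worker processes
cd spark-server
pm2 start main.js -i 4 --name spark-server   # -i 4 = four instances in cluster mode
pm2 list                                      # confirm all workers came up
```

Whether device sessions and state would survive being spread across workers is a separate question, of course, given that the server was written to run as a single process.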
That’s correct! I do have spark-server on my radar as a priority for when I return from vacation (September).
The server is definitely a priority, but so is everything else! It’s a matter of prioritization, and this kind of feedback is exactly what helps us prioritize better. When I dig into bringing the server up to compatibility with the Photon, I will definitely be looking for input (and contributions) on what matters most to folks.
With that said, I would also like to make sure it’s clear that the Particle Cloud is ready for production as well. If you have concerns about access restrictions, we can help! As a matter of fact, the illustrious @jeiden recently made this lovely addition to our documentation: https://docs.particle.io/guide/how-to-build-a-product/authentication/
All of that is to say: if you have any specific concerns about authentication and security on the Cloud, let’s chat about it!
I would also love to get a run-down of the biggest blockers for folks building products with the intention of maintaining their own servers/cloud.
@nexxy Personally, for my work, the current inability to flash OTA is a big drawback. I am about to deploy 20 Sparks in late November to global collaborators, many of whom are not going to be able to perform --usb flashes when I need to change things. (Well, they could probably manage it, but why would I want to burden the end user with installing the CLI and basically taking the device apart just to perform a flash?)
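To make the contrast concrete, this is roughly what each collaborator would have to do by hand today versus what a working local-cloud OTA would let me do from here (the device and file names are just examples):

```
# what each collaborator would need to run on their own machine today:
# put the device into DFU mode (blinking yellow), then
particle flash --usb my-app.bin

# what a working local-cloud OTA would let me do remotely, per device:
particle flash collaborator-device-01 my-app.bin
```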
One of the main reasons I was drawn to Particle (Spark) originally was that I could manage the particulars (code updates, etc) while my collaborators could just ‘plug and play’. Another plus was that I could host the server myself and have complete control and open source the project eventually. I originally prototyped on a Core, and it all worked quite well. Please make the Photon work as well as the Core with respect to the sparticle-server.
Second would be the firehose issue described above, though I could always find a way to parse/code around this I suppose.
Thank you for responding to our issues, I love my Photons, and the particle-cli is such an improvement over the spark-cli. Do you think you might have OTA flash up and running by October?
@nexxy Thanks for the feedback and hope you have a good break! That’s a great article by @jeiden and describes how a product might be designed with the Particle Cloud in mind.
My concerns point to situations when we have to deploy sans Particle Cloud, leaving us to deploy our own server using spark-server, for reasons explained above. So I hope spark-server gets the attention it needs. In any case, as far as priorities are concerned, I’d really like for sparticle-server (nice one @jgeist) to fix, in this order:
Thank you for elucidating your concerns as well! Just before I went on vacation, I was talking with David about working with him on getting the server back up to speed with the changes made in the cloud to support the Photon. I know this will be one of my priorities when I get back, so I will be sure to update relevant threads when I make progress.
Also I think your prioritization is spot-on. The OTA part is something I will definitely need some help from David on (as he is the one primarily responsible for the changes in the cloud). The other two things I believe we can get working on our own!
Hey! I’d love to hear more about your project! It sounds like you have something pretty epic in the works.
It sounds like we’re all pretty much on the same page about priorities for the server! I will definitely be making this a priority when I’m back in the saddle.
I’m always very happy to discuss complications and concerns, so thank you for taking the time to tell me about how things are for you! I’m also very happy to hear that you’re enjoying the updates to the particle-cli!
I don’t know what exactly I’ll be getting into come September; I know that there is a lot of work coming up for the Electron, but October sounds reasonable from here, and I will keep everyone updated on progress. I’m also happy to coordinate any other changes we might want to make to the sparticle-server if there is anything in particular you’d like to help with!
I haven’t had time to tackle all of these yet, but I wanted to post an update: handshakes and OTAs for the Photon should now work as expected on the spark-server / spark-protocol projects. Sorry about the delay!
So, you are confirming that OTA updates will now work with the local cloud? If so, that’s great! I assume I will need to update the spark-server software on my servers?
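If so, I’m guessing the update on my end is just the usual git/npm routine, something like the following (assuming a standard git clone of spark-server); please correct me if there’s more to it:

```
cd spark-server
git pull                    # latest server code
npm update spark-protocol   # pick up the handshake/OTA changes
npm install                 # refresh any remaining dependencies
node main.js                # restart the server
```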
I will check this out over the weekend! Thank you.
It worked super well. I do have a question, though: I’m currently flashing a binary (my-app.bin) that I compiled locally. Does that file include the core/system firmware? In other words, is this a flash of both the user app and the system firmware, or just my app?
@tslater it’s just the user firmware that gets uploaded in a local cloud OTA. So what you wrote and compiled (either locally or via the cloud) as a .bin gets sent through the OTA.
that brings up a good point though - i have not tried (via particle-cli), and am doubtful that local OTAs can send system firmware to a device.
when it comes to local compiling and management of a local device’s system firmware, it’s all still done using usb/dfu-util for me. unfortunately if it’s already mounted on top of a grain silo there’s no other way but to climb up and do a usb dfu-util upgrade at this point!
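for the record, the usb route on my bench is roughly the following (the version numbers are just whatever system firmware i happen to be targeting; particle flash --usb drives dfu-util so i don’t have to remember flash addresses):

```
# device in dfu mode (blinking yellow), then:
particle flash --usb system-part1-0.4.6-photon.bin
particle flash --usb system-part2-0.4.6-photon.bin
particle flash --usb my-app.bin   # user app last
```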
i’m glad i don’t have that situation to deal with, but this is certainly something that will come up for any hard-to-reach installations. can this be lodged as an important request for local cloud users?
If you’re already using the particle CLI with local cloud OTA, then sending system firmware should be fine. Just be sure to send part1 first, then part2. The device itself takes care of flashing these to the correct locations.
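Something along these lines, with the device name and firmware version as placeholders:

```
# over the local cloud: part1 first, then part2 once the device reconnects
particle flash my-photon system-part1-0.4.6-photon.bin
particle flash my-photon system-part2-0.4.6-photon.bin
```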
Thanks! I tried that and it worked!! That’s pretty crazy. Why is it divided into two chunks? It’s wonky that it has to reconnect to the cloud between the two flashes the way I did it, though. Can I do particle flash [id] system-part1-0.4.6-photon.bin system-part2-0.4.6-photon.bin in one line? If only one part flashes over, am I in big trouble?