I have just set up my own local cloud on a Raspberry Pi 2 Model B V1.1. It was not easy, but now I have the cloud working!
The main reason for the local cloud is that I need to use the Sparks in a building without network access, but I also hoped it would improve speed.
I was a bit disappointed when I noticed that the time from when I send a function call until the core executes it is about 5 seconds.
Is this normal? How can I debug this slow call? What could be wrong?
I’m using the Spark Core 1.0 with the Tinker firmware.
The server is running Raspbian Jessie, kernel: Linux raspberrypi 4.4.38-v7+ armv7l GNU/Linux
node is v6.9.4
spark-server from https://github.com/spark/spark-server (not updated in 2 years?)
I’d suspect a big chunk of these 5 seconds is the time the CLI needs just to start up and issue the call.
You could use a more direct approach to issue that call (e.g. curl).
I’d agree with @ScruffR here, if you’re running the CLI on the Pi, then you’re starting a new NodeJS process each time. Node tends to have a long spin-up time, but then is fairly performant once it’s running. Try a smaller app for testing, like curl ( https://docs.particle.io/reference/api/#call-a-function ), which might be more optimized on the pi.
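For reference, a direct call against the local spark-server (which listens on port 8080 by default) looks roughly like this. The host, device ID, and token below are placeholders you’d replace with your own, and the form field name (`args`) may vary between server versions, so check against the docs linked above:

```shell
# Placeholders (assumptions) -- substitute your own values:
PI_HOST="raspberrypi.local"
DEVICE_ID="0123456789abcdef"
TOKEN="1234"

# spark-server exposes the same REST API as the hosted cloud, on port 8080.
URL="http://${PI_HOST}:8080/v1/devices/${DEVICE_ID}/digitalwrite"

# Call Tinker's digitalwrite function, same as the CLI does:
curl --connect-timeout 2 "$URL" \
     -d access_token="$TOKEN" \
     -d args=D7,HIGH \
  || echo "request failed (is spark-server running?)"
```

Since curl is a single small binary, each invocation skips the Node startup cost entirely.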
Wow! Thanks @ScruffR and @Dave, with curl it is instant! No apparent delay at all!
So a 5-second delay is expected with the particle-cli running on a Pi? It’s not a big problem since I can use curl instead, but would it be possible to spin up node in the background and just use particle against it?
Nice! Glad that helped! You could certainly write a persistent app with an SDK (or just HTTP requests) that runs more quickly than a fresh node invocation each time, but I also wonder whether your Node installation is optimized for the OS / board you’re running on. It’s possible someone has a build of Node for it that’s faster?
Also, I imagine that as more modules and functionality have been baked into the CLI, the warm-up time has slowed a bit over time. So that’s something we might want to optimize for the CLI over the long term.