5 sec delay on local cloud, too slow


I have just set up my own local cloud on a Raspberry Pi 2 Model B V1.1. It was not easy, but now I have the cloud working!
The main reason for the local cloud is that I need to use the Sparks in a building without network access, but I also want to improve speed.

I was a bit disappointed when I noticed that the time from when I send

particle call 0123456789ABCDEFGHI digitalwrite "D7,HIGH"

until the LED lights up is 5 seconds. I timed the call several times and it is 5 ± 0.1 seconds every time. It also takes 5 seconds to turn the LED off.

0123456789ABCDEFGHI is replaced with my actual device ID above and below.

From the moment I issue the “particle call” command until I receive anything on the server is also about 5 seconds. This is what I see on the server side:

FunCall { coreID: '0123456789ABCDEFGHI',
  user_id: 'MY_USER_ID' }
FunCall - calling core  { coreID: '0123456789ABCDEFGHI',
  user_id: 'MY_USER_ID' }
::ffff: - - [Mon, 16 Jan 2017 23:26:13 GMT] "POST /v1/devices/0123456789ABCDEFGHI/digitalwrite HTTP/1.1" 200 116 "-" "-"

Is this normal? How can I debug this slow call? What can be wrong?

I’m using the Spark Core 1.0 with the Tinker firmware.
The server is running Raspbian Jessie, kernel: Linux raspberrypi 4.4.38-v7+ armv7l GNU/Linux
Node is v6.9.4
spark-server is from https://github.com/spark/spark-server (not updated in 2 years?)

// Fredrik Löfgren

I’d suspect a big chunk of those 5 seconds is the time the CLI needs just to start up and issue the call.
You could use a more direct approach to issue that call (e.g. curl).


Hi @TechnoX,

I’d agree with @ScruffR here: if you’re running the CLI on the Pi, then you’re starting a new Node.js process each time. Node tends to have a long spin-up time, but it is fairly performant once it’s running. Try a smaller tool for testing, like curl (https://docs.particle.io/reference/api/#call-a-function), which is likely better optimized for the Pi.
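For reference, a curl invocation against a local spark-server might look like the sketch below. The device ID, access token, and server address are placeholders you must substitute, and port 8080 is assumed to be spark-server's default API port; the `arg` form parameter follows the API docs linked above.

```shell
# Placeholders -- substitute your own values.
DEVICE_ID="0123456789ABCDEFGHI"
ACCESS_TOKEN="your_access_token"
API_BASE="http://192.168.1.10:8080"   # address of the local spark-server (port 8080 assumed)

URL="$API_BASE/v1/devices/$DEVICE_ID/digitalwrite"

# POST the function argument; --max-time keeps a dead server from hanging the shell.
curl --max-time 5 "$URL" -d access_token="$ACCESS_TOKEN" -d arg="D7,HIGH" \
  || echo "request failed (is the server reachable?)"
```

Since curl is a small compiled binary, its startup cost is negligible compared to spinning up a fresh Node process for each `particle call`.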


Wow! Thanks @ScruffR and @Dave, with curl it is instant! No apparent delay at all!

So the delay of 5 seconds is expected with the particle-cli running on a Pi? It is not a big problem since I can use curl instead, but would it be possible to spin up Node in the background and just run particle against it?


Hi @TechnoX,

Nice! Glad that helped! You could certainly write a persistent app with an SDK (or just HTTP requests) that runs more quickly than a fresh Node invocation each time, but I also wonder whether your Node installation is optimized for the OS/board you’re running on. It’s possible someone has built a faster version of Node for it.
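One low-tech way to approximate that "persistent" behaviour without writing an SDK app is a small shell function that wraps the HTTP request, so each call costs only a lightweight curl invocation rather than a Node startup. All names below are placeholders, and the local server's address and port are assumptions.

```shell
# Placeholders -- substitute your own values.
API_BASE="http://192.168.1.10:8080"   # hypothetical local-cloud address
DEVICE_ID="0123456789ABCDEFGHI"
ACCESS_TOKEN="your_access_token"

# call_fn FUNCTION ARG -- POSTs one cloud-function call to the local server.
call_fn() {
  curl -s --max-time 5 "$API_BASE/v1/devices/$DEVICE_ID/$1" \
       -d access_token="$ACCESS_TOKEN" \
       -d arg="$2" \
    || echo "call to $1 failed"
}

call_fn digitalwrite "D7,HIGH"   # turn the LED on
call_fn digitalwrite "D7,LOW"    # and off again
```

Sourcing this once in a login shell (or a script) means subsequent calls pay no interpreter warm-up at all.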

Also, I imagine that as more modules and functionality have been baked into the CLI, its warm-up time has slowed a bit over time. So that’s something we might also want to optimize for the CLI over the long term.


One thing that has probably slowed down CLI startup considerably is the check for a newer version that it performs on startup.


Ah, @ScruffR, my computer running particle-cli is not connected to the Internet. What is the timeout for checking for new updates?