I continue to have problems with the Spark TCP server/client. I am not familiar enough with Unix sockets to know for sure that the Spark is the problem, but it looks that way. I made crude logs of the transactions: when my host machine wants some info from the Spark, it spawns a child process to handle the transaction. The child asks for a connection to the Spark's TCP server, gets it, and sends the request for info. Most of the time the Spark replies, but eventually it stops responding. If I issue a "spark flash …" command from the CLI, effectively rebooting the Spark, the TCP comms are fine again for a while. So what I want to know is:
- Can I check the status of the TCP Server to know if it’s still running?
- Is there a way to restart the server? Can I call server.begin() more than once?
- Can I set a timeout on the server or client to close a client/server pair in case something gets stuck?
- Is there a server.stop() to kill the server, so it can be restarted cleanly?
- Is there a "spark reboot" command in the CLI?