Local spark-server bleeding edge, lots of blood!

I have a local server running on a Raspberry Pi and two Cores, both currently flashed with identical code. I also use a Mac to test communication with the Cores.

I have several issues scattered as replies to other posts. I thought it might be best to combine them into one post.

  1. USER ERROR ON THIS ONE -> I’m getting a “code”: 400 error from one Core only (see the curl sketch after this list). For a while I only got this error when talking to that Core from the RPi; the Mac would get the proper data. After rebooting the Mac, it now gets the error code too. --SORRY

  2. I tried to remove the Core and reclaim it to see if that would help. I ran “spark core remove XXX…”. Nothing happens; the command doesn’t return. Watching the server’s output, I can see it receives the request, but I never see a response.

  3. Running “spark list” on the Mac and on the RPi returns different information. A Core might be online on the Mac and offline on the RPi, or vice-versa, even though the commands are sent at about the same time. I don’t see a consistent relationship between the outputs.

  4. “Online” and “offline” don’t seem to mean much. A Core listed as online won’t respond to a request, and a Core listed as offline will.

  5. When I start the spark-server, it claims to have found each Core twice.
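
For reference, this is roughly how I’m talking to the Cores over the local cloud’s REST API. It’s a minimal sketch: the IP, port (8080 is the spark-server default, as far as I know), device ID, variable name, and token are placeholders for my real values:

```bash
# Read a cloud variable from one Core through the local spark-server on the RPi.
# 192.168.1.10 and port 8080 stand in for my server's address;
# DEVICE_ID, temperature, and ACCESS_TOKEN stand in for my real values.
curl "http://192.168.1.10:8080/v1/devices/DEVICE_ID/temperature?access_token=ACCESS_TOKEN"
```

This is the request that comes back with the “code”: 400 body from the one misbehaving Core, while the other Core returns proper data.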

Questions:

  1. Can I remove the info from core_keys to remove a Core from the server? (See the sketch after these questions.)
  2. Can someone explain, in as much detail as possible, how the server determines whether a Core is online or not?
  3. How important is the local server to Spark? With all the product expansion, will resources be put into smoothing out the bumps?
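
For question 1, here’s what I was considering trying. This is just a sketch, assuming the server stores one public key per Core as <core_id>.pub.pem under core_keys (that’s the layout I see in my install; yours may differ):

```bash
# On the RPi, in the spark-server directory, with the server stopped.
# XXX… stands for the ID of the Core I want the server to forget.
rm core_keys/XXX….pub.pem

# Restart the server; on the next handshake the Core should show up
# as unknown/unclaimed instead of using the stale key.
node main.js
```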

Just a dumb question before one of the more experienced fellas replies:

Have you made sure your Cores are not trying to connect to the Spark cloud anymore?

No. Once you set the server IP in the Core, it’s set; I don’t think even a factory reset returns communication to the cloud server.
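
For what it’s worth, this is how the Cores got pointed at the local server in the first place (with the Core in DFU mode). The key filename and IP are from my setup, so treat them as placeholders:

```bash
# With the Core in DFU mode, write the local server's public key and IP
# into the Core's external flash. default_key.pub.pem is the key my
# spark-server generated; 192.168.1.10 stands in for the RPi's address.
spark keys server default_key.pub.pem 192.168.1.10

# Going back to the Spark cloud means repeating this with the cloud's
# public key (cloud_public.der), not doing a factory reset.
spark keys server cloud_public.der
```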