Using Spark as Server or Client?

Hey Sparkers, I’m developing a somewhat complicated project (for me) that includes Spark Cores, a Local Server (C++), a Web Server (ASP.NET), and Browser Clients (JavaScript).

My question is about the communication between the Sparks and the Local Server.

Whenever a Spark Core connects to Wi-Fi, it should also connect to the Local Server and keep the connection open to receive commands. I have a basic connection algorithm for that:

1) The Local Server listens on port 1111 as a UDP server (I'm still not sure how to do that; see the sketch after this list)
2) The Spark broadcasts its IP via UDP to *.*.*.255
3) When the LS receives the datagram, it sends an available port number X back to the sender Spark via UDP on port 2222 and listens on port X for an incoming TCP connection
4) When the Spark receives the datagram containing port number X, it connects to the LS on port X over TCP
5) Once the TCP connection is established, the Spark sends its deviceID and waits for commands

Because I’m not sure how to write a server in C++, I’m having trouble implementing this algorithm. But I have another one, in which the Spark acts as the server and the LS connects to it (a firmware sketch follows the list):

1) The Local Server listens on port 1111 as a UDP server (I'm still not sure how to do that)
2) The Spark listens on port X for a TCP connection
3) The Spark broadcasts its IP and port X via UDP to *.*.*.255
4) When the LS receives the datagram, it creates a client socket and connects to the Spark's port X
5) Once the TCP connection is established, the Spark sends its deviceID and waits for commands
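Here is a rough firmware sketch of this second algorithm. Port 8080 as X and the /24 broadcast address are my own assumptions, and older firmware may call the IP accessor Network.localIP() instead of WiFi.localIP():

```cpp
TCPServer server = TCPServer(8080);  // step 2: X = 8080, a placeholder
TCPClient client;
UDP udp;

void setup() {
    server.begin();
    udp.begin(2222);  // bind a local UDP port so we can send; number is arbitrary

    // Step 3: broadcast "<ip>:8080" to x.x.x.255 (assumes a /24 subnet).
    IPAddress ip = WiFi.localIP();
    IPAddress broadcast(ip[0], ip[1], ip[2], 255);
    String hello = String(ip[0]) + "." + String(ip[1]) + "." +
                   String(ip[2]) + "." + String(ip[3]) + ":8080";
    udp.beginPacket(broadcast, 1111);
    udp.write((const uint8_t*)hello.c_str(), hello.length());
    udp.endPacket();
}

void loop() {
    if (!client.connected()) {
        client = server.available();           // step 4: the LS connects to us
        if (client.connected()) {
            client.println(Spark.deviceID());  // step 5: identify ourselves
        }
    } else {
        while (client.available()) {
            char c = client.read();            // step 5: read command bytes
            (void)c;  // ... dispatch commands here ...
        }
    }
}
```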

Note: there is one Local Server and many Spark Cores connecting to it, which is why I think the Local Server must behave like a server.

I hope I’ve expressed myself clearly. Please help me decide what to do.

Thanks a lot.

Why not use the local cloud and then use functions? Remove the rate limiting etc. since it’s your server… AES encryption… all the other goodies? AND it’s already built/tested/working.
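Something like this, for example (the function name "cmd" and the command strings are just illustrative):

```cpp
// Forward declaration of the hypothetical command handler.
int onCommand(String args);

void setup() {
    // Exposed through the cloud (local or hosted) and callable with
    // POST /v1/devices/<deviceID>/cmd
    Spark.function("cmd", onCommand);
}

void loop() {
}

// The cloud passes the POST body's "args" string to this handler.
int onCommand(String args) {
    if (args == "on")  { /* switch something on  */ return 1; }
    if (args == "off") { /* switch something off */ return 0; }
    return -1;  // unknown command
}
```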

The Spark Core, once WiFi.ready() is true, will always (try to) connect to the Spark Cloud, which is assumed to be reachable for GET requests. Connecting to a local or remote server does not terminate its socket to the cloud (unless the socket gets blocked for 20 s, in which case it disconnects).

Mine is something like this (perhaps it can give you some ideas) :smile:

  1. The remote server listens on port 1111 as a TCP server
  2. The Spark (as client) occasionally writes TCP data to the server (see the sketch below)
  3. The remote server occasionally sends HTTP GET and POST requests to the core via the Spark Cloud's REST API
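The core side of step 2 is roughly this (the server address and the 10-second interval are placeholders):

```cpp
TCPClient client;
IPAddress serverIP(192, 168, 1, 100);  // placeholder address for the server
unsigned long lastPush = 0;

void setup() {
}

void loop() {
    // Step 2: push a short status line every 10 seconds.
    if (millis() - lastPush > 10000) {
        lastPush = millis();
        if (client.connected() || client.connect(serverIP, 1111)) {
            client.println(Spark.deviceID() + ":alive");
        }
    }
    // Step 3 (the REST API side) happens on the server, not in firmware.
}
```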

So, I think you can have your core behave as both a server and a client, depending on what it is doing at the time.

Hootie81,

I’m already dealing with the WebRTC native library. Since I’m not good at C++, it would be hard to integrate it with the Spark Cloud. I may integrate it later, when the project gets larger.

Thank you for your advice.

Metaculus,

My cores shouldn’t connect to the Spark Cloud; they connect to my own application, which runs on the LAN. So I’ll use the second algorithm I mentioned above.
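If I understand the docs correctly, something like this should keep the cores off the cloud entirely (assuming a firmware build that supports SYSTEM_MODE; older cores may need another way to suppress the cloud connection):

```cpp
SYSTEM_MODE(SEMI_AUTOMATIC);  // don't connect to the Spark Cloud on boot

void setup() {
    WiFi.on();
    WiFi.connect();                    // join the LAN only
    while (!WiFi.ready()) delay(100);  // wait until we have an IP address
    // ... start the UDP broadcast / TCP server from the second algorithm ...
}

void loop() {
    // Spark.connect() is never called, so no cloud socket is opened.
}
```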

I think my question wasn’t clear enough. I’ll try harder next time :blush:

Thank you for your help.