Semantics of concurrent client connections to TCPServer

I’m super new to the Particle platform but have experience in sockets programming. I’ve been studying the TCPServer class, specifically the available() method. My testing seems to show that it doesn’t block but instead returns a TCPClient object. If there is a new incoming connection, calling the connected() method on that TCPClient returns true.

A different world from the sockets API, but different doesn’t mean wrong or bad. In sockets, I can define a backlog of connections to be held by the TCP server when I call the sockets listen() API. When I subsequently call the sockets accept(), the next available waiting connection is returned to me.
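For comparison, here’s a minimal POSIX sketch of that listen()/accept() pattern (the backlog of 4 and port 9876 are just illustrative choices, nothing Particle-specific):

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main() {
    int listenFd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9876);
    bind(listenFd, (struct sockaddr *)&addr, sizeof(addr));

    // The kernel queues up to 4 not-yet-accepted connections;
    // attempts beyond that are refused or retried, never swapped
    // in place of an established connection.
    listen(listenFd, 4);

    for (;;) {
        // Blocks until the next queued connection is handed to us
        int clientFd = accept(listenFd, NULL, NULL);
        if (clientFd < 0) continue;

        char buf[100];
        ssize_t n;
        while ((n = read(clientFd, buf, sizeof(buf))) > 0) {
            write(clientFd, buf, n);    // echo it back
        }
        close(clientFd);
    }
}

The point is that the backlog caps how many un-accepted connections the kernel will queue; extra connection attempts are turned away rather than displacing established ones.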

I wanted to experiment and see what the story is with TCPServer. To that end I wrote the following app:

TCPServer server = TCPServer(9876);
TCPClient client;
uint8_t buffer[100];

void setup() {
    Serial.begin(9600);
    // Wait for a keystroke over USB serial before starting
    while(!Serial.available()) Particle.process();
    Serial.printlnf("Listening on %s on port 9876", WiFi.localIP().toString().c_str());
    server.begin();
}

void loop() {
    if (client.connected()) {
        // Echo any pending data back to the connected client
        if (client.available() > 0) {
            int bytesRead = client.read(buffer, sizeof(buffer));
            Serial.printlnf("Read %d bytes", bytesRead);
            client.write(buffer, bytesRead);
        }
    } else {
        // No client connected (or it disconnected): accept the next one
        client = server.available();
    }
}

I then started creating connections to the Photon-hosted server from multiple terminals on my Linux system, running in each one:

nc 192.168.1.16 9876

I connected 1 client … all good. I connected 2 clients … all good. At 4 clients, still all good. Connecting the 5th client, something happened that I didn’t expect.

On connecting the 5th client, the 1st connected client was disconnected. On connecting the 6th client, the 2nd connected client was disconnected.

Before running this experiment, I was guessing at possible outcomes, but this wasn’t one I was expecting at all. My expectation (based on sockets programming) was that the 5th client would be rejected and the originally connected clients would continue. If one of the connections in the backlog was consumed, then there would be room to backlog one more new connection.

Again, I’m not saying anything is wrong here, but it does call into question the semantics of the APIs. Since the docs are open for editing, we could update them with more detailed semantics … but I feel that would be overly complex for new readers.


IIRC, internally the WICED stack (written by Broadcom/Cypress) handles a limited number (5?) of sockets, of which one is reserved for the Particle system connection, and the Wiring APIs (like TCPServer/TCPClient) only build on top of that.

But maybe @rickkas7 can chime in.

As you have the code written, it will behave in a somewhat unorthodox manner, but it’s fairly easily fixable.

There is a limit of 5 sockets in the WICED network layer, and it’s not currently changeable. One of them is normally used for the cloud connection.

The way you have it written, you keep accepting connections. What ends up happening is that if you accept a connection when all of the underlying sockets are used, it kicks one off, randomly. (In old versions of system firmware, it would sometimes kick off the cloud connection, which was kind of bad.)

The workaround is to only accept 4 connections at a time. It’s not perfect because I think the remote side will see a timeout rather than a reset, but it’s reasonable behavior.

For a real program, you’ll probably want four finite state machines, one handling each incoming connection; with that structure, not accepting a connection when you don’t have the resources to handle it falls out fairly naturally from the implementation.
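Here’s a minimal sketch of that approach, assuming the standard TCPServer/TCPClient Wiring API; MAX_CLIENTS and the clients[] array are just illustrative names, not part of the API:

const size_t MAX_CLIENTS = 4;

TCPServer server(9876);
TCPClient clients[MAX_CLIENTS];
uint8_t buffer[100];

void setup() {
    server.begin();
}

void loop() {
    bool haveFreeSlot = false;

    // Service each connected client; note which slots are free
    for (size_t i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].connected()) {
            if (clients[i].available() > 0) {
                int bytesRead = clients[i].read(buffer, sizeof(buffer));
                clients[i].write(buffer, bytesRead);    // echo it back
            }
        } else {
            clients[i].stop();    // release any half-closed socket
            haveFreeSlot = true;
        }
    }

    // Accept a new connection only if we can actually hold it
    if (haveFreeSlot) {
        TCPClient newClient = server.available();
        if (newClient.connected()) {
            for (size_t i = 0; i < MAX_CLIENTS; i++) {
                if (!clients[i].connected()) {
                    clients[i] = newClient;
                    break;
                }
            }
        }
    }
}

The key difference from the original sketch is that server.available() is only called when a slot is free, so a fifth connection attempt is simply left pending instead of evicting an established socket.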
