I have the following problem:
If the Photon does not recognise that the client has disconnected (e.g. I move the client out of the router's range, or I put it into flight mode; it is an iPad), the free memory starts to decrease and the Photon freezes.
I could make a workaround!
I have noticed that the free memory always decreases by exactly 1264 bytes every 100 ms.
So I just monitor it, and if it decreases three times in a row I close the TCP connection…
However, it would be nice if somebody could suggest a real solution.
long last_free_memory = 0;
int free_memory_error = 0;

and in the loop (every 100 ms):

long current_free_memory = System.freeMemory();
long diff_free_memory = last_free_memory - current_free_memory;
last_free_memory = current_free_memory; // remember the value for the next check

if (diff_free_memory == 1264) {
    free_memory_error++;
    if (free_memory_error == 3) { // three consecutive 1264-byte drops
        Serial.println("bug found");
        client.stop();
        connected_to_me = false;
        free_memory_error = 0;
    }
} else {
    free_memory_error = 0;
}
You are doing some checks to see whether the client is connected, but in the millis timer section you never check, yet you still call client.write() every 100 ms. It would seem that you should not indiscriminately write to the client if you know it is disconnected.
You could add an extra check in the millis timer section to verify that client.connected() == true. Or you could add a return; statement to your if (client.connected()) conditional for the case where the client is not connected.
Thanks for input!
Sorry, you are right! But please note that I shortened my original code to keep it simple and illustrate the problem. Even if I check before client.write(tcp_msg, 1152), the result is the same, because client.connected() returns true even when the client has disconnected abnormally (e.g. the client moves out of Wi-Fi range)… As you can see in my screenshot, the free memory is 44320 bytes while the client is connected. At 90001 ms the client disconnected, but the system still shows it as connected… and at the same time the memory decreases by 1264 bytes every 100 ms over the next 500 ms, and then the Photon freezes…
There are situations where the client becomes disconnected and TCPServer won't know that it happened. Basically, the TCP stack keeps buffering data on the assumption that the packets are just getting lost, and keeps retrying them.
This is partly a limitation of TCP itself: it cannot know for sure whether the network has really gone away or is just temporarily not responding.
In protocols with an application-level acknowledgement, you can tell from that. Including a timeout is also a good idea, in case you get no data at all from the other side.
Also, I’m pretty sure the -16 (buffer full) error only occurs on 0.6.x. The behavior is different in 0.7.0 and 0.8.0-rc.
Yes, I thought that was the problem.
I have also tried setTimeout(), but no luck, because I have only 500 ms to detect the problem.
However, monitoring the memory and closing the connection when I see a continuous decrease has solved the problem for now.
With the latest firmware I fortunately do not have to use the workaround, because the -16 (buffer full) error now shows up and I can close the connection after 3 seconds. However, closing the connection does not free up the memory. And if the device connects again and then disconnects again, we run out of memory and getWriteError() returns -8.
After a second the memory frees up… but only if there is a new connection, and the wait can also be 5-60 seconds…