TCPClient as simple logger [SOLVED]

Summary: For TCPClient to work nicely, it is absolutely required to wait for the server to start responding before the client.flush()/client.stop() operations.
Sample solution: https://community.spark.io/t/tcpclient-as-simple-logger/3014/22?u=ryotsuke
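
Roughly, the working pattern looks like this (the server address, port, path, and the one-second cap below are placeholders and assumptions, not the exact values from the linked post):

    TCPClient client;
    byte server[] = { 192, 168, 1, 10 };     // placeholder LAN address of the PHP backend

    // Send one GET request, wait (bounded) for the server to start responding,
    // and only then flush and close the socket.
    bool logValue(const String &query) {
        if (!client.connect(server, 9606)) {
            return false;                    // connect failed or timed out
        }
        client.print("GET /l.php?");
        client.print(query);
        client.println(" HTTP/1.0");
        client.println();

        // Wait up to ~1 s for the first response bytes instead of a fixed delay(50)
        unsigned long started = millis();
        while (!client.available() && millis() - started < 1000) {
            // busy-wait; other work could be done here instead
        }

        // Drain whatever the server sent, then close cleanly
        while (client.available()) {
            client.read();
        }
        client.flush();
        client.stop();
        return true;
    }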

The original question is below.

How should I use TCPClient as a simple data logger?
Currently I have a very simple PHP backend:

<?php
if ("START" == $_SERVER['QUERY_STRING']) {
    file_put_contents("log", "");  // ?START resets the log
} else {
    file_put_contents("log", $_SERVER['QUERY_STRING']."\r\n", FILE_APPEND);  // append one entry per request
}

No content is returned.
On the Spark I use something like this:

TCPClient client;
byte server[4] = { 0, 0, 0, 0 }; // placeholder; the real LAN IP goes here

void setup() {
    client.connect(server, 9606);
}

void loop() {
    if (client.connected()) {
        client.flush(); // we don't need to read anything, flush it
        client.println("GET /l.php?ping" + (String)millis() + " HTTP/1.0"); // this is received by the server only once :frowning:
        client.println();
    }
    // do stuff, at least for 0.1 seconds
}

Similar code has already hung my core into a factory-reset state. At this point I decided I need advice on how to proceed further.

The server/port parameters are verified and correct.

Can you add in the missing lines like client.print("Host: "); and test again? :slight_smile:
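
A full HTTP/1.0 request with those lines would look roughly like this (myserver.example.com is just a placeholder for the real host):

    // Hypothetical full request including the suggested Host header
    client.println("GET /l.php?ping HTTP/1.0");
    client.print("Host: ");
    client.println("myserver.example.com");   // placeholder host name
    client.println("Connection: close");
    client.println();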


Do I need to tcp.connect / tcp.print / tcp.flush / tcp.stop for every request?
The first request goes through fine without a Host header; IP+port is enough. So the issue is about constantly repeated requests.

The code below in loop() works mostly fine, but produces gaps of up to 40 seconds: http://screenshots.ryotsuke.ru/scr_eac2d6c02184.png
How can I get rid of delay(magicValue)? How can I make the connect timeout much lower, 1 second max?

void loop() {
    if (client.connect(server, 9606)) {
        client.println("GET /v.php?" + something + " HTTP/1.0");
        client.println();
        delay(50);                     // magic value I would like to get rid of
        while (client.available()) {
            client.read();             // drain whatever the server sent back
        }
        client.stop();
    }
}

Sorry but I can’t figure out what you are logging from the server. :X

The screenshot looks OK except for the few thicker areas. I can't see how the gap in between two of them is a delay…

The thicker areas are where data was received; the X axis is seconds since core start. The thin line is where no data was coming in.
It doesn't matter at all what I am logging.

The question is: what is the correct way to send repeated, frequent GET requests? Using a magic delay value does not seem right.

One quick way I can think of is to output to serial at some specific points and watch it on the serial terminal.

That might give a clue where the delay is coming from.
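
Something along these lines (the labels and measurement points are just examples) would show where the time is going:

    // Illustrative timing instrumentation; assumes Serial.begin(9600) in setup()
    unsigned long t0 = millis();
    bool ok = client.connect(server, 9606);
    Serial.print("connect took ms: ");
    Serial.println(millis() - t0);

    if (ok) {
        // ... send the GET request as before ...
        Serial.print("request sent at ms: ");
        Serial.println(millis());
    }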

Hi @ryotsuke,

Make sure you’re making a normal full HTTP request, and also cleaning up your socket when done. You also probably want a delay between when you open a connection to the server, otherwise you’ll be creating and destroying sockets (an expensive operation) very frequently. I had an example function here that also included basic http auth: https://community.spark.io/t/doorbell-to-send-jsonrpc/2805/2?u=dave

Thanks,
David

http://screenshots.ryotsuke.ru/scr_3d46f175e771.png Does this look OK?
Also, does client.connect have a configurable timeout?

I think you want String(voltage1) and not (String)voltage1, yeah?
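
i.e. something along these lines (mirroring the request line from the screenshot, with voltage1 as the value being sent):

    // Functional-style String constructor rather than a C-style cast
    client.println("GET /v.php?" + String(voltage1) + " HTTP/1.0");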

For some reason this

is working more stably (sending more frequently) than the new version: http://screenshots.ryotsuke.ru/scr_3d46f175e771.png
What do you think the reason could be?

Maybe, like @Dave said, the server is being hit with too many requests at one go.

Imagine the core repeatedly sending GET requests within a short span of time; there may be insufficient time to react.

Your server logs might give you some clue!


The server is on the local LAN and is capable of handling over 100 requests per second, so however fast the Spark can send, the server will be able to respond.

How about you add the serial output mentioned previously and get a visual on where the problem is?

And what does your server log say? Is it logging 100 requests per second? I guess not, since you previously mentioned the data has 40-second gaps.

We can solve this together faster with more details :smile:

The 40-second gaps happen when the Spark is sending requests. When I send from another PC in a loop, I'm able to get over 100 log entries per second.

Please, I want to know the internal firmware difference between client.flush() and while (client.available()) { client.read(); }, and only people who understand the firmware code can answer this.

Interestingly enough, @finsprings is also working on rapidly streaming data over HTTP from the core here: https://community.spark.io/t/non-cyan-flash-offline-core/3012/25 Maybe we should join forces? :smile:


One more question: are the timeouts for the connect function configurable?

If not, I feel it would be more reasonable to use a TCPServer and wait for requests instead of sending to the server. I don't want long blocking periods if the WiFi connection drops.
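
A rough sketch of that alternative, assuming the PC polls the core instead (the port and payload are placeholders):

    TCPServer logServer = TCPServer(9606);   // port reused from the thread for illustration

    void setup() {
        logServer.begin();                   // start listening for incoming connections
    }

    void loop() {
        TCPClient incoming = logServer.available();
        if (incoming.connected()) {
            incoming.println(millis());      // reply with whatever value needs logging
            incoming.stop();                 // close the connection right away
        }
    }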

Hmm. I am not sure about the connection timeout for TCPClient. I’m asking our firmware guys about it, but it looks like it’s not easily exposed at the moment.

Hmm, interesting. It appears the Spark is breaking the connection too early, so the requests actually take only ~500 µs.
Apparently when sending from another PC I am waiting until the request completes, and on the Spark I am not. I guess the PC can do multiple socket requests, and because of that it works much faster.
Adding a delay(50) before client.flush()/client.stop() makes things much more reliable. I still don't like the idea of using magic delay values; I would like a way to actually wait for the server to completely respond.
I have one idea to test… Will report in a while.

Got my Spark Core stuck in a loop and unable to flash firmware over the Spark Cloud.

Hahaha! We might want to have that fixed in the future :smile: