TCPClient Strangeness

Hi, I am having some trouble with the TCPClient class.

I am using a DS18B20 1-Wire sensor and reading temperature values from it, which is working fine. However, there are some delays and longish functions in the code.

When I tried to connect to a local server using TCPClient, it would always fail to connect. To see if there was some hardware conflict, I removed the sensor code and replaced it with a couple of approximate delays; however, the problem persisted.

I managed to get the TCPClient to connect by calling Spark.process() before calling TCPClient.connect(). This causes TCPClient to connect consistently.

With the delays and without the Spark.process() call it will never connect.
Without the delays it will connect.
What is the reason behind this, and is it a bug or something specific to my setup?

The code is as follows; I tried to make it as simple as possible while still provoking the issue.

    class Reading
    {
        public:
        byte addr[8];    // 1-Wire device address
        float celsius;   // last temperature reading
    };

    Reading ReadingResult;
    TCPClient client;

    void setup()
    {
        Serial.begin(57600);  // local hardware test only
        ReadingResult.celsius = 22.50;
    }

    void loop()
    {
        //if(read(&ReadingResult))
        //{
        delay(500);  // approximates time spent in the sensor code
        delay(900);

        Serial.print("  Temperature = ");
        Serial.print(ReadingResult.celsius);
        Serial.println(" Celsius\r\n");

        Spark.process();
        if (client.connect(IPAddress(192,168,1,5), 8888))
        {
            Serial.println("connected to server");
            String temp = String(ReadingResult.celsius);
            client.print(temp);
            client.flush();
            client.stop();
        }
        else
        {
            Serial.println("cannot connect to server");
            client.flush();
            client.stop();
        }
        //}
    }

@Dial0, what SYSTEM_MODE are you using (or are you just defaulting to AUTOMATIC)?

One thing to note is that client.flush() does not work as expected (a fix will be released in the upcoming new version release). This causes the CC3000 receive buffer to overfill and crash. To avoid this, you can use this code prior to client.flush():

    while (client.available()) char c = client.read();

This will read all remaining bytes from the receive queue. Another thing to consider is not using dynamically allocated String variables, as this can cause heap fragmentation. You could globally declare the String and reserve a fixed size to avoid fragmentation like this:

    String msg;  // global declaration

    void setup()
    {
        msg.reserve(20);  // reserve 20 bytes of String storage
    }

You can also use sprintf() along with a pre-allocated char array, which avoids the dynamic allocation issue as well.
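For example, here is a minimal sketch of that approach, reusing the ReadingResult from the code above (the buffer size and the sendReading() helper are just illustrative; the whole/fraction split avoids relying on %f support in sprintf(), and positive temperatures are assumed for brevity):

    char msgBuf[20];  // pre-allocated, so no heap allocation at runtime

    void sendReading(TCPClient &c)
    {
        // split the float into whole degrees and hundredths
        int whole = (int)ReadingResult.celsius;
        int frac  = (int)(ReadingResult.celsius * 100) % 100;
        sprintf(msgBuf, "%d.%02d", whole, frac);
        c.print(msgBuf);
    }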

Hi peekay123,

Yeah, defaulting to AUTOMATIC.

I rewrote the code, and it still has the same issue…

Uncommenting Spark.process() causes it to connect fine.

    TCPClient client;

    void ClientFlush(TCPClient &TcpClient)
    {
        while(TcpClient.available()) char c=TcpClient.read();
    }

    void setup()
    {
        Serial.begin(57600);  // local hardware test only
    }
    
    void loop()
    {
        
        delay(300);
        delay(900);
        
        //Spark.process();
        if (client.connect(IPAddress(192,168,1,5), 8888))
        {
            Serial.println("connected to server");
            client.print("Data");
            ClientFlush(client);
            client.stop();
        }
        else
        {
            Serial.println("cannot connect to server");
            ClientFlush(client);
            client.stop();
        }
    }

@Dial0, I just realized that you have the client.connect() in loop() running without any delays or checks to see if the client is disconnected first before reconnecting. This is why having the delays or Spark.process() prior to client.connect() works!

I suggest adding logic to make sure the client is disconnected prior to attempting a new connection:

    if (!client.connected())  // this ensures there is no open connection
    {
        if (client.connect(IPAddress(192,168,1,5), 8888))
        {
            Serial.println("connected to server");
            client.print("Data");
        }
        else
        {
            Serial.println("cannot connect to server");
        }
        ClientFlush(client);
        client.stop();

        /* The following code waits for the connection to close. This is
           optional; you can simply allow loop() to exit so the background
           tasks process the disconnect. */
        while (client.connected())  // you should also add a timeout
            Spark.process();
    }

I figured the client.stop() call would close the socket.

Without the delays it runs fine. It's with the delays that there is a problem. The delays are there to approximate time spent in some function calls.

Using your code in the loop shows the same effect. With the delays there it cannot connect, unless a call to Spark.process() is made after the delays.

    delay(300);
    delay(900);
    //Spark.process();
    if (!client.connected()) 
    {  
        if (client.connect(IPAddress(192,168,1,5), 8888))
        {
            Serial.println("connected to server");
            client.print("Data");
        }
        else
        {
            Serial.println("cannot connect to server");
        }
        ClientFlush(client);
        client.stop();

    }

@Dial0, is it the first connection that fails or subsequent connections? Also, do you have the latest CC3000 firmware installed (v1.29)?

It will not connect at all; adjusting the delay by a couple hundred milliseconds can make it work sporadically, maybe a 50/50 chance of connecting each loop. It goes in cycles, so it connects 8 times in a row, then fails to connect 8 times in a row.

Which seems like it's some timing issue…

The CC3000 Firmware is version 1.29.

@Dial0, besides a slow router, I am at a loss here. @bko might be able to inject some wisdom here.

Hi @Dial0

The main problem with your earlier code is that it opened a new socket every time around the loop() and there are only four sockets available for TCP/UDP in the TI CC3000, one of which is used by the cloud. So let's put that aside for now.
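For reference, a socket-friendly variant of the loop keeps one connection open across passes instead of opening a new socket each time. This is only a sketch, using the names already in the thread, with error handling kept minimal:

    void loop()
    {
        // connect once and reuse the socket on later passes,
        // rather than connect()/stop() every time through loop()
        if (!client.connected())
        {
            if (!client.connect(IPAddress(192,168,1,5), 8888))
            {
                Serial.println("cannot connect to server");
                client.stop();  // release the half-open socket
                return;         // let the background tasks run
            }
            Serial.println("connected to server");
        }

        client.print("Data");  // reuse the open connection
        delay(1000);
    }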

One other point of interest is that delay() calls the Spark loop to keep the cloud connection working, so you are probably running the cloud code three times in a row. Spark.process() was meant to be a replacement for SPARK_WLAN_Loop() but it is not quite the same; that problem is being addressed.

There are a couple of avenues of investigation to pursue here:

  • Is there anything “funny” about your network? Slow satellite link? Your router is not your DHCP server? You have lots of ARP traffic for some reason? These have been problems in the past for the TI CC3000.

  • Can you use the cloud with one of the simple sample apps like controlling an LED? If this works you can rule out a bunch of things related to your router etc.

  • Can you run the TCP client example to fetch a Google page? This will rule out other problems.

  • What kind of server are you connecting to? Is it something like Apache or something home-built? Can you use netcat or similar instead?

  • I don't know why this might help, but all of my code has the delay(), if any, at the end of loop(), not the beginning. I understand that the delay is just there to simulate other work, but perhaps re-ordering the loop will help (see the sketch after this list).
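As an illustration, here is a minimal re-ordering of the loop code from earlier in the thread, with the simulated-work delays moved to the end (a sketch only, reusing the names and delay values already posted):

    void loop()
    {
        if (!client.connected())
        {
            if (client.connect(IPAddress(192,168,1,5), 8888))
            {
                Serial.println("connected to server");
                client.print("Data");
            }
            else
            {
                Serial.println("cannot connect to server");
            }
            ClientFlush(client);
            client.stop();
        }

        delay(300);  // simulated work, now at the end of loop()
        delay(900);
    }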

Let us know and we will try to help more. I will try to test your code on my core, but it may take me some time to get it done.


One other point of interest is that delay() calls the Spark loop to keep the cloud connection working, so you are probably running the cloud code three times in a row. Spark.process() was meant to be a replacement for SPARK_WLAN_Loop() but it is not quite the same; that problem is being addressed.

I read in the docs that the Spark loop is only called in delays greater than 1 second; has that changed?

Is there anything "funny" about your network? Slow satellite link? Your router is not your DHCP server? You have lots of ARP traffic for some reason? These have been problems in the past for the TI CC3000.

Just a standard home router: an ADSL, Ethernet, and Wi-Fi combo. PC on Ethernet and a couple of laptops/phones connected over Wi-Fi.

Can you use the cloud with one of the simple sample apps like controlling an LED? If this works you can rule out a bunch of things related to your router etc.

The tinker app seems to work fine from what I played with.

Can you run the TCP client example to fetch a Google page? This will rule out other problems.

The google example in the docs seems to run fine.

What kind of server are you connecting to? Is it something like Apache or something home-built? Can you use netcat or similar instead?

It's a basic asio server written in Python that just prints out whatever it receives from a TCP connection. I'll try with netcat.

The code for delay() is clever and will run the cloud service loop when it has time to do the service or at least every 1 second. The code is in spark_wiring.cpp if you want to look.

So the "1 second" rule is a simplified version that is easy to explain, but having a lot of small delays will also allow the cloud to be serviced at least every second.

It will be interesting to see if Python is the limiting step--that has happened before for other folks.
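For illustration, here is a rough sketch of that pattern. This is not the actual spark_wiring.cpp source, just the idea of servicing the cloud at least once per second while a delay is in progress:

    void delay(unsigned long ms)
    {
        static unsigned long lastCloudService = 0;
        unsigned long start = millis();

        while (millis() - start < ms)
        {
            // service the cloud at least once per second,
            // even in the middle of a long delay
            if (millis() - lastCloudService >= 1000)
            {
                SPARK_WLAN_Loop();
                lastCloudService = millis();
            }
        }
    }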


The code for delay() is clever and will run the cloud service loop when it has time to do the service or at least every 1 second. The code is in spark_wiring.cpp if you want to look.

So the "1 second" rule is a simplified version that is easy to explain, but having a lot of small delays will also allow the cloud to be serviced at least every second.

It will be interesting to see if Python is the limiting step--that has happened before for other folks.

Ahh, that makes sense about the delays.

I tried with netcat and with a C# server, and it showed the same behavior. I ran Wireshark, and without the Spark.process() call I was not even seeing any SYN packets. The TCPClient.connect() call also blocked for about 5 seconds, while with the Spark.process() call it only blocks for around 1 second if it cannot connect.

Connecting to a remote server, running the same server application seems to work fine most of the time without the call to Spark.process().

I'm going to go out on a limb here and guess that it is an ARP problem.

Can you try pinging the core from your local host and see if that helps? You can have the core announce its IP address via a cloud variable or over the serial port (see the sketch below).

If not, we can try the opposite, pinging the server (or even the gateway/router) from the core.

The TI CC3000 does not (to the best of my knowledge) do gratuitous ARP to announce itself and some router/host combinations seem to want this.
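For example, here is a minimal sketch for announcing the core's IP address (assuming the Network.localIP() call from the Core docs; the variable name and buffer size are just illustrative):

    char ipStr[16];  // fits "255.255.255.255" plus the terminator

    void setup()
    {
        Serial.begin(57600);

        IPAddress ip = Network.localIP();
        sprintf(ipStr, "%d.%d.%d.%d", ip[0], ip[1], ip[2], ip[3]);

        Spark.variable("ip", ipStr, STRING);  // readable via the cloud
        Serial.println(ipStr);                // and over the serial port
    }

    void loop() { }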


Thanks bko,

I am with you on the ARP problem. I cannot ping it, but I remember reading that the CC3000 doesn't autonomously reply to pings after the latest firmware update?

Looking at the router, in the ARP section the Spark Core (192.168.1.3) has the ‘incomplete’ flag while all other connected devices have ‘complete’.

The Spark Core also doesn't show up in the DHCP list.


I would try repatching the TI CC3000 just to be sure you are fully up to date. The easiest way is by using the CLI with the core connected in DFU mode:

    spark flash --usb cc3000

Repatched everything and it seems to be working the same; however, if I leave it running for long enough it manages to untwist its knickers and start working. After about 40 minutes the ARP issues disappear.

I'm guessing the Spark.process() call clears some buffers, or allows some processing of the ARP table or packets, that gets the ARP table kick-started and working?


My gut feeling halfway through this post was to wait 30 minutes. I have two Spark Cores reporting the energy usage of my home every 10 seconds to my Windows server. I have been working with the cores for months and have always had to wait roughly 20 minutes to see data from the Spark Cores on my server. I've posted on the Spark forums before, and I'd love to get the Sparks an immediate connection to my server.

And the Google TCP client example works every time for some reason…

I have not made sure I'm updated lately; can someone remind me where I can check the Spark firmware version? Thanks.

Hi @jaysettle,

If you are building locally, you can pull from the master branch of the github repo at github.com/spark/firmware.

We will be releasing new firmware within a few weeks to provide many improvements and bugfixes to the Core.