[v0.4.8-rc1 / v0.4.9] WiFi Reconnection Issue

Hey guys - we’ve been doing some WiFi testing on the Photon with the v0.4.8-rc1 tag prior to deployment and we have seen a couple of interesting issues. I wanted to raise them here to see if anyone has experienced the same, or just to track insight in case this is something to be fixed.

The method for testing was as follows:

  • Setup the Photon on the programmer shield and connect with GDB, run our codebase.
  • Connect the Photon to an Access Point that we can bring up and down at will.
  • Repeatedly shutdown and restart the Access Point to simulate reconnects.

While carrying out this test, two different problems occurred:

Problem 1

After a lot of reconnects, the Photon will disconnect and not be able to reconnect even when the Access Point appears again. Our code is still running, but it never reconnects. We have also been seeing this a lot with a test running in an area with bad WiFi.

This issue may be related to the new threading capability of the Photon; we have inferred this for the following reason:

From what we can see in the debugger, there is an important function, manage_network_connection, which is called from Spark_Idle_Events and should be handled by the system thread.

Previously, we think this was handled by the Particle.process function. Now, following down the call stack, we see that when the platform has threading enabled it will process the application thread but no longer process the idle events (as they should be handled on the system thread).
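To make our inference concrete, here is a toy model of the dispatch difference we think we're seeing (illustrative stub names, not the actual firmware symbols):

```cpp
#include <atomic>
#include <cassert>

// Counts how often the idle events actually run.
std::atomic<int> idle_events_run{0};

void spark_idle_events_stub() { idle_events_run++; }

// Legacy single-threaded path: the application's process call runs the
// idle events inline. With threading enabled, it only pumps the
// application queue and leaves idle events to the system thread.
void particle_process_stub(bool system_thread_enabled) {
    if (!system_thread_enabled) {
        spark_idle_events_stub();
    }
    // threaded path: nothing here - idle events are the system
    // thread's responsibility
}

// The system thread's idle task: this is the path that appears to stop
// being scheduled after many reconnects.
void system_thread_idle_stub() { spark_idle_events_stub(); }
```

So when the system thread's idle task stops running, nothing else picks up the idle events, which would match manage_network_connection never being called.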

This works normally, but after a seemingly large number of reconnects it seems to stop working. Attaching a breakpoint to manage_network_connection shows that it is never called. As a test, a colleague built a small wrapper function into the Particle firmware to call manage_network_connection just after the application thread was processed, but without calling the full Spark_Idle_Events.

This seemed to stop the issue, as we could not reproduce it thereafter. This is unlikely to be a solid fix though, so I was hoping to garner more info here if possible.

Problem 2

An SOS. This sometimes occurs after multiple repeated disconnects. I have very little information about this right now, but we have managed to reproduce it 3 times. Just to note, this was vanilla firmware, not including the ‘fix’ we added in point 1. Sometimes, rather than seeing the issue in point 1, it would simply SOS.

We have noted a distinct pattern though: the reconnection immediately before the one causing the SOS will rapidly flash green, rapidly flash red (or orange?) twice, then immediately go to breathing cyan. We saw this in all three cases of the SOS.

While I can’t explain it, it is a tell for when the SOS is about to happen, so we can maybe attach GDB to a reconnection function just before we reconnect again; this might lead to a bit more information. I’m trying to get back to this investigation at the minute, but thought I’d add this here to see if it generates any insight.

As always - thanks :slight_smile:


@mhazley, this is great work! I wonder if this other topic is related to the SOS issue, which I believe is related to problem 1:


Interesting - I have scripted the Access Point to go up and down automatically now so I’ll monitor the memory as I run and see if I can get any information.


Do you have SYSTEM_THREAD(ENABLED) in your application? Only then will the system be multithreaded.

Indeed we do:

SYSTEM_MODE(MANUAL);
SYSTEM_THREAD(ENABLED);

Managed to replicate Problem 1 quite quickly with the scripted Access Point (cycling WiFi off and on every ~30 secs) while monitoring the free memory system call.

It has disconnected and reconnected 5 times in this test and now refuses to connect again; I have stopped the AP from cycling and WiFi is now on permanently.

The System.freeMemory() call was returning the following after each reconnect:

42712
41624
40576
40052
39528

Interestingly, this number has continued to go down as it’s been trying to reconnect while I’ve been typing this.

[Edit: This is happening, albeit slowly]

FWIW, I didn’t mention before but the LED is flashing green while all this is happening and I still have my breakpoint on manage_network_connection which is not being hit.

I’ll run a few more tests like these when I get a chance at weekend or next week.

Thank you, this is useful info.

Been running this replication a few more times with Particle logging enabled at WARN level. I’ve left in my device state and free memory logging from the application thread - distinguishable because it doesn’t have a timestamp :smile:

Not sure if this actually tells us anything new - memory was ~47800 at the beginning of execution.

One thing maybe worth noting - I only see the issue when the free memory gets below 40000.
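As a stopgap while the leak is chased down, one could watch for the heap approaching that floor and flag the device for a reset. A minimal sketch of such a guard (illustrative helper, not a Particle API; on-device you would feed it System.freeMemory() each loop and call System.reset() when it returns true):

```cpp
#include <cstdint>

// Illustrative heap guard: remember the free heap at startup and flag
// trouble once it has fallen below an absolute floor, or by more than
// a fixed margin from where it started. Thresholds are assumptions.
struct HeapGuard {
    uint32_t startFree;    // free heap recorded at startup
    uint32_t floorBytes;   // absolute floor, e.g. 40000 per the observation above
    uint32_t marginBytes;  // max acceptable drop from startFree

    bool shouldReset(uint32_t freeNow) const {
        bool belowFloor = freeNow < floorBytes;
        bool leakedTooMuch = startFree > freeNow &&
                             (startFree - freeNow) > marginBytes;
        return belowFloor || leakedTooMuch;
    }
};
```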

Anyway - log below but more interesting point (possibly) following that…

.
. .
. . .
IDLE-39624
IDLE-39624
IDLE-39624
Cloud connected: 0
Cloud connecting: 1
0001242464:ERROR: int Spark_Connect() (582):Cloud: unable to resolve IP for device.spark.io
0001242483:WARN : void establish_cloud_connection() (223):Cloud socket connection failed: -9
0001242503:ERROR: void handle_cfod() (179):Resetting CC3000 due to 2 failed connect attempts
0001242621:WARN : void manage_network_connection() (98):!! Resetting WLAN due to SPARK_WLAN_RESET
Wifi ready: 0
Wifi connecting: 1
WIFI CONNECTING-39624
Wifi connecting: 0
WIFI CONNECTING-39624
WIFI CONNECTING-39624
WIFI CONNECTING-39624
Wifi ready: 1
WIFI CONNECTING-39624
WIFI OFF-39624
Cloud connecting: 0
0001268236:ERROR: int SparkProtocol::handshake() (102):Handshake: could not receive nonce: -19
0001268246:WARN : void handle_cloud_connection(bool) (264):Cloud handshake failed, code=-19
Cloud connecting: 1
CLOUD CONNECTING-39624
0001270121:ERROR: int SparkProtocol::handshake() (130):Handshake: could not receive hello response
0001270141:WARN : void handle_cloud_connection(bool) (264):Cloud handshake failed, code=-1
CLOUD CONNECTING-39624
CLOUD CONNECTING-39624
CLOUD CONNECTING-39624
CLOUD CONNECTING-39624
Cloud connected: 1
Cloud connecting: 0
IDLE-39624
IDLE-39624
IDLE-39624
IDLE-39624
IDLE-39624
IDLE-39624
IDLE-39624
IDLE-39624
IDLE-39624
IDLE-39624
0001284666:ERROR: int Spark_Connect() (582):Cloud: unable to resolve IP for device.spark.io
0001284675:WARN : void establish_cloud_connection() (223):Cloud socket connection failed: -9
0001284685:ERROR: void handle_cfod() (179):Resetting CC3000 due to 2 failed connect attempts
0001284794:WARN : void manage_network_connection() (98):!! Resetting WLAN due to SPARK_WLAN_RESET
IDLE-39624
Wifi ready: 0
Cloud connected: 0
WIFI CONNECTING-39624
WIFI CONNECTING-39624
WIFI CONNECTING-39624
WIFI CONNECTING-39624
(continues)
. . .
. .
.

I was reading about the threading and looked a bit deeper, and saw that the system thread queue has a background/idle task called system_thread_idle. I set a breakpoint there after we failed to connect, and we were never hitting it like we did normally.

I thought the system thread might have hung somewhere, so I dumped the threads; no matter how long I run after this point, I see the threads below:

(gdb) info threads
Id   Target Id         Frame
9    Thread 536927672 (worker thread) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
8    Thread 536928496 (worker thread) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
7    Thread 536943264 (WWD) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
6    Thread 536926384 (Tmr Svc) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
5    Thread 536939984 (tcpip_thread) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
4    Thread 536918960 (system monitor) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
3    Thread 536936048 (std::thread) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
2    Thread 536919568 (app_thread) 0x08025ac4 in vPortYield () at WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
1    Thread 536925888 (IDLE :  : Running) 0x08024e52 in prvIdleTask (pvParameters=<optimized out>) at WICED/RTOS/FreeRTOS/ver7.5.2/Source/tasks.c:2261

Now this confused me even more, because when I went to inspect the locations in those files, I couldn’t find the files :confused: .

I am at tag v0.4.8-rc1, git sha 0480c79, and when I look through my source tree I only see port.c and tasks.c under hal/src/electron/rtos/FreeRTOSv8.2.2/FreeRTOS/Source/portable..

So yeah - am I doing something stupid? I have performed a full system firmware upgrade several times to be sure.

You guys probably already know this but I am getting the exact same behaviour and stack trace with v0.4.9-rc3 tag.

0001052415:WARN : void Spark_Process_Events() (201):Communication loop error, closing cloud socket
0001052528:ERROR: int determine_connection_address(IPAddress&, uint16_t&, ServerAddress&, bool) (799):Cloud: unable to resolve IP for device.spark.io
0001052543:ERROR: int Spark_Connect() (888):connection failed to 54.208.229.4:5683, code=-9
0001052553:WARN : void establish_cloud_connection() (224):Cloud socket connection failed: -9
0001052562:ERROR: void handle_cfod() (180):Resetting CC3000 due to 2 failed connect attempts
0001052672:WARN : void manage_network_connection() (99):!! Resetting WLAN due to SPARK_WLAN_RESET
<then nothing>

It actually seems to be a little worse now, as it stops running my application loop both during disconnects and during the fail scenario.

At one point early on in the test, when I dropped the connection for over ~45 seconds, my application disconnected and reconnected during this time but never once ran the main application loop while it was disconnected.

Then when the fail scenario occurred, it just hung, no application loop running at all.

I have a feeling this issue might be related?

Anyway, just trying to keep this issue alive as it has been biting us a lot lately in areas of intermittently bad signal strength.

Hi @mhazley, you mentioned that you don’t see your breakpoint being hit. How are you setting up gdb? Do you see other breakpoints in system firmware being hit?

The application loop will block waiting for the system thread if any synchronous system functions are called - https://docs.particle.io/reference/firmware/photon/#system-functions. That is most likely the reason your application code is blocking, since it needs the system thread to respond to a synchronous function call.
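This blocking behaviour can be illustrated with a toy two-thread model (plain C++, not firmware code): the application thread posts a request and waits until the system thread services it, so if the system thread is wedged elsewhere, the application loop never resumes.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Toy model of a synchronous system function call across threads.
// Names are illustrative, not firmware symbols.
std::mutex m;
std::condition_variable cv;
bool request_pending = false;
bool request_done = false;

// Called from the application thread: blocks until the system thread
// has serviced the request. If the system thread never runs, the
// application loop is stuck here forever.
void synchronous_system_call() {
    std::unique_lock<std::mutex> lk(m);
    request_pending = true;
    cv.notify_all();
    cv.wait(lk, [] { return request_done; });  // app loop blocks here
}

// Run by the system thread: waits for a request, services it, and
// releases the waiting application thread.
void system_thread_service() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return request_pending; });
    request_done = true;  // do the work, then wake the caller
    cv.notify_all();
}
```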

When the system thread is no longer calling system_idle() can you pause the system to inspect where the system thread is? Attempting to connect to WiFi can take up to 30 seconds before control returns from the WiFi driver back to the system thread.

Hey @mdma - setting up gdb as follows using openocd and the programmer shield:

openocd -f ./particle-ftdi.cfg -f ./stm32f2x.cfg -c "gdb_port 3333" -c "\$_TARGETNAME configure -rtos FreeRTOS"

then

arm-none-eabi-gdb -ex "target remote localhost:3333" ../build/brewbot-photon.elf

having built my .elf in the non-modular way with jtag support in make.

USE_SWD_JTAG=y MODULAR=n 

I can hit that breakpoint (and others)…

(gdb) b system_thread_idle
Breakpoint 1 at 0x8046628: file src/system_threading.cpp, line 12.
(gdb) c
Continuing.
Note: automatically using hardware breakpoints for read-only addresses.
[New Thread 536933392]
[Switching to Thread 536933392]

Breakpoint 1, system_thread_idle () at src/system_threading.cpp:12
12	    Spark_Idle_Events(true);

When I cause the behaviour above, I only ever see the stack trace I have outlined above - it never seems to move away from this and all the threads show that they are in the following locations.

WICED/RTOS/FreeRTOS/ver7.5.2/Source/portable/GCC/ARM_CM3/port.c:332
WICED/RTOS/FreeRTOS/ver7.5.2/Source/tasks.c:2261

I don’t seem to have these files in the firmware so I’m thinking they are included via a library?

One thing I have noticed though, from the openocd output, is the following error when I halt GDB:

Error: JTAG-DP STICKY ERROR 
Error: MEM_AP_CSW 0x23000050, MEM_AP_TAR 0xa5a5a5a6
Error: Failed to read memory at 0xa5a5a5a6

That said, it still gives me a thread backtrace and the breakpoints still hit after this.

Happy to debug deeper if you can give me some direction, bit lost as to where to go next.

Thanks for the details on your debug setup. That looks fine.

The socket error is -9, which is WICED_NOTUP, meaning the network interface isn’t up. The system then tries to bring up the network interface in manage_network_connection(), but only if Particle.connect() has been called (which is done automatically in AUTOMATIC mode.)

If there is no call to Particle.connect() then the system will not try to reestablish the network connection. The application loop will then need to include a WiFi.connect() call to bring up the network.

Hi @mdma, thanks for the reply. Does this mean you think the problem could be with my management of the connection?

It’s worth noting that I use the calls as you have explained (well, at least I think I do), and I can get through this scenario up to 10 times before it fails to reconnect.

I’ve included my function checkConnectionStatus() below, which is the first thing called from my main application loop() to manage the connection [which is in SYSTEM_MODE(MANUAL)]. Can you check and confirm whether I am treating these calls correctly?

void checkConnectionStatus() {
  // Output states if changed (Cloud Connecting etc…)
  updateWiFiState();

  if (!Particle.connected()) {

    // Not connected so start monitoring timer
    if (!wifiReconnectTimer->isStarted()) {
      wifiReconnectTimer->start();
    }

    if (!WiFi.ready()) {
      if (!WiFi.listening() && !WiFi.connecting()) {
        WiFi.connect(WIFI_CONNECT_SKIP_LISTEN);
      }
    } else {
      if (!_cloudConnecting) {
        Particle.connect();
        _cloudConnecting = true;
      }
    }
  } else {
    _cloudConnecting = false;
    Particle.process();
  }
}

Below I have logged out when these calls are made alongside the particle debug to show the happy and the sad path of the reconnect:

Happy Path

Cloud connected: 1
Cloud connecting: 0  
. .  
. Some time passes before connection is dropped
. .
0000375470:WARN : void Spark_Process_Events() (201):Communication loop error, closing cloud socket
0000375603:ERROR: int determine_connection_address(IPAddress&, uint16_t&, ServerAddress&, bool) (799):Cloud: unable to resolve IP for device.spark.io
0000375641:ERROR: int Spark_Connect() (888):connection failed to 54.208.229.4:5683, code=-9
0000375665:WARN : void establish_cloud_connection() (224):Cloud socket connection failed: -9
0000375690:ERROR: void handle_cfod() (180):Resetting CC3000 due to 2 failed connect attempts
0000375819:WARN : void manage_network_connection() (99):!! Resetting WLAN due to SPARK_WLAN_RESET
Wifi ready: 0
Wifi connecting: 1
Cloud connected: 0
Wifi connecting: 0
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
Wifi connecting: 1
Wifi connecting: 0
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
0000401484:ERROR: int SparkProtocol::handshake() (108):Handshake: Unable to receive key -19
0000401494:WARN : void handle_cloud_connection(bool) (267):Cloud handshake failed, code=-19
Wifi ready: 1
Particle.connect()
Cloud connecting: 1
Cloud connected: 1
Cloud connecting: 0
. . 
.
execution continues

Sad Path

Cloud connected: 1
Cloud connecting: 0
. .  
. Some time passes before connection is dropped
. .
0000723598:WARN : void Spark_Process_Events() (201):Communication loop error, closing cloud socket
Cloud connected: 0
Particle.connect()
Cloud connecting: 1
0000723722:ERROR: int determine_connection_address(IPAddress&, uint16_t&, ServerAddress&, bool) (799):Cloud: unable to resolve IP for device.spark.io
0000723742:ERROR: int Spark_Connect() (888):connection failed to 54.208.229.4:5683, code=-9
0000723752:WARN : void establish_cloud_connection() (224):Cloud socket connection failed: -9
0000723761:ERROR: void handle_cfod() (180):Resetting CC3000 due to 2 failed connect attempts
0000723871:WARN : void manage_network_connection() (99):!! Resetting WLAN due to SPARK_WLAN_RESET
Wifi ready: 0
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
WiFi.connect(WIFI_CONNECT_SKIP_LISTEN)
. .
.
The call never seems to return, and loop() is never called again

Thanks for these details. When the device goes into the bad path, what is happening with the LED?

@mdma It’s flashing green in that case.

I think I’m also seeing this issue.
Running v0.4.9 on photon.
Been trying to figure out why my photons freeze.

In my main loop, once every 1000 ms I check whether WiFi is ready and, if not, call WiFi.connect() and report System.freeMemory().
When WiFi isn’t ready(), free memory decrements by 44 bytes every second. Eventually the system freezes (the firmware, not the app, I think). I work around it with a System.reset() if the free memory drops 2000 bytes below where it started, but… it’s not great…

e.g.

void loop()
{
  if (WiFi.ready())
  {
    // do stuff
  }
  else if (1000 ms since last attempt)
  {
    WiFi.connect();
    Serial.println(System.freeMemory());
  }
}

Post your actual code instead of pseudo code - this might help to confirm your assumption.

It’s a bit long…

I’ll see if I can show the same issue with a shorter version and post that… really just wanted to confirm a similar behaviour.

The following illustrates the behaviour… Note that WiFi connection needs to be lost or intermittent.

SYSTEM_THREAD(ENABLED);

bool debug_serial = true;
long unsigned int last_reconnect = 0;

void setup() {
    if (debug_serial) {
        Serial.begin(9600);
        delay(250);
        Serial.println("Starting.");
    }
    last_reconnect = millis();
}

void loop() {
    if (WiFi.ready())
    {
        // Do Stuff
    }
    else if (millis() - last_reconnect > 1000)
    {
        if (debug_serial) {
            Serial.print("Not connected to WiFi and 1s since last check..");
            Serial.printlnf("System Memory is: %d",System.freeMemory());
        }
        WiFi.connect();
        last_reconnect = millis();        
    }
}

Try adding a one-shot flag to prevent a subsequent connection attempt while a previous one is still running.

Sure, this should not happen even without memory leaks, but give it a try still.

(I’ll also try to test this when I’ve got time)
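The one-shot flag suggested above might look something like this (illustrative helper class, not a Particle API):

```cpp
// Illustrative one-shot guard: only allow a new WiFi.connect() attempt
// once the previous one has resolved, instead of issuing a fresh one
// every second while still connecting.
class ConnectGuard {
    bool pending_ = false;
public:
    // Returns true if the caller may start a new connection attempt.
    bool tryBegin() {
        if (pending_) return false;
        pending_ = true;
        return true;
    }
    // Call when the attempt has finished (connected, failed, or timed out).
    void finish() { pending_ = false; }
};
```

In loop() you would call tryBegin() before WiFi.connect() and finish() once WiFi.ready() returns true or a timeout fires, so connect attempts can no longer stack up.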