Please add timeout to System.connect()/Spark.connect()

Currently the invocation and behavior of System.connect() is controlled (partially) by the SYSTEM_MODE() macro.

However, there are cases where I’d like to, for example, use SEMI_AUTOMATIC mode, but still be able to have System.connect() return in a non-blocking fashion as it does in MANUAL mode.

I propose the following change:
Current prototype: void System.connect(void);
Proposed: bool System.connect(int32_t timeout = 0);

If timeout < 0, run in non-blocking mode: start the connection in the background and return immediately (as is the default in MANUAL mode).
If timeout == 0, behave as it does today, based on the SYSTEM_MODE() setting.
If timeout > 0, start the connection process and then wait up to timeout milliseconds for System.connected() to be true. Return as soon as System.connected() == true (return true) or the timeout is exceeded (return false).

In all cases, the return value should match System.connected(): true if a connection is established at the time of return, false otherwise.
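To make the proposal concrete, here is a minimal sketch of the proposed semantics in plain C++. The connection state is mocked; g_connected, startConnectionInBackground(), and connectWithTimeout() are all hypothetical names for illustration, not part of the actual firmware:

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

// Mocked connection state standing in for the real system connection.
static bool g_connected = false;

static void startConnectionInBackground() {
    // In real firmware this would kick off the async connection attempt.
}

static bool connected() { return g_connected; }

// Proposed semantics:
//   timeout_ms < 0  -> non-blocking: start connecting, return immediately
//   timeout_ms == 0 -> defer to existing SYSTEM_MODE() behavior (omitted here)
//   timeout_ms > 0  -> wait up to timeout_ms for connected() to become true
bool connectWithTimeout(int32_t timeout_ms) {
    startConnectionInBackground();
    if (timeout_ms <= 0) {
        return connected();  // non-blocking (or existing-behavior) case
    }
    const auto deadline = std::chrono::steady_clock::now()
                        + std::chrono::milliseconds(timeout_ms);
    while (!connected() && std::chrono::steady_clock::now() < deadline) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // In every case, report the connection state at the time of return.
    return connected();
}
```

The key property is that the caller always gets control back within roughly timeout_ms, and the return value tells it whether the connection came up in time.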

I believe this functionality would be very useful for a variety of battery and other low-power scenarios.


Having a bit more experience now, I’m going to make this request even more generic…

Any function which is capable of going into an indeterminate wait state should take an optional timeout argument.

These are microcontrollers. Therefore they are likely being used in process control, data acquisition, or other environments where things don't always go as planned. As such, the application developer must be able to depend on getting control back in at least a semi-deterministic way (OS, go do this if you can, but even if you can't, give up after a specified number of seconds).

Therefore, there should exist no standard system call or library function which does not have either a deterministic maximum return time or the ability to specify a timeout parameter.

For example, digitalWrite() does not need a timeout because it deterministically returns within a certain number of milliseconds regardless of the outside environment. System.connect() is a strange bird because its level of determinism depends on the SYSTEM_MODE() specification. In SYSTEM_MODE(MANUAL), it returns immediately no matter what; that behavior is acceptable and does not require a timeout parameter. However, in SYSTEM_MODE(SEMI_AUTOMATIC) and SYSTEM_MODE(AUTOMATIC), it will hang until the cloud connection succeeds, up to and including forever. In those modes, the function should allow the application developer to set a maximum allowed duration for that wait state.

This should apply to any function which has an indeterminate wait state, not just System.connect().
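The generic version of this can be sketched as a timeout wrapper around any indeterminate wait. The helper below is illustrative (the name waitWithTimeout and its polling approach are my own, not a firmware API); it polls a condition until it holds or the deadline passes:

```cpp
#include <chrono>
#include <cstdint>
#include <functional>
#include <thread>

// Generic pattern: poll `done` until it returns true or timeout_ms elapses.
// Returns true if the condition became true before the deadline, false if
// the timeout was exceeded. Hypothetical helper, for illustration only.
bool waitWithTimeout(const std::function<bool()>& done, int32_t timeout_ms) {
    const auto deadline = std::chrono::steady_clock::now()
                        + std::chrono::milliseconds(timeout_ms);
    while (!done()) {
        if (std::chrono::steady_clock::now() >= deadline) {
            return false;  // deterministic upper bound on the wait
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return true;
}
```

Any potentially unbounded call (connect, DNS lookup, sensor read over a flaky bus) could be wrapped this way, giving the application a guaranteed upper bound on how long it loses control.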

I've been looking into this lately for the 0.4.5 release on the Core, allowing the number of attempts to be specified when calling Particle.connect(). On the Photon we will have multithreading, so this issue will soon be moot on that platform, although the maximum-retries feature will be supported.

Issue being tracked here.


What FW version are you running?
In my experience this is not the case for SEMI_AUTOMATIC: Spark.connect() returns near immediately, and while different stages of the connection process add some lag to the code flow, it never blocks forever.

Here you can find some test code for this "theory" (for 0.4.3, which should have been your FW at the time of your first post):
Photon/P1 loop()/user code blocking in SEMI_AUTOMATIC mode - #4 by ScruffR

But I'd second your proposal for timeouts :+1:

I'm pretty sure I did my testing against 0.4.3. Additionally, that's actually how the documentation here:

describes Spark.connect() behavior in SEMI_AUTOMATIC mode:

The semi-automatic mode is therefore much like the automatic mode, except:
When the device boots up, the user code will begin running immediately.
When the user calls Spark.connect(), the user code will be blocked, and the device will attempt to negotiate a connection. This connection will block until either the device connects to the Cloud or an interrupt is fired that calls Spark.disconnect().

This would imply that in a situation where connection to the cloud fails, the call to Spark.connect() will hang indefinitely.

That was my experience as well in very limited testing with the 0.4.3 firmware, but I admit to doing far less than deterministic testing, as I just needed something that worked. So once I had working code (MANUAL mode), I kind of moved on.

That part of the docs was the reason for some other topic I fought through where I clearly argued that it's not completely correct :wink:

Maybe I can find it and post a link here

Found it