Wifi/Core sleep strategy for programs depending on the Cloud API

Hi There!

I have been starting out slowly with my Core, writing simple programs.
Now that I am planning larger and more useful programs, I have been unable to
find resources on how I should sleep the Core and the WiFi circuit to conserve energy.

I am making something like a countdown timer, and the plan is that you can set/start the timer from your phone.
Some questions came up during my planning that I couldn’t find answered anywhere in the docs.

What happens if the Core is sleeping or deep sleeping when a request is sent to it over the Cloud API?

As far as I understand, the Cloud sees the Core as unreachable and eventually returns an error.

Are there any best practices / patterns I can utilise for handling requests that may arrive from the API at any time, while still preserving energy?

Thanks in advance for any suggestions.



I see that no one has picked up this question, so I shall take a stab at it. :smile:

You are right about this. @peekay123 is looking into whether the Core can wake on an external trigger (signal), but that definitely won’t cover Cloud requests.

One approach is a server (of your own), like a small database or queue, where the request first lands. When the server detects that the Core has come online, or when the Core checks in with the server, the queued request gets pushed to it :smile:
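A minimal sketch of that queue idea, assuming a tiny in-memory store on your own server (all names here are hypothetical; in practice this would sit behind an HTTP endpoint the Core can poll):

```python
from collections import defaultdict, deque

class DeviceQueue:
    """Holds requests for devices that may be asleep when the request arrives."""

    def __init__(self):
        self._queues = defaultdict(deque)

    def push(self, device_id, message):
        # Called when a phone/app sends a request while the Core sleeps.
        self._queues[device_id].append(message)

    def poll(self, device_id):
        # Called by the Core when it wakes up and checks in.
        q = self._queues[device_id]
        return q.popleft() if q else None

queue = DeviceQueue()
queue.push("core-1234", {"cmd": "start_timer", "seconds": 300})
print(queue.poll("core-1234"))  # the queued request
print(queue.poll("core-1234"))  # None -- queue drained
```

The phone never talks to the Core directly; it only writes into the queue, and the Core drains it whenever it next wakes.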

I’m looking at this idea to “queue” messages for the Messenger Torch, keeping a list of messages sent by people during a public display and sending them to the Core one at a time :wink:

Hope this helps!


@Ricki, you have highlighted one of the challenges with the CC3000, and possibly with WiFi in general. The only power-saving modes on the Spark put the CC3000 in standby or turn it off altogether. In neither of these modes does the CC3000 “wake” when it receives data (in fact, the CC3000 cannot do this at all).

An alternative approach is to have the Spark wake from sleep or deep sleep on a regular basis and check for events. As @kennethlimcp pointed out, you could queue those events on a server and have the Spark poll that queue when it wakes. This does not lend itself to real-time or even near-real-time events. In fact, you have to balance the Spark sleep time (to lower power consumption) against the event response time you need. :smile:
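To make that balance concrete, here is a back-of-the-envelope sketch of the trade-off. The current and timing figures are rough illustrative assumptions, not measured Spark or CC3000 values:

```python
# Rough duty-cycle arithmetic for a wake-and-poll strategy.
# All figures below are illustrative assumptions, not datasheet values.
SLEEP_CURRENT_MA = 0.2    # assumed deep-sleep draw
AWAKE_CURRENT_MA = 150.0  # assumed draw with WiFi up while polling
AWAKE_SECONDS = 5.0       # assumed time to wake, connect, and poll the queue

def average_current_ma(sleep_seconds):
    """Average draw over one cycle of sleep_seconds asleep + AWAKE_SECONDS awake."""
    cycle = sleep_seconds + AWAKE_SECONDS
    return (SLEEP_CURRENT_MA * sleep_seconds
            + AWAKE_CURRENT_MA * AWAKE_SECONDS) / cycle

# Worst-case response time is one full sleep interval plus the wake/poll time.
for sleep_s in (30, 300, 3600):
    avg = average_current_ma(sleep_s)
    print(f"sleep {sleep_s:>4}s -> avg {avg:6.2f} mA, "
          f"worst-case latency {sleep_s + AWAKE_SECONDS:.0f}s")
```

Longer sleep intervals cut the average draw sharply, but a countdown-timer request could then sit in the queue for the full interval before the Core picks it up, so the interval you choose is really a latency budget.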