Time.zone() and daylight savings and Spark.syncTime()

The new Time library has a zone() call that takes a numeric hour offset from GMT/UTC to set your local time. I would like to suggest this function take either a text or a numeric argument. The numeric version would work as before. The text argument would contain what is often found in the Unix-style TZ environment variable, e.g. “GMT0BST”, or in the /etc/timezone file, e.g. “Europe/London”. This would allow the Spark Cloud to return the time modified as necessary for daylight saving, should DST apply.
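
A minimal sketch of what the string-accepting variant might look like. Everything here is hypothetical: a real implementation would resolve the string cloud-side against the full tz database, and this tiny lookup table is only a stand-in for illustration.

```cpp
#include <cassert>
#include <cstring>

// Hypothetical sketch: resolve a TZ-style string to a base UTC offset in
// hours. The table below is a stand-in for a real cloud-side lookup.
struct ZoneEntry { const char* tz; float utcOffset; };

static const ZoneEntry kZones[] = {
    { "GMT0BST",       0.0f },  // UK: GMT in winter, BST (+1) in summer
    { "EST5EDT",      -5.0f },  // US Eastern: EST in winter, EDT (-4) in summer
    { "Europe/London", 0.0f },  // zoneinfo-style name for the same UK zone
};

// Returns the base (non-DST) offset for a known zone string, or the
// supplied fallback when the string is not recognized.
float zoneOffsetFromString(const char* tz, float fallback = 0.0f) {
    for (const ZoneEntry& z : kZones) {
        if (strcmp(z.tz, tz) == 0) return z.utcOffset;
    }
    return fallback;
}
```

A string overload of Time.zone() could call a resolver like this (or ask the cloud for the DST-adjusted value) and then fall through to the existing numeric path unchanged.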

Spark.syncTime() should take the same parameters as Time.zone(); indeed, the two functions may as well be synonyms for each other. In the interim it needs to be made clear whether Spark.syncTime() does or does not reset the zone.

A documentation bug has been reported; we’re told it is fixed, so one hopes it will soon appear in the web version of the docs. Time.zone() does not return a value, so there is nothing to be printed. Here is the comment from the example in the current docs (the accompanying call is presumably `Time.zone(-4);`):

// Set time zone to Eastern USA daylight saving time

There is another documentation bug, or an undocumented feature. Does Time.zone() apply just the fixed offset from GMT, or the legal time for the timezone at that offset? Is the comment in the doc correct? Is daylight saving time applied or not? It ought not to apply DST, because many offsets from GMT have varying DST rules: UTC+2, for example, includes countries that observe DST and countries that do not.

@psb777, I did put in a pull request for the documentation fix. The documentation states:

Set the time zone offset (+/-) from UTC.

So it is an offset from UTC not GMT. The Time.zone() function is void and returns nothing. It does, however, store the offset in Spark memory until the Spark is rebooted, as described in the documentation. As you might have surmised, the DST part is not presently handled. @bko is working on a library extension to the new built-in Time firmware that will add DST among other things.


UTC and GMT are always the same. I think I’ve had a good idea about DST (daylight saving time) and timezones, and it is in the first posting in this topic. I can’t see how code with a small enough footprint to run on the Spark itself could determine, worldwide, whether you’re currently in DST. My idea would require Time.zone() to interact with the Spark Cloud to determine the offset from a proper timezone code; the Spark Core would then store the offset reported by the Cloud locally, allowing the Time library to remain otherwise unchanged.


@psb777 it is a good idea, and you are correct that Time.zone() could be replaced with a “smarter” Spark.timeSync() that the user can call with a suitable code, which then returns the DST-adjusted time for that code. Very interesting indeed. Can you provide more detail on the argument format that would be the most efficient (in bytes) and easiest to use?

You could easily do this yourself with a Spark.variable() and a Spark.function() and your own server. I don’t find the overhead for the European and US DST rules to be very large, some small tables in flash and a few stack variables. The Euro and US rules actually fit lots of places in the world since alarm clocks get made for mass markets.
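
As an illustration of how small those rule tables can be, here is a sketch in plain C++ (no Particle APIs) of the post-2007 US and current EU transition-date rules; the function names are mine, not from any existing library:

```cpp
#include <cassert>

// Day of week for a Gregorian date using Sakamoto's method: 0 = Sunday.
int dayOfWeek(int y, int m, int d) {
    static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
    if (m < 3) y -= 1;
    return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
}

// Day-of-month of the Nth Sunday of a month (n = 1 for the first Sunday).
int nthSunday(int year, int month, int n) {
    int first = 1 + (7 - dayOfWeek(year, month, 1)) % 7;
    return first + 7 * (n - 1);
}

// Day-of-month of the last Sunday of a month with the given length in days.
int lastSunday(int year, int month, int daysInMonth) {
    return daysInMonth - dayOfWeek(year, month, daysInMonth);
}

// Post-2007 US rule: DST runs from the second Sunday in March
// to the first Sunday in November.
int usDstStartDay(int year) { return nthSunday(year, 3, 2); }
int usDstEndDay(int year)   { return nthSunday(year, 11, 1); }

// EU rule: last Sunday in March to the last Sunday in October.
int euDstStartDay(int year) { return lastSunday(year, 3, 31); }
int euDstEndDay(int year)   { return lastSunday(year, 10, 31); }
```

The tables and arithmetic here are a handful of bytes in flash, which is the point being made above: the common US and EU rules do not need a worldwide database.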

Like the doc says, time is only synchronized with the cloud time at cloud initialization time, unless you take steps to do more. I am not sure how that model scales to depending on the cloud for DST status, especially if the core is up for several months at a time. You would have to call the cloud a lot more frequently than just at cloud initialization time, which would be a burden.

It would be no more difficult to use - an (optional?) parameter would be added to Time.zone() and to timeSync(). The code to call a function in the Spark Cloud must already exist - as this is already done for other purposes - so I suggest the extra RAM on the Core is but a few bytes.

I haven’t seen the code, but were it me I would make Time.zone() call timeSync() if it were passed a text parameter. I would enhance timeSync() to pass that parameter to the Spark Cloud server function it already calls. The call it makes to determine GMT may already take a TZ parameter; if not, an alternate function could be called. The time returned to the Core would still be GMT, but with an extra value: the TZ offset, already DST-adjusted, which would be stored exactly where Time.zone() stores the offset now. No other changes required. See, we already have a tech spec! [Slightly more elegant would be to return only the DST-corrected time for the zone, but this would require a few extra changes to the Time library on the Core.]
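
The flow described above can be modeled in a few lines. All names here are invented for illustration; this is only a sketch of the proposed protocol, not any existing firmware API:

```cpp
#include <cassert>
#include <ctime>

// Sketch of the proposed flow: the cloud answers a time-sync request with
// both UTC and a DST-adjusted zone offset for the TZ descriptor sent, and
// the core stores that offset where Time.zone() keeps it today.
struct SyncReply {
    time_t utc;        // current UTC from the cloud, unchanged
    float  zoneOffset; // offset in hours, already DST-corrected for the TZ
};

struct CoreClock {
    time_t utc = 0;
    float  storedOffset = 0.0f; // where Time.zone() keeps its offset now

    // Hypothetical "smarter" sync: apply both values from the cloud reply.
    void applySync(const SyncReply& reply) {
        utc = reply.utc;
        storedOffset = reply.zoneOffset;
    }

    // Local time stays UTC plus the stored offset, exactly as today.
    time_t localTime() const {
        return utc + (time_t)(storedOffset * 3600);
    }
};
```

Because the DST decision happens cloud-side and only a plain offset comes back, the rest of the Time library indeed needs no changes.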

To implement a worldwide database on the Core? The Debian package tzdata (worldwide DST start and end dates) has an installed size listed as 1700, and Debian reports installed sizes in kilobytes, not bytes. And that is only the data! You then need program code to decode it. And several times a year one country or another changes its DST rules, so the data has to be redistributed, and there is no mechanism for that on the Spark.

Even @bko’s small table and the code to read that will take a couple of hundred bytes out of precious RAM.

@bko, you have set yourself an interesting project, so have fun.

If the Spark Core already interacts with the Cloud at startup to find out the time, it seems dumb (in the extreme) not to tell the Cloud your TZ descriptor and then let the Cloud call the timezone functions its local OS provides to work out whether the Core is in DST or not.

You already recommend elsewhere in the forum that timeSync be called daily or hourly. What’s your point about Cloud overhead, again?

You misread the above: the table is never in RAM but always in flash. Just a few bytes of RAM are needed. You seem to have a worldwide time zone database as a hard requirement. If that’s a requirement for your project, I would suggest you just get a Linux box and be done; a Raspberry Pi is cheap enough. Or, as I suggested, you could use Spark.variable and function with your own web server.

My interesting project is actually done: the NTP time library I put out some time ago already handles US and European DST. I have working code for the new RTC already. I just need to package up the DST parts and figure out where to put it.

Checking hourly is not generally enough for DST, since the local clock jumps by an entire hour at each transition (forward when DST starts, back when it ends).

And finally what I said in the other post is that if you want to get better accuracy you can call the sync function up to once an hour. It was not a requirement, merely a suggestion to get better accuracy.

Flash, RAM, whatever, it’s wasted, especially if you’re calling the Cloud anyway to set the time. Is there no thought of selling the Spark into Australia & NZ and China and Argentina and India? The only problem I have with your solution is if it crowds out one that will work for everyone. I don’t mind there being two solutions to the problem.

Checking whether DST has been imposed or removed since the Spark was booted is, as you say, a recommendation, not a requirement, just as keeping the time synced is. And both would be checked with the new calling syntax:

Spark.syncTime("Europe/London");

hi @psb777, is

Spark.syncTime("Europe/London"); pseudo code?

I couldn’t get it to work in my project, nor can I find documentation for it anywhere.


Hi @daneboomer

This is a pseudo code feature request that has not been implemented.

ah, thanks @bko. So at the moment is there any built-in way to deal with DST? Or is it just a case of dealing with it in code by detecting the time of year?

There is nothing built-in for DST correction at this time, but I am sure it can come later.

What I have done: in my webhook to weatherunderground.com, they give me an isDST field with a 1 or 0 value, and I switch my local offset for Time.zone() between -5 and -6 based on that flag. There is also timezonedb.com, which is focused on just time zones and DST. They will respond with the offset needed based on lat and long, or by time zone in general. Lat and long look best, since they don’t have a specific entry for Arizona, which I know doesn’t honor DST. This may not be the right way for everyone, but it serves my needs.
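
The offset-switching part of that approach fits in one small function. The function name and the "1"/"0" string parsing are illustrative of what the webhook response carries, not from any library:

```cpp
#include <cassert>
#include <cstring>

// Pick the US Central offset for Time.zone() from a DST flag delivered as
// a "1" or "0" string by a weather webhook. Names here are illustrative.
float centralOffsetFromDstFlag(const char* isDstField) {
    bool dst = (strcmp(isDstField, "1") == 0);
    return dst ? -5.0f : -6.0f; // CDT vs. CST
}
```

In the webhook response handler one would then call `Time.zone(centralOffsetFromDstFlag(data));` and let the built-in Time library do the rest.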


Maybe you can take a whack at porting Jack Christensen’s TimeZone Library!!

good stuff…

My SparkTime NTP library handles DST. It runs on Photon with minor changes and I will re-release it soon.

For built-in Time, I should port the DST part of my library.

Using or not using DST is always a local choice. In addition to Arizona, there are other parts of the USA that don’t follow the usual rules, for example northern Indiana, which is like a suburb of Chicago: Indiana is normally in the Eastern time zone while Chicago is in the Central time zone.



Seems like there would be a lot of value in a universal DST library.

I’d think folks would find it very useful for end-user personalization.

Don’t forget about North Korea’s new timezone!! Kim Jong Un is a BIG Particle-phile!


We have this issue tracking DST support in firmware. https://github.com/spark/firmware/issues/211

We’re still working out the details and the road forward is unclear to me, so feedback and suggestions are welcome! As a minimum, how about we add hooks in system firmware to allow libraries to implement DST in whatever way they choose? Any thoughts on what those APIs would look like? Is the existing Time.zone(float offset) setter sufficient?
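
One possible shape for such a hook, as a sketch only: every name below is invented, and this is just one way the system/library split could work, offered as a talking point rather than a proposed API.

```cpp
#include <cassert>
#include <cstdint>
#include <ctime>

// Hypothetical hook: system firmware keeps the fixed zone offset (as
// Time.zone(float) does today), and a library may register a callback
// that reports the extra DST offset, in seconds, for a given UTC instant.
typedef int32_t (*DstOffsetHook)(time_t utc);

static float         g_zoneOffsetHours = 0.0f;   // what Time.zone(float) sets
static DstOffsetHook g_dstHook         = nullptr; // hypothetical new hook

void setZone(float hours)         { g_zoneOffsetHours = hours; }
void setDstHook(DstOffsetHook fn) { g_dstHook = fn; }

// Local time = UTC + fixed zone offset + whatever the registered hook adds.
time_t toLocal(time_t utc) {
    time_t local = utc + (time_t)(g_zoneOffsetHours * 3600);
    if (g_dstHook) local += g_dstHook(utc);
    return local;
}

// Example library-side hook: pretend DST is always in effect (+1 hour).
int32_t alwaysDst(time_t) { return 3600; }
```

With a hook like this, the existing Time.zone(float) setter stays sufficient for the fixed part, and libraries compete only on how they compute the callback's answer.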