It would be no more difficult to use: an (optional?) parameter would be added to Time.zone() and to timeSync(). The code to call a function in the Spark Cloud must already exist, as this is already done for other purposes, so I suggest the extra RAM needed on the Core would be but a few bytes.
I haven’t seen the code, but were it me I would make Time.zone() call timeSync() if it were passed a text param. I would enhance timeSync() to pass that param to the same Spark Cloud server function it already calls. The call it makes to determine GMT may already take a TZ param; if not, there will be an alternate function to call. The time returned to the Core would still be GMT, but with an extra value: the TZ offset, already DST-adjusted, which would be stored exactly where Time.zone() stores the offset now. No other changes required. See, we already have a tech spec! [Slightly more elegant would be to return only the DST-corrected time for the zone, but this would require a few extra Core changes to the Time library.]
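To make the proposed flow concrete, here is a minimal sketch in plain C++. Everything in it is invented for illustration: the real Spark Time class and cloud protocol do not look like this, and CloudTimeReply, cloudTimeSync() and the canned values are my own stand-ins. The point is only the shape of the change: a text zone triggers a sync, and the server's DST-adjusted offset lands where the numeric offset lives today.

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Hypothetical reply from the cloud time service: GMT plus a zone offset
// that the server has already adjusted for DST (names are mine, not Spark's).
struct CloudTimeReply {
    time_t gmt;         // seconds since epoch, UTC
    int    offsetSecs;  // zone offset in seconds, DST already applied
};

// Stand-in for the cloud call; a real server would consult tzdata.
CloudTimeReply cloudTimeSync(const std::string& tz) {
    CloudTimeReply r;
    r.gmt = 1700000000;                                   // canned for the sketch
    r.offsetSecs = (tz == "Europe/Paris") ? 2 * 3600 : 0; // pretend CEST
    return r;
}

// Sketch of the Core-side Time object after the proposed change.
struct TimeClass {
    time_t gmt = 0;
    int    offsetSecs = 0;

    // Proposed overload: a text zone syncs with the cloud and stores the
    // returned offset exactly where the numeric Time.zone() stores it now.
    void zone(const std::string& tz) {
        CloudTimeReply r = cloudTimeSync(tz);
        gmt = r.gmt;
        offsetSecs = r.offsetSecs;
    }

    time_t local() const { return gmt + offsetSecs; }
};
```

Note that the Core itself stays as dumb as it is today: all the tzdata knowledge lives on the server, and the Core only ever stores one integer offset.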
To implement a worldwide database on the Core? The Debian package tzdata (worldwide DST start and end dates) has an installed size of about 1700 kB, and that is only the data! You then also need program code to decode that very compact format. And several times a year one country or another changes its DST rules, so the data has to be redistributed, and there is no mechanism for that on Spark.
Even @bko’s small table, and the code to read it, will take a couple of hundred bytes out of precious RAM.