Can the Spark Core trigger the Reset Pin?

#define FEED_ID "12345" //note: fake id here.. 
#define XIVELY_API_KEY "12345" //note: fake key here

TCPClient client;

int trigger = A7; // this is the pin talking to the attiny
int reading = 0; // latest raw ADC value from the sensor on A0
int ledD = D7; // this is our onboard LED Pin
unsigned long LastUpTime = 0;
uint32_t lastReset = 0; // last known reset time
bool s = true;
char whichApp[64] = "READ TEMPERATURE with XIVELY";

// This routine runs only once upon reset
void setup()
{
   //Register our Spark function here
  Spark.variable("whichapp", &whichApp, STRING);
  Spark.variable("reading", &reading, INT);
  Spark.function("degres", tempCalculation);
  Spark.function("volt", analogReading);
  pinMode(A0, INPUT);
  pinMode(ledD, OUTPUT);
  ledStatus(2,100); //Blink
  lastReset = millis(); // We just powered up 


}

void loop()
{


  // Read the 12-bit ADC (0-4095, 3.3 V full scale) and convert to °C.
  // The math matches a TMP36-style sensor (10 mV/°C, 500 mV at 0 °C); that
  // sensor type is an assumption here, so adjust the formula for your hardware.
  reading = analogRead(A0);
  int temp_calc = (reading*3.3/4095)*100 - 50;

   if (millis()-LastUpTime>2000)
   {
      xivelyTemp(temp_calc);
      LastUpTime = millis();
   }
}

// Push one temperature sample to Xively's TCP socket API
// (a JSON "put" to /feeds/FEED_ID on port 8081).
void xivelyTemp(int temperature) {

   //Serial.println("Connecting to server...");
    if (client.connect("api.xively.com", 8081)) 
    {

        // Connection successful, update datastreams
        client.print("{");
        client.print("  \"method\" : \"put\",");
        client.print(" \"resource\" : \"/feeds/");
        client.print(FEED_ID);
        client.print("\",");
        client.print("  \"params\" : {},");
        client.print("  \"headers\" : {\"X-ApiKey\":\"");
        client.print(XIVELY_API_KEY);
        client.print("\"},");
        client.print("  \"body\" :");
        client.print("    {");
        client.print("      \"version\" : \"1.0.0\",");
        client.print("      \"datastreams\" : [");
        client.print("        {");
        client.print("          \"id\" : \"Sensor_Data\",");
        client.print("          \"current_value\" : \"");
        client.print(temperature);
        client.print("\"");
        client.print("        }");
        client.print("      ]");
        client.print("    },");
        client.print("  \"token\" : \"0x123abc\"");
        client.print("}");
        client.println();

        ledStatus(2, 500);        
    } 
    else 
    {
        // Connection failed
        //Serial.println("connection failed");
        ledStatus(4, 500);
    }


    if (client.available()) 
    {
        // Read response
        //char c = client.read();
        //Serial.print(c);
    }

    if (!client.connected()) 
    {
        //Serial.println();
        //Serial.println("disconnecting.");
        client.stop();
    }

    client.flush();
    client.stop();

}
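// Note: the client.available() check above runs immediately after the request is
// sent, so the server's response usually hasn't arrived yet and nothing gets read
// before the socket is closed. A hedged alternative, if you want to see the reply
// (the 3-second timeout is an arbitrary choice):
//
//     unsigned long start = millis();
//     while (client.connected() && millis() - start < 3000) {
//         while (client.available()) {
//             Serial.print((char)client.read());   // or parse/discard the response
//         }
//     }
//     client.stop();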


// Blink the onboard LED x times: t ms on, t ms off.
void ledStatus(int x, int t)
{
    for (int j = 0; j < x; j++)
    {
        digitalWrite(ledD, HIGH);
        delay(t);
        digitalWrite(ledD, LOW);
        delay(t);
    }
}

int tempCalculation(String command) {
    int tempCalc = (reading*3.3/4095)*100 - 50;
    return tempCalc;
}

int analogReading(String command) {
    return reading;
}
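Side note: the trigger pin (A7) is declared above but never used. Purely as a sketch, here is one way it could pet the external ATtiny watchdog; the active-high polarity, pulse width and 5-second kick interval are all assumptions that would have to match the ATtiny sketch:

unsigned long lastKick = 0;

void kickAttiny() {
    digitalWrite(trigger, HIGH);   // assumed active-high kick pulse
    delay(1);                      // assumed ~1 ms pulse width
    digitalWrite(trigger, LOW);
}

// In setup():  pinMode(trigger, OUTPUT);
// In loop():
//   if (millis() - lastKick > 5000) {
//       kickAttiny();
//       lastKick = millis();
//   }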

@BDub

Hey I have this new temp sensor code worked up and now I need to flash the Spark Core with your WatchDog Firmware + the Temp Code.

I know how to add my main program to the Application.cpp file in NetBeans.

I think I put the .h file in the inc folder, which is inside the core-firmware folder, right? That's where all the other .h files are.

Now in the Spark IDE there are two .cpp files: my main loop file, plus the SHT1x.h and SHT1x.cpp files that share the same name. Where do I put the SHT1x.cpp file? In the same folder as the Application.cpp file?

Hope you can follow what I'm asking.

Yes, I do. You need to put SHT1x.h in the INC folder, and SHT1x.cpp in the SRC folder along with application.cpp. Then edit the build.mk file in the SRC directory in Notepad… just right-click, edit. Add a line under the one where you see application.cpp: copy that line, paste it below, and change application.cpp to SHT1x.cpp in the copy. Save, compile, Bob's your uncle :wink:

edit: don't forget to add #include "SHT1x.h" to your application.cpp
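For reference, a hypothetical sketch of what those two edits might look like; the exact variable name and path form in build.mk may differ in your copy of core-firmware, so treat this as a guide rather than a literal diff:

# core-firmware/src/build.mk (excerpt; CPPSRC and TARGET_SRC_PATH are assumptions)
CPPSRC += $(TARGET_SRC_PATH)/application.cpp
CPPSRC += $(TARGET_SRC_PATH)/SHT1x.cpp

and at the top of application.cpp:

#include "SHT1x.h"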


Hi BDub
FYI, I am a newbie at this stuff
I noticed there is a newlib_stubs.cpp, but there is no newlib_stubs.h.
Do you know why that is?
Thanks!
Dup

@Dup I have no idea. Do you need to do something with that file?

@BDub Hey, I received some feedback from @Dub, who is also running your Watchdog firmware, and he told me that the Core is resetting a lot more than it ever did running the stock Spark firmware. I noticed the same thing: the Spark Core resets a lot more often with your Watchdog firmware.

I could get the stock firmware to stay online for almost 24 hours before a reset sometimes. It would stay up for 6 hours easily without issues. With the Watchdog firmware it was resetting every few hours on average. I have my RED LED indication drop off 30 minutes after a recovery, so I can tell how often it's been resetting. Because the reset process happens so quickly and reliably, you really couldn't tell in a sensor-logging application, and I didn't really care as long as it stayed connected without having to manually reset it.

I'm just curious if you have any thoughts about why the Watchdog is catching a reset fault a lot more often than the stock firmware was freezing due to a fault. It's like the Watchdog is a lot more sensitive somewhere, which causes resets much more often than with the stock firmware.

How often do you see the RED breathing LED after a reset? Or are you even watching it during the day?

Just figured I would throw it out there.

Interesting data. I would say it's not the watchdog that causes things to reset more… it's more that something is blocking the watchdog from being kicked at least every 26.208 seconds, and that blocking then causes the watchdog to reset the Core.

If you were able to stay online and actually communicating for almost 24 hours before, I would say that the Core can be blocked somewhere for longer than 26.208 seconds without losing its connection with the cloud completely, recovering on its own (without a watchdog). However, since that is not always the case… and sometimes it can't recover (e.g. CFOD)… you are still better off with the watchdog running and resetting if the system is blocked for more than 26.208 seconds.

I haven't been testing it myself because I don't have CFOD problems… so it would likely never reset.

All of this is also kind of a temporary fix (in lieu of the ATtiny85 solution) until the system is more stable and the watchdog need not come into play (and reset the core repeatedly).
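To illustrate the blocking point with a sketch (the names here are hypothetical stand-ins, not the actual WatchDog firmware API): as long as loop() comes around often enough the watchdog gets kicked in time, but any single call that blocks longer than the window starves it and the Core resets.

void kickWatchdog() {
    // stand-in for whatever the real firmware does to refresh the watchdog;
    // it must run at least once every 26.208 seconds
}

void setup() {
}

void loop() {
    kickWatchdog();

    int v = analogRead(A0);   // quick, non-blocking work is fine
    (void)v;

    // delay(30000);          // 30 s > 26.208 s: this would starve the watchdog and the Core resets
}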

Hey BDub,
No, it was just something I noticed while I was running through the folders. Just trying to help identify issues… maybe I should leave that to the experts :smile:
Dup

Yeah, the Watchdog solution is the only working fix for the Spark freezing issue out there, so it's certainly better than any other solution at the moment. I don't want ya to waste any more time on it, but I did find it interesting that somebody else said the same thing happened to them.

Hopefully the TI Genius Bar can figure it all out tomorrow :smiley:


@RWB Did you try the spark_master_new? If so, please pull a fresh copy from core-firmware; that test app was using too much memory. See https://community.spark.io/t/bug-bounty-kill-the-cyan-flash-of-death/1322/373

@david_s5 I tried for 2 hours to load it but couldn't get it to work. I'll wait for the Spark Update Rollout.

Wow! I haven't read through this humongous thread (I probably don't have enough life left to do that…!) but I'm hoping that someone here has flagged up that the System.reset() call doesn't actually seem to do the same thing as pressing the reset button.

In my program some of the initialisations re-run after calling System.reset(), but some don't. Unless I am misunderstanding something, shouldn't the page linked below be updated or explain this more fully?

http://docs.spark.io/firmware/#system-reset

I see that this thread offers an alternative which I will try.

I have a question about the code on that page (copied below)

lastReset will surely always be set to zero (or very close to zero) after a reset - so why bother with it? Perhaps I am missing something?

// **********
uint32_t lastReset = 0;

void setup() {
    lastReset = millis();
}

void loop() {
    // Reset after 5 minutes of operation
    // ==================================
    if (millis() - lastReset > 5*60000UL) {
        System.reset();
    }
}

Not necessarily.


Edit: This is not actually the case for the Spark Core - my fault, sorry :blush:

Since millis() reads an internal counter of the µC which will not be reset to zero during a soft reset, you have to initialize lastReset to account for the actual power-on time of the micro.


Furthermore, there are some other registers and settings that survive a soft reset (the backup domain), and this is good for multiple reasons (e.g. to check the cause of the reset/reboot, fault recovery, …).

Ah okay, thanks. I understand now.

But anyway, you confirm my other concern that the page describing System.reset() is rather misleading, in that it's plainly NOT going to have the same effect as pressing the reset button - and some of the differences are important. Can we get that updated, guys?


Edit: Forget the parts about millis() - my bad :blush:

Hmm, there may be some discussion about the wording :wink:
'Just like' does not necessarily denote the same thing as 'the same effect'.
While it seems like nitpicking, it might resolve the misunderstanding/misconception.

In almost all respects (at least for the average user) System.reset() does cause the Core to behave the same as after a hard reset, but with some minor (mostly ignorable) differences - hence only 'like' :wink:

I guess the mentioned millis() difference might be one of the most prominent ones, while info inside the backup domain will only concern programmers very close to the bare metal.

But this indeed might be some extra info for the docs :+1:

If you could outline what you are referring to here…

We might find a way that makes System.reset() work for your use cases just the same and not only like a hard reset.
My first guess would revolve around your use of millis().

If you want millis() to behave as if it starts from 0 on reset you could do this:

uint32_t start_time;

// Global object whose constructor runs before setup(), capturing millis() at boot.
class Startup {
public:
    Startup() {
        start_time = millis();
    }
};

Startup start;

uint32_t millis_since_start() {
    return millis() - start_time;
}

This grabs the millis() counter at the earliest possible opportunity and then subtracts that to give the time elapsed since startup.
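For example, the 5-minute reset from the docs snippet earlier in the thread could then be written against millis_since_start(); this is just a sketch on top of the code above:

void loop() {
    // Reset after 5 minutes of operation, measured from this boot
    if (millis_since_start() > 5*60000UL) {
        System.reset();
    }
}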

There's a part of me that feels this should be part of the standard firmware, since not having millis() reset seems to go against expectations. For folks who want the actual hardware millis timer, i.e. the time since power-on, there's HAL_Timer_Milliseconds() (this will be in the 0.4.0 release, presently the feature/hal branch).


Resetting millis() on a soft reset would certainly be consistent with the Arduino paradigm, it seems.

(and consistent with the use of the word "reset")


Thanks for the replies guys.

ScruffR - to your question about what fails to happen on a soft reset…

I haven't investigated it in detail, and no, I hadn't even realised that millis() wasn't zeroed, because I'm not really using it.

The thing that drew my interest was that the chevron-highlighted line in the following code (part of my setup()) does not execute on a soft reset, when all the others do. That line executes after a hard reset, though.
############
Serial1.begin(4800);
>>>Serial1.println(VERSION);
pinMode(LED_PIN,OUTPUT);
initOutputBuffer();
showNetworkDetails();
##########

Not even a partial (or corrupted) part of the VERSION string appears at the serial port; it seems to be missed out altogether, so I don't think it's a race condition?

I'm assuming that it's some weird consequence of VERSION being declared with a #define up top? As in…

#define VERSION "Project Name: Version 3.2 03-Feb-2015"

Everything else seems to work okay after the soft reset, but I was just intrigued that that line was seemingly not executed.

In fact, I hadn't realised that millis() doesn't restart. That could easily be a problem for me in future projects, though, where I'd like to use a Spark Core at the heart of Smart Home devices which periodically reset themselves - perhaps every four weeks.

Best regards
Alan T

How does Serial1.println() behave if you add some delay() at the beginning of setup()?
I'm not sure how the CC3000 takes a soft reset. Maybe it comes online quicker after a soft reset, and hence some race condition might play a role.

#define shouldn't be a problem, since the preprocessor just substitutes the string literal for VERSION before the compiler does its job. So if you put your string in directly, it should behave the same.
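For example, a minimal sketch of the delay idea, using your VERSION define; the 50 ms value is an arbitrary guess, so experiment with it:

#define VERSION "Project Name: Version 3.2 03-Feb-2015"

void setup() {
    Serial1.begin(4800);
    delay(50);                  // give the port/module a moment to settle after a soft reset
    Serial1.println(VERSION);
    // ... rest of setup() as before
}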


Looks like that's it. If I put ANY delay(x) in there, the VERSION line appears as it should.
I tried delay(1000) and that worked.
I tried delay(100) and that worked.
I tried delay(1) and that worked too.
Commented out the delay and it no longer worked.

So, yeah, looks like you're right - it's just the I/O not resetting quite as fast as the CPU. If it's such a close-run race condition, it seems a bit odd to me that it isn't outputting a partial version string or some junk - but I s'pose it would make sense if you knew the specifics of how the CC3000 works.

Many thanks!
Best regards
Alan T
