Delay(), vs "soft delay" [end of discourse]

@Jack,

I’m not sure if I’m understanding you correctly. When you use a delay(), the code is technically “stuck there”, and any code after that delay() line will not run until the specified time is up.

Not sure if it’s implemented yet, but in future, :cloud: stuff will be processed while user code has something blocking, e.g. a delay() or sensor polling.

@kennethlimcp,
So, if only one process is coded to run, a delay will simply run that process again after the delay time.
But if there are two or more processes in the script, the other processes will not get to run until the delay() has timed out. Hence, if you are running more than one process, use a “soft delay” rather than a delay()? Am I still not clear?

@Jack,

Maybe it’s better for you to write some simple pseudo-code :smile:

Only 1 line runs at any one time.

Example:

void loop() {
  // line 1
  // line 2
  delay(2000);
  // line 3
  // line 4
}

In this example,

Line one will execute and, once done, line two will execute. Once line two is processed, there is a delay of 2 seconds before line 3, followed by line 4, is executed. After that, the cycle repeats, as indicated by the name loop().

// just pseudo-code; spelling is a mess…

unsigned long nowMilli;
unsigned long nextToggleLedMilli;
unsigned long nextCheckDoorMilli;

void loop() {
  nowMilli = millis();
  if (nowMilli > nextToggleLedMilli) toggleLed();
  if (nowMilli > nextCheckDoorMilli) checkDoor();
}

void toggleLed() {
  // toggle the LED, then schedule the next toggle
  nextToggleLedMilli = nowMilli + 7000;
}

void checkDoor() {
  // do the check and sound the buzzer if the door is open
  nextCheckDoorMilli = nowMilli + 4;
}
=====================

That is two processes: one toggles the LED about every 7 seconds, the other checks the door about every 4 milliseconds. I don’t think you can do those two processes if you use the delay() command. What do you think?

OK, I get what you are trying to do.

Also, code formatting in the forum goes like this:

``` <--- add this

post code here

``` <-- add this

Yes, so if you use a delay(), trying to run 2 processes at 2 different intervals becomes a problem.
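For example (a minimal sketch with made-up stub functions, just to show where it breaks down):

void toggleLed() { /* toggle the LED */ }
void checkDoor() { /* check the door, buzz if open */ }

void loop()
{
  toggleLed();
  delay(7000);   // the whole sketch is stuck here for 7 seconds
  checkDoor();   // so the "every 4 ms" door check only gets a chance about every 7 seconds
}

While delay() owns the processor, nothing else in loop() can run.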

@peekay123 has a timer library that allows you to do that in a more elegant way. The delay() method is definitely not the way to go.

Hope this helped! :wink:


@Jack, check out the elapsedMillis library in the web IDE. With it you can create millisecond “timers” that magically run in the background (they don’t but it appears that way).

The way you do “multiple” processes, as you call them, is really cascaded timers triggering at their designated intervals. For example:

#include "elapsedMillis.h"   // add the elapsedMillis library from the Web IDE

elapsedMillis event1;
elapsedMillis event2;

void setup()
{
  event1 = 0;  // reset the event "timers"
  event2 = 0;
}

void loop()
{
  if (event1 > 3)  // 4ms or more have passed
  {
    checkDoor();
    event1 = 0;  // reset event timer
  }

  if (event2 > 7000)  // 7000ms or more (7sec) have passed
  {
    toggleLed();
    event2 = 0;  // reset event timer
  }
}

You have the right idea. There are a couple of catches, however:

  1. The code called within the event can’t run longer than the shortest event time you use
  2. When loop() ends, it will call the background firmware, which may run for several milliseconds (until the new firmware for both the Core and the Photon is released), so events may not be accurate unless you turn off the Cloud connection (see the sketch below).
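One way to deal with that, if the jitter matters, is to take manual control of the connection, roughly like this (just a sketch, assuming SYSTEM_MODE(MANUAL), Spark.connect() and Spark.process(); in MANUAL mode the cloud housekeeping only runs where you call it):

SYSTEM_MODE(MANUAL);   // the background firmware no longer manages the Cloud for you

void setup()
{
  Spark.connect();     // connect once; leave this out to run fully offline
}

void loop()
{
  // ... time-critical event checks go here ...

  if (Spark.connected())
  {
    Spark.process();   // service the Cloud only when it won't disturb your timing
  }
}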

:smiley:


But to prevent any hiccups with type incompatibilities (uint32_t vs. int vs. number literals), it has proven useful to just snapshot millis() in your sub-process and check against ((millis() - snapshotMillis) >= processDelayTime).
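In code that pattern looks roughly like this (only a sketch; snapshotMillis, processDelayTime and the 4ms interval are placeholder names/values):

unsigned long snapshotMillis = 0;          // last time the sub-process ran
const unsigned long processDelayTime = 4;  // desired interval in ms

void loop()
{
  // unsigned subtraction stays correct across the millis() rollover
  // (roughly every 49.7 days), unlike comparing against an absolute "next time"
  if ((millis() - snapshotMillis) >= processDelayTime)
  {
    snapshotMillis = millis();   // fresh snapshot for the next interval
    // ... run the sub-process here, e.g. check the door ...
  }
}

This is essentially what elapsedMillis does for you behind the scenes anyway.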

On the other hand, there are also interrupt-driven ways to achieve some of these things ...


And beaten again by @peekay123 :wink: :+1:


Generally, I don't need timing accurate to the millisecond, or nanosecond. By using the > comparison, it will execute pretty quickly, probably within 1/1000 of a second. Is that not correct?

@Jack, the Core runs at 72MHz and the Photon at 120MHz so I would say the “>” comparison will run in nanoseconds! In your example, it is the toggleLed() and checkDoor() functions which need to run quickly.

I think the toggle LED (every 7 seconds) will not be delayed, nor will the check door every 1/1000 of a second, since the processor is so fast (no delay()). Each of these two processes should take way less than a millisecond to complete. What do you think?

I think 15 more processes (functions) could be added here, and the loop() would still take less than 1/100,000 of a second to repeat itself. Of course, no delay() commands. Maybe WiFi/internet problems could slow that down though. What do you think?
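If I wanted to check that instead of guessing, I suppose I could just count loop() passes per second, something like this (a rough sketch, nothing official):

unsigned long loopCount = 0;
unsigned long lastReportMillis = 0;

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  loopCount++;

  if (millis() - lastReportMillis >= 1000)
  {
    Serial.print("loop() passes in the last second: ");
    Serial.println(loopCount);   // shows how fast loop() really spins with your code in it
    loopCount = 0;
    lastReportMillis = millis();
  }
}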

I agree. But if we don't use delay() commands in those functions, they should execute really fast. I would not expect them to need any heavy math calculations to slow down the processor, and they will not need any looping, since we take care of that in the main loop().
Thanks
Just FYI, I have been coding for about 40 years, and I don't think I ever needed my main loop() to run faster than one hundred-thousandth of a second (looping 100,000 times a second). But I haven't coded any raw video kind of stuff, or Bitcoin code.

@Jack, the worst case scenario is ALL the timer events fire off at the same time and all the event code runs. As you say, as long as all those functions run without delay, then everything works great.

In my designs I don’t assign a timer per event. Instead, I run a number of “base” timers such as 1ms, 100ms and 1000ms timers. In each, I have counters for the desired timed events. So for a 7ms event, I have a counter in the 1ms event that must count up to 6 before triggering, for example. The example of your 7 second LED could be: after 7 secs, flash the LED for 1 sec. So in the 1000ms event I have a counter that goes to 6, which then gets reset but also sets a flag to flash the LED. This flag is also serviced in the 1000ms event and it turns on the LED and sets another flag to turn it off. You get the idea.
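A rough sketch of that idea for just the 7-second LED, using a single 1000ms base timer (the D7 pin, the counter names and the use of elapsedMillis are assumptions here, not a finished design):

#include "elapsedMillis.h"

elapsedMillis base1000ms;        // one generalized 1000ms base timer
int ledCounter = 0;              // counts seconds toward the 7s LED event
bool flashLed = false;           // flag: LED is currently on for its 1 second flash

void setup()
{
  pinMode(D7, OUTPUT);           // D7 is the on-board LED on the Core
}

void loop()
{
  if (base1000ms >= 1000)        // the 1000ms base event
  {
    base1000ms = 0;

    if (flashLed)                // flag serviced in the same base event
    {
      digitalWrite(D7, LOW);     // turn the LED off again after its 1 second
      flashLed = false;
    }
    else if (++ledCounter >= 7)  // count up to 7 seconds
    {
      ledCounter = 0;
      digitalWrite(D7, HIGH);    // flash the LED for the next second
      flashLed = true;
    }
  }
}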

For very fast accurate events I use a hardware timer and interrupts. Otherwise the software timers are great. In the Photon, there will also be FreeRTOS software timers that will fire a callback when triggered, much like an interrupt. The minimum resolution will be 1ms. :smiley:
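For the curious, that kind of callback timer would be used something like this (purely a sketch; the actual Photon software-timer API had not been released when this was written, so the Timer name and constructor are assumptions):

void toggleLed();                  // forward declaration for the timer callback

Timer ledTimer(7000, toggleLed);   // assumed software timer: call toggleLed() every 7000 ms
bool ledOn = false;

void setup()
{
  pinMode(D7, OUTPUT);
  ledTimer.start();                // fires in the background, much like an interrupt
}

void loop()
{
  // loop() stays free for other work; no delay() needed for the LED
}

void toggleLed()
{
  ledOn = !ledOn;
  digitalWrite(D7, ledOn ? HIGH : LOW);
}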


“This flag is also serviced in the 1000ms event and it turns on the LED and sets another flag to turn it off. You get the idea.”
I halfway understand that. Is that like my example, where within the function I set the next time to run the function? So for the LED on/off: in the function, if the LED is turned on, I set the next time to call the toggle function to 1/10 second later, but if the LED is turned off, I set the next time to call the toggle function to 7 seconds later. If I don’t want the LED to toggle during the night, inside the function I can set the next time to toggle to 9 hours, based on the time of day. I give the functions the control to say when to run them again. Is that your idea?
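Roughly like this (just a sketch; the D7 pin, the 22:00 to 07:00 night window and the use of Time.hour() are my assumptions, and Time needs to have been synced from the Cloud):

bool ledOn = false;
unsigned long nowMilli = 0;
unsigned long nextToggleLedMilli = 0;

void setup()
{
  pinMode(D7, OUTPUT);
}

void loop()
{
  nowMilli = millis();
  if (nowMilli > nextToggleLedMilli) toggleLed();
}

void toggleLed()
{
  if (Time.hour() >= 22 || Time.hour() < 7)
  {
    digitalWrite(D7, LOW);                                 // keep the LED off overnight
    ledOn = false;
    nextToggleLedMilli = nowMilli + 9UL * 60 * 60 * 1000;  // don't call this again for ~9 hours
    return;
  }

  ledOn = !ledOn;
  digitalWrite(D7, ledOn ? HIGH : LOW);
  nextToggleLedMilli = nowMilli + (ledOn ? 100 : 7000);    // on for 1/10 sec, then off for 7 sec
}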

@Jack, what I mean is that I run “base” events that I then use for counters in the range of those base times. So for ms events, I use the 1ms timer event, etc. This saves me having a whole bunch of different timers (e.g. elapsedMillis) and replaces them with just a few generalized timers. Hope that makes sense.

For super long events based on time of day you may also want to look at TimeAlarms by @bko. :smile:


Thanks for helping me with this. But I don’t think we are on the same page yet.
My code is not using “timers”. It sets a variable to the next time, based on millis(), that a function needs to run. Using “unsigned long”, that should be good for about 2 weeks I think. Does that sound right?

Your code is great, but I still like mine better. Mine seems cleaner inside the loop(), and it allows the function to adjust/set when it should be called next.

Back to the topic of this thread: I think we have established that the delay() command can be used effectively in setup(), but it should not be used in loop() if there is more than one process. Is that your idea also?


Even though we can almost make this processor look like it can run several events at the same time, in fact it is not true multitasking. We don't want to use any delay() commands within loop(), or the functions it calls. Is that your idea?

Even if there were a delay(2) in each of the processes/functions, I think it would not be a problem. But no delay() is best.


Hi @Jack

I think you are trying to enforce a philosophical point and not a technical one here. This is classic cooperative multitasking, where any subroutine that blocks will disturb the cooperation and penalize the other subroutines. That doesn’t mean, however, that your spec is required to be “no delays in subroutines”; on the contrary, a perfectly valid spec might be that delays of up to 10ms are allowed in any particular subroutine.

The delay can be whatever is acceptable in your application, ranging from the code’s minimum execution time up to any maximum you choose. If you are philosophically attracted to minimum delay, that’s great, but it is not the only valid approach. Different programmers are going to address this in different ways.

Whether the cooperative subroutines are called by loop() as a scheduler or by a timer interrupt is really just a preference. @peekay123’s hybrid scheme of using timer interrupts to call a 1 millisecond task that then dispatches based on a count is just a small refinement (and clever!).

There are a lot of possible “right” answers here, and I am glad you found yours, but that does not rule out other options for other applications.


I appreciate your input, and learn from it.
I did not intend to say anyone’s code is bad (actually the opposite). There are many ways to code. I just try to learn and offer my opinion, hoping others will offer theirs, so I and others on the forum can learn more. Please, nobody take my questions/comments as offensive.
Thanks, Jack


In my opinion, as I had said, some short delays would not be so bad, and if I only run one process/function, any delay would be OK.
Hence the title of this thread: Delay(), vs “soft delay”.
What do you think? Do you use the delay() command a lot?