Time is incrementing fine; it just takes several hundred loop passes in the simulator unless you also have a pause in your loop. E.g. pause 100 means you only need about 10 loops before 1 second elapses.
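A minimal sketch of what I mean, assuming an M2-series part (where the built-in time variable counts elapsed seconds):

main:
    ; ... normal loop work here ...
    pause 100    ; 100 ms per pass, so roughly 10 passes per elapsed second
    goto main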
Thank you, this little tip made all the difference.
My code is fairly complex: it has a great many things to do, and because it interacts with real-world events on an analog input, it is organised as small blocks that run quickly and then return to the main supervisory loop.
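Roughly this shape (the pin, threshold and task name here are purely illustrative, not my real code):

symbol THRESHOLD = 128        ; illustrative trigger level

main:
    readadc C.1, b0           ; sample the analog input
    if b0 > THRESHOLD then
        gosub handle_event    ; small block, returns quickly
    end if
    goto main

handle_event:
    ; short burst of work, no pauses
    return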
There isn't any scope for delays in the final version (except for a few hundreds-of-microseconds delays while generating external sync and sequencing commands), which probably explains why I only got a few timer increments.
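One of those sync pulses looks roughly like this (the output pin is illustrative; pauseus counts in 10 us units at a 4 MHz clock):

high B.7          ; start of sync pulse
pauseus 30        ; 30 x 10 us = 300 us
low B.7           ; end of sync pulse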
I've now added a temporary pause 100 as you suggested for the simulation, and although it's slow, I can now confirm that my time-based code segments are running properly.
It's a big ask, and probably beyond the scope of the product, but is there a way for me to determine, short of actually building and testing real-world hardware, exactly how long a given piece of code will take to execute?
E.g. if I have a section of code that executes on condition 'x', I'd like to know exactly how many microseconds it would take at a given processor clock speed.
(Knowing that, I can determine just how long I need to make a synchronising signal from another device, and be certain it will be caught, without making it any longer than necessary. I don't want to send a 100ms sync if 3ms will do!)
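For instance, with purely illustrative numbers: if the worst-case pass through my supervisory loop took 2.5ms, a 3ms sync pulse would be guaranteed to overlap at least one poll of the input, while 100ms would be more than thirty times longer than necessary.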