[Simh] Representative Instruction Execution Timing

Vince Mulhollon vince at mulhollon.com
Tue Apr 5 11:23:02 EDT 2011


On Tue, Apr 05, 2011 at 10:47:30AM -0400, Dalby, Jeremy R (GE Power & Water) wrote:
> I am trying to figure out how to slow down the SIMH PDP-8 simulator such
> that the instructions are executed with similar timing to that of a real
> PDP-8.
...
>  Any thoughts or ideas would be much appreciated.

Just as an upfront warning, it's probably impossible to implement 
a system like this that will run on everything that simh compiles on.

Select a batch size below the human perceptual level.  Let's say 1024
instructions.

The user hits run on the command line, and simh remembers a hi-res 
timestamp.  Calculate the theoretical timestamp in the future after 
emulating 1024 instructions.  If your hi-res timestamp is any good, 
this is something like adding a small integer to a 64-bit integer.  
Emulate 1024 instructions in a for loop.  Ask the OS for the latest 
hi-res timestamp; if it's behind the calculated value, sleep, 
otherwise do another batch of 1024 instructions.  I suppose if you 
have a hi-res enough timestamp and the OS/etc is fast enough, you 
could reduce your batch from 1024 instructions to perhaps one 
instruction.
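
Here is a minimal sketch of that pacing loop in C, assuming POSIX
clock_gettime()/nanosleep() are available; emulate_one_instruction()
and the ~1.5 us per-instruction figure are placeholders, not anything
simh actually provides.

    /* Batch-and-sleep pacing loop sketch. */
    #include <stdint.h>
    #include <time.h>

    #define BATCH        1024          /* instructions per batch          */
    #define NS_PER_INSTR 1500          /* assumed ~1.5 us per PDP-8 instr */

    extern void emulate_one_instruction(void);  /* hypothetical hook */

    static uint64_t now_ns(void)       /* current monotonic time in ns */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    void run_paced(void)
    {
        uint64_t target = now_ns();    /* remember a hi-res timestamp at "run" */

        for (;;) {
            /* theoretical finish time of the next batch: add a small integer */
            target += (uint64_t)BATCH * NS_PER_INSTR;

            for (int i = 0; i < BATCH; i++)     /* emulate 1024 instructions */
                emulate_one_instruction();

            uint64_t now = now_ns();
            if (now < target) {                 /* ahead of schedule: sleep */
                struct timespec pause = {
                    .tv_sec  = (time_t)((target - now) / 1000000000ull),
                    .tv_nsec = (long)((target - now) % 1000000000ull),
                };
                nanosleep(&pause, NULL);
            }
            /* else: behind schedule, start the next batch immediately */
        }
    }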

Or use a hard real-time operating system, or something like Linux that 
has RT extensions, and only execute the proper number of instructions 
per guaranteed slice.
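
On Linux, one way to approximate that (a sketch, assuming the POSIX
SCHED_FIFO policy is available and the process has the privilege to use
it) is to run the pacing loop at a real-time priority and sleep to
absolute deadlines so scheduling jitter does not accumulate:

    #include <stdint.h>
    #include <sched.h>
    #include <time.h>

    /* Put this process under the SCHED_FIFO real-time policy
       (needs root or CAP_SYS_NICE; the priority value is arbitrary). */
    int go_realtime(void)
    {
        struct sched_param sp = { .sched_priority = 50 };
        return sched_setscheduler(0, SCHED_FIFO, &sp);
    }

    /* Sleep until an absolute CLOCK_MONOTONIC deadline rather than for
       a relative interval, so delays do not pile up batch after batch. */
    void sleep_until(uint64_t deadline_ns)
    {
        struct timespec ts = {
            .tv_sec  = (time_t)(deadline_ns / 1000000000ull),
            .tv_nsec = (long)(deadline_ns % 1000000000ull),
        };
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, NULL);
    }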

Historically this kind of pacing has been strongly discouraged because 
some of the legacy platforms (Windows XP, etc.) think a "high-res 
timestamp" is something with wallclock hh:mm:ss granularity, unlike 
something more modern such as Linux.
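
If you do have to cover both, the usual trick is a small portability
shim; a sketch, assuming QueryPerformanceCounter() on Windows and
clock_gettime() on POSIX systems as the two back ends (timestamp_ns is
just a made-up name):

    #include <stdint.h>

    #ifdef _WIN32
    #include <windows.h>

    uint64_t timestamp_ns(void)
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);    /* counts per second       */
        QueryPerformanceCounter(&now);       /* counts since some epoch */
        return (uint64_t)((double)now.QuadPart * 1e9 / (double)freq.QuadPart);
    }

    #else
    #include <time.h>

    uint64_t timestamp_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }
    #endif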

Assuming you go "modern only", that brings up the other traditional 
argument of post-2005-era HPET vs. pre-2005-era PIT timer hardware.  
I'm guessing software tuned for a HPET motherboard will not work on a 
now six-year-old PIT motherboard.

Then there is the entertainment value of ktime_t on Linux being
a 64-bit nanosecond counter on 64-bit hardware, but a merged 32 bits
of seconds and 32 bits of nanoseconds (nanoseconds in the least
significant word) on legacy 32-bit hardware/OS.  That makes the
software design weird.  ktime_t is circa 2006-ish, so it may or may
not be obsolete in 2011; I haven't checked.  But I do remember this
problem.
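
Roughly, the two layouts look like this (a user-space illustration
only; the real kernel definition of that era differs in detail and
depends on endianness and config options, and example_ktime_t is a
made-up name):

    #include <stdint.h>

    typedef union {
        int64_t ns;          /* 64-bit builds: one plain nanosecond counter */
        struct {
            int32_t nsec;    /* 32-bit little-endian builds: nanoseconds in */
            int32_t sec;     /* the low word, seconds in the high word      */
        } tv;
    } example_ktime_t;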

You need to research why the TSC is a bad idea on modern multicore 
processors before trying to go ultra-low-level like that.  At the
very least, include self-monitoring code such that crazy results
mean you skip any sleep and reinitialize the timing algorithm.  For
example, if the timer difference implies the process has been sleeping
or swapped out for 400 years... etc.
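
Something along these lines (a sketch; timestamp_ns() is the wrapper
made up earlier, and the one-second bound is an arbitrary choice):

    #include <stdint.h>

    extern uint64_t timestamp_ns(void);   /* from the shim sketched above */

    #define MAX_LAG_NS 1000000000ull      /* > 1 s behind: assume trouble */

    static uint64_t target_ns;            /* next batch's theoretical finish */

    void check_and_pace(void)
    {
        uint64_t now = timestamp_ns();

        /* Suspend, swap-out, or a bad clock source can make the delta
           absurd; don't try to "catch up", just re-anchor the schedule. */
        if (now > target_ns && now - target_ns > MAX_LAG_NS) {
            target_ns = now;              /* reinitialize the timing algorithm */
            return;                       /* and skip any sleep this round     */
        }
        /* otherwise sleep toward target_ns as in the pacing loop above */
    }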



