[Simh] Simulator development: Advice on system timing

Seth Morabito web at loomcom.com
Thu Oct 26 13:55:42 EDT 2017


Hello all, and especially those who have written or are writing
simulators,

I'm battling system timing in the 3B2/400 emulator I'm working on. As
with any system, particular operations such as disk seeks, reads, and
writes must complete within certain timing windows -- if they finish
too early or too late, an interrupt will be missed. But the "fudge factor",
so to speak, seems pretty tight on the 3B2, possibly because it runs at
a system clock frequency of 10MHz.

In the simulator, I really want to be able to say "call sim_activate()
with a delay of 8ms (or 720us, or whatever) of simulated time". I'm
trying to come up with the best strategy for mapping simulator steps to
simulated time.
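
Concretely, the idiom I'd like to end up with looks something like the
following -- where USEC_TO_DELAY() is just a placeholder name for
whatever microseconds-to-activation-units conversion I settle on, and
disk_unit stands in for the real UNIT:

    /* Hypothetical: schedule a disk operation to complete roughly
       8ms (8000us) of simulated time from now. */
    sim_activate(&disk_unit, USEC_TO_DELAY(8000));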

If I know that the real system runs at 10MHz, I know each clock cycle
takes 100ns. So far so good -- but of course on the real system, each
instruction takes several clock cycles. If I had to hazard a guess, I'd
say each instruction on the real hardware takes an average of 8-10
clock cycles, depending on the instruction length and the number of
memory accesses. Each step of the simulator does a complete instruction
cycle - fetch, decode, execute, reads and writes - in one go, so there's
no direct one-to-one mapping of simulator steps onto the 10MHz clock.
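
If that 8-10 cycles/instruction guess is even roughly right, the
back-of-the-envelope conversion comes out to about one simulated
instruction per microsecond, which would make the placeholder above
something like this (the numbers here are guesses, not measurements):

    #define CPU_HZ            10000000u  /* 10MHz system clock */
    #define CYCLES_PER_INSTR  10u        /* guessed average    */

    /* ~10,000,000 / 10 = ~1,000,000 instructions/sec, i.e. about
       one simulated instruction per microsecond. */
    #define INSTR_PER_USEC    (CPU_HZ / (CYCLES_PER_INSTR * 1000000u))

    /* Microseconds of simulated time -> sim_activate() delay,
       expressed in simulator steps (instructions). */
    #define USEC_TO_DELAY(us) ((int32)((us) * INSTR_PER_USEC))

So an 8ms disk delay would come out to roughly 8000 simulator steps --
but that's only as good as the cycles-per-instruction guess, which is
why I'm wondering whether there's a better-established approach.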

How do I translate this knowledge into accurate delays for
"sim_activate()" in my disk access routines? Is there a best practice
for this?

-Seth
-- 
  Seth Morabito
  web at loomcom.com

