[Simh] Simulator development: Advice on system timing

Mark Pizzolato Mark at infocomm.com
Thu Oct 26 20:39:36 EDT 2017


On Thursday, October 26, 2017 at 10:56 AM, Seth Morabito wrote:
> I'm battling system timing in the 3B2/400 emulator I'm working on. As
> with any system, particular activities such as disk seeks, reads, and
> writes must be completed within certain margins -- if they happen too
> early or too late, an interrupt will be missed. But the "fudge factor",
> so to speak, seems pretty tight on the 3B2, possibly because it runs at
> a system clock frequency of 10MHz.
> 
> In the simulator, I really want to be able to say "call sim_activate()
> with a delay of 8ms (or 720us, or whatever) of simulated time". I'm
> trying to come up with the best strategy for mapping simulator steps
> to simulated time.
> 
> If I know that the real system runs at 10MHz, I know each clock cycle
> takes 100ns. So far so good -- but of course on a real system, each
> instruction takes several system clock steps. If I had to hazard a
> guess, I'd say each real instruction on a real system takes an average
> of 8-10 clock cycles, depending on the instruction length and number of
> memory accesses. Each step of the simulator does a complete instruction
> cycle - fetch, decode, execute, reads and writes - in one go, so it's
> not a direct mapping of simulator step to the 10MHz clock.
> 
> How do I translate this knowledge into accurate delays for
> "sim_activate()" in my disk access routines? Is there a best practice
> for this?

On top of Paul's explanation, here are some relevant concepts from how 
other simulators address this subject.

In general, device simulations use sim_activate() delay times that are 
determined somewhat empirically, based on the minimum instruction 
counts that the software commonly run on the system requires.  For 
many devices that is completely sufficient, and in combination with 
simulator throttling the user experience can reasonably closely reflect 
the experience on the original systems.  This would probably be 
considered best practice.
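
As a concrete sketch of that pattern (hypothetical device, not code 
from the 3B2 simulator; the unit name, service routine, and delay 
constant are all made up for illustration), a disk seek scheduled by 
instruction count might look like:

    #include "sim_defs.h"

    /* Empirically tuned constant.  Back of the envelope: at 10MHz a
       cycle is 100ns, so an 8ms seek is 80,000 cycles, or roughly
       8,000-10,000 simulated instructions at your guess of 8-10
       cycles per instruction. */
    #define DSK_SEEK_DELAY  8000            /* delay in instructions */

    static t_stat dsk_svc (UNIT *uptr);

    static UNIT dsk_unit = { UDATA (&dsk_svc, UNIT_ATTABLE, 0) };

    /* Called when the simulated OS issues a seek command */
    static void dsk_start_seek (void)
    {
        sim_activate (&dsk_unit, DSK_SEEK_DELAY);
    }

    /* Runs DSK_SEEK_DELAY simulated instructions later: post the
       completion status and raise the interrupt here */
    static t_stat dsk_svc (UNIT *uptr)
    {
        return SCPE_OK;
    }

In practice you shrink or grow DSK_SEEK_DELAY until the system software 
stops missing interrupts, which is what the empirical tuning amounts to.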

Meanwhile, if you REALLY want explicit time-based device activation 
delays, you can use sim_activate_after().  This API takes the time until 
activation in microseconds, and that time is dynamically calibrated 
against the actual simulated instruction execution rate.
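
Using the same hypothetical disk unit as in the sketch above, the 
explicit-time version is just:

    /* Schedule the completion 8ms of simulated time from now; simh
       calibrates the usec delay against the actual instruction rate */
    static void dsk_start_seek (void)
    {
        sim_activate_after (&dsk_unit, 8000);   /* delay in usecs */
    }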

- Mark

