[Simh] Potential design for 'real time' controls

Kevin Brunt k.brunt at ccs.bbk.ac.uk
Mon Apr 18 18:07:26 EDT 2005


On Mon, 18 Apr 2005 11:52:44 -0400 "Hittner, David T." 
<david.hittner at ngc.com> wrote:

[Snipped - I won't requote the post....]

I'm less bothered about a flashy GUI myself. The earliest machine I had 
much to do with was a PDP-11/34, and even that didn't really have much 
in the way of flashing lights; the PDP-11/44 that came next only had 
'RUN' and 'BATT' (the latter of which was always off, indicating that 
there was no backup battery!)

Of course, elaborating the 'separate thread' idea would provide the 
basis for a GUI running in another process. If the connection were made 
over a TCP channel, the GUI could even be on another machine.
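
To make that concrete, here is the sort of thing I have in mind. The 
names, the port number and the toy 'PC?' query are all my own invention 
(and error handling is omitted); none of it is existing SIMH code, so 
treat it purely as a sketch of the shape:

    /* Sketch only: a control thread that accepts one TCP connection
     * and reports a (fake) program counter on request. */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    static volatile unsigned int sim_pc = 0;  /* stand-in for a CPU register */

    static void *ctrl_thread(void *arg)
    {
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        (void)arg;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(2500);          /* arbitrary control port */
        bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
        listen(lsock, 1);
        for (;;) {
            int csock = accept(lsock, NULL, NULL);  /* a remote GUI connects */
            char buf[64], out[32];
            ssize_t n;
            while ((n = read(csock, buf, sizeof(buf) - 1)) > 0) {
                buf[n] = '\0';
                if (strncmp(buf, "PC?", 3) == 0) {  /* trivial query protocol */
                    snprintf(out, sizeof(out), "PC=%06o\r\n", sim_pc);
                    write(csock, out, strlen(out));
                }
            }
            close(csock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, ctrl_thread, NULL);
        for (;;) {                            /* stand-in for the simulator loop */
            sim_pc = (sim_pc + 2) & 0177777;
            usleep(1000);
        }
    }

The point is only that the simulator's main loop stays oblivious; a GUI, 
on the same machine or another one, just opens the socket and asks 
questions.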

What I'd like to see in a future version of SIMH is some generalisation 
of the TCP multiplexor support code. Specifically, I'd like the ability 
to initiate an outgoing connection, and the separation of the 'TCP' and 
'Telnet' components. I'm envisaging the Telnet-specific code being 
taken out and replaced by function calls through some form of call 
table. In particular, I'd like to see an alternative 'encoding' 
suitable for serial synchronous devices. I also think that a lot of the 
vagaries of a typical card reader would be best emulated by a separate 
program, rather than the SIMH 'ATTACH' model.

(I must admit to an ulterior motive here, in the form of a copy of the 
binary of a DG Nova-based emulator of a CDC '200UT' batch station.)
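
Roughly what I mean by the call table, with all the names invented (the 
real multiplexor code looks nothing like this): the TCP transport calls 
through a table of function pointers, so Telnet option handling becomes 
just one pluggable 'encoding' among several.

    #include <stddef.h>

    typedef struct encoding {
        const char *name;
        /* bytes arriving from the TCP connection -> data for the device */
        size_t (*decode)(struct encoding *enc, const unsigned char *in,
                         size_t n, unsigned char *out, size_t outmax);
        /* device output -> bytes to send on the TCP connection */
        size_t (*encode)(struct encoding *enc, const unsigned char *in,
                         size_t n, unsigned char *out, size_t outmax);
    } ENCODING;

    /* A 'raw' encoding, as might suit a synchronous serial line: pass
     * the data straight through with no Telnet escaping at all. */
    static size_t raw_pass(struct encoding *enc, const unsigned char *in,
                           size_t n, unsigned char *out, size_t outmax)
    {
        size_t i;
        (void)enc;
        for (i = 0; i < n && i < outmax; i++)
            out[i] = in[i];
        return i;
    }

    ENCODING raw_encoding = { "raw", raw_pass, raw_pass };

    /* A 'telnet' encoding would do IAC escaping and option negotiation
     * in its encode/decode routines; the multiplexor core would neither
     * know nor care which table it had been handed. */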

On the simulation side proper, as I've said before, what I'd like to 
see is the elimination of the fixed global data structures for devices 
(and the CPU), and of the fixed subroutine names. The SIMH/processor 
simulation interface would then be reduced to a single list of function 
entry points, one of which is "create the data structures for a 
processor and return a pointer to them".
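
Something like the following is the interface I'm imagining; none of 
these names exist in SIMH, they only illustrate the shape of a 'single 
list of entry points':

    struct cpu_instance;                    /* opaque to the SIMH framework */

    typedef struct cpu_ops {
        const char *name;
        struct cpu_instance *(*create)(void);    /* allocate per-CPU state */
        void (*destroy)(struct cpu_instance *cpu);
        void (*reset)(struct cpu_instance *cpu);
        int  (*step)(struct cpu_instance *cpu);  /* execute one instruction */
    } CPU_OPS;

    /* A processor simulation would then export exactly one thing: */
    extern const CPU_OPS pdp11_ops;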

This would require a number of extra data structures, in particular, a 
pair of structures to describe the SIMH and the machine-dependent 
halves of the CPU. It would also entail implicitly or explicitly 
passing around pointers to CPU and device data structures. (This is 
basically equivalent to David's call to use C++ methods, etc. It makes 
the creation of multiple instances of a device easier, and more 
generic.)
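
In code, the pair of structures might look something like this (again, 
invented names; the PDP-11 fields are only there to make the point):

    #include <stdint.h>

    struct cpu_ops;                     /* the entry points, as above */
    struct device_ctx;

    typedef struct cpu_common {         /* the half SIMH itself knows about */
        const struct cpu_ops *ops;
        struct device_ctx **devices;    /* per-instance device contexts */
        int ndevices;
    } CPU_COMMON;

    typedef struct pdp11_cpu {          /* the machine-dependent half */
        CPU_COMMON common;              /* embedded, so one pointer reaches both */
        uint16_t R[8];                  /* general registers */
        uint16_t PSW;
        uint16_t *memory;
    } PDP11_CPU;

    /* Every CPU and device routine takes a pointer to the instance it
     * is to act on, instead of reaching for file-scope globals, so two
     * PDP-11s can coexist in one image. */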

A fancy touch on this is to follow through on the 'configuration' 
thought and have a mechanism for specifying the numbers of devices 
ahead of the initialisation step.

Having got here, it's possible to envisage having several simulations 
bound in the same image so that the first configuration step is to say 
what sort of processor to configure.
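
As a sketch (invented names again, and assuming per-processor 'ops' 
tables like the one above exist), that first configuration step might 
just be a lookup in a table of the simulators bound into the image:

    #include <stddef.h>
    #include <string.h>

    struct cpu_ops;
    extern const struct cpu_ops pdp11_ops, pdp15_ops, nova_ops;

    static const struct {
        const char *name;
        const struct cpu_ops *ops;
    } sim_table[] = {
        { "PDP-11", &pdp11_ops },
        { "PDP-15", &pdp15_ops },
        { "NOVA",   &nova_ops  },
    };

    /* "SET CPU PDP-11", or whatever the command ends up being, would
     * call something like this before any devices are created. */
    const struct cpu_ops *find_simulator(const char *name)
    {
        size_t i;
        for (i = 0; i < sizeof(sim_table) / sizeof(sim_table[0]); i++)
            if (strcmp(sim_table[i].name, name) == 0)
                return sim_table[i].ops;
        return NULL;                    /* unknown processor type */
    }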

The next step is to invert the instruction execution, so that 
instructions are executed from inside the event queue rather than the 
other way round. This treats the CPU entirely as a device, instead of 
having the CPU simulation function process the device event queue 
itself.
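
Concretely, the CPU's 'service' routine would look something like the 
sketch below (invented names; the real sim_instr() loop works the other 
way round), executing a burst of instructions per activation and then 
putting itself back on the queue:

    struct cpu_instance;

    /* assumed to exist elsewhere in this hypothetical framework */
    extern int  cpu_step(struct cpu_instance *cpu);     /* one instruction */
    extern void queue_activate(void (*svc)(void *ctx), void *ctx, int delay);

    #define CPU_BURST 1000      /* instructions per event-queue activation */

    void cpu_svc(void *ctx)
    {
        struct cpu_instance *cpu = ctx;
        int i;
        for (i = 0; i < CPU_BURST; i++)
            if (cpu_step(cpu) != 0)         /* HALT, breakpoint, and so on */
                return;                     /* stop: do not reschedule */
        queue_activate(cpu_svc, cpu, 0);    /* let pending device events run */
    }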

At this point it is possible to envisage having two (or more) 
processors running on the event queue. These could be:


a) independent systems. (Not very interesting.)
b) ditto, with an interconnecting communications channel. (Again not 
   that interesting, as this could be achieved with a network connection.)
c) ditto, but with a more tightly-coupled channel, such as a shared 
   disk or (say) a PCL-11 TDM multi-processor link, where there are 
   timing issues that have to be simulated.
d) shared-memory multi-processor, a la the VAX-11/782 or the PDP-15/76, 
   or even a KMC-11 auxiliary processor. (See the sketch below.)

Yes, I know that there are a lot of issues in emulating a 782, but the 
15/76 ought to be possible with the existing PDP-15 and PDP-11 
simulations, and it is the idea of making multi-processor simulations 
possible, as much as anything, that is important.
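
For the shared-memory case (d), the attraction of the CPU-as-device 
arrangement is that adding a second processor is mostly a matter of 
creating a second instance and scheduling it on the same queue. A rough 
sketch, reusing the invented names from the earlier fragments 
(pdp11_create, cpu_set_memory and so on are all hypothetical):

    struct cpu_instance;

    /* assumed from the earlier sketches */
    extern struct cpu_instance *pdp11_create(void);
    extern void cpu_set_memory(struct cpu_instance *cpu,
                               unsigned short *mem, unsigned long size);
    extern void queue_activate(void (*svc)(void *ctx), void *ctx, int delay);
    extern void cpu_svc(void *ctx);

    #define MEMSIZE 0x20000U        /* 128KW, shared between the two CPUs */

    static unsigned short shared_mem[MEMSIZE];

    void configure_dual_cpu(void)
    {
        struct cpu_instance *cpu0 = pdp11_create();
        struct cpu_instance *cpu1 = pdp11_create();

        cpu_set_memory(cpu0, shared_mem, MEMSIZE);  /* both see the same store */
        cpu_set_memory(cpu1, shared_mem, MEMSIZE);

        queue_activate(cpu_svc, cpu0, 0);   /* interleaved on one event queue, */
        queue_activate(cpu_svc, cpu1, 0);   /* burst by burst */
    }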


Kevin




