[Simh] The Interconnect Task Force (was: Origins of MSCP)

Bob Supnik bob at supnik.org
Tue Jun 25 19:12:34 EDT 2019


Ah, yes, the Interconnect Task Force. 1980, perhaps? All my engineering 
notebooks are entombed at the Computer History Museum now.

CI - computer interconnect - realized in the passive "star coupler" and 
the CIxxx family of interfaces.
BI - backplane interconnect - realized in the BIIC chips - used as the 
memory and IO bus in the 8200 and as the IO bus in several later 
systems. Superseded by the XMI, which was the memory and IO bus of the 
VAX 6000 family and was later used as an IO bus.
NI - network interconnect - Ethernet. Success!
DI - device interconnect - intended as a low-cost interconnect to 
devices - basically async serial. Really only used in the DECtape II 
implementation, with its own simplified form of MSCP (the Radial Serial 
Protocol).
II - interchip interconnect. This went nowhere at the time. The Semi 
Group standardized much later on the CVAX pin bus as a de facto "II" and 
created a family of pin-compatible chips. The CVAX pin bus morphed from 
a memory-and-IO bus in CVAX to an IO-only bus in later chips. Chips in 
the "CVAX pin bus family" included the memory controller (CMCTL), the 
console chip (CSSC), the second-generation Ethernet controller (SGEC), 
the DSSI controller (SHAC), and the Qbus interface (CQBIC). The last 
three were used across multiple generations; the CMCTL and CSSC were 
only used in CVAX systems. Superseded, in most senses, by PCI in the 
1990s.

About the only thing that was consistent in DEC's interconnect strategy 
was that one generation's memory and IO backplane interconnect became a 
pure IO backplane interconnect later. This was true of the Unibus, Qbus, 
BI, and XMI. Memory cards were easy to design and replace; IO 
controllers tended to live much longer.

/Bob

On 6/25/2019 1:33 PM, Paul Koning wrote:
>
>> On Jun 25, 2019, at 11:43 AM, Bob Supnik <bob at supnik.org> wrote:
>>
>> True. My first assignment at DEC was managing the "New Disk Subsystem" (NDS) advanced development project, which eventually led to both the HSC50 and the UDA50. Among the goals of the project were
>>
>> 1. To move ECC correction off the host and into the disk subsystem, so that much more powerful and complex ECC codes could be used.
>> 2. To move bad block replacement off the host and into the disk subsystem.
>> 3. To provide a uniform software interface for radically different disk drives.
>> 4. To abstract away all device geometry information.
>> 5. To implement command queuing and to perform all performance optimization, particularly seek optimization, in the disk subsystem.
> #2 was only partially true in the UDA50 -- I remember an amazingly large body of code in RSTS for bad block replacement for the RA80, about 2000 lines -- roughly the same size as the rest of the MSCP driver.
>
> I remember MSCP as part of the larger "Interconnect Architecture" effort, which produced a range of "interconnects", some of which seemed to become real and some less so.  There was the new peripheral bus (BI), the cluster interconnect (CI, computer interconnect), and one or two others.  I vaguely remember II (Interchip Interconnect) -- did that become I2C, or something else, or nothing?  And DI (device interconnect)???  Also NI, which became Ethernet.  And XI?  I think we used that term in the networking group for the next-generation high-speed network.  Gigaswitch?  FDDI?  Not sure.  Part of the impression I had was that there was some overall concept unifying all this, but whether that was actually realistic is not clear.
>
> One place it showed up was in the Jupiter mainframe (which didn't happen), supposedly built around CI and NI as its connections to the outside world.
>
> There's also XMI, but that was a generation later as I recall.
> 	paul


