[Simh] Re: PDP-10 simulation: DEUNA support help needed

Timothe Litt litt at ieee.org
Thu Apr 30 20:03:52 EDT 2015


On 30-Apr-15 19:13, Cory Smelosky wrote:
> On Thu, 30 Apr 2015, Timothe Litt wrote:
>
>> In DEC, volume shadowing was first built into the HSC50 (a CI-based
>> disk/tape controller), released in the early 80s.  Drives were the 14"
>> RA81 and follow-ons.  The early design work was in the late 70s.  The
>> TOPS20 announcement of the CI, CFS and clusters predated the VMS
>> announcement, to the great annoyance of the VMS crew, which had it
>> running internally but wasn't permitted to announce it.  TOPS20 didn't
>> support volume shadowing.  Neither did TOPS10, though it did support CI
>> disks and tapes (but not clusters) later.  The CI/MSCP protocol layers
>> used common code in both OSs, similar to what was done with DECnet.
>> Host-based volume shadowing came considerably later.
>>
>
> So...were MSCP drives connected to the KL/KS over CI to an HSC50?  I
> saw MSCP drivers but couldn't figure out how they were attached. ;)
>
KL only (KLIPA = KL -> CI adapter).  No CI adapter for the KS.

>> I implemented V1 of the VAX Striping Driver, ca. 1988, as a midnight
>> engineering project.  It was originally proposed by the LCG I/O group as
>> part of the VAX 9000 development to address the high-bandwidth I/O
>> requirements of the HPTC market.  It turned out to be extremely popular
>> for general timesharing, as it increased the sustainable request rate -
>> much of a timesharing workload consists of small I/Os to mail files,
>> editor journals, and the like.  It was sold across the product line,
>> more for request rate than for bandwidth.
>
> Interesting!
>
> Have a paper on it?  I'm interested in read versus write
> speeds/latencies.
Somewhere in my archives I have an internal document based on a week of
experiments that I did.  Not published, but it was the basis for all the
training and sales materials.

Read and write speeds don't differ because of striping; it will saturate
the hardware in either direction.  (It's more complicated for RAID due
to parity-induced asymmetry - both for small-write updates and when
parity is needed for reconstruction.)  When striping for bandwidth, the
key parameters are the request size, the stripe size (tracks are good),
the number of members, and whether the controller supports request
fragmentation.  Controllers that don't suffer, especially when striping
for bandwidth: as the number of drives increases, the probability that
some drive's request will need nearly a full revolution before it can
start approaches unity.  Hardware striping traditionally addressed this
with rotationally synchronized spindles.  With request fragmentation, I
demonstrated that this was unnecessary: each member can start
transferring at whatever sector is under its head, so the start-up
latency is 1/2 sector on average, 1 sector at most.  Striping for
request rate is rather more tolerant; generally you don't want the
stripe size to get too large.
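
To put numbers on that: a quick back-of-envelope sketch (mine, in
Python, with an arbitrary 32-sector track) comparing the expected
start-up latency with and without request fragmentation:

    import random

    REV = 1.0           # one revolution, normalized
    SECTORS = 32        # arbitrary sectors per track (my assumption)
    TRIALS = 100_000

    for members in (1, 2, 4, 8, 16):
        # No fragmentation: the stripe can't stream until the slowest
        # member has rotated to its start sector - the max of N uniform
        # latencies, which approaches a full revolution (N/(N+1)).
        no_frag = sum(max(random.uniform(0, REV) for _ in range(members))
                      for _ in range(TRIALS)) / TRIALS
        # With fragmentation: each member starts transferring at the
        # sector under its head, so the cost is at most one sector
        # (1/2 on average), independent of the member count.
        frag = 0.5 / SECTORS * REV
        print(f"{members:2d} members: no-frag ~{no_frag:.3f} rev "
              f"(analytic {members / (members + 1):.3f}); "
              f"with-frag ~{frag:.3f} rev")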

Workloads do have different read/write characteristics - but this is
independent of striping.  IIRC, VAX/VMS timesharing tended to be about
90% reads.  But what hits the heads varies based on host (and
controller) caching.  The host can cache metadata and/or user data.  And
with VMS, you get multiple levels of caching: the ACP, RMS, and
applications.  With an effective cache, the heads can see 90% writes
with the same workload.  The Unix buffer cache has different
characteristics; the VMS lock manager's effects are only part of the
reason.  These days, drives do read-ahead, write-thru (and sometimes
write-behind or write-back) caching.
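
The arithmetic behind that flip is worth making explicit; a trivial
sketch (my own illustrative numbers, assuming a write-through host
cache):

    def disk_write_fraction(read_fraction, read_hit_rate):
        # Reads are absorbed by the cache at the hit rate; with
        # write-through, every logical write still reaches the disk.
        reads_to_disk = read_fraction * (1.0 - read_hit_rate)
        writes_to_disk = 1.0 - read_fraction
        return writes_to_disk / (reads_to_disk + writes_to_disk)

    # 90% reads at the application, 99% read hits in the host cache:
    print(f"{disk_write_fraction(0.90, 0.99):.0%} writes at the heads")  # ~92%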

TOPS-10 implemented a physical disk block cache, IIRC somewhere around
7.02/7.03.  Prior to that, we cached only the SATs (the Storage
Allocation Tables, i.e. the allocation bitmaps) and, to some extent,
the RIBs (Retrieval Information Blocks).

But by now we're at least triply off-topic...
>
>>
>> However, the Striping driver was perfectly happy to stripe any
>> underlying physical devices (though one wanted the sizes to be the
>> same.)  This means it wasn't limited to RAID 0.  People striped
>> shadowsets (host and controller-based),  which some call RAID 1+0.
>> People also striped MSCP-served disks, whatever the underlying
>> technology.  And MSCP-served stripesets.  Some of these configurations
>> had, er, interesting performance characteristics.  But the Striping
>> driver was inexpensive, and the placebo effect overcame engineering
>> reality.
>>
>
> Unusual performance characteristics always make for interesting
> papers!
Perhaps.  I was in the engineering business, not the paper writing business.
By the time people got around to papers, I was almost always on the bleeding
edge of something else.

In this case, it's pretty easy to predict what happens when you add
layers of software and interconnect to something tuned for performance.
