[Simh] EXT :Re: DEC Alpha Emulation

Clem Cole clemc at ccc.com
Mon Feb 5 15:53:44 EST 2018


I hear you.  My point was that it only failed in the corner cases of
failover, so I think it would have made it to the logs.  And that should
have been good enough.   Not perfect, but in practice it would have worked,
and it would have simplified things immensely.

And not having the Adaptec support was in fact a real problem -- it added
cost and really did not add value.   My last act, I was trying to build the
$1K Alpha at the time (which I did prototype until Jessie killed it, but
that's another story).  Folks said the cheapest Alpha was $5K -- well,
there was a reason.  When we took a $799 [end-user Radio Shack priced]
Compaq K7 based system and spliced a $200 EV6 into it and got Tru64 working
(with an Adaptec chipset on the motherboard BTW), it worked and people
wanted it!!!  [I still have the motherboard at home, and the EV6 from it is
on my desk at Intel].

Look, I tend to be a practical engineer.   I always felt that DEC's
insistence on building things 100% foolproof -- everything had to be
perfect -- was really what killed the golden goose, not Palmer et al.  Not
being able to understand what needed to be "you bet the
company/farm/your life" and what could be "good enough for now, move on to
the next problem".  I always thought that was one of the things Roger Gourd
taught me -- how to differentiate between the two.   I think DEC had that
when the PDP-8 and PDP-11 were done, and in the early development of the
Vax (hey, I programmed Vax serial #1 -- VMS V1.0 was as buggy as could
be).   But with the Vax becoming supreme, DEC lost its way/believed its own
hype.

Alpha and Tru64 are great examples of the problem.  BTW: I loved Alpha,
bled for it, etc.  Tru64 was the best UNIX implementation I have ever
used, and I am proud to have been a developer of TruClusters.  But it took
3 extra years to get Tru64 out the door because it had to be perfect (and
nobody got fired for it either).

I never understood that.   Every subsystem that needed to be rewritten (TTY
handler, memory, bulk I/O) did need to be redone from the original OSF code
for the 386 and PMAX.  But I always felt DEC could have shipped OSF/1 on
the Alpha pretty much as-is, started to get revenue, and moved the
installed base.  Then, subsystem by subsystem, replaced them with something
better.    I also argued with Supnik BTW (whom I adore and think the world
of) that the lack of 32-bit support certainly made the engineering easier,
but again it cost us.   We also basically paid the ISVs to fix their code
so it would run on 64-bit Alpha (Judy Ward's compiler errors are still the
best I know for cleaning up 32-bit-isms.  If I have old code I'm about to
port, I often run it through my Alpha with Judy's compiler to tell me what
is going to be troublesome).

Yup, 32-bit support would have been messy, and we would have had to have 4
versions of the libraries just like MIPS, SPARC, et al.  It would have been
a little ugly and not 'perfect' ...  but it would have worked and been
faster to market.  And ISVs would have started to get some revenue.   By
the time we were 'done' - it was too little, too late, and folks had
already started to look for an alternative - and guess what, Winders on a
386 was 'good enough.'

As I said to Jessie et al, SW is not written on $1M computers - it's
written on the cheapest thing that gets the job done.  Then moved upstream
if it is valuable.

By the time the Sr Managers took the 'cut a deal with Microsoft and get
their SW' strategy, the death spiral was well underway.  And they
misunderstood: they were never going to get 'big bucks' for 'commodity
SW.'  [Intel suffered that also with Itanium].

BTW: I'm watching Intel make more of the same mistakes....   sigh.  As I
say to folks here, I have those tee shirts; I know how this movie ends.


Clem

BTW: the best people in Intel IT are the Macs at Intel folks.    They
listen and understand I can help them.  So they try to help me.   It's a
good arrangement.  When they send me a note, I do try to help.  I'm one of
the few folks lingering that have 'update access' to their tools library
(biggest issue is it's all written in VB -- seriously, for Macs -- don't
ask).  But when I find something, I do try to help.   In return, I ran
into an issue last Tues with my keyboard and called them that PM.  I
missed the FedEx time for Folsom, but on Thursday a new Mac was on my
doorstep.  Only reason I have not switched is that I had to travel
Thursday for work - so I took them both.   Got my job done and will switch
systems completely tonight (I hope).

Anyway -- back to work....

On Mon, Feb 5, 2018 at 3:10 PM, Timothe Litt <litt at ieee.org> wrote:

> On 05-Feb-18 13:36, Clem Cole wrote:
>
>
>
> Point taken, but DEC used the SPD as its primary defense for exactly this
> type of problem.  It was the 'legal' definition of what was and was not
> allowed.   But as you point out, that behavior does not always make for
> happy customers or sr managers.
>
> I started in the field, and consulted with the corporate flying squads.
> The SPDs' value as a legal definition was of more interest to lawyers &
> junior product managers than to those at the sharp end of the spear.
> Happiness, even at expense above and beyond legal technicalities, brought
> more business than sticking to the letter of the law.  Unhappiness was
> very, very expensive.  I have stories that run both ways...
>
>
>
>>  The truth is that at least Tru64 (I think it was Fred Knight - Mr.
>> SCSI) had code that detected when your SCSI bus was being shared.   It
>> would have been easy to add a side lookup to check the controller being
>> used and, if it was not in the official table, produce a boot message
>> saying -- "*shared bus with unsupported SCSI controller, please remove
>> sharing or replace controller and reboot.*"
>>
>
> But I could never get marketing to accept that.
>
> I wish it were that simple.  In this case, Marketing's intuition covered
> some technical challenges.  I had many a talk with Fred when I was in the
> Tru64 group.  That 'table' would have to deal not only with controller
> types, but with compatibility of firmware versions for every device on the
> bus.  And the permutations of what worked (and didn't) weren't static.  The
> sys_check maintainer made some efforts, as did the SPEAR folks in CSSE.
> But everything was a moving target.
>
> The trivial case of "don't ever use this controller in a cluster" isn't
> all that hard to blacklist.  Of course, when the foobar-plus comes out with
> a different device ID, but the same bug, you have to blacklist it too.
> Before any customer finds one at "American Used Computers" (Kenmore Square,
> before eBay :-)  And don't forget that to find another controller on the
> bus, you have to enumerate the bus.  This can have side effects with "bad"
> controllers.  The bugs weren't all limited to fail-over.  IIRC tagging and
> command queuing had issues; at least one controller created parity errors
> (and some undetected ones).
>
> But maintaining a useful whitelist - with all the churn in the SCSI space
> - would be a nightmare.  Disks have firmware & HW revs.  Controllers too.
> Blocking all 3rd-party disks (despite the frequent firmware issues) isn't
> viable.  Don't forget CD/DVD, tape, and even ethernet.  Even getting
> customers to install patches was hard (patch quality and interactions were
> among my issues); patching to keep up with hardware/firmware revs wasn't
> going to fly.  And you need this information before you have a file system;
> preferably in the boot driver.  So no, not a config file.  Maybe SRM
> console environment variables...  Even in the relatively controlled
> environment that DEC was able to impose, SCSI should have been called
> CHAOSnet - except that name was taken.
>
> Worse, once you produce one error message in a problem space (e.g. an
> invalid HW config), suddenly NOT producing errors for all the other cases
> that don't work becomes a bug.
>
> My point was that if we detected it (which was not that hard), then we
> could have at least said something.   And in practice, if you still
> ignored it and it was in all those system logs, it would have been pretty
> easy to say to the end customer, *we told you not to do that*.
>
> By the time it's in a system log, it's too late.  The logging disk is
> probably on the SCSI bus.
>
> "I told you so" - not a happy strategy.
>
> For the simple case of only two machines sharing a bus: what do you mean
> by "at boot time"?  The first machine powers up, and is "alone" with a
> "good" controller.  Two weeks later, the owner of the second machine (with
> a "bad" one) returns from vacation and turns his on.  His dog brought him a
> magazine article on clusters, so why not jump in?  It might, maybe, manage
> to boot to the point of noticing the first one without polluting its
> transfers.  Note that at this point, the first machine is undoubtedly doing
> disk writes; packet corruption is not as "harmless" as when you have a
> ROFS.  And the second machine has to touch the first's controller to query
> its versions.  And to find it, it enumerates the entire bus.  Meantime,
> does the first machine repeat the boot-time check? How does it notice?
>
> As I said, when something's wrong, logging to disk with an invalid
> hardware configuration isn't going to fly.  Above the hardware level, you're
> not in the cluster (yet), so how are you going to get the disk bitmaps (and
> locks)?  And write to a ROFS?  Normally, these are queued in memory (and
> retrieved for syslog by dmesg).  But with this misconfiguration, the last
> thing you want to do is join the cluster & remount the logging disk R/W.
> So you can't log to disk.  You might want to try to send to a network
> syslog - but that means you've gotten a LOT further into kernel
> initialization; you have a file system, network configuration, know where
> to send it, etc.  Besides the fact that your network chip may be on the
> same SCSI bus, you've done a whole lot more I/O to get this far.  With this
> kind of error, you want to make the test and panic very, very early in
> initialization to minimize collateral damage.
>
> There are many more cases to cover.  This is one of the simpler ones.
>
> It's really not that simple to verify hardware configurations, once you
> dig in to the problem space.  Fred's test was undoubtedly useful for
> logging & cluster initialization - with supported controllers.  It might
> have been a good reminder for engineering experiments.  I'd need to be
> convinced that it could solve the issue that you wanted to address.  "For
> every problem, there is a solution that is simple, obvious, and ... wrong".
>
> You're correct that some simple check at driver initialization that stuck
> with console logging could probably be 80-90% effective.  But getting the
> rest right, while an interesting engineering project, would be a capital-P
> Project.  Sunshine with a slight chance of data corruption just wasn't the
> DEC way :-)
>
> As I said, a lot of fun for the engineers, but hard to justify in order to
> save a few customers $100.
>
>