[Simh] Looking for a milestone

Paul Koning paulkoning at comcast.net
Mon Oct 17 14:00:17 EDT 2016


I think that the same answer applies to your narrower question, though I didn't see it mentioned specifically in the documents I've read.  For example, the treatment of underflow and very small numbers in Electrologica was novel at the time; Knuth specifically refers to it in a footnote of Volume 2.  The EL-X8 would never turn a non-zero result into zero, for example.  I think IEEE ended up doing the same thing, but that was almost 20 years later.
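As an illustration of that guarantee in modern terms: IEEE 754's gradual underflow (subnormal numbers) likewise ensures a tiny non-zero result is never flushed to zero. A minimal Python sketch, using today's binary64 doubles rather than the EL-X8's own format:

```python
import sys

tiny = sys.float_info.min   # smallest positive *normal* double, ~2.2e-308
sub = tiny / 4              # below the normal range: stored as a subnormal

# Gradual underflow: the result is a non-zero subnormal, not zero.
assert sub > 0.0
# In this case the subnormal value is still exact, so scaling back recovers tiny.
assert sub * 4 == tiny
```

On a flush-to-zero system, `sub` would silently become 0.0 and the second assertion would fail.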

	paul

> On Oct 17, 2016, at 1:41 PM, Leo Broukhis <leob at mailcom.com> wrote:
> 
> Paul,
> 
> My question is narrower. It focuses specifically on the binary<->decimal conversion. It appears that while the f.p. instructions and the elementary functions were proved correct to the appropriate precision, not much care was taken to ensure, for example, that "FLOAT_MAX", "FLOAT_MIN" and "FLOAT_EPS" can be converted in both directions without exceptions or loss of precision, etc. It appears to me that people started caring about these things in the late 80s at the earliest. I'd like to be wrong.
> 
> Thanks,
> Leo
> 
> On Mon, Oct 17, 2016 at 8:55 AM, Paul Koning <paulkoning at comcast.net <mailto:paulkoning at comcast.net>> wrote:
> 
>> On Oct 14, 2016, at 7:22 PM, Leo Broukhis <leob at mailcom.com <mailto:leob at mailcom.com>> wrote:
>> 
>> I wonder: what is the historically first programming environment with native binary floating point that was proved/demonstrated to handle f.p. binary<->decimal I/O conversions 100% correctly?
>> By 100% correctly I mean that all valid binary representations of floating point numbers could be, given a format with enough significant digits, converted to unique text strings, and these strings could be converted back to the corresponding unique binary representations.
>> 
>> Of course, there is enquire.c which facilitated finding bugs in the Unix/Posix environments, but has anyone cared about this during the mainframe era?
> 
> I believe so, yes.  For the design of the floating point feature of the Electrologica X8 (early 1960s) the design documents discuss correctness, including what the definition of "correct" should be.
> 
> There is a very nice and very detailed correctness proof of that floating point design, documented in http://repository.tue.nl/674735 <http://repository.tue.nl/674735> .  That paper was written long after the fact, but by one of the people originally involved in that machine.
> 
> Apart from proofs of the correctness of each of the floating point instructions, that paper also describes the sqrt library function.   Interestingly enough, the implementation of that function does not use floating point operations.  But the analysis, in appendix B of the paper, clearly shows the error terms of the approximation used and why the number of steps used is sufficient for correctness of the sqrt implementation.
> 
> For a different machine, the CDC 6000 series, I remember reading complaints about its bizarre rounding behavior (rounding at 1/3?).  I forget where that appeared; possibly a paper by prof. Niklaus Wirth of ETH Zürich.
> 
> 	paul
> 
> 
> 
