
Re: Two New 128-bit Floating-Point Formats

https://news.novabbs.org/computers/article-flat.php?id=33570&group=comp.arch#33570

Newsgroups: comp.arch
Date: Tue, 8 Aug 2023 17:47:03 -0700 (PDT)
In-Reply-To: <f0673c9a-a6a4-46b7-8ec8-ec37d8bc769dn@googlegroups.com>
References: <439bb4ce-e70d-4e81-a6f3-2bb9e6e654b3n@googlegroups.com>
<af45f172-2766-4711-a1de-d0650a0f011dn@googlegroups.com> <f0673c9a-a6a4-46b7-8ec8-ec37d8bc769dn@googlegroups.com>
Message-ID: <ce3a0704-5a59-45f6-b8ea-31c640269a13n@googlegroups.com>
Subject: Re: Two New 128-bit Floating-Point Formats
From: jim.brakefield@ieee.org (JimBrakefield)
 by: JimBrakefield - Wed, 9 Aug 2023 00:47 UTC

On Tuesday, August 8, 2023 at 6:46:58 PM UTC-5, MitchAlsup wrote:
> On Tuesday, August 8, 2023 at 5:41:47 PM UTC-5, JimBrakefield wrote:
> > On Tuesday, August 8, 2023 at 3:28:23 PM UTC-5, Quadibloc wrote:
> > > Only the _first_ of which is _intentionally_ silly.
> > >
> > > I have a section on my web site which discusses the history of the computer,
> > > at
> > > http://www.quadibloc.com/comp/histint.htm
> > >
> > > On that page, one of the many computer systems I discuss is the
> > > HP 9845, from 1978. This computer had amazing capabilities
> > > for its day; some have termed it the "first workstation".
> > > Unlike anything by Sun or Apollo, though, the processor for this
> > > computer, designed by HP, had an architecture based on the
> > > HP 211x microcomputer, but did calculations in decimal floating
> > > point.
> > > Hey, wait a moment. Isn't that a description of the processor chip
> > > used in HP pocket calculators, and the earlier HP 9830? How on
> > > Earth can something that does floating-point calculations at the
> > > speed of a pocket calculator, even a good one, be called a
> > > "workstation"?
> > > Well, further study allowed me to resolve this doubt. The CPU
> > > module of the 9845 included a chip called EMC, which did its
> > > floating-point arithmetic. It did it within a 16-bit ALU, and the
> > > floating-point format had a *binary* exponent with a range from
> > > -511 to +511. It *did* do its arithmetic at speeds considerably
> > > greater than those of pocket calculators.
> > >
> > > Well, the HP 85 may have been the world's cutest computer, but
> > > the HP 9845C seemed to me to have taken the crown for the
> > > most quintessentially geeky computer ever to warm the heart of
> > > a retrocomputing enthusiast.
> > >
> > > Inspired by this computer, and by another favorite of mine, the
> > > famous RECOMP II computer, the one that's capable of handling
> > > numbers that can go 2 1/2 times around the world, I came up with
> > > this floating-point format...
> > >
> > > the intended goal of which is to be included, along with more
> > > conventional floating-point formats, in the processor for a
> > > computer that boots up as a calculator, but can then be
> > > switched over to full computer operation when desired.
> > >
> > > Here it is:
> > >
> > > (76 bits) Mantissa: 19 BCD digits
> > > (1 bit) Sign
> > > (51 bits) Excess-1,125,899,906,842,624 decimal exponent
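As a sketch of that 76 + 1 + 51 = 128-bit layout (my own packing, not from the original post; plain BCD, one digit per nibble, fields in the order listed above):

```python
def pack_decimal128(digits, sign, exponent):
    """Pack the 19-digit decimal format into a 128-bit integer.

    Layout, most significant first: 76-bit BCD mantissa (19 nibbles),
    1 sign bit, 51-bit exponent stored excess-2**50.
    """
    assert len(digits) == 19 and all(0 <= d <= 9 for d in digits)
    BIAS = 2**50  # excess-1,125,899,906,842,624
    assert -BIAS <= exponent < BIAS
    mantissa = 0
    for d in digits:          # one BCD digit per 4-bit nibble
        mantissa = (mantissa << 4) | d
    return (mantissa << 52) | (sign << 51) | (exponent + BIAS)

word = pack_decimal128([1] + [0] * 18, 0, 0)
assert word >> 124 == 1       # leading digit sits in the top nibble
assert word.bit_length() <= 128
```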
> > >
> > > Initially, I had conceptualized the format as being closer to that
> > > of the RECOMP II, with one word of mantissa, and the sign and
> > > exponent in the second word.
> > > But then I thought of making the first 64-bit word into one BCD
> > > digit, and six groups of three digits encoded by Chen-Ho encoding.
> > > That would allow nineteen-digit precision.
> > > Then I decided that a 63-bit exponent was so large that it would
> > > be preferable to sacrifice some exponent bits, and have the same
> > > increase of precision without going to the extra gate delays
> > > required for Chen-Ho encoding.
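The arithmetic behind that trade-off: three decimal digits take 12 bits in plain BCD but only 10 bits under Chen-Ho encoding, because 10^3 = 1000 <= 2^10 = 1024. The sketch below shows the same 10-bit bound by straight binary coding of each digit group; real Chen-Ho encoding reaches it with a few gate delays of boolean logic on the BCD bits rather than a binary conversion, which is exactly the delay cost weighed above.

```python
def pack_group(d2, d1, d0):
    """Pack three decimal digits into 10 bits.

    Binary-coded stand-in for Chen-Ho encoding: all 1000 digit
    combinations fit in 2**10 = 1024 codes.
    """
    n = 100 * d2 + 10 * d1 + d0
    assert 0 <= n < 1000
    return n                   # always fits in 10 bits

def unpack_group(bits):
    return bits // 100, (bits // 10) % 10, bits % 10

# 1 BCD digit + 6 ten-bit groups = 4 + 60 = 64 bits for 19 digits,
# versus 19 * 4 = 76 bits in plain BCD.
assert all(pack_group(*unpack_group(n)) == n for n in range(1000))
```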
> > >
> > > The ideas I played with in that chain of thought then turned my
> > > attention to how they might be used for a more serious
> > > purpose.
> > > Remember John Gustafson, and his quest, first with Unums, and
> > > then with Posits, to devise a better floating-point format that
> > > would help combat the dangerous numerical errors that abound
> > > in conventional floating-point arithmetic?
> > > Perhaps I could come up with something more conventional
> > > that would go partway, at least, towards providing the facilities
> > > that his inventions provide.
> > >
> > > And here is where that chain of thought went:
> > >
> > > (1 bit) Sign
> > > (31 bits) Excess-1,073,741,824 binary exponent
> > > (96 bits) Significand
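A sketch of that 1 + 31 + 96 = 128-bit layout as bit fields (field order taken from the list above; the helper names are mine):

```python
EXP_BITS, SIG_BITS = 31, 96
BIAS = 2**30                  # excess-1,073,741,824

def pack_binary128(sign, exponent, significand):
    """Pack sign, 31-bit biased exponent, and 96-bit significand."""
    assert sign in (0, 1)
    assert -BIAS <= exponent <= BIAS - 1     # -2**30 .. 2**30 - 1
    assert 0 <= significand < 2**SIG_BITS
    return (sign << 127) | ((exponent + BIAS) << SIG_BITS) | significand

def unpack_binary128(word):
    sign = word >> 127
    exponent = ((word >> SIG_BITS) & (2**EXP_BITS - 1)) - BIAS
    return sign, exponent, word & (2**SIG_BITS - 1)

assert unpack_binary128(pack_binary128(1, -65536, 12345)) == (1, -65536, 12345)
```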
> > >
> > > Providing a wide exponent range (like Posits and Unums) and a high
> > > precision (like Unums) but both within the bounds of reason, and
> > > without any unconventional steps, like decreasing precision for
> > > large exponents, or having the length of the number variable.
> > > But there's something *else* that I also came up with to do when
> > > implementing this floating-point format in order to help it achieve
> > > its ends.
> > >
> > > Seymour Cray was the designer of the Control Data 6600 computer.
> > > It had a 60-bit word. When he designed the Cray I computer, although
> > > he surrendered to the 8-bit byte, and gave it a 64-bit word, apparently
> > > he still felt that the 60-bit floats of the 6600 provided all the precision
> > > that anyone needed.
> > > So the floating-point format of the Cray I had an exponent field that
> > > was 15 bits long. But the defined range of possible exponents in that
> > > format would fit in a *14-bit* exponent field.
> > > I guess this would make it easier to detect and, even more importantly,
> > > to recover from floating-point overflows and underflows.
> > >
> > > At first, I thought that simply copying this idea would be useful.
> > > Then, inspired by the inexact bit of the IEEE-754 standard, I
> <
> > I would think that in most calculations, most computed values are inexact.
> > I previously considered taking one mantissa bit to indicate inexactness,
> > which is a painful loss of accuracy.
> <
> The most precise thing we can routinely measure is 22-bits.
> The most precise thing we can measure in 1ns is 8-bits.
> The most precise thing we have ever measured is 44-bits.
> {and this took 25+ years to decrease the noise to this}
> <
> However: there are lots of calculations that expand (ln, sqrt) and
> compress (^2, exp, erf) the number of bits needed to retain precision
> "down the line".

Then there is the choice of the wrong approach:
the area of a triangle when the sum of two side lengths is only slightly greater than the third.
No surveyor would establish a triangle from side lengths alone;
instead, the coordinates of the corners serve for all practical purposes.
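A concrete illustration of that point (the numbers are mine, chosen for the demo): for a needle triangle, the side lengths rounded to double precision can lose the area entirely, while the shoelace formula on the coordinates keeps it exactly.

```python
import math

# Needle triangle: apex only 1e-9 above the long side.
p1, p2, p3 = (0.0, 0.0), (1.0, 0.0), (0.5, 1e-9)

def shoelace(a, b, c):
    """Area from coordinates (cross product); exact here: 5e-10."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (c[0] - a[0]) * (b[1] - a[1]))

def heron(a, b, c):
    """Area from side lengths via Heron's formula."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Rounding the squared lengths to double already discards the 1e-18
# term, so two sides come out as exactly 0.5 and the area collapses.
sides = (math.sqrt((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2),
         math.sqrt((p3[0] - p2[0])**2 + (p3[1] - p2[1])**2),
         math.sqrt((p1[0] - p3[0])**2 + (p1[1] - p3[1])**2))

assert shoelace(p1, p2, p3) == 5e-10   # correct area
assert heron(*sides) == 0.0            # all information lost
```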

> Just computing ln2() to IEEE accuracy requires
> something like 72 bits of fraction in the intermediate x*ln2(y) part
> to achieve IEEE 754 accuracy in the final result of ln2() over the
> entire range of x and y.
> <
> This is the problem, not the number of bits in the fraction at one
> instant. It is a problem well understood by numerical analysts
> ..........and something casual programmers remain unaware of
> even after decades of experience.........
> >
> > So how many values are exact? Can those values be encoded into
> > the NAN bits? If so, why not let inexact be the default, allowing
> > one to use round-to-odd and thereby eliminating double-rounding issues?
> > (one would still follow the rule of rounding the nearest representable value)
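The double-rounding repair that round-to-odd provides can be shown with small integers (a sketch; the bit widths are mine, chosen for illustration). Rounding to a wider intermediate with round-to-odd and then to the final width with round-to-nearest-even matches a single direct round-to-nearest-even, provided the intermediate keeps at least two extra bits; using round-to-nearest-even for both steps can be off by one ulp.

```python
def rne(n, s):
    """Drop the low s bits of integer n, round-to-nearest-even."""
    q, r = n >> s, n & ((1 << s) - 1)
    half = 1 << (s - 1)
    if r > half or (r == half and q & 1):
        q += 1
    return q

def rto(n, s):
    """Drop the low s bits of integer n, round-to-odd (sticky LSB)."""
    q = n >> s
    if n & ((1 << s) - 1):
        q |= 1
    return q

for n in range(1 << 12):
    direct = rne(n, 8)           # one rounding: drop 8 bits at once
    via_odd = rne(rto(n, 4), 4)  # round-to-odd to a 4-bit-wider
    assert via_odd == direct     # intermediate, then RNE: identical

# Double RNE is NOT safe: n = 0b1_0111_1001 rounds 1 -> 2 in two steps.
assert rne(0b1_0111_1001, 8) == 1
assert rne(rne(0b1_0111_1001, 4), 4) == 2
```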
> <
> Nobody doing real FP math gives a crap about exactness. The 99%
> Only people testing FP arithmetic units do. The way-less-than 0.1%
> <
> Consider: COS( 6381956970095103×2^797) = -4.68716592425462761112E-19
> <
> Conceptually, this requires calculating over 800 bits of the intermediate
> INT(2/pi×x) !!! to get the proper reduced argument, which will result in
> the above correctly rounded result.
> <
> To get that 800-bits one uses Payne-Hanek argument reduction which
> takes somewhat longer than 100 cycles--compared to computing the
> COS(reduced) polynomial taking slightly less than 100 cycles.
> <
> I have a patented method that can perform reduction in 5 cycles: and a
> designed function unit that can perform the above COS(actual)
> in 19 cycles.
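That headline value can be checked by brute force (a sketch, not Payne-Hanek reduction and not the patented method: it simply carries ~340 decimal digits through the whole reduction using Python's decimal module, with pi from Machin's formula):

```python
from decimal import Decimal, getcontext

getcontext().prec = 340          # enough digits to absorb the 2**797 scale

def atan_inv(n, eps=Decimal("1e-335")):
    """arctan(1/n) for integer n, by its Taylor series."""
    total, power, k, sign = Decimal(0), Decimal(1) / n, 0, 1
    while power > eps:
        total += sign * power / (2 * k + 1)
        power /= n * n
        k += 1
        sign = -sign
    return total

# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
pi = 16 * atan_inv(5) - 4 * atan_inv(239)

def cos_taylor(r, eps=Decimal("1e-335")):
    """cos(r) by Taylor series; fine for 0 <= r < 2*pi."""
    total, term, k = Decimal(0), Decimal(1), 0
    while abs(term) > eps:
        total += term
        k += 2
        term = -term * r * r / (k * (k - 1))
    return total

x = Decimal(6381956970095103 * 2**797)   # exact ~256-digit integer
two_pi = 2 * pi
r = x - int(x / two_pi) * two_pi         # reduced argument in [0, 2*pi)
c = cos_taylor(r)
# x lands within ~5e-19 of an odd multiple of pi/2, so c ~= -4.687e-19
```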
> <
> > > decided on an even better way to softly warn the user, while allowing
> > > the computation to proceed to completion without being halted by
> > > an error, that it had used more of the available exponent range than
> > > would be reasonable for a program which was correctly written
> > > with consciousness of the requirements of sound numerical
> > > analysis.
> > >
> > > Even though the exponent, being an excess-1,073,741,824 binary
> > > exponent, has a range from -1,073,741,824 to +1,073,741,823,
> > > just like a two's complement number of the same length, there
> > > would also be a latching Range status bit associated with the
> > > use of this floating-point format that would be set if the exponent
> > > during a computation ever strays out of the range -65,536 to
> > > +65,535, which ought to be enough for anyone!
> > > So a calculation that is blowing up somewhere into excessively
> > > high exponents can be detected without the overhead of adding
> > > a lot of debugging code testing for out-of-range values.
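A sketch of that latching Range bit (the names are mine): the flag is sticky, set whenever any intermediate exponent leaves the soft window, and never cleared by later in-range results, so one check at the end of a computation suffices.

```python
SOFT_MIN, SOFT_MAX = -65536, 65535   # the "reasonable" exponent window

class RangeStatus:
    """Latching (sticky) Range bit for the wide-exponent format."""
    def __init__(self):
        self.range_flag = False

    def note_exponent(self, e):
        # Latch on any excursion outside the soft window.
        if not (SOFT_MIN <= e <= SOFT_MAX):
            self.range_flag = True
        return e

status = RangeStatus()
for e in (10, 70000, 3):     # one excursion among normal values
    status.note_exponent(e)
assert status.range_flag     # still set: later in-range values don't clear it
```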
> > >
> > > John Savard

Thread: Two New 128-bit Floating-Point Formats, started by Quadibloc on Tue, 8 Aug 2023 (25 replies)