devel / comp.arch / Re: Solving the Floating-Point Conundrum

Subject (Author)
* Solving the Floating-Point Conundrum  (Quadibloc)
+* Re: Solving the Floating-Point Conundrum  (Stephen Fuld)
|+* Re: Solving the Floating-Point Conundrum  (Quadibloc)
||+- Re: Solving the Floating-Point Conundrum  (John Levine)
||`- Re: Solving the Floating-Point Conundrum  (Stephen Fuld)
|`* Re: Solving the Floating-Point Conundrum  (mac)
| `- Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
+* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|+* Re: Solving the Floating-Point Conundrum  (Quadibloc)
||+* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|||`* Re: Solving the Floating-Point Conundrum  (Quadibloc)
||| `* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|||  `- Re: Solving the Floating-Point Conundrum  (Quadibloc)
||`- Re: Solving the Floating-Point Conundrum  (John Dallman)
|+- Re: Solving the Floating-Point Conundrum  (Scott Lurndal)
|`* Re: Solving the Floating-Point Conundrum  (Quadibloc)
| +* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
| |`* Re: Solving the Floating-Point Conundrum  (BGB)
| | +* Re: Solving the Floating-Point Conundrum  (Scott Lurndal)
| | |+* Re: Solving the Floating-Point Conundrum  (Quadibloc)
| | ||+* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
| | |||`- Re: Solving the Floating-Point Conundrum  (Terje Mathisen)
| | ||`* Re: Solving the Floating-Point Conundrum  (BGB)
| | || `* Re: Solving the Floating-Point Conundrum  (Stephen Fuld)
| | ||  `* Re: Solving the Floating-Point Conundrum  (Scott Lurndal)
| | ||   `- Re: Solving the Floating-Point Conundrum  (MitchAlsup)
| | |`* Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
| | | `* Re: memory speeds, Solving the Floating-Point Conundrum  (John Levine)
| | |  +- Re: memory speeds, Solving the Floating-Point Conundrum  (Quadibloc)
| | |  +* Re: memory speeds, Solving the Floating-Point Conundrum  (Scott Lurndal)
| | |  |+* Re: memory speeds, Solving the Floating-Point Conundrum  (MitchAlsup)
| | |  ||+* Re: memory speeds, Solving the Floating-Point Conundrum  (EricP)
| | |  |||+* Re: memory speeds, Solving the Floating-Point Conundrum  (Scott Lurndal)
| | |  ||||`* Re: memory speeds, Solving the Floating-Point Conundrum  (EricP)
| | |  |||| `- Re: memory speeds, Solving the Floating-Point Conundrum  (Scott Lurndal)
| | |  |||+- Re: memory speeds, Solving the Floating-Point Conundrum  (Quadibloc)
| | |  |||+* Re: memory speeds, Solving the Floating-Point Conundrum  (John Levine)
| | |  ||||`* Re: memory speeds, Solving the Floating-Point Conundrum  (EricP)
| | |  |||| `- Re: memory speeds, Solving the Floating-Point Conundrum  (MitchAlsup)
| | |  |||+- Re: memory speeds, Solving the Floating-Point Conundrum  (MitchAlsup)
| | |  |||`- Re: memory speeds, Solving the Floating-Point Conundrum  (MitchAlsup)
| | |  ||`* Re: memory speeds, Solving the Floating-Point Conundrum  (Timothy McCaffrey)
| | |  || `- Re: memory speeds, Solving the Floating-Point Conundrum  (MitchAlsup)
| | |  |`* Re: memory speeds, Solving the Floating-Point Conundrum  (Quadibloc)
| | |  | +- Re: memory speeds, Solving the Floating-Point Conundrum  (MitchAlsup)
| | |  | `- Re: memory speeds, Solving the Floating-Point Conundrum  (moi)
| | |  `* Re: memory speeds, Solving the Floating-Point Conundrum  (Anton Ertl)
| | |   +* Re: memory speeds, Solving the Floating-Point Conundrum  (Michael S)
| | |   |+* Re: memory speeds, Solving the Floating-Point Conundrum  (John Levine)
| | |   ||+- Re: memory speeds, Solving the Floating-Point Conundrum  (Lynn Wheeler)
| | |   ||`* Re: memory speeds, Solving the Floating-Point Conundrum  (Anton Ertl)
| | |   || +- Re: memory speeds, Solving the Floating-Point Conundrum  (EricP)
| | |   || `- Re: memory speeds, Solving the Floating-Point Conundrum  (John Levine)
| | |   |`* Re: memory speeds, Solving the Floating-Point Conundrum  (Anton Ertl)
| | |   | `- Re: memory speeds, Solving the Floating-Point Conundrum  (Stephen Fuld)
| | |   `* Re: memory speeds, Solving the Floating-Point Conundrum  (Thomas Koenig)
| | |    `- Re: memory speeds, Solving the Floating-Point Conundrum  (Anton Ertl)
| | +* Re: Solving the Floating-Point Conundrum  (Quadibloc)
| | |`* Re: Solving the Floating-Point Conundrum  (BGB)
| | | `- Re: Solving the Floating-Point Conundrum  (Stephen Fuld)
| | +- Re: Solving the Floating-Point Conundrum  (MitchAlsup)
| | `- Re: Solving the Floating-Point Conundrum  (MitchAlsup)
| +* Re: Solving the Floating-Point Conundrum  (Quadibloc)
| |`* Re: Solving the Floating-Point Conundrum  (Quadibloc)
| | `* Re: Solving the Floating-Point Conundrum  (BGB)
| |  `- Re: Solving the Floating-Point Conundrum  (Scott Lurndal)
| `* Re: Solving the Floating-Point Conundrum  (Timothy McCaffrey)
|  +- Re: Solving the Floating-Point Conundrum  (Scott Lurndal)
|  +- Re: Solving the Floating-Point Conundrum  (Stephen Fuld)
|  +* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  |`* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | +* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | |`* Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
|  | | `* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | |  `* Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
|  | |   `* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | |    `- Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
|  | +* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|  | |+- Re: Solving the Floating-Point Conundrum  (Terje Mathisen)
|  | |`* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | | +* Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
|  | | |+* Re: Solving the Floating-Point Conundrum  (John Dallman)
|  | | ||+- Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | | ||+* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | | |||+* Re: Solving the Floating-Point Conundrum  (Michael S)
|  | | ||||+* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|  | | |||||`- Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | | ||||`- Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | | |||+* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|  | | ||||`- Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | | |||`* Re: Solving the Floating-Point Conundrum  (Terje Mathisen)
|  | | ||| `* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|  | | |||  +* Re: Solving the Floating-Point Conundrum  (robf...@gmail.com)
|  | | |||  |+- Re: Solving the Floating-Point Conundrum  (Scott Lurndal)
|  | | |||  |+* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|  | | |||  ||`- Re: Solving the Floating-Point Conundrum  (George Neuner)
|  | | |||  |+- Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
|  | | |||  |`* Re: Solving the Floating-Point Conundrum  (Terje Mathisen)
|  | | |||  | `- Re: Solving the Floating-Point Conundrum  (BGB)
|  | | |||  `* Re: Solving the Floating-Point Conundrum  (Terje Mathisen)
|  | | |||   +* Re: Solving the Floating-Point Conundrum  (comp.arch)
|  | | |||   `* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|  | | ||`* Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | | |`* Re: Solving the Floating-Point Conundrum  (John Levine)
|  | | `- Re: Solving the Floating-Point Conundrum  (MitchAlsup)
|  | +- Re: Solving the Floating-Point Conundrum  (Quadibloc)
|  | `* Re: Solving the Floating-Point Conundrum  (Stefan Monnier)
|  +* Re: Solving the Floating-Point Conundrum  (BGB)
|  `- Re: Solving the Floating-Point Conundrum  (Thomas Koenig)
+* Re: Solving the Floating-Point Conundrum  (MitchAlsup)
`- Re: Solving the Floating-Point Conundrum  (Quadibloc)

Re: Solving the Floating-Point Conundrum

<udu7us$3v2e2$1@newsreader4.netcologne.de>


https://news.novabbs.org/devel/article-flat.php?id=34024&group=comp.arch#34024

From: tkoenig@netcologne.de (Thomas Koenig)
Newsgroups: comp.arch
Subject: Re: Solving the Floating-Point Conundrum
Date: Thu, 14 Sep 2023 06:07:24 -0000 (UTC)
Organization: news.netcologne.de
Distribution: world
Message-ID: <udu7us$3v2e2$1@newsreader4.netcologne.de>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com>
<5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<093b4223-81e2-4a15-bd70-b5ecb3264e30n@googlegroups.com>
<udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 14 Sep 2023 06:07:24 -0000 (UTC)
User-Agent: slrn/1.0.3 (Linux)
 by: Thomas Koenig - Thu, 14 Sep 2023 06:07 UTC

Scott Lurndal <scott@slp53.sl.home> schrieb:
> BGB <cr88192@gmail.com> writes:
>>On 9/13/2023 10:43 AM, MitchAlsup wrote:
>>> On Wednesday, September 13, 2023 at 9:02:45 AM UTC-5, Quadibloc wrote:
>>>> On Tuesday, September 12, 2023 at 1:09:38 PM UTC-6, MitchAlsup wrote:
>>>>> On Tuesday, September 12, 2023 at 12:32:41 AM UTC-5, Quadibloc wrote:
>>>>
>>>>>> Instead, have 36 bit floats by having a 36-bit word and 9-bit bytes.
>>>>
>>>>> Well that only took 2 years longer than it should have.
>>> <
>>>> A number of the solutions I proposed to using variables of odd lengths
>>>> would work with reasonable efficiency, such as using a 12-bit unit and
>>>> simply using standard techniques for supporting access of unaligned
>>>> operands in memory - basically, use dual-channel memory, and everything
>>>> works just fine, with only a small overhead.
>>>>
>>>> But I wasn't satisfied since my goal is a blazingly-fast vector architecture,
>>> <
>>> I should note: almost all computers known in their day as being blazingly fast
>>> were extremely simple......
>>> <
>>
>>Yeah.
>>
>>Early on, maximizing clock speed seemed to be the ideal.
>>More MHz, more instructions per second, more fast.
>
> Early on, clock speeds were in the khz (e.g. PDP-8/E ran at 385 khz,
> the "high speed" PDP-8/A at 666khz).
>
> A controlling factor was the access time to magnetic core memory.

Reading about this (I am too young to have used core memory computers
myself), I found it interesting that there was a time when CPUs
were bound by core memory speeds. Caches helped then, but were
very expensive.

Then, with the introduction of semiconductor memory (at least some)
CPUs could be run much faster, for example the SuperNOVA SC.
Then, CPU speed started to overtake memory latency again, and
caches are now here to stay.

Re: Solving the Floating-Point Conundrum

<21e54328-e162-4080-b399-f0a2602213f7n@googlegroups.com>


https://news.novabbs.org/devel/article-flat.php?id=34026&group=comp.arch#34026

Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 05:07:29 -0700 (PDT)
In-Reply-To: <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <21e54328-e162-4080-b399-f0a2602213f7n@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: jsavard@ecn.ab.ca (Quadibloc)
Injection-Date: Thu, 14 Sep 2023 12:07:30 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
 by: Quadibloc - Thu, 14 Sep 2023 12:07 UTC

On Wednesday, September 13, 2023 at 8:02:45 AM UTC-6, Quadibloc wrote:

> I think you need to sing along with me...
>
> The world will be better for this,
> That one man, scorned and covered with scars,
> Still strove, with his last ounce of courage

From what has followed in this thread, it's apparent
that some do not know the tune in order to sing along.

So here we go:
https://www.youtube.com/watch?v=dyd0ucV2MCM
https://www.youtube.com/watch?v=oo7VlD66ISM

As some may have felt that the song was supportive of U.S. military
action in Vietnam, Jack Jones sang it with slightly modified words:

https://www.youtube.com/watch?v=THY15gW2jBI

which were also used by Frank Sinatra when he sang the song.

https://www.youtube.com/watch?v=xoN7_lR1MOw

The line in this changed version
"To be better far than you are"
always reminds me of another song, though...

https://www.youtube.com/watch?v=BEnBcm3QIfs

which I find distracting.

John Savard

Re: Solving the Floating-Point Conundrum

<f0b735cc-1bc0-4ff8-831b-3136e4f0f270n@googlegroups.com>


https://news.novabbs.org/devel/article-flat.php?id=34027&group=comp.arch#34027

Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 05:23:16 -0700 (PDT)
In-Reply-To: <21e54328-e162-4080-b399-f0a2602213f7n@googlegroups.com>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<21e54328-e162-4080-b399-f0a2602213f7n@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <f0b735cc-1bc0-4ff8-831b-3136e4f0f270n@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: jsavard@ecn.ab.ca (Quadibloc)
Injection-Date: Thu, 14 Sep 2023 12:23:17 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
 by: Quadibloc - Thu, 14 Sep 2023 12:23 UTC

On Thursday, September 14, 2023 at 6:07:32 AM UTC-6, Quadibloc wrote:

> The line in this changed version
> "To be better far than you are"
> always reminds me of another song, though...
>
> https://www.youtube.com/watch?v=BEnBcm3QIfs
>
> which I find distracting.

Here's a modern version...
https://www.youtube.com/watch?v=6eL5rkmemis

and a famous animated version:
https://www.youtube.com/watch?v=Sy0mLLKJ0DU

John Savard

Re: Solving the Floating-Point Conundrum

<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>


https://news.novabbs.org/devel/article-flat.php?id=34028&group=comp.arch#34028

Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 07:25:48 -0700 (PDT)
In-Reply-To: <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: timcaffrey@aol.com (Timothy McCaffrey)
Injection-Date: Thu, 14 Sep 2023 14:25:48 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
 by: Timothy McCaffrey - Thu, 14 Sep 2023 14:25 UTC

On Wednesday, September 13, 2023 at 10:02:45 AM UTC-4, Quadibloc wrote:
> On Tuesday, September 12, 2023 at 1:09:38 PM UTC-6, MitchAlsup wrote:
> > On Tuesday, September 12, 2023 at 12:32:41 AM UTC-5, Quadibloc wrote:
>
> > > Instead, have 36 bit floats by having a 36-bit word and 9-bit bytes.
>
> > Well that only took 2 years longer than it should have.
> A number of the solutions I proposed to using variables of odd lengths
> would work with reasonable efficiency, such as using a 12-bit unit and
> simply using standard techniques for supporting access of unaligned
> operands in memory - basically, use dual-channel memory, and everything
> works just fine, with only a small overhead.

I always thought 48 bit was a good length for FP, it just exceeds the precision
of hand-held calculators (about 11 digits). I have this weird hang-up about
multi-million dollar mainframes (circa-1975) having less accuracy than calculators.
(Obviously not true of the CDC systems).

Plus, you can store either 8 or 6 bit characters! :)

- Tim

Re: Solving the Floating-Point Conundrum

<NzFMM.3190$H0Ge.1252@fx05.iad>


https://news.novabbs.org/devel/article-flat.php?id=34030&group=comp.arch#34030

X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Solving the Floating-Point Conundrum
Newsgroups: comp.arch
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com> <c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
Lines: 38
Message-ID: <NzFMM.3190$H0Ge.1252@fx05.iad>
NNTP-Posting-Date: Thu, 14 Sep 2023 15:23:25 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Thu, 14 Sep 2023 15:23:25 GMT
 by: Scott Lurndal - Thu, 14 Sep 2023 15:23 UTC

Timothy McCaffrey <timcaffrey@aol.com> writes:
>On Wednesday, September 13, 2023 at 10:02:45 AM UTC-4, Quadibloc wrote:
>> On Tuesday, September 12, 2023 at 1:09:38 PM UTC-6, MitchAlsup wrote:
>> > On Tuesday, September 12, 2023 at 12:32:41 AM UTC-5, Quadibloc wrote:
>>
>> > > Instead, have 36 bit floats by having a 36-bit word and 9-bit bytes.
>>
>> > Well that only took 2 years longer than it should have.
>> A number of the solutions I proposed to using variables of odd lengths
>> would work with reasonable efficiency, such as using a 12-bit unit and
>> simply using standard techniques for supporting access of unaligned
>> operands in memory - basically, use dual-channel memory, and everything
>> works just fine, with only a small overhead.
>
>I always thought 48 bit was a good length for FP, it just exceeds the precision
>of hand-held calculators (about 11 digits). I have this weird hang-up about
>multi-million dollar mainframes (circa-1975) having less accuracy than calculators.
>(Obviously not true of the CDC systems).

And not true of the Burroughs systems, medium or large.

Although most of medium systems were doing fixed point, rather
than floating point, they still supported 100 digit operands.

>
>Plus, you can store either 8 or 6 bit characters! :)

Indeed, Burroughs moved from BCL (6-bit) to EBCDIC (8-bit)
on the Large systems in that timeframe.

Re: Solving the Floating-Point Conundrum

<udvaff$2m74v$1@dont-email.me>


https://news.novabbs.org/devel/article-flat.php?id=34031&group=comp.arch#34031

From: sfuld@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Re: Solving the Floating-Point Conundrum
Date: Thu, 14 Sep 2023 08:56:31 -0700
Organization: A noiseless patient Spider
Lines: 32
Message-ID: <udvaff$2m74v$1@dont-email.me>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com>
<5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 14 Sep 2023 15:56:31 -0000 (UTC)
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:NqEngF5ve4gu6/FSFVWG2CY7mNE=
Content-Language: en-US
In-Reply-To: <c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
 by: Stephen Fuld - Thu, 14 Sep 2023 15:56 UTC

On 9/14/2023 7:25 AM, Timothy McCaffrey wrote:
> On Wednesday, September 13, 2023 at 10:02:45 AM UTC-4, Quadibloc wrote:
>> On Tuesday, September 12, 2023 at 1:09:38 PM UTC-6, MitchAlsup wrote:
>>> On Tuesday, September 12, 2023 at 12:32:41 AM UTC-5, Quadibloc wrote:
>>
>>>> Instead, have 36 bit floats by having a 36-bit word and 9-bit bytes.
>>
>>> Well that only took 2 years longer than it should have.
>> A number of the solutions I proposed to using variables of odd lengths
>> would work with reasonable efficiency, such as using a 12-bit unit and
>> simply using standard techniques for supporting access of unaligned
>> operands in memory - basically, use dual-channel memory, and everything
>> works just fine, with only a small overhead.
>
> I always thought 48 bit was a good length for FP, it just exceeds the precision
> of hand-held calculators (about 11 digits). I have this weird hang-up about
> multi-million dollar mainframes (circa-1975) having less accuracy than calculators.

Circa 1975 hand held calculators were just becoming popular, so weren't
an influence on the computer designs of that time.

However, the story goes that, in the 1950s or 60s, the Navy, a large
Univac customer, wanted computers with at least the precision of the
desktop electro-mechanical calculators of the day which was 10 decimal
digits. This forced the requirement of 36 bit integers.
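The arithmetic behind that requirement is easy to check; the sketch below (Python, purely illustrative, with only the 10-digit and 36-bit figures taken from the post) computes the minimum word size needed for 10 decimal digits:

```python
import math

# Smallest number of bits that can hold every 10-digit decimal value,
# i.e. all integers in 0 .. 10**10 - 1:
bits_needed = math.ceil(10 * math.log2(10))

print(bits_needed)  # 34, so a 36-bit word covers 10 digits with two bits to spare
```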

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Solving the Floating-Point Conundrum

<edb0d2c4-1689-44b4-ae81-5ab1ef234f8en@googlegroups.com>


https://news.novabbs.org/devel/article-flat.php?id=34032&group=comp.arch#34032

Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 08:59:11 -0700 (PDT)
In-Reply-To: <c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <edb0d2c4-1689-44b4-ae81-5ab1ef234f8en@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: jsavard@ecn.ab.ca (Quadibloc)
Injection-Date: Thu, 14 Sep 2023 15:59:11 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
 by: Quadibloc - Thu, 14 Sep 2023 15:59 UTC

On Thursday, September 14, 2023 at 8:25:50 AM UTC-6, Timothy McCaffrey wrote:

> I always thought 48 bit was a good length for FP, it just exceeds the precision
> of hand-held calculators (about 11 digits). I have this weird hang-up about
> multi-million dollar mainframes (circa-1975) having less accuracy than calculators.
> (Obviously not true of the CDC systems).
>
> Plus, you can store either 8 or 6 bit characters! :)

I tend to agree, although my view is a little different.

Given the history of the IBM 7090 versus the IBM 360, apparently the 36-bit
floats of the former were suitable for many applications while the 32-bit
floats of the 360 weren't. That's why I chose it as the lowest available
precision, given information that it was useful.

But I included a 48-bit precision, designed to give 11 digits of precision and
an exponent range that included that of a pocket calculator - 10 ^ +/- 99.

It wasn't just to match the pocket calculator, though. I took pocket calculators
as one piece of evidence - along with a large number of old mathematical
tables that contained numbers to 10-digit precision. So it seemed to me that
before computers, 10 digit precision was considered a desirable maximum
for at least some types of scientific computation.
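The claimed fit of 11 digits plus a 10^+/-99 exponent range into 48 bits can be budgeted out; the sketch below (Python, illustrative only, and the field split is one plausible allocation rather than the actual format) does the arithmetic:

```python
import math

digits = 11                                              # target decimal precision
mantissa_bits = math.ceil(digits * math.log2(10))        # 37 bits for 11 digits

exp_magnitude = math.ceil(99 * math.log2(10))            # log2(10**99) is about 329
exponent_bits = math.ceil(math.log2(2 * exp_magnitude))  # cover roughly -329..+329

total = 1 + mantissa_bits + exponent_bits                # sign + mantissa + exponent
print(mantissa_bits, exponent_bits, total)               # 37 10 48
```

The budget comes out to exactly 48 bits, which is one way to see why the format felt natural.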

John Savard

Re: Solving the Floating-Point Conundrum

<udvgi8$2nda9$1@dont-email.me>


https://news.novabbs.org/devel/article-flat.php?id=34033&group=comp.arch#34033

From: cr88192@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: Solving the Floating-Point Conundrum
Date: Thu, 14 Sep 2023 12:40:18 -0500
Organization: A noiseless patient Spider
Lines: 84
Message-ID: <udvgi8$2nda9$1@dont-email.me>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com>
<5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 14 Sep 2023 17:40:24 -0000 (UTC)
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.15.0
Cancel-Lock: sha1:b8G5cyBOmy937EfK/cBYVJVosuk=
Content-Language: en-US
In-Reply-To: <c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
 by: BGB - Thu, 14 Sep 2023 17:40 UTC

On 9/14/2023 9:25 AM, Timothy McCaffrey wrote:
> On Wednesday, September 13, 2023 at 10:02:45 AM UTC-4, Quadibloc wrote:
>> On Tuesday, September 12, 2023 at 1:09:38 PM UTC-6, MitchAlsup wrote:
>>> On Tuesday, September 12, 2023 at 12:32:41 AM UTC-5, Quadibloc wrote:
>>
>>>> Instead, have 36 bit floats by having a 36-bit word and 9-bit bytes.
>>
>>> Well that only took 2 years longer than it should have.
>> A number of the solutions I proposed to using variables of odd lengths
>> would work with reasonable efficiency, such as using a 12-bit unit and
>> simply using standard techniques for supporting access of unaligned
>> operands in memory - basically, use dual-channel memory, and everything
>> works just fine, with only a small overhead.
>
> I always thought 48 bit was a good length for FP, it just exceeds the precision
> of hand-held calculators (about 11 digits). I have this weird hang-up about
> multi-million dollar mainframes (circa-1975) having less accuracy than calculators.
> (Obviously not true of the CDC systems).
>
> Plus, you can store either 8 or 6 bit characters! :)
>

FWIW:
I had briefly considered using a "Binary64 truncated to 48 bits" format
for BJX2 (would have ignored the low 16 bits on input, and zeroed them
on output).

I ended up not going this direction though, as the cost savings didn't
really seem to be enough to justify the downsides.
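The truncation described here can be modeled directly on IEEE 754 Binary64 bit patterns; the sketch below (Python, an illustration of the described scheme rather than BJX2's actual behavior) zeroes the low 16 mantissa bits and bounds the resulting relative error:

```python
import math
import struct

def trunc48(x: float) -> float:
    """Binary64 with the low 16 mantissa bits forced to zero (48-bit storage)."""
    (raw,) = struct.unpack("<Q", struct.pack("<d", x))
    (out,) = struct.unpack("<d", struct.pack("<Q", raw & ~0xFFFF))
    return out

x = math.pi
y = trunc48(x)
rel_err = abs(x - y) / x
assert rel_err < 2.0 ** -36   # 52 - 16 = 36 mantissa bits survive truncation
assert trunc48(y) == y        # re-truncating is a no-op, matching zero-on-output
```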

If one wants the Fp<->Int conversion to handle 64-bit integers, FADD
still needs at least a 64-bit mantissa internally.

In both cases, I would have needed ~ 6 DSP48s for FMUL, just for full
Binary64 one needs to spend some extra LUTs to deal with stretching
things a little further:
For FMUL, one effectively needs to deal with two 54-bit numbers;
Mostly to add '01' as the top two bits of the unpacked mantissa.

The 18s*18s->36s / 17u*17u->34u DSP48 multipliers aren't quite
sufficient, but one can make up the difference by using some 3-bit
multipliers (which can be made out of LUTs).
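The limb decomposition being described, building a wide multiply out of 18-bit hardware multipliers, can be sketched in software; in this illustration (Python; the 54-bit and 18-bit figures come from the post, but the code is not BJX2) a 54x54-bit multiply is formed from nine 18x18 partial products:

```python
def mul54_via_18bit_limbs(a: int, b: int) -> int:
    """Schoolbook 54x54-bit multiply from 18x18-bit pieces: one DSP48-style
    partial product per limb pair (nine total), accumulated with shifts."""
    LIMB = 18
    mask = (1 << LIMB) - 1
    al = [(a >> (LIMB * i)) & mask for i in range(3)]   # 3 limbs cover 54 bits
    bl = [(b >> (LIMB * j)) & mask for j in range(3)]
    acc = 0
    for i in range(3):
        for j in range(3):
            acc += (al[i] * bl[j]) << (LIMB * (i + j))  # 18x18 -> 36-bit product
    return acc

a = (1 << 54) - 1                 # largest 54-bit operand
b = 0x2AAAAAAAAAAAAA              # arbitrary 54-bit pattern
assert mul54_via_18bit_limbs(a, b) == a * b
```

A hardware implementation can do better than nine full products (the post quotes six DSPs plus small LUT multipliers, exploiting the signed/unsigned widths of the DSP48), but the decomposition principle is the same.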

Well, either this, or live with a Binary64 variant that effectively
ignores the low 3 bits of the mantissa...

In this case, the next size up is effectively a 68-bit mantissa built
using 10 DSPs, or 85-bit using 15 DSPs.

I had considered trying to use the latter in a "LongDouble" format
(effectively Binary128 with the mantissa truncated to 80 bits).

However, this was too expensive to be worthwhile.
The "cheaper" option being mostly to implement 128-bit ALU ops in order
to allow for faster software emulation (which, in turn, can give
full-precision Binary128).

Comparably, a Long-Double with a 64 or 68 bit mantissa didn't seem to
offer enough to really be worthwhile; but the cost overhead would have
been "less absurd".

One other wacky possibility being to implement full Binary128 in
hardware, just only FADD/FSUB and FCMP are fast; whereas FMUL/FDIV would
be handled with a Shift-Add multiplier unit (possibly, but would be
horridly slow).

Well, or alternatively, use internal pipelining trickery to build the
big multiplier via multiplexing a smaller multiplier (had evaluated and
rejected this idea for 64-bit multiplier, eventually ending up with a
shift-add unit instead, but this remains as possible, *).

*: This is faster but also somewhat more expensive than a Shift-Add
design. However, doing something in hardware is "kinda moot" if it will
be slower than doing it in software (and doing this in software being
one area where widening integer multiply and multiply-accumulate
instructions are useful...).

....

Re: Solving the Floating-Point Conundrum

<udviul$2muvo$2@dont-email.me>


https://news.novabbs.org/devel/article-flat.php?id=34034&group=comp.arch#34034

From: sfuld@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Re: Solving the Floating-Point Conundrum
Date: Thu, 14 Sep 2023 11:21:09 -0700
Organization: A noiseless patient Spider
Lines: 74
Message-ID: <udviul$2muvo$2@dont-email.me>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com>
<5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<093b4223-81e2-4a15-bd70-b5ecb3264e30n@googlegroups.com>
<udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad>
<c7691518-3285-40df-b9f3-2335acdf4c21n@googlegroups.com>
<udtr0p$2fsjp$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 14 Sep 2023 18:21:09 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="9da50a8c82fab622889801bfaf7a0634";
logging-data="2849784"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/zkugPyCbWri1dsME5XKXBeMNkIwHLE48="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:8r6nkLSLHN7bmpfi5IRGRUWmrQs=
In-Reply-To: <udtr0p$2fsjp$1@dont-email.me>
Content-Language: en-US
 by: Stephen Fuld - Thu, 14 Sep 2023 18:21 UTC

On 9/13/2023 7:26 PM, BGB wrote:
> On 9/13/2023 2:00 PM, Quadibloc wrote:
>> On Wednesday, September 13, 2023 at 11:37:30 AM UTC-6, Scott Lurndal
>> wrote:
>>> BGB <cr8...@gmail.com> writes:
>>
>>> Early on, clock speeds were in the khz (e.g. PDP-8/E ran at 385 khz,
>>> the "high speed" PDP-8/A at 666khz).
>>
>>> A controlling factor was the access time to magnetic core memory.
>>
>> Your "early" is not his "early". His early is much more recent - before
>> Dennard Scaling died, back when we were at 65nm or so.
>>
>
> Yeah, seems probably so.
>
> I was imagining an era of CPUs that ran roughly 3 orders of magnitude
> faster, where it seemed like processors like the DEC Alpha were claiming
> clock-speeds that seemingly put everyone else to shame.
>
> In the naivety of youth, it seemed like all this could go on forever.
>
> Like, the general mindset of the era seemingly summed up in Weird Al's
> "It's all about the Pentiums" song.
>
> ...
>
>
>
> But, at least in terms of MHz, computers now are not *that* much faster
> than what I had back when I was in high-school. Way more RAM and HDD
> space, but MHz had mostly hit an impassable wall.
>
> Now, HDDs and RAM have also begun to slow down, and may also soon hit a
> wall, ...
>
>
>
> Though, by other metrics (other than MHz), performance comparison gets a
> bit more weird.
>
> Like, I can't really establish a good comparison between my project and
> vintage computers, as things seem to quickly become a bit non-linear
> (and trying to extrapolate things here keeps running into stuff that
> "just doesn't make sense").

Making good choices of what programs to benchmark is not trivial. If
you are talking about really old computers, the benchmark programs
Whetstone and Dhrystone used to be popular, and are, I believe, freely
available. Unfortunately, at least for you, their successor, SPEC,
costs money to get.

> Well, also, I can't run benchmarks on stuff I don't have...
>
> I can't even entirely classify whether performance is "good" or "bad"
> relative to clock-speed, it seems to depend a lot on what I am looking
> at (and the specifics of the piece of code that is being used as the
> benchmark).

Yup. But clearly you should "broaden" your test suite from just old
games. There are lots of "standardized" benchmarks. Check out

https://en.wikipedia.org/wiki/Category:Benchmarks_(computing)

for a partial list.

I suspect others here can make better suggestions than I can.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Solving the Floating-Point Conundrum

<udvk03$2o18j$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=34035&group=comp.arch#34035

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: cr88192@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: Solving the Floating-Point Conundrum
Date: Thu, 14 Sep 2023 13:38:53 -0500
Organization: A noiseless patient Spider
Lines: 82
Message-ID: <udvk03$2o18j$1@dont-email.me>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com>
<5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<21e54328-e162-4080-b399-f0a2602213f7n@googlegroups.com>
<f0b735cc-1bc0-4ff8-831b-3136e4f0f270n@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 14 Sep 2023 18:38:59 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="85c787f02124ca9cea273f1e43995271";
logging-data="2884883"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+UgZdfeGe8/dYlNjpsymum"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.15.0
Cancel-Lock: sha1:oI5Q1IdMWC32EYtrDirz1Belc2k=
In-Reply-To: <f0b735cc-1bc0-4ff8-831b-3136e4f0f270n@googlegroups.com>
Content-Language: en-US
 by: BGB - Thu, 14 Sep 2023 18:38 UTC

On 9/14/2023 7:23 AM, Quadibloc wrote:
> On Thursday, September 14, 2023 at 6:07:32 AM UTC-6, Quadibloc wrote:
>
>> The line in this changed version
>> "To be better far than you are"
>> always reminds me of another song, though...
>>
>> https://www.youtube.com/watch?v=BEnBcm3QIfs
>>
>> which I find distracting.
>
> Here's a modern version...
> https://www.youtube.com/watch?v=6eL5rkmemis
>
> and a famous animated version:
> https://www.youtube.com/watch?v=Sy0mLLKJ0DU
>

I had not heard of any of this before...

Granted, these sorts of songs are not really my style.

I am more into things like House/EDM/Dubstep/etc.

Though, I guess House was apparently a descendant of the Disco genre;
some Disco is OK, but it is a little dated.

Also, some amount of Tracker/Demoscene music is cool, but this is
oriented around a technology (Mod Players/Trackers) rather than being a
genre in its own right (though it does generally tend to lean in a
Techno-like direction). This also overlaps to some extent with
Chiptune music.

When I was much younger it was mostly stuff like Goth-Rock and
Industrial. Though, around my early/mid 20s, I started to get kinda
off-put by a lot of the more overt cultish and anti-religious aspects of
many of the bands in these genres.

So, then, it was mostly a lot of Trance and House...
At least (unlike most Goth Rock) they had something better to do than
bashing religion or singing what were effectively praise songs about
vampires or similar...

Meanwhile, my parents are more into Rock (or Classic Rock) and Metal.

Though, I suspect my transition in terms of preferred genres was also
associated with a general shift in views/attitudes.

Granted, never did quite make the switch over to more mainstream views
about the nature of value, seemingly instead just sort of making a move
from a more nihilistic outlook to more identifying with existentialism
(and then facing the annoyance that many people can't seem to see the
difference, and try to use the same condemnations/arguments against
existentialism as they would against nihilism).

Though, I suspect there are differences in some of my views compared
with the more traditional variants. Some of the more traditional
philosophers needlessly assign subjective intent to something (the
nature of meaninglessness) which is, by nature, entirely incapable of
subjective intent (the void has neither care nor malice, given that it
is, by nature, nothingness).

And, apparently, some people seem to think that existentialism is
incompatible with theistic beliefs, but I disagree here. Like, I come
at it from the perspective that believing in intrinsic values leads to
a tangled mess of paradoxes, whereas existentialism is in this sense a
much simpler explanation (in terms of the whole "Occam's Razor"
thing); ...

Like, unless one assumes pantheism or similar, there is little reason to
assume any incompatibility here.

But, alas...

> John Savard

Re: Solving the Floating-Point Conundrum

<lyIMM.19137$hmAd.15816@fx12.iad>

https://news.novabbs.org/devel/article-flat.php?id=34036&group=comp.arch#34036

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx12.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Solving the Floating-Point Conundrum
Newsgroups: comp.arch
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com> <093b4223-81e2-4a15-bd70-b5ecb3264e30n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <c7691518-3285-40df-b9f3-2335acdf4c21n@googlegroups.com> <udtr0p$2fsjp$1@dont-email.me> <udviul$2muvo$2@dont-email.me>
Lines: 72
Message-ID: <lyIMM.19137$hmAd.15816@fx12.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Thu, 14 Sep 2023 18:46:41 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Thu, 14 Sep 2023 18:46:41 GMT
X-Received-Bytes: 3628
 by: Scott Lurndal - Thu, 14 Sep 2023 18:46 UTC

Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
>On 9/13/2023 7:26 PM, BGB wrote:
>> On 9/13/2023 2:00 PM, Quadibloc wrote:
>>> On Wednesday, September 13, 2023 at 11:37:30 AM UTC-6, Scott Lurndal
>>> wrote:
>>>> BGB <cr8...@gmail.com> writes:
>>>
>>>> Early on, clock speeds were in the khz (e.g. PDP-8/E ran at 385 khz,
>>>> the "high speed" PDP-8/A at 666khz).
>>>
>>>> A controlling factor was the access time to magnetic core memory.
>>>
>>> Your "early" is not his "early". His early is much more recent - before
>>> Dennard Scaling died, back when we were at 65nm or so.
>>>
>>
>> Yeah, seems probably so.
>>
>> I was imagining an era of CPUs that ran roughly 3 orders of magnitude
>> faster, where it seemed like processors like the DEC Alpha were claiming
>> clock-speeds that seemingly put everyone else to shame.
>>
>> In the naivety of youth, it seemed like all this could go on forever.
>>
>> Like, the general mindset of the era seemingly summed up in Weird Al's
>> "It's all about the Pentiums" song.
>>
>> ...
>>
>>
>>
>> But, at least in terms of MHz, computers now are not *that* much faster
>> than what I had back when I was in high-school. Way more RAM and HDD
>> space, but MHz had mostly hit an impassable wall.
>>
>> Now, HDDs and RAM have also begun to slow down, and may also soon hit a
>> wall, ...
>>
>>
>>
>> Though, by other metrics (other than MHz), performance comparison gets a
>> bit more weird.
>>
>> Like, I can't really establish a good comparison between my project and
>> vintage computers, as things seem to quickly become a bit non-linear
>> (and trying to extrapolate things here keeps running into stuff that
>> "just doesn't make sense").
>
>Making good choices of what programs to benchmark is not trivial. If
>you are talking about really old computers, the benchmark programs
>Whetstone and Dhrystone used to be popular, and are, I believe, freely
>available. Unfortunately, at least for you, their successor, SPEC,
>costs money to get.
>
>
>> Well, also, I can't run benchmarks on stuff I don't have...
>>
>> I can't even entirely classify whether performance is "good" or "bad"
>> relative to clock-speed, it seems to depend a lot on what I am looking
>> at (and the specifics of the piece of code that is being used as the
>> benchmark).
>
>Yup. But clearly you should "broaden" your test suite from just old
>games. There are lots of "standardized" benchmarks. Check out
>
>https://en.wikipedia.org/wiki/Category:Benchmarks_(computing)
>

This one is commonly used for core comparisons:

https://www.eembc.org/coremark/

Re: Solving the Floating-Point Conundrum

<FAIMM.19138$hmAd.18115@fx12.iad>

https://news.novabbs.org/devel/article-flat.php?id=34037&group=comp.arch#34037

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx12.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Solving the Floating-Point Conundrum
Newsgroups: comp.arch
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com> <21e54328-e162-4080-b399-f0a2602213f7n@googlegroups.com> <f0b735cc-1bc0-4ff8-831b-3136e4f0f270n@googlegroups.com> <udvk03$2o18j$1@dont-email.me>
Lines: 34
Message-ID: <FAIMM.19138$hmAd.18115@fx12.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Thu, 14 Sep 2023 18:49:09 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Thu, 14 Sep 2023 18:49:09 GMT
X-Received-Bytes: 1735
 by: Scott Lurndal - Thu, 14 Sep 2023 18:49 UTC

BGB <cr88192@gmail.com> writes:
>On 9/14/2023 7:23 AM, Quadibloc wrote:
>> On Thursday, September 14, 2023 at 6:07:32 AM UTC-6, Quadibloc wrote:
>>
>>> The line in this changed version
>>> "To be better far than you are"
>>> always reminds me of another song, though...
>>>
>>> https://www.youtube.com/watch?v=BEnBcm3QIfs
>>>
>>> which I find distracting.
>>
>> Here's a modern version...
>> https://www.youtube.com/watch?v=6eL5rkmemis
>>
>> and a famous animated version:
>> https://www.youtube.com/watch?v=Sy0mLLKJ0DU
>>
>
>I had not heard of any of this before...
>
>
>Granted, these sorts of songs are not really my style.

You might find this a bit more interesting. It's all
instrumental, no vocals.

(GRP All Star Big Band)

https://www.youtube.com/watch?v=1RrrPxdo-H0

They were fun to see live.

Lately I've been binging Dream Theater.

Re: Solving the Floating-Point Conundrum

<1632b7c2-85df-453e-a2bf-fc9b22ddb164n@googlegroups.com>

https://news.novabbs.org/devel/article-flat.php?id=34038&group=comp.arch#34038

X-Received: by 2002:a05:622a:1aa5:b0:40f:e2a5:3100 with SMTP id s37-20020a05622a1aa500b0040fe2a53100mr73539qtc.6.1694718364125;
Thu, 14 Sep 2023 12:06:04 -0700 (PDT)
X-Received: by 2002:a05:6871:6aa8:b0:1bb:72af:4373 with SMTP id
zf40-20020a0568716aa800b001bb72af4373mr2024836oab.10.1694718363859; Thu, 14
Sep 2023 12:06:03 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 12:06:03 -0700 (PDT)
In-Reply-To: <lyIMM.19137$hmAd.15816@fx12.iad>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:291:29f0:304a:8171:2be2:9586;
posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 2600:1700:291:29f0:304a:8171:2be2:9586
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<093b4223-81e2-4a15-bd70-b5ecb3264e30n@googlegroups.com> <udspsq$27b0q$1@dont-email.me>
<qrmMM.7$5jrd.6@fx06.iad> <c7691518-3285-40df-b9f3-2335acdf4c21n@googlegroups.com>
<udtr0p$2fsjp$1@dont-email.me> <udviul$2muvo$2@dont-email.me> <lyIMM.19137$hmAd.15816@fx12.iad>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <1632b7c2-85df-453e-a2bf-fc9b22ddb164n@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: MitchAlsup@aol.com (MitchAlsup)
Injection-Date: Thu, 14 Sep 2023 19:06:04 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 4826
 by: MitchAlsup - Thu, 14 Sep 2023 19:06 UTC

On Thursday, September 14, 2023 at 1:46:45 PM UTC-5, Scott Lurndal wrote:
> Stephen Fuld <sf...@alumni.cmu.edu.invalid> writes:
> >On 9/13/2023 7:26 PM, BGB wrote:
> >> On 9/13/2023 2:00 PM, Quadibloc wrote:
> >>> On Wednesday, September 13, 2023 at 11:37:30 AM UTC-6, Scott Lurndal
> >>> wrote:
> >>>> BGB <cr8...@gmail.com> writes:
> >>>
> >>>> Early on, clock speeds were in the khz (e.g. PDP-8/E ran at 385 khz,
> >>>> the "high speed" PDP-8/A at 666khz).
> >>>
> >>>> A controlling factor was the access time to magnetic core memory.
> >>>
> >>> Your "early" is not his "early". His early is much more recent - before
> >>> Dennard Scaling died, back when we were at 65nm or so.
> >>>
> >>
> >> Yeah, seems probably so.
> >>
> >> I was imagining an era of CPUs that ran roughly 3 orders of magnitude
> >> faster, where it seemed like processors like the DEC Alpha were claiming
> >> clock-speeds that seemingly put everyone else to shame.
> >>
> >> In the naivety of youth, it seemed like all this could go on forever.
> >>
> >> Like, the general mindset of the era seemingly summed up in Weird Al's
> >> "It's all about the Pentiums" song.
> >>
> >> ...
> >>
> >>
> >>
> >> But, at least in terms of MHz, computers now are not *that* much faster
> >> than what I had back when I was in high-school. Way more RAM and HDD
> >> space, but MHz had mostly hit an impassable wall.
> >>
> >> Now, HDDs and RAM have also begun to slow down, and may also soon hit a
> >> wall, ...
> >>
> >>
> >>
> >> Though, by other metrics (other than MHz), performance comparison gets a
> >> bit more weird.
> >>
> >> Like, I can't really establish a good comparison between my project and
> >> vintage computers, as things seem to quickly become a bit non-linear
> >> (and trying to extrapolate things here keeps running into stuff that
> >> "just doesn't make sense").
> >
> >Making good choices of what programs to benchmark is not trivial. If
> >you are talking about really old computers, the benchmark programs
> >Whetstone and Dhrystone used to be popular, and are, I believe, freely
> >available. Unfortunately, at least for you, their successor, SPEC,
> >costs money to get.
> >
> >
> >> Well, also, I can't run benchmarks on stuff I don't have...
> >>
> >> I can't even entirely classify whether performance is "good" or "bad"
> >> relative to clock-speed, it seems to depend a lot on what I am looking
> >> at (and the specifics of the piece of code that is being used as the
> >> benchmark).
> >
> >Yup. But clearly you should "broaden" your test suite from just old
> >games. There are lots of "standardized" benchmarks. Check out
> >
> >https://en.wikipedia.org/wiki/Category:Benchmarks_(computing)
> >
> This one is commonly used for core comparisons:
>
> https://www.eembc.org/coremark/
<
Another (more towards the embedded side) is EMBench::

https://github.com/embench/embench-iot

Re: Solving the Floating-Point Conundrum

<ee908360-76d7-42d1-931e-1d99317be4a2n@googlegroups.com>

https://news.novabbs.org/devel/article-flat.php?id=34039&group=comp.arch#34039

X-Received: by 2002:a05:622a:1803:b0:40f:f509:3a75 with SMTP id t3-20020a05622a180300b0040ff5093a75mr156487qtc.7.1694718773949;
Thu, 14 Sep 2023 12:12:53 -0700 (PDT)
X-Received: by 2002:a05:6808:350f:b0:3ad:29a4:f542 with SMTP id
cn15-20020a056808350f00b003ad29a4f542mr129835oib.5.1694718773734; Thu, 14 Sep
2023 12:12:53 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 12:12:53 -0700 (PDT)
In-Reply-To: <udvgi8$2nda9$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:291:29f0:304a:8171:2be2:9586;
posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 2600:1700:291:29f0:304a:8171:2be2:9586
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com> <udvgi8$2nda9$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <ee908360-76d7-42d1-931e-1d99317be4a2n@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: MitchAlsup@aol.com (MitchAlsup)
Injection-Date: Thu, 14 Sep 2023 19:12:53 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 2707
 by: MitchAlsup - Thu, 14 Sep 2023 19:12 UTC

On Thursday, September 14, 2023 at 12:40:28 PM UTC-5, BGB wrote:
> On 9/14/2023 9:25 AM, Timothy McCaffrey wrote:
> >
> FWIW:
> I had briefly considered using a "Binary64 truncated to 48 bits" format
> for BJX2 (would have ignored the low 16 bits on input, and zeroed them
> on output).
>
>
> I ended up not going this direction though, as the cost savings didn't
> really seem to be enough to justify the downsides.
>
> If one wants the Fp<->Int conversion to handle 64-bit integers, FADD
> still needs at least a 64-bit mantissa internally.
>
Overall, it is a quality of implementation metric--you either want quality
or you slice and dice to gain in other areas.
<
Me, I have IEEE transcendentals; about 3/4 of them require only 57 bits
(same as Goldschmidt or Newton-Raphson division), while the other 1/4
require 58 bits. But lower-end machines (are there any these days??)
really want 64×64 integer multiply (and divide), so by expanding the
multiplier tree to 64×64, one gets a universal Function Unit that
handles "everything else" the ISA has to offer. In addition, instead of
making a rounding error 1 out of 237 times, I now make a rounding error
1 out of 3E+6 times on the transcendentals.
>

Re: memory speeds, Solving the Floating-Point Conundrum

<ue0esp$ps2$1@gal.iecc.com>

https://news.novabbs.org/devel/article-flat.php?id=34041&group=comp.arch#34041

Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!news.misty.com!news.iecc.com!.POSTED.news.iecc.com!not-for-mail
From: johnl@taugh.com (John Levine)
Newsgroups: comp.arch
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
Date: Fri, 15 Sep 2023 02:18:01 -0000 (UTC)
Organization: Taughannock Networks
Message-ID: <ue0esp$ps2$1@gal.iecc.com>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de>
Injection-Date: Fri, 15 Sep 2023 02:18:01 -0000 (UTC)
Injection-Info: gal.iecc.com; posting-host="news.iecc.com:2001:470:1f07:1126:0:676f:7373:6970";
logging-data="26498"; mail-complaints-to="abuse@iecc.com"
In-Reply-To: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de>
Cleverness: some
X-Newsreader: trn 4.0-test77 (Sep 1, 2010)
Originator: johnl@iecc.com (John Levine)
 by: John Levine - Fri, 15 Sep 2023 02:18 UTC

According to Thomas Koenig <tkoenig@netcologne.de>:
>Reading about this (I am too old to have used core memory computers
>myself), I found it interesting that there was a time when CPUs
>were bound by core memory speeds. Caches helped then, but were
>very expensive.

Caches arrived too late to help much.

Core memory was invented in the early 1950s (by multiple people
leading to lengthy and expensive patent fights) and was used
commercially in the IBM 704 in 1954. It was the dominant kind
of RAM until the early 1970s when MOS DRAM replaced it.

The first computer with a cache was the 360/85, announced in early
1968 but not shipped until the end of 1969. Before that, the main way
that people sped up core memory was to divide it into a pair of
interleaved banks so the CPU could run alternating cycles in each bank
and overlap them. I think a few computers did four-way interleave. It
was also common to make the memory double width and fetch an even/odd
pair of words in each cycle.
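
A toy model (mine, not from the post) of why interleaving helped: with
each bank busy for one core cycle per access, striping consecutive
addresses across banks lets accesses overlap, so ideal sequential
throughput scales with the bank count.

```python
# Idealized k-way interleaved core: consecutive addresses hit
# consecutive banks; each bank is busy for `cycle` after an access.
# Ignores CPU issue rate and non-sequential access patterns.

def finish_time(n_accesses, cycle=1.0, banks=1):
    free_at = [0.0] * banks
    done = 0.0
    for i in range(n_accesses):
        b = i % banks
        start = free_at[b]            # wait only for this bank
        free_at[b] = start + cycle
        done = max(done, start + cycle)
    return done

assert finish_time(8, banks=1) == 8.0   # no overlap
assert finish_time(8, banks=2) == 4.0   # pair of interleaved banks
assert finish_time(8, banks=4) == 2.0   # four-way interleave
```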

In that era, you could build ROMs that were several times faster than
core, so microprogrammed machines running out of ROM could keep the
core RAM going at full speed, with no performance penalty. I think
that's what they had in mind when they designed the VAX, but of course
their assumptions were almost immediately outdated.

--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

Re: memory speeds, Solving the Floating-Point Conundrum

<7b006031-9160-4769-8839-929a630e4c5bn@googlegroups.com>

https://news.novabbs.org/devel/article-flat.php?id=34042&group=comp.arch#34042

X-Received: by 2002:a05:622a:1110:b0:410:9089:6b5f with SMTP id e16-20020a05622a111000b0041090896b5fmr10056qty.5.1694748453342;
Thu, 14 Sep 2023 20:27:33 -0700 (PDT)
X-Received: by 2002:a05:6870:3a01:b0:1d1:4472:2f5b with SMTP id
du1-20020a0568703a0100b001d144722f5bmr191768oab.0.1694748453010; Thu, 14 Sep
2023 20:27:33 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 20:27:32 -0700 (PDT)
In-Reply-To: <ue0esp$ps2$1@gal.iecc.com>
Injection-Info: google-groups.googlegroups.com; posting-host=2001:56a:fa34:c000:f5e8:d2ea:b7bb:cd4a;
posting-account=1nOeKQkAAABD2jxp4Pzmx9Hx5g9miO8y
NNTP-Posting-Host: 2001:56a:fa34:c000:f5e8:d2ea:b7bb:cd4a
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de>
<ue0esp$ps2$1@gal.iecc.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <7b006031-9160-4769-8839-929a630e4c5bn@googlegroups.com>
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
From: jsavard@ecn.ab.ca (Quadibloc)
Injection-Date: Fri, 15 Sep 2023 03:27:33 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 1950
 by: Quadibloc - Fri, 15 Sep 2023 03:27 UTC

On Thursday, September 14, 2023 at 8:18:06 PM UTC-6, John Levine wrote:

> In that era, you could build ROMs that were several times faster than
> core, so microprogrammed machines running out of ROM could keep the
> core RAM going at full speed so there was no performance penalty.

And then there was the Packard-Bell 440, which was user-microprogrammable.

It used a form of RAM that ran five times faster than the machine's regular
core memory. That RAM was magnetic as well, and it was closely related to
core - it was BIAX, made by the Aeronutronics division of Ford.

John Savard

Re: Solving the Floating-Point Conundrum

<43901a10-4859-43d7-b500-70030047c8b2n@googlegroups.com>

https://news.novabbs.org/devel/article-flat.php?id=34043&group=comp.arch#34043

X-Received: by 2002:a05:622a:1a0b:b0:3fd:df16:18f4 with SMTP id f11-20020a05622a1a0b00b003fddf1618f4mr11802qtb.8.1694754394459;
Thu, 14 Sep 2023 22:06:34 -0700 (PDT)
X-Received: by 2002:a9d:7d91:0:b0:6b9:620e:d6a7 with SMTP id
j17-20020a9d7d91000000b006b9620ed6a7mr225196otn.1.1694754394304; Thu, 14 Sep
2023 22:06:34 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Thu, 14 Sep 2023 22:06:34 -0700 (PDT)
In-Reply-To: <edb0d2c4-1689-44b4-ae81-5ab1ef234f8en@googlegroups.com>
Injection-Info: google-groups.googlegroups.com; posting-host=2001:56a:fa34:c000:903:2658:f92:5abe;
posting-account=1nOeKQkAAABD2jxp4Pzmx9Hx5g9miO8y
NNTP-Posting-Host: 2001:56a:fa34:c000:903:2658:f92:5abe
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com> <edb0d2c4-1689-44b4-ae81-5ab1ef234f8en@googlegroups.com>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <43901a10-4859-43d7-b500-70030047c8b2n@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: jsavard@ecn.ab.ca (Quadibloc)
Injection-Date: Fri, 15 Sep 2023 05:06:34 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 3274
 by: Quadibloc - Fri, 15 Sep 2023 05:06 UTC

On Thursday, September 14, 2023 at 9:59:13 AM UTC-6, Quadibloc wrote:

> But I included a 48-bit precision, designed to give 11 bits of precision and
> an exponent range that included that of a pocket calculator - 10 ^ +/- 99..

Intermediate precision touches on what remains unsolved.

Changing from 36 bits and 60 bits to 36 bits and 72 bits gets to a conventional
architecture, where there is no need for complications in addressing or memory
structure to handle the single and double precision types.

A 54-bit intermediate precision can easily be addressed - and handling scalars
is simple enough on any system that can support unaligned operands. That
basically just requires dual channel memory and appropriate shifting circuitry
in the memory access path.

But that only works for scalars. What about vectors of intermediate precision
quantities?

One can have a vector register composed of a number of 72-bit registers,
and one can have it handle twice as many 36-bit values by having one in
each half of the register. That would make for rapid transfers to and from
memory that are simple conceptually.

If one does it that way, one could handle intermediate precision by doubling
the width of the individual portions of the vector register - so that it's made
up of 144-bit registers that can hold two double-precision numbers or four
single-precision numbers. That could also hold three intermediate precision
numbers... that are 48 bits in length.

Putting numbers like that in memory, though, makes for very complicated
addressing.

So perhaps the option that needs to be taken is to allow 48-bit numbers
to be saved either in vector form, or in scalar form, where scalar form is
a 54-bit cell with six bits of unused space!
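
The packing arithmetic above can be sketched directly (my
illustration; the 144-bit element width and 48-bit lanes are from the
scheme described, and little-endian lane order is an assumption):

```python
# A 144-bit vector element holds 2 x 72-bit, 4 x 36-bit, or exactly
# 3 x 48-bit lanes with no waste; a scalar 48-bit value in a 54-bit
# cell leaves 6 bits unused.

def pack(vals, width):
    """Pack equal-width lanes into one 144-bit element image."""
    assert len(vals) * width <= 144
    r = 0
    for i, v in enumerate(vals):
        assert 0 <= v < (1 << width)
        r |= v << (width * i)
    return r

def unpack(r, width, lanes):
    return [(r >> (width * i)) & ((1 << width) - 1) for i in range(lanes)]

vals = [0x123456789ABC, 0xFFFFFFFFFFFF, 0x0F0F0F0F0F0F]
assert unpack(pack(vals, 48), 48, 3) == vals
assert 144 % 48 == 0 and 54 - 48 == 6   # three lanes fit; scalar wastes 6 bits
```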

John Savard

Re: Solving the Floating-Point Conundrum

<ue0t5p$349du$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=34044&group=comp.arch#34044

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: cr88192@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: Solving the Floating-Point Conundrum
Date: Fri, 15 Sep 2023 01:21:39 -0500
Organization: A noiseless patient Spider
Lines: 76
Message-ID: <ue0t5p$349du$1@dont-email.me>
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com>
<5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com>
<udvgi8$2nda9$1@dont-email.me>
<ee908360-76d7-42d1-931e-1d99317be4a2n@googlegroups.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 15 Sep 2023 06:21:46 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="7e23c672fb7fd6bd85fa3d0132928f4d";
logging-data="3286462"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+sHH6+mGLwBjbHpmswFt77"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.15.0
Cancel-Lock: sha1:TiAbOnxiVh2gBTm5Cq5mvf8+Nkg=
Content-Language: en-US
In-Reply-To: <ee908360-76d7-42d1-931e-1d99317be4a2n@googlegroups.com>
 by: BGB - Fri, 15 Sep 2023 06:21 UTC

On 9/14/2023 2:12 PM, MitchAlsup wrote:
> On Thursday, September 14, 2023 at 12:40:28 PM UTC-5, BGB wrote:
>> On 9/14/2023 9:25 AM, Timothy McCaffrey wrote:
>>>
>> FWIW:
>> I had briefly considered using a "Binary64 truncated to 48 bits" format
>> for BJX2 (would have ignored the low 16 bits on input, and zeroed them
>> on output).
>>
>>
>> I ended up not going this direction though, as the cost savings didn't
>> really seem to be enough to justify the downsides.
>>
>> If one wants the Fp<->Int conversion to handle 64-bit integers, FADD
>> still needs at least a 64-bit mantissa internally.
>>
> Overall, it is a quality of implementation metric--you either want quality
> or you slice and dice to gain in other areas.

Full Binary64 and ability to do integer conversion with 64-bit values
seemed worthwhile.

Vs, say, partial precision and only being able to work with 32-bit
integer values...

Had also gone with a Binary64 FPU rather than Binary32, partly because
Binary32 isn't really sufficient as a primary FPU; and needing to fall
back to software emulation for "double" almost defeats the point of
having an FPU.

Originally, there were no separate Binary32 ops, with Binary32 being
handled by always converting to Binary64 internally.

Now, technically some Binary32 ops exist indirectly via SIMD ops.

There are also FADDA/FSUBA/FMULA ops for operations which still use the
Binary64 format, but may use a faster but lower precision FPU (if this
low-precision FPU offers at least the equivalent of Binary32 precision).

Mostly this is because the low-precision FPU can offer 3-cycle latency
(pipelined) vs the usual 6 cycles (non-pipelined) needed for Binary64 ops.

> <
> Me, I have IEEE transcendentals, and I have about 3/4 of them only require
> 57-bits (same as Goldschmidt or Newton-Raphson Division) the other 1/4
> require 58-bits. But lower end machines (are there any these days??) really
> want 64×64 integer multiply (and divide) so by expanding the multiplier
> tree to 64×64, one gets a universal Function Unit that handles "everything
> else" the ISA has to offer. In addition, instead of making a rounding error
> 1 out of 237 times, I now make a rounding error 1 out of 3E+6 times on the
> transcendentals.

Far more minimal on my end.

For a while, there was no FDIV, only FADD/FSUB/FMUL.

The current FDIV instruction is still pretty slow though, and is still
regarded as an optional extension.

At present, it is still generally faster to do FDIV in software via an
implicit runtime call.

Note that dividing by a constant is implicitly converted into a multiply
by the reciprocal of the constant.

So, say:
y=x/2.0;
Becomes:
y=x*0.5;

Re: memory speeds, Solving the Floating-Point Conundrum

<TG_MM.3194$H0Ge.3155@fx05.iad>


https://news.novabbs.org/devel/article-flat.php?id=34047&group=comp.arch#34047

Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx05.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
Newsgroups: comp.arch
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de> <ue0esp$ps2$1@gal.iecc.com>
Lines: 21
Message-ID: <TG_MM.3194$H0Ge.3155@fx05.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Fri, 15 Sep 2023 15:24:35 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Fri, 15 Sep 2023 15:24:35 GMT
X-Received-Bytes: 1549
 by: Scott Lurndal - Fri, 15 Sep 2023 15:24 UTC

John Levine <johnl@taugh.com> writes:
>According to Thomas Koenig <tkoenig@netcologne.de>:
>>Reading about this (I am too old to have used core memory computers
>>myself), I found it interesting that there was a time when CPUs
>>were bound by core memory speeds. Caches helped then, but were
>>very expensive.
>
>Caches arrived too late to help much.
>
>Core memory was invented in the early 1950s (by multiple people
>leading to lengthy and expensive patent fights) and was used
>commercially in the IBM 704 in 1954. It was the dominant kind
>of RAM until the early 1970s when MOS DRAM replaced it.
>
>The first computer with a cache was the 360/85, announced in early
>1968 but not shipped until the end of 1969.

I'm pretty sure that the B5500 had some form of small cache at
that point.

Re: Solving the Floating-Point Conundrum

<55e51f03-be11-4a45-a706-ba0dac0a148fn@googlegroups.com>


https://news.novabbs.org/devel/article-flat.php?id=34048&group=comp.arch#34048

X-Received: by 2002:a05:620a:44d3:b0:76e:ffbf:8235 with SMTP id y19-20020a05620a44d300b0076effbf8235mr63112qkp.0.1694796201904;
Fri, 15 Sep 2023 09:43:21 -0700 (PDT)
X-Received: by 2002:a05:6808:20a4:b0:3a8:74ff:6c01 with SMTP id
s36-20020a05680820a400b003a874ff6c01mr875444oiw.5.1694796201636; Fri, 15 Sep
2023 09:43:21 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Fri, 15 Sep 2023 09:43:21 -0700 (PDT)
In-Reply-To: <ue0t5p$349du$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:291:29f0:4d5f:e2b1:6f40:1cb8;
posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 2600:1700:291:29f0:4d5f:e2b1:6f40:1cb8
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<a0dd4fb4-d708-48ae-9764-3ce5e24aec0cn@googlegroups.com> <5fa92a78-d27c-4dff-a3dc-35ee7b43cbfan@googlegroups.com>
<c9131381-2e9b-4008-bc43-d4df4d4d8ab4n@googlegroups.com> <udvgi8$2nda9$1@dont-email.me>
<ee908360-76d7-42d1-931e-1d99317be4a2n@googlegroups.com> <ue0t5p$349du$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <55e51f03-be11-4a45-a706-ba0dac0a148fn@googlegroups.com>
Subject: Re: Solving the Floating-Point Conundrum
From: MitchAlsup@aol.com (MitchAlsup)
Injection-Date: Fri, 15 Sep 2023 16:43:21 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 4733
 by: MitchAlsup - Fri, 15 Sep 2023 16:43 UTC

On Friday, September 15, 2023 at 1:21:50 AM UTC-5, BGB wrote:
> On 9/14/2023 2:12 PM, MitchAlsup wrote:
> > On Thursday, September 14, 2023 at 12:40:28 PM UTC-5, BGB wrote:
> >> On 9/14/2023 9:25 AM, Timothy McCaffrey wrote:
> >>>
> >> FWIW:
> >> I had briefly considered using a "Binary64 truncated to 48 bits" format
> >> for BJX2 (would have ignored the low 16 bits on input, and zeroed them
> >> on output).
> >>
> >>
> >> I ended up not going this direction though, as the cost savings didn't
> >> really seem to be enough to justify the downsides.
> >>
> >> If one wants the Fp<->Int conversion to handle 64-bit integers, FADD
> >> still needs at least a 64-bit mantissa internally.
> >>
> > Overall, it is a quality of implementation metric--you either want quality
> > or you slice and dice to gain in other areas.
> Full Binary64 and ability to do integer conversion with 64-bit values
> seemed worthwhile.
>
> Vs, say, partial precision and only being able to work with 32-bit
> integer values...
>
> Had also went with a Binary64 FPU rather than Binary32 partly as
> Binary32 isn't really sufficient as a primary FPU; and needing to fall
> back to software emulation for "double" almost defeats the point of
> having an FPU.
>
>
> Originally, there were no separate Binary32 ops, with Binary32 being
> handled by always converting to Binary64 internally.
>
> Now, technically some Binary32 ops exist indirectly via SIMD ops.
>
>
> There are also FADDA/FSUBA/FMULA ops for operations which still use the
> Binary64 format, but may use a faster but lower precision FPU (if this
> low-precision FPU offers at least the equivalent of Binary32 precision).
>
> Mostly this is because the low-precision FPU can offer 3-cycle latency
> (pipelined) vs the usual 6 cycles (non-pipelined) needed for Binary64 ops.
> > <
> > Me, I have IEEE transcendentals, and I have about 3/4 of them only require
> > 57-bits (same as Goldschmidt or Newton-Raphson Division) the other 1/4
> > require 58-bits. But lower end machines (are there any these days??) really
> > want 64×64 integer multiply (and divide) so by expanding the multiplier
> > tree to 64×64, one gets a universal Function Unit that handles "everything
> > else" the ISA has to offer. In addition, instead of making a rounding error
> > 1 out of 237 times, I now make a rounding error 1 out of 3E+6 times on the
> > transcendentals.
> Far more minimal on my end.
>
> For a while, there was no FDIV, only FADD/FSUB/FMUL.
>
> The current FDIV instruction is still pretty slow though, and is still
> regarded as an optional extension.
>
> At present, it is still generally faster to do FDIV in software via an
> implicit runtime call.
>
> Note that dividing by a constant is implicitly converted into a multiply
> by the reciprocal of the constant.
>
> So, say:
> y=x/2.0;
> Becomes:
> y=x*0.5;
<
This is legal in IEEE 754 when the reciprocal of the constant is exact.

Re: memory speeds, Solving the Floating-Point Conundrum

<2332d098-8ff4-496c-85cb-502ddf501054n@googlegroups.com>


https://news.novabbs.org/devel/article-flat.php?id=34049&group=comp.arch#34049

X-Received: by 2002:a05:620a:2701:b0:770:f19d:d6ac with SMTP id b1-20020a05620a270100b00770f19dd6acmr43310qkp.0.1694796299627;
Fri, 15 Sep 2023 09:44:59 -0700 (PDT)
X-Received: by 2002:a9d:6396:0:b0:6bb:102d:1ff6 with SMTP id
w22-20020a9d6396000000b006bb102d1ff6mr544241otk.1.1694796299299; Fri, 15 Sep
2023 09:44:59 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: comp.arch
Date: Fri, 15 Sep 2023 09:44:59 -0700 (PDT)
In-Reply-To: <TG_MM.3194$H0Ge.3155@fx05.iad>
Injection-Info: google-groups.googlegroups.com; posting-host=2600:1700:291:29f0:4d5f:e2b1:6f40:1cb8;
posting-account=H_G_JQkAAADS6onOMb-dqvUozKse7mcM
NNTP-Posting-Host: 2600:1700:291:29f0:4d5f:e2b1:6f40:1cb8
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com>
<udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de>
<ue0esp$ps2$1@gal.iecc.com> <TG_MM.3194$H0Ge.3155@fx05.iad>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <2332d098-8ff4-496c-85cb-502ddf501054n@googlegroups.com>
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
From: MitchAlsup@aol.com (MitchAlsup)
Injection-Date: Fri, 15 Sep 2023 16:44:59 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 2457
 by: MitchAlsup - Fri, 15 Sep 2023 16:44 UTC

On Friday, September 15, 2023 at 10:24:40 AM UTC-5, Scott Lurndal wrote:
> John Levine <jo...@taugh.com> writes:
> >According to Thomas Koenig <tko...@netcologne.de>:
> >>Reading about this (I am too old to have used core memory computers
> >>myself), I found it interesting that there was a time when CPUs
> >>were bound by core memory speeds. Caches helped then, but were
> >>very expensive.
> >
> >Caches arrived too late to help much.
> >
> >Core memory was invented in the early 1950s (by multiple people
> >leading to lengthy and expensive patent fights) and was used
> >commercially in the IBM 704 in 1954. It was the dominant kind
> >of RAM until the early 1970s when MOS DRAM replaced it.
> >
> >The first computer with a cache was the 360/85, announced in early
> >1968 but not shipped until the end of 1969.
<
CDC 6600 had a small loop buffer (like four 60-bit loop registers)
CDC 7600 greatly expanded this buffer.
<
> I'm pretty sure that the B5500 had some form of small cache at
> that point.

Re: memory speeds, Solving the Floating-Point Conundrum

<DQ0NM.794$AfZe.536@fx45.iad>


https://news.novabbs.org/devel/article-flat.php?id=34051&group=comp.arch#34051

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx45.iad.POSTED!not-for-mail
From: ThatWouldBeTelling@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de> <ue0esp$ps2$1@gal.iecc.com> <TG_MM.3194$H0Ge.3155@fx05.iad> <2332d098-8ff4-496c-85cb-502ddf501054n@googlegroups.com>
In-Reply-To: <2332d098-8ff4-496c-85cb-502ddf501054n@googlegroups.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 55
Message-ID: <DQ0NM.794$AfZe.536@fx45.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Fri, 15 Sep 2023 17:51:31 UTC
Date: Fri, 15 Sep 2023 13:50:54 -0400
X-Received-Bytes: 3580
 by: EricP - Fri, 15 Sep 2023 17:50 UTC

MitchAlsup wrote:
> On Friday, September 15, 2023 at 10:24:40 AM UTC-5, Scott Lurndal wrote:
>> John Levine <jo...@taugh.com> writes:
>>> According to Thomas Koenig <tko...@netcologne.de>:
>>>> Reading about this (I am too old to have used core memory computers
>>>> myself), I found it interesting that there was a time when CPUs
>>>> were bound by core memory speeds. Caches helped then, but were
>>>> very expensive.
>>> Caches arrived too late to help much.
>>>
>>> Core memory was invented in the early 1950s (by multiple people
>>> leading to lengthy and expensive patent fights) and was used
>>> commercially in the IBM 704 in 1954. It was the dominant kind
>>> of RAM until the early 1970s when MOS DRAM replaced it.
>>>
>>> The first computer with a cache was the 360/85, announced in early
>>> 1968 but not shipped until the end of 1969.
> <
> CDC 6600 had a small loop buffer (like four 60-bit loop registers)
> CDC 7600 greatly expanded this buffer.
> <
>> I'm pretty sure that the B5500 had some form of small cache at
>> that point.

The CDC 6600 was released in 1964, and the book "Considerations in Computer
Design – Leading up to the Control Data 6600" is dated 1963.

I tracked the cache concept back as far as this
1962 paper from The National Cash Register Company.

Considerations in the design of a computer with high
logic-to-memory speed ratio 1962
https://archive.computerhistory.org/resources/access/text/2020/10/102714096-05-01-acc.pdf

"Look-aside consists of a set of logic speed registers, which are invisible
to the programmer as they are never addressed and are not addressable by
the programmer. Thus, they are, philosophically, part of the main memory.
The conventional memory in this system will henceforth be called the
"store" and the entire memory, including look-aside, will be referred
to as "main memory".

Each look-aside register consists basically of three sections:
The first of these holds the contents of a store cell,
the second section holds the store address of that cell,
and the third is a usage indicator. (See Figure 1)
The store address portions of the look-aside registers are connected
to a comparator which has the ability to simultaneously compare the cell
addresses in look-aside with the address of a cell requested by the
system. If the address is in look-aside, an operation on the contents
of that cell may take place immediately without cycling the store.
If there is no matching address, the main store must be accessed.
When the store is accessed, the contents of the cell and the
cell address are placed in their respective places in look-aside."

Re: memory speeds, Solving the Floating-Point Conundrum

<Y81NM.38977$CVBc.20612@fx16.iad>


https://news.novabbs.org/devel/article-flat.php?id=34052&group=comp.arch#34052

Path: i2pn2.org!i2pn.org!usenet.goja.nl.eu.org!3.eu.feeder.erje.net!feeder.erje.net!feeder1.feed.usenet.farm!feed.usenet.farm!peer03.ams4!peer.am4.highwinds-media.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx16.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
Newsgroups: comp.arch
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de> <ue0esp$ps2$1@gal.iecc.com> <TG_MM.3194$H0Ge.3155@fx05.iad> <2332d098-8ff4-496c-85cb-502ddf501054n@googlegroups.com> <DQ0NM.794$AfZe.536@fx45.iad>
Lines: 36
Message-ID: <Y81NM.38977$CVBc.20612@fx16.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Fri, 15 Sep 2023 18:13:12 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Fri, 15 Sep 2023 18:13:12 GMT
X-Received-Bytes: 2549
 by: Scott Lurndal - Fri, 15 Sep 2023 18:13 UTC

EricP <ThatWouldBeTelling@thevillage.com> writes:
>MitchAlsup wrote:
>> On Friday, September 15, 2023 at 10:24:40 AM UTC-5, Scott Lurndal wrote:
>>> John Levine <jo...@taugh.com> writes:
>>>> According to Thomas Koenig <tko...@netcologne.de>:
>>>>> Reading about this (I am too old to have used core memory computers
>>>>> myself), I found it interesting that there was a time when CPUs
>>>>> were bound by core memory speeds. Caches helped then, but were
>>>>> very expensive.
>>>> Caches arrived too late to help much.
>>>>
>>>> Core memory was invented in the early 1950s (by multiple people
>>>> leading to lengthy and expensive patent fights) and was used
>>>> commercially in the IBM 704 in 1954. It was the dominant kind
>>>> of RAM until the early 1970s when MOS DRAM replaced it.
>>>>
>>>> The first computer with a cache was the 360/85, announced in early
>>>> 1968 but not shipped until the end of 1969.
>> <
>> CDC 6600 had a small loop buffer (like four 60-bit loop registers)
>> CDC 7600 greatly expanded this buffer.
>> <
>>> I'm pretty sure that the B5500 had some form of small cache at
>>> that point.
>
>The CDC 6600 was released 1964 and the book "Considerations in Computer
>Design – Leading up to the Control Data 6600" is dated 1963.
>
>I tracked the cache concept back as far as this
>1962 paper from The National Cash Register Company.

The ElectroData 220 had a "D" register which cached
an 11-digit word from memory[*] (and served as a store
buffer for some arithmetic operations). Circa 1954.

[*] Drum memory, in this case.

Re: memory speeds, Solving the Floating-Point Conundrum

<U13NM.19558$Yxl8.17383@fx14.iad>


https://news.novabbs.org/devel/article-flat.php?id=34054&group=comp.arch#34054

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx14.iad.POSTED!not-for-mail
From: ThatWouldBeTelling@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de> <ue0esp$ps2$1@gal.iecc.com> <TG_MM.3194$H0Ge.3155@fx05.iad> <2332d098-8ff4-496c-85cb-502ddf501054n@googlegroups.com> <DQ0NM.794$AfZe.536@fx45.iad> <Y81NM.38977$CVBc.20612@fx16.iad>
In-Reply-To: <Y81NM.38977$CVBc.20612@fx16.iad>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 43
Message-ID: <U13NM.19558$Yxl8.17383@fx14.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Fri, 15 Sep 2023 20:22:12 UTC
Date: Fri, 15 Sep 2023 16:21:53 -0400
X-Received-Bytes: 2817
 by: EricP - Fri, 15 Sep 2023 20:21 UTC

Scott Lurndal wrote:
> EricP <ThatWouldBeTelling@thevillage.com> writes:
>> MitchAlsup wrote:
>>> On Friday, September 15, 2023 at 10:24:40 AM UTC-5, Scott Lurndal wrote:
>>>> John Levine <jo...@taugh.com> writes:
>>>>> According to Thomas Koenig <tko...@netcologne.de>:
>>>>>> Reading about this (I am too old to have used core memory computers
>>>>>> myself), I found it interesting that there was a time when CPUs
>>>>>> were bound by core memory speeds. Caches helped then, but were
>>>>>> very expensive.
>>>>> Caches arrived too late to help much.
>>>>>
>>>>> Core memory was invented in the early 1950s (by multiple people
>>>>> leading to lengthy and expensive patent fights) and was used
>>>>> commercially in the IBM 704 in 1954. It was the dominant kind
>>>>> of RAM until the early 1970s when MOS DRAM replaced it.
>>>>>
>>>>> The first computer with a cache was the 360/85, announced in early
>>>>> 1968 but not shipped until the end of 1969.
>>> <
>>> CDC 6600 had a small loop buffer (like four 60-bit loop registers)
>>> CDC 7600 greatly expanded this buffer.
>>> <
>>>> I'm pretty sure that the B5500 had some form of small cache at
>>>> that point.
>> The CDC 6600 was released 1964 and the book "Considerations in Computer
>> Design – Leading up to the Control Data 6600" is dated 1963.
>>
>> I tracked the cache concept back as far as this
>> 1962 paper from The National Cash Register Company.
>
> The electrodata 220 had a "D" register which cached
> an 11-digit word from memory[*] (and served as an store
> buffer for some arithmetic operations). Circa 1954.
>
> [*] Drum memory, in this case.

But did it have cache-like behavior - invisible to the programmer, skipping
the memory access when the current location matches the last location?
Otherwise it is just a memory data register.

Re: memory speeds, Solving the Floating-Point Conundrum

<DA3NM.4810$3lL1.3557@fx47.iad>


https://news.novabbs.org/devel/article-flat.php?id=34056&group=comp.arch#34056

Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!peer03.ams1!peer.ams1.xlned.com!news.xlned.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx47.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
Newsgroups: comp.arch
References: <57c5e077-ac71-486c-8afa-edd6802cf6b1n@googlegroups.com> <udspsq$27b0q$1@dont-email.me> <qrmMM.7$5jrd.6@fx06.iad> <udu7us$3v2e2$1@newsreader4.netcologne.de> <ue0esp$ps2$1@gal.iecc.com> <TG_MM.3194$H0Ge.3155@fx05.iad> <2332d098-8ff4-496c-85cb-502ddf501054n@googlegroups.com> <DQ0NM.794$AfZe.536@fx45.iad> <Y81NM.38977$CVBc.20612@fx16.iad> <U13NM.19558$Yxl8.17383@fx14.iad>
Lines: 43
Message-ID: <DA3NM.4810$3lL1.3557@fx47.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Fri, 15 Sep 2023 20:59:15 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Fri, 15 Sep 2023 20:59:15 GMT
X-Received-Bytes: 3011
 by: Scott Lurndal - Fri, 15 Sep 2023 20:59 UTC

EricP <ThatWouldBeTelling@thevillage.com> writes:
>Scott Lurndal wrote:
>> EricP <ThatWouldBeTelling@thevillage.com> writes:
>>> MitchAlsup wrote:
>>>> On Friday, September 15, 2023 at 10:24:40 AM UTC-5, Scott Lurndal wrote:
>>>>> John Levine <jo...@taugh.com> writes:
>>>>>> According to Thomas Koenig <tko...@netcologne.de>:
>>>>>>> Reading about this (I am too old to have used core memory computers
>>>>>>> myself), I found it interesting that there was a time when CPUs
>>>>>>> were bound by core memory speeds. Caches helped then, but were
>>>>>>> very expensive.
>>>>>> Caches arrived too late to help much.
>>>>>>
>>>>>> Core memory was invented in the early 1950s (by multiple people
>>>>>> leading to lengthy and expensive patent fights) and was used
>>>>>> commercially in the IBM 704 in 1954. It was the dominant kind
>>>>>> of RAM until the early 1970s when MOS DRAM replaced it.
>>>>>>
>>>>>> The first computer with a cache was the 360/85, announced in early
>>>>>> 1968 but not shipped until the end of 1969.
>>>> <
>>>> CDC 6600 had a small loop buffer (like four 60-bit loop registers)
>>>> CDC 7600 greatly expanded this buffer.
>>>> <
>>>>> I'm pretty sure that the B5500 had some form of small cache at
>>>>> that point.
>>> The CDC 6600 was released 1964 and the book "Considerations in Computer
>>> Design – Leading up to the Control Data 6600" is dated 1963.
>>>
>>> I tracked the cache concept back as far as this
>>> 1962 paper from The National Cash Register Company.
>>
>> The electrodata 220 had a "D" register which cached
>> an 11-digit word from memory[*] (and served as an store
>> buffer for some arithmetic operations). Circa 1954.
>>
>> [*] Drum memory, in this case.
>
>But did it have cache-like behavior - invisible to the programmer and
>skips the memory access if the current location matches the last location?
>Otherwise it is a memory data register.

In the singular case of a branch to self :-).

