devel / comp.arch / Re: Whither the Mill?

Subject  Author
* Whither the Mill?  Stephen Fuld
+- Re: Whither the Mill?  Scott Lurndal
+- Re: Whither the Mill?  BGB
+* Re: Whither the Mill?  George Neuner
|`* Re: Whither the Mill?  BGB
| +* Re: Whither the Mill?  MitchAlsup
| |+* Re: Whither the Mill?  Scott Lurndal
| ||+* Re: Whither the Mill?  MitchAlsup
| |||`* Re: Whither the Mill?  Scott Lurndal
| ||| `* Re: Whither the Mill?  MitchAlsup
| |||  +* Re: Whither the Mill?  EricP
| |||  |`* Re: Whither the Mill?  BGB-Alt
| |||  | +* Re: Whither the Mill?  MitchAlsup
| |||  | |`- Re: Whither the Mill?  BGB-Alt
| |||  | +- Re: Whither the Mill?  MitchAlsup
| |||  | +- Re: Whither the Mill?  Scott Lurndal
| |||  | `* Re: Whither the Mill?  EricP
| |||  |  +* Re: Whither the Mill?  Scott Lurndal
| |||  |  |+* Re: Whither the Mill?  Paul A. Clayton
| |||  |  ||+- Re: Whither the Mill?  Scott Lurndal
| |||  |  ||`* Re: Whither the Mill?  MitchAlsup
| |||  |  || `* Re: Whither the Mill?  Paul A. Clayton
| |||  |  ||  +* Re: Whither the Mill?  MitchAlsup
| |||  |  ||  |`* Re: Whither the Mill?  Paul A. Clayton
| |||  |  ||  | `* Re: Whither the Mill?  Scott Lurndal
| |||  |  ||  |  `* Re: Whither the Mill?  MitchAlsup
| |||  |  ||  |   `* Re: Whither the Mill?  Scott Lurndal
| |||  |  ||  |    `- Re: Whither the Mill?  MitchAlsup
| |||  |  ||  `- Re: Whither the Mill?  Scott Lurndal
| |||  |  |`- Re: Whither the Mill?  EricP
| |||  |  `- Re: Whither the Mill?  BGB
| |||  `* Re: Whither the Mill?  Scott Lurndal
| |||   `* Re: Whither the Mill?  MitchAlsup
| |||    `- Re: Whither the Mill?  Scott Lurndal
| ||`* Re: Whither the Mill?  Niklas Holsti
| || +* Re: Whither the Mill?  Anton Ertl
| || |`- Re: Whither the Mill?  Thomas Koenig
| || +- Re: Whither the Mill?  Scott Lurndal
| || `* Re: Whither the Mill?  moi
| ||  `* Re: Whither the Mill?  BGB
| ||   `* Re: Whither the Mill?  Thomas Koenig
| ||    +- Re: Whither the Mill?  BGB
| ||    +- Re: fast compiling, Whither the Mill?  John Levine
| ||    `* Re: Whither the Mill?  Terje Mathisen
| ||     `- Re: Whither the Mill?  BGB
| |`- Re: Whither the Mill?  BGB-Alt
| +- Re: Whither the Mill?  EricP
| `* Re: Whither the Mill?  Andreas Eder
|  +- Re: Whither the Mill?  Chris M. Thomasson
|  `- Re: Whither the Mill?  BGB
`- Re: Whither the Mill?  Quadibloc

Whither the Mill?

<ulclu3$3sglk$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=35722&group=comp.arch#35722
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: sfuld@alumni.cmu.edu.invalid (Stephen Fuld)
Newsgroups: comp.arch
Subject: Whither the Mill?
Date: Wed, 13 Dec 2023 08:25:39 -0800
Organization: A noiseless patient Spider
Lines: 18
Message-ID: <ulclu3$3sglk$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Wed, 13 Dec 2023 16:25:40 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="26fc1140f1e9c4c9e5ddfa1afb8448f2";
logging-data="4080308"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/Gt2ijpG/TFYphz1PN0whBwt/eojYuaBE="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:/AykGD9sxuyevDMH2lKGqkw/kcM=
Content-Language: en-US
 by: Stephen Fuld - Wed, 13 Dec 2023 16:25 UTC

When we last heard from the merry band of Millers, they were looking for
substantial funding from a VC or similar. I suppose that if they had
gotten it, we would have heard, so I guess they haven't.

But I think there are things they could do to move forward even without
a large investment. For example, they could develop an FPGA based
system, even if it required multiple FPGAs on a custom circuit board for
not huge amounts of money. Whether this is worthwhile, I cannot say.

Anyway, has all development stopped? Or is their "sweat equity" model
still going on?

Inquiring minds want to know.

--
- Stephen Fuld
(e-mail address disguised to prevent spam)

Re: Whither the Mill?

<aVleN.31259$q3F7.20932@fx45.iad>

https://news.novabbs.org/devel/article-flat.php?id=35723&group=comp.arch#35723
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx45.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Whither the Mill?
Newsgroups: comp.arch
References: <ulclu3$3sglk$1@dont-email.me>
Lines: 16
Message-ID: <aVleN.31259$q3F7.20932@fx45.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Wed, 13 Dec 2023 17:32:54 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Wed, 13 Dec 2023 17:32:54 GMT
X-Received-Bytes: 1284
 by: Scott Lurndal - Wed, 13 Dec 2023 17:32 UTC

Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
>When we last heard from the merry band of Millers, they were looking for
>substantial funding from a VC or similar. I suppose that if they had
>gotten it, we would have heard, so I guess they haven't.
>
>But I think there are things they could do to move forward even without
>a large investment. For example, they could develop an FPGA based
>system, even if it required multiple FPGAs on a custom circuit board for
>not huge amounts of money. Whether this is worthwhile, I cannot say.
>

There might even be some way of renting time on a real
emulator from Cadence (Palladium) or Synopsys (ZeBu).

Although in my experience those who have them use them
24x7.

Re: Whither the Mill?

<ulealp$19o1i$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=35736&group=comp.arch#35736
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: cr88192@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Thu, 14 Dec 2023 01:25:44 -0600
Organization: A noiseless patient Spider
Lines: 83
Message-ID: <ulealp$19o1i$1@dont-email.me>
References: <ulclu3$3sglk$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 14 Dec 2023 07:25:45 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="471f0bfd2f5302d6124df84eb17c3e7e";
logging-data="1368114"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18YFyUsYfJqRACQzclKiGII"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:UxUyEZBpITv0xUZu5v6gZaQKga0=
In-Reply-To: <ulclu3$3sglk$1@dont-email.me>
Content-Language: en-US
 by: BGB - Thu, 14 Dec 2023 07:25 UTC

On 12/13/2023 10:25 AM, Stephen Fuld wrote:
> When we last heard from the merry band of Millers, they were looking for
> substantial funding from a VC or similar.  I suppose that if they had
> gotten it, we would have heard, so I guess they haven't.
>
> But I think there are things they could do to move forward even without
> a large investment.  For example, they could develop an FPGA based
> system, even if it required multiple FPGAs on a custom circuit board for
> not huge amounts of money.  Whether this is worthwhile, I cannot say.
>
> Anyway, has all development stopped?  Or is their "sweat equity" model
> still going on?
>
> Inquiring minds want to know.
>

Yeah, don't think I have seen anything from Ivan on here in a while...

In my case, I was doing everything on Spartan-7 and Artix-7 boards, and
had OK results (within the limits of what is possible on an FPGA).

Kinda wish it could be faster, but alas.

Sadly, anything much bigger (or faster) than the XC7A200T actually
requires paying money for the non-free version of Vivado...

Ironically, this makes the XC7A200T more valuable in a way than the
XC7K325T, as while technically smaller and weaker, it is basically the
biggest FPGA one can get before needing to hand over absurd amounts of
money to AMD/Xilinx.

Well, there is also the XC7K70T, which is technically faster, but has a
lot fewer LUTs.

And the XC7K160T, which is faster and only slightly smaller, but
significantly more expensive.

But, if one wants an FPGA cheap enough to put into a product that
someone might actually be willing to buy, one would need to aim a
little lower here. One can't put all that fancy of a soft-processor in
an XC7S50 or XC7A35T, but those parts are more reasonable to put into
consumer-electronics devices.

Though, sadly, a soft-processor can't really match something like an ARM
chip in terms of performance per dollar. It would be nice if *someone*
could dethrone ARM in terms of perf/$ (RISC-V holds promise, but only
really if someone releases a chip that is both cheap and fast).

And custom ASICs are only really an option if one has a huge amount of
money up-front.

Printable electronics with semi-conductive ink seem promising, but even
here the ink is stupidly expensive, and one would still need to build a
special-purpose printer to be able to make use of it (and it is not
particularly high-density, so the result would probably be physically
much larger and slower than a design running on an FPGA).

Though, I am not really sure what the densities or clock speeds of
printed electronics are like.

....

Though, at least in my case, it is all mostly a hobby project.
Unless "someone with a lot of money" thinks it is cool.

....

Though, one possible upside of my project being FPGA-based is that,
theoretically, I could get one of those Game Boy-like FPGA-based
emulator handhelds and port my stuff to it, though most of these devices
seem to be based around the Cyclone V for whatever reason, ...

Re: Whither the Mill?

<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com>

https://news.novabbs.org/devel/article-flat.php?id=35765&group=comp.arch#35765
Path: i2pn2.org!.POSTED!not-for-mail
From: gneuner2@comcast.net (George Neuner)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Fri, 15 Dec 2023 12:48:00 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com>
References: <ulclu3$3sglk$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Injection-Info: i2pn2.org;
logging-data="23274"; mail-complaints-to="usenet@i2pn2.org";
posting-account="h5eMH71iFfocGZucc+SnA0y5I+72/ecoTCcIjMd3Uww";
User-Agent: ForteAgent/8.00.32.1272
 by: George Neuner - Fri, 15 Dec 2023 17:48 UTC

On Wed, 13 Dec 2023 08:25:39 -0800, Stephen Fuld
<sfuld@alumni.cmu.edu.invalid> wrote:

>When we last heard from the merry band of Millers, they were looking for
>substantial funding from a VC or similar. I suppose that if they had
>gotten it, we would have heard, so I guess they haven't.
>
>But I think there are things they could do to move forward even without
>a large investment. For example, they could develop an FPGA based
>system, even if it required multiple FPGAs on a custom circuit board for
>not huge amounts of money. Whether this is worthwhile, I cannot say.
>
>Anyway, has all development stopped? Or is their "sweat equity" model
>still going on?
>
>Inquiring minds want to know.

There was a post, ostensibly from Ivan, in their web forum just a few
days ago. No news though - just an acknowledgement of another user's
post.

Last I heard, the next (current?) round of financing was - at least in
part - to be used for FPGA "proof of concept" implementations.

Problem is the Mill really is a SoC, and (to me at least) the design
appears to be so complex that it would require a large, top-of-line
(read "expensive") FPGA to fit all the functionality.

Then there is their idea that everything - from VHDL to software build
toolchain to system software - be automatically generated from a
simple functional specification. Getting THAT right is likely proving
far more difficult than simply implementing a fixed design in an FPGA.

YMMV,
George

Re: Whither the Mill?

<uli82g$217dj$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=35766&group=comp.arch#35766
Path: i2pn2.org!i2pn.org!news.swapon.de!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: cr88192@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Fri, 15 Dec 2023 13:05:49 -0600
Organization: A noiseless patient Spider
Lines: 102
Message-ID: <uli82g$217dj$1@dont-email.me>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Fri, 15 Dec 2023 19:05:52 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="99cc3dc3d98763c6aa1ea7c4bfbd5255";
logging-data="2137523"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19gn+NnubQ5/yXb4XwE+Eur"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:rY7TYdoeVPdRD60MGREJndNJRz4=
Content-Language: en-US
In-Reply-To: <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com>
 by: BGB - Fri, 15 Dec 2023 19:05 UTC

On 12/15/2023 11:48 AM, George Neuner wrote:
> On Wed, 13 Dec 2023 08:25:39 -0800, Stephen Fuld
> <sfuld@alumni.cmu.edu.invalid> wrote:
>
>> When we last heard from the merry band of Millers, they were looking for
>> substantial funding from a VC or similar. I suppose that if they had
>> gotten it, we would have heard, so I guess they haven't.
>>
>> But I think there are things they could do to move forward even without
>> a large investment. For example, they could develop an FPGA based
>> system, even if it required multiple FPGAs on a custom circuit board for
>> not huge amounts of money. Whether this is worthwhile, I cannot say.
>>
>> Anyway, has all development stopped? Or is their "sweat equity" model
>> still going on?
>>
>> Inquiring minds want to know.
>
> There was a post, ostensibly from Ivan, in their web forum just a few
> days ago. No news though - just an acknowledgement of another user's
> post.
>
>
> Last I heard, the next (current?) round of financing was - at least in
> part - to be used for FPGA "proof of concept" implementations.
>
> Problem is the Mill really is a SoC, and (to me at least) the design
> appears to be so complex that it would require a large, top-of-line
> (read "expensive") FPGA to fit all the functionality.
>

Yeah. The lower end isn't cheap, and the upper end is absurd...

For FPGAs over $1k, it almost makes more sense to ignore that they exist
(this also appears to be around the cutoff point for the free version of
Vivado; though one would have thought Xilinx would already have gotten
their money when someone bought the FPGA?...).

> Then there is their idea that everything - from VHDL to software build
> toolchain to system software - be automatically generated from a
> simple functional specification. Getting THAT right is likely proving
> far more difficult than simply implementing a fixed design in an FPGA.
>

Yeah.

Long ago, I watched another project (FoNC, led by Alan Kay) that was
also trying to go this route. I think the idea was that they wanted to
try to find a way to describe the entire software stack (from OS to
applications) in under 20k lines.

Practically, it seemed to mostly end up going nowhere, best I can tell:
a lot of "design", nothing that someone could actually use.

Though, if one sets the limits a little higher, there is a lot one can do:
One can at least, surely, make a usable compiler tool chain in under 1
million lines of code (at present, BGBCC weighs in at around 250 kLOC,
could be smaller; but, fitting a "basically functional" C compiler into
30k lines, or around the size of the Doom engine, seems a little harder).

Though, an intermediate option would be trying to pull off a
"semi-decent" compiler in under 100K lines.

If the compiler is kept smaller, it is faster to recompile from source.

Also, it would be nice to have a basically usable OS and core software
stack in under 1M lines.

Say, by not trying to be everything to everyone, and limiting how much
is allowed in the core OS (or is allowed within the build process for
the core OS).

Though, within moderate limits, 1M lines would basically be enough to fit:
  A basic kernel;
    (this excludes the Linux kernel, which is well over the size limit).
  A (moderate sized) C compiler;
    (but not GCC, which is also well over this size limit).
  A shell+utils comparable to BusyBox;
  Various core OS libraries and similar, etc.

For this, will assume an at least nominally POSIX like environment.

Programs that run on the OS would not be counted in the line-count budget.

How to deal with multi-platform portability would be more of an open
question, as this sort of thing tends to be a big source of code
expansion (or, for an OS kernel, the matter of hardware drivers, ...).

But, as can be noted, pretty much any project that gains mainstream
popularity seems to spiral out of control regarding code-size.

> YMMV,
> George

Re: Whither the Mill?

<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>

https://news.novabbs.org/devel/article-flat.php?id=35768&group=comp.arch#35768
Path: i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Fri, 15 Dec 2023 20:59:15 +0000
Organization: novaBBS
Message-ID: <a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
logging-data="38877"; mail-complaints-to="usenet@i2pn2.org";
posting-account="t+lO0yBNO1zGxasPvGSZV1BRu71QKx+JE37DnW+83jQ";
User-Agent: Rocksolid Light
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-13) on novalink.us
X-Rslight-Site: $2y$10$ANCvN35MGNruEp0H2DQKHegWBz5D5Hj87efcF4eafP53tm0KS2pTO
X-Rslight-Posting-User: 7e9c45bcd6d4757c5904fbe9a694742e6f8aa949
 by: MitchAlsup - Fri, 15 Dec 2023 20:59 UTC

BGB wrote:

> On 12/15/2023 11:48 AM, George Neuner wrote:
>> On Wed, 13 Dec 2023 08:25:39 -0800, Stephen Fuld
>> <sfuld@alumni.cmu.edu.invalid> wrote:
>>
>>> When we last heard from the merry band of Millers, they were looking for
>>> substantial funding from a VC or similar. I suppose that if they had
>>> gotten it, we would have heard, so I guess they haven't.
>>>
>>> But I think there are things they could do to move forward even without
>>> a large investment. For example, they could develop an FPGA based
>>> system, even if it required multiple FPGAs on a custom circuit board for
>>> not huge amounts of money. Whether this is worthwhile, I cannot say.
>>>
>>> Anyway, has all development stopped? Or is their "sweat equity" model
>>> still going on?
>>>
>>> Inquiring minds want to know.
>>
>> There was a post, ostensibly from Ivan, in their web forum just a few
>> days ago. No news though - just an acknowledgement of another user's
>> post.
>>
>>
>> Last I heard, the next (current?) round of financing was - at least in
>> part - to be used for FPGA "proof of concept" implementations.
>>
>> Problem is the Mill really is a SoC, and (to me at least) the design
>> appears to be so complex that it would require a large, top-of-line
>> (read "expensive") FPGA to fit all the functionality.
>>

> Yeah. the lower end isn't cheap, the upper end is absurd...

Look into the cost of making a mask-set at 7nm or at 3nm. Then we can
have a discussion on how high the number has to be to rate absurd.

> For FPGA's over $1k, almost makes more sense to ignore that they exist
> (also this appears to be around the cutoff point for the free version of
> Vivado as well; but one would have thought Xilinx would have already
> gotten their money by someone having bought the FPGA?...).

>> Then there is their idea that everything - from VHDL to software build
>> toolchain to system software - be automatically generated from a
>> simple functional specification. Getting THAT right is likely proving
>> far more difficult than simply implementing a fixed design in an FPGA.
>>

> Yeah.

> Long ago, I watched another project (FoNC, led by Alan Kay) that was
> also trying to go this route. I think the idea was that they wanted to
> try to find a way to describe the entire software stack (from OS to
> applications) in under 20k lines.

Was the language of choice APL-like ??

> Practically, it seemed to mostly end up going nowhere best I can tell, a
> lot of "design", nothing that someone could actually use.

> Though, if one sets the limits a little higher, there is a lot one can do:
> One can at least, surely, make a usable compiler tool chain in under 1
> million lines of code (at present, BGBCC weighs in at around 250 kLOC,
> could be smaller; but, fitting a "basically functional" C compiler into
> 30k lines, or around the size of the Doom engine, seems a little harder).

> Though, an intermediate option, would be trying to pull off a "semi
> decent" compiler in under 100K lines.

> If the compiler is kept smaller, it is faster to recompile from source.

In 1979 I joined a company with a mostly-FORTRAN-77 compiler that
compiled at 10,000 lines of code per second for an IBM-like minicomputer
(less decimal and string), and did a pretty good job of spitting out
high-performance code; on a machine with a 150ns cycle time.

We now have compilers struggling to achieve 10,000 lines per second per CPU
with machines of 0.2ns cycle time -- 750× (150/0.2) faster {times the number
of CPUs thrown at the problem.}
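
For anyone who wants to check that arithmetic, a throwaway snippet
(plain C; the 10,000 lines/s figure and the two cycle times are just
the numbers quoted above):

  #include <stdio.h>

  int main(void)
  {
      double old_cycle_ns = 150.0;    /* 1979 minicomputer cycle time */
      double new_cycle_ns = 0.2;      /* roughly a 5 GHz modern CPU   */
      double lines_per_s  = 10000.0;  /* compile rate quoted above    */

      printf("clock speedup          : %.0fx\n",
             old_cycle_ns / new_cycle_ns);
      printf("cycles per line in 1979: %.0f\n",
             1e9 / lines_per_s / old_cycle_ns);
      return 0;
  }

Which is to say, the 1979 compiler was spending on the order of 700
cycles per source line.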

> Also, it would be nice to have a basically usable OS and core software
> stack in under 1M lines.

There is no salable market for an OS that sheds features for compactness.

> Say, by not trying to be everything to everyone, and limiting how much
> is allowed in the core OS (or is allowed within the build process for
> the core OS).

> Though, within moderate limits, 1M lines would basically be enough to fit:
> A basic kernel;
> (this excludes the Linux kernel, which is well over the size limit).

If there were an efficient way to run the device driver stack in user-mode
without privilege, with only the MMIO pages this driver can touch mapped
into its VAS, then poof: none of the driver stack is in the kernel. --IF--

> A (moderate sized) C compiler;
> (but not GCC, which is also well over this size limit).

In 1990 C was a small language; in 2023 that statement is no longer true.
In 1990 the C compiler had 2 or 3 passes; in 2023 the LLVM compiler has
<what> 35 passes (some of them duplicates, as one pass converts into
something a future pass will convert into something some other pass can
optimize.)
In 1990 your C compiler ran natively on your machine.
In 2023 your LLVM compiler compiles 6+ front-end languages, compiles
to 20+ target ISAs, and has to produce good code on all of them.

> A shell+utils comparable to BusyBox;

Until someone prevents someone else from writing new shells, filters,
and utilities, there is no way to moderate the growth in Shell+utils.

> Various core OS libraries and similar, etc.

> For this, will assume an at least nominally POSIX like environment.

> Programs that run on the OS would not be counted in the line-count budget.

> How to deal with multi-platform portability would be more of an open
> question, as this sort of thing tends to be a big source of code
> expansion (or, for an OS kernel, the matter of hardware drivers, ...).

> But, as can be noted, pretty much any project that gains mainstream
> popularity seems to spiral out of control regarding code-size.

With 20TB disk drives, 32 GB main memory sizes, and fiber internet,
what is the reason for worrying about something you can do almost
nothing about?

>> YMMV,

Indeed.

>> George

Re: Whither the Mill?

<eo3fN.2272$SyNd.1815@fx33.iad>

https://news.novabbs.org/devel/article-flat.php?id=35769&group=comp.arch#35769
Path: i2pn2.org!i2pn.org!usenet.goja.nl.eu.org!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!peer01.ams1!peer.ams1.xlned.com!news.xlned.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx33.iad.POSTED!not-for-mail
From: ThatWouldBeTelling@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
In-Reply-To: <uli82g$217dj$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 45
Message-ID: <eo3fN.2272$SyNd.1815@fx33.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Fri, 15 Dec 2023 21:18:02 UTC
Date: Fri, 15 Dec 2023 16:17:41 -0500
X-Received-Bytes: 2635
 by: EricP - Fri, 15 Dec 2023 21:17 UTC

BGB wrote:
> On 12/15/2023 11:48 AM, George Neuner wrote:
>> On Wed, 13 Dec 2023 08:25:39 -0800, Stephen Fuld
>> <sfuld@alumni.cmu.edu.invalid> wrote:
>>
>>> When we last heard from the merry band of Millers, they were looking for
>>> substantial funding from a VC or similar. I suppose that if they had
>>> gotten it, we would have heard, so I guess they haven't.
>>>
>>> But I think there are things they could do to move forward even without
>>> a large investment. For example, they could develop an FPGA based
>>> system, even if it required multiple FPGAs on a custom circuit board for
>>> not huge amounts of money. Whether this is worthwhile, I cannot say.
>>>
>>> Anyway, has all development stopped? Or is their "sweat equity" model
>>> still going on?
>>>
>>> Inquiring minds want to know.
>>
>> There was a post, ostensibly from Ivan, in their web forum just a few
>> days ago. No news though - just an acknowledgement of another user's
>> post.
>>
>>
>> Last I heard, the next (current?) round of financing was - at least in
>> part - to be used for FPGA "proof of concept" implementations.
>>
>> Problem is the Mill really is a SoC, and (to me at least) the design
>> appears to be so complex that it would require a large, top-of-line
>> (read "expensive") FPGA to fit all the functionality.
>>
>
> Yeah. the lower end isn't cheap, the upper end is absurd...
>
> For FPGA's over $1k, almost makes more sense to ignore that they exist
> (also this appears to be around the cutoff point for the free version of
> Vivado as well; but one would have thought Xilinx would have already
> gotten their money by someone having bought the FPGA?...).

Found a recent article that says Xilinx prices run from $8 to $100;
low-end Intel FPGAs start at $3, but the high-end Stratix models
go from $10,000 to $100,000.

Re: Whither the Mill?

<JA4fN.38899$xHn7.23180@fx14.iad>

https://news.novabbs.org/devel/article-flat.php?id=35772&group=comp.arch#35772
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx14.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Whither the Mill?
Newsgroups: comp.arch
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
Lines: 45
Message-ID: <JA4fN.38899$xHn7.23180@fx14.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Fri, 15 Dec 2023 22:39:37 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Fri, 15 Dec 2023 22:39:37 GMT
X-Received-Bytes: 2733
 by: Scott Lurndal - Fri, 15 Dec 2023 22:39 UTC

mitchalsup@aol.com (MitchAlsup) writes:
>BGB wrote:

>> For FPGA's over $1k, almost makes more sense to ignore that they exist
>> (also this appears to be around the cutoff point for the free version of
>> Vivado as well; but one would have thought Xilinx would have already
>> gotten their money by someone having bought the FPGA?...).

For anyone serious, a verification engineer can cost $500-1000/day. The
FPGA cost is in the noise.

For a hobby? Well...

>> If the compiler is kept smaller, it is faster to recompile from source.
>
>In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>10,000 lines of code per second for an IBM-like minicomputer (less decimal
>and string) and did a pretty good job of spitting out high performance
>code; on a machine with a 150ns cycle time.

As did our COBOL compiler (which ran in 50KB). But in both cases,
the languages were far simpler and much easier to generate efficient
code than languages like Modula, Pascal, C, et alia.

>> Though, within moderate limits, 1M lines would basically be enough to fit:
>> A basic kernel;
>> (this excludes the Linux kernel, which is well over the size limit).
>
>If there were an efficient way to run the device driver sack in user-mode
>without privilege and only the MMI/O pages this driver can touch mapped
>into his VAS. Poof none of the driver stack is in the kernel. --IF--

That's actually quite common, and one of the raisons d'etre of the
PCI Express SR-IOV feature. When you can present a virtual
function to the user directly (mapping the MMIO region into
the user-mode virtual address space), the app has direct access
to the hardware. Interrupts are the only tricky part, and
the kernel virtio subsystem, which interfaces with the user
application via shared memory, provides interrupt handling
to the application.

An IOMMU provides memory protection for DMA operations initiated
by the virtual function, ensuring it only accesses the application's
virtual address space.
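
As a concrete, if minimal, illustration of the user-mode half of this,
here is a sketch using the Linux UIO framework. Everything
device-specific is hypothetical (the /dev/uio0 node, the 4 KiB region
size, the "control register" at offset 0), and the SR-IOV/virtio
plumbing is not shown; UIO just gives you the MMIO mapping plus a
blocking read() for interrupts:

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define REG_SPACE_SIZE 0x1000    /* assumed size of MMIO region 0 */

  int main(void)
  {
      int fd = open("/dev/uio0", O_RDWR);
      if (fd < 0) { perror("open /dev/uio0"); return 1; }

      /* Map the device's MMIO region 0 into this process's address
         space (with UIO, region N is selected by the mmap offset). */
      volatile uint32_t *regs = mmap(NULL, REG_SPACE_SIZE,
                                     PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);
      if (regs == MAP_FAILED) { perror("mmap"); return 1; }

      regs[0] = 1;    /* poke the (hypothetical) control register */

      /* Interrupts: a blocking read() on a UIO fd returns the
         cumulative interrupt count once an interrupt arrives. */
      uint32_t irq_count;
      if (read(fd, &irq_count, sizeof irq_count) == sizeof irq_count)
          printf("interrupt count: %u\n", (unsigned)irq_count);

      munmap((void *)regs, REG_SPACE_SIZE);
      close(fd);
      return 0;
  }

UIO only covers the MMIO-plus-interrupt half; the DMA side is where
the IOMMU protection described above comes in.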

Re: Whither the Mill?

<2695abc72966c220809e5c6690a8edf6@news.novabbs.com>

https://news.novabbs.org/devel/article-flat.php?id=35773&group=comp.arch#35773
Path: i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Fri, 15 Dec 2023 23:02:13 +0000
Organization: novaBBS
Message-ID: <2695abc72966c220809e5c6690a8edf6@news.novabbs.com>
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
logging-data="48074"; mail-complaints-to="usenet@i2pn2.org";
posting-account="t+lO0yBNO1zGxasPvGSZV1BRu71QKx+JE37DnW+83jQ";
User-Agent: Rocksolid Light
X-Rslight-Site: $2y$10$lIc3xT687vtaR2ju11iDjuZs/lRVy/G4BhEpA5SY8TZI04K/ELQUG
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-13) on novalink.us
X-Rslight-Posting-User: 7e9c45bcd6d4757c5904fbe9a694742e6f8aa949
 by: MitchAlsup - Fri, 15 Dec 2023 23:02 UTC

Scott Lurndal wrote:

> mitchalsup@aol.com (MitchAlsup) writes:
>>BGB wrote:

>>> For FPGA's over $1k, almost makes more sense to ignore that they exist
>>> (also this appears to be around the cutoff point for the free version of
>>> Vivado as well; but one would have thought Xilinx would have already
>>> gotten their money by someone having bought the FPGA?...).

> For anyone serious, an verif engineer can cost $500-1000/day. The FPGA
> cost is in the noise.

> For a hobby? Well...

>>> If the compiler is kept smaller, it is faster to recompile from source.
>>
>>In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>and string) and did a pretty good job of spitting out high performance
>>code; on a machine with a 150ns cycle time.

> As did our COBOL compiler (which ran in 50KB). But in both cases,
> the languages were far simpler and much easier to generate efficient
> code than languages like Modula, Pascal, C, et alia.

>>> Though, within moderate limits, 1M lines would basically be enough to fit:
>>> A basic kernel;
>>> (this excludes the Linux kernel, which is well over the size limit).
>>
>>If there were an efficient way to run the device driver sack in user-mode
>>without privilege and only the MMI/O pages this driver can touch mapped
>>into his VAS. Poof none of the driver stack is in the kernel. --IF--

> That's actually quite common and one of the raison d'etre of the
> PCI Express SR-IOV feature. When you can present a virtual
> function to the user directly (mapping the MMIO region into
> the user mode virtual address space) the app had direct access
> to the hardware. Interrupts are the only tricky part, and
> the kernel virtio subsystem, which interfaces with the user
> application via shared memory provides interrupt handling
> to the application.

> An I/OMMU provides memory protection for DMA operations initiated
> by the virtual function ensuring it only accesses the application
> virtual address space.

Why should the device be able to access user VAS outside of the buffer
the user provided, OH so long ago ??

Re: Whither the Mill?

<ulimvh$23jf2$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=35774&group=comp.arch#35774
Path: i2pn2.org!i2pn.org!usenet.goja.nl.eu.org!2.eu.feeder.erje.net!feeder.erje.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: bohannonindustriesllc@gmail.com (BGB-Alt)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Fri, 15 Dec 2023 17:20:15 -0600
Organization: A noiseless patient Spider
Lines: 244
Message-ID: <ulimvh$23jf2$1@dont-email.me>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 15 Dec 2023 23:20:18 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="a9d94dbe3cd6ed31c6e55114dd5be74f";
logging-data="2215394"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX198tjhF7ceo8T6b9aaL+fFFIPHURZE51ik="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:0VfU6TGiu0mcllbOuLOEbxGmlvM=
In-Reply-To: <a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
Content-Language: en-US
 by: BGB-Alt - Fri, 15 Dec 2023 23:20 UTC

On 12/15/2023 2:59 PM, MitchAlsup wrote:
> BGB wrote:
>
>> On 12/15/2023 11:48 AM, George Neuner wrote:
>>> On Wed, 13 Dec 2023 08:25:39 -0800, Stephen Fuld
>>> <sfuld@alumni.cmu.edu.invalid> wrote:
>>>
>>>> When we last heard from the merry band of Millers, they were looking
>>>> for
>>>> substantial funding from a VC or similar.  I suppose that if they had
>>>> gotten it, we would have heard, so I guess they haven't.
>>>>
>>>> But I think there are things they could do to move forward even without
>>>> a large investment.  For example, they could develop an FPGA based
>>>> system, even if it required multiple FPGAs on a custom circuit board
>>>> for
>>>> not huge amounts of money.  Whether this is worthwhile, I cannot say.
>>>>
>>>> Anyway, has all development stopped?  Or is their "sweat equity" model
>>>> still going on?
>>>>
>>>> Inquiring minds want to know.
>>>
>>> There was a post, ostensibly from Ivan, in their web forum just a few
>>> days ago. No news though - just an acknowledgement of another user's
>>> post.
>>>
>>>
>>> Last I heard, the next (current?) round of financing was - at least in
>>> part - to be used for FPGA "proof of concept" implementations.
>>>
>>> Problem is the Mill really is a SoC, and (to me at least) the design
>>> appears to be so complex that it would require a large, top-of-line
>>> (read "expensive") FPGA to fit all the functionality.
>>>
>
>> Yeah. the lower end isn't cheap, the upper end is absurd...
>
> Look into the cost of making a mask-set at 7nm or at 3nm. Then we can
> have a discussion on how high the number has to be to rate absurd.
>

This sort of thing is only really within reach of big companies...

The Spartan and Artix boards are within reach of hobbyists.
Kintex is, sorta, if a person has a lot of money to burn on it.

>> For FPGA's over $1k, almost makes more sense to ignore that they exist
>> (also this appears to be around the cutoff point for the free version
>> of Vivado as well; but one would have thought Xilinx would have
>> already gotten their money by someone having bought the FPGA?...).
>
>
>>> Then there is their idea that everything - from VHDL to software build
>>> toolchain to system software - be automatically generated from a
>>> simple functional specification.  Getting THAT right is likely proving
>>> far more difficult than simply implementing a fixed design in an FPGA.
>>>
>
>> Yeah.
>
>> Long ago, I watched another project (FoNC, led by Alan Kay) that was
>> also trying to go this route. I think the idea was that they wanted to
>> try to find a way to describe the entire software stack (from OS to
>> applications) in under 20k lines.
>
> Was the language of choice APL-like ??
>

Alan Kay was known for Smalltalk, and the languages they were using had
a Smalltalk-like syntax, IIRC.

I never really got much into Smalltalk though as it tended to be
difficult to make sense of.

But, I guess, they didn't achieve the goals of either keeping it under
the size limit, or of making something usable.

>> Practically, it seemed to mostly end up going nowhere best I can tell,
>> a lot of "design", nothing that someone could actually use.
>
>
>
>> Though, if one sets the limits a little higher, there is a lot one can
>> do:
>> One can at least, surely, make a usable compiler tool chain in under 1
>> million lines of code (at present, BGBCC weighs in at around 250 kLOC,
>> could be smaller; but, fitting a "basically functional" C compiler
>> into 30k lines, or around the size of the Doom engine, seems a little
>> harder).
>
>> Though, an intermediate option, would be trying to pull off a "semi
>> decent" compiler in under 100K lines.
>
>
>
>> If the compiler is kept smaller, it is faster to recompile from source.
>
> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
> 10,000 lines of code per second for an IBM-like minicomputer (less
> decimal and string) and did a pretty good job of spitting out high
> performance
> code; on a machine with a 150ns cycle time.
>
> We now have compilers struggling to achieve 10,000 lines per second per CPU
> with machines of 0.2ns cycle time -- 75× faster {times the number of CPUs
> thrown at the problem.}
>

If the compiler is 250k lines of C, it can still compile in a few
seconds on a modern PC.

If it is several million lines with a bunch of C++ thrown in (or
entirely in C++), then it takes a bit longer.

Recompiling LLVM and Clang is a bit much even with a fairly beefy PC.

>> Also, it would be nice to have a basically usable OS and core software
>> stack in under 1M lines.
>
> There is no salable market for an OS that sheds featured for compactness.
>

Could be easier to port to new targets, less RAM and space needed.
If the footprint is small enough to fit on a moderately cheap SPI Flash,
one can use a moderately cheap SPI Flash.

Though, for end-user use, one is probably going to need things like a
web browser and similar, and a "small but actually useful" web browser
probably isn't going to happen (IOW: people aren't going to use
something that can't do much more than a basic subset of static HTML).

>> Say, by not trying to be everything to everyone, and limiting how much
>> is allowed in the core OS (or is allowed within the build process for
>> the core OS).
>
>> Though, within moderate limits, 1M lines would basically be enough to
>> fit:
>>    A basic kernel;
>>      (this excludes the Linux kernel, which is well over the size limit).
>
> If there were an efficient way to run the device driver sack in user-mode
> without privilege and only the MMI/O pages this driver can touch mapped
> into his VAS. Poof none of the driver stack is in the kernel.  --IF--
>

Yeah, or "superusermode" drivers (in my scheme), where the drivers
aren't technically in the kernel, but still have access to hardware
MMIO and similar.

Though, absent some design changes, superusermode can easily bypass my
existing memory protection scheme if it so chooses. I would need to come
up with a way to give actual usermode tasks selective access to MMIO in
order to have any hope of protecting the OS from malicious drivers.

Though, if it needs to run on x86 or ARM, this is more of a problem, and
there is likely little practical alternative other than:
  Run drivers in bare kernel space;
  Run drivers in logical processes, with a bunch of extra overhead.

>>    A (moderate sized) C compiler;
>>      (but not GCC, which is also well over this size limit).
>
> In 1990 C was a small language, In 2023 that statement is no longer true.
> In 1990 the C compiler had 2 or 3 passes, in 2023 the LLVM compile has
> <what> 35 passes (some of them duplicates as one pass converts into some-
> thing a future pass will convert into something some other pass can
> optimize.)
> In 1990 your C compiler ran natively on your machine.
> In 2023 your LLVM compiler compiles 6+ front end languages and compiles
> to 20+ target ISAs and has to produce good code on all of them.
>

C proper hasn't changed *that* much.
C++ kinda wrecks this.

>>    A shell+utils comparable to BusyBox;
>
> Until someone prevents someone else from writing new shells, filters,
> and utilities, there is no way to moderate the growth in Shell+utils.
>

Yeah...

If you want something like Bash + GNU CoreUtils, it is going to be a lot
bigger than something along the lines of Ash + BusyBox.

I was considering possibly reworking how the shell works in my case;
currently the shell is in the kernel (though it now splits off into
separate tasks for each shell instance), but a design more akin to
BusyBox could make more sense.

But I am not entirely a fan of the GPL (which BusyBox uses), and while
ToyBox has a better license, I am admittedly less of a fan of the main
author (in past interactions he acted like a condescending jerk; this
isn't really a win for me even if the design and license seem good in
other areas).

>>    Various core OS libraries and similar, etc.
>
>> For this, will assume an at least nominally POSIX like environment.
>
>
>> Programs that run on the OS would not be counted in the line-count
>> budget.
>
>> How to deal with multi-platform portability would be more of an open
>> question, as this sort of thing tends to be a big source of code
>> expansion (or, for an OS kernel, the matter of hardware drivers, ...).
>
>> But, as can be noted, pretty much any project that gains mainstream
>> popularity seems to spiral out of control regarding code-size.
>
> With 20TB disk drives, 32 GB main memory sizes, Fiber internet;
> what is the reason for worrying about something you can do almost
> nothing about.
>


[Remainder of the article is truncated in this archive view.]
Re: Whither the Mill?

<ZP5fN.58208$83n7.3029@fx18.iad>

https://news.novabbs.org/devel/article-flat.php?id=35775&group=comp.arch#35775
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!news.nobody.at!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx18.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Whither the Mill?
Newsgroups: comp.arch
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <2695abc72966c220809e5c6690a8edf6@news.novabbs.com>
Lines: 57
Message-ID: <ZP5fN.58208$83n7.3029@fx18.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Sat, 16 Dec 2023 00:04:09 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Sat, 16 Dec 2023 00:04:09 GMT
X-Received-Bytes: 3289
 by: Scott Lurndal - Sat, 16 Dec 2023 00:04 UTC

mitchalsup@aol.com (MitchAlsup) writes:
>Scott Lurndal wrote:
>
>> mitchalsup@aol.com (MitchAlsup) writes:
>>>BGB wrote:
>
>>>> For FPGA's over $1k, almost makes more sense to ignore that they exist
>>>> (also this appears to be around the cutoff point for the free version of
>>>> Vivado as well; but one would have thought Xilinx would have already
>>>> gotten their money by someone having bought the FPGA?...).
>
>> For anyone serious, an verif engineer can cost $500-1000/day. The FPGA
>> cost is in the noise.
>
>> For a hobby? Well...
>
>
>>>> If the compiler is kept smaller, it is faster to recompile from source.
>>>
>>>In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>>10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>>and string) and did a pretty good job of spitting out high performance
>>>code; on a machine with a 150ns cycle time.
>
>> As did our COBOL compiler (which ran in 50KB). But in both cases,
>> the languages were far simpler and much easier to generate efficient
>> code than languages like Modula, Pascal, C, et alia.
>
>>>> Though, within moderate limits, 1M lines would basically be enough to fit:
>>>> A basic kernel;
>>>> (this excludes the Linux kernel, which is well over the size limit).
>>>
>>>If there were an efficient way to run the device driver sack in user-mode
>>>without privilege and only the MMI/O pages this driver can touch mapped
>>>into his VAS. Poof none of the driver stack is in the kernel. --IF--
>
>> That's actually quite common and one of the raison d'etre of the
>> PCI Express SR-IOV feature. When you can present a virtual
>> function to the user directly (mapping the MMIO region into
>> the user mode virtual address space) the app had direct access
>> to the hardware. Interrupts are the only tricky part, and
>> the kernel virtio subsystem, which interfaces with the user
>> application via shared memory provides interrupt handling
>> to the application.
>
>> An I/OMMU provides memory protection for DMA operations initiated
>> by the virtual function ensuring it only accesses the application
>> virtual address space.
>
>Why should device be able to access user VaS outside of the buffer the
>user provided, OH so long ago ??

Because the device wants to do DMA directly into or from the user's
virtual address space. Bulk transfer, not MMIO accesses.

Think of a network controller fetching packets from userspace.
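
To make that concrete, here is a purely illustrative sketch (a made-up
descriptor layout, not any real NIC's): the application fills in
descriptors whose buffer addresses point into its own address space,
and the IOMMU is what confines the device's DMA to those mappings:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define RING_SIZE 8
  #define BUF_SIZE  2048

  struct tx_desc {            /* one descriptor, shared with the device */
      uint64_t buf_addr;      /* IOVA of the packet buffer              */
      uint32_t len;           /* packet length in bytes                 */
      uint32_t ready;         /* 1 = device may fetch this packet       */
  };

  static struct tx_desc ring[RING_SIZE];
  static uint8_t        bufs[RING_SIZE][BUF_SIZE];

  int main(void)
  {
      unsigned tail = 0;

      /* App side: put a packet into one of its own buffers... */
      const char payload[] = "hello from user space";
      memcpy(bufs[tail], payload, sizeof payload);

      /* ...and publish it; a real driver would translate the buffer
         address to an IOVA and ring a doorbell register afterwards. */
      ring[tail].buf_addr = (uint64_t)(uintptr_t)bufs[tail];
      ring[tail].len      = sizeof payload;
      ring[tail].ready    = 1;

      printf("descriptor %u -> addr 0x%llx, len %u\n", tail,
             (unsigned long long)ring[tail].buf_addr,
             (unsigned)ring[tail].len);
      return 0;
  }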

Re: Whither the Mill?

<ku51hoFaf95U1@mid.individual.net>

https://news.novabbs.org/devel/article-flat.php?id=35780&group=comp.arch#35780
Path: i2pn2.org!i2pn.org!nntp.comgw.net!fu-berlin.de!uni-berlin.de!individual.net!not-for-mail
From: niklas.holsti@tidorum.invalid (Niklas Holsti)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 09:22:32 +0200
Organization: Tidorum Ltd
Lines: 17
Message-ID: <ku51hoFaf95U1@mid.individual.net>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
<JA4fN.38899$xHn7.23180@fx14.iad>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Trace: individual.net QeWO/H7Bkof25xnRhD1ZWgqnWbLI1WYDUIyt/YbyMqxi2wGZ5i
Cancel-Lock: sha1:MfV25mYwV64HC1yZ4/EehFkYANk= sha256:DoQyXCHjwHNEc0fjxmGZxVzA0lmxoBQBy3Do9ZQ1Pyo=
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <JA4fN.38899$xHn7.23180@fx14.iad>
 by: Niklas Holsti - Sat, 16 Dec 2023 07:22 UTC

On 2023-12-16 0:39, Scott Lurndal wrote:
> mitchalsup@aol.com (MitchAlsup) writes:

[snip]

>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>> 10,000 lines of code per second for an IBM-like minicomputer (less decimal
>> and string) and did a pretty good job of spitting out high performance
>> code; on a machine with a 150ns cycle time.
>
> As did our COBOL compiler (which ran in 50KB).

Are you both sure that those numbers are really lines per *second*? They
seem improbably high, and compilation speeds in those years used to be
stated in lines per *minute*.

Re: Whither the Mill?

<2023Dec16.131422@mips.complang.tuwien.ac.at>

https://news.novabbs.org/devel/article-flat.php?id=35781&group=comp.arch#35781
Path: i2pn2.org!i2pn.org!news.samoylyk.net!news.gegeweb.eu!gegeweb.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: anton@mips.complang.tuwien.ac.at (Anton Ertl)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 12:14:22 GMT
Organization: Institut fuer Computersprachen, Technische Universitaet Wien
Lines: 26
Message-ID: <2023Dec16.131422@mips.complang.tuwien.ac.at>
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <ku51hoFaf95U1@mid.individual.net>
Injection-Info: dont-email.me; posting-host="93a45da3cc58c9ae8f2434f1f769d1a4";
logging-data="2529327"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18HW8iY7juNKwjasJ+GPfnW"
Cancel-Lock: sha1:gHlQ/ZcfQNGwebbWJiQmvjOVURw=
X-newsreader: xrn 10.11
 by: Anton Ertl - Sat, 16 Dec 2023 12:14 UTC

Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>On 2023-12-16 0:39, Scott Lurndal wrote:
>> mitchalsup@aol.com (MitchAlsup) writes:
>
> [snip]
>
>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>> 10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>> and string) and did a pretty good job of spitting out high performance
>>> code; on a machine with a 150ns cycle time.
>>
>> As did our COBOL compiler (which ran in 50KB).
>
>
>Are you both sure that those numbers are really lines per *second*? They
>seem improbably high, and compilation speeds in those years used to be
>stated in lines per *minute*.

Especially given that 10K lines/s is probably around 500 KB/s which has
to be read from disk, and probably a similar amount that has to be
written to disk. What were the I/O throughputs available at the time?

- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>

Re: Whither the Mill?

<ulk59n$ur3h$2@newsreader4.netcologne.de>

https://news.novabbs.org/devel/article-flat.php?id=35783&group=comp.arch#35783
Path: i2pn2.org!i2pn.org!news.nntp4.net!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!.POSTED.2001-4dd7-dd23-0-3405-29ed-c929-c26d.ipv6dyn.netcologne.de!not-for-mail
From: tkoenig@netcologne.de (Thomas Koenig)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 12:30:47 -0000 (UTC)
Organization: news.netcologne.de
Distribution: world
Message-ID: <ulk59n$ur3h$2@newsreader4.netcologne.de>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
<JA4fN.38899$xHn7.23180@fx14.iad> <ku51hoFaf95U1@mid.individual.net>
<2023Dec16.131422@mips.complang.tuwien.ac.at>
Injection-Date: Sat, 16 Dec 2023 12:30:47 -0000 (UTC)
Injection-Info: newsreader4.netcologne.de; posting-host="2001-4dd7-dd23-0-3405-29ed-c929-c26d.ipv6dyn.netcologne.de:2001:4dd7:dd23:0:3405:29ed:c929:c26d";
logging-data="1010801"; mail-complaints-to="abuse@netcologne.de"
User-Agent: slrn/1.0.3 (Linux)
 by: Thomas Koenig - Sat, 16 Dec 2023 12:30 UTC

Anton Ertl <anton@mips.complang.tuwien.ac.at> schrieb:
> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>>On 2023-12-16 0:39, Scott Lurndal wrote:
>>> mitchalsup@aol.com (MitchAlsup) writes:
>>
>> [snip]
>>
>>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>>> 10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>>> and string) and did a pretty good job of spitting out high performance
>>>> code; on a machine with a 150ns cycle time.
>>>
>>> As did our COBOL compiler (which ran in 50KB).
>>
>>
>>Are you both sure that those numbers are really lines per *second*? They
>>seem improbably high, and compilation speeds in those years used to be
>>stated in lines per *minute*.
>
> Especially given that 10Klines/s is probably around 500KB/s which has
> to be read from disk and probably a similar amount that has to be
> written to disk. What were the I/O throughputs available at the time?

It depends a bit on how the Fortran and Cobol statements were stored.
If they were stored in punched-card format, 80 characters per line,
then it would be 800,000 characters per second to read. Object code,
probably much less, but the total could still come to around
1 MB/s.

The IBM 3350 (introduced in 1975) is probably fairly representative
of the high end of that era; it had a data transfer speed of 1198
kB/second and a seek time of 25 milliseconds.

So, 10000 lines/s would almost certainly have been I/O bound at the
time.
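
Putting the same numbers into a trivial snippet (plain C; the 80-column
card images and the 1198 kB/s transfer rate are the figures above, and
seek time is ignored, so reality would be worse):

  #include <stdio.h>

  int main(void)
  {
      double lines_per_s    = 10000.0;   /* claimed compile rate       */
      double bytes_per_line = 80.0;      /* punched-card image         */
      double disk_rate      = 1198.0e3;  /* IBM 3350, bytes per second */

      double src_rate = lines_per_s * bytes_per_line;
      printf("source read rate: %.0f bytes/s\n", src_rate);
      printf("fraction of one 3350: %.0f%% "
             "(before object-code output and seeks)\n",
             100.0 * src_rate / disk_rate);
      return 0;
  }

So just reading the source would already eat about two-thirds of the
drive's streaming bandwidth, before writing anything out.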

Re: Whither the Mill?

<b6jfN.22728$vFZa.8216@fx13.iad>

https://news.novabbs.org/devel/article-flat.php?id=35787&group=comp.arch#35787
Path: i2pn2.org!i2pn.org!news.nntp4.net!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!peer02.ams1!peer.ams1.xlned.com!news.xlned.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx13.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Whither the Mill?
Newsgroups: comp.arch
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <ku51hoFaf95U1@mid.individual.net>
Lines: 22
Message-ID: <b6jfN.22728$vFZa.8216@fx13.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Sat, 16 Dec 2023 15:11:03 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Sat, 16 Dec 2023 15:11:03 GMT
X-Received-Bytes: 1692
 by: Scott Lurndal - Sat, 16 Dec 2023 15:11 UTC

Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>On 2023-12-16 0:39, Scott Lurndal wrote:
>> mitchalsup@aol.com (MitchAlsup) writes:
>
> [snip]
>
>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>> 10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>> and string) and did a pretty good job of spitting out high performance
>>> code; on a machine with a 150ns cycle time.
>>
>> As did our COBOL compiler (which ran in 50KB).
>
>
>Are you both sure that those numbers are really lines per *second*? They
>seem improbably high, and compilation speeds in those years used to be
>stated in lines per *minute*.

Yes, lines per minute is the proper metric. Note that for many
years, the compilation rate was bounded by the speed of the card
reader (300 to 600 cards per minute).

Re: Whither the Mill?

<ku6760FivvvU1@mid.individual.net>

https://news.novabbs.org/devel/article-flat.php?id=35794&group=comp.arch#35794
Path: i2pn2.org!i2pn.org!news.1d4.us!usenet.goja.nl.eu.org!3.eu.feeder.erje.net!feeder.erje.net!fu-berlin.de!uni-berlin.de!individual.net!not-for-mail
From: findlaybill@blueyonder.co.uk (moi)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 18:04:48 +0000
Lines: 27
Message-ID: <ku6760FivvvU1@mid.individual.net>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
<JA4fN.38899$xHn7.23180@fx14.iad> <ku51hoFaf95U1@mid.individual.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Trace: individual.net SRU9rh2np4ggx7DKx4bFgg4h06HISJ8skwkzTDDcnsFilxZSq9
Cancel-Lock: sha1:1Vka6dLR/UWECUsGf8xI3f80C3Y= sha256:rUtHzhz6FiqZSGQzlA5T3lprVKmQ3GsWAdhu3aBUGEk=
User-Agent: Mozilla Thunderbird
Content-Language: en-GB
In-Reply-To: <ku51hoFaf95U1@mid.individual.net>
 by: moi - Sat, 16 Dec 2023 18:04 UTC

On 16/12/2023 07:22, Niklas Holsti wrote:
> On 2023-12-16 0:39, Scott Lurndal wrote:
>> mitchalsup@aol.com (MitchAlsup) writes:
>
>    [snip]
>
>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>> 10,000 lines of code per second for an IBM-like minicomputer (less
>>> decimal
>>> and string) and did a pretty good job of spitting out high performance
>>> code; on a machine with a 150ns cycle time.
>>
>> As did our COBOL compiler (which ran in 50KB).
>
>
> Are you both sure that those numbers are really lines per *second*? They
> seem improbably high, and compilation speeds in those years used to be
> stated in lines per *minute*.
>

Almost certainly per minute.
I worked on a compiler in 1975 that ran on the most powerful ICL 1900.
It achieved 20K cards per minute and was considered to be very fast.

--
Bill F.

Re: Whither the Mill?

<ulkr8e$2gtuu$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=35795&group=comp.arch#35795

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!feeder8.news.weretis.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: cr88192@gmail.com (BGB)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 12:45:30 -0600
Organization: A noiseless patient Spider
Lines: 51
Message-ID: <ulkr8e$2gtuu$1@dont-email.me>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
<JA4fN.38899$xHn7.23180@fx14.iad> <ku51hoFaf95U1@mid.individual.net>
<ku6760FivvvU1@mid.individual.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 16 Dec 2023 18:45:35 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="771391b6fd48b3a01c2b79763989ca51";
logging-data="2652126"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19APvuQ6wfj6YAh6a1LAuJr"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:07ZFX3GaSeb2O7z6V8L+E6zhsD0=
Content-Language: en-US
In-Reply-To: <ku6760FivvvU1@mid.individual.net>
 by: BGB - Sat, 16 Dec 2023 18:45 UTC

On 12/16/2023 12:04 PM, moi wrote:
> On 16/12/2023 07:22, Niklas Holsti wrote:
>> On 2023-12-16 0:39, Scott Lurndal wrote:
>>> mitchalsup@aol.com (MitchAlsup) writes:
>>
>>     [snip]
>>
>>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>>> 10,000 lines of code per second for an IBM-like minicomputer (less
>>>> decimal
>>>> and string) and did a pretty good job of spitting out high performance
>>>> code; on a machine with a 150ns cycle time.
>>>
>>> As did our COBOL compiler (which ran in 50KB).
>>
>>
>> Are you both sure that those numbers are really lines per *second*?
>> They seem improbably high, and compilation speeds in those years used
>> to be stated in lines per *minute*.
>>
>
> Almost certainly per minute.
> I worked on a compiler in 1975 that ran on the most powerful ICL 1900.
> It achieved 20K cards per minute and was considered to be very fast.
>

Lines per minute seems to make sense.

Modern PCs are orders of magnitude faster, but still don't have
"instant" compile times by any means.

Compilation could be faster, but that would likely require languages
other than C or (especially) C++.

For both languages, one has the overhead of needing to read in a whole
lot of header code (often expanding out to hundreds of kB, or sometimes
a few MB) for only 5-20 kB of actual source code.

C++ then ruins compiler speed with things like templates.

Though, final code generation often does take some extra time.

For example, in BGBCC a lot of time tends to be spent in the
"WEXifier", which mostly tries to shuffle instructions around and
bundle them for parallel execution (with much of this time going into
the code that figures out whether instructions can swap places or be
run in parallel, and into comparing their relative costs).
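
The core of that check is basically a register-dependence test; roughly
something like this (a simplified sketch, not the actual BGBCC code --
register read/write sets as bitmasks, with memory ops handled
conservatively):

  #include <stdint.h>

  typedef struct {
      uint64_t reads;        /* bitmask of registers read    */
      uint64_t writes;       /* bitmask of registers written */
      int      touches_mem;  /* any load or store            */
  } op_t;

  /* Nonzero if a and b have no RAW/WAR/WAW hazard between them, so
     they may swap places or be bundled for parallel issue (subject
     to the target actually having a free lane for each). */
  static int ops_independent(const op_t *a, const op_t *b)
  {
      if (a->writes & (b->reads | b->writes)) return 0;  /* RAW / WAW */
      if (b->writes & a->reads)               return 0;  /* WAR       */
      if (a->touches_mem && b->touches_mem)   return 0;  /* play safe */
      return 1;
  }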

....

Re: Whither the Mill?

<ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com>

https://news.novabbs.org/devel/article-flat.php?id=35796&group=comp.arch#35796

Newsgroups: comp.arch
Path: i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 18:57:36 +0000
Organization: novaBBS
Message-ID: <ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com>
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <2695abc72966c220809e5c6690a8edf6@news.novabbs.com> <ZP5fN.58208$83n7.3029@fx18.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
logging-data="136835"; mail-complaints-to="usenet@i2pn2.org";
posting-account="t+lO0yBNO1zGxasPvGSZV1BRu71QKx+JE37DnW+83jQ";
User-Agent: Rocksolid Light
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-13) on novalink.us
X-Rslight-Site: $2y$10$XPtpYqkSYvMtWBsO/9dCquLkPVbS8vGxgEVQhF3NSGVmjjfa77eci
X-Rslight-Posting-User: 7e9c45bcd6d4757c5904fbe9a694742e6f8aa949
 by: MitchAlsup - Sat, 16 Dec 2023 18:57 UTC

Scott Lurndal wrote:

> mitchalsup@aol.com (MitchAlsup) writes:
>>Scott Lurndal wrote:
>>
>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>>BGB wrote:
>>
>>>>> For FPGA's over $1k, almost makes more sense to ignore that they exist
>>>>> (also this appears to be around the cutoff point for the free version of
>>>>> Vivado as well; but one would have thought Xilinx would have already
>>>>> gotten their money by someone having bought the FPGA?...).
>>
>>> For anyone serious, an verif engineer can cost $500-1000/day. The FPGA
>>> cost is in the noise.
>>
>>> For a hobby? Well...
>>
>>
>>>>> If the compiler is kept smaller, it is faster to recompile from source.
>>>>
>>>>In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>>>10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>>>and string) and did a pretty good job of spitting out high performance
>>>>code; on a machine with a 150ns cycle time.
>>
>>> As did our COBOL compiler (which ran in 50KB). But in both cases,
>>> the languages were far simpler and much easier to generate efficient
>>> code than languages like Modula, Pascal, C, et alia.
>>
>>>>> Though, within moderate limits, 1M lines would basically be enough to fit:
>>>>> A basic kernel;
>>>>> (this excludes the Linux kernel, which is well over the size limit).
>>>>
>>>>If there were an efficient way to run the device driver sack in user-mode
>>>>without privilege and only the MMI/O pages this driver can touch mapped
>>>>into his VAS. Poof none of the driver stack is in the kernel. --IF--
>>
>>> That's actually quite common and one of the raison d'etre of the
>>> PCI Express SR-IOV feature. When you can present a virtual
>>> function to the user directly (mapping the MMIO region into
>>> the user mode virtual address space) the app had direct access
>>> to the hardware. Interrupts are the only tricky part, and
>>> the kernel virtio subsystem, which interfaces with the user
>>> application via shared memory provides interrupt handling
>>> to the application.
>>
>>> An I/OMMU provides memory protection for DMA operations initiated
>>> by the virtual function ensuring it only accesses the application
>>> virtual address space.
>>
>>Why should device be able to access user VaS outside of the buffer the
>>user provided, OH so long ago ??

> Because the device wants to do DMA directly into or from the users
> virtual address space. Bulk transfer, not MMIO accesses.

OK, I will ask the question in the contrapositive way::
If the user asks the device to read into a buffer, why does the device
get to see everything in the user's space along with that buffer ?

The way you write it, you are assuming the device can write into the
user's code space when he asks for a read from one of his buffers !?!

You _could_ give the device translations to anything and everything
in user space, but this seems excessive when the user only wants
the device to read/write a small area inside his VaS.

OS code already has to manipulate PTE entries or MMU tables so that
the device can write read-only and execute-only pages, along with
removing write permission on a page with data inbound from a device.

> Think network controller fetching packets from userspace.

Re: Whither the Mill?

<LQmfN.5865$zqTf.4843@fx35.iad>

https://news.novabbs.org/devel/article-flat.php?id=35798&group=comp.arch#35798

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!nntp.comgw.net!peer02.ams4!peer.am4.highwinds-media.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx35.iad.POSTED!not-for-mail
From: ThatWouldBeTelling@thevillage.com (EricP)
User-Agent: Thunderbird 2.0.0.24 (Windows/20100228)
MIME-Version: 1.0
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <2695abc72966c220809e5c6690a8edf6@news.novabbs.com> <ZP5fN.58208$83n7.3029@fx18.iad> <ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com>
In-Reply-To: <ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 88
Message-ID: <LQmfN.5865$zqTf.4843@fx35.iad>
X-Complaints-To: abuse@UsenetServer.com
NNTP-Posting-Date: Sat, 16 Dec 2023 19:25:31 UTC
Date: Sat, 16 Dec 2023 14:25:19 -0500
X-Received-Bytes: 4781
 by: EricP - Sat, 16 Dec 2023 19:25 UTC

MitchAlsup wrote:
> Scott Lurndal wrote:
>
>> mitchalsup@aol.com (MitchAlsup) writes:
>>> Scott Lurndal wrote:
>>>
>>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>>> BGB wrote:
>>>
>>>>>> For FPGA's over $1k, almost makes more sense to ignore that they
>>>>>> exist (also this appears to be around the cutoff point for the
>>>>>> free version of Vivado as well; but one would have thought Xilinx
>>>>>> would have already gotten their money by someone having bought the
>>>>>> FPGA?...).
>>>
>>>> For anyone serious, an verif engineer can cost $500-1000/day. The
>>>> FPGA
>>>> cost is in the noise.
>>>
>>>> For a hobby? Well...
>>>
>>>
>>>>>> If the compiler is kept smaller, it is faster to recompile from
>>>>>> source.
>>>>>
>>>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled
>>>>> at 10,000 lines of code per second for an IBM-like minicomputer
>>>>> (less decimal and string) and did a pretty good job of spitting out
>>>>> high performance
>>>>> code; on a machine with a 150ns cycle time.
>>>
>>>> As did our COBOL compiler (which ran in 50KB). But in both cases,
>>>> the languages were far simpler and much easier to generate efficient
>>>> code than languages like Modula, Pascal, C, et alia.
>>>
>>>>>> Though, within moderate limits, 1M lines would basically be enough
>>>>>> to fit:
>>>>>> A basic kernel;
>>>>>> (this excludes the Linux kernel, which is well over the size
>>>>>> limit).
>>>>>
>>>>> If there were an efficient way to run the device driver sack in
>>>>> user-mode
>>>>> without privilege and only the MMI/O pages this driver can touch
>>>>> mapped
>>>>> into his VAS. Poof none of the driver stack is in the kernel. --IF--
>>>
>>>> That's actually quite common and one of the raison d'etre of the
>>>> PCI Express SR-IOV feature. When you can present a virtual
>>>> function to the user directly (mapping the MMIO region into
>>>> the user mode virtual address space) the app had direct access
>>>> to the hardware. Interrupts are the only tricky part, and
>>>> the kernel virtio subsystem, which interfaces with the user
>>>> application via shared memory provides interrupt handling
>>>> to the application.
>>>
>>>> An I/OMMU provides memory protection for DMA operations initiated
>>>> by the virtual function ensuring it only accesses the application
>>>> virtual address space.
>>>
>>> Why should device be able to access user VaS outside of the buffer
>>> the user provided, OH so long ago ??
>
>
>> Because the device wants to do DMA directly into or from the users
>> virtual address space. Bulk transfer, not MMIO accesses.
>
> OK, I will ask the question in the contrapositive way::
> If the user ask device to read into a buffer, why does the device get
> to see everything of the user's space along with that buffer ?
>
> The way you write you are assuming the device can write into the
> user's code space when he ask for a read from one of his buffers !?!
>
> You _could_ give device translations to anything and everything
> in user space, but this seems excessive when the user only wants
> the device to read/write small area inside his VaS.
>
> OS code already has to manipulate PTE entries or MMU tables so
> the device can write read-only and execute-only pages along with
> removing write-permission on a page with data inbound from a device.

The OS can't remove RW access for a user-mode page while an
IO device is DMA-writing into it, if that's what you meant,
as the inbound DMA may be filling a smaller buffer within a larger page.
It is perfectly normal for a thread to continue to work on buffer
bytes adjacent to the ones currently involved in an async IO.
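
In POSIX terms the situation looks something like this (a minimal
sketch; the file name and sizes are made up):

  /* adjacent.c -- keep using bytes next to an in-flight async read */
  #include <aio.h>
  #include <errno.h>
  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      static char page[4096];               /* one page-sized buffer  */
      int fd = open("data.bin", O_RDONLY);  /* hypothetical input     */
      if (fd < 0) return 1;

      struct aiocb cb;
      memset(&cb, 0, sizeof cb);
      cb.aio_fildes = fd;
      cb.aio_buf    = page;                 /* read target: first 1 KiB */
      cb.aio_nbytes = 1024;
      cb.aio_offset = 0;
      if (aio_read(&cb) != 0) return 1;

      /* Perfectly legal: keep working in the same page, outside the
         1 KiB being filled, while the read is still in progress. */
      while (aio_error(&cb) == EINPROGRESS)
          memset(page + 2048, 0xAA, 1024);

      aio_return(&cb);
      close(fd);
      return 0;
  }

So the page has to stay writable for the whole duration of the transfer.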

Re: Whither the Mill?

<DRofN.41751$xHn7.34362@fx14.iad>

https://news.novabbs.org/devel/article-flat.php?id=35799&group=comp.arch#35799

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx14.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Whither the Mill?
Newsgroups: comp.arch
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <2695abc72966c220809e5c6690a8edf6@news.novabbs.com> <ZP5fN.58208$83n7.3029@fx18.iad> <ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com>
Lines: 68
Message-ID: <DRofN.41751$xHn7.34362@fx14.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Sat, 16 Dec 2023 21:42:59 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Sat, 16 Dec 2023 21:42:59 GMT
X-Received-Bytes: 3927
 by: Scott Lurndal - Sat, 16 Dec 2023 21:42 UTC

mitchalsup@aol.com (MitchAlsup) writes:
>Scott Lurndal wrote:
>
>> mitchalsup@aol.com (MitchAlsup) writes:
>>>Scott Lurndal wrote:
>>>
>>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>>>BGB wrote:
>>>
>>>>>> For FPGA's over $1k, almost makes more sense to ignore that they exist
>>>>>> (also this appears to be around the cutoff point for the free version of
>>>>>> Vivado as well; but one would have thought Xilinx would have already
>>>>>> gotten their money by someone having bought the FPGA?...).
>>>
>>>> For anyone serious, an verif engineer can cost $500-1000/day. The FPGA
>>>> cost is in the noise.
>>>
>>>> For a hobby? Well...
>>>
>>>
>>>>>> If the compiler is kept smaller, it is faster to recompile from source.
>>>>>
>>>>>In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>>>>10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>>>>and string) and did a pretty good job of spitting out high performance
>>>>>code; on a machine with a 150ns cycle time.
>>>
>>>> As did our COBOL compiler (which ran in 50KB). But in both cases,
>>>> the languages were far simpler and much easier to generate efficient
>>>> code than languages like Modula, Pascal, C, et alia.
>>>
>>>>>> Though, within moderate limits, 1M lines would basically be enough to fit:
>>>>>> A basic kernel;
>>>>>> (this excludes the Linux kernel, which is well over the size limit).
>>>>>
>>>>>If there were an efficient way to run the device driver sack in user-mode
>>>>>without privilege and only the MMI/O pages this driver can touch mapped
>>>>>into his VAS. Poof none of the driver stack is in the kernel. --IF--
>>>
>>>> That's actually quite common and one of the raison d'etre of the
>>>> PCI Express SR-IOV feature. When you can present a virtual
>>>> function to the user directly (mapping the MMIO region into
>>>> the user mode virtual address space) the app had direct access
>>>> to the hardware. Interrupts are the only tricky part, and
>>>> the kernel virtio subsystem, which interfaces with the user
>>>> application via shared memory provides interrupt handling
>>>> to the application.
>>>
>>>> An I/OMMU provides memory protection for DMA operations initiated
>>>> by the virtual function ensuring it only accesses the application
>>>> virtual address space.
>>>
>>>Why should device be able to access user VaS outside of the buffer the
>>>user provided, OH so long ago ??
>
>
>> Because the device wants to do DMA directly into or from the users
>> virtual address space. Bulk transfer, not MMIO accesses.
>
>OK, I will ask the question in the contrapositive way::
>If the user ask device to read into a buffer, why does the device get
>to see everything of the user's space along with that buffer ?

It doesn't, necessarily. The IOMMU translation table is a
proper subset of the user's virtual address space. The
application tells the kernel which portions of the address
space are valid DMA regions for the device to access.
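
On Linux, the VFIO type-1 IOMMU interface is one concrete form of this;
a minimal sketch, assuming a VFIO container fd that has already been
set up and attached to a device group (setup and error handling
omitted):

  /* vfio_map.c -- expose one buffer as a DMA window for the device */
  #include <linux/vfio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>

  int map_dma_window(int container_fd)
  {
      size_t len = 1 << 20;                 /* 1 MiB buffer */
      void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (buf == MAP_FAILED) return -1;

      struct vfio_iommu_type1_dma_map map;
      memset(&map, 0, sizeof map);
      map.argsz = sizeof map;
      map.vaddr = (unsigned long)buf;       /* process virtual address */
      map.iova  = 0x100000;                 /* address the device sees */
      map.size  = len;
      map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

      /* Only this [iova, iova+size) window becomes reachable by the
         device; the rest of the process address space stays invisible
         to it. */
      return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
  }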

Re: Whither the Mill?

<ull9he$2j2v6$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=35802&group=comp.arch#35802

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: bohannonindustriesllc@gmail.com (BGB-Alt)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 16:49:17 -0600
Organization: A noiseless patient Spider
Lines: 153
Message-ID: <ull9he$2j2v6$1@dont-email.me>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
<JA4fN.38899$xHn7.23180@fx14.iad>
<2695abc72966c220809e5c6690a8edf6@news.novabbs.com>
<ZP5fN.58208$83n7.3029@fx18.iad>
<ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com>
<LQmfN.5865$zqTf.4843@fx35.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 16 Dec 2023 22:49:18 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="38ce5fdfcd8e5d9f301039e7ee3900ea";
logging-data="2722790"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/4iy1Ounw0BSdovT4nq+L4Gz/c/zlpvmI="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:qqAD/hgrJVfIwThtQ80RBuBgzLE=
Content-Language: en-US
In-Reply-To: <LQmfN.5865$zqTf.4843@fx35.iad>
 by: BGB-Alt - Sat, 16 Dec 2023 22:49 UTC

On 12/16/2023 1:25 PM, EricP wrote:
> MitchAlsup wrote:
>> Scott Lurndal wrote:
>>
>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>> Scott Lurndal wrote:
>>>>
>>>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>>>> BGB wrote:
>>>>
>>>>>>> For FPGA's over $1k, almost makes more sense to ignore that they
>>>>>>> exist (also this appears to be around the cutoff point for the
>>>>>>> free version of Vivado as well; but one would have thought Xilinx
>>>>>>> would have already gotten their money by someone having bought
>>>>>>> the FPGA?...).
>>>>
>>>>> For anyone serious, an verif engineer can cost $500-1000/day.   The
>>>>> FPGA
>>>>> cost is in the noise.
>>>>
>>>>> For a hobby?  Well...
>>>>
>>>>
>>>>>>> If the compiler is kept smaller, it is faster to recompile from
>>>>>>> source.
>>>>>>
>>>>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled
>>>>>> at 10,000 lines of code per second for an IBM-like minicomputer
>>>>>> (less decimal and string) and did a pretty good job of spitting
>>>>>> out high performance
>>>>>> code; on a machine with a 150ns cycle time.
>>>>
>>>>> As did our COBOL compiler (which ran in 50KB).  But in both cases,
>>>>> the languages were far simpler and much easier to generate efficient
>>>>> code than languages like Modula, Pascal, C, et alia.
>>>>
>>>>>>> Though, within moderate limits, 1M lines would basically be
>>>>>>> enough to fit:
>>>>>>>    A basic kernel;
>>>>>>>      (this excludes the Linux kernel, which is well over the size
>>>>>>> limit).
>>>>>>
>>>>>> If there were an efficient way to run the device driver sack in
>>>>>> user-mode
>>>>>> without privilege and only the MMI/O pages this driver can touch
>>>>>> mapped
>>>>>> into his VAS. Poof none of the driver stack is in the kernel.  --IF--
>>>>
>>>>> That's actually quite common and one of the raison d'etre of the
>>>>> PCI Express SR-IOV feature.    When you can present a virtual
>>>>> function to the user directly (mapping the MMIO region into
>>>>> the user mode virtual address space) the app had direct access
>>>>> to the hardware.    Interrupts are the only tricky part, and
>>>>> the kernel virtio subsystem, which interfaces with the user
>>>>> application via shared memory provides interrupt handling
>>>>> to the application.
>>>>
>>>>> An I/OMMU provides memory protection for DMA operations initiated
>>>>> by the virtual function ensuring it only accesses the application
>>>>> virtual address space.
>>>>
>>>> Why should device be able to access user VaS outside of the buffer
>>>> the user provided, OH so long ago ??
>>
>>
>>> Because the device wants to do DMA directly into or from the users
>>> virtual address space.   Bulk transfer, not MMIO accesses.
>>
>> OK, I will ask the question in the contrapositive way::
>> If the user ask device to read into a buffer, why does the device get
>> to see everything of the user's space along with that buffer ?
>>
>> The way you write you are assuming the device can write into the
>> user's code space when he ask for a read from one of his buffers !?!
>>
>> You _could_ give device translations to anything and everything
>> in user space, but this seems excessive when the user only wants
>> the device to read/write small area inside his VaS.
>>
>> OS code already has to manipulate PTE entries or MMU tables so
>> the device can write read-only and execute-only pages along with
>> removing write-permission on a page with data inbound from a device.
>
> The OS can't remove the page RW access for a user mode page while an
> IO device is DMA writing the page, if that's what you meant,
> as the DMA-in may be writing to a smaller buffer within a larger page.
> It is perfectly normal for a thread to continue to work in buffer
> bytes adjacent to the one currently involved in an async IO.
>

One thing I don't get here is why there would be direct DMA between
userland and the device (at least for filesystem and similar).

Like, say, for a filesystem, it is presumably (see the sketch after this list):
read syscall from user to OS;
route this to the corresponding VFS driver;
Requests spanning multiple blocks being broken up into parts;
VFS driver checks the block-cache / buffer-cache;
If found, copy from cache into user-space;
If not found, send request to the underlying block device;
Wait for response (and/or reschedule task for later);
Copy result back into userland.
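
As a toy sketch of that path (illustrative names, not any particular
kernel's code):

  /* Toy read path: syscall -> VFS -> block cache -> device. */
  #include <stdio.h>
  #include <string.h>

  #define BLOCK_SIZE 64
  #define NUM_BLOCKS 8

  static char disk[NUM_BLOCKS][BLOCK_SIZE];   /* stands in for the device */
  static char cache[NUM_BLOCKS][BLOCK_SIZE];  /* block/buffer cache       */
  static int  cache_valid[NUM_BLOCKS];

  static void block_device_read(int blkno, void *dst)  /* request + wait */
  {
      memcpy(dst, disk[blkno], BLOCK_SIZE);
  }

  static void vfs_read_block(int blkno, char *user_buf)
  {
      if (!cache_valid[blkno]) {              /* miss: go to the device */
          block_device_read(blkno, cache[blkno]);
          cache_valid[blkno] = 1;
      }
      memcpy(user_buf, cache[blkno], BLOCK_SIZE);  /* copy to "userland" */
  }

  int main(void)
  {
      char buf[BLOCK_SIZE];
      strcpy(disk[3], "hello from block 3");
      vfs_read_block(3, buf);   /* miss: fills the cache from the device */
      vfs_read_block(3, buf);   /* hit: served straight from the cache   */
      printf("%s\n", buf);
      return 0;
  }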

Though, it may make sense that if the requested data isn't available
immediately, and there is some sort of DMA mechanism, the OS could block
the task and then resume it once the data becomes available. For polling
IO, it likely doesn't make much difference, as the CPU is basically
stuck in a busy loop either way until the IO finishes.

Though, could make sense for hardware accelerating pixel-copying
operations for a GUI.

For GUI, there would be multiple stages of copying (sketched below), say:
Copying from user buffer to window buffer;
Copying from window buffer to screen buffer;
Copying from screen buffer to VRAM.
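
Each of those stages is basically a strided pixel blit, something like
(a simplified sketch assuming 32-bit pixels and no format conversion):

  #include <stdint.h>
  #include <string.h>

  /* Copy a width x height rectangle between two pixel buffers whose
     row strides (in pixels) may differ. */
  static void blit_rect(uint32_t *dst, int dst_stride,
                        const uint32_t *src, int src_stride,
                        int width, int height)
  {
      for (int y = 0; y < height; y++)
          memcpy(dst + (size_t)y * dst_stride,
                 src + (size_t)y * src_stride,
                 (size_t)width * sizeof(uint32_t));
  }

Running loops like this several times per frame is where the copy time
goes.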

For video playback or GL, there may be an additional stage of copying
from GL's buffer to a user's buffer, then from the user's buffer to the
window buffer. Though, I am considering possibly adding a shortcut path
where GL and video codecs copy more directly into the window buffer
(avoiding the need to pass the frame data through the userland program).

It could also be possible to have GL render directly into the window
buffer, which would work if they have the same format/resolution and
the window buffer is physically mapped (say, for my current hardware
rasterizer module).

If running a program full-screen, it is possible to copy more directly
from the user buffer into VRAM, saving some time here.

Some time could be saved here if one had hardware support for these
sorts of "copy pixel buffers around and convert between formats" tasks,
but to be useful, this would need to be able to work with virtual
memory, which adds some complexity (it would either need to be CPU-like
and/or have a page-walker; neither is particularly cheap).

One could maybe offload the task to the rasterizer module, but that
would require adding a page-walker to the rasterizer... Though, trying
to deal with some scenarios (such as the final conversion/copy to VRAM)
would add a lot of extra complexity. For now, its
framebuffer/zbuffer/textures need to be at physically-mapped addresses
(also with 128-bit buffer alignment).

Though, a cheaper option could be to make use of the second CPU core and
schedule things like pixel-copy operations to it (maybe also things like
vertex transform and similar for OpenGL). Currently, even when enabled,
the second core hasn't seen a lot of use thus far in my case.

....

Re: Whither the Mill?

<ull9ua$vm0s$1@newsreader4.netcologne.de>

https://news.novabbs.org/devel/article-flat.php?id=35803&group=comp.arch#35803

Newsgroups: comp.arch
Path: i2pn2.org!i2pn.org!usenet.network!news.neodome.net!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!.POSTED.2001-4dd7-dd23-0-3405-29ed-c929-c26d.ipv6dyn.netcologne.de!not-for-mail
From: tkoenig@netcologne.de (Thomas Koenig)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 22:56:10 -0000 (UTC)
Organization: news.netcologne.de
Distribution: world
Message-ID: <ull9ua$vm0s$1@newsreader4.netcologne.de>
References: <ulclu3$3sglk$1@dont-email.me>
<gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me>
<a54ae908ce5af533e638e112833b35ea@news.novabbs.com>
<JA4fN.38899$xHn7.23180@fx14.iad> <ku51hoFaf95U1@mid.individual.net>
<ku6760FivvvU1@mid.individual.net> <ulkr8e$2gtuu$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 16 Dec 2023 22:56:10 -0000 (UTC)
Injection-Info: newsreader4.netcologne.de; posting-host="2001-4dd7-dd23-0-3405-29ed-c929-c26d.ipv6dyn.netcologne.de:2001:4dd7:dd23:0:3405:29ed:c929:c26d";
logging-data="1038364"; mail-complaints-to="abuse@netcologne.de"
User-Agent: slrn/1.0.3 (Linux)
 by: Thomas Koenig - Sat, 16 Dec 2023 22:56 UTC

BGB <cr88192@gmail.com> schrieb:
> On 12/16/2023 12:04 PM, moi wrote:
>> On 16/12/2023 07:22, Niklas Holsti wrote:
>>> On 2023-12-16 0:39, Scott Lurndal wrote:
>>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>
>>>     [snip]
>>>
>>>>> In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>>>> 10,000 lines of code per second for an IBM-like minicomputer (less
>>>>> decimal
>>>>> and string) and did a pretty good job of spitting out high performance
>>>>> code; on a machine with a 150ns cycle time.
>>>>
>>>> As did our COBOL compiler (which ran in 50KB).
>>>
>>>
>>> Are you both sure that those numbers are really lines per *second*?
>>> They seem improbably high, and compilation speeds in those years used
>>> to be stated in lines per *minute*.
>>>
>>
>> Almost certainly per minute.
>> I worked on a compiler in 1975 that ran on the most powerful ICL 1900.
>> It achieved 20K cards per minute and was considered to be very fast.
>>
>
> Lines per minute seems to make sense.
>
>
> Modern PC's are orders of magnitude faster, but still don't have
> "instant" compile times by any means.
>
> Could be faster though, but would likely need languages other than C or
> (especially) C++.

I assume you never worked with Turbo Pascal.

That was amazing. It compiled code so fast that it was never a
bother to wait for it, even on an 8088 IBM PC running at 4.77 MHz.
The first version I ever used, 3.0 (?), compiled from memory to
memory, so even slow I/O (to floppy disc, at the time) was not
an issue.

This was made possible by using a streamlined one-pass compiler. It
didn't do much optimization, but when the alternative was BASIC, the
generated code was still extremely fast by comparison.

There were a few drawbacks. The biggest one was that programming errors
tended to freeze the machine. Another (less important) one was that, if
you were one of the lucky people to have an 80x87 coprocessor, the
generated code did not check for overflow of the coprocessor stack.

Re: Whither the Mill?

<3fb0f80749d67cb528df1a60731dccee@news.novabbs.com>

https://news.novabbs.org/devel/article-flat.php?id=35804&group=comp.arch#35804

Newsgroups: comp.arch
Path: i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 22:58:54 +0000
Organization: novaBBS
Message-ID: <3fb0f80749d67cb528df1a60731dccee@news.novabbs.com>
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <2695abc72966c220809e5c6690a8edf6@news.novabbs.com> <ZP5fN.58208$83n7.3029@fx18.iad> <ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com> <DRofN.41751$xHn7.34362@fx14.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
logging-data="156306"; mail-complaints-to="usenet@i2pn2.org";
posting-account="t+lO0yBNO1zGxasPvGSZV1BRu71QKx+JE37DnW+83jQ";
User-Agent: Rocksolid Light
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-13) on novalink.us
X-Rslight-Posting-User: 7e9c45bcd6d4757c5904fbe9a694742e6f8aa949
X-Rslight-Site: $2y$10$manZqeiupjJbGFdLFIla2uhaLfrBwp24Y4RsEqX4Dp1T3P2ezcTo6
 by: MitchAlsup - Sat, 16 Dec 2023 22:58 UTC

Scott Lurndal wrote:

> mitchalsup@aol.com (MitchAlsup) writes:
>>Scott Lurndal wrote:
>>
>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>>Scott Lurndal wrote:
>>>>
>>>>> mitchalsup@aol.com (MitchAlsup) writes:
>>>>>>BGB wrote:
>>>>
>>>>>>> For FPGA's over $1k, almost makes more sense to ignore that they exist
>>>>>>> (also this appears to be around the cutoff point for the free version of
>>>>>>> Vivado as well; but one would have thought Xilinx would have already
>>>>>>> gotten their money by someone having bought the FPGA?...).
>>>>
>>>>> For anyone serious, an verif engineer can cost $500-1000/day. The FPGA
>>>>> cost is in the noise.
>>>>
>>>>> For a hobby? Well...
>>>>
>>>>
>>>>>>> If the compiler is kept smaller, it is faster to recompile from source.
>>>>>>
>>>>>>In 1979 I joined a company with a FORTRAN mostly-77- that compiled at
>>>>>>10,000 lines of code per second for an IBM-like minicomputer (less decimal
>>>>>>and string) and did a pretty good job of spitting out high performance
>>>>>>code; on a machine with a 150ns cycle time.
>>>>
>>>>> As did our COBOL compiler (which ran in 50KB). But in both cases,
>>>>> the languages were far simpler and much easier to generate efficient
>>>>> code than languages like Modula, Pascal, C, et alia.
>>>>
>>>>>>> Though, within moderate limits, 1M lines would basically be enough to fit:
>>>>>>> A basic kernel;
>>>>>>> (this excludes the Linux kernel, which is well over the size limit).
>>>>>>
>>>>>>If there were an efficient way to run the device driver sack in user-mode
>>>>>>without privilege and only the MMI/O pages this driver can touch mapped
>>>>>>into his VAS. Poof none of the driver stack is in the kernel. --IF--
>>>>
>>>>> That's actually quite common and one of the raison d'etre of the
>>>>> PCI Express SR-IOV feature. When you can present a virtual
>>>>> function to the user directly (mapping the MMIO region into
>>>>> the user mode virtual address space) the app had direct access
>>>>> to the hardware. Interrupts are the only tricky part, and
>>>>> the kernel virtio subsystem, which interfaces with the user
>>>>> application via shared memory provides interrupt handling
>>>>> to the application.
>>>>
>>>>> An I/OMMU provides memory protection for DMA operations initiated
>>>>> by the virtual function ensuring it only accesses the application
>>>>> virtual address space.
>>>>
>>>>Why should device be able to access user VaS outside of the buffer the
>>>>user provided, OH so long ago ??
>>
>>
>>> Because the device wants to do DMA directly into or from the users
>>> virtual address space. Bulk transfer, not MMIO accesses.
>>
>>OK, I will ask the question in the contrapositive way::
>>If the user ask device to read into a buffer, why does the device get
>>to see everything of the user's space along with that buffer ?

> It doesn't, necessarily. The IOMMU translation table is a
> proper subset of the user's virtual address space. The
> application tells the kernel which portions of the address
> space are valid DMA regions for the device to access.

Which is my point !! You only want the device to see that <small> subset
of the requesting application's address space--not the whole thing. Done
right, the device can still use the application's virtual addresses, but
the device is not allowed to access anything not associated with the
request at hand right now.

For example, you are a large entity and Chinese disk drives are way
less expensive than non-Chinese; so you buy some. Would you let those
disk drives access anything in some requestor's address space--no, you
would only allow that device to access the user-supplied buffer and
whatever page rounding-up transpires.

Principle of least Privilege works in the I/O space too.

Re: Whither the Mill?

<73e64061bc14cec73e1e94cadc65cb79@news.novabbs.com>

https://news.novabbs.org/devel/article-flat.php?id=35805&group=comp.arch#35805

Newsgroups: comp.arch
Path: i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 23:01:44 +0000
Organization: novaBBS
Message-ID: <73e64061bc14cec73e1e94cadc65cb79@news.novabbs.com>
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <2695abc72966c220809e5c6690a8edf6@news.novabbs.com> <ZP5fN.58208$83n7.3029@fx18.iad> <ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com> <LQmfN.5865$zqTf.4843@fx35.iad> <ull9he$2j2v6$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
logging-data="156595"; mail-complaints-to="usenet@i2pn2.org";
posting-account="t+lO0yBNO1zGxasPvGSZV1BRu71QKx+JE37DnW+83jQ";
User-Agent: Rocksolid Light
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-13) on novalink.us
X-Rslight-Site: $2y$10$IffPxnLSf1MXLZo/S3UxrOUDxIapzvOCnbITcI.7ZV3blS312NdWu
X-Rslight-Posting-User: 7e9c45bcd6d4757c5904fbe9a694742e6f8aa949
 by: MitchAlsup - Sat, 16 Dec 2023 23:01 UTC

BGB-Alt wrote:

Why did you acquire an alt ?? Ego perhaps ??

Re: Whither the Mill?

<b71e2dab573615a4529da208b45fac23@news.novabbs.com>

https://news.novabbs.org/devel/article-flat.php?id=35807&group=comp.arch#35807

Newsgroups: comp.arch
Path: i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup)
Newsgroups: comp.arch
Subject: Re: Whither the Mill?
Date: Sat, 16 Dec 2023 23:06:21 +0000
Organization: novaBBS
Message-ID: <b71e2dab573615a4529da208b45fac23@news.novabbs.com>
References: <ulclu3$3sglk$1@dont-email.me> <gp1pni5t4vfqjsp81fogoboeoqe5hrj5pv@4ax.com> <uli82g$217dj$1@dont-email.me> <a54ae908ce5af533e638e112833b35ea@news.novabbs.com> <JA4fN.38899$xHn7.23180@fx14.iad> <2695abc72966c220809e5c6690a8edf6@news.novabbs.com> <ZP5fN.58208$83n7.3029@fx18.iad> <ea8c8a6be398fa64936d2da4efc2ca71@news.novabbs.com> <LQmfN.5865$zqTf.4843@fx35.iad> <ull9he$2j2v6$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
logging-data="157111"; mail-complaints-to="usenet@i2pn2.org";
posting-account="t+lO0yBNO1zGxasPvGSZV1BRu71QKx+JE37DnW+83jQ";
User-Agent: Rocksolid Light
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-13) on novalink.us
X-Rslight-Posting-User: 7e9c45bcd6d4757c5904fbe9a694742e6f8aa949
X-Rslight-Site: $2y$10$X8HpEh27StX068zNmekl6.PwNHcI34FkQ/oF5s9tjw/JMLAfwKcMa
 by: MitchAlsup - Sat, 16 Dec 2023 23:06 UTC

BGB-Alt wrote:

> On 12/16/2023 1:25 PM, EricP wrote:
>> MitchAlsup wrote:

> One thing I don't get here is why there would be direct DMA between
> userland and the device (at least for filesystem and similar).

> Like, say, for a filesystem, it is presumably:
> read syscall from user to OS;
> route this to the corresponding VFS driver;
> Requests spanning multiple blocks being broken up into parts;
> VFS driver checks the block-cache / buffer-cache;
> If found, copy from cache into user-space;
> If not found, send request to the underlying block device;
> Wait for response (and/or reschedule task for later);
> Copy result back into userland.

This is correct enough for a file system buffered by a disk cache.

Are ALL file systems buffered in a disk cache ??

> Though, it may make sense that if a request isn't available immediately,
> and there is some sort of DMA mechanism, the OS could block the task and
> then resume it once the data becomes available. For polling IO, doesn't
> likely make much difference as the CPU is basically stuck in a busy loop
> either way until the IO finishes.

> Though, could make sense for hardware accelerating pixel-copying
> operations for a GUI.

> For GUI, there would be multiple stages of copying, say:
> Copying from user buffer to window buffer;
> Copying from window buffer to screen buffer;
> Copying from screen buffer to VRAM.

> For video playback or GL, there may be an additional stage of copying
> from GL's buffer to a user's buffer, then from the user's buffer to the
> window buffer. Though, considering possibly adding a shortcut path where
> GL and video codecs copy more directly into the window buffer (bypassing
> needing to pass the frame data through the userland program).

> Could be also possible maybe to have GL render directly into the window
> buffer, which could be possible if they have the same format/resolution,
> and the window buffer is physically mapped (say, for my current hardware
> rasterizer module).

> If running a program full-screen, it is possible to copy more directly
> from the user buffer into VRAM, saving some time here.

> Some time could be saved here if one had hardware support for these
> sorts of "copy pixel buffers around and convert between formats" tasks,
> but to be useful, this would need to be able to work with virtual
> memory, which adds some complexity (would either need to be CPU-like
> and/or have a page-walker; neither is particularly cheap).

I have MM (memory-to-memory move:: memmove() if you will) that transmits
up to 1 page of data as if atomically (a single "bus" transaction).

> Could maybe offload the task to the rasterizer module, but would need to
> add a page-walker to the rasterizer... Though, trying to deal with some
> scenarios (such as the final conversion/copy to VRAM) would add a lot of
> extra complexity. For now, its framebuffer/zbuffer/textures need to be
> in physically-mapped addresses (also with a 128-bit buffer alignment).

> Though, cheaper could be to make use of the second CPU core, but then
> schedule things like pixel copy operations to it (maybe also things like
> vertex transform and similar for OpenGL). Currently, if enabled, the
> second core hasn't seen a lot of use thus far in my case.

> ....

