Rocksolid Light



arts / rec.arts.sf.written / Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge

Subject                                                      Author
* "We’re Gonna Need a Bigger Moat" by Steve Yegge            Lynn McGuire
+* Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge       Andrew McDowell
|`* Re: Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge  Scott Lurndal
| `- Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge     David Duffy
`- Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge       pete...@gmail.com

"We’re Gonna Need a Bigger Moat" by Steve Yegge

<u4427i$5e92$1@dont-email.me>


https://news.novabbs.org/arts/article-flat.php?id=92913&group=rec.arts.sf.written#92913

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: lynnmcguire5@gmail.com (Lynn McGuire)
Newsgroups: rec.arts.sf.written
Subject: "We’re Gonna Need a Bigger Moat" by Steve Yegge
Date: Wed, 17 May 2023 21:23:44 -0500
Organization: A noiseless patient Spider
Lines: 34
Message-ID: <u4427i$5e92$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 18 May 2023 02:23:46 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="463d19f0748ac905bbe17983afdff9ce";
logging-data="178466"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18HhIA9NzI4x7CsIELbFstS82B9DXQPPpc="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.11.0
Cancel-Lock: sha1:mSO1pTwFWfq0nYO1TnUfKu99ngs=
Content-Language: en-US
 by: Lynn McGuire - Thu, 18 May 2023 02:23 UTC

"We’re Gonna Need a Bigger Moat" by Steve Yegge
https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2

“In related news, these “billion dollar”-class LLMs can now be cloned on
macbooks and copied directly onto Boston Dynamics robots via their
Raspberry Pi adapter, at which point…”

“Oh yeah, it was training costs. Remember when it was roughly $1B to
train an LLM like GPT-4?”

“According to the leaked Google memo, world-class competitive LLM
training costs just dropped from a billion dollars to… that’s right, you
guessed it…”

“A hundred dollars.”

“Before last week, there were, oh, maybe five LLMs in GPT’s class. In
the whole world. It was like back in the 1950s when there were like five
computers in the world, and IBM owned three of them.”

“Which sounds to me like a very safe and defensible moat. That is, until
you realize LLMs can f*****’ copy each other. So their so-called “data
advantage” was really only going to be safe for as long as all the big
players kept the AIs locked up.”

“I swear this is a damn Jerry Bruckheimer movie, unfolding before our eyes.”

Uh, it did not go well for the human race in the Robopocalypse book that
I just read and reviewed. 99.9% dead in three years.
https://www.amazon.com/Robopocalypse-Contemporaries-Daniel-H-Wilson/dp/0307740803/

Lynn

Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge

<14eed119-676d-4a43-bb2d-d999ba36d301n@googlegroups.com>


https://news.novabbs.org/arts/article-flat.php?id=92919&group=rec.arts.sf.written#92919

X-Received: by 2002:a05:622a:51:b0:3ea:3d30:af91 with SMTP id y17-20020a05622a005100b003ea3d30af91mr807191qtw.1.1684384535156;
Wed, 17 May 2023 21:35:35 -0700 (PDT)
X-Received: by 2002:a05:6870:9185:b0:196:74ca:432a with SMTP id
b5-20020a056870918500b0019674ca432amr121874oaf.4.1684384534803; Wed, 17 May
2023 21:35:34 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: rec.arts.sf.written
Date: Wed, 17 May 2023 21:35:34 -0700 (PDT)
In-Reply-To: <u4427i$5e92$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=2.127.209.193; posting-account=utyrIAoAAACcAz1G5lMc301fthWOXU_Z
NNTP-Posting-Host: 2.127.209.193
References: <u4427i$5e92$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <14eed119-676d-4a43-bb2d-d999ba36d301n@googlegroups.com>
Subject: Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge
From: mcdowell_ag@sky.com (Andrew McDowell)
Injection-Date: Thu, 18 May 2023 04:35:35 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Received-Bytes: 4220
 by: Andrew McDowell - Thu, 18 May 2023 04:35 UTC

On Thursday, May 18, 2023 at 3:25:18 AM UTC+1, Lynn McGuire wrote:
> "We’re Gonna Need a Bigger Moat" by Steve Yegge
>
> https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2
>
> “In related news, these “billion dollar”-class LLMs can now be cloned on
> macbooks and copied directly onto Boston Dynamics robots via their
> Raspberry Pi adapter, at which point…”
>
> “Oh yeah, it was training costs. Remember when it was roughly $1B to
> train an LLM like GPT-4?"
>
> “According to the leaked Google memo, world-class competitive LLM
> training costs just dropped from a billion dollars to… that’s right, you
> guessed it…”
>
> "A hundred dollars.”
>
> “Before last week, there were, oh, maybe five LLMs in GPT’s class. In
> the whole world. It was like back in the 1950s when there were like five
> computers in the world, and IBM owned three of them.”
>
> “Which sounds to me like a very safe and defensible moat. That is, until
> you realize LLMs can f*****’ copy each other. So their so-called “data
> advantage” was really only going to be safe for as long as all the big
> players kept the AIs locked up.”
>
> “I swear this is a damn Jerry Bruckheimer movie, unfolding before our eyes.”
>
> Uh, it did not go well for the human race in the Robopocalypse book that
> I just read and reviewed. 99.9% dead in three years.
>
> https://www.amazon.com/Robopocalypse-Contemporaries-Daniel-H-Wilson/dp/0307740803/
>
> Lynn
I am in the skeptical minority. I think that a Large Language Model is just
that - a means of predicting the probabilities of the different possible
extensions of a stretch of text. If it models human intelligence, it models
the intelligence of a human who is articulate and plausible, but doesn't
actually know anything or have any skills.

There is in fact a precedent for this - from
https://en.wikipedia.org/wiki/Williams_syndrome :
"Despite their physical and cognitive deficits, people with Williams syndrome
exhibit impressive social and verbal abilities. Williams patients can be
highly verbal relative to their IQ. When children with Williams syndrome are
asked to name an array of animals, they may well list a wild assortment of
creatures such as a koala, saber-toothed cat, vulture, unicorn, sea lion,
yak, ibex and Brontosaurus, a far greater verbal array than would be expected
of children with IQs in the 60s."

Perhaps an LLM would be truly useful as part of a larger system which had some way of applying quality control to its outputs, such as turning the LLM speculations into a valid mathematical argument if possible. But I have seen no sign that this has happened (yet).
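[The "predicting the probabilities of the different possible extensions of a stretch of text" idea above can be sketched with a toy bigram model - pure Python over an invented corpus, purely illustrative; a real LLM is a neural network conditioning on far longer contexts, but the interface - context in, next-token probabilities out - is the same.]

```python
from collections import Counter, defaultdict

def bigram_model(corpus: str):
    """Estimate P(next word | current word) from whitespace-tokenized text."""
    words = corpus.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    # Normalize raw counts into conditional probabilities.
    return {
        cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for cur, nxts in counts.items()
    }

corpus = "the cat sat on the mat and the cat slept"
model = bigram_model(corpus)
# Given the context "the", the model assigns a probability to each
# observed extension: 'cat' gets 2/3, 'mat' gets 1/3.
print(model["the"])
```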

Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge

<892d0ecf-c02e-4024-8f73-5d9477e22248n@googlegroups.com>


https://news.novabbs.org/arts/article-flat.php?id=92926&group=rec.arts.sf.written#92926

X-Received: by 2002:ac8:5ad3:0:b0:3f5:a07:e6d5 with SMTP id d19-20020ac85ad3000000b003f50a07e6d5mr1187409qtd.8.1684415631509;
Thu, 18 May 2023 06:13:51 -0700 (PDT)
X-Received: by 2002:a05:6808:3315:b0:396:1512:6fd5 with SMTP id
ca21-20020a056808331500b0039615126fd5mr712771oib.10.1684415631044; Thu, 18
May 2023 06:13:51 -0700 (PDT)
Path: i2pn2.org!i2pn.org!usenet.goja.nl.eu.org!2.eu.feeder.erje.net!feeder.erje.net!border-1.nntp.ord.giganews.com!nntp.giganews.com!news-out.google.com!nntp.google.com!postnews.google.com!google-groups.googlegroups.com!not-for-mail
Newsgroups: rec.arts.sf.written
Date: Thu, 18 May 2023 06:13:50 -0700 (PDT)
In-Reply-To: <u4427i$5e92$1@dont-email.me>
Injection-Info: google-groups.googlegroups.com; posting-host=136.226.18.63; posting-account=BUItcQoAAACgV97n05UTyfLcl1Rd4W33
NNTP-Posting-Host: 136.226.18.63
References: <u4427i$5e92$1@dont-email.me>
User-Agent: G2/1.0
MIME-Version: 1.0
Message-ID: <892d0ecf-c02e-4024-8f73-5d9477e22248n@googlegroups.com>
Subject: Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge
From: petertrei@gmail.com (pete...@gmail.com)
Injection-Date: Thu, 18 May 2023 13:13:51 +0000
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Lines: 54
 by: pete...@gmail.com - Thu, 18 May 2023 13:13 UTC

On Wednesday, May 17, 2023 at 10:25:18 PM UTC-4, Lynn McGuire wrote:
> "We’re Gonna Need a Bigger Moat" by Steve Yegge
>
> https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2
>
> “In related news, these “billion dollar”-class LLMs can now be cloned on
> macbooks and copied directly onto Boston Dynamics robots via their
> Raspberry Pi adapter, at which point…”
>
> “Oh yeah, it was training costs. Remember when it was roughly $1B to
> train an LLM like GPT-4?"
>
> “According to the leaked Google memo, world-class competitive LLM
> training costs just dropped from a billion dollars to… that’s right, you
> guessed it…”
>
> "A hundred dollars.”
>
> “Before last week, there were, oh, maybe five LLMs in GPT’s class. In
> the whole world. It was like back in the 1950s when there were like five
> computers in the world, and IBM owned three of them.”
>
> “Which sounds to me like a very safe and defensible moat. That is, until
> you realize LLMs can f*****’ copy each other. So their so-called “data
> advantage” was really only going to be safe for as long as all the big
> players kept the AIs locked up.”
>
> “I swear this is a damn Jerry Bruckheimer movie, unfolding before our eyes.”
>
> Uh, it did not go well for the human race in the Robopocalypse book that
> I just read and reviewed. 99.9% dead in three years.
>
> https://www.amazon.com/Robopocalypse-Contemporaries-Daniel-H-Wilson/dp/0307740803/
>
> Lynn

I've got to say, this is starting to sound like the precursors of a 'Technological Singularity'
novel.

pt

Re: Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge

<5aq9M.259749$qpNc.96377@fx03.iad>


https://news.novabbs.org/arts/article-flat.php?id=92931&group=rec.arts.sf.written#92931

Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!panix!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx03.iad.POSTED!not-for-mail
X-newsreader: xrn 9.03-beta-14-64bit
Sender: scott@dragon.sl.home (Scott Lurndal)
From: scott@slp53.sl.home (Scott Lurndal)
Reply-To: slp53@pacbell.net
Subject: Re: Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge
Newsgroups: rec.arts.sf.written
References: <u4427i$5e92$1@dont-email.me> <14eed119-676d-4a43-bb2d-d999ba36d301n@googlegroups.com>
Lines: 58
Message-ID: <5aq9M.259749$qpNc.96377@fx03.iad>
X-Complaints-To: abuse@usenetserver.com
NNTP-Posting-Date: Thu, 18 May 2023 13:58:25 UTC
Organization: UsenetServer - www.usenetserver.com
Date: Thu, 18 May 2023 13:58:25 GMT
X-Received-Bytes: 2919
 by: Scott Lurndal - Thu, 18 May 2023 13:58 UTC

Andrew McDowell <mcdowell_ag@sky.com> writes:
>On Thursday, May 18, 2023 at 3:25:18 AM UTC+1, Lynn McGuire wrote:
>> "We’re Gonna Need a Bigger Moat" by Steve Yegge
>>
>> https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2
>>
>> “In related news, these “billion dollar”-class LLMs can now be cloned on
>> macbooks and copied directly onto Boston Dynamics robots via their
>> Raspberry Pi adapter, at which point…”
>>
>> “Oh yeah, it was training costs. Remember when it was roughly $1B to
>> train an LLM like GPT-4?"
>>
>> “According to the leaked Google memo, world-class competitive LLM
>> training costs just dropped from a billion dollars to… that’s right, you
>> guessed it…”
>>
>> "A hundred dollars.”
>>
>> “Before last week, there were, oh, maybe five LLMs in GPT’s class. In
>> the whole world. It was like back in the 1950s when there were like five
>> computers in the world, and IBM owned three of them.”
>>
>> “Which sounds to me like a very safe and defensible moat. That is, until
>> you realize LLMs can f*****’ copy each other. So their so-called “data
>> advantage” was really only going to be safe for as long as all the big
>> players kept the AIs locked up.”
>>
>> “I swear this is a damn Jerry Bruckheimer movie, unfolding before our eyes.”
>>
>> Uh, it did not go well for the human race in the Robopocalypse book that
>> I just read and reviewed. 99.9% dead in three years.
>>
>> https://www.amazon.com/Robopocalypse-Contemporaries-Daniel-H-Wilson/dp/0307740803/
>>
>> Lynn
>I am in the skeptical minority. I think that a Large Language Model is just
>that - a means of predicting the probabilities of the different possible
>extensions of a stretch of text.

Indeed.

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

It is _NOT_ AI. Nor is it sentient.

Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge

<u46sue$ihh7$1@dont-email.me>


https://news.novabbs.org/arts/article-flat.php?id=92942&group=rec.arts.sf.written#92942

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: davidd02@tpg.com.au (David Duffy)
Newsgroups: rec.arts.sf.written
Subject: Re: "We’re Gonna Need a Bigger Moat" by Steve Yegge
Date: Fri, 19 May 2023 04:12:00 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 25
Message-ID: <u46sue$ihh7$1@dont-email.me>
References: <u4427i$5e92$1@dont-email.me> <14eed119-676d-4a43-bb2d-d999ba36d301n@googlegroups.com> <5aq9M.259749$qpNc.96377@fx03.iad>
Injection-Date: Fri, 19 May 2023 04:12:00 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="8ead77a21e662ae67ee3c7cf44ba8566";
logging-data="607783"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18l82chgp9608Oe2HRS6ant3rZIvX4aZR4="
User-Agent: tin/2.6.2-20220130 ("Convalmore") (Linux/5.15.0-72-generic (x86_64))
Cancel-Lock: sha1:LxkNox3olNhYzHj3jOsGTaS50/M=
 by: David Duffy - Fri, 19 May 2023 04:12 UTC

Scott Lurndal <scott@slp53.sl.home> wrote:
> Andrew McDowell <mcdowell_ag@sky.com> writes:
>>On Thursday, May 18, 2023 at 3:25:18 AM UTC+1, Lynn McGuire wrote:
>>> "We’re Gonna Need a Bigger Moat" by Steve Yegge
>>I am in the skeptical minority. I think that a Large Language Model is just
>>that - a means of predicting the probabilities of the different possible
>>extensions of a stretch of text.
>
> Indeed.
>
> https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
>
> It is _NOT_ AI. Nor is it sentient.

It's just Babel-17; we could all learn to do it ;)

Actually, one earlier model (Word2vec) of the chemistry literature
successfully predicted properties of new compounds tested after the
year cutoff of the papers the program was trained on, just by
calculating "a dot product (projection) of normalized word embeddings".
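[That "dot product of normalized word embeddings" is just cosine similarity. A minimal pure-Python sketch - the 3-d vectors and the token names below are invented for illustration; real embeddings are learned from a corpus, typically in hundreds of dimensions:]

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def similarity(u, v):
    """Dot product of normalized embeddings = cosine similarity."""
    return sum(a * b for a, b in zip(normalize(u), normalize(v)))

# Invented toy "embeddings"; in the chemistry study, a compound's vector
# landing near the vector for a property word (e.g. "thermoelectric")
# flagged it as a candidate before it was ever tested.
emb = {
    "thermoelectric": [0.9, 0.1, 0.2],
    "compound_A":     [0.8, 0.2, 0.3],  # hypothetical candidate material
    "banana":         [0.0, 0.9, 0.1],
}

print(similarity(emb["thermoelectric"], emb["compound_A"]))  # close to 1
print(similarity(emb["thermoelectric"], emb["banana"]))      # much lower
```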

Minsky's 1968 definition of AI: "the science of making machines do the
things that would require intelligence if done by men."

