comp.theory: Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

Subject (Author)
* Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||+* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
|||`* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| +* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Mikko)
||| | | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | | | +- Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | | | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Mikko)
||| | | |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | | |   +* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | | |   |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | | |   | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | | |   |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | | |   |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | | |   |    `* How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Mikko)
||| | | |   |     | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | | |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | | |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Mikko)
||| | | |   |     | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | |   +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |   |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | |   | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |   |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | |   |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |   |    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | |   |     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |   |      `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | |   |       `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |   |        `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     | |   |         `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     | |   `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (immibis)
||| | | |   |     | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |    +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     |    |+- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |    |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Mikko)
||| | | |   |     |    | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     |    |  +- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (immibis)
||| | | |   |     |    |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Mikko)
||| | | |   |     |    |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     |    |    +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (immibis)
||| | | |   |     |    |    |+* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     |    |    ||+- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |    |    ||`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (immibis)
||| | | |   |     |    |    || `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     |    |    ||  +- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (immibis)
||| | | |   |     |    |    ||  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |    |    ||   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     |    |    ||    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |    |    |`- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |    |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Richard Damon)
||| | | |   |     |    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (Mikko)
||| | | |   |     |     `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   |     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (immibis)
||| | | |   |      `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas? (olcott)
||| | | |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Mikko)
||| | | |    `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | | |     +* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | | |     |`* Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩ (olcott)
||| | | |     | `* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩ (Richard Damon)
||| | | |     |  `* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩ (olcott)
||| | | |     |   `- Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩ (Richard Damon)
||| | | |     `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Mikko)
||| | | |      `* Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ? (olcott)
||| | | |       +- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ? (immibis)
||| | | |       `- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ? (Richard Damon)
||| | | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
||| | |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Richard Damon)
||| | |    `* Actual limits of computations != actual limits of computers with unlimited memory ? (olcott)
||| | |     `* Re: Actual limits of computations != actual limits of computers with unlimited memory ? (Richard Damon)
||| | |      `* Re: Actual limits of computations != actual limits of computers with unlimited memory ? (olcott)
||| | |       `* Re: Actual limits of computations != actual limits of computers with unlimited memory ? (Richard Damon)
||| | |        `* Re: Actual limits of computations != actual limits of computers with unlimited memory ? (olcott)
||| | |         +* Re: Actual limits of computations != actual limits of computers with unlimited memory ? (Richard Damon)
||| | |         |`* Limits of computations != actual limits of computers [ Church Turing ] (olcott)
||| | |         | +* Re: Limits of computations != actual limits of computers [ Church Turing ] (Richard Damon)
||| | |         | |`* Re: Limits of computations != actual limits of computers [ Church Turing ] (olcott)
||| | |         | | `* Re: Limits of computations != actual limits of computers [ Church Turing ] (Richard Damon)
||| | |         | |  +* Re: Limits of computations != actual limits of computers [ Church Turing ] (olcott)
||| | |         | |  |+* Re: Limits of computations != actual limits of computers [ Church Turing ] (immibis)
||| | |         | |  |`* Re: Limits of computations != actual limits of computers [ Church Turing ] (Richard Damon)
||| | |         | |  `* Re: Limits of computations != actual limits of computers [ Church Turing ] (olcott)
||| | |         | `- Re: Finlayson [ Church Turing ] (Ross Finlayson)
||| | |         `* Re: Actual limits of computations != actual limits of computers with unlimited memory ? (Mikko)
||| | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (immibis)
||| `- Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (immibis)
||`- Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Mikko)
|+- Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (olcott)
|`* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Tristan Wibberley)
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (immibis)
`* Re: Why does H1(D,D) actually get a different result than H(D,D) ??? (Mikko)

How do we know that ChatGPT 4.0 correctly evaluated my ideas?

https://news.novabbs.org/devel/article-flat.php?id=54550&group=comp.theory#54550

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: How do we know that ChatGPT 4.0 correctly evaluated my ideas?
Date: Sun, 3 Mar 2024 22:37:10 -0600
Message-ID: <us3j5o$2vhd5$1@dont-email.me>

On 3/3/2024 10:25 PM, Richard Damon wrote:
> On 3/3/24 10:32 PM, olcott wrote:
>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>> On 3/3/24 9:13 PM, olcott wrote:
>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>
>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>
>>>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>
>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>
>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>
>>>>>>
>>>>>> The first thing that it does is agree that Hehner's
>>>>>> "Carol's question" (augmented by Richards critique)
>>>>>> is an example of the Liar Paradox.
>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>
>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>> professor Stoddart are all correct in that there is
>>>>>> something wrong with the halting problem.
>>>>>
>>>>> Which since it is proven that Chat GPT doesn't actually know what
>>>>> is a fact, and has been proven to lie,
>>>>
>>>> The first thing that it figured out on its own is that
>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>
>>>> It eventually agreed with the same conclusion that
>>>> myself and professors Hehner and Stoddart agreed to.
>>>> It took 34 pages of dialog to understand this. I
>>>> finally have a good PDF of this.
>>>>
>>>
>>> It didn't "figure it out"; it pattern-matched it to previous input it
>>> has been given.
>>>
>>> If it took 34 pages to agree with your conclusion, then it really
>>> didn't agree with you initially, but you finally trained it to your
>>> version of reality.
>>
>> *HERE IS ITS AGREEMENT*
>> When an input, such as the halting problem's pathological input D, is
>> designed to contradict every value that the halting decider H returns,
>> it creates a self-referential paradox that prevents H from providing a
>> consistent and correct response. In this context, D can be seen as
>> posing an incorrect question to H, as its contradictory nature
>> undermines the possibility of a meaningful and accurate answer.
>>
>>
>
> Which means NOTHING, as an LLM will tell non-truths if fed misleading
> information.

The above paragraph is proven to be completely true entirely
on the basis of the meaning of its words as these words were
defined in the dialogue that precedes them.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
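
A minimal C sketch of the "pathological input D" being argued about above, assuming H has the conventional prototype of a halt decider; the prototype and names here are illustrative, not taken from the thread's actual code:

    /* Sketch only: H is assumed to return 1 if P(I) halts, 0 otherwise. */
    typedef void (*prog)(void *);
    int H(prog P, void *I);            /* assumed decider prototype */

    void D(void *d)                    /* D is built to contradict H */
    {
        if (H((prog)d, d))             /* H predicts D(D) halts...   */
            for (;;) ;                 /* ...so D loops forever      */
        /* H predicts D(D) loops: D halts immediately */
    }

Whatever value H(D, D) returns, D(D) does the opposite; this is the self-reference that both sides of the thread are arguing over.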

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

https://news.novabbs.org/devel/article-flat.php?id=54551&group=comp.theory#54551

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Actual limits of computations != actual limits of computers with unlimited memory ?
Date: Sun, 3 Mar 2024 22:58:12 -0600
Message-ID: <us3kd5$2vo1i$1@dont-email.me>

On 3/3/2024 10:25 PM, Richard Damon wrote:
> On 3/3/24 9:39 PM, olcott wrote:
>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>> On 3/3/24 11:19 AM, olcott wrote:
>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually the
>>>>>>>>>>>>> same computation.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Which means that H and H1 are not computations, and you
>>>>>>>>>>>>> have been just an ignorant pathological liar all this time.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the address of
>>>>>>>>>>>>> the decider, which isn't defined as an "input" to it, we
>>>>>>>>>>>>> see that you have been lying that this code is a
>>>>>>>>>>>>> computation. Likely because you have made yourself ignorant
>>>>>>>>>>>>> of what a computation actually is,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thus you have made yourself into an Ignorant Pathological
>>>>>>>>>>>>> Lying Idiot.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Nope.
>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>
>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Any description of a Turing Machine (or a Computation) that
>>>>>>>>>>> needs to reference attributes of Modern Electronic Computers
>>>>>>>>>>> is just WRONG as they predate the development of such a thing.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>
>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>
>>>>>>>>> So?
>>>>>>>>>
>>>>>>>>> Doesn't let a "Computation" change its answer based on its
>>>>>>>>> memory address.
>>>>>>>>>
>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does
>>>>>>>> not halt
>>>>>>>>
>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>> simulation.
>>>>>>>>
>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>
>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>
>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> No COMPUTATION can solve it, because it has been proved impossible.
>>>>>>
>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>> actual very deep understanding of these things.
>>>>>
>>>>> Do computers actually UNDERSTAND?
>>>>
>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>> Demonstrates the functional equivalent of deep understanding.
>>>> The first thing that it does is categorize Carol's question
>>>> as equivalent to the Liar Paradox.
>>>
>>> Nope, doesn't show what you claim, just that it has been taught by
>>> "rote memorization" that the answer to a question put the way you did
>>> is the answer it gave.
>>>
>>> You are just showing that YOU don't understand what the word
>>> UNDERSTAND actually means.
>>>
>>>>
>>>>>>
>>>>>> This proves that the artifice of the human notion of
>>>>>> computation is more limiting than actual real computers.
>>>>>
>>>>> In other words, you reject the use of definitions to define words.
>>>>>
>>>>> I guess to you, nothing means what others have said it means,
>>>>>
>>>>
>>>> I have found that it is the case that some definitions of
>>>> technical terms sometimes box people into misconceptions
>>>> such that alternative views are inexpressible within the
>>>> technical language.
>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>
>>> In other words, you are admitting that when you claim to be working in
>>> a technical field and using the words as that field means, you are
>>> just being an out-and-out LIAR.
>>
>> Not at all. When working with any technical definition I never
>> simply assume that it is coherent. I always assume that it is
>> possibly incoherent until proven otherwise.
>
> In other words, you ADMIT that you ignore technical definitions and thus
> your comments about working in the field are just an ignorant pathological
> lie.
>
>>
>> If there are physically existing machines that can answer questions
>> that are not Turing computable only because these machine can access
>> their own machine address then these machines would be strictly more
>> powerful than Turing Machines on these questions.
>
> Nope.


[remainder of article truncated]
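
The dispute above turns on whether a function that consults its own machine address can count as a computation. A small, self-contained C illustration of that property, assuming nothing from the thread's actual H/H1 code (the address comparison is a contrivance for demonstration, and converting a function pointer to an integer is implementation-defined):

    #include <stdio.h>
    #include <stdint.h>

    /* Two textually identical deciders that branch on their own address.
       A computation must map equal inputs to equal outputs, so the
       differing outputs below are the heart of the objection. */
    int H(uintptr_t p)  { return p == (uintptr_t)&H;  }
    int H1(uintptr_t p) { return p == (uintptr_t)&H1; }

    int main(void)
    {
        uintptr_t input = (uintptr_t)&H;                /* same input to both */
        printf("H: %d  H1: %d\n", H(input), H1(input)); /* prints "H: 1  H1: 0" */
        return 0;
    }
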
Re: Why does H1(D,D) actually get a different result than H(D,D) ???

https://news.novabbs.org/devel/article-flat.php?id=54555&group=comp.theory#54555

From: mikko.levanto@iki.fi (Mikko)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Mon, 4 Mar 2024 11:22:03 +0200
Message-ID: <us43rr$32mqs$1@dont-email.me>

On 2024-03-03 18:47:29 +0000, olcott said:

> On 3/3/2024 11:48 AM, Mikko wrote:
>> On 2024-03-03 15:08:17 +0000, olcott said:
>>
>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>
>>>>> Nonetheless actual computers do actually demonstrate
>>>>> actual very deep understanding of these things.
>>>>
>>>> Not very deep, just deeper than you can achieve.
>>>>
>>>
>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>
>> That does not demonstrate any understanding, even shallow.
>>
>
> The first thing that it does is agree that Hehner's
> "Carol's question" (augmented by Richards critique)
> is an example of the Liar Paradox.
> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>
> It ends up concluding that myself, professor Hehner and
> professor Stoddart are all correct in that there is
> something wrong with the halting problem.

None of that demonstrates any understanding.

> My persistent focus on these ideas gives me an increasingly
> deeper understanding; thus my latest position is that the
> halting problem proofs do not actually show that halting
> is not computable.

Your understanding is still defective and shallow.

--
Mikko

Re: Why does H1(D,D) actually get a different result than H(D,D) (Linz version)

https://news.novabbs.org/devel/article-flat.php?id=54556&group=comp.theory#54556

From: mikko.levanto@iki.fi (Mikko)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) (Linz version)
Date: Mon, 4 Mar 2024 11:28:41 +0200
Message-ID: <us4489$32p1b$1@dont-email.me>

On 2024-03-04 00:57:37 +0000, olcott said:

> On 3/3/2024 2:40 PM, Richard Damon wrote:
>> On 3/3/24 11:08 AM, olcott wrote:
>>> On 3/3/2024 4:47 AM, Mikko wrote:
>>>> On 2024-03-02 22:28:44 +0000, olcott said:
>>>>
>>>>> The reason that people assume that H1(D,D) must get
>>>>> the same result as H(D,D) is that they make sure
>>>>> to ignore the reason why they get a different result.
>>>>
>>>> It is quite obvious why H1 gets a different result from H.
>>>> It simply is that your "simulator" does not simulate
>>>> correctly. In spite of that, your H gets H(D,D) wrong,
>>>> so it is not a counterexample to Linz's proof.
>>>>
>>>
>>> H/D are equivalent to Ĥ and in reality that is the only
>>> way that H/D can be defined in an actual Turing Machine.
>>> This makes H1 equivalent to Linz H.
>>
>> Only if H1 is the exact same computation as H, meaning it gives the
>> exact same answer as H for the same input. At this point, we don't
>> really need two different names as far as Turing Machines.
>>
>> Otherwise, you are just LYING that you built Linz H^ properly, as it
>> must be built on a computationally exact copy of Linz H.
>>
>>>
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>
>>> Using an adaptation of Mike's idea combined with an earlier
>>> idea of mine: Ĥ.H simulates Ĥ applied to ⟨Ĥ⟩ until it sees
>>> an exact copy of the same machine description try to simulate
>>> itself again with identical copies of its same input.
>>
>> Except that there isn't a unique description for a given Turing
>> Machine, so H can't know what its "Description" is.
>>
>>>
>>> Then the outermost Ĥ.H transitions to Ĥ.Hqn indicating that it
>>> must abort its simulation of Ĥ applied to ⟨Ĥ⟩ to prevent its own
>>> infinite execution.
>>>
>>> Although this halt status does not correspond to the actual
>>> behavior of Ĥ applied to ⟨Ĥ⟩ it does cause Ĥ to halt. When
>>> H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ it can see that Ĥ applied to ⟨Ĥ⟩ halts.
>>> *Thus Linz Ĥ can fool itself yet cannot fool Linz H*
>>>
>>> The reason that this works is that Ĥ contradicts its own
>>> internal copy of H yet cannot contradict the actual Linz H.
>>>
>>
>> But that implies that H and H1 are NOT the same computation;
>> otherwise, why did they act differently? Thus your argument is proved
>> to be a LIE.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> Execution trace of Ĥ applied to ⟨Ĥ⟩
> (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to Ĥ.H
> (b) Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
> (c) which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process
>
> Both H and Ĥ.H transition to their NO state when a correct and
> complete simulation of their input would cause their own infinite
> execution and otherwise transition to their YES state.

There is no correct and complete simulation of a non-halting
computation.

--
Mikko
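
One way to read the repetition test described above ("an exact copy of the same machine description try to simulate itself again with identical copies of its same input") is as a recursion check inside a simulating decider. A hedged C sketch under the assumption that machine descriptions and inputs are compared as strings; none of these names come from the thread:

    #include <stdio.h>
    #include <string.h>

    #define MAX_DEPTH 64

    struct frame { const char *machine; const char *input; };
    static struct frame seen[MAX_DEPTH];
    static int depth;

    /* Record each (machine-description, input) pair as its simulation
       begins; report 1 if an identical pair is already in progress. */
    static int would_repeat(const char *machine, const char *input)
    {
        for (int i = 0; i < depth; i++)
            if (strcmp(seen[i].machine, machine) == 0 &&
                strcmp(seen[i].input, input) == 0)
                return 1;
        if (depth < MAX_DEPTH)
            seen[depth++] = (struct frame){ machine, input };
        return 0;
    }

    int main(void)
    {
        /* "<H^>" stands in for a machine description fed to itself. */
        printf("%d\n", would_repeat("<H^>", "<H^>"));  /* 0: first time   */
        printf("%d\n", would_repeat("<H^>", "<H^>"));  /* 1: exact repeat */
        return 0;
    }

Note that Mikko's closing objection still applies to such a check: detecting the repeat and aborting is not the same thing as a correct and complete simulation.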

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

https://news.novabbs.org/devel/article-flat.php?id=54557&group=comp.theory#54557

From: tristan.wibberley+netnews2@alumni.manchester.ac.uk (Tristan Wibberley)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Mon, 4 Mar 2024 09:59:21 +0000
Message-ID: <us461o$333cq$1@dont-email.me>

On 02/03/2024 22:54, Richard Damon wrote:
> On 3/2/24 5:28 PM, olcott wrote:
>> The reason that people assume that H1(D,D) must get
>> the same result as H(D,D) is that they make sure
>> to ignore the reason why they get a different result.
>
> Namely that you are lying that H and H1 are actually the same computation.

If they have machine addresses then they're objects of a non-applicative
formal system, even if they're embeddings therein of objects from an
applicative system. So they may continue to determine computations.

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

https://news.novabbs.org/devel/article-flat.php?id=54560&group=comp.theory#54560

From: richard@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?
Date: Mon, 4 Mar 2024 07:31:39 -0500
Message-ID: <us4eva$o3ci$2@i2pn2.org>

On 3/3/24 11:37 PM, olcott wrote:
> On 3/3/2024 10:25 PM, Richard Damon wrote:
>> On 3/3/24 10:32 PM, olcott wrote:
>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>
>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>
>>>>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>
>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>
>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>
>>>>>>>
>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>> "Carol's question" (augmented by Richards critique)
>>>>>>> is an example of the Liar Paradox.
>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>
>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>> professor Stoddart are all correct in that there is
>>>>>>> something wrong with the halting problem.
>>>>>>
>>>>>> Which since it is proven that Chat GPT doesn't actually know what
>>>>>> is a fact, and has been proven to lie,
>>>>>
>>>>> The first thing that it figured out on its own is that
>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>
>>>>> It eventually agreed with the same conclusion that
>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>> It took 34 pages of dialog to understand this. I
>>>>> finally have a good PDF of this.
>>>>>
>>>>
>>>> It didn't "figure it out"; it pattern-matched it to previous input
>>>> it has been given.
>>>>
>>>> If it took 34 pages to agree with your conclusion, then it really
>>>> didn't agree with you initially, but you finally trained it to your
>>>> version of reality.
>>>
>>> *HERE IS ITS AGREEMENT*
>>> When an input, such as the halting problem's pathological input D, is
>>> designed to contradict every value that the halting decider H returns,
>>> it creates a self-referential paradox that prevents H from providing a
>>> consistent and correct response. In this context, D can be seen as
>>> posing an incorrect question to H, as its contradictory nature
>>> undermines the possibility of a meaningful and accurate answer.
>>>
>>>
>>
>> Which means NOTHING, as an LLM will tell non-truths if fed misleading
>> information.
>
> The above paragraph is proven to be completely true entirely
> on the basis of the meaning of its words as these words were
> defined in the dialogue that precedes them.
>

Nope, the problem is you gave it incorrect implications on the meaning
of the words.

Since a Halt Decider is a Computation, and a Computation has a definite
behavior for a given input, there IS a definite answer to the question
of whether H^ (H^) halts. Thus, that is NOT an "improper question".

Thus, by the plain meaning of the words, it is an INCORRECT statement,
and you are proven, AGAIN, to be just an ignorant pathological liar.
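
Restated in standard notation (a textbook formulation, not quoted from the thread): determinism makes halting a definite property of each machine/input pair even though no single decider computes it:

    \forall M \,\forall w \;\bigl( M(w)\!\downarrow \ \lor\ M(w)\!\uparrow \bigr), \text{ with exactly one holding,}

    \text{yet}\quad \neg\exists H \,\forall M \,\forall w \;\bigl( H(\langle M\rangle, w) = 1 \iff M(w)\!\downarrow \bigr).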

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

https://news.novabbs.org/devel/article-flat.php?id=54561&group=comp.theory#54561

From: richard@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Actual limits of computations != actual limits of computers with unlimited memory ?
Date: Mon, 4 Mar 2024 07:40:18 -0500
Message-ID: <us4ffh$o3cj$1@i2pn2.org>

On 3/3/24 11:58 PM, olcott wrote:
> On 3/3/2024 10:25 PM, Richard Damon wrote:
>> On 3/3/24 9:39 PM, olcott wrote:
>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually the
>>>>>>>>>>>>>> same computation.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Which means that H and H1 are not computations, and you
>>>>>>>>>>>>>> have been just an ignorant pathological liar all this time.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the address
>>>>>>>>>>>>>> of the decider, which isn't defined as an "input" to it,
>>>>>>>>>>>>>> we see that you have been lying that this code is a
>>>>>>>>>>>>>> computation. Likely because you have made yourself
>>>>>>>>>>>>>> ignorant of what a computation actually is,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant Pathological
>>>>>>>>>>>>>> Lying Idiot.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Any description of a Turing Machine (or a Computation) that
>>>>>>>>>>>> needs to reference attributes of Modern Electronic Computers
>>>>>>>>>>>> is just WRONG as they predate the development of such a thing.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>
>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>
>>>>>>>>>> So?
>>>>>>>>>>
>>>>>>>>>> Doesn't let a "Computation" change its answer based on its
>>>>>>>>>> memory address.
>>>>>>>>>>
>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does
>>>>>>>>> not halt
>>>>>>>>>
>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>> simulation.
>>>>>>>>>
>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>
>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>
>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> No COMPUTATION can solve it, because it has been proved impossible.
>>>>>>>
>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>> actual very deep understanding of these things.
>>>>>>
>>>>>> Do computers actually UNDERSTAND?
>>>>>
>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>> The first thing that it does is categorize Carol's question
>>>>> as equivalent to the Liar Paradox.
>>>>
>>>> Nope, doesn't show what you claim, just that it has been taught by
>>>> "rote memorization" that the answer to a question put the way you
>>>> did is the answer it gave.
>>>>
>>>> You are just showing that YOU don't understand what the word
>>>> UNDERSTAND actually means.
>>>>
>>>>>
>>>>>>>
>>>>>>> This proves that the artifice of the human notion of
>>>>>>> computation is more limiting than actual real computers.
>>>>>>
>>>>>> In other words, you reject the use of definitions to define words.
>>>>>>
>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>
>>>>>
>>>>> I have found that it is the case that some definitions of
>>>>> technical terms sometimes box people into misconceptions
>>>>> such that alternative views are inexpressible within the
>>>>> technical language.
>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>
>>>> In other words, you are admitting that when you claim to be working
>>>> in a technical field and using the words as that field means, you
>>>> are just being an out-and-out LIAR.
>>>
>>> Not at all. When working with any technical definition I never
>>> simply assume that it is coherent. I always assume that it is
>>> possibly incoherent until proven otherwise.
>>
>> In other words, you ADMIT that you ignore technical definitions and
>> thus your comments about working in the field are just an ignorant
>> pathological lie.
>>
>>>
>>> If there are physically existing machines that can answer questions
>>> that are not Turing computable only because these machines can access
>>> their own machine address then these machines would be strictly more
>>> powerful than Turing Machines on these questions.
>>
>> Nope.
>
>
> If machine M can solve problems that machine N
> cannot solve then for these problems M is more
> powerful than N.


[remainder of article truncated]
Re: Why does H1(D,D) actually get a different result than H(D,D) ???

https://news.novabbs.org/devel/article-flat.php?id=54564&group=comp.theory#54564

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Mon, 4 Mar 2024 09:40:11 -0600
Message-ID: <us4q0r$37eej$1@dont-email.me>

On 3/4/2024 3:59 AM, Tristan Wibberley wrote:
> On 02/03/2024 22:54, Richard Damon wrote:
>> On 3/2/24 5:28 PM, olcott wrote:
>>> The reason that people assume that H1(D,D) must get
>>> the same result as H(D,D) is that they make sure
>>> to ignore the reason why they get a different result.
>>
>> Namely that you are lying that H and H1 are actually the same
>> computation.
>
> If they have machine addresses then they're objects of a non-applicative
> formal system, even if they're embeddings therein of objects from an
> applicative system. So they may continue to determine computations.

I do not understand what you are saying.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

https://news.novabbs.org/devel/article-flat.php?id=54566&group=comp.theory#54566

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Actual limits of computations != actual limits of computers with unlimited memory ?
Date: Mon, 4 Mar 2024 13:31:40 -0600
Message-ID: <us57it$3ag4o$1@dont-email.me>

On 3/4/2024 6:40 AM, Richard Damon wrote:
> On 3/3/24 11:58 PM, olcott wrote:
>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>> On 3/3/24 9:39 PM, olcott wrote:
>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually the
>>>>>>>>>>>>>>> same computation.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and you
>>>>>>>>>>>>>>> have been just an ignorant pathological liar all this time.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the address
>>>>>>>>>>>>>>> of the decider, which isn't defined as an "input" to it,
>>>>>>>>>>>>>>> we see that you have been lying that this code is a
>>>>>>>>>>>>>>> computation. Likely because you have made yourself
>>>>>>>>>>>>>>> ignorant of what a computation actually is,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant Pathological
>>>>>>>>>>>>>>> Lying Idiot.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any description of a Turing Machine (or a Computation) that
>>>>>>>>>>>>> needs to reference attributes of Modern Electronic Computers
>>>>>>>>>>>>> is just WRONG as they predate the development of such a thing.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>
>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>
>>>>>>>>>>> So?
>>>>>>>>>>>
>>>>>>>>>>> Doesn't let a "Computation" change its answer based on its
>>>>>>>>>>> memory address.
>>>>>>>>>>>
>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does
>>>>>>>>>> not halt
>>>>>>>>>>
>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>> simulation.
>>>>>>>>>>
>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>
>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>
>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>> impossible.
>>>>>>>>
>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>> actual very deep understanding of these things.
>>>>>>>
>>>>>>> Do computers actually UNDERSTAND?
>>>>>>
>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>> The first thing that it does is categorize Carol's question
>>>>>> as equivalent to the Liar Paradox.
>>>>>
>>>>> Nope, doesn't show what you claim, just that it has been taught by
>>>>> "rote memorization" that the answer to a question put the way you
>>>>> did is the answer it gave.
>>>>>
>>>>> You are just showing that YOU don't understand what the word
>>>>> UNDERSTAND actually means.
>>>>>
>>>>>>
>>>>>>>>
>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>
>>>>>>> In other words, you reject the use of definitions to define words.
>>>>>>>
>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>
>>>>>>
>>>>>> I have found that it is the case that some definitions of
>>>>>> technical terms sometimes box people into misconceptions
>>>>>> such that alternative views are inexpressible within the
>>>>>> technical language.
>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>
>>>>> In other words, you are admitting that when you claim to be working
>>>>> in a technical field and using the words as that field means, you
>>>>> are just being an out-and-out LIAR.
>>>>
>>>> Not at all. When working with any technical definition I never
>>>> simply assume that it is coherent. I always assume that it is
>>>> possibly incoherent until proven otherwise.
>>>
>>> In other words, you ADMIT that you ignore technical definitions and
>>> thus your comments about working in the field are just an ignorant
>>> pathological lie.
>>>
>>>>
>>>> If there are physically existing machines that can answer questions
>>>> that are not Turing computable only because these machines can access
>>>> their own machine address then these machines would be strictly more
>>>> powerful than Turing Machines on these questions.
>>>
>>> Nope.
>>
>>
>> If machine M can solve problems that machine N
>> cannot solve then for these problems M is more
>> powerful than N.
>
> But your H1 doesn't actually SOLVE the problem, as it fails on the input
> (H1^) (H1^)
>


[remainder of article truncated]
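
The objection quoted at the end, that H1 fails on the input built from H1 itself, is the general form of the proof's construction: every candidate decider has its own counterexample. A hedged C sketch of that schema; the macro and all names are illustrative, not the thread's code:

    /* Given any candidate decider DEC(p, i) that returns nonzero for
       "halts", define a program that does the opposite of whatever DEC
       predicts about it. Built from H this is the thread's D; built
       from H1 it is the (H1^) referred to above. */
    #define DEFINE_PATHOLOGICAL(NAME, DEC)                 \
        void NAME(void *self)                              \
        {                                                  \
            if (DEC(self, self))   /* DEC says "halts" */  \
                for (;;) ;         /* ...so loop forever */\
            /* DEC says "loops": halt immediately */       \
        }
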
Re: Why does H1(D,D) actually get a different result than H(D,D) ???

https://news.novabbs.org/devel/article-flat.php?id=54567&group=comp.theory#54567

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Mon, 4 Mar 2024 13:53:05 -0600
Message-ID: <us58r3$3aoj4$1@dont-email.me>

On 3/4/2024 3:22 AM, Mikko wrote:
> On 2024-03-03 18:47:29 +0000, olcott said:
>
>> On 3/3/2024 11:48 AM, Mikko wrote:
>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>
>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>
>>>>>> Nonetheless, actual computers do demonstrate
>>>>>> very deep understanding of these things.
>>>>>
>>>>> Not very deep, just deeper than you can achieve.
>>>>>
>>>>
>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>
>>> That does not demonstrate any understanding, even shallow.
>>>
>>
>> The first thing that it does is agree that Hehner's
>> "Carol's question" (augmented by Richards critique)
>> is an example of the Liar Paradox.
>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>
>> It ends up concluding that professor Hehner, professor
>> Stoddart, and I are all correct in that there is
>> something wrong with the halting problem.
>
> None of that demonstrates any understanding.
>
>> My persistent focus on these ideas gives me an increasingly
>> deeper understanding; thus my latest position is that the
>> halting problem proofs do not actually show that halting
>> is not computable.
>
> Your understanding is still defective and shallow.
>

If it really were shallow then a gap in my reasoning
could be pointed out. The actual case is that, because
I have focused on the same problem based on the Linz
proof for such a long time, I noticed things that no
one ever noticed before. *Post from 2004*

*On 7/25/2004 9:55 PM, Peter Olcott wrote: comp.theory*
> If the state transition diagram representation of the Halting Problem
> in An Introduction to Formal Languages and Automata, by Peter Linz,
> copyright 1990, pages 317-321, accurately represents the actual
> Halting Problem, then what is taken to be a proof that solving the
> Halting Problem is impossible cannot be correctly concluded
> from the proof.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us5a3c$3b05o$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=54569&group=comp.theory#54569
Newsgroups: comp.theory,sci.logic
 by: olcott - Mon, 4 Mar 2024 20:14 UTC

On 3/4/2024 6:31 AM, Richard Damon wrote:
> On 3/3/24 11:37 PM, olcott wrote:
>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>> On 3/3/24 10:32 PM, olcott wrote:
>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>
>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>
>>>>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>>>
>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>
>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>
>>>>>>>>
>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>> is an example of the Liar Paradox.
>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>
>>>>>>>> It ends up concluding that professor Hehner, professor
>>>>>>>> Stoddart, and I are all correct in that there is
>>>>>>>> something wrong with the halting problem.
>>>>>>>
>>>>>>> Which since it is proven that Chat GPT doesn't actually know what
>>>>>>> is a fact, and has been proven to lie,
>>>>>>
>>>>>> The first thing that it figured out on its own is that
>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>
>>>>>> It eventually agreed with the same conclusion that
>>>>>> professors Hehner and Stoddart and I agreed to.
>>>>>> It took 34 pages of dialog to understand this. I
>>>>>> finally have a good PDF of this.
>>>>>>
>>>>>
>>>>> It didn't "figure it out"; it pattern-matched it to previous input
>>>>> it has been given.
>>>>>
>>>>> If it took 34 pages to agree with your conclusion, then it really
>>>>> didn't agree with you initially, but you finally trained it to your
>>>>> version of reality.
>>>>
>>>> *HERE IS ITS AGREEMENT*
>>>> When an input, such as the halting problem's pathological input D, is
>>>> designed to contradict every value that the halting decider H returns,
>>>> it creates a self-referential paradox that prevents H from providing a
>>>> consistent and correct response. In this context, D can be seen as
>>>> posing an incorrect question to H, as its contradictory nature
>>>> undermines the possibility of a meaningful and accurate answer.
>>>>
>>>>
>>>
>>> Which means NOTHING, as LLMs will tell non-truths if fed misleading
>>> information.
>>
>> The above paragraph is proven to be completely true entirely
>> on the basis of the meaning of its words as these words were
>> defined in the dialogue that precedes them.
>>
>
> Nope, the problem is you gave it incorrect implications about the meaning
> of the words.

*ChatGPT 4.0 final analysis of everything that preceded it*
When an input, such as the halting problem's pathological input D, is
designed to contradict every value that the halting decider H returns,
it creates a self-referential paradox that prevents H from providing a
consistent and correct response. In this context, D can be seen as
posing an incorrect question to H, as its contradictory nature
undermines the possibility of a meaningful and accurate answer.

Within my stipulated definitions the above paragraph written by
Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
4.0 analysis is sound.

*People are not free to disagree with stipulative definitions*

A stipulative definition is a type of definition in which a new or
currently existing term is given a new specific meaning for the purposes
of argument or discussion in a given context. When the term already
exists, this definition may, but does not necessarily, contradict the
dictionary (lexical) definition of the term.
https://en.wikipedia.org/wiki/Stipulative_definition
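
A minimal sketch of the pathological construction that the ChatGPT
paragraph above describes, in C (the names H and D follow the
discussion; the ptr typedef and the function bodies are illustrative
assumptions, not anyone's actual code):

typedef int (*ptr)();   // function-pointer type used as the "input"

int H(ptr p, ptr i);    // hypothetical halt decider:
                        // 1 = p(i) halts, 0 = p(i) does not halt

int D(ptr x)
{
    if (H(x, x))        // ask H: does x applied to x halt?
        for (;;) {}     // H said "halts", so D loops forever
    return 0;           // H said "does not halt", so D halts
}

Whatever value H(D, D) returns, D(D) does the opposite; this is the
self-reference that the paragraph above is describing.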

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us5oi1$psb8$3@i2pn2.org>

https://news.novabbs.org/devel/article-flat.php?id=54574&group=comp.theory#54574
Newsgroups: comp.theory,sci.logic
 by: Richard Damon - Tue, 5 Mar 2024 00:21 UTC

On 3/4/24 3:14 PM, olcott wrote:
> On 3/4/2024 6:31 AM, Richard Damon wrote:
>> On 3/3/24 11:37 PM, olcott wrote:
>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>
>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>
>>>>>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>>>>
>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>
>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>
>>>>>>>>> It ends up concluding that professor Hehner, professor
>>>>>>>>> Stoddart, and I are all correct in that there is
>>>>>>>>> something wrong with the halting problem.
>>>>>>>>
>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>
>>>>>>> The first thing that it figured out on its own is that
>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>
>>>>>>> It eventually agreed with the same conclusion that
>>>>>>> professors Hehner and Stoddart and I agreed to.
>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>> finally have a good PDF of this.
>>>>>>>
>>>>>>
>>>>>> It didn't "figure it out"; it pattern-matched it to previous input
>>>>>> it has been given.
>>>>>>
>>>>>> If it took 34 pages to agree with your conclusion, then it really
>>>>>> didn't agree with you initially, but you finally trained it to
>>>>>> your version of reality.
>>>>>
>>>>> *HERE IS ITS AGREEMENT*
>>>>> When an input, such as the halting problem's pathological input D, is
>>>>> designed to contradict every value that the halting decider H returns,
>>>>> it creates a self-referential paradox that prevents H from providing a
>>>>> consistent and correct response. In this context, D can be seen as
>>>>> posing an incorrect question to H, as its contradictory nature
>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>
>>>>>
>>>>
>>>> Which means NOTHING, as LLMs will tell non-truths if fed misleading
>>>> information.
>>>
>>> The above paragraph is proven to be completely true entirely
>>> on the basis of the meaning of its words as these words were
>>> defined in the dialogue that precedes them.
>>>
>>
>> Nope, the problem is you gave it incorrect implications about the meaning
>> of the words.
>
> *ChatGPT 4.0 final analysis of everything that preceded it*
> When an input, such as the halting problem's pathological input D, is
> designed to contradict every value that the halting decider H returns,
> it creates a self-referential paradox that prevents H from providing a
> consistent and correct response. In this context, D can be seen as
> posing an incorrect question to H, as its contradictory nature
> undermines the possibility of a meaningful and accurate answer.
>
> Within my definitions of my terms the above paragraph written by
> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
> 4.0 analysis is sound.
>
> *People are not free to disagree with stipulative definitions*
>
> A stipulative definition is a type of definition in which a new or
> currently existing term is given a new specific meaning for the purposes
> of argument or discussion in a given context. When the term already
> exists, this definition may, but does not necessarily, contradict the
> dictionary (lexical) definition of the term.
> https://en.wikipedia.org/wiki/Stipulative_definition
>
>
>

Right, and by that EXACT SAME RULE, when you "stipulate" a definition
different from the one stipulated by a field, you place yourself outside
that field, and if you still claim to be working in it, you are just
admitting to being a bald-faced LIAR.

And, no, CHAT GPT's analysis is NOT "sound", at least not in the field
you claim to be working in, as that field has definitions that must be
followed, which you don't follow.

So, maybe it is sound POOP logic, but not sound Computation logic.

And you are just admitting you have been lying all these years.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us5ojq$psb8$4@i2pn2.org>

https://news.novabbs.org/devel/article-flat.php?id=54575&group=comp.theory#54575
Newsgroups: comp.theory
 by: Richard Damon - Tue, 5 Mar 2024 00:22 UTC

On 3/4/24 2:53 PM, olcott wrote:
> On 3/4/2024 3:22 AM, Mikko wrote:
>> On 2024-03-03 18:47:29 +0000, olcott said:
>>
>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>
>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>
>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>> very deep understanding of these things.
>>>>>>
>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>
>>>>>
>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>
>>>> That does not demonstrate any understanding, even shallow.
>>>>
>>>
>>> The first thing that it does is agree that Hehner's
>>> "Carol's question" (augmented by Richard's critique)
>>> is an example of the Liar Paradox.
>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>
>>> It ends up concluding that professor Hehner, professor
>>> Stoddart, and I are all correct in that there is
>>> something wrong with the halting problem.
>>
>> None of that demonstrates any understanding.
>>
>>> My persistent focus on these ideas gives me an increasingly
>>> deeper understanding; thus my latest position is that the
>>> halting problem proofs do not actually show that halting
>>> is not computable.
>>
>> Your understanding is still defective and shallow.
>>
>
> If it really were shallow then a gap in my reasoning
> could be pointed out. The actual case is that, because
> I have focused on the same problem based on the Linz
> proof for such a long time, I noticed things that no
> one ever noticed before. *Post from 2004*

It has been.

You are just too stupid to understand.

You can't fix intentional stupid.

>
> *On 7/25/2004 9:55 PM, Peter Olcott wrote: comp.theory*
> > If the state transition diagram representation of the Halting Problem
> > in An Introduction to Formal Languages and Automata, by Peter Linz,
> > copyright 1990, pages 317-321, accurately represents the actual
> > Halting Problem, then what is taken to be a proof that solving the
> > Halting Problem is impossible cannot be correctly concluded
> > from the proof.
>

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us5q58$3drq0$2@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=54577&group=comp.theory#54577
Newsgroups: comp.theory,sci.logic
 by: olcott - Tue, 5 Mar 2024 00:48 UTC

On 3/4/2024 6:21 PM, Richard Damon wrote:
> On 3/4/24 3:14 PM, olcott wrote:
>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>> On 3/3/24 11:37 PM, olcott wrote:
>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>
>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>
>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>
>>>>>>>>>> It ends up concluding that professor Hehner, professor
>>>>>>>>>> Stoddart, and I are all correct in that there is
>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>
>>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>>
>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>
>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>> professors Hehner and Stoddart and I agreed to.
>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>> finally have a good PDF of this.
>>>>>>>>
>>>>>>>
>>>>>>> It didn't "figure it out"; it pattern-matched it to previous
>>>>>>> input it has been given.
>>>>>>>
>>>>>>> If it took 34 pages to agree with your conclusion, then it really
>>>>>>> didn't agree with you initially, but you finally trained it to
>>>>>>> your version of reality.
>>>>>>
>>>>>> *HERE IS ITS AGREEMENT*
>>>>>> When an input, such as the halting problem's pathological input D, is
>>>>>> designed to contradict every value that the halting decider H
>>>>>> returns,
>>>>>> it creates a self-referential paradox that prevents H from
>>>>>> providing a
>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>
>>>>>>
>>>>>
>>>>> Which means NOTHING, as LLMs will tell non-truths if fed misleading
>>>>> information.
>>>>
>>>> The above paragraph is proven to be completely true entirely
>>>> on the basis of the meaning of its words as these words were
>>>> defined in the dialogue that precedes them.
>>>>
>>>
>>> Nope, the problem is you gave it incorrect implications about the
>>> meaning of the words.
>>
>> *ChatGPT 4.0 final analysis of everything that preceded it*
>> When an input, such as the halting problem's pathological input D, is
>> designed to contradict every value that the halting decider H returns,
>> it creates a self-referential paradox that prevents H from providing a
>> consistent and correct response. In this context, D can be seen as
>> posing an incorrect question to H, as its contradictory nature
>> undermines the possibility of a meaningful and accurate answer.
>>
>> Within my stipulated definitions the above paragraph written by
>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>> 4.0 analysis is sound.
>>
>> *People are not free to disagree with stipulative definitions*
>>
>> A stipulative definition is a type of definition in which a new or
>> currently existing term is given a new specific meaning for the purposes
>> of argument or discussion in a given context. When the term already
>> exists, this definition may, but does not necessarily, contradict the
>> dictionary (lexical) definition of the term.
>> https://en.wikipedia.org/wiki/Stipulative_definition
>>
>>
>>
>
>
> Right, and by that EXACT SAME RULE, when you "stipulate" a definition
> different from the one stipulated by a field, you place yourself outside
> that field, and if you still claim to be working in it, you are just
> admitting to being a bald-faced LIAR.
>

Not exactly. When I stipulate a definition that shows the
incoherence of the conventional definitions, then I am working
at the foundational level above this field.

> And, no, CHAT GPT's analysis is NOT "sound", at least not in the field
> you claim to be working in, as that field has definitions that must be
> followed, which you don't follow.
>

It is perfectly sound within my stipulated definitions.
Or we could say that it is perfectly valid when one takes
my definitions as its starting premises.

> So, maybe it is sound POOP logic, but not sound Computation logic.
>
> And you are just admitting you have been lying all these years.

My stipulative definitions show that the conventional
ones are established within an incoherent foundation.

Your reviews continue to be very helpful to me in
making my points ever more clearly.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩

<us5r0p$3drq0$4@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=54579&group=comp.theory#54579
Newsgroups: comp.theory,sci.logic
 by: olcott - Tue, 5 Mar 2024 01:03 UTC

On 3/4/2024 6:22 PM, Richard Damon wrote:
> On 3/4/24 2:53 PM, olcott wrote:
>> On 3/4/2024 3:22 AM, Mikko wrote:
>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>
>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>
>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>
>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>> very deep understanding of these things.
>>>>>>>
>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>
>>>>>>
>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>
>>>>> That does not demonstrate any understanding, even shallow.
>>>>>
>>>>
>>>> The first thing that it does is agree that Hehner's
>>>> "Carol's question" (augmented by Richard's critique)
>>>> is an example of the Liar Paradox.
>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>
>>>> It ends up concluding that professor Hehner, professor
>>>> Stoddart, and I are all correct in that there is
>>>> something wrong with the halting problem.
>>>
>>> None of that demonstrates any understanding.
>>>
>>>> My persistent focus on these ideas gives me an increasingly
>>>> deeper understanding; thus my latest position is that the
>>>> halting problem proofs do not actually show that halting
>>>> is not computable.
>>>
>>> Your understanding is still defective and shallow.
>>>
>>
>> If it really were shallow then a gap in my reasoning
>> could be pointed out. The actual case is that, because
>> I have focused on the same problem based on the Linz
>> proof for such a long time, I noticed things that no
>> one ever noticed before. *Post from 2004*
>
> It has been.
>
> You are just too stupid to understand.
>
> You can't fix intentional stupid.
>

You can't see outside the box of the incorrect foundation
of the notion of analytic truth.

Hardly anyone knows that the Liar Paradox is not a truth bearer, and
even fewer people understand that epistemological antinomies are not
truth bearers.

All the people that do not understand that epistemological antinomies
are not truth bearers cannot understand that asking a question about the
truth value of an epistemological antinomy is a mistake.

These people lack the basis to understand that decision problem/input
pairs asking about the truth value of an epistemological antinomy are
a mistake.

People lacking these prerequisite understandings simply write off what
they do not understand as nonsense.
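
One conventional way to formalize the point (an illustration, not a
quotation from any of the cited authors):

LP := ~True(LP)   // "this sentence is not true"

If LP is true then, by its own content, it is not true; if LP is false
then what it says holds, so it is true. No consistent truth value can
be assigned, which is what "not a truth bearer" means here.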

Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

Simulating termination analyzer Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition
to Ĥ.Hqn or fail to halt.

When its sole criterion measure is to always say NO to every input
that would prevent it from halting, then it must say NO to ⟨Ĥ⟩ ⟨Ĥ⟩.

When H ⟨Ĥ⟩ ⟨Ĥ⟩ correctly uses this exact same criterion measure,
then the "abort simulation" criterion <is not> met, thus providing
the correct basis for a different answer.
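
A minimal sketch of this criterion measure in C (simulate_one_step and
would_never_halt are hypothetical helpers standing in for the
analyzer's detection logic; this only illustrates the stated
criterion, it is not a definitive implementation):

typedef int (*ptr)();

extern int simulate_one_step(ptr p, ptr i); // 1 when simulated p(i) returns
extern int would_never_halt(ptr p, ptr i);  // 1 when a pattern is seen that
                                            // would keep H simulating forever

int H(ptr p, ptr i)
{
    for (;;) {
        if (simulate_one_step(p, i))
            return 1;   // the simulated input halted: answer YES
        if (would_never_halt(p, i))
            return 0;   // abort the simulation: answer NO
    }
}

As written, H either answers or simulates forever, which mirrors
"must transition to Ĥ.Hqn or fail to halt" above.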

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

<us5sks$psb9$1@i2pn2.org>

https://news.novabbs.org/devel/article-flat.php?id=54580&group=comp.theory#54580
Newsgroups: comp.theory,sci.logic
 by: Richard Damon - Tue, 5 Mar 2024 01:31 UTC

On 3/4/24 2:31 PM, olcott wrote:
> On 3/4/2024 6:40 AM, Richard Damon wrote:
>> On 3/3/24 11:58 PM, olcott wrote:
>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually the
>>>>>>>>>>>>>>>> same computation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and you
>>>>>>>>>>>>>>>> have been just an ignorant pathological liar all this time.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>>> thus D does not call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the address
>>>>>>>>>>>>>>>> of the decider, which isn't defined as an "input" to it,
>>>>>>>>>>>>>>>> we see that you have been lying that this code is a
>>>>>>>>>>>>>>>> computation. Likely because you have made yourself
>>>>>>>>>>>>>>>> ignorant of what a computation actually is,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any description of a Turing Machine (or a Computation)
>>>>>>>>>>>>>> that needs to reference attributes of Modern Electronic
>>>>>>>>>>>>>> Computers is just WRONG, as they predate the development of
>>>>>>>>>>>>>> such a thing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>
>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>
>>>>>>>>>>>> So?
>>>>>>>>>>>>
>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on its
>>>>>>>>>>>> memory address.
>>>>>>>>>>>>
>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does
>>>>>>>>>>> not halt
>>>>>>>>>>>
>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>> simulation.
>>>>>>>>>>>
>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>
>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>
>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>> impossible.
>>>>>>>>>
>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>> very deep understanding of these things.
>>>>>>>>
>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>
>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>> as equivalent to the Liar Paradox.
>>>>>>
>>>>>> Nope, doesn't show what you claim, just that it has been taught by
>>>>>> "rote memorization" that the answer to a question put the way you
>>>>>> did is the answer it gave.
>>>>>>
>>>>>> You are just showing that YOU don't understand what the word
>>>>>> UNDERSTAND actually means.
>>>>>>
>>>>>>>
>>>>>>>>>
>>>>>>>>> This proves that the artificial human notion of
>>>>>>>>> computation is more limiting than actual physical computers.
>>>>>>>>
>>>>>>>> In other words, you reject the use of definitions to define words.
>>>>>>>>
>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>
>>>>>>>
>>>>>>> I have found that some definitions of technical terms
>>>>>>> sometimes box people into misconceptions such that
>>>>>>> alternative views are inexpressible within the
>>>>>>> technical language.
>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>
>>>>>> In other words, you are admitting that when you claim to be working
>>>>>> in a technical field and using the words as that field means, you
>>>>>> are just being an out-and-out LIAR.
>>>>>
>>>>> Not at all. When working with any technical definition I never
>>>>> simply assume that it is coherent. I always assume that it is
>>>>> possibly incoherent until proven otherwise.
>>>>
>>>> In other words, you ADMIT that you ignore technical definitions and
>>>> thus your comments about working in the field are just an ignorant
>>>> pathological lie.
>>>>
>>>>>
>>>>> If there are physically existing machines that can answer questions
>>>>> that are not Turing computable only because these machines can access
>>>>> their own machine address then these machines would be strictly more
>>>>> powerful than Turing Machines on these questions.
>>>>
>>>> Nope.
>>>
>>>
>>> If machine M can solve problems that machine N
>>> cannot solve then for these problems M is more
>>> powerful than N.
>>
>> But your H1 doesn't actually SOLVE the problem, as it fails on the
>> input (H1^) (H1^)
>>
>
> I am not even talking about that.
> In this new thread I am only talking about the generic case of:
> *Actual limits of computations != actual limits of computers*
> *with unlimited memory*
>
>> Note, I realise I misspoke a bit. Any "Non-computation" sub-program
>> can be turned into a Computation, just by being honest and declaring
>> as inputs the "Hidden Data" that it is using (sketched below).
>>
>>>
>>>>
>>>> But you just admitted you are too ignorant of the actual meaning to
>>>> make a reasoned statement and too dishonest to concede that, even
>>>> after admitting it,
>>>>
>>>>>
>>>>> If computability only means what can't be done in a certain artificially
>>>>> limited way, and not any actual limit on what computers can actually
>>>>> do, then computability would seem to be nonsense.
>>>>>
>>>
>>> Try and explain how this would not be nonsense.
>>
>> First, it ISN'T "Artificial", it is a natural outcome of the sorts of
>> problems we actually want to solve.
>>
>
> If there is a physical machine that can solve problems that a Turing
> machine cannot solve then we are only pretending that the limits of
> computation are the limits of computers.
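
A minimal sketch of the "declare the hidden data as inputs" point
quoted above, in C (decide is a hypothetical helper; the fragment is
illustrative only):

typedef int (*ptr)();

extern int decide(ptr p, ptr i, void *self); // hypothetical core logic

int H_hidden(ptr p, ptr i)    // NOT a computation of (p, i): the result
{                             // also depends on where this copy of the
                              // code happens to be loaded
    return decide(p, i, (void *)H_hidden);
}

int H_honest(ptr p, ptr i, void *self) // the same logic, with the hidden
{                                      // datum now a declared input: a
    return decide(p, i, self);         // function of its inputs again
}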


[remainder of article truncated]
Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us5vc9$psb9$4@i2pn2.org>

https://news.novabbs.org/devel/article-flat.php?id=54585&group=comp.theory#54585
Newsgroups: comp.theory,sci.logic
 by: Richard Damon - Tue, 5 Mar 2024 02:17 UTC

On 3/4/24 7:48 PM, olcott wrote:
> On 3/4/2024 6:21 PM, Richard Damon wrote:
>> On 3/4/24 3:14 PM, olcott wrote:
>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>
>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>
>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>
>>>>>>>>>>> It ends up concluding that professor Hehner, professor
>>>>>>>>>>> Stoddart, and I are all correct in that there is
>>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>>
>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>>>
>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>
>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>> professors Hehner and Stoddart and I agreed to.
>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>
>>>>>>>>
>>>>>>>> It didn't "figure it out"; it pattern-matched it to previous
>>>>>>>> input it has been given.
>>>>>>>>
>>>>>>>> If it took 34 pages to agree with your conclusion, then it
>>>>>>>> really didn't agree with you initially, but you finally trained
>>>>>>>> it to your version of reality.
>>>>>>>
>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>> When an input, such as the halting problem's pathological input
>>>>>>> D, is
>>>>>>> designed to contradict every value that the halting decider H
>>>>>>> returns,
>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>> providing a
>>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Which means NOTHING, as LLMs will tell non-truths if fed misleading
>>>>>> information.
>>>>>
>>>>> The above paragraph is proven to be completely true entirely
>>>>> on the basis of the meaning of its words as these words were
>>>>> defined in the dialogue that precedes them.
>>>>>
>>>>
>>>> Nope, the problem is you gave it incorrect implications about the
>>>> meaning of the words.
>>>
>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>> When an input, such as the halting problem's pathological input D, is
>>> designed to contradict every value that the halting decider H returns,
>>> it creates a self-referential paradox that prevents H from providing a
>>> consistent and correct response. In this context, D can be seen as
>>> posing an incorrect question to H, as its contradictory nature
>>> undermines the possibility of a meaningful and accurate answer.
>>>
>>> Within my stipulated definitions the above paragraph written by
>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>> 4.0 analysis is sound.
>>>
>>> *People are not free to disagree with stipulative definitions*
>>>
>>> A stipulative definition is a type of definition in which a new or
>>> currently existing term is given a new specific meaning for the purposes
>>> of argument or discussion in a given context. When the term already
>>> exists, this definition may, but does not necessarily, contradict the
>>> dictionary (lexical) definition of the term.
>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>
>>>
>>>
>>
>>
>> Right, and by that EXACT SAME RULE, when you "stipulate" a definition
>> different from the one stipulated by a field, you place yourself outside
>> that field, and if you still claim to be working in it, you are just
>> admitting to being a bald-faced LIAR.
>>
>
> Not exactly. When I stipulate a definition that shows the
> incoherence of the conventional definitions, then I am working
> at the foundational level above this field.

Nope, if you change the definition of the field, you are in a new field.

Just like ZFC Set Theory isn't Naive Set Theory, if you change the
basis, you are in a new field.

>
>> And, no, CHAT GPT's analysis is NOT "sound", at least not in the field
>> you claim to be working in, as that field has definitions that must be
>> followed, which you don't follow.
>>
>
> It is perfectly sound within my stipulated definitions.
> Or we could say that it is perfectly valid when one takes
> my definitions as its starting premises.

And not when you look at the field you claim to be in.

That just make you a LIAR.

>
>> So, maybe it is sound POOP logic, but not sound Computation logic.
>>
>> And you are just admitting you have been lying all these years.
>
> My stipulative definitions show that the conventional
> ones are established within an incoherent foundation.
>
> Your reviews continue to be very helpful to me in
> making my points ever more clearly.
>

Then build your new foundations and stop lying that you are working in
the ones you say are broken.

Only an idiot works in a field they think is broken; maybe that is why
you are doing it.

Of course, this means you need to learn enough of foundation building to
build your new system. That is a fairly rigorous task.

Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩

<us5vm7$psb9$5@i2pn2.org>

https://news.novabbs.org/devel/article-flat.php?id=54586&group=comp.theory#54586
Newsgroups: comp.theory,sci.logic
 by: Richard Damon - Tue, 5 Mar 2024 02:23 UTC

On 3/4/24 8:03 PM, olcott wrote:
> On 3/4/2024 6:22 PM, Richard Damon wrote:
>> On 3/4/24 2:53 PM, olcott wrote:
>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>
>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>
>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>
>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>> very deep understanding of these things.
>>>>>>>>
>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>
>>>>>>>
>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>
>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>
>>>>>
>>>>> The first thing that it does is agree that Hehner's
>>>>> "Carol's question" (augmented by Richard's critique)
>>>>> is an example of the Liar Paradox.
>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>
>>>>> It ends up concluding that professor Hehner, professor
>>>>> Stoddart, and I are all correct in that there is
>>>>> something wrong with the halting problem.
>>>>
>>>> None of that demonstrates any understanding.
>>>>
>>>>> My persistent focus on these ideas gives me an increasingly
>>>>> deeper understanding; thus my latest position is that the
>>>>> halting problem proofs do not actually show that halting
>>>>> is not computable.
>>>>
>>>> Your understanding is still defective and shallow.
>>>>
>>>
>>> If it really were shallow then a gap in my reasoning
>>> could be pointed out. The actual case is that, because
>>> I have focused on the same problem based on the Linz
>>> proof for such a long time, I noticed things that no
>>> one ever noticed before. *Post from 2004*
>>
>> It has been.
>>
>> You are just too stupid to understand.
>>
>> You can't fix intentional stupid.
>>
>
> You can't see outside the box of the incorrect foundation
> of the notion of analytic truth.
>
> Hardly anyone knows that the Liar Paradox is not a truth bearer, and
> even fewer people understand that epistemological antinomies are not
> truth bearers.
>
> All the people that do not understand that epistemological antinomies
> are not truth bearers cannot understand that asking a question about the
> truth value of an epistemological antinomy is a mistake.
>
> These people lack the basis to understand that decision problem/input
> pairs asking about the truth value of an epistemological antinomy are
> a mistake.

If the foundation is so bad, why are you still in it?

You are effectively proving you can't do better by staying.

>
> People lacking these prerequisite understandings simply write off what
> they do not understand as nonsense.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn     // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> Simulating termination analyzer Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition
> to Ĥ.Hqn or fail to halt.

And thus you LIE that you are working on the Halting Problem, by using
the wrong criteria.

>
> When its sole criterion measure is to always say NO to every input
> that would prevent it from halting, then it must say NO to ⟨Ĥ⟩ ⟨Ĥ⟩.
>

Which makes it a LIE to claim you are working on the Halting Problem

> When H ⟨Ĥ⟩ ⟨Ĥ⟩ correctly uses this exact same criterion measure,
> then the "abort simulation" criterion <is not> met, thus providing
> the correct basis for a different answer.
>

Which just proves you are a PATHOLOGICAL LIAR since you keep on
insisting that you are working on the Halting problem when you are using
the wrong criteria.

The right answer to the wrong question is not a right answer, since it
is the question that matters, and to say otherwise is just a LIE.

Limits of computations != actual limits of computers [ Church Turing ]

<us5vvi$3ii6o$1@dont-email.me>

https://news.novabbs.org/devel/article-flat.php?id=54587&group=comp.theory#54587
Newsgroups: comp.theory,sci.logic
 by: olcott - Tue, 5 Mar 2024 02:28 UTC

On 3/4/2024 7:31 PM, Richard Damon wrote:
> On 3/4/24 2:31 PM, olcott wrote:
>> On 3/4/2024 6:40 AM, Richard Damon wrote:
>>> On 3/3/24 11:58 PM, olcott wrote:
>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually
>>>>>>>>>>>>>>>>> the same computation.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and you
>>>>>>>>>>>>>>>>> have been just an ignorant pathological liar all this
>>>>>>>>>>>>>>>>> time.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>>>> thus D does not call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the
>>>>>>>>>>>>>>>>> address of the decider, which isn't defined as an
>>>>>>>>>>>>>>>>> "input" to it, we see that you have been lying that
>>>>>>>>>>>>>>>>> this code is a computation, likely because you have
>>>>>>>>>>>>>>>>> made yourself ignorant of what a computation actually is.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any description of a Turing Machine (or a Computation)
>>>>>>>>>>>>>>> that needs to reference attributes of Modern Electronic
>>>>>>>>>>>>>>> Computers is just WRONG, as Turing Machines predate the
>>>>>>>>>>>>>>> development of such things.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on its
>>>>>>>>>>>>> memory address.
>>>>>>>>>>>>>
>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>> does not halt
>>>>>>>>>>>>
>>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>>> simulation.
>>>>>>>>>>>>
>>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>>
>>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>>
>>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>>> impossible.
>>>>>>>>>>
>>>>>>>>>> Nonetheless actual computers do demonstrate
>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>
>>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>>
>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>>> as equivalent to the Liar Paradox.
>>>>>>>
>>>>>>> Nope, doesn't show what you claim, just that it has been taught
>>>>>>> by "rote memorization" that the answer to a question put the way
>>>>>>> you did is the answer it gave.
>>>>>>>
>>>>>>> You are just showing that YOU don't understand what the word
>>>>>>> UNDERSTAND actually means.
>>>>>>>
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>>>
>>>>>>>>> In other words, you reject the use of definitions to define words.
>>>>>>>>>
>>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>>
>>>>>>>>
>>>>>>>> I have found that some definitions of technical terms
>>>>>>>> box people into misconceptions such that alternative
>>>>>>>> views are inexpressible within the technical language.
>>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>>
>>>>>>> In other words, you are admitting that when you claim to be
>>>>>>> working in a technical field and using the words as that field
>>>>>>> defines them, you are just being an out-and-out LIAR.
>>>>>>
>>>>>> Not at all. When working with any technical definition I never
>>>>>> simply assume that it is coherent. I always assume that it is
>>>>>> possibly incoherent until proven otherwise.
>>>>>
>>>>> In other words, you ADMIT that you ignore technical definitions and
>>>>> thus your comments about working in the field are just an ignorant
>>>>> pathological lie.
>>>>>
>>>>>>
>>>>>> If there are physically existing machines that can answer questions
>>>>>> that are not Turing computable only because these machines can access
>>>>>> their own machine address, then these machines would be strictly more
>>>>>> powerful than Turing Machines on these questions.
>>>>>
>>>>> Nope.
>>>>
>>>>
>>>> If machine M can solve problems that machine N
>>>> cannot solve then for these problems M is more
>>>> powerful than N.
>>>
>>> But your H1 doesn't actually SOLVE the problem, as it fails on the
>>> input (H1^) (H1^)
>>>
>>
>> I am not even talking about that.
>> In this new thread I am only talking about the generic case of:
>> *Actual limits of computations != actual limits of computers*
>> *with unlimited memory*
>>
>>> Note, I realise I misspoke a bit. Any "Non-computation" sub-program
>>> can be turned into a Computation, just by being honest and declaring
>>> as inputs the "Hidden Data" that it is using.
>>>
>>>>
>>>>>
>>>>> But you just admitted you are too ignorant of the actual meaning to
>>>>> make a reasoned statement and too dishonest to concede that, even
>>>>> after admitting it,
>>>>>
>>>>>>
>>>>>> If computability only means what can't be done in a certain
>>>>>> artificially limited way, and not any actual limit on what computers
>>>>>> can actually do, then computability would seem to be nonsense.
>>>>>>
>>>>
>>>> Try and explain how this would not be nonsense.
>>>
>>> First, it ISN'T "Artificial", it is a natural outcome of the sorts of
>>> problems we actually want to solve.
>>>
>>
>> If there is a physical machine that can solve problems that a Turing
>> machine cannot solve then we are only pretending that the limits of
>> computation are the limits of computers.
>
> But there isn't.
>
> At least not problems that can be phrased as a computation


[...]
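
The address dependence disputed above can be made concrete with a minimal
C sketch (illustrative only; hypothetical code, not the x86 system the
thread discusses). Two textually identical functions derive their result
from their own code address, so what they return varies with where the
linker places them rather than with their declared inputs (p, i):

#include <stdint.h>
#include <stdio.h>

/* Two textually identical "deciders" whose result depends on
   their own code address.  Because the answer varies with
   linker placement rather than with the arguments (p, i),
   neither mapping is a computation in the Turing sense. */
static int H(void *p, void *i)  { (void)p; (void)i; return (int)(((uintptr_t)(void *)&H)  >> 4) & 1; }
static int H1(void *p, void *i) { (void)p; (void)i; return (int)(((uintptr_t)(void *)&H1) >> 4) & 1; }

int main(void) {
    printf("H  at %p returns %d\n", (void *)&H,  H(NULL, NULL));
    printf("H1 at %p returns %d\n", (void *)&H1, H1(NULL, NULL));
    return 0;
}

Whether the two calls happen to print the same digit depends on where the
linker puts H and H1; the point is only that the result depends on
placement, not on the inputs.
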
Re: Limits of computations != actual limits of computers [ Church Turing ]

https://news.novabbs.org/devel/article-flat.php?id=54588&group=comp.theory#54588

From: richard@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Limits of computations != actual limits of computers [ Church Turing ]
Date: Mon, 4 Mar 2024 22:02:35 -0500
Message-ID: <us6209$psb9$6@i2pn2.org>
In-Reply-To: <us5vvi$3ii6o$1@dont-email.me>
 by: Richard Damon - Tue, 5 Mar 2024 03:02 UTC

On 3/4/24 9:28 PM, olcott wrote:

[...]

> If (hypothetically) there are physical computers that can
> solve decision problems that Turing machines cannot solve
> then the notion of computability is not any actual real
> limit; it is merely a fake limit.


[...]
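
Richard Damon's remark quoted earlier, that any "non-computation"
sub-program can be turned into a computation by declaring its hidden
data as an input, can be sketched the same way (hypothetical names,
same assumptions as the sketch above):

#include <stdint.h>

/* Once the hidden datum -- the decider's own address -- is an
   explicit parameter, the routine is again a pure function of
   its inputs.  H and H1 then cease to be different computations:
   they are one computation applied to different arguments. */
static int H_honest(void *p, void *i, uintptr_t own_address) {
    (void)p; (void)i;
    return (int)(own_address >> 4) & 1;  /* same input-to-output mapping everywhere */
}

The price of this honesty is that the two former "deciders" now visibly
receive different inputs, so a different answer from each no longer
needs explaining.
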
Re: Limits of computations != actual limits of computers [ Church Turing ]

https://news.novabbs.org/devel/article-flat.php?id=54589&group=comp.theory#54589

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Limits of computations != actual limits of computers [ Church Turing ]
Date: Mon, 4 Mar 2024 21:55:51 -0600
Message-ID: <us6548$3jd6k$1@dont-email.me>
In-Reply-To: <us6209$psb9$6@i2pn2.org>
 by: olcott - Tue, 5 Mar 2024 03:55 UTC

On 3/4/2024 9:02 PM, Richard Damon wrote:

[...]

>> If (hypothetically) there are physical computers that can
>> solve decision problems that Turing machines cannot solve
>> then the notion of computability is not any actual real
>> limit; it is merely a fake limit.
>
> Except that it has been shown that there isn't such a thing, so your
> hypothetical is just a trip into fantasy land.
>


[...]
Re: Finlayson [ Church Turing ]

https://news.novabbs.org/devel/article-flat.php?id=54590&group=comp.theory#54590

From: ross.a.finlayson@gmail.com (Ross Finlayson)
Newsgroups: comp.theory,sci.logic
Subject: Re: Finlayson [ Church Turing ]
Date: Mon, 4 Mar 2024 19:58:03 -0800
Message-ID: <0aWcnebHVN2hBXv4nZ2dnZfqnPadnZ2d@giganews.com>
In-Reply-To: <us5vvi$3ii6o$1@dont-email.me>
 by: Ross Finlayson - Tue, 5 Mar 2024 03:58 UTC

On 03/04/2024 06:28 PM, olcott wrote:

[...]

> If there is an actual limit such that a Turing-equivalent machine
> can't know its own machine address, I expect that either this
> limit is fake or machines that can know their own machine
> address (and have unlimited memory) may be more powerful
> than Turing machines.
>
> In computability theory, the Church–Turing thesis...
> is a thesis about the nature of computable functions. It states
> that a function on the natural numbers can be calculated by an
> effective method if and only if it is computable by a Turing machine.
> https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
>
> If a machine that can know its own machine address is an
> aspect of the requirements to solve any decision problem
> such as the halting problem, and the finite string input
> to such a machine is construed as a natural number, and
> halting is construed as a function on natural numbers,
> then such a machine would seem to refute Church/Turing.
>

Some theories of the natural numbers, have that,
there is some "standard model", of the natural numbers.

Some theories of the natural numbers, have that,
there is not a "standard model", of the natural numbers,
only "fragments", and, "extensions".

Most of the time, that there is a, "law of large numbers",
is according to infinite induction wiling away the finite,
to zero, and along the way, linearly.

Some theories have that there are, "law(s) of large numbers",
and it depends on the space, and, the practical, the potential,
the effective, the actual: infinity, of the numbers, with
regards to classical expectations or standard expectations,
and non-classical expectations or non-standard expectations
or the non-linear.

You'll usually find these notions in probability theory,
theories of the non-standard and theories of the non-classical,
the non-linear, to explain why things usually framed in the
Central Limit Theorem, don't pan out, vis-a-vis the long tail
or "error record", and as for, ...,
"functions that converge very sloooww-ly",
and also "functions that belie their finite inputs".

So, any Turing model you've built does not have an infinite
tape; it has an unbounded tape.

And, there are functions of natural numbers,
"computed", not by it, the unbounded Turing machine,
in these theories with "law(s), plural", of large numbers,
and what's called non-classical, non-standard, or non-linear,
but isn't just called wrong, because it's so.

So, Church-Rice theorem, Rice theorem, and Church-Turing thesis,
with regards to numbers: are incomplete.

One might qualify that by declaring what all's "classical,
standard, and linear", establishing a distinctness result,
instead of a wrong uniqueness result.

Then, this is part of what's called "extra-ordinary" the theory.
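
For reference, the function the quoted thesis concerns is the halting
function; its textbook definition (nothing specific to this thread) is:

\mathrm{HALT}(\langle M \rangle, x) =
\begin{cases}
1 & \text{if machine } M \text{ halts on input } x \\
0 & \text{otherwise}
\end{cases}

Turing's theorem is that no Turing machine computes HALT for all inputs;
the Church–Turing thesis then extends that limit to every effective
method, which is exactly the step the posts above are disputing.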

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

https://news.novabbs.org/devel/article-flat.php?id=54592&group=comp.theory#54592

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?
Date: Mon, 4 Mar 2024 22:52:58 -0600
Message-ID: <us68fc$3jtk1$1@dont-email.me>
In-Reply-To: <us5vc9$psb9$4@i2pn2.org>
 by: olcott - Tue, 5 Mar 2024 04:52 UTC

On 3/4/2024 8:17 PM, Richard Damon wrote:
> On 3/4/24 7:48 PM, olcott wrote:
>> On 3/4/2024 6:21 PM, Richard Damon wrote:
>>> On 3/4/24 3:14 PM, olcott wrote:
>>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Nonetheless actual computers do demonstrate
>>>>>>>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>>
>>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>>
>>>>>>>>>>>> It ends up concluding that professor Hehner,
>>>>>>>>>>>> professor Stoddart, and I are all correct in that
>>>>>>>>>>>> there is something wrong with the halting problem.
>>>>>>>>>>>
>>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>>>>
>>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>>
>>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>>> professors Hehner and Stoddart and I agreed to.
>>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> It didn't "figure it out". It pattern-matched it to previous
>>>>>>>>> input it has been given.
>>>>>>>>>
>>>>>>>>> If it took 34 pages to agree with your conclusion, then it
>>>>>>>>> really didn't agree with you initially, but you finally trained
>>>>>>>>> it to your version of reality.
>>>>>>>>
>>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>>> When an input, such as the halting problem's pathological input
>>>>>>>> D, is
>>>>>>>> designed to contradict every value that the halting decider H
>>>>>>>> returns,
>>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>>> providing a
>>>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> Which means NOTHING, as LLMs will tell non-truths if fed
>>>>>>> misleading information.
>>>>>>
>>>>>> The above paragraph is proven to be completely true entirely
>>>>>> on the basis of the meaning of its words as these words were
>>>>>> defined in the dialogue that precedes them.
>>>>>>
>>>>>
>>>>> Nope, the problem is you gave it incorrect implications about the
>>>>> meaning of the words.
>>>>
>>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>>> When an input, such as the halting problem's pathological input D, is
>>>> designed to contradict every value that the halting decider H returns,
>>>> it creates a self-referential paradox that prevents H from providing a
>>>> consistent and correct response. In this context, D can be seen as
>>>> posing an incorrect question to H, as its contradictory nature
>>>> undermines the possibility of a meaningful and accurate answer.
>>>>
>>>> Within my definitions of my terms the above paragraph written by
>>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>>> 4.0 analysis is sound.
>>>>
>>>> *People are not free to disagree with stipulative definitions*
>>>>
>>>> A stipulative definition is a type of definition in which a new or
>>>> currently existing term is given a new specific meaning for the
>>>> purposes
>>>> of argument or discussion in a given context. When the term already
>>>> exists, this definition may, but does not necessarily, contradict the
>>>> dictionary (lexical) definition of the term.
>>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>>
>>>>
>>>>
>>>
>>>
>>> Right, and by that EXACT SAME RULE, when you "stipulate" a definition
>>> different from that stipulated by a field, you place yourself outside
>>> that field, and if you still claim to be working in it, you are just
>>> admitting to being a bald-faced LIAR.
>>>
>>
>> Not exactly. When I stipulate a definition that shows the
>> incoherence of the conventional definitions then I am working
>> at the foundational level above this field.
>
> Nope, if you change the definition of the field, you are in a new field.
>
> Just like ZFC Set Theory isn't Naive Set Theory, if you change the
> basis, you are in a new field.

Yes that is exactly what I am doing, good call!

ZFC corrected the error of Naive Set Theory. I am correcting
the error of the conventional foundation of computability.

>>
>>> And, no, CHAT GPT's analysis is NOT "Sound", at least not in the
>>> field you claim to be working in, as that field has definitions that
>>> must be followed, which you don't follow.
>>>
>>
>> It is perfectly sound within my stipulated definitions.
>> Or we could say that it is perfectly valid when one takes
>> my definitions as its starting premises.
>
> And not when you look at the field you claim to be in.
>

OK then the field that I am in is the field of the
*correction to the foundational notions of computability*,
just like ZFC corrected Naive Set Theory.

> That just make you a LIAR.
>
>>
>>> So, maybe it is sound POOP logic, but not sound Computation logic.
>>>
> And you are just admitting you have been lying all these years.
>>
>> My stipulative definitions show that the conventional
>> ones are established within an incoherent foundation.
>>
>> Your reviews continue to be very helpful for me to
>> make my points increasingly more clearly.
>>
>
> Then build your new foundations and stop lying that you are working in
> the ones you say are broken.
>
> Only an idiot works in a field they think is broken, maybe that is why
> you are doing it.
>
> Of course, this means you need to learn enough of foundation building to
> build your new system. That is a fairly rigorous task.


[...]
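
The "pathological input D" pattern that the quoted ChatGPT paragraph
describes can be written out as a self-contained toy in C (a hypothetical
sketch: a global depth counter stands in for the simulation-abort test
debated in this thread, and calling p() stands in for simulating it):

#include <stdio.h>

static int H(void (*p)(void));      /* forward declaration */

/* D is built to contradict whatever H reports about it. */
static void D(void) {
    if (H(D))       /* if H says "D halts" ... */
        for (;;) ;  /* ... run forever */
}                   /* otherwise return, i.e. halt */

static int depth = 0;   /* hidden state: the "hidden data" at issue */

/* Toy stand-in for a simulating decider: it "simulates" p by
   calling it, and treats re-entry as the abort criterion. */
static int H(void (*p)(void)) {
    if (depth > 0)
        return 0;   /* re-entered on the nested copy: abort, report non-halting */
    depth++;
    p();            /* "simulate" p directly */
    depth--;
    return 1;       /* p halted */
}

int main(void) {
    printf("H(D) = %d\n", H(D));
    return 0;
}

The outer call H(D) returns 1 while the nested call inside D returned 0:
one name, two answers for the same argument, purely because of the hidden
counter. This is the H/H1 discrepancy from earlier in the thread in
miniature, and it is also why such a routine is not a computation of its
declared inputs.
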
Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩

https://news.novabbs.org/devel/article-flat.php?id=54596&group=comp.theory#54596

From: polcott2@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩
Date: Mon, 4 Mar 2024 23:54:51 -0600
Message-ID: <us6c3c$3kffi$1@dont-email.me>
In-Reply-To: <us5vm7$psb9$5@i2pn2.org>
 by: olcott - Tue, 5 Mar 2024 05:54 UTC

On 3/4/2024 8:23 PM, Richard Damon wrote:
> On 3/4/24 8:03 PM, olcott wrote:
>> On 3/4/2024 6:22 PM, Richard Damon wrote:
>>> On 3/4/24 2:53 PM, olcott wrote:
>>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>>
>>>>>> [...]
>>>>>
>>>>> None of that demonstrates any understanding.
>>>>>
>>>>>> My persistent focus on these ideas gives me an increasingly
>>>>>> deep understanding; thus my latest position is that the
>>>>>> halting problem proofs do not actually show that halting
>>>>>> is not computable.
>>>>>
>>>>> Your understanding is still defective and shallow.
>>>>>
>>>>
>>>> If it really were shallow then a gap in my reasoning
>>>> could be pointed out. The actual case is that because
>>>> I have focused on the same problem based on the Linz
>>>> proof for such a long time I noticed things that no
>>>> one ever noticed before. *Post from 2004*
>>>
>>> It has been.
>>>
>>> You are just too stupid to understand.
>>>
>>> You can't fix intentional stupid.
>>>
>>
>> You can't see outside of the box of the incorrect foundation
>> of the notion of analytic truth.
>>
>> Hardly anyone knows that the Liar Paradox is not a truth bearer,
>> and even fewer people understand that epistemological antinomies
>> are not truth bearers.
>>
>> All the people that do not understand that epistemological antinomies
>> are not truth bearers cannot understand that asking a question about
>> the truth value of an epistemological antinomy is a mistake.
>>
>> These people lack the basis to understand that decision problem/input
>> pairs asking about the truth value of an epistemological antinomy
>> are a mistake.
>
> If the foundation is so bad, why are you still in it?

When I explain what is wrong with the foundation I am not
working within the same foundation that I am rebuking.

There are some aspects of the notion of analytical
truth that are incorrect and others that are not.

> You are effectively proving you can't do better by staying.
>
>>
>> People lacking these prerequisite understandings simply write off what
>> they do not understand as nonsense.
>>
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn     // Ĥ applied to ⟨Ĥ⟩ does not halt
>>
>> Simulating termination analyzer Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition
>> to Ĥ.Hqn or fail to halt.
>
> And thus you LIE that you are working on the Halting Problem, by using
> the wrong criteria.
>

When the "right" criteria cause Ĥ.H to never halt
then these "right" criteria are wrong.

>>
>> When its sole criterion measure is to always say NO to every input
>> that would prevent it from halting then it must say NO to ⟨Ĥ⟩ ⟨Ĥ⟩.
>>
>
> Which makes it a LIE to claim you are working on the Halting Problem
>

If the only way that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that
it must abort its simulation requires Ĥ to know its own machine
address, then I have expanded the scope of the halting problem to
include RASP machines, where every program P knows its own address.

>> When H ⟨Ĥ⟩ ⟨Ĥ⟩ correctly uses this exact same criterion measure
>> then the "abort simulation" criterion <is not> met, thus providing
>> the correct basis for a different answer.
>>
>
>
> Which just proves you are a PATHOLOGICAL LIAR since you keep on
> insisting that you are working on the Halting problem when you are using
> the wrong criteria.
>

As long as Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that it must abort its
simulation, I am merely showing that the conventional halting problem
proofs do not prove that a halt decider does not exist.

You admitted that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition to Ĥ.Hqn
to prevent its own infinite execution, and you admitted
that this makes Ĥ ⟨Ĥ⟩ halt. That proves that when
H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to H.qy *THIS IS THE CORRECT ANSWER*

You already know that all of that is correct; you
*simply don't believe that H can figure out how to do that*

> The right answer to the wrong question is not a right answer, since it
> is the question that matters, and to say otherwise is just a LIE.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
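
On the RASP machines mentioned above: a random-access stored-program
machine keeps its program in the same addressable memory it computes
over, so an instruction can observe its own location. A toy interpreter
makes that capability concrete (hypothetical opcode set, not any
specific machine from the literature):

#include <stdio.h>

/* Toy RASP flavour: program and data share one memory, and the
   MYPC opcode lets an instruction read its own address.  HALT
   is opcode 0 so that zero-initialized memory halts. */
enum { HALT, MYPC, PRINT };

int main(void) {
    int mem[8] = { MYPC, PRINT, HALT };  /* the program, stored in memory */
    int pc = 0, acc = 0;
    for (;;) {
        switch (mem[pc]) {
        case MYPC:  acc = pc; pc++; break;   /* instruction observes its own location */
        case PRINT: printf("acc=%d\n", acc); pc++; break;
        case HALT:  return 0;
        }
    }
}

A Turing machine's transition table, by contrast, lives outside its
tape, which is why "knowing its own address" is not among the operations
the Turing model offers.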

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

https://news.novabbs.org/devel/article-flat.php?id=54598&group=comp.theory#54598

From: mikko.levanto@iki.fi (Mikko)
Newsgroups: comp.theory
Subject: Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?
Date: Tue, 5 Mar 2024 11:28:43 +0200
Message-ID: <us6okb$3mmn6$1@dont-email.me>
 by: Mikko - Tue, 5 Mar 2024 09:28 UTC

On 2024-03-04 20:14:34 +0000, olcott said:

> *People are not free to disagree with stipulative definitions*

People are free to disagree about whether your stipulative definitions
are useful or sensible. They are not free to disagree with any
correct inferences from those definitions nor to agree with any
incorrect inferences. They may disagree about the relevance of
any conclusions based on such definitions.

--
Mikko

