Rocksolid Light

Welcome to Rocksolid Light





Subject -- Author
* Does the halting problem actually limit what computers can do? -- olcott
+- Does the halting problem actually limit what computers can do? -- Richard Damon
+- Does the halting problem actually limit what computers can do? -- Jim Burns
+* Does the halting problem actually limit what computers can do? -- olcott
|`- Does the halting problem actually limit what computers can do? -- Richard Damon
+- Does the halting problem actually limit what computers can do? -- Richard Damon
+* Does the halting problem actually limit what computers can do? -- olcott
|+- Does the halting problem actually limit what computers can do? -- Richard Damon
|`* Does the halting problem actually limit what computers can do? -- olcott
| `- Does the halting problem actually limit what computers can do? -- Richard Damon
+* Does the halting problem actually limit what computers can do? -- olcott
|`- Does the halting problem actually limit what computers can do? -- Richard Damon
+* Does the halting problem actually limit what computers can do? -- olcott
|`- Does the halting problem actually limit what computers can do? -- Richard Damon
+* Does the halting problem actually limit what computers can do? -- olcott
|+- Does the halting problem actually limit what computers can do? -- Richard Damon
|`* Does the halting problem actually limit what computers can do? -- olcott
| +- Does the halting problem actually limit what computers can do? -- Richard Damon
| `* Does the halting problem actually limit what computers can do? -- olcott
|  +- Does the halting problem actually limit what computers can do? -- Richard Damon
|  `* Does the halting problem actually limit what computers can do? -- olcott
|   +- Does the halting problem actually limit what computers can do? -- Richard Damon
|   `* Does the halting problem actually limit what computers can do? -- olcott
|    +- Does the halting problem actually limit what computers can do? -- Richard Damon
|    `* Does the halting problem actually limit what computers can do? -- olcott
|     +* Does the halting problem actually limit what computers can do? -- Richard Damon
|     |`- Does the halting problem actually limit what computers can do? -- Richard Damon
|     `* Does the halting problem actually limit what computers can do? -- olcott
|      +- Does the halting problem actually limit what computers can do? -- Richard Damon
|      `* Does the halting problem actually limit what computers can do? -- olcott
|       +- Does the halting problem actually limit what computers can do? -- Richard Damon
|       `* Does the halting problem actually limit what computers can do? -- olcott
|        +- Does the halting problem actually limit what computers can do? -- Richard Damon
|        +* Does the halting problem actually limit what computers can do? -- olcott
|        |`- Does the halting problem actually limit what computers can do? -- Richard Damon
|        +- Does the halting problem actually limit what computers can do? -- olcott
|        +* Does the halting problem actually limit what computers can do? -- olcott
|        |`- Does the halting problem actually limit what computers can do? -- Richard Damon
|        `* Does the halting problem actually limit what computers can do? -- olcott
|         `- Does the halting problem actually limit what computers can do? -- Richard Damon
`* Does the halting problem actually limit what computers can do? -- olcott
 +- Does the halting problem actually limit what computers can do? -- Richard Damon
 `* Does the halting problem actually limit what computers can do? -- olcott
  +- Does the halting problem actually limit what computers can do? -- Richard Damon
  `* Does the halting problem actually limit what computers can do? -- olcott
   +- Does the halting problem actually limit what computers can do? -- Richard Damon
   `* Does the halting problem actually limit what computers can do? -- olcott
    +- Does the halting problem actually limit what computers can do? -- Richard Damon
    `* Does the halting problem actually limit what computers can do? -- olcott
     +- Does the halting problem actually limit what computers can do? -- Richard Damon
     `* Does the halting problem actually limit what computers can do? -- olcott
      +- Does the halting problem actually limit what computers can do? -- Richard Damon
      `* Does the halting problem actually limit what computers can do? -- olcott
       +- Does the halting problem actually limit what computers can do? -- Richard Damon
       `* Does the halting problem actually limit what computers can do? -- olcott
        +- Does the halting problem actually limit what computers can do? -- Richard Damon
        `* Does the halting problem actually limit what computers can do? -- olcott
         +- Does the halting problem actually limit what computers can do? -- Richard Damon
         `* Does the halting problem actually limit what computers can do? -- olcott
          `- Does the halting problem actually limit what computers can do? -- Richard Damon

Re: Does the halting problem actually limit what computers can do?

<uhp9k7$l6kb$1@dont-email.me>


https://news.novabbs.org/devel/article-flat.php?id=49294&group=comp.theory#49294

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 17:10:15 -0500
Organization: A noiseless patient Spider
Lines: 125
Message-ID: <uhp9k7$l6kb$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:10:15 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="01655030c6df07099b4d908f11be3d86";
logging-data="694923"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+JcbeKWUdcvKQWnUc73eIC"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:1LFXdCw4gI72FdcSPJkFuyEWozk=
In-Reply-To: <uhp2m7$k1ls$1@dont-email.me>
Content-Language: en-US
 by: olcott - Mon, 30 Oct 2023 22:10 UTC

On 10/30/2023 3:11 PM, olcott wrote:
> On 10/30/2023 1:08 PM, olcott wrote:
>> On 10/30/2023 12:23 PM, olcott wrote:
>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>> *Everyone agrees that this is impossible*
>>>>>> No computer program H can correctly predict what another computer
>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>> whatever H says.
>>>>>>
>>>>>> H(D) is functional notation that specifies the return value from H(D)
>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>
>>>>>> For all H ∈ TM there exists input D such that
>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>
>>>>>> *No one pays attention to what this impossibility means*
>>>>>> The halting problem is defined as an unsatisfiable specification thus
>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>> answer.
>>>>>>
>>>>>> What time is it (yes or no)?
>>>>>> has no correct answer because there is something wrong with the
>>>>>> question. In this case we know to blame the question and not the one
>>>>>> answering it.
>>>>>>
>>>>>> When we understand that there are some inputs to every TM H that
>>>>>> contradict both Boolean return values that H could return then the
>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>> (thus incorrect) question in these cases.
>>>>>>
>>>>>> The inability to correctly answer an incorrect question places no
>>>>>> actual
>>>>>> limit on anyone or anything.
>>>>>>
>>>>>> This insight opens up an alternative treatment of these pathological
>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>
>>>>>
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>> *A self-contradictory question is defined as*
>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>
>>>>> For every H in the set of all Turing Machines there exists a D
>>>>> that derives a self-contradictory question for this H in that
>>>>> (a) If this H says that its D will halt, D loops
>>>>> (b) If this H says that its D will loop, D halts.
>>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>
>>>> *proving that this is literally true*
>>>> *The halting problem proofs merely show that*
>>>> *self-contradictory questions have no correct answer*
>>>>
>>>
>>>     Nope, since each specific question HAS
>>>     a correct answer, it shows that, by your
>>>     own definition, it isn't "Self-Contradictory"
>>>
>>> *That is a deliberate strawman deception paraphrase*
>>> *That is a deliberate strawman deception paraphrase*
>>> *That is a deliberate strawman deception paraphrase*
>>>
>>> There does not exist a solution to the halting problem because
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>
>>> there exists a D that makes the question:
>>> Does your input halt?
>>> a self-contradictory thus incorrect question.
>>
>>     Where does it say that a Turing
>>     Machine must exist to do it?
>>
>> *The only reason that no such Turing Machine exists is*
>>
>> For every H in the set of all Turing Machines there exists a D
>> that derives a self-contradictory question for this H in that
>> (a) If this H says that its D will halt, D loops
>> (b) If this H says that its D will loop, D halts.
>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>
>> *therefore*
>>
>> *The halting problem proofs merely show that*
>> *self-contradictory questions have no correct answer*
>
>    The issue that you ignore is that you are
>    conflating a set of questions with a question,
>    and are basing your logic on a strawman.
>
> It is not my mistake. Linguists understand that the
> context of who is asked a question changes the meaning
> of the question.
>
> This can easily be shown to apply to decision problem
> instances as follows:
>
> In that H.true and H.false are the wrong answer when
> D calls H to do the opposite of whatever value that
> either H returns.
>
> Whereas exactly one of H1.true or H1.false is correct
> for this exact same D.
>
> This proves that the question: "Does your input halt?"
> has a different meaning across the H and H1 pairs.

It *CAN* if the question asks something about
the person being questioned.

But it *CAN'T* if the question doesn't in any
way refer to who you ask.

D calls H thus D DOES refer to H
D does not call H1 therefore D does not refer to H1

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
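The H/D construction argued over above can be sketched informally in Python, with ordinary functions standing in for Turing machines. This is only an illustration of the diagonal mechanics both posters describe, not a formalization of either side's position; the names `make_D` and `H_never` are invented for the sketch.

```python
def make_D(H):
    """Given a candidate halting decider H (H(p) -> bool, True meaning
    "p halts"), build the program D that does the opposite of H's answer."""
    def D():
        if H(D):           # H predicts that D halts...
            while True:    # ...so D loops forever instead
                pass
        # H predicts that D loops, so D simply returns (halts)
    return D

# A deliberately naive decider: it predicts "does not halt" for everything.
# Any fixed decider is defeated by its own D in the same way.
def H_never(program):
    return False

D = make_D(H_never)
D()                          # returns immediately: D halts,
assert H_never(D) is False   # ...yet H_never predicted that D loops
```

Running the sketch with `H_never` shows one side of the contradiction: D halts precisely because the embedded decider said it would not. A decider that answered True instead would be wrong the other way, since its D would loop.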

Re: Does the halting problem actually limit what computers can do?

<uhpajk$3b08n$4@i2pn2.org>


https://news.novabbs.org/devel/article-flat.php?id=49295&group=comp.theory#49295

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 15:27:01 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhpajk$3b08n$4@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:27:00 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3506455"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <uhp9k7$l6kb$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Mon, 30 Oct 2023 22:27 UTC

On 10/30/23 3:10 PM, olcott wrote:
> On 10/30/2023 3:11 PM, olcott wrote:
>> On 10/30/2023 1:08 PM, olcott wrote:
>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>> *Everyone agrees that this is impossible*
>>>>>>> No computer program H can correctly predict what another computer
>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>> whatever H says.
>>>>>>>
>>>>>>> H(D) is functional notation that specifies the return value from
>>>>>>> H(D)
>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>
>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>
>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>> The halting problem is defined as an unsatisfiable specification
>>>>>>> thus
>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>> answer.
>>>>>>>
>>>>>>> What time is it (yes or no)?
>>>>>>> has no correct answer because there is something wrong with the
>>>>>>> question. In this case we know to blame the question and not the one
>>>>>>> answering it.
>>>>>>>
>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>>> (thus incorrect) question in these cases.
>>>>>>>
>>>>>>> The inability to correctly answer an incorrect question places no
>>>>>>> actual
>>>>>>> limit on anyone or anything.
>>>>>>>
>>>>>>> This insight opens up an alternative treatment of these pathological
>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>
>>>>>>
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>> *A self-contradictory question is defined as*
>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>
>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>> that derives a self-contradictory question for this H in that
>>>>>> (a) If this H says that its D will halt, D loops
>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>> each H*
>>>>>
>>>>> *proving that this is literally true*
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>
>>>>     Nope, since each specific question HAS
>>>>     a correct answer, it shows that, by your
>>>>     own definition, it isn't "Self-Contradictory"
>>>>
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>>
>>>> There does not exist a solution to the halting problem because
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>
>>>> there exists a D that makes the question:
>>>> Does your input halt?
>>>> a self-contradictory thus incorrect question.
>>>
>>>     Where does it say that a Turing
>>>     Machine must exist to do it?
>>>
>>> *The only reason that no such Turing Machine exists is*
>>>
>>> For every H in the set of all Turing Machines there exists a D
>>> that derives a self-contradictory question for this H in that
>>> (a) If this H says that its D will halt, D loops
>>> (b) If this H says that its D will loop, D halts.
>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>
>>> *therefore*
>>>
>>> *The halting problem proofs merely show that*
>>> *self-contradictory questions have no correct answer*
>>
>>     The issue that you ignore is that you are
>>     conflating a set of questions with a question,
>>     and are basing your logic on a strawman.
>>
>> It is not my mistake. Linguists understand that the
>> context of who is asked a question changes the meaning
>> of the question.
>>
>> This can easily be shown to apply to decision problem
>> instances as follows:
>>
>> In that H.true and H.false are the wrong answer when
>> D calls H to do the opposite of whatever value that
>> either H returns.
>>
>> Whereas exactly one of H1.true or H1.false is correct
>> for this exact same D.
>>
>> This proves that the question: "Does your input halt?"
>> has a different meaning across the H and H1 pairs.
>
>    It *CAN* if the question asks something about
>    the person being questioned.
>
>    But it *CAN'T* if the question doesn't in any
>    way refer to who you ask.
>
> D calls H thus D DOES refer to H
> D does not call H1 therefore D does not refer to H1
>

The QUESTION doesn't refer to the person being asked?

That D calls H doesn't REFER to the asker, but to a specific machine.

Thus, nothing in the question refers to the asker.

Does "What is Joe Blow's age?" depend on who you are asking? Even if you
are asking Joe Blow?

NO.

So, you are just continuing to prove your stupidity.

Re: Does the halting problem actually limit what computers can do?

<uhpbot$lib1$1@dont-email.me>


https://news.novabbs.org/devel/article-flat.php?id=49296&group=comp.theory#49296

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 17:46:52 -0500
Organization: A noiseless patient Spider
Lines: 151
Message-ID: <uhpbot$lib1$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:46:53 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="01655030c6df07099b4d908f11be3d86";
logging-data="706913"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1//0kCU4TA/45kQVMSBuvFh"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:/iHNZF5LKxBlA7BB5vS6u4+Va6U=
Content-Language: en-US
In-Reply-To: <uhp9k7$l6kb$1@dont-email.me>
 by: olcott - Mon, 30 Oct 2023 22:46 UTC

On 10/30/2023 5:10 PM, olcott wrote:
> On 10/30/2023 3:11 PM, olcott wrote:
>> On 10/30/2023 1:08 PM, olcott wrote:
>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>> *Everyone agrees that this is impossible*
>>>>>>> No computer program H can correctly predict what another computer
>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>> whatever H says.
>>>>>>>
>>>>>>> H(D) is functional notation that specifies the return value from
>>>>>>> H(D)
>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>
>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>
>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>> The halting problem is defined as an unsatisfiable specification
>>>>>>> thus
>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>> answer.
>>>>>>>
>>>>>>> What time is it (yes or no)?
>>>>>>> has no correct answer because there is something wrong with the
>>>>>>> question. In this case we know to blame the question and not the one
>>>>>>> answering it.
>>>>>>>
>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>>> (thus incorrect) question in these cases.
>>>>>>>
>>>>>>> The inability to correctly answer an incorrect question places no
>>>>>>> actual
>>>>>>> limit on anyone or anything.
>>>>>>>
>>>>>>> This insight opens up an alternative treatment of these pathological
>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>
>>>>>>
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>> *A self-contradictory question is defined as*
>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>
>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>> that derives a self-contradictory question for this H in that
>>>>>> (a) If this H says that its D will halt, D loops
>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>> each H*
>>>>>
>>>>> *proving that this is literally true*
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>
>>>>     Nope, since each specific question HAS
>>>>     a correct answer, it shows that, by your
>>>>     own definition, it isn't "Self-Contradictory"
>>>>
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>>
>>>> There does not exist a solution to the halting problem because
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>
>>>> there exists a D that makes the question:
>>>> Does your input halt?
>>>> a self-contradictory thus incorrect question.
>>>
>>>     Where does it say that a Turing
>>>     Machine must exist to do it?
>>>
>>> *The only reason that no such Turing Machine exists is*
>>>
>>> For every H in the set of all Turing Machines there exists a D
>>> that derives a self-contradictory question for this H in that
>>> (a) If this H says that its D will halt, D loops
>>> (b) If this H says that its D will loop, D halts.
>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>
>>> *therefore*
>>>
>>> *The halting problem proofs merely show that*
>>> *self-contradictory questions have no correct answer*
>>
>>     The issue that you ignore is that you are
>>     conflating a set of questions with a question,
>>     and are basing your logic on a strawman.
>>
>> It is not my mistake. Linguists understand that the
>> context of who is asked a question changes the meaning
>> of the question.
>>
>> This can easily be shown to apply to decision problem
>> instances as follows:
>>
>> In that H.true and H.false are the wrong answer when
>> D calls H to do the opposite of whatever value that
>> either H returns.
>>
>> Whereas exactly one of H1.true or H1.false is correct
>> for this exact same D.
>>
>> This proves that the question: "Does your input halt?"
>> has a different meaning across the H and H1 pairs.
>
>    It *CAN* if the question asks something about
>    the person being questioned.
>
>    But it *CAN'T* if the question doesn't in any
>    way refer to who you ask.
>
> D calls H thus D DOES refer to H
> D does not call H1 therefore D does not refer to H1
>

The QUESTION doesn't refer to the person
being asked?

That D calls H doesn't REFER to the asker,
but to a specific machine.

For the H/D pair D does refer to the specific
machine being asked: Does your input halt?
D knows about and references H.

For the H1/D pair D does not refer to the specific
machine being asked: Does your input halt?
D does not know about or reference H1.

If these things were not extremely difficult to
understand they would have been addressed before
publication in 1936.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
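The H/H1 distinction debated in this message can be shown with the same kind of informal Python sketch: D is built to call one specific decider H, and a different decider H1 (which D never references) can answer correctly about that same D. The function names are hypothetical stand-ins for Turing machines; the sketch shows only the mechanics the two posters dispute, not who is right about its significance.

```python
def make_D(H):
    """D is built to contradict one specific decider H, which it calls."""
    def D():
        if H(D):          # the embedded H predicts "halts"...
            while True:   # ...so D loops
                pass
        # the embedded H predicts "loops", so D halts
    return D

def H(program):           # the decider that D was built against
    return False          # it answers "does not halt" for this D

def H1(program):          # a different decider asked about the same D
    return True           # it answers "halts" for this D

D = make_D(H)             # D calls H internally; it never calls H1
D()                       # D halts, so H's answer (False) is wrong...
assert H1(D) is True      # ...while H1's answer (True) is correct
```

Note that D "refers to" H only in the sense that H is baked into D's body; the same fixed D is then handed to both deciders, and only the one it was built against gets it wrong.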

Re: Does the halting problem actually limit what computers can do?

<uhpcel$3b08n$5@i2pn2.org>


https://news.novabbs.org/devel/article-flat.php?id=49297&group=comp.theory#49297

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 15:58:30 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhpcel$3b08n$5@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:58:29 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3506455"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhpbot$lib1$1@dont-email.me>
 by: Richard Damon - Mon, 30 Oct 2023 22:58 UTC

On 10/30/23 3:46 PM, olcott wrote:
> On 10/30/2023 5:10 PM, olcott wrote:
>> On 10/30/2023 3:11 PM, olcott wrote:
>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>>> whatever H says.
>>>>>>>>
>>>>>>>> H(D) is functional notation that specifies the return value from
>>>>>>>> H(D)
>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not
>>>>>>>> halt
>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>
>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>
>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>> The halting problem is defined as an unsatisfiable specification
>>>>>>>> thus
>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>> answer.
>>>>>>>>
>>>>>>>> What time is it (yes or no)?
>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>> question. In this case we know to blame the question and not the
>>>>>>>> one
>>>>>>>> answering it.
>>>>>>>>
>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>
>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>> no actual
>>>>>>>> limit on anyone or anything.
>>>>>>>>
>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>> pathological
>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>
>>>>>>>
>>>>>>> *The halting problem proofs merely show that*
>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>
>>>>>>> *A self-contradictory question is defined as*
>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>
>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>> each H*
>>>>>>
>>>>>> *proving that this is literally true*
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>
>>>>>     Nope, since each specific question HAS
>>>>>     a correct answer, it shows that, by your
>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>
>>>>> There does not exist a solution to the halting problem because
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>
>>>>> there exists a D that makes the question:
>>>>> Does your input halt?
>>>>> a self-contradictory thus incorrect question.
>>>>
>>>>     Where does it say that a Turing
>>>>     Machine must exist to do it?
>>>>
>>>> *The only reason that no such Turing Machine exists is*
>>>>
>>>> For every H in the set of all Turing Machines there exists a D
>>>> that derives a self-contradictory question for this H in that
>>>> (a) If this H says that its D will halt, D loops
>>>> (b) If this H says that its D will loop, D halts.
>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>
>>>> *therefore*
>>>>
>>>> *The halting problem proofs merely show that*
>>>> *self-contradictory questions have no correct answer*
>>>
>>>     The issue that you ignore is that you are
>>>     conflating a set of questions with a question,
>>>     and are basing your logic on a strawman.
>>>
>>> It is not my mistake. Linguists understand that the
>>> context of who is asked a question changes the meaning
>>> of the question.
>>>
>>> This can easily be shown to apply to decision problem
>>> instances as follows:
>>>
>>> In that H.true and H.false are the wrong answer when
>>> D calls H to do the opposite of whatever value that
>>> either H returns.
>>>
>>> Whereas exactly one of H1.true or H1.false is correct
>>> for this exact same D.
>>>
>>> This proves that the question: "Does your input halt?"
>>> has a different meaning across the H and H1 pairs.
>>
>>     It *CAN* if the question asks something about
>>     the person being questioned.
>>
>>     But it *CAN'T* if the question doesn't in any
>>     way refer to who you ask.
>>
>> D calls H thus D DOES refer to H
>> D does not call H1 therefore D does not refer to H1
>>
>
>    The QUESTION doesn't refer to the person
>    being asked?
>
>    That D calls H doesn't REFER to the asker,
>    but to a specific machine.
>
> For the H/D pair D does refer to the specific
> machine being asked: Does your input halt?
> D knows about and references H.

Nope. The question "does this input, representing D(D), halt?" does NOT
refer to any particular decider, just whatever one it is given to.

>
> For the H1/D pair D does not refer to the specific
> machine being asked: Does your input halt?
> D does not know about or reference H1.
>
> If these things were not extremely difficult to
> understand they would have been addressed before
> publication in 1936.
>

They are only "extremely difficult to understand" because they are FALSE
statements.

You are just too stupid to understand that the Halting question:

"Does the computation represented by the input Halt?" doesn't have
ANYTHING in it that refers to the machine doing the deciding, and the
input being represented also doesn't refer to the machine doing the
deciding, but only to a particular decider that it is designed to foil.

Just because we give it to that one doesn't make it "refer" to the one
being asked.

You are just FAILING basic logic theory, because you are showing
yourself to be a total idiot.

Please find references for your "claims" and definitions from reliable
sources.
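The H/D construction argued over above can be sketched concretely. This is a minimal Python illustration, not code from the thread; the names `make_D` and the signature `H(program, x) -> bool` are assumptions. It shows how, for any concrete decider H, the derived D does the opposite of whatever H predicts:

```python
def make_D(H):
    """Build the 'pathological' program D for a given decider H.

    H is assumed to be a total function H(program, x) -> bool that
    returns True to predict that program(x) halts, False otherwise.
    """
    def D(x):
        if H(x, x):          # (a) H says D(D) halts...
            while True:      # ...so D loops forever
                pass
        return None          # (b) H says D(D) loops, so D halts
    return D

# Whatever concrete answer H gives, D(D) does the opposite.
# For a (deliberately naive) decider that always answers "loops":
H_no = lambda program, x: False
D = make_D(H_no)
print(D(D) is None)  # True: D(D) halts, so H_no answered wrongly
```

Running D(D) when H predicts halting would of course never terminate, which is why the demonstration uses the "always loops" guess.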

Re: Does the halting problem actually limit what computers can do?
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Date: Mon, 30 Oct 2023 18:17:55 -0500
Message-ID: <uhpdj4$m00m$1@dont-email.me>

On 10/30/2023 5:46 PM, olcott wrote:
> [...]
> For the H/D pair D does refer to the specific
> machine being asked: Does your input halt?
> D knows about and references H.

Nope. The question "does this input, representing D(D), halt?" does NOT
refer to any particular decider, just whatever one it is given to.

*You can ignore that D calls H; nonetheless, when D*
*calls H this does mean that D <is> referencing H*

The only way that I can tell that I am proving my point
is that rebuttals from people that are stuck in rebuttal
mode become increasingly nonsensical.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Date: Mon, 30 Oct 2023 16:44:36 -0700
Message-ID: <uhpf53$3b08m$1@i2pn2.org>

On 10/30/23 4:17 PM, olcott wrote:
> [...]
>> For the H/D pair D does refer to the specific
>> machine being asked: Does your input halt?
>> D knows about and references H.
>
>   Nope. The question "does this input, representing D(D), halt?"
>   does NOT refer to any particular decider, just whatever one
>   it is given to.
>
> *You can ignore that D calls H; nonetheless, when D*
> *calls H this does mean that D <is> referencing H*
>
> The only way that I can tell that I am proving my point
> is that rebuttals from people that are stuck in rebuttal
> mode become increasingly nonsensical.
>

CALLING H doesn't REFER to the decider deciding it.

Note the key difference: a Turing machine can have a copy of the code
for another machine, but it doesn't "refer" to it, as any changes made
to that machine after the first machine was built don't change it.

That is the key point you miss.

D has the code for the H that you are claiming gives the right value.
When you try to vary that H to prove something, that DOESN'T change D,
as D has a copy of the original code of H, not a "reference" to H.
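The copy-versus-reference point above can also be made concrete. A hedged Python sketch (the names `make_D`, `H`, and `H1` are illustrative, not from the thread): D is built from a fixed copy of one decider, so a different decider H1, which D never calls, can still answer correctly about the very same D.

```python
def make_D(H):
    # D is built from a fixed copy of H; swapping in another
    # decider later does not change this D.
    def D(x):
        if H(x, x):          # the embedded copy predicts halting...
            while True:      # ...so D loops forever
                pass
        return None          # it predicts looping, so D halts
    return D

H  = lambda program, x: False   # one concrete decider: always "loops"
H1 = lambda program, x: True    # a different decider: always "halts"

D = make_D(H)                   # D embeds a copy of H, never calls H1
halted = D(D) is None           # True: D(D) halts
print(H(D, D) == halted)        # False: H is wrong about its own D
print(H1(D, D) == halted)       # True:  H1 is right about the same D
```

Because D captures H at construction time, evaluating H1 on D changes nothing inside D; this is the sense in which D contains a copy of H rather than a reference to "whatever decider is asked".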


Re: Does the halting problem actually limit what computers can do?
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Date: Mon, 30 Oct 2023 19:04:38 -0500
Message-ID: <uhpgan$md1k$1@dont-email.me>

On 10/30/2023 6:17 PM, olcott wrote:
> On 10/30/2023 5:46 PM, olcott wrote:
>> On 10/30/2023 5:10 PM, olcott wrote:
>>> On 10/30/2023 3:11 PM, olcott wrote:
>>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>>>> whatever H says.
>>>>>>>>>
>>>>>>>>> H(D) is functional notation that specifies the return value
>>>>>>>>> from H(D)
>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not
>>>>>>>>> halt
>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>>
>>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>>> (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
>>>>>>>>>
>>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>>> The halting problem is defined as an unsatisfiable
>>>>>>>>> specification thus
>>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>>> answer.
>>>>>>>>>
>>>>>>>>> What time is it (yes or no)?
>>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>>> question. In this case we know to blame the question and not
>>>>>>>>> the one
>>>>>>>>> answering it.
>>>>>>>>>
>>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>>>> question: Does your input halt? is essentially a
>>>>>>>>> self-contradictory
>>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>>
>>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>>> no actual
>>>>>>>>> limit on anyone or anything.
>>>>>>>>>
>>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>>> pathological
>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>>
>>>>>>>>
>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>
>>>>>>>> *A self-contradictory question is defined as*
>>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>>
>>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>>> (b) If this H that says its D will loop it halts.
>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>>> each H*
>>>>>>>
>>>>>>> *proving that this is literally true*
>>>>>>> *The halting problem proofs merely show that*
>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>
>>>>>>
>>>>>>     Nope, since each specific question HAS
>>>>>>     a correct answer, it shows that, by your
>>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>>
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>
>>>>>> There does not exist a solution to the halting problem because
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>>
>>>>>> there exists a D that makes the question:
>>>>>> Does your input halt?
>>>>>> a self-contradictory thus incorrect question.
>>>>>
>>>>>     Where does it say that a Turing
>>>>>     Machine must exsit to do it?
>>>>>
>>>>> *The only reason that no such Turing Machine exists is*
>>>>>
>>>>> For every H in the set of all Turing Machines there exists a D
>>>>> that derives a self-contradictory question for this H in that
>>>>> (a) If this H says that its D will halt, D loops
>>>>> (b) If this H that says its D will loop it halts.
>>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>>
>>>>> *therefore*
>>>>>
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>
>>>>     The issue that you ignore is that you are
>>>>     confalting a set of questions with a question,
>>>>     and are baseing your logic on a strawman,
>>>>
>>>> It is not my mistake. Linguists understand that the
>>>> context of who is asked a question changes the meaning
>>>> of the question.
>>>>
>>>> This can easily be shown to apply to decision problem
>>>> instances as follows:
>>>>
>>>> In that H.true and H.false are the wrong answer when
>>>> D calls H to do the opposite of whatever value that
>>>> either H returns.
>>>>
>>>> Whereas exactly one of H1.true or H1.false is correct
>>>> for this exact same D.
>>>>
>>>> This proves that the question: "Does your input halt?"
>>>> has a different meaning across the H and H1 pairs.
>>>
>>>     It *CAN* if the question ask something about
>>>     the person being questioned.
>>>
>>>     But it *CAN'T* if the question doesn't in any
>>>     way reffer to who you ask.
>>>
>>> D calls H thus D DOES refer to H
>>> D does not call H1 therefore D does not refer to H1
>>>
>>
>>     The QUESTION doesn't refer to the person
>>     being asked?
>>
>>     That D calls H doesn't REFER to the asker,
>>     but to a specific machine.
>>
>> For the H/D pair D does refer to the specific
>> machine being asked: Does your input halt?
>> D knows about and references H.
>
>   Nope. The question "does this input, representing D(D), halt?"
>   does NOT refer to any particular decider, just whatever one
>   it is given to.
>
> *You can ignore that D calls H; nonetheless, when D*
> *calls H this does mean that D <is> referencing H*
>
> The only way that I can tell that I am proving my point
> is that rebuttals from people that are stuck in rebuttal
> mode become increasingly nonsensical.
>

"CALLING H doesn't REFER to the decider deciding it."

Sure it does: with H(D,D), D is calling the decider deciding it.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Date: Mon, 30 Oct 2023 17:29:50 -0700
Message-ID: <uhphpv$3bucv$1@i2pn2.org>

On 10/30/23 5:04 PM, olcott wrote:
> [...]
>>> For the H/D pair D does refer to the specific
>>> machine being asked: Does your input halt?
>>> D knows about and references H.
>>
>>    Nope. The question "does this input, representing D(D), halt?"
>>    does NOT refer to any particular decider, just whatever one
>>    it is given to.
>>
>> *You can ignore that D calls H; nonetheless, when D*
>> *calls H this does mean that D <is> referencing H*
>>
>> The only way that I can tell that I am proving my point
>> is that rebuttals from people that are stuck in rebuttal
>> mode become increasingly nonsensical.
>>
>
>    "CALLING H doesn't REFER to the decider deciding it."
>
> Sure it does: with H(D,D), D is calling the decider deciding it.
>


Re: Does the halting problem actually limit what computers can do?

<uhpicn$mn7v$1@dont-email.me>


https://news.novabbs.org/devel/article-flat.php?id=49302&group=comp.theory#49302

From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 19:39:51 -0500
Organization: A noiseless patient Spider
Lines: 186
Message-ID: <uhpicn$mn7v$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
<uhpdj4$m00m$1@dont-email.me> <uhpgan$md1k$1@dont-email.me>
 by: olcott - Tue, 31 Oct 2023 00:39 UTC

On 10/30/2023 7:04 PM, olcott wrote:
> On 10/30/2023 6:17 PM, olcott wrote:
>> On 10/30/2023 5:46 PM, olcott wrote:
>>> On 10/30/2023 5:10 PM, olcott wrote:
>>>> On 10/30/2023 3:11 PM, olcott wrote:
>>>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>>>> program D will do when D has been programmed to do the
>>>>>>>>>> opposite of
>>>>>>>>>> whatever H says.
>>>>>>>>>>
>>>>>>>>>> H(D) is functional notation that specifies the return value
>>>>>>>>>> from H(D)
>>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does
>>>>>>>>>> not halt
>>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>>>
>>>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>>>
>>>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>>>> The halting problem is defined as an unsatisfiable
>>>>>>>>>> specification thus
>>>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>>>> answer.
>>>>>>>>>>
>>>>>>>>>> What time is it (yes or no)?
>>>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>>>> question. In this case we know to blame the question and not
>>>>>>>>>> the one
>>>>>>>>>> answering it.
>>>>>>>>>>
>>>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>>>> contradict both Boolean return values that H could return then
>>>>>>>>>> the
>>>>>>>>>> question: Does your input halt? is essentially a
>>>>>>>>>> self-contradictory
>>>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>>>
>>>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>>>> no actual
>>>>>>>>>> limit on anyone or anything.
>>>>>>>>>>
>>>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>>>> pathological
>>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>>
>>>>>>>>> *A self-contradictory question is defined as*
>>>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>>>
>>>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>>>> each H*
>>>>>>>>
>>>>>>>> *proving that this is literally true*
>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>
>>>>>>>
>>>>>>>     Nope, since each specific question HAS
>>>>>>>     a correct answer, it shows that, by your
>>>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>>>
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>>
>>>>>>> There does not exist a solution to the halting problem because
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>>
>>>>>>> there exists a D that makes the question:
>>>>>>> Does your input halt?
>>>>>>> a self-contradictory thus incorrect question.
>>>>>>
>>>>>>     Where does it say that a Turing
>>>>>>     Machine must exist to do it?
>>>>>>
>>>>>> *The only reason that no such Turing Machine exists is*
>>>>>>
>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>> that derives a self-contradictory question for this H in that
>>>>>> (a) If this H says that its D will halt, D loops
>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>> each H*
>>>>>>
>>>>>> *therefore*
>>>>>>
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>>     The issue that you ignore is that you are
>>>>>     conflating a set of questions with a question,
>>>>>     and are basing your logic on a strawman.
>>>>>
>>>>> It is not my mistake. Linguists understand that the
>>>>> context of who is asked a question changes the meaning
>>>>> of the question.
>>>>>
>>>>> This can easily be shown to apply to decision problem
>>>>> instances as follows:
>>>>>
>>>>> Both H.true and H.false are the wrong answer when
>>>>> D calls H to do the opposite of whatever value
>>>>> H returns.
>>>>>
>>>>> Whereas exactly one of H1.true or H1.false is correct
>>>>> for this exact same D.
>>>>>
>>>>> This proves that the question: "Does your input halt?"
>>>>> has a different meaning across the H and H1 pairs.
>>>>
>>>>     It *CAN* if the question asks something about
>>>>     the person being questioned.
>>>>
>>>>     But it *CAN'T* if the question doesn't in any
>>>>     way refer to who you ask.
>>>>
>>>> D calls H thus D DOES refer to H
>>>> D does not call H1 therefore D does not refer to H1
>>>>
>>>
>>>     The QUESTION doesn't refer to the person
>>>     being asked?
>>>
>>>     That D calls H doesn't REFER to the asker,
>>>     but to a specific machine.
>>>
>>> For the H/D pair D does refer to the specific
>>> machine being asked: Does your input halt?
>>> D knows about and references H.
>>
>>    Nope. The question "does this input representing
>>    D(D) halt?" does NOT refer to any particular decider,
>>    just whatever one it is given to.
>>
>> *You can ignore that D calls H; nonetheless, when D*
>> *calls H this does mean that D <is> referencing H*
>>
>> The only way that I can tell that I am proving my point
>> is that rebuttals from people that are stuck in rebuttal
>> mode become increasingly nonsensical.
>>
>
>    "CALLING H doesn't REFER to the decider deciding it."
>
> Sure it does: with H(D,D), D is calling the decider deciding it.
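The construction being argued over can be made concrete. Below is a minimal Python sketch, assuming a hypothetical halt decider passed in as a parameter (no real one exists; the names `make_d`, `h_false`, and `d` are illustrative only). It shows why D refutes whichever answer the decider it was built against gives:

```python
def make_d(h):
    """Build the pathological program D from a claimed halt decider h.

    h(prog, arg) is assumed to return True iff prog(arg) would halt.
    """
    def d(x):
        if h(x, x):          # h predicts x(x) halts...
            while True:      # ...so d loops forever, refuting h
                pass
        else:                # h predicts x(x) loops...
            return None      # ...so d halts immediately, refuting h
    return d

# A toy "decider" that always answers False ("does not halt"):
h_false = lambda prog, arg: False
d = make_d(h_false)

# h_false says d(d) never halts, yet d(d) returns immediately,
# so h_false is wrong about the very input built against it.
print(d(d) is None)  # True: d(d) halted
```

The same refutation works for any fixed `h` supplied to `make_d`, which is the diagonal step in the standard proof.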
>


Re: Does the halting problem actually limit what computers can do?

<uhpjfm$3bucv$2@i2pn2.org>


https://news.novabbs.org/devel/article-flat.php?id=49303&group=comp.theory#49303

From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 17:58:30 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhpjfm$3bucv$2@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
<uhpdj4$m00m$1@dont-email.me> <uhpgan$md1k$1@dont-email.me>
<uhpicn$mn7v$1@dont-email.me>
 by: Richard Damon - Tue, 31 Oct 2023 00:58 UTC

On 10/30/23 5:39 PM, olcott wrote:
> On 10/30/2023 7:04 PM, olcott wrote:
>> On 10/30/2023 6:17 PM, olcott wrote:
>>> On 10/30/2023 5:46 PM, olcott wrote:
>>>> On 10/30/2023 5:10 PM, olcott wrote:
>>>>> On 10/30/2023 3:11 PM, olcott wrote:
>>>>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>>>>> No computer program H can correctly predict what another
>>>>>>>>>>> computer
>>>>>>>>>>> program D will do when D has been programmed to do the
>>>>>>>>>>> opposite of
>>>>>>>>>>> whatever H says.
>>>>>>>>>>>
>>>>>>>>>>> H(D) is functional notation that specifies the return value
>>>>>>>>>>> from H(D)
>>>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does
>>>>>>>>>>> not halt
>>>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>>>>
>>>>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>>>>
>>>>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>>>>> The halting problem is defined as an unsatisfiable
>>>>>>>>>>> specification thus
>>>>>>>>>>> isomorphic to a question that has been defined to have no
>>>>>>>>>>> correct
>>>>>>>>>>> answer.
>>>>>>>>>>>
>>>>>>>>>>> What time is it (yes or no)?
>>>>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>>>>> question. In this case we know to blame the question and not
>>>>>>>>>>> the one
>>>>>>>>>>> answering it.
>>>>>>>>>>>
>>>>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>>>>> contradict both Boolean return values that H could return
>>>>>>>>>>> then the
>>>>>>>>>>> question: Does your input halt? is essentially a
>>>>>>>>>>> self-contradictory
>>>>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>>>>
>>>>>>>>>>> The inability to correctly answer an incorrect question
>>>>>>>>>>> places no actual
>>>>>>>>>>> limit on anyone or anything.
>>>>>>>>>>>
>>>>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>>>>> pathological
>>>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>>>
>>>>>>>>>> *A self-contradictory question is defined as*
>>>>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>>>>
>>>>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>>>>> each H*
>>>>>>>>>
>>>>>>>>> *proving that this is literally true*
>>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>>
>>>>>>>>
>>>>>>>>     Nope, since each specific question HAS
>>>>>>>>     a correct answer, it shows that, by your
>>>>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>>>>
>>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>>>
>>>>>>>> There does not exist a solution to the halting problem because
>>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>>> machines*
>>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>>> machines*
>>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>>> machines*
>>>>>>>>
>>>>>>>> there exists a D that makes the question:
>>>>>>>> Does your input halt?
>>>>>>>> a self-contradictory thus incorrect question.
>>>>>>>
>>>>>>>     Where does it say that a Turing
>>>>>>>     Machine must exist to do it?
>>>>>>>
>>>>>>> *The only reason that no such Turing Machine exists is*
>>>>>>>
>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>> each H*
>>>>>>>
>>>>>>> *therefore*
>>>>>>>
>>>>>>> *The halting problem proofs merely show that*
>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>>     The issue that you ignore is that you are
>>>>>>     conflating a set of questions with a question,
>>>>>>     and are basing your logic on a strawman.
>>>>>>
>>>>>> It is not my mistake. Linguists understand that the
>>>>>> context of who is asked a question changes the meaning
>>>>>> of the question.
>>>>>>
>>>>>> This can easily be shown to apply to decision problem
>>>>>> instances as follows:
>>>>>>
>>>>>> Both H.true and H.false are the wrong answer when
>>>>>> D calls H to do the opposite of whatever value
>>>>>> H returns.
>>>>>>
>>>>>> Whereas exactly one of H1.true or H1.false is correct
>>>>>> for this exact same D.
>>>>>>
>>>>>> This proves that the question: "Does your input halt?"
>>>>>> has a different meaning across the H and H1 pairs.
>>>>>
>>>>>     It *CAN* if the question asks something about
>>>>>     the person being questioned.
>>>>>
>>>>>     But it *CAN'T* if the question doesn't in any
>>>>>     way refer to who you ask.
>>>>>
>>>>> D calls H thus D DOES refer to H
>>>>> D does not call H1 therefore D does not refer to H1
>>>>>
>>>>
>>>>     The QUESTION doesn't refer to the person
>>>>     being asked?
>>>>
>>>>     That D calls H doesn't REFER to the asker,
>>>>     but to a specific machine.
>>>>
>>>> For the H/D pair D does refer to the specific
>>>> machine being asked: Does your input halt?
>>>> D knows about and references H.
>>>
>>>    Nope. The question "does this input representing
>>>    D(D) halt?" does NOT refer to any particular decider,
>>>    just whatever one it is given to.
>>>
>>> *You can ignore that D calls H; nonetheless, when D*
>>> *calls H this does mean that D <is> referencing H*
>>>
>>> The only way that I can tell that I am proving my point
>>> is that rebuttals from people that are stuck in rebuttal
>>> mode become increasingly nonsensical.
>>>
>>
>>     "CALLING H doesn't REFER to the decider deciding it."
>>
>> Sure it does: with H(D,D), D is calling the decider deciding it.
>>
>
>    Nope, D is calling the original H, no matter
>    WHAT decider is deciding it.
>
> Duh? calling the original decider when
> the original decider is deciding it
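The H/H1 disagreement in this exchange can also be sketched. Here D is built against one specific toy decider `h`; a second decider `h1`, which D never calls, answers correctly about the same D. All names (`make_d`, `h`, `h1`) are illustrative stand-ins, not real halt deciders:

```python
def make_d(h):
    # D does the opposite of whatever h predicts about its input.
    def d(x):
        if h(x, x):
            while True:      # loop forever if h said "halts"
                pass
        return None          # halt if h said "loops"
    return d

h = lambda prog, arg: True   # the decider D is built against
d = make_d(h)                # d(d) would loop, since h answers True

h1 = lambda prog, arg: False # a different decider, never called by d

# h is wrong about (d, d): it answers True ("halts") but d(d) loops.
# h1 answers False ("loops"), which matches d(d)'s actual behavior,
# even though d makes no reference to h1 at all.
print(h(d, d), h1(d, d))  # True False
```

Note that `d`'s source only mentions the `h` it was constructed from; whether a correct answer exists for (d, d) depends on which decider is asked, which is the point both sides are contesting.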

