Re: Does the halting problem actually limit what computers can do?

https://news.novabbs.org/computers/article-flat.php?id=12390&group=comp.ai.philosophy#12390

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 19:39:51 -0500
Organization: A noiseless patient Spider
Lines: 186
Message-ID: <uhpicn$mn7v$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
<uhpdj4$m00m$1@dont-email.me> <uhpgan$md1k$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 31 Oct 2023 00:39:51 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="3083f5adb2def64a185472ea93a42693";
logging-data="744703"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/WOruO8zQt6JuGsLdjSwn4"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:0vnVxymqdLPumA+OdTxp8BRr+Lo=
In-Reply-To: <uhpgan$md1k$1@dont-email.me>
Content-Language: en-US

On 10/30/2023 7:04 PM, olcott wrote:
> On 10/30/2023 6:17 PM, olcott wrote:
>> On 10/30/2023 5:46 PM, olcott wrote:
>>> On 10/30/2023 5:10 PM, olcott wrote:
>>>> On 10/30/2023 3:11 PM, olcott wrote:
>>>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>>>> program D will do when D has been programmed to do the
>>>>>>>>>> opposite of
>>>>>>>>>> whatever H says.
>>>>>>>>>>
>>>>>>>>>> H(D) is functional notation that specifies the return value
>>>>>>>>>> from H(D)
>>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does
>>>>>>>>>> not halt
>>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>>>
>>>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>>>
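The quantified claim above can be sketched in ordinary code. This is only an illustrative model (the names make_D, H, and D are hypothetical, and a decider is modeled as a plain function rather than a Turing machine):

```python
# Sketch of the diagonal construction: given any claimed halting
# decider H(program, input) -> bool, build a program D that does
# the opposite of whatever H predicts about D.

def make_D(H):
    def D(x):
        if H(D, x):          # H predicts "D halts" ...
            while True:      # ... so D loops forever
                pass
        return None          # H predicts "D loops", so D halts
    return D

# Example: an H that always answers "loops" is refuted by its own D.
def H(program, x):
    return False

D = make_D(H)
print(D(D))   # D(D) halts (prints None), contradicting H's verdict
```

Whichever fixed strategy H uses, the D built from that H contradicts H's answer on D itself.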
>>>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>>>> The halting problem is defined as an unsatisfiable
>>>>>>>>>> specification thus
>>>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>>>> answer.
>>>>>>>>>>
>>>>>>>>>> What time is it (yes or no)?
>>>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>>>> question. In this case we know to blame the question and not
>>>>>>>>>> the one
>>>>>>>>>> answering it.
>>>>>>>>>>
>>>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>>>> contradict both Boolean return values that H could return then
>>>>>>>>>> the
>>>>>>>>>> question: Does your input halt? is essentially a
>>>>>>>>>> self-contradictory
>>>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>>>
>>>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>>>> no actual
>>>>>>>>>> limit on anyone or anything.
>>>>>>>>>>
>>>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>>>> pathological
>>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>>
>>>>>>>>> *A self-contradictory question is defined as*
>>>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>>>
>>>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>>>> each H*
>>>>>>>>
>>>>>>>> *proving that this is literally true*
>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>
>>>>>>>
>>>>>>>     Nope, since each specific question HAS
>>>>>>>     a correct answer, it shows that, by your
>>>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>>>
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>>
>>>>>>> There does not exist a solution to the halting problem because
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>>
>>>>>>> there exists a D that makes the question:
>>>>>>> Does your input halt?
>>>>>>> a self-contradictory thus incorrect question.
>>>>>>
>>>>>>     Where does it say that a Turing
>>>>>>     Machine must exist to do it?
>>>>>>
>>>>>> *The only reason that no such Turing Machine exists is*
>>>>>>
>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>> that derives a self-contradictory question for this H in that
>>>>>> (a) If this H says that its D will halt, D loops
>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>> each H*
>>>>>>
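The (a)/(b) case split above can be checked mechanically. A minimal sketch (hypothetical function name; H's verdict is modeled as a fixed boolean):

```python
# For the H/D pair, try both candidate verdicts H could return and
# check each against what D (built to do the opposite) actually does.

def d_halts_given_verdict(h_says_halts: bool) -> bool:
    # (a) H says "halts" -> D loops;  (b) H says "loops" -> D halts.
    return not h_says_halts

for verdict in (True, False):
    consistent = (verdict == d_halts_given_verdict(verdict))
    print(f"H returns {verdict}: verdict correct? {consistent}")
```

Both iterations report False: neither boolean value H could return matches what D then does.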
>>>>>> *therefore*
>>>>>>
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>>     The issue that you ignore is that you are
>>>>>     conflating a set of questions with a question,
>>>>>     and are basing your logic on a strawman.
>>>>>
>>>>> It is not my mistake. Linguists understand that the
>>>>> context of who is asked a question changes the meaning
>>>>> of the question.
>>>>>
>>>>> This can easily be shown to apply to decision problem
>>>>> instances as follows:
>>>>>
>>>>> H.true and H.false are both the wrong answer when
>>>>> D calls H to do the opposite of whatever value
>>>>> H returns.
>>>>>
>>>>> Whereas exactly one of H1.true or H1.false is correct
>>>>> for this exact same D.
>>>>>
>>>>> This proves that the question: "Does your input halt?"
>>>>> has a different meaning across the H and H1 pairs.
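The H/H1 point can be illustrated with a toy sketch (the names make_D, H, and H1 are hypothetical; deciders are modeled as plain functions): D is hard-wired to consult H, so a different decider H1 can still answer correctly for that same D.

```python
# D contradicts the specific decider it calls (H), not every decider.

def make_D(H):
    def D():
        if H(D):            # D consults H only -- never H1
            while True:     # H says "halts" -> D loops
                pass
        return None         # H says "loops" -> D halts
    return D

def H(program):
    return False            # H's verdict for D: "loops"

D = make_D(H)               # this D halts, so H is wrong about D

def H1(program):
    return True             # H1's verdict for D: "halts" -- correct

print(D() is None, H(D), H1(D))   # True False True
```

H1 gets the right answer for the exact same D because D never consults H1.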
>>>>
>>>>     It *CAN* if the question ask something about
>>>>     the person being questioned.
>>>>
>>>>     But it *CAN'T* if the question doesn't in any
>>>>     way refer to who you ask.
>>>>
>>>> D calls H thus D DOES refer to H
>>>> D does not call H1 therefore D does not refer to H1
>>>>
>>>
>>>     The QUESTION doesn't refer to the person
>>>     being asked?
>>>
>>>     That D calls H doesn't REFER to the asker,
>>>     but to a specific machine.
>>>
>>> For the H/D pair D does refer to the specific
>>> machine being asked: Does your input halt?
>>> D knows about and references H.
>>
>>    Nope. The question "does this input representing
>>    D(D) halt?" does NOT refer to any particular decider,
>>    just whatever one it is given to.
>>
>> *You can ignore that D calls H; none-the-less, when D*
>> *calls H, this does mean that D <is> referencing H*
>>
>> The only way that I can tell that I am proving my point
>> is that rebuttals from people who are stuck in rebuttal
>> mode become increasingly nonsensical.
>>
>
>    "CALLING H doesn't REFER to the decider deciding it."
>
> Sure it does: with H(D,D), D is calling the decider deciding it.
>

Nope, D is calling the original H, no matter
WHAT decider is deciding it.

Duh? D is calling the original decider when
the original decider is deciding it.

Because the halting problem and Tarski Undefinability
(attempting to formalize the notion of truth itself)
are different aspects of the same problem, my same
ideas can be used to automatically divide truth from
disinformation so that climate change denial does not
cause humans to become extinct.

Are you going to perpetually play head games?

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
