Discussion:
A state transition diagram proves ... GOOD PROGRESS
olcott
2024-10-18 14:10:04 UTC
_DDD()
[00002172] 55         push ebp      ; housekeeping
[00002173] 8bec       mov ebp,esp   ; housekeeping
[00002175] 6872210000 push 00002172 ; push DDD
[0000217a] e853f4ffff call 000015d2 ; call HHH(DDD)
[0000217f] 83c404     add esp,+04
[00002182] 5d         pop ebp
[00002183] c3         ret
Size in bytes:(0018) [00002183]
When DDD is correctly emulated by HHH according
to the semantics of the x86 language DDD cannot
possibly reach its own machine address [00002183]
no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
+------------------------------------------------------+
That may not line up the same way when viewed:
https://en.wikipedia.org/wiki/State_diagram
Except that 0000217a doesn't go to 00002172, but to 000015d2
IS THIS OVER YOUR HEAD?
What is the first machine address of DDD that HHH
emulating itself emulating DDD would reach?
Yes, HHH EMULATES the code at that address,
Which HHH emulates what code at which address?
Each one, just once, as you should know but ignore.
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not the
outer one that is doing that emulation of HHH.
Then the HHH that the second HHH is emulating will, but neither of the
two outer HHHs will.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
It isn't the call instruction at 0000217a, as that tells it to go into HHH.
00002172
00002173
00002175
0000217a
conditional emulation of 00002172
conditional emulation of 00002173
conditional emulation of 00002175
conditional emulation of 0000217a
CE of CE of 00002172
...
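The trace pattern above can be sketched as a toy simulator (Python; the depth limit below is an assumed stand-in for HHH's real abort criterion, not the actual x86utm code):

```python
# Toy model of the trace above: each emulation level steps through DDD's
# four instruction addresses exactly once, then the call at 0000217a
# starts one more nested conditional emulation (CE) instead of ever
# jumping back to 00002172. The depth cutoff is an assumed stand-in for
# HHH's real abort criterion.

DDD = ["00002172", "00002173", "00002175", "0000217a"]

def emulate(depth, max_depth, trace):
    prefix = "CE of " * depth            # depth 0 is the outermost emulation
    for addr in DDD:
        trace.append(prefix + addr)      # the entry address appears once per level
    if depth + 1 < max_depth:
        emulate(depth + 1, max_depth, trace)   # the call into HHH nests again
    else:
        trace.append(prefix + "emulation aborted here")

trace = []
emulate(0, 3, trace)
print("\n".join(trace))
```

Note that 00002183 (the ret) never appears at any level of this toy trace, matching the list above.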
OK great this is finally good progress.
The "state" never repeats, it is always a new state,
Every emulated DDD has an identical process state at every point
in its emulation trace when adjusting for different top of stack values.
and if HHH decides
to abort its emulation, it also should know that every level of
conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed
with this. Let's see if you can understand my reasoning.

Not exactly. Each HHH can only abort its emulation when its
abort criteria has been met. The outermost HHH can see one
more execution trace than the next inner one thus meets its
abort criteria first.
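The "one more execution trace" claim can be sketched as a toy counter model (Python; the "entry seen twice" rule is an assumed stand-in for the real abort criterion):

```python
# Toy model of the claim above: every enclosing emulator also observes
# the entry of each nested emulation it contains, so the outermost level
# always holds one more observation of 00002172 than the level directly
# inside it, and therefore meets a "seen the entry twice" criterion first.

ABORT_AFTER = 2   # assumed criterion: abort once 00002172 is seen twice

def first_to_meet_criterion():
    seen = []     # seen[d] = times level d has observed the entry address
    depth = 0
    while True:
        seen.append(0)                   # a new nested emulation of DDD begins
        for d in range(depth + 1):
            seen[d] += 1                 # every watching level observes it
        for d in range(depth + 1):       # check from the outermost inward
            if seen[d] >= ABORT_AFTER:
                return d, seen
        depth += 1

level, seen = first_to_meet_criterion()
print("level meeting the criterion first:", level)   # the outermost, level 0
print("observations per level:", seen)               # outer has one more
```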
Obviously a simulator has access to the internal state
(tape contents etc.) of the simulated machine. No problem
there.
This seems to indicate that the Turing machine UTM version of
HHH can somehow see each of the state transitions of the DDD
resulting from emulating its own Turing machine description
emulating DDD.

Even though this is a little different for Turing machines it
is equivalent in essence to HHH being able to see the steps of
the DDD resulting from HHH emulating itself emulating this DDD.

*Joes can't seem to understand this*
Only the outer-most HHH meets its abort criteria first, thus
unless it aborts as soon as it meets this criteria none of
them will ever abort.
and thus the
call HHH at 0000217a will be returned from, and HHH has no idea what
will happen after that, so it KNOWS it is ignorant of the answer.
That you don't quite yet understand the preceding points
will make it impossible for you to understand any reply
to the above point.
--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
joes
2024-10-18 14:41:55 UTC
Post by olcott
When DDD is correctly emulated by HHH according to the semantics
of the x86 language DDD cannot possibly reach its own machine
address [00002183] no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
Except that 0000217a doesn't go to 00002172, but to 000015d2
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not the
outer one that is doing that emulation of HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
00002172 00002173 00002175 0000217a conditional emulation of 00002172
conditional emulation of 00002173 conditional emulation of 00002175
conditional emulation of 0000217a CE of CE of 00002172 ...
OK great this is finally good progress.
The more interesting part is HHH simulating itself, specifically the
if(Root) check on line 502.
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed with
this.
He hasn't.
Post by olcott
Obviously a simulator has access to the internal state (tape contents
etc.) of the simulated machine. No problem there.
This seems to indicate that the Turing machine UTM version of HHH can
somehow see each of the state transitions of the DDD resulting from
emulating its own Turing machine description emulating DDD.
Of course. It needs to, in order to simulate it. Strictly speaking
it has no idea of its simulation of a simulation two levels down,
only of the immediate simulation; the rest is just part of whatever
program the simulated simulator is simulating, which happens to be
itself.
Post by olcott
*Joes can't seem to understand this*
Only the outer-most HHH meets its abort criteria first, thus unless it
aborts as soon as it meets this criteria none of them will ever abort.
This is very simple to understand. Almost as simple as: even if only
the outermost HHH didn't abort, it would still halt, since it is
simulating a halting program: the nested version will abort.
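The point above can be sketched with a toy model (Python; the depth-2 cutoff is an assumed stand-in for HHH's actual abort logic):

```python
# Toy model of the argument above: the outermost simulation is
# unconditional (it never aborts), yet the run still finishes, because
# the HHH that DDD calls aborts its own nested emulation and returns,
# after which DDD falls through to its final ret.

ABORT_DEPTH = 2   # assumed: HHH gives up after this much nesting

def HHH(depth):
    if depth >= ABORT_DEPTH:
        return "aborted"        # abort criterion met: stop emulating, return
    return DDD(depth + 1)       # otherwise emulate DDD, which calls HHH again

def DDD(depth):
    HHH(depth)                  # the call at 0000217a
    return "ret at 00002183 reached"

result = DDD(0)                 # outermost pure simulation: just run it out
print(result)
```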
Post by olcott
and thus the call HHH at 0000217a will be returned from, and HHH has
no idea what will happen after that, so it KNOWS it is ignorant of the
answer.
--
Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
It is not guaranteed that n+1 exists for every n.
olcott
2024-10-18 16:39:52 UTC
Post by joes
Post by olcott
When DDD is correctly emulated by HHH according to the semantics
of the x86 language DDD cannot possibly reach its own machine
address [00002183] no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
Except that 0000217a doesn't go to 00002172, but to 000015d2
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not the
outer one that is doing that emulation of HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
00002172 00002173 00002175 0000217a conditional emulation of 00002172
conditional emulation of 00002173 conditional emulation of 00002175
conditional emulation of 0000217a CE of CE of 00002172 ...
OK great this is finally good progress.
The more interesting part is HHH simulating itself, specifically the
if(Root) check on line 502.
That has nothing to do with any aspect of the emulation
until HHH has correctly emulated itself emulating DDD.
Post by joes
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed with
this.
He hasn't.
Post by olcott
Obviously a simulator has access to the internal state (tape contents
etc.) of the simulated machine. No problem there.
This seems to indicate that the Turing machine UTM version of HHH can
somehow see each of the state transitions of the DDD resulting from
emulating its own Turing machine description emulating DDD.
Of course. It needs to, in order to simulate it. Strictly speaking
it has no idea of its simulation of a simulation two levels down,
only of the immediate simulation; the rest is just part of whatever
program the simulated simulator is simulating, which happens to be
itself.
From the concrete execution trace of DDD emulated by HHH
according to the semantics of the x86 language people with
sufficient technical competence can see that the halt status
criteria that professor Sipser agreed to has been met.

<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
If simulating halt decider H correctly simulates its input D
until H correctly determines that its simulated D would never
stop running unless aborted then

H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>

I will paraphrase this to use clearer language that directly applies
to HHH and DDD.

If emulating termination analyzer HHH emulates its input DDD
according to the semantics of the x86 language (including HHH
emulating itself emulating DDD) until HHH correctly determines
that its emulated DDD would never stop running unless aborted
then ...

HHH can abort its emulation of DDD and correctly report that DDD
specifies a non-terminating sequence of x86 instructions.
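The decision rule in that paraphrase can be sketched roughly as follows (Python; the repetition check and the generator-of-states shape are assumptions for illustration, not HHH's real criterion):

```python
# A rough sketch of the rule paraphrased above: simulate step by step;
# if the simulated program reaches its end, report halting; if a
# non-halting pattern is detected, abort and report non-halting.
# The "same state seen twice" check is a hypothetical pattern detector.

def simulating_analyzer(program, max_steps=10_000):
    sim = iter(program())          # assumed: the program yields one state per step
    history = []
    for _ in range(max_steps):
        try:
            history.append(next(sim))
        except StopIteration:
            return True            # simulated program reached its final state
        if detects_repetition(history):
            return False           # would never stop unless aborted: abort, report non-halting
    return None                    # criterion never met within the budget

def detects_repetition(history):
    return len(history) != len(set(history))   # hypothetical: a state repeated

def loops():                       # a two-state infinite loop
    while True:
        yield "state_A"

def finishes():                    # a run that terminates
    yield "state_1"
    yield "state_2"

print(simulating_analyzer(loops))     # reported non-halting
print(simulating_analyzer(finishes))  # reported halting
```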
Post by joes
Post by olcott
*Joes can't seem to understand this*
Only the outer-most HHH meets its abort criteria first, thus unless it
aborts as soon as it meets this criteria none of them will ever abort.
This is very simple to understand. Almost as simple as: even if only
the outermost HHH didn't abort, it would still halt,
Yet that is based on the factually incorrect assumption
that every instance of HHH does not use the exact same
machine code.

Since you should know that this assumption is factually
incorrect I count it as flat-out dishonesty on your part.
Post by joes
since it is
simulating a halting program: the nested version will abort.
Post by olcott
and thus the call HHH at 0000217a will be returned from, and HHH has
no idea what will happen after that, so it KNOWS it is ignorant of the
answer.
--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
olcott
2024-10-18 16:44:15 UTC
Post by olcott
Post by joes
Post by olcott
When DDD is correctly emulated by HHH according to the semantics
of the x86 language DDD cannot possibly reach its own machine
address [00002183] no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
Except that 0000217a doesn't go to 00002172, but to 000015d2
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not the
outer one that is doing that emulation of HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
00002172 00002173 00002175 0000217a conditional emulation of 00002172
conditional emulation of 00002173 conditional emulation of 00002175
conditional emulation of 0000217a CE of CE of 00002172 ...
OK great this is finally good progress.
The more interesting part is HHH simulating itself, specifically the
if(Root) check on line 502.
That has nothing to do with any aspect of the emulation
until HHH has correctly emulated itself emulating DDD.
Post by joes
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed with
this.
He hasn't.
Post by olcott
  > Obviously a simulator has access to the internal state (tape
  > contents etc.) of the simulated machine. No problem there.
This seems to indicate that the Turing machine UTM version of HHH can
somehow see each of the state transitions of the DDD resulting from
emulating its own Turing machine description emulating DDD.
Of course. It needs to, in order to simulate it. Strictly speaking
it has no idea of its simulation of a simulation two levels down,
only of the immediate simulation; the rest is just part of whatever
program the simulated simulator is simulating, which happens to be
itself.
From the concrete execution trace of DDD emulated by HHH
according to the semantics of the x86 language people with
sufficient technical competence can see that the halt status
criteria that professor Sipser agreed to has been met.
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its input D
    until H correctly determines that its simulated D would never
    stop running unless aborted then
    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
I will paraphrase this to use clearer language that directly applies
to HHH and DDD.
    If emulating termination analyzer HHH emulates its input DDD
    according to the semantics of the x86 language (including HHH
    emulating itself emulating DDD) until HHH correctly determines
    that its emulated DDD would never stop running unless aborted
    then ...
    HHH can abort its emulation of DDD and correctly report that DDD
    specifies a non-terminating sequence of x86 instructions.
Post by joes
Post by olcott
*Joes can't seem to understand this*
Only the outer-most HHH meets its abort criteria first, thus unless it
aborts as soon as it meets this criteria none of them will ever abort.
This is very simple to understand. Almost as simple as: even if only
the outermost HHH didn't abort, it would still halt,
Yet that is based on the factually incorrect assumption
that every instance of HHH does not use the exact same
machine code.
Since you should know that this assumption is factually
incorrect I count it as flat-out dishonesty on your part.
Post by joes
since it is
simulating a halting program: the nested version will abort.
Post by olcott
and thus the call HHH at 0000217a will be returned from, and HHH has
no idea what will happen after that, so it KNOWS it is ignorant of the
answer.
--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
joes
2024-10-18 17:00:38 UTC
Post by joes
Post by olcott
When DDD is correctly emulated by HHH according to the semantics
of the x86 language DDD cannot possibly reach its own machine
address [00002183] no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
Except that 0000217a doesn't go to 00002172, but to 000015d2
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not
the outer one that is doing that emulation of HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
00002172 00002173 00002175 0000217a conditional emulation of 00002172
conditional emulation of 00002173 conditional emulation of 00002175
conditional emulation of 0000217a CE of CE of 00002172 ...
OK great this is finally good progress.
The more interesting part is HHH simulating itself, specifically the
if(Root) check on line 502.
That has nothing to do with any aspect of the emulation until HHH has
correctly emulated itself emulating DDD.
What? That is part of HHH, not DDD.
Post by joes
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed with
this.
He hasn't.
Post by olcott
Obviously a simulator has access to the internal state (tape
contents etc.) of the simulated machine. No problem there.
This seems to indicate that the Turing machine UTM version of HHH can
somehow see each of the state transitions of the DDD resulting from
emulating its own Turing machine description emulating DDD.
Of course. It needs to, in order to simulate it. Strictly speaking it
has no idea of its simulation of a simulation two levels down, only of
the immediate simulation; the rest is just part of whatever program the
simulated simulator is simulating, which happens to be itself.
From the concrete execution trace of DDD emulated by HHH
according to the semantics of the x86 language people with sufficient
technical competence can see that the halt status criteria that
professor Sipser agreed to has been met.
If emulating termination analyzer HHH emulates its input DDD
until HHH determines that
its emulated DDD would never stop running unless aborted ...
But it would.
Post by joes
Post by olcott
*Joes can't seem to understand this*
Only the outer-most HHH meets its abort criteria first, thus unless it
aborts as soon as it meets this criteria none of them will ever abort.
This is very simple to understand. Almost as simple as: even if only
the outermost HHH didn't abort, it would still halt,
Yet that is based on the factually incorrect assumption that every
instance of HHH does not use the exact same machine code.
Same as the outer HHH returning that the inner ones wouldn't.
Post by joes
since it is simulating a halting program: the nested version will
abort.
Post by olcott
and thus the call HHH at 0000217a will be returned from, and HHH
has no idea what will happen after that, so it KNOWS it is ignorant
of the answer.
--
Am Sat, 20 Jul 2024 12:35:31 +0000 schrieb WM in sci.math:
It is not guaranteed that n+1 exists for every n.
Richard Damon
2024-10-18 23:19:09 UTC
Post by olcott
Post by joes
Post by olcott
When DDD is correctly emulated by HHH according to the semantics
of the x86 language DDD cannot possibly reach its own machine
address [00002183] no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
Except that 0000217a doesn't go to 00002172, but to 000015d2
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not the
outer one that is doing that emulation of HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
00002172 00002173 00002175 0000217a conditional emulation of 00002172
conditional emulation of 00002173 conditional emulation of 00002175
conditional emulation of 0000217a CE of CE of 00002172 ...
OK great this is finally good progress.
The more interesting part is HHH simulating itself, specifically the
if(Root) check on line 502.
That has nothing to do with any aspect of the emulation
until HHH has correctly emulated itself emulating DDD.
Post by joes
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed with
this.
He hasn't.
Post by olcott
  > Obviously a simulator has access to the internal state (tape
  > contents etc.) of the simulated machine. No problem there.
This seems to indicate that the Turing machine UTM version of HHH can
somehow see each of the state transitions of the DDD resulting from
emulating its own Turing machine description emulating DDD.
Of course. It needs to, in order to simulate it. Strictly speaking
it has no idea of its simulation of a simulation two levels down,
only of the immediate simulation; the rest is just part of whatever
program the simulated simulator is simulating, which happens to be
itself.
From the concrete execution trace of DDD emulated by HHH
according to the semantics of the x86 language people with
sufficient technical competence can see that the halt status
criteria that professor Sipser agreed to has been met.
Nope.

Proven previously and you accepted by default by not pointing out an error.

Your HHH neither "correctly simulated" per his definitions nor correctly
predicted the behavior of such a simulation, and thus never achieved the
required criteria.

All you have done is prove that you lie.
Post by olcott
<MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
    If simulating halt decider H correctly simulates its input D
    until H correctly determines that its simulated D would never
    stop running unless aborted then
    H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
</MIT Professor Sipser agreed to ONLY these verbatim words 10/13/2022>
I will paraphrase this to use clearer language that directly applies
to HHH and DDD.
    If emulating termination analyzer HHH emulates its input DDD
    according to the semantics of the x86 language (including HHH
    emulating itself emulating DDD) until HHH correctly determines
    that its emulated DDD would never stop running unless aborted
    then ...
    HHH can abort its emulation of DDD and correctly report that DDD
    specifies a non-terminating sequence of x86 instructions.
Post by joes
Post by olcott
*Joes can't seem to understand this*
Only the outer-most HHH meets its abort criteria first, thus unless it
aborts as soon as it meets this criteria none of them will ever abort.
This is very simple to understand. Almost as simple as: even if only
the outermost HHH didn't abort, it would still halt,
Yet that is based on the factually incorrect assumption
that every instance of HHH does not use the exact same
machine code.
Since you should know that this assumption is factually
incorrect I count it as flat-out dishonesty on your part.
Post by joes
since it is
simulating a halting program: the nested version will abort.
Post by olcott
and thus the call HHH at 0000217a will be returned from, and HHH has
no idea what will happen after that, so it KNOWS it is ignorant of the
answer.
olcott
2024-10-19 01:04:11 UTC
Post by Richard Damon
Post by olcott
Post by joes
Post by olcott
When DDD is correctly emulated by HHH according to the semantics
of the x86 language DDD cannot possibly reach its own machine
address [00002183] no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
Except that 0000217a doesn't go to 00002172, but to 000015d2
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not the
outer one that is doing that emulation of HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
00002172 00002173 00002175 0000217a conditional emulation of 00002172
conditional emulation of 00002173 conditional emulation of 00002175
conditional emulation of 0000217a CE of CE of 00002172 ...
OK great this is finally good progress.
The more interesting part is HHH simulating itself, specifically the
if(Root) check on line 502.
That has nothing to do with any aspect of the emulation
until HHH has correctly emulated itself emulating DDD.
Post by joes
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed with
this.
He hasn't.
Post by olcott
  > Obviously a simulator has access to the internal state (tape
  > contents etc.) of the simulated machine. No problem there.
This seems to indicate that the Turing machine UTM version of HHH can
somehow see each of the state transitions of the DDD resulting from
emulating its own Turing machine description emulating DDD.
Of course. It needs to, in order to simulate it. Strictly speaking
it has no idea of its simulation of a simulation two levels down,
only of the immediate simulation; the rest is just part of whatever
program the simulated simulator is simulating, which happens to be
itself.
 From the concrete execution trace of DDD emulated by HHH
according to the semantics of the x86 language people with
sufficient technical competence can see that the halt status
criteria that professor Sipser agreed to has been met.
Nope.
Proven previously and you accepted by default by not pointing out an error.
Your HHH neither "correctly simulated" per his definitions nor correctly
predicted the behavior of such a simulation, and thus never achieved the
required criteria.
So you are still trying to stupidly get away with saying
that when a finite string of x86 code is emulated according
to the semantics of the x86 language

(including HHH emulating itself emulating DDD)
THAT THE EMULATION CAN BE WRONG ???
--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
Richard Damon
2024-10-19 02:49:12 UTC
Post by olcott
Post by Richard Damon
Post by olcott
Post by joes
Post by olcott
When DDD is correctly emulated by HHH according to the semantics
of the x86 language DDD cannot possibly reach its own machine
address [00002183] no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
Except that 0000217a doesn't go to 00002172, but to 000015d2
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not the
outer one that is doing that emulation of HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
00002172 00002173 00002175 0000217a conditional emulation of 00002172
conditional emulation of 00002173 conditional emulation of 00002175
conditional emulation of 0000217a CE of CE of 00002172 ...
OK great this is finally good progress.
The more interesting part is HHH simulating itself, specifically the
if(Root) check on line 502.
That has nothing to do with any aspect of the emulation
until HHH has correctly emulated itself emulating DDD.
Post by joes
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed with
this.
He hasn't.
Post by olcott
  > Obviously a simulator has access to the internal state (tape
  > contents etc.) of the simulated machine. No problem there.
This seems to indicate that the Turing machine UTM version of HHH can
somehow see each of the state transitions of the DDD resulting from
emulating its own Turing machine description emulating DDD.
Of course. It needs to, in order to simulate it. Strictly speaking
it has no idea of its simulation of a simulation two levels down,
only of the immediate simulation; the rest is just part of whatever
program the simulated simulator is simulating, which happens to be
itself.
 From the concrete execution trace of DDD emulated by HHH
according to the semantics of the x86 language people with
sufficient technical competence can see that the halt status
criteria that professor Sipser agreed to has been met.
Nope.
Proven previously and you accepted by default by not pointing out an error.
Your HHH neither "correctly simulated" per his definitions nor
correctly predicted the behavior of such a simulation, and thus never
achieved the required criteria.
So you are still trying to stupidly get away with saying
that when a finite string of x86 code is emulated according
to the semantics of the x86 language
(including HHH emulating itself emulating DDD)
THAT THE EMULATION CAN BE WRONG ???
It is WRONG for the determination of the final behavior of DDD when it
is aborted.

Remember, the "semantics of the x86 processor" includes the fact that
the x86 processor WON'T STOP until it reaches a terminal instruction,
and thus stopping before that isn't actually correct.

If you are willing to admit partial behavior, it can be correct, but
saying it will "never" do something, is unsupported.
Richard Damon
2024-10-18 23:06:19 UTC
Post by olcott
_DDD()
[00002172] 55         push ebp      ; housekeeping
[00002173] 8bec       mov ebp,esp   ; housekeeping
[00002175] 6872210000 push 00002172 ; push DDD
[0000217a] e853f4ffff call 000015d2 ; call HHH(DDD)
[0000217f] 83c404     add esp,+04
[00002182] 5d         pop ebp
[00002183] c3         ret
Size in bytes:(0018) [00002183]
When DDD is correctly emulated by HHH according
to the semantics of the x86 language DDD cannot
possibly reach its own machine address [00002183]
no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
+------------------------------------------------------+
That may not line up that same way when view
https://en.wikipedia.org/wiki/State_diagram
Except that 0000217a doesn't go to 00002172, but to 000015d2
IS THIS OVER YOUR HEAD?
What is the first machine address of DDD that HHH
emulating itself emulating DDD would reach?
Yes, HHH EMULATES the code at that address,
Which HHH emulates what code at which address?
Each one, just once, as you should know but ignore.
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not
the outer one that is doing that emulation of HHH.
Then the HHH that the second HHH is emulating will, but neither of the
two outer HHHs will.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
It isn't the call instruction at 0000217a, as that tells it to go into HHH.
00002172
00002173
00002175
0000217a
conditional emulation of 00002172
conditional emulation of 00002173
conditional emulation of 00002175
conditional emulation of 0000217a
CE of CE of 00002172
...
OK great this is finally good progress.
The "state" never repeats, it is always a new state,
Every emulated DDD has an identical process state at every point
in its emulation trace when adjusting for different top of stack values.
Nope, remember, each of those levels is CONDITIONAL, and thus, if HHH
is defined to abort its simulation, as it is, then none of the HHHs
actually NEED to abort, as if they were changed (without changing their
input, so it still calls the HHH that does abort, per the definition of
the problem), then that emulation would go to the point where it sees
the emulator it is emulating abort its emulation and return.

Your LIE is based on trying to change the input when you do this
hypothetical step, which you are not allowed to do, as then you are just
showing you are LYING about the specified behavior of DDD, which
includes ALL of the code it uses, including that of HHH.
Post by olcott
and if HHH decides to abort its emulation, it also should know that
every level of conditional emulation it sees will also do the same thing,
If I understand his words correctly Mike has already disagreed
with this. Let's see if you can understand my reasoning.
Big *IF*. I believe he has pointed out that you don't understand his
words correctly.
Post by olcott
Not exactly. Each HHH can only abort its emulation when its
abort criteria has been met. The outermost HHH can see one
more execution trace than the next inner one thus meets its
abort criteria first.
Sort of. Each HHH *WILL* abort its emulation when its abort criteria has
been met, and the abortion of the emulation of that machine doesn't
"stop" the behavior of that machine, only the behavior of the partial
emulation, which isn't the same thing.
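The distinction being drawn here, between a truncated partial trace and the behavior of the machine itself, can be sketched as follows (Python; the step budget and the depth-2 inner cutoff are assumed toy machinery):

```python
# Toy contrast for the point above: aborting a *partial* emulation only
# truncates the trace the emulator collects; it does not change what the
# emulated machine itself does when run to completion. The step budget
# models the aborting emulator; depth 2 models where the inner HHH stops.

def run_DDD(step_limit):
    """Run a toy DDD under a step budget; return (trace, finished)."""
    trace = []
    def DDD(depth):
        trace.append("00002172")          # entry address
        if len(trace) >= step_limit:
            return False                  # the emulator aborts: trace truncated
        if depth < 2:                     # HHH conditionally emulates one level
            if not DDD(depth + 1):        # down, stopping (in this toy) at depth 2
                return False
        trace.append("00002183")          # the final ret
        return True

    finished = DDD(0)
    return trace, finished

partial, done_partial = run_DDD(step_limit=2)    # an aborted, partial emulation
full, done_full = run_DDD(step_limit=99)         # the same machine, run out
print(done_partial, partial)    # the truncated trace never shows 00002183
print(done_full, full)          # the machine itself does reach 00002183
```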
Post by olcott
Obviously a simulator has access to the internal state
(tape contents etc.) of the simulated machine. No problem
there.
Yes, it could be possible for HHH to somehow figure out what HHH is doing
and detect that it has started another layer of emulation. The issue
is that HHH needs to KNOW that it is emulating a copy of itself, which
you can only detect by using a non-Turing-complete system.
Post by olcott
This seems to indicate that the Turing machine UTM version of
HHH can somehow see each of the state transitions of the DDD
resulting from emulating its own Turing machine description
emulating DDD.
The problem is detecting that it *IS* running a copy of itself. This is
the problem I have been pointing out to you for years.
Post by olcott
Even though this is a little different for Turing machines it
is equivalent in essence to HHH being able to see the steps of
the DDD resulting from HHH emulating itself emulating this DDD.
But you can only detect that you are emulating HHH because of your
non-Turing-complete system.
Post by olcott
*Joes can't seem to understand this*
Only the outer-most HHH meets its abort criteria first, thus
unless it aborts as soon as it meets this criteria none of
them will ever abort.
No, it only meets its criteria first in its reference frame. The machine
that it is emulating doesn't know (and its behavior doesn't care) that
it is being emulated, and it just continues till it gets to the same point.

You just don't seem to understand that "behavior" doesn't require the
actual performance of it, but is a mathematical concept that comes into
existence the moment the program is created. We just don't KNOW what
that behavior is until we do something to find out about it.
Post by olcott
and thus the call to HHH at 0000217a will be returned from, and HHH has
no idea what will happen after that, so it KNOWS it is ignorant of the
answer.
That you don't quite yet understand the preceding points
will make it impossible for you to understand any reply
to the above point.
No, YOU are just showing you don't understand the difference between the
TRUTH of the behavior, which was established when the program was
created, and the KNOWLEDGE of that behavior, which HHH never actually
gets, because it stops too soon.

But then, that has been one of your problems for years, not
understanding the difference between Truth and Knowledge.
olcott
2024-10-19 00:52:57 UTC
Permalink
Post by Richard Damon
Post by olcott
_DDD()
[00002172] 55         push ebp      ; housekeeping
[00002173] 8bec       mov ebp,esp   ; housekeeping
[00002175] 6872210000 push 00002172 ; push DDD
[0000217a] e853f4ffff call 000015d2 ; call HHH(DDD)
[0000217f] 83c404     add esp,+04
[00002182] 5d         pop ebp
[00002183] c3         ret
Size in bytes:(0018) [00002183]
When DDD is correctly emulated by HHH according
to the semantics of the x86 language DDD cannot
possibly reach its own machine address [00002183]
no matter what HHH does.
+-->[00002172]-->[00002173]-->[00002175]-->[0000217a]--+
+------------------------------------------------------+
That may not line up the same way when viewed as a
https://en.wikipedia.org/wiki/State_diagram
Except that 0000217a doesn't go to 00002172, but to 000015d2
IS THIS OVER YOUR HEAD?
What is the first machine address of DDD that HHH
emulating itself emulating DDD would reach?
Yes, HHH EMULATES the code at that address,
Which HHH emulates what code at which address?
Everyone, just once, which you should know, but ignore.
The Emulating HHH sees those addresses at its beginning and then never
again.
Then the HHH that it is emulating will see those addresses, but not
the outer one that is doing that emulation of HHH.
Then the HHH that the second HHH is emulating will, but neither of
the outer 2 HHH.
And so on.
Which HHH do you think EVER gets back to 00002172?
What instruction do you think that it emulates that would tell it to do so?
It isn't the call instruction at 0000217a, as that tells it to go into HHH.
00002172
00002173
00002175
0000217a
conditional emulation of 00002172
conditional emulation of 00002173
conditional emulation of 00002175
conditional emulation of 0000217a
CE of CE of 00002172
...
OK great this is finally good progress.
The "state" never repeats, it is always a new state,
Every emulated DDD has an identical process state at every point
in its emulation trace when adjusting for different top of stack values.
Nope, remember, each of those levels are CONDITIONAL,
*There are THREE different questions here*
(1) Can DDD emulated by HHH according to the semantics
of the x86 language possibly reach its machine address
[00002183] no matter what HHH does?

(2) Does HHH correctly detect and report the above?

(3) Does HHH do (2) as a Turing computable function?
--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
Richard Damon
2024-10-19 02:49:07 UTC
Permalink
Post by olcott
Post by Richard Damon
Post by olcott
*There are THREE different questions here*
(1) Can DDD emulated by HHH according to the semantics
    of the x86 language possibly reach its machine address
    [00002183] no matter what HHH does?
Ambiguous question, as pointed out previously.

A) Do you mean the behavior of the PROGRAM DDD, of which HHH has
emulated a copy?

In that case, the answer is: if HHH aborts its emulation and returns,
YES; if HHH never aborts its emulation, and thus never returns an answer
to anyone, NO.

B) If you mean: does the emulation done by HHH ever reach that place? No.
Post by olcott
(2) Does HHH correctly detect and report the above?
No, because that isn't what you claim HHH is doing, so it can't be
correct about that.

We need to look at the two possible interpretations of question 1.

If you mean A, then since HHH says no but the correct answer is yes, it
is wrong.

If you mean B, and your question is whether HHH can predict that it
can't reach the final state, but it only needs to be right for this one
input, then the problem is that the question has become trivial: if it
doesn't need to actually know anything about the input, it can just be
programmed to say no.

Also, we can make a trivial HHH, that just does the absolute minimum,
then aborts and returns no unconditionally to be correct, showing your
problem isn't interesting.

Or, your "problem" has left the domain of Program Theory, because you
don't consider DDD to be an actual program, at which point it also
becomes much less interesting.
Post by olcott
(3) Does HHH do (2) it as a Turing computable function?
No, because the method your HHH uses isn't possible to express as a
Turing Machine with a separate input tape with the full representation
of the program DDD.

This assumes that you implied intent is that DDD is given as a
description of the equivalent Turing Machine, and not just a text string
that doesn't represent a full program, or that HHH doesn't need to
olcott
2024-10-20 21:59:53 UTC
Permalink
A "First Principles" approach that you refer to STARTS with a study
and understanding of the actual basic principles of the system. That
would be things like the basic definitions of things like "Program",
"Halting", "Deciding", "Turing Machine", and then from those concepts,
sees what can be done, without trying to rely on the ideas that
others have used, but see if they went down a wrong track, and there
was a different path in the same system.
The actual barest essence for formal systems and computations
is finite string transformation rules applied to finite strings.
So, show what you can do with that.
Note, WHAT the rules can be is very important, and seems to be beyond
your ability to reason about.
After all, all a Turing Machine is is a way of defining a finite string
transformation computation.
The next minimal increment of further elaboration is that some
finite strings have an assigned or derived property of Boolean
true. At this point of elaboration Boolean true has no more
semantic meaning than FooBar.
And since you can't do the first step, you don't understand what that
actually means.
As soon as any algorithm is defined to transform any finite
string into any other finite string we have conclusively
proven that algorithms can transform finite strings.

The simplest formal system that I can think of transforms
pairs of strings of ASCII digits into their sum. This algorithm
can be easily specified in C.
Some finite strings are assigned the FooBar property and other
finite strings derive the FooBar property by applying FooBar
preserving operations to the first set.
But, since we have an infinite number of finite strings to be assigned
values, we can't just enumerate that set.
The infinite set of pairs of finite strings of ASCII digits
can be easily transformed into their corresponding sum for
arbitrary elements of this infinite set.
Once finite strings have the FooBar property we can define
computations that apply FooBar preserving operations to
determine if other finite strings also have this FooBar property.
It seems you never even learned the First Principles of Logic
Systems, because you don't understand that Formal Systems are built
from their definitions, and those definitions cannot be changed while
staying in the same system.
The actual First Principles are as I say they are: Finite string
transformation rules applied to finite strings. What you are
referring to are subsequent principles that have added more on
top of the actual first principles.
But it seems you never actually came up with actual "First Principles"
about what could be done at your first step, and thus you have no idea
what can be done at each of the later steps.
Also, you then want to talk about fields that HAVE defined what those
mean, but you don't understand that, so your claims about what they can
do are just baseless.
All you have done is proved that you don't really understand what you
are talking about, but try to throw around jargon that you don't
actually understand either, which makes so many of your statements just
false or meaningless.
When we establish the ultimate foundation of computation and
formal systems as transformations of finite strings having the
FooBar (or any other property) by FooBar preserving operations
into other finite strings then the membership algorithm would
seem to always be computable.

There would either be some finite sequence of FooBar preserving
operations that derives X from the set of finite strings defined
to have the FooBar property or not.
Richard Damon
2024-10-21 03:26:44 UTC
Permalink
Post by olcott
A "First Principles" approach that you refer to STARTS with an study
and understanding of the actual basic principles of the system. That
would be things like the basic definitions of things like "Program",
"Halting" "Deciding", "Turing Machine", and then from those
concepts, sees what can be done, without trying to rely on the ideas
that others have used, but see if they went down a wrong track, and
the was a different path in the same system.
The actual barest essence for formal systems and computations
is finite string transformation rules applied to finite strings.
So, show what you can do with that.
Note, WHAT the rules can be is very important, and seems to be beyond
you ability to reason about.
After all, all a Turing Machine is is a way of defining a finite
stting transformation computation.
The next minimal increment of further elaboration is that some
finite strings has an assigned or derived property of Boolean
true. At this point of elaboration Boolean true has no more
semantic meaning than FooBar.
And since you can't do the first step, you don't understand what that
actually means.
As soon as any algorithm is defined to transform any finite
string into any other finite string we have conclusively
proven that algorithms can transform finite strings.
So?
Post by olcott
The simplest formal system that I can think of transforms
pairs of strings of ASCII digits into their sum. This algorithm
can be easily specified in C.
So?
Post by olcott
Some finite strings are assigned the FooBar property and other
finite string derive the FooBar property by applying FooBar
preserving operations to the first set.
But, since we have an infinite number of finite strings to be assigned
values, we can't just enumerate that set.
The infinite set of pairs of finite strings of ASCII digits
can be easily transformed into their corresponding sum for
arbitrary elements of this infinite set.
So?
Post by olcott
Once finite strings have the FooBar property we can define
computations that apply Foobar preserving operations to
determine if other finite strings also have this FooBar property.
It seems you never even learned the First Principles of Logic
Systems, bcause you don't understand that Formal Systems are built
from their definitions, and those definitions can not be changed and
let you stay in the same system.
The actual First Principles are as I say they are: Finite string
transformation rules applied to finite strings. What you are
referring to are subsequent principles that have added more on
top of the actual first principles.
But it seems you never actually came up with actual "first Principles'
about what could be done at your first step, and thus you have no idea
what can be done at each of the later steps.
Also, you then want to talk about fields that HAVE defined what those
mean, but you don't understand that, so your claims about what they
can do are just baseless.
All you have done is proved that you don't really understand what you
are talking about, but try to throw around jargon that you don't
actually understand either, which makes so many of your statements
just false or meaningless.
When we establish the ultimate foundation of computation and
formal systems as transformations of finite strings having the
FooBar (or any other property) by FooBar preserving operations
into other finite strings then the membership algorithm would
seem to always be computable.
There would either be some finite sequence of FooBar preserving
operations that derives X from the set of finite strings defined
to have the FooBar property or not.
But you don't understand that if you need to answer a question that
isn't based on a computable function, you get a question that you
cannot compute.

Remember, a problem statement is effectively asking for a machine to
compute a mapping from EVERY POSSIBLE finite string input to the
corresponding answer.

By simple counting, there are Aleph_0 possible deciders (since we can
express the algorithm of the system as a finite string, so we must have
only a countably infinite number of possible computations).

When we count the possible problems to ask, even for a binary question,
we have Aleph_0 possible inputs too, and thus 2 ^ Aleph_0 possible
mappings (as each mapping can have a unique combination of outputs for
every possible input).

By Cantor's theorem, 2 ^ Aleph_0 is strictly greater than Aleph_0.

This means we have more problems than deciders, and thus there MUST be
problems that can not be solved.
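The counting argument in the preceding paragraphs can be written
compactly; this is the standard cardinality sketch (Cantor's theorem),
with D standing for the set of deciders and P for the set of binary
problems over the input alphabet Sigma:

```latex
% Deciders: each is a finite string over a finite alphabet, so
%   |D| <= |Sigma^*| = aleph_0.
% Binary problems: each is a total map f : Sigma^* -> {0,1}, so
%   |P| = 2^{aleph_0}.
% Cantor: 2^{aleph_0} > aleph_0, hence some problem has no decider.
\[
  |\mathcal{D}| \le |\Sigma^{*}| = \aleph_0,
  \qquad
  |\mathcal{P}| = 2^{\aleph_0} > \aleph_0
  \;\Longrightarrow\;
  \exists\, f \in \mathcal{P} \text{ with no decider.}
\]
```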

When we look at the problem of proof finding, the problem is that from
the finite number of statements, we can build an arbitrary length finite
string that establishes the theorem. Trying to find an arbitrary length
finite s
olcott
2024-10-21 03:58:05 UTC
Permalink
Post by Richard Damon
But you don't understand that if you need to answer a question that
isn't based on a computable function, you get a question that you
cannot compute.
Remember, a problem statement is effectively asking for a machine to
compute a mapping from EVERY POSSIBLE finite string input to the
corresponding answer.
By simple counting, there are Aleph_0 possible deciders (since we can
express the algorithm of the system as a finite string, so we must have
only a countably infinite number of possible computations).
When we count the possible problems to ask, even for a binary question,
we have Aleph_0 possible inputs too, and thus 2 ^ Aleph_0 possible
mappings (as each mapping can have a unique combination of outputs for
every possible input).
By Cantor's theorem, 2 ^ Aleph_0 is strictly greater than Aleph_0.
This means we have more problems than deciders, and thus there MUST be
problems that can not be solved.
The problem is always:
Can this finite string be derived in L by applying FooBar
preserving operations to a set of strings in L having the
FooBar property?

With finite strings that express all human knowledge that
can be expressed in language we can always reduce what would
otherwise be infinities to a finite set of categories.
Post by Richard Damon
When we look at the problem of proof finding, the problem is that from
the finite number of statements, we can build an arbitrary length finite
string that establishes the theorem. Trying to find an arbitrary length
finite s
Human knowledge expressed in language just doesn't seem
to work that way. When you ask someone a question, as long
as they are not brain damaged, they give you a succinct answer.
Mikko
2024-10-21 09:40:28 UTC
Permalink
Post by olcott
Human knowledge expressed in language just doesn't seem
to work that way. When you ask someone a question as long
as they are not brain damaged they give you a succinct answer.
Answers like "I don't know" and "What are you talking about" are
fairly common.
--
Mikko
olcott
2024-10-21 13:31:34 UTC
Permalink
Post by Mikko
Post by olcott
Post by Richard Damon
Post by olcott
A "First Principles" approach that you refer to STARTS with an
study and understanding of the actual basic principles of the
system. That would be things like the basic definitions of things
like "Program", "Halting" "Deciding", "Turing Machine", and then
from those concepts, sees what can be done, without trying to
rely on the ideas that others have used, but see if they went
down a wrong track, and the was a different path in the same system.
The actual barest essence for formal systems and computations
is finite string transformation rules applied to finite strings.
So, show what you can do with that.
Note, WHAT the rules can be is very important, and seems to be
beyond you ability to reason about.
After all, all a Turing Machine is is a way of defining a finite
stting transformation computation.
The next minimal increment of further elaboration is that some
finite strings has an assigned or derived property of Boolean
true. At this point of elaboration Boolean true has no more
semantic meaning than FooBar.
And since you can't do the first step, you don't understand what
that actually means.
As soon as any algorithm is defined to transform any finite
string into any other finite string we have conclusively
proven that algorithms can transform finite strings.
So?
Post by olcott
The simplest formal system that I can think of transforms
pairs of strings of ASCII digits into their sum. This algorithm
can be easily specified in C.
So?
Post by olcott
Some finite strings are assigned the FooBar property and other
finite string derive the FooBar property by applying FooBar
preserving operations to the first set.
But, since we have an infinite number of finite strings to be
assigned values, we can't just enumerate that set.
The infinite set of pairs of finite strings of ASCII digits
can be easily transformed into their corresponding sum for
arbitrary elements of this infinite set.
So?
Post by olcott
Once finite strings have the FooBar property we can define
computations that apply Foobar preserving operations to
determine if other finite strings also have this FooBar property.
It seems you never even learned the First Principles of Logic
Systems, bcause you don't understand that Formal Systems are
built from their definitions, and those definitions can not be
changed and let you stay in the same system.
The actual First Principles are as I say they are: Finite string
transformation rules applied to finite strings. What you are
referring to are subsequent principles that have added more on
top of the actual first principles.
But it seems you never actually came up with actual "first
Principles' about what could be done at your first step, and thus
you have no idea what can be done at each of the later steps.
Also, you then want to talk about fields that HAVE defined what
those mean, but you don't understand that, so your claims about
what they can do are just baseless.
All you have done is proved that you don't really understand what
you are talking about, but try to throw around jargon that you
don't actually understand either, which makes so many of your
statements just false or meaningless.
When we establish the ultimate foundation of computation and
formal systems as transformations of finite strings having the
FooBar (or any other property) by FooBar preserving operations
into other finite strings then the membership algorithm would
seem to always be computable.
There would either be some finite sequence of FooBar preserving
operations that derives X from the set of finite strings defined
to have the FooBar property or not.
But you don't understand that if you need to answer a question that
isn;t based on a computable function, you get a question that you can
not compute.
Remember, a problem statement is effectively asking for a machine to
compute a mapping from EVERY POSSIBLE finite string input to the
corresponding answer.
By simple counting, there are Aleph_0 possible deciders (since we can
express the algorithm of the system as a finite string, so we must
have only a countably infinite number of possible computations).
When we count the possible problems to ask, even for a binary
question, we have Aleph_0 possible inputs too, and thus 2 ^ Aleph_0
possible mappings (as each mapping can have a unique combination of
outputs for every possible input).
By Cantor's theorem, 2 ^ Aleph_0 is strictly greater than Aleph_0.
This means we have more problems than deciders, and thus there MUST
be problems that can not be solved.
Can this finite string be derived in L by applying FooBar
preserving operations to a set of strings in L having the
FooBar property?
With finite strings that express all human knowledge that
can be expressed in language we can always reduce what could
otherwise be infinities into a finite set of categories.
Post by Richard Damon
When we look at the problem of proof finding, the problem is that
from the finite number of statements, we can build an arbitrary
length finite string that establishes the theorem. Trying to find an
arbitrary length finite s
Human knowledge expressed in language just doesn't seem
to work that way. When you ask someone a question, as long
as they are not brain damaged, they give you a succinct answer.
Answers like "I don't know" and "What are you talking about" are
fairly common.
For the Goldbach conjecture IDK is the only correct answer.
--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
Richard Damon
2024-10-21 22:46:22 UTC
Post by olcott
Post by Mikko
Post by olcott
Post by Richard Damon
Post by olcott
A "First Principles" approach that you refer to STARTS with a
study and understanding of the actual basic principles of the
system. That would be things like the basic definitions of
things like "Program", "Halting", "Deciding", "Turing Machine",
and then from those concepts sees what can be done, without
trying to rely on the ideas that others have used, but sees if
they went down a wrong track, and whether there was a different
path in the same system.
The actual barest essence for formal systems and computations
is finite string transformation rules applied to finite strings.
So, show what you can do with that.
Note, WHAT the rules can be is very important, and seems to be
beyond your ability to reason about.
After all, all a Turing Machine is is a way of defining a finite
string transformation computation.
The next minimal increment of further elaboration is that some
finite strings have an assigned or derived property of Boolean
true. At this point of elaboration Boolean true has no more
semantic meaning than FooBar.
And since you can't do the first step, you don't understand what
that actually means.
As soon as any algorithm is defined to transform any finite
string into any other finite string we have conclusively
proven that algorithms can transform finite strings.
So?
Post by olcott
The simplest formal system that I can think of transforms
pairs of strings of ASCII digits into their sum. This algorithm
can be easily specified in C.
So?
Post by olcott
Some finite strings are assigned the FooBar property and other
finite strings derive the FooBar property by applying FooBar
preserving operations to the first set.
But, since we have an infinite number of finite strings to be
assigned values, we can't just enumerate that set.
The infinite set of pairs of finite strings of ASCII digits
can be easily transformed into their corresponding sum for
arbitrary elements of this infinite set.
So?
Post by olcott
[...]
For the Goldbach conjecture IDK is the only correct answer.
So, you admit that the statement might be true and unprovable?

Remember, "I don't know" may be a valid answer about knowledge, but NOT
about the truth value of a truth-bearing statement, as the Goldbach
conjecture must be, since the question does follow the rule of the
excluded middle.

Note, that shows that you don't actually understand the meaning of
uncomputable and undecidable for systems.
olcott
2024-10-21 23:12:49 UTC
Post by Richard Damon
Post by olcott
[...]
For the Goldbach conjecture IDK is the only correct answer.
So, you admit that the statement might be true and unprovable?
There are some expressions of language that seem to
have a truth value of UNKNOWABLE.

All other expressions of language have a truth value
of True, False, or Not a truth bearer.

Most undecidability is the mistake of trying to
determine the truth value of an expression that has none.
--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
Richard Damon
2024-10-22 02:51:09 UTC
Post by olcott
Post by Richard Damon
[...]
So, you admit that the statement might be true and unprovable?
There are some expressions of language that seem to
have a truth value of UNKNOWABLE.
But that isn't a TRUTH VALUE.

That is a statement about KNOWLEDGE.
Post by olcott
All other expressions of language have a truth value
of True, False, Not a truth bearer.
No, ALL expressions of language have a truth value of True, False,
or the expression is not a truth bearer.

There is no "truth value" of Unknowable.

And "Not a truth bearer" isn't normally considered a "truth value".
Post by olcott
Most undecidability is the mistake of trying to
determine the truth value of an expression that has none.
Nope, that just shows your ignorance.
Mikko
2024-10-22 07:27:03 UTC
Post by olcott
Post by Richard Damon
[...]
So, you admit that the statement might be true and unprovable?
There are some expressions of language that seem to
have a truth value of UNKNOWABLE.
Also there are expressions that are knowable but unknown.
And there are expressions that are not known to be knowable or unknowable.
--
Mikko
Richard Damon
2024-10-21 11:36:15 UTC
Post by olcott
[...]
With finite strings that express all human knowledge that
can be expressed in language we can always reduce what could
otherwise be infinities into a finite set of categories.
But searching the infinite space of possible strings can not always
be done in finite time.

Remember, we can express all of infinity with just two characters
(0 and 1) in their unlimited combinations.

Part of your problem is you just don't understand what infinity is,
because you just don't understand how logic actually works, and some
attributes of infinity are not self-evident.
Post by olcott
Post by Richard Damon
When we look at the problem of proof finding, the problem is that from
the finite number of statements, we can build an arbitrary length
finite string that establishes the theorem. Trying to find an
arbitrary length finite s
Human knowledge expressed in language just doesn't seem
to work that way. When you ask someone a question, as long
as they are not brain damaged, they give you a succinct answer.
If they know it.

Note also, your premise confuses knowledge with truth. You could store
everything we know in a computer database, and perhaps program a
computer to work to "evolve" it to discover things we didn't understand
before (but it would need a good filter so you don't fill it with a lot
of truths like 1+3 = 4).

The problem with such a system is that it doesn't tell us whether a
statement is TRUE, but whether it is KNOWN. True statements that haven't
been discovered yet will not be in its database, and if that database is
based on human knowledge, which comes from observations, it WILL contain
errors due to errors in observations.

After all, if built at some point in time, it would have the "fact" that
it was known that the Earth was flat (and also that it was round).

You are just proving that you don't understand the difference between
facts and knowledge, and thus much of what you claim to be true is
actually just a lie based on your own misunderstandings.