If something is not computable, can it be co-recursively enumerable? - turing-machines

My understanding is that since it is not computable, it may fail to halt whether the answer is 'yes' or 'no'. That's why I think it cannot be co-recursively enumerable: it can't guarantee that it always halts on 'no'.

A problem can be uncomputable but still be co-recursively enumerable.
Computable, decidable, or recursive sets have TMs which can always halt by either accepting or rejecting any input.
Uncomputable sets can still be semidecidable, i.e. recursively enumerable or co-recursively enumerable. A set is recursively enumerable if it has a TM which halts and accepts on everything in the set (while possibly failing to halt at all when the input isn't in the set); it is co-recursively enumerable if it has a TM which halts and rejects on everything that's not in the set (while possibly failing to halt at all when the input is in the set).
Clearly, if a set is both recursively enumerable and co-recursively enumerable, it is recursive (computable, decidable): you can run both TMs in parallel - the one that eventually halts by accepting and the one that eventually halts by rejecting - and one of the two is guaranteed to eventually give you the correct answer.
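To make that last argument concrete, here is a minimal sketch in Python (the names are hypothetical stand-ins, not a real library): each semidecider is modeled as a generator that yields while its machine is still running and finishes only when the machine halts, and the decider alternates single steps of the two.

# Hypothetical sketch: deciding a set that is both r.e. and co-r.e.
# 'accepter(x)' is a generator that terminates iff x is in the set;
# 'rejecter(x)' is a generator that terminates iff x is NOT in the set.
def decide(accepter, rejecter, x):
    a, r = accepter(x), rejecter(x)
    while True:
        try:
            next(a)            # run one step of the accepting machine
        except StopIteration:
            return True        # it halted: x is in the set
        try:
            next(r)            # run one step of the rejecting machine
        except StopIteration:
            return False       # it halted: x is not in the set

Since any input is either in the set or not, one of the two generators is guaranteed to terminate, so decide always halts with the correct answer.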

Related

Why is the raising of an exception a side effect?

According to the Wikipedia entry for side effect, raising an exception constitutes a side effect. Consider this simple Python function:
def foo(arg):
    if not arg:
        raise ValueError('arg cannot be None')
    else:
        return 10
Invoking it with foo(None) will always be met with an exception. Same input, same output. It is referentially transparent. Why is this not a pure function?
Purity is only violated if you observe the exception, and make a decision based on it that changes the control flow. Actually throwing an exception value is referentially transparent -- it is semantically equivalent to non-termination or other so-called bottom values.
If a (pure) function is not total, then it evaluates to a bottom value. How you encode the bottom value is up to the implementation - it could be an exception, non-termination, division by zero, or some other failure.
Consider the pure function:
f :: Int -> Int
f 0 = 1
f 1 = 2
This is not defined for all inputs. For some it evaluates to bottom. The implementation encodes this by throwing an exception. It should be semantically equivalent to using a Maybe or Option type.
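A rough Python rendering of that idea (the function names here are hypothetical, chosen only for this illustration): the same partial function can encode bottom by raising, or make the partiality explicit in the result type; the two carry the same information.

from typing import Optional

def f_raising(n: int) -> int:
    # Partial function: defined only on {0, 1}; bottom elsewhere,
    # encoded here as a raised exception.
    if n == 0:
        return 1
    if n == 1:
        return 2
    raise ValueError("f is undefined here")

def f_option(n: int) -> Optional[int]:
    # Same function with the partiality made explicit in the type;
    # None plays the role of Nothing in a Maybe/Option type.
    return {0: 1, 1: 2}.get(n)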
Now, you only break referential transparency when you observe the bottom value, and make decisions based on it -- which could introduce non-determinism as many different exceptions may be thrown, and you can't know which one. So for this reason catching exceptions is in the IO monad in Haskell, while generating so-called "imprecise" exceptions can be done purely.
So it is just not true that raising an exception is a side effect as such. It is whether or not you can modify the behavior of a pure function based on an exceptional value -- thus breaking referential transparency -- that is the issue.
From the first line:
"In computer science, a function or expression is said to have a side
effect if, in addition to returning a value, it also modifies some
state or has an observable interaction with calling functions or the
outside world"
The state it modifies is the termination of the program. To answer your other question about why it is not a pure function: the function is not pure because throwing an exception terminates the program, and that is a side effect (your program ends).
Raising an exception can be either pure or non-pure; it depends on the type of exception that is raised. A good rule of thumb is that if the exception is raised by code, it is pure, but if it is raised by the hardware then it usually must be classed as non-pure.
This can be seen by looking at what occurs when an exception is raised by the hardware: First an interrupt signal is raised, then the interrupt handler starts executing. The issue here is that the interrupt handler was not an argument to your function nor specified in your function, but a global variable. Any time a global variable (aka state) is read or written, you no longer have a pure function.
Compare that to an exception being raised in your code: You construct the Exception value from a set of known, locally scoped arguments or constants, and you "throw" the result. There are no global variables used. The process of throwing an exception is essentially syntactic sugar provided by your language, it does not introduce any non-deterministic or non-pure behaviour. As Don said "It should be semantically equivalent to using a Maybe or Option type", meaning that it should also have all the same properties, including purity.
When I said that raising a hardware exception is "usually" classed as a side effect, it does not always have to be the case. For example, if the computer your code is running on does not call an interrupt when it raises an exception, but instead pushes a special value onto the stack, then it is not classifiable as non-pure. I believe that the IEEE floating point NAN error is thrown using a special value and not an interrupt, so any exceptions raised while doing floating point maths can be classed as side-effect free as the value is not read from any global state, but is a constant encoded into the FPU.
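This special-value behaviour is easy to observe from Python, for instance: an IEEE 754 invalid operation just yields a NaN that flows through later arithmetic as an ordinary value, with no interrupt or exception involved.

inf = float("inf")
x = inf - inf    # IEEE 754 "invalid operation": yields NaN, raises nothing
print(x)         # nan
print(x + 1.0)   # nan - the error value simply propagates
print(x == x)    # False - NaN compares unequal even to itself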
Looking at all the requirements for a piece of code to be pure, code-based exceptions and throw-statement syntactic sugar tick all the boxes: they do not modify any state, they do not have any interaction with their calling functions or anything outside their invocation, and they are referentially transparent, but only once the compiler has had its way with your code.
Like all pure vs non-pure discussions, I have excluded any notion of execution times or memory operations and have operated under the assumption that any function that CAN be implemented purely IS implemented purely regardless of its actual implementation. I also have no evidence of the IEEE Floating point NAN exception claim.
Referential transparency also implies the possibility of replacing a computation (e.g. a function invocation) with the result of the computation itself, something that you can't do if your function raises an exception. That's because an exception is not part of the computation's result: it has to be caught!

Why is this an invalid Turing machine?

Whilst doing exam revision I am having trouble answering the following question from the book, "An Introduction to the Theory of Computation" by Sipser. Unfortunately there's no solution to this question in the book.
Explain why the following is not a legitimate Turing machine.
M = {
    The input is a polynomial p over variables x1, ..., xn.
    Try all possible settings of x1, ..., xn to integer values.
    Evaluate p on all of these settings.
    If any of these settings evaluates to 0, accept; otherwise reject.
}
This is driving me crazy! I suspect it is because the set of integers is infinite? Does this somehow exceed the alphabet's allowable size?
Although this is quite an informal way of describing a Turing machine, I'd say the problem is one of the following:
"otherwise reject" - I agree with Welbog on that. Since you have a countably infinite set of possible settings, the machine can never know whether a setting on which p evaluates to 0 is still to come, and it will loop forever if it doesn't find one; only when such a setting is encountered may the machine stop. That last statement is useless and will never be reached, unless of course you limit the machine to a finite set of integers.
The code order: I would read this pseudocode as "first write all possible settings down, then evaluate p on each one" and there's your problem:
Again, with an infinite set of possible settings, not even the first part will terminate, because there is never a last setting to write down before continuing with the next step. In this case the machine not only can never say "there is no 0 setting", it can never even start evaluating to find one. This, too, would be solved by limiting the integer set.
Anyway, I don't think the problem is the alphabet's size. You wouldn't need an infinite alphabet, since your integers can be written in decimal / binary / etc., and those use only a (very) finite alphabet.
I'm a bit rusty on Turing machines, but I believe your reasoning is correct: the set of integers is infinite, therefore you cannot try them all. I am not sure how to prove this rigorously, though.
However, the easiest way to get your head around Turing machines is to remember: "Anything a real computer can compute, a Turing machine can also compute." So, if you could write a program that, given a polynomial, carries out those steps, you would be able to find a Turing machine which can also do it.
I think the problem is with the very last part: otherwise reject.
According to countable set basics, the set of vectors of a fixed finite length over a countable set is itself countable. In your case you have vectors of n integers, and the integers are countable, so the set of all settings is countable and it is therefore possible to try every combination of them. (That is to say, without missing any combination.)
Also, computing the result of p on a given set of inputs is also possible.
And entering an accepting state when p evaluates to 0 is also possible.
However, since there is an infinite number of input vectors, you can never reject the input. Therefore no Turing machine can follow all of the rules defined in the question. Without that last rule, it is possible.
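A small Python sketch makes the asymmetry visible (hypothetical code; the polynomial is represented as an ordinary function of n integer arguments): the search can halt and accept as soon as a zero turns up, but the "otherwise reject" step can never be reached.

from itertools import product

def integers_up_to(bound):
    # Enumerate 0, 1, -1, 2, -2, ..., bound, -bound.
    yield 0
    for k in range(1, bound + 1):
        yield k
        yield -k

def has_integer_root(p, n):
    # Semidecides "p has an integer root": halts and accepts if some
    # setting evaluates to 0, loops forever otherwise.
    bound = 0
    while True:
        bound += 1
        for xs in product(list(integers_up_to(bound)), repeat=n):
            if p(*xs) == 0:
                return True   # accept
        # there is no sound point at which to "otherwise reject"

For example, has_integer_root(lambda x, y: x*x + y*y - 25, 2) halts and returns True (3^2 + 4^2 = 25), while has_integer_root(lambda x: x*x + 1, 1) runs forever.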

What does 'parametrize' do in DrScheme?

I'm trying to make sense of the example code here (below Examples). I don't understand that parametrize construct. The docs for it are here, but they don't help. What does it do?
parameterize is used for values that are "dynamically scoped". You create a parameter with make-parameter. The parameter itself behaves as a function: call it with no arguments and you get its value; call it with one value and it will set the value. For example:
> (define p (make-parameter "blah"))
> (p)
"blah"
> (p "meh")
> (p)
"meh"
Many functions (including many primitive ones) use parameters as a way to customize their behavior. For example printf will print stuff using the port that is the value of the current-output-port parameter. Now, say that you have some function that prints something:
> (define (foo x) (printf "the value of x is ~s\n" x))
You usually call this function and see something printed on the screen -- but in some cases you want to use it to print something to a file or whatever. You could do this:
(define (bar)
  (let ([old-stdout (current-output-port)])
    (current-output-port my-own-port)
    (foo some-value)
    (current-output-port old-stdout)))
One problem with this is that it is tedious to do -- but that's easily solved with a macro. (In fact, PLT still has a construct that does that in some languages: fluid-let.) But there are more problems here: what happens if the call to foo results in a runtime error? This might leave the system in a bad state, where all output goes to your port (and you won't even see a problem, since it won't print anything). A solution for that (which fluid-let uses too) is to protect the saving/restoring of the parameter with dynamic-wind, which makes sure that if there's an error (and more, if you know about continuations) then the value is still restored.
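For comparison, Python's standard contextvars module supports a loose analogue of this pattern (foo, bar, and the port argument here are stand-ins for the Scheme names): a ContextVar plays the role of the parameter, and try/finally plays the role of dynamic-wind's guaranteed restore.

import sys
from contextvars import ContextVar

# A rough analogue of current-output-port as a parameter.
current_output = ContextVar("current_output", default=sys.stdout)

def foo(x):
    print(f"the value of x is {x!r}", file=current_output.get())

def bar(port, value):
    token = current_output.set(port)    # like entering the dynamic extent
    try:
        foo(value)
    finally:
        current_output.reset(token)     # restored even if foo raises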
So the question is what's the point of having parameters instead of just using globals and fluid-let? There are two more problems that you cannot solve with just globals. One is what happens when you have multiple threads -- in this case, setting the value temporarily will affect other threads, which may still want to print to the standard output. Parameters solve this by having a specific value per-thread. What happens is that each thread "inherits" the value from the thread that created it, and changes in one thread are visible only in that thread.
The other problem is more subtle. Say that you have a parameter with a numeric value, and you want to do the following:
(define (foo)
  (parameterize ([p ...whatever...])
    (foo)))
In Scheme, "tail calls" are important -- they are the basic tool for creating loops and much more. parameterize does some magic that allows it to change the parameter value temporarily but still preserve these tail calls. For example, in the above case, you will get an infinite loop, rather than get a stack overflow error -- what happens is that each of these parameterize expressions can somehow detect when there's an earlier parameterize that no longer needs to do its cleanup.
Finally, parameterize actually uses two important parts of PLT to do its job: it uses thread cells to implement per-thread values, and it uses continuation marks to be able to preserve tail-calls. Each of these features is useful in itself.
parameterize sets particular parameters to specified values for the duration of the block, without affecting their values outside of it.
Parameterize is a means by which you can dynamically re-bind values within an existing function, without using lambda to do so. In practice sometimes it is a lot easier to use parameterize to re-bind values within a function rather than being required to pass arguments and bind them using lambda.
For example, say that a library you use emits HTML to stdout, but for convenience you want to capture that value to a string and perform further operations on it. The library designer has at least two choices to make that easy for you: 1) accept an output port as an argument to the function, or 2) parameterize the current-output-port value. Option 1 is ugly and a hassle. Option 2 is nicer, since the most likely behavior is to print to stdout, but in case you want to print to a string port you can just parameterize the call to that function.
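For comparison, Python's standard library has the same capture-to-a-string move: contextlib.redirect_stdout temporarily rebinds where print writes, much as parameterizing current-output-port does around a call.

import io
from contextlib import redirect_stdout

def emit_html():
    print("<p>hello</p>")    # stands in for a library that writes to stdout

buf = io.StringIO()
with redirect_stdout(buf):   # like parameterize on current-output-port
    emit_html()
html = buf.getvalue()        # the output, captured as a string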

How does std.string.toStringz work in dlang?

https://dlang.org/library/std/string/to_stringz.html
In my understanding it could not work:
toStringz creates an array on the stack and returns its pointer. After toStringz returns, the array on the stack is discarded and the pointer becomes invalid.
But I suppose it does work, given that it is part of the standard library. So what is wrong in my understanding above?
Another related question:
What does scope return in the signature of this function mean? I visited https://dlang.org/spec/function.html but found no scope return there.
It does not create an array on the stack. If necessary, it allocates a new string on the GC heap.
The implementation works by checking the existing string for a zero terminator - if it deems it possible to do so without a memory fault (which it guesses by checking the alignment of the last byte: if it is at a multiple of four, it doesn't risk it, but if it is not, it reads one byte past the end of the string, because fault boundaries fall on multiple-of-four intervals).
If there is a zero byte already there, it returns the input unmodified. That's what the return thing in the signature means - it may return that same input. (This is a new feature that just got documented... yesterday. And it isn't even merged yet: https://github.com/dlang/dlang.org/pull/2536 But the stdlib docs are rebuilt from the master branch lol)
Anyway, if there isn't a zero byte there, it allocates a new GC'd string, copies the existing one over, appends the zero, and returns that. That's why the note in the documentation warns about the C function keeping the pointer. If the C function keeps it beyond its own execution, it isn't the stack that will get it - it is the D garbage collector. D's GC cannot see memory allocated by C code (unless specifically informed about it), so a pointer stored there doesn't count as a reference: the GC will think the string is unreferenced the next time it runs and will free it, leading to a use-after-free bug.
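The same hazard shows up in any language where a garbage-collected buffer is handed to C code that keeps the pointer. As a rough illustration from Python (assuming a glibc system, where putenv(3) stores the very pointer it is given):

import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# putenv(3) keeps the pointer it receives inside environ.
buf = ctypes.create_string_buffer(b"MYVAR=hello")
libc.putenv(buf)
# 'buf' must stay referenced on the Python side: if it is garbage
# collected, environ is left holding a dangling pointer - the same
# use-after-free the toStringz documentation warns about.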
The scope keyword in the signature is D's way of checking this, btw: it means the argument will only be used in this function's scope (though in combination with return it means it will only be used in this function's scope OR returned through this function). But that's on toStringz's input - the C function you call probably doesn't use that D language restriction, and thus it would not be automatically caught.
So to sum up the attributes again:
scope - the argument will not leave the function's scope. Won't be assigned to a global or an external structure, etc.
return - the argument might be returned by the function.
return scope - hybrid of the above; it will not leave the function's scope EXCEPT through the return value.

Why does this Fortran code get a segmentation fault?

The Fortran code below gets a segmentation fault.
However, when I modify print*,pow(10_8,i) to print*,pow(j,i), it works without a segmentation fault. Why? This is very weird.
module mdl
  implicit none
  integer(kind=8) :: n, m = 1000000007
  integer(kind=8) :: p(1000), k(1000), div(10000000)
contains
  integer(kind=8) function pow(a, pwr)
    implicit none
    integer(kind=8) :: a, pwr
    integer(kind=8) :: cur
    cur = pwr
    pow = 1
    do while (cur > 0)
      if (mod(cur,2) == 1) pow = mod(pow*a, m)
      a = MOD(a*a, m)
      cur = cur/2
    end do
  end function
end module

program main
  use mdl
  implicit none
  integer(kind=8) :: i, j, l, r, x, y
  i = 2
  j = 10
  print *, pow(10_8, i)
  print *, i
end program
The problem here is with the argument a of the function pow. In the function the argument a is (potentially) modified on the line
a=MOD(a*a,m)
The actual argument 10_8 in the function reference is a literal constant, which may not be modified. That is where your program fails. When you use print*,pow(j,i), the actual argument j is a variable, which may be modified, so your program doesn't fail.
There is a lot of complicated stuff going on here, that I won't fully explain in this answer (you can search for other questions for that). One topic is argument association which explains why you are trying to modify the constant 10_8. However, I'll say something about dummy argument intents.
The dummy argument a has no intent specified. As you intend to use the value of a as it enters the function, and you wish to (potentially) modify it, an appropriate intent would be intent(inout). If you apply this, you should find your compiler complains about the reference pow(10_8,i), because a literal constant is not allowed as the actual argument there.
Having no intent, such as in the case of the question, is an acceptable thing, and it has a specific meaning: whether a may be modified depends on whether the actual argument in the function reference may be. When the actual argument is 10_8 it may not; when it is j it may.
The crucial thing is that it isn't the compiler's responsibility, but yours, to check whether the program is doing something here it shouldn't.
Now, you may not want to modify the actual argument j even when you are allowed to. You have a couple of options:
you can make a temporary local copy (and mark a as intent(in)), which may be safely modified;
you can make an anonymous modifiable copy of the input data using the value attribute.
The function already does the first for pwr, with the local copy cur=pwr. As an example of the second:
integer(kind=8) function pow(a, pwr)
  implicit none
  ! VALUE makes a and pwr anonymous modifiable copies of the actual
  ! arguments, so modifying them cannot touch the caller's arguments.
  integer(kind=8), value :: a, pwr
  pow = 1
  do while (pwr > 0)
    if (mod(pwr,2) == 1) pow = mod(pow*a, m)
    a = MOD(a*a, m)
    pwr = pwr/2
  end do
end function
You now may even mark pow as a pure function.
Finally, if using the value attribute it is required that an explicit interface be available when referencing the function. With the module for the function this is the case here, but this is something to consider in more general cases.
