Explanation of the Turing Machine Halting Problem

I'm looking for a simple explanation of the halting problem for Turing machines. I know the basics of how TMs work, how they enumerate things, machine configurations, etc., but I don't have a good handle on the halting problem.
Can someone provide a good explanation of this topic?

For a minute, let's ignore the world of Turing machines and just think about computer programs. This will make our reasoning less rigorous, but probably a heck of a lot easier to follow.
Consider the following task: write a program that, given a program P and an input to that program, determines whether the program will terminate when given that input. (For simplicity, we'll assume that the program doesn't ask for user input and doesn't involve randomness, so running the same program on the same input always does the same thing). Is it possible to write a program that meets this description? The answer is no. To show this, we'll use a proof by contradiction. We'll assume that, somehow, someone manages to write the program, and then show that something terrible would happen if this were the case.
Imagine that someone writes a function that looks like this:
function willHalt(program, input)
This function has the following properties:
It always returns a value.
If the function returns true, then the specified program eventually terminates (halts) when run on the specified input.
If the function returns false, then the specified program never terminates when run on the specified input (loops).
At this point we can start to be skeptical about whoever wrote this function.
Them: "Hey! I just wrote a program that can take in any program and and input, and it will tell you whether or not the program halts on that input!"
Us: "Oh really? It can take in any program? Any program at all?"
Them: "Yeah! That's what I said."
Us: "Well, then, what about this program right here?"
And then we give them this program:
function trickyTricky(input) {
    /* Ask whether this program (named trickyTricky) is going to halt
     * on its input.
     */
    if (willHalt(trickyTricky, input)) {
        /* If so, loop infinitely! */
        while (true) { }
    } else {
        /* If not, do nothing and stop running! */
    }
}
So let's think about what this program does.
First, imagine that this program, when given a particular input, eventually terminates when run on that input. Trace through the program carefully and see what happens then. First, it asks willHalt whether it's going to terminate, and the answer is "yes, yes it will." That means that the if statement evaluates to true... so the program then goes into an infinite loop! Oops - the program was supposed to halt, but instead it looped infinitely!
Second, imagine that this program, when given a particular input, goes into an infinite loop. Trace through the program carefully to see what happens then. First, it asks willHalt whether it's going to terminate. The answer is no, so it doesn't go into the if statement, and instead immediately finishes running. But that's not good - the program was supposed to loop infinitely, but instead it terminated!
So now we have a problem. If you really truly can write a function that tells you whether a program will halt on some input, then you can use that program to build a program that does the opposite of what it's supposed to do - and that's impossible!
The halting problem is just a mathematically rigorous way of formalizing the above idea. Instead of talking about programs, we talk about Turing machines and TM encodings. Really, though, the core idea behind the math is just what's shown above.
If you're interested, for a class I taught last year, I put together a guide to self-reference and undecidability that might give you a little bit more exposition on how this style of argument works.

The halting problem asks us to determine whether a program, given an input, will halt (reach some final state). Turing proved that no algorithm exists that can make this determination for all programs and inputs.
An algorithm may spend an arbitrarily long time processing a program and its input, but no algorithm can accurately determine, for every program and every input, whether the program will eventually halt. With each change of state, the next could be the last.
The halting problem is an early example of a decision problem.
Because no algorithm exists that can accurately answer 'yes, it will halt' or 'no, it will not halt' for the halting problem, it is undecidable.
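To make that asymmetry concrete, here is a minimal Python sketch; the step function and the toy countdown machine are invented for illustration, not a real TM API. Simulation can confirm "yes, it halts" by witnessing the halt, but it can never confirm "no, it never halts":

def confirms_halting(config, step):
    # Simulate one step at a time; with each change of state,
    # the next could be the last.
    while config is not None:
        config = step(config)
    return True   # this line is reached only if the machine halts

def countdown_step(n):
    # A toy "machine" whose configuration is just a countdown to zero.
    return None if n == 0 else n - 1

print(confirms_halting(10, countdown_step))   # True: it halts
# For a machine that loops, confirms_halting itself never returns,
# so it can never answer "no".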

Related

why there can't be a program that checks another program

I am trying to find Alan Turing's logical explanation of why there can't be a program that checks other programs.
I remember we learned it in the computation course, but now I just can't find the explanation,
and I need to explain it to someone at my work.
Thanks for the help.
"Checks another program" is very broad. In fact, some features of programs can be checked, such as whether or not a Java program type checks. However, type checking a Java program will also reject some programs which will never actually produce a type error when run, such as:
int foo() {
    if (true) return 5;
    else return null;
}
This method will never actually return null, but the type checker can't see this. But couldn't we just make a smarter type system?
Unfortunately, the answer is no. Consider the following program:
int bar() {
    if (infiniteComputation()) return 5;
    else return null;
}
The type checker can't determine whether infiniteComputation() ever returns false (or ever returns at all), because of the halting problem.
Another related theorem is Rice's Theorem, which is probably closer to what your question was about than the halting problem.
It is worth pointing out that the theorem only states that no non-trivial semantic property of programs can be checked exactly; it is still possible to approximate such checks well enough for practical purposes. One example is type systems, where we accept that some "correct" programs are rejected, like the snippet above. Compilers can also eliminate dead code in a lot of cases, even though it is impossible to do so in every case.
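To illustrate what "approximate but still useful" can look like, here is a toy Python sketch; the checker and its scope are invented for illustration. It flags only divisions by a literal zero, so it misses divisions that become zero at runtime, and that deliberate imprecision is exactly the trade-off described above:

import ast

def may_divide_by_literal_zero(source):
    # Walk the syntax tree looking for a division whose right-hand
    # side is the literal constant 0.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
                and isinstance(node.right, ast.Constant) and node.right.value == 0):
            return True       # definitely a problem
    return False              # "probably fine": an approximation, not a proof

print(may_divide_by_literal_zero("x = 1 / 0"))        # True
print(may_divide_by_literal_zero("x = 1 / (1 - 1)"))  # False: missed, by design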
You are looking for the halting problem.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines.
There's the wikipedia entry on this...
But basically, in order to determine whether any sufficiently complicated program will stop, you'd have to run it in order to trace the execution path. That means you're back to one program running another program, and if the watched program doesn't stop, the program that's watching it won't stop either.
It's like computing the digits of pi: will it stop? How can you say its runtime is infinite, as opposed to it merely suffering from some computational problem? We know that that particular computation never ends, but other, similar problems have not been proven one way or the other.
Byron's answer should point you to the important info. As an aside, you can have a program that checks a specific program. What you can't have is a program that checks an arbitrary program for correctness.

Compiler and Interpreter on memory efficiency

I was studying the concepts of compilers and interpreters. I researched them on the internet, but I found two statements that seem to contradict each other:
One says: an interpreter doesn't involve intermediate code and hence is memory efficient.
https://www.programiz.com/article/difference-compiler-interpreter
The other says: an interpreter reads a statement from the input, converts it to an intermediate code, executes it, then takes the next statement in sequence.
https://www.tutorialspoint.com/compiler_design/compiler_design_overview.htm
Can anyone please let me know which one is right, and which approach is more memory efficient?
There are many ways to code an interpreter. Both options mentioned are possible, with different tradeoffs.
The short answer is that neither article is right. Both have a very narrow (old fashioned?) idea of what an interpreter is, corresponding to something we might call a "command processor". Moreover, neither article is self-consistent, so attempting to resolve their disagreements is probably a waste of time.
That said, when the programiz reference says "No intermediate object code is generated, hence are memory efficient," I think what it means (using its terms) is that an interpreter does translate a statement into intermediate code (note that "Figure: Interpreter" includes a box labelled "Intermediate Code"), but:
That code is not object code.
After executing that code, it discards it, so it never has the code for more than one statement in memory at a time.
The interpreter does not produce object code as output.
Given that reading, the two articles more-or-less agree.
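Under that reading, the kind of interpreter both articles seem to have in mind might look like this minimal Python sketch; the tiny SET/ADD/PRINT statement language is invented for illustration. It translates one statement into an intermediate form, executes it, and discards it before moving on:

def translate(stmt):
    # "Intermediate code": an (opcode, args) pair for a single statement.
    op, *args = stmt.split()
    return (op.upper(), args)

def execute(ir, env):
    op, args = ir
    if op == "SET":        # SET x 5
        env[args[0]] = int(args[1])
    elif op == "ADD":      # ADD x y  means  x = x + y
        env[args[0]] += env[args[1]]
    elif op == "PRINT":    # PRINT x
        print(env[args[0]])

def interpret(program):
    env = {}
    for stmt in program:
        ir = translate(stmt)   # intermediate code for this statement only...
        execute(ir, env)       # ...executed immediately...
        del ir                 # ...and discarded before the next statement

interpret(["set x 2", "set y 40", "add x y", "print x"])  # prints 42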
But even given the narrow definition of 'interpreter', saying that it's "memory efficient" simply because it holds at most one statement's intermediate code in memory at a time ignores all the memory that the interpreter itself takes up.
Moreover, note that this can only be talking about the memory efficiency of the interpreter itself, and NOT about the memory efficiency of any programs it runs.
In short: forget about those articles. Wikipedia's article seems like a good place to start.

Universal Turing machine U should determine if M(x) stops

So we have a universal Turing machine U that should determine whether a Turing machine M with input x will stop. The solution should be presented in pseudocode.
Can someone help me out a bit? How should I solve this?
This sounds like the halting problem:
The halting problem can be stated as follows: "Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever". This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, what became known as a Turing machine; the halting problem is undecidable over Turing machines.
So no, it's not possible.
If you want, you can probably run M on x for a while. If it stops, we know it stops. If it doesn't stop, we don't really know whether or not it stops.
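As a sketch of that "run it for a while" idea, in Python rather than pseudocode; the step argument stands in for a hypothetical single-step simulator of M, and the step budget is arbitrary:

def trial_run(config, step, max_steps=10000):
    # Simulate M on x for at most max_steps steps.
    for _ in range(max_steps):
        if config is None:
            return "halts"    # M reached a final state within the budget
        config = step(config)
    return "unknown"          # M may halt later, or may run forever

# Demo with a toy machine: the configuration is a counter stepping toward 0.
print(trial_run(5, lambda n: None if n == 0 else n - 1))   # halts
print(trial_run(-1, lambda n: None if n == 0 else n - 1))  # unknown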

Common Lisp: compilation vs evaluation

On Emacs + SLIME with SBCL, once I define a function (or more) in a file, I have two choices:
Evaluation: e.g. with C-M-x eval-defun
Compilation: e.g. with C-c M-k compile-file
The second one produces a .fasl file, too.
What are the differences between the two?
What's going on under the hood when I compile a definition / a file?
What are the Pros and Cons of each one?
First of all, there's a function eval[1] that allows you to evaluate (i.e. execute) an arbitrary CL form in the language runtime. CL implementations may have two different modes of operation: compilation mode and interpretation mode. Compilation mode implies that before evaluation the form is first compiled in memory. Also, in CL evaluation happens not at the file level, but at the level of individual forms. So eval may either compile or interpret the form, depending on the mode of operation. (For example, SBCL by default always compiles, unless you instruct it not to by setting sb-ext:*evaluator-mode* to :interpret, while CLISP always interprets.)
Now, there's also a convenience function compile-file[2] that allows you to compile all the forms in some file and save the results in another file. This doesn't trigger evaluation of these forms.
CL also defines three distinct times in a program's lifecycle: compile time, load time, and execution time. And it's possible to control what happens when with one of the most (if not the most) cryptic CL special operators, eval-when[3].
To sum up, C-M-x eval-defun will call eval on the form under the cursor. It will not necessarily compile it, but that is possible, depending on the implementation. C-c M-k compile-file will compile-file your buffer, but not evaluate its contents.
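If it helps, Python happens to expose a loosely analogous split between evaluating a form in memory and compiling a file ahead of time. This is only an analogy, not CL semantics, and the file name below is made up:

source = "print(6 * 7)"

# "Evaluation": compile the form in memory and run it right away;
# the compiled code object is then discarded.
code = compile(source, "<repl>", "exec")
exec(code)   # prints 42

# "Compilation to a file": translate a whole file ahead of time and save
# the result without running it (a .pyc appears under __pycache__).
import py_compile
with open("some_module.py", "w") as f:
    f.write("print('hello from some_module')\n")
py_compile.compile("some_module.py")   # compiles, but does not run it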
Maybe a metaphor will be a bit easier to understand.
Imagine that you have some job to do and there's a worker who can do it. Unfortunately, this worker doesn't know your language. Let's say you speak English, and he knows only French. So you need a translator. OK, no problem, you have a translator too. Here you have two options:
Stay near the worker, tell the translator what to do, and watch the worker do it.
Ask the translator to write the task down on paper, and then hand this paper to the worker each time you need the job done.
If you need the job done only once, there's no big difference which way you go. However, if you want the same thing done many times, and possibly by different workers (all French), you may want to get the paper with the translated instructions.
So now to programming. You write the program in one language (e.g. Common Lisp), but the computer itself doesn't understand it. It "speaks" only its internal language: native code. So you need some kind of translator, and that's where the compiler comes into the game. The compiler translates (compiles) your code into native code so the computer can execute it.
Just as in example with French worker, you have 2 options:
Tell the computer what to do just when you need it. In this case the compiler will translate the instructions, the computer will execute them, and both will forget about it immediately. This is called evaluation.
Write the instructions to a file and then use that file to tell the computer what to do every time you need it. This is usually referred to as compilation.
Note the mess in the terminology: the compiler actually works in both cases, but when you compare evaluation and compilation, the latter refers only to the second case. In other contexts the terminology may differ, so try to understand the underlying processes when reading about things like evaluation, compilation, interpretation, and translation in general.
Also note that in the SBCL REPL, compilation (writing to the file) has the side effect of evaluation, so in this specific case the only difference is the writing to the file.
What actually happens when you evaluate an expression is that it is sent to SBCL, where the text of your expression is parsed and then compiled into native code, which is stored in memory in the Common Lisp environment.
The second method does the same, but compiles all the code into a file. The reason you would want to compile code is to make it quicker to load: there's no need to parse the syntax and semantics of your code and generate the code again; it can simply be loaded into memory ready to run.
So the benefit of compilation is simply speed of loading, saving the computer work.
If you're editing a file that has already been compiled, though, you can eval-defun to recreate a single function in memory only, which may be quicker than compiling the whole file.
This is not answering your question directly, but it is too long for a comment.
Most of the time you wouldn't use either option when working with SLIME and Emacs: you'd use C-c C-c (M-x slime-compile-defun). This will pop up (if not already open) the compilation buffer, which shows compilation errors and warnings, and it will highlight the problems inside your code. This also works nicely with things like Flymake cursor (once you navigate to the problematic area, it will show in the minibuffer what exactly the problem was).
Compiling a file happens in the rare event when you actually have a product you want to use later as it is. Or, most probably, you want others to be able to use it without you having to set it up. For example, if you have a web server and you want the system administrator to be able to (re)start it as needed, the administrator doesn't need to know how your software works; she only needs to know how to launch it.
Eval'ing a defun is just that: it sends the text to SWANK, but doesn't analyze the result. Of course your Lisp will print something back to you after you do that, but SLIME will stand aside.

Determining the most register-hungry part of a kernel

When I get a kernel that uses too many registers, there are basically three options:
leave the kernel as it is, which results in low occupancy
set the compiler to use a lower number of registers, spilling the excess, which causes worse performance
rewrite the kernel
For option 3, I'd like to know which part of the kernel needs the maximum number of registers. Is there any tool or technique that would allow me to identify this part? Reading through the PTX code (I develop on NVIDIA) is not helpful: the registers have various high numbers and, to be honest, the best I can do is identify which part of the assembly code maps to which part of the C code.
Just commenting out some code is not much of a way to go; for example, I noticed that if I just put the code into a loop, the number of registers rises dramatically, not only by one for the loop control variable. I personally suspect the NVIDIA compiler of imperfect variable liveness analysis, but of course I cannot do much about that :-)
If you're running on NVIDIA hardware, you can pass the -cl-nv-verbose compile option to clBuildProgram and then call clGetProgramInfo with CL_PROGRAM_BINARIES to get human-readable text about the compile. In there it will say the number of registers it uses. Note that NVIDIA caches compiles and only produces that register info when the kernel source actually changes, so you may want to inject some superfluous change into the source code to force it to do the full analysis.
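As a sketch of those host-side calls, assuming PyOpenCL rather than the raw C API; the kernel source below is a throwaway example:

import pyopencl as cl

src = "__kernel void noop(__global float *a) { a[0] = 0.0f; }"
ctx = cl.create_some_context()
prg = cl.Program(ctx, src).build(options=["-cl-nv-verbose"])

# The build log may carry the verbose compiler output...
print(prg.get_build_info(ctx.devices[0], cl.program_build_info.LOG))
# ...and, per the answer above, the binaries contain the human-readable
# text (PTX on NVIDIA) where the register count is reported.
for binary in prg.get_info(cl.program_info.BINARIES):
    print(binary.decode(errors="replace"))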
If you're running on AMD hardware, just set the environment variable GPU_DUMP_DEVICE_KERNEL=1. It will produce a text file of the IL during the compile. I'm not sure it explicitly says the number of registers used, but it's the equivalent of the NVIDIA technique above.
When looking at that output (at least on NVIDIA), you'll find that it seems to use an unlimited number of registers (if you go by the register numbers). In reality, the compiler does a flow analysis and actually reuses registers in a way that is not at all obvious from the IL.
This is a tough question in any language, and there's probably not one correct answer. But here are some things to think about:
Look for the code in the "deepest" scope you can find, keeping in mind that most functions are probably inlined by your OpenCL compiler. Count the variables used in this scope, and walk up the containing scopes. In each containing scope, count variables that are used both before and after the inner scope. These are potentially live while the inner scope executes. This process could help you account for the live registers at a particular part of the program.
Try to take variables that "span" deep scopes and make them not span the scope if possible. For example, if you have something like this:
int i = func1();
int j = func2(); // perhaps lots of live registers here
int k = func3(i,j);
you could try to reorder the first two lines if func2 has lots of live registers. That would remove i from the set of live registers while func2 is running. This is a trivial pattern, of course, but hopefully it's illustrative.
Think about getting rid of variables that just keep around the results of simple computations. You might be able to recompute these when you need them. For example, if you have something like int i = get_local_id(0) you might be able to just use get_local_id(0) wherever you would have used i.
Think about getting rid of variables that keep around values that are already stored in memory; you may be able to re-read them from memory when needed.
Without good tools for this kind of thing, it ends up being more art than science. But hopefully some of this is helpful.
