Explanation of the Turing Machine Halting Problem

why there can't be a program that checks another program

```I am trying to find Alan Turing's logical explanation of why there can't be a program that checks another program.
I remember we learned it in the computation course, but now I just can't find the solution,
and I need to explain it to someone at my work.
Thanks for the help.
```
```"Checks another program" is very broad. In fact, some features of programs can be checked, such as whether or not a Java program type checks. However, type checking a Java program will also reject some programs which will never actually produce a type error when run, such as:
int foo() {
    if (true) return 5;
    else return null;
}
This method will never actually return null, but the type checker can't see this. But couldn't we just make a smarter type system?
Unfortunately, the answer is no. Consider the following program:
int bar() {
    if (infiniteComputation()) return 5;
    else return null;
}
The type checker can't check if infiniteComputation will ever return false, because of the halting problem.
Another related theorem is Rice's Theorem, which is probably closer to what your question was about than the halting problem.
It is worth pointing out that the theorem only states that no non-trivial semantic property of programs can be checked exactly; it is still possible to approximate such checks well enough for practical purposes. One example is type systems, where we accept that some "correct" programs are rejected, like the snippet above. Compilers can also eliminate dead code in many cases, even though it is impossible to do so in every case.
```
```You are looking for the halting problem.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines.
```
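Turing's impossibility argument can be made concrete. The sketch below (a minimal Python illustration, where `halts` stands in for a hypothetical halting decider) shows that any candidate decider can be defeated by a program built from the decider itself:

```python
def make_paradox(halts):
    """Given any claimed halting decider `halts(program, data)` -> bool,
    construct a program that does the opposite of the decider's verdict."""
    def paradox(data):
        if halts(paradox, data):
            while True:       # decider said "halts", so loop forever
                pass
        return "halted"       # decider said "loops forever", so halt at once
    return paradox

# Any concrete decider is refuted by its own paradox program. For example,
# a decider that claims every program loops forever:
def always_says_loops(program, data):
    return False

p = make_paradox(always_says_loops)
print(p(0))  # prints "halted", contradicting the decider's claim about p
```

Since this construction works for every candidate decider, no correct one can exist.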
```There's the Wikipedia entry on this...
But basically, in order to determine whether an arbitrary, sufficiently complicated program will stop, you'd have to run it in order to trace its execution path. That means you're back to one program running another, and if the watched program doesn't stop, the program watching it won't stop either.
It's like computing the digits of pi: will it stop? How can you tell that its runtime is infinite, as opposed to it just suffering from some computational problem? We know that that particular computation is infinite, but others that look similar have never been proven so.
```
`Byron's answer should point you to the important info. As an aside, you can have a program that checks a specific program. What you can't have is a program that checks an arbitrary program for correctness.`

Compiler and Interpreter on memory efficiency

```I was studying the concepts of compilers and interpreters. I researched them on the internet, but I found two statements that seem to contradict each other:
One says: an interpreter doesn't involve intermediate code and hence is memory efficient.
https://www.programiz.com/article/difference-compiler-interpreter
The other says: an interpreter reads a statement from the input, converts it to an intermediate code, executes it, then takes the next statement in sequence.
https://www.tutorialspoint.com/compiler_design/compiler_design_overview.htm
Can anyone please tell me which one is right, and which is memory efficient?
```
```There are many ways to code an interpreter. Both options mentioned are possible, with different tradeoffs.
```
```The short answer is that neither article is right. Both have a very narrow (old-fashioned?) idea of what an interpreter is, corresponding to something we might call a "command processor". Moreover, neither article is self-consistent, so attempting to resolve their disagreements is probably a waste of time.
That said, when the programiz reference says "No intermediate object code is generated, hence are memory efficient," I think what it means (using its terms) is that an interpreter does translate a statement into intermediate code (note that "Figure: Interpreter" includes a box labelled "Intermediate Code"), but:
That code is not object code.
After executing that code, it discards it, so it never has the code for more than one statement in memory at a time.
The interpreter does not produce object code as output.
Given that reading, the two articles more-or-less agree.
But even given the narrow definition of 'interpreter', saying that it's "memory efficient" simply because it holds at most one statement's intermediate code in memory at a time ignores all the memory that the interpreter itself takes up.
Moreover, note that this can only be talking about the memory efficiency of the interpreter itself, and NOT about the memory efficiency of any programs it runs.
In short: forget about those articles. Wikipedia's article seems like a good place to start.```
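The "command processor" reading above can be illustrated with a toy interpreter. This is a sketch only, using Python's own `compile`/`exec` as a stand-in for the translation step; the point is that intermediate code for just one statement exists at a time:

```python
def interpret(source_lines, env=None):
    """A toy 'command processor' interpreter: translate one statement into
    intermediate code, execute it, then discard that code before moving on."""
    env = {} if env is None else env
    for line in source_lines:
        code = compile(line, "<interpreter>", "exec")  # per-statement intermediate code
        exec(code, env)                                # execute it immediately
        del code                                       # discard; only one statement's code is ever held
    return env

env = interpret(["x = 2", "y = x * 3"])
print(env["y"])  # 6
```

Note that this says nothing about the memory used by the interpreter itself or by the program's own data, which is the article's oversight.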

Universal Turing machine U should determine if M(x) stops

```So we have a universal Turing machine U that should determine whether a Turing machine M with input x will stop. The solution should be presented in pseudocode.
Can someone help me out a bit? How should I solve it?
```
```This sounds like the halting problem:
The halting problem can be stated as follows: "Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever". This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, what became known as a Turing machine; the halting problem is undecidable over Turing machines.
So no, it's not possible.
If you want, you can run M on x for a while. If it stops, we know it stops. If it hasn't stopped yet, we still don't know whether it ever will.```
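That "run it for a while" idea is a semi-decision procedure: it can confirm halting but never refute it. A minimal sketch, modeling a machine as a Python generator that yields once per step (an assumption made purely for illustration):

```python
def halts_within(machine, max_steps):
    """Run `machine` for at most `max_steps` steps.
    Returns True if it halted, or None if we gave up (no verdict either way)."""
    for _ in range(max_steps):
        try:
            next(machine)          # advance the machine by one step
        except StopIteration:
            return True            # the machine halted: a definite answer
    return None                    # still running: may halt later, may not

def counts_to(n):                  # a machine that halts after n steps
    for _ in range(n):
        yield

def loops_forever():               # a machine that never halts
    while True:
        yield

print(halts_within(counts_to(5), 100))     # True
print(halts_within(loops_forever(), 100))  # None
```

No choice of `max_steps` turns the `None` case into a "never halts" verdict; that is exactly the undecidability result.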

Common Lisp: compilation vs evaluation

```On Emacs + Slime with sbcl, once I define a function (or more) in a file I have two choices:
Evaluation: e.g. with C-M-x eval-defun
Compilation: e.g. with C-c M-k compile-file
The second one produces a .fasl file, too.
What are the differences between the two?
What's going on under the hood when I compile a definition / a file?
What are the Pros and Cons of each one?
```
```First of all, there's a function eval[1] that allows you to evaluate (i.e. execute) an arbitrary CL form in the language runtime. CL implementations may have 2 different modes of operation: compilation mode and interpretation mode. Compilation mode means that before evaluation, the form is first compiled in memory. Also, in CL evaluation happens not at the file level, but at the level of individual forms. So eval may either compile or interpret a form, depending on the mode of operation. (For example, SBCL by default always compiles, unless you instruct it not to by setting sb-ext:*evaluator-mode* to :interpret, while CLISP always interprets.)
Now, there's also a convenience function compile-file[2] that allows you to compile all the forms in some file and save the results in another file. This doesn't trigger evaluation of these forms.
Also, CL defines 3 distinct times in a program's lifecycle: compile time, load time, and execution time. And there's a way to control what happens when, using one of the most (if not the most) cryptic CL special operators, eval-when[3].
To sum up, C-M-x eval-defun will call eval on the form under the cursor. It will not necessarily compile it, but that is possible, depending on the implementation. C-c M-k compile-file will compile-file your buffer, but not evaluate its contents.
```
```Maybe a metaphor will be a bit easier to understand.
Imagine that you have some job to do, and there's a worker who can do it. Unfortunately, this worker doesn't know your language. Let's say you speak English, and he knows only French. So you need a translator. OK, no problem, you have a translator too. Here you have 2 options:
Stand near the worker, tell the translator what to do, and watch the worker do it.
Ask the translator to write the task down on paper, and then give this paper to the worker each time you need the job done.
If you need the job done only once, there's no big difference between the two. However, if you want the same thing done many times, and possibly by different workers (all French), you may want to get the paper with the translated instructions.
So now to programming. You write a program in one language (e.g. Common Lisp), but the computer itself doesn't understand it. It "speaks" only its internal language: native code. So you need some kind of translator, and that's where the compiler comes into the game. The compiler translates (compiles) your code into native code so the computer can execute it.
Just as in the example with the French worker, you have 2 options:
Tell the computer what to do right when you need it. In this case the compiler will translate the instructions, the computer will execute them, and both will forget about it immediately. This is called evaluation.
Write the instructions to a file and then use it to tell the computer what to do every time you need it. This is usually referred to as compilation.
Note the mess in the terminology: the compiler actually works in both cases, but when you compare evaluation and compilation, the latter refers only to the 2nd case. In other contexts the terminology may differ, so try to understand the underlying processes while reading about things like evaluation, compilation, interpretation, and translation in general.
Also note that in the SBCL REPL, compilation (writing to a file) has the side effect of evaluation. So in this specific case the only difference is the writing to the file.
```
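The same split exists in other languages, which may make it easier to see. As a rough analogy only (Python, not Common Lisp): you can either compile-and-run a form in memory, or translate a source file into a cached on-disk artifact, loosely the way compile-file produces a .fasl:

```python
import os
import py_compile
import tempfile

# Option 1, "evaluation": translate in memory, execute, keep nothing on disk.
env = {}
exec(compile("result = 21 * 2", "<in-memory>", "exec"), env)
print(env["result"])  # 42

# Option 2, "compilation": translate a source file into an on-disk artifact
# (.pyc here, loosely analogous to SBCL's .fasl) that can be loaded later
# without re-translating the source.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "job.py")
    with open(src, "w") as f:
        f.write("result = 21 * 2\n")
    artifact = py_compile.compile(src, cfile=src + "c")
    artifact_exists = os.path.exists(artifact)
print(artifact_exists)  # True
```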
```What actually happens when you evaluate an expression is that it is sent to SBCL, where the text of your expression is parsed and then compiled into native code, which is stored in memory in the Common Lisp environment.
The second method does the same, but saves all the compiled code into a file. The reason you would want to compile code is to make it quicker to load: there's no need to parse the syntax and semantics of your code and generate the code again; it can simply be loaded into memory ready to run.
So the benefit of compilation is just speed of loading, saving the computer work.
When editing a file that is already compiled, though, you can eval-defun to recreate a single function in memory, which may be quicker than compiling the whole file.
```
```This is not answering your question directly, but it is too long for a comment.
Most of the time you wouldn't use either option. If we're speaking about SLIME and Emacs, you'd be using C-c C-c (or M-x slime-compile-defun). This will pop up (if not already open) the compilation buffer, which shows compilation errors and warnings, and it will highlight the problems inside your code. This also works nicely with things like Flymake cursor (once you navigate to the problematic area, it will show in the minibuffer what exactly the problem was).
Compiling a file is for the rarer event when you actually have a product you want to use later as it is. Or, most probably, you want others to be able to use it without you having to set it up. For example, if you have a web server and you want the system administrator to be able to (re)start it as needed, the administrator doesn't need to know how your software functions; she only needs to know how to launch it.
Eval'ing a defun is just that: it sends the text to SWANK, but doesn't analyze the result. Of course your Lisp will print something back to you after you do that, but SLIME will stand aside.```

Determining the most register-hungry part of a kernel

```When I get a kernel that uses too many registers, there are basically 3 things I can do:
leave the kernel as it is, which results in low occupancy
tell the compiler to use a lower number of registers, spilling the rest, which causes worse performance
rewrite the kernel
For option 3, I'd like to know which part of the kernel needs the maximum number of registers. Is there any tool or technique that would allow me to identify this part? Reading through the PTX code (I develop on NVidia) is not helpful: the registers have various high numbers, and to be honest, the best I can do is identify which part of the assembly code maps to which part of the C code.
Just commenting out some code is not much of a way to go. For example, I noticed that if I just put the code into a loop, the number of registers rises dramatically, not only by one for the loop control variable. I personally suspect the NVidia compiler of imperfect variable liveness analysis, but of course I can't do much about that :-)
```
```If you're running on NVidia hardware, you can pass the -cl-nv-verbose compile option to clBuildProgram and then query clGetProgramInfo with CL_PROGRAM_BINARIES to get human-readable text about the compile. In there it will state the number of registers the kernel uses. Note that NVidia caches compiles and only produces that register info when the kernel source actually changes, so you may want to inject some superfluous change into the source code to force it to do the full analysis.
If you're running on AMD hardware, just set the environment variable GPU_DUMP_DEVICE_KERNEL=1. It will produce a text file of the IL during the compile. I'm not sure it explicitly states the number of registers used, but it's the equivalent of the NVidia technique above.
When looking at that output (at least on NVidia), you'll find that it seems to use an unbounded number of registers (if you go by the register numbers). In reality, it does a flow analysis and actually reuses registers in a way that is not at all obvious from looking at the IL.
```
```This is a tough question in any language, and there's probably not one correct answer. But here are some things to think about:
Look for the code in the "deepest" scope you can find, keeping in mind that most functions are probably inlined by your OpenCL compiler. Count the variables used in this scope, and walk up the containing scopes. In each containing scope, count variables that are used both before and after the inner scope. These are potentially live while the inner scope executes. This process could help you account for the live registers at a particular part of the program.
Try to take variables that "span" deep scopes and make them not span the scope if possible. For example, if you have something like this:
int i = func1();
int j = func2(); // perhaps lots of live registers here
int k = func3(i,j);
you could try to reorder the first two lines if func2 has lots of live registers. That would remove i from the set of live registers while func2 is running. This is a trivial pattern, of course, but hopefully it's illustrative.
Think about getting rid of variables that just keep around the results of simple computations. You might be able to recompute these when you need them. For example, if you have something like int i = get_local_id(0) you might be able to just use get_local_id(0) wherever you would have used i.
Think about getting rid of variables that are keeping around values stored in memory.
Without good tools for this kind of thing, it ends up being more art than science. But hopefully some of this is helpful.```
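The live-variable counting described in the first suggestion can be sketched mechanically. This is a simplification, not real compiler liveness analysis: assume we only know, for each variable, the lines on which it appears, and treat a variable as live at a point if it is defined at or before that point and still used after it:

```python
def live_at(uses, point):
    """Approximate the set of variables live at line `point`.
    `uses` maps each variable to the sorted line numbers where it appears;
    the first appearance is taken to be its definition, the last its final use."""
    return {v for v, lines in uses.items()
            if lines[0] <= point < lines[-1]}

# Toy example mirroring the snippet above: i and j are produced on lines 1-2
# and consumed on line 3, so both are live across line 2 (while func2 runs).
uses = {"i": [1, 3], "j": [2, 3], "k": [3, 4]}
print(sorted(live_at(uses, 2)))  # ['i', 'j']
```

Reordering so that `i` is defined after line 2 would shrink that live set, which is exactly the transformation suggested above.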