## How does a Turing machine compare to a computer? - turing-machines

I have read articles about Turing machines, including the Wikipedia entry, as well as questions here about them. What I understand is that a Turing machine is just a logical machine with an infinite tape, a read/write head, and a table of rules. If that's true, then without that table a Turing machine is nothing. Yet even a simple computer can do everything from simple word processing to playing games, so how does a Turing machine compare to a computer?

## Related

### Universal Turing Machine Problems

If I have a machine, call it machine 1, that is able to solve a problem, it's just a machine, not necessarily a Turing machine. It can solve one specific problem. If this exact same problem can be solved on a Universal Turing Machine, is my original machine 1 a Universal Turing Machine too? This does not hold for all problems, which has already been answered. Are there any problems that have this property at all? If it is absolutely not true, then why? Can someone give an example of a problem such that, if my original machine 1 solves it, that definitely makes it a Universal Turing Machine? Or does such a problem not exist? If it doesn't exist, why? I'm very interested, but can't figure it out... Thanks. Edit: made the question clearer.

The point of a Universal Turing Machine (UTM) is that for any Turing Machine (TM) you can create an encoding that describes the operation of that TM and have that encoding run on another TM. The UTM is a TM whose definition is sufficiently powerful that any other TM's definition can be rewritten in it. Think of the UTM as an interpreter, and a specific TM as a specific task. Unless the TM is also in the class of interpreters, it is not a UTM as well. (A UTM is itself a specifically tasked TM: its task is interpretation.) So, to answer your second question: if you can show that the UTM and your TM are equivalent, then you have shown that your TM is also a UTM. To do this, you need to show how an encoded program for the UTM can be translated into an equivalent program for your TM.
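The interpreter analogy can be made concrete with a minimal sketch (the instruction set and names here are invented for illustration, not part of any standard): one fixed evaluator runs any program handed to it as data, just as a UTM runs any encoded TM handed to it on its tape.

```python
# A toy "universal" evaluator: programs are data (lists of instructions),
# and one fixed machine runs any of them. This mirrors the UTM idea:
# the machine is fixed, the encoded program varies.

def run(program, x):
    """Execute a tiny register program on input x and return the result."""
    acc = x
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        else:
            raise ValueError(f"unknown instruction: {op}")
    return acc

# Two different "machines", each just an encoding handed to run():
double_then_inc = [("mul", 2), ("add", 1)]
triple = [("mul", 3)]

print(run(double_then_inc, 5))  # 11
print(run(triple, 5))           # 15
```

Each list is a single-purpose "machine"; `run` plays the role of the UTM. Showing a specific machine is universal would mean showing it can itself play the role of `run`.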

A Universal Turing Machine can solve any of a huge class of problems. If your machine (1) can solve 1+1, that doesn't mean it can solve everything else in that huge class. So it may not be a Universal Turing Machine.

Logicians differentiate between "sufficient" and "necessary" conditions. Take, for example, the sentence "The sky is blue" (let's just assume that's always true). What you know now is this: when you look at the sky, you see the color blue. What you don't know is this: that when you see the color blue, you're looking at the sky -- you might as well be looking at your neighbour's car. In logical terms, the color blue is necessary for the sky, but it's not sufficient. The same is true for your case: machine (1) does solve your problem, so it's indeed a solvable problem. Hence, being able to solve the problem is a necessary condition for being a UTM, but not a sufficient one, because a UTM must be able to solve any problem (that's solvable at all), not just this single one.

A universal Turing machine can run any code that any specific Turing machine runs. So your universal Turing machine (2) can solve the problem that your original Turing machine (1) was designed to solve. Your original Turing machine (1), however, can solve only that exact problem and can't solve any other problem (including the "problem" of being a universal Turing machine). So no, your original Turing machine is not a universal Turing machine according to your description. (It might be if you define it to be, but that's kind of cheating.)

Can someone give an example of a problem to be solved? Sure: given an encoded Turing machine and its input, what is the result? :) If your machine can solve this problem, it is surely a UTM. Do you know the line of reasoning for why various problems are NP-complete, like "can I solve the 3-SAT problem when I have a machine that solves the Hamiltonian path problem?" You can surely use the same kind of reduction to answer your question.

Proving the Turing completeness of a particular system is not trivial, unless you can easily show that it's equivalent/isomorphic to another system that is known to be Turing complete. So, short answer: there is no simple test that you can put your machine through to check whether it is Turing complete. You have to analyze and show properties of the system as a whole. If you want to learn more about this topic, read the articles on Turing completeness and computability theory.

Imagine how you would proceed if you had to write high-level code to simulate a Turing machine. You will require the following:

1. An array to hold the input symbols and whatever you do to them (the tape).
2. An array (possibly 2-D) to hold the transition function that you prompt the user for.
3. An algorithm that reads the user's transition functions and simulates them on array 1.
4. A few variables your program needs to track its own state.

If you think of it this way, then if you end up with perfectly working code, you end up with a perfect UTM. The catch, however, is that no matter how efficiently you code, you can't stop the user from entering transition functions that make your code run forever. So there will be certain problems on which the UTM fails to halt, and for those problems we say we can't build a membership-testing machine (though notice that a membership-verification machine, one that confirms "yes" instances, is always possible).
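Following that recipe, here is a minimal sketch of such a simulator in Python; the encoding conventions (blank symbol `"_"`, moves as -1/+1, a step budget standing in for "the user's machine may run forever") are choices made for illustration:

```python
def simulate(transitions, tape_input, start, accept, max_steps=10_000):
    """Run an encoded Turing machine on tape_input.

    transitions: {(state, symbol): (new_state, write_symbol, move)}
                 where move is -1 (left) or +1 (right).
    Returns (halted, final_state, tape).
    """
    tape = dict(enumerate(tape_input))   # ingredient 1: the tape
    state, head = start, 0               # ingredient 4: the machine's own state
    for _ in range(max_steps):           # ingredient 3: the driving loop
        if state == accept:
            return True, state, tape
        key = (state, tape.get(head, "_"))
        if key not in transitions:       # no applicable rule: the machine halts
            return True, state, tape
        state, write, move = transitions[key]  # ingredient 2: the table
        tape[head] = write
        head += move
    return False, state, tape            # step budget exhausted: unknown

# Example encoded machine: flip every bit, accept on the first blank.
flipper = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
halted, state, tape = simulate(flipper, "1011", "scan", "done")
print(halted, state)  # True done
```

The `max_steps` cutoff is exactly the catch described above: without it the simulator inherits the non-termination of whatever transition table it is given.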

### Compare numbers without deleting them - Turing machines

Is there a way to compare two binary numbers with a Turing machine without deleting them in the process?

### Operating System Process Management, Memory Management, Kernel

I am working at a software firm where hardware-independent coding is done for network chipsets: fully multithreaded code, various buffers (CRU buffer, linear buffer), careful use of (stack) memory, IPC via message queues, and multiple locks and semaphores for concurrency. Now I will be assigned to a new development project, where I have to understand the code base and develop new features within one month. I feel like I'm in the middle of the Amazon jungle :).

I am at a beginner level in OS concepts and, I feel, an intermediate level in C. So I am looking for suggestions for material/books that could help me improve and consolidate my OS skills. I have seen the OS book by Abraham Silberschatz and Modern Operating Systems by Tanenbaum (3rd edition). Both look big and cover all corners of operating systems; I plan to study them steadily and slowly for future reference.

I am also looking for materials/books that explain the main concepts in detail. For example, I saw virtual memory explained clearly in one online resource:

```
amesmol@aubergine:~/test> objdump -f a.out

a.out:     file format elf32-i386
architecture: i386, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x080482a0
```

Explanation: notice the start address of the program is 0x080482a0. The program behaves as if this were its actual physical starting address, but it is a virtual address; its real starting point might be at a physical memory location such as 0x1000000.

In that style (correct point plus example), could you people suggest good materials for OS concepts (process management, memory management, IPC)? Can you also suggest ways to improve and consolidate these skills (e.g., what kind of mini homework project I could do)? Thanks in advance.

If you are working on projects, you should go over the books you mentioned as soon as possible for theoretical explanations, concepts, and terminology. After that, even alongside your reading, I suggest you go to university websites to get hands-on skills through small projects. Some suggested links:

- http://www.eecg.toronto.edu/~lie/Courses/ECE344/
- http://web.stanford.edu/~ouster/cgi-bin/cs140-winter13/pintos/pintos.html#SEC_Contents
- http://www3.cs.stonybrook.edu/~porter/courses/cse624/f13/project.html (JOS implementation; very helpful instructors if you send them specific queries)
- http://www.brokenthorn.com/Resources/OSDev7.html
- http://www.osdever.net/bkerndev/Docs/intro.htm

(The last two links are not university links, but as a beginner I recommend starting with them.) Apart from the above, Lions' Commentary on UNIX, with its line-number references to the source, should be on your reading list to understand the implementation of a small-scale OS.

### Universal Turing machine U should determine if M(x) stops

So we have a universal Turing machine U that should determine whether a Turing machine M with input x will stop. The solution should be presented in pseudocode. Can someone help me out a bit? How should I solve it?

This sounds like the halting problem, which can be stated as follows: "Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever." This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input or will run forever. Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. So no, it's not possible in general. If you want, you can run M on x for a while: if it stops, you know it stops; if it hasn't stopped yet, you still don't know whether it ever will.
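That partial strategy, the only one available, can be sketched as follows. This is an illustrative model, not a real U: each "machine" is represented as a Python generator so we can advance it one step at a time, and the observer answers either "halts" or "unknown", never "loops forever".

```python
def observe(machine, x, max_steps=1000):
    """Run `machine` on x for at most max_steps steps.

    Returns "halts" if it finishes within the step budget, otherwise
    "unknown". It can never truthfully answer "loops forever": that
    would require solving the halting problem.
    """
    steps = machine(x)
    for _ in range(max_steps):
        try:
            next(steps)        # advance the machine by one step
        except StopIteration:  # the machine halted on its own
            return "halts"
    return "unknown"           # budget exhausted: no conclusion

def countdown(x):       # a machine that clearly halts
    while x > 0:
        x -= 1
        yield

def loop_forever(x):    # a machine that clearly doesn't
    while True:
        yield

print(observe(countdown, 5))      # halts
print(observe(loop_forever, 5))   # unknown
```

Raising `max_steps` only moves the boundary; no finite budget turns "unknown" into a proof of non-termination, which is the content of Turing's result.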

### GCC optimization options for AMD Opteron 4280: benchmark

We're moving from one local computational server with 2x Xeon X5650 to another with 2x Opteron 4280. Today I tried to launch my C programs on the new (AMD) machine and discovered a performance drop of more than 50%, keeping all possible parameters the same (even the seed for the random number generator). I started digging into this problem: googling "amd opteron 4200 compiler options" gave me a couple of suggestions, i.e., flags (options) for the GCC 4.6.3 compiler available to me. I played with these flags and summarized my findings in the plots. I'm not allowed to upload pictures, so the charts are here: https://plus.google.com/117744944962358260676/posts/EY6djhKK9ab

I'm wondering if anyone could comment on the subject. In particular, I'm interested in why "... -march=bdver1 -fprefetch-loop-arrays" and "... -fprefetch-loop-arrays -march=bdver1" yield different runtimes. I'm also not sure whether, say, "-funroll-all-loops" is already included in "-O3" or "-Ofast"; if so, why does adding this flag one more time make any difference at all? And why do additional flags make performance on the Intel processor even worse (except "-ffast-math", which is kind of obvious, because it enables less precise and by definition faster floating-point arithmetic, as I understand it)?

A bit more detail about the machines and my program. The 2x Xeon X5650 machine is an Ubuntu server with GCC 4.4.3; it is a 2 (CPUs on the motherboard) x 6 (real cores each) x 2 (HyperThreading) = 24-thread machine, and something else was running on it during my "experiments" or benchmarks. The 2x Opteron 4280 machine is an Ubuntu server with GCC 4.6.3; it is a 2 (CPUs on the motherboard) x 4 (Bulldozer modules each) x 2 (AMD's module-level threading, kind of a core) = 16-thread machine, and I was using it solely for my benchmarks.

My benchmarking program is just a Monte Carlo simulation: it does some I/O in the beginning and then runs ~10^5 Monte Carlo loops to give me the result. So I assume it mixes integer and floating-point calculations, looping every now and then and checking whether a randomly generated "result" is "good" enough for me or not. The program is single-threaded, and I launched it with the very same parameters for every benchmark (obvious, but I should mention it anyway), including the random generator seed, so the results were 100% identical. The program is NOT memory-intensive. The reported runtime is just the "user" time from the standard /usr/bin/time command.