Is PCP recognizable? - turing-machines

I want to know whether the Post Correspondence Problem (PCP) is recognizable. I learned how to demonstrate the undecidability of PCP, and I thought of using a similar approach for recognizability, i.e. considering MPCP and showing whether it is recognizable. I am not sure whether that is a good approach.

The Post Correspondence Problem is indeed recognizable. Here are two ways to see this:
Build a recognizer for it. Given a set of tiles, imagine a TM that lists all sequences of exactly one domino, then exactly two dominoes, then exactly three, and so on, progressively increasing the number of dominoes. If the TM ever finds a sequence of dominoes where the concatenated tops match the concatenated bottoms, it accepts. Otherwise, it loops forever.
Build a nondeterministic TM for it. Design a nondeterministic TM that, given a set of tiles, nondeterministically guesses a series of tiles to line up, then checks whether the tops and bottoms match. If so, it accepts; otherwise it rejects. This NTM will then accept any "yes" instance since it can always guess a valid series of dominoes, and will not accept any "no" instances because it can never guess a valid ordering of the dominoes.
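Here is a minimal Python sketch of the first approach, the enumerating recognizer. The representation of a tile as a (top, bottom) pair of strings is an assumption for illustration, and the example instance is just one small "yes" instance.

    from itertools import product

    def pcp_recognizer(dominoes):
        """Recognizer sketch for PCP; dominoes is a list of (top, bottom) string pairs.

        Enumerate every index sequence of length 1, then 2, then 3, ... and accept
        (return the sequence) as soon as the concatenated tops equal the concatenated
        bottoms.  On a "no" instance this loops forever, which is exactly the
        recognizer behaviour described above."""
        n = 1
        while True:
            for seq in product(range(len(dominoes)), repeat=n):
                top = "".join(dominoes[i][0] for i in seq)
                bottom = "".join(dominoes[i][1] for i in seq)
                if top == bottom:
                    return seq
            n += 1

    # One known match for this instance uses the dominoes in the order 2,1,3,2,4
    # (1-indexed); the call below prints whichever match the enumeration finds
    # first, as a 0-indexed tuple.
    print(pcp_recognizer([("b", "ca"), ("a", "ab"), ("ca", "a"), ("abc", "c")]))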

Related

Improving parameters by artificial intelligence [closed]

This might not be the proper place to ask this question, but I didn't find a better place for it. I have a program that has, for example, 10 parameters. Every time I run it, it can produce one of 3 results: 0, 0.5, or 1. I don't know how the parameters influence the final result. I need something to improve my program little by little so it gets more 1s and fewer 0s.
First, just to get the terminology right, this is really a "search" problem, not a "machine learning" problem (you're trying to find a very good solution, not trying to recognize how inputs relate to outputs). Your problem sounds like a classic "function optimization" search problem.
There are many techniques that can be used. The right one depends on a few different factors, but the biggest is the size and shape of the solution space, and the key question there is "how sensitive is the output to small changes in the inputs?" If you hold all the inputs except one the same and make a tiny change, are you going to get a huge change in the output or just a small change? Do the inputs interact with each other, especially in complex ways?
The smaller and "smoother" the solution space (that is, the less sensitive it is to tiny changes in inputs), the more you would want to pursue straightforward statistical techniques, guided search, or perhaps, if you wanted something a little more interesting, simulated annealing.
The larger and more complex the solution space, the more that would guide you towards either more sophisticated statistical techniques or my favorite class of algorithms, genetic algorithms, which can search a large solution space very rapidly.
Just to sketch out how you might apply genetic algorithms to your problem, let's assume that the inputs are independent of each other (a rare case, I know); a small code sketch follows these steps:
Create a mapping to your inputs from a series of binary digits 0011 1100 0100 ...etc...
Generate a random population of some significant size using this mapping
Determine the fitness of each individual in the population (in your case, "count the 1s" in the output)
Choose two "parents" by lottery:
For each half-point in the output, an individual gets a "lottery ticket" (in other words, an output that has 2 "1"s and 3 "0.5"s will get 7 "tickets" while one with 1 "1" and 2 "0.5"s will get 4 "tickets")
Choose a lottery ticket randomly. Since "more fit" individuals will have more "tickets" this means that "more fit" individuals will be more likely to be "parents"
Create a child from the parents' genomes:
Start copying one parent's genome from left to right 0011 11...
At every step, switch to the other parent with some fixed probability (say, 20% of the time)
The resulting child will have some amount of one parent's genome and some amount of the other's. Because the child was created from "high fitness" individuals, it is likely that the child will have a fitness higher than the average of the current generation (although it is certainly possible that it might have lower fitness)
Replace some percentage of the population with children generated in this manner
Repeat from the "Determine fitness" step... In the ideal case, every generation will have an average fitness that is higher than the previous generation and you will find a very good (or maybe even ideal) solution.
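Here is a minimal Python sketch of that loop. The 4-bit-per-parameter encoding, the population size, and the hypothetical run_program stand-in for the black-box program are all assumptions for illustration:

    import random

    N_PARAMS, BITS_PER_PARAM = 10, 4            # assumed encoding: 4 bits per parameter
    GENOME_LEN = N_PARAMS * BITS_PER_PARAM
    POP_SIZE, SWITCH_P, RUNS_PER_EVAL = 50, 0.2, 10

    def run_program(params):
        """Hypothetical stand-in for the black-box program: returns 0, 0.5 or 1."""
        return random.choice([0, 0.5, 1])

    def decode(genome):
        """Map each 4-bit chunk of the genome to one integer parameter in [0, 15]."""
        return [int("".join(genome[i:i + BITS_PER_PARAM]), 2)
                for i in range(0, GENOME_LEN, BITS_PER_PARAM)]

    def fitness(genome):
        """Lottery tickets: two per 1-result and one per 0.5-result, over several runs."""
        params = decode(genome)
        return sum(int(2 * run_program(params)) for _ in range(RUNS_PER_EVAL))

    def crossover(a, b):
        """Copy a parent's genome left to right, switching parents with a fixed probability."""
        child, src = [], a
        for i in range(GENOME_LEN):
            if random.random() < SWITCH_P:
                src = b if src is a else a
            child.append(src[i])
        return child

    population = [[random.choice("01") for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for generation in range(100):
        scored = sorted(((fitness(g), g) for g in population),
                        key=lambda fg: fg[0], reverse=True)
        weights = [max(f, 1) for f, _ in scored]         # everyone keeps at least one ticket
        parents = random.choices([g for _, g in scored], weights=weights, k=POP_SIZE)
        children = [crossover(parents[2 * i], parents[2 * i + 1])
                    for i in range(POP_SIZE // 2)]
        # replace the weaker half of the population with the new children
        population = [g for _, g in scored[:POP_SIZE - len(children)]] + children

A common addition, not shown here, is mutation (flipping an occasional bit in each child) to keep the population from converging prematurely.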
Are you just trying to modify the parameters so the results come out to 1? It sounds like the program is a black box where you pick the input parameters and then see the results. Since that is the case, I think it would be best to choose a range of input parameters, cycle through them, and inspect the outputs to try to discern a pattern. If you can automate this, it will help a lot. After you run through the data you may be able to spot-check which parameters give you which results, or you could apply some machine-learning techniques to determine which parameters lead to which outputs.
As Larry said, this looks like a combinatorial search, and the solution will depend on the "topology" of the problem.
If you can, try to get The Algorithm Design Manual (S. Skiena); it has a chapter on this that can help you determine a good method for this problem...

How does a non-deterministic Turing machine work?

I understand they aren't real and they seem to branch computation whenever there are 2 options, instead of picking one. But, for example, if I say this:
"Non deterministically guess a bijection p of vertices from Graph G to Graph H" (context here is Graph Isomorphism)
What is that supposed to mean? I understand the bijection, but it says "non deterministically guess". If it's guessing, how is that an algorithmic approach? How can it guarantee it's going to work?
There are different ways to picture one. One I find useful is the oracle model. Did you ever see the Far Side cartoon where a derivation on the blackboard has "Here a miracle occurs" as one of the intermediate steps? In this version of an NDTM, when you need to choose something, the oracle writes the correct version on the right part of the tape. (This is taken from Garey and Johnson, Computers and Intractability, their classic book on NP-complete problems.) You aren't allowed to assume you've got the right one, though, and there may not be a correct one.
Therefore, when you non-deterministically guess a bijection, you're getting the correct bijection for your purposes, provided one exists.
It isn't a good basis for an algorithm, since deterministically simulating a non-deterministic Turing machine is, in general, exponential in the number of nondeterministic choices, and the algorithmic equivalent of the nondeterministic guess is to try every possible bijection.
From a theoretical point of view, I'd translate it as "If there is a bijection such that....". From an algorithmic point of view, find another book, or another chapter of the same book, since that approach is useless for even moderately large graphs.
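To make the "try every possible bijection" reading concrete, here is a minimal Python sketch of that deterministic equivalent for graph isomorphism. It is exponential, exactly as warned above; the adjacency-set representation of the graphs is an assumption for illustration.

    from itertools import permutations

    def are_isomorphic(G, H):
        """Deterministic stand-in for "non-deterministically guess a bijection p":
        try every bijection from G's vertices to H's vertices and check it.
        G and H are dicts mapping each vertex to the set of its neighbours."""
        g_vertices, h_vertices = list(G), list(H)
        if len(g_vertices) != len(h_vertices):
            return False
        for perm in permutations(h_vertices):             # every candidate bijection
            p = dict(zip(g_vertices, perm))
            if all((p[v] in H[p[u]]) == (v in G[u])       # edges must correspond exactly
                   for u in G for v in G):
                return True                               # this is the "guess" that works
        return False

    # Tiny example: the path a-b-c is isomorphic to the path x-y-z.
    G = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    H = {"x": {"y"}, "y": {"x", "z"}, "z": {"y"}}
    print(are_isomorphic(G, H))                           # True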
They don't; they just illustrate a point. Basically, what they do is guess an answer and then check (deterministically) whether it's right. It's not the guessing part that's important, though; it's checking that the answer is right. It's like asking: given an arbitrary solution, is it correct? For example, there are problems that take exponential time to compute, and some of their answers can be checked in polynomial time, while others can't. What the non-deterministic TM does is separate those two groups: the ones whose answers can be checked quickly from the ones whose answers can't. And this brings up the bigger question: if one group of problems' solutions can be verified much more quickly than another's, can their solutions also be generated more quickly? That question hasn't been answered yet.
I believe what is meant is "non deterministically choose a solution" and then test that the solution is true. Since all possible choices (guesses) are tested, the solution is guaranteed.
A physical implementation of the non-deterministic Turing machine is the DNA computer. For example, here's an outline of how to solve the traveling salesman problem in DNA:
Get/make a bunch of DNA sequences, each with length proportional to the cost of an edge in your graph and sticky ends with sequences uniquely identifying one of the vertices that the edge connects.
Mix them together, with DNA ligase in a big beaker. They'll anneal to each other in sequences that represent every possible path through the graph (ok, not the really long ones).
Remove all the sequences that are missing at least one vertex. To do this, sequentially select for each vertex using hybridization. For example, if "ACGTACA" encodes vertex 1, select for sequences that bind to "TGTACGA". Then repeat this selection for every other vertex.
Sort the remaining sequences by size using gel electrophoresis. Then sequence the shortest one. The sequence encodes the shortest path through your graph.

Multiple levels of infinity [closed]

Some programmers don't see much relevance in theoretical CS classes (especially my students). Here is something I find very relevant. Let me build it up in pieces for those that haven't seen it before...
A) Programming problems can be reworded to be questions about languages.
B) Turing machines recognize languages.
C) Turing machines can be encoded as (large) integers.
D) Therefore, the number of possible Turing machines is countably infinite
E) The power set of a set is just all the possible subsets of that set.
F) If a set is countably infinite, its power set is bigger, i.e., uncountably infinite.
G) Therefore, if a language is infinite, it has an uncountably infinite number of subsets. Each of these represents a problem. But there are only countably many Turing machines with which to solve those problems. And if we cannot solve a problem with a Turing machine, it cannot be solved.
Conclusion...we can only solve an infinitesimally small fraction of all problems.
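For readers who do want the formal justification of step F, here is the one-line diagonal argument behind "the power set is bigger" (Cantor's theorem), written out as a sketch:

    % Cantor's theorem: no f : S -> P(S) can be onto.
    % The "diagonal" set D misses every value of f:
    \[
      D = \{\, x \in S \mid x \notin f(x) \,\}
      \qquad\text{and}\qquad
      D = f(d) \;\Rightarrow\; \bigl(d \in D \iff d \notin D\bigr),
    \]
    % so D is a subset of S that f never hits, and P(S) is strictly bigger than S.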
My question is almost here...
Whenever I present this argument to students, they get stuck on countably vs. uncountably infinite. They generally do not have strong math backgrounds, so attempts to explain via Cantor's diagonalization argument tend to make their eyes glaze over.
Usually I try to give them something they can grasp, like this...place a finite box over any portion of the counting number line, and we capture a finite quantity of those numbers...but place a finite box over any portion of the real number line, and we capture an infinite quantity of real numbers. A sort of evidence that there ARE more real numbers than there are counting numbers.
Finally my question...How do YOU explain the concept of multiple levels of infinity to those that have never heard of the concept, and may not be mathematically inclined?
Final Edit: I learned a lot by asking this question and I appreciate the feedback. I wasted far too much time trying to figure out what "Community wiki" actually was. I learned there is an inherent bias in some people against theory questions that I feel is simply a mistake because so much of what we do today was theory yesterday. But this bias is natural and while I disagree with them on the value of theory, I have no problem with it, and it helps me understand where my students are coming from. I do think the BS comment was unnecessary.
I do not feel this question was a poll or a predictions-for-2009 question at all. Those of you that only want coding questions with coding answers might want to re-examine that requirement. I have moved this question to community wiki but strongly feel I was compelled to do so by improper use of force.
I think your explanation is the simplest, as that is what I learned. It's almost as if real numbers have multiple dimensions of infinity. It is infinite in one direction, but also in another.
Diagonalization is a very cool experiment, but I can see how it may go over beginners' heads. It does make sense, though, if it is demonstrated in a very deliberate way, going very slowly. Just throwing numbers up quickly can be hard to follow, I imagine.
I think the cardinality of the continuum is also helpful here, although it perhaps needs to be simplified to a beginner level. Showing that there is more beyond the simple reals-vs.-integers distinction can potentially help something 'click'.
My recommended first step for teaching levels-of-infinity to people of limited mathematical background is "Why do mathematicians say that the set of even numbers and the set of whole numbers are the same size?" This introduces "if you can associate every member of set A with exactly one member of set B, mathematicians say the sets have the same size." Next comes showing that every fraction (every rational number) can be associated with exactly one counting number, using the diagonal method. Once they're satisfied with that, I bring up π, which everyone knows has an infinite number of non-repeating digits in its decimal expansion, which means it cannot be expressed as a fraction, so it will be left over, and that means that the set of irrational numbers is larger than the set of counting numbers. Some wiseguys will object that π has a finite number of digits if you're working in base π, namely 10, but you can come back at them with "okay, brainiac, write down the number of days in a week in base π."
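If it helps to make "every fraction can be associated with exactly one counting number" concrete, here is a small Python sketch of the diagonal walk over the positive rationals (this particular enumeration order is just one common choice):

    from fractions import Fraction
    from math import gcd

    def rationals():
        """Walk the numerator/denominator grid one diagonal at a time, yielding each
        positive rational exactly once; its position in this sequence is the counting
        number it gets paired with."""
        d = 2
        while True:
            for num in range(1, d):
                den = d - num
                if gcd(num, den) == 1:            # skip duplicates such as 2/4 == 1/2
                    yield Fraction(num, den)
            d += 1

    gen = rationals()
    print([str(next(gen)) for _ in range(10)])
    # ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']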
Where's the "very relevant" part?
Edit: OK, I've been writing code professionally for 13 years and I wouldn't call levels of infinity relevant to anything I've ever worked on.
And I guess I would draw a different conclusion from your theory. How is "we can only solve an infinitesimally small fraction of all problems" the limit of our craft?
Sounds to me like there are an infinite (countable or uncountable doesn't seem to make a difference) number of problems. Therefore our craft is unlimited -- we will never run out of problems to solve.
There are several tens of thousands of words in the English language. You can count the number of words in a book or the number of books in the universe. You cannot count the number of books that will ever be written.
Forgive the poorly written metaphors below.
I personally think of the countability/uncountability dichotomy as being very closely related to Zeno's paradox of the arrow.
The set of all natural numbers is countable: there is a specific method of generating the "next" integer, and it will get you a step forward. Countable sets are forward-moving in that sense. It's almost as if the set has a velocity; it keeps moving forward.
The set of all real numbers is uncountable, like zeno's arrow.
If you have to move between the origin (0) and the destination (1 = 2^0), you must first go through the midpoint (1/2 = 2^-1).
Now your destination is 1/2; to get between the origin (0) and 1/2, you must first go through its midpoint (1/4 = 2^-2).
And so on: to get from 0 to 1, you must first get to something in between, and to get there you must first get to something in between that. There is no finite method of calculating the "next" step, so the velocity (in contrast to the velocity of the natural numbers) doesn't really exist; your next step is not going to take you anywhere.
Edit:
I realize now that this probably has to do with the total ordering and mapping of the set of natural numbers to any countable sets. If you can't totally order the items in a set, or you can't create a method to determine what the next item is in a set, chances are it's uncountable.
G) Therefore, if a language is infinite, it has an uncountably infinite number of subsets. Each of these represents a problem.
Citation needed. You can't merely assume that every subset of an (infinite) language necessarily represents a distinct 'problem'. At the very least, you have to (separately) formalize the definition of 'problem' as much as Turing machines have been formalized.
Programmers (or at least, myself) don't often have to worry much about infinity in this way. When you place a finite box over any portion of the machine-representable real number line, you get a finite quantity of real numbers. =)
For example, a double precision variable has a finite number of possible values: at most 2^64.
Here's an example of a computable problem: at the start of a chess game, is it possible for white to force a win?
The number of possible moves and counter-moves is finite. All we have to do is build the trees and prune them. We haven't done this yet only because with current technology it would take billions of years.
Here is an example of a problem that is not computable: Given a two-dimensional view of a scene, construct a full three-dimensional model of the scene.
We do this all the time. (Make a room with a peephole in the door. Have someone furnish it. Look through the hole and describe everything you see.)
We do not compute the incomputable. We produce an approximate result (just as we compute and use an approximate value of pi, a number we can never write down exactly). We keep updating the result as more information comes in. That's what optical illusions are all about. When you look at the picture of "a vase, or is it two faces?" your visual system says "It's a vase. No. Wait. It's two faces. No. Wait. It's a vase." You see it switching back and forth between the two interpretations.
Just because something is not computable is no reason not to do it.
Conclusion...we can only solve an infinitesimally small fraction of all problems.
You must be a web designer.

Tree search based Game AI: How to avoid AI 'wandering'/'procrastination' with sparse rewards?

My game AI makes use of an algorithm that searches all possible future states based on the moves I can make (minimax / Monte Carlo-esque). It evaluates these states using a scoring system, picks the highest-scored final state and follows it.
This works well in most situations, but awfully when rewards are sparse. For example: there's a desirable collectable object that's 3 tiles to the right of me. The natural solution would be to go right->right->right.
But my algorithm searches 6 turns deep. And it will find many paths that eventually collect the object, including ones that take longer than 3 turns. It might for example find a path that's: up->right->down->right->right->down, collecting the object on turn 5 instead.
Since in both cases, the final leaf nodes detect the object as collected, it doesn't naturally prefer one or the other. So, instead of going right on turn 1, it might go up, or down, or left. This behavior will be repeated exactly on the next turn, so that it basically ends up dancing randomly in front of the collectable object, only luck will make it step on it.
That's clearly suboptimal and I want to fix it, but have run out of ideas how to handle this appropriately. Are there any solutions for this issue or is there any theoretical work that deals with handling this issue?
Solutions I've tried:
Make it value object collection more on earlier turns. While this works, to beat evaluator 'noise', the difference between turns must be quite high. Turn 1 must be rated higher than 2, turn 2 rated higher than 3, etc. The difference between turn 1 and 6 needs to be so high that it ends up making the behavior extremely greedy, which is not desirable in most situations. In an environment with multiple objects, it might end up choosing the path that grabs an object on turn 1, instead of the much better path that can grab objects on turn 5 and 6.
Assign the object as a target and value distance to it. If not done on a turn-by-turn basis, the original problem persists. If done on a turn-by-turn basis, the difference in importance required per turn once again makes it too greedy. This method also decreases flexibility and causes other issues. Target selection is not trivial and kind of ruins the point of a minimax-style algorithm.
Going much deeper in my searches so that it can always find a second object. This would cost so much computing power that I'd have to make concessions, like pruning paths much more aggressively. If I do so, I'll be back at the same problem since I won't know how to get it to prefer pruning the 5 turn version over the 3 turn version.
Give extra value to the plans laid out last turn. If it can at least follow the suboptimal path, there wouldn't be as much of an issue. Unfortunately, this once again has to be a pretty strong effect for it to work reliably, making it follow sub-optimal paths in all scenarios, hurting overall performance.
When weighting the outcome of the last step of your move, are you factoring in the number of moves needed to pick up the object?
I presume you are scoring each step of your move actions, giving +1 if the step results in picking up an object. This means that, in your example, I can pick up the object in 3 steps and get a +1 state of the playing field, but I can also do it in 4, 5, 6, ..., x steps and get the same +1 state. If only a single object is reachable at the depth you are searching, your algorithm will likely select one of those +1 states at random, giving you the behaviour above.
This could be solved by assigning a negative score to each move the AI must make. With a cost of -1 per move and +1 for the object, getting the object in 3 moves results in a score of -2, while getting it in 6 moves results in -5. This way, the AI clearly knows that it is preferable to get the object in the fewest moves, i.e., 3.
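Here is a minimal sketch of that scoring on a toy 1-D grid. The grid, the ±1 moves, and the cost/value constants are assumptions for illustration, not the asker's actual game; the point is only that the per-move penalty breaks the tie between the "collected" leaves.

    def best_score(pos, objects, depth, move_cost=1, object_value=1):
        """Depth-limited search on a toy 1-D grid.

        Every move made while an object is still uncollected costs move_cost, and
        collecting one is worth +object_value, so a path that picks the object up
        on turn 3 scores 1 - 3 = -2 while a 5-turn path scores 1 - 5 = -4: the tie
        between "collected" leaves is broken in favour of the shortest route."""
        if depth == 0 or not objects:
            return 0                               # nothing left to pay for or to collect
        best = float("-inf")
        for step in (-1, +1):                      # the only moves on a 1-D grid
            new_pos = pos + step
            reward = object_value if new_pos in objects else 0
            score = (reward - move_cost
                     + best_score(new_pos, objects - {new_pos}, depth - 1,
                                  move_cost, object_value))
            best = max(best, score)
        return best

    # Object 3 tiles to the right, searched 6 turns deep: the best achievable score
    # is 1 - 3 = -2 (collect on turn 3), so the planner now strictly prefers
    # right,right,right over any longer route to the same object.
    print(best_score(0, {3}, 6))                   # -2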

AI - what is included in the state

I'm taking my first course in AI, and I have to define some problems in my homework (not yet solve them, just supply a definition).
So I have to give these definitions for the Boolean satisfiability problem (SAT):
What is a state?
What is the initial state?
What is a final state?
What are the operators?
My question is: Should the formula be a part of the state?
Considerations so far:
The operator doesn't change it, and it's constant through the computation, so it's not.
If I do include it, then in theory the search space gets much bigger, since more states are possible, but in practice the formula can't be changed, so I get a big state and a branching factor that doesn't correspond to it.
It's varying from one execution to the next, so it should be a part of the state.
You need only really consider the varying parts of the problem to be a state when conducting a search such as this, although I'd say in this case it really comes down to how you define the problem.
The search space for a given run of the algorithm depends upon the input formula, but after that it is fixed, i.e. you are searching the space of bit vectors of length n, where n is the number of variables in the formula. So the formula is not part of the state because it does not vary.
The counter-claim is that you are searching in a larger space of formula-vector pairs, but as you cannot change the formula as part of the problem, this has not really increased the size of the search space. So I would not make the claim that "If I do include it, in theory the search space gets much bigger". It does not: the reachable states are the same, the branching is the same, and the space that must be explored to solve the problem is the same.
Given this, my answer would be that the formula is not part of the state but defines the nature of the state space. So the answers to your four questions will each be functionally dependent on the formula in some way, but the state itself depends only on the number of variables in the formula.
Hope that makes sense!
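As a concrete version of that framing (a sketch with an assumed clause representation, not the homework's required notation): the formula is fixed data that shapes the goal test, the state is just the assignment vector, and the operators flip one variable at a time.

    from itertools import product

    # Assumed representation: a CNF formula is a list of clauses, each clause a list
    # of integer literals, where +i means variable i and -i means "not variable i".
    # Example formula: (x1 or not x2) and (x2 or x3)
    formula = [[1, -2], [2, 3]]
    n_vars = 3

    def satisfied(formula, state):
        """state is a tuple of booleans, one per variable -- it is the entire search
        state; the formula stays fixed and only defines the goal test."""
        return all(any(state[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in formula)

    def successors(state):
        """Operators: flip exactly one variable of the current assignment."""
        for i in range(len(state)):
            yield tuple(not v if j == i else v for j, v in enumerate(state))

    initial_state = (False,) * n_vars              # one possible initial state: all False
    print(next(successors(initial_state)))         # one operator applied: (True, False, False)
    # Any satisfying assignment is a final state; for this tiny formula we can list them:
    print([s for s in product([False, True], repeat=n_vars) if satisfied(formula, s)])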
This is just a note for future readers - Not an answer
Vic Smith is right. Another way to look at the fact that in theory there are more states but in practice there are not (my second point) is to think of them as separate search spaces: for the formula "X or Y" there is one search space, and for "not X and Y" there is another, and they have no common nodes in the representation.
So it can vary from one execution to another, but still has the same "reachable" states, and same branching factor. And each execution has a different starting state.
