Check whether two languages are Turing-recognizable or co-Turing-recognizable

I have these 2 languages
A = {<M> | M is a TM and L(M) contains exactly n strings }
B = {<N> | N is a TM and L(N) contains more than n strings }
I believe that both are undecidable, but I am not sure whether they are Turing-recognizable or co-Turing-recognizable.

B is Turing-recognizable, since we can interleave (dovetail) executions of N on all possible inputs. If more than n of the running instances of N ever halt and accept, we halt and accept.
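As a rough illustration of that dovetailing (not a real TM simulator), here is a minimal Python sketch; the step(N, w, k) helper, which is assumed to return True iff N accepts w within k steps, and the binary alphabet are stand-ins introduced only for the example.

    from itertools import count, product

    def recognize_B(N, n, step, alphabet=("0", "1")):
        """Semi-decider for B: halt-accept iff L(N) contains more than n strings.

        `N` is some encoding of a Turing machine and `step(N, w, k)` is a
        hypothetical helper assumed to return True iff N accepts w within k
        steps; neither is a real API, they just mirror the argument above."""
        accepted = set()
        for k in count(1):                  # round k: k steps on strings of length <= k
            words = ("".join(t) for length in range(k + 1)
                     for t in product(alphabet, repeat=length))
            for w in words:
                if w not in accepted and step(N, w, k):
                    accepted.add(w)
                    if len(accepted) > n:
                        return True         # more than n accepted strings found
        # If |L(N)| <= n, the loop never ends; that is fine for a recognizer.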
A cannot be Turing-recognizable: if it were, the language B' = {<N> | N is a TM and L(N) contains at most n strings} would also be Turing-recognizable (interleave recognizers for "exactly 0", "exactly 1", ..., "exactly n" strings, which the assumption gives us for each k ≤ n, and halt-accept if any of them does). But B' is the complement of B, and B is Turing-recognizable, so B' would be both Turing-recognizable and co-Turing-recognizable, i.e. decidable, and B would be decidable too. That contradicts the undecidability of B (a non-trivial semantic property, by Rice's theorem).
If A were co-Turing-recognizable (take n ≥ 1; for n = 0, A is the emptiness problem, which is co-Turing-recognizable), we would have a recognizer R for the set of machines whose languages contain a number of strings different from n, and we could use R to recognize the complement of the halting problem. In particular, let n = 1. Given a machine M and an input x, construct a machine M' that rejects every input except one fixed string (say epsilon); on that one input, M' runs M on x and accepts iff M halts. Then L(M') contains exactly 1 string if M halts on x, and 0 strings otherwise.
Running R on M' therefore halt-accepts exactly when M does not halt on x, so R would give us a recognizer for the complement of the halting problem, which does not exist. This generalizes from n = 1 to n = k: let M' accept k - 1 fixed strings unconditionally and one further fixed string iff M halts on x.
Therefore A is not co-Turing-recognizable, and by the argument above it is not Turing-recognizable either.

Related

Is a primarily prime TM decidable?

A language L over alphabet Σ is primarily prime if and only if for every length l, the majority of strings of length l do belong to L if l is a prime number, but do not belong to L if l is a composite number. Let PriPriTM = {〈M〉 : L(M) is primarily prime and M is a TM}.
Is PriPriTM Turing decidable?
This looks like a very complicated decision problem, but the answer is no: it cannot be decidable whether a TM accepts a primarily prime language. Why? Some TMs accept primarily prime languages (consider a TM that accepts exactly the strings of prime length) and some do not (consider a TM accepting the complement of the former's language). The property is semantic, in that it concerns which strings are in the language, rather than syntactic, concerning the form of the TM itself; in other words, two TMs accepting the same language would always be treated identically by a decider for our problem. By Rice's theorem, then, the problem of deciding whether a TM accepts such a language is undecidable.

How to decide if a language is in R, RE, or coRE

I have these three languages, and I don't know how to decide whether each language is in R, RE, or coRE:
L1={<M>| epsilon belongs to L(M)}
L2={<M><w>|M doesn't accept any prefix of w}
L3={<M>|there exists w where M accepts all the prefixes of w}
For the first two, a technique called dovetailing can help you show that L1 is recursively enumerable and that the complement of L2 is recursively enumerable.
For L_1:
Given a Gödel numbering of all Turing machines, compute
step 1 of M1(eps),
then step 1 of M2(eps), then step 2 of M1(eps),
then step 1 of M3(eps), step 2 of M2(eps), step 3 of M1(eps),
...
In other words, sweep the lower-left triangle of the coordinate system whose axes are "number of steps" and "Turing machine number x".
If epsilon is in L(Mx), then it is accepted within some number n of steps. With this method you will detect that when you reach coordinate [x, n]. This holds for every [x, n], so you can enumerate all such machines this way.
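As a small illustration of the traversal order only (not of TM simulation itself), here is a runnable Python generator that says which machine to advance by one step at each moment; machine indices are just integers.

    import itertools

    def dovetail_order():
        """Yield (machine_index, step_number) pairs in the triangular
        dovetailing order described above: in round r, machine r gets its
        1st step, machine r-1 its 2nd step, ..., machine 1 its r-th step."""
        for r in itertools.count(1):
            for i in range(1, r + 1):
                yield (r - i + 1, i)

    # First few entries: (1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3), ...
    print(list(itertools.islice(dovetail_order(), 6)))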
Since a word has only finitely many prefixes, you can apply the same method to L2 by sweeping a coordinate system like the one above for each prefix (interleaved, not one after the other) and accepting as soon as M accepts some prefix. This shows that the complement of L2 ("M accepts some prefix of w") is enumerable, i.e. L2 is co-enumerable.
For L3: if there exists a w such that M accepts all prefixes of w, then the same already holds for some string of length at most 1 (take the prefix of w of length at most 1; its prefixes are among w's prefixes). So it suffices to check the finitely many strings of length at most 1, dovetailing just as above; hence L3 is enumerable.
As to the recursiveness of the three languages, read for example this answer which treats your L1.

Determining whether the following language is decidable

{⟨M,N⟩ | All strings in L(M)∩L(N) begin with 110.}
I think that this language is decidable. We can build a Turing machine that takes ⟨M, N⟩ as input. For every string in L(M)∩L(N), if the string starts with 110, then after the first 3 symbols we halt and accept; if the first three symbols are not 110, we halt and reject. I am unsure what we do if the string is not in L(M)∩L(N).
Also, overall I am unsure whether my Turing machine actually works. Could I get some feedback on this?
If M and N are Turing machines, then this language is not decidable. If it were, we could fix N to be a TM that accepts all strings, and we would then have a decider for {⟨M⟩ | all strings in L(M) begin with 110}. That is not decidable: the property holds for some TMs and fails for others, and it is semantic in that it depends only on the strings in the language rather than on the machine itself, so Rice's theorem applies.

Proving a decision problem is undecidable

I understand that HP is an undecidable problem because of the diagonalization argument.
In my book (Kozen), the first example of a reduction from the halting problem concerns deciding whether a given machine accepts the empty string ε.
from my book:
Suppose we could decide whether a given machine accepts ε. We could then
decide the halting problem as follows. Say we are given a Turing machine
M and string x, and we wish to determine whether M halts on x. Construct
from M and x a new machine M' that does the following on input y:
(i) erases its input y;
(ii) writes x on its tape (M' has x hard-wired in its finite control);
(iii) runs M on input x (M' also has a description of M hard-wired in its
finite control);
(iv) accepts if M halts on x.
Here already, numerous questions come to my mind. M' is not based (as far as the text tells at least) on the actual machine that decides whether ε is accepted or not.
Why do we erase the input y? Is x in M' arbitrary? And the biggest confusion comes from my question: Why can't I prove any decision problem this way: Make a machine M' that erases its input, writes x on the tape, runs M on the input x and accepts if M halts on x?
I'm trying to understand the relation between the assumed decider for "does a machine accept ε?" and the machine M' given by the book, but I can't seem to understand it, and neither can my fellow students.
Erasing the input is just to show that it is irrelevant. You could just as well leave it there and write x after it.
The x in M' is not arbitrary: it is the specific string from the instance we want to decide ("given a Turing machine M and a string x, determine whether M halts on x"), hard-wired into the finite control of M'.
The new machine M' does the same thing regardless of its input, so it either accepts ALL inputs or NONE. In particular, it accepts the empty string iff M halts on x. It is important to note that we are talking about a single computation of M but about all the computations of M'.
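To make the construction concrete, here is a minimal Python sketch of the reduction; run(M, x) (simulate M on x, returning only if M halts) and accepts_epsilon (the assumed decider for "does this machine accept the empty string?") are hypothetical stand-ins for the objects in the argument, not real APIs.

    def make_M_prime(M, x, run):
        """Build M' from M and x.  `run(M, x)` is a hypothetical helper that
        simulates M on x and returns only if M halts (it may loop forever)."""
        def M_prime(y):
            # (i)-(ii): the input y is ignored entirely; x is 'hard-wired' here.
            run(M, x)          # (iii): run M on x; loops forever if M does
            return "accept"    # (iv): reached only if M halts on x
        return M_prime

    def halts(M, x, accepts_epsilon, run):
        """Decide the halting problem, *assuming* a decider accepts_epsilon
        for 'does this machine accept the empty string?' existed."""
        M_prime = make_M_prime(M, x, run)
        # M' accepts every input (including epsilon) iff M halts on x,
        # and accepts nothing otherwise, so:
        return accepts_epsilon(M_prime)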

Finding the minimum number of swaps to convert one string to another, where the strings may have repeated characters

I was looking through a programming question, when the following question suddenly seemed related.
How do you convert one string into another using as few swaps as possible? The strings are guaranteed to be interconvertible (they contain the same multiset of characters; this is given), but characters can be repeated. I found web results on the same question, but only for the case where no character is repeated.
Any two characters in the string can be swapped.
For instance: "aabbccdd" can be converted to "ddbbccaa" in two swaps, and "abcc" can be converted to "accb" in one swap.
Thanks!
This is an expanded and corrected version of Subhasis's answer.
Formally, the problem is: given an n-letter alphabet V and two m-letter words, x and y, for which there exists a permutation p such that p(x) = y, determine the least number of swaps (permutations that fix all but two elements) whose composition q satisfies q(x) = y. Treating m-letter words as maps from the set {1, ..., m} to V, and p and q as permutations of {1, ..., m}, the action p(x) is defined as the composition in which p is applied first and then x.
The least number of swaps whose composition is p can be expressed in terms of the cycle decomposition of p. When j_1, ..., j_k are pairwise distinct elements of {1, ..., m}, the cycle (j_1 ... j_k) is the permutation that maps j_i to j_{i+1} for i in {1, ..., k - 1}, maps j_k to j_1, and maps every other element to itself. The permutation p is the composition of every distinct cycle (j p(j) p(p(j)) ... j'), where j is arbitrary and p(j') = j. The order of composition does not matter, since each element appears in exactly one of the composed cycles. A k-element cycle (j_1 ... j_k) can be written as the product (j_1 j_k) (j_1 j_{k-1}) ... (j_1 j_2) of k - 1 swaps. In general, every permutation can be written as a composition of m minus c swaps, where c is the number of cycles in its cycle decomposition, and a straightforward induction proof shows that this is optimal.
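As a concrete check of that formula, here is a small runnable Python helper for the case where the permutation p is given explicitly (0-indexed, with p[i] the image of i); this is my own illustration, not part of the original answer.

    def min_swaps_for_permutation(p):
        """Minimum number of swaps whose composition equals the permutation p,
        given as a 0-indexed list: p[i] is the image of i.
        Equals len(p) minus the number of cycles in p's cycle decomposition."""
        seen = [False] * len(p)
        cycles = 0
        for i in range(len(p)):
            if not seen[i]:
                cycles += 1
                j = i
                while not seen[j]:      # walk the cycle containing i
                    seen[j] = True
                    j = p[j]
        return len(p) - cycles

    # Example: p = (0 1 2)(3 4) as a list -> needs (3 - 1) + (2 - 1) = 3 swaps.
    print(min_swaps_for_permutation([1, 2, 0, 4, 3]))  # 3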
Now we get to the heart of Subhasis's answer. Instances of the asker's problem correspond one-to-one with Eulerian digraphs (for every vertex, in-degree equals out-degree) G with vertex set V and m arcs labeled 1, ..., m, where for each j in {1, ..., m}, the arc labeled j goes from y(j) to x(j). In terms of G, the problem is to determine the maximum number of parts in a partition of the arcs of G into directed cycles. (Since G is Eulerian, such a partition always exists.) This is because the permutations q with q(x) = y are in one-to-one correspondence with such partitions: for each cycle (j_1 ... j_k) of q, there is a part consisting of the arcs labeled j_1, ..., j_k.
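Here is a short runnable sketch of that construction in Python; representing the digraph as a plain list of labeled arcs (and using 0-based labels) is my own choice for the example.

    from collections import Counter

    def build_arc_digraph(x, y):
        """Return the labeled arcs of the Eulerian digraph G:
        for each position j, one arc labeled j from y[j] to x[j].
        Positions where x and y agree become self-loops (1-cycles)."""
        assert Counter(x) == Counter(y), "x and y must be anagrams"
        return [(j, y[j], x[j]) for j in range(len(x))]   # (label, tail, head)

    # Example from the question above.
    print(build_arc_digraph("aabbccdd", "ddbbccaa"))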
The problem with Subhasis's NP-hardness reduction is that arc-disjoint cycle packing on Eulerian digraphs is a special case of arc-disjoint cycle packing on general digraphs, so an NP-hardness result for the latter has no direct implications for the complexity status of the former. In very recent work (see the citation below), however, it has been shown that, indeed, even the Eulerian special case is NP-hard. Thus, by the correspondence above, the asker's problem is as well.
As Subhasis hints, this problem can be solved in polynomial time when n, the size of the alphabet, is fixed. Since there are O(n!) distinguishable cycles when the arcs are unlabeled, we can use dynamic programming on a state space of size O(m^n), the number of distinguishable subgraphs. In practice, that might be sufficient for (say) a binary alphabet, but if I were to try to solve this problem exactly on instances with large alphabets, I would likely try branch and bound, obtaining bounds by using linear programming with column generation to pack cycles fractionally.
@article{DBLP:journals/corr/GutinJSW14,
  author    = {Gregory Gutin and Mark Jones and Bin Sheng and Magnus Wahlstr{\"o}m},
  title     = {Parameterized Directed $k$-Chinese Postman Problem and $k$ Arc-Disjoint Cycles Problem on Euler Digraphs},
  journal   = {CoRR},
  volume    = {abs/1402.2137},
  year      = {2014},
  ee        = {http://arxiv.org/abs/1402.2137},
  bibsource = {DBLP, http://dblp.uni-trier.de}
}
You can construct the "difference" strings S and S', i.e. the strings consisting of the characters at the positions where the two strings differ; e.g. for acbacb and abcabc these are cbcb and bcbc. Let us say they contain n characters each.
You can now construct a "permutation graph" G which has n nodes and an edge from i to j if S[i] == S'[j]. In the case where all characters are unique, it is easy to see that the required number of swaps is (n - number of cycles in G), which can be found in O(n) time.
However, when characters may be duplicated, this reduces to the problem of partitioning the edges of a directed graph into the largest number of cycles, which, I think, is NP-hard (e.g. see http://www.math.ucsd.edu/~jverstra/dcig.pdf).
In that paper a few greedy algorithms are pointed out, one of which is particularly simple (a rough sketch follows below):
At each step, find a minimum-length cycle in the graph (e.g. see "Find cycle of shortest length in a directed graph with positive weights").
Delete it.
Repeat until no edges remain.
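Below is a rough, unoptimized Python sketch of this greedy procedure, applied to the character multigraph from the corrected formulation above; since the greedy cycle packing need not be maximum, the swap count it yields is an upper bound on the true minimum (it happens to be tight on these small examples).

    from collections import Counter, defaultdict, deque

    def build_multigraph(s, t):
        """One arc t[i] -> s[i] for every position where s and t differ
        (the character multigraph from the corrected formulation above)."""
        g = defaultdict(Counter)
        for a, b in zip(s, t):
            if a != b:
                g[b][a] += 1
        return g

    def shortest_cycle(g):
        """Return the vertex list of a shortest directed cycle in the
        multigraph g (adjacency Counters), or None if g has no cycle."""
        best = None
        for s in list(g):
            parent = {s: None}          # doubles as the BFS 'visited' set
            q = deque([s])
            while q:
                u = q.popleft()
                for v, mult in g[u].items():
                    if mult == 0:
                        continue
                    if v == s:          # cycle s -> ... -> u -> s found
                        path = []
                        w = u
                        while w is not None:
                            path.append(w)
                            w = parent[w]
                        path.reverse()  # vertices s, ..., u in order
                        if best is None or len(path) < len(best):
                            best = path
                    elif v not in parent:
                        parent[v] = u
                        q.append(v)
        return best

    def greedy_min_swaps(s, t):
        """Greedy heuristic: repeatedly remove a shortest cycle and count the
        cycles; (mismatches - cycles) is an upper bound on the true minimum."""
        g = build_multigraph(s, t)
        mismatches = sum(1 for a, b in zip(s, t) if a != b)
        cycles = 0
        while True:
            cyc = shortest_cycle(g)
            if cyc is None:
                return mismatches - cycles
            cycles += 1
            for u, v in zip(cyc, cyc[1:] + cyc[:1]):
                g[u][v] -= 1            # remove one copy of each arc on the cycle

    print(greedy_min_swaps("aabbccdd", "ddbbccaa"))  # 2
    print(greedy_min_swaps("abcc", "accb"))          # 1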
However, there may be efficient algorithms utilizing the properties of your case (the only one I can think of is that your graphs will be K-partite, where K is the number of unique characters in S). Good luck!
Edit:
Please refer to David's answer for a fuller and correct explanation of the problem.
Do an A* search (see http://en.wikipedia.org/wiki/A-star_search_algorithm for an explanation) for the shortest path through the graph of equivalent strings from one string to the other. Use the Levenshtein distance / 2 as your cost heuristic.
A hash map that allows duplicate keys (i.e. a multimap) can be used for a simple greedy approach.
Let the strings be s1 and s2. Iterate through both strings and, wherever a mismatch occurs, insert the character of s1 as key and the character of s2 as value into the hash map.
After this, initialize the result to zero.
Then, while the hash map is not empty, do the following:
Pick any key k and find its value v.
Look up v as a key in the hash map; if the value found there equals k, increment the result by 1 and remove both entries, k and v, from the hash map.
If the value found is not equal to k, remove only the key k from the hash map and increment the result.
result then holds the output of this greedy pairing. Note that it only pairs up direct two-cycles (a k -> v mismatch with a matching v -> k mismatch); mismatches that form longer cycles are charged one swap each, so in general this is an upper bound on the minimum number of swaps rather than the exact value (e.g. it reports 3 for converting abc to bca, which needs only 2 swaps).
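Here is a small runnable Python sketch of that greedy pairing, using a Counter of mismatch pairs instead of a literal multimap (my own representation choice); as noted above, it returns an upper bound on the minimum number of swaps.

    from collections import Counter

    def greedy_pair_count(s1, s2):
        """Greedy pairing described above: opposite mismatches (a->b paired
        with b->a) cost one swap per pair; every unpaired mismatch is charged
        one swap.  This is an upper bound on the true minimum."""
        pairs = Counter((a, b) for a, b in zip(s1, s2) if a != b)
        result = 0
        for (a, b) in list(pairs):
            cnt = pairs[(a, b)]                     # re-read, may already be reduced
            matched = min(cnt, pairs[(b, a)])       # opposite mismatches available
            if matched:
                result += matched                   # one swap per matched pair
                pairs[(a, b)] -= matched
                pairs[(b, a)] -= matched
        return result + sum(pairs.values())         # leftovers: one swap each

    print(greedy_pair_count("aabbccdd", "ddbbccaa"))  # 2 (optimal here)
    print(greedy_pair_count("abc", "bca"))            # 3 (the optimum is 2)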
