## Classify a language as Turing-recognizable or co-Turing-recognizable

I have this language:

`L = {<M> | M is a TM that accepts w whenever it accepts w^R}`

I was able to prove that this language is undecidable. But is this language Turing-recognizable (RE) or co-Turing-recognizable (coRE)?

A language is RE (Turing-recognizable) if some TM halt-accepts on exactly the strings in the language. A language is coRE (co-Turing-recognizable) if some TM halt-rejects on exactly the strings not in the language. For L to be RE, we would need to be able to confirm that a given TM accepts w^R whenever it accepts w. For L to be coRE, we would need to be able to confirm that a given TM accepts some w but not the corresponding w^R. It turns out L is neither RE nor coRE.
It is not RE because if a particular TM happens to accept the empty language, it vacuously satisfies the condition and therefore belongs to L, but there is no way to confirm this fact: we can never verify that a machine accepts nothing. A recognizer for our language would give us a recognizer for the TMs that accept the empty language, which is known to be impossible (that problem is not RE).

It is not coRE because if a particular TM happens to accept a language consisting of a single non-palindromic string, it therefore doesn't belong to L, but there is no way to confirm this fact either, since that would require verifying that the machine accepts nothing else. A recognizer for the complement of our language would allow us to recognize such TMs, again an impossibility.

## Related

### Is a primarily prime TM decidable?

A language L over alphabet Σ is primarily prime if and only if for every length l, the majority of strings of length l do belong to L if l is a prime number, but do not belong to L if l is a composite number. Let PriPriTM = {〈M〉 : L(M) is primarily prime and M is a TM}. Is PriPriTM Turing decidable?

This is a very complicated decision problem, but the answer is no: it cannot possibly be decidable whether a TM accepts a primarily prime language. Why? Some TMs accept primarily prime languages (consider a TM that accepts exactly the strings of prime length) and some do not (consider a TM accepting the complement of the former's language), so the property is nontrivial. The property is also semantic, in that it deals with which strings are in the language, rather than syntactic, dealing with the form of the TM itself: two TMs accepting the same language would always be treated identically by a decider for our problem. By Rice's theorem, then, the problem of deciding whether a TM accepts such a language is not computable.
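While the problem is undecidable for TMs in general, the "primarily prime" property itself is easy to check on a toy, fully decidable language. A minimal Python sketch (not part of the original answer; names like `in_L` and `majority_in_L` are invented for illustration): take L = {w : |w| is prime} over the alphabet {a, b}, the example language from above, and verify the majority condition at every small length we can enumerate.

```python
from itertools import product

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def in_L(w):
    # Toy decidable language: membership depends only on the length.
    return is_prime(len(w))

def majority_in_L(length, alphabet="ab"):
    """True iff more than half of the strings of this length are in L."""
    strings = ["".join(t) for t in product(alphabet, repeat=length)]
    return 2 * sum(in_L(w) for w in strings) > len(strings)

# The defining condition holds at every length we can feasibly check:
# the majority of strings is in L exactly when the length is prime.
ok = all(majority_in_L(l) == is_prime(l) for l in range(1, 10))
print(ok)  # True
```

For this particular L the check is all-or-nothing at each length, since membership depends only on |w|; the general definition allows languages where only a strict majority qualifies.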

### Turing machine decidability ambiguous cases

1) Does a Turing machine M that accepts the language L = {ε} accept no input? On the one hand I think this is false, because the empty word could count as an input, but on the other hand I think this could possibly be an undecidable problem.

2) Does every Turing machine whose language is decidable halt on every input? Same idea: intuitively I would have said yes, because of the definition of decidable, but something troubles me.

3) Is the language of palindromes decidable whatever the alphabet? For this one I have almost no doubt that the answer is "false", because with Rice's theorem we can prove that this problem is undecidable.

1) I am not sure how to parse this, but if a TM accepts the language consisting only of the empty string, it will eventually halt-accept when run on a blank tape. Whether that counts as an input depends on your definition of "input". I would count it as one, so I would answer "no".

2) The language consisting of only the empty string is decidable. However, we can write a TM that halt-accepts on the empty string only and goes into an infinite loop on all other inputs. What is meant by "whose language" is debatable, but for TMs that compute partial functions I would call the language of the TM the set of strings on which it halt-accepts, so I would answer "no".

3) It seems to me that, given an alphabet with n symbols, you can always construct a single-tape deterministic TM with O(n) states which halt-accepts on palindromes over that alphabet and halt-rejects all other strings, thus deciding the language of palindromes over that alphabet. I would answer "yes", as long as the terms have their usual meanings. Note that Rice's theorem does not apply here: it would apply to the problem of deciding whether a given TM accepts the language of palindromes, but actually deciding whether a string is a palindrome is of course possible (even PDAs can do it).
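The decision procedure in 3) is just a two-ended comparison; a single-tape TM implements it by zig-zagging across the tape and marking off matched end symbols. A minimal Python sketch of the same comparisons (an illustration, not a TM transition table):

```python
def is_palindrome(s):
    """Decide the palindrome language by comparing symbols from both
    ends -- the same comparisons a single-tape TM performs by zig-zagging
    across the tape and marking off matched end symbols."""
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False  # halt-reject
        i += 1
        j -= 1
    return True  # halt-accept (covers ε and single-character strings)

print(is_palindrome("abcba"))  # True
print(is_palindrome("abca"))   # False
```

On a single tape, each zig-zag pass costs a traversal, so the TM runs in O(n^2) steps; the alphabet size affects only the number of states, never decidability.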

### Proving a language is in RE/R/coRE

Let's say we have a function on Turing machines like this one:

f(M) = 1 if M(w) halts only when w is a palindrome of even length (for every w), and 0 otherwise.

How can one prove that the corresponding language belongs (or does not belong) to RE, R, or coRE? I know we can use a Turing reduction from the halting problem to prove that it does not belong to R. But what about RE/coRE?

A language is RE if we can halt-accept on every string in the language. A language is coRE if we can halt-reject on every string not in the language. R is the intersection of RE and coRE: a language is in R if we can halt-accept on strings in the language and halt-reject on strings not in it.

You already know that the language isn't in R. This can also be seen by Rice's theorem: halting only on palindromes of even length is a semantic property of the accepted language (namely, being a subset of EPAL), so the problem isn't decidable. This tells you that the language cannot be both RE and coRE, though it might still be neither.

Given a machine M, can we determine that it only accepts strings which are even-length palindromes? This seems unlikely. We would need a way to be sure that all accepted strings, of which there may be infinitely many, are even-length palindromes. We can't just find a counterexample and be done.

Given a machine M, can we determine that it doesn't only accept strings which are even-length palindromes? We sure can! We can interleave executions of copies of this machine so that arbitrarily many possible input strings get arbitrarily much computing time; if M accepts any particular string, we eventually find out, and if it accepts one that isn't an even-length palindrome, we can eventually tell.

So, this language:

- is coRE
- is not RE
- is not R
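The interleaving (dovetailing) argument can be sketched concretely. A minimal Python illustration (all names invented; `runs_within(w, t)` stands in for "M accepts w within t simulated steps"): for budgets t = 1, 2, 3, ..., give each of the first t candidate strings t steps of time, and report any accepted string that is not an even-length palindrome.

```python
from itertools import count, islice, product

def strings_in_order(alphabet):
    """Enumerate all strings over the alphabet, shortest first."""
    for n in count(0):
        for t in product(alphabet, repeat=n):
            yield "".join(t)

def is_even_palindrome(w):
    return len(w) % 2 == 0 and w == w[::-1]

def find_bad_accept(runs_within, alphabet="ab", max_budget=50):
    """Dovetail: for budgets t = 1, 2, ..., give each of the first t
    strings t steps of simulated time; report a string the machine
    accepts that is NOT an even-length palindrome, if one turns up."""
    for t in range(1, max_budget + 1):
        for w in islice(strings_in_order(alphabet), t):
            if runs_within(w, t) and not is_even_palindrome(w):
                return w
    return None  # inconclusive within the budget

# Toy stand-in for "M accepts w within t steps": this "machine" accepts
# only "aba" (an odd-length palindrome), after 7 simulated steps.
toy = lambda w, t: w == "aba" and t >= 7
print(find_bad_accept(toy))  # aba
```

A true recognizer for the complement would loop forever instead of giving up at `max_budget`; the cutoff is only so the sketch terminates. The key point is that a witness, if one exists, is found after finitely many (string, budget) pairs.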

### Define natural numbers in functional languages like Ada subtypes

In Ada, to define natural numbers you can write this:

```ada
subtype Natural is Integer range 0 .. Integer'Last;
```

This is type-safe and it is checked at compile-time. It is simple (one line of code) and efficient (it does not use recursion to define natural numbers, as many functional languages do). Is there any functional language that can provide similar functionality? Thanks

> This is type-safe and it is checked at compile-time.

As you already pointed out in the comments to your question, it is not checked at compile time. Neither is the equivalent functionality in Modula-2 or any other production-ready, general-purpose programming language. The ability to check constraints like this at compile time requires dependent types, refinement types or similar constructs. You can find those kinds of features in theorem provers like Coq or Agda, or in experimental/academic languages like ATS or Liquid Haskell.

Now, of the languages I mentioned, Coq and Agda define their Nat types recursively, so that's not what you want, and ATS is an imperative language. So that leaves Liquid Haskell (plus languages that I didn't mention, of course). Liquid Haskell is Haskell with extra type annotations, so it's definitely a functional language. In Liquid Haskell you can define a MyNat type (a type named Nat is already defined in the standard library) like this:

```haskell
{-@ type MyNat = {n:Integer | n >= 0} @-}
```

And then use it like this:

```haskell
{-@ fac :: MyNat -> MyNat @-}
fac :: Integer -> Integer
fac 0 = 1
fac n = n * fac (n - 1)
```

If you then try to call fac with a negative number as the argument, you'll get a compilation error. You will also get a compilation error if you call it with user input as the argument, unless you specifically check that the input is non-negative. And you would get a compilation error if you removed the fac 0 = 1 line, because then n on the next line could be 0, making n - 1 negative in the call fac (n - 1), so the compiler would reject that.

It should be noted that even with state-of-the-art type inference techniques, non-trivial programs in languages like this end up having quite complicated type signatures, and you'll spend a lot of time and effort chasing type errors through an increasingly complex jungle of signatures, with only incomprehensible error messages to guide you. So there's a price for the safety that features like these offer. It should also be pointed out that, in a Turing-complete language, you will occasionally have to write runtime checks for cases that you know can't happen, as the compiler can't prove everything even when you think it should.

Typed Racket, a typed dialect of Racket, has a rich set of numeric subtypes, and it knows about a fair number of closure properties (e.g., the sum of two nonnegative numbers is nonnegative, the sum of two exact integers is an exact integer, etc.). Here's a simple example:

```racket
#lang typed/racket

(: f : (Nonnegative-Integer Nonnegative-Integer -> Positive-Integer))
(define (f x y)
  (+ x y 1))
```

Type checking is done statically, but of course the typechecker is not able to prove every true fact about numeric subtypes. For example, the following function in fact only returns values of type Nonnegative-Integer, but the type rules for subtraction only allow TR to conclude a result type of Integer:

```racket
> (lambda: ([x : Nonnegative-Integer] [y : Nonnegative-Integer])
    (- x (- x y)))
- : (Nonnegative-Integer Nonnegative-Integer -> Integer)
#<procedure>
```

The Typed Racket approach to numbers is described in "Typing the Numeric Tower" by St-Amour et al. (PADL 2012). The usual link to the paper seems to be broken at the moment, but a cached rendering of the PDF can be found by searching for the title.

### Representing the strings we use in programming in math notation

Now, I'm a programmer who has recently discovered how bad he is at mathematics, and I've decided to focus on it a bit from this point forward, so I apologize if my question insults your intelligence. Does mathematics have the concept of strings as used in programming, i.e. a sequence of characters? As an example, say I wanted to translate the following into mathematical notation: let s be a string of n characters. The reason is that I would then want to use that representation to state other things about the string s, such as its length: len(s). How do you formally represent such a thing in mathematics?

Talking more practically, so to speak, let's say I wanted to mathematically explain such a function:

fitness(s, n) = 1 / |n - len(s)|

Or written in a more "programming-friendly" sort of way:

fitness(s, n) = 1 / abs(n - len(s))

I used this function to explain how the fitness function for a given GA works; the problem was about finding strings with 5 characters, and I needed the solutions sorted in ascending order according to their fitness score, given by the above function. So my question is: how do you represent the above pseudo-code in mathematical notation?

You can use the notation of language theory, which is used to discuss things like regular languages, context-free grammars, compiler theory, etc. A quick overview:

- A set of characters is known as an alphabet. You could write: "Let A be the ASCII alphabet, a set containing the 128 ASCII characters."
- A string is a finite sequence of characters. ε denotes the empty string.
- A set of strings is formally known as a language. A common statement is "Let s ∈ L be a string in language L."
- Concatenating alphabets produces sets of strings (languages). A represents all 1-character strings; AA, also written A^2, is the set of all two-character strings. A^0 is the set of all zero-length strings, and is precisely A^0 = {ε}. (It contains exactly one string: the empty string.)
- A* is special notation for the set of all strings over the alphabet A, of any length. That is, A* = A^0 ∪ A^1 ∪ A^2 ∪ A^3 ∪ ... . You may recognize this notation from regular expressions.
- For length, use absolute-value bars: the length of a string s is |s|.

So for your statement "let s be a string of n number of characters", you could write: "Let A be a set of characters and let s ∈ A^n be a string of n characters. The length of s is |s| = n."
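The notation maps directly onto code. A minimal Python sketch of the ideas above (not part of the original answer; the helper name `A_n` is invented for illustration):

```python
from itertools import product

A = {"a", "b"}  # an alphabet: a finite set of characters

def A_n(A, n):
    """A^n: the set of all strings of length n over alphabet A."""
    return {"".join(t) for t in product(sorted(A), repeat=n)}

print(A_n(A, 0) == {""})  # True: A^0 contains exactly the empty string ε
print(sorted(A_n(A, 2)))  # ['aa', 'ab', 'ba', 'bb']

s = "abba"
print(len(s))             # |s| = 4, i.e. s ∈ A^4
```

A* itself is infinite, so it can only be enumerated lazily, never built as a finite set; that is exactly why it gets its own notation.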

Mathematically, you have explained fitness(s, n) just fine, as long as len(s) is well-defined. In CS texts, a string s over a set S is defined as a finite ordered list of elements of S, and its length is often written |s|; but this is only notation, and it doesn't change the (mathematical) meaning behind your definition of fitness, which is pretty clear just as you've written it.
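As a sanity check, the function transcribes directly into code. A minimal Python sketch (not from the original answer); note that, as written, the formula is undefined when len(s) = n (division by zero), so an implementation would need to special-case a string of exactly the target length.

```python
def fitness(s, n):
    """The question's fitness function: 1 / |n - len(s)|.
    Undefined when len(s) == n (division by zero), so a GA would
    need to special-case an exact-length match."""
    return 1 / abs(n - len(s))

print(fitness("abc", 5))     # 0.5  (|5 - 3| = 2)
print(fitness("abcdef", 5))  # 1.0  (|5 - 6| = 1)
```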