Recursively enumerable sets and Turing machines - turing-machines

Let L1 be a recursive language. Let L2 and L3 be languages that are recursively enumerable but not recursive. Which of the following statements is not necessarily true? (A) L2 – L1 is recursively enumerable. (B) L1 – L3 is recursively enumerable (C) L2 ∩ L1 is recursively enumerable (D) L2 ∪ L1 is recursively enumerable

You're right, the answer is (B). You should find a concrete example of languages L1 (a recursive language) and L3 (a RE language) for which L1-L3 is not RE.
Below are proofs that statements (A), (C), and (D) hold. I'm using the fact that every recursive language is recursively enumerable, and the well-known closure properties of RE languages.
(A) L2 - L1 = L2 intersection (complement L1) is recursively enumerable because L1 is recursive, thus (complement L1) is recursively enumerable, and an intersection of RE languages is again a RE language. (In general, the complement of a RE language L is RE if and only if L is recursive.)
(B) L1 - L3 = L1 intersection (complement L3) need not be RE. (Exercise: find a counterexample, i.e., find concrete languages L1 (recursive) and L3 (RE) such that L1-L3 is not RE.)
(C) L2 intersection L1 is RE because both L1 and L2 are RE and we know that intersection of two RE languages is again a RE language.
(D) L2 union L1 is RE because both L1 and L2 are RE and we know that union of two RE languages is again a RE language.
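To make the closure arguments concrete, here is a minimal Haskell sketch (my own illustration, not part of the original answer) of the dovetailing idea behind "the union of two RE languages is RE": model a semi-decider's run on a fixed input as a stream of steps that ends in Accept exactly when the machine accepts, and interleave two runs one step at a time; the interleaving halts as soon as either run accepts, even if the other one runs forever.
-- A run of a semi-decider on one fixed input: a (possibly infinite)
-- sequence of steps that ends in Accept iff the machine accepts.
data Run = Accept | Step Run

-- Dovetail two runs; terminates exactly when at least one of them accepts.
unionRun :: Run -> Run -> ()
unionRun Accept   _        = ()
unionRun _        Accept   = ()
unionRun (Step a) (Step b) = unionRun a b

-- A machine that loops forever, and one that accepts after three steps.
loops, acceptsAtThree :: Run
loops          = Step loops
acceptsAtThree = Step (Step (Step Accept))

main :: IO ()
main = print (unionRun loops acceptsAtThree)   -- terminates and prints ()
Intersection is even simpler to see: simulate both machines and accept only once both have accepted; if either one runs forever, the word is simply never accepted, which is all a semi-decider has to guarantee.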


Related

What type systems can prevent goal suspension in logical languages?

From section 3.13.3 of the Curry tutorial:
Operations that residuate are called rigid, whereas operations that narrow are called flexible. All defined operations are flexible whereas most primitive operations, like arithmetic operations, are rigid since guessing is not a reasonable option for them. For example, the prelude defines a list concatenation operation as follows:
infixr 5 ++
...
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : xs ++ ys
Since the operation “++” is flexible, we can use it to search for a list satisfying a particular property:
Prelude> x ++ [3,4] =:= [1,2,3,4] where x free
Free variables in goal: x
Result: success
Bindings:
x=[1,2] ?
On the other hand, predefined arithmetic operations like the addition “+” are rigid. Thus, a call to “+” with a logic variable as an argument flounders:
Prelude> x + 2 =:= 4 where x free
Free variables in goal: x
*** Goal suspended!
Curry does not appear to guard against writing goals that will be suspended. What type systems can detect ahead of time whether a goal is going to be suspended?
What you've described sounds like mode checking, which generally checks what outputs will be available for a given set of inputs. You may want to look at the language Mercury, which takes mode checking quite seriously.
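The idea can be made a bit more tangible with a toy Haskell sketch (my own illustration; this is neither Curry nor Mercury, and the names Mode, Term and rigidAdd are made up): if the type system tracks whether an argument is already ground or still a free logic variable, then a call that would flounder, like applying a rigid "+" to an unbound variable, is rejected at compile time instead of suspending at run time.
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Instantiation state of an argument: fully known, or a free logic variable.
data Mode = Ground | Free

-- A value tagged at the type level with its instantiation state.
data Term (m :: Mode) a where
  Known   :: a -> Term 'Ground a
  Unknown :: Term 'Free a

-- A "rigid" primitive: the signature demands ground arguments, so a
-- floundering call simply does not type check.
rigidAdd :: Num a => Term 'Ground a -> Term 'Ground a -> Term 'Ground a
rigidAdd (Known x) (Known y) = Known (x + y)

-- rejected = rigidAdd Unknown (Known 2)   -- type error: 'Free is not 'Ground

main :: IO ()
main = case rigidAdd (Known 2) (Known (2 :: Int)) of
         Known n -> print n                -- prints 4
Mercury's mode system is considerably richer than this (modes are declared separately from types and also cover partial instantiation and determinism), but the compile-time flavour is the same.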

Coq : simulate function extensionality theorem

Intuitively, when two functions g and h are equal on every input x, we can imagine that g = h, and therefore replace all occurrences of f g with f h. This is what we call function extensionality. However, in Coq function extensionality does not hold by default, so if we need it, we have to add it as an axiom.
However, I'd like to avoid using this axiom, and instead find a theorem that is equivalent in practice, which would look like "for all f that are inductive functions that output an inductive type, and for all g and h that are extensionally equal, f g = f h". In particular, f cannot be the identity, because the identity function is not an inductive type.
For now, when I need to do something like this, I have to manually handle all the cases of f, and it can get quite long. Do you know whether I can write a theorem like this in pure Coq, or, if that is not possible, whether I can create a tactic that would generate this proof for every f?
Thank you!
One example on how to build a "functional" structure that has extensionality is the finfun datatype in the Mathematical Components library.
Given a finite type T, a finitely-supported function {ffun T -> A} is just a #|T|.-tuple A, that is to say, a table assigning to every element of T some element of A. You can overload application so that f x := nth f (rank x), where rank x returns the "index" of x : T.
It is the case that ffunP : (forall x, f x = g x) <-> f = g for all f, g : {ffun T -> A}. Why? The key point is that because we represent such functions by tables, Coq can check that this "canonical" representation satisfies extensionality.
You will have to perform similar tricks to embed your structure with extensionality, which is usually painful, so that's why few people bother and just add the axiom. In particular, note that in order to have an extensional representation, you will need to construct "canonical" representatives of each object in the corresponding equality class.
For example, given some kind of lambda terms and renaming, you would need to canonically rename terms and construct a type that indicates that such term is in canonical form. This is not easy in general.
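Outside of Coq, the same "functions as tables" trick is easy to see in miniature. Here is a small Haskell sketch (my own analogy to finfun, not MathComp code; FinFun, tabulate and apply are made-up names): once a function on a finite domain is stored as its table of values, pointwise equality coincides with equality of the representation, so extensionality holds by construction, which is exactly what ffunP expresses.
-- A function on the finite domain {0, ..., n-1}, stored as its table of values.
newtype FinFun a = FinFun [a] deriving (Eq, Show)

-- Build the table from an ordinary function and the domain size.
tabulate :: Int -> (Int -> a) -> FinFun a
tabulate n f = FinFun (map f [0 .. n - 1])

-- "Application" is just a table lookup.
apply :: FinFun a -> Int -> a
apply (FinFun xs) i = xs !! i

main :: IO ()
main = do
  let f = tabulate 5 (\x -> x * 2)
      g = tabulate 5 (\x -> x + x)
  print (f == g)      -- True: same table, hence "equal" as functions
  print (apply f 3)   -- 6
The price is the same as in the answer above: you must commit to a canonical representation (here, the table in domain order) and convert into it.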

Time complexity of multiple string concatenation (join) in functional programming languages

Am I right that the only algorithm that can be implemented in functional programming languages like Haskell to concatenate multiple strings (i.e. to implement a join that transforms a list of lines ["one", "two", "three"] into one line "onetwothree") has time complexity of order O(n^2), as described in this well-known post?
E.g. if I work with immutable strings, for example, in Python, and try to implement join, I'll get something like
def myjoin(list_of_strings):
    if not list_of_strings:
        return ""
    return list_of_strings[0] + myjoin(list_of_strings[1:])
Is it true that it is not possible to make it faster, for example, in Haskell?
First of all, Haskell is lazy: this means that if you write:
concat ["foo", "bar", "qux"]
it will not perform this operation until you request, for instance, the first character of the outcome. In that case it usually will not concatenate all the strings together, but - depending on how smartly the function is implemented - aim to do the minimal amount of work necessary to obtain that first character. If you request the first character but do not inspect it, you may even get an unevaluated expression such as succ 'f' instead of 'g', since again Haskell is lazy.
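As a tiny illustration (my own GHCi example, using the Prelude's concat): asking only for the first character of the result forces none of the later strings, so even an erroring element is never touched.
Prelude> head (concat ["foo", error "never needed"])
'f'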
But let's assume that we are interested in the resulting string, and want to know every character. We can implement concat as:
concat :: [[a]] -> [a]
concat [] = []
concat (x:xs) = x ++ concat xs
and (++) as:
(++) :: [a] -> [a] -> [a]
(++) [] ys = ys
(++) (x:xs) ys = x : (xs ++ ys)
Now that means that - given (:) works in O(1) - xs ++ ys works in O(a), where a is the length of the first list; the length b of the second list does not appear in the bound at all.
So now if we inspect concat, we see that if we concatenate k strings, we will perform k (++) operations, and the cost of each (++) is proportional to the length of its left argument. So if the sum of the lengths of the strings is n, concat is an O(n) algorithm.
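To see the contrast with a quadratic join, here is a small sketch (my own example; joinRight and joinLeft are made-up names): folding (++) to the right, which is essentially what the concat above does, costs O(n) in total, while folding it to the left re-walks the ever-growing prefix at every step and degrades to O(n^2).
import Data.List (foldl')

joinRight, joinLeft :: [[a]] -> [a]
joinRight = foldr  (++) []   -- s1 ++ (s2 ++ (s3 ++ ...)): each element is re-consed once
joinLeft  = foldl' (++) []   -- ((s1 ++ s2) ++ s3) ++ ...: the prefix is re-consed at every step

main :: IO ()
main = do
  let chunks = replicate 10000 "x"        -- 10000 one-character strings
  print (length (joinRight chunks))       -- fast: O(n)
  print (length (joinLeft  chunks))       -- noticeably slower: O(n^2)
So the quadratic behaviour of the Python snippet in the question comes from the fact that Python's + copies both operands into a fresh string at every step, whereas Haskell's (++) shares its second argument.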

Is it possible to lazily traverse a recursive data-structure with O(1) memory usage, tail-call optimized?

Let's say that we have a recursive data-structure, like a binary tree. There are many ways to traverse it, and they have different memory-usage profiles. For instance, if we were to simply print the value of each node, using pseudo-code like the following in-order traversal...
visitNode(node) {
  if (node == null) return;
  visitNode(node.leftChild);
  print(node.value);
  visitNode(node.rightChild);
}
...our memory usage would be constant, but due to the recursive calls, we would increase the size of the call stack. On very large trees, this could potentially overflow it.
Let's say that we decided to optimize for call-stack size; assuming that this language is capable of proper tailcalls, we could rewrite this as the following pre-order traversal...
visitNode(node, nodes = []) {
  if (node != null) {
    print(node.value);
    visitNode(nodes.head, nodes.tail + [node.left, node.right]);
  } else if (node == null && nodes.length != 0) {
    visitNode(nodes.head, nodes.tail);
  } else return;
}
While we would never blow the stack, we would now see heap usage increase linearly with respect to the size of the tree.
Let's say we were then to attempt to lazily traverse the tree - here is where my reasoning gets fuzzy. I think that even using a basic lazy evaluation strategy, we would grow memory at the same rate as the tailcall optimized version. Here is a concrete example using Scala's Stream class, which provides lazy evaluation:
sealed abstract class Node[A] {
  def toStream: Stream[Node[A]]
  def value: A
}
case class Fork[A](value: A, left: Node[A], right: Node[A]) extends Node[A] {
  def toStream: Stream[Node[A]] = this #:: left.toStream.append(right.toStream)
}
case class Leaf[A](value: A) extends Node[A] {
  def toStream: Stream[Node[A]] = this #:: Stream.empty
}
Although only the head of the stream is strictly evaluated, anytime the left.toStream.append(right.toStream) is evaluated, I think this would actually evaluate the head of both the left and right streams. Even if it doesn't (due to append cleverness), I think that recursively building this thunk (to borrow a term from Haskell) would essentially grow memory at the same rate. Rather than saying, "put this node in the list of nodes to traverse", we're basically saying, "here's another value to evaluate that will tell you what to traverse next", but the outcome is the same; linear memory growth.
The only strategy I can think of that would avoid this is having mutable state in each node declaring which paths have been traversed. This would allow us to have a referentially transparent function that says, "Given a node, I will tell you which single node you should traverse next", and we could use that to build an O(1) iterator.
Is there another way to accomplish O(1), tailcall optimized traversal of a binary tree, possibly without mutable state?
Is there another way to accomplish O(1), tailcall optimized traversal of a binary tree, possibly without mutable state?
As I stated in my comment, you can do this if the tree need not survive the traversal. Here's a Haskell example:
data T = Leaf | Node T Int T
inOrder :: T -> [Int]
inOrder Leaf = []
inOrder (Node Leaf x r) = x : inOrder r
inOrder (Node (Node l x m) y r) = inOrder $ Node l x (Node m y r)
This takes O(1) auxiliary space if we assume the garbage collector will clean up any Node that we just processed, so we effectively replace it by a right-rotated version. However, if the nodes we process cannot immediately be garbage-collected, then the final clause may build up an O(n) number of nodes before it hits a leaf.
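For concreteness, here is a small usage check of the rotation-based inOrder above (the example tree and main are my own additions, not part of the answer):
-- A small unbalanced tree whose in-order traversal should be [1,2,3,4].
exampleTree :: T
exampleTree = Node (Node Leaf 1 (Node Leaf 2 Leaf)) 3 (Node Leaf 4 Leaf)

main :: IO ()
main = print (inOrder exampleTree)   -- [1,2,3,4]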
If you have parent pointers, then it's also doable. Parent pointers require mutable state, though, and prevent sharing of subtrees, so they're not really functional. If you represent an iterator by a pair (cur, prev) that is initially (root, nil), then you can perform iteration as outlined here. You need a language with pointer comparisons to make this work, though.
Without parent pointers and mutable state, you need to maintain some data structure that at least tracks where the root of the tree is and how to get there, since you'll need such a structure at some point during in-order or post-order traversal. Such a structure necessarily takes Ω(d) space where d is the depth of the tree.
A fancy answer.
We can use free monads to get an efficient memory-utilization bound.
{-# LANGUAGE RankNTypes
, MultiParamTypeClasses
, FlexibleInstances
, UndecidableInstances #-}
import Control.Monad (join)

class Algebra f x where
  phi :: f x -> x
An algebra of a functor f is a function phi from f x to x, for some x. For example, any monad has an algebra at any object of the form m x:
instance (Monad m) => Algebra m (m x) where
  phi = join
A free monad can be constructed for any functor f (strictly speaking, perhaps only for certain functors, such as omega-cocomplete ones; but all Haskell types are polynomial functors, which are omega-cocomplete, so the statement certainly holds for all Haskell functors):
data Free f a = Free (forall x. Algebra f x => (a -> x) -> x)
runFree g (Free m) = m g
instance Functor (Free f) where
  fmap f m = Free $ \g -> runFree (g . f) m
wrap :: (Functor f) => f (Free f a) -> Free f a
wrap f = Free $ \g -> phi $ fmap (runFree g) f
instance (Functor f) => Algebra f (Free f a) where
  phi = wrap
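-- Not in the original answer: on GHC 7.10 and later a Monad instance also
-- needs an Applicative instance, so a minimal (assumed) definition is
-- sketched here, reusing Free and the fjoin defined below.
instance (Functor f) => Applicative (Free f) where
  pure a = Free ($ a)
  f <*> m = fjoin $ fmap (\g -> fmap g m) f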
instance (Functor f) => Monad (Free f) where
  return a = Free ($ a)
  m >>= f = fjoin $ fmap f m
fjoin :: (Functor f) => Free f (Free f a) -> Free f a
fjoin mma = Free $ \g -> runFree (runFree g) mma
Now we can use Free to construct free monad for functor T a:
data T a b = T a b b
instance Functor (T a) where
  fmap f (T a l r) = T a (f l) (f r)
For this functor we can define an algebra for the object [a]:
instance Algebra (T a) [a] where
  phi (T a l r) = l ++ (a : r)
A tree is the free monad over the functor T a:
type Tree a = Free (T a) ()
It can be constructed using the following functions (if Tree were defined as an ADT, these would be its constructors, so nothing extraordinary):
tree :: a -> Tree a -> Tree a -> Tree a
tree a l r = phi $ T a l r -- phi here is for Algebra f (Free f a)
-- and translates T a (Tree a) into Tree a
leaf :: Tree a
leaf = return ()
To demonstrate how this works:
bar = tree 'a' (tree 'b' leaf leaf) $ tree 'r' leaf leaf
buz = tree 'b' leaf $ tree 'u' leaf $ tree 'z' leaf leaf
foo = tree 'f' leaf $ tree 'o' (tree 'o' leaf leaf) leaf
toString = runFree (\_ -> [] :: String)
main = print $ map toString [bar, buz, foo]
As runFree traverses the tree to replace leaf () with [], the algebra for T a [a] is, in all contexts, the algebra that constructs a string representing the in-order traversal of the tree. Because the functor T a b constructs a new tree as it goes, this has the same memory-consumption characteristics as the solution quoted by larsmans: if the tree is not kept in memory, the nodes are discarded as soon as they are replaced by the string representing the whole subtree.
Given that you have references to nodes' parents, there's a nice solution posted here. Replace the while loop with a tail-recursive call (passing in last and current) and that should do it.
The built-in back-references allow you to keep track of traversal ordering. Without these, I can't think of a way to do it on a (balanced) tree with less than O(log(n)) auxiliary space.
I was not able to find an answer but I got some pointers. Go have a look at http://www.ics.uci.edu/~dan/pub.html, scroll down to
[33] D.S. Hirschberg and S.S. Seiden, A bounded-space tree traversal algorithm, Information Processing Letters 47 (1993)
Download the PostScript file; you may need to convert it to PDF (my ps viewer was unable to display it correctly). It mentions on page 2 (Table 1) a number of algorithms and additional literature.

How lazy is Haskell's `++`?

I'm curious how I should go about improving the performance of a Haskell routine that finds the lexicographically minimal cyclic rotation of a string.
import Data.List
swapAt n = f . splitAt n where f (a,b) = b++a
minimumrotation x = minimum $ map (\i -> swapAt i x) $ elemIndices (minimum x) x
I'd imagine that I should use Data.Vector rather than lists because Data.Vector provides in-place operations, probably just manipulating some indices into the original data. I shouldn't actually need to bother tracking the indices myself to avoid excess copying, right?
I'm curious how the ++ impacts the optimization though. I'd imagine it produces a lazy string thunk that never does the appending until the string gets read that far. Ergo, the a should never actually be appended onto the b whenever minimum can eliminate that string early, say because it begins with some later letter. Is this correct?
xs ++ ys adds some overhead in all the list cells from xs, but once it reaches the end of xs it's free — it just returns ys.
Looking at the definition of (++) helps to see why:
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)
i.e., it has to "re-build" the entire first list as the result is traversed. This article is very helpful for understanding how to reason about lazy code in this way.
The key thing to realise is that appending isn't done all at once; a new linked list is incrementally built by first walking through all of xs, and then putting ys where the [] would go.
So, you don't have to worry about reaching the end of b and suddenly incurring the one-time cost of "appending" a to it; the cost is spread out over all the elements of b.
Vectors are a different matter entirely; they're strict in their structure, so even examining just the first element of xs V.++ ys incurs the entire overhead of allocating a new vector and copying xs and ys to it — just like in a strict language. The same applies to mutable vectors (except that the cost is incurred when you perform the operation, rather than when you force the resulting vector), although I think you'd have to write your own append operation with those anyway. You could represent a bunch of appended (immutable) vectors as [Vector a] or similar if this is a problem for you, but that just moves the overhead to when you flatten it back into a single Vector, and it sounds like you're more interested in mutable vectors.
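Two quick GHCi examples of this pay-as-you-go behaviour (my own, not from the answer). A lexicographic comparison that is decided by the first character never forces the appended tail, which is exactly why minimum can discard a rotation early, and taking a prefix of an append never touches the right-hand list either.
Prelude> ("ab" ++ error "never forced") < "b"
True
Prelude> take 3 ([1,2,3] ++ error "never forced")
[1,2,3]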
Try
minimumrotation :: Ord a => [a] -> [a]
minimumrotation xs = minimum . take len . map (take len) $ tails (cycle xs)
  where
    len = length xs
I expect that to be faster than what you have, though index-juggling on an unboxed Vector or UArray would probably be faster still. But is it really a bottleneck?
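A quick GHCi sanity check of this version (my own example, assuming the definition above is in scope):
Prelude> minimumrotation "cabcab"
"abcabc"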
If you're interested in fast concatenation and a fast splitAt, use Data.Sequence.
I've made some stylistic modifications to your code, to make it look more like idiomatic Haskell, but the logic is exactly the same, except for a few conversions to and from Seq:
import qualified Data.Sequence as S
import qualified Data.Foldable as F
minimumRotation :: Ord a => [a] -> [a]
minimumRotation xs = F.toList
                   . F.minimum
                   . fmap (`swapAt` xs')
                   . S.elemIndicesL (F.minimum xs')
                   $ xs'
  where xs' = S.fromList xs
        swapAt n = f . S.splitAt n
          where f (a,b) = b S.>< a
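For reference (my own note, not part of the answer): Data.Sequence is a finger tree, so S.splitAt runs in O(log(min(i, n - i))) and (S.><) in O(log(min(n1, n2))), which is what makes this approach attractive for large inputs. A quick GHCi check of the function above:
Prelude> minimumRotation "cabcab"
"abcabc"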
