How to prove that the language $E_{tm}$ is NP-hard - turing-machines

Consider the language $E_{tm} = \{ \langle M \rangle : M \text{ is a Turing machine that accepts nothing} \}$.
I am not sure how to even start.
My idea is to provide a poly-time reduction from some NP-complete problem.
What I don't understand is this: $E_{tm}$ is not decidable, but isn't everything in the NP-hard class decidable?

solution:
Definition: A problem is NP-hard if all problems in NP are polynomial-time reducible to it, even though it may not be in NP itself (p. 326, Sipser; the only definition our book has).
If we show that every language L' in NP can be poly-time reduced to E_tm, this will prove that E_tm is NP-hard.
Since L' is in NP, by definition there exists a TM (an NTM, but since they are equivalent in power I write TM) M' that decides L'.
The reduction, given an input w, constructs a TM M_w such that
on arbitrary input x:
  if x = w:
    run M' on w; if it accepts => reject;
    if it rejects => accept;
  else reject.
Therefore M' accepts w iff M_w rejects every input.
Let's confirm that. First assume that M' accepts w; then M_w rejects every input, therefore L(M_w) is empty. Now assume that M' rejects w; then M_w accepts w, therefore L(M_w) is not empty.
Note that constructing M_w takes polynomial time: its description just hard-codes w together with the description of M'.
That completes the proof.
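The construction can be sanity-checked with a toy model in OCaml, treating a decider as a plain function from strings to bool (a stand-in for an actual TM description; the model and all names here are mine, not part of the proof):

```ocaml
(* [construct_mw m' w] plays the role of the machine built by the
   reduction: on input x, it flips the verdict of m' on w when x = w,
   and rejects every other input. *)
let construct_mw (m' : string -> bool) (w : string) : string -> bool =
  fun x ->
    if x = w then not (m' w)   (* flip the verdict of M' on w *)
    else false                 (* reject every other input *)
```

With this model, the machine built by construct_mw accepts some input exactly when the original decider rejects w, matching the iff in the proof.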

Related

Prove that this language is undecidable

Is the following language L undecidable?
L = {M | M is a Turing machine description and there exists an input x of length k such that M halts after at most k steps}
I think it is but I couldn't prove it. I tried to think of a reduction from the halting problem.
Review: An instance of the halting problem asks whether Turing machine N halts on input y. The problem is known to be undecidable (but semidecidable).
Your language L is indeed undecidable. This can be shown by reducing the halting problem to L:
For the halting problem instance (N, y), create a new machine M for the L problem.
On input x, M simulates N on y for length(x) steps.
If the simulation halted within that number of steps, then M halts. Otherwise, M deliberately goes into an infinite loop.
This reduction is valid because:
If (N, y) does halt, say after s steps, then M halts on every input of length at least s. Moreover, the simulation can be arranged (e.g. by advancing one input cell per simulated step, with constant overhead per step) so that on sufficiently long inputs x, M itself halts within length(x) steps; thus M is in L.
Otherwise (N, y) does not halt; then M will not halt on any input string, no matter how long it is, thus M is not in L.
Finally, the halting problem is undecidable, therefore L is undecidable.
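The step-bounded simulation at the heart of this reduction can be sketched in OCaml, modeling a machine abstractly as a step function over configurations (the type and the names are mine, not part of the answer):

```ocaml
(* A machine is a step function over some configuration type plus a
   halting test. [run_bounded] simulates at most [fuel] steps and
   reports whether the machine halted within that budget -- this is
   the check the constructed M performs with fuel = length of input. *)
type 'c machine = { step : 'c -> 'c; halted : 'c -> bool }

let rec run_bounded (m : 'c machine) (conf : 'c) (fuel : int) : bool =
  if m.halted conf then true
  else if fuel = 0 then false
  else run_bounded m (m.step conf) (fuel - 1)
```

For example, a countdown machine that halts at 0 halts within 3 steps from configuration 3, but not within 2.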

How to define unspecified constants in Coq

My question is how to define unspecified constants in Coq.
To make clear what I mean, assume the following toy system:
I want to define a function f : nat -> nat which has the value 0 at all but one place w, where it has the value 1.
The place w shall be a parameter of the system.
All proofs in the system can assume that w is fixed but arbitrary.
My idea was to introduce
Parameter w:nat.
But I get stuck defining f(x), because I don't have a clue how to match x against w.
What would be the right way to handle this?
Or is using w as a Parameter the wrong way to go about it?
(This is NOT a homework question)
This is how I'd do it:
Require Import Arith.
Parameter w : nat.
Definition f (n : nat) := if beq_nat n w then 1 else 0.
When proving properties about f you can then use lemmas stating that beq_nat n w indeed decides whether n = w (in recent versions of Coq, beq_nat is known as Nat.eqb). You can find them by using e.g.
SearchAbout beq_nat.

Non-deterministic Turing machine

I am new to NDTMs, but I do understand the concept of a Turing machine. When it comes to NDTMs I get a little confused. I am supposed to develop an NDTM for the alphabet {a, b, c} and
L = {w ∈ Σ* | ∃v ∈ Σ*, ∃n >= 2 with w = v^n}
First of all, I want to know how to read L; for example, what is the meaning of ∃?
I do understand that an NDTM gives two possibilities for one outcome, for example for a:
we would have "with a" and "without a", if I am correct. Can someone help me figure this out?
This should be marked as "Homework" I think.
∃ is "there exists"
Σ is "the set of symbols in the language" ({a, b, c} in this case)
∈ is "element of"
Now that we have that, we can read this language. So L is the set of words w in {a, b, c}* such that there exists a word v and there exists an n >= 2 such that w is v repeated n times. E.g. ababab = (ab)^3 ∈ L.
Now you want to come up with a Turing machine, M, to recognize this language, so you have to consider:
When do we reject a word (what is our rejection state, what is on the tape)?
When do we accept a word (what is our accepting state, what is on the tape)?
How do we guarantee that M terminates?
We can see that a is not in L: n >= 2 implies that the length of v^n is at least 2|v|, so no single symbol can be written as v^n (the empty string is the outlier, since ε = ε^2 ∈ L). Similarly for b and c. With that consideration and the knowledge that n >= 2, figure out which words are not accepted (e.g. consider b, abc, cab, cca, etc.).
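A deterministic membership check for L may help build intuition: it tries every candidate period |v|, which is exactly what the NDTM gets to guess in one nondeterministic step. (This OCaml sketch and its names are mine, not part of the exercise.)

```ocaml
(* [in_l w] checks whether w = v^n for some v and some n >= 2, by
   trying every period length d that divides |w| with |w| / d >= 2. *)
let in_l (w : string) : bool =
  let len = String.length w in
  if len = 0 then true  (* the empty string: ε = ε^2 *)
  else
    let rec try_period d =
      if d > len / 2 then false
      else if len mod d = 0 &&
              (let v = String.sub w 0 d in
               (* verify each successive block of length d equals v *)
               let rec check i =
                 i >= len || (String.sub w i d = v && check (i + d))
               in
               check d)
      then true
      else try_period (d + 1)
    in
    try_period 1
```

So in_l "ababab" holds (v = ab, n = 3), while in_l "abc" does not.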

What is the correct and smooth way to write an OCaml function?

I am learning Jason Hickey's Introduction to Objective Caml.
After learning Chapter 3, I seem to understand how let and fun work. But still, I have trouble writing my own functions.
Here is an example problem I am facing.
Write a function sum that, given two integer bounds n and m and a function
f, computes a summation (no for loop allowed). i.e., sum n m f = f(n) + f(n+1) + ... + f(m)
So, how should I start to think about producing this function sum?
In Java or another ordinary programming language, it would be easy.
Since a for loop is not allowed here, I guess I should do it the let rec way?
Something like this:
let rec sum n m f = fun i -> ....
Do I need an i as a cursor?
Either way, I can't think my way any further.
Can anyone point out a road for me to produce an OCaml function?
This is my final attempt:
let rec sum n m f = if n <= m then (f n)+(sum n+1 m f) else 0;;
but of course, it is wrong. The error is:
Error: This expression has type 'a -> ('a -> int) -> 'b
but an expression was expected of type int
Why? And what is 'a?
I'm hoping this will help you to think in terms of recursion rather than loops (let's leave out tail recursion for a moment).
So you need to calculate f(n) + f(n+1) + ... + f(m). It might help you to think of this problem in an inductive fashion. That is, assume you know how to calculate f(n+1) + ... + f(m); then what do you need to do in order to calculate the original result? Well, you simply add f(n) to the latter, right? That is exactly what your code has to say:
let rec sum n m f =
  if n = m then
    f m
  else
    f n + sum (n + 1) m f;; (* here's the inductive step *)
You can see how I have added f(n) to the result of f(n+1) + .... + f(m). So, think inductively, break down the problem into smaller pieces and think about how you can put the results of those smaller pieces together.
Hope I didn't make things more confusing.
You have made a classic syntax mistake: sum n+1 m f is parsed as (sum n) + (1 m f) instead of what you expect, because in OCaml function application (juxtaposition with a space) has stronger precedence than infix operators.
The type error comes from the fact that sum n (which you use in a sum) is not an integer: it still takes one more argument (m) and a function returning an integer. At this point of the type-inference process (when the error occurs), OCaml represents this as 'a -> ('a -> int) -> 'b: it takes some unknown stuff 'a, a function from 'a to int, and returns some stuff 'b.
'a is like a generic type in Java. For example:
let test a = 1
has the type 'a -> int.
This function returns 1 regardless of the type of its argument.
The error is that you need to put parentheses here:
(sum (n+1) m f)
OCaml treated the pieces of n+1 as extra arguments, so the expression ended up with a different type than you intended. Parentheses make sure functions get the right arguments. This is a subtle problem to debug when you have a lot of code, so using parentheses in similar situations will save you a lot of time. :)
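Putting both answers together, here is the fixed definition, plus a tail-recursive variant with an accumulator (the accumulator version is an extra refinement, not something the question asked for):

```ocaml
(* Direct recursion, now with (n + 1) parenthesized correctly. *)
let rec sum n m f =
  if n <= m then f n + sum (n + 1) m f else 0

(* Tail-recursive variant: the running total is threaded through [go],
   so the stack does not grow with m - n. *)
let sum_tail n m f =
  let rec go acc i = if i > m then acc else go (acc + f i) (i + 1) in
  go 0 n
```

For example, sum 1 4 (fun x -> x) computes 1 + 2 + 3 + 4 = 10.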

Can somebody please explain in layman terms what a functional language is? [duplicate]

Possible Duplicate:
Functional programming and non-functional programming
I'm afraid Wikipedia did not get me any further.
Many thanks
PS: This past thread is also very good, however I am happy I asked this question again as the new answers were great - thanks
Functional programming and non-functional programming
First, learn what a Turing machine is (from Wikipedia):
A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This is about the lambda calculus (from Wikipedia):
In mathematical logic and computer science, the lambda calculus, also written as the λ-calculus, is a formal system for studying computable recursive functions, a la computability theory, and related phenomena such as variable binding and substitution.
The functional programming languages use, as their fundamental model of computation, the lambda calculus, while all the other programming languages use the Turing machine as their fundamental model of computation. (Well, technically, I should say functional programming languages vs. imperative programming languages, as languages in other paradigms use other models. For example, SQL uses the relational model, Prolog uses a logic model, and so on. However, pretty much all the languages people actually think about when discussing programming languages are either functional or imperative, so I'll stick with the easy generality.)
What do I mean by “fundamental model of computation”? Well, all languages can be thought of in two layers: one, some core Turing-complete language, and then layers of either abstractions or syntactic sugar (depending upon whether you like them or not) which are defined in terms of the base Turing-complete language. The core language for imperative languages is then a variant of the classic Turing machine model of computation one might call “the C language”. In this language, memory is an array of bytes that can be read from and written to, and you have one or more CPUs which read memory, perform simple arithmetic, branch on conditions, and so on. That’s what I mean by the fundamental model of computation of these languages is the Turing Machine.
The fundamental model of computation for functional languages is the Lambda Calculus, and this shows up in two different ways. First, one thing that many functional languages do is to write their specifications explicitly in terms of a translation to the lambda calculus to specify the behavior of a program written in the language (this is known as “denotational semantics”). And second, almost all functional programming languages implement their compilers to use an explicit lambda-calculus-like intermediate language- Haskell has Core, Lisp and Scheme have their “desugared” representation (after all macros have been applied), OCaml (Objective Caml) has its Lisp-ish intermediate representation, and so on.
So what is this lambda calculus I've been going on about? Well, the basic idea is that, to do any computation, you only need two things. The first thing you need is function abstraction- the definition of an unnamed, single-argument function. Alonzo Church, who first defined the lambda calculus, used a rather obscure notation to define a function: the Greek letter lambda, followed by the one-character name of the argument to the function, followed by a period, followed by the expression which was the body of the function. So the identity function, which given any value simply returns that value, would look like “λx.x”. I'm going to use a slightly more human-readable approach- I'm going to replace the λ character with the word “fun”, the period with “->”, and allow whitespace and multi-character names. So I might write the identity function as “fun x -> x”, or even “fun whatever -> whatever”. The change in notation doesn't change the fundamental nature. Note that this is the source of the name “lambda expression” in languages like Haskell and Lisp- expressions that introduce unnamed local functions.
The only other thing you can do in the lambda calculus is call functions. You call a function by applying an argument to it. I'm going to follow the standard convention that application is just the two names in a row- so f x is applying the value x to the function named f. We can replace f with some other expression, including a lambda expression, if we want. When you apply an argument to a lambda expression, you replace the application with the body of the function, with all occurrences of the argument name replaced with whatever value was applied. So the expression (fun x -> x x) y becomes y y.
The theoreticians went to great lengths to precisely define what they mean by “replacing all occurrences of the variable with the value applied”, and can go on at great length about how precisely this works (throwing around terms like “alpha renaming”), but in the end things work exactly like you expect them to. The expression (fun x -> x x) (x y) becomes (x y) (x y)- there is no confusion between the argument x within the anonymous function, and the x in the value being applied. This works even at multiple levels- the expression (fun x -> (fun x -> x x) (x x)) (x y) becomes first (fun x -> x x) ((x y) (x y)) and then ((x y) (x y)) ((x y) (x y)). The x in the innermost function (“(fun x -> x x)”) is a different x than the other x's.
It is perfectly valid to think of function application as a string manipulation. If I have a (fun x -> some expression), and I apply some value to it, then the result is just some expression with all the x’s textually replaced with the “some value” (except for those which are shadowed by another argument).
As an aside, I will add parenthesis where needed to disambiguate things, and also elide them where not needed. The only difference they make is grouping, they have no other meaning.
So that’s all there is too it to the Lambda calculus. No, really, that’s all- just anonymous function abstraction, and function application. I can see you’re doubtful about this, so let me address some of your concerns.
First, I specified that a function only takes one argument- so how do you have a function that takes two, or more, arguments? Easy- you have a function that takes one argument, and returns a function that takes the second argument. For example, function composition could be defined as fun f -> (fun g -> (fun x -> f (g x))) – read that as a function that takes an argument f, and returns a function that takes an argument g and returns a function that takes an argument x and returns f (g x).
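OCaml's own syntax mirrors this one-argument-at-a-time style directly; the composition function above can be written as-is (modulo the shorthand for nested fun; the names below are mine):

```ocaml
(* fun f -> (fun g -> (fun x -> f (g x))), in OCaml's usual shorthand *)
let compose f g x = f (g x)

(* Partial application works exactly as the currying story predicts:
   supplying only f and g yields a function still waiting for x. *)
let add_one_then_show = compose string_of_int (fun n -> n + 1)
```

Here add_one_then_show 41 first computes 41 + 1 and then formats it, giving "42".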
So how do we represent integers, using only functions and applications? Easily (if not obviously)- the number one, for instance, is a function fun s -> fun z -> s z – given a “successor” function s and a “zero” z, one is then the successor of zero. Two is fun s -> fun z -> s (s z), the successor of the successor of zero, three is fun s -> fun z -> s (s (s z)), and so on.
To add two numbers, say x and y, is again simple, if subtle. The addition function is just fun x -> fun y -> fun s -> fun z -> x s (y s z). This looks odd, so let me run you through an example to show that it does, in fact, work- let's add the numbers 3 and 2. Now, three is just (fun s -> fun z -> s (s (s z))) and two is just (fun s -> fun z -> s (s z)), so then we get (each step applying one argument to one function, in no particular order):
(fun x -> fun y -> fun s -> fun z -> x s (y s z)) (fun s -> fun z -> s (s (s z))) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> (fun s -> fun z -> s (s (s z))) s (y s z)) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> (fun z -> s (s (s z))) (y s z)) (fun s -> fun z -> s (s z))
(fun y -> fun s -> fun z -> s (s (s (y s z)))) (fun s -> fun z -> s (s z))
(fun s -> fun z -> s (s (s ((fun s -> fun z -> s (s z)) s z))))
(fun s -> fun z -> s (s (s ((fun z -> s (s z)) z))))
(fun s -> fun z -> s (s (s (s (s z)))))
And at the end we get the unsurprising answer of the successor of the successor of the successor of the successor of the successor of zero, known more colloquially as five. Addition works by replacing the zero (where we start counting) of the x value with the y value- to define multiplication, we instead diddle with the concept of “successor”:
(fun x -> fun y -> fun s -> fun z -> x (y s) z)
I’ll leave it to you to verify that the above code does
Wikipedia says
Imperative programs tend to emphasize the series of steps taken by a program in carrying out an action, while functional programs tend to emphasize the composition and arrangement of functions, often without specifying explicit steps. A simple example illustrates this with two solutions to the same programming goal (calculating Fibonacci numbers). The imperative example is in C++.
// Fibonacci numbers, imperative style
int fibonacci(int iterations)
{
    int first = 0, second = 1; // seed values
    for (int i = 0; i < iterations; ++i) {
        int sum = first + second;
        first = second;
        second = sum;
    }
    return first;
}
std::cout << fibonacci(10) << "\n";
A functional version (in Haskell) has a different feel to it:
-- Fibonacci numbers, functional style
-- describe an infinite list based on the recurrence relation for Fibonacci numbers
fibRecurrence first second = first : fibRecurrence second (first + second)
-- describe fibonacci list as fibRecurrence with initial values 0 and 1
fibonacci = fibRecurrence 0 1
-- describe action to print the 10th element of the fibonacci list
main = print (fibonacci !! 10)
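For comparison, here is a rough OCaml transcription of the same infinite-list idea using the standard Seq module (Seq.take requires OCaml 4.14+; the function names are mine):

```ocaml
(* An infinite lazy sequence built from the Fibonacci recurrence,
   mirroring fibRecurrence above. *)
let rec fib_seq a b () = Seq.Cons (a, fib_seq b (a + b))

(* Take the first n+1 elements and pick element n. *)
let fib n = List.nth (List.of_seq (Seq.take (n + 1) (fib_seq 0 1))) n
```

Here fib 10 evaluates to 55, matching fibonacci !! 10 in the Haskell version.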
See this PDF also
(A) Functional programming describes solutions mechanically: you define a machine that constantly outputs the correct result, e.g. with Caml:
let rec factorial = function
  | 0 -> 1
  | n -> n * factorial (n - 1);;
(B) Procedural programming describes solutions temporally. You describe a series of steps to transform a given input into the correct output, e.g. with Java:
int factorial(int n) {
    int result = 1;
    while (n > 0) {
        result *= n--;
    }
    return result;
}
A functional programming language wants you to do (A) all the time. To me, the most tangible distinguishing feature of purely functional programming is statelessness: you never declare variables independently of what they are used for.
