Non-deterministic Turing machine

I am new to NDTMs, but I do understand the concept of a Turing machine. When it comes to NDTMs I get a little confused. I am supposed to develop an NDTM for the alphabet Σ = {a, b, c} and the language
L = {w ∈ Σ* | ∃v ∈ Σ*, ∃n ≥ 2 with w = v^n}
First, I want to know how to read L; for example, what is the meaning of ∃?
I do understand that an NDTM gives two possibilities for one outcome, for example for a: we would have "with a" and "without a", if I am correct. Can someone help me figure this out?

This should be marked as "Homework" I think.
∃ is "there exists"
Σ is "the set of symbols in the language" ({a, b, c} in this case)
∈ is "element of"
Now that we have that, we can read this language. So L is the set of words w in {a, b, c}* such that there exists a word v and an n ≥ 2 such that w is v repeated n times. E.g. ababab = (ab)^3 ∈ L.
Now you want to come up with a Turing machine, M, to represent this language, so you have to consider:
When do we reject a word (what is our rejecting state, what is on the tape)?
When do we accept a word (what is our accepting state, what is on the tape)?
How do we guarantee that M terminates?
We can see that a is not in L, because n ≥ 2 implies that the length of v^n is at least 2 (or 0 when v is the empty string, which is an outlier). Similarly for b and c. With that consideration and the knowledge that n ≥ 2, figure out which words are not accepted (e.g. consider b, abc, cab, cca, etc.).
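To make the membership condition concrete, here is a small deterministic sketch in OCaml (my own illustration, not a Turing machine): it tries every candidate length for v, which is exactly the choice a nondeterministic machine could simply guess in one step and then verify.

let in_l (w : string) : bool =
  let n = String.length w in
  if n = 0 then true                     (* "" = ""^n for any n >= 2: the outlier above *)
  else begin
    let found = ref false in
    (* try every candidate period length; |v| can be at most |w|/2 since n >= 2 *)
    for len = 1 to n / 2 do
      if n mod len = 0 then begin
        let v = String.sub w 0 len in
        let repeats = ref true in
        for i = 0 to n - 1 do
          if w.[i] <> v.[i mod len] then repeats := false
        done;
        if !repeats then found := true
      end
    done;
    !found
  end

let () =
  assert (in_l "ababab");        (* (ab)^3 *)
  assert (not (in_l "abc"));
  assert (not (in_l "a"))        (* n >= 2 rules out single letters *)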

Can somebody please explain in layman terms what a functional language is? [duplicate]

Possible Duplicate:
Functional programming and non-functional programming
I'm afraid Wikipedia did not get me any further.
Many thanks.
PS: This past thread is also very good; however, I am happy I asked this question again, as the new answers were great - thanks:
Functional programming and non-functional programming
First, learn what a Turing machine is (from Wikipedia).
A Turing machine is a device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.
This is about the lambda calculus (from Wikipedia).
In mathematical logic and computer science, the lambda calculus, also written as the λ-calculus, is a formal system for studying computable recursive functions, a la computability theory, and related phenomena such as variable binding and substitution.
The functional programming languages use, as their fundamental model of computation, the lambda calculus, while all the other programming languages use the Turing machine as their fundamental model of computation. (Well, technically, I should say functional programming languages vs. imperative programming languages, as languages in other paradigms use other models. For example, SQL uses the relational model, Prolog uses a logic model, and so on. However, pretty much all the languages people actually think about when discussing programming languages are either functional or imperative, so I'll stick with the easy generality.)
What do I mean by "fundamental model of computation"? Well, all languages can be thought of in two layers: one, some core Turing-complete language, and then layers of either abstractions or syntactic sugar (depending upon whether you like them or not) which are defined in terms of the base Turing-complete language. The core language for imperative languages is then a variant of the classic Turing machine model of computation, one might call it "the C language". In this language, memory is an array of bytes that can be read from and written to, and you have one or more CPUs which read memory, perform simple arithmetic, branch on conditions, and so on. That's what I mean when I say the fundamental model of computation of these languages is the Turing machine.
The fundamental model of computation for functional languages is the Lambda Calculus, and this shows up in two different ways. First, one thing that many functional languages do is to write their specifications explicitly in terms of a translation to the lambda calculus to specify the behavior of a program written in the language (this is known as "denotational semantics"). And second, almost all functional programming languages implement their compilers to use an explicit lambda-calculus-like intermediate language: Haskell has Core, Lisp and Scheme have their "desugared" representation (after all macros have been applied), OCaml (Objective Categorical Abstract Machine Language) has its Lisp-like intermediate representation, and so on.
So what is this lambda calculus I've been going on about? Well, the basic idea is that, to do any computation, you only need two things. The first thing you need is function abstraction: the definition of an unnamed, single-argument function. Alonzo Church, who first defined the Lambda calculus, used a rather obscure notation: a function is written as the Greek letter lambda, followed by the one-character name of the argument to the function, followed by a period, followed by the expression which is the body of the function. So the identity function, which, given any value, simply returns that value, would look like "λx.x". I'm going to use a slightly more human-readable approach: I'm going to replace the λ character with the word "fun", the period with "->", and allow white space and multi-character names. So I might write the identity function as "fun x -> x", or even "fun whatever -> whatever". The change in notation doesn't change the fundamental nature. Note that this is the source of the name "lambda expression" in languages like Haskell and Lisp: expressions that introduce unnamed local functions.
The only other thing you can do in the Lambda Calculus is to call functions. You call a function by applying an argument to it. I'm going to follow the standard convention that application is just the two names in a row, so f x is applying the value x to the function named f. We can replace f with some other expression, including a Lambda expression, if we want. When you apply an argument to a lambda expression, you replace the application with the body of the function, with all the occurrences of the argument name replaced with whatever value was applied. So the expression (fun x -> x x) y becomes y y.
The theoreticians went to great lengths to precisely define what they mean by "replacing all occurrences of the variable with the value applied", and can go on at great length about how precisely this works (throwing around terms like "alpha renaming"), but in the end things work exactly like you expect them to. The expression (fun x -> x x) (x y) becomes (x y) (x y): there is no confusion between the argument x within the anonymous function and the x in the value being applied. This works even across multiple levels: the expression (fun x -> (fun x -> x x) (x x)) (x y) becomes first (fun x -> x x) ((x y) (x y)) and then ((x y) (x y)) ((x y) (x y)). The x in the innermost function ("(fun x -> x x)") is a different x than the other x's.
It is perfectly valid to think of function application as a string manipulation. If I have a (fun x -> some expression), and I apply some value to it, then the result is just some expression with all the x’s textually replaced with the “some value” (except for those which are shadowed by another argument).
As an aside, I will add parentheses where needed to disambiguate things, and also elide them where not needed. The only difference they make is grouping; they have no other meaning.
So that's all there is to the Lambda calculus. No, really, that's all: just anonymous function abstraction, and function application. I can see you're doubtful about this, so let me address some of your concerns.
First, I specified that a function only takes one argument, so how do you have a function that takes two, or more, arguments? Easy: you have a function that takes one argument, and returns a function that takes the second argument. For example, function composition could be defined as fun f -> (fun g -> (fun x -> f (g x))). Read that as a function that takes an argument f, and returns a function that takes an argument g and returns a function that takes an argument x and returns f (g x).
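For instance, that curried composition can be typed into OCaml almost verbatim (my own illustration; the names add1 and double are made up for the usage example):

(* compose = fun f -> (fun g -> (fun x -> f (g x))) *)
let compose = fun f -> fun g -> fun x -> f (g x)

let () =
  let add1 n = n + 1 and double n = 2 * n in
  (* compose add1 double applies double first, then add1: add1 (double 5) = 11 *)
  assert (compose add1 double 5 = 11)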
So how do we represent integers, using only functions and applications? Easily (if not obviously)- the number one, for instance, is a function fun s -> fun z -> s z – given a “successor” function s and a “zero” z, one is then the successor to zero. Two is fun s -> fun z -> s s z, the successor to the successor to zero, three is fun s -> fun z -> s s s z, and so on.
To add two numbers, say x and y, is again simple, if subtle. The addition function is just fun x -> fun y -> fun s -> fun z -> x s (y s z). This looks odd, so let me run you through an example to show that it does, in fact, work. Let's add the numbers 3 and 2. Now, three is just (fun s -> fun z -> s s s z) and two is just (fun s -> fun z -> s s z), so then we get (each step applying one argument to one function, in no particular order):
(fun x -> fun y -> fun s -> fun z -> x s (y s z)) (fun s -> fun z -> s s s z) (fun s -> fun z -> s s z)
(fun y -> fun s -> fun z -> (fun s -> fun z -> s s s z) s (y s z)) (fun s -> fun z -> s s z)
(fun y -> fun s -> fun z -> (fun z -> s s s z) (y s z)) (fun s -> fun z -> s s z)
(fun y -> fun s -> fun z -> s s s (y s z)) (fun s -> fun z -> s s z)
(fun s -> fun z -> s s s ((fun s -> fun z -> s s z) s z))
(fun s -> fun z -> s s s ((fun z -> s s z) z))
(fun s -> fun z -> s s s s s z)
And at the end we get the unsurprising answer of the successor to the successor to the successor to the successor to the successor to zero, known more colloquially as five. Addition works by replacing the zero (or where we start counting) of the x value with the y value; to define multiplication, we instead diddle with the concept of "successor":
(fun x -> fun y -> fun s -> fun z -> x (y s) z)
I'll leave it to you to verify that the above definition does, in fact, multiply two numbers.
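As a runnable illustration of these encodings, here is a transcription into OCaml (my own sketch, not part of the quoted explanation); to_int just instantiates the successor s with (+1) and the zero z with 0 so the results can be printed:

let two = fun s z -> s (s z)
let three = fun s z -> s (s (s z))

(* addition: stack x's successors on top of y *)
let add x y = fun s z -> x s (y s z)

(* multiplication: use "apply s y times" as the successor handed to x *)
let mult x y = fun s z -> x (y s) z

(* leave the lambda-calculus world by choosing concrete s and z *)
let to_int n = n (fun k -> k + 1) 0

let () =
  Printf.printf "3 + 2 = %d\n" (to_int (add three two));   (* prints 5 *)
  Printf.printf "3 * 2 = %d\n" (to_int (mult three two))   (* prints 6 *)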
Wikipedia says
Imperative programs tend to emphasize the series of steps taken by a program in carrying out an action, while functional programs tend to emphasize the composition and arrangement of functions, often without specifying explicit steps. A simple example illustrates this with two solutions to the same programming goal (calculating Fibonacci numbers). The imperative example is in C++.
// Fibonacci numbers, imperative style
int fibonacci(int iterations)
{
    int first = 0, second = 1; // seed values
    for (int i = 0; i < iterations; ++i) {
        int sum = first + second;
        first = second;
        second = sum;
    }
    return first;
}
std::cout << fibonacci(10) << "\n";
A functional version (in Haskell) has a different feel to it:
-- Fibonacci numbers, functional style
-- describe an infinite list based on the recurrence relation for Fibonacci numbers
fibRecurrence first second = first : fibRecurrence second (first + second)
-- describe fibonacci list as fibRecurrence with initial values 0 and 1
fibonacci = fibRecurrence 0 1
-- describe action to print the 10th element of the fibonacci list
main = print (fibonacci !! 10)
See this PDF also
(A) Functional programming describes solutions mechanically. You define a machine that constantly outputs correct results, e.g. with Caml:
let rec factorial = function
  | 0 -> 1
  | n -> n * factorial (n - 1);;
(B) Procedural programming describes solutions temporally. You describe a series of steps to transform a given input into the correct output, e.g. with Java:
int factorial(int n) {
    int result = 1;
    while (n > 0) {
        result *= n--;
    }
    return result;
}
A Functional Programming Language wants you to do (A) all the time. To me, the most tangible distinguishing feature of purely functional programming is statelessness: you never declare variables independently of what they are used for.

Adding complete disjunctive assumption in Coq

In mathematics, we often proceed as follows: "Now let us consider two cases, the number k can be even or odd. For the even case, we can say exists k', 2k' = k..."
Which expands to the general idea of reasoning about an entire set of objects by disassembling it into several disjoint subsets that can be used to reconstruct the original set.
How is this reasoning principle captured in Coq, considering we do not always have an assumption that is one of the subsets we want to deconstruct into?
Consider the follow example for demonstration:
forall n, Nat.Even n -> P n.
Here we can naturally do inversion on Nat.Even n to get n = 2*x (and an automatically-false eliminated assumption that n = 2*x + 1). However, suppose we have the following:
forall n, P n
How can I state: "let us consider even ns and odd ns"? Do I need to first show that we have a decidable disjunction, forall n : nat, even n \/ odd n? That is, introduce a new (local or global) lemma listing all the required subsets? What are the best practices?
Indeed, to reason about a splitting of a class of objects in Coq you need to show an algorithm splitting them, unless you want to reason classically (there is nothing wrong with that).
IMO, a key point is getting such decidability hypotheses "for free". For instance, you could implement odd : nat -> bool as a boolean function, as it is done in some libraries, then you get the splitting for free.
[edit]
You can use some slightly more convenient techniques for pattern matching, by encoding the pertinent cases as inductives:
Require Import PeanoNat Nat Bool.
CoInductive parity_spec (n : nat) : Type :=
| parity_spec_odd : odd n = true -> parity_spec n
| parity_spec_even: even n = true -> parity_spec n
.
Lemma parityP n : parity_spec n.
Proof.
case (even n) eqn:H; [now right|left].
now rewrite <- Nat.negb_even, H.
Qed.
Lemma test n : even n = true \/ odd n = true.
Proof. now case (parityP n); auto. Qed.

Proof with false hypothesis in Isabelle/HOL Isar

I am trying to prove a lemma which in a certain part has a false hypothesis. In Coq I used to write "congruence" and it would get rid of the goal. However, I am not sure how to proceed in Isabelle Isar. I am trying to prove a lemma about my le function:
primrec le::"nat ⇒ nat ⇒ bool" where
"le 0 n = True" |
"le (Suc k) n = (case n of 0 ⇒ False | Suc j ⇒ le k j)"
lemma def_le: "le a b = True ⟷ (∃k. a + k = b)"
proof
assume H:"le a b = True"
show "∃k. a + k = b"
proof (induct a)
case 0
show "∃k. 0 + k = b"
proof -
have "0 + b = b" by simp
thus ?thesis by (rule exI)
qed
case Suc
fix n::nat
assume HI:"∃k. n + k = b"
show "∃k. (Suc n) + k = b"
proof (induct b)
case 0
show "∃k. (Suc n) + k = 0"
proof -
have "le (Suc n) 0 = False" by simp
oops
Note that my le function is "less or equal". At this point of the proof I find I have the hypothesis H which states that le a b = True, or in this case that le (Suc n) 0 = True which is false. How can I solve this lemma?
Another little question: I would like to write have "le (Suc n) 0 = False" by (simp only:le.simps) but this does not work. It seems I need to add some rule for reducing case expressions. What am I missing?
Thank you very much for your help.
The problem is not that it is hard to get rid of a False hypothesis in Isabelle. In fact, pretty much all of Isabelle's proof methods will instantly prove anything if there is False in the assumptions. No, the problem here is that at that point of the proof, you don't have the assumptions you need anymore, because you did not chain them into the induction. But first, allow me to make a few small remarks, and then give concrete suggestions to fix your proof.
A Few Remarks
It is somewhat unidiomatic to write le a b = True or le a b = False in Isabelle. Just write le a b or ¬le a b.
Writing the definition in a convenient form is very important to get good automation. Your definition works, of course, but I suggest the following one, which may be more natural and will give you a convenient induction rule for free:
Using the function package:
fun le :: "nat ⇒ nat ⇒ bool" where
"le 0 n = True"
| "le (Suc k) 0 = False"
| "le (Suc k) (Suc n) = le k n"
Existentials can sometimes hide important information, and they tend to mess with automation, since the automation never quite knows how to instantiate them.
If you prove the following lemma, the proof is fully automatic:
lemma def_le': "le a b ⟷ a + (b - a) = b"
by (induction a arbitrary: b) (simp_all split: nat.split)
Using my function definition, it is:
lemma def_le': "le a b ⟷ (a + (b - a) = b)"
by (induction a b rule: le.induct) simp_all
Your lemma then follows from that trivially:
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
using def_le' by auto
This is because the existential makes the search space explode. Giving the automation something concrete to follow helps a lot.
The actual answer
There are a number of problems. First of all, you will probably need to do induct a arbitrary: b, since the b will change during your induction (for le (Suc a) b, you will have to do a case analysis on b, and then in the case b = Suc b' you will go from le (Suc a) (Suc b') to le a b').
Second, at the very top, you have assume "le a b = True", but you do not chain this fact into the induction. If you do induction in Isabelle, you have to chain all required assumptions containing the induction variables into the induction command, or they will not be available in the induction proof. The assumption in question talks about a and b, but if you do induction over a, you will have to reason about some arbitrary variable a' that has nothing to do with a. So do e.g:
assume H:"le a b = True"
thus "∃k. a + k = b"
(and the same for the second induction over b)
Third, when you have several cases in Isar (e.g. during an induction or case analysis), you have to separate them with next if they have different assumptions. The next essentially throws away all the fixed variables and local assumptions. With the changes I mentioned before, you will need a next before the case Suc, or Isabelle will complain.
Fourth, the case command in Isar can fix variables. In your Suc case, the induction variable a is fixed; with the change to arbitrary: b, an a and a b are fixed. You should give explicit names to these variables; otherwise, Isabelle will invent them and you will have to hope that the ones it comes up with are the same as those that you use. That is not good style. So write e.g. case (Suc a b). Note that you do not have to fix variables or assume things when using case. The case command takes care of that for you and stores the local assumptions in a theorem collection with the same name as the case, e.g. Suc here. They are categorised as Suc.prems, Suc.IH, Suc.hyps. Also, the proof obligation for the current case is stored in ?case (not ?thesis!).
Conclusion
With that (and a little bit of cleanup), your proof looks like this:
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
proof
assume "le a b"
thus "∃k. a + k = b"
proof (induct a arbitrary: b)
case 0
show "∃k. 0 + k = b" by simp
next
case (Suc a b)
thus ?case
proof (induct b)
case 0
thus ?case by simp
next
case (Suc b)
thus ?case by simp
qed
qed
next
It can be condensed to
lemma def_le: "le a b ⟷ (∃k. a + k = b)"
proof
assume "le a b"
thus "∃k. a + k = b"
proof (induct a arbitrary: b)
case (Suc a b)
thus ?case by (induct b) simp_all
qed simp
next
But really, I would suggest that you simply prove a concrete result like le a b ⟷ a + (b - a) = b first and then prove the existential statement using that.
Manuel Eberl did the hard part; I am just responding to your question on how to try and control simp, etc.
Before continuing, I go off topic and clarify something said on another site. The word "a tip" was used to give credit to M.E., but it should have been "3 explanations provided over 2 answers". Emails on mailing lists can't be corrected without spamming the list.
Some short answers are these:
There is no guarantee of completely controlling simp, but the attributes del and only, shown below, will often control it to the extent that you desire. To see that it's not doing more than you want, traces need to be used; an example of using traces is given below.
To get complete control of proof steps, you would use "controlled" simp, along with rule, drule, and erule, and other methods. Someone else would need to give an exhaustive list.
Almost anyone with the expertise to answer "what are the details of what simp, auto, blast, etc. do" will very rarely be willing to put in the work to answer the question. It can be plain, tedious work to investigate what simp is doing.
"Black box proofs" are always optional, as far as I can tell, if we want them to be and have the expertise to make them optional. Expertise to make them optional is generally a major limiting factor. With expertise, motivation becomes the limiting factor.
What's simp up to? It can't please everyone
If you watch, you'll see. People complain there's too much automation, or they complain there's too little automation with Isabelle.
There can never be too much automation, but that's because with Isabelle/HOL, automation is mostly optional. The possibility of no automation is what makes proving potentially interesting, but with no automation at all, proving is nothing but pure tediousness, in the grand scheme.
There are attributes only and del, which can be used to mostly control simp. Speaking only from experimenting with traces, even simp will call other proof methods, similar to how auto calls simp, blast, and others.
I think you cannot prevent simp from calling linear arithmetic methods. But linear arithmetic doesn't apply much of the time.
Get set up for traces, and even the blast trace
My answer here is generalized for also trying to determine what auto is up to. One of the biggest methods that auto resorts to is blast.
You don't need the attribute_setups if you don't care about seeing when blast is used by auto, or called directly. Makarius Wenzel took the blast trace out, but then was nice enough to show the code on how to implement it.
Without the blast part, there is just the use of declare. In a proof, you can use using instead of declare. Take out what you don't want. Make sure you look at the new simp_trace_new info in the PIDE Simplifier Trace panel.
attribute_setup blast_trace = {*
Scan.lift
(Parse.$$$ "=" -- Args.$$$ "true" >> K true ||
Parse.$$$ "=" -- Args.$$$ "false" >> K false ||
Scan.succeed true) >>
(fn b => Thm.declaration_attribute (K (Config.put_generic Blast.trace b)))
*}
attribute_setup blast_stats = {*
Scan.lift
(Parse.$$$ "=" -- Args.$$$ "true" >> K true ||
Parse.$$$ "=" -- Args.$$$ "false" >> K false ||
Scan.succeed true) >>
(fn b => Thm.declaration_attribute (K (Config.put_generic Blast.stats b)))
*}
declare[[simp_trace_new mode=full]]
declare[[linarith_trace,rule_trace,blast_trace,blast_stats]]
Try and control simp, to your heart's content with only & del
I don't want to work hard, so I won't use the formula from your question. With simp, what you're looking for with only and the traces is that no rule was used that you weren't expecting.
Look at the simp trace to see what basic rewrites are done that will always be done, like basic rewrites for True and False. If you don't even want that, then you have to resort to methods like rule.
A starting point to see if you can completely shut down simp is apply(simp only:).
Here are a few examples. I would have to work harder to find an example to show when linear arithmetic is automatically being used:
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
apply(simp only:) (*
ERROR: simp can't apply any magic whatsoever.
*)
oops
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
apply(simp only: add_0) (*
ERROR: Still can't. Rule 'add_0' is used, but it can't be used first.
*)
oops
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
apply(simp del: add_0) (*
A LITTLE MAGIC:
It applied at least one rule. See the simp trace. It tried to finish
the job automatically, but couldn't. It says "Trying to refute subgoal 1,
etc.".
Don't trust me about this, but it looks typical of blast. I was under
the impression that simp doesn't call blast. *)
oops
lemma
"a = 0 --> a + b = (b::'a::comm_monoid_add)"
by(simp) (*
This is your question. I don't want to step through the rules that simp
uses to prove it all.
*)

Prove that this language is undecidable

Is the following language L undecidable?
L = {M | M is a Turing machine description and there exists an input x of length k such that M halts after at most k steps}
I think it is but I couldn't prove it. I tried to think of a reduction from the halting problem.
Review: An instance of the halting problem asks whether Turing machine N halts on input y. The problem is known to be undecidable (but semi-decidable).
Your language L is indeed undecidable. This can be shown by reducing the halting problem to L:
For the halting problem instance (N, y), create a new machine M for the L problem.
On input x, M simulates (N, y) for length(x) steps.
If the simulation halted within that number of steps, then M halts. Otherwise, M deliberately goes into an infinite loop.
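Here is a rough OCaml-flavoured sketch of that construction (the type, the stubbed simulator, and all names are my own illustration; the actual reduction only needs the behaviour described in the three steps above):

type tm_desc = string   (* stand-in for an encoded Turing machine description *)

(* Stub for "simulate machine n on input y for at most the given number of
   steps and report whether it halted"; a real reduction would use a
   universal machine here. *)
let halts_within (_n : tm_desc) (_y : string) (_steps : int) : bool =
  failwith "universal-machine simulation goes here"

(* The machine M built from the instance (n, y): on input x it simulates
   (n, y) for |x| steps, halts if that simulation halted, and loops otherwise. *)
let make_m (n : tm_desc) (y : string) : string -> unit =
  fun x ->
    if halts_within n y (String.length x) then ()          (* halt *)
    else while true do () done                             (* run forever *)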
This reduction is valid because:
If (N, y) does halt eventually, say after k steps, then M halts on every input of length k or greater, thus M is in L.
Otherwise, (N, y) does not halt, so M does not halt on any input string, no matter how long it is, thus M is not in L.
Finally, the halting problem is undecidable, therefore L is undecidable.

How to optimize this computation

I'm writing a model checker which relies on the computation of a coefficient that is used intensively by the algorithms. The coefficient is the following:
coeff(q, t, k) = e^(-q*t) * (q*t)^k / k!
where q and t are doubles and k is an int; e stands for the exponential function. This coefficient is used in steps in which q and t don't change, while k starts from 0 and increases until the sum of all the previous coefficients (of that step) reaches 1.
My first implementation was a literal one:
let rec fact k =
  match k with
  | 0 | 1 -> 1
  | n -> n * fact (n - 1)

let coeff q t k = exp (-. q *. t) *. ((q *. t) ** (float k)) /. float (fact k)
Of course this didn't last long, since computing the whole factorial was simply infeasible once k went over a small threshold (15-20): obviously the results started to go crazy. So I rearranged the whole thing by doing incremental divisions:
let rec div_by_fact v d =
  match d with
  | 1. | 0. -> v
  | d -> div_by_fact (v /. d) (d -. 1.)

let coeff q t k =
  div_by_fact (exp (-. q *. t) *. ((q *. t) ** (float k))) (float k)
This version works quite well when q and t are 'normal' enough, but when things get strange, e.g. q = 50.0 and t = 100.0, and I calculate it from k = 0 to 100, what I get is a series of 0s followed by NaNs from a certain point until the end.
Of course this is caused by operations with numbers that start to get too near to 0 or similar problems.
Do you have any idea how I can rearrange the formula so that it gives accurate enough results over a wide range of inputs?
Everything is already 64-bit (since I'm using OCaml, whose floats are doubles by default). Maybe there is a way to use 128-bit floats too, but I don't know how.
I'm using OCaml, but you can provide ideas in whatever language you want: C, C++, Java, etc. I have used all of them quite a bit.
(qt)^k / k! = e^[log((qt)^k / k!)]
log((qt)^k / k!) = log((qt)^k) - log(k!)    // log(k!) ~ k ln k - k by Stirling
                 ~ k ln(qt) - (k ln k - k)
                 = k ln(qt/k) + k
For small values of k, the Stirling approximation is not accurate.
However, since you appear to be working over a finite, known range of k, you can compute log(k!) exactly and put it in an array, avoiding any errors whatsoever.
Of course there are multiple variations you can do further.
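For example, a log-domain version of the coefficient in OCaml might look like this (my own sketch of the idea above, accumulating log(k!) exactly rather than via Stirling, which is fine for a known finite range of k; it assumes q *. t > 0):

(* coeff q t k = exp (-qt) * (qt)^k / k!, computed as exp of a sum of logs so
   that neither (qt)^k nor k! is ever formed directly *)
let coeff q t k =
  let qt = q *. t in
  if k = 0 then exp (-. qt)
  else begin
    (* log (k!) = log 2 + log 3 + ... + log k, computed exactly *)
    let log_fact = ref 0.0 in
    for i = 2 to k do
      log_fact := !log_fact +. log (float_of_int i)
    done;
    exp (-. qt +. float_of_int k *. log qt -. !log_fact)
  end

Since q and t stay fixed within a step while k grows, the running log(k!) (or the array mentioned above) can also be kept across calls instead of being recomputed for each k.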
This is not an answer (I believe), but perhaps just a clarification. If I misunderstood something, I'll delete it after your comment.
As I understand it, you are trying to calculate n such that the following sum is equal to 1: the sum for k = 0 to n of e^(-q*t) * (q*t)^k / k!.
As you may see, it approaches 1 asymptotically; it will never be EQUAL to 1.
Please correct me if I misunderstood your question.
