Can the halting problem be solved for certain finite functions? - turing-machines

It is my understanding that for a sufficiently simple function, let's say

function(boolean input){
    while(input){ }
}

it is possible to tell if it will halt for any possible input.
It is easy to see that the above function will terminate for false and not terminate for true. It's only impossible to solve the halting problem for an arbitrary function `f`, as of course you can evaluate `haltingFinder(haltingFinder)` and essentially create a paradox.
Am I correct in my understanding?

Yes, of course you are right. Take a function that does not even have a loop: it will always halt. For entire classes like the regular and context-free languages the halting problem is trivial: the corresponding machines (finite automata, pushdown automata without epsilon moves) can only make a number of steps equal to the input word's length and thus will always halt. Though, of course, you can design non-halting computations for simple functions, e.g. a Turing machine with useless loops for a regular language.
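To make the finite-automaton case concrete, here is a minimal Python sketch (the function name and the example automaton are mine, for illustration only): simulating a DFA takes exactly one step per input symbol, so the simulation trivially always halts.

```python
def run_dfa(delta, start, accepting, word):
    """Simulate a DFA. The loop runs exactly len(word) times,
    so halting is guaranteed for every input."""
    state = start
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

# Example: a DFA over {'a'} accepting words with an even number of 'a's.
delta = {("even", "a"): "odd", ("odd", "a"): "even"}
```

Here `run_dfa(delta, "even", {"even"}, "aa")` accepts, while `"aaa"` is rejected; in both cases the decision is reached after exactly as many steps as there are input symbols.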


How to call a structured language that cannot loop or a functional language that cannot return

I created a special-purpose "programming language" that deliberately (by design) cannot evaluate the same piece of code twice (ie. it cannot loop). It essentially is made to describe a flowchart-like process where each element in the flowchart is a conditional that performs a different test on the same set of data (without being able to modify it). Branches can split and merge, but never in a circular fashion, ie. the flowchart cannot loop back onto itself. When arriving at the end of a branch, the current state is returned and the program exits.
When written down, a typical program superficially resembles a program in a purely functional language, except that no form of recursion is allowed and functions can never return anything; the only way to exit a function is to call another function, or to invoke a general exit statement that returns the current state. A similar effect could also be achieved by taking a structured programming language and removing all loop statements, or by taking an "unstructured" programming language and forbidding any goto or jmp statement that goes backwards in the code.
Now my question is: is there a concise and accurate way to describe such a language? I don't have any formal CS background and it is difficult for me to understand articles about automata theory and formal language theory, so I'm a bit at a loss. I know my language is not Turing complete, and through great pain, I managed to assure myself that my language probably can be classified as a "regular language" (ie. a language that can be evaluated by a read-only Turing machine), but is there a more specific term?
Bonus points if the term is intuitively understandable to an audience that is well-versed in general programming concepts but doesn't have a formal CS background. Also bonus points if there is a specific kind of machine or automaton that evaluates such a language. Oh yeah, keep in mind that we're not evaluating a stream of data - every element has (read-only) access to the full set of input data. :)
I know this question is somewhat old, but for posterity, the phrase you are looking for is "decision tree". I believe this captures exactly what you have done, and it has a pretty descriptive name to boot!
I believe that your language is sufficiently powerful to encode precisely the star-free languages. This is a subset of the regular languages in which no expression contains a Kleene star. In other words, it's the class built from the language of the empty string, the empty language, and the individual characters that is closed under concatenation and disjunction. This is equivalent to the set of languages accepted by DFAs that don't have any directed cycles in them.
I can attempt a proof of this here given your description of your language, though I'm not sure it will work precisely correctly because I don't have full access to your language. The assumptions I'm making are as follows:
No functions ever return. Once a function is called, it will never return control flow to the caller.
All calls are resolved statically (that is, you can look at the source code and construct a graph of each function and the set of functions it calls). In other words, there aren't any function pointers.
The call graph is acyclic; for any functions A and B, exactly one of the following holds: A transitively calls B, B transitively calls A, or neither A nor B transitively calls the other.
More generally, the control flow graph is acyclic. Once an expression evaluates, it never evaluates again. This allows us to generalize the above so that instead of thinking of functions calling other functions, we can think of the program as a series of statements that all call one another as a DAG.
Your input is a string where each letter is scanned once and only once, and in the order in which it's given (which seems reasonable given the fact that you're trying to model flowcharts).
Given these assumptions, here's a proof that your programs accept a language iff that language is star-free.
To prove that if there's a star-free language, there's a program in your language that accepts it, begin by constructing the minimum-state DFA for that language. Star-free languages are loop-free and scan the input exactly once, so it should be easy to build a program in your language from the DFA. In particular, given a state s with a set of transitions to other states based on the next symbol of input, you can write a function that looks at the next character of input and then calls the function encoding the state being transitioned to. Since the DFA has no directed cycles, the function calls have no directed cycles, and so each statement will be executed at most once. We now have that (∀ R: R is a star-free language → ∃ a program in your language that accepts it).
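To illustrate the construction (a hypothetical sketch in Python, not the asker's actual language): an acyclic DFA for the finite language {ab, b} becomes one function per state, each inspecting the next character and handing control to the function for the next state.

```python
def accepts(word):
    """One function per DFA state; the calls form a DAG,
    so no piece of code is ever evaluated twice."""
    def q0(rest):                  # start state: branch on the first character
        if rest[:1] == "a":
            return q1(rest[1:])    # transition on 'a'
        if rest[:1] == "b":
            return q2(rest[1:])    # transition on 'b'
        return False
    def q1(rest):                  # saw 'a'; must see exactly 'b' next
        return rest == "b"
    def q2(rest):                  # saw 'b'; accept iff input is exhausted
        return rest == ""
    return q0(word)
```

In the asker's language the inner functions would never return control to their caller; Python functions do return values, so treat the return as standing in for the "exit with current state" statement.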
To prove the reverse direction of the implication, we essentially reverse this construction and create an ε-NFA with no cycles that corresponds to your program. Doing a subset construction on this NFA to reduce it to a DFA will not introduce any cycles, and so you'll have a star-free language. The construction is as follows: for each statement s_i in your program, create a state q_i with a transition to each of the states corresponding to the other statements in your program that are one hop away from that statement. The transitions to those states will be labeled with the symbols of input consumed in making each of the decisions, or ε if the transition occurs without consuming any input. This shows that (∀ programs P in your language, ∃ a star-free language R that accepts just the strings accepted by your program).
Taken together, this shows that your programs have exactly the power of the star-free languages.
Of course, the assumptions I made on what your programs can do might be too limited. You might have random-access to the input sequence, which I think can be handled with a modification of the above construction. If you can potentially have cycles in execution, then this whole construction breaks. But, even if I'm wrong, I still had a lot of fun thinking about this, and thank you for an enjoyable evening. :-)
Hope this helps!

Advice on Learning “How to Think Functional”?

As a newbie in functional languages (I started touching Erlang a couple of weeks ago; it's the first functional language I could get my hands on), I started writing some small algorithms (such as left_rotate_list, bubble_sort, merge_sort, etc.). I found myself often getting lost in decisions such as "should I use a helper List for intermediate result storage?" and "should I create a helper function to do this?"
After a while, I found that functional programming (bear with me if what I am saying does not make sense at all) encourages a "top down" design: i.e., when you do merge_sort, you first write down all the merge sort steps and name them as individual helper functions; then you implement those helper functions one by one (and if you need to further divide those helper functions, do it in the same way).
This seems to contradict OO design a little, in which you can start from the bottom to build the basic data structure, and then assemble the data structure and algorithms into what you want.
Thanks for comments. Yes, I want to get advice about how to "think in functional language" (just like "thinking in Java", "thinking in C++").
After a while, I found that functional programming […] encourages a "top down" design.
I'm not sure this is an accurate statement. I've recently been trying to teach myself functional programming, and I've found that a sort of "bottom-up" style of programming really helps me. To use your example of merge sort:
First look at the base case. How do you sort an array of 0/1 elements?
Next, look at the base + 1, base + 2, … cases. Eventually, you should see a pattern (splitting into subproblems, solving subproblems, combining subsolutions) that allows you to write a general recursive case that eventually reaches the base case.
Splitting into subproblems is easy, but combining the subsolutions is a bit harder. You need a way to merge two sorted arrays into one sorted array.
Now put everything together. Congratulations, you've just written merge sort. :)
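The steps above can be sketched as follows (Python used here purely for illustration; the function names are mine):

```python
def merge(xs, ys):
    """Combine two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            result.append(xs[i]); i += 1
        else:
            result.append(ys[j]); j += 1
    return result + xs[i:] + ys[j:]   # append whichever side remains

def merge_sort(xs):
    if len(xs) <= 1:                  # base case: 0/1 elements are sorted
        return xs
    mid = len(xs) // 2                # split into subproblems
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))
```

Notice how the recursive case falls out once the base case and the combining step (merge) are in place.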
I could be misusing the term, but this feels like bottom-up design to me. Functional programming is different from object-oriented programming, but you shouldn't need to totally abandon existing design techniques when switching between the two.
An answer is that functional programming is to program using functions, as they are defined in mathematics (in short, side-effect free things that map values from the domain to the codomain). To actually translate that into "how to think" is the hand-waving part that is difficult to be exhaustive about, but I'll sample some of my thoughts:
The definition is more important than the efficiency. That is, an obviously correct implementation of a function that one can understand all of the behaviour of is better than a complex optimized one that is hard to reason about. (And should be preferred as long as possible; until there is evidence one must break this nice property.)
A mathematical function has no side-effects. A useful program must have side-effects. A functional programmer is aware of side effects, as a very dangerous and complicating thing, and designs the program as a bunch of functions that take output values from one side-effect and creates input values to the next side-effect.
Number one is associated with the vague notion of "elegant code". List comprehensions can give very succinct, mathematical-equation-like definitions of functions. Just look at quicksort implemented with list comprehensions. That is how I define elegance: succinct, and making all behaviours clear. Not like Perl code golf, which is most often terse and cryptic.
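For reference, here is the list-comprehension quicksort the answer alludes to, transcribed into Python (a sketch of the definitional style, not a production sort):

```python
def quicksort(xs):
    """Definition-first quicksort: everything smaller than the pivot,
    then the pivot, then everything at least the pivot."""
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    smaller = [y for y in rest if y < pivot]
    larger = [y for y in rest if y >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

The point is that the code reads like the mathematical definition of the sort, with no mutation or index bookkeeping.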
Number two is something that I use day to day in all programming. Divide the code into functions (methods, routines, etc.) of the current state that are side-effect-free computations producing the inputs to the next action to take (even deciding which action that is). When the value is returned, give it to a routine that performs the action described, then start over.
In my head I diagram an Erlang process as a state-machine graph, where each vertex is a side-effect plus a function whose output determines which edge to choose out of the vertex. This high regard for side effects is something the functional programming paradigm taught me, especially in Erlang, since side effects really matter in concurrency, and Erlang makes concurrency very accessible.
In the same way that some isolated tribes have only one word for numbers above 3, or no words for "mine" and "yours", it feels like popular languages have no word for "this will cause a side effect", but functional programming has one. It forces you to be aware of side effects all the time, and that is a good thing.
After a while, I found that functional programming [...] encourages a "top down" design.
Well, it's not about "top down" or "bottom up" design really. It's about focusing on the "what" of the problem at hand, rather than the "how". When I started off with functional programming, I found that I kept recalling imperative constructs like the nested for loop in C. Then I quickly found out that trying to translate my imperative thinking to functional constructs was very difficult. I'll try to give you a more concrete example. I'll implement an equivalent program in C and Haskell and attempt to trace my thought process in both cases. Note that I've been explicitly verbose for the purpose of explanation.
In C:
#include <stdio.h>

int main(void)
{
    int i, inputNumber, primeFlag = 1;
    scanf("%d", &inputNumber);
    for (i = 2; i <= inputNumber / 2; i++) {
        if (inputNumber % i == 0) {
            primeFlag = 0;
            break;
        }
    }
    if (primeFlag == 0) printf("False\n");
    else printf("True\n");
    return 0;
}
Trace of my thought process:
Think in steps. First, accept a number from the user. Let this number be called inputNumber. scanf() written.
Basic algorithm: A number is prime unless otherwise proven. primeFlag declared and set equal to 1.
Check inputNumber against every number from 2 to inputNumber/2. for loop started. Declared a loop variable i to check inputNumber against.
To disprove our initial assertion that the number is prime, check inputNumber against each i. The moment we find even one i that divides inputNumber, set primeFlag to 0 and break. Loop body written.
After going through our rigorous checking process in the for loop, check the value of primeFlag and report it to the user. printf() written.
In Haskell:
assertPrime :: (Integral a) => a -> Bool
assertPrime x = null divisors
where divisors = takeWhile (<= div x 2) [y | y <- [2..], mod x y == 0]
Trace of my thought process:
A number is prime if it has no divisors but one and itself. So, null divisors.
How do we build divisors? First, let's write down a list of possible candidates. Wrote down the range from 2 to number/2.
Now, filter the list, and pick out only items that are really divisors of the number. Wrote the filter mod x y == 0
I want to get advice about how to "think in functional language"
Ok, first and foremost, think "what", not "how". This can take a lot of practice to get used to. Also, if you were formerly a C/C++ programmer like me, stop worrying about memory! Modern functional languages have a garbage collector, and it's there for you to use, so don't even try to modify variables in place. Another thing that has personally helped me: write down English-like definitions in your program to abstract out the functions that do the heavy lifting.
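As a small illustration of "what, not how" (a Python sketch with a name I chose for this example): the prime test from above can be written to read almost like its mathematical definition, with no flags or mutation.

```python
def is_prime(x):
    # "x is prime" = x >= 2 and no d in [2, x/2] divides x
    return x >= 2 and all(x % d != 0 for d in range(2, x // 2 + 1))
```

Compare this with the imperative C version: the declarative form states the property directly instead of describing a checking procedure.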
I found myself often getting lost in decisions such as "should I use a helper List for intermediate result storage?" and "should I create a helper function to do this?"
My advice for this: read The Little Schemer. You can follow it in Erlang. It's a good book to drill a sense of this into you.
It's important to get used to thinking that data can be used as code, and vice versa.
Usually you construct a program (data) using several primitive operations (folding, nesting, threading, distributing, etc., and some generalized ones such as inner and outer products), and then use this program (data) to manipulate other data.
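For example, a fold treats the combining operation itself as data handed to a generic traversal. A minimal sketch using Python's functools.reduce (the function name product is mine):

```python
from functools import reduce

def product(xs):
    # The combining function is an ordinary value passed to the fold;
    # 1 is the starting accumulator.
    return reduce(lambda acc, x: acc * x, xs, 1)
```

Swapping in a different combining function (addition, max, string concatenation) gives a different program without changing the traversal.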
After a while, I found that functional programming […] encourages a "top down" design.
I agree.

Case statement vs If else in VHDL

What are the main differences between if-else and case statements in VHDL? Both look similar and can sometimes replace each other, but what logic circuit appears after synthesis? When should we go for if-else, and when for a case statement?
Assuming an if-statement and a case-statement describe the same behavior, the resulting circuit is likely to be identical after the synthesis tool has done its translation and optimization.
As Paebbels writes in the comment, the details are described for each tool in the relevant synthesis guide, and there are probably tool-dependent cases where the results differ, but as a general working assumption the synthesis tool will arrive at the same circuit for equivalent if-statements and case-statements.
The critical point is usually to write correct and maintainable VHDL code, and here readability counts, so choose an if-statement or a case-statement depending on what makes the code most straightforward, and don't try to control the resulting circuit through VHDL constructs unless there is a specific reason this is required.
Note that in an if-statement, earlier conditions take priority over later ones, whereas in a case-statement all "when" choices have equal priority.
Remember that VHDL is a parallel programming language and a form of declarative programming (see here), as opposed to procedural programming like C/C++ and other sequential languages.
This means in essence, you are telling or attempting to describe to the compiler with your code what the behavior should be, and not specifically telling it what to do or what the behavior is like with procedural programming. This might be what prompted you to ask the question.
Remember, however, that the sequencing of the if or case will affect synthesis. In today's FPGAs, the combinatorial part of the logic is implemented in lookup tables (LUTs), which are internally built as cascaded arrays of multiplexers grouped together to form LUTs with N inputs (commonly 4; see here for more details), and the compiler decides how to configure these arrays of LUTs.
The ordering can affect the number of cascaded multiplexers the compiler needs before the output is resolved.
Note that although in theory it is possible to get the same behaviour from both if and case, a case statement looks at a single expression and handles each possible outcome, while an if statement can test multiple signals at the same time.
So flexibility? I would say that goes to if. However, with great power comes great responsibility: with if it is easy to use signals from everywhere, and if not done properly this can lead to bad design, ie. coupling of too many signals, where any change is prone to failure because of too many dependencies. Case is suitable for state machines, but that is also true in procedural languages, I suppose.
In addition, if you use too many different signals as conditions for your if, it can affect timing, which may limit your clock frequency if you are working at high speed; and the list goes on: clock skew, the need to constrain signals, etc.

Recognizing patterns in a number sequence

I think this should be an AI problem.
Is there any algorithm that, given any number sequence, can find patterns?
And the patterns could be as abstract as can be...
For example:
12112111211112... (increasing number of 1's separated by 2's)
1022033304440...
11114444333322221111444433332222... (this can be either a repetition of 1111444433332222, or four 1's, then four 4's, four 3's and four 2's...)
And even some errors might be corrected:
1111111121111111111112111121111 (repetition of 1's with intermittent 2's)
No, it's impossible; this is related to the halting problem and Gödel's incompleteness theorems.
Furthermore, some serious philosophical groundwork would need to be done to actually formalize the question. First, what exactly is meant by "recognizing a pattern"? We should assume the algorithm identifies:
The most expressive true pattern. So "some numbers" is invalid, as it is not expressive enough.
The argument would go something like this: assume the algorithm exists, and consider a sequence of numbers that is a code for a sequence of programs. Now suppose we have a sequence of halting programs; by the above, the algorithm must recognize this. It cannot just say "some programs", as that is not maximally expressive, so it must say "halting programs". Now, given a program P, we can add it to the list: if the algorithm still says "halting programs", we may conclude that P halts; if P doesn't halt, the algorithm must say something else, like "some halting programs and one non-halting program". Therefore the algorithm could be used to define an algorithm that decides whether a program halts.
This is not a formal proof, but then it wasn't a formal question :) I suggest you look up Gödel, the halting problem, and Kolmogorov complexity.
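While the fully general problem is undecidable, restricted pattern families are easy to detect. As a hypothetical illustration in Python (the function name is mine), here is a detector for one such family, the shortest repeating period of a digit string:

```python
def smallest_period(s):
    """Return the smallest p such that s consists of repetitions
    of its first p characters (possibly truncated at the end)."""
    for p in range(1, len(s) + 1):
        if all(s[i] == s[i % p] for i in range(len(s))):
            return p
```

This handles the third example sequence above, but of course says nothing about patterns like "increasing runs of 1's"; each pattern family needs its own detector, and no single algorithm covers them all.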

Optimized binary search

I have seen many examples of binary search and many methods for optimizing it. Yesterday my lecturer wrote this code (assume the first index is 1 and the last is N, where N is the length of the array; treat it as pseudocode):
while L < R do begin
    m := (L + R) div 2;
    if A[m] > x then R := m else L := m + 1
end
Here we assume that the array is A. The lecturer said that we don't waste time on an equality comparison with the middle element on every iteration; another benefit is that if the element is not in the array, the final index tells you where it would be located. So is it optimal? Is he right? I have seen many kinds of binary search, from Jon Bentley (Programming Pearls) for example, and so on. Is this code really optimal? It is written in Pascal in my case, but the language doesn't matter.
It really depends on whether you find the element. If you don't, this will have saved some comparisons. If you could find the element in the first couple of hops, then you've saved the work of all the later comparisons and arithmetic. If all the values in the array are distinct, it's obviously fairly unlikely that you hit the right index early on - but if you have broad swathes of the array containing the same values, that changes the maths.
This approach also means you can't narrow the range quite as much as you would otherwise; this:

R := m

would normally be

R := m - 1

...although that would fairly rarely make a significant difference.
The important point is that this doesn't change the overall complexity of the algorithm - it's still going to be O(log N).
also benefit is that if element is not in array,index says about where it would be located
That's true whether you check for equality or not. Every binary search implementation I've seen would give that information.
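For comparison, the deferred-equality variant described in the question might look like this in Python (a sketch using a common formulation; 0-based indices, names are mine). It does exactly one comparison per iteration, and when the element is absent the returned index is exactly the insertion point:

```python
def search(a, x):
    """Binary search with the equality test deferred to the end.
    Returns the index where x is, or where it would be inserted."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid
    return lo  # x is present iff lo < len(a) and a[lo] == x
```

One final equality check after the loop replaces the per-iteration check, which is the saving the lecturer described; the asymptotic cost is still O(log N).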