Turing Machines / Turing Completeness - turing-machines

I was reading the Ethereum white paper and I came across the term Turing-complete. I did some research and found out there is a whole mathematical theory behind it. How can I start learning about Turing machines and Turing completeness on a technical level? I don't care about the connection with Ethereum; I just want to understand the scientific part, ideally with material that escalates from an intro to a technical level. I googled this and found general PowerPoints/PDFs, but that's not what I am looking for. I decided to ask here (for guidance/textbooks) because I am pretty sure there are experts here who work on this topic. Sorry for the general-type question. I'm just a guy who loves mathematics and cryptography, trying to understand how the world functions.
Kind Regards,
Nick

Related

Theory of Computation

Can anybody please explain the use/importance of studying Theory of Computation?
I had a course on the same subject during graduation, but I did not study it seriously.
I also found the following link, where some video lectures are available.
http://aduni.org/courses/theory/index.php?view=cw
Shai Simonson's classes are really very good. I have listened to them. As he says in the initial lecture, 'Theory of Computation' is a study of abstract concepts. But these abstract concepts are really very important to a better understanding of the field of computing, as most of the concepts we deal with have a lot of abstract and logical underpinnings. As John Saunders said in an answer above, you can become a programmer, even a good one, if you know the programming language well. But knowledge of what is going on underneath will always make you an enlightened one. So go ahead and learn it again. (NB: I understand why you didn't study it seriously at college. Most of the teachers in our colleges aren't that good at explaining this topic (I too had a lousy teacher), but I assure you the teacher here is the best you can get.)
I think every computer science student should know some computation theory, even if you won't do any research.
Some concepts are just universal and you will encounter them again and again in other courses. E.g. finite state machines: you need to know them when you are learning string matching algorithms and compilers. Another example: you will learn some reduction algorithms (transforming one model into another) in computation theory; these things teach you how to think abstractly and algorithmically.
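To make the finite-state-machine idea concrete, here is a minimal sketch of a DFA simulator in Python. The states, alphabet, and transition table are invented for the example (they don't come from any particular course); the machine accepts exactly the binary strings with an even number of 1s.

```python
# A minimal deterministic finite automaton (DFA) simulator.
# This example machine accepts binary strings with an even number of 1s.
transitions = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(word, start="even", accepting=frozenset({"even"})):
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]  # exactly one move per symbol
    return state in accepting

print(accepts("1010"))  # two 1s  -> True
print(accepts("111"))   # three 1s -> False
```

The same dictionary-of-transitions shape reappears in lexer generators and string-matching automata, which is why the concept keeps coming back in other courses.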
The greatest of all human faculties is the power of abstraction. That is what separates us from the animal. The more we exercise this power the more successful we are in solving problems.
Playing chess may seem a futile pastime to some, and of no practical use to any, but it goes a long way toward giving the player the ability to think ahead every time an important decision is to be made.
Besides, it reveals the elegance and simplicity that is hidden beneath layers of ugly syntax and brain-dead code we sift through every day just to make a living.
The importance of Computation Theory will depend on what you do with your life. If you want to be a Computer Scientist, then it is an important basis for your future studies.
If you just want to be a Programmer or Software Engineer, then you will probably never use the knowledge again.
In addition to the usefulness of various tools (regular expressions, context free grammars, state machines etc.) in your daily life as a programmer, a good theoretical computer science course will have taught you how to model certain problems in a way that you can tackle effectively.
Solutions that seem clever to people without training in this discipline will seem natural and "the right way" to people who have. I recommend that you pay close attention to what's going on in your course since it will give you a very powerful toolset that will help you as a programmer and as an abstract thinker.
It's really not without its practical aspects with regard to software engineering.
For example,
you may be tempted to parse some programming language as input to your program with regular expressions.
CS theory proves why this is a bad idea (most programming language syntaxes are not regular), a limitation that can never be overcome no matter how hard you try.
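A quick sketch of why: balanced parentheses are the classic non-regular language, because a finite automaton cannot count unbounded nesting depth. A counter (the simplest form of a stack) handles what no regular expression can. The code below is illustrative, not from any textbook:

```python
import re

# A regular expression can check *flat* pairs like "()()" ...
flat = re.compile(r"^(\(\))*$")

# ...but no regex recognizes arbitrarily nested parentheses.
# A counter -- effectively a one-symbol stack -- does the job:
def balanced(s):
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:   # a ")" with no matching "("
                return False
    return depth == 0

print(bool(flat.match("()()")))  # True  - flat pairs are regular
print(bool(flat.match("(())")))  # False - the flat regex misses nesting
print(balanced("(())"))          # True  - the counter handles any depth
```

This is exactly the step up the Chomsky hierarchy from regular to context-free that a theory course makes precise.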
Other examples may include NPC problems, etc.
Basically, CS Theory can teach you many important things with regards to reasoning. But it also describes the fundamental limits to programming and algorithms.
"Know your limits"
Theory of computation is sort of a hinge point among computer science, linguistics, and mathematics. If you have intellectual curiosity, then expose yourself to the underlying theory. If you just want to dip lightly into making computers do certain things, you can probably skip it. Me? I loved it. But I also liked topology, so I may not be a typical developer in that respect.
Some practical examples:
Before spending a lot of time on a problem you'll want to know:
If the problem can't be solved.
If there is a "good" (polynomial-time) solution, as some problems may not have "good" solutions (or at least, none that we currently know of ;))
(A bit less practical) you'll want to know if a problem is "harder" than another, that is, takes more time/space.

Where can I find sample automata and turing machines? [closed]

I'm studying for an automata test in a course that's heavily based on JFLAP. Trouble is we don't have much documentation, and the sample automata that I've found on JFLAP, like this and this, are insufficient to prepare for the upcoming test.
Where can I find more? Any other resource with sample turing machines shown as graphs with transitions would also be helpful.
"Problem solving in automata, languages and complexity" is a fantastic textbook for anything related to... anything in its title. Among other things, you can find a bunch of examples of DFAs/NFAs/PDAs/TMs for all sorts of things, and they teach you a lot of techniques for building them.
Edit: that first link of yours keeps talking about "nondeterministic NPDAs" and "deterministic NPDAs". I'm writing this edit just to satisfy my urge to denounce such pleonasms and oxymora :)
Try Michael Sipser's excellent book "Introduction to the Theory of Computation". The automata and Turing machines are all expressed as state diagrams, with sufficient text explanation to help you interpret and implement them.
This was our course text at Uni about 4 years ago, just before the 2nd edition came out; it was a real rock, I heartily recommend it!
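As a complement to the state-diagram presentations in those books, a Turing machine is easy to simulate in a few lines of Python. The machine below is a made-up example (not taken from Sipser): its transition table maps (state, symbol) to (new state, symbol to write, head move), and it flips every bit on the tape before halting on the blank symbol.

```python
# A minimal one-tape Turing machine simulator (illustrative example).
# delta maps (state, symbol) -> (new_state, symbol_to_write, head_move).
# This machine flips every bit and halts on the blank symbol "_".
delta = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}

def run(tape, state="q0"):
    cells = list(tape) + ["_"]   # blank-padded tape
    head = 0
    while state != "halt":
        state, cells[head], move = delta[(state, cells[head])]
        head += move
    return "".join(cells).rstrip("_")

print(run("1011"))  # -> "0100"
```

Re-implementing a JFLAP diagram as a table like this is a decent way to check you've read the transitions correctly.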

I am a newbie in Java programming; should I start learning AI programming now? If so, where should I start?

I am a CSE student, learning Java for my courses. Should I start learning AI programming? I am very much interested in it. If so, where should I start?
Echoing csd2421, AI is not something generally suitable for those new to programming.
Generally speaking, though, introductory AI courses (in my own experience as a student) first deal with state space search: good old breadth-first search, depth-first search, uniform cost search, and A* for a bit of spice. More than just programming and implementing them, the point is to understand the differences in how each operates and traverses the state space.
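Breadth-first search, for instance, fits in a few lines. This is a minimal sketch with a made-up graph, not an assignment solution; because BFS expands states in order of distance, the path it returns is shortest by edge count.

```python
from collections import deque

# Breadth-first search over an explicit state space (graph is invented).
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def bfs(start, goal):
    frontier = deque([[start]])      # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:   # avoid revisiting states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return []                        # goal unreachable

print(bfs("A", "E"))  # -> ['A', 'C', 'E']
```

The variations are small edits: pop from the same end for depth-first search, or replace the FIFO queue with a priority queue keyed on path cost (uniform cost search) or cost plus a heuristic (A*). Seeing them as one algorithm with a swappable frontier is the real lesson.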
The UC Berkeley Pacman assignments, for example, do a good job of beginning small with the previously mentioned state space search, and then using those concepts to build up to more complicated AI techniques such as Minimax, particle filtering, Bayes' nets, and more.
Copies of those assignments can be found floating around the internet, like here. Solutions for the problems are also sitting about various places/public repositories.
Of course, that is all in python and does nothing for you on the Java end of things.
If you don't mind language agnostic recommendations, Artificial Intelligence: A Modern Approach by Russell and Norvig is considered an excellent all-around book on AI.
Lastly, as a student I recommend keeping an eye and ear out for AI related courses.
AI programming will be difficult to learn if you do not understand the fundamentals of computer programming. You may wish to hone your skills further before tackling a concept as difficult as machine learning. That said, I know Stanford offers an online course in AI, which you may find interesting. You earn a certificate when you successfully complete the course. Coursera also offers courses in AI, which you can see here: https://www.coursera.org/course/ml
I wish you the best of luck in your endeavors. Computer science is a difficult field, but also very rewarding!

How to learn agda

I am trying to learn Agda. However, I have a problem: all the tutorials I found on the Agda wiki are too complex for me and cover different aspects of programming. After reading 3 tutorials on Agda in parallel, I was able to write simple proofs, but I still don't have enough knowledge to use it for real world algorithm correctness.
Can you recommend any tutorials on the subject? Something similar to Learn You a Haskell, but for Agda.
When I started learning Agda about a year ago I think I tried all available tutorials and each taught me something new.
You should probably give Coq a try, because it has a larger user base and there are two nice books available for it:
Coq'Art - slightly dated, but beginner friendly
Certified Programming with Dependent Types
Software Foundations is also very nice.
The nice thing is that the theories Agda and Coq are based on are somewhat similar, so many examples can be translated from one to the other. Programming in Martin-Löf's Type Theory is a really nice and readable introduction to dependent type theory; it can clear some things up for you.
It would help to know what you mean by "real world algorithms". Many example developments are described in papers that mention Agda.
Conor McBride gave a great series of lectures last year on dependently-typed programming using Agda. It's a good place to go if you want a break from poring over terse tutorials on the topic. I believe there are also accompanying exercises.

classical AI, ontology, machine learning, bayesian

I'm starting to study machine learning and Bayesian inference applied to computer vision and affective computing.
If I understand right, there is a big discussion between
classical AI, ontology, and semantic web researchers
and the machine learning and Bayesian guys.
I think it is usually referred to as strong AI vs weak AI, related also to philosophical issues like functionalist psychology (the brain as a black box) and cognitive psychology (theory of mind, mirror neurons), but this is not the point in a programming forum like this.
I'd like to understand the differences between the two points of view. Ideally, answers will reference examples and academic papers where one approach gets good results and the other fails. I am also interested in the historical trends: why approaches fell out of favour and newer approaches began to rise. For example, I know that exact Bayesian inference is computationally intractable (an NP-hard problem in general), and that's why for a long time probabilistic models were not favoured in the information technology world. However, they've begun to rise in econometrics.
I think you have several ideas mixed up together. It's true that there is a distinction drawn between rule-based and probabilistic approaches to 'AI' tasks; however, it has nothing to do with strong or weak AI, very little to do with psychology, and it's not nearly as clear-cut as a battle between two opposing sides. Also, I think saying Bayesian inference was not used in computer science because inference is NP-hard in general is a bit misleading. That result often doesn't matter much in practice, and most machine learning algorithms don't do real Bayesian inference anyway.
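For readers new to the term: in the simplest case Bayesian inference is just an application of Bayes' rule; the intractability only shows up with many interdependent variables. Here is a toy diagnosis sketch with invented numbers:

```python
# Toy Bayesian inference via Bayes' rule (all numbers are invented).
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease = 0.01              # prior probability of the disease
p_pos_given_disease = 0.95    # test sensitivity
p_pos_given_healthy = 0.05    # false-positive rate

# Total probability of a positive test:
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))  # -> 0.161: the evidence updates the 0.01 prior
```

Exact inference in a large Bayesian network amounts to many such sums over joint configurations, which is where the NP-hardness comes from, and why practical systems use approximations.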
Having said all that, the history of Natural Language Processing went from rule-based systems in the 80s and early 90s to machine learning systems up to the present day. Look at the history of the MUC conferences to see the early approaches to the information extraction task. Compare that with the current state of the art in named entity recognition and parsing (the ACL wiki is a good source for this), which are all based on machine learning methods.
As far as specific references, I doubt you'll find anyone writing an academic paper that says 'statistical systems are better than rule-based systems' because it's often very hard to make a definite statement like that. A quick Google for 'statistical vs. rule based' yields papers like this which looks at machine translation and recommends using both approaches, according to their strengths and weaknesses. I think you'll find that this is pretty typical of academic papers. The only thing I've read that really makes a stand on the issue is 'The Unreasonable Effectiveness of Data' which is a good read.
As for the "rule-based" vs. "probabilistic" thing, you can go for the classic book by Judea Pearl, "Probabilistic Reasoning in Intelligent Systems". Pearl writes with a strong bias towards what he calls "intensional systems", which is basically the counterpart to rule-based stuff. I think this book is what set off the whole probabilistic thing in AI (you can also argue the time was due, but then it was THE book of its time).
I think machine-learning is a different story (though it's nearer to probabilistic AI than to logics).
