Reduction from ATM to ATM-complement

Is there a reduction from ATM to ATM-complement?
I have been thinking about it too much and couldn't find the answer.
I know that a reduction from ATM-complement to ATM is not possible, because if there were one, ATM-complement would be in RE, and it is not. But how can I prove the other direction?
Thank you very much :)

There is no mapping reduction from (ATM)c to ATM. To see this, note that ATM is Turing-recognizable, so if (ATM)c ≤m ATM, then (ATM)c would also be Turing-recognizable. But that's impossible: if (ATM)c were Turing-recognizable, ATM would be decidable, because any language that is both Turing-recognizable and co-Turing-recognizable is decidable.
However, there is a Turing reduction from (ATM)c to ATM. Just invoke the subroutine for ATM and return the opposite result.
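That "flip the oracle's answer" idea can be sketched in a few lines. The oracle here is just a stand-in parameter (no real program can implement a true A_TM oracle), and the toy oracle uses plain Python predicates as "machines" purely for illustration:

```python
def decide_atm_complement(machine, word, atm_oracle):
    """Turing reduction from (A_TM)^c to A_TM: ask the oracle whether
    the machine accepts the word, then return the opposite answer."""
    return not atm_oracle(machine, word)

# Toy stand-in oracle: here "machines" are ordinary Python predicates,
# which sidesteps the halting issue a genuine A_TM oracle would face.
def toy_oracle(machine, word):
    return machine(word)

accepts_ab = lambda w: w == "ab"
print(decide_atm_complement(accepts_ab, "ab", toy_oracle))  # False
print(decide_atm_complement(accepts_ab, "cd", toy_oracle))  # True
```

Note this is exactly why Turing reductions are weaker evidence of similarity than mapping reductions: a mapping reduction must answer by running the target machine once and keeping its answer, so it can never negate.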


How to determine whether a heuristic for an algorithm, say A*, is a good one

I am currently learning the A* algorithm. I know it uses a heuristic value when finding a potential path, and I also understand what consistent and admissible mean for a heuristic. But I am confused about what kind of heuristic is good, and why it is good.
By the way, how do heuristics work?
Selecting a heuristic is, in my opinion, mostly dependent on the problem. Still, selecting one becomes easier if the problem is understood in a goal-oriented fashion. At least that's what I do. The idea I follow is this:
The heuristic evaluates to zero at the goal state.
So what are all the scenarios? What are all the functions that yield zero at the goal?
Possible heuristics (taking a Pac-Man-style food-collection problem as an example):
Number of food pellets left?
Distance from the current position to the next unexplored pellet?
Unexplored area in the grid with food pellets, etc.
I would go with the last option, since it seems the most reliable, though all three will eventually lead to the solution.
So, I believe, you select a heuristic by putting yourself at the goal state and then looking back to see what you have accomplished along the way. In a sense, a heuristic is nothing but an approximation of what remains to be accomplished (which evaluates to 0 at the goal).
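To make the "evaluates to zero at the goal" idea concrete, here is a minimal A* sketch on a toy grid using Manhattan distance as the heuristic. Manhattan distance is admissible and consistent for 4-directional unit-cost movement and is 0 exactly at the goal; the grid and all names are made up for illustration:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (open) / 1 (wall) cells.
    Returns the length of a shortest path, or None if unreachable."""
    def h(p):  # Manhattan distance: admissible, consistent, 0 at goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6 (must detour around the wall)
```

With h ≡ 0 this degenerates into Dijkstra's algorithm; a better-informed (but still admissible) heuristic only prunes the search, never changes the answer.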

How to implement long division for enormous numbers (bignums)

I'm trying to implement long division for bignums. Unfortunately I can't use a library like GMP due to the limitations of embedded programming. Besides, I want the intellectual exercise of learning how to implement it. So far I've got addition and multiplication done using any-length arrays of bytes (so each byte is like a base-256 digit).
I'm just trying to get started on implementing division/modulus, and I want to know where to start. I've found lots of highly optimised (a.k.a. unreadable) code on the net, which doesn't help me, and I've found lots of highly technical mathematical whitepapers from which I can't bridge the gap between theory and implementation.
If someone could recommend a popular algorithm and point me to a simple-to-understand explanation of it that leans towards implementation, that'd be fantastic.
-edit: I need algorithms which work when the dividend is ~4000 bits and the divisor ~2000 bits.
-edit: Will this algorithm work with base 256?
-edit: Is this algorithm (Newton division) the one I should really be using?
If you want to learn, then start with the pencil-and-paper method you used in elementary school. Believe it or not, that is essentially the same O(n^2) algorithm used in most bignum libraries for numbers in the range you are looking at. The tricky first step, called "quotient estimation", will probably be the hardest part to understand. Once you understand that, the rest should come easily.
A good reference is Knuth's "Seminumerical Algorithms". He has many discussions about different ways to do quotient estimation both in the text and in the exercises. That book has chapters devoted to bignum implementations.
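As a warm-up for the pencil-and-paper method, here is a sketch (in Python, for readability) of "short division" of a base-256 digit array by a single base-256 digit; in this special case every quotient digit is exact, so no estimation is needed. For a ~2000-bit divisor you need the full schoolbook method (Knuth's Algorithm D), whose quotient-estimation step guesses each digit from the divisor's leading digits and then corrects. Function names here are illustrative:

```python
def divmod_base256(dividend, divisor):
    """Short division: dividend is a list of base-256 digits, most
    significant first; divisor is a single digit in 1..255.
    Works exactly like pencil-and-paper long division."""
    quotient, remainder = [], 0
    for digit in dividend:
        acc = remainder * 256 + digit    # "bring down" the next digit
        quotient.append(acc // divisor)  # single-digit case: exact
        remainder = acc % divisor
    while len(quotient) > 1 and quotient[0] == 0:
        quotient.pop(0)                  # strip leading zero digits
    return quotient, remainder

q, r = divmod_base256([0x12, 0x34, 0x56], 7)  # 0x123456 divided by 7
print(q, r)  # [2, 153, 195] 1  i.e. 0x0299C3 remainder 1
```

The multi-digit generalization keeps the same "bring down a digit, produce a quotient digit, keep a remainder" loop; the only new machinery is estimating each quotient digit (and occasionally correcting it by 1 or 2), which is exactly the part Knuth treats in depth.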
Are you using the void four1(long double[], int, int) routine in your code, then convolving and doing an inverse transform? I got multiplication to work that way, but when I tried to do division the same way it spat out one result and then quit, so I cannot help there. However, if you have the tome called "Numerical Recipes in C++", look near the end: what you are looking for starts on page 916 and runs to page 926.
This question is over 2 years old, but for numbers of this size you can look at the OpenSSL source code. It does RSA with numbers of this size, so it has lots of math routines optimized for 1000- to 4000-bit numbers.

Multiple levels of infinity [closed]

Some programmers don't see much relevance in theoretical CS classes (especially my students). Here is something I find very relevant. Let me build it up in pieces for those that haven't seen it before...
A) Programming problems can be reworded to be questions about languages.
B) Turing machines recognize languages.
C) Turing machines can be encoded as (large) integers.
D) Therefore, the number of possible Turing machines is countably infinite.
E) The power set of a set is just all the possible subsets of that set.
F) If a set is countably infinite, its power set is bigger, i.e., uncountably infinite.
G) Therefore, if a language is infinite, it has an uncountably infinite number of subsets. Each of these represents a problem. But there are only countably many Turing machines with which to solve those problems. And if we cannot solve a problem with a Turing machine, it cannot be solved.
Conclusion...we can only solve an infinitesimally small fraction of all problems.
My question is almost here...
Whenever I present this argument to students, they get stuck on countably vs. uncountably infinite. They generally do not have strong math backgrounds, so attempts to explain via Cantor's diagonalization argument tend to make their eyes glaze over.
Usually I try to give them something they can grasp, like this: place a finite box over any portion of the counting number line, and we capture a finite quantity of those numbers; but place a finite box over any portion of the real number line, and we capture an infinite quantity of real numbers. A sort of evidence that there ARE more real numbers than there are counting numbers.
Finally my question...How do YOU explain the concept of multiple levels of infinity to those that have never heard of the concept, and may not be mathematically inclined?
Final Edit: I learned a lot by asking this question and I appreciate the feedback. I wasted far too much time trying to figure out what "Community wiki" actually was. I learned there is an inherent bias in some people against theory questions that I feel is simply a mistake because so much of what we do today was theory yesterday. But this bias is natural and while I disagree with them on the value of theory, I have no problem with it, and it helps me understand where my students are coming from. I do think the BS comment was unnecessary.
I do not feel this question was a poll or a predictions-for-2009 question at all. Those of you who only want coding questions with coding answers might want to re-examine that requirement. I have moved this question to community wiki, but I strongly feel I was compelled to do so by improper use of force.
I think your explanation is the simplest, as that is what I learned. It's almost as if real numbers have multiple dimensions of infinity. It is infinite in one direction, but also in another.
Diagonalization is a very cool experiment, but I can see how it may go over beginners' heads. It does make sense, though, if it is demonstrated in a very deliberate way, going very slowly. Just throwing numbers up quickly can be hard to follow, I imagine.
I think the principle of the cardinality of the continuum is also helpful, although it can perhaps be simplified to a beginner level. Showing that there is more beyond a simple reals-vs.-integers comparison can potentially help something 'click'.
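When demonstrating diagonalization deliberately, it can help to actually run it. Given any finite table of 0/1 sequences (the table below is arbitrary), flipping the diagonal produces a row that provably differs from every row in the table, which is the whole trick:

```python
def diagonal(rows):
    """Cantor's diagonal argument, made concrete: build a sequence that
    differs from the i-th row in its i-th position, so it cannot equal
    any row in the table, no matter how the table was chosen."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal(rows)
print(d)  # [1, 0, 1, 0] -- differs from row i at position i
```

The leap for students is only that the same construction works for an infinite table: any claimed complete list of infinite 0/1 sequences is defeated by its own diagonal.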
My recommended first step for teaching levels-of-infinity to people of limited mathematical background is "Why do mathematicians say that the set of even numbers and the set of whole numbers are the same size?" This introduces the idea that if you can associate every member of set A with exactly one member of set B, mathematicians say the sets have the same size.
Next comes showing that every fraction (every rational number) can be associated with exactly one counting number, using the diagonal method. Once they're satisfied with that, I bring up π, which everyone knows has an infinite number of non-repeating digits in its decimal expression, which means it cannot be expressed as a fraction, so it will be left over from that pairing, and that suggests the set of irrational numbers is larger than the set of counting numbers.
Some wiseguys will object that π has a finite number of digits if you're working in base π, namely 1π, but you can come back at them with "okay, brainiac, write down the number of days in a week in base π."
Where's the "very relevant" part?
Edit: OK, I've been writing code professionally for 13 years and I wouldn't call levels of infinity relevant to anything I've ever worked on.
And I guess I would draw a different conclusion from your theory. How is "we can only solve an infinitesimally small fraction of all problems" the limit of our craft?
Sounds to me like there are an infinite (countable or uncountable doesn't seem to make a difference) number of problems. Therefore our craft is unlimited -- we will never run out of problems to solve.
There are several tens of thousands of words in the English language. You can count the number of words in a book, or the number of books in the universe. You cannot count the number of books that will ever be written.
Forgive the poorly written metaphors below.
I personally think of the countability/uncountability dichotomy as being very closely related to Zeno's dichotomy paradox, the one about forever halving the distance to a destination.
The set of all natural numbers is countable: there is a specific method of generating the "next" integer, and it will get you a step forward. Countable sets are forward-moving in that sense. It's almost as if the set has a velocity; it keeps moving forward.
The set of all real numbers is uncountable, like Zeno's ever-halving path.
If you have to move between the origin (0) and the destination (1 = 2^0), you must first go through the midpoint (1/2 = 2^-1).
Now your destination is 1/2; to go between the origin (0) and 1/2, you must go through the new midpoint (1/4 = 2^-2).
So on and so forth: to get between 0 and 1, you must first get to something in between, which in turn requires first getting to something in between that. There is no finite method of calculating the "next" step, so the velocity (in contrast to the velocity of the natural numbers) doesn't really exist; your next step is not going to take you anywhere.
I realize now that this probably has to do with mapping the set of natural numbers onto countable sets. If you can't enumerate the items of a set, that is, if you can't create a method to determine what the next item is, chances are it's uncountable.
G) Therefore, if a language is infinite, it has an uncountably infinite number of subsets. Each of these represents a problem.
Citation needed. You can't merely assume that every (possibly infinite) subset of a language necessarily represents a distinct 'problem'. At the very least, you have to (separately) formalize the definition of 'problem' as rigorously as Turing machines have been formalized.
Programmers (or at least, myself) don't often have to worry much about infinity in this way. When you place a finite box over any portion of the machine-representable real number line, you get a finite quantity of real numbers. =)
For example, a double precision variable has a finite number of possible values: 2^64.
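That finiteness is easy to poke at from code. `math.nextafter` (Python 3.9+) steps to the very next representable double, showing that the machine's "real number line" is discrete:

```python
import math

# The next representable double after 1.0 is exactly 1.0 + 2**-52:
# between any two machine doubles there are only finitely many others.
step = math.nextafter(1.0, 2.0) - 1.0
print(step == 2.0 ** -52)  # True
print(2.0 ** 64)           # upper bound on distinct 64-bit patterns
```

So any "finite box" over the machine-representable line really does capture a finite (and easily bounded) set of values, unlike the mathematical reals.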
Here's an example of a computable problem: at the start of a chess game, can White force a win?
The number of possible moves and counter-moves is finite. All we have to do is build the game tree and prune it. We haven't done this yet only because with current technology it would take billions of years.
Here is an example of a problem that is not computable: Given a two-dimensional view of a scene, construct a full three-dimensional model of the scene.
We do this all the time. (Make a room with a peephole in the door. Have someone furnish it. Look through the hole and describe everything you see.)
We do not compute the incomputable. We produce an approximate result (just as we compute and use an approximate value of pi, a number whose digits we can never finish writing down). We keep updating the result as more information comes in. That's what optical illusions are all about. When you look at the picture of "a vase, or is it two faces?", your visual system says "It's a vase. No. Wait. It's two faces. No. Wait. It's a vase." You see it switching back and forth between the two interpretations.
Just because something is not computable is no reason not to do it.
Conclusion...we can only solve an infinitesimally small fraction of all problems.
You must be a web designer.

NP-Complete reduction (in theory)

I want to embed 3 NP-complete problems (2 of them are known to be NP-complete; 1 of them is my own idea). I saw "this question" and got an idea about reinterpreting the embedded problems in theory:
The Waiter is The Thief.
Tables are stores.
Foods are valued items which have different weights.
The thief knows all the items' prices and weights before the robbery.
His target is the most efficient robbery (maximum knapsack capacity used, most valuable items taken) while robbing (taking at least 1 item from) every store (the shortest tour that completes the robbery, visiting each store exactly once).
This part embeds 2 NP-complete problems.
My idea is that more items mean more bag weight, and more bag weight slows the thief down exponentially. So another target of the thief should be finishing the robbery as quickly as he/she can.
At this point, I'm not sure that my idea is actually NP-complete. Maybe "gravity" is not an NP-complete problem alone. But maybe it is in this context of the travelling salesman and knapsack problems.
So my questions are:
Is the slowing down of the thief NP-complete, too?
Is it possible to reduce those three embedded problems to a simple NP-complete problem?
Okay, that was just a bit tough to follow, but I think I'm getting the gist.
The XKCD cartoon is showing you how easy it is to make a real-life problem NP-complete. (Of course, since most menus have a small number of items and a uniform set of prices, it's also easy to show that there is a trivial answer.)
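The cartoon's appetizer problem is small enough to brute-force. This sketch uses the prices from the xkcd strip (in cents, to avoid floating-point trouble) and enumerates order multiplicities, printing every combination that hits the $15.05 target exactly; the exponential blow-up as the menu grows is precisely where the NP-completeness bites:

```python
from itertools import product

# Prices in cents from the xkcd "NP-complete" menu; target is $15.05.
menu = {"mixed fruit": 215, "french fries": 275, "side salad": 335,
        "hot wings": 355, "mozzarella sticks": 420, "sampler plate": 580}
target = 1505

# Brute force over item multiplicities -- fine for a 6-item menu,
# hopeless as the menu grows.
items = list(menu.items())
solutions = []
for counts in product(*(range(target // price + 1) for _, price in items)):
    if sum(c * p for c, (_, p) in zip(counts, items)) == target:
        solutions.append({name: c for c, (name, _) in zip(counts, items) if c})
for s in solutions:
    print(s)
```

Among the printouts are the strip's famous answers: seven orders of mixed fruit, and one mixed fruit plus two hot wings plus one sampler plate.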
The idea of "embedding" an NP-complete problem I think you're referring to is finding a poly-time reduction; I've written that up pretty completely elsewhere on SO.
This is a bit confusing, but here are my answers to some possible questions.
The combination of two NP-complete problems is going to be NP-complete. In fact, the combination of an NP-complete problem with any other problem in NP is going to be NP-complete.
I don't see how to evaluate whether the gravity problem is NP-complete on its own, because it isn't on its own. If the time between points depends on distance as well as backpack weight, then it's NP-complete because it's part of the Traveling Salesman problem. If it doesn't, then the right solution is to pick up objects lightest to heaviest.
The combined problem is a combination of two problems (which objects to steal, and which route to take), and doesn't look any more interesting to me than the two separately, since you can solve one without worrying about the other. Adding weight-dependent delays can couple the problems so they aren't independent, but you need an evaluation function other than how fast you can commit the optimum theft (the optimum theft is its own problem, and then it's just a modified TSP).
Nor, in general, are you going to be able to take problems, couple them, complicate them, and then reduce them to a simpler problem.
I honestly have no idea what you are asking. But it might be you are asking how you prove one problem NP-complete by converting it to another NP-complete problem.
The answer to this is that you write an algorithm which runs in polynomial time to convert any instance of a known NP-complete problem into an instance of your problem (and another polynomial algorithm to convert solutions back). That shows your problem is at least as hard as the known one; showing that proposed solutions to your problem can be checked in polynomial time completes the NP-completeness proof.
For more details read a decent textbook or see the Wikipedia page.
Thanks for the helpful comments. The cartoon gave me an idea about embedding problems. I was in a bit of a hurry when writing, so I made many writing mistakes. My main language is not English either, so the editors made my question more understandable. I also welcome further comments for more brainstorming.
Charlie Martin, thanks for your link.
The cost of carrying extra weight is not a problem by itself, but rather a parameterization of the edge-weights of your Traveling Salesman Problem.
The decision version of this problem is still NP-complete, because a) we can still quickly check whether a given tour has cost less than k, so it is in NP, and b) Hamiltonian Cycle still reduces to our TSP with carrying costs (in the reduction we just give graph edges weight 1, non-edges a larger weight, and set all carrying costs to 0).
In other words, the carrying costs just made our TSP harder, so it is still NP-hard (and can be used to solve any NP-complete problem), but it did not make it hard enough that we cannot quickly check a proposed solution to the decision problem: "Does this tour cost less than c?", so it is still NP-complete.
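The Hamiltonian-Cycle-to-TSP reduction underlying that argument can be sketched directly: graph edges get weight 1 and non-edges weight 2, so a tour of cost at most n exists exactly when the original graph has a Hamiltonian cycle. The brute-force decision procedure here is exponential and purely for illustration:

```python
import itertools

def hc_to_tsp(n, edges):
    """Reduction sketch: Hamiltonian Cycle -> TSP decision.
    Graph edges get weight 1, non-edges weight 2, so the graph has a
    Hamiltonian cycle iff there is a tour of cost <= n."""
    w = {}
    for i in range(n):
        for j in range(i + 1, n):
            cost = 1 if (i, j) in edges or (j, i) in edges else 2
            w[(i, j)] = w[(j, i)] = cost
    return w, n  # the TSP instance and the budget k

def tsp_decision(n, w, k):
    """Brute-force TSP decision: is there a tour of cost <= k?"""
    return any(
        sum(w[(tour[i], tour[(i + 1) % n])] for i in range(n)) <= k
        for tour in itertools.permutations(range(n))
    )

# The 4-cycle has a Hamiltonian cycle, so the budget is met.
w, k = hc_to_tsp(4, {(0, 1), (1, 2), (2, 3), (3, 0)})
print(tsp_decision(4, w, k))  # True
```

Running the same construction on a star graph (which has no Hamiltonian cycle) yields a cheapest tour strictly above the budget, as the reduction requires.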
The (NP-complete) decision version of the Knapsack problem is independent, and can be solved sequentially with the TSP problem.
So the entire problem is NP-complete, and if we reduce the TSP problem and the Knapsack problem to SAT (reduction normally isn't done in this direction, but it is theoretically possible), then we can encode the two together as one SAT instance.

Why not use trinary logic instead of binary logic in the computer world? [closed]

I was wondering what computers would look like if trinary logic were used. It seems like the bigger the base, the more memory can be utilized. I'll explain.
A binary address of length 32 -> allows you to represent 2^32 possible values.
A trinary address of the same length -> 3^32, which is ~431,439 times bigger than the binary one.
It seems much better. Also, the hardware could do it simply enough: 2 means strong current, 1 means weak current, and 0 means no current. Of course it is much more complicated than that, but the idea is simple. However, I couldn't find any reference to recent research or any new computer using this kind of logic.
So, my question is: why not use 3-valued logic, or any n-valued logic (n > 2)? What is stopping us from doing that?
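The arithmetic in the question checks out; a quick sketch comparing digit counts and the 3^32 / 2^32 ratio:

```python
def to_base(n, b):
    """Digits of n in base b, most significant first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1] or [0]

print(len(to_base(2**32 - 1, 2)))  # 32 binary digits
print(len(to_base(2**32 - 1, 3)))  # only 21 trits cover the same range
print(3**32 // 2**32)              # 431439: the ratio quoted above
```

Put another way: a 32-trit word holds about 431,439 times as many distinct values as a 32-bit word, or equivalently, 21 trits already cover the whole 32-bit range.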
These already exist. In fact, one of the first computers (the Soviet Setun) used ternary logic, and indeed Knuth believes that, due to their efficiency and elegance, we will eventually move back to using them.
I'm surprised you didn't find anything on this in computer architecture/digital logic books! It is possible to do trinary or polynary logic on chips; the question is not so much about the logic as about the electrical threshold calculations.
An on/off (1/0) signal is not purely off when it's a 0; there is a threshold value, i.e., anything below this voltage level should be considered off and anything above it on. Now YOU come along and say let's go trinary, and the transistors start feeling the pressure. They are now supposed to be much more accurate, i.e., to honor multiple thresholds, and must be finely tuned so that these threshold boundaries are properly obeyed.
Let's assume you've got the thresholds out of the way; then you have the problem of the human mind :) Which do you like better:
1100110011 or 1122110022
I prefer the former, but maybe that's just me. Ternary logic systems DO exist! In fact, quantum computing takes it a leap further with multiple states!
The thing is, you CAN do it; the question is, is it worth it? Going by the evidence, binary dominates and definitely seems worth it!
At their base, computers use switches, which have two states: on and off. When dealing with electric current at the most basic level, those are your two options. While in theory you probably could have different amounts of current count as different digits, it would be complicated.
The book Code, by Charles Petzold, explains how computers work, from the ground up all the way through building a basic processor. I think you'll gain a lot by giving it a read.