How can I show a reduction from every language in RE to HP?

L is considered to be Hard-RE if for every L' in RE there is a reduction from L' to L (L' ≤ L).
L is considered to be Complete-RE if L is Hard-RE and L is also in RE.
How can I prove that HP is Complete-RE? I will need to show a reduction from every language in RE to HP.

(Assuming RE means recursively enumerable and HP means the halting problem)
HP is trivially in RE
Consider the Turing machine that, given the description a of a Turing machine M[a] and some input b, just simulates running M[a] with input b until it halts, and when it does, outputs TRUE. This machine outputs TRUE iff M[a] halts on b, and diverges when M[a] diverges; so it is a partial decider for the halting problem.
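This partial decider can be sketched in Haskell under a toy model, with a computation represented as a (possibly infinite) chain of steps (the names Comp and semiHP are illustrative, not from the post):

```haskell
-- Toy model: a computation is a chain of steps that either reaches
-- a halting state with a Boolean output, or goes on forever.
data Comp = Step Comp | Halt Bool

-- Partial decider for halting: simulate step by step.
-- Returns True as soon as the computation halts; diverges otherwise,
-- exactly mirroring the behaviour described above.
semiHP :: Comp -> Bool
semiHP (Halt _) = True
semiHP (Step c) = semiHP c

main :: IO ()
main = print (semiHP (Step (Step (Halt False))))  -- halts after two steps: True
```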
HP is RE-Hard
That is, any language L in RE is reducible to HP: since L is in RE, there exists a Turing machine M[m(L)] such that for any input b, M[m(L)](b) does one of the following:
1. b ∈ L, and M[m(L)](b) halts with output TRUE.
2. b ∉ L, and M[m(L)](b) halts with output FALSE.
3. b ∉ L, and M[m(L)](b) diverges.
So how do we turn that into a proper decider for L, given HP? Easy peasy! Just run your HP oracle on (m(L), b)!
If HP(m(L), b) returns TRUE, we know that M[m(L)](b) converges, so just run it and return whatever TRUE/FALSE answer it produces.
If HP(m(L), b) returns FALSE, we know that M[m(L)](b) diverges, which it is only allowed to do when b ∉ L (see case 3 above). So we don't run M[m(L)](b); instead, we just output FALSE.
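Using the same toy model of computations as step chains, the oracle-based decider can be sketched like this (Comp, run, and decideL are illustrative names; the oracle is passed in as a function, since a real halting oracle is of course not computable, and the test below supplies its answers by hand):

```haskell
-- Toy model: a computation is a chain of steps ending in Halt,
-- or an infinite chain (divergence).
data Comp = Step Comp | Halt Bool

-- Run a computation to completion; only safe when it is known to halt.
run :: Comp -> Bool
run (Halt b) = b
run (Step c) = run c

-- Decide membership in L using a (hypothetical) halting oracle hp:
-- if M[m(L)](b) halts, simulate it and return its verdict;
-- if it diverges, that is only allowed when b is not in L, so answer False.
decideL :: (Comp -> Bool) -> Comp -> Bool
decideL hp c
  | hp c      = run c
  | otherwise = False

main :: IO ()
main = do
  print (decideL (const True) (Step (Halt True)))      -- halts and accepts: True
  print (decideL (const False) (let c = Step c in c))  -- diverges: False
```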
Thus, HP is RE-Complete


Does Idris have non-terminating terms?

On the Unofficial FAQ on the official Idris wiki (official in that it is on the language's git repo), it is stated that
in a total language [e.g. Idris] we don't have undefined and non-terminating terms
so we need not worry about evaluating them.
However, the following definition for ones (using List instead of Stream) certainly seems non-terminating:
ones : List Int
ones = 1 :: ones
-- ...
printLn (head ones) -- seg fault!
So, I'm not sure if the wiki entry is mistaken, or if I misunderstand the context. Note the Stream workaround is already described in the Idris tutorial.
Idris is only total if you ask it to be total. You may write one of %default total, %default covering, or %default partial (the default), and all declarations afterwards will take on the given totality annotation:
%default total
-- implicitly total
ones1 : List Int
ones1 = 1 :: ones1
-- ERROR: ones1 is not total
-- total
ZNeverHeardOfIt : Nat -> Nat
ZNeverHeardOfIt (S n) = n
-- ERROR: missing cases in ZNeverHeardOfIt
%default covering
natRoulette : Nat -> Nat
natRoulette Z = Z
natRoulette (S n) = natRoulette (S (S n))
-- covering means all possible inputs are covered by an equation,
-- but the function isn't checked for termination:
-- natRoulette has cases for all inputs, but it might go into an infinite loop.
-- It's morally equivalent to just partial, as a function that loops forever
-- on an input isn't very different from one missing the case;
-- it just gets the compiler to complain more.

%default partial
ones : List Int
ones = 1 :: ones
-- no checks at all
-- Idris, being strict, needs to evaluate ones strictly before it can evaluate ones.
-- Oh wait, that's impossible. Idris promptly vanishes in a segfault.
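For contrast, in a lazy language such as Haskell the same corecursive definition on plain lists is unproblematic, because only the demanded prefix is ever evaluated (which is essentially what Idris's Stream type gives you back):

```haskell
-- In lazy Haskell, this infinite list is fine: head forces
-- only the first cons cell, never the whole spine.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = print (head ones)  -- prints 1, no segfault
```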

Coq: freeing universes

Is there a way to Reset (or more generally, free) universes in Coq?
Universe M.
Print Sorted Universes. (*M = Type.2*)
Fail Print M. (*Error: M not a defined object.*)
Reset M.
Print Sorted Universes. (*M = Type.2*)
Definition M := Type@{M}.
Print M. (*M = Type: Type*)
Print Sorted Universes. (*M = Type.2*)
Reset M.
Fail Print M. (*Error: M not a defined object.*)
Print Sorted Universes. (*M = Type.2*)
Whatever I do, M = Type.2. I'm on Coq 8.5.
I've found two ways. Reset Initial destroys the entire environment (which is usually more than one would want). Another way is to mask universes with modules:
Universe M R. Constraint M < R.
Definition M := Type@{M}. Definition R := Type@{R}.
Check M:R. Fail Check R:M. (*the hierarchy holds*)
(*1. w/ modules:*)
Module i.
Universe M R. Constraint R < M.
Check M:R. Fail Check R:M. (*still holds*)
Definition M := Type@{M}. Definition R := Type@{R}. (*but now..*)
Fail Check M:R. Check R:M. (*not any more*)
Print Sorted Universes. (*2 Rs and Ms, w/ the old hierarchy masked by the new one*)
End i.
(*outside the module the old hierarchy holds*)
Check M:R. Fail Check R:M.
(*2. w/ Reset Initial:*)
Reset Initial. Fail Check M:R. Fail Check R:M.
Print Sorted Universes. (*the def-d univ-s are gone*)

How to prove that the following language is not decidable?

L = { <M> | M is a Turing machine over {0, 1}, and <M>||<M> ∉ L(M) }
How do I prove that L is not recognizable? Any ideas?
I've proven that the complement of L is recognizable. Define a Turing machine J that, on input <M>:
1. Simulates M on input <M>||<M>.
2. If M accepts, accepts; if M rejects, rejects.
Here <M>||<M> is the concatenation of the encoding of the Turing machine with itself.
You can reduce the (diagonal) acceptance problem to this problem. I'll try to use your notation:
D = { <M> | M is a Turing machine over {0, 1}, and <M> ∉ L(M) }
Fix an encoding of the machine M, and consider a new program that takes as input a string w and accepts it just in case <M> ∈ L(M) (so it has constant behaviour, independent of the input string and dependent only on <M>).
This program can be built parametrically and effectively from <M>; that is, we have a total computable function h such that the program above has code h(<M>). Formally, I am using the s-m-n theorem here, but since I am not sure you are familiar with it, I prefer not to dwell on it.
Now the question is whether h(<M>) is in L.
If <M> ∈ D, then by construction the machine h(<M>) does not accept any string; in particular, it does not accept h(<M>)||h(<M>), so h(<M>) ∈ L.
Conversely, if <M> ∉ D, then by construction the machine h(<M>) accepts every string; in particular, it accepts h(<M>)||h(<M>), so h(<M>) ∉ L.
If we had a way to decide L, we would have a way to decide D, and we know that D is not decidable (in fact, it is productive, similarly to L).
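A toy Haskell rendering of the construction may make it more concrete (Machine, accepts, h, and self are illustrative names; a real proof builds h(<M>) from actual machine codes via the s-m-n theorem):

```haskell
-- Toy model: a machine maps an input string to a computation,
-- which either halts with accept/reject or runs forever.
data Comp = Step Comp | Halt Bool
type Machine = String -> Comp

-- Partial acceptance test: diverges if the run diverges.
accepts :: Machine -> String -> Bool
accepts m w = go (m w)
  where go (Halt b) = b
        go (Step c) = go c

-- The construction from the answer: from M and its code <M>, build
-- h(<M>), a machine of constant behaviour that ignores its own input
-- and accepts iff M accepts <M>.
h :: Machine -> String -> Machine
h m codeM = \_input -> m codeM

-- Example machine that accepts exactly the string "self"
-- (standing in for its own encoding <M>).
self :: Machine
self w = if w == "self" then Halt True else Step (Halt False)

main :: IO ()
main = do
  print (accepts (h self "self") "anything")   -- True: self accepts <self>
  print (accepts (h self "other") "anything")  -- False: self rejects "other"
```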

Partial application to precompute intermediary results

For the quadratic formula below, I have many values of a but fixed b and c.
I want to write a partially applied function that executes efficiently, i.e., one that doesn't recompute the values that are fixed (because they depend only on b and c).
Here is my solution:
let r b c =
  let z = b *. b in
  fun a -> (-.b +. sqrt (z -. 4.0 *. a *. c)) /. (a *. 2.0);;
I guess this solution works, but I am not sure whether it is efficient enough. I only hoisted b^2, since I saw that all the other terms involve a.
Can anyone give me a better solution?
Yes, that's a correct way to deal with the situation at hand. The alternate form of the quadratic formula doesn't help much here (as long as this one gives you the accuracy you require). You may want to hoist 4.0 *. c out as well:
let r b c =
  let z = b *. b and c4 = 4.0 *. c in
  fun a -> (-.b +. sqrt (z -. a *. c4)) /. (a *. 2.0);;
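For comparison, the same trick rendered in Haskell (a sketch with the same formula; once b and c are supplied, the returned closure captures z and c4, which are computed at most once no matter how many values of a are plugged in):

```haskell
-- Partial application to precompute intermediary results:
-- z = b*b and c4 = 4*c are shared by the returned closure.
r :: Double -> Double -> Double -> Double
r b c =
  let z  = b * b
      c4 = 4 * c
  in \a -> (-b + sqrt (z - a * c4)) / (a * 2)

main :: IO ()
main = do
  let root = r (-3) 2   -- x^2 - 3x + 2 = 0 has roots 1 and 2
  print (root 1)        -- 2.0: the "+ sqrt" branch picks the larger root
```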

How to prevent common sub-expression elimination (CSE) with GHC

Given the program:
import Debug.Trace
main = print $ trace "hit" 1 + trace "hit" 1
If I compile with ghc -O (7.0.1 or higher), hit is printed only once; i.e., GHC has used common sub-expression elimination (CSE) to rewrite my program as:
main = print $ let x = trace "hit" 1 in x + x
If I compile with -fno-cse then I see hit appearing twice.
Is it possible to avoid CSE by modifying the program? Is there any sub-expression e for which I can guarantee e + e will not be CSE'd? I know about lazy, but can't find anything designed to inhibit CSE.
The background of this question is the cmdargs library, where CSE breaks the library (due to impurity in the library). One solution is to ask users of the library to specify -fno-cse, but I'd prefer to modify the library.
How about removing the source of the trouble -- the implicit effect -- by using a sequencing monad that introduces that effect? E.g. the strict identity monad with tracing:
data Eval a = Done a | Trace String a

-- Functor/Applicative instances are required by GHC 7.10 and later:
instance Functor Eval where
  fmap f (Done x)    = Done (f x)
  fmap f (Trace s x) = Trace s (f x)
instance Applicative Eval where
  pure = Done
  mf <*> mx = mf >>= \f -> fmap f mx
instance Monad Eval where
  Done x >>= k = k x
  Trace s a >>= k = trace s (k a)

runEval :: Eval a -> a
runEval (Done x) = x

track = Trace
now we can write stuff with a guaranteed ordering of the trace calls:
main = print $ runEval $ do
  t1 <- track "hit" 1
  t2 <- track "hit" 1
  return (t1 + t2)
while still being pure code, and GHC won't try to get too clever, even with -O2:
$ ./A
hit
hit
2
So we introduce just the computation effect (tracing) sufficient to teach GHC the semantics we want.
This is extremely robust to compile optimizations. So much so that GHC optimizes the math to 2 at compile time, yet still retains the ordering of the trace statements.
As evidence of how robust this approach is, here's the core with -O2 and aggressive inlining:
main2 =
  case Debug.Trace.trace string trace2 of
    Done x -> case x of
      I# i# -> $wshowSignedInt 0 i# []
    Trace _ _ -> err

trace2 = Debug.Trace.trace string d

d :: Eval Int
d = Done n

n :: Int
n = I# 2

string :: [Char]
string = unpackCString# "hit"
So GHC has done everything it could to optimize the code -- including computing the math statically -- while still retaining the correct tracing.
References: the useful Eval monad for sequencing was introduced by Simon Marlow.
Reading the source code to GHC, the only expressions that aren't eligible for CSE are those which fail the exprIsBig test. Currently that means the Expr values Note, Let and Case, and expressions which contain those.
Therefore, an answer to the above question would be:
unit = reverse "" `seq` ()
main = print $ trace "hit" (case unit of () -> 1) +
               trace "hit" (case unit of () -> 1)
Here we create a value unit which evaluates to (), but whose value GHC can't determine (we use a recursive function GHC can't optimise away; reverse is just a simple one to hand). This means GHC can't CSE the trace function and its two arguments, and we get hit printed twice. This works with both GHC 6.12.4 and 7.0.3 at -O2.
I think you can specify the -fno-cse option in the source file, i.e. by putting a pragma
{-# OPTIONS_GHC -fno-cse #-}
on top.
Another method to avoid common subexpression elimination or let floating in general is to introduce dummy arguments. For example, you can try
let x () = trace "hi" 1 in x () + x ()
This particular example won't necessarily work; ideally, you should specify a data dependency via dummy arguments. For instance, the following is likely to work:
let x dummy = trace "hi" $ dummy `seq` 1
    x1 = x ()
    x2 = x x1
in x1 + x2
The result of x now "depends" on the argument dummy and there is no longer a common subexpression.
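Wrapping that snippet into a minimal complete program (the dummy-argument trick; compile with -O2 to see that CSE is defeated, since without optimization no CSE happens anyway):

```haskell
import Debug.Trace

-- x1 feeds into x2's argument, so the two calls of x are not
-- syntactically common subexpressions and CSE leaves them alone.
x :: a -> Int
x dummy = trace "hi" (dummy `seq` 1)

x1, x2 :: Int
x1 = x ()
x2 = x x1

main :: IO ()
main = print (x1 + x2)  -- "hi" is traced once per call (to stderr), then prints 2
```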
I'm a bit unsure about Don's sequencing monad (posting this as an answer because the site doesn't let me add comments). Modifying the example a bit:
main :: IO ()
main = print $ runEval $ do
  t1 <- track "hit 1" (trace "really hit 1" 1)
  t2 <- track "hit 2" 2
  return (t1 + t2)
This gives us the following output:
hit 1
hit 2
really hit 1
That is, the first trace fires when the t1 <- ... statement is executed, not when t1 is actually evaluated in return (t1 + t2). If we define the monadic bind operator as
Done x >>= k = k x
Trace s a >>= k = k (trace s a)
instead, the output will reflect the actual evaluation order:
hit 1
really hit 1
hit 2
That is, the traces will fire when the (t1 + t2) expression is evaluated, which is (IMO) what we really want. For example, if we change (t1 + t2) to (t2 + t1), this solution produces the following output:
hit 2
hit 1
really hit 1
The output of the original version remains unchanged, and we don't see when our terms are really evaluated:
hit 1
hit 2
really hit 1
Like the original solution, this also works with -O3 (tested on GHC 7.0.3).