## A mod B function Turing machine [closed] - turing-machines

### Making my canvas display the graph correctly

```I am trying to create a graph calculator that displays the graph on a "canvas". When I load the HTML file and type e.g. x, the line starts in the upper left corner and goes down to the lower right corner. So the problem is that the graph is displayed upside down, and negative values are not included.
I know that the canvas starts at (0,0) in pixel coordinates in the upper left corner and ends at (300,300) in the lower right corner. I want it to display something like the green canvas from this link: http://www.cse.chalmers.se/edu/course/TDA555/lab4.html
points :: Expr -> Double -> (Int,Int) -> [Point]
points exp scale (x, y) = [(x, realToPix (eval exp (pixToReal x))) | x<-[0..(fromIntegral canWidth)] ]
  where
    pixToReal :: Double -> Double  -- converts a pixel x-coordinate to a real x-coordinate
    pixToReal x = x * 0.02
    realToPix :: Double -> Double  -- converts a real y-coordinate to a pixel y-coordinate
    realToPix y = y / 0.02
```
```You're probably used to working with 2D coordinate systems where positive y is up, but, as you noted, on an HTML canvas positive y goes down. To simulate the coordinate system you want, you need to flip all the y-values over the line y = 0 (i.e. the x-axis).
Here are a few y-values and their corresponding corrections, which you can use as tests. Note that I'm assuming y has already been scaled properly; it looks like you've already got that part.
150 -> 0
0 -> 150
-150 -> 300
The pattern is y_new = -(y_old - 150) where 150 is canvas_height/2. Therefore, after scaling you need to apply this formula to all your y values.
To shift the y-axis to the center you need to do the same sort of thing to derive the appropriate linear transformation.```
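The flip-and-shift transformation can be sketched numerically. Below is an illustration in Python (the original question is in Haskell); the function name `flip_y` and the default canvas height of 300 are mine, taken from the question's description:

```python
def flip_y(y_scaled, canvas_height=300):
    # map an already-scaled y-value to a canvas pixel row:
    # y_new = -(y_old - canvas_height/2) = canvas_height/2 - y_old
    return canvas_height / 2 - y_scaled

# the test values from the answer
print(flip_y(150))   # top edge of the canvas
print(flip_y(0))     # the x-axis lands in the middle row
print(flip_y(-150))  # bottom edge of the canvas
```

The same one-liner, applied to every y produced by `realToPix`, both flips the graph and centers the x-axis.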

### Why does 9.0 + 4.53 + 4.53 yield 9.05 when β = 10 and p = 3?

```I'm reading this paragraph in What Every Computer Scientist Should Know About Floating-Point Arithmetic:
A = sqrt(s(s-a)(s-b)(s-c)), where s = (a+b+c)/2    (6)
Suppose the triangle is very flat; that is, a ≈ b + c. Then s ≈ a,
and the term (s - a) in formula (6) subtracts two nearby numbers, one
of which may have rounding error. For example, if a = 9.0, b = c =
4.53, the correct value of s is 9.03 and A is 2.342.... Even though the computed value of s (9.05) is in error by only 2 ulps, the
computed value of A is 3.04, an error of 70 ulps.
from: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#1403
I wonder why (a+b+c)/2 is equal to 9.05, where a = 9.0 and b = c = 4.53.
I suppose the hardware would calculate a+b first, which results in 13.53, rounded to 13.5. Then c is added to 13.5, giving 18.03, which is rounded to 18.0. Finally, 18.0 is divided by 2. This yields 9.00, which is assigned to the variable s. It is assumed that the hardware has at least one guard digit.
So where am I wrong?
```
```If they did b + c = 9.06 first, then added a + 9.06 = 18.06 and rounded that to 18.1, they'd get 18.1 / 2 = 9.05.
I guess this detail is not that important; the point is that with only three significant digits
you'll end up with 9.00 or 9.05, but not with the correct 9.03 (even though that number could be represented).
Every additional operation you do introduces more inaccuracy, so the end result can be off by far more than just the three-digit limitation (3.04 vs. 2.342..., which doesn't even get the first digit right).```
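Both evaluation orders can be replayed by rounding every intermediate result to three significant decimal digits. This is a sketch in Python; the round-to-significant-digits helper `rnd` is mine, not from the book:

```python
import math

def rnd(x, p=3):
    # round x to p significant decimal digits (simulating beta = 10, p = 3)
    if x == 0:
        return 0.0
    return round(x, p - 1 - math.floor(math.log10(abs(x))))

a, b, c = 9.0, 4.53, 4.53

# order used in the book's example: (b + c) first
s1 = rnd(rnd(rnd(b + c) + a) / 2)   # 9.06 -> 18.1 -> 9.05
# order assumed in the question: (a + b) first
s2 = rnd(rnd(rnd(a + b) + c) / 2)   # 13.5 -> 18.0 -> 9.00

print(s1, s2)
```

So the question's arithmetic is also valid; the book simply assumed the other association, and neither order recovers the correct 9.03.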

### What is the closest value to 1/3 that can be achieved using IEEE 754 32-bit floating point format?

```I'm having trouble completing the problem despite having a fairly good understanding of how the floating-point format works. Can someone walk me through the steps one would take to arrive at an answer? Why is it impossible to represent 1/3 exactly, and how do we know we've gotten the number closest to 1/3?
```
```The fraction part of a float in IEEE 754 is made from a sum of negative powers of 2.
For instance, 0.5 is 2^-1, 0.75 is 2^-1 + 2^-2, etc.
So the question becomes: what number approaches 1/3 when summing N bits of negative powers of 2?
Since the binary expansion of 1/3 is infinite (0.010101...), it cannot be represented exactly within the finite number of bits of a 32-bit float.
To complete the question, you can implement a fairly easy algorithm to reach a value close to 1/3:
N = 23 // significand bits
T = 1/3 // target
p = 0.5 // first negative power of 2
r = 0.0 // result
do N times
    if ( r + p <= T ) r = r + p // add power of 2 if result does not exceed target
    p = p / 2 // next power of 2
done
To visualize the result in binary, you could print "1" when p is added to r, and "0" when it's not.
The last bit could be "1" to round the result closer to the target.
```
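The greedy pseudocode above translates directly to Python. A sketch (I use 24 bits here so the granularity matches a full binary32 significand; the function name is mine):

```python
def greedy_third(n_bits):
    # greedily sum negative powers of 2 without overshooting 1/3
    target = 1 / 3
    p = 0.5       # first negative power of 2
    r = 0.0       # running result
    bits = ""
    for _ in range(n_bits):
        if r + p <= target:
            r += p
            bits += "1"
        else:
            bits += "0"
        p /= 2
    return r, bits

r, bits = greedy_third(24)
print(bits)           # the repeating pattern 0101... of 1/3 in binary
print(abs(1/3 - r))   # truncation error, below 2**-24
```

The printed bit string makes the infinite repetition visible: truncating it anywhere leaves an error, which is why no finite significand can hit 1/3 exactly.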
```To answer "How close can we get?" questions with floating-point, it is often useful to change the significand (fraction portion) to an integer. To do this, start with the customary floating-point format for normal 32-bit binary IEEE floating-point numbers:
A sign s (0 or 1 representing + or –), an exponent e, and a fraction f that begins with "1." and has 23 bits after the point. These three parts combine to represent the number (–1)^s · 2^e · f.
Then scale the format so the fraction f is instead an integer F. To do this, we subtract 23 from the exponent and multiply the fraction f by 2^23. Then we have this format:
A sign s, an exponent e–23, and an integer F equal to 2^23 · f (so 2^23 ≤ F < 2^24). These parts combine to represent the number (–1)^s · 2^(e–23) · F = (–1)^s · 2^e · f.
Now, figure out what exponent 1/3 would have in the customary format. To make f start with "1.", e must be –2. We can see this from the fact that 1/3 = 2^–2 · 4/3, and 1 ≤ 4/3 < 2, so, in binary, 4/3 starts with "1.".
Then, consider the scaled format. In this format, we must have: the sign s is 0, the exponent is –2–23 = –25, and F is some integer such that (–1)^0 · 2^–25 · F ≈ 1/3.
This is easy to solve: F ≈ 2^25 · 1/3 = 33554432/3 = 11184810.666…. The nearest integer to this value is 11184811, so F = 11184811.
Now we can see the error in F is 1/3 (the difference between the integer that F must be and the value we would like it to be), and that error is scaled by 2^–25, so the error is 2^–25/3 ≈ 9.934e-09. And the value itself is 2^–25 · 11184811, which is approximately .3333333433.
(Note that I addressed only normal numbers. For numbers near the limits of the floating-point format, overflow and underflow must also be considered.)```
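The derivation can be checked directly against an actual 32-bit float by round-tripping 1/3 through Python's stdlib `struct` packing; the claim is exactly that the nearest binary32 to 1/3 equals 11184811 · 2^–25:

```python
import struct

F = round(2**25 / 3)    # nearest integer to 33554432/3 = 11184810.666...
value = F * 2**-25      # the candidate nearest binary32 value

# round-trip 1/3 through a real 32-bit float for comparison
f32 = struct.unpack('<f', struct.pack('<f', 1/3))[0]

print(F)             # 11184811
print(value == f32)  # the scaled-integer derivation matches the hardware result
```

This confirms both the integer F and the error bound: the closest binary32 value is about 0.3333333433, roughly 9.93e-09 above 1/3.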

### Working out floating point numbers in base 2, 10 and 16

```I was reading across my notes and came across the following:
For every real number there are various ways of representing
it in such a way. Therefore, computers fix two parameters (so
they do not need to be stored, and arithmetic is more
convenient):
- the base b (normally, it is 2, 10 or 16) and
- the position of the decimal (or binary) point (by normalising
the mantissa such that it satisfies 1/b ≤ m < 1)
Example: Normalised representations for r := 12.25 are,
- for b = 2, r = 1 × 0.110001 × 2^4,
- for b = 10, r = 1 × 0.1225 × 10^2 and,
- for b = 16, r = 1 × 0.C4 × 16^1.
How do you go about working out the floating point representations of 12.25 in base 2, 10 and 16? I'm not too sure how the lecturer arrived at his answers for b = 2, b = 10 and b = 16.
```
```From the examples, it seems your lecturer's definition of "normalized" is to represent the number as +1 or -1 multiplied by some x multiplied by the base raised to an integer power, where x is the largest value less than 1 for which the product equals the represented number. Also, x is written as a numeral in the base.
E.g., consider 12.25 in base 2. Sticking to base-10 arithmetic for the moment, we could represent 12.25 as 1×12.25×2^0 or 1×6.125×2^1 or 1×3.0625×2^2 or 1×1.53125×2^3 or 1×.765625×2^4 or 1×.3828125×2^5, and so on. Of these, .765625 is the largest value less than 1 that fits the form. So, we represent 12.25 as 1×.765625×2^4. Then we need to convert .765625 to base 2.
You have probably covered that in previous lessons, but we can do it like this: Multiply .765625 by 2 (to get 1.53125) and separate the integer part (1) from the fraction (.53125). Multiply the fraction by 2 (1.0625) and separate again (1 and .0625). Repeat with the new fraction (0 and .125). Continue until the fraction is zero or you have as many digits as you want: 0 and .25, 0 and .5, 1 and 0. List the integer parts you got: 1 1 0 0 0 1. The base-two numeral you want is a point followed by those digits: .110001. So 12.25 in base 2, normalized according to your lecturer's definition, is 1×.110001×2^4.
A rule for finding the right value of x: Start with an exponent of 0. If x is larger than 1, divide it by the base and add one to the exponent. If x is less than 1/base, multiply it by the base and subtract one from the exponent. Repeat until x is between 1/base and 1 (including 1/base but excluding 1, so stop if x equals 1/base).
For 12.25 in decimal: Start with exponent 0. Divide 12.25 by 10 (getting 1.225) and increment the exponent to 1. Divide again (.1225) and increment the exponent to 2. Now stop, because .1225 is between 1/10 and 1.
For 12.25 in base 16: Start with exponent 0. Divide 12.25 by 16 (getting .765625) and increment the exponent to 1. Now stop, because .765625 is between 1/16 and 1.
To convert .765625 to base 16: Multiply .765625 by 16 to get integer 12 (digit C) and fraction .25. Multiply .25 by 16 to get integer 4 and fraction 0. The fraction is 0, so stop. The base-16 numeral is .C4, so the whole form is 1×.C4×16^1.
Sometimes other definitions of "normalized" are used. Commonly, instead of adjusting x to be between 1/base and 1, we adjust x to be between 1 and the base.
```
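The two steps described above (find the exponent, then emit fraction digits by repeated multiplication) can be sketched as a pair of small Python helpers; the function names are mine:

```python
def normalize(x, base):
    # scale x into [1/base, 1), counting exponent adjustments
    e = 0
    while x >= 1:
        x /= base
        e += 1
    while x < 1 / base:
        x *= base
        e -= 1
    return x, e

def fraction_digits(x, base, limit=12):
    # repeatedly multiply by the base and peel off the integer part
    digits = []
    while x != 0 and len(digits) < limit:
        x *= base
        d = int(x)
        digits.append(d)
        x -= d
    return digits

print(normalize(12.25, 2))            # (0.765625, 4) -> 1 x .110001 x 2^4
print(fraction_digits(0.765625, 2))   # [1, 1, 0, 0, 0, 1]
print(normalize(12.25, 16))           # (0.765625, 1) -> 1 x .C4 x 16^1
print(fraction_digits(0.765625, 16))  # [12, 4], i.e. digits C and 4
```

For base 2 and base 16 every intermediate value here is exactly representable in binary floating-point, so the float comparisons are safe; base 10 would need care because .1225 is not exact in binary.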
```We're trying to represent the number 12.25 in various bases, without normalizing first.
For binary:
12 -> 1100
0.25 -> .01 (1 × (1/2)^2)
so 12.25 = 1100.01
For hex:
12 -> C
0.25 -> .4 (4 × (1/16)^1)
so 12.25 = C.4
Normalizing then shifts the radix point, compensating with the exponent term.
Hope that helps.
I guess this isn't much of an answer because I'm not describing a general method, so here are some links to discussions at MathForums on the topic:
Long Division with Binary Numbers
Floating Point Binary Fractions
Dr. Math's FAQ on Bases
I'm still looking for a good guide to base-16 floating-point division.
Base Convert and Digit Convert are online base converters that can work with floating-point numbers.
The sites use Javascript, so it is possible to get at the algorithm being used.```
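The two un-normalized conversions above (12.25 → 1100.01 in binary, C.4 in hex) can be reproduced with a small positional converter; this sketch is mine, not taken from any of the linked pages:

```python
DIGITS = "0123456789ABCDEF"

def to_base(x, base, max_frac=8):
    ip, fp = int(x), x - int(x)
    # integer part: repeated division, collecting remainders
    int_str = ""
    while ip:
        int_str = DIGITS[ip % base] + int_str
        ip //= base
    int_str = int_str or "0"
    # fractional part: repeated multiplication, collecting integer parts
    frac_str = ""
    for _ in range(max_frac):
        fp *= base
        d = int(fp)
        frac_str += DIGITS[d]
        fp -= d
        if fp == 0:
            break
    return int_str + ("." + frac_str if frac_str else "")

print(to_base(12.25, 2))   # 1100.01
print(to_base(12.25, 16))  # C.4
```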

### String Rabin-Karp elementary number-theoretic notations

```I am reading about string algorithms in Introduction to Algorithms by Cormen et al.
The following is text about some elementary number-theoretic notations.
Note: in the text below, read == as modulo equivalence (≡).
Given a well-defined notion of the remainder of one integer when divided by another, it is convenient to provide special notation to indicate equality of remainders. If (a mod n) = (b mod n), we write a == b (mod n) and say that a is equivalent to b, modulo n. In other words, a == b (mod n) if a and b have the same remainder when divided by n. Equivalently, a == b (mod n) if and only if n | (b - a).
For example, 61 == 6 (mod 11). Also, -13 == 22 == 2 (mod 5).
The integers can be divided into n equivalence classes according to their remainders modulo n. The equivalence class modulo n containing an integer a is
[a]n = {a + kn : k ∈ Z}.
For example, [3]7 = {. . . , -11, -4, 3, 10, 17, . . .}; other denotations for this set are [-4]7 and [10]7.
Writing a ∈ [b]n is the same as writing a == b (mod n). The set of all such equivalence classes is
Zn = {[a]n : 0 <= a <= n - 1}. ----------> Eq 1
My question about the above text: equation 1 says that a should be between 0 and n-1, but in the example it is given as -4, which is not between 0 and 6. Why?
In addition, it is mentioned that the Rabin-Karp algorithm uses the equivalence of two numbers modulo a third number. What does this mean?
```
```This is not a programming question, but never mind...
it is mentioned that "a" should be between 0 and n-1, but in example it is given as -4 which is not between 0 and 6, why?
Because [-4]n is the same equivalence class as [x]n for some x such that 0 <= x < n. Equation 1 takes advantage of this fact to "neaten up" the definition and make all the possibilities distinct.
In addition to above it is mentioned that for Rabin-Karp algorithm we use equivalence of two numbers modulo a third number? What does this mean?
The Rabin-Karp algorithm requires you to calculate a hash value for the substring you are searching for. When hashing, it is important to use a hash function that uses the whole of the available domain even for quite small strings. If your hash is a 32 bit integer and you just add the successive unicode values together, your hash will usually be quite small resulting in lots of collisions.
So you need a function that can give you large answers. Unfortunately, this also exposes you to the possibility of integer overflow. Hence you use modulo arithmetic to keep the comparisons from being messed up by overflow.
```
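To make the modular-hashing point concrete, here is a minimal Rabin-Karp sketch in Python. The base and modulus values are illustrative (real implementations pick a large prime modulus); the rolling update keeps every hash in [0, mod), so no overflow can occur:

```python
def rabin_karp(text, pat, base=256, mod=101):
    # return the index of the first occurrence of pat in text, or -1
    n, m = len(text), len(pat)
    if m == 0 or m > n:
        return -1
    h = pow(base, m - 1, mod)  # weight of the window's leading character
    ph = th = 0
    for i in range(m):         # initial hashes of pat and the first window
        ph = (ph * base + ord(pat[i])) % mod
        th = (th * base + ord(text[i])) % mod
    for i in range(n - m + 1):
        # verify on a hash match, since distinct strings can collide mod 101
        if ph == th and text[i:i + m] == pat:
            return i
        if i < n - m:          # roll: drop text[i], append text[i + m]
            th = ((th - ord(text[i]) * h) * base + ord(text[i + m])) % mod
    return -1

print(rabin_karp("abracadabra", "cad"))  # 4
```

Two windows are "equivalent modulo 101" whenever their hashes agree; only then is the expensive character-by-character comparison done.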
```I'll try to nudge you in the right direction, even though it's not about programming.
The example with -4 in it is an example of an equivalence class, which is the set of all numbers equivalent to a given number. Thus, in [3]7, all numbers are equivalent (modulo 7) to 3, and that includes -4 as well as 17 and 710 and an infinity of others.
You could also name the same class [10]7, because every number that is equivalent (modulo 7) to 3 is at the same time equivalent (modulo 7) to 10.
The last definition gives the set of all distinct equivalence classes, and states that for modulo 7 there are exactly 7 of them, and that they can be produced by the numbers from 0 to 6. You could also say
Zn = {[a]n : n <= a < 2n}
and the meaning would remain the same, since [7]7 is the same thing as [0]7, and [13]7 is the same thing as [6]7.```
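These claims are easy to check numerically; Python's % operator already returns the canonical representative in [0, n), even for negative numbers:

```python
n = 7
members = [-11, -4, 3, 10, 17, 710]   # all lie in the class containing 3, mod 7
print([x % n for x in members])       # each reduces to the same representative

# the alternative range n <= a < 2n names exactly the same set of classes
canonical = {a % n for a in range(n)}          # representatives from 0..n-1
shifted   = {a % n for a in range(n, 2 * n)}   # representatives from n..2n-1
print(canonical == shifted)
```

Note that this relies on Python's convention that `%` with a positive modulus is always non-negative; in C or Java, `-4 % 7` yields -4 and would need an extra adjustment.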