Can a Turing machine work with decimal numbers? - turing-machines

I can do some operations on a Turing machine, but I use the binary forms of numbers, as computers do. I wonder whether I can write decimal numbers on its tape and do calculations?
Thanks in advance.

Decimal numbers are strings of symbols. Thus they can be written on a TM tape and be manipulated in any computable way. This includes of course the normal arithmetic calculations.
Of course, you can only represent and manipulate decimal numbers with a finite number of significant digits (or one infinite one, but that does not seem useful). So in decimal, for example, you cannot represent 1/3. If all the numbers that can occur are rational, a representation as fractions might be better, since it is always finite.
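To make this concrete, here is a minimal sketch, in Python and purely for illustration, of a Turing-machine-style computation over a decimal tape: a transition table that increments the number by one, propagating the carry digit by digit. The function name and tape encoding are invented for this example, not taken from any particular construction.

```python
# Minimal sketch (illustrative, not an authoritative construction): a
# Turing-machine-style increment of a decimal number written one digit per
# tape cell. The head starts on the least significant digit and walks left
# while there is a carry.

def tm_increment_decimal(number: str) -> str:
    tape = {i: d for i, d in enumerate(number)}  # tape cells holding '0'..'9'
    head = len(number) - 1                       # rightmost digit
    state = "carry"

    while state != "halt":
        symbol = tape.get(head, " ")             # " " is the blank symbol
        if state == "carry":
            if symbol == " ":                    # ran off the left end: write the carry
                tape[head] = "1"
                state = "halt"
            elif symbol == "9":                  # 9 + carry = 0, keep carrying left
                tape[head] = "0"
                head -= 1
            else:                                # absorb the carry and stop
                tape[head] = str(int(symbol) + 1)
                state = "halt"

    return "".join(tape[i] for i in sorted(tape))

print(tm_increment_decimal("1299"))  # 1300
print(tm_increment_decimal("999"))   # 1000
```

The same idea extends to addition, multiplication and so on; only the transition table grows.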

Related

Arbitrary precision Float numbers on JavaScript

I have some inputs on my site representing floating point numbers with up to ten precision digits (in decimal). At some point, in the client side validation code, I need to compare a couple of those values to see if they are equal or not, and here, as you would expect, the intrinsics of IEEE 754 make that simple check fail with things like (2.0000000000==2.0000000001) = true.
I could break the floating point number into two longs, one for each side of the dot, make each side a 64-bit long and do my comparisons manually, but it looks so ugly!
Any decent Javascript library to handle arbitrary (or at least guaranteed) precision float numbers on Javascript?
Thanks in advance!
PS: A GWT-based solution gets a ++
There is the GWT-MATH library at http://code.google.com/p/gwt-math/.
However, I warn you, it's a GWT jsni overlay of a java->javascript automated conversion of java.BigDecimal (actually the old com.ibm.math.BigDecimal).
It works, but speedy it is not. (Nor lean. It will add a good 70k to your project.)
At my workplace, we are working on a simple fixed-point decimal type, but nothing worth releasing yet. :(
Use an arbitrary precision integer library such as silentmatt’s javascript-biginteger, which can store and calculate with integers of any arbitrary size.
Since you want ten decimal places, you’ll need to store the value n as n×10^10. For example, store 1 as 10000000000 (ten zeroes), 1.5 as 15000000000 (nine zeroes), etc. To display the value to the user, simply place a decimal point in front of the tenth-last character (and then cut off any trailing zeroes if you want).
Alternatively you could store a numerator and a denominator as bigintegers, which would then allow you arbitrarily precise fractional values (but beware – fractional values tend to get very big very quickly).
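As a rough sketch of that scaled-integer idea, here it is in Python, whose built-in int is already an arbitrary-precision integer; with silentmatt's javascript-biginteger the structure would be the same. The SCALE constant and helper names are made up for this illustration.

```python
# Store a value n as n * 10**10 in an arbitrary-precision integer, so ten
# decimal places survive exactly. SCALE and the helpers are illustrative names.

SCALE = 10 ** 10  # ten decimal places

def to_fixed(s: str) -> int:
    """Parse a decimal string such as '1.5' into a scaled integer."""
    whole, _, frac = s.partition(".")
    frac = (frac + "0" * 10)[:10]                # right-pad / truncate to 10 places
    sign = -1 if whole.startswith("-") else 1
    return sign * (abs(int(whole)) * SCALE + int(frac))

def to_decimal_string(n: int) -> str:
    """Format a scaled integer back into a decimal string for display."""
    sign = "-" if n < 0 else ""
    whole, frac = divmod(abs(n), SCALE)
    return f"{sign}{whole}.{frac:010d}".rstrip("0").rstrip(".")

a = to_fixed("2.0000000000")
b = to_fixed("2.0000000001")
print(a == b)                              # False: the two inputs stay distinguishable
print(to_decimal_string(to_fixed("1.5")))  # 1.5
```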

How is floating point stored? When does it matter?

In follow up to this question, it appears that some numbers cannot be represented by floating point at all, and instead are approximated.
How are floating point numbers stored?
Is there a common standard for the different sizes?
What kind of gotchas do I need to watch out for if I use floating point?
Are they cross-language compatible (ie, what conversions do I need to deal with to send a floating point number from a python program to a C program over TCP/IP)?
-Adam
As mentioned, the Wikipedia article on IEEE 754 does a good job of showing how floating point numbers are stored on most systems.
Now, here are some common gotchas:
The biggest is that you almost never want to compare two floating point numbers for equality (or inequality). You'll want to use greater than/less than comparisons instead.
The more operations you do on a floating point number, the more significant rounding errors can become.
Precision is limited by the size of the fraction, so you may not be able to correctly add numbers that are separated by several orders of magnitude. (For example, you won't be able to add 1E-30 to 1E30.)
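A quick Python demonstration of these gotchas, added here for clarity: naive equality fails after only a few operations (one common workaround, comparing within a tolerance, is shown), and adding numbers many orders of magnitude apart silently discards the smaller one.

```python
# Gotcha 1: accumulated rounding error breaks exact equality checks.
a = 0.1 + 0.1 + 0.1
print(a == 0.3)              # False
print(abs(a - 0.3) < 1e-9)   # True: compare against a tolerance instead

# Gotcha 2: precision is limited by the size of the fraction, so the
# smaller operand can vanish entirely.
print(1e30 + 1e-30 == 1e30)  # True: the 1e-30 contribution is lost
```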
A thorough explanation of the issues surrounding floating point numbers is given in the article What Every Computer Scientist Should Know About Floating-Point Arithmetic.
The standard is IEEE 754.
Of course, there are other means to store numbers when IEEE 754 isn't good enough. Libraries like Java's BigDecimal are available for most platforms and map well to SQL's number type. Symbols can be used for irrational numbers, and values that can't be accurately represented in binary or decimal floating point can be stored as a ratio.
As to the second part of your question, unless performance and efficiency are important for your project, then I suggest you transfer the floating point data as a string over TCP/IP. This lets you avoid issues such as byte alignment and will ease debugging.
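As a rough sketch of the two options, in Python and with details invented for illustration: send the value as text, or pack it in a fixed, network-byte-order binary form. Both round-trip, but the text form is easier to read in a packet capture.

```python
import struct

x = 2.718281828459045

# Option 1: as a string. Python's repr() keeps enough digits to round-trip a double.
as_text = repr(x).encode("ascii")
print(float(as_text.decode("ascii")) == x)     # True

# Option 2: as 8 raw bytes, IEEE 754 double in big-endian ("network") order.
as_bytes = struct.pack("!d", x)
print(struct.unpack("!d", as_bytes)[0] == x)   # True, and always exactly 8 bytes
```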
Basically what you need to worry about with floating point numbers is that there is a limited number of digits of precision. This can cause problems when testing for equality, or if your program actually needs more digits of precision than that data type gives you.
In C++, a good rule of thumb is to think that a float gives you 7 digits of precision, while a double gives you 15. Also, if you are interested in knowing how to test for equality, you can look at this question thread.
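To see that rule of thumb in action, here is a small Python sketch (added here, not part of the original answer) that uses struct to round-trip the same value through a C-style 32-bit float and a 64-bit double.

```python
import struct

x = 0.123456789012345678

# Round-trip through a 32-bit float and a 64-bit double.
as_c_float  = struct.unpack("f", struct.pack("f", x))[0]
as_c_double = struct.unpack("d", struct.pack("d", x))[0]

print(f"{as_c_float:.20f}")   # ~0.12345679...          : about 7 good digits
print(f"{as_c_double:.20f}")  # ~0.12345678901234568... : about 15-16 good digits
```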
In follow up to this question, it appears that some numbers cannot be represented by floating point at all, and instead are approximated.
Correct.
How are floating point numbers stored?
Is there a common standard for the different sizes?
As the other posters already mentioned, almost exclusively IEEE 754 and its successor IEEE 754R. Googling it gives you a thousand explanations together with bit patterns and their meaning.
If you still have trouble understanding it, there are two other FP formats still in common use: IBM and DEC VAX. For some esoteric machines and compilers (BlitzBasic, TurboPascal) there are some odd formats.
What kind of gotchas do I need to watch out for if I use floating point?
Are they cross-language compatible (ie, what conversions do I need to deal with to send a floating point number from a python program to a C program over TCP/IP)?
Practically none, they are cross-language compatible.
Very rarely occurring quirks:
IEEE 754 defines sNaNs (signalling NaNs) and qNaNs (quiet NaNs). The former cause a trap which forces the processor to call a handler routine when they are loaded; the latter don't. Because language designers hated the possibility that sNaNs interrupt their workflow, and supporting them would force support for handler routines, sNaNs are almost always silently converted into qNaNs.
So don't rely on a 1:1 raw conversion. But again: this is very rare and occurs only if NaNs are present.
You can have problems with endianness (the bytes are in the wrong order) if files are shared between different computers. It is easy to detect because you get nonsense values, often NaNs, where you expect numbers.
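A small Python illustration (invented for this write-up) of what goes wrong when the sender and receiver disagree on byte order:

```python
import struct

value = 3.141592653589793

# Sender packs the double in big-endian ("network") byte order.
wire = struct.pack(">d", value)

right = struct.unpack(">d", wire)[0]   # receiver using the same byte order
wrong = struct.unpack("<d", wire)[0]   # receiver wrongly assuming little-endian

print(right)   # 3.141592653589793
print(wrong)   # a wildly wrong value; byte-swapped data often shows up as
               # NaNs or absurd magnitudes
```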
Yes, there is: the IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754).
The number is split into three parts, sign, exponent and fraction, when stored in binary.
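A short Python sketch (added for illustration) that pulls those three fields out of a 64-bit double: 1 sign bit, 11 exponent bits and 52 fraction bits.

```python
import struct

def double_fields(x: float):
    """Return (sign, biased exponent, fraction) of an IEEE 754 double."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign     = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # biased by 1023
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction

sign, exponent, fraction = double_fields(-6.5)
print(sign, exponent - 1023, hex(fraction))
# 1 2 0xa000000000000  ->  -(1 + 0.625) * 2**2 = -6.5
```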
This article entitled "IEEE Standard 754 Floating Point Numbers" may be helpful. To be honest I'm not completely sure I'm understanding your question so I'm not sure that this is going to be helpful but I hope it will be.
If you're really worried about floating point rounding errors, most languages offer data types that don't have floating point errors. SQL Server has the Decimal and Money data types. .NET has the Decimal data type. They aren't arbitrary precision like BigDecimal in Java, but they are precise to the number of decimal places they are defined with. So you don't have to worry about a dollar value you type in as $4.58 getting saved as a floating point value like 4.5799999999999997.
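For comparison, a quick sketch of the same idea in Python's standard library: the decimal module stores values in base ten, so a price entered as 4.58 stays exact, while the binary double behind the literal 4.58 does not.

```python
from decimal import Decimal

price = Decimal("4.58")   # built from a string, so no binary rounding at all
print(price)              # 4.58

print(Decimal(4.58))      # 4.58000000000000007105... : the exact value of the
                          # binary double the literal 4.58 actually produces
```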
What I remember is that a 32-bit float is stored using 1 bit for the sign, 23 bits for the significand (plus an implicit leading bit, giving 24 bits of precision), and the remaining 8 bits as a power-of-2 exponent, determining where the binary point is.
I'm a bit rusty on the subject though...

IEEE -754 Floating Point Conversion

Just started a new class and I'm having trouble grasping the floating-point conversions. We were given a problem with a dollar amount, to convert that to binary, then to hex, then to floating point. I can find the answers online with calculators if I wanted, but I need help understanding how it works logically for a fractional number.
I can do the following, for example: 842 to binary (no fraction). But how would you convert something like 272.10, or anything along those lines? And then how does that become floating point?
I was under the impression you take 2,7,2,1,0 and run that in the binary value chart, corresponding with 0010, 0111, 0010, 0001, 0000.. but that's not what everything has for the final answer.
The community helped me a lot with the hex and made that easy, hoping to grasp this as well. Any step-by-step help is appreciated.
This is an extended comment rather than a complete answer, only addressing "I was under the impression you take 2,7,2,1,0 and run that in the binary value chart, corresponding with 0010, 0111, 0010, 0001, 0000.. but that's not what everything has for the final answer."
If you have been able to convert decimal 842 to binary, you must have noticed that both decimal and binary are positional systems. The "8" contributes eight hundred to the total value, not just eight, because of its position.
In the same way, the "1" in decimal 272.10 contributes one tenth to the total value. To do the conversion, you must not just map digits to their individual binary representations; you also need to account for their weighting by a power of the radix: ten for decimal, two for binary.
Unfortunately, one tenth cannot be represented exactly as a terminating binary fraction, so any radix 2 representation of 272.10 must be an approximation. This is the same problem as representing 1/3 as a terminating decimal fraction.
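A rough Python sketch (added here, not from the original answer) of the usual hand method: the integer part converts by repeated division by two, the fractional part by repeated multiplication by two, and for .10 the fractional bits never terminate.

```python
def to_binary(value: float, frac_bits: int = 16) -> str:
    """Convert a non-negative value to a binary string, truncating the fraction."""
    whole = int(value)
    frac = value - whole
    bits = []
    for _ in range(frac_bits):   # stop after a fixed number of fraction bits
        frac *= 2
        bit = int(frac)
        bits.append(str(bit))
        frac -= bit
    return f"{whole:b}." + "".join(bits)

print(to_binary(272.10))
# 100010000.0001100110011001  -- the 0011 pattern repeats forever, so any
# finite binary (or IEEE 754) representation of 272.10 is an approximation
```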

Is specifying floating-point type sufficient to guarantee same results?

I'm writing a specification that describes some arithmetic that will be performed by software. The intention is that I can hand this spec to two different programmers (who use potentially different languages and/or architectures) and when I feed some input into their programs, they will both always spit out the same result.
For instance, if the spec says "add 0.5 to the result", this can be a problem. Depending on the floating point storage method, 0.5 could be represented as 0.4999999135, or 0.500000138, etc.
What is the best way to specify the rules here so that things are consistent across the board? Can I just say "All numbers must be represented in IEEE 754 64-bit format"? Or is it better to say something like "All numbers must be first scaled by 1000 and computed using fixed-point arithmetic"?
This is a little different from most floating-point questions I've come across since the issue is repeatability across platforms, not the overall precision.
IEEE 754-2008 clause 11 describes what is necessary for reproducible floating-point results. This is largely:
Bindings from the programming language to IEEE 754 operations (e.g., a*b performs the floating-point multiplication specified by IEEE 754).
Ways to specify that reproducible results are desired. E.g., disable default language permissions to use wider precision than the nominal types of objects.
Accurate conversions to and from external decimal character sequences.
Avoiding some of the fancier features of IEEE 754.
These requirements are poorly supported in today’s compilers.
Adding .5 will not be a problem. All normal floating-point implementations represent .5 exactly and add it correctly. What will be a problem is that a language implementation may add .5 and keep the result precisely (more precisely than a usual double) while another implementation rounds the result to double. If math library routines are used (such as cos and log), that is another problem, because they are hard to compute well, and different implementations provide different approximations.
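A quick way to check that claim, in Python and added here only as an illustration: converting the float to an exact fraction shows that 0.5 is stored exactly, while something like 0.1 is not.

```python
from fractions import Fraction

print(Fraction(0.5))   # 1/2 : 0.5 is a power of two, so it is exact in binary
print(Fraction(0.1))   # 3602879701896397/36028797018963968 : only an approximation
print((0.5).hex())     # 0x1.0000000000000p-1
```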
IEEE 754 is a good specification. Ideally, you would specify that implementations of your specification conform to IEEE 754.

Huffman encoding

Under what conditions does Huffman encoding make a string not compressible? Is it when all the characters appear with equal frequency/probability? And if so, how can one show this is true?
In a nutshell, Huffman encoding assigns shorter codes to the more probable symbols and longer codes to the less probable ones. If all symbols are equally likely, you will find there is no real advantage, because the compression from the shorter codes is lost to the equally likely longer codes.
You can calculate a simple zero-order entropy for a sequence of symbols which will tell you if you even have a chance of significant compression with just Huffman coding. (I wish stackoverflow had TeX formatting like math.stackexchange.com does. I can't write decent equations here.)
Count how many different symbols you have and call that n, with the symbols numbered 1..n. Compute the probability of each symbol, which is how many times each symbol occurs divided by the length of the sequence, and call that p(k). Then the best you can do with zero-order coding is an average number of bits per symbol equal to: -sum(p(k)log(p(k)),k=1..n)/log(2). Then you can compare the result to log(n)/log(2) which is what the answer would be if all the probabilities were equal (1/n) to see how much the unequal probabilities could buy you. You can also compare the result to, for example, 8, if you are currently storing the symbols as a byte each (in which case n <= 256).
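Here is that calculation as a short Python sketch (added for illustration, since the formula is hard to typeset here): the zero-order entropy in bits per symbol, next to the log2(n) and 8-bit baselines mentioned above.

```python
from collections import Counter
from math import log2

def zero_order_entropy(data: bytes) -> float:
    """-sum(p(k) * log2(p(k))) over the distinct symbols in data."""
    total = len(data)
    return -sum((c / total) * log2(c / total) for c in Counter(data).values())

text = b"this is an example of a huffman tree"
n = len(set(text))

print(f"entropy           : {zero_order_entropy(text):.3f} bits/symbol")
print(f"log2(n) with n={n} : {log2(n):.3f} bits/symbol  (all symbols equally likely)")
print("current storage   : 8.000 bits/symbol  (one byte per symbol)")
```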
A Huffman code will use at least as many bits per symbol as that entropy. You also need to take into account how you will convey the Huffman code to the receiver. You will need some sort of header describing the code, which will take more bits. An arithmetic or range code could get closer to the entropy than the Huffman code, especially for very long sequences.
In general, a Huffman code by itself will not produce very satisfying compression. A quick test on the 100M character English text test file enwik8 gives an entropy of about five bits per symbol, as does Huffman coding of the text. Huffman (or arithmetic or range) coding needs to be used in combination with a higher-order model of the input data. These models can be simple string matching, like LZ77 as used in deflate or LZMA, a Burrows-Wheeler transform, or prediction by partial matching. An LZ77 compressor, in this case gzip, gets less than three bits per symbol.
I can't resist including a picture of Boltzmann's gravestone, engraved on which is his formula that connects entropy to probability, essentially the formula above.
Two factors come to my mind:
If the elements have similar probabilities, then little compression will be possible.
If you try to compress a small input (say, a short text), then the overhead of attaching a Huffman look-up table (aka dictionary - you need to decode your compressed file, don't you?) can make the final size even bigger than the original input.

Resources