Single-tape Turing machine for converting a binary number to unary? - turing-machines

I'm stuck on a problem I've been trying to solve, namely converting a binary number with no leading zeros to a unary representation on the same tape.
E.g. 110 -> xxxxxx
I found Markov's algorithm as a potential solution, but am unable to implement it. Would appreciate some direction!
Edit: figured it out on my own. Write a machine for binary subtraction, then write an x for each subtraction.
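The idea from the edit (repeatedly subtract 1 in binary, writing an x for each step) can be sketched in Python as a plain simulation of what the tape holds, not an actual transition table:

```python
def decrement(bits):
    """Subtract 1 from a binary string the way a TM head would:
    scan from the right, flipping 0s to 1s until the borrow is absorbed."""
    cells = list(bits)
    i = len(cells) - 1
    while i >= 0 and cells[i] == '0':
        cells[i] = '1'
        i -= 1
    cells[i] = '0'          # the rightmost 1 absorbs the borrow
    return ''.join(cells)

def binary_to_unary(tape):
    out = ''
    while '1' in tape:      # halt once the binary region is all zeros
        tape = decrement(tape)
        out += 'x'
    return out

print(binary_to_unary("110"))  # xxxxxx
```

On the real single tape the binary region and the growing block of x's sit side by side; they are kept in separate variables here only to keep the sketch short.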


Most accurate open-source OCR for handwritten numbers? [closed]

My software needs to read a fixed-length handwritten number.
While I could use a general-purpose library like Tesseract, I am sure there is something smarter. Tesseract will probably misinterpret some 1s or 7s as I or l, whereas software that expects only numbers would not.
Knowing that there are only digits (written in the American-English style), the algorithm could focus on 10 potential matches instead of hundreds of symbols.
Any experience OCRing handwritten number-only fields?
What open source library/software did you get the best results with?
From the FAQ of Tesseract:
How do I recognize only digits?
In 2.03 and above:
TessBaseAPI::SetVariable("tessedit_char_whitelist", "0123456789");
before calling an Init function or put this in a text file called tessdata/configs/digits:
tessedit_char_whitelist 0123456789
and then your command line becomes:
tesseract image.tif outputbase nobatch digits
Warning: Until the old and new config variables get merged, you must have the nobatch parameter too.
But I think that since it was designed for printed text, not handwritten text, accuracy might suffer even for digits only.

How to convert a CFG to a Turing Machine

How do I convert a CFG to a TM? I have a basic idea of how to convert a DFA to a TM, but I can't think of a way to do this one properly.
Can I get some general implementation steps on this?
Construct a pushdown automaton (PDA) using a standard algorithm. I won't go into the details of any particular construction (a separate question would be a better place to cover that), but you can search "convert CFG to PDA" and get results. One example is here.
Construct a two-tape Turing Machine from the PDA as follows:
The Turing machine reads the input tape (tape #1) left to right.
The Turing machine uses the second tape as the stack, moving right when pushing and left when popping.
Construct a vanilla one-tape Turing machine from the two-tape machine using a standard construction. The proof that one-tape and two-tape TMs are equivalent is constructive, and you can follow the implied algorithm to construct a single-tape TM for the language of your CFG.
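As a sanity check of step 2, here is a minimal Python sketch (my own toy example, for the language {a^n b^n : n >= 1}, not anything from the question) in which a list plays the role of tape #2: appending is "move right and push", popping is "move left and pop":

```python
def accepts(word):
    """Recognize {a^n b^n : n >= 1} with a PDA whose stack is a Python
    list, i.e. the role tape #2 plays in the two-tape construction."""
    stack = []              # tape #2: the end of the list is the stack top
    popping = False
    for ch in word:         # tape #1 is consumed strictly left to right
        if ch == 'a' and not popping:
            stack.append('A')   # moving right on tape #2 = push
        elif ch == 'b' and stack:
            popping = True
            stack.pop()         # moving left on tape #2 = pop
        else:
            return False
    return popping and not stack

print(accepts("aabb"))  # True
```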
Every word produced by the CFG can be written as a parse tree. If you have a string produced by the CFG, its symbols are the leaves of that parse tree. To determine whether the string is produced by the CFG, you work from the leaves up through the inner nodes to the root; the general idea is to apply the production rules in reverse. If you can reach the root, the string is part of the language. If you reach a point where no reversed production rule fits, the word is not produced by the CFG. Note that if there is more than one possibility, you need to try all of them: if one works, the string is produced by the CFG; if none works, it is not.
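This bottom-up idea is essentially what the CYK algorithm does for a grammar in Chomsky normal form. A minimal Python sketch, using my own toy grammar for {a^n b^n : n >= 1} (not anything taken from the question):

```python
def cyk(word, rules, start='S'):
    """CYK recognition for a CNF grammar given as (lhs, rhs) pairs,
    where rhs is either a single terminal or two nonterminals."""
    n = len(word)
    if n == 0:
        return False
    # table[i][l] = set of nonterminals that derive word[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):                 # leaves: reversed A -> a rules
        for lhs, rhs in rules:
            if rhs == ch:
                table[i][1].add(lhs)
    for length in range(2, n + 1):                # inner nodes, bottom up
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, rhs in rules:            # reversed A -> BC rules
                    if (len(rhs) == 2
                            and rhs[0] in table[i][split]
                            and rhs[1] in table[i + split][length - split]):
                        table[i][length].add(lhs)
    return start in table[0][n]                   # did we reach the root?

# Toy CNF grammar: S -> AT | AB, T -> SB, A -> a, B -> b
rules = [('S', 'AT'), ('S', 'AB'), ('T', 'SB'), ('A', 'a'), ('B', 'b')]
print(cyk("aabb", rules))  # True
```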

DPDA to Turing Machine?

Is there a way to convert a deterministic pushdown automaton into a Turing machine?
I thought about putting the stack after the input on the tape, with '#' between them.
But it seems kind of impossible to prove it formally.
Do you have any suggestions?
Did somebody do it already?
Thank you
A pushdown automaton works in only one direction over its input. That is, it cannot retrace its steps over the input or keep more than one count at a time.
For example, consider the formal language:
L = {1^n 0^m | n > m, m > 0}
Here the number of 1s is greater than the number of 0s.
This problem is solvable by both DPDA and Turing Machine.
However, if we add another condition, like:
L = {1^n 0^m 1^n | n > m, m > 0}
Assuming that you know how to solve the above problem on a Turing machine, you will see that this one cannot be solved without moving back over the input tape.
Therefore there is no way you can make a PDA as powerful as a Turing machine.
Here is a link to the Wikipedia article for more background:
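For what it's worth, the tape layout proposed in the question (input, then '#', then the stack area) does work for simulating a DPDA. A Python sketch for the first language above, L = {1^n 0^m | n > m, m > 0}, treating a list as the single tape (a simulation of the tape contents, not a formal TM):

```python
def single_tape_dpda(word):
    """Simulate a DPDA for L = {1^n 0^m : n > m, m > 0} on one 'tape':
    the input, then a '#' separator, then the stack area growing rightward."""
    tape = list(word) + ['#']
    seen_zero = False
    for ch in word:              # the read head sweeps the input once
        if ch == '1' and not seen_zero:
            tape.append('X')     # push one marker per leading 1
        elif ch == '0':
            seen_zero = True
            if tape[-1] != 'X':
                return False     # more 0s than 1s
            tape.pop()           # pop one marker per 0
        else:
            return False         # a 1 after a 0 breaks the 1^n 0^m shape
    # accept iff m > 0 and at least one marker is left over (n > m)
    return seen_zero and tape[-1] == 'X'

print(single_tape_dpda("110"))  # True
```

For the second language, 1^n 0^m 1^n, this single left-to-right sweep is no longer enough, which is exactly the answer's point: a real Turing machine would have to move back over the tape.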

Addition operation versus multiplication operation

I know that addition is a simpler operation than multiplication. But will there be any difference in the execution time of 123456 * 3 and 123456 + 123456 + 123456?
How exactly does multiplication work?
Do multiplication algorithms vary between programming languages?
What does multiplication look like at a low level (i.e., in assembly code)?
In x86 assembly language the addition and multiplication operations look like this:
ADD [operand1], [operand2]
where operand1 can be a register or a memory address, and operand2 can be a register, a constant, or a memory address.
It takes from 1 to 7 clocks, depending on the processor model and the type of operand2.
MUL [operand] ;for unsigned multiplication
multiplies the contents of the accumulator register (AL, AX, or EAX) by the operand, which can be a register or a memory address. Again, depending on the type of the operand and the processor model, it takes 12 to 38 clocks.
There's also a signed version, IMUL.
This is core assembly language, without modern SIMD extensions like SSE etc. The real speed, as mentioned above, depends on the compiler optimizations.
A smart compiler will most likely fold your 123456 + 123456 + 123456 into a single constant, or at least rewrite it as 3 * 123456.
Premature optimization is the root of all evil :)
What you give the compiler is not what you get back after the optimization step. So while in theory addition is faster, under real-world conditions you can never be sure what the result will be (not to mention the SSE or other processor instructions that the compiler might use).
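You can watch this happen even outside of C: CPython's compiler, for example, folds constant arithmetic before the code ever runs (a quick check, assuming a reasonably recent CPython):

```python
# Both expressions are folded to the constant 370368 at compile time,
# so no runtime ADD or MUL is executed for either one.
summed = compile("123456 + 123456 + 123456", "<demo>", "eval")
multiplied = compile("123456 * 3", "<demo>", "eval")

print(summed.co_consts)      # the folded constant 370368 appears here
print(multiplied.co_consts)
print(eval(summed) == eval(multiplied) == 370368)  # True
```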

Determining how many bits is a processor by looking at the assembly listing file

In my introduction to computers class, we wrote an assembly program for the MC68332 microcontroller. I know this microcontroller is 32-bit because I read it in the datasheet. I was wondering if there is a way to determine this by looking at the LST file generated when assembling the asm source code.
The first column is the address of the instructions, the second group is the machine code to be executed (the opcodes), and the last field is the instructions translated into human-readable form, commonly known as assembly.
On this processor the opcodes usually occupy a multiple of 16 bits (2 bytes); that's the reason you see only even addresses. Despite this, it is a 32-bit processor, mainly because of its 2^32 address space. This is also why the addresses are eight digits wide: each hex digit encodes 4 bits.
You can additionally guess that it is a 32-bit processor from the .L suffix on some instructions. It is short for "long", which is usually 32 bits, so this processor can also operate on 32-bit-wide operands.
The first number in those lines is obviously the address; the second (and third) are the actual assembled opcodes. The reason the last two have two 16-bit words is the arguments $01 and $02. The CLR.L is a good hint that it is a 32-bit processor: "clear longword".
In general, you can't tell just from the assembly instructions or the listing. If you look up a mnemonic, you will find out all about the instruction, including how many bits are involved. But even that may not be enough: I could write a series of instructions for an IBM System z mainframe that work only with the lowest quarter of a 64-bit register, or that deal with a single byte in storage, and you could not tell just from the code that the 16 general-purpose registers are 64 bits wide. Or ancient Honeywell Series 6000 code might work with either six 6-bit characters or four 9-bit characters in a register, based on a flag bit in a control register.
This leads to the best way to find out: read the spec sheet, Principles of Operation, or similar guide, and learn the instructions. And the more you learn, the more fun you realize it is to use assembler code.