Turing machine to find most occurring char on tape - turing-machines

So I need to create a literal representation of a TM that finds the most occurring char on the tape and erases everything else.
The TM has only one tape and inputs would look like this:
#a# => #a#
#aaabbaac# => #a#
The alphabet is {a,b,c,d}.
I need some hints because I'm stuck. My first idea was to delete chars in order (e.g. for any 'a', try to delete a 'b', then a 'c', then a 'd' if possible) so that in the end only the most frequent one remains, but this seems way too complicated.
Any ideas?

As with any programming task, the idea is to break it into manageable pieces you know how to encode in the machine that will run your program. Once we have a plan, we can worry about how to simplify it and write down the answer.
Your idea is not a bad one: we see what the first non-erased symbol is, then scan right and remove up to one instance of every other non-erased symbol until we reach a blank; then we reset and repeat until a pass ends up erasing no symbols besides the first one. Finally, we find the last remaining non-erased symbol, copy it to the front of the tape, and blank out the rest until we find the blank on the right. This is the design.
Now we can talk about implementation. We need an encoding of states and transitions that is going to have the effect of doing what is described above. The first thing we need to do is read the tape symbol to figure out what the first non-erased symbol is. We know the TM needs an initial state anyway, so that's what this can be. We will call the state q0.
If we are in q0 and we see a blank, we can always take this to mean that the tape is empty and we can halt accept. Note that the problem description does not cover two edge cases: the empty string; strings with equal numbers of multiple symbols. If some symbols occur the same maximal number of times, do we show them all or do we show nothing? This derivation assumes we are OK with showing nothing.
If in q0 we see a on the tape, we need to scan right and remove up to one b, c and d. We might even call the state qbcd to remind ourselves of what we're looking for. Upon reading a, we need to erase it; we don't want to overwrite it with a blank, since blanks delimit the input, so we use a special symbol B to indicate erasure. We get similar transitions to qacd, qabd and qabc for the other tape possibilities.
In state qxyz, where x, y and z stand in for some combination of three of a, b, c and d, we are looking to erase one of x, y or z but not the remaining symbol. So, if we see the remaining symbol, we leave the tape alone and move right; if we see symbol x, y or z then we transition to the state corresponding to the two symbols remaining to be removed. There will be six such states: qab, qac, qad, qbc, qbd, qcd.
In state qxy, where x and y stand in for some combination of two of a, b, c and d, we are looking to erase one of x or y but not the other two symbols. So, if we see the other symbols, we leave the tape alone and move right; if we see symbol x or y then we transition to the state corresponding to the only symbol remaining to be removed. There will be four such states: qa, qb, qc, qd.
In state qx, where x stands in for one of a, b, c and d, we are looking to erase x but none of the other three symbols. So, if we see any of the other three symbols, we leave the tape alone and move right; if we see symbol x then we transition to the state indicating one of each symbol has been removed. Call this state qR.
If in states qxyz, qxy or qx you find a real blank symbol - meaning input has been exhausted - that means that you didn't find some of the symbols you were looking to remove. This is fine; in this case, we can transition to the same state as mentioned in the preceding paragraph: the one indicating we have completed a pass and are ready to repeat the process.
In state qR, you rewind the tape back to the beginning of the tape where the first real blank is, and then transition to another state we'll call qF. The only job of qF is to scan right and find the first symbol that isn't B. Then, it behaves exactly like the initial state and repeats the process above. Note that all states should ignore Bs when reading the tape, and pass over them. We can even just reuse q0 since the input tape won't have any Bs on it initially; it is safe to overload q0 with this extra functionality.
If you reach the right of the tape - the blank after all the original input - in one of the states qxyz, that means you erased the fourth symbol w but didn't find any of the other symbols to erase in the same pass. That means that w was the symbol with the most instances originally. When we detect this special condition, we can transition to new states qa', qb', qc' and qd' which return to the beginning of the tape, then transition to states qa'', qb'', qc'' and qd'' to write their inputs down, then transition to qE to finally erase the rest of the tape, halt-accepting when the blank originally at the end of the input is reached.
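Before encoding this as states and transitions, the whole plan can be sanity-checked in plain Python. This is a sketch of the pass-based algorithm, not of the TM encoding itself; the function name and the use of "B" as the erasure marker mirror the discussion above:

```python
def most_frequent_symbol(tape, alphabet="abcd"):
    """Pass-based plan: each pass erases the first surviving symbol plus up
    to one occurrence of each other symbol.  When a pass erases nothing
    besides the first symbol, that symbol had strictly more occurrences
    than any other and wins.  Ties and empty input yield ""."""
    cells = [c for c in tape if c in alphabet]  # drop the '#' delimiters
    while True:
        # q0: locate and erase the first non-erased symbol
        survivors = [i for i, c in enumerate(cells) if c != "B"]
        if not survivors:
            return ""                  # nothing left: empty input or a tie
        i = survivors[0]
        target = cells[i]
        cells[i] = "B"
        # qxyz/qxy/qx: scan right, erasing up to one of each *other* symbol
        to_erase = set(alphabet) - {target}
        for j in range(i + 1, len(cells)):
            if cells[j] in to_erase:
                to_erase.discard(cells[j])
                cells[j] = "B"
        if to_erase == set(alphabet) - {target}:
            return target              # still "in qxyz" at the end: target wins
```

For example, `most_frequent_symbol("#aaabbaac#")` returns `"a"`, and `"#aabb#"` returns `""` (the tie case discussed above, where we show nothing).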
What does this look like in a TM?
state tape | new state new tape head direction
// find and erase the first symbol /////////////////////////
q0 # | halt_accept # same
q0 B | q0 B right
q0 a | qbcd B right
q0 b | qacd B right
q0 c | qabd B right
q0 d | qabc B right
// look for any of three targets ////////////////////////////
qbcd # | qa' # left
qbcd B,a | qbcd B,a right
qbcd b | qcd B right
qbcd c | qbd B right
qbcd d | qbc B right
qacd # | qb' # left
qacd B,b | qacd B,b right
qacd a | qcd B right
qacd c | qad B right
qacd d | qac B right
qabd # | qc' # left
qabd B,c | qabd B,c right
qabd a | qbd B right
qabd b | qad B right
qabd d | qab B right
qabc # | qd' # left
qabc B,d | qabc B,d right
qabc a | qbc B right
qabc b | qac B right
qabc c | qab B right
// look for any of two targets //////////////////////////////
qab # | qR # left
qab B,c,d | qab B,c,d right
qab a | qb B right
qab b | qa B right
qac # | qR # left
qac B,b,d | qac B,b,d right
qac a | qc B right
qac c | qb B right
qad # | qR # left
qad B,b,c | qad B,b,c right
qad a | qd B right
qad d | qa B right
qbc # | qR # left
qbc B,a,d | qbc B,a,d right
qbc b | qc B right
qbc c | qb B right
qbd # | qR # left
qbd B,a,c | qbd B,a,c right
qbd b | qd B right
qbd d | qb B right
qcd # | qR # left
qcd B,a,b | qcd B,a,b right
qcd c | qd B right
qcd d | qc B right
// look for single target ///////////////////////////////////
qa #,a | qR #,B left
qa B,b,c,d | qa B,b,c,d right
qb #,b | qR #,B left
qb B,a,c,d | qb B,a,c,d right
qc #,c | qR #,B left
qc B,a,b,d | qc B,a,b,d right
qd #,d | qR #,B left
qd B,a,b,c | qd B,a,b,c right
// scan back to the beginning of the tape ///////////////////
qR # | q0 # right
qR B,a,b,c,d| qR B,a,b,c,d left
qa' # | qa'' # right
qa' B,a,b,c,d| qa' B,a,b,c,d left
qb' # | qb'' # right
qb' B,a,b,c,d| qb' B,a,b,c,d left
qc' # | qc'' # right
qc' B,a,b,c,d| qc' B,a,b,c,d left
qd' # | qd'' # right
qd' B,a,b,c,d| qd' B,a,b,c,d left
// write the output if we found one /////////////////////////
qa'' # | halt_accept # same
qa'' B,a,b,c,d| qE a right
qb'' # | halt_accept # same
qb'' B,a,b,c,d| qE b right
qc'' # | halt_accept # same
qc'' B,a,b,c,d| qE c right
qd'' # | halt_accept # same
qd'' B,a,b,c,d| qE d right
// erase the rest of the input tape /////////////////////////
qE # | halt_accept # same
qE B,a,b,c,d| qE # right
If you'd rather leave the tape head at the front of the tape, you can write special tape symbols like A', B', C' and D' (distinct from the erasure marker B) in the double-prime states, scan to the end, and erase backwards until you find # or one of those symbols. That means a couple of extra states but is conceptually straightforward.
This TM has 26 states (25 working states plus halt_accept), 150 transitions if fully expanded (6 tape symbols per working state), and erases at least one symbol in every pass except the last; therefore, its runtime is quadratic in the worst case: for input size 4n with n of each symbol, the algorithm performs on the order of n passes with on the order of n steps each, plus O(n) work at the end to write the answer and erase the tape.
Perhaps this is what you had in mind and thought was too complicated. I grant that it took a long time to write down. However, this is conceptually very simple and likely optimal for a single tape TM.


Normalize Count Measure in Tableau

I am trying to create a plot similar to those created by Google's ngram viewer. I have the ngrams that correspond to year, but some years have much more data than others; as a result, plotting from absolute counts doesn't get me the information I want. I'd like to normalize it so that I get the counts as a percentage of the total samples for that year.
I've found ways to normalize data to ranges in Tableau, but nothing about normalizing by count. I also see that there is a count distinct function, but that doesn't appear to do what I want.
How can I do this in Tableau?
Thanks in advance for your help!
Here is some toy data and the desired output.
Toy Data:
| Pattern | Year |
| a | 1 |
| a | 1 |
| a | 1 |
| b | 1 |
| b | 1 |
| b | 1 |
| a | 2 |
| b | 2 |
| a | 3 |
| b | 4 |
Desired Output (each pattern as a percentage of that year's total):
| Year | Pattern | % of Total |
| 1 | a | 50% |
| 1 | b | 50% |
| 2 | a | 50% |
| 2 | b | 50% |
| 3 | a | 100% |
| 4 | b | 100% |
Put [Year] on the Columns shelf. If it is really a Date field instead of a number, choose any truncation level you'd like, or choose exact date. Make sure to treat it as a discrete dimension field (the pill should be blue).
Put [Number of Records] on the Rows shelf. Should be a continuous measure, i.e. SUM([Number of Records])
Put Pattern on the Color shelf.
At this point, you should be looking at a graph of raw counts. To convert them to percentages, right-click the [Number of Records] pill on the Rows shelf and choose Quick Table Calc -> Percent of Total. Finally, right-click [Number of Records] a second time and choose Compute Using -> Pattern.
You might want to sort the patterns. One easy way is to just drag them in the color legend.
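The percent-of-total calculation Tableau performs here can be reproduced on the toy data with a few lines of stdlib Python. This is just to illustrate the arithmetic behind the table calc, not anything Tableau-specific:

```python
from collections import Counter

# Toy data from the question, as (pattern, year) pairs
rows = [("a", 1), ("a", 1), ("a", 1), ("b", 1), ("b", 1), ("b", 1),
        ("a", 2), ("b", 2), ("a", 3), ("b", 4)]

counts = Counter(rows)                            # (pattern, year) -> count
year_totals = Counter(year for _, year in rows)   # year -> total samples

# Percent of the year's total: the quantity "Percent of Total,
# computed using Pattern" produces
pct = {(p, y): 100 * n / year_totals[y] for (p, y), n in counts.items()}
```

For example, `pct[("a", 1)]` is 50.0 and `pct[("a", 3)]` is 100.0, matching the desired output.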

postgresql, select multiple json_array_elements works so weird

I want to use json_array_elements to expand JSON arrays, but it behaves weirdly. Please see below.
select json_array_elements('[1, 2]') as a, json_array_elements('[2, 3, 4]') as b;
a | b
1 | 2
2 | 3
1 | 4
2 | 2
1 | 3
2 | 4
(6 rows)
select json_array_elements('[1, 2]') as a, json_array_elements('[2, 3]') as b;
a | b
1 | 2
2 | 3
(2 rows)
It seems that when the lengths of the arrays are not equal, something goes wrong.
Can anyone tell me why it is like this?
PostgreSQL repeats each array until both happen to reach their end at the same time.
In other words, the number of result rows is the least common multiple of the lengths of the input arrays.
This behaviour is indeed weird, and will be changed in PostgreSQL v10:
select json_array_elements('[1, 2]') as a, json_array_elements('[2, 3, 4]') as b;
a | b
1 | 2
2 | 3
| 4
(3 rows)
From the commit message:
While moving SRF evaluation to ProjectSet would allow to retain the old
"least common multiple" behavior when multiple SRFs are present in one
targetlist (i.e. continue returning rows until all SRFs are at the end of
their input at the same time), we decided to instead only return rows till
all SRFs are exhausted, returning NULL for already exhausted ones. We
deemed the previous behavior to be too confusing, unexpected and actually
not particularly useful.
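Both behaviors are easy to model in a few lines of Python. This is a sketch of the semantics only, not of the PostgreSQL internals:

```python
from itertools import cycle, islice, zip_longest
from math import lcm

a, b = [1, 2], [2, 3, 4]

# Pre-v10: cycle both arrays until they end simultaneously,
# i.e. lcm(len(a), len(b)) rows
old_rows = list(islice(zip(cycle(a), cycle(b)), lcm(len(a), len(b))))
# [(1, 2), (2, 3), (1, 4), (2, 2), (1, 3), (2, 4)]

# v10+: stop when the longer array is exhausted, padding with NULL (None)
new_rows = list(zip_longest(a, b))
# [(1, 2), (2, 3), (None, 4)]
```

The cycling model reproduces exactly the six rows in the question, and zip_longest reproduces the three rows of the v10 behavior.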

what is convergence in k Means?

I have a very small question related to unsupervised learning, because my teacher has not used this word in any lecture; I came across it while reading tutorials. Does it mean that the clusters have converged if the centroid values in the last iteration are the same as in the previous one? For example:
|        | c1 (1,0) | c2 (2,1) | cluster |
| A (1,0)| ..       | ..       | (pick the smallest distance) |
| B (0,1)| ..       | ..       | |
| C (2,1)| ..       | ..       | |
| D (2,1)| ..       | ..       | |
Now, after performing n iterations, if the values of c1 and c2 come out the same as in the previous iteration, i.e. (1,0) and (2,1) in the last, n-th iteration (taking the average when a cluster holds more than one point), is that convergence?
Ideally, if the values in the last two consecutive iterations are the same, the algorithm is said to have converged. But often people use a less strict criterion for convergence, e.g. that the difference between the values of the last two iterations is below a particular threshold.
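A minimal k-means loop with exactly this convergence test, run on the question's four points and initial centroids, can be sketched as follows (the threshold `tol` and the helper names are my own):

```python
def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(points):
    """Component-wise average of a non-empty list of points."""
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def kmeans(points, centroids, tol=1e-9, max_iter=100):
    for _ in range(max_iter):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's average
        new = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
        # Convergence: centroids (essentially) stopped moving
        if all(dist2(o, n) < tol for o, n in zip(centroids, new)):
            return new
        centroids = new
    return centroids
```

With points A(1,0), B(0,1), C(2,1), D(2,1) and initial centroids (1,0) and (2,1), the centroids settle at (0.5, 0.5) and (2, 1), and the second pass reproduces the first, so the loop reports convergence.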

order of Processes in this special case of round robin scheduling

(this was a question asked in the 5th semester of my computer engineering degree)
What will the order of process execution be in the following scenario, given that the scheduling used is Round Robin?
Process----Arrival time----Burst time
My real doubt comes at time 9. At this point, A and C have finished execution. B is in the queue and D has just entered. Which one will be executed first? B or D?
should the overall order be A-B-C-D-E-B-D-E
or A-B-C-B-D-E-D-E?
In round robin, processes are executed for a time period called the quantum, which you haven't mentioned; still, that is not a problem here. The round robin algorithm says each process gets an equal time slice in a circular fashion, and at a point of ambiguity it falls back on first-come-first-serve. You are describing a tie situation here, and B should come first, since it was already in the ready queue before D arrived at time 9.
The order of execution will be A-B-C-B-D-E-D-E. At time 3, i.e. after A finishes, the ready queue holds B and C in that order, so B executes until time 7 (since the quantum is less than B's burst time) and is queued back at the tail of the circular ready queue, making the dispatch order so far A-B-C-B.
At time 7, C starts and runs until time 10; while C is executing, D arrives in the ready queue at time 9, so the dispatch order continues A-B-C-B-D...
The final chart will be:
Q = | A | B | C | B  | D  | E  | D  | E  |
T = 0   3   7   10   11   15   19   20   21
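A small simulator makes the tiebreak explicit. The arrival and burst times below are illustrative assumptions (the question's process table is not shown), with a quantum of 4; under the common textbook convention that a process arriving at time t is enqueued ahead of a process preempted at time t, B still runs before D at time 9, because B re-entered the queue earlier (at time 7):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst), sorted by arrival time.
    Returns the order in which processes are dispatched."""
    remaining = {name: burst for name, _, burst in processes}
    pending = list(processes)              # not yet arrived
    queue = deque()                        # ready queue (circular)
    time = 0
    order = []

    def admit(now):
        # move every process that has arrived by `now` into the ready queue
        while pending and pending[0][1] <= now:
            queue.append(pending.pop(0)[0])

    admit(time)
    while queue or pending:
        if not queue:                      # CPU idle until the next arrival
            time = pending[0][1]
            admit(time)
        name = queue.popleft()
        order.append(name)
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)                        # new arrivals join first...
        if remaining[name] > 0:
            queue.append(name)             # ...then the preempted process
    return order

# Hypothetical arrival/burst values; quantum = 4
procs = [("A", 0, 3), ("B", 1, 5), ("C", 2, 2), ("D", 9, 5), ("E", 10, 5)]
order = round_robin(procs, 4)              # B beats D at t=9
```

With these values the dispatch order comes out A-B-C-B-D-E-D-E, matching the claim that B runs before D.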
Round Robin scheduling is similar to FCFS (First Come, First Served) scheduling, but preemption is added. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
Operating System Concepts (Silberschatz)
Now in your case,
the Gantt Chart will look like this:
| A | B | C | D  | E  | B  | D  | E  |
0   3   7   9    13   17   18   19   20
Notice that in this case we start FCFS-style with process A (arrival time 0 ms) and then continue dispatching each process, in the same arrival-time sequence you listed in the question, for one time quantum (4 ms) each.
If the burst time of a process is less than the time quantum then it releases the CPU voluntarily otherwise preemption is applied.
So, the scheduling order will be :
A -> B -> C -> D -> E -> B -> D -> E

Convention of faces in OpenGL cubemapping

What is the convention OpenGL follows for cubemaps?
I followed this convention (found on a website) and used the corresponding GLenum to specify the 6 faces (GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT and so on), but I always get the Y faces wrong, so I have to swap the positive-Y and negative-Y faces. Why?
| |
| pos y |
| |
| | | | |
| neg x | pos z | pos x | neg z |
| | | | |
| |
| |
| neg y |
but I always get wrong Y, so I have to invert Positive Y with Negative Y face. Why?
Ah yes, this is one of the oddest things about cube maps. Rest assured, you're not the only one to trip over it. You see:
Cube maps were specified to follow the RenderMan convention (for whatever reason), and RenderMan assumes the image's origin is in the upper left, contrary to the usual OpenGL behaviour of having the image origin in the lower left. That's why things get flipped in the Y direction. It totally breaks with the usual OpenGL semantics and doesn't make sense at all, but now we're stuck with it.
Take note that upper-left vs. lower-left here is defined in the context of the identity transformation from model space to NDC space.
Here is a convenient diagram showing how the axes work in OpenGL cubemaps:
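The face-selection rule from the OpenGL specification ("Cube Map Texture Selection") can also be written out directly; the function below is an illustration in plain Python, not part of any GL binding. Note that tc is -ry on the ±X and ±Z faces, which is exactly the upper-left-origin (t grows downward) convention described above:

```python
def cubemap_lookup(rx, ry, rz):
    """Map a direction vector to (face, s, t) per the OpenGL cube map rules.

    The largest-magnitude component selects the face; the other two become
    the face coordinates sc and tc.  tc = -ry on the +-X and +-Z faces, so
    t grows *downward* in world space: the RenderMan-style upper-left origin.
    """
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if ax >= ay and ax >= az:
        face = "+x" if rx > 0 else "-x"
        ma, sc, tc = ax, (-rz if rx > 0 else rz), -ry
    elif ay >= az:
        face = "+y" if ry > 0 else "-y"
        ma, sc, tc = ay, rx, (rz if ry > 0 else -rz)
    else:
        face = "+z" if rz > 0 else "-z"
        ma, sc, tc = az, (rx if rz > 0 else -rx), -ry
    # map sc, tc from [-ma, ma] to texture coordinates in [0, 1]
    return face, (sc / ma + 1) / 2, (tc / ma + 1) / 2
```

For example, a direction tilted upward on the +Z face, `cubemap_lookup(0.0, 1.0, 2.0)`, yields t = 0.25: a row near the *top* of the image, which is why a texture loaded with the usual lower-left-origin assumption appears vertically flipped.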