## Turing machine to find most occurring char on tape - turing-machines

### Normalize Count Measure in Tableau

```I am trying to create a plot similar to those created by Google's ngram viewer. I have the ngrams that correspond to year, but some years have much more data than others; as a result, plotting from absolute counts doesn't get me the information I want. I'd like to normalize it so that I get the counts as a percentage of the total samples for that year.
I've found ways to normalize data to ranges in Tableau, but nothing about normalizing by count. I also see that there is a count distinct function, but that doesn't appear to do what I want.
How can I do this in Tableau?
Edit:
Here is some toy data and the desired output.
Toy Data:
+---------+------+
| Pattern | Year |
+---------+------+
| a | 1 |
| a | 1 |
| a | 1 |
| b | 1 |
| b | 1 |
| b | 1 |
| a | 2 |
| b | 2 |
| a | 3 |
| b | 4 |
+---------+------+
Desired Output:
```
```Put [Year] on the Columns shelf, and if it is really a Date field instead of a number - choose any truncation level you'd like or choose exact date. Make sure to treat it as a discrete dimension field (the pill should be blue)
Put [Number of Records] on the Rows shelf. Should be a continuous measure, i.e. SUM([Number of Records])
Put Pattern on the Color shelf.
At this point, you should be looking at a graph of raw counts. To convert them to percentages, right-click the [Number of Records] field on the Rows shelf and choose Quick Table Calc -> Percent of Total. Finally, right-click [Number of Records] a second time and choose Compute Using -> Pattern.
You might want to sort the patterns. One easy way is to just drag them in the color legend.```
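For comparison outside Tableau, the same per-year normalization can be sketched in pandas (an illustration using the toy data's column names, not part of the Tableau workflow itself):

```python
import pandas as pd

# Toy data from the question
df = pd.DataFrame({
    "Pattern": ["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"],
    "Year":    [1, 1, 1, 1, 1, 1, 2, 2, 3, 4],
})

# Count records per (Year, Pattern), then divide by that year's total
counts = df.groupby(["Year", "Pattern"]).size()
pct = counts / counts.groupby(level="Year").transform("sum") * 100

print(pct)
# In year 1, patterns a and b are each 50% of that year's records;
# in years 3 and 4 the single pattern is 100%.
```

This is the same "percent of total, computed per year" that the Quick Table Calc produces.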

### postgresql, select multiple json_array_elements works so weirdly

```I want to use json_array_elements to expand a JSON array, but it behaves strangely. Please see below.
select json_array_elements('[1, 2]') as a, json_array_elements('[2, 3, 4]') as b;
a | b
---+---
1 | 2
2 | 3
1 | 4
2 | 2
1 | 3
2 | 4
(6 rows)
select json_array_elements('[1, 2]') as a, json_array_elements('[2, 3]') as b;
a | b
---+---
1 | 2
2 | 3
(2 rows)
It seems that when the lengths of the arrays are equal, something goes wrong. Can anyone tell me why it behaves like this?
```
```PostgreSQL repeats each list until both happen to be at the end simultaneously.
In other words, the length of the result list is the least common multiple of the length of the input lists.
This behaviour is indeed weird, and will be changed in PostgreSQL v10:
select json_array_elements('[1, 2]') as a, json_array_elements('[2, 3, 4]') as b;
a | b
---+---
1 | 2
2 | 3
| 4
(3 rows)
From the commit message:
While moving SRF evaluation to ProjectSet would allow to retain the old
"least common multiple" behavior when multiple SRFs are present in one
targetlist (i.e. continue returning rows until all SRFs are at the end of
their input at the same time), we decided to instead only return rows till
all SRFs are exhausted, returning NULL for already exhausted ones. We
deemed the previous behavior to be too confusing, unexpected and actually
not particularly useful.```
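The two behaviours can be mimicked in plain Python (a sketch of the semantics only, not of PostgreSQL internals; the function names are mine):

```python
from itertools import cycle, islice, zip_longest
from math import gcd

def lcm_rows(a, b):
    """Pre-v10 semantics: cycle both lists until they end together."""
    n = len(a) * len(b) // gcd(len(a), len(b))  # least common multiple
    return list(zip(islice(cycle(a), n), islice(cycle(b), n)))

def v10_rows(a, b):
    """v10 semantics: stop when all sets are exhausted, pad with NULL."""
    return list(zip_longest(a, b))  # fillvalue=None plays the role of NULL

print(lcm_rows([1, 2], [2, 3, 4]))  # 6 rows, matching the first query
print(lcm_rows([1, 2], [2, 3]))     # 2 rows, matching the second query
print(v10_rows([1, 2], [2, 3, 4]))  # 3 rows: (1, 2), (2, 3), (None, 4)
```

With equal lengths the least common multiple equals the length, so the lists line up after one pass, which is why the second query returns only 2 rows.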

### What is convergence in k-means?

```I have a small question about unsupervised learning, because my teacher has not used this word in any lecture; I came across it while reading tutorials. Does it mean that if the values in the last iteration of clustering are the same as the values from the iteration before, it is called convergence? For example:
          |  c1   |  c2   | cluster
          | (1,0) | (2,1) |
----------|-------|-------|---------------------
  A(1,0)  |  ..   |  ..   | take smallest value
  B(0,1)  |  ..   |  ..   |
  C(2,1)  |  ..   |  ..   |
  D(2,1)  |  ..   |  ..   |
Now, after performing n iterations, if the centroid values come out the same in the n-th iteration as in the one before, i.e. c1 = (1,0) and c2 = (2,1) (taking the average when a cluster has more than a single point), is that convergence?
```
`Ideally, if the values in the last two consecutive iterations are the same, then the algorithm is said to have converged. But often people use a less strict criterion for convergence, e.g. the difference between the values of the last two iterations being less than a particular threshold.`
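Both the strict check and the thresholded check described in the answer can be sketched as follows (a minimal illustration; the function name and the `tol` parameter are assumptions):

```python
import math

def has_converged(old_centroids, new_centroids, tol=1e-6):
    """Converged when no centroid moved more than `tol` between iterations.

    With tol=0 this is the strict criterion (centroids exactly unchanged);
    a small positive tol gives the common, less strict criterion.
    """
    return all(
        math.dist(old, new) <= tol
        for old, new in zip(old_centroids, new_centroids)
    )

# Centroids unchanged between iterations -> converged
print(has_converged([(1, 0), (2, 1)], [(1, 0), (2, 1)]))    # True
# A centroid is still moving -> not converged
print(has_converged([(1, 0), (2, 1)], [(1.5, 0), (2, 1)]))  # False
```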

### order of Processes in this special case of round robin scheduling

```(this was a question asked in the 5th semester of my computer engineering degree)
What will the order of process execution be in the following scenario, given that the scheduling used is Round Robin?
QUANTUM SIZE = 4
Process----Arrival time----Burst time
A---0---3
B---1---5
C---3---2
D---9---5
E---12---5
My real doubt comes at time 9. At this point, A and C have finished execution. B is in the queue and D has just entered. Which one will be executed first? B or D?
Should the overall order be A-B-C-D-E-B-D-E
or A-B-C-B-D-E-D-E?
```
```In Round Robin, processes are executed for a time period called the quantum (here, 4). The Round Robin algorithm gives each process an equal time slice in a circular fashion, and at a point of ambiguity it falls back to First Come First Serve. This is a tie, not a deadlock: B re-entered the ready queue at time 7, before D arrives at time 9, so B should come first.
```
```The order of execution will be A-B-C-B-D-E-D-E. At time 3, i.e. after A finishes, the ready queue holds B and C in that order, so B executes until time 7 (the quantum is less than B's burst time) and is queued back into the circular ready queue, giving A-B-C-B so far.
C then runs from time 7 to time 9 (its burst time is only 2). D arrives at time 9, after B has already been re-queued, so the queue is B, D and the sequence becomes A-B-C-B-D...
The final chart will be
Q= | A | B | C | B | D | E | D | E |
T= 0   3   7   9  10  14  18  19  20
```
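This timeline can be reproduced with a small simulator (a sketch; the function name is mine, and it assumes the common textbook tie-break in which a process re-entering the ready queue keeps its FIFO position, so B, re-queued at time 7, runs before D, which arrives at time 9):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst) sorted by arrival time.
    Returns the dispatch order of CPU slices."""
    remaining = {name: burst for name, _, burst in procs}
    arrivals = deque(procs)
    ready, order, t = deque(), [], 0
    while arrivals or ready:
        # Admit everything that has arrived by now
        while arrivals and arrivals[0][1] <= t:
            ready.append(arrivals.popleft()[0])
        if not ready:                 # CPU idle until the next arrival
            t = arrivals[0][1]
            continue
        name = ready.popleft()
        order.append(name)
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        # Arrivals during the slice enter before the preempted process
        while arrivals and arrivals[0][1] <= t:
            ready.append(arrivals.popleft()[0])
        if remaining[name] > 0:
            ready.append(name)
    return order

procs = [("A", 0, 3), ("B", 1, 5), ("C", 3, 2), ("D", 9, 5), ("E", 12, 5)]
print("-".join(round_robin(procs, 4)))  # A-B-C-B-D-E-D-E
```

Swapping that tie-break (new arrivals ahead of re-queued processes at the same instant) is what produces the alternative A-B-C-D-E-B-D-E ordering.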
```Round Robin scheduling is similar to FCFS (First Come
First Serve) scheduling, but preemption is added. The ready queue is treated as a circular queue. The CPU scheduler goes around
the ready queue, allocating CPU to each process for a time interval of
up to 1 time quantum.
Operating System Concepts (Silberschatz)
the Gantt chart will look like this:
| A | B | C | D | E | B | D | E |
0   3   7   9  13  17  18  19  20
Notice that in this case we first consider FCFS and start with process A (arrival time 0 ms), then continue dispatching each process based on its arrival time (the same sequence you listed in the question), for 1 time quantum (4 ms each).
If the burst time of a process is less than the time quantum, it releases the CPU voluntarily; otherwise preemption is applied.
So, the scheduling order will be:
A -> B -> C -> D -> E -> B -> D -> E```

### Convention of faces in OpenGL cubemapping

```What is the convention OpenGL follows for cubemaps?
I followed this convention (found on a website) and used the corresponding GLenum to specify the 6 faces (GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT, etc.), but I always get the wrong Y, so I have to swap the Positive Y and Negative Y faces. Why?
          ________
         |        |
         | pos y  |
 ________|________|________ ________
|        |        |        |        |
| neg x  | pos z  | pos x  | neg z  |
|________|________|________|________|
         |        |
         | neg y  |
         |________|
```
```but I always get the wrong Y, so I have to swap the Positive Y and Negative Y faces. Why?

Ah, yes, this is one of the oddest things about cube maps. Rest assured, you're not the only one to fall for it. You see:
Cube maps were specified to follow the RenderMan specification (for whatever reason), and RenderMan assumes the image origin is in the upper left, contrary to the usual OpenGL behaviour of the origin being in the lower left. That's why things get swapped in the Y direction. It totally breaks with the usual OpenGL semantics and doesn't make sense at all, but now we're stuck with it.
Note that "upper left" vs. "lower left" here are defined in the context of an identity transformation from model space to NDC space.
```
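The face-selection rule from the OpenGL specification's cube-map table can be sketched in Python (the function name is mine); note that `tc` is `-ry` for four of the six faces, which is exactly the upper-left-origin (RenderMan) convention described above:

```python
def cubemap_face(rx, ry, rz):
    """Map a direction vector to (face, s, t) per the OpenGL spec table
    'Selection of cube map images': the major axis picks the face, and
    sc/tc pick the in-face coordinates."""
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if ax >= ay and ax >= az:                 # major axis is X
        face, sc, tc, ma = ("+X", -rz, -ry, ax) if rx > 0 else ("-X", rz, -ry, ax)
    elif ay >= az:                            # major axis is Y
        face, sc, tc, ma = ("+Y", rx, rz, ay) if ry > 0 else ("-Y", rx, -rz, ay)
    else:                                     # major axis is Z
        face, sc, tc, ma = ("+Z", rx, -ry, az) if rz > 0 else ("-Z", -rx, -ry, az)
    # Map sc/tc from [-ma, ma] to texture coordinates in [0, 1]
    return face, (sc / ma + 1) / 2, (tc / ma + 1) / 2

print(cubemap_face(1.0, 0.0, 0.0))  # ('+X', 0.5, 0.5): face centre
print(cubemap_face(0.0, 1.0, 0.0))  # ('+Y', 0.5, 0.5)
```

Because `t` grows downward in this scheme, face images authored with a lower-left origin appear vertically flipped, which is why swapping (or flipping) the Y faces "fixes" the result.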