Dataset schema:

- id: string (length 6–26)
- chapter: string (36 classes)
- section: string (length 3–5)
- title: string (length 3–27)
- source_file: string (length 13–29)
- question_markdown: string (length 17–6.29k)
- answer_markdown: string (length 3–6.76k)
- code_blocks: list (length 0–9)
- has_images: bool (2 classes)
- image_refs: list (length 0–7)
05-5.2-2
05
5.2
5.2-2
docs/Chap05/5.2.md
In $\text{HIRE-ASSISTANT}$, assuming that the candidates are presented in a random order, what is the probability that you hire exactly twice?
Note that - Candidate $1$ is always hired - The best candidate (candidate whose rank is $n$) is always hired - If the best candidate is candidate $1$, then that's the only candidate hired. In order for $\text{HIRE-ASSISTANT}$ to hire exactly twice, candidate $1$ should have rank $i$, where $1 \le i \le n - 1$, and al...
[]
false
[]
05-5.2-3
05
5.2
5.2-3
docs/Chap05/5.2.md
Use indicator random variables to compute the expected value of the sum of $n$ dice.
The expectation of a single die $X_k$ is $$ \begin{aligned} \text E[X_k] & = \sum_{i = 1}^6 i \Pr\\{X_k = i\\} \\\\ & = \frac{1 + 2 + 3 + 4 + 5 + 6}{6} \\\\ & = \frac{21}{6} \\\\ & = 3.5. \end{aligned} $$ As for multiple dice, $$ \begin{aligned} \text E[X] & = \text E\Bigg[\sum_{...
[]
false
[]
05-5.2-4
05
5.2
5.2-4
docs/Chap05/5.2.md
Use indicator random variables to solve the following problem, which is known as the **_hat-check problem_**. Each of $n$ customers gives a hat to a hat-check person at a restaurant. The hat-check person gives the hats back to the customers in a random order. What is the expected number of customers who get back their ...
Let $X$ be the number of customers who get back their own hat and $X_i$ be the indicator random variable that customer $i$ gets his hat back. The probability that an individual gets his hat back is $\frac{1}{n}$. Thus we have $$E[X] = E\Bigg[\sum_{i = 1}^n X_i\Bigg] = \sum_{i = 1}^n E[X_i] = \sum_{i = 1}^n \frac{1}{n}...
[]
false
[]
05-5.2-5
05
5.2
5.2-5
docs/Chap05/5.2.md
Let $A[1..n]$ be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i, j)$ is called an **_inversion_** of $A$. (See Problem 2-4 for more on inversions.) Suppose that the elements of $A$ form a uniform random permutation of $\langle 1, 2, \ldots, n \rangle$. Use indicator random variables t...
Let $X_{i, j}$ for $i < j$ be the indicator of $A[i] > A[j]$. We have that the expected number of inversions $$ \begin{aligned} \text E\Bigg[\sum_{i < j} X_{i, j}\Bigg] & = \sum_{i < j} E[X_{i, j}] \\\\ & = \sum_{i = 1}^{n - 1}\sum_{j = i + 1}^n \Pr\\{A[i] > A[j]\\} \\\\ & = \frac{1}{2} \sum_{i = 1}^{n - 1...
[]
false
[]
05-5.3-1
05
5.3
5.3-1
docs/Chap05/5.3.md
Professor Marceau objects to the loop invariant used in the proof of Lemma 5.5. He questions whether it is true prior to the first iteration. He reasons that we could just as easily declare that an empty subarray contains no $0$-permutations. Therefore, the probability that an empty subarray contains a $0$-permutation ...
Modify the algorithm by unrolling the $i = 1$ case. ```cpp swap(A[1], A[RANDOM(1, n)]) for i = 2 to n swap(A[i], A[RANDOM(i, n)]) ``` Modify the proof of Lemma 5.5 by starting with $i = 2$ instead of $i = 1$. This resolves the issue of $0$-permutations.
[ { "lang": "cpp", "code": "swap(A[1], A[RANDOM(1, n)])\nfor i = 2 to n\n swap(A[i], A[RANDOM(i, n)])" } ]
false
[]
05-5.3-2
05
5.3
5.3-2
docs/Chap05/5.3.md
Professor Kelp decides to write a procedure that produces at random any permutation besides the identity permutation. He proposes the following procedure: ```cpp PERMUTE-WITHOUT-IDENTITY(A) n = A.length for i = 1 to n - 1 swap A[i] with A[RANDOM(i + 1, n)] ``` Does this code do what Professor Kelp intends?
The code does not do what he intends. Suppose $A = [1, 2, 3]$. If the algorithm worked as proposed, then with nonzero probability the algorithm should output $[3, 2, 1]$. On the first iteration we swap $A[1]$ with either $A[2]$ or $A[3]$. Since we want $[3, 2, 1]$ and will never again alter $A[1]$, we must necessarily ...
[ { "lang": "cpp", "code": "> PERMUTE-WITHOUT-IDENTITY(A)\n> n = A.length\n> for i = 1 to n - 1\n> swap A[i] with A[RANDOM(i + 1, n)]\n>" } ]
false
[]
05-5.3-3
05
5.3
5.3-3
docs/Chap05/5.3.md
Suppose that instead of swapping element $A[i]$ with a random element from the subarray $A[i..n]$, we swapped it with a random element from anywhere in the array: ```cpp PERMUTE-WITH-ALL(A) n = A.length for i = 1 to n swap A[i] with A[RANDOM(1, n)] ``` Does this code produce a uniform random permutation? Why or why n...
Consider the case of $n = 3$. In running the algorithm, three IID choices will be made, so there are $27$ possible end states, each with equal probability. There are $3! = 6$ possible orderings; these would have to appear equally often, but that cannot happen because $6$ does not divide $27$.
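As a sanity check, the $n = 3$ case can be enumerated directly. The sketch below (Python; the helper name `permute_with_all` is mine, not from the text) counts how often each permutation arises over the $27$ equally likely sequences of random choices:

```python
# Enumerate all 3^3 = 27 equally likely sequences of RANDOM(1, n) calls
# made by PERMUTE-WITH-ALL for n = 3 and count each resulting permutation.
from itertools import product
from collections import Counter

def permute_with_all(a, choices):
    a = list(a)
    for i, r in enumerate(choices):   # swap A[i] with A[RANDOM(1, n)]
        a[i], a[r] = a[r], a[i]
    return tuple(a)

counts = Counter(permute_with_all((1, 2, 3), c)
                 for c in product(range(3), repeat=3))
print(sorted(counts.values()))        # six frequencies summing to 27
```

Since $27$ is not divisible by $6$, the six frequencies cannot all be equal, confirming that the output distribution is not uniform.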
[ { "lang": "cpp", "code": "> PERMUTE-WITH-ALL(A)\n> n = A.length\n> for i = 1 to n\n> swap A[i] with A[RANDOM(1, n)]\n>" } ]
false
[]
05-5.3-4
05
5.3
5.3-4
docs/Chap05/5.3.md
Professor Armstrong suggests the following procedure for generating a uniform random permutation: ```cpp PERMUTE-BY-CYCLIC(A) n = A.length let B[1..n] be a new array offset = RANDOM(1, n) for i = 1 to n dest = i + offset if dest > n dest = dest - n B[dest] = A[i] return B ``` Show that each element $A[i]$ has a $1 / ...
Fix a position $j$ and an index $i$. We'll show that the probability that $A[i]$ winds up in position $j$ is $1 / n$. The probability that $B[j] = A[i]$ is the probability that $dest = j$, which is the probability that $i + offset$ or $i + offset - n$ is equal to $j$, which is $1 / n$. This algorithm can't possibly return ...
[ { "lang": "cpp", "code": "> PERMUTE-BY-CYCLIC(A)\n> n = A.length\n> let B[1..n] be a new array\n> offset = RANDOM(1, n)\n> for i = 1 to n\n> dest = i + offset\n> if dest > n\n> dest = dest - n\n> B[dest] = A[i]\n> return B\n>" } ]
false
[]
05-5.3-5
05
5.3
5.3-5 $\star$
docs/Chap05/5.3.md
Prove that in the array $P$ in procedure $\text{PERMUTE-BY-SORTING}$, the probability that all elements are unique is at least $1 - 1 / n$.
Let $\Pr\\{j\\}$ be the probability that the element with index $j$ is unique. If there are $n^3$ possible priorities, then $\Pr\\{j\\} = 1 - \frac{j - 1}{n^3}$. $$ \begin{aligned} \Pr\\{1 \cap 2 \cap 3 \cap \ldots\\} & = \Pr\\{1\\} \cdot \Pr\\{2 \mid 1\\} \cdot \Pr\\{3 \mid 1 \cap 2\\} \cdots \\\\ & = 1 (1 - \fr...
[]
false
[]
05-5.3-6
05
5.3
5.3-6
docs/Chap05/5.3.md
Explain how to implement the algorithm $\text{PERMUTE-BY-SORTING}$ to handle the case in which two or more priorities are identical. That is, your algorithm should produce a uniform random permutation, even if two or more priorities are identical.
```cpp PERMUTE-BY-SORTING(A) let P[1..n] be a new array for i = 1 to n P[i] = i for i = 1 to n swap P[i] with P[RANDOM(i, n)] ```
[ { "lang": "cpp", "code": "PERMUTE-BY-SORTING(A)\n let P[1..n] be a new array\n for i = 1 to n\n P[i] = i\n for i = 1 to n\n swap P[i] with P[RANDOM(i, n)]" } ]
false
[]
05-5.3-7
05
5.3
5.3-7
docs/Chap05/5.3.md
Suppose we want to create a **_random sample_** of the set $\\{1, 2, 3, \ldots, n\\}$, that is, an $m$-element subset $S$, where $0 \le m \le n$, such that each $m$-subset is equally likely to be created. One way would be to set $A[i] = i$ for $i = 1, 2, 3, \ldots, n$, call $\text{RANDOMIZE-IN-PLACE}(A)$, and then take...
We prove that it produces a random $m$ subset by induction on $m$. It is obviously true if $m = 0$ as there is only one size $m$ subset of $[n]$. Suppose $S$ is a uniform $m − 1$ subset of $n − 1$, that is, $\forall j \in [n - 1]$, $\Pr[j \in S] = \frac{m - 1}{n - 1}$. If we let $S'$ denote the returned set, suppose f...
[ { "lang": "cpp", "code": "> RANDOM-SAMPLE(m, n)\n> if m == 0\n> return Ø\n> else S = RANDOM-SAMPLE(m - 1, n - 1)\n> i = RANDOM(1, n)\n> if i ∈ S\n> S = S ∪ {n}\n> else S = S ∪ {i}\n> return S\n>" } ]
false
[]
05-5.4-1
05
5.4
5.4-1
docs/Chap05/5.4.md
How many people must there be in a room before the probability that someone has the same birthday as you do is at least $1 / 2$? How many people must there be before the probability that at least two people have a birthday on July 4 is greater than $1 / 2$?
The probability of a person not having the same birthday as me is $(n - 1) / n$. Since birthdays are independent, the probability that none of $k$ people has the same birthday as me is that quantity raised to the $k$th power. We apply the same approach as the text - we take the complementary event and solve it for $k$, $$ \begin{aligned} 1 - \big(\frac{n - 1}{n}\big)^k &...
[]
false
[]
05-5.4-2
05
5.4
5.4-2
docs/Chap05/5.4.md
Suppose that we toss balls into $b$ bins until some bin contains two balls. Each toss is independent, and each ball is equally likely to end up in any bin. What is the expected number of ball tosses?
This is a restatement of the birthday paradox with $b$ equally likely "days": we toss balls until one lands in an already occupied bin. Let $X$ be the number of tosses. Then $\Pr\\{X > k\\}$ is the probability that the first $k$ tosses land in distinct bins, so $$\text E[X] = \sum_{k = 0}^b \Pr\\{X > k\\} = \sum_{k = 0}^b \frac{b!}{(b - k)!\,b^k},$$ which is $\Theta(\sqrt b)$ (approximately $\sqrt{\pi b / 2}$ for large $b$).
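The expectation can also be computed exactly from $\text E[X] = \sum_{k \ge 0} \Pr\\{X > k\\}$, where $\Pr\\{X > k\\}$ is the probability that the first $k$ tosses land in distinct bins. A Python sketch (the function name is mine):

```python
# E[X] = sum_{k=0}^{b} Pr{X > k}, where Pr{X > k} = b!/((b-k)! b^k)
# is the chance that the first k tosses occupy k distinct bins.
def expected_tosses(b):
    e, p = 0.0, 1.0          # p tracks Pr{X > k}, starting at k = 0
    for k in range(b + 1):
        e += p
        p *= (b - k) / b     # toss k+1 must avoid the k occupied bins
    return e

print(expected_tosses(1))    # 2.0: with a single bin, the second toss collides
print(expected_tosses(365))  # roughly 24.6, the classic birthday figure
```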
[]
false
[]
05-5.4-3
05
5.4
5.4-3 $\star$
docs/Chap05/5.4.md
For the analysis of the birthday paradox, is it important that the birthdays be mutually independent, or is pairwise independence sufficient? Justify your answer.
Pairwise independence is sufficient: the derivation after $\text{(5.6)}$ only uses the probability that a given *pair* of people share a birthday, never the joint distribution of three or more birthdays.
[]
false
[]
05-5.4-4
05
5.4
5.4-4 $\star$
docs/Chap05/5.4.md
How many people should be invited to a party in order to make it likely that there are *three* people with the same birthday?
The answer is $88$. I reached it by trial and error. But let's analyze it with indicator random variables. Let $X_{ijk}$ be the indicator random variable for the event that the people with indices $i$, $j$, and $k$ have the same birthday. The probability is $1 / n^2$. Then, $$ \begin{aligned} \text E[X] & = \sum_{i ...
[]
false
[]
05-5.4-5
05
5.4
5.4-5 $\star$
docs/Chap05/5.4.md
What is the probability that a $k$-string over a set of size $n$ forms a $k$-permutation? How does this question relate to the birthday paradox?
$$ \begin{aligned} \Pr\\{k\text{-perm in }n\\} & = 1 \cdot \frac{n - 1}{n} \cdot \frac{n - 2}{n} \cdots \frac{n - k + 1}{n} \\\\ & = \frac{n!}{n^k(n - k)!}. \end{aligned} $$ This is the complementary event to the birthday problem, that is, the chance of $k$ people have distinct birthdays.
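The closed form can be checked by brute force for small $n$ and $k$ (a Python sketch; the function name is mine):

```python
# Enumerate all n^k strings over an n-symbol alphabet and compare the
# fraction with all-distinct symbols against n!/(n^k (n-k)!).
from itertools import product
from math import factorial

def prob_all_distinct(n, k):
    strings = list(product(range(n), repeat=k))
    distinct = sum(1 for s in strings if len(set(s)) == k)
    return distinct / len(strings)

for n, k in [(4, 2), (5, 3), (6, 6)]:
    formula = factorial(n) / (n**k * factorial(n - k))
    assert abs(prob_all_distinct(n, k) - formula) < 1e-12
print("formula matches brute force")
```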
[]
false
[]
05-5.4-6
05
5.4
5.4-6 $\star$
docs/Chap05/5.4.md
Suppose that $n$ balls are tossed into $n$ bins, where each toss is independent and the ball is equally likely to end up in any bin. What is the expected number of empty bins? What is the expected number of bins with exactly one ball?
Let $X_i$ be the indicator variable that bin $i$ is empty after all balls are tossed and $X$ be the random variable that gives the number of empty bins. Thus we have $$E[X] = \sum_{i = 1}^n E[X_i] = \sum_{i = 1}^n \bigg(\frac{n - 1}{n}\bigg)^n = n\bigg(\frac{n - 1}{n}\bigg)^n.$$ Let $X_i$ be the indicator variable th...
[]
false
[]
05-5.4-7
05
5.4
5.4-7 $\star$
docs/Chap05/5.4.md
Sharpen the lower bound on streak length by showing that in $n$ flips of a fair coin, the probability is less than $1 / n$ that no streak longer than $\lg n - 2\lg\lg n$ consecutive heads occurs.
We split up the $n$ flips into $n / s$ groups, where we pick $s = \lg n - 2\lg\lg n$. We will show that at least one of these groups comes up all heads with probability at least $\frac{n - 1}{n}$. So, the probability the group starting in position $i$ comes up all heads is $$\Pr(A_{i,\lg n - 2\lg(\lg n)}) = \frac{1}...
[]
false
[]
05-5-1
05
5-1
5-1
docs/Chap05/Problems/5-1.md
With a $b$-bit counter, we can ordinarily only count up to $2^b - 1$. With R. Morris's **_probabilistic counting_**, we can count up to a much larger value at the expense of some loss of precision. We let a counter value of $i$ represent a count of $n_i$ for $i = 0, 1, \ldots, 2^b - 1$, where the $n_i$ form an in...
**a.** To show that the expected value represented by the counter after $n$ $\text{INCREMENT}$ operations have been performed is exactly $n$, we can show that each expected increment represented by the counter is $1$. Assume the initial value of the counter is $i$, increasing the number represented from $n_i$ to $n_{i...
[]
false
[]
05-5-2
05
5-2
5-2
docs/Chap05/Problems/5-2.md
The problem examines three algorithms for searching for a value $x$ in an unsorted array $A$ consisting of $n$ elements. Consider the following randomized strategy: pick a random index $i$ into $A$. If $A[i] = x$, then we terminate; otherwise, we continue the search by picking a new random index into $A$. We continue...
**a.** ```cpp RANDOM-SEARCH(x, A, n) v = Ø // a set (or bitmap, etc.) of visited indices while |v| < n: i = RANDOM(1, n) if i ∉ v: // only use i if it hasn't been picked before if A[i] = x: return i else: v ...
[ { "lang": "cpp", "code": "RANDOM-SEARCH(x, A, n)\n v = Ø // a set (or bitmap, etc.) of visited indices\n while |v| < n:\n i = RANDOM(1, n)\n if i ∉ v: // only use i if it hasn't been picked before\n if A[i] = x:\n return i\n ...
false
[]
06-6.1-1
06
6.1
6.1-1
docs/Chap06/6.1.md
What are the minimum and maximum numbers of elements in a heap of height $h$?
At least $2^h$ and at most $2^{h + 1} − 1$. This can be seen because a complete binary tree of depth $h − 1$ has $\sum_{i = 0}^{h - 1} 2^i = 2^h - 1$ elements, and the number of elements in a heap of depth $h$ is between the number for a complete binary tree of depth $h − 1$ exclusive and the number in a complete binary tre...
[]
false
[]
06-6.1-2
06
6.1
6.1-2
docs/Chap06/6.1.md
Show that an $n$-element heap has height $\lfloor \lg n \rfloor$.
Write $n = 2^m − 1 + k$, where $m$ is as large as possible, so that $1 \le k \le 2^m$. Then the heap consists of a complete binary tree of height $m − 1$, along with $k$ additional leaves along the bottom. The height of the root is the length of the longest simple path to one of these $k$ leaves, which must have length $m$. It is clear from the w...
[]
false
[]
06-6.1-3
06
6.1
6.1-3
docs/Chap06/6.1.md
Show that in any subtree of a max-heap, the root of the subtree contains the largest value occurring anywhere in the subtree.
If the largest element in the subtree were somewhere other than the root, it would have a parent inside the subtree. It would then be larger than its parent, so the heap property would be violated at the parent of the maximum element in the subtree.
[]
false
[]
06-6.1-4
06
6.1
6.1-4
docs/Chap06/6.1.md
Where in a max-heap might the smallest element reside, assuming that all elements are distinct?
In any of the leaves, that is, elements with index $\lfloor n / 2 \rfloor + k$, where $k \geq 1$ (see exercise 6.1-7), that is, in the second half of the heap array.
[]
false
[]
06-6.1-5
06
6.1
6.1-5
docs/Chap06/6.1.md
Is an array that is in sorted order a min-heap?
Yes. For any index $i$, both $\text{LEFT}(i)$ and $\text{RIGHT}(i)$ are larger than $i$, and thus the elements they index are greater than or equal to $A[i]$ (because the array is sorted).
[]
false
[]
06-6.1-6
06
6.1
6.1-6
docs/Chap06/6.1.md
Is the array with values $\langle 23, 17, 14, 6, 13, 10, 1, 5, 7, 12 \rangle$ a max-heap?
No. $A[9] = 7$ is greater than its parent $A[4] = 6$ (note that $\text{PARENT}(9) = \lfloor 9 / 2 \rfloor = 4$). This violates the max-heap property.
[]
false
[]
06-6.1-7
06
6.1
6.1-7
docs/Chap06/6.1.md
Show that, with the array representation for storing an $n$-element heap, the leaves are the nodes indexed by $\lfloor n / 2 \rfloor + 1, \lfloor n / 2 \rfloor + 2, \ldots, n$.
Let's take the left child of the node indexed by $\lfloor n / 2 \rfloor + 1$. $$ \begin{aligned} \text{LEFT}(\lfloor n / 2 \rfloor + 1) & = 2(\lfloor n / 2 \rfloor + 1) \\\\ & > 2(n / 2 - 1) + 2 \\\\ & = n - 2 + 2 \\\\ & = n. \end{aligned} $$ Since the index of the left child is larger than the number...
[]
false
[]
06-6.2-1
06
6.2
6.2-1
docs/Chap06/6.2.md
Using figure 6.2 as a model, illustrate the operation of $\text{MAX-HEAPIFY}(A, 3)$ on the array $A = \langle 27, 17, 3, 16, 13, 10, 1, 5, 7, 12, 4, 8, 9, 0 \rangle$.
$$ \begin{aligned} \langle 27, 17, 3, 16, 13, 10, 1, 5, 7, 12, 4, 8, 9, 0 \rangle \\\\ \langle 27, 17, 10, 16, 13, 3, 1, 5, 7, 12, 4, 8, 9, 0 \rangle \\\\ \langle 27, 17, 10, 16, 13, 9, 1, 5, 7, 12, 4, 8, 3, 0 \rangle \\\\ \end{aligned} $$
[]
false
[]
06-6.2-2
06
6.2
6.2-2
docs/Chap06/6.2.md
Starting with the procedure $\text{MAX-HEAPIFY}$, write pseudocode for the procedure $\text{MIN-HEAPIFY}(A, i)$, which performs the corresponding manipulation on a min-heap. How does the running time of $\text{MIN-HEAPIFY}$ compare to that of $\text{MAX-HEAPIFY}$?
```cpp MIN-HEAPIFY(A, i) l = LEFT(i) r = RIGHT(i) if l ≤ A.heap-size and A[l] < A[i] smallest = l else smallest = i if r ≤ A.heap-size and A[r] < A[smallest] smallest = r if smallest != i exchange A[i] with A[smallest] MIN-HEAPIFY(A, smallest) ``` The running tim...
[ { "lang": "cpp", "code": "MIN-HEAPIFY(A, i)\n l = LEFT(i)\n r = RIGHT(i)\n if l ≤ A.heap-size and A[l] < A[i]\n smallest = l\n else smallest = i\n if r ≤ A.heap-size and A[r] < A[smallest]\n smallest = r\n if smallest != i\n exchange A[i] with A[smallest]\n MIN-...
false
[]
06-6.2-3
06
6.2
6.2-3
docs/Chap06/6.2.md
What is the effect of calling $\text{MAX-HEAPIFY}(A, i)$ when the element $A[i]$ is larger than its children?
No effect. The comparisons are carried out, $A[i]$ is found to be the largest, and the procedure simply returns.
[]
false
[]
06-6.2-4
06
6.2
6.2-4
docs/Chap06/6.2.md
What is the effect of calling $\text{MAX-HEAPIFY}(A, i)$ for $i > A.heap\text-size / 2$?
No effect. In that case, node $i$ is a leaf. Both $\text{LEFT}$ and $\text{RIGHT}$ return indices that fail the comparison with the heap size, so $largest$ is set to $i$ and the procedure just returns.
[]
false
[]
06-6.2-5
06
6.2
6.2-5
docs/Chap06/6.2.md
The code for $\text{MAX-HEAPIFY}$ is quite efficient in terms of constant factors, except possibly for the recursive call in line 10, which might cause some compilers to produce inefficient code. Write an efficient $\text{MAX-HEAPIFY}$ that uses an iterative control construct (a loop) instead of recursion.
```cpp MAX-HEAPIFY(A, i) while true l = LEFT(i) r = RIGHT(i) if l ≤ A.heap-size and A[l] > A[i] largest = l else largest = i if r ≤ A.heap-size and A[r] > A[largest] largest = r if largest == i return exchange A[i] with A[la...
[ { "lang": "cpp", "code": "MAX-HEAPIFY(A, i)\n while true\n l = LEFT(i)\n r = RIGHT(i)\n if l ≤ A.heap-size and A[l] > A[i]\n largest = l\n else largest = i\n if r ≤ A.heap-size and A[r] > A[largest]\n largest = r\n if largest == i\n ...
false
[]
06-6.2-6
06
6.2
6.2-6
docs/Chap06/6.2.md
Show that the worst-case running time of $\text{MAX-HEAPIFY}$ on a heap of size $n$ is $\Omega(\lg n)$. ($\textit{Hint:}$ For a heap with $n$ nodes, give node values that cause $\text{MAX-HEAPIFY}$ to be called recursively at every node on a simple path from the root down to a leaf.)
Consider the heap resulting from $A$ where $A[1] = 1$ and $A[i] = 2$ for $2 \le i \le n$. Since $1$ is the smallest element of the heap, it must be swapped through each level of the heap until it is a leaf node. Since the heap has height $\lfloor \lg n\rfloor$, $\text{MAX-HEAPIFY}$ has worst-case time $\Omega(\lg n)$.
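The sinking behavior is easy to verify mechanically. The sketch below (Python, $0$-based indices; the function name is mine) runs an iterative $\text{MAX-HEAPIFY}$ on $A = [1, 2, 2, \ldots, 2]$ and counts the levels descended:

```python
# Count how many levels MAX-HEAPIFY descends from the root; on the
# adversarial input the 1 must sink all the way to a leaf.
def max_heapify_depth(a, i):
    n = len(a)
    depth = 0
    while True:
        l, r = 2 * i + 1, 2 * i + 2      # 0-based LEFT and RIGHT
        largest = i
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return depth
        a[i], a[largest] = a[largest], a[i]
        i = largest
        depth += 1

for h in range(1, 12):
    n = 2 ** (h + 1) - 1                 # full binary tree of height h
    a = [1] + [2] * (n - 1)
    assert max_heapify_depth(a, 0) == h  # the 1 is swapped down h levels
print("descent depth equals the heap height")
```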
[]
false
[]
06-6.3-1
06
6.3
6.3-1
docs/Chap06/6.3.md
Using figure 6.3 as a model, illustrate the operation of $\text{BUILD-MAX-HEAP}$ on the array $A = \langle 5, 3, 17, 10, 84, 19, 6, 22, 9 \rangle$.
$$ \begin{aligned} \langle 5, 3, 17, 10, 84, 19, 6, 22, 9 \rangle \\\\ \langle 5, 3, 17, 22, 84, 19, 6, 10, 9 \rangle \\\\ \langle 5, 3, 19, 22, 84, 17, 6, 10, 9 \rangle \\\\ \langle 5, 84, 19, 22, 3, 17, 6, 10, 9 \rangle \\\\ \langle 84, 5, 19, 22, 3, 17, 6, 10, 9 \rangle \\\\ \langle 84, 22, 19, 5, 3, 17,...
[]
false
[]
06-6.3-2
06
6.3
6.3-2
docs/Chap06/6.3.md
Why do we want the loop index $i$ in line 2 of $\text{BUILD-MAX-HEAP}$ to decrease from $\lfloor A.length / 2 \rfloor$ to $1$ rather than increase from $1$ to $\lfloor A.length/2 \rfloor$?
Otherwise we won't be allowed to call $\text{MAX-HEAPIFY}$, since it will fail the condition of having the subtrees be max-heaps. That is, if we start with $1$, there is no guarantee that $A[2]$ and $A[3]$ are roots of max-heaps.
[]
false
[]
06-6.3-3
06
6.3
6.3-3
docs/Chap06/6.3.md
Show that there are at most $\lceil n / 2^{h + 1} \rceil$ nodes of height $h$ in any $n$-element heap.
From 6.1-7, we know that the leaves of a heap are the nodes indexed by $$\left\lfloor n / 2 \right\rfloor + 1, \left\lfloor n / 2 \right\rfloor + 2, \dots, n.$$ Note that those elements correspond to the second half of the heap array (plus the middle element if $n$ is odd). Thus, the number of leaves in any heap of ...
[]
false
[]
06-6.4-1
06
6.4
6.4-1
docs/Chap06/6.4.md
Using figure 6.4 as a model, illustrate the operation of $\text{HEAPSORT}$ on the array $A = \langle 5, 13, 2, 25, 7, 17, 20, 8, 4 \rangle$.
$$ \begin{aligned} \langle 5, 13, 2, 25, 7, 17, 20, 8, 4 \rangle \\\\ \langle 5, 13, 20, 25, 7, 17, 2, 8, 4 \rangle \\\\ \langle 5, 25, 20, 13, 7, 17, 2, 8, 4 \rangle \\\\ \langle 25, 5, 20, 13, 7, 17, 2, 8, 4 \rangle \\\\ \langle 25, 13, 20, 5, 7, 17, 2, 8, 4 \rangle \\\\ \langle 25, 13, 20, 8, 7,...
[]
false
[]
06-6.4-2
06
6.4
6.4-2
docs/Chap06/6.4.md
Argue the correctness of $\text{HEAPSORT}$ using the following loop invariant: At the start of each iteration of the **for** loop of lines 2-5, the subarray $A[1..i]$ is a max-heap containing the $i$ smallest elements of $A[1..n]$, and the subarray $A[i + 1..n]$ contains the $n - i$ largest elements of $A[1..n]$, sort...
**Initialization:** The subarray $A[i + 1..n]$ is empty, thus the invariant holds. **Maintenance:** $A[1]$ is the largest element in $A[1..i]$ and it is smaller than the elements in $A[i + 1..n]$. When we put it in the $i$th position, then $A[i..n]$ contains the largest elements, sorted. Decreasing the heap size and c...
[]
false
[]
06-6.4-3
06
6.4
6.4-3
docs/Chap06/6.4.md
What is the running time of $\text{HEAPSORT}$ on an array $A$ of length $n$ that is already sorted in increasing order? What about decreasing order?
Both of them are $\Theta(n\lg n)$. If the array is sorted in increasing order, the algorithm will need to convert it to a heap first, which takes $O(n)$. Afterwards, however, there are $n - 1$ calls to $\text{MAX-HEAPIFY}$, and the call on a heap of size $k$ performs a full $\lg k$ descent, since the root is replaced by a leaf value. Since: $$\sum_{k = 1}^{n - 1}\lg k = \lg((n ...
[]
false
[]
06-6.4-4
06
6.4
6.4-4
docs/Chap06/6.4.md
Show that the worst-case running time of $\text{HEAPSORT}$ is $\Omega(n\lg n)$.
This is essentially the first part of exercise 6.4-3. Whenever we have an array that is already sorted, we take linear time to convert it to a max-heap and then $\Theta(n\lg n)$ time to sort it.
[]
false
[]
06-6.4-5
06
6.4
6.4-5 $\star$
docs/Chap06/6.4.md
Show that when all elements are distinct, the best-case running time of $\text{HEAPSORT}$ is $\Omega(n\lg n)$.
This proved to be quite tricky. My initial solution was wrong. Also, heapsort appeared in 1964, but the lower bound was proved by Schaffer and Sedgewick in 1992. It's evil to put this as an exercise. Let's assume that the heap is a full binary tree with $n = 2^k - 1$. There are $2^{k - 1}$ leaves and $2^{k - 1} - 1$ inne...
[]
false
[]
06-6.5-1
06
6.5
6.5-1
docs/Chap06/6.5.md
Illustrate the operation $\text{HEAP-EXTRACT-MAX}$ on the heap $A = \langle 15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1 \rangle$.
1. Original heap. ![](../img/6.5-1-1.png) 2. Extract the max node $15$, then move $1$ to the top of the heap. ![](../img/6.5-1-2.png) 3. Since $13 > 9 > 1$, swap $1$ and $13$. ![](../img/6.5-1-3.png) 4. Since $12 > 5 > 1$, swap $1$ and $12$. ![](../img/6.5-1-4.png) 5. Since $6 > 2 > 1$, swap $1$...
[]
true
[ "../img/6.5-1-1.png", "../img/6.5-1-2.png", "../img/6.5-1-3.png", "../img/6.5-1-4.png", "../img/6.5-1-5.png" ]
06-6.5-2
06
6.5
6.5-2
docs/Chap06/6.5.md
Illustrate the operation of $\text{MAX-HEAP-INSERT}(A, 10)$ on the heap $A = \langle 15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1 \rangle$.
1. Original heap. ![](../img/6.5-2-1.png) 2. Since $\text{MAX-HEAP-INSERT}(A, 10)$ is called, we append a node assigned value $-\infty$. ![](../img/6.5-2-2.png) 3. Update the $key$ value of the new node. ![](../img/6.5-2-3.png) 4. Since the parent $key$ is smaller than $10$, the nodes are swapped. ...
[]
true
[ "../img/6.5-2-1.png", "../img/6.5-2-2.png", "../img/6.5-2-3.png", "../img/6.5-2-4.png", "../img/6.5-2-5.png" ]
06-6.5-3
06
6.5
6.5-3
docs/Chap06/6.5.md
Write pseudocode for the procedures $\text{HEAP-MINIMUM}$, $\text{HEAP-EXTRACT-MIN}$, $\text{HEAP-DECREASE-KEY}$, and $\text{MIN-HEAP-INSERT}$ that implement a min-priority queue with a min-heap.
```cpp HEAP-MINIMUM(A) return A[1] ``` ```cpp HEAP-EXTRACT-MIN(A) if A.heap-size < 1 error "heap underflow" min = A[1] A[1] = A[A.heap-size] A.heap-size = A.heap-size - 1 MIN-HEAPIFY(A, 1) return min ``` ```cpp HEAP-DECREASE-KEY(A, i, key) if key > A[i] error "new key i...
[ { "lang": "cpp", "code": "HEAP-MINIMUM(A)\n return A[1]" }, { "lang": "cpp", "code": "HEAP-EXTRACT-MIN(A)\n if A.heap-size < 1\n error \"heap underflow\"\n min = A[1]\n A[1] = A[A.heap-size]\n A.heap-size = A.heap-size - 1\n MIN-HEAPIFY(A, 1)\n return min" }, { ...
false
[]
06-6.5-4
06
6.5
6.5-4
docs/Chap06/6.5.md
Why do we bother setting the key of the inserted node to $-\infty$ in line 2 of $\text{MAX-HEAP-INSERT}$ when the next thing we do is increase its key to the desired value?
In order to pass the guard clause of $\text{HEAP-INCREASE-KEY}$, which signals an error when the new key is smaller than the current one. If we didn't initialize the key to $-\infty$, we would have to drop the check $key < A[i]$.
[]
false
[]
06-6.5-5
06
6.5
6.5-5
docs/Chap06/6.5.md
Argue the correctness of $\text{HEAP-INCREASE-KEY}$ using the following loop invariant: At the start of each iteration of the **while** loop of lines 4-6, the subarray $A[1 ..A.heap\text-size]$ satisfies the max-heap property, except that there may be one violation: $A[i]$ may be larger than $A[\text{PARENT}(i)]$. Yo...
**Initialization:** $A$ is a heap except that $A[i]$ might be larger than its parent, because it has been modified. $A[i]$ is larger than its children, because otherwise the guard clause would fail and the loop would not be entered (the new value is larger than the old value and the old value is larger than the childre...
[]
false
[]
06-6.5-6
06
6.5
6.5-6
docs/Chap06/6.5.md
Each exchange operation on line 5 of $\text{HEAP-INCREASE-KEY}$ typically requires three assignments. Show how to use the idea of the inner loop of $\text{INSERTION-SORT}$ to reduce the three assignments down to just one assignment.
```cpp HEAP-INCREASE-KEY(A, i, key) if key < A[i] error "new key is smaller than current key" while i > 1 and A[PARENT(i)] < key A[i] = A[PARENT(i)] i = PARENT(i) A[i] = key ```
[ { "lang": "cpp", "code": "HEAP-INCREASE-KEY(A, i, key)\n if key < A[i]\n error \"new key is smaller than current key\"\n while i > 1 and A[PARENT(i)] < key\n A[i] = A[PARENT(i)]\n i = PARENT(i)\n A[i] = key" } ]
false
[]
06-6.5-7
06
6.5
6.5-7
docs/Chap06/6.5.md
Show how to implement a first-in, first-out queue with a priority queue. Show how to implement a stack with a priority queue. (Queues and stacks are defined in section 10.1).
Both are simple: for a stack we keep adding elements with increasing priority, while for a queue we add them with decreasing priority. For the stack we can set the new priority to $\text{HEAP-MAXIMUM}(A) + 1$. For the queue we need to keep track of the current minimum priority and decrease it on every insertion. Neither is very efficient. Further...
[]
false
[]
06-6.5-8
06
6.5
6.5-8
docs/Chap06/6.5.md
The operation $\text{HEAP-DELETE}(A, i)$ deletes the item in node $i$ from heap $A$. Give an implementation of $\text{HEAP-DELETE}$ that runs in $O(\lg n)$ time for an $n$-element max-heap.
```cpp HEAP-DELETE(A, i) if A[i] > A[A.heap-size] A[i] = A[A.heap-size] MAX-HEAPIFY(A, i) else HEAP-INCREASE-KEY(A, i, A[A.heap-size]) A.heap-size = A.heap-size - 1 ``` **Note:** The following algorithm is wrong. For example, given an array $A = [15, 7, 9, 1, 2, 3, 8]$ which is a ma...
[ { "lang": "cpp", "code": "HEAP-DELETE(A, i)\n if A[i] > A[A.heap-size]\n A[i] = A[A.heap-size]\n MAX-HEAPIFY(A, i)\n else\n HEAP-INCREASE-KEY(A, i, A[A.heap-size])\n A.heap-size = A.heap-size - 1" }, { "lang": "cpp", "code": "HEAP-DELETE(A, i)\n A[i] = A[A.heap-s...
false
[]
06-6.5-9
06
6.5
6.5-9
docs/Chap06/6.5.md
Give an $O(n\lg k)$-time algorithm to merge $k$ sorted lists into one sorted list, where $n$ is the total number of elements in all the input lists. ($\textit{Hint:}$ Use a min-heap for $k$-way merging.)
We take one element of each list and put it in a min-heap. Along with each element we have to track which list we took it from. When merging, we take the minimum element from the heap and insert another element off the list it came from (unless the list is empty). We continue until we empty the heap. We have $n$ steps...
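A runnable sketch of this $k$-way merge (Python; `heapq` is the standard-library binary min-heap, and the list-index tie-breaker in each tuple is my own detail):

```python
# k-way merge with a min-heap of (value, list-index, position) tuples:
# each of the n elements is pushed and popped exactly once, and every
# heap operation costs O(lg k), giving O(n lg k) overall.
import heapq

def merge_sorted_lists(lists):
    heap = []
    for j, lst in enumerate(lists):        # seed: one element per non-empty list
        if lst:
            heapq.heappush(heap, (lst[0], j, 0))
    out = []
    while heap:
        val, j, idx = heapq.heappop(heap)  # overall minimum of the k fronts
        out.append(val)
        if idx + 1 < len(lists[j]):        # refill from the same list
            heapq.heappush(heap, (lists[j][idx + 1], j, idx + 1))
    return out

print(merge_sorted_lists([[1, 4, 7], [2, 5], [0, 3, 6, 8]]))
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8]
```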
[ { "lang": "cpp", "code": "def MERGE-SORTED-LISTS(lists)\n n = lists.length\n // Take the lowest element from each of lists together with an index of the list and make list of such pairs.\n // Pairs are of \"type\" (element-value, index-of-list)\n let lowest-from-each be an empty array\n for i...
false
[]
06-6-1
06
6-1
6-1
docs/Chap06/Problems/6-1.md
We can build a heap by repeatedly calling $\text{MAX-HEAP-INSERT}$ to insert the elements into the heap. Consider the following variation of the $\text{BUILD-MAX-HEAP}$ procedure: ```cpp BUILD-MAX-HEAP'(A) A.heap-size = 1 for i = 2 to A.length MAX-HEAP-INSERT(A, A[i]) ``` **a.** Do the procedures $\text{BUILD-MAX-HEA...
**a.** Consider the following counterexample. - Input array $A = \langle 1, 2, 3 \rangle$: - $\text{BUILD-MAX-HEAP}(A)$: $A = \langle 3, 2, 1 \rangle$. - $\text{BUILD-MAX-HEAP}'(A)$: $A = \langle 3, 1, 2 \rangle$. **b.** It is very easy to find out that the $\text{MAX-HEAP-INSERT}$ operation for each iteration takes ...
[ { "lang": "cpp", "code": "> BUILD-MAX-HEAP'(A)\n> A.heap-size = 1\n> for i = 2 to A.length\n> MAX-HEAP-INSERT(A, A[i])\n>" } ]
false
[]
06-6-2
06
6-2
6-2
docs/Chap06/Problems/6-2.md
A **_$d$-ary heap_** is like a binary heap, but (with one possible exception) non-leaf nodes have $d$ children instead of $2$ children. **a.** How would you represent a $d$-ary heap in an array? **b.** What is the height of a $d$-ary heap of $n$ elements in terms of $n$ and $d$? **c.** Give an efficient implementati...
**a.** We can use the two following functions to retrieve the parent of the $i$-th element and the $j$-th child of the $i$-th element. ```cpp d-ARY-PARENT(i) return floor((i - 2) / d + 1) ``` ```cpp d-ARY-CHILD(i, j) return d(i − 1) + j + 1 ``` Obviously $1 \le j \le d$. You can verify those functions by checking that $$d\t...
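A quick mechanical check (Python; the function names are mine) that the two formulas are mutually inverse, i.e. the parent of the $j$-th child of node $i$ is $i$ again for every $1 \le j \le d$:

```python
# 1-based d-ary heap index arithmetic, mirroring the pseudocode above.
def d_ary_parent(d, i):
    return (i - 2) // d + 1

def d_ary_child(d, i, j):
    return d * (i - 1) + j + 1

for d in range(2, 8):
    for i in range(1, 200):
        for j in range(1, d + 1):
            assert d_ary_parent(d, d_ary_child(d, i, j)) == i
print("parent(child(i, j)) == i for all tested d, i, j")
```

For $d = 2$ these reduce to the familiar binary-heap formulas: the children of node $i$ are $2i$ and $2i + 1$.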
[ { "lang": "cpp", "code": "d-ARY-PARENT(i)\n return floor((i - 2) / d + 1)" }, { "lang": "cpp", "code": "d-ARY-CHILD(i, j)\n return d(i − 1) + j + 1" }, { "lang": "cpp", "code": "d-ARY-HEAP-EXTRACT-MAX(A)\n if A.heap-size < 1\n error \"heap under flow\"\n max = A[1]...
false
[]
06-6-3
06
6-3
6-3
docs/Chap06/Problems/6-3.md
An $m \times n$ Young tableau is an $m \times n$ matrix such that the entries of each row are in sorted order from left to right and the entries of each column are in sorted order from top to bottom. Some of the entries of a Young tableau may be $\infty$, which we treat as nonexistent elements. Thus, a Young tableau ca...
**a.** $$ \begin{matrix} 2 & 3 & 12 & 14 \\\\ 4 & 8 & 16 & \infty \\\\ 5 & 9 & \infty & \infty \\\\ \infty & \infty & \infty & \infty \end{matrix} $$ **b.** If the top left element is $\infty$, then all the elements on the first row need to be $\infty$. But if this is the cas...
[]
false
[]
07-7.1-1
07
7.1
7.1-1
docs/Chap07/7.1.md
Using figure 7.1 as a model, illustrate the operation of $\text{PARTITION}$ on the array $A = \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle$.
$$ \begin{aligned} \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 19, 13, 5, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \\\\ \langle 9, 5, 13, 19, 12, 8, 7, 4, 21, 2, 6, 11 \rangle \...
[]
false
[]
07-7.1-2
07
7.1
7.1-2
docs/Chap07/7.1.md
What value of $q$ does $\text{PARTITION}$ return when all elements in the array $A[p..r]$ have the same value? Modify $\text{PARTITION}$ so that $q = \lfloor (p + r) / 2 \rfloor$ when all elements in the array $A[p..r]$ have the same value.
It returns $r$. We can modify $\text{PARTITION}$ by counting the number of comparisons in which $A[j] = A[r]$ and then subtracting half that number from the pivot index.
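One concrete way to realize this behavior (a Python sketch; it uses a three-way partition rather than the book's comparison-counting trick, and the name `partition_balanced` is ours) is to gather all keys equal to the pivot into a middle run and return whichever index in that run lies closest to $\lfloor (p + r) / 2 \rfloor$:

```python
def partition_balanced(A, p, r):
    """Three-way partition of A[p..r] around pivot A[r].  Afterwards
    A[lt..gt] holds exactly the keys equal to the pivot, so any index in
    that range is a valid split point; we return the one nearest the
    middle, which yields q = (p + r) // 2 on an all-equal subarray."""
    x = A[r]
    lt, i, gt = p, p, r
    while i <= gt:
        if A[i] < x:
            A[lt], A[i] = A[i], A[lt]
            lt += 1
            i += 1
        elif A[i] > x:
            A[gt], A[i] = A[i], A[gt]
            gt -= 1
        else:
            i += 1
    # Clamp the midpoint of [p, r] into the run of pivot-equal keys.
    return min(max((p + r) // 2, lt), gt)
```

On an all-equal subarray this returns exactly $\lfloor (p + r) / 2 \rfloor$; on mixed input it still returns a valid pivot position.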
[]
false
[]
07-7.1-3
07
7.1
7.1-3
docs/Chap07/7.1.md
Give a brief argument that the running time of $\text{PARTITION}$ on a subarray of size $n$ is $\Theta(n)$.
The **for** loop of $\text{PARTITION}$ executes $r - p = \Theta(n)$ times. In the worst case the body of the **if** statement executes on every iteration, but it takes constant time, and so does the code outside of the loop. Thus the running time is $\Theta(n)$.
[]
false
[]
07-7.1-4
07
7.1
7.1-4
docs/Chap07/7.1.md
How would you modify $\text{QUICKSORT}$ to sort into nonincreasing order?
We only need to flip the condition on line 4.
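As a sketch (function names ours), flipping that one comparison is all it takes; the rest of quicksort is unchanged:

```python
def partition_desc(A, p, r):
    """PARTITION with the line-4 comparison flipped (>= instead of <=),
    so large elements collect on the left."""
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] >= x:          # flipped condition
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort_desc(A, p, r):
    """Quicksort into nonincreasing order."""
    if p < r:
        q = partition_desc(A, p, r)
        quicksort_desc(A, p, q - 1)
        quicksort_desc(A, q + 1, r)
```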
[]
false
[]
07-7.2-1
07
7.2
7.2-1
docs/Chap07/7.2.md
Use the substitution method to prove that the recurrence $T(n) = T(n - 1) + \Theta(n)$ has the solution $T(n) = \Theta(n^2)$, as claimed at the beginning of section 7.2.
**Proof** We are given the recurrence $$ T(n) = T(n - 1) + \Theta(n), $$ and we aim to prove $T(n) = \Theta(n^2)$. By the definition of $\Theta(n)$, there exist constants $c_1$, $c_2$, and $n_0$ such that for all $n \ge n_0$, $$ c_1 n \le f(n) \le c_2 n, $$ where $f(n) = \Theta(n)$. Thus, the recurrence becomes ...
[]
false
[]
07-7.2-2
07
7.2
7.2-2
docs/Chap07/7.2.md
What is the running time of $\text{QUICKSORT}$ when all elements of the array $A$ have the same value?
It is $\Theta(n^2)$, since one of the partitions is always empty (see Exercise 7.1-2).
[]
false
[]
07-7.2-3
07
7.2
7.2-3
docs/Chap07/7.2.md
Show that the running time of $\text{QUICKSORT}$ is $\Theta(n^2)$ when the array $A$ contains distinct elements and is sorted in decreasing order.
If the array is already sorted in decreasing order, then the pivot element is less than all the other elements. The partition step takes $\Theta(n)$ time and leaves a subproblem of size $n − 1$ and a subproblem of size $0$. This gives us the recurrence considered in 7.2-1, which we showed has a solution...
[]
false
[]
07-7.2-4
07
7.2
7.2-4
docs/Chap07/7.2.md
Banks often record transactions on an account in order of the times of the transactions, but many people like to receive their bank statements with checks listed in order by check numbers. People usually write checks in order by check number, and merchants usually cash them with reasonable dispatch. The problem of conve...
The more sorted the array is, the less work insertion sort will do. Namely, $\text{INSERTION-SORT}$ is $\Theta(n + d)$, where $d$ is the number of inversions in the array. In the example above the number of inversions tends to be small so insertion sort will be close to linear. On the other hand, if $\text{PARTITION}$...
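A small sketch (names ours) makes the $\Theta(n + d)$ behavior visible: every element shift performed by insertion sort removes exactly one inversion, and the array ends with none, so the total number of shifts equals $d$:

```python
def insertion_sort_count(A):
    """Insertion sort that counts element shifts; the total equals the
    number of inversions d in the input, so the work is Theta(n + d)."""
    shifts = 0
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]    # each shift undoes exactly one inversion
            i -= 1
            shifts += 1
        A[i + 1] = key
    return shifts
```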
[]
false
[]
07-7.2-5
07
7.2
7.2-5
docs/Chap07/7.2.md
Suppose that the splits at every level of quicksort are in proportion $1 - \alpha$ to $\alpha$, where $0 < \alpha \le 1 / 2$ is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately $-\lg n / \lg\alpha$ and the maximum depth is approximately $-\lg n / \lg(1 - \alpha)$. (Don't worry ab...
The minimum depth corresponds to repeatedly taking the smaller subproblem, that is, the branch whose size is proportional to $\alpha$. The subproblem size falls to $1$ in $k$ steps, where $1 \approx \alpha^k n$. Therefore, $k \approx \log_\alpha 1 / n = -\frac{\lg n}{\lg\alpha}$. The maximum depth corresponds to always taking...
[]
false
[]
07-7.2-6
07
7.2
7.2-6 $\star$
docs/Chap07/7.2.md
Argue that for any constant $0 < \alpha \le 1 / 2$, the probability is approximately $1 - 2\alpha$ that on a random input array, $\text{PARTITION}$ produces a split more balanced than $1 - \alpha$ to $\alpha$.
In order to produce a worse split than $1 - \alpha$ to $\alpha$, $\text{PARTITION}$ must pick a pivot that is either within the smallest $\alpha n$ elements or within the largest $\alpha n$ elements. The probability of each is (approximately) $\alpha n / n = \alpha$, and since the two events are disjoint, the probability of one or the other is $2\alpha$. Thus, the p...
[]
false
[]
07-7.3-1
07
7.3
7.3-1
docs/Chap07/7.3.md
Why do we analyze the expected running time of a randomized algorithm and not its worst-case running time?
We analyze the expected running time because it represents the more typical time cost. Moreover, we take the expectation over the randomness used during the computation rather than over the inputs, because that randomness cannot be chosen adversarially, unlike the input distribution when we average over all possible inputs to the algorithm.
[]
false
[]
07-7.3-2
07
7.3
7.3-2
docs/Chap07/7.3.md
When $\text{RANDOMIZED-QUICKSORT}$ runs, how many calls are made to the random number generator $\text{RANDOM}$ in the worst case? How about in the best case? Give your answer in terms of $\Theta$-notation.
In the worst case, the number of calls to $\text{RANDOM}$ is $$T(n) = T(n - 1) + 1 = n - 1 = \Theta(n).$$ As for the best case, $$T(n) = 2T(n / 2) + 1 = \Theta(n).$$ This is not too surprising: each call to $\text{RANDOM}$ selects one pivot, each element is selected as a pivot at most once, and since only subarrays of size at least $2$ trigger a call, at least about half of the elements end up being selected. Hence the number of calls is $\Theta(n)$ in every case.
[]
false
[]
07-7.4-1
07
7.4
7.4-1
docs/Chap07/7.4.md
Show that in the recurrence $$ \begin{aligned} T(n) & = \max\limits_{0 \le q \le n - 1} (T(q) + T(n - q - 1)) + \Theta(n), \\\\ T(n) & = \Omega(n^2). \end{aligned} $$
We guess $T(n) \ge cn^2 - 2n$, $$ \begin{aligned} T(n) & = \max_{0 \le q \le n - 1} (T(q) + T(n - q - 1)) + \Theta(n) \\\\ & \ge \max_{0 \le q \le n - 1} (cq^2 - 2q + c(n - q - 1)^2 - 2n - 2q -1) + \Theta(n) \\\\ & \ge c\max_{0 \le q \le n - 1} (q^2 + (n - q - 1)^2 - (2n + 4q + 1) / c) + \Theta(n) \\\\ ...
[]
false
[]
07-7.4-2
07
7.4
7.4-2
docs/Chap07/7.4.md
Show that quicksort's best-case running time is $\Omega(n\lg n)$.
We'll use the substitution method to show that the best-case running time is $\Omega(n\lg n)$. Let $T(n)$ be the best-case time for the procedure $\text{QUICKSORT}$ on an input of size $n$. We have $$T(n) = \min _{1 \le q \le n - 1} (T(q) + T(n - q - 1)) + \Theta(n).$$ Suppose that $T(n) \ge c(n\lg n + 2n)$ for some ...
[]
false
[]
07-7.4-3
07
7.4
7.4-3
docs/Chap07/7.4.md
Show that the expression $q^2 + (n - q - 1)^2$ achieves a maximum over $q = 0, 1, \ldots, n - 1$ when $q = 0$ and $q = n - 1$.
$$ \begin{aligned} f(q) & = q^2 + (n - q - 1)^2 \\\\ f'(q) & = 2q - 2(n - q - 1) = 4q - 2n + 2 \\\\ f''(q) & = 4. \\\\ \end{aligned} $$ $f'(q) = 0$ when $q = \frac{1}{2}n - \frac{1}{2}$. $f'(q)$ is also continuous. $\forall q: f''(q) > 0$, which means that $f'(q)$ is negative to the left of $f'(q) = 0$ and positive to the right ...
[]
false
[]
07-7.4-4
07
7.4
7.4-4
docs/Chap07/7.4.md
Show that $\text{RANDOMIZED-QUICKSORT}$'s expected running time is $\Omega(n\lg n)$.
We use the same reasoning as for the expected number of comparisons, just taken in a different direction. $$ \begin{aligned} \text E[X] & = \sum_{i = 1}^{n - 1} \sum_{j = i + 1}^n \frac{2}{j - i + 1} \\\\ & = \sum_{i = 1}^{n - 1} \sum_{k = 1}^{n - i} \frac{2}{k + 1} & (k \ge 1) \\\\ & \ge \sum_{i = 1}^{...
[]
false
[]
07-7.4-5
07
7.4
7.4-5
docs/Chap07/7.4.md
We can improve the running time of quicksort in practice by taking advantage of the fast running time of insertion sort when its input is "nearly" sorted. Upon calling quicksort on a subarray with fewer than $k$ elements, let it simply return without sorting the subarray. After the top-level call to quicksort returns, ...
In the quicksort part of the proposed algorithm, the recursion stops at level $\lg(n / k)$, which makes the expected running time $O(n\lg(n / k))$. However, this leaves $n / k$ unsorted, non-intersecting subarrays of (maximum) length $k$. Because of the nature of the insertion sort algorithm, it will first sort fu...
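The scheme can be sketched as follows (a hypothetical implementation; the default cutoff `k = 8` is an arbitrary choice, not a recommended value): quicksort simply returns on subarrays smaller than $k$, and a single insertion-sort pass finishes the job:

```python
def partition(A, p, r):
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort_cutoff(A, p, r, k):
    # Return without sorting once the subarray has fewer than k elements.
    if r - p + 1 >= k:
        q = partition(A, p, r)
        quicksort_cutoff(A, p, q - 1, k)
        quicksort_cutoff(A, q + 1, r, k)

def insertion_sort(A):
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key

def sort_with_cutoff(A, k=8):
    quicksort_cutoff(A, 0, len(A) - 1, k)
    insertion_sort(A)   # cheap cleanup pass over the nearly sorted array
```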
[]
false
[]
07-7.4-6
07
7.4
7.4-6 $\star$
docs/Chap07/7.4.md
Consider modifying the $\text{PARTITION}$ procedure by randomly picking three elements from array $A$ and partitioning about their median (the middle value of the three elements). Approximate the probability of getting at worst an $\alpha$-to-$(1 - \alpha)$ split, as a function of $\alpha$ in the range $0 < \alpha < 1$...
First, for simplicity's sake, let's assume that we can pick the same element twice. Let's also assume that $0 < \alpha \le 1 / 2$. In order to get such a split, two out of three elements need to be in the smallest $\alpha n$ elements. The probability of having one is $\alpha n / n = \alpha$. The probability of having ...
[]
false
[]
07-7-1
07
7-1
7-1
docs/Chap07/Problems/7-1.md
The version of $\text{PARTITION}$ given in this chapter is not the original partitioning algorithm. Here is the original partition algorithm, which is due to C.A.R. Hoare: ```cpp HOARE-PARTITION(A, p, r) x = A[p] i = p - 1 j = r + 1 while true repeat j = j - 1 until A[j] ≤ x repeat i = i + 1 until A[i] ≥ x if i < j ex...
**a.** After the end of the loop, the variables have the following values: $x = 13$, $j = 9$ and $i = 10$. **b.** Because when $\text{HOARE-PARTITION}$ is running, $p \le i < j \le r$ will always hold, $i$, $j$ won't access any element of $A$ outside the subarray $A[p..r]$. **c.** When $i \ge j$, $\text{HOARE-PARTITI...
[ { "lang": "cpp", "code": "> HOARE-PARTITION(A, p, r)\n> x = A[p]\n> i = p - 1\n> j = r + 1\n> while true\n> repeat\n> j = j - 1\n> until A[j] ≤ x\n> repeat\n> i = i + 1\n> until A[i] ≥ x\n> if i < j\n> exchange A[i] ...
false
[]
07-7-2
07
7-2
7-2
docs/Chap07/Problems/7-2.md
The analysis of the expected running time of randomized quicksort in section 7.4.2 assumes that all element values are distinct. In this problem. we examine what happens when they are not. **a.** Suppose that all element values are equal. What would be randomized quick-sort's running time in this case? **b.** The $\t...
**a.** Since all elements are equal, $\text{RANDOMIZED-QUICKSORT}$ always returns $q = r$. We have recurrence $T(n) = T(n - 1) + \Theta(n) = \Theta(n^2)$. **b.** ```cpp PARTITION'(A, p, r) x = A[p] low = p high = p for j = p + 1 to r if A[j] < x y = A[j] A[j] = A[high +...
[ { "lang": "cpp", "code": "PARTITION'(A, p, r)\n x = A[p]\n low = p\n high = p\n for j = p + 1 to r\n if A[j] < x\n y = A[j]\n A[j] = A[high + 1]\n A[high + 1] = A[low]\n A[low] = y\n low = low + 1\n high = high + 1\n ...
false
[]
07-7-3
07
7-3
7-3
docs/Chap07/Problems/7-3.md
An alternative analysis of the running time of randomized quicksort focuses on the expected running time of each individual recursive call to $\text{RANDOMIZED-QUICKSORT}$, rather than on the number of comparisons performed. **a.** Argue that, given an array of size $n$, the probability that any particular element is ...
**a.** Since the pivot is selected as a random element in the array, which has size $n$, the probability of any particular element being selected is the same for every element, and these probabilities sum to one, so each is $\frac{1}{n}$. As such, $\text E[X_i] = \Pr\\{i\text{th smallest is picked}\\} = \frac{1}{n}$. **b.** We can apply linearity of e...
[]
false
[]
07-7-4
07
7-4
7-4
docs/Chap07/Problems/7-4.md
The $\text{QUICKSORT}$ algorithm of Section 7.1 contains two recursive calls to itself. After $\text{QUICKSORT}$ calls $\text{PARTITION}$, it recursively sorts the left subarray and then it recursively sorts the right subarray. The second recursive call in $\text{QUICKSORT}$ is not really necessary; we can avoid it by ...
**a.** The book proved that $\text{QUICKSORT}$ correctly sorts the array $A$. $\text{TAIL-RECURSIVE-QUICKSORT}$ differs from $\text{QUICKSORT}$ in only the last line of the loop. It is clear that the conditions starting the second iteration of the **while** loop in $\text{TAIL-RECURSIVE-QUICKSORT}$ are identical to th...
[ { "lang": "cpp", "code": "> TAIL-RECURSIVE-QUICKSORT(A, p, r)\n> while p < r\n> // Partition and sort left subarray.\n> q = PARTITION(A, p, r)\n> TAIL-RECURSIVE-QUICKSORT(A, p, q - 1)\n> p = q + 1\n>" }, { "lang": "cpp", "code": "MODIFIED-TAIL-RECURSIVE-QUICKS...
false
[]
07-7-5
07
7-5
7-5
docs/Chap07/Problems/7-5.md
One way to improve the $\text{RANDOMIZED-QUICKSORT}$ procedure is to partition around a pivot that is chosen more carefully than by picking a random element from the subarray. One common approach is the **_median-of-3_** method: choose the pivot as the median (middle element) of a set of 3 elements randomly selected fr...
**a.** $p_i$ is the probability that a randomly selected subset of size three has $A'[i]$ as its middle element. There are 6 possible orderings of the three elements selected. So, suppose that $S'$ is the set of three elements selected. We will compute the probability that the second element of $S'$ is $A'[i]$ am...
[]
false
[]
07-7-6
07
7-6
7-6
docs/Chap07/Problems/7-6.md
Consider the problem in which we do not know the numbers exactly. Instead, for each number, we know an interval on the real line to which it belongs. That is, we are given $n$ closed intervals of the form $[a_i, b_i]$, where $a_i \le b_i$. We wish to **_fuzzy-sort_** these intervals, i.e., to produce a permutation $\la...
**a.** With a randomly selected left endpoint for the pivot, we could trivially perform fuzzy sorting by quicksorting the left endpoints, the $a_i$'s. This would achieve the worst-case expected running time of $\Theta(n\lg n)$. We can definitely do better by exploiting the fact that we don't have to sort overlapping i...
[ { "lang": "cpp", "code": "FIND-INTERSECTION(A, p, r)\n rand = RANDOM(p, r)\n exchange A[rand] with A[r]\n a = A[r].a\n b = A[r].b\n for i = p to r - 1\n if A[i].a ≤ b and A[i].b ≥ a\n if A[i].a > a\n a = A[i].a\n if A[i].b < b\n b = A...
false
[]
08-8.1-1
08
8.1
8.1-1
docs/Chap08/8.1.md
What is the smallest possible depth of a leaf in a decision tree for a comparison sort?
Even if the input is already sorted as $a_1 \le a_2 \le \ldots \le a_n$, a comparison sort must still verify each of the $n - 1$ adjacent pairs to be sure of the order, so every leaf has depth at least $n - 1$; thus the smallest possible depth of a leaf is $n - 1$.
[]
false
[]
08-8.1-2
08
8.1
8.1-2
docs/Chap08/8.1.md
Obtain asymptotically tight bounds on $\lg(n!)$ without using Stirling's approximation. Instead, evaluate the summation $\sum_{k = 1}^n\lg k$ using techniques from Section A.2.
$$ \begin{aligned} \sum_{k = 1}^n\lg k & \le \sum_{k = 1}^n\lg n \\\\ & = n\lg n. \end{aligned} $$ $$ \begin{aligned} \sum_{k = 1}^n\lg k & = \sum_{k = 2}^{n / 2} \lg k + \sum_{k = n / 2}^n\lg k \\\\ & \ge \sum_{k = 1}^{n / 2} 1 + \sum_{k = n / 2}^n\lg n / 2 \\\\ & = \frac{n}{2} + \frac{n}{2}...
[]
false
[]
08-8.1-3
08
8.1
8.1-3
docs/Chap08/8.1.md
Show that there is no comparison sort whose running time is linear for at least half of the $n!$ inputs of length $n$. What about a fraction of $1 / n$ of the inputs of length $n$? What about a fraction $1 / 2^n$?
Consider a decision tree of height $h$ with $r$ reachable leaves corresponding to a comparison sort on $n$ elements. From **Theorem 8.1**, we have $n! / 2 \le n! \le r \le 2^h$. By taking logarithms, we have $$h \ge \lg (n! / 2) = \lg (n!) - 1 = \Theta (n\lg n) - 1 = \Theta (n\lg n).$$ From the equation above, there ...
[]
false
[]
08-8.1-4
08
8.1
8.1-4
docs/Chap08/8.1.md
Suppose that you are given a sequence of $n$ elements to sort. The input sequence consists of $n / k$ subsequences, each containing $k$ elements. The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the elements in the preceding subsequence. Thus, all that ...
Assume that we need to construct a binary decision tree to represent comparisons. Since the length of each subsequence is $k$, there are $(k!)^{n / k}$ possible output permutations. To compute the height $h$ of the decision tree, we must have $(k!)^{n / k} \le 2^h$. Taking logs on both sides, we know that $$h \ge \frac{n}{...
[]
false
[]
08-8.2-1
08
8.2
8.2-1
docs/Chap08/8.2.md
Using Figure 8.2 as a model, illustrate the operation of $\text{COUNTING-SORT}$ on the array $A = \langle 6, 0, 2, 0, 1, 3, 4, 6, 1, 3, 2 \rangle$.
We have that $C = \langle 2, 4, 6, 8, 9, 9, 11 \rangle$. Then, after successive iterations of the loop on lines 10-12, we have $$ \begin{aligned} B & = \langle, , , , , 2, , , , , \rangle, \\\\ B & = \langle, , , , , 2, 3, , , \rangle, \\\\ B & = \langle, , , 1, , 2, 3, , , \rangle \end{aligned} $$ and at the end, $...
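The trace above can be reproduced with a short sketch of $\text{COUNTING-SORT}$ (0-indexed, so the bookkeeping differs slightly from the book's 1-indexed pseudocode):

```python
def counting_sort(A, k):
    """Counting sort for integers in 0..k.  After the prefix-sum pass,
    C[i] holds the number of elements <= i; filling B from right to left
    keeps the sort stable."""
    C = [0] * (k + 1)
    for a in A:
        C[a] += 1
    for i in range(1, k + 1):
        C[i] += C[i - 1]
    B = [None] * len(A)
    for a in reversed(A):
        C[a] -= 1
        B[C[a]] = a
    return B
```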
[]
false
[]
08-8.2-2
08
8.2
8.2-2
docs/Chap08/8.2.md
Prove that $\text{COUNTING-SORT}$ is stable.
Suppose positions $i$ and $j$ with $i < j$ both contain some element $k$. We consider lines 10 through 12 of $\text{COUNTING-SORT}$, where we construct the output array. Since $j > i$, the loop will examine $A[j]$ before examining $A[i]$. When it does so, the algorithm correctly places $A[j]$ in position $m = C[k]$ of ...
[]
false
[]
08-8.2-3
08
8.2
8.2-3
docs/Chap08/8.2.md
Suppose that we were to rewrite the **for** loop header in line 10 of the $\text{COUNTING-SORT}$ as ```cpp 10 for j = 1 to A.length ``` Show that the algorithm still works properly. Is the modified algorithm stable?
The algorithm still works correctly. The order that elements are taken out of $C$ and put into $B$ doesn't affect the placement of elements with the same key. It will still fill the interval $(C[k − 1], C[k]]$ with elements of key $k$. The question of whether it is stable or not is not well phrased. In order for stabil...
[ { "lang": "cpp", "code": "> 10 for j = 1 to A.length\n>" } ]
false
[]
08-8.2-4
08
8.2
8.2-4
docs/Chap08/8.2.md
Describe an algorithm that, given n integers in the range $0$ to $k$, preprocesses its input and then answers any query about how many of the $n$ integers fall into a range $[a..b]$ in $O(1)$ time. Your algorithm should use $\Theta(n + k)$ preprocessing time.
The algorithm will begin by preprocessing exactly as $\text{COUNTING-SORT}$ does in lines 1 through 9, so that $C[i]$ contains the number of elements less than or equal to $i$ in the array. When queried about how many integers fall into a range $[a..b]$, simply compute $C[b] − C[a − 1]$. This takes $O(1)$ time and yie...
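A sketch of this preprocess-then-query scheme (the class name `RangeCounter` is ours):

```python
class RangeCounter:
    """Theta(n + k) preprocessing, O(1) per query: C[i] stores how many
    of the inputs are <= i, so count(a, b) = C[b] - C[a - 1]."""
    def __init__(self, A, k):
        self.C = [0] * (k + 1)
        for a in A:
            self.C[a] += 1
        for i in range(1, k + 1):
            self.C[i] += self.C[i - 1]

    def count(self, a, b):
        """Number of inputs x with a <= x <= b."""
        lo = self.C[a - 1] if a > 0 else 0
        return self.C[b] - lo
```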
[]
false
[]
08-8.3-1
08
8.3
8.3-1
docs/Chap08/8.3.md
Using Figure 8.3 as a model, illustrate the operation of $\text{RADIX-SORT}$ on the following list of English words: COW, DOG, SEA, RUG, ROW, MOB, BOX, TAB, BAR, EAR, TAR, DIG, BIG, TEA, NOW, FOX.
$$ \begin{array}{cccc} 0 & 1 & 2 & 3 \\\\ \hline \text{COW} & \text{SE$\textbf{A}$} & \text{T$\textbf{A}$B} & \text{$\textbf{B}$AR} \\\\ \text{DOG} & \text{TE$\textbf{A}$} & \text{B$\textbf{A}$R} & \text{$\textbf{B}$IG} \\\\ \text{SEA} & \text{MO$\tex...
[]
false
[]
08-8.3-2
08
8.3
8.3-2
docs/Chap08/8.3.md
Which of the following sorting algorithms are stable: insertion sort, merge sort, heapsort, and quicksort? Give a simple scheme that makes any sorting algorithm stable. How much additional time and space does your scheme entail?
Insertion sort and merge sort are stable. Heapsort and quicksort are not. To make any sorting algorithm stable we can preprocess, replacing each element of an array with an ordered pair. The first entry will be the value of the element, and the second value will be the index of the element. For example, the array $[2...
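The ordered-pair scheme can be sketched like this (a hypothetical `selection_sort` stands in for an arbitrary unstable sort): decorating each element with its original index means comparisons that tie on the key are broken by position, so equal keys keep their input order:

```python
def selection_sort(A, key=lambda x: x):
    # A deliberately unstable comparison sort used as the inner sort.
    for i in range(len(A)):
        m = min(range(i, len(A)), key=lambda j: key(A[j]))
        A[i], A[m] = A[m], A[i]

def stable_version(A, key=lambda x: x):
    """Decorate each element with its index, sort by (key, index), then
    strip the indices.  Costs O(n) extra space plus O(n) pre/post work."""
    pairs = [((key(x), i), x) for i, x in enumerate(A)]
    selection_sort(pairs, key=lambda p: p[0])
    return [x for _, x in pairs]
```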
[]
false
[]
08-8.3-3
08
8.3
8.3-3
docs/Chap08/8.3.md
Use induction to prove that radix sort works. Where does your proof need the assumption that the intermediate sort is stable?
**Loop invariant:** At the beginning of the **for** loop, the array is sorted on the last $i − 1$ digits. **Initialization:** The array is trivially sorted on the last $0$ digits. **Maintenance:** Let's assume that the array is sorted on the last $i − 1$ digits. After we sort on the $i$th digit, the array will be sor...
[]
false
[]
08-8.3-4
08
8.3
8.3-4
docs/Chap08/8.3.md
Show how to sort $n$ integers in the range $0$ to $n^3 - 1$ in $O(n)$ time.
First run through the list of integers and convert each one to base $n$, then radix sort them. Each number will have at most $\log_n n^3 = 3$ digits so there will only need to be $3$ passes. For each pass, there are $n$ possible values which can be taken on, so we can use counting sort to sort each digit in $O(n)$ time...
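A sketch of the convert-then-radix-sort idea (function name ours): each value is treated as three base-$n$ digits, and each digit is sorted with a stable counting sort in $O(n)$ time:

```python
def sort_cubed(A):
    """Sort n integers in the range 0..n^3 - 1 in O(n) time: each number
    has at most 3 digits in base n, so 3 stable counting-sort passes
    (least significant digit first) suffice."""
    n = len(A)
    if n <= 1:
        return A[:]
    B = A[:]
    for d in range(3):
        digit = lambda x: (x // n**d) % n
        C = [0] * n
        for x in B:
            C[digit(x)] += 1
        for i in range(1, n):
            C[i] += C[i - 1]
        out = [None] * n
        for x in reversed(B):          # right-to-left keeps it stable
            C[digit(x)] -= 1
            out[C[digit(x)]] = x
        B = out
    return B
```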
[]
false
[]
08-8.3-5
08
8.3
8.3-5 $\star$
docs/Chap08/8.3.md
In the first card-sorting algorithm in this section, exactly how many sorting passes are needed to sort $d$-digit decimal numbers in the worst case? How many piles of cards would an operator need to keep track of in the worst case?
Given $n$ $d$-digit numbers in which each digit can take on up to $k$ possible values, we'll perform $\Theta(k^d)$ passes and keep track of $\Theta(nk)$ piles in the worst case.
[]
false
[]
08-8.4-1
08
8.4
8.4-1
docs/Chap08/8.4.md
Using Figure 8.4 as a model, illustrate the operation of $\text{BUCKET-SORT}$ on the array $A = \langle .79, .13, .16, .64, .39, .20, .89, .53, .71, .42 \rangle$.
$$ \begin{array}{cl} R & \\\\ \hline 0 & \\\\ 1 & .13 .16 \\\\ 2 & .20 \\\\ 3 & .39 \\\\ 4 & .42 \\\\ 5 & .53 \\\\ 6 & .64 \\\\ 7 & .79 .71 \\\\ 8 & .89 \\\\ 9 & \\\\ \end{array} $$ $$A = \langle.13, .16, .20, .39, .42, .53, .64, .71, .79, .89 \rangle.$$
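The run above can be reproduced with a sketch of $\text{BUCKET-SORT}$ (helper names ours): element $x \in [0, 1)$ goes into bucket $\lfloor nx \rfloor$, each bucket is insertion-sorted, and the buckets are concatenated:

```python
def _insertion_sort(b):
    for j in range(1, len(b)):
        key, i = b[j], j - 1
        while i >= 0 and b[i] > key:
            b[i + 1] = b[i]
            i -= 1
        b[i + 1] = key

def bucket_sort(A):
    """Bucket sort for inputs in [0, 1)."""
    n = len(A)
    B = [[] for _ in range(n)]
    for x in A:
        B[int(n * x)].append(x)
    for b in B:
        _insertion_sort(b)
    return [x for b in B for x in b]
```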
[]
false
[]
08-8.4-2
08
8.4
8.4-2
docs/Chap08/8.4.md
Explain why the worst-case running time for bucket sort is $\Theta(n^2)$. What simple change to the algorithm preserves its linear average-case running time and makes its worst-case running time $O(n\lg n)$?
If all the keys fall in the same bucket and they happen to be in reverse order, we have to sort a single bucket with $n$ items in reversed order with insertion sort. This is $\Theta(n^2)$. We can use merge sort or heapsort to improve the worst-case running time. Insertion sort was chosen because it operates well on li...
[]
false
[]
08-8.4-3
08
8.4
8.4-3
docs/Chap08/8.4.md
Let $X$ be a random variable that is equal to the number of heads in two flips of a fair coin. What is $\text E[X^2]$? What is $\text E^2[X]$?
$$ \begin{aligned} \text E[X] & = 2 \cdot \frac{1}{4} + 1 \cdot \frac{1}{2} + 0 \cdot \frac{1}{4} = 1 \\\\ \text E[X^2] & = 4 \cdot \frac{1}{4} + 1 \cdot \frac{1}{2} + 0 \cdot \frac{1}{4} = 1.5 \\\\ \text E^2[X] & = \text E[X] \cdot \text E[X] = 1 \cdot 1 = 1. \end{aligned} $$
[]
false
[]
08-8.4-4
08
8.4
8.4-4 $\star$
docs/Chap08/8.4.md
We are given $n$ points in the unit circle, $p_i = (x_i, y_i)$, such that $0 < x_i^2 + y_i^2 \le 1$ for $i = 1, 2, \ldots, n$. Suppose that the points are uniformly distributed; that is, the probability of finding a point in any region of the circle is proportional to the area of that region. Design an algorithm with a...
Bucket sort by radius: choose the bucket boundaries $r_i$ so that every annulus contains the same area and hence, by uniformity, the same expected number of points, $$ \begin{aligned} \pi r_i^2 & = \frac{i}{n} \cdot \pi 1^2 \\\\ r_i & = \sqrt{\frac{i}{n}}. \end{aligned} $$
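A sketch of the resulting algorithm (function name ours): since the points are uniform over the disc, $r^2 = x^2 + y^2$ is uniform on $(0, 1]$, so bucketing on $\lfloor n r^2 \rfloor$ puts $\Theta(1)$ points per bucket in expectation:

```python
def sort_points_by_radius(points):
    """Bucket sort points (x, y) in the unit circle by distance from the
    origin.  Bucket i covers the annulus between r_i and r_{i+1}, all of
    equal area, so expected bucket sizes are constant."""
    n = len(points)
    B = [[] for _ in range(n)]
    for (x, y) in points:
        r2 = x * x + y * y
        B[min(int(n * r2), n - 1)].append((x, y))   # clamp r = 1 edge case
    out = []
    for bucket in B:
        bucket.sort(key=lambda p: p[0] ** 2 + p[1] ** 2)  # tiny buckets
        out.extend(bucket)
    return out
```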
[]
false
[]
08-8.4-5
08
8.4
8.4-5 $\star$
docs/Chap08/8.4.md
A **_probability distribution function_** $P(x)$ for a random variable $X$ is defined by $P(x) = \Pr\\{X \le x\\}$. Suppose that we draw a list of $n$ random variables $X_1, X_2, \ldots, X_n$ from a continuous probability distribution function $P$ that is computable in $O(1)$ time. Give an algorithm that sorts these nu...
Bucket sort by $p_i$, so we have $n$ buckets: $[p_0, p_1), [p_1, p_2), \dots, [p_{n - 1}, p_n)$. Note that not all buckets are the same size, which is fine: to ensure a linear running time, the inputs should on average be distributed uniformly amongst all buckets, and the intervals defined by the $p_i$ accomplish exactly that. $p_i$ ...
[]
false
[]
08-8-1
08
8-1
8-1
docs/Chap08/Problems/8-1.md
In this problem, we prove a probabilistic $\Omega(n\lg n)$ lower bound on the running time of any deterministic or randomized comparison sort on $n$ distinct input elements. We begin by examining a deterministic comparison sort $A$ with decision tree $T_A$. We assume that every permutation of $A$'s inputs is equally li...
**a.** There are $n!$ possible permutations of the input array because the input elements are all distinct. Since each is equally likely, the distribution is uniformly supported on this set. So, each occurs with probability $\frac{1}{n!}$ and corresponds to a different leaf because the program needs to be able to disti...
[]
false
[]
08-8-2
08
8-2
8-2
docs/Chap08/Problems/8-2.md
Suppose that we have an array of $n$ data records to sort and that the key of each record has the value $0$ or $1$. An algorithm for sorting such a set of records might possess some subset of the following three desirable characteristics: 1. The algorithm runs in $O(n)$ time. 2. The algorithm is stable. 3. The algorit...
**a.** Counting-Sort. **b.** Quicksort-Partition. **c.** Insertion-Sort. **d.** (a) Yes. (b) No. (c) No. **e.** Thanks [@Gutdub](https://github.com/Gutdub) for providing the solution in this [issue](https://github.com/walkccc/CLRS/issues/150). ```cpp MODIFIED-COUNTING-SORT(A, k) let C[0..k] be a new array ...
[ { "lang": "cpp", "code": "MODIFIED-COUNTING-SORT(A, k)\n let C[0..k] be a new array\n for i = 1 to k\n C[i] = 0\n for j = 1 to A.length\n C[A[j]] = C[A[j]] + 1\n for i = 2 to k\n C[i] = C[i] + C[i - 1]\n insert sentinel element NIL at the start of A\n B = C[0..k - 1]\n...
false
[]
08-8-3
08
8-3
8-3
docs/Chap08/Problems/8-3.md
**a.** You are given an array of integers, where different integers may have different numbers of digits, but the total number of digits over _all_ the integers in the array is $n$. Show how to sort the array in $O(n)$ time. **b.** You are given an array of strings, where different strings may have different numbers o...
**a.** First, sort the integers according to their lengths by bucket sort, where we make a bucket for each possible number of digits. We sort each of these uniform-length sets of integers using radix sort. Then, we just concatenate the sorted lists obtained from each bucket. **b.** Make a bucket for every letter in the al...
[]
false
[]
08-8-4
08
8-4
8-4
docs/Chap08/Problems/8-4.md
Suppose that you are given $n$ red and $n$ blue water jugs, all of different shapes and sizes. All red jugs hold different amounts of water, as do the blue ones. Moreover, for every red jug, there is a blue jug that holds the same amount of water, and vice versa. Your task is to find a grouping of the jugs into pairs ...
**a.** Select a red jug. Compare it to blue jugs until you find one which matches. Set that pair aside, and repeat for the next red jug. This will use at most $\sum_{i = 1}^{n - 1} i = n(n - 1) / 2 = \Theta(n^2)$ comparisons. **b.** We can imagine first lining up the red jugs in some order. Then a solution to this pro...
[]
false
[]
08-8-5
08
8-5
8-5
docs/Chap08/Problems/8-5.md
Suppose that, instead of sorting an array, we just require that the elements increase on average. More precisely, we call an $n$-element array $A$ **_k-sorted_** if, for all $i = 1, 2, \ldots, n - k$, the following holds: $$\frac{\sum_{j = i}^{i + k - 1} A[j]}{k} \le \frac{\sum_{j = i + 1}^{i + k} A[j]}{k}.$$ **a.** ...
**a.** Ordinary sorting **b.** $2, 1, 4, 3, 6, 5, 8, 7, 10, 9$. **c.** $$ \begin{aligned} \frac{\sum_{j = i}^{i + k - 1} A[j]}{k} & \le \frac{\sum_{j = i + 1}^{i + k}A[j]}{k} \\\\ \sum_{j = i}^{i + k- 1 } A[j] & \le \sum_{j = i + 1}^{i + k} A[j] \\\\ A[i] & \le A[i + k]. ...
[]
false
[]