chapter_num: int64 (1–6)
question_num: int64 (1–57)
question: string (lengths 37–897)
4
8
The diameter of a digraph is the length of the maximum‑length shortest path connecting two vertices. Write a DijkstraSP client that finds the diameter of a given EdgeWeightedDigraph that has nonnegative weights.
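One way to get started (a minimal sketch, not the book's reference solution): run DijkstraSP from every vertex and keep the largest finite shortest‑path distance found. The class name Diameter and the use of the booksite's In and StdOut utilities are illustrative assumptions.

```java
import edu.princeton.cs.algs4.DijkstraSP;
import edu.princeton.cs.algs4.EdgeWeightedDigraph;
import edu.princeton.cs.algs4.In;
import edu.princeton.cs.algs4.StdOut;

public class Diameter {
    public static void main(String[] args) {
        EdgeWeightedDigraph G = new EdgeWeightedDigraph(new In(args[0]));
        double diameter = 0.0;
        for (int s = 0; s < G.V(); s++) {
            DijkstraSP sp = new DijkstraSP(G, s);      // SPT rooted at s
            for (int v = 0; v < G.V(); v++)
                if (sp.hasPathTo(v) && sp.distTo(v) > diameter)
                    diameter = sp.distTo(v);           // longest shortest path seen so far
        }
        StdOut.println(diameter);
    }
}
```

Running Dijkstra from all V sources costs on the order of V E log V with the heap-based implementation.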
4
9
The table below, from an old published road map, purports to give the length of the shortest routes connecting the cities. It contains an error. Correct the table. Also, add a table that shows how to achieve the shortest routes.

              Providence   Westerly   New London   Norwich
Providence        -            53          54         48
Westerly         53             -          18        101
New London       54            18           -         12
Norwich          48           101          12          -
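A worked consistency check (a supplementary observation, not part of the exercise text): every entry of a shortest-route table must satisfy the triangle inequality, and exactly one entry here fails it.

```latex
d(\text{Westerly},\text{Norwich}) = 101
  \;>\; d(\text{Westerly},\text{New London}) + d(\text{New London},\text{Norwich})
  = 18 + 12 = 30
```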
4
10
Consider the edges in the digraph defined in Exercise 4.4.4 to be undirected edges such that each edge corresponds to equal‑weight edges in both directions in the edge‑weighted digraph. Answer Exercise 4.4.6 for this corresponding edge‑weighted digraph.
4
11
Use the memory‑cost model of Section 1.4 to determine the amount of memory used by EdgeWeightedDigraph to represent a graph with V vertices and E edges.
4
12
Adapt the DirectedCycle and Topological classes from Section 4.2 to use the EdgeWeightedDigraph and DirectedEdge APIs of this section, thus implementing EdgeWeightedCycleFinder and EdgeWeightedTopological classes.
4
13
Show, in the style of the trace in the text, the process of computing the SPT with Dijkstra’s algorithm for the digraph obtained by removing the edge 5→7 from tinyEWD.txt (see page 644).
4
14
Show the paths that would be discovered by the two strawman approaches described on page 668 for the example tinyEWDn.txt shown on that page.
4
15
What happens to Bellman‑Ford if there is a negative cycle on the path from s to v and then you call pathTo(v)?
4
16
Suppose that we convert an EdgeWeightedGraph into an EdgeWeightedDigraph by creating two DirectedEdge objects in the EdgeWeightedDigraph (one in each direction) for each Edge in the EdgeWeightedGraph (as described for Dijkstra’s algorithm in the Q&A on page 684) and then use the Bellman‑Ford algorithm. Explain why this approach fails spectacularly.
4
17
What happens if you allow a vertex to be enqueued more than once in the same pass in the Bellman‑Ford algorithm? Answer: The running time of the algorithm can become exponential. For example, describe what happens for the complete edge‑weighted digraph whose edge weights are all -1.
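To experiment with the bad case mentioned in the answer, a small generator for the complete digraph with all weights -1 might look like this (class and method names are illustrative; it assumes the booksite's EdgeWeightedDigraph and DirectedEdge):

```java
import edu.princeton.cs.algs4.DirectedEdge;
import edu.princeton.cs.algs4.EdgeWeightedDigraph;

public class CompleteNegativeDigraph {
    // Builds the complete digraph on V vertices with every edge weight -1.
    public static EdgeWeightedDigraph create(int V) {
        EdgeWeightedDigraph G = new EdgeWeightedDigraph(V);
        for (int v = 0; v < V; v++)
            for (int w = 0; w < V; w++)
                if (v != w)
                    G.addEdge(new DirectedEdge(v, w, -1.0));
        return G;
    }
}
```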
4
18
Write a CPM client that prints all critical paths.
4
19
Find the lowest‑weight cycle (best arbitrage opportunity) in the example shown in the text.
4
20
Find a currency‑conversion table online or in a newspaper. Use it to build an arbitrage table. Note: Avoid tables that are derived (calculated) from a few values and that therefore do not give sufficiently accurate conversion information to be interesting. Extra credit: Make a killing in the money‑exchange market!
4
21
Show, in the style of the trace in the text, the process of computing the SPT with the Bellman‑Ford algorithm for the edge‑weighted digraph of Exercise 4.4.5.
4
22
Vertex weights. Show that shortest‑paths computations in edge‑weighted digraphs with nonnegative weights on vertices (where the weight of a path is defined to be the sum of the weights of the vertices) can be handled by building an edge‑weighted digraph that has weights on only the edges.
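One standard reduction, sketched under the assumption that only the vertices carry weights (as the path‑weight definition suggests): split each vertex v into v_in = 2v and v_out = 2v+1 joined by an edge of weight weight(v), and turn each original edge v→w into a zero‑weight edge from v_out to w_in. The class and parameter names are illustrative.

```java
import edu.princeton.cs.algs4.Digraph;
import edu.princeton.cs.algs4.DirectedEdge;
import edu.princeton.cs.algs4.EdgeWeightedDigraph;

public class VertexToEdgeWeights {
    // Returns an edge-weighted digraph H on 2V vertices whose shortest path from
    // 2s to 2t+1 has the same weight as the best s-t path in G under the
    // sum-of-vertex-weights measure.
    public static EdgeWeightedDigraph reduce(Digraph G, double[] vertexWeight) {
        EdgeWeightedDigraph H = new EdgeWeightedDigraph(2 * G.V());
        for (int v = 0; v < G.V(); v++) {
            H.addEdge(new DirectedEdge(2*v, 2*v + 1, vertexWeight[v]));   // pay for v
            for (int w : G.adj(v))
                H.addEdge(new DirectedEdge(2*v + 1, 2*w, 0.0));           // free hop to w_in
        }
        return H;
    }
}
```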
4
23
Source‑sink shortest paths. Develop an API and implementation that use a version of Dijkstra’s algorithm to solve the source‑sink shortest‑path problem on edge‑weighted digraphs.
4
24
Multisource shortest paths. Develop an API and implementation that use Dijkstra’s algorithm to solve the multisource shortest‑paths problem on edge‑weighted digraphs with positive edge weights: given a set of sources, find a shortest‑paths forest that enables implementation of a method that returns to clients the shortest path from any source to each vertex. Hint: Add a dummy vertex with a zero‑weight edge to each source, or initialize the priority queue with all sources, with their distTo[] entries set to 0.
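A minimal sketch of the dummy‑vertex variant from the hint (the class name MultisourceSP is an assumption, and the returned pathTo() still starts with the artificial zero‑weight edge, which a polished implementation would strip):

```java
import edu.princeton.cs.algs4.DijkstraSP;
import edu.princeton.cs.algs4.DirectedEdge;
import edu.princeton.cs.algs4.EdgeWeightedDigraph;

public class MultisourceSP {
    private final DijkstraSP sp;
    private final int dummy;          // artificial source connected to all real sources

    public MultisourceSP(EdgeWeightedDigraph G, Iterable<Integer> sources) {
        dummy = G.V();
        EdgeWeightedDigraph H = new EdgeWeightedDigraph(G.V() + 1);
        for (DirectedEdge e : G.edges())
            H.addEdge(e);                                  // copy the original digraph
        for (int s : sources)
            H.addEdge(new DirectedEdge(dummy, s, 0.0));    // zero-weight edge to each source
        sp = new DijkstraSP(H, dummy);
    }

    public boolean hasPathTo(int v)             { return sp.hasPathTo(v); }
    public double distTo(int v)                 { return sp.distTo(v); }
    public Iterable<DirectedEdge> pathTo(int v) { return sp.pathTo(v); }
}
```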
4
25
Shortest path between two subsets. Given a digraph with positive edge weights, and two distinguished subsets of vertices S and T, find a shortest path from any vertex in S to any vertex in T. Your algorithm should run in time proportional to E log V, in the worst case.
4
26
Single‑source shortest paths in dense graphs. Develop a version of Dijkstra’s algorithm that can find the SPT from a given vertex in a dense edge‑weighted digraph in time proportional to V². Use an adjacency‑matrix representation (see Exercise 4.4.3 and Exercise 4.3.29).
4
27
Shortest paths in Euclidean graphs. Adapt our APIs to speed up Dijkstra’s algorithm in the case where it is known that vertices are points in the plane.
4
28
Longest paths in DAGs. Develop an implementation AcyclicLP that can solve the longest‑paths problem in edge‑weighted DAGs, as described in Proposition T.
4
29
General optimality. Complete the proof of Proposition W by showing that if there exists a directed path from s to v and no vertex on any path from s to v is on a negative cycle, then there exists a shortest path from s to v. (Hint: See Proposition P.)
4
30
All‑pairs shortest path in graphs with negative cycles. Articulate an API like the one implemented on page 656 for the all‑pairs shortest‑paths problem in graphs with no negative cycles. Develop an implementation that runs a version of Bellman‑Ford to identify weights π[v] such that for any edge v→w, the edge weight plus the difference between π[v] and π[w] is nonnegative. Then use these weights to reweight the graph, so that Dijkstra’s algorithm is effective for finding all shortest paths in the reweighted graph.
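The key identity behind the reweighting step (a supporting note, using the π[v] from the exercise): the new weights are nonnegative by construction, and the weight of every path from v_0 = s to v_k = t changes by the same additive constant, so shortest paths are preserved.

```latex
w'(v \to w) \;=\; w(v \to w) + \pi[v] - \pi[w] \;\ge\; 0,
\qquad
\sum_{i=1}^{k} w'(v_{i-1} \to v_i)
  \;=\; \sum_{i=1}^{k} w(v_{i-1} \to v_i) \;+\; \pi[v_0] - \pi[v_k]
```

The sum telescopes, so recovering true distances only requires subtracting π[s] − π[t] afterward.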
4
31
All‑pairs shortest path on a line. Given a weighted line graph (undirected connected graph, all vertices of degree 2, except two endpoints which have degree 1), devise an algorithm that preprocesses the graph in linear time and can return the distance of the shortest path between any two vertices in constant time.
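One natural preprocessing idea, sketched under the assumption of nonnegative weights: walk the line from one degree‑1 endpoint and record cumulative distances; a query is then just a difference of two stored values. The class name is illustrative.

```java
import edu.princeton.cs.algs4.Edge;
import edu.princeton.cs.algs4.EdgeWeightedGraph;

public class LineGraphSP {
    private final double[] prefix;    // prefix[v] = distance from the chosen endpoint to v

    public LineGraphSP(EdgeWeightedGraph G) {
        prefix = new double[G.V()];
        int start = 0;
        for (int v = 0; v < G.V(); v++) {           // find a degree-1 endpoint
            int degree = 0;
            for (Edge e : G.adj(v)) degree++;
            if (degree == 1) { start = v; break; }
        }
        boolean[] marked = new boolean[G.V()];
        marked[start] = true;
        int v = start;
        while (true) {                              // follow the unique path along the line
            Edge next = null;
            for (Edge e : G.adj(v))
                if (!marked[e.other(v)]) next = e;
            if (next == null) break;
            int w = next.other(v);
            prefix[w] = prefix[v] + next.weight();
            marked[w] = true;
            v = w;
        }
    }

    // Constant-time query: the only path between v and w runs along the line.
    public double dist(int v, int w) { return Math.abs(prefix[v] - prefix[w]); }
}
```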
4
32
Parent‑checking heuristic. Modify Bellman‑Ford to visit a vertex v only if its SPT parent edgeTo[v] is not currently on the queue. This heuristic has been reported by Cherkassky, Goldberg, and Radzik to be useful in practice. Prove that it correctly computes shortest paths and that the worst‑case running time is proportional to E V.
4
33
Shortest path in a grid. Given an N‑by‑N matrix of positive integers, find the shortest path from the (0, 0) entry to the (N–1, N–1) entry, where the length of the path is the sum of the integers in the path. Repeat the problem but assume you can only move right and down.
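For the restricted right/down variant, a short dynamic program suffices (a sketch with an illustrative class name; the unrestricted variant can instead be modeled as an edge‑weighted grid digraph and handed to Dijkstra):

```java
public class GridPathRightDown {
    // Returns the minimum path sum from (0, 0) to (N-1, N-1) moving only right or down.
    public static long shortest(int[][] a) {
        int n = a.length;
        long[][] cost = new long[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                long best;
                if (i == 0 && j == 0) best = 0;
                else if (i == 0)      best = cost[i][j-1];
                else if (j == 0)      best = cost[i-1][j];
                else                  best = Math.min(cost[i-1][j], cost[i][j-1]);
                cost[i][j] = best + a[i][j];
            }
        return cost[n-1][n-1];
    }
}
```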
4
34
Monotonic shortest path. Given a weighted digraph, find a monotonic shortest path from s to every other vertex. A path is monotonic if the weight of every edge on the path is either strictly increasing or strictly decreasing. The path should be simple (no repeated vertices). Hint: Relax edges in ascending order and find a best path; then relax edges in descending order and find a best path.
4
35
Bitonic shortest path. Given a digraph, find a bitonic shortest path from s to every other vertex (if one exists). A path is bitonic if there is an intermediate vertex v such that the edges on the path from s to v are strictly increasing and the edges on the path from v to t are strictly decreasing. The path should be simple (no repeated vertices).
4
36
Neighbors. Develop an SP client that finds all vertices within a given distance d of a given vertex in a given edge‑weighted digraph. The running time of your method should be proportional to the size of the subgraph induced by those vertices and the vertices incident on them, or V (to initialize data structures), whichever is larger.
4
37
Critical edges. Develop an algorithm for finding an edge whose removal causes maximal increase in the shortest‑paths length from one given vertex to another given vertex in a given edge‑weighted digraph.
4
38
Sensitivity. Develop an SP client that performs a sensitivity analysis on the edge‑weighted digraph’s edges with respect to a given pair of vertices s and t: Compute a V‑by‑V boolean matrix such that, for every v and w, the entry in row v and column w is true if v→w is an edge in the edge‑weighted digraph whose weight can be increased without the shortest‑path length from v to w being increased and is false otherwise.
4
39
Lazy implementation of Dijkstra’s algorithm. Develop an implementation of the lazy version of Dijkstra’s algorithm that is described in the text.
4
40
Bottleneck SPT. Show that an MST of an undirected graph is equivalent to a bottleneck SPT of the graph: For every pair of vertices v and w, it gives the path connecting them whose longest edge is as short as possible.
4
41
Bidirectional search. Develop a class for the source‑sink shortest‑paths problem that is based on code like Algorithm 4.9 but that initializes the priority queue with both the source and the sink. Doing so leads to the growth of an SPT from each vertex; your main task is to decide precisely what to do when the two SPTs collide.
4
42
Worst case (Dijkstra). Describe a family of graphs with V vertices and E edges for which the worst‑case running time of Dijkstra’s algorithm is achieved.
4
43
Negative cycle detection. Suppose that we add a constructor to Algorithm 4.11 that differs from the constructor given only in that it omits the second argument and that it initializes all distTo[] entries to 0. Show that, if a client uses that constructor, a client call to hasNegativeCycle() returns true if and only if the graph has a negative cycle (and negativeCycle() returns that cycle). Answer: Consider a digraph formed from the original by adding a new source with an edge of weight 0 to all the other vertices. After one pass, all distTo[] entries are 0, and finding a negative cycle reachable from that source is the same as finding a negative cycle anywhere in the original graph.
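The construction in the answer can also be coded directly against the standard BellmanFordSP, which may help in checking the equivalence (class and method names are illustrative):

```java
import edu.princeton.cs.algs4.BellmanFordSP;
import edu.princeton.cs.algs4.DirectedEdge;
import edu.princeton.cs.algs4.EdgeWeightedDigraph;

public class AnyNegativeCycle {
    // Returns a negative cycle in G if one exists anywhere, and null otherwise.
    public static Iterable<DirectedEdge> find(EdgeWeightedDigraph G) {
        EdgeWeightedDigraph H = new EdgeWeightedDigraph(G.V() + 1);
        for (DirectedEdge e : G.edges())
            H.addEdge(e);
        for (int v = 0; v < G.V(); v++)
            H.addEdge(new DirectedEdge(G.V(), v, 0.0));   // new source reaches every vertex
        BellmanFordSP sp = new BellmanFordSP(H, G.V());
        return sp.hasNegativeCycle() ? sp.negativeCycle() : null;
    }
}
```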
4
44
Worst case (Bellman‑Ford). Describe a family of graphs for which Algorithm 4.11 takes time proportional to V E.
4
45
Fast Bellman‑Ford. Develop an algorithm that breaks the linearithmic running time barrier for the single‑source shortest‑paths problem in general edge‑weighted digraphs for the special case where the weights are integers known to be bounded in absolute value by a constant.
4
46
Animate. Write a client program that does dynamic graphical animations of Dijkstra’s algorithm.
4
47
Random sparse edge‑weighted digraphs. Modify your solution to Exercise 4.3.34 to assign a random direction to each edge.
4
48
Random Euclidean edge‑weighted digraphs. Modify your solution to Exercise 4.3.35 to assign a random direction to each edge.
4
49
Random grid edge‑weighted digraphs. Modify your solution to Exercise 4.3.36 to assign a random direction to each edge.
4
50
Negative weights I. Modify your random edge‑weighted digraph generators to generate weights between x and y (where x and y are both between –1 and 1) by rescaling.
4
51
Negative weights II. Modify your random edge‑weighted digraph generators to generate negative weights by negating a fixed percentage (whose value is supplied by the client) of the edge weights.
4
52
Negative weights III. Develop client programs that use your edge‑weighted digraph generators to produce edge‑weighted digraphs that have a large percentage of negative weights but have at most a few negative cycles, for as large a range of values of V and E as possible.
4
53
Prediction. Estimate, to within a factor of 10, the largest graph with E = 10V that your computer and programming system could handle if you were to use Dijkstra’s algorithm to compute all its shortest paths in 10 seconds.
4
54
Cost of laziness. Run empirical studies to compare the performance of the lazy version of Dijkstra’s algorithm with the eager version, for various edge‑weighted digraph models.
4
55
Johnson’s algorithm. Develop a priority‑queue implementation that uses a d‑way heap. Find the best value of d for various edge‑weighted digraph models.
4
56
Arbitrage model. Develop a model for generating random arbitrage problems. Your goal is to generate tables that are as similar as possible to the tables that you used in Exercise 4.4.20.
4
57
Parallel job‑scheduling‑with‑deadlines model. Develop a model for generating random instances of the parallel job‑scheduling‑with‑deadlines problem. Your goal is to generate nontrivial problems that are likely to be feasible.
5
1
Consider the four variable‑length codes shown in the table at right. Which of the codes are prefix‑free? Uniquely decodable? For those that are uniquely decodable, give the encoding of 1000000000000.
5
2
Give an example of a uniquely decodable code that is not prefix‑free. Answer: Any suffix‑free code is uniquely decodable.
5
3
Give an example of a uniquely decodable code that is neither prefix‑free nor suffix‑free. Answer: {0011, 011, 11, 1110} or {01, 10, 011, 110}
5
4
Are { 01, 1001, 1011, 111, 1110 } and { 01, 1001, 1011, 111, 1110 } uniquely decodable? If not, find a string with two encodings.
5
5
Use RunLength on the file q128x192.bin from the booksite. How many bits are there in the compressed file?
5
6
How many bits are needed to encode N copies of the symbol a (as a function of N)? N copies of the sequence abc?
5
7
Give the result of encoding the strings a, aa, aaa, aaaa, ... (strings consisting of N a’s) with run‑length, Huffman, and LZW encoding. What is the compression ratio as a function of N?
5
8
Give the result of encoding the strings ab, abab, ababab, abababab, ... (strings consisting of N repetitions of ab) with run‑length, Huffman, and LZW encoding. What is the compression ratio as a function of N?
5
9
Estimate the compression ratio achieved by run‑length, Huffman, and LZW encoding for a random ASCII string of length N (all characters equally likely at each position, independently).
5
10
In the style of the figure in the text, show the Huffman coding tree construction process when you use Huffman for the string "it was the age of foolishness". How many bits does the compressed bitstream require?
5
11
What is the Huffman code for a string whose characters are all from a two‑character alphabet? Give an example showing the maximum number of bits that could be used in a Huffman code for an N‑character string whose characters are all from a two‑character alphabet.
5
12
Suppose that all of the symbol probabilities are negative powers of 2. Describe the Huffman code.
5
13
Suppose that all of the symbol frequencies are equal. Describe the Huffman code.
5
14
Suppose that the frequencies of the occurrence of all the characters to be encoded are different. Is the Huffman encoding tree unique?
5
15
Huffman coding could be extended in a straightforward way to encode in 2‑bit characters (using 4‑way trees). What would be the main advantage and the main disadvantage of doing so?
5
16
What is the LZW encoding of the following inputs?
a. T O B E O R N O T T O B E
b. Y A B B A D A B B A D A B B A D O O
c. A A A A A A A A A A A A A A A A A A A A
5
17
Characterize the tricky situation in LZW coding. Solution: Whenever it encounters cScSc, where c is a symbol and S is a string, cS is in the dictionary already but cSc is not.
5
18
Let Fk be the kth Fibonacci number. Consider N symbols, where the kth symbol has frequency Fk. Note that F1 + F2 + … + FN = FN+2 – 1. Describe the Huffman code. Hint: The longest codeword has length N – 1.
5
19
Show that there are at least 2^(N–1) different Huffman codes corresponding to a given set of N symbols.
5
20
Give a Huffman code where the frequency of 0s in the output is much, much higher than the frequency of 1s. Answer: If the character A occurs 1 million times and the character B occurs once, the codeword for A will be 0 and the codeword for B will be 1.
5
21
Prove that the two longest codewords in a Huffman code have the same length.
5
22
Prove the following fact about Huffman codes: If the frequency of symbol i is strictly larger than the frequency of symbol j, then the length of the codeword for symbol i is less than or equal to the length of the codeword for symbol j.
5
23
What would be the result of breaking up a Huffman‑encoded string into five‑bit characters and Huffman‑encoding that string?
5
24
In the style of the figures in the text, show the encoding trie and the compression and expansion processes when LZW is used for the string "it was the best of times it was the worst of times".
5
25
Fixed‑length width code. Implement a class RLE that uses fixed‑length encoding to compress ASCII bytestreams using relatively few different characters, including the code as part of the encoded bitstream. Add code to compress() to make a string alpha with all the distinct characters in the message and use it to make an Alphabet for use in compress(), prepend alpha (8‑bit encoding plus its length) to the compressed bitstream, then add code to expand() to read the alphabet before expansion.
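A sketch of compress() along the lines described (expand() would mirror it: read the alphabet length, the alphabet, the message length, then the fixed‑width indices). It assumes the booksite's Alphabet, BinaryStdIn, and BinaryStdOut, at least two distinct characters in the message, and an alphabet of fewer than 256 characters; writing the message length up front is one of several reasonable conventions, not the book's.

```java
import edu.princeton.cs.algs4.Alphabet;
import edu.princeton.cs.algs4.BinaryStdIn;
import edu.princeton.cs.algs4.BinaryStdOut;

public class RLE {
    public static void compress() {
        String msg = BinaryStdIn.readString();
        StringBuilder distinct = new StringBuilder();
        for (int i = 0; i < msg.length(); i++)                 // build alpha: distinct characters
            if (distinct.indexOf(String.valueOf(msg.charAt(i))) < 0)
                distinct.append(msg.charAt(i));
        String alphaChars = distinct.toString();
        Alphabet alpha = new Alphabet(alphaChars);

        BinaryStdOut.write(alphaChars.length(), 8);            // prepend the alphabet length
        for (int i = 0; i < alphaChars.length(); i++)
            BinaryStdOut.write(alphaChars.charAt(i), 8);       // then its 8-bit characters

        int width = alpha.lgR();                               // bits per encoded character
        BinaryStdOut.write(msg.length());                      // message length (32 bits)
        for (int i = 0; i < msg.length(); i++)
            BinaryStdOut.write(alpha.toIndex(msg.charAt(i)), width);
        BinaryStdOut.close();
    }
}
```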
5
26
Rebuilding the LZW dictionary. Modify LZW to empty the dictionary and start over when it is full. This approach is recommended in some applications because it better adapts to changes in the general character of the input.
5
27
Long repeats. Estimate the compression ratio achieved by run‑length, Huffman, and LZW encoding for a string of length 2N formed by concatenating two copies of a random ASCII string of length N (see Exercise 5.5.9), under any assumptions that you think are reasonable.
6
1
Maxwell‑Boltzmann. The distribution of velocity of particles in the hard disc model obeys the Maxwell‑Boltzmann distribution (assuming that the system has thermalized and particles are sufficiently heavy that we can discount quantum‑mechanical effects), which is known as the Rayleigh distribution in two dimensions. The distribution shape depends on temperature. Write a driver that computes a histogram of the particle velocities and test it for various temperatures.
6
2
Arbitrary shape. Molecules travel very quickly (faster than a speeding jet) but diffuse slowly because they collide with other molecules, thereby changing their direction. Extend the model to have a boundary shape where two vessels are connected by a pipe containing two different types of particles. Run a simulation and measure the fraction of particles of each type in each vessel as a function of time.
6
3
Rewind. After running a simulation, negate all velocities and then run the system backwards. It should return to its original state! Measure roundoff error by measuring the difference between the final and original states of the system.
6
4
Pressure. Add a method pressure() to Particle that measures pressure by accumulating the number and magnitude of collisions against walls. The pressure of the system is the sum of these quantities. Then add a method pressure() to CollisionSystem and write a client that validates the equation \(pV = nRT\).
6
5
Index priority queue implementation. Develop a version of CollisionSystem that uses an index priority queue to guarantee that the size of the priority queue is at most linear in the number of particles (instead of quadratic or worse).
6
6
Priority queue performance. Instrument the priority queue and test Pressure at various temperatures to identify the computational bottleneck. If warranted, try switching to a different priority‑queue implementation for better performance at high temperatures.
6
7
Suppose that, in a three‑level tree, we can afford to keep \(a\) links in internal memory, between \(b\) and \(2b\) links in pages representing internal nodes, and between \(c\) and \(2c\) items in pages representing external nodes. What is the maximum number of items that we can hold in such a tree, as a function of \(a\), \(b\), and \(c\)?
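One way to count (a hedged worked answer, assuming the \(a\) in-memory links form the top level, each pointing to an internal-node page, each of whose links points to an external page):

```latex
\text{maximum number of items} \;=\; a \cdot 2b \cdot 2c \;=\; 4abc
```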
6
8
Develop an implementation of Page that represents each B‑tree node as a BinarySearchST object.
6
9
Extend BTreeSET to develop a BTreeST implementation that associates keys with values and supports our full ordered symbol table API that includes min(), max(), floor(), ceiling(), deleteMin(), deleteMax(), select(), rank(), and the two‑argument versions of size() and get().
6
10
Write a program that uses StdDraw to visualize B‑trees as they grow, as in the text.
6
11
Estimate the average number of probes per search in a B‑tree for \(S\) random searches, in a typical cache system, where the \(T\) most‑recently‑accessed pages are kept in memory (and therefore add 0 to the probe count). Assume that \(S\) is much larger than \(T\).
6
12
Web search. Develop an implementation of Page that represents B‑tree nodes as text files on web pages, for the purposes of indexing (building a concordance for) the web. Use a file of search terms. Take web pages to be indexed from standard input. To keep control, take a command‑line parameter \(m\), and set an upper limit of \(10^m\) internal nodes (check with your system administrator before running for large \(m\)). Use an \(m\)-digit number to name your internal nodes. For example, when \(m\) is 4, your node names might be BTreeNode0000, BTreeNode0001, BTreeNode0002, and so forth. Keep pairs of strings on pages. Add a close() operation to the API, to sort and write. To test your implementation, look for yourself and your friends on your university’s website.
6
13
B∗ trees. Consider the sibling split (or B∗‑tree) heuristic for B‑trees: When it comes time to split a node because it contains \(M\) entries, we combine the node with its sibling. If the sibling has \(k\) entries with \(k < M-1\), we reallocate the items giving the sibling and the full node each about \((M+k)/2\) entries. Otherwise, we create a new node and give each of the three nodes about \(2M/3\) entries. Also, we allow the root to grow to hold about \(4M/3\) items, splitting it and creating a new root node with two entries when it reaches that bound. State bounds on the number of probes used for a search or an insertion in a B∗‑tree of order \(M\) with \(N\) items. Compare your bounds with the corresponding bounds for B‑trees (see Proposition B). Develop an insert implementation for B∗‑trees.
6
14
Write a program to compute the average number of external pages for a B‑tree of order \(M\) built from \(N\) random insertions into an initially empty tree. Run your program for reasonable values of \(M\) and \(N\).
6
15
If your system supports virtual memory, design and conduct experiments to compare the performance of B‑trees with that of binary search, for random searches in a huge symbol table.
6
16
For your internal‑memory implementation of Page in Exercise 6.15, run experiments to determine the value of \(M\) that leads to the fastest search times for a B‑tree implementation supporting random search operations in a huge symbol table. Restrict attention to values of \(M\) that are multiples of 100.
6
17
Run experiments to compare search times for internal B‑trees (using the value of \(M\) determined in the previous exercise), linear probing hashing, and red‑black trees for random search operations in a huge symbol table.
6
18
Give, in the style of the figure on page 882, the suffixes, sorted suffixes, index() and lcp() tables for the following strings:
a. abacadaba
b. mississippi
c. abcdefghij
d. aaaaaaaaaa
6
19
Identify the problem with the following code fragment to compute all the suffixes for suffix sort:

```java
suffix = "";
for (int i = s.length() - 1; i >= 0; i--)
{
    suffix = s.charAt(i) + suffix;
    suffixes[i] = suffix;
}
```

Answer: It uses quadratic time and quadratic space.
6
20
Some applications require a sort of cyclic rotations of a text, which all contain all the characters of the text. For i from 0 to N-1, the ith cyclic rotation of a text of length N is the last N-i characters followed by the first i characters. Identify the problem with the following code fragment to compute all the cyclic rotations:

```java
int N = s.length();
for (int i = 0; i < N; i++)
    rotation[i] = s.substring(i, N) + s.substring(0, i);
```

Answer: It uses quadratic time and quadratic space.
6
21
Design a linear‑time algorithm to compute all the cyclic rotations of a text string. Answer:

```java
String t = s + s;
int N = s.length();
for (int i = 0; i < N; i++)
    rotation[i] = t.substring(i, i + N);
```
6
22
Under the assumptions described in Section 1.4, give the memory usage of a SuffixArray object with a string of length \(N\).
6
23
Longest common substring. Write a SuffixArray client LCS that takes two filenames as command‑line arguments, reads the two text files, and finds the longest substring that appears in both in linear time. (In 1970, D. Knuth conjectured that this task was impossible.) Hint: Create a suffix array for s#t where s and t are the two text strings and # is a character that does not appear in either.
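A sketch following the hint, with '\1' standing in for the separator # (assumed to occur in neither file). It uses the booksite SuffixArray, whose construction is not linear‑time, so this illustrates the reduction rather than meeting the stated bound: the longest common substring shows up as the longest lcp between adjacent suffixes that start on opposite sides of the separator.

```java
import edu.princeton.cs.algs4.In;
import edu.princeton.cs.algs4.StdOut;
import edu.princeton.cs.algs4.SuffixArray;

public class LCS {
    public static void main(String[] args) {
        String s = new In(args[0]).readAll();
        String t = new In(args[1]).readAll();
        String text = s + '\1' + t;                 // separator assumed absent from s and t
        SuffixArray sa = new SuffixArray(text);
        String lcs = "";
        for (int i = 1; i < text.length(); i++) {
            int p = sa.index(i - 1), q = sa.index(i);
            boolean pInS = p < s.length(), qInS = q < s.length();
            if (pInS != qInS && sa.lcp(i) > lcs.length())   // suffixes from different texts
                lcs = text.substring(q, q + sa.lcp(i));
        }
        StdOut.println(lcs);
    }
}
```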