Introduction to Algorithms (Principles of Algorithm Design)

Principles of Algorithm Design

When you are trying to design an algorithm or a data structure, it can be hard to see where to start. The following techniques are often useful:

  1. Experiment with examples. One of the best ways to get a feel for how a problem works is to generate some small inputs and work out by hand what output you should be returning for each. This helps you understand the problem, and it can often suggest how a solution might be constructed. If you experiment with enough small examples, you will often begin to see a pattern in how the solution relates to the input. Once you understand that pattern, you are one step closer to being able to solve the problem.
  2. Simplify the problem. Sometimes, when a problem is difficult to solve, it is worth solving a related, simpler problem instead. Once you develop ideas for the simpler case, you can often apply them to handle the more complex case. Instead of trying to solve the problem in full generality, see if you can solve it for just a subset of the inputs. For instance, if the algorithm you are trying to design takes two inputs (such as two numbers n and k), you might try to solve the problem with k set to a small constant, such as 1 or 2. If the algorithm takes a list as an argument, does assuming that the list is sorted help you figure out the problem? What about assuming that all items in the list are distinct? Or that the number of items in the list is a power of 2? Failing that, are there other assumptions you could make that would make the problem simpler? Once you have simplified the problem and solved it, you need to figure out how to make the algorithm general again. But at this point you have solved a similar (if simpler) problem, so you can often take advantage of . . .
  3. Look for similar problems. For many of the questions in this course, the solution involves an algorithm or data structure that you have seen before. Even for other questions, it is often possible to draw inspiration from problems that you already know how to solve. So if the problem you are considering seems similar to one you know how to solve, a good first step is to think about how the two problems compare. What properties do they share? What makes them different? Are the differences significant? Are the commonalities easy to see, or is the resemblance more of a stretch? Is one problem a more restricted version of the other? A more general version? Or does it seem more general in some ways and more restricted in others? Once you understand how the problems relate to each other, think about the techniques you used to solve the problem you understand. Can you reuse the same algorithm to solve the new problem? If not, can you tweak some of the details to get a working solution? Many algorithms are built around particular techniques. For instance, you have seen several divide-and-conquer solutions, which divide the input into smaller pieces, solve each piece with a recursive call, and then put the pieces back together. Try to identify the techniques used to solve the known problem, and then see whether you can apply them to the new problem as well.
  4. Delegate the work. One very powerful concept in computer science is the idea of recursion. At its heart, recursion lets us solve problems more easily by delegating the difficult work to a recursive call. For instance, say that we want to compute n!. That is hard to compute directly, but if we knew (n − 1)!, it would magically become a lot easier to compute n!. So we use a recursive call to compute (n − 1)! and then use the result of that recursive call to compute n! (see the factorial sketch after this list). If you can't figure out how to solve a whole problem, see if you can figure out how to solve just the last bit of it, and then use recursion to solve the rest. Pretty much all of the algorithms you have seen are based on this principle, so being able to apply it is a very useful skill. For ideas on how to break the problem into pieces and how to define “the last part” of the problem, you can often look to the algorithms you have already seen. For instance, a common approach for lists is to split them in two and solve the problem recursively on both halves. If you can figure out a good way to break the list apart and then put it back together again when you're done, that is all you need.
  5. Design according to the runtime. Sometimes the runtime that we give you says a lot about what you should be doing to construct the algorithm. For instance, say that you are designing an algorithm whose runtime should be O(log n). The standard tools with that runtime are binary search and the individual operations on heaps and AVL trees (see the binary search sketch after this list). (Note, however, that the cost of constructing a heap or an AVL tree is high enough that they most likely cannot be used inside an algorithm whose total runtime is O(log n); but they might be used in a data structure design to implement individual operations that should take O(log n) time.) If none of those seem useful, consider some simple recurrence relations that resolve to the runtime you are aiming for. For instance, say that you know the algorithm takes time O(n log n), but the common O(n log n) algorithms you know don't seem to work. There are a couple of simple recurrence relations that resolve to this bound, such as:
    1. T(n) = O(log n) + T(n − 1). This could be n binary searches, or n operations on a heap or an AVL tree; the heap-based sorting sketch after this list is one such instance.
    2. T(n) = O(n) + 2T(n/2). This might be an O(n) pass to separate the data into two groups, two recursive calls on the halves, and then an O(n) pass to put the data back together at the end; merge sort, sketched after this list, follows exactly this pattern.
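
To make item 4 concrete, here is a minimal Python sketch of the factorial example; the function name and the choice of Python are ours, not part of the original notes.

```python
def factorial(n):
    """Compute n! by delegating the hard part, (n - 1)!, to a recursive call."""
    if n == 0:
        # Base case: nothing left to delegate, since 0! = 1.
        return 1
    # The recursive call does most of the work; we only do the "last bit"
    # ourselves, a single multiplication.
    return n * factorial(n - 1)
```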
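
As a reminder of the O(log n) building block mentioned in item 5, here is a standard binary search sketch (again in Python, with names of our choosing); it halves the search range on every iteration, which is where the O(log n) runtime comes from.

```python
def binary_search(sorted_items, target):
    """Return an index of target in sorted_items, or None if it is absent.

    The search range is halved on every iteration, so the runtime is O(log n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return None
```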
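
One concrete instance of the recurrence T(n) = O(log n) + T(n − 1) from sub-item 1 is sorting with a binary heap: n pushes followed by n pops, each costing O(log n). The sketch below uses Python's heapq module; it is an illustration, not a prescribed solution.

```python
import heapq

def heap_sort(items):
    """Sort by doing n pushes and then n pops on a binary heap.

    Each heap operation costs O(log n), so the total is O(n log n),
    matching T(n) = O(log n) + T(n - 1).
    """
    heap = []
    for x in items:                # n pushes, O(log n) each
        heapq.heappush(heap, x)
    # n pops, O(log n) each; each pop removes the current minimum.
    return [heapq.heappop(heap) for _ in range(len(items))]
```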
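
The recurrence T(n) = O(n) + 2T(n/2) from sub-item 2 has exactly the shape of merge sort: two recursive calls on the halves plus a linear-time merge. A sketch, with our own naming, follows.

```python
def merge_sort(items):
    """Sort a list by divide-and-conquer: T(n) = O(n) + 2T(n/2) = O(n log n)."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve the first half recursively
    right = merge_sort(items[mid:])   # solve the second half recursively
    # O(n) pass to put the two solved halves back together in sorted order.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```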
