A Greedy Knapsack Heuristic

1.  Strategies for NP-Complete Problems

    --  Identify computationally tractable special cases.

        - knapsack capacity W = polynomial in number of items n

    --  Heuristics

        - Pretty good greedy heuristic

        - Excellent dynamic programming heuristic

    --  Exponential time but better than brute-force search

        - O(nW)-time dynamic programming vs. O(2^n) brute-force search.

 

2.  Knapsack Revisited

    --  Input: n items, each with a positive value vi and a size wi, plus a knapsack capacity W.

    --  Output: A subset S of {1, 2, ..., n} that maximizes Sum(i in S){vi} subject to Sum(i in S){wi} <= W

 

3.  A Greedy Heuristic

    --  Motivation: Ideal items have big value, small size.

    --  Step 1: Sort and reindex the items so that v1/w1 >= v2/w2 >= ... >= vn/wn

    --  Step 2: Pack items in this order; halt the first time an item doesn't fit. (A variant keeps going and packs any subsequent items that still fit.)
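The two steps above can be sketched in Python (a minimal sketch; the function name and return format are mine):

```python
def greedy_knapsack(values, sizes, W):
    """Two-step greedy heuristic.  Returns (total value, chosen indices)."""
    # Step 1: sort item indices by bang-per-buck v_i / w_i, descending.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / sizes[i], reverse=True)
    chosen, total_value, used = [], 0, 0
    # Step 2: pack in this order; halt at the first item that doesn't fit.
    for i in order:
        if used + sizes[i] > W:
            break
        chosen.append(i)
        used += sizes[i]
        total_value += values[i]
    return total_value, chosen
```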

 

4.  A Refined Greedy Heuristic

    --  Fact: The greedy solution can be arbitrarily bad relative to an optimal solution. (e.g., v1 = 2, w1 = 1, v2 = 1000, w2 = 1000, W = 1000: greedy packs only item 1, for value 2, while the optimum is 1000.)

    --  Fix (Step 3): Return either the Step-2 solution or the single most valuable item, whichever is better.

    --  Theorem: The value of the 3-step greedy solution is always at least 50% of the value of an optimal solution.

    --  Running time: O(n log n)
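The refinement is one extra comparison on top of the two-step packing; a self-contained sketch (names are mine):

```python
def greedy_knapsack_3step(values, sizes, W):
    """3-step greedy: best of the density-ordered packing and the
    single most valuable item that fits on its own."""
    # Steps 1-2: density-sorted packing, halting at the first misfit.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / sizes[i], reverse=True)
    packed_value, used = 0, 0
    for i in order:
        if used + sizes[i] > W:
            break
        used += sizes[i]
        packed_value += values[i]
    # Step 3: fall back to the single most valuable item if it does better.
    best_single = max((v for v, w in zip(values, sizes) if w <= W), default=0)
    return max(packed_value, best_single)
```

On the pathological instance above this returns 1000 rather than 2.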

 

5.  Greedy Fractional Algorithm

    --  Thought experiment: suppose we were allowed to completely fill the knapsack using a suitable fraction (like 70%) of item (k+1)

    --  Greedy fractional solution at least as good as every non-fractional feasible solution.

        --  Let S = an arbitrary feasible solution

        --  Suppose l units of the knapsack are filled by S with items not packed by the greedy fractional solution

        --  Then at least l units of the knapsack must be filled by the greedy fractional solution with items not packed by S

        --  By the greedy criterion, the greedy fractional solution's items have bang-per-buck vi/wi at least as large as those in S [i.e., a more valuable use of that space]

        --  Hence the total value of the greedy fractional solution is at least that of S
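A sketch of the greedy fractional solution's value, which the analysis below uses as an upper bound on the optimum (function name is mine):

```python
def fractional_greedy_value(values, sizes, W):
    """Pack items by density; take just enough of the first item that
    overflows the knapsack to fill it exactly."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / sizes[i], reverse=True)
    total, remaining = 0.0, W
    for i in order:
        if sizes[i] <= remaining:
            total += values[i]
            remaining -= sizes[i]
        else:
            # Fractionally pack item i to fill the remaining capacity.
            total += values[i] * (remaining / sizes[i])
            break
    return total
```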

 

6.  Analysis of Greedy Heuristic

    --  Suppose our greedy algorithm picks the 1st k items (sorted by vi/wi ).

    --  Value of 3-step greedy algorithm >= total value of 1st k items

        Value of 3-step greedy algorithm >= value of (k + 1)th item

        2 * (value of 3-step greedy) >= total value of 1st (k + 1) items >= value of greedy fractional solution >= value of an optimal solution

    --  The analysis is tight (e.g., W = 1000, v1 = 502, v2 = v3 = 500, w1 = 501, w2 = w3 = 500: 3-step greedy gets 502 while the optimum is 1000)
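The tight instance can be checked numerically (a quick script; the inlined loop mirrors the document's 3-step heuristic):

```python
# Tight instance: W = 1000, values (502, 500, 500), sizes (501, 500, 500).
values, sizes, W = [502, 500, 500], [501, 500, 500], 1000

# Steps 1-2: pack by density (item 1 goes first: 502/501 > 500/500).
order = sorted(range(3), key=lambda i: values[i] / sizes[i], reverse=True)
packed, used = 0, 0
for i in order:
    if used + sizes[i] > W:
        break
    used += sizes[i]
    packed += values[i]

# Step 3: compare with the single most valuable item.
greedy = max(packed, max(values))
optimal = 1000  # take the two (500, 500) items instead

ratio = greedy / optimal  # just above 1/2
```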

    

7.  A Refined Analysis

    --  Suppose: Every item i has size wi <= 10% * knapsack capacity W.

    --  Consequence: If greedy algorithm fails to pack all items, then the knapsack is >= 90% full.

    --  Value of 2-step greedy algorithm >= 90% * value of greedy fractional solution >= 90% * value of an optimal solution.

    --  In general, if max_i wi <= m% of W, then the 2-step greedy value is >= (1 - m%) * the optimal value

 

8.  Arbitrarily Good Approximation

    --  Goal: For a user-specified parameter e > 0 (e.g., e = 0.01), guarantee a (1 - e)-approximation.

    --  Catch: The running time increases as e decreases (i.e., the algorithm exposes a running time vs. accuracy trade-off).

    --  High-level idea: Exactly solve a slightly incorrect, but easier, knapsack instance.

    --  If the vi's are integers, knapsack can be solved via dynamic programming in O(n^2 * vmax) time, where vmax = max_i{vi}.

    --  Plan: Throw out lower-order bits of the vi's!

 

9.  A Dynamic Programming Heuristic

    --  Step 1: Round each vi down to the nearest multiple of m [larger m ==> throw out more info ==> less accuracy ==> m depends on e]

        Divide the results by m to get vi' (integers). (i.e., vi' = floor(vi/m) )

    --  Step 2: Use dynamic programming to solve the knapsack instance with values v1', ... , vn', sizes w1, ..., wn, capacity W.

        Running time = O(n^2 * max_i{vi'})
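Step 1's rounding can be sketched as (helper name is mine):

```python
import math

def round_values(values, m):
    """Round each v_i down to the nearest multiple of m, then divide
    by m, yielding the small integers v_i' = floor(v_i / m)."""
    return [math.floor(v / m) for v in values]
```

For example, `round_values([17, 29, 8], 5)` gives `[3, 5, 1]`.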

 

10.  Two Dynamic Programming Algorithms

    --  Dynamic programming algorithm #1: 

        --  Assume sizes wi and capacity W are integers

        --  Running time = O(nW)

    --  Dynamic programming algorithm #2:

        --  Assume values vi are integers

        --  Running time = O(n^2 * vmax), where vmax = max_i{vi}
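Algorithm #1 (the table over residual capacities) in compact form; a sketch assuming integral sizes and capacity:

```python
def knapsack_by_capacity(values, sizes, W):
    """DP #1: O(nW).  A[c] = best value achievable with capacity c."""
    A = [0] * (W + 1)
    for v, w in zip(values, sizes):
        # Scan capacities downward so each item is used at most once.
        for c in range(W, w - 1, -1):
            A[c] = max(A[c], A[c - w] + v)
    return A[W]
```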

 

11.  The Subproblems and Recurrence

    --  Subproblems: For i = 0, 1, ..., n and x = 0, 1, ..., n*vmax, define

        S(i,x) = the minimum total size needed to achieve value >= x using only the first i items (or +Infinity if impossible)

    --  Recurrence: (i >= 1)

        S(i,x) = min{ S(i-1,x) [Case 1: item i not used in an optimal solution], wi + S(i-1, x-vi) [Case 2: item i used in an optimal solution; S(i-1, x-vi) is interpreted as 0 if vi > x] }

 

12.  The Algorithm

    --  Let A = 2-D array [indexed by i = 0, 1, ... , n and x = 0, 1, ... , n*vmax]

    --  Base case: A[0,x] = 0 if x = 0 , +Infinity otherwise

    --  For i = 1, 2, ... , n

            For x = 0, 1, ... , n*vmax 

                A[i,x] = min{ A[i-1,x], wi + A[i-1, x-vi] }

    --  Return the largest x such that A[n,x] <= W

    --  Running time: O(n^2 * vmax)
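A sketch of the algorithm above with the 2-D table collapsed to one row (iterating x downward preserves the "first i items" semantics); assumes integer values:

```python
import math

def knapsack_by_value(values, sizes, W):
    """DP #2: O(n^2 * vmax).  A[x] = min total size achieving value >= x."""
    n, vmax = len(values), max(values)
    top = n * vmax
    A = [0] + [math.inf] * top   # base case: A[0] = 0, +infinity otherwise
    for v, w in zip(values, sizes):
        for x in range(top, 0, -1):
            # Either skip this item, or use it (residual target max(x - v, 0)).
            A[x] = min(A[x], w + A[max(x - v, 0)])
    # Largest achievable value whose minimum size fits in the knapsack.
    return max(x for x in range(top + 1) if A[x] <= W)
```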

 

13.  Accuracy Analysis

    --  Since we rounded down to the nearest multiple of m,  m*vi' in [vi-m , vi] for each item i.

    --  If S* = optimal solution to the original problem (with the original vi), and S = our heuristic's solution, then

              Sum(i in S){vi'} >= Sum(i in S*){vi'} [Since S is optimal for the vi'] 

    --  Sum(i in S){vi} >= m * Sum(i in S){vi'} >= m * Sum(i in S*){vi'} >= Sum(i in S*){vi - m} >= Sum(i in S*){vi} - nm

    --  Desired guarantee: Sum(i in S){vi} >= (1 - e) * Sum(i in S*){vi}

    --  To achieve above constraint: Choose m small enough that mn <= e * Sum(i in S*){vi}

        Sum(i in S*){vi} is unknown to algorithm, but definitely >= vmax

        Sufficient: Set m so that mn <= e*vmax, i.e., heuristic uses m = e*vmax/n

    --  Running time is O(n^2 * vmax'), where vmax' <= vmax/m = vmax/(e*vmax/n) = n/e, so the running time is O(n^3/e)
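Putting the pieces together, a sketch of the full (1 - e)-approximation (names are mine; assumes every item fits on its own, i.e., wi <= W):

```python
import math

def knapsack_fptas(values, sizes, W, eps):
    """Round values down to multiples of m = eps*vmax/n, solve that
    instance exactly with the value-indexed DP, and return
    (original total value, chosen items).  Runs in O(n^3 / eps) time."""
    n = len(values)
    vmax = max(values)                          # valid since every w_i <= W
    m = eps * vmax / n
    vp = [math.floor(v / m) for v in values]    # rounded integer values
    top = sum(vp)
    INF = math.inf
    # S[i][x] = min size achieving rounded value >= x from the first i items.
    S = [[0] + [INF] * top] + [[None] * (top + 1) for _ in range(n)]
    for i in range(1, n + 1):
        for x in range(top + 1):
            skip = S[i - 1][x]
            take = sizes[i - 1] + S[i - 1][max(x - vp[i - 1], 0)]
            S[i][x] = min(skip, take)
    best_x = max(x for x in range(top + 1) if S[n][x] <= W)
    # Backtrack to recover which items were chosen.
    chosen, x = [], best_x
    for i in range(n, 0, -1):
        if S[i][x] != S[i - 1][x]:              # item i-1 was used
            chosen.append(i - 1)
            x = max(x - vp[i - 1], 0)
    return sum(values[i] for i in chosen), sorted(chosen)
```

With eps = 0.25 on values (60, 100, 120), sizes (10, 20, 30), W = 50, the rounding uses m = 10 and the heuristic recovers the exact optimum 220.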
