**Analysis**

Design the solution as a recursive function. Start with the first naive function that solves the problem. For this you should define:

- the arguments of the function, which wholly describe the problem you are trying to solve
- the recursive relation, which describes how the arguments are changed and fed to the recursive call of the function
- the trivial case: a set of arguments representing an instance of your problem that is trivial to solve (like the maximum of a single-element set, or the sum of an empty set of numbers, etc.)

If the number of different sets of arguments is too big, you have to find a representation of your problem that reduces this number. Here you have to exploit properties of the problem and ask yourself: do I really need X to represent the problem? Are all the bits of X used in the computations that the function does? Try to throw most of X away, and maybe introduce other small 'helper' arguments if that helps. Here are some common ways to reduce the number of different arguments:

- Isn't it enough to pass X modulo something?
- What about only the first and last char/digit/number of X?
- Or store the original problem in a global variable and pass only an index into it, and maybe some other small argument Y.
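As a minimal sketch of such a design (using the Fibonacci numbers as a stand-in problem; the name `f` is just for illustration):

```cpp
#include <cassert>

// Naive recursive design for a stand-in problem: the n-th Fibonacci number.
// - argument: n, which wholly describes the subproblem
// - recursive relation: f(n) = f(n-1) + f(n-2)
// - trivial cases: f(0) = 0, f(1) = 1
long long f(int n) {
    if (n <= 1) return n;          // trivial case
    return f(n - 1) + f(n - 2);    // recursive relation -- exponential time!
}
```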

**Implementation**

This simple step turns your ugly exponential recursive function into a neat polynomial one (or at least a less ugly exponential one), trading time for memory.

- Implement that function as you designed it.
- Introduce a global variable DP in which you map already-calculated results to the corresponding arguments of the function.
- Add a check at the start of your recursive function that looks up the current arguments in DP. If the result has already been calculated for them, just return it.
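A sketch of these steps, again on the stand-in Fibonacci problem (the names `DP` and `f` are illustrative):

```cpp
#include <cassert>
#include <map>

std::map<int, long long> DP;   // global cache: argument -> result

long long f(int n) {
    if (n <= 1) return n;                   // trivial case
    auto it = DP.find(n);                   // check the cache first
    if (it != DP.end()) return it->second;  // already calculated: just return it
    return DP[n] = f(n - 1) + f(n - 2);     // compute, store, return
}
```

With the cache in place, each argument is computed once, so the running time drops from exponential to linear in n.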

**Optimization**

In some cases, recursion is too slow. Every call of the recursive function takes some time: memory on the stack needs to be allocated, and so on. In such cases it may help to transform your recursion into iteration.
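A sketch of the same stand-in recurrence converted to iteration, filling the table bottom-up so no recursive calls are needed:

```cpp
#include <cassert>
#include <vector>

// Iterative version of the Fibonacci recurrence: fill the DP table
// from the trivial cases upwards instead of recursing downwards.
long long f(int n) {
    std::vector<long long> dp(n + 1, 0);
    if (n >= 1) dp[1] = 1;
    for (int i = 2; i <= n; ++i)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}
```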

In other cases, the recursion tree can be too deep, causing your stack to overflow. You can transform your solution into iteration, or, if you are lazy like me, there is one other thing you can do. Try to find out for which arguments the recursion tree is deep and for which it is shallow. Say your arguments are numbers up to one million, and you find out that for high inputs the recursion is shallower than for low ones. What do you do? Just iterate from one million downwards and call the function on the numbers as you go! Each call then finds most of its subproblems already cached, so no single recursion tree gets deep.
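A sketch of this lazy trick on the memoized stand-in recurrence. Note that for this particular recurrence the shallow direction is upwards, so the warm-up loop iterates upwards; for the example in the text you would iterate downwards instead:

```cpp
#include <cassert>
#include <map>

std::map<int, long long> DP;

long long f(int n) {
    if (n <= 1) return n;
    auto it = DP.find(n);
    if (it != DP.end()) return it->second;
    return DP[n] = f(n - 1) + f(n - 2);
}

// Warm the cache in an order that keeps every recursion tree shallow:
// each f(i) call finds f(i-1) and f(i-2) already cached, so it
// recurses at most a couple of levels instead of i levels.
long long f_safe(int n) {
    for (int i = 2; i <= n; ++i) f(i);
    return f(n);
}
```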

Next, the memory requirements for the global cache variable can be just a little too big. In this case it helps to do a kind of BFS instead of the DFS your recursive function does naturally. It can be done by converting to iteration and then storing only a limited portion of the global variable in memory. This case applies usually when the recursive function moves *slowly* in some dimension. For example, consider a function like this, where **y** moves only in jumps of 1 and 2:

```cpp
int f(int x, int y) {
    if (DP[x][y].is_computed()) return DP[x][y];
    DP[x][y] = f(x + m, y + 1) + f(x + n, y + 2);
    return DP[x][y];
}
```

In this case, after converting to iteration, just 3 columns of the DP table need to be kept in memory at the same time, regardless of the values of **m** and **n**.
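A minimal sketch of this rolling-buffer idea for a recurrence of that shape, computed bottom-up; the base columns (all ones) and the answer cell are made up for illustration:

```cpp
#include <cassert>
#include <vector>

// dp[x][y] = dp[x+m][y-1] + dp[x+n][y-2], computed column by column.
// Since y only looks back 1 and 2 steps, three columns (y, y-1, y-2)
// are enough to keep in memory, whatever m and n are.
long long f(int X, int Y, int m, int n) {
    // col[y % 3] holds the column for that y; columns 0 and 1 are the
    // (made-up) base cases, all ones.
    std::vector<std::vector<long long>> col(3, std::vector<long long>(X, 1));
    for (int y = 2; y <= Y; ++y) {
        std::vector<long long> cur(X, 0);
        for (int x = 0; x < X; ++x) {
            long long a = (x + m < X) ? col[(y - 1) % 3][x + m] : 0;
            long long b = (x + n < X) ? col[(y - 2) % 3][x + n] : 0;
            cur[x] = a + b;
        }
        col[y % 3] = cur;   // overwrite the column that is no longer needed
    }
    return col[Y % 3][0];   // e.g. the answer for x = 0 at the final y
}
```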

Finally, there is one more trick I would like to discuss. In some problems that are solvable by DP, one dimension of the state space is ridiculously big (like 10^{9}). This is a dead giveaway for the technique I call the Matrix Multiplication Boost (ok, I've just made the name up ;-) ).

**MMB** is applicable in cases where the transitions between states of the problem have linear properties. Then you can embed the whole state space (without the big dimension) in a vector **V**. Next, encode the transitions in the form of a matrix **M**. If everything works fine, **M * V** is a vector that describes the same space as **V**, except moved one step forward in terms of the big dimension mentioned before. Now, to move **K** steps forward, you can just calculate **M**^{K}** * V**. See it? Instead of having to iterate over the whole **K**, you just raise **M** to the **K**^{th} power. Matrix exponentiation can be calculated in O(**N**^{3} * log(**K**)) where **N** is the bigger dimension of **M**.
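A sketch of MMB on the classic example, the Fibonacci numbers, where **V** = (f(k+1), f(k)) and **M** = [[1, 1], [1, 0]]:

```cpp
#include <cassert>
#include <vector>

using Mat = std::vector<std::vector<long long>>;

// Plain O(N^3) matrix multiplication.
Mat mul(const Mat& a, const Mat& b) {
    int n = a.size();
    Mat c(n, std::vector<long long>(n, 0));
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k)
            for (int j = 0; j < n; ++j)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Raise m to the k-th power with O(log k) multiplications
// (binary exponentiation).
Mat mpow(Mat m, long long k) {
    int n = m.size();
    Mat r(n, std::vector<long long>(n, 0));
    for (int i = 0; i < n; ++i) r[i][i] = 1;   // identity matrix
    while (k > 0) {
        if (k & 1) r = mul(r, m);
        m = mul(m, m);
        k >>= 1;
    }
    return r;
}

long long fib(long long k) {
    Mat p = mpow({{1, 1}, {1, 0}}, k);
    return p[0][1];   // M^k = [[f(k+1), f(k)], [f(k), f(k-1)]]
}
```

This answers queries like f(10^{9}) in about 30 matrix multiplications instead of a billion iterations (watch out for overflow, though: in a real problem you would reduce modulo something inside `mul`).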
