int rabbit(int n)
{
    // iterative, bottom-up: O(n) time, O(1) space
    int r1 = 1, r2 = 1;
    while (n > 2)
    {
        int r = r1 + r2;
        r1 = r2;
        r2 = r;
        --n;
    }
    return r2;
}
Of course, you can also do the top-down, memoized ("data caching") DP version of it, but it's O(n) too.
int rabbit2(int n, int *p); // forward declaration

int rabbit(int n)
{
    int *p = new int[n + 1];
    // assume an exception is thrown if out of memory
    for (int i = 0; i <= n; ++i)
        p[i] = 0;
    // seed the base cases so the recursion bottoms out
    if (n >= 1)
        p[1] = 1;
    if (n >= 2)
        p[2] = 1;
    int r = rabbit2(n, p);
    delete[] p;
    return r;
}
int rabbit2(int n, int *p)
{
    // if we've already computed this one, return it
    if (p[n] != 0)
        return p[n];
    // otherwise generate the value, but cache it
    p[n] = rabbit2(n - 1, p) + rabbit2(n - 2, p);
    // return the cached item
    return p[n];
}
The former's way easier to write. Many DP problems can be written ground-up like this, but you can end up solving for more values than you need if the recursion doesn't hit every possible combination of parameters. It does in this case, and it's simple.
The latter's the cached solution. It's harder to write: you have to have or make a cache of the right size, and get rid of it afterward if you're grabbing it from the heap like I am. But it looks just like the plain recursive version, so it's easy to convert a recursive solution into the cached one.
Betcha didn't think you'd learn anything in THIS thread. :D
Yeah, the straight recursive one's pretty bad.
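For reference, the straight recursive version being discussed presumably looks something like this (my sketch, using the name rabbit_naive; the point is that it's exponential time, since each call branches into two more and the same subproblems get recomputed over and over):

```cpp
// Naive recursion on the same recurrence: rabbit(1) == rabbit(2) == 1,
// rabbit(n) == rabbit(n-1) + rabbit(n-2). No caching, so the call tree
// blows up exponentially in n.
int rabbit_naive(int n)
{
    if (n <= 2)
        return 1;
    return rabbit_naive(n - 1) + rabbit_naive(n - 2);
}
```

Same answers as the other two versions, just vastly more calls: that redundant work is exactly what the cache in rabbit2 eliminates.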