Otherwise, I recursively call Fibonacci of n minus 1, I recursively call Fibonacci of n minus 2, and add those two numbers together. So, you get this branching tree. You are solving two subproblems of almost the same size, just additively smaller by one or two. I mean, you are almost not reducing the problem size at all, so that's intuitively why it is exponential. You can draw a recursion tree and you will see how big it gets and how quickly. I mean, by n over 2 levels, you've only reduced the problem on one branch from n to n over 2. On the other branch, maybe you've gotten from n down to one, but none of the branches have stopped after n over 2 levels. So you have at least 2 to the power n over 2, which is square root of 2 to the power n, which is getting close to phi to the n. So, this is definitely exponential. And exponential is bad. We want polynomial: n squared, n cubed, log n would be nice.
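The naive recursion just described might be sketched in Python like this (the lecture gives no code, so the function name is my own):

```python
def fib(n):
    """Naive recursive Fibonacci: exponential time, roughly phi**n calls."""
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    # Two subproblems of almost the same size: this is the branching tree.
    return fib(n - 1) + fib(n - 2)
```

Each call spawns two more, and the subproblems shrink by only 1 or 2, which is why the tree has exponentially many nodes.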
Anything that is bounded above by a polynomial is good. This is an old idea. It goes back to one of the main people who said polynomial is good, Jack Edmonds, who is famous in the combinatorial optimization world. He is my academic grandfather on one side. He is a very interesting guy. OK, so that's a really bad algorithm.

You have probably seen a somewhat better algorithm, which you might think of as the bottom-up implementation of that recursive algorithm. Or, another way to think of it is: if you build out the recursion tree for Fibonacci of n, you will see that there are lots of common subtrees that you are just wasting time on. When you solve Fibonacci of n minus 1, it again solves Fibonacci of n minus 2. Why solve it twice? You only need to solve it once. It is really easy to do that if you do it bottom-up, but you could also do it recursively with some cache of things you have already computed. So, no big surprise: you compute the Fibonacci numbers in order, and each time, when you compute Fibonacci of n, say, you have already computed the previous two; you add them together, and it takes constant time. So, the running time here is linear, linear in n, where n is our input. Great. Is that the best we can do?
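The bottom-up, linear-time version just described might look like this (again a sketch, with names of my own choosing):

```python
def fib_bottom_up(n):
    """Compute the Fibonacci numbers in order; each step adds the previous two."""
    if n < 2:
        return n
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(n - 1):
        prev, curr = curr, prev + curr  # constant work per number
    return curr
```

Keeping only the last two values is the bottom-up trick; the recursive version with a cache (memoization) does the same work in a different order.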
No. Any ideas on how we could compute Fibonacci of n faster than linear time? Now we should diverge from what most of you have already seen. Any ideas using techniques you have already seen? Yes? Yes. We can use the mathematical trick of phi and psi to the nth powers. In fact, you can just use phi: phi, phi, pi, pho, phum, whatever you want to call this Greek letter. Good. Here is the mathematical trick. And, indeed, this is cheating, as you have said. This is no good, but so be it. We will call it naïve recursive squaring. Well, we know recursive squaring; recursive squaring takes log n time. Let's use recursive squaring. And if you happen to know lots of properties of the Fibonacci numbers (you don't have to), here is one of them: if you take phi to the n, divide it by root 5, and round to the nearest integer, that is the nth Fibonacci number. This is pretty cool. Fibonacci of n is basically phi to the n. We could apply recursive squaring to compute phi to the n in log n time, divide by root 5, assume that our computer has an operation that rounds a number to its nearest integer, and poof, we are done.
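As a quick check of why this is cheating, here is a Python sketch that compares the rounded phi-power formula against exact integer Fibonacci. With ordinary double-precision floats the formula is right for small n but eventually drifts off (the exact n where it first fails depends on the float format, so the code just searches for it):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fib_float(n):
    """Binet-style formula: round(phi**n / sqrt(5)), in floating point."""
    return round(PHI ** n / math.sqrt(5))

def fib_exact(n):
    """Exact integer Fibonacci, for comparison."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Correct for small n, but round-off eventually gives a wrong answer.
first_bad = next(n for n in range(200) if fib_float(n) != fib_exact(n))
```

The floats carry only about 53 bits of precision, while the Fibonacci numbers keep growing, so the lost low-order bits eventually change the rounded result.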
That doesn't work, for many different reasons. On a real machine, you would probably represent phi and root 5 as floating-point numbers, which have some fixed number of bits of precision. And if you do this computation, you will lose some of the important bits, and when you round to the nearest integer you won't get the right answer. So, floating-point round-off will kill you on a floating-point machine. On a theoretical machine where we magically have numbers that can do crazy things like this, well, it really takes more than constant time per multiplication. So, we are sort of in a different model. You cannot multiply phi times phi in constant time. That's sort of outside the boundaries of this course, but that's the way it is. In fact, there are problems you can only solve in exponential time on a normal machine, but on a machine where you can multiply real numbers and round them to the nearest integer, you can solve them in polynomial time. So, it really breaks the model. You could do crazy things if you were allowed to do this. This is not allowed. And I am foreshadowing like three classes ahead, or three courses ahead, so I shouldn't talk more about it. But it turns out we can use recursive squaring in a different way if we use a different property of Fibonacci numbers. And then we will just stick with integers and everything will be happy. Don't forget to go to recitation and, if you want, to homework lab. Don't come here on Monday. That is required.

This is sort of the right recursive squaring algorithm. And this is a bit hard to guess if you haven't already seen it, so I will just give it to you. I will call this a theorem. It's the first time I get to use the word theorem in this class. It turns out the nth Fibonacci number appears in the nth power of this matrix. Cool. If you look at it a little bit you say, oh, yeah, of course.
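Written out, the theorem on the board is the standard matrix identity, with F_n in the off-diagonal corners (matching the off-by-one correction the lecturer makes a moment later):

```latex
\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n}
  = \begin{pmatrix} F_{n+1} & F_{n} \\ F_{n} & F_{n-1} \end{pmatrix},
\qquad n \ge 1 .
```

For n = 1 the right-hand side is F_2, F_1, F_1, F_0, which is exactly the matrix itself, as checked in the base case below.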
And we will prove this theorem in a second. But once we have this theorem, we can compute F of n by computing the nth power of this matrix. It's a two-by-two matrix. You multiply two two-by-two matrices together, you get a two-by-two matrix. So that is constant size: four numbers. I can handle four numbers. We don't have crazy precision problems on the floating-point side. There are only four numbers to deal with; the matrices aren't getting bigger. So, the running time of this divide-and-conquer algorithm will still be log n, because it takes constant time per two-by-two matrix multiplication. Yes? Oh, yes. Thank you. I have a type error. Sorry about that. F of n is indeed the upper left corner, I hope. I'd better check I don't have it off by one. I do. It's F_n in the upper right corner, indeed. That's what you said. F of n. I need more space. Sorry. I really ought to have a two-by-two matrix on the left-hand side there. Thank you. So, I compute this nth power of a matrix in log n time, I take the upper right corner, or the lower left corner, your choice, and that is the nth Fibonacci number. This implies an order log n time algorithm with the same recurrence as the last two, binary search and the recursive squaring algorithm. It is log n plus a constant, so log n.

Let's prove that theorem. Any suggestions on what techniques we might use for proving this theorem, or what technique, singular? Induction, very good. I think any time I ask that question, the answer is induction. Hint for the future in this class. A friend of mine, when he took an analysis class, whenever the professor asked, and what is the answer to this question, the answer was always zero. If you have taken an analysis class, that is funny.
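The whole log-n algorithm described above, as a Python sketch (recursive squaring applied to the two-by-two matrix; the function names are mine, not the lecture's):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices given as ((a, b), (c, d)): constant work."""
    return (
        (A[0][0] * B[0][0] + A[0][1] * B[1][0],
         A[0][0] * B[0][1] + A[0][1] * B[1][1]),
        (A[1][0] * B[0][0] + A[1][1] * B[1][0],
         A[1][0] * B[0][1] + A[1][1] * B[1][1]),
    )

def mat_pow(A, n):
    """Recursive squaring: A**n using O(log n) 2x2 multiplications."""
    if n == 1:
        return A
    half = mat_pow(A, n // 2)
    sq = mat_mult(half, half)
    return mat_mult(sq, A) if n % 2 else sq

def fib_log(n):
    """F(n) is the upper right (or lower left) corner of [[1,1],[1,0]]**n."""
    if n == 0:
        return 0
    return mat_pow(((1, 1), (1, 0)), n)[0][1]
```

Everything stays an integer, so there are no precision problems, and each level of the recursion does one or two constant-size matrix multiplications.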
[LAUGHTER] Maybe I will try to ask some questions whose answers are zero, just for our own amusement. We are going to induct on n. It's pretty much the obvious thing to do, but we have to check some cases. The base case is: we have this to the first power, and that is itself, [(1, 1), (1, 0)]. And I should have said n is at least 1. And you can check: this is supposed to be F_2, F_1, F_1, F_0. And you can check it is: F_0 is 0, F_1 is 1, and F_2 is 1. The base case is correct. The step case is about as exciting, but you've got to prove that your algorithm works. Suppose this is what we want to compute. I am just going to, well, there are many ways I can do this. I will just do it the fast way, because it's really not that exciting. Which direction? Let's do this direction. I want to use induction on n. If I want to use induction on n, presumably I should use what I already know is true. If I decrease n by 1, I have this property that this thing is going to be [(1, 1), (1, 0)] to the power n minus 1. This I already know, by the induction