PageRank - How Eigenvectors Power the Algorithm Behind Google Search.
Welcome back! In the last post we derived eigenvectors. In particular, we saw how useful they are for analyzing matrices we need to apply again and again.
In this post, we’re going to dive into one of the most famous applications of eigenvectors - the original PageRank algorithm that allowed Google to create the world’s best search engine. PageRank was Larry Page’s PhD thesis while at Stanford. He spun out this amazing algorithm to redefine search and create one of the most iconic companies of the modern era. We’re going to re-derive it here and see that it’s just a simple application of eigenvectors!
By the end of this post, you will:
- Feel like you could have come up with PageRank yourself.
- Understand how eigenvectors are central to PageRank.
- See how central eigenvectors are to dynamic processes in general.
Ok, let’s build out our understanding of PageRank using a hypothetical scenario.
Let’s say you just moved to a new city and you’re trying to make some new friends. You need to figure out which of your new friends to hang out with in your free time. But unfortunately you’re super busy and have very little free time!
So you need to rank all these potential friends to prioritize which ones to chill with.
This could be you! Just need to find the friends...
In a moment of clarity you realize the following:
- Friends you’re close with are probably going to hang out with people you’d like.
- Someone everyone is friends with is probably someone you are going to be friends with.
You decide to use these two insights to create a scheme you call FriendRank.
- How many people are friends with this person, weighted by
- How close you are to the people who are friends with them.
Let’s write this down as a formula. Let’s say the strength of your friendship with some person $j$ is $s_j$. Let $F_i$ be the set of people who are friends with person $i$. Then a first version of the score is:

$$\text{FriendScore}(i) = \sum_{j \in F_i} s_j$$
Calculating Our First Friend Score
Let’s work through an example. Say you have 4 potential friends:
- Jimmy (closeness score - 5)
- Maya (closeness score - 9)
- Lisa (closeness score - 8)
- Fred (?)
In this world, you’re really close to Maya (with a score of 9) and Lisa (8) but only kinda close to Jimmy (5).
Only Jimmy and Maya are friends with Fred.
Graphically, this is:
Accounting For People who are Friends With Everyone
Not such a crazy idea, right? You start this process and you realize one thing. Your really close friend Maya claims to be friends with EVERYONE. You start wondering, “Hmm, maybe I need to downweight Maya’s opinion because she doesn’t really seem to be that picky”.
This is easy - we just divide Maya’s weighting score by how many people she claims to be friends with. That way, if Maya claims 100 friends, her weighting for each friend is divided by 100. So our formula is now:

$$\text{FriendScore}(i) = \sum_{j \in F_i} \frac{s_j}{n_j}$$

where $n_j$ is the number of friends person $j$ claims to have.
Going back to our example, Fred’s score becomes:

$$\text{FriendScore}(\text{Fred}) = \frac{s_{\text{Jimmy}}}{n_{\text{Jimmy}}} + \frac{s_{\text{Maya}}}{n_{\text{Maya}}} = \frac{5}{n_{\text{Jimmy}}} + \frac{9}{n_{\text{Maya}}}$$
Let’s now simplify this formula a little.
- Let’s call the set of all friends $A$.
- Let $L_{ij}$ be $1$ if person $j$ claims to be friends with person $i$, and $0$ otherwise.

Then we can simplify our earlier formula as shown below:

$$\text{FriendScore}(i) = \sum_{j \in A} \frac{L_{ij} \cdot s_j}{n_j}$$
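To make this formula concrete, here’s a quick sketch in Python. The closeness scores come from the example above, but the friend counts for Jimmy, Maya, and Lisa are invented purely for illustration:

```python
# Closeness scores from the example above.
closeness = {"Jimmy": 5, "Maya": 9, "Lisa": 8}

# L[j] = 1 if person j claims to be friends with Fred (only Jimmy and Maya do).
claims_fred = {"Jimmy": 1, "Maya": 1, "Lisa": 0}

# n[j] = how many friends person j claims. These counts are made up for
# illustration -- the post doesn't specify them.
num_claimed = {"Jimmy": 2, "Maya": 100, "Lisa": 4}

# FriendScore(Fred) = sum over all j of L[j] * s[j] / n[j]
fred_score = sum(claims_fred[j] * closeness[j] / num_claimed[j] for j in closeness)
print(fred_score)  # 5/2 + 9/100 = 2.59
```

Notice how Maya’s enormous (hypothetical) friend count makes her endorsement nearly worthless, even though your closeness to her is 9.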
So Who’s Friends With Whom?
Ok this is looking great! You decide to go ahead and implement this scheme with your friends Bobby, Lauren, Daniel, Rony, and Saba. You start by asking all your friends to write down who they’re friends with! So you send them all the following email:
From: email@example.com
To: Bobby, Lauren, Daniel, Rony, Saba
Subject: Who are your friends?
My dearest "friends",
I’m trying to rank you and all my other potential friends and need your help to do so. Below is a list of people. Please put a 1 next to their name if you call them your friend.
Person Friends? Bobby ___ Lauren ___ Daniel ___ Rony ___ Saba ___
Yours truly,
Parry Lage
P.S. Just because I'm sending you this email doesn't mean we're friends.
So Saba sends you something like this:
Looking good so far! You then remember you need to downweight Saba’s score by how many friends she claims to have. She’s claiming 2 friends here so we divide by 2 giving us:
You now decide to organize all your friends’ responses into a little table so that it’s easy to see. Each column of the table is a different friend. It looks like this (notice Saba’s answers above are the last column in the table below):
This table is just a matrix, which we’ll call $M$. The element in row $i$ and column $j$ of the matrix is

$$M_{ij} = \frac{L_{ij}}{n_j}$$
For example, let’s examine $M_{15}$:
- Person $1$ is Bobby and Person $5$ is Saba.
- $L_{15}$ is whether Saba claims to be friends with Bobby. We saw in her response she did, so $L_{15} = 1$.
- $n_5$ is the number of friends Saba has (2). Hence $\frac{L_{15}}{n_5} = \frac{1}{2}$.
- So $M_{15} = \frac{1}{2}$, just as it is in the table.
Notice additionally that $L$ is not symmetric! For instance, Rony may claim to be friends with Saba, but in the above example, Saba doesn’t call Rony a friend.
Hence, $L_{\text{Saba},\text{Rony}} = 1$, but $L_{\text{Rony},\text{Saba}} = 0$.
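Here’s one way this matrix could be built with NumPy. The 0/1 responses below are hypothetical, except that Saba’s column follows the post (she claims two friends, including Bobby but not Rony):

```python
import numpy as np

# People in order: Bobby, Lauren, Daniel, Rony, Saba.
# L[i][j] = 1 if person j claims to be friends with person i.
# These entries are invented for illustration, except Saba's column (j = 4):
# she claims two friends, including Bobby but not Rony.
L = np.array([
    [0, 1, 1, 0, 1],  # row: Bobby
    [1, 0, 0, 1, 1],  # row: Lauren
    [1, 1, 0, 1, 0],  # row: Daniel
    [0, 1, 1, 0, 0],  # row: Rony
    [1, 0, 1, 1, 0],  # row: Saba (note: Rony claims Saba, but not vice versa)
], dtype=float)

n = L.sum(axis=0)  # n[j] = number of friends column j claims
M = L / n          # M[i][j] = L[i][j] / n[j]; broadcasting divides each column

print(M[:, 4])        # Saba's column: her two claimed friends each get 1/2
print(M.sum(axis=0))  # every column of M sums to 1
```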
Adding Your Own Scores
You then decide you want to come up with an initial guess for everyone’s FriendScore!
There are some people you know better than others, so this is really a guess for what your final FriendScore is for each of them. You write down the following and call it $r_0$ - your initial FriendScore:
Combining Everyone’s Opinion
Ok, we now have the following:
- $M$ - a matrix whose elements are $M_{ij} = \frac{L_{ij}}{n_j}$.
- $r_0$ - an initial guess of everyone’s FriendScore.
Now that we know who’s friends with whom, we can figure out who’s really likely to be our friend.
We’d like to come up with a new FriendScore based on this information.
How do we do that?
Recall from earlier our formula for Fred’s FriendScore:

$$\text{FriendScore}(\text{Fred}) = \sum_{j \in A} \frac{L_{\text{Fred},j} \cdot s_j}{n_j}$$

So similarly, we want to update our rankings, $r$, for our friends to be:

$$r'_i = \sum_{j \in A} \frac{L_{ij} \cdot r_j}{n_j}$$
Let’s shorten our notation, numbering our friends $1$ through $5$ (Bobby, Lauren, Daniel, Rony, Saba):
- Let $r_j$ be the current FriendScore of person $j$.
- Let $n_j$ be the number of friends person $j$ claims.

Then Bobby’s ideal new ranking is:

$$r'_1 = \frac{L_{11} \cdot r_1}{n_1} + \frac{L_{12} \cdot r_2}{n_2} + \frac{L_{13} \cdot r_3}{n_3} + \frac{L_{14} \cdot r_4}{n_4} + \frac{L_{15} \cdot r_5}{n_5}$$
This looks an awful lot like this dot product (go ahead and compute it!):

$$\begin{bmatrix} \frac{L_{11}}{n_1} & \frac{L_{12}}{n_2} & \frac{L_{13}}{n_3} & \frac{L_{14}}{n_4} & \frac{L_{15}}{n_5} \end{bmatrix} \cdot \begin{bmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \\ r_5 \end{bmatrix}$$
Breaking Down the Dot Product - First Vector
Let’s try and see if we have the two vectors in this dot product. Let’s start with the first vector:

$$\begin{bmatrix} \frac{L_{11}}{n_1} & \frac{L_{12}}{n_2} & \frac{L_{13}}{n_3} & \frac{L_{14}}{n_4} & \frac{L_{15}}{n_5} \end{bmatrix}$$
Where can we find this vector?
Remember, the elements of $M$ are just $M_{ij} = \frac{L_{ij}}{n_j}$. Let’s look at $M$ again, substituting the first row of the matrix with the formula representation - we’ll see that it’s the same as this vector!
Great so we have the first vector in the dot product.
Breaking Down the Dot Product - Second Vector
Let’s now see if we can get the second one:

$$\begin{bmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \\ r_5 \end{bmatrix}$$
You’ve already created this vector when you came up with your initial guesses in the following table:
- The first vector is just the first row of $M$.
- The second vector is $r_0$.
Matrix Multiplication To Find FriendScores
If we just multiply $M r_0$, we compute this dot product and get $r'_1$ (Bobby’s new score) in the first row of the resulting vector.
More generally, you’ll find that the second row of this vector is exactly $r'_2$ (Lauren’s new score), etc., giving us:

$$r' = M r_0$$
When Do We Stop?
Ok, so we now have a super quick way of finding our next FriendScore. Just take our current guess, which we’ll call $r_t$, multiply it by $M$, and obtain $r_{t+1}$:

$$r_{t+1} = M r_t$$

Every time we compute $r_{t+1} = M r_t$, we are updating our guess of the FriendScore with the rest of our friends’ opinions.
When do we stop this process? We stop when $r_t$ stops changing - that is, when

$$r_{t+1} = M r_t = r_t$$

But a vector that $M$ maps to itself is exactly an eigenvector of $M$ with eigenvalue $1$!
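The whole update loop fits in a few lines of Python. The matrix below is a hypothetical 3-friend example (its columns sum to 1, as our construction guarantees); we keep applying it until the scores settle:

```python
import numpy as np

# A hypothetical 3-friend matrix: M[i][j] = L[i][j] / n[j], columns sum to 1.
M = np.array([
    [0.0, 0.5, 1/3],
    [0.5, 0.0, 1/3],
    [0.5, 0.5, 1/3],
])

r = np.array([1.0, 1.0, 1.0])  # initial FriendScore guess r_0

# r_{t+1} = M r_t, repeated until r stops changing.
for _ in range(1000):
    r_next = M @ r
    if np.allclose(r_next, r, atol=1e-12):
        break
    r = r_next

print(r)      # converged FriendScores
print(M @ r)  # applying M once more leaves r essentially unchanged
```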
Seeing This Visually
How does $r_t$ move to $r_{t+1}$? In many ways, the matrix $M$ pulls $r_t$ towards its eigenvector $r^*$.
There’s an awesome visualization of this here by setosa.io that I highly recommend you check out. Here’s an extract directly from their writeup. In the below image, $s_1$ is an eigenvector of the matrix $A$. Notice how, as we keep applying $A$ to $v$, we move closer to $s_1$. In particular, the sequence $v$, $Av$, $A^2v$, … eventually ends up directly on $s_1$.
Similarly, in FriendRank, as we keep multiplying by $M$, we get the sequence $r_0$, $Mr_0$, $M^2r_0$, … which eventually ends up on $r^*$ - the eigenvector of $M$.
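You can check this claim numerically: run the iteration on a small hypothetical matrix, then compare against the eigenvector NumPy computes for eigenvalue $1$ (rescaled, since eigenvectors are only defined up to a constant):

```python
import numpy as np

# A hypothetical column-normalized friendship matrix.
M = np.array([
    [0.0, 0.5, 1/3],
    [0.5, 0.0, 1/3],
    [0.5, 0.5, 1/3],
])

# The iterated sequence r_0, M r_0, M^2 r_0, ...
r = np.ones(3)
for _ in range(200):
    r = M @ r

# The eigenvector of M with eigenvalue 1, rescaled for comparison.
vals, vecs = np.linalg.eig(M)
k = np.argmin(np.abs(vals - 1.0))
r_star = np.real(vecs[:, k])
r_star *= r.sum() / r_star.sum()

print(np.allclose(r, r_star))  # True: the iteration lands on the eigenvector
```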
From Friends to Pages
You’ve probably guessed at this point that you can switch the word friends to pages in the above and get PageRank! In the world of webpages:
One webpage claims to be “friends” with another if it links out to the webpage.
Google finds out who’s friends with whom (our initial survey) by just crawling the web and finding out who links to whom.
The number of “friends” a webpage claims to have is just the number of links present on the webpage.
We create the matrix $M$ by crawling the web. Once we have it, we find its eigenvector to get an estimate of the PageRanks.
Putting this together gives us a way of ranking all webpages on the internet for relevance!
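Putting the whole pipeline together for a made-up three-page web. Everything here (the domain names and the link structure) is hypothetical, and this is the simplified, no-damping version derived in this post:

```python
import numpy as np

# A tiny hypothetical web: page -> list of pages it links to.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com", "b.com"],
}
pages = sorted(links)
idx = {p: i for i, p in enumerate(pages)}

# M[i][j] = 1 / n_j if page j links to page i (n_j = number of links on page j).
M = np.zeros((len(pages), len(pages)))
for src, targets in links.items():
    for dst in targets:
        M[idx[dst], idx[src]] = 1.0 / len(targets)

# Iterate r_{t+1} = M r_t until it settles on the eigenvector.
r = np.ones(len(pages)) / len(pages)
for _ in range(500):
    r = M @ r

print(dict(zip(pages, r.round(3))))  # c.com ranks highest: both other pages link to it
```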
I find it really cool how PageRank models the abstract concept of webpages using simple insights into how we interact as humans.
A few quick asides:
The formula we’ve derived for PageRank is a simplified version of the actual one. To see the full formula, read the original paper here.
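For reference, a commonly quoted form of the full formula adds a damping factor $d$ (typically $0.85$), modeling a surfer who occasionally jumps to a random page. In the notation of this post, it looks roughly like:

```latex
PR(i) = \frac{1 - d}{N} + d \sum_{j \in A} \frac{L_{ij} \cdot PR(j)}{n_j}
```

where $N$ is the total number of pages. (The original paper writes the first term as just $1 - d$; both conventions appear in the literature.)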
Notice that PageRank is actually a next-level pun. PageRank was written by Larry Page about ranking webpages - well done, Larry.
So we’ve seen that eigenvectors are at the heart of one of the most important algorithms of all time! More generally, whenever you see a linear function being applied again and again and again (like $r_{t+1} = M r_t$), you are without a doubt going to be looking for the eigenvectors of that linear function/matrix. The eigenvectors will tell you where that process ends!
I hope you enjoyed this unique and powerful application of eigenvectors. In the next post, we’re going to step back into Matrix Algebra. We’ll learn about Kernels (or Nullspaces) and visually see how they help us immediately grasp what matrices do to their inputs.