# PageRank Algorithm: How It Works


### Comments

- **Suhas Nayak:** It would be clearer if you could explain things with some animations, or record the videos while you explain things on a whiteboard.
- **Phenix Ouali:** I hope someone can answer my question: in the video he said that if we are implementing the algorithm on a single core, we have to pull the PageRank rather than push it to the neighbors. Why did he only mention the single-core implementation? Doesn't this work for a parallel implementation?
- **Phenix Ouali:** Hello, is the sparse list representation the same as an adjacency list? Thank you.
- **Oliver Saleh:** We add random hops to avoid spider traps and dead ends. 0.84 is the typical (empirical) value used when choosing between picking a neighbor and randomly hopping. A good, clear explanation of the algorithm, though not much on the reasons behind its formulation.
- **jydk37:** Is lambda a Google trade secret? Without understanding how lambda is generated, I fail to see any value in this presentation.
- **Tien Nguyen Huu:** Please translate to Vietnamese, thank you.
- **Oleg Melnikov:** Thanks for the great video with nice and clear explanations and visuals. A couple of questions came up.
  1. I believe (1 - λ)/N must represent the probability of starting at page x, not hopping to it from another node. Hopping from another node is captured by the second summand of PR(x). Besides, if hopping from another node (with N - 1 other nodes around), we would have N - 1 in the denominator.
  2. λ can be defined more clearly (and mathematically). You give a good example of λ as a parameter of a Bernoulli distribution, but which random variable follows Bern(λ) in this model? Which brings me to the next comment :)
  3. From what I understood, it is λ that is initialized to 1/N, not PR(x). Then PR(x) is initialized with λ; PR(x) changes with each iteration, but λ remains fixed at its initial value. :)
  4. It makes sense to introduce subscripted PR from the start.
  5. Finally, from basic probability, coin flipping is associated with the parameter 1/2 in students' minds. Since λ is vastly different from this, another example of a Bernoulli random variable might make more sense, for example one drawn from card playing or die throwing :) Thanks for this and other videos!
- **Yuanzhi Bao:** Wonderful video! It helped me clear up a lot of misunderstandings about PageRank. Thank you so much.
- **MH:** You need to slow down and explain everything in slower steps.
- **Rahul Thankachan:** How did you calculate lambda? Thank you.
- **Unify Courses:** Funny accent, flawless explanations! Thanks! :)
- **George Don Quixøte:** Shouldn't the sum PR(A) + PR(B) + ... = 1 (or 100) in every step? How can we verify that the algorithm is correct?
- **fckingkim:** Amazing series, thanks so much Victor. I have a question though: I was wondering whether the PageRank algorithm you gave at 1:25 is complete. Please correct me if I'm wrong. In the example beginning at 7:37, PR(B) = 0.18 * 9.1 + ...; my understanding is that 0.18 is the probability that someone reaches page B by randomly hopping to it rather than by clicking a link that brings them to page B, i.e. 0.18 = 1 - λ. You multiplied this 0.18 by 9.1, which is the probability that one would reach page B through hopping if there were no links between the pages. That makes sense, but in the original PageRank equation, PR(B) = (1 - λ) * (1/N) + ..., it doesn't seem to me like the 0.18 is included. I would have expected the original equation to start this way: PR(B) = (1 - λ) * (1 - λ) * (1/N). If I'm correct, is there a reason you didn't include the 0.18 (i.e. 1 - λ) in the original equation? Thanks
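Several comments above ask about the same things: the pull-based (single-core) formulation, the random-hop term (1 - λ)/N, and whether the PageRanks should sum to 1 at every step. A minimal Python sketch can illustrate all three. Note the graph, the damping value `lam = 0.85`, and the function name `pagerank` are illustrative assumptions, not taken from the video:

```python
# Minimal pull-based PageRank sketch. Assumptions: lam = 0.85 as the
# damping factor and a tiny hand-made graph; neither comes from the video.
def pagerank(links, lam=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it links to
    (an adjacency list, which is one way to store a sparse graph)."""
    n = len(links)
    pr = {node: 1.0 / n for node in links}  # uniform starting distribution

    # Precompute incoming edges so each node can "pull" rank from its
    # in-neighbors instead of pushing rank out along its out-links.
    incoming = {node: [] for node in links}
    for src, outs in links.items():
        for dst in outs:
            incoming[dst].append(src)

    for _ in range(iters):
        new_pr = {}
        for node in links:
            # (1 - lam)/n : probability of landing here via a random hop.
            # lam * sum(...) : probability of arriving via an in-link,
            # where each in-neighbor splits its rank over its out-links.
            new_pr[node] = (1 - lam) / n + lam * sum(
                pr[src] / len(links[src]) for src in incoming[node]
            )
        pr = new_pr
    return pr

ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})

# Sanity check raised in the comments: as long as every node has at least
# one out-link, the ranks remain a probability distribution each iteration.
assert abs(sum(ranks.values()) - 1.0) < 1e-6
```

With this graph (no dead ends), the ranks do sum to 1 after every iteration, which is one quick way to check an implementation; dead-end nodes leak rank and need separate handling.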

Rating: 5 out of 5