
Sunday, August 5, 2007

Search engine results page

From Wikipedia, the free encyclopedia

A typical Search Engine Results Page (SERP)

A search engine results page, or SERP, is the listing of web pages returned by a search engine in response to a keyword query. The results normally include a list of web pages with titles, a link to the page, and a short description showing where the keywords have matched content within the page. A SERP may refer to a single page of links returned, or to the set of all links returned for a search query.

Query caching

Some search engines cache SERPs for frequent searches and display the cached SERP instead of a live SERP to increase the performance of the search engine[citation needed]. The search engine updates the SERPs periodically to account for new pages, and possibly to modify the rankings of pages in the SERP.

SERP refreshing can take several days or weeks[citation needed] which can occasionally cause results to be inaccurate or out of date, and new sites and pages to be completely absent.
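As a rough illustration of this caching behaviour, the sketch below keeps a per-query cache with an expiry time. It is only a toy model under assumed names: fetch_live_serp stands in for the expensive live search, and the refresh interval is an arbitrary choice, not a figure published by any search engine.

    import time

    CACHE_TTL_SECONDS = 7 * 24 * 3600       # assumed refresh interval, illustrative only
    _serp_cache = {}                         # query -> (time cached, results)

    def fetch_live_serp(query):
        """Placeholder for the expensive live search (hypothetical)."""
        return ["result for " + query]

    def get_serp(query):
        now = time.time()
        entry = _serp_cache.get(query)
        if entry is not None and now - entry[0] < CACHE_TTL_SECONDS:
            return entry[1]                  # serve the cached SERP
        results = fetch_live_serp(query)     # refresh: recompute and re-cache
        _serp_cache[query] = (now, results)
        return results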

Different types of results

SERPs of major search engines like Google and Yahoo! may include different types of listings: contextual, algorithmic, or organic search listings, as well as sponsored listings, images, maps, definitions, or suggested search refinements. The major search engines also offer different types of search, such as image search, news search, and blog search. The SERPs for these specialized searches offer specific types of results.

Advertising (Sponsored Listings)

SERPs usually contain advertisements; this is how commercial search engines fund their operations. These advertisements commonly appear as small classified-style ads on the right-hand side of the page (e.g. Google AdWords) or directly above the main organic search results on the left (e.g. Yahoo! Sponsored Search).


Open Directory Project

From Wikipedia, the free encyclopedia

Open Directory Project
  • URL: http://dmoz.org/
  • Commercial: No
  • Type of site: Web directory
  • Registration: Optional
  • Owner: Netscape
  • Created by: Netscape
  • Launched: June 5, 1998

The Open Directory Project (ODP), also known as dmoz (from directory.mozilla.org, its original domain name), is a multilingual open content directory of World Wide Web links owned by Netscape that is constructed and maintained by a community of volunteer editors.

ODP uses a hierarchical ontology scheme for organizing site listings. Listings on a similar topic are grouped into categories, which can then include smaller categories.

Project information

ODP was founded as Gnuhoo by Rich Skrenta and Bob Truel in 1998. At the time, Skrenta and Truel were working as engineers for Sun Microsystems. Chris Tolles, who worked at Sun Microsystems as the head of marketing for network security products, also signed on in 1998 as a co-founder of Gnuhoo along with co-founders Bryn Dole and Jeremy Wenokur. Skrenta was already well known for his role in developing TASS, an ancestor of tin, the popular threaded Usenet newsreader for Unix systems. Coincidentally, the original category structure of the Gnuhoo directory was based loosely on the structure of Usenet newsgroups then in existence.

The Gnuhoo directory went live on June 5, 1998. After a Slashdot article suggested that Gnuhoo had nothing in common with the spirit of free software,[1] for which the GNU project was known, Richard Stallman and the Free Software Foundation objected to the use of "Gnu", so Gnuhoo was changed to NewHoo. Yahoo! then objected to the use of "Hoo" in the name, prompting another name change; ZURL was the likely choice. Before the switch to ZURL, however, NewHoo was acquired by Netscape Communications Corporation in October 1998 and became the Open Directory Project. Netscape released the ODP data under the Open Directory License. Netscape was acquired by AOL shortly thereafter, and ODP was one of the assets included in the acquisition. AOL later merged with Time Warner.

By the time Netscape assumed stewardship, the Open Directory Project had about 100,000 URLs indexed with contributions from about 4500 editors. On October 5, 1999, the number of URLs indexed by ODP reached one million. According to an unofficial estimate, the number of URLs in the Open Directory surpassed the number of URLs in the Yahoo! Directory in April 2000 with about 1.6 million URLs. ODP achieved the milestones of indexing two million URLs on August 14, 2000, three million listings on November 18, 2001 and four million on December 3, 2003.

From January 2006 the Open Directory began to publish online reports to inform the public about the development of the project. The first report covered the year 2005. Monthly reports have been issued subsequently.

These reports give greater insight into the functioning of the directory than the simplified statistics given on the front page of the directory. The number of listings and categories cited on the front page includes the "Test" and "Bookmarks" categories, but these are not included in the RDF dump offered to users. The total number of editors who have contributed to the directory as of March 31, 2007 was 75,151.[2] The number of active editors at any given time is much lower; for example, there were 7,407 active editors during August 2006.[3]

System failure and editing outage, October to December 2006

On October 20, 2006, the ODP's main server suffered a catastrophic system failure[4] that prevented editors from working on the directory until December 18, 2006.[5] During that period, an older build of the directory was visible to the public. On January 13, 2007, the Site Suggestion and Update Listings forms were again made available.[6] On January 26, 2007, weekly publication of RDF dumps resumed. To avoid future outages, the system now resides on a redundant configuration of two Intel-based servers.[7]

Competing and spinoff projects

ODP inspired the formation of two other major web directories edited by volunteers and sponsored by public companies, both now defunct: Go.com directory (formerly owned by The Walt Disney Company), and Zeal (formerly owned by LookSmart). These directories did not license their content for open content distribution, which may have contributed to their demise; open content licensing contributed to ODP's success in a fiercely competitive market.

The concept of using a large-scale community of editors to compile online content has been successfully applied to other types of projects. ODP's editing model directly inspired three other open content volunteer projects: an open content restaurant directory known as ChefMoz (launched by ODP management), an open content music directory known as MusicMoz, and an encyclopedia known as Open Site. As yet, none of these has approached ODP's level of success.

Content

Open Directory Project front page, January 2006

Gnuhoo borrowed its initial ontology from Usenet. For example, the topic covered by the comp.ai.alife newsgroup was represented by the category Computers/AI/Artificial_Life. The original divisions were for Adult, Arts, Business, Computers, Games, Health, Home, News, Recreation, Reference, Regional, Science, Shopping, Society, and Sports. While these fifteen top-level categories have remained intact, the ontology of second- and lower-level categories has undergone a gradual evolution; significant changes are initiated by discussion among editors, and then implemented when consensus has been reached.

In July 1998, the directory became multilingual with the addition of the World top-level category. The remainder of the directory lists only English language sites. By May 2005, seventy-five languages were represented. The growth rate of the non-English components of the directory has been greater than the English component since 2002. While the English component of the directory held almost 75% of the sites in 2003, the World level grew to over 1.5 million sites as of May 2005, forming roughly one third of the directory. Ontology in non-English categories generally mirrors that of the English directory, although exceptions which reflect language differences are quite common.

Several of the top-level categories have unique characteristics. The Adult category is not present on the directory homepage, but it is fully available in the RDF dump that ODP provides. While the bulk of the directory is categorized primarily by topic, the Regional category is categorized primarily by region. This has led many to view ODP as two parallel directories: Regional and Topical.

On November 14, 2000, a special directory within the Open Directory was created for people under 18 years of age.[8] Key factors distinguishing this "Kids and Teens" area [1] from the main directory are:

  • Stricter guidelines which limit the listing of sites to those which are targeted or appropriate for people under 18 years of age.[2]
  • Category names as well as site descriptions use vocabulary which is age appropriate.
  • Age tags on each listing distinguish content appropriate for kids (age 12 and under), teens (13 to 15 years old) and mature teens (16 to 18 years old).
  • Kids and Teens content is available as a separate RDF dump.
  • Editing permissions are granted separately, so the Kids and Teens editing community runs parallel to that of the main Open Directory.

By May 2005, this portion of the Open Directory included over 32,000 site listings.

Since early 2004 the whole site has used UTF-8 encoding. Prior to this, the encoding was ISO 8859-1 for English-language categories and a language-dependent character set for other languages. The RDF dumps have been encoded in UTF-8 since early 2000.

Maintenance

Directory listings are maintained by editors. While some editors focus on the addition of new listings, others focus on maintaining the existing listings. This includes tasks such as editing individual listings to correct spelling and grammatical errors, as well as monitoring the status of linked sites. Still others go through site submissions to remove spam and duplicate submissions.

Robozilla is a web crawler written to check the status of all sites listed in ODP. Periodically, Robozilla will flag sites which appear to have moved or disappeared, and editors follow up to check the sites and take action. This process is critical for the directory in striving to achieve one of its founding goals: to reduce the link rot in web directories. Shortly after each run the sites marked with errors are automatically moved to the unreviewed queue where editors may investigate them when time permits.

Due to the popularity of the Open Directory and its resulting impact on search engine rankings (See PageRank), domains with lapsed registration that are listed on ODP have attracted domain hijacking, an issue that has been addressed by regularly removing expired domains from the directory.

While corporate funding and staff for the ODP have diminished in recent years, volunteerism has resulted in the creation of new and improved editing tools, such as linkcheckers to supplement Robozilla, category crawlers, spellcheckers, search tools that directly sift a recent RDF dump, bookmarklets to help automate some editing functions, and tools to help work through unreviewed queues in multiple ways.

License and requirements

ODP data is made available for open content distribution under the terms of the Open Directory License, which requires a specific ODP attribution table on every Web page that uses the data.

The Open Directory License also includes a requirement that users of the data continually check the ODP site for updates and discontinue use and distribution of the data or works derived from the data once an update occurs. This restriction prompted the Free Software Foundation to refer to the Open Directory License as a non-free documentation license, citing that the right to redistribute a given version is not permanent and that users are required to keep checking for changes to the license.

RDF dumps

ODP data is made available through an RDF-like dump that is published on a dedicated download server [3]. An archive of previous versions is also available [4]. New versions are usually generated weekly. An ODP editor has catalogued a number of bugs that have been encountered when processing the ODP RDF dump, including UTF-8 encoding errors (fixed since August 2004) and an RDF format that does not comply with the final RDF specification, because ODP RDF generation was implemented before the RDF specification was finalized [5].

So while today the so-called RDF dump is valid XML, it is not strictly RDF but an ODP-specific format, and software that processes it needs to take this into account.
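As a rough sketch of what such special handling might look like, the fragment below streams the dump with Python's standard xml.etree.ElementTree instead of an RDF library, since the file is well-formed XML but not standard RDF. The element names (ExternalPage, Title, Description) follow common descriptions of the ODP dump layout and should be treated as assumptions to check against an actual file.

    import xml.etree.ElementTree as ET

    def local_name(tag):
        # Strip any XML namespace so we can match bare element names.
        return tag.rsplit('}', 1)[-1]

    def iter_listings(path):
        """Yield (url, title, description) from an ODP-style dump file.
        Element and attribute names are assumptions based on common
        descriptions of the format, not a formal specification."""
        url = title = desc = None
        for event, elem in ET.iterparse(path, events=('start', 'end')):
            name = local_name(elem.tag)
            if event == 'start' and name == 'ExternalPage':
                url = next((v for k, v in elem.attrib.items()
                            if local_name(k) == 'about'), None)
            elif event == 'end':
                if name == 'Title':
                    title = elem.text
                elif name == 'Description':
                    desc = elem.text
                elif name == 'ExternalPage':
                    yield url, title, desc
                    url = title = desc = None
                    elem.clear()        # keep memory use bounded on a large dump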

Content users

ODP data powers the core directory services for many of the Web's largest search engines and portals, including Netscape Search, AOL Search, Google, and Alexa.

Other uses are also made of ODP data. For example, in the spring of 2004 Overture announced a search service for third parties combining Yahoo! Directory search results with ODP titles, descriptions and category metadata. The search engine Gigablast announced on 12 May 2005 its searchable copy of the Open Directory. The technology permits search of websites listed in specific categories, "in effect, instantly creating over 500,000 vertical search engines".[6]

As of September 8, 2006 the ODP listed 313 English-language Web sites that use ODP data as well as 238 sites in other languages.[7] However these figures do not reflect the full picture of use, as those sites which use ODP data without following the terms of the ODP license are not listed.

Policies and procedures

There are restrictions imposed on who can become an ODP editor. The primary gatekeeping mechanism is an editor application process wherein editor candidates demonstrate their editing abilities, disclose affiliations that might pose a conflict of interest, and otherwise give a sense of how the applicant would likely mesh with the ODP culture and mission. A majority of applications are rejected, but reapplying is allowed and sometimes encouraged. The same standards apply to editors of all categories and subcategories, which can result in certain areas going without editors for long periods of time.[citation needed]

ODP's editing model is a hierarchical one. Upon becoming editors, individuals will generally have editing permissions in only a small category. Once they have demonstrated basic editing skills in compliance with the Editing Guidelines, they are welcome to apply for additional editing privileges, in either a broader category, or in a category elsewhere in the directory. Mentorship relationships between editors are encouraged, and internal forums provide a vehicle for new editors to ask questions.

ODP has its own internal forums, the contents of which are intended only for editors to communicate with each other[9] primarily about editing topics. Access to the forums requires an editor account, and editors are expected to keep the contents of these forums private.[10]

Over time, senior editors may be granted additional privileges which reflect their editing experience and leadership within the editing community. The most straightforward are editall privileges, which allow an editor to access all categories in the directory. Meta privileges additionally allow editors to perform tasks such as reviewing editor applications, setting category features, and handling external and internal abuse reports. Cateditall privileges are similar to editall, but only for a single directory category. Similarly, catmod privileges are similar to meta, but only for a single directory category. Catmv privileges allow editors to make changes to directory ontology by moving or renaming categories. All of these privileges are granted by admins and staff, usually after discussion with meta editors.

In August 2004, a new level of privileges called admin was introduced. Administrator status was granted to a number of long serving metas by staff. Administrators have the ability to grant editall+ privileges to other editors and to approve new directory-wide policies, authorities that had previously only been available to root (staff) editors.[11] A full list of senior editors is available to the public. [8]

All ODP editors are expected to abide by ODP's Editing Guidelines.[9] These guidelines describe editing basics: what types of sites may be listed and which may not; how site listings should be titled and described in a loosely consistent manner; conventions for the naming and building of categories; conflict of interest limitations on the editing of sites which the editor may own or otherwise be affiliated with; and a code of conduct within the community. Editors who are found to have violated these guidelines may be contacted by staff or senior editors, have their editing permissions cut back, or lose their editing privileges entirely. ODP Guidelines are periodically revised after discussion in editor forums.

Site submissions

One of the original motivations for forming Gnuhoo/Newhoo/ODP was the frustration that many people experienced in getting their sites listed on Yahoo! Directory. However, Yahoo! has since implemented a paid service for timely consideration of site submissions, a lead that has been followed by many other directories; some accept no free submissions at all. By contrast, the ODP has maintained its policy of free site submissions for all types of sites, the only one of the major general directories to do so.

One result has been a gradual divergence between the ODP and other directories in the balance of content. The pay-for-inclusion model favours those able and willing to pay, so commercial sites tend to predominate in directories that use it (see, for example, the initial impact on LookSmart [10]), whereas a directory staffed by volunteers reflects the aims and interests of those volunteers. The ODP lists a high proportion of informational and non-profit sites.

Another consequence of the free submission policy is that the ODP receives enormous numbers of submissions. The ODP now has approximately two million unreviewed submissions, in large part due to spam and incorrectly submitted sites.[citation needed] As a result, the average processing time for a site submission has grown longer with each passing year. The time taken cannot be predicted, however, since the variation is so great: a submission might be processed within hours or take several years.

Controversy and criticism

There have long been allegations that volunteer ODP editors give favorable treatment to their own websites while concomitantly thwarting the good faith efforts of their competition[11]. Such allegations are fielded by ODP's staff and meta editors, who have the authority to take disciplinary action against volunteer editors who are suspected of engaging in abusive editing practices. In 2003, ODP introduced a new Public Abuse Report System that allows members of the general public to report and track allegations of abusive editor conduct using an online form.[12] Other alleged abuses have occurred at the executive level, with company management leveraging the link value from ODP to accelerate new privately funded projects. Although site policies suggest that an individual site is submitted in only one category [12], Topix.net, a news aggregation site operated by ODP founder Rich Skrenta, has more than 10,000 listings.[13]

Early in the history of the ODP, its staff gave representatives of selected websites, such as Rolling Stone magazine, editing access at ODP in order to list many individual pages from those websites. The use of such professional content providers lapsed and the experiment has not been repeated.

Ownership and management

Underlying some controversy surrounding ODP is its ownership and management. Many of the original GnuHoo volunteers felt that they had been deceived into joining a commercial enterprise.[citation needed] Most of that controversy died down when the project was renamed NewHoo. Moreover, when Netscape acquired the project, renamed it ODP, and released ODP's content under an open content license, criticism of the ODP all but disappeared. However, as ODP's content became widely used by most major search engines and web directories, the issue of ODP's ownership and management resurfaced.

At ODP's inception, there was little thought given to the idea of how ODP should be managed, and there were no official forums, guidelines, or FAQs. In essence, ODP began as a free for all. Even after ODP set up its internal editor forums, many editors remained blissfully unaware that these forums existed until they were directed to the forums by one of their fellow editors. Moreover, given that ODP had no official guidelines at first, ODP editors simply hashed out some sort of consensus among themselves and published unofficial FAQs.

As time went on, the ODP Editor Forums became the de facto ODP parliament, and when one of ODP's staff members would post an opinion in the forums, it would be considered an official ruling. (In other words, "Staff has spoken.") There was also a short-lived attempt at moderation of the ODP Editor Forums, but it was abandoned as being the antithesis of the egalitarian principles on which the ODP community was supposed to be based. Even so, ODP staff began to give trusted senior editors additional editing privileges, including the ability to approve new editor applications, which eventually led to a stratified hierarchy of duties and privileges among ODP editors, with ODP's paid staff having the final say regarding ODP's policies and procedures.

Allegations that editors are removed for criticizing policies

ODP's paid staff has imposed controversial policies from time to time[citation needed], and volunteer editors who dissent in ways staff considers uncivil may find their editing privileges removed.[citation needed] One alleged example of this was chronicled at the XODP Yahoo! eGroup in May of 2000.[13] The earliest known exposé was Life After the Open Directory Project, later appearing as a June 1, 2000, guest column written for Traffick.com,[14] by David F. Prenatt, Jr. (former ODP editor "netesq") after losing his ODP editing privileges. Another example was the volunteer editor known by the alias The Cunctator, who was banned from the ODP soon after submitting an article to Slashdot on October 24, 2000, which criticized changes in ODP's copyright policies.[15]

Uninhibited discussion of ODP's purported shortcomings has become more common on mainstream Webmaster discussion forums.[citation needed]

Editor removal procedures

ODP's editor removal procedures are overseen by ODP's staff and meta editors. According to ODP's official editorial guidelines, editors are removed for abusive editing practices or uncivil behaviour. Discussions that may result in disciplinary action against volunteer editors take place in a private forum which can only be accessed by ODP's staff and meta editors, and volunteer editors who are being discussed are not given notice that such proceedings are taking place. Some people find this arrangement distasteful and would prefer a process modeled on a trial in the U.S. judicial system.

In the article Editor Removal Explained, ODP meta editor Arlarson states that "a great deal of confusion about the removal of editors from ODP results from false or misleading statements by former editors".[16]

ODP has a standing policy that prohibits any current ODP editor who is in a position to know the reasons for a specific editor removal from discussing them. In the past, this has led to claims that many removed ODP editors are left to wonder why they can no longer log in at ODP to perform their editing work.[citation needed]

However, ODP is now set up so that when someone attempts to log in with a deactivated editor account, a generic web page is displayed informing the removed editor that a final decision has been made regarding the deactivation of his or her login and providing a list of possible reasons why such a decision might have been made.[citation needed]

Blacklisting allegations

Senior ODP editors have the ability to attach "warning" or "do not list" notes to individual domains, but no editor has the unilateral ability to block certain sites from being listed. Sites with these notes might still be listed, and at times notes are removed after some discussion.

Software

The ODP Editor Forums were originally run on software that was based on the proprietary Ultimate Bulletin Board system. In June 2003, they switched to the open source phpBB system. As of 2007, these forums are powered by a modified version of phpBB.

The ODPSearch software is a derivative version of Isearch and is open source, licensed under the Mozilla Public License.

The ODP uses Bugzilla for bug tracking and Apache as its web server, along with the Squid web proxy server; all of these applications are open source.

However, the ODP database/editing software is closed source, although Richard Skrenta of ODP did say in June 1998 that he was considering licensing it under the GNU General Public License. This has led to criticism from the aforementioned GNU project and other proponents of free software[citation needed], many of whom also criticise the ODP content license.[17]

As such, there have been some efforts to provide alternatives to ODP (see below). These alternatives would allow communities of like-minded editors to set up and maintain their own open source/open content Web directories. However, no significant open source/open content alternative to ODP has emerged.

Hierarchical structure

ODP's hierarchical structure has recently drawn criticism. Many believe that hierarchical directories are too complicated. With the emergence of Web 2.0, folksonomies began to appear, and critics argued that folksonomies, networks, and directed graphs are more "natural" and easier to manage than hierarchies.[18][19][20]

References

  1. ^ "The GnuHoo BooBoo". Slashdot. Retrieved on April 27, 2007.
  2. ^ ODP Front Page, retrieved 15 August 2006
  3. ^ Open Directory Forum - General - Analyzing editor numbers - page 1, 13 August 2006
  4. ^ "Dmoz's Catastrophic Server/Hardware Failure" October 27, 2006, retrieved November 15, 2006
  5. ^ "dmoz.org technical problems (UPDATED: December 18, 2006)", Resource-Zone.com Announcement, retrieved December 19, 2006
  6. ^ dmoz.org technical problems (UPDATED: January 13, 2007) at resource-zone.com
  7. ^ The Hamsters' New Home, in: Open Directory newsletter issue Winter 2006, retrieved December 26, 2006
  8. ^ Kids and Teens Launches! Open Directory Project Newsletter, November/December 2000
  9. ^ Communication and Codes of Conduct: Using the Forums, from the Open Directory Editing Guidelines
  10. ^ Communication and Codes of Conduct: Email and Forum Privacy, from the Open Directory Editing Guidelines
  11. ^ Open Directory Project Administrator Guidelines
  12. ^ Open Directory Project: Public Abuse Report System.
  13. ^ XODP Yahoo! Group Message Archive
  14. ^ David F. Prenatt, Jr., Life After the Open Directory Project, Traffick.com (June 1, 2000).
  15. ^ CmdrTaco, Dmoz (aka AOL) Changing Guidelines In Sketchy Way, Slashdot (October 24, 2000).
  16. ^ Arlarson, Editor Removal Explained, Open Directory Project Newsletter (September 2000).
  17. ^ "The primary problems are that your right to redistribute any given version is not permanent and that it requires the user to keep checking back at that site, which is too restrictive of the user's freedom." http://www.fsf.org/licensing/licenses/index_html#NonFreeDocumentationLicenses
  18. ^ Hriţcu, C., Folksonomies vs. Ontologies, April 8, 2005.
  19. ^ Shirky, C., Ontology is Overrated: Links, Tags, and Post-hoc Metadata, ITConversations, March 15, 2005.
  20. ^ Hammond, T., Hannay, T., Lund, B. & Scott, J., Social Bookmarking Tools (I) D-Lib Magazine, April 2005.



PageRank

From Wikipedia, the free encyclopedia

How PageRank Works

PageRank is a link analysis algorithm that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is also called the PageRank of E and denoted by PR(E).

PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank[1]) and later Sergey Brin as part of a research project about a new kind of search engine. The project started in 1995 and led to a functional prototype, named Google, in 1998. Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors which determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web search tools.[2]

The name PageRank is a trademark of Google. The PageRank process has been patented (U.S. Patent 6,285,999). The patent is not assigned to Google but to Stanford University.

General description

Google describes PageRank:[2]

PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves "important" weigh more heavily and help to make other pages "important".
A graphical representation of a web of links between sites used for PageRank calculations.

In other words, a PageRank results from a "ballot" among all the other pages on the World Wide Web about how important a page is. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page there is no support for that page.

Google assigns a numeric weighting from 0 to 10 to each webpage on the Internet; this PageRank denotes a site's importance in the eyes of Google. The PageRank scale is logarithmic, like the Richter scale, and is roughly based on the quantity of inbound links as well as the importance of the pages providing those links.

Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[3] In practice, the PageRank concept has proven to be vulnerable to manipulation, and extensive research has been devoted to identifying falsely inflated PageRank and ways to ignore links from documents with falsely inflated PageRank.

Alternatives to the PageRank algorithm include the HITS algorithm proposed by Jon Kleinberg, the IBM CLEVER project and the TrustRank algorithm.

PageRank algorithm

PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided between all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.

A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.

Simplified PageRank algorithm

Assume a small universe of four web pages: A, B, C and D. The initial approximation of PageRank would be evenly divided between these four documents. Hence, each document would begin with an estimated PageRank of 0.25.

If pages B, C, and D each link only to A, they would each confer 0.25 PageRank to A. All PageRank in this simplistic system would thus accumulate at A, because all links would point to A.

PR(A) = PR(B) + PR(C) + PR(D).

But then suppose page B also has a link to page C, and page D has links to all three pages. The value of the link-votes is divided among all the outbound links on a page. Thus, page B gives a vote worth 0.125 to page A and a vote worth 0.125 to page C. Only one third of D's PageRank is counted for A's PageRank (approximately 0.083).

PR(A) = \frac{PR(B)}{2} + \frac{PR(C)}{1} + \frac{PR(D)}{3}.

In other words, the PageRank conferred by an outbound link L( ) is equal to the document's own PageRank score divided by the normalized number of outbound links (it is assumed that links to specific URLs only count once per document).

PR(A) = \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)}.

In the general case, the PageRank value for any page u can be expressed as:

PR(u) = \sum_{v \in B_u} \frac{PR(v)}{N_v},

i.e. the PageRank value for a page u is dependent on the PageRank values for each page v out of the set Bu (this set contains all pages linking to page u), divided by the number of links from page v (this is Nv).
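As a concrete check of the numbers above, the following Python sketch applies the simplified formula once to the four-page example (B linking to A and C, C to A, and D to A, B and C; A's outbound links are not specified in the text, so it is left with none). This is only the simplified model from this section, not Google's implementation.

    links = {                               # page -> pages it links to
        'A': [],                            # not specified above, so left empty
        'B': ['A', 'C'],
        'C': ['A'],
        'D': ['A', 'B', 'C'],
    }
    pr = {page: 0.25 for page in links}     # initial approximation: evenly divided

    def one_iteration(pr):
        """One pass of PR(u) = sum over v in B_u of PR(v) / N_v."""
        new = {page: 0.0 for page in pr}
        for page, outs in links.items():
            for target in outs:
                new[target] += pr[page] / len(outs)
        return new

    pr = one_iteration(pr)
    # B passes 0.125 to A and 0.125 to C; D passes about 0.083 to each of
    # A, B and C; C passes its full 0.25 to A.  So pr['A'] is about 0.458.
    print(pr)

Because A has no outbound links in this sketch, repeated iterations of the simplified rule would let PageRank drain away; the damping-factor version described next keeps the computation well behaved.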

PageRank algorithm including damping factor

The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[4]

The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents in the collection) and this term is then added to the product of (the damping factor and the sum of the incoming PageRank scores).

That is,

PR(A)= 1 - d + d \left( \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}+\,\cdots \right)

or (N = the number of documents in collection)

PR(A)= {1 - d \over N} + d \left( \frac{PR(B)}{L(B)}+ \frac{PR(C)}{L(C)}+ \frac{PR(D)}{L(D)}+\,\cdots \right) .

So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The second formula above supports the original statement in Page and Brin's paper that "the sum of all PageRanks is one".[3] Unfortunately, however, Page and Brin gave the first formula, which has led to some confusion.

Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.

The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are all equally probable and are the links between pages.

If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. However, the solution is quite simple. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.

When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added for all pages in the Web, governed by the damping factor d (usually about 0.85), a value estimated from the frequency with which an average surfer uses his or her browser's bookmark feature.

So, the equation is as follows:

PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR (p_j)}{L(p_j)}

where p1,p2,...,pN are the pages under consideration, M(pi) is the set of pages that link to pi, L(pj) is the number of outbound links on page pj, and N is the total number of pages.
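Translated into code, the general equation above becomes a short power iteration. The sketch below is one straightforward way to implement it (it is not Google's production algorithm); sink pages are treated as linking to every page, as described earlier, and the tolerance and iteration limit are arbitrary illustrative choices.

    def pagerank(links, d=0.85, tol=1.0e-8, max_iter=100):
        """links: dict mapping each page to the list of pages it links to.
        Implements PR(p_i) = (1 - d)/N + d * sum(PR(p_j)/L(p_j)) over pages
        p_j linking to p_i, with sink pages spread evenly over all pages."""
        pages = list(links)
        n = len(pages)
        pr = {p: 1.0 / n for p in pages}        # start from a uniform distribution
        for _ in range(max_iter):
            sink_mass = sum(pr[p] for p in pages if not links[p])
            new = {}
            for p in pages:
                incoming = sum(pr[q] / len(links[q])
                               for q in pages if p in links[q])
                new[p] = (1 - d) / n + d * (incoming + sink_mass / n)
            if sum(abs(new[p] - pr[p]) for p in pages) < tol:
                return new
            pr = new
        return pr

    # The four-page example used earlier:
    print(pagerank({'A': [], 'B': ['A', 'C'], 'C': ['A'], 'D': ['A', 'B', 'C']}))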

The PageRank values are the entries of the dominant eigenvector of the modified adjacency matrix. This makes PageRank a particularly elegant metric: the eigenvector is

\mathbf{R} = \begin{bmatrix} PR(p_1) \\ PR(p_2) \\ \vdots \\ PR(p_N) \end{bmatrix}

where R is the solution of the equation

\mathbf{R} =  \begin{bmatrix} {(1-d)/ N} \\ {(1-d) / N} \\ \vdots \\ {(1-d) / N} \end{bmatrix}  + d  \begin{bmatrix} \ell(p_1,p_1) & \ell(p_1,p_2) & \cdots & \ell(p_1,p_N) \\ \ell(p_2,p_1) & \ddots &  & \vdots \\ \vdots & & \ell(p_i,p_j) & \\ \ell(p_N,p_1) & \cdots & & \ell(p_N,p_N) \end{bmatrix}  \mathbf{R}

where the adjacency function \ell(p_i,p_j) is 0 if page pj does not link to pi, and normalised such that, for each j

\sum_{i = 1}^N \ell(p_i,p_j) = 1,

i.e. the elements of each column sum up to 1.

This is a variant of the eigenvector centrality measure used commonly in network analysis.

The values of the PageRank eigenvector are fast to approximate (only a few iterations are needed) and in practice it gives good results.

As a result of Markov theory, it can be shown that the PageRank of a page is the probability of being at that page after many clicks. This happens to equal t^{-1} (that is, 1/t), where t is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.
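The eigenvector formulation can also be checked numerically. The short numpy sketch below builds the matrix written above for the four-page example used earlier (numpy and the example graph are choices made here for illustration, not something the text prescribes) and reads the PageRank values off the dominant eigenvector; they agree with the power-iteration sketch above.

    import numpy as np

    pages = ['A', 'B', 'C', 'D']
    links = {'A': [], 'B': ['A', 'C'], 'C': ['A'], 'D': ['A', 'B', 'C']}
    n = len(pages)

    # Column-stochastic link matrix; a sink page is treated as linking to every page.
    M = np.zeros((n, n))
    for j, p in enumerate(pages):
        outs = links[p] if links[p] else pages
        for q in outs:
            M[pages.index(q), j] = 1.0 / len(outs)

    d = 0.85
    G = (1 - d) / n + d * M                  # every column of G sums to 1
    eigenvalues, eigenvectors = np.linalg.eig(G)
    principal = np.real(eigenvectors[:, np.argmax(np.real(eigenvalues))])
    ranks = principal / principal.sum()      # normalise so the entries sum to 1
    print(dict(zip(pages, ranks)))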

The main disadvantage is that it favors older pages, because a new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia). The Google Directory (itself a derivative of the Open Directory Project) allows users to see results sorted by PageRank within categories. The Google Directory is the only service offered by Google where PageRank directly determines display order. In Google's other search services (such as its primary Web search) PageRank is used to weight the relevance scores of pages shown in search results.

Several strategies have been proposed to accelerate the computation of PageRank.[5]

Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which seeks to determine which documents are actually highly valued by the Web community.

Google is known to actively penalize link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.

PageRank variations

Google Toolbar

An example of the PageRank indicator as found on the Google toolbar.

The Google Toolbar's PageRank feature displays a visited page's PageRank as a whole number between 0 and 10. The most popular websites have a PageRank of 10. The least have a PageRank of 0. Google has not disclosed the precise method for determining a Toolbar PageRank value. Google representative Matt Cutts has publicly indicated that the Toolbar PageRank values are republished about once every three months, indicating that the Toolbar PageRank values are historical rather than real-time values.[6]

Google directory PageRank

The Google Directory PageRank is an 8-unit measurement. These values can be viewed in the Google Directory. Unlike the Google Toolbar, which shows the PageRank value on mouseover of the green bar, the Google Directory shows the PageRank not as a numeric value but only as a green bar.

False or spoofed PageRank

While the PR shown in the Toolbar is considered to be derived from an accurate PageRank value (at some time prior to publication by Google) for most sites, this value is also easily manipulated. A current flaw is that any low-PageRank page that is redirected, via a 302 server header or a "Refresh" meta tag, to a high-PR page causes the lower-PR page to acquire the PR of the destination page. In theory, a new PR0 page with no incoming links can be redirected to the Google home page, which is a PR10, and by the next PageRank update the PR of the new page will be upgraded to PR10. This spoofing technique, also known as 302 Google Jacking, is a known failing or bug in the system. Any page's PR can be spoofed to a higher or lower number of the webmaster's choice, and only Google has access to the real PR of the page. Spoofing is generally detected by running a Google search for a URL with questionable PR, as the results will display the URL of an entirely different site (the one redirected to).

Manipulating PageRank

For search-engine optimization purposes, some companies, such as Text Link Brokers, offer to sell high PageRank links to webmasters.[7] As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). The practice of buying and selling links is intensely debated across the Webmastering community. Google advises webmasters to use the nofollow HTML attribute value on sponsored links. According to Matt Cutts, Google is concerned about webmasters who try to game the system, and thereby reduce the quality of Google search results.[7]

Other uses of PageRank

A version of PageRank has recently been proposed as a replacement for the traditional ISI impact factor,[8] and implemented at eigenfactor.org. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion.

PageRank has also been used to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity.[9]

A similar new use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[10]

A dynamic weighting method similar to PageRank has been used to generate customized reading lists based on the link structure of Wikipedia.[11]

A Web crawler may use PageRank as one of a number of importance metrics it uses to determine which URL to visit next during a crawl of the web. One of the early working papers[12] that was used in the creation of Google is Efficient crawling through URL ordering,[13] which discusses the use of a number of different importance metrics to determine how deeply, and how much of, a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed, such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL.

Google's "rel='nofollow'" proposal

In early 2005, Google implemented a new value, "nofollow", for the rel attribute of HTML link and anchor elements, so that website builders and bloggers can make links that Google will not consider for the purposes of PageRank — they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing.

As an example, people could create many message-board posts with links to their website to artificially inflate their PageRank. Now, however, the message-board administrator can modify the code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts.
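A message-board filter of the kind described above might look roughly like the following sketch. It is purely illustrative (a real forum would run user posts through a proper HTML parser or sanitiser rather than a regular expression) and simply adds rel="nofollow" to anchor tags that do not already carry a rel attribute.

    import re

    def add_nofollow(post_html):
        """Add rel="nofollow" to <a ...> tags in user-submitted HTML.
        Regex-based sketch for simple, well-formed markup only."""
        def rewrite(match):
            tag = match.group(0)
            if 'rel=' in tag.lower():        # leave existing rel attributes alone
                return tag
            return tag[:-1] + ' rel="nofollow">'
        return re.sub(r'<a\b[^>]*>', rewrite, post_html, flags=re.IGNORECASE)

    print(add_nofollow('Visit <a href="http://example.com/">my site</a>!'))
    # Visit <a href="http://example.com/" rel="nofollow">my site</a>!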

This method of avoidance, however, also has various drawbacks, such as reducing the link value of actual comments. (See: Spam in blogs#rel="nofollow")


References

  1. ^ David Vise and Mark Malseed (2005). The Google Story, p. 37. ISBN 0-553-80457-X.
  2. ^ a b Google Technology. [1]
  3. ^ a b The Anatomy of a Large-Scale Hypertextual Web Search Engine. Brin, S.; Page, L (1998).
  4. ^ Sergey Brin and Lawrence Page (1998). "The anatomy of a large-scale hypertextual Web search engine". Proceedings of the seventh international conference on World Wide Web 7: 107-117 (Section 2.1.1 Description of PageRank Calculation).
  5. ^ Fast PageRank Computation via a Sparse Linear System (Extended Abstract). Gianna M. Del Corso, Antonio Gullí, Francesco Romani.
  6. ^ Cutts, Matt. What's an update? Blog post (September 8, 2005).
  7. ^ a b How to report paid links. mattcutts.com/blog (April 14, 2007). Retrieved on 2007-05-28.
  8. ^ Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel. (December 2006). "Journal Status". Scientometrics 69 (3).
  9. ^ Andrea Esuli and Fabrizio Sebastiani. PageRanking WordNet synsets: An Application to Opinion-Related Properties. In Proceedings of the 35th Meeting of the Association for Computational Linguistics, Prague, CZ, 2007, pp. 424-431. Retrieved on June 30, 2007.
  10. ^ Benjamin M. Schmidt and Matthew M. Chingos (2007). "Ranking Doctoral Programs by Placement: A New Method". PS: Political Science and Politics 40 (July): 523-529.
  11. ^ Wissner-Gross, A. D. (2006). "Preparation of topical reading lists from the link structure of Wikipedia". Proceedings of the IEEE International Conference on Advanced Learning Technology.
  12. ^ Working Papers Concerning the Creation of Google. Google. Retrieved on November 29, 2006.
  13. ^ Cho, J., Garcia-Molina, H., and Page, L. (1998). "Efficient crawling through URL ordering". Proceedings of the seventh conference on World Wide Web.
