

Sunday, August 5, 2007

Search plugin

From Wikipedia, the free encyclopedia


A search plugin provides the ability to access a search engine from a web browser, without having to go to the engine's website first.

Technically, a search plugin is a small text file that tells the browser what information to send to a search engine and how the results are to be retrieved. The ease with which search plugins can be created has led to archives where public contributions can be downloaded, and these can be important in software personalization.
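
At its core, such a plugin file boils down to a URL template with a placeholder for the user's query. The short Python sketch below illustrates that idea; the template URL and placeholder name are illustrative stand-ins, not taken from any particular plugin format.

from urllib.parse import quote_plus

# Hypothetical template of the kind a search plugin file supplies.
TEMPLATE = "https://www.example.com/search?q={searchTerms}"

def build_search_url(template: str, query: str) -> str:
    """Substitute the user's URL-encoded query into the template."""
    return template.replace("{searchTerms}", quote_plus(query))

print(build_search_url(TEMPLATE, "search plugin"))
# https://www.example.com/search?q=search+plugin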

With the introduction of Firefox 2.0 in 2006, search plugins started to offer search suggestions, where candidate terms appear as the user types. These are laid out in a menu and are predicted based on the most likely completion of the partially typed word. This uses Ajax technology to query the remote website's database for the most common search terms, and so differs from traditional browser autofill, where the form would typically be completed based on information the user had entered previously.
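
The server-side handler behind those Ajax requests can be as simple as a prefix match against a table of popular queries. A minimal Python sketch, with a made-up term list assumed to be sorted from most to least common:

POPULAR_TERMS = ["search engine", "search plugin", "sea levels", "seat covers"]

def suggest(prefix: str, limit: int = 5) -> list[str]:
    """Return the most likely completions for a partially typed query."""
    prefix = prefix.lower()
    return [term for term in POPULAR_TERMS if term.startswith(prefix)][:limit]

print(suggest("sea"))
# ['search engine', 'search plugin', 'sea levels', 'seat covers']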


source: wikipedia.org

Multiway linking

From Wikipedia, the free encyclopedia


Multiway linking is a technique used for website promotion whereby three or more partner sites each create similar one way links to one another. This provides each website with a one way, non-reciprocal link. The technique evolved from reciprocal linking, where a link is created between only two websites. According to Google and Yahoo, two of the largest search engines, the latest search algorithms have evolved to hold less favour towards websites that contain a high percentage of reciprocated links, and higher favour towards websites that maintain a high level of incoming non-reciprocated (one way) links. There are many services, including iCrawl, that offer creation and management of one way links.

The term multiway simply refers to the fact that the link exchange is between three or more websites, while each link remains singular, pointing to only one other website. Other, more indirect linking methods that may increase a site's web presence include loading images, videos, content or RSS feeds from a third partner's website.
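
A toy model makes the distinction concrete. The Python sketch below, using made-up site names, contrasts a reciprocal two-way swap with a three-way ring in which no individual link is ever reciprocated:

# Each key links out to the sites in its value set.
reciprocal = {"a.com": {"b.com"}, "b.com": {"a.com"}}                    # 2-way swap
multiway = {"a.com": {"b.com"}, "b.com": {"c.com"}, "c.com": {"a.com"}}  # 3-way ring

def reciprocated_pairs(links):
    """Count pairs of sites that point at each other."""
    return sum(1 for src, dsts in links.items() for dst in dsts
               if src in links.get(dst, set())) // 2

print(reciprocated_pairs(reciprocal))  # 1 -> easy for an engine to spot
print(reciprocated_pairs(multiway))    # 0 -> every link looks one way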


Best Practices

As with most SEO methods, as they become more popular the search engines rightfully correct and enhance their algorithms to crack down on websites that gain a high presence whilst offering little or no service to the online world. For your site to benefit and gain positions in search results, all links should come from similar websites or categories: pages that are themselves well ranked, with content related to your site's purpose.

Sites known to be 'authoritative' will also hold a higher vote towards the end score for each page in your website. If the incoming links to your site are from a website that is considered authoritative, then it is more likely that your site will achieve a higher web presence and ranking within the major search engines.


source: wikipedia.org

Search engine marketing

From Wikipedia, the free encyclopedia


Search engine marketing, or SEM, is a form of Internet marketing that seeks to promote websites by increasing their visibility in the search engine results pages (SERPs), and it has a proven ROI (return on investment). According to the Search Engine Marketing Professionals Organization, SEM methods include search engine optimization (SEO), paid placement, and paid inclusion.[1] Other sources, including the New York Times, define SEM as the practice of buying paid search listings, as distinct from SEO, which seeks to obtain better free search listings.[2][3]

Market Structure

In 2006, North American advertisers spent US$9.4 billion on search engine marketing, a 62% increase over the prior year and a 750% increase over 2002. The largest SEM vendors are Google AdWords, Yahoo! Search Marketing and Microsoft adCenter.[1] As of 2006, SEM was growing much faster than traditional advertising.[2]

History

As the number of sites on the Web increased in the mid-to-late 1990s, search engines started appearing to help people find information quickly. Search engines developed business models to finance their services, such as the pay per click programs offered by Open Text [4] in 1996 and then Goto.com [5] in 1998. Goto.com changed its name to Overture in 2001 [6], was purchased by Yahoo! in 2003, and now offers paid search opportunities for advertisers through Yahoo! Search Marketing. Google also began to offer advertisements on search results pages in 2000 through the Google AdWords program. By 2007, pay-per-click programs had proved to be the primary money-makers [7] for search engines.

Search engine optimization consultants expanded their offerings to help businesses learn about and use the advertising opportunities offered by search engines, and new agencies focusing primarily upon marketing and advertising through search engines emerged. The term "search engine marketing" was proposed by Danny Sullivan in 2001 [8] to cover the spectrum of activities involved in performing SEO, managing paid listings at the search engines, submitting sites to directories, and developing online marketing strategies for businesses, organizations, and individuals. By 2007, search engine marketing was stronger than ever, with SEM budgets up 750% between 2002 and 2006.[9]

Ethical questions

Paid search advertising has not been without controversy, and issues around how search engines present advertising on their search results pages have been the target of a series of studies and reports [10] [11] [12] by Consumer Reports WebWatch, from Consumers Union. The FTC also issued a letter [13] in 2002 about the importance of disclosing paid advertising on search engines, in response to a complaint from Commercial Alert, a consumer advocacy group with ties to Ralph Nader.

See also

Organizations
  • SEMPO, the Search Engine Marketing Professional Organization, is a non-profit professional association for search engine marketers.

References

  1. ^ a b The State of Search Engine Marketing 2006. Search Engine Land (February 8, 2007). Retrieved on 2007-06-07.
  2. ^ a b More Agencies Investing in Marketing With a Click. New York Times (March 14, 2006). Retrieved on 2007-06-07.
  3. ^ SEO Isn’t SEM. dmnews.com (December 5, 2005). Retrieved on 2007-06-07.
  4. ^ Engine sells results, draws fire. news.com.com (June 21, 1996). Retrieved on 2007-06-09.
  5. ^ GoTo Sells Positions. searchenginewatch.com (March 3, 1998). Retrieved on 2007-06-09.
  6. ^ GoTo gambles with new name. news.com.com (September 10, 2001). Retrieved on 2007-06-09.
  7. ^ Jansen, B. J. (May 2007). The Comparative Effectiveness of Sponsored and Nonsponsored Links for Web E-commerce Queries. ACM Transactions on the Web. Retrieved on 2007-06-09.
  8. ^ Congratulations! You're A Search Engine Marketer!. searchenginewatch.com (November 5, 2001). Retrieved on 2007-06-09.
  9. ^ Is Search Engine Marketing Dying?. darin.cc (June 20, 2007). Retrieved on 2007-06-20.
  10. ^ False Oracles: Consumer Reaction to Learning the Truth About How Search Engines Work (Abstract). consumerwebwatch.org (June 30, 2003). Retrieved on 2007-06-09.
  11. ^ Searching for Disclosure: How Search Engines Alert Consumers to the Presence of Advertising in Search Results. consumerwebwatch.org (November 8, 2004). Retrieved on 2007-06-09.
  12. ^ Still in Search of Disclosure: Re-evaluating How Search Engines Explain the Presence of Advertising in Search Results. consumerwebwatch.org (June 9, 2005). Retrieved on 2007-06-09.
  13. ^ Re: Complaint Requesting Investigation of Various Internet Search Engine Companies for Paid Placement and Paid Inclusion Programs. ftc.gov (June 22, 2002). Retrieved on 2007-06-09.
source: wikipedia.org

Search engine submission

From Wikipedia, the free encyclopedia


Search engine submission is how a webmaster submits a web site directly to a search engine. While search engine submission is often seen as a way to promote a web site, it is generally not necessary, because the major search engines like Google, Yahoo, and MSN use crawlers, bots, and spiders that would eventually find most web sites on the Internet all by themselves.

There are two basic reasons to submit a web site or web page to a search engine. The first is to add an entirely new web site, because the site operators would rather not wait for a search engine to discover it. The second is to have a web page or web site updated in the respective search engine.

How web sites are submitted

There are two basic methods still in use today that allow a webmaster to submit a site to a search engine: submitting one web page at a time, or submitting the entire site at once with a sitemap. In practice, all a webmaster really needs to submit is the home page; from there, most search engines are able to crawl a site, provided that it is well designed.

Web sites want to be listed in popular search engines because that is how most people access web sites. People search for information on the web at search engines, and sites that appear on the first page of results are said to be in the top 10. Clicking on a hyperlink in the results brings the found web page up in the searcher's web browser.

Thus, webmasters often highly desire that their sites appear in the top 10 of a search engine search, because searchers are not very likely to look beyond the first page of search results, known as a SERP.

In order to obtain good placement in search results on the various engines, webmasters must optimize their web pages, a process called search engine optimization. Many variables come into play, such as the placement and density of desirable keywords, the hierarchy of web pages employed in a web site (i.e., how many clicks from the home page are required to access a particular web page?), and the number of web pages that link to a given web page. The Google search engine also uses a concept called PageRank.

PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at considerably more than the sheer volume of votes, or links a page receives; for example, it also analyzes the page that casts the vote. Votes cast by pages that are themselves "important" weigh more heavily and help to make other pages "important." Using these and other factors, Google provides its views on pages' relative importance. (Source: http://www.google.com/technology/)
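
The "links as votes" idea in the quote above can be reproduced in a few lines. The Python sketch below runs the classic power iteration on a made-up three-page web; the damping factor 0.85 is the value commonly cited for the original PageRank formulation.

# Toy web: A links to B and C, B links to C, C links back to A.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, outs in links.items():
            for dst in outs:
                # Each outgoing link passes on an equal share of src's rank.
                new[dst] += damping * rank[src] / len(outs)
        rank = new
    return rank

print(pagerank(links))
# C collects votes from both A and B, so it ends up ranked highest.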


Sitemaps

Google Sitemaps was introduced in June 2005 so that web developers could publish lists of links from across their sites.[1] The sitemap is used to make the search engine aware of the site and of the pages on it.
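
Producing such a list is straightforward. The Python sketch below emits a minimal sitemap for two placeholder URLs, using the sitemaps.org 0.9 namespace of the later standardized protocol (the original Google protocol referenced below used a 0.84 schema):

urls = ["http://www.example.com/", "http://www.example.com/about.html"]

def make_sitemap(urls):
    """Render a minimal XML sitemap for the given list of page URLs."""
    entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n"
            "</urlset>")

print(make_sitemap(urls))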

Search engine submission companies

Earlier in the history of the web, the submission process could be automated. Nowadays, however, most search engines have implemented steps to prevent this.

Nevertheless, many commercial businesses still offer to automatically submit a web site to several hundred search engines for a fee. These services are generally considered scam operations because they do not work, nor are they even necessary: there are little more than a dozen or two search engines to begin with, and really only three major ones. Submitting a site to all three should take no longer than 15 minutes. Google, for example, has dozens and dozens of foreign editions, but they are all the same underlying search engine, and many of the lesser search engines are powered by the Google engine index.

Most search engines currently require the input of randomly generated characters displayed on the browser screen when submitting a site. This feature makes automatic submission of a web site to many different search engines by a commercial submission service effectively impossible.

Search engine submission services no longer necessary

By 2004, search engine submission services had become unnecessary because the major search engines, "the big four" of Google, Yahoo!, Microsoft Live and Ask.com, already had the ability to discover new webpages automatically by crawling links from other sites. Professional search engine optimizers, such as Jill Whalen, have stated that search engine submission is unnecessary. In fact, automated search engine submission may violate the search engines' terms of service, creating the potential for a site using such a service to be banned.[2]

References

  1. ^ Sitemaps-0.84
  2. ^ Are Search Engine Submission Services Worth It?. webpronews.com (July 27, 2004). Retrieved on 2007-06-09.
source: wikipedia.org

One way link

From Wikipedia, the free encyclopedia


One way link is a term used among webmasters for link building methods. It is a hyperlink that points to a website without any reciprocal link; thus the link goes "one way" in direction.

Many industry consultants suspect that this type of link is considered more natural in the eyes of search engines. Similar to one way links, there is a three way link building technique in search engine optimization (SEO).

There are many theories on text link building and on one way links versus reciprocal links. Google is the company that made this concept popular with its PageRank technology. The term is mostly used in the business fields of search engine optimization, Internet marketing and online advertising.


source: wikipedia.org

TrustRank

From Wikipedia, the free encyclopedia


TrustRank is a link analysis technique for semi-automatically separating useful webpages from spam (Gyöngyi et al. 2004).

Many Web spam pages are created only with the intention of misleading search engines. These pages, chiefly created for commercial reasons, use various techniques to achieve higher-than-deserved rankings on the search engines' result pages. While human experts can easily identify spam, it is too expensive to manually evaluate a large number of pages.

One popular method for improving rankings is to artificially increase the perceived importance of a document through complex linking schemes. Google's PageRank and similar methods for determining the relative importance of Web documents have been subjected to manipulation.

The TrustRank method calls for selecting a small set of seed pages to be evaluated by an expert. Once the reputable seed pages are manually identified, a crawl extending outward from the seed set seeks out similarly reliable and trustworthy pages. TrustRank's reliability diminishes as documents become further removed from the seed set.
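
The flavor of that outward crawl can be captured in a short sketch. The Python below spreads trust from a seed page along outgoing links, shrinking it at every hop; the graph, decay constant and hop count are made up for illustration, and this is not the exact algorithm of the Gyöngyi et al. paper.

# Toy link graph: the seed links to two pages, one of which links onward.
links = {"seed": ["good1", "good2"], "good1": ["good2", "far"], "good2": [], "far": []}

def trust_scores(links, seeds, decay=0.85, hops=3):
    trust = {page: (1.0 if page in seeds else 0.0) for page in links}
    frontier = dict(trust)
    for _ in range(hops):
        spread = {}
        for src, value in frontier.items():
            outs = links.get(src, [])
            for dst in outs:
                # Each hop away from the seed set carries less trust.
                spread[dst] = spread.get(dst, 0.0) + decay * value / len(outs)
        for page, value in spread.items():
            trust[page] = trust.get(page, 0.0) + value
        frontier = spread
    return trust

print(trust_scores(links, seeds={"seed"}))
# 'far', two hops from the seed, ends up with much less trust than
# the pages the seed links to directly.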

The researchers who proposed the TrustRank methodology have continued to refine their work by evaluating related topics, such as measuring spam mass.

Bibliography

  • Zoltán Gyöngyi, Hector Garcia-Molina, Jan Pedersen, "Combating Web Spam with TrustRank", Proceedings of the International Conference on Very Large Data Bases 30:576, 2004.


source: wikipedia.org

Page hijacking

From Wikipedia, the free encyclopedia


Page hijacking is a form of spamming the index of a search engine (spamdexing). It is achieved by creating a rogue copy of a popular website which shows content similar to the original to a web crawler, but redirects web surfers to unrelated or malicious websites. Spammers can use this technique to achieve high rankings in result pages for certain keywords.

Page hijacking is a form of cloaking, made possible because some web crawlers detect duplicates while indexing web pages: if two pages have the same content, only one of the URLs will be kept. A spammer will try to ensure that the rogue website is the one shown on the result pages.
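
A rough Python sketch of that duplicate filter shows why the scheme works. The URLs and page text echo the example further below, and the "last duplicate wins" rule stands in for whatever tie-breaking a real engine uses, which is exactly what a hijacker tries to game.

import hashlib

def fingerprint(page_text: str) -> str:
    """Hash a page's content so duplicates collapse to one key."""
    return hashlib.sha1(page_text.encode("utf-8")).hexdigest()

crawled = [
    ("http://www.example.com/", "Offering clothes in sizes you cannot find elsewhere."),
    ("http://www.example.net/", "Offering clothes in sizes you cannot find elsewhere."),
]

index = {}
for url, text in crawled:
    # Only one URL survives per content fingerprint.
    index[fingerprint(text)] = url

print(list(index.values()))  # ['http://www.example.net/'] - the rogue copy won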

Case Study: Google Jacking

One form of this activity involves 302 server-side redirects on Google. Hundreds of 302 Google Jacking pages were said to have been reported to Google.[citation needed] While Google has not officially acknowledged that page hijacking is a real problem, several people have found themselves to be victims of this phenomenon when checking the search engine rankings for their websites. Because it is difficult to quantify how many pages have been hijacked, GoogleJacking.org was founded in May 2006 to help make Google aware of the significance of Google Jacking. Visitors can add themselves to a map, providing a visual indicator of how widespread the problem is.

Example of Page Hijacking

Suppose that a website offers difficult-to-find sizes of clothes. A common search entered to reach this website is really big t-shirts, which - when entered on popular search engines - makes the website show up as the first result:

SpecialClothes
Offering clothes in sizes you cannot find elsewhere.
www.example.com/

A spammer working for a competing company then creates a website that looks extremely similar to the one listed and includes a special redirection script that redirects web surfers to the competitor's site but shows the copied page to web crawlers. After several weeks, a web search for really big t-shirts shows the following result:

SpecialClothes
Offering clothes in sizes you cannot find elsewhere... at better prices!
www.example.net/
—Show Similar Pages—

When web surfers click on this result, they are redirected to the competing website. The original result was hidden in the "Show Similar Pages" section.


source: wikipedia.org

Meta element

From Wikipedia, the free encyclopedia


Meta elements are HTML elements used to provide structured metadata about a web page. Such elements must be placed as tags in the head section of an HTML document.

Meta element use in search engine optimization

Meta elements provide information about a given webpage, most often to help search engines categorize it correctly. They are inserted into the HTML document, but are often not directly visible to a user visiting the site.
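
To see how software on the other end reads these elements, here is a small Python sketch that uses the standard library's HTML parser to collect name/content pairs from the head of a made-up document:

from html.parser import HTMLParser

class MetaReader(HTMLParser):
    """Collect name/content pairs from meta elements."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            if "name" in attr and "content" in attr:
                self.meta[attr["name"].lower()] = attr["content"]

page = """<html><head>
<meta name="keywords" content="clothes, big sizes, t-shirts">
<meta name="description" content="Offering clothes in sizes you cannot find elsewhere.">
</head><body>...</body></html>"""

reader = MetaReader()
reader.feed(page)
print(reader.meta["description"])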

They have been the focus of a field of marketing research known as search engine optimization (SEO), in which different methods are explored to give a site a higher ranking on search engines. In the mid to late 1990s, search engines were reliant on meta data to correctly classify a web page, and webmasters quickly learned the commercial significance of having the right meta elements, as they frequently led to a high ranking in the search engines, and thus high traffic to the web site.

As search engine traffic achieved greater significance in online marketing plans, consultants were brought in who were well versed in how search engines perceive a web site. These consultants used a variety of techniques (legitimate and otherwise) to improve ranking for their clients.

Meta elements have significantly less effect on search engine results pages today than they did in the 1990s, and their utility has decreased dramatically as search engine robots have become more sophisticated. This is due in part to the nearly infinite recurrence of meta elements (keyword stuffing) and to attempts by unscrupulous website placement consultants to manipulate (spamdexing) or otherwise circumvent search engine ranking algorithms. While search engine optimization can improve search engine ranking, consumers of such services should be careful to employ only reputable providers.

Major search engine robots are more likely to quantify such factors as the volume of incoming links from related websites, quantity and quality of content, technical precision of source code, spelling, functional v. broken hyperlinks, volume and consistency of searches and/or viewer traffic, time within website, page views, revisits, click-throughs, technical user-features, uniqueness, redundancy, relevance, advertising revenue yield, freshness, geography, language and other intrinsic characteristics.

The keywords attribute

The keywords attribute was popularized by search engines such as Infoseek and AltaVista in 1995, and its popularity quickly grew until it became one of the most commonly used meta elements.[1] By late 1997, however, search engine providers realized that information stored in meta elements, especially the keywords attribute, was often unreliable and misleading, and at worst, used to draw users into spam sites. (Unscrupulous webmasters could easily place false keywords into their meta elements in order to draw people to their site.)

Search engines began dropping support for metadata provided by the meta element in 1998, and by the early 2000s, most search engines had veered completely away from reliance on meta elements. In July 2002, AltaVista, one of the last major search engines to still offer support, finally stopped considering them.[2] The Director of Research at Google, Monika Henzinger, was quoted (in 2002) as saying, "Currently we don't trust metadata".[3]

No consensus exists on whether the keywords attribute has any impact on ranking at any of the major search engines today. It is speculated that it does if the keywords used in the meta element can also be found in the page copy itself. In April 2007, 37 leaders in search engine optimization concluded that the relevance of having your keywords in the meta keywords attribute is little to none.[4]

The description attribute

Unlike the keywords attribute, the description attribute is supported by most major search engines, like Yahoo and Live Search, while Google will fall back on this tag when information about the page itself is requested (e.g. using the related: query). The description attribute provides a concise explanation of a web page's content. This allows webpage authors to give a more meaningful description for listings than might be displayed if the search engine were to automatically create its own description from the page content. The description is often, but not always, displayed on search engine results pages, so it can affect click-through rates. Industry commentators have suggested that major search engines also consider keywords located in the description attribute when ranking pages.[5] The W3C doesn't specify the size of this description meta tag, but almost all search engines recommend keeping it shorter than 200 characters of plain text[citation needed].

The robots attribute

The robots attribute is used to control whether search engine spiders are allowed to index a page and whether they should follow links from it. The noindex value prevents a page from being indexed, and nofollow prevents links from being crawled. Other values are available that can influence how a search engine indexes pages and how those pages appear in the search results. The robots attribute is supported by several major search engines [6]. There are several additional values for the robots meta attribute that are relevant to search engines, such as NOARCHIVE and NOSNIPPET, which are meant to tell search engines what not to do with a web page's content [7]. Meta tags are not the best option to prevent search engines from indexing content on your website; a more reliable and efficient method is the Robots.txt file (Robots Exclusion Standard).
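
The robots.txt route is also easy to exercise from code: Python's standard library ships a parser for the Robots Exclusion Standard. The rule set below is a made-up example.

from urllib import robotparser

rules = "User-agent: *\nDisallow: /private/"

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# May any crawler fetch these pages under the rules above?
print(rp.can_fetch("*", "http://www.example.com/private/page.html"))  # False
print(rp.can_fetch("*", "http://www.example.com/index.html"))         # True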

Additional attributes for search engines

NOODP

The search engines Google, Yahoo! and MSN in some cases use the title and abstract of the Open Directory Project (ODP) listing of a web site at Dmoz.org for the title and/or description (also called snippet or abstract) in the search engine results pages (SERPs). To give webmasters the option to specify that the ODP content should not be used for listings of their website, Microsoft introduced the new "NOODP" value for the "robots" element of the meta tags in May 2006 [8]. Google followed in July 2006 [9] and Yahoo! in October 2006 [10].

The syntax is the same for all search engines that support the tag:

<meta name="robots" content="noodp">

Webmasters can also disallow the use of their ODP listing on a per search engine basis:

Google: <meta name="googlebot" content="noodp">

Yahoo!: <meta name="slurp" content="noodp">

MSN and Live Search: <meta name="msnbot" content="noodp">

NOYDIR

Besides the ODP listing, Yahoo! also used content from its own Yahoo! directory, but in February 2007 it introduced a meta tag that provides webmasters with the option to opt out of this.[11]

Yahoo! Directory titles and abstracts will not be used in search results for a page if the NOYDIR tag is added to it.

Robots-NoContent

In May 2007 Yahoo! also introduced the "class=robots-nocontent" tag.[12] This is not a meta tag but a class value that can be applied to tags throughout a web page where needed. Content marked with this tag will be ignored by the Yahoo! crawler and not included in the search engine's index.

Examples of the use of the robots-nocontent tag:

<div class="robots-nocontent">excluded content</div>

<span class="robots-nocontent">excluded content</span>

<p class="robots-nocontent">excluded content</p>

Academic studies

Google does not use HTML keyword or meta tag elements for indexing. The Director of Research at Google, Monika Henzinger, was quoted (in 2002) as saying, "Currently we don't trust metadata" [13]. Other search engines developed techniques to penalize web sites considered to be "cheating the system". For example, a web site repeating the same meta keyword several times may have its ranking decreased by a search engine trying to eliminate this practice, though that is unlikely. It is more likely that a search engine will ignore the meta keywords element completely, and most do, regardless of how many words are used in the element.

Meta tags use in social bookmarking

In contrast to completely automated systems like search engines, author-supplied metadata can be useful in situations where the page content has been vetted as trustworthy by a reader.

Redirects

Meta refresh elements can be used to instruct a web browser to automatically refresh a web page after a given time interval. It is also possible to specify an alternative URL and use this technique to redirect the user to a different location. Using a meta refresh in this way, solely by itself, rarely achieves the desired result: in Internet Explorer's security settings, under the miscellaneous category, meta refresh can be turned off by the user, thereby disabling its redirect ability entirely.
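
The content attribute of such an element packs the delay in seconds and the optional target URL into one string, e.g. "5" or "0; url=http://example.com/". A small Python sketch of how a client might pick that string apart:

import re

def parse_refresh(content: str):
    """Split a meta refresh content value into (delay, target URL or None)."""
    match = re.match(r"\s*(\d+)\s*(?:;\s*url\s*=\s*(\S+))?", content, re.IGNORECASE)
    if not match:
        return None
    delay, url = match.groups()
    return int(delay), url

print(parse_refresh("5"))                           # (5, None) - plain refresh
print(parse_refresh("0; url=http://example.com/"))  # (0, 'http://example.com/') - redirect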

Many web design tutorials also point out that client-side redirecting tends to interfere with the normal functioning of a web browser's "back" button: after being redirected, clicking the back button takes the user back to the redirect page, which redirects them again. Some modern browsers, including Safari, Mozilla Firefox and Opera, seem to overcome this problem however.

HTTP message headers

Meta elements of the form <meta http-equiv="..." content="..."> can be used as alternatives to HTTP message headers. For example, <meta http-equiv="Expires" content="Wed, 21 Jun 2006 14:25:27 GMT"> would tell the browser that the page "expires" on June 21, 2006 at 14:25:27 GMT and that it may safely cache the page until then.

Alternative to meta elements

An alternative to meta elements for enhanced subject access within a web site is the use of a back-of-book-style index for the web site. See examples at the web sites of the Australian Society of Indexers and the American Society of Indexers.

In 1994, ALIWEB, which was likely the first web search engine, also used an index file to provide the type of information commonly found in meta keywords attributes.


References

  1. ^ Statistic (June 4, 1997), META attributes by count, Vancouver Webpages, retrieved June 3, 2007
  2. ^ Danny Sullivan (October 1, 2002), Death Of A Meta Tag, SearchEngineWatch.com, retrieved June 3, 2007
  3. ^ Journal of Internet Cataloging, Volume 5(1), 2002
  4. ^ Rand Fishkin (April 2, 2007), Search Engine Ranking Factors V2, SEOmoz.org, retrieved June 3, 2007
  5. ^ Danny Sullivan, How To Use HTML Meta Tags, Search Engine Watch, December 5, 2002
  6. ^ Vanessa Fox (March 5, 2007), Using the robots meta tag, Official Google Webmaster Central Blog, retrieved June 3, 2007
  7. ^ Danny Sullivan (March 5, 2007),Meta Robots Tag 101: Blocking Spiders, Cached Pages & More, SearchEngineLand.com, retrieved June 3, 2007
  8. ^ Betsy Aoki (May 22, 2006), Opting Out of Open Directory Listings for Webmasters, Live Search Blog, retrieved June 3, 2007
  9. ^ Vanessa Fox (July 13, 2006), More control over page snippets, Inside Google Sitemaps, retrieved June 3, 2007
  10. ^ Yahoo! Search (October 24, 2006), Yahoo! Search Weather Update and Support for 'NOODP', Yahoo! Search Blog, retrieved June 3, 2007
  11. ^ Yahoo! Search (February 28, 2007), Yahoo! Search Support for 'NOYDIR' Meta Tags and Weather Update, Yahoo! Search Blog, retrieved June 3, 2007
  12. ^ Yahoo! Search (May 2, 2007), Introducing Robots-Nocontent for Page Sections, Yahoo! Search Blog, retrieved June 3, 2007
  13. ^ Journal of Internet Cataloging, Volume 5(1), 2002


source: wikipedia.org