Deep linking


Deep linking, on the World Wide Web, is the practice of creating a hyperlink that points to a specific page or image on another website, rather than to that website's main or home page. Such links are called deep links.



The link http://en.wikipedia.org/wiki/Deep_linking is an example of a deep link. The URL contains all the information needed to point to a particular item, in this case the English Wikipedia article on deep linking, instead of the Wikipedia home page at http://www.wikipedia.org/.

Deep linking and HTTP

The technology behind the World Wide Web, the Hypertext Transfer Protocol (HTTP), does not actually make any distinction between "deep" links and any other links—all links are functionally equal. This is intentional; one of the designed purposes of the Web is to allow authors to link to any published document on another site. The possibility of so-called "deep" linking is therefore built into the Web technology of HTTP and URLs by default—while a site can attempt to restrict deep links, to do so requires extra effort. According to the World Wide Web Consortium Technical Architecture Group, "any attempt to forbid the practice of deep linking is based on a misunderstanding of the technology, and threatens to undermine the functioning of the Web as a whole". One way to prevent deep linking is to configure the web server to check the referring URL using a rewrite engine.[1]
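
To illustrate the referer-checking approach mentioned above, the following sketch applies the same idea at the application level rather than through a rewrite engine: a small Node.js handler, written here in TypeScript, refuses to serve a protected path unless the Referer header points back to the site's own pages. The host name example.com and the /images/ prefix are invented for illustration, and because the Referer header is optional and controlled by the client, a check of this kind is easy to circumvent.

    import * as http from "http";
    import { URL } from "url";

    // Hypothetical values, chosen only for illustration.
    const OWN_HOST = "example.com";
    const PROTECTED_PREFIX = "/images/";

    // True when the Referer header names our own host.
    function refererIsOwnSite(referer: string | undefined): boolean {
      if (referer === undefined) {
        return false;
      }
      try {
        return new URL(referer).hostname === OWN_HOST;
      } catch {
        return false; // malformed Referer header
      }
    }

    const server = http.createServer((req, res) => {
      const path = req.url ?? "/";

      // Refuse deep links into the protected area that come from other sites.
      if (path.startsWith(PROTECTED_PREFIX) && !refererIsOwnSite(req.headers.referer)) {
        res.writeHead(403, { "Content-Type": "text/plain" });
        res.end("Deep linking to this resource is not permitted.\n");
        return;
      }

      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end(`You requested ${path}\n`);
    });

    server.listen(8080);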


Some commercial websites object to other sites making deep links into their content, either because the links bypass advertising on their main pages, because they pass the content off as that of the linker, or because, like The Wall Street Journal, they charge users for permanently valid links.

Deep linking has sometimes led to legal action, as in the 1997 case of Ticketmaster versus Microsoft, in which Microsoft deep linked to Ticketmaster's site from its Sidewalk service. The case was settled when Microsoft and Ticketmaster arranged a licensing agreement.

Ticketmaster later filed a similar case against Tickets.com, and the judge in this case ruled that such linking was legal as long as it was clear to whom the linked pages belonged.[2] The court also concluded that URLs themselves were not copyrightable, writing: "A URL is simply an address, open to the public, like the street address of a building, which, if known, can enable the user to reach the building. There is nothing sufficiently original to make the URL a copyrightable item, especially the way it is used. There appear to be no cases holding the URLs to be subject to copyright. On principle, they should not be."

Deep linking and web technologies

Websites which are built on web technologies such as Adobe Flash and AJAX often do not support deep linking. This can result in usability problems for people visiting such websites. For example, visitors to these websites may be unable to save bookmarks to individual pages or states of the site, web browser forward and back buttons may not work as expected, and use of the browser's refresh button may return the user to the initial page.

However, this is not a fundamental limitation of these technologies. Well-known techniques, and libraries such as SWFAddress and History Keeper, allow website creators using Flash or AJAX to provide deep linking to pages within their sites, as sketched below.[3][4][5]
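
As a rough sketch of the general idea behind such techniques (not of the SWFAddress or History Keeper APIs themselves), an AJAX application can keep its current state in the URL fragment, so that each view has a bookmarkable address and the browser's back and forward buttons continue to work. The view names and the rendering below are hypothetical placeholders for whatever a real application would display.

    // Render whatever view the fragment refers to, e.g. "#/products/42".
    // The element id "app" and the rendering are illustrative only.
    function renderFromHash(): void {
      const state = window.location.hash.replace(/^#\/?/, "") || "home";
      const app = document.getElementById("app");
      if (app !== null) {
        app.textContent = `Current view: ${state}`;
      }
    }

    // Navigation inside the application only updates the fragment; the browser
    // records a history entry, so bookmarks and back/forward keep working.
    function navigate(state: string): void {
      window.location.hash = `#/${state}`;
    }

    // Re-render whenever the fragment changes (back/forward, pasted deep link).
    window.addEventListener("hashchange", renderFromHash);

    // Render the deep-linked state on the initial page load.
    window.addEventListener("load", renderFromHash);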

Court rulings

In the beginning of 2006, in a case between the search engine Bixee.com and the job site Naukri.com, the Delhi High Court in India prohibited Bixee.com from deep linking to Naukri.com.[6]

In December 2006, a Texas court ruled that linking by a motocross website to videos on a Texas-based motocross video production website did not constitute fair use. The court subsequently issued an injunction.[7] This case, SFX Motor Sports Inc., v. Davis, was not published in official reports, but is available at 2006 WL 3616983.

In a February 2006 ruling, the Danish Maritime and Commercial Court (Copenhagen) found that systematic crawling, indexing and deep linking by the portal site ofir.dk to the real estate site Home.dk did not conflict with Danish law or the database directive of the European Union. The court further stated that search engines are desirable for the functioning of today's Internet, and that anyone who publishes information on the Internet must assume, and accept, that search engines deep link to individual pages of their website.[8]

Many critics[who?] charge that sites objecting to deep links simply want to establish policies that will "license" such links to the highest bidder. They[who?] argue that links are a fundamental part of "user-oriented" web browsing. Probably the earliest legal case arising out of deep linking was the 1996 Scottish case of Shetland Times vs Shetland News, in which the Times accused the News of appropriating stories from the Times' website as its own.[9]

Opt out

Website owners wishing to prevent search engines from deep linking can use the existing Robots Exclusion Standard (the /robots.txt file) to indicate whether or not they want their content to be indexed. Some feel that content owners who do not provide a /robots.txt file imply that they do not object to deep linking, whether by search engines or by others who might link to their content. Others believe that content owners may be unaware of the Robots Exclusion Standard, or may choose not to use robots.txt for other reasons. Deep linking is also practiced outside the search engine context, so some participants in this debate question the relevance of the Robots Exclusion Standard to controversies about deep linking. Because the Robots Exclusion Standard does not programmatically enforce its directives, it does not prevent search engines or others who ignore the convention from deep linking, as sketched below.
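
As an illustration of how the Robots Exclusion Standard operates in practice, and of why it depends on cooperation, the sketch below shows roughly how a polite crawler might consult a site's /robots.txt before deep linking to or indexing a page. The parsing is deliberately simplified (only "User-agent: *" groups and "Disallow" path prefixes are handled, which is an assumption rather than a complete implementation of the standard), the example URL is hypothetical, and a crawler that ignores the convention can simply skip this check.

    // Simplified robots.txt check: honours only "User-agent: *" groups and
    // treats each "Disallow:" value as a path prefix. A real crawler would
    // implement the full standard; nothing forces any crawler to do even this.
    async function isDeepLinkAllowed(pageUrl: string): Promise<boolean> {
      const page = new URL(pageUrl);
      const response = await fetch(`${page.origin}/robots.txt`);
      if (!response.ok) {
        // No readable robots.txt: conventionally read as "no objection".
        return true;
      }

      const disallowed: string[] = [];
      let appliesToUs = false;
      for (const rawLine of (await response.text()).split("\n")) {
        const line = rawLine.split("#")[0].trim(); // strip comments
        const [field, ...rest] = line.split(":");
        const value = rest.join(":").trim();
        if (field.trim().toLowerCase() === "user-agent") {
          appliesToUs = value === "*";
        } else if (appliesToUs && field.trim().toLowerCase() === "disallow" && value !== "") {
          disallowed.push(value);
        }
      }

      return !disallowed.some((prefix) => page.pathname.startsWith(prefix));
    }

    // Example: check whether a hypothetical deep link may be fetched.
    isDeepLinkAllowed("https://example.com/listings/12345").then((allowed) =>
      console.log(allowed ? "Allowed by robots.txt" : "Disallowed by robots.txt")
    );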

