Developer(s) University of Leipzig, Freie Universität Berlin, OpenLink Software
Initial release 23 January 2007
Stable release DBpedia 3.4 / 11 November 2009[1]
Written in PHP, Java, VSP
Operating system Virtuoso Universal Server
Type Semantic Web, Linked Data
License GNU General Public License
Website dbpedia.org

DBpedia is a project aiming to extract structured information from the content created as part of the Wikipedia project. This structured information is then made available on the World Wide Web.[2] DBpedia allows users to query relationships and properties associated with Wikipedia resources, including links to other related datasets.[3] Tim Berners-Lee has described DBpedia as one of the more famous parts of the Linked Data project.[4]



The project was started by people at the Free University of Berlin and the University of Leipzig, in collaboration with OpenLink Software,[5] and the first publicly available dataset was published in 2007. It is made available under free licences, allowing others to reuse the dataset.

Wikipedia articles consist mostly of free text, but also include structured information embedded in the articles, such as "infobox" tables, categorisation information, images, geo-coordinates and links to external Web pages. This structured information is extracted and put in a uniform dataset which can be queried.
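As an illustration of this extraction step, the following sketch (hypothetical code, not DBpedia's actual extraction framework) pulls key-value pairs out of a flat infobox template in wikitext:

```python
import re

def parse_infobox(wikitext):
    """Extract key-value pairs from a flat {{Infobox ...}} template.

    A toy illustration only; DBpedia's real extractors handle nested
    templates, links and references, which this sketch does not.
    """
    match = re.search(r"\{\{Infobox(.*?)\}\}", wikitext, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        line = line.strip()
        if line.startswith("|") and "=" in line:
            key, _, value = line[1:].partition("=")
            fields[key.strip()] = value.strip()
    return fields

sample = """{{Infobox software
| name = DBpedia
| released = 23 January 2007
| license = GNU General Public License
}}"""

print(parse_infobox(sample))
```

The point is only that a template's `| key = value` lines map naturally onto structured records that can then be loaded into a uniform, queryable dataset.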


As of November 2009, the DBpedia dataset describes more than 2.9 million things, including at least 282,000 persons, 339,000 places (including 241,000 populated places), 88,000 music albums, 44,000 films, 15,000 video games, 119,000 organizations (including 20,000 companies and 29,000 educational institutions), 130,000 species and 4,400 diseases. The DBpedia knowledge base features labels and abstracts for these things in 91 different languages; 807,000 links to images; 3,840,000 links to external web pages; 4,878,100 external links into other RDF datasets; 415,000 Wikipedia categories; and 75,000 YAGO categories. From this dataset, information spread across multiple pages can be combined; for example, a book's authorship can be assembled from the pages about the work and about its author.

The DBpedia project uses the Resource Description Framework (RDF) to represent the extracted information. As of November 2009, the DBpedia dataset consists of around 479 million pieces of information (RDF triples) out of which 190 million were extracted from the English edition of Wikipedia and 289 million were extracted from other language editions.[6]
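In RDF, each piece of information is a subject-predicate-object triple. A minimal sketch of how such triples can be stored and looked up (the URIs follow DBpedia's naming scheme but are illustrative examples, not actual dataset contents):

```python
# Each RDF statement is a (subject, predicate, object) triple. The URIs
# below follow DBpedia's naming scheme but are illustrative only.
triples = [
    ("http://dbpedia.org/resource/Tokyo_Mew_Mew",
     "http://dbpedia.org/property/illustrator",
     "http://dbpedia.org/resource/Mia_Ikumi"),
    ("http://dbpedia.org/resource/Tokyo_Mew_Mew",
     "http://www.w3.org/2000/01/rdf-schema#label",
     "Tokyo Mew Mew"),
]

def objects_of(subject, predicate, store):
    """Return every object linked to `subject` via `predicate`."""
    return [o for s, p, o in store if s == subject and p == predicate]

print(objects_of("http://dbpedia.org/resource/Tokyo_Mew_Mew",
                 "http://dbpedia.org/property/illustrator",
                 triples))
```

A triple store is conceptually just a very large set of such statements, which is what makes a uniform query language like SPARQL possible over data extracted from millions of heterogeneous pages.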


DBpedia extracts factual information from Wikipedia pages, allowing users to find answers to questions where the information is spread across many different Wikipedia articles. Data is accessed using an SQL-like query language for RDF called SPARQL. For example, imagine you were interested in the Japanese shōjo manga series Tokyo Mew Mew, and wanted to find the genres of other works written by its illustrator. DBpedia combines information from Wikipedia's entries on Tokyo Mew Mew, Mia Ikumi and on works such as Super Doll Licca-chan and Koi Cupid. Since DBpedia normalises information into a single database, the following query can be asked without needing to know exactly which entry carries each fragment of information, and will list related genres:

 PREFIX dbpprop: <http://dbpedia.org/property/>
 PREFIX db: <http://dbpedia.org/resource/>
 SELECT ?who ?work ?genre WHERE {
  db:Tokyo_Mew_Mew dbpprop:illustrator ?who .
  ?work dbpprop:author ?who .
  OPTIONAL { ?work dbpprop:genre ?genre } .
 }
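To see what such a query computes, here is a hypothetical hand-evaluation of the same pattern over a toy in-memory set of triples (the data below is illustrative, not actual DBpedia content):

```python
# Toy triples mirroring the query's pattern; values are illustrative.
triples = {
    ("Tokyo_Mew_Mew", "illustrator", "Mia_Ikumi"),
    ("Super_Doll_Licca-chan", "author", "Mia_Ikumi"),
    ("Koi_Cupid", "author", "Mia_Ikumi"),
    ("Koi_Cupid", "genre", "Romance"),
}

def run_query(store):
    """Evaluate the pattern by hand: find the illustrator of Tokyo Mew Mew,
    join on works they authored, and attach the genre where one exists
    (mirroring the OPTIONAL clause)."""
    results = []
    whos = {o for s, p, o in store
            if s == "Tokyo_Mew_Mew" and p == "illustrator"}
    for who in whos:
        works = {s for s, p, o in store if p == "author" and o == who}
        for work in works:
            genres = [o for s, p, o in store
                      if s == work and p == "genre"] or [None]  # OPTIONAL
            for genre in genres:
                results.append((who, work, genre))
    return sorted(results, key=lambda r: (r[1], str(r[2])))

print(run_query(triples))
```

The join on `?who` is what stitches together facts that live on different Wikipedia pages, which is exactly what the SPARQL engine does against the full dataset.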


The dataset is interlinked on RDF level with various other Open Data datasets on the Web. This enables applications to enrich DBpedia data with data from these datasets. As of November 2009, there are more than 3.7 million interlinks between DBpedia and external datasets including: Freebase, OpenCyc, UMBEL, GeoNames, Musicbrainz, CIA World Fact Book, DBLP, Project Gutenberg, DBtune Jamendo, Eurostat, Uniprot, Bio2RDF, and US Census data.[7][8] The Thomson Reuters initiative OpenCalais, the Linked Open Data project of the New York Times, and the Zemanta API also include links to DBpedia.[9][10][11] The BBC uses DBpedia to help organize its content.[12][13] Faviki uses DBpedia for semantic tagging.[14]
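These interlinks are themselves RDF triples, typically using the owl:sameAs predicate to assert that two URIs in different datasets denote the same real-world thing. A minimal sketch (the URIs follow DBpedia and GeoNames conventions but are given here for illustration):

```python
# owl:sameAs asserts that two URIs denote the same real-world entity;
# the URIs below are illustrative examples.
SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

links = [
    ("http://dbpedia.org/resource/Berlin", SAME_AS,
     "http://sws.geonames.org/2950159/"),
]

def external_ids(resource, store):
    """Collect URIs in other datasets that describe the same resource."""
    return [o for s, p, o in store if s == resource and p == SAME_AS]

print(external_ids("http://dbpedia.org/resource/Berlin", links))
```

By following such links, an application can start from a DBpedia resource and pull in, say, geographic data from GeoNames or bibliographic data from DBLP.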

Amazon provides DBpedia Public Data Set that can be integrated into Amazon Web Services applications.[15]

References

  1. DBpedia 3.4 released
  2. Christian Bizer, Jens Lehmann, Georgi Kobilarov, Sören Auer, Christian Becker, Richard Cyganiak, Sebastian Hellmann. "DBpedia - A crystallization point for the Web of Data". Web Semantics: Science, Services and Agents on the World Wide Web, Volume 7, Issue 3, September 2009, pages 154-165. ISSN 1570-8268.
  3. "Komplett verlinkt - Linked Data" (in German). 3sat. 2009-06-19. http://www.3sat.de/dynamic/sitegen/bin/sitegen.php?tab=2&source=/neues/sendungen/magazin/135119/index.html. Retrieved 2009-11-10. 
  4. Sir Tim Berners-Lee Talks with Talis about the Semantic Web. Transcript of an interview recorded on 7 February 2008.
  5. http://wiki.dbpedia.org/Team. Retrieved 2009-11-23.
  6. "DBpedia dataset". DBpedia. http://wiki.dbpedia.org/Datasets#h18-3. Retrieved 2008-09-26. 
  7. http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/LinkStatistics. Retrieved 2009-11-24.
  8. http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics. Retrieved 2009-11-24.
  9. "First 5,000 Tags Released to the Linked Data Cloud". open.blogs.nytimes.com. http://open.blogs.nytimes.com/2009/10/29/first-5000-tags-released-to-the-linked-data-cloud/. Retrieved 2009-11-10. 
  10. "Life in the Linked Data Cloud". www.opencalais.com. http://www.opencalais.com/node/9501. Retrieved 2009-11-10. "Wikipedia has a Linked Data twin called DBpedia. DBpedia has the same structured information as Wikipedia – but translated into a machine-readable format." 
  11. "Zemanta talks Linked Data with SDK and commercial API". blogs.zdnet.com. http://blogs.zdnet.com/semantic-web/?p=243. Retrieved 2009-11-10. "Zemanta fully supports the Linking Open Data initiative. It is the first API that returns disambiguated entities linked to dbPedia, Freebase, MusicBrainz, and Semantic Crunchbase." 
  12. "European Semantic Web Conference 2009 - Georgi Kobilarov, Tom Scott, Yves Raimond, Silver Oliver, Chris Sizemore, Michael Smethurst, Christian Bizer and Robert Lee. Media meets Semantic Web - How the BBC uses DBpedia and Linked Data to make Connections". www.eswc2009.org. http://www.eswc2009.org/program-menu/accepted-in-use-track-papers/134-georgi-kobilarov-tom-scott-yves-raimond-silver-oliver-chris-sizemore-michael-smethurst-christian-bizer-and-robert-lee-media-meets-semantic-web-how-the-bbc-uses-dbpedia-and-linked-data-to-make-connections. Retrieved 2009-11-10. 
  13. "BBC Learning - Open Lab - Reference". bbc.co.uk. http://backstage.bbc.co.uk/openlab/reference.php. Retrieved 2009-11-10. "Dbpedia is a database version of Wikipedia. It's used in a lot of projects for a wide range of different reasons. At the BBC we are using it for tagging content." 
  14. "Semantic Tagging with Faviki". www.readwriteweb.com. http://www.readwriteweb.com/archives/semantic_tagging_with_faviki.php. 
  15. "Amazon Web Services Developer Community : DBPedia". developer.amazonwebservices.com. http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2319&categoryID=249. Retrieved 2009-11-10. 
