|Type of format|Markup language|
|Extended from|XML, XHTML|
|Standard(s)|0.4|
VHML, the Virtual Human Markup Language, is an XML-based markup language (a set of tags and rules) that accommodates the various aspects of Human-Computer Interaction with regard to Facial Animation, Body Animation, Dialogue Manager interaction, Text to Emotional Speech production, Emotional Representation, and Hyper and Multi Media information.
Human communication is inherently multimodal. The information conveyed through body language, facial expression, gaze, intonation, speaking style, etc. is an important component of everyday communication. An issue within computer science is how to provide multimodal agent-based systems, that is, systems that interact with users through several channels. Such systems can include Virtual Humans. A Virtual Human might, for example, be a complete creature, i.e. one with a whole body including head, arms, and legs, but it might also be a creature with only a head: a Talking Head.
The aim of the Virtual Human Markup Language (VHML) is to control Virtual Humans regarding speech, facial animation, facial gestures and body animation.
- EML Emotion Markup Language
- SML Speech Markup Language
- FAML Facial Animation Markup Language
- BAML Body Animation Markup Language
- XHTML eXtensible HyperText Markup Language
- DMML Dialogue Manager Markup Language
VHML is divided into three levels, where only five elements constitute the top level. At the middle level are the two sub-languages that control emotions and gestures, EML and GML (Gesture Markup Language). Their elements are inherited by three of the low-level languages: SML, FAML and BAML. Apart from these three, there are two additional sub-languages at the low level, DMML and XHTML. The structure of VHML is shown in Figure 1. The dotted lines indicate that a language on a lower level inherits the elements of the language on the upper level.
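Based on the description above, the layering can be sketched roughly as follows (a reading of the text, not a reproduction of Figure 1):

```
VHML (top level)
|-- EML  (middle level: emotions) --+
|-- GML  (middle level: gestures) --+--> elements inherited by:
|                                        SML, FAML, BAML (low level)
|-- DMML  (low level, no inheritance from EML/GML)
|-- XHTML (low level, no inheritance from EML/GML)
```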
The intent of this language is to facilitate the natural and realistic interaction of a Talking Head or Virtual Human with a user via a web page or a standalone application. In response to a user enquiry, the Virtual Human will have to react in a realistic and humane way using appropriate words, voice, facial and body gestures. For example, a Virtual Human that has to give some bad news to the user may speak in a sad way, with a sorry face and a bowed body stance. In a similar way, a different message may be delivered with a happy voice, a smiley face and a lively body.
Example: <?xml version="1.0"?> <!DOCTYPE vhml SYSTEM "http://www.vhml.org/vhml.dtd">
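A fuller sketch of a VHML document follows. The element names used here (person, p, sad, happy) are assumptions drawn from published VHML examples; the exact tag set is defined by the DTD at vhml.org and may differ:

```xml
<?xml version="1.0"?>
<!DOCTYPE vhml SYSTEM "http://www.vhml.org/vhml.dtd">
<vhml>
  <!-- hypothetical: a person element setting the overall disposition -->
  <person disposition="sad">
    <p>
      <!-- spoken with a sad voice, sorry face and bowed stance -->
      <sad>I am afraid the news is not good.</sad>
    </p>
    <p>
      <!-- a happier delivery for a different message -->
      <happy>But there is still some hope.</happy>
    </p>
  </person>
</vhml>
```

The same text would thus be rendered with different voice, facial and body behaviour depending on the surrounding emotion elements.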
Elements of VHML
Status of VHML
- Final Report of the W3C Emotion Incubator Group. 10 July 2007.
- Publications on VHML. http://www.vhml.org
- Specification of VHML. http://www.vhml.org
- 2001, Marriott et al., "VHML - Directing a Talking Head", in Jiming Liu, ed., Active Media Technology: 6th International Computer Science Conference, page 90
- VHML is currently used in several Talking Head applications.
- 2004, Matthias Klusch et al., "Interactive Information Agents and Interfaces", in Robert W. Proctor and Kim-Phuong L. Vu (eds.), Handbook Of Human Factors In Web Design, Routledge, ISBN 0-8058-4612-3, page 229,
- As an alternative to character-specific adjuncts to programming languages, XML-compliant character scripting languages have been defined, such as VHML (www.vhml.org) or MPML (www.miv.t.u-tokyo.ac.jp/MPML/).
- 2004, Gebhard et al., "Coloring Multi-character Conversations through the Expression of Emotions", in Elisabeth André, ed., Affective Dialogue Systems: Tutorial and Research Workshop, ADS 2004, page 138,
- … this approach shares similarities with proposals for character scripting languages comprising emotion tags, such as AML, APML, MPML, or VHML.