I'M-Europe Home Page IMPACT Programme Home Page OII Standards and Specifications List

Multimedia/Hypermedia Interchange Standards

This section of the OII Standards and Specifications List provides information on the following standards used to interchange multimedia and hypermedia audiovisual information:

* Indicates description revised this month

Standards for the storage, presentation and interchange of audiovisual information are prepared by both private and public organizations. The following public bodies are active in this area:

A useful additional source on standards related to multimedia can be found at http://cuiwww.unige.ch/OSG/MultimediaInfo/mmsurvey/standards.html.

Index


HyperODA

Expanded name
Hypermedia Extensions to the Open Document Architecture

Area covered
Allows temporal and non-linear relationships of the type needed to present audiovisual information to be incorporated into documents coded using ODA

Sponsoring body and standard details

Characteristics/description
Part 9 of the ODA standard defines an Audio Content Architecture that will allow voice messages and other forms of audio content to be attached to ODA documents.

Part 12 of the standard will allow a part of an ODA document to be identified and referenced from another ODA document.

Part 14 of the standard will allow temporal relationships and non-linear structures to be used to control the sequence in which the component parts of an ODA document are displayed to users.

Part 15 of the standard will allow CCITT H.261, JPEG, MPEG-1 and MPEG-2 coded images to be incorporated into an ODA document as formatted processable basic layout objects. Video sequences can be cropped both spatially and temporally to show part of the image or part of a clip. The position of the video frame within the document can be controlled using the standard ODA attributes for object positioning. New attributes are provided to control the contrast, lightness, saturation and hue of images, and picture resolution, clipping, size and scaling. Markers can be defined to assign unique names to start points used within a video clip.

Usage (Market segment and penetration)
Not applicable until final standard is published.

Further details available from:
ITU, ISO or local national standards organisations

Index


HyTime

Expanded name
Hypermedia/Time-based Structuring Language

Area covered
Hyperdocuments that link and synchronise static and time-based information

Sponsoring body and standard details

Characteristics/description
HyTime is an SGML application that provides facilities for describing the relationships between different types of data. It provides standardized methods for describing hypertext links, time scheduling, event synchronisation and projection in multimedia and hypermedia documents.

In keeping with the character of the SGML standard, HyTime does not seek to provide a standardized way of coding hypermedia presentations but instead provides a language that can be used to describe how any set of hypermedia objects has been interconnected, and how users are meant to access them. User groups will define their own application specifications, which will be interchanged in the form of an SGML document type definition.

The emphasis in HyTime is on identifying specific types of hypermedia objects, such as links and other locatable events, and on providing addressing mechanisms that will identify any segment of data that may need to be accessed or presented to users in a special way, irrespective of how the source data has been coded.

HyTime information sets can be placed into standardized BENTO (SBENTO) interleaved 'containers' to ensure efficient interchange of shared data objects.

HyTime uses a generalized measuring mechanism that can be used to define any measurement counting method. A HyTime finite coordinate space can have any number of axes, each of which can be assigned its own set of measurement units. Events can be specified as occurring at specific points on one or more measurement axes, or as occurring relative to some other event. The trees used to define formally structured documents, such as those created using SGML and ODA, can be treated as measurement axes along which counting can be done in either a depth-first or a breadth-first manner. Virtual time, of the type used in music timing, is provided for, as is virtual space (which can give a percentage of the used space, for example).
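
The coordinate-space model described above can be sketched in a few lines of code. All class, axis and unit names below are invented for illustration; HyTime itself defines these concepts declaratively, as SGML architectural forms, not as a programming interface:

```python
# Illustrative sketch of a HyTime-style finite coordinate space.
# Class, axis and unit names are invented for this example.

class Axis:
    def __init__(self, name, unit):
        self.name = name      # e.g. "time" or "x"
        self.unit = unit      # any application-defined measurement unit

class Event:
    """An event occupies an extent (start, length) on one or more axes."""
    def __init__(self, name, extents):
        self.name = name
        self.extents = extents    # {axis name: (start, length)}

class FiniteCoordinateSpace:
    def __init__(self, axes):
        self.axes = {axis.name: axis for axis in axes}
        self.events = []

    def schedule(self, event):
        self.events.append(event)

    def events_at(self, axis_name, point):
        """Return events whose extent on the given axis covers `point`."""
        found = []
        for event in self.events:
            if axis_name in event.extents:
                start, length = event.extents[axis_name]
                if start <= point < start + length:
                    found.append(event)
        return found

# A two-axis space: virtual time (as in music timing) and horizontal position.
space = FiniteCoordinateSpace([Axis("time", "quarter-note"), Axis("x", "pixel")])
space.schedule(Event("caption", {"time": (0, 8), "x": (100, 200)}))
space.schedule(Event("audio-clip", {"time": (4, 16)}))
print([e.name for e in space.events_at("time", 5)])
```

Querying events along an axis in this way is, in essence, what a HyTime engine does when deciding what to present at a given point in virtual time or space.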

There are six modules to HyTime, of which only one is compulsory:

* the base module (compulsory)
* the measurement module
* the location address module
* the hyperlinks module
* the scheduling module
* the rendition module

Usage (Market segment and penetration)
HyTime engines that can support the full functionality of this comprehensive standard are still not commercially available, but many projects are starting to introduce a subset of HyTime functionality. Interactive Electronic Technical Manuals (IETMs) based on the application of HyTime have been specified as part of the deliverables for a number of US Defense Department projects, and some documentation is already available for these projects. The Swedish military are also documenting their new systems using HyTime as the underlying mechanism.

It is likely that, for the time being at least, 'HyTime engines' will tend to concentrate on specific aspects of the standard as opposed to providing full coverage of all possible functionality. This may lead to different HyTime engines gaining penetration in differing market segments.

Further details available from:
ISO, local national standards organisations or the Committee for the Application of HyTime (CApH) via Internet (caph@techno.com).

Information on recent standards activity related to HyTime can be found in the OII Multimedia and Hypermedia Standards Activity Report for October 1995.

Index


MHEG

Expanded name
Coding of Multimedia and Hypermedia Information

Area covered
Coded representation of non-revisable final form multimedia and hypermedia information objects

Sponsoring body and standard details

Characteristics/description
The MHEG standard provides a standardized set of object classes that can be used to control the presentation of multimedia and hypermedia information. The interchanged information is defined using ISO's Abstract Syntax Notation One (ASN.1) representation.

MHEG defines the following object classes:

Like HyTime, MHEG uses the concepts of generic space and time to synchronize events. MHEG, however, is restricted to three spatial and one temporal axis. Four levels of synchronization are identified: script, conditional, spatio-temporal and intermedia. Synchronization can be elementary, chained, cyclic or controlled by a condition.

MHEG links are associative, dynamic and event driven. There are two types: object synchronization links and hyperlinks.
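
The event-driven behaviour of MHEG links can be illustrated with a small sketch. The class and event names below are invented for this example; MHEG defines its object classes abstractly, in ASN.1, not as program code:

```python
# Illustrative sketch of an MHEG-style event-driven link (names invented).

class Link:
    def __init__(self, source_event, condition, action):
        self.source_event = source_event   # event that may trigger the link
        self.condition = condition         # predicate over presentation state
        self.action = action               # fired when event and condition match

class Presentation:
    def __init__(self, state):
        self.state = dict(state)
        self.links = []

    def add_link(self, link):
        self.links.append(link)

    def fire(self, event):
        """Deliver an event; any link whose condition holds runs its action."""
        for link in self.links:
            if link.source_event == event and link.condition(self.state):
                link.action(self.state)

# A hyperlink-style jump: selecting a button while on page 1 moves to page 2.
show = Presentation({"page": 1})
show.add_link(Link("next-button:selected",
                   condition=lambda s: s["page"] == 1,
                   action=lambda s: s.update(page=2)))
show.fire("next-button:selected")
print(show.state["page"])
```

Because the link only fires when both its source event and its condition match, the same event delivered again leaves the presentation unchanged, which is the dynamic, conditional behaviour the standard describes.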

Further parts of the standard are being developed. Part 3 will specify a set of extensions for script object interchange, while Part 5 will specify the MHEG subset for base level implementations such as those used for Video-on-Demand and Home Shopping services. Part 6 will identify support levels required for interactive television and related applications.

Usage (Market segment and penetration)
Until this draft standard is finalized, products and services based on it will not be available. The OMHEGA ESPRIT research project is preparing an 'MHEG Toolkit' that should be available before the standard is formally published. This toolkit is being tested as part of the DELTA European Collaborative Open Learning Environment (ECOLE) project.

The DAVIC consortium are likely to adopt Part 5 as the basis for the interchange of information between digital television service providers (broadcast, satellite and cable) and set-top units in the home. The idea is that a single, general purpose, MHEG set-top unit would be able to receive digital signals from any unit as part of a Video-on-Demand distribution system.

Note: The 70-strong Open-PC MPEG consortium has decided to use Microsoft's Media Control Interface (MCI) to control MPEG presentations. This may affect the uptake of MHEG within computer-based systems, though as the MCI standard is platform-dependent at present it is unclear just how useful it will be for wide area networks, for which MHEG is specifically designed.

Further details available from:
ISO or local national standards organisations. Further information is available over the Internet from http://www.fokus.gmd.de/ovma/berglass/mhews/MHEG.html.

Information on recent standards activity related to MHEG can be found in the OII Multimedia and Hypermedia Standards Activity Report for February 1996.

Index


M-JPEG

Expanded name
Moving JPEG

Area covered
Use of JPEG images to provide moving image presentations

Sponsoring body and standard details
Proprietary application of internationally approved still image standard prepared by a group calling themselves the Motion Joint Picture Engineers Group (M-JPEG)

Characteristics/description
Uses a sequence of JPEG compressed still pictures to provide moving images without accompanying sound. Because pictures do not rely on information stored in other frames they are easier to decode than an MPEG audiovisual presentation, but are not so highly compressed.

Usage (Market segment and penetration)
Now available as an option on a number of multimedia systems as an intermediate step to the adoption of MPEG. Also useful for digital editing of video sequences.

Further details available from:
Various suppliers.

Index


MPEG-1

Expanded name
Coding of Moving Pictures and Associated Audio for Digital Storage Media

Area covered
Compression of moving pictures and synchronized audio signals for storage on, and real-time delivery from, CD-ROM

Sponsoring body and standard details

Characteristics/description
A typical interlaced (PAL) TV image has 576 by 720 pixels of picture information, a picture speed of 25 frames per second and requires data to be delivered at around 140Mbit/s. Computer systems typically use even higher quality images, up to 640 by 800 pixels, each with up to 24 bits of colour information, and so require up to 12Mbits per frame, or over 300Mbit/s. CDs, and other optical storage devices, can only be guaranteed to deliver data at speeds of around 1.5Mbit/s so high compression ratios are required to store full screen moving images on optical devices.
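
The arithmetic behind these figures can be checked directly; the sketch below assumes the 640 by 800 pixel, 24-bit colour frame quoted above and the PAL rate of 25 frames per second:

```python
# Uncompressed bandwidth for the computer image quoted above:
# 640 x 800 pixels, 24 bits of colour, assuming 25 frames per second.
bits_per_frame = 640 * 800 * 24            # 12,288,000 bits: "up to 12Mbits"
bits_per_second = bits_per_frame * 25      # 307,200,000 bits: "over 300Mbit/s"

# A CD-ROM delivers around 1.5Mbit/s, so playing such images straight
# from disc needs a compression ratio of roughly:
compression_ratio = bits_per_second / 1_500_000
print(round(compression_ratio))            # about 205:1
```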

The MPEG-1 standard is intended to allow data from non-interlaced video formats having approximately 288 by 352 pixels and picture rates of between 24 and 30 Hz to be displayed directly from a CD-ROM or similar optical storage device, or from a magnetic storage medium, including tape. It is designed to provide a digital equivalent of the popular VHS video tape recording format.

High compression rates are not achievable using standard, intraframe, compression algorithms (see Image Compression Techniques). MPEG-1 utilises block-based motion compensation techniques to provide interframe compression. This involves the use of three types of frame encoding:

* I-Pictures, which are intra-coded without reference to any other frame
* P-Pictures, which are predictive-coded with reference to the preceding I-Picture or P-Picture
* B-Pictures, which are bidirectionally predictive-coded with reference to both the preceding and the following I-Picture or P-Picture

While B-Pictures provide the highest level of compression they cannot be interpreted until the next I-Picture or P-Picture has been processed to provide the required reference points. This means that frame buffering is required for intermediate B-Pictures. The amount of frame buffering likely to be available at the receiver, the speed at which the intermediate frames can be processed, and the degree of motion within the picture therefore control the level of compression that can be achieved.
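
The buffering requirement can be seen by reordering a sequence from display order into the order in which a decoder must receive it: each B-Picture is held back until the anchor frame (I-Picture or P-Picture) that follows it has been sent. The function below is an illustrative sketch, not part of the standard:

```python
def display_to_transmission_order(frames):
    """Reorder (index, type) frames from display order into the order a
    decoder needs them: each anchor (I or P) is sent before the
    B-Pictures that reference it."""
    out, pending_b = [], []
    for frame in frames:
        if frame[1] == "B":
            pending_b.append(frame)   # held back: needs the NEXT anchor too
        else:
            out.append(frame)         # I- or P-Picture: a reference point
            out.extend(pending_b)     # B-Pictures between the two anchors
            pending_b = []
    return out + pending_b

display = [(0, "I"), (1, "B"), (2, "B"), (3, "P"), (4, "B"), (5, "B"), (6, "P")]
print(display_to_transmission_order(display))
```

Each run of B-Pictures is delayed behind the anchor that follows it, which is exactly why the receiver must buffer intermediate frames.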

MPEG-1 uses a block-based discrete cosine transform (DCT) method with visually weighted quantisation and run length encoding for video compression. MPEG-1 audio signals can be encoded in single channel, dual channel (two independent signals), stereo or joint stereo formats using pulse coded modulation (PCM) signals sampled at 32, 44.1 or 48kHz. A psychoacoustic model is used to control audio signals sent for quantisation and coding.
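
A toy version of this video coding chain (transform, quantise, run-length encode) can be sketched as follows. The uniform quantiser step and the row-by-row scan are deliberate simplifications; real MPEG-1 uses visually weighted quantisation matrices and a zigzag scan of the coefficients:

```python
import math

def dct_8x8(block):
    """Two-dimensional DCT of an 8x8 block (direct form; slow but clear)."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[i][j]
                    * math.cos((2 * i + 1) * u * math.pi / 16)
                    * math.cos((2 * j + 1) * v * math.pi / 16)
                    for i in range(8) for j in range(8))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

def quantise(coefficients, step=16):
    """Uniform quantisation (MPEG-1 really uses a weighted matrix)."""
    return [[round(value / step) for value in row] for row in coefficients]

def run_length_encode(values):
    """Collapse runs of equal values into (value, run length) pairs."""
    runs = []
    for value in values:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return [tuple(run) for run in runs]

# A flat block compresses to almost nothing: only the DC coefficient
# survives quantisation, and the 63 zero AC coefficients become one run.
flat = [[128] * 8 for _ in range(8)]
quantised = quantise(dct_8x8(flat))
scanned = [value for row in quantised for value in row]  # row scan, not zigzag
print(run_length_encode(scanned))  # [(64, 1), (0, 63)]
```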

Usage (Market segment and penetration)
As MPEG-1 provides TV quality images, rather than higher density images that make optimum use of the resolution available on computer systems, the standard has yet to be accepted as the standard way of capturing multimedia data. The development of the MPEG-2 standard has led a number of vendors to take advantage of the availability of the new quad-speed CD-ROM drives, which make the 1.5Mbit/s delivery speed restriction redundant, to deliver higher resolution images.

As more and more multimedia data is captured on camcorders and from other forms of television signal the use of MPEG-1 may increase. Prices of MPEG video cards are beginning to approach the levels where they can be fitted to low-end (SOHO) systems as well as professional quality systems. Software-based compression is also available, so SOHO users no longer need to purchase specialized hardware.

A 70-strong Open-PC MPEG consortium has agreed to use Microsoft's Media Control Interface (MCI) to provide a standardized control mechanism for MPEG-1 presentations, rather than wait for the development of the MHEG standard.

Further details available from:
ISO or local national standards organisations

Index


MPEG-2

Expanded name
Coding of Moving Pictures and Associated Audio for Digital Storage Media

Area covered
Compression of broadcast quality moving pictures and synchronized audio signals

Sponsoring body and standard details

Characteristics/description
Joint ITU and ISO/IEC project to define a coding system for digital transmission of television resolution pictures. It can be used for the encoding of both standard definition television (SDTV) and high definition television (HDTV) programs.

Like MPEG-1, MPEG-2 utilises block-based motion compensation techniques to provide interframe compression. This involves the use of the same three types of frame encoding: intra-coded I-Pictures, predictive-coded P-Pictures and bidirectionally predictive-coded B-Pictures.

An MPEG-2 stream can include data encoded according to the MPEG-1 specification.

MPEG-2 systems distinguish between multi-program transport streams and single-program program streams. Both contain compressed data that has been segmented into packetized elementary stream (PES) packets. MPEG-2 provides facilities for:

* multiplexing several elementary streams into a single stream
* synchronizing audio and video using embedded time stamps and clock references
* managing decoder buffers
* conditional access to scrambled services
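
The PES framing mentioned above can be illustrated with a minimal packetiser. This sketch emits only the fixed part of a PES packet header (start-code prefix, stream id and 16-bit length); real PES packets for video and audio streams carry further optional header fields, such as timestamps and scrambling flags, that are omitted here:

```python
import struct

PES_START_CODE_PREFIX = b"\x00\x00\x01"

def pes_packet(stream_id, payload):
    """Wrap an elementary-stream fragment in a (simplified) PES packet.

    Only the fixed six-byte header is produced: the 0x000001 start-code
    prefix, a one-byte stream id and a 16-bit packet length. The optional
    PES header fields required for real video and audio streams are
    deliberately left out of this sketch.
    """
    if len(payload) > 0xFFFF:
        raise ValueError("payload too large for a single PES packet")
    return (PES_START_CODE_PREFIX
            + bytes([stream_id])
            + struct.pack(">H", len(payload))
            + payload)

# 0xE0 is the stream id assigned to the first video elementary stream.
packet = pes_packet(0xE0, b"compressed video data")
print(packet[:6].hex())
```

Packets framed in this way can then be carried either in a program stream or split across the fixed-size packets of a transport stream.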

Two variants of the video compression standard are available: a fully scalable variant and a non-scalable subset of it. The non-scalable version can provide extra compression for interlaced video signals. MPEG-2 uses a block-based discrete cosine transform (DCT) method with visually weighted quantisation and run length encoding for video compression.

As well as permitting multi-layer images when using a multi-program transport stream, MPEG-2 also provides facilities for data partitioning, SNR, spatial and temporal scalability.

Work is currently in hand to extend MPEG to cope with multiple-view images of the type used for 3-D and virtual reality. Part 6 of ISO 13818 will provide a command and control language that can be used to interchange video control information between computers and set-top units or intelligent televisions.

MPEG-2 audio signals can be encoded in single channel, dual channel, stereo and multichannel (surround sound) formats using signals sampled at 16, 22.05, 24, 32, 44.1 or 48kHz.

Usage (Market segment and penetration)
Though the standard has only recently been published a number of video cards are already available, and images encoded in this format are already available on the Internet. MPEG-2 looks set to become the major encoding standard for digital high definition television (HDTV).

Further details available from:
ISO or local national standards organisations

Information on recent standards activity related to MPEG-2 and MPEG-4 can be found in the OII Multimedia and Hypermedia Standards Activity Report for February 1996.

Index


Indeo Video

Area covered
Standard on-chip protocol for the encoding of audiovisual information

Sponsoring body and standard details
Proprietary standard developed by Intel

Characteristics/description
Set of on-chip drivers that work with Video for Windows to allow compressed audiovisual data to be replayed without the addition of specialist video boards. Can also be used in conjunction with a Smart Video card to capture and compress video sequences.

Usage (Market segment and penetration)
Heavily promoted by Intel, this chip set looks set to become a common feature in hardware designed for multimedia presentation.

Further details available from:
Intel Corporation (UK) Ltd, Pipers Way, Swindon, Wiltshire SN3 1RJ, UK (+44 1793 431155)

Index


PREMO

Expanded name
Presentation Environment for Multimedia Objects

Area covered
Provides a programming environment for the presentation, construction and manipulation of multidimensional objects.

Sponsoring body and standard details

Characteristics/description
PREMO will extend the concepts developed in the Computer Graphics Interface, Computer Graphics Metafile and Programmer's Hierarchical Interactive Graphics standards to cover the inter-relationships required for moving multidimensional images. As such it should provide a mechanism for exchanging information stored on virtual reality systems.

PREMO is specified using object-oriented programming techniques that can be bound to non-object-oriented languages. Applications based on PREMO will have the ability to store, retrieve and interchange object information and define application specific structuring for object sets.

Usage (Market segment and penetration)
Not applicable: standard still being written.

Further details available from:
ISO or local national standards body. For FTP access to more up-to-date information contact ftp://ftp.cwi.nl/pub/ISO-SC24-WG6/Premo/PremoDocument

Index


Quicktime

Area covered
Addition of time-based functionality to the Apple operating system

Sponsoring body and standard details
Proprietary standard developed by Apple Computer

Characteristics/description
Quicktime is an operating system extension for the Apple computer that retrofits low resolution video onto the desktop. Quicktime use requires three elements: an operating system extension (Quicktime Extension), a data object (in the MooV format), and a player or application program (e.g. Apple's Simple Player). Quicktime is different from other recent moving picture formats, including MacroMind Director and HyperCard, in that it neither flips cards nor moves sprites. It involves not so much adding video as adding the notion of time into the operating system. The QuickTime clock determines the right video frame to display at the right time, and it can also orchestrate a sound mixer, a lighting board, or another real-world device that needs to be triggered by time cues.

Usage (Market segment and penetration)
Quicktime's market segment was originally that section of the Macintosh user community with a particular interest in multimedia. The standard is now supported across a range of platforms by a number of suppliers. It is not clear that its penetration has yet become as large as might have been expected. There has been some user hesitation over certain areas of the functionality, for instance the video performance, quality and size of display window. The market appears to be waiting for these obstacles to be overcome.

Further details available from:
Apple Computer Europe, Le Wilson 2, Cedex 60, 92058 Paris la Defense, France

Index


SMSL

Expanded name
Standard Multimedia Scripting Language

Area covered
Provides a standardized method for defining the constructs used in the script for an audiovisual presentation.

Sponsoring body and standard details

Characteristics/description
Extends HyTime by providing SGML meta-DTD architectural forms for describing the object classes, virtual functions, messages, aggregates and class/data membership used in a multimedia presentation's script.

Usage (Market segment and penetration)
Not applicable: standard still being written.

Further details available from:
ISO or local national standards body

Index


Video for Windows/AVI

Expanded name
Video for Windows/Audio Video Interleave

Area covered
Compression technique used to provide audiovisual information for personal computers

Sponsoring body and standard details
Proprietary standard developed by Microsoft Corporation for Intel.

The AVI standard was originally developed by Microsoft for Intel. It has since come under direct control of Microsoft, who market it under the name Video for Windows.

Characteristics/description
A 386-based machine can play back small, 160 by 120 pixel, 256 colour images, while a 486 machine can handle images up to 320 by 240 pixels (a quarter of a low-resolution screen). Using a Pentium chip with an accelerated graphics card it is now possible to provide full screen (640 by 480 pixel) low resolution moving images.

Usage (Market segment and penetration)
Whilst heavily promoted by Microsoft as the preferred interchange format for moving images to be presented on a PC, the different levels at which this standard can be used mean that an AVI designation does not necessarily imply compatibility.

Because Microsoft bundle AVI compression and decompression software as part of their standard Video for Windows package it is likely that files conforming to one of the versions of this standard will remain common for a number of years. In many cases specialist cards will be used to enhance the image quality presented to users rather than playing back the image using standard hardware facilities.

Further details available from:
Microsoft Corporation, 16011 NE 36th Way, Redmond, Washington 98073-9717, USA

Index


This information set on OII standards is maintained by Martin Bryan at The SGML Centre on behalf of CEC's DGXIII/E. We would very much welcome comments on its accuracy and completeness. Comments should be sent to:

G. Heine
DG XIII/E
L-2920 Luxembourg
Phone: +352 4301 33620
Fax: +352 4301 33190
E-mail: Gerhard.Heine@lux.dg13.cec.be



File last updated: March 1996

© ECSC-EC-EAEC, Brussels-Luxembourg, 1995
Reproduction is authorized, except for commercial purposes, provided the source is acknowledged.

