<cite>OII Standards and Specifications List</cite>


OII Guide to Objects and Components

The term "objects" is well established as a cornerstone of information technology. In object-oriented technology, a programming object associates functionality with data through a well-defined interface. Such programming objects are typically pieces of software that can be reused within a given environment, and they follow four basic principles: encapsulation, inheritance, polymorphism, and instantiation.
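These four principles can be illustrated with a short sketch. The classes below are hypothetical examples for illustration only, not taken from any particular library:

```java
// Hypothetical classes illustrating the four principles.
abstract class Shape {
    private double scale = 1.0;               // encapsulation: state is hidden behind methods
    public void setScale(double s) { scale = s; }
    protected double getScale() { return scale; }
    public abstract double area();            // polymorphism: one call, type-specific behaviour
}

class Square extends Shape {                  // inheritance: Square reuses Shape's interface
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side * getScale(); }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius * getScale(); }
}
```

Each `new Square(...)` or `new Circle(...)` is an instantiation of its type, and a client holding only a `Shape` reference gets the behaviour appropriate to the concrete object.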

In recent years, however, the view that re-usability should be considered solely in terms of software has been challenged. Some users have reported disappointment with the ability of object-oriented technology to deliver widespread re-usability. It has been argued that programming objects should be decomposed (or combined) to reflect individual business processes or entities more accurately. Moreover, objects should be de-coupled from specific technologies and programs. Objects that are defined in a technology-independent way and are fully aligned with business processes form "components" of a business model that can be reused in many different environments.

A component may be defined as either a business process or an entity. It can be designed to work on a single machine, or be deployed in a distributed architecture to run across a network (e.g. JavaBeans uses CORBA to move components between machines). The structure of a component is dictated by the requirements of business processes, in contrast to that of programming objects as currently defined for object-oriented technology, which must conform to the principles of that technology.

Like objects, the key to components is re-usability. However, whereas the emphasis of objects is on ease of programming, componentization is deeply influenced by the need to reduce the complexity and cost of building business applications. Accordingly, a component must have a clear meaning to business staff. An order, an invoice or, indeed, a complete payroll program may all be components. Several components may be assembled to form larger components, sometimes also referred to (confusingly) as "business objects".

In summary, a component has the following characteristics:

  • Provides a specific service to a client application
  • Provides services that can be reused in multiple applications
  • Co-operates with other components, from which applications are constructed
  • Can evolve its services without impacting client applications
  • Is easily deployable and upgradable throughout an enterprise.
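These characteristics can be sketched with a hypothetical invoicing component. All names below are illustrative assumptions, not part of any standard API:

```java
// A component exposes its service through an interface; client applications
// depend only on the contract, so the implementation behind it can evolve
// or be replaced without impacting those clients.
interface InvoiceService {
    double totalFor(String orderId);
}

class SimpleInvoiceService implements InvoiceService {
    public double totalFor(String orderId) {
        return orderId.length() * 10.0;      // placeholder business logic
    }
}

class BillingClient {
    private final InvoiceService service;    // the client sees only the interface
    BillingClient(InvoiceService service) { this.service = service; }
    double bill(String orderId) { return service.totalFor(orderId); }
}
```

Swapping `SimpleInvoiceService` for a different implementation requires no change to `BillingClient`, which is the essence of the service-evolution characteristic above.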

Componentization is arguably aligned with the concept of "modularization", which was popularised in the 1970s. The focus of modularization was on the creation of reusable modules in programming languages such as Cobol and FORTRAN, and on the recompilation of those modules for reuse. Componentization, however, is a response to the development of distributed computing, especially in the context of the Web. The emergence of Java, in particular, has enabled the dynamic manipulation of Web content, such as the ability to execute components (or modules) "on the fly". Moreover, unlike traditional compiled languages, Java does not bind code to a concrete implementation until it is used. These "late binding" characteristics of Java -- in contrast to the "fixed binding" of traditional programming languages -- fundamentally change the way in which data is associated with the methods used to execute and present it. In other words, the way in which data can be executed and presented no longer needs to be tied to a particular programming language.
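Late binding can be sketched in Java itself using the reflection API, where the method to call is selected by name at run time rather than being fixed when the caller was compiled. The `Greeter` class is a hypothetical example:

```java
import java.lang.reflect.Method;

class Greeter {
    public String hello(String name) { return "Hello, " + name; }
}

class LateBindingDemo {
    // The method is looked up and bound only when this call executes,
    // not when the calling code was compiled.
    static Object invokeByName(Object target, String methodName, String arg) {
        try {
            Method m = target.getClass().getMethod(methodName, String.class);
            return m.invoke(target, arg);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The caller compiles with no reference to `Greeter` at all; any object offering a matching method can be substituted at run time.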

An important consequence of componentization is, therefore, the ability to specify and share a common set of data semantics independently of the particular programming languages, methods and software techniques being used. In addition, it highlights the need for formal techniques for software reuse within multi-tier architectures that are no longer bound to particular processes or network nodes. Thirdly, the size of a component has become an issue in today's networked client/server environment: it has been argued that, given bandwidth constraints, components of significant size should not be held server-side and downloaded to the desktop for execution as needed. These issues are further explored in the present document.

The Business Context

Componentization is driven by the need to align IT developments more closely with business developments. As business functions and boundaries break down, businesses themselves are increasingly considered in terms of the essential processes that deliver products and services to customers, rather than in terms of the traditional functional framework. According to this view, business processes can be decomposed into individual processes, people and software -- entities or components which can be re-grouped to optimise the organisation's structure and operation. From the perspective of IT, the objective is to implement each component in software once only. The components are then targeted for reuse across all relevant business processes, within the entire organisation, or even across enterprises and sectors. This is consistent with the general evolution of IT development methods, from the traditional "write spec, design, code, maintain" approach to one in which repositories of components are browsed against a specification so that many, if not all, parts of a given application can be assembled. However, this trend does raise the question of how such repositories are to be managed, and who is to be responsible for their management (see below).

An important corollary is that a change in business practices or organisational structures would no longer require an overhaul of the IT systems. The componentization of IT means that individual entities could be replaced relatively quickly and inexpensively, with minimal disruption. Moreover, full alignment of IT processes with business processes means that any business process change would necessitate only an exactly corresponding software process change, with no further ripple effect. On the other hand, to achieve this objective it is essential that business structures are reflected in the structure of the application at quite a detailed level, which may in itself require an overhaul of existing applications.

Advantages and disadvantages

The advantages of componentization include:

  • Increased speed of development. Pulling a ready-made component off the shelf is obviously faster than writing code from scratch, giving organisations a faster time to market
  • The performance characteristics of a component will be known in advance. Predictable operational characteristics would help reduce risks and uncertainty within the organisation
  • Reduction of maintenance costs. The more widely components can be re-used within an application or across different applications, the smaller the total amount of code that has to be maintained
  • Ability of users to purchase individual processes, or sets of processes and entities, rather than the complete functional application packages on sale today. This is believed to produce considerable savings for users, help them avoid vendor "lock-in", and enable the purchase of "best of breed" products.

However, organisations that have sought to implement a component-based architecture have reported the following disadvantages:

  • Significant up-front investment for planning and modelling effort, as well as for training/re-training of staff who have to master a different development discipline: building applications from components rather than from scratch
  • The need to integrate separate components together into a workable application, which could be complex and resource intensive.

Perhaps the most serious disadvantage of all is that componentization, just like object technology, suffers from user concerns about its robustness in supporting business-critical enterprise applications. While it is relatively straightforward to break software down into components such as buttons and dialogue boxes to build a user interface on the desktop, building a distributed, server-side infrastructure in which core application components can interoperate requires effort, investment and confidence of a totally different order. It demands the ability to exchange components across a network between different operating systems, as well as a degree of reliability that satisfies the operational requirements of entire enterprises rather than just those of individual users.


Software reuse falls into four categories:

  • Allows users to see only the outside of an artefact, i.e. interfaces (e.g. JavaBeans)
  • Allows users to see the inside as well as the outside of an artefact, but not to touch it (e.g. classes and subroutines provided as source code)
  • Allows users to see and change the inside as well as the outside of an artefact (e.g. patterns, configurations)
  • Allows reuse where users provide a description or specification of what they want, and let a "black box" program generate the implementation details (e.g. stubs and skeletons generated by IDL compiler).

The advent of network-based computing has meant that the concept of software reuse is now being extended from a micro-architecture to a multi-tier macro-architecture level. Prototyping and incremental development play an increasingly important role in software reuse, as does the availability of formal techniques. The emergence of the Unified Modelling Language (UML), which focuses on using components with interfaces and on system usage from the perspective of each user, is also challenging traditional notions of reuse. Crucially, UML is increasingly being championed by many in the software industry as the system modelling language on which software reuse needs to be based. In parallel, enterprise integration and the tighter coupling of vertical markets -- a key tenet of electronic commerce -- have increased the expectation of reuse of domain-specific components, in particular server-side components in distributed systems.

The scope of re-usability of components is closely linked to the nature of the components. Not only do components need to be of an appropriate size, they must be derived from as widely accepted a common semantic base as possible. Components derived solely from a single application are unlikely to be usable across other applications. An organisation's IT architecture should encompass all applications within the organisation; better still would be an architecture that spans a whole industry sector, or even several sectors. For this reason, there has been considerable activity in the development of a commonly accepted component architecture, particularly concerning the interfaces between components. Such efforts are being undertaken by individual vendors and user organisations, vendor communities, user communities, as well as by standardization groups.

As discussed above, the development of componentization is intimately linked to the ability to specify and share a common set of data semantics independently of the particular programming languages, methods and software techniques being used. Discussions on the definition and description of what are generally termed Common Business Objects are underway in various standardization communities. They include, for example:

How these various activities are to interface with one another is an important issue for consideration by the standardization community as a whole. At the same time, the (potential) divergence among these approaches is a concern. A key issue that has emerged is the distinction between reusable business information objects (e.g. invoice number, date, supplier id/name, etc.) and reusable sets of business process objects (e.g. orders, invoices, etc.), and the modelling of these different kinds of objects within a specific model. By way of example, CommerceNet's CBL consists of information models which distinguish between:

  • Business description primitives like companies, services, and products
  • Business forms like catalogues, purchase orders, and invoices
  • Standard measurements, date and time, location, classification codes.

Further developments in this area are expected.

Component libraries

Up to now, object class libraries have generally come with browsers that enable developers to discover the objects they require. However, object descriptions at this level are delivered in terms of implementation detail and technical interfaces, which do not help the kind of analysts and developers who will need to build front-line applications from components. For components to be re-usable, they must be described in terms of their business meaning and behaviour rather than their implementation detail.

Many believe that this problem can only be overcome through the use of a meta-language that declares the purpose of a component, at a recognisable business level, rather than describing its implementation, and through the adoption of common naming conventions that allow easy recognition of common objects / components. The problem, however, is somewhat complicated by the emergence of pre-customised packages, with proprietary built-in component libraries.

As indicated above, management of component libraries will become increasingly important as the concept becomes more widely adopted. Locally managed libraries are likely to lead to duplication and inconsistency, and so will defeat the objective of re-usability. However, not many organisations have centrally managed data management groups (though the situation is changing), and smaller organisations will continue to rely on commercial packages for the assembly of applications.

Moreover, while it is in principle possible to build and maintain a component infrastructure from a selection of vendors' technologies and products, in practice many users will prefer to offload that responsibility and effort onto a third party. This means that users may end up binding themselves to a particular supplier in the process, and risk losing the benefits of adopting a component architecture.

Impact on software industry

The development of componentization is likely to have a significant impact on the future of the software industry. As libraries of business entities and processes are created, users will neither want nor, indeed, be able to use the kind of functional application packages on the market today. Users will instead wish to buy individual processes, or sets of processes and entities, rather than complete packages. Today's application packages are generally too tightly integrated and too monolithic in structure to support this.

Some software companies are, however, already re-engineering themselves to embrace the new paradigm -- changing from delivering a few major product releases a year to the custom assembly of systems using "plug-and-play" components. If this trend continues, software delivery processes would become more service-like. There would be a greater emphasis on support for customer-specific product evolution, as well as on many client-specific configurations of the deployed products. The implications are profound, from version-control systems, to helplines, to protecting the integrity of a core system that must nonetheless allow custom extension. Many software companies also point to a severe skill shortage for developing component-based products.


Given the trend towards componentization, a number of models are available today. To the extent that these models are based on the principles of componentization described above, they are to be distinguished from the models embedded in traditional software programming languages. It should, however, be noted that the new component-based models are variously referred to as component models as well as object models, even though they extend beyond the traditional notion of objects as defined in object technologies. For consistency, the term "component model", together with the associated term "components" (rather than "objects"), is used in this section. The exceptions are the OMG object model and OMG objects, which are well-established terminology.

The most fundamental problem that a component model aims to solve is: How can a system be designed so that binary executables from different vendors, written in different languages and at different times, are able to interoperate? To address this problem, the relevant issues are:

  • Basic interoperability - how can developers create their own unique binary components, yet be assured that these binary components will interoperate with other binary components built by different developers?
  • Versioning - how can one system component be upgraded without requiring all the system components to be upgraded?
  • Language independence - how can components written in different languages communicate?
  • Transparent cross-platform interoperability - how can developers be given the flexibility to write components to run in-process, cross-process and indeed cross-network, using one programming model?
  • Performance - how can components interacting within the same address space use each other's services without undue system overhead?
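One common answer to the versioning question is interface-based design: a new interface extends the old one, and clients probe for the extra capability at run time, loosely in the spirit of COM's QueryInterface. The sketch below uses hypothetical Printer interfaces as an illustration:

```java
interface Printer {                           // version 1 contract
    String print(String text);
}

interface ColourPrinter extends Printer {     // version 2 adds a capability
    String printColour(String text, String colour);
}

class OfficePrinter implements ColourPrinter {
    public String print(String text) { return "plain:" + text; }
    public String printColour(String text, String colour) { return colour + ":" + text; }
}

class PrintClient {
    // Old clients compiled against Printer keep working unchanged;
    // upgraded clients discover the richer interface dynamically.
    static String render(Printer p, String text) {
        if (p instanceof ColourPrinter) {
            return ((ColourPrinter) p).printColour(text, "red");
        }
        return p.print(text);
    }
}
```

Because version 2 only adds to the contract, components can be upgraded one at a time without forcing the rest of the system to be upgraded with them.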

The leading component models today include:


Component Object Model (COM)

In the Component Object Model (COM), a component is a piece of compiled code that provides some service to the rest of the system. Introduced by Microsoft in 1993, COM is the underlying architecture that forms the foundation for higher-level Windows-based software services, such as those provided by Microsoft's object technology, Object Linking and Embedding (OLE). OLE services span various aspects of system functionality, including compound documents, customised controls, inter-application scripting, data transfer and other software interactions. While these services provide distinct functionality to the user, they share a fundamental requirement for a mechanism that allows binary software components, from any individual vendor or combination of vendors, to connect to and communicate with each other in a pre-defined manner. COM provides this mechanism at run time, in contrast to the more traditional approach of reusing components/objects at compile time.

A primary architectural feature of COM is that the components never have direct access to other components in their entirety: components always access other components through interface pointers. In this respect, all OLE services are essentially COM interfaces (groupings of functions). Interfaces and their attributes are described in COM. Of note is IUnknown - the "base" interface that COM defines to allow components to control their own lifespan and to dynamically determine another component's capabilities.

In 1996 Microsoft introduced Distributed COM (DCOM) for the creation of networked applications built from components. DCOM is designed for use across multiple network transports, including Internet protocols such as HTTP. DCOM is based on the Open Software Foundation's (now The Open Group's) Distributed Computing Environment (DCE) Remote Procedure Call (RPC) specification, and is intended to work with both Java applets and ActiveX components through its use of the Component Object Model (COM).

The Microsoft virtual machine (Microsoft VM) provides mapping between a Java component and a COM component, and, according to Microsoft, enables any COM component to be accessed as a Java component. To achieve this, Microsoft has developed a "Java-Callable Wrapper" (JCW) for exposing the functionality of a COM component. To create a JCW, compiler directives are added to regular Java source files that specify how the Java component maps to the COM equivalent and vice versa. According to Microsoft, this makes all the existing COM-based applications and services available to Java and enables any existing Java components to be used by other COM supporting languages.

In 1997 Microsoft announced that the evolution of its component services would continue with COM+, which includes, amongst other things, queued components (allowing clients to invoke methods on COM components using an asynchronous model), dynamic load balancing (automatically spreading client requests across multiple equivalent COM components), and full integration of the Microsoft Transaction Server into COM.

JavaBeans and Enterprise JavaBeans

JavaBeans is a platform-independent component architecture model written in the Java programming language and based on Sun Microsystems' JavaBeans specification, first released for public comment in 1996. According to Sun, JavaBeans is intended to act as a bridge between proprietary component models and enable developers to create components rather than "monolithic applications" and to build up a portable, reusable code base.

The JavaBeans model describes a component as a related set of Java classes, including a BeanInfo class that describes the properties, methods, and events associated with the component. JavaBeans components, or Beans, are reusable software components that can be manipulated visually in a builder tool. Beans can be combined to create traditional full-fledged applications, or applets (which can be designed to work as reusable Beans). Beans share five common features:

  • Introspection: enables a builder tool to analyse how a Bean works
  • Customisation: enables a developer to use an application builder tool to customise the appearance and behaviour of a Bean
  • Events: enables Beans to communicate and connect together
  • Properties: enable developers to customise and program with Beans
  • Persistence: enables developers to customise Beans in an application builder, and then retrieve those Beans, with customised features, for future use.
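These features rest on simple coding conventions. A minimal hypothetical Bean is shown below: a public no-argument constructor plus matching get/set method pairs are enough for a builder tool to discover its properties through introspection (the `TemperatureBean` class and its property names are illustrative assumptions):

```java
import java.io.Serializable;

class TemperatureBean implements Serializable {       // persistence: serializable state
    private double celsius;
    public TemperatureBean() { }                      // Beans need a no-argument constructor
    public double getCelsius() { return celsius; }    // read/write "celsius" property
    public void setCelsius(double c) { celsius = c; }
    public double getFahrenheit() {                   // read-only derived property
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```

A builder tool would discover the `celsius` and `fahrenheit` properties via `java.beans.Introspector.getBeanInfo(TemperatureBean.class)`, without the class carrying any extra metadata.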

As a component model, JavaBeans is primarily intended for the visual construction of reusable components for the Java platform. In 1998 Sun formally launched Enterprise JavaBeans (EJB), an API which extends JavaBeans to middle-tier/server-side business applications. The extensions that EJB adds to JavaBeans include support for transactions, state management, and deployment-time attributes. In addition, EJB specifies how communication among components maps onto the underlying communication protocols, notably CORBA/IIOP. The addition of EJB to the JavaBeans component model shifts the focus from writing code that facilitates interaction between technologies to writing the business logic that reuses components.

OMG Object Model

The OMG Object Model defines common object semantics for specifying the externally visible characteristics of objects in a uniform and implementation-independent way. These common semantics characterise objects which exist in an OMG-conformant system, notably the Common Object Request Broker Architecture (CORBA).

The OMG Object Model is based on a small number of basic concepts: objects, operations, types, and supertyping/subtyping.

An object can model any kind of entity. Operations are applied to objects and collectively characterise an object's behaviour. Objects are created as instances of types, which act as templates for object creation. A type characterises the behaviour of its instances by describing the operations that can be applied to them. Types may be related to one another as supertypes and subtypes.

The OMG model, as applied in CORBA, is centred on the notion that objects are encapsulated entities that provide one or more services that can be requested by clients in a language-independent way. A CORBA object is the instance of an interface defining methods. An interface has no data members. Clients access services by issuing a request which consists of a method name, the target object, parameters and an optional context. The ORB then sends the request to the receiving object and returns results or error status back to the calling objects.

The set of possible operations that a client may request of an object is determined by the object's interfaces, which are defined in OMG's IDL (Interface Definition Language). IDL is independent of any programming language and contains only data descriptions. In CORBA, an IDL compiler translates the IDL specifications into IDL stubs (for callers) and IDL skeletons (for object implementations) in the actual programming language being used.

The relationships between COM/DCOM, Java and CORBA are a key topic of discussion within the IT industry. Many views have been expressed and, while they do not necessarily coincide, the differences between these technologies appear to be becoming less of an issue, given the increasing availability of bridges and gateways to facilitate interconnection. From the standardization viewpoint, however, a crucial distinction remains: COM/DCOM were developed for Windows-based applications and services, in contrast to JavaBeans/EJB and the OMG object model, which are intended to be platform independent.

One notable development that relates to component models is the emergence of UML (first adopted by the OMG in 1997; the current version is 1.3) and its associated Unified Process. UML has been described as a "blueprint language" and a "visual language" for modelling a system. UML models cover specifications, architecture design and processes, and can be applied to all business, hardware and software systems. The focus of UML is on using components with sound interfaces, and on system usage from the perspective of each user. The Unified Process is a component-based process framework which employs UML. It covers the entire cycle of software development from inception to product release. Key to the Unified Process is the deployment of a use-case-driven and architecture-centric framework which unifies the iterative and incremental phases of development. Both UML and the Unified Process are expected to enable software development teams to work in a more concerted way, and to facilitate the wider reuse of components as well as frameworks.


As discussed in this guide, a key driver for componentization is to bridge the "semantic gap" between the worlds of business and information technology. On the other hand, componentization has given rise to different views of components, as well as to different models of their reuse. What is clear is that the Web has generated a new paradigm for the portability and distribution of software applications, as well as a new meaning for productivity ("don't code, design"). Allied with this development are the direct execution of components at run time and the ability to decouple data objects from the component methods used to manage data ("late binding"). The re-usability of components has sharpened the focus on the nature of components themselves, and has underlined the need for the specification and sharing of a common set of data semantics grounded in business logic, as opposed to IT and system logic. As we have seen, various initiatives in this area are underway within the standardization community. However, it is at present difficult to predict to what extent these initiatives will converge with each other, or with the various market developments in componentization.


This information set on OII standards is maintained by Martin Bryan of The SGML Centre and Man-Sze Li of IC Focus on behalf of European Commission DGXIII/E.

File created: March 1999
