Meta-Information

SGML

Introduction

SGML is the Standard Generalised Markup Language, the international standard for defining descriptions of the structure and content of different types of electronic document. Essentially, SGML is a method for creating interchangeable, structured documents. It allows one to:

  • assemble a single document from many sources (such as SGML fragments, word processor files, database queries, graphics, video clips, and real-time data from sensing instruments);
  • define a document structure using a special grammar called a Document Type Definition (DTD);
  • add markup to show the structural units in a document; and
  • validate that the document follows the structure that was defined in the DTD.

It is important to note, however, that SGML is not:

  • a predefined set of tags that can be used to mark up documents;
  • a standardised template for producing particular types of documents.

The components of SGML

SGML is based on the concept of a document being composed of a series of entities. Each entity can contain one or more logical elements. Each of these elements can have certain attributes (properties) that describe the way in which it is to be processed. SGML provides a way of describing the relationships between these entities, elements and attributes, and tells the computer how it can recognise the component parts of a document.

SGML requires users to provide a model of the document being produced. This model, called a Document Type Definition (DTD), describes each element of the document in a form that the computer can understand. The DTD shows how the various elements that make up a document relate to one another.
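
To make this concrete, a minimal DTD fragment for a hypothetical memo document type might look like the following (the element and attribute names are invented for this sketch):

```dtd
<!-- A memo consists of a "to" line, a "from" line,
     and one or more paragraphs, in that order. -->
<!ELEMENT memo  - - (to, from, para+) >
<!ELEMENT to    - O (#PCDATA) >
<!ELEMENT from  - O (#PCDATA) >
<!ELEMENT para  - O (#PCDATA) >
<!-- The memo element carries a "status" attribute,
     defaulting to "draft". -->
<!ATTLIST memo  status (draft|final) draft >
```

The content model `(to, from, para+)` is the grammar against which a validating parser checks each memo instance; the `- O` flags are SGML tag minimisation indicators, marking end-tags that may be omitted.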

To allow the computer to correctly identify where each part of a document starts and ends, SGML requires that the user declares, in an SGML Declaration, how the computer is to identify markup, and what codes have been used to identify and delimit markup sequences.

SGML Systems

There are four general classes of tools in an SGML system, namely Editors, Conversion Tools, Document Managers, and Formatters.

  • Editors use the SGML declaration to set the ground rules, and the declaration subset to determine which tags are allowed to occur and in what order they should occur; they may also use an output specification to determine how they display the document to the user on screen or in a printed proof. Having this information allows the Editor to assist the user in creating valid document instances by allowing data to be entered with tags in the manner prescribed by the DTD. Additionally, some Editors include the function of a print formatter, in which case they will use a separate output specification for printing.
  • Conversion tools typically have a mechanism for specifying how the elements and/or tags of a document are to be converted. Additionally, when conversion tools are being used to filter SGML data to some other format, they typically need to take as input the SGML declaration, declaration subset and instance. Having the declaration subset allows the conversion tool to know what to expect in the data, and enables it to act on entire elements.
  • Document managers come in many varieties. Some document managers work with whole documents. These tools often do not need any information about the contents of a file as they are just concerned with the management of the file. Other document managers will break documents into components, based on user input and the declaration subset, and manage those components. Whether or not a document manager needs your DTD is a good clue to how the document manager will be working with your data.
  • Formatters also come in many varieties. A common scenario is for the formatter not to use the DTD but simply to act on the tags as they are encountered. These formatters use some type of output specification, which describes the formatting actions to take when encountering the tags. For example, an electronic book product may have a style sheet which controls the generation of a table of contents and the screen presentation.

DSSSL

DSSSL (Document Style Semantics and Specification Language) is an International Standard, ISO/IEC , for specifying document transformation and formatting in a platform- and vendor-neutral manner. In particular, DSSSL can be used to specify the presentation of documents marked up according to the SGML standard.

DSSSL consists of two main components: a transformation language and a style language. The transformation language is used to specify structural transformations on SGML source files. For example, a telephone directory structured as a series of entries ordered by last name could, by applying a transformation spec, be rendered as a series of entries sorted by first name instead. The transformation language can also be used to specify the merging of two or more documents, the generation of indices and tables of contents, and other operations. While the transformation language is a powerful tool for gaining the maximum use from document databases, the focus in early DSSSL implementations will be on the style language component.

Within the style language, it is possible to identify a number of capabilities that for one reason or another should be considered optional for early implementations. Recognising this, the designers of DSSSL designated certain features of the style language as optional and created a Core Query Language and a Core Expression Language specifically in order to make more limited implementations possible. However, they did not define any particular subset of the style language component within the standard itself, but rather left that task to industry organisations and standards bodies.
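
As an illustration of the style language, DSSSL style specifications are written in a Scheme-like expression language; a minimal pair of construction rules (element names and formatting characteristics chosen for this sketch, in the style supported by early implementations such as Jade) might read:

```scheme
;; Format every "title" element as a bold paragraph in a larger size.
(element title
  (make paragraph
    font-weight: 'bold
    font-size:   14pt))

;; Format "emph" elements inline, in italics.
(element emph
  (make sequence
    font-posture: 'italic))
```

Each `element` rule matches elements of the named type in the SGML source and constructs a flow object (here a paragraph or an inline sequence) with the stated characteristics.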

Software

SP <URL:cromwellpsi.com> is a tool for SGML parsing and entity management. Here is a brief summary of its features:

  • Provides access to all information about an SGML document
    Access to the DTD and SGML declaration as well as the document instance
    Access to markup as well as the abstract document
    Sufficient to recreate a character-for-character identical copy of any SGML document
  • Supports almost all optional SGML features
  • Sophisticated entity manager
  • Supports multi-byte character sets
    Parser can use bit characters internally
    bit characters can be used in tag names and other markup
    Supports ISO/IEC (Unicode) using both UCS-2 and UTF-8
    Supports Japanese character sets (Shift-JIS, EUC)
  • Object-oriented
  • Written in C++ from scratch
  • Reentrant
  • Fast
  • Portable
  • All major Unix variants
    MS-DOS
    Windows 95/Windows NT
    OS/2
  • Free
    Includes source code
    No restrictions on commercial use
  • Disadvantages
    Programmer-level documentation is provided only for the generic API, not for the native API.
Jade is a DSSSL engine written by the author of SP - see <URL:cromwellpsi.com> .

SP and Jade can be used for conversion of SGML documents into other formats, such as XML, RTF, TeX, MIF, as well as to perform SGML transformations.

References

The official definition of SGML is in the international standard ISO . A list of general information on SGML, including online tutorials, is at <URL:cromwellpsi.com> .

XML

XML is an abbreviated version of SGML, designed to make it easier to define custom document types and easier for programmers to write programs to handle them. It omits the more complex and less-used parts of SGML in return for the benefits of being easier to write applications for, easier to understand, and more suited to delivery and interoperability over the Web. But it is still SGML, and XML files may still be parsed and validated in the same way as any other SGML file.

XML is designed "to make it easy and straightforward to use SGML on the Web: easy to define document types, easy to author and manage SGML-defined documents, and easy to transmit and share them across the Web."

It defines "an extremely simple dialect of SGML which is completely described in the XML Specification. The goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML."

"For this reason, XML has been designed for ease of implementation, and for interoperability with both SGML and HTML."
<URL:cromwellpsi.com>
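
A minimal well-formed XML document, using an invented vocabulary, shows how little machinery is needed:

```xml
<?xml version="1.0"?>
<!-- No DTD is required for well-formedness: elements need only
     be properly nested and explicitly closed. -->
<memo status="draft">
  <to>Project team</to>
  <from>Technical editor</from>
  <para>Every start-tag has a matching end-tag, one of the
  simplifications XML makes relative to full SGML.</para>
</memo>
```

A validating XML parser would additionally check the instance against a DTD, exactly as for any other SGML document.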

HTML

Introduction

The Hypertext Mark-up Language (HTML) covers a set of standards defining document type definitions (DTD) corresponding to the various "official" versions of HTML. The standardisation procedure is W3C based though a number of Internet Drafts and RFCs represent the earlier evolution of HTML.

The latest W3C recommendation is HTML , which includes support for style sheets, frames, tables and forms. Internationalisation and accessibility issues are also represented in its design.

Although focus for the future has turned to XML, HTML will still be a key part of the web for some time. Notable developments in the evolution of HTML are covered below.

Forms

Forms were introduced in HTML to interoperate with the CGI standard. A form is a template for a form data set (data captured by the browser which is sent to the server), an associated method (the HTTP method for uploading to the server) and an action URI (a reference to a server program that will process the form). The form data set is a sequence of name/value pairs, specified using form elements. Form submission usually results in the data set being transferred to the web server for processing.
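
The components described above (method, action URI and named form elements) can be sketched as follows; the URI and field names are invented:

```html
<!-- The form data set here is the name/value pairs "name" and
     "email", sent with the POST method to the action URI. -->
<FORM METHOD="POST" ACTION="/cgi-bin/register">
  Name:  <INPUT TYPE="text" NAME="name">
  Email: <INPUT TYPE="text" NAME="email">
  <INPUT TYPE="submit" VALUE="Register">
</FORM>
```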

The META element

Broadly, the META element can be used to identify properties of a document (e.g. author, expiration date, a list of keywords etc.) and assign values to those properties. Each META element specifies a name/value pair using the NAME and CONTENT attributes. Such usage is often used to provide keywords for indexing purposes.
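
Keyword metadata of this kind is conventionally written as follows (the keyword list is invented):

```html
<META NAME="keywords" CONTENT="metadata, SGML, XML, HTML">
```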

Note that in cases where the value for some property is a reference outside the document itself, the LINK element may be used in place of META.

Alternatively, the HTTP-EQUIV attribute can be used in place of the NAME attribute to create a header in the HTTP response. The value of the HTTP-EQUIV attribute specifies the header name, and the value of the CONTENT attribute supplies the header's value.

Some user-agents support the use of a Refresh value for the HTTP-EQUIV attribute to implement a simple form of client-pull.
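
The header-generating usage can be sketched like this (the Expires header is a conventional example; the date is invented):

```html
<!-- The element below... -->
<META HTTP-EQUIV="Expires" CONTENT="Tue, 04 Dec 1999 21:29:02 GMT">
<!-- ...causes the document to be treated as carrying the HTTP
     response header:

     Expires: Tue, 04 Dec 1999 21:29:02 GMT                      -->
```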

The LANG attribute can be used within the META element (as it can in many other HTML-defined elements) to specify the (human) language of the attribute values (this could be used, for example, by speaking browsers to pronounce words in various languages correctly).

The SCHEME attribute provides user agents with a context in which to interpret metadata, for example to differentiate between different formats or to specify types of identifier.

The META element may also be used to specify the defaults for the scripting language, style sheet language and document character encoding.

The HEAD element may contain the PROFILE attribute to specify the location of a metadata profile. Its value is a URI that is used by a user agent; actions may then be taken by the user agent based upon definitions within the (dereferenced) profile.

Image maps

Image maps are inline images that include "hotspots" that operate like hyperlinks. An image map has three components: an image, appropriate HTML to specify an image map and map data.

Server-side maps appeared first, where maps are interpreted by the web server (i.e. the browser sends the server a set of coordinates corresponding to the clicked image region). Client-side maps have largely replaced these, where the map data is stored within HTML and the browser interprets the map. With the introduction of Java support within browsers, there is also the possibility of Java-based image maps.
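
A client-side map of the kind described above might be marked up as follows (image name, coordinates and target URIs are invented):

```html
<!-- USEMAP ties the image to the MAP data; the browser, not the
     server, interprets the clicked coordinates. -->
<IMG SRC="navbar.gif" USEMAP="#nav" ALT="Navigation bar">
<MAP NAME="nav">
  <AREA SHAPE="rect" COORDS="0,0,99,49"    HREF="home.html"   ALT="Home">
  <AREA SHAPE="rect" COORDS="100,0,199,49" HREF="search.html" ALT="Search">
</MAP>
```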

Tables

HTML implemented a widely deployed subset of the specification given in RFC ("HTML Tables") and can be used for the mark-up of tabular material or for layout purposes (although with the advent of style sheets and various accessibility issues the latter use is discouraged).
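
A minimal table in this subset looks like the following (the data is invented):

```html
<TABLE BORDER="1">
  <CAPTION>Markup standards and their bodies</CAPTION>
  <TR> <TH>Standard</TH> <TH>Body</TH> </TR>
  <TR> <TD>SGML</TD>     <TD>ISO</TD>  </TR>
  <TR> <TD>HTML</TD>     <TD>W3C</TD>  </TR>
</TABLE>
```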

Frames

Frames were not an official HTML standard until the W3C HTML specification but were deployed as browser extensions to the Netscape browser (and also later implemented by Microsoft Internet Explorer). Frames enable browser windows to be split into multiple independently scrollable windows with separate objects in each window.

Inline scripting languages

Support was introduced in the Netscape browser for the JavaScript scripting language developed by Netscape and Sun Microsystems. JavaScript offered client-side scripting embedded within HTML and could be used to develop simple applications and to enhance user interfaces. Later Microsoft developed a version of JavaScript known as JScript and provided support for JScript and VBScript within their browser. Interoperability concerns prompted the standardisation of a scripting language by ECMA, and the ECMAScript Language Specification was released in June .

Initially, the scope of such object-oriented scripting languages within the HTML framework was limited due to the lack of an object model for the HTML language. The Document Object Model (DOM), developed by the W3C, addresses this issue by providing a non-proprietary API to a standard set of objects representing HTML and XML documents.

Dynamic HTML (DHTML) refers to the use of a scripting language within the DOM to remove previous restraints on functionality. DOM also addresses the inclusion of objects such as Java applets and ActiveX controls.

Style sheets

Style sheets were introduced to address the problem of layout within a document (since custom layout was not originally a concern of HTML). Cascading Style Sheets 1 (CSS1) was an initial W3C recommendation, but was only partly supported by Microsoft Internet Explorer. CSS2, released as a W3C recommendation in May , provides a great deal of control over document appearance.
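
A style sheet is typically attached with a LINK element, or embedded with a STYLE element; a minimal sketch (file name and rules invented):

```html
<!-- External style sheet -->
<LINK REL="stylesheet" TYPE="text/css" HREF="house-style.css">

<!-- Or an embedded rule: render level-one headings in navy sans-serif -->
<STYLE TYPE="text/css">
  H1 { color: navy; font-family: sans-serif }
</STYLE>
```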

Implementation

HTML delivery

Flat HTML files were traditionally written entirely by hand. This results in problems such as invalid HTML (i.e. non-conformance to a DTD), and so the use of an HTML editor is generally recommended, a number of which now exist (including WYSIWYG systems). An editor can also better deal with large or complicated documents, provide version control and generate complex scripts. Packages such as Microsoft FrontPage integrate with a Microsoft server to provide many useful administrative and functional features.

HTML documents are traditionally stored as files accessible to server software that can serve resources via HTTP. There is however increasing use of storing (perhaps fragmented) HTML in a "back-end" database and serving the results of some database query to the client.

Prior to transfer of a document, scripting may be undertaken by the server to generate HTML (or other resources), which may entirely comprise what is served or add to other resources. Different servers (and platforms) may implement different kinds of server-side scripting, for example PHP, SSI and mod_perl under the UNIX Apache server, and ASP for Microsoft servers.

The CGI standard is also implemented server-side. On receiving a data set from a client, the server may operate on this and return a document, usually dynamically generated from the results of processing the data set. Java-based technologies may be used to provide richer functionality than CGI but at the cost of being more complicated to implement.

A web browser usually requests an HTML document from a server using the HTTP protocol. After receiving the contents (which may have been dynamically generated server-side), the browser can then process any objects such as scripts or style sheets (which may result in a client-side dynamic document).

A cache or proxy may bridge the route between client and server. It is possible that resources may be altered or generated at this stage.

Push technologies have recently become popular, where the traditional request-response paradigm is replaced by automatic delivery of resources to a suitable client (for example, news reports may be delivered to an "active desktop"). This may include HTML documents.

Accessibility

As the users of the Web continue to grow and become more diverse, various communities will have different abilities and skills. It is important to recognise that HTML needs to reflect this, for example by being more accessible to those with disabilities. HTML addresses a number of such issues; however, much is down to the author of documents, and guidelines exist covering the creation of accessible HTML.

Internationalisation

Internationalisation issues are important for creating a functional international Web. Issues broadly split into:

  • how characters within the text (not mark-up) should be able to represent non-western alphabets; and
  • explicitly defining the language for a segment of text.

HTML includes a number of internationalisation features and incorporates RFC . Features include support for rendering text written right to left, and the LANG attribute, available on many HTML elements, used to specify language. There are also features for specifying the character encoding of a document. Importantly, ISO/IEC has been adopted as the document character set for HTML. This standard deals most inclusively with issues of representing international characters, text direction, punctuation and other world language issues.

Standards

Standards were initially deployed via Internet drafts and RFCs. Later the standardisation procedure became W3C recommendation based. Standards are not necessarily fully adhered to by browser manufacturers.

Comparisons and relevance

PRIDE will obviously be using HTML. Issues that may arise with this use include:

  • Version of HTML to use
  • Use of proprietary extensions/browser support
  • Accessibility
  • Internationalisation
  • Design
  • Use of HTML technologies (e.g. the element)

Future developments

The W3C is now looking at the next generation of HTML. There is demand for HTML to provide support for television and mobile devices and to integrate more closely with database applications. It is considered that XML will provide the foundation for further HTML development.

There is work on defining HTML as a modular application of XML. Modularity will allow the integration of HTML with specialised tag sets for various applications (e.g. mathematics) and the definition of profiles tailored to different device capabilities. There would also be backward compatibility with previous HTML versions. Interoperability is also a key concern, and may be achieved with tools that transform documents into a form suitable for different browsers (e.g. transformed at an intermediate proxy). Core HTML elements would be complemented by modular specialised domain element sets. Accessibility, internationalisation and standards issues would continue to be reflected in HTML development.

Related information

World Wide Web Consortium, <URL:cromwellpsi.com>

Internet Engineering Task Force, <URL:cromwellpsi.com>

XML, <URL:cromwellpsi.com>

HTML specifications and history, <URL:cromwellpsi.com>

RFC HTML Tables

JavaScript, <URL:cromwellpsi.com?content=cromwellpsi.com>

JScript

VBScript

ECMAScript

DOM, <URL:cromwellpsi.com>

Java, <URL:cromwellpsi.com>

ActiveX

Style sheets, <URL:cromwellpsi.com>

HTTP, <URL:cromwellpsi.com>

HTTP-NG, <URL:cromwellpsi.com>

PHP, <URL:cromwellpsi.com>

Apache, <URL:cromwellpsi.com>

Microsoft IIS, <URL:cromwellpsi.com>

Resource Description Framework (RDF)

Introduction

The Resource Description Framework (RDF) is being developed by the W3C as a metadata framework that can be used by a variety of application areas such as: resource discovery, site-maps, Web collections, content rating, e-commerce and rights management, collaboration, privacy and Web-site management. RDF has been developed over the last year or so as part of the W3C's Metadata Activity and has received input from several communities including those working on content rating using PICS, Web collections, digital libraries (particularly the Dublin Core initiative), digital signatures (DSig) and Web privacy (P3P). RDF provides a generic metadata architecture that can be expressed in XML. The ultimate aim is that a machine understandable Web of metadata will be developed across a broad range of application and subject areas.

RDF is based on a mathematical model that provides a mechanism for grouping together sets of very simple metadata statements known as `triples'. These triples formally consist of a subject, a predicate and an object. The subject is the resource being described. The resource may be a Web page, a part of a Web page or a collection of pages (e.g. a whole Web-site). A resource may also be an object that is not directly accessible using the Web, for example a book. All resources described using RDF must be assigned a URI. The predicate is a property of the resource. The property is some aspect, attribute or characteristic used to describe a resource. The object is the value of the property. The value may be a literal (a string or number) or it may be some complex structure represented by other RDF triples.

The RDF model is often represented using node and arc diagrams. However, in order that RDF can be processed by computers, a serialisation syntax has been developed using the Extensible Markup Language (XML-RDF). A very simple example follows:

<rdf:RDF>
  <rdf:Description about="cromwellpsi.com">
    <Title>The UKOLN Home Page</Title>
  </rdf:Description>
</rdf:RDF>

This XML-RDF represents the sentence, `The UKOLN Home Page is the title of the resource cromwellpsi.com'.

The RDF model requires the semantics of metadata to be defined in an RDF schema. The schema allows software to take actions on RDF such as validation, mapping and value prompts. The RDF Model and Syntax Working Group of the W3C is developing the RDF data model. The RDF Schema Working Group is developing an RDF Schema Definition Language.

Implementation

Several tools and software toolkits are beginning to be developed that support the creation and manipulation of RDF. These include:

Reggie

Reggie is a metadata editor that can output metadata in various formats, including XML-RDF. Reggie is implemented as a Web application using Java. Reggie allows the use of a schema to specify the structure of the metadata to be created. New schemas can be developed and referred to by URL within the editor. Reggie is being developed by DSTC.

DC-dot

DC-dot is a Web-based Dublin Core generator that automatically extracts metadata from a resource and then presents it for editing. DC-dot outputs Dublin Core in various formats including XML-RDF. DC-dot is being developed by UKOLN.

SiRPAC

SiRPAC is an RDF parser and compiler by Janne Saarela (W3C). It is written in Java. The SiRPAC compiler takes XML-RDF and generates the RDF triples of the underlying data model.

RDF for XML

RDF for XML is "a Java implementation of the RDF specification for creating technologies that search for, describe, categorize, rate, and manipulate data". RDF for XML is being developed by IBM Alphaworks.

Standards

The Resource Description Framework (RDF) Model and Syntax Specification and the Resource Description Framework (RDF) Schema Specification are both in the "last call" phase of the W3C working draft process.

Comparisons and relevance

The Dublin Core Data Model Working Group has been developing a data model for Dublin Core based on RDF. An example of the anticipated syntax for representing Dublin Core within RDF is given in the Dublin Core section of this document.

Future development

All metadata related activity within the W3C will be based on RDF from now on. It is expected that many other activities (for example the DOI developments related to metadata) will also use RDF as the basis for their work.

Related information

The W3C RDF Page, <URL:cromwellpsi.com>

RDF Model and Syntax Specification, <URL:cromwellpsi.com>

RDF Schema Specification, <URL:cromwellpsi.com>

Reggie, <URL:cromwellpsi.com>

DC-dot, <URL:cromwellpsi.com>

SiRPAC, <URL:cromwellpsi.com>

RDF for XML, <URL:cromwellpsi.com>

Dublin Core Data Model Working Group, <URL:cromwellpsi.com>

Uniform Resource Identifiers (URIs)

Introduction

A Uniform Resource Identifier (URI) is a short string of characters that identifies a resource (in the abstract or physical sense). A URI provides a simple and extensible means for identifying a resource that can then be used within applications. The specification is derived from concepts introduced by the World Wide Web [RFC - "Universal Resource Identifiers in WWW"] and builds upon previous notions such as URLs. URIs are a superset of Uniform Resource Locators (URLs), Uniform Resource Names (URNs) and Uniform Resource Citations or Uniform Resource Characteristics (URCs).

The URI specification implements the recommendations of RFC - "Functional Recommendations for Internet Resource Locators" and RFC - "Functional Requirements for Uniform Resource Names".

The following definitions from RFC characterise the URI:

Uniform

The uniformity of URIs provides several benefits:

  • It allows different types of resource identifiers to be used in the same context (even though the mechanisms used to access the resources may differ).
  • It allows uniform semantic interpretation of common syntactic conventions across different types of resource identifiers (e.g. URLs start with a scheme representing the method of network access).
  • It allows the introduction of new types of resource identifiers without interfering with the way that existing identifiers are used.
  • It allows identifiers to be reused in different contexts (thus permitting new applications or protocols to leverage a pre-existing set of resource identifiers).

Resource

A resource is anything with identity, not necessarily network accessible. The term `resource' refers to the concept of the identified entity, so that a resource can remain fixed even when its content changes (for example, a noticeboard). An identified resource may not be instantiated at a given time.

Identifier

An identifier is an object that acts as a reference to something that has identity. For URIs, the object is a set of characters conforming to the URI syntax.

Implementation

A Uniform Resource Identifier may be a locator, a name or a metadata resource.

URLs

URLs identify resources via a representation of their access mechanism (usually network location) rather than by any other attribute of the resource. URLs have the most varied use of the URI syntax and often have a hierarchical namespace. A major disadvantage of URLs is that they confuse the name of a resource with its location. In the larger Internet information architecture URLs will act only as locators.

URNs

Whereas a URL identifies the location or container for an instance of a resource, a URN identifies the resource. The resource identified by a URN may reside in one or more locations, may move, or may not actually be available at a given time.

The URN has two interpretations: the first is as a globally unique and persistent identifier for a resource (achieved through an institutional commitment) that is accessible over a network; the second is as the specific `urn' scheme which will embody the requirements for a standardised URN namespace [RFC - "URN Syntax"]. Such a scheme will resolve names that have a greater persistence than that currently associated with URLs.
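
Under the `urn' scheme, a URN takes the general form urn:<NID>:<NSS>, where the NID is a registered namespace identifier and the NSS a namespace-specific string; assuming an ISBN-based namespace were registered, a book might be named:

```text
urn:<NID>:<NSS>

e.g.  urn:isbn:0451450523
```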

RFC - "Functional Requirements for Uniform Resource Names" identifies the following requirements for a URN:

Global scope
A URN is a name with global scope that does not imply a location. It has the same meaning everywhere.
Global uniqueness
A URN will be assigned to exactly one resource.
Persistence
The lifetime of a URN should be permanent (i.e. exist even when the resource no longer exists).
Scalability
URNs can be assigned to any resource available on the network.
Legacy support
The scheme must support existing naming systems (for example ISBN numbers, ISO public identifiers, etc.) and allow an embedding that satisfies the syntactic requirements described here.
Extensibility
Any scheme for URNs must permit future extensions to the scheme.
Independence
It is solely the responsibility of a name issuing authority to determine the conditions under which it will issue a name.
Resolution
A URN will not impede resolution. For example, for URNs that have corresponding URLs, there must be some feasible mechanism to translate a URN into a URL.

Highlights of the URN Framework include:
Naming schemes and resolution systems
A naming scheme is a system for creating and assigning URNs. A resolution system is a network-accessible service that maps a URN to a resource. A given URN can map to any type of URI (i.e. URL, URN or URC).
Independence of naming schemes and resolution systems
A naming scheme is not specific to some resolution system. Any resolution system is potentially capable of resolving a URN from any given name scheme.
URN registries
Mechanisms must be created for the user of a URN to discover what resolution systems are available to resolve the URN.
Syntax
Syntax has been generally agreed upon and is acceptable in all proposed naming schemes and resolution systems. A number of details still need to be agreed upon.

URCs

The internet draft "URC Scenarios and Requirements" defines the URC:

"The purpose or function of a URC is to provide a vehicle or structure for the representation of URIs and their associated meta-information".

Initially, URCs were the intermediary that associated a URN with a set of URLs that could then be used to obtain a resource. Later it was decided that metadata should also be included, so that resources could be obtained conforming to a set of requirements. Although work has been carried out by the URC-WG, URCs are still not in existence.

URCs are descriptions of resources available via a network. Such a resource may have any number of locations. URCs provide a standard scheme for sites to provide descriptions, rather than relying on a central URC service. Because URCs are likely to describe a wide range of resources, there is no core set of descriptive attributes (such as author, title etc.). URC standards encourage the development of URC subtypes which are description schemes suited to particular domains.

Example applications

Persistent URLs

Persistent URLs, or PURLs, were developed by OCLC as an interim naming and resolution system for the Web. PURLs increase the probability of correct resolution and thereby reduce the burden and expense of catalog maintenance.

A PURL is a URL. However, a PURL refers to a resolution service, which maps the PURL to a URL and returns this to the client. On the Web, this is a standard HTTP redirect.

Standards

Internet addressing standards are IETF based. A number of other standards may form part of private conventions.

Comparisons and relevance

PRIDE may want to look at how URNs could be used within its architecture.

Related information

RFC Universal Resource Identifiers in WWW

RFC Functional Recommendations for Internet Resource Locators

RFC Functional Requirements for Uniform Resource Names

RFC URN Syntax

RFC Uniform Resource Identifiers (URI): Generic Syntax

W3C addressing page, <URL:cromwellpsi.com>

TURNIP, the URN Interoperability Project, <URL:cromwellpsi.com>

DOI Foundation, <URL:cromwellpsi.com>

PURL homepage, <URL:cromwellpsi.com>

Digital Object Identifier (DOI)

Introduction

The Digital Object Identifier (DOI) has been developed by the International DOI Foundation (IDF) on behalf of the publishing community to provide an identifier for intellectual content in the digital environment. Its goals are to provide a framework for managing intellectual content, link customers with publishers, facilitate electronic commerce, and enable automated copyright management.

The DOI system has two main parts (the identifier and a directory system) and a third logical component, a database.

The identifier

The identifier has two parts, a globally unique part called the prefix and a publisher-assigned part called the suffix, separated by a slash. The prefix is assigned by a DOI agency. Separate publisher imprints can be identified by extending the prefix. Prefixes begin with a code -- `10' at present -- to indicate the agency that allocated them. Currently there is only one agency. The suffix is assigned by the publisher and will be unique to them. It can be any string of printable characters and can be composed of another identifier, such as a SICI, if necessary.
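The prefix/suffix structure can be shown with a small parsing sketch (hedged: `10.1000/123456' is an invented example DOI; only the split at the first slash and the leading `10' code are taken from the description above):

```python
def split_doi(doi):
    """Split a DOI at the first `/' into its agency-assigned prefix and
    publisher-assigned suffix.  Prefixes currently begin with the code
    `10'; the suffix may itself contain slashes or embed another
    identifier such as a SICI."""
    prefix, _, suffix = doi.partition("/")
    if not prefix or not suffix:
        raise ValueError("a DOI has the form prefix/suffix")
    return prefix, suffix
```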
The directory

The DOI system is based on a distributed central directory. Currently DOIs are usually embedded into URLs. When a user clicks on such a URL, a message is sent to the DOI directory where the URL associated with that DOI is stored. This location is sent back to the user's Internet browser as an HTTP redirect -- a special message telling the browser to "go to this particular URL".

The underlying technology for the DOI directory is the Handle resolution system developed by the Corporation for National Research Initiatives (CNRI). The Handle System is a distributed system that stores names (handles) of digital objects and which can resolve those names into locators (URLs) to access the objects. The system is global and general purpose and is used over networks such as the Internet. The Handle system is currently in use in a number of other prototype projects.

The database

Information about an object that is identified by a DOI is maintained by the publisher. However, it is planned that the DOI system will also collect some minimum level of associated metadata to enable provision of automated, efficient services such as look-up of DOIs from bibliographic data, citation linking, and so forth.

Implementation

It is currently difficult to determine how widespread the implementation of the DOI is. However, IDF members include major industry players in a range of technology and content industries. The Board of the Foundation, elected by and from the membership, currently consists of the Association of American Publishers, International Publishers Association, International Association of STM Publishers, Authors Licensing and Collecting Society, Elsevier Science, European Music Rights Alliance, Microsoft, New England Journal of Medicine, and Wiley.

DOIs are currently embedded into Web pages as URLs. In this way they can be resolved using any Web browser. A Handle browser plug-in is also available which can resolve DOIs directly. Some experimental work has also been done, encoding DOIs as URNs and resolving them using HTTP proxy servers.

Standards

The DOI syntax is being standardised within NISO. The IDF is also working closely with ISO (the ISWC working group) and with the URN working group of the IETF. In its discussions about metadata, the IDF is working with the Dublin Core initiative and the W3C RDF working group.

Comparisons and relevance

The DOI is closely related to other bibliographic identifiers such as the ISBN, ISSN and SICI. It is currently used in the form of a URL and is resolved in a very similar way to the Persistent URL (PURL).

Future development

One aspect of the DOI that is currently under discussion within the IDF is the issue of what metadata about the objects identified by a DOI should be held within the DOI directory. This discussion is bringing together several interested parties, including representatives of the Dublin Core initiative, the W3C RDF working group, publishers and copyright licensing agencies. Some of this discussion is likely to take place within the framework of the European funded Interoperability of Data in E-Commerce Systems (INDECS) project.

Related information

International DOI Foundation, <URL:cromwellpsi.com>

The Handle System, <URL:cromwellpsi.com>

    MPEG-7

    Introduction

    MPEG-7, which is a work in progress at the moment, will be a standardised description of various types of multimedia information. This description will be associated with the content itself, to allow fast and efficient searching for material that is of interest to the user. MPEG-7 is formally called `Multimedia Content Description Interface'.

MPEG-7 is intended to extend the limited capabilities of the proprietary content-identification solutions that exist today, notably by including more data types. MPEG-7 will specify a standard set of descriptors that can be used to describe various types of multimedia information. MPEG-7 will also standardise ways to define other descriptors, as well as structures (Description Schemes) for the descriptors and their relationships.

    "This description (i.e. the combination of descriptors and description schemes) shall be associated with the content itself, to allow fast and efficient searching for material of a user's interest. MPEG-7 will also standardise a language to specify description schemes, i.e. a Description Definition Language (DDL). AV material that has MPEG-7 data associated with it, can be indexed and searched for. This `material' may include: still pictures, graphics, 3D models, audio, speech, video, and information about how these elements are combined in a multimedia presentation (`scenarios', composition information). Special cases of these general data types may include facial expressions and personal characteristics." [0]

    Implementation

The MPEG-7 standard builds on other representations such as analogue, PCM, and MPEG-1, -2 and -4. One functionality of the standard is to provide references to suitable portions of them. For example, perhaps a shape descriptor used in MPEG-4 is useful in an MPEG-7 context as well, and the same may apply to motion vector fields used in MPEG-1 and MPEG-2.

    MPEG-7 descriptors do not depend on the ways the described content is coded or stored. It is possible to attach an MPEG-7 description to an analogue movie or to a picture that is printed on paper. Even though the MPEG-7 description does not depend on the (coded) representation of the material, the standard in a way builds on MPEG-4, which provides the means to encode audio-visual material as objects having certain relations in time (synchronisation) and space (on the screen for video, or in the room for audio). Using MPEG-4 encoding, it will be possible to attach descriptions to elements (objects) within the scene, such as audio and visual objects. MPEG-7 will allow different granularity in its descriptions, offering the possibility to have different levels of discrimination.

The same material can be described using different types of features, tuned to the area of application. To take the example of visual material: a lower abstraction level would be a description of, e.g., shape, size, texture, colour, movement (trajectory) and position (`where in the scene can the object be found?'); and for audio: key, mood, tempo, tempo changes, position in sound space. The highest level would give semantic information: `This is a scene with a barking brown dog on the left and a blue ball that falls down on the right, with the sound of passing cars in the background.'

    MPEG-7 will address applications that can be stored (on-line or off-line) or streamed (e.g. broadcast, push models on the Internet), and can operate in both real-time and non real-time environments.

The standardisation of audio-visual content recognition tools is beyond the scope of MPEG-7. In developing the standard, however, MPEG might build some coding tools for research purposes, but they would not become part of the standard itself.

    Standards

The MPEG-7 standard is being developed by the Moving Picture Experts Group (MPEG). At this stage, the requirements have been defined and an open Call for Proposals has been issued <URL:cromwellpsi.com>. Proposals for technologies and systems tools are due on 1 February, in accordance with the instructions in the MPEG-7 Proposal Package Description (PPD) <URL:cromwellpsi.com>.

    The preliminary work plan for MPEG-7 foresees:

    • Working Draft December
    • Committee Draft October
    • Final Committee Draft February
    • Draft International Standard July
    • International Standard September
As this new MPEG work item will require technology from areas not yet sufficiently represented in the MPEG community, it will be necessary to seek the collaboration of new experts in the relevant fields. As always, MPEG is open to anyone interested in participating and contributing.

    Dublin Core

    Introduction

    The Dublin Core (DC) is a fifteen element metadata set that was originally developed to improve resource discovery on the Web. To this end, the DC elements were primarily intended to describe Web-based `document-like objects'. More recently the scope of DC has expanded to include off-line electronic resources and other objects, museum artefacts for example. The Dublin Core effort is developing mechanisms for describing the relationships between such resources.

The Dublin Core originated at a meeting organised by OCLC in Dublin, Ohio, attended by representatives of the library, museum and research communities and commercial Web software developers. Since then there have been four follow-up meetings in the Dublin Core workshop series, the most recent held in Helsinki, Finland. A sixth workshop, known as DC-DC, is planned for Washington DC in November. Between workshops, Dublin Core discussion continues via email on the main DC mailing list, which has a large number of subscribers.

    The fifteen DC elements and a very brief description of their semantics follow:

    Title

    the title of the resource

    Subject

    simple keywords or terms taken from a list of subject headings

    Description

    a description or abstract

    Creator

    the person or organisation primarily responsible for the intellectual content of the resource

    Publisher

    the publisher

    Contributor

    a secondary contributor to the intellectual content of the resource

    Date

    a date associated with the creation or availability of the resource

    Type

    the genre of the resource (home page, thesis, article, journal, data-set, etc.)

    Format

    typically a MIME type (e.g. text/html)

    Identifier

    a URL, DOI, ISBN, ISSN, URN or other identifier

    Source

    the resource from which the current resource was derived

    Language

    the language of the resource

    Relation

    an identifier of a second resource and its relationship to the current resource

    Coverage

the temporal or spatial characteristics of the resource (e.g. 18th century UK)

    Rights

    a simple rights statement about the resource

    All of the elements are both optional and repeatable. A minimal DC record may therefore contain only one or two of the above elements. If necessary an element may be repeated, to indicate multiple authors for example. The values of several elements may be taken from enumerated lists. In some cases, these lists already exist, in others lists are being developed as part of the Dublin Core effort.

    The semantics of some of the elements are defined very broadly. For example, the date element is simply defined as "a date associated with the creation or availability of the resource" and the relation element as "an identifier of a second resource and its relationship to the present resource". It is possible to refine the meaning of the elements using an 'element' qualifier:

    Date DateType=Valid

    Relation RelationType=IsPartOf

    It is also possible to qualify the value of an element using a 'value' qualifier. For example, to associate an externally defined `scheme' (for example a controlled vocabulary or specific syntax) with element values:

    Subject SubjectScheme=LCSH

    Date DateScheme=ISO

    Dublin Core that makes use of `element' and `value' qualifiers is known as `Qualified DC'. Dublin Core that does not is often referred to as `Simple DC'.

    Implementation

Much of the DC effort has gone into defining the semantics of the 15 elements, and considerable cross-domain consensus has been achieved on this over the last few years. There has also been some work on syntax, particularly on the use of DC within HTML Web pages. Many DC-based projects embed DC metadata directly into Web pages using the HTML META tag. In this way, the metadata is directly available for collection and indexing by Web robots; the UKOLN homepage, for example, carries embedded DC metadata.

The element names, `Creator' for example, are prefixed by `DC.' to indicate that each one is part of the Dublin Core. Many projects using Dublin Core have added extra metadata elements appropriate to their needs, using a different prefix to indicate that these elements are not part of DC.
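As a hedged illustration of this convention (the element values here are invented, not the actual UKOLN metadata), embedded DC metadata takes a form like:

```html
<!-- Dublin Core metadata embedded in the head of an HTML page.
     Element names carry the DC. prefix; all values are illustrative. -->
<meta name="DC.Title" content="An Example Home Page">
<meta name="DC.Creator" content="Example Information Services">
<meta name="DC.Subject" content="metadata, resource discovery">
<meta name="DC.Format" content="text/html">
```

A Web robot can harvest such descriptions without parsing the body of the page.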

However, there are limitations in what can be achieved using HTML META tags. It is not possible to group sets of META tags in HTML, nor is it possible to represent any hierarchical structure that may be present in the metadata. Qualified DC can be embedded, and indeed many projects using Dublin Core rely on qualified DC for their resource descriptions, but there is some inconsistency in the way projects are doing this. In particular, the way in which qualified DC is embedded into HTML depends on the HTML version in use: later versions of HTML incorporated some of the ideas from the Dublin Core and added a SCHEME attribute on the META tag, which was not present in earlier versions.

    Partly because of these difficulties, DC looks likely to make use of RDF as its preferred syntax in the future and to become one of the early RDF schemas. Although the syntax for representing DC in RDF is still being developed, it is likely to be something like the following:

<rdf:RDF xmlns:rdf="cromwellpsi.com#"
         xmlns:dc="cromwellpsi.com">
  <rdf:Description about="cromwellpsi.com">
    <dc:Title>UKOLN: UK Office for Library and Information Networking</dc:Title>
    <dc:Creator>UKOLN Information Services Group</dc:Creator>
    <dc:Subject>national centre, network information support, library community, awareness, research, information services, public library networking, bibliographic management, distributed library systems, metadata, resource discovery, conferences, lectures, workshops</dc:Subject>
    <dc:Description>UKOLN is a national centre for support in network information management in the library and information communities. It provides awareness, research and information services</dc:Description>
    <dc:Publisher>Bath University</dc:Publisher>
    <dc:Type>Text</dc:Type>
    <dc:Format>text/html - bytes</dc:Format>
  </rdf:Description>
</rdf:RDF>

    Standards

Early development of the Dublin Core was done informally, using a combination of face-to-face meetings (usually one or two per year) and mailing list discussion by a group of invited experts from around the world. Recently a formal structure, comprising a Policy Advisory Committee and a Technical Advisory Committee, has been put in place to oversee the future development of the Dublin Core. The first of five planned Internet Engineering Task Force Requests For Comments (IETF RFCs), `Dublin Core Metadata for Resource Discovery', has been published. Work is also underway to submit the Dublin Core to NISO as a national standard, the intention being to use this as the basis for a submission to ISO.

    Related information

    Dublin Core Metadata Homepage, <URL:cromwellpsi.com>

    RFC Dublin Core Metadata for Resource Discovery

    International Standard Book Number (ISBN)

    Introduction

The ISBN system was developed as an international standard numbering system for books and other monographic publications, and was later adopted as an ISO standard.

An ISBN always has ten decimal digits following the letters `ISBN'. The digits are divided into four parts, separated by hyphens or spaces:

    Group identifier

Identifies a country (82 = Norway) or a language area (3 = Germany, Switzerland (German part) and Austria). Its length varies depending on the number of documents issued in the country/area.

    Publisher identifier

The number is assigned by the national ISBN agencies and varies in length: publishers issuing many items have short identifiers, while publishers issuing few documents have longer identifiers.

    Title number

    A unique title number assigned by the publisher.

    Check digit

    A check digit, calculated using the Modulus 11 algorithm.
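The Modulus 11 calculation weights the first nine digits from 10 down to 2 and chooses the check digit that makes the weighted sum divisible by 11, with 10 written as `X'. A sketch (the digit strings in the examples are illustrative):

```python
def isbn10_check_digit(first_nine):
    """Return the ISBN-10 check digit for a string of nine digits.

    Digits are weighted 10, 9, ..., 2; the check digit completes the
    weighted sum to a multiple of 11.  A check value of 10 is `X'.
    """
    total = sum(weight * int(digit)
                for weight, digit in zip(range(10, 1, -1), first_nine))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)
```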

Each ISBN is unique and should never be used for another title. If a publisher uses up all their available title numbers, a new publisher identifier will be assigned by the national ISBN agency. Each form of a publication, for example paper and CD-ROM, will be assigned a different ISBN. ISBNs can be assigned to any printed publication of 16 pages or more. They can also be assigned to spoken-word audiocassettes, microform publications, Braille publications, calendars, floppy disks, CD-ROMs and videocassettes. Recent guidelines from the International ISBN agency also include on-line publications. ISBNs should not be used for printed music, newspapers, magazines, art prints and art folders without a title page or text, private firms' catalogues, price-lists, directions, loose-leaf systems, theatre and exhibition programmes, colouring-books, games or sound recordings. Serial titles are assigned an ISSN.

    At the international level, Internationale ISBN-Agentur, Staatsbibliothek, Berlin is responsible for the ISBN and for assigning new group identifiers. Each country has a national ISBN agency that is responsible for assigning new publisher identifiers and for updating the Publisher's International ISBN Directory, published by Internationale ISBN-Agentur. The national ISBN agencies also produce lists of title numbers for publishers.

    Implementation

The ISBN is used by publishers, booksellers, intermediaries and libraries in order to purchase, retrieve and manage items. Libraries also use the ISBN for citation purposes. The ISBN is widely used in most countries.

    The ISBN is used on nearly all printed books and to some extent on electronic off-line documents, such as CD-ROMs. Traditional publishers who normally assign an ISBN to books also tend to use them when they issue electronic publications.

The ISBN is defined by the ISO standard `International Standard Book Numbering'.

    Related information

    International Standard ISO Information and Documentation - International Standard Book Numbering (ISBN)

    Identification - Deliverable D of Telematics for Libraries project BIBLINK (LB ), <URL:cromwellpsi.com>

    Parts of this section are based on 'BIBLINK - LB D Identification'.

    International Standard Serial Number (ISSN)

    Introduction

The ISSN is a standardised international numeric code that allows the identification of any serial publication independently of its medium. This includes periodicals, newspapers, newsletters, yearbooks, annuals and series published on paper or other media (floppy disk, CD-ROM, CD-I) or accessible online. The ISSN is linked to a standardised form of the title of the identified serial, known as the `key title'. In its printed form, the ISSN appears as the acronym ISSN followed by two groups of four digits separated by a hyphen. The eighth character is a check digit calculated from the preceding seven digits; this check digit can be an `X'.

    The ISSN has a fixed number of digits and a built-in check-digit and can be validated locally by library systems. Every assigned ISSN is basically unique (globally). In general an assigned original ISSN will never be used again for another title. Only one ISSN is assigned to a serial title. The ISSN is unique for every specific form of the publication. Documents issued in different versions, i.e. both on paper and on the Internet, will be assigned different ISSNs.
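Local validation uses the same Modulus 11 family of algorithms as the ISBN: the first seven digits are weighted 8 down to 2 and the check digit completes the sum to a multiple of 11. A sketch (the digit strings in the examples are illustrative):

```python
def issn_check_digit(first_seven):
    """Return the ISSN check digit for a string of seven digits.

    Digits are weighted 8, 7, ..., 2; the check digit completes the
    weighted sum to a multiple of 11, with 10 written as `X'.
    """
    total = sum(weight * int(digit)
                for weight, digit in zip(range(8, 1, -1), first_seven))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)
```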

    There is a central ISSN database (ISSN Register) in which every ISSN input is checked for consistency and uniqueness by the International ISSN Centre. New blocks of unique ISSN are only distributed by the International Centre to national centres. The participants in the network, the national ISSN centres, are responsible for the correct assignment of ISSN in their own countries. The International Centre takes care of the ISSN assignment for countries without a national ISSN centre.

    According to the standard and syntax of ISSN there is no possibility of extension to the scheme at the current time. The ISSN has a fixed number of digits and consequently the number of available numbers is limited, but for the foreseeable future there will be enough numbers available. However, the guidelines of the scheme have been extended to permit inclusion of new media, for example electronic documents. (Note: if serials appear in different physical formats or manifestations, different editions or, for example, in different versions a separate ISSN can be assigned without any need for extension of the syntax).

    The authority responsible for uniqueness is the ISSN International Centre (located in Paris). It is the registration institution officially designated by ISO for the ISSN. It works in collaboration with the national ISSN centres. The ISSN International Centre compiles and maintains the central ISSN database, ensuring that it is accurate, consistent and continually updated. On the national (and regional) level the national (and regional) ISSN centres are responsible. In cases where there is no existing ISSN centre in a particular country, the ISSN International Centre will take responsibility. The ISSN International Centre also takes responsibility for all ISSN assignments concerning serials of international organisations world-wide. An ISSN can be assigned at any point in the publishing process.

    Implementation

    The ISSN identification scheme is used, among others, by: publishers, distributors, subscription agencies, libraries, national bibliographic agencies, documentation centres and databases, union catalogues, reproduction rights organisations (RROs), postal services, (scientific) researchers, authors and library users. The original purpose of the ISSN is to identify the title of a specific serial publication by the application of an international standard code, enabling the exchange of information about serials between computers.

    Actual use is still related to the unique identification code for serial publications. This is done, for example, by finding a specific serial title in a database through a search with the 8 digits. The ISSN can also be used in citations. The use of ISSN is especially effective if titles of serials (world-wide) resemble each other very closely. In such cases it can be difficult to identify a title unless the ISSN is known. Without the ISSN, far more bibliographic detail of the specific serial publication is required. In general the records within the ISSN Register can also be used to control, complete or create specific databases. Cost of usage is in principle zero. In practice only a couple of national ISSN centres are planning to charge for (part of the) administration costs. The scale of usage is world-wide.

The ISSN is defined by a standard: it is the object of a definition and of standardised application rules adopted internationally in the framework of ISO (the International Organization for Standardization), which groups the official standardisation institutions throughout the world. The defining ISO standard also concerns the definition of a serial.

    Related information

    International Standard ISO Documentation - International Standard Serial Numbering (ISSN)

    ISSN International Centre, <URL:cromwellpsi.com>

    Identification - Deliverable D of Telematics for Libraries project BIBLINK (LB ), <URL:cromwellpsi.com>

    Parts of this section are based on 'BIBLINK - LB D Identification'.

    Serial Item and Contribution Identifier (SICI)

    Introduction

The SICI is a variable-length code that uniquely identifies serial items (issues) and each contribution (article) contained in a serial. Work on the standard began in the US Serials Industry Systems Advisory Committee (SISAC) and was taken over by the National Information Standards Organisation (NISO), which published it as a standard; the standard has recently been revised.

    A SICI is divided in three segments with the following syntax:

Item segment<Contribution segment>Control segment

The Contribution segment is optional. The different parts within the segments are separated by punctuation. There is no restriction on the length of a SICI. For example, the article

    Needleman, Mark. "Computing Resources for an Online Catalog - 10 Years Later".
    Information Technology and Libraries, Jun, v11n

would be assigned a SICI composed of the following segments:

    Item segment

    ISSN

All SICIs must have an ISSN. For serials that do not yet have an ISSN, there are mechanisms for requesting one.


    Chronology

    The cover date for a serial title.


    Enumeration

The enumeration of a specific issue of a serial title. As many levels as needed are recorded, e.g. series, volume, number. The levels are separated using a colon.

    Contribution Segment

    Location

    The location of the contribution, normally a page number. This is set to zero for electronic documents.


    Locally assigned numbers

    [Not in the example]

    The contribution segment also allows for alternative local numbers, e.g. numbers used by publishers during the production process. Locally assigned numbers are separated from the title code with a colon. (CSI=3)


    Titlecode

The first character of each of the first six words of the title and subtitle.

    Control Segment

    CSI (Code Structure Identifier)

    Determines the coding level.

CSI = 1: Assigned to an issue of a serial (SII - Serial Item Identifier)

CSI = 2: Assigned to a contribution within a serial (SCI - Serial Contribution Identifier)

CSI = 3: An alternative numbering scheme is included. Only used during the production process. A published document will have a CSI of 1 or 2.


    DPI (Derivative Part Identifier)

    Identifies parts of the serial other than articles.

    DPI = 0: A serial item or a contribution

    DPI = 1: A table of contents

    DPI = 2: An index

    DPI = 3: An abstract


MFI (Medium/Format Identifier)

A two-letter alphabetic code used to indicate the physical format.


    SVN

    Standard version number of the SICI standard used.


    Check character

    Calculated by applying the Modulus 37 algorithm.
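As an illustration of the title code element described above, a naive derivation would take the first character of each of the first six words (a sketch only: the standard's full rules for punctuation and special characters are not reproduced here):

```python
def sici_title_code(title):
    """Naive sketch of a SICI title code: the first character of each
    of the first six words of the title, uppercased."""
    return "".join(word[0].upper() for word in title.split()[:6])
```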

A SICI is essentially a unique identifier. (Theoretically, two contributions can have identical SICI values if, for instance, two articles in different serials start on the same page number and have the same first six characters in their titles. Tests indicate that duplicate values occur about once per million contributions.) It should be noted that a SICI can be constructed on the basis of different sources, both from the serial in hand and from various forms of citation. Therefore, depending on the information available in the different sources, a contribution (article) might be given more than one SICI. Different forms of a publication, e.g. documents issued both on paper and CD-ROM, will be assigned different SICIs.

The SICI code has no length restriction. The latest version of the standard is extended to include contributions other than articles, e.g. tables of contents, indexes, etc. In principle the SICI code could be further extended if necessary.

The SICI covers all serial items, including periodicals, newspapers, annual works, reports, journals, proceedings, transactions and numbered monographic series, as well as articles in a serial. Book Industry Communication (BIC) has drafted a non-serial equivalent of the SICI, the `Book Item and Component Identifier' (BICI). The draft is being offered to NISO for adoption and submission to ISO alongside the SICI. The numbering scheme does not cover electronic documents that do not contain location numbers or enumeration.

    Implementation

    The SICI is intended for use by those members of the bibliographic community engaged in the functions associated with management of serials and the contributions they contain, such as ordering, accessioning, claiming, royalty collection, rights management, online retrieval, database linking, document delivery, etc.

The SICI is defined by an ANSI/NISO standard.

    Related information

    SICI: Serial Item and Contribution Identifier Standard, <URL:cromwellpsi.com>

    Identification - Deliverable D of Telematics for Libraries project BIBLINK (LB ), <URL:cromwellpsi.com>

    Parts of this section are based on 'BIBLINK - LB D Identification'

    Warwick Framework

    Introduction

    The Warwick Framework provides a conceptual architecture for the interchange of distinct metadata packages. The architecture has two fundamental components, containers and packages. Containers are the unit for aggregating metadata packages. A container may be transient, existing only to transfer packages between systems, or persistent. In its persistent form a container is stored on one or more servers and is accessible using a global identifier (URI). It should be noted that a container can be wrapped within another object, i.e. one that is a wrapper for both data and metadata. Each package is a typed object of one of the following kinds:

• metadata set, for example a Dublin Core or MARC record
• indirect, i.e. a reference to an external object using a URI
• container; these can be nested to any level of complexity

As a simple example, a Warwick Framework container might hold three packages, with the first two contained within the container and the third referenced indirectly.

    The key characteristics of the Warwick Framework are:

    • No presumptions made on the underlying transfer mechanism.
    • No constraints on the complexity of containers.
    • No restrictions on the types of metadata contained in packages -- though a type registry scheme is required (similar to the MIME registry) to allow clients to determine the type of packages.
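The container/package model can be sketched with a few types (a hedged illustration only; the Warwick Framework prescribes no particular implementation, and the type names here are invented):

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class MetadataSet:
    """A concrete metadata package, e.g. a Dublin Core or MARC record."""
    type_name: str   # package type, to be resolved via a type registry
    record: str

@dataclass
class Indirect:
    """A reference to an externally stored package, identified by URI."""
    uri: str

@dataclass
class Container:
    """Aggregates packages.  Containers can nest to any depth; a
    persistent container is itself addressable by a URI."""
    uri: str = ""
    packages: List[Union[MetadataSet, Indirect, "Container"]] = field(
        default_factory=list)
```

A container with three packages, two held directly and one referenced indirectly, is then simply a Container holding two MetadataSet packages and one Indirect reference.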

    Implementation

    The Warwick Framework makes no constraints on the underlying means of communication. It has been implemented experimentally using MIME and SGML. Warwick Framework containers could be transmitted using email, file transfer, HTTP (the Web), etc. However, there are no known implementations of the Warwick Framework in a "service" environment.

    The Resource Description Framework (RDF), currently under development by two working groups of the W3C, provides all of the functionality of the Warwick Framework.

    Related information

    C Lagoze, C A Lynch, R Daniel Jr. The Warwick Framework -- A container Architecture for Aggregating Sets of Metadata. <URL:cromwellpsi.com>

    C Lagoze, R Daniel Jr. Extending the Warwick Framework -- From Metadata Containers to Active Digital Objects. <URL:cromwellpsi.com>

    J Knight, M Hamilton. A MIME implementation for the Warwick Framework. <URL:cromwellpsi.com>

    L Burnard, E Miller, L Quin, C M Sperberg-McQueen. A Syntax for Dublin Core Metadata - Recommendations from the Second Metadata Workshop. <URL:cromwellpsi.com~lou/wip/cromwellpsi.com>

    Resource Description Framework (RDF) Model and Syntax Specification - working draft. <URL:cromwellpsi.com>

    Collection Description

    Introduction

    For the purposes of resource management, cataloguing, discovery, rights management and other functions, individual resources are often grouped together and treated collectively. These groups are commonly referred to as `collections' and may contain physical items (books, journals, museum artefacts), digital surrogates of physical items, other digital items and catalogues of such collections. Typical examples of collections include:

    • Internet catalogues (e.g. Yahoo)
    • Subject Gateways (e.g. SOSIG)
    • Library, museum and archival catalogues
    • Web indexes (e.g. Alta Vista)
    • Collections of text, images, sounds, datasets, software, other material or combinations of these
    • Collections of events
    • Library and museum collections
    • Archives
    • Other collections of physical items
    • Digital archives
    The attributes required to describe a collection can be (loosely) grouped into those that describe the collection itself, those that describe access to the collection (protocol information, opening times, location, etc.) and those that describe the terms and conditions associated with the collection.
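    As a rough illustration of this grouping, the following Java sketch classifies a handful of hypothetical attributes into the three clusters. The attribute names are invented for illustration and are not the working group's actual attribute set.

```java
import java.util.Map;

// A sketch of the three loose attribute groupings for collection description:
// attributes of the collection itself, of access to it, and of its terms and
// conditions. All attribute names below are illustrative assumptions.
public class CollectionAttributes {

    enum Group { COLLECTION, ACCESS, TERMS }

    // Hypothetical attributes assigned to the groups described in the text.
    static final Map<String, Group> GROUPS = Map.of(
        "title", Group.COLLECTION,
        "subject", Group.COLLECTION,
        "protocol", Group.ACCESS,
        "openingTimes", Group.ACCESS,
        "location", Group.ACCESS,
        "charges", Group.TERMS,
        "copyright", Group.TERMS
    );

    static Group groupOf(String attribute) {
        return GROUPS.get(attribute);
    }

    public static void main(String[] args) {
        System.out.println(groupOf("openingTimes")); // ACCESS
    }
}
```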

    Implementation

    Collection descriptions are currently provided in a variety of different contexts including:

    • Conspectus
    • ISO - Directories of libraries, archives, information and documentation centres, and their databases.
    • Z Profile for Access to Digital Collections
    • ISAD(G) - General International Standard Archival Description
    • EAD - Encoded Archival Description
    • GILS - the Government Information Locator Service
    • Subject gateways - e.g. the ROADS SERVICE template
    • WASRV - the IETF Wide Area Service Location working group
    • Dublin Core - although primarily a set of attributes for describing individual resources, the Dublin Core can also be used to describe collections.
    • Web Collections - there have been a number of attempts to develop mechanisms for describing collections of Web resources. Often these are based on XML.
    A small working group composed primarily of members of UK Electronic Libraries Programme Phase 3 Hybrid Library and Large Scale Resource Discovery (Clumps) project members has spent some time investigating the issues surrounding collection level description. A report on work in progress is available from <URL:cromwellpsi.com>. The working group have identified a set of 22 attributes for collection description, have developed a taxonomy of collection types and have created a number of example collection descriptions. The intention is that the attribute set be used as the basis of collection description within some of the eLib phase 3 projects - implemented in a variety of ways including RDF and ROADS.

    Comparisons and relevance

    The PRIDE directory will contain descriptions of collections and the services that provide access to those collections. Therefore, the project will need to identify a suitable attribute set for describing those collections or will need to develop one.

    Related information

    ELib Phase 3 Collection Description Working Group - Report on work in progress <URL:cromwellpsi.com>

    Collection Level Description - an eLib supporting study (in progress) <URL:cromwellpsi.com>

    RFC Dublin Core Metadata for Resource Discovery <URL:cromwellpsi.com>

    Application Profile for GILS <URL:cromwellpsi.com>

    ROADS SERVICE template <URL:cromwellpsi.com>

    ISAD(G): General International Standard Archival Description <URL:cromwellpsi.com(g)cromwellpsi.com>

    Z Profile for Access to Digital Collections <URL:cromwellpsi.com>

    WASRV <URL:cromwellpsi.com>


    PRIDE Requirements and Success Factors

    WOA2 - A system, method and computer program for determining operational maturity of an organization - Google Patents

    A SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR DETERMINING OPERATIONAL MATURITY OF AN OPERATIONS ORGANIZATION

    FIELD OF INVENTION

    The present invention relates to IT operations organizations and more particularly to evaluating a maturity of an operations organization through process analysis.

    BACKGROUND OF INVENTION

    Triggered by a recent technology avalanche and a highly competitive global market, the management of information systems is undergoing a revolutionary change. Both information technology and business directions are driving information systems management to a fundamentally new paradigm. While business bottom lines are more tightly coupled with information technology than ever before, studies indicate that many CEOs and CFOs feel that they are not getting their money's worth from their IT investments. The complexity of this environment demands that a company have a formal way of assessing its IT capabilities, as well as a specific and measurable path for improving them.

    In initiatives to address these issues, various frameworks and gap analyses have been used to capture the best practices of IT management and to determine areas of improvement. While frameworks and gap analysis are intended to capture observable weaknesses in processes, they do not provide data with sufficient objectivity and granularity upon which a comprehensive improvement plan can be built.

    There is thus a need to add further objectivity and consistency to conventional framework and gap analysis.

    SUMMARY OF INVENTION

    A system, method, and article of manufacture consistent with the principles of the present invention are provided for gauging a maturity of an IT operations organization. First, a plurality of process areas of an operations organization are defined in terms of either a goal or a purpose. These process areas are then categorized in terms of common characteristics. Next, process capabilities are determined for the process areas of the operations organization. Thereafter, capabilities are calculated for the specific process categories. A maturity of the operations organization is subsequently determined based on the capabilities of the categories.

    The capabilities of the process areas are based at least in part on the completion of base practices associated with the particular process area. The process capabilities are also based at least in part on assessment of generic practices. In yet another aspect, capabilities may be calculated for the categories based on the process capabilities of the process areas. In particular, the category capabilities may be calculated based on the lowest process capability. Similarly, the maturity of the operations organization may be determined based on the lowest category capability.
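    The "lowest capability" aggregation described above can be sketched directly. The method names and the use of integer capability levels are assumptions; the text presents minimum-based aggregation as one possible calculation, and that is what this sketch implements.

```java
import java.util.Arrays;

// A sketch of the aggregation rules described above: a category's capability
// may be taken as the lowest capability of its process areas, and the
// organization's maturity as the lowest category capability.
public class OperationalMaturity {

    // Capability of one category = minimum process capability within it.
    static int categoryCapability(int[] processCapabilities) {
        return Arrays.stream(processCapabilities).min().orElse(0);
    }

    // Maturity of the organization = minimum category capability.
    static int organizationMaturity(int[][] categories) {
        return Arrays.stream(categories)
                     .mapToInt(OperationalMaturity::categoryCapability)
                     .min().orElse(0);
    }

    public static void main(String[] args) {
        // Process capabilities for two hypothetical categories.
        int[][] categories = { {3, 2, 4}, {4, 3} };
        System.out.println(categoryCapability(categories[0])); // 2
        System.out.println(organizationMaturity(categories));  // 2
    }
}
```

    A minimum-based rule means a single weak process area caps its whole category, which is what makes the rating useful for locating improvement targets.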

    The present invention provides a basis for organizations to gauge performance, and assists in planning and tracking improvements to the operations environment. The present invention further affords a basis for defining an objective improvement strategy in line with an organization's needs, priorities, and resource availability. The present invention also provides a method for determining the overall operational maturity of an organization based on the capability levels of its processes.

    The present invention can thus be used by organizations in a variety of contexts. An organization can use the present invention to assess and improve its processes. An organization can further use the present invention to assess the capability of suppliers in meeting their commitments, and hence better manage the risk associated with outsourcing and sub-contract management. In addition, the present invention may be used to focus on an entire IT organization, on a single functional area such as service management, or on a single process area such as a service desk.

    BRIEF DESCRIPTION OF DRAWINGS

    The invention may be better understood when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:

    Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention;

    Figure 2 is a flowchart illustrating generally the steps associated with the present invention;

    Figure 3 is an illustration showing the relationships of the process category, process area, and base practices of the operations environment dimension in accordance with one embodiment of the present invention;

    Figure 4 is an illustration showing a measure of each process area to the capability levels according to one embodiment of the present invention;

    Figure 5 is an illustration showing various determinants of operational maturity in accordance with one embodiment of the present invention;

    Figure 6 is an illustration showing an overview of the operational maturity model;

    Figure 7 is an illustration showing a relationship of capability levels, process attributes, and generic practices in accordance with one embodiment of the present invention;

    Figure 8 is an illustration showing a capability rating of various attributes in accordance with one embodiment of the present invention;

    Figure 9 is an illustration showing a mapping of attribute ratings to the process capability levels determination in accordance with one embodiment of the present invention;

    Figure 10 is an illustration showing assessment roles and responsibilities in accordance with one embodiment of the present invention; and

    Figure 11 is an illustration showing the process area rating in accordance with one embodiment of the present invention.

    DISCLOSURE OF INVENTION

    The present invention comprises a collection of best practices, both from a technical and management perspective. The collection of best practices is a set of processes that are fundamental to a good operations environment. In other words, the present invention provides a definition of an "ideal" operations environment, and also acts as a road map towards achieving the "ideal" state.

    Figure 1 is a schematic diagram of one possible hardware implementation by which the present invention may be carried out. As shown, the present invention may be practiced in the context of a personal computer such as an IBM compatible personal computer, Apple Macintosh computer or UNIX based workstation.

    A representative hardware environment is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation in accordance with one embodiment having a central processing unit, such as a microprocessor, and a number of other units interconnected via a system bus. The workstation shown in Figure 1 includes Random Access Memory (RAM), Read Only Memory (ROM), an I/O adapter for connecting peripheral devices such as disk storage units to the bus, a user interface adapter for connecting a keyboard, a mouse, a speaker, a microphone, and/or other user interface devices such as a touch screen (not shown) to the bus, a communication adapter for connecting the workstation to a communication network (e.g., a data processing network), and a display adapter for connecting the bus to a display device.

    The workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or UNIX operating system. Those skilled in the art will appreciate that the present invention may also be implemented on other platforms and operating systems.

    A preferred embodiment of the present invention is written using JAVA, C, and the C++ language and utilizes object oriented programming methodology. Object oriented programming (OOP) has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP.

    OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program. An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
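    A minimal Java illustration of encapsulation as just described; the `BankAccount` example and its method names are invented for illustration.

```java
// Encapsulation: the object packages data together with the procedures that
// operate on it, so other objects reach the data only through member functions.
public class EncapsulationDemo {

    static class BankAccount {
        private int balance; // hidden state; cannot be damaged directly

        void deposit(int amount) {
            if (amount > 0) balance += amount; // the object guards its own data
        }

        int balance() { return balance; }
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(100);
        account.deposit(-50); // rejected by the object's own rules
        System.out.println(account.balance()); // 100
    }
}
```

    Because `balance` is private, the invalid deposit cannot corrupt the state: encapsulation protects the data from accidental damage while still allowing interaction through the member functions.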

    In general, OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture. A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point. An object is a single instance of the class of objects, which is often just called a class. A class of objects can be viewed as a blueprint, from which many objects can be formed.

    OOP allows the programmer to create an object that is a part of another object. For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston. In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.

    OOP also allows creation of an object that "depends from" another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition. A ceramic piston engine does not make up a piston engine. Rather it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic. In this case, the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it. The object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.

    When the object or class representing the ceramic piston engine inherits all of the aspects of the object representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class. However, the ceramic piston engine object overrides them with ceramic-specific thermal characteristics, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons. Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.). To access each of these functions in any piston engine object, a programmer would call the same functions with the same names, but each type of piston engine may have different overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
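    The piston-engine discussion above translates naturally into code. The following Java sketch (class and method names are illustrative) shows the composition-relationship, the derived object, and a polymorphic call reaching the overriding implementation.

```java
// The piston-engine example in code: composition (an engine has a piston),
// inheritance (a ceramic piston engine is a kind of piston engine), and
// polymorphism (the override is reached through the base-class name).
public class PistonEngineDemo {

    static class Piston {
        String material() { return "metal"; }
    }

    static class CeramicPiston extends Piston {
        @Override String material() { return "ceramic"; }
    }

    static class PistonEngine {
        // Composition-relationship: the engine object contains a piston object.
        protected Piston piston = new Piston();

        String thermalCharacteristics() {
            return "thermal profile of a " + piston.material() + " piston";
        }
    }

    // Derived object: inherits everything, adds one limitation.
    static class CeramicPistonEngine extends PistonEngine {
        CeramicPistonEngine() { piston = new CeramicPiston(); }
    }

    public static void main(String[] args) {
        PistonEngine engine = new CeramicPistonEngine(); // polymorphic reference
        System.out.println(engine.thermalCharacteristics());
        // prints: thermal profile of a ceramic piston
    }
}
```

    The caller uses the same function name on any engine; the ceramic variant's behavior is selected at run time, which is exactly the "different implementations behind the same name" described above.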

    With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, our logical perception of reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:

    • Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.

    • Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.

    • An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.

    • An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane. With this enormous capability of an object to represent just about any logically separable matters, OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.

    If 90% of a new OOP software program consists of proven, existing components made from preexisting reusable objects, then only the remaining 10% of the new software project has to be written and tested from scratch. Since 90% already came from an inventory of extensively tested reusable objects, the potential domain from which an error could originate is 10% of the program. As a result, OOP enables software developers to build objects out of other, previously built objects.

    This process closely resembles complex machinery being built out of assemblies and sub- assemblies. OOP technology, therefore, makes software engineering more like hardware engineering in that software is built from existing components, which are available to the developer as objects. All this adds up to an improved quality of the software as well as an increased speed of its development.

    Programming languages are beginning to fully support the OOP principles, such as encapsulation, inheritance, polymorphism, and composition-relationship. With the advent of the C++ language, many commercial software developers have embraced OOP. C++ is an OOP language that offers a fast, machine-executable code. Furthermore, C++ is suitable for both commercial-application and systems-programming projects. For now, C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.

    The benefits of object classes can be summarized, as follows:

    • Objects and their corresponding classes break down complex programming problems into many smaller, simpler problems.

    • Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.

    • Subclassing and inheritance make it possible to extend and modify objects through deriving new kinds of objects from the standard classes available in the system. Thus, new capabilities are created without having to start from scratch.

    • Polymorphism and multiple inheritance make it possible for different programmers to mix and match characteristics of many different classes and create specialized objects that can still work with related objects in predictable ways.

    • Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real- world objects and the relationships among them.

    Libraries of reusable classes are useful in many situations, but they also have some limitations. For example:

    • Complexity. In a complex system, the class hierarchies for related classes can become extremely confusing, with many dozens or even hundreds of classes.

    • Flow of control. A program written with the aid of class libraries is still responsible for the flow of control (i.e., it must control the interactions among all the objects created from a particular library). The programmer has to decide which functions to call at what times for which kinds of objects.

    • Duplication of effort. Although class libraries allow programmers to use and reuse many small pieces of code, each programmer puts those pieces together in a different way.

    Two different programmers can use the same set of class libraries to write two programs that do exactly the same thing but whose internal structure (i.e., design) may be quite different, depending on hundreds of small decisions each programmer makes along the way. Inevitably, similar pieces of code end up doing similar things in slightly different ways and do not work as well together as they should.

    Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again. A relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small-scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. Frameworks were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers.

    Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others. In the early days of procedural programming, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.

    The development of graphical user interfaces began to turn this procedural programming arrangement inside out. These interfaces allow the user, rather than program logic, to drive the program and decide when certain actions should be performed. Today, most personal computer software accomplishes this by means of an event loop which monitors the mouse, keyboard, and other sources of external events and calls the appropriate parts of the programmer's code according to actions that the user performs. The programmer no longer determines the order in which events occur. Instead, a program is divided into separate pieces that are called at unpredictable times and in an unpredictable order. By relinquishing control in this way to users, the developer creates a program that is much easier to use. Nevertheless, individual pieces of the program written by the developer still call libraries provided by the operating system to accomplish certain tasks, and the programmer must still determine the flow of control within each piece after it's called by the event loop. Application code still "sits on top" of the system.
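    The event-loop arrangement described above can be sketched as follows. The event names and handlers are invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// A bare-bones event loop: external events arrive in a queue, and the loop
// calls the appropriate piece of the programmer's code for each one. The
// programmer registers handlers but no longer controls when they run.
public class EventLoopDemo {

    static final Map<String, Runnable> handlers = new HashMap<>();
    static final Queue<String> eventQueue = new ArrayDeque<>();
    static final StringBuilder log = new StringBuilder();

    // The programmer supplies code; the loop decides when it runs.
    static void on(String event, Runnable handler) { handlers.put(event, handler); }
    static void post(String event) { eventQueue.add(event); }

    static String run() {
        while (!eventQueue.isEmpty()) {
            Runnable h = handlers.get(eventQueue.poll());
            if (h != null) h.run(); // control passes to application code here
        }
        return log.toString();
    }

    public static void main(String[] args) {
        on("mouse-click", () -> log.append("click;"));
        on("key-press", () -> log.append("key;"));
        post("mouse-click");
        post("key-press");
        System.out.println(run()); // click;key;
    }
}
```

    Note that within each handler the programmer still determines the flow of control; only the order in which handlers fire has been ceded to the event source.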

    Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application. The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.

    Application frameworks reduce the total amount of code that a programmer has to write from scratch. However, because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit. The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).

    A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.

    Thus, as is explained above, a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.

    There are three main differences between frameworks and class libraries:

    • Behavior versus protocol. Class libraries are essentially collections of behaviors that one can call when one wants those individual behaviors in a program. A framework, on the other hand, provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.

    • Call versus override. With a class library, the code the programmer writes instantiates objects and calls their member functions. It's possible to instantiate and call objects in the same way with a framework (i.e., to treat the framework as a class library), but to take full advantage of a framework's reusable design, a programmer typically writes code that overrides and is called by the framework. The framework manages the flow of control among its objects. Writing a program involves dividing responsibilities among the various pieces of software that are called by the framework rather than specifying how the different pieces should work together.

    • Implementation versus design. With class libraries, programmers reuse only implementations, whereas with frameworks, they reuse design. A framework embodies the way a family of related programs or pieces of software work. It represents a generic design solution that can be adapted to a variety of specific problems in a given domain.

    For example, a single framework can embody the way a user interface works, even though two different user interfaces created with the same framework might solve quite different interface problems.
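    The "call versus override" difference can be illustrated with a toy template-method sketch; the `Application`/`ChartApplication` names and the window-handling steps are invented for illustration.

```java
// The framework owns the flow of control and calls back into code that the
// programmer supplies by overriding a hook with default behavior.
public class FrameworkDemo {

    // The framework: a generic application skeleton.
    static abstract class Application {
        final StringBuilder trace = new StringBuilder();

        // The framework, not the programmer, fixes the order of these steps.
        final String run() {
            trace.append("open-window;");
            draw();                       // framework calls application code
            trace.append("close-window;");
            return trace.toString();
        }

        // Default behavior that a specific application may override.
        void draw() { trace.append("draw-nothing;"); }
    }

    // The programmer's code: only the hook is supplied; run() is inherited.
    static class ChartApplication extends Application {
        @Override void draw() { trace.append("draw-chart;"); }
    }

    public static void main(String[] args) {
        System.out.println(new ChartApplication().run());
        // open-window;draw-chart;close-window;
    }
}
```

    The programmer never calls `draw` directly; the framework does, which is the inversion of control that distinguishes a framework from a class library.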

    Thus, through the development of frameworks for solutions to various problems and programming tasks, significant reductions in the design and development effort for software can be achieved. A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation.

    Information on these products is available in T. Berners-Lee, D. Connoly, "RFC Hypertext Markup Language - " (Nov. ); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J.C. Mogul, "Hypertext Transfer Protocol -- HTTP/ HTTP Working Group Internet Draft" (May 2, ). HTML is a simple data format used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.

    HTML has been in use by the World-Wide Web global information initiative since . HTML is an application of ISO Standard ; Information Processing Text and Office Systems; Standard Generalized Markup Language (SGML).

    To date, Web development tools have been limited in their ability to create dynamic Web applications which span from client to server and interoperate with existing computing resources. Until recently, HTML has been the dominant technology used in development of Web-based solutions. However, HTML has proven to be inadequate in the following areas:

    • Poor performance;

    • Restricted user interface capabilities;

    • Can only produce static Web pages;

    • Lack of interoperability with existing applications and data; and

    • Inability to scale.

    Sun Microsystem's Java language solves many of the client-side problems by:

    • Improving performance on the client side;

    • Enabling the creation of dynamic, real-time Web applications; and

    • Providing the ability to create a wide variety of user interface components.

    With Java, developers can create robust User Interface (UI) components. Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved. Unlike HTML, Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance. Dynamic, real-time Web pages can be created. Using the above-mentioned custom UI components, dynamic Web pages can also be created.

    Sun's Java language has emerged as an industry-recognized language for "programming the Internet." Sun defines Java as: "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language. Java supports programming for the Internet in the form of platform-independent Java applets." Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API) allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g., Netscape Navigator) by copying code from the server to client. From a language standpoint, Java's core feature set is based on C++. Sun's Java literature states that Java is basically, "C++ with extensions from Objective C for more dynamic method resolution."

    Another technology that provides similar function to JAVA is provided by Microsoft and ActiveX Technologies, to give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers. ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content. The tools use Internet standards, work on multiple platforms, and are being supported by over companies. The group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages. ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta." ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications. One of ordinary skill in the art readily recognizes that ActiveX could be substituted for JAVA without undue experimentation to practice the invention.

    One embodiment of the present invention includes three different, but complementary, dimensions that together provide a framework which can be used in assessing and rating the IT operations of an organization. The following three dimensions constitute the framework of the present invention: 1) Operations Environment Dimension, 2) Capability Dimension, and 3) Maturity Dimension.

    The first dimension describes and organizes the standard operational activities that any IT organization should perform. The second dimension provides a context for evaluating the performance quality of these operational activities. This dimension specifies the qualitative characteristics of an operations environment and orders these characteristics on a scale denoting rising capability. The final dimension uses this capability scale and outlines a method for deriving a capability rating for specific IT process groups and the entire organization.

    The Operations Environment and Capability dimensions provide the foundation for determining the quality or capability level of the organization's IT operations. The Operations Environment dimension can be viewed as a descriptive mapping of a model operations environment. In a similar manner, the Capability dimension can be construed as a qualitative mapping of a model operations environment. The Maturity dimension builds on the foundation set by these two dimensions to provide a method for rating the maturity level of the entire IT organization.

    Figure 2 is a flow chart illustrating the various steps associated with the different dimensions of the present invention. As shown, a plurality of process areas of an operations organization are first defined in terms of either a goal or a purpose in a first operation. The process areas are then grouped into categories in a subsequent operation. It should be noted that the categories are grouped in terms of process areas having common characteristics.

    Next, process capabilities are received for the process areas of the operations organization. Such data may be generated via a maturity questionnaire which includes a set of questions about the operations environment that sample the base practices in each process area of the present invention. The questionnaire may be used to obtain information on the capability of the IT organization, or of a specific IT area or project. Thereafter, category capabilities are calculated for the categories of the process areas. A maturity of the operations organization is subsequently determined based on the category capabilities of the categories.
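    The sequence of calculation steps just described can be sketched in code. The questionnaire scores, the grouping of process areas into categories, the 0-to-5 capability scale, and the aggregation rules (mean per category, minimum across categories) are all illustrative assumptions for this sketch; the text does not prescribe specific formulas or numbers.

```python
from statistics import mean

# Hypothetical questionnaire results: each process area receives a capability
# score on an assumed 0-5 scale.  Names and values are illustrative only.
process_capabilities = {
    "Service Level Management": 3.0,
    "Incident Management": 4.0,
    "Change Control": 2.0,
    "Capacity Planning": 3.0,
}

# Assumed grouping of process areas into categories sharing common
# characteristics (the mapping itself is not specified by the text).
categories = {
    "Service Management": ["Service Level Management", "Incident Management"],
    "Managing Change": ["Change Control"],
    "IT Operations Planning": ["Capacity Planning"],
}

def category_capability(area_names):
    """Aggregate a category's capability as the mean of its process areas."""
    return mean(process_capabilities[name] for name in area_names)

category_capabilities = {
    name: category_capability(areas) for name, areas in categories.items()
}

# Derive organizational maturity from the category capabilities.  Taking the
# minimum (so one weak category caps overall maturity) is an assumption; any
# monotone aggregation rule could be substituted here.
maturity = min(category_capabilities.values())
```

Under these assumed inputs, Service Management averages 3.5 while Managing Change scores 2.0, so the organization's maturity is capped at 2.0 by its weakest category.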

    The user-specified or measured parameters, i.e., the capability of each of the process areas, may be inputted by any input device, such as the keyboard, the mouse, the microphone, a touch screen (not shown), or anything else, such as an input port, that is capable of relaying such information. Further, the definitions, grouping, calculations and determinations may be carried out manually or via the CPU, which in turn may be governed by a computer program stored on a computer readable medium, i.e., the RAM, the ROM, the disk storage units, and/or anything else capable of storing the computer program. In the alternative, dedicated hardware such as an application specific integrated circuit (ASIC) may be employed to accomplish the same. As an option, any one or more of the definitions, grouping and determinations may be carried out manually or in combination with the computer.

    Further, the outputting of the determination of the maturity of the operations organization may be effected by way of the display, the speaker, a printer (not shown) or any other output mechanism capable of delivering the output to the user. It should be understood that the foregoing components need not be resident on a single computer, but may also be components of a networked client and/or a server.

    Operations Environment Dimension

    The Operations Environment Dimension is characterized by a set of process areas that are fundamental to the effective technical execution of an operations environment. More particularly, each process is characterized by its goals and purpose, which are the essential measurable objectives of a process. Each process area has a measurable purpose statement, which describes what has to be achieved in order to attain the defined purpose of the process area.

    In the present description, goals refer to a summary of the base practices of a process area that can be used to determine whether an organization or project has effectively implemented the process area. The goals signify the scope, boundaries, and intent of each process area.

    The process goals and purpose may be achieved in an IT organization through various lower-level activities, such as tasks and practices that are carried out to produce work products. These performed tasks, activities and practices, and the characteristics of the work products produced, are the indicators that demonstrate whether the specific process goals or purpose are being achieved.

    In the present description, a work product describes evidence of base practice implementation, for example a completed change control request, a resolved trouble ticket, and/or a service level agreement (SLA) report.

    The operations environment is partitioned into three levels: Process Categories, Process Areas and Base Practices, which reflect processes within any IT organization. Figure 3 depicts and summarizes the relationship of the Process Categories, Process Areas, and Base Practices of the Operations Environment Dimension. This breakdown provides a grouping by type of activity. The activities characterize the performance of a process. The three-level hierarchy is described as follows.

    Process Categories

    In the present description, a Process Category has a defined purpose and measurable goals, and consists of a logically related set of Process Areas that collectively address that purpose and those goals in the same general area of activity.
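    The containment relationship just defined, in which a Process Category holds Process Areas that in turn hold Base Practices and their work products, can be sketched as a minimal data model. The field names and the example instance below are illustrative assumptions, not taken from the text.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BasePractice:
    name: str
    # Work products serve as evidence that the base practice is implemented.
    work_products: List[str] = field(default_factory=list)

@dataclass
class ProcessArea:
    name: str
    purpose: str                      # measurable purpose statement
    goals: List[str] = field(default_factory=list)
    base_practices: List[BasePractice] = field(default_factory=list)

@dataclass
class ProcessCategory:
    name: str
    purpose: str
    process_areas: List[ProcessArea] = field(default_factory=list)

# Hypothetical instance of one of the four categories named in the text.
service_mgmt = ProcessCategory(
    name="Service Management",
    purpose="Organize service-related Process Areas by common IT function",
    process_areas=[
        ProcessArea(
            name="Service Level Management",
            purpose="Define, monitor, and report on service level agreements",
            goals=["SLAs are defined and tracked"],
            base_practices=[
                BasePractice("Produce SLA reports", ["SLA report"]),
            ],
        )
    ],
)
```

Modeling the hierarchy this way makes the later capability roll-up natural: scores attach to Process Areas, and each ProcessCategory aggregates over its `process_areas` list.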

    The purpose of Process Categories is to organize Process Areas according to common IT functional characteristics. There are four Process Categories defined in the present invention: Service Management, Systems Management, Managing Change, and IT Operations Planning. The Process Categories are described as follows:


    Critical Information Infrastructures Security


    This book constitutes the thoroughly refereed post-conference
    proceedings of the Second International Workshop on Critical Information
    Infrastructures Security, CRITIS, held in Benalmadena-Costa, Spain,
    in October in conjunction with ITCIP, the first conference on
    Information Technology for Critical Infrastructure Protection.

    The 29 revised full papers presented were carefully reviewed and
    selected from a total of 75 submissions. The papers address all
    security-related heterogeneous aspects of critical information
    infrastructures and are organized in topical sections on R&D agenda,
    communication risk and assurance, code of practice and metrics,
    information sharing and exchange, continuity of services and resiliency,
    SCADA and embedded security, threats and attacks modeling, as well as
    information exchange and modeling.

    Keywords

    access control, ad hoc networks, attack modeling, critical information systems, critical infrastructures, data security, denial of service, dependability, emergency management, fingerprinting, grid security, identity theft, impact analysis, information technology security

    Editors

    • Javier Lopez
    • Bernhard M. Hämmerli

    1. Department of Computer Science, University of Malaga, Málaga, Spain
    2. Hochschule Technik und Architektur Luzern (HTA), Luzern, Switzerland

    Bibliographic information

    • Copyright Information: Springer-Verlag Berlin Heidelberg
    • Publisher Name: Springer, Berlin, Heidelberg
    • eBook Packages: Computer Science, Computer Science (R0)
