
Analysis and Design of Web-based Information Systems

Kenji Takahashi
NTT Multimedia Communications Laboratories

Eugene Liang
College of Computing
Georgia Institute of Technology


We have developed a method for analysis and design of Web-based information systems (WBISs), and tools to support the method, WebArchitect and PilotBoat. The method and the tools focus on architectures and functions of Web sites, rather than on the appearance of each Web resource (page), such as graphics and layouts. Our goal is to efficiently develop WBISs that best support particular business processes at the lowest maintenance cost. Our method consists of two approaches, static and dynamic. We use the entity-relation (E-R) approach for the static aspects of WBISs, and the scenario approach for the dynamic aspects. The E-R analysis and design, based on the relationship management methodology (RMM) developed by Isakowitz et al., defines what the entities are and how they are related. The scenario analysis defines how Web resources are accessed, used, and changed, and by whom. The method also defines attributes of each Web resource, which are used in maintaining the resource. WebArchitect enables designers and maintainers to directly manipulate meta-level links between Web resources that are represented in a hierarchical manner. PilotBoat is a Web client that navigates and lets users collaborate through Web sites. We have applied our approaches to the WWW6 proceedings site.

1. Introduction

Recently, Web-based information systems (WBISs) have been attracting significant interest among business process practitioners as flexible and low-cost solutions for distributed collaborative work. The "Intranet" is a typical target environment for WBISs. A WBIS not only disseminates information, but also proactively interacts with users and processes their business tasks to accomplish their business goals. Thus, analysis and design of WBISs need an approach different from those for Web sites that mainly provide information uni-directionally on users' requests, such as catalog, directory, and advertisement sites. There is a lot of work on the graphical and user-interface aspects of Web site design [Nielsen, 1995; Sano, 1996]. These works emphasize the visual design of each Web page but do not provide a systematic way of designing the logical structure of Web sites as a whole. There are also design methods for Web sites, such as OOHDM [Schwabe et al., 1996] and RMM [Isakowitz et al., 1995]. These methods are useful in formally modeling kiosk-type applications that navigate users to desired information in a systematic manner. However, for users of WBISs, access to a particular piece of information is only one part of their business goals. They also need to process business data and to communicate and collaborate with colleagues by using WBISs. These formal modeling approaches do not answer important questions in analyzing and designing WBISs, such as "Whether and how do users accomplish their business goals using WBISs?", "How do WBISs process and respond to users' inputs?", and "How do users interact with other users via WBISs?" Maintenance is also a very important issue as Web sites become larger. However, there is very little research on the maintenance of Web sites, although there are several commercial tools, such as WebAnalyzer(tm) developed by InContext Corp.
These tools are useful in identifying broken links and other problems in existing Web sites but do not provide a solution to, or a way of avoiding, the problems. As software engineering research reports [Boehm, 1981; Davis, 1990; Humphrey, 1989], maintenance cost can be dramatically reduced if errors are detected in the analysis and design phases and the maintenance procedure (i.e., who changes what, and when) is defined in the early phases.

We propose a method for analyzing and designing WBISs, and describe WebArchitect and PilotBoat, tools to support the method. The method and the tools focus on architectures and functions of Web sites, rather than on the appearance of each Web resource (page), such as graphics and layouts. In other words, our concern is with what Web resources contain, how they are linked to each other, and how they are best managed to support particular business processes at the lowest maintenance cost.

Our method consists of two approaches, static and dynamic. We use the entity-relation (E-R) method for static analysis and design of WBISs, and the scenario method for dynamic analysis and design. The E-R method, based on the relationship management methodology (RMM) [Isakowitz et al., 1995], defines what the entities are and how they are related. The scenario method defines how Web resources are accessed, used, and changed, and by whom. The method also defines the attributes of each Web resource, which are used in maintaining the resource.

WebArchitect enables designers and maintainers to directly manipulate meta-level links between Web resources that are represented in a tree graph. PilotBoat is a Web client that navigates and lets users collaborate through Web sites. We have applied our approaches to several Web site development projects, including the CommerceNet Web site and WWW6 conference site. Throughout this paper, examples for development of the WWW6 conference site will be used. The conference site is a typical WBIS, which provides various services for a variety of users, such as on-line registration for attendees, agenda tracking for the organizing committee, and interactive discussions of papers between authors and readers.

2. A method of analysis and design of Web sites

Our method for the analysis and design of WBISs consists of the following activities: E-R analysis, scenario analysis, architecture design, and attribute definition (Figure 1). First, the problem domain, where a WBIS is expected to operate, is analyzed by E-R analysis. Next, scenario analysis determines how potential users interact with the WBIS to accomplish their business goals. Based on the results of these analyses, the architecture of the WBIS is designed. Then the attributes of the Web resources that constitute the WBIS are defined for maintenance. The WBIS is constructed based on the design. Finally, the WBIS is tested using the scenarios and introduced into the workplace. It continues to be maintained and revised after the introduction throughout its lifetime.

2.1 E-R Analysis Of Problem Domain

A given problem domain is analyzed by entity-relation (E-R) analysis to understand the domain and determine the scope of the target WBIS. Entities are identified and the relations between them are established. The entities and their relations are the basis of the WBIS structures. Entities are categorized into three types: agent, event, and product. We categorize them because the categorization enables us to capture and handle the dynamic nature of WBISs (i.e., "who produces what"), and to define and manage attributes specific to each type in maintenance. Agents are entities that act on and affect other entities, including organizations or groups (e.g., a company) and individuals. For example, the Organizing Committee is an agent in the WWW6 conference problem domain. Events are those which agents schedule and conduct, including meetings. For example, an Organizing Committee meeting is an event. Products are those produced by agents and resulting from events. For example, the minutes of an Organizing Committee meeting are a product. In short, an agent conducts an event that results in products (e.g., the Organizing Committee has a meeting that is recorded in minutes). This analysis can be conducted based on a preceding requirements analysis.
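The three entity types and the "an agent conducts an event that results in products" pattern could be captured in a small data model. The following sketch is illustrative only; the class and method names are ours, not part of the method or its tools:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    etype: str  # one of "agent", "event", "product"
    relations: dict = field(default_factory=dict)  # relation name -> [Entity]

    def relate(self, relation, other):
        # Establish a named relation from this entity to another.
        self.relations.setdefault(relation, []).append(other)

# The WWW6 example: the Organizing Committee (agent) conducts a meeting
# (event) that results in minutes (product).
committee = Entity("Organizing Committee", "agent")
meeting = Entity("Organizing Committee meeting", "event")
minutes = Entity("Meeting minutes", "product")

committee.relate("conducts", meeting)
meeting.relate("results in", minutes)
```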

Results from the analysis are represented in extended E-R diagrams in which agents, events, and products are depicted differently. Figure 2 shows an E-R diagram for the WWW6 conference.

Figure 2. An E-R diagram for the WWW6 conference

2.2 Scenario Analysis

Following the E-R analysis, scenario analysis is conducted to identify who the potential users are, what Web resources they need, how they visit and use the resources, and how WBISs respond to the users to achieve the users' goals. Scenario analysis is a well-accepted practice in the software engineering field [Anton et al., 1994; Dardenne, 1993; Potts, 1995; Potts et al., 1994]. We apply this technique to the analysis and design of WBISs, which are a kind of software system. The scenario analysis is conducted as follows: first, the users' goals are identified. Then scenarios are scripted to describe how to accomplish each goal in different situations. For example, three different scenarios are identified for the goal of users of the WWW6 conference site to read and discuss a paper: (1) for registered attendees accessing from the conference homepage (Table 1), (2) for non-registered users finding the paper by using an outside search engine, and (3) for registered attendees accessing from the program homepage and planning their schedules. A scenario is represented in a table, where each row represents a step in the sequence of the scenario. A row (step) has three columns: the agent, the action that the agent takes, and the Web resources involved in the action.

Scenario analysis and architecture design (described in the next section) are complementary and conducted concurrently with close cross-checks. Scenario analysis elicits navigation paths and Web resources needed to accomplish users' goals. These paths and resources become building blocks in the architecture design. Once the architecture has been designed, it can be validated by going through the scenarios. Through the analysis and design, entities and relations in the E-R diagrams are transformed into corresponding Web resources and navigation links in the architecture, respectively.
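As a rough illustration of this cross-checking, a scenario can be replayed against a designed architecture to confirm that every resource it touches exists and that every consecutive visit follows a navigation link. The function and data below are our hypothetical sketch, not part of WebArchitect:

```python
# Hypothetical sketch: validating an architecture against a scenario.
# A scenario step is (agent, action, resource); the architecture is a set
# of resources plus the navigation links between them.

def validate(scenario, resources, links):
    """Report missing resources and consecutive visits with no link."""
    problems = []
    previous = None
    for agent, action, resource in scenario:
        if resource is None:
            continue  # steps like "reads the e-mail" touch no Web resource
        if resource not in resources:
            problems.append(f"missing resource: {resource}")
        elif previous and (previous, resource) not in links:
            problems.append(f"no link: {previous} -> {resource}")
        previous = resource
    return problems

resources = {"conference home", "proceedings home", "TOC"}
links = {("conference home", "proceedings home"), ("proceedings home", "TOC")}
scenario = [
    ("attendee", "visits", "conference home"),
    ("attendee", "visits", "proceedings home"),
    ("attendee", "visits", "TOC"),
]
```

Running `validate` over all scenarios after each design change gives a cheap regression check on the architecture.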

Programs that process users' tasks, such as CGI scripts and Java(tm) applets, are also designed based on the scenarios, as in scenario-based software development [Rubin and Goldberg, 1992]. In the testing phase after the construction of WBISs, the scenarios are used as test cases.

Table 1 shows a scenario for the WWW6 proceedings site, which is called "Hyperproceedings". Hyperproceedings uses HyperNews, developed at NCSA, and lets users discuss papers via the Web as in net news systems. In this scenario, a registered attendee reads and discusses a paper with one of the authors by using Hyperproceedings. He accesses the paper from the conference homepage, reads and discusses it, and schedules himself to attend the paper presentation.

Table 1. A scenario for reading and discussing a paper on the Hyperproceedings

Step | Agent | Action | Web resources involved
-----|-------|--------|-----------------------
1 | Registered attendee | visits the Conference homepage | Conference homepage visited
2 | Registered attendee | visits the Hyperproceedings homepage | Hyperproceedings homepage visited
3 | Registered attendee | visits the table of contents (TOC) page | TOC page visited
4 | Registered attendee | finds the "searching technique" session | ("searching technique" session index item found)
5 | Registered attendee | finds the "psychic search" paper | ("psychic search" paper index item found)
6 | Registered attendee | visits the "psychic search" paper page | "psychic search" paper page visited
7 | Registered attendee | prints out and reads the paper | "psychic search" paper page printed out
8 | Registered attendee | visits the trial page to learn how to make comments | trial page visited
9 | Registered attendee | returns to the paper page | "psychic search" paper visited
10 | Registered attendee | makes a comment on the paper |
11 | Hyperproceedings | adds the comment to a thread of a discussion | a new comment page created; a new comment index item created
12 | Registered attendee | subscribes himself to be notified of responses to the comment |
13 | Hyperproceedings | adds Registered attendee to the subscriber list | a new subscriber entry created
14 | Hyperproceedings | sends Author a comment e-mail | comment e-mailed
15 | Author | receives and reads the comment e-mail |
16 | Author | visits the paper page | "psychic search" paper visited
17 | Author | visits the comment page | comment page visited
18 | Author | inputs a response to the comment |
19 | Hyperproceedings | adds the response to the thread of the discussion | a new comment page created; a new comment index item created
20 | Hyperproceedings | sends Registered attendee the response e-mail | comment e-mailed
21 | Registered attendee | receives and reads the response e-mail |
22 | Registered attendee | finds a link to the presentation session information on the paper page |
23 | Registered attendee | visits the session page on the program site | session page visited
24 | Registered attendee | stores the presentation information and schedules his personal calendar to attend the presentation | a new schedule item created

2.3 Architecture Design

Based on the scenario analysis, architectures of WBISs are designed and represented in an RMDMW (Relationship Management Data Model for WBISs) diagram. RMDMW is again based on RMDM [Isakowitz et al., 1995] and enhanced to differentiate agents, events, and products. RMDMW diagrams are evolved from the E-R diagrams. This evolution includes transforming entities and relations in E-R diagrams into Web resources and navigational links, respectively. Here designers determine (1) the navigation methods employed by users to access Web resources, and (2) ways to map the navigation methods to Web resources. There are three navigation methods: guided tour, index, and indexed guided tour. A guided tour navigates users through a series of linked resources. An index is a set of links to related resources. An indexed guided tour is a mixture of the two methods. For example, we use an index for comments on papers in Hyperproceedings. These methods are fully described in [Isakowitz et al., 1995]. A navigation method can be implemented as a part of a Web page or as an independent Web page. The decision is made based on the design policy, the length of the contents, and the number of indexed items linked from the contents. In Hyperproceedings, we implement the table of contents as an independent page, while we embed the index of comments on a paper into the paper page. Figure 3 shows the RMDMW diagram for Hyperproceedings.
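The three navigation methods can be pictured as different ways of generating links over the same set of resources; the sketch below is illustrative (the function names and file names are ours, not from RMM):

```python
# Illustrative sketch of the three navigation methods as link generators.

def guided_tour(pages):
    """A guided tour chains resources in sequence: each page links to the next."""
    return [(pages[i], pages[i + 1]) for i in range(len(pages) - 1)]

def index(index_page, pages):
    """An index is a set of links from one page to every related resource."""
    return [(index_page, p) for p in pages]

def indexed_guided_tour(index_page, pages):
    """A mixture of the two: an index into the set plus next-links along it."""
    return index(index_page, pages) + guided_tour(pages)

# Hypothetical paper pages in one session, reachable from a TOC page.
papers = ["tp14.html", "tp15.html", "tp16.html"]
```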

2.4 Attributes definition

Attributes of Web resources are defined for maintenance. There are attributes intrinsic to the type of the resource (agent, product, or event) and those common to all types. The common attributes are described below. The example attribute values are those of a paper page.
  • Title: shows the name of the Web resource (e.g., Psychic Search)
  • Managed by: describes who (and/or which program) is responsible for managing the Web resource (e.g., CGI script of Hyperproceedings, Hyperproceedings staff)
  • Access right: defines who can read and/or write to the Web resource. "r" means the read access right. "w" means the write access right. (e.g., Hyperproceedings staff "rw", Others "r")
  • Created/Modified (when, and by whom): keeps track of records about who created and modified the Web resource. (e.g., 12/9/97 by Kenji)
  • Published since: describes when the Web resource was published. (e.g., 3/7/97)
  • Expired when: describes when the Web resource expires. (e.g., 4/12/97)
  • Version: describes the version of the Web resource. (e.g., published_version_1)
  • Derived from: describes Web resource(s) from which the Web resource is derived. (e.g., "submitted_version")
  • Relevant resources: refers to relevant Web resource(s). (e.g., Panel, Poster)
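As an illustration, the common attributes of the paper page might be stored as a simple key-value record in its attribute resource. The field names follow the list above, but the representation and the `can_write` helper are our own sketch, not the paper's format:

```python
# Hypothetical attribute record for a paper page; field names follow the
# common attributes listed above.
paper_attributes = {
    "Title": "Psychic Search",
    "Managed by": "Hyperproceedings staff",
    "Access right": {"Hyperproceedings staff": "rw", "Others": "r"},
    "Published since": "3/7/97",
    "Expired when": "4/12/97",
    "Version": "published_version_1",
    "Derived from": ["submitted_version"],
}

def can_write(attrs, who):
    """Check the access-right attribute for write ("w") permission,
    falling back to the "Others" entry."""
    rights = attrs["Access right"]
    return "w" in rights.get(who, rights.get("Others", ""))
```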

2.4.1 Product

The product type of Web resources has the following intrinsic attribute. An example attribute value is that of the Organizing Committee meeting minutes.
  • Resulted from: describes the event-type resource(s) from which the product resulted. (e.g., Organizing Committee meeting)

2.4.2 Agent

The agent type of Web resources has the following intrinsic attributes. Example attribute values are those of the Organizing Committee of the WWW6 conference.
  • Representative: describes a representative of the agent. This is defined only if the agent is a set of agents. (e.g., Benay)
  • List of members: lists agent-type Web resources that belong to the agent. This is defined only if the resource is a set of agents. (e.g., Ann, Nick, Kenji)
  • Products produced: describes product-type Web resource(s) that the agent produces. (e.g., Organizing Committee meeting minutes)
  • Events related to: describes event-type Web resource(s) that the agent is related to. (e.g., Organizing Committee meeting)

2.4.3 Event

The event type of Web resources has the following intrinsic attributes. Example attribute values are those of a session in the WWW6 conference.
  • Schedule: describes the time and place of the event. (e.g., 3:00pm to 5:00pm on 4/8/97 at room A in Santa Clara Convention Center)
  • Status: describes the status of the event - whether it has not started, is in progress, or has been postponed, canceled, or finished. (e.g., not started yet)
  • List of participants: lists which agent-type Web resources participate in the event. (e.g., Authors, Attendees, Chair)
  • Announced by: describes whether and how the event is announced. (e.g., Program Committee by e-mail to Authors and Chair)
  • Reported by: describes who reports the event. (e.g., Ann)
  • Organized by: describes who organizes the event. (e.g., session chair)
  • Resulted in: describes which product-type Web resource(s) result from the event. (e.g., WWW6 report)

3. Tool support for development of WBISs

Here we describe WebArchitect and PilotBoat, tools for construction and maintenance of WBISs based on our method. Both tools use meta-level links, which are described in the next section, to relate and manipulate Web resources. Resulting WBISs are also implemented by using meta-level links. We use the WWW6 proceedings site for illustration again. We used our tools in the prototyping of the site architecture, but the version of the site in service is implemented without using meta-level links.

3.1 Meta-level Links and Attributes

A meta-level link establishes a semantic relationship between two Web resources outside of their contents. Types of meta-level links are defined based on their semantics. Thus a navigation link in an RMDMW diagram can be implemented as a meta-level link with the same type name.

Conventionally, links are represented by reference anchors (i.e., <a href ...> tags) in HTML files. Because these links themselves are part of the contents of Web resources, maintainers have to check every resource and fix errors, if any. This makes it very difficult to maintain the integrity of the relationships between Web resources in WBISs.

The meta-level link mechanism is our answer to this relation management problem. The mechanism lets users establish and manipulate a relationship between two Web resources from remote sites without the need to access and modify the contents of the resources. The mechanism uses two HTTP methods, LINK and UNLINK, and transmits the link information in Link header fields of HTTP. The LINK method establishes a meta-level link between two resources. The UNLINK method deletes a meta-level link. Because the mechanism uses only HTTP, users can work on Web resources at remote sites through firewalls, with appropriate security provisions.
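For concreteness, a client establishing a meta-level link might send a raw request like the one sketched below. LINK and UNLINK appear among the additional methods of the HTTP/1.1 specification of the time (RFC 2068), which also defines the Link header; the host and paths are the paper's illustrative URLs, and the helper function is our own:

```python
# Sketch of the raw HTTP request text a client might send to establish or
# delete a meta-level link. LINK/UNLINK are the additional methods listed
# in RFC 2068; the URLs are illustrative, not live servers.

def build_link_request(method, target_path, host, linked_url, rel):
    assert method in ("LINK", "UNLINK")
    return (
        f"{method} {target_path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f'Link: <{linked_url}>; rel="{rel}"\r\n'
        "\r\n"
    )

# Relate the paper page to the session in which it is presented.
req = build_link_request(
    "LINK", "/tech_paper/tp15.html", "proceedings_host",
    "http://program_host/session/search.html", "Presented in")
```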

In addition, the meta-level mechanism has two other advantages over HTML links in managing relationships between Web resources:

  • (1) Meta-level links can relate Web resources that are not written in HTML.
  • (2) Meta-level links can be obtained faster, because clients have to get only the headers from servers to obtain meta-level links, whereas to obtain HTML links, clients have to get the bodies of Web resources from the servers and then interpret them and extract the links.

For example, a technical paper page is related by a "Presented in" meta-level link to the session page (URL: http://program_host/session/search.html) in which the paper is presented. This link is defined as follows:

Link: <"http://program_host/session/search.html"> rel ="Presented in"

Figure 4 compares the two pairs of the paper and session pages, one linked with HTML links and the other linked with meta-level links.

Figure 4. Comparison of HTML links and meta-level links

Attributes of a Web resource are described in another Web resource that is linked by a meta-level link whose type is "Attribute." "Attribute" meta-level links should not be used to represent other relationships, and a Web resource has only one "Attribute" meta-level link. For example, the technical paper page has a meta-level link to its attributes resource (URL: http://proceedings_host/tech_paper/tp15.atr) as follows:

Link: <"http://proceedings_host/tech_paper/tp15.atr"> rel ="Attribute"

3.2 WebArchitect

WebArchitect is a tool for constructing and maintaining WBISs. It visualizes the architecture of a WBIS in a hierarchical manner, and it lets users manipulate meta-level links in a WYSIWYG way and maintain the attributes of Web resources.

Users can construct the architecture of a WBIS by drawing tree graphs based on the RMDMW diagram with simple mouse operations. This results in an overall architecture that consists of Web resources linked with meta-level links implementing the RMDMW relationships. WBISs can be constructed in both top-down and bottom-up ways using WebArchitect. In the top-down way, users construct the WBIS architecture first with empty Web resources and then fill in the body of each resource. In the bottom-up way, they prepare Web resources first, by creating new Web resources and/or reusing existing ones, and then construct the WBIS by linking the Web resources. Users can, of course, combine both ways as needed.

In maintenance, WebArchitect lets users manage the attributes of Web resources in WBISs and revise the WBIS architectures using the meta-level link functions. Using WebArchitect, users can retrieve, check, and make changes to Web attributes. WebArchitect also notifies users that changes have been made to Web resources, or that expected changes have not happened within prescribed periods.

General end users can use the visualized architecture as a navigational aid, with appropriate access control. Figure 5 shows the architecture of the WWW6 proceedings site visualized by WebArchitect.


Figure 5. The architecture of the WWW6 proceedings site visualized by WebArchitect

3.2.1 Architecture

WebArchitect consists of graphical clients and notification agents, and works with enhanced HTTP servers that comprise the targeted WBISs (Figure 6). The servers can handle the two methods for meta-level links, LINK and UNLINK. WebArchitect can handle a distributed WBIS, which consists of more than one server on different hosts. There are two types of clients: macro and micro view clients. Macro clients show an overview of the architecture of a WBIS. Micro clients provide a detailed view of a part of the architecture and functions for manipulating Web resources. The rectangular region in the macro client is displayed in detail in the micro clients (as shown in Figure 5). The two views are cooperative: if one view moves or changes, the other view behaves accordingly.

The notification agents monitor the response messages multicast by the servers using SNP (Simple Notification Protocol), which we developed. If a notification agent detects events specified by users, the agent notifies the users of the events via the prescribed communication media (e.g., e-mail, beeper, etc.). In the following sections, we describe the functions of WebArchitect.

Figure 6. Architecture of WebArchitect

3.2.2 Visualizing Web architecture

WebArchitect visualizes the architecture of WBISs as tree graphs. We use tree graphs because hierarchy is a basic structure in the architecture of any practical WBIS. Loops, however, can also appear in the architecture. We use a "back link" approach, in which tree structures are created first and then back links to nodes already created are established as they are detected. WebArchitect visualizes the Web structure in interactive and batch modes. In the interactive mode, it dynamically spreads the tree structure from a Web resource specified by users. In the batch mode, it generates the tree structure from a starting Web resource (e.g., a homepage) specified by users down to a specified depth.
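The back-link approach amounts to a depth-first traversal that records a tree edge on the first visit to a resource and a back link on any revisit. A minimal sketch, assuming the link structure is available as an adjacency map (not WebArchitect's actual code):

```python
def build_tree(start, links):
    """Spread a tree from a start resource: the first visit of a node makes
    a tree edge; a repeat visit is recorded as a back link (handles loops)."""
    tree, back, seen = [], [], {start}

    def spread(node):
        for child in links.get(node, []):
            if child in seen:
                back.append((node, child))  # already in the tree: back link
            else:
                seen.add(child)
                tree.append((node, child))  # first visit: tree edge
                spread(child)

    spread(start)
    return tree, back

# A loop: home -> toc -> tp15 -> home.
links = {"home": ["toc"], "toc": ["tp15"], "tp15": ["home"]}
tree, back = build_tree("home", links)
```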

Users can filter the view of Web architectures by meta-level link types and attributes, such as access rights or publishing and expiration dates.

3.2.3 Manipulating meta-level links

WebArchitect lets users create and change the architecture of WBISs. Users can attach, detach, connect, and move Web resources via drag-and-drop operations in the micro clients. The micro client issues LINK and UNLINK methods to the servers according to the user's operations. Operations on the architecture visualized in the client act immediately on the real architecture stored in the servers, so the visualized and stored architectures are always the same. For example, if a user moves a technical paper page currently under (i.e., linked to) the "Search" session page to be under (i.e., re-linked to) the "Cache" session page, the following methods are issued inside WebArchitect:
  • (1) UNLINK http://proceedings_host/session/search.html
        Link: <http://proceedings_host/tech_paper/tp15.html>; rel="Presented in"
  • (2) LINK http://proceedings_host/session/cache.html
        Link: <http://proceedings_host/tech_paper/tp15.html>; rel="Presented in"
(Here http://proceedings_host/session/search.html and http://proceedings_host/session/cache.html are the URLs of the Search and Cache session pages, respectively, and http://proceedings_host/tech_paper/tp15.html is the URL of the paper page.)
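The move operation can thus be read as one UNLINK against the old parent followed by one LINK against the new one. The sketch below records the issued calls instead of sending real HTTP; the `send_method` callback is a stand-in of ours, not an API from the paper:

```python
# Sketch: a "move" decomposed into the two meta-level link method calls.
# send_method(method, target_url, linked_url, rel) is a hypothetical
# stand-in for the client's HTTP layer.

def move_resource(send_method, paper_url, old_session, new_session, rel):
    send_method("UNLINK", old_session, paper_url, rel)  # detach from old parent
    send_method("LINK", new_session, paper_url, rel)    # attach to new parent

issued = []
move_resource(
    lambda *call: issued.append(call),
    "http://proceedings_host/tech_paper/tp15.html",
    "http://proceedings_host/session/search.html",
    "http://proceedings_host/session/cache.html",
    "Presented in")
```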

3.2.4 Managing attributes

WebArchitect provides functions for managing attributes to support the maintenance of WBISs. Users can display the attributes of a Web resource and change their values. Users can also search for Web resources that have a particular attribute value. For example, a maintainer can detect obsolete Web pages by checking for pages whose expiration date attributes are earlier than today. Also, as described in Section 3.2.2, maintainers can check the different views seen by different types of users by filtering the view of Web architectures by access rights. For example, only maintainers can see the "Access statistics" resources of Hyperproceedings. Maintainers also determine how the view of the architecture for public users changes over time by defining the publishing and expiration dates of the Web resources.
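The obsolete-page check reduces to filtering resources by their "Expired when" attribute; a minimal sketch (dates in ISO format for simplicity, whereas the paper's examples use m/d/yy):

```python
from datetime import date

# Sketch: find obsolete resources whose "Expired when" attribute is in the
# past. Resource names and dates are illustrative.

def expired(resources, today):
    return [name for name, attrs in resources.items()
            if "Expired when" in attrs
            and date.fromisoformat(attrs["Expired when"]) < today]

resources = {
    "tp15.html": {"Expired when": "1997-04-12"},
    "index.html": {},  # no expiration attribute: never reported
}
```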

3.2.5 Notification of changes

A user has his/her own notification agent and asks the agent to inform him/her of specific events. Users can be notified of the following events: (1) creation, revision, and deletion of Web resources, (2) establishment and deletion of meta-level links between Web resources, and (3) access to Web resources and headers.

The notification agents have two notification mechanisms: action-based and time-based. The action-based mechanism notifies users when a specified event (e.g., a change to a Web resource) has happened to a specified Web resource. Using this mechanism, the members of a WBIS development team can be efficiently coordinated and can maintain the consistency of the WBIS by knowing what changes other members have made. The time-based mechanism notifies users if a given event did not occur during a specified period. Using this mechanism, a development team can manage the development project in a timely manner by prompting scheduled tasks and by knowing how many of the scheduled Web resources have been developed. The WebArchitect clients themselves can also be notified of changes in WBISs and renew the graphs that they display according to the changes.
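The two notification rules could be checked as in the following sketch, which uses abstract integer timestamps; this is our hypothetical illustration, not the SNP-based implementation:

```python
# Sketch of the two notification rules. A rule is ("action"|"time",
# event_name, deadline_or_None); events maps an event name to the
# timestamp when it last occurred.

def check_rules(rules, events, now):
    notices = []
    for kind, name, deadline in rules:
        if kind == "action":
            # Action-based: fire as soon as the event has happened.
            if name in events:
                notices.append(f"{name} happened")
        elif kind == "time":
            # Time-based: fire if the event did NOT happen by the deadline.
            happened_in_time = name in events and events[name] <= deadline
            if now > deadline and not happened_in_time:
                notices.append(f"{name} missed deadline")
    return notices

rules = [("action", "tp15 revised", None), ("time", "toc created", 10)]
```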

The event information is multicast by the WebArchitect servers in real time using SNP. SNP is transmitted over IP multicast, which enables us to send the information to multiple agents and clients in a scalable manner [Kumar, 1995].

3.3 PilotBoat: a Meta-Level link Navigator

PilotBoat is a client that navigates users through meta-level-linked Web resources. The PilotBoat client works with existing Web browsers, such as Netscape Navigator(tm), and with the meta-level-linkable HTTP servers. Users use PilotBoat to handle meta-level links and existing browsers to display Web pages. PilotBoat can invoke and control Netscape Navigator upon users' requests. PilotBoat clients communicate with each other using IP multicasting to share the same Web resource. Figure 7 shows a screen image of PilotBoat.

Figure 7. A screen image of PilotBoat

PilotBoat has the following functions:

Navigating meta-level links: PilotBoat clients display the meta-level links that a Web resource has. Users can visit a Web resource linked via a meta-level link shown in the client by double-clicking it, and display the contents of the resource in the Web browser.

Sharing a Web resource: Users can set up a shared session using PilotBoat. In a shared session, all users joining the session receive the same Web resource from the servers: if a user asks his/her PilotBoat client to get a Web resource, the client notifies the other clients, and, prompted by the notification, the other clients get the same resource.
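The propagation in a shared session can be sketched as each client announcing its fetch to its peers, which then fetch the same URL without re-announcing (to avoid loops). The classes below are our illustration, not PilotBoat's actual design:

```python
# Sketch of shared-session propagation between clients.

class Session:
    def __init__(self):
        self.clients = []

class Client:
    def __init__(self, session):
        self.session = session
        session.clients.append(self)
        self.current = None  # URL currently displayed

    def fetch(self, url, announce=True):
        self.current = url  # stand-in for an actual HTTP GET
        if announce:
            # Notify peers, which fetch the same resource quietly.
            for peer in self.session.clients:
                if peer is not self:
                    peer.fetch(url, announce=False)

session = Session()
a, b = Client(session), Client(session)
a.fetch("http://proceedings_host/tech_paper/tp15.html")
```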

3.4 Status

We have implemented all the functions described except the sharing function of PilotBoat. The WebArchitect clients are written in Java using SubArctic [Hudson and Smith, 1996]. The enhanced server is also written in Java using JDK 1.0(tm), developed by Sun Microsystems. PilotBoat is developed using MetaCard(tm), a product of MetaCard Corporation, and the libWWW library written by Henrik Frystyk Nielsen at W3C.

4. Discussion

There are other approaches to analysis and design, as described in Section 1, including formal modeling methods and usability engineering. Our approach focuses on WBISs, which interactively process users' tasks, and therefore differs from these conventional approaches in the following points:
  • (1) it incorporates the scenario analysis to analyze the dynamic interactions among users and WBISs.
  • (2) it also supports the maintenance of WBISs.
  • (3) it is implemented as tools to enable distributed users to construct and maintain WBISs in a consistent manner using meta-level links and IP multicast-based notification.

Our approach, however, does not support the graphic design of Web resources (or pages). Although graphic design is not within the scope of this paper, it is important, and we need to incorporate it into our approach.

5. Future work

We have applied our method and tools to several analysis and design projects, followed by informal and qualitative evaluation, but we still need to conduct a formal evaluation through the entire life cycle of a WBIS. We plan to apply our method and tools to the Intranet of our laboratories, which is a WBIS distributed globally among Tokyo, Palo Alto, and Atlanta.

WebArchitect should also integrate support for analysis and design with its construction and maintenance support in a seamless manner. Our analysis and design methods extensively use diagrams and tables, which have specific formats. Thus, analysts and designers can be more productive by using functions for creating and checking diagrams and tables, such as those provided by CASE (computer-aided software engineering) tools. Furthermore, a part of the architecture of a WBIS can be automatically generated and tested based on the results of analysis and design in the integrated support environment. We plan to implement such functions for editing diagrams and tables and for generating WBIS architectures.


Acknowledgments

We deeply thank Kathryn Henniss, SLAC at Stanford University, for her leadership and passion in the WWW6 conference site project. We also thank Benay Dara-Abrams, the chair of the Organizing Committee of the WWW6 conference, with Sholink Corporation, for giving us the opportunity to participate in the project. Thanks are due to the other Organizing Committee members of the WWW6 conference; it is our pleasure and honor to work with them. Finally, we thank Masafumi Higuchi and Katsuyuki Hasebe, Hyperproceedings project members at NTT MCL, for nicely implementing the site.


References

[Anton et al., 1994] A. I. Anton, W. M. McCracken and C. Potts, "Goal Decomposition and Scenario Analysis in Business Process Engineering", Advanced Information Systems Engineering, Proc. 6th Int. Conf. CAiSE'94, Utrecht, Springer-Verlag Lecture Notes in Computer Science, 811, 1994.
[Boehm, 1981] B. Boehm, Software Engineering Economics, Prentice-Hall, Englewood Cliffs, New Jersey, 1981.
[Davis, 1990] A. Davis, Software Requirements : Analysis & Specification, Prentice-Hall, 1990.
[Dardenne, 1993] A. Dardenne, "On the Use of Scenarios in Requirements Acquisition", Oregon Technical Report, CIS-TR-93-17, 1993.
[Humphrey, 1989] W. S. Humphrey, Managing the Software Process, Addison Wesley Publishing, 1989.
[Hudson and Smith, 1996] S. Hudson and I. Smith, "Ultra-Lightweight Constraints", Proc. of ACM Sympo. On User Interface Software and Technology, pp. 147-155, 1996.
[Isakowitz et al., 1995] T. Isakowitz, E. A. Stohr, and P. Balasubramanian, "RMM: A Methodology for Structured Hypermedia Design", Comm. ACM, Vol. 38, No. 8, August 1995, pp. 34-44.
[Jacobson, 1992] I. Jacobson, Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley, 1992.
[Karat and Bennett, 1991] J. Karat and J. Bennett, "Using Scenarios in Design Meetings - A Case Study Example", in J. Karat (ed.), Taking Software Design Seriously, Academic Press, 1991.
[Kumar, 1995] V. Kumar, Mbone: Interactive Multimedia on the Internet, Macmillan Publishing, Simon & Schuster, 1995.
[Nielsen, 1995] J. Nielsen, Multimedia and Hypertext: The Internet and Beyond, Academic Press, 1995.
[Potts, 1995] C. Potts, "Using Schematic Scenarios to Understand User Needs", Proc. Sympo. Designing Interactive Systems: Processes, Practices, Methods & Techniques (DIS'95), August 23-25, 1995, pp. 247-256.
[Potts et al., 1994] C. Potts, K. Takahashi and A. I. Anton, "Inquiry-based Scenario Analysis of System Requirements", IEEE Software, ??, March 1994, pp. 21-32.
[Rubin and Goldberg, 1992] K. S. Rubin and A. Goldberg, "Object Behavior Analysis", Comm. ACM, Vol. 35, No. 9, 1992, pp. 48-62.
[Sano, 1996] D. Sano, Designing Large-scale Web Sites, Wiley Computer Publishing, 1996.
[Schwabe et al., 1996] D. Schwabe, G. Rossi, and S. D. J. Barbosa, "Systematic Hypermedia Application Design with OOHDM", Proc. Hypertext '96, 1996, pp. 116 - 128.

URL references

  • HyperNews (http://union.ncsa.uiuc.edu/HyperNews/get/hypernews.html)
  • libwww (http://www.w3.org/pub/WWW/Library/Activity.html)


Quality of Web-based information systems

Kazimierz Worwa (1), Jerzy Stanik (2)
  1. Kazimierz Worwa, PhD, DSc, Associate Professor, Institute of Information Systems, Faculty of Cybernetics, Military University of Technology, Warsaw, Poland. Postal address: Military University of Technology, ul. Kaliskiego 2, 00-908 Warszawa, Poland. Author's Personal/Organizational Website: www.isi.wat.edu.pl. Email: kworwa@wat.edu.pl
    (please use this address to correspond with the authors) Prof. Kazimierz Worwa is an associate professor at the Faculty of Cybernetics of the Military University of Technology in Warsaw, Poland. His areas of interest are software engineering and software reliability modeling.
  2. Jerzy Stanik, PhD, Assistant Professor, Institute of Information Systems, Faculty of Cybernetics, Military University of Technology, Warsaw, Poland. Postal address: Military University of Technology, ul. Kaliskiego 2, 00-908 Warszawa, Poland. Author's Personal/Organizational Website: www.isi.wat.edu.pl. Email: jstanik@wat.edu.pl
    Dr. Jerzy Stanik is the head of the Division of Information Management Systems at the Faculty of Cybernetics of the Military University of Technology in Warsaw, Poland. His research interests include information systems used in management and analytical methods used in management.

Journal of Internet Banking and Commerce


Abstract

The scope and complexity of current World Wide Web applications vary widely: from small-scale, short-lived services to large-scale enterprise applications distributed across the Internet and corporate intranets and extranets. As Web applications have evolved, the demands placed on Web-based systems and the complexity of designing, developing, maintaining, and managing these systems have increased significantly. Such systems provide vast, dynamic information in multiple media formats (graphics, images, and video), and their design demands a balance among information content, aesthetics, and performance. In line with the growth of the Internet and World Wide Web, there has been some research on quality issues of Web-based software systems. This paper discusses the differences between Web-based information systems and conventional information systems from the perspective of software quality.


Keywords

Web-based information systems; Web engineering; quality of Web-based software; software quality modeling


1. Introduction

In recent years, the Internet and World Wide Web (WWW) have become ubiquitous, surpassing all other technological developments in our history. They have also grown rapidly in their scope and extent of use, significantly affecting all aspects of our lives. Industries such as manufacturing, travel and tourism, banking, education, and government are Web-enabled to improve and enhance their operations. E-commerce has expanded quickly, cutting across national boundaries. Even traditional legacy information and database systems have migrated to the Web. Advances in wireless technologies and Web-enabled appliances are triggering a new wave of mobile Web applications. As a result, we increasingly depend on a range of Web applications. Using Web technologies, an organization can reach out to customers and provide them with not only general information about its products or services but also the opportunity to perform interactive business transactions. Organizations investing in Web technologies and applications are looking forward to realizing the benefits of these investments; however, this will not be possible without appropriate tools for measuring the quality of their Web sites. The quality of a Web-based information system has therefore become a major concern of the system's users, its developers, and the managers of the corresponding company.

2. Evolution of Web-based information systems

The commercial use of the Internet and Web has grown explosively in the past five years. In that time, the Internet has evolved from primarily being a communications medium (email, files, newsgroups, and chat rooms) to a vehicle for distributing information to a full-fledged market channel for e-commerce. Web sites that once simply displayed information for visitors have become interactive, highly functional systems that let many types of businesses interact with many types of users.
The common first stage in the development of Web-based information systems is an information kiosk where marketing-type information is presented to boost prestige, enhance brand identification, or encourage conventional sales activity.
The second stage is to open up the one-way information kiosk system using, say, Web forms, so that visitors can place orders and make enquiries from the Web pages of interest, treating the Web site as an electronic mail-order catalogue.
The third stage is a rethink of the traditional boundaries between the customer and the enterprise. For example, a customer might be able to go beyond browsing an online catalogue and clicking buttons to select products, moving instead into examining production schedules.
They might then enter customized changes to product designs and could see the impact of their requirements on production schedules, delivery times and contribution to overall cost.
The scope and complexity of current Web applications vary widely: from small-scale, short-lived services to large-scale enterprise applications distributed across the Internet and corporate intranets and extranets. Web-based applications can be grouped into seven categories (Ginige and Murugesan, 2001):
• informational, e.g. online newspapers, product catalogs, newsletters, service manuals, online classifieds, online electronic books;
• interactive, e.g. registration forms, customized information, user-provided presentation, online games;
• transactional, e.g. electronic shopping, ordering goods and services, online banking;
• workflow, e.g. online planning and scheduling systems, inventory management, status monitoring;
• collaborative work environments, e.g. distributed authoring systems, collaborative design tools;
• online communities and marketplaces, e.g. chat groups, recommender systems that recommend products or services, online marketplaces, online auctions;
• Web portals, e.g. electronic shopping malls, online intermediaries.
As Web applications have evolved, the demands placed on Web-based systems and the complexity of designing, developing, maintaining, and managing these systems have also increased significantly. For example, Web sites such as those for the 2000 Sydney Olympics, 1998 Nagano Olympics, and Wimbledon received hundreds of thousands of hits per minute (Ginige and Murugesan, 2001). They provided vast, dynamic information in multiple media formats (graphics, images, and video). Web site design for these and many other applications demands a balance among information content, aesthetics, and performance.
Changes in the use of the Internet and Web-based information systems have had an enormous impact on software engineering (Offutt, 2002). As the use of the Internet and Web has grown, the amount, type, and quality of software necessary for powering Web sites has also grown. Just a few years ago, Web sites were primarily composed of static HTML files, so-called “soft brochures,” usually created by a single webmaster who used HTML, JavaScript, and simple CGI scripts to present information and obtain data from visitors with forms. Early Web-based information systems had a typical client-server configuration in which the client is a Web browser that people use to visit Web sites residing on other computers, the servers; a software package called a Web server sends the HTML files to the client. HTML files contain JavaScript, small pieces of code that are interpreted on the client. HTML forms generate data that are sent back to the server to be processed by CGI programs. This very simple model of operation can support relatively small Web sites. It uses small-scale software, offers very little security, usually cannot support much traffic, and offers limited functionality. This was called a two-tier system because two separate computers were involved. The Web’s function and structure have changed drastically, particularly in the past two years, yet most software engineering researchers, educators, and practitioners have not yet grasped how fully this change affects engineering principles and processes. Web sites are now fully functional software systems that provide business-to-customer ecommerce, business-to-business ecommerce, and many services to many users. Instead of referring to visitors to Web sites, we now refer to users, implying much interaction.
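
To make the two-tier model concrete, here is a minimal, hypothetical sketch of CGI-style form handling: the browser submits an HTML form, and a server-side handler decodes the encoded fields and returns an HTML reply. The form fields and handler name are illustrative; real CGI programs read the query from the process environment and standard input.

```python
# Minimal sketch of two-tier form handling. The browser renders FORM and
# submits it; a CGI-style handler on the server decodes the query string
# and builds an HTML reply. Field names are illustrative assumptions.
from urllib.parse import parse_qs

FORM = """<form action="/cgi-bin/order.cgi" method="get">
  <input name="item"> <input name="qty"> <input type="submit">
</form>"""

def handle_request(query_string):
    """Server side: decode the form fields and return an HTML page."""
    fields = parse_qs(query_string)
    item = fields.get("item", ["?"])[0]
    qty = fields.get("qty", ["0"])[0]
    return f"<html><body>Ordered {qty} x {item}</body></html>"

# After the user submits, the browser sends an encoded query like this:
reply = handle_request("item=catalog&qty=2")
```

Note that presentation (the form) and processing (the handler) live in one script here, which is exactly the coupling the later N-tier designs set out to remove.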
Instead of Webmasters, large Web sites must employ Web managers leading diverse teams of IT professionals that include programmers, database administrators, network administrators, usability engineers, graphics designers, security experts, marketers, and others. This team uses diverse technologies including several varieties of Java (Java, Servlets, Enterprise JavaBeans, applets, and Java Server Pages), HTML, JavaScript, XML, UML, and many others. The growing use of third-party software components and middleware represents one of the biggest changes. The technology has changed because the old two-tier model did not support the high-quality requirements of Web software applications. It fails on security: crackers need only penetrate one layer of security on a computer that is, by definition, open to the world, to gain access to all data files. It fails on scalability and maintainability because as Web sites grow, a two-tier model cannot effectively separate presentation from business logic, and the applications thus become cumbersome and hard to modify. It fails on reliability: whereas previous Web software generations relied on CGI programs, usually written in Perl, many developers have found that large, complex Perl programs can be hard to program correctly, understand, or modify. Finally, it fails on availability because hosting a site on one Web server imposes a bottleneck: any server problems will hinder user access to the Web site. Current Web site software configuration has expanded first to a three-tier model and now more generally to an “N-tier” model. Clients still use a browser to visit Web sites, which are hosted and delivered by Web servers. But to increase quality attributes such as security, reliability, availability, and scalability, as well as functionality, most of the software has been moved to a separate computer - the application server.
Indeed, on large Web sites, a collection of application servers typically operates in parallel, and the application servers interact with one or more database servers that may run a commercial database. The client-server interaction, as before, uses the Internet, but middleware - software that handles communication, data translation and process distribution - often connects the Web and application servers, and the application and database servers. New Web software languages such as Java are easier to modify and program correctly and permit more extensive reuse, features that enhance maintainability, reliability, and scalability. The N-tier model also permits additional security layers between potential crackers and the data and application business logic. The ability to separate presentation (typically on the Web server tier) from the business logic (on the application server tier) makes Web software easier to maintain and to expand in terms of customers serviced and services offered. Distributed computing, particularly for the application servers, allows the Web application to tolerate failures and handle more customers, and allows developers to simplify the software design. Java Server Pages (JSPs) and Enterprise JavaBeans (EJBs) let developers separate presentation from logic, which helps make software more maintainable. To further subdivide the work, developers can create a software dispatcher that accepts requests on the Web server tier, then forwards the request to an appropriate hardware/software component on the application tier. Such design strategies lead to more reliable software and more scalable Web sites. Of course the technology keeps changing, with the latest major addition being Microsoft’s .NET. It is too early to say what effect .NET will have, although it does not seem to provide additional abilities beyond what is already available.
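
The dispatcher idea described above can be sketched as a small front controller: the Web tier accepts a request path and forwards it to a registered application-tier handler, keeping presentation separate from business logic. The route names and handler below are hypothetical illustrations, not a real framework API.

```python
# Sketch of the dispatcher pattern: the Web tier accepts a request and
# forwards it to a registered application-tier handler. Routes, handler
# names, and the in-memory "database" are illustrative assumptions.

class Dispatcher:
    def __init__(self):
        self.routes = {}

    def register(self, path, handler):
        """Associate a request path with an application-tier handler."""
        self.routes[path] = handler

    def dispatch(self, path, params):
        """Forward the request; return an HTTP-like (status, body) pair."""
        handler = self.routes.get(path)
        if handler is None:
            return 404, "not found"
        return 200, handler(params)

# Application-tier business logic, independent of any presentation code:
def account_balance(params):
    balances = {"alice": 120, "bob": 45}   # stand-in for a database tier
    return balances.get(params["user"], 0)

d = Dispatcher()
d.register("/balance", account_balance)
status, body = d.dispatch("/balance", {"user": "alice"})
```

Because handlers are registered by path, new application-tier components can be added, replaced, or moved to other machines without touching the Web tier, which is the maintainability and scalability benefit the text attributes to N-tier designs.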
Clearly, modern Web sites’ increased functionality creates a need for increasingly complex software, system integration and design strategies, and development processes.
Contrary to the perception of some software developers and software engineering professionals, Web engineering isn’t a clone of software engineering, although both involve programming and software development. While Web engineering adopts and encompasses many software engineering principles, it incorporates many new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based systems. Developing Web-based systems is significantly different from traditional software development and poses many additional challenges. There are subtle differences in the nature and life cycle of Web-based and software systems and the way in which they’re developed and maintained. Web development is a mixture between print publishing and software development, between marketing and computing, between internal communications and external relations, and between art and technology (Ginige and Murugesan, 2001). Building a complex Web-based system calls for knowledge and expertise from many different disciplines and requires a team of diverse people with expertise in different areas. As a result, Web engineering is multidisciplinary and encompasses contributions from areas such as systems analysis and design; software engineering; hypermedia and hypertext engineering; requirements engineering; human-computer interaction; user interface development; information engineering; information indexing and retrieval; testing, modelling, and simulation; project management; and graphic design and presentation.
Successful Web-based system development and deployment is a process, not just an event as currently perceived and practiced by many developers and academics. Web engineering is a holistic approach, and it deals with all aspects of Web-based systems development, starting from conception and development to implementation, performance evaluation, and continual maintenance. Building and deploying a Web-based system involves multiple, iterative steps. Most Web-based systems continuously evolve to keep the information current and to meet user needs. Web engineering represents a proactive approach to creating Web applications. Web engineering methodologies have been successfully applied in a number of Web applications, for example, the ABC Internet College, 2000 Sydney Olympics, 1998 Nagano Winter Olympics, Vienna International Festival (Ginige and Murugesan, 2001).
Several factors inherent to Web development contribute to the quality problem. Developers build Web-based software systems by integrating numerous diverse components from disparate sources, including custom-built special-purpose applications, customized off-the-shelf software components, and third-party products. In such an environment, systems designers choose from potentially numerous components, and they need information about the various components' suitability to make informed decisions about the software’s required quality attributes. Much of the new complexity found with Web-based applications also results from how the different software components are integrated. Not only is the source unavailable for most of the components, the executables might be hosted on computers at remote, even competing organizations. To ensure high quality for Web systems composed of very loosely coupled components, we need novel techniques to achieve and evaluate these components’ connections. Finally, Web-based software offers the significant advantage of allowing data to be transferred among completely different types of software components that reside and execute on different computers. However, using multiple programming languages and building complex business applications complicates the flows of data through the various Web software pieces. When combined with the requirements to keep data persistent through user sessions, persistent across sessions, and shared among sessions, the list of abilities unique to Web software begins to get very long. Thus, software developers and managers working on Web software have encountered many new challenges. Although it is obvious that we struggle to keep up with the technology, less obvious is our difficulty in understanding just how Web software development is different, and how to adapt existing processes and procedures to this new type of software.

3. Quality criteria for Web-based information systems

We evaluate software by measuring the quality of attributes such as reliability, usability, and maintainability, yet academics often fail to acknowledge that the basic economics behind software production has a strong impact on the development process (Offutt, 2002). Although the field of software engineering has spent years developing processes and technologies to improve software quality attributes, most software companies have had little financial motivation to improve their software’s quality. Software contractors receive payment regardless of the delivered software’s quality and, in fact, are often given additional resources to correct problems of their own making. So-called “shrink wrap” vendors are driven almost entirely by time-to-market; it is often more lucrative to deliver poor-quality products sooner than high-quality products later. They can deliver bug fixes as new “releases” that are sold to generate more revenue for the company. For most application types, commercial developers have traditionally had little motivation to produce high-quality software. Web-based software, however, raises new economic issues. Unlike many software contractors, Web application developers only see a return on their investment if their Web sites satisfy customers’ needs. And unlike many software vendors, if a new company puts up a competitive site of higher quality, customers will almost immediately shift their business to the new site once they discover it. Thus, instead of “sooner but worse,” it is often advantageous to be “later and better.” Despite discussions of “sticky Web sites” and development of mechanisms to encourage users to return, thus far the only mechanism that brings repeat users to Web sites has been high quality. This will likely remain true for the foreseeable future. In software development, a process driver is a factor that strongly influences the process used to develop the software.
Thus, if software must have very high reliability, the development process must be adapted to ensure that the software works well. Web software development managers and practitioners identify the following seven quality criteria as the most important for Web application success (Offutt, 2002):
• reliability,
• usability,
• security,
• availability,
• scalability,
• maintainability,
• time-to-market.
Of course, this is hardly a complete list of important or even relevant quality attributes, but it provides a solid basis for discussion. Certainly speed of execution is also important, but network factors influence this more than software does, and other important quality attributes such as customer service, product quality, price, and delivery stem from human and organizational rather than software factors.


Reliability

Extensive research literature and a collection of commercial tools have been devoted to testing, ensuring, assuring, and measuring software reliability. Safety-critical software applications such as telecommunications, aerospace, and medical devices demand highly reliable software, but although many researchers are reluctant to admit it, most software currently produced does not need to be highly reliable. Many businesses’ commercial success depends on Web software, however - if the software does not work reliably, the businesses will not succeed. The user base for Web software is very large and expects Web applications to work as reliably as if they were going to the grocery store or calling to order from a catalog. Moreover, if a Web application does not work well, the users do not have to drive further to reach another store; they can simply point their browser to a different URL. Web sites that depend on unreliable software will lose customers, and the businesses could lose much money. Companies that want to do business over the Web must spend resources to ensure high reliability. Indeed, they cannot afford not to.


Usability

Web application users have grown to expect easy Web transactions, as simple as buying a product at a store. Although much wisdom exists on how to develop usable software and Web sites, many Web sites still do not meet most customers’ usability expectations. This, coupled with the fact that customers exhibit little site loyalty, means unusable Web sites will not be used - customers will switch to more usable Web sites as soon as they come online.


Security

We have all heard about Web sites being cracked and private customer information distributed or held for ransom. This is only one example of the many potential security flaws in Web software applications. When the Web functioned primarily to distribute online brochures, security breaches had relatively small consequences. Today, however, the breach of a company’s Web site can cause significant revenue losses, large repair costs, legal consequences, and loss of credibility with customers. Web software applications must therefore handle customer data and other electronic information as securely as possible. Software security is one of the fastest growing research areas in computer science, but Web software developers currently face a huge shortfall in both available knowledge and skilled personnel.


Availability

In our grandparents’ time, if a shopkeeper in a small town wanted to take a lunch break, he would simply put a sign on the front door that said “back at 1:00.” Although today’s customers expect to be able to shop during lunchtime, we do not expect stores to be open after midnight or on holidays. On the Web, customers not only expect availability 24 hours a day, seven days a week, they expect the Web site to be operational every day of the year: “24/7/365.” Availability means more than just being up and running 24/7/365; the Web software must also be accessible to diverse browsers. In the seemingly never-ending browser wars of the past few years, some software vendors actively sought to make sure their software would not work under competitors’ browsers. By using features only available for one browser or on one platform, Web software developers become “foot soldiers” in the browser wars, sometimes unwittingly. To be available in this sense, Web sites must adapt their presentations to work with all browsers, which requires significantly more knowledge and effort on developers’ part.
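
The 24/7/365 expectation can be made concrete with simple availability arithmetic: a target availability fraction implies a fixed downtime budget per year. The sketch below is plain arithmetic, not a figure from the literature cited here.

```python
# Back-of-the-envelope arithmetic for the 24/7/365 expectation: a target
# availability fraction implies a yearly downtime budget. Plain math only.

def downtime_minutes_per_year(availability):
    """Allowed downtime in minutes per year for a given availability."""
    minutes_per_year = 365 * 24 * 60          # 525,600 minutes
    return (1.0 - availability) * minutes_per_year

# Even "three nines" (99.9%) leaves a budget of roughly 8.8 hours a year:
budget = downtime_minutes_per_year(0.999)
```

Framing availability as a downtime budget makes it a measurable engineering target rather than a slogan: every maintenance window and server failure draws against the same budget.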


Scalability

We must engineer Web software applications to be able to grow quickly in terms of both how many users they can service and how many services they can offer. The need for scalability has driven many technology innovations of the past few years. The industry has developed new software languages, design strategies, and communication and data transfer protocols in large part to allow Web sites to grow as needed. Scalability also directly influences other attributes. Any programming teacher knows that any design will work for small classroom exercises, but large software applications require discipline and creativity. Likewise, as Web sites grow, small software weaknesses that had no initial noticeable effects can lead to failures (reliability problems), usability problems, and security breaches. Designing and building Web software applications that scale well represents one of today’s most interesting and important software development challenges.


Maintainability

One novel aspect of Web-based software systems is the frequency of new releases. Traditional software involves marketing, sales, and shipping or even personal installation at customers’ sites. Because this process is expensive, software manufacturers usually collect maintenance modifications over time and distribute them to customers simultaneously. For a software product released today, developers will start collecting a list of necessary changes. For a simple change (say, changing a button’s label), the modification might be made immediately. But the delay in releases means that customers won’t get more complex (and likely important) modifications for months, perhaps years. Web-based software, however, gives customers immediate access to maintenance updates - both small changes (such as changing the label on a button) and critical upgrades can be installed immediately. Instead of maintenance cycles of months or years, Web sites can have maintenance cycles of days or even hours. Although other software applications have high maintenance requirements, and some research has focused on “on-the-fly” maintenance for specialized applications, frequent maintenance has never before been necessary for such a quantity of commercial software. Another ramification of the increased update rate has to do with compatibility. Users do not always upgrade their software; hence, software vendors must ensure compatibility between new and old versions. Companies can control the distribution of Web software to eliminate that need, though Web applications must still be able to run correctly on several Web browsers and multiple versions of each browser. Another possible consequence of the rapid update rate is that developers may not feel the same need to fix faults before release - they can always be fixed later.


Time-to-market

Time-to-market has always been a key business driver and remains important for Web software, but it now shares the spotlight with other quality attributes. Most of the software industry continues to give priority to being first to market. Given the other factors discussed here, however, the requirement for patience can and must impact the process and management of Web software projects. Software researchers, practitioners, and educators have discussed these criteria for years, but no type of application has had to satisfy all of these quality attributes at the same time. Web software components are more loosely coupled than any previous software applications. In fact, these criteria have until recently been important to only a small fraction of the software industry. They are now essential to the bottom line of a large and fast-growing part of the industry, but we do not yet have the knowledge to satisfy or measure these criteria for the new technologies used in Web software applications.

4. Quality modelling for Web-based information systems

In the literature, there are numerous reports on various quality models for program-based software information systems, such as McCall’s model (McCall, Richards and Walters, 1977), Boehm’s model (Boehm et al., 1978), and Gillies’s model (Gillies, 1997). Regarding information systems as a much wider concept than software systems, the SOLE model (Eriksson and Torn, 1991) and its variants provide three different views for three different groups of stakeholders: users, technical staff, and managers. In fact, the basic idea of the SOLE model comes from the general theory of quality proposed by Garvin (Garvin, 1987). In accordance with the growth of the Internet and World Wide Web, there has been some research on quality issues of Web-based software systems. In (Lindroos, 1997), the differences between Web-based information systems and conventional information systems are discussed from the perspective of software quality, and it is recognized that a quality model for Web-based information systems is needed. In (Olsina et al., 1999), a quality model for Web sites of universities, called Web site QEM, was proposed based on the users’ view; it breaks down the quality of Web sites into more than a hundred attributes. Existing quality models were almost all proposed as general models that are intended to be suitable for a large class of software and information systems. The development of such models is mostly based on many years’ experience in the development and maintenance of software and information systems. The validation of such models is mostly by empirical studies, such as by analyzing data collected from questionnaires and interviews. The quality models investigated by such an approach may not fully address all the quality characteristics of the target systems. It is recognized that the special requirements of software and information systems must be considered in the application of quality models (Kitchenham and Walker, 1989).
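
Attribute-based models such as Web site QEM typically aggregate scores on leaf attributes into an overall index, often by a weighted average. The following sketch is a hypothetical illustration of that aggregation step; the attributes, scores, and weights are invented, not QEM's actual scheme.

```python
# Hypothetical sketch of attribute-based quality scoring in the spirit of
# models such as Web site QEM: leaf attributes are scored in [0, 1], then
# combined by a weighted average. Attributes and weights are invented.

def quality_index(scores, weights):
    """Weighted average of attribute scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[attr] * w for attr, w in weights.items())

# Illustrative scores for three of the criteria discussed above:
scores = {"reliability": 0.9, "usability": 0.7, "security": 0.8}
weights = {"reliability": 0.5, "usability": 0.2, "security": 0.3}
q = quality_index(scores, weights)
```

The weights are where a model is tailored to a system class: an e-commerce site might weight security heavily, while a personal homepage might not, which is exactly the system-specific tailoring the text argues existing general-purpose models lack.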
Different kinds of information systems may have different requirements on quality. For instance, both e-commerce systems and personal homepages are Web-based information systems, but the former have quite different quality requirements from the latter in terms of information security and information searching. Hall argues that development of Web applications is currently at the stage software development was at 30 years ago, when the ‘software crisis’ was first recognized (Lowe and Hall, 1999). Most e-commerce systems are developed using ad hoc approaches, which have brought many quality problems, such as maintainability, efficiency, and security. What is lacking is a systematic method that enables developers to build a quality model for any given system.
Quality is not a new concept in information systems management and research. Information systems practitioners have always been aware of the need to improve the information systems function so that it can react to external and internal pressures and face the critical challenges to its growth and survivability (Aladwani, 1999). Moreover, information systems scholars have been concerned with definitions of quality in information systems research. Researchers have attempted to define data quality (Kaplan et al., 1998), information quality (King and Epstein, 1983), software/system quality (Olsina et al., 1999), documentation quality (Gemoets and Mahmood, 1990), information systems function service quality (Kettinger and Lee, 1994), and global information systems quality (Nelson, 1996). More recently, there has been some effort to define quality in the context of the Internet (Liu and Arnett, 2000). However, the Web quality concept remains underdeveloped and largely undefined. For the most part, existing research discusses the meaning of some aspects of Web quality in a descriptive manner, without delineating its major dimensions or providing tested scales to measure it. For example, Liu and Arnett (Liu and Arnett, 2000) named such quality factors as accuracy, completeness, relevancy, security, reliability, customization, interactivity, ease of use, speed, search functionality, and organization. Huizingh (Huizingh, 2000) focused on two aspects of Web quality: content and design. Wan (Wan, 2000) divided Web quality attributes into four categories: information, friendliness, responsiveness, and reliability. Rose et al. (Rose et al., 1999) provided a theoretical discussion of the technological impediments of Web sites, highlighting factors such as download speed, the Web interface, search functionality, measurement of Web success, security, and Internet standards.
Misic and Johnson (Misic and Johnson, 1999) suggested such Web-related criteria as finding contact information (e.g. e-mail, people, phones, and mail address), finding the main page, speed, uniqueness of functionality, ease of navigation, counter, currency, wording, and color and style. Olsina et al. (Olsina et al., 1999) specified quality attributes for academic Web sites. These authors took an engineering point of view and identified factors such as cohesiveness by grouping main control objects, direct control permanence, contextual control stability, etc. Bell and Tang (Bell and Tang, 1998) identified factors such as access to the Web, content, graphics, structure, user friendliness, navigation, usefulness, and unique features. Another useful stream of research examines key Web-site characteristics along purpose and value dimensions: while the purpose dimension relates directly to the contents of the site, the value dimension relates more to the quality aspects. The trade press and Internet sources have also discussed some aspects of Web quality. Levine (Levine, 1999) offered tips to help a company with Web site design, including fast Web page download, Web page interactivity, and current content, among other factors.
Three conclusions can be drawn from the above review (Aladwani and Palvia, 2002). First, past Web quality research, albeit useful, is fragmented and focuses only on subsets of Web quality. For example, Rose et al. (Rose et al., 1999) list six factors, and Bell and Tang (Bell and Tang, 1998) mention eight factors. Misic and Johnson's (Misic and Johnson, 1999) study was more extensive, but it missed several critical factors such as Web security, availability, clarity, and accuracy, to name a few. Liu and Arnett (Liu and Arnett, 2000) list 11 items and two dimensions of Web quality: information and system quality. Like the other studies, several important quality dimensions are missing from the authors' list. Second, past research lacks rigor when it comes to measuring the Web quality construct. In some cases, an ad hoc Web quality tool was suggested (Bell and Tang, 1998; Liu and Arnett, 2000; Misic and Johnson, 1999). However, it is not clear to the reader what the domain of the measured construct is or what refinement, validation, and normalization procedures were employed. For example, Liu and Arnett's scale included items about information to support business objectives, empathy with customers' problems, and follow-up services to customers; these items loaded on the same factor, which they called "information quality". In addition, double-barreled items can be found in their scale, e.g. security and ease of use. Third, the majority of the suggested Web quality attributes and scales are more relevant to Web designers than to Web users, for instance the ideas and scales proposed by Liu and Arnett (Liu and Arnett, 2000) and Olsina et al. (Olsina et al., 1999).
The previous discussion underscores the fact that the Web quality construct lacks a clear definition and Web quality measurement is still in its infancy.
A number of psychometric researchers have proposed procedural models to help other researchers develop better scales for their studies, e.g. (Babbie, 1992; Churchill, 1979). Applying these concepts, MIS researchers have developed several instruments, e.g. the end-user computing satisfaction instrument by Doll and Torkzadeh (Doll and Torkzadeh, 1988) and the microcomputer playfulness instrument by Webster and Martocchio (Webster and Martocchio, 1992). Straub (Straub, 1989) described a process for creating and validating instruments in IS research, which includes content validity, construct validity, and reliability analyses. Three generic steps common to all these models are conceptualization, design, and normalization (Aladwani and Palvia, 2002). Conceptualization, which focuses on content validity, involves such activities as defining the construct of interest and generating a candidate list of items from the domain of all possible items representing the construct. The second step, design, focuses on construct validity and reliability analysis. It pertains to the process of refining the sample of items from the previous step to come up with an initial scale, deciding on such operational issues as question types and question sequence, and pilot-testing the initial scale developed in the preparation stage. The third and last step concerns the effort to normalize the scale that has been developed. It involves the important steps of subsequent independent verification and validation. Unfortunately, this step is omitted in many scale development efforts. In the conduct of these steps, several analytical techniques can be used, such as factor analysis and reliability analysis, as we will describe next.
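To make the reliability-analysis step concrete, the internal consistency of a candidate scale is commonly summarized with Cronbach's alpha, computed from the item variances and the variance of the total score. The sketch below is a generic illustration (the score matrix is invented and is not data from the instrument discussed here):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows.

    items[r][i] is respondent r's score on scale item i.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = len(items[0])                       # number of items in the scale
    cols = list(zip(*items))                # per-item score columns
    item_var = sum(variance(c) for c in cols)
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_var / total_var)

# Invented 3-respondent, 2-item example: perfectly correlated items
# give the maximum internal consistency, alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

In practice a scale whose alpha falls below a conventional threshold (often 0.7) would be refined by dropping or rewording weak items before the normalization step.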

5. Conclusion

Achieving the high quality requirements of Web software represents a difficult challenge. Although other segments of the software industry have already mastered some of these requirements, such as the need for reliability in telecommunications and network routing, aerospace, and medical devices, they have typically done so by hiring the very best developers on the market, using lots of resources (time, developers, and testing), or relying on old, stable software and technologies. Unfortunately, these solutions will not work for Web applications. There are simply not enough of the "best developers" to implement all of the Web software needed today, and few but the largest companies can afford to invest extra resources in the form of time, developers, and testing. Finally, old, stable software technologies will not suffice as a base for the Web, which relies on the latest cutting-edge software technology. Although the use of new technology involves some risk, it allows us to achieve otherwise unattainable levels of scalability and maintainability.
Past Web quality research has focused on general descriptions of specific aspects of Web quality and paid little attention to construct identification and measurement. In this study, we moved beyond descriptive and narrative evidence to empirical evaluation and verification by developing a multidimensional scale for measuring user-perceived Web quality. The results of the two-phased investigation uncovered four dimensions of perceived Web quality (technical adequacy, specific content, content quality, and appearance) and provided evidence for the psychometric properties of the 25-item instrument. Another contribution is that while past Web quality research focuses mostly on the perspectives of Web developers and designers, the current study targets Web users. In this era of intense competition and customer responsiveness, users are major stakeholders and should not be ignored. The limitations of the study include those customarily associated with instrument building and survey methods. However, the extensive testing and validation improved the internal validity, and using several groups of subjects improved the external validity and generalizability of the instrument to a larger population.
Nevertheless, instruments are always subject to further improvement, and we encourage fellow researchers to pursue it. The Web quality model/instrument has practical as well as theoretical and research applications. In terms of practical applications, a validated instrument provides an important tool for assessing the quality of a Web site. The Internet hosts hundreds of millions of Web sites varying widely in quality. The scales might be used to assess the quality of a given Web site, either at the overall quality level using the 25-item instrument or at the level of a specific quality dimension, e.g. using the sub-scale of one of the four dimensions of perceived Web quality. This evaluation can provide fast and early feedback to the firm; if the firm finds itself lacking in any of the dimensions, it may carry out a more detailed analysis and take the necessary corrective actions. The four dimensions and the 25 items of the instrument may also be used proactively by Web site designers, who may consider the dimensions and items explicitly in the site design. Additionally, a firm may want to assess the relative importance of the four quality dimensions and the specific items in its own context. While this study reported their relative importance based on its own sample, each firm is unique in its products, services, customers, and business strategies. Such an evaluation would facilitate the design of a quality Web site in the first place.
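The sub-scale idea above can be sketched in a few lines: compute a mean score per quality dimension from one respondent's 25 Likert-type item ratings. The item-to-dimension grouping below is purely hypothetical; the actual mapping belongs to Aladwani and Palvia's instrument and is not reproduced in this text:

```python
# Hypothetical grouping of the 25 instrument items into the four
# perceived-Web-quality dimensions (item counts per dimension are
# illustrative only, not taken from the published instrument).
DIMENSIONS = {
    "technical adequacy": range(0, 7),
    "specific content": range(7, 14),
    "content quality": range(14, 20),
    "appearance": range(20, 25),
}

def dimension_scores(responses):
    """Mean Likert score per dimension for one respondent's 25 item ratings."""
    assert len(responses) == 25
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in DIMENSIONS.items()}

# A respondent who rates every item 5 scores 5.0 on every sub-scale.
print(dimension_scores([5] * 25))
```

A firm could weight these sub-scores by the relative importance of each dimension in its own context, as the text suggests, before comparing sites or tracking quality over time.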


  1. Aladwani, A.M. (1999) Implications of some of the recent improvement philosophies for the management of the information systems organization, Industrial Management and Data Systems, 99 (1), pp. 33–39.

  2. Aladwani, A.M., Palvia, P.C. (2002) Developing and validating an instrument for measuring user-perceived Web quality, Information & Management, 39, pp. 467–476.

  3. Babbie, E.R. (1992) The Practice of Social Research, Wadsworth Publishing Company, Belmont, CA.

  4. Bell, H., Tang, N. (1998) The effectiveness of commercial Internet Web sites: a user's perspective, Internet Research, 8 (3), pp. 219–228.

  5. Boehm, B.W., Brown, J., Kaspar, H., Lipow, M., MacLeod, G., Merrit, M. (1978) Characteristics of Software Quality, Vol. 1 of TRW Series of Software Technology, North-Holland, New York.

  6. Churchill, G.A. (1979) A paradigm for developing better measures of marketing constructs, Journal of Marketing Research, 16 (1), pp. 64–73.

  7. Doll, W.J., Torkzadeh, G. (1988) The measurement of end-user computing satisfaction, MIS Quarterly, 12 (2), pp. 259–274.

  8. Eriksson, I., Torn, A. (1991) A model for IS quality, Software Engineering Journal, July, pp. 152–158.

  9. Garvin, D. (1987) Competing on the eight dimensions of quality, Harvard Business Review, (6), pp. 101–109.

  10. Gemoets, L.A., Mahmood, M.A. (1990) Effect of the quality of user documentation on user satisfaction with information systems, Information and Management, 18 (1), pp. 47–54.

  11. Gillies, A. (1997) Software Quality: Theory and Management, International Thomson Computer Press.

  12. Ginige, A., Murugesan, S. (2001) Web engineering: an introduction, IEEE Multimedia, 1–3, pp. 14–18.

  13. Huizingh, E.K. (2000) The content and design of Web sites: an empirical study, Information and Management, 37 (3), pp. 123–134.

  14. Kaplan, D., Krishnan, R., Padman, R., Peters, J. (1998) Assessing data quality in accounting information systems, Communications of the ACM, 41 (2), pp. 72–78.

  15. Kettinger, W.J., Lee, C.C. (1994) Perceived service quality and user satisfaction with the information services function, Decision Sciences, 25, pp. 737–766.

  16. King, W.R., Epstein, B. (1983) Assessing information system value, Decision Sciences, 14 (1), pp. 34–45.

  17. Kitchenham, B.A., Walker, J.G. (1989) A quantitative approach to monitoring software development, Software Engineering Journal, 4 (1), pp. 2–13.

  18. Levine, G. (1999) 10 steps to building a successful Web site, Bobbin, 40 (8), pp. 61–63.

  19. Lindroos, K. (1997) Use quality and the World Wide Web, Information and Software Technology, 39, pp. 827–836.

  20. Liu, C., Arnett, K.P. (2000) Exploring the factors associated with Web site success in the context of electronic commerce, Information and Management, 38 (1), pp. 23–33.

  21. Lowe, D., Hall, W. (1999) Hypermedia and the Web: An Engineering Approach, John Wiley, Chichester.

  22. McCall, J., Richards, P., Walters, G. (1977) Factors in Software Quality, Technical Report CDRL A003, US Rome Air Development Centre, Vol. I.

  23. Misic, M.M., Johnson, K. (1999) Benchmarking: a tool for Web site evaluation and improvement, Internet Research, 9 (5), pp. 383–392.

  24. Nelson, K.G. (1996) Global information systems quality: key issues and challenges, Journal of Global Information Management, 4, pp. 4–14.

  25. Offutt, J. (2002) Quality attributes of Web software applications, IEEE Software, 3–4, pp. 25–32.

  26. Olsina, L., Godoy, D., Lafuente, G.J., Rossi, G. (1999) Specifying quality characteristics and attributes for Web sites, in Proceedings of the First ICSE Workshop on Web Engineering (WebE-99), 16–17 May, Los Angeles, USA.

  27. Rose, G., Khoo, H., Straub, D.W. (1999) Current technological impediments to business-to-consumer electronic commerce, Communications of AIS, 1 (16).

  28. Straub, D.W. (1989) Validating instruments in MIS research, MIS Quarterly, 13 (2), pp. 147–169.

  29. Wan, H.A. (2000) Opportunities to enhance a commercial Web site, Information and Management, 38 (1), pp. 15–21.

  30. Webster, J., Martocchio, J.J. (1992) Microcomputer playfulness: development of a measure with workplace implications, MIS Quarterly, 16 (2), pp. 201–226.

  31. Zhang, Y., Zhu, H., Greenwood, S., Huo, Q. (2001) Quality modelling for Web-based information systems, in Proceedings of the Eighth IEEE Workshop on Future Trends of Distributed Computing Systems (FTDCS'01).
