SIL Electronic Working Papers 1998-004, July 1998
Copyright © 1998 Gary F. Simons and Summer Institute of Linguistics, Inc.
All rights reserved.


First presented at the Joint International Conference of the Association for Literary and Linguistic Computing and the Association for Computers and the Humanities, Debrecen, Hungary, 5-10 July 1998

In search of task-centered software:
building single-purpose tools from multipurpose components

Gary F. Simons


Contents:

Abstract
1. Lamenting the state-of-the-art: underused multipurpose tools
2. Focusing on a solution: task-centered single-purpose tools
3. Finding a means to the end: single-purpose tools from multipurpose components
4. Building a future: task-centered tools embedded in interactive documents
5. Conclusion
Appendix: Notes on implementation
References

Abstract:

The humanities computing community has developed some great tools, but they seem to be underused. A large segment of their target audience finds them too difficult to learn and use. This paper suggests that a significant contributing factor is the semantic gap between the specific task the user wants to do and the general functionality that the tool provides. It proposes a solution in which the developer begins with a task analysis of the work users do, and then pieces together reusable multipurpose components to construct single-purpose tools to support the various subtasks. A running example is used throughout: the tasks for which a lexicographer might use a concordance. The appendix gives a working demonstration of a prototype in which interactive Web documents use forms and components to implement single-purpose tools and use hypertext links to informational pages that provide integrated performance support.

The mood in the humanities computing community tends to vacillate between excitement and despair. On the one hand, there is excitement over some of the great tools and techniques that have been developed; on the other hand, there is despair over the fact that they are so underused within the humanities community at large. Section 1 of the paper explores this problem and concludes that its root is that we have been focusing on developing multipurpose tools. Section 2 proposes that if we want to develop tools that will reach the majority of our colleagues, then we must start building tools that focus on supporting just one task at a time. Section 3 describes a software engineering strategy for doing this with an economy of effort; it involves building single-purpose tools from reusable multipurpose components. Finally, section 4 offers a glimpse of what a future humanities computing application might be like--it demonstrates how a component can be embedded in a Web page to create an interactive document that supports performing a specific task.

1. Lamenting the state-of-the-art: underused multipurpose tools

In May of 1996, about two dozen leading figures of the humanities computing community gathered in Princeton, New Jersey for a Text Analysis Software Planning Meeting. The meeting was called to focus on two questions: "Why is it, even though we have developed so many great programs, that so few of our colleagues are using them?" and secondly, "What can we do about it?"

Michael Sperberg-McQueen (1996) in his report on the conference lists four factors in answer to the first question:

  1. For many potential users, existing software still seems very hard to learn.
  2. Current programs don't interoperate well, or at all.
  3. Current programs are often closed systems, which cannot easily be extended to deal with problems or analyses not originally foreseen.
  4. Almost all current text analysis tools rely on what now seems a hopelessly inadequate model of text structure; they model text as a linear sequence of words rather than as a hierarchical structure of textual elements.

Points 2 through 4 are certainly problems that must be addressed, but they are largely problems that confront the people who are actively trying to use the existing software. The first point, namely, that most of our colleagues find the software hard to learn, is probably the single biggest factor in explaining why it is underused.

What makes software easy to learn? During the last decade, the Macintosh revolution showed us that replacing a command-line interface or a question-and-answer interface with a GUI (graphical user interface) substantially improved learnability. At one point we were optimistic that this would solve the problem, but now that we have GUI-based tools, we are disappointed to find that they are still underused.

What more will it take before we can hope for widespread use of our software? I think there are still two key hurdles to overcome: familiarity and semantic transparency.

The argument for familiarity was made forcefully by two of the organizers of the Princeton meeting, Susan Hockey and Willard McCarty. They emphasized the point that the majority of our target audience is not particularly proficient at using computers. The tools we have built are new and unfamiliar; the more complicated they are, the less likely they are to be used. Both Hockey and McCarty, in separate presentations, observed that the World Wide Web has quickly become the most ubiquitous and most familiar part of the computing landscape. They argued that if we really want our software to be used by everyone in the target audience, then we need to figure out how to slip it into the Web framework. Just by clicking on links and filling in forms, users should be able to run our software without ever leaving the familiar surroundings of the Web browser.

The second hurdle, semantic transparency, has to do with how well the task that the program performs corresponds to the task that the user wants to perform. If what the program offers to do is the same as what the user wants to do, then the program is transparent; otherwise, it is opaque. If a program is opaque to the user, then it is not user friendly, no matter how nice its user interface is. I fear that this is, unfortunately, the current state of our art.

This point is easy to illustrate. Say that I am a lexicographer who has access to an electronically encoded text corpus. I suspect the corpus could be a big help, but I don't know how to take advantage of it. First, I am just looking for sentences to illustrate certain headwords in the dictionary. I ask my computing consultant how I would do this and he says, "Here, this concordance program will do that." Later, I note that the sense definitions for some complex entries don't seem quite right and realize that looking at all the occurrences of the words in the corpus would help to sort things out. I tell my consultant what I want to do, and he says, "The concordance program I already gave you will do that, too." Still later I am focusing on the grammatical categories of words in the dictionary and find that I need to verify some of the category assignments. If I could find all the words with the same tag and then compare their uses in context, I could verify that the tags were applied appropriately. When I take this problem to my computing consultant, he says yet again, "Oh, the concordance program does that, too!"

As a lexicographer I had three distinct tasks in mind:

  1. finding sentences in the corpus to illustrate particular headwords in the dictionary,
  2. examining all the occurrences of a word in context in order to sort out the sense definitions of a complex entry, and
  3. comparing the uses in context of words that bear the same grammatical tag in order to verify the category assignments.

When I went for help about software, the answer was always the same. Use the tool that performs the task:

  1. to find illustrative sentences, make a concordance of the headword and scan it for suitable examples;
  2. to sort out the senses, make a concordance of the word and compare its contexts of use; and
  3. to verify the category assignments, make a concordance of all the words that carry a given tag and compare their uses in context.

For some, the relationship between the desired task and the prescribed tool with all of its controls would be transparent; these are the people who would succeed in applying the current software to perform their tasks. But for most, the relationship would not be entirely transparent, and these are the individuals who are most likely to remain merely potential users.

As software developers we are tool builders, and our instinct has been to build tools that apply to as many situations as possible so that they will be used as widely as possible. This indeed was the starting point a decade ago when I embarked with a team of programmers on a project to develop a general-purpose computing environment for literary and linguistic computing (Simons 1988, 1998). Along the way we have been learning some new ways of thinking as we have used that general-purpose environment to develop many single-purpose tools as part of the LinguaLinks system (SIL 1998).

2. Focusing on a solution: task-centered single-purpose tools

LinguaLinks is an instance of an electronic performance support system, or EPSS (Gery 1991, Seddon 1998). Specifically, it is an EPSS for language field workers that supports tasks in the domains of anthropology, language learning, linguistics, literacy, and sociolinguistics. An EPSS is a computer-based system that seeks to support a knowledge worker in performing his or her job. It does so by integrating the software tools needed to do the job with the reference and tutorial materials that are needed to know how to do the job well. A program in an EPSS gives context-sensitive help that not only explains how the program works, but also explains how to do the job. It gives examples, case studies, guidelines, advice on choosing alternatives, background information, and more. The notion of electronic performance support is gaining momentum throughout the business world as a way to provide just-in-time training for workers in a rapidly changing world (Fischer and Horn 1997).

A software development project typically begins with requirements analysis--representatives of the target user community are interviewed to determine exactly what the software must do. The Princeton meeting took this approach when it broke into small groups to discuss who the potential users of text analysis software are and what requirements they might have. The result was a long list of potential user groups and an even longer list of needed functions. But from a performance support point of view, this does not get us any closer to software that people will actually use.

Performance support focuses on the job as opposed to the software. Rather than asking who the potential users are, it asks what is the specific job that needs to be done. Once this is identified, the first step is to perform a task analysis (Desberg and Taylor 1986). In this process, the job to be done is broken down into all of its subtasks. These in turn are broken down into even smaller tasks. The knowledge, skills, and attitudes needed to perform each task are identified; so are the tasks that can be supported by automated tools. Once the automatable tasks have been identified, a requirements analysis for each can commence.

This approach to requirements analysis leads us to think in terms of a number of single-purpose software tools, each of which is focused on performing a particular task. For instance, returning to the example from section 1 of tasks in lexicography that could be supported by a concordance program, some requirements for single-purpose tools would be as follows:

  1. a tool that, given a headword, displays every sentence in the corpus in which it occurs, so that the lexicographer can pick out good illustrative sentences;
  2. a tool that, given a word, displays all of its occurrences in context and allows each occurrence to be assigned to a sense of meaning; and
  3. a tool that, given a grammatical category, displays the occurrences in context of all the words assigned to it, so that the category assignments can be verified.

The LinguaLinks system has not gone quite this far yet, but it is headed in this direction. It has no concordance program as such; rather, it incorporates concordance views into many task-centered tools. Figure 1 shows a concordance for finding illustrative examples. The data are from the Tuwali Ifugao language of the Philippines (the data set was developed by my colleague, Lou Hohulin). The figure shows a configuration of the lexical database editor that embeds a concordance of all attested occurrences in text for the current headword.

Figure 1. A concordance for finding illustrative sentences


Figure 2 illustrates how performance support is integrated into LinguaLinks tools. Throughout the system a right-click always brings up helps related to the current selection. In this example, the user has selected a part-of-speech label and right-clicked. A list of pertinent help modules is displayed, beginning with items about parts of speech, then items about the more general context (namely, senses in a dictionary entry), and finally items about the even more general context (namely, major entries).

Figure 2. Integrated performance support


Figure 3 shows the result of selecting the first option. This brings up a module about how to choose a part of speech for a sense. Note that this is not documentation about how to operate the program; rather, it is advice taken from experts about how to do the task in general (quite apart from the computer tool). In this way it goes beyond traditional program help to offer performance support.

Figure 3. A performance-support module


Figure 4 returns to our concordance examples. It illustrates a concordance specifically for the task of assigning each occurrence of a word in text to the sense of meaning it exemplifies. The upper left pane shows a list of wordforms; the upper right pane shows the possible analyses of the word selected in the list on the left. The pane below these gives a concordance display of all the text occurrences assigned to the sense selected in the upper right. The bottom pane shows all the occurrences that have not yet been assigned to a particular sense. The buttons in the bar between the two concordance panes allow a selected occurrence to be assigned or unassigned.

Figure 4. A concordance for assigning text occurrences to senses of meaning


Figure 5 shows a concordance that could be used to help sort out the part of speech for a particular word. The top pane gives a list of the possible parts of speech. The bottom pane shows all the occurrences of words that are assigned to the selected part of speech. In the figure, the part of speech viewer has been called up from the tool shown in figure 4. We might do so, for instance, to verify that hagabi was really functioning as a common noun by comparing its occurrences in context with those of other common nouns.

Figure 5. A concordance for determining parts of speech


These LinguaLinks tools demonstrate the shift from general-purpose tools to task-centered tools. But we still have not gone far enough to bridge the learnability gap for many of our target users. Section 4 presents the direction we are exploring to help remedy this problem, namely, taking the emphasis away from programs and instead embedding task-centered, single-purpose tools within interactive documents. First, however, the next section surveys an important enabling technology.

3. Finding a means to the end: single-purpose tools from multipurpose components

Humanities software developers have been building multipurpose tools for an obvious reason--we have not had the resources to build a multitude of single-purpose tools. Fortunately, new technologies are available that can make it cost effective to pursue a strategy of building single-purpose tools that are truly transparent to the target audience.

The established ubiquity of the Web browser as a user environment and the pending ubiquity of XML (Bray and others 1998, Cover 1998) as a formalism for data encoding and interchange on the Web give us good fixed points for the front end and back end, respectively, of a new generation of tools for humanities computing. I believe that the key to building the software that lies in between is "componentware."
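
To make the back-end point concrete, a sentence in a text corpus might be encoded with its structure made explicit rather than stored as a bare string of words. The fragment below is only a hypothetical sketch (the element and attribute names are illustrative, not a scheme proposed here), but it shows the kind of markup a concordance component could exploit to select occurrences by wordform or by grammatical tag:

   <sentence id="s042">
      <w pos="det">The</w>
      <w pos="n">hare</w>
      <w pos="v">challenged</w>
      <w pos="det">the</w>
      <w pos="n">tortoise</w>
      <w pos="prep">to</w>
      <w pos="det">a</w>
      <w pos="n">race</w>
   </sentence>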

The idea of using components in software development is an old one (McIlroy 1969). It is an analogy to the practice that is common in building hardware systems. A custom personal computer, for instance, can be fairly easily built by piecing together a number of prepackaged components (like a power supply, motherboard, peripheral devices, monitor, and keyboard). In software, a component is a generally useful bit of functionality that is precompiled and housed in a reusable package. As such it becomes a building block for software construction. A customized program can be built by piecing together pre-existing components (like different kinds of GUI widgets, an XML parser, a concordance builder, and so on).

It is only recently, however, that such an approach has begun to materialize. The development of object-oriented methodology has been the catalyst by providing a workable paradigm for decomposing complex applications into reusable components (Nierstrasz and others 1992). A new approach to programming is emerging in which system programmers use system programming languages (like C++ and Java) to build components, and then application programmers use scripting languages (like Tcl, Perl, Visual Basic, and JavaScript) to glue the components together into applications. Early studies indicate that gluing components together with a scripting language promises as much as an order of magnitude increase in productivity over programming everything from scratch in a system programming language (Ousterhout 1998, Kiely 1998).

Component-based software depends on the standards that define how components are built and interconnected. The future shape of this approach thus depends on how these standards shake down. Two main standards are competing at present: Microsoft's Component Object Model (or COM, which includes OLE and ActiveX; Chappell 1996, Gray and others 1998, Microsoft 1998) and Sun's JavaBeans (Javasoft 1998). The Object Management Group's CORBA (Common Object Request Broker Architecture) fills a similar niche (OMG 1998). Krieger and Adler (1998), Lewandowski (1998), and Wegner (1997) give comparative reviews of these standards. It is not yet clear if one of these will emerge as the dominant standard, but the widespread acceptance (both in industry and in academia) of building applications from components suggests that it is an approach that is here to stay.

4. Building a future: task-centered tools embedded in interactive documents

Section 1 has suggested that if we want our software to be used by the target audience, it must feel familiar (even when it is new) and it must transparently support the specific task the user wants to perform. Section 2 has shown that supporting the user in performing that task requires more than just a specialized computer program--it also requires that the program be integrated with materials that provide background information on how to do the task and how to do it well. Section 3 has suggested that component technology provides a means for packaging the functionality of our software in such a way that it can easily be tailored to build tools that are focused on performing a single task at a time.

The World Wide Web provides a framework in which we can get a glimpse of what this sort of software would be like. The Web browser provides a user environment that is already familiar. By embedding software components into Web pages, we can present the user with interactive documents--a medium that should be both more intuitive and less threatening than our conventional programs. By using the scripting features of Dynamic HTML, we can customize the interaction with the components to produce a single-purpose tool. We can exploit the hypertext features of HTML to incorporate as much performance supporting background material as is needed.

What might such a system look like? Instead of using the operating system to locate a program, users would begin by using a Web browser to navigate to a page about the job they are doing. Figure 6 shows a sample. Here we return to the example presented in section 1 of a linguist who is building a dictionary. This Web page offers a task analysis for the job of doing lexicography.

Figure 6. The entry point for a user: an interactive task analysis


The complete task analysis forms a tree containing scores of subtasks. To simplify the presentation, Dynamic HTML is used to provide outline controls that can show or hide the subtasks of a given supertask. The task of immediate interest is to find a good illustrative sentence in a text corpus. Figure 7 shows the display that results after the linguist has clicked on the outline controls for the supertasks.
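
As a rough sketch of the technique (the task labels, element names, and file name below are hypothetical, since the actual page is generated by the stylesheet described in the appendix), each supertask in the outline can invoke a small script that toggles the display style of the list holding its subtasks, using the Internet Explorer 4.0 object model:

   <SCRIPT LANGUAGE="JavaScript">
      // Show or hide the list of subtasks nested under a supertask.
      // The argument names the element that contains the subtasks.
      function toggleSubtasks(id) {
         var list = document.all[id];
         list.style.display = (list.style.display == "none") ? "" : "none";
      }
   </SCRIPT>

   <UL>
      <LI><A HREF="javascript:toggleSubtasks('senses')">Document the senses of an entry</A>
         <UL ID="senses" STYLE="display:none">
            <LI><A HREF="illustrate.htm">Find an illustrative sentence for a sense</A>
         </UL>
   </UL>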

Figure 7. Opening subtasks to find the task of interest


The tasks in the task tree are hyperlinks. Clicking on a task takes the user to a Web page that tells how to perform that task. Some pages may have only expository text; others will present a customized tool built by embedding components that are controlled by user interface widgets in an HTML form. Figure 8 shows the single-purpose tool for finding an illustrative sentence in a text corpus. It provides interface controls for specifying the word to search for, the text to look in, and the maximum length of sentences. In the figure, the controls are filled in to find occurrences of the word race in Aesop's Fables within sentences no longer than ten words.

Figure 8. A task-centered, single-purpose tool


The single-purpose tool also offers performance support in the form of links to pages that give background that will help the user to do the job well. Figure 9 shows the beginning of the page that describes "How to select a good illustrative sentence."

Figure 9. Performance support for the tool


5. Conclusion

I believe that the key challenge facing humanities software developers today is to move the functionality now available in large standalone multipurpose tools into a number of smaller reusable multipurpose components. With relatively little effort, these components can then be combined and configured in novel ways to build single-purpose tools that incorporate task-specific helps. Tools like this should make it easier for novice users to tap into the riches of humanities computing.

Appendix: Notes on implementation

The Web demonstration in figures 6 through 9 is included with this electronic working paper. At present, the opening task tree only works in Internet Explorer 4.0 (due to idiosyncrasies of each browser's support for Dynamic HTML). In other browsers the dynamic outline will not work; either you will see just the top-level tasks or you will see all the tasks. In the former case, just click the second link below to start the second part of the demonstration. Otherwise, if all the tasks show, you will be able to see the link for "Find an illustrative sentence for a sense" which you can follow to try the second part. Note that this is the only task that is linked to a supporting page in this demo. Note, too, that the first time you ask to build a concordance, the text is downloaded from the server. Thus there is a delay in proportion to the size of the text: Aladdin's Lamp is 29K, Aesop's Fables is 65K, and Alice in Wonderland is 149K. Subsequent concordances on the same text will come up immediately.

Try the full demonstration (figures 6-9)

Start just the single-purpose tool (figures 8-9)

The task analysis in figures 6 and 7 began as a straightforward XML file. The actual HTML file is much more complicated, as can be seen by viewing the document source. This form was achieved by running the XML file through an XSL stylesheet by means of the Microsoft XSL processor (msxsl.exe, see http://www.microsoft.com/xml/xsl/default.asp). For an introduction to Dynamic HTML, see http://www.microsoft.com/workshop/author/dhtml/dhtml.htm.
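
Though the actual source is not reproduced here, such a task analysis lends itself to very simple markup. The following hypothetical sketch (the element and attribute names are illustrative only) suggests what the XML input to the stylesheet might look like, with nested task elements mirroring the tree of tasks and subtasks and a link to the page that supports each automatable task:

   <task name="Do lexicography">
      <task name="Document the senses of an entry">
         <task name="Find an illustrative sentence for a sense"
               href="illustrate.htm"/>
      </task>
   </task>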

The concordance-building component that lies at the heart of the single-purpose tool in figure 8 is a Java applet. It was implemented by my colleague John Thomson, to whom I am deeply indebted for helping this demonstration succeed. The component is embedded at the bottom of the page by the following <applet> element:

   <APPLET name="conc" code="ConcAppletFrame.class"
           width="510" height="200">
   </APPLET>

Note that if we were using an ActiveX component, we would use the <object> element instead. The <form> in the middle of the page is where the interactive controls are defined. In the actual page, the form uses a table to lay out the prompts and the controls. The following is a simplified version of the essence of the form:

   <FORM NAME="form">
      What word do you want to find examples of?
      <INPUT TYPE="text" NAME="targetWord">
      What text do you want to search?
      <SELECT NAME="targetText">
         <OPTION SELECTED>Select from list
         <OPTION VALUE="aesop.txt">Aesop's Fables
         <OPTION VALUE="aladdin.txt">Aladdin's Lamp
         <OPTION VALUE="alice.txt">Alice in Wonderland</SELECT>
      Maximum length (in words) of sentences to include:
      <INPUT TYPE="text" NAME="maxWords" VALUE="10">
      <INPUT TYPE="button" VALUE="Show concordance"
             onClick="showConcordance()"></FORM>

The key thing to note here is that the form itself and each of the controls within it are assigned names. The glue that binds the form to the component is the attribute onClick="showConcordance()" on the button at the end of the form. This is a bit of Dynamic HTML that says, "When the button is clicked, evaluate the showConcordance function." This function is defined in the <script> section in the <head> of the page as follows:

   
   <SCRIPT LANGUAGE = "JavaScript">
      function showConcordance() {
         // Which text the user chose in the drop-down list.
         var i = document.form.targetText.selectedIndex
         // Copy the user's settings from the form into the applet.
         document.conc.setTargetWord( document.form.targetWord.value )
         document.conc.setTargetText( document.form.targetText.options[i].value )
         document.conc.setMaxWords( document.form.maxWords.value )
         // This tool always limits the displayed context to the current sentence.
         document.conc.setSameSentenceContext( true )
         // Build and display the concordance.
         document.conc.build()
      }
   </SCRIPT>

This function sets the parameters of the applet (which is accessed by document.conc) to the values the user has specified in the form (which is accessed by document.form). This way of accessing the elements on the page is based on the Document Object Model; see http://www.webcoder.com/howto/ for a tutorial. The parameters are set by calling public methods of the applet; the parameter value is passed as an argument. Note that this single-purpose tool assumes that the user will always want to see just the current sentence in the displayed context, so the function always sets the parameter sameSentenceContext to true. Finally, the function invokes the method that builds the concordance based on the current parameter settings.

References

Bray, Tim, Jean Paoli, and C. M. Sperberg-McQueen. 1998. Extensible Markup Language (XML), version 1.0. World Wide Web Consortium Recommendation. <http://www.w3.org/TR/REC-xml>

Chappell, David. 1996. Understanding ActiveX and OLE: a guide for developers and managers. Redmond, WA: Microsoft Press.

Cover, Robin. 1998. Extensible Markup Language (XML) Web Site. <http://www.sil.org/sgml/xml.html>.

Desberg, Peter and Judson H. Taylor. 1986. Essentials of Task Analysis. Lanham, MD: University Press of America.

Fischer, Olivier and Richard Horn, eds. 1997. Electronic performance support systems lead the way. Five articles in Communications of the ACM 40(7):31-63.

Gray, David N., John Hotchkiss, Seth LaForge, Andrew Shalit, and Toby Weinberg. 1998. Modern languages and Microsoft's Component Object Model. Communications of the ACM 41(5):55-65.

Javasoft. 1998. JavaBeans home page. <http://splash.javasoft.com/beans/index.html>.

Kiely, Don. 1998. The component edge: an industry-wide move to component-based development holds the promise of massive productivity gains. TechWeb News, April 13, 1998. <http://www.techWeb.com/se/directlink.cgi?IWK19980413S0001>.

Lewandowski, Scott M. 1998. Frameworks for component-based client/server computing. ACM Computing Surveys 30(1):3-27.

McIlroy, M. D. 1969. Mass produced software components. In, Software engineering, ed. by P. Naur and B. Randell. NATO Science Committee, pp. 138-150.

Microsoft. 1998. Component Object Model home page. <http://www.microsoft.com/com/>.

Nierstrasz, Oscar, Simon Gibbs, and Dennis Tsichritzis. 1992. Component-oriented software development. Communications of the ACM 35(9):160-165.

OMG. 1998. CORBA home page, Object Management Group. <http://www.omg.org/corba>.

Ousterhout, John K. 1998. Scripting: higher-level programming for the 21st century. IEEE Computer 31(3):23-30.

Seddon, Jacqui, ed. 1998. Epss.com! [a "webzine" devoted to Electronic Performance Support Systems]. <http://www.epss.com>.

SIL. 1997. LinguaLinks: Electronic Helps for Language Field Work, Version 2.0. Dallas, TX: Summer Institute of Linguistics. See also <http://www.sil.org/lingualinks/>.

Simons, Gary F. 1988. A Computing Environment for Linguistic, Literary, and Anthropological Research: technical overview. <http://www.sil.org/cellar/cellar_overview.html>

Simons, Gary F. 1998. The nature of linguistic data and the requirements of a computing environment for linguistic research. In, John M. Lawler and Helen Aristar Dry, eds., Using Computers in Linguistics: a practical guide. London: Routledge, pp. 10-25. <http://www.routledge.com/routledge/linguistics/using-comp.html>.

Sperberg-McQueen, C. M. 1996. Text Analysis Software Planning Meeting, Princeton, 17-19 May 1996: Trip Report. <http://www-tei.uic.edu/orgs/tei/misc/trips/ceth9605.html>.

Wegner, Peter. 1997. Frameworks for compound active documents. <http://www.cs.brown.edu/people/pw/>.

