
The Treasury Report on Receivables: A Case Study in Transforming a DOS-Based Mainframe System to the Internet

Nicole J. Burton, Internet Program Office, Financial Management Service, U.S. Department of Treasury, Hyattsville, MD

5th Conference on Human Factors and the Web:
The Future of Web Applications
June 3, 1999


The new Treasury Report on Receivables System is neither cutting edge in technique nor exciting in content. Instead, it represents a typical data collection and reporting tool, similar to many legacy systems found at Federal agencies. What is cutting edge and exciting is that it is among the first legacy systems to be implemented as a pure Internet solution at the Financial Management Service (FMS). It is also the first FMS system to be developed using iterative usability testing and performance centered design techniques. The paper also discusses unique aspects of legacy-to-Web migration and lists some guidelines to follow. 

Table of Contents

    The Old System
    Selecting a Legacy System for Internet Implementation
    Introducing Human Factors into Development
Usability Testing and Performance Centered Design
    The First Usability Test
    Major Findings of the First Test
    Conducting a Pilot Test with Five Federal Agencies
    Results of the Pilot Test
Implementation and Legacy-to-Web Migration Issues
    Unique Aspects of Migrating Legacy Systems to the Web
    Guidelines for Legacy-to-Web Migration
    Human Factors Implications of Migrating Systems to the Internet


In 1996, the Debt Collection Improvement Act mandated that the Federal government become more efficient in collecting its debt. The Act expanded the Department of the Treasury’s responsibilities and placed new requirements on Federal agencies for collecting their delinquent debt portfolios.

The Department of the Treasury’s Financial Management Service (FMS) worked with other agencies in the Federal Credit Policy Working Group to revise the existing data gathering and reporting tool, the Report on Receivables Due From the Public, also known as the Schedule 9. Congress, the Office of Management and Budget, agency Chief Financial Officers, and others in the Federal, private, and public sectors use this information in analysis and budget reports.

The new Treasury Report on Receivables System (TROR) is neither cutting edge in technique nor exciting in content. Instead, it represents a typical data collection and reporting tool, similar to many legacy systems found at Federal agencies. What is cutting edge and exciting is that it is one of the first legacy systems to be implemented as a pure Internet solution at the Financial Management Service. It is also the first FMS system to undergo usability testing.

The Old System

Approximately 130 users in 90 Federal program agencies report debt information for 300 organizational entities quarterly or annually, depending on the size of their debt portfolios. Under the old system, accountants in Federal agencies accessed Treasury’s Government Operations Accounting Link System (GOALS), entered their information into a DOS-based "green-screen" display, and transmitted the data over a dedicated line to Treasury, where it was batch processed on a mainframe computer.

The receivables data resided in a proprietary database that could be accessed only by the system administrator; the users had no direct access to their reports. During the quarterly two-week processing window, service-call volume was high. Users had difficulty navigating the system, the transmission process was unreliable and cumbersome, and lack of feedback left users unsure whether they had completed their tasks.

Selecting a Legacy System for Internet Implementation

The FMS Internet Program Office, an in-house development group, was looking to build and implement a Treasury Internet system as a proof of concept. This office grew out of the establishment of the FMS Web site and had programmed several smaller applications for the Web site. The Treasury Report on Receivables was selected as a pilot system for several reasons:

  1. It was virtually a stand-alone system and didn’t depend on other Treasury systems for data or feed data into other Treasury systems directly.
  2. The debt management activities it supported were important but not critical.
  3. The data collected and distributed was not particularly sensitive, simplifying security issues.
  4. No payments were involved.

Introducing Human Factors into Development

I joined FMS as an Internet Analyst specializing in usability in April 1998 after the project had been green-lighted. Requirements had already been written and the database design was underway. My role on the development team was to ensure the usability of the application, assist in interface design, and work closely with the customers to make the project a success.

We met with the agency customer, Debt Management Services, and developed more detailed user profiles than were included in the requirements. We determined that there were at least five sets of potential users: data preparers, analysts, report users, verifiers, and system administrators.

We also performed extensive task analysis to determine the exact tasks to be performed and their sequence. Redesigning the application from an onerous mainframe dial-up system to a usable Internet system clearly affected the workflow. Where once the task of entering data had been delegated by the analysts to the accountants and accounting technicians, now the analysts preparing spreadsheets wanted in some cases to take over entering the data themselves. No longer did the data entry have to be performed at a dedicated workstation with a modem; it could be performed on anyone’s desktop.

Finally, we established usability performance measures to give us something to measure against and to help us determine if we were successful at project’s end. Following Jeffrey Rubin’s framework for usability testing (Rubin, Handbook of Usability Testing), we established usability measures in usefulness, ease of learning, ease of use, and attitude.

Usability Testing and Performance Centered Design

Since programming began shortly after I arrived, it was too late to test a paper prototype of the system. In addition, FMS had no history of conducting usability tests or using performance centered design principles, but the members of the development team were open to both concepts.

I asked our contact in Debt Management Services for names of actual end users and was put in touch with a user from within FMS (which had its own debt portfolio to report) and two users from the Department of Veterans Affairs (VA). We conducted site visits with these users, observed them using the old GOALS system, and solicited input for the new system.

The First Usability Test

In October 1998, after 65-75% of the system had been developed, we conducted our first usability test in the Internet lab. We had no incentives such as cash or coffee mugs to give users for participating but we provided juice and cookies after the hands-on part of the test was finished.

Over a two-week period, we conducted a dry run test with the manager of the Internet Program Office, followed by two tests with the three users (two from the VA, one from FMS). We had each user run through a typical preparer scenario. The tasks they performed included logging on to the system, selecting an entity to report on, entering data, editing the data, reviewing the report, and transmitting the report.

A majority of the users were preparers, and their activities comprised the "bread-and-butter" functions of the system. During the test, one member of the development team moderated the test (with minimal intervention) while another took notes. Afterwards, we discussed the test with the users and gathered their impressions and feedback.

Major Findings of the First Test

The results were humbling. The major problems, and the solutions we adopted, were reported in the Usability Test Report, November 1998:

Problem: Users were confused by the Save, Edit, File Report, and View Report functions and by the error checking performed during saves.

Solution: We streamlined and more clearly delineated the Save, Edit, File Report, and View Report functions. Instead of asking the users to remember to save every 5-10 minutes, we embedded prominent Save buttons at logical junctions in the form. We eliminated error checking during interim saves and had users perform the editing after they had entered all their data. We removed from the error list generated by the Edit process errors over which users had no control. We used full descriptions instead of abbreviations to indicate the location of errors. We eliminated some confusion by displaying only one action button (such as Save or Edit) per screen, and we eliminated a "smiley face" icon that users didn’t understand.

Problem: Users did not conceive of the report as three separate parts and were unsure where to go after completing a part.

Solution: Because users didn't conceive of the report in three separate parts, we changed the flow to automatically display Part II after Part I, and Part III after Part II, instead of making users choose where to go next. We still included Parts links at the top of the screen but we made them dynamic: if you were on Part I, only the Part II and Part III links would appear; if you were on Part II, only the Part I and Part III links would appear, and so on. This casually drew the users’ attention to the Parts as a means of navigation.
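The dynamic Parts links amount to a simple rule: emit a link for every part except the one being viewed. A minimal sketch in Python (a hypothetical reconstruction; the actual system generated these links server-side):

```python
PARTS = ["Part I", "Part II", "Part III"]

def nav_links(current_part):
    # Show links for every part except the one the user is viewing,
    # so the link set itself cues the Parts as the navigation scheme.
    return [p for p in PARTS if p != current_part]

print(nav_links("Part II"))  # -> ['Part I', 'Part III']
```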

Problem: Users did not know the navigation and format conventions they needed at the start of the application.

Solution: We determined the precise navigation and format information the users had to know at the beginning and embedded the information in the form itself. We made a one-page printable Help sheet appear at the onset of the application, which users could turn off if they chose to. On this sheet we explained the navigation and format conventions. We also augmented the Help system to clarify these issues.

Problem: Users had difficulty finding functions because of the ordering and titling of the task bar elements.

Solution: In addition to embedding the most vital information in the interface itself, we reordered and re-titled the elements on the task bar.

Problem: Users reacted negatively to the yellow and green color scheme and to the Home Page design.

Solution: We changed the colors from yellow and green to blue and maroon. We redesigned the Home Page several times, opting for a dynamic task bar and simple text that we could change as needed.

Problem: Users were dissatisfied with the printed report.

Solution: Given current browser constraints, we couldn’t improve the printed report, but we will look at this issue in a subsequent release.

Problem: Some users lacked the basic Internet and Windows skills the system assumed.

Solution: We emphasized at briefings, training sessions, and in the Help system the need for basic Internet and Windows skills and encouraged users to seek out classes at their agencies. We also included a "Browser Basics" section in the Help and eliminated some instructions that required intermediate Internet and Windows knowledge.

Problem: Users accustomed to the old system were confused by new terminology and by the number of choices presented.

Solution: We reviewed the functions of the old system and incorporated old terminology such as "Transmit File" (instead of "File Report") to relate the new process to the one users were accustomed to. We directed users through the flow of the application more directly, presenting fewer choices. Discussions and observations indicated that users wanted to be told how to complete their tasks as quickly as possible.


The week following the October usability tests, the team met to discuss a list of observed problems and proposed solutions. To organize the problems and fixes, we developed a Problem List using simple criteria discussed in Dumas and Redish’s A Practical Guide to Usability Testing. Using this method, problems are first analyzed according to their scope, then ranked according to their severity. A Problem List Key appears below:

A. Scope: Scope of problems is either Local or Global
Local = found on one screen/window/menu. Generally easy to fix.
Global = applies to more than one screen/window or throughout the system. Address these first. Generally harder to fix, may require some redesign.

B. Severity: Problems may be scaled as follows:
1. Prevents completion of important task
2. Creates significant delays or frustration
3. Has minor effect on usability
4. Should be considered as future enhancement

After we agreed on the scope and severity and what actions to take, we distributed the tasks quickly among ourselves with some items postponed to a future release. Within two weeks, we had done 80% of the redesigning and reprogramming necessary to implement the solutions.
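The scope-then-severity triage described above can be sketched in a few lines of Python. The problem records and descriptions here are illustrative only, not drawn from our actual Problem List:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    description: str
    scope: str      # "Global" (spans screens, address first) or "Local" (one screen)
    severity: int   # 1 = prevents important task ... 4 = future enhancement

def triage(problems):
    # Global problems come first; within each scope group, the most
    # serious problems (lowest severity number) rise to the top.
    return sorted(problems, key=lambda p: (p.scope != "Global", p.severity))

ranked = triage([
    Problem("Smiley-face icon not understood", "Local", 3),
    Problem("Save/Edit/File Report flow unclear", "Global", 1),
    Problem("Error locations shown as abbreviations", "Global", 2),
])
for p in ranked:
    print(p.scope, p.severity, p.description)
```

Sorting on the tuple `(scope != "Global", severity)` works because `False` sorts before `True`, so Global problems precede Local ones before severity is compared.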

The database had been well designed to begin with and did not need redesign but we realized we might have designed it a little differently had we tested sooner. One programmer commented after the problem list review: "Couldn’t we have done this earlier?" The answer was yes, we could have done a paper prototype, but also, in usability testing, you start testing wherever you are. There are no perfect projects.

Conducting a Pilot Test with Five Federal Agencies

We continued to refine the system, getting user feedback from Debt Management Services as well as the VA and FMS users, who were eager to contribute. During December 1998 and January 1999, we demonstrated the system to ascending levels of FMS management, culminating in the Agency Commissioner. Feedback from all observers was positive, particularly in regard to the cooperative nature of the development and the extent of customer involvement.

In February, we tested the redesigned system in a pilot test on site at five agencies: the VA, FMS, the Department of Education, the Department of Energy, and the Federal Communications Commission. Working with Debt Management, we selected these agencies because they represented a range of robust and light users, as well as a range of system skills. Geographically, all the agencies were located in the Washington, D.C. area, and we were able to complete each test in a morning or afternoon session, sending two team members as a moderator/note-taker and a timer/note-taker.

One of the advantages of conducting on-site tests is that it requires the programmers to visit the user’s work environment and observe actual system use, often for the first time. Although the developers on the team were initially reluctant to participate, they came away feeling positive about the experience.

During the test, we had the users run through the preparer scenario using their own data. This time we timed the main tasks in order to determine whether they met our performance measures. We were not rigorously scientific in the timing. For some of us, it was the first occasion of timing system tasks with a stopwatch. In addition, the users often wanted to stop and comment about what they were doing, adding suggestions and observations as they went. However, what we did not achieve in "clean" timing data, we gained in user feedback and a clear sense of where the remaining sticking areas lay.

We performed two test sessions per week over a period of a month (a dry run test plus five agency tests, with a one-week postponement at the last agency). Task times generally came in as expected, though the data entry and editing tasks took substantially longer than projected. By the time we had conducted the dry run and three agency tests, we'd learned what the main problems were, but each agency offered new and useful details about how their users employed the system, the user population, their workflow arrangements, and the level of their system skills.

Results of the Pilot Test

The pilot test revealed that the system was overall quite usable, that it was a marked improvement over the old system, and that the data entry and editing tasks needed additional fine-tuning. We augmented the interface with single-line prompts and reminders at the top of some screens and added clearer prompts on the Home Page (e.g., "To enter receivables, click on the Enter Receivables button above"). We also gave users a zero-fill option to eliminate errors caused by blank fields during the Edit. (For political reasons, we could not zero-fill the form at the beginning of the process.)
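The zero-fill option amounts to treating blank fields as zero before the Edit pass runs, so they are not flagged as errors. A minimal sketch, with invented field names:

```python
def zero_fill(fields):
    # Replace blank entries with "0" so the Edit pass does not flag
    # them as errors; filled-in values pass through unchanged.
    return {name: (value if value.strip() else "0")
            for name, value in fields.items()}

filled = zero_fill({"principal": "1200.00", "interest": "", "penalties": " "})
print(filled)  # blank interest and penalties fields become "0"
```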

The users continued to ignore the extensive Help system we had developed. (One user stated flatly, "I’d rather die than click on Help.") We had a Help button to indicate overall help and a Question Mark icon to indicate contextual help. One user commented that having the two Help buttons looked like "a mistake." We made all Help contextual, except on the Home Page (where the Help button would take you to the Help Table of Contents). We added the Help TOC as a link on all the Help pages and eliminated the Question Mark icon. The Sample Report, which linked a sample receivables form line by line to definitions in the HTML Instructional Workbook, was duplicated on the public Internet site, where it could be used by data preparers who lacked access to the data entry system.

Implementation and Legacy-to-Web Migration Issues

During March 1999, Debt Management Services conducted three 2-hour agency training sessions in Kansas City, Washington, D.C., and Denver to ensure that users were comfortable with the new system and form. Through the training and other communications with Debt Management, the users became acquainted with the reference information on the FMS Web site as well as the information contained in the Help system.

One of the challenges of the project was the amount of innovation we incorporated simultaneously: a new reporting timetable, a new form, a new system, a new platform (the Internet), and a new development methodology that included iterative usability testing.

In hindsight, I believe the use of the Internet as the delivery vehicle for the new system and the expectation of improvements made accepting the other changes easier for users.

After the training sessions were completed, we opened up the system for two weeks for users to try out. Seventeen of the 130 preparers tested the system during this period. In early April, we closed the system to users, re-initialized the database, and conducted traditional testing to ensure correct functioning.

The system was implemented on April 15, 1999. Users had a two-week window to enter and transmit data. As I finish this paper, the reporting window is still open with most traffic expected in the last two days. User questions so far have dealt with basic system issues (such as how to log on) and questions about data fields on the new form. We plan to modify the Home Page, the FAQs, and the interface as needed to clarify common problems.

Unique Aspects of Migrating Legacy Systems to the Web

The following differences between the Internet and legacy or client-server platforms should be addressed when migrating legacy systems to the Web.

Internet Systems | Legacy or Client-Server Systems
---------------- | -------------------------------
Users may not be known | Users generally known
Unpredictable workflow changes | More stable workflow
Uncontrollable desktop | Controlled desktop
Coding for multiple browsers | Coding for a single operating system
Stateless connection | Direct connection
Can code and iterate more quickly (due to technology and organizational culture) | Must use "slower" tools and go through established committees and processes
Higher service expectations from users | Modest service expectations from users

Guidelines for Legacy-to-Web Migration

The following elements may be useful in considering a legacy-to-Internet migration: 

For our first legacy-to-Internet application, we selected a system that would be important enough to enhance the FMS Internet Program if successful but not so critical that failure would make the front page of The Washington Post.

With TROR, we simultaneously rolled out a new reporting timetable, a new form, a new system, and a new platform (the Internet). We also introduced iterative usability testing into the development methodology.

Good ideas spawn more ideas.  In future TROR versions, we are considering additional reporting capabilities for internal and external customers; additional system administrator reports; a partial data crosswalk from the old to the new system; and the ability to upload data from a spreadsheet to TROR.

The Internet engenders reengineering. With TROR, we are already seeing the analysis/preparation and data entry tasks merging.  Without the old system constraints, agencies have an opportunity to redesign their workflow.

Most Federal customers of legacy systems have never used a Web application in their work. Most need basic Web and Windows training. Reiterate this during application development and rollout. Training is available at most agencies—day or half-day classes. We also spelled out the basic functions customers were expected to know: printing, bookmarking, and using Windows and Internet navigation, such as scroll bars, radio buttons, and checkboxes.

Human Factors Implications of Migrating Systems to the Internet

The Internet carries new expectations for service and innovation.

The Federal government has hundreds of legacy systems similar to TROR that could be migrated to the Internet with accruing benefits.  To take advantage of what the Internet platform has to offer, however, such systems would need to be more usable than they are today. 

According to Ben Shneiderman of the University of Maryland, usable systems constitute the area of greatest information technology savings in the foreseeable future and a huge potential growth sector. Many Internet applications are already flagships in this new fleet of usable systems.

As U.S. taxpayers, most of us are stockholders in a "slow company" called the U.S. Government. We invest thousands of dollars per person per year in Federal information technology. We should expect a good return on our investment, one that takes advantage of new Internet technology and modern design principles.


Dumas, Joseph S., & Redish, Janice C. A Practical Guide to Usability Testing. Norwood, NJ: Ablex Publishing Corporation, 1994.

Rubin, Jeffrey. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. New York: John Wiley & Sons, 1994.
