

MARCH/APRIL 2008 (Vol. 25, No. 2) Web Extra
0740-7459/08/$25.00 © 2008 IEEE
Published by the IEEE Computer Society

Voice of Evidence
Method: How We Selected and Analyzed the Studies

This material supplements the March/April Voice of Evidence column, "Understanding the Customer: What Do We Know about Requirements Elicitation?"

Oscar Dieste, Natalia Juristo, and Forrest Shull

We followed general procedures for performing systematic reviews. However, Barbara Kitchenham’s suggested meta-analysis techniques1 can be applied only under certain circumstances (essentially, a large number of replications) that don’t generally hold in experimental software engineering. So, we developed our own procedure, inspired by Charles Ragin2 and Robert Yin and Karen Heald,3 to aggregate our results.

Inclusion and exclusion criteria

We included any comparative empirical studies about elicitation techniques that individuals can apply. By “comparative,” we mean that the studies compare two or more elicitation techniques (such as protocol analysis versus interviews), but not an elicitation technique versus “nothing” or “the typical way” (as in a paper by Mustapha Tawbi, Camille Ben Achour, and Fernando Vélez4). All publications were in English.

We rejected studies in which any of these conditions hold:

  • The empirical study describes a case study or singular experience.5
  • The elicitation technique isn’t pertinent for software engineering (such as probability elicitation6 and decision making7).
  • Any of the tested techniques is a group technique (such as focus groups8).
  • Any of the tested techniques requires computing support (such as virtual meetings9). This restriction means that we considered only face-to-face interactions between stakeholders.

Data sources and search strategy

The databases we used were Scopus, IEEE Xplore, and the ACM Digital Library. We performed some Google searches as well.

Search strategy

For Scopus, IEEE Xplore, and the ACM DL, we used the search string

(elicitation OR “requirements gathering” OR “requirements acquisition”)

AND

(capture OR empirical OR experiment OR study OR review OR evaluation)

To locate unindexed or “grey” literature (such as thesis papers10), we searched in Google using the search string

elicitation ; “requirements gathering” ; “requirements acquisition”

We also reviewed all the papers referenced by the papers we located using the above-mentioned searches.
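
As a minimal illustration of how such a Boolean string operates, the following Python sketch rebuilds the string from its two term groups and applies the same logic as a predicate over a record’s title, abstract, and keywords. The record layout and the matching helper are illustrative assumptions only; in practice the databases evaluate the query themselves.

```python
# Illustrative sketch: compose the Boolean search string and apply the same
# logic to an exported record's title, abstract, and keywords.
ELICITATION_TERMS = ["elicitation", "requirements gathering", "requirements acquisition"]
STUDY_TERMS = ["capture", "empirical", "experiment", "study", "review", "evaluation"]

def build_query() -> str:
    """Rebuild the search string used for Scopus, IEEE Xplore, and the ACM DL."""
    left = " OR ".join(f'"{t}"' if " " in t else t for t in ELICITATION_TERMS)
    right = " OR ".join(STUDY_TERMS)
    return f"({left}) AND ({right})"

def matches(record: dict) -> bool:
    """True if the title, abstract, or keywords satisfy both term groups."""
    text = " ".join(record.get(f, "") for f in ("title", "abstract", "keywords")).lower()
    return any(t in text for t in ELICITATION_TERMS) and any(t in text for t in STUDY_TERMS)

print(build_query())
# (elicitation OR "requirements gathering" OR "requirements acquisition")
#   AND (capture OR empirical OR experiment OR study OR review OR evaluation)
```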

Restrictions

We considered all papers we found using this search strategy, in any venue. We limited our searches to papers published up to and including March 2005.

Study identification and selection

We applied the search terms to the articles’ titles, abstracts, and keywords. We reviewed the titles and abstracts and decided whether the publications fulfilled the inclusion criteria. In the first stage, we quickly reviewed titles and abstracts and selected 74 publications. After reviewing those papers’ references, we selected 490 more papers. In the second stage, we reviewed the 564 papers in more detail and excluded 511 of them. The final set had 53 papers, including experiments, case studies, and surveys. We resolved all discrepancies by consensus.
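
The arithmetic behind the two screening stages is easy to verify; the short sketch below is illustrative bookkeeping only, using the counts reported above.

```python
# Sanity check of the screening counts reported above (illustrative only).
stage1_titles_abstracts = 74      # selected after the quick title/abstract review
added_from_references = 490       # added after checking the selected papers' references
reviewed_in_detail = stage1_titles_abstracts + added_from_references   # 564
excluded_in_detail = 511
final_set = reviewed_in_detail - excluded_in_detail
assert reviewed_in_detail == 564 and final_set == 53
```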

Data extraction and checking

Twenty-seven of the papers were grey literature or very difficult to get (for example, some conference papers11), so we didn’t consider them. We read the other 26 papers thoroughly and extracted data (including study type, number and type of subjects, factors, response variables, experimental task, experimental results, and so on) using a standardized extraction form (see http://grise.upm.es/sites/extras/1/extraction.doc).
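
A standardized extraction form maps naturally onto a per-study record. The dataclass below is only a rough sketch of that idea; the exact field names and types are our assumptions for illustration, not a copy of the linked form.

```python
# Rough sketch of a per-study extraction record (field names are assumptions
# approximating the standardized form, not a copy of it).
from dataclasses import dataclass, field
from typing import List

@dataclass
class StudyRecord:
    code: str                 # e.g., "S01"
    reference: str            # authors and bibliographic reference
    study_type: str           # experiment, case study, or survey
    subjects: str             # number and type of subjects
    factors: List[str] = field(default_factory=list)            # e.g., technique, experience
    response_variables: List[str] = field(default_factory=list) # what was measured
    task: str = ""            # experimental task
    results: str = ""         # summary of experimental results

s01 = StudyRecord(
    code="S01",
    reference="Adelman, 1989",
    study_type="crossover experiment",
    subjects="6 knowledge engineers; 28 groups of US Marine lieutenants",
    factors=["elicitation technique"],
    task="Extracting a hierarchy of attributes for a US Marines evaluation system",
)
```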

Aggregation

Our procedure for aggregating the results included a quality rating scheme, which we applied to each paper. We didn’t consider lower-quality studies in our aggregation. We drew conclusions by reasoning about whether studies of the same techniques and the same attributes found the same results (similar to “vote counting” approaches in which each study’s results count as one “vote” on the issue being investigated).
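
To make the vote-counting idea concrete, here is a small Python sketch under assumed inputs (toy findings, an assumed quality scale and threshold, not our actual data or cut-off): studies below the quality threshold are dropped, and the remaining studies’ outcomes are tallied per technique comparison and attribute.

```python
# Illustrative vote-counting aggregation (toy findings and an assumed quality
# threshold, not the actual data or procedure).
from collections import Counter, defaultdict

# Each finding: (techniques compared, attribute, outcome, quality score 0-10).
findings = [
    (("interviews", "protocol analysis"), "completeness", "interviews better", 7),
    (("interviews", "protocol analysis"), "completeness", "interviews better", 6),
    (("interviews", "protocol analysis"), "completeness", "no difference", 3),
    (("sorting", "laddering"), "efficiency", "laddering better", 8),
]

QUALITY_THRESHOLD = 5  # assumed cut-off; lower-quality studies are excluded

votes = defaultdict(Counter)
for techniques, attribute, outcome, quality in findings:
    if quality >= QUALITY_THRESHOLD:
        votes[(techniques, attribute)][outcome] += 1

for key, tally in votes.items():
    print(key, dict(tally))
# (('interviews', 'protocol analysis'), 'completeness') {'interviews better': 2}
# (('sorting', 'laddering'), 'efficiency') {'laddering better': 1}
```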

Table 1. The 30 studies in the aggregation. Each entry gives the code and URL, the authors and reference, the year, and the field, followed by the design, who applied the elicitation technique (the questioner), the respondent to the elicitation technique, the settings, and the task.

S01, http://grise.upm.es/sites/extras/1/S01.php
Adelman12, 1989, knowledge engineering. Design: crossover (on technique as usual). Questioner: six knowledge engineers. Respondent: 28 groups composed of four or five US Marine lieutenants. Settings: lab-like settings; two questioners elicited information from three groups, and the other four from two groups; the study was repeated twice using different elicitation techniques (crossover); sessions were carried out over nine months. Task: extracting a hierarchy of attributes for a US Marines evaluation system.

S02, http://grise.upm.es/sites/extras/1/S02.php
Agarwal and Tanniru13, 1990, information systems. Design: randomized 2 × 2 factor design (questioner experience × technique). Questioner: 20 graduate business students (inexperienced subjects) and 10 experienced knowledge engineers or system analysts. Respondent: 30 expert practitioners from industry. Settings: the respondent’s work environment; maximum elicitation time was one and a half hours; all participants received some training before the task. Task: capital budgeting/resource allocation decision.

S03, http://grise.upm.es/sites/extras/1/S03.php
Bech-Larsen and Nielsen14, 1999, marketing. Design: randomized parallel (that is, one factor with two or more levels). Questioner: unclear; we guess the experimenters. Respondent: 150 women recruited in a shopping center. Settings: the same shopping center. Task: choosing among olive oil brands.

S04, http://grise.upm.es/sites/extras/1/S04.php
Breivik and Supphellen15, 2003, marketing. Design: randomized 2 × 2 × 2 factor design (type of product × presentation order × technique). Questioner: unclear; we guess the experimenters. Respondent: 160 consumers recruited by phone. Settings: unclear; we guess the experiment was carried out in the respondent’s home. Task: choosing among restaurants and car brands.

S05, http://grise.upm.es/sites/extras/1/S05.php
Browne and Rogich16, 2001, information systems. Design: randomized parallel. Questioner: unclear; one or several interviewers, blind to the experiment’s objectives. Respondent: 45 nonfaculty employees from two universities with user-level computer experience. Settings: lab-like settings. Task: developing an Internet-based food-shopping system.

S06, http://grise.upm.es/sites/extras/1/S06.php
Burton et al.17, 1987, knowledge engineering. Design: crossover (on technique). Questioner: unclear; we guess the experimenters. Respondent: 32 undergraduate geology students. Settings: lab-like settings. Task: identifying igneous rocks.

S07, http://grise.upm.es/sites/extras/1/S07.php
Corbridge et al.18, 1994, knowledge engineering. Design: randomized parallel. Questioner: unclear; we guess the experimenters. Respondent: eight postgraduate or higher-level experts on metallurgy. Settings: lab-like settings; there were six elicitation sessions of 10 minutes each. Task: classifying the corrosion of metals.

S08, http://grise.upm.es/sites/extras/1/S08.php
Corbridge et al.18, 1994, knowledge engineering. Design: crossover (on technique). Questioner: unclear; we guess the experimenters. Respondent: 32 final-year medical students. Settings: lab-like settings; maximum elicitation time was one hour. Task: diagnosing acute abdominal conditions.

S09, http://grise.upm.es/sites/extras/1/S09.php
Corbridge et al.18, 1994, knowledge engineering. Design: crossover (on technique). Questioner: unclear; we guess the experimenters. Respondent: 16 drivers with at least two years’ experience. Settings: probably lab-like settings; maximum elicitation time was half an hour. Task: perceiving driving hazards.

S10, http://grise.upm.es/sites/extras/1/S10.php
Crandall19, 1989, knowledge engineering. Design: randomized parallel. Questioner: unclear; we guess the experimenters. Respondent: 20 experienced fireground commanders. Settings: lab-like settings. Task: making decisions on simulated fire incidents.

S11, http://grise.upm.es/sites/extras/1/S11.php
Moore and Shipman20, 2000, software engineering. Design: randomized parallel. Questioner: 10 non-computer science graduate students. Respondent: didn’t exist; a description of the future system was used instead. Settings: lab-like settings. Task: designing a new university course registration system.

S12, http://grise.upm.es/sites/extras/1/S12.php
Larsen and Naumann21, 1992, information systems. Design: randomized parallel. Questioner: 24 university teachers. Respondent: 14 university teachers. Settings: lab-like settings; a one-hour presentation explained each group’s role in the experiment; additionally, questioners received 45 minutes of training; elicitation was carried out in continuous sequence. Task: unclear; we guess a typical information systems development problem.

S13, http://grise.upm.es/sites/extras/1/S13.php
Fowlkes, Salas, and Baker22, 2000, knowledge engineering. Design: randomized parallel (on subject’s experience). Questioner: unclear; we guess the experimenters. Respondent: 10 instructor pilots and 10 student aviators. Settings: lab-like settings; elicitation sessions lasted one and a half hours. Task: identifying strategies to respond to situations in a night flight (a videotape simulated the night flight).

S14, http://grise.upm.es/sites/extras/1/S14.php
Goodrich and Olfman23, 1990, information systems. Design: randomized 2 × 2 factor design (type of task × technique). Questioner: eight high school students. Respondent: eight high school students. Settings: lab-like settings; five elicitation sessions were carried out, each lasting one and a half hours or less; students received half an hour of training before the experiment. Task: designing a reservation system and a decision support system.

S15, http://grise.upm.es/sites/extras/1/S15.php
Hudlicka24, 1996, software engineering. Design: randomized parallel. Questioner: unclear; we guess the experimenters. Respondent: 17 FAA inspectors. Settings: lab-like settings; sessions lasted between 30 minutes and two hours. Task: identifying attributes for an FAA safety performance analysis subsystem.

S16, http://grise.upm.es/sites/extras/1/S16.php
Jones and Miles25, 1998, knowledge engineering. Design: randomized parallel. Questioner: unknown. Respondent: unknown. Settings: unknown. Task: designing an expert system for text processing.

S17, http://grise.upm.es/sites/extras/1/S17.php
Burton et al.26, 1990, knowledge engineering. Design: crossover. Questioner: unclear; we guess the experimenters. Respondent: 16 professional archaeologists. Settings: lab-like settings. Task: analyzing flint artifacts and pottery shards.

S18, http://grise.upm.es/sites/extras/1/S18.php
Burton et al.26, 1990, knowledge engineering. Design: case study. Questioner: unclear; we guess the experimenters. Respondent: five professional archaeologists. Settings: lab-like settings. Task: analyzing the appropriateness of clauses in an expert system.

S19, http://grise.upm.es/sites/extras/1/S19.php
Moody, Will, and Blanton27, 1996, knowledge engineering. Design: randomized parallel. Questioner: unclear. Respondent: 42 experienced librarians. Settings: lab-like settings. Task: identifying search strategies to locate specific literature.

S20, http://grise.upm.es/sites/extras/1/S20.php
Rowe et al.28, 1996, knowledge engineering. Design: nonstandard design. Questioner: 19 experienced US Air Force technicians. Respondent: unclear; we guess the experimenters. Settings: lab-like settings. Task: identifying problems in an F-15 flight device.

S21, http://grise.upm.es/sites/extras/1/S21.php
Zmud, Anthony, and Stair29, 1993, information systems. Design: randomized parallel. Questioner: four research assistants in management or information systems. Respondent: 100 senior undergraduate business students. Settings: lab-like settings. Task: identifying elements that a handbook for job recruiting activities should include.

S22, http://grise.upm.es/sites/extras/1/S22.php
Eva30, 2001, information systems. Design: survey. Questioner: N/A. Respondent: N/A. Settings: N/A. Task: N/A.

S23, http://grise.upm.es/sites/extras/1/S23.php
Marakas and Elam31, 1998, information systems. Design: randomized 2 × 2 factor design (subject’s experience × technique). Questioner: 20 students in the last year of an undergraduate MIS program and 20 professional system analysts and software designers. Respondent: unclear. Settings: lab-like settings; questioners received one hour of training before the experiment and had unlimited time to prepare the elicitation session. Task: designing an order receiving and processing system.

S24, http://grise.upm.es/sites/extras/1/S24.php
Rugg et al.32, 1992, knowledge engineering. Design: randomized parallel. Questioner: unclear; we guess the experimenters. Respondent: 75 subjects of unknown origin. Settings: lab-like settings; each elicitation session lasted half an hour. Task: identifying fruit.

S25, http://grise.upm.es/sites/extras/1/S25.php
Pitts and Browne33, 2004, information systems. Design: not a factorial design; analysis was carried out using statistical correlation. Questioner: 54 practicing system analysts with more than two years’ experience. Respondent: one person unfamiliar with system development and blind to the experiment’s objectives. Settings: lab-like settings. Task: designing an online grocery shopping system.

S26, http://grise.upm.es/sites/extras/1/S26.php
Griffin and Hauser34, 1993, marketing. Design: case study. Questioner: unclear; we guess the experimenters. Respondent: 30 potential customers of portable food-carrying and storing devices (such as coolers, picnic baskets, and knapsacks). Settings: unclear; it looks like a real setting. Task: identifying potential customer needs about portable food-carrying and storing devices.

S27, http://grise.upm.es/sites/extras/1/S27.php
Griffin and Hauser34, 1993, marketing. Design: case study. Questioner: seven analysts with different backgrounds. Respondent: didn’t exist; interview transcripts were used instead. Settings: unclear; it looks like a real setting. Task: identifying customer needs about portable food-carrying and storing devices.

S28, http://grise.upm.es/sites/extras/1/S28.php
Schweickert et al.35, 1987, knowledge engineering. Design: case study. Questioner: unclear; we guess the experimenters. Respondent: one expert. Settings: real settings; each elicitation session lasted 70 minutes; all sessions were carried out the same day. Task: creating knowledge bases on lighting for industrial inspection.

S29, http://grise.upm.es/sites/extras/1/S29.php
Silver and Thompson36, 1991, marketing. Design: case study. Questioner: unclear; the authors specify only that questioners were “experienced interviewers.” Respondent: 17 customers of unknown origin. Settings: unclear; it looks like a real setting. Task: identifying requirements and needs for business equipment.

S30, http://grise.upm.es/sites/extras/1/S30.php
Freeman37, 2004, software engineering. Design: randomized parallel trial. Questioner: 24 senior-level undergraduate IS students. Respondent: 24 senior-level undergraduate non-IS students. Settings: real settings; questioners received 25 minutes of training before the experiment. Task: unknown.

References

  1. B.A. Kitchenham, Procedures for Performing Systematic Reviews, tech. report TR/SE-0401, Keele Univ., 2004.
  2. C.C. Ragin, The Comparative Method: Moving beyond Qualitative and Quantitative Strategies, Univ. of California Press, 1987.
  3. R.K. Yin and K.A. Heald, “Using the Case Survey Method to Analyze Policy Studies,” Administrative Science Quarterly, vol. 20, no. 3, 1975, pp. 371–381.
  4. M. Tawbi, C. Ben Achour, and F. Vélez, “Guiding the Process of Requirements Elicitation through Scenario,” Proc. 10th Int’l Workshop Database and Expert Systems Applications, IEEE Press, 1999, pp. 345–349.
  5. C. Potts et al., “An Evaluation of Inquiry-Based Requirements Analysis for an Internet Service,” Proc. 2nd Int’l Workshop Requirements Eng., IEEE Press, 1995, pp. 27–34.
  6. G.J. Browne, S.P. Curley, and P.G. Benson, “Evoking Information in Probability Assessment: Knowledge Maps and Reasoning-Based Directed Questions,” Management Science, vol. 43, no. 1, 1997, pp. 1–14.
  7. G.P. Hodgkinson, A.J. Maule, and N.J. Bown, “Causal Cognitive Mapping in the Organizational Strategy Field: A Comparison of Alternative Elicitation Procedures,” Organizational Research Methods, vol. 7, no. 1, 2004, pp. 3–26.
  8. A.P. Massey and W.A. Wallace, “Focus Groups as a Knowledge Elicitation Technique: An Exploratory Study,” IEEE Trans. Knowledge and Data Eng., vol. 3, no. 2, 1991, pp. 193–200.
  9. W.J. Lloyd, M.B. Rosson, and J.D. Arthur, “Effectiveness of Elicitation Techniques in Distributed Requirements Engineering,” Proc. IEEE Joint Int’l Conf. Requirements Eng., IEEE Press, 2002, pp. 311–318.
  10. C.J.P.M. de Bont, “Consumer Evaluation of Early Product-Concepts,” PhD thesis, Delft Univ., 1992.
  11. B. Bradburn, “A Comparison of Knowledge Elicitation Methods,” Int’l Conf. Eng. Design (ICED 91), Professional Eng. Publishing, 1991, pp. 298–305.
  12. L. Adelman, “Measurement Issues in Knowledge Engineering,” IEEE Trans. Systems, Man, and Cybernetics, vol. 19, no. 3, 1989, pp. 483–488.
  13. R. Agarwal and M.R. Tanniru, “Knowledge Acquisition Using Structured Interviewing: An Empirical Investigation,” J. Management Information Systems, vol. 7, no. 1, 1990, pp. 123–141.
  14. T. Bech-Larsen and N.A. Nielsen, “A Comparison of Five Elicitation Techniques for Elicitation of Attributes of Low-Involvement Products,” J. Economic Psychology, vol. 20, no. 3, 1999, pp. 315–341.
  15. E. Breivik and M. Supphellen, “Elicitation of Product Attributes in an Evaluation Context: A Comparison of Three Elicitation Techniques,” J. Economic Psychology, vol. 24, no. 1, 2003, pp. 77–98.
  16. G.J. Browne and M.B. Rogich, “An Empirical Investigation of User Requirements Elicitation: Comparing the Effectiveness of Prompting Techniques,” J. Management Information Systems, vol. 17, no. 4, 2001, pp. 223–249.
  17. A.M. Burton et al., “A Formal Evaluation of Knowledge Elicitation Techniques for Expert Systems: Domain 1,” Research and Development in Expert Systems IV: Proc. 7th Ann. Technical Conf. British Computer Soc. Specialist Group on Expert Systems (Expert Systems 87), Cambridge Univ. Press, 1987, pp. 136–145.
  18. B. Corbridge et al., “Laddering: Technique and Tool Use in Knowledge Acquisition,” Knowledge Acquisition, vol. 6, no. 3, 1994, pp. 315–341.
  19. B. Crandall, “A Comparative Study of Think-Aloud and Critical Decision Knowledge Elicitation Methods,” ACM Sigart Bull., Apr. 1989, pp. 144–146.
  20. J.M. Moore and F.M.I. Shipman, “A Comparison of Questionnaire-Based and GUI-Based Requirements,” Proc. 15th IEEE Int’l Conf. Automated Software Eng., IEEE Press, 2000, pp. 35–43.
  21. T.J. Larsen and J.D. Naumann, “An Experimental Comparison of Abstract and Concrete Representations in Systems Analysis,” Information & Management, vol. 22, no. 1, 1992, pp. 29–40.
  22. J.E. Fowlkes, E. Salas, and D.P. Baker, “The Utility of Event-Based Knowledge Elicitation,” Human Factors, vol. 42, no. 1, 2000, pp. 24–35.
  23. V. Goodrich and L. Olfman, “An Experimental Evaluation of Task and Methodology Variables for Requirements Definition Phase Success,” Proc. 23rd Ann. Hawaii Int’l Conf. System Sciences, IEEE Press, 1990, pp. 201–209.
  24. E. Hudlicka, “Requirements Elicitation with Indirect Knowledge Elicitation: Comparison of Three Methods,” Proc. 2nd Int’l Conf. Requirements Eng., IEEE CS Press, 1996, pp. 4–11.
  25. S.R. Jones and J.C. Miles, “The Use of a Prototype System for Evaluating Knowledge Elicitation Techniques,” Expert Systems, vol. 15, no. 2, 1998, pp. 83–97.
  26. A.M. Burton et al., “The Efficacy of Knowledge Acquisition Techniques: A Comparison across Domains and Levels of Expertise,” Knowledge Acquisition, vol. 2, no. 2, 1990, pp. 167–178.
  27. J.W. Moody, R.P. Will, and J.E. Blanton, “Enhancing Knowledge Elicitation Using the Cognitive Interview,” Expert Systems with Applications, vol. 10, no. 1, 1996, pp. 127–133.
  28. A.L. Rowe et al., “Toward an On-line Knowledge Assessment Methodology: Building on the Relationship between Knowing and Doing,” J. Experimental Psychology: Applied, vol. 2, no. 1, 1996, pp. 31–47.
  29. R.W. Zmud, W.P. Anthony, and R.M. Stair, “The Use of Mental Imagery to Facilitate Information Identification in Requirements Analysis,” J. Management Information Systems, vol. 9, no. 4, 1993, pp. 175–191.
  30. M. Eva, “Requirements Acquisition for Rapid Applications Development,” Information & Management, Dec. 2001, pp. 101–107.
  31. G.M. Marakas and J.J. Elam, “Semantic Structuring in Analyst Acquisition and Representation of Facts in Requirements Analysis,” Information Systems Research, vol. 9, no. 1, 1998, pp. 37–63.
  32. G. Rugg et al., “A Comparison of Sorting Techniques in Knowledge Acquisition,” J. Knowledge Acquisition, vol. 4, no. 3, 1992, pp. 279–291.
  33. M.G. Pitts and G.J. Browne, “Stopping Behavior of Systems Analysts during Information Requirements Elicitation,” J. Management Information Systems, vol. 21, no. 1, 2004, pp. 203–226.
  34. A. Griffin and J.R. Hauser, “The Voice of the Customer,” Marketing Science, vol. 12, no. 1, 1993, pp. 1–27.
  35. R. Schweickert et al., “Comparing Knowledge Elicitation Techniques: A Case Study,” Artificial Intelligence Rev., vol. 1, no. 4, 1987, pp. 245–253.
  36. J.A. Silver and J.C. Thompson, “Understanding Customer Needs: A Systematic Approach to the ‘Voice of the Customer,’” master’s thesis, Sloan School of Management, Massachusetts Inst. of Technology, 1991.
  37. L.A. Freeman, “The Effects of Concept Maps on Requirements Elicitation and System Models during Information Systems Development,” Concept Maps: Theory, Methodology, Technology. Proc. 1st Int’l Conf. Concept Mapping, 2004, pp. 257–264.