
MAKING a DIFFERENCE

MEASURING the IMPACT of
INFORMATION on DEVELOPMENT

Proceedings of a workshop
held in Ottawa, Canada
10–12 July 1995

EDITED BY
Paul McConnell


mail: PO Box 8500, Ottawa, ON, Canada K1G 3H9

phone: 613 236-6163 ext. 2087

e-mail: order@idrc.ca

Gopher: gopher.idrc.ca

World-Wide Web: http://www.idrc.ca

Contents

Foreword

v

Acknowledgments

vi

Measuring the Impact of Information on Development: Overview of an International Research Program

Paul McConnell

1

Case Studies

CIDE/REDUC Case Study

Warren Thorngate, Alfredo Rojas, and Maria Francini

31

Impact of the Semi-Arid Tropical Crops Information Service (SATCRIS) at ICRISAT

L.J. Haravu and T.N. Rajan

49

Impact of Electronic Communication on Development in Africa

Nancy Hafkin and Michel Menou

71

Impact of Information on Rural Development: Background, Methodology, and Progress

Kingo Mchombu

87

Impact of Information on Policy Formulation in the Caribbean

Audrey Chambers and Noel Boissiere

103

Information for Decision-Making in the Caribbean Community

Carol Collins

123

Impact Research Studies

Using LISREL to Measure the Impact of Information on Development: London Site Pilot Study

J. Tague-Sutcliffe, L. Vaughan, and C. Sylvain

135

Information Factors Affecting New Business Development: Progress Report

Charles T. Meadow and Louise Felicie Spiteri

153

Related Impact Activities

Benefit-Cost Analysis Progress Report: Applications to IDRC Impact Indicators Research

Forest Woody Horton, Jr

177

Information for Policy Formulation: Latin America and the Caribbean

Fay Durrant

183

Measuring the Effects of Information on Development

Warren Thorngate

195

Reporting Information About Studies of Information

Charles T. Meadow

201

INIMCAS Listserv — “INIMCAS-L”: Analysis of Initial Use

Ronald Archer

207

Can Computer Conferencing Be Effective for Information Policy Formation?

Warren Thorngate and David Balson

211

Measuring the Impact of Information on Development: Related Literature, 1993-1995

Bev Chataway and Atsuko Cooke

237

Participants

245

Foreword

This book is a progress report on an international research program investigating the impact of information on development. In 1991, when the idea was first broached of a systematic examination of the role and value of information in the development agenda, several colleagues voiced their doubts about the feasibility of such a task, but they still felt sufficiently intrigued by the consequences of success that they were prepared to rise to the challenge.

One of the reasons for undertaking this work was to help prepare more effective arguments that could be used to change attitudes toward investing in information for development in the face of increasingly tough competition for financial resources. The financial pressures have in no way diminished in the intervening years; indeed, the rationale for moving ahead with this research has become even stronger.

Supported by the International Development Research Centre (IDRC), this complex, multiphase, multidisciplinary, and multipartner research program has grown from an initial exploratory workshop early in 1992 to an international network that involves researchers from Africa, Asia, the Caribbean, Latin America, Europe, and North America. The need to share information and experiences about the evolving research program is a paramount concern of IDRC and other members of the research network.

The initial phases of the work gave rise to a monograph edited by Michel Menou and published in 1993 by IDRC. Entitled “Measuring the Impact of Information on Development,” it provided a strong theoretical foundation for launching an extensive field-testing of the concepts and methodologies. The recent workshop in Ottawa provided a timely opportunity to bring together members of the research network for an exchange of views on all of the action-research activities now in train within this program.

This current volume is also part of our ongoing commitment to making the research accessible to a wider audience. Although the research program is still only part of the way through its timetable, we felt it important to share these initial experiences, survey instruments, and other materials as a source of practical advice and interesting commentary for others who might also wish to try answering the “impact” question.

Martha B. Stone

IDRC, Ottawa

Acknowledgments

This book is based on presentations made during a 3-day workshop held at the International Development Research Centre (IDRC) in July 1995. Its appearance has proved possible only through an extensive cooperative effort involving intellectual, logistical, organizational, and editorial support.

I should like to thank all of the participants for giving their time and sharing their experiences during the workshop. The proceedings benefited greatly from the fine work of the chairs and rapporteurs: Ronald Archer, Atsuko Cooke, José-Marie Griffiths, L.J. Haravu, Michel Menou, Tavinder Nijhawan, and Martha Stone.

I am particularly indebted to the authors for delivering their manuscripts promptly, despite the formidable deadlines. On this point, I wish to record my thanks to my colleagues Carole Laplante and Tavinder Nijhawan for assisting me with my own contribution to the book. The publishing consultant, Katherine M. Kealey, deserves a special mention for her help in bringing this substantial publication together smoothly and on time.

Logistical arrangements for the workshop were well taken care of thanks to the experience and hard work of several IDRC staff in Ottawa: Judy Cray, Margaret Langill, Carole Laplante, and Susan Warren.

I am happy to single out two other acknowledgments. The bulk of the preparatory work for the meeting fell on the shoulders of the manager of the impact project at IDRC, Ronald Archer. The success of the event owes much to his resilience, good nature, and organizational skills. Finally, I should like to acknowledge the leadership provided by Martha Stone throughout this complex research program. Martha has provided the illumination as we venture along paths seldom traveled!

Paul McConnell

IDRC, Ottawa

Measuring the Impact of Information on Development: Overview of an International Research Program

Paul McConnell1
(with an Appendix by José-Marie Griffiths)

Background to the Research Program

What is the link between “information” and “development”? This is the fundamental question driving an international research effort that was launched by Canada’s International Development Research Centre (IDRC) early in 1992. The purpose of this paper is to provide a brief introduction to the research program and to serve as a guide to the proceedings of a workshop on this subject that took place at IDRC in July 1995.

Terms such as “global information highway,” “information revolution,” and “information economy” are used routinely to illustrate the profound role of information in modern societies. Yet it is easy to make unverified assumptions about the nature of benefits being generated in an “information society.” Despite the high profile enjoyed by information issues, especially those involving new information technologies, it is perhaps ironic to find that many library and other information services are feeling increasingly vulnerable in the tight funding environment that is prevalent today.

This situation has fueled an increasing number of studies in recent years on the relevance, usefulness, value, and impact of information in various settings. Relevant publications include, for example, “Special Libraries: Increasing the Information Edge” (Griffiths and King 1993), “The Impact of the Special Library on Corporate Decision-Making” (Marshall 1993), “The Value and Impact of Information” (Feeney and Grieves 1994), and “The Value of Information to the Intelligent Organisation” (HERTIS 1994), among many others.

1 Director, Program Coordination and Development, Information Sciences and Systems Division, International Development Research Centre (IDRC), 250 Albert St, PO Box 8500, Ottawa, Ontario, Canada K1G 3H9, and Vice Chancellor for Computing and Telecommunications, University of Tennessee, 507 Andy Holt Tower, Knoxville, Tennessee 37996-0157 USA, respectively.

Hopefully, this growing body of knowledge will be used effectively to ensure that the significant practical contribution of the information infrastructure is fully recognized and strengthened accordingly, not neglected and eroded.

Against such a backdrop that illustrates the evolving information climate in industrialized countries of the North, what is the picture emerging in the developing countries of the South? It seems reasonable to assume that information holds great potential as a powerful and reusable resource for development. It is an essential input, catalyst, and product of change. Information can be a precious fuel in the process of transformation taking place in developing societies. Despite this impressive potential, however, there are many economic, technical, political, and other constraints that prevent it from being realized.

In a funding environment so much more difficult than the one enjoyed in the North, there is even more urgency to demonstrate the value and impact of investing in the information sector. Perhaps one could argue that the developing countries need only observe the conspicuous role of information in the North to see the benefits of accelerating their local investment in this sector. But such an approach would ignore the different perceptions of need and benefit held in different societies. It would fail to acknowledge the extent of competition for severely limited financial resources.

This is not to ignore the valuable inputs provided by certain studies generated in the North; the impressive chapter by Badenoch et al. (1994) on “The Value of Information” is a case in point (who could argue with its opening sentence: “There is a very good reason why the ‘value of information’ is a neglected and under-researched subject: it is well nigh impossible to establish an agreed definition of what we mean by the terms.”?). Nevertheless, the primary rationale for strengthening the role and use of information must be based on circumstances and priorities that have direct relevance to the South. The current research program is a response to that need.

Statement of the Research Problem

The nature of the research problem has been described in a related publication (Menou 1993, p. ix), and is worth restating here:

Although we have witnessed a steady growth in the provision of information services in developing countries, a number of fundamental questions remain unanswered. The people of these countries question the relevance and appropriateness of the services offered. Development assistance agencies are concerned about problems of sustainability. The extent to which information services actually contribute to the empowerment of people and the accountability of the institutions concerned are subjects of controversy and debate. Logic dictates that information is an essential resource for the social and economic development of Third World countries, but how can this be demonstrated? How tangible is the linkage between information investments and the achievement of specific development goals? The limited status accorded to information in most developing countries suggests that its potential value is not self-evident.

The assessment of development efforts in information infrastructure and services has mainly relied upon measures of input and immediate output. Although information specialists may claim, for example, that a 5,000 record database is now operational, policymakers and decision-makers understandably look for a clear indication of its overall socio-economic benefits, and ask ‘so what?’.

The answer, so far, has been axiomatic. It is expressed in sentences such as ‘Information is the most critical resource and plays a fundamental role in development’. Yet there is no systematic body of empirical evidence to support this assertion, especially quantitative evidence. Unless a more appropriate answer is found, people involved in information-related programs will have difficulty justifying a high level of priority and a share of scarce resources compared with those in disciplines whose relationship to development is better established.

This is the challenge that prompted IDRC to create the current research program. In essence, the research program is seeking an answer to the specific question, “What is the impact of information on development?” This is a simple enough question to pose, but the complexities of interpreting it, exploring the various elements of a response, and formulating a viable research program have proved no simple undertaking.

Indeed, the scarcity of previous work in this domain is leading the research program to break new ground. It is worth taking a moment to clarify the scope of the program and its objectives. Clearly, as noted earlier, others have been investigating different aspects of the value and impact of information. But there are limits to the relevance of these studies to the central issues being explored here:

• The vast majority of previous studies describe approaches and experiences from a Northern perspective in industrialized countries that have a different information tradition and infrastructure than in the South.

• Much of this work in the North is confined to a particular organization or sector, rather than providing a more comprehensive assessment of the larger “impact” question.

• Although there are numerous studies and guides for helping information managers evaluate the performance of a particular information service, these often involve simple quantitative measures such as counts of visitors, inquiries, or loans, and perhaps a survey of user opinion, and they seldom attempt to find an answer to the more probing “so what?” question.

• Many of these studies interpret “impact” in a rather short-term sense related to immediate effects of information on the user community, rather than on the longer term consequences of the use of that information.

Goal and Outcomes of the Research Program

The research network that IDRC is supporting is somewhat ambitiously attempting to measure the impact of information not on a particular individual or institution, but on development. The term “development” is used here as shorthand to describe the complex process of change that is taking place in less-industrialized societies. “Impact” in this sense involves demonstrating the social, cultural, economic, political, environmental, and other benefits that are associated with the consequences of making effective use of information and, indeed, the problems or missed opportunities associated with not having (or not using) information.

The program is not focused at the level of indicators for evaluating the performance of an individual library or information service. It is situated at the other end of the spectrum, where it is exploring the role that information systems and services play in bringing about more widespread improvements in social and economic conditions.

The declared goal of the research program, therefore, is to devise and apply a methodology for measuring the benefits and impact of information on development. But even this daunting task cannot be the end-point, for the real impact of this research program will be achieved through the effective use of its findings to bring about a shift in attitudes toward the role of information in development, corresponding shifts in development policy, an increase in the allocation of funds to be invested in this field and, ultimately, an improvement in the management, role, and use of information as a strategic resource for development.

Thus there are three compelling reasons why it is important to pursue this investigation into the relation between information and development impact:

• A more convincing demonstration of the benefits could encourage developing countries to take better advantage of their information resources.

• A clearer understanding of the relationship between project inputs, outputs, and outcomes could improve the design of systems and services and help select the most cost-effective options.

• Recognition of the social and economic returns on investment in information activities is likely to strengthen their financial viability and, hence, their long-term sustainability.

Yet, despite the significance of these intended outcomes, this is an area of information science research that has been left largely unexplored. To be sure, there have been some previous contributions that have moved beyond the anecdotal. The paper by Boon (1992), for example, is one that touches on a number of the issues being addressed in the IDRC-supported research program.

Slightly more familiar are studies that have focused on the development impact of introducing information technologies, e.g., Ang and Pavri (1994), CABI (1995), and Hanna and Boyson (1993).

The bibliography compiled by Chataway and Cooke (this volume) identifies additional writings of some relevance. But more searching examinations of the relationship between information and development are few and far between. There are several understandable reasons for this paucity, including the complexity of the task, the number of external variables that can affect information usage, the lengthy timeframe for demonstrating results, the difficulty in extrapolating from one information site to another, the volume of data that must be processed, the multidisciplinary nature of the research, and the considerable cost of mounting a meaningful research program. Nevertheless, despite these formidable obstacles, there were some who still believed that the importance of the goal clearly merited a concerted international response.

Launching the Research Program

Research Framework and Outputs

The international response was formally initiated in 1992 by Martha Stone of IDRC. Acknowledging that no single institution had the human, financial, or information resources to mount an effective exploration of “impact,” IDRC has encouraged several interested researchers to join forces at various points in a long-term, multiphase program of work. The principal components of the four stages of the research program are:

Stage I: Exploring the feasibility and scope of a substantive investigation of “impact.”

Stage II: Formulating an appropriate methodology for assessing the impact of information on development.

Stage III: Implementing and refining the methodology through several case studies and associated research.

Stage IV: Reviewing and disseminating the findings for greatest effect.

The specific outputs from this research program are expected to include:

• A detailed description of a tested methodology for assessing impact, incorporating feedback from case studies, workshops, and associated research.

• A compilation of documented case studies describing the impact of information in different development domains.

• A practical handbook or similar presentation of the impact methodology, illustrated by the case studies, and incorporating related material to encourage maximum benefits from the process.

• Several contributions to the scientific literature on experience gained and lessons learned while undertaking the various stages of this research.

• An international network of individuals and institutions collaborating on further dimensions of the “impact” problematique.

These outputs will be applied toward bringing about the intended outcomes described in the preceding section, i.e., the shifts in perception, attitude, policy, and investment toward information activities and their role in the development agenda.

Completing Stages I and II

Stage I, i.e., the in-depth analysis of relevant concepts, previous work, and possible new approaches, began in April 1992 with an exploratory workshop at IDRC Headquarters in Ottawa, Canada. The discussions continued over the next 8 months via a structured computer conference moderated by Michel Menou. The core group of participants included 16 specialists drawn from the private, government, and academic sectors in North America, Europe, the Caribbean, Latin America, and the Middle East. An additional group of 13 specialists served as a consultative panel, providing feedback in response to periodic summaries of the conference proceedings. The major product of Stage I was a comprehensive report that dealt with definition of concepts, different types of benefit and associated measures or indicators, procedures for gathering data, and possible approaches to assessing benefits and impact.

This report was the primary input to Stage II of the research program, during which the conclusions of the computer conference were examined and validated and were used as the basis for formulating a practical methodology for assessing the impact of information on development. Stage II took place in February 1993 in Nairobi, Kenya, using a facilitated workshop format. The 15 participants included five from the original computer conference to provide some continuity; most of the additional participants were senior professionals from developing countries. The deliberations at the workshop succeeded in drafting a framework for impact assessment, including a methodology that was appropriate for field-testing in a series of case studies.

In keeping with the objective of sharing experience gained during this lengthy research effort, a number of items have been published or are in press. Most notably, IDRC published a monograph providing a detailed account of Stages I and II, including the assessment methodology, illustrative annexes, and an extensive bibliography (Menou 1993).

A related need identified during Stage II for a practical guide to help information managers perform benefit-cost analysis led to another IDRC publication (Horton 1994). An evaluation of the computer conference used in Stage I is reported in the current publication (Thorngate and Balson this volume). In addition, and perhaps more widely accessible, some shorter pieces have been published in the information literature (e.g., Stone 1993; Stone and Menou 1994). To help readers interpret the case studies that are reported later in this publication, a brief overview of the assessment methodology is provided here.

Proposed Impact Assessment Methodology

The following outline has been based on Chapter 6, “Preliminary Framework for Impact Assessment,” contained in the IDRC monograph (Menou 1993). The first seven are prerequisites:

1. Define the user community.

2. Define the development issue and program to which the information activity or project is contributing.

3. Identify the main patterns of operation of the information life cycle and the factors that influence its effectiveness for the defined user community and development issue.

4. Describe the target audience to whom the findings will be directed.

5. Describe the information use environments (IUEs) of the user community and the target audience.

6. Set up standard guidelines for collecting, analyzing, interpreting, and presenting anecdotes and other data.

7. Assemble baseline data.

Next, working collaboratively with representatives of the various groups of beneficiaries (including end-users and target audiences), determine the perceived or expected benefits of their work that might be linked to information activities and products.

8. Determine which primary objectives are being served and their outcomes.

9. Develop a hierarchy of the objectives or outcomes.

10. Define corresponding outputs and the required inputs.

11. At each level of the hierarchy, identify critical factors that are either “informational” in nature or are information-dependent.

12. Define the indicators in the framework that would show that the appropriate information input is secured and improved.

The methodology goes on to map out a model showing permutations of variables:

• Object of Evaluation (e.g., program, project, service, specific activity, product).

• Evaluation Perspective (e.g., information service provider, user (actual and potential), beneficiary, donor agency, the community).

• Generic Assessment Measures (e.g., in relation to inputs required to perform the information activity, outputs, usage, outcomes, and the particular domain under study).

• Derived Measures, or Indicators.

Five types of assessment indicator are identified:

1. Performance indicators, relating inputs to outputs.

2. Effectiveness indicators, relating outputs to usage.

3. Cost-effectiveness indicators, relating inputs to usage.

4. Cost-benefit indicators, relating inputs to outcomes.

5. Impact indicators, relating usage to outcomes (and domain characteristics).
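
To make these relationships concrete, the following minimal sketch expresses the five indicator types as simple ratios over hypothetical assessment data. The field names, figures, and the choice of ratio are illustrative assumptions only; the framework does not prescribe particular formulas.

from dataclasses import dataclass

@dataclass
class AssessmentData:
    inputs: float    # e.g., annual cost of the information service
    outputs: float   # e.g., records added or documents delivered
    usage: float     # e.g., recorded uses: searches, loans, inquiries
    outcomes: float  # e.g., estimated value of benefits to users

def indicators(d: AssessmentData) -> dict:
    # Each derived measure relates two of the generic assessment measures.
    return {
        "performance": d.outputs / d.inputs,        # inputs to outputs
        "effectiveness": d.usage / d.outputs,       # outputs to usage
        "cost_effectiveness": d.usage / d.inputs,   # inputs to usage
        "cost_benefit": d.outcomes / d.inputs,      # inputs to outcomes
        "impact": d.outcomes / d.usage,             # usage to outcomes
    }

# Hypothetical service: 50,000 spent, 5,000 records, 12,000 uses,
# 85,000 in estimated benefits (all figures invented).
example = AssessmentData(inputs=50000, outputs=5000, usage=12000, outcomes=85000)
for name, value in indicators(example).items():
    print(name, round(value, 3))

Such ratios only become meaningful once the underlying quantities have been defined and collected consistently, which is precisely what the prerequisite steps above are intended to secure.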

Once the data have been gathered and analyzed, and the relationship between inputs, outputs, and benefits/outcomes has been determined, a strategy must be developed for communicating the findings to the target audience(s). Being aware of their perceptions of critical issues and benefits is essential; obtaining this information is a key step in the methodology outlined above. The assessment results can then be repackaged in the most effective way.

The methodology outlined here was derived by professional information managers and others, but it is still a theoretical construct. It encompasses many steps that may not prove feasible in practice. It may not deal adequately with certain aspects. But it does provide a basic framework that can serve as a common starting point for testing via case studies.

Stage III — Moving from Theory into Action

Stage III is the “action-research” phase. The theoretical concepts and methodology produced as outputs from Stages I and II must be tested in real life environments. The approach taken has been to develop a series of specific case studies in different information domains. The case studies will attempt to apply the provisional assessment framework and, in so doing, provide practical feedback on its use, devise improvements to the methodology, and identify operational and other issues needing further attention. In addition to this set of case studies, as noted in the following, some further action-research work also has been initiated to help illuminate other aspects of the “impact” question using field-testing.

Case Studies

Two of the reasons why this research area has seldom been explored in a comprehensive way are that it requires a commitment to longitudinal studies (rather than brief surveys) to reveal the consequences of information use over the longer term, and it requires examples drawn from different information settings to reveal the extent of applicability of the approach and transferability of findings.

In this research program, however, it has proved possible to assemble an interesting mix of case studies. To increase the level of confidence in the results, the case studies have been selected from different geographical and information environments. They have been drawn from Africa, Asia, the Caribbean, and Latin America, and different characteristics of information activities (e.g., extent of local/regional coverage, single/multisectoral, types of user, types of target audience) have been taken into consideration.

1. Single Sector, Regional Network Serving Senior Policymakers (Latin America) This study is a collaborative effort between the Centro de Investigación y Desarrollo de la Educación (CIDE), Santiago, Chile, and Carleton University, Ottawa, Canada. Its primary focus is to assess the impact of CIDE’s education information network (REDUC) on ministers and senior bureaucrats in the education field in Mexico and Central America.

2. Single Sector Information System Serving an International Research Community (Asia/Africa) The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) is located in Hyderabad, India. This is one of a global network of International Agricultural Research Centres (IARCs) supported through a consortium of donors. This case study is examining the impact of SATCRIS, the semi-arid tropics research information service maintained by ICRISAT.

3. Regional Information Infrastructure (Electronic Communication in Africa) The United Nations Economic Commission for Africa, through its Pan-African Development Information Systems (PADIS), has been active in promoting the introduction and use of electronic communication. This case study is exploring the impact of using electronic communication in selected countries.

4. Provision of Community-Level Information (Africa) The Department of Library and Information Studies, University of Botswana, has been researching the provision of information for rural development in Botswana, Malawi, and Tanzania. The case study is looking at information needs of the community and the impact of gaining this information.

5. Regional Study, Information and Policy Formulation (Caribbean) The Documentation and Data Centre of the Institute of Social and Economic Research, University of the West Indies, is focusing its study on the information needs of a target group of senior policymakers from the English-speaking Caribbean and their subsequent use of the information provided.

6. Multisectoral Study, Regional Information Networks (Caribbean) The Caribbean Community Secretariat in Georgetown, Guyana, helps coordinate several regional information services, including agriculture, trade, industry, etc. This case study is examining the impact of these information services on decision-making, research, and action in different sectors.

Research Studies

These six case studies will follow, more or less, the methodology proposed in the preliminary framework for impact assessment. Clearly, however, the framework is not complete; nor does it preclude refinements or the pursuit of alternative approaches.

To provide further insight on methods for exploring the link between development action and information, IDRC is supporting an investigation into a variation on the methodology; instead of relying primarily on questionnaires about the use of information to highlight causal relations between information and action, the technique of causal modelling is being attempted, using the Linear Structural Relations (LISREL) software to perform the statistical analysis. The research will involve pilot projects in Canada, being conducted by the University of Western Ontario and the University of Toronto, and then move to more extensive testing by the Institute of Scientific and Technical Information of Shanghai, in China.
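
For readers unfamiliar with LISREL, the general linear structural relations model it fits can be written in the standard textbook notation below; this is the generic form of the model, not a description of the specific variables chosen for the pilot studies.

\begin{aligned}
\eta &= B\eta + \Gamma\xi + \zeta &&\text{(structural relations among latent variables)}\\
y &= \Lambda_y \eta + \varepsilon &&\text{(measurement model for endogenous indicators)}\\
x &= \Lambda_x \xi + \delta &&\text{(measurement model for exogenous indicators)}
\end{aligned}

In an application of the kind described here, the exogenous latent variables \xi might represent constructs such as information availability, the endogenous latent variables \eta constructs such as business performance, and the observed x and y the corresponding questionnaire responses.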

Reporting on Progress — The Ottawa Workshop

The most recent impact workshop, held at IDRC in July 1995, was scheduled as an essential milestone within Stage III of the international research program. The case studies and associated research projects were initiated in 1993 and 1994 following completion of Stages I and II. They are intended to provide substantive feedback on the validity of the proposed impact methodology, i.e., on the extent to which it generates meaningful assessment of impact of information and does so using an operational approach that is practical and acceptable. Each individual study, hopefully, will contribute useful experience and insight on the original concepts and methodology.

In addition, there may be larger issues emerging from a more systematic look at the overall body of work now taking place. Furthermore, many of the participants in this research program had never met each other and, at best, had been able to engage only in occasional electronic discussion of methodological and conceptual issues of mutual interest; the prospect of a more extensive exchange of ideas and practical advice was attractive. For these reasons, it was seen as highly desirable to bring all the principal researchers together for a face-to-face meeting part way through the action-research phase.

Workshop Objectives and Agenda

The workshop objectives were addressed at three levels:

• Individual case studies — providing feedback to improve their design and implementation. During the workshop, leaders of studies were able to describe their particular experiences to date and obtain practical advice from other participants.

• Preliminary impact methodology — reviewing possible revisions to be incorporated into the updated version expected in Stage IV. Issues affecting the draft impact assessment framework (e.g., relative priorities, need for additional guidelines, useful illustrations, practical tools that have been developed in case studies) were identified during the various presentations.

• Overall impact research program — identifying related and longer term impact issues that will influence future program directions. Participants were able to explore program plans, timetables, forthcoming events where the impact program could be presented, associated research of potential interest, communication among participants, publicity, additional international partners, and other ideas to help guide the future development of the research program.

The workshop proceedings were grouped into five sessions:

• Presentation of the six case studies.

• Presentation of the related research studies.

• Presentation of related impact activities.

• General discussion of methodological updates, issues, concerns.

• General discussion of related activities and future plans.

Overview of Papers Included in the Workshop Proceedings

The first part of the publication, Case Studies, contains full accounts of each of the six investigations. Authors were requested to provide an introduction to the scope of their particular study, the approach taken, examples of questionnaires or other instruments developed for the study, a status report, and a special note on any specific issues or concerns that required further discussion.

The first case study, by Thorngate, Rojas, and Francini, describes the efforts of CIDE/REDUC to improve the use of information for policymaking in the educational field. It is testing the hypothesis that if policy analysts are trained to acquire the skills needed to be proficient in the search, analysis, and evaluation of educational research findings, they will also be better able to plan and implement more effective educational projects and programs.

The case study is following the experiences of a group of policy analysts in Central America through training workshops and subsequent application of their information skills. Initial findings show the relative absence of a culture of using research information by policymakers in this environment, and the need to find ways of improving communication techniques to influence them. Detailed responses transcribed from interviews help illustrate the progress being made to date.

The case study reported by Haravu and Rajan focuses on the impact of the Semi-Arid Tropical Crops Information Service (SATCRIS) on the communities (primarily research) it is expected to serve. The report follows closely the methodology outlined in the provisional assessment framework, describing the information use environments, development goals, SATCRIS products and services, target audiences, and methods of data collection (including a questionnaire designed for the study). Some preliminary findings are presented.

Hafkin and Menou report on a multicountry study of the impact of information communicated through electronic networking. The focus of the study is on the impact of applying electronic communications in the users’ businesses and its contribution to problem-solving or other practical benefits. Extensive preparatory work has been done on appropriate sampling techniques and survey instruments for the in-depth studies taking place in Ethiopia, Senegal, Uganda, and Zambia.

The impact of information to support rural development is the theme of the case study presented by Mchombu. The project involves monitoring the process of setting up and using information service outlets in six rural communities. Information needs have been identified and these are now being interpreted in terms of “anticipated benefits” that can be measured to illustrate impact. The report provides several examples to illustrate the relationship between types of information, corresponding benefits, and relevant information services and products, and describes approaches being taken to collect data on the effects of introducing the community information centres.

The presentation by Chambers and Boissiere describes a concern to improve the flow and transformation of research findings into information products and services that will influence policymaking in the Caribbean. Of particular note in this report are the approach to obtaining the sample of target users and the detailed description of the methodology for selecting appropriate indicators that will be monitored during the case study.

The final case study, presented by Collins, describes a complex attempt to look at the impact of several regional information systems in the Caribbean. The report identifies some of the practical challenges encountered when applying the provisional assessment framework and the adaptations that have been made to make the methodology viable in the local environment. The need to maintain and improve information services during the impact study can place an extra workload on the assessor, but a positive dimension of this is that the valuable feedback being obtained from users and target audiences can be used to reinforce the services.

The second part of the proceedings contains reports from the Impact Research Studies being undertaken in Canada prior to further investigation in China. Pilot projects are taking place at two sites to develop and test a mathematical model that will reveal the relationships between variables indicative of economic conditions, information availability, and information use. The results of the pilot projects will be used in the main study to assess the impact of information on small businesses in Shanghai.

The presentation by Tague-Sutcliffe, Vaughan, and Sylvain describes the pilot study being undertaken in London, Ontario. The report describes the collection of selected data from small businesses and its analysis using the LISREL software to determine whether a causal relationship could be demonstrated.

There has been very little experience in using the modelling approach in this field of information science research, and so one of the primary tasks has been to refine the methodology for the current application. The report includes a summary of experience gained in the London pilot study, a tentative plan for the Shanghai study, and several figures and appendices containing detailed outputs of the initial phase of activity.

The companion study reported by Meadow and Spiteri is attempting to answer the question, “To what extent does the availability and use of information affect the success of newly established small businesses in the province of Ontario?” Again, the purpose of the pilot study is to contribute toward developing the methodology for determining variables and collecting the requisite data, leading to full testing in the Shanghai study. The paper includes a discussion of perceptions and assumptions about information in this field, the plan for data collection, and the nature of the variables to be incorporated in the model.

The third part of the publication is a compilation of Related Impact Activities. Most of these papers were presented or tabled in Ottawa. They provide useful perspectives on various aspects of the overall research program. The first paper, for example, offers some reflections by Horton on the current context for pursuing benefit-cost analysis as part of the impact research program now that the detailed guide to benefit-cost analysis has been published by IDRC (Horton 1994).

In her presentation, Durrant describes the rationale prompting the drafting by IDRC of a research program in Latin America and the Caribbean that would focus on the impact of information (and communication/information technologies) in two areas — policy formulation, and the performance of small and medium enterprises (SMEs). Examples of IDRC-funded projects and consultancies are used to illustrate the types of research and methodological and development issues that could be addressed in this program.

The next two papers were prompted by ideas exchanged by members of the research network via the Listserv. These “think-pieces” have been included here because they provide interesting perspectives on some of the principal questions raised in the research program. The essay by Thorngate is from the vantage point of a social psychologist looking at the process of policy decision-making and the relatively low influence of formal information inputs. He proposes alternatives to expressing the value of information in traditional economic terms, suggesting instead that measures of “attention” and “attitudinal change” might be revealing.

In his essay, Meadow picks up on some of the points raised by Thorngate and offers additional challenges to the assumptions often made about familiar concepts used in evaluating information services. He illustrates, for example, the ambiguity evident in the use of the term “relevance” when applied to information and notes the difficulty in sharing data among information science researchers in the absence of agreed-upon standards.

The use of electronic communication to facilitate discussion of research on information impact is the subject of the next two papers. The first, by Archer, describes the use being made of the special Listserv that was introduced by IDRC in February 1995 to support Stage III of the research program. Of particular interest is the way in which this technology is now reaching all members of the impact research network including those in developing countries. This is a major advance over the situation in 1992, when the scope of the computer conference employed in Stage I of the research program was severely limited because of poor connectivity of potential participants in the South.

Nevertheless, despite this constraint, the computer conference did achieve its objective. Given the relative novelty of using an actively moderated computer conference as the vehicle for exploring an information science research topic in such depth among an international community, readers might be interested in seeing the account by Thorngate and Balson of their evaluation of the Stage I computer conference and their suggestions on how to improve the structure and operation of such a process in the future.

The final paper is an annotated bibliography prepared by Chataway and Cooke. IDRC is attempting to capture the key literature being published in the field of information impact. An extensive bibliography was included in Menou (1993), and the one in the present volume picks up from where that one left off.

Findings to Date, and Looking Ahead

These proceedings constitute a status report on a “work in progress.” This was the intended purpose, for it will be perhaps 2 more years before all the results are available from the various studies and a comprehensive analysis of findings can be presented. The original monograph proposing the preliminary impact assessment framework was published in 1993. It seemed timely at this point in the research program to provide an account of experience gained by researchers in setting up their studies.

Understandably, this companion volume places the emphasis not on the theoretical underpinnings of the research program, but on practical operational experience with the impact methodology. This is the domain of sampling techniques, questionnaire design, survey instruments, data definitions, together with some early feedback on the difficulties that may be encountered and the adjustments being made during the testing phase. Hopefully, by bringing together all the progress reports plus related material, this volume will prove to be a source of interesting commentary and practical advice for interpreting and applying the impact assessment methodology.

Methodological Issues and Observations

Contained among these progress reports are examples of survey tools, comparative experiences in using interviews and/or questionnaires, approaches to identifying different user communities, descriptions of anticipated benefits, and a lot of additional information that could prove helpful to others investigating this field. In addition, in the course of individual presentations and subsequent debate, participants flagged a number of points of concern and/or opportunity. The workshop session on “Methodological Issues” was chaired by José-Marie Griffiths. Her concise and illustrated account of the principal items emerging during the discussion, and the priority attached to them, has been included as a valuable appendix to this paper.

To varying degrees, the items identified by participants have potential implications for the current case studies and for those that might follow in the future. It was agreed that finding appropriate mechanisms for addressing the priority items would require further elaboration and that the INIMCAS Listserv might prove a useful channel for pursuing this in the first instance.

Future Activities

A number of items were tabled concerning future directions for the impact research program and for operation of the impact network. They have been recorded here:

• Communications among the members of the research network should be strengthened by making more effective use of the INIMCAS Listserv. It could be used for problem-solving among the group (i.e., functioning as a “virtual help-desk”), publicizing relevant literature, conference announcements, impact news, exchange of progress reports, etc. INIMCAS users will be canvassed for their opinions — via the Listserv.

• Improvements in the use of the Listserv may require more active moderation of the electronic discussion.

• Although there are pros and cons of keeping the INIMCAS Listserv confined to participants directly involved in the case studies and research projects, it was agreed that the Listserv should remain closed for the time being. This would keep it focused on the task at hand and permit frank discussions to take place within a known community.

• Nevertheless, additional channels must be found for securing input of experience and fresh ideas from outside the current research network. Each member was encouraged to serve as a conduit to his or her own research community and to facilitate two-way flows.

• The experience of other fields (such as medicine, psychology, economics, and management) should be explored systematically for relevant lessons or insights about impact assessment.

• Greater effort should be given by members of the research network to publicize the work taking place in the impact research program and to disseminate information on its progress, using print, electronic channels, and selected meetings.

• Now that the research program has moved to the testing phase, a systematic approach to selected donors and development agencies, as well as to the broader information community, should be considered.

• Advantage should be taken of the existence of the guide to benefit-cost analysis (BCA) for information managers. This manual was published by IDRC as a spin-off from the previous phase of the impact research program and could be put to good use.

• Although acknowledging the vast amount of additional work that could be undertaken in this field, there was a consensus that the group should focus on its current set of activities, consolidate its findings, and build a strong foundation for subsequent action, rather than end up complicating or diluting the present efforts.

• When planning outreach activities, careful attention should be given to ways of using the material from the research and case studies for best effect. Finding ways of reaching and influencing the right target audience with authority for resource allocation will be a critical task.

• Assuming meaningful results are obtained, one of the principal outputs envisaged is a practical handbook to impact assessment, illustrated with the operational experience gained through the projects.

• To assist in all of the foregoing, IDRC will convene an “International Advisory Group” to advise it (and the research network) on current impact questions, relevant work taking place elsewhere, gaps in the research agenda, issues arising from the case studies, potential institutional linkages, and dissemination activities.

In Conclusion

The complexity of this impact topic is immense, and the feasibility of reaching a practical conclusion is still uncertain. But the prospect of success brings with it new possibilities for reaffirming the impact of information on development, and creating an environment in which information can play a more extensive and productive role in bringing benefits to the developing world. This is a challenge worth accepting. Hopefully, this publication will stimulate others to pick up the gauntlet.

References

Ang, J.; Pavri, F. 1994. A survey and critique of impacts of information technology. International Journal of Information Management, 14, 122–133.

Badenoch, D.; Reid, C.; Burton, P.; Gibb, F.; Oppenheim, C. 1994. The value of information. In The value and impact of information. Bowker Saur, London, UK. British Library Research: Information policy issues, 9–77.

Boon, J.A. 1992. Information and development: Towards an understanding of the relationship. South African Journal of Library and Information Science, 60(2), 63–74.

CABI (Centre for Agriculture and Biosciences International). 1995. Evaluation of the impact of CAB International’s CD-ROM databases on sustainable development in Africa. Centre for Agriculture and Biosciences International, 60 pp.

Feeney, M.; Grieves, M., ed. 1994. The value and impact of information. Bowker Saur, London, UK. British Library Research: Information policy issues, 303 pp.

Griffiths, J.-M.; King, D.W. 1993. Special libraries: Increasing the information edge. Special Libraries Association, Washington, DC, USA.

Hanna, N.; Boyson, S. 1993. Information technology in World Bank lending: Increasing the developmental impact. The World Bank, Washington, DC, USA. World Bank discussion papers, no. 206, 104 pp.

HERTIS. 1994. The value of information to the intelligent organisation. Key Issues in the Information Business, Vol. 4. University of Hertfordshire Press, Hatfield, UK. 169 pp.

Horton, F.W., Jr. 1994. Analyzing benefits and costs: A guide for information managers. International Development Research Centre (IDRC), Ottawa, ON, Canada. 285 pp.

Marshall, J.G. 1993. The impact of the special library on corporate decision-making: Final report of a project funded by the Special Libraries Association. Special Libraries Association, Washington, DC, USA. SLA research series, no. 8.

Menou, M.J. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada. 188 pp.

Stone, M.B. 1993. Assessment indicators and the impact of information on development. Canadian Journal of Information and Library Science, 18(4), 50–64.

Stone, M.B.; Menou, M.J. 1994. The impact of information on development. Bulletin of the American Society for Information Science, 20(5), 25–26.

Appendix
Analysis of Issues and Concerns

José-Marie Griffiths

Brief overviews of key issues and concerns emerging from the presentation of the case studies, related impact activities, and impact research studies were presented by L.J. Haravu and Michel Menou. A free-form discussion ensued and several recurring themes were identified:

1. Definitional issues — the need to define key terms such as information, information resources, impact, etc. There was no consensus on whether a standard set of definitions could, or should, be developed and then adhered to, or whether impact studies should state at the outset the set of definitions used for the specific case.

2. The importance of defining and describing the context of both the information user and provider. The group reiterated the importance of adapting and testing the “information use environment” (IUE) model as described in “Measuring the Impact of Information on Development” (Menou 1993).

3. The need for longitudinal studies to consider transformation and cumulative effects of information provision and use. It is interesting to note that the lack of longitudinal and cumulative studies is a frequent criticism of recent information science research.

4. Sampling issues — a variety of issues associated with sampling were identified. These included whether a sample should be homogeneous or heterogeneous, the representativeness and size of sample particularly in regional studies, scalability of results, individual interviews versus focus groups, etc. As with the definitional issues, there was no clear consensus that standard approaches to sampling should be developed but rather that care should be taken over sample design to optimize the results of a particular case study.

5. Need to address and reconcile differences in opinions and facts, expectations, perceptions, and reality in all aspects of the case studies.

6. Importance of including all stakeholders in the definition and selection of impact indicators to ensure that their points of view are incorporated into case studies.

7. The importance of assessing costs along with benefits, and of linking benefit-cost analysis (BCA) with impact indicators.

8. The need to promote the view of assessment as an ongoing process rather than as a one-time event.

9. The need to test, evaluate, and validate indicators so that a set of “proven indicators” can be identified.

10. Need for guidelines on how ordinal values are assigned to scales.

11. Need to address issues of causality and how it can be tracked.

To help frame future discussions and developments, these 11 themes and issues were subjected to an informal prioritization exercise. The informal process did not allow sufficient time for a rigorous definition of the issue areas; thus, any results should be taken as indicative of the priorities rather than an absolute expression of them.

The process used was a forced ranking of each of the issues according to two criteria: the relative importance of each issue to the further development of impact assessments, and the relative ease of implementation of approaches to addressing each issue. Each participant was asked to allocate a total of 100 points across the 11 issues for importance and another 100 points for ease of implementation. The resulting scores for each issue were averaged and displayed in a series of “opportunity maps” (Figs. 1–3).

The opportunity map is a display of importance against ease of implementation. The results can be interpreted by considering the four quadrants of the map. The upper right quadrant contains those issues considered to be the “first target of opportunity,” as they are of high importance and high ease of implementation. The upper left quadrant and the lower right quadrant are considered as secondary targets of opportunity, with the upper left being of high importance but more difficult to implement and the lower right being less important but easier to implement. Issues in the lower left quadrant should be carefully considered for implementation since they are of lesser importance and difficult to implement.
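
The mechanics of the exercise are easy to reproduce. The sketch below shows one way the point allocations could be averaged and classified into quadrants; all names and scores are invented, and the cut-points (the mean of each axis) are one plausible choice rather than the workshop's documented procedure.

from statistics import mean

# Each participant allocates 100 points across the issues for importance
# and another 100 points for ease of implementation (scores invented).
issues = ["definitions", "context descriptions", "sampling"]
responses = {
    "p1": ([50, 30, 20], [40, 30, 30]),
    "p2": ([40, 40, 20], [30, 50, 20]),
    "p3": ([30, 30, 40], [35, 35, 30]),
}

importance = [mean(r[0][i] for r in responses.values()) for i in range(len(issues))]
ease = [mean(r[1][i] for r in responses.values()) for i in range(len(issues))]

# Split the map into quadrants at the mean of each axis.
imp_cut, ease_cut = mean(importance), mean(ease)

for i, issue in enumerate(issues):
    if importance[i] >= imp_cut and ease[i] >= ease_cut:
        quadrant = "first target of opportunity"
    elif importance[i] >= imp_cut:
        quadrant = "important but harder to implement"
    elif ease[i] >= ease_cut:
        quadrant = "easier but less important"
    else:
        quadrant = "lowest priority"
    print(issue, round(importance[i], 1), round(ease[i], 1), quadrant)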


Fig. 1. Opportunity mapping exercise (all respondents).


Fig. 2. Opportunity mapping exercise (respondents in the field).


Fig. 3. Opportunity mapping exercise (respondents not in the field).


Fig. 4. Opportunity mapping exercise (all respondents). Data normalized.

Figure 1 shows the opportunity map based on all participants’ input. This shows the first target of opportunity to include:

• Definitions

• Context descriptions

• Stakeholder involvement

• Benefits and costs

• Validation

• Sampling

The secondary targets would include:

• Causality

• Longitudinal studies

• Assignment of ordinal values

The third target would include:

• Reconciliation of opinions, expectations, reality

The responses from participants who work “in the field,” i.e., in the delivery of information services in developing nations and regions, were separated from the other participants to determine whether any significant differences exist. Figure 2 shows the opportunity map for respondents in the field and Figure 3 shows the map for all other respondents.

Respondents in the field clearly consider the assessment of costs along with benefits to be the most important issue, although they consider implementation more difficult than the nonfield respondents do. Both groups consider the definitional issues relatively important and easy to implement.

Nonfield respondents consider context descriptions more important than field respondents but also more difficult to implement. The field respondents consider the assignment of ordinal values to scales as a target of opportunity, whereas the nonfield respondents do not.

Finally, the responses were normalized to spread them across the visual map. The normalized map for all respondents is shown in Figure 4. In interpreting the normalized map, it is important to remember that an issue that appears to be of low importance or very difficult to implement (e.g., “opinions, expectations, reality” in Figure 4) is low only relative to the other issues, not on the scale itself. The advantage of the normalized map is that it unclusters the issues, making a sequence of consideration easier to discern.
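The text does not specify which normalization was applied; a simple and common choice is min-max rescaling, sketched below under that assumption. It stretches the averaged scores to fill each axis without changing their order.

```python
def normalize(scores):
    """Min-max rescale a dict of issue -> averaged score onto [0, 1],
    spreading the points across the full axis as in the normalized map.
    Assumes a linear rescaling, which the text does not confirm."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:                        # degenerate case: all scores equal
        return {issue: 0.5 for issue in scores}
    return {issue: (s - lo) / (hi - lo) for issue, s in scores.items()}
```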

For example, based on Figure 4, the order of considerations implied is:

• Definitions

• Context descriptions

• Stakeholder involvement

• Benefits and costs

• Validation

• Sampling, etc.


Case Studies


CIDE/REDUC Case Study

Warren Thorngate, Alfredo Rojas, and Maria Francini1

Background:
The Training of Policy Analysts

“REDUC” is the acronym for “Red Latinoamericana de Información y Documentación en Educación” (Latin American Educational Information and Documentation Network). The implementation of this network was decided at a meeting that took place in Montevideo, Uruguay, in 1978. The Ford Foundation and the International Development Research Centre (IDRC) approved the project and provided the initial funding. During the same meeting, it was decided to promote the publication of a magazine named “Revista Mexicana de Investigaciones Educativas” (Mexican Review of Educational Investigations), which is now published bimonthly in Mexico by the Centro de Estudios Educativos (Center for Educational Studies). The REDUC network has grown to include 22 associated centres, and overall coordination was established at the Chilean Center for Investigation and Development in Education (CIDE), in Santiago, Chile.

The CIDE/REDUC organization has two main purposes. The first is to collect analytic summaries of the educational research produced in Latin America. The second is to make those summaries available to decision-makers to encourage more efficient decision-making in the different countries. To date, 20,000 documents have been collected. Of those, 10,000 are in a database, and additions are growing at a rate of 2,000 new inputs a year. The role of the associated centres is to produce analytic summaries of the collected documents and to function as documentation centres, answering questions and forwarding information as needed.

Up to now, REDUC has thoroughly fulfilled the first task. Little progress, however, has been made in the performance of the second task. The goal was to promote a more efficient planning process in the field of education by making the results of the educational research performed in Latin America available to decision-makers in the different countries.

1Professor, Psychology Department, Carleton University, Ottawa, Ontario, Canada K1S 5B6; Coordinador de REDUC, Latin American Educational Information and Documentation Network (REDUC), CIDE, Santiago, Chile; and Research Assistant, Psychology Department, Carleton University, Ottawa, Ontario, K1S 5B6, respectively.

Up to this point, the impact of information on development and decision-making has not yet been fully established.

It can be argued that the presence and activities of the REDUC centres have kept educational research alive in the Latin American region, and that the collection of analytic summaries is an important source of consultation for researchers. Nevertheless, the information assembled by REDUC has not been fully utilized by professors and policymakers.

The professionals responsible for planning and policy-making have little use for a plain database containing analytic summaries. Instead, they need the concrete knowledge derived from the overall appraisal of the findings. They cannot spend time searching for the specific information needed at a given moment. They would rather examine and value the broader conclusions.

The use of information in the decision-making process in Latin America was investigated in 1990 (Rojas 1990). This research detected the need to provide a different kind of information for decision-makers.

As a consequence, in 1991 REDUC started two new programs. The first was named “Information Workshops for the Analysis of Educational Policies” (Talleres de Información para el Análisis de Políticas Educativas). The second was called “Course for Educational Policy Analysts” (Curso de Analistas de Política Educativa).

The first program is a workshop for the training of high-level civil servants employed at the Ministry of Education in national, provincial, or municipal departments. The 4-day workshop focuses on presenting and explaining educational research findings related to recent key topics in policymaking. The training is interactive: participants solve problems, quizzes, and challenges individually or in groups. Simulation games are also used as part of the training.2

The second new program is the Course for Policy Analysts. In the context of a larger project funded by the Interamerican Development Bank (Banco Interamericano de Desarrollo, BID), REDUC proposed to provide training for both junior employees of the Ministry of Education, at the national and provincial

2The Harvard Institute for International Development (HIID) developed a project called BRIDGES based on workshops for the training of policy-makers. In 1991, REDUC obtained a contract to prepare a “state-of-the-art,” to translate the workshops, and to adapt them to the Latin American environment. In 1993, REDUC began the development of simulation games and, in 1994, the first workshop was organized.

levels, and for members of educational faculties. The goal of the course is to provide training for the use of the information available at the REDUC centres and at other sources for the drafting of projects and programs in education.

During the month-long course, the participants read, write, discuss, and solve problems and simulations in the field of educational policies. The goal is to learn and practice at the same time. Because the participants come from different countries, they also have the opportunity to interact and learn by sharing problems and experiences of different environments. The first course took place in Mexico City in August 1994. The second will take place in Argentina in 1995.

REDUC’s teaching at both the workshop and the course is addressed to those directly responsible for policymaking and to those who will be the analysts, or the consultants, in the educational environment. It is assumed that if they acquire the skills needed to be proficient in the search, analysis, and evaluation of educational research findings, they will also be better qualified to plan more efficient educational projects and programs.

CIDE/REDUC August 1994 Workshop

CIDE/REDUC promoted the training of policy analysts through a workshop that took place in Mexico City during the month of August 1994. Eight countries participated: Costa Rica, Dominican Republic, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, and Panama. The 34 participants were staff of the Ministry of Education or of university educational faculties.

Prof. Warren Thorngate proposed to interview some of the participants to gain a better understanding of the impact of the workshop. He prepared the questions for the interviews and these follow under “Analysis and Comments.” A first set of interviews was done during the workshop in August 1994.

A second set of interviews was conducted in April 1995, 7 months after the workshop. On this second occasion, the participants were interviewed by their tutors in their own countries. The contribution of the Dominican Republic was not available for the second interview, so it was deleted from the study. A third set of interviews is scheduled for September 1995, a year after the workshop.

The first round of interviews produced 27 ninety-minute tapes. The questions were related to personal studies, experiences, and current occupation. Other questions inquired about the structure and function of the educational system in each country, educational problems, and the participation of the private sector, the church, and the trade unions.

The second round produced 21 ninety-minute tapes. The questions were aimed at the number of policies and programs analyzed since participation in the workshop, including how many of those analyses had been requested and how many were personal initiatives; the difficulties encountered (information, funding, presentation, success, acceptance or rejection, and implementation, if any); and hopes and future goals.

The interviews were conducted in fluent Spanish and were translated into written English. The English version runs to 240 pages. It contains a vast amount of important information about the personal ways in which the different participants perceived their environments and problems, as well as their hopes, desire to succeed, goals, and capacity to deal with challenges. All this will be taken into account in the final evaluation of the project. The interviews are relevant socio-historic-psychological documents that reflect the realities of the eight countries as seen by the participants at the time.

Analysis and Comments (The Second Listserv)

The Mexico workshop was the culmination of efforts to make REDUC’s educational research information base more useful in educational policy formation. REDUC’s excellent information base is widely distributed in Latin America. But staff at REDUC have become increasingly concerned that it is not being used effectively, if at all, by people responsible for policy decisions.

There seem to be many reasons for this neglect. Two of the most common are lack of time and lack of expertise. Policymakers in Latin America seem to be busy people with little time to find, ingest, and digest information about educational research. In addition, many (most?) are not well qualified to evaluate the implications of this research for educational policy.

Policymakers in Canada, Europe, and the USA face the same problems, and in the 1960s they invented the role of policy analyst to solve them. A policy analyst typically finds relevant information regarding some policy, then summarizes the results and their policy implications, and passes it on to a policymaker.

In Canada, policy analysts proliferated rapidly during the 1970s, perhaps as a result of former Prime Minister Pierre Trudeau’s rumoured insistence that any policy decision be made on the basis of a policy issue summary not exceeding two pages. Policy analysts are now common in Canadian government policy formation. It seemed reasonable to believe that they could be useful in Latin America too. There are virtually no educational policy analysts in Latin America, however, so the role is virtually unknown.

REDUC hoped to find a small sample of Latin America’s best and brightest educational researchers, professors, information scientists, and nongovernmental organization (NGO) workers to learn the basics of policy analysis and to send them home to find a role for themselves as policy analysts in their own country. The first of two policy analyst workshops was held during the entire month of August 1994. Carefully selected representatives from universities, government, and nongovernment offices in Central America, the Dominican Republic, and Mexico were flown to Mexico City to spend 8-12 hours each day for 4 weeks to learn their new role.

They were taught by educational consultants from Harvard, UNESCO (United Nations Educational, Scientific and Cultural Organisation), and similar agencies, as well as by members of REDUC. They learned how to make efficient use of REDUC’s information base, how to evaluate the information critically, how to negotiate contracts, and how to write policy analysis reports. They participated in special educational policy simulations designed by REDUC. In addition, they agreed to write at least two policy analyses during the year following the workshop for practice and discussion with a special tutor (one from each of the eight countries represented by the workshop participants) selected for the task.

First Interview3

I arrived for the last week of the workshop, observed some of the activities, and met with the tutors. As part of their tasks, the tutors had agreed to interview the 2-5 workshop participants from their country three times: first, during the last days of the workshop; second, about 6 months after the workshop; and third, about 12 months after. I provided each tutor with a portable tape recorder, a stack of tapes and batteries, and the first of three interview questionnaires. I also gave them a mini-workshop on the purposes of their interview activities and on interviewing techniques.

Together, we modified the first questionnaire according to many of their good suggestions. I then asked them to conduct their first interviews in the remaining days of the workshop and submit the tapes to me.

Three days later I had 26 taped interviews, each lasting between 70 and 110 minutes. The 50 interview questions covered four basic topics: personal background (age, education, employment, education policy experience, etc.), comments on the workshop (likes, dislikes, suggestions for improvement, etc.), the history and current situation of educational policy in the participant’s home country (how policy is made, who makes it, current policy issues, etc.), and expectations about work as a policy analyst (possible employers, likely difficulties, supporting services, etc.).

3Questions in this section were prepared by W. Thorngate.


Here are a few of the results:

1. Age range of workshop participants: 32–48 (average about 40).

2. Most common occupations: education professor, education researcher, head of a documentation centre, or director of an education planning department in a local ministry of education office.

3. Most important things learned: who is doing what in Central America, role of the policy analyst, techniques of negotiating policy among groups with different interests, use of computer in policy analysis.

4. Most common suggestion to add in next workshop: more practice and practical examples.

5. Most common suggestion to delete in next workshop: theory.

6. Most common themes in description of education system in country: education ministers, employees, and policies change every 1–2 years and with every new government; everyone is always learning, undoing the good with the bad; and there is no continuity in government.

7. Most common response to “Who makes policy decisions in your country?“ — ”I don’t know.”

8. Most common problems of education in home country: lack of money, lack of continuity in policy, poor teacher training, inefficiency, coverage (no schools in some areas), high drop-out rates.

9. Most common responses to “Where will you get information relevant to policy analyses?” (in decreasing order of mention): Ministry of Education, university, friends, REDUC, e-mail.

10. Most common anticipated problems of doing policy analysis: policy analyst role unknown, no credibility for the role, quality and relevance of information available, Minister’s advisors are politically appointed, analyst not part of the Minister’s “inner circle.”

11. Most commonly named “difficult tasks of policy analyst”: negotiating policies, writing, and statistical analysis.

I think a few things are worth noting about these preliminary results. First, most of the primary concerns of the workshop participants are not about obtaining information. The primary concerns are about obtaining the attention, credibility, and influence of policymakers. Second, most of the difficult tasks anticipated by workshop participants are not technical. Instead, they are tasks of communication.

I have not yet carefully considered the implications of these results. But, I confess, they lead me to wonder if our common question may not be the most fruitful we can consider. For example, instead of asking “How can we measure the impact of information on development?” I wonder if it would be more beneficial to ask “What should information scientists, policy analysts, and others do to maximize their chances of influencing policy formation?”

It may be extremely difficult for us to extract evaluation guidelines about information impact from our case histories. But, I think, it will be much easier for us to develop a list of “do’s and don’ts for adapting your information project to local realities” from our own case histories that would be useful to future information scientists. The list may eventually form criteria for evaluating impact. In the meantime, people could use the list to keep them aware of all the things they must do to make their information project successful.

The Second Interview

Late in February of 1995, 6 months after the workshop, I contacted the eight tutors to remind them of the evaluation and of their role as interviewers and sent them the first follow-up questionnaire. Again, they were asked to contact the workshop participants in their country, tape record their answers to the interview questions, and then send the tapes to me. There were 17 questions on the second interview questionnaire. They asked about the number and nature of attempts to undertake policy analysis since the workshop and the results of these attempts. They also asked what workshop skills had been most and least useful, what frustrations and difficulties the participants had experienced in their role as policy analysts, and what they now believed should be changed in their country to improve educational policymaking.

We eventually received 21 interviews. Two of the original participants lived far from their assigned interviewer, and she requested considerable extra funds for travel and accommodations at their locations to interview them. I decided it was not “cost-effective” to do so. One of the interviewers promised several times to conduct his second interviews “next week for sure,” but never fulfilled his promise despite repeated letters, faxes, and phone calls to remind him. After waiting almost 3 months and spending more than $200 in reminders, I gave up. Three more participants were thus lost from the study.

Of the remaining seven interviewers, one sent her tapes in on time and without a reminder. The others required from one to three reminders. The lesson learned here is that conducting research across countries and cultures takes much more time, effort, and communication than we can anticipate.

I had noticed in the first interviews that the tutors sometimes took creative liberties with the interview questions, but rarely strayed from the interview themes. In the second interview, at least three of the remaining seven tutors became more creative and strayed far from the questions. My wonderful assistant, Maria Francini, was required to be equally creative in finding answers to the interview questions somewhere in the informal conversations that resulted from their creative impulses. Lesson learned: controlling interviewers is a little like herding cats, and information is always lost as a result.

I am currently analyzing the second interviews. But a few important themes are already apparent. First, more than half of the interviewed participants have been successful in obtaining at least some work as an education policy analyst. One aspect of their background (noted from the first interview) seems to be a good predictor of this success: who they know in the Ministry of Education. Second, almost all of the participants report frustrations in finding work as a policy analyst. Some of the frustrations confirm their expectations. For example, few policymakers in their country know what a policy analyst does, and few are willing to take a chance on someone in this new role.

This led to the suggestion that REDUC attempt to inform the ministers of education and their assistants of the nature of policy analysis, and so help pave the way for the participants to use their new analytical skills. Other frustrations reflect a fascinating cultural issue regarding information use. As Maria Francini summarized it: “In these education ministries there is no culture of information.”

Many of the 21 workshop participants reported that policymakers do seek advice, but limit their search to consultants who act in much the same way as doctors or priests. The consultants rarely cite research or other useful information in packaging their advice. Instead, they rely almost entirely on their reputation and credibility, and justify their advice by appealing to their “vast experience as a consultant.” Unlike policy analysts who cite the conflicts of research results and see complexities everywhere, the consultants offer advice that is simple and unequivocal. Unlike policy analysts who ask for extra time to do a better job, consultants are happy to offer easy prescriptions at a moment’s notice. Policy analysts summarize information; consultants calm nerves. The policymakers with no training or experience in using research information for policy decisions seem to prefer what the consultants provide.

If the impact of information on development depends on the development of a culture of information, then how can we develop one? I have no good answer. But it seems to me that it may require the education of future policymakers rather than the training of current ones. How can we educate them?

I spend much of my time working with students from developing countries. Most are extremely intelligent and hard working. But most come from countries lacking a culture of information. They have been trained to memorize rather than to think for themselves. They have been taught that answers come from books or from experts rather than from their own observations and critical thinking. They are dumbfounded by the size of our libraries and are as likely to fear them as to embrace them. It takes 2-4 years for most of these students to understand our culture of information and to make good use of it. It may take many more years for them to become policymakers in their own country and to incorporate information, as Westerners conceive it, into their policy decisions.

Can any agency anxious to evaluate the impact of information on development support an evaluation for that length of time? Or will economic and political constraints always lead us to attempt short-term evaluations that will almost always show no information impact and, hence, no reason to continue supporting information gathering or dissemination?

Perhaps these are nothing more than rhetorical questions. But they lead me in logical ways to one last thought. Perhaps the most important impact of information on development is not in the area of policy formation but in the area of education. Perhaps one crude but useful indicator of the impact of information on development may be found by examining the curriculum of schools and universities in developing countries. What is taught now that was not taught 10, 20, or 50 years ago? What are the chapter headings of the textbooks? Is there a relationship between the curriculum of 10 years ago and the socioeconomic status of today? Also, if you will pardon my ignorance, are any of you examining curriculum as part of your evaluation?

Report of the First Partial Results

Definitions

• The purpose of the CIDE/REDUC study is to measure the impact of information on development.

• The task of the policy analyst in his/her country is: “The use of constructive and creative actions, based on scientific investigation, to make educational policymaking more efficient.” This is the main goal of the training provided by the workshop.

• The distant goals of the project are: “The cultural development of Latin America, the creation of a culture of information, the reduction of illiteracy, and the impulse toward productivity.” It is assumed that the incorporation of Latin America into a modern world will benefit the entire American continent. As a result, its population will not be in need of support and care and will participate more actively in production.

• The expression “development” is understood as a process of environmental and individual change across time; therefore, investigations of development are always longitudinal studies involving long follow-up periods.

Costs-Benefits

The immediate benefit of this particular 1994 project has been to educate, train, and upgrade the knowledge of 34 highly qualified persons from eight countries of Central America and Mexico by means of a workshop. This included training in computer competence, databases, statistics, qualitative and quantitative analysis, technical writing, information gathering, the Internet, and several other related topics. More workshops are scheduled for 1995 and 1996.

Report: Method

At the beginning of this “longitudinal study,” it is important to report how each of the participants perceived his or her country’s political/educational environment at the time of the workshop (August 1994). It is assumed that the personal attitude of each, and the objective conditions of the environment, will influence the success of the task that he or she has to perform in relation to the project.

In “Measuring the Impact of Information on Development,” M. Menou (1993, p. 67), in the chapter “Describing Constituencies,” says: “… basic data are needed about the population to allow adequate interpretation of observations…. descriptive parameters were identified in Taylor’s (1991) model of an ‘information use environment (IUE).’” Some of the data obtained from the participants in CIDE/REDUC’s August 1994 workshop, during the first two interviews, have been summarized according to the IUE proposed by Taylor (1991).

Descriptive Parameters

Parameters of the conditions in the country of origin of the participants:

• Sources of authority (in education).

• Problems (related to education). Specific needs.

• Attitudes toward education.

• Information culture (in the educational structure).

• First results after a 7-month period (see Table 1).

• Expected solutions to the problems (in education).

Sources of Authority

Aspect Studied: Decision-making in educational policies. The decision-making process in education is fully centralized in half of the eight countries studied. In the other half, a progressive decentralization is proceeding satisfactorily.

Honduras “All is centralized and does not work well.” “Lack of coordination in between levels.” “Society cannot look into or evaluate any project.” “At each change of government the policies are changed depending on personal competition.”

Guatemala “Most decisions are taken at the central level. The process of decentralization has started but has not been achieved yet.” “There is no opportunity to respond to the local goals.”

El Salvador “Some aspects of the administration have been decentralized, but not the decision-making.” “The government controls the curriculum, national and external funding. At the periphery they handle human resources, and part of the budget.”

Nicaragua “The process of decentralization is going on everywhere. There is a superior council of education composed of different people from civil society, under the direction of the Ministry of Education. They share the central decision-making.” “There are school councils at the periphery, composed of parents, students, and local civil and political authorities. They are independent in the roles of administration, teacher hiring, and salaries, and may modify the study plans to adjust to local needs.” “There is also a special commission, formed by parents, students, teachers, and the private sector, which meets regularly with the Minister of Education personally.”

Table 1. Policy analyses done/publications.

Country        Participant    Required    Personal    Articles
Guatemala      x                  1           4
               y                  4           1
               z                  4
               Subtotal           9           5
El Salvador    x                  2           1
               y                  1           1           2
               Subtotal           3           3
Nicaragua      x                  1
               x                  1
               y                  2
               z                  2
               Subtotal           5
Panama         x                  1                       6
               y                              1
               z
               Subtotal           1           1
Mexico         x                  1
               y
               z                  3
               zz                 2
               Subtotal           6
Honduras       x                  2
               y
               z
               Subtotal           2           1
Costa Rica     x                              1           2
               y
               z                  1
               Subtotal           1           1
Total                            27          11           8

Panama “The rules are centralized and respond to an old educational law of 1946.” “Totally hierarchical.” “There is a new document but it cannot be coordinated because of political unrest.”

Mexico “The decentralization is now starting to function.” “We are decentralizing and that brings big changes.”

Costa Rica “The government is directly responsible for all aspects of education, and regulates all public and private teaching.”

Problems Related to Educational Policies: Present Reality and Specific Needs

• Political instability. Policies change as every government changes, in all countries studied.

• Insufficient budget for education. Some 85–95% goes into salaries; too little is left for other purposes, in all countries studied.

• Need to improve primary and secondary teachers’ education, in all countries studied.

• Salaries for primary school teachers are insufficient, and teachers must hold other jobs. Teachers’ salaries depend on the strength of the unions and their relations with the governments.

• Coverage for primary schools is insufficient and repetition and drop-out are very high in all countries studied.

Guatemala Some 47% of children are not in the school system. The rural population is very dispersed. “In certain regions the curriculum does not correspond to the needs of the population.” “There are also linguistic problems because the indigenous population does not speak Spanish.”

El Salvador Coverage is 50% in grade 6 and 33% in grade 9. “The poor sector of the population has fewer opportunities and remains in school for shorter periods. Problems have a concrete geographical location.” “Of those who start grade 1, 36% reach grade 6, and 24% reach grade 9. Interviews with 4th and 5th grade children show that, at the end of the year, 90% had not achieved most of the expected learning.” “Something has to be done to promote learning!”

Nicaragua Some 60% of the population is under 15 years of age. The budget is insufficient. “Children must be educated, must be able to work, if not, who is going to support 60% of an illiterate adult population in years to come?”

Panama “No progress in the last 10 years. Political unrest has increased the problems of illiteracy in the poor sector.”

Mexico Coverage for elementary schools is 90%. Both primary and secondary schools are obligatory, free and for all.

Honduras Primary school has a coverage of 86%. Some 28–32% finish grade 6. In rural areas, 12 years are needed to obtain 30 children graduating from primary school; in urban areas, 8 years. Roughly 50–70% of learning gains have been obtained. Only 1.5% of the students starting primary school will complete university. For secondary schools, there is a successful “Education at Distance” program.

Costa Rica The index of illiteracy is relatively low, but primary education is not obligatory any more.

Conclusions These five problems have a high degree of severity. Learning performance is very poor in all countries.

Attitudes Toward Education

The Governments For all the governments, education is a priority in theory. In practice, the budgets are insufficient. Nevertheless, in all the countries (with the exception of Panama) there are active reform projects for basic education at different stages of implementation. The reforms are focused on decentralization, on updating and improving the curriculum, the quality of education, and the training of teachers.

Guatemala Radical changes are in progress. First national congress for secondary education in 1995. Participation of citizens.

El Salvador Has 1986 rules that need revision. The budget is proportionally the lowest in Latin America, comparable to Haiti and Bolivia. “The citizens are not aware of the magnitude of the problem.”

Nicaragua “We are making good and fast progress.” “We had 10 years of war, and we try to do our best.”

Panama Political unrest. Education is not a priority for the government.

Mexico “The reforms are steadily under way.” High awareness of the citizens.

Honduras “The Government of Honduras is just following instructions from external sources. It is entirely dependent, and not able to judge the designs to see if they are useful to the specific situation of the country.”

Costa Rica “There is a desire to improve because of international needs to be competitive.”

The Private Sector The private sector is generally very active in promoting education in all countries. High degree of participation in secondary and vocational education.

The Catholic Church Gives support in basic education. Cares for marginal sectors. High degree of participation in primary education.

Conclusion There is good support for education from the private sector and the Catholic Church.

The Information Culture

The information culture is very poor in all countries studied. Both endogenous and exogenous information culture are very deficient. In this respect, the first interview must be distinguished from the second. A change of attitude can be detected, both from the answers of the participants and from their concrete activities in policy analysis.

First Interview, August 1994 The analysis of the first interview showed that none of the participants had ever done policy analysis. None had ever been requested to produce policy, program, or project proposals. To the question “What is the decision-making process in your country?” most participants answered: “Decisions are based on political thinking.” “Decisions are based on common knowledge and some few statistical averages.” “Policymakers have advisors, but those do not use information.” “The Ministry of Education was functioning only to cover successive emergencies; there was no general plan.” “There is no continuity in educational policies.” “The planning office of the Minister of Education tries to integrate the politics of the government at different levels.” “Every 4 years all is dropped and started again when the government changes.”

Second Interview, April 1995 All the participants in the CIDE/REDUC workshop changed their attitude toward the importance of information, both endogenous and exogenous, after the training period. The second interview shows that there is a satisfactory sensitization toward the use of research and analysis. There is a new appreciation of the importance of information, learning, and creativity, and a desire to find new solutions to old problems. To the question “What did you gain from the workshop?” some of the answers were:

Guatemala “I gained a strong motivation for a precise formulation of projects.” “It is necessary to sensitize the different authorities and show that the results are concrete and can be measured.” “There should be more research and analysis.”

El Salvador “The difficulty for my research was to find the right information. We do not have a good information system.” “The goal of investigation should be to find new solutions to old problems.” “It is important to stress the importance of information in the decision process.” “There are many computational resources that are distributed in different areas and not fully utilized. My new task is now to integrate all those sources and define an information system.” “We must remember that the bureaucrats need to be educated… we talked about the information that I presented, they did not know about it… they were quite surprised to learn that averages do not represent the real situation in individual districts… they asked for more information… this is a beginning.”

Nicaragua “Nicaragua is just part of the large problem of Central America in which there is a lack of investigation and of information.” “The presence of the policy analyst forces the decision-maker to be more aware, more responsible.”

Panama “By improving the libraries we could improve the quality of education and create the habit of gathering information in the population in general.” “Having electricity they will be able to improve their educational system.” “The decision-maker should be able to understand the value of information.”

Mexico “It is a long process…. They are pondering very much on the need for information at all levels. The information used to be considered ’top secret’, so, now that has to be changed.”

Honduras “The information that I received at the workshop has been shared at the University with several professors.” “We lack equipment and technical personnel to process information.” “Unfortunately at the Ministry of Education all the information is lost in summaries.”

Costa Rica “There was a meeting in the Ministry of Education to discuss the importance, need, advantages and strategies of the use of sources of information for research.”

The enthusiasm of the participants following the workshop is apparent not only in their answers but equally in the facts. The analysis of the second interview showed that the 21 participants interviewed had been very active during the months following the training. They had been involved in the study, elaboration, and presentation of 38 policy analyses and had published eight articles.

Expected Solutions to the Problems

Political authorities in Latin American countries need to understand the benefits that their governments may derive from a precise and effective planning of the resources available. Good planning is the result of precise information and of the analysis of objective situations.

Endogenous information needs to be systematized and made available, not only summarized but detailed. The development of expert personnel must be encouraged, and examples must be given. The human factor is even more important than the hardware systems. Specialists must believe in the importance of what can be obtained by a skilled use of information. Policymakers need to trust policy analysts as the best sources of information for a successful government. In the Latin American environment at the moment, there is no understanding of the advantages related to this point.

Exogenous information helps creativity. Examples of solutions found by others stimulate new attitudes, values, and beliefs. The process of development may be facilitated by means of publications, videos, mass media, direct personal contacts, and all sorts of “impact” methods that our creativity may suggest and the local conditions may require. The cost-benefit balance will be in proportion to our dedication, enthusiasm, and clear vision of the goals.

References

Menou, M.J. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Rojas, A. 1990. Información y toma de decisiones en educación. Latin American Educational Information and Documentation Network (REDUC/UNESCO-OREALC), Santiago, CL.

Impact of the Semi-Arid Tropical Crops Information Service (SATCRIS) at ICRISAT

L.J. Haravu and T.N. Rajan1

Background

The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), one of a network of 17 international agricultural research centres (IARCs), carries out research in “the hardest end of the research spectrum: rainfed farming in the semi-arid tropics (SAT).” ICRISAT’s objectives are to:

• Serve as a world centre for the improvement of yield and quality of five crops basic to life in the SAT — sorghum, pearl millet, chickpea, pigeonpea, and groundnut;

• Develop improved farming systems that will help to increase and stabilize agricultural production through more effective use of natural and human resources in the seasonally dry SAT;

• Identify constraints to agricultural development in the SAT and evaluate means of alleviating them through technological and institutional changes; and

• Assist in the development and transfer of technology to the farmer through cooperation with national agricultural research systems (NARS) and regional research programs.

The information infrastructure of the NARS of the SAT (50 countries in Africa, Asia, and Latin America) is, in most cases, inadequate to meet the needs of researchers, students, teachers, project managers, extensionists, and policymakers in these countries. The inability to access world-wide sources of information and the paucity of information handling skills, software, and hardware are the main causes of the inability to manage and use information as a resource in research, development, problem-solving, and decision-making.

1Senior Manager, Library and Documentation Services, International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), Asia Centre, Patancheru, PO Andhra Pradesh 502 324, India, and Consultant, C-30/343 Eastend Apts, Mayur Vihar Ph. 1 Extension, Chilla, Delhi 110 096, India, respectively.

ICRISAT, since its inception in 1972, has considered it essential to support the NARS not only in research and technology transfer but also in ensuring that they have access to scientific, technical, and socioeconomic information in support of their own research, development, and extension work. Information support to the NARS has been provided through investments in a well-trained cadre of information professionals and in the development of a strong program on information management and exchange, including library and documentation services, editorial and publishing services, public-awareness services, and the capacity to utilize new and appropriate information technologies for the storage, retrieval, dissemination, communication, and repackaging of information.

The Semi-Arid Tropical Crops Information Service (SATCRIS), an integral part of ICRISAT’s Library and Documentation Services, is an example of ICRISAT’s commitment to supporting the NARS through the development of a strong in-house program. SATCRIS, established in 1986, and developed initially through financial and technical support from IDRC, has been providing a number of services to users in NARS.

This case study presents the methodology used, and the preliminary results obtained, in evaluating the impact of SATCRIS services on the research and related community that the service is expected to serve.

Information Use Environments in the NARS

It is important that impact assessment of an information service such as SATCRIS should take note of the information use environments (IUE) of its target clientele. It is not easy to make generalizations about the IUE of 50 SAT countries, because there is considerable diversity and variance in their economic development, educational systems, languages used, communications infrastructure, and strength of linkages with other NARS and IARCs.

Visits, however, to several countries in eastern, southern, and western Africa and south and southeast Asia indicate that the IUE of countries in these regions has the following broad characteristics:

• Many agricultural research stations, particularly in Africa, do not have a formal mechanism (e.g., library, information centre) to provide access to information that is generated externally.

• Access to research information generated within these countries is generally poor because formal national mechanisms (e.g., the International Information System for Agricultural Sciences and Technology (AGRIS) centres) do not have the resources and/or skills to organize such information and provide access to it.

• Access to current literature, even in the academic establishments of many SAT countries, is woefully inadequate.

• Library collections are often inadequate and cannot provide the needed document delivery to users primarily because of the lack of foreign exchange needed to acquire current publications.

• Many SAT countries are small and have limited research capacities. There is need to develop and rely on linkages with external sources of information and technology.

SATCRIS Products and Services

Given the foregoing broad characteristics of the IUE of the SAT, the following information products and services were developed to meet the needs of potential clients for SATCRIS services.

• A comprehensive bibliographic database of information on the crops and resources of interest to the user community and online access to the database.

• An automated Selective Dissemination of Information (SDI) service to cater to the current-awareness needs of the target clientele.

• On-demand search service to meet articulated needs of the user community.

• Document delivery triggered by the SDI and search services.

• Information analysis and consolidation products on carefully chosen topics to meet the broad needs and interests of a category of users.

• Software that would be useful for libraries/documentation centres in their information retrieval and dissemination work.

• Locational tools for use by libraries in the NARS for document delivery.

The SATCRIS database, built with monthly subsets received from the CAB International (CABI) and AGRIS databases and with locally generated input, now has over 50,000 records and grows by 6,000 records annually. A user interface (Ratnakumar and Haravu 1994) developed in 1993 enables on-site and remote end-user access to the database.

The SATCRIS SDI service (Haravu et al. 1990), begun in 1986, has 446 users in 52 countries of the SAT. During 1991–94, more than 1,200 on-demand searches of the SATCRIS and other databases were conducted for users in 33 countries.

Two information analysis products (one on the aflatoxin problem in groundnut and another on a cereal pest, Busseola fusca) were produced in collaboration with scientists at ICRISAT. Both these products and their associated databases have been widely disseminated.

Based on requests received from some of the NARS, a general-purpose computer program to generate SDI outputs with any CDS/ISIS (Computerized Documentation System/Integrated Set of Information Systems) database was developed. This program uses stored user profiles to search a database or its updates to produce SDI outputs. It has been made available widely to NARS libraries in Africa, Asia, and Latin America.
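In outline, an SDI pass of this kind matches each stored profile against newly added records. The following sketch is purely illustrative and assumes simplified data structures; CDS/ISIS stores and queries records quite differently.

```python
def sdi_run(new_records, profiles):
    """Match stored user profiles against records from the latest database
    update and collect one output list per user. A record is assumed to be
    a dict carrying descriptor terms; a profile is a set of terms. Both
    structures are hypothetical simplifications."""
    outputs = {user: [] for user in profiles}
    for record in new_records:
        terms = set(record["descriptors"])
        for user, profile in profiles.items():
            if profile & terms:      # any overlap between profile and record
                outputs[user].append(record)
    return outputs
```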

An impact assessment study, funded by IDRC, of the foregoing products and services was begun in late 1994. The case study reported in this paper relates to the work done on evaluating a few of these products/services under the ongoing study to evaluate the impact of SATCRIS.

Developmental Goals and Target Clients

The aim in any impact assessment must be the determination of how one or more information products or services contributed to the development goals of the target clientele, institution, or program served by the information service or project. In practice, however, it is difficult to arrive at a direct correlation between an information product or service and the achievement of one or more development goals, because information is only one of the many factors that contribute to the achievement of one or more goals. What is attempted is to show how information services did or did not influence one or more factors that in turn contributed to the achievement of goals. The use of path analysis models in measuring the influence of one variable on the variability of another is one way of introducing greater objectivity in such measurement.
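As an illustration of the idea (not a description of the analysis actually performed here), the simplest path coefficient between a service variable and an outcome variable is a standardized regression coefficient, which for a single predictor reduces to their correlation. A minimal sketch, with hypothetical variables:

```python
import numpy as np

def path_coefficient(x, y):
    """Standardized regression coefficient of y on a single predictor x,
    i.e., their Pearson correlation. With several predictors this
    generalizes to standardized OLS weights along each path."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    zx = (x - x.mean()) / x.std()     # assumes x and y are not constant
    zy = (y - y.mean()) / y.std()
    return float((zx * zy).mean())

# e.g., x = number of searches requested per user, y = papers published;
# the coefficient indicates association, not causation, by itself.
```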

Because the primary aim of SATCRIS is to contribute to the overall goals of ICRISAT and NARS working in ICRISAT’s areas of interest, it is clear that impact assessment of SATCRIS must address the development issues and goals of ICRISAT. Stated specifically, it was decided that the impact study should determine the extent to which SATCRIS services contributed, directly and indirectly, to the following development goals:

• Crop improvement of sorghum, pearl millet, chickpea, pigeonpea, and groundnut;

• Farming systems research in the SAT and associated sustainability issues, e.g., soil conservation;

• Transfer of technology;

• Planning, formulation, monitoring, communication of results, and evaluation of research projects;

• Problem-solving; and

• Decision-making.

Another way of restating the foregoing for the purposes of our study is to determine how SATCRIS products and services (the input variables) contributed to (or accounted for) the variation in the following output variables:

• Capability for problem-solving,

• Capability for technology development,

• Capability for research project formulation and management,

• Capability to contribute to the literature of the field and to scientific meetings,

• Capability to contribute to decision- and policymaking,

• Capability to contribute to the development of human resources, and

• Self-confidence, leadership, and self-image of recipients of services.

Another important parameter in impact assessment is the delineation of target users compared to the recipients and beneficiaries of SATCRIS products and services, who will be the chief respondents in assessing the impact. The following categories of users were identified to be the target user-community that would be studied to measure use, usefulness, and impact:

• Researchers in IARCs,

• Researchers in the NARS of the SAT,

• Teachers and postgraduate and research students in academic establishments of the NARS,

• Officers and others working in extension services usually associated with the ministries and departments of agriculture,

• Officers working in intermediary agencies including libraries and nongovernmental organizations (NGOs), and

• Researchers and managers working in the private sector, e.g., seed companies, breweries, and agrochemical companies.

Methodological Considerations

Data required for the impact assessment is being collected using carefully designed questionnaires (see the Appendix for the questionnaire on the search service of SATCRIS) and through structured and unstructured interviews with users and nonusers of SATCRIS services. In addition, data for costs of providing the services will be drawn from internal records at SATCRIS. A feedback analysis system was built to record feedback data received from recipients of the SDI service. This system calculates the precision ratio for each user based on the relevance rating that the SDI users give to each item disseminated to them in their SDI output. Data from this system will also be used to supplement that collected from the questionnaire and interview surveys.
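Under the standard definition, a user’s precision ratio is the fraction of items disseminated to that user which the user rated relevant. A minimal sketch follows; the grade labels are hypothetical, as the SATCRIS rating scheme is not detailed here.

```python
def precision_ratio(ratings, relevant_grades=("relevant", "very relevant")):
    """Precision ratio for one SDI user: relevant items / items disseminated.
    `ratings` holds the user's relevance rating for each item received;
    the grade labels are illustrative stand-ins."""
    if not ratings:
        return 0.0
    relevant = sum(1 for r in ratings if r in relevant_grades)
    return relevant / len(ratings)
```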

Searches of databases will be made to discover papers written by respondents — users and nonusers — and the attempt will be to see if there is any linkage between information use and the productivity of respondents, as seen in contributions to the literature of the field. The question of whether papers by regular users are cited more often than those of nonusers will also be examined.

Theoretically, impact assessment of SATCRIS could be user-needs centred as opposed to service centred. We felt that both these approaches are important, and we hope to be able to address both in our study. An important question we were confronted with was whether to use one common questionnaire for all the different services or whether different questionnaires, one for each of the services, would be needed. The relative advantages and disadvantages of the two possibilities were discussed with social scientists at ICRISAT and on the IDRC-sponsored listserver INIMCAS, set up for participants in the impact case studies supported by IDRC. It was decided that, for the current study, it would be advantageous to have more than one questionnaire. The rationale for this approach is as follows:

• A single questionnaire would be very long and could put off people from responding.

• There are not too many common users across services.

• Although some of the purposes served by the services are common, there are differences (e.g., the primary purpose of the SDI is to provide regular current awareness, whereas the search service is one-time information retrieval). It would be useful to probe whether, and how, such differences influence use, usefulness, impact, and outcomes resulting from the different services.

In addition to the mailed questionnaire surveys, the study will use structured and unstructured interviews. The unstructured interviews will be based on the responses received to the questionnaire, and the attempt will be to probe for actual outcomes arising out of the use of one or more of our services. For instance, if a respondent, say an entomologist, indicates that he or she found useful information in solving a pest problem, the interview will attempt to get more specifics of the problem solved and anecdotal information useful in impact assessment. Questions specific to specialist groups, e.g., biotechnologists or agronomists, would also need to be asked in the unstructured interviews to determine if use of our services contributed to the development goals mentioned elsewhere in this paper.

Based on the advice of social scientists and statisticians, it was decided that the questionnaires would be sent to all recipients of a service or product. Following the receipt of responses, a stratified sample of respondents and nonrespondents would be drawn. This sample of users will be interviewed. The stratification will need to take into account characteristics such as country, subject specialization, and institution type of the user. A separate questionnaire for nonusers will be used in structured interviews with such respondents.
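A stratified draw of this kind can be sketched as follows. The field names and the fixed per-stratum sample size are hypothetical stand-ins; the actual stratification will depend on what the questionnaires record.

```python
import random
from collections import defaultdict

def stratified_sample(respondents, stratum_key, per_stratum, seed=42):
    """Group respondents into strata (e.g., by country, subject
    specialization, and institution type) and draw up to `per_stratum`
    from each for follow-up interviews. `respondents` is a list of dicts;
    `stratum_key` extracts the stratum label from each dict."""
    random.seed(seed)
    strata = defaultdict(list)
    for r in respondents:
        strata[stratum_key(r)].append(r)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

# Hypothetical usage:
# sample = stratified_sample(responses,
#                            lambda r: (r["country"], r["institution_type"]),
#                            per_stratum=2)
```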

Our experience to date indicates that interviews, more than the questionnaires, will be critical to the success of the study and in discovering ways in which our services did or did not contribute to the development goals. Because interviews will involve travel to user sites, this will begin only after responses to questionnaires are received and stratified samples drawn.

Measurement

In looking at impact assessment literature, we were attracted by the directness and simplicity of the Logical Framework Analysis (LFA) of the Canadian International Development Agency (CIDA) (Lazier 1994). An example of the use of this for measuring the impact of the SADC/ICRAF (International Council for Research in Agroforestry) Agroforestry Research Network for Southern Africa was studied and it was felt that it would be useful to attempt an LFA for the SATCRIS project:

SATCRIS Impact Study — Logical Framework Analysis

(Columns: narrative summary; objectively verifiable indicators (OVI); means of verification (MOV).)

Goals
  Narrative summary: Improve access to research information and documents for SAT researchers and others working for the improvement of crops mandated to ICRISAT, through information retrieval, dissemination, information consolidation, and document delivery services.
  OVI: Increased demand for information and documents from SAT researchers.
  MOV: Records of requests for services received.

Purpose
  Narrative summary: At the end of the project, the following are expected to be in place:
  - a comprehensive database;
  - regular search, SDI, and document delivery services to researchers and others in the SAT;
  - other information consolidation products;
  - a strengthened documentation facility at the ICRISAT Sahelian Center (ISC), Niger;
  - increased awareness of SATCRIS services in the SAT among potential users.

Outputs
  1. A comprehensive database
     OVI: Growth and quality of database; usage of database (by SATCRIS staff and users).
     MOV: Figures for size, growth, comprehensiveness, and quality of database.
  2. SDI and search service
     OVI: Growth in demand for the services and geographic spread of demand.
     MOV: Figures for growth and use.
  3. SDI software
     OVI: Use by NARS libraries to provide an SDI service to their users.
     MOV: Figures and facts on how the software was used.
  4. Consolidation products
     OVI: Demand for the products and nature of use.
     MOV: Figures for sale/distribution of products and a questionnaire for nature of use.

The foregoing analysis and measures would indicate the extent to which the project as a whole has fulfilled its objectives. The study, however, will need to go a step further to measure performance, effectiveness, and impact. The framework for impact assessment developed in a workshop sponsored by IDRC (Menou 1993) to relate measures for input, output, usage, and outcomes will be used to develop the following kinds of indicators wherever possible:

• Performance measures relating inputs to outputs.

• Effectiveness measures relating output to usage.

• Cost-effectiveness measures relating inputs to usage.

• Cost-benefit measures relating inputs to outcomes.

• Impact measures relating usage to outcomes.
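To make these relationships concrete, the five kinds of measures reduce to simple ratios once quantities for input, output, usage, and outcome have been defined; in the sketch below, all figures are invented, and the choice of units (cost, searches, uses, reported outcomes) is only one of many possible:

    # Invented yearly quantities for a single service.
    inputs   = 120_000  # cost of running the service (currency units)
    outputs  = 450      # searches delivered
    usage    = 380      # search outputs actually used by recipients
    outcomes = 55       # reported instances of problems solved, papers written, etc.

    performance        = outputs / inputs    # outputs per unit of input
    effectiveness      = usage / outputs     # share of outputs that were used
    cost_effectiveness = inputs / usage      # cost per use
    cost_benefit       = outcomes / inputs   # outcomes per unit of input
    impact             = outcomes / usage    # share of uses leading to an outcome

    print(f"effectiveness {effectiveness:.0%}, cost per use {cost_effectiveness:.0f}")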

Of these types of measures, the impact measures will obviously pose the greatest challenge. For instance, how does one measure whether one or more services contributed to users' capability for problem-solving or technology development? We hope that the interview and questionnaire surveys will yield case histories of users who were involved in problem-solving, and anecdotal information about how they were able to use information obtained through one or more services to solve problems. Some measures will probably be expressed only in relative terms, e.g., in comparison with nonusers.

Some Preliminary Results

Search Service

We decided to take up the assessment of the search service in the first instance. We reasoned that this service is user-driven and is meant to fulfil a very specific need. If we could target the beneficiaries of this service with a well-designed questionnaire to be followed up with interviews, we could come up with useful clues on how the service has or has not contributed to the development goals listed earlier. Experience in evaluating this service would be useful in similar work for the SDI service.

A pilot questionnaire on the SATCRIS search service was posted on INIMCAS (Information Impact Case Studies) and the feedback received was useful. Similarly, the pilot questionnaire was circulated to selected and experienced social scientists in ICRISAT and in the Indian NARS. This was also administered to a small, random sample of our users. Responses received and suggestions made by the social scientists we contacted were incorporated, and a final questionnaire for this service emerged (Appendix) in May of this year.

We decided to send the search service questionnaire to all those who requested the service during 1993–95. The questionnaire was mailed in the first instance to all those who received the service during 1994; the mailings took place in the first week of June. Of the 194 questionnaires mailed, 73 responses were received, a response rate of 38%. The results reported here are based on the questionnaire returns received so far.

User Satisfaction

The extent to which users are satisfied with the service can be gauged from responses to questions relating to timeliness, relevance, comprehensiveness, and the degree to which the search served the expected purpose. The scores for these variables are given in the following:

Qn 3  How timely was the search output received by you?
      Timely: 68 (93%); Late: 5 (7%)

Qn 4  How relevant was the search output?
      Very relevant: 24 (33%); Relevant: 46 (64%); No response: 3 (3%)

Qn 5  Comprehensiveness of the search output?
      Comprehensive: 61 (83%); Not comprehensive: 10 (14%); No response: 2 (3%)

Qn 6  How well was the purpose for which the search was done served?
      Very well: 23 (32%); Well: 48 (66%); Poorly: 2 (3%); Not at all: 0
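As a small sketch of how such returns can be tabulated into counts and percentages, the response codes below reproduce the Qn 3 figures just given (68 timely, 5 late); the tallying generalizes to the other questions:

    from collections import Counter

    # One entry per returned questionnaire, coded from the Qn 3 answer.
    qn3_answers = ["timely"] * 68 + ["late"] * 5

    counts = Counter(qn3_answers)
    total = sum(counts.values())
    for answer, n in counts.most_common():
        print(f"{answer:>8}: {n:3d} ({n / total:.0%})")  # timely: 68 (93%); late: 5 (7%)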

Outcomes

One question, with 25 possible outcomes, asked about the end-uses or outcomes to which the search service contributed. Respondents were asked to indicate the extent to which they thought they were helped in achieving one or more of the outcomes. Outcomes that applied to the user were to be scored on a high-to-low scale of 5 to 1: a score of 5 against an outcome meant that the user was greatly helped by the search in achieving that outcome; a score of 1 meant that he or she was not helped at all. The outcomes, in order of the scores given to them by the 73 respondents, are as follows (a small ranking sketch follows the list):

Rank  Outcome
 1    Obtain new information
 2    Obtain background information
 3    Understand methodologies used by others
 4    Compare results with those of others
 5    Write a review
 6    Get new ideas/directions for work
 7    Verify findings/results
 8    Write a research paper
 9    Compile a bibliography
10    Plan a new research project
11    Plan an experiment
12    Write thesis/dissertation
13    Provide advice/guidance to peers/students
14    Develop a new model for research
15    Contribute to meeting/seminar
16    Develop new technologies/tools, etc.
17    Develop teaching/training material
18    Identify peers with similar interests
19    Establish new contacts
20    Prepare for a training course
21    Solve a field or laboratory problem
22    Write a consultancy report
23    Write a manual
24    Identify vendors
25    Provide policy advice
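A minimal sketch of how such a ranking might be produced from the 5-to-1 scores, assuming one list of ratings per outcome from the respondents to whom it applied (the outcomes shown are drawn from the list above, but the scores are invented):

    from statistics import mean

    # Hypothetical 5-to-1 ratings per outcome.
    scores = {
        "Obtain new information": [5, 5, 4, 5, 4],
        "Write a review":         [4, 3, 5],
        "Provide policy advice":  [2, 1],
    }

    ranked = sorted(scores.items(), key=lambda kv: mean(kv[1]), reverse=True)
    for rank, (outcome, s) in enumerate(ranked, start=1):
        print(f"{rank}. {outcome} (mean {mean(s):.2f}, n={len(s)})")

Whether outcomes are ranked by mean score, by total score, or by the number of respondents citing them is a design choice the paper does not specify; the sketch uses the mean.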

This ordering does not take into account that the respondents comprised a mix of researchers, teachers, and postgraduate students. It would be interesting to compare the ranking of outcomes for each class of users; this will be done when responses to the search service questionnaire have been fully received. As already pointed out, the planned interviews will attempt to look for specifics of the outcomes and anecdotal evidence of the use, effectiveness, and impact, if any, of the service.

The search service questionnaire had another question in which respondents were asked to indicate the extent to which they agreed with statements (on a high-to-low scale of 5 to 1) concerning the search and its impact on time saved, the quality of their paper or experiment, confidence, etc. The responses, ranked, were:

Rank  Statement
 1    The search updated knowledge of the topic
 2    The search saved my time
 3    The search contributed to the quality of my paper, etc.
 4    The search contributed to my confidence
 5    The search helped in decisions in my work
 6    The search helped in interactions with others
 7    The search helped in resolving difficulties in my work

Here again it would be necessary to see if there are differences in the perception of impact of the service for different classes of users.

SDI Software

The software and documentation were provided to 77 institutions. Recipients included libraries, CDS/ISIS user groups, and national distributors of CDS/ISIS. A questionnaire to find out how the software was in fact used was distributed to all 77 recipients; 20 (26%) responded.

Of the 20 responding libraries, four have used the software to run an SDI service for their users. Their service goes to a total of 455 users. Of these four, two are international centres and two are NARS libraries — one in Pakistan and one in India. Nine recipients of the software use it in demonstration and training, and three recipients have distributed the application to other CDS/ISIS users within their region. One international centre has contributed the software to a CD-ROM containing CDS/ISIS applications and databases. Six recipient libraries were not able to install the software.

SATCRIS Database

Work on evaluating the use and usefulness of the in-house database has not yet started. We have some quantitative data, however, on the extent of usage by end-users. Until 1993, the database was not searched directly by users, because the information retrieval software in use has a command-driven query language that is unfriendly to end-users; only the library staff accessed the in-house database, averaging about 40 sessions per week. In 1993, a menu-driven front end was written and made available to users on ICRISAT's LAN. Since the database was opened for direct access by end-users, usage has more than doubled, to between 80 and 100 sessions per week.

The study will examine, through a series of structured interviews primarily of users within ICRISAT, the purposes for which the database has been used and whether the availability of an online database has in fact helped to improve the productivity of researchers and to contribute to the research and development goals of ICRISAT.

Other Products and Services

A questionnaire for evaluating the SDI service is now in its final stages of development. We will pretest the questionnaire before mailing it to all 440+ recipients of the service, and follow up with interviews of a stratified sample of users, as for the search service. We have still to determine how to evaluate the use and impact of the information analysis products and associated databases that we put out, and we still need to define suitable indicators to measure the relationships among inputs, outputs, cost-effectiveness, and cost-benefits.

Acknowledgments

The authors gratefully thank IDRC for the opportunity and financial support to participate in this meeting. Thanks are also extended to the management of ICRISAT for permission to participate and present this paper.

References

Ratnakumar, P.; Haravu, L.J. 1994. Design and development of a user interface for the library’s database at ICRISAT. Program 28(1), 15–27.

Haravu, L.J.; Sinha, P.K.; Prasannalakshmi, S.; Jotwani, D.; Naidu, R.G. 1990. AGRIS level II and the information services of specialized information analysis centers: The case of SATCRIS and its SDI services. IAALD Quarterly Bulletin, 35(1), 11–18.

Lazier, J.R. 1994. A framework for the assessment of impact in natural resource based projects. Canadian International Development Agency (CIDA), Hull, PQ, Canada. 47 pp.

Menou, M.J., ed. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada. 188 pp.

Appendix: the SATCRIS search service questionnaire is reproduced in the original as page images.

Impact of Electronic Communication on Development in Africa

Nancy Hafkin and Michel J. Menou1

Electronic Communication in Africa

Connectivity in Africa

Although Africa remains the least electronically connected of all the regions of the world, with only four countries (Egypt, South Africa, Tunisia, and Zambia) at the present time having full interactive access to the Internet, it is perhaps the region for which electronic communication offers the greatest hope as a rapid and relatively inexpensive means to end the information isolation and information gap characteristic of much of the region. As such, the study of the impact of this information technology on development is a particularly apt subject for the first series of impact studies sponsored by the International Development Research Centre (IDRC).

CABECA

IDRC was the first organization to sponsor electronic communication initiatives in Africa, through its Telematics program, which began in the mid-1980s. During 1989 through 1992, this program sponsored a number of pilot projects that introduced electronic networking to the Africa region on an experimental basis, such as "Computer Networking in Africa," executed by the Pan African Development Information System (PADIS) of the United Nations Economic Commission for Africa in Addis Ababa, Ethiopia. In 1993, IDRC sponsored the project "Capacity Building for Electronic Communication in Africa" (CABECA), also implemented by PADIS, which aims to introduce low-cost, Fido-based electronic connectivity to some 24 countries in the region. By mid-1995, CABECA had worked to initiate or strengthen electronic communication systems in more than 20 African countries.

1Officer-in-Charge, PADIS, Economic Commission for Africa (ECA), PO Box 3001, Addis Ababa, Ethiopia, and Consultant, CIDEGI, 13, rue Nationale, F-49530 Les Rosiers sur Loire, France, respectively.

Under the CABECA project, countries are provided with the equipment required for a Fido node, training of the systems operators and users, and backstopping. In some instances, electronic mail (e-mail) is the only facility available, whereas in others bulletin board systems (BBS) have also been set up.

Impact Study on Electronic Communication in Africa

Given the importance of assessing the impact on development of information transmitted electronically, IDRC decided to sponsor one of the case studies on this topic as suggested by the team in charge of CABECA. The case study is conducted at the regional level using standard instruments to allow for the consolidation of the results into a single final analysis.

Objectives and Geographic Coverage

The objectives of this case study are to contribute to the development of indicators for the measurement of the impact of information and to assess the impact of electronic communications (telematics) on development in Africa. The CABECA project at the Economic Commission for Africa provides the framework and institutional infrastructure for the implementation of this “Africa Networking Impact Study.”

The study could not cover all countries where e-mail access has been provided for some time. Those selected were ideally to cover the variety of background conditions, experiences, and geographical areas found on the continent; material constraints did not allow for including more than four countries. Selected for study were:

Horn of Africa:     Ethiopia
Eastern Africa:     Uganda
Southern Africa:    Zambia
Western Africa:     Senegal

In Ethiopia, the CABECA project had set up a node with nearly 1,000 users by mid-1995; Senegal had both an active RIO-ORSTOM (Reseau Informatise de l'ORSTOM) node and a Fido node, the latter at an environmental NGO, ENDA; in Uganda, one system operator had set up a university node as well as another serving the private sector; and in Zambia, an active Fido-based university node had stimulated the establishment of a private company and a World Bank loan, leading to the first full Internet connectivity in sub-Saharan Africa outside of South Africa.

A local investigator, or team of investigators, will conduct the study in each country. It was felt that the electronic communication system operators in the countries under study might not have the required research background and skills and would be too busy to conduct the studies themselves. They should, however, be closely associated with the studies. In each of the four selected countries, a local investigator was identified.

The desired characteristics for the investigators were that they be residents of the country identified for study and available to conduct the surveys and prepare reports, be fairly familiar with the information and communication cycle and possibly with electronic networks, and be experienced in user and social surveys in general. To impart the methodology of impact studies and to assist in achieving consistency among the national studies, a consultant was to serve as moderator for the entire study.

An additional desirable element was to associate graduate students of the IDRC-supported information science programs in the Consortium of African Schools of Information Science (CASIS). Fortuitously, three of the national case investigators chosen are current students or graduates of one of the members of the consortium, the School of Information Science in Africa (SISA) at Addis Ababa University; the two graduates are now themselves information science instructors in their own countries. Their participation, it was felt, would help keep impact studies in the forefront of the concerns of information scientists in Africa.

Workplan

The impact study is intended to involve the following steps:

1   Design of the study and detailed workplan

2   Identification of the investigators

3   Start-up workshop

4   Sampling

5   Development of the survey instruments

6   Test and revision of the survey instruments

7   Initial surveys

8   Ongoing monitoring of impact factors

9   Ongoing backstopping of investigators through e-mail

10  Mid-way workshop of the investigators

11  Final surveys

12  Comparison of the results of the initial and final surveys

13  Interpretation

14  Production of interim national reports

15  Compilation of the national reports into an interim overall report

16  Final workshop of the investigators

17  Revision of the interim reports

18  Production of the final report

19  Editing of the results for presentation to the target audiences (at institutional, national, and regional levels)

20  Analysis of feedback and presentation of results to the target audiences.

The impact study calls for two sets of data to be collected:

• Data about the initial situation at the national level and the expected impact of electronic communication.

• Identification by the users of the individual and institutional benefits gained in the various possible categories (e.g., political, economic, social, cultural, technological). Upon completion of the study, the current situation and/or the changes in the initial situation will be described and the perceived impact explained by the users. The comparison between the two sets of data (situation prior to electronic networking and changes resulting from the access to electronic networks) is expected to provide the main bases for identifying the impact.

Status of the Project

The start-up workshop took place in Addis Ababa at the beginning of March 1995, bringing together the four country case investigators with the moderator of the study (Michel Menou). Concerned PADIS staff also participated in the workshop. The objectives of the workshop were to review the impact assessment concepts and methods, devise the work plan, and prepare the required survey instruments.

The workshop program included a review of the impact program in general, basic concepts of and framework for impact assessment, purpose and scope of the electronic connectivity impact assessment; analysis of data on user communities in the four countries; elaboration of survey questionnaires, sampling strategies, and interview schedules. A good deal of time during the workshop was devoted to the analysis of data on the user communities in the four countries and discussion of alternative survey and sampling strategies. (The questionnaire as well as notes on sampling appear in the Appendix to this chapter).

Following the workshop, national investigators were to contact the moderator with their comments on the questionnaire and contact the systems operators of all the networks operating in their countries to find out what kind of data were available on systems users. The next step was establishing lists of entry points by frequency of use. The first report by the investigators was to indicate the number of national entry points per frequency of use; suggested thresholds of low, medium, and high use; the average number of users per entry point; the percentage of end users meeting the requirement to be national organizations or permanent residents; lists of the main categories describing the institutional or sectoral groups to which the entry points belonged; and the number of entry points outside the capital city area.
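As an illustration of the bucketing step, assuming a table of entry points with their average monthly traffic (the entry points and the low/high thresholds below are invented; the study was to agree on a common yardstick):

    from collections import Counter

    # Hypothetical entry points with average calls per month.
    entry_points = {"univ-node": 620, "ngo-node": 240, "ministry": 35, "clinic": 8}

    def usage_band(calls_per_month, low=20, high=200):
        """Classify an entry point as low, medium, or high use."""
        if calls_per_month < low:
            return "low"
        return "high" if calls_per_month >= high else "medium"

    bands = Counter(usage_band(c) for c in entry_points.values())
    print(dict(bands))  # here: {'high': 2, 'medium': 1, 'low': 1}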

Survey and Instruments

It was envisaged to first conduct a mail survey of the end users at all entry points to collect basic data about the use of electronic mail and its anticipated benefits. The series of questions replicates, where appropriate, those used in a previous survey of the use of Healthnet, a satellite-based computer communications system for health, also sponsored by IDRC. This would possibly allow for comparisons and/or generalizations.

On the basis of structured samples, the national investigators will then conduct personal interviews with a number of end users. A minimum target of 50 interviews in each country has been tentatively set.

Draft survey instruments were prepared by the moderator on the basis of the results of the workshop. The national investigators were to make small-scale tests of the instruments and report their observations to the moderator. Once the questionnaires were finalized, they were to be sent by e-mail to all entry points and an analysis made of the first returns.

Follow-ups were to be made to secure the number of respondents necessary to provide baseline survey data. Following reports of the baseline survey, investigators were to proceed to the user interviews that, along with the baseline data, were to be the basis for the first report. The interviews are to be done according to either an analytic or a historic option, depending on which the investigator finds more appropriate for the person being interviewed. Suggested questions and interview schedules are also given in the Appendix.

Once all these reports are produced and shared within the team, the project will hold a mid-project workshop to review the reports and finalize the methodology for the final half of the project. The second half will consist of a second series of interviews that will secure a longitudinal basis for the study. After the circulation of country final reports among the team, a consolidated report will be produced and a final workshop will be held — foreseen for the end of December 1996 — to review the findings of the project.

Status of the Case Study

The interaction among the PADIS staff, the national investigators, and the moderator takes place mostly through e-mail. The project has also made good use of the IDRC-established Listserv on the impact studies, INIMCAS-L, for communications between investigators and the project leader, despite initial difficulties some researchers had in subscribing. Communications related to the electronic communication in Africa impact study on the Listserv are labeled "CABUS."

All of the researchers have now collected their initial data on users and arranged them by frequency of use; a common yardstick was adopted for the low-, medium-, and high-use categories. That electronic communication is taking on a substantial dimension in the countries under study can be seen from the figures for average calls per month at the start of the study: for Ethiopia, 4,561; for Uganda, 2,929; and for Zambia, 6,831 (the data for Senegal came in later). Categories of institutions for the stratification of the sample were also established. Currently, investigators are working on the proposed sample structure for each country and preparing for the first series of interviews.

The project also took advantage of the presence in Addis Ababa, some 5 weeks after the initial project workshop, of some 300 electronic communication users, the bulk of them from Africa. At the Regional Symposium on Telematics for Development in Africa, which took place from 3 to 7 April 1995, a supplemental questionnaire was distributed to collect further data on the impact of electronic communication in Africa. Some 20 questionnaires were filled out and returned (the low rate of return is probably because the participants were bombarded with documents and activities and might not have had much time to fill out the questionnaires); these are now being analyzed. A copy of the questionnaire distributed at the symposium also appears in the Appendix.

Appendix: the survey questionnaire is reproduced in the original as page images.

Notes on Sampling2

The points below summarize the conclusions of the project start-up workshop regarding the structure of the sample. It should be recalled that these orientations will be confirmed and implemented, under the most appropriate formula, only on the basis of more precise figures and tabulations.

1. The focus of the study being the potential contribution of electronic communications to development in Africa, it seems appropriate to restrict the population of actual users to be interviewed to:

(a) individual users with permanent resident status in the concerned country;

(b) users in national organizations, irrespective of their personal status.

The sample of people to be interviewed would thus exclude foreign and international organizations, who are, however, covered in the baseline survey.

If only native individuals are taken into account, it seems that the total population may be reduced by up to 50%, thus making the size of the group interviewed far more representative.

2. Frequency of use (low, medium, high) will be considered the prime attribute of the users. The hypothesis to be verified is that the electronic communication user is delivering better and achieving more for him/herself and for his/her organization; the other variables thus become dependent variables.

3. The total sample for the 4 countries should be as representative as possible of the native users in the 3 frequency subsets, considering first their number and then their distribution among the main categories of organizations. Achieving national representativeness of the sample, though highly desirable, can only be a secondary target.
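One simple way to honour these priorities is to allocate the country's interview target across the three frequency subsets in proportion to their size; a sketch under invented population figures (the 50-interview target is from the chapter text):

    # Hypothetical counts of eligible (national/resident) users per subset.
    population = {"low": 300, "medium": 120, "high": 40}
    target = 50  # minimum interviews per country

    total = sum(population.values())
    allocation = {band: round(target * n / total) for band, n in population.items()}
    print(allocation)  # {'low': 33, 'medium': 13, 'high': 4}

A second pass would then adjust the allocation within each subset to reflect the main categories of organizations.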

2Notes on sampling were prepared by Michel Menou.


Impact of Information on Rural Development: Background, Methodology, and Progress

Kingo Mchombu1

Introduction2

This paper discusses the dynamics of information provision to support development in Africa. For any community to function efficiently and productively, a basic minimum stock of usable information is essential. Every society needs to acquire, store, and exchange this basic stock of information to allow it to survive. The view that information is central to the solution of any society’s economic and social problems, and should be regarded as a factor of production is now widely accepted (Belshaw 1965, p. 128; McAnany 1978, p. 2). Pradervand (1980, p. 56) has gone even further to insist that information is the most basic of all basic needs.

Although information is recognized as an essential resource for the social and economic development of the Third World, the fact that it is accorded a low status is proof that its potential value is not yet fully recognized (Mchombu 1992). The contradiction between the vital role of information in development and its lack of official recognition in Africa can hardly escape the attention of information specialists.

Stone (1993) has called this the apparent dilemma: information is a powerful catalyst for transforming society, and yet the linkage between information investments and the achievement of specific development goals appears weak.

1Lecturer, Department of Library and Information Studies, University of Botswana, Private Bag 0022, Gaborone, Botswana.

2Collaborating institutions/individuals: Malawi National Library Services (National Librarian, R. Mabomba); Botswana National Library Service (Director, B. Garibakwena); Institute of Development Studies, University of Dar-es-Salaam (I. Ngwara); Tanzania Commission of Science and Technology (Director of I & D, T.E. Mlaki); and the International Development Research Centre (IDRC, Canada), funding agency and coordinator of impact studies (R. Archer; Michel Menou, private consultant).

There is an ironic twist in that our continent, which is the least developed, is the very one that shows the least awakening to the use of information in overcoming underdevelopment.

Part of the reason lies in our politicians and policymakers not having been exposed to evidence that irrefutably demonstrates the contribution of information to development. Saracevic (1980), for example, notes that "there is no systematic body of empirical evidence to support this assertion, especially quantitative evidence."

According to Stone (1993), the challenge, therefore, is to produce valid models with which the socioeconomic impact of information activities can be assessed. In turn, such an orientation would result in the design and creation of impact-bearing information programs and services. If such impact can be demonstrated to decision-makers, resource allocators, and politicians, it would increase support for information as a vital resource in development.

In 1992, therefore, the International Development Research Centre (IDRC) organized a conference on the theme of “Measuring the Impact of Information on Development.” Given the exploratory nature of the subject, the need to have lengthy discussions, and the commitments of prospective participants, the organizer decided to hold a computer conference, moderated by Michel Menou, to enable participants to exchange messages for a lengthy period of time (7 months) without interruption.

Although unable to take part in the computer conference because of unreliable computer facilities at my institution at that time, I was involved as a panellist and was able to send comments directly to the organizer. A workshop held in Nairobi attempted to link the findings of the computer conference to practical applications (Stone 1993, p. 19).

The outcome of the Nairobi workshop, in which we fully participated, was to come up with methodological guidelines and an operational framework on how to carry out assessment studies on the impact of information on development. Both the computer conference and workshop deliberations have been published for wider dissemination (Menou 1993). Currently, there is an active computer discussion group on information impact research and issues of methodology through which we exchange ideas and benefit from each others’ experiences.

The current project is one of several ongoing attempts, funded by IDRC, to address this concern. The project grew out of my concern for the provision of information in support of development in rural communities of Southern Africa. A project to identify the information needs of rural communities in three SADC member countries (Botswana, Malawi, and Tanzania), called Information Provision for Rural Development (INFORD), was carried out between 1990 and 1992 with IDRC funding (Mchombu 1993).

Given the background factors already highlighted, it was perhaps inevitable that when INFORD 2 was designed issues of assessment of information on rural development would take the upper hand. Like its predecessor, INFORD 2, also funded by IDRC, will be carried out in the same rural communities that participated in the first phase and will last for 3 years (1994/5-1996/7).

Aims

Cast in the action research mode, this study investigates the provision of information to support rural development and the impact of such information on development. Specifically, the study aims to:

• Explore the impact of information in rural development and establish conditions under which information can make an impact (or fail to impact) on attitudes, skills, and knowledge of targeted groups and cause them to achieve developmental goals.

• Test various methodologies for the efficient collection, dissemination, and use of indigenous knowledge resources and measure the impact of such indigenous knowledge use on the community’s development.

• Identify, gather, and disseminate selected data and information generated from the rural development efforts of the community and measure the impact of increased use of such information.

• Select ‘key information needs areas’ and facilitate the supply and use of information by the community in these key areas, and measure the changes which take place as a result of the information input.

• Develop a model approach to information support for rural development that would be applicable, in a broad sense, to rural communities in Africa.

• Identify problems and constraints in delivering information in support of rural development.

• Identify the training needs of information workers in offering an effective, impact-bearing information support service.

The study will also examine the issue of whether concrete evidence can be assembled that shows the relationship between information and development. Basically, the project aims to draw out the key impact-bearing factors in the successful provision of information in support of rural development.

Methodological Issues

This study hopes to provide information support for rural development and measure its impact on the development of rural communities. It involves a complex set of actions. At one level, it will involve the setting up of Community Information Centres in the designated communities, from which the investigation will be carried out. The African rural information environment is one where the average person is not familiar with the operations of a formal information centre and, consequently, information use habits (for such systems) are not fully developed (there is, however, wide use of cultural-oral information systems). Hence, vigorous social marketing is a prerequisite to popularize such services.

A second level of complexity is the selection of an appropriate methodology to measure impact of information on rural development. Assessment of information systems is not totally new. In the past, there have been attempts to assess the performance of libraries on the three criteria of economy, efficiency, and effectiveness. According to Potter (1985, p. 112), this has attempted to answer the questions:

• How economical is the service?

• How efficient is the service in the use of resources?

• How effective is the information service in the service it provides?

The works of Lancaster (1977) and Griffiths (1982) also fall into this category. What is new in the current study series is the concern for the impact such services have made on development. This is another dimension, one that has not been adequately tackled in the past.

The impact workshop guidelines mentioned earlier came up with the following suggestions on how to formulate impact studies:

• Data collection and analysis should be as simple as possible.

• Interpretation of the indicators should be straightforward.

• The indicators should point to benefits that are usually given attention by policymakers.

• The indicators should lead to straightforward conclusions, which should be intelligible to those who are going to act on them (Menou 1993, p. 63).

The guidelines also identify four types of indicators (a small illustrative computation follows):

• Operational performance indicators, which relate to output (such as productivity, efficiency, cost per output, cost by attribute level, and productivity by attribute level).

• Effectiveness indicators, which relate output to use (such as user satisfaction, turnover rate, amount of use by attribute level, satisfaction by attribute level, and amount of use by satisfaction level).

• Cost-effectiveness indicators, which relate output to use ratios (such as cost per use, cost per user, cost per capita, and cost by satisfaction level).

• Impact indicators, which relate actual to potential use (such as market penetration, use per capita, and needs fill rate) (Menou 1993, p. 64).

Martha Stone (1993, p. 12) is more specific on the issue of impact indicators and states that the main element is the socioeconomic impact of information activities.
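To make the fourth category concrete, here is a small sketch of the named impact indicators under invented figures for one service:

    # Invented figures for one information service and its community.
    potential_users  = 2_000  # population the service could in principle reach
    registered_users = 260
    uses_per_year    = 1_150
    needs_expressed  = 400    # information needs put to the service
    needs_met        = 310    # of which the service could satisfy

    market_penetration = registered_users / potential_users  # actual vs potential use
    use_per_capita     = uses_per_year / potential_users
    needs_fill_rate    = needs_met / needs_expressed

    print(f"penetration {market_penetration:.0%}, "
          f"use per capita {use_per_capita:.2f}, "
          f"fill rate {needs_fill_rate:.0%}")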

For the purposes of this study, we will concentrate mainly on two types of measurement: first, the evaluation of efficiency and effectiveness and, second, impact measurements. The first is considered essential because without these (interim) measurements, impact might not occur. Impact refers to the socioeconomic effects of information application on the community's development goals.

To measure impact effectively, one must also monitor the process rather than simply wait to measure the end result. The path from the delivery of information to the point where impact occurs (i.e., changes at the rural development practice front) involves several steps. According to Foote et al. (1983), the first is the exposure of the audience to the relevant information; the second, assimilation or learning of the messages; the third, change of behaviours; and the fourth, change in current practices as a result of behavioural change. The authors suggest that outcome measurements should be monitored at each level because "if a failure occurred at any point along the path, no further impact would be expected" (Foote et al., p. 117).
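The four-step path behaves like a funnel: the proportion of the audience reaching each stage is the product of the stage-by-stage rates, so a failure anywhere drives the end result toward zero. A sketch with invented rates:

    # Invented stage-by-stage proportions for one information campaign.
    stages = {
        "exposed to the information": 0.60,  # share of the audience reached
        "assimilated/learned it":     0.50,  # of those exposed
        "changed behaviour":          0.40,  # of those who learned
        "changed current practice":   0.70,  # of those who changed behaviour
    }

    reach = 1.0
    for stage, rate in stages.items():
        reach *= rate
        print(f"after '{stage}': {reach:.1%} of the audience")
    # Final figure here: 8.4%. A near-zero rate at any stage would
    # collapse it, which is why each level must be monitored.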

The foregoing views are based on the assumption that people are passive subjects to be acted upon and rather reluctant users of information. In contrast, Tandon (1981, p. 299), for example, subscribes to the view that the speed of acceptance of change and the taking up of action for development can be increased by the use of dialogue that integrates inquiry and intervention. Such dialogue between information providers/researchers and communities may enhance a change in the critical consciousness of communities, resulting in much quicker action and impact. Hall and Dodd (1977), for example, found that in a public health campaign carried out in Tanzania, behaviour change was recorded in almost 2 million people, and short-term benefits included the building of 750,000 latrines.

The views of Tandon, and Hall and Dodd, and others are somewhat in conflict with the spirit of the Impact Assessment Guidelines (Menou 1993), which would seem to favour a more traditional research approach to the question of impact assessment with an inclination toward quantitative measurements that would yield data easily understood by politicians and policymakers. We are, however, convinced that a purely traditional research paradigm might not be the best way forward, in the context of impact studies on rural people’s development.

Tandon (1981) notes that the traditional paradigm emphasizes the distinction between researcher and subjects. They are seen as two separate parties, and inquiry is the process by which the researcher acquires knowledge of the subjects. In the traditional research paradigm, the process of inquiry does not entail any impact on either the researcher or the subjects (Tandon, p. 299).

Tandon explains that in inquiry and intervention both the researcher and the subject learn from each other; they also learn together from the situation that they are part of and are engaged in studying. Thus, the impact is not only on the subjects, i.e., the rural people, but also on the researcher (information providers) and everyone involved in the whole process (Tandon, p. 299).

The foregoing suggestions are extremely attractive. My conviction has grown since coming into contact with the works of Brenda Dervin (Dervin 1983; Dervin and Dewdney 1986; Dervin and Nilan 1986). Dervin (1983) calls for the necessity of regarding the user as a thinking, self-controlling human being, rather than an “empty bucket” to be filled with information. She adds that users make sense of the information provided in relation to their world, time, place, problems, etc. At the same time, before the information is accepted, the user already has some sense, from experience, on which he or she relies until this sense runs out. She concludes that “people seek information from wherever they can get it,” and many find information relevant to their interests of the moment in almost everything they see, read, or hear while the situation is active in their minds (Dervin 1983, p. 172).

On the question of impact, Dervin noted that impacts must be seen from the user's reality and point of view, rather than through the impact measurements popular with systems, expressed as quantitative exposures for accountability purposes, e.g., the number of registered users, library circulation, or the proportion of users attending an activity. Although this aspect of the current research is still being developed, it is anticipated that the methodology used will combine the quantitative aspects envisaged in the guidelines with the strong foundation of dialogue and intervention suggested by Tandon and the sense-making approach of Brenda Dervin.

Yet another complexity, according to McAnany (1978), is the extent to which impact is influenced by the availability of the other inputs required to bring about change. The potential for impact is much greater where few other inputs are needed: information impact on agriculture, for example, is usually limited by the need for noninformational inputs such as fertilizers, seeds, and tools, whereas in health, most ideas can be put into practice at low cost.

Along the same lines, Grunig (1971) found among Colombian peasants that, although information can help an individual adapt to a changing situation, it can do little to change the situation itself. Agricultural information, for example, will hardly make an impact where the targeted group consists of landless peasants. This has been referred to as the structural context. There is a need, therefore, to determine the weight (or contribution) of information in each change situation or context so that we can assess more accurately the extent of the information-derived impact in each situation.

The Menou report (1993) also suggested that the benefits likely to accrue to the beneficiaries be identified, so that indicators can be developed for measuring whether those benefits have been realized. Moyo (1995, personal communication), however, notes that such benefits result from a compound set of factors, such as macro technological developments, political and economic changes, and weather and seasonal changes, and that human development is a naturally occurring phenomenon: even without the intervention, some development would have occurred anyway. According to her, there is a need to know the current pace of development before one can measure the acceleration that occurs as a result of additional information support.

From INFORD 1 to INFORD 2

An integral part of this research project involves setting up information services outlets in the six rural communities that participated in phase one of this study (INFORD 1). The information needs identified in phase one will form the point of departure for INFORD 2. Phase one followed mostly a traditional research paradigm (with the partial exception of one section), but it offers a starting point. The information needs uncovered in phase one, briefly highlighted in the following, will be cast in terms of anticipated benefits, and indicators for measuring the impact of the information will be developed.

Phase I Study and Summary

Phase one of this study was carried out between 1990 and 1992, and a report titled “Information Needs and Seeking Patterns for Rural People’s Development in Africa” (Mchombu 1993) was the result. The findings show that rural information needs fall into two categories: information needs common to all rural communities (studied), and needs that are location specific.

Common Rural Information Needs

• Information on income generation (projects, nonfarm incomes, and money-saving initiatives).

• Community leadership.

• Literacy support.

• Basic economics (petty business, finance/loans and how to get them, and survival of small businesses).

• Government policies on rural development (health, agriculture, education, cooperatives, etc.).

• Soil conservation, fertility restoration, and soil erosion.

The location-specific information needs are presented in the following under the respective countries/villages.

Malawi: Village 1 (Chiwamba)

• Agriculture — tobacco (modern farming, marketing, international atmosphere).

• Health and sanitation (hygienic handling of local brews, malaria prevention, hookworms prevention, etc.).

Malawi: Village 2 (Bandawe)

• Alternative crop to rice (cotton?)

• Health and hygiene (malaria, hookworms, chest infections).

• Fishing information (migration on the lake and marketing information).

Botswana: Villages 3 and 4 (Mogobane and Kopong):

• Information for seasonal/casual employment.

• Vocational training opportunities.

• Farming under drought conditions.

• Livestock husbandry.

Tanzania: Village 5 (Kisarawe II):

• Farming (cashew nut, fruit tree, and coconut farming; horticulture).

• Cooperatives.

• Health and sanitation (mosquito-borne diseases).

• Health of young children.

Tanzania: Village 6 (Marindi):

• Farming (Coffee — modern farming, marketing, use of pesticides).

• Keeping cross-bred dairy cattle.

• Cooperatives.

• Health and sanitation.

The investigation also included how indigenous knowledge resources of each of the communities are perceived and used by the community. Findings show that most of the villagers were hostile to the use of indigenous knowledge, but there was still considerable use of such information in the struggle for daily survival. There is no system in place to bequeath this knowledge to the younger generation as communication links for this purpose have been cut.

There was a need to create channels for the communication of indigenous knowledge to the young (primary school children) and incorporate useful elements in the development process. The broad purpose of any future study is to change the negative perception toward indigenous knowledge, which is perceived as a barrier to development.

Linking Information Needs to Impact Assessment

To link information needs to impact assessment, we have to propose, under each need, the benefits of information, the types of services and products that will lead to these impacts, and the indicators that we will look for during assessments.

This exercise must be finalized after consulting the respective communities, but what follows is a tentative elaboration of a possible outline for selected information needs (space does not allow a comprehensive elaboration; a compact structured sketch follows these outlines). In each community, one of the information needs areas will be chosen for measuring impact on development.

Need: Income Generation

• Anticipated Benefits:

Find opportunities to earn off farm incomes,

Identify opportunities to earn extra incomes from agricultural products,

Learn about small businesses,

Be aware of basic economics and simple accounting procedures, and

Increase cash incomes in household.

• Relevant Services and Products:

Information of sources of rural finance,

Information on how rural people in other parts generate more rural incomes,

Information on how to process and preserve foodstuffs,

Information on market prices for agricultural produce and seasonal fluctuations,

How to start and manage small projects,

Booklets on basic economics and basic accounting,

How to keep away from bankruptcy, and

How cooperative activities operate.

Need: Employment

• Anticipated Benefits:

Self-improvement,

Improved chances of finding work,

Improved chances of acquiring training, and

Awareness of self-employment opportunities (including small-scale commercial farming).

• Services and Products (to provide benefits):

Information on local job opportunities,

Newspapers with vacancies,

Information on self-employment opportunities,

Information on courses and training schemes,

Careers literature, and

Directories of vocational training institutions.

Need: Soil Conservation Measures

• Anticipated Benefits:

Improved management of land and water resources,

Improved land use practices,

Increase in agricultural productivity, and

Improved soil fertility restoration practices.

• Services and Products (to provide benefits):

Information on how to make manure from organic matter,

Information on range management practices,

Use of fertilizers, prices, advantages, problems, etc., and

Information on local/government bylaws on conservation.

Need: Information on Community Leadership

• Anticipated Benefits:

Improved leadership in the community;

Increase the capacity of the village development committee (VDC) to diagnose community problems, formulate action, and monitor/supervise implementation;

Improved record keeping by the community government,

Appreciate need for accountability and democratic participation, and

Improvement in the motivation and mobilisation of people for development.

• Services and Products:

Information on community development and management,

The history of the community and its development,

How to keep minutes of meetings and monitor implementation,

Information on the development of other communities in other parts of the world,

News of ongoing projects in the village,

News of government programs that affect the village, and

News of other programs the community can take advantage of.

Need: Health Information

• Anticipated Benefits:

Awareness of how the most prevalent diseases spread,

Improved chances of taking measures against common diseases,

Improved attendance at clinics/health centres for under fives,

Improvement in family nutrition through better mix of existing foods, and

Safer handling of pesticides, and agricultural drugs.

• Services and Products (to provide benefits):

Information on common diseases and their control measures,

News of scheduled clinics for under fives,

Information on nutritious foods locally available,

Awareness of poor food habits, and

Information on safe handling of pesticides and what to do in case of poisoning.
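One way to keep such outlines analyzable is to hold each 'key information needs area' as a structured record linking need, anticipated benefits, services/products, and candidate indicators. The sketch below condenses entries from the income-generation outline above; the indicator wording is our own illustration, not taken from the study:

    # One needs area as a structured record (entries abridged).
    need_profile = {
        "need": "Income generation",
        "anticipated_benefits": [
            "find opportunities to earn off-farm incomes",
            "increase cash incomes in household",
        ],
        "services_products": [
            "information on sources of rural finance",
            "booklets on basic economics and basic accounting",
        ],
        "indicators": [  # illustrative only
            "number of new income-generating activities reported",
            "reported change in household cash income",
        ],
    }

    for field, values in need_profile.items():
        print(field, "->", values)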

Research Activities

Dissemination of Information

First, in the planned sequence of major events involved in this study, the dissemination of information will entail establishment of the Community Information Centres, using participative and dialogue approaches, including consultations with the respective communities and formation of a management committee at the grassroots level.

Second, it will involve negotiations with several stakeholders, principally the national libraries of the respective countries, village extension workers, teachers, and nurses, to brief them on the project and solicit their cooperation. Part of the collection of each community information centre (CIC) will be borrowed from the respective public libraries, through their book box services or village reading rooms, to act as a start-up resource for the centre while a dedicated collection is being assembled.

Third, information delivery strategies will be set up, including a monthly news-sheet, lending of books and booklets, provision of newspapers and farm magazines, provision of audiovisual information through audio and video cassettes, and formation of discussion groups.

Baseline Data Collection

A vast amount of background data was collected during phase one of this study; time has passed, however, and some changes have occurred. Updated data will thus be collected on the key areas identified as important for information provision to take place.

Collection of Routine Data/Statistics on CIC Activities

These data are aimed at indicating the level of use of activities and trends over time. When compared with the total population, for example, they will reveal use per capita, the groups making most use of certain services, etc.

Collection of Anecdotes from Members of the Community

Reactions of individuals and groups to the use of information and its effect on their work, thinking, development, etc., will be recorded on a continuous basis. Anecdotes will be collected using the sense-making and dialogue methods and will be analyzed by category and by linking them to the environment of the user, the context, the problems, and the changes (if any) that have occurred in the user's life as a result of information use. If a large enough body of anecdotes is collected, content analysis will be used to divide them into broad groups and other variables of interest to this study.
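A minimal sketch of such a grouping, using keyword matching as a stand-in for the coding scheme (the categories, keywords, and anecdotes below are all invented):

    # Keyword lists per broad group; a real coding scheme would be richer.
    categories = {
        "income":  ["loan", "price", "market", "business"],
        "health":  ["clinic", "malaria", "latrine", "nutrition"],
        "farming": ["seed", "soil", "crop", "cattle"],
    }

    anecdotes = [
        "After reading the booklet I asked the bank about a small loan.",
        "We started sleeping under nets after the malaria talk at the centre.",
    ]

    def categorize(text):
        """Return the broad groups whose keywords appear in the anecdote."""
        words = text.lower()
        hits = [cat for cat, keys in categories.items()
                if any(k in words for k in keys)]
        return hits or ["uncategorized"]

    for a in anecdotes:
        print(categorize(a), "-", a)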

Interviews

Interviews will be held after a year of information provision in the respective communities. The aim of the interviews will be to track the impact of information on cognitive structures and community development. The study will be quantitative and based on a small sample of between 20 and 30 respondents in each of the respective communities.

This small-scale study will also attempt to monitor the general impression of the CIC innovation on the community. The interviews will attempt to link variables of gender, age, and education/literacy levels and income levels to exposure, learning, behavioral change, and information use impact on individual and community development goals. Small-scale ancillary studies will be carried out during the second year to observe if new practices have been introduced in farms, health practices, etc., as a result of the information input and to monitor community reactions to the CIC services.

Main Impact Assessment

The aim of this major research activity will be to establish whether the information provided has led to development gains in the community. This will have both qualitative and quantitative aspects. An attempt will be made to trace the various information products/services offered vis-à-vis the key information needs, to find the impact such information has had on the different target groups in the community, from their own perspective, and what factors (environmental and otherwise) contributed to the outcome. This final stage will also involve holding a similar study in several communities (ideally one village in each country) that did not have exposure to the information support activities, to find out whether there are any differences between the control communities and the communities that participated in the study.
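One simple way to express such a comparison, assuming a common indicator (say, the share of households adopting a recommended practice) measured before and after in both a study community and a control community, is a difference-in-differences contrast; the technique and all figures below are our illustration, not part of the study design:

    # Invented adoption rates before and after the information input.
    study_before, study_after     = 0.12, 0.31
    control_before, control_after = 0.11, 0.16

    change_study   = study_after - study_before      # 0.19
    change_control = control_after - control_before  # 0.05
    attributable   = change_study - change_control   # difference-in-differences

    print(f"change plausibly attributable to the information input: {attributable:.0%}")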

Progress

At the time of writing, several activities related to this project have taken place. Phase one findings were reported back to the respective communities in December 1994. Negotiations with authorities in the villages aimed at starting the second phase are at an advanced stage. In all cases, the communities have received the possibility of starting a Community Information Centre with great enthusiasm.

Rural communities are not static, and since 1990 when phase one took place, many changes have taken place. In Malawi, for example, the one-party regime of Dr Banda has been replaced by a multiparty system, and there is a new spirit of hope and confidence in the two rural communities. In Tanzania, the one-party state has been forced to accept multiparty politics, and people who have lived under a monolithic party system all their lives have a choice now of more than 20 political parties.

Most find this disconcerting in the light of very limited firsthand information on what these changes entail and what these parties stand for. In Botswana, things are more stable, but the ruling party lost considerable ground in a recent election (1994), and in one of the villages I found that the village chief had passed away, his place taken by his well-educated son, who has shown great enthusiasm for this project.

The search for suitable premises from which to offer the service in the communities has shown that only two communities can offer premises (one a former tea room, the other a village reading room) to start a very basic service. One village has offered a building whose roof was blown off during a storm and estimates for repair are awaited. Three of the villages are finding it difficult to come up with suitable premises. These are some of the challenges of working in an African rural environment where basic facilities are often not yet in place. Clearly, each village may have to move at its own pace. Shortly, between now and July, we hope to have recruited information facilitators to work in each of the CICs after giving them initial training.

Conclusion

We feel reasonably confident that this research aimed at measuring the impact of information on rural development, like its predecessor (INFORD 1), will come up with interesting findings concerning the research problems described in the foregoing. Not the least in importance, perhaps, is the possibility of revisiting the whole process of designing information services, which may come up with concrete ways of designing impact-driven information systems. The twin concepts of dialogue, and sense-making, for example, hold great promise for finding and meeting the information needs of users and potential users. Although the task is far from easy, it is a challenge worth undertaking.

References

Belshaw, C. 1965. Traditional exchange and modern markets. Prentice Hall, Englewood Cliffs, NJ, USA.

Dervin, B. 1983. Information as a user construct: The relevance of perceived information needs to synthesis and interpretations. In Spencer, W.; Reed, L., ed., Knowledge structure and use: Implications for synthesis and interpretation. Temple University Press, Philadelphia, PA, USA. pp. 153–183.

Dervin, B.; Dewdney, P. 1986. Neutral questioning: A new approach to the reference interview. Reference Quarterly, 24(4), 506–513.

Dervin, B.; Nilan, M.S. 1986. Information needs and uses. Annual Review of Information Science and Technology, 21, 3–33.

Foote, D.; Kendall, C.; Spain, P.; Martorell, R. 1983. Evaluating the impact of health education systems. Paper presented at the National Council of International Health Conference, 13–15 June 1983. Washington, DC, USA.

Griffiths, J-M. 1982. The value of information and related systems, products and services. Annual Review of Information Science and Technology, 17, 269–284.

Grunig, J. 1971. Communication and economic decision-making processes of Colombian peasants. Economic Development and Cultural Change, 19(3), 580–589.

Hall, B.; Dodd, A. 1977. Voices for development: The Tanzanian radio study campaigns. In Spain, J.; McAnany, E., ed., Radio for education and development: Case studies. World Bank Staff Working Paper no. 266, 2 vols.

Lancaster, F.W. 1977. The measurement and evaluation of library services. Information Resources Press, Washington, DC, USA.

McAnany, E. G. 1978. Communication with the rural poor in the Third World: Does information make a difference? Institute for Communication Research, Stanford University, Stanford, CA, USA.

Mchombu, K.J. 1993. Information needs and seeking patterns for rural people’s development in Africa. IDRC/University of Botswana, Gaborone, BW. 185 pp.

_____1992. Information management in Africa: An uncharted terrain. FID News Bulletin, 42(9), 185–189.

Menou, M., ed. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Potter, J. 1985. Performance measures: The user view. In Harris, C.; Clifford, B., ed., Public libraries reappraisal and restructuring. Rossendale Press, London, UK.

Pradervand, P. 1980. Knowledge is power. International Development Review, 22(1).

Saracevic, T. 1980. Perceptions of the need for scientific and technical information in less developed countries. Journal of Documentation, 36(3), 214–267.

Stone, M. 1993. Assessment indicators and the impact of information on development: A keynote address. Paper presented at 1993 CAIS/ASCI Conference. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Tandon, R. 1981. Dialogue as inquiry and intervention. In Reason, P.; Rowan, J., ed., Human inquiry. John Wiley, London, UK.

Assessing the Impact of Information on Policy Formulation in the Caribbean

Audrey Chambers and Noel Boissiere1

Background

Research is a critical vehicle for gathering and analyzing factual information and it is, therefore, fundamental to policy formulation, both in the preparation of new policy proposals and in the support and evaluation of current policies. Research activities, however, tend to be dispersed throughout executing agencies in university, public and private sector organizations, nongovernmental organizations (NGOs), and international agencies. This factor, along with the absence of coordinated national research policies, has limited systematic, widespread access by decision-makers to the results of research.

In the Caribbean, formal subject-oriented information systems, developed over the past 15 years, have contributed to more systematic access to bibliographic information and quantitative data by policymakers. Links between the generators and potential users of research and formal information systems are now being strengthened.

For example, the Institute of Social and Economic Research (ISER); several faculties of the University of the West Indies; the Planning Institute of Jamaica (PIOJ); and the Ministries of Education, Health, Labour, and Welfare are collaborating with the Government of Jamaica (GOJ), Government of The Netherlands (GON), and the University of the West Indies (UWI) World Bank Social Policy Analysis component of the Reform of Secondary Education Project, to improve the research and analysis capabilities of these organizations. A Data Bank of social indicators on health, education, poverty and welfare in Jamaica has been established and maintained at ISER through this project and will provide statistical information and analytical services as well as technical assistance to Jamaican government organizations.

Since its inception in 1948, ISER has functioned as the focal point for social science research in the Caribbean Community (CARICOM) region. The Institute has demonstrated its capacity to undertake major studies on issues critical to socioeconomic development, such as fertility, urban transportation, microenterprises, and health service management. Its role as a source of technical assistance to the governments of the region is well established, rebutting the usual reputation of academic research centres as impractical.

1Director, Documentation and Data Centre, Institute of Social and Economic Research (ISER), University of the West Indies — Mona, Kingston, Jamaica, and Consultant, 8 Rooknest Trail, Agincourt, Ontario, Canada, M1S 3W2, respectively.

The partnership between ISER and the Central Banks is illustrative of these linkages: monetary policymaking in the Central Banks was partially informed by the results of their research departments and of the Regional Programme of Monetary Studies, whose staff at ISER were sponsored by the banks for over 20 years. Another example of research impact is the Family Health International/International Centre for Research on Women/UCLA/ISER study on Sexual Decision-Making in Jamaica, which provided major inputs to the National AIDS Committee's discussions on the design of the Jamaican AIDS/STD (Sexually Transmitted Diseases) campaign. A number of researchers in the Public Enterprise and Development in the Caribbean Project, 1978–82, which influenced the regulatory practices and pricing of utilities throughout the region at the time, eventually held or now hold ministerial and other key policymaking positions.

Recommendations of an ISER self-study task force (1991) underscored an agenda of policy studies and governance strongly linked to Caribbean development needs identified by governments, researchers, and other development actors. The challenge lay in identifying methods to improve flows of research results to information systems and in identifying and documenting mechanisms to transform these results and other policy-relevant data into products that may be used by decision-makers to resolve public problems. A further challenge was to design a project to identify and test indicators to monitor and evaluate the contribution of these inputs (directly accessible to the policymakers) to the policy formulation activities of the target population.

The project proposal presented to the International Development Research Centre (IDRC) was well placed to complement the Institute’s mission and the initiatives of regional and national information systems. The head of the Documentation and Data Centre currently participates in the Consultative Committee on Caribbean Regional Information Services (CCCRIS), and both ISER and the Consortium Graduate School of Social Sciences (CGSSS) are participating in the regional project (funded by IDRC), Information for Decision-Making in the Caribbean Community, which is being implemented by the major regional institutions.

To explore the issues involved in implementing the proposal, ISER invited representatives from regional and national organizations to a meeting in Jamaica (19–20 October 1993) for discussion of a proposal drafted by the Institute to address this problem. The regional consultation provided a forum to discuss the relevant concerns and expectations shared by these institutions and their constituencies, and to ascertain potential areas of cooperation. Recommendations from participants regarding the scope and methodology of the project have been incorporated into this proposal.

Project Objectives and Beneficiaries

IDRC approved the project proposal presented by ISER and the CGSSS on Assessing the Impact of Information on Policy Formulation in the Caribbean in 1994. This project will develop a strategy to support priorities of the current policy agenda in the Caribbean region by assessing and strengthening the links between research, information systems, and policy formulation through:

• Analysis of the needs and information-seeking habits of a sample of senior social policymakers from the English-speaking Caribbean;

• Utilization of multimedia for the development of a data bank, combining bibliographic and quantitative content and emphasizing the results of research, especially those emanating from the three units of ISER and the Institute of Development Studies (IDS), University of Guyana;

• Analysis of research results and the preparation, repackaging, and delivery of information services on topics and issues required by the target user group;

• Development of indicators that may be used to determine the impact of the products and services on policymaking; examples are inputs into national or sectoral plans, changes in policies at various levels, or integration into legislation; and

• Assessment of the impact of these services on the user group, and the wider community, through the application of indicators to measure benefits.

Although the obvious focus of this plan of action is Caribbean policy- and decision-makers, together with the faculty, postgraduate students, and researchers who will benefit from the specialized databases created, the indirect yet critical beneficiaries will be the members of the public who participate in the social programs/policy areas identified by the group of policymakers participating in the project.

Implementation

The start-up date of the 3-year project was October 1994. ISER, as the main implementing agency, manages and coordinates the project activities and the three supporting consultancies. The Librarian heads the project team, in consultation with the Directors of ISER and CGSSS, and with the Advisory Group, which was established at the meeting in October 1993 for this purpose.

The increased range of services and products offered and the concomitant expansion in use of the Documentation Centre required staffing beyond the single established professional post. An additional information professional and the position of Data Analyst are partly funded by the project.

Training of postgraduate students (research assistants) from the CGSSS and the government, management studies, and sociology departments has begun; they will be trained on an ongoing basis to assist in the analysis of information needs and the ongoing analysis and synthesis of information for the target group. The packages prepared will incorporate state-of-the-art reviews on issues and cross-cutting themes identified by the target group. They will also include profiles of research projects completed and in progress and summaries of analyses of statistical indicators. The students are, primarily, technical personnel from the public sector who may have experience in identifying and synthesizing information for policymakers and, therefore, are well prepared to participate in this aspect of the project.

The project has drawn on and will continuously draw on the results of the concurrent regional project, Information for Decision-Making in the Caribbean Community, particularly for data from the baseline study on information needs and information-seeking behaviour, the pricing of information products and services, and the development of system guidelines regarding performance criteria and evaluation of the effectiveness of information products.

The CARICOM project, through its survey (administered through personal interviews) of 100 policy- and decision-makers throughout the region, provided background data on the information use environment and reaffirmation of the need for services and products that the ISER/CGSSS project intends to deliver.

Target User Group

The target group of policymakers who will receive service throughout the life of the project is drawn from senior personnel responsible for socioeconomic policymaking. The services will be associated with the positions rather than the individuals. A nucleus of about 50 persons from institutions across the region will be chosen as the focus of study on the analysis of the impact of the use of information on the policy formulation process. The sample will permit generalization to the experience of a total of nearly 400 senior policymakers in the English-speaking Caribbean. This group will include senior and junior ministers, permanent secretaries, and other senior personnel in the areas of economic planning, health, education, labour, social welfare, and social security.

The methodology for identifying and selecting the target user group has been elaborated by Noel Boissiere, an economist and management consultant. The Boissiere report, “A Methodology for Selecting a Sample Target Group for Information Services in the Caribbean Community,” summarizes the issues of identifying a sample group as defining the “target population” from within all 14 countries of the English-speaking Caribbean, selecting from among these the “sample population” of senior policymakers in selected countries, and choosing the sample size appropriate for this study.

A stratified sample was used, as recommended by Boissiere, because of three key considerations:

• The interest in detecting any unique country differences in the impact of the services and in distinguishing the results by country,

• The desirability of making sure that certain key institutions and policymakers are included in the sample, and

• Considerations of the nature of the target population, the likelihood of response, and the willingness to participate for individuals chosen on a random basis.

At the first level of stratification — geographic — the region is divided into five country subgroups: Jamaica, Trinidad and Tobago, Barbados, Guyana, and Dominica. The second level of stratification addresses the sample population as the nucleus formed by the target user group selected from the following institutional units:

Direct Policymakers

• Senior government policymakers, that is, the minister, the permanent secretary, the special advisor, the senior economist or senior technocrat in all ministries (with special emphasis on the Ministry of Finance);

• Governors of the Central Banks, deputy governors, the directors of research;

• Heads and deputy heads of semiautonomous institutions, such as the National Planning Institute, the Industrial Development Corporations;

• Heads and deputy heads of major regional organizations such as the CARICOM Secretariat, Organization of Eastern Caribbean States (OECS), Caribbean Development Bank, Eastern Caribbean States Export Development Agency (ECSEDA);

• The University of the West Indies and the University of Guyana;

• Private consultants;

• Persons who serve in an advisory capacity to the prime minister; and

• Leader (or representative) of the opposition party in parliament.

The final three subgroups were included at the suggestion of the regional workshop group.

Indirect Policymakers:

Private Sector Organizations

• The media

• Religious groups

• Trade unions and NGOs

• Chambers of commerce

The final level of stratification is sample size. Determination of the numbers included in the table reflects the choice of the sample based on the geographical and institutional subgroups, knowledge of the region, and informed judgment. The method of selection has the advantage of flexibility and allows for changes in the sample size and for updating of the sample as conditions suggest.
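
As a purely illustrative sketch, the judgment-based allocation across the geographic and institutional strata can be recorded and checked programmatically. All counts below are hypothetical placeholders, not figures from the Boissiere report; the point is only that the two-level stratification reduces to a simple tally that must sum to the nucleus of about 50 persons.

    from collections import defaultdict

    # Hypothetical allocation of the ~50-person nucleus across the two
    # stratification levels (country subgroup x institutional stratum).
    # All counts are placeholders; the real numbers rest on knowledge
    # of the region and informed judgment, as described above.
    allocation = {
        ("Jamaica", "direct"): 9, ("Jamaica", "indirect"): 3,
        ("Trinidad and Tobago", "direct"): 8, ("Trinidad and Tobago", "indirect"): 3,
        ("Barbados", "direct"): 6, ("Barbados", "indirect"): 2,
        ("Guyana", "direct"): 6, ("Guyana", "indirect"): 2,
        ("Dominica", "direct"): 4, ("Dominica", "indirect"): 2,
        ("Regional organizations", "direct"): 5,
    }

    by_country = defaultdict(int)
    for (country, _stratum), n in allocation.items():
        by_country[country] += n

    print("Total nucleus size:", sum(allocation.values()))  # about 50
    for country, n in sorted(by_country.items()):
        print(f"{country:24s}{n:3d}")

Because the sample is held as explicit (stratum, count) pairs rather than a fixed quota formula, the flexibility described above is preserved: counts can be revised, and the totals rechecked, as conditions suggest.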

User Needs

Once the total target group has been selected, the survey of participants (in progress) will be completed both to record the articulated needs of the target group and to permit the assessment and interpretation of the findings. The survey instrument to be used is an adaptation of the questionnaire designed within the project Information for Decision-Making in the Caribbean Community and exemplifies the cooperation alluded to earlier through testing the tools developed by the peer project.

Information Base

Depending on the topics identified as critical by the user group and the project team, the Documentation and Data Centre will develop guidelines for the analysis, distillation, synthesis, and distribution of data from the textual and statistical information sources. Information on priority topics will be culled, encapsulated, and repackaged for delivery to the relevant users within and outside the target group. The information base that will support the services will be composed of the bibliographic database of the holdings of the ISER Documentation and Data Centre, the quantitative databases held in the Data Bank, the research in progress files of university, private and public sector resources, and the database containing the analyses of the research results.

A consultant will be contracted to implement a program for sensitizing the members of the target group, and other policymakers, to the value and cost of research results and other information in the development process. National and regional meetings of policymakers provide a ready-made forum for such briefings. The consultant will also design a public awareness program that will be promulgated through the media.

Evaluation

As constant feedback and evaluation are required from the participating members of the target group from the very start of this project, a consultant was engaged late in 1994 to review system activities. This evaluation is aimed at demonstrating to policymakers, as users and as planners responsible for allocating resources, the role of information systems in this area.

This review will involve participation in the users’ meetings and evaluation of:

• The quality of the products and services provided to the sample of the target group;

• The effectiveness of the information in relation to the users’ earlier and ongoing determinations of value derived, and

• Utilization of the research results in the policy formulation process.

The review will also recommend options and directions for future development.

The consultant will be responsible also for collecting data for the baseline study on the information use environment, developing consensus on externalities affecting the latter, and analyzing the survey findings in relation to specific benefits expected by users.

Telecommunications

Communication channels within the project among the sample of policymakers and the project team will be electronic data exchange and the ISER Policy Newsletter. Participants will be encouraged to adopt electronic mail as the primary instrument for requesting document transfer and for transmitting evaluations. The ECLAC/AMBIONET (Economic Commission for Latin America and the Caribbean) Information Exchange System, located in Trinidad and Tobago, will provide the backbone for data communication. To ensure speedy uptake of the messaging technology by the sample group, the project will provide the modems required for electronic data exchange.

There is a strong perception of a lack of responsiveness on the part of the telecommunications authorities to the computer-based communications innovations now available to the Caribbean. The services will therefore also be used to demonstrate the role and value of this type of facility in linking the university and its research results with implementing agencies. If their experience within the project is deemed successful, policymakers participating in this pilot study will have a clearer understanding of the value of data communications and will be expected to influence the direction of national and regional data communications policies.

Regional Workshop

The initial meeting, of a small number of participants representative of the target group, the consultants on evaluation and the development of indicators, and the coordinators of the CARICOM Decision-Making Project and the ECLAC/AMBIONET system, was convened in January 1995. The purpose was to elucidate the project objectives and modus operandi, and to offer a technical briefing, training of the target group in the use of the ECLAC/AMBIONET electronic messaging system, and a brief introduction to the Internet. In particular, participants provided critical feedback on the indicators under development as well as on the content and format of the sample presentations of research results prepared for the meeting.

Development of Indicators

Methodology/Object of Assessment

The consultant’s seminal paper on potential indicators to measure the impact of the information provided to the project’s constituency — policymakers in the Caribbean region — provided the starting point for the in-depth discussion at the regional workshop. A synthesis of the paper and the recommendations is presented in this section.

The object of the current assessment process is the information services provided by ISER. “Information services” covers a wide range of services and, for practical purposes, specification of the services under assessment is necessary.

One aspect of the IDRC program concerns information technologies and matching the technology with the information. The current project will be using almost exclusively the electronic mail system and electronic data exchange, and the assessment will focus on the following specific services:

• Flows of research results to information systems and the production of information packages tailor-made for assimilation by policy makers;

• Selective dissemination of information (SDI) service: facilitated access to information through the interface of databases with electronic information exchange systems; and

• Reference information service and online database searching service.

These services are being assessed from the perspective of the users/beneficiaries and policymakers of the region.

Impact Assessment Indicators

Indicators are being used to determine the degree to which a project or activity succeeds or fails in meeting stated general needs and objectives, in using resources efficiently, and in achieving expected results. Furthermore, it is acknowledged that assessment of the impact of information cannot be a one-time exercise. On the contrary, it is based on the following principles:

• The assessment process is a user-driven, ongoing process;

• Not all indicators will apply in any given situation; and

• The target audiences (those who will use the results of the assessment) are clearly identified.

Three categories of potential audiences are considered: decision-makers and policymakers, information managers and information-system users, and funding agencies.

Drawing on the deliberations of the computer conference and discussions at the postconference workshop, three types of indicators were proposed as practical for the project Information for Decision-Making in the Caribbean Community:

• Performance indicators: “relating inputs to outputs”

• Effectiveness indicators: “relating outputs to usage”

• Impact indicators: “relating usage to outcomes and domain characteristics” (Menou 1993, p. 97).

This already represents a reduction from the five types of assessment indicators originally discussed (Menou 1993, p. 97). When the focus is directed to those indicators that practitioners judge capable of being put into operation, a further pruning is suggested. The main reason for the reduction is the difficulty of data collection and the time constraints involved in implementing measurement where cost indicators are concerned. Furthermore, the information added by distinguishing between performance indicators and impact indicators appears too small to warrant separate treatment. Consequently, the assessment indicators used in the current case are compressed into a combination of performance/impact indicators. Essentially, these are indicators derived as a consequence of use of the information services; they relate usage (input factors) to outcomes (output benefits).

The seven impact indicators listed below relate to the consequences of use of the information services provided. Some are by nature also the benefits derived from access to and use of the information services. Although they are all intended to serve as measurements, some are quantifiable and others are nonquantifiable or qualitative. It is recognized that the priority given to each measure varies with the user; however, discussions at the meeting led to the following presentation of indicators in order of importance:

Outcomes/Benefits

• User satisfaction. This incorporates the concept of the degree of satisfaction relative to the investment of effort and money to acquire the information. This is initially a qualitative measure, and anecdotal evidence can be used in its determination. At the empirical level, satisfaction can be determined by Needs Met by the service. The value of an information service clearly lies in its ability to fulfill a specific need. Needs Met can be further subdivided in various ways, e.g., into (a) short-term or immediate need for specific information, and (b) medium- and long-term needs for more general information. Although not a measure of benefit, needs not met by the service could also be identified at this point.

Status Measures

• Use per capita (frequency of use of the services) in the Target User Group.

• Number of users in the wider “target population”. This measure reflects both access to the technology for using the service as well as its actual use.

Other Measures

• Time saved (for the user) by using the information service provided. Considerable debate arose around the issue of whether time for the busy policy maker was indeed saved in view of the large volume of information made accessible.

In addition, the time spent learning how to search was also a factor. It was pointed out that:

(a) Searching could be made manageable through the literature review,

(b) Packaging and dissemination of research findings would indeed save time, and

(c) User training and experience in the techniques of information searches would help to reduce the time used.

• Improved analysis and decision-making in terms of quality, coverage, and timeliness of the material informing the decision-making process.

• Improvement in preparedness, skills, and effectiveness in negotiations.

• Access to information and ideas through contact with colleagues and others who have worked or may be working on the same issues in distant places.

Qualitative Characteristics of Information Provided

The nature and quality of the information provided would have a bearing on user satisfaction. Timeliness, reliability, and relevance are basic desirable characteristics. In the context of the Caribbean, where there is both the expression of concern with lack of data and with the inadequate use of information that is provided or can be provided, it is all the more important to reflect on the nature of information needed by the policymakers.

In the current project, needs surveys have been conducted and the results will be incorporated in plans for the future. The surveys no doubt capture the need for information in the realm of ideas and alternative lines of action in addition to a need for facts and direct data; that is, the urgent need in many developing countries for what has been called “coping information,” which can be interpreted to mean information to assist policymakers in coping with the myriad of problems and decisions faced.

Measurement of Impact

Surveys should be used to gather measurement data and as the method for identifying and assessing the links between the provision of the new information services and impact on policy formulation. In the first instance, surveys can be used to determine the extent of use of the services and the consequences of that use in terms of informing policy formulation. In these measurement surveys, emphasis would be placed on simplicity of structure in data collection and on the capacity for straightforward interpretation of the ratios or indices generated to measure impact. A critical factor to be incorporated in the measurement process is that measurements must be designed not only to record the status at a given time but also to record change over the period of the project, to analyze feedback, improvements suggested and executed, or lack of change.

An example of the proposed measurement framework is outlined in Table 1, which illustrates the basic parameters that must be incorporated. Expansions and greater detail are envisaged in the actual survey work.

The matrix in Table 1 is based on the format devised in the postcomputer conference workshop — Preliminary Framework for Impact Assessment (Menou 1993, pp. 101–102) — in which input factors are linked with output benefits to produce indices (indicators) of impact. These indices must then be analyzed and interpreted as indicative of strong or weak impact of the information services in informing policy formulation.

Hierarchy and Weighting

The purpose of the survey is to determine how the users rate their experience in terms of benefits derived, that is, benefits related directly or indirectly to policy formulation. A good experience in terms of benefits derived in the areas specified (relative to benefits expected) could be interpreted as indicative of positive impact of the service in informing policymakers.

The survey obtains the user’s subjective responses, based on individual experience or impressions, and puts them into an objective framework of predetermined weights and measures of importance for judging impact. The proposed impact indicators are listed according to the hierarchy of significance assigned at the meeting of policymakers of January 1995. A further subdivision distinguishes between Impact Indicators, derived as a consequence of use of a service, and Effectiveness Indicators, measuring the status of use at any particular time.

Table 1. Creating index numbers to measure impact indicators: illustration of the impact of selected information services in policy formulation.
A. Recording of acknowledgment of a benefit derived from using a service.

Image

B. Recordings of ratings of the user, on a scale of 1–5.

Image

C. Indices generated: Ratings x Weights.

Image

Although User Satisfaction was already assigned top position in the hierarchy listing, a further numerical weighting is used to indicate the degree of importance assigned to this element relative to the other indicators when determining impact. A good grade scored for User Satisfaction and Needs Met is thereby deemed to carry much greater weight than a good grade scored in Time Saved when assessing impact of this service on policy formulation. The following is an example of suggested weights that may be used to show the importance given to each indicator:

                                 Hierarchy         Weight
                                 rank assigned     (degree of importance)

Impact Indicators
  User Satisfaction                  #1                25
  Needs Met                                            25
  Time Saved                         #4                 5
  Improved Analyses                  #5                15
  Negotiations Preparedness          #6                10
  Access to Others                   #7                 5

Effectiveness Indicators
  Frequency of Use                   #2                15
  Number of Users                    #3                10
  Ease of Use                                           5

Table 1 illustrates the steps by which a matrix could be completed to yield indices as measurements of various impact indicators, that is, measures of specified categories of benefits (outputs) derived by Target Group Users as a consequence of using the specific information services/products provided (inputs). For any given time period, the user will:

(a) Be asked to state whether any benefit was received from using the specific service within the categories specified. An “x” is placed in the appropriate cell to indicate a quantifiable (Qn) benefit or a qualitative (Ql) benefit if applicable;

(b) Rate his/her experience in using the service by evaluating the strength of the benefits received on a simple scale. The scale suggested is: 1 - weak, 2 - below average, 3 - average, 4 - above average, and 5 - strong.

The rating of the experience of using the service is then multiplied by the weight assigned to the particular indicator, producing an index number that can be used for comparisons over time and across users, types of service, etc., to analyze impact. Indices can be computed for all seven indicators or for more detailed subgroups of each indicator, as considered appropriate.
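
Because the computation is a single multiplication per indicator, it can be written down compactly. The following Python sketch is ours, for illustration only (the project prescribes no particular software); it encodes the suggested weights from the table above and reproduces the arithmetic of Examples 1 and 2, which follow:

    # Suggested weights from the hierarchy table above; these are
    # provisional and may be adjusted through ongoing contact with
    # the Target User Group.
    WEIGHTS = {
        "User Satisfaction": 25,
        "Needs Met": 25,
        "Time Saved": 5,
        "Improved Analyses": 15,
        "Negotiations Preparedness": 10,
        "Access to Others": 5,
        "Frequency of Use": 15,
        "Number of Users": 10,
        "Ease of Use": 5,
    }

    def impact_index(indicator: str, rating: int) -> int:
        """Index number = user's rating (on the 1-5 scale) x assigned weight."""
        if not 1 <= rating <= 5:
            raise ValueError("rating must be on the 1-5 scale")
        return rating * WEIGHTS[indicator]

    # Example 1: service "Packaged Research Results"
    print(impact_index("User Satisfaction", 4))          # 4 x 25 = 100
    print(impact_index("Time Saved", 3))                 # 3 x  5 =  15

    # Example 2: service "Access to Databases"
    print(impact_index("Time Saved", 5))                 # 5 x  5 =  25
    print(impact_index("Negotiations Preparedness", 5))  # 5 x 10 =  50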

Example 1:

The information service being assessed is “Packaged Research Results.” A user of the service may rate his/her experience in the following manner: User Satisfaction/Needs Met (an output benefit or impact indicator) is given a rating of 4; Time Saved is given a rating of 3. The resulting indices on this occasion are 100 and 15:

                    Rating         Weight         Index
Satisfaction          4      x       25      =      100
Time Saved            3      x        5      =       15

Example 2:

A specific delivery of services may be considered very beneficial against needs expressed. For the service Access to Databases, Time Saved is given a rating of 5, and Negotiations Preparedness is also given a rating of 5. The indices calculated are 25 and 50:

                              Rating         Weight         Index
Time Saved                      5      x        5      =       25
Negotiations Preparedness       5      x       10      =       50

Initially, relative index numbers derived would be interpreted as indicative of a strong or weak impact in analyzing progress or changes over the period, by user, institution, country, etc. In analyzing the significance of these indices, the evaluator must then interpret the degree to which the indices translate into a statement that the service has made a difference to the policymaking resources and capability of the user.

Indicators may be further subdivided or other distinctions made. For example, specific Needs Met by the service Research Results could be distinguished and may be rated separately. A distinction can usefully be made between:

• Direct benefits, or immediate, short-term benefits, e.g., did the use of the information service solve or assist in solving the particular problem for which the information was sought?

• Indirect benefits or medium- and long-term benefits, e.g., general or specific enlightenment, attitudinal changes that inform policymaking.

Table 1 is given for illustrative purposes only, and other details may be added where considered necessary. For instance, the status indicators of Number of Users and Frequency of Use may be further extended to reflect the kind of use to which the information service is put, for example, for cabinet briefings, as opposed to individual interest and personal development, in which case the link with policy formulation would be more tenuous. The weights assigned to the indicators are suggested but by no means carved in stone. Ongoing contact with the Target User Group is likely to lead to adjustments of the relative weights to derive more appropriate measurements. The essential first step is to register the existence of a particular benefit from the service provided.

Although not all users may agree with the weighting assigned to each indicator, the same weights will be maintained for all members of the Target Group in any one round of the survey for comparison and analysis. Surveys related to indicator measurement would be carried out periodically (6 months being the proposed interval); four or five such surveys are expected to be conducted during the course of the project. It was agreed that questions related to the implementation of the measurement procedures would be incorporated in the ongoing survey work, which will be administered by the consultant for evaluation.

Conclusion

The preceding section on methodology hints at the expected challenges and uncertainties, which will inevitably persist as the project progresses. A plethora of questions and issues to be addressed has arisen from the interaction during both the pre-planning meeting and the regional workshop, including:

• Evaluation instruments will accompany the responses to specific inquiries by members of the target group, for immediate assessment by the recipient. In the majority of cases, however, the evaluation questionnaire will be forwarded a prescribed period of several weeks after the delivery of an information package. The latter situation presents distinct problems in pinpointing the use of the contents of the package, which by then will have been absorbed into the individual’s knowledge base.

• Positions, not individuals, will be the focus of the service; yet, political and administrative shifts will require accommodation within the project of office-holders who may be “disengaged.” Furthermore, technocrats expressed their concern that the service should be directed to both a policymaker and a technocrat from the former’s office to maximize effective use of the disseminated material.

• Some members of the target group urged that efforts be made to identify benefits lost if a participant is unaware that critical information is available through local systems, for example, the expense (procurement costs and time expended) in sourcing data from outside the region.

• The preferred method of delivery of information is through electronic channels, which offer rapid, cost-efficient access. The potential resistance to largely unfamiliar communications technology, however, remains to be addressed.

References

Boissiere, N. 1994. A methodology for selecting a sample target group for information services in the Caribbean Community. Institute of Social and Economic Research (ISER), Kingston, Jamaica.

Menou, M.J. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada.


Information for Decision-Making1 in the Caribbean Community

Carol Collins2

Introduction

The project “Information for Decision-Making in the Caribbean Community,” funded by the International Development Research Centre (IDRC), has been fully described in Menou (1993) in the context of its providing the scope for applying and testing the information impact assessment methodology that has been developed. Because that information is readily available and the publication is central to the deliberations here, only the main purpose of the project will be mentioned.

In brief, the main purpose is to improve the effectiveness and sustainability of regional information systems and networks in the Caribbean, and to develop further the framework within which the services of these regional information systems can be efficiently and effectively utilized by end users who are directly engaged in developmental activities in the region. The implementation of this project depends upon the successful completion of a series of eight subprojects, managed by seven regional institutions. The segments are largely interdependent so that, in many cases, outputs of one become inputs of another. Coordination and timely delivery of outputs are, therefore, very important to the success of the overall project.

To achieve the main purpose, the series of activities being undertaken include:

• The evaluation of existing regional information systems, services, and products by the information managers;

• The evaluation of existing regional information services and products by the end users in terms of their satisfaction with, and benefits derived from, the information offered to meet their needs, as well as such attributes as ease of use, modern methods and technologies, improved delivery mechanisms, and ability of information managers to meet the required standard of service delivery;

• The assessment of the cost-effectiveness of the services and products offered; and

• The assessment of the usefulness/value of the information systems as a developmental tool.

1Decision-making in the context of this project is not confined to the activities of decision-makers and policymakers in the public sector but relates to the process of making decisions in different situations within the private and public sectors.

2Director, Information and Communication, Caribbean Community Secretariat, Bank of Guyana Bldg, PO Box 10827, Georgetown, Guyana.

This project, therefore, lends itself, in almost all its aspects, to being a research project that tests the assumptions, made on theoretical precepts, about how to measure the impact of information on aspects of development, as detailed in the Menou publication (Menou 1993). The basic building blocks of the framework are:

• The identification and description of the users,

• An understanding of the development problems and an appreciation of the expected benefits,

• Cost-benefit analysis of the service being provided and obtained through measuring the inputs and outputs of the information system, and

• Communicating the findings to the right target audiences.

These were judged to be relevant and to fit when a preliminary analysis was done in Jamaica at a workshop3 to see how the generic method of assessment could be applied to the CARICOM (Caribbean Community) project. Although the project has several subsets, for the purposes of these discussions, only the experience gathered from subproject I: “User Study and the Provision of Enhanced Services from Regional Information Networks” and subproject III: “Measurement of Performance, Efficiency, and Value of Information Networks, Services, and Products” will form the basis of this presentation.

The approach adopted in this case study is to match the activities undertaken so far against the assumptions, to the extent that is possible, and to identify those assumptions that seem to need modification. Reports that form part of the project outputs have been very useful in this exercise:

(a) “Evaluation of Some Information Service Providers in the Caribbean” (Tindegarukayo 1994).

(b) “Report on the Analysis of Questionnaire Responses from the Survey of Decision-Makers in Various Caribbean Institutions and Companies” (Griffiths 1995).

(c) “Designing Impact Assessment and Performance Measures for SDI Service,” from the Workshop on Impact Assessment and Performance Measurement (Kingston, Jamaica, 1994), as well as some observations in respect of delivering an SDI service under the project.

(d) “Designing Impact Assessment and Performance Measures for Selected CARICOM Services/AMBIONET” (an electronic information exchange system), from the Workshop on Impact Assessment and Performance Measurement (Kingston, Jamaica, 1994).

(e) “CARTIS (Caribbean Trade Information System) Customer Satisfaction Survey: Business Opportunity” (see the Appendix for a sample questionnaire).

(f) Interim Technical Report for the period 1 June 1993 to 31 May 1994.

3Workshop on Impact Assessment and Performance Measurement, Kingston, Jamaica, 9–11 January 1994.

The experience in implementing the CARICOM project will be looked at in terms of:

• Variance between project objective and measurement realities,

• Geographic spread and communication realities,

• Multisectoral systems and services delivery, and

• Other variances.

Variance Between Project Objective and Measurement Realities

The CARICOM project is somewhat complex because it involves a range of constraints that affect the overall purpose of the project. It addresses such issues as improving services, developing and improving products, improving skills levels, computer communications and connectivity, knowledge of users and their information needs, and delivery mechanisms and improving marketing of products and services. In dealing with the range of issues, it might not always be feasible to match activities within the context of the agreed assessment framework and, therefore, it is expected that there will be some modification in implementation.

First, for example, in treating users as an integral part of the assessment framework to improve the regional information systems, it is necessary for the identified users to work with the information systems in effecting the improvements. Notwithstanding the prior arrangement made with each of the 100 “users,” whereby they agreed to be part of the research process as well as its beneficiaries, the needs of the community of users cannot be expected to take second place to the research requirements. For example, the choice of policymakers and chief executive officers (CEOs) as a target group, selected because of their expected significant contribution to the research project and to the process of providing support for the information systems, needs some modification to be effective.

In general, the group is too busy to participate directly in the process and, in many instances, it is an assistant who maintains contact with the project and makes the request of the system. These assistants must then become part of the user group to provide feedback.

There are also great expectations for improved services from the existing systems, and perhaps insufficient understanding of systems that are streamlining their services. Some public relations initiative will be necessary to offset any negative perceptions of the ability of the systems to provide services. Whether or not this will be reflected in the ongoing measurement of the effectiveness of the service remains to be seen.

Second, the user community does not form one homogeneous group of decision-makers but three: policy-makers and CEOs, professional/technical groups, and small-scale business managers. The subject coverage is also somewhat wide for the existing regional systems to handle on their own, and so new arrangements are being made.

Geographic Spread and Communication Realities

The geographic spread (from Belize to Guyana) and the lack of an intraregional communication network that affects the delivery of services are important factors and challenges in the implementation process. One hundred persons from across the Caribbean Community in the category “decision-maker” were chosen from among policymakers and strategic managers (49), the professional/technical group (34), and small-scale business managers (14, plus 3 special). Thirty-nine of the 49 in the first group have difficulty in obtaining the information they need, notwithstanding that most of them (68%) have easy access to information centres, either within their institutions or locally.

The project, coordinated from the CARICOM Secretariat in Guyana (where there are difficult intraregional telecommunication links, although the extraregional links are good) is seen de facto to have taken on the responsibility either of providing an improved information service from a centre other than those that traditionally serve the needs of the individual users, or of offering direct assistance in a short time to the information centres providing the service. Interestingly, that remoteness does affect the perception of whether or not an information system can supply the information. This perception changes if the centre is fully computerized and has online access facilities.

Because of these two factors, an arrangement has been made where information centres and national focal points of the regional information systems have been co-opted to provide a direct information service to the users and to keep constantly in touch to see that their needs are met. For the project, therefore, meeting the challenge of triggering the use of the services being offered and organizing for the response in a referral mode have become central issues.

Multisectoral Systems and Services Delivery

The regional sectoral information systems cover planning, export trade, energy, policy research, technology transfer, and debt management and are, therefore, multisectoral. The challenge comes from the multidisciplinary information demands of the users, where the information services provided by the existing networks cannot always match the requirements. The information provision base has been widened to provide services in the 23 basic categories of information identified in the survey of user needs. This has necessitated the development of ways of bringing the small, specialized databases at the national level into the response mechanism. To this end, an information resources map of the region, RESMAP, which has been developed as an output of the project, is being refined.

Service Delivery

Service delivery is an important dimension of the project, and user characteristics must be taken into account. Policy-makers (ministers of government and CEOs of institutions and organizations) operate in specific contexts and have distinct information needs, which change according to the problem. Furthermore, the extent to which “intelligence” and knowledge, as distinct from information, influence decision-making should be considered.

The workshop group in Kingston suggested that service should be focused on providing “intelligence” to users and that the service would provide information culled from references, full texts of articles published, analyses and reports, and information gleaned from discussions with experts and consultants in the field, in hard copy and electronic format. This suggestion accords with the feeling of some CEOs, who have said that, on the matter of intelligence, the product of the information manager would need to be verified by a peer group. That opinion reflects the need to engender a positive and closer working relationship between the user and the information provider. Such a relationship would also go toward eliminating that constant peer-verification step and, indeed, meeting the need expressed by some strategic managers for executive summaries in addition to the full text of the work.

An indicator of the degree to which the intelligence service has been delivered might well be the quantity and quality of information read or absorbed in a week, examined over a set period of time, with the content then analyzed against the stated information requirement. It is also worth taking into account that the need for instant information, executive summaries, and analyses, and the ability to obtain the opinions of peers, point to the importance of having e-mail systems and a high level of computerization.

Certainly, in the Caribbean, the extended use of AMBIONET (the major bulletin board and information system for the Caribbean, established and operated by UNECLAC/the United Nations Economic Commission for Latin America and the Caribbean), would greatly enhance the level of satisfaction of policymakers with regard to information service provision. It would be necessary as a measure to ascertain the level of competence in using computerized systems and to determine if there is reluctance to use the system.

Policymaking and decision-making at the government level in the Caribbean is highly politicized. It would be instructive to know to what extent information is influential in a process where expediency is so often the deciding factor.

There has been very good response from the small-scale business sector in identifying their needs and in seeking information. The improved products of the Caribbean Energy Information System (CEIS) and the CARTIS have been noticed and appreciated. For CARTIS, the impact is being measured in terms of new business opportunities and orders that are currently being quantified (Appendix).

Other Variances

Although it is important to look directly at the question of impact assessment, there is a time factor involved. It is important to minimize the deficiencies of information systems and resources before measuring the impact, especially if in matching the user needs against resource and service availability, there is a clear indication of where the improvements are needed and how these would affect the service. Failing that, the result of the impact assessment might be skewed. It becomes important, therefore, to refer to the indicators and instruments for measuring the performance of the systems and services that have been developed and to examine the findings based on these indicators.

In the Caribbean, the attempt to go beyond the testing on a small scale, of the evaluation and performance instruments that were developed, and to apply them across the board to all systems, met with some difficulty (Tindegarukayo 1994). There are several reasons for this.

In the first place, the sheer volume of work prevented the information manager from responding to the questionnaire in a timely fashion. Second, some services had recently completed the assessment of their users and were understandably reluctant to repeat the exercise.

Third, without established Management Information Systems (MIS) and Information Analysis Centres, and given the recognized deficiencies in certain services arising from a lack of resources (for example, in undertaking online searches of databases, marketing the service, or providing advice regarding recent trends in the development of a particular sector), there would be some reluctance to apply the instruments easily.

This could well be wrongly interpreted as being a personal deficiency. It could also be that there is neither the demand nor the tradition of having services evaluated, except for projects. In any case, there is a need to pay attention to the constraints introduced by the information manager’s attitude before assessing the impact of information.

The same kind of observation can be made about the resources of the information centre. If the financial situation does not allow sufficient upgrading for the required services or the introduction of efficiencies into the level of operation, then the survey should be carried out over an expanded period, extending until and beyond the completion of the improvements in service delivery.

Tindegarukayo has also pointed to the need for the training of users in computerized systems. There tends to be too great a reliance on the information specialist to undertake searches personally, even of the local databases, rather than allowing users the independence to carry out their own searches. Although this is understandable because of the expensive computing equipment, it detracts from the flexibility allowed the user.

In relation to attitudes that might affect measuring impact, the origin of the questionnaire (i.e., whether it is internally or externally stimulated), the intended beneficiary’s perception of the quality of service expected, and the ability of the information system to supply and deliver all become important; otherwise, there might not be repeat use of the service, thus eliminating one possible indicator of satisfaction.

Conclusion

In highlighting those areas that relate specifically to measuring the impact of information, the impression might have been given that there are not many success stories in this project; this is not so. The instruments of measurement that have been developed and tested, the computerized user profiles and SDI service, the analytical reports, the introduction of new approaches to solve potential problems, such as RESMAP, and the incorporation of a larger number of information systems managers in this process all speak to the progress of this project. There is also ongoing work in other areas of the project, such as information product development and research into ownership of databases.

All the evidence so far shows that the assessment framework and methods are relevant to the CARICOM project and can be applied:

• There is a need to improve the services and operations of the systems while attempting to measure the impact of information on development;

• There is a need to improve the skills levels and knowledge base of information managers while seeking to measure the impact of information;

• For decision-makers, there is a need to expand service to provide intelligence, and a corresponding need to develop indicators for measuring this service;

• Where service is going to be individualized, the information resource base must be widened; and

• Because of some of the existing constraints, there is a need to carry out the impact assessment over time.

References

Griffiths, J.-M. 1995. Report on the analysis of questionnaire responses from the survey of decision-makers in various Caribbean institutions and companies. Submitted to the Caribbean Community Secretariat, Georgetown, Guyana. (Mimeo)

Menou, M. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Tindegarukayo, J. 1994. Evaluation of some information service providers in the Caribbean. Submitted to the Caribbean Community Secretariat, Georgetown, Guyana. (Mimeo)

Image


Impact Research Studies


Using LISREL to Measure the Impact of Information on Development: London Site Pilot Study

J. Tague-Sutcliffe, L. Vaughan, and C. Sylvain1

Introduction

This paper reports the result of a pilot study conducted at the University of Western Ontario in London, Ontario, Canada. The pilot study is part of a major project called “Measuring the Impact of Information on Development: A Path Analysis Approach” (Tague-Sutcliffe et al. 1994) funded by the International Development Research Centre (IDRC). The principal goal of this project is to perform an exploratory study of the feasibility of quantitatively measuring the impact of information on development. It follows a recent initiative by the Information Sciences and Systems Division of IDRC that aims to develop a “… set of tangible criteria by which the relevance or impact of information on development can be measured” (Stone 1993, p. 53).

Although the role of information programs and services in development is undeniable, there is little hard evidence linking information investments and levels of information activity to specific socioeconomic impacts and development objectives. The IDRC program seeks to fill this gap through the development of models of information impact, based on valid qualitative and quantitative indicators.

Following a series of workshops and discussions, a general framework for impact assessment, based on Griffiths and King (1993), was developed in a monograph (Menou 1993). The preliminary results reported here make more concrete some of the directions proposed in this book.

The general objective of the project is to develop and test a mathematical model that will indicate, for each of a set of input variables, their relative importance in accounting for the variation in a set of output or impact variables. As well, the model may incorporate latent variables, not directly observable, which mediate between the input and output variables. The main study is to investigate the impact of information on small businesses in Shanghai (China).

1Graduate School of Library and Information Science, University of Western Ontario, London, Ontario, Canada N6G 1H1.

This sector of the economy was chosen because, as in most Western economies, it is currently booming. We have gained support and cooperation from the Institute of Scientific and Technical Information of Shanghai, where the actual data collection and analysis will be carried out in the Fall of 1995. To gain experience with the methodology before using it in China, we conducted a pilot study in London, Ontario.

The London Project

The small business sector has become key to the Canadian economy. Taken together, small businesses now account for about 40% of Canada’s gross domestic product (GDP) and employ more than half of the private sector’s total workforce. Furthermore, they are responsible for a quarter of all sales, a third of all profits, and a fifth of all assets (Small Business Working Committee 1995). In this initial phase of the study, the impact of information on small businesses in the London area was investigated.

The input variables we are interested in relate primarily to different sources of information used by those who are operating a small business. We need to identify, however, the other major factors that contribute to the variation in the impact (output) variables.

The output variables describe the success of the business, or its contribution to the local economy. The literatures of business, management, and information science contain few empirical studies specifically devoted to information factors in the success of small businesses. A number of studies have found that factors such as innovation, know-how, creativity, and management competence play an important role in the success of small businesses (Chaganti 1983). Other influences on business success include firm size, corporate status, and industrial sector (Tiggles and Green 1994).

In a survey of small manufacturing firms in Canada, Chaganti (1983) concluded that factors such as identifying a market niche and balancing quality with cost were also key factors for success. Clearly, many of these factors carry an information overtone. In fact, improving access to information, particularly through the Canadian network of Small Business Service Centres, was one of the key recommendations recently made to the Canadian government by a specially formed committee (Small Business Working Committee 1994).

To identify which variables were relevant to our pilot study and determine their potential interactions, we consulted a number of experts: G. Stewart and J. Kinsella, Small Business Centre (London); P. Tripp, Head of Business Information Services, London Public Library (LPL); Prof. R. Knight, Western Business School; and Prof. P. Kantor, Rutgers University. This consultation process led to the design of a mathematical model to be tested and of a survey instrument used to collect information about these variables from a sample of small business owners in London.

Methodology

The main methodology (or mathematical model) used in this study is LISREL. LISREL is an acronym for the Linear Structural RELations model. Properly speaking, LISREL is a computer program that analyzes covariance structures, but the widespread use of the LISREL software has identified the name of the program with the statistical procedures it performs. It is considered the most general method for the analysis of causal hypotheses or covariance structure models on the basis of non-experimental data. LISREL for Windows (v. 8.12) by Scientific Software International (Chicago, Illinois) was used in this study.

LISREL has been used widely in fields such as sociology and psychology. So far, there have been few studies in information science. For example, Auster and Lawton (1984) used a path-analysis approach to study the relationships among the interview techniques used with users of online bibliographic retrieval systems, the amount of information gained by the users, and their ultimate satisfaction with the quality of the items retrieved.

In a different vein, path analysis was used to model the influence of various factors on the attitudes of users toward management information systems and their associated processes, products, and services (Joshi 1992). Similarly, Baroudi et al. (1986) used questionnaire data and path analysis to investigate the causal relationships among user involvement, usage of an information system, and users' satisfaction with the system.

The LISREL methodology involves a number of steps:

• Identifying variables to be used,

• Collecting data on these variables,

• Developing the model,

• Testing the model against the data, and

• Revising the model if necessary and retesting it.

Variables identified in the London study will be discussed in detail later in the paper (see Appendix A for a list of variables). Data for these variables were collected through the use of a questionnaire.

Data Gathering and Coding

There are many ways to define a small business. One of the most common is by the number of employees. In Canada, small businesses are generally held to be enterprises with fewer than 100 employees in the manufacturing sector and fewer than 50 in services. In the current study, businesses located in London and the immediate vicinity with 50 employees or fewer were included in the sample. Based on the list of variables shown in Appendix A, a questionnaire was designed (see Appendix B) and mailed to a total of 982 small businesses in London.

The mailed package included a covering letter and self-addressed, stamped, return envelope. A second envelope was also included to allow participants who wished to receive a report of the main findings of the study to identify themselves. For ethical reasons, the participants were guaranteed anonymity and their identification on the questionnaire was optional.

The sampling frame consisted primarily of two databases of businesses in the London area, made available in electronic format to the researchers by the Economic Development Office of the City of London. A stratified sample of 919 small businesses was systematically extracted from the two databases (mounted in Microsoft Access, a database management system). This initial sample was augmented by the recruitment of 37 participants at a Small Business Fair (held in March 1994 at the Main Branch of the London Public Library, LPL), by scanning the Business Section of the London Free Press in the months of March and April 1994 (5 participants), by placing an ad in the HomePreneur, a local bulletin for small business owners (2 participants), and, finally, by directly distributing questionnaires to users of the Business Section at the Main Branch of LPL (19 participants).

Between 21 April and 22 June 1995, 184 valid questionnaires were returned. Another 28 were returned undelivered (e.g., wrong address) and had to be discarded. Four returned questionnaires had to be excluded because they did not meet the definition of small business used in this study (50 employees or fewer). The adjusted return rate of 19.29% (184 of the 954 questionnaires presumed delivered, i.e., 982 mailed less the 28 undeliverable) should be considered quite acceptable for a study of this type. Follow-up with the mailing list, had it been approved by the Ethics Review Board of the University of Western Ontario, might have improved the return rate.

Responses were anonymously coded and entered into an electronic data file using Microsoft Excel 5.0. Simple data screening (e.g., frequency distributions) was performed using PRELIS 2.12a (by Scientific Software International, Chicago, Illinois), a companion application to LISREL. The frequencies for each answer are reported on the questionnaire in Appendix B. Values for all continuous variables were recorded as on the questionnaire. Answers to ordinal variables were coded: 1 for Never Use, 2 for Sometimes Use, and 3 for Frequently Use. Missing values were coded as 99 and answers that were not applicable were coded as 88.
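As an illustration of this coding scheme, the following minimal sketch recodes one ordinal question in Python with pandas (not the tools used in the study, which relied on Excel and PRELIS; the example answers are hypothetical):

```python
import pandas as pd

# Hypothetical raw answers to one ordinal question (e.g., use of LPL).
raw = pd.Series(["Never Use", "Frequently Use", None, "Sometimes Use"])

codes = {"Never Use": 1, "Sometimes Use": 2, "Frequently Use": 3}

# Apply the study's scheme: 1/2/3 for the ordinal categories and
# 99 for missing values (88 would mark "not applicable").
coded = raw.map(codes).fillna(99).astype(int)
print(coded.tolist())  # [1, 3, 99, 2]
```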

Two of the most common problems with the data were answers marked in between categories and multiple selections of categories for a given question (e.g., an answer marked both Sometimes Use and Frequently Use). These problems stem partly from the small number of response categories offered in the questionnaire (e.g., Not Important, Somewhat Important, Very Important). This can be remedied by providing more response categories in future questionnaires (e.g., a Likert scale).

Some questions also proved to be too sensitive. For example, only 48 participants (26% of the total) chose to indicate their profit margin. The wording of some questions was also problematic. For example, the profit margin question (Question 7) was unclear as to whether we meant net or gross profit margin.

A more general issue concerns the respondents' position in their respective organizations and, thus, the reliability of the data collected. The questionnaires were sent to the contact person identified by each firm in the sampling frame. In most cases, given the size of these businesses, the contact person will actually be the owner or president (or someone in a similar role) of the business. In a larger business, however, the person filling out and returning the questionnaire may not have access to all company records. Although we can assume that a contact person has the authority to find out the correct information, the risk of incorrect information exists. This would be particularly problematic in the study of larger organizations, in which case the choice of entry point would be a determinant of the quality of the data collected.

Data Analysis Using the LISREL Model

There are two basic types of variables in LISREL: latent variables, represented by ovals in Figure 1, and observable variables, represented by rectangles. Latent variables are those formulated in terms of theoretical or hypothetical concepts, or constructs, that are not directly measurable or observable. Observable variables are those that are directly measurable or observable and can be used as indicators of latent variables. In other words, latent variables are represented or measured by observable variables. Variables on the right side of Figure 1 are dependent (or output) variables, e.g., success is the dependent latent variable and PROFIT, INCREASE, and LENGTH are the dependent observable variables. Variables on the left side are independent (or input) variables. LISREL integrates both latent theoretical concepts and observed or measured indicator variables into a single structural equation to study the causal relationships among the variables.


Fig. 1. A hypothetical model.

To illustrate the meaning of LISREL, let us consider the hypothetical model in Figure 1. This model states that business success (“success” oval in the graph) is caused by type of business (“type”), business environment (“busienvi”), and use of information (“info_use”). Each of these is a latent variable, which is measured by one or more observable variables. For example, the latent variable “success” can be measured by the profit margin (“PROFIT” rectangle in the graph), percentage increase in employees (“INCREASE”), and length of time in business (“LENGTH”). Lines pointing from independent latent variables to dependent latent variables are called paths.

Once data are collected on all the observable variables, LISREL will estimate the path coefficients that indicate the magnitude of the contribution of each independent latent variable to the dependent latent variable. In this example, path coefficients will allow us to assess the impact of information on business success when the contributions of other factors (e.g., business type and business environment) are considered together with use of information. LISREL will also tell us whether each path coefficient is statistically significant, that is, whether it reflects a true causal effect or merely a coincidence in the particular data collected.
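To make the hypothetical model of Figure 1 concrete, here is a minimal sketch of an equivalent covariance-structure analysis in Python using the semopy package, a stand-in for the commercial LISREL program. The indicator names for "type", "busienvi", and "info_use", and the data file name, are hypothetical; only "success" and its three indicators come from Figure 1:

```python
import pandas as pd
import semopy

# Measurement model: each latent variable (oval in Fig. 1) is measured
# by observable indicators (rectangles); the last line is the structural
# part, with paths from the three latent causes to "success".
desc = """
success =~ PROFIT + INCREASE + LENGTH
type =~ TYPE1 + TYPE2
busienvi =~ ENV1 + ENV2
info_use =~ USE1 + USE2
success ~ type + busienvi + info_use
"""

data = pd.read_csv("survey.csv")  # hypothetical coded questionnaire data
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```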

Findings: Path Analysis Models

Our original LISREL model, shown in Figure 2, contained 32 observable variables and seven latent variables. Because of the restriction of presenting the model on a single page, not all the observable variables are shown. For example, there are five observable variables measuring the latent variable “use of formal information” (“formal” oval in Fig. 2), but only two of them appear in the graph (LPL for London Public Library and SBC for Small Business Centre). The “....” sign between the LPL rectangle and the SBC rectangle means that some observable variables have been omitted. The same applies to the “informal” and “infokind” latent variables in Figure 2. See Appendix A for a complete list of the observable variables; most correspond to a particular question in the questionnaire.

It was soon realized that we did not have enough data points to test a model with so many variables. Thus, we deleted and combined some variables to achieve a more parsimonious model, which has 10 observable variables and four latent variables, as shown in Figure 3. For example, we deleted the latent variable “type” and the corresponding observable variable “3 TYPES” because LISREL cannot deal with nominal data.


Fig. 2. Original model.


Fig. 3. Revised version of the model.

We deleted the observable variable “PROFIT” because only 26% of respondents answered this question. Some variables were combined to form a composite variable by taking the arithmetic average of the original scores. For example, the latent variables “use of formal information” and “use of informal information” in Figure 2 were combined into a single latent variable “info_use” in Figure 3. This latent variable is measured by two observable variables, “FORMAL” (use of formal information) and “INFORMAL” (use of informal information). Data on the combined variable “FORMAL” were obtained by taking the average of the five original variables (use of London Public Libraries, the Small Business Centre, the Internet, Statistics Canada publications, and newspapers).
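A brief sketch of this compositing step, again in Python/pandas for illustration (the file and column names are hypothetical stand-ins for the five formal-source questions):

```python
import pandas as pd

df = pd.read_csv("coded_responses.csv")  # hypothetical file of 1/2/3 codes

# Composite observable variable FORMAL: the arithmetic average of the
# five formal-source use scores described in the text.
formal_sources = ["LPL", "SBC", "INTERNET", "STATSCAN", "NEWSPAPER"]
df["FORMAL"] = df[formal_sources].mean(axis=1)

# The INFORMAL composite would be built the same way from the
# informal-source columns.
```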

Both LISREL theory and the test results of this revised model, however, suggest that there are still too many variables in this version, considering the number of data points we have (164 effectively used by LISREL). After several attempts at further combining variables and retesting, the model either did not converge or converged to a nonadmissible point, meaning that the model was not successful.

The estimated parameters produced by LISREL (e.g., the path coefficients, which would indicate the contribution of information to business success) are not provided in this paper because they are not meaningful when the model converges to a nonadmissible point. The path coefficients shown in Figures 2 and 3 are therefore all zeros rather than actual estimates. The chi-square value and its significance level are not reported either, for the same reason. We decided not to combine variables any further to fit the model, because doing so would render the model meaningless from a research point of view, even if it fit the data statistically.

Experience Gained in the London Pilot Study

Although the London pilot study was not very successful, we gained considerable experience, which can be summarized as follows:

• Business success is an extremely complex phenomenon involving many variables. A simplified model using fewer variables will not capture the true causal relationships we intended to examine. Models with more variables, however, require significantly more data, and larger numbers of data points are needed for ordinal data (the kind typically collected in survey questionnaires). Use of the LISREL model, therefore, is not recommended unless large amounts of data are available, and even more data should be available if a questionnaire is used to collect them.

• Nominal data are not allowed in LISREL (i.e., at least ordinal scale data are needed). Thus “yes” or “no” questions, which are frequently used in questionnaires, should be avoided.

• Determining the appropriate output variables (indicators of business success in this study) is extremely important for the LISREL model to work. This is also a very difficult task. The lack of success of the London pilot study is partially because of inappropriate output variables. For example, the variable “profit margin,” which is probably one of the most important indicators of business success, was not included in the final version of the model because an insufficient number of data points were available for it. (It is understandable that many respondents chose not to answer this question because of its highly sensitive nature.)

Tentative Plan of the Shanghai Study

In light of the experience gained in the London pilot study, the Shanghai study will:

• Increase sample size. It will aim at 400 data points, more than twice the number in the London pilot study.

• Include more effective business success variables, such as increase in fixed assets of the business.

• Limit the study to the manufacturing sector to reduce the factors involved and, therefore, simplify the model. It will also make comparisons of individual businesses more meaningful.

• To increase the return rate of sensitive data such as profit margin, ordinal rather than ratio data will be collected. That is, rather than asking for the actual profit margin figure, we will ask interviewees to indicate whether the profit margin is excellent, good, satisfactory, etc. This of course will compromise the accuracy of the data in these variables. It is necessary, however, to ensure the availability of data.

• Pretest the questionnaire instrument to improve its quality, thus reducing the possibility of unclear questions. Increase the measurement scale for ordinal variables from three to seven (i.e., interviewees can choose from seven categories of answers rather than three).

• To increase the return rate, the questionnaire will be followed by a telephone call to encourage participation.

Mr Han-dong Wang, a researcher from the Institute of Scientific and Technical Information of Shanghai, is visiting our research team in London (June-August 1995) to gain firsthand experience with LISREL methodology and help in the planning of the Shanghai study, which is currently under active discussion.

“Because this is an exploratory study, we cannot expect that the final result will be a clear cut determination of the exact, quantitatively determined, impact of information on the outcome variable” (Tague-Sutcliffe et al. 1994, p. 11). The difficulties we encountered in the London study confirmed this original estimation in our project proposal to IDRC. With our experience in London behind us, we hope to achieve a better result in the Shanghai study. Because of the difficult nature of this endeavour, however, we should not set our expectations too high.

References

Auster, E.; Lawton, S.B. 1984. Search interview techniques and information gain as antecedents of user satisfaction with online bibliographic retrieval. Journal of the American Society for Information Science, 35(2), 90–103.

Baroudi, J.J.; Olson, M.H.; Ives, B. 1986. An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29(3), 232–238.

Chaganti, R. 1983. A profile of profitable and not-so-profitable small businesses. Journal of Small Business Management, 21(3), 43–51.

Griffiths, J.-M.; King, D.W. 1993. Special libraries: increasing the information edge. Special Libraries Association, Washington, DC, USA.

Jöreskog, K.; Sörbom, D. 1988. PRELIS: a program for multivariate data screening and data summarization (2nd ed.). Scientific Software International, Chicago, IL, USA. Pp. 2–8.

Joshi, K. 1992. A causal path model of the overall user attitudes toward the MIS function: the case of user information satisfaction. Information and Management, 22, 77–88.

Menou, M.J., ed. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Small Business Working Committee. 1994. Breaking through barriers: forging our future. Industry Canada, Ottawa, ON, Canada.

_____ 1995. Small business: a progress report. Industry Canada, Ottawa, ON, Canada.

Stone, M.B. 1993. Assessment indicators and the impact of information on development. Canadian Journal of Information and Library Science, 18(4), 50–64.

Tague-Sutcliffe, J.; Meadow, C.T.; Vaughan, L. 1994. Measuring the impact of information on development: a path analysis approach. A proposal to the International Development Research Centre (IDRC). IDRC, Ottawa, ON, Canada.

Tiggles, L.M.; Green, G.P. 1994. Small business success among men-owned and women-owned firms in rural areas. Rural Sociology, 59(2), 289–310.

Appendix A: List of Variables

Variable   Description                                                  Label (in Fig. 2)

OI   Business type                                               3 TYPES
OI   Technological developments                                  TECHNO
OI   Exchange rate                                               EXCHANGE
OI   Interest rate                                               INTEREST
OI   Customer base                                               CUSTBASE
OI   Availability of financing                                   FINANCE
OI   Location                                                    LOCATION
OI   Employees' expertise                                        EMPLOYEE
OI   Employer's expertise                                        EMPLOYER
OI   Friends, relatives, business people                         omitted in Fig. 2 ("informal")
OI   Customers                                                   omitted in Fig. 2 ("informal")
OI   Consultants, lawyers, accountants                           omitted in Fig. 2 ("informal")
OI   Statistics Canada, government publications                  omitted in Fig. 2 ("formal")
OI   Newspapers, magazines, trade publications                   omitted in Fig. 2 ("formal")
OI   Internet                                                    omitted in Fig. 2 ("formal")
OI   London Public Libraries                                     LPL
OI   Small Business Centre                                       SBC
OI   Business clubs, trade conventions                           CLUB
OI   Banks and financial agencies                                BANK
OI   Financing your business                                     FINBUS
OI   Markets, customers                                          omitted in Fig. 2 ("infokind")
OI   Suppliers                                                   omitted in Fig. 2 ("infokind")
OI   Government regulations                                      omitted in Fig. 2 ("infokind")
OI   Management skills                                           omitted in Fig. 2 ("infokind")
OI   Technology                                                  omitted in Fig. 2 ("infokind")
OI   Writing a business plan                                     omitted in Fig. 2 ("infokind")
OI   Selling skills, motivation                                  omitted in Fig. 2 ("infokind")
LI   Types of business, measured by 3 TYPES                      type
LI   Environment, measured by TECHNO, EXCHANGE, INTEREST, CUSTBASE, FINANCE, LOCATION   environ
LI   Expertise, measured by EMPLOYEE, EMPLOYER                   expertis
LI   Formal information sources, measured by LPL, SBC (and 3 others not shown)          formal
LI   Informal information sources, measured by CLUB, BANK (and 4 others not shown)      informal
LI   Kinds of information, measured by FINBUS (and 7 others)     infokind
LD   Success of business, measured by all latent independent variables                  success
OD   Profit                                                      PROFIT
OD   Increase in number of employees from start-up until now     INCREASE
OD   Expectation of being in business a year from now            PROSPECT
OD   How long in business (in years, from start date until 1995)   LENGTH

Note:
OI = Observed Independent
LI = Latent Independent
LD = Latent Dependent
OD = Observed Dependent

Appendix B: Questionnaire Used for Data Collection and Frequency Distribution2

Measuring the Impact of Information on Small Businesses in London

A Questionnaire

A study conducted by the Graduate School of Library and Information Science, University of Western Ontario

Your responses to this questionnaire will be used to analyze information sources used by London small businesses. Your company will not be identified in this process.

I Our first questions are about the size and scope of your business.

1. When did you start your business?            Median=1985

2. What type of business do you have?

   Manufacturing 10.6%   Retail 7.8%   Service 44.7%   Other 35.8% (Please specify: _____)

3. How many full-time employees do you have now?      Median=4

4. How many full-time employees did you have at start up?      Median=1

2Only the first part of the questionnaire is included here. The second part was used to collect some qualitative information on the use of London Public Library services. These data have not been used in the study reported in this paper. This appendix also includes the frequency values (in percentage) obtained for each answer (total number of cases is 184). Note that the total for each question does not always add up to 100% because of missing values.

5. Approximately how many hours of part-time employment did your business provide last year?      Median=275 hours

6. Did you make a profit last year?

   Yes 62.6%   No 32.4%

   If yes, do you mind telling us your profit margin?      Median=10%

7. Do you expect to be in business a year from now?

   Yes 93.9%   No 2.2%

II We are interested in the information you use in running your business.

1. How often do you use each of the following sources of information?

Source of Information                  Never Use   Sometimes Use   Frequently Use
Friends, associates, relatives            2.8           46.4            48.6
Suppliers                                 7.8           40.8            49.7
Customers                                 5.6           41.3            52.0
Consultants, lawyers, accountants        16.2           63.1            19.6
Banks, other financial agencies          35.2           52.0            11.2
Newspapers, magazines, trade pubs         8.9           55.3            35.8
Stats Canada, other govt pubs            60.9           29.6             7.8
Trade associations, business clubs       29.1           52.5            17.3
Internet                                 78.8           16.2             2.8
London Public Libraries                  56.4           32.4            10.1
Small Business Centre                    82.7           13.4             2.8
Other _____                              12.3            2.2             6.7

2. Please tell us, for each of the following kinds of information, how important it is:

Kind of Information            Not Important   Somewhat Important   Very Important
Financing your business             21.2              33.5               43.0
Markets, customers                   3.4              17.9               74.9
Suppliers                           14.5              34.6               49.7
Government regulations              12.3              47.5               37.4
Management skills                   11.7              34.1               51.4
Technology                           7.8              31.8               58.7
Writing a business plan             30.2              45.3               21.2
Selling skills/motivation           14.0              35.2                5.3
Other _____                          7.8               0.0                3.9

III What other factors have been important in determining the success of your business?

Factor                         Not Important   Somewhat Important   Very Important
Availability of financing            26.3              33.5               36.9
Location                             38.5              35.2               23.5
Existing customer base                6.1              18.4               74.3
Technological developments           16.8              47.5               33.0
Exchange rate                        60.9              24.6               12.8
Interest rate                        40.2              35.2               22.3
Your expertise                        0.6               8.9               90.5
Employees' expertise                 11.2              15.6               69.8
Other                                 4.5               0.6                6.7


Information Factors Affecting New Business Development: Progress Report

Charles T. Meadow and Louise Felicie Spiteri1

Introduction

We are conducting a pilot project whose objective is to explore methods of measuring the impact of information on development and, in particular, to develop a mathematical model of the relationships among a number of variables or indicators. Our variables are generally indicative of economic conditions, information availability, and information use. This is in furtherance of a goal of the International Development Research Centre (IDRC) stated as, “What is required is a set of tangible criteria by which the relevance or impact of information on development may be measured” (Stone 1993, p. 53). Stone further added (p. 55) that “little empirical research [had] ever [been] attempted in this area.”

We have tried to follow the principles set forth in Menou (1993, p. 63) as to characteristics of indicators. Our indicators are generally used for the purpose he stated (p. 64) as “to evaluate performance relative to a set of objectives, depending on the specific area of endeavor or project.” In general, we are concerned with the impact of information and economic variables on the successful establishment of a new business.

The information systems we are concerned with are already in place. The major question is whether or how effectively they are used. We propose to develop a macro model, meaning that we are dealing mainly with variables broadly descriptive of a region, not with individual cases. We are working with new business creation, rather than with expansion of existing businesses.

This project is part of a larger effort directed by Professor Jean Tague-Sutcliffe at the Graduate School of Library and Information Science of the University of Western Ontario with the cooperation of the Institute of Scientific and Technical Information in Shanghai (Tague-Sutcliffe et al., this volume). Our part of the project attempts to answer the question, “To what extent does the availability and use of information affect the success of newly established small businesses in the Canadian Province of Ontario?” Data will be gathered from 15 cities within Ontario. Our “macro” approach uses variables descriptive of the general area under study, combined with some individual data pertaining to specific businesses. Professor Tague-Sutcliffe’s pilot project (1993) focuses more specifically upon the impact of London-based information services upon selected small businesses in London and more on the specifics of individual businesses.

1Faculty of Information Studies, University of Toronto, 140 George St, Toronto, Ontario, Canada M5S 1A1.

Ontario is not necessarily representative of developing countries. But our goal in the pilot is to determine the variables needed and a method of collecting data. We are not, at this stage, expecting a complete, predictive model of the development process or even one to be used for post hoc evaluation.

We are attempting to develop a method that, if reasonably successful, would then be used in our next phase, in which the focus will be on a community in the People’s Republic of China. Our preliminary meaning of “reasonably successful” is that, at the conclusion of the pilot study, we would be confident that we had identified the important variables and methods of gathering the requisite data. In both the Ontario and China projects, we aim to demonstrate the relationship between economic variables pertaining to the selected region and variables descriptive of information availability and use.

As just one example of the kind of difficulty we may face, we talked with a number of people active in support of new or small business development in Toronto. A point of common agreement among them was that the personality of the entrepreneur is the dominant factor in the success of a new enterprise. How do we measure this? Could we administer some form of personality inventory or other psychometric measuring instrument? Suppose this were to turn out to be a good predictor of success. Could we then administer the instrument to a random sample of all applicants for a business licence in Ontario? Very likely, we would find this difficult to impossible for social and political reasons.

Such a testing program could easily be misconstrued as establishing, or working toward the establishment of, a required psychological profile for anyone starting a new business. Even if we could do this, Ontario has an extremely ethnically diverse population. Is any personality inventory equally effective in categorizing personalities, and in using those categories to predict behaviour, across many cultures? Even if the answer were yes, could we use such a test in China, i.e., would the government and populace permit it and cooperate in the use of a Western psychometric instrument? We think the reality is that we will have to do without such data.

There is no lack of independent variables in such a study. We have measures of the economy at national, provincial, and city levels and of the information available in a city. We do not have all the information we would like, and sometimes have to make do with indirect measures or gross approximations. By questionnaire, we can get some indication of what information sources are being or were used by entrepreneurs, and of what types of information they consider important.

Ideally, we would develop a mathematical relationship by which the state of the economy, the availability of information, the willingness of a business leader to seek information, and his or her skill in using it lead to business success. Unfortunately, as there is no simple measure of business success and no way to measure all the information seeking and use that a person has been involved in, we must settle for approximations.

We are approaching this pilot study in the spirit that it is exploratory: we are trying to find out what the variables are and how to go about measuring them, and we are not expecting to produce a completely valid psychosocial model of new business development in Ontario. We have had to make some generalizations and assumptions that we cannot rigorously justify, in the hope that, if they point us in the right direction, we can refine them later.

Related Research

One point comes clearly through most of the published works on the subject of use of information services — providing the service is only part of the problem. The other major part is getting people to use them. Providing services is often a technical or managerial matter. Encouraging and training people to use them may be a very complex social and educational matter. Often, studies of information use concentrate on formal institutions, but word of mouth, consultation with friends, attorneys, or village elders, and news media all contribute to knowledge.

There is a considerable literature, in sociology and information science, on the subject of information gaps (Chatman 1995; Childers and Post 1975; Dervin and Greenberg 1972). The focus has been largely on disadvantaged minorities within a larger society. This target group does not generally coincide with our target group. It probably has to be conceded that in any society there are going to be subgroups less popular than others and that government office staff may reflect such attitudes, whether or not repression is the official government policy. But, we have to proceed on the assumption that successful new businesses aid the society in general and that governments support them.

Hence, information gaps pertaining to potential entrepreneurs are not necessarily the result of deliberate policy and we shall act on this assumption. But there can also be no doubt that information gaps do exist and they may be caused by local lack of information availability for anyone, lack of adequate services for dissemination of information, unofficial discrimination, lack of willingness on the part of the entrepreneurs to seek out information, or lack of ability on their part to know how to use the services or understand the information. These lacks, or gaps, may or may not pertain to everyone, i.e., they may be selective.

Evans et al. (1977) state that “In every country…a certain amount of scientific and technological information…is processed and stored in some fashion for the benefit of users. Unless these users know how to find relevant information available to them, the ‘information machinery’ falls short of its main goal.” It appears that, not only must they know how, but they must be motivated to use this knowledge. It is not enough to be literate in the sense of knowing how to read and write. It is also necessary to be literate in the sense of knowing or caring what is written and trusting that the written or recorded word may have value.

Agriculture is a business that is often small and highly dependent on information about technology and markets. “Technology” includes seeds, fertilizers, cultivating techniques, and transportation. Market knowledge includes information about world-wide demand, supply level and sources, competitive products, and sometimes sources of labour. The agricultural extension service in the United States, a cooperating network of government, university, and college agencies involved in advising, training, and educating farmers and farm families, is 200 years old (Scott 1970).

In its early days, it encountered the same user reluctance and skepticism we often hear about among small business people today with regard to formal information services. In the late 19th century, after a devastating civil war and subsequent depression, U.S. farmers were becoming desperate and began to seek any advantage. Extension services became more popular. In 1914, the U.S. enacted new legislation that considerably enhanced and spread the extension service and government spending on it. Today, it is highly respected and used (Richardson 1987; Warnock 1992). Its product is information. There seems to be nothing comparable, on so grand a scale, to support business in other industries.

Is there a lesson for us in this growth of an information service to development? There are and have been similar undertakings aimed at nonfarm businesses, but they lack the history or extent of analysis that applies to agriculture. What was it that made information services so effective in agriculture, compared with other fields? One answer is the passage of time, but in the modern world sponsors of such services do not tend to want to wait a century for an idea to take hold.

As noted earlier, people who talk or write about information gaps tend to use this term in a relative sense. There is said to be a gap if a given group is assumed to have, or has demonstrated it has, less information than some other group. Usually, this is a disadvantaged minority compared with the majority population. But there is often nothing quantitative, nor is there a norm — what are people supposed to know? Measuring the deviation from an unknown norm is very difficult.

Chatman and Pendleton (1995), whose focus was on minority groups within a country, refer to an impoverished lifestyle. One contributor to this state is a lack of information use, which they trace to three causes: “(the information impoverished) perceive that sources of information are unusable in a timely manner,…when sources are available and useful, they are insufficient to respond to their needs,… [and] the channels of information, both mass and interpersonal, are viewed with suspicion and skepticism.”

By way of illustration, a classic North American situation concerns a husband and wife lost on a highway while on an automobile trip. The wife says “Let’s ask for help.” The husband does not want to. These people are, temporarily, in a very information impoverished situation — truly lost in terms of both geography and temper.

Why does the husband resist asking for help? Partly because, in many situations, the husband views the help available with suspicion and skepticism. “The person we might ask will really not know but will not say so. We will get directions in words or gestures that are meaningless. We’re better off consulting our out-of-date map.” I suspect that many entrepreneurs have similar feelings: “Is all this government information in academic or legal language really going to help me? Or am I wasting my time seeking out and reading it?”

Another reason, both for the lost husband and the budding entrepreneur not wanting to ask, is the feeling of not wanting to have to display ignorance to a stranger. This is probably also a reason many people are reluctant to ask for help in a library. The information gap literature suggests that this is the kind of reasoning used by information deprived people.

Chatman (1995) lists four concepts related to the impoverished life world: risk-taking, secrecy, deception, and situational relevance. I think we can assume the second and third of these are not highly applicable to our situation, but neither should we be totally naive.

As we pointed out earlier, in almost any country there are, or have been, certain groups that the majority, with or without the official support of the government, wishes to repress. Or, a minority may feel repressed because of past history, whether or not the repression is currently practiced. This feeling of repression could keep people from seeking the information necessary for business development.

The first and fourth concepts, risk-taking and situational relevance, are clearly pertinent to our project. Risk-taking has already been identified as a major factor in entrepreneurial success. For our purposes, the question becomes: How do we identify people high in this quality and then observe how they use information? The concept of situational relevance is one that can be controlled by a government or information agency, in the sense that they can work to make their information appear more relevant to potential users and we might be able to measure the extent to which they do so.

Approach to the Problem

Our first objective is to identify the variables needed and available. Finding economic data about Ontario as a whole was easy enough. We have time series for about 10 years of GNP (gross national product) at national and provincial levels, employment data, number of business registrations and deregistrations, bankruptcies, interest rates, and the like. What is not readily available is information about information: what is available, how information services are used by budding business people, how much these new entrepreneurs know about their chosen field and about business management in general. We plan to collect data on 15 communities in Ontario.

Categories of Variables

We began by interviewing a number of people, principally the head of the Toronto New Business Development Centre; the Small Business Office of the Ontario Ministry of Economic Development; the Bank of Montreal Small Business Office, which offers support services to small businesses; and the Small Business Center Network in North Carolina. The data available and opinions of experts in new business development suggest three important categories of information: economic/monetary, environmental/commercial, and personal characteristics of entrepreneurs or candidate entrepreneurs.

It is interesting that availability of information was not generally explicitly stated as an important category. Much anecdotal evidence suggests that entrepreneurs also do not think much of information as a distinct category of resource that they need. They are more likely to think in terms of what is needed: names of customers, resources, properties of materials used, etc. This widespread nonuse of the key term information confounds work in the field.

We learned that many people start a business for personal reasons, such as to become independent of a “boss.” Others do so primarily to make a profit. It is the latter type that most aid the economy of an area. The former are really just transferring their employment from an existing company to their own, new one; no new jobs or wealth are necessarily created. Development agencies, as may be expected, concentrate on the new wealth creators. Possibly, it would be desirable to filter those seeking just independence out of our sampling, but this is quite difficult.

Entrepreneurs’ Interest in Information

We learned that many prospective entrepreneurs lack expertise in their chosen field and even more lack knowledge of such aspects of business administration as marketing, money management, dealing with government regulations, and managing staff. One can learn many of the managerial skills through books, short courses, and consultation. One cannot learn so readily how to cobble shoes, grow bananas, or design electronic computers. New business agencies seem to discourage those without the basic skills and work hard at providing or making available the managerial skills.

This brings us to one of the critical and difficult-to-measure factors: the willingness of entrepreneurs to seek information. By information in this context we mean information in all forms: books, pamphlets, video cassettes, and consultation with knowledgeable people. Ideally, we would like to find out how many of those who thought about starting, or actually tried to start, a new business (or expand an existing one) sought help from a new business development centre, a government agency, a library, a bank, a trade association or union, etc.

There is no way that we could identify entrepreneurs at the moment they form the idea to start a new business. The closest we can come is to catch them as they first approach an Ontario Ministry of Economic Development and Trade Small Business Self-Help Centre, 32 of which are distributed around the province. But doing so means we will have missed anyone who abandoned the idea earlier for lack of information.

Information Availability

Our approach to measuring the information available in a community is to count the number of information facilities that are relevant to new business development: libraries, government information offices, lawyers, accountants, banks, newspapers, magazines, and radio and television stations. This approach ignores the specifics of what may be done at any given office — not all lawyers handle new business start-ups — but when there is a large number of one type of information resource there tends to be a large number of other types. An information availability index will be computed relating the number of such facilities to population in a community.

Information Used

Possibly the only way to measure the amount of information use during the steps from forming an initial idea to starting a business, through various consultations, registration and actual start up, would be to follow the business people individually for some time. Such a technique has been used in studies of managers’ work (Mintzberg 1973). We will approximate this by asking entrepreneurs before and after start up what sources they used and how they value them. We define start-up, for this purpose, as incorporation.

Measuring Short-Term Business Success

What is the best measure of success of a new business? In a study over a short span of time, such factors as net profit or number of employees are not necessarily definitive of success. Many businesses today prefer to hire part-time help or to contract out work to avoid the burden of long-term commitment to employees. This is another reason why we plan to use the fact of incorporation as a short-term measure of success. We will, however, ask each business person on our questionnaire if the business made a profit in the last accounting year.

We decided that going from an idea in an entrepreneur’s mind to the formal creation of a new business is, in itself, a measure of success. It is not the ideal measure, but the information about creating a new company is public and it does represent a degree of achievement.

In the Province of Ontario, a new business must be registered with the government. When a business ceases operation, it is deregistered. A business may be a sole proprietorship, a partnership, or a corporation. Hence, incorporation is not synonymous with registration. Not all businesses are corporations, but the larger ones or those aspiring to be large tend to be incorporated. Similarly, there are multiple reasons for ceasing to function, such as retirement of the owner, movement of the business to a different jurisdiction, or bankruptcy. Hence, we cannot use the overall term deregistration to imply failure of a business.

As noted, those who actually registered a new business obviously do not include those who became discouraged by the early consultation process, or who realized without benefit of consultation that they were lacking the knowledge or money needed to get started. We have decided to look at one group of entrepreneurs (a) as they first approach the self-help office and another (b) who have been incorporated for about a year. These would be separate samples of preregistrants and of recent incorporators. We lack the time for a longitudinal study. Hence, we are examining the difference in use of or attitude toward information between groups (a) and (b). We assume that differences in these factors are indicative of success in business.

Measuring Personality

We know that personality is a major factor in success. This includes willingness to take risks, creativity, energy, persistence, and articulateness (in selling the product or service to a customer, or the enterprise to a prospective financial backer). It may include willingness to admit to not knowing certain things and willingness to ask for help.

We might approximate a measure of the willingness to ask for help by the number of people who actually go to government or private business-help agencies or libraries. But mere entry into the facility is not the same as asking for and receiving useful information, and information agencies do not normally keep records of the questions asked, the type of person who asked, the reason for a question, or its perceptiveness. Finally, we have no way to know how many unofficial consultations were held with what may have been highly knowledgeable people.

At best, the information sought may depend on the amount or quality of information perceived by the entrepreneur to be available. A person may be willing to ask for help only if he or she truly believes it will be forthcoming.

This attitudinal aspect of the problem has so far proven difficult to measure. We discuss it further in the next section, but we must accept that availability alone is not a measure of use. Ideally, we would also consider willingness to seek information, the actual seeking of it, and use of what is found.

Information Availability

What has been distressing, in our work so far, is the lack of previous work on measuring the various aspects of the information gaps discussed earlier. We must ask what an information gap is a gap between. The sociological definition tends to say that it is a gap between the information available to favoured groups and that available to unfavoured groups. We might reinterpret that to mean a gap, in any society, between those entrepreneurs who are well informed and those who are not.

There is no point considering the gap between an entrepreneur attempting to start a new electronics business in Silicon Valley and one trying a similar enterprise in a small country whose economy was previously dominated by sugar, bananas, and tourism. The gap of interest is between those in a community who have “made it” in business or will make it and those destined not to. We need, then, some sort of relative measure or index of information availability, a separate measure of information use and perhaps a third, information known or knowledge. All these must be interpreted in social, political, and economic contexts.

Data Available

Economic Data

The economic variables we have collected (information available for Canada and Ontario for the past 5-10 years) and hope to be able to duplicate or approximate in other countries are:

• GDP (gross domestic product) — Canada

• GDP — Ontario

• Labour force — Canada

• Labour force — Ontario

• Interest rate (bank) — Canada

• Interest rate (prime) — Canada

• Weekly earnings — Canada

• Weekly earnings — Ontario

• Unemployment — Canada

• Unemployment — Ontario, self-employed, incorporated

• Employment — Ontario, self-employed, incorporated

• Employment — Ontario, self-employed, unincorporated

• Number of firms — Ontario

• Business start-ups — Ontario

• Business exits2 — Ontario

• Bankruptcies — Ontario

• New registrations — Ontario

• New incorporations — Ontario

• Employment — Ontario

• Imports — Ontario

• Exports — Ontario

• Disposable income, total or per capita — By city

• Retail sales, total or per capita — By city

Collectively, this list is probably more than we need. We are not trying to predict the behaviour of an economy, but general economic conditions and attitudes toward the future are surely important factors in the decision to attempt to start a new business and in the probability of its success. If the business is agricultural, we would include environmental factors, such as rainfall. For that reason, it may be necessary to make the model specific to an industry, i.e., to use a somewhat different model for an agricultural economy than for one based on mineral extraction, textiles, or electronic equipment assembly. We will compute a city economic index based on per capita income and per capita retail sales, and a general index of economic conditions based on the foregoing variables.

2“Business exits” is a generic term approximately synonymous with Ontario’s “deregistrations.” It covers the many reasons for a business to cease operation.
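The paper does not fix a formula for the city economic index, so the following Python sketch shows one plausible construction, which is an assumption on our part: standardize the two per-capita components across cities, then average them. The city names and dollar figures are purely illustrative:

```python
import pandas as pd

# Hypothetical per-city data; DI = per capita disposable income,
# RS = per capita retail sales (variable names from the summary below).
cities = pd.DataFrame({
    "city": ["London", "Kingston", "Sudbury"],
    "DI": [21000, 19500, 18200],   # dollars per capita (illustrative)
    "RS": [9800, 9100, 8700],      # dollars per capita (illustrative)
})

# Assumed scheme: z-score each component across cities, then average,
# so that income and retail sales contribute on a comparable scale.
for col in ["DI", "RS"]:
    cities[col + "_z"] = (cities[col] - cities[col].mean()) / cities[col].std()
cities["econ_index"] = cities[["DI_z", "RS_z"]].mean(axis=1)
print(cities[["city", "econ_index"]])
```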

Information Available to Users

Based on informal conversations with people involved in research and measurement in the information science field, it appears that there might be a high correlation between those who read and those who use other information services. Hence, we are treating all sources as equal: those that provide primarily print-based information, such as libraries and newspapers; those that provide broadcast information, such as television stations; and those that provide primarily consulting services, such as banks or lawyers. We will create an index based on the major kinds of publicly available information services relevant to business. Our first-round list:

• Libraries, public or academic.

• Other types of government information agencies (e.g., agricultural extension service, revenue service consultants, business self-help centres, etc.).

• Postsecondary educational institutions.

• Lawyers.

• Accountants.

• Bank or Trust Company branches.

• Printed news media.

• Radio/TV stations.

The first item is traditionally measured by holdings — roughly the number of books. One alternative is the number of staff members, probably highly correlated with holdings, and another is the number of public facilities — main library and branches. Postsecondary educational institutions could be measured by number of faculty members or number of students, or simply number of campuses. Lawyers, accountants, and bank or trust company branches are ideally measured by number of transactions or cases, but such data would be very difficult to collect. They could, like libraries, be measured by number of staff or by number of organizations. Printed news media and radio/TV stations could be measured by number of publishers or broadcasters, probably better by audience size, if such data were available.

Our initial approach will be to use number of offices or equivalent for all the categories in a community — number of firms for lawyers and accountants, number of branches for libraries and banks, number of newspapers or magazines published, and number of radio or TV broadcasting stations. This is a relatively rough measure, but we believe these counts will be consistent with population and perhaps major industry in a community. An information index will be computed as the sum of the information facilities divided by population in thousands of persons.
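The index defined above (total facilities divided by population in thousands) is simple enough to state as a short Python sketch; the facility counts and population here are hypothetical, and the categories follow our first-round list:

```python
# Hypothetical facility counts for one community.
facilities = {
    "public_library_branches": 4,
    "academic_library_branches": 2,
    "law_offices": 35,
    "accounting_offices": 18,
    "bank_trust_branches": 22,
    "newspapers": 3,
    "radio_tv_stations": 6,
    "government_info_agencies": 5,
}
population = 320_000  # illustrative

# Information availability index, as defined in the text:
# sum of information facilities per thousand persons.
info_index = sum(facilities.values()) / (population / 1000)
print(round(info_index, 3))
```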

We recognize that one key source of information for the new entrepreneur is not included in this approach: experienced persons in a field. Not getting information about all relevant consultations means that we are possibly missing some important data. We get some of this, but not really precise measurements, in the questionnaire described in the following. The only way to get the information directly would be to ask a relatively large number of people in a community to identify those who could offer advice in various fields.

Measurements of Information Use

Our first approximation to measuring information use comes from recognizing that we cannot get all the information we want. We have no way to identify the segment of the population considering a new business until its members take some overt action, such as applying for registration of a company. Hence, the first point at which we can identify a prospective entrepreneur is when he or she applies for registration, approaches a new business incubator or similar institution, or approaches a bank for a start-up loan. We have already missed those who thought about a new business but did not know how to go about forming one.

The second point at which we can find new entrepreneurs is when they register. Some go beyond this and incorporate. In a sense, we can consider incorporation a form of success. The company may not have realized any profit, but its owners have created a legal business by going through a formal government procedure.

After incorporation, we once again have trouble finding information. We might poll new businesses — the act of incorporating is a public one — to find out their financial situation, but they are not obliged to answer. We can easily find the number of bankruptcies and deregistrations, but not necessarily the names and locations of all the companies involved. Thus, we were led to the assumption that remaining in business is a form of success.

Our decision was to take samples of people approaching an Ontario small business self-help office as an approximation to those considering the formation of a business. We will take samples of companies that have been established for 1-2 years to represent initially successful companies. We will administer a similar questionnaire to both, based on that used in the London, Ontario, pilot project (Tague-Sutcliffe et al., this volume). We modified the London questionnaire principally by eliminating references to London and leaving out some detailed information. The questionnaire as used by us is shown in Appendix A. In Part I it asks some general questions about the business, including how long it has been in existence and whether or not it is profitable. This would be used only for incorporated companies. In Part II we ask about information sources used and the value of various categories of information. Part II also asks about other factors important to the success of a business. We intend that the difference in the frequency distributions of responses to the questions of Part II will be a measure of the difference in information use of preregistrants and established companies.

Summary of Variables

The following variables are used in our model:

Direct Measures

Economic (regional)

• GDP (provincial)

• GDP (national)

• Interest rate, prime (Canada)

• Weekly earnings (Ontario)

• Unemployment (Ontario)

• Business start-ups (Ontario)

• Business exits (Ontario)

• Bankruptcies (Ontario)

• New registrations (Ontario)

• New incorporations (Ontario)

Economic (local)

• Disposable income, per capita, by city (DI)

• Retail sales, per capita, by city (RS)

Population

• Population, by city (POP)

Information Available (INFO)

1. Number of public library branches

2. Number of academic library branches

3. Number of law offices

4. Number of chartered accounting offices

5. Number of bank branches

6. Number of trust company branches

7. Number of daily newspapers

8. Number of weekly newspapers

9. Number of radio stations (AM/FM)

10. Number of television stations

11. Number of government information agencies. Agencies deemed relevant to the project are listed in Appendix B.

Questionnaire Data

Company Information (Existing corporations)

Year incorporated

Type (manufacturing, retail, service, other)

Number of full-time employees at start

Number of full-time employees now

Number of part-time employee hours now

Profitable (yes/no)

Information Sources Used (INFSO)

1. Friends (Never, Sometimes, Frequently)

2. Suppliers

3. Customers

4. Consultants

5. Banks

6. Associations

7. StatsCan publications

8. Publications

9. Internet

10. Libraries

11. Other government agency information sources

12. Other

Importance of Information (INFIMP)

1. Financing (Not, Somewhat, Very)

2. Markets

3. Suppliers

4. Government regulations

5. Management skills

6. Technology

7. Own business plan

8. Selling

9. Other

Factors of Importance (FACIMP)

1. Availability of financing

2. Location

3. Customer base

4. Technological developments

5. Exchange rate

6. Interest rate

7. Own expertise

8. Employee expertise

9. Other

Indexes: Composite Measures

Regional Economic Index (RECOX)

(Definition to be determined)

Local Economic Index (LECOX)

Tentative definition: the difference between mean per capita disposable income and per capita retail sales, divided by disposable income, i.e., LECOX_j = (DI_j - RS_j) / DI_j. This gives an index value by city. It is the only city-wide economic data we have for all cities in the sample.

Information Availability Index (INFAVX)

Sum of the number of facilities of the types listed in INFO, divided by the population of the city. Computed for each city:

INFAVX_j = (Σ_i INFO_{i,j}) / POP_j

where i indexes the question (facility type) and j indexes the city.

Information Use Index (INFUSX)

Average value of responses to the information use questions (INFSO), divided by the maximum value of a response:

INFUSX_{j,k} = (1/n) Σ_i INFSO_{i,j,k} / INFSOMAX

where i indexes the question, j the city, and k the sample group; n is the number of responses in the sample, and INFSOMAX is the maximum value of a response.

Information Importance Index (INFIMPX)

Average value of responses to the information importance questions (INFIMP), divided by the maximum value of a response:

INFIMPX_{j,k} = (1/n) Σ_i INFIMP_{i,j,k} / INFIMPMAX

where i indexes the question, j the city, and k the sample group; n is the number of responses in the sample, and INFIMPMAX is the maximum value of a response.

Factor Importance Index (FACIMPX)

Average value of responses to the factor importance questions (FACIMP), divided by the maximum value of a response:

FACIMPX_{j,k} = (1/n) Σ_i FACIMP_{i,j,k} / FACIMPMAX

where i indexes the question, j the city, and k the sample group; n is the number of responses in the sample, and FACIMPMAX is the maximum value of a response.
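To make the computations concrete, the following is a minimal Python sketch of these indexes. The facility counts, the 0-2 response coding, and the data layout are our own assumptions for illustration, not part of the study design.

```python
# Illustrative sketch only: hypothetical data and an assumed 0-2 coding
# (Never/Not = 0, Sometimes/Somewhat = 1, Frequently/Very = 2).

INFSOMAX = 2   # maximum value of an INFSO response (assumed coding)
INFIMPMAX = 2  # maximum value of an INFIMP response (assumed coding)

def infavx(info_counts, population):
    """INFAVX_j: sum of facility counts divided by city population
    (expressed in thousands, per the earlier description)."""
    return sum(info_counts) / (population / 1000)

def response_index(respondents, max_value):
    """INFUSX/INFIMPX/FACIMPX pattern: average response over all answers
    in the sample, divided by the maximum value of a response."""
    answers = [a for r in respondents for a in r]  # flatten the sample
    return sum(answers) / len(answers) / max_value

# Hypothetical city: counts for the 11 INFO facility types, population 20,000.
info = [3, 1, 12, 8, 15, 4, 1, 2, 3, 1, 5]
print(infavx(info, 20_000))  # facilities per 1,000 residents

# Hypothetical sample group: three respondents' answers to the 12 INFSO items.
infso = [
    [1, 2, 0, 1, 2, 2, 0, 1, 1, 2, 1, 0],
    [2, 2, 1, 0, 1, 2, 0, 0, 1, 2, 2, 1],
    [0, 1, 1, 1, 2, 1, 0, 1, 0, 1, 1, 0],
]
print(response_index(infso, INFSOMAX))  # INFUSX for this city and group
```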

Statistics for Inference

Our sample groups a and b represent “before and after” samples, i.e., business people before having made a commitment to starting a business and after having established one. A key part of the project is to determine to what extent these sample groups differ. The only data we have on individual businesses come from the questionnaires, to be administered in three of the cities. We will test for significant differences between the responses in each of three parts.

Information Use

Compute the mean, variance, and Student's t of questionnaire responses, Part II-1, by city and sample group. Test for significance of the difference.

Information Importance

Compute the mean, variance, and Student's t of questionnaire responses, Part II-2, by city and sample group. Test for significance of the difference.

Factors of Importance

Compute the mean, variance, and Student's t of questionnaire responses, Part III, by city and sample group. Test for significance of the difference.
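A minimal sketch of this comparison, with invented responses and the standard two-sample Student's t test from SciPy (the study does not prescribe any particular software):

```python
# Hypothetical Part II-1 responses for one city: group a (pre-registration)
# versus group b (established companies), coded 0-2 as above.
from statistics import mean, variance
from scipy import stats

group_a = [1, 0, 1, 2, 1, 0, 1, 1]
group_b = [2, 1, 2, 2, 1, 2, 2, 1]

print(mean(group_a), variance(group_a))  # sample mean and variance, group a
print(mean(group_b), variance(group_b))  # sample mean and variance, group b

# Two-sample Student's t test for a difference in means
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.3f}, p = {p:.3f}")
```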

Regressions

Initially, we looked for correlations among our basic data. We found, not unexpectedly, that population correlates strongly with measures of information availability, per capita income, and per capita retail sales, indicating that, in general, the greater the population, the more money and the more information services are available. At the time this interim report was written, we had not yet had an opportunity to compare information seeking and valuing practices with these basic variables.

When we are able to administer our questionnaires, we will look for relationships between the two sample groups (a = those thinking about starting, b = newly incorporated businesses) and various combinations of environmental factors (economic, population), information availability, information use, and importance rating data.
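The exploratory step just described amounts to computing a correlation matrix over the city-level variables. A sketch with invented values for six cities (the actual data remain with the authors):

```python
# Hypothetical city-level variables for six cities (invented values).
import numpy as np

pop    = np.array([3377, 315, 306, 148, 118, 69])        # population, thousands
infavx = np.array([4.1, 3.2, 3.0, 2.6, 2.5, 2.2])        # information availability
di     = np.array([19.1, 17.2, 16.8, 15.9, 15.5, 15.0])  # per capita DI, $000
rs     = np.array([9.8, 9.1, 8.9, 8.4, 8.2, 8.0])        # per capita retail sales, $000

# Pairwise Pearson correlations among the basic variables
print(np.corrcoef([pop, infavx, di, rs]))
```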

Data Collection Plan

What we would like to do, but what is well beyond the scope of this pilot project, is to locate prospective entrepreneurs earlier in their careers and do the data gathering and analysis based on the type of industry involved and some personal characteristics of the persons involved. Then, we should follow the company over at least several years, watching for changes in size, sales, profitability, or product scope. Such a project would require several years of planning and monitoring.

We have selected 15 cities in the Province of Ontario. They are geographically well distributed over the province, with some allowance for our limited travel budget and the large expanse of the province. Five each are in the population categories of 8,000-25,000, 25,000-100,000, and more than 100,000 people. The cities and populations (Canadian Almanac and Directory 1995) are shown in Table 1.

Our plan is to send the questionnaires to cooperating officials in small business self-help centres in these cities. In three cities, one in each population group, we will conduct personal interviews on the same basis: some with people just approaching the registration process, some with companies established for a set period of time. These will be in-depth interviews seeking to learn the respondents’ attitudes toward information and information sources. We require approval and assistance from several provincial government offices; this has been requested and appears to be viewed with favour.

Table 1. Cities used in the study and their populations. All are in the Province of Ontario, Canada.

City                       Population
Toronto                     3,377,000
London                        315,000
St Catharines-Niagara         306,000
Sudbury                       148,000
Thunder Bay                   118,000
Peterborough                   68,750
Barrie                         66,000
Clarington                     54,000
Belleville                     37,243
Stratford                      27,666
Simcoe                         15,539
Midland                        13,865
Collingwood                    13,505
Kapuskasing                    10,342
Smiths Falls                    9,396

Source: Canadian Almanac and Directory 1995.

Current Status

We have collected almost all the basic data on economics and information availability. This comes from readily available sources, such as almanacs and telephone directories. To administer a questionnaire or interview entrepreneurs in depth, we must have the authorization and cooperation of two provincial ministries, both of which have been most helpful to date. We expect to complete this phase by September and the analysis of data by December 1995.

References

Canadian Almanac and Directory 1995. Canadian Almanac and Directory Publishing Company Limited, Toronto, ON, Canada.

Chatman, E.A. 1995. The impoverished life-world of outsiders. Journal of the American Society for Information Science. (In press)

Chatman, E.A.; Pendleton, V.E.M. 1995. Knowledge gap information seeking and the poor. School of Information and Library Sciences, University of North Carolina at Chapel Hill, NC, USA. (Mimeo)

Childers, T.; Post, J.A. 1975. The information poor in America. Scarecrow Press, Metuchen, NJ, USA.

Dervin, B.; Greenberg, B.S. 1972. The communications environment of the urban poor. In Klein, F.J.; Tichenor, P.J., ed., Current Perspectives in mass communication research, vol. 1. Sage Publications, Beverly Hills, CA, USA. pp. 195–234.

Evans, A.J.; Rhodes, R.G.; Keenan, S. 1977. Education and training of users of scientific and technical information. UNISIST Guide for Teachers. United Nations Educational, Scientific and Cultural Organisation (UNESCO), Paris, France.

Menou, M.J. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Mintzberg, H. 1973. The nature of managerial work. Harper and Row, New York, NY, USA. pp. 221–229.

Richardson, J.G. 1987. Capacity of the North Carolina agricultural extension service to deliver technological information: Perceptions of agricultural producers who are users of its services. North Carolina State University, Raleigh, NC, USA. (PhD thesis)

Scott, R.V. 1970. The reluctant farmer. The rise of agricultural extension to 1914. University of Illinois Press, Urbana, IL, USA.

Stone, M. 1993. Assessment of indicators and the impact of information on development. Canadian Journal of Information and Library Science, 18(4), 50–63.

Warnock, P. 1992. Surveying client satisfaction. Journal of Extension. 30(Spring), 9–11.

Appendix A: Questionnaire

The original questionnaire is contained in Tague-Sutcliffe et al. (this volume). The changes were not complete at the time of the workshop; this is a brief summary of the intended content. Section 1 asks for information about an existing company: when incorporated, number of employees, type of business, whether or not profitable. Section 2 asks about use of various information sources, recording for each type whether it was never, sometimes, or frequently used. It also asks for an assessment of the importance of various classes of information, such as about financing the business, suppliers, or government regulations. Section 3 asks about factors deemed important in the success of the business, such as availability of financing, location, or dollar exchange rate.

Appendix B: Government Agency Information Offices

The following provincial and federal agencies may have information offices in any of the cities in our sample:

• Ministry of Economic Development and Trade: Small Business Ontario Offices

• Ministry of Consumer and Corporate Relations

• Ontario Development Corporation

• Eastern Ontario Development Corporation

• Northern Ontario Development Corporation

• Ministry of Culture, Tourism and Recreation (Tourist Establishment Licences)

• Liquor Licence Board of Ontario

• Customs and Excise Division, Revenue Canada (import/export)

• Ministry of Transportation (Delivery and Transport licences)

• Health Protection Branch/Health and Welfare Canada (Food and Drug Act Regulations)

• Consumer and Corporate Affairs Canada (Packaging, Labelling)

• Ministry of Housing (Building codes)

• Ministry of the Environment (Pollution Control)

• Ministry of Finance/ Retail Sales Tax Offices

• Revenue Canada, GST (Goods and Services Tax) Information Offices

• Revenue Canada/District Taxation Offices

• Employer Health Tax Regional Offices

• Consumer and Corporate Affairs Canada/Patent Office

• Workers’ Compensation Board

• Ministry of Labour/Industrial Health and Safety District Offices

• Ministry of Health/Health Insurance Offices

• Industry Canada/Business Service Centre

• Industry, Science and Technology Canada/Licensing Opportunities Section


Related Impact Activities


Benefit-Cost Analysis Progress Report: Applications to IDRC Impact Indicators Research

Forest Woody Horton, Jr1

Background

From the very beginning of the special research program initiated by the International Development Research Centre (IDRC) in 1992, it was clear that a practical guide, designed specifically to help the information community in developing countries to examine more systematically the benefits versus the costs of potential resource allocations to information activities, was missing. Very useful discussions during the early stages of the research program (e.g., the computer conference and the Nairobi workshop) helped shape both the format and the content for such a guide.

The author was asked in late 1993 to develop a handbook closely tied to the research project’s findings, conclusions, and recommendations, as they began to emerge and become formulated and documented during the 1992-1993 time frame. The author was also asked to take account of past IDRC experience with information projects and, to that end, the REDATAM project was examined in depth.

The guide was published in June 1994 under the title “Analyzing Benefits and Costs: A Guide for Information Managers” (Horton 1994). The methodology contained in the guide reflects the conceptual approach recommended by the IDRC research team. This paper briefly traces the background of the guide’s development, outlines the recommended benefit-cost analysis (BCA) methodology, and makes proposals for possible next steps to refine and test the BCA methodology within the framework of the case studies and research projects currently under way.

The proposal to IDRC suggesting the need for a practical benefit-cost analysis guide for use by the information communities in developing countries noted that benefit-cost analysis, or “BCA” (sometimes also called cost-benefit analysis, or “CBA”), is increasingly used in both the public and the private sectors

1Information Consultant, 500 23rd Street NW, Suite B901, Washington, DC, 20037, USA.

as an aid to making decisions. Everyone makes decisions about how to spend money and resources in exchange for expected results, and all of these decisions have a common denominator — they all weigh, more or less consciously, the benefits and costs of one or more alternatives. The decision to buy or rent a house, for example, is a benefit-cost decision, and so is the choice of buying one personal automobile rather than another. So is a decision by a developing country to adopt one course of action as “preferred” over one or more other alternatives that are formally or informally considered, whether we are talking about projects in the health field, the energy field, the environmental field, the information field, or any other field.

Most benefit-cost decisions, even in project management, are made informally, especially where the costs are relatively modest and where the preferred course of action is so incontrovertible as to preclude the need to study any other alternatives. For example, a manager, policymaker, or other decision-maker might base his or her decision on a briefing, which might or might not be detailed, or perhaps a more formal presentation.

At some point, however, a project becomes so big, and the elements of the decision become so numerous and complex, that only a structured BCA process will allow one systematically to identify, organize, analyze, and interpret all of the data and information the decision-maker needs to make an intelligent choice.

Although the dividing line between projects that require a formal BCA approach, and those that do not, may seem at first rather obvious, in practice, it takes experience, many facts, and good judgment to determine which decisions will benefit from BCAs and which will not.

Developing countries must make these kinds of decisions all the time. As mentioned, sometimes the scope and complexity of a project are small and simple enough so that a formal, structured approach to identifying feasible alternatives and selecting a preferred alternative are not required. In other words, there seems to be a broad consensus from the start that a certain avenue being considered is clearly the superior one.

Increasingly, as lower level decision-makers come under pressure to justify their planned expenditures with greater care and precision before higher level decision-makers, and in response to greater demands for project accountability for results, policymakers and decision-makers must identify, explain, and defend the basis for their decisions. In short, officials higher up the chain of command are demanding not just the reasons for selecting a preferred alternative, but also why other alternatives were not considered, some of which may be alleged to be cheaper and perhaps even more cost-effective than the one actually adopted.

Over the last decade, the conceptual and philosophical underpinnings of formal BCA, as well as the methodological, technical details of how to apply the method to an actual decision problem (such as the challenge faced by policymakers and decision-makers to adopt the “best” alternative from among several “competing” ones), have been greatly refined. Today, there exists an excellent body of literature that can be adapted and tailored to a decision-maker’s unique needs.
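To illustrate the mechanics (this sketch is ours, not the procedure prescribed in the guide), a structured BCA commonly discounts each alternative’s benefit and cost streams to present value and then compares net present values and benefit-cost ratios. A minimal Python sketch with invented figures:

```python
# Illustrative only: invented 4-year benefit and cost streams for two
# hypothetical alternatives, compared by net present value (NPV) and
# benefit-cost (B/C) ratio at an assumed 10% discount rate.

def present_value(stream, rate):
    """Discount a stream of yearly amounts (year 0 first) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

RATE = 0.10  # assumed discount rate

alternatives = {
    "A": {"benefits": [0, 40, 60, 80], "costs": [100, 10, 10, 10]},
    "B": {"benefits": [0, 30, 50, 60], "costs": [70, 8, 8, 8]},
}

for name, alt in alternatives.items():
    b = present_value(alt["benefits"], RATE)
    c = present_value(alt["costs"], RATE)
    print(f"{name}: NPV = {b - c:6.1f}, B/C = {b / c:.2f}")
```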

IDRC Special Impact Assessment Research Program

In her foreword to the BCA guide, Martha B. Stone, Director-General, Information Sciences and Systems Division, IDRC, said:

In a time of increasing competition for decreasing funds, any previously held assumptions about the value of information can no longer be left unchallenged. The merits of investing in information systems and services must be made more explicit, and nowhere is the pressure to do so more acute than in the developing countries. (Menou 1993)

This fundamental concern was the genesis of a special research program initiated by IDRC in 1992. The research program has been investigating how the impact of information on development might be better defined and assessed. The first phase has surveyed current thinking and approaches, and has led to the formulation of a methodology for use in undertaking this type of evaluation.

This research program is multidisciplinary in its approach, taking into account economic, behavioural, technological, and other considerations. During the analysis of existing tools that might be applied to this field, the study identified a gap. What was missing was a practical guide designed specifically to help the information community in developing countries to examine more systematically the benefits versus the costs of potential resource allocations to information activities.

Although in many sectors the concept of benefit-cost analysis is not new, its application in the current context, i.e., the information field in developing countries, is somewhat novel. The need for such a management tool has become increasingly evident in the current financial climate. IDRC believes the guide will enable information managers, and indeed others, to undertake the type of analysis that will help demonstrate more convincingly the value of investing in information.

Further details regarding the background, goals, methodology, outputs, and other aspects of the overall research program can best be obtained by reviewing the 1993 publication, “Measuring the Impact of Information on Development” (Menou 1993). The need to link benefits with costs is elaborated on page 21, and more fully in chapter 3 beginning on page 37. The application of benefit-cost analysis to an information project is presented in concrete detail, using an illustrative project (the development of a national AIDS database and information system), in appendix 2 of Menou (1993).

Current Context and Next Steps

The development of the BCA Guide has paralleled the launching of the various case studies and research projects that are a part of the effort to develop an overall, generic assessment model. It was decided that, although the formal development of the BCA methodology and the publication of the guide documenting that methodology could be completed in 1994, the testing of the BCA methodology contained within the guide should await further ripening of the various case studies and research projects for several reasons:

• One of the key objectives of the case studies was to test the basic elements of a generic assessment model. It could be a mistake, therefore, to test the BCA methodology prematurely before research participants and IDRC were able to reach a general consensus reaffirming the validity of the overall recommended conceptual framework and a generic model;

• From a priority standpoint, it was important to launch the case studies first so that the testing of a generic assessment model and basic conceptual framework could proceed in accordance with generally established principles of the “scientific method”; and

• Trying to test the details of the BCA methodology at the same time as a basic assessment conceptual framework would present enormous complications in the overall management of the research program, as well as in sorting out causalities when formulating findings and conclusions.

The overall research program is now, however, coming to a point where it would be useful to select one or more of the ongoing case studies (or perhaps an existing case study and a new one) to begin testing the BCA methodology. Although in “the real world” the decision to apply BCA is usually made before an information project is undertaken, it would not be inappropriate to select an ongoing case study project where the objective is to test a specific proposed BCA methodology, which is exactly the situation here. Moreover, applying the methodology before the case studies end could have the added benefit of sharpening the focus of the basic assessment conceptual framework and generic assessment model, because the methodology could be tied more closely to “real life” project examples (the case studies) and thus integrated more closely into the overall research program. Otherwise, the BCA methodology as it now appears in the guide remains a parallel, standalone effort rather than an integral part of the core generic assessment model effort.

The CARICOM (Caribbean Community) project, one of the ongoing case studies, is a case in point (see Menou 1993, p. 158). Discussions about the CARICOM project at the postconference workshop and later were exploratory. No attempt was made to identify definitive measures of input and output, types of benefits, or derived indicators. Nevertheless, the conversations did point to areas of benefit, likely indicators, and other illustrative examples.

The following recommendation is, therefore, made to the participants and IDRC for their consideration:

• Select one or more existing or new case study projects to test the BCA methodology,

• Brief the key project principals as to goals and procedures for the test exercise (participants should read the BCA guide),

• Apply the BCA methodology and record findings and conclusions,

• Obtain feedback on draft findings and conclusions from principals, and

• Finalize and distribute the report to all participants.

References

Horton, F.W., Jr. 1994. Analyzing benefits and costs: A guide for information managers. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Menou, M.J. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, ON, Canada.


Information for Policy Formulation: Latin America and the Caribbean

Fay Durrani1

In Latin America and the Caribbean, the International Development Research Centre (IDRC) seeks to combine innovative aspects of information systems development with research into the impact of information and information and communication technologies. The current focus is on information for policy formulation, with particular initiatives aimed at assessing the impacts on the development of policies and on related development areas such as productivity and competitiveness.

Regional Trends

The waves of globalization, privatization, and decentralization are major forces affecting the socioeconomic scenario and, consequently, policies are now being formulated by groups and individuals outside the traditional government structures. The characterization of policymakers in the Caribbean done by Boissiere (1994) incorporates a sample of actors from the public and private sectors and from civil society. In some areas of Latin America, the situation is complex and dynamic and can include, for example, a greater percentage of representatives from local government and community groups as participants in the process of policy formulation.

Forms of information access and utilization have also changed significantly in Latin America and the Caribbean over the past decade. Major influences have been increasing globalization of the production and marketing of information and communication technologies (ICT), wider access to these technologies, increased capacity to manipulate more user friendly systems, and the development of models of information systems for decision-making, which are being incorporated into practical use.

These developments still impact unevenly on the population of the region. Capacity to exploit new information technologies for increased information access

1Senior Program Officer, IDRC Regional Office for Latin America and the Caribbean, Centro Internacional de Investigaciones para el Desarrollo (CIID), Casilla de Correo 6379, Montevideo, Uruguay.

has been developed particularly by groups of information specialists and researchers and the more advanced nongovernmental organizations (NGOs). The development of data communications, particularly the subsidized access to Internet, has resulted in collaborative research and networking among some groups in the region, and across regions, and increased access to points of regional information for policy- and decision-making.

Some of these points, such as CLAD’s Information System for Integrated State Reform and CEPAL’s INFOPLAN, provide information to policymakers on issues of state reform in the region and are aimed at having an impact on policy formulation, while REDATAM Plus is used in several countries in the region to provide objective bases for decisions and policies. Chile provides an interesting example of the use of REDATAM Plus GIS2 and ARC INFO3 as the basis for determining priorities in resource allocation.

Research Issues

Although researchers, information specialists, and some business persons have been able to develop some capacity to access information via the new channels, policymakers, community leaders, and small-scale business managers are less able to exploit the advantages of the new information technologies. The capacity of ICTs to cross institutional barriers and to encourage community collaboration in policymaking has also not been effectively exploited. In some cases, there is an excess of information available, but there have been difficulties in accessing the crucial information and research results needed for policy formulation. Sometimes the problems relate to lack of connectivity to the points of information services; in other cases, there is a need for access to research results and for analysis and synthesis of information from the variety of sources available in the region.

2REDATAM is the abbreviated form of REtrieval of DATa for small Areas by Microcomputer, and REDATAM Plus GIS (R+GIS) is a later version of REDATAM for use with Geographical Information Systems such as ARC INFO. Both REDATAM and REDATAM Plus GIS were developed and are maintained by ECLAC/CELADE. Applications are being developed in collaboration with the University of Waterloo with support from IDRC. Some 728 copies of REDATAM Plus GIS have been distributed to institutions in Africa (29), Europe (16), the Caribbean (50), Latin America (572), North America (43), Asia and Pacific (17), and West Asia (1).

3ARC INFO is the trade name for a geographical information system, a software package compatible with REDATAM.

The urgent need in Latin America and the Caribbean, as in other regions, is to strengthen the capacity of those who make policy in all areas to access information and research results via the most effective paths. There is also a need for policymakers at all levels to understand the potential of these technologies for providing access to information, to incorporate tools and methods for increasing this access, and to exploit existing ICTs for increasing organizational productivity and access to information for policy formulation.

The models under development have begun to have an impact on policy formulation. These include the Information System on Children (SIPI), which has stimulated more effective decision-making in relation to marginalized children in Uruguay and is beginning to have an impact in Ecuador as well. Another model is the Information System for Municipal Management in Chile, which is aimed at enabling municipal managers to use information from the central government systems to inform their decision-making and management.

The important common thread in the program is the definition and analysis of the types of information required for policy formulation, the implementation of appropriate channels for information delivery, and the evaluation of the information use environment. The participation of policymakers in research into these issues is, therefore, an important focus of current initiatives. The general orientation can be summarized as follows.

Objectives

General Objectives

The general aim is to develop closer, interactive linkages between information systems and the processes of policy development and policy formulation. This is being done by optimizing access to information for policy formulation and by assessing and evaluating the impact of information and of information and communication technologies on policy formulation.

Specific Objectives

• Assess the impact of information on policy formulation in the areas of social policy, information policy, and environmental management policy;

• Expand access to regional information on key areas of sustainable development through specialized information services to selected user groups;

• Create value-added products synthesizing research and data on priority topics in social policy, environmental policy, and information policy. Products and services will be developed in relation to the needs of selected user groups;

• Enhance regional capacity for electronic networking, and computer conferencing on the part of selected user groups; and

• Monitor regional developments in access to, and research on, electronic communication and the impacts of its use. This will be done with a view to informing policy options and guiding the sustainability of the electronic communication facilities in the region.

Expected Results

• Analyses of the impact of information and the use of information technology on policy formulation in the areas of environmental policy, social policy, macroeconomic policy, and information policy;

• Closer linkage between research results, other development information, and policy formulation. This will be achieved through the participation of policymakers in areas of research and the provision of information products for policymakers, community leaders, and researchers on environmental policy, social policy, and information policy; and

• Established partnerships with private sector companies providing information or information technology services, and with development banks and other donors in the region.

With the changes in the region, there has been a need to review the scope of policymaking activities and, consequently, the range of policymakers. In an assessment done last year of experiences in the region, the current trend in policymaking can be seen as:

• Extending beyond the scope of central government to include local government and civil society,

• Involving processes of consultation and consensus building among government and community groups, and

• Involving a complex series of information and communication activities in which all the actors participate.

The group of potential users of information for policy formulation, therefore, includes all those who define or execute a given policy and, potentially, the entire population as participants in or implementors of the policy decisions. The needs of each group in terms of data, analysis, and communication would need to be assessed on an ongoing basis to determine the information requirements of the policymaking process.

Impact of Information and Communication Technologies

Although an important goal has been increasing end users’ access to information, the logical next step is interactive communication between the users or clientele and the information systems. Advances in technologies in the region are beginning to change the bases on which users gain access to, and are able to interact with, the information systems. In this context, electronic connectivity has begun to stimulate interactive communication between users and the information systems, moving beyond basic access.

A recently funded project “The impact of Information and Communication Technologies on the Productivity and Competitiveness of SMEs” (Ecuador and Argentina) (IDRC 1994) illustrates some of the research questions and will be identifying research solutions as they relate to increasing the competitiveness and productivity of SMEs. This research area is being developed on the assumption that information and communication technologies are a set of enabling technologies that can have widespread application across all sectors but that have particular potential for improving productivity and competitiveness. They also have significant unexploited potential at the community level.

Appropriate adoption, adaptation, transfer, and use of information technologies can only take place with informed choices and the development of user capabilities. This requires active involvement on the part of users, particularly the owner/managers of the enterprises, in the identification of problems or barriers to information technology selection, adoption, transfer, and use, and in the definition of channels for overcoming policy challenges.

This area of the program is based on the need for increasing competitiveness and productivity in the region. With the focus throughout the region on market-based resource allocation, owner/managers of SMEs have recognized the need to increase their capacity to differentiate products and services and to link electronically with customers and suppliers. ICTs have been recognized as instruments of this change, but the degree of impact is still being researched at a global level. The questions relate to the degree of impact of ICTs that can be anticipated as an input to national and firm-level policies.

The Universidad de Buenos Aires Maestría en Política y Gestión de la Ciencia y la Tecnología (UBA) and the Instituto de Investigaciones Socioeconómicas y Tecnológicas (INSOTEC) will jointly implement a research program focused on guiding managers of SMEs in Argentina and Ecuador in determining the value of ICTs and in deciding on the effective selection of ICTs for maximizing productivity and competitiveness.

In the short term, the researchers, in collaboration with associations of SMEs, will analyze the factors influencing the adoption of ICTs, determine the characteristics of the decision-making process that promote or hamper the adoption of information and communication technologies by SMEs, and evaluate the impacts of such adoption on the productivity and competitiveness of SMEs. The development and testing of a methodology for determining the value of ICTs will be an important output of this research, and the delivery of this methodology to the participating enterprise associations will be the principal result.

The research will establish a typology of SMEs in relation to use of ICTs, and the case studies will analyze in depth the impact of ICT adoption in selected SMEs. The participation of the owner/managers, ICT suppliers, support organizations, and government officials will guide the researchers in the interpretation of the findings and in the dissemination of results. The project will be implemented in collaboration with the Camaras de Pequeña Industria de Pichincha and Guayas in Ecuador, the Union Industrial Argentina, the Camara de Industria de Procesos (CIPRA), and the Secretaria de Ciencia y Tecnica de la Nacion in Argentina. This linkage is considered essential for encouraging the participation of the enterprises in the data gathering and in the dissemination of the research.

The issues raised in this project have resulted from regional consultations held in 1993, which were aimed at shaping a regional program on information technology policy research. IDRC supported two regional consultations. The first examined the question of ICT policy in the Latin American and Caribbean region and concluded that the most effective focus for the Centre would be ICT policies related to SMEs. The second consultation was based on analysis of studies of the ICT policy experience in Argentina, Brazil, the Caribbean, and Ecuador and facilitated discussions among representatives of organizations that support SMEs. The results have provided the basis for the orientation of this project and preliminary analyses of the problems to be researched.

Although there were several possible areas of appropriate focus, the regional meeting, held in April 1993, agreed that an important one would be small- and medium-scale enterprises and organizations as the key agents for increasing productivity and for stimulating competitiveness in the region. The area of focus agreed on was the effectiveness of policymaking relating to information technologies. The subsequent analyses for Argentina, Brazil, the Caribbean, and Ecuador concluded that there are research gaps between the micro- and macroeconomic analyses of technology adoption and the implications of adoption for company profitability and competitiveness.

The regional consultation of December 1993 recommended sector-specific studies to examine the contribution of ICTs to the international competitiveness of SMEs. The problems in measuring the productivity impacts of ICTs on SMEs were recognized and suggested for further research. Norman Girvan’s paper on the Caribbean (Girvan 1994) suggested that the impacts be examined in relation to increased capacity to differentiate products and services, market penetration, and customer reach and service. The “critical success factors” identified in the discussions were:

• Strategic planning

• Commitment of the owner/managers

• User participation

• Willingness to change work procedures

• Adequate staff capacity and training

In concluding the study on Argentina and Ecuador, Carlos Correa (Correa 1993) identified the impact on competitiveness of the adoption of different types of ICTs as one of the least explored areas. Wiseman and MacMillan (1994) also highlight some of the avenues for using information systems as competitive weapons. The methodological difficulties of such analyses were also recognized, and are identified in the proposal.

These difficulties include the need to:

• Isolate the impacts of ICTs from other micro- and macroeconomic factors that affect productivity and competitive advantage;

• Define competitive advantage in terms that are relevant and operative vis-à-vis the research (such as increased market share); and

• Appropriately define productivity in the context of computerized systems.

With a view to clarifying where models for measuring impact exist and have been tested, the Centre commissioned a literature survey under contract with specialists in this field. Louis A. and Elizabeth Lefebvre of the École Polytechnique, Université de Montréal, have completed a study analyzing and evaluating, via the literature, the methodological models used in Canada and in some of the OECD (Organisation for Economic Co-operation and Development) countries for measuring the adoption and impact of information and communication technologies on the productivity and competitiveness of SMEs (Lefebvre and Lefebvre 1995). The study particularly addresses:

• Identifying and analyzing the adoption of information and communication technologies,

• Determining the characteristics of the decision-making process (i.e., the way in which the decision to adopt these technologies is made) that promote or hamper technology adoption, and

• Evaluating the impacts of such adoption.

The Lefebvre study has researched various aspects related to adoption, diffusion, and impacts of ICTs with particular focus at the firm level. The main purpose was not only to identify the issues but also the measures and constructs that ultimately allow the design of data-collection tools.

The study first defines information and communication technologies according to different models and examines the rate of diffusion of these technologies in different countries. It then analyzes the internal and external factors affecting the adoption of these technologies, and discusses the characteristics of the decision-making process — considered a prime adoption factor. Finally, it assesses the relationship between ICTs and productivity, the impacts of information and communication technologies on key competitive dimensions, performance, work and employment, and operational measures for the impact of ICTs.

The Lefebvres conclude that data-collection tools should be designed for, or adapted to, the broader research objectives pursued, the specific environments to which they are addressed, and the internal validity of the pertinent dimensions to be included in a specific study. This analysis of the literature will be used by the researchers as a support for developing the details of the methodology and the data-collection tools to be used in undertaking the research project.

The effective implementation of ICTs within SMEs requires firm-level policies. In this, the owner/managers are the major players. At the same time, effective implementation at the firm level is enhanced by industry-level policies, such as technical assistance, cooperative purchases, subsidies, etc., which are implemented by industry associations such as those linked to the research groups, or by industry support organizations such as INSOTEC.

Support for the redefinition of policies at the national level depends on lobbying by groups such as the industry associations and on the implementation of policies for adequate, compatible, cross-sector infrastructure, including labour. These policies, in the form of laws, special grants, and exemptions, are defined by agencies such as the Secretaría de Ciencia y Técnica de la Nación in Argentina and are implemented by public or private development banks or equivalent agencies.

The areas of testing agreed upon at the project development meeting in January 1995 were:

• Lessons to be learned in relation to the adoption of ICTs by SMEs.

• Decision-making mechanisms used by SMEs for selection of ICTs.

• Procedures for implementing ICT systems in SMEs.

• Productivity impacts of adopting ICTs.

• Competitiveness impacts of adopting ICTs.

The two research groups will be working with manufacturing- or industry-related service companies. Both will focus on applications in management (accounting and finance, production control and management, and communication). This focus, agreed upon at the project development meeting, will provide the areas of commonality and will enable comparisons between the developments in the companies studied in Argentina and Ecuador.

The grouping of potential participating enterprises is exceedingly varied, and it was agreed at the project development meeting that, although there is a core group of leaders that can be expected to play major innovative roles in the process of increasing competitiveness, the best strategy would be to network a mix of enterprises covering the leaders, both active and traditional.

The project recognizes the difficulties of designing the detailed methodology. These relate to the need to isolate the impacts of ICTs from other factors that affect productivity and to define productivity and competitive advantage in relevant terms. Candidate productivity indicators have been outlined in the proposal; in developing the detailed methodology, these will be assessed as a means of selecting the productivity indicators to be incorporated into the analysis.

As a result of the consultations between the two groups and with the Centre, the sectors to be studied will include:

• Food and drink

• Textiles, apparel, and the leather industry

• Chemical substances and chemical products

• Metal products, machinery, and equipment

As the project focuses on the impact of the ICTs, it was agreed that the ICTs to be studied will be those used by the majority of SMEs, thus providing a basis for analysis of experience rather than of potential. An initial survey will provide baseline data on the areas of study, and its results will provide a base for selecting firms for the case studies. The SMEs for the survey will be drawn from the membership of the associations and will include adopters and nonadopters of ICTs. A strong element of the methodology is the participation of the enterprises at several stages of the research and the link with the SME associations, which will stimulate the interchange of issues and concerns and will provide ready channels for the dissemination of the results.

The resulting evaluation of the use of ICTs in the samples will permit an identification of: the level of incorporation of these technologies in the enterprises, the existence of cooperative relations between SMEs and other enterprises that influence the access to and application of ICTs, the areas of enterprise management in which there is greater utilization of ICTs, and the range of ICTs available for SMEs.

The project’s impact is anticipated on two levels: on the researchers, who will have tested a methodology for measuring productivity and competitiveness in this context, and on the enterprises and participating industry associations. In the case of Argentina, the main links between the research and its future implementation are the collaboration with the Union Industrial Argentina, la Cámara de Industria de Procesos (CIPRA), and the Secretaría de Ciencia y Técnica de la Nación. In the case of INSOTEC, the Cámaras de Pequeña Industria de Pichincha and Guayas will be the main collaborators. Impact will, therefore, be at the level of the chambers and of INSOTEC as a service provider for SMEs.

Information and Policy Formulation

Two other projects relating to information for policy formulation in Latin America and the Caribbean and that will be presented in these proceedings are “Impact of Information on Policy Formulation in the Caribbean” (Chambers and Boissiere, this volume) and “Information for Decision-Making in the Caribbean Community” (Collins, this volume). They illustrate several of the objectives that guide the current program and the scope for assessing the impact of information and information technology on policy- and decision-making. They focus on a range of user groups, emphasize complementary areas of information services, and will provide complementary results from their case study components.

Another project supported in this area is the “Network of Networks: Latin America,” which links information providers and researchers and has as its general objective increasing access by end users to the information held by 17 regional networks. Electronic communication and dissemination of the databases on CD-ROM have enabled the information to be more widely disseminated, and some users have been able to interact electronically with the information providers and with some information systems.

The next phase is expected to extend the interaction among users, information specialists, and systems. The developments in Internet access now make this kind of interaction much more widespread than would have been the case in 1991, when the current phase was being developed. Users and information specialists are now able to be more selective about what they retrieve and to move toward the definition and creation of value-added products. As Negroponte (1995) points out in his recent work “Being Digital,” hypertext removes the limitations of the printed page and enables users, be they information scientists or clients, to create new meaning from information that might have been structured by the original producer in a different way.

As a first step, these three projects will try to isolate the “benefits” of information use in the context of policy formulation. The area that I think needs further research and analysis is the definition of the impacts on specific areas of policy formulation. Tracking the flow of information and matching it with a given range of policies will require that each research activity incorporate the relevant policies against which the impact of information can be assessed.

References

Boissiere, N. 1994. A methodology for selecting a sample target group for information services in the Caribbean Community. International Development Research Centre (IDRC), Ottawa, ON, Canada. (Mimeo)

Correa, C. 1993. Diffusion and information technology policies for small and medium sized enterprises in Latin America and the Caribbean. Paper originally presented to the Meeting on Information Technology Policies for SMEs in Latin America and the Caribbean, 6-8 December 1993, Montevideo, Uruguay.

Durrant, F. 1995. The role of information in the process of social policy making. In Morales-Gomez, D.; Torres A., M., ed., Social policy in a global society. International Development Research Centre (IDRC), Ottawa, ON, Canada. (Mimeo).

Girvan, N. 1994. Information technology for small and medium enterprises in small open economies. Paper originally presented to the Meeting on Information Technology Policies for SMEs in Latin America and the Caribbean, 6-8 December 1993 (revised January 1994), Montevideo, Uruguay.

IDRC (International Development Research Centre). 1994. The impact of information and communication technologies on the productivity and competitiveness of SMEs. IDRC Project Summary 94-8762. IDRC, Ottawa, ON, Canada.

Lefebvre, E.; Lefebvre, L. 1995. Methodologies for measuring the adoption and impact of information and communication technologies on the productivity and competitiveness of SMEs. International Development Research Centre (IDRC) and CITI, Ottawa, ON, Canada.

Negroponte, N. 1995. Being digital. Alfred A. Knopf, New York, NY, USA.

Wiseman, C.; MacMillan, I.C. 1994. Creating competitive weapons from information systems. Journal of Business Strategy.


Measuring the Effects of Information on Development

Warren Thorngate1

About a year ago, a colleague challenged me to learn something about measuring the effects of information on development by following the 28 participants of the Education Policy Analyst Workshop organized last August by Alfredo Rojas of REDUC (the Latin American Educational Information and Documentation Network) in Santiago, Chile.

I would like to discuss here something of what has become of its graduates, hoping to explicate what I think are some important methodological issues in the evaluation of information effects. By way of background, I am neither an information scientist, nor a librarian, nor a development expert. I am instead a social psychologist who teaches graduate courses in statistics and research methods and who has spent the last 25 years conducting research on the use of information in decision-making.

On the one hand, this background makes me rather different, and perhaps it should disqualify me from discussion of our listserv topic. On the other hand, the ideas of Martha Stone and of Michel Menou that have inspired our listserv seem, quintessentially, multidisciplinary. It may, therefore, do no harm to give a report from that perspective.

I think that Menou (1993)2 noted the possible contributions of different disciplines in developing measures of the effects of information on international development somewhere toward the end of his book. So perhaps a psychological perspective on the use of information may complement your methodological work nicely. Yet different perspectives are not necessarily complementary. Often they conflict.

Possibly, what experimental and social psychologists know about information use may conflict with those viewing information as a commodity measurable by traditional means and amenable to classical statistical and

1Professor, Psychology Department, Carleton University, Ottawa, Ontario, Canada K1S 5B6.

2Menou, M.J. 1993. Measuring the impact of information on development. International Development Research Centre (IDRC), Ottawa, Ontario, Canada.

cost-benefit analyses. So, to be true to my discipline and my intuition, I would like to express some doubts about the possibilities of evaluating the impact of information on development using the usual variations of evaluation methodologies, statistical procedures, and cost-benefit analyses, and then to propose a preferred alternative.

When I think of examining the effects of information on development, I think of how people seek and use information. It is a psychological perspective that seems to begin where libraries end. As it happens, psychologists know quite a lot about how people seek and use information. One of our own, Herb Simon, won a Nobel Prize in economics for his work on related issues (bounded rationality), much to the consternation of classical economists.

But psychologists are not alone. Sociologists know a lot about information use, as do those studying organizational behaviour and communication. There are journals full of this kind of information, including “Knowledge: Creation, Diffusion, and Utilization” (now called “Science Communication”), “Behavioral Decision Making,” and “Organizational Behavior and Human Decision Processes.”

I see few references, however, to research from the information sciences within their covers. Those of us who publish in these journals should know more about your research. Maybe it would be helpful if you know more about ours. I am one of those psychologists who is largely ignorant of the information sciences. But reading Michel Menou’s book and following the ideas of our listserv, I sense that much of the discourse is predicated on assumptions about information use that psychologists and other social scientists know to be seriously flawed.

The assumptions I detect include the following:

• Information is that which reduces uncertainty (Shannon-Weaver definition);

• By reducing uncertainty we can better predict the consequences of our actions;

• Increasing our predictive accuracy will lead to increased benefits; and

• Thus, the net value of any information can be calculated by determining the increased benefits it leads to, minus the costs of obtaining, storing, and distributing it.

Mixed with such assumptions are an odd assortment of corollaries:

• The quality of information can be assessed by noting the amount of uncertainty it reduces,

• High-quality information reduces uncertainty more than low-quality information, and

• Thus, information quality may be equated with the quantity of uncertainty reduction.

Such assumptions and corollaries seem to be well-suited for the purposes for which they were originally designed, namely, the transmission of signals down a telephone wire. But they present us with an extremely limited idea of information. What is information? It is a question no less difficult to answer than “What is development?”

Each concept can be defined in many different ways. This circumstance presents us with the danger of choosing definitions easily adapted to our measurement and statistical techniques, rather than adapting our techniques to fit more defensible definitions.

Psychologists are no more able to define information than anyone else. But at least we can argue strongly what information is not. Information is not knowledge. We consider the former to be what exists “out there” beyond our senses; it lives in nature, in print, on hard disks, in the air. Knowledge is that which exists “in here” behind our eyeballs, sitting just above uncertainty. There is no such thing as uncertainty “out there” — it is quite literally a figment of our imagination. We reduce it with knowledge, not with information. So psychologists are prone to be skeptical of measuring the effects of information on development, because we believe that it is more proper to evaluate the effects of knowledge on development.

To do so it is prudent to learn something of the relationship between information and knowledge. The relationship, unfortunately, is extremely complex. What do psychologists know about the complex relationship between information and knowledge? What does it imply for our own evaluative projects? Here are four of many things we know.

First, information, especially in its symbolic forms (including all research reports), cannot be used without prior knowledge. If I say to you, “Khaste nabashed! Kaben ashghe man,” I give you information that does not produce knowledge unless you know Farsi. If I mention the base-rate fallacy, it will mean nothing unless you know of the work of Tversky and Kahneman. Thus, an evaluation that shows no relation between information and development may only be showing a byproduct of insufficient prior knowledge. Increasing information may treat the symptom of ignorance rather than the disease.

Second, psychologists know that information affects the heart as much as the head. The classical notions of information listed earlier assume that information changes beliefs or the strength of association between ideas. They do not address the possibility that information can change values, priorities, goals, or evaluation criteria. Alfred North Whitehead made a useful distinction in judging knowledge: we can judge it as true versus false, or we can judge it as important or trivial.

Information we can count and measure is generally information that separates the true from the false; it is the stuff of science and policy implementation. The information we cannot measure or count usually separates the important from the trivial; it is the stuff of art and policy formation. The former changes our beliefs about how to do something efficiently or well. The latter changes our values about doing it at all. We must not overlook the latter in our cost-benefit evaluations, understanding that values are affected by different information than are beliefs, and in different ways.

Third, decision-makers are usually quite bad at judging how they made decisions. Instead, they construct stories that make sense of what they did. Thus, it is foolhardy to trust the self-reports of decision-makers in determining how information affected their decisions.

Fourth, psychologists know that human decision-makers almost never behave like the prescriptive models of rationality derived from economics (including cost-benefit analyses) say they should. Let us be thankful, because most of the prescriptive models are either useless or dangerous.

The weaknesses of human decision-making are numerous and many are frightening. Decision-makers are prone, for example, to seek information in biased or irrational ways, ignore most of it, and change their preferences without notice. Dozens of such weaknesses have been christened with fancy names such as confirmatory bias, regression oversight, illusory correlation, dissonance reduction, and defensive avoidance. Coupled with a common lack of skill for finding relevant information, these weaknesses often make us wonder how it is possible for decision-makers ever to make a good decision, much less to survive at all.

Of course, it was this fallibility that inspired mathematicians and economists such as von Neumann and Morgenstern to develop their calculi of choice. Only later did other mathematicians and economists demonstrate that the calculi had tragic flaws, illustrated by the St Petersburg Paradox or the Prisoner’s, Commons, or Temporal Dilemmas. Economists who have tried to resolve these paradoxes and dilemmas have succeeded only in developing new calculi to prescribe alternative choices, and eventually in developing rationalizations for every possible prescription. This reduces rational choice to a choice of rationalization. Most good policymakers know it: “Get me some data that justify my decision!” Why not? Psychologists who have studied the weaknesses of human decision-making have also developed great respect for its strengths.

We now know that it is at least as important for survival to create or recognize alternatives as it is to choose among them. Prescriptive models of decision-making do not prescribe how to create or recognize alternatives. Humans do it anyway. We do it by extracting knowledge, insight, and understanding from information in new and unpredictable ways.

Therein lies a lesson for measuring the impact of information on development. Information has potential value as well as current value. But we cannot assess the potential of information by measuring information. We can only assess the potential by measuring the creativity of the people who use it.

Psychologists, like most other sentient beings, know that people seek and use information for many more reasons than improving their policies and decisions. We seek information for excitement and pleasure and the satisfaction of our curiosity. We use information to coordinate and justify our behaviours, to gain status and power, and to adapt to changes in our circumstances. We produce and consume information to maintain friendships, to resolve conflicts, to teach, and to learn.

I mention this only to remind those of us who are trying to assess the impact of various information services on economic or policy decision-making in developing countries that we are greatly restricting the range of our focus on the impact of information on development. The methods we develop may be well suited for the focus of our examination. But, the most important impacts of information on development may lie elsewhere, and we may be looking in the wrong place.

In sum, the main lessons from the psychological study of information and its uses are these: information is necessary but not sufficient for development, its effects are usually indirect and delayed, and it is never useful on its own. These conclusions may sound pedantic. But I think they have an important implication for approaching the task of measuring how information affects development.

The value of information cannot be meaningfully assessed outside the context of its use. It cannot be meaningfully estimated by including information as a variable in some linear equation or by “weighing” it in some additive fashion against the dollars spent on producing, distributing, storing, and retrieving it.

To assess the relative value of information against its funding competitors is like assessing the relative value of food versus water for human survival. Which is more valuable, food or water? It is a meaningless question. We must have both. The value of either depends on how much we need and want and have now and in the future.

So, are we wasting our time? Should we abandon attempts to assess the value of information and to measure its impact on development? It seems too foolish to abandon the idea of assessment. But, it does not seem foolish to abandon Shannon-Weaver and economic assumptions about information, to abandon most of what we learn in those dry evaluation research books, and to change the rules of the game of cost-benefit analysis.

Psychology provides at least two different ways of looking at information that may be useful in setting or changing our course. The first is to put information and its uses in the broad context of human communication and the coordination of human activity rather than in the context of telephone communication, economics, or decision-making. In this context, information becomes the raw material of attitude change and social control, of rhetoric and political action. In this context, it becomes as important to study advertisements and editorials as it is to study Internet traffic or library use. Indeed, in this context, money itself can be seen as a symbolic form of communication, leading us to compare money against other forms of communication for its ability to coordinate human action.

Second, psychology tells us that the true currency of information exchange is not money. The true currency is attention. Attention is literally what we “pay” for information, as we “spend” time to be informed. The exchange of attention for information forms the basis of an “attentional economy” that follows rather different principles than the monetary economy we know.

I have written at length about the principles of attentional economics and some of their implications. Suffice it to say, however, that the concept suggests that we might fruitfully evaluate the impact of information on development by considering how people spend their time before and after information is available, then examining how much time is “saved” for new or more rewarding activities as a result. If we want to return to the money game, we can then estimate the worth of that time and translate it into dollars.
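To make the time-accounting suggestion concrete, here is a minimal sketch in Python; every task name, hour count, and wage figure in it is invented for illustration and taken from no study.

# Attentional accounting: value the hours an information service frees.
HOURLY_WAGE = 12.00  # assumed dollar value of an hour of the user's time

# Weekly hours a respondent reports spending on each task before and
# after the information became available (hypothetical numbers).
hours_before = {"locating suppliers": 6.0, "checking market prices": 4.0}
hours_after = {"locating suppliers": 2.5, "checking market prices": 1.0}

def weekly_time_saved(before, after):
    """Hours per week freed for new or more rewarding activities."""
    return sum(before[task] - after[task] for task in before)

saved = weekly_time_saved(hours_before, hours_after)
print(f"Time saved: {saved:.1f} h/week, worth about "
      f"${saved * HOURLY_WAGE:.2f}/week at the assumed wage")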

Reporting Information About Studies of Information

Charles T. Meadow1

A common occurrence in information science is that research papers do not make use of prior data compiled by other people. Prior theory, yes; actual quantitative data, no. One of the problems all of us have who study and try to measure the impact of information is the multiple definitions of the very word, “information.” Because the definition varies so much, measurements based on it vary also. We all know the Shannon definition,

H = −Σᵢ pᵢ log pᵢ

where pᵢ is the probability of occurrence of a given symbol (Shannon and Weaver 1949). A variation on this is that information is that which reduces uncertainty. But the reality is that it is hardly possible to measure the extent to which uncertainty is lessened in a human being on a topic of great social and economic importance to that person. We might be able to make such measurements in a highly controlled laboratory study where, unfortunately, the relationship between the variables measured and the real world is unknown. But measuring the contribution of a library or consultant to a particular decision is a different matter altogether.
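For readers who want to see the Shannon quantity at work, here is a minimal sketch in Python; the four-symbol source and its probabilities are invented purely to illustrate the formula.

import math

def shannon_entropy(probabilities):
    # H = -sum of p*log2(p), in bits; zero-probability symbols are skipped
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# An invented source whose four symbols occur with unequal probabilities.
p = [0.5, 0.25, 0.125, 0.125]
print(f"H = {shannon_entropy(p):.3f} bits")  # prints: H = 1.750 bits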

The first problem seems to be the definition of the key word, information. A second is that we do not always report the definitions of the variables we measure, or the circumstances of the measurement, with enough precision to enable others to use them.

Defining Information

It is very hard to expunge the more common meaning of information as the printed, and occasionally spoken, word. When we read about information and development, we are usually reading about the provision of information, i.e., documents, databases, or formal advice. We typically do not read about the process of converting this data-information into knowledge-information. One reason is the difficulty of measuring that process in any context, let alone that of a developing economy and social system. Another is that the conversion process, itself, is not enough. We must also measure the willingness to try to find information and the skill at doing it.

1Professor Emeritus, Faculty of Information Studies, University of Toronto, 140 St George Street, Toronto, Ontario, Canada M5S 1A1.

Probably all of us who call ourselves information scientists have at one time or another made the distinction between data and information. For most of us, it is roughly what Thorngate (this volume) called “out there” (data) vs “in here” (information); that is, “information” is what each individual internally interprets the data to mean.

I am one of those who has published a formal definition (Meadow 1992) together with distinctions among such related terms as data, knowledge, intelligence, and wisdom. Yet, I often find myself, as well as my colleagues, using the word information very casually, ignoring my own definitions.

In most of the material I have read on the subject of information and development, information tends to be used to indicate documents or relatively formal, orally conveyed data, which is not the usual definition. There are, of course, exceptions. But, in general, even in research reports, the reader does not know for certain which meaning of information is intended, and this limits the exchange of data among ourselves about information because we are never completely sure what is being measured, or under what circumstances. The problem is not limited to the development research community; it is simply that we are the latest group to attempt to quantify the impact of information transfer.

A large part of the literature on the evaluation of information systems, whether or not in the development context, is concerned with the distribution of packages of unevaluated data, as yet unconverted to information by their recipients. This can mean printed books or reports in a library, records in a computer-stored database, or advice conveyed orally. “Unevaluated” means that the intended user has not necessarily evaluated the information in terms of its contribution to that person’s own work or thought.

Most evaluation studies end with an assessment of “relevance,” often defined as topicality or subject-relatedness. Another definition is utility or value, usually based on a questioner’s immediate reaction to a document upon reading it. If I am a farmer concerned with crop loss caused by some parasite, and I find a document that describes the phenomenon and seems to offer a solution, but in actual trials does not enable me to get rid of the pest, the document may be relevant but will have no impact.

Another aspect of information is the intended recipient’s willingness to seek it and skill in doing so. In assessing impact, it is as important to find people who did not visit the information centre as those who did. It is as important to know what the person looked for as what he or she found. It is as important to know what the actual consequence of finding the information was, as how relevant the person said it was.

These are not trivial issues. In my own impact assessment project (Meadow and Spiteri, this volume), we wish to interview people who are thinking about starting a new business, as well as some who have already done so. The “done so” part is relatively easy. Businesses must register with the government and we can have access to the records.

Ideally, we would like to reach those with the idea as soon as they have it, before they start asking other people for information and advice because we would like to track the advice and information seeking and its effect. The earliest point at which we know we can find such people is when they approach a government office whose purpose is to advise those starting a new business. But, these people have already followed a rational path in seeking such advice. What happened to those who did not follow this path? We do not and will not know.

One method likely to find entrepreneurs before they approach the government or a bank is to do a survey of the population at large. The cost of such a survey to find a small group of people is prohibitive. Another way may be to advertise for people thinking of starting a new business and willing to help a research project. But this approach gives us a self-selected sample. Hence, of necessity, we deal with somewhat biased data, making their use by the next researcher questionable.

My own basic field has been information retrieval. When evaluating an information retrieval system it has generally been the custom to ask the person who has a question to evaluate the outcome. This can be done by asking if the retrieved informational items (typically, but not necessarily, documents) are relevant to the question. Research people in this field generally recognize but rarely actually take into account that the question asked at the retrieval facility might not be the one the user really was interested in.

Belkin (1980) was the first to articulate this phenomenon, in drawing a distinction between what he called the anomalous state of knowledge (ASK) and a question. The somewhat peculiar expression, ASK, reflects the fact that the person may not even know the exact question and may only be vaguely aware that something is missing.

If the prospective information user does not know the true question, then what question should be asked at a library? Sometimes, even when the true question is known, the question brought to a library (the information need statement) may reflect a feeling that the information service person assigned to help cannot understand the real question or that the asker simply lacks the ability to articulate it.

Either case is likely to lead to inadequate results even though the information sought may be present and the user able to understand and act upon it. Eventually, however, a question is asked and some information items are retrieved. The meaning of “relevance” of these information items can be, as noted earlier, topicality (Is it on the right subject?) or value/utility (How useful is this item to me?). But this judgment is almost invariably made at the time of retrieval, not after the retrieved item has been put to actual or attempted use. Did the lawyer win the case? Did the physician cure the patient? Did the securities trader make a profit? One reason, of course, why we do not wait until a use is made is that it may be impossible to attribute such a real-life outcome to retrieval or nonretrieval of a single document.

It is my belief that it is not impossible to attribute an influence of the retrieval of a document to a subsequent real-life event, but developing the method would require some original research. Put more positively, using a broader measure, it is possible to assess the contribution of retrieved information to user actions, over a number of actions. That is, we are not likely to be able to assess the contribution of a particular document to the winning of a case at law, but we can assess the contribution of the process of recovery of information to winning cases, over a number of cases. In the context of development, this means that if we follow a number of users of an information facility as well as a matched sample of nonusers, it might be possible to assess the contribution of the facility to their actions.

Another side of our difficulty, as scholars of the impact of information, is that it is difficult for us to exchange data among ourselves. If “A” does a study of information use in one country and “B” a different study in another country, it might still be useful to exchange data, to establish, for example, a series of data points showing the tendency to use a library as a function of user characteristics, or success, as judged by the users themselves, in finding information as a function of training in searching for it. But, we rarely see publications in this field in which one researcher has used data taken from another. If the two do not use the same definitions of the user characteristics, or of success in finding information, or of “information” itself, comparison of data is not possible and duplication of research is often necessary.

What is needed is a standardization of the variables used. In physics, it is possible to share or compare data taken by different experimenters because the definitions of the variables used or observed are standardized and well understood. A well-known example is the graph of thermal conductivity of tungsten as a function of temperature (Ho et al. 1974), reproduced in Tufte (1983). A clear, smooth line fitted to the data shows how thermal conductivity varies with temperature, as a composite of the work of many people. Many observations fall off the smooth line. Points falling off the line may indicate experimental error or that other factors may bear on the measurement. I believe it would be possible and certainly desirable to prepare similar composites for measurements of system performance, user performance, even just measurement of relevance under varying conditions.

Achieving the standardization I suggest is no small task. We could approach it by creating a standardized way to describe data. For example, and returning to the question of the meaning of “information,” if a person reports a measurement of user satisfaction with retrieved information, it would help others if the report explicitly stated: the characteristics of the user; the manner or circumstances of collecting the data; the meaning of information used; and the scale of the satisfaction measure. We would need, and I believe we could construct, a thesaurus of descriptors for such data elements as user characteristics and, of course, information.

Here is an example. Assume we have a study of effect of training on performance of users with an information retrieval system. The users are all drawn from the same population, business managers with no previous computer experience and an average of 12 years of formal education.

One-half of the group is given training program “A,” the other program “B.” Each person in each group is given the same set of questions to try to answer. A subject matter expert, familiar with the database in use, judges the outcome, rating each document retrieved as to relevance.

The judge is also asked to estimate how many relevant records were not retrieved, and to base the relevance rating on topicality, i.e., on whether retrieved records appeared to be on the subject of the question, rather than on utility or how valuable the individual user found the record. A binary relevance scale is used: documents are deemed relevant or not relevant.

The variables of this example are:

User characteristics

Education (years)

Occupation (Code from a standard classification schedule)

Computer experience (years)

Outcome

Relevance

          Definition used: topicality
          Values possible: 0, 1 (0 = not relevant)

For each question, number of documents retrieved, nq

For each question, number relevant, rq

For each question, number assumed relevant missed or not retrieved, mq

For each question, Precision = rq/nq

For each question, Recall = rq/(rq+mq)

Treatment

Trained by method (describe)

Questions assigned by experimenter. (List questions used)

Qualifications of relevance judge(s)

If we do not have all this information recorded in a standard manner (e.g., use of some standard occupation classification), then another researcher cannot use these data because it is not known exactly what the reported measurements mean. If a second researcher wanted to use the data in this hypothetical case, that person would typically not be sure of such questions as: Which definition of relevance did the judges use? Can it reasonably be assumed that the judges’ findings are the same as those that would have been rendered by the searchers, i.e., can judges’ results be directly compared with user results? Were effects of order of presentation of records to judges considered? How should data based on a binary relevance scale be compared with data based on a scale of 1-5?
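One way to picture the proposed standardization is as a shared record layout that every report would fill in. The Python sketch below is only an illustration of the idea: the field names, the occupation code, and the sample values are invented, not an established standard.

from dataclasses import dataclass

@dataclass
class RetrievalObservation:
    # User characteristics
    education_years: int
    occupation_code: str           # code from some standard classification schedule
    computer_experience_years: int
    # Treatment and circumstances of measurement
    training_method: str           # "A" or "B" in the hypothetical study
    relevance_definition: str      # "topicality" or "utility"
    relevance_scale: str           # e.g., "binary" or "1-5"
    # Outcome, for one question
    n_retrieved: int               # nq: documents retrieved
    n_relevant: int                # rq: retrieved documents judged relevant
    n_missed: int                  # mq: relevant documents not retrieved

    def precision(self) -> float:
        return self.n_relevant / self.n_retrieved                    # rq/nq

    def recall(self) -> float:
        return self.n_relevant / (self.n_relevant + self.n_missed)   # rq/(rq+mq)

# An invented observation: 12 years of education, no computer
# experience, trained by method "A", binary topical relevance judgments.
obs = RetrievalObservation(12, "MGR-01", 0, "A", "topicality", "binary", 20, 8, 4)
print(f"Precision = {obs.precision():.2f}, Recall = {obs.recall():.2f}")
# Precision = 0.40, Recall = 0.67

A record like this answers, mechanically, the questions raised above: which definition of relevance was used, on what scale, and for what kind of user.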

In summary, we need much more precise and standard definitions of the variables we report. If a researcher could reasonably count on terms always meaning essentially the same thing when used to describe a measure, we should be able to share data and to use others’ data in our own work, without repeating the earlier work.

References

Belkin, N.J. 1980. Anomalous states of knowledge as a basis for information retrieval. Canadian Journal of Information Science, 5 (May 1980), 133-143.

Ho, C.Y.; Powell, R.W.; Liley, P.E. 1974. Thermal conductivity of the elements: A comprehensive review, supplement no. 1. Journal of Physical and Chemical Reference Data, 3, 1-692.

Meadow, C.T. 1992. Text information retrieval systems. Academic Press, San Diego, CA, USA.

Shannon, C.E.; Weaver, W. 1949. The mathematical theory of communication. University of Illinois Press, Urbana, IL, USA.

Tufte, E.R. 1983. The visual display of quantitative information. Graphics Press, Cheshire, CT, USA. 150 pp.

INformation IMpact CAse Studies Listserv — “INIMCAS-L”: Analysis of Initial Use

Ronald Archer1

In 1992, 16 experts in the field of library and information sciences participated in a computer conference that provided the medium for free discussion and development of ideas to assist in the construction of a framework for measuring the impact of information on the process of international development. After the conclusion of the computer conference, an evaluation of its utility and effectiveness as a medium for accomplishing its goal was undertaken. The evaluation showed that the conference was generally successful (Thorngate and Balson, this volume).

It is interesting to note that, since the time of that conference (April-November 1992), which operated on the CoSy computer conferencing system at the University of Guelph, many of the problems encountered in that first effort have not been issues during the running of the INIMCAS Listserv. These included problems with telecommunication links to CoSy (particularly from developing regions), the difficulty of transferring files, and coping with the unfamiliarity of this type of conference medium.

It was stated in the evaluation report:

Alas, there is little that can be done to solve these problems, except to wait for the industry to improve service. There is, however, reason to be optimistic about improvement. Reliable digital telecommunication systems are rapidly replacing their less reliable analogue versions in both developed and developing countries. Within five years, perhaps half of the linking problems should be solved.

Indeed, within 3 years, we are at the point where everyone from the developing regions has been able to connect to the INIMCAS Listserv via the Internet. Although a few minor command problems were encountered on initial subscription to the Listserv, these were quickly resolved through auxiliary e-mail (electronic mail) messages. It is also clear that the use and practice of e-mail messaging is common enough that no one seemed to have any problems in entering messages onto the Listserv.

1Project Manager, Program Coordination and Development, Information Sciences and Systems Division, International Development Research Centre (IDRC), 250 Albert St, PO Box 8500, Ottawa, Ontario, Canada K1G 3H9.

The INIMCAS-L Listserv was officially launched 10 February 1995. The objective of the Listserv is to provide a vehicle for the free exchange of ideas and experiences among case study leaders and consultants currently implementing impact of information case studies. As of July 1995, there were 30 subscribers to the Listserv, 16 in North America, 8 in Africa, 2 in Latin America, 2 in the Caribbean, 1 in Europe, and 1 in South Asia.

The first actual message on INIMCAS-L was from Michel Menou (editor of the IDRC publication “Measuring the Impact of Information on Development”) on 7 March; since then, to 4 July 1995, a total of 46 messages has been recorded on the Listserv (this does not include the initial messages from participants to connect to the Listserv). Of the total number of messages:

• Four were related to “housekeeping” on the Listserv (e.g., questions of command instructions, etc.),

• Six were directly related to the organization of the Ottawa workshop (July 1995), and

• Thirty-six were directly related to discussions or presentations on, or about, the case studies.

Although some interplay between participants did transpire on the Listserv (on about three or four occasions), it was far less than anticipated. In actual fact, the Listserv operated more as an information bulletin board than as a vehicle for discussion. For this analysis, it was not possible to track the amount of independent electronic mail between individuals following the announcement of any one entry on the Listserv; however, it was noted that in several instances participants on the Listserv did engage in direct communication and discussion. The Listserv did provide the impetus and the electronic address for contact.

In terms of substance, 11 messages were of a general nature related to book reviews or information of general “impact” interest to the case studies; 11 messages concerned the “Capacity Building in Electronic Communication in Africa” (CABECA) case study; three messages each related to the joint University of Western Ontario/University of Toronto “Causal Modelling” research study, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) case study, and the Centro de Investigación y Desarrollo de la Educación (CIDE/REDUC) case study; and one message each related to the Botswana case study at the University of Botswana and to the issue of cost-benefit analysis.

Possibly the two most surprising aspects of this analysis were the following:

• The CABECA case study was the last to begin its work (and thus the latest to be connected to the Listserv), yet its team has been the most prolific user of the Listserv. This is perhaps explained by the nature of the methodology of this case study, which uses researchers in several geographically dispersed African countries and takes as its model a project that, in its own right, emphasises the use of electronic communication.

• The Caribbean (CARICOM in Guyana and the University of the West Indies in Jamaica) case studies were among the first case studies to be started, but they were late in joining the Listserv and have not contributed any items to it. It appears that the messages on the list are not read on a regular basis by the participants from the Caribbean. This may be partly because of a technical problem caused by the type of Internet connection to this region, or simply because the “e-mail culture” is not the same as in other regions.

Has INIMCAS-L been a success? With only a few months of operation, it is too soon to tell. It has been useful as a tool to post reports and information about the case studies. It has not engendered a lot of discussion on various implementation methodologies, but perhaps it is still too early in the implementation of the case studies to expect much of this type of interplay.

The participants at the Ottawa workshop agreed that the Listserv was a useful tool to exchange information on the case studies and felt that its strength was as a vehicle to announce information on the case studies and to receive generally posted information about other aspects of “impact,” rather than as a mechanism of debate and discussion.

Unlike the original computer conference, the INIMCAS Listserv has not been “moderated,” nor has there been the opportunity offered to allow outside “experts” to enter comments on the Listserv from time to time. These issues received some discussion at the Ottawa workshop, but the consensus was to maintain the status quo of the Listserv as a medium for the exchange of information on the ongoing case studies.

It was felt that the types of information needs of the case study participants fell into the following categories:

• Status reports of ongoing case studies.

• Reports of survey instruments used in the various case studies.

• Discussion of basic concepts of “impact” issues.

• Discussion of methodological and research issues.

• Reviews and reports of findings.

• Lists of alternate sources of advice/consultants.

• Bibliographic reviews on the broad issue of “impact.”

Should the INIMCAS Listserv be opened to a wider audience? From some quarters it was felt that doing so would further the knowledge of the work on the “impact of information” and possibly provide additional and different points of view from other persons and groups who might be conducting research in this field. Although it was agreed that a broader discussion group on the wider issue of “impact” may be useful and illuminating at some future point, it was believed that the INIMCAS Listserv was not the medium to use and that it should remain a closed Listserv for the time being.

Can Computer Conferencing Be Effective for Information Policy Formation?1

Warren Thorngate and David Balson2

Early in April 1992, 14 experts in library and information science met in Ottawa with members of the Information Sciences and Systems Division of the International Development Research Centre (IDRC) to begin a unique brainstorming experiment. During the previous 20 years, IDRC and other agencies funded many efforts to collect, catalogue, maintain, and disseminate information assumed relevant to research in developing countries.

As a result, information centres and networks proliferated and their funding requirements grew. Alas, the funding sources did not. Although information remained a “good thing,” the time had come to reconsider the belief that more is better and to determine how the usefulness of information projects in developing countries might be assessed more objectively.

As researchers in decision-making know, it is extremely difficult to determine the value of information except in the most contrived and trivial situations (e.g., see Arkes and Hammond 1986; Simon 1976; Thorngate and Ferguson 1977; Tversky and Kahneman 1974). Information cannot be valued except for its contribution to knowledge.

Yet, as 100 years of research in cognitive psychology and pedagogy have shown, information almost never contributes to knowledge in an incremental or accretive way. Instead, it almost always combines in a manner akin to complex chemical reactions, generating knowledge as an emergent property.

In short, information does not “add up” to knowledge; there is no monotonic or simple relation between the amount of information available and the amount of knowledge obtained. To make matters more complex, the value of knowledge is more related to its quality than its quantity and is highly contingent on one’s goals and circumstances. Such complexity makes it difficult to employ simple quantitative criteria for deciding which information projects will receive funding and which will not. Of course, few would support a library with no books or patrons, or a journal with no articles or subscribers, but beyond such rare extremes there are no obvious guides to judge the relative merits of information projects.

1This research was supported by a grant from IDRC. The authors thank Viviana Alonso and Fatemeh Bagherian for their diligent research assistance in conducting this evaluation.

2Professor, Psychology Department, Carleton University, Ottawa, Ontario, Canada K1S 5B6, and Senior Program Officer, Information Sciences and Systems Division, International Development Research Centre (IDRC), 250 Albert St, Ottawa, Ontario, Canada K1H 3G9, respectively.

Correlations between the number of documents, the number of users, and the quantity, quality, and value of knowledge transferred are low. As a result, criteria based on such numbers are fallible and become more so as the number of competitors for limited funds increases (see Thorngate 1988; Thorngate and Carroll 1987, 1991). Other criteria, such as “track record,” that are correlated with prior opportunities soon lead to the development of pampered cliques and to the exclusion of innovators (Thorngate and Hotta 1990).

Requirements of standards (e.g., all bibliographic proposals must follow Library of Congress classifications) are as limiting as the standards themselves (consider the limits of ASCII, QWERTY, or VHS outside North America). Imposition of partnerships or coordinated activities (e.g., a proposal must contain related projects in at least five countries) increases the risk that after all is said nothing is done.

Faced with the daunting task of developing criteria for evaluating the relative merits of information projects in developing countries, Martha Stone, Director General of the Information Sciences and Systems Division of IDRC, initiated a series of activities on how to assess the impact of information on development. Her first step was to seek advice from outside experts, and the recursive experiment was born.

Following their April workshop, the 14 experts continued brainstorming about assessment criteria via a computer conference, which lasted until mid November. The contents of the computer conference were summarized by its designated facilitator, Michel Menou, for a subsequent face-to-face conference in Nairobi in March 1993, which included more representatives from developing countries.

Their reactions to the summary and further suggestions for criteria are now being combined with papers from many of the 14 original participants in a book on assessment indicators for the impact of information on development. In addition, the assessment framework is now being tested in several information projects in developing regions.

Computer Conferencing as a Forum

The difficulty of developing assessment indicators stimulated the 14 participants in the April 1992 workshop to engage in extended discussion and debate following their face-to-face meeting. Because they were scattered across the globe and busy with their own affairs, it was expensive and impractical to bring them regularly together.

The traditional alternative of distributing papers via mail was plagued by its traditional problems: slowness, unreliability, and expense. A less-traditional alternative was thus considered. In light of advances in computer communication it seemed reasonable to attempt an extended brainstorming session using a computer conference.

IDRC had first examined the appropriateness of computer conferencing for development purposes in 1981. In that year, the former Information Sciences Division organized a workshop on Computer-Based Conferencing Systems for Developing Countries (see Balson et al. 1981).

As a result of this workshop, IDRC’s Telematics Program was initiated to support research activities related to developing-country institutions’ access to and utilization of computer-based conferencing techniques. The first computer conference experiment undertaken by the Telematics Program was an international discussion of bioconversion of lignocellulosics to fuel, fodder, and food in 1983 (see Balson 1985). A second conferencing experiment was undertaken in the mid-1980s to support a United Nations University-sponsored Brucellosis research network in Latin America.

The experiments indicated that three conditions were necessary for a successful computer conference:

• There must be a focused topic of discussion;

• There must be a sufficient number of participants with the time, motivation, and financial resources to participate effectively; and

• There must be minimal technical difficulties.

Because the indicators conference provided a focused topic, and because its participants were motivated to continue their discussions, at least two of the necessary conditions for a successful computer conference seemed to be satisfied. So IDRC contracted with the University of Guelph to use its COSY computer conferencing facilities as the host of the indicator brainstorming sessions.

North American and European participants were expected to have little trouble connecting to the COSY system from their local work place or home. The few participants from developing countries without reliable connections to COSY were periodically sent printouts via courier and asked to courier back papers or disks in response. Their responses were transferred to COSY for distribution to the participants with online connections.

If the promises of the medium were fulfilled, the computer conference could be at least as effective as its face-to-face counterpart. It would also be far less expensive. If the promises of the medium were unfulfilled, however, much time and money would be wasted. There is still much to be learned about computer conferencing, its costs, benefits, products, and processes. There was thus good reason to monitor and to evaluate the computer conference.

How is it possible to evaluate a computer conference? What criteria should be used to assess its strengths and weaknesses? Such questions are recursively familiar, highlighting the irony of agonizing over criteria to assess how others use the medium to agonize again over criteria to assess how others use information resources.

Irony aside, however, the principal author was contracted to monitor computer conference activity and expense, and to obtain participants’ reactions to the medium and its messages. Several Carleton University graduate students assisted in monitoring conference activity, generating regular printouts of new conference postings, and in recording summary information about them. Conference expenses were recorded at IDRC. Participants’ reactions to the conference were obtained by questionnaire. There was, of course, no comparison group of face-to-face, conference-call, or letter-only discussion to afford relative evaluations of the computer conference. What follows should, therefore, be seen as a formative evaluation or a case study rather than a comparative assessment of a medium or its use.

Method

Before the April workshop, the conference was registered on the COSY system as IIDCONF in reference to the workshop title: Assessment Indicators for the Impact of Information on Development. The COSY system allows a computer conference to contain several discussion topics, each in its own directory, and much use was made of this facility. Ten topics were listed as subconferences of IIDCONF, and were titled Benefits, Calculation, Digests, General, Indicators, Literature, Policies, Projects, Research, and Other.

Each participant was asked to post items under what he or she believed to be the most relevant topic title. Thirteen postings were considered relevant to more than one topic; the facilitator cross-posted 10 of them, and parts of the remaining three. One posting was obviously out of place and was transferred to a more relevant topic heading.

Participants

IDRC invited 15 conference participants for the face-to-face meeting in April and for the subsequent IIDCONF computer conference. Those who were able to attend the April meeting were: Toni Bearman (USA), John Black (Canada), Antonio Briquet de Lemos (Brazil), Neil Burk (Canada), Blaise Cronin (USA), Julio Cubillo (Chile), Stepheny Ferguson (Jamaica), Jose-Marie Griffiths (USA), Woody Horton (USA), Michel Menou (France), Youssef Nusseir (Jordan), Jean Salmona (France), Rohan Samarajiva (USA), and Robert Vitro (USA). The 15th invited participant, Enzo Molino (Mexico), was unable to attend the April workshop in Ottawa but did participate in the computer conference. Martha Stone (IDRC) served as the 16th IIDCONF participant. Michel Menou was selected as the IIDCONF facilitator, with general instructions to encourage participation and to summarize periodically the major points and issues raised in each of the discussion topics.

Hoping to benefit from the participants’ brainstorms during the next phase of the indicators project, IDRC convened the face-to-face workshop, organized the computer conference and followed the conference dialogue. IDRC staff posted messages to IIDCONF only when necessary for clarification of some organizational or technical matter; only three such messages were posted.

The computer conference was conceived as an 8-month roundtable discussion. Each participant was encouraged to discuss the conference with local colleagues and to encourage them in turn to submit postings to the conference as conference guests. Twelve such guests obliged: six of them once, four twice, and two five times for a total of 24 message postings.

During the April workshop, participants were given the opportunity to be trained on the COSY system. John Black, the Librarian at the University of Guelph, provided much of the training by lecture, demonstration, and hands-on training in a computer room designed for the purpose. Several participants posted trial messages (e.g., “This is a test”) during this time. As a result, data for the first month of the conference are somewhat unrepresentative and should be interpreted with caution.

It should be noted that participants could send each other e-mail (electronic mail) via COSY or via other e-mail systems they utilized. Many did. No records could be collected from COSY of how often this alternative means of communication was used for conference purposes, so two questionnaire items were included to gain some rough estimate of this e-mail usage. Participants accessed COSY via packet-switched networks, INET, and the Internet, and via the offline method previously mentioned.

Data Collection

Each conference posting contained a header with seven pieces of information:

• Topic title,

• Message number,

• COSY account of the person who posted it,

• Number of characters in message,

• Date of posting,

• Referent message (if message was a reply to a previous one), and

• An optional message title.

This information was recorded in seven fields of a database. Twenty of the 24 messages from the 12 guests who volunteered their views were posted by Michel Menou, the conference facilitator; the remaining four were posted by John Black.
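As a sketch of how such a seven-field record might be represented (the field names and types below are assumptions for illustration, not the actual database used):

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Posting:
    topic: str                      # subconference title, e.g., "Benefits"
    message_number: int
    account: str                    # COSY account of the person who posted it
    characters: int                 # number of characters in the message
    posted: date                    # date of posting
    referent: Optional[int] = None  # message number this one replies to, if any
    title: Optional[str] = None     # optional message title

# An invented example record:
example = Posting("Benefits", 42, "guest07", 1350, date(1992, 7, 14), referent=37)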

Questionnaire

A 29-item questionnaire was distributed to participants at the end of the computer conference. It contained 10 background and usage items (previous computer communication experience, COSY use, etc.); seven training and technical items (regarding the training workshop, manual, phone problems, etc.); five items concerning conference preparation, organization, and leadership; four regarding conference submissions; and three regarding conclusions and recommendations. The items are reproduced in the results that follow.

Results

Postings

Table 1 shows the number of postings in each subconference or topic for each month from the beginning of the conference to the end. There was no consistent increase or decrease in the number of postings over the conference period; contribution numbers seemed to fluctuate sporadically. The small number of May and June postings stimulated efforts by the facilitator and IDRC staff to encourage greater participation. The efforts seemed to have worked in July and August, only to drop off thereafter, perhaps because the preliminary theoretical work had been largely completed.

Table 1. Subconference postings by month.

Conference          Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Total
Benefits (Ben)       21    8    4    4   11    0   18   18     84
Calculation (Cal)     8    0    0    2    1    0    0    1     12
Digests               0    2    0    5    9    6   11    0     33
General              13    6    9   41   26    3    0    0     98
Indicators            2    0    1    7    8    1    4    4     27
Literature            4    1    0    0    7    0    0    0     12
Policies             13    0    0    0    4    0    0    0     17
Projects              6    0    1    0    0    0    0    1      8
Research              1    0    0    0    0    0    0    0      1
Other                22    9   10   14   22    7   15    7    106
Total                90   26   25   73   88   17   48   31    398

As seen in Table 1, the three most popular subconferences were Benefits, General, and Other. In contrast, the least popular included Research, Projects, Literature, Calculation, and Policies. These popularity differences may reflect confusion about the content of subconferences. Alternatively, they may reflect an academic tendency to remain general and abstract in discussion and to avoid postings about more specific suggestions.

Table 2 shows the contributions made by each participant to each subconference (to preserve anonymity, participants are listed in random order). There is no strong tendency for participants to specialize in one or two topics. It is obvious, however, that some participants were more active than others. The facilitator accounted for 49% of the IIDCONF postings. Six others together accounted for an additional 37%. The remaining nine participants and 12 guests generated only 14% of the postings.

Like face-to-face conferences, the IIDCONF seems to have been dominated by a minority — 25% of the participants accounted for 86% of the postings. The average number of characters typed (a simple measure of message length) shows a similar pattern. The seven most active participants (25%) generated 78% of the printout.

Table 2. Subconference postings by participant.

Image

Further analyses were conducted to examine how much participants interacted. At one extreme, no participant would respond to anyone’s postings, and no interaction would occur. At the other extreme, every posting would stimulate everyone to respond and interaction would increase exponentially. The “re:” feature of COSY allowed some estimate of the postings stimulated by others. The penultimate row of Table 2 shows what percentage of each participant’s postings were responses to other postings. Just over half of them were (53%), an indication of a healthy exchange.
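The reply percentage is simply the share of postings whose header carries a “re:” referent. A minimal sketch of the tally, with invented records standing in for the conference database:

# Each invented record notes the referent field from the posting header
# (None means the posting was not a response to an earlier one).
postings = [
    {"number": 1, "referent": None},
    {"number": 2, "referent": 1},    # a response to message 1
    {"number": 3, "referent": 2},
    {"number": 4, "referent": None},
]
responses = sum(1 for p in postings if p["referent"] is not None)
print(f"{100 * responses / len(postings):.0f}% of postings were responses")  # 50%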

Costs

There were three distinct types of costs associated with the conference: the face-to-face workshop, the direct costs of the computer conference, and the indirect costs of the computer conference. The costs of the face-to-face workshop consisted of those traditional items associated with any face-to-face meeting: travel, food, lodging, mailings, supplies, etc. The direct costs of using the computer conferencing facility were revealed in two monthly invoices.

First were the invoices sent to IDRC by the University of Guelph for COSY user IDs ($8 per month each), COSY connect charges ($8 per hour), and packet-switch linkage charges (also $8 per hour). Second were Bell Canada invoices for INET connections to COSY for the four participants who required this service. Table 3 shows a monthly summary of these charges for the invoices available. Each Guelph invoice covered a calendar month. The Bell invoices went from mid-month to mid-month, so they were interpolated to align billing periods with those of Guelph (e.g., invoice for June = 1/2 the 16 May to 15 June invoice + 1/2 the 15 June to 15 July invoice).
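A minimal sketch of that interpolation in Python, with invented invoice amounts:

def calendar_month_cost(prev_invoice, next_invoice):
    # Estimate a calendar month from two mid-month-to-mid-month invoices,
    # e.g., June = 1/2 (16 May-15 June) + 1/2 (15 June-15 July).
    return 0.5 * prev_invoice + 0.5 * next_invoice

print(calendar_month_cost(90.00, 110.00))  # 100.0 (invented amounts)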

Not surprisingly, Table 3 reveals a strong association between monthly costs and monthly posting rates (see Table 1). It cost an average of $8.88 in COSY and communication charges for each of the 398 postings; with each posting distributed to the roughly 15 other participants, that works out to about 60 cents per copy sent. This compares favourably with charges for airmail stamps.

Indirect costs of the computer conference are difficult to estimate, but they are probably not trivial. For example, the technical support provided by David Balson and John Black was substantial, as was the time taken by many participants to continue their discussion. By comparison, however, substantial clerical assistance is usually required for a face-to-face conference, and participants take almost as much time talking and listening as they do reading and writing.

Table 3. Conference and communication costs.

                Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Total
COSY/Datapac   $643   374   292   354   396   253   336   275  $2,923
INET             na    45    86   100   107   103    88    82     611
Total           643   419   378   454   503   356   424   357  $3,534

Questionnaire

A user questionnaire was sent to 15 of the participants on 4 December 1992. Martha Stone was not sent the questionnaire to avoid a conflict of interest. By 1 February 1993, 12 of the 15 questionnaires had been returned. One of these stated only “I regret I was unable to participate…” and gave no answers. It was excluded, leaving 11 questionnaires for the analyses presented in the following.

Six of the 11 completed questionnaires were faxed; one was missing the last two pages of the questionnaire and two had several blurred passages. In addition, some of the respondents skipped some of the questions. These factors account for the varying totals of responses to the questions. For ease of presentation, each question is reproduced as it appeared on the questionnaire followed by summaries of the responses. Quoted comments are given in random order to preserve anonymity.

Background and Usage Items

1. Before the conference, in how many computer-mediated conferences had you participated?

none = 7 respondents
one = 2
two = 0
three = 1
more than three = 1

2. Before the conference, how often had you used e-mail?

never = 4 respondents
1-3 times per week = 2
4-6 times per week = 1
7-10 times per week = 0
more than 10 = 4

3. On average, how much time did you spend using COSY each week? (Best guess).

Average = 1.6 hour(s) per week; range = 0.1–4.0 hours

Where did you find the time?

I used spare time at work = 5 respondents
I used spare time at home = 3
I took time from (elsewhere) = 3

4. On average, how many times each week did you login to COSY? (Best guess)

Average = 1.9 times per week;   range = 0–6

5. On average, what percentage of the times that you logged into COSY did you…

check for and read new conference entries?  Average = 81%; range = 5–100%
write at least one new conference entry?    Average = 25%; range = 0–50%
check for and read new e-mail?              Average = 68%; range = 0–100%
write at least one new e-mail message?      Average = 25%; range = 0–80%

6. Between the beginning of the conference (April 1992) and today, what percentage of the conference messages did you read? (Best guess).

Average = 80%; range = 15–100%

7. Between the beginning of the conference (April 1992) and today, how many conference messages did you write? (Best guess).

Average = 26; range = 2–179 (actual average = 30; actual range = 1–197)

8. Between the beginning of the conference (April 1992) and today, how many private e-mail messages concerning the conference did you receive from conference participants? (Best guess).

Average = 23; range = 0–75

9. Between the beginning of the conference (April 1992) and today, how many private e-mail messages concerning the conference did you write to conference participants? (Best guess).

Average = 23; range = 0–100

10. All things considered, do you think the COSY conference facility or electronic mail facility was the more useful for developing and exchanging ideas?

4 = conference (i.e., four participants chose “conference”)

3 = e-mail

1 = equal, they complement each other

3 = no comment

Conference was more useful because—

“It exposed ideas for all to consider.”

“It allowed for contribution of different ideas on a topic with a time for reflection; others are able to add their own ideas as well.”

“We exchanged most of the messages with it.”

E-mail was more useful because—

“The conference facility was cumbersome; for example, adding a comment to a message read earlier is awkward without the ability to search conference by full text or without an index.”

“E-mail seemed more user friendly.”

“Both were clumsy.”

“It provided a more controlled communication environment and was more conducive to transparent and more frank interactions.”

Technical Training, Problems, and Support

1. Were the COSY training sessions in Ottawa last April adequate to prepare you for using the system?

5 = yes, I had no trouble using COSY afterwards

4 = yes, but I forgot much of what I learned by the time I tried COSY from home

1 = no, in retrospect they should have included: “one full day training.”

2. Was the COSY user manual adequate to prepare you for using the system?

1 = I don’t know because I never read it

6 = yes

4 = no, it should have…

“been restructured, simplified and streamlined.”

“included a summary list of actions for typical transactions and short cuts, e.g., how to comment on a message without re-reading it.”

3. Did you have problems linking to COSY from home or work? (check one)

3 = no

8 = yes

“One time recently I was unable to get on for about 10 days”

“About 2 times out of 10 there was noise on the line, difficulty getting on”

“About 2 times out of 10 [there was] messed up system reply, unable to write; the connection was cut twice because of net interconnections in Canada, once 4 days and then about 7 days”

“About 7 times in 10, noise in the local line”

“All times (never connected)”

“In the beginning, but with the help of Mr. Black, by fax, and a friend of mine who is a computer expert, the connection was made.”

“First had hassle with my interface, then COSY”

4. Did you have problems using COSY?

2 = no

9 = yes, the most common problem(s) was/were—

“Not really although I have not been able to download”

“Forgetting differences between conference and mail; difficulty in locating specific conference messages; download and upload.”

“Editing scratchpad cumbersome; uneven reaction of mail module after sending message…; mailer control of messages sent on internet is erratic…; I was not able to set up an automatic download procedure under kermit, I could only upload topic by topic or e-mail by e-mail; system went down in the middle of a transaction in some cases (below 3%).”

“I was not able to upload files”

“Telecommunication”

“Noise (initial letters were regularly replaced by some odd signs, and words and lines were missing on the screen); I could not download or upload any message; …I have enormous difficulty to get a line in [my national] network, I have spent hours trying to access [it] without success [due to] an overload in the network, especially at night when the tariff is lower.”

“Got cut off, etc.”

“Not ergonomic at all; far too complex; bad choice made”

5. Was the technical support offered by IDRC adequate to solve problems you had with COSY?

4 = I don’t know because I never asked for it

7 = yes

0 = no

6. Although it is difficult to estimate, about how much money do you think it cost to pay for your use of COSY (including telephone charges) during the conference (April–November)?

Average best guess (N = 8) = $541 US

Extrapolated guess ($541 x 11) = $5,950 US

Actual cost = $3,534 Canadian x 0.80 = $2,827 US
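For readers who want to retrace the arithmetic, a minimal sketch in present-day Python follows; the figures and the 0.80 CAD-to-USD rate are those reported above, and the comparison suggests that respondents collectively overestimated the cost of the conference by roughly a factor of two.

    # Cost comparison for the COSY conference; all figures come from the
    # responses above, and 0.80 is the CAD-to-USD rate used in the text.
    mean_guess_usd = 541                # average best guess of the 8 respondents
    participants = 11
    extrapolated = mean_guess_usd * participants  # 5951; reported as $5,950
    actual_usd = 3534 * 0.80                      # $3,534 Canadian, ~$2,827 US
    print(round(extrapolated / actual_usd, 1))    # roughly 2.1: a ~2x overestimate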

7. If you have any additional comments or suggestions regarding technical issues that you believe would be useful for improving future computer conferences, please write them below.

“Easier to use e-mail (e.g., more like pine); delete all systems messages after a set time — reading about downtime 2 years ago is a waste of time”

“Line editing is a hell, some more flexible procedure needs to be developed; display of outbox should be optional; I was never able to try chat, would be good if the system could be asked at the beginning of a session who among participants is on and to call them to turn to chat.”

“Shorter contribution texts”

“Do not choose a conference [system?] with so many options; choose the most simple… [fax unreadable here]… it should be usable by anyone”

Conference Preparation, Organization, and Leadership

1. Did the pre-conference meeting in Ottawa give you enough understanding of the conference goals to participate effectively in the conference?

7 = yes

1 = “not attended”

2 = no, in retrospect it should have included —

“The opportunity for each participant to expose his/her views on the actual theme of the conference: the impact of information on development”

“A hint of the expected products (structure, contents, intention).”

2. Did the goals of the conference become more or less clear to you as the conference progressed?

5 = yes, especially when I read —

“the digests”

“the comments”

“the [general] discussion”

“the contributions of participants”

1 = no, I never understood —

“discussions on piecemeal issues when major issues were yet to be unveiled”

1 = “not applicable”

1 = “yes and no; the goals were clear but not the way taken by some discussions; a few were excessively long and seemed to go astray”

3. Was the conference adequately organized to accomplish its goals?

2 = yes, especially —

“the categories which worked quite well”

1 = “yes and no” —

“alternative design choices might have proved more effective, but only a test could tell. A wider group size and possibly more diverse composition might have brought higher traffic and thus more interest/interaction. It might have been easier to start with a discussion of a simple concrete case. If each participant would have prepared a short position statement on each topic at the beginning this might have helped. Summaries should have been more frequent; distribution of printed dumps and summaries should have been more frequent”

6 = no, in retrospect it could have been improved by —

“greater structure; more milestones; clearer progress monitoring”

“more frequent contribution from more of the participants”

“indexing or full text search capability”

“ambitious and inadequate focus”

“reducing the number of subconferences”

“the starting up of working teams when we still were in Ottawa”

4. Was the conference adequately moderated to accomplish its goals?

6 = yes, especially —

“the provision of digests”

“when digests were produced”

“the way the facilitator related comments to earlier statements”

2 = no, in retrospect it could have been improved by —

“more direction and more aggressive critiquing”

“stopping everything after one month”

5. If you have additional comments or suggestions about conference preparation, organization or leadership (e.g., about the number or composition of participants, divisions of subconferences, periodic summaries, etc.), please write them below.

“more diversity in group composition might have been useful”

“it’s unfortunate that some of the early participants dropped out so early and didn’t ever participate fully”

“only one or two voices offered fresh perspectives and insights, much stuff was terribly (and predictably) deja vu”

“I think that more people with greater experience in running information services in and for developing countries should have participated in the conference. Also people who don’t believe in the importance of information for development who would challenge most of the issues raised, creating the atmosphere for creative thinking. Periodic and regular summaries in print-out form would have been useful. There was an excess of sub-conferences”

“I believe the division of labour was not duly accomplished: participation from the south was almost nonexistent; participation from the north was unbalanced (monopoly by a couple of participants) and not representative of existing visions.”

Conference Submissions

1. There were 10 subconferences in the IIDCONF conference: General, Policies, Benefits, Indicators, Calculation, Projects, Research Agenda, Literature, Digests, and Other. Considering the goals of the conference, the two most valuable subconferences were —

5 = Benefits

5 = Indicators

2 = Digests

4 = General

Comments —

“Benefits & Indicators because most concrete/relevant to goals”

“Benefits & Indicators because the challenge which both topics posed”

“Digests because it provided summaries, and Indicators because [it] contained the meat of the topic.”

“General & Benefits because they were the most used”

“General & Indicators because most proposals were here”

“Benefits because it represents the effect, and Indicators because they are measurable elements for the effect”

“General & Benefits because they dealt with what seemed to me the real goals of the conference”

“General because it permitted further clarification of the broad issues, and Digests because it provided monitoring of information”

The two least valuable subconferences were —

3 = Calculation

2 = Literature

3 = Research Agenda

1 = Other, Digests, Projects, Policies, and Indicators (each)

Comments —

“Calculation because too specific and Other because too vague”

“Literature because not readily available, and Research Agenda because it seemed to be premature to consider this until other areas such as benefits and indicators were adequately dealt with”

“Calculation because it was little used and probably premature”

“Policies and Research Agenda because they were the least used”

“Literature and Research Agenda because very little could be read”

“Digests because it [is] not that important, and Other because it reduces concentration on main topics”

“Policies and Projects because of the reduced number of substantive contributions”

“Indicators and Calculation because they lack an acceptable framework”

2. In general, how did you find the conference submissions?

2 = Adequate in both quality and quantity to address the conference issues.

4 = Adequate quality but inadequate quantity to address the conference issues.

1 = Adequate quantity but inadequate quality to address the conference issues.

2 = Inadequate in both quality and quantity to address the conference issues.

3. How did the quality of COSY conference submissions compare with that of face-to-face conferences?

0 = about the same as face-to-face conferences

3 = generally better because —

“it provided time to reflect; on the other hand however, perhaps because of this the contribution of some participants was sporadic. In face to face conferences perhaps they would feel more compelled to react.”

“more care seemed to go into advanced thinking and the quality of participants was very high”

“generally better because not interrupted and people had time to thoroughly think before making their points; they could also reflect on various interventions instead of reacting to the last one”

6 = generally worse because —

“too many nonparticipants”

“timing and digression”

“human interface increases interaction”

“It lacked the empathy and the possibility of prompt argumentation. Personally, I felt extremely disturbed by the costs of telecommunications. Sitting in front of the computer and having to be concerned with tariffs and the possibility of a technical failure, besides the utilization of several commands, was a painful experience. I do prefer face-to-face conferences.”

1 = better “because they provided recorded information” and worse “because they would not provide signs that are normally useful to assess others’ intention”

4. If you have any additional comments or suggestions about conference submissions (especially about increasing their quality or quantity), please write them below.

“Too much divergence, not enough convergence and synthesis! This was the most serious flaw!”

“Maximum size of submissions should be established (maybe 2Kb)”

“Periodic and regular summaries reflecting the most important issues and indicating the points of possible consensus, excluding any irrelevant material, could contribute to improve the quality of the conference in general.”

“I believe well-balanced teams devoted to tackle specific problem statements could have rendered better quality/quantity performances.”

Conclusions and Recommendations

1. What percentage of the goals of the conference do you believe were accomplished?

Average = 58%; range = 10–90%

If your estimated percentage is less than 100, why do you think some goals were not accomplished?

0 = these goals were not clearly stated

0 = these goals were never addressed in the conference

2 = these goals can’t be attained in a computer conference because —

“of the diffuseness of the topic”

“a consensus or basic understanding of larger issues was not available”

1 = these goals are impossible to attain in any conference

6 = other —

“goals were not translated into component sub-goals and objectives that were concrete enough to be achievable”

“each time I felt I had a hold on the subject the arguments indicated otherwise”

“time constraints on very busy people”

“more time was devoted to general issues and very little on defining indicators”

“because it seemed that most of the participants lacked the discipline to stick to the objectives expressed in the preliminary outline. The number of subconferences probably contributed to this.”

“the computer system was inadequate, the messages too long”

2. In light of your experiences with the conference and any others, what recommendations can you offer for improving the effectiveness of computer conferences as a forum of policy development or of intellectual exchange?

“Too many participants were either unwilling or unable to participate. This led to ‘monopolizing’ the conference by only a few participants”

“It is an excellent medium for intellectual exchange but perhaps the participants should have been drawn from a more diverse background”

“Have the group meet face-to-face midway through the conference. Distribute brief summaries regularly in print. Distribute a printed, integrated bibliography on a regular basis”

“More congenial software, tighter topic focus, more careful selection of participants”

“People need more practical training in the use of the conferencing system and in handling computer communications, including off-line file management (which may be quite time consuming), and even more on communication (the benefits from the use of the technology are no substitute for propensity to and ease in communication).”

“Stating a set of smaller goals with [themes?] specific for each; more time”

“Initially participants might want to address all the conference issues in their first contribution, and then move to discuss subconference topics.”

“They should be more user friendly and participants should be involved only after they were totally fluent in dealing with the computer and telecommunications techniques, besides a thorough understanding of the system used (COSY). The moderator has an extremely active role to play especially in summarizing the points of consensus and calling attention to those points that were not sufficiently investigated.”

“Identify more concrete policy or research problems; select participants highly motivated in relation to the problems; decentralize tasks; enrich technological environment for better conference monitoring.”

“As simple as possible; messages as short as possible (10 lines max); no personal statements, no [old?] information”

3. Please make any additional comments about the conference you wish below:

“The topic was a difficult one to come to grips with. The conference was immensely interesting and equally frustrating.”

“I learned a great deal from it. I wish I could have participated more actively all the time. The group of participants was excellent. The facilitator did a masterful job.”

“The conference was very important as a whole. I believe, however, at least by now some of the participants may share a certain frustration in view of the goals which were initially set up. I don’t know if this frustration is due to the technique involved (computer conference) or to the difficulties inherent in the theme. I guess … the results would have been different if a comprehensive working document had been written and distributed at the beginning of the conference.”

“It provided a valuable experience about human interaction mediated by electronic means.”

Discussion

Was the conference successful? Was it cost-effective? If others are undertaken in the future, what can we learn from this one to improve them? Data about the conference activity, conference costs, and participant reactions provide a first approximation of answers to these questions.

Was the Conference Successful?

To answer this question, some bases of comparison are useful. Consider a few extremes. Perhaps the best possible computer conference would generate vast amounts of important and interesting dialogue; everyone would contribute equally and rapidly, responding to each other and building toward a mutual, intellectual synergism.

Now consider some nightmare conferences. One would generate no activity; no one would log in and nothing would be written or read. In another, one or two participants would contribute all information while the others passively read it like a periodical, or did not bother to read it at all. In a third, everyone would contribute but no one would respond to the contributions of others. In a fourth, factions would form around conceptual antagonisms and battle over assumptions in a deadly boring epistemological war.

In light of such extremes, the activity data suggest that the conference was a qualified success. It was certainly not a disaster. On average, about 50 contributions were generated in each of its 8 months. Almost half of the participants averaged more than 2.5 contributions per month. About half of the contributions were responses to other contributions. There were few signs of an analytical blood bath. On the one hand, by these criteria, it was a productive exchange. On the other hand, the facilitator contributed about half of the conference material, while over half of the participants contributed almost nothing. Discussion in half of the subconferences was sparse. Participants reported, on average, that only about half of the conference goals were met.

It is difficult to say how much of the conference’s failings were a function of the forum or of the participants. In fairness to the forum, most face-to-face conferences show similar tendencies. For example, 20% of the faces usually make 80% of the contributions, and many meetings are poorly attended. The questionnaire indicates that the forum was new to most participants, so few could be expected to be facile in its use. Participants, however, reported chronic difficulties linking to COSY and additional difficulties using it. Many of the facilitator’s contributions were directly related to his role: to encourage participation. They cannot be classified as attempts to dominate discussion. Nor can the facilitator be faulted for failing to increase the participation of those who could not use the system.

Was the Conference Cost-Effective?

The glib answer to this question is “yes,” simply because no other forum could have accomplished its goals. By the standards of face-to-face conferences, the computer conference was a bargain. To accomplish the tasks achieved during the face-to-face workshop and the computer conference using only face-to-face meetings would have required the initial workshop and several more meetings.

Even academics would take a long time in face-to-face discussion to generate the 325 pages of transcript accumulated from the computer-mediated alternative. Accordingly, the direct costs of the computer conference should be compared to the additional face-to-face meetings that would have been required. Its direct costs (about $3,500) would have paid for the travel and accommodation of only 3–4 participants at a face-to-face conference — half the number of those who regularly participated in the computer conference. The indirect costs for equivalent support may have been less in a traditional conference, but not by a large amount. If an hourly rate for the time and effort taken by participants to learn COSY, deal with its frustrations, type comments, etc., were factored into its costs, the computer conference might not have been such a bargain.

Judging from the questionnaire comments, however, the medium was well-chosen for the task at hand. Although few had good things to say about COSY, and few escaped the frustrations of unreliable links to it, no one challenged or rejected computer conferencing as a potentially useful brainstorming forum. This tacit endorsement of the medium and the relatively low cost of its use suggest that further experiments should be undertaken to improve and to promote it. The questionnaire results provide many useful suggestions for how this can be done.

How Can Computer Conferences Be Improved?

It is worthwhile to recall that the success of any conference is a complex function of many elements including the nature and organization of the conference agenda, the quality, motivation and relationships of participants, the form of leadership, the conference setting, time constraints, and the medium of communication. The forum of a face-to-face conference is familiar, reliable, and transparent, so recommendations for improving the forum usually focus on the “other things” such as conference organization and leadership. Because a computer conference is not yet so familiar, reliable, or transparent, it is natural to look for ways to improve it by improving the forum. The current evaluation stimulated many technical suggestions for improving it. Yet they should not overshadow suggestions for improving the other things (Thorngate 1985).

Technical Issues

One cannot pass messages without a medium or exchange them without a forum. Judging from the questionnaires, there were nagging problems with both. Problems with telecommunication links to COSY were surprisingly common and chronic. They were as likely to occur in developed countries as in developing ones and discouraged or prevented many from greater participation.

Alas, there is little that can be done to solve these problems, except to wait for the industry to improve service. There is, however, reason to be optimistic about improvement. Reliable digital telecommunication systems are rapidly replacing their less reliable analogue versions in both developed and developing countries. Within 5 years, perhaps half of the linking problems should be solved. More time can then be spent improving the forum.

Participants did not like COSY. There was little to suggest that those who complained were merely rationalizing their low participation by “blaming the equipment.” Almost all participants considered the COSY training they received to be quite adequate. In addition, there was no relationship between the degree of participation and liking for the system.

The most common complaints about COSY concerned the difficulty of transferring files to and from it. This implies that most participants wanted to use COSY as a file transfer facility, not as an online system for text composition and screen-reading. A few comments indicate why and confirm previous research (e.g., Thorngate 1991).

Most users do not enjoy learning how to use a PC (personal computer) or learning how to use a local mainframe or network system. Few have the time to learn their advanced features. Although COSY may be wonderful and full of these features, most users see it as just another system to master.

Most do not have the time to do so, and have little motivation when the system will not be used for long. In addition, it takes only one or two bad experiences with a telecommunication connection to realize that it is financially dangerous to stay too long on any system that charges for connect time.

Combine such constraints with busy schedules full of interruptions, and the prudent course of action is clear: read and compose messages locally, then get in, transfer files, and get out. As the complaints suggest, the participants in this computer conference found that COSY was not well-designed for this communication style.

In fairness, COSY does have many of the file transfer features that participants desired. Their disuse suggests that future training sessions should emphasize file transfer procedures and ensure that participants are confident in using them.

COSY is not, however, the only alternative for computer conferencing. Recent improvements now make the Internet a viable alternative. Either through closed (restricted) news groups or mailing-list servers of various kinds, the Internet allows users to work on their local systems in conference-like forums, relying on Internet connections to distribute all conference messages automatically to the local systems of the other participants.

The reading and writing of these messages can be done on each participant’s local system. The extra knowledge needed to use the Internet facilities on the local system can be employed beyond any single computer conference. Training can be done locally; so can the solution of telecommunication problems. Conference messages can be collected by the facilitator on her or his own local system, then edited and “published” as a public document file for worldwide retrieval using Internet facilities such as “ftp,” “gopher,” or “wais.”

Furthermore, because the costs of most Internet connections are subsidized by local governments or host organizations, the Internet is generally less expensive to use than COSY. In view of such advantages, it would be worthwhile to consider using the Internet for future computer conference experiments. As with COSY, however, few developing-country organizations currently enjoy direct, affordable access.
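To make the distribution mechanism concrete, the following minimal sketch in present-day Python illustrates the mailing-list (“reflector”) model described above: every message addressed to the list is re-sent to each subscriber, so reading and writing happen on each participant’s local mail system. The subscriber addresses and the SMTP host are hypothetical; the sketch illustrates the principle rather than any particular list-server product.

    # A toy mailing-list "reflector": one conference message in, one copy out
    # to every subscriber. Hypothetical addresses; assumes an SMTP server on
    # localhost and a well-formed incoming message.
    import smtplib
    from email.message import EmailMessage

    SUBSCRIBERS = ["alice@example.org", "bob@example.net"]  # hypothetical members

    def redistribute(original: EmailMessage, smtp_host: str = "localhost") -> None:
        """Forward one conference message to every subscriber on the list."""
        with smtplib.SMTP(smtp_host) as smtp:
            for addr in SUBSCRIBERS:
                copy = EmailMessage()
                copy["From"] = original["From"]
                copy["To"] = addr
                copy["Subject"] = "[IIDCONF] " + (original["Subject"] or "")
                copy.set_content(original.get_content())  # body composed locally
                smtp.send_message(copy)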

Organizational Issues

As the technical issues of a computer conference are resolved, the organizational issues become more apparent. A few can be gleaned from responses to the questionnaire. There seemed to be too many subconferences, and the reasons for offering some were ambiguous. As a result, messages tended to collect in a few popular conferences, although some may have been better suited to the less popular ones.

The problems created by this seem minor and tractable. A hypertext system might ease the difficulty of finding related information in a hierarchical scheme. Greater emphasis might be given to creating more useful titles and keywords. The subtopic design was intended to structure the discussion and to minimize file management tasks. A single discussion area, with the facilitator organizing the results into appropriate read-only topics, might have encouraged more and easier participant input. Future experiments should try different organizational schemes.

There were some complaints about lack of focus, and many requests for more summaries. These may partly reflect the complexity of the conference theme, and partly the extended nature of the conference. Unlike face-to-face conferences concentrated in space and time, computer conferences allow participants to wander around, attend to their lives, and log in for occasional visits.

Following a conference hiatus and faced with screens full of new messages, few participants can be expected to keep tightly focussed on conference goals and themes. A typical participant will read something new that will immediately stimulate a close or remote association; he or she will then type it into the conference, log off, and go home. Several days may pass before the participant reads any responses that have accrued. Under these conditions, one would expect a rapid decline in focus.

Because of such forum characteristics and the problems that they cause, the computer conference facilitator’s task is normally much more difficult and time consuming than that of his or her face-to-face counterpart (Thorngate 1991). Judging from the lack of complaints in the questionnaire and from previous evaluations of computer conferences, Michel Menou did a remarkable job as facilitator.

Finally, it is useful to note that most participants reported sending at least as many private e-mail messages to each other on topics related to the conference as they sent public conference messages. Like the informal discussions that occur in the halls of a face-to-face conference, these e-mail messages are probably crucial to the success of the venture. Further research should study their characteristics and their role in accomplishing conference goals.

In the end, the success of a conference must be judged by the quality of its content. The current evaluation was not intended to assess quality. But the authors hope it has shown that a computer conference is a useful medium for international discussion, and that it should be further refined and used more extensively by international organizations involved in promoting sustainable and equitable development.

References

Arkes, H.; Hammond, K., ed. 1986. Judgment and decision making. Cambridge University Press, Cambridge, UK.

Balson, D. 1985. International computer-based conference on biotechnology: A case study. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Balson, D.; Drysdale, R.; Stanley, B. 1981. Computer based conferencing systems for developing countries. Report of a workshop held in Ottawa, Canada, 26–30 Oct. 1981. International Development Research Centre (IDRC), Ottawa, ON, Canada.

Simon, H. 1976. Administrative behavior (3rd ed.). The Free Press, New York, NY, USA.

Thorngate, W. 1985. Social psychology and the design of computer conferencing systems. Proceedings of the first symposium on computer conferencing and electronic mail. University of Guelph, Guelph, ON, Canada, pp. 181–186.

_____1988. On the evolution of adjudicated contests and the principle of invidious selection. Journal of Behavioral Decision Making, 1, 5–16.

_____1991. Patterns of computer communication: A report on Brucella network activity. In Frank, J., ed., Networking in brucellosis research. United Nations University Press, Tokyo, Japan.

Thorngate, W.; Carroll, B. 1987. Why the best person rarely wins: Some embarrassing facts about contests. Simulation and Games, 18, 299–320.

_____1991. Tests versus contests: A theory of adjudication. In Baker, W.; Highland, M.; van Hezewijk, R.; Terwee, S., ed., Recent trends in theoretical psychology. Springer-Verlag, New York, NY, USA. Vol. 2, pp. 431–438.

Thorngate, W.; Ferguson, T. 1977. Behind the eyeball: Some common misuses of information in human decision making. Canadian Journal of Information Science, 2, 1–11.

Thorngate, W.; Hotta, M. 1990. Expertise and information retrieval. Knowledge: Creation, Diffusion, and Utilization, 11, 237–247.

Tversky, A.; Kahneman, D. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

Measuring the Impact of Information on Development: Related Literature, 1993–1995

Bev Chataway and Atsuko Cooke1

This list reviews literature since the publication of “Measuring the Impact of Information on Development.” Publications before 1993 are well covered in “Measuring…”; they are not included here.

The bibliography is divided into four sections: Section A contains papers and publications directly derived from the International Development Research Centre (IDRC) “impact” project; Section B contains recent writings considered most relevant to the topic; Section C lists related publications of interest. Reviews of the IDRC monograph “Measuring…” are in Section D.

A. Literature Derived from the IDRC Project

Bearman, T.; Griffiths, J.-M.; Menou, M.J. 1993. Toward an assessment of the impact of information on development. Paper presented at the 56th ASIS Annual Meeting, October 1993, Columbus, OH, USA. ASIS, Silver Spring, MD, USA.

Three participants in the IDRC-supported computer conference, “Assessment Indicators for the Impact of Information on Development” and the postconference workshop in Nairobi, present a general overview of the project, their personal observations and the outcome of the exercise. This informal but insightful presentation is available on audio-cassette.

1Head, Research Information Services, and Research Officer, respectively, International Development Research Centre (IDRC), 250 Albert St, PO Box 8500, Ottawa, Ontario, Canada K1G 3H9.

Horton, F.W., Jr. 1994. Analyzing benefits and costs: A guide for information managers. International Development Research Centre (IDRC), Ottawa, ON, Canada. 285 pp.

As one of the participants in the “impact” project, the author recognizes the need for a practical guide to help the information community in developing countries to analyze the benefits versus the costs of resource allocations to information activities. This is a useful management tool for government policymakers, library or information agency managers, and project sponsors. It includes case studies, detailed technical guidelines, and an Excel 3 for Windows Quickstart software diskette.

McConnell, P. (In press). Measuring the impact of information: implications for marketing. Paper presented at the IIMA/IDRC Workshop on “Marketing of Information Products and Services”, Indian Institute of Management, February 1994, Ahmedabad. Sage Press, New Delhi, India.

Menou, M.J. 1993. The impact of information on development: Results of a preliminary investigation. Paper presented at the 3rd International Information Research Conference, July 1993, Poigny-la-Forêt, France.

As the moderator of the computer conference and editor of the IDRC publication, “Measuring the Impact of Information on Development,” the author describes the process and results of the conference and the workshop. He provides personal observations from his experience and offers plans for the next step and suggestions for further research.

Menou, M.J., ed. 1993. Measuring the impact of information on development. International Development Research Centre, Ottawa, ON, Canada. 188 pp.

For many years, institutions in developing countries and development assistance agencies have exerted considerable effort on various information projects on the assumption that these activities contribute to overall economic and social advancement. There had been no substantial study conducted, however, to prove this assumption. With support from IDRC, a group of the world’s leading information scientists participated in a 7-month-long computer conference to identify those parameters or indicators by which the impact of information programs or services can be assessed to ensure the relevance of information activities to development and to provide concrete answers to decision-makers regarding the value of information. This publication discusses details of the process and outcome of the conference and presents the preliminary framework for impact assessment produced at the post-conference workshop in Nairobi. It concludes with suggestions for future activities, such as field testing, training, cost-benefit analysis, and further research areas. It includes a 19-page bibliography.

Menou, M.J. (In press). From data to wisdom: Does IT contribute to the gross national happiness? In Lamberton, D., ed., Beyond competition: The future of telecommunication. Elsevier, Amsterdam.

Menou, M.J. (In press). The impact of information - I. Toward a research agenda for its definition and measurement. Information Processing and Management.

Menou, M.J. (In press). The impact of information - II. Concepts of information and its value. Information Processing and Management.

Stone, M.B. 1993. Assessment indicators and the impact of information on development. Canadian Journal of Information and Library Science, 18(4), 50–64.

In this keynote address presented at the 1993 CAIS/ACSI Conference, the Director General, Information Sciences and Systems Division, IDRC, provides the historical background of IDRC’s support to information activities in developing countries and the rationale for the “impact” project. She explains the outcome of the Nairobi workshop where a conceptual framework for assessment was produced and proposes the creation of a decentralized international network of interested academic and research institutions to continue investigation.

Stone, M.B.; Menou, M.J. 1994. The impact of information on development. Bulletin of the American Society for Information Science, 20(5), 25–26.

The paper briefly summarizes the background and rationale leading to the “impact” project, describes the process and outcome of the first and second phases, and presents future plans.

B. Related Key Literature

Ang, J.; Pavri, F. 1994. A survey and critique of the impacts of information technology. International Journal of Information Management, 14, 122–133.

The paper reviews diverse literature on the impacts of information technology at the societal, organizational, and individual levels. At the societal level, which includes sociotechnical and economic aspects, the authors find that most studies are speculative, anecdotal, or based on surveys. The interest in IT at the organizational level is reflected in a large body of literature, some of which deals with sociotechnical and some with strategic impacts. The authors concede that the impacts of IT are complex and defy straightforward interpretation. The paper concludes with a discussion of the need for a plurality of perspectives in IT impact research.

Feeney, M.; Grieves, M., ed. 1994. The value and impact of information. Bowker Saur, London, UK. British Library Research: Information policy issues. 303 pp.

This publication is based on a series of information policy briefings organized by the British Library Research and Development Department and covers various aspects of the value and impact of information. A comprehensive literature review by D. Badenoch et al. starts with definitions of terms such as “information,” “knowledge,” and “value,” analyzes contexts of value, and describes various approaches to measuring the value of information. It includes a 10-page list of references. J.-M. Griffiths and D.W. King describe the framework of measures used for numerous studies they conducted on the usefulness and value of special and public libraries, the results of which the authors believe demonstrate that libraries are an undiscovered national resource. J. Marshall describes two studies conducted on the impact of information services on decision-making in the financial and health care sectors. M. Collier reports the results of a study on the impact of information on the management of a large academic institution. Also included in this volume are: “Information use and business success: A review of recent research on effective information delivery,” by A. Abell; “What do large companies seek from national information policy?” by B. Williams; “How much does British industry pay for patents and patent information?” by B. Mooney and C. Oppenheim; and “Statistical perspectives add reality to policy debates: How well are we served by our statistical data?” by J. Sumsion and R. Marriott.

Fitzgerald, E.P. 1993. Success measures for information systems strategic planning. The Journal of Strategic Information Systems, 2(4), 335–350.

The article reviews information systems planning effectiveness literature and examines current approaches for measuring success. It provides an outline of approaches currently available and proposes directions for future research.

Hanna, N.; Boyson, S. 1993. Information technology in World Bank lending: Increasing the developmental impact. The World Bank, Washington, DC, USA. World Bank discussion papers, no. 206. 104 pp.

This study examines the patterns of World Bank lending for information applications in developing countries and the payoffs. Its lending for information technology (IT) rose from $379 million in 1986 to $890 million in 1991. There are information systems components in almost 90% of Bank lending operations. To measure the payoffs of Bank lending in IT, techniques from the total quality management field were employed. Quantitative process improvement measures and qualitative perceptions of “process owners” were gleaned from appraisal and evaluation reviews and structured surveys. In particular, three major areas of impact were targeted for examination: transforming trade and tax administration, modernizing public institutions, and poverty and social support. While highlighting the potentially high payoffs from “successful” IT investments, the study also points to common pitfalls and constraints. The study concludes with recommendations for both the Bank and developing countries.

Marshall, J.G. 1993. The impact of the special library on corporate decision-making: Final report of a research project funded by the Special Libraries Association. Special Libraries Association, Washington, DC, USA. SLA research series, no. 8.

Managers and executives from five major financial institutions were asked to request some information from their special library related to a current corporate decision-making situation and to evaluate the impact of the information received. Better-informed decision-making was reported in 84% of the responses. Special libraries are particularly effective in supplying new knowledge in decision-making situations and in increasing the level of confidence of managers and executives in the decisions being made. The results showed that, when libraries are used in decision-making situations, the information provided is frequently perceived by managers and executives as having a significant impact on their actions.

C. Other Literature of Interest

Ferreira, J.R. 1994. O impacto da tecnologia da informação sobre o desenvolvimento nacional. (The impact of information technology on national development.) Ciência da Informação, 23(1), 9–15.

Gashaw, K.G., et al. 1995. Evaluation of the impact of CAB International’s CD-ROM databases on sustainable development in Africa. CAB International, Wallingford, UK. 60 pp.

Kantor, P.B.; Saracevic, T. 1995. Studying the cost and value of library services. Alexandria Project Laboratory, School of Communication, Information and Library Studies, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA.

Katz, A.I. 1993. Measuring technology’s business value: organizations seek to prove IT benefits. Information Systems Management, 10(1), 33–39.

McPherson, P.K. 1994. Accounting for the value of information. Aslib Proceedings, 46(9), 203–215.

Mahmood, M.A.; Mann, G.J. 1993. Measuring the organizational impact of information technology investment: An exploratory study. Journal of Management Information Systems, 10(1), 97–122.

Nweke, K.M.C. 1993. Providing information services for rural mobilisation in special libraries for national development in Nigeria. Annals of Library Science and Documentation, 40(1), 1–5.

Parker, J.; Houghton, J. 1994. The value of information: Paradigms and perspectives. Proceedings of the 57th Annual Meeting of the American Society for Information Science, 31, 26–33.

Parsons, D.F. 1994. The impact of information technology on health care: A practitioner’s perspective. Telematics and Informatics, 11(2), 127–135.

Rojas, A. 1994. Impact of information on decision making: A relation to be constructed. REDUC, Santiago, Chile. 13 pp.

Segars, A.H.; Grover, V. 1994. Strategic group analysis: A methodological approach for exploring the industry level impact of information technology. Omega: International Journal of Management Science, 22(1), 13–34.

Tellis, D.A. 1993. Value of information revisited. Perspectives in Information Management, 3(1), 38–45.

Watson, R.T., et al. 1993. User satisfaction and service quality of the IS department: Closing the gaps. Journal of Information Technology, 8(4), 257–265.

Zulu, S.F.C. 1994. Africa’s survival plan for meeting the challenge of information technology in the 1990s and beyond. Libri, 44(1), 77–94.

D. Measuring the Impact of Information on Development — Reviews

Development Communication Report, (83), 22, 1993/4.

Documentaliste — Sciences de l’Information, 31(3), 186–188, 1994.

Information Development, 11(1), 66–67, 1995.

Information Processing & Management, 31(2), 255–256, 1995. Reviewed by M.M. Aman.

International Forum on Information and Documentation, 19(3–4), 42–43, 1994. Reviewed by V.A. Markusova.

International Journal of Information Management, 15(1), 69–70, 1995. Reviewed by I. Rowlands.

Journal of the American Society for Information Science, 46(1), 75–77, 1995. Reviewed by R. Samarajiva.

La Lettre de PADIS Newsletter, 9(2), 7, 1994.

Managing Information, 1(7–8), 50, 1994. Reviewed by S.P. Webb.

The Network: a Newsletter for the Equal Exchange of Information on Trade and Technology, 7(4), 17, 1994. Reviewed by B. Thomson.

Prometheus: The Journal of Issues in Technological Change, Innovation, Information Economics, Communication and Science Policy, 13(1), 120–121, 1995. Reviewed by M. Jussawalla.

Third World Libraries, 5(2), 89–91, 1995. Reviewed by D. Rosenberg.

Participants

Noel Boissiere Consultant, 8 Rooknest Trail, Agincourt, Ontario, Canada M1S 3W2

Tel: (416) 299-5036, Fax: (416) 299-5058

Audrey Chambers Director, Documentation and Data Centre, Institute of Social and Economic Research (ISER), University of the West Indies - Mona, Kingston, Jamaica

Fax: 1-809-927-2409, E-mail: amcham@uwimona.edu.jm

Carol Collins Director, Information and Communication, Caribbean Community Secretariat, Bank of Guyana Bldg, PO Box 10827 Georgetown, Guyana Fax: 1-592-2-67816

E-mail: collinsc@opus-networx.com/caricom@undp.org

Maria Francini, MD Research Assistant, Psychology Department, Carleton University, Ottawa, Ontario, Canada K1S 5B6

Fax: (613)788-3667, E-mail: mfrancin@ccs.carleton.ca

José-Marie Griffiths Vice Chancellor for Computing and Telecommunications, University of Tennessee, 507 Andy Holt Tower, Knoxville, Tennessee 37996-0157, USA

Fax: 615-974-4967, E-mail: JGRIFFIT@UTKVX.UTK.EDU

Nancy Hafkin Officer-in-Charge, PADIS, Economic Commission for Africa (ECA), PO Box 3001, Addis Ababa, Ethiopia

Tel: 251-1-51-11-67, Fax: 251-1-51-44-16 or 1-212-963-4957

E-mail: Nancy_Hafkin@padis.gn.apc.org / hafkin.uneca@un.org

L.J. Haravu Senior Manager, Library and Documentation Services, ICRISAT Asia Centre, Patancheru, PO Andhra Pradesh 502 324, India

Fax: 91-40-241239, E-mail: L.HARAVU@CGNET.COM

Forest Woody Horton, Jr Information Consultant, 500 23rd Street NW, Suite B901, Washington, DC, 20037, USA

Fax: (202) 223-5534, E-mail: WOODY@CNI.ORG

Kingo Mchombu Lecturer, Dept of Library and Information Studies, University of Botswana, Private Bag 0022, Gaborone, Botswana

Fax: 267-356-591,

E-mail: mchombu@NOKA.ub.bw

Charles T. Meadow Professor Emeritus, Faculty of Information Studies, University of Toronto, 140 St George Street, Toronto, Ontario, Canada M5S 1A1

Fax: (416) 971-1399, E-mail: meadow@fis.utoronto.ca

Michel J. Menou Consultant, CIDEGI, 13, rue Nationale, F-49530 Les Rosiers sur Loire, France

Fax: +33-1-49-85-01-79, E-mail: michel.menou@utopia.fnet.fr

T.N. Rajan Consultant, C-30/343, Eastend Apts, Mayur Vihar Ph. 1 Extension, Chilla, Delhi 110 096, India

Fax: c/o L.J. Haravu at 91-40-241239 E-mail: L.HARAVU@CGNET.COM

Alfredo Rojas General Coordinator, Latin American Educational Information and Documentation Network (REDUC), CIDE, Santiago, Chile

Fax: 562-671-8051, E-mail: cidei@huelen.reuna.cl

Louise F. Spiteri Research Assistant, Faculty of Information Studies, University of Toronto, 140 St George Street, Toronto, Ontario, Canada M5S 1A1

Fax: (416) 971-1399, E-mail: spiteri@fis.utoronto.ca

Christian Sylvain Research Assistant, Graduate School of Library and Information Science, University of Western Ontario, London, Ontario, Canada N6G 1H1

Fax: (519) 661-3506, E-mail: csylvain@julian.uwo.ca

Liwen Vaughan Assistant Prof., Graduate School of Library and Information Science, University of Western Ontario, London, Ontario, Canada N6G 1H1

Fax: (519) 661-3506, E-mail: lvaughan@julian.uwo.ca

Han-Dong Wang Institute of Scientific and Technical Information of Shanghai, 1634 Huai Hai Zhong Lu, Shanghai, China 20031

Fax: (21)-4335311

IDRC Staff

Head office address: IDRC, Information Sciences and Systems Division, 250 Albert Street, PO Box 8500, Ottawa, Ontario, Canada K1G 3H9 Tel: (613) 236-6163, Fax: (613) 238-7230

Ronald Archer Project Manager

E-mail: RARCHER@IDRC.CA

Bev Chataway Head, Research Information Service

E-mail: BCHATAWAY@IDRC.CA

Atsuko Cooke Research Officer

E-mail: ACOOKE@IDRC.CA

Fay Durrant Senior Program Officer

E-mail: FDURRANT@IDRC.CA

Paul McConnell Director

E-mail: PMCCONNELL@IDRC.CA

Tavinder Nijhawan Research Assistant

E-mail: TNIJHAWAN@IDRC.CA

Martha B. Stone Director General

E-mail: MSTONE@IDRC.CA

Katherine M. Kealey Publishing Consultant, 1258 Portland Ave, Ottawa, Ontario, Canada K1V 6E9

Fax: (613)523-5074, E-mail: KKEALEY@IDRC.CA

About the Institution

The International Development Research Centre (IDRC) is committed to building a sustainable and equitable world. IDRC funds developing-world researchers, thus enabling the people of the South to find their own solutions to their own problems. IDRC also maintains information networks and forges linkages that allow Canadians and their developing-world partners to benefit equally from a global sharing of knowledge. Through its actions, IDRC is helping others to help themselves.

About the Publisher

IDRC Books publishes research results and scholarly studies on global and regional issues related to sustainable and equitable development. As a specialist in development literature, IDRC Books contributes to the body of knowledge on these issues to further the cause of global understanding and equity. IDRC publications are sold through its head office in Ottawa, Canada, as well as by IDRC’s agents and distributors around the world.