Micro-computers as Tools for Social Science Research in African Countries

COLIN DARCH
Senior Documentalist, SAPES Trust

Keynote Paper for the Standing Committee on Informatics,
presented to the Zimbabwe Research Council’s
Biennial Science and Technology Symposium,
Harare, September 1990

This paper attempts to present an overview of some of the problems involved in using micro-computers in text-oriented social science research, and more specifically of how the use of micro-computer technology can change the nature of the research process itself, both in investigation and in presentation. Quite specifically excluded are the uses of the computer for large-scale and complex calculation and modelling (number-crunching) in the pure sciences and engineering. The paper tries to identify some of the contradictions which arise from the application of relatively high-technology solutions to problems in societies, such as Zimbabwe, with a weak technological base. It is thus highly selective and largely descriptive, and although it raises some questions, it endeavours to avoid prescriptions.

The body of the paper is divided into three main sections, namely The Micro-Computer as a Storage and Retrieval Device, Manipulating Data, and The Computer as a Production Device. In the closing section, the paper looks briefly at the implications of micro-computer technology for work practice and education in poor countries, posing some questions about the ways in which the use of micro-computers can either strengthen social and educational elitism, or alternatively act as a force for the empowerment of individuals and organisations.

INTRODUCTION: THE POLITICAL ECONOMY
OF THE MICRO-COMPUTER

Many of the questions examined in this section fall, it may be argued, within an as-yet underdeveloped discipline which has been termed ‘micro-computer studies’. One writer has defined this area as including:

the discussion of intellectually challenging topics and problems related to the micro-computer from points of view other than computer science (as it is currently interpreted), and in greater depth than is common in the consumer magazines. [1]

That there is a need for such a new discipline is arguable. But it is certainly true that neither using a computer with the elevated level of skill known as ‘computer literacy’, nor having high-level engineering skills in either hardware or software, guarantees any real understanding of the social or philosophical implications of the truly astonishing and universal impact of the micro-computer over the last decade.

At a national computer conference held recently by the South African mass democratic movement (MDM) in Cape Town, and attended by several hundred delegates from grass-roots organisations, the slogan shouted by Comrade Chair at the beginning of each work session to get the delegates into the right mood was, ‘Viva Computers for People’s Power, Viva!’ [2]. The fact that such a slogan sounds perhaps slightly odd, but not completely ridiculous to us in the 1990s, is an indication of the extent to which computer concepts have entered the popular consciousness and come to be regarded over the last decade as an essential and affordable part of the development process. [3]

But although the technology is indeed essential, it is not an unproblematic part of the process, for several reasons. There is a trade-off, for example, between the analytical power which we gain from the use of data processing technology, and the technical dependency which derives from the fact that the basic norms of informatics are almost entirely defined and controlled by the industrialised North. There is, moreover, a serious contradiction, usually ignored in the literature, in the fact that the technology is made available to us at low cost, at least partly because of the exploitation by multinational corporations of the cheap labour of poor women in east and south-east Asia. And at the most basic practical level, in developing countries there are all the problems associated with the correct identification of needs, and thereafter of the right systems to meet those needs.

Powerful vested interests are also at work, in this most rational of all technologies, to perpetuate irrational near-monopoly control of a profitable trade. Data processing technology is one of the fastest growing industrial sectors in the world. Four years ago, in 1986, it already occupied a global market worth nearly US$400 billion. This market, it is estimated, will have grown to nearly US$1,200 billion by the middle of the present decade, in 1995. [4] The stake of the developed world in marketing computer technology to the South is therefore obvious. [5] There have been some attempts to place the question of the social nature of informatics, and its policy implications for Third World countries (including the fact of its domination by a handful of corporations) onto an explicitly political agenda. The ‘processes of global North Americanism’, as one Cuban commentator aptly termed them [6], are as well served by the computerisation of the world with Apples and PCs as by the spread of hamburgers and Coca Cola.

Some quite serious, if sporadic, attention has been paid to the issues by Third World governments. As early as June 1981, an age ago in the reckoning of developments in computer technology, the ‘Declaration of Mexico on Informatics, Development and Peace’ was signed by representatives of nine African states, among other countries. The document showed considerable perspicacity for its time, and clearly anticipated some of the impact that the technology was likely to have. The declaration called for ‘an awareness of the implication [of the technology] for cultural identity and diversity,’ and also warned that ‘the capacity to assimilate and evolve technology depends on the political will to adopt national strategies [...]’ [7] Even more frank was an article published in the Cuban magazine Prisma in the mid-1980s, which pointed out that

this technology is controlled internationally by a small group of companies which are using new technology to guarantee their interests and the continuity and reinforcement of an unjust, unequal market system. [8]

The author of this polemical piece argues in favour of six points, which had been adopted by the Non-Aligned Movement and the Group of 77, and which included strengthening UNESCO’s multilateral programmes in informatics, and promoting cooperation between Latin American, Asian and African countries.

Generalisation in this area is complicated by the fact that, although most Third World countries have some experience of computer technology, the situation is far from uniform, both in terms of the costs of access and also in terms of national policy. Analysts have proposed several different ways of categorising various national strategies, using as criteria both the state of development of the local industry, and national policy, if any exists. Such categorisation is important, not least because those countries with the lowest levels of technical development are those most likely to need to make the kind of quantum leap forward in information provision and research technique which micro-computer technology can stimulate. Simply in terms of productivity, as we all know, a simple word processor can provide the means to produce and disseminate a vastly greater quantity of information.

At the simplest level, the basic division in the Third World is between those countries where the use of computers is widespread in a range of different types of activity, and those where it is limited to a narrow section of society. The distinction between South Africa, where micro-computers are used in commerce and industry, by community groups, trade unions, political parties, schools and even a substantial number of individuals; and Zimbabwe, where computers are, broadly speaking, used mainly by some but not all of the larger commercial enterprises, illustrates this divide. Indeed, it is probably fair to say that the majority of African countries (with a few exceptions which are mentioned below) are still in their computing infancy.

A more sophisticated categorisation has been proposed by Jozia Beer Gabel of the University of Paris I (Panthéon-Sorbonne). In a first category he places the majority of Third World countries, which ‘have policies of using computers and controlling the facilities.’ In a second category, he places countries which also have large internal markets and which are trying to set up local industries to meet domestic demand, such as Brazil and India, both with huge populations and significant local high-technology manufacturing.

In the third category, Beer Gabel places countries such as Taiwan, Singapore, Hong Kong, and South Korea, which manufacture equipment and systems for the major players in the world market, some of which remains locally available at low prices. [9] Even more sophisticated, and indeed some might say optimistic, is the line of analysis of Patrick Haas, who has gone on record as believing that in informatics, ‘under-development is not unavoidable.’ Haas uses a five-point scale to place Third World countries in a ranked hierarchy, from ‘mastery of one or two skills’ and ‘embryonic national industry’ through to ‘industry that leads the world.’ In Africa, according to Haas, only Algeria, Senegal, Côte d’Ivoire and Gabon have the skills and the embryonic national industry. Even in these countries the technology is nonetheless still imported from foreign firms. [10]

To look in more detail at one of these cases, Côte d’Ivoire takes computer technology very seriously indeed. The government is currently applying the second national computing plan, covering the years 1986-1990, and spends over one percent of its GDP on computer technology, roughly twice as much as countries at a comparable level of development. Activities include local French-language software development, and local PC assembly of a model known as the ‘Ramses I.’ [11]

Some developing countries have open policies towards the import of high-technology equipment; others protect local industry even when it is not capable of meeting local demand. Which of these options is preferable over the short, medium and long terms is still difficult to say. The argument about protectionism versus liberalisation is a complex one. Emerging local industries certainly need government support and protection, and even then, may not succeed. Brazil’s strongly protectionist policy has had mixed success. With an estimated local market capacity of about a million machines, the country had about 100,000 computers in place in early 1989, of which around 40 percent were foreign contraband from the US. [12]

Closer to home, and as recently as last year, Zimbabwe’s then Minister of Information, Posts and Telecommunications expressed strongly protectionist sentiments as far as liberalisation in telecommunications was concerned. The Minister argued that liberalisation would give Northern companies a chance to make a quick killing, and would reverse gains in technology transfer by finishing off presently uncompetitive local companies. Advances in the area of standardisation would also be lost. [13]

THE MICRO-COMPUTER AS A
STORAGE AND RETRIEVAL DEVICE

Putting aside for a while these questions of the political economy of the micro-computer, let us look at the ways in which this technology’s rapid rate of change affects its use in social science research. Ten years ago, most personal computers could handle little more than a book chapter or an academic paper at any given time. Nowadays, machines routinely run several different programmes together, swapping data around between them, and can easily store whole encyclopaedias, or even small libraries.

Using such technology productively requires new approaches to traditional scholarly activities. These developments in micro-computers, according to some commentators, have actually produced changes in the way researchers think. These changes begin at the perceptual and linguistic levels, and then gradually affect the way in which researchers approach new problems. An example of this is the spreadsheet, a type of program invented some ten years ago for financial modelling. The spreadsheet is essentially a large grid, into each cell of which the user may insert either numerical data, or formulae for their manipulation. Changes in the data in a given cell are reflected virtually instantaneously in all the other cells (the ‘what if?’ effect). The later three-dimensional spreadsheet allows for the addition of a third axis, often time. It can be argued that spreadsheets, for their users, have become more than just a convenient tool: they have actually started to define what researchers see as data, as well as how they describe it. In the words of one analyst:

[...] four-dimensional representations of datasets (i.e. time-changing rotating 3-D datasets) are becoming ‘the appearance of the data’, with ‘flat spots’ and ‘wrinkles’ added to the language of ‘flyers’ and ‘clusters’. [14]
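
The ‘what if?’ effect described above can be illustrated with a minimal sketch in Python: some cells hold raw figures, others hold formulae, and changing one figure causes every dependent cell to be recalculated at once. The cell names, figures and formulae here are invented purely for illustration.

    # Minimal sketch of spreadsheet-style recalculation (the 'what if?' effect).
    # Cell names, figures and formulae are invented purely for illustration.
    data = {"sales": 1000.0, "costs": 650.0}                 # cells holding figures
    formulae = {
        "profit": lambda c: c["sales"] - c["costs"],         # cells holding formulae
        "margin": lambda c: (c["sales"] - c["costs"]) / c["sales"],
    }

    def recalculate(data, formulae):
        """Re-evaluate every formula cell against the current data cells."""
        return {name: f(data) for name, f in formulae.items()}

    print(recalculate(data, formulae))    # {'profit': 350.0, 'margin': 0.35}
    data["costs"] = 700.0                 # the 'what if?' step: change one figure...
    print(recalculate(data, formulae))    # ...and every dependent cell changes at once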

Word processing and associated programs, including desktop publishing, have arguably had a similar impact, not simply automating the process of writing but changing its nature. We shall look more closely at this below.

A little over ten years ago, affordable micro-computers had severe size limitations, both in terms of memory (which restricted the size of programs) and in terms of storage capacity (which reduced the amount of easily accessible data). The popular Osborne 1, a CP/M based machine which was launched in 1981, was the first fully-functional portable computer. It boasted 64 KB of RAM and two 96 KB floppy disk drives, as well as a tiny five-inch screen. [15] Although this computer, which sold for around US$1,800 and weighed nearly 11 kilos (officially 24 lb.), was perfectly adequate for linear procedures such as word processing, users of data-base programs (such as the early dBase II) or of spreadsheets (such as Visicalc or SuperCalc) almost immediately bumped up against the limits of memory and storage.

There were other problems with the first generation micro-computers, which prevented them from being used as primary storage and retrieval devices. Although the CP/M operating system, which was then the established standard, used a common format for 8 inch floppy diskettes, making data transfer between computers quite easy, the formatting for the more widely-used 5.25 inch floppies differed according to the manufacturer of the machine. Thus, an Osborne 1 could not directly read the data from a diskette prepared on, say, a Commodore PET machine. Although there were various solutions available at the time, which ranged from software emulations to wiring the machines together and using a communications programme, they were clumsy and inefficient.

In addition, different manufacturers used different methods of addressing the screen, so that programs could not be ported between different machines (they would produce garbage on the screen). While this had the effect of imposing a practical control on software piracy, it also added to development and marketing costs, since vendors either had to keep various versions of a popular program in stock for the most widely used machines, or software houses had to provide complex installation routines. It also incidentally provided a tidy source of income for early hackers, who could earn a little extra by the highly illegal practice of opening up a friend’s copy of WordStar and installing it on another machine for a fee of a few dollars.

The advent and rapid rise to market dominance of the DOS-based PC in the early 1980s, together with the affordable hard disk, has changed this scene, if not overnight, then in a remarkably short period of time. The 360 kilobyte 5.25 inch double-sided floppy disk, with 40 tracks per side, became the de facto standard for virtually the whole world of PC computing, with the notable exception of the Apple family of machines. In addition, other procedures, such as screen addressing and keyboard layout were also standardised. From the users’ point of view, this meant that not only could data be moved from one machine to another with no technical difficulty at all, but so could programs.

This marked the beginning of the era of software copy protection, hardware dongles, [16] and logically and inevitably, commercial protection-breaking utilities such as NoKey, NoGuard, and various others.

But it is the rapid development of large-scale storage capacity for micro-computers (hard disks, and compact disk or CD storage) which has created a situation in which individual researchers, as well as quite small institutions, can store vast amounts of data. (The data itself, either in the form of material down-loaded from commercial vendors, or input by the researchers themselves, can be acquired in large quantities, with astonishing rapidity, in a research environment.) From a situation ten years ago, when to add on an external hard disk with 10 megabytes of capacity to a CP/M computer would cost several thousand US dollars, we have reached a point where a top-of-the-range laptop can store over 100 megabytes, and where desktop computers with half a gigabyte of storage are common.

It is certain that the growth of both memory and cheap storage capacity is going to continue. Although compact disks and the drives to read them are still expensive, the prices are dropping. It is possible that within a few years digital audio tapes (or DAT) will be available for cheap bulk backup storage; it is also likely that the 20 megabyte floppy will become a marketing reality within a year or so. The main problem that computer users will face in the immediate future is likely to be organisation and retrieval of data, rather than a shortage of space.

But what does a social scientist do with 500 megabytes of text? How can it be used efficiently? The central problem is retrieval, but most of the well-known traditional data-base programs (such as dBase III and IV, Rbase, Revelation, and so on) are not primarily oriented towards the efficient retrieval of free-form, unstructured textual data. The researcher clearly wants a text management program, to facilitate the setting up and maintenance of whatever kind of customised free-form data-base he or she may need. [17]

A note of caution is necessary here. The term data-base is used in different senses by theoreticians, computer professionals and by the average computer user. It is necessary to distinguish between the rigour of E. F. Codd’s relational theory of data at one end of the scale [18] and flat file, text indexing, and spreadsheet programmes at the other, with the reality of the average commercial data-base management system (DBMS) somewhere in between. Such distinctions are in practice often blurred.

In the broad sense, then, data-bases can range through text management systems, bibliographic data-bases, and economic time series systems (three-dimensional spreadsheets), to name three of the most common. The popular term also covers specialised applications for the manipulation of different types of data (numeric, text, graphics) within the same system. Even geographic mapping and macro-economic modelling are now possible on relatively small and cheap machines. [19]

MANIPULATING DATA

It is probably true to say that many social science researchers have, until recently, used micro-computers mainly as small- or medium-scale text storage devices, retrieving data in a more-or-less haphazard and unstructured way. In this sense, social scientists differ from engineers or natural scientists, who are much more likely to be used to the so-called ‘number-crunching’ applications or simulations, which allow them to solve problems which they could not have approached previously. Work on such mathematical problems as the theory of chaos is a concrete example of this.

But sophisticated bibliographic data-bases, whether for customised listings or with data included in the package, are often the first type of application used by social scientists after the word processor, and they demand a high level of computer literacy from their users. The large storage devices which made such data resources available on-line to users in the 1980s will soon be available to the individual user too. A bibliographic data-base program needs to be formally structured, with fields for both personal and corporate authors (which must be repeatable in the case of multiple personal authors), for titles, place of publication, date, and so forth. At the same time it must be flexible, since the data is likely to be of extremely variable length; a program which requires fixed-length fields, each defined at the maximum possible length, will gobble up disk storage with empty fields.
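
The structural requirements just described, repeatable author fields and data of variable length, can be sketched in Python as follows. The record shown is hypothetical, and the sketch makes no claim about how any particular bibliographic package is implemented.

    # Sketch of a bibliographic record with repeatable, variable-length fields.
    # The record contents are hypothetical.
    record = {
        "personal_authors": ["Darch, Colin", "Example, A. N."],   # repeatable field
        "corporate_authors": ["SAPES Trust"],                      # also repeatable
        "title": "Micro-computers as tools for social science research",
        "place": "Harare",
        "date": "1990",
    }

    def all_authors(rec):
        """Return every author, personal and corporate, however many there are."""
        return rec["personal_authors"] + rec["corporate_authors"]

    # Because each field is a list or string of arbitrary length, no storage is
    # wasted padding short entries out to a fixed maximum field length.
    print(all_authors(record))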

This is a developing market. It is noticeable that northern academic publications such as the New Scientist or the Times Higher Education Supplement now regularly feature reviews of software for indexing or otherwise organising and accessing electronically-stored textual information. [20]

Relatively new object-oriented programming concepts allow users to link together data and the processes which are specific to it, so that the computer ‘knows’ how to deal with dates, for example. So-called ‘expert systems’ can assimilate and reproduce reasoning processes, although still in a crude way.
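
As a rough illustration of the object-oriented idea, data and the operations specific to it can be packaged together, so that a date ‘knows’ how to compare and display itself. This is a general sketch in Python, not a description of any particular product.

    # Sketch of the object-oriented idea: a date carries its own behaviour.
    class PublicationDate:
        def __init__(self, year, month):
            self.year, self.month = year, month

        def is_before(self, other):
            """The object itself 'knows' how dates are compared."""
            return (self.year, self.month) < (other.year, other.month)

        def display(self):
            """...and how it should be written out."""
            return f"{self.month:02d}/{self.year}"

    d1, d2 = PublicationDate(1989, 6), PublicationDate(1990, 9)
    print(d1.is_before(d2), d1.display())   # True 06/1989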

Based on these developments, and moving beyond word processing and data-bases, vendors are now selling the data along with the means to use it. Compact disk publishing is in the process of changing the way reference works such as dictionaries and encyclopaedias are used. The second edition of the Oxford English Dictionary, the standard work of reference for the language around the world, occupies twenty large volumes in its paper form. On compact disk (CD), it can not only be carried around in a brief-case, but also comes with a search program which can perform in moments tasks which would be virtually impossible with the hard copy, organised as it is by the alphabetical order of words. One reviewer of the CD asked the data-base how many loan-words had come into English from Turkish since 1650, and had his answer in moments; such a query would be virtually unanswerable from the printed volumes.

Several other developments, representing significant and radical changes in the way text software is designed, are having an impact on the way social scientists deal with micro-computers. These include:

[...] hypertext (and HyperTalk), textbases, outliners, text retrievers, electronic reference works, indexers, document comparison programs, co-editing programs, shorthand programs, bibliography programs, the so-called personal information managers, parsers, communications programs, and spelling and grammar checkers - the debuggers of writing in natural languages. [21]

These tools have reduced the drudgery in producing and manipulating texts, and have made it not only possible but actually easy to undertake tasks which a researcher might not otherwise attempt. A simple example might be to check the accurate spelling of a particular name or phrase throughout a long text.
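
By way of example, the following short Python sketch lists every variant spelling of a name that occurs anywhere in a long text; the name and the sample text are invented.

    # Sketch: find every variant spelling of a name in a long text.
    import re

    text = "President Chissano spoke in Maputo. Later 'Chisano' was quoted again."
    variants = re.findall(r"\bChiss?ano\b", text)    # matches Chissano and Chisano
    print(sorted(set(variants)))                     # ['Chisano', 'Chissano']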

But the central problem remains: in order to retrieve existing data efficiently from free text, the ideal program must have several features. First, it must allow the user to specify the four main logical operators, ‘and’, ‘or’, ‘not’, and ‘exclusive or’. These allow him or her to link search terms together in a meaningful way: for example, SOUTHERN and AFRICA but not SOUTH and AFRICA, in a search to retrieve information on the SADCC and Frontline countries while excluding South Africa itself.

Second, it should allow the user a reasonable number of search terms. Some current programs allow up to 255 terms in one search operation. If the program allows ‘right truncation’, that is the use of a wild card at the end of a word which may have alternative spellings or grammatical forms (e.g. DESTABILI*, which would produce both DESTABILISATION and DESTABILIZATION, as well as DESTABILISED, DESTABILISING, and so on), then the total number of terms in a search can grow rapidly.
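
The logical operators and right truncation just described can be sketched in a few lines of Python. The documents and search terms are invented, and the sketch makes no claim about how any commercial retrieval package works internally.

    # Sketch of Boolean searching with right truncation over free text.
    import re

    documents = {
        1: "SADCC ministers discussed destabilisation in SOUTHERN AFRICA.",
        2: "Sanctions against SOUTH AFRICA were debated at the United Nations.",
    }

    def contains(doc, term):
        """Whole-word match; a trailing * is right truncation (e.g. DESTABILI*)."""
        pattern = r"\b" + term.replace("*", r"\w*") + r"\b"
        return re.search(pattern, doc, re.IGNORECASE) is not None

    for number, doc in documents.items():
        # SOUTHERN and AFRICA, but not (SOUTH and AFRICA)
        wanted = (contains(doc, "SOUTHERN") and contains(doc, "AFRICA")
                  and not (contains(doc, "SOUTH") and contains(doc, "AFRICA")))
        print(number, wanted, contains(doc, "DESTABILI*"))   # 1 True True / 2 False False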

Much more difficult to deal with using presently available programs are the question of hierarchical relationships between terms, and the related problem of synonyms. In a search of a text system for all references to, say, WEAPONS, how is the program to know that this term should include all mentions of the AK-47, of guns in general, of armaments, of bomber aircraft, and so on? All such terms appear simply as isolated text strings as far as the program is concerned. Some sophisticated management systems, such as the relatively little-known program Status [22], do however allow users to ‘teach’ the computer relationships of this type between text terms. In Status, it is additionally possible to define a term such as WEAPONS as including AK-47s, and to retain this definition for future searches. Synonyms are a little different, because they are terms with the same level of meaning, but it is possible to ‘teach’ the computer that, for example, JOAQUIM CHISSANO and PRESIDENT OF MOZAMBIQUE have the same referent.
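
Relationships of this kind can be sketched as a simple term-expansion table; the term lists below are invented, and the sketch does not describe how Status itself is implemented.

    # Sketch of term expansion: a broader term is 'taught' to include narrower
    # terms, and a synonym is mapped to its common referent.
    narrower = {"WEAPONS": {"AK-47", "GUNS", "ARMAMENTS", "BOMBER AIRCRAFT"}}
    synonyms = {"PRESIDENT OF MOZAMBIQUE": "JOAQUIM CHISSANO"}

    def expand(term):
        """Return the full set of text strings a search for `term` should cover."""
        terms = {term} | narrower.get(term, set())
        return terms | {synonyms.get(t, t) for t in terms}

    print(expand("WEAPONS"))                   # the broader term plus its narrower terms
    print(expand("PRESIDENT OF MOZAMBIQUE"))   # both the phrase and JOAQUIM CHISSANO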

Another difficulty is related to the proximity of terms to each other. If phrases are allowed, then searching for SOUTH AFRICA presents no difficulty. If not, then a search for SOUTH and AFRICA may produce a hit because SOUTH America is mentioned in the first line of a text and North AFRICA in the 500th line. Status allows this proximity definition in both directions, permitting the retrieval, in a well-formulated search, of MINISTER (or MINISTRY) OF EDUCATION and EDUCATION MINISTER, but excluding a hypothetical EDUCATION OF THE FOREIGN MINISTER (where the terms are three words apart).
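
A proximity test of this kind reduces, in essence, to comparing word positions. The following sketch simply counts positions in the word list and is deliberately cruder than the facilities Status offers.

    # Sketch of a proximity test: two terms count as a hit only when their word
    # positions are within `window` of each other, in either order.
    def near(text, term_a, term_b, window=2):
        words = [w.strip(".,'").upper() for w in text.split()]
        places_a = [i for i, w in enumerate(words) if w == term_a.upper()]
        places_b = [i for i, w in enumerate(words) if w == term_b.upper()]
        return any(abs(i - j) <= window for i in places_a for j in places_b)

    print(near("The Minister of Education spoke.", "MINISTER", "EDUCATION"))        # True
    print(near("The education of the foreign minister.", "MINISTER", "EDUCATION"))  # False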

Some text systems are semi-structured. They may attach to a large chunk of free text a bibliographic reference, with its various fields for author, title, and so on. It should be possible to search whole records, or certain fields only.
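
A sketch of such a semi-structured record, searchable either as a whole or field by field, might look like this; the record itself is hypothetical.

    # Sketch of a semi-structured record: free text plus bibliographic fields.
    record = {
        "author": "Darch, Colin",
        "title": "Micro-computers as tools for social science research",
        "text": "The paper discusses storage, retrieval and production ...",
    }

    def search(rec, term, field=None):
        """Search one named field, or every field when none is specified."""
        values = [rec[field]] if field else rec.values()
        return any(term.lower() in value.lower() for value in values)

    print(search(record, "retrieval"))                  # whole record: True
    print(search(record, "retrieval", field="title"))   # title field only: False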

THE COMPUTER AS A PRODUCTION DEVICE

Modern micro-computers can, as we have seen, store vast amounts of information, and provide better and quicker access to it than any previous storage device. Thanks to their capacity to treat text both as text and as graphics images, they can now also be used to produce almost any kind of printed product, from a simple newspaper through to complete books. [23]

The word-processing and text management revolution has changed the way in which many people look for information and how they write. The desktop publishing revolution, based on the cheap availability of the laser printer, is in the process of changing publishing concepts. Journalists now routinely transmit copy halfway around the world by computer. Conferencing and bulletin boards enable researchers to exchange information quickly and efficiently.

But to use computers as a cheap and sophisticated means of producing paper copy is not the only way in which they can be used as production devices in the dissemination of research results. In some areas of research activity, the delays involved in conventional publishing of results are unacceptable. During the 1989 controversy over the possibility of ‘cold fusion’, much of the scientific communication was conducted over a spontaneous but closed fax network, to which only certain participants had access.

Can computer networks be used in this way? In fact, conferencing facilities on existing bulletin board systems and networks already provide some of the necessary conditions. However, up-to-date newsletters such as SouthScan, which is published on both the GeoNet and APC systems, also come out in paper. The likelihood of electronic publishing replacing paper in the near future seems remote, despite the optimism of some commentators. [24]

THE COMPUTER AND ELITISM

In his recent opening speech at the Palestine Liberation Organisation-sponsored Computer Camp for local youths, Zimbabwe’s Minister of Higher Education referred to the computer as an ‘important gadget.’ [25] This expression gets it precisely right, because the computer is indeed important in our lives, but is only a tool, a way of getting things done, a gadget. But too often popular mystifications of the computer and its supposed ability to solve all difficulties lead to disaster.

Although it has been said many times before, it bears repeating again: micro-computers are effective tools only in the hands of people or organisations which are already well-organised and have a clear idea of the tasks which need to be done. ‘Garbage in, garbage out’ applies not only to data, but to the organisation of tasks.

The introduction of micro-computer technology can have two quite different effects within a given organisation. If it is introduced as a panacea, as a mysterious and all-powerful solution to all the problems that the organisation faces, then it will often end up creating a techno-elite of one or two people, who control the systems and end up in a very powerful position. If, on the other hand, the technology is introduced as a way of increasing productivity, as another tool with well-defined uses, which is available to everyone, then it can actually democratise the work-process itself to some extent.

A concrete example of the first situation is the question of electronic mail, or e-mail. E-mail is essentially the transfer of messages directly from one computer to another along a telephone line. The two computers can be in the same building, or they can be halfway round the world. In many organisations, because of the cost involved in making international telephone calls, responsibility for sending e-mail is given to one individual, and the process is seen by other staff members as in some way ‘special’ or mysterious. But in fact, with modern batch-mode communication programs such as Front Door, the sending and receiving of e-mail messages can often be quicker, more efficient and cheaper than telex or fax.

There are similar problems with hierarchical organisational relations. In the present state of the art, the principal means of access to the computer is through a typewriter-style keyboard – and, logically, it helps if you can type. But in some organisations, especially when the computer is used mainly as a word processor, a glorified typewriter, the old-fashioned view predominates that it is the secretary’s job to type, and that it would be demeaning for the ‘boss’ to be seen tapping away at a keyboard for himself (or herself). Thus an opportunity to rethink the relationship between manager and secretary in terms of the productivity available using the new technology may be lost.

But the concept of ‘manager’s software’, which emerged in the US about four years ago, offers a possible way round this problem. [26] The essence of the idea of manager’s software is that it must give rapid access to information to users who are not interested in setting up or learning how to use specific programs. Most managers are never going to be willing to devote time to designing a data-base, inputting data, drafting search strategies, or updating information. A classic example of such a program is the now somewhat aged ‘Q & A’ (or Question and Answer) from Symantec Corporation. Q & A features an ‘intelligent assistant’ which allows a user to type in a natural-language request (such as ‘Who is the highest paid employee in this institution?’ or even ‘Find me all the research institutes in Zimbabwe and South Africa sorted by name.’) and have it answered directly from the data. To do this, the program uses techniques developed during early research on artificial intelligence.
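
How such an ‘intelligent assistant’ might respond can be suggested, very crudely, by matching keywords in the request against a structured file. The data and field names below are invented, and the sketch is far simpler than the artificial-intelligence techniques which Q & A actually employs.

    # A deliberately crude sketch of answering a natural-language request by
    # keyword matching against a structured file. Data and field names invented.
    institutes = [
        {"name": "Example Institute A", "country": "Zimbabwe"},
        {"name": "Example Institute B", "country": "South Africa"},
        {"name": "Example Institute C", "country": "Kenya"},
    ]

    def answer(request):
        countries = [c for c in ("Zimbabwe", "South Africa", "Kenya") if c in request]
        hits = [i for i in institutes if i["country"] in countries]
        return sorted(hits, key=lambda i: i["name"]) if "sorted by name" in request else hits

    print(answer("Find me all the research institutes in Zimbabwe and South Africa sorted by name."))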

From the concept of manager’s software, we can derive the old distinction between ‘users’ and ‘pushers’, which in the computer world have quite different meanings than in the streets of New York. Pushers are the computer literate who learn which buttons to push to achieve a desired result, but who have no real interest in understanding why those effects follow. Users are the computer literate whose understanding of the inner workings of the micro is sufficient to allow them to solve most of their own problems, to customise their own software, certainly to write complex batch processing files, and so forth. It is probably unreasonable to expect most pushers to become users, although much marketing and some computer education seem to be based on the assumption that such a transformation is desirable. The danger lurking behind this is that the technology will intimidate people, and empower the few who manage to dominate it to the detriment of the rest.


Endnotes

1. ‘The nature and importance of micro-computer studies,’ University Micronews, no.16 (May 1989), p.12.

2. First National Conference on Computers for Transformation (Cape Town, University of the Western Cape, 5-8 July 1990). Among important papers from this meeting, see Cathy Stadler, ‘Computers and research: a case study of computers in the Education Projects Unit’; K. Stielau, ‘The role of the computer in community surveys’; and Vaughn John, ‘Data-bases in small organisations: keeping a count of deaths,’ this last on techniques used in monitoring violence in Natal during the last few months.

3. As long ago (in computer terms) as 1986 a major conference with 86 participants looked at the relationship of the technology to development issues. See the published proceedings, European Association of Development Research and Training Institutes, Working Group on Information and Documentation, Data-bases and Networking in Development: seminar, Brighton, Sussex, 4-6 September 1986 (Bergen: Chr. Michelsen Institute, 1987) (DERAP Publication no.218).

4. 1989 OECD estimates, quoted by Jozia Beer Gabel, ‘Computers and the Third World Today,’ The Courier, no.113 (January-February 1989), p.56.

5. At least in the area of software, it is not even as though this is an error-proof technology either. Most successful commercial programs are produced, as we can see from the version numbers, by a hit-and-miss process of tinkering. See ‘Something rotten in the state of software,’ The Economist (9 January 1988) p.83-84, 86.

6. Enrique Gonzalez Manet, ‘The wrong connection: the Third World is switching off,’ Prisma, [Havana] (date unknown), p.40.

7. The text of the Declaration is printed in full in the article ‘Informatics and development,’ The Courier, no.113 (January-February 1989), p.55.

8. Gonzalez Manet, op.cit., p.34.

9. Beer Gabel, op.cit., p.58.

10. Patrick Haas, Le Figaro [Paris] (16 November 1987), quoted by Beer Gabel.

11. Frederic Grah Mel, ‘Computer boom in Côte d’Ivoire,’ The Courier, no.113 (January-February 1989), p.63-66.

12. Beer Gabel, op.cit., p.59.

13. ‘Zimbabwe opposes telecoms liberalisation,’ Computers in Africa, vol.3, no.5 (1989), p.22-23.

14. ‘The nature and importance of micro-computer studies,’ op.cit., p.23-24.

15. The Osborne 1 was a runaway success, and sold around 30,000 units in the first year. See Time (21 June 1982).

16. A ‘dongle’ is a small hardware device plugged into one of the ports at the back of a computer, often between the printer cable and the machine. A protected program checks for the presence of its own particular dongle, before allowing itself to run.

17. The use of the technology is still under development even in the north: see May Katzen on the difficulties of integrating computer-based methods into humanities curricula (‘Rhetoric to reality,’ Times Higher Education Supplement [23 February 1990], p.26, 28).

18. E. F. Codd, a mathematician employed at IBM in the late 1960s, developed the twelve fidelity rules (of which there are actually thirteen, starting at 0) for relational data-bases. See his ‘A relational model of data for large shared data banks,’ Communications of the ACM (June 1970); and most recently, ‘The twelve rules for determining how relational a DBMS product is,’ TRI Technical Report, no.EFC-6 (16 May 1986). A summary of Codd’s work and its implications appears in Fabian Pascal’s ‘A brave new world?’ Byte (September 1989) p.247-56.

19. For DOS-based machines the two programs PC-Globe and PC-USA provide a kind of updatable electronic atlas, with maps of each country in the world (or state in the union), and basic economic and social data. Much more sophisticated is the Macintosh program Map II, developed at the University of Manitoba in Canada. Map II can process maps imported by tracing, scanning, or drawing, and is fully equipped with tools for processing spatial data.

20. See, for example, the Times Higher Education Supplement (15 June 1990), special section on Information Technology, with reviews of books on hypertext, of CD-ROM reference works, and of such packages as Pro-Cite, Reference Manager, and BiB/Search.

21. ‘The textual revolution,’ University Micronews no.16 (May 1989), p.33.

22. Status is a mainframe program, written in Fortran and developed by a group of programmers at the UK Atomic Energy Authority’s Harwell laboratory. A DOS version is available. The program is used by the Liverpool University data-base on southern Africa known as Southern Data or SACDTS.

23. The first newspaper ever produced in Bhutan (south Asia) was launched using a solar-powered DTP system (three computers and two laser printers, plus software) purchased with a US$93,000 grant from the UNDP. Ruth Massey, ‘A newspaper, thanks to the computer,’ The Courier, no.113 (January-February 1989), p.79.

24. See ‘Computer-accelerated research: some details about how the feedback loop is shortened,’ University Micronews no.16 (May 1989), p.25-29 for an interesting summary of the issues involved. Nonetheless, at least one book has come out in shareware: Roger Bullis (ed.), Computer shock, [1987].

25. ‘Karimanzira on computer technology,’ Zimbabwe Press Statement, [Dept. of Information], no. 200/90/SM/SD (17 August 1990), p.3.

26. For one of the earliest mentions of the concept of ‘manager’s software’, see Jim Seymour’s column ‘The Corporate Micro’ in PC Week (21 January 1986), p.37.
