--------------------------------- <> by Robert A. Kraft Department of Religious Studies University of Pennsylvania [Beginning of O F F L I N E Series] Council on the Study of Religion Bulletin 15/2 (April 1984) --------------------------------- Early in 1983, the Religious Studies administrative committee at the University of Pennsylvania passed a new requirement that graduate students entering the program from Fall 1984 onward must demonstrate "computer literacy" in addition to the other existing requirements for graduation from the program. The University News Bureau sent out a news release which was picked up by a few newspapers in the next few days. As chairperson of the Religious Studies program, I was even interviewed by telephone for a local radio news segment. What seemed to us to be newsworthy was that, to our knowledge, we were the first religious studies program (if not the first humanities program) to take such a step. (The media wondered about other things like whether computers would have a sensational impact on theologizing!) Why did we take this action? What is at stake and what are the broader implications? Computer people had already begun to argue about what the label "computer literacy" ought to mean. Our intended definition is very modest -- it is a minimalist definition. We want our graduate students to be able to use a computer system comfortably for scholarly purposes, the most obvious of which is in the preparation of research papers, reports, and other written work. In the next few years, the computer screen and keyboard will replace the typewriter for many of us -- my own office selectric with the easy erase feature scarcely gets any use any more! Two problems that tend to keep people from using the computer to write are (1) unavailability of equipment and (2) fear of the unknown. With the boom in microcomputers, availability is becoming a non-issue. 
By instructing our students in the use of this marvelous writing and research tool and by requiring them to put that knowledge to practical use, we wish to ensure that they are not burdened with any intimidation they might otherwise feel. Students can get access to equipment, others are using it, why not us? But there is a deeper aspect of "computer literacy" in our new requirement. We not only want the student to display a certain skill in mastering the text editing features ("word processing" if you must!) of at least one computer system, but we will also require the student to understand simple computer programming. I did say that the student must learn to write programs in a computer language, but we will expect the student to be able to follow the logic of a program intelligently. (We do not ask students to in German or French, but to understand.) We reason that a person who knows the rudiments of how to communicate with computers is in a better position to make effective use of packaged programs and of programming consultants that are becoming increasingly more available as the "new technology" takes stronger hold. The student may also want to learn more about programming, once exposed to it. As time goes on, it will be commonplace for a student who has come up through the ranks of American educational institutions to be much more "computer literate" than our modest enactment requires. But in the meantime, we do not want this present generation of students to be left out in the cold. Indeed, the timing of this new requirement was selfconsciously opportunistic. Especially since the Sloan Foundation sent its message out to the halls of higher learning that "the new technology" was more than just one among several significant issues, but was perhaps most central issue (and thus highly fundable!), activity has increased in finding ways to integrate the computer more fully into college and university education. 
Wherever one turns in the committee structures of academia, and especially among the natural sciences, mathematics, and social sciences, questions about the adequacy of computer facilities, computer education, computer research are constantly raised. With rare exceptions, the humanists are at a great disadvantage in such discussions since they have little exposure to the issues and little awareness of how they might make use of computers in their own academic world. We have been more fortunate. After years of fitful groping towards the possibility of applying computer technology to the study of the Greek Bible, with invaluable encouragement and assistance from pioneers like David Packard and patrons like the Packard Foundation and the Heinz Foundation, we have been blessed with a major grant from the Program for Research Tools and Reference Works of the National Endowment for the Humanities. This has allowed us to secure the necessary computer equipment to continue our research efficiently (again, with much support from David Packard whose invention of the IBYCUS computer system cannot be praised too much). It has also enabled us to develop our own experts in computer programming and management -- humanists who can participate intelligently and effectively in the university discussion and planning councils. This, in turn, has helped create an extraordinary spirit of cooperation with respect to computing between us humanists and those others with whom we interact on computing matters. We decided to offer a course. We were not sure what to call it -- "Computing and the Humanities," "Computers and Literary Analysis," "Computer Research on Ancient Texts"? We were sure that there would be some takers, especially from among the several humanities groups at this University who focus on the study of texts, and especially on texts in ancient and/or "esoteric" languages. 
After all, the IBYCUS System was developed to work with classical texts, with Greek characters exhibited on the screen, and David Packard added Hebrew to the screen display for our convenience (Coptic has been added as well, for the Claremont Nag Hammadi Project). People from Classics, Ancient History and Medieval Studies were especially interested in learning more about what we were doing and how we were doing it. We finally settled on "Computers and Textual Research" as a fitting name for the course. Twelve people were enrolled the first time around; well, actually three of them were faculty auditors, and some of the others were graduate students associated with the Septuagint Project, but there was one "token" undergraduate and some significant things were accomplished on the Apple II+ microcomputers we were able to use in the mathematics department. During summer of 1982, the course was offered again to a typically small group who could now use the newly installed IBYCUS System with its four terminals when that equipment was not preempted by the project. In the spring of 1983, the course enrolled more than 30 students and faculty seeking to become "computer literate," and a steady flow of interest has continued since then. In the summer of 1983 we introduced a second computer course, covering intermediate level programming for textual research. This course, taught by John R. Abercrombie, has now produced a book manuscript on the subject, for publication in 1984. Other related books and manuals should follow. Opportunistic, yes. We want to affirm to fellow humanists not only that there is nothing to fear from these mysterious machines, but that there is an enormous amount to be gained, and that after all, they need not remain so mysterious! We want to create a positive atmosphere among humanists and to train ourselves and our students to take advantage of the opportunities before us. 
We do not want to be left behind in the discussions of grant proposals, equipment needs, new installations and networks. We want to produce students who not only master these skills for their own needs, but will be able to teach others to make effective use of computers when these students enter the job market. If I am sounding like a missionary for some sort of new cause, I am not ashamed. Computers have been available for over two decades now, and various humanists have made good use of them. But a combination of factors has served to catapult computers dramatically onto center stage -- Time Magazine's "Man of the Year" for 1982! -- and we would all do well to make the most of our opportunities to pursue "computer literacy" in every appropriate way. We were careful to consult our present students and were pleased to find that after the intention of the requirement was made clear, there were no substantial objections and there was a great deal of enthusiasm. The reaction among University colleagues has been equally positive. We are talking about a tool for research and writing, something that takes some of the drudgery out of parts of the humanist's work. The computer does what you (or someone else) tell it to do. It can find passages more quickly than you could with a conventional concordance or index (if you such an index!), allow you to excerpt the desired material more efficiently than using a xerox machine, and help you synthesize it into your own research project more neatly and effectively than is possible with handwritten or typed drafts. It also gives you new perspectives on the material with which you are working, whether it is your own writing or someone else's. 
Certainly one can quantify all sorts of things on a computer -- count words and phrases, create various statistical comparisons -- but that is by no means the primary value of a computer for humanists, and it is in any event a task that many humanists regularly attempt in much less efficient ways, so why not learn how to do it better? The funded project that I co-direct with my colleague Emanuel Tov at Hebrew University in Jerusalem is called "Computer Assisted Tools for Septuagint Studies." We are creating a comprehensive collection of primary research materials (a "data bank") for the ancient Jewish scriptures in Greek (the "Septuagint"), including textual variants, grammatical analyses, and Greek-Hebrew equivalents. Ultimately, various new concordances, lexicons, grammatical and syntactical studies, and textcritical investigations will be produced from this data. I confess that I am hooked. The project is just a beginning. It is very complicated in itself, and ultimately will involve us with a variety of languages in addition to Greek and Hebrew, since the Greek translation of the Hebrew was itself translated into Latin, Coptic, Syriac, Armenian, Ethiopic, and various other languages. I have always been intrigued with the frontiers of knowledge in the early Jewish and early Christian worlds, and the computer now promises to permit me better access to those frontiers. Painstaking work I have done on ancient fragments of manuscripts, ancient languages, ancient handwriting all can be done more efficiently and more completely by computer. Bibliographies are easier to compile and access. Articles are easier to write and print. There is no return to the old ways. Why would anyone want to return? We need a large bank of texts on computer -- ideally, everything that exists should be computer accessible. It boggles the mind to think of typing it all in. 
But new developments permit us to bypass that step and have most of it read in automatically, like the scanner that reads the prices on your groceries. We are already scanning complicated Greek textcritical material into the computer. At some point we may be able to read some handwritten manuscripts automatically, and to make clear judgments about the probable dates and places from which the writing comes. As more text is available for comparison, hitherto unidentified fragments of manuscripts become easier to identify and reconstruct. Our knowledge of languages will increase rapidly, leading to a quickened pace of scholarly discovery. In many ways, the potential of computer research is limited only by the imagination of the user. Very quickly, as low cost high quality printers become increasingly available. The expense of typewriter quality printers has been an obstacle, but that is already being overcome. And the computer generated printout need not be shackled by conventional problems with regard to producing foreign type and mixing English with non-English characters. Various sources, including our project, are currently developing various foreign language character sets for display on the terminal and for printing. An inexpensive matrix printer that can do these jobs adequately is already on the market for about $500, and the prices are dropping rapidly. It is already possible to set up an adequate microcomputing system for around $1000, including a printer. But even more exciting is the possibility that the computer will revolutionize our traditional modes of information dissemination altogether. For example, we can expect to see the development of scholarly periodical literature on computer networks, where you can examine a writing that is available on a central system, determine if you want to have it for yourself, and then obtain it for your own system, to store it on your disk and/or print it out at your convenience. 
Networked bibliographies, news reports, etc. are already commonplace. Actually, a great deal of computer work has been done by humanists over the years. The Association for Computers and the Humanities has been in existence for some time, and has published reports on various projects and developments. Other similar publications and newsletters could also be named -- I think immediately of CALCULI for Classicists, which Stephen Waite edited for many years, and ARITHMOI which Richard Whitaker has produced for biblical scholars. Years ago, the American Philological Association established a Repository of machine intelligible texts (Greek and Latin), and the Oxford (England) Computing Center has a large Archive of texts in many languages, including English. But in the early years of computing, one had to be near a computer center to do computer work, and had to use what seem now to be unbearably cumbersome systems of punch cards, complicated computer languages, etc. Computing is now much more widely available, less expensive, and more "user friendly," thanks in large part to the advent of the video keyboard, the microcomputer boom and the development of computer programming languages that are more effective and efficient for humanistic research. In the project, we use a language called IBYX which David Packard invented for the IBYCUS System at a time when no other suitably sophisticated language for textual research was available. It is in many ways similar to the language called PASCAL, which we teach as the most convenient of the widely available languages for humanistic purposes. Many of the students will know or learn BASIC, but it is not nearly as efficient as PASCAL, and is frustratingly unstructured by comparison. David Packard is a classicist who was already working with computers in the late 1960s, along with Stephen Waite and Richard Whitaker, among others. 
Partly out of frustration with what was available on the open market, Packard decided to create his own computer system, using Hewlett-Packard equipment, precisely for purposes of text editing, rapid searching of texts, and printing of complex materials. The IBYCUS System is the result, and there are fifteen of these systems installed around the country (e.g. at Princeton, Duke, University of North Carolina at Chapel Hill, University of Texas, Claremont Graduate Schools, and the University of California at Irvine, not to mention printing and publication configurations such as at Scholars Press and elsewhere). The system is relatively simple to use, is very quick, and very reliable. We have five terminals for the department and project, and our Classics department also has a terminal running on the same system. We also have a medium sized "hard disk drive" (120 megabytes), a tape drive for easy transfer of data, a NEC Spinwriter for everyday printing needs and an Epson matrix printer for special character generation. The aforementioned "hardware" (equipment) itself could cost about $70,000 if purchased new. With David Packard's help, we were able to find mostly used equipment and cut this cost almost in half. But the key to the system is the "software" (system programs, etc.), and that can only be obtained, at very reasonable cost, through negotiations with David Packard. It is not available on the open market. With much time and effort, of course, a clever and knowledgeable computer expert could create similar programs on existing equipment, and this is beginning to happen. But Packard's IBYCUS System continues to lead the field in these matters! The Greek was purchased, along with many other Greek texts, from the Thesaurus Linguae Graecae Project at the University of California at Irvine, which now also uses an IBYCUS System and is the single most prolific source of machine intelligible Greek material. 
The Hebrew was produced (through a grant from the Packard Foundation) by Richard Whitaker and Van Parunak at the University of Michigan computing center. These materials are now available at cost to interested scholars (see below). At the University of Pennsylvania, we have two research associates, one of whom has worked full time (John Abercrombie) and the other part time (William Adler), who supervise a staff of six part time research assistants (graduate students). Adler has directed the automatic morphological analysis of the Greek texts, which is performed on the large IBM computer at the University by a sophisticated program written by David Packard before he developed the IBYCUS System. Abercrombie does most of the programming, and is especially involved in the automatic parallel alignment of the Greek and Hebrew texts. At Hebrew University in Jerusalem, Emanuel Tov works with several graduate assistants correcting and verifying the automatic parallel alignment materials, among other things. An exchange of staff between Philadelphia and Jerusalem also has been instituted. To the contrary, we find that there is a large amount of interest and support especially from the staff of the large IBM installation that was used in the past primarily for scientific applications. Penn is proud of having been a leader in the earliest development of electronic computers, and is anxious to have this powerful tool available for all interested users. We are developing "exotic language" character sets on the IBM color graphics terminals for use in classroom instruction, among other things. In this, as in other matters, the IBM mainframe staff has given generously of their time and energy to help us towards our goals. At some point in the near future, we hope that the IBYCUS System and the IBM will be able to communicate directly with each other, as with other systems at the University. The staffs of the various installations are actively working towards this goal as well. 
Recently, the University's Wharton (Business) School computer center obtained a Kurzweil Data Entry Machine (KDEM) which is capable of automatic encoding of English and certain foreign language materials, and has promised us access for our purposes. I am serving on a University wide committee for computer development and am currently chairing the Arts and Sciences committee on computing. Jack Abercrombie has been appointed as coordinator for faculty computer education for Arts and Sciences. We are both members of a new committee for computing and textual research. Isolation is not a problem for us! We are the only ones working directly on the "Septuagint" text in such a comprehensive manner. The Maredsous concordance project in Belgium parallels some of work (bilingual Hebrew-Greek concordancing). A similar project on the Hebrew Bible has been underway for many years in France, under the direction of Gerard Weil, and in Australia, Francis Andersen is also at work on the Hebrew text. I have already mentioned Van Parunak at Michigan, who has analyzed some portions of the Hebrew Bible by computer. There is even a publishing venture headed by Arthur Baird at Wooster College in Ohio called Biblical Research Associates which has published many volumes in a series called the Computer Bible, representing other researches in related areas to ours. The general information and service functions performed for the humanities by the Oxford Computer Center in England also deserves special mention. A summary article appeared in volume 14 (1981) of the Bulletin of the International Organization for Septuagint and Cognate Studies, and subsequent issues of that publication contain updates. Additional information can be obtained from us directly at Box 36 College Hall, University of Pennsylvania, Philadelphia 19104. 
Information on obtaining the computerized Greek biblical texts (New Testament as well as Septuagint) is available from the Thesaurus Linguae Graecae (TLG) project at University of California, Irvine 92717; for the Hebrew biblical text, contact Richard Whitaker at 300 Broadway, Pella, Iowa 50219. A new column entitled OFFLINE, Computer Assisted Research for Religious Studies (CARRS), will be inaugurated in a subsequent issue of the CSR Bulletin to provide relevant computer related information on a regular basis, as a service of the SBL Computer Assisted Research Group steering committee, and with the encouragement of the SBL Council. OFFLINE proposes to deal with such matters as bibliography and the availability of computer intelligible data; relevant technological developments in general, and especially with reference to microcomputers (e.g. displaying non-Roman characters on the screen; transfer of data between micros and larger machines); and printing and publication capabilities. Your input and responses will be appreciated greatly. ---------------------- << O F F L I N E 1 >> by Robert A. Kraft [dateline 12 April 1984] Council on the Study of Religion Bulletin 15/3 (June 1984) ---------------------- A service of the SBL Computer Assisted Research Group steering committee, with support of the SBL Council, OFFLINE will deal with such matters as bibliography and the availability of computer data, relevant technological developments with special (but not exclusive) reference to microcomputers, and printing and publication capabilities. For a general orientation, see the CSR Bulletin 15/2 (April 1984), "In Quest of Computer Literacy." Please send any information, suggestions, or queries to the editor at Box 36 College Hall, University of Pennsylvania, Philadelphia PA 19104 (215-898- 5827). The need for timely and reliable information about computing among humanists is acute, especially with reference to microcomputing. 
Special "users groups" for exchange of ideas and programs within the fields covered by religious studies would seem highly desirable. Frequently, such groups are organized with reference to particular microcomputers: Apple, Commodore IBM, TRS, Kaypro, Osborne, etc. Also valuable would be a users group which focuses on special technical operations such as foreign character generation for screen and printer. If you are interested in helping organize such a group for a particular machine or for a special function within the context of religious studies, or if you know of the existence of such a group, this OFFLINE column will attempt to disseminate appropriate information and help coordinate efforts. Detailed information of various sorts and at various levels is sorely needed. OFFLINE cannot hope to provide it all, but will attempt to summarize and coordinate. A noteworthy new effort is the bimonthly Microcomputer Newsletter from the Institute for Biblical Research, edited by Peter H. Davids at Regent College, 2130 Westbrook Mall, Vancouver BC V6T 1W6, Canada (tel. 604-224-3032). Two issues have appeared thus far (cost, $7.50 per year to non IBR members). On David Packard and his IBYCUS System, see Magazine for February 1984, 80-81. Please keep us informed of other relevant sources of information. Do you speak computer? Probably not. But to help you understand better those who do, OFFLINE will try to build up your vocabulary by providing simple definitions of some frequently used terms. Let's start with some relatively simple distinctions in this OFFLINE column: refers to whether one is connected directly to a main computer for immediate interaction (online) or is not; if your microcomputer could be used as a terminal on a larger system, for example, you would be "online" when functioning as a terminal but "offline" with respect to the larger system when functioning independently. 
= the tangible machinery such as keyboards, monitors, tape and disk drives, printers, etc. = the electronic instructions ("programs") that drive the hardware. = software that is captured and fixed in physical areas of the hardware (built-in programming) and cannot be readily modified or replaced by the user. refers to "personal" or "desktop" computers capable of performing the full range of computing functions. usually refers to the large computers which can support a large number of terminals and other devices. is an inbetween size which can support several terminals/devices, but is not as powerful as a "mainframe." is the heart of any computer, whether the CPU is integrated with the keyboard, as in some microcomputers, or is a separate entity as with most larger installations. : hardware by which materials can be put into the computer (e.g. keyboard, tape or disk drive, optical scanner) or produced (put out) by the computer (e.g. video screen, tape or disk drive, printer). are hardware items that can be connected to the core computer system, such as various "I/O" devices. are the places where peripheral devices are connected to the system. (American Standard Code for Information Interchange) refers to the most widely used system for identification of actual and possible keyboard characters. As noted in the April 1984 CSR Bulletin, Greek and Hebrew biblical texts in computer form are available "in the public domain." They are on standard 9 track tape which can be read at any modern computer facility with a magnetic tape drive. The ability to transfer such material to microcomputers ("download") depends on a variety of local factors. More about this in a later column. For the present, I wish to emphasize that there is much wisdom in everyone adopting, quite arbitrarily at this point, the transliteration codes for Greek and Hebrew that are used in the most widely available texts. 
Unambiguous coding can easily be changed from one system to another if necessary, but why not try to avoid the problem from the start? Consistency will be especially important as programs are created for displaying and printing in foreign characters. If the same keyboard symbols represent the same foreign characters in all our programs and systems, we will all be served more effectively. If your keyboard does not have all the suggested equivalents, you may have to improvise on some characters. Just be unambiguous and consistent. With this in view, I am providing a list of the coding used now for TLG Greek texts, and for the Parunak-Whitaker Hebrew text, with the strong recommendation that this become the standard coding for "transportable" texts and programs -- i.e. for computer materials that will be shared in a general way. (Other languages will be discussed in future columns.) For Greek, the use of upper case English to represent lower case Greek (prefixed asterisk designates Greek upper case) is awkward and derives from the time when most computers used only upper case. Since this coding is used for the bulk of available Greek texts and for some major computer tools for searching and analyzing the texts, it is recommended as standard. Even if you choose to mix lower and upper case, it is important that the same English letter equivalents are standardized (Q/q for theta, W/w for omega, Y/y for psi, etc.). 
TRANSLITERATION CODES FOR HEBREW AND GREEK (with ASCII values) <(ASCII)> aleph ) (41) alfa A (65) beth B beta B (66) gimel G gamma G (71) daleth D delta D (68) he H epsilon E (69) waw W digamma V (86) zayin Z zeta Z (90) heth X eta H (72) tet + (43) theta Q (81) yod Y iota I (73) kaph K kappa K (75) lamed L lamda L (76) mem M mu M (77) nun N nu N (78) samek S ksi C (67) ayin ( (40) omicron O (79) pe P pi P (80) tsade C qof Q resh R rho R (82) sin & (38) sigma (all) S (83) shin $ (36) [or final= J (74)] sin/shin # (35) taw T tau T (84) upsilon U (85) phi F (70) chi X (88) psi Y (89) omega W (87) patah A (65) qametz F (70) hireq I (73) upper case sign * (42) segol E (69) diaeresis + (43) tsereh " smooth breathing ) (41) holam O (79) rough breathing ( (40) qibbuts U (85) iota subscript | (124) shureq W. acute accent / (47) shewa : (58) grave accent \ (92) hateph- -patah :A circumflex acc. = (61) -qametz :F subscript dot ? (63) -segol :E midpoint punct. : (58) OFFLINE addendum [4/18/84] A committee at the University of Pennsylvania is actively investigating means to establish a Center for the Computer Analysis of Humanistic Texts, similar in some respects to the Oxford (England) Center and Archive. The functions of such a center would include the maintenance of a repository of electronic texts for distribution and/or access (as appropriate), filling requests for automatic encoding of texts into electronic form, coordination of informational and educational services (e.g. summer institutes) and responding to requests for programming and technical advice. This sort of service facility would require a significant level of funding to be successful ($100,000 per year would seem minimal), and one way of securing such funding would be the creation of a of supporting institutions, each of which would make an annual contribution as a sponsor/subscriber to the service. 
The amount of contribution would vary with the size and anticipated needs of the participating institution; we could envision a range from $5000 for major Universities to $250 or possibly less for smaller schools. The initial question is whether there is sufficient interest, and how tangible it could be. We encourage you to inquire of the appropriate officers at your institution (this type of endeavor will impact on library functions as well as other computer-related activities) and respond to myself or to Professor Roger Allen, chairperson of the committee, whose address is 845 Williams Hall, University of Pennsylvania, Philadelphia, PA 19104. ------------------------------------------------------------------ << O F F L I N E 2 >> by Robert A. Kraft [dateline 20 August 1984] Council on the Study of Religion Bulletin 15/4 (October 1984) ------------------------------------------------------------------ Several of you have expressed interest in the formation of user groups, but no one has volunteered to help organize one, and no one has reported to OFFLINE that such a group already exists, with the needs and interests of religious studies persons in view. My impression is that a users group for foreign character generation (on screen and printer) would be extremely valuable. OFFLINE will continue to collect responses and build up an address file until something more definite emerges. Please specify the type of hardware (equipment) you are using and the special needs you are interested in exploring with other users. And volunteer if you are willing to take some initiative! The July issue of Peter Davids' (1.4) has appeared, and continues to be exceptionally informative as well as timely. It comments on wordprocessors and printers, database programs, and communications programs, among other matters. Anyone with serious interests in microcomputing and textual research is well advised to sift through this valuable resource. 
Readers who work with foreign language characters may find of interest the article in the July 1984 by Joseph D. Becker (Xerox Corp.) entitled "Multilingual Word Processing." The article is tantilizing and informative at various levels, but does not provide any specific information about how soon or for how much the system described will be on the market. To discover what is already available, be sure to visit the special computer display at the annual meetings of the AAR/ASOR/SBL in Chicago in December (see below)! A descriptive article by Rebeckah R. Glazebrook on "'Saving' Literary Classics with Software" (about David Packard's IBYCUS System and the Thesaurus Linguae Graecae data bank) appeared in the July issue of DEC's magazine (pp. 58-64). Packard will display a micro- version of his system at the December annual meetings in Chicago. Hot off the press in September 1984 is the long awaited book on , by John R. Abercrombie (Philadelphia: University of Pennsylvania Press, pp. 176, $12.95 spiral bound), which provides a range of useful programs in common dialects of BASIC and PASCAL for searching, collating, indexing, concording, parsing, statistical counting, etc. It doesn't make programming painless, or completely transparent, but it will give many of you a relatively "friendly" place to start. You can also purchase the programs separately on an IBM PC formatted floppy disk. In the previous OFFLINE column you encountered glossary terms relating to the physical world (hardware, etc.) of computing. , , (American Standard Code for Information Interchange). In this issue, you will be introduced to some terms used frequently in discussing how to communicate with the hardware. = a set of precise instructions directing the computer to perform a specific set of tasks. = the logic (plan, organization) of a program. = the vocabulary and syntax by which one instructs (programs) the computer. 
Different languages function differently and may require large amounts of computer "memory" (workspace). Languages with names (often acronyms) familiar from larger systems normally have not been used with average capacity microcomputers. They are primarily mainframe languages. Conversely, the general purpose language called BASIC (in various forms) has become the standard micro language. (Some people even claim that it is easy to use!) Now, with the expanding capacity of micros and growing sophistication of micro users, the language called PASCAL has become increasingly available. It is a more "structured" and versatile language than BASIC, is quicker and more powerful, and is also available on most mainframes (thus programs in PASCAL are more "portable" to other systems). Machine and Assembly Languages = all the aforementioned languages are called "high level" since they package several "low level" instructions in each specified function (e.g. to "PRINT" something on the computer screen a BASIC program calls on the computer to perform a number of "go" and "stop" [on/off] operations at the most rudimentary electronic level). "Machine Language" commands the computer at that simplest level, while "Assembly Language" is only slightly more synthetic. These represent important intermediaries between "high level" languages and the electronic switches in the computer. Integers = whole numbers (1, 2, 3, etc.), as distinct also from "real" numbers (which include decimalized fractions: 1.00, 3.1416, etc.). Practically speaking, computer systems respond best to simple numerical communications and relationships. For example, to the computer, the position of an alphabetic character in a line (10th, etc.) is more important than what the character may be (d, e, f, etc.). This is called the character's "subscript value" (in the word "position," the subscript value of the letter "s" would be 3, "t" would be 5, "n" would be 8, etc.). 
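To make the "subscript value" idea concrete, here is a minimal sketch. It is given in modern Python rather than the BASIC or PASCAL of the machines discussed here, and the function name subscript is only illustrative:

```python
# A sketch of the "subscript value" idea: to the computer, a
# character's 1-based position in a string matters more than which
# character it is. Python itself counts from 0, so we shift by one.
def subscript(word, x):
    """Return the character at 1-based position x in word."""
    return word[x - 1]

word = "position"
print(subscript(word, 3))  # the letter "s"
print(subscript(word, 5))  # the letter "t"
print(subscript(word, 8))  # the letter "n"
```

Repeatedly increasing x by one (x = x + 1, as in the "variables" and "loop" entries below) walks through the string one character at a time.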
Strings = sequences of characters (words, syllables, even numbers in some situations), as distinct from the discrete numbers and numerical relationships by which the computer has been made to operate foundationally. Some computer languages deal more efficiently with strings of characters than do others. Some forms of PASCAL are especially adept; IBYX (on IBYCUS Systems) is excellent. All computer languages can be made to do the job in one way or another. Variables = symbols or combinations of symbols used in a computer program to represent items that can change in the course of the program (just as in algebra "x" can stand for any number; indeed, in a grammatical model sentence such items as "subject" or "noun" or "verb" or "word" are all string variables!). The programmer can create general formulas with variables (e.g. sentence = subject + predicate), and modify the specific reference of each variable as the program develops. To use the same example mentioned above under integers, if I tell my program to find subscript(x) in the word "position," starting with x = 1, it will find the letter "p"; if I then change the value of the variable x by saying x = x + 1 (i.e. x = its present value of 1, plus 1, totals 2), it will find the letter "o"; then if the next value of x = x + 1 (i.e. 2 + 1), it will find "s"; and so on. Loop = a program procedure which repeats the desired command a specified number of times or until certain conditions are met. The repetition of x = x + 1 in the example at the end of the "variables" section above is a useful feature to embed in a loop in order to read a string character by character. An "infinite loop" is a programming blunder since it makes it impossible for the program to move on to the next task. The previous OFFLINE column contained a few errors, especially in the section on Hebrew transliteration coding. Frankly, I was careless and called up the wrong file from my computer directory. 
What you got was the Parunak-Whitaker Hebrew consonantal text coding (as intended) followed by an adapted form of the vocalization code as it is used in Whitaker's programs to analyze the Hebrew morphologically for the Septuagint Tools Project. (Also, the ASCII value of ")" is 41 and of "(" is 40, of course!) Does that mean that all of you who have been entering transliterated vocalized Hebrew into your computers in the past few weeks will now be out of step with the "public domain" biblical text coding? Yes, but it is not difficult to correct the problem. The important thing is to be consistent and unambiguous in your coding. It is relatively simple to substitute one coding scheme for another. Thus you can also change upper case Greek transliteration to lower case, or vice versa, if you like. How? If you work in BASIC or PASCAL, John Abercrombie's new book (see above) includes a routine (or procedure) that he calls "translateit." I would have called it "transliterateit," or maybe just "recodeit." Never mind. The logic ("algorithm") is simple: Get a line and read it character by character. If a letter that needs to be changed is encountered, substitute the new character for the old. Then save the recoded line. (You can save the old line as well, if you like. Abercrombie puts them side by side for comparison.) It all seems quite simple and straightforward. But how do you get this into a language your machine will understand? Abercrombie uses both MBASIC and PASCAL, which will run on an IBM PC and similar computers. Other dialects of these languages will require slight modifications in terminology and/or syntax, which Abercrombie also describes. If you would like to be "walked through" such a program, write to OFFLINE for printed instructions with detailed explanations. 
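For readers who want a feel for the logic now: the recoding routine can be sketched in a few lines. The sketch below is in modern Python, not Abercrombie's MBASIC or PASCAL, and the substitution table is supplied only as an illustration (it uses the one-for-one vowel changes discussed in the surrounding text):

```python
# A sketch of the "translateit" logic: read a line character by
# character, substituting the new code for the old. The table below
# is illustrative only -- supply your own coding scheme.
CHANGES = {"a": "A",   # patah
           "A": "F",   # qametz
           "e": "E",   # segol
           "E": '"',   # tsereh
           "u": "U"}   # qibbuts

def recode_line(line, changes):
    """Return line with every listed old code replaced by its new code."""
    recoded = []
    for ch in line:                          # one character at a time
        recoded.append(changes.get(ch, ch))  # substitute if listed, else keep
    return "".join(recoded)

# Because each character is examined only once, "a" becomes "A"
# without then being swept up by the separate "A" -> "F" rule.
print(recode_line("aAeEu", CHANGES))  # AFE"U
```

Note that in this form the "two-for-one" substitutions need no extra machinery, since a table entry's replacement may be more than one character long (e.g. changes = {"2": ":A", "U": "W."}); the extra programming effort arises mainly in languages that build the output string by hand.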
Ultimately, the program will deal with the following substitutions: Instead of the respective symbols a A e E u (as printed incorrectly in the previous OFFLINE column), substitute A F E " U (the standard Parunak-Whitaker Hebrew text coding). The remaining two-for-one substitutions (W. for U, :A for 2, :F for 1, :E for 3) require a bit more programming effort, but are no more difficult for the computer. The corrected chart of the Parunak-Whitaker coding is as follows:

   patah    A   (not a)
   qametz   F   (not A)
   hireq    I   (same)
   segol    E   (not e)
   tsereh   "   (not E)
   holam    O   (same)
   qibbuts  U   (not u)
   shureq   W.  (not U)
   shewa    :   (same)
   hateph-patah   :A  (not 2)
   hateph-qametz  :F  (not 1)
   hateph-segol   :E  (not 3)

The Parunak-Whitaker Hebrew text also includes the cantillation, which has its own set of codes that can be supplied to you at your request. If you do not need the cantillation (or the vocalization, for that matter), it is a simple matter to program the computer to copy the text without the unwanted symbols. There will be a special exhibition hall, separate from the usual book exhibits, for computer related displays at the December AAR/ASOR/SBL annual meetings in Chicago. Several exhibitors have already agreed to be present, and other invitations have been or will be extended. The emphasis will be on systems and products of special use for humanists who have various special requirements such as foreign language display/printing or other sophisticated graphics needs (e.g. archaeology). We expect to see several specialized word/text processing packages capable of working with foreign characters on various computer systems including IBM PC (BRIT Systems, IMAGE PROCESSING Systems, ACADEMICFONT, NOTA BENE, WORLD WRITER, WORLD MASTER), Apple microsystems (GUTENBERG, MULTI-LINGUAL SCRIBE), and the Commodore 64 (DAVKA, for Hebrew). 
If you know of other similarly useful multi-font wordprocessing systems, especially any for the less expensive micros, please send the information to OFFLINE and/or notify the exhibitor of this opportunity. Among the systems and program packages for scholarly research on texts, we expect David Packard to exhibit his new IBYCUS microsystem, Paul Miller to show GRAMCORD in action, and a new Hebrew Bible research system called MIKRAH (on IBM PC) to be represented. We will also bring information on the OXFORD text archives and concordance package as well as on the THESAURUS LINGUAE GRAECAE data bank. It is hoped that some systems for language instruction and computer assisted translation will also be exhibited. The PLATO "computer-based education" system has been invited as well as WEIDNER COMMUNICATIONS CORPORATION (modern language translation systems). Thus far, micro-compatible packages for archaeological recording and research have not been found (please contact OFFLINE with any addresses and/or suggestions), but we hope to see some appropriate hardware from companies like Hewlett-Packard as well as to sample the "built-in" capabilities of the Apple Macintosh. Various portable systems (e.g. Apple IIc, H-P portable, Zorba) also will be represented along with information on automatic data entry (e.g. Kurzweil Data Entry Machine) and on specialized terminals for upgrading a standard system (e.g. Human Design Systems). The special computer display is an ambitious experiment aimed at helping meet your needs and interests, and has been made flexible enough to permit some exhibitors to be added even at this late date. But if you have additional suggestions you must act quickly by sending appropriate information to the editor of OFFLINE. To be effective, this special exhibition requires not only your interested support in Chicago, but any input you may have at this time as well. //end #2// ---------------------- << O F F L I N E 3 >> by Robert A. 
Kraft [dateline 23 October 1984] The Council on the Study of Religion Bulletin 15/5 (December 1984) ---------------------- Progress towards the creation of a center for computer assisted textual research at the University of Pennsylvania (see OFFLINE 1, June 1984) continues, and we are pleased to announce that a limited number of orders can be taken for automatic encoding of typed or printed materials in various languages including Greek, Coptic, Armenian, and unpointed Hebrew. We are using a Kurzweil Data Entry Machine (KDEM) and can output the results either to standard 9 track 1600 bpi tape or to floppy disks compatible with DEC Rainbow formats. Exact costs vary according to the size and relative difficulties of the text, but as a rule, most materials can be entered directly into electronic form at a cost of less than $1 per 400 words (an average printed page, or about two typed pages). Materials encoded in this manner usually still need some correcting and reformatting, but the savings in time and effort can often be impressive. The Septuagint Tools Project is computerizing the published text-critical apparatuses for the old Greek texts in this manner. Inquiries should be directed to the editor at the above address. One of the ambitions of the proposed center is to provide convertibility of data between all systems and electronic formats in use among textual researchers, so that the person working with an IBM PCjr, for example, can exchange materials with someone who uses a Commodore 64 (if not a PET). At present, there are ways to make such transfers, but they are usually complicated, inefficient, and of questionable reliability. In theory, telephone "modem" (MOdulator/DEModulator) connections between computers at different locations offer an ideal solution, but in fact, this method has some serious drawbacks. Important material (special codes, etc.) can become garbled and/or lost in the process. Why are such exchanges so difficult to make and/or control? 
The problem is relatively minor for systems that can read from standard 9 track tape, since such tapes can be sent through the mail or otherwise transported physically. But for micro-disk formats there are a number of special complications of which I will discuss three: different disk sizes, different disk operating systems and incompatible text processing programs. Most microcomputers use diskettes (often called "floppy" or "soft" disks) as the primary medium for data storage. In the past, magnetic cassette tapes were more widely used, but the floppy disk provides larger, faster and more reliable capabilities. (The return to a modified cassette tape principle by systems such as Coleco's ADAM is not likely to be widely imitated, even if it is less expensive than diskettes and faster than the older cassettes.) But microdiskettes vary widely both in external size and in internal organization. An 8 inch diameter disk is used on some mini systems, but this fact is of little relevance to most microcomputer owners. Most micro systems use a 5 1/4 inch floppy, but recently the Apple Macintosh and Hewlett-Packard 150 (among others) have introduced a smaller, more rigidly constructed (less "floppy"!), larger capacity 3 1/2 inch disk. Many microcomputers also can be fitted with a "hard" or fixed disk with impressive amounts of internal storage capacity similar to earlier mini systems, but the primary route by which data is transferred to and from the hard disk is still through the attached "floppy" diskette drive. For microcomputers that use the same physical size diskettes (e.g. 5 1/4 inch), it is possible physically to insert any such diskette into any such machine. But one machine will not necessarily automatically read the diskette formatted on another machine. A key will slide into many locks, but it will only turn and open those for which it is correctly cut. 
The issue of diskette compatibility is not a new one -- it is comparable to problems with tape formats in earlier times -- and steps have been taken by some manufacturers and software vendors to try to make machines compatible. There are currently two major competing disk operating systems (DOS for short) for increasing compatibility among 5 1/4 floppies. One is called CP/M (Control Program for Microcomputers) and is widely available on such machines as Kaypro, Osborne, Commodore 64, and others using 8080, 8085 or Z80 microprocessors. Some machines can be made CP/M compatible by installing an appropriate option. The main alternative disk operating system, using the 8088 microprocessor, has developed from IBM PC influence and is called MS-DOS (Microsoft developed this as a generic form of the IBM PC DOS). "IBM compatible" microcomputers run on such a system, and some manufacturers have installed both MS-DOS and CP/M on their machines (e.g. DEC Rainbow). Thus it is sometimes possible to achieve some degree of compatibility by working through an intermediate with both capabilities. Some of you will note that Apple has not been mentioned in the discussion of CP/M and MS-DOS. Not only is the DOS different between the popular Apple II series (5 1/4 floppy disks) and the new Apple Lisa/Macintosh (3 1/2 inch diskettes), but direct compatibility between either Apple system and CP/M or MS-DOS systems has not, to my knowledge, yet been achieved reliably. (Information from readers is requested on such matters!) Indirect communication by telephone lines through a modem can be achieved, subject to the caveats voiced above. Differences in DOS are not the only obstacles to satisfactory communication between microcomputers. If the material you wish to transfer from one machine to another has been processed through an intermediary such as a "word processing program" which has its own coding and special operating procedures, some of the data may be lost or disrupted in the transfer. 
Similarly, if you are trying to transfer data into your word processing program for further editing, etc., this may be difficult to do for the same sorts of reasons. Indeed, even on the same machine, you will have problems changing data from one wordprocessing system to another. Programs and program packages to assist textual researchers in these regards are beginning to become available, but you will need great patience and tenacity (some knowledge of how programs operate would also help) for most such transfers at the present time. And the problems become even more complex if one wishes to transfer the material stored through a "data base" program (e.g. address lists) to a different machine and/or program. Assuming that problems of transferring data to your microcomputer DOS can be resolved satisfactorily, what kinds of data are readily available? You may be astounded at the answer! The availability of biblical texts has been mentioned already in earlier OFFLINE columns: the partly corrected Hebrew text is in the public domain and can be obtained at cost. Similarly, the Greek Jewish scriptures and the Greek New Testament can be obtained at cost. Negotiations are underway for releasing the RSV English text to the public in electronic forms, and the AV/KJV is already available from some vendors. The Latin Vulgate is on tape and requires special permission from its encoders in Germany before it can be obtained from the Oxford (England) Archive or elsewhere. Some of these texts can also be acquired with morphological tagging (to give you the dictionary form of words), with textual variants, and even in aligned Hebrew-Greek (for Jewish scriptures) or Greek-Latin (etc.) forms. I listed the biblical materials first to accommodate SBL interests. But there is much, much more. 
The Thesaurus Linguae Graecae project is in the final stages of encoding all surviving Greek literature through the 6th century of the common era, and a large number of carefully verified texts are available from them, including Jewish and Christian Greek materials. The American Philological Association supports a repository of Greek and Latin electronic texts as well, gathered from various sources. The Oxford (England) Archive is extensive and diverse in its holdings, including a number of non-western texts and much medieval and modern material. And other electronic texts are available from other sources. Indeed, with the growing sophistication of automatic "scanners" (see above), this collective data bank will continue to expand at a breathtaking rate! To help keep track of it all, the Rutgers University Libraries are conducting an international inventory to locate and describe humanities textual information that has been put into electronic form (with funding from the Council on Library Resources and the Mellon Foundation, among others). This information will be made available on RLIN (Research Libraries' Information Network). If you have or need information relevant to this project, contact Marianne I. Gaunt at Alexander Library, Rutgers, New Brunswick, New Jersey 08903 (201-932-7851). She will gratefully send you an appropriate survey questionnaire and/or a general description. Many of the responses I have received to OFFLINE show a great interest in the problems of transcribing, displaying and printing foreign character texts. Some raise questions about the wisdom of reproducing what may seem to be relatively outdated or awkward transcription systems, as I recommended for Greek and Hebrew. My answer is simple: if all the available TLG texts (or the available Hebrew Bible materials) follow such a system and I want to make extensive use of them, it is probably easier for me to imitate them than to change them to what seems to be a more satisfactory system. 
Either route is possible (on programs to modify transcription systems, see OFFLINE 2). The former has the advantage of immediate "portability" (others can easily use what you produce) as well as directness. But things will doubtless change with the times, and your suggestions may be instrumental in encouraging such change. Just be sure to be unambiguous and internally consistent, no matter what specific coding you use. Another question relating to transcription codes concerns the casual manner in which I dealt with (or failed to deal with) "final" letter forms in (Greek and) Hebrew. I should have been more explicit. If there are only two forms for a letter, as with these "final" letters, and if one of the forms occurs only in completely predictable circumstances (e.g. at the end of a word), there is no need for special coding. The program to display and/or print the appropriate character can be written to take account of word endings (which are followed by a blank, a carriage return, or by a non-letter such as punctuation). In effect, such letters use a double code (letter plus non-letter) from the computer's perspective. With more complicated character sets such as Arabic, in which more than two forms of some letters occur, the problem is more complex and may require explicitly different coding indicators. Of general interest is the September issue of Scientific American (251.3), which is devoted to discussions of computer software. The article by Terry Winograd entitled "Computer Software for Working with Language" (pp. 130-45) provides helpful insights into problems of machine translation and "word processing" in its various extended aspects (including morphological, syntactic and context analysis), among other things. Those of you present at the 1984 SBL/AAR/ASOR Annual Meetings in Chicago should have received written information about the machines and programs displayed at the special exhibit and had the opportunity to see some of them in action. 
Additional copies of the descriptive brochure can be obtained from the editor (see instructions below). Two other types of informational material will be made available upon request from the editor's office. The University of Pennsylvania is now publishing an occasional newsletter called "Penn Printout" as part of its own microcomputer services. In addition to basic information (descriptions, prices, etc.) about the machines centrally supported at the University (Apple Macintosh, IBM PC, DEC Rainbow), it includes articles such as "Do I need a Microcomputer?", and comparative evaluations of the aforementioned micros and of some of the available software. Much of what is reported derives from Jack Abercrombie, and I have received permission from the University to supply copies to interested outsiders. I am also attempting to coordinate the information contained in the IBR Newsletters (see earlier OFFLINE comments) and in reader responses to OFFLINE in order to get an idea of who has what equipment, software, and information. If you have specific questions about specific products and/or problems (e.g. who else is working with TRS 80 hardware? Has anyone used the AcademicFont programs?), I will try to provide a "handshaking" service. For these purposes, it is also important that persons who write to OFFLINE agree to permit their name and address to be sent to those with similar interests/problems. In this way we may be able to build appropriate user groups. You will realize that I can't give a personalized answer to every query or letter sent to OFFLINE, but if you will include with your letter a self-addressed label or an appropriately sized self-addressed envelope (9 x 12 for the aforementioned brochures and offprints), it will assist me enormously. And if you do NOT want your name and address circulated with my index lists, please say so clearly in your communication. 
//end #3// ------------------------------------------------------------------ << O F F L I N E 4 >> by Robert A. Kraft [dateline 2 January 1985] The Council on the Study of Religion Bulletin 16/1 (February 1985) ------------------------------------------------------------------ There has not yet been time to digest all the information gathered at the special computer exhibition and at the computer assisted research group meeting at the 1984 SBL/AAR/ASOR Annual Meetings in Chicago on December 8-11. If you were unable to attend but would like to see the detailed descriptive brochure that was distributed at the exhibition, copies are available from the OFFLINE editor. Please include with your request a self-addressed 9 x 12 (or larger) envelope to facilitate prompt response. The following types of exhibitions were represented in Chicago: (1) Wordprocessing and textediting systems (see further, below); (2) Bibliographical access by computer (American Theological Library Association Religion Index); (3) Automatic machine translation in ready-to-finish form (Weidner Communications Corporation); (4) Automatic encoding/scanning of typed and printed materials (Kurzweil Data Entry Machine); (5) Scholarly research systems and projects. The last listed category included a display by PROJECT GRAMCORD (Trinity Evangelical Divinity School), for grammatical concordancing of NT materials; programs from the CHRONOLOGY - HISTORY RESEARCH INSTITUTE (e.g. for determining dates of eclipses in ancient times); information on the embryonic CENTER FOR COMPUTER ASSISTED TEXTUAL RESEARCH at the University of Pennsylvania; and an exhibit of IBYCUS systems (Richard Whitaker's equipment), which are used by a number of advanced projects that focus on Greek, Latin, Hebrew and Coptic materials (e.g. 
Thesaurus Linguae Graecae at University of California - Irvine, the Duke University Papyri Project, the Septuagint Tools Project at University of Pennsylvania, and the Claremont Institute's Nag Hammadi Project). David Packard also displayed a prototype of the new IBYCUS microcomputer he plans to market in the near future. Wordprocessing/Textediting and Printprocessing systems for various types of microcomputers were displayed: The IBM PC was represented by Nota Bene, Mikrah (English - Hebrew), and Wordmill (English - Hebrew); The APPLE II series by the World Bible Translation Center (we regret that GUTENBERG was not able to exhibit); The Apple MACINTOSH by MacGreek - MacHebrew - MacJapanese (P. B. Payne) and by GreekKeys (G. Walsh); The KAYPRO 2 and 4 (and 10) by Marianne Sawicki and Peter Davids, in cooperation with Value Added Systems, a local Chicago dealership. Available printers included the Epson FX80, Okidata 92, and Toshiba P1351. Nearly 100 people filled out "User Group Registration" forms or left addresses at the information desk in Chicago, in addition to any who may have communicated directly with the KAYPRO booth. Thus it is now possible to provide a list of contact persons for particular hardware configurations as an initial step in exploring the establishment of various user groups. If you have not already registered through OFFLINE or through the Chicago exhibition, you are invited to supply the designated contact person with the following information: name-address-phone, computer- printer configuration, current project/interests and software needs. If you use the computer that is listed but a different printer, or vice versa, or use a different model of the same manufacturer's line, you should still communicate with the contact person, who will transfer any incongruous information to OFFLINE or to another appropriate group. If you use hardware NOT mentioned below (e.g. 
Commodore 64, Compaq, DEC Rainbow, IBM PC [Jr, XT, AT], Victor; Epson and Toshiba printers), please respond directly to OFFLINE. KAYPRO (and OKIDATA 92 printer) users are fortunate to have access to the experience and energies of Marianne Sawicki (PO Box 45264, Los Angeles, CA 90045; 213-645-2882), who took initiative in setting up the KAYPRO booth in Chicago, and of Peter Davids (Regent College, 2130 Wesbrook Mall, Vancouver BC V6T 1W6; 604-224-3245), founder and first editor of the Institute for Biblical Research Microcomputer Newsletter. Please contact Marianne directly to register interest in the KAYPRO group. Other equipment, with the contact persons: APPLE IIe and/or BROTHER HR15 printer --Richard K. Payne, Philosophy, West Valley College, 14000 Fruitvale, Saratoga, CA 95070; 408-867-2200 x321 -- special interest in Chinese and Japanese. Apple MACINTOSH with Imagewriter printer -- Walter Cason, 2121 Sheridan Road, Evanston, IL 60201; 312-866-3973. EPSON QX10 -- Roger A. Johnson, 22 Leighton Road, Wellesley, MA 02181; 617-235-7833. OSBORNE (CP/M) -- Stephen H. Skiles, 68 Oak Avenue, Northboro, MA 01532; 617-393-3875. SANYO MBC 555 and/or GEMINI 10X printer --Karl W. Rutz, 1269 W. Shryer, Roseville, MN 55113; 612-645-4245. TRS 80 Model II and/or TRS DaisyWheel II printer -- Roy E. Hayden, 7805 S. College Avenue, Tulsa, OK 74136; 918-492-5922. ZENITH Z-100 (MS-DOS and CP/M) and/or SILVER-REED printer --Paul Ferris, 1105 Gunter Circle, West Columbia, SC 29169; 803-794-7471. In addition to the announced presentations listed in the annual meeting program, a number of informal brief reports were received at the Chicago session of the SBL Computer Assisted Research Group, including the following: John Hurd and E. G. 
Clark (Toronto), on the imminent publication of their computer generated Concordance to Targum Pseudo-Jonathan to the Pentateuch (KTAV) and the availability of the electronic text; Richard Whitaker (Pella, Iowa), on his programs for Hebrew morphological analysis and parallel alignment of RSV with the Hebrew, and on efforts to make RSV available on tape; Raymond Martin (Wartburg Seminary), on morphological analysis and related work on the Hebrew and Greek of Ruth, Obadiah, Baruch, Epistle of Jeremiah, Daniel, Ezra, Jeremiah, and Ezekiel; R.-F. Poswick (Maredsous, Belgium), on the work of Le Centre Informatique et Bible (see its newsletter, INTERFACE); Niel Houk (Waukesha, WI), on statistical analyses of Esther; Alan Segal (Barnard), on computerized study of Midrash Tanhuma; John Evans (British Columbia), on the possibility of a QUME wheel for Hebrew; Johann Erbes (Andrews), on multilingual fonts for Ancient Near East studies; Robert Wright (Temple), on hardware developments of interest such as an updated IBM PC, new Quietprinter technology, and the new Okidata-mate 20 printer. It was also announced that the Latin morphological analysis programs developed under the direction of Louis Delatte at Liege had been purchased for installation at the University of Pennsylvania Center for Computer Assisted Textual Research, where they would be available for scholarly use. Another centralized service for humanistic computer users has been established at the University of Pennsylvania under the direction of John Abercrombie, namely, an electronic bulletin board on BITNET, an international communications network linking many colleges and universities. The service is free to BITNET users, and the updated bulletin board will be sent at the end of each month to all who request it. The bulletin board will contain items of interest especially to those engaged in computer assisted textual/philological and archaeological research. 
To add your name to the list of recipients, simply transmit your User ID and Address/Node to (user ID) JACKA at (node) PENNDRLS. You may also submit information to the bulletin board (dated items of no more than 50 words) in the same manner, or communicate with OFFLINE through BITNET by addressing (user ID) KRAFT at (node) PENNDRLN. Inquire at your institution's computer center to determine if you are on BITNET, which is coordinated through City University of New York. Those of you interested in "scanners" (optical character readers = OCRs), whether the high priced and versatile Kurzweil Data Entry Machine (KDEM) exhibited at the Chicago meetings or the inexpensive Oberon Omni-Reader, are referred to Doug Stewart's descriptive article "Machines that Read," in the 12/84 DIGITAL REVIEW (pp. 53-58). You might also be interested to know that Dartmouth's KIEWIT Computing Center charges a standard $.50 per page to read printed or typed materials in English/Roman fonts on their KDEM; contact Warren Belding (Hanover, NH 03755) for further details. For scanning Greek, Coptic, Hebrew, etc., contact OFFLINE. An extensive interview with Steven Jobs, cofounder and chairman of Apple Computer, Inc., has appeared in the February 1985 Playboy. Especially interesting is Jobs' perception of the struggle between IBM PC and Apple Macintosh technologies, and its significance for the future of microcomputing. Among the various reports of relevant new books on computing "in progress," the following seem sufficiently well advanced to warrant mention: John J. Hughes, on computer assisted biblical studies, to be published by Nelson (see IBR Microcomputer Newsletter 1.3 [6/84] #5.4); John Abercrombie and Alan Segal, comparing Apple Macintosh, DEC Rainbow, and IBM PC (including available software) as tools for textual research, to be published by University of Pennsylvania Press. //end #4// ------------------------------------------------------------------ << O F F L I N E 5 >> by Robert A. 
Kraft [dateline 07 March 1985] The Council on the Study of Religion Bulletin 16/2 (April 1985) ------------------------------------------------------------------ <...Afraid to Ask? (For the Uninitiated)> If you have resisted becoming informed about the "computer revolution" but this line somehow caught your eye, let me try to entice you further. There are a number of solid reasons why you should consider exposing yourself to "the new technology," with only minimal fear of the side effects. I will list a few of the most obvious, with brief comments: -- If you do a significant amount of writing and do not have extensive secretarial help, you would be well advised to replace your typewriter with a suitable computer package. (Otherwise, do the same for your secretary.) You will save enormous amounts of time on the actual processes of writing, editing, revising, indexing, etc. (and use up more than the time you saved in discovering what else your computer can do!). Why? Because on the computer you type the material only once, and then make whatever modifications are necessary without having to retype what does not need to be changed. You can easily modify the format of the same materials: single or double spacing, indentation, headings and headers, etc. You can make systematic changes "globally" (throughout the entire document) with only a few keystrokes. And with the appropriate printer and print commands, you can produce a variety of typestyles including foreign language characters (e.g. Greek, Hebrew) mixed with the English or standing alone. Expensive? Not terribly. By checking classified ads in the newspaper you might find a suitable used system for very little (of course, you also might need help to know what is suitable!) -- I have seen complete used systems (computer, monitor, disk drive, printer, wordprocessing program) for around $300.
The same sort of equipment can be purchased new for under $1000, and for about $2000 you can get one of any number of high powered and widely used new systems. My mother does a lot of letter writing as well as some articles. We bought her an inexpensive Commodore 64 system with a Gemini 10x printer and a simple wordprocessing program. After a bit of initial intimidation, she is doing just fine. Start checking around, and you won't regret it! -- If you work with texts, in any language, and especially with unindexed texts, the computer can change your entire pattern of research. Once your texts are in computer readable form, you can search them for syllables, words, combinations of words, etc., with speed and thoroughness. The larger your corpus of texts, however, the more computer capacity and power you will need. Thus the number of suitable machines becomes more limited, and the costs of investing in a system increase. For editing and printing uncomplicated texts, wordprocessing equipment will usually suffice. Editing texts in smaller sections is less of a problem than attempting to search large amounts of text in small sections. -- Services traditionally available only by visiting a library are becoming more accessible to computer users at their own desks (e.g. searches of bibliographies, indices, abstracts, articles, newspapers, encyclopedias) and it will become increasingly possible to access large data banks of relevant texts as central archives are created for this purpose. Communication with other computer users by means of telephone lines is commonplace. Such services have a price tag attached, but in many situations may be an excellent investment. (See further details below.) In short, don't be embarrassed to ask questions. Learn some of the jargon, if you feel you must. Seek clarification as necessary. But don't avoid the world of computers.
It is rapidly changing your world, and can provide you with unbelievably powerful new tools to help you with your work. How it may affect your play is a topic for a different sort of column! Traditional means of published communication, including this column, are much too slow to keep up with developments in the computer world, and the computer itself shatters traditional concepts of publishing. This column is being written, on a computer, on 6 March (at the latest of last minutes). When you see it, much of it will already be seriously dated. Nevertheless, until you all are electronically networked to the same sources of virtually instant information, we must rely on the established routes. OFFLINE 4 announced the establishment of a BITNET Bulletin Board as a service of the Center for Computing and the Humanities at the University of Pennsylvania. Responses and requests are slowly coming in from those of you fortunate enough to have access to BITNET. (As of 11/12/84, there were 113 Universities and similar institutions on this continent and abroad connected to the network, with 46 additional connections pending.) Send information to JACKA at PENNDRLS, or send him your User ID and Address/Node to receive the Bulletin Board each month. You can also communicate with OFFLINE via BITNET by writing KRAFT at PENNDRLN. We were pleased to discover that the Hebrew University in Jerusalem (along with a number of European locations, linked through Rome) is also on BITNET so that we can communicate freely with colleagues there in a matter of minutes and also transfer files of up to about 40 pages in length. As you discover other useful networks, please provide information that OFFLINE can pass along to traditional readers! Issue #5 of the PENN PRINTOUT, which can be obtained through OFFLINE, is devoted to microcomputer communications by means of telephone lines, with articles on hardware, software, and file transfer.
In an earlier issue (#3, p.3) a research librarian describes some services (BRS After Dark, Dialog's Knowledge Index) that permit owners of personal computers to link with a variety of "online searchable databases" at costs ranging from $.10 to $40 per minute (plus a one-time startup fee, etc.), as well as online services available locally at the Penn Library. Another service available through the Penn Library is a six page (as of 1/85) special bibliography "HOW TO FIND OUT ABOUT MICROCOMPUTERS: SELECTED REFERENCES" which is intended to help individuals find information for selecting and evaluating microcomputers and microcomputer software. It will be updated every few months as new materials are acquired in the Penn libraries. You can obtain copies through OFFLINE, or directly from Julie Miller, Reference Department, Van Pelt Library/CH, University of Pennsylvania, Philadelphia 19104. In addition to the Kurzweil Data Entry Machine (KDEM) services offered by the KIEWIT Computing Center at Dartmouth (OFFLINE 4), the Center for Computing in the Humanities at Penn (OFFLINE 3, now with some help from the Packard Foundation), and the Oxford University Computing Service (England; see OFFLINE 3 in reference to the Oxford Archive), a KDEM is also operating at Duke University (Durham, NC 27706), under the direction of Prof. Frank L. Borchardt of the German Department, as part of the COLOE Project (Computerization of Language Oriented Enterprises; 919-684-3836). All of these centers accept requests from outside the respective universities, and charge "non profit" prices. Thus every month, a number of texts are converted automatically into electronic form in these locations, not to mention what happens elsewhere. The texts are quite "raw" when they emerge from the KDEM -- further verification and editing are usually needed -- but it is hoped that most of them will become part of the growing "public domain" of such data.
At Penn, we have recently encoded the Sahidic Psalter (BM Or 5000, ed. Budge), the Hebrew Avot de Rabbi Nathan (Schechter ed.), various Latin texts, and the textcritical apparatuses to the Goettingen Septuagint editions, among other things. Duke has also successfully scanned materials in various languages, as well as some of the Hymns of Wesley. Oxford maintains a large Archive of materials which is constantly growing. Hopefully those of us involved in creating such materials will communicate the appropriate information to the Rutgers Inventory of computer readable texts (OFFLINE 3) and similar listings such as OFFLINE. James Spickard (Box 406, Aromas, CA 95004) reports that he has been preparing an article for PROFILES, the Kaypro users magazine, on printing foreign characters, and has provided OFFLINE with a three page summary of his findings. Although there is not room in the present OFFLINE column to reproduce what he sent, I will forward a copy to anyone who sends an appropriately addressed envelope. Or you may contact him directly. He is interested in hearing from persons who have tried other programs as well. His review covers the following programs: Chartech on CP/M, MSDOS and Apple II with CP/M card machines, with most dot matrix printers, for the WordStar wordprocessor; Greek, Hebrew, Cyrillic, and user defined capabilities ($95); Techware, Box 10545, Eugene, OR 97440; 503-484-0520. Woodsmith on Kaypro and IBM types with Okidata and Star Micronics downloadable printers, and any wordprocessor capable of font shifts; custom character sets for Greek, Hebrew and Cyrillic ($34.50), character design kit for creating your own fonts ($49.50); Woodsmith Software, Rt. 3, Box 550A, Nashville, IN 47448; 812-988-2137.
FancyFont for all CP/M and MSDOS machines with best results on the Epson FX printer but also compatible with Star Micronics (Gemini, Delta, Radix) and some Texas Instruments printers and the Epson MX; can be used by itself as a typesetting program or with other wordprocessors; Greek, Hebrew, Cyrillic, phonetics, and user definable capabilities ($180); SoftCraft, 222 State St., Madison, WI 53703; 608-257-3300. Apple II users will also be interested in the HAY-SOFT SYSTEM for Multi-Language Display and Printing developed by Michael Stone and his associates at the Hebrew University, which currently supports Hebrew, Greek, Armenian and Roman fonts, and supplies a printer module for the Epson FX. Michael Stone has also been instrumental in developing an Armenian ASCII standard coding and keyboard layout. For further information contact him at PO Box 16174, Jerusalem 91161, or through BITNET (see above), STONEA at HBUNOS. There is also a program called DUKEFONT for the Victor 9000 which permits wordprocessors like WordStar to display Greek, Hebrew, Coptic, Ethiopic and Cyrillic characters. It was developed by Jeffrey W. Gillette (1801 Morehead Ave., Durham, NC 27707) in connection with the COLOE Project at Duke (see above). He is also working on a number of other items of probable interest to OFFLINE readers including computer assisted foreign language instruction (CALIS) and sophisticated concordance searching. For users of the IBM PC with color graphics, Jack Abercrombie's PENNWRITE program enables the computer screen to serve as a simple typewriter display for using and/or mixing Roman, Greek, Hebrew, Arabic, and other user generated fonts (without further editing capabilities) and is available through OFFLINE at the cost of reproduction. By the end of the summer, Abercrombie hopes to have a diskette available which presents programs for foreign character generation in a variety of fonts for screen and printer.
A couple of other commercially advertised programs for displaying and printing foreign characters have come to our attention recently. Research Software Division of Research Corporation at 6840 E. Broadway, Tucson, AZ 85710-2815 (602-296-6400), advertises several programs for customizing the character set that appears on the screen of the IBM PC (and some compatibles) in connection with the WordStar wordprocessor, and for printing to various printers. One of the programs claims to do Hebrew from right to left, but it is not clear whether this can be mixed effectively with left to right character display as well. Another program includes Greek, Cyrillic, and Katakana characters. Prices range from $80 to $140. Finally, Arabic Software Associates Inc. (1649 Wright Ave., Sunnyvale, CA 94087; 408-738-1011) offers "Alkaatib" for about $250, which promises Arabic and Persian fonts on an Apple Macintosh now, and a complete multilingual wordprocessor "later this year" (also a version for the IBM PC at a later time). Presumably the display works from right to left, although this is not stated explicitly in the literature. The software automatically adjusts to the correct forms of letters (e.g. final letters) and ligatures. In June, Jack Abercrombie plans to release an expanded and updated form of the IBM PC diskette containing the programs presented in his book (1984); the new diskette will contain compiled programs in Turbo Pascal that can be run on any IBM DOS 2.1 machine, whether it supports Pascal or not. The programs address many basic needs of textual research: searching and sorting, indexing and concordancing, comparing. It will also include program modules that can be combined into new configurations by the user. For detailed information on products being developed in the COLOE Project at Duke (e.g. DUKEFONT, CALIS), contact F. Borchardt or J. Gillette at the addresses listed above.
Another sort of product that was brought to my attention recently, after I had ordered yet another batch of ribbons for the Spinwriter, is a ribbon reinker sold by Computer Friends, 6415 SW Canyon Ct. - Suite 10, Portland, OR 97221 (503-297-2321). It is called MAC INKER and is available in different forms for virtually any printer, at costs ranging from $50 to $80 (a bottle of ink is $3). About 12 reinkings would probably pay for the machine. Maybe I'll get one. (I have about 4 years' worth of ribbon corpses stored away in a box awaiting such a resurrection!) In OFFLINE 3, I requested further information from readers on the problems of getting different microcomputers to communicate with each other, and was not disappointed. The Apple II series is not as isolated from other machines as I suggested, if one obtains a CP/M card for the Apple II, or an Apple II card for the IBM PC. Many of the problems I mentioned at that time are now being solved by developments in communications software (see e.g. PENN PRINTOUT 5, mentioned above, which reviews some of the products). For this type of information, the newly inaugurated user groups are in a position to provide their members with better service than any outsider can. OFFLINE has not yet addressed this aspect of the computer field in any detail, partly because I am under the impression that the development of computer assisted instructional materials for the humanities (not to mention religious studies as such) is still in its infancy. Some interesting and valuable things have been done especially for language instruction, and especially in connection with University computer centers, but much is needed by way of development and dissemination (packaging) of such materials for the individual microcomputer user. If readers have accurate and current information on these matters that seems appropriate to OFFLINE's audience, please send it along! We would like to be updated on, e.g.
PLATO, Apple Consortium developments, IBM Threshold grant results, and the like. Should there be another special exhibition of computer related projects and products at the SBL/AAR meetings in Anaheim in November? Commercial vendors will be encouraged to display their wares under arrangements similar to those for commercial booksellers. But shall we try to arrange for other sorts of "non commercial" displays of interest and relevance such as was done at the 1984 Chicago meetings (e.g. Weidner Communications Corporation automatic translation programs, Kurzweil Data Entry Machine, Kaypro user group and IBYCUS System project displays)? Such exhibitions sometimes need to be subsidized and require special effort to organize. If you favor this approach, let us know. If you have suggestions about what sorts of displays would interest and/or help you most, let us know. We may not always be able to deliver, but we can always try. //end #5// ---------------------- << O F F L I N E 6 >> by Robert A. Kraft [dateline 28 January 1986] RSNews 1/2 (March 1986) ---------------------- The first five installments of this column, sponsored by the SBL Computer Assisted Research Group steering committee with support from the SBL and AAR, appeared in the CSR Bulletin from June 1984 (15/3) through April 1985 (16/2). (For further background, see also the April 1984 (15/2) article "In Quest of Computer Literacy.") Resumption of OFFLINE in this new format, at the kind invitation of the AAR and SBL Executive Secretaries, reestablishes a convenient forum for communication in this fast moving technical field. OFFLINE 4 included the names of individuals who had volunteered to serve as contact persons for various microcomputer hardware configurations as a step towards the establishment of working user groups. Some results of such cooperative efforts were evident at the special Anaheim computer exhibit (see below). 
Especially active are the KAYPRO, IBM PC, Apple MAC and Apple II groups, each of which has a relatively large constituency. An updated list of contact persons follows, for convenient reference. To register, please send the following information to the appropriate contact, or to OFFLINE: name, address, phone, computer-printer configuration, current project/interests and software needs. If you use hardware NOT listed below, please respond directly to OFFLINE.

APPLE II -- Dr. Richard K. Payne, Philosophy, West Valley College, 14000 Fruitvale, Saratoga, CA 95070 (408-867-2200 x321)
Apple MACINTOSH -- Dr. J. Walter Cason, 2121 Sheridan Road, Evanston, IL 60201 (312-866-3973)
COMMODORE 64 -- Dr. Lloyd Gaston, Vancouver School of Theology, 6000 Iona Drive, Vancouver, BC V6T 1L4, Canada (604-228-9031)
COMPAQ -- Dr. Richard D. Weis, Ancient Biblical Manuscript Center, PO Box 670, Claremont, CA 91711
DEC RAINBOW -- Dr. Elmer B. Smick, 84 Old Cart Rd., South Hamilton, MA 01982
EPSON QX10 -- Prof. Roger A. Johnson, 22 Leighton Rd., Wellesley, MA 02181 (617-235-7833)
IBM (PC XT AT) -- Mr. Frederic C. Putnam, Biblical Theological Seminary, 200 N. Main St., Hatfield, PA 19440 (215-368-5000)
KAYPRO -- Dr. Marianne Sawicki, 631 South Limestone, Lexington, KY 40508
OSBORNE (CP/M) -- Dr. Stephen H. Skiles, 68 Oak Avenue, Northboro, MA 01532 (617-393-3875)
SANYO MBC 555 -- Dr. Karl W. Rutz, 1269 W. Shryer, Roseville, MN 55113 (612-645-4245)
TELEVIDEO (CP/M) -- Dr. Valarie Ziegler Morris, 1508 W. Luburnum Ave., Richmond, VA 23227
TRS 80 -- Dr. Roy E. Hayden, 7805 S. College Avenue, Tulsa, OK 74136 (918-492-5922)
ZENITH Z-100 (MS-DOS and CP/M) -- Dr. Paul Ferris, 1105 Gunter Circle, West Columbia, SC 29169 (803-794-7471)

A large number of computer readable texts are "out there," but it is difficult (1) to obtain reliable information about them and (2) to acquire copies for one's own use. Two recent developments signal significant hope that the situation will soon be brought under better control.
Librarians are becoming more and more involved in cataloging these materials, and the influential Research Libraries Group (RLG) has committed itself to making the Rutgers Inventory of electronic texts (see OFFLINE 3) available on RLIN (the Research Libraries Information Network). The development of "laser disk" ("CD ROM" = Compact Disk, Read Only Memory) technology capable of storing up to 540 megabytes of information (i.e. about 1000 volumes of 150 pages each!) on a single small-sized disk at reasonable costs for access from microcomputers promises to have various benefits. Not only will it facilitate searching and study of large bodies of material, but because the data on the CD ROM cannot be changed, it will help standardize the materials being "published" in this form and make it easier to control the processes of subsequent correction and verification. An experimental laser disk for use by software developers has recently been produced by the Thesaurus Linguae Graecae (TLG) project at the University of California at Irvine, with funding from the David and Lucile Packard Foundation. It has been exhibited on the new IBYCUS Scholarly Personal Computer (see below) at the AAR/SBL meetings at Anaheim and at the American Philological Association meetings in Washington DC. TLG expects to publish and market the complete corpus of its materials (all Greek literature through the 6th century CE!) on such a laser disk in the near future. Another such CD ROM, containing biblical materials (Hebrew and Greek), ancient Latin works, and other available "public domain" electronic texts, is being prepared by the Facility for Computer Analysis of Texts (FCAT) at the University of Pennsylvania, with anticipated support from the Packard Foundation, for release at nominal cost later this year. If you know of suitably verified material that can be made available for scholarly use in this format, please contact OFFLINE.
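The "1000 volumes" figure can be checked with simple arithmetic. The following sketch (in a modern programming language, purely for illustration; it assumes decimal megabytes and one byte per character of plain text) works out the bytes available per printed page:

```python
# Rough check of the CD ROM capacity claim quoted above:
# 540 megabytes divided among 1000 volumes of 150 pages each.
disk_bytes = 540 * 1_000_000      # 540 megabytes (decimal, assumed)
volumes = 1000
pages_per_volume = 150

bytes_per_page = disk_bytes // (volumes * pages_per_volume)
print(bytes_per_page)  # 3600
```

At roughly 3,600 characters per page, the claim is consistent with ordinary printed pages of plain text, which makes the "1000 volumes" estimate a reasonable one.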
Meanwhile, gradual progress is also being made in getting textual material onto diskettes for use with microcomputers. The aforementioned FCAT can supply virtually any publicly available text on IBM (DOS 2.0 or greater) diskettes at cost (minimum $25), including the Hebrew consonantal biblical text (7 diskettes, $39), the Greek NT text (4 diskettes, $33), and a wide array of other materials. Hopefully, the user groups will be able to advise and coordinate transfer from IBM to other diskette formats by means of available communication programs and devices. FCAT is anxious to coordinate such efforts, but is not equipped to make these transfers itself. For further information, contact FCAT through the OFFLINE address. Even if you attended the 1985 AAR/SBL meetings at Anaheim, it would have been difficult for you to take advantage of everything of potential interest, including computer related developments. In the regular exhibit hall were a few "technological displays" of special note such as the American Theological Library Association computerized indices, Trinity Evangelical Divinity School's Project GRAMCORD for sophisticated searching and analysis of NT texts, Ronald Benun's similar MIKRAH Computer Research Systems for working with the Hebrew biblical text and related material, and Dragonfly Software's NOTA BENE wordprocessing system, with its special orientation towards humanist scholarly needs. There was also a special exhibit for non-commercial computer software and related developments in a separate room, coordinated by Jacqueline Pastis of the University of Pennsylvania. The KAYPRO user group (see above) was well represented and provided some basic instruction in how to use the KAYPRO for scholarly purposes as well as displaying various relevant programs. 
The Apple MAC group also demonstrated available software, especially for generating non-English character fonts, and David McCarthy entertained and instructed us with his Hebrew language voice synthesizer for the MAC. Thanks to the cooperation of IBM, a full IBM AT system was also available for demonstrations of various scholarly developments. Jack Abercrombie showed his PENNWRITE and PENN TOOLBOX software, and Jeffrey Gillette exhibited the DUKE TOOLKIT (for background on both systems, see OFFLINE 5). Robert Wright demonstrated programs for textual collation and multilingual wordprocessing developed at Temple University, and John Turner showed his powerful little concordance program. We were also able, without any advance planning, to enable Michel Roberge and Catherine Barry of the Laval Coptic project to demonstrate programs, which they happened to have brought to the meetings, for working with the Nag Hammadi materials. Finally, William Johnson of IBYCUS Systems exhibited the prototype of the new Scholarly Personal Computer scheduled for release in Spring of 1986, with its dazzling array of foreign fonts, windows, CD ROM accessed texts, and light pen. OFFLINE will attempt to assist readers who desire further information about these projects or who wish to establish direct contact with any of the project representatives. //end #6// ---------------------- << O F F L I N E 7 >> by Robert A. Kraft [dateline 21 March 1986] RSNews 1/3 (May 1986) ---------------------- -- If you do not need to display or print non-standard characters, please accept OFFLINE's apologies for returning again and again to the concerns of those who do, and especially those interested in Hebrew and Greek. A variety of acceptable "wordprocessing" programs can be obtained for relatively uncomplicated English language work, but it has been more difficult to find multilingual capabilities. 
With the growing availability of more foreign language texts (see OFFLINE 6), the need for compatible foreign character display to screen and to printer becomes all the more crucial. At present, various approaches have been developed, with varying degrees of complexity and potential frustration. <. . . What Kind of Machine is Best for Me?> -- The ideal microcomputer system for which OFFLINE is searching would be able to display all the desired characters attractively on the screen (color is an interesting luxury) and printer, to deal with large bodies of textual material efficiently, to run special programs (e.g. concordancing) on the textual materials without undue complications, to mix and edit the various fonts easily, with sophisticated "wordprocessing" functions readily available (e.g. rapid search/replace, footnoting), and be reliable, maintainable, affordable and easy to operate. Nothing currently on the market combines all of these features conveniently and effectively, although the IBYCUS Scholarly Personal Computer that is scheduled for release later this year (see OFFLINE 6) promises to change that situation. For a valuable survey of the situation up to early 1985 see Roger Bagnall, Word Processing for Classicists (American Classical League, Miami Univ., Oxford, OH 45056; tel. 513 529-4116). With the currently available systems and software, two different approaches have been taken to the problem of foreign character generation, and related issues. The first approach treats the screen (or printer) as a gridwork of tiny points, each of which can be turned on or off. This is what is meant by "graphics" mode, with "all points addressable."
The impressive graphics versatility of such machines as the Apple Macintosh, Xerox Star, Commodore Amiga and Atari ST is largely the result of building the systems around this capability using a "vector graphics" approach that permits mathematically controlled flexibility of shape and position (enlargement, reduction, elongation, relocation). In such an environment, foreign characters can be created as static pictures ("raster graphics" or "bitmapping") within specified grid dimensions by means of a "font generator" program, after which they become part of the system's dynamic "vector graphics" processes, easily used in other programs such as wordprocessing. The variety of characters in use at any given time is limited only by the amount of machine memory available and the intervening software. The Imagewriter type of printer normally used with a Macintosh reproduces what is on the screen so that what you see there is what you get on paper (with the option of reducing print quality for the sake of quicker printing, when desired). This is called a "screen dump" approach, as over against character transmission. Ready-made Macintosh character sets for various languages are available at reasonable prices from software developers such as SMK GreekKeys (5760 Blackstone Ave, Chicago 60637) or Linguists' Software (PO Box 231, Mount Hermon, CA 95041). OFFLINE is not aware of similar products for the Xerox Star, Amiga or Atari at this time. A similar approach can also be used on machines that were not specifically developed around a graphics oriented system. John Abercrombie's PENNWRITE program for the enhanced IBM PC/XT/AT and compatibles (see OFFLINE 6) displays its foreign characters on the screen as graphics images that have been vectorized. This gives the advantage of versatility and open-endedness, at the expense of requiring relatively large amounts of computer processing which can slow down performance. 
It is also relatively more difficult to coordinate this approach with standard wordprocessing programs available on an IBM type machine. The printing process, however, treats the screen graphics character images as though they were standard non-graphic fixed characters and generates corresponding characters on the printer. Thus, unlike the "screen dump" approach described above, the printer here imitates the screen rather than directly reproducing it (see also below). The second approach treats characters not as potentially dynamic graphics pictures, but as fixed blocks with set dimensions (e.g. 8 by 12 screen points or "pixels") for screen display. In this "alpha-numeric" approach, new characters can also be created through a "font generator" program and then loaded into identifiable memory locations, either in a designated character "chip" or PROM (Programmable Read Only Memory) which stores them indefinitely, or into a "volatile" section of RAM (Random Access Memory) in which they remain until replaced or until the electricity is cut off. The number of characters thus created is limited to the number of PROMs the machine can accommodate for this purpose or to the space allocated in memory (typically enough for 256 characters), but access is quick, machine processing activities are not taxed, and compatibility with many standard wordprocessing (and other) programs can be achieved with relative ease. Usually, some expense is involved in adding or replacing the character PROMs and upgrading the screen resolution needed to produce sufficiently legible special characters. A number of developers have used this second approach, especially on IBM type machines. The DUKE LANGUAGE TOOLKIT, demonstrated by Jeffrey Gillette at Anaheim (see below and OFFLINE 6), is an excellent example and can be obtained without significant cost (J.W. Gillette, The Divinity School, Duke University, Durham, NC 27706).
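The fixed-block character cells just described can be pictured concretely. The following sketch (in a modern programming language, purely for illustration; the bit patterns are invented for the example and are not taken from any actual font PROM) stores each row of an 8-pixel-wide cell as one byte and "renders" the cell, which is essentially what a font generator program lets the user design and what the character chip holds:

```python
# A fixed-cell bitmap glyph in the "alpha-numeric" style: an
# 8-pixel-wide cell, one byte per row. The rows below sketch a
# capital Greek gamma (top bar plus left stem); patterns invented.
GAMMA = [
    0b11111110,
    0b11000000,
    0b11000000,
    0b11000000,
    0b11000000,
    0b11000000,
    0b11000000,
    0b00000000,
]

def render(glyph):
    """Return the glyph as text lines: '#' for pixel on, '.' for off."""
    return ["".join("#" if row & (1 << (7 - col)) else "."
                    for col in range(8))
            for row in glyph]

for line in render(GAMMA):
    print(line)
```

Because every character occupies the same small fixed block, lookup is a fast table index rather than a graphics computation, which is exactly why this approach taxes the machine so much less than the vector graphics approach described earlier.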
The commercial NOTA BENE software package, which has been endorsed by the Modern Languages Association, has recently introduced Greek and Hebrew capabilities using the Duke approach. (Equal Access Systems, 211 Bergen St, Brooklyn, NY 11217). Various other private and commercial examples could be mentioned, including the work of Kevin Clinton at Cornell (for Greek Inscriptions), John Hurd at Toronto, and vendors such as AcademicFont, Proofwriter International, Multi-Lingual Scribe, Mikrah (Hebrew), Davka/Mince (Hebrew) and Wordmill (Hebrew). For the Apple II series, the Gutenberg package is one of the earliest versatile multilingual approaches developed; more recently, M. Stone's Haysoft System uses chips to generate Armenian as well as Hebrew and Greek on Apple II hardware. Producing the foreign characters in printed form on paper ("hard copy") in this non-graphics approach is a relatively straightforward task for printers that can accommodate additional character sets. Representations of the characters seen on screen can be introduced as fixed patterns into the printer's own memory. Some printers accept ready made character PROMs or cartridges as relatively permanent memory modules; some are equipped with temporary printer memory space to receive "downloaded" character sets of the user's choosing. The printing of non-standard characters can also be separated from the question of screen display. Programs and peripherals that deal only with printer fonts are available, both on the commercial market and as freeware. Even without the ability to see foreign characters on your screen, you may be able to generate them on your printer (see OFFLINE 5 for further details). -- There are too many variables for a quick and firm answer. Remember that the cost of software and add-ons must also be included in comparisons. If you have only $1000 to spend and want one of the new generation of machines, the Atari ST is the only option known to OFFLINE at this date.
For about $500 more, the Amiga has even greater potential. But until you, or someone else, develops appropriate software for these machines you may be relatively limited in what you can do. The 512K Macintosh Plus is well established as an option, already has some suitable software, but costs about twice the Amiga and has less flexibility for future development. The IBYCUS SPC should cost about the same as a "Fat Mac" with software and will be focused exactly on the needs of humanistic scholars, which makes it less useful for other, more general purpose functions (e.g. games, tax preparation). Several IBM compatibles fall within this same price range, if you can insure that the desired software will run on them, while the necessary certified IBM equipment may be more expensive. -- It is OFFLINE's intention to provide, from time to time, brief reviews of relevant products that become available and for which "demo disks" can be obtained. For example, the CompuBIBLE demonstration packet from Word Church Services (4800 W. Waco Drive, Waco, TX 76710) has arrived. It runs on IBM type machines and provides various searching and concording approaches to the Authorized (King James) Version of the Bible (coded data plus programs, $299). More later. -- OFFLINE also will try to keep you informed about sources of relevant information. One of the most comprehensive for biblical and related areas is the Centres de Traitement Automatique de la Bible (Abbaye de Maredsous, B-5198 Denee, Belgium), which keeps updated lists of projects and publications. Of great general interest, although not particularly aware of multilingual developments, is "Horizon 90: Humanities and New Technologies," by Raymond Ortali (9/85; Inst. for Renaissance Interdisciplinary St., Humanities 223, State Univ. of NY at Albany, NY 12222). 
-- In addition to the computer exhibits at the AAR/SBL 1985 annual meetings in Anaheim (described in OFFLINE 6), the session of the Computer Assisted Research Group also bristled with timely information from various sources. Major reports (see the published program) were heard from the Facility for Computer Analysis of Texts (FCAT) (J. Z. Pastis, B. G. Wright, J. R. Abercrombie), from the proposed Hebrew Bible Variants Database (M. Fox), from GRAMCORD (P. A. Miller), and from the Computer Assisted Tools for Septuagint Studies (CATSS) project (E. Tov). The following brief reports also were received: R.-F. Poswick (Maredsous, Belgium; by proxy), update on biblical text projects and a recent conference; S. Ralston (U. Cal., Irvine), TLG project update; J. Cook (Stellenbosch, S. Afr.), The Syriac Peshitta Project; J. Gillette (Duke), projects developed at the Duke Center relating to computer assisted instruction and foreign language character generation (see above), among others; R. Whitaker (Pella, Iowa), Hebrew biblical text and its morphological analysis update; A. Baird (Wooster), the Computer Bible publications series update; G. Chamberlain (Dubuque), work on a LXX mini-lexicon and glossaries; R. Wright (Temple), multilingual wordprocessing and textcritical searching (IBM PC); J. Turner (Nebraska), key word in context concordance; R. Saley (Harvard), Photogrammetry project update; J. Erbes (Andrews University), new products for multilingual display and print applications; J. Wise (The Way), Syriac NT text update; J. Strange (Univ. South Florida), archaeological applications; D. McCarthy (Wisconsin/Madison), Hebrewtalk on the Apple MAC; C. Barry (Laval University), "Nag Hamedit" Coptic editing program. OFFLINE will attempt to assist readers who desire further information about these projects or who wish to establish direct contact with any of the project representatives. 
//end #7//
------------------------------------------------------------------
<< O F F L I N E 8 >>
by Robert A. Kraft
[dateline 30 May 1986]
RSNews 1/4 (July 1986)
------------------------------------------------------------------
The "Facility" for Computer Analysis of Texts at the University of Pennsylvania is now officially a "Center." But we do make mistakes, and OFFLINE provides an efficient correction medium. If you received IBM PC diskettes of the Hebrew Bible from CCAT prior to 16 May 1986, several books may be missing the final few characters, and the Pentateuch/Torah may include the notation "n" for proper names, which is not mentioned in the documentation. Some of you may be pleased with the latter bonus, which was not supposed to be released in such an incomplete and unexpected manner. But if you want the intended text, without the "n" character but with the end characters, return the faulty diskettes to CCAT for replacement. Alternatively, there is an easy way for you to correct the material yourself: first, create from the console a file with the missing characters and an end-of-file marker, thus --

A:>copy con: missing.txt
M00?
^Z
     1 File(s) copied

then join that file to the original in a new, corrected file --

A:>copy micah.mt + missing.txt b:micah.new
MICAH.MT
MISSING.TXT
     1 File(s) copied

and finally, replace the original with the new (corrected) file --

A:>copy b:micah.new a:micah.mt
     1 File(s) copied

The missing characters are as follows: ? [=new line] in Lev, 2Kg, Joel; 00? in Hos, Ezek; M00? in Ex, Micah; W00? in Num, 1Kg, Jer; otherwise -- Gen YIM00?, Judg 5H00?, 2Sm 75L00?, Amos Y/KF00?, Jonah AB.F75H00?, Hag :BF)O75WT00?, Pss YF75H.00?, Job 5YM00?, Ezra [space]P?, 1Chr WT00 P?. Three other errors have also been reported:
Prov 25.17 TAG:L/:KF, change T to R
Qohelet 10.13 KIS:L92W.T (BHS), change KIS to SIK
Dan 11.10 **W:/YIT:G.FRE73W, change final W to H
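For readers repairing the files outside of DOS, the three console steps above amount to appending the missing characters to the truncated file. A minimal sketch, using the Micah example from the column (the filename and the string "M00?" are taken from that example; substitute the characters listed above for each affected book):

```python
# Sketch of the diskette repair: append the missing final characters
# to a truncated book file, leaving the existing text untouched.
def repair(path, missing):
    """Append the missing final characters to a truncated text file."""
    with open(path, "a") as f:   # append mode preserves the original text
        f.write(missing)

# usage, per the Micah example (hypothetical local copy):
# repair("micah.mt", "M00?")
```

This collapses the DOS create-join-replace sequence into a single in-place append; the result is the same corrected file.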
You may also have received a faulty version of the WHATHEB "filter" program, for selecting various forms of the Hebrew (without cantillation, consonantal only, etc.). Does it remove " and U in the consonantal text option? Does it preserve schwas with the vocalized form? Does it work properly for monochaptered Obadiah? If not, return a disk for correction. Henceforth, we will also make the source code freely available with the understanding that any improvements will be sent to CCAT so that all may benefit from our combined efforts. Your help would be appreciated in other ways as well. Many of you do not have IBM PC type machines but would like access to the texts now available through CCAT on IBM formatted diskettes. Some of you may be appropriately equipped to make the transfer from IBM or ASCII tape to other diskette formats (Apple II, Macintosh, Kaypro, etc.). CCAT will be happy to supply, free of charge, the texts for such transfers (on IBM diskettes or 9 track ASCII tape), in exchange for a master copy of the resulting non-IBM diskettes for distribution through CCAT and/or through the appropriate users group. Lest you despair of attempting to keep up with what is happening in computer assisted research, an excellent new book (yes, "hard copy"!) looms on the horizon and hopefully will appear while its contents are still timely. Don't be put off by the cutesy title, (Nelson and Sons). If the author, John J. Hughes, does not have to pare his 600 plus page draft manuscript too radically, you will have access to an accurate and detailed description -- intelligent and encyclopedic -- not only of scholarly uses of computers, but of how computers operate and of bible study uses at more popular levels (games, bible school applications, etc.). An impressive accomplishment! Of a quite different nature, but also attempting to foster technological awareness among humanists, is the new ACLS publication SCHOLARLY COMMUNICATION (1717 Mass. Ave. NW, Suite 401, Washington DC 20036). 
The Spring 1986 issue (No. 4) contains a stimulating article by Hugh Kenner (Johns Hopkins, English) on "Computers, libraries, scholars," which explores some of the ways computers affect our scholarly habits. There is also a "Notes on Computers" section, with information on database management software, the Duke Language Toolkit, the Brigham Young concordance program, and the new Macintosh quarterly for academic computing, among other things. Along the way, the editors also observe that "a more expensive program is not necessarily a better program" -- I hasten to add that scholar-produced "freeware" is often more suited to scholarly needs than commercial software! -- and refer to a study that claims that "99 percent of scholars could do 99 percent of their serious work efficiently" on a computer system costing less than $800. Perhaps true, but can we afford the humiliation? Not quite so new, but also worth mentioning, is the (vol. 1 appeared in 1983). For a free prospectus write to the Rose-Hulman Institute of Technology, Terre Haute, IN 47803. Another valuable source of timely information is SCOPE, available by itself (PO Box 1057, Osprey, FL) or at reduced rates as part of a membership package from the Association for Computers and the Humanities (Harry Lincoln, Music Department, SUNY Binghamton, NY 13901). Information also comes in electronic forms. Abercrombie's BITNET Online Bulletin Board has been mentioned before in OFFLINE. It is free, but not everyone has access to BITNET. Now from North Carolina State University and General Videotex Corporation's Delphi system comes HumaNet to link humanists in more than 65 countries, using modems and telephone lines at the local level. Individual subscriptions cost around $30, plus hourly online fees. Contact Richard W. Slatta, ScholarNet Director, NC State, Box 8101, Raleigh, NC 27695; tel 919-781-3181. 
It is a pleasure to announce that orders are now being accepted for the IBYCUS Scholarly Personal Computer, which almost certainly will become the standard for speed, convenience and price against which all other scholarly text processing systems (monolingual or multilingual) must be judged. Developed by scholars for scholars, the computer and software together are priced around $3000, and the system is configured to operate with a laser disk reader (see OFFLINE 6) and laser printer as added options. Contact IBYCUS Systems, Box 410, Bldg B, 301 North Harrison St., Princeton, NJ 08540. Linguists' Software for the Apple Macintosh has changed its address (106R Highland St., South Hamilton, MA 01982; tel 617-468-3037) and expanded its offerings to include Greek for the Apple LaserWriter as well as Hebrew, Greek, Arabic, Farsi, Syriac, Coptic, Ethiopic, Sabean, Devanagari, Hieroglyphics, Ugaritic, and more, for the ImageWriter. Most packages cost less than $80. For CP/M machines, Techware/Pangloss (474 Willamette St., Suite 201, PO Box 10545, Eugene, OR 97440; tel 503-484-0520) has announced the availability of Greek, Russian, Hebrew and ("coming soon") Arabic, including screen display for appropriately configured systems. Prices are under $200 per package. Chinese for the IBM PC/XT/AT is being marketed by the Asiagraphics Corporation (407 East Main Street, PO Box 153, Port Jefferson, NY 11777; tel 516-473-8881), at around $500 for properly configured systems. I am pleased to report that some readers have acquired the exciting new (and very inexpensive) Atari ST and are developing software for it. Please inform OFFLINE if you wish to become part of a users group for this machine. The Atari includes in its basic character set unpointed Hebrew and several Greek characters. It can also be made IBM compatible with a bit of work and/or extra expense, and should be able to do whatever a Macintosh can do with foreign fonts as the appropriate software becomes available. 
Similar things are also true of the somewhat more expensive, and more versatile, Commodore Amiga (see OFFLINE 7 for background information). OFFLINE has also been receiving information on computer developments in England (esp. Oxford and Manchester) and Scotland (esp. Edinburgh) that will be of interest to some readers. In general, the projects underway are similar to what is happening here (multilingual editing/display, concordancing, searching, language instruction, stylistics, archiving), although the hardware is not always familiar. Transatlantic BITNET communication has not always been reliable, but seems to be improving. OFFLINE will attempt to route specific queries to the appropriate destinations. IBM is joining the 3.5 inch diskette world, which should make it easier to communicate with such machines as the Macintosh, HP 150, Amiga, Atari, and Ibycus; the IBM 4865 portable 3.5" drive is announced at under $400. IBM has also announced a double disk drive portable that looks very promising on paper. Toshiba America is introducing a laser printer in the lower price range to compete with the popular HP LaserJet. This is especially noteworthy since the Toshiba line of high quality, low priced matrix printers has become widely used by those who work with downloaded foreign fonts. Look for the price of laser printers to drop as competition increases. Datacopy Corporation (1215 Bella Avenue, Mountain View, CA 94043; tel 415-965-7900) has announced several models of "image scanners" (using a graphics approach rather than pattern recognition; see OFFLINE 7) for IBM PC type equipment. Prices begin at $3000 and mount rapidly, depending on the tasks envisioned. OFFLINE has not yet seen the system in action. Already, much of what gets published goes through an electronic form somewhere in the process from the author's desk to finished copy. 
It is in your interests to recover from your publisher the best electronic form available, for further revision, excerpting, indexing, bibliography, etc. Past experience shows that once something is published, it is easy for the electronic data to be lost or destroyed. Don't let this happen to you. Make the most of the new possibilities offered by computerized research and publication! In some instances, the published material may be more useful in machine sensible form. For example, a new index to JBL is nearing completion, by John Hurd at Toronto. The Society has seen the wisdom of making this available in electronic form as well as in printed form, and it is clear that searches of the computerized index will be able to provide much desired information more quickly and thoroughly than would otherwise be possible. Similarly, discussions are underway to excerpt the bibliographies from the SBL Centennial Series on the Bible and its Modern Interpreters and make that material available in computer form for scholars, students and teachers. Your ideas about such projects are appreciated. Think of ways the new electronic tools can help you most.

//end #8//
----------------------
<< O F F L I N E 9 >>
by Robert A. Kraft
[dateline 31 July 1986]
RSNews 1/5 (September 1986)
----------------------

How embarrassing! It has been brought to our attention that the IBM PC format diskettes of the "Septuagint" texts that we have been distributing are faulty in the non-poetic portions, in lines containing 80 ASCII characters or more. Books in poetic format such as Psalms and Proverbs were not affected (nor is this problem present with the Hebrew Bible or the NT materials). The problem occurred in downloading from the larger computer system to the IBM XT, due to our inexperience. If you have a copy of this material dated prior to July 1986, please return the diskettes and the texts will be corrected and returned to you at no extra cost. We apologize for the inconvenience. 
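Since the fault described above is confined to lines of 80 or more characters, owners of suspect diskettes can at least locate the lines that need checking. A minimal sketch (the filename is illustrative; this flags candidate lines for inspection, it does not itself detect corruption):

```python
# List the lines of a text file that reach the 80-character threshold
# mentioned above, so a suspect copy can be inspected line by line.
def long_lines(path, limit=80):
    """Return (line_number, text) pairs for lines of limit+ characters."""
    flagged = []
    with open(path) as f:
        for num, line in enumerate(f, start=1):
            text = line.rstrip("\n")
            if len(text) >= limit:
                flagged.append((num, text))
    return flagged

# usage (hypothetical local file of a non-poetic book):
# for num, text in long_lines("genesis.lxx"):
#     print(num, text[:40])
```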
We are pleased to report that CCAT is moving ahead with all deliberate speed in the preparation of a CD ROM for release this fall, to complement the anticipated release of the Thesaurus Linguae Graecae materials in the same format. This project has received generous support from the David and Lucile Packard Foundation and from the SBL Research and Publications Committee, which will permit us to make the laser disks available at nominal cost -- probably less than $50 each. Why would anyone want such a disk? First, it will hold up to 540 million bytes of information. That is roughly equivalent to 1500 double sided, double density IBM PC diskettes -- a LOT of material. All of the TLG Greek literary texts through the 6th century of the common era will fit on one such disk. The CD ROM that CCAT is producing will have at its core the various biblical and related materials being used and produced by the Computer Assisted Tools for Septuagint Studies Project -- texts coded in the various languages, including English translations, morphological analyses, dictionaries and lists. It will also have a wide selection of materials pertinent to other areas of study and research from all periods of history, especially Latin classical and religious texts but also non-literary Greek papyri and inscriptions, some Coptic materials, Armenian and even Sanskrit, not to mention more medieval and modern languages. We have received excellent cooperation from various scholarly projects, both within the classics and religion circles in which we normally move and from other sources; e.g. the Dante Project centered at Dartmouth and Princeton and the Old English Dictionary Project at Toronto, among others. We have also been given access to some of the materials from the Repositories/Archives of the American Philological Association and of the Oxford (England) Computing Center, as well as to the IBYCUS System collections. How can you take advantage of this wealth of electronic material? 
First, you will need a CD ROM reader and controller (interface) to your computer. Those items may cost between $700 and $1300, depending on various factors. Equally important, you will need appropriate software to access the CD ROM from your particular machine. This is not a trivial matter, but CCAT is developing software for the IBM PC family first, and will expand its activities as needed to other machines; other centers are also active in this regard -- e.g., the University of California at Santa Cruz is working on programs to access the TLG laser disk from the Apple Macintosh as well. Since an experimental TLG CD ROM has been available for more than a year on the IBYCUS Scholarly Personal Computer, using a Sony CD ROM reader, we have attempted to emulate its operation on the IBM, and are preparing our laser disk in the same format. An excellent side effect of the production of the laser disk libraries is that it will place into the public domain materials that were difficult to obtain in the past. Such material can be downloaded to other storage media (diskettes, tapes) and circulated independently of the laser disk, if the proprietors of the materials permit this. User agreements will be required to insure that the materials are not misused, whether a person obtains the entire laser disk or simply a smaller portion therefrom. Thus this development should benefit all users of these sorts of computerized data. I am also pleased to report that response to the plea made in OFFLINE 8 for some of you to help in the process of transferring data to non IBM diskette formats has been positive. By the time this column appears, the biblical and other materials that we have been distributing should be available for Apple II, Macintosh, and CP/M (Kaypro) users at least, and perhaps others as well. This is both an important and a time-consuming enterprise, and thanks are in order to the several volunteers. 
If possible, we will try to have copies in various formats available at the annual meetings in November in Atlanta.

---------------
<< O F F L I N E 10 >>
by Robert A. Kraft
[dateline 24 Sept 1986]
RSNews 1/6 (November 1986)
---------------

Competition for your time at annual meetings is fierce, and it is difficult for even an experienced user of the program guide to spot all the sessions that might be of special interest. For persons who wish to keep in touch with the rapidly developing world of computers in scholarship, the following information should be noted. Throughout the conference, special exhibits, advice from users groups, and individual consultation will be coordinated once again by Jacqueline Pastis (Center for Computer Analysis of Texts = CCAT, U. Pa.) on behalf of SBL/AAR and the Computer Assisted Research Group. The designated room is [??? get name from Sherrie Hollis] and it will be open as much of the time as possible. Hours for special exhibitions will be posted in advance. Stop by regularly. Bring your programs and data to display, and diskettes on which to copy materials. On Sunday morning, 9:00 to 11:15 (M-Sidney), the Computer Assisted Research Group is sponsoring three presentations from experts in the use of computer graphics for archaeological and textual research (Program, p. 57). Hopefully, the program conflicts will not keep at least some archaeologists from seeing and hearing the illustrated presentation by the Badlers on 3D computer applications, and some textual scholars from attending the discussions of raster graphics and "laser disk" developments. Also on Sunday, from 11:30 am to 1:00 pm (M-Sidney), the now traditional rapid sequence of brief reports from the various centers, projects, etc., is scheduled, including (but not necessarily limited to): HumaNet Electronic Bulletinboard (W. Adler, North Carolina State); Photogrammetry Project (R. Saley, Harvard University); Comprehensive Aramaic Lexicon (S. 
Kaufman, Hebrew Union College-JIR); Armenian Data Base (M. Stone, Hebrew University, Jerusalem); The Syriac Peshitta Project (J. Cook, U. Stellenbosch, S. Africa); Syriac NT Text Project (J. Wise, The Way International); CATSS-Hebrew Bible Text/Morph (A. Groves, Westminster Theol. Sem.); Hebrew Bible Variants Database (M. Fox, Univ. of Wisconsin/Madison); Septuagint/CATSS Project at CCAT (F. Knobloch, U. Pennsylvania/Hebrew U.); GRAMCORD Greek NT Project (D. A. Carson/P. A. Miller, Trin. Evang. Div. Sch.); Centre: Informatique et Bible (R.-F. Poswick, Maredsous, Belgium); CALIS-CALICO-COLOE Projects (J. W. Gillette, Duke University); The Computer Bible Series (J. A. Baird, Wooster College); LXX glossaries (G. Chamberlain, Dubuque); foreign fonts (J. Erbes, Andrews University); Hebrew text/morph (R. Whitaker, Pella, IA); and John J. Hughes (Whitefish, MT).

On Monday afternoon, 4:00 to 6:00 (M-Riviera), the editor of OFFLINE will lead a lecture-discussion on "The Role of the Computer in the Future of Biblical Studies." This should provide opportunity for significant discussion of how the computer can, and will, and already does, reshape our research habits, and what the pros and cons of such influences may be. OFFLINE 9 attempted to introduce you to the new world of "laser disk" (CD ROM) data distribution for textual research. This has led to a rash of inquiries about the detailed contents of the promised CCAT CD ROM, and the availability of these texts in other formats. We hope that by the time this column appears, the first CCAT laser disk will be available, or at least that the materials will have been sent off for "mastering" and reproduction. The TLG experimental CD ROM (mainly Greek texts), at least, will be exhibited on the micro IBYCUS system at the special computer display in Atlanta. 
In preparing the CCAT CD ROM, much time has had to go into identifying sources of desirable data (material relatively free of "typographical" errors), determining whether the "owners" and controllers of such materials will permit them to be circulated on the CD ROM (and whether any copyright restrictions may exist concerning the printed editions from which the data derives), gathering the material in forms that we can transfer to our equipment, determining what coding and formats are used in each data file, and transforming the data so that its coding and "locators" (book, chapter, verse, line, etc.) are consistent with other materials on the same laser disk, and with other CD ROMs produced for the same computer public (e.g. the TLG Greek disk). The data could be passed along to you in the variety of forms in which we receive it, but then you would need a wide variety of different programs to manipulate the diverse materials effectively. A detailed list of the materials on the CCAT laser disk can be obtained by writing OFFLINE. Once the CD ROM is available, many of its individual texts can also be extracted for specific study and manipulation on various types of equipment by means of standard software options. Thus a wide variety of users ultimately will profit from this powerful technological development, even without owning an actual CD ROM reader. In the near future, scholars with portable computers equipped with laser disk readers (etc.) will not be unusual. Meanwhile, some of us have begun to take advantage of the less spectacular convenience of being able to keep records, take notes, write reports, etc., on portable machines when we travel. Jack Abercrombie has just begun to experiment with the IBM Convertible (portable) and finds that his PENNWRITE system runs on it without modification, as do a variety of other programs for textual analysis that are used regularly at CCAT. 
With such equipment, one can now carry a full repertoire of character sets (Hebrew, Greek, Arabic, Sanskrit, etc.) for display on screen and printer, for editing, etc. The machine, which includes two built-in 3 1/2 inch disk drives, is not inexpensive ($2000 retail), but can provide an enviable amount of versatility with minimal additional investment. We will also show it off at Atlanta, along with various other recent developments to which we have access.

//end #10//
----------------------
<< O F F L I N E 11 >>
by Robert A. Kraft
[dateline 15 Dec 1986]
RSNews 2/1 (January 1987)
----------------------

From OFFLINE's perspective, the computer aspects of the Atlanta SBL/AAR/ASOR meetings were a huge success. Even apart from the 3-D rock video tape used as an illustration by the Badlers, the Computer Assisted Research Group (CARG) program was interesting and informative, with its scheduled presentations and the brief reports on various projects and activities. The special computer exhibits and user group consultations coordinated by Jackie Pastis (with major assistance from Jay Treat, Dale Brueggemann and Todd Kraft) were very well attended at all hours -- we even heard that there were some mild complaints when those exhibit rooms were closed for the CARG session! And many of you turned out for the discussion of "The Role of the Computer in the Future of Biblical Studies," despite the fact that at the same time Morton Smith was speaking elsewhere on another kind of magic. (Sorry, mine was an informal presentation so there are no printed copies for circulation.) Thanks to everyone involved, whether staff, exhibitor, or participant! As an indication of the traffic through the special exhibit rooms, about 130 persons took the time to fill out User Group Registration forms. 
Although we have not yet had time to integrate this new information into our existing lists, and thus may find some overlap, it is instructive to note the types of equipment being used by the most recent registrants. The single most prominent brand name was Kaypro (35, plus about 10 more with other CP/M based machines), although the total of IBM (26) and IBM compatibles (45) made (PC)DOS the system leader by a wide margin. On the Apple side, 19 Macintosh users registered, along with 8 who have Apple IIs. I was personally disappointed to find only one each for the Commodore Amiga and the Atari ST, which are excellent, inexpensive machines that deserve more attention from scholars. (If you are worried about the mathematics in the statistics just presented, several registrants claimed access to two or more machines of different types.) For printers, Epson (25), Toshiba (13) and Okidata (11) were most prominent among the independents (i.e. excluding IBMs and Apples). The Kaypro users were especially fortunate to find that the new coordinator of that group, Dale Brueggemann of Westminster Seminary and of the CCAT/CATSS staff, had ported the biblical texts onto Kaypro diskettes for users to copy at the meetings. Jay Treat of the CCAT staff also assisted and advised Macintosh users on obtaining copies of these texts. Unfortunately, the planned Apple II users station had to be cancelled at the last hour, but support for that group continues and texts are gradually becoming available. Kaypro users may also be interested to note that a Kaypro diskette of John Abercrombie's Computer Programs for Literary Analysis is now available ($26.50 for diskette alone; $14.25 for book alone) from the publisher, University of Pennsylvania Press, as well as a Turbo Pascal version for the IBM PC. A future issue of OFFLINE will provide an updated list of addresses for the various users groups. 
It was heartening to note that of the 130 registrations filed in Atlanta, 30 persons expressed willingness to help coordinate groups. If there is sufficient response, new groups will be started for the IBYCUS SC, the Commodore Amiga and the Atari ST series. Please let us know if this is relevant to you. A new publication called BBR has been launched by John J. Hughes (Whitefish, Montana 59937), to provide "reviews & news of computer products & resources for the humanities." The first issue appeared in late October with detailed, favorable treatments of the IBYCUS SC and of the Nota Bene academic word processing program, and received wide distribution in Atlanta. Sample copies may still be available from the BBR editor or from OFFLINE. In addition to the reviews, BBR contains brief notices of new software and hardware products, grant awards and fellowships, and of coming events. Full subscription ($55) to the 9 issues per year entitles the subscriber to one hour of telephone consultation as well. This is an initial downpayment on the extensive research that John Hughes has been doing on these subjects in the past three years. Judging from the first issue of BBR, it will significantly overlap another important publication on humanistic computing, SCOPE (Paradigm Press, PO Box 1057, Osprey, FL 33559-9990). While SCOPE has a much wider range of brief notes, along with a few longer reports, on various aspects of computing relevant to humanists, it has not attempted the sort of in-depth reviews found in the present BBR. At least for the moment, there seems to be ample justification for both publications, although there is a real danger that proliferation of such independent publications may not always foster the sort of focused dissemination of information that would best serve humanists. Not too many individuals will be able to justify the expense of subscribing to both SCOPE and BBR. 
Review journals and newsletters are crucial if individuals hope to keep up on relevant happenings in their fields of interest. But until computer items find their appropriate place in the existing "traditional" scholarly media, columns such as OFFLINE will be needed to review the special new computer review organs (such as SCOPE and BBR) that keep appearing. For example, the Fall 1986 issue of the newsletter SCHOLARLY COMMUNICATION (ACLS, 1717 Mass Ave NW, Suite 401, Washington DC 20036) contains an evaluation of graphics packages for microcomputers, basic information about "optical disks," and a summary report about BITNET, among other valuable notes. I found the following quotations from the graphics article, by H. Wainer and D. Thissen to be especially entertaining as well as apropos: "The [IBM] PC has many faults in addition to weak graphics, but software developers have found it worthwhile, given the popularity of the hardware, to labor heroically to circumvent these faults" (14); and, "Nowadays trying to get an up-to-date review of software or hardware is like trying to shovel the walk while it is still snowing" (15). So true! By now, readers of OFFLINE have become familiar with the term "CD-ROM" (with or without the hyphen). Well, there is a monthly newsletter called "CD DATA REPORT," now in its third year of publication (Langley Publications, 1350 Beverly Road, McLean, VA 22101), that will include an article on CD-ROMs for Scholarly Research in a forthcoming issue. This will cover not only the laser disks compatible with the IBYCUS SC (produced by TLG and CCAT), but also that used in the Brown-Harvard "Isocrates Project" for a UNIX environment (TLG version B), and the relevant ambitions of the proposed Harvard "Perseus Project." //end #11// ---------------------- << O F F L I N E 12 >> by Robert A. 
Kraft [dateline 15 Dec 1986] RSNews 2/2 (Mar 1987) ---------------------- To adapt a famous NT passage and apply it to the "external services" section of the Center for Computer Analysis of Texts (CCAT) at the University of Pennsylvania, "(almost) all things are possible, but not everything can be done at once." CCAT has no regular full-time staff or fixed budget, so we cannot always answer your requests as promptly as we all would like. Teaching, studies, research commitments, and other internal University needs usually must take precedence. For the past few months our priority has been to produce a large collection of consistently coded materials on CD-ROM. Why? We think that in the short run as well as in the long run, this is a more effective use of the available time and personnel in the service of textually oriented scholarship. Once the materials are on the laser disk in such a form that separate programs do not need to be written to manipulate each of the data files, problems of transfer to diskettes, storage, and distribution should become much more manageable. For those of you who want large bodies of material on hand, costs will also drop dramatically. Thus we have not been investing large amounts of time and energy in transporting materials to micro diskettes for distribution, with the exception of the Hebrew and Greek (LXX, NT) biblical texts, and now the RSV. We have also prepared a general diskette containing samples of various other types of tools that are available on tape from CCAT and will be included on the CD-ROM (parallel Hebrew-Greek Jewish scriptures, morphologically analyzed "LXX," Greek LXX text with variants). Those items can be obtained for IBM/DOS systems, Apple II and MAC, Kaypro/CPM, and Atari ST. Contact CCAT for details about ordering. 
Vast amounts of additional material, biblical and non-biblical, in a wide variety of coded languages, are on tape for eventual release, but it would be impractical to move them en masse to diskettes at this point. Your suggestions and requests are welcome, but we must also request your patience if your needs fall outside of our current priorities. An important way in which your wants and needs can be met is through strong and active user groups. The ideal is for each of the groups to establish its own lines of communication. Meanwhile, to help things along, OFFLINE will include appropriate notes to assist with the process. In the present column the non-IBM focus is on three machines built around the Motorola 68000 chip; another such machine, the Commodore AMIGA, will hopefully receive attention in a future column as its users become better organized. Several readers have asked for further information about the reference in OFFLINE 9 to development of programs for using the TLG CD-ROM on the Macintosh. As it turns out, this experimental project by David Meyers at Santa Cruz has been completed but has not produced any materials for distribution. CCAT remains committed to ensuring that such CD-ROM materials can be accessed from a variety of microcomputers, including the MAC. The 12/86 issue of "Penn Printout," from the University's Computing Resource Center, features a discussion of Macintosh System Software, including notes on how to use downloadable fonts with Microsoft Word and/or with WordPerfect on the HP LaserJet Plus printer. This follows the feature in the 11/86 issue reviewing the HP LaserJet in general. Don't forget to include a self-addressed label or large envelope if you request copies from OFFLINE. On the foreign font front, you may want to update yourself on new developments from Linguists' Software (106R Highland St., South Hamilton, MA 01982; tel. 617-468-3037), with their announcement of a NT Greek text and font package, and LaserWriter capabilities. 
Similarly, SMK (5760 Blackstone Ave, Chicago, IL 60637) has released a Greek Attika font for the LaserWriter at $40, in addition to its GreekKeys font for the ImageWriter. In a lighter vein, but also of considerable interest, is a recent book by Michael Green, Zen and the Art of the Macintosh (Running Press, 125 South 22nd St., Philadelphia, PA 19103). Apart from any religio-philosophical aspects of this work, the examples of how Macintosh graphics can be put to a variety of imaginative uses are outstanding. Green takes the reader step by step through the processes of creative Macintosh art. There has been significant activity in the direction of establishing an Atari Users Group, thanks to the willingness of Douglas E. Oakman (3240 Homestead, #3, Santa Clara, CA 95051; tel. 408-244-2578) to help coordinate it, and to the technical and organizational efforts of Drew Haninger (7717 Corliss Ave N., Seattle, WA 98103; tel. 206-643-0300). The prospects that suitable software for textual research in various languages will be available for this powerful and inexpensive machine have been increased recently by the efforts of Haninger and others working with Power Systems in Redmond, WA, who have recently released "PowerWriter," advertised as a multi-lingual word processor with 10 language fonts (including Hebrew, Arabic, Greek, and Cyrillic) for $29.95. Further information is available from APEX Resources (1-800-343-7535). CCAT is cooperating with Haninger in exploring the development of CD-ROM capability for the Atari. Several purchasers of the new micro-IBYCUS Scholarly Computer have contacted CCAT to transfer files from IBM diskettes to IBYCUS SC diskettes, and we have been able to accommodate such requests. For the present, however, the process must use the older IBYCUS mini system as an intermediary -- IBM to mini-IBYCUS to micro-IBYCUS. 
The IBYTALK transfer program will be rewritten to work directly between IBM and micro-IBYCUS and will be made available through CCAT, but this will probably not take place until summer 1987. CCAT is also considering the release of a program utilities diskette for micro-IBYCUS users, if there is sufficient interest. Various programs for searching specific columns (e.g. in a morphological analysis file), removing blank lines and/or blank spaces at the left, inserting a left margin, checking for illegitimate English or Latin spelling, setting up and using address lists, doing mathematical calculations, and many others, are all available on the parent IBYCUS system and can be ported to the micro with relative ease. Your suggestions would be appreciated. Oxford University Press is becoming more and more active in "electronic publishing," and has announced a micro version of the widely used Oxford Concordance Program (OCP) for IBM XT/AT type machines with 512K memory and hard disk, for about $450. OCP is a general purpose text analysis program that makes word counts, indices, and concordances from texts in any (coded) language, and displays the results on screen and/or in hard copy. Contact Anne Yates, OUP, Walton St., Oxford OX2 6DP, England. "LBase" is described in the manual as "a programmable tool for linguistic analysis of texts" that "allows the user to design and produce a database for sorting and compiling language data." It is written to work with various files distributed by CCAT (LXX and NT in TLG format, BHS, morphologically analyzed LXX, parallel text), as well as with others. OFFLINE will try to review this package in a later issue. Meanwhile, details can be obtained from John Baima at Eisenbrauns (POB 275, Winona Lake, Indiana 46590; tel. 219-269-2011); a Demo disk costs $10. "Same product, same company, new name!" 
Thus does "Multi-Lingual Scribe" announce its change of trademark to "Multi-Lingual Scholar," and its support of various high resolution printers such as the 24-pin NEC, Toshiba, and Epson models, the HP LaserJet Plus (see also above under Macintosh for the "Penn Printout" review) and similar lasers, and various "quad density" models. If you have an older Demo Disk, it can be updated at no charge by returning it to Gamma Productions Inc., 710 Wilshire Blvd, Suite 609, Santa Monica, CA 90401 (tel. 213-394-8622). Gamma also offers to reproduce custom fonts of your choice, using the Canon IX-12 scanner. CCAT attempts to cooperate with such developers with a view to ensuring that their programs will operate directly on the texts available through TLG and CCAT. Otherwise, the TLG and CCAT CD-ROM materials will have to be put through various transformations before the vendor's display and print programs can work properly. For example, in preparation for including the Claremont Institute for Antiquity and Christianity computer form of the Nag Hammadi texts on the CCAT CD-ROM, along with samples of the Coptic Sahidic Bible, an attempt has been made to rationalize the older coding that had been in use. For the letters and symbols that are identical between Coptic and Greek, the established TLG upper case coding (see OFFLINE 1) will be retained. For the unique Coptic characters, lower case transliterations will be used: f (fai), g (chima), h (hori), j (janja), s (shai), t (ti), and x (the chi-rho combination). At least for the present, the supralinear stroke is represented by a hyphen following the letter over which it occurs. Certain other characters found in Coptic but not Greek texts are under discussion. Readers' comments and suggestions are welcome. The KJV is available from various sources, often accessible only through a software package, and will be on the CCAT CD-ROM. Obviously it is of interest to various users, for various reasons. 
What we have discovered is that (1) there are significant differences, especially in spelling, in the two KJV computer forms we have been able to obtain, and (2) we were unable to find a computer copy of the KJV Apocrypha. It has occurred to us that some students of English literature may wish to use the computer to assess the influence of KJV on various authors, and that the Apocrypha may be important here, so we have scanned the Apocryphal books into computer form and are in the process of verifying them. With regard to the variants (e.g. publick/public, kindenesse/kindness), we need to know the extent to which this information may be important for users. It would be possible to set up the text with alternative forms noted, and to devise programs that could find the alternative phraseology. But we would like to know the extent to which this should be considered a priority task. Perhaps it would be a good project for which one of you might seek funding. We would be willing to provide advice and assistance. The following list represents selected materials of interest that have arrived in the OFFLINE mailbox. Mere listing does not constitute endorsement. See OFFLINE 7, 8, and 11 for earlier listings (e.g. Maredsous Centres de Traitement Automatique de la Bible, SCOPE, ACLS Scholarly Communication, Collegiate Microcomputer Quarterly, HumaNet, BITNET Online Bulletin Board, CD Data Report, Bits and Bytes Review). Computers and the Humanities (CHum), now in its 21st year. An international journal that includes articles and reviews (books, software, hardware). Important, at very least, for institutional libraries. Paradigm Press, Inc., PO Box 1057, Osprey FL 33559. CHum is the journal of the Association for Computers and the Humanities, which also publishes a separate ACH Newsletter. Membership applications go to Harry Lincoln, Music Dept., SUNY Binghamton, NY 13901. SCOPE also developed from this context. 
Literary and Linguistic Computing (LLC), begun in 1986 by the 14-year-old Association for LLC. International, with articles, reports, notes, reviews. Importance similar to CHum. Oxford Press, Walton Street, Oxford OX2 6DP, England. Christian Computer News (CCN), entering its 5th year, is published by the Christian Computer Users Association at 1145 Alexander SE, Grand Rapids, MI 49507 (tel. 616-241-0368). "Popular" and theologically "conservative" in orientation. Hebrew Users' Group (HUG), 2736 Bancroft Way, Berkeley, CA 94704 (tel. 415-845-7793), newsletter for $7.50 per year ($10 overseas). Entering its 4th year, with letters, reviews, etc. Northeast Association for Computing in the Humanities (NEACH), meets monthly in NY City and issues occasional newsletters. F. Woodbridge Wilson, Pierpont Morgan Library, 29 East 33rd St., NYC 10016 (tel. 212-685-0008). Duke Humanities Bulletin Board, for dial-up access at off-peak telephone rate times (1200 baud, 8 bits, no parity), no special charges. Software, information, reviews, discussion. Humanities Computing Facility, 104 Languages Bldg, Duke University, Durham, NC 27706 (tel. 919-684-3637). American and French Research on the Treasury of the French Language (ARTFL), a data bank accessible by Telenet, WATS, or regular telephone, by subscription. Department of Romance Languages and Literatures, Univ. of Chicago, 1050 East 59th St., Chicago, IL 60637 (tel. 312-962-8488). ----------------------- << O F F L I N E 13 >> by Robert A. Kraft [dateline 3/87] RSNews 2/3 (May 1987) ----------------------- The Center for Computer Analysis of Texts (CCAT) at the University of Pennsylvania has distributed hundreds of copies of Hebrew, Greek, and English biblical materials on microcomputer diskettes since it began this service early in 1986. The non-English materials are in standard (ASCII) transliteration codes and can be displayed (in transliteration) on any suitable computer, where they can be searched, sorted, concorded, etc. 
They do not automatically display in the respective Hebrew or Greek fonts, but various programs are available to transform them into Hebrew and Greek on many screens and/or on appropriate printers. OFFLINE will continue to attempt to provide information on such software. Because the texts distributed by CCAT may include features of little interest to some users (e.g. Hebrew cantillation) or may be formatted in an inconvenient manner for much of the available software (e.g. the way chapter and verse are indicated), programs to permit selection and/or reformatting are also provided with the data. Included in the "User Agreement" signed by those who receive CCAT texts is the responsibility to report problems and errors. Earlier OFFLINE columns have made reference to such corrections/problems. Our attention has now been drawn to a few more: (1) The "WhatHeb" program distributed prior to 3/20/87 always had to have the last word -- that is, it failed to process the final line of any text it was manipulating. This has now been corrected, and if you send a diskette to CCAT in an appropriate mailer with a return address label, we will return a corrected version to you gratis. (2) RSV texts distributed prior to 2/18/87 contain a few problems that obstruct correct operation of the "Convert" program, but these can be corrected by the user as follows: The initial "identification" lines for the books of Hosea, Ezekiel, 4 Ezra, Prayer of Azariah, and 1 John should read respectively: ~c"Hos"x1, ~c"Ezek"x1, ~c"4Ezra"x1, ~c"PrAzar"y1, ~c"1John"x1. (3) In addition, the same RSV texts need correction in the following places: Hosea, at the end of the first line of text, omit "in"; Obadiah, on the first text line, omit "as among the shepherds of Tekoa" (Don't ask! We know how it happened, and it adds a new category for inclusion in future textual criticism manuals -- "error of overwritten screen display"). 
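The column gives "identification" lines such as ~c"Hos"x1 and ~c"PrAzar"y1 without documenting their layout, so the following sketch only infers a pattern from those five examples: a ~c prefix, a quoted book abbreviation, a one-letter flag, and a chapter number. It is a hypothetical illustration of how such locator lines might be checked mechanically, not a description of CCAT's actual Convert program.

```python
import re

# Inferred (not documented) pattern for the RSV identification lines
# quoted above: ~c, a quoted book abbreviation, a one-letter flag
# (x or y in the examples), and a chapter number.
ID_LINE = re.compile(r'^~c"(?P<book>[^"]+)"(?P<flag>[a-z])(?P<chapter>\d+)$')

def parse_id(line):
    """Split an identification line into (book, flag, chapter),
    or return None if the line does not match the inferred pattern."""
    m = ID_LINE.match(line)
    if m is None:
        return None
    return m.group("book"), m.group("flag"), int(m.group("chapter"))

print(parse_id('~c"Hos"x1'))     # ('Hos', 'x', 1)
print(parse_id('~c"PrAzar"y1'))  # ('PrAzar', 'y', 1)
```

A file-checking utility built on such a parser could flag any identification line that fails to match, which is exactly the sort of defect the corrections above address.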
One of the major purposes of OFFLINE is to keep you in touch with what is going on in computer assisted research and related fields. This is an impossible task since changes take place so rapidly, but that is not sufficient reason to abandon the attempt. One approach that helps me keep a relatively clear conscience about this matter is "information by cross-reference"; that is, rather than attempting to provide details about each new development or direction, I make reference to an available source that contains such details. Of course, this may not help you at all since you may not have ready access to the source, but at least I feel better about the situation! A new information source that recently arrived on my desk is the ONTARIO HUMANITIES COMPUTING newsletter published by the Centre for Computing in the Humanities (CCH) at the University of Toronto. Although the newsletter does not tell exactly who the "editor" may be, it is the product of the CCH office directed by Ian Lancashire with assistance from Willard McCarty and Patricia Philip, and the mysterious editor may be reached at Robarts Library, 14th Floor, 130 St. George Street, Toronto, Ontario M5S 1A5 (BITNET to PHILIP@UTOREPAS). The first issue (January 1987) announces the formation of the Ontario Consortium for Computers and the Humanities (OCCH) -- a useful model for cooperation that hopefully will be expanded or replicated to include other institutions -- and provides news from the various participants. Especially interesting to OFFLINE readers will be the announced intention by Toronto's CCH to assist researchers "to develop public-domain software, research papers, and documentation ... [which] will be made available to all levels of education, government and industry" (p.4). 
In this context, the newsletter contains notes on the Microcomputer Text Analysis System (MTAS) developed by Ian Lancashire and Ludo Presutti "for obtaining frequency counts, making pattern searches, and graphing word-distributions and densities in multilingual texts" (Turbo Pascal for MS-DOS machines). Another MS-DOS "free" program package available through the Toronto CCH is called "T(ext) Crunchers," which includes a spelling checker, outline processor, style analyzers, a browser, and a key-word-in-context (KWIC) concordance program. The handling cost is $20 on diskette, plus $5 for printed documentation, or free to electronic network users (MCCARTY@UTOREPAS). Another exciting recent development is the arrival by electronic mail (e-mail) of the updated catalogue of electronic texts available through the Oxford University Computer Centre TEXT ARCHIVE. Printed catalogues have been available for several years, but it has not been easy to know how current the information at hand might be. The e-mail form makes it possible to be very up-to-date and to integrate other materials into this extensive and conveniently organized base (organized by language). Contact ARCHIVE at VAX.OX.AC.UK from BITNET. With reference to CD-ROM production and use, Linda Helgerson's article on various relevant projects in humanistic research that was announced in OFFLINE 11 has now appeared in the December 1986 issue of CD Data Report. 
In addition to the TLG and CCAT CD-ROM projects that are treated separately below, this article describes: (1) the Isocrates project directed by Paul Kahn at Brown's Institute for Research in Information and Scholarship (IRIS), which uses an indexed form of the TLG laser disk for UNIX and MS-DOS environments; (2) Harvard's Perseus project, directed by Gregory Crane, that plans to integrate classical texts with images for the teaching of ancient history, among other things; and (3) the new Packard Humanities Institute (PHI) that has been established in Los Altos, CA, with Stephen Waite as the Director and a full-time research staff to deal with various projects pertinent to scholarly work on textual materials, including CD-ROM production and use. Orders are now being taken for the long awaited (and long in production!) CCAT CD-ROM, prepared with funding from the David and Lucile Packard Foundation and from SBL, which includes (1) a wide selection of texts and tools for biblical studies in a variety of languages (Hebrew, Aramaic, Syriac, Greek, Coptic, Latin, Armenian, English), (2) various non-literary Greek papyri and inscriptions, (3) a large corpus of Latin classical and post-classical literature, much of it morphologically analyzed, and (4) a miscellany of other materials in various other languages (e.g. Arabic, Sanskrit, French, Danish, Italian). This is intended as an especially important product for students of biblical and related literatures and languages, as well as a "sampler" for other interests and for software developers. The CD-ROM itself will cost $50, prepaid, from CCAT at the OFFLINE address (inclusion of a self-addressed label would be appreciated). A signed "User Declaration/Agreement" must also accompany each order. The CCAT CD-ROM will be fully compatible with the TLG CD-ROM scheduled to appear around the same time, and can be accessed immediately from IBYCUS SC machines. 
Software for accessing these CD-ROMs from an IBM-type machine with a Sony CD-ROM reader and interface card is also being produced by CCAT, and software for other widely used microcomputers is planned. Please write or call for further details. The TLG Newsletter 11 (January 1987) outlines the dissemination and pricing policies for the TLG CD-ROM, among other things. The model being used by TLG is of a subscription (rental) service with an annual fee ($100 for institutions, $60 for individuals). The CD-ROM remains the property of TLG and must be returned if the agreement is terminated. While CCAT does not plan to follow such a model, we do understand the delicate issues involved and will have great interest in how it works out in practice. For an overview of TLG, its history and present outlook, see the Winter 1987 issue of Scholarly Communication (#7, from the ACLS), which features an article by Theodore F. Brunner on "Data banks for the humanities: Learning from Thesaurus Linguae Graecae." The treatment of legal and economic issues will be especially instructive in connection with the TLG policies for CD-ROM dissemination. //end #13// ----------------------- << O F F L I N E 14 >> by Robert A. Kraft [dateline 19 June 1987] RSNews 2/4 (August 1987) ----------------------- <"UPDATING" CCAT TEXTS> Quality control is a demanding mistress. New generations of textual variants are being created in electronic form (that's progress!). Many of them are "non-substantive" -- misplaced line feeds or carriage returns, superfluous blank spaces, unexplained hidden codes -- although such items can sometimes cause software to misfire. Others are of a more traditional sort. As an example of the latter category, copies of the CCAT RSV issued prior to 15 June 1987 need correction at Ps 89.26: "Thou art my Father" (the text mistakenly reads "ou" for "Thou"). 
In the former category, various minor inconsistencies and nuisance readings continue to be brought to our attention in the Hebrew text, along with an occasional more serious error. We will continue to issue lists of corrections, and disk updates, as appropriate, for a nominal charge. Why should CCAT charge anything to update these materials? It is a fair question, and it deserves a straightforward answer. The reason is that these efforts require someone's time, often in largish chunks, and usually the "someone" needs to be compensated. People still do donate time and put in extra effort without monetary reward, but it is difficult to build a stable operation on such circumstances, especially in the complex world of computers. It was our hope that funds to establish a firm administrative structure for CCAT, and thus reduce costs to users, would be forthcoming from either a granting agency or from a consortium arrangement, but this has not yet happened. Our most recent proposal to the NEH has been rejected, and funding sources in general tend to be more interested in specific projects (e.g. encoding the Coptic biblical texts; morphological analysis of the Hebrew scriptures) than in the broader support context within which such projects can operate most effectively. The universities, on the other hand, recognize the need for such support structures, but tend to lack the funds, or perhaps the courage/will, to invest significantly in the humanities in this manner. The alternatives are not attractive. A completely independent and self-supporting central service organization would be quite expensive; perhaps prohibitively so for many potential users. It would need its own equipment as well as its own staff and legal status. By piggy-backing on the University and its resources, some problems are avoided but others are created, especially the inefficiency fostered by the need to rely on voluntary good will efforts by interested but overly busy faculty and students. 
For the moment, the University approach is all that is available, while attempts to create a stable structure within that context continue. But it is a frustrating process, and it cannot go on in the present state indefinitely. Your comments and suggestions are most welcome, as is your understanding if the cost of services increases or the quality of service fluctuates. Behind this all lies a personal problem: I'm feeling frustrated, hemmed in, and restless to return more focally to my own research projects. We had hoped that the creation of active Users Groups would take much of the pressure off of the central coordinating effort, but it has not worked out that way. Despite the fact that there has often been good cooperation and enthusiastic leadership within the groups, the weight of needs and requests has tended to make it more difficult for the groups to act in relative independence, and has pushed them back closer to the central activities of CCAT. This is partly a problem of technology -- how can we get this text from here to there? Can that printer be used with this machine? But it is also a problem of communication -- who can answer what questions? Where does sufficient expertise exist to solve certain problems? And it is a problem of administration and funding -- who keeps address lists up to date? Who addresses the letters and pays the postage or the phone bills? Our experience is that it is simpler to try to coordinate these activities locally, if appropriate volunteers are available. This makes it easier to discuss problems and seek answers in a known context. Thus we make no apology for the fact that you can now reach both the Kaypro group (Dale Brueggemann) and the Apple Macintosh group (Jay Treat) through the CCAT address. Hopefully this will enhance their effectiveness. For Apple II users, Moise Silva at nearby Westminster Seminary has agreed to attempt to coordinate activities. 
And IBM types have for a long time communicated with Fred Putnam at another Philadelphia area school (Biblical Theological Seminary, Hatfield), or directly with CCAT. IBYCUS is also represented at CCAT, although some of the other machines with relatively small user bases are not; for example, inquiries about the inexpensive and versatile new Atari should be directed to Drew Haninger and/or Douglas Oakman on the west coast (addresses in OFFLINE 12). We are encouraging each user group to distribute its own data diskettes, in cooperation with CCAT and at prices consistent with CCAT policies. Once again, a special exhibit room for the user groups and for computer assisted research is being planned for the annual AAR/SBL/ASOR meetings in Boston, sponsored by the SBL Computer Assisted Research Group in conjunction with CCAT, with support from the societies. We really want to deal with your interests and needs, and encourage your suggestions and requests. If you have relevant data or software to exhibit, please let us know about it well in advance, including the equipment you would need, and we will attempt to make appropriate arrangements. Progress in CD-ROM and related technologies will be demonstrated, and there should be an explosion of available textual data. It is a good time for exchanging products as well as ideas, and for seeing what is going on among various users of the various systems. Victor Paul Furnish reports that he has found "The Macintosh Bible" (from Goldstein & Blair, Box 7635, Berkeley, CA 94707; $21) to be well worth the price. "Great hints and helps for people ... who have no special interest in knowing any more about computers than we need to know to get them to help us at the specific points where we need help!" 
Also on the Macintosh front, Philip Payne of Linguists' Software (106R Highland St, South Hamilton, MA 01982) has kept CCAT up to date on the development of Greek and Hebrew biblical data files to interface with the appropriate fonts for display and printing, and a conversion program for using TLG data, among other things. Similar developments have been taking place across the Atlantic as well; Conrad Gempf of Aberdeen has sent samples of his inexpensive Greek and Hebrew fonts and utilities for the Macintosh, on which he plans to report at the Boston AAR/SBL/ASOR meetings. For those of you who deal with Chinese, recent correspondence with Alan Beagley and Jamie Hubbard indicates a growing interest and some promising developments. If there is not already an existing group for computer assisted research in such material, this may be a good time to start one. Comments are solicited. James W. Marchand (Dept. of German, Univ. of Ill.) has published an interesting article on "The Use of the Personal Computer in the Humanities" (IDEAL 2, 1987, 17-32), describing how he has applied the new technology to the study of Gothic manuscript materials, including routines for graphic reproduction of actual manuscripts and scripts. This is a step in the direction of an exciting but underdeveloped aspect of computers and textual analysis, the use of computer techniques in paleography, papyrology, codicology, etc. Call for Papers: The Second International Conference on Bible and Computers (Methods, Tools, Results) has been announced for 5-13 June 1988 in Jerusalem, sponsored by the Association Internationale Bible et Informatique (c/o CIB-MAREDSOUS, B-5198 Denee, Belgium). It will follow the 15th annual meeting of the Association for Literary and Linguistic Computing (ALLC). Proposals (with abstract) are due by 15 October 1987. Further information is available from CCAT. Proceedings of the First Conference (1985), focusing on Text, can still be obtained from AIBI for $55. 
The magazine of the International Church Computer Users Network is published six times per year. For further information contact Ed Deane, ICCUN Treasurer, 6102 East Mockingbird Lane, Suite 370, Dallas, TX 75214. "The software that users want is probably out there somewhere although it is not always easy to find. ...A more expensive program is not necessarily a better program. A simple program that meets your needs and is easy to use is better than a more powerful program that has extra capabilities but will be more difficult to learn and run" ("Notes on Computers" in Scholarly Communication 4, Spring 1986, p. 9/2). //end #14// ----------------------- << O F F L I N E 15 >> by Robert A. Kraft [dateline 1 Sept 1987] RSNews 2/5 (November 1987) ----------------------- Once again, the annual SBL/AAR/ASOR meetings (Boston, 5-8 December) will include a number of segments directly relevant to computer assisted research. The regular SBL Computer Assisted Research Group (CARG) session is scheduled early in the program, on Saturday afternoon, and will include a panel discussion focusing on the range of computer tools available or desirable for "the complete scholarly workstation" -- including laser disks, scanners, printing strategies, searching/sorting software, electronic communication, text/word processing, and the like. CARG will also provide opportunity for brief reports from the projects, centers, etc., and will again have its own exhibit rooms to assist users groups and demonstrate various hardware and software developments. My term as chair of CARG ends with this meeting, so some thought must also be given to succession and the future of the group. Computer research will be in focus in at least one other Boston session that has come to my attention: AAR is sponsoring a plenary discussion on Monday morning (7 Dec.) entitled "The New Papyrus: Video Disk Technology and the Humanities," coordinated by Lewis Lancaster (Berkeley) and Andrew Scrimgeour (Regis College). 
A new term being bandied about in various computer circles is "hypertext." Not to worry. This is a special extension of such older phrases as "data base management," with particular focus on textual data. The idea is to correlate and coordinate various aspects of text-based computer investigation in such a way as to permit easy access between discrete groupings of data, such as pictures or bibliography or cross-references related to an encyclopedia article. Thus in perusing the article, one could call up related information without needing to exit one program and enter another -- really a highly desirable software development that allows the user to move around relatively freely in a related body of data. The David Sarnoff Research Center at Princeton has produced an experimental CD-ROM package for the IBM-AT to illustrate the inclusion of motion video and sound data with the textual, to mention only one striking example of current directions of development. Similarly, the University of Maryland in cooperation with Cognetics Corporation in Princeton has announced the availability of "Hyperties," a program package to enable the user easily to "traverse a database of articles and pictures by merely pointing at highlighted words in context." One immediately appealing application of "hypertext" techniques is to such materials as those created and/or collected by the Computer Assisted Tools for Septuagint Studies (CATSS) project and the Center for Computer Analysis of Texts (CCAT), many of which are on the new CCAT CD-ROM. The researcher could, for example, move from a particular English RSV biblical passage to the Hebrew and/or Greek texts, to lexical discussions and morphological analyses of both Greek and Hebrew, to other relevant ancient and modern translations, to information on textual variations as well as on modern discussions of the text. Clever uses of windows and menus would assist the process. 
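In the most rudimentary terms, the cross-linking just described is a table keyed by passage, with each entry pointing to related bodies of data. The following minimal sketch illustrates the idea only; the passage, the resource names, and all of the sample data are invented for the illustration, and real hypertext systems of course add indexing, display, and navigation on top of such a structure.

```python
# Hypothetical sketch of hypertext-style cross-linking: a passage is
# linked to related resources (texts, analyses, notes) that can be
# "called up" without leaving the current context. All entries below
# are invented examples, not actual CCAT/CATSS data.

LINKS = {
    "Gen 1:1": {
        "hebrew": "B.:R)$IYT ...",      # transliterated Hebrew (sample)
        "greek": "E)N A)RXH=| ...",     # transliterated LXX (sample)
        "rsv": "In the beginning ...",
        "notes": ["morphology", "variants"],
    },
}

def follow(passage, resource):
    """Return the resource linked to a passage, or None if no such
    link exists -- the "traversal" step of the hypertext idea."""
    return LINKS.get(passage, {}).get(resource)

print(follow("Gen 1:1", "rsv"))   # In the beginning ...
```

The point of hypertext software is precisely to make such traversals available everywhere in the data, through highlighted words, windows, and menus rather than explicit lookups.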
Not all of the above pieces are yet in place -- the painstaking task of preparing relevant data continues to require attention, not to mention the programming -- but it will gradually happen, to our general benefit and delight. And in fields such as archaeology or art history, where pictorial and graphic representation is very important, the possibilities are even more visually exciting. It seems that the task of verifying the accuracy of computer data is endless. OFFLINE readers doubtless have tired by now of hearing that still more errors have been discovered in the CCAT texts, and the CCAT staff is completely worn out by it! The final stages of preparing the CCAT CD-ROM have brought these issues to a head, insofar as there is an electronic permanency to the CD-ROM in contrast to the state of flux with disk and tape distribution. Much time has been spent in comparing similar files for accuracy of text and of locator IDs (chapter, verse, etc.) -- e.g. RSV with KJV on number of verses in each chapter; LXX text with LXX Morphological analysis for text and locators; UBS2 Greek NT with UBS3 for differences and errors; all Pentateuchal texts in whatever language for accuracy of locators. The bottom line is that many changes have been made in some of the texts, ranging from issues of format (e.g. poetic structure in RSV, resolution of hyphenated words in LXX) to actual correction of textual or locator errors. The CD-ROM form now becomes the standard for subsequent correction or modification. Diskettes conforming to that form will begin to be issued at the Boston meetings. The least affected texts are the Hebrew Bible, the Greek NT (UBS2, from TLG) and the KJV; for most purposes, users of older forms of these texts will see little difference. RSV has experienced major transformations, especially in poetic sections. If you wish to update diskettes at Boston, come prepared and we will try to accommodate you. 
For many months now, CCAT "External Services" has been preoccupied with CD-ROM preparation. Now that the first CCAT CD-ROM is available, we hope to turn to related matters such as extending the library of individual texts available on diskette, extending the number of diskette formats supported, and providing appropriate utility programs for using the data. From the outset, CCAT diskettes have included a utility program to supply explicit chapter and verse locators to the highly coded data, to facilitate quick searching by means of, e.g., word-processing programs. We are now adding another useful program that is "in the public domain" (available freely for distribution), the exact origin of which is unknown to us. It is called LIST, and it permits the user to search and conveniently browse DOS files of any size. Many other similarly useful programs doubtless are "out there" and need to be collected, and perhaps documented, for effective use in research on textual data. Your suggestions regarding what is desirable as well as what is available in such utility software are urgently requested -- e.g. the ability to search specific columns in the morphological analysis files, to recreate individual manuscripts or editions from the files with textual variants, to reformat margins, etc. CCAT data diskettes are now being made available for the exciting and powerful Commodore Amiga machines, thanks to CCAT staff programmer Todd Kraft. If there is sufficient interest, we will attempt to port the main utilities and display programs as well. We have not yet made any plans to integrate use of the Amiga's built-in voice synthesizing feature, but it would certainly be an attractive option for use in elementary language instruction, among other things. In terms of costs, power and versatility, this is a hard machine to beat! 
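The locator-expansion utility mentioned above amounts to a simple transformation: compact chapter/verse markers in the file are turned into explicit references on every line of text, so that an ordinary word processor's search can find a passage directly. The sketch below illustrates the idea only; the input coding ("~c2" for chapter 2, "~v4" for verse 4) is invented for this example and is NOT the actual CCAT coding scheme.

```python
# Sketch of a locator-expansion utility: read compactly coded
# chapter/verse markers and prefix each text line with an explicit
# reference such as "Gen 1:2".
# The "~c"/"~v" marker format here is hypothetical.

def expand_locators(lines, book="Gen"):
    chapter = verse = 0
    out = []
    for line in lines:
        if line.startswith("~c"):          # chapter marker
            chapter = int(line[2:])
        elif line.startswith("~v"):        # verse marker
            verse = int(line[2:])
        else:                              # ordinary text line
            out.append(f"{book} {chapter}:{verse}\t{line}")
    return out

sample = ["~c1", "~v1", "In the beginning...", "~v2", "And the earth..."]
for row in expand_locators(sample):
    print(row)
```

Once every line carries its own locator, any program that can search plain text can answer "where does Gen 1:2 occur?" without understanding the original coding at all.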
CCAT has also acquired one of the new, relatively inexpensive "digitizing" scanners that reproduces materials in electronic form at up to 300 dots per inch resolution. We have begun experimenting with techniques to enhance the legibility of papyrus fragments and to standardize paleographical data to facilitate comparison, as well as with simple reproduction of charts, maps and pictures for integration with the textual data. Hopefully, some of this can be demonstrated in Boston. By then we should have acquired "character recognition" software to test the comparative quality of the "digitized" approach to text scanning vis-a-vis what we have been doing with the older technology of the Kurzweil Data Entry Machine (KDEM). OFFLINE is but the tip of the iceberg when it comes to finding out what is going on in the computer world. The mind boggles at the number of publications, columns, newsletters, etc., that are available. One of the best ways to keep up on it all, for humanists, has been the SCOPE digest from Paradigm Press, P.O. Box 1057, Osprey, FL 34229-1057 (tel 813 922-7666). Now a selection of the SCOPE information is available on PC-DOS (IBM type) diskette for only $5, with encouragement to copy and distribute the diskette. An index/menu permits quick access to pertinent bodies of information. Disc SCOPE will be on display at the CARG exhibits in Boston. Another type of information exchange is the new HUMANIST group coordinated by Willard McCarty at Toronto for persons on electronic networks. A query or comment sent to the HUMANIST BITNET address immediately goes out to the HUMANIST participants, any of whom might choose to respond directly to the originator, or more generally to the network. It is all without specific charge to the user, as a service of the Toronto Humanities Computing Centre, for those fortunate enough to be on BITNET or a compatible network. Contact MCCARTY@UTOREPAS. 
This is a valuable complement to the monthly BITNET ONLINE NOTES edited by John Abercrombie, director of CCAT. A further encouraging development on the information and access front is the appearance of computer data and programs in booksellers' catalogues. One of the first to catch my eye comes from Dove Books (Biblical and Near Eastern Books), 3165 West 12 Mile Road, Berkeley, MI 48072 (tel 313 547-9644), with a growing section on "Computer Products" that will be expanded into a catalogue of its own in the near future. Dove Books has also been active in developing such products in-house, as is true of some other publishers mentioned in earlier OFFLINE columns (e.g. Oxford University Press and Eisenbrauns). We hope to be able to display some of the products from such sources in Boston, including OUP's new micro-version of the Concordance Program, John Baima's LBase software, and the printer fonts developed under the rubric DoveFonts (e.g. Hebrew, paleo-Hebrew, Cuneiform), for use with the FancyFont program. In closing, a note on informational births and deaths. The Centre for Computing in the Humanities at the University of Toronto announces that it will sponsor a new periodical called THE HUMANITIES COMPUTING YEARBOOK, with Ian Lancashire and Willard McCarty as editors, scheduled for initial appearance in summer 1988 (to be published by Oxford University Press). On the other hand, ACLS has announced that its Office of Scholarly Communication and Technology, which opened in 1984 in Washington, is closing, thus ending the short but interesting and valuable life of the periodical issued by that office. Let's hope this is not an omen of how American Learned Societies in general view the new technologies! //end #15// ----------------------- << O F F L I N E 16 >> By Robert A. 
Kraft [dateline 12/87] [excerpted in HUMANIST January 1988] RSNews 3/1 (January 1988) CSSR Bulletin 17/1 (February 1988) ----------------------- [[The materials between brackets appeared only in the Bulletin of the Council of Societies for the Study of Religion (CSSR) 17/1, for reasons explained in those paragraphs.]] [[The OFFLINE Column on Computer Research in Religious Studies is sponsored by the Computer Assisted Research Group (CARG) of the Society of Biblical Literature (SBL), with support from SBL and the American Academy of Religion (AAR). It had its beginnings in the article entitled "In Quest of Computer Literacy," in the Bulletin of the Council on the Study of Religion 15/2 (April 1984), and five installments of the column appeared in the Bulletin between June 1984 (15/3) and April 1985 (16/2). With the suspension of the Bulletin and the emergence of Religious Studies News in 1986, the next ten issues of OFFLINE appeared in the latter publication as a regular feature. Now, with the return of the CSSR Bulletin, it has been agreed that OFFLINE will appear in both the Bulletin and Religious Studies News, with apologies to those members who are thereby subjected to the duplication. I had intended that the resumption of OFFLINE in the CSSR Bulletin would include a summary and update of the earlier columns, and am working on such a document for that purpose. Unfortunately, it is not ready at this time. Thus the following column, which is a report on the CARG computer activities at the recent annual meetings of AAR/ASOR/SBL in Boston, places the new reader right in the middle of things that often have connections with previous OFFLINE columns. For this I can only apologize and plead for patience. Acronyms such as CCAT (Center for Computer Analysis of Texts, at Penn) and TLG (Thesaurus Linguae Graecae, at the University of California, Irvine) are taken for granted in OFFLINE, and will be familiar to its regular readers. 
But the intention is to be clear and informative for all levels of interest, and your comments on the extent to which OFFLINE does not succeed in that aim would also be appreciated.]] This column is being encoded while the annual SBL/AAR/ASOR meetings (Boston, 5-8 December) are still fresh in human memory. Overall, Boston provided the best situation yet for displaying the activities of the SBL Computer Assisted Research Group (CARG), especially with reference to the special exhibit rooms and software developers' discussion groups. The staff enjoyed and endured a steady stream of visitors to the demonstrations and exhibits, and much appreciation is owed for the patient efforts of Jacqueline Pastis, who again coordinated the events, and the people working closely with her to show what is happening and to answer questions (Jack Abercrombie, Dale Brueggemann, Jeffrey Gillette, Alan Humm, Todd Kraft, David Louder, Jay Treat, Ben Wright). Thanks are also due to the numerous other individuals who in one way or another contributed to the success of these endeavors. The CARG program segment also ran smoothly and was well attended, with the contribution of our guest from the Harvard Classics Department, Gregory Crane, setting the tone for a fruitful panel presentation of what is desirable in a "complete scholarly workstation." The traditional marathon of brief reports that followed the discussion was fuller than ever (see below), indicating that computer assisted research is flourishing in a variety of contexts within religious studies. This is gratifying, since CARG has attempted to resist the temptation to treat the new technology almost as an end in itself, aiming instead to focus on it as a tool for scholarship and research in the various disciplinary settings. 
With the conclusion of my term as chair of CARG, the group has been reorganized in the following ways: John Turner (Nebraska) will coordinate the activities of the reconstituted steering committee and represent the group's interests in wider research contexts. Robin Cover (Dallas Seminary) and Alan Groves (Westminster Seminary) will co-chair the annual CARG program units. John Abercrombie (Penn) and Jackie Pastis (Penn and Washington U., St. Louis) have agreed to organize and run the special exhibits and demonstrations at the annual meetings. Informational activities are in the hands of John Hughes (Bits & Bytes Review) and myself, with assistance concerning electronic communication networks from Ray Harder (UCLA) and Sterling Bjorndahl (Claremont). Dick Saley (Harvard) will help provide liaison with archaeological research needs. Members of the steering committee not named above are Jeffrey Gillette (Duke), John Hurd (Toronto), Paul Miller (GRAMCORD), and Dick Whitaker (consultant). Among the issues that emerged again and again in discussions was the need to know exactly what texts are available, how they are coded, and how they can be obtained and used. A gigantic step towards controlling such information has been taken with the appearance of John Hughes' "resource guide" entitled Bits, Bytes & Biblical Studies (Zondervan, 1987). CARG plans to build on this momentum, and on the existing (incomplete) inventory of electronic texts compiled at Rutgers University, to produce a comprehensive listing for biblical and related research. Your help in calling our attention to extant materials will be greatly appreciated. This is a step towards an official SBL Archive, for which the support of the SBL Research and Publications Committee is being sought. Similar needs in other areas of religious studies are also being examined by the AAR Committee on Research and Scholarship. 
Related issues that received extensive discussion include coding conventions for particular languages as well as the more general matter of coding "locator IDs" (chapter, page, verse, etc.) in any text file. Insofar as the TLG has produced a massive amount of Greek material with consistent coding, and since the Packard Humanities Institute (PHI) and CCAT are committed to compatibility with TLG in their coding of material for CD-ROM distribution, "new" de facto standards are emerging. Nevertheless, the systems used in these massive data collections for coding the various foreign languages are not always (if ever) in accord with the "official" positions of national and international standards organizations (ANSI, NISO, ISO). Thus CARG intends to contact the appropriate organizations to discuss possible approaches to rectifying this situation. With regard to ID locators as well as language coding, we have also encouraged software vendors to enable their products (for searching, browsing, etc.) to be used directly with the formats being produced and distributed by TLG, PHI and CCAT, for greater user convenience and flexibility. The point is to ensure that the software product you purchase can, without significant adjustments, be used on the bulk of texts that are becoming available in TLG-compatible format -- on Milton or on Dante's commentators as well as on Tertullian or Philo or the Bible in its various languages. Which brings me to an awkward subject -- the delivery of the CD-ROM containing all the promised material! This is not the first time that you have read in this column that by the time you read the column you will be able to obtain the CCAT-PHI [Packard Humanities Institute] CD-ROM. Having is believing, and as I write, the long-delayed disk is not yet in hand. 
There are many unexpected and time-consuming turns in the road from a supposedly "clean" text (or even an acceptably "dirty" one) to a set of premastered CD-ROM tapes that can be sent off for production. The text itself, with its coding conventions and possible errors, presents one set of problems. The locator IDs, which may vary widely from text to text, or even follow different routes in the same text (e.g. Josephus a la Whiston compared with Niese-Loeb), can be even more difficult to control. Multiply these considerations by the number of texts promised, and you will have some appreciation of why delays have been experienced. This CD-ROM will be very much like first proofs of a published anthology -- some things will be near perfect, but others will require significant repair. The situation is a bit different with reference to the ability to read the TLG and compatible CD-ROMs on non-IBYCUS machines. In Boston, the CCAT staff showed that the existing TLG CD-ROM could be read on IBM-type machines, although not nearly as quickly or as elegantly as on IBYCUS. Still, we had moved from "nowhere-ware" through "air-ware" towards actual "share-ware" that is being made available to other developers. That IS progress. Whether, at the time you read this, an appropriately useful software product will have emerged remains to be seen. We are still committed to making it happen, and appreciate your patience. A great deal is going on with computer applications in biblical studies and related areas, as was evident from the lengthy session at the Boston CARG meeting. Indeed, the second international meeting of the Association Internationale Bible et Informatique (AIBI) will be held jointly with the Association for Literary and Linguistic Computing (ALLC) in Jerusalem, 5-13 June 1988, devoted entirely to this subject. (Further information is available from OFFLINE.) 
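The locator-ID problem mentioned above -- the same text cited under two different reference systems, as with Josephus in Whiston's divisions versus Niese-Loeb -- comes down to maintaining a cross-mapping between schemes. A minimal sketch, with invented reference pairs (these are not real concordance values):

```python
# Sketch of cross-mapping two citation schemes for one text, e.g.
# Whiston-style vs. Niese-style references for Josephus.
# The pairs below are hypothetical, for illustration only.

whiston_to_niese = {
    ("Ant.", 1, 1, 1): ("Ant.", 1, 1),   # (Whiston ref) -> (Niese ref)
    ("Ant.", 1, 1, 2): ("Ant.", 1, 5),
}

def convert(ref):
    """Return the Niese-style locator for a Whiston-style one, if recorded."""
    try:
        return whiston_to_niese[ref]
    except KeyError:
        raise ValueError(f"no mapping recorded for {ref}")
```

The premastering difficulty is precisely that such tables must be compiled and verified entry by entry for every text whose editions divide it differently.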
In Boston, reports were heard from various centers and institutes, including: Brown, Duke, Harvard (Classics), Hebrew University, Maredsous Centre for Informatique et Bible, Oxford, Packard Humanities Institute, Pennsylvania, and Toronto. Of special interest among the ongoing projects were: the Armenian Inscriptions Data Base (M. Stone, Hebrew University), the Comprehensive Aramaic Lexicon (S. Kaufman, Hebrew Union College-JIR), Hebrew Bible Morphological Analysis (A. Groves, Westminster Sem.), GRAMCORD Greek NT (P. Miller, Trinity Ev. Div. School), Hypertext Biblical Data Project (R. Cover, Dallas Seminary), Photogrammetry and Archaeological Data Base Projects (R. Saley, Harvard), the Syriac Peshitta and Qumran Text Projects (J. Cook, Stellenbosch). New font developments for the Macintosh were reported by P. Payne (Linguists Software) and Conrad Gempf (Aberdeen), while recent upgrades in Nota Bene (Steve Siebert) and LBase (John Baima) were announced. Attention was also drawn to the special CARG demonstration schedule that included a preview of the new micro version of the Oxford Concordance Program for IBM/DOS machines and exhibits of such teaching tools as John Hurd's (Toronto) Greek Tutor system and George Kong's METACARDS for biblical Greek and Hebrew. Although the amount of information aired at this session was almost overwhelming, a few items of special interest deserve mention. The Toronto Centre is now a node for accessing the extensive "Global Jewish Database" of Bar Ilan University. The Apple MAC is getting more attention as a research computer, as is evident from its prominence in Gregory Crane's Perseus Project (Harvard) and the plan to develop GRAMCORD for the MAC (the search capabilities developed by Star Software for the English Bible were also impressive). 
dBASE III+ is becoming popular with some IBM based projects such as Michael Stone's Armenian Inscriptions, Emanuel Tov's manipulation of parallel Hebrew and Greek biblical text, and Dick Saley's use of it for archaeological data management. Interest in CD-ROM as a data storage device, and in its probable technological successors, is high; in addition to the original TLG CD-ROM "A" accessed from IBYCUS, the TLG "B" disk (including indices of syllables/words) is being accessed through programs developed by Gregory Crane both in his own project, using MACs, and in an adaptation for IBM/DOS machines by Randall M. Smith (UC Santa Barbara). As Steve Siebert of Dragonfly Software (Nota Bene) correctly reminds us, the line between "research" computing as done at Brown, Duke, Penn, Toronto, etc., and "commercial" development is often very thin, if not indistinguishable. It often turns out that both categories are "non-profit," even if unintentionally, and the need to recover development costs at a non-commercial center may create a price structure similar to that in the commercial world. Some products from the centers find their way into the marketplace, as with the BYU Concordance Program which is now sold commercially as "WordCruncher." In almost all instances, the interchange of ideas and sometimes of software developments between the vendors and the centers has been advantageous to computer users. In hopes of fostering such healthy exchange, three special meetings were sponsored by CARG, in which the attending software developers and data producers were invited to discuss the following general issues: (1) advanced research applications such as "hypertext" (see OFFLINE 15), (2) multilingual text processing, and (3) the computer as a multilingual printing device. 
Among the participants at one or more of the sessions were representatives of Dragonfly Software (Nota Bene), Gamma Products (MultiLingual Scholar), ChiWriter, Grafeas, WordCruncher, Silver Mountain Software (LBase), Linguists Software, Dove Fonts, IBYCUS, GRAMCORD, and the centers at Brown, Duke, Harvard (Classics) and Penn. The discussions were mostly cordial, sometimes quite technical (my staff tells me later what really was said!), and on the whole very valuable for seeing the issues in a more balanced perspective. Some of the CARG plans noted above are direct results of these symposia, and we hope that fruitful contacts among these participants will continue. Indeed, the next few months should witness exciting developments in the area of text manipulation as well as multilingual capabilities. More texts in a variety of different formats and languages will be available in a consistent form on CD-ROMs for experimentation by the software developers. "Hypertext" applications can be implemented on significant bodies of text such as the biblical materials, and the addition of visual data -- perhaps also of sound -- is not far behind thanks to the new generation of digital scanners and other technological advances. With the rapid emergence of CD-ROM libraries, some of the materials presently available only in the very limited contexts of mainframes and tape drives should begin to make their way to a potentially wider audience. With apologies for the length of this report, and for sometimes assuming that the reader is familiar with some level of computerese, let me end with some words about life in the trenches. If you are at all interested in working with ancient texts on computer, and especially biblical texts, become familiar with the treasury of information in John Hughes' Bits, Bytes & Biblical Studies. 
There is no longer any excuse for you (or for me) to feel like Elijah in the wilderness, and I hope that my job as initiator into the mysteries will be eased considerably. Please consult the book before you call or write! At CCAT, we are still committed to coordinating efforts to produce utility diskettes for textual and related research for the various users groups. Among other things, we are examining "shareware" (copy freely, pay if you wish) and "public domain" programs for inclusion. The IBM/DOS world is rich in such items -- if you use an IBM type machine and do not know about PC-WRITE (a sophisticated shareware word processor), LIST (a convenient shareware browse and search program), D (for more convenient directories), WHEREIS (for finding files on hard disks), HelpDOS, PC-ART, PC-FILE+, and the like, you are decidedly at a disadvantage. One of many sources for such software is Disk-Count Data Products, Rt. 5 Box 211, Hot Springs AR 71913. Even at the level of obtaining information about computing in the humanities, Joe Raben's DiscSCOPE can be copied and distributed free, or purchased from Paradigm Press for $5. I am less knowledgeable about utilities for the MAC, although it is clear that any perceived gap between the value of the MAC and of the IBM/DOS machines for humanistic research has narrowed greatly in recent months. We have an active and talented MAC Users Group coordinated by Jay Treat here at CCAT, and your involvement is welcome. Similarly, such potentially exciting machines as the Amiga and the Atari ST have small support groups that can be contacted through CCAT, although there seem to be few users out there. Both the Apple II group and the KayPro group remain active, coordinated respectively by Moises Silva and Dale Brueggemann of Westminster Seminary. They may also be contacted through CCAT. 
Finally, it is clear from the responses received in Boston that the time is right to establish an IBYCUS Users Group and to create a means for the exchange of software and information. Interested persons should send their names to CCAT for inclusion on the list, and if particular items of software or documentation are desired, please note that as well. Plans for a utilities diskette are already underway, and special software for certain tasks (e.g. searching the CATSS textual variant files for manuscript groupings, tendencies, etc.) is already available for the cost of materials and handling (typically $10). //end #16// --------------------- << O F F L I N E 17 >> by Robert A. Kraft [HUMANIST January 1988] RSNews 3/2 (March 1988) CSSR Bulletin 17/2 (April 1988) --------------------- <> The long-promised laser disk (CD-ROM) of biblical and related textual materials prepared by the Computer Assisted Tools for Septuagint Studies (CATSS) Project and the Center for Computer Analysis of Texts (CCAT), in conjunction with the Packard Humanities Institute (PHI), is finally a reality. It provides a companion disk to the completely compatible expanded and updated CD-ROM "C" from the Thesaurus Linguae Graecae (TLG) Project. This does not mean that everyone who anticipated its arrival will be pleased with the result, or that suddenly everything related to biblical studies on the computer becomes simple. As will be evident from what follows in this column, many frustrations remain. We are feeling our way along, and your patience and encouragement are still needed. What it does mean is that significant progress has been made towards providing more and better texts, in relatively consistent formats, for computer users to access more efficiently and inexpensively, and for computer program developers to have as potential objects for improved software. At the very least, the presence of these materials on CD-ROM provides a fixed point of reference for further work. 
It is a benchmark by which to measure the progress that will hopefully be facilitated thereby. <> For information on the TLG CD-ROM, with its vast array of Greek literary texts from the oldest remains to the sixth century of the common era, contact the TLG offices at the University of California at Irvine, CA 92717 (tel. 714 856-6404). TLG distributes this disk under a subscription agreement similar to that already outlined in OFFLINE 13. The PHI/CCAT CD-ROM contains far fewer texts than the TLG disk, but represents a large variety of languages and types of material. It is divisible into four parts, the first two of which have been produced and/or coordinated primarily through PHI, while the last two are the primary responsibility of CATSS-CCAT:

(1) Classical Latin Texts (PHI): Anonymous, Vita Iuvenalis; Apicius; Apuleius; Caesar; Cato; Catullus; Celsus Medicus; Cicero; Hirtius; Historia Augusta; Horace; Hyginus; Juvenal; Livy; Lucan; Lucretius; Martial; Nepos; Ovid; Persius; Petronius; Phaedrus; Plautus; Pliny the Elder; Pliny the Younger; Pomponius Mela; Probus; Propertius; Quintilian; Sallust; Seneca the Philosopher; Seneca the Rhetor; Servius; Silius Italicus; Statius; Suetonius; Tacitus; Terence; Tibullus; Valerius Flaccus; Varro; Vergil; Vitruvius; Zeno of Verona.

(2) Greek Inscriptions (coordinated by PHI): Inscriptions from Attica (Cornell); Christian Inscriptions (Cornell).

(3) Biblical and Related Materials (coordinated by CCAT): Hebrew Bible; Septuagint; Greek New Testament (UBS 2); Greek New Testament (UBS 3); Latin Vulgate without Variants; Latin Vulgate with Variants; King James English Bible, with Apocrypha; Revised Standard Version English Bible, with Apocrypha; Syriac Genesis; Armenian Deut 1-10; Coptic/Sahidic Bible Selections (Exodus 1.1-15.21; Psalms; Jonah; Isaiah 47.1-66.14; Baruch; Lamentations; Epistle of Jeremiah; Colossians); Aramaic Targums (Pseudo-Jonathan of Pentateuch; Neofiti of Pentateuch; Job [pre-Qumran]); Paraleipomena of Jeremiah (Greek, English); 5 Ezra (4 Ezra 1-2) in Latin & English; 3 Corinthians (Greek, Latin, English); Origen, Homily on Jeremiah (Greek, Latin, English); Septuagint Morphologically Analyzed; New Testament Morphologically Analyzed; Parallel Hebrew and Greek Bible, part 1; Parallel Hebrew and Greek of Sirach; Parallel Hebrew and Greek of Ruth, with Greek Variants; Dictionary of New Testament Greek; Index to Journal of Biblical Literature 61-100.

(4) Other (coordinated by CCAT): Cologne Mani Codex (Greek), part 1; Savonarola, On Ps 50/51 (Latin); Milton, Defensionem Regiam, pp. 1-300 (Latin); Milton, Paradise Lost (English); Arabic Samples (Al-Muqaffae, Poetic Anthology; Hamadhani, Maqamat; Mu'allaqa; Two Short Texts [Jwa, Mgrb]); Sanskrit Samples (Rigveda; Bhagavadgita; Kalidasa, Kumarasambhava 2 and 6); The Heart Sutra (Sanskrit and Tibetan); Dante Commentaries (Anonymous Selmiano Text, Italian); Ruskin (English); Constant, Adolphe (French); Kierkegaard, Fear & Trembling (Danish); 1880 Diary of Mary Pierpont (English); List of Latin Words; List of English Words.

<> Not everything we planned to include (and announced in earlier lists) actually made it onto this CD-ROM. By mutual agreement, PHI and CCAT set a deadline for sending the material to be "mastered." Whatever was not ready by that deadline, or whatever might cause significant last minute problems in the premastering process, was not included. Thus the second half of the Parallel Hebrew & Greek Biblical Text (from Jeremiah onward, in the Hebrew order) was omitted, although that file is available from CCAT. Similarly, a number of indexed Hebrew-Greek biblical files were not included. But for the most part, the promised biblical materials are present on the disk. More frustrating is the absence of some materials that had already been represented on the experimental TLG CD-ROM "A" but somehow failed to make it through the mastering process for the new disk. 
My own work will suffer from having neither the updated Nag Hammadi Coptic texts (Claremont) nor the updated Papyri materials (Duke), nor the additional Greek inscriptional material (Princeton Institute for Advanced Study), on the most recent CD-ROMs. These items were ready in time, but last minute problems were encountered in processing them. Similarly, a number of Patristic Latin texts, some massive Old English and French files (from Toronto), and various other texts in various languages had to be put aside when the final deadline arrived. For this we apologize not only to the potential users, but especially to the generous donors from whom we had received permission to include their work. <> It was partly with this situation in view that the decision was made to schedule an update of the PHI/CCAT CD-ROM as soon as practical. With the experience of actually producing this type of disk behind us, we hope to be able to enhance its value and increase its coverage by continuing to collect, correct, and process relevant materials for a new issue in about a year. Perhaps by that time there will even be reason to divide the PHI Latin corpus from the CCAT efforts and produce two separate disks. In any event, if you have control of materials that you would like to see included on a future CCAT CD-ROM, or know of such materials, please inform us as we plan for future releases. <> Distribution of the PHI/CCAT CD-ROM is being coordinated by the PHI office at 300 Second Street, Los Altos, CA 94022 (tel. 415 948-0150). For a variety of reasons, including some mentioned above, this disk is considered "experimental" and is being made available at an extremely reasonable price by subscription, with the requirement that it be returned when it is updated in the near future. Subscribers must sign an appropriate license agreement prior to delivery, and are responsible to know whether any limitations of use are stipulated for any particular texts on the CD-ROM (e.g. 
published concordances made from the Vulgate files are prohibited without special permission). How can a person make use of the CD-ROM repositories? The answer is simple for IBYCUS SC users, since that machine is already set up for the task. CCAT also has software available for IBM/DOS machines, to permit at least rudimentary access and the ability to transfer files from CD-ROM to other media (hard disks, printers, etc.) if the license agreement permits such use. The only major expense is acquisition of a CD-ROM reader and interface card, which currently costs about $750. A "DOS extension" for your CD-ROM reader to permit you to access it as an additional disk drive is also a necessary, if inexpensive, piece of software. For non-IBYCUS, non-IBM/DOS users, hope may be on the horizon. Apple intends to provide CD-ROM reading capabilities for the MAC, and as more material becomes available on CD-ROM, there will be more pressure to make it useful on other systems. If you are interested in the CCAT software for IBM/DOS, please contact us. The preannounced price was under $100, and we expect to be able to maintain that goal, but several complicating factors require clarification and possible adjustment in relation to any particular machine type. At present, the technology is too new for us to be able to determine how we can best serve potential users. Thus we are anxious to share the "experimental" and "provisional" software we are developing, as long as those who obtain it are willing to participate in the process with their observations, suggestions, and especially their patience! //end #17// ----------------------- << O F F L I N E 18 >> by Robert A. Kraft with John J. Hughes [HUMANIST March 1988] RSNews 3/3 (May 1988) [not in CSSR Bulletin] ----------------------- With this issue, OFFLINE enters into a new stage of its existence. 
John Hughes, author of Bits, Bytes and Biblical Studies (Zondervan, 1987) and editor of the Bits and Bytes Review (BBR), has agreed to permit OFFLINE to use excerpts from materials prepared for BBR in the attempt to make more efficient use of everyone's time and OFFLINE's space. Thus, in effect, John becomes a co-editor of the column. Actually, his material comprises the bulk of OFFLINE 18. If the experiment works as we hope it will, we will all benefit. A word of caution, however. BBR contains an enormous amount of information that has not been included here because of space limitations. It is distinguished by its moderately technical, highly detailed reviews of products and resources for academic computing, but it also reports on a broad spectrum of computer-related activities. The material excerpted here derives from the "What's News" section of BBR, and not from the BBR reviews. I have selected only what seemed to be most timely and obvious, and have often streamlined it. Don't be misled into thinking that what you see below is anywhere near the amount of valuable information available in other, more comprehensive sources such as BBR and SCOPE. [For a free sample copy of BBR, contact John Hughes, at 623 Iowa Ave., Whitefish, MT 59937 (tel. 406 862-7280; XB.J24@STANFORD.BITNET).] As you may have noticed in the "popular press," there has been a rash of recent reports about various strains of computer "viruses" that can have distressing, or even fatal, effects on computer files and systems. I am sorry to report that apparently such a virus was transmitted through one of the IBM machines at the Boston CARG sessions in December. Since such viruses can spread virtually undetected from one diskette to another, or between diskette and hard disk, if you suspect or fear that your data may have been exposed, the following tests should be administered. One PC-based virus is known as the "Asher" strain. 
It is easily detected when you tell your IBM/DOS type machine to display a diskette's volume label (type VOL as a system command for the desired drive; e.g. A> VOL) and the label reads "ASHER." Another virus is the "Lehigh" strain, which betrays itself by changing the date of the COMMAND.COM file -- check the date of your original system file and compare. Both viruses will also slightly expand the size of the COMMAND.COM file, through which they work. Commercially purchased programs are (hopefully!) free of viruses. Shareware, public domain programs, and programs you copy from copies of original distribution diskettes other than your own may be infected. To safeguard disks against viruses, use write-protect tabs (setting a file's attribute byte to read-only does not protect the file from the viruses). The ASHER virus erases a disk a few sectors at a time. After replicating itself four times, the Lehigh virus erases an entire disk all at once. In some instances, some material can be rescued from a virus-infected diskette or system, but the process is too complex to describe here. For further information, contact OFFLINE with details of your problem. If it is of any comfort, viruses have also wreaked havoc on mighty mainframe computers and networked systems. One, called the Christmas Tree virus, caused IBM to shut down its system for 72 hours to eradicate the plague, and harassed various other systems. Also, a virus that originated at the Hebrew University of Jerusalem and that has spread throughout Israel is programmed to erase all infected disks on Friday, May 13, 1988. Similar viruses threatening less dire consequences have also been detected, and various software antidotes are becoming available. BBR reports on a wide variety of sources and programs available as "public domain" and "shareware" products, as well as more explicitly "commercial" items. 
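The detection tests described above for the Lehigh strain amount to a simple baseline comparison: record the size and date of a known-clean system file (such as DOS's COMMAND.COM from an original write-protected diskette), then compare later copies against that record. A minimal sketch of that idea in a modern scripting language follows; the function name, paths, and baseline values are all illustrative assumptions, not part of any actual antivirus product mentioned here.

```python
import os
from datetime import date

def check_system_file(path, baseline_size, baseline_date):
    """Compare a file on disk against a recorded clean baseline.

    Returns a list of discrepancies; an empty list means neither the
    size nor the modification date has changed since the baseline was
    recorded. (Both the Lehigh and ASHER strains described above would
    trip the size check; Lehigh would trip the date check as well.)
    """
    st = os.stat(path)
    file_date = date.fromtimestamp(st.st_mtime)
    findings = []
    if st.st_size != baseline_size:
        findings.append("size changed: %d vs %d" % (st.st_size, baseline_size))
    if file_date != baseline_date:
        findings.append("date changed: %s vs %s" % (file_date, baseline_date))
    return findings
```

An empty result is only as trustworthy as the baseline: the record must come from an original, write-protected distribution disk, for the same reason the column advises checking "the date of your original system file."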
A sampling follows: SHAREWARE MAGAZINE, the new name for PC-SIG Magazine, is an excellent source of information about shareware for IBM PCs and compatibles. $20/year in U.S.A., $35/year foreign. Contact: Shareware Magazine, PC-SIG, Inc., 1030 East Duane Ave., Suite D, Sunnyvale, CA 94086; (408) 730-9291. THE CLEARINGHOUSE for Academic Software, a nonprofit software distribution center operated by Iowa State University, distributes over 100 educational software packages for the VAX family of computers. An on-line catalog and preview system allows users with proper terminals or terminal-emulation programs to scan listings of available programs and to preview many of them on-line. The on-line number is (515) 294-6085. Use CHCAT as username and as password. A free catalog is available. Contact: THE CLEARINGHOUSE for Academic Software, The Computation Center, 104 Computer Science Building, Iowa State University, Ames, IA 50011; (515) 294-0323. The MACWAREHOUSE Catalog (Premier Edition) lists over 250 heavily discounted software, hardware, and accessory products for Macintoshes. All items come with 120-day guarantees and are shipped via Federal Express Standard Air for $3.00. Contact: MacWarehouse, P.O. Box 1579, 1690 Oak St., Lakewood, NJ 08701-1579; (800) 255-6227; (201) 367-0440. THE MUSICIAN'S MUSIC SOFTWARE CATALOG, a "complete guide to MIDI software," lists software and hardware music products for a broad range of microcomputers--4 products for the Amiga, 29 products for Apple IIs, 31 products for Ataris, 36 products for Commodores, 22 products for IBM PCs, and 38 products for Macintoshes--as well as accessories and books. Contact: Digital Arts & Technologies (formerly Scherzando Music), P.O. Box 11, Milford, CT 06460; (800) 332-2251; (203) 874-9080. TCRUNCHERS, a shareware package of utility programs for IBM-PC and compatibles that was compiled by Mike Stairs, comes on nine 5.25-inch disks. 
Among its dozens of useful programs, TCrunchers includes book index preparation programs, a SideKick clone, a chart and graph drawing program, several foreign fonts and a font editor, the Micro-Text Analysis System, the LISP programming language, an outlining and planning program (PC-OUTLN), a style analysis program (PC-STYLE), a word processor (PC-WRITE), a package of text analysis tools, a spell checker, a character- and word-frequency program, and more. TCrunchers is available post-paid for $45 (Canadian) from The Center for Computing in the Humanities, University of Toronto, 14th Floor, Robarts Library, 1309 St. George St., Toronto, Ontario, Canada M5S 1A5. Payment MUST accompany orders. The following interesting programs are available from the more than 175 that are listed in the NATIONAL COLLEGIATE SOFTWARE CLEARINGHOUSE 1987 CATALOG (North Carolina State University, School of Humanities and Social Sciences, Box 8101, Raleigh, NC 27695; (919) 737-3067 /2468 /7908). (1) CONTENT ANALYSIS (Macintosh, $23) analyzes ASCII files for up to 50 categories of up to 30 search strings, 10 exclusion strings, and 10 combination strings. This program is sentence-oriented, prints marked text, computes category and string counts, and saves data in standard format and in SSDS format. (2) FREEFILE (IBM, $10) is a menu-driven data base program that allows users to create, format, import, export, and modify data bases, as well as to view, print, search, sort, index, modify, and delete records. Cases and fields may be selected for printing; new fields may be computed from existing ones. Supports an unlimited number of records of up to 1,000 characters each, up to 100 fields per record, and up to 65 characters per field. Only one file may be open at a time; this is not a relational data base. (3) WORDWORKER: THE ACCELERATED NEW TESTAMENT (IBM, $23) includes a machine-readable version of the KJV New Testament and an electronic concording program that supports Boolean searches. 
On-line help is available. See Bits, Bytes, & Biblical Studies 379 for more information. (4) BIBLE: NEW TESTAMENT (IBM, $23), and (5) BIBLE: OLD TESTAMENT (IBM, $50), are machine-readable, ASCII versions of the KJV that each include a rudimentary search program. (6) LANGUAGE TEACHER SERIES (IBM, $23 each for French, Spanish, German, and Italian) consists of drills for nouns, verbs, and phrases. Users are retested on missed items, diagnostic results are provided, printing is supported, (upper ASCII) foreign characters can be displayed even on monochrome screens, and large test sets are included. Additional test sets may be created with word processors. (7) WORD MATCH (IBM, $45) is a content analysis package that can be used to analyze documents, open-ended survey questions, field notes, transcripts, and other textual materials. Supports a stop-word list. Text must be demarcated with markers, or the entire file is analyzed. Outputs record numbers in descending order of frequency of hits, displays matches and their counts, displays counts per file and per search, supports multi-file searches and searching by word, phrase, and root. Written in C. Includes dBASE interface. (8) GOLDEN RETRIEVER PUP (IBM, $10) is a memory-resident, pop-up utility that allows users to locate phrases in any file or subdirectory or in all subdirectories. Supports wild-card and similar-to searches and cut-and-paste. Works only with floppy disks up to 720K. The following interesting programs for IBM PCs and compatibles are available from the more than 50 that are listed in the WISC-WARE DECEMBER 1987 CATALOG (Academic Computing Center, University of Wisconsin-Madison, 1210 West Dayton St., Madison, WI 53706; (800) 543-3201; BITNET: Wiscware@Wiscmacc). (1) SIMULATIONS FOR PHILOSOPHY (Free) simulates "worlds" and "systems" to give users experience in inductive reasoning and hypothesis formation, design of experimental tests of hypotheses, and the construction of scientific explanations. 
(2) SCRIPTWRITER ($90) includes a graphics editor, font editor, text editor, and programming language with a single user interface. Allows users to create graphics images that may be called from application programs. Supports monochrome and CGA; EGA support is expected soon. (5) TURBO GRAPHICS TOOLS ($100) is a transportable Turbo Pascal graphics library that is compatible with CGA, EGA, Hercules, and VGA (using EGA emulation) graphics adapters. Features include point, symbol, and line drawing, arbitrary text placement, text input, user-defined coordinate systems, area filling, windows, clipping, support for multiple graphics pages, and screen dumps. The software determines the hardware configuration and selects the graphics mode that offers the highest resolution and most colors. LEXEGETE is text-only lectionary software "designed by and for clergy who are serious about the process of sermon preparation." It is available for Macintoshes and for IBM PCs, uses text from An Inclusive Language Lectionary (1983), and presents exegesis from 35 leading Episcopal, Lutheran, and Presbyterian scholars. Each LEXEGETE document provides the following information about the week's text: context of the pericope, analysis of the Greek text, inclusive-language key words, a strategy for preaching the text, selected study references, suggestions for worship and hymn singing, and ideas for further reading. Contact Tischrede Software, 50 Elliott St., North Dartmouth, MA 02747; (617) 994-7907. A new version of WORD FINDER, the 220,000-synonym electronic thesaurus for Macintoshes (an IBM PC version also is available), that runs with Apple Computer's HyperCard and MultiFinder has been released. Requires 50K RAM. $59.95; upgrades: $15. Contact: Microlytics, Inc., Techniplex, 300 Main St., East Rochester, NY 14445; (716) 248-9150. HEBREW WORDS uses a mixture of games and quizzes to teach 2,000 Hebrew vocabulary words taken from the Hebrew Scriptures. Users may create individual word lists. 
For IBM PCs and compatibles. Requires graphics card. $29.95. Contact: Davka Corporation, 845 N. Michigan Ave., Suite 843, Chicago, IL 60611; (800) 621-8227; (312) 944-4070. An updated version of the BIBLICAL (and RELATED) SCHOLARS ADDRESS BOOK for scholars who can be reached by electronic mail is available on-line from Sterling Bjorndahl: BJORNDAS@CLARGRAD.BITNET. Bjorndahl has also set up a sub-network for IBYCUS users. The lively HUMANIST Network (see OFFLINE 15 and BBR 1:7 [June 1987] 15) has set up a file-server to allow users to download files. For information on joining the group or using the file-server, contact McCarty@UTOREPAS.BITNET. The COMMITTEE for COMPUTER ACTIVITIES of the AMERICAN PHILOLOGICAL ASSOCIATION offers an on-line newsletter about current activities in the field of classical studies. The newsletter includes notices regarding developments in the field of computer activities, as well as information about fellowships, lecture series, research in progress, awards and honors, employment opportunities, and other information of general interest to classicists. The on-line newsletter appears on HumaNet, a telecommunications network for scholars working in the humanities. To reach the classics newsletter, select NEW (for "newsletters") from HumaNet's main menu and CLA (for "classics") from the newsletter submenu. All newsletter articles may be searched by keyword. Notices for the on-line classics newsletter should be clearly marked as such and sent to Jeffrey L. Buller, Classical Studies, Box 886, Loras College, Dubuque, IA 52004-0178. An MS-DOS or Macintosh simulation of HumaNet may be obtained by contacting Richard W. Slatta, Director, ScholarNet, North Carolina State University, Raleigh, NC 27695-8101. Be sure to specify which version (MS-DOS or Macintosh) you require. 
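Several of the tools listed above -- WORD MATCH's match counts, the character- and word-frequency program in the TCrunchers collection -- revolve around the same core operation: tallying word occurrences in a text while skipping a stop-word list. A minimal sketch of that operation follows; the function name and the particular stop-word list are illustrative assumptions, not taken from any of the products described.

```python
import re
from collections import Counter

# An illustrative stop-word list; real packages let the user supply one.
STOP_WORDS = {"the", "and", "of", "a", "an", "to", "in", "with"}

def word_frequencies(text, stop_words=STOP_WORDS):
    """Return (word, count) pairs, most frequent first, excluding stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stop_words).most_common()
```

Everything else these packages offer -- multi-file searches, per-file counts, searching by phrase or root -- is elaboration on this counting core, plus the input and output conveniences that distinguish one product from another.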
Microcomputer Use in Higher Education, a book produced by EDUCOM and Peat Marwick in cooperation with The Chronicle of Higher Education, reports on 211 respondents (drawn from the 450 institutional members of EDUCOM) to a 1986 survey of American institutions of higher learning. The results of the survey are broken down into the following seven categories: (1) general policy, (2) microcomputer availability, (3) microcomputer access, (4) microcomputer acquisition, (5) software availability, (6) software acquisition, and (7) software support. The data were analyzed by a variety of attributes, including institution control, enrollment size, and selectivity. Results are presented graphically and supplemented with tabulations of counts and percentages for responses to each question. The cost of the full study is somewhat daunting ($45 for EDUCOM members, $125 for institutions of higher learning and nonprofit institutions, $495 for EDUCOM corporate associates, $1,195 for all others!), but a 14-page "Executive Summary" is available for $7.50. Contact: Dr. Susan S. Lukesh, Peat Marwick, 345 Park Ave., New York, NY 10154; (212) 872-6836. The proceedings of the 1987 Computer Applications in Archaeology conference have now been published as Computer and Quantitative Methods in Archaeology 1987, edited by C. L. N. Ruggles and S. P. Q. Rahtz, in British Archaeological Reports, International Series 393. The 300-page volume includes 30 papers on statistics, graphics, data bases, expert systems, and so forth. 18 pounds (post-paid) from BAR, 5, Centremead, Osney Mead, Oxford OX2 ODQ, England. //end #18// ---------------------------------- << O F F L I N E 19 >> by Robert A. Kraft [dateline July 1988] RSNews 3/4 (September 1988) CSSR Bulletin 17/3 (September 1988) ---------------------------------- This column is being drafted on a laptop computer during a layover at Heathrow Airport in London. 
For the past 10 days, I have been attending a joint conference in Jerusalem sponsored by the Association for Literary and Linguistic Computing (ALLC) and the Association Internationale Bible et Informatique (AIBI). Since the deadline for this column comes almost immediately upon my return to the office, it seemed like a good opportunity to provide a brief report on this conference as it concerns the issues of computing and biblical/religious studies. The detailed conference program can be obtained electronically from OFFLINE, and printed copies (including abstracts) may still be available from the organizer, Prof. Yaakov Choueka, Department of Mathematics and Computer Science, Bar Ilan University, Ramat-Gan, Israel. Not surprisingly, many discussions concerning international cooperation were held at the conference in smaller and larger groups, unofficially and officially. Many of the leading centers and/or individuals engaged in humanistic computing were represented (see below for a selection) and the value of being able to contact them all through electronic mail (BITNET, etc.) was emphasized. Those who were not yet accessible through electronic networks were clearly at a disadvantage. [Note: this is not an exhaustive list]
ACH       Association for Computers and the Humanities
AHL       Academy of Hebrew Language, Jerusalem
AIBI      Association Internationale Bible et Informatique
ALLC      Association for Literary and Linguistic Computing
CATAB     Centre d'Analyse et de Traitement Automatique de la Bible et des Traditions Ecrites, France
CATSS     Computer Assisted Tools for Septuagint Studies, Penn/Hebrew Univ.
CCALS     Center for Computer Analysis of Language & Speech, Leeds
CCAT      Center for Computer Analysis of Texts, Penn
CCH       Centre for Computing in the Humanities, Toronto
CIB       Centre "Informatique et Bible," Maredsous
COBUILD   Project, Birmingham, England
GJD       Global Jewish Database Project, Bar Ilan
HRC       Humanities Research Center, Brigham Young
ILC       Istituto di Linguistica Computazionale, CNR, Pisa
INLF      Institut National de la Langue Francaise, France
LDB       The Linguistic Database, TOSCA Work Group, Nijmegen
OCP       Oxford Concordance Program, Oxford
OTA       Oxford Text Archive, Oxford
PHI       Packard Humanities Institute, Los Altos
TECHNION  Israel Institute of Technology, Haifa
TLG       Thesaurus Linguae Graecae, Univ. California Irvine
TUSTEP    Tuebingen System of Text Processing Programs, Tuebingen
WIA       Werkgroep Informatica, Vrije Universiteit, Amsterdam
A major cooperative project of great importance for all humanities computing is the "Text Encoding Initiative" being sponsored by the ACH, ALLC and the Association for Computational Linguistics, with invitations to other professional groups to participate. Initial funding has been granted by the National Endowment for the Humanities (see below), and Michael Sperberg-McQueen (Univ. of Illinois at Chicago) will serve as editor for the guidelines, with the larger project coordinated by a committee of representatives from the sponsoring groups and chaired by Nancy Ide (Vassar). The matters to be addressed include not only the obvious issues of how to transcribe various foreign fonts for computer use or how to indicate text formats (columns, poetry, etc.) in consistent and translatable ways, but also what basic information about a particular text needs to be included (and where to record it) and how the desired divisions of a text are to be indicated. 
There seems to be agreement among the various parties consulted that something based on the Standard Generalized Markup Language (SGML), which has been receiving much attention recently in certain circles, will be recommended, at least as a model for transferring files between users. The initial NEH grant is for a two-year period. Doubtless we will be hearing more about this important endeavor as the work proceeds. John Hughes conducted a workshop on desktop publishing, with special focus on Xerox's Ventura Publisher system for IBM/DOS machines. This thorough survey of the subject provided a useful context for appreciating the various illustrations of in-house printing produced by some of the participants in connection with their presentations. The joint conference was especially impressive for the array of exhibitions -- some boldly online (live), some neatly prepackaged -- of collections of data stored in normal linear (consecutive text) formats (what I am calling "data banks") and of various ways of reconfiguring the data for more rapid and more flexible manipulation ("data base management" approaches). Probably the world's largest data bank, representing various formats and degrees of availability drawn from a wide variety of encoding sources, is the Oxford Text Archive, which reported on its current status, problems and ambitions. A catalogue of its holdings, organized by language, is available on request (ARCHIVE@VAX.OX.AC.UK, or 13 Banbury Road, Oxford OX2 6NN). Other data banks, and strategies for making effective use of them, were well represented. The CD-ROMs from TLG and from PHI/CCAT (see OFFLINE 17) were displayed and accessed not only from the IBYCUS SC, but also from IBM/DOS machines through the new CCAT OFFLOAD program. These data banks exist on a distributable storage medium (CD-ROM) with relatively consistent coding throughout. There was also a workshop on CD-ROM technology and products. For further information contact OFFLINE. 
The extensive Global Jewish Database Project at Bar Ilan University, an outgrowth of the "Responsa" project, exhibited its powerful capabilities for searching and manipulating material in a variety of ways. This large data bank with its integrated database software, developed under the direction of Yaakov Choueka (address given above), is now accessible on this side of the Atlantic through the Toronto Centre for Computing in the Humanities. GJD includes a lemmatized Hebrew biblical text along with a wide variety of other Jewish (mostly Hebrew) texts that are accessed directly from the Bar Ilan mainframe computer. The strategies for "disambiguation" of similar words and/or forms were especially impressive. Other similar demonstrations were shown by the Istituto di Linguistica Computazionale of Pisa, for Italian literature, which is developing a "linguistic workstation" with an intelligent knowledge base for retrieval and other applications, and by the Institut National de la Langue Francaise, for the French materials -- the latter is also accessible through the University of Chicago as well as on the TRANSPAC French telecommunications network. In addition to such mainframe-based demonstrations and special workstations, there were numerous microcomputer exhibitions for manipulating large bodies of data. The Compucord Database marketed by MIKRAH and prepared in conjunction with the CIB at Maredsous (F. Poswick, B-5198 Denee, Belgium) made integrated use of the morphologically analyzed Hebrew text and various sorts of statistical and searching software. Similarly, the CATSS data was configured for database use on IBM/DOS machines by B. A. Nieuwoudt (Stellenbosch), using dBASE III, and for the Apple Macintosh by Galen Marquis (Hebrew University), adapting "Fourth Dimension" software. 
Other projects of various sorts taking place at or in connection with the work of the LDB, WIA, CATAB, the University of Stellenbosch, and the Hebrew University, among others, were displayed and discussed. An interesting new technology whereby about 4 megabits of information can be stored on a credit-card-sized medium and displayed on a small dedicated reader called the "Smart Bible" was exhibited by T. H. Treseder (Megaword International, Sydney, Australia). Although some participants wondered aloud about what "hypertext" really meant, and whether it was really so new, W. Claassen and D. van der Merwe of the Stellenbosch group led an interesting workshop on the topic. The ability to work interactively both with consecutive text from data banks and with the more fragmented data base approach (not to mention the possibilities of graphics representation and even sound) holds great potential for new applications in computer instruction and research. Other sorts of computer applications to a wide variety of literary and linguistic materials were also exhibited and/or described by the various participants. Computer approaches to the usual problems of encoding, storing, searching, sorting (concording, indexing), text-critical analysis, multilingual manipulation and editing, lemmatization and parsing, dictionary building, and the like, were addressed from various angles. The reciprocal influence of literary theory (or theories) and computer strategies provided subjects for further discussion, including a panel on "Corpus Linguistics," and questions concerning the actual and possible relationship between computer analysis and "hermeneutical" issues were recommended for discussion at the next AIBI meeting, scheduled for Tuebingen in 1991. The 1989 annual ALLC meeting, to be held on 6-10 June in conjunction with the ACH in Toronto, will address a similarly wide range of topics. 
Just prior to the joint Jerusalem conference, the National Endowment for the Humanities held a special one-day consultation with a small group of experienced computing humanists to discuss current trends in computer research as they might relate to NEH interests, procedures, and policies. Several areas of special concern were identified in advance, and the discussions flowed from that suggested agenda. On the question of how better to educate scholars regarding computer-assisted approaches to research, it was recommended that the NEH consider establishing a program of summer workshops for this purpose. Since it is becoming increasingly easy to contact knowledgeable persons and significant research efforts through electronic networks in a timely fashion, the NEH was urged to take advantage of this situation and actively encourage coordination of relevant information at all levels. The idea of one or more clearinghouses of information, perhaps in cooperation with the professional societies, was also discussed. A second area of focus was the need for creating and improving archival resources. The importance of resuming the task of generating a centralized list of existing electronic texts and related data (the now inactive "Rutgers Inventory" project) was emphasized, and the special interests of libraries in such a task were noted. Problems of legal issues -- copyright, ownership, fair use, etc. -- and the relationship of scholarly research to "commercial" interests were also discussed. With the advent of new storage and distribution media such as laser disks, the problems of actual archiving of data may become less serious, but coordination of special interest archives (perhaps through a consortium) and more active direction in choosing what to encode and archive were seen to be important issues. The subject of standards, and the new NEH-funded initiative on text encoding, were also discussed, along the lines already noted above. 
Under the rubric "New Directions," the "hypertext/media" phenomenon was noted and a general discussion of types of software needed for quality research in the humanities ensued. The need for significant bodies of text between which links can be made, and for basic research on the sorts of links that can be made to best advantage, were emphasized. Again, the problematic relationship between scholarly research and commercial development received some attention. Finally, the special consultation turned its attention to the question of NEH policies and guidelines for grant applications. The present procedure of having separate panels for the "scholarly" aspects and the "computer" aspects of proposals was questioned. The policy of public access to NEH-funded projects and the problems that can arise when not all the funding comes from NEH were noted, along with a number of other computer-related issues: how computer-related costs should be calculated; whether materials should be required to be in standard electronic formats; the acceptability of electronic publication vis-a-vis traditional hardcopy; the importance of addressing the needs of potential users; creating new software and adapting existing programs; integrity and backup protection of electronic data; and attention to software dissemination and future portability. In most instances, no easy solutions emerged, but the opportunity to discuss such problems openly in such a group seemed to be a useful exercise in itself for all concerned. The CD DATA REPORT 4.6 (April 1988) contains an article by David C. Miller entitled "Biblical Publishing Discovers CD-ROM." The article describes several projects to put biblical materials on laser disks, including the PHI/CCAT Demonstration CD-ROM #1 (see OFFLINE 17). 
Also mentioned are Optical Media International's indexed King James version ($39), the Electronic Bible from the Foundation for Advanced Biblical Studies (FABS) with 5 English versions (KJV, ASV, RSV, NIV, TEV; $299), the Bible Library from Ellis Enterprises Inc. with several English versions along with Hebrew and Greek and various dictionaries and aids ($595), and several other promised products. A mostly monthly periodical called T.H.E. (Technological Horizons in Education) Journal is offered free to qualified individuals in Educational Institutions and Training Departments in the USA and Canada. It contains excellent summary notices of relevant developments, including hardware and software. Subscription by means of a signed and dated application card is preferred, but if such a card is not available, "a written request on school/business letterhead is acceptable, but it must be signed and dated." Home addresses cannot be used. Contact Information Synergy Inc., 2626 S. Pullman, Santa Ana, CA 92705-0126. //end #19// ---------------------------------- <> by Robert A. Kraft [dateline September 1988] RSNews 3/5 (November 1988) CSSR Bulletin 17/4 (November 1988) ---------------------------------- I just received my invitation to join the club for staff with 25 years or more of service at my University, so I'm entitled to a bit of taking stock. Somewhat gradually, and in various ways, the past decade of my life has become more and more devoted to a "missionary" task of reshaping scholarly habits by bringing the world of powerful electronic tools more centrally to the attention of my colleagues, and by trying to make it easier for them to make use of such tools. 
When I need, upon occasion, to reassure myself that this is really worth all the time and effort that might otherwise go into more specific and traditional scholarly projects, I usually revert to the motto that I recall having seen in my dentist's office -- "Give someone a fish and you feed them for a day; Teach someone to fish and you feed them for life" (American Indian proverb, the sign claimed, adapted here to genderless pronouns). In terms of my evolving agendas, much has been accomplished and much remains to be done. Probably the most significant single development is the emergence of a computer-competent new generation of first-rate scholars (by any standards) who will continue to teach us old fogies how really to use these newfangled machines, and who will continue to proliferate as they train the next generations. They are still relatively few in number, but they are there, and the clock will not be turned back (short of a complete economic collapse in our world). They are not functioning as academic half-breeds, somewhere on the fringes of scholarship in some adjunct capacity to "real" scholarship (I hasten to add, we often need such advisors as well!), but are themselves capable scholars in specific traditional fields who are able to use all the available tools. By their fruits we will see the effects of reshaped scholarly habits, both in the research itself, and in how the research is presented to its various audiences. Somewhat less impressive, but nevertheless itself reassuring, is the gradual emergence of corpora of pertinent data -- especially primary textual and graphic (artifacts, pictures, etc.) data -- that can be researched relatively easily with the help of the electronic tools. 
The treasury of such riches continues to grow (consult the catalogue of the Oxford Text Archive!), although the task of arranging each gem in harmony with the others, and of distributing this wealth among those who would make good use of it, is somewhat daunting (for a model, see the Thesaurus Linguae Graecae project). It is not very exciting to encode or collect and edit and load onto floppy disks or CD-ROMs the bits and pieces of valuable data scattered throughout the world, but it is still a necessary task in the mission of reshaping scholarly habits. In many senses, the most disappointing aspects of the mission are the frustrations caused by the proliferation of relatively mediocre (from the perspective of humanistic research scholarship) hardware and software, and the resulting confusion. Much of this is the result of how our marketplaces operate. There is not (perceived to be) much general demand for what many scholars would like to have. And scholarly circles do not usually have access to significant funding for developing such items. Nor have the funding foundations and sources seen this as an important area for investment, even when they have been asked. Thus, with rare exceptions (such products as the IBYCUS Scholarly Computer, the Oxford Concordance Program, WordCruncher, and Nota Bene come to mind), we find ourselves in the frustrating position of knowing that the new technology is capable of doing many marvelous, even fairly simple things for us, but being unable to obtain the resources to do them. Despair is no solution, although we often tend to accept the status quo when our hopes for something better are dashed too often. We need to locate and cultivate people who know, and can and will help. Prod our computer centers, exploit the potential of user groups, encourage and support those working in our interest (donations may be sent to ...). 
Humanists traditionally have tended to work in isolation; this too is a scholarly habit in need of reshaping! And even this situation is improving. The scholarly marketplace is being addressed to some extent, if belatedly, and often by frustrated scholars themselves. The potential of machines such as the Apple Macintosh is finally being recognized and realized more fully in humanistic circles, although similarly potent and much less expensive equipment such as the Atari and Amiga systems seems not to have made much headway (as a very frugal child of the depression, I find this amazing; the motto seems to be if it isn't expensive or a "big name brand" it can't be very good! Nonsense!!). Significant software development also is taking place in the university centers of humanistic computing, although the problem of how to share effectively with others needs to be faced more realistically. Mixed progress has been made at consciousness raising in traditional scholarly habitats. The withdrawal of computing enthusiasts to the comfort of new organizations for "computing and ..." is at best a mixed blessing -- it has produced some valuable concentrated efforts and some fora for various levels of "technical" discussion, but it has also worked to deprive the scholarly communities of natural integration of computer assisted research with "traditional" research. The computer-type articles that ought to be part of the mix of materials in the hoary scholarly journals tend to be lost to non-computer scholars in that new special literature for the new special people, never to be seen by the less enlightened. The computer-type scholars are not forced to make their work intelligible to those with less computer literacy, and the latter are robbed of the opportunity to stretch their imaginations and perspectives to include the new horizons that are opening up. 
Such a pattern of gradual integration would not have been simple, but the resulting necessity to reintegrate at this point is even more complex. Those who share this concern should encourage the officers of their scholarly societies and editors of their scholarly periodicals to take an active stance towards integrating the pertinent aspects of computer research into the traditional scholarly locales. Almost all traditional journals have been very backward in this regard. Leaving the question of scholarly articles aside, doesn't a new edition of a text deserve to be reviewed in the appropriate traditional journal, even if the text is published in electronic form rather than in ink on paper? We deserve to know about all tools for effective research, and to be warned about inadequate materials. At another level of scholarly communication some scholarly societies such as the American Philological Association and the Society of Biblical Literature have a fairly good track record of exposing their members to computer research tools at annual meetings, through exhibits, program sections, plenary presentations and other such fora. But I suspect that most scholarly societies have been relatively remiss in these matters, to the detriment of their constituencies. I am personally most familiar with the work of the SBL Computer Assisted Research Group (CARG) in this context, and point to it as an example of an active approach to integrating computers with traditional scholarship. CARG has not developed as an island to which those who ply computer assisted research can flee, but as a public square in which the research developments of computer technology are announced and displayed and discussed with a view towards encouraging, enlightening and assisting the inexperienced as well as the more experienced. 
At the annual meetings, CARG not only sponsors a special program segment at a particular time, but also runs special demonstrations in its own designated location, where it provides information and supports scholarly user group activities throughout the meetings. For the 1988 annual meetings of the AAR/SBL/ASOR in Chicago in November, the CARG special program will focus on "Text Retrieval and Hypertext for CD-ROM and other Large Databases in Biblical Studies and the Humanities," with presentations on the Perseus Project at Harvard (classical scholarship on Macintosh computers), on accessing the TLG and PHI/CCAT CD-ROMs for classical and biblical studies, on John Smith's MicroARRAS program for generic text retrieval and concording, and on using the new machine-readable Oxford English Dictionary and software produced at the University of Waterloo. There will also be a number of brief reports on various projects and products, in an attempt to keep people updated on this frustratingly fast moving area. Although CARG is formally sponsored by SBL, it sees itself as responsible to the wider constituencies at the combined meetings as well, and encourages the participation of all interested humanistic scholars. It is also willing to provide advice and assistance to other scholarly groups who wish to address the problem of encouraging computer awareness within their constituencies. The computer is a wonderful aid to conventional publication at virtually every level, from actual composition through editing and manuscript preparation to indexing and final printing. But the computer can also serve in and of itself as a medium of publication, and it is my hope that this option will receive closer attention in the near future. Although most of the textual materials available at present in electronic form are derived from conventional publications (e.g. 
classical and biblical Greek and Latin texts, English literature, etc.), we ought to see a growing body of works published on computer for computer use (one can, of course, make a printed form from this if desired, but that is an option not a necessity). A step in this direction has already been taken in connection with the CD-ROM (laser disk) materials produced by CCAT under my direction, in which the following items are new and have not been published anywhere in hardcopy: English translation (by W. Newby) of the short form of Paraleipomena Jeremiou; parallel Latin recensions of "5 Ezra" (2 Esdras 1-2) with English translations (edited by T. Bergren); parallel Greek and Latin, with English translations, of Origen's Homily on Jeremiah 2.21f (edited by T. Bergren and B. Wright); 1880 Diary of Mary Ann Pierpont (edited by her great-grandson, R. Kraft). The disk also contains massive scholarly tools that would be much less useful in hard copy, such as the morphologically analyzed Jewish Greek Scriptures (LXX) and the parallel Hebrew and Greek text of those Scriptures. Not only does contemporary scholarship need to acclimate itself to using such materials in research, but procedures for acknowledging and evaluating these publications in our journals and in our hiring and tenuring practices also need to be developed. It makes little sense for us to cling desperately to the procedures of the old world as we haltingly but with high hopes move into the new. The editing of electronic texts, manipulation of electronic data into new useful formats and analysis of such data, preparation of software to facilitate use of such materials, and preparation of reports and studies for circulation primarily in electronic form are legitimate scholarly activities that have their parallels in more "traditional" academia. 
Our young, computer competent scholars deserve encouragement and fair treatment for their efforts as we pass them the baton and watch in wonderment as the next phase of the race develops under their skilful care. In an effort to increase efficiency without affecting cost, CCAT has entered into non-exclusive agreements with various software developers and distributors of electronic materials for them to serve as "secondary distributors" for the biblical texts. Some of you will have noticed that Multi-Lingual Scholar now accepts orders for the CCAT texts with orders for their software, or that Linguists Software and the Bible Word Program have similar arrangements. The texts may also be obtained by themselves, without reference to any related software package, from Dove Booksellers at 3165 West 12 Mile Road, Berkley, MI 48072 (313 547-9644). It is expected that at least some of the secondary distributors will have copies of the texts for sale at the annual AAR/SBL/ASOR meetings in Chicago as well. Expanding the distribution base for the texts handled by CCAT hopefully will facilitate the release of additional texts on diskette through CCAT. The first such additions to the current list will be items already available on the PHI/CCAT CD-ROM for which permission has been received from the contributors for diskette distribution. For example, orders are now being taken by CCAT for the Latin Vulgate ($60), Newman's concise English Lexicon of the Greek New Testament ($25), and a corpus of Aramaic Targum texts (Neofiti and Ps-Jonathan of the Pentateuch, Job; $40). Other materials may be made available on request (for a list of the CD-ROM contents, see OFFLINE 17). In case there are any ATARI ST users out there who do not know of the ATARI ST USER GROUP and its Newsletter, please be informed that the group is coordinated by Dr. Douglas E. Oakman, 1090 White Drive, Santa Clara, CA 95051. 
The standard CCAT texts and utility programs are available for the ATARI ST through the group. In recent months a package for working with Syriac and Aramaic fonts of various sorts has been produced by Alaph Beth Computer Systems to work with the Multi-Lingual Scholar word processor. Packages are also available for Coptic, Phoenician and Ugaritic from the same source, and a Peshitta search and print program has been announced. Contact Alaph Beth at 5030 Maplewood Ave # 203, Los Angeles, CA 90004 (213 465-1443); or Multi-Lingual Scholar at Gamma Productions, 710 Wilshire Blvd Suite 609, Santa Monica, CA 90401 (213 394-8622). On the communications scene, there is a new computer network service called "JewishNet: The International Jewish Computer Network" that functions within the framework of Networking and World Information (East Hartford CT) and includes material from various organizations including the Hebrew Users' Group (HUG). For further information, call 800 624-5916. An informative bibliographic-type survey of computer related books, software, etc., by Martin J. Homan appeared in the Concordia Journal 14.2 (April 1988) 150-57, with subsections on software for IBM/DOS machines, Macintosh, and the Apple II family. If you use an Apple II machine and sometimes feel isolated, be assured that you are not forgotten. There is also an Apple II user group sponsored by CARG, currently coordinated by Moises Silva at Westminster Theological Seminary, PO Box 27009, Philadelphia PA 19118. //end #20// ---------------------------------- by Robert A. Kraft [dateline September 1988] RSNews 4/1 (January 1989) CSSR Bulletin 18/1 (February 1989) ---------------------------------- For the first time in several years, I was largely a bystander rather than a manager for the recent Chicago activities of the Computer Assisted Research Group (CARG) of the Society of Biblical Literature. 
Not that I wasn't still on the program, reporting on projects with which I am involved, or still on the steering committee, discussing CARG's future, or still on the exhibit floor, helping answer questions and direct visitors to appropriate demonstrations -- but I was mainly a participant, operating in a framework created and administered by others. It was for me a much more relaxing experience (although I was not entirely immune to reacting to occasional crises!), and I am well able to appreciate the time and energy that the new CARG leadership, with Robin Cover and Alan Groves coordinating the programs and Jackie Pastis again in charge of the exhibits, had expended on making these sessions such a striking success. The formal CARG program segment featured "Text Retrieval and Hypertext for CD-ROM and other Large Databases in Biblical Studies and the Humanities," with presentations by (1) Elli Mylonas and Gregory Crane of Harvard's PERSEUS project for classical studies, using Apple Macintosh to integrate CD-ROM texts, graphic materials, and other such aids; (2) John B. Smith of University of North Carolina at Chapel Hill, on his MicroARRAS search software for IBM/DOS machines; (3) Darrell R. Raymond of Waterloo, on the New Oxford English Dictionary project and the accessing software developed for using that data; and (4) John R. Abercrombie and myself, on the creation of CD-ROM based data banks by the Thesaurus Linguae Grecae (TLG) project, the Packard Humanities Institute (PHI), and the Center for Computer Analysis of Texts (CCAT), for various types of research. Useful bibliographies pertaining to each of these projects were prepared by the conveners and can be obtained by contacting Robin Cover at Dallas Theological Seminary, 3909 Swiss Ave., Dallas TX 75204 (BITNET ZRCC1001@SMUVM1). As usual, the main program segment was followed by a plethora of brief reports from various parts of the world of computing as it pertains to biblical and religious studies. 
Several of the reporters also came equipped to demonstrate some of their results later in the CARG exhibition rooms. The value, and the frustrations, of this traditional reporting session were discussed at length by the CARG steering committee, with the recommendation that in the future, each person wishing to present a brief report should provide a written summary for circulation beyond the confines of the program session itself. Since CARG considers providing reliable information to the SBL and associated societies one of its main functions, and has no desire to compete with other program segments that (unfortunately) must be scheduled at the same time, it hopes in this way to be of greater service to those persons unable to be present for the reports. The dilemma of competing program segments was especially obvious at the time of the CARG session, since concurrent with it was an ASOR consultation on computer applications in archaeology. The archaeological presentations by Richard Saley (Ashkelon project database), Richard Klein (analysis of animal bones), and James Strange (CADD computer graphics) are also of interest to the CARG clientele, just as the CARG program would interest many archaeologists -- especially the use of graphics in the PERSEUS project. Hopefully such program clashes can be avoided in the future, and coordination between CARG and any counterparts in ASOR or AAR be expedited. Indeed, although CARG is formally sponsored by SBL, the steering committee reaffirmed the desire to address the needs of the combined audiences of the societies represented at the annual meetings as opportunity permits. At certain levels, the needs of all humanistic users are similar, and CARG has tried to take this into account in its formal programs and also in the demonstrations and exhibitions it sponsors. 
In Chicago, there were special discussion sessions held in the late afternoon (to cut down on scheduling conflicts) dealing with (1) general purpose software for working with textual material, (2) multilingual word processing, and (3) the creation of multi-media (text, graphics, sound) databases for scholarly research. Many of the software vendors and/or developers attended these sessions, hopefully increasing the opportunity for fruitful dialogue concerning scholarly user interests and marketplace realities. It is usually dangerous to attempt to report briefly and selectively on a situation such as obtained at the Chicago sessions, since the reporter could not possibly be in touch with and remember everything that happened. Nevertheless, at the risk of overlooking some important matters, here is my bystander's view of some developments that will be of interest to at least some of OFFLINE's audiences. Accessing the CD-ROM data banks (at present, TLG, soon also PHI/CCAT) from the Apple Macintosh is now a reality: the PERSEUS people are doing it every day. Exact details of how that software can be obtained by potential users are still being worked out, but it seems clear that PERSEUS intends to make the software available and affordable. Inquiries can be addressed to Elli Mylonas, Classics, Boylston Hall 319, Harvard University, Cambridge MA 02138 (ELLI@WJH12.HARVARD.EDU). In various other ways, the Macintosh is proving itself as a versatile and powerful research machine, especially for working with multilingual materials and graphics applications. It has become the "machine of choice" for many scholarly uses and users. The "HyperCard" program can be employed as the backbone of a variety of research applications, as demonstrated in the CARG rooms by Ray Harder (HARDERR@CLARGRAD), Jay Treat (TREAT@PENNDRLS), and others. Similarly, down in the regular exhibition hall, other impressive Macintosh based products were on display. 
Linguists' Software (Phil Payne) continues to produce and supply quality fonts for the Macintosh, and will now become a secondary distributor for CCAT-supplied texts that have been reformatted for efficient use with its fonts. Roy Brown's PerfectWORD software also makes impressive use of the Macintosh for swift coordinated searching and browsing, and attractive display, of Hebrew, Greek, and English biblical texts. Not that the IBM/DOS machines have lost their appeal. A straw vote taken at one of the CARG sessions underlined the fact that these computers are still used by the vast majority of the scholars represented. And impressive progress continues to be made to enhance their value for humanistic research. The aforementioned CD-ROMs can be used on IBM-type equipment, whether as delivery devices from which desired texts can be offloaded (software available from CCAT), or to be searched and browsed directly (software demonstrated by Tony Smith from Manchester University, England), or to be used in other types of textual and linguistic research (with John Baima's LBase program, for example). The "commercial" developers also continue to improve the environment for working with texts on IBM/DOS machines. On display in Chicago were the latest versions of Nota Bene (Dragonfly Software, Steve Siebert), Multi-Lingual Scholar (Gamma Productions, Linda Brandt), MegaWriter (Paraclete Software, Charlie Thrall), Memorization Technologies (George Kong), GRAMCORD (Paul Miller), and others. John Smith discussed his MicroARRAS searching software at the main CARG program segment, and information was available on the Oxford Concordance Program, WordCruncher, and other such IBM-based tools. Zondervan Publishers showed their new software for searching and manipulating their electronic version of the NIV English Bible. 
On the "shareware" (send a token fee to register as a user) and "public domain" (free) side of things, various utilities were demonstrated and made available to interested parties in the CARG rooms, and database applications in archaeological (Dick Saley) and inscriptional (Mike Stone) contexts were shown. Progress towards compatibility between the available data (e.g. TLG's Greek materials, the biblical texts circulated by CCAT) and the increasingly powerful software (both commercial and "not for profit") continues to be made -- most of the aforementioned software either works directly with the coding found in the available texts or provides "filters" to make the necessary adjustments. Recently released, for example, are "shareware" filters to convert the TLG/PHI/CCAT coding for Hebrew and Greek into Nota Bene format. These are available directly from the developer, David Rensberger (Interdenominational Theological Center, 617 Beckwith Street SW, Atlanta GA 30314; registration fee of $15 for both programs or $10 for either), or from CCAT as a secondary distributor. And, as usual, the IBYCUS Scholarly Computer was on display with its dedicated system for accessing and manipulating texts from the TLG and PHI/CCAT CD-ROMs and for performing other scholarly tasks. A new utilities package put together by Donald Westblade was available for distribution, along with a few programs from CCAT. The IBYCUS SC still provides the most compact, efficient, and sophisticated tool for large-scale searching and retrieving of ancient classical and biblical texts, as well as other materials, but the gap between it and the general purpose machines (especially the Macintosh) has been narrowed considerably in recent months. //end #21// ---------------------------------- by Robert A. 
Kraft [HUMANIST January 1989] RSNews 4/2 (March 1989) CSSR Bulletin 18/2 (April 1989) ---------------------------------- The "pilot" column for OFFLINE appeared nearly five years ago under the title "In Quest of Computer Literacy" (CSR Bulletin 15.2 [April 1984] 41-45). At about the same time, I prepared a two-page stopgap "Computer News Update" for use in responding to inquiries that were arriving rather regularly in my mail. Not surprisingly, the topics discussed in these two pieces often overlapped -- the need for reliable information, for accessible electronic texts and data with which to work, for easy transfer capabilities to permit individuals to work independently on their own microcomputers, and for appropriate multilingual display systems for screen and printer. During subsequent installments of OFFLINE, attention has returned again and again to these and closely related issues. Significant progress has been made on all fronts, although the informational need remains and will remain most vexing, given the rapidly changing nature of the technology and its applications. Humanists have come a long way in the quest to harness this technology for their needs. People whose faces once turned pale (or some other shade) at the suggestion that they might want to investigate how to use computers in their work now routinely expose their thoughts and locutions to "word processing," and perhaps their finances (and grading?) to a "spreadsheet" approach. Bibliographies and similarly ornery materials are also atomized and reshaped by means of "data base management" systems. With increasing frequency, selfconfessed novices are getting accounts on their local mainframe computers and are linking into the electronic bulletin boards and discussion groups such as HUMANIST or the various field oriented listings for history, philosophy, Anglo-Saxon studies, folklore, archaeology, music, and the like. 
For biblical studies and related interests, the wealth of information recorded in John J. Hughes' BITS, BYTES & BIBLICAL STUDIES (Zondervan, 1987) strikingly attests this explosion of progress. The new annual HUMANITIES COMPUTING YEARBOOK (Oxford University Press), coordinated by Ian Lancashire and Willard McCarty at Toronto, will help survey the larger context of humanistic scholarship and teaching. Of course, "seeing is believing," but the opportunities for seeing even a small sampling of the latest developments in humanistic computing are still relatively rare. Fortunately, professional societies such as SBL, AAR, ASOR, APA, MLA, and others have made various attempts to expose their memberships to these developments to some degree, although perhaps not always as consistently as might be wished. The new technofocal humanistic societies, born out of this very revolution in technology, exist in part to mediate the technological advances to the scholarly interests, although this has also taken place with varied degrees of success. On 6-10 June, two of the most prestigious of these "new" societies -- The Association for Computers and the Humanities (ACH) and The Association for Literary and Linguistic Computing (ALLC) -- will hold a joint international meeting hosted by the University of Toronto Centre for Computing in the Humanities at which, it is hoped and planned, the latest and best computer related developments for humanistic academic interests will be demonstrated in the setting of a gala "Software and Hardware Fair." Keeping up with technology does not "come cheap," and the present bifurcation in professional scholarship between the traditional societies (SBL, AAR, ASOR, etc.) and the "computers and ..." groups causes added hardships. It is not clear that our deans and administrations are aware of this type of problem -- at my University, faculty are permitted a maximum allowance of $400 towards formal participation in one professional conference per year. 
If I attend the annual SBL/AAR/ASOR meetings, as I ought, there are no funds left for meetings such as the ALLC-ACH. But the computer/humanities meetings are also very important for scholarship in my field, and there needs to be a way in which the traditional scholarly support structures (professional societies, academic institutions) provide incentives, rather than discouragements, for such dual or even multiple participation! Registration for the ALLC-ACH Conference and the Fair is in the neighborhood of US$200 for non-members of ALLC or ACH (about US$100 for students). In addition to the Fair, and the traditional smorgasbord/banquet of papers and panels, there will also be an associated Summer School in Humanities Computing, jointly sponsored by the University of Toronto and Oxford University. Educational institutions, professional societies, and other possible patrons should be encouraged to consider underwriting the cost of sending representatives to take advantage of this unusually rich opportunity. Indeed, many OFFLINE readers should seriously consider attending these sessions at their own expense, if that proves necessary. The following courses are tentatively scheduled for the Summer School, on a graduated fee scale starting at about US$150 ($125 for ALLC or ACH members) for one course (the more you take, the less each costs, maximum of four courses per week). During 29 May through 2 June, the topics are WordPerfect, Computer Assisted Instructional Writing, Desktop Publishing, Computer Assisted Language Learning, Humanities Computing in China-Japan-Korea, Hypertext, Interactive Writing for Students, HyperCard, Meeting Campus Needs in Humanities Computing, Meeting School Needs in Humanities Computing, and Writing with Computer Support in the Schools. On 5 June there will be a one-day workshop on Advanced Function Workstations. 
From 12-16 June, three of the earlier courses will be repeated (WordPerfect, Desktop Publishing, HyperCard) plus Scholarly Publishing, Interactive Video, Relational Database Systems, Programming in SNOBOL4, Study of Reader Response, Tools for Translation, Nota Bene, Literary and Linguistic Computing, and Discourse Analysis. For further information, contact Professor Ian Lancashire, Centre for Computing in the Humanities, Robarts Library, 14th Floor, 130 St. George Street, University of Toronto, Ontario M5S 1A5 CANADA (tel. 416 978-4238; BITNET IAN@UTOREPAS). In addition to any involvement with software/hardware displays, my own special assignment for Toronto is to coordinate a panel on humanities Archives/Repositories. As is clear to readers of OFFLINE, this is a long and abiding interest of mine. The computer offers a fantastic set of tools for textual research, but they cannot work in a vacuum. We must have access to the electronic texts and related data. Over the years -- now even decades! -- a wide variety of electronic materials have been generated in a wide variety of forms and under widely varying conditions. Some -- perhaps many -- of the early individual efforts are no longer recoverable. Certainly many electronically typeset books survive now only as hardcopy orphans, having lost the electronic parent. Although there have been sporadic efforts to catalogue and/or collect the surviving sea of material, none have yet proved successful in any comprehensive sense. The Oxford Text Archive is probably the largest unstructured collection of such materials -- and it distributes a catalogue of holdings as well -- but it is at the mercy of the various data producers, who may or may not choose to list or deposit their materials at the Archive. 
The off-again, on-again Rutgers Inventory of Machine Readable Texts deserves encouragement and support for its intent to create a comprehensive list of what is out there, although for a variety of reasons, progress has been slow and sporadic. Archiving is largely a thankless task, and requires both personal commitment and fiscal support to be effective. That the Oxford Text Archive has survived as an active enterprise as long as it has is perhaps more a tribute to British resourcefulness and tenacity on the part of its staff than anything else. As its (usually) amiable overseer, Lou Burnard, would be among the first to admit, the fact that such a collection exists does not guarantee that the needs of the people for whom it exists are being met or even actively addressed. It takes time and resources to document adequately what is in an archive, to correct errors, to harmonize formats and make coding choices consistent, to service inquiries and orders, to stock tapes and diskettes, to make and dispatch copies, to protect legal rights and keep track of the whole business -- to mention only some of the most obvious desirable functions. At the most basic level, an archive (or repository) is involved in collecting and preserving. This can be viewed as a predominantly passive function -- to serve as a storage area for whatever relevant materials are submitted for deposit. Apart from anything else it has done or hoped to do, the Oxford Text Archive (OTA) has been able to fill this function. It is there, and welcomes contributions of data from whatever source -- including material that is not allowed to circulate independently under any conditions. The fact that all producers of electronic textual material have not in fact sent their materials to or even listed them with the OTA is unfortunate, and hopefully can gradually be remedied. At one level, CCAT is among the guilty. 
We have sent some materials to the OTA and have agreed to provide a complete listing, but thus far have not fulfilled the promise. But at least we are committed to and are working in a cooperative mode. If every producer and collector of electronic text would take similar steps towards cooperation with the OTA we would all be in a position to reap significant benefits! Why do I emphasize working with the OTA? Because it is in place (and has been for many years), is widely known, and is willing to serve this function. The OTA issues a catalogue of holdings, classified by language and author/work, which includes references to the holdings of cooperating archives elsewhere. Is there really any point in spending scarce humanistic resources to try to replicate this function elsewhere? That makes no sense to me. Many other archives and levels of archiving exist, usually with a specific area of focus. I have not attempted to include projects that are primarily concerned with excerpting and indexing data although they also qualify, in a general sense, as archives. Instead, my main focus here is on consecutive textual data. The classicists saw the need for making electronic material available quite early in the game, and created the American Philological Association's repository of machine readable texts. The Latin side of this endeavor has recently been taken up by the Packard Humanities Institute (PHI), while the Thesaurus Linguae Graecae (TLG) has worked for many years on encoding the ancient Greek literature. At Duke University there is a related project to encode Greek documentary papyri. Projects that focus on ancient Greek inscriptions are underway at Cornell and at the Institute for Advanced Study in Princeton. Electronic versions of Ancient Near Eastern materials can be found at UCLA. The Comprehensive Aramaic Lexicon project is creating its archive centered at Johns Hopkins, and the Yiddish Dictionary project at Columbia. 
At CCAT, we have concentrated on producing and collecting electronic materials related to biblical studies. Bar Ilan University has its massive "Global Jewish Database." French efforts have produced the "Treasury of the French Language," now being continued also at Chicago. Spanish is centered at Wisconsin. The list goes on and on. It would not surprise me to find that more than 50 major archival centers for electronic texts and related humanities materials exist throughout the western world. (I have only the vaguest idea of the situation in Japan, China, and Russia, for example, and should know more about Australia.) I have not yet mentioned major collections and efforts of which I am aware in the Universities and associated institutions of Canada (e.g. Laval, McGill, Toronto, Waterloo), Great Britain (Cambridge, Essex, Glasgow, London), Scandinavia-Iceland (Bergen, Copenhagen, Goeteborg, Oslo, Reykjavik), the Netherlands (Amsterdam, Leiden, Nijmegen), Belgium (Liege, Louvain-la-Neuve, Maredsous), France (Nancy, Paris), Spain (Madrid), Germany-Austria (Bonn, Cologne, Goettingen, Mannheim, Tuebingen, Ulm, Vienna), Italy (Pisa, Turin), Israel (Academy of the Hebrew Language, Hebrew University). In the USA, other institutions with major collections include Berkeley, Brigham Young, Cleveland State, Colorado, Dartmouth, Rutgers, San Diego, Southern Mississippi, Stanford -- and there is always talk of new archival projects and centers being developed. In preparation for the Toronto panel on Archives, I hope to be able to make available a more precise list of such resources, with at least some general characterization of their holdings. For this, I will need a great deal of cooperation. In most instances, the primary function of such institutions and organizations as those mentioned above is not simply to collect data, but to do something special with the data. And herein lies a labyrinth of problems. 
Working with data within a specific context and strategy is not necessarily easily compatible with distributing data to general users. It can be very expensive and bothersome to field requests, provide information, replicate the data in various formats, etc. Few places are adequately equipped for such tasks. Thus it is not really surprising that although a relatively large amount of humanistic data has been encoded, it may not be possible to obtain access to that which interests you. And even if you can locate what you want, and can get permission to use it, you may find that the amount of preparatory work necessary for using it is forbidding. Sometimes the data is protected in some way so that it can only be used within a specific framework. Access may be only "online" -- that is, through a direct electronic connection with the archive/repository (e.g. by telephone line, or limited on-site use) -- without the possibility of the user taking electronic material to work on elsewhere. In some instances, the data can be obtained and referred to at the user's convenience, but can only be accessed by means of special software that places limits on the process (e.g. CD-ROM packages under software control). Often the need to protect and control the data is dictated by legal considerations (e.g. copyrighted material) or financial ones (recouping expenses, if not making a profit). Even where no intent to restrict is present, the circumstances may cause such a situation -- e.g. when distribution is only possible in a form incompatible with the users' equipment (9 track tape, CD-ROM, etc.), or with the available software (a specific data base management system, for example). In short, there are many obstacles between the would-be user and the extant data. 
Concerted efforts are needed to attack at least the following overlapping areas: (1) Information is needed about the existence of materials in electronic form, whether they are in large "archival" centers or are the products of isolated individuals. Please provide basic information (e.g. title, format, ownership, availability) to OTA (Lou Burnard, 13 Banbury Road, Oxford OX2 6NN, England; BITNET ARCHIVE@VAX.OX.AC.UK) or to OFFLINE. And please alert me to the existence of collections ("archives") that I may have overlooked in the preceding discussion! (2) Support is needed for gathering available materials into appropriate locations for preservation, access, and/or distribution. This is a more difficult problem since few places are ready and willing to attempt to handle all available external formats (diskettes, tapes, etc.). Frequently most of the necessary equipment for such tasks is available in the major centers, but there is no staff or funding to do the job. Fortunately, concern at least for preservation seems to be growing, as evidenced by recent discussions with some professional societies (e.g. SBL) concerning the archiving of relevant electronic materials (book manuscripts, articles, reports, bibliographies, etc.). CCAT hopes to launch a pilot project to explore this type of archiving on mass storage media such as WORM laser/optical drives. Hopefully, other professional groups and centers will also commit themselves to this important step. Authors who have an electronic copy of their own published work should consider depositing it with such an archive. (3) Support is needed for reshaping the data, as needed, into consistent internal formats that can be manipulated effectively by readily available software. At this point the "archive" becomes an active participant in ensuring that the data can be put to good use. 
An example of this process is the TLG data, which is internally consistent so that appropriate software will work on the entire data bank or on any of its parts. Similarly, the "on-line" data banks mentioned above (e.g. the Global Jewish Database or the ARTFL/French Language project) have already performed this service. The costs involved in such a process are enormous, but the resulting increase in value for users cannot easily be measured. Again, close cooperation of the various archival centers will be required to move effectively towards this goal. And the development of widely accepted standards for coding new electronic materials will help to bring this ideal closer to realization (OFFLINE has mentioned recent efforts in this direction in earlier columns). We are discussing an area of major transition for traditional educational and research institutions. With regard to textual materials, the major archives of the past and present are our libraries. And it is to the libraries, expanded to embrace electronic "text," that we doubtless will look in the future. They are rapidly gearing up, trying to catch up with the topsy-turvy growth of the computerized archives during the infancy stages of the new technologies, trying to harness any useful results. Also playing catch-up are the publishing houses, whose fates will become increasingly tied to their integration of computer-related activities. As the situation gradually stabilizes, with publishers and libraries finding their proper balance in relation to the computing expertise of the future, individual scholars and humanities computing centers will probably have much less to be concerned about at the archival level. Our grandchildren probably will have little firsthand knowledge about these struggles. But for the moment, we are presented the opportunity and the responsibility to help shape that future, and it is to our own benefit and the benefit of those who follow that we make the most of this challenge. 
Moises Silva of Westminster Theological Seminary has prepared an electronic index to the Westminster Theological Journal for the years 1938-1988, and has made it available for distribution for non-commercial purposes. Contact OFFLINE for details. The latest Newsletter from the ATARI ST User Group announces the availability of the main CCAT biblical texts on diskette for that machine. Contact Doug Oakman, 1114 - 121st Street South, Tacoma WA 98444, who also reports that he has acquired an IBM/DOS Emulator for the ATARI. Dove Booksellers (3165 W. 12 Mile Rd., Berkley MI 48072), with its growing line of computer materials, announces a new "After-Hours Computer BBS" (bulletin-board service) at 300/1200 baud M-F 5-8pm, weekends and holidays 24 hrs. Dial 313 547-9693. The Winter 1988 issue of the ACH Newsletter contains these items of more general interest to OFFLINE readers: a report on a proposed "Sanskrit Text Archive Project," and a summary of the past 6 months of HUMANIST discussions on BITNET. Do you have access to a library that subscribes to the publications of the Association for Computers and the Humanities? //end #22// ---------------------------------- by Robert A. Kraft [HUMANIST March 1989] RSNews 4/3 (May 1989) CSSR Bulletin 18/3 (September 1989) ---------------------------------- I'm a bit frustrated in preparing this issue of OFFLINE. I had intended to provide a relatively complete, topically organized list of computer archives as a followup to OFFLINE 22. It would have made my job easier, since I am committed to helping prepare such a list for circulation at the Toronto conference in June in any event. But not all of the desired information is yet in hand, so the list will have to wait. Meanwhile, the index of miscellaneous pieces of information that cross my desk and are identified as possible items to mention in OFFLINE has grown rather large. Thus it makes sense for me to use this occasion to do some housecleaning on that score. 
Please forgive me if the column seems to be more lacking in cohesion and/or inspiration than usual. Perhaps this scattershot approach will at least make occasional hits among the readership. New terms are constantly surfacing in the discussions of computer research. In the first couple of OFFLINE columns, I even began to construct a glossary of relevant terms, although that never became as consistent a feature of the column as I originally envisioned. Nevertheless, readers will have been tripping across such terms as "CD-ROM" and "hypertext" and "shareware" in subsequent columns. I have made an attempt to define them -- "CD-ROM" stands for Compact Disk with Read Only Memory, which means that masses of electronic material can be stored on this "optical" or "laser disk" medium (not a magnetic device like the normal diskettes), but that the user cannot modify or add to what is already fixed on the disk (it is "read only," not read/write). A similar, but slightly more flexible storage medium is called "WORM," for Write Once but Read Many times. The main practical difference is that special equipment is needed to "master" (fix the data on) a CD-ROM so that it is not usually done in-house, while the WORM drive permits the user to store the material on the WORM disk and add to it (but not change it) periodically as desired. Now entering the picture are disks with similarly large storage capacity that have the read/write capability. You will be hearing more about them. But you knew most of that (see OFFLINE 6, already). And you also knew that "hypertext" refers to the electronic coordination of various types of available information (e.g. text, dictionary, pictures, even sound) so that the user can move back and forth (often by using multiple windows that can coexist simultaneously on the computer screen) between the various interrelated elements (see OFFLINE 15 and 19). 
Similarly, you have been exposed to "shareware" or computer software made available to whoever asks, with the expectation that if you find it useful, you will pay a modest fee to the author (OFFLINE 16). Such neologisms as "vaporware" and "airware" are largely self-explanatory, referring somewhat playfully or cynically to unfulfilled promises. Some other relevant terms in vogue that have not been discussed here include "expert system" and "authoring system." Both of these refer to a special type of computer software that stands between the user (who operates through a "user interface" that hopefully is "friendly" or even "transparent" at the "front end" of the process) and the more technical computer languages (which can be either "high level" like C or Pascal with simple commands that can trigger complex chains of responses in the machine, or "low level" like Assembly, which stands relatively closer to "machine language" and the most basic yes/no alternatives that ultimately operate the computer). Incidentally, to become more familiar with just how it all works, there is a "friendly" if not painless section at the beginning of John Hughes' BITS, BYTES & BIBLICAL STUDIES (Zondervan, 1987) that I would highly recommend. Hughes also has an extensive glossary of terms. But I digress. An "expert system," if I understand how the term is usually used, refers to a specially constructed software system (program or package) that attempts to emulate the "logic" of a particular perspective or approach, such as for making medical diagnoses or anticipating stock market trends or correctly interpreting human language. It is created for a certain type of user in a specifically defined context in which inferences are drawn by applying rules to relevant data in order to produce recommendations. 
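To make the idea concrete, here is a toy rule-based system in this sense: a handful of explicit rules applied to data (an ambiguous glyph and its neighbors) to produce a recommendation. This is purely an illustrative sketch in a modern programming language; the rules and names are invented for the occasion, and no real scanner is anywhere near this simple.

```python
# Toy rule-based "expert system": decide whether an ambiguous glyph
# is the digit "1", the lowercase letter "l", or the capital letter "I",
# by drawing an inference from its neighbors. Illustrative only.

def classify_glyph(glyph, left, right):
    """Apply simple context rules to one possibly ambiguous glyph."""
    if glyph not in ("1", "l", "I"):
        return glyph  # unambiguous; no inference needed
    # Rule 1: a neighbor is a digit -> read the glyph as the digit "1".
    if left.isdigit() or right.isdigit():
        return "1"
    # Rule 2: a neighbor is a lowercase letter -> lowercase "l".
    if left.islower() or right.islower():
        return "l"
    # Rule 3: a neighbor is an uppercase letter -> capital "I"
    # (which might also be a roman numeral).
    if left.isupper() or right.isupper():
        return "I"
    return glyph  # no rule fired; leave the scan as it stands

def correct_scan(text):
    """Run every character of a scanned line through the rules."""
    padded = " " + text + " "  # pad so every glyph has two neighbors
    return "".join(
        classify_glyph(padded[i], padded[i - 1], padded[i + 1])
        for i in range(1, len(padded) - 1)
    )
```

Rule 1 is the kind of inference at work when a scanner reads a doubtful first character in "1989" as a digit because it stands next to digits; and it is characteristic of such systems that one refines them by adding or adjusting rules rather than by rewriting the program.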
My optical scanner identifies the first item in the sequence 1989 as a number, not as a lower case letter "l" or an upper case letter "I" (which could also be a roman numeral), because it stands next to numbers. That identification is an inference drawn and acted upon by the scanner's expert system. An "authoring system" permits the user to create, within specified limits, an intermediate set of software commands to deal with "user-defined" problems. An authoring system could be built on an expert system, giving the user more freedom to specify the context and goals. It might also work in close connection with "query language" by which various options/selections are presented for the user's choice (an "interactive" approach). The main point is that an authoring system provides the basis from which a user who may not be adept at "computer programming" can accomplish a certain range of programming tasks in a particular context -- can produce user-defined results. Packages such as HyperTalk/HyperCard (for the Mac) come to mind in this connection, providing a high degree of user flexibility and power. To some extent, but perhaps closer to "query language" than to a highly developed "authoring system," are programs such as DBase or Lotus 1-2-3 or the Oxford Concordance Program. These are general programs that permit the user to fine tune the performance for particular needs. The greater the flexibility available to the user, and the ability to generate non-trivial instructions, the closer we come to a clear "authoring system." Computer assisted instruction is an area in which authoring systems offer great potential. I am encouraged to report that Scholars Press has now joined the electronic network, with the BITNET address SCHOLARS@EMORYU1. For e-mail addicts like myself, this will greatly facilitate contact, exchange of information, etc. 
In a similar vein, as of March 1989, the HUMANIST group (contact MCCARTY@UTOREPAS) on the international university networks had grown to more than 400 members, a fact that helps illustrate the speed with which revolutions in modes of communication are taking place (the FAX machine is another illustration, about which I know very little at present). Various other electronic groups with special interests have also sprung up, as I mentioned in OFFLINE 22, one of the most recent of which is for editors of scholarly journals. Specific groups with which I try to keep in touch in this way, for various reasons, include archaeologists, Anglo-Saxon scholars, Jewish studies people, and IBYCUS users. It is not surprising, then, to find that conventional forms of publication and distribution are being challenged by the new media. OFFLINE itself is circulated as a "pre-publication" service to members of the (free) HUMANIST group as soon as the ink is dry -- I mean, the electrons have settled -- thus several weeks before it appears in "hardcopy." There are now some journals that circulate entirely in electronic form, such as PHILOSOPHY & THEOLOGY QUARTERLY (1986- ), from Marquette University. Oxford University Press has just announced the availability of the NEW OXFORD ENGLISH DICTIONARY on CD-ROM, and of the works of SHAKESPEARE on 20 diskettes (IBM/DOS). I have myself sought and received permission from Mohr/Siebeck in Tuebingen to reissue in electronic form the English edition of Walter Bauer's ORTHODOXY AND HERESY IN EARLIEST CHRISTIANITY, which is now out of print in hardcopy. And I hope to do the same with other publications with which I have been associated. Notices concerning various computer products constantly come to my attention, and I try to take note of any that seem especially relevant to OFFLINE readers. 
But the procedure is rather haphazard (you should see my office!), so the serious reader should refer to more systematic sources such as John Hughes' BITS & BYTES REVIEW. A second CD-ROM has now been issued by PHI (Packard Humanities Institute, 300 Second St, Palo Alto CA 94022), containing the Duke Documentary Papyri. As with the PHI[/CCAT] #1 disk and the TLG C disk, this can be accessed from the IBYCUS SC without any additional software, and from other machines as well. For Apple Macintosh users, the PANDORA software from the Harvard based PERSEUS Project is now available -- contact Elli Mylonas, Classics, Boylston Hall 319, Harvard University, Cambridge MA 02138. For IBM/DOS type computers, the options include: (1) CCAT's OFFLOAD and associated software being produced here at Penn by Alan Humm and his staff; (2) the GREEKUT program by Tony Smith (University of Manchester; available through CCAT/OFFLINE), especially for the Greek; (3) John Baima's "LBase" software (constantly being upgraded) which can now search directly from the CD-ROMs and display in Greek, Hebrew, Latin/English, etc. (5415 North East 47th St, Vancouver WA 98611); and (4) a newly announced shareware program by Randall M. Smith (Classics, U CA at Santa Barbara 93106) geared especially for the Greek and Latin materials. At the level of data distribution, it was already noted in OFFLINE 20 that CCAT is now making available in diskette format the Latin Vulgate, certain Aramaic Targumic texts, and Newman's concise Lexicon of the Greek New Testament (UBS). To this list can be added sections of the Sahidic Coptic Bible, the morphologically analyzed New Testament, and the indices to the Journal of Biblical Literature (vols 61-100) and the Westminster Theological Journal (1938-88). Standard orders will be referred to secondary distributors such as DOVE Booksellers, 3165 West 12 Mile Road, Berkley MI 48072; tel 313 547-9659. 
Hopefully, this will help eliminate the long waits some of you have experienced in getting orders filled by CCAT. Please accept our apologies! I have little time for reviewing software products that are sent to me (sometimes at my request), although I have every intention of doing some selective reviews in the future. Fortunately, John Hughes has covered many of them in his 1987 book and in his review journal. For the moment let me quickly mention the wide range that is represented. Biblical materials with accessing software on diskette (e.g. CompuBible, WORDsearch, the Bible Word Program, ThePerfectWord for Mac) and on CD-ROM (FABS, Ellis Enterprises; see OFFLINE 19) constitute one end of the spectrum. Programs for multilingual text manipulation also abound (e.g. LBase, MultiLingual Scholar, NotaBene, Oxford Concordance Program, CATSSBase for the Mac from Galen Marquis in Jerusalem). There is software aimed mainly at pastoral use (e.g. a program called Lexegete), at basic language instruction (NT Greek tutorial), at complex linguistic analysis (MacKinnon/McGill program). Surely I have forgotten some that are physically present here, and I have not even attempted to speak of others concerning which reports have been heard (e.g. the PhiloLogic search and retrieval system for the Mac, from the ARTFL project in Chicago). But it is clear that much activity is taking place at a variety of levels! I am mercifully nearing the end of the accumulation of "things to mention" on OFFLINE. The slate will soon be clear, at least for the moment. The CAL (Comprehensive Aramaic Lexicon) Project has issued a Newsletter and requests information about encoded Aramaic texts that could be incorporated into the Project (contact Delbert Hillers, Johns Hopkins Univ, Baltimore MD). This is a major language archive for ancient studies. 
The AIBI (Association Internationale: Bible et Informatique) issues a regular newsletter called INTERFACE (in French), which is an excellent resource for information of various sorts (reports, announcements, notes, etc.). AIBI also holds conferences (see OFFLINE 19), publishes conference volumes, and sponsors an electronic interest group (oh yes, I'm on that one too!). Contact PROBI, CIB-Maredsous, B-5198 Denee, Belgium. //end #23// ---------------------------------- by Robert A. Kraft [HUMANIST June 1989] RSNews 4/4 (August 1989) CSSR Bulletin 18/4 (November 1989) ---------------------------------- I write this as June rapidly draws to a close. Fresh in memory is the combined international conference on THE DYNAMIC TEXT held earlier this month in Toronto (see further below). Not very far in the future is the SBL/AAR/ASOR meeting, including its CARG (Computer Assisted Research Group) activities. In between is a papyrology conference in Cairo, Egypt, at which I plan to present an update on some of the procedures and results of our work on computer assisted identification and restoration of papyrus fragments. For recreation, I have just finished a computer program to index the names in a massive family genealogy file that I have been developing. On a daily basis, incoming and outgoing electronic mail takes up some of my time, and the more traditional and regular chores of an academician's life, including bibliographical searching of the Library holdings, are also facilitated in various ways by computerized activity. The point is that in virtually every direction and connection, computers and computing are part of the life situation within which I operate. This access to such enormous power no longer awes me as it once did. It is, indeed, largely taken for granted and I wonder how life ever could have functioned adequately otherwise! 
It is both interesting and comforting to find that many colleagues, students and acquaintances are having a similar experience, at some level or another. Throughout the University, text processing has become commonplace, and its benefits obvious. Graduate students marvel that anyone ever finished a dissertation in the pre-computer age, as they exploit the technology to write, rewrite, index, and print their scholarly efforts. A new set of excuses can be heard from tardy undergraduates when the course papers are due -- couldn't make the printer work, or experienced a disk crash, or the dog chewed up the diskette! When my department agreed in 1984 to require an appropriate level of computer literacy from all graduate students, we felt that it would become an unnecessary rule, since it would automatically take care of itself in the fairly near future. This has proved true, not only because most students now enter with some computer experience, but because we are able to provide new levels of usage through the presence of "humanities computing" facilities here at the University. What the isolated person might only suspect or know of indirectly can often be seen in action here, such as optical scanning of texts and pictures, special printing facilities, CD-ROM manipulation, data transfer to optimize machine-specific software, graphics and video coordination, and the like. And as is increasingly clear from the banter on the HUMANIST electronic list (about 400 computing humanists linked together on the BITNET academic network) and from the participation in conferences that deal with humanities computing, a great deal of activity is taking place throughout the world of academia to make such facilities available more broadly. THE DYNAMIC TEXT conference hosted by the Centre for Computing in the Humanities at the University of Toronto provided an excellent cross section of the current situation. 
The conference was sponsored by two of the leading international "computers and humanities" groups, the American based ACH (Association for Computers and the Humanities) which publishes the journal called CHum (Computers and the Humanities), and the British based ALLC (Association for Literary and Linguistic Computing) which publishes LLC (Literary and Linguistic Computing). Included among the "cooperating associations and institutions" were not only several other "computers and ..." type groups (linguistics, history, bible, conceptual/content analysis), but also a few of the traditional professional associations (historical, philological, philosophical, linguistic) along with a research libraries organization. It would be interesting to know whether other traditional professional societies had been invited to cooperate, and what their responses were. To my knowledge, neither SBL nor AAR was approached, for example, despite their demonstrated interest in such matters. As I will argue below, this type of cross-fertilization needs to be actively fostered as general attitudes to computing become more positive. Two booklets were produced in connection with the conference, and copies may still be available from the organizers: A Conference Guide edited by local host Ian Lancashire (191 pages plus index), with an overview of the program and abstracts of most of the presentations; and a Software and Hardware Fair Guide edited by Willard McCarty (131 pages plus index), with details about 74 planned exhibits (a few of them failed to materialize). I have no intention of trying to summarize the variety of activities that took place in this basically 4 day conference (plus associated workshops and short courses). 
My two graduate student assistants and I spent much of the time showing off various "wares" in the Exhibition room, but we also attended a smattering of the program segments and it is clear that all three of us had an enjoyable and rewarding time (and made many useful contacts). I would like to comment on a few of the issues that were raised and/or reraised at or by the conference that seem to me to impact on virtually all academics in one way or another. Some of these have been mentioned in previous OFFLINE columns, but are reasserting themselves with new vigor and sometimes in new ways. My intent is not primarily to report on the Toronto conference, but to use it as a springboard to more general observations. In Toronto, I chaired a very interesting panel on computer "archives" and related issues. For most of the short history of computing and textual studies, archives of electronic materials have been maintained by centers and projects. Now the situation is changing rapidly -- very rapidly! Libraries, as the traditional custodians of publicly available (mostly printed) information are moving more aggressively to keep abreast of the new electronic developments. Publishers, for whom the invention of the printing press created an immense market opportunity, are increasingly exploring ways in which the new electronic technology can be harnessed to their advantage. The development of storage and delivery devices such as CD-ROM, which in many ways (not the least of which is its "fixed" content) is more like a book than are the more dynamic read/write media, or largely controllable on-line access services (you can see/use what is there, but can't easily obtain it as such), provide an excellent point of contact between the electronic developments and the more traditional modes of publication and storage/access. Roles are necessarily being reshaped -- and with them, expectations, procedures, laws, interrelationships. 
In many instances, the author with appropriate electronic equipment no longer needs a separate "publisher" to produce attractive printed copy, although questions remain (if appropriate) regarding replication, publicity and distribution. And as authors move more to primarily electronic (rather than printed) publication, and/or as users come to demand more material in electronic forms, how will traditional publishing houses and libraries respond? Who controls the quality of what is "published"? Who keeps track of what version appeared when, and whether any given version is "authorized"? How do legal concepts such as "copyright" or "fair use" apply, and how do they relate to economic issues such as the treatment of expenses and of any income? A futuristic treatment of how such issues could be handled was provided at Toronto by a surprise visitor, Ted Nelson, who spoke about his "Project Xanadu" and its Hypertext System as described in his book LITERARY MACHINES (edition 87.1), which is itself an example of a new approach to publication in various forms (hard copy privately and through a distributor, and also in electronic form). "The old order changeth, yielding place to new." Many of the same issues are relevant whether one refers to future original productions, or to the attempt to produce electronic copies of existing publications. New procedures will necessarily be worked out for the future; hopefully authors will be more conscious of protecting their "ownership" rights and not simply give them up pro forma to the new order of distributors, for example. But with reference to works that are already published in the old way, and for which electronic versions are desired, the waters are considerably muddied. It is not clear how traditional "copyright" laws relate to such electronic materials, especially when the original authors (Paul, Shakespeare, etc.) whose writings are reissued under copyright are themselves long removed from the jurisdiction of such laws. 
If I take a standard copyrighted edition of such an author, strip away all but the consecutive text (without modern page numbers, introduction, notes, etc.), and make it available electronically, do I violate copyright? Have I produced a new edition that is itself copyrightable in my name? Such questions will only be answered legally by being tested in the courts (as has happened with some legal materials), but that prospect currently does not seem appealing to any of the discussants (for understandable reasons!), and we may be able to muddle through the situation by developing agreements between the interested parties -- as has been the situation thus far with the biblical and related materials circulated by CCAT. In many ways, the libraries are caught in the middle on such discussions, and may help force solutions to be found. If a traditional publisher produces an electronic edition, as with the recent Oxford University Press releases of the Oxford English Dictionary (CD-ROM, $950) or of Shakespeare (diskettes, $300), the issues are relatively clear and clean. But some works of Shakespeare, encoded from editions no longer under copyright, also have been available electronically for a longer period of time, without benefit of any authorized "protector" to be responsible for quality and to control distribution. Should libraries attempt to locate and acquire such "public domain" material as well? Until very recently, prospective users were approaching the computer centers for such information and tasks, with mixed results. Growing interest and involvement of the libraries should provide a relatively stabilizing effect on the situation. A major problem has been that it is not easy to ascertain whether a text is available electronically, and if so under what conditions. 
Lou Burnard at the Oxford Text Archive and his counterparts at a few other centers had managed to provide lists of materials that were on deposit with them, but the long desired inventory of machine readable texts (MRT) that had been begun by Marianne Gaunt at the Rutgers Library was stalled for several years from lack of adequate funding and support. This situation is now changing radically, as we learned at Toronto. The NEH has granted some planning funds for the exploration of a consortial type of Center for MRT in the Humanities, under the combined sponsorship of Rutgers and Princeton Universities, and the first major task will be completion of the Inventory. This will be done in cooperation with other groups and projects that had independently begun to move toward the same goal. Once the inventory information is in hand, and has become available on the standard library networks, it will be much easier to sort out the problems of how individual libraries can facilitate access to the actual materials (e.g. from centralized banks, through an "interlibrary loan" type system, through direct purchase, etc.) and whose legal rights may be involved. Overall, the Toronto conference was a great success, and all who were involved in making it so are to be congratulated. There is, however, for me, an uncomfortable aspect to such success. It breeds enthusiasm, esprit de corps, commitment to the cause, and all those normally desirable side effects. But at the same time it raises the question of what should be the primary focus of allegiance -- what is the function of "computers and ..." organizations in relation to the more traditional types of field oriented professional groups. When the "old guard" left little room or encouragement for serious computer related discussion at professional society meetings, it made sense for alternative fora to arise. 
But hasn't the current situation become more receptive, so that inclusion of computer assisted study in the traditional framework is no longer a divisive issue? If so, should not scholarly expertise of all types seek its primary focus and expression in the recognized field that it represents? This is not to deny the value of secondary affiliations such as the "computers and ..." groups, or even the possibility that the new technology may actually justify the spawning of some new "fields" in the humanities (although I am hard pressed to imagine what, given the "human" emphasis in my definition of "humanities"). But the danger of expending our energies to perpetuate the now comfortable and congenial "technocentric" situation, among longtime friends and sympathizers, at the cost of robbing our special fields of our newly acquired wisdom, talents and leadership, leaves me ambivalent. In the long run, computers are tools -- very powerful tools, to be sure -- that we humanists use in the pursuit and presentation of knowledge. The Fair Guide at Toronto was even entitled "Tools for Humanists," although in the Introduction Willard McCarty refers in passing, inviting this sort of discussion, to "the discipline [of humanities computing], if it is one" (p. ii). Is there a case to be made for developing departments of "humanities computing," with attendant majors and advanced degrees, side by side with the more traditional departments? Should this become a self-perpetuating "discipline" or better "field of specialization" alongside the other humanities "fields" that it also serves? At present, my inclination is to resist such a development, and to urge that the rapidly growing body of computing humanists not abandon the traditional fields in favor of "computers and ..." contexts, but on the contrary, aggressively interact with the traditional structures to forge a new and stronger synthesis. 
To put it another way, the riches of the Toronto exhibits deserve to be seen at the traditional professional society meetings. Some of the presentations made at the Toronto sessions deserve to be heard in the more traditional settings. Conversely, the program committees of the traditional professional societies need to be conscious about ensuring that computer related approaches are welcome and encouraged at the sessions. Otherwise, we are in danger of fostering the development of two quite different levels of computer literacy within any given academic field -- those who write with their computers but don't know how to do much more, and those who do much more but fail to communicate it to or share it with the colleagues who, unlike years ago, are now in a better position to appreciate it. There is a very real sense in which the continued flourishing of "computers and ..." groups could prove counterproductive for the future of humanistic scholarship. This is not to say that a continuing forum for technical discussions of humanistic computer applications has no place. But I see its role as supportive and complementary, not as competitive. The problem is, in its oversimplified form, two sided. The traditional societies and journals have seldom taken an actively positive attitude toward the new developments. What journals are reviewing general purpose software, or electronic data? How many scholarly articles that make careful and explicit use of computer technology appear in those journals? Which societies sponsor hardware and software fairs such as the one at Toronto? But on the other side, to what extent are those who are especially interested and talented in the computer assisted applications pouring time and energies into helping the traditional societies and journals cope with the new situation? Do we volunteer to serve as program coordinators and editors for relevant interests? Are computer related articles being submitted to (or rejected by) traditional journals? 
How can such articles reach the wider audience of the less skilled and help to make them more skilled if they appear mainly in the "computers and ..." journals for the very skilled? Examples of this paradoxical situation are not difficult to find. Many presentations in ACH and ALLC programs over the years, and articles in CHum and LLC, have been very field specific as well as explicitly computer oriented. To what extent has that research also made its way into the traditional journals (with appropriate rewriting, as necessary)? The current issue of LLC (4.1, 1989) contains an article by M. E. Davidson on "New Testament Word Order" (19-28), the spinoff from an MA project at Queen's University, Belfast. While it is clear that Davidson used electronic data in researching the subject, it is not an article on any aspect of computing -- indeed, its use of computers is relatively trivial and incidental. Davidson even prepared a control study from Epictetus "by hand" since appropriate computer data was not available for that author. Davidson's primary approach is through statistical analysis, and his results (whatever their validity) would be of interest to a variety of people in biblical and classical studies as well as in linguistics. I do not know whether the study has been submitted to any traditional journals, but it should be. It is an article on the Greek of the Gospel of Luke and Paul's Romans, and should be used and evaluated by experts in that material. It makes no contribution to humanities computing as such, although it peripherally relates to statistical linguistics. The very next article in LLC deals with the problems of using machine readable dictionaries of English. It is more directly and explicitly related to the computer aspects of linguistic research, addressing such problems as ambiguity and coding of various entries in English dictionary lists. 
It is an instructive study, and would be of value to people interested in dictionary construction in general, as well as to people who work with computerized dictionaries. Surely such an article has a place in the general scholarly literature for the study of English (and other) lexicography as well as in a journal read by people who must be concerned with consistent electronic coding conventions, file structures, and the like. The more technical discussions in the computer societies would be difficult to justify in traditional periodicals and scholarly meetings, to be sure, although the day may be coming when even that observation may ring false. My point is that we need to seek actively to incorporate, or sometimes reincorporate, the computer assisted studies into the general framework of the existing fields, where appropriate. I suspect that there may already be a generation of computer society members whose primary scholarly affiliation is in that "interdisciplinary" context, and who have no significant involvement with the more traditional professional societies. While I can understand how that can happen, I think we need to resist the inbreeding and tendency to isolation that can result from overly successful "computers and ..." approaches. Otherwise the old will tend to atrophy, and the new will have inadequate rooting and support. Most of the presentations and exhibits at Toronto have a direct application to the teaching and research of the individual academic. The Introduction to the Fair Guide provides a useful classification of the exhibits by type of application, with the following main divisions: computer assisted instruction (including various sight, sound and text systems), databanks and databases (local as well as online), special hardware systems (e.g. NeXT, IBYCUS), machine assisted translation, scanning systems, personal information management (e.g. 
bibliography), philological and linguistic analysis, historical (and archaeological) simulations, analysis of style and meaning, search and retrieval systems, text editing and textprocessing, data transfer utilities. There was something for everyone, and these brief comments cannot possibly do it all justice. In most instances, it was not a preview of tomorrow but a sample of what is being done today and a challenge to further infuse our everyday academic activities with the fruits and potential of this fabulous new technology. //end #24// ----------------------------- <> by Robert Kraft guest columnist Robin Cover [HUMANIST September 1989] RSNews 4/5 (November 1989) ----------------------------- [[There are many things that could be reported in OFFLINE at the end of this very short and full summer (including the misprint in the header to the previously published column, which should read "24," not "23"). I recently returned from a papyrological conference in Cairo at which there was opportunity to demonstrate the magic of the IBYCUS Scholarly Computer searching the Duke Papyri Data Bank CD-ROM as well as the Thesaurus Linguae Graecae CD-ROM to many people quite unfamiliar with such technology. Some discussions on the use of computerized (digitized) images for paleographical research, on the one hand, and of data base compilations of prosopographical information from the papyri, on the other, also took place there. The various pieces of regular and electronic mail that awaited my return included numerous items of potential interest for OFFLINE readers, from the announcement of new or improved hardware releases to relevant software developments and new electronic texts and data sets. But with the annual November meetings of SBL/AAR/ASOR approaching fast, it seemed most appropriate to devote this column to a preview of some of the computer-related aspects of the Anaheim scene. 
For those of you who are able to attend, and who can take advantage of the information and expertise available at those sessions, some of the new developments can be seen and discussed in person, along with the old. Thus I asked Robin Cover, co-chair (with Alan Groves) of the Computer Assisted Research Group of the SBL, to provide us all with an overview of the activities and interests of CARG, with a specific eye to the Anaheim meetings. Robin agreed to this request, and his contribution follows. Please check it all out for yourselves on 18-21 November in Anaheim!]] In the first part of this column I will offer a summary description of the current goals and activities of the Computer Assisted Research Group (CARG). In the second part, I will suggest areas in which CARG might provide additional computer assistance and service to the Society of Biblical Literature. Readers interested in helping enrich our vision for the use of computers in individual research and within the Society's corporate activities are invited to respond in writing. In the most general terms, CARG's primary task has been to promote the use of computing technologies in the professional and scholarly work of SBL members. The specific activities of CARG have never been guided by a canonical "mission statement," at least to my knowledge. Rather, several factors have contributed to CARG's historic maintenance of a flexible identity. (1) CARG has no permanent base of funding, but has employed adaptive strategies for its financial existence. Contributions from the SBL and from private donors are deeply appreciated, but funding based upon good will renders CARG's program contingent upon uncertain economies and fortunes. (2) CARG pursues its goals in relation to the rapidly-evolving role of "academic computing" centers in colleges, universities and seminaries, where institutional support for humanities computing is highly variable. 
While CARG cannot duplicate every function of an institutional "Academic Computing User Services" department, we do attempt to assist in some domain-specific problems encountered by biblical and classical scholars. (3) CARG has attempted to meet the needs of a highly diverse group of interested scholars -- scholars having widely divergent computer literacy skills and widely divergent computing applications. Adding to this complexity the impact of periodic computer hardware revolutions, we find no shame in admitting that CARG's goal is a moving target. The Annual Conference of the SBL/AAR is the locus of CARG's visible activity, though a steering committee maintains electronic mail discussion throughout the year. On an annual basis, we attempt to identify technological developments (hardware or software) which have led to applications that are "ripe" for promotion among the SBL constituency. In the main CARG session (usually on Saturday of the Annual Meeting), we invite two or more individuals to discuss these new applications in terms of their own research, and if possible to demonstrate visually the results. These invited lectures are meant to capture the imagination of scholars in biblical and classical research, and to help them visualize the new computer applications in related areas of study. At the upcoming Anaheim meetings, for example, we have invited three scholars to speak on the general theme "Scanning Technologies and Archives in Humanities Computing." Terrance Erdt (Villanova University) will speak on "Scanning and Character Recognition, New Tools in Humanities Computing;" Theodore Brunner (University of California, Irvine) will speak on "Machine-Readable Text Archives for Classicists: The Thesaurus Linguae Graecae Project;" Robert Kraft (University of Pennsylvania) will speak on "Text Archives: Why you Can't Find/Use the Texts you Need." 
A second part of the main CARG session at the Annual Meeting is dedicated to reports on recent or ongoing computing activities at academic institutions. Institutional research and development often requires several years for the introduction of a mature computer product or for a major work of data preparation. The report session provides an opportunity for institutional representatives to describe databases and programs that are available for public use, to announce new research endeavors, to solicit cooperative working arrangements with other institutions, etc. This November in Anaheim we hope to hear reports from or pertaining to the following institutions and projects: Biola University (Virginia Doland and Don Wilkins: CAI Software for Biblical/Classical Greek); Harvard University (Greg Crane and Elli Mylonas: PERSEUS Project; Richard Saley: Photogrammetry Project); Johns Hopkins University [with Hebrew Union College-Jewish Institute of Religion] (Stephen Kaufman: Comprehensive Aramaic Lexicon Project); Hebrew University, Jerusalem (Emanuel Tov: Computer Assisted Tools for Septuagint Studies [CATSS]; Michael Stone: Armenian Inscriptions Data Base); Manchester University (Tony Smith and Gordon Neal: Greek Syntactic Parsing Project); Maredsous Centre: Informatique et Bible (R. F. 
Poswick: Maredsous Biblical Databases); Oxford University (Susan Hockey, Lou Burnard: Oxford University Computing Center and Oxford Text Archive); Packard Humanities Institute (David Packard, Wilkins Poe: Greek & Latin Texts [with Micro-IBYCUS]); Princeton Theological Seminary (Richard Whitaker [also with Claremont Institute for Antiquity and Christianity] with Jim Roberts: [Electronic] Hebrew Lexicon Project; and with James Charlesworth: Qumran Machine-Readable Texts); Summer Institute of Linguistics (Steve DeRose: CELLAR [Computing Environment for Linguistic, Literary and Anthropological Research]); University of California at Irvine (Theodore Brunner: Thesaurus Linguae Graecae Project); University of California at Los Angeles (Giorgio Buccellati: Computer Aided Analysis of Mesopotamian Materials; Andrew Dyck, Bernard Frischer: Classicist's Workbench); University of California at Santa Barbara (Randall Smith: CD-ROM Retrieval Software for Textual Research); University of Pennsylvania (John Abercrombie, Alan Humm, Robert Kraft, David Louder, Jacqueline Pastis, Jay Treat, David Rech: Center for Computer Analysis of Texts [CCAT]); University of Sheffield (David Clines: [Electronic] Dictionary of Biblical Hebrew); University of Stellenbosch (Walter Claassen: Research for Computer Applications to the Language and Text of the Old Testament; Johann Cook: Syriac Peshitta Project); University of Toronto (John Hurd, Trinity College: Center Coordination, Software Library); Vrije Universiteit, Amsterdam (Eep Talstra: Werkgroep Informatica [Hebrew Bible Syntactical Analysis]); Westminster Theological Seminary (Alan Groves: Westminster Computer Project [Hebrew Bible Morphological Parsing]); Wooster College (J. Arthur Baird: Computer Bible Project). CARG supports other computer-related activities in specially designated CARG Demonstration Rooms during the Annual Meeting. These scheduled demonstrations and tutorials serve the personal computing interests of SBL/AAR members. 
Commercial software developers, academic institutions and hardware companies are invited to schedule 30-minute demonstrations of their academic product at no charge. These small-group demonstration sessions are used to introduce new products and sometimes to provide personalized support. Late-afternoon discussion sessions focus on common problems of text- or word-processing -- frequently the problems of data conversion, document markup, file formats, multi-lingual wordprocessing (foreign-character fonts), concording, text retrieval and desktop publishing. These discussion forums permit computer users collectively to register their complaints and wish-lists with the software developers present. The CARG Demonstration Rooms also contain literature tables for promotion of academic software and provide a meeting place for computer user-groups and special-interest groups. The CARG Steering Committee hopes that the current program supplies vital computer-related information and assistance to members of SBL/AAR who may otherwise be un-supported or under-supported by their own institutions. We recognize, however, that CARG could provide assistance and leadership in other areas of computer technology relevant to the scholarly and professional work of SBL members. In the following paragraphs I will identify two broad computer-related concerns which I feel could be formally addressed by the Society through the help of CARG and/or other groups. Perhaps no aspect of computer technology has affected scholarly research more dramatically than the international academic networks (BITNET, CSNET, Internet, NSFnet) which permit rapid communication and data sharing. Electronic networking permits scholars on different continents to work on collaborative research projects almost as easily as they might if located at a single institution. 
The administrative offices of the SBL and AAR are now connected electronically via BITNET, so that business communication can be conducted over the networks as well. If these electronic networks are in place, why is a majority of scholarly communication still confined to paper? (I would include the FAX technology as a category of paper communication, since data sent over FAX is characteristically just printed on paper, not delivered to the recipient in editable machine-readable format.) Two major barriers stand in the way of the full democratization of scholarly networking. The first is education and training: electronic mail and networking services are sometimes not adequately promoted or supported by institutions which have these resources, particularly within humanities departments. CARG could help in education and training, and indeed, I began a joint effort with AAR members this summer (plans initiated by Lewis Lancaster and Andrew Scrimgeour) which may result in useful documents on academic networks. We must demonstrate that e-mail communication and networking can be integrated into the electronic scholarly workspace as easily as wordprocessing. A far more serious barrier to networking, I suspect, is that too few members of SBL/AAR have institutional access to network resources. Large research universities, doctorate-granting universities and comprehensive colleges of course support BITNET, Internet and other research networks. But a significant number of SBL/AAR members belong to smaller liberal arts colleges, professional schools and seminaries which do not support the academic networks. In other cases, institutional networking resources may be under the control of engineering schools or computer science departments, and thus not readily accessible to departments of religion where SBL/AAR members work. CARG may be able to coordinate assistance at various levels for SBL/AAR members who face these "access" difficulties. 
Of several research networks that might be designated as the "recommended" (or official) network for SBL/AAR, BITNET and Internet are the most prominent candidates. BITNET (now merging with CSNET under the auspices of a new Corporation for Research and Educational Networking [CREN]) is currently the network of choice for most humanities scholars, and would probably be the easiest for SBL to adopt. The popularity of BITNET among humanities scholars is due, in large measure, to the fact that institutional membership fees (fixed annual fees, determined by E&E budgets) are very reasonable and that fees are based on access only. On BITNET, no per-usage fees may be passed on to end users. The Internet is a more modern, high-speed network which supports gateways to BITNET; its installation and support is more expensive, and per-usage fees sometimes make it financially inaccessible to humanities scholars. In a subsequent article I may survey academic networks more broadly, indicating the hardware/software requirements for each and the respective fee structures. I visualize that SBL/AAR could play an intermediary role (perhaps jointly with the APA and related societies) with the BITNET administration in helping medium-size and smaller institutions overcome obstacles to acquisition of BITNET membership. With the dramatic decrease in the costs of data storage and electronic publication (especially CD-ROM) and increasingly powerful microcomputers, information management specialists are faced with the problem of comparatively crude, antiquated and otherwise inadequate software. 
Humanities scholars likewise confront multiple difficulties in the use of electronic tools to create, publish and maintain their written research: (1) inadequate support for multi-lingual authoring and text processing; (2) lack of clear standards for use of foreign-language character sets and fonts; (3) incompatibilities between file formats used in commercial software packages, and inadequate file-conversion utilities; (4) inadequate software support for management of document formats required in various publishing houses; (5) personal desktop-publishing software, or publishers' electronic typesetting software which actually corrupts data from the standpoint of information retrieval. When the work of the international Text Encoding Initiative is completed (1990-91), we may hope for clearer standards and for the compliance of software developers who are committed to serving the academic community. In the interim, we cannot expect quick solutions to these problems, either from the business world or from the special efforts of humanities computing initiatives. On the other hand, the existence of an affiliated "Scholars Press" gives the Society of Biblical Literature a unique opportunity to support emerging standards and to develop goals for an electronic publication division which makes scholarly research available in machine readable format. The SBL would be in a position to set high ethical standards for the protection of intellectual property contained in its scholarly document archives. A few examples are offered below. First, the Society could encourage or require the submission of all major work in electronic as well as paper format, whether academic or administrative data. Microcomputer diskette may be the preferred medium, although simple and standard means are available for mailing binary data in encoded format (uuencode, binhex) over the electronic networks. 
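The uuencode and binhex schemes mentioned above for mailing binary data can be illustrated in a few lines of modern code. The sketch below (in Python, using its standard binascii module; the language and the sample bytes are my own illustration, not anything from the column itself) shows the essential idea: arbitrary binary bytes are expanded into printable ASCII characters so that they can pass safely through 7-bit mail gateways and be restored exactly on the receiving end.

```python
import binascii

# Arbitrary sample bytes, including values that raw e-mail gateways
# of the era could mangle (nulls, high-bit bytes, control codes).
binary_data = bytes([0x00, 0xFF, 0x10, 0x80, 0x7F])

# b2a_uu encodes up to 45 bytes at a time into one line of printable
# ASCII (the classic uuencode transformation: 3 bytes -> 4 characters,
# prefixed by a length character).
encoded_line = binascii.b2a_uu(binary_data)

# The encoded line is pure printable ASCII plus a trailing newline,
# so it can travel inside an ordinary mail message body.
assert all(32 <= b <= 126 or b == 10 for b in encoded_line)

# The recipient reverses the transformation and recovers the
# original bytes exactly.
decoded = binascii.a2b_uu(encoded_line)
assert decoded == binary_data
```

The same round-trip property is what made uuencode and binhex workable conventions for submitting diskette-style data over the networks: the mail system only ever sees plain text, yet no information is lost.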
Encouraging the submission of electronic data will serve to heighten our collective awareness of several vital facts. Of foremost importance: the goal of simply printing information on paper must now be understood as a shortsighted, inferior goal. With some rare exceptions (ephemeral data), any information worth typing and printing on paper is probably worth preserving in machine readable format: for subsequent editing, for information retrieval, for archiving. Similarly, preserving electronic data files and submitting information in electronic format to others will constantly remind us that highly proprietary ways of managing information are usually counter-productive to our research goals. Data submitted to the Society or Scholars Press in electronic format may or may not be of immediate benefit, but it should all be archived for use in future years. A second goal of the Society could be to promote standards (including recommended hardware and software choices) which optimally support our long-range electronic information retrieval objectives. Brief reference was made above to the international Text Encoding Initiative. To judge from the preliminary efforts of this initiative and from inertia in the broader electronic publishing industry, it now appears highly probable that a form of descriptive markup (e.g., SGML = Standard Generalized Markup Language) will be recommended, at least as a standard for document interchange. As an ISO standard, SGML is being required by several government agencies, and is receiving broad acceptance in the publishing and information science industries. Descriptive document markup is a strategic choice, for it permits document content and document structure to be represented independently of document "appearance." The traditional fixation on "document appearance" (viz., the printed page) has usually worked to the detriment of other scholarly concerns, especially electronic publication and information retrieval. 
More than once in recent years, prestigious publishing firms have thrown away electronic typeset tapes and kept the lead printing plates -- with obvious consequences for electronic publication of that data. Robert Amsler has spoken of getting paper printout as a "transient joy," and Ted Nelson (father of "hypertext") has described "getting on paper" as a shortsighted obsession. Of course, electronic publication will not replace paper publication for many genres of scholarly productivity. But we must begin to believe that authoring, typesetting or electronic publishing schemes which corrupt or obscure information content are inadequate tools, antithetical to our other goals of communication and research. The Society can assist the progress of scholarship by supporting the ideals and standards of the Text Encoding Initiative or other agencies which have clearly articulated the inadequacy of current document processing and publication methods. A third kind of support by the Society would be to specifically target selected publication projects for simultaneous print-copy and electronic-copy formats. I have met on two occasions with the SBL group which is producing the multi-volume anthology series "Writings from the Ancient World." Scholars on this translation team have agreed in principle to the publication of machine-readable editions of these texts along with the bound volumes. Other SBL publication series might be selected for similar treatment, particularly where scholars find it highly desirable to search, concord or index the machine-readable text data. A fourth type of support would be for SBL to sponsor or subsidize software development for particular humanities computing applications that present special problems for scholars in biblical and classical studies. A review board could be set up to referee competitive proposals for funding; recommendations from the review board could be made to other granting agencies which support the software development. 
Political or legal obstacles may prevent achievement of this goal, but perhaps not. For example, computer applications used by textual scholars in the SBL/AAR arena are usually demanding in that they require a level of multi-lingual support not anticipated by designers of operating-system software (Macintosh Script-Manager notwithstanding). It appears that Donald Knuth's new implementation of TeX and Version 7.0 of Apple's Macintosh operating software may provide more robust font support for selected applications, but what about font support (screen and printer fonts) for some of our popular, standard DOS applications? I know of no full-featured DOS wordprocessor (e.g., Nota Bene, WordPerfect, Microsoft Word) which provides native support for pointed Hebrew and accented Greek in Adobe PostScript. It's not obvious that PostScript support in these cases would be commercially feasible. On the other hand, many needs of scholarship are met through funding of "commercially infeasible" projects, and foundations exist specifically for these purposes. Even if the example offered is problematic, I can visualize other applications for which SBL/AAR subsidy might make a significant difference in solving an annoying humanities computing problem. Anyone wishing to contribute to the formation of the goals of the Computer Assisted Research Group is invited to respond in writing with pertinent suggestions. I will present all recommendations and requests to the CARG Steering Committee in our e-mail forum or at the Annual Meeting in November. If you have a special need in your own computer applications, and feel it would constitute a common problem among SBL/AAR members, please let me know. Professor Robin C. 
Cover Program (Co-) Chair, Computer Assisted Research Group Assistant Professor of Semitics & Old Testament Dallas Theological Seminary BITNET: zrcc1001@smuvm1 UUCP: attctc!utafll!robin attctc!cdword!cover attctc!txsil!robin FAX: (214) 841-3540 MCI: 332-1975 SNAIL: 3909 Swiss Avenue Dallas, TX 75204 VOICE: (214) 296-1783 [h]; 824-3657 [w] //end #25// ----------------------------- <> by Robert Kraft [HUMANIST December 1989] [Religious Studies News 5.1 (January 1990)] [CSSR Bulletin 19.1] ----------------------------- Another year, another time for taking stock. In the immediate past lies the SBL/AAR/ASOR conference in Anaheim, on the doorstep of the last decade of the 20th century. Too much went on at Anaheim to comprehend responsibly, including a great deal concerning computer assisted research and scholarship. A large debt of gratitude is due to those who put together the rich program of presentations and demonstrations dealing with computer related issues -- Robin Cover, Alan Groves, Jackie Pastis, Ray Harder, etc., for the SBL Computer Assisted Research Group (CARG); Tom Longstaff et al. for ASOR; to mention only the most obvious. And computer technology was also very well represented in the "book exhibit" area, with almost 20 separate booths containing various pertinent services or wares. In OFFLINE 25, Robin Cover gave a preview of the proposed CARG program. Most of it actually took place as planned, so there is no point in attempting to repeat the details here. For those of you who could not attend the CARG Reports session, printed descriptions of 21 scholarly projects or products were collected and produced for the meeting by Alan Groves and copies of that information are still available from OFFLINE. These reports range all the way from the usual updates on activities of well known entities (e.g. 
Oxford Text Archive, Biblical Research Associates, GRAMCORD, Computer Assisted Tools for Septuagint Studies = CATSS, Center for Computer Analysis of Texts = CCAT, Comprehensive Aramaic Lexicon = CAL, Thesaurus Linguae Graecae = TLG) to such relatively new entries as two Hebrew Lexicon projects (SBL-Princeton Seminary; Sheffield) or the use of Apple Macintosh "hypertext" capabilities to produce Hebrew-Greek lexical and morphological tools for biblical study or the adaptation of David Packard's automatic Greek morphological analysis program for use on IBM type microcomputers. A wide variety of these scholarly developments were also exhibited in the CARG demonstration room. And a similarly wide array of computer applications could be seen in the regular "book exhibit" hall, including several products for studying collected biblical and other materials on CD-ROM (e.g. CDWord from Dallas Seminary; MasterSearch Bible from Tri Star Publishing; LBase from Silver Mountain Software) or in other formats (e.g. Zondervan's newly acquired "macBible" -- formerly the PerfectWord). There were some miniature, hand held computers for Bible study (Franklin; Selectronics) and some searchable single versions on larger machines (Lockman's NASV; Zondervan's NIV). The latest versions of popular textprocessing software were on display (e.g. NotaBene, MultiLingual Scholar, MegaWriter) and of various other special use products (e.g. Linguist's Software fonts for the Mac, or the MemCards package for learning languages). And more! May I be forgiven for what I may have overlooked! Most of the specifically biblical texts and products are listed in the helpful new catalogue produced by a distributor called Hermeneutika, PO Box 98563, Seattle WA 98198 (1-800-55BIBLE), which joins Dove Booksellers as a convenient source for such materials (3165 West 12 Mile Road, Berkley MI 48072, 313-547-9659). In retrospect, the composite scene proved very gratifying. Progress is being made. 
Useful products are becoming increasingly available. A growing number of scholars are taking advantage of the powerful new tools and possibilities for facilitating study that are offered by the computer world. The ASOR special consultation on Computer Applications in Archaeology illustrated some of the range of interests and applications beyond the primarily textual. CARG guest Terry Erdt (Villanova) spoke about progress in optical scanning technology, and the new Kurzweil 5100 scanner was on display. Ted Brunner (University of California, Irvine) informed and entertained us on the history and plans of the TLG project, while the TLG CD-ROM made its presence known and illustrated its value in several displays. Developments in the creation and coordination of other data archives for humanistic research were also described, and the encouraging role of the National Endowment for the Humanities in funding such projects was noted. There is still a serious gap between those who use computers primarily or exclusively as a writing and printing device and those who use them for other scholarly tasks. But even this is a much more tolerable situation than obtained a mere five years ago, when computer phobia ran rampant among humanists. There is still significant phobia, but it is no longer triggered in most instances by the sight of a keyboard and screen. Even the use of computer jargon seems less intimidating, by and large -- we are hearing it everywhere. Indeed, there are many indicators that the current crisis -- or at least one of the main crises at the moment -- has to do with getting connected with the larger world of electronic communications. The main problem, not surprisingly, seems to be insecurity about whether and how to take such a step: Why should I want to be plugged in? Won't it be expensive? Isn't it very complicated? Etc. 
Computers are the telephones of the future (not to mention the present) -- and the telegraphs and the telephotos and, to some extent, the postal links and the library catalogues and reading rooms as well. These aspects will all develop at their own rates and in their own ways, but they will gradually come together into a multifaceted computer linkage of visual and audio media. The transmission of visual materials in the form of words and digitized pictures has been mushrooming of late, as the "fax" phenomenon and the multiplication of electronic networks and network users attest. Communication between previously discrete or incompatible networks is becoming commonplace, and the wealth of available information along with the opportunities for making useful contacts are mushrooming. Electronic discussion groups multiply, and electronic journals are beginning to emerge onto the scene. Opportunities for humanists abound. It is never too soon to start. Most major universities and their satellites are on BITNET or another of the academic networks. If you are at such an institution, you should be able to connect, either by being wired directly to the system or by telephone modem. Get an account. Don't be timid. Find out what is available and how you can make use of it. If your own institution is not on a network, it is probable that you can make arrangements with a nearby networked university to be routed through them, hopefully at a modest cost. In any event you can look into the telephone-accessed services such as HumaNet (see OFFLINE 18), which provides similar opportunities to those of the university networks. My own use of the university networks may be atypical, but it will serve to illustrate the possibilities. This column is published first, electronically, on the HUMANIST discussion group (coordinated from Toronto) on BITNET, several weeks before it appears in hard copy. 
The column is also transmitted to the editorial offices of Scholars Press via BITNET, and any communications with SP about it or related matters are done electronically. Similar contributions to scholarly discussion appear regularly on HUMANIST, to be read or ignored, responded to or left for possible later reference in the discussion group archive. If I need to locate a text, or a reference, if I have a question about humanistic computer developments or software, or about some newly announced hardware, I can send a general query in a single memo to the 500 or so members of HUMANIST. Quick responses are not infrequent. Other discussion groups are also available and attractive to me -- for IBYCUS users, for Judaic Studies, for Editors, for Archival Centers, for Archaeologists and AngloSaxonists (ok, I'm pretty nosy; trying to keep in touch with related areas of interest!). And I also keep in contact with my own less formalized subgroups of individuals such as the CARG steering committee or the CATSS project staff (local and international). All the news is not good. It takes time and discipline to deal with the flood of messages from HUMANIST alone. But I find it much easier to operate efficiently and effectively with "e-mail" than with the regular post in terms of keeping up with my own communications. And the ability speedily to get answers to queries, or to test ideas, is a great advantage, not to mention the ability to keep in touch with what is going on elsewhere throughout the world. Apart from BITNET and HUMANIST, I also make regular use of the electronic linkage to library catalogues here at Penn and elsewhere, similar to what one can also do with online bibliographies such as that of the American Theological Library Association, which regularly displays at the annual SBL/AAR/ASOR meetings and could be seen at Anaheim. 
From lack of time, and certainly not lack of interest, I have not yet made much use of the plethora of other information and opportunities on networks other than those available directly through the university. Again, using the Anaheim sessions as a point of departure, some important decisions for the future were taken there. The CARG steering committee reaffirmed its intention to continue its activities, but with an increasingly wider agenda and more overt attempts to cooperate with and help meet the needs of the other constituencies represented at the annual meetings (ASOR, AAR). Insofar as CARG has its formal base in SBL, it understandably tends to focus on biblically related research. But the interests and needs represented at the annual meetings of SBL/AAR/ASOR are clearly much broader, and it makes little sense to try to run three CARG-type operations simultaneously, at least at the level of providing general information and securing equipment for conducting demonstrations. Contacts with the computer coordinators in the other societies have been encouraging, and it is hoped that the activities at future meetings will increasingly address a larger circle of interests as well as various levels of need -- from the beginner through the expert. A related issue that has long concerned me is worth noting. There are many scholarly constituencies and societies that could profit from programs and demonstrations similar to what CARG has been doing in SBL. Occasionally inquiries have come to the Center for Computer Analysis of Texts at Penn whether we could put on some sort of computer demonstration at a meeting of this or that group. Sometimes it is possible, usually not. Similarly, the question has been asked about conducting demonstrations of appropriate computing applications at regional meetings of SBL and/or AAR, for example. 
It cannot be expected that every group has the ability and desire to generate appropriate program segments for computer orientation, but there could be a great deal of value in having an experienced group available for such services. Exactly how to coordinate, and finance, such an effort is a worthy issue for discussion in the major societies and confederations of societies and in the humanities computing service centers. Leadership also emerged from the Anaheim meeting on another matter of great significance for scholarly adjustment to the computer age. The SBL Research and Publications Committee has committed some resources to establishing a central archive to preserve the electronic forms of society publications. Journals, monographs, abstracts, newsletters, annual programs, etc., are printed from electronic versions, but the survival of the electronic forms has not been systematically attended to until now. Beginning immediately, the electronic materials will be collected and sent to CCAT at Penn, where they will be transferred to large capacity storage devices. The initial primary aim is preservation. Later such questions as consistency of format, selective electronic (re)publication and the like may be considered. If you are in possession of the electronic form of a book or article or similar item that deserves to become part of these archives, please send a copy to the OFFLINE address. As has often been noted before in OFFLINE, creation and preservation of electronic archives is an important step towards the emerging future. Over the years the Oxford Text Archive has managed to establish itself and survive -- even flourish, in some limited sense -- as a general repository. A few other similar facilities for producing and collecting electronic data have been created, usually with specific areas of focus, in various universities and projects. 
Nothing has yet emerged on the scale of the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan, which serves as a social sciences resource center for some 300 institutions, although various possibilities have been discussed at various times. The task for humanists is immense and will probably require close cooperation between existing archives and humanities computing centers, major libraries, professional societies and publishers, to mention only the most obvious. An encouraging sign is the revival of the Rutgers Inventory of Machine Readable Texts project, in connection with the newly established Rutgers-Princeton Center for Machine-Readable Texts in the Humanities, with funding from the NEH, the Mellon Foundation, and the state of New Jersey, at present. A project is also underway at Georgetown University to catalogue the existing archive-type repositories relevant to the humanities. The formal entry of a professional society such as SBL into this complex task is certainly welcome, and hopefully will help serve as encouragement to other similar groups and publishers. This is clearly a period of transition for humanistic research in relation to technological developments. Much effort on the part of many will be necessary to make this a smooth process. The libraries are feeling their way along to determine their proper roles as repositories and dispensers of information. Professional scholarly societies of the traditional sort are gradually becoming more involved. Publishers are attempting to test the market at various levels. Specific projects in the scholarly world continue to explore and produce new "data" (including texts and repositories of information). Some universities have attempted to address the broader situation in one way or another -- e.g. 
the Oxford Text Archive, the Toronto Centre for Computing in the Humanities, the Rutgers-Princeton Center for Machine-Readable Texts, the BYU Humanities Research Center, and a few others. But apart from such rare exceptions, the educational institutions seem to be preoccupied with internal needs -- and those not usually of direct relevance to humanists -- with respect to computer assisted scholarship. In some ways the fault may be ours. Often we have failed to make our needs and wants known to our own institutions, and we have failed to lend even verbal support and encouragement to those projects and endeavors that attempt to address our common interests. It is difficult for an administration, overburdened with urgent requests from various quarters, to know what the more silent participants need, or even if they know, to take appropriate action in the face of the symphony of squeaky wheels. It has become increasingly clear to me, for example, that my dean and administration have little awareness of what CCAT has done or attempts to do for humanistic scholarship at large (including the production of this OFFLINE column) and thus find it hard to respond to CCAT requests. I suspect that the same may be true for some of the other similarly oriented service and information facilities mentioned above. Thus when priorities are determined by an administration, these sorts of humanistic efforts may find themselves neglected or even discontinued. If we want to avoid such a situation, and continue to encourage progress in harnessing computing for humanistic research and instruction, we need to speak out with intelligence and conviction in support of making/keeping our humanistic disciplines equally viable in the new situation. We cannot afford to be left behind. 
//end #26// ---------------------- <> by Robert Kraft [29 January 1990 Draft, copyright Robert Kraft] [HUMANIST 29 January 1990] [Religious Studies News 5.2 (March)] [CSSR Bulletin 19.2 (April)] ---------------------- The number of this column should be OFFLINE 27a. It is a rewrite, more or less, of column 27 which has gone off to electronic heaven (or wherever) with the other materials on my suddenly defunct hard disk. As someone was overheard saying at a computer conference, "A fool and his data are soon parted!" While I don't usually compose on my IBM XT, and I don't usually turn it off without transferring material to another system or to diskette, this time I did, and I hadn't. Some of you may be feeling mixed emotions of empathy and self-exoneration. It's happened to you, and even the "experts" screw up. Yes. Do as I say, not as I do. Be sure to make backups! Anyhow, the column theme is "Computer Assisted Instruction" (CAI) or, as some prefer, "Computer Assisted Learning" (CAL). In the lost original column I said some clever things about how OFFLINE has promised for a long time to deal with CAI/CAL but has not done so. I won't be so clever this time around. It's Saturday morning, and I have other things to do. The reason for this theme at this time is that the Computer Assisted Research Group proposes to have a session on CAI at the 1990 annual meetings of SBL/AAR/ASOR in New Orleans, so I'm priming you to think about that subject. And the reason I have not talked much about it before in OFFLINE is that I have very ambivalent feelings personally about CAI. Now is a good time to try to explain (or to figure out) why. I have no ambivalence whatsoever about the value of the computer in the life of the academic. "Wordprocessing" alone is usually worth the price of the equipment. Not only can I write and store (make backups!) 
books and articles and reviews, but also syllabi, recommendations, letters, memos, addresses, telephone numbers, bibliographical items, and so on. All of this is easily possible in wordprocessing mode, although some things can be handled even more effectively with other special software (e.g. bibliographies, indices, address lists). On the record keeping side (grades, budgets, taxes, etc.), a good "spreadsheet" program is extremely useful, and can even be adapted to various other needs that depend on column structure for non-mathematical materials (e.g. word lists with definitions and analysis). Updating and printing the results of these various endeavors is usually a quite simple matter, and very fancy "hardcopy" products are often possible (e.g. multilingual text, inclusion of graphs or maps or even pictures). At the various levels of basic research and information access that undergird the life of an educator-scholar, the computer can also greatly facilitate matters. For some things you will need to be connected to an electronic network of some sort -- for example, to access library catalogues or collections of texts and data available only "online," or to get at student records and other "administrative" data, or to interact with colleagues and centers electronically. Some things might still be done most effectively on a "large" mainframe or a medium sized mini-computer, although in the humanities, the number of such tasks seems to have shrunk dramatically. In my own work, I use the mainframe as an access to electronic networks (Libraries, BITNET, etc.), as a means for manipulating certain 9 track tape formats, and for running a complex morphological analysis program; I use a mini mostly for largescale editing, uncomplicated tape handling, and preparing data for CD-ROM publication. Otherwise, mindboggling types and amounts of academic work can be done on one's own personal "micro" machine. 
And you don't need to be very expert, if you know how and where to find the information and advice you need. Among the most obvious general sources of which I am aware are John Hughes, BITS, BYTES, & BIBLICAL STUDIES (Zondervan 1987), which will continue for years to be an extremely useful introduction and detailed reference tool despite the fact that many of the programs and projects it describes will have changed (or perhaps disappeared) in the period since the book was researched; THE HUMANITIES COMPUTING YEARBOOK (volume 1 = 1988), edited by Ian Lancashire and Willard McCarty at the University of Toronto (Oxford, starting 1989), which updates and considerably expands some of the information in Hughes; Susan Hockey, A GUIDE TO COMPUTER APPLICATIONS IN THE HUMANITIES (Johns Hopkins 1980), which provides a good overview of the sorts of things that researchers can do with computers, although the machinery has changed radically since 1980. To try to keep up with the various aspects of humanities scholarly computing today (articles, conferences, books and reviews, etc.), consult the special journals in the field, especially LITERARY AND LINGUISTIC COMPUTING (Oxford) and COMPUTERS AND THE HUMANITIES (Kluwer). And see that your local academic library has them! As I have mentioned before, and will continue to mention, there often is a frustrating rift between the regular ongoing activities of the older professional societies and the relevant activities of computer oriented research. Journals such as JBL, JAAR, RSR and BASOR need to be more aggressive about including information on computer tools -- e.g. reviews of software or of electronic data that is directly relevant to the clientele, or even review articles covering computer related areas. By all indications, the number of scholars who would profit from such innovations is significant and growing. These needs deserve to be addressed in the traditional contexts. 
Let the appropriate persons in your professional societies know if you share these concerns, and volunteer to be involved (e.g. as reviewers) if you are in a position to do so. But I am avoiding the main announced subject of this OFFLINE column, namely CAI/CAL. By this I mean the use of computers in the formalized student/teacher relationship, either by actually incorporating computers as part of the formal classroom experience or by using them as a formal part of the extra-classroom components of the course (laboratory type drills, homework assignments, supplementary assistance/tutorials, examinations, teacher/student communication and interaction, etc.). Most OFFLINE readers are probably actual or potential CAI/CAL users. My suspicion is that in general, except for language courses, the formal use of computers in classrooms is rare in religious studies and associated fields. Although I do not myself teach language acquisition at a formal level, I would almost certainly try to use CALL (Computer Assisted Language Learning) techniques if I did. Some of the reasons are obvious, in view of the subject matter. Linguistic relationships are mostly predictable. Practice and standard drills are usually important for acquisition of language skills. What has traditionally been done in language textbooks and workbooks can be done even more effectively through interactive computer software. The students can move at their own pace, to some degree, learning and practicing everything from the formation of letters (e.g. using an electronic drawing pad) and even sounds (on appropriate equipment) to the analysis of lexical forms (morphology) and relationships (syntax), to the simple meanings and usage of words and constructions, to more advanced exercises. They can test themselves, with cleverly written software assisting them to understand their predictable errors. They can play educational games (e.g. hangman) that help overcome feelings of monotony or drudgery in the learning experience. 
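The drill-with-feedback behavior just described -- self-paced testing, with the software commenting on predictable errors -- amounts to little more than table lookup. A minimal sketch in modern Python (the vocabulary, the "predictable confusion" table, and all names here are hypothetical illustrations, not any actual CALL package of the period):

```python
# Hypothetical Greek vocabulary drill: the student supplies English
# glosses, and known confusions get a targeted hint.
VOCAB = {"logos": "word", "ergon": "work", "anthropos": "human being"}

# Predictable errors: (word asked, wrong answer given) -> targeted hint.
CONFUSIONS = {
    ("ergon", "word"): "Perhaps you are confusing ERGON with LOGOS.",
    ("logos", "work"): "Perhaps you are confusing LOGOS with ERGON.",
}

def check_answer(word, answer):
    """Return (is_correct, feedback) for a single drill item."""
    answer = answer.strip().lower()
    if answer == VOCAB[word]:
        return True, "Correct."
    hint = CONFUSIONS.get((word, answer))
    return False, hint or "No; try again."

def drill(responses):
    """Grade a prepared list of (word, answer) pairs; return items missed."""
    missed = []
    for word, answer in responses:
        ok, feedback = check_answer(word, answer)
        if not ok:
            missed.append((word, feedback))
    return missed
```

A student working at her own pace would of course answer interactively rather than from a prepared list; the point is only that "cleverly written software assisting them to understand their predictable errors" can be a small lookup table rather than deep magic.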
In this type of educational situation I have no serious ambivalence. For some examples of what is available, see the literature mentioned above (especially Hughes' book and the YEARBOOK). The same factors that make CALL attractive for foreign language learning are also applicable to some other subjects, to some degree. A wide variety of programs have been developed to assist with English composition, for example, both for native English speakers and for "English as a Second Language" (ESL). This is still "language learning," of course, but at a more advanced level. And to the extent that one is uncomfortable about dictating that everyone should write in the same general style (e.g. regarding sentence length, use of particles, repetition of vocabulary, sentence structure, punctuation, etc.), the appeal of using software that might suggest such a norm may lessen. Far along at the other end of this discussion is the often controversial topic of "Artificial Intelligence" (AI). To the extent that our machines can be programmed to think like we do, to that extent our software for such things as English composition can be sufficiently sophisticated to avoid being guilty of deceptive oversimplification. In many ways, subjects such as music or art would seem to have aspects similar to language learning, for which CAI/CAL approaches have obvious value, to a point. And some teachers have been exploiting these situations. As the computer technology gets more sophisticated with respect to sound and graphics, its appeal for such fields will become all the greater. Those of us who experienced the rock video demonstration presented by Virginia and Norman Badler to the CARG session in Atlanta in 1986, as an example of possible archaeological modelling, will appreciate the possibilities. 
More recently, the Perseus Project directed by Gregory Crane at Harvard is combining various linguistic, literary, archaeological, artistic, topographic, encyclopedic, and other approaches to knowledge in a data bank for classicists that will enable a much broader and more integrated approach to CAI/CAL in that subject matter. Although my students, especially the graduate students, are well aware of the value of computers in the sort of work we do, I have not yet set up any formal CAI/CAL components in my courses. (I am not counting the times that individual students are encouraged to do some searching of ancient texts on IBYCUS or other systems; or when I wheel a terminal into class to illustrate some point or another.) The reasons for this are varied, but probably the most important one is that I do not know of any appropriate software already developed for the classes I am teaching (mostly on Judaism and Christianity in the Greco-Roman worlds), and I have not considered it a priority to take time to develop such software at this point. In the very near future, I plan to try at least one CAI/CAL experiment, using John Abercrombie's CINEMA software that assists the instructor in adding various supplementary types of information and enlightenment to a video disk production. We propose to annotate the film "The Last Temptation of Christ" for use in a course on Jesus. There are already humanistic subjects and courses (other than CALL types) for which appropriate CAI software does exist. Information about these projects is available from a variety of sources. The large microcomputer manufacturers have funded such developments and have established such aids as the periodical entitled WHEELS FOR THE MIND (Apple Computer, P.O. Box 810, Cupertino CA 95015), on Mac oriented projects by and for educators, or the "WiscWare" distribution center for academic software prepared with IBM Educational Grant funding (tel 800-543-3201 or 608-262-8167). 
A variety of user groups (or "special interest groups" = SIG) have been established over the years and have often collected and distributed CAI/CAL software of various sorts and in various stages of development as well, with organizations such as "National Collegiate Software" (Duke University Press, 6697 College Station, Durham NC 27708) refining that model. Indeed, the problem is not so much locating sources for CAI/CAL type software as actually choosing from the virtually undifferentiated mass of available items some that might possibly be relevant and operative for your purposes. That also takes time, and often at least an intermediate level of computer competence. But even if one has time to discover and search the available catalogues of possibly useful CAI/CAL software, and is able to obtain and test potentially attractive items, and actually finds something appropriate, what then? To my knowledge, there are few schools equipped to deal with serious CAI/CAL humanities situations, my own included. Our audio-visual center has not yet made the transition to computer centered AV, although it is moving in that direction. We have one small "humanities" workroom equipped with some IBM and some Apple Mac machines, but hardly enough to conduct a class of 20 or more students comfortably. And there are many demands on that room, especially for use by the "computers and humanities" courses as well as for CALL needs. Probably more than half of our students have their own computers, but they are not all the same make of computer and usually are not networked to the University. Thus there is no easy way to make classroom use of computers or to require extra-classroom assignments that go beyond personal wordprocessing types of activity or to communicate with the students electronically. It will probably be several years before adequate facilities and software exist to make effective classroom CAI/CAL in the humanities a reality for most of us. 
We all have our own approaches to the subject matter, and it would be unusual to find that software developed by one teacher in a particular situation would be fully acceptable to another. Nevertheless, just as we now make use of required "textbooks" to provide a common base of knowledge for our students, it should soon become commonplace for course assignments to include electronic data and approaches. General "authoring systems" are being developed which establish a context in which the individual teacher ("author") can tailor the subject matter to the computer program. Whether and to what extent these will prove successful in humanities courses remains to be seen. For predictable "quiz" type of activities, with the instructor supplying (modifying, annotating, etc.) questions and acceptable ranges of answers (which could be automatically graded and to some extent commented upon), this could be an obvious benefit. For those of us who do not normally use this type of quiz procedure, however, it is not so attractive. And as the desired coverage becomes more broad and less predictable (as in historical "modelling," where students can change certain variables in the past historical situation to determine what might have happened if ...), it becomes more difficult to develop effective software. Both in the short run and in the long run, the CAI/CAL possibility that attracts me most is development of the electronic "textbook," similar to the Perseus project for classics mentioned above. Compact disk (CD-ROM) and related technologies are making it possible for a wide assortment of data and data types (text, notes, bibliography, graphs, maps, still and moving pictures, sound and music, etc.) to be integrated on a single disk, which can then be accessed in a myriad of combinations directed by the user. These are the textbooks of the near future, and they will be both exciting and highly effective as educational tools. 
They need to be produced by experts in the subject matter, working in cooperation with experts in the presentation technology. They depend on the availability of a large range of electronic data that can be successfully integrated, and on the sophisticated software that can make it easy to get at such data in a wide variety of configurations. Assignments of the "find out all that you can about ..." type will challenge the skills of the individual users to move through, and beyond, the available data in physical contexts of their own choosing (at home, in the library, in labs), and to enter class more solidly prepared for discussion and presentation (I'm such an optimist!). It is not an easy road to the realization of such hopes. Encoding of even textual data is still in its infancy in most fields, not to mention graphical and sound data. While CD-ROM technology is becoming widely used, it still tends to be located in libraries and labs rather than on "private" machines, and it cannot yet be considered "inexpensive." But things change rapidly (do we need to be reminded!?) and our very anticipation of the future may itself be an important factor in making it arrive more quickly. One can already obtain glimpses of that future in such products as the OXFORD ENGLISH DICTIONARY on CD-ROM with special searching software, or in GROLIER'S ENCYCLOPEDIA on CD-ROM, including pictures and sound. Somewhere down the line I see an educational environment in which the students and instructors have powerful portable machines with CD-ROM type devices and electronic hypertextbooks for consultation in private or in class, and a resultingly enhanced educational experience for all. 
There will be stations for plugging into networks or into special printers for particular needs, and "drill" type of assignments, and electronic quizzes, but the heart of it all will focus on interactive information access on the one side, and interactive discussion with knowledgeable students and instructor on the other. Thus assisted by computers, the instruction and learning will hopefully be greatly enhanced. The SBL ATARI ST User Group Newsletter 4.1 (January 1990) is now available from Douglas Oakman, 870-120th Street South, Tacoma WA 98444 (tel 206 537-2376; OAKMAN_D1@PLU1.BITNET), with the usual menu of valuable information (including some attractive print samples in Hebrew, Greek, Coptic and Syriac). Two articles in the September 1989 issue of ACADEMIC COMPUTING on the ideal "scholar's workstation" may be of special interest to OFFLINE readers, and also the article on "The Future of the Scholarly Journal" (in an increasingly electronic environment). The American Philological Association has established an Editorial Board for Non-Print Publications which will review electronic materials including scholarly literature, utilities and research tools for possible publication with Scholars Press. This is a step to be lauded and emulated. //end #27// ---------------------- <> by Robert Kraft [30 March 1990 Draft, copyright Robert Kraft] [HUMANIST 30 March 1990] [Religious Studies News 5.3 (May)] [CSSR Bulletin 19.3 (September)] ---------------------- OFFLINE was created originally to serve as a point of contact in what would hopefully be a two way street between interested religious studies persons and "computing humanists." Maybe I wouldn't have put it exactly that way in 1984, but in retrospect, that's what I had in mind. New things were happening very rapidly in areas of computer technology, new vocabularies were being created, new approaches tested, old things done differently. 
Those for whom all this was potentially, if not yet actually relevant needed to know about these developments. And their input (see how naturally I speak "computerese" -- and you understand it!) was important to help insure that the computer enthusiasts did not ignore significant issues or create difficult problems that might come back to haunt us all at a later time. In the early days of OFFLINE, there were attempts to create a glossary of computer terms and ideas, and to encourage discussion of standards for foreign character recognition. User groups (now often called SIGs = Special Interest Groups) were organized whenever practical, and mention sometimes was made of significant computer publications of possible interest to readers. Seemingly pertinent new developments of hardware and software received notice. The tasks were pretty obvious, following a model of trying to take an audience from very little or no knowledge to a level of respectable understanding. The main goal was, without apology, to help speed up the acceptance and effective use of computer related research within the scholarly community -- an end result that seemed to me not only extremely useful but also ultimately inevitable, in a general sense. Feedback from the audience of OFFLINE never became much of a factor. Occasional comments and suggestions were made -- my unfortunate attempt to use an analogy from the medical world (computer viruses and AIDS) once generated two heated complaints -- but for the most part you let me go my own way without much assistance or interference. For the most part, I have assumed that the column was meeting some real needs, although on a few occasions I heard from some readers that they enjoyed the column even though it was mostly over their heads. A mixed blessing! 
Many of us have grown along together in OFFLINE, so that what was once new and mysterious is now mostly taken for granted, and what interests us most goes far beyond the level of simplicity we once desired. This growth, of course, creates a dilemma or two. On the one hand, there are always newcomers to the area of discussion, who need to be led along carefully and deliberately. On the other, there are many more "advanced" users of computer technology who also merit recognition and assistance. And in between are various shades of interest and expertise, all of them deserving at least occasional attention. How is it possible, and is it even desirable, to address all these different levels? It is difficult to get our relatively uninvolved students to do background preparation. They seem to want to be fed the necessary information, hopefully in an entertaining way, in class. So we tend to bully them with quizzes, reports, threats of poor grades. But I have no such leverage with the readers of OFFLINE. All I can do is to try (and hope) to spark some interest that will carry beginners to a next level. There are assignments that can be made. Browsing through back issues of OFFLINE itself might be of help to some. In fact, the assignment to obtain that material on diskette (see instructions at the end of the column) and learn how to search and browse it effectively holds various benefits, despite the fact that much of the information will be seriously out-of-date in this rapidly moving world. Much more organized, and full of both general and very specific information for all levels of interest, is the often mentioned () volume by John J. Hughes (Zondervan, 1987). It is still an excellent investment, despite the passage of time, and will help orient readers to the major areas of computer usage in most fields of study, whether "biblical" or not. The biggest help for a beginner, however, is getting started. 
There is only so much that one can learn about computers in the abstract, and it is very difficult (at least for me) to make much sense of it in the abstract. Take the plunge. Get access to a machine that has proved its value in academic contexts (for most of you that will mean an Apple Macintosh or an IBM/DOS type, or perhaps an IBYCUS SC), ask someone to help you get started, and do something -- write me a letter, link up to the library or to a bulletin board, play with an electronic text. If there are no machines conveniently available for you to use, and you must explore the labyrinth of deciding what to purchase, and price is an important factor, don't be afraid to look around for used systems (e.g. in newspaper ads) or mail-order bargains. Of course there are risks in such an approach, and of course you need to exercise as much care as possible, but those caveats are always true -- even for new and expensive systems -- and you need to start somewhere/somehow. Starting is important. My first system, a decade ago, was a used Commodore Pet, Business Machine version, purchased through a newspaper ad. It was a family machine -- games, educational programs, a spreadsheet for taxes and records -- with only a "40 column" screen (i.e. it could show only 40 letters per line), but an 80 column printer. And a BASIC programming language built in. I even managed to get it to print Greek letters, and learned something about programming at the same time. I still have that computer system and it still works, although none of us use it anymore because it can't communicate easily with the systems that have become popular in the intervening period. Make me an offer? You probably won't. The point is that for a relatively modest cost it was, and still is, possible to get started with a reliable and versatile machine. If I had to start today, with my ancient language needs, I would be shopping around for an Apple Macintosh (they are available used and/or reconditioned). 
Without those needs, the immense world of IBM and IBM clones would also be considered. A "remaindered" Toshiba 1000 laptop for under $700 (plus a printer) might appeal to me in that situation (check the ads in the plethora of personal computer magazines!). Basically, I'm a frugal person. OK, then, I'm cheap -- a penny pincher. Blame it on the Great Depression and WWII rationing which helped forge my youthful habits, or on some personal flaw; for whatever reason, it's true. I would not go out and spend gobs (or even daubs) of money on alluring software unless I was really sure I needed and wanted it. There is lots of good freeware and "shareware" (pay a small fee if you like it) available, especially in the IBM/DOS world. Maybe you will ultimately want a more flashy -- and more expensive -- wordprocessing package than the one I am using to write this column (PC Write, which is shareware), but you don't have to make that investment at the outset. You can start cheap, and build up to what you determine will be most useful to your needs. But do start. And let me know if there are specific ways OFFLINE can help. I suspect that most readers of OFFLINE, at least in its printed manifestations (as contrasted to the electronic version that goes to the HUMANIST discussion group on BITNET), fall somewhere between beginners and experts. Most of you own your own computers, others have regular access to such machinery, and you write papers, reports, books -- maybe even letters -- through this powerful "wordprocessing" tool. Some of you use "spreadsheets" to do your accounts and taxes. Some of you keep track of bibliography and addresses and other information that is easily stored in its various subunits that can then be reconstituted in various combinations through a "database" type program. 
A few of you have even plugged into the vast and growing universe of electronic "bulletin boards" and communication possibilities by using a "modem" and telephone lines, or by being "hardwired" into an existing network. As things have developed, it is to this varifocused group that OFFLINE has mostly come to address itself. Yet it has often been a flight in the dark. You have seldom made it known to me what sorts of information might interest you most. Perhaps this is because the possibilities are so immense and variegated that you wouldn't know where to begin. Perhaps you are too timid, and without an invitation would not presume to make such suggestions. Well, let me remove that excuse. I want to know what you would like to know about using computers in your work, and will even prod you with the old "multiple choice" approach. Would you be interested in discussion of any of the following in OFFLINE?
- getting and using electronic textual materials
- accessing remote library resources from the computer
- the pros and cons of database storage and retrieval
- plain and fancy printing ("desktop publishing")
- using electronic networks and communications
- peeping through the keyhole at computer programming
- OTHER (please specify)
Not that I would try to handle all such subjects by myself! But expert assistance is available, and the idea of coopting guest columnists or contributors appeals to me very much. Please let me know how you can be helped. Otherwise, you remain victims of my perceptions of what you should know! "Experts" have been described as people who know more and more about less and less until ultimately they know everything about nothing! With regard to computers, I'm not there yet. I guess I'm more of a generalist. But all sorts of expertise exist in the computer world, as in traditional academia, and it has become a goal of OFFLINE to keep the experts interested and involved without devoting the column to their needs and detailed foci. 
There are appropriate places for experts to meet and mingle -- the journals called Computers and the Humanities and Literary and Linguistic Computing, to name only the most obvious. There are workshops and conferences, some of which have been reported upon in OFFLINE. What has not been happening with sufficient regularity, from my point of view, is the interchange at the expert level between traditional "field-oriented" scholarship and the new "computer-oriented" research. In this context, OFFLINE wants to be a mediator and a gadfly. Perhaps you are tired of hearing the following sentiments, but they still seem to be in need of saying. Have you seen many -- any? -- reviews of field-specific computer data or software in the traditional scholarly journals? (You can find a wealth of such information in John Hughes' aforementioned ! Come to think of it, has it been reviewed in the journals?) If you want to obtain an electronic version of the Bible in English, for example, do you know what is available, how accurate it is, how you might make use of it, etc.? When new study editions of Bibles (such as the ), or new Bible concordances are published we usually can consult reviews by the experts as to the reliability and utility of these materials. But procedures have not yet been developed in most of the traditional professional societies for similar evaluations of electronic data and tools, which can already take the user far beyond anything offered by conventional static printed books. Have you, as users, made known your interests to those in charge of the review sections of your professional journals? Have you, as experts, volunteered to take part in helping to develop such avenues of information? We are living in a period of rapid adjustment and transition (just ask your local librarians!). Things will go much more smoothly if we all get involved at the appropriate levels. 
Computer assisted research that finds its way into printed form is gradually making its impact on traditional fields of scholarship, to be sure, but general information about how one can take advantage of the new tools and approaches is relatively scarce. Susan Hockey's very "old" (Johns Hopkins, 1980) is still very useful in showing what can be done through various approaches. For specific examples of such applications, the bibliographies in Hughes are full of leads. But where will you find out if the software tool that you might like to use to facilitate your special research, or the data on which you want to run it, are reliable? How do you register your observations that the available software doesn't really do what you, and others like you, might want to do? Where do the traditional expertise of the scholar and the expertise of the programmer confront each other, for better or worse? We need to be paying more serious attention to these matters if we have any hope of taking full advantage of the new powers at our disposal. The story is not one of complete frustration, by any means. Much good will exists in many of the traditional societies, and a new group of experts is emerging who can help foster changes where they become recognized as desirable. Electronic publication as an end product, not just as a means to printed output, is gaining impetus in the professional societies (e.g. Modern Languages Association, American Philological Association) as well as among the presses (e.g. Oxford, Zondervan), and some exciting new "experimental" products are on the horizon (e.g. from Harvard's PERSEUS Project). More and more texts are being made available from various computer oriented sources and in various formats (e.g. on diskettes, on CD-ROMs, with or without accessing software) -- e.g. recently a neatly packaged machine-readable version of Hegel's (trans J.B.Baillie) arrived from Georgetown's new Center for Text and Technology, as the first in a projected series. 
The new Rutgers-Princeton Center for Machine Readable Texts in the Humanities is coordinating its activities with the Oxford Text Archive and other repositories in producing a text-inventory that will become available in electronic catalogues through the library systems. The libraries themselves are acquiring more such materials along with the machines and technical expertise to use them. Archiving existing electronic materials (e.g. by SBL) and establishing broadly based standards for dealing with them (by the international Text Encoding Initiative project) are receiving long overdue attention. All of these things need encouragement and support from individuals and professional societies as well as from educational institutions and from private and government funding sources. Certainly monetary support is most welcome -- the relatively small budget of the National Endowment for the Humanities can only spread so far, and projects that do receive awards almost always need to raise "matching funds" in addition. The fact that you may not be asked directly for this sort of help does not mean that it would not be appreciated. Identify a pet project and support it. But there are other important ways to offer support as well. Universities are constantly involved in reevaluating their programs and resources, and if they do not perceive the value of a given activity, it is not likely to flourish, especially when money is tight. But with regard to computing activities, the decision makers seldom have appropriate input from humanists regarding humanist related needs. You can help by alerting your administrations, including libraries and computing centers, to your actual and potential interests in these connections. Are your humanities departments aware of and plugged into the wealth of available resources? Are your libraries addressing computer related needs (usually in coordination with the computer centers)? 
Finally, have you ever considered dropping a note of appreciation to the administration of an institution (not necessarily your own) that sponsors an activity from which you gain benefit? That sort of support can also be very important in times when hard decisions are being made on the basis of relatively limited information. And there is a large agenda that remains, which will also grow as we gain new insights and recognize new opportunities. Among the things that concern me most at this point are:
(1) the continued need for getting reliable information to scholars and students who do not regularly consult the humanities computing publications and discussion groups -- the idea of a sort of "syndicated" column offered to various traditional newsletters has been mentioned before;
(2) the need for appropriate reviewing in the traditional journals of relevant computer related products, as mentioned above;
(3) sponsorship of representative demonstrations and exhibits at regional, national and international professional society meetings, to make it possible for people to see and even participate in what is happening -- perhaps it is worth developing and supporting a project team for this purpose as a short term stimulus;
(4) input from the societies (based on the opinions of their experts) regarding priorities for data encoding (which texts deserve immediate attention?) and for software development (what do the users want to do with the data?);
(5) support for new, at this point "experimental," ways of using the new technologies to advance education and scholarship aggressively -- e.g. electronic textbooks that take advantage of the ability to mix text and graphics and sound in various user-determined combinations.
It seems to me that the stage of raising awareness, in general, and of overcoming apprehension is well under control. The period we have entered calls for more aggressive approaches to producing appropriate results based on this growing new awareness. 
//end #28// ---------------------- <> by guest columnist James O'Donnell, for Robert Kraft [11 June 1990 Draft, copyright Robert Kraft] [HUMANIST 20 June 1990] [Religious Studies News 5.4 (Aug)] [CSSR Bulletin 19.3 (September)] ---------------------- One of the topics of special interest to a number of readers who responded to the invitation in OFFLINE 28 is how to access remote libraries. Since everything I know about that topic has come from my colleague in Classical Studies, Jim O'Donnell, it seemed sensible to ask him if he would write a guest column for OFFLINE on that subject. He agreed, to the benefit of us all, and the results follow. The directions work for me, but then, I am on the same computer system that Jim uses. You may need to put a bit more energy into making them work, but keep at it -- the results are well worth it! Thanks, Jim, for sharing your expertise. "Ad hoc, ad loc, quid pro quo, so little time, so much to know" was the complaint of the Nowhere Man in the animated film _Yellow Submarine_. The paradox of the computer age is that it makes it possible to learn more things in less time, while at the same time making many more things for us to know. At Bob Kraft's request, I am going to sketch here one advance that has been revolutionary for me and for others. Any working scholar with experience in e-mail will find it all rather transparent; the scholar who has not yet gone on-line with the world will find it rather less transparent at first, but should be assured that in fact it is all a piece of cake. If you can do _anything_ on an IBM or Macintosh, you can summon the wisdom of the world down the wires into your machine: it's easier than learning to use a word processor. The first principle is that the great research libraries of the world have been and are continuing to make their collections more accessible through computer cataloguing. 
Most major university collections now have at least part of their collection in an on-line catalogue and most users are now accustomed to looking not only in a card catalogue but also in one of a row of terminals usually found standing in the entrance hall of the library. The second principle, specific to the world of computers, is that any information in any computer anywhere in the world is theoretically available to any other computer anywhere in the world, including the one on your desk. In practice, there are often obstacles, but happily librarians genuinely enjoy minimizing those obstacles. Many exciting developments remain, but much has already been done. What can you do? From any modem-equipped telephone anywhere, you can now reach a huge variety of library catalogues. Now, catalogues are not the same thing as books, and it is certainly frustrating to learn that a book is on a shelf someplace where you can't go; but it may be useful to know that anyway. Among the uses of the kind of library searching that I will describe below are these: browsing specialized collections in remote libraries, confirming the existence of and locating relatively uncommon volumes, searching in catalogues better equipped than that of your home institution (this can be useful in several ways), and the exhilarating sense of intellectual play that comes from nosing through any collection of books anywhere. Some examples. I grew up in New Mexico and Texas. The University of Pennsylvania library is not specially strong in southwest regional history and sociology; but the collections of the Universities of New Mexico, Colorado, and California (to name the ones I've had access to) are much stronger. I can learn of the existence of materials, get confirmed bibliographical records, and (if it came to that) decide which collection(s) might be so strong as to be worth a visit sometime. 
Often I find myself in possession of a defective and obscure bibliographical reference: title, author, date, with perhaps the title slightly garbled. The Penn library doesn't have it on-line. A little intelligent snooping, and I find it in the University of California system: I get a confirmed title/author/place/date record and take that the next day to our Interlibrary Loan office, where they do a much better job of getting the book quickly than they ever could have with the defective record with which I started. But, you might wonder, wouldn't the National Union Catalogues have the same information? Probably, but: (1) I just finished rearranging all my books at home and I've confirmed that I _don't_ have room on my shelves for all the NUC volumes, not even merely (sic) the pre-'56 imprints; and (2) the computer databases can be searched in ways more cunning than the printed volumes or a card catalogue. If your reference is really defective as regards author or title, the computer lets you do searches by parts of words, keywords, subject, and in some cases even call-number: it's much easier to turn a bad reference into a good one from a keyboard than by walking up and down helplessly in front of a row of NUC volumes. You also need to know less about library cataloguing and filing conventions than you used to, and that can be a great time-saver in obscure cases. Perhaps the most important use, however, is for gaining access to catalogues better than that which your home institution can offer. At Penn, for example, the full computerized catalogue covers items received and catalogued since about 1968/70. That means there's an awful lot of older material just not on line; some recent additions have put defective and partial records of a lot of older stuff on-line, but those additions, while they may help me locate a book that is in the collection, are no substitute for the full catalogue information that a regular catalogue record can allow. 
My most idiosyncratic use of the catalogues is not for everyone: I use them as a grand intellectual toy. I have always found it an important part of the life of the mind to browse, rummage, snoop, and generally prowl the libraries. On a leave a few years ago, I devoted some time to reading through the shelves of the Bryn Mawr College library collection: I got through all the philosophy/religion, history, and literature shelves over the course of a year. Just walking along looking at things, pulling off whatever struck my fancy, and sitting down to sort them as often as my arms were full. A richly useful intellectual experience. With the computer, it is possible to do half of that: you can't pull books out and look at them, but you can browse and snoop much more widely, in much bigger collections. But no description of possibilities can be prescriptive, only suggestive. Whatever you can do with a library catalogue, you can do better and faster from your computer: if that means something to you, read on for directions. _First_, you need a PC (Mac or IBM are identical for these purposes) with modem and modem software. _Second_, you need a connection to the great world. Characteristically, this will be furnished by your home academic institution. You will probably have gotten that connection in order to do e-mail of some kind, or perhaps to have access to on-line student records for registration or the like. Some institutional connection is apparently essential: inquiries have failed to find a commercial service that offers the right kind of interactive link to INTERNET. There may be an easier way: snoop around academic institutions close to you. See if any of them allow outsiders to reach the level of access necessary to get to INTERNET without a formal account or password; institutional policies will vary widely. 
_Third_, you will need to find out from your local computer gurus how to get on to INTERNET, the nationwide computer network that links the libraries (and many other facilities). Usually this is easy. For me, it means giving a single command on first logging on to the local computer (I merely type "TELNET" and hit a carriage return), then I get a new prompt at which I type the letter T and the "address" (either names or numbers: example below) I seek. In a matter of seconds, I am linked to the computer I seek and can begin logging on there. BUT YOU MUST FIND OUT FOR YOURSELF WHAT THE EXACT PROTOCOLS ARE AT YOUR HOME INSTITUTION: NO TWO SYSTEMS ARE EXACTLY ALIKE. Now _fourth_, you need more information. The easiest way to get the information you need is through e-mail. Ask your local e-mail gurus how to acquire files from remote list-servers. On many systems, this is as simple as issuing a one-line command from the basic system prompt; others require you to send a short mail message. On the common VM/VMS systems, the message you need to send is:
TELL LISTSERV AT UNMVM GET INTERNET LIBRARY
The crucial elements are the address (UNMVM: a computer at the University of New Mexico) and the filename (INTERNET LIBRARY). This file is a collection of very specific and explicit instructions for gaining access to libraries all over the country. (If you have had a copy of this file around for some time and done nothing with it, now would be a good time to get a new copy: it is updated regularly, with new facilities being added all the time.) Read this file. (Dr. Art St. George of UNM deserves at least a medal for his patient work in gathering and updating this material: it is really the browser's bible.) At this point, you want to follow your nose and your inclinations. Which libraries are of most interest to you will be a matter of taste, and trial and error will confirm them. 
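For readers who eventually want to automate such sessions, the cycle Jim describes (give a command, read back the screens the remote catalogue sends) can be sketched in a modern scripting language. The following is a minimal illustration in Python, not a description of any real catalogue system: the transport is passed in as plain file-like objects so the same logic would work over an actual TELNET-style socket connection, and the host names, port, and the blank-line convention are hypothetical stand-ins.

```python
# Minimal sketch of the connect/command/reply cycle behind a TELNET-style
# catalogue session.  The transport is passed in as a pair of file-like
# objects so the same logic works over a real socket or a test harness.
# Hosts, prompts, and reply conventions here are invented examples.

def send_command(out_stream, in_stream, command):
    """Send one command line; collect reply lines up to a blank line."""
    out_stream.write(command + "\r\n")
    out_stream.flush()
    reply = []
    for line in in_stream:
        line = line.rstrip("\r\n")
        if line == "":          # assume the remote ends a screen with a blank line
            break
        reply.append(line)
    return reply

def open_catalogue(host, port=23):
    """Open a plain TCP connection (what TELNET does under the hood)."""
    import socket
    sock = socket.create_connection((host, port))
    return sock.makefile("w"), sock.makefile("r")
```

With a live connection one would write something like `w, r = open_catalogue("UMCAT.UMD.EDU")` followed by `send_command(w, r, "CAT")`; real sessions also involve terminal-emulation details that this sketch deliberately ignores.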
For each library discussed in the file, there will be good instructions on how to log on to the individual facility. Here's where an example helps, so I'll walk you through the University of Maryland, which is a very easy system to approach. First, on my machine, I have given the link-to-internet command:
TELNET
Then the call-Maryland command:
T 128.8.161.199
The number 128.8.161.199 I learned from the INTERNET LIBRARY file. Some places also have alphabetical addresses like those familiar from e-mail addresses (e.g., PENNLIB.UPENN.EDU); if you have addresses in both forms and one doesn't work, try the other; for Maryland, you would use UMCAT.UMD.EDU. Anyway, after you connect you get a rather austere prompt, but from INTERNET LIBRARY, you know to answer:
CAT
At this point, you can stop reading this article and start looking at the help screens you get on the computer. These will be the same screens you would get if you were in the catalogue department of the library itself. You will want to experiment with what you can get. One thing I like (and don't have at home at Penn) is the capacity to do keyword searches, e.g.:
k=water buffalo
That is a command that will get you many more "hits" than a subject search: subject searches are restricted to the kinds of things that librarians have explicitly selected and ratified, the kinds of things that were formally listed on the card in the old card file; but keyword searches look at the whole record, and so if there is a book with "water buffalo" in the title, you will get it (even if it's a novel); and if somebody wrote a book under the pseudonym "Clem Water Buffalo," by golly, you'll hit it. You may therefore get more dross with such a search, but also more gold. 
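The difference between the two kinds of search can be demonstrated concretely: a subject search looks only at the librarian-assigned heading, while a keyword search scans the whole record. A small sketch in Python follows; the catalogue records are invented examples, not real holdings.

```python
# Why a keyword search casts a wider net than a subject search:
# the keyword search scans every field of the record, the subject
# search only the librarian-assigned heading.  Records are invented.

CATALOGUE = [
    {"author": "Clem Water Buffalo", "title": "Prairie Tales",
     "subject": "Fiction"},
    {"author": "A. Smith", "title": "The Water Buffalo in Asian Agriculture",
     "subject": "Agriculture"},
    {"author": "B. Jones", "title": "Irrigation Methods",
     "subject": "Water supply"},
]

def subject_search(term, catalogue=CATALOGUE):
    """Match only against the subject heading."""
    term = term.lower()
    return [r for r in catalogue if term in r["subject"].lower()]

def keyword_search(term, catalogue=CATALOGUE):
    """Match against every field of the record."""
    term = term.lower()
    return [r for r in catalogue
            if any(term in value.lower() for value in r.values())]
```

Here `keyword_search("water buffalo")` hits both the agriculture title and the novel by the pseudonymous "Clem Water Buffalo", while `subject_search("water buffalo")` hits neither: more dross, but also more gold.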
(I've also noticed that SUBJECT headings in these catalogues are the ones most prone to typographical errors: in the old days, the subject heading typed on the original author card was not a crucial piece of information: it would be retyped correctly on the actual subject card. But in the computer record, _that's_ the place the computer looks for the subject heading. Curtailing your search target is a good idea here.) And always remember, a search that doesn't work out takes a second; you can usually think of a way to improve your question to get more effective results. A=JONES is going to get thousands of "hits"; think about how to reduce the range. When in doubt, of course, always use the shortest possible search target: don't ask for "water buffaloes" because you'll _only_ get the plural (and even then somebody might have spelled it "buffalos" and you'd miss that); ask for the singular and the plural will come along at no extra charge; and in fact "water buf" is likely to get you everything you want, and the fewer characters you type, the fewer chances to make a typing mistake and have to start over. So, go ahead, play. Browse, snoop, take notes. When you're done, hang up the modem or disconnect (sometimes systems will have explicit logoff instructions; if you know them it's polite to use them, but in an emergency or if you simply don't know them, just breaking the connection will suffice). Start again. Go back to INTERNET LIBRARY and look for someplace else to call. Note-taking. Depending on your software, your hardware, and the characteristics of the individual library you are calling, you may be able to "LOG" your call and keep a record of your results on disk. Sometimes this takes a little practice (at Penn, for example, there's an undocumented alternate "terminal type" you need to tell it to emulate in order for most communications software packages to be able to log successfully). 
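The "shortest possible search target" rule is easy to demonstrate: searching on a stem catches the singular, the plural, and variant spellings in one pass. A small sketch in Python, with invented titles:

```python
# Jim's rule of thumb: search on a stem ("water buf") rather than a full
# word, and singular, plural, and variant spellings all come along at no
# extra charge.  Titles below are invented examples.

TITLES = [
    "The Water Buffalo",
    "Water Buffaloes of the Mekong",
    "Water Buffalos and Their Keepers",   # variant plural spelling
    "Buffalo Bill's Wild West",
]

def hits(target, titles=TITLES):
    """Simple substring match, case-insensitive."""
    target = target.lower()
    return [t for t in titles if target in t.lower()]
```

Searching on the full plural `hits("water buffaloes")` finds only one title and silently misses the "buffalos" spelling; the stem `hits("water buf")` finds all three water-buffalo titles while still excluding "Buffalo Bill's Wild West".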
Once you do that, whatever you see on screen will be recorded on disk for later editing and manipulation. But almost every computer, modem software, and printer combination will allow you to PRINT SCREEN, and that is in fact the way I usually handle it. I get a screen with interesting information, hit the PRINT SCREEN button, and nudge my laser printer to eject the paper, and I have a printed record. 99% of the time, this is fine, even wonderful. Other facilities: There are two main proprietary systems that enrich the possibilities of on-line library work, OCLC and RLIN. Both of these systems, which report the holdings of many libraries, require special access codes and, at least indirectly, payment of fees. Practices vary sharply from institution to institution, but at Penn, we may get access to RLIN and our own account number on request, but the library administration periodically reviews the costs and benefits and reserves the right at some point to pass the charges on to users. We do not have access to OCLC. Consult your local institution (usually somebody in the library) for information about these systems. These systems have their own special strengths, special databases, etc., and RLIN, for example, is the best source I have for information about very new books -- sometimes even finding out about books before they are actually published, as the Library of Congress posts information registered by the publishers. As you play around with INTERNET LIBRARY, you will find that some institutions have already begun putting other services on-line: I would like to have a good on-line encyclopedia, and would be happy to have as many reference databases as possible handy. At some institutions, you can already check circulation status of a book and leave, by computer, a request that a book be recalled or merely paged from the stacks and held for you at the circulation desk. 
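The "LOG" facility described above is essentially a tee: everything read from the remote system is also appended to a transcript on disk. A minimal sketch of that idea in Python follows; real communications packages do this at the terminal-emulation layer rather than in user code, and the stream contents in the example are invented.

```python
# Sketch of the session-"LOG" idea: wrap the incoming stream so that
# every line read from the remote catalogue is also copied to a log,
# giving the disk record described above as an alternative to
# PRINT SCREEN.  Stream contents in any example are invented.

class LoggedReader:
    """Wrap an input stream; copy every line read into a log stream."""

    def __init__(self, source, log):
        self.source = source
        self.log = log

    def readline(self):
        line = self.source.readline()
        self.log.write(line)   # tee the line into the transcript
        return line
```

Reading the session through a `LoggedReader` leaves the program's view of the data unchanged while the log accumulates a verbatim transcript for later editing.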
Similarly, at Penn there is talk of allowing faculty to initiate their own Interlibrary Loan requests by machine from home, with the existing ILL staff freed up to concentrate on the really tough cases and on the management of the flow of books in and out of the building. Somewhere beyond INTERNET lies a Borgesian fantasy library, where all the texts are themselves on-line, and where you may flit from text to text without budging from your desk in your study, perhaps miles away from the library. A pretty fantasy, but not all _that_ unrealistic and worth keeping in mind as the goal towards which all the interim developments reach. In the first instance, consult your local computer gurus about INTERNET access and your local library people about things like RLIN and OCLC. The INTERNET LIBRARY file gives addresses for queries directed to other libraries. For queries about this article, the author may be reached as JODONNEL@PENNSAS.UPENN.EDU (and would be particularly glad to hear of any corrections or improvements that might be suggested). //end #29// ---------------------- <> by Robert Kraft, with guest columnist Tzvee Zahavy [06 September 1990 Draft, copyright Robert Kraft] [HUMANIST 07 September 1990] [Religious Studies News 5.5 (November)] [CSSR Bulletin 19.4 (November)] -------------------- This OFFLINE column has three primary aims: (1) To alert readers to the computer related activities at the annual meetings in New Orleans; (2) to follow up on some issues raised in OFFLINE 29 by Jim O'Donnell's discussion of accessing libraries remotely; and (3) to present a comparative review of some CD-ROM packages for biblical and related studies by guest columnist Tzvee Zahavy. There are also a couple of miscellaneous items of possible immediate interest thrown in. It's a tall order, so let's get to it. 
There will be various pieces of detailed information on the programs of the SBL Computer Assisted Research Group (CARG) at the New Orleans meetings, and on associated (AAR, ASOR) or separate (e.g. commercial vendors' exhibits) computer activities. The special CARG program on Saturday (coordinated by Robin Cover and Alan Groves) will focus on materials for computer assisted instruction, including presentations by my colleague Jack Abercrombie on his CINEMA project for using movies as a base for language and other instructional modules, and by our current guest columnist Tzvee Zahavy (see below) on his Hebrew language instructional software. CARG will also have the usual special exhibit and demonstration room running throughout the period of the meetings, in the Marriott Hotel. Seek us out and help us help you to keep in touch with developments.

The August 1990 SBL ATARI ST User Group Newsletter 4.3 recently arrived and announces a change of leadership from Doug Oakman, who has coordinated the group for the past three years or so, to Ray Mattera (1130 South Lorraine, Apt 3B, Wheaton IL 60187; tel 708 665-5240). Thank you, Doug, for your valuable investment of time and energy on behalf of your colleagues; and best wishes, Ray, as you continue the task of developing and exploiting the scholarly use of this powerful and inexpensive system.

Apple Macintosh users of Conrad Gempf's Corinth Greek font for displaying "Beta Code" Greek files (from CCAT or TLG) that have been filtered through the CCAT programs "Transcribe" and "Running Text" may find that every line that begins with a smooth breathing beneath a circumflex accent is missing! The author of the programs, Jay Treat, can provide information on how to avoid the problem; he can also provide updates (including TRANSCRIBE 2.7) to those who supply a formatted disk and a self-addressed stamped envelope, or via electronic mail. Contact Jay through the OFFLINE addresses provided at the end of the column.
An immediate and widely represented response to the printed publication of Jim O'Donnell's OFFLINE 29 discussion took roughly this form: How can the person who is NOT connected to the University networks take advantage of the opportunities offered by electronic communications and access to remote libraries? The question is not insoluble, but finding the appropriate answer for your own situation may take a great deal of perseverance on your part! There is no single answer, but the following suggestions will perhaps help some of the frustrated dispossessed.

The August 1990 issue (19.1, 64-65) of _T.H.E. Journal_ contains a brief article by Martin B. Solomon entitled "E-Mail: a Primer for Academics." Solomon describes INTERNET, which is what you want to plug into, as "a non-commercial computer 'network of networks.' ...Anyone using any one of the member networks can communicate with people on any other member network." A large number of networks are connected in this way, including BITNET, Arpanet, CSNet, NSFNET and the like. Organizations pay a fixed annual fee to join the Internet, and you as an individual want somehow to establish an association with such an organization, or at least find ways to plug into the Internet facilities.

One approach is to obtain an account at an accessible University that is connected to the Internet. Sometimes this is only possible for persons affiliated with the University. Sometimes "outsiders" can obtain an account, but the costs may be prohibitive -- at my own institution, $50 per month has been quoted as a minimum. But the variations are significant, and you might want to shop around. Stanford (415 723-4795), for example, charges non-affiliated account holders 35% more than its own people, but has no minimum fee for usage, so that the costs are the telephone tolls plus about $.36 per minute of CPU time from 6 pm to 6 am. Disciplined usage of such a system has proved quite valuable for some.
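To make such a fee structure concrete, here is a minimal back-of-the-envelope sketch of what one off-hours session at the Stanford rates quoted above might cost. The CPU rate is the figure given in the column; the session lengths and the long-distance toll rate are invented for illustration.

```python
# Hypothetical session cost at the quoted Stanford rate of $.36
# per CPU minute (6 pm to 6 am), plus telephone tolls. The toll
# rate and session figures below are assumptions, not quoted rates.
CPU_RATE = 0.36          # dollars per CPU minute (quoted)
PHONE_RATE = 0.10        # assumed long-distance toll per minute

cpu_minutes = 5          # CPU time actually consumed (hypothetical)
connect_minutes = 45     # time spent on the phone line (hypothetical)

cost = cpu_minutes * CPU_RATE + connect_minutes * PHONE_RATE
print(f"${cost:.2f}")    # prints $6.30
```

Note that because billing is per CPU minute rather than per connect minute, the telephone toll, not the computing, dominates the bill for a typical browsing session.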
An enquiry for more information on such matters was sent to the combined membership of HUMANIST (a network discussion group of some 700 humanists) and produced interesting results. The University of Georgia, it is claimed, has provided toll-free dial-in numbers for "a number of state and private institutions" that are otherwise not connected to the networks. Details were not known regarding costs to those institutions, but apparently there are no costs to individual users. One hopes that other Universities and areas will follow this sort of lead. There is a service called the Cleveland FreeNet (supported by Case Western Reserve University; modem dial-up 216 368-3888) that apparently provides at least limited access to some library resources for the cost of the telephone charges. Something called CERFNet in the California area might also be promising (619 534-5087). Vague reference was also made to "a Denver Public Access Unix" address.

What you need is some magic telephone number (preferably toll-free, of course, but at least at low-cost hours!) that gets you to a connection where your TELNET command is accepted -- from there, just follow Jim O'Donnell's instructions. A dial-up line to a University Library might do it; you will only find out by trying. Some things -- sending and receiving messages, getting files from FileServers, and the like -- can be done through commercial networks such as CompuServe (800 848-8199; $39.95 to join, $1.50 per month, $6 per hour at 300 baud; for basic information see J. Hughes, _Bits Bytes & Biblical Studies_ 5.4.1) or the WELL (415 332-4335; $8 per month, $3 per hour connect time), among others. But the interactive use of resources such as library catalogues is not yet possible with those facilities. Still, if you are on such a network and want to get involved in the e-mail Internet world, you can do so. You can even drop me a line at KRAFT@PENNDRLS.UPENN.EDU to test the system (not that I'm lonely or in need of distractions ...).
The remainder of the column has been contributed by Tzvee Zahavy, professor of Classical and Near Eastern Studies at the University of Minnesota. He is the author of _The Traditions of Eleazar Ben Azariah_ (Scholars Press for Brown University Judaic Studies), _The Mishnaic Law of Blessings and Prayers: Tractate Berakhot_ (Brown Judaic Studies, 1987), _The Talmud of the Land of Israel: Tractate Berakhot_ (University of Chicago, 1989), and _Studies in Jewish Prayer_ (University Press of America, 1990). He is also the primary author of "Milim: Vocabulary Drill for Foreign Language Instruction," published by IBM's Wiscware, and of "MILIM-Windows" and "Verbs," published by Exceller Software, and he is co-author of the Windows language shell system, "Sentences," also published by Exceller Software (1-800-426-0444). As was mentioned above, he will be exhibiting and discussing some of his instructional software at the forthcoming AAR/SBL/ASOR annual meetings in New Orleans.

The following comparative review deals with collections of electronic texts (databanks) stored on CD-ROMs and accompanied by accessing software that runs on IBM-type microcomputers (similar resources for Mac users are also beginning to appear). Thus they differ from those CD-ROMs which contain texts but no software -- e.g. the TLG, PHI/CCAT, and PHI/DUKE discs, which have been described in earlier OFFLINE columns. The popularity of CD-ROM as a delivery device is growing rapidly, as should be clear to anyone associated with a modern research library, and thus Professor Zahavy's review is most welcome and most timely.

Imagine this scenario: You are doing research on a religious studies project for which you need to know all the references to prayer in the Hebrew Bible (Tanakh) and the Talmud. You can work laboriously for several hours with printed concordances to look up every possible form of "pray," "worship," "serve," etc., and copy them out.
Or you can turn to your computer library and in a matter of minutes find, print or save to your disk all the needed references and cross references in up to eight different Bible translations as well as in the original languages.

Or imagine you are a clergyman called upon to eulogize a departed congregant. All you really know is that the person religiously played golf every day. Perhaps there are some biblical references compatible with his activities that can be used to commemorate his dedication. How do you locate such materials to intersperse with your remarks, making reference to a staff or rod, to sand or a trap, or to the hazards of life? You can turn to handbooks and concordances or, if you have the appropriate resources, you can turn to your computer and in seconds have such a basis for your talk.

The Tora Treasure and the "Computer Bibles" reviewed here share common characteristics. Each package comes with search software and a CD-ROM containing a text databank. The Computer Bibles have in addition reference resources. Each runs on IBM or compatible computers with a CD-ROM disc drive. A CD-ROM drive reads compact discs that look identical to those used for recorded music. In fact, some CD-ROM drives for computers can also be used to play music from common audio CDs. The drive must be attached to an adaptor "card" installed in one slot of your computer. The appropriate software "drivers" must be installed and booted up to enable access to the CD-ROM drive. A CD-ROM encoded for the IBM in "high sierra format" can hold up to 660 megabytes of data, the equivalent of 300,000 pages of typewritten text, 1,800 double-sided floppy disks, 74 minutes of audio, or thousands of graphic images. In addition to the biblical materials reviewed here, CD-ROM applications now available include _The Original Oxford English Dictionary_, _The New Grolier Encyclopedia_, _The Microsoft Bookshelf_, and _The CIA World Factbook_. The list is expanding rapidly.
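The capacity equivalences quoted above check out arithmetically, as this small sketch shows. The per-page and per-floppy byte counts are back-calculated assumptions chosen to match the column's round figures, not specifications from the packages.

```python
# A 660-megabyte CD-ROM, using the column's round numbers.
# Assumptions: roughly 2.2K of ASCII text per typewritten page,
# and 360K per double-sided floppy disk of the period.
CD_ROM_BYTES = 660 * 1_000_000
BYTES_PER_TYPED_PAGE = 2_200
BYTES_PER_FLOPPY = 360 * 1_000

pages = CD_ROM_BYTES // BYTES_PER_TYPED_PAGE
floppies = CD_ROM_BYTES // BYTES_PER_FLOPPY

print(pages)     # 300000 typewritten pages
print(floppies)  # 1833 floppies -- close to the 1,800 quoted
```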
Especially promising CD-ROM applications appear to be the storage of lengthy and complex catalogues, directories, technological manuals and documentation. A CD-ROM drive costs $600 or more, but broader market competition is quickly leading to lower prices. You cannot accidentally erase a CD-ROM because it is a read-only medium (thus ROM); accordingly, you cannot write any new data to the CD. Specifications of the hardware indicate that the computer accesses information on CD-ROM at a slower rate than it can retrieve data from a typical hard disk; frankly, I could not notice much difference in the tests I performed.

To use these products one must have as a minimum configuration an IBM-PC, AT or PS/2 (or compatible) with 640K of internal memory (RAM) and a CD-ROM drive connected. I tested these programs on a Toshiba 3201A drive attached to a PS/2 model 60. The RAM memory requirements for these packages are substantial. When I began to test these products with a DOS shell program running on my computer, I had less than 500K of RAM available for other programs. Thus, to run the CD-ROM packages, I had to disable the shell programs I normally use.

The programs and databanks under review put Rabbinic literature or Bible versions and reference works conveniently at your disposal. They can print or save to disk the results of your searches. Some even provide cross-referencing to encyclopedias, dictionaries, handbooks and other resources in the package. The Computerised Tora Treasure comes from Israel and runs in Hebrew. The screen displays in Hebrew on an IBM-type computer using either a chip or special software and an EGA or VGA monitor. One disappointment in the Bible Computer packages is their lack of support for Greek and Hebrew character display on screen and on the printer. A customer paying these prices expects state-of-the-art software.
Subroutines to display the foreign characters for those users who want them on screen or printed ought to be included with each package. After describing each package I will provide a comparative analysis of its features and a final reckoning, following the evaluation model established in the reviews of the computer publication _InfoWorld_, where 5.0 and above is satisfactory, 6.25 is good, 7.5 is very good, and 10.0 is excellent (averaging the weighted evaluation of each of the features that is compared).

Version: none specified, no date, no label on the CD-ROM
Company: Computerised Tora Treasure (Machon Otzar HaTorah HaMemuchshav -- ATM), 59 Rabbi Akiva St., B'nei B'raq, Israel, telephone 03-783-262.
List Price: Starts at $350 for the Tanakh, and goes up to several thousand dollars, depending on the data collections desired.
Hardware/system requirements: 640K memory with 500K free.
Positive features: Has rabbinic literature. Runs in Hebrew on the screen. Good search program.
Negative features: High cost, poor support. Many errors or inconsistencies in the texts. Some features on menus not operational. The program is copy protected by means of a very inconvenient system: a "dongle" from the publisher must be attached to the computer's parallel port. The first one sent to me did not work. As a result I could not run the program for a month until I received a replacement from Israel.
Summary: The software is fast and intuitive. The databank is extensive. A great tool for research.

-Textbank and Capacity (318 megabytes+) [very good] Tanakh, Mishnah (with commentaries), Tosefta, Talmud Bavli, Talmud Yerushalmi, Midreshei Aggadah, Rambam: Mishneh Torah, Tur and Shulchan Aruch, Ramban on the Torah. Soon available: Zohar, Midreshei Halachah, Rashi and Tosafot for the Talmud Bavli, Shitah Mequbetzes. Potential users should note that the publishers of the Tora Treasure provide no information concerning the source(s) of their electronic texts.
The rights to some of this material may be subject to dispute.

-Compatibility of Output [good] You can send all or selected records or indexes to your printer or to disk as an ASCII file. Of course you must have Hebrew capabilities in your word processor or printer to use the data in this way.

-Search/Retrieval Capabilities [very good] Length of record returned can be toggled from two lines to full entry. An index of occurrences is also returned. Boolean-like searches with delimiters are achieved through the search screen. Wild cards include prefix, suffix, middle, incremental and missing letter specifiers. Proximity matches are possible by specifying a number in the appropriate column of the search screen. Up to five word entries and three alternatives remain on the screen as you shift from one databank to another. Field designations: You can select an entire databank, e.g., Talmud Bavli, or tractates or books within the databank.

-Documentation/Help Facilities [satisfactory] A helpful seven-page photocopied manual accompanies the program. Aside from a list of function key operations toggled to either Hebrew or English, no on-screen help is available.

-Learning Curve [good] The manual takes you through the basic search processes. The procedures for searching are intuitive and clear. A computer literate user will have little trouble mastering the software. Installation is straightforward if you have an operational "dongle."

-Efficiency for Users [good] The program is efficient at word searches. Other features on the menu (index searches, sorted list retrieval, and chapter/verse retrieval) do not work on the copy I have. This could be used at a library workstation with supervision.

-Errors: Fallibility and Consequences [good] If you select from the menu a databank that is not operational, you may get a warning or be thrown out to DOS. Otherwise I could not make the system crash.
-Product Support [satisfactory] The publisher's representative in Israel is helpful if you can reach him at home. A phone number for the B'nei B'raq distributors is supplied. As is the case with any international purchase, the support is at best inconvenient.

-Is it Worth the Price? [satisfactory] Is there a choice? If you want a computerized rabbinic text databank on your PC, this is the only option. The publisher runs special promotions that can save you hundreds of dollars if you buy all or many of the databanks.

-Final Average: good (6.2)

Version: 1.2
Company: Tri Star Publishing, 475 Virginia Drive, Ft. Washington, PA 19034
List Price: $695.00
Hardware/system requirements: 640K memory with 500K free.
Positive features: Excellent documentation, tutorial, memo feature, multiple windows, four bible versions, secondary research aids (dictionary, encyclopedia, geography), access to original languages.
Negative features: Hebrew and Greek transliterated only; clumsy retrieval mechanism.
Summary: A slick package with real strengths that nonetheless falls short on some expected capabilities. Comes with a memo pad feature. It is the only program of the three to allow for four full windows with different texts, easy movement between windows and scrolling within each.

-Textbank and Capacity (320 megabytes) [very good] Bible editions: King James, New American Standard, New International, transliterated Greek and Hebrew texts. Only Hebrew and Greek words found in Strong's dictionary may be accessed, using a difficult and non-intuitive beta-code style of transliteration. This is a drawback to the package. Reference tools: Expository Dictionary of the Bible, Handbook to Biblical Study, The New Manners and Customs of Bible Times, New International Dictionary of Biblical Archaeology, Oxford NIV Scofield Study Bible, NIV Study Bible, The Wycliffe Bible Commentary, The Wycliffe Bible Encyclopedia, The Wycliffe Historical Geography of Bible Lands.
-Compatibility of Output [very good] You have the option of exporting data to a disk file, to a printer or to both. File output sends soft carriage returns at line ends as the default to allow for easy importing into wordprocessors. Markings for paragraphs may be adjusted to suit the needs of different wordprocessors.

-Search/Retrieval Capabilities [good] Length of record returned can be varied by enlarging or shrinking the active window using the zoom function. Boolean searches/delimiters: Searching may be done by word or phrase. Operators AND, OR, ANDNOT, WITHIN and TO may be used in a search. For instance, a command line ALL>"(Abraham* | Abram*) & (Sara*)" would return all instances where both names are found together, with whatever endings (e.g. Abram's, Sarai, Sarah's). The "wild card" option (*) can be used only at the end of a word. Proximity matches specifying the distance between targeted words are possible using the WITHIN command. Field designations: You can limit searches, e.g., to individual books in select bible versions. The "Chain option" provides hypertext links to chain references in the Scofield Bible. The software also allows search access to an index of Tanakh (OT) verses in the New Testament. Speed: retrieval is fast.

-Documentation/Help Facilities [very good] The manual is slick and professional. On-line help is sufficient and accessible.

-Learning Curve [satisfactory] Through the tutorial a user of average abilities should become confident enough to perform searches of some complexity after one or two sessions.

-Efficiency for Users [good] The program is efficient. Included is a guide card that contains many of the keyboard commands and search operators, a list of Hebrew and Greek transliterations and the abbreviations used for the biblical books and reference works in the databank. Unfortunately the retrieval mechanism is clumsy and frustrating for a user searching and wishing to save multiple records.
One has to go back frequently and repeat the search to access additional records. I hesitate to recommend this program for a library workstation.

-Errors: Fallibility and Consequences [good] I could not make the system lock up or crash.

-Product Support [very good] The publisher provides a toll-free number for assistance.

-Is it worth the price? [satisfactory] This product lives up to a high level of professional quality. If four Bible versions are adequate for the library or individual user and you are comfortable with the presentation and search mechanisms, this is a good value.

-Final Average: very good (7.3)

Version: no date, no version specified
Company: FABS International Inc., DeFuniak Springs, FL 32433
List Price: $795.00 [RAK note: this price has been reduced drastically and negotiations are underway for the American Bible Society to take over the package and reissue the CD-ROM under its own label at perhaps one-third of the original price.]
Hardware/system requirements: 640K memory with 500K free
Positive features: Documentation on disk (must be printed out by user). Simple search engine with save and print features. Extensive databank of biblical versions, including Vulgate, Spanish and German editions, and aids to study. Provides very good support for grammatical inquiries via searches of the Westminster morphological analysis of the Hebrew text and the Gramcord morphological analysis of the Greek New Testament. A multi-language wordprocessor (MegaWriter, English/Hebrew/Greek) is included.
Negative features: Limited to search of index entries. Hebrew and Greek resources rely on beta code transliterations; no reference card included; some unusual choices of secondary sources. No apparent user support. Some documentation on the disk was missing or corrupted.
Summary: For the most part, lives up to its advertising.
The "Foundation for Advanced Biblical Studies" provides a valuable electronic study Bible for ministers, teachers and advanced students.

-Textbank and Capacity (279 megabytes) [good] Bible editions: King James, New King James, New International, American Standard, Revised Standard, New American Standard, two Spanish Bibles (LALB and 1960 RVR), German (Luther, 1984). Reference tools are presented in several categories: "Original Language Texts:" Hebrew Bible (UBS), Greek "Septuagint" (Rahlfs), Greek New Testament (UBS), Greek Harmony of the Gospels, Hebrew Harmony of the OT History, Latin Vulgate (UBS). "Grammatical Key to the Bible:" Hebrew Bible (Westminster), Greek "Septuagint" (CATSS), Greek New Testament (GRAMCORD), Hebrew Lexicon (Davidson), Greek Lexicon (UBS). "Bible Study Aids:" Abingdon's Dictionary of Bible and Religion, Apostolic Fathers (Lightfoot), Works of Josephus, English Septuagint (Brenton), English Harmony of the Gospels, English Harmony of the OT History. "FABS Electronic Journal" of theological writings. "FABS Electronic Library" of additional materials.

-Compatibility of Output [good] Exports to disk or printer; the ASCII output can be imported into most wordprocessors.

-Search/Retrieval Capabilities [very good] Length of record returned: Single line entry in the list of records, or view the full record in context, with scrolling ability. Boolean searches with delimiters OR, AND, AND NOT; wild cards and proximity matches/fuzzy searches are not pertinent, as the search must utilize entries from the indexes -- this limits browsing access to the actual text. Field designations can be made easily to restrict the search to selected versions or texts within the databank. Speed: When several hundred responses are found there is some waiting at the screen.

-Documentation/Help Facilities [satisfactory] The disc comes with a one-page installation guide and installs at one command.
The user must print out the documentation by first un-arc-ing it and then printing three files. One of the documentation files on the disk I received was defective, leaving me with several corrupted pages of the manual. There is no phone number to call for support.

-Learning Curve [good] The program is straightforward enough to learn to use within an hour.

-Efficiency for Users [good] The package will deliver satisfaction to its intended audience of ministers and parochial school and seminary teachers. It also could be used by library patrons as a stand-alone workstation.

-Errors: Fallibility and Consequences [good] I could not make the system crash.

-Product Support [poor] None apparent.

-Is it worth the price? [satisfactory] Without user support I hesitate to recommend the package even to those who find that the extensive databanks contain all that they need. Still, the search processes work well and many users will find them intuitive and powerful.

-Final Average: good (6.3)

Version: 1988 date, no version number.
Company: Ellis Enterprises, P.O. Box 1775, Edmund, OK 73073
List Price: $595.00
Hardware/system requirements: 640K memory with 500K free.
Positive features: Lower price, easy to install and use, very good documentation with tutorial, parallel versions presented in record retrieval, flexible search criteria. Good use of function keys in the search software.
Negative features: No searching in transliterated Hebrew, incomplete transliteration system, limited browsing among record fields.
Summary: For the price this product delivers an excellent value in many respects.

-Textbank and Capacity (287 megabytes) [very good] Bible editions: 27 titles, 9 Bibles and 18 reference works -- American Standard, King James, New King James, Living Bible, New International, Revised Standard, Simple English Bible New Testament, transliterated Hebrew and Greek Bibles.
Reference tools: Easton Illustrated Dictionary, Elwell Evangelical Dictionary, Gray Home Bible Commentary, Henry Concise Commentary, Life and Times of Jesus, Literal Translation, Micro Bible, Osbeck's 101 Hymn Stories, Sermon Outlines, Strong's Greek Dictionary, Strong's Hebrew Dictionary, Theological Wordbook of the OT, Vine's NT Dictionary, Vine's OT Dictionary. Many users will find the transliteration system used for Hebrew and Greek easier to read than the beta code system used in the other packages, although some precision is lost thereby. Again I express disappointment that fully accented and vocalized original texts in Hebrew and Greek characters could not be presented as part of these programs.

-Compatibility of Output [good] Exports to disk, printer; the ASCII output can be imported into most wordprocessors.

-Search/Retrieval Capabilities [excellent] Length of record returned: one verse or entry at a time in the versions. More is accessible in the reference works. Browsing in the verses before and after the records selected is not possible. The selection of ALT-F5 gives all versions of a verse in parallel, a nice feature. It is easy to scroll through the records by using two function keys. Boolean searches/delimiters; wild cards; proximity matches/fuzzy searches: The options in the Marcon search retrieval software are excellent. The options AND, OR and AND/NOT can be selected easily. The program allows proximity searches within a given number of words, or in a sentence or paragraph. Stems, wildcards and nesting are also supported, but naturally slow down the search process. The search help screen gives clear and detailed information for the wide range of options. A list of very common and thus non-searchable words is provided. Searching is not supported for the OT romanized (transliterated) Hebrew because the software will not recognize the signs used for aleph (') or ayin (`). This should have been corrected.
Field designations: The user can elect to search all or part of the texts from a Bible list or collections list. Other valuable features are included: Choosing "Versions" selects all 8 translations; choosing "Literal Translation" selects Strong's numbers and dictionary definitions. Speed: The software uses indexes to search verses quickly in versions.

-Documentation/Help Facilities [very good] The manual starts with a tutorial, a good approach. The reference section is short but complete. The installation section is very helpful, directed more to the novice who might have trouble with RAM memory, DOS extensions and the like.

-Learning Curve [very good] The software can be learned in one sitting by most intermediate level users.

-Efficiency for Users [very good] The package is well suited for a stand-alone library workstation.

-Errors: Fallibility and Consequences [good] I could not make the system crash, but I did exit the software several times accidentally by pressing one too many "escapes." A trap asking "Do you really want to exit now (Y/N)?" would be helpful.

-Product Support [good] A phone number for support is included in the manual.

-Is it worth the price? [very good] Yes. The package provides an extensive databank that will serve a broad audience at a lower price than the other packages reviewed.

-Final Average: very good (7.5)

Version: 1.0
Company: CDWord Library, 2 Lincoln Center, 5420 LBJ Freeway LB7, Dallas, Texas 75240
List Price: $595. $8 video demonstration available. (Known initially as CDWord, but the name was changed to avoid confusion with the Microsoft Word software.)
Hardware/system requirements: 286 or 386 processor, hard disk with 6-10 megabytes free (for Windows software and program files), 640K memory (uses 512K to 2 Meg of extended or expanded memory if available), EGA, VGA or Hercules graphics monitor, mouse, Epson 24-pin or HP Laser printer, MS-DOS 3.0 or higher, Windows 2.0 (run-time version included) or Windows 3.0.
Positive features: Uses the Windows environment to display multiple references and employs the mouse for pull-down menus. Offers synchronized scrollable windows to enable simultaneous viewing of Bible versions and commentaries. Displays Greek characters on screen and permits searching in the Greek LXX and Greek NT. Hypertext-like interaction between biblical texts and lexica, Bible dictionaries, commentaries, cross references and graphics collections (maps, charts, diagrams). Greek parser available. Includes a highly refined search engine.
Negative features: Extensive hardware memory and software requirements. The learning curve for this package may be more extended than for others. Does not include a Hebrew Bible. Hebrew texts and lexica, along with other enhancements, will be in a future release of the package, according to Jim Bolton of CDWord Library.
Summary: By far, CDWord Library represents the state of the art among the Bible CD-ROM programs. After working with the package I was so impressed that I considered changing my area of research specialization from rabbinic literature to biblical studies. All the databanks are presented in an attractive fashion. Little enhancements permeate the package and make using the program very pleasant. For instance, you can adjust the size of the characters or windows in the display. With the provided "navigational tools" you can backtrack to previous stages in your work in each session, or you can insert a bookmark and return later to that place in your search or browsing. You can modify many of the default settings. In a typical session you can search for terms and compile a concordance of occurrences. From there you can move to a Bible version and see the context of each hit. From the NASB text you can directly access the LXX or the NA Greek NT. From the Greek NT you can move into a lexicon or commentary. When in the Bauer Lexicon you can click on any abbreviation to reveal the full reference.
By clicking on a cross reference you can move easily to that point in a Bible version you designate. All reference materials are complete editions. Accordingly the Bauer Lexicon includes the needed references in the entries to words in Hebrew characters as well as all the English and Greek analyses. While working you can block and save any text entry to the Windows clipboard (if you have a full version of Windows) or to a file in a number of formats. I copied an article from the Harper Bible Dictionary to a clipboard file, exited CDWord Library, imported the text into the Write wordprocessor and printed it with formatting on an HP Laserjet. The manual does contain a polite warning to users to respect the copyrights of the various materials in the package. You should also be able to print directly from the CDWord Library program. However, I had trouble doing this; I suspect that I need to reconfigure and reinstall my printer definition. The superb hypertext links and features like the on-line parser and search concording capacities make the program powerful for learning and for research. I judge that this program will be the standard for competitors to reach.

-Textbank and Capacity (300+ megabytes) [very good] Nestle-Aland Greek NT with diacritics; Rahlfs Greek LXX with tags (based on the CATSS/CCAT version of the TLG text plus the CATSS Morphological Analysis); KJV; NIV; NASB; RSV; New Bible Dictionary; Harper's Bible Dictionary; Bauer Lexicon; Intermediate Liddell and Scott; one-volume Theological Dictionary of the NT; Harper's Bible Commentary; Bible Knowledge Commentary; Jerome Bible Commentary; also graphics: illustrations, charts, chronologies, cities, genealogies, maps and tables.

-Compatibility of Output [excellent] You can save any text, index of hits, concordance of retrieved texts, dictionary or lexicon entry to either the clipboard, a Write wordprocessor format file, an ASCII file or directly to the printer.
The package makes full use of the MS Windows environment in this respect. You can also save your indexes of searches or your concordances in a hypertext format for later retrieval into CDWord Library for further examination and refinement.

-Search/Retrieval Capabilities [excellent]
Length of record returned: Initially a chapter/verse index of hits is returned. By clicking on a record you may view it in context dynamically or retrieve its text into the index in a variable user-specified length.
Boolean-like searches with delimiters: Three levels of searching serve a wide variety of users. You have a choice of simple string searches, intermediate searches, or advanced searches. Proximity matches and context specifiers are supported. You can use precedence relation operators, boolean logical operators (and, or, not), and wildcards mixed in a query. Searches can be made on Greek lemmas and morphological tags (in the NT and LXX texts). The program supports menu-driven and command-line searching, and macros for shortening queries for both documents and expressions. Search arguments can be recalled, reused and edited.
Field designations: Searches can be restricted in the usual fashion to various elements of the databanks, and even directed to bibliographies, references, headwords and authors in documents.
Note that searches can take a bit of time. A test retrieving 150 hits for five alternate words took about one minute. Adding all of the context verses to the returned index took another two minutes. The manual warns that a search may be limited to 250 matches. I installed only the minimum amount of files and data onto my hard disk for this testing. A more completely installed set-up may run faster.

-Documentation/Help Facilities [excellent]
The printed manual that accompanies the program provides an excellent introduction and tutorial. In addition the context-sensitive on-line help and documentation is easy to access, clear and comprehensive.
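The layered query model described in this review (simple string matches, boolean and/or/not, wildcards) can be sketched in a few lines of modern Python. The verse texts and the query interface below are invented for illustration only; they are not CDWord's actual data or query language.

```python
import re

# Toy corpus: verse reference -> text (illustrative, not the CDWord databank).
verses = {
    "John 1:1": "In the beginning was the Word, and the Word was with God",
    "John 1:14": "And the Word was made flesh, and dwelt among us",
    "Gen 1:1": "In the beginning God created the heaven and the earth",
}

def words(text):
    """Lowercased word set for a verse, for simple term matching."""
    return set(re.findall(r"[a-z]+", text.lower()))

def search(all_of=(), any_of=(), none_of=()):
    """Return references whose verse satisfies AND / OR / NOT term lists."""
    hits = []
    for ref, text in verses.items():
        w = words(text)
        if (all(t in w for t in all_of)
                and (not any_of or any(t in w for t in any_of))
                and not any(t in w for t in none_of)):
            hits.append(ref)
    return hits

print(search(all_of=["word"], none_of=["flesh"]))  # ['John 1:1']
```

A real engine like the one reviewed here adds proximity operators, wildcards and precedence grouping on top of this same boolean core.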
-Learning Curve [very good]
If users know the Windows environment they should be able to operate the program with half a day of training. Otherwise I would estimate that you should allow another half day to learn the basics of the MS Windows software. More advanced applications will take additional training and experience.

-Efficiency for Users [very good]
The program is a true pleasure to use. It is logical, attractive and smooth. My only reservation here is that the processes of installing and learning Windows-based software may at first be intimidating to some IBM users.

-Errors: Fallibility and Consequences [excellent]
I did get the program to crash once when trying to print directly from the text (I suspect this may have been a printer driver installation problem) and once when clicking the mouse too soon after starting the program. I also was able to exit the program immediately with an ALT-F4 key combination, leaving behind some unsaved search materials. Some additional trapping ought to be inserted in the code to avert such unintended exits.

-Product Support [very good]
A toll phone number provides access to the publisher. Full product support is available.

-Is it worth the price? [excellent]
Yes, with certitude. I say this in spite of the absence of original Hebrew Bible texts. The package is the best of the CD-ROM Bible materials for all of the reasons specified above. In addition, as a developer of Windows instructional software myself, I know the environment's shortcomings and possibilities. CDWord Library is the finest state-of-the-art professional program of any kind that I have seen for MS Windows.

-Final Average: excellent (8.8)

Final remark: In the box with the software and CD-ROM I found a printed note thanking the buyer for the purchase and promising to stand behind the product. Most buyers will in turn thank this publisher for providing such exceptional software tools.
These packages are among the most sophisticated electronic Bible and rabbinic text databanks available. Obviously the hardware system requirements and price put them beyond the means of most home users. Libraries, seminaries, and large synagogues or churches might be more likely to invest in these resources. Despite the high price, one could argue that the purchase of a CD Bible resource might save a library money. The Bible Library manual, for instance, boasts that the materials it provides would cost over $1400 retail in print form. A recent and extensive catalogue from a software dealer and consultant, Hermeneutika: Computer-Aided Bible Research (1-800-55-BIBLE), lists over 350 PC Bible software and hardware tools. New products include some promising and interesting items. Some are more economical but less extensive than the products reviewed here. Other products may serve the more specialized interests of various users. Among the examples of alternatives is the new stand-alone Franklin Electronic Holy Bible. This small portable unit is available with either the KJV or the RSV for $250 from the publisher Technologies for Learning (P.O. Box 210, Lumberton, NJ 08048). An inexpensive popular disk-based program, Godspeed, can be purchased from many software dealers. With the KJV it is $99.95, with the NIV, $119.95, and with the Greek/Hebrew, $149.95. Those who want an alternative for Hebrew Bible searching in the original may find Zondervan's MacBible the best available at $399 (1-800-727-7759; 1415 Lake Drive SE, Grand Rapids MI 49506). CD-ROM storage for rabbinic literature and computer Bibles are exciting technological developments for scholars and professionals who work with biblical and classical Hebrew texts. For those who regularly have relevant applications, I recommend these packages highly.
//end #30//
----------------------
<>
coordinated by Robert Kraft
[01 December 1990 Draft, copyright Robert Kraft]
[HUMANIST 07 December 1990]
[Religious Studies News 6.1 (January 1991)]
[CSSR Bulletin 20.1 (February 1991)]
----------------------

In Transition

Once again, this OFFLINE column represents the efforts of more than its coordinating editor, and is the first installment generated by a newly developing "editorial board" (initially R. Kraft, T. Zahavy, R. Cover, S. Bjorndahl, D. Westblade). The amount of information available to us that would interest OFFLINE readers is overwhelming, so we have selected items that we hope will span a variety of concerns, especially among students of "biblical literature," but also far beyond. We will continue to attempt to strike a balance between the relatively elementary and the more advanced aspects of computer assisted research.

It is heartening to find that the recently released _Critical Review of Books in Religion 1990_ from AAR/SBL (Scholars Press, 1990) includes review articles entitled "Beyond Word Processing," by John J. Hughes, and "The Computer as a Tool for Research and Communication in Religious Studies," by Andrew Scrimgeour. Not only do these articles contain valuable detailed information that goes far beyond what OFFLINE can provide, but they represent steps in the direction of evaluating electronic tools and resources in the same context as more traditional hardcopy materials. Thank you, Beverly Gaventa and your editorial board!

As will be clear from Tzvee Zahavy's report on New Orleans, a great deal is happening even in the relatively circumscribed area of biblical studies. More and more people are using the available electronic data in more and more ways. New data is appearing, older materials are being updated (e.g. an extensively corrected version of the CATSS Septuagint Morphological Analysis is now being distributed). Success has caused some problems -- e.g.
the PHI/CCAT CD-ROM containing Latin and Biblical materials is "sold out," and hopefully will be reissued in expanded forms soon. New hardware and software promise more power and ease, but sometimes require older approaches to be adjusted -- e.g. IBM DOS users can use the new Windows architecture to do what has long been possible on the Apple Macintosh, but much of the older software must be rewritten to run under Windows. Relatively newer technologies are now becoming commonplace -- e.g. CD-ROM readers and software for both IBM and Mac can do many of the things that were conveniently possible only on IBYCUS a few years ago. Advances in scanning technology promise (but don't always fulfil the promise!) to lighten the burdens of data encoding and to widen the versatility of coverage (e.g. graphics for photos, etc.). Video and sound are more easily integrated with more static visual data and will rapidly become more interesting to today's computer users, and part of the new "electronic textbooks" that will be produced (see Harvard's PERSEUS Project as an example).

OFFLINE will try to keep you in touch, but cannot possibly do so in a timely manner or in adequate depth or breadth. If you can connect to the electronic networks, the good news is that you will have immediate access to all sorts of information and resources to help you in the transition to this new world. The bad news is that you may have to adjust your usual modes of operation, to be more strict in setting priorities and more selective in how to invest your electronic time, if you want to avoid being inundated during this period of immense transition. By knowing how to contact persons and groups who share your interests (e.g. through the discussion groups listed below), your work will, we hope, be facilitated and important progress will be made in various areas of scholarly research. And when you are appropriately connected to the online resources, OFFLINE will no longer be needed as a transitional aid.
That is one of our goals!

Report on New Orleans, by Tzvee Zahavy

I've just returned from New Orleans, site of the November 1990 annual meeting of the American Academy of Religion and Society of Biblical Literature. Here is some news concerning computer developments, most of it garnered from visits to the exhibition booths or from the sessions of the Computer Assisted Research Group (CARG).

First, on the commercial front, new versions of the Multi-Lingual Scholar (MLS) and Nota Bene (NB) word processors for IBM DOS type systems are soon to be released. MLS has a Windows-like interface with pull-down menus and fantastic graphical capabilities. Linda Brandt of Gamma Software demonstrated how you could even change the menu language to hieroglyphics if you so desired. NB (Dragonfly Software), on the other hand, will be the fastest and most flexible integrated multi-language processor. With its IBID bibliography program, look out journals! Here come the articles with really extensive footnotes! And you will never have to retype an entry. You just push a few keys and my word... there is the citation, snatched from your bibliography database into your manuscript.

Both MLS and NB have been among the leaders in the ability to integrate and display (on screen and printer) Hebrew and Greek along with English, and both are authorized distributors of the electronic biblical texts managed by the Center for Computer Analysis of Texts (CCAT) at the University of Pennsylvania. One-stop shopping thus is possible for some users! Other vendors with similarly integrated text-with-software services who exhibited in New Orleans include, for IBM DOS type systems, LBase and the Bible Word Program (see reviews below), GRAMCORD and Paraclete Software's MegaWriter, and Zondervan's Scripture Fonts; and for the Apple Macintosh, Linguists' Software.

"Computer Assisted Instruction" (CAI) software has progressed too. Most of the products demonstrated were authored by professors as a "hobby."
In one CARG panel presentation and discussion the participants from several universities (including yours truly from Minnesota) spoke honestly and openly about the strengths and weaknesses of their efforts. The CAI tools developed were divided evenly between Mac and IBM and focused especially on teaching Hebrew and Greek. The investment costs for development ranged from $1.50 for one Mac program (not counting faculty time invested, of course) to $90,000 for a four-program IBM-based series of CAI tools (leaving aside faculty effort, but factoring in the value of hardware received, program design and programmer costs -- this was our U of M project, MILIM).

In the CARG plenary session on CAI, the audience was treated to specific descriptions and examples of (1) the use of Apple Macintosh "hypercard stacks" for research and instructional purposes, by Raymond Harder, (2) the use of interactive video for teaching language and other courses (e.g. History of Egypt, Life of Jesus), by Jay Treat for CCAT at UPenn, and (3) the development of NT Greek instruction programs on IBM DOS equipment, by John Hurd (Greek TUTOR program, UToronto).

On the computer-aided-research side, some projects appear to be stalled. Others are making nice progress. A group of reports from ongoing projects was compiled and distributed in hardcopy by J.
Alan Groves of Westminster Theological Seminary, and brief oral reports were given as well on such projects as the following:

Dictionaries
  Comprehensive Aramaic Lexicon (Hebrew Union Col/Johns Hopkins U)
  Dictionary of Classical Hebrew (Sheffield U)
  Hebrew Lexicon (Princeton Th Sem)

Encoding, Tagging and Maintaining Textual Data
  BHS Morphological Tagging (Westminster Th Sem)
  CATAB Hebrew/Masoretic Materials (U Villeurbanne [France])
  Werkgroep Informatica [BHS] (Free U [Amsterdam])
  Biblia Hebraica Transcripta (U Munich)
  Qumran Non-biblical Texts (Princeton Th Sem)
  CCAT/Tools for Septuagint Studies (CATSS; U Pennsylvania, Hebrew U)
  DEBORA, Centre Informatique et Bible (Maredsous)
  GRAMCORD Institute Peshitta Project (Trinity Evang Div School)
  Hebrew and Jewish Inscriptions Project (Cambridge U)
  Rock Inscriptions and Graffiti Project (Hebrew U)
  Mesopotamian Literature (UCLA)
  Armenian Literature Data Base (Leiden U, Hebrew U)
  Thesaurus Linguae Graecae (U California at Irvine)
  Oxford Text Archives (Oxford U)
  Biblical Research Associates (Wooster College)

Software Development, Data Integration
  Archaeological Data Base Management (Harvard U)
  GRAMCORD (Trinity Evang Div School)
  LBase [see review below]
  CATSS Base (Hebrew U)
  CDWord [see OFFLINE 30] (Dallas Th Sem)
  Project CONSTRUE for Greek (U Manchester [England])
  SEARCHER for Greek & Latin CDs (U California at Santa Barbara)
  PERSEUS and PANDORA (Harvard U)

The general mood of the computer types was mixed. Some people showed the frayed edges of the fast-paced world of technology change. Indeed it is virtually impossible to keep up with the deluge of new and ever faster hardware products and the ever growing software list. It seems unlikely that generalist types will be able to stay abreast of the field much longer. Active colleagues testified that involvement in serious projects requires more than a full-time commitment. Many new faces surfaced at the computer-oriented sessions.
That bodes well for future growth and momentum. From the meeting we observe that the computer has moved from the category of exotic new gadget into the ranks of the powerful, and perhaps indispensable, tools available to scholars in the humanities.

LBase and Bible Word Program: Guest Reviews by Alex Luc (Columbia Biblical Seminary, Columbia SC)

Published concordances for the original Hebrew and Greek texts of the Bible or extra-biblical material generally provide only brief contexts for the word we look for, and rarely have entries for idioms or longer phrases. With the help of computer programs, such searches have become simple and the desired results can often be obtained in minutes. My own teaching and research have been greatly helped by learning to use such tools. I have been able to produce more easily than before handouts for my students, with elaborate evidence from the versions or the original texts to help them follow the arguments of my lectures. In dealing with literary or theological theories and arguments, computer research has also enabled me to evaluate or verify quickly the evidence to which they appeal.

Our interpretation of ancient texts depends heavily on evidence from a large number of primary sources. We need evidence produced not only by searching for a word or idiom but by searching into the grammatical and stylistic phenomena of these texts. This can all be done now within a short time by using the computer -- e.g. searches to find out how often in the Greek biblical texts (LXX or NT) a preposition is followed by an article and then a verb, or how common is the phenomenon of having any two verbs immediately adjacent to each other. For those who have never performed such searches and yet have access to a Personal Computer (IBM DOS compatible), the following two programs may deserve your consideration.
(1) At present, the most useful program I find for the above purposes is LBase, a multilingual database program developed by John Baima of Silver Mountain Software (7246 Cloverglen Drive, Dallas TX 75249; 214 709-6364; SILVER@UTAFIL.LONESTAR.ORG). Its new version 5.0 includes several important improvements. LBase is designed for reading and searching literary texts, texts that are in Roman or non-Roman (including Hebrew/Aramaic and Greek) scripts. LBase turns the available transliterated biblical or extra-biblical texts into their original Hebrew, Aramaic and Greek scripts for viewing or for comparing different texts with each other on screen. You can leisurely search and display a word or phrase with a screenful of context for each of its occurrences. Or you can do a fast search (e.g., about one minute for the whole Greek NT) and then print the results on a dot matrix printer (or a laser printer with an appropriate wordprocessor). And LBase can work either from materials on floppy or hard disk, or from CD-ROM. From the appropriately tagged biblical texts, which are also available through LBase, the program can display and explain the parsing information for the verbs, nouns, etc., of the LXX, the Greek NT and the Hebrew Bible. As mentioned earlier, you can also search for grammatical and stylistic phenomena in these texts. Moreover, by just typing in the root form of a word, you can have all the occurrences of the word in its various forms along with their contexts. The ability to perform such advanced grammatical searches is still relatively rare among the computer systems of which I am aware. You may also use LBase to view or search the Hebrew parallels of any term in the LXX and vice versa, or to look up a Greek dictionary entry while viewing the texts themselves.
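The grammatical searches described above (e.g., a preposition followed by an article and then a verb) amount to pattern matching over a morphologically tagged text. A minimal modern sketch in Python, with invented forms and tags rather than the actual CATSS or LBase tagging scheme:

```python
# Each token pairs a surface form with a part-of-speech tag.
# Forms and tags here are invented for illustration; real tagged texts
# (e.g., the CATSS morphological analysis) use much richer codes.
tagged = [
    ("en", "PREP"), ("to", "ART"), ("arche", "NOUN"),
    ("eis", "PREP"), ("ton", "ART"), ("blepei", "VERB"),
    ("kai", "CONJ"), ("legei", "VERB"),
]

def find_sequences(tokens, pattern):
    """Return every run of tokens whose tags match the given tag sequence."""
    n = len(pattern)
    return [
        tokens[i:i + n]
        for i in range(len(tokens) - n + 1)
        if all(tokens[i + j][1] == pattern[j] for j in range(n))
    ]

# Preposition + article + verb, as in the example from the review:
print(find_sequences(tagged, ["PREP", "ART", "VERB"]))
```

The second query from the review, two immediately adjacent verbs, is simply `find_sequences(tagged, ["VERB", "VERB"])` over the same data.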
If you have access to a CD-ROM drive, you can view or search with speed the huge collections of the TLG (Thesaurus Linguae Graecae, all Greek literature to the 6th century CE), PHI (Packard Humanities Institute, classical Latin literature, Greek papyri and inscriptions), and CCAT (Center for Computer Analysis of Texts, biblical and related materials in various languages) texts. In previous versions, the power of LBase has been partly compromised by the complicated steps one must go through to use it. Its manuals had been written more for advanced users than for beginners. The new version has simplified some of the steps, but how simplified its manual will be remains to be seen. Having been involved in helping others learn to use this software, I composed my own short manual for beginners, which is available to anyone interested (please supply a stamped [75 cents] and self-addressed envelope, size 6" x 9" or larger; send to Alex Luc, Columbia Biblical Seminary, P.O. Box 3122, Columbia SC 29230).

(2) Another useful software product is the Bible Word Program developed by James Akiyama and distributed by Hermeneutika (PO Box 98563, Seattle WA 98198; 1-800-55BIBLE). It comes with the program disk and the Greek texts of the LXX and the NT, the texts of the Hebrew Bible (BHS), RSV and KJV. I was informed that the NIV will also be available soon. Unlike LBase, which is a very flexible database system, the Bible Word Program can search only texts that come with it, texts that have been transformed into its own specific format. For those who are satisfied with just doing word or phrase searches in these texts, and not searches into their grammatical and stylistic phenomena (or searches into the large collections of extra-biblical texts), the Bible Word Program may be sufficient. The price of the Bible Word Program is considerably lower than that of LBase, and it is easier to use.
Though limited, it has two useful features that are not available in LBase: First, after searching the Hebrew text, for instance, for a certain word or phrase, you can create an index of all the verses found and use it to bring in immediately all the verses of the same references from the text of the LXX (or RSV or KJV); secondly, you can use the index editor in the program to compile and print out a list of Bible texts by simply typing in the references of the verses you want. These features, however, should not be the primary reasons for you to choose this program over LBase, especially if you are not sure whether you will eventually need to do searches into the grammatical and stylistic phenomena or into the many extra-biblical texts available.

Pot-Pourri: "Scholarly" Electronic Discussion Groups

One vivid illustration of the rapid domestication of computerized discussion and research is the proliferation of "scholarly discussion groups" on the University-centered electronic networks, especially BITNET. It has recently been estimated that throughout the electronic world of commercial networks there are perhaps 50,000 "Bulletin Board Services" (BBS) covering virtually any subject! Some of them are also of direct and obvious academic quality and use, but for the moment the discussion will focus upon groups produced primarily by and for academic audiences in the humanities (with representative samples from the social sciences as well), and thus of special interest to students of religion in its various aspects. If you have access to BITNET or to INTERNET, consult your local support staff on exactly how to access any of these addresses -- the process differs slightly from system to system. For some details about getting connected, see also OFFLINE 29 and 30. Once you are connected, such discussion groups as are listed below are available. Some are "unmonitored," which means that any message you send (intentionally or, sometimes, by mistake!)
gets published to the entire list automatically, while messages to "monitored" groups are filtered by human editors. The following list is illustrative, and is surely out of date almost from the moment of capture. BITNET addresses are normally provided, but the groups can also usually be reached from the INTERNET (consult local gurus) and/or from some other networks. More current information is available by checking such lists and services as: NEWLIST-L@INDYCMS for new science & technology lists; NETMONTH from BITLIB@YALEVM for new lists as they appear; and on the INTERNET, SERVICE@NIC.DDN.MIL for its List of Lists (= SIGLIST). You can also contact the ListServ(er) at your local node for its "global" list of available lists.

List Address                  Subject Matter of Group
(BITNET unless                (selected social science and
otherwise noted)              new technology groups included)
------------                  ----------------------
ANSAX-L@WVNVM                 Anglo-Saxon studies
ANTHRO-L@UBVM                 Anthropology
BIOMED-L@NDSUVM1              Biomedical Ethics
BUDDHIST@JPNTOHOK             Indian & Buddhist Studies
C18-L@PSUVM                   18th Century Interdisciplinary discussion
Comserve Hotlines             Several discussion groups on Communication,
  @RPIECS                     such as: ETHNO(methodology), INTERCUL(tural
                              Commun.), PHIL(osophy of)COMM(unication),
                              RHETORIC(al Analysis), METHODS, all of them
                              accessed @RPIECS
CRTNET@PSUVM                  Communication Research and Theory network
EDTECH@OHSTVMA                Educational Technology
ENGLISH@UTARLVM1              Departments of English discussion
ENVBEH-L@POLYGRAF             Environmental Behavior
ERL-L@TCSVM                   Educational Research
FICINO@UTORONTO               Centre for Reformation & Renaissance Studies
FWAKE-L@IRLEARN               Discusses James Joyce's Finnegans Wake
HEGEL@VILLVM                  Hegel Society discussion
HISTORY@FINHUTC               History
HUMANIST@BROWNVM              General Humanities & Computing Focus (moderated)
IOUDAIOS@YORKVM1              Judaism in the Greco-Roman World
LITERARY@UIUCVME              Contemporary Literature
LORE@NDSUVM1                  Folklore
MBU-L@TTUVM1                  On teaching college composition
NL-KR@CS.ROCHESTER.EDU        Natural Language & Knowledge Representation
  (Internet)
NSP-L@RPIECS                  Philosophy (Noble Savage Philosophers)
PHILOSOP@YORKVM1              Philosophy
PHILOS-L@LIVERPOOL.ac.uk      Philosophy
  (on British JANET)
PMC-TALK@NCSUVM               Post-Modern Culture
PRST-L@UMCVMB                 Political Science Research and Teaching (moderated)
PSYCH@TCSVM                   Psychology
REED-L@UTORONTO               Records of Early English Drama discussion
RELIGION@HARVARDA             Comparative Religion, World Religions [new 1991]
SBRHYM-L@SBCCVM               SUNY/Stony Brook Literary Underground
sci.lang                      Linguistics (USENET Unix group)
SHAKSPER@UTORONTO             Shakespeare
SLART-L@PSUVM                 Second Language Acquisition Research/Training
TEI-L@UICVM                   International Text Encoding Initiative
WHIM@TAMVM1                   Humor Studies
WORDS-L@YALEVM                English language

Preparing for Kansas City

In keeping with the growing interest in new developments in communication and data availability, the CARG Steering Committee (now headed by Robin Cover and Raymond Harder) has announced the following theme for the November 1991 CARG plenary session in Kansas City, at the AAR/SBL/ASOR meetings: Academic Networking and Data Interchange: Text Encoding, File Conversion and Electronic Mail. Your comments and suggestions for enhancing the usefulness of this program are most warmly solicited.

//end #31//
----------------------
<>
coordinated by Robert Kraft
[11 February 1991 Draft, copyright Robert Kraft]
[HUMANIST 15 February 1991]
[Religious Studies News 6.2 (March 1991)]
[CSSR Bulletin 20.2 (April 1991)]
----------------------

Deadlines overtake and harass with increasing regularity. What would we do without them?! I had intended, or at least planned, to use some of the space in this column to provide some updated addresses and bibliographical references to assist readers in finding their way in this increasingly complex electronic world. It is much too large a task for this small space, but some beginnings are possible. Some of the "old" standby sources are still very valuable, such as John J.
Hughes' _Bits, Bytes & Biblical Studies: A Resource Guide for the Use of Computers in Biblical and Classical Studies_ (Zondervan, 1987), including its extensive bibliographies and related information. And there are various periodic publications to help keep the interested reader current. The American-based Association for Computing in the Humanities publishes a quarterly Newsletter as well as its bimonthly journal _Computers and the Humanities_ (CHum), and can be contacted c/o Dr. Joe Rudman, English Department, Carnegie-Mellon University, Pittsburgh PA 15213. The British-based sister organization is the Association for Literary and Linguistic Computing, with its quarterly journal, for which the contact person is currently Dr. Thomas Corns, Department of English, University College of North Wales, Bangor, Gwynedd LL57 2DG, Wales. Some of the more traditional professional societies also offer specific types of assistance with computer technology and tools, such as the American Philological Association or the Modern Language Association (check the respective journals for current addresses).

The aforementioned sources and others like them often can provide pointers to the availability and utility of electronic data and software. Probably the most comprehensive catalogue of electronic data currently available is produced by the Oxford Text Archive, 13 Banbury Road, Oxford OX2 6NN, England (ARCHIVE@VAX.OX.AC.UK). From the same address, a free (at least for the moment) newsletter on _Computers in Literature_ is also available through Dr. Marilyn Deegan, CTI Centre, Oxford University Computing Service.

Academic distribution services for software products of various types and with a wide range of prices (including "freeware") also may be of interest: North Carolina State University has amassed a gigantic collection, although the easiest access to it seems to be through the electronic networks. Duke University produces a catalogue of available software that it has collected.
"WiscWare" is the name for educational software generated in connection with IBM grants, and is available through from that office at 1210 West Dayton Street, Madison WI 53706 (WISCWARE@WISCMACC.bitnet). Apple Macintosh software with similar pedigree has been publicized in Apple's _Wheels for the Mind_ magazine and was also available through Kinko's Academic Courseware Exchange until recently; apparently new arrangements are underway. Much else could be said along these lines but I am running out of time and energy. Indeed, suggestions from readers regarding what centers and services you have found most helpful are invited. OFFLINE will be pleased to pass the information along! The remainder of this column provides some corrections and updates on computer assisted research relating (mostly) to ancient texts and tools, with contributions from the wider group of OFFLINE collaborators. The section on the Comprehensive Aramaic Lexicon project by Steve Kaufman in the 1990 CARG Reports distributed at New Orleans was inadvertently a repeat of the 1989 report rather than the revised 1990 material he had submitted. Anyone desiring a copy of the updated version should request it from me at the following address. My apologies to Steve! Those interested in a copy of the full reports should send $2.50 and specify whether you want the electronic or the printed version. If you desire both include a total of $3.50. The reports will be made available on an electronic file server soon (HUMANIST and/or IOUDAIOS) and can be accessed that way if so desired. Alan Groves Westminster Seminary Philadelphia, PA 19118 GROVES@PENNDRLS.BITNET (or internet @PENNDRLS.UPENN.EDU) for e-mail. A consortium including but not limited to Apple, IBM, NeXT, Xerox, Sun, Microsoft, and the Research Libraries Group is developing a new standard for electronic transcription of written materials in various languages. It is called "Unicode," and is intended to replace the extant ASCII standard. 
Unicode will offer distinct representations for approximately 25,000 characters, including those of every writing system I have ever heard of (all the way to China and Japan) and several more besides. It deals only with the electronic transcription code, not the display or the printing font. It is up to the software to teach the hardware how to display those characters correctly on screen and on paper. This is the future. I have found the Unicode mavens most eager to discuss what they are doing, and fascinating besides. To express interest and get a copy of the proposal, address MICROSOFT!ASMUSF@UUNET.UU.NET. The material you receive will say how to communicate suggestions, etc. There is a rough deadline of 15 February 1991 for making views known to Unicode for inclusion in the version that is due for release this spring, but I would not be put off by that: further versions will be forthcoming and the people involved will continue to revise, expand, and discuss. The most useful thing I learned from them is that the best available legend for the creation of the high-ASCII character set goes back to sources inside IBM itself, viz., that the extra 128 characters whose randomness has so charmed us all really were picked out by two guys on an overnight flight to London for a meeting the next morning. The greatest benefit of all will be that it will be possible to transfer texts in a variety of gaudy language systems from an IBM to an Apple to a NeXT, traveling over various unfriendly mainframes in the process, with all the characters coming through as accurately as low ASCII comes through now. This will make the world a better place.

Provided that all has gone according to January plans, by the time you are reading this the Packard Humanities Institute (PHI) expects to have made license agreements available for a major new release of ancient texts on compact disks to replace its earlier demonstration disks (#1) of Latin and CCAT materials and (#2) of documentary papyri.
Pending final negotiations by the copyright lawyers, look for the following text bases to be available on CD-ROM about mid-February:

(1) The Thesaurus Linguae Latinae (TLL), which has now been coded and corrected through the end of the second century CE, with some 362 authors ready for distribution by PHI (directed by David Packard and Stephen Waite). This constitutes a major expansion and updating of the Latin materials on PHI CD-ROM #1, which appeared at the end of 1987 and has been "out of print" for the past several months.

(2) The Duke Data Bank of Documentary Papyri (from the project directed by John Oates and William Willis), which first appeared on PHI CD-ROM #2 in 1988 and has now been much enlarged and more carefully formatted.

(3) The Cornell Inscription Project (directed by Kevin Clinton), including materials from Attica, Delos, Peloponnese, Central Greece, Delphi, Crete, Ionia and Icaria -- an earlier form of some of these texts was included on PHI CD-ROM #1.

(4) The Nag Hammadi Coptic texts (complete), encoded at the Claremont Institute under the direction of James Robinson, with Sterling Bjorndahl -- an earlier form of these materials was present on the original TLG CD-ROM in 1985.

(5) The Coptic (Sahidic) New Testament (complete), based on Horner's edition as encoded by CCAT but revised and edited by David Brakke under the direction of Bentley Layton (Yale).

Arrangement of these data bases on disk had not been finally determined as of press time, but plans currently call for a division of the material into two disks in order to reduce license fees for those who have no need to work with every set of texts. An annual license fee will be assessed for each collection, but as in the past the Packard Humanities Institute remains dedicated to maintaining fees at nominal levels. Those already working under license agreements for PHI disks #1 or #2 should automatically be sent a proposed license for the new disks as soon as the disks are produced. 
Other prospective users may write to the Packard Humanities Institute (300 Second Street, Los Altos, CA 94022) to request a license agreement. Incidentally, PHI is also involved with publishers of the papers of Franklin and Washington, encoding those historical American collections for a CD-ROM distribution tentatively set for later this year.

The original PHI CD-ROM #1 also included a number of biblical and related texts as well as a miscellany of other materials (see OFFLINE 17) collected and/or prepared by the Center for Computer Analysis of Texts (CCAT) at the University of Pennsylvania. Some of the more widely used texts from this grouping will be carried over onto the new PHI disks described above, and an expanded and updated version will probably appear separately in the near future. In the meantime, the American Bible Society (ABS, representing also the United Bible Societies = UBS) has purchased rights to the CD-ROM data and search software previously marketed by the Foundation for Advanced Biblical Studies (FABS) and has issued a limited edition, experimental biblical CD-ROM that includes some of the texts available from CCAT. ABS and CCAT expect to continue to cooperate in future electronic publication and distribution of such materials on CD-ROM. 
The contents of the experimental "ABS Reference Bible" CD-ROM are as follows (available from ABS, 1865 Broadway, NY NY 10023 for $195, which includes the Innotech FindIt search software):

Ancient Bible Texts & Tools
  Hebrew BHS (Elliger-Rudolph/UBS)
  Morphologically tagged Hebrew BHS (Westminster, provisional)
  Hebrew-English Lexicon (Davidson)
  Hebrew-English Terms
  Hebrew Harmony of Samuel-Kings & Chronicles (FABS)
  Greek LXX (Rahlfs/UBS)
  Morphologically tagged Greek LXX (CATSS/CCAT)
  Greek NT (Aland et al/UBS3)
  Morphologically tagged Greek NT (Gramcord)
  Greek-English Lexicon (Newman/UBS)
  Greek Harmony of the Gospels (FABS)
  Strong's Concordance Numbers (linked to some texts)
  Latin Vulgate (Fischer et al/UBS)

English Bible Versions and Tools
  Authorized (King James) Version
  New King James Version
  New American Standard
  Revised Standard Version
  New Revised Standard Version
  Today's English Version
  English Translation of Greek LXX (Brenton)
  English Harmony of Samuel-Kings & Chronicles (FABS)
  English Harmony of the Gospels (FABS)

Other Texts and Tools
  Reina Valera Spanish (1960 revision)
  Luther German Bible and Apocrypha (1984 edition)
  English Translation of Josephus (Whiston with Loeb tags)
  English Translation of Apostolic Fathers (Lightfoot-Harmer)
  Abingdon's Dictionary of Bible and Religion

//end #32//

----------------------
<>
coordinated by Robert Kraft
[04 April 1991 Draft, copyright Robert Kraft]
[HUMANIST and IOUDAIOS 05 April 1991]
[Religious Studies News 6.3 (May 1991)]
[CSSR Bulletin 20.3 (September 1991)]
----------------------

A Bibleless OFFLINE?

One of these days, I keep telling myself, there will be an OFFLINE column that selfconsciously avoids mentioning biblical and/or related electronic texts, tools, conferences, etc. This is not that column! 
Indeed, insofar as OFFLINE emanates, one way or another, from the Computer Assisted Research Group (CARG) of the Society of Biblical Literature and from people associated with CARG, the tendency is strong to focus on "biblical" interests. But we are also well aware that the scholarly world OFFLINE attempts to address is much larger and more varied, and that world also deserves to be served more directly and with regularity. Some of this awareness will be obvious in Robin Cover's introduction and invitation (with an appendix on electronic publication) to the planned CARG program for the forthcoming annual meetings in Kansas City. We invite you to help increase our awareness of non-bible-related developments, issues, needs, etc., for future columns and programs. Meanwhile, please bear with us.

A major international conference precisely on computers and biblical studies is about to take place in Germany, and a contribution from Alan Groves appears below to alert you to that fact. Alan also reports that the work he has been directing (and doing) to prepare a morphologically analyzed electronic text of the Hebrew Bible (BHS) is scheduled for release by early summer. Contact him at Westminster Seminary, POB 27009, Philadelphia PA 19118; (215) 887-5511; GROVES@PENNDRLS (BITNET; add .UPENN.EDU on the Internet).

Although Jack Abercrombie's description of the humanities fileserver at Penn, which rounds out this issue of OFFLINE, is not specifically focused on biblical materials, it does draw attention to the fact that it is becoming increasingly useful to know what is on network fileservers and how to access them. For people with biblical interests, for example, the Georgetown University project on electronic archives has just issued an extensive description of relevant centers and holdings throughout the world. It can be retrieved quickly and easily from the HUMANIST listserver at BROWNVM. 
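For readers unfamiliar with the mechanism, listserver retrieval is plain electronic mail: one sends a message whose body contains a command, and the server mails back the requested material. A minimal sketch in Python (a modern language used purely for illustration; the INDEX command shown is typical LISTSERV usage, and no particular file names on the BROWNVM server are assumed) composes such a request without sending it:

```python
from email.message import EmailMessage

def listserv_request(server: str, command: str) -> EmailMessage:
    """Compose (but do not send) a retrieval request for a listserver.
    The server reads its commands from the message body, not the
    subject line."""
    msg = EmailMessage()
    msg["To"] = f"LISTSERV@{server}"
    msg["Subject"] = ""              # ignored by the server
    msg.set_content(command)
    return msg

# Ask the server hosting HUMANIST for an index of its archived files.
msg = listserv_request("BROWNVM.BITNET", "INDEX HUMANIST")
print(msg["To"])          # LISTSERV@BROWNVM.BITNET
print(msg.get_content())  # INDEX HUMANIST
```

The reply arrives as ordinary mail, so the same procedure works from any account that can reach the network, with no special software on the requester's end.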
Similarly, various reviews, articles, bibliographies, and the like, are available in the same subject area from the IOUDAIOS listserver. These represent only a tip or two of the iceberg, which is already quite large and growing rapidly, in a wide number of fields of which biblical studies is but an example. So even if this exact subject matter is not of particular interest, pay attention to the procedures and possibilities, since they can be applied much more broadly!

Computer Assisted Research Group (CARG) Theme for 1991: "Use of Academic Networks and Electronic Text Interchange," by Robin Cover, chair

CARG has designated "Academic Network Awareness" as its theme for 1991 and is planning a number of activities and forums focusing on that theme for the annual AAR/SBL/ASOR meetings in November (Kansas City). It is hoped that members of these and other professional societies will be able to contribute to and benefit from the discussion during the calendar year. Several broad goals are involved in this agenda, as outlined in the following paragraphs. These areas are, succinctly: (1) increasing awareness of academic networking opportunities; (2) providing practical assistance in finding a network host and identifying forums germane to a particular academic domain; (3) introducing concepts behind "safe networking" and data integrity, and acquiring free personal computer utilities that guarantee reliable networking; (4) summarizing emerging standards for text encoding which impact textual scholarship; (5) discussing accessibility of machine-readable texts via public electronic networks: copyright, intellectual property, ownership of knowledge in electronic texts.

(1) Acquainting interested society members with the opportunities for enhanced scholarly productivity through the use of academic networks. The traditional uses of electronic networks for fast personal mail ("email") and collaborative research efforts continue to provide strong incentives for scholarly networking. 
Information sent by email yields portable, revisable data (vs. FAX) and obviates the need for coordinating human schedules for communication (vs. telephone-tag). Email is very fast: with the high-speed Internet backbone, file transmission can now be a matter of seconds or minutes, which supplies a clear advantage for trans-continental communication. Within institutional settings email is financially economical, usually being provided free to faculty and at modest cost to students. Email is ecologically sensitive (electronic bits are "bio-degradable" non-pollutants, as opposed to paper). Email makes mass distribution of textual information or other data effortless and trivial: one can post a note to a dozen colleagues in a work group or to a thousand people on a discussion forum just as easily as mailing to one person -- sans paper, envelopes, stamps or conscious attention to any addresses. It matters not whether the mail is a message of four words ("Please send comments ASAP") or a document of 100 pages, whether the addressees are all on the local campus or in a dozen different countries.

Beyond these obvious uses of electronic networking for personal communication, network forums are now available for a host of other services. CARG will attempt to provide paper and electronic documentation on each of these areas:

Online Public Access Computer Systems: University Library Online Catalogues and Databases. In some cases, document browsing and document delivery are supported along with elaborate searching by author, title, subject, keyword, etc. Lists are available for over 160 major schools in the US and Canada, and for 60 libraries in the UK. These online catalogues and databases are accessible via remote login on the Internet, via direct dialup, and sometimes via academic bulletin board systems. Typically there is no charge to the institutional user. See OFFLINE 29 by J. O'Donnell.

Internet-accessible Bulletin Board Systems (BBSs). 
See the information provided below by Jack Abercrombie.

Electronic Journals, Newsletters, Digests and Bulletins. Ephemeral publications in electronic format are now quite numerous. Refereed electronic publications are becoming increasingly attractive as alternatives to traditional journals, which are becoming unaffordable for individuals and even institutional libraries, and which cannot guarantee currency (many serial publications, even from prestigious publishers, require 6-24 months lead time).

Academic Discussion Forums and Technical Support Groups. Over 1500 such groups are accessible on general and specific topics. Subscription to these forums is free. Though such groups vary in quality and focus, they provide access to advice and expert testimony on a global scale. Scholars share personal expertise, subject bibliographies and other forms of knowledge that cannot be distributed easily in traditional communication media. See OFFLINE 31 for a brief selection of addresses.

Electronic Text Archives. Archives of machine-readable texts are maintained at dozens of universities and research centers worldwide. Many archive sites provide searchable title lists for electronic texts, and on a more limited basis, access to the machine-readable texts themselves. Some sites even distribute texts through the networks. Michael Neuman at Georgetown University has been coordinating the collection of detailed information about such archives.

Public Domain Software (programs, fonts, documentation). Quite literally thousands of public-domain or shareware programs are available on public fileservers for Macintosh, IBM/PC, Atari and other personal computers. With Internet (FTP = File Transfer Protocol) or mail-based communications, one may access any of hundreds of other computers worldwide to download useful software and similar resources. 
Internet is emerging as the new university network standard: through various communication protocols, one may "log on" in real time to remote computers around the globe to access the desired information.

Archive Servers. To assist in locating specific information on the global network of research computers, online interactive databases called "archive servers" provide indexed retrieval facilities both in real time and via electronic mail. Archive servers thus function as massive indexes to data on all other networked computers.

Database Facilities for Searching Remote Archives. Many discussion forums maintain archives of permanent data and annual logs of the discussions: these subject-specific archives may be queried using standard database rules.

Nameservers (Electronic White Pages). Just as most universities have databases to provide local user directories, several public nameservers assist in locating scholars by name or lists of scholars having similar research interests.

(2) Providing demonstrations of network capabilities, and practical assistance for the individual in making network connections. While larger universities usually provide adequate support to faculty and students through Academic Computing departments, smaller institutions may be unable to meet the specific needs of AAR/SBL/ASOR members who need information about academic networking. CARG thus wishes to provide basic assistance for the non-supported or under-supported scholar, and even for non-institutionally affiliated members. If networking facilities are not provided by a local institution, what alternatives are available? At the Annual Meeting, CARG will attempt to provide live demonstrations of many of the networking facilities, and will make accessible general networking information (networking guides, bibliography). 
Society members are invited to communicate to CARG in advance regarding special networking issues or problems that might fruitfully be addressed in special program forums (workshops, panels).

(3) Providing specific instruction on reliable networking: conducting successful network transmissions. The computers used for academic networking represent a vast array of hardware architectures, operating systems, communication protocols, encoding standards and other details that sometimes present problems for reliable networking. In general, one must accommodate the lowest common denominator (a restricted subset of the 7-bit ASCII code standard) to ensure data integrity. CARG wishes to supply summary information and useful program utilities (the latter for Macintosh and IBM/PC) for the variety of data formats and standards used in compression, archiving and 7-bit encoding that users are likely to encounter. Providing simple programmatic means for dealing with the multiple data formats used on academic networks is not difficult if one understands the basic concepts. Most such utilities contain internal integrity checks verifying to the user that transmitted files are authentic, are not infected with viruses, and that the data has been decoded accurately. Even binary data such as graphical images can be sent safely over networks; "shareware" utilities allow compression of images for electronic transport and facilitate conversion between the many standard formats of graphical data.

(4) Acquainting scholars with the emerging computing standards as these impact networking and data interchange in the realm of biblical and classical scholarship. New standards are emerging that will make multi-lingual computing far easier than it is today. At the character code level, two major international initiatives are attempting to define a multi-byte encoding standard which will provide operating-system level support for many of the world's writing systems. 
Such character encoding will replace traditional ASCII (128 or 256 character positions) with standard code space for 65,000 or more characters. At the annual meetings, we will summarize the status of the two initiatives (ISO 10646 and UNICODE), and discuss the practical and theoretical implications for multi-lingual scholarship and its treatment of sacred texts. An even more important issue is the recent development of encoding standards for information interchange at a higher level: the use of standard transliteration systems, standard representation for document structure, standard encodings for text-critical data, etc. These standards will make it possible for programmers to build general software (editors, browsers, search programs) to access and exchange data across a wide variety of computing platforms and applications packages. Even current-generation software will be able to incorporate compliant conversion routines at basic levels in order to permit sharing of electronic data across computing platforms and applications programs. The results of the international Text Encoding Initiative (TEI) will be presented as they bear upon high-level "markup" or encoding of biblical, classical and other religious texts. The TEI has set forth a preliminary set of encoding guidelines, but detailed domain-specific implementations are being left to specialist groups who are intimately acquainted with the languages, literatures and other text-specific features. The TEI encoding guidelines are based upon the Standard Generalized Markup Language (SGML). SGML represents one of several efforts by international standards bodies and a growing number of information scientists to formalize a standard means for representing textual data as information. SGML belongs to the realm of descriptive markup systems in that it provides a standard means of using descriptive labels for content objects, their attributes and their relationships. 
More precisely, SGML is a metalanguage for defining descriptive markup languages used in the structuring of information. A basic premise of SGML is that texts are composed of discrete content objects, and that supplying meaningful names for these delimited textual objects, their attributes and their hierarchical relationships, independent of the possible paper or screen appearances, is one of the most powerful means of transforming text into information units that may be addressed sensibly by appropriate software. The principle applies not only to document substructures, but to textual or graphic information objects in databases of any kind. The designation of textual objects and associated attributes by name is called "markup," since the naming tags are entered directly into a text, at least in the archival and interchange formats. As opposed to the proprietary and cryptic formatting codes customarily used (e.g. by typesetters and wordprocessors) in data files, structural markup is entered in readable characters using terms that mnemonically describe what tagged textual objects actually are: journal article, section, paragraph, sentence, note, quotation, email address, etc. The attributes which adhere to the named textual objects are likewise supplied in human-readable terms. Such structural markup is distinct from the content of a text, just as control characters (tab, line feed, carriage-return, record-end) are in most text files. As in the case of standards for multi-byte multilingual character encodings, adoption of TEI/SGML encoding methods will make it much easier to develop standard applications that are aware of the language in which a text (a word, phrase, sentence or quotation) is written. 
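A hypothetical fragment makes the point concrete. The tag and attribute names below are invented for illustration only (a real application would declare its own in an SGML document type definition); note that every label names what a textual object is, never how it should look, and that a language attribute travels with the quoted Greek:

```sgml
<article lang="en">
  <title>An Edition of a Primary Text</title>
  <section n="1">
    <p>The editor observes that
      <quote lang="grc">en archei en ho logos</quote>
      is attested in all witnesses
      <note resp="editor" type="text-critical">except one late
      minuscule, which omits the phrase.</note>
    </p>
  </section>
</article>
```

Because the tags describe structure rather than appearance, any compliant program can decide for itself how a quotation or a text-critical note should be rendered, searched, or indexed.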
With SGML encoding, a language attribute may be attached to all textual objects, allowing intelligent multi-lingual text processing at a low level: language-specific spell checking, hyphenation rules, sort and collation sequences, bibliographic case-conversion rules, keyboarding conventions, screen rendering, invoking fonts and scripts, controlling the direction of writing, line wrapping, invoking an online thesaurus, indexing for full-text retrieval, etc. (5) Discussing the accessibility of textual material on public networks: a personal point of view. [What follows is intended primarily as personal commentary, and does not necessarily represent the consensus of opinions held within the CARG group, nor the views of SBL, AAR, or any affiliate societies.] An immensely complicated issue germane to academic networking is the availability -- or lack of availability -- of textual materials on high-capacity research networks. The escalating demand for network resources constantly challenges the ability of national and regional policy-makers to reckon with economic and political questions which must be answered. How shall government, industry, higher education and the general public share the financial burden for installing advanced network technology? Which information management and transport model should serve to guide the design of the delivery systems: a railroad model (private, free-market enterprise, without government intervention), or an interstate highway model (public network infrastructure, as in Japan and most of Europe)? How shall global network resources be allocated? Within this arena, the voice of humanities scholars will be heard only faintly, if at all: those who will broker access to the electronic information highways are the powerful conglomerates in the telecommunications industry. 
We may only hope that network facilities installed for these capital-rich interests will adequately serve the needs of humanities computing, and that the networks will not become "toll roads between information castles" whose tariffs are so expensive that only elite institutions can afford to travel them. Another kind of network "access," however, is an arena in which it seems to me that textual scholars can and should take a much more active role in determining their own destiny: the realm of legal access to electronic versions of their own written intellectual creations. Having unlimited access to the most sophisticated research networks may be practically irrelevant if power brokers outside the scholarly world (publishers, libraries) are those who establish legal ownership of electronic knowledge created by productive scholars. Textual scholars must awaken to the fact that the electronic form of their written creations represents a vastly more important reservoir of knowledge than does paper scholarship, and that retaining control over this knowledge is critical to the academic enterprise. Publishers' control over copyrights (ownership of intellectual property) was relatively inconsequential so long as paper was the only important publication format: there is little market in counterfeiting books. Now that electronic text has become a viable publication format, allowing easy copying, rapid dissemination and strategic opportunities for text enhancement, it becomes imperative that we clarify who has ownership of the intellectual property in scholarly writing. And who shall broker power over this form of knowledge? Rather than addressing the entire domain of intellectual property inherent within published scholarly works, we may consider one special class of documents which are fundamental to literary, linguistic and historical inquiry: the primary texts from the past. Who should control access to electronic versions of pre-modern texts? 
Should any person or institution hold exclusive copyright or effective monopoly over electronic versions of such texts? Or are these texts the heritage of all people, present and future? Should traditional paper publication (e.g., in editio princeps) by one scholar or publishing firm prevent others from digitizing and disseminating an older text in electronic form? Who indeed may own copyright to an electronic version of a sacred text: The Deity? The editor of the editio princeps? The publisher? If pre-modern texts constitute a universal cultural legacy, and if access to electronic text is a unique form of scholarly empowerment, then should not monopolies imposed through legal or economic sanctions be vigorously opposed? If traditional interpretation of current copyright laws will not adequately support the needs of textual scholarship in this special area, in what ways may professional societies, funding agencies and publishers work cooperatively within the publication arena to guarantee the freedom of knowledge in electronic text? How may editors, publishers, database maintainers and data-preparation projects pursue their "free enterprise" objectives without compromising democratic scholarly access to the intellectual traditions in our literary past? A key point of departure in this discussion is the conviction that electronic text represents an expression of human thought and language quite distinct from paper representation of that same text: electronic searching, retrieval, and quantitative analysis of digital libraries constitute more powerful methods of access to knowledge than mere human "reading" of a printed text. In both large and small corpora of machine-readable text, elaborate historical and literary-critical hypotheses may be tested using sophisticated query languages and mathematical algorithms, yielding specific and generalized results that could never be validated or quantified by humans merely reading the printed text. 
Maintaining legal or economic control over electronic text thus represents a form of scholarly empowerment and access to intellectual tradition that is not usually at issue in traditional paper libraries. It may be asked whether a publisher's traditional copyright claims upon a paper text publication should extend to legal control over the electronic expression of that primary text. Proprietary commercial control appears questionable in light of the common consensus that "texts" from the remote past should be the cultural heritage of all people equally. Proprietary control of access to such a primary form of intellectual tradition likewise seems to be at variance with the ideals of free scholarly inquiry. Assigning the burden of electronic text publication to traditional publishers as a commercial enterprise will simply compound the problem: we already have examples of electronic texts issued in proprietary (encrypted) data formats intelligible only to publishers' proprietary software. Linking proprietary data with proprietary commercial software is doubly restrictive because investigators are then confined to preconceived ideas about what kinds of questions may be asked of the text, and users are held captive to (sometimes unconscionable) publishers' prices for the commercial package. Until now, relatively little attention has been paid to the rights of the individual scholar and of the academic community in assigning "ownership" to knowledge in electronic text. But the days may be numbered when a prestigious publisher can intimidate the scholar into consenting (in the small print) that for the privilege of paper publication, he or she must forfeit rights to the intellectual property in electronic format. Such a legal clause usually means: "you forfeit all rights to your own intellectual creation, and we shall decide later how much it will cost you to buy back the right to access your knowledge, or to share it in an electronic environment." 
It is understandable that a scholar desperate for tenure or promotion or academic stature might consent to such terms. But to surrender passively the electronic copyright is to sell tragically the intellectual birthright. We should reflect on the fact that not just the aspiring scholar sells out under this typical arrangement, but, when primary texts are involved, textual scholarship suffers as well. Thankfully, of course, not all publishers will exploit this basis of power: already there is evidence of publishers who are willing to recognize that electronic versions of literary texts do not compromise sales of printed volumes, and that unrestricted public access to primary texts in electronic format is critical to the progress of textual scholarship. The focus of discussion above has been upon so-called "primary texts," often written in the languages of the ancient orient: within this domain, it is clearest that "ownership" of text in electronic form belongs to ancient, modern and future generations, but not to publishers. Similar concerns might be raised, however, in relation to major translation projects such as English Bible versions: a translation clearly contains modern intellectual effort, but it also emerges directly from the ancient text. More generally, it might be desirable to apply the ideals of public text and public access in cases where major editions (original texts or translations) become or are projected to become canonical or standard textual authorities in the field. On the other hand, derived texts containing elaborate scholarly annotation (morphological and syntactic tags; textual markup for discourse; lexical mappings) are clear cases of textual enrichment in which the textual database is distanced from the pure electronic "text" representation. 
In such cases, dissemination of electronic text for scholarly purposes is to be expected, but protecting both the "pure" form of the text and crediting the scholarly efforts in providing textual analysis are legitimate concerns. It will probably not be disputed that proprietary control over electronic versions of text and/or convoluted copyright arrangements currently present legal and economic barriers to scholarly research. These barriers should be eliminated if possible, both retrospectively and for the future. It also seems clear that the scholars responsible for primary intellectual effort are most often those who benefit least from the financial rewards in publication: publishers become the legal custodians of books and typically award a disproportionately small percentage of book revenues to the creators of knowledge. The question is how to rectify these inequities. On the one hand, proper provision must be made to remunerate both editors and publishers, to protect publishers' investments and to encourage free enterprise. Similarly, the prerogative of authors and editors to publish texts first in traditional print copy should be protected, for the quality of scholarly text publication and individual scholarly rights require such provision. On the other hand, if the knowledge in an ancient text belongs primarily to the human community, free enterprise efforts should be constrained within this domain of "ancient text as international cultural heritage." If we grant that electronic texts from our literary past embody knowledge of a unique kind, we must grant that the democratization of that knowledge cannot be realized so long as control over electronic texts lies outside the scholarly community or for any reason in the hands of restrictive entrepreneurs. Given the complexity of these issues and the vested interests of publishers in traditional publishing forums, we cannot expect to solve any of these riddles easily. 
However, it seems both appropriate and of urgent concern that our professional societies assume responsibility for discussion of the special problems which now face textual scholarship as we realize that the most useful and perhaps purest intellectual form of text is the electronic format. It would seem proper for professional societies to take affirmative steps to sponsor broad-based discussion aimed at a consensus which will reshape the assumptions and expectations that operate within the world of publishing and editing primary texts from the past. One might propose, for example, that the privilege of publishing primary texts in traditional print format should carry a concomitant responsibility of placing in "public access" or in the publicly available media an electronic copy of the text such that access to the knowledge in this literary heritage is guaranteed not to become the privilege of the institutional elite (examples can be supplied) or of the rich (similar examples can be given). Federal and other public funding for text publications might reflect these ethical values, as would the policies of agencies authorized to grant publication rights to primary text material (museums, university and national archives, departments of antiquities). There would be clear guidelines describing the realms of textual scholarship within which the privatization of electronic knowledge is judged not to be in the best interests of the academic community, and some realms (perhaps) in which proprietary control over electronic data is judged not problematic. In the term "public access," we do not envision legally unprotected texts, but the clear expectation that machine- readable texts essential to textual scholarship may be distributed freely, for example, from public fileservers over research networks: no one may own them, no one may sell them, and no one may restrict access to them. 
Such ("copyleft") legal instruments are already established in other domains, and can be adapted for governance of primary textual materials. Variations of such legal instruments may be envisioned, but one which would encourage public funding for the enrichment of electronic text archives is this: anyone who alters or enhances a "copylefted" electronic text must pass on to others this enhanced text with the same freedoms under which it was received. It must be acknowledged that electronic texts which exist today in universities and national archive centers will not easily be extracted from economic and legal entanglements of the past. Many such texts have been given electronic incarnation through extraordinary efforts, sometimes at great personal sacrifice to the visionary individuals and research efforts which keyboarded, scanned and corrected these texts. Many electronic editions have been produced under delicate legal arrangements. We must further acknowledge that basic economic realities must be faced even in current electronic publication efforts. Funding must be available for archiving of texts on file servers, for supplying network access to the data, and for other maintenance operations. It must also be decided who, if anyone, should become the official custodian(s) of our canonical electronic texts. Thus, our vision pertains primarily to the future of textual scholarship. We must ensure that literary texts prepared for paper publication indeed are also delivered in electronic format. It appears certain that paper volumes will remain the medium of choice for scholarly editions in decades to come. Until computers can be made to simulate the appearance and feel of books -- such that pages may be dog-eared, such that text may be annotated with arrows and lines in several colors, such that marginal and interlinear notes may be scribbled with ease, such that reading causes no eye strain -- the tradition of the codex will remain supreme. 
But the electronic representation of ancient writing is a treasure of a different sort, and publishing policies must be revamped to guarantee that a digital edition is rescued from that traditional process of "getting on paper." Virtually all texts today are electronically encoded before they are published in print format, so it will require little more than resolve and responsible policy to guarantee that "clear text" or minimally marked-up text emerges at some point in the text publication process. In the same reshaping of values, publishers, editorial boards and funding agencies who wish to respect the public nature of the intellectual tradition in ancient literature will take special care to eliminate or minimize the imposition of any private claims upon the electronic form of the text. Just one perspective from among many legitimate perspectives has been sketched above. Readers wishing to contribute to these discussions are encouraged to enter into thoughtful dialogue concerning ownership of electronic text -- with colleagues, general editors and publishers -- when negotiating book and article agreements, and to participate in society forums that address these concerns. Communiques may be directed to Robin Cover via surface or email. (Verbatim copying and redistribution of this article is permitted without royalty; alteration is not permitted.)

Robin Cover
6634 Sarah Drive
Dallas, TX 75236 USA
Tel: (1 214) 296-1783
FAX: (1 214) 841-3642
BITNET: zrcc1001@smuvm1
Internet: zrcc1001@vm.cis.smu.edu
Internet: robin@utafll.uta.edu ("uta-ef-el-el")
Internet: robin@ling.uta.edu
Internet: robin@txsil.lonestar.org

Announcing the Third International Conference on Bible and Computers, at Tuebingen, 26-30 August 1991, by Alan Groves

Sponsored by: Association Internationale Bible et Informatique (AIBI)
Organized by: University of Tuebingen, Prof. Harald Schweizer, Chairman, Organizing/Program Committee
Under the auspices of the Deutsche Forschungsgemeinschaft. 
The Association Internationale Bible et Informatique (AIBI) and the University of Tuebingen invite you to attend this third international Conference on Bible and Computers, with a special focus on Interpretation, Hermeneutics, Expertise. It will be hosted by the old Eberhard-Karls-Universitaet, Tuebingen, from Monday, August 26th, to Friday, August 30th, 1991. English will be the language of the Conference. The registration fee will be US$130 for early registration and US$160 after April 1st, 1991. (For registered AIBI members: US$100, or US$130 after April 1st, 1991.) Contact:

Prof. Harald Schweizer
Reutlinger Strasse 12
D-7400 Tuebingen
Telefon: 49-7071-78646
Fax: 49-7071-2944518 (attn: Dr. H. Schweizer)
E-mail: IAHS001@CPX.ZDV.UNI-TUEBINGEN

Accommodations and management of your stay in Tuebingen can be arranged by the Congress Buro:

Verkehrsverein Tuebingen
An der Neckarbruecke
Postfach 2623
7400 Tuebingen
Telefon: 49-7071-35011
Fax: 49-7071-35070
Telex: 49-7071-7262780 tour D

The rich and varied program will include an opening lecture by the host, Harald Schweizer (Tuebingen), on "The hermeneutical concept of the 3rd AIBI-Congress"; sections with papers and panel discussions on "Interpretation of a specific text," "Methodology," "Preparation of the text," "Software-Design/Programming," and "Hermeneutics"; demonstrations of various specialized software products; and even an excursion to Stuttgart (Bibelhaus, Landesbibliothek, Akademie Hohenheim). See you there?!

The Penn Language Bulletin Board System, by Jack Abercrombie

The Penn Language Center and the Center for Computer Analysis of Texts (CCAT) are pleased to announce the establishment of The Penn Language BBS (Bulletin Board System). The BBS is available to any computer user who can reach it via the Internet, as rm105serve.sas.upenn.edu (IP address: 128.91.13.73). At the login prompt, type "bbs" to move to the bulletin board system on this machine. Once there, you will have to login to the bbs. 
Here, when prompted, type "new" if this is your first time using the bbs. You will then be prompted for your user id, password, full name and terminal type. Provide this information. If you have already established an account with the bbs, type in your user id and password when prompted. For example:

login: bbs
login: new
login: rkraft
password: xxxxx
full name: Robert A. Kraft
terminal type: vt100

Like most bbs's, this one is menu driven. To see the main menu, type for help. All the bulletin board's options will then be displayed. One option that may be of interest permits reading sub boards. There are four boards provided currently: Public, PLN (= Penn Language Newsletter), CCAT (= Center for Computer Analysis of Texts), and CCATNEWS (= information on CCAT activities). To select a board, type . Now choose the board you wish to read by typing in the board's exact name (e.g. ). From here on, the instructions on how to read the individual files appear on the screen. The BBS contains information on the language programs at Penn. Currently files related to the Penn Language Center and CCAT are available. Here is a partial list of some of those files:

CCAT
  What is CCAT
  List of approved text distributors

CCATNEWS
  Article on Apple HyperCard Project
  Article on IBM Cinema Project
  Update on Cinema Project (No. 2)
  Online Notes 88, CCAT's newsletter
  Online Notes 89, CCAT's newsletter

Users may post new information in the public bulletin board provided on the system. To move to the PUBLIC board, type: Make sure that the material you post relates to language learning and research only. From time to time we will examine the material posted there and delete inappropriate information. We ask that you show some responsibility and choose "postings" related to language instruction only. Thank you. To exit from the BBS, return to the main menu and type for Goodbye. This BBS is an experiment in delivery of information normally circulated at Penn on paper. 
If it proves successful not only internally but externally, we will add other information and improve the facilities. Questions posted on the bulletin board will be answered by electronic mail. Send such questions to: JACKA@PENNDRLS.upenn.edu. //end #33// ---------------------- <> coordinated by Robert Kraft [11 July 1991 Draft, copyright Robert Kraft] [HUMANIST & IOUDAIOS 12 July 1991] [Religious Studies News 6.4 (August 1991)] [CSSR Bulletin 20.4 (September 1991)] ---------------------- This column will be brief. It almost was not written at all, due to my inability to find sufficient time. And why was time so scarce? Largely because of the snowballing success of the emerging new forms of scholarly communication and interaction on the electronic networks. The good news is that exciting things are happening through electronic conferencing (discussion lists, information posting, review of publications, etc.). The bad news is that if one has wide interests and tries to keep up with them by using the electronic resources, it eats up huge amounts of time. All those dire warnings about the "information explosion" are being fulfilled even at the level of academic discussions! We are in a pioneering situation. Old interests and techniques are being adapted to the new technologies, and new things are being tried. Some will work effectively, some will have to be abandoned. There is excitement in taking part in these explorations, but frustration at not being able to keep up with everything at once. There is an air of immediacy -- topics get stale in a day or two, and if you blink, you might miss something really important and/or interesting. The ability to retrieve messages posted in the past is at best spotty, depending on the sort of system being used and the software installed on it. Some electronic groups keep archives of past communications, some do not. And the ability to work effectively with those archives varies greatly from system to system and user to user. 
Necessity will require that suitable solutions be explored, and the effective ones exploited. "Spy" software may do the trick, where the user identifies the subjects of interest and the software automatically chooses pertinent items from the available materials. Or service centers may be established where actual people (another job for graduate students?!) scan the incoming information and filter it into appropriate channels according to the expressed interests of the ultimate users. Somehow it will be brought more effectively under control, but how and how soon are still quite open questions. What could possibly be so interesting and time consuming, you ask? That will vary from person to person. Unfortunately for me, I am rather nosy, and like to know what is going on in a wide range of fields/areas. Thus I belong to such electronic discussion groups as HUMANIST, in my experience the grandparent of this type of interchange and the most diverse in coverage; IOUDAIOS, with its specific focus on Judaism (including earliest Christianity) in the Greco-Roman world; ANSAX-L, centering on matters Anglo-Saxon in the medieval period but ranging into many associated areas including the transmission of early traditions and literatures in the west; FICINO, dealing with early modern western literatures; RELIGION, which hasn't really formed a distinct personality yet but is avowedly comparative in its orientation; and ROOTS-L, for genealogy research (everyone should have a hobby, no?). I subscribe to other groups as well, but you get the picture (see OFFLINE 31 for a fuller, if now quite outdated, general listing). As more people join the groups, with more interests and issues to be discussed, the networks and mailboxes bloat with messages of all sorts to be read, some of which themselves call for a response. But what has one learned? What possible value can it all have, especially in relation to the time needed to keep up? 
Does it really have an effect on one's scholarship, other than keeping one from having time to pursue it? Fair questions, and hard to answer adequately at this juncture. I certainly have gained lots of information that I would not have encountered otherwise, both about how other scholars in adjacent fields think and work and about what other people know or think they know. What textbooks are valuable for teaching which subjects? What effect has "deconstructionism" had outside of modern English literature departments? Were there many female scribes in the middle ages, and what difference would it make to us if there were? How were palimpsest manuscripts produced, and how can we exploit technology to recover their original contents? How did the traditions about finding solace in the wounds of Jesus develop? What constituted "patronage" in the Greco-Roman world, and how did that institution affect the development of Judaism and early Christianity? What is the best source for detailed information on the conventional designations, contents, etc., of the Dead Sea Scrolls? These are actual matters discussed during late spring and early summer of 1991, all of which have direct significance for my own teaching and scholarship. And many others like them have passed across my screen. New groups and endeavors have also started up, calling for even more of one's immediate attention. The Bryn Mawr Classical Review is fully active electronically in producing prompt and thorough reviews of publications in classical studies. These will also appear in print somewhat later. IOUDAIOS has started a similar review project for publications in its areas of interest; some older groundbreaking works such as Robin Lane Fox on Pagans and Christians (1987) have received extensive discussion already, in response to readers' queries. There are also refereed electronic journals, such as "Postmodern Culture." 
The RELIGION list has started collecting class syllabi and related materials, and there has been some discussion of suitable textbooks for various religion courses. Many different types of material have been deposited on "ListServers" from which any subscriber can obtain specific items at will -- texts, bibliographies, drafts of articles, special collections of data, and the like. And there is much more: announcements and reports of conferences, job postings, product announcements and reviews (what is the best footnoting software available for the Mac?), obituaries, requests for exchange of housing during sabbaticals, locating addresses, getting advice about graduate programs. The list goes on and on. A credit course on computer programming for the humanities was offered recently over the BITNET network. An electronic Hebrew Users Group newsletter is being produced, and may or may not be listed in the new electronic directory of academic e-conferences available from the HUMANIST ListServer. The new archaeology list (ARCH-L@DGOGWDG1) will certainly appear there, and perhaps also "Egypt-net," about which I know very little. Our world is rapidly changing. Some of us will be caught up more than others in the process, but one way or another we will all be affected by it. Active engagement in the electronic networks is not for everyone, even if everyone had the same opportunity to be involved. But the potential is tremendous, at a wide variety of levels -- information gathering, testing ideas, interactive discussion, review and publication, distribution and revision. And our personal worlds will move more and more in these directions, along with the worlds of our colleagues and students and children. Get involved if you are so inclined, and in any event stay informed. As was emphasized in OFFLINE 33, these and related issues will be the theme of the CARG sessions at Kansas City in November. Meanwhile, if I don't answer your mail immediately, please be patient. 
I may be floundering in a glut of newly released information! //end #34// ---------------------- <> coordinated by Robert Kraft, guest columnist Richard Jensen [09 October 1991 Draft, copyright Robert Kraft] [HUMANIST and IOUDAIOS and RELIGION 10 October 1991] [Religious Studies News 6.5 (November 1991)] [CSSR Bulletin 20.4 (November 1991)] ---------------------- Over the years, OFFLINE has had little to say about "database management" approaches to manipulating data on the computer. This is largely because the editor himself has had little experience with databases, and indeed, harbors some prejudices based on the early years of development, in which transfer of materials in and out of databases, and between databases, was often rather difficult. Furthermore, since databases atomize the material and store it in various compartments for easy retrieval, they can restrict the user's ability to be flexible and open-ended in manipulating the material. For my work, standard text files with sophisticated search and retrieve software seemed preferable in most instances. But over the years, things have improved enormously in the development of database technology, and few of the old reservations remain valid today. Thus I was elated to encounter on "the networks" (where else?) the following material on database (and other) programs for historical research. It has been excerpted, with the author's permission, from a draft of an article on "Text Management" by Richard Jensen (CAMPBELD@IUBACS, Professor of History, University of Illinois, Chicago), which is scheduled to appear in full form in the Journal of Interdisciplinary History 22.4 (Spring 1992). Catalyst for the article was provided by the anthology entitled _History and Computing III: Historians, Computers and Data, Applications in Research and Teaching_, edited by Evan Mawdsley, Nicholas Morgan, Lesley Richmond and Richard Trainor (Manchester: Manchester University Press, 1990). 
Where it seemed useful, I have inserted explanatory comments between square brackets in the text of Professor Jensen's treatment. ----- ... The British historians who gathered at the 1988 Glasgow conference had their metamorphosis a decade ago, transfixed by the PCs provided by the productivity-minded Thatcher government. The conference papers display no ideological bias, save a firm commitment never to squeeze historical reality into the codes demanded by SPSS [a widely used computer program in the social sciences]. In a quest for purity through empiricism so extreme that David Hume would have gasped, the conferees congratulate themselves on how far they have come from the "bad old days" (p. 156) of the "number crunching self-styled 'social science historians' of the late 1960s and 1970s" (p. 180). For this report I will both review the book and examine some software that allows for text management more advanced than word processing. The book's most useful essay is by Daniel Greenstein (p. 60). He makes a good case for the advantages of using a DBMS (Data Base Management System): what you see is what you have. As in a spreadsheet, each case comprises a row, each field (variable) a column. Indeed, the first program the historian should consider for a database is a spreadsheet like Lotus's 1-2-3, Borland's Quattro, or Microsoft's Excel. [Information from a footnote is incorporated in what follows.] The problem with spreadsheets is that it is hard to generate reports, such as formatted bibliographies. The advantages include simple, fast data entry, and the ability to import and export data into word processors and databases (Quattro is superior for this job). It is always possible to add more rows (the theoretical maximum is 8000), or new columns (up to 256). New variables can be created quickly by algebraic or string formulas operating on old variables. ... Simple sorting and statistical routines (means and standard deviations of subgroups) can be handled easily. 
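The spreadsheet idea Jensen describes -- cases as rows, fields as columns, new variables computed by formulas on old ones, statistics on subgroups -- can be sketched in a few lines of modern Python; the data and field names here are invented purely for illustration:

```python
import statistics

# Hypothetical collective-biography data: each row is a case,
# each key a field (variable), as in a spreadsheet.
cases = [
    {"name": "Smith", "region": "North", "year_born": 1840, "year_died": 1901},
    {"name": "Jones", "region": "South", "year_born": 1852, "year_died": 1899},
    {"name": "Brown", "region": "North", "year_born": 1830, "year_died": 1895},
]

# A new variable created by an algebraic formula on old variables.
for row in cases:
    row["lifespan"] = row["year_died"] - row["year_born"]

# Mean and standard deviation of a subgroup, as a spreadsheet would compute.
north = [r["lifespan"] for r in cases if r["region"] == "North"]
print(statistics.mean(north))   # (61 + 65) / 2 = 63
print(statistics.stdev(north))
```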
Excellent graphs can be produced in seconds. Regression is more of a chore, as is crosstabulation. For these tasks a statistics program is needed, such as SPSS-PC, SAS or SYSTAT. A highly recommended spreadsheet for starters is AsEasyAs 4.0, a clone of [Lotus] 1-2-3. It is available free as shareware. (Users who like it are invited to mail in $50, and to give away copies to students.) Flat-file databases [i.e. with predefined, fixed field sizes] have data screens that make it easy to enter information, but they are less flexible than spreadsheets in manipulating variables. Flat-file databases allow for practically unlimited cases (limited by the size of the hard disk), but the field widths are fixed, and the number of possible fields is limited. An imaginative coding scheme can soon crash into the limitations. With a relational DBMS it is possible to include an enormous amount of complex information about the original data, and yet at the same time to be able to analyze the material easily. The solution involves multiple databases that are linked together ("related"). Suppose the project deals with collective biography or a census. The master database, M, has variables regarding individuals. The particular place an individual lived (at a particular date) is coded by name, and a separate database D-1 of place names constructed. D-1 contains many characteristics of each place--such as size at different times, racial mix, economic characteristics, geographic region, or political behavior. ... This locality information is left out of M. For analysis, the historian constructs a new database A-1. It is a simple flat file containing a selection of the place variables from D-1. For another analysis, the historian can construct A-2, also flat-file, with a different selection of variables from M and D-1. Besides place, other variables from M can be linked into their own specialized databases, D-2, D-3. . . 
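The M / D-1 / A-1 arrangement described above can be sketched with an in-memory SQL database. The table contents are invented for illustration, but the join that produces the flat analytic file A-1 is the essential move:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# M: master database of individuals; place is coded by name only.
cur.execute("CREATE TABLE M (name TEXT, place TEXT, occupation TEXT)")
cur.executemany("INSERT INTO M VALUES (?, ?, ?)", [
    ("Smith", "Leeds", "weaver"),
    ("Jones", "Hull", "clerk"),
])

# D-1: a separate database of place characteristics, keyed by place name.
cur.execute("CREATE TABLE D1 (place TEXT, population INTEGER, region TEXT)")
cur.executemany("INSERT INTO D1 VALUES (?, ?, ?)", [
    ("Leeds", 172000, "Yorkshire"),
    ("Hull", 85000, "Yorkshire"),
])

# A-1: a flat analytic file built by relating M to D-1.
cur.execute("""
    CREATE TABLE A1 AS
    SELECT M.name, M.occupation, D1.population, D1.region
    FROM M JOIN D1 ON M.place = D1.place
""")
for row in cur.execute("SELECT * FROM A1 ORDER BY name"):
    print(row)   # ('Jones', 'clerk', 85000, 'Yorkshire'), then Smith's row
```

The same master and lookup tables can feed any number of differently shaped A-2, A-3 . . . files by changing the SELECT list.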
After spending enormous labor building the relational databases it is pleasant to discover how easily the A-1, A-2, . . . analytic flat files can be created. If the historian knows in the first place what analysis will be done, it is necessary only to construct one master flat-file database containing the desired variables. Historians looking for a flat-file DBMS for bibliographies, mailing lists, convention programs, or uncomplicated historical data sets have a large variety from which to choose. Shareware programs, which can be distributed free to students, include PC-File and Wampum. Personal RBase, Q&A and Professional File 2.0 are commercial DOS (i.e. for IBM and compatibles) programs that perform very well. Database, Panorama, and RecordHolderPlus are comparable Macintosh products. They produce indexes that make for very rapid searches through the records. For teaching and analysis the best is Borland's Reflex (for DOS or Macintosh). It allows data to be examined in many different ways, such as cross-tabulations (including complex multi-way cross-tabulations). (It is poor at exporting files to other formats, for use in other programs.) Andrew Ayton describes an ambitious year-long course required for honors undergraduate majors at the University of Hull (pp. 127-32). Students write three papers using Reflex to study a census, to input fresh data, and to construct an entirely new database. The students find Reflex highly user-friendly. ... Relational DBMS programs can also be used to create flat-file databases. Paradox, by Borland, is consistently rated the best of the relational DBMS for DOS. Borland also sells the market leader dBase, now in version IV (FoxBase is a superior, cheaper clone.) For Macintosh, FileMaker Pro and FoxBase+/Mac are well regarded. ... 
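Cross-tabulation of the Reflex sort amounts to counting cases per cell of two (or more) variables. A minimal sketch in Python, with invented census-style data:

```python
from collections import Counter

# Hypothetical cases: (region, occupation) for each individual.
cases = [
    ("North", "weaver"), ("North", "clerk"), ("South", "weaver"),
    ("North", "weaver"), ("South", "clerk"), ("South", "clerk"),
]

# Two-way cross-tabulation: count of cases in each (region, occupation) cell.
table = Counter(cases)
print(table[("North", "weaver")])   # 2
print(table[("South", "clerk")])    # 2
```

A multi-way cross-tabulation is the same idea with longer tuples as the cell keys.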
Several British projects attached to archives have had the mixed blessing of in-house programmers who wrote their own data base programs, or revised a program provided by mainframe or minicomputer manufacturers. Since it is extremely difficult to write a good database program, and difficult as well to keep it revised and up-to-date, the record has been one of frustration. Historians have been much better served by commercial products that are well supported by their vendors, and for which manuals and learning aids are available. ... The attitude [that in-house software development is desirable] has been well expressed by Manfred Thaller, a historian at the Max-Planck-Institut fuer Geschichte at Goettingen. He sees a vacuum that cries out for a theory of historical computing, and a need for programs written explicitly for the historian's peculiar requirements. He has been sharply critical of commercial DBMS programs, complaining that their "relational" capability is too confining to the historical sensibility about the temporality and chronology of the past. His own TBMS (Text Base Management System), CLIO, has been under development since 1978, and currently is implemented on seven mainframe and micro operating systems. Thaller has energetically promoted CLIO throughout Europe, with the message that it provides the ultimate flexibility, "fuzziness," and temporal context that historians demand. ... Historians devour vast quantities of information. Some is acquired painfully, word by word, by sitting in an archive transcribing documents onto 5x8 cards (or keyboarding them into a laptop machine). Numerous projects around the world have built up computerized data bases from social, political and economic sources. 
Other textual data comes cheap, thanks to the increasing availability of fair quality optical scanners, and the arrival of downloaded files from on-line data sources and card catalogues, electronic bulletin boards, and CD-ROM indexes, like the one to Dissertation Abstracts. With the falling price of large hard disks (100 megabyte disks, which can hold 15 million words of text, are now common), historians now can put masses of data on their desks. Classicists already have access to most Greek and Latin texts; philosophers and the literary critics have computerized their texts. A vast chunk of French literature is on-line as ARTFL. Princeton and Rutgers Universities are setting up a Center for Machine-Readable Texts in the Humanities, which will catalog and inventory the hundreds of computerized text projects underway worldwide. Its catalog will be available through RLIN (Research Library Information Network), and it will eventually be able to redistribute copies of texts. Classicists, philologists and other textual critics work with bounded (albeit very large) corpora, while social, political and economic historians deal with open-ended sources. The archives are growing faster than our capabilities. Every year American bureaucrats generate one-quarter trillion pages of paper. Allowing for an 8% annual growth rate of bureaucratic output, a 99.9% discard rate, and a rather optimistic 3% annual growth rate in the number of historians, in ten years the amount of documentation per historian will be 12 times larger than now. The Nexis service of Mead Data Central already offers on-line files of current newspapers and magazines. Eventually the texts of journals will be available for downloading. ... Eventually libraries will scan old books into electronic formats. The National Endowment for the Humanities is spending millions to microfilm old newspapers; it should be scanning them instead. The technology of scanning is advancing rapidly. 
In late 1991 Wang Laboratories released SEAVIEW for PCs. Any paper document can be scanned in, and is electronically duplicated on a hard disk as a file; the text in the document is simultaneously indexed. Complex searches will retrieve not just the words in the document, but a facsimile of the original. SEAVIEW cannot read handwriting yet, but if there are typed transcriptions of the handwriting, it can index the document and show it as clearly as a microfilm reader. Most historians deal with text through word processors. The trend in word processing software is away from tools that help analyze texts, and toward formatting for desktop publishing. Text Base Management Systems (TBMS) have been invented for text searching and manipulation [see the reference to Thaller's work, above]. Mainframe and mini versions designed for law firms already can handle vast quantities of text. Our concern is with historians and their own PCs. Most of us already have the most elementary version, the search routine incorporated into word processors. The standalone programs, however, are vastly faster--they can search through millions of words in a matter of seconds. Furthermore, they can do Boolean searches: (find [("Wilson" or "Roosevelt") and "war"]), fuzzy searches ("Wilson*" returns "Wilson", "Wilson's", "Wilsonian"), proximity searches ("Wilson" within 5 words of "Roosevelt"), and Soundex searches ("Johnson", "Jensen", "Jansen", "Jenson", all have the Soundex code J525). The easiest and fastest of the TBMS programs is Lotus's Magellan 2.0, for DOS. It operates like a shell on a hard disk, showing directory and subdirectory listings of every file. It has an ingenious viewer that shows the contents of word processing files (ascii, Word Perfect, Word, Nota Bene, etc.), database files (in dBase and Paradox formats), outliners (PC Outline, GrandView), and spreadsheets (1-2-3, Quattro, Excel). It can even view files shrunk by the PKZIP utility. 
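Of the search types just listed, Soundex is the easiest to make concrete. A minimal implementation of the classic American Soundex rules (keep the first letter, map remaining consonants to digit classes, drop vowels, collapse runs of the same code, pad to four characters) reproduces the J525 example from the text; this is a sketch of the general algorithm, not any particular program's routine:

```python
# Standard American Soundex digit classes.
CODES = {}
for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                       ("l", "4"), ("mn", "5"), ("r", "6")]:
    for ch in letters:
        CODES[ch] = digit

def soundex(name):
    """American Soundex: first letter plus three digits."""
    name = name.lower()
    first = name[0].upper()
    result = []
    prev = CODES.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":               # h and w do not break a run of one code
            continue
        code = CODES.get(ch, "")     # vowels map to "" and reset the run
        if code and code != prev:
            result.append(code)
        prev = code
    return (first + "".join(result) + "000")[:4]

for n in ["Johnson", "Jensen", "Jansen", "Jenson"]:
    print(n, soundex(n))   # all four print code J525
```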
Operating files (*.BAT, *.COM and *.EXE) can be started up, with numerous automatic keystrokes, at the touch of F7. These features alone make it a powerful device for managing the contents of a hard disk. The most powerful feature of Magellan is the search routine. The user specifies what directories (or subdirectories or file types) are to be searched, and Magellan rapidly generates an index for that specification. (There can be many different indexes, including a monster one for the whole disk.) Indexing is fast, and updating indexes after files are added or changed is faster. The user at any time can ask for a search of a word on one index, and in three or four seconds the program will find all the files that contain the specified word (in order of how prominent it is), and actually show the files on the screen. Suppose one hit is a Word Perfect document. Magellan will show the specified word and the entire contents of the document on the screen (neatly formatted), and allow portions of the text to be excerpted to an ascii file; with F7 it will load Word Perfect and open the document. Quite apart from the other programs discussed here, Magellan is a highly recommended tool for the scholar with a busy hard disk. Magellan took 8 minutes to index 515 of my text files containing 2.7 million bytes of text. The new index was only 600K; a typical search takes 4 seconds. The limitation of Magellan is that its search routine is quite limited--one word only, with fuzzy suffixes like plurals but no wildcards or Boolean or Soundex options. It can search for dates and numbers. The next step is an inexpensive little gem called GOfer. It is memory resident (press the hot key and it takes over, does its job, then retires.) It handles Boolean, fuzzy and nearby searches. It does not require a prepared index. Instead it will crank through any list or directory of files as specified, and show the text of every find. 
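An index-then-search routine like Magellan's rests on an inverted index: one pass over the files builds a word-to-file table, after which each lookup is nearly instantaneous and hits can be ranked by how prominent the word is. A toy sketch in Python (file names and contents invented):

```python
import re
from collections import defaultdict

# Hypothetical files, already read from disk.
files = {
    "notes1.txt": "Wilson argued with Roosevelt about the war.",
    "notes2.txt": "Roosevelt and the New Deal.",
    "notes3.txt": "Wilson, Wilson, and more Wilson.",
}

# Build the index once: word -> {file: occurrence count}.
index = defaultdict(dict)
for fname, text in files.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        index[word][fname] = index[word].get(fname, 0) + 1

def search(word):
    """Files containing the word, most prominent (frequent) first."""
    hits = index.get(word.lower(), {})
    return sorted(hits, key=hits.get, reverse=True)

print(search("wilson"))   # notes3.txt first (3 hits), then notes1.txt
```

Updating after a file changes means re-indexing only that file, which is why updates are faster than the initial build.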
For more high powered searching, ZyIndex offers fast Boolean searches through any number of files. It builds indexes like Magellan's, only they are much larger and take far longer to create in the first place and to update. Janet Foster and Marion I. Bowman, at the Wellcome Institute for the History of Medicine, compiled an elaborate guide to medical archives and manuscripts using dBase III. They were reluctant to code: "We were also adamant that the description of the manuscript material should not deviate from the archival norm by being manipulated to fit a structure to which it was not suited." (p. 24) They used the dBase Memo field, which allows free-form text; they coded portions of the description (types, persons, fields, places) into ordinary fields that could be indexed. They were using a DBMS when they needed a TBMS. Magellan would solve their problem--it does index and search on Memo fields. On the other hand, Magellan lacks the complex linkages and reporting features of dBase. A combination of both DBMS and TBMS would provide much more power than either. TBMS programs of special interest to the bibliographer include Pro-Cite, Library Master, Ibid, and Notebook-Bibliography. They require that fields be defined and information be located in the correct place. The fields can be of any length (up to 9000 words), so that full annotations are possible. Important fields can be indexed, with Boolean searches made using the indexes. Search time for a file of 10,000 records is less than 5 seconds. They do not handle the sort of recoding or statistics that is the forte of DBMS and spreadsheets. Instead they are designed to handle bibliographies, which can be printed out in formats that conform automatically to specified manuals of style, such as The Chicago Manual of Style. Numerous different kinds of records can be created (such as book, article in journal, article in book, convention paper), each with appropriate input and output formats. 
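The record-type idea -- each kind of entry carrying its own output format -- can be sketched as a small table of format strings. The style strings below are rough approximations of Chicago-style citations, invented for illustration, not the actual templates of any of the programs named:

```python
# Hypothetical record types, each with its own output format.
FORMATS = {
    "book": "{author}. {title}. {city}: {publisher}, {year}.",
    "article": '{author}. "{title}." {journal} {volume} ({year}): {pages}.',
}

def format_record(record):
    """Render a record according to the format for its type."""
    return FORMATS[record["type"]].format(**record)

book = {"type": "book", "author": "Fox, Robin Lane",
        "title": "Pagans and Christians", "city": "New York",
        "publisher": "Knopf", "year": 1987}
print(format_record(book))
# Fox, Robin Lane. Pagans and Christians. New York: Knopf, 1987.
```

Switching a whole bibliography to another manual of style then means changing only the table of format strings, not the records.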
(Notebook/Bibliography has only one record type, so the output does not distinguish books and articles.) New records can be typed directly into the programs, or imported from a word processing or other file. Pro-Cite (for an extra charge) and Library Master allow incorporation of downloaded files from electronic databases and library card catalogues. ... Numerous journals publish annual or quarterly bibliographies; typically they are created on a word processor. A superior approach would be to use a text base management system, for then they could be aggregated and published separately (or, better, released to historians in ascii format on diskettes, or posted on an electronic bulletin board). Pro-Cite is available for DOS and Macintosh, and has a discount price for members of the American Historical Association. Ibid is available only as part of the Nota Bene word processor. For major projects based on large numbers of texts, the historian should consider the TBMS programs Word Cruncher (first choice), Ize, or Ask Sam 5.0. Word Cruncher is popular among philologists. It builds indexes of large files (very slowly), and allows both for complex searches and for the computation of concordances and word frequencies. Ask Sam and Ize are much more complex programs. While they can import data from ascii or word processing texts, they have to convert it to their own format. All sorts of complex searches are possible, and both programs have hypertext capabilities. Hypertext is three-dimensional reading: some words in the text ("buttons") are highlighted; they open out to another text, or run a program. Ask Sam includes a flexible programming language and allows inclusion of graphics displays. It chops texts into screen-size bits (maximum 20 lines); the bits can be tracked through the Hypertext/Update routine. A glaring weakness is that it does not indicate the parent file, which the user might want to work with in another program. (The name itself is useful.)
AskSam therefore is appropriate only for a large collection of short texts, each of which would fit on a 5x8 card. The Macintosh comes with Hypercard 2.0, a program that has begun to fascinate scholars. Its versatility shines in a program like The Vietnam War, which combines text, graphics, animation and sound. Guide is a program for constructing complex hypertexts. For IBM compatibles, a taste of hypertext can come through the shareware program Black Magic. IZE is the most ingenious of the programs. Every file is tagged with from 1 to 300 "keywords." Searches can be based on keywords or on any other words. IZE handles ascii, WORD, and Word Perfect files, and also dBase files, but cannot deal with the contents of spreadsheets. (Only Magellan handles spreadsheets well.) Having found some files containing the target words, IZE builds a tree-like outline showing the closeness or relatedness of these files. The outline is based on keywords, and thus structures the files much as the historian would when searching for both a specific topic and information on related topics. Optionally, the historian can specify an outline structure (called a "Guideline") into which IZE will fit the files containing hits. IZE does not allow programming, but does have a macro facility. IZE took 28 minutes to import and automatically assign keywords to 85 files that aggregated 800K. After that, searches for texts by keyword were practically instantaneous; searches for words that were not keywords took less than ten seconds. My minor complaints about IZE were that it often paused for 10 or 20 seconds to do some work (a very fast 386 machine would solve that problem), that it does not alphabetize keywords, and that it crashes too often. (When restarted, it picks up where it left off in forgiving fashion.) On the whole, it can be highly recommended to historians beginning a complex project. A typical 5x8 card might contain 70 words (500 bytes).
For a few hundred cards, a shoe box will suffice, with colored pen marks or little blue tabs glued on to provide an indexing system. With a few thousand, it is time to computerize. With 20,000 or more cards, only the scholar with a TBMS or a xerox mind can keep track. (More than 100,000 cards and it's time to think mainframe, or rethink the project entirely.) Anyone starting a project with quantitative analysis in mind should consider a systematic procedure involving sampling, definitions of variables, coding, and the preparation of a suitable flat-file or relational DBMS. Quantitative projects that will depend heavily on a statistics program like SPSS would be well served by a spreadsheet for data entry, cleaning, and rough analysis. The humanistic historian has somewhat different needs--the quantity of data may be much larger, but the prior structure is thin and superficial. A TBMS is needed. The needs may well be met by Magellan, or for a large project by Ize or Word Cruncher. Everyone does bibliographies, for which a flat-file DBMS will suffice, and a specialized (but expensive) program like Pro-Cite or Library Master will perform very well. [end of excerpted article] //end #35// ---------------------- <> coordinated by Robert Kraft [12 December 1991 Draft, copyright Robert Kraft] [HUMANIST, IOUDAIOS, RELIGION, etc., 13 December 1991] [Religious Studies News 7.1 (January 1992)] [CSSR Bulletin 21.1 (February 1992)] ---------------------- This issue of OFFLINE focuses on the recent computer sessions at the Kansas City annual meetings of the AAR/SBL/ASOR, by way of reporting some of the things that took place. It was a full and interesting program for which the organizers (especially Robin Cover and Raymond Harder) are to be commended and thanked. The assigned room for presentations, discussions, and demonstrations was appropriate to the tasks and received a regular flow of conference participants.
Under the general rubric of "Academic Networks and the Exchange of Electronic Information," several different topics and themes were addressed in the announced sessions, in addition to the usual string of reports and exhibits. With particular reference to the accurate exchange of electronic data between different computer systems, software packages, or over the electronic networks, we were honored to have two of the key participants in the international "Text Encoding Initiative" (TEI) present -- Michael Sperberg-McQueen (University of Illinois, Chicago) and Lou Burnard (Oxford University). Michael's opening address at the initial computer session described what the TEI is all about and how it is proceeding. Later in the conference, Michael and Lou led a working session on encoding ancient texts (see the program Abstracts, S55). A second major theme in the computer meetings was the availability and uses of electronic network communications and resources. Special attention was given to the value of electronic discussion groups on the academic networks (especially BITNET and the INTERNET), including how to get connected by using the commercial and non-academic routes, and to the availability of texts and other specialized data, including electronic journals. A live demonstration of connecting to and using the academic networks was presented by Raymond Harder, who later chaired a special working session on the subject (see the program Abstracts, S163). A third theme, to which a major panel discussion was devoted (see program Abstracts, S142), focused on the issues of ownership, copyright, control, etc., of ancient texts (and modern research) in electronic form. The first segment of discussion was initiated by Ann Okerson's paper, which is presented in detail below. Her observations on copyright and ownership issues from a research librarian's perspective effectively moved the discussion into various related problem areas. 
The latter part of this panel discussion responded to remarks by Robin Cover, who summarized the most salient features of his appeal that humanities scholars take a more active interest in the electronic future of their published research. His detailed paper on "Technology at War with the Traditions of the Paper Academy..." was circulated in advance of the annual meeting (copies are still available; see the end of this column). In it he argued rather forcefully that textual scholarship -- or at least, democratic public access to core research materials -- may well come into serious jeopardy as we make the quantum leap to digital libraries. In this scenario, competitive research will be based upon access to large electronic knowledge bases containing past and current research. Highly intelligent indexing algorithms and query mechanisms, precise electronic probes into semantically fingerprinted sub-documents, and versatile display filters will make knowledge access via paper books a comparatively primitive art. The humanistic values of scholarship will be compromised if university libraries and scholars themselves have not retained control over this new form of knowledge, but are owners only of paper books. The most important and competitively relevant knowledge resources, in electronic formats, will be controlled by commercial publishing interests, unless scholars begin collectively to refuse to assign to publishers through conventional publication contracts (as is common practice today) exclusive (monopolistic) ownership rights over scholarly tools and published research. I will not attempt in this column to list or summarize the various reports updating already known projects or announcing new endeavors. Some of them are listed in the conference program (S16).
Very welcome was news about computer projects on Chinese and on the Pali Canon, since the history and development of the Computer Assisted Research Group under SBL auspices sometimes obscures its desire to serve the wider constituencies of AAR as well as SBL. Also of special interest is the fact that Susan Hockey (formerly at Oxford University) has begun her term as the first director of the newly established Center for Electronic Texts in the Humanities (CETH) at Rutgers and Princeton Universities. This endeavor holds great promise for the development of humanities computing on this continent. ----- Networked Serials, Scholarly Publishing, and Electronic Resource Sharing in Academic Libraries: a Dilemma of Ownership, by Ann Okerson (based on work done for the North Carolina Research Libraries and the Scholarly Communications Committee of the Association of Research Libraries) Copyright Ann Okerson I. The current "crisis" situation in academic publishing Libraries speak of the current crisis in academic publishing, meaning such interconnected problems as: - High prices (particularly for scientific and technical monographs and journals); - The high volume of academic literature, much of it unread, whether because of its sheer extent or because of its uneven quality; - The slowness with which relevant materials appear in print, slower than researchers would wish and than the current pace of scientific growth seems to demand. The constituents of the "information chain" -- authors, editors, publishers, libraries, and readers -- increasingly look to electronic publishing, and in particular publishing and distribution via the electronic networks, as a solution to many of these problems: as a way in which speed of production and access can be measurably improved, information can be stored and purchased more cost-effectively, and, in at least the mid-term, prices can be contained or reduced.
Because the electronic networks are currently publicly and academically subsidized, and most of the scholarly communication on them is created by the not-for-profit sector (academics, researchers), scholars and librarians are hoping that information might flow more freely in the networked world-to-come than it currently does on paper; that it will be a great deal easier to own materials collectively and share them between libraries (and researchers) than at present. A closer look at publishing practices and the way they are translating into the electronic arena suggests, however, that this vision could be little more than wishful thinking. The fundamental issue in regard to holding and sharing information resources in the electronic environment, as it turns out, may be one not so much of technology as of ownership. And, interestingly, ownership is a question that already vexes us today, particularly with paper publication formats, and for similar reasons. II. The effect of copyright transfer on an author's ownership/control of the work in the U.S. By tradition, in the process of scholarly publishing, effective "ownership" (that is, control) of writings usually passes from the hands of the creators/authors/scholars to publishers through the act of copyright assignment. The copyright transfer the author signs when an article is accepted for publication accomplishes two objectives which (for the most part) are entwined, although this need not necessarily be the case. - First, the author(s) attests that the written work is appropriately original and thus can be legally assigned. This indemnifies the publisher against any potential plagiarism accusations or related actions. - Second, the author gives over control of the printed form of the work to the publisher, who thus becomes its new "owner."
This ownership, which is valid for the duration of copyright (the lifetime of the author plus 50 years), allows the new owner to sell or re-sell the information as is deemed reasonable and in a way that the market will support. In other words, copyright provides the legal framework for ownership transfer and privatization of an author's work. For the most part, the current copyright transfer forms are quite comprehensive. They give the publisher the rights to produce the work in paper, in microform, in new technological formats, and, as some contracts specify, in any form not yet created but which may be created. The author generally retains the right to ask the publisher's permission to reproduce the work as part of another work, such as a book or a compilation. The practice is that normally the publisher will grant such permission provided the original source is credited. Additionally, the author is often supplied with preprints or offprints for private distribution. Any other distribution by the author falls within the fair use provision of the Copyright Act of the U.S. It transpires that who the publisher-owner is has had an enormous impact on the prices charged to the customer and the rights allowed to the customer. According to a number of cost-per-unit studies of various kinds of STM (Scientific, Technical, Medical) journal pricing, the not-for-profit producers (such as university presses and learned societies) sell information consistently at prices anywhere from 2 to 20 times lower than those of the for-profit or commercial publishers. There may be understandable reasons for such differentials, but the point is that, in fact, the situation does exist. The publishing process as it traditionally has evolved is a contractual exchange in which authors give up ownership in return for editing and distribution of the work.
"Validation" in the form of peer review is built into the publishing process, although peer review continues to be interjected by the academy as a non-monetized process (thus reducing publication costs). Especially with regard to periodical articles, the author's reward is usually in the form of recognition of the work rather than in direct compensation from the publisher or in royalties. In fact, under the current publishing system, offering such compensation to authors could be counterproductive. First, only a minority of articles are copied often enough that royalties would accrue. Second, direct remuneration to authors would conceivably raise the price of academic writings to even less affordable levels and would exacerbate the current pricing situation. So, it can be argued that both author and publisher are compensated in ways that matter to them. Nonetheless, the creator loses ownership of the work and in the past decade or so, the institution (library) is finding it increasingly difficult to afford the re-purchase prices! III. The "rights" of print-on-paper purchasers In the current print-on-paper tradition, some rights accrue to those purchasers of published writings, that is, those who acquire (and thus own) a copy of the author's printed artifact. Whether individuals or libraries, these new owners have considerable latitude in what they do with the material: they may organize it, lend it as they choose (even ship the physical item around the world to "outside" borrowers), and in the end when they are good and tired of it, they may throw the physical item away or sell it at a garage sale. The one limitation -- and it is considerable -- on acquirers of the paper artifact has to do with copying or reproducing the items or parts of the items. Restraints have been placed on purchasers largely in order to protect the livelihood of the publishers who take the production "risk." 
In the light of the publishers' risk, which can be considerable, it is useful to bear in mind (again) that in scholarly publishing the original authors of the work, particularly authors of articles in journals, are financially uncompensated, as are, for the most part, the editorial board members and the peer reviewers. Authors often seem to believe that they have some special sharing/distributing privileges for the writing they created, but in fact they have (except for their clutch of offprints) no more rights to copy the work than libraries or any other purchasers. IV. "Fair Use" The restraints upon purchasers of copyrighted materials are outlined in the "fair use" provisions of the 1976 Copyright Act of the U.S. The Act gives special dispensations for educational and not-for-profit uses but nonetheless restricts reproduction and replication (as opposed to handling of the purchased item itself). To determine if use is "fair," the following questions are asked: - Is the reproduction for profit (re-sale) or not-for-profit? - If not-for-profit, could the number of copies made impact the publisher/owner's revenue stream? - What proportion of the work is copied? - How often? The CONTU (National Commission on New Technological Uses of Copyrighted Works) Guidelines to the 1976 Copyright Act further suggest that a library ought not to request (on interlibrary loan) more than five articles a year out of any journal title. Of course, as these are guidelines only, libraries may choose to apply them with liberal interpretations and procure for free in excess of five articles in a given year. For a quarterly publication, five articles might not be unreasonable, but for expensive and voluminous titles producing hundreds of articles a year at subscription prices of hundreds or thousands of dollars per year, five is far from realistic and impedes the ability to "share resources" among institutions.
More than the five guideline copies can be made in a year provided that either the publisher or the publisher's collection agency (such as the Copyright Clearance Center) is reimbursed. It is in this particular arena, in the copying of more than five articles in any given year out of any given journal title, that paper restrictions begin to resemble the emerging marketplace and the current publisher/distributor practice for electronic counterparts of original publications. V. CD-ROM Prices as a barometer of electronic models & prices to come As a way station to electronic network distribution, consider the current model prevailing for the "static" electronic CD-ROM. Because they share with networked products the potential for easy, wide, quick, and somewhat undisciplined copying and sharing, most CD-ROM products are licensed rather than sold. The license gives the CD-ROM to a library for the duration of the agreement, often a year for a subscription. Such signed contracts take precedence over laws and the library or other signee is required to adhere to what has been promised in the license. Commonly, such licenses may: - Require that the CD disk be returned if the subscription is cancelled; - Limit how many stations are linked, how many users can use the disk at a time; - Prohibit sharing outside the members of the subscriber institution. The point in the CD-ROM model is that the library's or individual purchaser's right to share is defined and limited by the wording of the license. In the license model, the library is no longer an owner of even a copy of a physical artifact, as it was in paper, although there is still a physical artifact. VI. Ownership of electronic information Current print-on-paper publishers are positioning themselves to publish on electronic networks. In networked distribution, there is no precise "product," such as an iridescent disk, let alone a paper artifact. 
What can be supplied to the "purchaser" is the information itself, delivered to the library or user over a network as a pattern of electronic impulses. These may be read by the "end-user" at a screen, printed by that user on a printer, or accessed by the library on behalf of a current or possibly an anticipated request. Furthermore, the responsibility for any actual printing, or rendering into paper form for the user, is passed on to the licensee, so the obligations of the publisher-owner are somewhat diminished or at least altered. In fact, it is questionable whether the publisher can any longer guarantee to provide the integrity-authenticity component of "value added" that is possible more reliably on paper -- it is much more difficult electronically to assure that a printed iteration of the transmitted document (not to mention an electronic copy) is accurate or even honest. Not that these problems do not exist with xeroxing or otherwise copying paper documents; just that electronically the problems and opportunities can be accelerated and exacerbated. VII. Some scenarios by which libraries will access electronic networked information 1. Local "subscription" (site license). Libraries may choose to purchase the information (such as a journal or a database) for loading into a local system (mainframe computer or CDs, linked to additional workstations both in the building and throughout the user community). A precedent has already been set for charging that is considerably higher than with paper. Licenses are often based on the potential number of users or the number of users who have access at any given moment. The publisher usually has either developed searching or other software or has contracted that responsibility to a specialist intermediary, and those costs and the desired related profits are part of the cost of the product. Currently, commercial CD-ROM prices are very high.
Four reasons for such pricing might be postulated: - First, development in a new and dynamic technical environment with very few standards can be costly. - Second, value is added for the user (e.g., ubiquity, speed, searchability). - Third, publishers and libraries are accustomed to the subscription model, in which monies are collected "up front" for the year in advance. - Fourth, customers appear willing to pay. 2. "Resource sharing" (multi-site license). The library or a consortium of libraries or a library utility may elect to become a broad-based distributor for an electronic journal or database or product. In such a case, other libraries may obtain the information from this organization on an agreed-upon basis (subcontract). The distributing library has a broader (more comprehensive and more expensive) license from the publisher and still does not own the information it distributes. Outside users may gain access, for a fee. The economic risks are shared by both the publisher and the distributor. 3. "Access" (pay "by the drink"). The publisher or a commercial intermediary controls and distributes electronic information, often by means of "on line" connections. This offers the option of a great deal of service with less worry, at probably the highest price of all the options. In this model, as indeed in all the ones outlined, no resource sharing as we have known it is possible. Libraries share only passé, aging, dusty books, because increasingly libraries neither own nor control any information except for those materials bought long ago in the 90s. 4. The library, scholar, university, or learned society is the publisher. A rare instance of this scenario is the positioning of the Online Computer Library Center (OCLC) consortium (through its partnership on the proposed Clinical Trials electronic journal of the American Association for the Advancement of Science [AAAS]) as an electronic publisher.
In this case, the user community is an important partner in fixing subscription prices and access. Informal examples are the papers, files, documents, and lists residing on the Net today and accessible by ftp (file transfer protocol) or by subscription. A world of electronic distribution, with no physical artifacts to "own," alters all notions of what libraries are about and how they work, or whether they exist as a centralized function, given the assumption that within 20 years much of what scholars want to read will be accessible quickly and conveniently through the networked format at the individual researcher's desktop. All these communication changes will happen. It is only a question of when, at what expense, and of coloring in the details. VIII. Some initiatives to retain scholarly ownership and access to creative scholarly work If colleges and universities and libraries are to be safe harbors for idea growth and for building on knowledge, a most critical focus must be to ensure that ownership of information rests, at least to some extent, in scholars' hands. What are some of the ways in which the academy might address and assure ownership of (and thus ability to share) its own creations? There are several possibilities, and the following list is by no means exhaustive: 1. Support and capitalize learned societies and university presses as publishers, in both paper and electronic formats. 2. Encourage authors to write by preference for academic (i.e., university and learned society) presses and outlets, if possible. Scholars should seek publishers with a common mission and a track record of affordability and responsiveness. Provide editorial services for such publishers, by preference. 3. Encourage creators to read copyright statements sent to them for signature and to attempt to retain some rights (e.g. electronic distribution) for themselves and their institutions.
The chances of effecting changes in publication policy depend on the creators of material showing awareness and concern with regard to this emerging new situation. 4. Universities (academic authors and administration), and preferably groups of universities in local consortia or in associations (National Association of State Universities and Land Grant Colleges [NASULGC], Association of American Universities [AAU], for example), should develop model copyright transfer forms for their authors to sign. These forms might retain some ownership for the author's institution, particularly in electronic distribution, or secure more favorable license terms for the institution or consortium. 5. Develop a university and library negotiating model for site licenses that are desirable and affordable to these institutions. Current pricing for such publications appears to the library community to be unacceptably high and even less affordable than paper output, while quality is often questionable and software is non-standard and proprietary. Indeed, the desirability of vendors packaging the data with (often very expensive) special software deserves close evaluation -- the software may not always enhance the usability of the data! 6. The Association of Research Libraries and higher education organizations should work with the appropriate government agencies and officials to extend section 105 of the Copyright Act (it provides that work done by federal government employees on government time is exempt from copyright; this section makes such work freely shareable) to work done with government funds, such as grants and contracts with such tax-supported agencies as the National Endowment for the Humanities, the National Institutes of Health, the National Science Foundation, the Department of Education, and others. 7. Scholars creating and placing information on the Net ought to attach a statement to their work that clearly expresses their wishes for its use.
There is a generous, frontiering spirit about the Net which says anything can be reproduced by almost anyone. As more and more substantive work is deposited on the Net, it can be downloaded and even printed for sale. This is the time for every list, e-journal, file and article to issue appropriate statements about ownership. For example, Michael Strangelove and Diane Kovacs' e-directories clearly allow electronic transfer for not-for-profit purposes but no full paper download without express permission. Bryn Mawr Classical Review allows no reproduction beyond fair use without permission of the editors. 8. The Net and its tens of thousands of items is a wondrous source. It is as uncharted as the Amazonian jungles. We need to be able to see the trees for the forest and it is time that learned societies began developing not only good indexing to paper literature, but also indexing to network sources and files. It is not too soon to begin such maps. Placing one's work on the Net is not very rewarding if it cannot be found or recognized. Proper indexing, in time, is one way of achieving respectability for authored works deposited on academic networks rather than in more conventional paper outlets. 9. The Net would be a fine place to begin various semi-formal preprint series for works which are intended later to appear in more traditional forms such as journals. Eventually, one could add peer review on the Net and bypass the longer publication process, if that proves useful. We need leadership in such initiatives from the learned societies. 10. Listowners and journal editors could seek official partnerships or sanctions or institutionalization for their works. Otherwise, as the Net begins to cost its individual users money -- and for various reasons it will -- good publications will be vulnerable financially. Institutions are more stable than individuals and outlast them. The earliest journals begun by individuals generally lasted only a few issues before fading. 
The "institutional" titles tended to survive. 11. Works currently under copyright cannot easily be put into the public domain. The future of information for the academy can, however, be optimized through actions scholars take from today forward to maintain ownership and access. 12. We must engage in thoughtful discussions and seek useful outcomes for delivering and sharing networked information. The new medium offers many opportunities (and presents many vexing problems) for a brand new conception of what scholarly communications can be. Current discussions postulate "intelligent documents," interactive and collaborative writing, and continuing dialogue about ideas and articles, which are not necessarily the "last word" on any subject. Writings are available from preprint to revision to "final" to continuously annotated form. The philosophical concept of "living documents" represents a fundamental change from the fixed archival version that is encouraged by the paper format. Would today's publisher commit to supporting the continuing development of an idea after fixing and licensing an electronic version? Who will develop the mechanisms to link together related articles in different "electronic journals," a linkage which clearly will be needed and will affect the nature of electronic "documents" and their "authorship/ownership/control"? To a large extent, the future is in the hands of the scholars. A good future demands our best energies and active partnerships between the scholars, their societies, universities, and libraries whose mission is access to the body of knowledge for future generations. <-----> Anyone wishing to obtain the full electronic text of Robin Cover's paper may request it from: zrcc1001@smuvm1 (BITNET), or robin@utafll.uta.edu (Internet). 
Paper copies may be obtained from the SBL office: Melinda Strobel, Society of Biblical Literature; 1549 Clairmont Road, Suite 204; Decatur, GA 30033-4635, or phone (1 404) 636-4744, or email sblexec@emoryu1 (BITNET), sblexec@unix.cc.emory.edu (Internet). //end # 36// ---------------------- <> coordinated by Robert Kraft [29 January 1992 Draft, copyright Robert Kraft] [HUMANIST, IOUDAIOS, RELIGION, etc., 30 January 1992] [Religious Studies News 7.2 (March 1992)] [CSSR Bulletin 21.2 (April 1992)] [codes: ... titles, ... emphasis, / ... / ... levels of headings.] ---------------------- This issue of OFFLINE presents three different types of material contributed by four guest authors covering rather different aspects of computer-assisted research in our rapidly expanding electronic world. The main article by James Marchand describes new computerized ways of dealing with visual materials, and in one way or another will be relevant to every reader. Jim Marchand is the coordinator of the MEDTEXTL (Medieval Text List) and GERLINGL (Germanic Linguistics List) electronic discussion seminars (both at UIUCVMD.bitnet), and draws on his experience working with Gothic palimpsest manuscripts, among other things. My colleague James O'Donnell, editor of the electronic and hardcopy Bryn Mawr Classical Review and no stranger to OFFLINE readers, then reviews the new Christian Latin CD-ROM from CETEDOC and Brepols in Belgium. Finally, Michael Strangelove (who also coordinates some electronic discussion groups including CONTEXT-L, for cultural analysis of ancient texts) and Alan Groves report more briefly on some new products and opportunities of both a general informational sort (Directories and Publication Listings) and of specific interest to Hebrew Bible students. And there is a PS from yours truly. Enjoy! (Or at least, be informed.) ----- Feature Article: The Computer as Camera and Darkroom, by James Marchand, Center for Advanced Study Professor of German, Linguistics and Comparative Literature at the University of Illinois, Urbana Those who deal in earlier cultures have to depend on representations of some kind of the artifacts left behind by those cultures: sketches, lithographs, xylographs, models, etc. Since the advent of photography in the mid-19th Century, we have come more and more to depend upon photography as our means of recording and preserving the past. 
The problem with photographs is that they are often poorly done, particularly given the conditions under which they often must be made, and the process of improving them in the darkroom is long and arduous. The digital computer has changed all that. Note that I have said "the digital computer." A normal photograph is not digital, but analog in nature, that is to say, is continuous rather than discrete in its registration of light values. The notion of digitizing photographs comes from America's space program, particularly LANDSAT; early Russian and American photographs from space were poor in quality and were analog, so that very little could be done to improve, or "enhance," them. Dr. Robert Nathan came up with an idea and some programs which permitted digitizing the photographs from Ranger 7, 8 and 9, and the science of digitizing images, the mainstay of image processing, was born. In fact, one of the best introductions to the field of image processing is still Johannes G. Moik, Digital Processing of Remotely Sensed Images (NASA SP-431) (Superintendent of Documents, 1980). What does it mean, to digitize? Everyone knows that the computer we normally use is called a digital computer because all its information is broken down into yes/no particles called bits (originally "binary digits"). In order to put any information into a computer, the information must be digitized (broken down into yes/no configurations). That is why, for example, we have the ASCII system which gives us so much trouble. Early on, the best we could do in a printer was two to the seventh (128) characters, since we had at best eight-bit "chips," leaving us seven yes/no questions (bits) to handle all the letters of the alphabet and other symbols. Of course, Hollerith punched cards, which also had only 128 patterns, brought all this about in part. 
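To make the "yes/no questions" concrete, here is a minimal sketch in a modern scripting language (Python, used purely for illustration; the helper name to_bits is mine) showing the seven-bit pattern behind one ASCII character:

```python
# Seven yes/no questions (bits) yield 2**7 = 128 distinct patterns --
# the whole ASCII repertoire of letters, digits, and symbols.
def to_bits(ch, width=7):
    """Return the yes/no pattern (a string of 0s and 1s) for one character."""
    return format(ord(ch), "0{}b".format(width))

pattern = to_bits("A")        # the letter A is ASCII code 65 -> '1000001'
num_patterns = 2 ** 7
```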
If we look at a photograph or a scene (we will restrict ourselves to black and white for simplicity at the moment), what we see is continuous values of gray, from black to white. A computer screen or a television screen, however, has to render this scene as configurations of dots, so that the computer screen is like a pointilliste painting or the Sunday comics. If you hold the Sunday comics under a strong magnifying glass, you can see that the picture is actually composed of little dots; if you look closely at a TV screen, you will see the same thing. The computer, through a control card, assigns to each of these dots a unique position (x,y) through a raster scheme familiar to us from juke boxes, seat numbers, etc., a column/row address. It also assigns a gray-scale value, according to how much light is transmitted. Thus, the LUT (look up table) generated by a scene in the computer assigns to each pixel (picture element) on the screen a value f(x,y), where x and y are the familiar spatial coordinates and f is the radiometric value in terms of gray-scale. This means that one can manipulate the pixels of an image on the screen one-by-one, a group at a time, a screen (frame) at a time, and that one can also do radiometric operations. Colorizing old films is an example of one such process, which assigns color values to gray values.
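The f(x,y) scheme can be sketched in a few lines of Python (illustrative only; a real frame is of course far larger than this 3-by-3 toy, and the names are mine):

```python
# A digitized scene: each pixel has spatial coordinates (x, y) and a
# radiometric (gray-scale) value f(x, y); here 0 is black, 255 is white.
image = [
    [  0, 128, 255],
    [ 64, 128, 192],
    [255, 255,   0],
]

def f(x, y):
    """Gray value of the pixel in column x, row y."""
    return image[y][x]
```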

Image acquisition. There are several ways to get an image into the computer. An old picture, if it has not already been "digitized" by being printed through a half-tone screen (remember the Sunday comics), can be scanned in, using a device such as the Hewlett Packard ScanJet Plus. Here, the size of the chip on the board which connects the scanner to the computer is crucial. At present, we will assume an eight-bit channel, and I would caution that a smaller channel is not practicable in today's world; if you have an old scanner, such as the Hewlett Packard ScanJet, with a four-bit bus, you can modify this easily. This yields 256 levels (2 to the eighth = 256) of gray, enough until one comes to color, where a 32 bit chip is often used. The scanner generates a look up table (LUT), assigning to each dot (whence the measure DPI, dots per inch, typically 300) or pixel a geometric location (x,y) and a gray level (f). If one thinks of a "normal" watch with hands as analog and a "digital" watch as digital, then the process becomes clear. The larger the chip (number of levels), the less information is lost. With 256 levels of gray, my eye does not detect any loss. In a short column like this, I cannot go into other aspects of loss and gain, such as resolution, etc., much of which will depend on your display equipment. Of course, one does not have to scan in a print. The technology is there to work with films, such as microfilm, but at present this is mostly in the development stage. If one wishes to input directly from film, the best thing at present is to convert the film to slides and to use a slide reader, such as the Nikon FL-3510AF, a rather expensive way to go, but one which does eliminate one source of potential error. Another way of acquiring an image is by direct photography by means of a digitizing camera. All of us are familiar with one such camera, namely the video camera. 
Since the television screen consists of pixels, the video camera must digitize the scene it is registering. It is for this reason that some of the earliest attempts at using the computer to enhance manuscripts used the video camera. In so doing, however, it is important to use a so-called "frame grabber" to freeze and record one frame. In reality, the video camera is of very little use for our purposes, though it can be relatively cheap. When one adds the fact that raster systems (LUTs) differ in time and in space (try running a European video on your VCR), video becomes a very poor substitute for a camera. If you want to try this out: In the April 1990 issue of his PsL News, Nelson Ford discussed a video card, the VIP-640. He found that he could get a better image by using a Sony black-and-white camera and the VIP-640 than he could with a scanner and Gray F/X. His group, the Houston Area League, absolutely the best when it comes to shareware, offers a bundled package for about $500, including a Sony black-and-white camera, a board, plus PicturePublisher. There are more expensive commercial ventures also; the simplest way of learning about them is through the journal Resolution, which is available free from its publisher (P. O. Box 1347, Camden, Maine 04843). We are seeing the advent of new digitizing cameras, such as the Canon Color Xapshot. With this "camera" you can take pictures in color, do macrophotographs, use filters, record up to 50 pictures on one disk, etc. The problem is that you have to buy an interface card to make it work for your computer, since the little disk is incompatible with anything else. I have used this for runestones, and it does an excellent job. Another recent arrival is the Dycam, a digital still camera which does not have a disk and does not require a board. It stores its photos in its own RAM and can store 32 images at 256 gray levels and 376-by-240 resolution, according to InfoWorld 13.32 (August 12, 1991), p. 54. 
It then feeds them into your computer through the serial port. Neither of these cameras is suitable for finicking work, but they are a start. More expensive and better devices, such as the new Sony SEPS-1000 (see Resolution, Jan/Feb, 1992, p. 7), which will permit better "scientific" photography, are coming on the market. The value of having a camera attached to a computer is enormous. As the size of computers comes down, one can envisage carrying five pounds of equipment and being able to do such things as filter in real time. Normally if one, say, has the idea that a light blue filter (in the case of ferrous-based inks which occasionally have an orangish cast) might work, one has to take a photograph, go to the darkroom, develop it, perhaps even print it, before finding out that it did or did not work. With a digitizing camera attached to a computer with a monitor, one can see the results immediately. When more and more such cameras are made available, filtration will be easier, we will have wrap-around ultraviolets, etc. With the advent of new graphic formats and reduction techniques, storage will cease to be a problem. It should also be pointed out that the first problem in image acquisition is access, which is often the most difficult part of the whole process. Not many keepers of archives are going to be willing to have a scholar with a back-load of equipment photograph in their archive. Given also the problem one has with local current, etc., it is extremely important, if one wants to do one's own work, and that is the only good way to go, to be self-contained and light.

Image Manipulation -- The Computer as Darkroom Once one has acquired the image, one can (both fortunately and unfortunately) manipulate it in various ways. Remembering our formula for the pixel, f(x,y), one could, for example, write a simple BASIC routine, "let f(x,y) = f(x,y) + 40" and brighten a photograph by 40 units. One could falsify a document just as easily, however, and scholarship is going to have to address this problem. You can take a photograph of a friend and put two noses on him/her. Two excellent books illustrating such techniques are: Composites: Computer Generated Portraits, by Nancy Burson, Richard Carling and David Kramlich (NY: Beech Tree Books, 1986; ISBN 0-688-02601-X) and Gerard J. Holzmann, Beyond Photography: The Digital Darkroom (Englewood Cliffs, NJ: Prentice Hall, 1988; ISBN 0-13-074410-7). Software illustrations of the latter (in C) are also available. For this reason, it is important that the scholar use only algorithms; otherwise his work is just as subjective as that of the lithographer or the xylographer. If the intention is to make a legible facsimile, and if the scholar clearly announces his intent and the fact that he is using geometric methods, I see nothing wrong with such manipulation. I will just mention some of the algorithms which may be used. For a thorough discussion, you cannot get better than Rafael C. Gonzalez and Paul Wintz, Digital Image Processing (Addison-Wesley, 1977; ISBN 0-201-02596-5). It is somewhat out-of-date (though there is a second edition, which I don't have at hand), but I haven't seen anything better. If you want to see what histogram equalization can accomplish, look at the picture of the dollar on p. 126.
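The one-line brightening routine can be rendered as a minimal Python sketch (Python rather than BASIC, the name brighten is mine, and I have added clipping at pure white, which the one-liner omits):

```python
def brighten(image, amount=40, white=255):
    """let f(x,y) = f(x,y) + 40, clipped so no pixel exceeds pure white."""
    return [[min(v + amount, white) for v in row] for row in image]

brighter = brighten([[0, 100], [215, 255]])
```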

Geometric operations Most of these are best ignored, but it is good, for example, to mask off a portion of a picture to work on. For the most part, operations should not be carried out on an entire frame (picture), since you probably will not want to increase the contrast, for example, over the entire picture. This operation of masking, which can be difficult in photography, is easy with the computer. Note also that masking is not dangerous, since most programs will allow one to return to the original position seamlessly. The same can be said for using various overlays, which can at times be useful. It is quite difficult to do overlays without slippage with normal photography; in the computer it presents no problem. It can occasionally be of interest to use geometric operations to correct deformities in the original or the registration of it, as in the case of very crinkly parchment or tightly rolled scrolls.
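A minimal sketch of masking in Python (names mine): the operation is applied only where the mask is set, and the original stays untouched elsewhere, so the step is fully reversible:

```python
def apply_in_mask(image, mask, op):
    """Apply op to pixels where mask is True; leave the rest untouched."""
    return [[op(v) if m else v for v, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

# Brighten only the masked (True) pixels of a tiny image.
masked = apply_in_mask([[10, 20], [30, 40]],
                       [[True, False], [False, True]],
                       lambda v: v + 100)
```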

Cutting and pasting. These two geometric operations can be of great value. In the work of "lacunology," for example, letters from one part of a manuscript are used to "repair" letters from another part. In my own case, I have cut out Gothic letters and assigned them to keys, using SLEd (from VS Software), so that I then had a typewriter which typed Gothic characters as they were found in our manuscripts, both on the screen and in the printer.

Enlargement and reduction. Other geometric operations include enlargement or reduction. The latter operation, often overlooked, is good for old macro photos, since stepping them down increases the resolution. Here, too, the results can at times be spectacular.
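Stepping a picture down can be sketched as block averaging (an assumption on my part about the exact method; here each 2-by-2 block of pixels collapses into its mean):

```python
def step_down(image):
    """Reduce a gray-scale image by averaging each 2x2 block into one pixel."""
    out = []
    for y in range(0, len(image) - 1, 2):
        row = []
        for x in range(0, len(image[0]) - 1, 2):
            block = (image[y][x] + image[y][x+1] +
                     image[y+1][x] + image[y+1][x+1])
            row.append(block // 4)   # integer mean of the four pixels
        out.append(row)
    return out

small = step_down([[0, 0, 100, 100],
                   [0, 0, 100, 100],
                   [200, 200, 50, 50],
                   [200, 200, 50, 50]])
```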

Radiometric operations These are the ones of most interest to us who work with difficult-to-register scripts and artifacts. We have already mentioned brightening, an obvious darkroom operation and one which is simple for the computer.

Contrast stretching. Those who remember the stir caused by William Bennett's use of high contrast in his study of the Skeireins (e.g. "The Vatican Leaves of the Skeireins in High-Contrast Reproduction," PMLA LXIX [1954] 655-676) will understand the (often wrong) desire of the scholar to add contrast to a picture. In the case of computer images, this is no problem; one simply asks that, e.g., all values from 1-50 become 1 (are replaced in the lookup table by 1), whereas all other values become zero. The results can be spectacular. NB: in real use, it is best to have a joy-stick installed and to change the values continuously until the result one is looking for is obtained. Make sure to mask!
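The lookup-table replacement just described can be sketched directly in Python (names mine; the particular mapping, values 1-50 to 1 and everything else to 0, is the author's example):

```python
def stretch_lut(low=1, high=50, levels=256):
    """Lookup table: values in [low, high] become 1, all others become 0."""
    return [1 if low <= v <= high else 0 for v in range(levels)]

lut = stretch_lut()
# Every pixel is replaced by its entry in the lookup table.
binarized = [[lut[v] for v in row] for row in [[0, 10], [50, 200]]]
```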

Histogram operations. One can manipulate the histogram of the dispersal of grays in a picture. This can be done to part of the picture or to all of it. Many of the special effects seen on TV are done by histogram specification. Histogram equalization, which reduces highs and lows on the gray scale, often reveals things which cannot be seen by the naked eye on a photograph.
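Histogram equalization can be sketched self-containedly in Python (the standard textbook formula, not any particular package's implementation; names mine):

```python
def equalize(image, levels=256):
    """Spread the gray levels so the cumulative distribution of values
    becomes (roughly) uniform, revealing detail in crowded ranges."""
    flat = [v for row in image for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, running = [], 0          # cumulative distribution of gray values
    for count in hist:
        running += count
        cdf.append(running)
    n = len(flat)
    cdf_min = min(c for c in cdf if c > 0)
    if n == cdf_min:              # flat image: nothing to equalize
        return [row[:] for row in image]
    remap = [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
             for v in range(levels)]
    return [[remap[v] for v in row] for row in image]

equalized = equalize([[50, 50], [100, 200]])
```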

Edge finding. One can set up an algorithm to sense differences in the radiometric values (gray levels) in an area, connect the values where the differentiation takes place, and obtain an edge. The results can at times be of use; an example of what can be done is seen in my article, "The Use of the Computer in the Humanities," Ideal 2 (1987), p. 27. We fed into the computer pictures of Gothic letters which were quite unusable, sensed the edges, contour rounded, and filled in: the result was a Gothic alphabet remarkably like that obtained from a professional scribe (cf. Sydney Fairbanks and F. P. Magoun, Jr., "On Writing and Printing Gothic," Speculum 15 [1940] 313-330, 16 [1941] 122).
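A toy edge finder along these lines (my own simplification, thresholding neighbor differences; real edge-sensing algorithms are more elaborate):

```python
def edges(image, threshold=50):
    """Mark a pixel as edge (1) where its gray value jumps sharply
    relative to its right or lower neighbor."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right = abs(image[y][x] - image[y][x+1]) if x + 1 < w else 0
            down  = abs(image[y][x] - image[y+1][x]) if y + 1 < h else 0
            if max(right, down) >= threshold:
                out[y][x] = 1
    return out

# A sharp vertical boundary between a dark and a bright region.
edge_map = edges([[0, 0, 255],
                  [0, 0, 255],
                  [0, 0, 255]])
```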

Image smoothing. By a somewhat opposite method, one can obtain smoothing of an image, analogous to the use of a soft-focus lens in photography. This can be quite useful in processing photographs of three-dimensional objects where edges are too sharp and interfere with perception.
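Smoothing can be sketched as neighborhood averaging (a 4-neighbor mean here, my own minimal choice; real soft-focus kernels are larger):

```python
def smooth(image):
    """Replace each pixel by the average of itself and its nearest
    neighbors -- a soft-focus effect."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[y][x]]
            if x > 0:     vals.append(image[y][x-1])
            if x < w - 1: vals.append(image[y][x+1])
            if y > 0:     vals.append(image[y-1][x])
            if y < h - 1: vals.append(image[y+1][x])
            out[y][x] = sum(vals) // len(vals)
    return out

softened = smooth([[0, 100, 0]])
```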

Pseudo-color. Since one can address each pixel and also each level of gray, it is possible to tell all values from 50 to 70, for example, to turn green. At times, this, too, can be very useful, mainly for decipherment of the photograph, not for publication. Such software as Paintbrush IV Plus from Z-Soft can be used for this purpose.
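Telling a band of gray values "to turn green" can be sketched as follows (RGB triples, purely illustrative; names mine):

```python
def pseudo_color(image, low=50, high=70, color=(0, 255, 0)):
    """Map gray values in [low, high] to a color (green by default);
    other pixels stay gray, as (v, v, v) RGB triples."""
    return [[color if low <= v <= high else (v, v, v) for v in row]
            for row in image]

colored = pseudo_color([[40, 60]])
```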

Density slicing. A kind of pseudo-color operation is density slicing, in which one selects a "slice" of values, say 30-50, and has them turn black, whereas all others are assigned white. In the case of a gray-rich photo of, say, a palimpsest, the results again can be excellent. This represents a new event in photography, one which cannot be duplicated in the darkroom.
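A hedged Python sketch of the slicing operation just described (function name mine; the slice turns black, everything else white):

```python
def density_slice(image, low=30, high=50, white=255):
    """Values inside the slice [low, high] turn black (0); all others white."""
    return [[0 if low <= v <= high else white for v in row] for row in image]

sliced = density_slice([[20, 35], [45, 80]])
```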

Deblurring. It has recently been announced that investigators at Rochester have succeeded in developing an algorithm for enhancing out-of-focus images. As any photographer who has worked in macro-photography can tell you, this is an all-too-common event. See "Taking the Fuzz out of Photos," Newsweek (Jan. 8, 1990, p. 61). Many of these operations have been programmed and are available in off-the-shelf software. Two which I recommend to those who use the DOS platform are PicturePublisher from Micrografx (works under Windows; often bundled with other programs) and Gray f/x (from Xerox). I have already mentioned Paintbrush (from Z-Soft) as a very useful tool. It should be pointed out that such work is not easy; it is tedious in the extreme, and requires hard and careful work. If you want to do careful work, e.g. density slicing, you need to do your own programming, which is nothing like as hard as it seems at first. Mit Sturm ist da nichts einzunehmen ["Nothing comes easy!" Goethe]. It should also be pointed out that we are just beginning. I have not written about 3-dimensional imaging, about color, about holography, about the possibility of 3-dimensional printing, all of which are upon us. More people are becoming involved. The space effort and the raising of the Titanic are highly visible uses of image processing and remote (non-invasive) sensing. One already sees image enhancement studios on a commercial basis arising all over America. Such establishments refurbish old photos (despeckling, contour rounding, pseudo-color, etc.) much in the manner in which retouchers used to work. This means cheaper and better software and hardware. At the same time, storage capacity is going up day-by-day. Kodak has announced a new "darkroom," called Photo-CD, which consists of hardware and software to handle slides which are simply dropped into the scanner. The result can be enhanced by their software, then stored on CD-ROM. 
This means for us, for example, that the entire oeuvre of the Swedish painter, Albertus Pictor, almost totally unknown outside Sweden, can be made available on 3 CD-ROMs, with captions and discussion, and can then be displayed using random access techniques. The possibilities for recording and display of early manuscripts are enormous. The next generation of scholars will have to become not only computer literate, but also image literate. [Prof. Marchand can be reached electronically as MARCHAND@UX1.CSO.UIUC.EDU, or by regular mail at 3072 FLB, 707 S. Mathews, University of Illinois, Urbana IL 61801.] ----- Review: CETEDOC Library of Christian Latin Texts (CLCLT), reviewed by James J. O'Donnell, Department of Classical Studies, University of Pennsylvania The first edition of the CETEDOC CD-ROM database of patristic and medieval Latin texts ("CLCLT" in the promotional literature) was released on schedule in late December, 1991. The disk was prepared by the Belgian CETEDOC project at the University of Louvain and is distributed by the publishing firm of Brepols. The first version contains 21 million words of text, mainly duplicating the contents of the Corpus Christianorum series (both patristic and medieval sections), but with additional texts as well from other sources chosen to bring certain central authors up to completion: so Augustine, Jerome, and Gregory the Great are here in toto, even where this means using older editions of some works. The disk comes with its own software. Installation on my DOS-based 386 (33 MHz) machine took approximately one minute (but note that I already had the CD-ROM drive installed and operating for other purposes -- if you need to start from scratch it will take a little longer), and the first successful word search took approximately one additional minute. The disk can also be run with a Macintosh. 
The manual is concise and helpful, particularly in offering strategies for searching the varying orthographies of later Latin texts. The strengths of the disk are its coverage of patristic texts (its medieval coverage reflects the spottiness and distribution of Corpus Christianorum Continuatio Medievalis editions) and the friendliness of the software. That first word search, for a word that occurs 16 times in the whole database, took approximately six seconds. A test search for "pagan*" (where the "*" wildcard extender covers any number of characters, and thus embraces lemmata from pagana to paganitas) now takes four seconds. The secret, as with all high-speed CD searching, is in the indexing that has been done on the disk already. The fastest searches are those that cover the whole disk; to restrict the search area to an author or group of authors both requires a little fiddling beforehand (and restricting to a heavily-represented author can take as much as a minute, while the software makes up its mind which texts to search for you), and then the search itself takes a little longer; but try as I might to befuddle the system, I have not succeeded in making any search take longer than two minutes from conception to completion. Output may be stored to disk or printed. Output comes in the form of screen after screen of "sententiae," the disk's shorthand for the units of composition it uses for reckoning. Roughly equivalent to English "sentences," these units generally provide a generous sampling of context based on syntax and meaning, not an arbitrary number of lines before and after. Further, if you place the cursor on any search-result passage and hit the CR (return/send key), you are automatically given the full context, that is, the original text with the cursor on the passage you have been directed to: you may then scroll up and down as you please through the whole work from which that passage comes. This is a *very* friendly feature of the program. 
The software leaves some things to be desired. (1) It is set up to discourage downloading of texts. Now in fact one can circumvent the software and capture screen after screen of text with some screen-dump utility, then format for oneself, but the program's license agreement forbids this. It is not clear to me what abuse the disk's producers fear, and in the long run downloadable texts will be far more attractive than those with any form of "copy protection," as software producers learned some years ago. (2) When you get a huge selection of "hits" with a search, the only ways to approach this data are either to scroll down one "sententia" at a time (you always know how many hits you have achieved, so you can estimate how time-consuming this will be), or go to the bottom of the file of results or to the top. There is no way to jump to the middle, though a handy Mac-like register on the right side of the screen positively cries out to be manipulated by a mouse. I find this top/bottom limitation especially inconvenient owing to my interest in Augustine: for the average global search with many "hits," Augustine comes somewhere in the middle of the pile, and I am already weary of scrolling through Tertullian and Cyprian to get to him. One way around this, of course, is to "print" the results of the search to disk, then open the disk file with a word processor and search there. (For what it's worth I found this very easy to do with Desqview in DOS: it was a matter of seconds to configure the CETEDOC program to run under Desqview, so I can keep the CETEDOC program running, hop over to a word processor and check files, then hop back. I also keep the Latin Vulgate text (thanks to CCAT) on my regular hard disk, so I have the capacity to search the Vulgate with one hand, the fathers with the other, and run a third word processing window at the same time for the results. Hog heaven for the likes of me, to be sure.) 
I have not yet tried resizing my Desqview windows to have the material all visible on screen at once, but in principle that is possible as well. (3) The most annoying small problem with the software is that it is not possible to search for a specific reference: so if I have a footnote to Aug. civ. dei 18.43, I cannot simply dip into the disk and see that passage. I *think* I would have to have some clue as to the subject or wording of that passage before I could find it quickly on the disk. I regard these drawbacks as minor ones to be worked out in later releases and am quite happy with the disk. Pricing is high by the standards of TLG or PHI, but within reach. The list price to an institution (or its library) depends on what other Brepols subscriptions the institution has paid for; a typical list price is on the order of $3,000. Once an institution purchases a single copy, moreover, it or any member of the institution may purchase a further copy for half the subscription price. This means in practice that if a scholar's institution can be persuaded to acquire a copy (for the library, e.g.), the individual scholar can then acquire another at half price; and $1500 is at least within reach, by comparison to the current TLG subscription rates. The CETEDOC disk is purchased outright, and updates are promised every two years or so. This is not the only project of its sort. The British publishing firm of Chadwyck-Healey has announced a project to publish a multi-CD edition comprising all the contents of Migne's Patrologia Latina, entered by double-keyboarding and tagged in accord with TEI/SGML standards to help distinguish, say, ancient texts from modern annotation. The project makes no attempt to replace PL texts with better modern editions, where they exist, and so will inevitably give new life to the philological methods and achievements of the seventeenth and eighteenth centuries, from which much of the PL anthology was drawn. 
Whether this project is well-founded has been the subject of lively debate on e-mail lists like MEDTEXTL (Medieval Text List), ANSAX-L (Anglo-Saxon Studies), and HUMANIST. The pricing may make discussion moot for many: subscription price to purchase the materials for those who hurry (and pay their money well in advance of seeing a first disk) is $45,000, ascending by stages to $60,000 on publication of the first disk (optimistically scheduled for 1993). The present writer has not been shy in expressing his criticism of this project, but recognizes that wise and eminent persons (including my own dissertation adviser and a colleague with whom I am team-teaching a patristic seminar this term) have been more generous in assessing its merits and potential. At any rate, a new age has now begun. The CETEDOC disk's 21 million words approximately triple the total body of computer-accessible Latin text in existence (the PHI disk with virtually all of classical literature on it seems to contain not much more than 10 or 11 million words; in the patristic area, Augustine alone runs to 5 million words). The speed and versatility of the software and the ease of use will make it an indispensable tool in short order. ----- Announcements: Morphologically Analyzed Hebrew Bible Materials and Accessing Software, contributed by Alan Groves, Biblical Studies/Old Testament, Westminster Theological Seminary Westminster Morphology, an electronic, morphologically tagged text of the Hebrew Bible (BHS) has recently been released by Westminster Seminary. It is in raw ASCII and requires 14MB of hard disk space. The text includes locators (chapter & verse), transliterated BHS text, transliterated dictionary lemma, and analysis. Ketiv/Qere are noted as are certain problems with the text of BHS. The cost is $90 (including shipping and handling) from: Prof. 
Alan Groves Westminster Seminary Philadelphia, PA 19118 phone: 215 887-5511 email: groves@penndrls.upenn.edu Accessing Software: Obtaining appropriate search and retrieval software for use with this material is a separate issue. At present only LBase (obtainable from Westminster, other secondary sources, or directly from Silver Mountain Software) is available for sophisticated use of this data. LBase works on IBM/DOS machines, with the straight ASCII text or with a specially encoded version prepared by Silver Mountain Software. Other software is forthcoming for both Mac and IBM/DOS. In a related development, the Electronic Concordance Application (ECA) morphologically and syntactically encoded Hebrew database from the Free University (Amsterdam), a sister to the Westminster morphology, is being bundled with QUEST, a sophisticated morpho-syntactical search engine, to provide electronic access to the morphology and syntax of BHS on IBM 286/386/486 type machines. Demonstrated at SBL in Kansas City, this program and database require 8MB of hard disk space, 1MB of RAM (2MB for best performance), and a VGA or EGA or Hercules graphics adaptor. Because the data is pre-indexed and compressed, and thus 'fixed,' it cannot be changed by the scholar but it can be imported into other environments for reformatting and printing. The package including QUEST and the ECA-Database will be available on or around 1 March from Prof. Groves for $199 plus shipping and handling. For further information contact Prof. Groves. ----- Announcements: Directory of Electronic Materials and an Electronic Religious Studies Publications Listing (CONTENTS), contributed by Michael Strangelove, University of Ottawa A new model of collaborative, not-for-profit scholarly publishing has developed over the past year that combines networked electronic text with the availability of a low-cost printed text. 
This model is seen in the Association of Research Libraries' Office of Scientific and Academic Publishing's first edition of the Directory of Electronic Journals, Newsletters and Academic Discussion Lists, by Michael Strangelove and Diane Kovacs, edited by Ann Okerson (ISSN: 1057-1337, 1991). The ARL approached the authors a year ago and offered to publish in one printed volume the electronic Directory of Electronic Journals and Newsletters prepared by Michael Strangelove and the Directory of Academic Discussion Lists and Interest Groups by Diane Kovacs. The first directory documents twenty-seven electronic journals and eighty-three electronic newsletters that are distributed via the academic networks. The second directory documents five hundred seventeen online academic discussion lists that function like ongoing conferences on all subjects, in which all are invited to participate freely. These two new communication media represent the beginning of a revolution in the nature of scholarly communication. Under the ARL agreement, the authors would retain copyright to the electronic text while the ARL received limited rights to be the sole distributor of the printed version of this edition. Unlike many contracts, this copyright did not extend beyond the first edition. The first run of one thousand copies quickly sold out even before reviews were available. A second revised edition of two thousand copies will be available shortly. This collaborative effort demonstrates that there is no necessary conflict between freely available networked versions of a text and hardcopy publication, even at the low price of the ARL Directory. The result is a democratizing of access to the authors' work that ensures that both electronic and print-based users have full access to the text. No one is denied access to the material due to a simple lack of money, and authors do not lose control of their intellectual productions. 
This sort of model for not-for-profit publication deserves serious consideration from universities and academic associations if networked publication is to be integrated into existing peer review and academic advancement structures. The Net has the potential to bring an end to the currently common transfer to publishers of authorial ownership of intellectual production, and to replace print-based publication as the primary means of the dissemination and legitimation of academic production. Yet this potential will not be realized unless innovative and collaborative models are aggressively and proactively established in the course of this decade.

The next phase of the ARL Directory will continue to expand the use of the Net as a publication medium by creating a fully searchable, TELNET-accessible database as a third means of access. Contrary to the position maintained by traditional for-profit publishers, the ARL is committed to expanding Internet access to the text to the point where there is no longer a demand for maintaining anything other than a networked source. This phase will also see a freely available hypertext version created.

The potential uses of the academic networks extend far beyond the new forums of electronic serials and academic conferences. One new model of networked information dissemination is being attempted in the Religious Studies Publications List called CONTENTS. This LISTSERV list is designed to replicate serials such as the Religious Studies Review by making information available to the online academic community regarding new publications of relevance to religious studies.
CONTENTS@UOTTAWA (or @ACADVM1.UOTTAWA.CA) departs from the print-based publication review vehicles in several significant ways: (1) there is no subscription fee; (2) there is no limitation on the number or size of reviews and notes on recent publications; (3) the contents of journal issues will be posted and reviewed, and the tables of contents of all publications documented will also be posted to list members; and (4) there will be minimal time lag between the release of a publication and the availability of reviews and indications of the contents of new publications. This new forum anticipates the near future when many publishers will offer delivery of individual book chapters and journal articles to the research community.

In its first few weeks of operation, CONTENTS already has over one hundred sixty members and four cooperating academic publishers -- Wilfrid Laurier University Press, Sheffield Academic Press, the Catholic University of America Press, and the University of Scranton Press. Publishers will be encouraged to provide an online document ordering service to list members. Along with journals and books in religious studies, CONTENTS will also post information on theses, dissertations, pre-publication papers, networked bibliographies, and related documents of interest to the academic community. All records will be archived and searchable via LISTSERV, and the database will eventually be mounted on a public access computer system as a fully searchable, TELNET-accessible database.
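For readers unfamiliar with LISTSERV mechanics, joining a list of this kind and exploring its archives is normally done by one-line commands sent by electronic mail to the LISTSERV address (not to the list itself). The lines below follow the standard LISTSERV conventions of the day; the exact commands for CONTENTS are an assumption on our part, so consult the list owner if they do not work as expected:

   To join, send mail to LISTSERV@UOTTAWA (BITNET) or
   LISTSERV@ACADVM1.UOTTAWA.CA (Internet) containing the single line:

      SUBSCRIBE CONTENTS Your Full Name

   To see what archive files are available for retrieval, send:

      INDEX CONTENTS

The same LISTSERV address also handles signoff (SIGNOFF CONTENTS) and help requests (HELP).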
For more information on how to access the networked version of the ARL Directory project or for information on how to participate in the Religious Studies Publications List, contact the CONTENTS project director:

Michael Strangelove
Department of Religious Studies
University of Ottawa
BITNET: 441495@Uottawa
Internet: 441495@Acadvm1.Uottawa.CA
Postal Mail: 177 Waller, Ottawa, Ontario, K1N 6N5 CANADA
Voice: (613) 237-2052
FAX: (613) 564-6641

Afterword: The Computers and Texts Newsletter

Another valuable source of information that for the present remains free for the asking is the newsletter Computers and Texts, produced by the Computers in Teaching Initiative (CTI) Centre for Textual Studies/Office for Humanities Communication (OHC) at the Oxford University Computing Services, 13 Banbury Road, Oxford OX2 6NN England (CTITEXT@VAX.OX.AC.UK). With the departure of former director Susan Hockey to take up the leadership of the Center for Electronic Texts in the Humanities (CETH) sponsored by Princeton and Rutgers Universities, the CTI/OHC is being directed by Dr. Marilyn Deegan. The Fall 1991 issue of the Newsletter focuses on the acquisition, encoding, and analysis of texts; the Spring 1992 issue will deal with applications of computers to philosophy. Sometimes it is still true that the good things in life are free. Don't miss this one.

<----->

Please send information, suggestions, or queries concerning OFFLINE to Robert A. Kraft, Box 36 College Hall, University of Pennsylvania, Philadelphia PA 19104-6303. Telephone (215) 898-5827. Internet address: KRAFT@PENNDRLS.UPENN.EDU (please note that the previous BITNET address is no longer operational). To request printed information or materials from OFFLINE, please supply an appropriately sized, self-addressed envelope or an address label. A complete electronic file of OFFLINE columns is available upon request (for IBM/DOS, Mac, or IBYCUS), or from the HUMANIST discussion group FileServer (BROWNVM.BITNET).

//end #37//