To the laboratory interested in developing a system of the magnitude of the Yale system, but not a copy, it must be reiterated that neither Yale's out-of-pocket costs nor the cost of copying the system represents the total cost of development. IBM's development costs are not known, but they may be assumed to be very large.

From IBM's viewpoint, the ADC and scaler project is the least successful part of the whole project. Although those instruments are technically excellent, IBM is either unwilling or unable to sell them at a price competitive with the costs of front-end and interface equipment available from the traditional nuclear instrument manufacturers. However, ADC's and scalers available from the traditional sources can easily be interfaced to the front end (whose price is in keeping with its power and versatility). The University of Maryland has followed this procedure.

We therefore present the cost to some other laboratory of copying the Yale system. In Tables 4 and 5, following the example of Maryland, we have selected not IBM ADC's and scalers but rather less expensive components from traditional manufacturers, together with suitable adapters available commercially. The prices shown are to be considered strictly reference numbers and in no way constitute price quotations.

5. General Comments on Experience with the System

Developing the data-acquisition software first as a system running within the standard batch programming system for the 360/44 allowed it to become operational within three months of the delivery of the computer. This not only permitted useful work almost immediately but also yielded important experience which is being applied to the development of the multiprogrammed version.

One of the main lessons so far is that a batch-oriented system barely begins to tap the real-time potential of a computer such as the 360/44. In a batch system, whatever analysis is needed during data acquisition must somehow be tied to the processing of events. If this is not possible, it is necessary to stop data acquisition in order to do analysis, even though, on a millisecond time scale, plenty of CPU time is available during acquisition. Multiprogramming software is necessary in order to utilize this available time. This means that multiprogramming not only makes the machine available to several people at a time but, more important, makes large amounts of parallel processing power available to the experimenter.
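The contrast can be illustrated with a minimal modern sketch, with Python threads standing in for the 360/44's multiprogramming (all names here are hypothetical, not from the original system): an acquisition thread fills and hands off event buffers continuously, while a separate analysis thread consumes the CPU time that a batch system would leave idle during acquisition.

```python
import queue
import threading

buffers = queue.Queue()      # full event buffers handed off by the "front end"
histogram = [0] * 16         # analysis result, built up in parallel

def acquire(n_buffers, events_per_buffer):
    """Acquisition: fill buffers and hand them off without ever stopping."""
    for i in range(n_buffers):
        buf = [(i * events_per_buffer + j) % 16 for j in range(events_per_buffer)]
        buffers.put(buf)
    buffers.put(None)        # end-of-run marker

def analyze():
    """Analysis runs concurrently, using CPU time idle during acquisition."""
    while (buf := buffers.get()) is not None:
        for channel in buf:
            histogram[channel] += 1

t_acq = threading.Thread(target=acquire, args=(8, 64))
t_ana = threading.Thread(target=analyze)
t_acq.start(); t_ana.start()
t_acq.join(); t_ana.join()
print(sum(histogram))        # all 8 * 64 = 512 events were analyzed
```

In a batch system, by contrast, `analyze` could only run after `acquire` had been stopped; the queue-and-thread structure is what lets the two proceed in parallel.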

It has also been shown quite conclusively that the ability of the physicist to program his own experiment (in Fortran) gives him enormous power, power which simply would not be available on a suitable time scale if he had to queue up for the services of a system programmer.

While the generalized event structure gives the experimenter considerable ability to deal with complex experimental situations, it has an overhead associated with it which limits it to about 5000 events per second. This is, of course, adequate for all experiments that demand such an event structure. For simple pulse-height analysis, it is unnecessary overhead, but it can be "turned off" in a trivial way, by simply defining the completion of filling of the buffer as an event and calling a special pulse-height-analysis program to process the entire buffer, bypassing the event sorting. This allows for close to 100,000 pulse-height analyses per second.