As an academic, I am of course one of the parasites of society, and hence all in favor of free access to all information. But as a part-owner of a small startup company, I am aware of how much it costs to assemble and format information, and the need to charge somehow.

To balance these two wishes, I like the model by which raw information (and some "raw" resources, such as programming languages and basic access capabilities like Web search engines) is made available for free. This creates a market and allows people to do at least something. But processed information, and the systems that help you get and structure just exactly what you need, I think should be paid for. That allows developers of new and better technology to be rewarded for their effort.

Take an example: a dictionary, today, is not free. Dictionary companies refuse to make their dictionaries available to research groups and others for free, arguing that they have centuries of work invested. (I have had several discussions with dictionary companies on this.) But dictionaries today are stupid products: you have to know the word before you can find the word! I would love to have something that allows me to give an approximate meaning, or perhaps a sentence or two with a gap where I want the word I am looking for, or even the equivalent in another language, and returns the word(s) I am looking for. This is not hard to build, but you need the core dictionary to start with. I think we should have the core dictionary freely available, and pay for the engine (or the service) that allows you to enter partial or only somewhat accurate information and helps you find the best result.

A second example: you should have free access to all the Web, and to basic search engines like those available today. No copyrights, no license fees. But if you want an engine that provides a good targeted answer, pinpointed and evaluated for trustworthiness, then I think it is not unreasonable to pay for that.

Naturally, an encyclopedia builder will not like my proposal. But to him or her I say: package your encyclopedia inside a useful access system, because without it the raw information you provide is just more data, and can easily get lost in the sea of data available and growing every hour.

*Interview of September 2, 2000

= What has happened since our last interview?

I see a continued increase in small companies using language technology in one way or another: to provide search, translation, report generation, or some other communication function. The number of niches in which language technology can be applied continues to surprise me: from stock reports and updates to business-to-business communications to marketing…

With regard to research, the main breakthrough I see was led by a colleague at ISI (I am proud to say), Kevin Knight. Last summer, a team of scientists and students at Johns Hopkins University in Maryland developed a faster and otherwise improved version of a method originally created (and kept proprietary) by IBM about 12 years ago. This method allows one to create a machine translation (MT) system automatically, as long as one gives it enough bilingual text. Essentially the method finds all correspondences in words and word positions across the two languages and then builds up large tables of rules for what gets translated to what, and how it is phrased.
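The idea of finding word correspondences automatically can be sketched with a toy expectation-maximization loop in the spirit of the IBM word-alignment models. This is a drastic simplification of what such systems actually do, and the three-sentence corpus and variable names are my own illustration: the loop alternately guesses which English word produced each foreign word, then re-estimates the translation table from those guesses.

```python
from collections import defaultdict

# Toy parallel corpus (hypothetical data, purely illustrative).
pairs = [
    (["la", "maison"], ["the", "house"]),
    (["la", "fleur"], ["the", "flower"]),
    (["une", "maison"], ["a", "house"]),
]

# t[(f, e)] approximates P(foreign word f | English word e),
# initialized uniformly over the foreign vocabulary.
f_vocab = {f for fs, _ in pairs for f in fs}
t = defaultdict(lambda: 1.0 / len(f_vocab))

for _ in range(20):  # EM iterations
    count = defaultdict(float)  # expected co-occurrence counts c(f, e)
    total = defaultdict(float)  # expected marginal counts c(e)
    for fs, es in pairs:
        for f in fs:
            # E-step: spread one count for f over its possible English
            # sources, in proportion to the current t estimates.
            norm = sum(t[(f, e)] for e in es)
            for e in es:
                frac = t[(f, e)] / norm
                count[(f, e)] += frac
                total[e] += frac
    # M-step: re-estimate the translation table from expected counts.
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

# The right correspondences emerge from co-occurrence statistics alone:
# t("maison", "house") and t("la", "the") come to dominate their rivals.
print(round(t[("maison", "house")], 2), round(t[("la", "the")], 2))
```

Because "la" and "maison" each appear in two different sentence pairs, the ambiguity in the first pair resolves itself, which is exactly the effect that makes large bilingual corpora so valuable for this method.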

Although the output quality is still low (no one would consider this a final product, or use the translated output as is), the team built a low-quality Chinese-to-English MT system in 24 hours. That is a phenomenal feat; it has never been done before. (Of course, say the critics: you need something like 3 million sentence pairs, which you can only get from the parliaments of Canada, Hong Kong, or other bilingual countries; and of course, they say, the quality is low. But the fact is that more bilingual and semi-equivalent text is becoming available online every day, and the quality will keep improving to at least the current level of MT engines built by hand. Of that I am certain.)