“And you must admit,” Fitzhugh added, “a spaceship which was given that sort of information might be dangerous.”

This time the laughter was even louder.

“Well, then,” the roboticist continued, “if a mechanism is capable of learning, how do you keep it from becoming dangerous or destroying itself?

“That was the problem that faced us when we built Snookums.

“So we decided to apply the famous Three Laws of Robotics propounded over a century ago by a brilliant American biochemist and philosopher.

“Here they are:

“‘One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

“‘Two: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

“‘Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’”

Fitzhugh paused to let his words sink in, then: “Those are the ideal laws, of course. Even their propounder pointed out that they would be extremely difficult to put into practice. A robot is a logical machine, but even defining a human being becomes something of a problem. Is a five-year-old competent to give orders to a robot?