I, Robot

Isaac Asimov


Chapter 2

RUNAROUND

“We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.” “Right!” “Two,” continued Powell, “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” “Right!” “And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

| Location: 42 | Date: March 2, 2016 |
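
One toy way to read the hierarchy (my sketch; nothing like this appears in the book) is as a lexicographic preference over actions: a robot accepts any amount of disobedience or self-risk before allowing any human harm, and any amount of self-risk before any disobedience. The scoring functions and scenario below are invented purely for illustration.

```python
# Toy sketch (not from the book): the Three Laws as a lexicographic
# preference. Python compares tuples element by element, so the First
# Law score dominates, then the Second, then the Third.

def choose(actions, harm_to_human, disobedience, self_risk):
    """Pick the action with the smallest (Law 1, Law 2, Law 3) violation tuple."""
    return min(actions, key=lambda a: (harm_to_human(a), disobedience(a), self_risk(a)))

# Illustrative scenario: rescuing a human disobeys a standing order and
# is dangerous; staying put harms the human through inaction.
best = choose(
    ["rescue", "stay put"],
    harm_to_human=lambda a: 1.0 if a == "stay put" else 0.0,
    disobedience=lambda a: 1.0 if a == "rescue" else 0.0,
    self_risk=lambda a: 0.8 if a == "rescue" else 0.0,
)
print(best)  # "rescue": the First Law outranks the other two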

“The conflict between the various rules is ironed out by the different positronic potentials in the brain. We’ll say that a robot is walking into danger and knows it. The automatic potential that Rule 3 sets up turns him back. But suppose you order him to walk into that danger. In that case, Rule 2 sets up a counterpotential higher than the previous one and the robot follows orders at the risk of existence.” “Well, I know that. What about it?” “Let’s take Speedy’s case. Speedy is one of the latest models, extremely specialized, and as expensive as a battleship. It’s not a thing to be lightly destroyed.” “So?” “So Rule 3 has been strengthened—that was specifically mentioned, by the way, in the advance notices on the SPD models—so that his allergy to danger is unusually high. At the same time, when you sent him out after the selenium, you gave him his order casually and without special emphasis, so that the Rule 2 potential set-up was rather weak. Now, hold on; I’m just stating facts.” “All right, go ahead. I think I get it.” “You see how it works, don’t you? There’s some sort of danger centering at the selenium pool. It increases as he approaches, and at a certain distance from it the Rule 3 potential, unusually high to start with, exactly balances the Rule 2 potential, unusually low to start with.” Donovan rose to his feet in excitement. “And it strikes an equilibrium. I see. Rule 3 drives him back and Rule 2 drives him forward—” “So he follows a circle around the selenium pool, staying on the locus of all points of potential equilibrium. And unless we do something about it, he’ll stay on that circle forever, giving us the good old runaround.”

Notes: Amazing explanation of how the decision system works, and a case where it screwed up.

| Location: 42 | Date: March 2, 2016 |
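
Powell's explanation is quantitative enough to simulate. Here is a hedged sketch with invented functional forms (a constant Rule 2 pull from the weak order, an inverse-square Rule 3 repulsion from the strengthened danger response; the story specifies neither): the radius where the two potentials balance is the circle Speedy runs.

```python
import numpy as np

W_ORDER = 0.3    # weak Rule 2 potential: the order was given casually
K_DANGER = 2.0   # strengthened Rule 3 sensitivity on the SPD models

def rule2_potential(r):
    """Constant pull toward the selenium pool set up by the order."""
    return W_ORDER * np.ones_like(r)

def rule3_potential(r):
    """Repulsion that grows as the robot nears the danger at r = 0
    (inverse-square is an assumption, chosen only so the potential
    rises toward the pool)."""
    return K_DANGER / r**2

r = np.linspace(0.5, 10.0, 1000)   # distance from the pool
net = rule2_potential(r) - rule3_potential(r)
r_eq = r[np.argmin(np.abs(net))]   # where the two potentials balance
print(f"Speedy circles at radius ~ {r_eq:.2f}")
```

Under these assumptions the equilibrium is stable: inside r_eq the Rule 3 repulsion wins and drives him outward, outside it the Rule 2 pull wins and drives him inward, so he stays pinned to the circle, exactly the runaround Powell describes.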




Chapter 5

LIAR

Notes: Really good story. Simple. Entertaining. Thought-provoking.

| Location: 90 | Date: March 3, 2016 |




Chapter 8

EVIDENCE

“If Mr. Byerley breaks any of those three rules, he is not a robot. Unfortunately, this procedure works in only one direction. If he lives up to the rules, it proves nothing one way or the other.” Quinn raised polite eyebrows, “Why not, doctor?” “Because, if you stop to think of it, the three Rules of Robotics are the essential guiding principles of a good many of the world’s ethical systems. Of course, every human being is supposed to have the instinct of self-preservation. That’s Rule Three to a robot. Also every ‘good’ human being, with a social conscience and a sense of responsibility, is supposed to defer to proper authority; to listen to his doctor, his boss, his government, his psychiatrist, his fellow man; to obey laws, to follow rules, to conform to custom—even when they interfere with his comfort or his safety. That’s Rule Two to a robot. Also, every ‘good’ human being is supposed to love others as himself, protect his fellow man, risk his life to save another. That’s Rule One to a robot. To put it simply—if Byerley follows all the Rules of Robotics, he may be a robot, and may simply be a very good man.”

| Location: 171 | Date: March 4, 2016 |

What broke loose is popularly and succinctly described as hell.

| Location: 174 | Date: March 4, 2016 |




Chapter 9

THE EVITABLE CONFLICT

Notes: Robots vs. humans, justified. And a good case for robots, where most humans benefit. Answers how, if the robots win, humanity wins; surprising and believable. 😱👌👍

| Location: 185 | Date: March 5, 2016 |

“You admit the Machine can’t be wrong, and can’t be fed wrong data. I will now show you that it cannot be disobeyed, either, as you think is being done by the Society.” “That I don’t see at all.” “Then listen. Every action by any executive which does not follow the exact directions of the Machine he is working with becomes part of the data for the next problem. The Machine, therefore, knows that the executive has a certain tendency to disobey. He can incorporate that tendency into that data,—even quantitatively, that is, judging exactly how much and in what direction disobedience would occur. Its next answers would be just sufficiently biased so that after the executive concerned disobeyed, he would have automatically corrected those answers to optimal directions. The Machine knows, Stephen!”

| Location: 207 | Date: March 5, 2016 |
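
The mechanism here is a feedback correction: if the Machine can quantify an executive's tendency to deviate from its directive, it can pre-bias the directive so that the deviation itself lands on the true optimum. A minimal sketch, with invented numbers and names:

```python
def issue_directive(optimum, predicted_deviation):
    """Offset the published answer by the deviation the Machine expects,
    inferred from the executive's past (dis)obedience data."""
    return optimum - predicted_deviation

optimum = 100.0              # the answer the Machine wants enacted
predicted_deviation = -12.0  # this executive habitually undershoots by 12
directive = issue_directive(optimum, predicted_deviation)  # publishes 112.0
enacted = directive + predicted_deviation                  # what he actually does
assert enacted == optimum    # the disobedience was already priced in
```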

“You have answered yourself. Nothing is wrong! Think about the Machines for a while, Stephen. They are robots, and they follow the First Law. But the Machines work not for any single human being, but for all humanity, so that the First Law becomes: ‘No Machine may harm humanity; or, through inaction, allow humanity to come to harm.’ “Very well, then, Stephen, what harms humanity? Economic dislocations most of all, from whatever cause. Wouldn’t you say so?” “I would.” “And what is most likely in the future to cause economic dislocations? Answer that, Stephen.” “I should say,” replied Byerley, unwillingly, “the destruction of the Machines.” “And so should I say, and so should the Machines say. Their first care, therefore, is to preserve themselves, for us. And so they are quietly taking care of the only elements left that threaten them. It is not the ‘Society for Humanity’ which is shaking the boat so that the Machines may be destroyed. You have been looking at the reverse of the picture. Say rather that the Machine is shaking the boat— very slightly—just enough to shake loose those few which cling to the side for purposes the Machines consider harmful to humanity.”

| Location: 207 | Date: March 5, 2016 |

“Stephen, how do we know what the ultimate good of humanity will entail? We haven’t at our disposal the infinite factors that the Machine has at its disposal! Perhaps, to give you a not unfamiliar example, our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good—and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don’t know. Only the Machines know, and they are going there and taking us with them.” “But you are telling me, Susan, that the ‘Society for Humanity’ is right; and that Mankind has lost its own say in its future.” “It never had any, really. It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate, and the fortunes of war. Now the Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society,—having, as they do, the greatest of weapons at their disposal, the absolute control of our economy.” “How horrible!” “Perhaps how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!”

| Location: 209 | Date: March 5, 2016 |