Derived from God's commandment "Thou shalt not kill", the laws of robotics were formulated by the writer Isaac Asimov across his many novels.

The basic idea is that "a robot should never kill a human being of its own free will".

The laws are as follows:

• Zeroth Law:
A robot may not harm humanity, nor, through its inaction, allow humanity to be exposed to danger;

• First Law:
A robot may not harm a human being, nor, by remaining passive, allow a human being to be exposed to danger, except where this would conflict with the Zeroth Law. This means that if a robot witnesses an act of euthanasia, it must intervene to prevent it.

• Second Law:
A robot must obey the orders given to it by a human being, unless such orders conflict with the First Law or the Zeroth Law; this means that any human, even an insane one such as a mad doctor, can give orders to a robot.

• Third Law:
A robot must protect its own existence as long as this protection does not conflict with the First Law, the Second Law, or the Zeroth Law. This means that a robot must destroy itself rather than harm a human being, even a despicable one; the precedence among the four laws is sketched in the code below.
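Taken together, the four laws form a strict priority order: the Zeroth Law outranks the First, the First outranks the Second, and the Second outranks the Third, which is why a robot must sacrifice itself rather than harm a human. Below is a minimal, purely illustrative sketch of how such a precedence rule could be expressed in code; the `Law` enum, the `severity` ranking, and the `choose` function are invented for this example and do not come from Asimov's stories.

```python
from enum import IntEnum
from typing import Dict, Iterable, Set

class Law(IntEnum):
    """Asimov's laws, ordered by priority (lower value = higher priority)."""
    ZEROTH = 0  # do not harm humanity
    FIRST = 1   # do not harm a human being
    SECOND = 2  # obey orders given by humans
    THIRD = 3   # protect your own existence

def severity(violations: Iterable[Law]) -> int:
    """Rank an action by the most important law it breaks;
    breaking no law at all ranks best."""
    return min(violations, default=len(Law))

def choose(actions: Dict[str, Set[Law]]) -> str:
    """Pick the action whose worst violation is the least severe:
    breaking the Third Law is preferable to breaking the Second,
    the Second to the First, and the First to the Zeroth."""
    return max(actions, key=lambda name: severity(actions[name]))

# Example: a human orders the robot to harm another human.
options = {
    "obey the order":   {Law.FIRST},   # would harm a human being
    "refuse the order": {Law.SECOND},  # would disobey a human
}
print(choose(options))  # "refuse the order" -- the First Law outranks the Second
```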

What is your opinion? Do you think this is just or unjust for the robot?
Do you think AI is a good idea if humans are stronger than robots or AI?