Asimov's robots are artificial intelligences, not the sort of "dumb" machines we use today, so perhaps the idea is not so far-fetched. However, I feel that Asimov's intention was to distill human morality into some simple and symbolic form for his stories.
If one takes the Laws literally, one might argue that they are faulty. In some of Asimov's own stories the Laws had to be modified or re-prioritized to get the robots to do what was desired. Roger MacBride Allen's Caliban, set in Asimov's robot universe, explores the idea that the Laws enslave humanity because robots will not permit "necessary" risk. The Will Smith movie I, Robot is modeled closely on Caliban, but arrives at the "Colossus" conclusion that man must be protected from himself. James Hogan's The Two Faces of Tomorrow is refreshing in that its AI goes to war with mankind because it does not initially recognize us as an intelligence.
"Laws" of robotics assumes a "top down" approach to AI, instead of the more probable "bottom up" method described in
The Two Faces of Tomorrow. In that case, Laws could not be written until the AI was defined and understood. And as depicted in Daniel Wilson's
Robopocalypse, we may not have the time to put such laws into action.