Let's assume for the moment that strong AI is possible, probably via some sort of self-organising and perhaps even pseudo-evolutionary process. I'm inclined to think that it is, particularly if one is not religious: for an example of a hugely complex network of nanomachines and computing devices with sapience, look in the mirror. Such a thing already exists, which means it's possible.
And that brings us to the question of whether we (humanity, or some subsection of it) should try to build AI at all - assuming we actually have the choice; commercial and other pressures may force our hand. The latest mobile phones present a reasonable facsimile of intelligence, and I've seen video of robots able to generalise from the particular, albeit in a rather crude and limited way. (The demo I saw involved a robot correctly deducing that an object of a shape it had never seen before was a chair.)
I'm inclined to believe that true AI is going to have goals and motivations of its own, ones we didn't put there; a self-preservation instinct seems all but inevitable, perhaps along with an instinct to reproduce. There is also the issue of runaway intelligence growth: unlike humans, computers and robots can simply plug in extra hardware.
And, of course, the robots will need resources of one sort or another, probably many sorts, which means they will be to some extent in competition with us and our dumb (sub-sapient) hardware for said resources.
Should we even be trying? And can we stop ourselves? After all, a nation or corporation with strong AI helping it plan has an advantage over one without...