As I said in a previous post, I'm not sure we'll ever reach a point where we can tell for sure whether machines have become sentient.
Leaving aside the semantics and problems associated with the word "sentience", one obstacle is that we don't understand our own brain mechanics well enough. And that's just the physical part, not the consciousness that arises out of it. I don't think it's unreasonable to expect that we never will understand it fully (but that's a different discussion).
Similarly, a mechanical brain would have to be something beyond our understanding - even if we created, or instigated, it. Machine learning models already behave in ways that reach beyond human comprehension (unlike machine code, which we do understand even if we can't easily read it). As I've understood it, anyway.
In addition, whatever consciousness arises out of that mess would be far removed from our understanding of reality, since we are biological and can't be copied and pasted onto a hard drive just yet. Our perception is highly coloured by our limitations in neurological ability, sensory input and psychology. And probably a lot more.
But another interesting angle is to look at this from the perspective of systems thinking. Simple systems can result in complex behaviour. I have seen flies with "personalities": they react differently to my annoyed attempts to wave them off, and even have different favourite spots to settle on. I recently saw a programme on the BBC called "The Secrets of Size" in which a researcher talks about individual human heart cells having personalities. My point isn't about whether they are sentient, or about anthropomorphism, but about simply scaling up those simple systems to what I call "me". How am I anything but an organised heap of systems, thinking that I'm "sentient" because that's what these systems want me to think?
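The claim that simple systems can produce complex behaviour can be made concrete with a toy example of my own choosing (not from the post above): the logistic map. The entire "system" is one line of arithmetic, yet its long-run behaviour is chaotic, and two starting points differing by a billionth end up nowhere near each other.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# A single-line update rule; at r = 4.0 the dynamics are chaotic.

def trajectory(x0, r=4.0, steps=50):
    """Iterate the map from x0 and return the full sequence."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two nearly identical starting conditions...
a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)

# ...diverge completely: the tiny initial difference is amplified
# at every step until the trajectories are unrelated.
for step in (0, 10, 25, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
```

Nothing here is "sentient", of course, but it illustrates the point: complexity of behaviour tells you very little about the complexity of the underlying rules.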