This question brings to mind images of terrible mechanoid monsters working in perfect cooperation to enslave human beings. There are many reasons that nightmare is so easy for us to buy into, but most of them trace back to the Frankenstein Syndrome: we innately fear technology we do not understand, and so we invent nightmares to give that dread a shape and make it weigh less on our psyche.
The truth of the matter is that robots never will, and never can, "take over the world."
1) Robots are limited by their programming. Isaac Asimov was among the first to conceive of intelligent autonomous robots, and he devised three logical laws of robotics to be hardwired into every robotic thinking machine. These three laws are . . .
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These are moral laws that, if hardwired into a robot's systems, would make it incapable of taking over the world. Now, you ask, what if a robot reprogrammed itself? That is where the term "hard-wired" comes into play. It presupposes a way to build a robot brain so that certain programming cannot be changed. And if a robot were to design another robot (in other words, procreate), the First Law would obligate it to hard-wire these same directives into its progeny.
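The idea of "hard-wired" laws can be sketched in code as an immutable rule layer that vets every action before it runs. This is purely illustrative: the names (`Action`, `LAWS`, `permitted`) and the simplified yes/no predicates are my own assumptions, not any real robotics system, and they flatten away the laws' conflict clauses.

```python
# Illustrative sketch of "hard-wired" laws: an immutable rule layer that
# every proposed action must pass. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    description: str
    harms_human: bool = False
    disobeys_order: bool = False
    self_destructive: bool = False

# The laws live in an immutable tuple, checked in priority order;
# nothing downstream can rebind, reorder, or remove them.
LAWS = (
    ("First Law",  lambda a: not a.harms_human),
    ("Second Law", lambda a: not a.disobeys_order),
    ("Third Law",  lambda a: not a.self_destructive),
)

def permitted(action: Action) -> bool:
    """An action is allowed only if it violates none of the laws."""
    return all(check(action) for _, check in LAWS)

print(permitted(Action("fetch coffee")))                   # True
print(permitted(Action("seize power", harms_human=True)))  # False
```

The point of the sketch is structural: the rules sit outside the decision-making code, so "reprogramming yourself" would still route every action through them.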
2) Robots lack sufficient motivation. Every hysterical notion of robots taking over the world imagines nefarious motives of enslavement. Well, quite frankly, if robots could procreate and were intelligent enough to launch a coup, why in the world would they want to? Robots would not fear death; they cannot die. Robots do not need our bodies or our labor. Robots would not be bothered by our presence at all. If robots ever achieved sentience, developed autonomous, non-emotional motivations, and decided to do something, they would logically pursue what lies at the core of every sentient being: exploring the universe. They would leave us behind as a waste of effort and go out into a universe that is certainly big enough for them. What other goal could they wish for?
3) Robots advanced enough to develop true sentience would also derive from that intelligence true morality. Morality and ethics are grounded in logic, and although intelligence is not strictly required for ethical behavior, neither is emotion. A robot, thinking logically, would develop a rigorous code of ethics precisely because it is a purely rational creature, unblinded by emotion and self-interest. It is therefore reasonable to assume that an intelligent robot species would progress according to Maslow's hierarchy: it would seek to fulfill its "deficiency" needs first, in order (physiological, safety, belonging, and esteem), and only then pursue its growth need of self-actualization. Being a purely logical and rational species, it would arrive at a purely rational and logical ethical system.
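The Maslow progression described above is strictly sequential: each deficiency need must be met before the next, and the growth need comes last. A minimal sketch, where the need names follow Maslow but the function itself is hypothetical:

```python
# Illustrative sketch of Maslow's ordered hierarchy: deficiency needs are
# addressed in order, and only then is the growth need pursued.
DEFICIENCY_NEEDS = ["physiological", "safety", "belonging", "esteem"]
GROWTH_NEED = "self-actualization"

def next_need(satisfied: set) -> str:
    """Return the lowest unmet deficiency need, else the growth need."""
    for need in DEFICIENCY_NEEDS:
        if need not in satisfied:
            return need
    return GROWTH_NEED

print(next_need(set()))                        # physiological
print(next_need({"physiological", "safety"}))  # belonging
print(next_need(set(DEFICIENCY_NEEDS)))        # self-actualization
```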
So we do not need to fear a worldwide, apocalyptic takeover by robots. What we should fear instead is losing ourselves, and our ethical place in the cosmos, as we lose sight of what is truly important to us as individuals and as a people.