Many of us have seen articles accompanied by an evil-looking, weapon-wielding robot, suggesting we should worry about robots rising up and killing us because they have become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers do not worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.
In fact, AI is unlikely to exhibit human emotions like love or hate, and there is no reason to expect it to become intentionally benevolent or malevolent; a robot cannot do something it was not programmed to do. If AI ever becomes an existential threat, it will be a man-made result. Therefore, we should fear the humans programming the AI, not the AI itself.
In conclusion, AI is friendly: it is not only safe but also brings many benefits to people.