As long as we instill Asimov's Three Laws early, a self-aware computer would be far more benevolent and helpful to mankind than the nefarious, greed-driven uses humans put AI to today.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, if we include psychological damage within the definition of "injure" or "come to harm" - think "thought crimes" - then robots would take that to extremes, and we'd all end up in camps with polite but implacable robot guards...


If folks haven't watched "The Social Dilemma" on evil Netflix, they should. It should be de rigueur for anyone reaching 21. I'd say kids getting their first cell phone should watch it, but dire warnings of any kind roll off adolescents' backs like insults off a robot.

Gunnery, gunnery, gunnery.
Hit the target, all else is twaddle!