Good question. As of today, the answer depends on the "AI creator." AI still depends entirely on the architects, engineers, and others who design and develop it, and on the algorithms and information it is exposed to. Granted, we have advanced this technology; however, it is still based on logical design and principles, nothing more.
BTW, here is an example to consider in this argument. Suppose a bank buys a fully functional, autonomous AI, and audits (such as those required by SOX) uncover that this AI solution embezzled money (as in a report from two weeks ago describing an AI solution that stole money out of customer accounts). Who is at fault? Who gets prosecuted? Who gets sued? The bank, the AI technology company, or both? We must be ready to address these types of situations soon; legislation and the courts are going to face some very interesting times in the near future, and consumers will probably take the brunt of the chaos.
A recent experiment in which an artificially intelligent chatbot became virulently racist highlights the challenges we could face if machines ever become superintelligent. As difficult as developing artificial intelligence might be, teaching our creations to behave ethically is likely to be even more daunting.