It's true that hard-coding morality is likely the wrong approach to the problem, but we are still going to have to find a way to teach our AIs how to tell right from wrong, and why something is right or wrong in the first place.
It's a difficult issue to address with programming, but I believe teaching a machine to understand emotions is the first step toward understanding morality. Emotional and social consequences help drive the rules of social morality. If a machine were able to feel hurtful emotions, then perhaps it could find value in morality and ethics.
RE: Keeping systems accountable, machine ethics, value alignment or misalignment