The issue, as I stated, is that an AI has to understand each of our individual priorities as well as societal interests. That won't be easy, because you cannot hard-code morality as a fixed set of rules. Hard-coding is the wrong way to think about morals or value alignment: preferences evolve, values change, and an AI has to stay in alignment continuously. In essence, the AI has to know and understand us better than we know ourselves, which is possible, but it is not something you can code in; it has to be trained.
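To make the hard-coding point concrete, here is a rough Python sketch, not anyone's actual system: it contrasts a fixed rule table with a preference estimate that keeps updating as observed human judgments drift. All the names (RuleBased, LearnedPreferences, the feedback loop) are invented for illustration only.

from collections import defaultdict

class RuleBased:
    """Morality as a fixed rule table: it cannot adapt when values change."""
    RULES = {"share_user_data": False, "notify_user": True}

    def permitted(self, action):
        # Anything outside the table is simply undefined behaviour.
        return self.RULES.get(action, False)

class LearnedPreferences:
    """Morality as a running estimate of what people currently approve of."""
    def __init__(self):
        self.approvals = defaultdict(lambda: [0, 0])  # action -> [approved, total]

    def observe(self, action, approved):
        # Continuous alignment: every new human judgment updates the estimate.
        a, t = self.approvals[action]
        self.approvals[action] = [a + int(approved), t + 1]

    def permitted(self, action, threshold=0.5):
        a, t = self.approvals[action]
        return t > 0 and a / t >= threshold

if __name__ == "__main__":
    learned = LearnedPreferences()
    # Early feedback: people approve of sharing anonymized data for research.
    for _ in range(8):
        learned.observe("share_anonymized_data", approved=True)
    print(learned.permitted("share_anonymized_data"))  # True
    # Preferences drift: later judgments turn against it, and the model follows.
    for _ in range(20):
        learned.observe("share_anonymized_data", approved=False)
    print(learned.permitted("share_anonymized_data"))  # False

The rule table gives the same answer forever; the learned version tracks whatever the humans around it actually endorse right now, which is the kind of continuous alignment I mean.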
As far as emotions are concerned, I don't think they have much to do with morality; values do. What you value as an individual may be shaped by how you feel about things, but an AI isn't going to understand the feelings themselves (nor should it). It can, however, learn what humans value.
RE: Keeping systems accountable, machine ethics, value alignment or misalignment