I see no reason to make a distinction between an us and a them. Make it part of you, and make yourself part of it, and you don't have these problems. The problem comes from drawing the distinction in the first place, from insisting that you are separate from it.
Let me give you an example. Water is a part of you, and you are a part of water. If you think you're separate from water and try to stay dry, you'll fail, because you're made up mostly of water.
Once you understand that you are made of water, you have nothing to fear from water.
We don't care whether some ladybug understands what we do, because its brain simply can't comprehend it. There is a real possibility that we become that bug. Already we can't comprehend the processing of big data until it has been chewed up for us.
There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it, you have nothing to fear.
So what you are afraid of is not AI. You're not afraid of intelligence, or of artificial intelligence. You are afraid of machines which have a will of their own. The point? Don't design machines with a will to do anything you yourself don't want done. Design machines to be an extension of you, of your will.
Think of a limb. If you have a robot arm, and this arm is intelligent, do you fear that someday the arm will choke you to death? Why should you?
On the other hand, if you design it to be more than an arm, to be your boss, to be in control of you, then of course you have something to fear, because you're designing it to act as a replacement rather than a supplement. AI can take the form of:
- Intelligence amplification.
- Replacement for humans.
I'm in favor of the first option. People who fear the second are really afraid of change itself. If you fear the second option, don't choose an AI which rules over you, and stop supporting companies which rule over you. Focus on AI which improves and augments your abilities rather than replacing you. Merge with the technology rather than trying to compete with it; humans have always relied on technology to live, whether fire, weapons, or clothing.