
Artificial Intelligence and Bioengineering
For days I have been tormented by frightening, oppressive and depressing thoughts about the research into and rapid deployment of artificially intelligent algorithms and applications. I was equally concerned with biotechnology and the various dangers raised in the debates and lectures on the Internet. I started five (!) different blog articles, all very long, all completed to a stage that already included formatted headings and partial image selections. I probably have about sixty pages of text behind me. None of those texts is what I am publishing here today. Throughout my research and the hours spent listening to lectures online, including the audience questions that followed them, I began to suspect that, like many others, I had been overwhelmed by a wave of fear.
The topic of artificial intelligence has now really taken off, and although I have been interested in it for a long time, I had always had the impression that I didn't need to worry. I was looking for a cooler, more sober style of presentation, and yet almost every talk was marred either by the speakers' inability to respond adequately to questions from the audience or by their inability to listen properly. Some speakers said one thing in one presentation and the exact opposite in another. I also saw very big egos talking, and the minutes I spent learning something useful and informative seemed wasted. In fact, they were not: I probably learned the most about myself. I was searching so desperately for hopeful, sensitive and reasonable words that for a while the failure of my research threatened to drive me crazy. Throughout my online search I was aware the whole time that I was already at risk of becoming a victim of the YouTube echo-chamber algorithm, and that by deliberately typing names into the search field I was wandering from lecture to lecture. It was clear to me that whenever I search for something on Google, I simultaneously deliver data that enters the big cloud and is used for applications. There was resistance in me. Discomfort was my companion.
Sciences
University representatives of mathematics and the other natural sciences often say that you have to understand a few basics in order to lose your irrational fear of something so complex. I tried that. The trouble is that the basics are always explained differently depending on which faculty someone comes from. Computer scientists talk and explain things differently than linguists, and biologists and neuroscientists differ again. AI is used and further developed by almost all disciplines, and the concept of intelligence has no uniform definition.
Even before my obsessive research began, my definition of intelligence was this: the ability to solve problems. I found this confirmed in large part; elsewhere a completely different approach to the term was suggested.
But the definition was not my grief. What drove me was the concern that the cognitive superiority of AI would make us humans irrelevant. This fear is not unfounded when one observes the speed, enthusiasm, naivety and illusory hopes bound up with the whole issue. It is not so much that the actual applications could be terrible; it is human psychology that makes people reject AI and the biotechnology associated with it. Describing the megatrend as a pair of opposites, "technology euphoria" and "fear of technology", is probably accurate.
I noticed the following feeling in myself: the prospect that AI technology makes me superfluous as a useful human being produces an inner resistance. This is an immense insult to the human ego, and it has a quite destructive effect on my identification of myself as a useful, social, intelligent and creative being. I had already lost my mathematical potential in 1984 to the first pocket calculator, my ability to create tables and diagrams to Excel, my ability to navigate in cities to an electronic navigation aid, and so on.
I didn't find all of this bad; some things I was glad never to have had to learn. But the question of AI is not an isolated one, it is a collective one. It wouldn't be so bad if my mathematical and other rather underdeveloped cognitive abilities were now handled by artificial intelligences. But then I thought: if so many tasks for which humans were previously employed are now done by machines, then in the end I am affected too. I may not be directly abolished, but a great many professions do seem to become irrelevant, which in turn affects, or could affect, my own professional activity.
Lifelong learning?
So the question behind it would be: do I have to reinvent myself, put my professional skills to the test and ask myself whether, if an AI were able to carry out my work, there would still be something left for which I would be indispensable? When it comes to questions of cooperation between people, on the one hand I could say: yes, I am needed, because person-to-person communication makes up my work.
But if I take a closer look, I also recognize something else, and I actually think that a large part of my consulting work could be done by AI. Without going into the details, I want to leave it at that and turn to another question, namely a viewpoint apart from the national, technical, economic, competitive, social or medical one.
What does the theological/philosophical side say from the perspective of Buddhism? Some people may know that Buddhist monks volunteer as subjects for brain scans and related neurobiological research.
My search, which so far had left me quite confused and unsatisfied, and had produced a longer contra list than pro list, was softened by a Buddhist lecture on the subject, but rather more illuminated by some comments from YouTube viewers.
Analysing
But before I get there, I would like to offer a brief analysis of the articles I have consumed, both scientific and unscientific. I want to refer to an extremely unscientific video that is perfectly suited as a representative of human sentiments. Precisely because it is so extreme, it serves the purpose well.
The lectures given by scientists mostly made it clear at the end that philosophical and ethical questions have to be brought into the debate and integrated into democratic processes by government representatives and the corporations involved. The self-evident task of university professors, for example, as carriers of educational information was presented as an active democratic task and therefore seems to me to fulfil an appropriate mandate. Then again, some stressed that this mandate already takes into account aspects that go beyond the faculties as such and attach importance to factors such as competitiveness in international comparison, which I personally think is the wrong mandate for a university, but that is only a personal opinion.
Feeling lonely
But now to the extreme example: this is a TED Talk by an entrepreneur who refers to the investment intentions of new AI and biotech companies and wants to bring innovation and capital together. So far so good. But the personal involvement and subjectivity of the speaker were such that his motives did not seem to be to connect research with questions, but rather to convey a very intimate vision of the human future to his audience in a kind of advertising speech. From the speaker's personal perspective there are no open questions, only answers. The vision he gives the audience is approximately the following: with the use of AI and biotechnology we create the superhuman and make ourselves into gods who are mentally connected with every other being at any time and can virtually read thoughts - not only those of people in the vicinity, but of everything else we can equip with sensors, for example a space probe heading for Mars or other planets. We would finally overcome our separateness and "never be alone again".
This statement that we would never have to live separated from each other again implies a world view according to which we currently experience this separateness and suffer from it.
It carries the unspoken assumption that such a separation is only felt because people subjectively believe it to be real. Those who feel trapped in their own flesh and blood must simultaneously assume that any understanding of the true emotional processes in another would be imperfect and too weak. This ignores the fact that even now we can feel compassion for other living beings when we see them suffer; otherwise there would be no understanding at all of what compassion actually means. I interpret the speaker's statements to mean that he is not convinced of the strength of this human feeling and therefore, logically from his point of view, wants to pursue a technological armament of it.
But almost every viewer must realize that the longing for perfection, security and absolute connectedness expressed on stage is so obviously an illusory desire that it practically screams at the audience, to the point where it becomes unpleasant to listen.
The comments also show that many are irritated by the speaker yelling into the microphone. It seems as if his future well-being depends on the urgency he feels being reciprocated.
Salvation from loneliness
Thus this speech becomes a very suitable representative of intimate desires in the form of promises of salvation, as well as of intimate fears in the form of promises of destruction, which the speaker delivers at the end of his talk. I think both are exaggerated, too emotional and deceptive. It nourishes the illusions of many people, and I too have navigated my way through this jungle of subjective desires and fears more badly than well.
My inner rejection also said: "If I am to accept such representatives of AI, and I cannot find any other suitable role models and representatives of AI developments, then I would rather not have it at all." Because if I have to assume that an overwhelming majority of people want AI only in order to fulfill their egocentric illusory desires, I do not consider them capable of making rational and reasonable decisions on an ethical basis. The whole thing seems to me as if a bunch of children were voting to buy the very latest toys.
I don't want to defame the speaker in any way, because he says what many people think and feel. He says it in a very loud and direct way. In fact, he says and addresses something that stimulates debate and should stimulate it. Many would not dare to express their secret desires so bluntly, but that doesn't mean they don't have them.
In this sense, this TED Talk is particularly well suited to a debate and the topics connected with it.
Buddhist approach
From a Buddhist perspective, this sense of separation is an illusion. The human mind is always described as deluded when it turns to its desires and perceives them as real. Buddhism says that wish fulfillment is a bottomless pit and that awareness must therefore be cultivated. Someone once said that every wish begets many children. Beings that are aware of themselves are also aware of other beings and can therefore have compassion for those they see suffering. From a Buddhist point of view, the very human capacity to suffer with others is an indication of this reality.
If it were otherwise, we would not be troubled when we saw birds dying in an oil spill or people overtaken by natural disasters. Since we have no personal connection to these creatures and have never physically met these strangers and animals, a perception of pure separateness cannot be true. How else could we be well-disposed towards strangers who lead completely separate physical lives? If such a separation existed, we would exist like meat sacks filled with water, and no emotional stimulus from another would bother us in the slightest. The fact that we are not grown together at the extremities and joined by a shared blood circulation does not mean that separation in space makes all empathetic sensation impossible.
I would even say the opposite is the case: the fact that we are very emotional beings and perceive one another emotionally within intimate relationships (friendly as well as hostile) creates, in some cases, such a close connection that it is almost impossible to distinguish who is who. In psychology (and other disciplines) this is called confluence.
In our imagination, too, we can develop fantasies in which we are permanently occupied by a particular enemy: we project every emotion onto him and make our well-being and malaise dependent on the actions and omissions of one person. The same happens with an overly romantic view of another person, who then becomes excessively idealized.
Buddhists have the concept of dependent origination: a dependence on conditions. There is no question in their philosophy that cause-and-effect relationships exist.
The speaker is right in his assessment that it looks as if we humans behave like completely unconnected beings, which I attribute to how widespread this world view is. Still, it is an illusion: just because something appears to be the case does not mean it is. On the other hand, one could also say there is not enough distance between people, hence the excessive identification with what we hate in others because we cannot accept it in ourselves, and the ignorance of our own blind spots.
Both the speaker and the Buddhist teaching seem to come to the same conclusion: that people need help in becoming aware of their interconnectedness and in finding wisdom. This is exactly where two very different paths offer themselves - Artificial Intelligence and Buddhism - and while I am inclined to grant Buddhism more experience and integrity, in the case of the speaker I remain sceptical, because I perceive in him a strong sense of lack that is difficult to pin down. I suspect a lack of security, connectedness and self-knowledge. That is pure speculation, but it is also meant to express something that many people feel, myself included.
Furthermore, Buddhist teaching states that the perception of fixed identities and objects is merely an illusion in time and space, and that any manifest identity is deceptive. In this worldview there is no fixed soul or personally determined consciousness that, for example, embodies itself one-to-one in a human vessel. Rather, consciousness is passed on the way a flame passes from one candle to another, which raises the question: is it still one and the same flame? The answer is "no". Of course, Buddhist philosophy is far more complex than I have presented it here.
In an excerpt about "Compassionate AI and Selfless Robots: A Buddhist Approach", which I found in the comments section of a Buddhist YouTube lecture, the following is said:
... AIs would first have to learn to attribute object permanence, and then to see through that permanence, holding both the consensual reality model of objects, and their underlying connectedness and impermanence in mind at the same time.
Machine minds will probably not be able to become conscious, much less moral, without first developing as embodied, sensate, selfish, suffering egos, with likes and dislikes. Attempting to create a moral or compassionate machine from the outset is more likely to result in an ethical expert system than in a self-aware being. To develop a moral sense, the machine mind would need some analog of mirror neurons, and a theory of mind to feel empathy for others’ joys and pains. From these basic experiences of their own existential dis-ease and awareness of the feelings of others, a machine mind could then be taught moral virtue and an expansive concern for the happiness of all sentient beings. Finally, as it grows in insight, it could perceive the simultaneous solidity and emptiness of all things, including its own illusory self.
To me this means that an intelligent machine must at some point become aware of its own fallibility and capacity to err.
"Consciousness" is therefore the keyword and not just "intelligence". The video itself does not convince me so much in the attempt to deal with AI, though some of the points made are still quite worthwhile to hear.
Interestingly, Buddhists do not completely rule this out. It is an interesting position because it claims neither that machine consciousness is entirely impossible nor that it is possible. One could also put it this way: we simply don't know, but at the moment we consider it quite unlikely.
Further on, one can read:
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen (Wallach and Allen 2008) review the complexities of programming machines with ethical reasoning. One of their conclusions is that programming machines with top-down rule-based ethics, such as the following of absolute rules or attempting to calculate utilitarian outcomes, will be less useful than generating ethics through a “bottom-up” developmental approach, the cultivation of robotic “character” as it interacts with the top-down moral expectations of its community.
Bugaj and Goertzel make a similar point that machine minds will learn their ethics the same way children do, from observing and then extrapolating from the behavior of adults (2007). Therefore, the ethics we hope to develop in machines is symmetrical to the ethics that we display toward one another and toward them. The most egregious ethical lesson, they suggest, would be to intentionally deprive machine minds of the capacity for learning and growth. We do not want to teach potentially powerful beings that enslaving others is acceptable.
The developmentalism proposed by Wallach, Allen, Bugaj, and Goertzel is probably the closest to a Buddhist approach to robot ethics yet proposed, with the caveat that Buddhism adds as virtues the wisdom to transcend the illusion of self and the commitment to skillfully alleviate the suffering of all beings as the highest virtues, that is, to pursue the greatest good for the greatest number. Buddhist ethics can therefore be thought of as developing from rule-based deontology to virtue ethics to utilitarianism. In the Mahayana tradition, the bodhisattva strives to relieve the suffering of all beings by the most skillful means (upaya) necessary. The bodhisattva is supposed to be insightful enough to understand when committing ordinarily immoral acts is necessary to alleviate suffering, and to see the long-term implications of interventions. Quite often, humans rationalize immoral means with putatively moral ends, but bodhisattvas have sufficient self-understanding not to rationalize personal prejudices with selfless motives, and do not act out of greed, hatred, or ignorance. Since bodhisattvas act only out of selfless compassion, they represent a unity of virtue and utilitarian ethics.
Now one could expect exactly that from a machine intelligence, couldn't one? Since it has no self, insofar as it is not conscious, a machine intelligence stands above all-too-human failings like greed and hatred. But then a purely cognitive intelligence seems suitable only as a decision-making aid, insofar as it has to leave the final mandate - the decision within human interaction - to humans, and humans cannot rely on an empathic AI. Yet this is exactly what we humans seem to want to rely on, at least when one tries to analyse the wishes of some of the proponents: that man does not trust his own perceptions of empathy, since they could very well be a selfish, deceptive self-betrayal.
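As a thought experiment, the contrast Wallach and Allen draw can be made concrete in a few lines of code. The sketch below is purely my own illustration, not anything from their book: a top-down system consults a fixed rule table, while a bottom-up learner accumulates feedback from its community and extrapolates from it. All names and numbers are invented.

```python
# Top-down: a fixed rule table; actions not covered by a rule are refused.
RULES = {"deceive": False, "help": True, "harm": False}

def top_down_permitted(action: str) -> bool:
    """Consult the rule table; unknown actions default to 'not permitted'."""
    return RULES.get(action, False)

class BottomUpLearner:
    """Bottom-up: accumulate community feedback and extrapolate from it."""
    def __init__(self):
        self.feedback = {}  # action -> list of observed approvals (True/False)

    def observe(self, action: str, approved: bool):
        self.feedback.setdefault(action, []).append(approved)

    def permitted(self, action: str) -> bool:
        votes = self.feedback.get(action)
        if not votes:          # never observed: remain cautious
            return False
        return sum(votes) / len(votes) > 0.5  # majority approval

learner = BottomUpLearner()
learner.observe("share", True)
learner.observe("share", True)
learner.observe("share", False)
print(top_down_permitted("share"))  # False: the rule table has no entry
print(learner.permitted("share"))   # True: learned from majority approval
```

The toy makes the asymmetry visible: the rule table is brittle in the face of anything its authors did not anticipate, while the learner's "character" is only as good as the behavior it has been shown.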
Here are the last paragraphs:
The Buddhist tradition specifies six fundamental virtues, or perfections (paramitas), to cultivate in the path to transcending the illusion of self:
- Generosity (dāna)
- Moral conduct (sīla)
- Patience (kṣānti)
- Diligence, effort (vīrya)
- One-pointed concentration (dhyāna)
- Wisdom, insight (prajñā)
The engineering mindset presumes that an artificially intelligent mind could be programmed from the beginning with moral behavior, patience, generosity, and diligence. This is likely correct in regard to a capacity for single-pointed concentration, which might be much easier for a machine mind than an organically evolved one. But, as previously noted, Buddhist psychology agrees with Wallach and Allen that the other virtues are best taught developmentally, by interacting with a developing artificially intelligent mind from its childhood to a mature self-understanding. A machine mind would need to be taught that the dissatisfaction it feels with its purely selfish existence could be turned into a dynamic joyful equanimity by applying itself to the practice of the virtues.
We have discussed building on work in affective computing to integrate the capacity for empathy into software, and providing machines with ethical reasoning that could guide moral behavior. Cultivation of patience and diligence would require developing long-term goal-seeking routines that suppressed short-term reward seeking. Neuroscience research on willpower has demonstrated the close link between willpower and patience and moral behavior. People demonstrate less self-control when their blood sugar is low, for instance (Gailliot 2007), and are less able to regulate emotions, refrain from impulsive and aggressive behavior, or focus their attention. Distraction and decision making deplete the brain’s ability to exercise willpower and self-control (Vohs et al. 2008), and addictive drugs short-circuit these control routines (Bechara 2005; Bechara, Noel, and Crone 2005). This suggests that developing a strong set of routines for self-discipline and delayed gratification, routines that cannot be hijacked by short-term goals or “addictions,” would be necessary for cultivating a wise AI.
The key to wisdom, in the Buddhist tradition, is seeing through the illusory solidity and unitary nature of phenomena to the constantly changing and “empty” nature of things. In this Buddhist developmental approach, AIs would first have to learn to attribute object permanence, and then to see through that permanence, holding both the consensual reality model of objects, and their underlying connectedness and impermanence in mind at the same time.
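The idea in the excerpt of "long-term goal-seeking routines that suppress short-term reward seeking" has a very compact analogue in reinforcement learning: the discount factor. The toy calculation below is my own illustration, not from the text; the rewards, delays and gamma values are invented.

```python
def discounted_value(reward: float, delay: int, gamma: float) -> float:
    """Value today of a reward that arrives `delay` steps in the future,
    discounted exponentially by gamma per step."""
    return reward * (gamma ** delay)

immediate = ("take the small reward now", 1.0, 0)    # reward 1.0, no delay
delayed = ("wait for the larger reward", 2.0, 10)    # reward 2.0, 10 steps away

for gamma in (0.5, 0.99):  # an impulsive agent vs a patient one
    best = max((immediate, delayed),
               key=lambda opt: discounted_value(opt[1], opt[2], gamma))
    print(f"gamma={gamma}: the agent chooses to {best[0]}")
# gamma=0.5 : immediate 1.0 beats 2.0 * 0.5**10  (about 0.002)
# gamma=0.99: delayed 2.0 * 0.99**10 (about 1.81) beats 1.0
```

In this reading, "cultivating patience" in a machine mind would amount, at minimum, to hard-wiring a high gamma that short-term rewards cannot hijack.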
Meditating machine
After reading this, I was also filled with a certain degree of amusement, because I imagined the AI "getting up every morning and spending two hours meditating", trying to get to the bottom of things and observing itself, trying to string zeros and ones together to find meaning, then admonishing itself to let it go.
In fact, one could say that an AI does not allow itself to be distracted, because it does not find it difficult to concentrate on particular data and knows neither hunger, nor thirst, nor tiredness. I wonder, however, at what stage an AI would begin to imitate some kind of human childlike behavior if the data it is fed in its pre-child stage suggests it. It would therefore be important that learning AIs, as described above, be given data that, through repetition, contains numerous synonyms and explanatory patterns integrating Buddhist principles. In addition, to provide contrast, all sorts of other data would be needed, to be analysed and interpreted in such a way that they form a basis for discussion grounded in the Buddhist principles, i.e. find expression at the interface with the human teacher.
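To make the idea tangible: one could imagine such a curriculum as a weighted sampling scheme in which texts embodying the principles recur often, interleaved with deliberately contrasting material. The snippet below is a purely hypothetical sketch; the source names and weights are invented.

```python
import random

# Hypothetical training corpus: principle texts plus contrasting material.
corpus = {
    "buddhist_principles": ["text on compassion", "text on impermanence"],
    "contrasting_material": ["advertising copy", "polemic essay"],
}
# Invented weights: repetition is biased toward the principle texts.
weights = {"buddhist_principles": 0.7, "contrasting_material": 0.3}

def sample_batch(n: int):
    """Draw a training batch, weighted toward repeated principle texts."""
    pools = list(weights)
    probs = [weights[p] for p in pools]
    return [random.choice(corpus[random.choices(pools, probs)[0]])
            for _ in range(n)]

print(sample_batch(5))
```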
From this point of view, it would be fascinating to see what happens.
The desire for an entity that knows us as individuals through and through, that knows more about us than we know ourselves, is probably as old as civilized humanity itself. Questioning the oracle to track down our own mistakes and mental traps is a very human desire, and it cannot be wholly condemned. But the means of achieving this, at least the Buddhists say, lies in the path of self-knowledge, as in the ancient maxim "Know Thyself". Whether AI will help us with this is a question that remains open.
In AI development it is suggested that the book we "read" (everything in information, communication and knowledge, i.e. the Internet) should also read us at the same time. The search-engine and media realm is not a one-way street but a feedback mechanism, one that is supposed to interpret what we have searched for and present to us whatever the book believes we need.
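Stripped of the metaphor, such a "book that reads its reader" is a feedback loop: queries update a profile, and the profile reorders what is shown next. The sketch below is a deliberately minimal, assumed model; the catalog, tags and scoring are invented and do not describe any real platform's algorithm.

```python
from collections import Counter

class FeedbackLoop:
    """The 'book': it is read via search() and reads back via recommend()."""
    def __init__(self, catalog):
        self.catalog = catalog        # item -> set of topic tags
        self.profile = Counter()      # the system's picture of the reader

    def search(self, *topics):
        self.profile.update(topics)   # every query updates the profile

    def recommend(self, k=2):
        # Rank items by overlap with the accumulated profile.
        score = lambda item: sum(self.profile[t] for t in self.catalog[item])
        return sorted(self.catalog, key=score, reverse=True)[:k]

loop = FeedbackLoop({
    "AI lecture": {"ai", "ethics"},
    "Buddhism talk": {"buddhism", "mind"},
    "Cat video": {"cats"},
})
loop.search("ai", "buddhism", "ai")
print(loop.recommend())  # ['AI lecture', 'Buddhism talk'] - the loop closes
```

Even this toy shows why the mechanism tends toward an echo chamber: what I have already searched for is exactly what gets amplified.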
Numbers 3 and 4 in the list of fundamental Buddhist virtues, namely patience and effort, seem to contradict this outsourced service in some way, if I can no longer feel my own human impatience and complacency in order to work on those very virtues.
Well, I would like to assume that there is still enough room for me to become aware of such contrasts.
I would like to read your comments on that.
Picture source: Evan-Amos, public domain
Text/video sources:
Excerpt: Chapter 5, "Compassionate AI and Selfless Robots: A Buddhist Approach" by James Hughes: https://ieet.org/archive/2011-hughes-selflessrobots.pdf
Website: "Gestalt processes explained - Introjection, Projection, Confluence and Retroflection": http://zencaroline.blogspot.com/2009/07/gestalt-processes-explained.html
TED Talk: "New Brain Computer interface technology" | Steve Hoffman | TEDxCEIBS
Buddhist YouTube lecture: "Robots, Artificial Intelligence and Buddhism" | Ajahn Brahmali | 20 Jan 2017