A neuroscientist who argues that artificial intelligence could one day suffer from depression, much as humans do, believes that machines may also need a mechanism to help them fight it, just as humans have.
News Summary
Artificial intelligence can now beat people at Go, write scripts and news articles, and drive cars.
In fact, most of the artificial intelligence systems in question are neural networks modeled loosely on the human brain. Each of these networks is built to perform one specific task: the AI that defeats a human at Go, for example, does not know how to drive a car. The hope, however, is that these 'narrow' AI systems will one day evolve into artificial general intelligence, a computer that can perform a wide range of tasks as well as or better than a human.
According to the neuroscientist Zachary Mainen, this possibility raises an interesting question: if a computer can one day think like a human being, could it also suffer from the kinds of mental health problems that humans experience?
Depression, which affects more than 300 million people worldwide, is closely linked to serotonin. In a presentation last month, Mainen argued that serotonin plays an important role in the ability to adapt to unfamiliar situations. "People think serotonin is about happiness, but serotonin neurons actually send messages when we are surprised," Mainen said after the presentation. From this perspective, depression can be described as the brain's failure to adapt to change.
Mainen says artificial intelligence may need a control mechanism similar to serotonin in the human brain. Such a mechanism could help machines adapt easily to new situations. At the same time, however, it could also reinforce certain patterns of thought and push machines into something like depression.
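One rough way to picture such a mechanism is a learning rate that is boosted whenever the system is surprised, so it adapts quickly when the world changes. The Python sketch below is only a minimal illustration of that idea; the function name, parameters, and numbers are illustrative assumptions, not anything Mainen has specified.

```python
import random

def surprise_modulated_learning(observations, base_lr=0.05, gamma=0.5):
    """Toy sketch: a value estimate whose learning rate rises with recent surprise,
    loosely analogous to a serotonin-like 'adapt faster when surprised' signal.
    All names and values here are assumptions made for illustration."""
    estimate = 0.0          # current belief about the incoming signal
    adaptability = base_lr  # learning rate, temporarily boosted by surprise
    for obs in observations:
        surprise = abs(obs - estimate)               # prediction error as "surprise"
        adaptability = (1 - gamma) * adaptability + gamma * min(surprise, 1.0)
        estimate += adaptability * (obs - estimate)  # adapt faster when surprised
    return estimate

# A stable signal followed by an abrupt shift: the estimate tracks the change
# because surprise temporarily raises the learning rate.
data = [0.2 + random.gauss(0, 0.02) for _ in range(50)] + \
       [0.9 + random.gauss(0, 0.02) for _ in range(50)]
print(round(surprise_modulated_learning(data), 2))
```

In this toy picture, a signal that is too weak would leave the system stuck in old patterns even as the world changes, which is the loose analogy to depression the article describes.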
In the statement he made to Science, Mainen said the following about this:
In computational psychiatry, we work with artificial intelligence algorithms such as reinforcement learning, assuming they can teach us something about patients who are depressed or who experience delusions. If we think that way, it seems possible that artificial intelligence could also develop problems like the ones those patients live with.