Can moral concepts be incorporated into AI?
Artificial intelligence: can a machine act morally?
Hal 9000 was a highly neurotic machine. He reacted defiantly and even maliciously when the crew of the spaceship suspected him of malfunctioning. By chance he noticed that they were even thinking of shutting him down, whereupon he began to switch off the astronauts. To him, this was a logical consequence of the instruction programmed into him to fly to Jupiter: behavior that neither Hal's manufacturer nor the crew could have foreseen.
2001: A Space Odyssey, a classic as both book and film, was the subject of a late-summer conversation in the courtyard of Palais Strozzi in Vienna-Josefstadt between the scientist Iyad Rahwan and DER STANDARD. The 40-year-old Syrian-Australian researcher from the MIT Media Lab was here at the Complexity Science Hub (CSH) to discuss a new and fairly broad field: "Machine Behavior. A New Field of Research" was the name of the workshop initiated by David Garcia. Garcia has been at the hub for a year, with funds from the Vienna Science and Technology Fund WWTF, analyzing human behavior online. An important question came to mind: Do algorithms also exhibit a certain behavior? And can it be moral?
Rahwan says: "It depends on what you understand by morality: if you ask ten people for a definition, you get ten different answers." At the moment, according to the scientist, machines can by no means be "moral agents" in the human sense, but should they at some point acquire a consciousness like Hal's, that would in principle be technically possible. "But that lies in the distant future." That sounds almost reassuring.
For that, the many open mysteries of the human brain would first have to be solved, says Rahwan. "The human mind is also a machine - we just don't yet know exactly how it works." If this succeeds at some point, and increasingly autonomous systems also display behavior patterns, "then we are faced with the same problem we already have: some people may find what we do good, others not at all. It will be no different with machines."
Memories of Aleppo
Iyad Rahwan was born in Aleppo, Syria. His memories of home are "very positive". Even then, however, the political climate was not exactly conducive to creativity, as the scientist elegantly describes the Syrian regime. Nor did the economy really flourish, "but of course it was much easier than it is today". Rahwan has not been in the country for eight years, and under the circumstances he does not plan to return. Today he lives a completely different life: he is the smart young professor who lets a journalist in on his thoughts. A man clearly seasoned by TED Talks, he can speak about his scientific work in an entertaining way before small and large audiences alike. And of course he is never at a loss for an answer, even when it is not a "scientific" one.
Rahwan's research bridges the social and computer sciences. He asks questions about ethics and rules in a world controlled by robotic systems and coined the term "society-in-the-loop": the judgment of society as a whole must be incorporated into current and future developments in artificial intelligence (AI). What is needed, the scientist says, is a new social contract that intelligent machines can understand and therefore "implement".
Unlike Isaac Asimov's legendary laws of robotics, these will probably not be general rules but guidelines that vary with the culture or needs of the people, differing even within a single country. "What matters is only that society negotiates these rules." A frequently cited example concerns autonomous driving. "If a self-driving car ends up in a situation where the machine has to decide between two ways out, one endangering the passengers and the other the pedestrians: who controls which decision is made?" Rahwan offers no solution to this question; he only adds: "We should not leave this decision to the industry, because it is understandably guided only by financial considerations."
Not left to the market
The MIT scientist warns against regulating safety issues in road traffic with autonomous vehicles through price. This already happens to a certain extent today - larger, more expensive cars are of course safer - but in connection with autonomous systems it could create additional "inequality" that would feel like hopelessness in future transport systems. Who wants to drive a car they know is not one hundred percent safe? But given forecasts of further urban sprawl and longer commutes, is there any alternative to driving the less well-equipped car?
Hopelessness: a feeling one probably also gets on hearing about algorithms that filter news and have the potential to influence the political convictions of countless people. Rahwan says reassuringly: "This gatekeeper function has existed before," and points at his interlocutor. Until now there were simply newspapers, "where journalists like you decided what is important and what is less important." As a reader, you were aware that people were behind it, that you could give them feedback and point out mistakes. "It's the same today, only there are infinitely more ways to filter messages and personalize them down to the last detail." That frightens people, because they don't understand the mechanism behind it. "We don't know whether a machine has a goal - and if so, which one."
In view of some AI developments, many people still worry: Will we all be unemployed at some point? Rahwan says he has spoken with economists who are relaxed about it, and points to predictions of numerous new jobs. Earlier industrial revolutions also had positive results for society. Example: agriculture, where farmers found new jobs. "The only question is whether we have enough time to prepare people for artificial intelligence. Do we all have to become IT professionals, or will social skills also be needed? Or both?" He does not yet have a generally valid answer. Above all, his job is to think about what concerns people when it comes to machines. Whether at the Media Lab or in Vienna-Josefstadt. (Peter Illetschko, September 18, 2018)