2019/10/02


 

Is it the fault of the robot?

 

A Bloomberg news report, "Who to Sue When a Robot Loses Your Fortune", about what is reportedly the first case of a human going to court over investment losses caused by artificial intelligence (AI), is thought-provoking.

 

The case involves a Hong Kong tycoon who entrusted US$2.5 billion to a company that relies entirely on AI for fund management and investment services. The computer program combs through internet data, including real-time news and social media, to gauge investor sentiment, predicts the trend of US stocks and futures, and then decides whether to buy or sell. In simulation tests, the program consistently delivered double-digit returns.
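
The pipeline the report describes, scanning text, scoring its sentiment and trading on the aggregate, can be made concrete with a toy sketch. Everything below (the word lists, the threshold, the decide function) is hypothetical and purely illustrative; the fund's actual model is not public.

```python
# A minimal, hypothetical sketch of the decision loop described above:
# score sentiment in text snippets, then turn the aggregate into a
# buy/sell/hold signal. Real systems are vastly more sophisticated.

POSITIVE = {"rally", "growth", "beat", "optimism", "upgrade"}
NEGATIVE = {"selloff", "inflation", "miss", "fear", "downgrade"}

def sentiment_score(text: str) -> int:
    """Crude lexicon-based score: +1 per positive word, -1 per negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def decide(headlines: list[str], threshold: int = 2) -> str:
    """Aggregate sentiment across headlines into a trading signal."""
    total = sum(sentiment_score(h) for h in headlines)
    if total >= threshold:
        return "BUY"
    if total <= -threshold:
        return "SELL"
    return "HOLD"

if __name__ == "__main__":
    news = [
        "Earnings beat fuels rally and optimism",
        "Analysts issue upgrade on growth outlook",
        "Inflation fear triggers selloff",
    ]
    print(decide(news))  # BUY: positive mentions outweigh negative ones
```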

 

According to Bloomberg data, over the five years since early 2014, AI-driven funds have delivered better investment returns than hedge funds, at times more than double. However, in periods of widely fluctuating stock markets, humans and AI have each outperformed the other at different times, and humans did better in the past year.

 

Let's go back to the highly regarded AI program in the lawsuit. Its performance had been very disappointing from the moment the service was launched. On Feb. 14, 2018, the program predicted that S&P 500 index futures would rise. But data showed that US inflation had risen faster than expected, the S&P 500 dropped, and the program triggered a stop-loss order at 1.4 percent. The index then rebounded within a few hours, leaving the client with a loss of over US$20 million in a single day.
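
To see how such a stop-loss rule can turn a brief dip into a realized loss, consider the following minimal sketch. The 1.4 percent threshold comes from the report; the function name, the price path and the numbers are invented purely for illustration.

```python
# Illustrative only: how a stop-loss rule can lock in a loss on a whipsaw.
# The 1.4 percent level comes from the article; the prices are invented.

def apply_stop_loss(entry: float, prices: list[float], stop_pct: float = 1.4) -> float:
    """Exit at the first price that falls stop_pct below entry; else hold to the end.

    Returns the realized return in percent.
    """
    stop_level = entry * (1 - stop_pct / 100)
    for p in prices:
        if p <= stop_level:
            return (p - entry) / entry * 100  # the loss is locked in here
    return (prices[-1] - entry) / entry * 100

if __name__ == "__main__":
    # Hypothetical intraday path: a dip below the stop, then a rebound.
    path = [2700.0, 2690.0, 2655.0, 2680.0, 2720.0]
    print(f"{apply_stop_loss(entry=2700.0, prices=path):+.2f}%")
    # Prints about -1.67%: the rule sells into the dip, so the rebound
    # to 2720 (about +0.74% from entry) is never captured.
```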

 

The trial has been scheduled for April next year in a commercial court in London, and it is expected to be closely covered by the financial media.

 

AlphaGo, the AI program that defeated the human Go champion two years ago, made AI famous all over the world. However, Go is a board game with fixed rules. The stock market is a different story: many factors are at play, including human behavior with its irrational components, and there AI can end up the loser.

 

Moreover, AI's "black box" thinking makes it difficult for humans to understand why a computer made a particular decision, so its conclusions are open to doubt.

 

Professor Chris Webster, dean of the Faculty of Architecture at the University of Hong Kong, recently wrote a preface for my new book, Are You Future Ready?, in which he discussed the dilemma of "black box" thinking. He said the most advanced urban analysis models, such as the ultra-high-resolution mathematical model that the Singapore government is developing for transportation and land planning, inspire a love-hate relationship.

 

We love them because the analysis results can be "complex, beautiful, fascinating and overwhelming". But their "black box" mode of thinking makes them "difficult to interpret analytically". That is because AI draws on a large number of decision-making factors and data sources, which generate new information as they interact, making the results hard for humans to comprehend.

 

So how can we trust the analysis results? And if they go wrong, who should bear the responsibility?

 

You may think that the company which developed the AI program, or the organization that owns it, should take responsibility. However, in March of this year, US prosecutors decided that Uber was "not criminally liable" for a crash last year in which one of its self-driving cars killed a pedestrian.

 

This is worrying. Today, many companies use chatbots, another AI application, in customer service, mainly to answer general inquiries. If chatbots start selling products in the future, who can be sued when a customer suffers a loss?

 

We need to answer these questions as soon as possible so that the application of AI can move forward and truly benefit the community.

 

 

Dr. Winnie Tang
Adjunct Professor, Department of Computer Science, Faculty of Engineering and Faculty of Architecture, The University of Hong Kong