Ethical AI: Possibility or Pipe Dream?


Coming to a global consensus on what makes for ethical AI will be difficult. Ethics is in the eye of the beholder.

Ethical artificial intelligence (ethical AI) is a somewhat nebulous term used to describe the incorporation of moral principles into the purpose and functioning of AI systems. It is a difficult but important concept, and the subject of governmental, academic and corporate study. SecurityWeek talked to IBM’s Christina Montgomery to seek a better understanding of the issues.

Montgomery is IBM’s chief privacy officer (CPO) and chair of the IBM AI ethics board. On privacy, she is also an advisory council member of the Center for Information Policy Leadership (a global privacy and data policy think tank), and an advisory board member of the Future of Privacy Forum. On AI, she is a member of the U.S. Chamber of Commerce AI Commission and of the National AI Advisory Committee.

Privacy and AI are inextricably linked. Many AI systems are designed to pass ‘judgment’ on people and are trained on personal information. It is fundamental to a fair society that privacy is not abused in training AI algorithms, and that the resulting judgments are accurate, free of bias, and not misused. This is the purpose of ethical AI.

But ‘ethics’ is a difficult concept. It is akin to a ‘moral compass’ that fundamentally does not exist outside the viewpoint of each individual. It differs between cultures, nations, corporations and even neighbors, and cannot be given an absolute definition. We asked Montgomery: if you cannot define ethics, how can you produce ethical AI?

“There are different perceptions and different moral compasses around the world,” she said. “IBM operates in 170 countries. Technology that is acceptable in one country is not necessarily acceptable in another country. So, that’s the baseline – you must always conform to the laws of the jurisdiction in which you operate.”

Beyond that, she believes that ethical AI is a sociotechnical issue that must balance the wellbeing of people with the operations of technology. “The first question,” she said, “is to ask ourselves not whether this is something we can do with technology, but whether this is something we should do. This is what we do at IBM – we use value-based principles to govern how we operate and what we produce.”

She gives a few examples of this stance in operation. “We were the first major company to come out and say, ‘We are no longer going to sell general purpose facial recognition APIs’.” This was a value-based decision: IBM’s own moral compass and its own ethical values led it to that position.

“There are many companies in the facial recognition space,” she continued. “We chose not to be there because it didn’t align with our principles. We didn’t feel the technology was ready to be deployed in a fair way, and it could also be used in contexts like mass surveillance – which we did not find acceptable from our moral position.”

Compare this to a statement from Cate Cadell, formerly a technology and politics correspondent for Reuters in Beijing and currently a national security reporter focusing on China at The Washington Post. The comment appeared in the Sydney Morning Herald (September 4, 2022) and is drawn from a book published on September 6, 2022.

“Local police describe vast, automated networks of hundreds or even thousands of cameras in their area alone, that not only scan the identities of passersby and identify fugitives, but create automated alarm systems giving authorities the location of people based on a vast array of ‘criminal type’ blacklists, including ethnic Uighurs and Tibetans, former drug users, people with mental health issues and known protestors.”

The mass surveillance based on AI-augmented facial recognition that concerned IBM is alive and well in China.

Montgomery’s second example of IBM’s ethical stance on AI came with the COVID-19 pandemic. “When COVID-19 struck, there was much discussion on how technology could be deployed to help address the global pandemic,” she said. One of these discussions was around the use of location data to locate, identify and warn people at risk of infection. This would inevitably involve incursions into people’s personal and healthcare information.

“IBM took a step back,” she said, “and we asked ourselves not what could be done, but what we as a company were willing to do. And we were not willing to develop technology solutions that were going to track individuals to ensure they comply with quarantine. Instead, we focused on a computing consortium that brought together the compute power of supercomputers and leveraged it for things like drug discovery – ultimately leading to the development of a vaccine in a shorter timeframe.”

Choosing to build only applications that are not considered unethical is, however, only half a solution. Many apps are not designed to be unethical but become so through undetected, and usually unintended, bias hidden in their algorithms. This bias can be amplified over time and lead to outcomes that harm individuals or sections of society.

IBM tackles this with a range of principles. The first is that AI should never be designed to replace human decision-making, but to augment it: the operation of AI should always have human oversight that can monitor for signs of bias. 

The second is the use of a concept known as ‘explainable AI’. “Explainable artificial intelligence (XAI),” says IBM, “is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable AI is used to describe an AI model, its expected impact and potential biases.”
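
For illustration only, the following minimal sketch shows one common building block of explainable AI – ranking input features by how much a model relies on them. It uses scikit-learn’s permutation_importance; the model, data and feature count are synthetic assumptions for the sketch, not anything IBM-specific.

```python
# Minimal XAI sketch: measure which input features a trained model
# actually relies on. Synthetic stand-in data, not an IBM system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops; a large drop means the model depends on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

A report like this is what lets a human reviewer see whether a model’s decisions rest on legitimate signals or on features that serve as proxies for protected attributes.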

Montgomery explains, “Algorithms are essentially mathematical solutions. If you treat the discovery of bias as a math problem, you can employ other mathematical equations to detect deviations in the expected outcome of an AI algorithm.” Combined with explainable AI, such checks can be used to detect bias and locate its source within the algorithm.
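
As a hedged sketch of that ‘bias as a math problem’ idea, the snippet below compares a model’s positive-outcome rate across demographic groups – one simple deviation in expected outcomes. The metric (a demographic parity gap), the group labels and the 0.2 tolerance are illustrative assumptions, not an IBM method.

```python
# Illustrative bias check: compare positive-outcome rates across groups.
# Metric choice, data and tolerance are assumptions for this sketch.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Difference between the highest and lowest positive-prediction
    rates across the groups present in `groups`."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Synthetic example: binary decisions for two groups, A and B.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
grp = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_gap(preds, grp)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, chosen arbitrarily here
    print("deviation exceeds tolerance -- flag for human review")
```

The point of such a check is not that any single equation defines fairness, but that deviations become measurable quantities that human overseers can monitor over time.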

The final piece in the ethical AI jigsaw is to prevent the use of an ethical algorithm for unethical purposes by the user. “In some cases, such as facial recognition, we simply won’t sell it,” said Montgomery. “With other types of technology, our decisions may determine who we sell it to and/or what contract terms and conditions we put in place – what boundaries, what guardrails, what contractual restrictions, what technical restrictions we build into the product to ensure that misuse doesn’t happen.”

Few would doubt that IBM has taken a moral stance on ethical AI. It is, however, IBM’s own view of ethics that prevails, and that view may not be shared by everyone. Many countries are trying to formalize ethical principles – but their decisions and rules will be shaped by their own social and cultural mores. Europe, for example, is likely to strengthen an ethical view of privacy. The US, where privacy remains important, will focus more on how ethical AI can be used without impinging on business innovation.

Even China could make an ethical argument. The East does not uphold the importance of the individual in the same manner as the West; China could argue that the health of the nation outweighs the health of the individual, and that its use of facial recognition is designed for this purpose.

Coming to a global consensus on what makes for ethical AI will be difficult. Ethics is in the eye of the beholder. Different nations will have different ideals – and it may be that the stance taken by transnational businesses such as IBM will ultimately be the closest thing to a global statement on ethics in AI.


By Kevin Townsend | September 12, 2022