When AI Algorithms go Wrong, Do we Have The Tools to Fight Back?

The mass deployment of AI to handle sensitive data is even more dangerous, since there is often no way to question these systems when something goes wrong.

Vishal Mathur | News18.com@vishalmathur85

Updated: December 10, 2018, 9:14 AM IST
Image for representation (Photo: AI Now Institute)

Artificial Intelligence (AI) is everywhere. Private companies, tech companies and governments are all rushing to deploy artificial intelligence systems for a variety of tasks. But, as with all things tech, something could go wrong. Do we have the tools to fight back and counter an algorithm failure? That is exactly what a new report by AI Now, a research group affiliated with New York University that includes AI experts from companies such as Google and Microsoft, is asking.

In the report, titled AI Now 2018, the researchers examine what they call the “accountability gap”: what happens when something goes wrong with artificial intelligence systems. It comes at the end of a year that saw the Facebook-Cambridge Analytica debacle, allegations that algorithms on social media platforms were tweaked to influence the outcomes of elections in various countries, and a backlash against Google from its own employees for supplying AI systems to the US Department of Defense’s Project Maven drone surveillance programme. Back in June, the Trump administration introduced a new and much-criticized policy that separated immigrant parents and their children at the border, prompting employees of tech companies including Amazon, Salesforce and Microsoft to demand that their employers end contracts with U.S. Immigration and Customs Enforcement (ICE). As it turned out a month later, ICE had allegedly modified its algorithms to almost always produce a “detain” recommendation for immigrants in custody. In September, it was revealed that IBM had actively participated in a program that allowed law enforcement in New York City to use surveillance cameras to detect the ethnicity of any person. These are just some examples.

“Around the world, government agencies are procuring and deploying automated decision systems (ADS) under the banners of efficiency and cost-savings. Yet many of these systems are untested and poorly designed for their tasks, resulting in illegal and often unconstitutional violations of individual rights. Worse, when they make errors and bad decisions, the ability to question, contest, and remedy these is often difficult or impossible. Some agencies are attempting to provide mechanisms for transparency, due process, and other basic rights, but trade secrecy and similar laws threaten to prevent auditing and adequate testing of these systems,” says the report.

It is not just governments rushing to deploy AI-based systems without checks and balances in place. IBM’s Watson for Oncology is already being tested in hospitals around the world for cancer treatment. Although it is still in the trial phase, a report from STAT in September suggested that the system was already offering “unsafe and incorrect cancer treatments”. “It raised serious concerns about the incentives driving the rush to integrate such technology, and the lack of clinical validation and peer-reviewed research attesting to IBM’s marketing claims of effectiveness,” says the report. In another example, a self-driving Uber in Tempe, Arizona failed to detect a woman crossing the street and hit her; she died. “[I]n the case of Uber, the person minding the autonomous vehicle was ultimately blamed, even though Uber had explicitly disabled the vehicle’s system for automatically applying brakes in dangerous situations,” points out the AI Now 2018 report.

The AI Now 2018 report suggests sector-by-sector regulation of AI systems, including facial recognition systems. It also calls on companies to open their systems to public scrutiny: “This year, we go further: AI companies should waive trade secrecy and other legal claims that would prevent algorithmic accountability in the public sector. Governments and public institutions must be able to understand and explain how and why decisions are made, particularly when people’s access to healthcare, housing, and employment is on the line,” suggests the report.
