How New A.I. Is Making the Law’s Definition of Hacking Obsolete

Imagine you’re cruising in your new Tesla, autopilot engaged. Suddenly you feel yourself veer into the other lane, and you grab the wheel just in time to avoid an oncoming car. When you pull over, pulse still racing, and look over the scene, it all seems normal. But upon closer inspection, you notice a series of translucent stickers leading away from the dotted lane divider. And to your Tesla, these stickers represent a non-existent bend in the road that could have killed you.

In April this year, a research team at the Chinese tech giant Tencent showed that a Tesla Model S in autopilot mode could be tricked into following a bend in the road that didn’t exist simply by adding stickers to the road in a particular pattern. Earlier research in the U.S. had shown that small changes to a stop sign could cause a driverless car to mistakenly perceive it as a speed limit sign. Another study found that by playing tones indecipherable to a person, a malicious attacker could cause an Amazon Echo to order unwanted items.

These discoveries are part of a growing area of study known as adversarial machine learning. As more machines become artificially intelligent, computer scientists are learning that A.I. can be manipulated into perceiving the world in wrong, sometimes dangerous ways. And because these techniques “trick” the system instead of “hacking” it, federal laws and security standards may not protect us from these malicious new behaviors — and the serious consequences they can have.

Machine learning (M.L.) is a major subset of A.I. that typically involves a two-phase process to ascertain patterns in data. In the first phase, a model is trained toward a particular objective, such as detecting spam emails, through exposure to many examples. In the second phase, the model is shown a new example and must infer its category.
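
To make the two phases concrete, here is a minimal sketch of a toy spam filter in Python using scikit-learn. The emails and labels are invented for illustration and are not from the article; the point is only the shape of the process: fit a model on labelled examples, then ask it to classify an example it has never seen.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Phase 1: training. The model is exposed to labelled examples and
# learns which word patterns correlate with spam.
emails = [
    "win a free prize now", "limited offer claim your reward",          # spam
    "meeting agenda for tomorrow", "please review the attached report",  # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

# Phase 2: inference. The trained model must categorise an email it has
# never seen before.
new_email = ["claim your free reward now"]
print("spam" if model.predict(vectorizer.transform(new_email))[0] == 1 else "not spam")
```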

So-called deep learning is a subset of M.L. that approaches classification problems layer by layer, with each layer devoted to an aspect of the classification. Thus, a deep learning system trying to detect a stop sign may devote one layer to detecting the shape, one to the color, and so on.
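
As a rough sketch of that layered structure, the PyTorch model below stacks layers that each transform the previous layer's output, with earlier layers responding to simple cues such as edges and colour patches and later layers combining them into shapes. The image size, layer widths, and the ten sign classes are illustrative assumptions, not taken from any real traffic-sign system.

```python
import torch
import torch.nn as nn

# Illustrative layered classifier for small road-sign images.
sign_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features (edges, colour blobs)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features (shapes, corners)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # final layer maps features to 10 hypothetical sign classes
)

dummy_image = torch.randn(1, 3, 32, 32)   # one 32x32 RGB image
print(sign_classifier(dummy_image).shape)  # torch.Size([1, 10])
```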

Adversarial M.L. exploits the limitations of machine learning in order to cause an error or to discover hidden information. The field is still evolving, but three central techniques have emerged. The first involves poisoning the model — by deliberately mislabeling a picture, for example — in order to force errors in the inference phase. The second involves fooling a trained model by understanding what features the model is using to classify new inputs and mimicking those features. Your potential Tesla crash falls into this second category. The third involves systematically querying a trained model to try to discover the underlying, potentially personal or sensitive data that went into training that model.
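
A minimal sketch of the second technique (fooling a trained model) is the fast gradient sign method from the adversarial M.L. research literature, shown below in PyTorch. The classifier `model` is a hypothetical trained network, not anything specific to Tesla or the studies above; the idea is that every pixel is nudged slightly in whichever direction most increases the model's error, producing an input that looks unchanged to a person but is misread by the machine.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to push `model` toward an error."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that raises the loss,
    # then clip back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```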

Adversarial M.L. is rapidly becoming an important area of security research — and as a law professor, I can tell you that the law is not ready.

The rules against hacking in the United States date back to the mid-1980s, around the time of the movie WarGames. According to popular lore, Reagan administration staffers watched the movie in horror as a teenager named David, played by a young Matthew Broderick, broke into a Pentagon computer from his bedroom and nearly precipitated a nuclear war.

Whatever the reason, Congress in 1986 passed the Computer Fraud and Abuse Act (CFAA) as a criminal and civil disincentive to hacking into government and other protected computers. The CFAA has had its problems, including allegations of prosecutorial overreach. But the basic definition of hacking has now survived three decades of technological change: Under federal law, hacking consists of causing harm by bypassing a security protocol without adequate authority. This understanding continues to hold and has extended to other contexts, including international law and standards.

The problem is that adversarial M.L. doesn’t break into a machine as such. Rather, these techniques leverage knowledge of the model to trick the system into unanticipated behaviors, and they do it without bypassing any security protocol. Nor do they shut the system down by overwhelming it, as a typical “denial of service” attack would. These techniques therefore do not seem to constitute “hacking” as the law has come to understand that term.

The computer security threat is evolving in ways anti-hacking laws didn’t anticipate. One concern is that anti-hacking laws will be under-inclusive and so fail to outlaw clearly dangerous behavior, such as compromising a self-driving car. Five years ago, a casino tried to sue a defendant because he had figured out how to trick a digital poker machine into letting him win by pushing a series of buttons in a particular order.

In an unpublished, non-binding opinion, a federal court rejected the casino’s case because the defendant had not “hacked” the machine — he had not bypassed any security protocols. And the same might be said of tricking a car into speeding or changing lanes by altering the external environment.

Of course, defacing roads or road signs in a way that endangers human drivers is already illegal. Maybe these local laws could deal with examples of adversarial M.L., at least when lives are in danger. But what about techniques that reach multiple machines at once, such as a tone broadcast on the radio that could cause Alexas all over the country to order items from Amazon? Some of the ways machines can be tricked are capable of scale and could cross jurisdictional lines.

Others may worry that the already vague anti-hacking law could inadvertently target innocent or expressive activity. If courts choose to regard tricking cars or smart homes as hacking, what is the limiting principle? Security camera systems in airports are not only protected computers, but protected government computers, which carry special penalties. So would wearing makeup meant to throw off facial recognition constitute “hacking” those systems?

Also troubling is the chilling effect on accountability research. Academics, the media, or independent researchers test systems for security vulnerabilities, race and gender biases, and other problems. Unlike the anti-circumvention clause of the Digital Millennium Copyright Act, which protects against efforts to crack digital locks on content, the Computer Fraud and Abuse Act does not have a research exemption.

The investigative news site ProPublica published a powerful exposé on the racial biases inherent in bail risk assessment algorithms; if courts begin to see this kind of journalistic probing as hacking, then the important fields of adversarial machine learning and algorithmic fairness are at risk.

And what about the developer of the system being tricked? Companies have an obligation to implement adequate security, and state laws require them to report hacks that result in a breach of personal data. If security concerns affect enough consumers, the Federal Trade Commission or other government agencies can get involved. Several major companies, including Twitter and Uber, are under consent decrees with the FTC for failing to safeguard consumer data.

Today we lack a standard for what constitutes adequate robustness against adversarial M.L. Without such a standard, it is hard to imagine that companies are adequately incentivized to harden their systems against attack. But the concern and the responsibility exist any time machines are making material decisions about people and their opportunities.

Smart machines, like even the smartest people, can be tricked. Computer security research is teaching us that a clever attacker can purposefully manipulate A.I. into misperceiving the world in ways that can cause financial or even physical harm. The resulting disconnect between adversarial techniques and anti-hacking laws means uncertainty for consumers and researchers and lesser incentives for A.I. developers. What is needed is a wholesale rethink of the very definition of hacking.

-By Ryan Calo

(This article was published on onezero.medium.com on August 21, 2019 and has been reproduced here in full.)

(Featured Image Credit: picture alliance/Getty Images)

