Tag: artificial intelligence

1. Tennessee Moves First on AI Protections With ELVIS Act
2. “Grandma, I have [not] been kidnapped”: The FCC Bans AI-Generated Robocalls
3. FTC Bans Rite Aid from Using AI Facial Recognition Without Reasonable Safeguards
4. Provisional Political Agreement on Landmark AI Regulation in Europe
5. California Privacy Protection Agency Proposes Draft Rules for Automated Decision Making, Including Artificial Intelligence
6. AI (Adverse Inferences): AI Lending Models may show unconscious bias, according to Report.
7. Facial Recognition Technology – Good or Bad?
8. Privacy Awareness Week (Personal Data): technology suspicion – consumer concerns surrounding voice and digital assistants
9. The Defence Department’s $4 million investment in Cognitive Computing
10. No more self-serve stealing at supermarkets thanks to new Aussie AI technology

Tennessee Moves First on AI Protections With ELVIS Act

By Jason W. Callen and Christopher J. Valente

On 21 March 2024, Tennessee became the first state in the United States to prohibit unauthorized use of artificial intelligence (AI) to replicate an individual’s likeness, image, and voice when its governor signed the Ensuring Likeness, Voice and Image Security Act of 2024 (ELVIS Act). The ELVIS Act’s protection of a person’s voice from AI misuse is particularly notable. Tennessee, like other states, already had prohibitions on unauthorized use of an individual’s likeness and image. And while some other states, such as California, have also protected a person’s voice, none had expressly linked all three—likeness, image, and voice—to AI.

Read More

“Grandma, I have [not] been kidnapped”: The FCC Bans AI-Generated Robocalls

By Andrew Glass, Gregory Blase, and Joshua Durham

Effective immediately, the Federal Communications Commission (FCC) banned AI-generated phone calls with its recent Declaratory Ruling (the Ruling). AI can be trained to mimic any person’s voice, producing audio or voice “deepfakes” that enable novel scams, such as grandparents receiving a call from their “grandchild” and believing the grandchild has been kidnapped or needs money for bail. FCC Commissioner Starks deemed such deepfakes a threat to election integrity, recalling that just recently, “potential primary voters in New Hampshire received a call, purportedly from President Biden, telling them to stay home and ‘save your vote’ by skipping the state’s primary.”

Read More

FTC Bans Rite Aid from Using AI Facial Recognition Without Reasonable Safeguards

By Whitney E. McCollum and Eric F. Vicente Flores

The Federal Trade Commission (FTC) issued a first-of-its-kind proposed order prohibiting Rite Aid Corporation from using facial recognition technology for surveillance purposes for five years.

The FTC alleged that Rite Aid’s facial recognition technology generated thousands of false-positive matches that incorrectly indicated a consumer matched the identity of an individual who was suspected or accused of wrongdoing. It further alleged that false-positive matches were more likely to occur in Rite Aid stores located in “plurality-Black,” “plurality-Asian,” and “plurality-Latino” areas. Additionally, Rite Aid allegedly failed to take reasonable measures to prevent harm to consumers when deploying its facial recognition technology. Reasonable measures include: inquiring about the accuracy of the technology before using it; preventing the use of low-quality images; training and overseeing employees tasked with operating the facial recognition technology; and implementing procedures for tracking the rate of false-positive matches.

Read More

Provisional Political Agreement on Landmark AI Regulation in Europe

By Giovanni Campi, Petr Bartoš, and Kathleen Keating

In a landmark development, EU lawmakers reached a provisional political agreement on the Artificial Intelligence Act (AI Act) on 8 December 2023. Once adopted, the regulation will be the first of its kind and could set a global standard for AI laws.

Read More

California Privacy Protection Agency Proposes Draft Rules for Automated Decision Making, Including Artificial Intelligence

By Eric Vicente Flores and Michael Stortz

Executive Summary: The California Privacy Protection Agency has proposed a new set of draft regulations that aim to regulate the use of artificial intelligence and automated decision-making technology. These will be considered alongside the draft regulations the agency has previously proposed regarding risk assessments and cybersecurity assessments; all three sets of draft regulations will be discussed at the agency’s meeting on 8 December.

Read More

AI (Adverse Inferences): AI Lending Models may show unconscious bias, according to Report.

By Cameron Abbott and Max Evans

We live in an era where the adoption and use of Artificial Intelligence (AI) is at the forefront of business advancement and social progression. Facial recognition software is in use, or being piloted, across a variety of government sectors, whilst voice recognition assistants are becoming the norm in both personal and business contexts. However, as we have previously blogged, the AI ‘bandwagon’ inherently comes with legitimate concerns.

This is no different in the banking world. The use of AI-based phishing detection applications has strengthened cybersecurity safeguards for financial institutions, whilst the use of “Robo-Advisers” and voice and language processors has improved efficiency by increasing the pace of transactions and reducing service times. However, this may be too good to be true: according to a Report by CIO Drive, algorithmic lending models may show an unconscious bias.

Read More

Facial Recognition Technology – Good or Bad?

By Cameron Abbott, Michelle Aggromito and Jacqueline Patishman

As of June 2019, law enforcement agencies are working with the city of Perth on a 12-month trial of facial recognition software. The trial involves installing the software in 30 CCTV cameras and is part of the Federal Government’s Smart Cities plan, which was created with the aim of increasing interconnectivity and building intelligent, technology-enabled infrastructure throughout Australia.

Read More

Privacy Awareness Week (Personal Data): technology suspicion – consumer concerns surrounding voice and digital assistants

By Cameron Abbott, Rob Pulham, Michelle Aggromito, Max Evans and Rebecca Gill

Protecting personal data is a fundamental aspect of any privacy regime. As we become more technologically advanced, organisations are finding innovative ways to interact with consumers through more intuitive communication channels, such as voice recognition via digital assistants. But not everyone trusts such technology, as Microsoft’s April 2019 report on voice assistants and conversational artificial intelligence has found.

The report found that 41% of voice assistant users were concerned about trust, privacy and passive listening. Other interesting findings of the report include:

Read More

The Defence Department’s $4 million investment in Cognitive Computing

By Cameron Abbott and Georgia Mills

The Australian Defence Department granted IBM Australia a $4 million, three-year contract for the provision of its Watson cognitive computing infrastructure. The platform provides a cognitive, artificial intelligence and machine learning capability for use by Defence and is only the second on-premises instance of Watson globally.

Matt Smorhun, Assistant Secretary for the ICT Strategy Realisation Branch at the Department of Defence, said they decided to “just buy this thing” and then work out how it would fit into the organisation later. (This did strike us as a rather strange approach to spending taxpayers’ dollars – but congrats to the IBM salesperson who pulled that off!)

Read More

No more self-serve stealing at supermarkets thanks to new Aussie AI technology

By Cameron Abbott and Allison Wallace

Since the introduction of self-serve checkouts in Australian supermarkets nearly ten years ago, customers have been engaging in the simplest of hacks to outsmart the supermarket technology. Mum and Dad cyber criminals? Not so much – mostly it is just putting through more expensive items as much cheaper ones (think a kilo of lemons as a kilo of potatoes).

But thanks to an Aussie start-up, new AI technology will put an end to customers’ criminal careers.

Read More

Copyright © 2024, K&L Gates LLP. All Rights Reserved.