Tag: Cybersecurity

1. Pay the Price, Now ‘Fess Up’: Reporting Obligations for Ransomware Payments Are Live
2. A Positive Package: The Data (Use and Access) Bill
3. Australian Privacy Law Reform – The Wait is (Almost!) Over
4. Australian Privacy Reform Series Refresher: What Are These Reforms?
5. AI’s Next Frontier: The New Voice of Scam Calls?
6. New Guidance Released for Australian Listed Companies on Continuous Disclosure Obligations During a Cyber Incident
7. Tennessee Moves First on AI Protections With ELVIS Act
8. “Grandma, I have [not] been kidnapped”: The FCC Bans AI-Generated Robocalls
9. FTC Bans Rite Aid from Using AI Facial Recognition Without Reasonable Safeguards
10. CJEU Decides on Use of Automatically Generated Scoring Values

Pay the Price, Now ‘Fess Up’: Reporting Obligations for Ransomware Payments Are Live

By: Cameron Abbott, Rob Pulham, Stephanie Mayhew, Emre Cakmakcioglu

As of 29 May 2025, the requirement for businesses to report ransomware payments they make has come into effect.


A Positive Package: The Data (Use and Access) Bill

By: Shane Hubbard, Ludovico Lugnani, and Helen Phizackerley

Since its introduction on 23 October 2024, the Data (Use and Access) Bill (the Bill) continues to evolve as it progresses through Parliament. Reminiscent of the incomplete Data Protection and Digital Information Bill, it has been introduced by the new Labour government to “harness the power of data for economic growth, support modern digital government, and improve people’s lives.” The Bill’s core aims are to grow the economy, improve UK public services and make people’s lives easier. It has been positioned as “a positive package” that “provides greater regulatory certainty for organisations and promotes growth and innovation in the UK economy.”


Australian Privacy Law Reform – The Wait is (Almost!) Over

By: Cameron Abbott, Stephanie Mayhew, and Rob Pulham

The long-awaited privacy reform finally arrived in the Australian Parliament today with the introduction of the Privacy and Other Legislation Amendment Bill 2024. Described as ‘Tranche 1’ of the reforms, the Bill introduces significant uplifts to several aspects of Australia’s privacy laws.


Australian Privacy Reform Series Refresher: What Are These Reforms?

By: Cameron Abbott, Rob Pulham, and Stephanie Mayhew

In 2023 the Attorney-General’s Department released the “Privacy Act Review Report” (Review Report), which considered whether the Australian Privacy Act 1988 (Cth) and its enforcement mechanisms are fit for purpose in an environment where Australians now live much of their lives online and their information is collected and used for a myriad of purposes in the digital economy.


AI’s Next Frontier: The New Voice of Scam Calls?

By: Cameron Abbott, Rob Pulham, Dadar Ahmadi-Pirshahid, and Adam Asadurian

Astonishingly (…or perhaps not, for anyone who’s answered a phone call recently), “imposter calls” are the leading category of spam calls in the United States, accounting for 33% of all phone calls according to a recent study by QR Code Generator.


New Guidance Released for Australian Listed Companies on Continuous Disclosure Obligations During a Cyber Incident

By: Cameron Abbott, Andrew Gaffney, Harry Kingsley, Rob Pulham, and Stephanie Mayhew

Australia’s corporate regulator, ASIC, has released new guidance on how to comply with market disclosure requirements when a listed company is in the middle of investigating and responding to a cyber incident.


Tennessee Moves First on AI Protections With ELVIS Act

By: Jason W. Callen and Christopher J. Valente

On 21 March 2024, Tennessee became the first state in the United States to prohibit unauthorized use of artificial intelligence (AI) to replicate an individual’s likeness, image, and voice when its governor signed the Ensuring Likeness, Voice and Image Security Act of 2024 (ELVIS Act). The ELVIS Act’s protection of a person’s voice from AI misuse is particularly notable. Tennessee, like other states, already had prohibitions on unauthorized use of an individual’s likeness and image. And while some other states, such as California, have also protected a person’s voice, none had expressly linked all three—likeness, image, and voice—to AI.


“Grandma, I have [not] been kidnapped”: The FCC Bans AI-Generated Robocalls

By: Andrew Glass, Gregory Blase, and Joshua Durham

Effective immediately, the Federal Communications Commission (FCC) banned AI-generated phone calls with its recent Declaratory Ruling (the Ruling). AI can be trained to mimic any person’s voice, producing audio or voice “deepfakes” that enable novel scams, such as a grandparent receiving a call from their “grandchild” and believing the grandchild has been kidnapped or needs bail money. FCC Commissioner Starks deemed such deepfakes a threat to election integrity, recalling that just recently, “potential primary voters in New Hampshire received a call, purportedly from President Biden, telling them to stay home and ‘save your vote’ by skipping the state’s primary.”


FTC Bans Rite Aid from Using AI Facial Recognition Without Reasonable Safeguards

By: Whitney E. McCollum and Eric F. Vicente Flores

The Federal Trade Commission (FTC) issued a first-of-its-kind proposed order prohibiting Rite Aid Corporation from using facial recognition technology for surveillance purposes for five years.

The FTC alleged that Rite Aid’s facial recognition technology generated thousands of false-positive matches that incorrectly indicated a consumer matched the identity of an individual who was suspected or accused of wrongdoing. The FTC further alleged that false-positive matches were more likely to occur in Rite Aid stores located in “plurality-Black,” “plurality-Asian,” and “plurality-Latino” areas. Additionally, Rite Aid allegedly failed to take reasonable measures to prevent harm to consumers when deploying its facial recognition technology. Reasonable measures include: inquiring about the accuracy of the technology before using it; preventing the use of low-quality images; training and overseeing employees tasked with operating the facial recognition technology; and implementing procedures for tracking the rate of false-positive matches.


CJEU Decides on Use of Automatically Generated Scoring Values

By: Dr. Thomas Nietsch

In its judgment of 7 December 2023 (C-634/21 – Schufa), on a reference from the Administrative Court Wiesbaden (Germany), the court held that Article 22 of the GDPR (Art. 22 GDPR) also applies to probability values that credit scoring agencies create on the basis of personal data and that third parties use to decide whether the individual concerned is eligible for credit or to conclude a contract.


Copyright © 2025, K&L Gates LLP. All Rights Reserved.