
US officials seek to crack down on harmful AI products

FILE - Lina Khan, then a nominee for Commissioner of the Federal Trade Commission (FTC), speaks during a Senate Committee on Commerce, Science, and Transportation confirmation hearing, April 21, 2021 on Capitol Hill in Washington. The federal government will "not hesitate to crack down" on harmful business practices involving artificial intelligence, Federal Trade Commission head Khan warned Tuesday. (Saul Loeb/Pool via AP, File)

The U.S. government will “not hesitate to crack down” on harmful business practices involving artificial intelligence, the head of the Federal Trade Commission warned Tuesday in a message partly directed at the developers of widely used AI tools such as ChatGPT.

FTC Chair Lina Khan joined top officials from U.S. civil rights and consumer protection agencies to put businesses on notice that regulators are working to track and stop illegal behavior in the use and development of biased or deceptive AI tools.

Much of the scrutiny has focused on those who deploy automated tools that introduce bias into decisions about whom to hire, how worker productivity is monitored, and who gets access to housing and loans.

But amid a fast-moving race between tech giants such as Google and Microsoft to sell more advanced tools that generate text, images and other content resembling the work of humans, Khan also raised the possibility of the FTC wielding its antitrust authority to protect competition.

“We all know that in moments of technological disruption, established players and incumbents may be tempted to crush, absorb or otherwise unlawfully restrain new entrants in order to maintain their dominance,” Khan said at a virtual press event Tuesday. “And we already can see these risks. A handful of powerful firms today control the necessary raw materials, not only the vast stores of data, but also the cloud services and computing power that startups and other businesses rely on to develop and deploy AI products.”

Khan didn't name any specific companies or products but expressed concern about tools that scammers could use to “manipulate and deceive people on a large scale, deploying fake or convincing content more widely and targeting specific groups with greater precision.”

She added that “if AI tools are being deployed to engage in unfair, deceptive practices or unfair methods of competition, the FTC will not hesitate to crack down on this unlawful behavior.”

Khan was joined by Charlotte Burrows, chair of the Equal Employment Opportunity Commission; Rohit Chopra, director of the Consumer Financial Protection Bureau; and Assistant Attorney General Kristen Clarke, who leads the civil rights division of the Department of Justice.

As lawmakers in the European Union negotiate passage of new AI rules, and some call for similar legislation in the U.S., the top U.S. regulators emphasized Tuesday that many of the most harmful AI products might already run afoul of existing laws protecting civil rights and preventing fraud.

“There is no AI exemption to the laws on the books,” Khan said.