Creating realistic deepfakes is getting easier. Fighting back may take even more AI

Washington – The phone rings. It’s the Secretary of State on the line. Or is it?
For Washington insiders, seeing and hearing is no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating senior officials in President Donald Trump’s administration.
Digital fakes are coming for corporate America, too, as criminal gangs and hackers associated with adversaries, including North Korea, use synthetic video and audio to impersonate CEOs and low-level job candidates to gain access to critical systems or business secrets.
Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, causing security problems for governments, businesses and individuals and making trust the most precious currency of the digital age.
Responding to the challenge will require laws, better digital literacy and technical solutions that fight AI with more AI.
“As humans, we are remarkably susceptible to deception,” said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. But he believes solutions to the challenge of deepfakes may be at hand: “We are going to fight back.”
This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to reach foreign ministers, a U.S. senator and a governor over text, voicemail and the Signal messaging app.
In May, someone impersonated Trump’s chief of staff, Susie Wiles.
Another phony Rubio had turned up in a deepfake earlier this year, saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.
The national security implications are enormous: people who think they’re chatting with Rubio or Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy.
“You’re either trying to extract sensitive secrets or competitive information, or you’re going after access to an email server or other sensitive network,” said Kinny Chan, CEO of the cybersecurity firm QID, of the possible motivations.
Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state’s upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was created using AI.
Their ability to deceive makes deepfakes a potent weapon for foreign actors. Russia and China have both aimed disinformation and propaganda at Americans as a way of undermining trust in democratic alliances and institutions.
Steven Kramer, the political consultant who admitted sending the fake Biden robocalls, said he wanted to send a message about the dangers deepfakes pose to the American political system. Kramer was acquitted last month of charges of voter suppression and impersonating a candidate.
“I did what I did for $500,” Kramer said. “Can you imagine what would happen if the Chinese government decided to do this?”
The greater availability and sophistication of the programs mean deepfakes are increasingly used for corporate espionage and garden-variety fraud.
“The financial industry is right in the crosshairs,” said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. “Even individuals who know each other have been convinced to transfer vast sums of money.”
In the context of corporate espionage, they can be used to impersonate CEOs asking employees to hand over passwords or routing numbers.
Deepfakes can also let scammers apply for jobs, and even do them, under an assumed or fake identity. For some, this is a way to access sensitive networks, steal secrets or install ransomware. Others just want the work and may be juggling several similar jobs at different companies at the same time.
Authorities in the United States have said that thousands of North Koreans with information technology skills have been dispatched to live abroad, using stolen identities to obtain jobs at tech firms in the U.S. and elsewhere. The workers gain access to company networks as well as a paycheck. In some cases, they install ransomware that can later be activated to extort even more money.
The schemes have generated billions of dollars for the North Korean government.
Within three years, as many as 1 in 4 job applications is expected to be fake, according to research from Adaptive Security, a cybersecurity company.
“We’ve entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,” said Brian Long, CEO of Adaptive. “It’s no longer about hacking systems; it’s about hacking trust.”
Researchers, public policy experts and technology companies are now studying the best ways to address the economic, political and social challenges posed by deepfakes.
New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those who use digital technology to deceive others, if they can be caught.
Greater investment in digital literacy could also boost people’s immunity to online deception by teaching them ways to spot fake media and avoid falling prey to scammers.
The best tool for catching AI may be another AI program, one trained to sniff out the tiny flaws in a deepfake that would go unnoticed by a person.
Systems like Pindrop’s analyze millions of data points in a person’s speech to quickly identify irregularities. The system can be used during job interviews or other video conferences to detect whether a participant is using voice-cloning software, for instance.
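To give a rough sense of how this kind of detection can work, here is a toy sketch in Python. It is a minimal illustration of the general idea of scanning speech for statistical irregularities, not Pindrop’s actual method: it scores short audio frames with spectral flatness (a simple measure of how noise-like a sound is) and flags frames that are statistical outliers. The feature choice, frame length and threshold are all assumptions made for the sketch.

# Toy illustration only: flag audio frames whose spectral flatness
# deviates sharply from the rest of the recording. Real detectors
# combine many features and trained models, not one hand-picked statistic.
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Values near 1 are noise-like; values near 0 are tonal."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def flag_suspicious_frames(signal: np.ndarray, frame_len: int = 1024,
                           z_threshold: float = 3.0) -> list[int]:
    """Split audio into frames, score each one, and return the indices
    of frames whose scores are statistical outliers (|z-score| > threshold)."""
    n = len(signal) // frame_len
    scores = np.array([spectral_flatness(signal[i * frame_len:(i + 1) * frame_len])
                       for i in range(n)])
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return [i for i in range(n) if abs(z[i]) > z_threshold]

# Usage with synthetic audio standing in for a real recording:
rng = np.random.default_rng(0)
audio = rng.normal(size=16000 * 5)  # 5 seconds of placeholder noise at 16 kHz
print(flag_suspicious_frames(audio))

A production system would replace the single hand-picked statistic with many features and learned models, but the underlying principle is the same: machine-generated speech tends to leave measurable traces that stand out against natural variation.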
Similar programs may one day be commonplace, running in the background as people chat with colleagues and loved ones online. Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Balasubramaniyan, Pindrop’s CEO.
“You can take the defeatist view and say we’re going to be subservient to disinformation,” he said. “But that’s not going to happen.”