Protecting against AI-enabled scams, deepfakes

Apr 27, 2025 - 15:49

As the country prepares for the May 2025 midterm elections, government cybercrime watchdogs and other stakeholders are taking seriously the threat of artificial intelligence (AI)-driven scams, fraud and hacks, especially deepfakes.

Deepfakes are especially distressing because deepfake attacks, which employ virtual cameras, image substitution, buffer attacks, voice cloning and other AI-enabled methods, can easily generate hyper-realistic, adversarial manipulations, even full replications, capable of fooling facial and voice verification systems.

In a press briefing, Cybercrime Investigation and Coordinating Center (CICC) Executive Director Alexander Ramos said that as the elections draw near, the challenge of artificial intelligence and deepfakes is real and there's a pressing need to prevent AI from creating havoc on the future of our country. In the same forum, Stratbase Institute President Dindo Manhit stressed that fake news, deepfake videos and coordinated social media campaigns could certainly influence public opinion, discredit candidates, sway voters, and delegitimize the electoral process.

Deepfake poll watch

The government has not been remiss in addressing and redressing scams, including deepfakes, ahead of the coming polls. The Marcos Jr. administration has organized a National Deepfake Task Force and will be rolling out an AI-powered detection tool aimed at combating election-related disinformation and fraud.

Led by the Presidential Communications Office (PCO), the initiative is a collaborative effort with the CICC, the Department of Information and Communications Technology (DICT), and the National Bureau of Investigation (NBI), among others. It is a major component of a broader strategy to empower citizens against the escalating threat posed by deepfakes.

In a related move, the DICT has procured a digital tool that can verify within 30 seconds whether photos and videos are authentic or deepfake. The newly acquired AI tool, sourced from Singapore-based company Ensign, can also assess deepfake content with 95-percent accuracy.

The government has purchased 500 licenses of the software at a total cost of P2 million. The tool will be distributed to accredited institutions, including election watchdogs like the Parish Pastoral Council for Responsible Voting (PPCRV), universities, and civil society organizations working on fact-checking and combating disinformation. This decentralized approach seeks to stop misinformation early by flagging a possible deepfake before its malicious content can gain traction. Moreover, the CICC has emphasized the importance of independent fact-checking, which should not be misconstrued as government censorship.

Spotting deepfakes

The MIT Media Lab has offered some telltale indicators to look out for in deepfake visuals, including:

– Do blinking and lip movements follow natural rhythms?

– Are reflections in eyes and glasses consistent and make visual sense?

– Does the age of the skin match that of the eyes and hair?

Generally, deepfake eyes show inconsistent reflections.

New research from the University of Hull in the UK suggests that detecting deepfakes can come down to the eyes. The study found that an image is likely real if the subject has matching reflections in both eyes, while inconsistency between the two reflections points to a probable fake.

The researchers caution that the method can produce both false positives and false negatives, so the eyes-tell-all approach is not foolproof. Nonetheless, it points to a promising line of attack in the race to detect deepfakes and disinformation as quickly as possible.
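For illustration only, the reflection-consistency idea can be sketched in a few lines of Python. The sketch below is not the Hull study's actual code: it assumes you already have flattened grayscale pixel values cropped from each eye region (that extraction step is not shown), and it compares how concentrated the bright reflection is in each eye using a Gini-style concentration score. The function names and the 0.1 tolerance are hypothetical.

```python
def gini(values):
    """Concentration of brightness in a list of non-negative pixel
    intensities: 0 means perfectly uniform, values near 1 mean the
    light is concentrated in a few pixels (a sharp reflection)."""
    v = sorted(values)
    n = len(v)
    total = sum(v)
    if n == 0 or total == 0:
        return 0.0
    # Standard Gini formula over the sorted values.
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(v, start=1))
    return weighted / (n * total)

def reflections_consistent(left_eye_pixels, right_eye_pixels, tol=0.1):
    """Real photos tend to show similar reflection patterns in both
    eyes; a large gap between the two scores flags a possible fake."""
    return abs(gini(left_eye_pixels) - gini(right_eye_pixels)) <= tol
```

In this toy version, an eye crop with one bright highlight against a dark iris scores high, while a flat, washed-out crop scores near zero, so a mismatched pair is flagged.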

Incode Technologies uses "liveness verification" as an essential extra layer in biometric identity systems. The logic of the "live" verification methodology works like this: if a security check requires a selfie for facial recognition, a fraudster will likely submit a photo or a pre-recorded video instead of a real-time selfie. Liveness detection helps determine whether the selfie presented comes from a real, live person.
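Incode's production system is proprietary, but the general challenge-response pattern behind active liveness checks can be sketched as follows. Everything here (the prompt list, the five-second timeout, the function names) is an illustrative assumption, not Incode's API: the key point is that a pre-recorded photo or video cannot react to an unpredictable prompt within a short time window.

```python
import random

# Hypothetical prompts a live user must perform on camera.
CHALLENGES = ["blink twice", "turn head left", "smile"]

def issue_challenge():
    """Pick a random prompt; unpredictability is what defeats replays."""
    return random.choice(CHALLENGES)

def verify_liveness(challenge, detected_action, elapsed_seconds, timeout=5.0):
    """Pass only if the requested action was observed in time.
    A static photo or pre-recorded clip cannot match a prompt
    it never saw, and stalling past the timeout also fails."""
    return detected_action == challenge and elapsed_seconds <= timeout
```

A real system would detect the action with a vision model rather than receive it as a string, but the control flow (random prompt, matched response, deadline) is the same.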

Appdome, a leader in protecting mobile businesses, is extending its Account Takeover Protection suite with 30 new dynamic defense plugins for Deep Fake Detection in Android and iOS apps. The new plugins are designed to guarantee the integrity of Apple Face ID, Google Face Recognition and third-party face and voice recognition services against AI-generated and other deepfake attacks.

Appdome's Deep Fake Detection plugins sit on top of OS-native or third-party Face ID, facial recognition and voice recognition methods, which ensures that any facial recognition process is secure from deepfake attacks. Specific attack vectors it protects against include Face ID bypass, deepfake apps, deepfake videos and voice cloning, with liveness detection among the built-in safeguards.
