AI-Based Threats

Col. Inderjeet Singh, Cyber Security Expert and Director General of the Cyber Security Association of India, explains the various types of Artificial Intelligence (AI)-based threats, such as deepfakes, market bombing, and others.

In recent years, Artificial Intelligence (AI) and Machine Learning (ML) technologies have seen remarkable advances in capability, accessibility, and widespread deployment, and their growth shows no signs of slowing down. While only the most prominent AI technology is promoted as such, learning-based approaches are used far more often behind the scenes. AI pervades every facet of the modern connected world, from route-finding on digital maps to language translation, biometric identification to political campaigns, industrial process management to food supply logistics, and banking and finance to healthcare.

Col. Inderjeet explains that deepfakes are the most serious AI crime threat. Additionally, alongside the adoption of AI for productive use cases, AI-enabled cybercrimes such as identity fraud, supported by deepfakes or synthetic pictures and deepfake videos, are already growing.

Deepfakes are among the fastest-growing tools employed by fraudsters. Fraudsters are increasingly turning to synthetic identities to open new accounts. These identities are either entirely fabricated or a unique combination of false data and details that have been stolen or altered. Personally identifiable information (PII) can be hacked from databases, phished from an unsuspecting person, or bought on the darknet. Because of the limited impact on those whose PII has been compromised or stolen, this type of fraud often goes unnoticed for longer than traditional identity fraud.

Also Read: Deepfakes: The dark side of Artificial Intelligence

Deepfakes are the result of using AI to digitally recreate an individual’s appearance with great accuracy, enabling someone to make it seem as though a person said something they never said, or appeared somewhere they have never been. YouTube is rife with examples of varying quality; however, it is easy to see how a high-quality deepfake could be incriminating to someone who is targeted maliciously.
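Detecting such fakes is an active research area well beyond this article's scope, but the basic idea of a consistency check can be sketched as a toy. The function and data below are invented for illustration: real detectors use trained models on actual video, not a hand-picked threshold over flat pixel lists.

```python
def frame_jumps(frames, threshold=50.0):
    """Flag frame indices where the mean absolute pixel difference from
    the previous frame exceeds a threshold (a toy consistency check)."""
    flagged = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1])) / len(frames[i])
        if diff > threshold:
            flagged.append(i)
    return flagged

# A smooth "video" (each frame is a flat list of pixel values) with one
# abrupt splice at frame 3, a crude stand-in for a tampered segment.
video = [[10] * 4, [12] * 4, [14] * 4, [200] * 4, [202] * 4]
print(frame_jumps(video))  # → [3]
```

A convincing deepfake defeats exactly this kind of naive check, which is why the article's point about detection difficulty stands.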

Deepfakes are ranked as one of the most serious AI crime threats, based on the wide range of applications they can be put to in criminal activities and terrorism. When the term was first coined, the concept of deepfakes triggered widespread concern, principally focused on the misuse of this technology in spreading disinformation, particularly in politics. Another concern that emerged revolved around bad actors using deepfakes for extortion, blackmail, and fraud for financial gain.

The rise of deepfakes and synthetic AI-enabled technology makes it easier for fraudsters to generate very realistic-looking pictures or videos of individuals for these synthetic identities and to commit serious levels of fraud. There are many mobile apps that enable anyone to convincingly replace the faces of celebrities with their own, even in videos, which then become viral social media content.

Fake audio or video content has been ranked by experts as the most worrisome use of AI in terms of its potential applications for crime or cyber terrorism, according to researchers from University College London, who have released a ranking of what experts believe to be the most serious AI crime threats.

One of the studies, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified twenty ways that AI could be used to facilitate crime over the next fifteen years. These were ranked in order of concern, based on the harm they could cause, their potential for criminal profit or gain, how easy they would be to carry out, and how difficult they would be to stop.
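A crude stand-in for that four-factor ranking can be sketched in code. The threat names come from the article, but the scores below are invented placeholders, not figures from the Crime Science study, and averaging is only one simple way to combine the factors.

```python
# Each threat is scored 1-5 on four factors: harm, criminal profit,
# achievability, and difficulty to defeat. Scores are illustrative only.
threats = {
    "audio/video impersonation":      (5, 5, 4, 5),
    "tailored phishing":              (4, 4, 5, 4),
    "driverless vehicles as weapons": (5, 2, 3, 4),
    "forgery":                        (2, 2, 2, 2),
    "AI-assisted stalking":           (2, 1, 3, 2),
}

def concern_score(factors):
    """Collapse the four factor scores into one concern score by averaging."""
    return sum(factors) / len(factors)

ranking = sorted(threats, key=lambda name: concern_score(threats[name]), reverse=True)
print(ranking[0])  # → audio/video impersonation
```

Under these toy numbers, audio/video impersonation tops the list, mirroring the study's conclusion that deepfakes score highest across all four dimensions.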

Also Read: Dealing with Deepfakes and FakeNews

Most troubling is audio/video impersonation, followed by tailored phishing campaigns and driverless vehicles being used as weapons. Fake content would be difficult to detect and stop because it could serve a range of aims, from discrediting an influential person to extracting funds by impersonating a couple’s son or daughter in a video call. Such content could cause widespread distrust of audio and visual evidence, which would itself be a social harm.

Col. Inderjeet explains, “Aside from the generation of fake content, five other AI-enabled crimes were judged to be of very high concern. These are: the use of driverless vehicles as weapons, the creation of tailored spear-phishing attacks, disrupting AI-controlled systems, gathering online data for the purposes of large-scale blackmail, and AI-authored fake news. Some of the least worrying threats include forgery, AI-authored fake reviews, and AI-assisted stalking.”

Unlike many traditional crimes, crimes in the digital realm can easily be shared, repeated, and even sold, allowing criminal techniques to be marketed and crime to be provided as a service. This means criminals may be able to outsource the more difficult aspects of their AI-based crime. These crimes were classified as low, medium, or high threats.

Low Threats

Low Threats: Low threats offer minimal rewards to criminals, since they do little harm and generate small revenues, and they are frequently difficult to carry out and easy to defeat. Forgery ranked first, followed by AI-assisted stalking and some types of AI-authored fake news, then bias exploitation (the malicious use of platform algorithms), burglar bots (small remote drones with enough AI to assist a break-in by stealing keys or opening doors), and evading detection by AI systems.


Moderate Threats

Moderate Threats: These threats were found to be more neutral, with the four factors averaging out to be neither favourable nor unfavourable to the criminal, with a few outliers that still balanced things out. These eight dangers were split into two groups based on their severity. The first group was made up of:

  • Market Bombing (where financial markets are manipulated by trade patterns)
  • Tricking Face Recognition
  • Online Eviction (or blocking someone from access to essential online services)
  • Autonomous Attack Drones for smuggling and transport disruptions.

The second group in the moderate range included:

  • Learning-Based Cyberattacks
  • Artificially Intelligent DDoS Attack
  • Snake Oil, where fake AI is sold as a part of a misrepresented service
  • Data Poisoning and Military Robots

The introduction of erroneous data into a machine-learning programme and the takeover of autonomous battlefield technologies both pose serious threats.
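Data poisoning can be illustrated with a minimal sketch. The nearest-centroid classifier below is a stand-in for a real learner, and all the data is synthetic: the point is only that flipping a few training labels drags a class centroid across the boundary, so a previously correct prediction flips.

```python
# Toy data poisoning: relabelling a few training points shifts the class
# centroids of a nearest-centroid classifier and misclassifies boundary points.
def centroid(points):
    return sum(points) / len(points)

def train(data):
    """data: list of (value, label) pairs. Returns per-class centroids."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

clean = [(0.0, "A"), (1.0, "A"), (2.0, "A"), (8.0, "B"), (9.0, "B"), (10.0, "B")]
# The attacker relabels two "A" points as "B".
poisoned = [(0.0, "A"), (1.0, "B"), (2.0, "B"), (8.0, "B"), (9.0, "B"), (10.0, "B")]

print(predict(train(clean), 4.0))     # → A (centroids at 1.0 and 9.0)
print(predict(train(poisoned), 4.0))  # → B (B's centroid dragged to 6.0)
```

The same mechanism, scaled up, is what makes poisoning training pipelines attractive: the model itself looks intact while its decision boundary has quietly moved.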

High Threats


High Threats: Finally, there were several threats that the expert panels ranked as very concerning. Crimes such as:

  • Disrupting AI-Controlled Systems
  • Inflammatory AI-Authored Fake News
  • Wide-Scale Blackmail
  • Tailored Phishing (or what we usually describe as spear phishing)
  • Use of Autonomous Vehicles as Weapons

The use of audio/visual impersonation, also known as deepfakes, was the threat that scored as the most useful to criminals across all four variables.

Col. Inderjeet continues, “Of course, just because certain dangers, such as deepfakes, are far more serious than others does not mean you should overlook the rest. While having someone literally put words in your mouth is certainly unpleasant, having a slew of nasty reviews posted online, whether or not they were created by AI, might be equally so.”

Business prospects are steadily shifting to the Internet in an increasingly online environment. As a result, regardless of whether AI is involved, you must ensure that your company is safeguarded against all types of cyberattacks.

You can follow Col. Inderjeet on Twitter @inderbarara and on Instagram: inderbarara
