Why Addressing Bias In AI Algorithms Matters

Attacks On AI Systems Are Different From Traditional Attacks & Exploit Inherent Limitations In Underlying AI Algorithms That Cannot Be Fixed

Among the challenges arising in 2021 are addressing bias in artificial intelligence algorithms, new data privacy regulations, the shift toward stronger age verification and more. These are issues businesses need to face. To gain insight into these and other essential 2021 trends for businesses, Digital Journal caught up with Robert Prigge, CEO of Jumio.

Prigge explains: “Enterprises are becoming increasingly concerned about demographic bias in AI algorithms (race, age, gender) and its effect on their brand and potential to raise legal issues. Evaluating how vendors address demographic bias will become a top priority when selecting identity proofing solutions in 2021.” Prigge adds: “According to Gartner, more than 95 percent of RFPs for document-centric identity proofing (comparing a government-issued ID to a selfie) will contain clear requirements regarding minimizing demographic bias by 2022, an increase from fewer than 15 percent today. Vendors will increasingly need to have clear answers for organizations that want to know how a vendor’s AI “black box” was built, where the data originated and how representative the training data is of the broader population being served.”

In terms of forward thinking, Prigge says: “As organizations continue to adopt biometric-based facial recognition technology for identity verification, the industry must address the inherent bias in these systems. The topic of AI, data and ethnicity is not new, but it must come to a head in 2021. According to researchers at MIT who analyzed imagery datasets used to develop facial recognition technologies, 77 percent of images were male and 83 percent were white, pointing to one of the main reasons why systematic bias exists in facial recognition technology. In 2021, guidelines will be introduced to offset this systematic bias. Until that happens, organizations using facial recognition technology should be asking their technology providers how their algorithms are trained and ensure that their vendor is not training algorithms on purchased data sets.”

Identity fraud will become a national crisis

With the issue of fraud, Prigge finds: “As transactions have shifted online due to the COVID-19 pandemic, identity fraud will become a major concern across all sectors as institutions struggle to verify that their online customers are who they claim to be. In fact, fraudsters have seized opportunities provided by this shift to online transactions, causing networks’ fraud rates to increase by 60 percent. Not only was there more fraud attempted, but the dollar value of each attempted fraudulent transaction was also 5.5 percent higher than in the six months preceding the pandemic.”

“Organizations will shift from data-based approaches to identity proofing (such as using credit bureau or census data) to document-centric identity proofing (using a government-issued ID and a selfie) to verify online users. With traditional authentication methods and data-based identity proofing, there is no way to know whether the person logging in is the actual user or someone using readily available stolen information from the dark web. In 2021, enterprises will increasingly favor document-centric identity verification to deter fraudulent login attempts.”

Prigge also considers the public sector: “Government agencies and public institutions are likely to follow suit, as COVID-19 related scams have targeted 32 percent of people around the world, and the FBI has specifically flagged a spike in fraudulent unemployment insurance claims related to the pandemic. The FBI’s advice to look out for suspicious communications and charges doesn’t cover all instances of unemployment fraud, as fraudsters are able to bypass these communication channels, file fraudulent claims and steal benefits. Government agencies will likely adapt to the modern fraud landscape by implementing stronger online identity verification to keep citizens safe in 2021 and beyond.”

Stronger age verification will be essential in 2021

Tapping into the zeitgeist, Prigge states: “As the social harm epidemic continues to accelerate, with children being bullied, subjected to predators and influenced by harmful content at a rapid rate online, technology companies need to take responsibility to protect minors on their platforms. The U.S. is likely to follow in the footsteps of Ofcom, the UK’s first Internet watchdog, by implementing new legislation aimed at mitigating social harm, enforcing age verification and removing legal protections for tech companies that fail to police illegal content. And we’re likely to see enterprises start preparing for these laws in 2021. As learning, communication and social interaction continue remotely into 2021, we’ll see online businesses implement stronger age verification methods (beyond self-reported age) to regulate age-restricted content and purchases while policing age on social platforms to protect minors and ultimately take a stand against social harm.”

The conversation about online voting for the 2024 U.S. election will start

Stepping into politics, Prigge notes: “To ensure everyone has an equal opportunity to vote in the 2024 election, we can expect to see security professionals and the Cybersecurity and Infrastructure Security Agency (CISA) begin discussions around online voting. As the technology to ensure safe and secure online voting is available, we’ll see whether online voting, coupled with online identity proofing, becomes a reality as a safer, more secure and cheaper alternative to mail-in and in-person voting.”

Stronger and more enforceable data privacy regulations will rise

Another political area is data privacy:

“With the passing of the California Privacy Rights and Enforcement Act of 2020 and pending legislation on the Improving Digital Identity Act, it’s clear that protecting consumer data will be a top priority in 2021. States are likely to follow California in initiating legislation to expand consumers’ rights and prevent companies from collecting and sharing personal data without prior consent or knowledge. We’ll likely see the Improving Digital Identity Act passed, which will create a task force to protect individual privacy, direct the National Institute of Standards and Technology (NIST) to create new standards for government agencies’ digital identity verification services and establish a grant program to help states implement more secure digital identity verification.”

Credential stuffing will become the #1 global cybersecurity threat as account takeovers become mainstream

Turning his attention to cybersecurity, Prigge finds: “The 36 billion records breached in 2020 will open the door for account takeover attacks via credential stuffing — a type of cyberattack where automated bots use exposed account credentials to gain unauthorized access to user accounts. As 71 percent of accounts are protected by passwords reused on multiple websites, credential stuffing will become the top global cybersecurity threat, as attacks will succeed in gaining access to multiple accounts, including social media profiles, education portals, banking applications, healthcare sites and email domains. Once logged in, fraudsters can steal benefits, transfer funds and lock the real user out. Traditional authentication methods (e.g., knowledge-based authentication and the common password) can no longer be relied on to keep accounts protected. In 2021, enterprises will look to stronger forms of biometric-based authentication to keep user data secured and out of the hands of fraudsters.”
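The mechanics Prigge describes — bots replaying breached username/password pairs across many accounts — also suggest one of the simpler defensive signals: a single source address failing logins against an unusually large number of distinct usernames. The sketch below is purely illustrative (the names `LoginAttempt` and `flag_stuffing_ips` are hypothetical, not from any product mentioned in this article), assuming login events are available as simple records:

```python
# Illustrative credential-stuffing heuristic: flag source IPs whose failed
# logins span more distinct usernames than a real user plausibly would.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    ip: str
    username: str
    success: bool

def flag_stuffing_ips(attempts, max_distinct_users=3):
    """Return IPs with failed logins across many distinct usernames --
    the signature of a bot replaying a breached credential list."""
    failed_users = defaultdict(set)
    for a in attempts:
        if not a.success:
            failed_users[a.ip].add(a.username)
    return {ip for ip, users in failed_users.items()
            if len(users) > max_distinct_users}

attempts = (
    # a real user mistyping their own password twice
    [LoginAttempt("10.0.0.5", "alice", False)] * 2
    # a bot spraying a leaked credential list from one address
    + [LoginAttempt("203.0.113.9", f"user{i}", False) for i in range(50)]
)
print(flag_stuffing_ips(attempts))  # -> {'203.0.113.9'}
```

Real deployments combine many more signals (device fingerprints, IP reputation, velocity across the whole network), which is why heuristics like this are only one layer alongside the stronger authentication the article advocates.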

Criminals will weaponize AI in new ways for fraud

In terms of specific cyber-risks, Prigge adds: “The past decade has given rise to an entire cybercrime ecosystem on the dark web. Increasingly, cybercriminals have gained access to new and emerging technologies to automate their attacks on a massive scale. The dark web has also become a virtual watercooler for cybercriminals to share tips and tricks for scanning for vulnerabilities and perpetrating fraud. The evolution and sophistication of cybercrime will continue in 2021 as criminals leverage artificial intelligence and bots more than ever before.” Furthermore, the expert says: “Just as organizations have adopted artificial intelligence to shore up their defenses and thwart fraud, fraudsters are using artificial intelligence to carry out attacks at scale. In 2021 we will essentially witness an AI arms race, as companies attempt to stay ahead of the attack curve while criminals aim to overtake it. We anticipate this at unprecedented levels across several key areas.”

Machine Learning

Bad actors will leverage machine learning (ML) to accelerate attacks on networks and systems, using AI to pinpoint vulnerabilities. As companies continue to digitally transform, spurred by the COVID-19 pandemic, we will witness more fraudsters rapidly leveraging ML to identify and exploit security gaps.

Attacks on AI

Yes, AI systems can be hacked. Attacks on AI systems are different from traditional attacks and exploit inherent limitations in the underlying AI algorithms that cannot be fixed. The end goal is to manipulate an AI system into altering its behavior, which could have widespread and damaging repercussions, as AI is now a core component in critical systems across all industries. Imagine if someone changed, at scale, how data is classified and where it is stored. We expect more attacks on AI systems in 2021.
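One well-documented class of such attacks is the adversarial example: a tiny, deliberately chosen perturbation of the input flips a model's decision without any change to the model itself. The toy sketch below (weights and inputs are invented for demonstration, and the model is a deliberately simple linear classifier) illustrates the gradient-sign idea behind attacks like FGSM:

```python
# Toy illustration of an evasion attack on an AI model: a small,
# targeted nudge to the input flips a linear classifier's output.
# All numbers here are made up for demonstration purposes.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # "trained" weights of a toy linear model
b = 0.1

def predict(x):
    """Classify input x as 1 (positive score) or 0 (negative score)."""
    return 1 if x @ w + b > 0 else 0

x = np.array([0.5, 0.1, 0.2])    # legitimate input, classified as 1

# FGSM-style perturbation: step each feature by epsilon in the direction
# that lowers the score (the sign of the gradient of the score w.r.t. x,
# which for a linear model is just sign(w)).
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # -> 1 0: the small nudge flips the label
```

The perturbation is bounded by epsilon per feature, so to a human the adversarial input looks essentially identical to the original, yet the model's answer reverses — the "inherent limitation" the prediction above refers to.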

AI Spear-Phishing Attacks

AI will be used to increase the precision of phishing attacks in 2021. AI-powered spear-phishing email campaigns are hyper-targeted with a specific audience in mind. Scouting information from social media and tailoring attacks to a specific victim can increase the click-through rate by as much as 40 times, and all of this can be automated through sophisticated AI technology. In 2021, cybercriminals will continue to model phishing attacks after human behavior, replicating a specific person's language or tone, to drive higher ROI on their attack investments.

Deepfake Videos

Deepfake technology uses AI to manipulate or synthesize existing imagery to replace someone’s likeness, closely replicating both their face and voice. Deepfake technology was increasingly leveraged for fraud in 2020. As more companies adopt biometric verification solutions in 2021, deepfakes will be a highly coveted tool for fraudsters seeking access to consumer accounts. Conversely, technology capable of identifying deepfakes will be of equal importance to organizations leveraging digital identity verification solutions. Organizations must be sure any solution they implement is sophisticated enough to stop these growing attacks.

This news was originally published at Digital Journal