Who are these Frankenstein monsters walking silently in our midst? Like Mary Shelley’s original character, they are made up of parts of real people.
Fraudsters have found a new, innovative way to steal money. As data breaches proliferate, countless stolen identity credentials and other critical information are ending up in the hands of these fraudsters, who are then able to pose as legitimate users behind the cloak of the Internet. Today, synthetic identities – based on a combination of real and false personal data – are being used to defraud organisations like banks, lenders, insurers, telecommunications services providers, and even governments.
Birth of the modern Frankenstein monster
Identity abuse attacks are becoming more frequent, global, and organised. The ThreatMetrix Identity Abuse Index revealed that during the Q4 2017 holiday period, such attacks accounted for over 10% of all network traffic, and the intensity of the attacks continues to rise.
Moreover, a new breed of cybercriminals is inventing new identities using fabricated information based partly on real people who are either inactive or not yet registered in the credit system, such as children, the elderly and the deceased. They then combine real and fake identity elements, such as name, date of birth and address, to create an entirely new ‘Frankenstein’ identity cobbled together from pieces of co-opted information.
Fraudsters use these synthetic identities to apply for credit cards and automobile loans, and to gain access to various other types of credit facilities. These ‘applicants’ may get rejected by banks and financial institutions initially due to the lack of a credit profile. However, the multiple application attempts establish a placeholder profile among credit bureaus. Fraudsters can then gain a foothold and establish credit. The challenge is that these fraud attempts tend to go undetected for years, as there is no real consumer victim to alert financial institutions or the authorities.
Stolen data is now more likely than ever to be sold and traded illegally online, fuelling the market for synthetic identity fraud. In 2017, research by the Armor Threat Resistance Unit found personal data being exchanged on the Dark Web for as little as US$10, and in some cases for as much as US$800.
Cultivating synthetic identities involves a lot of time, patience and attention to detail, but the payoffs can be substantial once a fraudster decides to ‘cash out’ or ‘bust out’ by defaulting on loans or going on huge shopping sprees – suddenly racking up massive debts with no intention of paying them off. Often, a financial institution learns that an account is part of a synthetic scheme only when a presumably ‘good consumer’ with a clean financial record suddenly stops paying, by which time it is too late to recover the lost money.
Synthetic identities can be used to perform more complex and elaborate schemes as well. One example is when fraud syndicates open new bank accounts using these identities to funnel and launder stolen money that is ultimately used to further perpetrate criminal activities or finance terrorism. This may sound like what we see in crime or spy movies, but the implications are all too real. Synthetic identity fraud is costing businesses billions of dollars globally.
Fighting Frankenstein at his own game
A Frost & Sullivan study commissioned by Microsoft revealed that the potential economic loss across Asia Pacific due to cybersecurity incidents can hit a staggering US$1.745 trillion, more than seven percent of the region’s total GDP of US$24.3 trillion.
Recently, we saw what is regarded as the worst cyberattack in the history of Singapore, in which hackers stole personal particulars belonging to 1.5 million healthcare patients. Of these, 160,000 people, including Singapore’s prime minister, had their outpatient prescription records stolen. Just weeks later, Hong Kong’s Department of Health was also hit, after three of its computers were infected by ransomware, leaving its data inaccessible.
These cases in Asia Pacific have shown that nobody is out of reach when it comes to identity data.
What then can be done to defend against such a determined, diabolical enemy?
Consumer behaviour is becoming more complex and multifaceted: as digitalisation permeates our lifestyles, consumers transact across channels, devices, and locations.
The key to detecting synthetic identities is the ability to analyse the various pieces of information an individual creates as they go about their daily lives, both on and offline. Businesses need to have the most current data available about customer identities – both physical and digital – to be able to identify synthetic identities as they emerge and transact.
Businesses need the ability to understand the links between seemingly disparate pieces of information to recognise patterns that show how consumers’ identities develop and transact. Behavioural biometrics, such as mouse movements and keystroke dynamics, may also provide an additional layer that signals whether further investigation is needed.
Business rules, behavioural analytics and machine learning can be combined into an integrated framework that helps organisations make real-time decisions, providing business agility and dynamic adaptation to changing fraud and user trends. Businesses can then incorporate their tolerance for risk and operational metrics based on what the customer is attempting to do, consistently across all digital devices and channels.
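To make this concrete, the sketch below shows how a static business rule and a simple behavioural score might be combined into a single real-time accept/review/reject decision. It is purely illustrative: the field names, thresholds, and scoring logic are hypothetical assumptions, not ThreatMetrix’s actual methodology, and a production system would use trained machine-learning models rather than hand-set weights.

```python
# Illustrative sketch only: combining a business rule with a toy
# behavioural score to reach a real-time decision. All field names
# and thresholds are hypothetical.

def behavioural_risk(typing_speed_cps: float, device_seen_before: bool) -> float:
    """Toy behavioural score in [0, 1]; higher means riskier."""
    score = 0.0
    if typing_speed_cps > 12.0:   # implausibly fast typing suggests scripted input
        score += 0.5
    if not device_seen_before:    # an unrecognised device adds risk
        score += 0.3
    return min(score, 1.0)

def decide(application: dict, risk_tolerance: float = 0.6) -> str:
    # Business rule: a very thin credit file goes straight to manual review,
    # since synthetic identities often start with no real credit history.
    if application["credit_file_age_months"] < 6:
        return "review"
    # Behavioural layer: score the session against the tolerance threshold.
    score = behavioural_risk(application["typing_speed_cps"],
                             application["device_seen_before"])
    return "reject" if score >= risk_tolerance else "accept"

print(decide({"credit_file_age_months": 24,
              "typing_speed_cps": 15.0,
              "device_seen_before": False}))  # prints "reject"
```

The point of the layered structure is that each signal is weak on its own; it is the combination – rule, behaviour, and a tunable risk tolerance – that separates a synthetic identity from a legitimate but unusual customer.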
Staying vigilant and using the right tools
Fraud is no longer static in nature but continuously changes, taking many shapes and forms. In a digital world, relying on in-person interactions to verify a person’s existence is no longer practical.
If there is anything we can learn from these developments, it is that staying ahead of the curve on fraud is no longer optional but a requirement for enterprises and merchants. Businesses must stay vigilant, and the ability to accurately detect and block synthetic identities is crucial. Only by combining historical and real-time data, and leveraging machine learning to analyse individual behaviour across channels, can organisations discover the complex patterns needed to detect and block synthetic identities without causing friction for real customers.
Alisdair Faulkner is chief identity officer at ThreatMetrix.