On a Friday afternoon, Bill was on his way home from work when he received a call that made him take the next U-turn back to his office. It was one of those calls he dedicated all of his working hours to avoiding. He was not given much detail over the phone, but it seemed that Andre, someone working in the accounts payable department, had just fallen victim to a scam and made a hefty payment. A scam? Bill recalled all the training videos he had put this department through. What went wrong?
“They had inside information – it was so believable!” were some of Andre’s first words when he saw Bill, the head of their cyber security team. Someone had called Andre a few minutes before his shift ended, claiming to be an employee from a partner company they had recently started collaborating with for an important project. The person on the call sounded distressed and almost panicked. They claimed that one of their invoices had not yet been paid. Since the project’s next phase was scheduled to start on Monday, this was their last chance to get the payment through. Alternatively, they would have to temporarily freeze the project (which would have a domino effect on the project’s overall timeline and deliverables). All of this sounded entirely plausible to Andre. They were indeed collaborating on the project the caller mentioned, the timeline was accurate, and the names the caller mentioned were indeed the project owners. The caller insisted on sending the invoice via email, and Andre processed that invoice. But he was left with a strange feeling. So he went back to his database and checked the account details. Sure enough, they were different. But it was too late.
Bill immediately realized it was a spear-phishing attack combining vishing (a scam carried out over the phone) and, potentially, a phishing email (the attachment and the overall email still needed to be examined). He now had to report the incident and investigate the matter. As the investigation later showed, the caller had spoofed the phone number to make it look as if the call was indeed coming from the partner company. That spoofing was one of the main reasons Andre trusted that the call was legitimate, and it is one of the main tools cyber attackers use to initiate trust with their targets.
Protecting an organization from social engineering attacks is not an easy task. Rather, it is an asymmetric game in which information, education, and strategy are paramount. Social engineering is a pretty attractive option for cybercriminals: it is a low-cost, low-risk, and high-reward approach. While security technology has been advancing, human vulnerabilities have remained the same. Human stimulus-response patterns are consistent, and exploiting them is consistently successful. It is not surprising that most of our industry's threat landscape and cybersecurity insight reports (including the ones from ENISA and the World Economic Forum) have listed social engineering attacks and human error among the top three threats over the past few years. This is not a trend that seems to be going away. Rather, it looks like cybercriminals continuously find more ways to exploit humans within their attack kill-chains.
There are strategic uncertainties and risks when an organization has limited knowledge of the social engineering kill-chain, the information attackers exploit, and how they find it. Were the details that Andre was presented with "inside information"? Or were they publicly available? Was there a process or policy that would have prevented this spontaneous, unchecked transaction? Could some aspects of this attack have been predicted and potentially prevented or detected? While the cyber security team can proactively identify and manage certain risks and potential attack vectors, we must remember that cyber security is a shared responsibility. People with access to systems, assets, and information are also responsible for their protection. They need to have the awareness and skills necessary to handle this responsibility. They need to be able to recognize the red flags, follow the process, and respond to a social engineering attack when it targets them.
On the other hand, cyber security departments must try to ensure that as few social engineering attacks as possible reach the rest of the organization's employees. But we often have limited information about which scenario or psychological manipulation techniques an attacker will use. With so many uncertainties involved, we need to start with what we do know: the aspects of the social engineering kill-chain that are constant and get repeated. And then take it from there.
Social Engineering Kill-Chain
When analyzing a social engineering attack, we look back into all the steps a social engineer had to take to plan and then execute their attack. While the tactics and techniques can vary greatly (with some being used more frequently than others), the procedure involved in executing most social engineering attacks tends to be noticeably similar.
Most social engineering attacks involve two broad phases:
- Planning, Researching & Preparation
- Execution & Exit
The first phase (planning, researching, and preparation) almost always involves the attacker taking the following steps (in no particular order):
Reconnaissance
Attackers scout for potential targets online or study the ones they have already selected. Reconnaissance refers to all the steps an attacker may take to collect information on their target(s), online or offline. This phase may last for a few hours (until an opportunistic piece of information is found) or even years, especially in cases of prolonged, elaborate attack schemes.
Information is most often gathered through the internet and open sources (OSINT), human sources and covert interviewing techniques (HUMINT), as well as physical surveillance and collection techniques. In recent years, OSINT has been the most frequent reconnaissance method against both corporate and individual targets.
In the case of Bill's company, once we traced back the information the attacker used in their cover story, we were able to identify that the details the caller mentioned could, in fact, be found online. They appeared to be "inside information," but they were not. The new project partnership had been announced in local news portals along with the names of the project owners. In two separate interviews, the project owners accidentally disclosed timelines and other details that should have remained confidential. Finding suitable targets within the accounts payable department, along with their email addresses and phone numbers, was a piece of cake, thanks to platforms like LinkedIn and other sales & marketing databases that make this data available. Putting all these pieces together and weaving them into a good cover story (pretext), along with some social skills, made this social engineering attack possible and evidently effective.
Target Selection
A target may be an organization (in which case we usually observe mass-scale attacks like phishing emails) or an individual within that organization. Two factors tend to interplay when it comes to targeting:
- The target's value, or degree of vulnerability
- The adversary's resources (time, money, capability, determination)
We like to talk about "low-hanging fruit" in our industry. We might as well start talking about "Achilles' heels": organizations with otherwise mature security programs that still neglect or underinvest in the human factor, and get targeted or compromised because of it.
Pretexting
Pretexting is the cover story under which a social engineer approaches their target, makes innocent (and not-so-innocent) requests for information or actions, and "covers" the execution of their attack. The sheep's clothing hides the wolf underneath, along with their intentions. A good pretext is based on quality information gathered on the target; therefore, it tends to come after the information collection phase.
Oh! But could this be an adversary’s Achilles’ heel? What if they believed that the information they collected was accurate but was, in reality, spoilt? Wouldn’t a pretext fail if the data it was based on had strategic flaws and inaccuracies? A harmless disinformation campaign, in fact? For now, this is just a small serving of food for thought.
After planning, researching, and preparing a social engineering attack, the next natural step is to execute it. Most of the time, protection against these next kill-chain steps moves out of the hands of the security team and into the hands of the individual employee facing the attack. Briefly described:
Approach & Trust Building
The adversary approaches an employee under their pretext, manipulates and hijacks their cognitive processing, and builds a certain level of rapport and trust with them. Based on lies. Trust building may occur within a few minutes, or it may take months. But once it is there, the attacker knows that they have placed a hook deep within their future victim.
Having established a certain level of trust with their wolf in sheep’s clothing (the social engineer), the targeted individual sees little reason not to comply with the attacker’s requests. Even if they have some second thoughts, the social engineer can often sense or predict this hesitation and offers justifications.
Ideally, the targeted individual will realize what is happening and cut the attacker short before reporting them. But other times, especially when employees have not received adequate training, they may simply not know how to handle and respond to the social engineer's pressure. They will ultimately comply with the requests. Then they will hide the incident by not reporting it to anyone, for fear that they might get in trouble. This is not the security culture you want to have.
Exfiltration & Exit
The social engineer's goals have been successfully met (entirely or partially), and they are now ready to finalize their execution and exit the interaction with the target. Sometimes they care about a clean exfiltration – one in which their target never realizes that they have been the victim of an attack – but other times, they don't.
In all fairness, some social engineering schemes and scenarios tend to get replicated and repeated for years. Some organizations base their entire awareness and defense strategies on those exact scenarios. It is not a bad thing to do, but it runs the risk of leaving employees and the organization with a false sense of security and a wrong belief that they know what social engineering attacks are all about. Then, when a more tailored and well-researched approach comes around, they are unable to identify it or defend against it.
On the other hand, security and threat intelligence professionals need to be proactive about identifying and handling their organization's risks for social engineering attacks. They need to be aware of the information exposing their organization online and the risks it might pose. Open-source intelligence is often neglected as a vulnerability identification tool, even though it is one of the tools adversaries use most frequently, basing their modus operandi and success on the quality of the information they have gathered. It is good to conduct a corporate open-source intelligence investigation on your own organization to proactively research and identify information that exposes vulnerabilities and organizational risks.
Conducting corporate open-source intelligence is a relatively large topic. But here are some of the areas that tend to get neglected but provide valuable information during a corporate open-source intelligence investigation:
- Your website. Job openings that offer too many details on specific technologies the company uses, marketing campaigns or blog posts that disclose internal processes, projects or specific names (when there is no need to include them), or other sensitive information. Assume that your organization’s website, social media accounts, forum, and/or blogs are read and combed thoroughly by threat actors.
- Case studies from your business partners. In an effort to create a descriptive and interesting case study that will attract additional clients, business partners may disclose more information than they should on the services they provide to you and your organizational structure, processes, posture, and more. In some cases, they end up disclosing confidential information. Ensure that you know what other partners are sharing about your organization online.
- Interviews & Media Articles/Videos. This can be a double-edged sword. While you do need to know if a company representative has accidentally disclosed sensitive or confidential information to any media, your organization must also have a well-structured and well-communicated information classification plan. We cannot expect people not to disclose sensitive information if they do not know which of the information they handle should not be publicly discussed. Regardless, threat actors will look for those accidental leaks. It is better if you find them first.
Similarly, journalists might also leak information that should not become publicly available. These can include descriptions of specific internal security measures (yes, we have seen this too), company offices and desks containing notes and papers with classified information, and more.
- Files with confidential information. Sometimes, internal documents with confidential information accidentally end up on the clear web. These can vary from onboarding guides to username and password lists. Using well-thought-out search queries and the "filetype:" dork, you can search for all types of documents regarding your company on the clear web.
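As a rough illustration of how such "filetype:" queries can be assembled for a self-audit, here is a minimal Python sketch. The domain example.com, the file types, and the keyword list are all illustrative placeholders, not a definitive checklist; adjust them to your own organization and review the search results manually:

```python
# Build search-engine dork strings for auditing your own organization's
# public document exposure. All values below are illustrative examples.

SENSITIVE_FILETYPES = ["pdf", "docx", "xlsx", "pptx", "txt", "csv"]
KEYWORDS = ["confidential", "internal use only", "onboarding", "password"]

def build_dorks(domain: str) -> list[str]:
    """Return query strings combining site:, filetype:, and sensitive keywords."""
    queries = []
    for ft in SENSITIVE_FILETYPES:
        # Broad query: any indexed document of this type on the domain.
        queries.append(f"site:{domain} filetype:{ft}")
        # Narrower queries: documents of this type containing a sensitive phrase.
        for kw in KEYWORDS:
            queries.append(f'site:{domain} filetype:{ft} "{kw}"')
    return queries

if __name__ == "__main__":
    for query in build_dorks("example.com"):
        print(query)
```

Each printed line can be pasted into a search engine; the point is not automation but coverage, so that the same document types an adversary would hunt for are checked systematically rather than ad hoc.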
Once you have identified information exposing vulnerabilities and risks, the next step is to manage it by eliminating, limiting, or blurring what is available. The internet never forgets, so it is safe to assume that some of the data that is already public could still be retrieved, even after deletion, by a determined adversary. Still, it is worth making an effort to remove it, as it will add a layer of difficulty for a portion of adversaries.
Use the findings of the corporate open-source intelligence investigation to identify potential attack vectors for social engineering attacks and to inform your cyber security strategy and decision-making.
In some cases, you may no longer be able to interfere with the publicly available information. Try to incorporate it, and the risks it poses for a social engineering attack, into your cyber security awareness training. This approach will help employees understand that what sometimes sounds like "inside information" might not be, and it will help them recognize pretexts that use that information. When a client requests it, we use our open-source intelligence findings to develop exercises and social engineering attack scenarios (based on our knowledge and experience). We then incorporate them as practical, interactive examples in the cyber security awareness training or workshops offered to those specific clients. Cyber security training suddenly feels more relevant to employees and their everyday reality. It is more intriguing, and they become curious to participate. The lessons from those training programs "stick" better, and employees become better able to detect a social engineering attempt.
That brings us to the last key point of this article. While we can (and should) mitigate certain risks, social engineers will find a way to reach an employee and attempt to manipulate them. Humans are and will remain an additional layer of defense for organizations. They need to be able to identify an attack, thwart it, and report it. Awareness training and teaching employees to implement specific cyber security best practices are still important. Cyber security is and will remain a shared responsibility. May each of us do our part to the best of our abilities.