
Introduction/Overview

“Regardless of the technology in place to protect data, people still represent the biggest threat.”i

Alex Ryskin, IT director for the laser laboratories at the University of Rochester, NY

In Superman III, Richard Pryor’s character, Gus Gorman, devises a scheme to skim money from accounts by “penny shaving,” or “salami slicing.” Gorman then defies the street wisdom about staying undetected (show no unusual behavior) by roaring into the employee parking lot in a shiny new, bright-red sports car.


Often, the behavior associated with insider threat is more subtle, although the aberrant behavioral changes of US spies Aldrich Ames and Robert Hanssen show that even seemingly obvious signs can be missed. The signs that pointed to Ames as a double agent included $33,500 in credit card debt (motivation), physical security violations (attitude) and living beyond his $70,000 salary (the cash purchase of his $540,000 home and a $50,000 Jaguar), all clues of a secondary income. Hanssen likewise violated acceptable use policies, had infidelity and financial problems, did not follow FBI procedural guidelines, and was never subjected to a polygraph test. Ames’ treasonous activities earned him $2.7 million over four years and Hanssen’s earned him $1.4 million. Their activities also cost lives.

These examples illustrate that even the most trusted insiders are capable of behavior contrary to the policies set forth by their organizations, and that such behavior often begins with small departures from the norm, a slippery slope toward major violations that harm an organization, its reputation, and those associated with it. They also demonstrate that red flags, small and large, often go unchecked or unnoticed until it is too late.

Both public and private sector organizations are subject to insider attacks. In fact, some have suggested that the OSI model, a widely accepted seven-layer representation of computing system services (physical, data link, network, transport, session, presentation, application), be expanded to include a “user” layer. As computing practices have evolved, end-users increasingly shape how applications and data are used and accessed, effectively creating technology scenarios that programmers had not anticipated. In an effort to begin modeling indicative or precursor behavior, the US Secret Service and Carnegie Mellon University researched insider threat patterns. Antonio Rucci (Oak Ridge National Laboratory) summarized their findings in the briefing he delivered at DEFCON 17 in 2009.

Insider Threat Landscape – USSS and Carnegie Mellon Findings ii

  • Most insider events were triggered by a negative event in the workplace.
  • Most perpetrators had prior disciplinary issues.
  • Most insider events were planned in advance.
  •  Only 17 percent of the insider events studied involved individuals with root access.
  • 87 percent of the attacks used very simple user commands that didn’t require any advanced knowledge.
  •  30 percent of the incidents took place at the insider’s home using remote access to the organization’s network.

These results validate the use of behavioral and event analysis to develop probability profiles for insiders. One of the historical challenges has been triaging high-probability insider threats: the employees who are at risk of contributing to the compromise of an organization’s information. The challenge is exacerbated by the extraordinary amount of information that passes through enterprise servers, the proliferation of computing devices, and fewer face-to-face encounters between security management and staff due, in part, to remote and flex-time work options.
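To make the triage idea concrete, the sketch below is a purely hypothetical illustration (not Intelligent ID’s scoring model): it weights a handful of indicators of the kind surfaced by the USSS/Carnegie Mellon research and ranks employees for review. A real system would derive its indicators and weights from an organization’s own HR and endpoint event data.

```python
from dataclasses import dataclass

# Hypothetical indicator weights loosely inspired by the findings listed above;
# a production system would learn these from observed events, not hard-code them.
INDICATOR_WEIGHTS = {
    "recent_negative_workplace_event": 3.0,
    "prior_disciplinary_issues": 2.0,
    "after_hours_remote_access": 1.5,
    "bulk_copy_to_removable_media": 2.5,
    "access_outside_job_role": 2.0,
}

@dataclass
class EmployeeActivityProfile:
    """Aggregated indicators for one insider, gathered from HR and endpoint sources."""
    name: str
    indicators: dict  # indicator name -> True/False

def triage_score(profile: EmployeeActivityProfile) -> float:
    """Sum the weights of the indicators present for this employee."""
    return sum(
        weight
        for indicator, weight in INDICATOR_WEIGHTS.items()
        if profile.indicators.get(indicator, False)
    )

def high_risk(profiles, threshold: float = 4.0):
    """Return (score, profile) pairs above a review threshold, highest first."""
    flagged = [(triage_score(p), p) for p in profiles]
    return sorted((pair for pair in flagged if pair[0] >= threshold),
                  key=lambda pair: pair[0], reverse=True)

if __name__ == "__main__":
    staff = [
        EmployeeActivityProfile("analyst_a", {"after_hours_remote_access": True}),
        EmployeeActivityProfile("analyst_b", {
            "recent_negative_workplace_event": True,
            "prior_disciplinary_issues": True,
            "bulk_copy_to_removable_media": True,
        }),
    ]
    for score, profile in high_risk(staff):
        print(f"{profile.name}: review priority score {score}")
```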

Insider Risks

The risk that insider threat poses to organizations is both direct and indirect. Fraud, embezzlement, workplace violence and the illicit release of company-proprietary or entrusted confidential information result directly from inappropriate insider actions, whether performed through malice, ignorance or oversight. Indirectly, insiders pose a threat to an organization through organizational liability (due to data breach or non-compliance with prevailing laws or regulations), attack vulnerability and lost productivity.

Data Breach

Results from the 2012 Data Breach Investigations Report (2012 DBIR) indicate that only 4 percent of the data breach incidents addressed in the report (those that meet certain size requirements in terms of financial impact) can be attributed to deliberate release of information by insiders. Not counted among those DBIR numbers are collateral data breaches that occur due to contributory negligence; for example, intentional or unintentional disregard for existing security policies. According to a 2012 report from Symantec, “The most frequent cause of data breaches (across all sectors) was theft or loss of a computer or other medium on which data is stored or transmitted, such as a USB key or a back-up medium. Theft or loss accounted for 34.3 percent of breaches that could lead to identities exposed.”iii Implementing a tool to monitor, inform and enforce organizational data use and storage policies can increase staff information situational awareness and mitigate the risk of compromise to data at rest (even if the data repository goes AWOL). According to a Ponemon Institute (PI) study of 116 organizations, “62 percent of mobile data-bearing devices that were lost or stolen contained sensitive or confidential information.”iv

Abuse of Privileged Information

As the obvious target for financial predators (“because that’s where the money is”v ), the financial industry has set the standard for conscientious implementation of information security standards and ranks consistently high in budgeted investment in security. And yet, a secure perimeter does not assure security in the vault. Security practices internally may be lax. There may be bad actors within the organization’s trusted community who have accumulated the privileges or knowledge required to subvert safeguards.

Segregation of duties, for example, is a key mechanism for preventing privilege abuses, especially the escalation of privileges, by reducing opportunities for collusion or for conflict of interest among employees. On a policy level, segregation of duties guards against situations in which the right hand knows entirely too much about what the left hand is doing.
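A minimal sketch of how such a segregation-of-duties check might be automated is shown below. The conflicting role pairs are hypothetical examples chosen for illustration, not a recommended control matrix; a real policy engine would load them from the organization’s own controls.

```python
# Hypothetical pairs of roles that should never be held by the same person.
CONFLICTING_ROLE_PAIRS = {
    frozenset({"trade_entry", "trade_reconciliation"}),
    frozenset({"payment_approval", "vendor_creation"}),
    frozenset({"system_admin", "audit_log_review"}),
}

def sod_violations(user_roles: dict) -> list:
    """Return (user, role_a, role_b) tuples where one user holds a conflicting pair."""
    violations = []
    for user, roles in user_roles.items():
        held = set(roles)
        for pair in CONFLICTING_ROLE_PAIRS:
            if pair <= held:
                role_a, role_b = sorted(pair)
                violations.append((user, role_a, role_b))
    return violations

if __name__ == "__main__":
    assignments = {
        "j.doe": ["trade_entry", "trade_reconciliation"],
        "a.smith": ["payment_approval"],
    }
    for user, a, b in sod_violations(assignments):
        print(f"SoD conflict: {user} holds both '{a}' and '{b}'")
```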

However, Jerome Kerviel’s personal memory was not degaussed when he changed work assignments. He became a trader at Societe Generale after working as a subject matter expert on its back-office system for booking transactions (a system known as Eliot). His intimate knowledge of when Eliot scheduled the nightly reconciliation of the day’s trades allowed him to hide his unauthorized transactions by deleting and re-entering them. He successfully posted trades that eventually grew to more than $70 billion in unauthorized positions, at enormous eventual cost to Societe Generale. Although his system activity was inappropriate, no protective measures were in place to detect the precursor behavior. The situation could have been prevented had Societe Generale deployed an intuitive system for aggregating an individual’s system activities for analysis with respect to data in process.
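As an illustration only (this is neither Societe Generale’s control environment nor Intelligent ID’s detection logic), the sketch below shows how aggregating one trader’s booking-system activity could surface a Kerviel-style pattern: repeatedly cancelling and re-entering the same position in the window just before a nightly reconciliation run. The schedule, event format, and thresholds are all assumptions.

```python
from datetime import datetime, time, timedelta

# Hypothetical reconciliation schedule; in reality this would come from the
# booking system's batch calendar (e.g., the nightly Eliot run described above).
RECON_START = time(22, 0)
WINDOW = timedelta(hours=2)  # look for cancel/re-book churn in the two hours prior

def suspicious_churn(events, min_cycles: int = 3):
    """
    events: list of (timestamp, user, action, trade_id) tuples.
    Flags (user, trade_id) pairs that are cancelled and re-booked several times
    shortly before the reconciliation run starts.
    """
    cycles = {}  # (user, trade_id) -> number of re-books inside the window
    for ts, user, action, trade_id in sorted(events):
        recon_dt = datetime.combine(ts.date(), RECON_START)
        if recon_dt - WINDOW <= ts <= recon_dt and action in ("cancel", "rebook"):
            key = (user, trade_id)
            cycles[key] = cycles.get(key, 0) + (1 if action == "rebook" else 0)
    return [key for key, n in cycles.items() if n >= min_cycles]

if __name__ == "__main__":
    base = datetime(2008, 1, 15, 20, 30)
    evts = []
    for i in range(4):  # four cancel/re-book cycles on the same position
        evts.append((base + timedelta(minutes=20 * i), "trader_x", "cancel", "T-42"))
        evts.append((base + timedelta(minutes=20 * i + 5), "trader_x", "rebook", "T-42"))
    print(suspicious_churn(evts))  # [('trader_x', 'T-42')]
```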

Accidental Release of Entrusted, Third-Party PII

The weightier enforcement provisions of the 2009 HITECH Act have encouraged medical facilities to take HIPAA requirements more seriously. The US Department of Health and Human Services (HHS) oversight power includes fines for facilities whose practices do not comply with the increasingly rigid standards for protecting confidential information. HHS levied a $4.3 million fine against Maryland-based Cignet Health in February 2011 (the first such fine levied since HIPAA’s 1996 passage),vi $1.5 million against Blue Cross Blue Shield of Tennessee in March 2012 and $1.7 million against the Alaska Department of Health and Social Services in June 2012.vii Other fines have also been levied, including fines against smaller medical practices and clinics. The advent of universal electronic health records makes it even more imperative that medical facilities protect the information under their custody and know what and where that information is.

The federal government could also benefit from a higher level of information situational awareness. A study released by Rapid7 in September 2012 estimated that the federal government unintentionally exposed approximately 94 million records containing citizens’ personally identifiable information (PII) between January 2009 and May 2012.viii The estimate is based on breaches reported to the Privacy Rights Clearinghouse. The majority of those records (76 million) were compromised when a hard drive containing VA medical information was not properly protected. This situation, too, could have been prevented had an intelligent system been in place to monitor policy compliance.

Acts of Vengeance

In the physical world, attacks against one’s employer may be called “going postal,” a nickname that emerged from a series of twenty-some incidents between 1986 and 1997 in which United States Postal Service employees went on shooting sprees that left more than 40 people dead. The parallel in the cyber world is the kind of havoc wreaked by a disgruntled employee who decides to sabotage internal control systems. In addition to high-profile cases involving irregularities perpetrated by insiders in the financial industry (e.g., the aforementioned Societe Generale case), there are also well-publicized examples within critical infrastructure industries (e.g., water, power, transportation, communication). The US Department of Homeland Security released a report in July 2011 that described incidents as follows:

  • In April 2011, a lone water treatment plant employee allegedly manually shut down operating systems at a wastewater utility in Mesa, Arizona in an attempt to cause a sewage backup to damage equipment and create a buildup of methane gas. Automatic safety features prevented the methane buildup and alerted authorities, who apprehended the employee without incident.
  • In January 2011, a recently fired employee from a US natural gas company allegedly broke in to a monitoring station of his ex-employer and manually closed a valve, disrupting gas service to nearly 3,000 customers for an hour.
  • In 2009, a disgruntled former information technology employee of a Texas power plant allegedly disrupted the company’s energy-forecast system when the company failed to deactivate the employee’s account access and confiscate his company-issued laptop after firing him weeks earlier. The cyber intrusion resulted in a $25,000 loss to the company.
  • In 2006, a drinking water treatment plant in Harrisburg, Pennsylvania was compromised by a threat actor operating outside of the United States. Access was gained through a vulnerability in an employee’s laptop, which allowed the installation of malware on the plant’s internal system. The plant sustained no physical damage and the actual water system was not targeted in this particular incident. The objective was to use the plant’s computer system to distribute e-mails.
  • In 2000, a contract employee, who became disgruntled after being turned down for a permanent position at an Australian wastewater services company, used his insider access and expertise to attack the facility’s supervisory control and data acquisition (SCADA) systems. The attack disabled system functions and allowed a total of 800,000 liters of untreated sewage to spill into receiving waters over a period of several weeks.
  • A US citizen who was arrested in Yemen in a March 2010 roundup of suspected al-Qaeda members worked for several contractors performing non-sensitive maintenance at five different US nuclear power plants from 2002 to 2008. This individual was able to pass federal background checks, as recently as 2008, before becoming a contracted employee.ix

In each of these cases, a trusted staff member was directly or indirectly implicated in acts with negative consequences for his or her employer and for the communities dependent on that employer’s utility services. Policies that would have checked such activity were ignored. A tool based on sophisticated behavior analysis and modeling could have alerted management in time to take proactive measures.
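One of the simplest controls implied by these incidents, reconciling HR separation records against still-enabled accounts and recent logins (the lapse behind the 2009 power plant case above), can be sketched as follows. The record formats and names are hypothetical, offered only to illustrate the check.

```python
from datetime import date

def orphaned_access(terminations: dict, active_accounts: set, recent_logins: dict):
    """
    terminations: username -> termination date (from HR records)
    active_accounts: usernames still enabled in the directory
    recent_logins: username -> date of most recent login
    Returns alert strings for accounts that should have been disabled.
    """
    alerts = []
    for user, term_date in terminations.items():
        if user in active_accounts:
            alerts.append(f"{user}: account still enabled after termination on {term_date}")
        last = recent_logins.get(user)
        if last and last > term_date:
            alerts.append(f"{user}: login recorded on {last}, after termination on {term_date}")
    return alerts

if __name__ == "__main__":
    # Hypothetical data: a contractor separated on March 1 whose account stayed live.
    for line in orphaned_access(
        terminations={"it_contractor": date(2009, 3, 1)},
        active_accounts={"it_contractor", "ops_lead"},
        recent_logins={"it_contractor": date(2009, 3, 20)},
    ):
        print(line)
```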

Nation State Agencyx

Justifiable concerns also exist that espionage is carried out by trusted insiders who have affiliations with foreign countries. The end of the Cold War did not signal the end of nation-state spying activities. Katrina Leung (or Leuk) acted as an FBI informant for some 20 years before being identified as a double-agent working on behalf of China as well.xi Lidiya Gureyeva was deported in July 2010 along with other Russian spies in a spy swap with Russia.xii Yu Xiaohong (University of Michigan) and Hanjuan Jin (Motorola) are examples of individuals affiliated with US university graduate programs whose activities should have triggered closer scrutiny.xiii

Compliance and Audit Failures

Regulatory complexity continues to increase for many organizations. A look at the record for the payment card industry (PCI) illustrates the difficulty that organizations experience in meeting, and maintaining, regulatory compliance objectives. The 2011 Verizon Business Payment Card Industry (PCI) Compliance Report showed that fewer than 25 percent of organizations succeeded in achieving compliance on their first attempt, even though the PCI Data Security Standards (DSS) were first published in 2004. An astonishing 75 percent failed to pass in 2011 although they had passed a PCI DSS audit in 2010.xiv Writing about the Verizon report, Ericka Chickowski highlighted the PCI requirements “most likely to be flagged during the first pass at PCI validation.” Three of the top four “gotchas” can be mitigated by using an intuitive user activity management tool (a minimal sketch of the Requirement 10 theme follows the list):

  • Requirement 3: Protect stored data.
  • Requirement 10: Track and monitor all access to network resources and cardholder data.
  • Requirement 12: Maintain a policy that addresses information security.
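For illustration, the sketch below shows one way to approach the Requirement 10 theme of tracking and monitoring access: an append-only, hash-chained access log whose integrity can be re-verified during an audit. The field names and chaining scheme are illustrative assumptions, not the PCI DSS specification or any particular product’s design.

```python
import hashlib
import json
from datetime import datetime, timezone

class AccessAuditLog:
    """Append-only access log; each entry's hash covers the previous entry's hash."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, user: str, resource: str, action: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "resource": resource,
            "action": action,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain to detect deleted or altered entries."""
        prev = "0" * 64
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

if __name__ == "__main__":
    log = AccessAuditLog()
    log.record("dba_01", "cardholder_db", "SELECT")
    log.record("dba_01", "cardholder_db", "EXPORT")
    print("chain intact:", log.verify())
```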

Lost Productivity

In the 2012 PI study, 68 percent of respondents said they believe that a decrease in employee productivity is a negative consequence of having insecure mobile devices.xv Perhaps more disturbing, however, are findings from Gallup, Gartner and others that disengaged staff spend at least an hour a day on non-work-related Internet activities.xvi In an interview with the Gallup Business Journal, author Curt Coffman (First, Break All the Rules) said, “Our most recent research suggests that 29 percent of the US workforce is actively engaged, 55 percent is not engaged and 16 percent is actively disengaged.”xvii Coffman’s interviewer also observed, “According to Gallup’s calculations, actively disengaged employees — the least productive — cost the American economy up to $350 billion per year in lost productivity.” Idle time at the computer is another indication of a possibly disengaged employee. A tool to help managers identify staff who need more attention would be a boon to companies that desire high performance from their people.
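The idle-time signal mentioned above can be computed quite simply. The sketch below is a hypothetical illustration only: given timestamps of endpoint input events, it totals the gaps that exceed an idle threshold during the working day. The threshold and event format are assumptions.

```python
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=15)  # illustrative cutoff for "idle"

def idle_time(activity_timestamps, workday_start: datetime, workday_end: datetime) -> timedelta:
    """Total time within the workday spent in gaps longer than IDLE_THRESHOLD."""
    points = sorted(t for t in activity_timestamps if workday_start <= t <= workday_end)
    points = [workday_start] + points + [workday_end]
    idle = timedelta()
    for earlier, later in zip(points, points[1:]):
        gap = later - earlier
        if gap > IDLE_THRESHOLD:
            idle += gap
    return idle

if __name__ == "__main__":
    start = datetime(2012, 10, 1, 9, 0)
    end = datetime(2012, 10, 1, 17, 0)
    # Hypothetical input-activity timestamps with two long quiet stretches.
    events = [start + timedelta(minutes=m) for m in (5, 10, 90, 95, 300)]
    print("idle time:", idle_time(events, start, end))
```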

Current Approaches and Limitations

Recommendations abound about how to address the challenges of containing and managing insider threat. Many are highly resource-intensive, and are therefore less likely to be implemented consistently and in a manner that is legally defensible.

Business Challenges

A large part of the challenge in securing the human is the frequent disconnect between the HR and IT or IT security management teams. Tools used for monitoring network, file system and application activities typically deliver too much data and too little analysis for HR professionals, who need to understand what to look for when working with, training, and disciplining employees, including contextual information and contributing factors. Recent high-profile insider threat cases validate the observations made in a 2002 Defense Personnel Security Research Center (PERSEREC) report: “Most known American spies (80 percent) demonstrated one or more conditions or behaviors of security concern before they turned to espionage.”xviii

Intelligent ID Monitoring Coverage

Another challenge arises when an insider or partner creates a hole or vulnerability, setting up a necessary precondition for exploitation by outsiders. This is especially important in data center situations where an organization is responsible for data repositories belonging to multiple customers. You can give away trust, but not risk and responsibility.xix And yet, organizations need to implement protective measures that will not be circumvented or ignored by staff, other trusted individuals, or processes.

Technology Challenge

Bridging the analytical gap between the information captured by tools aimed at the “first seven” layers of the OSI model and the information needed by people operating at “layer eight” and beyond is a challenge that Intelligent ID meets through its unique combination of real-time data capture, contextual awareness of system and application activity, and behavioral modeling, delivering targeted, user-friendly alerts and responses to suspect activity.
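As a purely illustrative sketch (not Intelligent ID’s internals), the example below shows the “layer eight” idea in miniature: endpoint events enriched with user context are evaluated against readable rules as they arrive, producing alerts that a non-technical reviewer can act on. The event fields and rules are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EndpointEvent:
    user: str
    department: str   # user context joined from HR / directory data
    action: str       # e.g., "copy_to_usb", "print", "upload"
    target: str       # file, device, or destination
    after_hours: bool

@dataclass
class Rule:
    name: str
    condition: Callable[[EndpointEvent], bool]
    message: str

RULES: List[Rule] = [
    Rule(
        "usb-copy-outside-finance",
        lambda e: e.action == "copy_to_usb" and e.department != "finance",
        "Removable-media copy by a user outside the finance department",
    ),
    Rule(
        "after-hours-upload",
        lambda e: e.action == "upload" and e.after_hours,
        "After-hours upload to an external destination",
    ),
]

def evaluate(event: EndpointEvent) -> List[str]:
    """Return human-readable alerts for every rule the event trips."""
    return [
        f"[{rule.name}] {rule.message}: {event.user} -> {event.target}"
        for rule in RULES
        if rule.condition(event)
    ]

if __name__ == "__main__":
    event = EndpointEvent("j.doe", "engineering", "copy_to_usb", "payroll_2012.xlsx", False)
    for alert in evaluate(event):
        print(alert)
```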

Summary

Current Intelligent ID customers, such as the City of Columbus (OH) Department of Technology, value the straightforward implementation and learning process that are part of the Intelligent ID feature set, as well as the high degree of situational awareness it provides into the real-time use of protected information. As one City official put it: “The flexibility of the rule sets and the ability to monitor all use of devices and any transfer of files to removable storage media were key reasons for selecting Intelligent ID … The fact that we were able to create a clear audit trail of all user activity from any device and prevent any unauthorized use has helped us to immediately comply with our strict IT security policy.” Intelligent ID helps “secure the human” by monitoring user endpoints and informing administrators when it detects suspect activity. That activity could include a deviation from “typical” behavior, which often indicates a greater problem, as with US spies Ames and Hanssen; unauthorized transactions containing sensitive information, as in the case of Societe Generale trader Kerviel; or the potential exposure of confidential PII when copied to removable media or accessed from a sensitive location, as in the compromise of VA medical information (between 2009 and 2012). These, and countless other, situations in which human behavior, whether malicious or accidental, puts organizations at grave risk are diminished by Intelligent ID.

By tracking risky or anomalous practices, correlating data about individual activities across multiple platforms and processes, and producing easily understood (and legally defensible) reports based on the client organization’s business rules and compliance requirements, Intelligent ID gives technical and non-technical managers alike the transparency and in-depth forensic evidence necessary to protect their organizations and to take action when detrimental activity is suspected. To err may be human, but technology like Intelligent ID’s can help deconstruct error and mitigate undesirable consequences.

For More Information:

Allow our team to demonstrate how Intelligent ID can solve your organization’s specific security needs unlike any other solution.

1.888.798.7792

www.intelligentid.com

info@intelligentid.com

References

i Oak Ridge National Laboratory. “Anatomy of an Insider Threat: Case Study in Human Vulnerabilities.” Quote from Alex Ryskin (IT director for the laser laboratories at the University of Rochester).

ii Threat quotes taken from Antonio A. Rucci’s presentation at DEFCON 17. rucciaa@ornl.gov

iii Symantec (April 2012). Internet Security Threat Report, Volume 17, p. 13. Retrieved from http://www.symantec.com/threatreport/.

iv Ponemon Institute Research Report. (February 2012). “Global Study on Mobility Risks,” p. 1. Retrieved from: https://www.websense.com/content/ponemon-institute-research-report-2012.aspx.

v Quote frequently attributed to Willie Sutton, onetime fixture of the FBI’s Ten Most Wanted list, although he denied ever having expressed the obvious.

vi Retrieved from: http://threatpost.com/en_us/blogs/hipaa-bares-its-teeth-43m-fine-privacy-violation-022311

vii Retrieved from: http://www.hhs.gov/news/press/2012pres/06/20120626a.html

viii Retrieved from: http://www.govtech.com/security/Report-Feds-Exposed-94-Million-Records-in-3-Years.html

ix US Department of Homeland Security Office of Intelligence and Analysis – Note. (19 July 2011). “Insider Threat to Utilities,” pp. 4-5. Retrieved from: http://info.publicintelligence.net/DHS-InsiderThreat.pdf.

x Retrieved from: http://www.fbi.gov/about-us/investigate/counterintelligence/higher-education-and-national-security [FBI white paper on counterintelligence]

xi Retrieved from: http://www.nytimes.com/2011/12/11/opinion/sunday/chinas-spies-are-catching-up.html?_r=0

xii Jerry Markon. “FBI arrests 10 accused of working as Russian spies.” The Washington Post (June 29, 2010). Retrieved from: http://www.washingtonpost.com/wp-dyn/content/article/2010/06/28/AR2010062805227.html.

xiii Daniel Golden (April 8, 2012). “American Universities Infected by Foreign Spies Detected by FBI.” Retrieved from: http://www.businessweek.com/news/2012-04-08/american-universities-infected-by-foreign-spies-detected-by-fbi#p2.

xiv Ericka Chickowski (August 7, 2012). “10 Ways To Fail A PCI Audit.” Dark Reading. Retrieved from: http://www.darkreading.com/security/news/240004877/10-ways-to-fail-a-pci-audit.html?pgno=1#articleArea.

xv Ponemon Institute (February 2012), p. 9.

xvi InternetSafety.com (August 24, 2010). “New Data Shows Continued Work Productivity Losses from Web Surfing.” Retrieved from: http://www.internetsafety.com/press-what-filtering-can-save-your-business.php.

xvii Gallup Business Journal. “The High Cost of Disengaged Employees.” Retrieved from: http://businessjournal.gallup.com/content/247/the-high-cost-of-disengaged-employees.aspx.

xviii Rucci DEFCON 17 presentation.

xix Paraphrase from 2012 DBIR, p. 23.

xx The SANS Institute was established in 1989 as an educational, certification, and research organization. The trade name stands for SysAdmin, Audit, Networking, and Security.

