By Rebecca Herold, CEO, Privacy Professor®
I started my career as a systems engineer at a large multinational financial and healthcare corporation, where I was responsible for creating and maintaining the applications change management system. The purpose of the system was to ensure that, after a programmer finished coding, the code could be moved only with a manager's approval to a separate area for testing. After testing was complete, the code was either moved back to the development area if changes were needed, or a different manager approved moving it to the live/production area for widespread use.
Requiring individuals in roles other than the programmer (who did her own testing while creating the program) to test the program accomplished two primary goals:
1) Ensure the program truly worked as intended (did not include any unintentional mistakes)
2) Ensure the code contained no back doors, logic bombs, or other nefarious functionality that could enable fraud or other malicious activity (did not include intentional wrongdoing)
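The separation-of-duties rule at the heart of that change management system can be sketched in a few lines of code. This is a hypothetical illustration, not the actual system: the environment names, `Change` structure, and `approve_move` function are all my own inventions, standing in for the idea that the programmer who wrote a change may never be the one who approves moving it, and that code must pass through the test area before reaching production.

```python
# Hypothetical sketch of separation of duties in a change management
# workflow. Names and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class Change:
    author: str          # programmer who wrote the code
    environment: str     # "dev", "test", or "production"

def approve_move(change: Change, approver: str, target: str) -> bool:
    """Allow a move only when the approver is not the change's author
    and the move follows the dev -> test -> production path."""
    allowed_paths = {("dev", "test"), ("test", "dev"), ("test", "production")}
    if approver == change.author:
        return False  # separation of duties: no self-approval
    if (change.environment, target) not in allowed_paths:
        return False  # code may not skip the test area
    change.environment = target
    return True

change = Change(author="alice", environment="dev")
approve_move(change, approver="alice", target="test")        # False: self-approval
approve_move(change, approver="bob", target="test")          # True: manager approves move to test
approve_move(change, approver="carol", target="production")  # True: different manager promotes
```

The key design choice is that the approval check lives in the system rather than in policy documents alone, so a programmer cannot quietly promote her own code.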
This initial project is what got me interested in information security; through it I discovered the many ways in which some of the programmers would try to get around the controls I had built into my change control system.
An inside job
The need for such internal controls within computer systems, physical security controls, and administrative procedures to mitigate the insider threat has existed since at least the 1950s, when organizations widely started using computers.
Spectacular frauds have since been committed by insiders with trusted access to systems and security controls. Consider the case that was the subject of the 1978 James Woods movie, “The Billion Dollar Bubble,” which chronicled the $2 billion insurance embezzlement scheme at Equity Funding Corporation of America, carried out using fraudulently modified computer code. I’ve shown this movie on several occasions to clients for information security awareness events.
A fraud investigator friend told me long ago that there will always be 10 percent of people who will try to commit fraud (or other bad acts, for that matter), and 80 percent who will do bad things under circumstances where they feel justified in doing so. In 2006, a systems administrator for a healthcare provider was indicted by a federal grand jury for attempting to disable the provider’s corporate computer servers by concealing a malicious software program when he thought he was going to be laid off. (He later pleaded guilty and was sentenced to 30 months in federal prison in 2008.) This is a perfect example of that 80 percent threat coming to fruition.
Eddie Tipton, a former director of the Multi-State Lottery Association in Iowa, was found guilty last month on two counts of fraud to collect a $14 million Hot Lotto prize. This is yet another instance of an insider with an extremely large amount of authorized access to systems, and physical access to computer hardware, who apparently could not resist the temptation to use that access for his own personal gain. It seems he thought he had an ironclad way to perform the fraud and not get caught. I recently provided a high-level description on the “Great Day KCWI 23” morning show of how Tipton may have pulled off this elaborate fraud.
In the healthcare space it is almost a monthly, and sometimes weekly, news item to see a report that a hospital or clinic worker was caught snooping into patient files.
The insider threat does not stem only from malicious intent. Workers with authorized access to sensitive information who are tired from long hours will often make mistakes.
I’ve written about the insider threat extensively over the years. You can see dozens of other examples that I’ve provided here.
Set controls to mitigate insider threats
The bottom line is that organizations must still address insider threats, even with so many other new and emerging data security threats and vulnerabilities that they have to contend with on a daily basis.
Always remember: when giving personnel access to data security and privacy controls, it is imperative that other controls exist to ensure that trusted access is not exploited for bad ends. Such controls help keep both the 10 percent with trusted access who will always try to do bad things, and the 80 percent who are trusted but will do bad things under certain circumstances, from using their extensive data and systems access to commit fraud or other crimes and malicious acts.
This post was written as part of the Dell Insight Partners program, which provides news and analysis about the evolving world of tech. Dell sponsored this article, but the opinions are my own and don’t necessarily represent Dell’s positions or strategies.
From October 20-22, Dell is bringing together technology and business professionals who are crafting a vision for the future of their enterprise. Register now for Dell World 2015 in Austin, Texas.