Friday, April 10, 2015

Compliance Mindset Can Lead to Epic Security Fail

There have been abundant warnings that compliance with government regulations alone would not be adequate to protect companies from the kinds of cyberthreats the world faces today. However, Premera learned that lesson the hard way.

Auditors with the U.S. Office of Personnel Management in January 2014 recommended that Premera address two areas of system administration: more timely installation of software patches and upgrades, and creation of configuration baselines so it could effectively audit its server and database security settings.

However, those weren't very serious deficiencies in the minds of the auditors, who wrote in their final report, released in November, that "nothing came to our attention that caused us to believe that Premera is not in compliance with the HIPAA security, privacy, and national provider identifier regulations."

The company was breached in May 2014. Although that was six months before the feds released their final audit report, Premera didn't discover the breach until January 2015.

Common Problems

Granted, the OPM's audit was a general one -- one designed to audit only the information systems related to the claims-processing applications used at Premera -- and not as rigorous as the audits conducted for compliance with HIPAA security and privacy regulations by the U.S. Office for Civil Rights.

"The scope and depth of the OPM audit was likely just a subset of what would have been covered by a true HIPAA audit conducted by OCR," said Ulf Mattsson, CTO of Protegrity. "Based on the information provided in the audit report, there's no way to know for sure how Premera would have performed if it had been audited by OCR," he told TechNewsWorld.

"The problems cited by the audit are probably pretty common to all organizations. While fixing those problems can improve an organization's security posture slightly, by no means were they likely the cause of the massive data breach at Premera," Mattsson said. "The storing of sensitive data without being encrypted is the more likely culprit," he added.

Checkbox Security

It's unlikely that even a rigorous audit would have deterred Premera's data thieves. "Since HIPAA does not require companies to encrypt their data at rest, even passing a true HIPAA audit by OCR may not have prevented the Premera breach," Mattsson said.

Although compliance rules are supposed to set minimum standards for protecting data, many companies treat them as maximum benchmarks. "Cases like Premera and thousands of others are proof that if you follow compliance -- the checkbox approach to security -- it doesn't mean you're more secure," said Torsten George, vice president for marketing at Agiliance. "You can schedule an audit, but you can't schedule a cyberattack," he told TechNewsWorld.

"You have to change your way of thinking. You have to get away from these three-to-six-month sprints to get to compliance and then forget about it," George said. "Security needs to be part of your day-to-day operations," he added, "not just something you do to get through an audit review."

Antiquated Thinking

Healthcare security audits have some fundamental problems. "HIPAA is focused on prevention of threats," said Mike Davis, CTO of CounterTack. "As we all know, prevention doesn't always work. Hackers still get in," he told TechNewsWorld.

"There's very little in HIPAA that requires healthcare institutions to detect threats," Davis added. For example, HIPAA requires that access to patient records be restricted, but it doesn't require that access to those records be monitored.

"You lock down the users, so only Bob can access patient information, but if an attacker takes over Bob's account, he has access to the patient information and you'd never know," he explained.

The standards used by HIPAA are outdated, maintained Tom Kellermann, chief cybersecurity officer for Trend Micro. "They're based on perimeter defense, and they're overreliant on encryption of data," he told TechNewsWorld. "They focus on threats relevant 10 years ago," Kellermann continued. "The threats today are a thousand times more sophisticated."

CAPTCHAs May Do More Harm Than Good

CAPTCHA -- Completely Automated Public Turing Test To Tell Computers and Humans Apart -- was created to foil bots attempting to mass-create accounts at websites. Once created, those accounts could be exploited by online lowlifes for malicious ends, such as spewing spam.

However, there are signs that the technology that uses distressed letters to weed out machines from humans may have outlived its usefulness.

When users are presented with a CAPTCHA, they are 12 percent less likely, on average, to continue with what they came to do at the website, according to a Distil Networks study released earlier this month. That number is even worse for mobile users, who abandon their intended activity 27 percent of the time they're confronted with a CAPTCHA, the study suggests.

"If it causes too much friction for a checkout or a transaction, it could cost a website real dollars and cents or users," Distil CEO and cofounder Rami Essaid told TechNewsWorld.

Better Bots

Distil got the idea for the CAPTCHA study from one of its customers. "They were trying to solve a fraud problem," Essaid said. "When they put in their CAPTCHA, it dramatically decreased their conversions by over 20 percent."

So Distil decided to study the problem. "We wanted to see if that was unique to that company or if people were annoyed by CAPTCHAs to the point that they abandon any interaction that they're doing," Essaid said. "The results shocked me. I didn't think they'd be as dramatic as they were."

The wide gap between desktop and mobile abandonment is largely a usability issue, he said. "CAPTCHAs were created for desktops. We've never seen one fully designed for mobile, and that impacts users much more," Essaid explained.

The kicker to CAPTCHAs is that their purpose -- to block bots -- has become problematic. "Bots have evolved to a point where they can solve the CAPTCHAs," Essaid pointed out. "CAPTCHAs can stop most bots, but the worst bots know how to get past CAPTCHA."

Bad Cert

Microsoft issued a security advisory last week alerting Windows users that a rogue certificate had been issued that could be used to spoof the company's Live services.

"Microsoft is aware of an improperly issued SSL certificate for the domain 'live.fi' that could be used in attempts to spoof content, perform phishing attacks, or perform man-in-the-middle attacks," the advisory reads. "It cannot be used to issue other certificates, impersonate other domains, or sign code," it continues. "This issue affects all supported releases of Microsoft Windows. Microsoft is not currently aware of attacks related to this issue."

Certificates increasingly have become targets for cybercriminals, noted Kevin Bocek, vice president for security strategy and threat intelligence at Venafi. "Bad guys are not only trying to steal certificates, but use fraud to obtain them, too," he told TechNewsWorld. "There are over 200 public Certificate Authorities trusted around the world," he explained, "and at any one time, any could be attacked to obtain a valid certificate."

Microsoft has taken actions to thwart anyone trying to use the illicit cert, but those measures only work on its products. Since the cert will still be trusted by other products, it's up to the makers of those products to update them to block recognition of the cert (a rough sketch of one such check appears after the FREAK item below).

Mobile FREAK-out

Earlier this month, researchers discovered a vulnerability in SSL implementations called "FREAK." It allows an attacker to force SSL to stop using 128-bit encryption and start using 40-bit encryption, which can be cracked in a matter of hours using commodity computers or readily available cloud computing resources.

Most of the attention on FREAK has been focused on its impact on browser communication, but last week, researchers at FireEye found that a substantial number of mobile apps are vulnerable to the SSL flaw. After scanning 10,985 popular Google Play Android apps with more than 1 million downloads each, the researchers found 11.2 percent of them vulnerable to a FREAK attack. A similar analysis of 14,079 iOS apps revealed that 5.5 percent of them were vulnerable to FREAK.

"This is a problem of a client or server being able to say, 'I don't want to do this really secure thing, let's do something less secure,'" said Jared DeMott, principal security researcher at Bromium.

While that sounds serious, exploiting the flaw isn't a piece of cake. "You need to be in a position to sit on the traffic, and you still have to decrypt the downloaded encryption, even if it isn't very good," he told TechNewsWorld.

"That's the kind of thing you'd expect to see organized players doing -- a nation state or big crime ring," he said. "I don't know if it's going to have a big impact on individual consumers."
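
The downgrade DeMott describes hinges on servers still being willing to negotiate 1990s-era "export" cipher suites. As a rough illustration, and not FireEye's scanning methodology, the sketch below tries to complete a TLS handshake while offering only export-grade ciphers, roughly what the classic "openssl s_client -connect host:443 -cipher EXPORT" check does. If the handshake succeeds, the server still accepts the weak suites a FREAK attacker would downgrade a connection to. The function name and host are placeholders, and many modern OpenSSL builds have removed export ciphers entirely, in which case the cipher string itself is rejected.

    import socket
    import ssl

    def accepts_export_ciphers(host: str, port: int = 443, timeout: float = 5.0) -> bool:
        """Try to handshake while offering only export-grade cipher suites."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # only the cipher negotiation matters here
        try:
            # Modern OpenSSL builds usually ship without export ciphers at all,
            # in which case this raises SSLError before any connection is made.
            ctx.set_ciphers("EXPORT")
        except ssl.SSLError:
            print("local OpenSSL has no export ciphers compiled in; nothing to test with")
            return False
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    print("handshake succeeded with weak cipher:", tls.cipher())
                    return True
        except (ssl.SSLError, OSError):
            return False   # server refused the weak suites (or was unreachable)

    if __name__ == "__main__":
        print(accepts_export_ciphers("example.com"))

Returning to the improperly issued live.fi certificate: for products outside Microsoft's control, blocking recognition of the cert typically comes down to distrusting that one certificate, for example by fingerprint. The sketch below is likewise hypothetical -- the fingerprint is a placeholder rather than the real certificate's, and the blocklist and function names are invented. It fetches a server's leaf certificate and checks its SHA-256 fingerprint against a local blocklist.

    import hashlib
    import socket
    import ssl

    # Placeholder fingerprint; a real deployment would carry the actual SHA-256
    # fingerprint of the improperly issued certificate.
    BLOCKED_FINGERPRINTS = {
        "0" * 64,
    }

    def server_cert_fingerprint(host: str, port: int = 443, timeout: float = 5.0) -> str:
        """Fetch the server's leaf certificate and return its SHA-256 fingerprint."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der_cert).hexdigest()

    def certificate_is_blocked(host: str) -> bool:
        return server_cert_fingerprint(host) in BLOCKED_FINGERPRINTS

    if __name__ == "__main__":
        print(certificate_is_blocked("live.fi"))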