Botnet-Building IoT Malware Could Easily Infect Dozens of Model Types

[Photo caption: Vulnerable to Mirai malware until patched: Sony SNC-CX600]

An information security consultancy says it has found three secret backdoors in more than 80 Sony IP camera models that remote attackers could exploit to seize control of the devices.
Austria-based SEC Consult warns that there's a high chance that the cameras could be infected with the Mirai botnet code, which has infected millions of internet-of-things devices and been used to execute devastating distributed denial-of-service attacks (see Mirai Botnet Pummels Internet DNS in Unprecedented Attack).
But the vulnerabilities could also be used in more discreet ways, such as turning the cameras off or tapping into video streams to spy on people.
The software vulnerabilities and weaknesses affect Sony's IPELA Engine IP cameras, which are aimed at enterprise users. Sony has published an advisory detailing the vulnerable models and recommending that users install the latest firmware version.
In a confidential document distributed by Sony to customers and obtained by Information Security Media Group, the Japanese multinational company says it has not detected any "damage" to its products as of Nov. 28. The document has not been publicly released.
But here's the risk, according to SEC Consult's detailed security advisory: When set to their default configurations, the cameras are exploitable over the local network, and if the web interface is exposed to the internet, remote exploitation is also possible.
Sony says it is "grateful to SEC Consult for their assistance in enhancing network security for our network cameras." Sony officials who have been in touch with SEC Consult couldn't be reached for comment.
The consultancy's findings add to experts' fears that the many Linux-powered, internet-connected devices running with loose security controls will remain a long-term problem if manufacturers don't improve their quality control.
Johannes Greil, a senior security consultant and head of SEC Consult's Vulnerability Lab, says his company hopes that vendors get their act together "and make more secure products out of the box and not actually harm their users."
Sony IPELA Engine IP cameras contain backdoor, allows an attacker to run arbitrary code & spy on you https://t.co/ETMOpla17M #sonybackdoor
To help IoT device users assess the security of their devices, SEC Consult has developed a tool called IoT Inspector that analyzes the devices' firmware - the relatively simple software that manages software and hardware interfaces on computers and devices.
Thankfully, Sony didn't make that all-too-common IoT manufacturer error of leaving remote access protocols such as telnet and SSH directly accessible from the internet. That's what resulted in the fast spread of the first incarnation of Mirai in September, as the malware sought out internet-facing devices and tried dozens of well-known default login credentials for accessible services to successfully seize control of numerous devices (see Can't Stop the Mirai Malware).
But the telnet and SSH protocols are still present in the IPELA Engine IP cameras. And SEC Consult found a way to reach them, thanks to other errors made by Sony.
For example, it's common for software developers to leave remote access accounts in software for debugging purposes, but it's considered a bad security practice because such accounts can be used to bypass device security. SEC Consult found three such accounts in the firmware, including one that allows for root access, which it's labeling a "backdoor" because the account isn't documented by Sony.
SEC Consult also found hashes of the access credentials, which it was able to crack. The Sony cameras run a web server called lighttpd. SEC Consult found it could use one set of access credentials to remotely access the web server and then start telnet. After that, an attacker would only need to upload Mirai malware to the camera to turn it into a botnet node.
An even more dangerous flaw, however, stemmed from SEC Consult uncovering the hash for the IP cameras' hardcoded root password. "We have not invested much time into cracking it, but cracking it is only a matter of time and computing power," SEC Consult's Greil says.
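To illustrate why cracking a leaked hash is "only a matter of time and computing power," here is a minimal dictionary-attack sketch in Python. The hash format, wordlist and password below are all hypothetical, not the actual values from Sony's firmware; real embedded devices typically use salted crypt-style hashes, which slow this process down but don't change the principle that a hardcoded password can be recovered offline.

```python
import hashlib

def dictionary_crack(target_hex, wordlist):
    """Hash each candidate password and compare it to the target hash.

    Returns the matching candidate, or None if no candidate matches.
    """
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None

# Hypothetical target: the MD5 hash of a made-up device password.
target = hashlib.md5(b"ipela123").hexdigest()
print(dictionary_crack(target, ["admin", "root", "ipela123", "sony"]))  # ipela123
```

Because the hash ships identically in every unit's firmware, one successful crack unlocks every camera of that model, which is what makes hardcoded credentials so much worse than per-device passwords.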
Once cracked, that password would give remote attackers access to a Linux shell and thus enable them to take full control of a device, overwrite the firmware with code of their own design, sniff all traffic flowing over the device and more.
While these problems have been identified, and Sony has released updated firmware, there's a catch: It appears that owners of the cameras will need to manually install the firmware updates. Greil says that involves using Sony's SNC Toolbox and rebooting cameras.
That's problematic because the cameras are usually plugged in and forgotten, and are sometimes placed in remote or difficult-to-reach locations. On the other hand, because these cameras are sold to enterprises, administrators may be more diligent in applying these must-have security fixes.
The firmware update takes 10 to 20 minutes to install, according to Sony's confidential document.
Whether these vulnerable devices get patched at all, however, also depends on how well Sony can warn users that their devices contain known vulnerabilities, which relies in part on administrators having bothered to register the cameras with Sony.
Greil says SEC Consult hasn't yet vetted Sony's updated firmware, and notes that SEC Consult is still waiting for answers to multiple questions, such as how the backdoor accounts ended up in Sony's code. And he's criticized Sony's notification to users, contending that it doesn't allow affected customers "to make an informed decision about whether the risk justified an unscheduled patch."
Greil adds: "We had more questions to Sony in this regard, but they did not answer our inquiries."
Facebook, Google, Microsoft and Twitter have promised to better identify and remove terror-related videos and imagery that get posted to their online properties by sharing information.
The move will involve the firms contributing to a shared database that fingerprints images and videos that have been removed from Facebook, Twitter, Microsoft and Google's YouTube.
"Starting today, we commit to the creation of a shared industry database of 'hashes' - unique digital 'fingerprints' - for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services," the companies say in a shared statement issued Dec. 5. "By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online."
Each participating company will apply its own rules for what qualifies as "terrorist content." The companies also pledge that no personally identifiable information will be shared and say that the information will never be used to automatically remove any content.
The four U.S. technology giants say they're looking to involve more firms in the effort.
While the companies say that the move is an attempt to balance users' privacy with eliminating "terrorist images or videos" from their services, they note that they remain subject to government requests, meaning the identities of users who post or disseminate such content could be shared with authorities. "Each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances," the statement notes.
Facebook tells the Guardian that the precise technological details of how the database will work have yet to be established.
But a similar project to battle child pornography is already in use. The Microsoft-based service, called PhotoDNA, was developed by Hany Farid, the chair of the computer science department at Dartmouth College. It's based on a stock library of millions of pornographic images of children maintained by the National Center for Missing and Exploited Children.
Numerous technology firms, including social networks and cloud providers, as well as governments and law enforcement agencies use the free service to help automatically track and remove such content, wherever it gets posted. The service is reportedly also effective at matching images even when they have been manipulated or cropped.
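PhotoDNA's actual algorithm is proprietary, but the idea of matching manipulated images can be illustrated with a far simpler technique: an 8x8 "average hash," where two images count as a match when their fingerprints differ in only a few bits (Hamming distance). The Python sketch below uses synthetic pixel data and is purely illustrative; it is not how PhotoDNA or the new shared database works internally.

```python
def average_hash(pixels):
    """Compute a 64-bit perceptual hash of an 8x8 grayscale image:
    each bit records whether a pixel is brighter than the image's mean."""
    avg = sum(sum(row) for row in pixels) / 64
    bits = 0
    for row in pixels:
        for p in row:
            bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(h1, h2):
    """Number of bits in which two hashes differ."""
    return bin(h1 ^ h2).count("1")

# Synthetic 8x8 "image": bright left half, dark right half.
original = [[200] * 4 + [50] * 4 for _ in range(8)]
# A lightly manipulated copy: one pixel darkened.
tweaked = [row[:] for row in original]
tweaked[0][3] = 60
# A completely different image: the halves inverted.
different = [[50] * 4 + [200] * 4 for _ in range(8)]

print(hamming(average_hash(original), average_hash(tweaked)))    # 1  (still a match)
print(hamming(average_hash(original), average_hash(different)))  # 64 (no match)
```

Unlike an exact cryptographic hash, which changes completely if a single pixel changes, a perceptual hash like this stays nearly identical under small edits, which is why such fingerprints can survive cropping or re-encoding.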
Responding to the new announcement from Facebook, Google, Microsoft and Twitter, Farid tells the Guardian that he and the Counter Extremism Project, a not-for-profit organization, have been in discussions with Facebook and Microsoft since January to adapt PhotoDNA to battle extremist content.
"We are happy to see this development. It's long overdue," he tells the newspaper. But he questioned the apparent lack of third-party oversight over the program, how frequently and thoroughly the database of hashes would be updated and the effectiveness of not automatically removing flagged content from every service that signs up to the program, as PhotoDNA does.
"If it's removed from one site, it's removed everywhere," he tells the Guardian. "That's incredibly powerful. It's less powerful if it gets removed from Facebook and not from Twitter and YouTube."
The four firms say the latest effort to battle extremist imagery and videos has come about via regular meetings with EU officials as part of the EU Internet Forum, which was launched 12 months ago to battle terrorist content and hate speech online. The next meeting for the forum is due to take place later this week.
The move also follows Facebook, Google, Microsoft and Twitter in March signing up to abide by an EU code of conduct on "illegal online hate speech" that they helped create. While the code of conduct isn't legally binding, the firms committed to reviewing the majority of related takedown requests - relating to hatred or the promotion of violence - within 24 hours and removing the offending content from European view.
That effort was led by Czech politician Vera Jourova, the EU commissioner for justice, consumers and gender equality, who pointed to the terror attacks in Brussels in March and Paris in November 2015, saying they "have reminded us of the urgent need to address illegal online hate speech."
"Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racist use to spread violence and hatred," she said. "This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected."
The move to remove terror-related imagery and videos from social networks also follows President Obama calling on Silicon Valley last year to help law enforcement agencies better monitor "the flow of extremist ideology" on their networks. Top White House officials met with Apple, Facebook, Microsoft and Twitter in January to explore better ways of combating the online dissemination of terrorism-related content.
Despite such efforts, some politicians and legislators continue to publicly blame social networks for serving as virtual safe havens for terrorists and related ideologies (see UK Labels Facebook A Terrorist 'Haven'). Political critics, however, contend that turning technology giants into scapegoats is easier than admitting that domestic legislative efforts or a lack of funding for police or intelligence services might be contributing factors.
As fraudsters continually refine their techniques to steal banking customers' credentials, IBM fights back with new tools that use behavioral biometrics and cognitive fraud detection. IBM's Brooke Satti Charles offers a preview.
Satti Charles, a Financial Crime Prevention Strategist with IBM Security Trusteer, is enthusiastic about the new behavioral biometric analysis capabilities in Trusteer Pinpoint Detect, which uses patented analytics and machine learning for real-time fraud detection.
"This new behavioral biometric capability leverages cognitive technology that seamlessly analyzes users' mouse gestures, understanding subtle mouse movements, and delivers actionable risk recommendations," Satti Charles says. "And these capabilities help to maximize detection, reduce false positives and optimize strong authentication."
In an interview about IBM Security's new antifraud solution, Satti Charles discusses:
- How behavioral biometrics differs from traditional biometric solutions;
- Why cognitive fraud detection is not just artificial intelligence;
- Potential use cases for detecting and preventing financial fraud.

Brooke Satti Charles is a Financial Crime Prevention Strategist with IBM Security Trusteer. Her primary focus is to highlight IBM's capabilities in fighting cyber-crime. Her career has been focused on research and reporting of cyber-crime, fraud, money laundering, insurance, terrorist financing, conflicts management, enterprise risk assessments and regulatory compliance. Prior to joining IBM, she held a number of roles at Bank of America and John Hancock/Manulife Financial Services. She is an accomplished writer and public speaker with a tremendous understanding of compliance and regulatory issues as well as the financial crime threat landscape.
One of the most significant cyber threats for the year ahead will be the ramping up of attacks fueled by "crime-as-a-service" offerings, says Steve Durbin, managing director of the Information Security Forum, an independent, not-for-profit organization focused on risk management.
Other trends for 2017, he says in an interview with Information Security Media Group, are a surge in government-sponsored cyber-terrorism attacks waged against critical infrastructure and an increase in risks posed by the internet of things.
"Crime as a service" refers to organized crime rings offering services such as on-demand distributed denial-of-service attacks and bulletproof hosting to support malware attacks (see Cybercrime-as-a-Service Economy: Stronger Than Ever).
"What we're starting to see is crime as a service becoming more commoditized," Durbin says. "That to me says that the industry is reaching a degree of maturation that we haven't seen before."
In recent months, crime syndicates have enhanced their ability to share information and collaborate, Durbin says.
"Crime rings are gaining a better understanding of product positioning, of strengths and weaknesses, and with whom they need to collaborate more effectively," he says. "And we're seeing, as well, a decrease in the price points for crime as a service, because the market is becoming a little bit more saturated and the consumers or buyers of this service have a little bit more choice."
In this interview, Durbin also discusses:
- How IoT devices have dramatically increased the amount of information that is being collected and shared, creating more risk;
- Why organizations will be increasingly willing to attribute cyberattacks to government actors;
- How new global requirements for breach notification will affect the perception that more breaches are occurring.

At the Information Security Forum, Durbin's main areas of focus include the emerging security threat landscape, cybersecurity, mobile security, the cloud and social media across both the corporate and personal environments. Previously, he was a senior vice president at the consultancy Gartner.
A just-issued report from President Obama's Commission on Enhancing National Cybersecurity outlines challenges the next administration should address. Observations from one of the panel's commissioners highlight the latest episode of the ISMG Security Report.
In the Security Report, you'll hear:
- Commissioner Herbert Lin, a senior research scholar for cyber policy and security at Stanford University, discuss the 100-page report that offers a wide range of initiatives the incoming Trump administration should address;
- BankInfoSecurity Executive Editor Tracy Kitten analyze the growing threat of ATM fraud;
- ISMG Security and Technology Managing Editor Jeremy Kirk explain the latest Mirai malware exploits.

The ISMG Security Report appears on this and other ISMG websites on Tuesdays and Fridays. Be sure to check out our Nov. 29 and Dec. 2 reports, which respectively analyzed how San Francisco is battling a ransomware outbreak that locked its light rail payment kiosks and how congressional bureaucracy stifles cybersecurity legislation. The next ISMG Security Report will be posted on Friday, Dec. 9.
Theme music for the ISMG Security Report is by Ithaca Audio under the Creative Commons license.