According to a recent news story, Trustwave has announced that they have identified new malware running on ATMs within eastern Europe. All of the infected ATMs are running the Windows XP operating system. Although this version of the malware does not appear to be self-propagating, it is believed that this could easily be an added feature in the next version, and would allow the malware to spread across the ATM network.
It appears that the systems were originally compromised with the help of some kind of insider (an employee at the bank, the ATM vendor, or a company that services the machines). The infection starts with a dropper file (isadmin.exe - a Borland Delphi Rapid Application Development executable) that, once executed, drops the malware file lsass.exe into the C:\WINDOWS directory of the compromised system. The malware then manipulates the Protected Storage service to point to the malware instead of the legitimate lsass service, and it is also configured to restart automatically after a system crash to ensure it remains active.
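As a rough illustration of why that file placement matters: the legitimate lsass.exe lives in System32, while the dropped copy sits directly in C:\WINDOWS, so a simple path check can flag the imposter. This is a minimal sketch; the directory list and function name are assumptions for illustration, not part of any published detection tool.

```python
import ntpath

# Directories where legitimate Windows service binaries normally live.
EXPECTED_DIRS = {r"c:\windows\system32", r"c:\windows\syswow64"}

def suspicious_image_path(image_path):
    # Flag a service whose executable sits outside the usual system
    # directories -- e.g. the dropped lsass.exe in C:\WINDOWS rather
    # than the legitimate copy in System32.
    path = image_path.strip('"').lower()
    exe_end = path.find(".exe")
    if exe_end != -1:
        path = path[:exe_end + len(".exe")]   # drop any trailing arguments
    return ntpath.dirname(path) not in EXPECTED_DIRS

print(suspicious_image_path(r"C:\WINDOWS\system32\lsass.exe"))  # False
print(suspicious_image_path(r"C:\WINDOWS\lsass.exe"))           # True
```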
Given that the ATMs were WinXP systems, I am not at all surprised that they were attacked by malware - it is not the first successful attack on an ATM and certainly won't be the last. What I do find surprising, though, is the level of sophistication the malware already has built into it for compromising the ATM itself. The reports indicate that the malware is able to output the "harvested card data via the ATM's receipt printer or by writing the data to an electronic storage device inserted into the ATM's card reader. Analysts also discovered code enabling the malware to eject the cash dispensing cassette." If that's not surprising enough, the malware also has a built-in management interface that can be triggered by a controller card being inserted into the card reader. Once triggered, the interface allows for complete control of the device, using the ATM's keypad to execute 10 built-in command options.
A standard ATM can hold up to $600,000 in cash at a time, and that would be reason enough to make them a prime target for this kind of exploitation. However, given the level of sophistication already built into this malware, I would speculate that the prime motivation is the magnetic-stripe data and PINs, which are also being captured.
Although it appears only about 20 devices in total have been infected to date, I would agree with the initial reports that this is only the beginning and it won't be long before we start to see similar incidents here in the US.
More information can be found at cnet, Network World, and TG Daily.
The breach at CardSystems Solutions back in 2004 resulted in one of the largest credit card losses at that time, with an estimated 40 million compromised accounts. Fast forward almost 5 years to the present day, and this incident is still wreaking havoc on our legal system - this time, however, it is the PCI auditor, Savvis, that is being sued for negligence in certifying that CardSystems was compliant (for more details see the full story at Wired). The plaintiff in this case is Merrick Bank, a customer of CardSystems, which claims that its decision to do business with CardSystems was predicated on CardSystems meeting the card industry's security standards (known as CISP at that time).
As an IT security company, we will definitely find it interesting to watch this case progress and see how the courts view these complex legal issues. More importantly, however, it raises the need to remind businesses out there that PCI certification does not guarantee your safety. It is merely a set of best practices that, when followed properly, will lower your overall risks. Managing IT security is not a "once a year" process to achieve certification; rather, it needs to be a proactive, continual life-cycle process that is driven throughout all aspects of the business on a regular basis.
And now, in this harsh economic environment, we hear customers asking "what is the least I can do to achieve my PCI certification?" Obviously, this check-box mentality is being driven by the need for compliance rather than security. However, leveraging a true security life-cycle process that meets regulatory requirements will not only help you achieve the certification but, if done properly, can actually generate higher returns on your investments and more cost savings. This can only be done at a strategic level, and will never be achieved using the tactical check-box approach so many businesses have come to rely on.
Additionally, I think this should also make customers start to take a longer look at who is conducting their certification testing. Is the cheapest solution the right choice? Is the price being driven lower through over-use of automated tools that don't provide a true perspective to the underlying risks in an organization? Is the auditor too close to the organization because they have potential future business at stake if a bad report were generated? Should the company providing the PCI auditor, who is supposed to be validating the results, also be used for the technical testing? Or, simply, are too many corners being cut trying to maximize profits by creating cookie cutter auditors carrying generic checklists rather than leveraging more expensive security professionals?
Only time will tell, but the outcome of this case could drastically change the PCI Auditor space - let's just hope it is for the better.
Based on all of the latest research, it appears that the latest Conficker variant (Conficker.e) shows the creator's true intentions - greed! After some of the April 1 hype died down, the Conficker worm did finally receive the updated payload on April 7th, and reports in the wild are now showing that it is fully functioning "scareware" trying to lure unsuspecting users into paying for fake antivirus software.
A few of the important updates in this strain of the Conficker worm include:
- Uses new random file names and random service names
- Adds additional security Web sites it tries to block and disables even more security tools on the infected machine
- Connects to certain Internet sites randomly in an effort to determine the host's external IP address
- Creates its own adhoc peer-to-peer network as an additional command and control and malware distribution vector
- Attempts to make connections to the well-known Waledac botnet
- The main executable shows an automatic removal date of May 3, 2009, but the payload remains, so it will continue to communicate with other compromised systems via the P2P network
The most important thing to note with this new variant is that the same recommendations for combating this worm posted earlier are still relevant.
More information and technical details on all of the new capabilities of this variant can be found at these sites:
MS Malware Protection Center
Trendlabs Malware Blog
The hackers who control the Conficker botnet are reported to be pushing out an update that will strengthen and reinforce the malware's hold on a computer system when the date changes to April 1.
But, like a great movie or soap opera, just in the nick of time security researchers have found a fingerprint that can identify whether a system is infected by Conficker. This fingerprint is visible from the network, so infected systems can be identified in network scans. "Researchers figured out that malware tries to patch the same security flaw (MS08-067) that it exploited during the initial infection. Conficker uses a binary patch - NetpwPathCanonicalize() works quite a bit differently - which means that network scanners can pinpoint the existence of the malware."
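The dedicated scanners probe the patched NetpwPathCanonicalize() behavior directly. As a first pass before running one of those, a few lines of Python can enumerate which hosts on a subnet even expose 445/tcp and therefore warrant the real fingerprint check. This is a sketch only; the subnet and function name are assumptions, and a closed port 445 does not prove a host is clean.

```python
import socket
from ipaddress import ip_network

def hosts_with_smb(cidr, timeout=0.3):
    # Return hosts in `cidr` that accept connections on 445/tcp --
    # candidates to feed into a dedicated Conficker fingerprint scanner.
    open_hosts = []
    for host in ip_network(cidr).hosts():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((str(host), 445)) == 0:
                open_hosts.append(str(host))
        finally:
            s.close()
    return open_hosts

if __name__ == "__main__":
    for host in hosts_with_smb("192.168.1.0/28"):   # assumed local subnet
        print(host, "has 445/tcp open -- check it with a Conficker scanner")
```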
Many popular tools are adding Conficker detection support, including Tenable/Nessus (check 36036), McAfee/Foundstone, nmap (v4.85BETA5), nCircle, and Qualys. There is also a more prototype-level tool written by the Honeynet Project that is reported to work, albeit quite slowly. If you find an infected system, Microsoft has also released a free removal tool.
It is an understatement to say that administrators and systems managers need to DROP EVERYTHING and scan their networks as one of their highest priorities before April 1. And why is that?
Anti-virus, malware, Trojans, and all the other malicious items running around on the Internet evolve over time. On April 1 the Conficker worm/botnet will update to run in a whole different context. Researchers are still trying to work out exactly what is going to happen, but some of the highlights seem to be:
- Using web sites to get the current time and activate on April 1. Traditionally, nefarious software would just check the system date, which allowed researchers simply to move the date forward or back to activate or deactivate the software. This new version checks several Internet sites and scrapes the date off those.
- Re-engineered computation of the command-and-control domains to visit in a day. The first versions tried to contact 250 different domains a day to get updates. These domains ended up being bought up by security research organizations and black-holed. The new version will attempt to contact 50,000 domains a day.
- Consolidate and protect. It seems the updated Conficker may look to fortify existing infections rather than trying to propagate. If this is the case, detection based on the propagation characteristics, like domain/AD account lockouts, will be all but impossible. It may also make detection on the local system more difficult.
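To see why jumping from 250 to 50,000 daily domains cripples the black-holing defense, consider a toy date-seeded domain generator. This is NOT Conficker's actual algorithm, just an illustration of the technique: because every infected host seeds the generator with the same date, they all derive the same candidate rendezvous domains without any prior communication, and defenders must pre-register far too many names to block them all.

```python
import datetime
import random
import string

def daily_domains(date, count=250, tlds=(".com", ".net", ".org")):
    # Toy domain-generation algorithm (NOT Conficker's real one):
    # seed a PRNG with the date so every infected host derives the
    # same list of candidate rendezvous domains for that day.
    rng = random.Random(date.toordinal())   # same date -> same seed
    domains = []
    for _ in range(count):
        name = "".join(rng.choice(string.ascii_lowercase)
                       for _ in range(rng.randint(6, 12)))
        domains.append(name + rng.choice(tlds))
    return domains

day = datetime.date(2009, 4, 1)
# Deterministic: two hosts computing the list for the same day agree.
assert daily_domains(day) == daily_domains(day)
print(daily_domains(day)[:3])
```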
Given that a fingerprint is out for the worm, it is only a matter of time before the creators push out another update to 'fix the glitch'. It is imperative that systems get scanned and cleaned as soon as possible, before the next Conficker version is even harder to find and remove!
References and Further Reading:
"German researchers score Conficker detection breakthrough"
"Busted! Conficker's tell-tale heart uncovered" (The Register)
Conficker Network Scanner
Another option is to actively scan for Conficker machines. There is a way to distinguish infected machines from clean ones based on the error code for some specially crafted RPC messages. Conficker tries to filter out further exploitation attempts which results in uncommon responses. Our python script scs.py implements a simple scanner based on this observation. Here is a sample output:
Could not send SMB request to 127.43.16.76:445/tcp.
127.99.100.2 seems to be infected by Conficker.
127.36.15.80 seems to be clean.
The script can be downloaded here:
"Conficker: The Windows Worm That Won't Go Away"
"Conficker's next move a mystery to researchers"
"Group launches strategy to block Conficker worm from .ca domain" (CBC News, Canada)
During Visa's 2009 Global Security Summit (March 18-19), the fact that hackers are taking aim at small businesses became a recurring theme throughout the conference. Keeping in mind that most good hackers are trying not to get caught, they know there is a high chance a successful attack on a small business will go unnoticed, compared to attacking a well-fortified bank. Granted, the payouts may not be as large, but compounded over time they are just as fruitful and carry far lower risk.
Some interesting statistics to help frame the underlying issues regarding small business were provided in the conference as well:
- 20% still are not using antivirus software
- 60% don't use encryption on their wireless networks
- 66% don't have a security policy in place
- 66% still believe that hackers only focus on large companies
Although these statistics are not news for most security professionals, the key to overcoming this situation starts with basic education and awareness. For more information on Security for Small Businesses please see the Clear Skies' Presentation titled "10 Things Small Businesses Need to Know About Cyber Security".
It actually feels like it's been a long time coming, but a botnet has emerged that targets 'consumer level' infrastructure; namely specific Internet modems and routers commonly found in home environments. For a device to be vulnerable it needs the following criteria to be met:
- Must use the vulnerable chipset/architecture; in this case 'mipsel'
- Must have administration accessible from the outside (the WAN interface)
- Must have weak passwords or vulnerable 'services' (daemons)
Thankfully most devices 'out of the box' are not vulnerable since they do not allow administrator access from the Internet. The other criteria are pretty easily met once an administrative interface is enabled; rarely do home users change the administrative password or set it to anything of substance. Similarly, not many users would update the 'services' via firmware updates.
Just how many devices has the botnet snagged so far? It seems like 100,000 and counting.
It would be a great idea for everyone to check their home network from the Internet to ensure ssh (port 22), telnet (port 23), and http or https (ports 80 and 443) are not enabled. It can be quite easy to check the box for 'remote admin access' without realizing it refers to access from the Internet, not the internal interfaces. Make sure you have changed the default password too, or you're just asking for trouble.
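A quick way to perform that check yourself is a few lines of Python run from a machine outside your network (the address below is a documentation placeholder; substitute your own public IP):

```python
import socket

# Management ports that should NOT be reachable from the Internet
# side of a home router.
PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https"}

def exposed_ports(host, ports=PORTS, timeout=1.0):
    # Return the subset of `ports` that accept a TCP connection.
    exposed = {}
    for port, name in ports.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((host, port)) == 0:
                exposed[port] = name
        finally:
            s.close()
    return exposed

if __name__ == "__main__":
    # Run this from OUTSIDE your network against your public IP.
    found = exposed_ports("203.0.113.1")   # example/documentation address
    for port, name in found.items():
        print("WARNING: %s (port %d) is reachable from the Internet" % (name, port))
```

A closed or filtered port simply times out, so an empty result is what you want to see.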
More details can be found at the following links:
"Worm breeds botnet from home routers, modems"
(The Register, 24th March 2009)
DronBL BLog Article
(more technical information)
I recently attended the March 2009 Software Assurance (SwA) Forum, sponsored by the Department of Homeland Security (DHS), and have come away more confident than ever that we can greatly improve the state of application security. Most of the topics covered are not new, of course. Security wonks have always promoted integrating security through all the steps of the development process. What's so encouraging is seeing the sincere effort being put into promoting these practices and baking them into standards efforts that will benefit everyone.
I highly encourage anyone responsible for security in the software lifecycle to keep an eye on the DHS 'Build Security In' efforts. The Build Security In web site provides a ton of very informative and useful resources around best practices, security knowledge, and tools. Yes, the project and participants are still slanted toward those working on federal projects, but the lessons are extremely applicable to anyone in any industry.
The SwA forum is an extension of this program. It is a 3-day event held twice a year, and although it's free (amazing, no?), it is as informative as any paid conference I've been to. What I found particularly encouraging were the 'lessons learned' from organizations that are putting all the right pieces into action. Yes, some were quite boring, but others were revealing, especially those from smaller organizations.
Ajoy Kumar, Vice President at the Depository Trust and Clearing Corporation (DTCC), for example, explained how they've been very successful taking a somewhat hard-core approach with their developers, requiring increasingly tighter security requirements and metrics before code is accepted for deployment. And although all the security software, processes, and training cost them a bit up front the first year, after four years all of the up-front costs have been paid off in savings and they're now saving millions more each year.
Carole Dicker, Director of Security and Facility at Compusearch Software Systems, Inc., reported similarly encouraging results and cost savings, although they have taken a more 'developer-friendly' approach, focused more on training and less on punishing for defects. Others relayed how they have had success teaching their developers to think like hackers, while some find that hasn't really worked for them.
This isn't to say that any one practice is better than the other. Quite the contrary - different methods may work better for different organizations. Clear Skies sees this all the time performing assessments and providing training, which is why we tailor our work as much as possible to each client. The important key is that regardless of the exact methods, you're at least doing something at key points throughout the development process. Efforts like those of DHS make it much easier to make that happen.
At the recent SourceBoston conference, it was announced that the original L0pht crew has regained the rights to the L0phtCrack software from Symantec and will be releasing L0phtCrack version 6 (LC6) to the public. Once the gold standard in password-cracking software, it fell by the wayside without any updates for about the last five years. If you remember, L0pht went mainstream as @Stake, which was later purchased by Symantec. Once the lawyers from Symantec got involved, they pretty much decided the risk of selling such a tool was too great and locked it up in a vault somewhere.
After obtaining the rights to the software again, LC6 is set for release. The software utilizes multiple attack vectors to crack Windows passwords, including dictionary attacks, rainbow tables, and the tried-and-true brute-force approach. Recent updates include support for 64-bit OSes and updated rainbow tables. Licensing information will be posted on the L0pht site, but pricing for a single host is expected to be around $295. If you are not interested in paying for this kind of software, and don't mind getting your hands a little dirty, there are still plenty of other options, such as John the Ripper.
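To illustrate the simplest of those attack vectors, here is a toy dictionary attack in Python. Real crackers target Windows LM/NTLM hashes; SHA-256 stands in here purely so the sketch runs anywhere, and the function name and wordlist are illustrative:

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    # Toy dictionary attack: hash each candidate word and compare
    # against the stolen hash. SHA-256 stands in for the LM/NTLM
    # hashes that real Windows password crackers target.
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

stolen = hashlib.sha256(b"letmein").hexdigest()
print(dictionary_attack(stolen, ["password", "123456", "letmein"]))  # letmein
```

Rainbow tables speed this up by precomputing the hashes once, and brute force simply generates the candidate words exhaustively instead of reading them from a list.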
Welcome back LC...we've missed you!
In nearly every assessment project we do, we see insecure ciphers. Unfortunately, many people see SSL and cryptography as a 'voodoo art' that mere mortals cannot tackle, or they just assume the default install of the web server will set it up just fine.
It would seem logical that if the data transmitted to and from your site is important enough to encrypt, you might as well do it properly. After all, large web sites don't use encryption just for the fun of it; the mathematics required to perform these cryptographic functions at scale demands significantly more CPU power, often including accelerator cards or dedicated SSL accelerator appliances. This, of course, dramatically increases expense and infrastructure complexity compared to serving the same page unencrypted.
As some background, let's define what exactly an "insecure SSL cipher" is. For the sake of simplicity, it is probably best broken into two categories:
1. Insecure Cryptography. This is where the keys, signatures, and hashes that secure the data are weak and can be broken within a reasonable amount of time. Back in the mid-90s, 'export-grade' encryption that moved from 40-bit to 56-bit strength was thought to be OK for general consumer transactions. It was generally considered strong enough to keep honest people out, while dishonest people would have to do quite a bit of number crunching to break the keys. But in 1999, the Electronic Frontier Foundation (EFF) and distributed.net rocked the world and cracked a 56-bit key in just 22 hours. Recent advances in video card processors and interfaces now make the Graphics Processing Unit (GPU) an extremely fast and accessible math processor that is rapidly reducing the time it takes to crack passwords and, of course, cryptography of all sorts. For example, some software claims rates of around 200 million attempts per second at cracking MD5. It is now generally recognized that keys should be 128 bits and above in length. Of course this is a generalization, since each cipher's strength varies even at the same key length, but it's a good rule of thumb.
2. Insecure Protocols. The protocol is essentially how the cryptography is used to secure the data. The protocol uses cryptography (keys, certificates, and hashes) to create and maintain a secure connection between the user and the server. If the protocol can be subverted, it can allow an attacker to tamper with the encryption mechanisms to reduce or eliminate it altogether. SSL version 2 is a great example of an insecure protocol, with quite a list of problems that would keep a mathematician awake at night.
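To put the key-length numbers in category 1 in perspective, a back-of-the-envelope calculation at the quoted 200 million attempts per second (a single GPU-class rate, ignoring specialized hardware like the EFF's cracker) shows why 128 bits is the sensible floor:

```python
# Rough exhaustive-search times at 200 million guesses per second.
RATE = 200_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 56, 128):
    seconds = 2 ** bits / RATE          # full keyspace, worst case
    if seconds < SECONDS_PER_YEAR:
        print("%3d-bit key: about %.1f hours" % (bits, seconds / 3600))
    else:
        print("%3d-bit key: about %.1e years" % (bits, seconds / SECONDS_PER_YEAR))
```

A 40-bit key falls in about an hour and a half at this rate, a 56-bit key in years on a single machine (hours with dedicated hardware, as the EFF showed), while a 128-bit keyspace is many orders of magnitude beyond any conceivable search.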
The fix for the above problems is, of course, to disable the insecure ciphers and protocols at the server level. When a secure connection is being established, the server and the client (e.g. the user's browser) negotiate the protocol and encryption that will be used. If the server does not allow the insecure mechanisms, the client cannot possibly set up an 'insecure' connection. Everyone should disable SSLv2 and all ciphers under 128 bits.
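The two categories above can be checked from the client side as well. Here is a small Python sketch: the classification rule follows the rules of thumb above, while the `probe` helper, which uses the standard `ssl` module to report what a modern client actually negotiates, is an illustrative assumption rather than part of any assessment tool.

```python
import socket
import ssl

WEAK_PROTOCOLS = {"SSLv2", "SSLv3"}   # insecure protocols (category 2)
MIN_KEY_BITS = 128                    # insecure cryptography cutoff (category 1)

def is_secure(protocol, cipher_bits):
    # Apply the two rules of thumb: no weak protocols, and no
    # ciphers under 128 bits of key length.
    return protocol not in WEAK_PROTOCOLS and cipher_bits >= MIN_KEY_BITS

def probe(host, port=443):
    # Report what a modern client actually negotiates with `host`.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port)),
                         server_hostname=host) as s:
        name, protocol, bits = s.cipher()   # (cipher name, protocol, key bits)
        verdict = "OK" if is_secure(protocol, bits) else "WEAK"
        print("%s negotiated %s / %s (%d bits): %s"
              % (host, protocol, name, bits, verdict))
```

Note that a modern `ssl` library will refuse SSLv2 outright, so a full server-side audit still requires a scanner that can deliberately offer the weak protocols.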
You might be wondering how many people actually rely on SSLv2 for their transactions. There doesn't seem to be a definitive resource listing every browser ever created and which versions of SSL it supports. However, while doing some research I stumbled upon a site from the Massachusetts Registry of Motor Vehicles that really is good on several levels. The site does not appear to allow SSLv2 or insecure ciphers. It also defines which browsers will work with the site and tells users how to adjust their browser settings if they do have problems. The site says it will support Internet Explorer (IE) version 4 and above - a version released in 1997. Therefore only IE versions released before 1997 would not work with their site; that would require a user not to have updated their browser in nearly 12 years!
So how does this relate to PCI?
Well, up until November 2008 there was quite a bit of 'grey' area in the wording of the SSL requirements in the PCI standards (section 4.1). As with all standards, they are often quite general and open to interpretation. You might be surprised to know that there are some companies out there that want to do as little as possible to meet the certification criteria rather than do the "right thing." Disabling insecure ciphers and SSLv2 seems easy, and indeed it is, but some companies would rather argue ad infinitum that they might block one legitimate user from using the site instead of making the site more secure for everyone else. It would be much better to guide and nudge that one user to better protect themselves - a user typically has no idea what goes on behind the scenes and wouldn't even know they are unsafe.
To combat this argument, in November 2008 (PCI Assessor Update Nov'08p1) the PCI council came out and said quite specifically that SSLv2 would create a failure in PCI compliance if it was used to transmit confidential information. To my absolute amazement, they did not go so far as to say SSLv2 must be disabled altogether. They leave an 'out' so that, supposedly, an insecure browser can initiate an SSLv2 session, receive an 'error', and be told to upgrade. This is quite ridiculous, because now the PCI assessor needs to look at the application business logic to ensure this mechanism is indeed in place, and in place properly. The bigger real problem is that applications change: one small slip-up and it is quite possible that a developer inadvertently drops the browser check and SSLv2 is enabled again for the entire site. The best way to solve the problem is never to have SSLv2 enabled and to redirect from a non-encrypted site to the 'secure site'. Use the non-encrypted site to provide directions in case a user needs help with their browser, just as the MA RMV did above.
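For those running Apache with mod_ssl, the "never have SSLv2 enabled, redirect to the secure site" approach might look something like the following. The host names and cipher list here are illustrative assumptions, not a drop-in configuration:

```apache
# Secure site: refuse SSLv2 and weak ciphers at the server level.
<VirtualHost *:443>
    SSLEngine on
    SSLProtocol all -SSLv2            # never negotiate SSLv2
    SSLCipherSuite HIGH:!aNULL:!MD5   # strong (128-bit and up) ciphers only
</VirtualHost>

# Plain-HTTP site: send everyone to the secure site; any
# browser-upgrade help pages can live here.
<VirtualHost *:80>
    Redirect permanent / https://secure.example.com/
</VirtualHost>
```

Because the restriction lives in the server configuration rather than in application logic, a developer slip-up cannot silently re-enable SSLv2.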
In essence, the PCI council 'nearly got it right'. I have been considering why the council would allow SSLv2 just to throw an error to a user, and I just can't think of a valid use case. So this one ruling not only theoretically allows implementation of insecure protocols - since it doesn't explicitly disallow them - but also makes it much more difficult for an assessor to determine just how SSLv2 is being implemented.
If every single website on the Internet turned off SSLv2 right now, the world would be a slightly better place, and this discussion would end - as every user would have no choice but to fix their browser. Sometimes users need a little motivation to change; 12 years really is a long time not to have updated a browser.
Just as the details around the security breach at Heartland Payment Systems are starting to come to light, a new update was released this week showing that yet another payment processor has been hacked. Visa and MasterCard are keeping the victim's identity under wraps at this time due to the ongoing investigation, but they also stressed that this is a new attack and is NOT related to the prior Heartland breach.
Initial information shows that the compromised accounts were exposed from February 2008 through August 2008. According to the credit card firms, this incident, although significant, does not appear (at least for now) to be as large as the Heartland breach. Also on the positive side, the compromised data was limited to "card not present" transactions, and transactions containing full track data do not seem to be involved in this incident. Given this, it is wise for everyone to keep a close eye on credit card statements, as it is likely that the perpetrators will try to use this information for online purchases where full card data is not needed.
More information can be found on a few affected credit unions' websites that are posting the alerts, including the Tuscaloosa VA Federal Credit Union and the Pennsylvania Credit Union Association. The Open Security Foundation also has a notice posted on its DataLossDB site.