Wednesday, February 28, 2007

PHP security, an oxymoron?

I'm in the finishing phase of developing a medium-sized web application and would like to share some of my findings.

The system is developed in PHP for three reasons:

  • The LAMP platform is a well accepted one, so finding hosting companies supplying it (or convincing the IT administrator to deploy it internally) is easy.
  • I've been programming in PHP longer than I've used any of the alternative platforms. However easy it may be to learn RoR or Seaside or something else, it takes considerably less time for me to go with PHP. In the long run I intend to dabble with those systems, this just wasn't the project for it.
  • This system is a mildly critical one (meaning that it should not have downtimes longer than 30 minutes during the day) and I felt that PHP is the only language I know deeply enough to create robust code and to make suggestions about the configuration of the environment which would lead to better stability and security.

I also considered using something like Symfony or CakePHP for the project; however, I decided against this too, for the following reasons:

  • With Symfony I got the impression that I was getting everything and the kitchen sink without any easy method to create a minimalist system and add on the components which were needed.
  • Another concern of mine regarding Symfony was that I could not find any security documentation for it. Being a security guy, this bothered me a lot, especially since it was clear that I would not have time to read through all its source code.
  • With CakePHP the situation was a little better: there is a lot less code to start with, and I also found some security documents which described how you can place the framework directory anywhere you like (and I would like it very much outside of my web tree, thank you). An annoying thing about CakePHP is that they beg for money on their website. I can understand the donation buttons on the site, since they've probably put a lot of work into it, but displaying the donation buttons so prominently when you try to download it, giving the impression that you have to pay for it, seems just wrong.

Finally, I decided against using any MVC framework, since I haven't fully grokked the concept yet and I felt that separating the project into three parts was a little unnatural. I went with a 2.5-tier application where most of the database access is done by an ActiveRecord type of class, and the controllers interface directly with the database only very seldom (when I have to do selects from multiple tables or something like that). For the database access layer I decided on ADOdb, which is IMHO the best currently out there (including Pear::DB and PDO). It has the advantage of (really) supporting parametrized queries (unlike Pear::DB, which just pretends to support them), so I don't have to worry about SQL injection and I can write much cleaner code. Most of the standard steps (like fetching a single value from a single row) can be accomplished in one step. I truly feel that this is the equivalent of the Perl DBI for PHP (and also for Python!).
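To show what that one-step, parametrized style looks like, here is a small sketch. The table, column, and `count_logins()` helper are invented for illustration; `$db` is assumed to be an already-connected ADOdb connection object.

```php
<?php
// Hypothetical example of an ADOdb parametrized query. GetOne() runs
// the query and returns the first column of the first row in a single
// step; the ? placeholder keeps $username out of the SQL string
// entirely, so no manual quoting or escaping is needed.
function count_logins($db, $username) {
    return (int)$db->GetOne(
        'SELECT COUNT(*) FROM logins WHERE username = ?',
        array($username)
    );
}
```

Note how a name like `o'brien` needs no special handling at all: the driver binds it as data, never as SQL.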

This is also the first time I used the (very limited) OO capabilities of PHP. They are truly very limited! PHP5 improved on them a lot (the most notable improvements being the policy that all objects are passed by reference and the addition of destructors), but sadly I don't have the luxury of targeting PHP5 only. Still, OO in PHP4 helps a lot, most notably with namespacing issues.

To add AJAXy features to the application I used a combination of hand-written Javascript with the Yahoo! UI framework and the DynArch calendar. I looked at the Dojo Toolkit, which looked very promising, but the fact that their website simply freezes most browsers for a couple of seconds (!) while it loads all the Javascript put me off. One thing I observed is that it's really hard to find environmentally friendly Javascript code out there (code that puts all its stuff in a separate namespace and doesn't overwrite the event handlers already assigned to elements), which is really sad.

A note about the editors used: for this project I experimented with the new free Komodo Editor, and it turned out pretty good. It has decent syntax highlighting and auto-completion (although the completion doesn't work across files, which I don't understand, since that should be the point of creating a project with all the files, shouldn't it?). Another complaint of mine is that it works rather sluggishly over RDP connections, which must be related to the way it does its screen refresh, since other editors had no problems over the same connection. Other nice editors I used are Notepad2 and jEdit. The advantage of the first is that it requires no install; the second has many useful plugins and is cross-platform, but requires the JVM.

Given that I didn't use a particular framework, I did the deployments manually with WinMerge, which helped preserve the server-specific configurations (and it also has a great feature where it can convert from Linux to Windows line endings and vice-versa).

Now for security: being the security geek I am, I really tried to take all the steps necessary to safeguard information in the application (one of them being the writing of a PHP script to serve up static files – style sheets, javascripts, images, etc. – rather than letting Apache do it, so that I could add authentication to it if necessary). However, after reading this article about the coming Month of PHP Bugs, I don't feel confident anymore that publicly facing websites should run PHP, or at least not without additional security measures like mod_security and Suhosin. Everything Stefan Esser says sounds well founded (and not like a personal attack) and paints a sad picture of the (probably) most widely used server-side scripting language. The good news is that all is not lost: the fact that most of the time the scripts themselves are exploited rather than the PHP engine means that it is improving (although slowly). Just to verify some of his claims (as should you), I looked at the release history of PHP 5. Here is the time, in days, it took to release each version:

  • 5.2.1 – 98
  • 5.2.0 – 70
  • 5.1.6 – 7
  • 5.1.5 – 105
  • 5.1.4 – 112
  • 5.1.3 – 0
  • 5.1.2 – 45
  • 5.1.1 – 4
  • 5.1.0 – 80
  • 5.0.5 – 158
  • 5.0.4 – 106
  • 5.0.3 – 83
  • 5.0.2 – 42
  • 5.0.1 – 30

On average it took them more than two months (67 days) to release a new version. If you add to the mix the fact that big businesses value stability and tend to install updates later rather than sooner, we will see the majority of systems patched against these flaws no sooner than mid-July. So practice due diligence, make your system available only to those who need it (and enforce this policy with the firewall and the server configuration), and watch your systems as these vulnerabilities start to appear.
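That 67-day figure is just the arithmetic mean of the intervals listed above; a throwaway snippet to check it:

```php
<?php
// Days between consecutive PHP 5 releases, as listed above.
$days = array(98, 70, 7, 105, 112, 0, 45, 4, 80, 158, 106, 83, 42, 30);
// Sum is 940 over 14 releases, so the mean rounds to 67 days.
echo round(array_sum($days) / count($days));  // prints 67
```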

PS. There is an interesting writeup on Alex's blog on how you can turn a local file inclusion vulnerability into a remote one with a little help from PHP. So make sure that you don't include files dynamically in your project, or if you have to, validate the file names to make sure that they don't contain slashes, backslashes or any other dubious characters (the ideal solution would be to check against an array of filenames which are considered valid).
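A minimal sketch of that whitelist idea (the page names and the `resolve_page()` helper are hypothetical):

```php
<?php
// Hypothetical dispatcher: only names from a fixed whitelist are ever
// passed on to include(), so attacker-supplied values like
// "../../etc/passwd" or "http://evil.example/shell" never reach the
// filesystem at all.
function resolve_page($requested) {
    $valid_pages = array('home', 'news', 'contact');  // assumed page list
    // Third argument enables strict comparison, avoiding type-juggling
    // surprises with numeric-looking strings.
    if (in_array($requested, $valid_pages, true)) {
        return $requested . '.php';
    }
    return 'home.php';  // fall back to a known-safe default
}
```

The key property is that the attacker's input is never used to build the path; it only selects among values you wrote yourself.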

Disclosure policy = dead horse?

Over at the nCircle blog, Ryan Poppa concludes that debating disclosure policy is beating a dead horse, because after many years of debate there is still no industry standard. The only positive thing, in his opinion, is that the continuing debate introduces the subject to people who might not have heard all the arguments in this matter. I would like to add a further benefit:

If the industry manages to create a standard on this subject, it will make it possible to use legal methods to prosecute those who don't follow it. And before you all jump at me and say that I'm a corporate fanboy, let me say that this would help researchers too, because they would have a policy which, if followed, would greatly reduce the risk of any legal retribution (unless the industry manages to screw it up and decides that six months is the timeframe researchers should be allowed).

Finally, to all of the full disclosure fans: full disclosure as a method does not have any inherent benefits. The motivation for any responsible security researcher should be consumer protection and personal gain, in that order! You cannot make the argument that disclosing a complete description of the flaw (possibly with exploit code) helps the users of those products / services / etc. if you are not making the disclosure in a place where the message is likely to reach a large number of the customers. On the flip side, most official places, like the forums of a company, are heavily moderated, and most probably any such post will be deleted very quickly.

I don't have a silver bullet for this problem either, but I would like to encourage anyone thinking about disclosing flaws to consider going first to the makers of the product, since they have the best means to distribute any mitigating information / patch / etc. to the users of their products. Any different approach is immoral.

Saturday, February 24, 2007

Removing Snap

Snap.com previews seem to be very fashionable these days (if you don't know what I'm talking about, it's those previews of sites which appear when you place your mouse cursor over a link), but they are very annoying (almost as annoying as those ads which appear when you hover over certain words in an article – which supposedly are related to the advertisement being shown). To kill this feature, add this line to your hosts file:

127.0.0.1  spa.snap.com

Two more tips: hosts files also work in Linux (in fact, they were introduced in UNIX). And finally, if you're running a webserver on your machine (for development purposes, for example), making all the disabled sites go to your webserver can be irritating (because they generate a lot of 404 entries in your log file, which you have to go through when you are looking for an error). The solution is to redirect them to another IP. Some IPs you can use for this purpose are:

  • An unused IP address from your network, if you are behind a NAT.
  • An IP address from another unused subnet.
  • Any unused IP from the private IP ranges.

One thing to watch out for is that when you use an IP address outside your subnet, the request must go to the default gateway, which rejects it (or should reject it, if properly configured). This doesn't involve a performance penalty if you're using a personal router on your local LAN, but if, for example, you are directly connected to the Internet with a cable modem, it involves contacting the gateway of the cable company, which could result in a slight performance degradation. Also, if you wish to use this approach in a corporate environment, be sure to drop your IT people an e-mail to let them know what you are doing, to make sure (a) that the address / subnet is truly unused and (b) that they don't panic / think it's an attack when they see all that traffic going nowhere.

Managed security

It is funny (or sad, depending on how you look at it) when you realize that all modern OSs have the ability to run at a very high safety level (where 99.99% of the security issues don't affect them), yet malware is so widespread. Some of the people who get blamed for this are:

  • Microsoft for making Windows insecure.
  • Users for clicking on everything.
  • Administrators for lax security policies.

However, lately I have started to see a much bigger problem at the root of all this: a problem of expectations, fueled by marketing from as early as the first PCs, which in turn created even greater expectations which the marketers tried to satisfy. This expectation is that computers just work. That they are similar to toasters in that you just plug them in and immediately get a return from them. What these expectations / this marketing don't include is the fact that you need training on how to use your computer in a responsible manner, otherwise you will be hurting yourself and others in real financial terms (just dig up some statistics on how much phishing and spam cost yearly). This training is needed in addition to anything you need to learn about how to do your job effectively.

In companies this management can be done by the IT team. An ideal scenario would be (and I'm talking from a Windows perspective, because most people use Windows) to use software restriction policies with a default deny setting and to enable only areas which are necessary for the system to function, but where users have read-only access (for example the Windows folder and the Program Files folder). Warning! This is only an idea! Be sure to only test it on a spare computer, since toying with these settings can very quickly render your system unusable! This would be a very big step towards security. (I'm inclined to say perfect security, but I have to remember that security is a process, not a state.)

There are at least two problems with this:

First, the users might not take kindly to such a level of control by IT. This is especially true for power users who are accustomed to tweaking and managing their own computers at home. But even regular users might resent the fact that they can't use Yahoo Messenger or Winamp. Their contract may say that only applications approved by IT can be used, but if you start enforcing that and your valuable employees start to leave, that policy is the first to go. This can be mitigated by having a policy of installing for users the applications they ask for (as long as they are legal, of course). This still isn't a perfect solution, since it can become hard to manage as the number of users per administrator starts to grow. Also, every application which gets installed just enlarges the number of vulnerabilities you might have to deal with (both of the applications mentioned before – Yahoo Messenger and Winamp – have had their share of remote code execution vulnerabilities). Another aspect is that while you might have a positive attitude towards installing third-party software, the users might still resent the fact that they have to ask for permission to do so.

The second and even bigger problem in a corporate environment is the problem of superiors (CFO, CEO, etc.) who usually have laptops which they expect to be able to use at home for non-work-related things, where asking for permission from IT is not a very good option. And they also share their laptops with members of the family, etc. While these actions should be disallowed by policy, the imbalance of power between a sysadmin and somebody higher up makes the enforcement of such a policy a very delicate balancing act at best and impossible at worst. And remember: security is still looked at as an add-on – the usual mentality is to make it work first and (eventually, if ever) make it secure.

If implementing good security measures is such a hard thing in a company, home users must be (and indeed are) much worse off. So we should try to bring home users (who probably outnumber corporate users) up to the same standards companies use (which, as we've seen earlier, aren't all that great either, but are better). One approach would be to sell subscription services for remote management. Such a service would consist of somebody logging in to your computer from time to time, or when you have problems, and making sure that everything is OK, installing applications, or helping you out. This service would essentially serve as your personal help desk. While such an approach would greatly reduce the problem of malware and spam, there are many roadblocks to making it widespread:

  • First and foremost, the problem of control. People like to be in control, and if they see such a service as giving up control, it won't be adopted. But not all hope is lost, since this is a perception thing. We don't feel that we lose control just because we have to go with our cars to the mechanic for a yearly checkup, or because we have to call a plumber if a pipe is broken. With the right marketing it could be overcome, and after some time it may become embedded in the culture, so that no further advertising is necessary.
  • Another problem is privacy. When you give up control at such a level, you have to trust the other party not to misuse that trust (not to read your documents, not to make transactions in your name). This can be solved technically (you have an encrypted area to store your personal information to which the company servicing the PC has no access), legally (stipulate these things in the contract) or, preferably, both.
  • A third problem would be that of people who insist on managing their own computers (geeks, power users, etc.). In the first phase they would have nothing to lose, since the usage of such a management service would be purely voluntary. Later on, when it may be dictated by law, there are several possibilities: they might be exempt from using these services if they can prove that they have adequate knowledge. Such a test, however, should account for the fast pace of change in computer technology and should include provisions for periodic re-testing (preferably at short intervals – six months to a year). Another solution would be to make such people pay some form of tax. This would arguably be less fair, but it would be another option nonetheless.
  • Another problem would be technology. Remote desktop products are not perfect (or should I say that best-effort delivery networks are not perfect), and even with a high-speed internet connection they sometimes have to wait for the network. This results in frustration (for the one trying to use the connection) and reduced productivity (which equals increased cost if a per-hour billing system is used). This problem can be mitigated by using more command-line tools (and with the arrival of PowerShell, Windows is starting to get an acceptable command-line environment) and higher-bandwidth solutions.
  • Finally, there is the problem of costs. Most people in not-so-rich countries don't even want to pay for the software, much less for some computer service. With high-speed connections becoming more and more widely used in such countries, this needs to be taken into account (because high-speed internet often means high-speed spam, or more bandwidth for DDoS).

Given all these problems, will managed security become reality for home users? Maybe. It would be a big step forward in reducing security threats for home users, because humans are (still) the most versatile tool, one which can easily be repurposed. The problem with traditional security tools is that many users don't realize that they need yet another security tool (for example, it took years to get anti-viruses accepted as a need), and when the education comes from vendors, many times it is dismissed as marketing (which it is, even if it may also be partially true).

Friday, February 23, 2007

Full disclosure - repaired

That was quick. Thanks to my emails, the blog posting which published detailed information about how to root a given ISP's routers via an erroneous default configuration got sanitized.

Just to be clear: I'm not against full disclosure. I'm pretty much in favor of it – if used for doing good. Because this sounds too abstract, I'll try to give some concrete examples: let's say that you contacted the vendor / service provider / etc. and they are unwilling to provide a fix or even acknowledge the existence of the flaw (yes, sadly it happens)! Then you can use full disclosure first as a negotiation tool and, if everything else fails, as a public tool to shame the vendor into providing a solution. Just be sure to know your legal liabilities and act accordingly (posting from an anonymous email account, etc.). The second case where full disclosure is useful is when you publish the information together with the solution, or at least a mitigation technique, in a place where it is reasonable to expect that a large number of the affected people will see it. This is useful when there exists an effective mitigation technique (like in the case of the WMF or VML flaws, where you only had to unregister a DLL with minimal functionality loss), because you can greatly reduce the exploitation window, protecting people even before the vendor has had time to react.

Now let's analyze this blog posting: from what I know, the ISP wasn't contacted before it was published. For all I know, it wasn't contacted even after it got published; most probably their awareness was raised by the e-mail I sent them. So the first argument – using full disclosure as a negotiation tool – fails. As for the second argument: while the posting contained information about how to secure yourself, it wasn't published at a site where it would have been likely to be read by the ISP's customers. What remains is a possible quest for personal glory and some misunderstood concept of full disclosure.

To end on a light note: heise Security put out what seems to be a good primer on web application security. If you are interested in the topic, this looks like a very nice introduction, which explains the major methods of attack in relatively good detail. So go out there and let's get those vulnerable sites below 70%!

Full disclosure gone bad

I'm for full disclosure when (a) it makes the vendor put out a patch sooner rather than later, or (b) it contains enough information so that the people affected can mitigate the risk, and it is posted at places where those people are likely to read it. But this recent post on the SecuriTeam blog screams of the "I'm 1337, I can use nmap, I rooted 14716 computers" sentiment. How does disclosing this flaw in such detail (like subnet addresses and the ISP name) help anyone? The story would have been just as interesting had he left those details out. And how many of the ISP's customers read, or even know about, this blog? Did the guy try to contact the ISP? He doesn't say. The most interesting part is that he publicly admitted to computer intrusion and may be liable under UK law! Very nicely done there, champ. I'm now going to contact some (hopefully) responsible people at Beyond Security (the sponsor of the blog) and the ISP to get this issue resolved.

The Acunetix saga

As they say: better late than never. Here are my comments on the whole Acunetix saga.

First of all, you should read the great posting at Computer Defense about the matter. It contains links to all the important events in this area, including the original press release, the reaction on Network World and others.

So here is the situation as I understand it: Acunetix provides a free (as in beer) security scanner which can be used to scan your website for vulnerabilities. In their press release they claimed that 70% of the people who used it were vulnerable, in reaction to which Network World (which isn't all bad, since they have at least one very good podcast) summoned up a security expert, who claimed that the statistics were BS and challenged Acunetix by saying that he would give them 1,000 USD if they were able to hack 10 sites picked at random from the list.

First of all, I have no relation with Acunetix (I can't even remember their name; I have to copy-paste it every time I write it :-)). Second of all: I don't think highly of such automated scanners. There is nothing better than a good old-fashioned code review (and believe me, there are some nasty things in the codebases out there which run the big websites). Such automatic tools pick up only the low-hanging fruit: cross-site scripting (also known as HTML injection to the purist :-)) and SQL injection. And even those aren't guaranteed to be found completely. Basically, these scanners face the same problems as the search engine crawlers: navigation embedded in javascript and / or flash, restricted member areas, etc.

That being said, you can't dismiss the results of such scans. SQL injections are very bad, but at least they are easy to convey to the security team (everybody understands if you say: look, I can view / edit / delete your entire database!). Cross-site scripting attacks, on the other hand, are looked down on by many, because they say: they don't hurt my server, so it isn't a big deal, right? Wrong! Cross-site scripting attacks can mean things like cookie stealing and taking actions on behalf of the user (like buying / selling things, transferring funds, etc.). The notorious Samy worm was based on a cross-site scripting flaw!
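To make the XSS point concrete, here is about the smallest possible PHP illustration (the `render_comment()` helper is invented for this sketch): echoing user input verbatim lets an attacker inject script into your page, while escaping it first turns the markup into harmless text.

```php
<?php
// Invented helper: escape user-supplied text before echoing it into
// HTML. htmlspecialchars() turns <, >, &, " and (with ENT_QUOTES) '
// into entities, so the browser renders them instead of executing them.
function render_comment($text) {
    return htmlspecialchars($text, ENT_QUOTES);
}

// echo render_comment($_GET['comment']);  -- safe
// echo $_GET['comment'];                  -- cookie-stealing waiting to happen
```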

And finally: Network World's expert is an idiot at best or an attention-seeking w**** at worst! Those are strong words, but how can somebody call themselves a security expert and not stop to think that even if the statistics were doctored (which I'm pretty sure they were not – not that I trust Acunetix, but the figure seems to be consistent with my experience), no reputable security company would go around defacing people's websites at random just to prove a point. And I'm pretty sure that whatever agreement they had with the site owners, it didn't include a clause which would permit Acunetix to use the sites for demonstration purposes. This expert should be sued for defamation by the real experts! And the kicker is: when Acunetix said that they would take the challenge, but for the Network World website, the expert said that he had no relation with Network World and thus could not authorize such an experiment. No relation!? No relation like: I make stupid statements and pose as an expert there, but I can't be bothered to test them? Thinking about the legal consequences? Why didn't you think about the legal consequences of your original proposal, genius?

Kernel malware on the rise!

Not to gloat (well, maybe a little :-) ), but F-Secure also thinks that kernel malware is on the rise. There is no better time to run as a limited user and make kernel malware irrelevant.

Decoding obfuscated Javascript

SANS recently had a posting about methods to decode obfuscated Javascript, and I just wanted to mention 2+1 tools here:

  • In Firefox you can use the View Source Chart extension to view the source after the javascript has executed. There is also the versatile Firebug, but IMHO that's overkill for this.
  • For Internet Explorer there is the Internet Explorer Developer Toolbar which is free (as in beer) and as of writing this required no WGA silliness.
  • And the bonus tips: if you are using Firefox, it may be worth installing the User Agent Switcher plugin and switching to IE, because exploit sites have been known to serve up different exploits for different browsers. If you encounter scripts of type JScript.encoded or VBScript.encoded, you should find this tool useful.

Warning! These methods actually execute the script on your machine! They should be used with extreme care, and preferably only in controlled virtual machines or on computers not connected to a network.

Distinguishing real and non-real security measures

This post was prompted by a post on Andy's blog, where he complains about the lack of NATs and firewalls in cable modems. My opinion: NAT is not a security measure. VPNs aren't either. And IPv6 isn't inherently insecure just because it has the potential to give end-to-end connectivity to all hosts. These technologies are considered security products because they provide a little bit of security by obscurity. For example, if you are behind a NAT, many traditional backdoors, which rely on opening a port and listening there for a connection from the master, will fail. But then again, all the bots which use IRC will work without problems, all the spyware which uses HTTP or HTTPS to send out the harvested information will work, etc. I admit that I was a little scared when I connected my parents' computer directly to the Internet, with a real IP, using a cable modem. But then I thought about it: is my el-cheapo router, running some ancient version of the Linux kernel, more secure? At least on my parents' box I know that I turned automatic updates on, but I really don't have any easy way to update my router! If you wish to secure your clients by not allowing inbound connections, just put a firewall rule on your router. But I bet you that the clients will be very unhappy when their BitTorrent speed drops dramatically because of this :-D. And if you worry that IPv6 exposes all of your hosts to attacks: again, just put in a firewall rule which drops all inbound TCP connections.

As a side note: one thing I support wholeheartedly is ISPs filtering outbound SMTP connections. Can we have a little more of that, please? And if you worry that some of your clients may need it, create a webpage for them where they can add the servers they wish to connect to using SMTP. No authentication is needed for the page; just make sure that it's accessible only from your clients' IP range and that somebody coming from a given IP can set the rules only for that IP. Of course, a CAPTCHA is also advisable, because otherwise an IP could easily be white-listed just by embedding specially crafted HTML in one of the pages its user views. So, ISPs, please filter port 25!

Wednesday, February 21, 2007

Why rootkits and anti-rootkits are irrelevant

Given my recent (and probably ongoing) adventure with the authors of RkUnhooker, I thought I'd post my opinions about the whole rootkit / anti-rootkit business.

To put it bluntly: at best it doesn't (or shouldn't) matter, and at worst it is a misguided effort to stir up hype, in which many people participate without even realizing what they are taking part in.

Why do I have such a low opinion of the matter? Because this is the DOS virus wars all over again! Back then you had one (tiny by today's standards) common memory space, where each program was placed with no built-in protection to keep them apart. Back then, both viruses and anti-viruses raced to make a better, smarter, trickier product which was harder to kill. Very important: harder, but not impossible. In the end there was no definitive solution to the problem, since there was no third-party supervisor with the responsibility of keeping programs separated.

And along came protected mode and the operating systems using it, and this problem was solved almost overnight. Now there was a clear and very strongly (by hardware) enforced separation of programs (at least in user mode – also referred to as ring 3). What's nice about this is that as long as you keep your programs in ring 3 and don't allow them to access ring 0, you can retain full control over them. Think about it: full control. No dirty hacks, no cat-and-mouse games, full control now and forever!

On the other hand, if you let any untrusted code into your kernel, you're back to step 1: the DOS era. Now you're back in the era of cat-and-mouse games and smart tricks which work only until someone outsmarts them, you have no guarantees whatsoever, and worst of all, you might not know that somebody outsmarted you for a very long time. Again, contrast this with a protected-mode architecture where you keep programs where they belong – in user mode: there is a clear separation of privileges, and you know that your kernel-mode component is and always will be safe.

Rootkits are only the first generation. They are an evolutionary step which appeared because malware writers wanted to access the advantages of kernel mode without rewriting their existing codebase, which was traditionally user-mode. And so the things called rootkits appeared. After rootkits gained a certain notoriety, tools to detect them of course appeared. However, most of these tools are reactive in nature, meaning that after a new technique is introduced, some time needs to pass until detection for that particular technique is added. And we're back to the cat-and-mouse game, with no possibility of predicting how fast detection for a new technique will be added (because any prediction must factor in the time until discovery, which cannot be predicted).

What is even worse is that the next evolutionary step for malware is to run fully in kernel mode. Against such malware 99% of the current tools become useless: there is no anomaly between user and kernel mode because the code runs entirely in kernel mode, so cross-view style detection is out of the question; and the patch-detection tools are also of dubious use, because kernel-mode malware doesn't have to patch anything - it can run just like any normal kernel code, using the documented interfaces. Even worse, the malware can patch data (function pointers, for example) rather than code to make sure that it gets executed, making its detection very improbable. And we still have the basic problem of the malware and the security product executing at the same privilege / trust level, which again results in a cat-and-mouse game.

In conclusion: we have a perfectly fine solution for the rootkit problem - it is called limited privilege users, the kind which cannot load any code into the kernel. It delivers predictable results (no rootkits) by reducing the risk to an infinitesimally small number. Why not use it?

Grokking OpenID and Blogger

4 comments

I just created my first OpenID account!

If you don't know what OpenID is: it is a single sign-on solution (sometimes also called login federation) which lets you have a single login name / password with which you can authenticate in many (web-)places. It is similar to the Microsoft Passport initiative, the difference being (as usual) that it is based on open standards and you don't depend on Microsoft. Here are some resources for a more detailed description:

Here is a list of OpenID providers shamelessly lifted from simonwillison.net:

I personally went with Verisign because they are a big company with other revenues, so it is fairly probable that they won't disappear overnight. It is possible to use multiple OpenID providers, as this forum posting points out, but that is too complicated for me, so I'll just go with Verisign for the moment. However, I want to keep my options open, so I use my blog address as my identity (Google won't disappear soon either) and create a delegation to the Verisign server, which I can change at any time to another identity provider.

You can do this by editing your template, finding the <head> tag and inserting the following two lines immediately after it:

<link href="https://pip.verisignlabs.com/server" rel="openid.server" />
<link href="http://CdMaN.pip.verisignlabs.com/" rel="openid.delegate" />

If you don't use Verisign as your identity provider, replace https://pip.verisignlabs.com/server with the address of your service's server (if the given service doesn't explicitly tell you the address of its server, check out this posting on simonwillison.net where he lists the servers for 4 OpenID providers). The second line should contain the ID the service assigned to you. Now save your template, go to any OpenID-enabled site and try logging in with your blog address (hype-free.blogspot.com in my case).
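For the curious, this is all the delegation mechanism amounts to on the consumer side: the OpenID site fetches the HTML of your claimed identifier (your blog) and extracts those two link tags. Here is a rough Python sketch of that discovery step - the parser class is my own illustration, using the Verisign URLs from above as sample data:

```python
# Sketch of OpenID delegation discovery: parse the claimed identifier's
# HTML and pull out the openid.server / openid.delegate link tags.
from html.parser import HTMLParser

class OpenIDLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endpoints = {}

    def handle_starttag(self, tag, attrs):
        # handle_startendtag (for XHTML-style <link ... />) calls this too
        if tag == "link":
            a = dict(attrs)
            if a.get("rel") in ("openid.server", "openid.delegate"):
                self.endpoints[a["rel"]] = a.get("href")

page = """<html><head>
<link href="https://pip.verisignlabs.com/server" rel="openid.server" />
<link href="http://CdMaN.pip.verisignlabs.com/" rel="openid.delegate" />
</head><body>my blog</body></html>"""

p = OpenIDLinkParser()
p.feed(page)
print(p.endpoints["openid.server"])    # -> https://pip.verisignlabs.com/server
print(p.endpoints["openid.delegate"])  # -> http://CdMaN.pip.verisignlabs.com/
```

Switching providers later really is just a matter of editing those two href values - the consumer re-discovers them on every login.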

Have fun and enjoy OpenID!

Update: Since I wrote this post, Blogger has become both an OpenID consumer and provider. This means that you can comment on Blogger blogs using OpenID accounts, and you can use your Blogger blog as an OpenID account. However, you can still use the method described above to redirect to another OpenID provider.

Update 2: as pointed out in a comment on the stackoverflow blog, this does introduce a further security risk: now you have to worry about either your OpenID provider being hacked or your website being hacked. In the latter case, the hacker can simply redirect the OpenID authentication to an account/provider s/he controls and log into all the sites where your OpenID is your website. Just a thing to be aware of.

Using rsync on Windows

11 comments

First of all, what is rsync?

It is a protocol, and an implementation of it, for bandwidth-efficient file synchronization. In a nutshell it can synchronize two directories (one local and one remote) while making sure that only the minimal amount of data is transferred. It accomplishes this by breaking the files up into blocks and transferring only the blocks which are necessary (so if you appended - or prepended, or inserted in the middle - something to a file, only the new data is transferred, not the whole file). You can find a more detailed description here, or you can read the rsync technical report and the original PhD thesis.
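The block-matching trick hinges on a weak rolling checksum: the checksum of a window shifted by one byte can be derived in constant time from the previous one, so block checksums can be matched against every offset of a file cheaply. Here is a minimal Python sketch of the rolling idea - a simplification of mine, not the exact algorithm from the technical report:

```python
# Simplified sketch of a rolling weak checksum (the rsync idea, not its
# exact formula): a = sum of bytes, b = position-weighted sum, both mod 2^16.

def weak_checksum(block):
    """Checksum of a full block, computed from scratch."""
    a = sum(block) & 0xFFFF
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) & 0xFFFF
    return (b << 16) | a

def roll(checksum, old_byte, new_byte, block_len):
    """Slide the window one byte: drop old_byte, append new_byte, in O(1)."""
    a = checksum & 0xFFFF
    b = (checksum >> 16) & 0xFFFF
    a = (a - old_byte + new_byte) & 0xFFFF
    b = (b - block_len * old_byte + a) & 0xFFFF
    return (b << 16) | a

data = b"hello rsync world"
n = 4
c = weak_checksum(data[0:n])
for i in range(1, len(data) - n + 1):
    c = roll(c, data[i - 1], data[i + n - 1], n)
    # rolling must agree with recomputing from scratch at every offset
    assert c == weak_checksum(data[i:i + n])
print("rolling checksum verified at", len(data) - n, "offsets")
```

The weak checksum only nominates candidate blocks; the real protocol confirms each match with a strong (MD4) hash before reusing the receiver's copy of the block.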

The problem (as with most great tools) is that there is no native version for Windows. There are two Windows ports that I know of: DeltaCopy and cwRsync. Both are packages which bundle the Cygwin version of rsync with the needed DLLs (cygwin1.dll, and so on). Cygwin is an emulation layer for Windows which allows you to create a Linux-like environment and run / compile many of the tools available for Linux without modification.

There are two things you should watch out for:

The first and most important: different versions of cygwin1.dll do not play well together. The symptoms include the rsync client entering an infinite loop and consuming 100% of your processor, and mysterious error messages which talk about shared section mismatches. If you encounter one of these, do a system-wide search for cygwin1.dll and make sure that only one exists. Tricks like putting a cygwin1.dll with a different version in the directory of the executable and creating the .local file don't work either, because different instances of the loaded DLL need to communicate with each other, and this is impossible if there is a version mismatch. In my limited experience the cygwin1.dll included with the cwRsync distribution worked fine with other Cygwin programs, but there might be some hidden incompatibilities.

The second gotcha is that you have to refer to your directories in the form /cygdrive/<drive letter>/path/with/slashes. So for example E:\stuff\to\sync becomes /cygdrive/e/stuff/to/sync. Also, I found that the most reliable way to run the rsync client is to use the current directory as source / target (i.e. .). In this situation the pushd / popd batch commands come in handy (and remember, as opposed to cd, they can also change the current drive).
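The conversion is mechanical enough to script. Here is a hypothetical Python helper illustrating the rule - the function name is mine, not something shipped with cwRsync:

```python
# Convert a Windows path to the /cygdrive form the Cygwin-based rsync expects:
# drive letter lowercased, colon dropped, backslashes turned into slashes.

def to_cygdrive(win_path):
    drive, _, rest = win_path.partition(":")
    return "/cygdrive/%s/%s" % (drive.lower(),
                                rest.lstrip("\\").replace("\\", "/"))

print(to_cygdrive(r"E:\stuff\to\sync"))  # -> /cygdrive/e/stuff/to/sync
```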

Also, I use rsync in client-server (also referred to as daemon) mode, not over SSH (because I use it inside a trusted network). In this case you must specify two colons when writing the network source, like this: 192.168.0.1::tosync.

For a more detailed walkthrough check out this tutorial on rsync, which applies almost entirely to cwRsync as well, with the caveats mentioned earlier.

And a final note about the modus operandi of rsync: while the synchronization is in progress, a temporary file is created in the same directory. After the synchronization of the given file is done, the old file is overwritten with the new version. I mention this because you may wish to synchronize files which are currently locked, and I don't really know how well rsync handles this situation (given that under Linux, its original platform, it is possible to delete locked files - as long as you have the rights to do so - without affecting programs which already have an open handle to them). So be cautious!

Update: if you are using Windows Server 2008 and you are getting error messages when trying to use rsync (something along the lines of failed to create shared section), make sure that you've updated to the latest version, which currently is 2.1.5. Also, it may refuse to install on Windows Server 2008. As a workaround you could install it on an older version of Windows (up to Server 2003) and copy over the files.

Manifesto of the ethical Anti-Rootkit writer

0 comments

Rootkits are a controversial subject. When the book (Rootkits: Subverting the Windows Kernel) came out and the associated site (rootkit.com) was started, the subject exploded. Of course the Sony DRM fiasco also did plenty to generate media buzz. Because of this, many detection tools were born. Some were created by traditional security companies and some by relatively unknown people. A small fraction of the people creating these tools have a dubious ethical background (proven by the fact that they condone and solicit illegal activities like DDoS-ing and defacing, seek to create rootkits which are not detectable by anyone, etc.). Why does this matter? Because these tools work by loading their code into the kernel, and thus only work if they are started from an Administrator account. Metaphorically speaking: running these tools is like handing over the keys to your house to let someone check if your security system works. When doing that, you should make sure that you trust the right person!

Given my recent negative experience with the author of one such home-brew tool, I thought I would put together a list of ideas the developers of such programs should follow and ask them to "sign" it (written in quotes because obviously the signature will be a virtual one). So here is the list:

Manifesto of the ethical Anti-Rootkit writer

  • I will give a high-level description of the actions performed by my program which can be understood by even moderately tech-savvy users (so-called power users), and I will follow that description to the letter (for example, if you state that the tool allows the detection of hidden processes, the tool should only detect the processes, not terminate them. If the tool also terminates them, that should be included in the description).
  • The program will not perform possibly dangerous operations without user consent. The message informing the user should contain a simple enough description of the action so that power users are able to understand it, and also list the possible risks.
  • I will limit my kernel mode code to as little as possible.
  • I will clearly list the supported platforms (operating system version and patch level) and warn the user if s/he is using the tool on an unsupported platform.
  • I do not approve of or engage in illegal activities (like site defacement, DDoS, etc.)
  • All of my research is done on computers owned by me or by consenting people. In case I ask other people to test my programs / products, I will provide them with a detailed description of what the program does, what the associated risks of using this program are and what files / registry keys are associated with / modified by the program.
  • I practice responsible disclosure. I notify vendors prior to releasing any information which could negatively impact the security of the people using their products.

The undersigned:

If you are a vendor / author of an Anti-Rootkit program and would like to appear in the above list, send me an e-mail ([email protected]) from a verifiable e-mail address (meaning that the sending e-mail address either appears on the site of the program or is from the same domain) stating that you understand the above terms and follow them, and I will include your product's name in the list. You can also e-mail me if you have suggestions and/or comments, or you can leave a comment below. I will get back to you as soon as possible.

Q: What guarantees are there that the products which appear on the list really follow the terms?

A: There is no guarantee. Inclusion in the list is voluntary and does not involve any verification on my part (because I don't have the time to disassemble all the versions of all the anti-rootkits out there, and to do so whenever a new version comes out). Furthermore, some criteria on the list are not clearly defined (like the one about keeping as little code in the kernel as possible).

Q: What does it prove if a vendor / program appears in this list?

A: Strictly speaking it proves that somebody with an e-mail account representative of the product has e-mailed me that they understand and follow these principles and would like to be included on the list. In a broader sense it proves that they have thought about these issues and (most probably) follow them.

Q: Can people / vendors be removed from the list?

A: If there is public evidence of them violating these principles, they will be removed and a description will be posted of the reasons for removing them with links to the evidence.

Q: Why are the terms so vague?

A: The list tries to be as inclusive as possible. If somebody signs it, (hopefully) it means that they have at least thought about these matters and follow some basic ethical principles. And make no mistake, there are some out there who don't follow even these broad terms. Also, the fact that a person / vendor appears on this list does not mean that this is their entire code of ethics. It may very well mean that they have a much stricter code of ethics, of which the above list is a subset.

Mismoderated RkUnhooker comment

0 comments

And here is another event in the RkUnhooker saga. Because of the controversy I'm involved in regarding my No love for RkUnhooker post, I wanted to come out and state publicly that I erroneously mismoderated (rejected) MP_ART's comment on my blog. Before I get accused of censorship, I just want to say that it was an honest mistake (which has happened to me before), caused by the fact that the publish and reject links are so damn near each other. I felt that it was appropriate to publish his comment here (although it is the same as the post published on their forums and sent to me in a private message on the SysInternals forums):

Cd-MaN, YOU ARE POOR GUY or, maybe, girl , who wants to advertise your poor, incompatible with logic blog [xx(] . We are sincerely hope, that this was your first and last post here . [b]If it not, then soon you, as well as your board will get "good" advertise over the Internet[/b]. What about your statements, so I can say that you understand a little (perhaps you are still in primary school) because all what you said about RkU can be applied to 70% of all antimalware soft (including all antirootkits). I hope, that you do not get paid from GMER for this post, because you get less, than you should. In a whole, I think that your blog is a scope of lamers statements and rediculus decisions. I found many funny statements, from which I can guess: - GMER love your sorry ass - You are kiddo - You want glory (you will get it) - In real life you a complete looser, that can't even finish primary school - Your English as well as you - are poor - You have come here looks like because you wants to be bitten PS: Tomorrow you will get comprehensive answer (without censored words) from EP_X0FF to all your statements and to your blog in a whole. But, I want, that you are not able to reach tomorrow, as well as your f u c k i n blog.

So that was it, in its full glory. I apologize for the foul language. As I already stated, there is absolutely no connection between me and GMER. One interesting part that I didn't comment on yesterday is his claim that 70% of the anti-rootkit industry uses the same approach that they use. If I were in that industry, I would be really hurt. I really don't think (but then again, I might be wrong) that 70% of these people are involved in / approve of illegal activities like defacing / DDoS'ing, threaten their critics or ignore the principle of responsible disclosure (yes, these are the same guys who wrote the Unreal rootkit, which to me seems a little hypocritical).

Tuesday, February 20, 2007

And so the RkUnhooker saga begins

0 comments

The RkUnhooker story gets worse and worse (from the point of view of its authors). They (EP_X0FF and MP_ART) are making threats Russian-mob style (not that I would know how a Russian mob threat sounds :-D), stating that "You have come against wrong people" and that they "want, that you are not able to reach tomorrow" (I suppose he means that he wishes for me to die :-)).

And my thread on the SysInternals forum was deleted because my post was not technical. This is true, but I feel that this information must be taken into consideration by anyone who wishes to run the program on her/his system.

You can read his entire reply on their forum. And I just noticed that the title of the topic is Any sources for the dead hype-free.blogspot.com. Good going guys! I really don't know what you're after, but if it is getting hired by a security company and/or selling RkUnhooker as a commercial product, you can cross it off your list, since with an attitude like this you will have a hard time getting hired (and no, AV companies do not hire virus writers).

BTW, if I understand correctly, they are accusing me of being a GMER (which is another amateur - in the sense that there is no company behind it - anti-rootkit product) fanboy. It is true that I host a mirror of the GMER files because its site was/is under a DDoS attack, but I have nothing to do with it, nor do I endorse the usage of any such products (as you can read in my original post).

It is interesting that the anti-rootkit market is such a highly flammable one, with warring tribes, each side having its groupies. I don't understand where this comes from, since seemingly nobody is making any money out of this, not even with AdSense or similar things. It might be that they are aiming at being hired / bought by companies, and as I stated earlier, in this case the RkUnhooker guys just shot themselves in the foot.

Limited users - myth or reality

3 comments

Fellow security blogger, Kurt Wismer, says that there are limited advantages to limited users. He is right in all his arguments:

  • A program running in your account, even if it is a limited user account, still has access to all of your files. It can search in them for e-mail addresses, wipe them or do other nefarious things with them.
  • It will stop only malware written with the assumption that it will be run from an Administrator account - very true, but this is currently a very large percentage of malware. In this context running as a limited user is a security-by-obscurity solution, and there is nothing wrong with that. Remember: having security by obscurity as an additional layer of defense is not a bad thing, but having it as the only layer is. Think of it: you don't put a note on your door saying what kind of burglar alarm you are using just because security by obscurity is a bad thing!

But he misses one huge point, in my humble opinion:

Running as limited user makes it highly probable that you can contain whatever malware problem you have. What do I mean by that? Imagine the following typical scenario:

  • A malware not recognized by your security product is executed (and make no mistake, it is possible to develop malware which for a period of time is not recognized by any security product)
  • As soon as it executes, it kills your on-access scanner, stops the services associated with the security products and blacklists the update IPs of the security product.
  • Additionally it may install rootkits and other kernel level components

If you were an Administrator (or a Power User, who can very easily elevate her/his privileges), after these steps you would have a near-zero chance of disinfecting your system and being sure that you had indeed eliminated all the malware from it, without doing an offline scan (e.g. putting your HDD in another computer) and scanning it with several AV products. Even then it is best to wipe and reinstall (which of course must be followed by patching, creating and using a limited account, and other safe computer usage practices!).

Now consider the same scenario again, but this time from the point of view of a limited user:

  • The malware can't kill the processes associated with your security product - it has not enough privileges.
  • The malware can't stop the services associated with your security product - it has not enough privileges.
  • The malware can't blacklist the DNS entries associated the update service of your security product - it has not enough privileges.
  • It can't install BHOs, rootkits, traffic sniffers, etc. - you guessed it, not enough privileges.
  • If you have a firewall which can control outbound connections, you might be able to prevent it from communicating. I say might because a software firewall must consider many things, like DLL injection, to make a reliable judgment call on allowing or denying the communication.

Running as a limited user does not protect you from each and every piece of malware, but it can make sure that your system is in a recoverable state when your security software issues an update and starts recognizing the particular piece of malware. Also, if you are running as a limited user, when doing a cleanup you don't even have to bother looking in places like the Windows directory or at drivers. Limited user accounts can also be used to separate programs (this is not entirely true until Vista, because of the shatter attack), but it is a very good starting point. Finally, in the context of a corporate environment, only limited users can be effectively controlled by the IT department; higher-privileged users have many ways of circumventing any host-based control system.

PS. Some AV products try to do some magic to prevent the termination of their processes (and I don't mean to pick on Kaspersky Labs - they have a very good product - others like Symantec or Zone Labs are also using this approach). This is bad in my humble opinion, because any such protection can be circumvented by a program running with high privileges (thus resulting in a cat-and-mouse game - for example, Advanced Process Termination from DiamondCS is able to terminate all these products in their current versions). Another reason for not liking this approach is that it can lead to system instability.

PS 2. If you would like to run as a limited user, check out my blog posting which details various methods of doing that while still being able to elevate your privileges when necessary.

No love for RkUnhooker

6 comments

It seems that the author of RkUnhooker (you know, that guy named EP_X0FF) got very upset about my comments: first he wrote a comment on my blog - which I published a little late, and I apologize for that. Then he got into personal mode and made a threatening post on his forum.

My thoughts are: if I deface my own site, do I get the source code? :-D On a more serious note: after this incident, would you consider running his program on your computer? Consider this: by running RkUnhooker you give somebody who clearly has anger management problems and sees violence as a viable response system-wide access (because his software needs Admin-level privileges to be able to load its driver)! I looked through the current version of the software and it doesn't contain any malicious code - and no, PECompact doesn't protect your program from reverse engineering, and packing your executable is a bad idea in itself - but this may change in the future, judging by his posts (the last post says, in Russian: we are already working on it). You should make sure that you are not part of the problem and avoid his software.

The last install kit I checked had a size of 147611 bytes, an MD5 of f79f711bd54bfc9f297eeefee69f8705 and a SHA1 of ccb2558366cb076451fe6f58c4c5081eae52f168. Do not run anything from him if possible!

New Hacker Challenge available

0 comments

Just to give you a heads-up: a new hacker challenge is available over at ethicalhacker.net. Good luck!

Sunday, February 18, 2007

Whose timeserver are you using?

1 comments

There was a controversy some time ago involving D-Link and Poul-Henning Kamp, where the former was using the timeserver set up by the latter as the default in its routers, effectively generating a DDoS on the server without giving any compensation for it. The matter was amicably resolved in the end, but it seems that corporations didn't learn anything from it. I was browsing today through the configuration of an Edimax router and much to my surprise I saw the default timeserver set to 192.43.244.18, which seems to be a timeserver at the University of Michigan (judging from the partial reverse DNS). I doubt that they have any agreement with Edimax!

Now for the question of which NTP server to use. You can use pool.ntp.org, which resolves in a round-robin way to NTP servers which are voluntarily made publicly available. There are also subsets available by continent and by country, so that you can choose a server near you and use the other servers as fallback. For a more detailed description, visit the pool.ntp.org website. And make sure that you check your router's default settings!

PS. Some el-cheapo routers (like this Edimax) do not allow setting NTP servers by name, only by IP. In this case you should use nslookup to randomly choose a server from the pool and set it as your NTP server.
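That nslookup-and-pick-one step can also be scripted. Here is a small Python sketch of the idea - the pick_ntp_server helper and the stub addresses are my own illustration, not an official tool (pool.ntp.org returns a different subset of servers on every lookup anyway):

```python
# Resolve the pool name once and pick one address at random to paste into
# the router's IP-only NTP field.
import random
import socket

def pick_ntp_server(hostname="pool.ntp.org", resolver=socket.gethostbyname_ex):
    # resolver returns (canonical name, aliases, list of addresses)
    _, _, addresses = resolver(hostname)
    return random.choice(addresses)

# Demonstration with a stubbed resolver so no network is needed
# (the addresses are made-up documentation-range IPs):
stub = lambda name: (name, [], ["203.0.113.7", "203.0.113.8"])
print(pick_ntp_server(resolver=stub))  # one of the two stub addresses
```

In real use you would call pick_ntp_server() with the default resolver (or with the country subdomain, e.g. ro.pool.ntp.org) and type the resulting IP into the router.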

Favicon for blogger

4 comments

Inspired by a post over at snook.ca, I added a favicon to my blog. The original image came from MouseRunner. Given the fact that I have been using FireFox for a long, long time, I'm entitled to use this particular image :). A word of advice: always check the license for the image! There are many free (in both senses of the word) resources on the Web, but that doesn't mean that everything is. One nice list of free icons I came across is the one at maxpower. Iconarchive is also a very nice site with easy navigation; however, each icon / set of icons is created by a different contributor and comes with different usage terms (but usually they are free for non-commercial use).

When creating favicons for blogger, make sure that you write the code in XHTML format (which means lower case and with a / at the end) like this: <link rel="shortcut icon" href="http://example.com/favicon.ico" type="image/x-icon" />, because blogger uses XHTML for its templates. Also, you can not use blogger to host ICO files (only GIF, PNG and JPEG), so you have to use something like Google Pages for this purpose.

As a side note I found a really nice color picker for the Gnome desktop: gcolor2.

This icon (shown below at its full 128 by 128 pixels beauty) got chosen because I try to provide useful and hype-free information on this blog.

Update: I've tried to make it work with IE; unfortunately the favicon.ico provided by Blogger seems to take precedence over the one provided in the header.

Update 2: I came up with a solution which seems to work in all the browsers I tested (minus IE6, but it does work with IE7).

Replying to the reply - PEiD

0 comments

In a previous post I took issue with Chad McMillan's claim that they had a revolutionary technology for identifying packed executables (btw., if you are interested, you can read my thoughts on the idea of packing your executables). He replied to me, and in the spirit of fairness I publish his reply (with his consent, of course) below:

Hi, I just wanted to clarify a little further the podcast with Bret. I have posted a comment, but am always interested in ensuring people are well informed. So, I have actually encountered PE's that were packed with known packers and were altered (via entry points and some other methods) in which PEiD (even with hardcore scan turned on) could not determine what the packer or compiler was. Our tool, however was able to find it. This is primarily because we try to associate signatures with the unpacking routine as opposed to the entry point. Obviously, packer identification is nothing new by any means. But, we have built a suite of elements into one tool, which, as far as I have found, no one has. It includes:

1. Packer / compiler ID via entry point
2. Packer ID via "roaming" signature
3. The correlation of the previous 2 (thereby, if someone tries to muck with the EP, but cannot change the unpacking, we'll find it and note that fact ... PEiD will not)
4. Digitally Signed executables (code signing with X509 Cert)
5. PE "anomolies" (things most compilers will not do, but are usually a result of a packer)
6. Generic Entropy section check (PEiD also has this feature ... but it appears it may also be foolable, where as we have a method against that)

Does this clear things up? I certainly would agree that PEiD has a great tool. We are just trying to help improve on the idea and make it a little better. Ours will also be free to the public once it is released (it's actually finished ... the GUI is all that is left). Let me know if you have any questions! Chad

My comments would be: PEiD is capable of searching the whole file if hardcore mode is set and the signatures have the ep_only property set to false. I just verified this (as a sidenote: PEiD runs perfectly fine under Wine. W00t!). The fact that it failed to identify a given packer on some sample(s) proves only that you have better signatures for that given packer. The additional features are nice, but by no means revolutionary (for example, you can use Sigcheck to verify the digital signatures of files). Again, I think that it's great that people are working in this area, and this tool has the potential of becoming very useful (if implemented in a way that is easily scriptable - i.e. command line with no user interaction - and made available under a permissive license), but it is evolutionary rather than revolutionary.

I'm back

1 comments
After two weeks of hard work I'm exhausted and recovering, but ready to blog again! I published the comments received in this timeframe (sorry for not getting back sooner) and I hope to get back on track with my goal of publishing at least one (semi-)useful post every day.