
Friday, September 29, 2006

Software vs. Hardware firewalls


I had already done my post for the day and was listening to episode 56 of Security Now when I heard something that ticked me off. I hear this all the time from various sources (though those are mostly uninformed people, not security experts). This won't be another Hack the Gibson post, although you can expect more of those shortly.

There are several variations of this misinformation, like "you don't need a software firewall if you have a router / hardware firewall", "hardware firewalls are better than software firewalls" and so on. The main point is: they have different purposes!

Now to elaborate: back in the old days a firewall was (and often still is) a hardware / software device with which you could filter your traffic using rules like "if it arrives on port X, allow it" or "if it comes from IP X, allow it". This is what hardware firewalls do (and probably 99.9% of home routers have this feature integrated). The problem is that it's rather hard to set up (I would like to know what percentage of home users even know what an IP address is), and rather ineffective, because these days a very large share of the traffic flows through port 80 (so if you don't allow port 80 you basically can't communicate, and if you do allow it you've allowed almost all the traffic - it becomes an all-or-nothing decision).
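
To make the "port X / IP X" idea concrete, here is a rough sketch of what such rules look like. I'm using Linux's iptables purely as an illustration (it is not something a home router user would type - routers expose the same idea through checkboxes in a web interface, and the 192.168.1.10 address is made up):

# allow traffic arriving on TCP port 80 (the web port)
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# allow everything coming from one trusted address
iptables -A INPUT -s 192.168.1.10 -j ACCEPT
# drop everything else (rules are evaluated in order)
iptables -A INPUT -j DROP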

Software firewalls had the same features at the beginning, however they evolved into what is called "personal firewall software" and now offer control on a per-program basis. What this basically means is that you can set different rules for different applications (although in most personal firewalls this is still an all-or-nothing decision, to avoid overwhelming the user, but at least it's at the application level). A major drawback is that, because the firewall runs on the same machine where the malware runs (if the machine gets infected), the malware can turn it off, or inject code into other processes so that the firewall thinks some other program is trying to communicate.

One note about the firewall built into Windows XP and 2003 (as opposed to the one built into Vista, which is rumored to have this feature): it doesn't do any filtering of outgoing connections (meaning connections initiated from your computer), only of incoming connections. This means that it can prevent classic backdoors from working (like SubSeven or BackOrifice), but it won't catch most of the modern malware, which initiates the connection itself, usually on port 80 (so that your hardware firewall won't filter it either).
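
As a hedged illustration (this assumes Windows XP SP2, where the built-in firewall is driven from the netsh firewall context), here is roughly how you inspect the configuration and open an incoming port - and note that there is no comparable command for filtering a specific outgoing connection, which is exactly the limitation described above:

rem show the current (inbound-only) configuration of the XP SP2 firewall
netsh firewall show config
rem allow incoming connections to TCP port 80 (e.g. a local web server)
netsh firewall add portopening TCP 80 "Local web server"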

In conclusion, my advice would be the following (from the point of view of firewalls):

  • Use a router so that you can use file-sharing (I'm referring to the integrated file sharing, not some peer-to-peer program) without complicated configuration on your firewall.
  • I also use a router because I do web development on my machine, so it runs Apache / MySQL / PostgreSQL, and I sleep better knowing that there is no way somebody from the outside can reach those (even if I misconfigure something locally).
  • In addition, use a personal firewall so that you can control, on a per-program basis, which application has access to what on the network.
  • This isn't directly related to firewalls, but: don't run as admin (watch my blog, because I'll have more posts that should help you avoid running as admin).

WAP


Yesterday I participated in the local Windows Academic Program pitch. The main content was delivered by Adrian Marinescu. I can sum it up as a short version of the book Windows Internals. For those of us who have actually read the book it was a little boring (although in the breaks I managed to clarify some aspects which were still a little fuzzy after reading the book), but for the ones who haven't, it was probably downright confusing (as I noticed from the questions).

He mentioned several improvements which went into the Vista kernel. My feeling about it is that it is very nice, but who will program against an interface which isn't on the market yet, won't be the version used by the majority for several years, and has no backward compatibility (one example which comes to my mind is the new Private Namespaces feature)? I know that Microsoft is in a difficult position: on one hand, if they offered an updated kernel for Windows XP they would kill off incentives to upgrade, but if they don't, very few people will program against the new functions until Vista becomes a significant piece of the market. Compare this with Linux, where there are very few reasons not to upgrade (one being that the upgrade breaks something you really care about - but this is a very rare case, and updates for the given software usually come out very quickly). Having such a long release cycle really limits the options Microsoft has, in my opinion.

Another feeling that I got from the presentation (or better said: I've had this feeling for a long time and the presentation only reinforced it) is that Windows as an operating system (and I'm talking about the NT line here) is quite secure; the problem is the default policies and the way they're trying to get people to adopt a new security policy, in Vista for example. Out of fear for their revenue they (and I don't mean the technical people) are not imposing all the security restrictions they should, but rather come up with things like LUA, which IMHO is a semi-solution that can be used to blame the user if something happens (because they clicked Yes without reading the message box - what percentage of users reads dialog boxes anyway?).

Now for the fun part: all the source code that comes with this program. It is composed of three parts, as you can see on the main site. I looked at the licenses first (take care, because each component comes with a different license). The key points that I dislike:

  • You are not allowed to reverse engineer the tools which come with the curriculum. While I'm sure that there is a lot of information in the curriculum itself, there will probably be times when you wonder: how exactly does this tool do that?
  • IANAL, so the definition of derivative work is a little fuzzy to me, and I don't know exactly how this would apply later in your career if you choose this line of work (working at a security company, for example, and doing kernel-level development).

Personally I will stay away from it; I think there is enough information out there which doesn't come with such restrictions. Also, for the moment I don't see how such access would be useful. It's nice to have, sure, but I'm not sure that it's actually useful (neither of the two major universities that we have here uses / presents kernel-level code in its OS courses, for example).

Wednesday, September 27, 2006

A (non-hacking) tutorial on elevating privileges on Windows


Running as a normal user can be a real pain on Windows (however, it has become better with every version). This is because every program runs on behalf of a given user, and the credentials of that user determine what the program can or cannot do. Usually you wish to run as a normal user to protect your computer from malicious code you may inadvertently execute (for example, normal users can't install drivers or change firewall settings), but there are some operations for which you might wish to elevate your privileges. A classic example would be changing system settings or installing software.

One method would be to use fast user switching and switch to another user to perform these tasks. This option is rather complicated and time consuming, however it provides excellent protection, since the programs with elevated privileges won't run in the same window station as the low-privilege ones (this is important because otherwise there is the possibility of a shatter attack). The big drawbacks are that it is very difficult to communicate between privileged and non-privileged programs (you don't have the option of drag and drop, for example) and that it's very time consuming to switch between the two users (you have to type in two passwords every time).

Another standard way, available in every Windows installation starting from XP, is the Run As option in the context menu (the menu which appears when you right-click on elements in Explorer) and its command-line equivalent runas (to get more help on runas, type runas /? in a command prompt). This is less secure (because the programs run in the same window station), however shatter attacks are mostly a theoretical (but real) threat, and the assumption is that you are running mostly trusted programs. Another drawback is that when you run a program as another user, it inherits the profile of that user (this means, for example, that it will see a different desktop directory than your current user, and you won't see your bookmarks in the browser because they are stored on a per-user basis - not that you would usually want to run your browser with elevated privileges). You can partially work around this by using the command-line version with the /env parameter, which uses the current user's environment variables rather than the environment variables of the impersonated user. To see the difference, try running two instances of Notepad as follows (this example assumes that the Windows installation directory is c:\Windows and that Administrator is a user with administrative privileges which is not identical to the currently logged in user - meaning that you are not logged in as Administrator):

runas /env /user:Administrator "c:\WINDOWS\NOTEPAD.EXE"
and
runas /user:Administrator "c:\WINDOWS\NOTEPAD.EXE"
Create a text file with each of them and save it on the desktop (what they see as the desktop). One of the Notepads will save it on the current desktop (quick quiz: can you tell which one?) and the other on the desktop of the Administrator user (which in the default case would be C:\Documents and Settings\Administrator\Desktop). This works only partially, though: it only works for programs which don't use the registry to store user-specific information, because the user-specific parts of the registry are still different for normal processes and for processes started with runas (even with the /env switch). Another inconvenience is that you must type in your password each time, unless you use the /savecred option, which saves your credentials the first time you authenticate, so you don't have to type in your password the next time you execute something with the same credentials (if you use the /savecred option again). The problem from a security standpoint is that your credentials stay cached until you log off (that is, you can't control the period for which the credentials are cached). Another problem is that /savecred is not available from the GUI version (the right-click menu), only from the command line.
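
For completeness, a sketch of the /savecred variant (same Administrator-account assumption as above): the first run asks for the password, later runs reuse the cached credentials until you log off.

rem first run prompts for the password; subsequent runs reuse the cached credentials
runas /savecred /user:Administrator cmd.exe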

Now the third option would be to use sudowin. This is an open source project written in .NET 2.0 and distributed under the BSD License (a very permissive license - an example of how permissive it is: for a long time, parts of the Windows network stack were taken from externally developed source code released under this license). What it does is give administrative privileges to the programs you choose, while they still run with your profile (meaning that they see the same registry, the same desktop / My Documents directory and so on). Another important difference is that you must enter your own password to elevate privileges. It also contains both a GUI and a command-line component. To install it, take the following steps:

  1. Go to the website and download it (in a rather confusing move, the download link is where the version number is displayed, towards the upper middle of the page; currently it says 0.1.1-r95). If you haven't downloaded anything from sourceforge.net before, it will ask you to select a mirror.
  2. You'll need the .NET Framework 2.0. You can download it from the Microsoft site if you don't already have it (be sure to download the redistributable package, not the software development kit). Here is a direct link if you are running a 32-bit Intel or AMD machine. If you only have the .NET Framework 1.0 or 1.1, the sudowin installer will prompt you during the installation and offer to download and install the 2.0 version without interrupting the installation.
  3. Install the software. Remember to do this from an account with administrative privileges (you can use the methods described earlier to run the installation with enough privileges).
  4. Using a Notepad running with administrative privileges, edit the sudoers.xml file located in the Server subdirectory of the install directory (this is C:\Program Files\sudowin by default, or C:\Program Files (x86)\sudowin on 64-bit systems). Go to the users section and add the users you want to have sudo capabilities (remember to enter the names in the <domain or computer>\<username> format; if you are a home user, you can find out your complete name by entering whoami at the command prompt). Now go towards the end and enter the commands which you want to be able to run with elevated privileges. Also look around the file and change other settings to fit your needs. Save the file.
  5. Use the command runas /user:Administrator "cmd /c start lusrmgr.msc" (assuming that Administrator is a user with administrative privileges for which you know the password) to display the user management console (anyone else find the name funny?). Go to each user you want to be able to perform sudo and add them to the Sudoers group which sudowin created during the installation (right-click the user, click Properties, go to the "Member of" tab, click Add, type Sudoers and click OK). You can verify the result with the quick check shown after this list.
  6. Use the command runas /user:Administrator "cmd /c start services.msc", find the sudowin service and restart it.
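
Once the service is restarted, here is a quick sanity check (a sketch; it assumes the local group really is named Sudoers, as created by the installer):

rem your full account name, in the <domain or computer>\<username> form
whoami
rem the members of the Sudoers group - your user should be listed here
net localgroup Sudoers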

Now you're good to go. Remember that you must enter your password when asked by sudo!

A quick comparative table of the three methods:

Feature | Switch user | RunAs | sudowin
Built-in | Yes | Yes | No
Passwords you must enter | 2 | 1 (the target user's) or 0 (if already cached) | 1 (your own) or 0 (if already cached and the cache has not expired)
Password can be cached | No | Yes (but it stays cached until you log off) | Yes (the cache timeout is configurable)
Keeps your user profile | No | Partially (the environment can be kept with the right command-line switch, but not the registry) | Yes
Logging and other advanced features | No | No | Yes

A final trick (which also applies to the RunAs method): if you try to launch explorer.exe with elevated privileges, nothing happens. Normally all Explorer instances share the same process (this is probably a leftover optimization from the Windows 9x days), so when you try to launch a new instance, it detects the old one (the one that's displaying your taskbar and desktop) and defers to it - but that one is already running with your lower set of credentials. To fix this, you must tell Explorer to use a separate process for each window. You can do this by creating a DWORD value named DesktopProcess in the registry at HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer and giving it the value 1 (advice taken from here). Now it should work.
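
If you prefer the command line over regedit, here is a minimal sketch using reg.exe; run it as your normal user, since the value lives in your own HKEY_CURRENT_USER hive:

rem make Explorer start a separate process for each window
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer" /v DesktopProcess /t REG_DWORD /d 1 /f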

Apache and mod_proxy


We've been having problems with Apache and mod_proxy at the workplace for a couple of days. The scenario was the following: there is server A, which listens on HTTPS (with Apache), and server B, which uses mod_proxy to serve the contents of A in a subdirectory. B runs CentOS with Apache 2.0.52. The issue was that the connection worked if you hit refresh continuously, but if you waited a couple of minutes between page refreshes, you received an error saying something along the lines of "the proxy server received an invalid response from an upstream server". This led me to believe that it was an issue with persistent connections, so I disabled them on server A and now everything is working (although much slower). I suggested to the admin of server B to update Apache, but people just don't like to break working servers (even more so if they don't have a great packaging system - hint: use Debian ;-) ). If some of you have the same problem, hopefully this helps.

What was also weird about the situation is that I couldn't reproduce it on other machines with different versions of Apache (I tried Apache 2.2.2 with the following configuration:

ProxyRequests Off

<Proxy *>
Order deny,allow
Allow from all
</Proxy>

ProxyPass /foo https://www.example.com/bar
ProxyPassReverse /foo https://www.example.com/bar

SSLProxyEngine on
and everything worked fine). So this might be a bug in that particular Apache version, but then again, good luck convincing people to upgrade :-)

Update: I forgot to mention how to disable persistent connections. Find the line in your Apache configuration file which says KeepAlive On and change it to KeepAlive Off (if you don't find such a line, add KeepAlive Off to the end of the configuration file). Then restart Apache.
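
For reference, the relevant httpd.conf lines would look roughly like this (directive placement varies between distributions; the shorter-timeout alternative is my suggestion, not part of the original fix):

# disable persistent (keep-alive) connections
KeepAlive Off
# alternatively, keep them enabled but with a short timeout
# KeepAlive On
# KeepAliveTimeout 5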

Tuesday, September 26, 2006

Password security on popular sites


We use (and sometimes reuse, although we shouldn't) passwords on the web every day. There has been so much talk about password security lately that the least we should expect is that the big sites have proper password policies. I will single out two of them here:

  • digg.com – I've tried to register with them a couple of times, but was deterred by the following message: "Sorry - only the characters a-z, A-Z and 0-9 are allowed in passwords". This makes me think that they store their passwords in cleartext in the database or something, because I see no other reason for this arbitrary restriction.
  • blogger.com – Yes, the very service I'm using now. When I registered yesterday I used my usual password generation algorithm and generated a long password with special symbols. Everything went fine until the next day (today), when I tried to log in. So I used the password reminder feature and learnt that the maximum password length is 20 (I had used a password longer than that). This again leads me to believe that my password is stored in cleartext in a database field (which probably has a size of 20 characters).

What I would like to ask the web developers:

  • Only store the hashes, or better yet the salted hashes of my password
  • Allow me to choose an arbitrary password with arbitrary characters (or, if you want a limit for practical reasons, use a sensible one like 255 characters from the original 7-bit ASCII set). If you store hashes, the real length of the password has no effect on the amount of data you have to store (it is the same every time - see the quick demonstration after this list).
  • If you have a limit, specify this and use the correct HTML attributes to signal this to the browser (like maxlength for the input elements)
  • Update: As a commenter pointed out, you should transmit the password over HTTPS / SSL. For this it is enough if the target of the form is encrypted; the page the form resides on doesn't have to be encrypted and you are still 100% secure (from a packet-sniffing point of view). Still, it probably gives users a good feeling if the main page is also served over HTTPS (and it's probably not that big a performance hit, especially with persistent connections). However, remember that no encryption will protect you from spyware which installs itself directly into your browser (as a BHO, for example).
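
To illustrate the fixed-length point, here is a quick sketch using the openssl command-line tool (just my choice of tool for the demonstration, not something these sites necessarily use): no matter how long the input is, a SHA-1 digest is always 20 bytes (40 hex characters), so the column holding the - ideally salted - hash has a fixed size.

# both commands print a 40-character digest, regardless of the input length
echo -n "short" | openssl dgst -sha1
echo -n "a much longer passphrase with #special characters" | openssl dgst -sha1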

Update: Please note that I don't know whether they store my password as cleartext, as a hash or as a salted hash. There might well be other (historical, security) reasons for the problems I've mentioned. I've personally used the password reminder feature of Blogger, and they sent me a link with which I could change my password - so I have no evidence regarding the method they use to store passwords, and I've never used this feature on digg. But my point is that they are limiting my security (or improving it, if you consider the impossibility to log in an improvement :-)) by a choice which has no well-founded reason.

Update #2: Blogger now offers the possibility to log in with your Google account (just go to beta.blogger.com), which is not subject to the restrictions mentioned above.


Hack the Gibson - for Episode #50


Read the reason for these posts.

The issue of different ports: as you can read on Wikipedia, there are three categories of ports:

  • Well-known (common) ports: from 0 to 1023 (not 1024, but the first 1024! - we computer guys are sometimes a little weird with our numbers) - these are special in the sense that, for example, on Linux you must have root privileges to use them (some old programs used this as a primitive authentication method, reasoning that "this packet comes from computer X, which I trust, and it has a source port below 1024, so the user sending it must be root on that computer, so I can trust it" - until everybody discovered IP spoofing)
  • Registered ports: from 1024 to 49151 (why 49151? I don't know, but it looks nicer in hex: BFFF). These are listed by IANA (Internet Assigned Numbers Authority). You can find the listing here: http://www.iana.org/assignments/port-numbers. The list is only a recommendation and nobody will / wants to enforce it. It should be respected if you want to address a large public (so that they know on which port to connect for a given service), but on privately used networks you might want to choose different ports to avoid automatic exploitation tools.
  • Dynamic and/or Private Ports: from 49152 to 65535.

A little correction for the typist: it's DESQview, not DeskView.

In the podcast there is a big confusion between different types of virtual machines, and I can't blame them, since there are many different things on the market called virtual machines. My personal recommendation to anyone who feels confused by the podcast is to go read the Wikipedia article on virtual machines, which does a very good job of clarifying the issues.

STEVE: ... There’s literally a bitmap that represents the I/O addresses so that individual I/O addresses can be protected and others can be deprotected. - on the x86 architecture the I/O operations (there are two instructions, in and out) are privileged by default: code running in ring 3 normally can't execute them and an exception is raised, which the supervisor code in ring 0 can handle. To be fair, there is such a bitmap - the I/O permission bitmap stored in the TSS - which the operating system can use to grant ring 3 code access to individual ports, so this is one point where Steve is correct.

virtually zero overhead - luckily he said virtually, because even if it's done in hardware, there is an overhead. Modern processors don't execute every instruction in a single clock cycle; each instruction takes a number of clock cycles. If you listen to the interview with the researchers who are developing Singularity (a research OS) over at the Channel9 site, they say that turning on paging and memory protection can reduce the performance of the computer by something like 10% (I'm not sure what the exact number was). It's a penalty that must be accepted for current OSs, but in their OS, for example, they can avoid turning it on because of the checks they make on the programs at compile time.

LEO: She’s got the Red Pill, I think, is a solution to the Blue Pill. STEVE: Exactly. LEO: As I remember. Pluses for Leo for saying the magic words "as I remember", but a big minus for Steve for saying "exactly" to a half-true statement. And again, attribution: the researcher is Joanna Rutkowska and her site is invisiblethings.org. Now to clarify: the Red Pill is a technique for detecting traditional (software) virtual machines. The Blue Pill is a mini virtual machine (a hypervisor, to use the correct technical term) which runs on the new chips launched by AMD and Intel with virtualization support, and no, it can't be detected by the Red Pill, because the Red Pill wasn't designed for detecting it! In fact there is a debate going on about whether such a program (a hypervisor) is detectable at all from inside the computer if it's running on these new processors.

due to insecurities that exist in the way our current operating systems have been implemented - the Blue Pill has nothing to do with the way OSs are implemented. It simply uses a new set of hardware instructions implemented in a new generation of processors. The only "insecurity" would be the fact that most users run with administrative or near-administrative privileges, which makes it possible for any program to get into kernel mode (by loading a driver, for example), from where it can use these new instructions. But this isn't an implementation insecurity, it's a configuration problem.

STEVE: ...I’ve ended up developing up some very cool technology to allow zero scripting, pure CSS, beautiful hierarchical menus. - as I've said earlier, attribution. Go to any decent web developer and tell him that you've developed a pure CSS menuing system and he'll tell you: yeah, that's old news. In fact, do a Google search on pure CSS menus to see how many people have already described this technique and how old it is.

Well, and most menus are generally JavaScript. They will still function maybe in a crippled fashion, but you need to turn scripting on in order to get a next-level dropdown to work. - two words: progressive enhancement. It seems that Steve is only now discovering techniques that have been floating around for 5+ years. This is not an inherently bad thing, but please don't play the wise old guru who has just developed a new technique; give credit to the people who deserve it. Nobody can know everything in the computer industry, and that's OK, because we see farther by standing on the shoulders of giants. I've heard somewhere (and sorry for not knowing the exact source, but hey, at least I don't claim this is my idea - I think it was one of Cory Doctorow's speeches) that e-mail is the way it is because it was created by researchers / scientists. That's why the original text appears when you hit reply and you can insert your comments in between. Please, Mr. Gibson, try to use more science and less marketing.

Hack the Gibson - for Episode #58


Read the reason for these posts.

This episode was actually quite good and as far as I can tell there were no errors in it. But I just wanted to get the word out: unregister vgx.dll (instructions here - towards the middle of the page where it says "Suggested Actions") and / or use a better browser (this one is also quite good).

Note: the action suggested on the Microsoft page I referred you to earlier can cause problems on non-English versions of Windows. In those cases use the command regsvr32 /u "%CommonProgramFiles%\Microsoft Shared\VGX\vgx.dll" (taken from the F-Secure blog).
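
And a note for later (a sketch; the path assumes a default installation): once the official patch is installed, the same DLL can be re-registered to restore VML support.

rem re-register the VML renderer after the vulnerability has been patched
regsvr32 "%CommonProgramFiles%\Microsoft Shared\VGX\vgx.dll"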

A little rant here about the "not invented here" syndrome. Why is it that people / companies feel the need to reinvent stuff? There is a perfectly good vector graphics format (SVG, or Scalable Vector Graphics), but someone at Microsoft felt the need to create their own proprietary version of it. There are some very smart people at Microsoft, but even they can't be smarter than the hundreds of people who work on open standards and open code. Because it is a company, as soon as you do some work there you have limitations (imposed by the marketing and financial departments), so why not take open standards and open code (if it's under an acceptable license, of course) and implement that?

Hack the Gibson!


First a piece of advice: don't hack the Gibson if you don't have written permission to do so :-). And if you don't get the reference, go watch the movie. This series of posts wants to be an unofficial errata for the Security Now! podcast by Steve Gibson (this is the first and only time I'll post this link on my blog). In my opinion the problem with Steve is that he doesn't always know 100% of the subjects he's talking about (which is OK), but doesn't want to admit it (which is really, really not OK). I've tried to contact him several times with corrections about things he said on the podcast, with no success. When I heard that other, better-known people also had no success in getting him to correct his errors, I decided to start this mini-series.

I plan to go in parallel with the current episodes and sometimes pick up an older one. I don't have time to listen to all the episodes, so I apologize if the things I mention were already corrected in another episode. Also, if I make mistakes, please tell me, so that we can improve the quality of the information found on the web. And lastly: the podcast is quite enjoyable, certainly in the upper 15% of the podcasts I've tried, but you must take the information with a grain of salt.

Monday, September 25, 2006

print "Hello World!";


I'm a hacker / security enthusiast / computer junkie, and this will hopefully be the place where I can rant about things, let off some steam and share useful information. I hope you find something helpful here (sooner or later).

A word about the title of the blog: the hype in the computer industry annoys me to no end. The misinformation, half-truths and straight-out lies propagated by corporate PR spin machines and by people who only think they know make me despair. Hopefully this blog will help clear things up a little. And remember: even I'm not right every time, so please provide feedback, so that this can become a repository of quality information.