
Monday, April 28, 2008

Think Vitamin compromised?


I'm pretty sure I was not hallucinating... Earlier, when I was reading Developing with Google App Engine, Part I in my RSS reader, I noticed some spammy links at the end of the article (the kind offering free stuff). I visited the original page, and sure enough, there it was.

It all seems to be cleared up now (they also reissued the feed). This once again shows that these days there is no such thing as a site that is safe to visit. I'm also awaiting the result of their investigation (hopefully they will publish something, so that attention can be drawn to these kinds of attacks). I suspect that it was an SQL injection, but I have no way of knowing.

Update: they haven't published a statement yet. I've sent them an inquiry about the matter, but have yet to receive a reply.

Random thoughts and commentary


Via the Erratasec blog: Race to Zero. From the webpage:

The Race to Zero contest is being held during Defcon 16 at the Riviera Hotel in Las Vegas, 8-10 August 2008.

The event involves contestants being given a sample set of viruses and malcode to modify and upload through the contest portal. The portal passes the modified samples through a number of antivirus engines and determines if the sample is a known threat. The first team or individual to pass their sample past all antivirus engines undetected wins that round. Each round increases in complexity as the contest progresses.

My prediction (if I may be so bold): someone will come up with a homebrew protector and defeat all the AV engines in five minutes (unless some hyper-sensitive engines are used, but (a) even those can be defeated, you just have to work harder to make your code look innocent, and (b) such engines are usable only in very limited settings due to their high false-positive rate). This will bring to public attention (once again) the fact that AV products protect against the known, not the unknown, and if you base your security solely on them instead of considering layered approaches, you have a high probability of being affected. And AV products provide almost zero protection against malware specifically targeted at your company!

Via AlertLogic: To defeat a malicious botnet, build a friendly one. First of all, as the AlertLogic post noted, the title is a very bad one (just like the story coming out of MS Research that they are planning to use worms to distribute patches). Second, this sounds like building a friendly fast-flux network. Three comments:

  • Who in their right mind would agree to use her/his computer for something like this?
  • It just shifts the target from the content-server to the content-locator server (DNS) which can be taken down (although not that easily)
  • It still is possible to DDoS the service if it does at least some processing (like querying a database) and doesn't just serve up static pages (which can effectively be cached by the routing nodes)

Via the Hacker Webzine blog: Overflows The Visual & Audible Way. This is way cool. Although I've dabbled with creating custom chips (using FPGAs), actually hearing the computations is incredible.

This also reminds me of something I've been pondering: how under-utilized our senses are by current UIs. You can't find many examples of meaningful and useful audio feedback outside of computer games, for example, although IMHO it is a very good method to notify the user of low-priority (or even high-priority) events.

A few thoughts about the security implications of Amazon's computing cloud (EC2)

Nix - a purely functional package manager. This is a *NIX solution for DLL hell. I've skimmed through the papers because I was interested in how they handle the update of different but dependent components (for example, let's say that OpenSSL has a bug and I upgrade it - how will Apache, which depends on it, react to this?). Nothing very interesting (but maybe I didn't get to the core of it). If you want to see a working and widely used implementation of something similar, check out Windows Side by Side (or SxS as it's better known). There is also a Channel 9 video about it.

Wine is nicely enabled? - not so much for me. I never managed to make anything really work under Wine and was plagued by small (but very annoying) details. These days I just run a Qemu VM with XP in the background and RDP into it.

Git, a distributed version control system, and merging: Git merging by example. It all boils down to Git just doing the right thing (like Perl).

Via the Grand Stream Dreams blog:

Finally, a post about hardware security. Very good and very scary.

HTTP Redirection with PHP - a complete solution


The problem: you need a generic code which redirects to the current URL or some URL relative to the current one.

One of the reasons you might want to do this is to avoid the possibility of users submitting a POST form multiple times. Even though both IE and FF (and I suppose also Opera and Safari) give warnings in such cases, you should avoid depending, where possible, on end users making the right decision.

Also, why generic - why not hardcode it in a configuration file? The first answer is that you have one less parameter to set when deploying the application (and deployment can mean many things: testing servers, development servers, production servers, or maybe the script is a product used by many people on many different servers). The second is that the same server can (possibly) be referenced by multiple names:

  • By IP address
  • As localhost, 127.0.0.1, etc, when you are locally on the server
  • By an internal DNS/WINS name (if it is a Windows box for example)
  • By an external DNS name
  • By the IP address of other interfaces (real, or virtual like a VPN)
  • ...

You must replicate the exact URL type in the redirect, otherwise cookies will be invalidated at the very least, or the browser will be completely unable to connect (if you give an internal IP address to an external user, for example).

Sidenote: the easy solution would be to use a relative URL, like you can in HTML tags. Unfortunately the HTTP/1.1 standard (as well as the 1.0 standard) specifically states that (emphasis added):

The field value consists of a single absolute URI.

Also, there is no HTTP header corresponding to the META refresh method, although the attribute name (http-equiv) certainly seems to imply one, and in fact that is the case in other situations (for example with Content-Type). Also, using the META solution has many drawbacks:

  • It is deprecated
  • It can only be used for HTML content (not for images, CSS, javascript, etc)
  • It assumes that the receiver is a full browser rather than an automated client like search engine crawlers, curl, wget, etc.

Getting back to the problem, we must determine the following three parts of the URL:

  1. the protocol (HTTP or HTTPS)
  2. the server name/IP (and which was used) together with the port (which may be implicit or explicit)
  3. the query string (all that comes after the server/port)

By the way, I don't claim to have invented something new/unique - the pieces can be found in the documentation and the user comments, I just put them together. Also, take care that this only applies to PHP running under Apache (the most common scenario); I have no idea if/how this applies to IIS, for example (although it should work).

Finally some code:

print "The full URL is: " 
  . (('on' == @$_SERVER['HTTPS']) ? 'https://' : 'http://')
  . (isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : $_SERVER['SERVER_NAME'])
  . $_SERVER['REQUEST_URI'];

The part which usually isn't found in the examples is the http/https part. Also note that the HTTP_HOST value will contain the exact name you used to access the site (so if you typed localhost, it will contain localhost; if you typed 127.0.0.5:80, it will contain 127.0.0.5:80; and so on).

The redirection will work like this then:

$target_url = (('on' == @$_SERVER['HTTPS']) ? 'https://' : 'http://')
  . (isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : $_SERVER['SERVER_NAME'])
  . $_SERVER['REQUEST_URI'];
header("Location: $target_url");
exit;

A common usecase (for me at least) is to remove the GET parameters before the redirection:

$target_url = (('on' == @$_SERVER['HTTPS']) ? 'https://' : 'http://')
  . (isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : $_SERVER['SERVER_NAME'])
  . $_SERVER['REQUEST_URI'];
$target_url = preg_replace('/\\?.*/s', '', $target_url); 
header("Location: $target_url");
exit;

Also, take care that, as the documentation notes, session IDs are not automagically included in the URL even if URL rewriting is activated, so you need to include them manually if you use this feature. (I won't present code for this, though, since it can be quite complicated depending on the use case - do you need to handle existing GET parameters? Are those parameters always present? Etc.)

Update: added a fallback to "SERVER_NAME" to handle situations where the connecting client doesn't send the "Host" header (i.e. it isn't HTTP/1.1).

Tuesday, April 22, 2008

Finding the installed files for modules in Perl


First of all if you want to quickly find out if a Perl module is installed on a system or not, you can do the following:

perl -e"use Foo;"

If the module is installed, this won't print anything; however, if it isn't, it should say something like (yes, shame on me, I haven't upgraded to 5.10 yet):

Can't locate Foo.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.8.8 /usr/local/share/perl/5.8.8 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.8 /usr/share/perl/5.8 /usr/local/lib/site_perl .) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.

Now let's say you want to find the files associated with a given module (to see how something is implemented, for example, or to fix a bug locally - although editing the files directly isn't the best solution, because a future update might overwrite the fixed version). The simplest solution would be to do something like this:

perl -e"print join(\"\n\", @INC), \"\n\";"

Which should result in something like this:

/etc/perl
/usr/local/lib/perl/5.8.8
/usr/local/share/perl/5.8.8
/usr/lib/perl5
/usr/share/perl5
/usr/lib/perl/5.8
/usr/share/perl/5.8
/usr/local/lib/site_perl
.

After which you could inspect each directory listed to see if it contains the module you're looking for. A more advanced solution would be to use ExtUtils::Installed like this:

#!/usr/bin/perl
use strict;
use warnings;
use ExtUtils::Installed;

my ($inst) = ExtUtils::Installed->new();
my @all_files = $inst->files('DBI', 'prog');
print join("\n", @all_files);

In the files call, "DBI" is the name of the module (take care that you can use only the top-level module name, so Net is ok, but Net::FTP is not) and the second argument is the type of files we're looking for ("prog" means code, meaning that we are not interested in documentation files).

Finally, if you want a very powerful solution, take a look at perlwh - a 'which' for Perl modules. Looking at the code, it seems to handle many quirks the basic solution above can't.

Sunday, April 20, 2008

Configuring PPPoE under Windows XP for transparent operation


Lately I've been on a quest to provide a simple and highly secure configuration for Windows XP. As the last post focused on security, here is a small usability tip:

If you are using a PPPoE link (with an ADSL line connected directly to the computer for example), here are three things you can do to make the user's life easier:

First, unbind any protocol (like TCP/IP) from the ADSL interface (not the virtual connection!) to avoid warnings about being unable to find IP addresses, etc. PPPoE is a layer-two protocol and in an ADSL context it is only used to encapsulate packets at the Ethernet level (ie it has no relation to TCP/IP). Thus your settings screen should look similar to the following:

Second, to make the connection autodial, set it to avoid prompting for the username and password (which you must have saved) and to hide the interface during dialing (to avoid the possibility of the user clicking "Cancel") and finally create a shortcut to the connection in the StartUp folder:

Finally, if you are using a non-standard DNS provider, it may be the case that it is slow to respond to the first query (I found this to be the case with OpenDNS for example, but I would still recommend using it). To circumvent it, include the following batch file in the startup folder (with the option to minimize set to avoid user confusion):

@ECHO OFF
sleep.exe 30
nslookup google.com > NUL
cls

What this does is it waits for the connection to finish (or rather it waits 30 seconds, which should be sufficient in most of the cases), after which it tries to perform a DNS lookup. I found that this is sufficient to "kickstart" the DNS lookup process and all further lookups are speedy. The sleep utility is not included by default with Windows XP, but you can download it from many places, for example from here.

Update: I've seen some installations where the ISP gives an IP address to the customer on the "external" connection. In this case it is enough to unbind the protocol handlers other than TCP/IP. Netware services are definitely not needed, so you can safely unbind them (and even uninstall them). In fact, having them installed can make your welcome screen (the graphical login screen which you can use to select the user by clicking on an image rather than typing the username) and fast user switching under Windows XP non-functional, as described in this Microsoft Knowledge Base article.

Thursday, April 17, 2008

Port redirection under Windows


When you want to forward a port, there are several possibilities from iptables to SSH. However I needed a low-latency link with no encryption or compression (because the protocol running over it was encrypted and double-encryption just slows things down without any substantial benefit in this case). My first idea was to chain two Netcat instances together like this:

nc -L -p [new port] -e "nc.exe [other host] [old port]" 0.0.0.0

(In this context 0.0.0.0 means to listen on all the interfaces, because Netcat defaults to the safe thing to do and listens only on the localhost interface - of course if you have a multi-home situation you can put a given interface there to listen only on that)

However this didn't seem to work, and netcat kept erroring out on me with "invalid connection". Then a little searching turned up this blog post from 2004: Port redirection in Windows and two tool recommendations: stunnel for tunneling TCP streams over SSL (I didn't try this, but it is probably useful when you can't use SSH - you don't have an SSH account or an SSH server on one or both of the machines) and rinetd. This was exactly what I needed. To run it, create a configuration file (let's say "rinetd.conf") with the following content (to get the equivalent result to the netcat version):

0.0.0.0 [new port] [other host] [old port]

Then run rinetd -c rinetd.conf. The software has other useful features like logging, allow and deny rules, and so on, and it comes with source code :-).

You affect all of us


I finally decided to sit down and write the tutorial about configuring Windows XP in a secure fashion, based on my experiences over the last four years or so. And I emphasize again: these methods worked well for me on several computers and even in "tough" scenarios like writing and debugging software. And then I read this: the first five things I do when installing Vista, with the first one being: "I enable (and use) the administrator account!".

I don't mean to offend anybody. Being a programmer and having a blog puts you somewhere in the top 5% of the population from a computer-literacy point of view (then again, 39% of all statistics are made up on the spot :-)). But even so, not all of your choices are well informed. Consider this:

  • You are an example for other people. Do you want all of them to run as Administrator? Do you really trust all of them to do the right things all the time?
  • You are a programmer. By developing as Administrator you are more likely to write programs which run poorly as non-admin. And while you might think "my mom shouldn't run as admin", that's what you're forcing her to do (indirectly) because of software which doesn't run well (or at all) when the user isn't an Admin
  • Do you really believe that you have a 100.0% accurate ability to recognize malware? As a virus researcher myself, I can assure you that you most likely don't!

Running as a restricted user is all about having a safety net and the ability to relax and not second-guess yourself all the time about your actions. Please make an effort and start the change, so that we can clean up the Internet!

Monday, April 14, 2008

Setting the CPU speed visually in Ubuntu


This is a nice little tip from a friend on how to make your CPU frequency indicator actually work.

Some things I discovered in the process:

  • The sticky bit, which, if I understand correctly, is another way of doing privilege elevation under *NIX (the other two I know of are sudo and su). This sounds scary, so I searched around for methods of finding such files on my system. The results led me to a forum posting recommending
    sudo find / -perm -1000
    however, this seems to display only directories (and if I understand correctly, the sticky bit has a different meaning on directories - it's confusing). So finally I came up with the following one-liner, which seems to work:
    sudo ls -hlR / | grep \^...s.\*
  • there is a dpkg-reconfigure command, which can be used to invoke a configuration interface for packages. However, it doesn't seem that this configuration interface is standardized in any way, shape or form, or that it is discoverable programmatically (meaning that I didn't find a way to discover which of my installed packages have "hidden options" - and I don't feel like running the command on all ~2000 installed packages). For the gnome-applets package, for example, it asks if you wish to set the sticky bit for the cpufreq selector :-)
    sudo dpkg-reconfigure gnome-applets
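As an aside: if what the ls | grep pipeline above is really after is setuid files (the actual privilege-elevation mechanism), find can match that bit directly - -4000 is the setuid bit, just as -1000 is the sticky bit. /usr/bin is used as an example starting point to keep the run short:

```shell
# List regular files with the setuid bit set; -perm -4000 matches the
# setuid bit, -type f skips directories (where the bits mean other things),
# and stderr is discarded to hide unreadable-directory complaints
find /usr/bin -perm -4000 -type f 2>/dev/null
```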

In conclusion I'm still setting the CPU frequency with

sudo cpufreq-set -d [minimum frequency]MHz -u [maximum frequency]MHz

Thank god for those bash shortcuts, so that I don't have to remember the complete command :-)

Trees in PostgreSQL


depesz has written another of his great articles. There isn't really much I can add, other than that it's very nice and doesn't use any PostgreSQL-specific elements (like arrays), so it can easily be ported to other DB systems which support triggers.

Personally I only had to implement tree structures once, and then I used an additional field in the table enumerating all the parent nodes in an array and maintained it externally (as opposed to internally with triggers). With a GIN index it is perfect for "give me all the parents and all the children of this node" type of queries.

Sunday, April 13, 2008

One letter, big difference


I had been trying for a couple of days to send a mail to the DokuWiki development list regarding a possible problem in their mailing code, and my mail seemed to go into a black hole. I started to have conspiracy theories: my e-mail address was banned. Yahoo was banned. The end is near. Finally, Yahoo was kind enough to give a delivery failure notice stating that:

Hi. This is the qmail-send program at yahoo.com.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

<[email protected]>:
Sorry, I couldn't find a mail exchanger or IP address. (#5.4.4)

Do you see my error yet? Right, I didn't either. So I opened up a console and typed:

$ dig freelist.org MX

; <<>> DiG 9.4.1-P1 <<>> freelist.org MX
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29460
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;freelist.org.                  IN      MX

;; ANSWER SECTION:
freelist.org.           7200    IN      MX      0 dev.null.

;; Query time: 280 msec
;; SERVER: 192.168.1.83#53(192.168.1.83)
;; WHEN: Sun Apr 13 23:26:59 2008
;; MSG SIZE  rcvd: 54

At this point I was convinced that their mail server was down. However, this explanation didn't seem right, since I had received e-mail from the list regularly, even in the last couple of days. Finally it hit me:

The address of the mailing list is freelists.org, not freelist.org.

A couple of conclusions: always double check your assumptions. Assume that you're wrong (80% of the time you will be right - then again, 58% of statistics are made up on the spot :-)). Rely on your computer to prefill as much information in e-mails (and other communication media) as possible. This also shows why typo-squatting is such a lucrative business. And if you are a business, try to anticipate such incidents and buy the given domain names if available. Consumers will thank you! (Or, more probably, they will never notice, but at least they will find you.)

The productive worker


This presentation seems to have popped up on a lot of blogs lately (some may even say that it's a successful meme):

While it sounds very nice and has that "everybody should do it this way!" ring to it, there are many problems a company must overcome even to get close to this state. Some of them are organizational (meaning that management must buy into these concepts). Also, I think that some types of work are more suitable for this approach than others. For example, distributed teams need more communication than co-located ones. Larger teams need more communication than smaller ones.

Companies composed of multi-talented individuals can benefit from this very much if they realize that people shouldn't be put in a box and labeled with their job title. This is especially true of IT, where people share a common body of knowledge (for example, both a Java developer and a SysAdmin know how the HTTP protocol works) and it is not uncommon for people from other teams or even from other activity areas (see the previous example with the SysAdmin and the Java developer) to have good ideas / insights.

An essential prerequisite for this to happen is communication. Good communication. Constant communication. Communicate like your life depends on it! Pretend that you work on an open-source project and the only way to contact your peers is to send an e-mail. Communication doesn't have to be a pain! It should be "frictionless". You should always choose the most efficient method available for communication. When you send a personal e-mail, you're only communicating one to one, but when you put it on a mailing list or a wiki, your communication instantly becomes broadcast!

You can never have too much information! Existing information can be filtered; however, mankind has yet to discover a foolproof way to create information from the void.

Internet Explorer + Frames = Headache


So let's say you have the following HTML snippet:

<html>
    <frameset rows="20,*" border="0" frameborder="no">
        <frame name="menu" src="menu_frame.html" scrolling="no" noresize="1">
        <frame name="work_frame" src="">
    </frameset>
</html>

First of all you would say: but frames are so 1998! And you would be right. Frames are outmoded, deprecated and a usability nightmare (because you can't bookmark the exact state of the frameset), but you have to use them in certain situations - for example, when providing a "unified menu" on an intranet where you can't (or don't want to) touch all the sub-sites referenced. "The right tool for the right job."

Back to our problem: the page from the upper frame contained a bunch of links targeted at the lower frame in the form of:

<a href="http://www.example.com" target="work_frame">Example</a>

The problem was that while in all "sane" browsers the link opened in the lower frame, Internet Explorer (both versions 6 and 7) insisted on opening a new window for the link. Finally I got the idea to create a blank page and point the lower pane at it:

<html>
    <frameset rows="20,*" border="0" frameborder="no">
        <frame name="menu" src="menu_frame.html" scrolling="no" noresize="1">
        <frame name="work_frame" src="blank.html">
    </frameset>
</html>

Magically everything worked. So there you go: IE + Frames = Headache (from banging your head against the desk), or at least Magic.

PS. I never tried using "about:blank" instead of an explicit blank page; it seems to be standard (I don't know if officially or unofficially) among the major browsers. Possibly it would also work (and it has the advantage that you don't have to explicitly create an "empty" HTML file).

Small Qemu tips


It's official: Ubuntu has the best documentation out there. There is almost no problem you can't fix by typing "Ubuntu [description of the problem]" or "Ubuntu [error message]" into your favorite search engine.

For example, here you can find a very exhaustive documentation on installing Qemu and Kqemu (the Kqemu part is the really interesting one).

One interesting part it doesn't explain is the "-localtime" switch (although it is correctly used in the examples). This switch tells Qemu to set the clock of the virtual machine to the local time rather than UTC (also known as GMT). This is important because Windows and Linux have two different philosophies regarding the meaning of the BIOS (battery-powered) clock. Linux assumes that it represents the UTC/GMT time and uses the configured time zone to calculate the local time from it whenever needed, while Windows (and DOS) assumes that it represents the local time and uses the time zone to calculate the GMT time whenever needed. (This is why dual-booting is problematic, unless both OSs use a network service to synchronize their time and/or you tweak Linux to set the BIOS clock to the local time.) In conclusion: if you are running a Windows/DOS guest with Qemu from Linux, don't forget to specify the "-localtime" switch.
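The Linux half of this convention is easy to observe with date (the format string below is just an illustration): the kernel keeps time in UTC internally and applies the time zone only when displaying it:

```shell
date -u '+%H:%M %Z'   # UTC - what Linux assumes the hardware clock holds
date    '+%H:%M %Z'   # local time - what Windows assumes it holds
```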

And the last tip: Qemu recently moved from CVS to SVN. The new checkout command to obtain the source code is:

svn checkout http://svn.savannah.gnu.org/svn/qemu/

or, if you are interested in just the trunk (not the branches and tags):

svn checkout http://svn.savannah.gnu.org/svn/qemu/trunk/

In case you don't have Subversion installed (which is the default on Ubuntu) and you are running a Debian-based system, this can be resolved with:

sudo apt-get install subversion

Update: changed sv to savannah to be correct (sv seems to deliver a 301 redirect which svn doesn't seem to handle).

Windows XP High-Security Configuration


Update: I found out that SRP has some rather nasty limitations (including the ability to circumvent it even as a limited user) which make it much less effective than I initially thought. I still think it is very useful, but please read the linked article and make your own judgement call.

As I mentioned in a previous post, non-standard configurations are very effective in preventing general malware attacks. In this post I want to give concrete advice on how to set up a Windows XP system so that there is no (or only minimal) usability loss, but the resulting system is highly resistant to attacks (both because it is a non-standard configuration and because it uses the built-in security features of Windows XP to a fuller extent).

This configuration is suitable for almost everyone using Windows XP. It is very suitable for scenarios where the computer is used for certain specific tasks only (like reading e-mail, browsing, playing a few games, etc) by non-expert users (and this is not a derogatory term, because not everybody has to be a computer geek, just as not everybody who drives a car needs to be a mechanic). I would encourage everyone who is supporting / maintaining other people's Windows XP machines (for family members, relatives, friends or even for companies) to take a look at this post and think about implementing at least some of the advice.

This post will focus on the single-user/single-computer scenario. In companies, where the computers are members of a central domain, the principles behind the actions are the same; however, there are more efficient ways of making the same changes en masse on all the computers (like Group Policies) than performing the steps on every single computer one by one.

Who wouldn't want to perform these steps? I have used machines configured this way for many tasks, including ones which are commonly thought of as requiring high privileges (like developing software), with success. Really, the only scenario I can think of is people who frequently install/uninstall software (for reviewing purposes, for example), although even then I would recommend using a Virtual Machine for the testing (unless the software really pushes the limits of the hardware, like games, video-editing software and so on).

Now back to our task of making Windows XP highly secure. The process consists of two high level steps:

  1. Configuring Software Restriction Policies (a lesser-known facility included in Windows XP with which we can control which software can run and which can't - a basic application whitelisting/blacklisting tool)
  2. Reconfiguring the user account so that it is a standard, limited account rather than a highly privileged one

All the tools described here are either already included in Windows XP or can be freely downloaded from the Internet (in which case links will be provided). A small caveat: free in this context means free for home use. If you wish to use these tools in a corporate environment, please check out their licensing terms for details.

Step 1

The first step is installing Windows XP (if you haven't done so already) and any additional software that is needed on the computer.

Also, don't forget to apply all the updates available for Windows XP, which for a clean install at this point means around 90 patches. This will get slightly better later in the year, when Microsoft releases Service Pack 3 for Windows XP with all the updates bundled.

The idea is to get the machine into a state where all the software (and pieces of software, like plugins) is configured the way the end users need it, since after applying the security settings, major changes will be a three-step process: dropping the security, making the changes and raising the security again. Of course, I'm talking about major changes here, like installing/uninstalling software, not small changes like changing the homepage of your browser.

While installing an Anti-Virus product is not strictly necessary, because the configuration changes will render almost all malware unusable, having a multi-layer defense never hurts (also, in corporations it may be needed for compliance reasons). There are a couple of products out there which are free for personal use (like Avast!, AntiVir, AVG and so on). From personal experience I would recommend AVG (again, this is free for personal/home use only) because it has a rather small performance impact and can be configured to auto-update silently. If you choose to install AVG, be careful (since they do try to upsell you to the for-pay version - which is understandable) and always double check that you click the button corresponding to the free version.

Step 1a - configuring AVG

Update: since writing this tutorial, Grisoft (the makers of AVG) released version 8 of their product, and it is unclear how long they will be supporting version 7.5. If you wish to go with AVG, check out the updated tutorial about AVG 8.

In case you decided to go with AVG as your anti-malware protection layer, here are some quick tips. These suggestions were written with the idea that the setup targets systems which are not operated by technologically savvy users, and thus useless messages should be hidden from them to avoid confusion.

First, make sure that you download the free version. As I said before, there are some tricks on the webpage to upsell you (like placing the for-pay version in the table before the free one).

During the installation you will be asked if the system should be scanned daily (in addition to the on-access scanner). Given all the other layers of safety, you can safely disable this to avoid performance degradation.

avg_disable_daily_scan

Also, make sure to select the option to update the product from the Internet and never to ask the user about this (if you miss this during the installation, don't worry, it can be changed later).

avg_update_options

After the installation is done, go to the AVG Control Center, right-click on the update module and select Properties. There are three important settings here, also shown in the picture below:

  • Update upon next computer restart - to avoid bothering the user with messages
  • Do not ask for the update source - the setting from the previous step if you missed it during installation
  • Display information about update process - this one must be unchecked, again to avoid messages which may be confusing for the users, like the one which can be seen in the second image

avg_update_manager_properties

avg_update_done

Step 2

Make the changes from the Control Panel which you need to. Some of the typical ones would be:

  • Setting the screen resolution and color depth (if you are using an LCD, be sure to set the resolution to the native resolution of the LCD panel)
  • Enable the automatic updates. For computers which you "visit" frequently I would recommend "Download updates for me, but let me choose when to install them", while for computers which you get to see less often, "Automatic" is the recommended setting (the thinking being that Microsoft releases very good quality patches, and if you can't review and apply them in a timely manner - let's say two to three weeks after their release at most - it's better to apply them automatically than to stay vulnerable)
    automatic_update_settings

    If you enable the automatic installation of the updates, the user will see the following screen during shutdown:

    shutdown_with_updates

  • Deactivate the accessibility options unless you need them. While it is great that they exist, they can confuse you even if you know what's going on. For example, the shortcut for the "Sticky Keys" feature is holding down the Shift key for five seconds, something which often happens to me when I stop to think while typing. A quick tip: after this, your Shift key may seem to be "stuck" even if you answer "no" in the dialog asking to confirm the activation of the feature. You can unstick it by pressing Ctrl+Alt+Shift at the same time and releasing them (I have no clue why this works, but it does - credit goes to a co-worker of mine who pointed this out). Also, watch out: you have to click on the "Settings" button for each feature on each tab separately and uncheck the "Use shortcut" option:

    accessibility_options

  • Make sure that the Windows Firewall is enabled (it is by default) and that the exceptions added are strictly necessary (for example AVG adds itself to the exception list). Three very important settings are:
    • File and Printer sharing - this should be off unless the computer is behind a router and you wish to share files with computers connected to the same router (so if you are behind a router but don't want to share files, turn it off). Sidenote: if you are using a router, make sure to change the password for its administration interface.
    • Remote Assistance - this is a technology which uses a combination of Microsoft-specific technologies (MSN Messenger and Remote Desktop) to provide on-demand remote access to the computer. Unless you anticipate it being used (which in my experience has a very low probability), turn it off.
    • Remote Desktop - this is related to Remote Assistance. Again, unless you anticipate that remote access to the computer will be required, turn it off. If you think that you will need access, however, Remote Desktop is a very good technology (and already built in). Still, move it to a non-standard port and add a firewall rule to open access on that port. Also, you don't need to give everybody access to this port; you can limit it, for example, to the IP address of the person who will be providing remote assistance.

      firewall_setup_remote_desktop

  • Add any alternate languages for the keyboard that you may need.
  • Make sure that you know the "name" of the computer. This is set during the setup of Windows and usually contains a set of seemingly random letters/numbers (I'm sure there is "reason behind the madness", that is, a well-defined algorithm for deriving these names, for example from the MAC address - I just don't know what it is, nor is it important). You can find it by going to the Control Panel, double-clicking on System and checking the Computer Name tab (the name doesn't include the final dot, so in the example shown below it is "xpbam", not "xpbam."). You can also use this opportunity to change the name to something more memorable - don't worry about conflicts, the name must only be unique inside the local network. For example, if the computer will be connected directly to the Internet, you can choose any name you want, because the "local network" includes just this single computer (if you disabled file sharing in the firewall). If the computer is behind a router, the local network means all the devices connected to that router.

    system_properties

  • Also, make sure that you know the password for the "Administrator" account. This again is specified during setup, but if you don't remember or know it, change it from the command line by typing:
    net user Administrator "here comes the password"
    (remember to use a strong password!)

Step 3 - Setting up the Software Restriction Policy

After doing all this preparation work, we start getting to the "meat" of this process. Software Restriction Policy is an application whitelisting solution built into Windows XP which is not widely known, but is very useful for preventing attacks by unknown malware. The goal of this step is to set up a policy which makes it possible to run already installed applications, but prevents the user from running (and implicitly installing) new and unknown applications. If the applications the user needs were already set up during step one, this will cause no usability problems.

We start by running the "Local Security Settings" application (technically it is a Microsoft Management Console Add-in, but this is not important for this discussion). This can be done either by going to the Control Panel, Administrative Tools and double clicking "Local Security Settings" or typing "secpol.msc" at the command line.

The first time the tool is run, there is no software restriction policy set. You have to create one by right-clicking on "Software Restriction Policies" in the left panel and selecting "Create New Policies".

The second step is to add the allow rules. Go to the "Additional Rules" section and select "Action", "New Path Rule" and add two separate "Unrestricted" rules for "C:\Program Files" and "C:\Windows". If these paths differ because you installed Windows on a different drive (so it is D:\Windows, not C:\Windows, for example) or because you are using a non-English version of Windows, change the paths accordingly. Also, if you have programs installed in other folders (not under Program Files), add those folders too. Software installed under "Program Files" does not need additional rules, since path rules act as prefix rules (meaning that the "Unrestricted" rule for Program Files allows us to run all the executables contained in the Program Files folder and all its subfolders).

add_software_restriction_policy

The next step is to disallow running applications from other locations. This is done by changing the default policy from "Unrestricted" to "Disallow". This can be done by going to "Security Levels", double clicking on "Disallowed" and clicking "Set as Default". (A warning will pop up saying that the new default policy is more restrictive than the previous one.)

software_restriction_policies_default_restricted

The next-to-last step is to apply these rules to DLLs as well (if you are interested in the technical details: unless you apply the policy to DLLs, it is possible to circumvent it by compiling an application into a DLL and using the rundll32 tool - which can be executed, since it is in the C:\Windows\System32 directory - to load the DLL). Microsoft disables this by default to prevent you from breaking things when not enough allow rules are set; however, the rules defined in the previous step are sufficient to ensure that this won't happen. To set this, select "Software Restriction Policies" in the left panel and double-click "Enforcement" on the right. In the dialog box make sure that the "All software files" option is checked, as shown in the figure below:

software_restriction_policies_include_dlls

Finally, shortcut files (.lnk files) must be exempted from the control of this mechanism, so that the start menu and the desktop remain usable (both of them contain a collection of shortcuts and would otherwise have needed either an "allow" rule for each shortcut file individually or an exception rule for the different paths which make up the start menu). This is not a security vulnerability, because you can't rename "malware.exe" to "malware.lnk" and still be able to run it (as opposed to "malware.com", which is runnable). To do this, select "Software Restriction Policies" in the left panel and double-click "Designated File Types" on the right. Select the entry for "LNK" and press "Delete".

software_restriction_policies_delete_link

Step 4 - Changing the user account privilege level

The other important step is to make the user account used on the computer a normally privileged one rather than a highly privileged one. To do this, run the Computer Management application from the command line: "compmgmt.msc".

Go to "Local Users and Groups", select "Users", right click on the current user, select "Properties", go to the "Member Of" page and add the "Users" group (make sure to use the plural form, not "User") and remove any other groups (like "Administrators").

users_member_of

There is a small problem with this method: by default, members of the "Users" group cannot change the current date/time on the computer. This is done to prevent users from falsifying timestamps on files and in the event log. However, a side-effect of this restriction is that double-clicking on the clock in the tray to get a calendar does not work (because the system immediately tries to acquire the "SeSystemtimePrivilege" and fails). This makes a common use case impossible: using the calendar to look up date/time information (like what's the date next Monday?) rather than actually changing the time. By the way, this has been fixed in Windows Vista, where the system only tries to acquire the privilege when the user actually wants to change the date/time (clicks "Ok").

To resolve this problem, we will grant the current user this privilege. While in theory this is a security hole (it can be used to falsify the event log, etc.), in practice on a home machine it isn't a problem. To do this we need the "ntrights" program from the Windows Resource Kit. You don't have to download the full resource kit, however; you can get the ntrights program separately from this site (don't be misled by the title saying "Windows 2000", it works just as well on Windows XP).

After having unpacked it to a location where it can be run from (as configured in step three - c:\windows for example), go to the command line and run:

ntrights.exe -u Users +r SeSystemtimePrivilege

Other tips

If you later need to temporarily revert the security settings (to install new software, for example), you can do it the following way:

  1. Start a command line as administrator (you will have to know the computer name and the password for the administrator account):
    runas /user:[computer name]\Administrator C:\Windows\System32\cmd.exe
  2. From this command line run the "Local Security Settings" application by typing "secpol.msc"
  3. Change the default policy from "Disallowed" to "Unrestricted" - this is needed because installers often execute files from non-standard places (like the temporary folders) which would be denied by the policy
  4. Run the installer from the administrative command prompt (so that it has the necessary rights to write to the "Program Files" folder for example)
  5. Change the default policy back to "Disallowed"

The executables denied by the Software Restriction Policy can be seen in the event viewer. To launch it, again use an administrative command prompt and type "eventvwr.msc" (this can be useful for debugging why certain applications fail to execute, or to check for suspicious activity).

eventlog_show_denied_prograps

Two minor improvements to the ease of use (at the expense of security) could be setting the system to auto-login and disabling the screen saver (or at least making sure that it doesn't ask the user to log-in).

Final words

If you have read this far, thank you. While it is true that Windows XP is on its way out, there will still be many systems running it in the coming years. Also, the "computer person supporting family, relatives, neighbors and so on" is becoming more and more commonplace as more people want/need to use the Internet. If you are that person, I plead with you: please take the time to look through this article and start implementing it, because it benefits everyone:

  • you won't have to clean up their computers bi-monthly
  • they won't have to work with a system running slower and slower
  • also, the risk of their private information getting exposed is reduced
  • the Internet community will have to deal with one less taken over computer sending out spam or participating in DDoS attacks

While it may seem like a lot of work to implement these measures (and this is a long post), after doing it once or twice the whole process shouldn't take more than fifteen minutes.

I don't claim that such a setup solves all security problems. It does not solve the problem of phishing, for example. It also doesn't mitigate buffer overflow problems, although in practice it can be useful against many of those types of attacks, because they are two-phased: in phase one they execute a small amount of code in the compromised browser, which downloads and executes the "phase two" malware. Given that we disallowed the execution of unknown executables, stage two will fail, rendering the whole attack ineffective. These settings will, however, prevent all of the USB malware which seems to have become quite popular these days.

So you still need to add multiple layers to your defense, like using a browser with built-in anti-phishing technology (such as Firefox or Internet Explorer 7) and using a filtering service such as OpenDNS. Still, there are other threats out there which may not be prevented by these steps, like Cross-Site Scripting (XSS) or Cross-Site Request Forgery (CSRF) attacks. These attacks can also be quite damaging because of all the web applications (web-mail, Internet banking, ...) we use. I plan to discuss a possible defense mechanism against them in an upcoming post.

Cheat sheet

Here is a quick rundown of all the things from above:

  1. Install and configure all the programs
  2. Create the software restriction policy
    • Run "secpol.msc"
    • Create a new SRP
    • Add "Unrestricted" rules for "C:\Windows" and "C:\Program Files" (as well as other directories you installed programs to)
    • Remove the ".lnk" file type from the list of filtered extensions (Designated File Types)
    • Make sure that both executables and DLL's are filtered (Enforcement)
    • Change the default policy from "Unrestricted" to "Disallowed"
  3. Remove extra privileges from the user
    • Run compmgmt.msc
    • Go to System Tools, Local Users and Groups, Users
    • Double click on the user
    • Add the "Users" group and remove all others on the "Member Of" tab
    • Use ntrights.exe to grant SeSystemtimePrivilege to the Users group:
      ntrights.exe -u Users +r SeSystemtimePrivilege
  4. When you need an administrative command prompt:
    runas /user:[computer name]\Administrator C:\Windows\System32\cmd.exe

Friday, April 11, 2008

Reading the fineprint in the documentation

0 comments

Following the DokuWiki development mailing list, I saw the following changelog:

Sun Apr  6 19:47:18 CEST 2008  Andreas Gohr 
  * work around strftime character limit on parsing namespace templates FS#1366

I checked the PHP documentation page (by the way, a quick tip: you can access the documentation for a given PHP function by appending the name of the function to the php.net URL, for example http://php.net/strftime) and sure enough it says: "Maximum length of this parameter is 1023 characters".

Distributed version control systems - why?

0 comments

Some time ago I finally had time to read the Subversion book and felt that all my questions were answered. I tried SVN many years back and failed miserably, but now I'm confident in my ability to use, install and maintain SVN. However, there seems to be a new buzz around distributed version control systems (like darcs, Mercurial - http://www.selenic.com/mercurial - and so on), which for the longest time I didn't get. It seemed to me that everything I need or could possibly need is in SVN. Then it hit me:

"Classical" version control systems like SVN are about keeping a central repository of "stuff" (mostly code, but it can be other things), enabling a large set of users to work on it concurrently and coordinating them to minimize friction, while also ensuring that they don't step on each other's toes. The versioning part of these systems is a side-effect of those goals (meaning that versions are primarily there as an accounting mechanism - a "who did what" type of thing).

Distributed version control systems, on the other hand, put the emphasis exactly on that: keeping a very granular history. To put it in a very oversimplified way: in my opinion, if you have permanent connectivity to your SVN server (it's on your local box, for example) and you "commit early, commit often", you basically have most of the advantages of a DVCS. Or to put it otherwise:

If you're using SVN, somebody can go away, work on a change for days (weeks, months) and come back with a big patch which you apply, and you'll see in the log that at the given commit a thousand lines of code changed, for example. With a DVCS (if I understand correctly) you would merge not only the patch, but also the history of the patch, getting a result similar to the branch-merge method (ie when the changes were gradually committed to a branch which got merged back into the trunk), but without the need for constant access to the SVN server.

In conclusion, currently I don't have any great interest in DVCS both because I have (almost) permanent connectivity to my repository and I already "commit early / commit often", but this (as almost all the things) may change in the future :-).

Update: Hanselminutes (a great podcast for every developer) has just published an episode about Git, another distributed version control system (this one written by Linus Torvalds himself and used to develop the Linux kernel). It contains some good discussion of Git from a Subversion user's point of view.

Wednesday, April 09, 2008

A small warning about ptkdb

0 comments

ptkdb is a GUI debugger for Perl (as opposed to the default console-based one), using the Tk toolkit for its windows. As far as I know it is one of the most advanced ones, discounting debuggers built into IDEs. Another advantage is its availability on many platforms (including Windows), including sources like the PPM repositories for ActivePerl (this is probably because Tk is available on all of these platforms, since it doesn't have any major dependencies, and because the debugger subsystem of Perl is very "pluggable" - you can write a custom debugger / profiler / run-time monitor in a couple of lines of Perl code). However, there is a gotcha which can bite you in some corner cases:

If you switch color depth while using it, the interface will break (keyboard shortcuts like Alt+R for running or Alt+Q for quitting will still work, but you will be unable to see anything in the interface). I tested it with Windows and ActivePerl 5.8.8, but it is probably also true for other versions. First of all you might ask: why would you change the color depth? Because I'm using RDP to work on the system, and depending on the available bandwidth I choose to go either full color or with 8 bits per pixel. So if I'm debugging a long-running process and reconnect using the lower color depth, the interface is gone. To be fair, this is a very edge case and also probably not the fault of ptkdb but rather of Tk; it just happens that I observed it using ptkdb.

The GUI can't be "revived" by reconnecting with the correct color depth. My current solution is to make sure to always connect using the correct (same) color depth. Also, I haven't tested it in the other direction (going from 8 bits to 24 bits) which may work.

To moderate or not to moderate

0 comments

Recently I watched a WordCamp Dallas presentation entitled 45 Ways to Power Up Your Blog. One of the things I liked was the remark that you don't need to be sharply focused on one domain to have a successful blog. While my blog is first and foremost for venting, and by no means do I have a "five year plan" for it, it is nice to hear that other people are also applying the more "all over the map" style.

However, I wanted to talk about another point that the speaker (John Pozadzides) made during the presentation: not to preemptively hand-moderate comments, but to let automated systems approve/reject them and go in later to remove any remaining offending posts. As you know, this is not the way I do it (one of the reasons being that I don't host my own blog and thus can't really install/configure arbitrary moderation plugins, the other being that I'm a bit paranoid), and although I vowed not to censor posts for reasons other than them being spam, today I had a very "on the fence" experience:

I got a comment on my recent post about alternative configurations as a way to prevent malware, pushing a "system optimizer" type of product (the comment is not published). First I looked at the site it pushed and poked around, thinking that this may be one of those fake anti-malware products and I could reject the comment with a clear conscience. However, I didn't find any proof of that. Finally I rejected it on the basis that it didn't make any substantial contribution to the discussion (it was basically a one-liner saying something along the lines of "if you use product X you won't have this problem"), but it was an interesting dilemma nonetheless.

Tuesday, April 08, 2008

Circumventing the need for transactions in MySQL

1 comments

While reading the excellent series on "Web 2.0" and databases on the O'Reilly Radar blog, it occurred to me that there is a nice trick with MySQL for making it semi-transactional (as a side-note: these days I work with MySQL less and less and am fully enjoying the goodness that is PostgreSQL and pgAdmin).

Let's say that you have the following situation:

  • MySQL with MyISAM tables
  • A process which does a SELECT and depending on the result (for example if a given field has a certain value) issues an UPDATE

It is quite obvious that this method is not "thread safe", meaning that if you have multiple clients operating on the same records, you can very easily get into the following situation:

  • Client A does the SELECT and decides that it needs to update
  • Client B does the SELECT and it too decides to update
  • Client A does the update
  • Client B does the update

As the number of clients grows, the probability of this situation occurring approaches 100% very quickly. Your options for eliminating it are the following (again, assuming MyISAM tables with no transaction support):
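The dangerous interleaving above can be sketched in a few lines of Python (a minimal, deterministic simulation with a dict standing in for the table and the "clients" run sequentially, not real MySQL connections):

```python
# Simulation of the lost-update race: two "clients" each SELECT (read)
# the same row, both decide to update, then write one after the other.
db = {"column_a": 1, "column_b": "aaa"}

# Both clients read before either writes (the dangerous interleaving).
read_a = db["column_b"]   # client A: SELECT
read_b = db["column_b"]   # client B: SELECT

if read_a == "aaa":
    db["column_b"] = "set-by-A"   # client A: UPDATE
if read_b == "aaa":               # B's check passes on stale data
    db["column_b"] = "set-by-B"   # client B: UPDATE, silently clobbering A

print(db["column_b"])  # set-by-B -- client A's update was lost
```

Client A's change vanishes without any error being reported anywhere, which is exactly why this class of bug is so hard to notice in production.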

Method 1

Simulate transactions by locking the table - this can reduce the system to a crawl, since it effectively serializes all updates. The problem gets worse and worse as the delay between the SELECT and the UPDATE increases (if you have to perform complex calculations to decide whether an update is needed, for example).

Method 2

After doing the update, do an additional select to make sure that we were the last to update the field.

A slightly more elegant solution would be the following: when doing the UPDATE, put the expected values of the fields which may change into the WHERE clause. A little example is probably in order to make this clear. Let's suppose that we have the following table:

column_a | column_b
-------- | --------
1        | aaa
...      | ...

And we want to do something like this:

SELECT * FROM table WHERE column_a = 1
...check if column_b equal "aaa"...
UPDATE table SET column_b = "bbb" WHERE column_a = 1

To avoid the race condition, modify the last update as follows:

UPDATE table SET column_b = "bbb" WHERE column_a = 1 AND column_b = "aaa"

Now we don't need to perform an additional SELECT; we can directly check the "number of affected rows" (which is returned by most - if not all - client libraries) and if it's one, we succeeded; otherwise someone else "stole our thunder".

Method 3

The one I actually wanted to talk about: express the whole procedure as a single SQL statement. Following the previous example, we could again write:

UPDATE table SET column_b = "bbb" WHERE column_a = 1 AND column_b = "aaa"

The difference compared to the previous method is that we don't need the SELECT, because we included the verification step in the WHERE clause. Again, we check the number of affected rows to find out if we succeeded. The basis of this method is that although MySQL doesn't guarantee the serializability of multiple queries (on MyISAM tables, unless you lock the table), it does guarantee the serializability of individual queries. In fact it has to, because otherwise simple queries like UPDATE table SET a = a + 1 could not be guaranteed to produce correct results in all circumstances. So, as long as you can express the operations which are prone to producing incorrect results under concurrency as a single statement, you are fine. There is almost no limit to the conditions you can express in SQL. If your expression becomes too complicated or requires complicated control structures (branches, loops, etc.), you can hide it away in a stored procedure (available since version 5.0). However, you should not access the database from such a stored procedure, because that would break the "query serialization" property (queries issued from a stored procedure, unless explicitly part of a transaction, are subject to the same synchronization problems!). It should operate strictly on its input parameters.
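A sketch of this single-statement method, using Python's built-in sqlite3 module in place of MySQL (the affected-rows check works the same way; here it is exposed via the cursor's rowcount attribute):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column_a INTEGER, column_b TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'aaa')")

def try_update(conn):
    # The expected old value is part of the WHERE clause, so the whole
    # check-and-set is one atomic statement: of several concurrent
    # clients, only one can match the row and succeed.
    cur = conn.execute(
        "UPDATE t SET column_b = 'bbb' "
        "WHERE column_a = 1 AND column_b = 'aaa'"
    )
    return cur.rowcount == 1  # affected-rows check instead of a second SELECT

first = try_update(conn)   # True: we changed aaa -> bbb
second = try_update(conn)  # False: the row no longer matches
print(first, second)
```

The second call fails cleanly instead of silently overwriting, which is the whole point: the caller learns it lost the race and can re-read and retry.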

Sunday, April 06, 2008

How efficient are non-standard configurations in combating the malware problem?

2 comments

Very. Thank you for reading this article, hope to see you soon.

Just kidding :-), you won't get off this easy. You'll have to read my ramblings about the topic.

It isn't a new idea to model the malware problem using methods borrowed from the field of biology, more specifically the study of diseases and how they spread during epidemics. One of the ideas taken from this field is slowing down / reducing the impact of an epidemic through diversity. In biology this means that an epidemic affects only one species (or even just part of a species), and even if the worst case happens (the entire sub-species disappears because of the disease), it doesn't mean that life in general ceases to exist.

This line of reasoning is usually applied to computer security the following way: we need to use different operating systems, because a threat affecting one OS won't affect the others (for example, a DCOM exploit won't work against OpenSolaris). While this line of reasoning is completely accurate, the problem with it at both the micro (home-user) and macro (corporation) level is that programs (operating systems in this case) are not 100% (or even 70%) interchangeable. You can get away fairly easily with switching between related OSs (for example, from one Linux distribution to another), but as the "distance" grows, so does the switching pain. Other (very real) problems of a heterogeneous environment are interoperability, manageability and user training (the order of enumeration is arbitrary; their relative importance depends on the given situation).

The line of reasoning presented so far is analogous to having multiple species (the same type of program - for example an OS - from multiple sources) and saying that "life" in general will still exist even if one of the species ceases to exist. I would argue, however, that the same type of argument can be made at the sub-species level. And by sub-species I mean instances of the same program (operating system, relational DBMS, etc.) configured differently. To summarize, my claim is that:

Software systems can (most of the time) be configured in a way which makes them immune to a very large part of non-targeted attacks while still remaining usable. This is equivalent to (or better than) the performance of many blacklist-type security products.

A few examples:

Network-facing software can be configured to listen on a non-standard port (if it has a limited user base). For example, imagine if an MS SQL database had listened on a non-standard port during the Slammer outbreak (setting aside the fact that a patch was already available and that database systems should not be accessible from the Internet): it wouldn't have been affected.

The second example: using a non-standard browser (Firefox, Opera, Safari, etc.). This also illustrates two other aspects of the problem: first, the "alternative configuration" must work at an acceptable level (there are sites out there which don't work in some of these browsers). Second, as such an "alternative" configuration gains popularity, it becomes less effective (for example, we are starting to see "mainstream" attacks against Firefox). A sidenote: if you wish to use an alternative browser, make sure you're using one with an alternative engine, not just one which presents an alternative interface to the same engine (for example, the Maxthon browser uses the IE engine, the Galeon browser uses Gecko - the same engine which powers Firefox - etc.). Also, to be more effective, you might want to change the User-Agent string of the given browser. This will throw off the current (rather primitive) methods used to target different browsers.

Third example: use an alternative PDF reader instead of Adobe Acrobat (on Windows you could try FoxIt). Use an alternative office suite.

Final example: don't run as Administrator or Power User (if you are using Windows XP). Run as a regular user. This will kill off much of the malware that expects to be able to write to key areas of the file system / registry.

The common aspect of all these examples is that the working environment was altered only slightly, but the protection achieved against mass-attacks is very significant.

My conviction (based on seeing many, many malware samples in the last couple of years) is that such measures reduce the exposure to current and future malware to less than 0.1%! To be clear: this is not a silver bullet and there are many situations in which it fails to be effective. Two such cases: if the given alternative starts to become mainstream (where "mainstream" needs to be interpreted in the broadest sense of the word - for example, Firefox is used by around 20% of the Internet population and is already targeted by general attacks, while Opera, with its considerably smaller user base, seems safe for the moment), and a targeted attack, against which it also fails to protect.

In conclusion: do yourself and your aunt a favor and don't make her an Administrator on the next system you install for her. This will make both of your lives easier.

PS. I'm sure that many of you will ask if this means that AV software is obsolete. My response is: no, but it must be accompanied by other measures; it can't stand on its own (and neither can any other solution).

"Remote" turn-off switch

0 comments

And now for something completely different: a hardware hack.

Warning! Don't attempt this at home unless you have at least some experience with electricity! Also, applying this hack directly on consumer electronics will most probably void the warranty!

The problem: having a 2.1 speaker system (yes, I know, lame, real people use at least 5.1 :-)) with an incredibly bright blue LED on the front and the power switch on the back of the subwoofer! The solution: installing a secondary switch to cut off the power. What we need:

  • Tools
  • A switch rated for 220V (or 110V if you live on that side of the ocean) which can be mounted on a cable. Usually these switches are rated for low amperage (i.e. the maximum current they can safely interrupt), suitable for things like small lamps, but the speaker system also has quite low power consumption.
  • A piece of electrical wire. Again, use cable rated for the right amount of power. In the pictures you will see a cable composed of three wires. Technically it would have been sufficient to use a 2-wire cable, however this was what I had handy.
  • An electrical plug
  • Isolating tape
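As a rough sanity check on the ratings mentioned above (the figures below are assumptions; read yours off the switch and the speaker's label): a switch rated for 2 A at 220 V can safely break 2 * 220 = 440 W, well above what a small 2.1 system draws.

```shell
# Rough rating check: switchable power = rated amps * mains voltage.
SWITCH_AMPS=2        # hypothetical rating stamped on the switch
MAINS_VOLTS=220
SPEAKER_WATTS=60     # hypothetical draw from the subwoofer's label
MAX_WATTS=$((SWITCH_AMPS * MAINS_VOLTS))
echo "switch can safely break up to ${MAX_WATTS} W"
if [ "$SPEAKER_WATTS" -le "$MAX_WATTS" ]; then
    echo "rating is sufficient for the speaker"
fi
```

If your device draws more than the switch's rating allows, use a properly rated switch; an underrated one can overheat.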
[Photo 2]

The plan:

[Diagram: remote cutoff switch plan]

The plan is to mount the switch on one of the wires, making it possible to turn the speaker on and off from a distance (a small distance, but at least you don't have to crawl on your knees to find the switch). In the version shown below, points A and B will be very close together (in fact they will both be in the plug). As I mentioned before, I had a three-wire cable handy, so one wire was left unused, marked in the plan with a dash-dotted line.

Step 1. Cut off the original plug and strip the wires over a short distance (~5 mm). I apologize for the poor quality of the pictures, but I had no "real" camera at hand.

[Photo 1]

Step 2. Take a piece of cable long enough for this purpose and mount the switch on one end. To make it more "aesthetically pleasing" (and practical) you could mount the switch at the very end of the cable. In this case I left the green-yellow wire unused (it is the one commonly used for grounding, so it's easy to remember). A tip: first strip the outer insulation over a shorter length of the cable, mount the wires in the switch, and only then trim enough of the outer insulation that the switch housing can be closed again.

[Photo 3]

Step 3. Mount the other end of the cable, together with the wire from the speaker, in the plug. Wire it the following way: one of the wires from the speaker goes directly to one contact of the plug. The other one is joined with one of the wires from the cable (remember, I didn't use the green-yellow one, so that doesn't count). Finally, the other wire from the cable is connected to the other contact. The joint between the two intermediate wires should be thoroughly insulated.

[Photo 4]

Finally, mount the plug back together and use insulating tape to fix the remaining wires. You can now turn the speaker (or anything else) on and off from your chair without needing to crawl under your desk. You are also being green, because speakers (and other electrical equipment) draw power while idle and even while in standby!

[Photo 5]

Consider the source before ranting

0 comments

or else you could look foolish.

Full disclosure: I work in the AV industry; however, this post (and all of my posts, unless stated otherwise) does not necessarily reflect the opinion of my current or past employers. These are my own personal opinions / views.

Getting back to the topic: some time ago there was a posting on the Authentium Virus Blog entitled "Windows Updates: Ranting about things that I dislike". My remark is the following: we can assume with a high degree of certainty that software works the way it works for a reason. And the more people work on a product, the more solid the reasons are for it working the given way. Unless it really is a little project thrown together in half an hour, you should first assume that you made a mistake, not that the team working on the product made one. This is exponentially true for "mega-structures" like the Microsoft products (including Windows Update).

Getting specifically to the points criticized by this blog post:

  1. "If you do it through a safer browser then it does not work. You need Windows Internet Explorer for it to work." - the built-in updater (the one showing you the yellow shield in the taskbar) doesn't need Internet Explorer. Also, consider that such a system-level process (updating OS components) needs system-level access, which can only be obtained from native code (I'm generalizing here a little), that IE is the only browser with a built-in mechanism for executing code from a remote website (through ActiveX - some would call this a security risk :-)), and that IE is already installed on all Windows systems; taken together, this is a very reasonable dependency.
  2. "The Malware removal tool appears in my list of items to install every month and I have to say no to it every month. Sometimes months old Malware removal tools will re-appear and I have to say no to them all over again. Although this tool is useful for 99% of the people out there, running it on my machine would be bad for obvious reasons." (Note: I assume that s/he is talking about the need to store malware samples on computers which receive Windows Update.) The malware removal tool is quite different from traditional AV programs, a fact which the author of the post does not quite seem to grasp. It won't scan your hard drive looking for infections (like traditional "on-demand" scanners) or check every file accessed (like "on-access" - also called "realtime" - scanners). It looks in a few key locations (registry keys, directories, etc.) to check whether the computer has an active infection with an "important" malware family; if so, that malware is removed. Rest assured that it won't touch your inactive malware collection, so you can enable it safely (and I'm saying this from experience, having run a few Windows machines storing or handling malware samples which always had all the patches applied).
  3. "Then finally, after you have installed the patches, a reboot is required. Note, it is not optional, and definitely not at your convenience." - the author is partially right: the reboot prompt is very annoying. However, the reboot is optional (unless you are in the middle of something and hit Enter right before the popup appears, which it takes as a confirmation for rebooting) and can be postponed. All you need to do is run Process Explorer and suspend the Windows Update processes (by right-clicking on them and selecting "Suspend"). I admit that this is a hack too complicated for normal users, but then the whole point of the criticized post was to argue that the Windows Update process is inadequate for power users.

In conclusion: keep Windows Update enabled; it's your best friend if you run Windows! I read recently (unfortunately I can't seem to find the link right now) that in one test, machines which were up to date with patches were not infected after visiting sites which hosted exploits (the idea being that the exploits used in mass attacks usually target problems for which a patch is already available, so keeping your computer up to date will make it immune to almost all of these attacks).

Enabling Bluetooth on Ubuntu

0 comments

is as simple as 1, 2, 3 (although it ought to be as simple as 0 - it should work out of the box - more on this later). So I was trying to copy some photos from a phone to an Ubuntu machine which, although it had recognized the phone, kept coming up with the following error message:

"obex://[mac:address]" is not a valid location.

After plugging the error message into my favourite search engine (which should be the first step whenever encountering a problem), I came up with this Launchpad bug report. Following the advice from the comments, everything is working fine now. So if you have the same problem, just install a few additional packages and it should work (I'm curious why these packages aren't installed by default; I suspect that there is a licensing problem):

sudo apt-get install gnome-vfs-obexftp libopenobex1 gnome-phone-manager

Update:

Strictly speaking, it is sufficient to install the gnome-vfs-obexftp package, as it depends on libopenobex1; I threw gnome-phone-manager in there to experiment with it (although so far I haven't been able to convince it to recognize the phone).
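Before reinstalling anything, you can check whether these packages are already present. A quick sketch for Debian/Ubuntu systems using dpkg-query:

```shell
# Report the install status of the obex-related packages.
# Packages that dpkg doesn't know about are reported as missing.
STATUS=""
for pkg in gnome-vfs-obexftp libopenobex1 gnome-phone-manager; do
    if dpkg-query -W -f='${Status}' "$pkg" 2>/dev/null | grep -q "ok installed"; then
        STATUS="$STATUS $pkg:installed"
    else
        STATUS="$STATUS $pkg:missing"
    fi
done
echo "$STATUS"
```

If any of them show up as missing, the apt-get line above will pull them in.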