
Monday, December 03, 2007

Bash shortcuts


Or: never do history | grep again!

The command-line interface of *nix systems is amazing, and Windows has nothing that comes even close to it (although I still have to experiment with PowerShell - I very much like its basic idea: instead of lines of text you get objects with well-defined properties, so you don't have to play the "check two cases and throw out a regular expression which will die in all the other cases" game). A nice thing I discovered recently is the set of shortcut keys you can use:

Ctrl + A: Go to the beginning of the line you are currently typing on
Ctrl + E: Go to the end of the line you are currently typing on
Ctrl + L: Clears the screen, similar to the clear command
Ctrl + U: Clears the line before the cursor position. If you are at the end of the line, clears the entire line.
Ctrl + H: Same as backspace
Ctrl + R: Lets you search through previously used commands. This is the one that can replace the history | grep process.
Ctrl + C: Kills whatever you are running - probably well known to everyone
Ctrl + D: Exits the current shell. Also known as end of stream - basically the shell terminates because you've said there will be no more input from here!
Ctrl + Z: Puts whatever you are running into a suspended background process. fg restores it.
Ctrl + W: Deletes the word before the cursor
Ctrl + K: Clears the line after the cursor
Ctrl + T: Swaps the last two characters before the cursor
Esc + T: Swaps the last two words before the cursor
Alt + F: Moves the cursor forward one word on the current line
Alt + B: Moves the cursor backward one word on the current line
Tab: Auto-completes file and folder names
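If you want to see what else is bound on your own system, bash can dump its readline bindings itself - a quick sketch using the bind builtin (the grep pattern just picks out a few of the functions behind the shortcuts above):

bind -P | grep -E 'beginning-of-line|end-of-line|reverse-search|unix-line-discard'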

For more keyboard shortcut goodies, just search for bash shortcuts with your favorite search engine.

Sunday, December 02, 2007

Google spam - aka I'm back


My workload has lightened a little and hopefully I can continue to blog more frequently. But enough of this, let's get to our main subject:

Recently I've been seeing a growing amount of spam which links to Google instead of the spam site. The idea is (probably) to avoid filters which check the link targets to determine if the message is spam. The links look like the following:

http://www.google.com/search?hl=en&q=[some query unique to the site]&btnI=Im+Feeling+Lucky

What this does is run a query for which the spamvertised site comes up at the top of the results and simulate a click on the I'm Feeling Lucky button, making Google act as a redirector.

IMHO Google could fix this easily by refusing to redirect if the Referer header doesn't point to a Google domain. While in general basing security decisions on the Referer header is not a very secure option, and it can break clients which don't send Referer headers (for privacy reasons, for example), in this case it would be a very transparent solution:

  • if the user has a desktop-based mail client, the Referer header will be empty, preventing the redirection
  • if the user has a web-based mail client (Hotmail, Yahoo Mail, GMail, etc.), the Referer header will point to that instead of a Google domain, preventing the attack (or, if the user has disabled the sending of Referer headers, it will be blocked as in the previous case)
  • if the user copy-pastes the link (because some spam comes as plain-text mail), there will again be no Referer header

There are two potential things which get broken by this: (a) people who have Referer headers turned off and (b) third-party software / sites which rely on this service. For (b) the answer is pretty clear: this is functionality provided by Google as-is, with no guarantees (i.e. it's not a documented interface). As for case (a), if they use the I'm Feeling Lucky button, they are SOL. It might be possible to work something out using cookies, but the number of people who have both the Referer header turned off and want to use the special button is probably so small that the tradeoff (less spam / inconvenience a few people) is worth it.
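For the curious, the proposed behavior is easy to describe with curl (the -e switch sets the Referer header). Note that the query is a placeholder and the expected responses describe the fix I'm suggesting, not what Google actually does today:

URL='http://www.google.com/search?hl=en&q=some+unique+query&btnI=Im+Feeling+Lucky'
# Referer from a Google domain: the redirect would be allowed
curl -s -o /dev/null -w '%{http_code}\n' -e 'http://www.google.com/' "$URL"
# Referer from a webmail domain (or no Referer at all): the redirect would be refused
curl -s -o /dev/null -w '%{http_code}\n' -e 'http://webmail.example.com/' "$URL"
curl -s -o /dev/null -w '%{http_code}\n' "$URL"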

Friday, August 31, 2007

Spreading the love


I've heard that one of the goals for the Wikiscanner author is to get first place on Google for the term Virgil. So here is my contribution to it: Virgil. Thank you.

Thursday, August 30, 2007

NoScript trick


In a previous post I discussed how to combine NoScript with co.mments.com. As I later discovered, the main problem was that the bookmarklet worked by inserting a script tag in the document, which, if scripting was disabled for the given page, could not be evaluated. I worked around this problem by using the temporary enable feature, however I felt uneasy allowing wildcard domains like *.blogspot.com or *.googlepages.com because of the plethora of diverse content available on the subpages, some of which is surely malicious. Fortunately there is an option to make the control much more fine-grained: it can be accessed by going to the NoScript options -> Appearance and checking Full Domains. After that you can white-list hype-free.blogspot.com separately, not just blogspot.com in bulk ;).

This whole process illustrates very well the problem of the security aristocracy, the haves and have-nots in the field of security. While NoScript is a nifty little tool, it requires an understanding of different aspects like HTML / browsers / scripting at a level which most people would consider rather deep and over their head. This means that there is (and probably will be) only a thin layer of people who can really use these tools, and we shouldn't think that the tools can solve all our problems.

Security product testing


Just a quick rant about the comparison of different security products (in the largest sense of the word):

Many times we see claims like product X stops 100% of all (known) malware. First there is of course the problem that some people omit the known part, which makes them by default ripe for lawsuits. But let's take a much weaker claim (which is implied by the previous claim): this product stops 99% of all known malware.

The argument usually goes like this: the product was tested on N malware samples and it stopped them all. The fallacy in this argument usually is that the malware samples the tool was tested with were not created with this tool in mind. In general such new wonder tools are used by a very restricted circle of people and thus never get targeted by malware writers. But if the hype were to have its effect and a significant number of people started using the tool, it would become a prime target and be cracked / bypassed very quickly.

Now mind you, I'm not railing against diversity. However when a security product is only as effective as running with lower privileges, you have to wonder:

  • Why is somebody paying for this?
  • Why is somebody wasting her/his time writing this? Why isn't s/he instead writing a good tutorial about using the security features already built into the OS?
  • Is the coder so ignorant that s/he doesn't realize the limits of the program, or is s/he simply a shady entrepreneur who wants to get as much money out of the hype as possible before the bubble bursts? I wouldn't want to run the code on my system in either case.

Emphasizing my original point again: layered security is good, however if the security promised by a tool is equivalent to the one delivered by something already present in your OS, then (a) why are you paying (literally and/or metaphorically - by taking the time to install and configure it) for the product? and (b) why are you creating possibly new vulnerabilities by adding new code onto your system?

PS. The most BS type of hype I've seen is when some company admits to being the target of a targeted attack and then a security vendor comes out and says: our product would have prevented this. No it wouldn't! A targeted attack equals reconnaissance before the action. The specific Trojan/RAT/etc. was employed because the attackers knew that the given company uses security product X, which doesn't detect it. Had the company used product X and product BS (or just product BS), the attackers would have chosen a different path and still been effective!

Wednesday, August 29, 2007

Malicious hosts


There is a new study on the honeynet site, titled Know Your Enemy: Malicious Web Servers. While the study is interesting, there isn't anything particularly new about it. The methodology was very similar to other studies in this area (the Google Ghost in the browser - warning, PDF - study or the Microsoft HoneyMonkey project) - essentially it was a set of virtual machines running unpatched versions of the OS which were directed to the malicious links, and any changes in them (created files, processes, etc.) were recorded.

The most interesting part (for me) however was the Defense Evaluation / Blacklisting part. When applied to their dataset, the very famous hosts file maintained by winhelp2002 blocked all infections, although it contained only a minority (12%) of the domains. This means that the majority of bad sites out there are redirectors and that these lists have managed to include (at least until now) the true sources of the infections. This is very interesting: it shows that while the number of different points of contact with malicious intent on the Internet increases very rapidly, their variety doesn't grow quite as rapidly, and blacklisting technologies are still effective (and by the same logic, AV systems can still be effective).

Another interesting aspect of this data is that almost half of the malicious links are hosted in the US (this data was generated by a small Perl script which can be seen below and has several weak points - for example some hosts have been taken down, and it does not differentiate between sites which were possibly hacked and probably only contain IFRAMEs / redirects and sites which intentionally host malicious files. It also counts physical IP addresses rather than host names - this is not a flaw per se, but it must be noted if we want to make any meaningful comparison). The second most frequent hosting location is, drum roll, China. A quick'n'dirty summary of the results is:

Country   IP count
US        470
CN        429
Unknown   51
DE        47
RU        45
IT        25
CA        22
GB        16
TW        11
FR        8
NL        8
CZ        7
...       ...

Again, these results do not differentiate between redirectors and infection sources, or between hacked and purposefully malicious sites. Even so, the results suggest that blocking the IP ranges of countries / regions which are not the target market of a business can reduce exposure to random (non-targeted) browser exploits by at least 50%.

The script used to generate this data (bear in mind that this is a script hacked together for quick results):

#!/usr/bin/perl
use strict;
use warnings;
my %ips;

foreach (<*>) {
    next unless -f;
    next if /\.pl$/i;
    
    open F, $_ or next;
    while (<F>) {
        chomp;
        next unless /https?:\/\/([^\/\"]+)/i;

        my $ip = $1;
        if ($ip !~ /^\d+\.\d+\.\d+\.\d+$/) {
            # resolve host names; gethostbyname returns a packed IP or undef on failure
            $ip = gethostbyname($ip);
            next unless defined($ip);
            $ip = join(".", unpack("C*", $ip));
        }
        $ips{$ip} = 0;
        print "$ip\n";
    }
    close F;    
}

use IP::Country::DNSBL;

my %countries;
my $reg = IP::Country::DNSBL->new();

foreach (keys %ips) {
    my $cnt = $reg->inet_atocc($_);
    print "$cnt\n";
    $countries{$cnt} = 0 unless (exists $countries{$cnt});
    $countries{$cnt}++;
}

print "---------------------------\n";
foreach (sort { $countries{$a} <=> $countries{$b} } keys %countries) {
    print "$_\t", $countries{$_}, "\n";
}

Hack the Gibson #106


Read the reason for these posts. Read Steve Gibson's response.

I have good news for Mister Gibson: SpinRite would actually work on the Mac with VMWare. Although Macs are EFI based, the hardware emulated by VMWare uses the good old protocols, which means that as long as VMWare has the capability to mount a physical hard-drive in the Mac version (which it very probably has, together with all the other virtualization products for the Mac, like the Qemu-based Q or Parallels), it will have the capability to run SpinRite.

Regarding multi-factor authentication: theoretically all these discussions are interesting, however as long as the communication channel isn't as trustworthy as it should be, more focus should be geared towards multi-channel authentication. Also, transaction integrity is the other important problem which should receive more emphasis, because it is nice that you authenticated, but if the integrity of your transactions is not validated, there is still a large possibility of fraud.

The next hypic (aka hyped topic) is the U3 thingie. The positive thing is that finally a fairly accurate (as far as I know) description of the technology is given. The essence is this: there is a reserved part of the stick which contains a CD-ROM image (something like an ISO file). When the stick is inserted, its hardware signals the presence of two devices: a normal stick and a CD-ROM drive. This pseudo-CD-ROM drive is actually backed by the image which is on the flash (and, of course, because it's on read-write storage, the image can be altered). The security implications are equal to the ones presented by the autorun feature of CD-ROMs, which we have had since at least Windows 95 (more than 12 years ago!). You can disable autorun for CD-ROMs and for USB sticks, so get over it! As for the whole USB interface, as convenient as it is, being a potentially serious security threat - it's no more of a security threat than CD-ROM drives.

About CAPTCHAs: whatever a computer can generate, a computer can decode. These methods (btw, I've heard an interesting variation on one of the .NET Rocks episodes - it was a simple math puzzle - something like 2 * 4 = ? - but with the twist that if JavaScript was enabled, the response was automatically computed and the question was never shown to the user) only work because no one is specifically targeting them. As soon as somebody has a good reason to spam a site protected by such a solution, they will develop a custom solution which will circumvent it.

Regarding the bruteforceability of the 10-digit PIN: 5^10 (because there are only 5 possible buttons, even though each of them has two digits written on it) is ~10 000 000, which is very little if the process can be automated. Also, you could always physically remove the memory chips and read them with a reader (much like you could read the platters of a password-protected HD).
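The back-of-the-envelope arithmetic, with a purely hypothetical guessing rate of 100 PINs per second (shell arithmetic):

echo $((5 ** 10))                  # 9765625 combinations, i.e. ~10 million
echo $((5 ** 10 / (100 * 3600)))   # ~27 hours to exhaust the keyspace at 100 tries/s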

A quick intermezzo (because the podcast contains a SpinRite advert - what a surprise - at this point): I wonder how many of these people could have used ddrescue with the same success rate?

About the PayPal verification system: I've never used PayPal (and it would be very hard given that I'm from Romania), but if this process works as described (i.e. by depositing a small random amount of money and asking you what the amount was), then (a) I see no privacy concern with it (they are giving you money after all - although a small sum) and (b) it's only sort-of a protection (meaning that if you verified your account and your account information gets stolen after the verification, then you don't have any security benefit from it). It seems more useful for preventing the use of credit cards whose owners never used PayPal (which, in some respects, are the perfect prey, since those owners are highly unlikely to check PayPal for transactions).

Also, to Steve's credit, they finally did a pretty spot-on discussion about hardware and software firewalls and the difference between them. It was about time.

Power management for Ubuntu


I was praising Ubuntu earlier for its great hardware support. One thing it didn't have out of the box however (which is a very nice feature of modern hardware) is dynamic frequency scaling. There is a detailed description over at the Ubuntu Guide wiki which worked nicely (yes, you actually need to remove packages - the instructions are correct). You can also add a widget (just search for CPU in the widget list) which shows the current frequency. I would recommend the conservative power scheme, which sets the CPU speed depending on the current usage. It differs in behaviour in that it gracefully increases and decreases the CPU speed rather than jumping to max speed the moment there is any load on the CPU. This behaviour is more suitable in a battery powered environment.
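If you prefer the command line, the governor can also be inspected and switched by hand - a small sketch, assuming the cpufrequtils package is installed (the sysfs path is the standard location on modern kernels):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
sudo cpufreq-set -g conservative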

Setting up Xming or RDP equivalent for Linux


To give a little background: the GUI under Linux (and Unix) is usually split up the following way:

  • X (the short term commonly used for the X Window System or X11) - this knows how to draw some primitive elements (like boxes, text, etc.) and to get input (from keyboard, mouse, etc.), and also has the primitive notion of windows (rectangular, possibly overlapping areas of the screen), but doesn't know much more than that (it doesn't know, for example, about title bars, how to move windows around, etc.)
  • The window manager (like Gnome, KDE or XFCE just to name a few) which uses these primitives to draw more advanced widgets (like icon lists for example), provide additional functionality (moving around the windows, minimizing / maximizing them, etc) and other graphical elements (panel elements - aka gadgets - for example)

The communication between these two components is done through sockets with a well defined protocol. Isn't this inefficient? - one could ask. The answer is - not really, because most of the time (more specifically, when X is running on the same machine as the window manager), a special kind of socket is used, called a Unix domain socket. These look like normal sockets in the sense that data I/O is represented by a stream of bytes, and they have similar guarantees to TCP (guaranteed, in-order delivery), but they are optimized so that the data doesn't have to flow through the TCP/IP stack twice (once at the sending and once at the receiving end) as it would if you were to use a TCP/IP connection to localhost. Windows has a similar architecture, where the GDI functions (which are at a similar abstraction level to X - they only know about lines, rectangles, etc.) use an IPC (Inter-Process Communication) mechanism to communicate with the Windows subsystem, which in turn calls the display drivers.

The only difference between *nix and Windows is that in Windows this modularization was never made explicit and/or documented. This system means that we can execute the drawing instructions on a remote computer (simply by using a TCP/IP socket instead of a Unix domain socket) and we get a very responsive remote desktop for free. It is responsive because instead of transmitting the bitmap that has to be drawn pixel by pixel, it only transmits the primitive instructions needed to draw it (of course there are corner cases, for example if you're doing image editing with GIMP).
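As an aside, this is exactly what SSH X forwarding exploits. A minimal sketch from another *nix client (the host name and user are placeholders; the Windows/Xming equivalent is described below):

ssh -X user@remote-host   # -X tunnels the X protocol over the SSH channel
# ...then, in the remote shell:
echo $DISPLAY             # something like localhost:10.0, pointing back into the tunnel
gnome-terminal &          # its drawing instructions travel back over SSH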

The simplest way to use X from Windows is Xming. To set it up you first need to have SSH access to the computer which will be the target (the one actually running the applications). Again, you can observe the modularity present in *nix systems - reuse existing components, which means faster development (because you don't have to write it from scratch), better quality (because you don't have to write something which isn't your core competency) and easier use (because the user can reuse her/his knowledge of the components when configuring different parts of the system). Also, in the case of a vulnerability, a central patch can secure multiple systems (there is a reverse of this coin of course: sometimes the user isn't aware of all the dependencies of such systems, which means that s/he can't follow the relevant forums for all of them to stay informed about the needed updates).

There is a nice tutorial over at terminal23.net about setting up the SSH daemon under Ubuntu, complete with advice on how to block brute-force attempts and how to restrict access to a certain subset of IP addresses. I would like to add a couple more things:

  • You can (and should - defense in depth is a good thing) restrict the access to your SSH daemon from the firewall too (see the sketch after this list). While the temptation is big to leave it wide open because one never knows from which network one will need access, my experience has been that the number of places one needs access from is very limited. Give /24 (or /16) subnets access if you're worried that your IP may change (for example cable providers usually have static IPs, but they don't make this explicit, which means that they can change the IP whenever they wish, but it is very unlikely that they will change it outside of the current /24 range). If you don't want to play around with iptables, you can use Firestarter to do it graphically (sudo apt-get install firestarter).
  • If you have multiple interfaces on your computer (which is not as rare as it was some time ago - for example you could have a wired and a wireless interface. Also you can have VPN pseudo-interfaces) make sure to instruct the SSH daemon to listen only on the interfaces where it's truly needed. You can do this by editing the /etc/ssh/sshd_config file and specifying the correct ListenAddress directives. You could instruct it to listen on a different port (as an additional security measure). If you do so, do not forget to alter your firewall configuration. Also take into consideration what ports will likely be blocked / allowed in the environments you need access from.
  • Do not forget to check if there exists a Protocol 2 directive in your /etc/ssh/sshd_config file and no Protocol 1 or Protocol 1,2. The SSH protocol has two versions: version 1 and version 2. Version 1 was found to have some serious security issues and should not be used unless absolutely necessary (legacy equipment for example). Version 2 of the protocol is well supported by all the mainstream platforms and utilities.
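A minimal sketch of the firewall and sshd advice above - the subnet, address and port are placeholders which you should adjust to your own network:

# allow SSH only from a trusted /24, drop everything else
sudo iptables -A INPUT -p tcp --dport 22 -s 198.51.100.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j DROP

# the matching /etc/ssh/sshd_config directives:
#   ListenAddress 192.168.1.10
#   Port 22
#   Protocol 2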

When you finish changing the configuration file of the SSH daemon, don't forget to issue a sudo /etc/init.d/ssh restart from the command line so that it loads the new configuration file.

Now that SSH is in place, go to the client machine and install Xming. The different install files have the following meaning:

  • Current vs. Superseded releases - the model of Xming is to make the latest versions available only to donors. The superseded releases are a couple of minor versions behind (for example the current version is 6.9.0.40 and the superseded one is 6.9.0.28 as of the moment of writing this) but are accessible to everyone. You can check the releases page to see the difference between the versions, but in practice the superseded version has always worked for me.
  • Xming vs. Xming-mesa - Xming uses OpenGL acceleration while Xming-mesa uses a software-only method for drawing. Use Xming unless you have specific problems with it.
  • Xming-fonts - when instructions are sent to draw text, they include just the font names, not the actual font definitions; those are contained in this package. An alternative mentioned on the Xming page is to make the originating computer serve up the fonts through a font server, however I have no experience doing this.

Now that you have everything installed, use XLaunch to create a new session. If your SSH daemon is listening on a different port, as suggested before, you should specify -P [port number] in the additional parameters for PuTTY or SSH field (these parameters are passed to plink, so you can use any parameter understood by it). If you specify a program to run on startup, I would recommend gnome-terminal if the given system is running Gnome. Once it has started, you can launch other programs from it. If you launch GUI programs and are not concerned with their output, append an & after the command (for example firefox-bin &) or launch them in a different tab (you can open multiple tabs in gnome-terminal by pressing Ctrl+Shift+T)

Update: On netnerds I found the following two alternatives to Xming: X-Win32 (download it from here) and Cygwin/X. I haven't played with them though...

Tuesday, August 28, 2007

Ethical hacker challenge - Serenity


I didn't win the latest ethical hacker challenge, one of the reasons being my lack of film-trivia knowledge. So here goes my answer to the challenge, maybe somebody finds it useful. You can also compare it with the winning submission.

1. What tool did Kaylee use to remove the malware? How could she find the process, kill it and keep it from starting?

The label on the thumb-drive (at least what is visible of it, "SysIn...") is most probably a reference to the great free utilities created by SysInternals, who have recently been bought by Microsoft, but the utilities are mostly still available in their original form at http://www.microsoft.com/technet/sysinternals/default.mspx

One of the best ways to get to know the tools is to watch the presentation given by Mark Russinovich himself at http://www.microsoft.com/emea/itsshowtime/sessionh.aspx?videoid=359 (free registration required, but there is a back door: http://forum.sysinternals.com/forum_posts.asp?TID=9409&PN=1). What follows below is a very short presentation of the tools, but realistically one should watch the whole presentation to get a good feel of them.

- ProcessExplorer - the Swiss Army knife of the collection.

When using it to look for malware, it is useful to turn on the "Company Name" and "Description" columns and look for executables with similar names but slightly different descriptions (one trick malware authors frequently use is to name their executable similarly to operating system components - for example lsass.exe - but many times they pay no attention to detail). For example if you see 5 instances of svchost.exe all having "Microsoft Corporation" as their company name and one having "Microsoft", it is a clear indication that something is not right.

In the same manner one should examine the icons associated with each executable (which ProcessExplorer readily displays) and look for any discrepancies (many times malware authors use ready-made tools to generate their executables - like archivers capable of generating self-extracting archives with installation scripts, or scripting languages which can "compile" into an executable - and don't bother or don't know how to change the icon).

Also, one should look at the location the executable is running from (to use the prior example: if 5 instances of svchost.exe are running from %windir%\system32 and one from %windir%\system, it is very suspicious). This technique is quite common, and exploits a limitation of the Task Manager built into Windows, namely that it can't display the path the executable is running from, making the six instances described in the example indistinguishable from one another.

Other clues to look for are executables with characters in their name which are easy to confuse with other characters (for example instead of lsass.exe one might see 1sass.exe or Isass.exe - the last one uses the capital "i" character, which in some fonts is indistinguishable from the small "L" character).

Yet another indication of malware are processes which are "packed". They are highlighted in purple by default in ProcessExplorer. However, one must not assume that any "packed" process is an indication of malware, since many other programs - for worse or for better - employ packers. It is however a sign to investigate further.

As a general rule one should look for processes which one doesn't recognize (assuming that one has experience with the processes which should run under "normal" conditions) and investigate those processes (again, they should not be assumed to be malware from the start, because killing the wrong process / deleting the wrong file can render the computer unusable).

Also, if a malware process has been identified, its parent / child processes should also be examined carefully, since many times malware processes launch / are launched by other malware components.

When the malware process has been identified (by the steps described earlier, by those described in the next paragraphs, or by other means), ProcessExplorer can be used to terminate it. It is recommended to first suspend all the processes which are going to be terminated and then terminate them, to circumvent the self-protection mechanisms present in some malware, where processes watch each other and restart any killed process.

- TCPView - a visual equivalent of netstat (the same functions can be performed by netstat in recent Windows versions - by using the "-b" command line switch - however TCPView is more convenient for interactive examination)

It can be downloaded from http://www.microsoft.com/technet/sysinternals/Networking/TcpView.mspx and it can be used to identify processes which make unrecognized connections (again, this step also needs a level of familiarity with the normal operating environment). ProcessExplorer can also display the network connections of a process, however it does not offer an overview of all the connections.

In this particular case one should look for connections with a target port of 6666 or 6667 which are traditionally the IRC ports.

- Autoruns - Downloadable from http://www.microsoft.com/technet/sysinternals/utilities/Autoruns.mspx, it is the ultimate tool to identify all the processes which are registered to start when Windows is launched (similar to the built-in msconfig, however much more thorough). While there are ways to run an executable at startup which are not covered by this utility (for example some malware infects programs which are registered to start up and injects code to start itself), it covers most (I would guesstimate 99%) of the possible ways to start a program.

The utility can be used both to view and to modify the startup lists (by disabling programs). One useful feature it has is the possibility to hide signed Microsoft executables, which reduces the number of elements one must go through considerably (again, this feature is not 100% foolproof, since malware can - and indeed some does - install a custom root certificate on the system and from then on "sign" whatever executable / SSL connection it pleases, but it works most of the time).

2. What was the code snippet most likely used for and what was the bot's control password?

It is used to build a string dynamically by using indirect addressing (with the EAX register holding the base address). This trick (and similar ones) has been used lately to circumvent "strings" analysis (which refers to a *nix utility ported to Windows by SysInternals - http://www.microsoft.com/technet/sysinternals/Miscellaneous/Strings.mspx) which can extract strings from arbitrary binary files by looking for continuous runs of printable characters. These obfuscation techniques work by "breaking up" the characters and reconstructing the string only at runtime. The ProcessExplorer mentioned at point 1 includes the possibility to run the "strings" algorithm on the memory space of the live process, thus circumventing these techniques.

When such a technique is observed, one can create patterns to extract the strings. For example in this case the pattern would be (the opcodes are in hex):

B3 4D MOV BL, 0x4d              
88 58 MOV [EAX + 0x00], BL

If the relative offset is non-zero, the sequence is:

88 58 05 MOV [EAX + 0x05], BL

One could use these patterns to go through the file and extract the strings. However it is easier to simply run the program in a debugger (in a controlled environment!), to extract the strings from memory, or to sniff the traffic off the wire.

The text hidden in this snippet is "MalloryWasHot!", which was obtained by passing the text through the following Perl script (because I'm lazy :) ):

use strict;
use warnings;

open F, "test6.in";
my $v;
my %h;
while () {
    if (/MOV BL, 0x([a-f0-9A-F]+)/) {
        $v = pack('H*', $1);
    } elsif (/EAX \+ 0x([a-f0-9A-F]+)/) {
        $h{$1} = $v;
    }
}
close F;

foreach (sort keys %h) {
    print $h{$_};
}
print "\n";

Because this was the only text given, probably this is also the password.

3. Describe how you could discover the commands the bot would accept and their basic functionality?

There are several possibilities:

One could run the malware in a controlled environment (a virtual machine for example) and sniff the traffic with tools like Wireshark or the Microsoft Network Monitor. Because IRC traffic is unencrypted (most of the time), one can learn a great deal this way. However the risk is that the malware (on a command received from the controller) might engage in activities (like spamming or DoS attacks) which are considered illegal and might get the researcher in legal trouble for actively participating in them. These concerns can be mitigated by throttling the upstream bandwidth of the analysis environment, however there is no perfect solution. Because of these problems, this method is recommended only for short periods of time and with active human monitoring, to make sure that rapid intervention is possible.
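A hypothetical capture setup for such an analysis environment - the interface name is an assumption, and ports 6666-6667 are the traditional IRC ports mentioned earlier:

sudo tcpdump -i eth0 -s 0 -w bot-irc.pcap 'tcp port 6666 or tcp port 6667'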

The second possibility is to join the community of bots, which is especially simple when they use standard protocols like IRC for communication for which there are readily available clients on all platforms. The two methods can be very successfully combined the following way: a short run with traffic sniffer attached is used to extract key elements like:

- the server name / IP

- the channel name and password (if there is one)

- the format of the nickname which is used

After this short preliminary analysis (which is safe because of the short time span and human monitoring), the malware is disconnected and an IRC client is connected using the gathered data.

4. (Extra Credit) What is the meaning of the password?

It is a reference to the main character Malcolm "Mal" Reynolds I think.

Ethical hacker challenge - Microsoft Office Space: A SQL With Flair


Just a short post: I won - finally! Rather than re-posting the whole answer I'll provide a link to it: Link. I won't be claiming the book, because I have rather bad experiences with the Romanian postal service, but it's nice to be recognized.

Two ideas which came after I submitted my answer:

It is possible to generate a dictionary to get from any CRC value to any CRC value using just 7-bit safe characters. One would do it the following way:

  • Designate a 32 bit value as the meeting ground. For example 0xFABCDE12
  • Now start generating all possible strings from the character set you have. Use the generated strings to go both forward and backward from the meeting ground and record the results in a table (probably stored on disk, due to the large volume of data needed). The idea is to calculate CRC32([meeting ground], [string]) and CRC32_Reverse([meeting ground], [string]) (which equals the value for which CRC32([value], [string]) == [meeting ground]). Both of these operations can be performed in linear time with respect to the string length as described in the linked paper, which means almost instantly in the case of such short strings.
  • Now when we want to go from one string to a given CRC value, we perform two lookups: one for the string which is needed to go from the initial CRC value to the meeting point, and one for the string which is needed to get from the meeting point to the target CRC. Concatenating these two strings to the modified string will result in an arbitrary (desired) CRC using characters only from the selected alphabet (7-bit clean in this example).

The second thought: when choosing hash functions, don't use error correction codes and vice-versa. Know the intended usage of the algorithm you decide to implement. Also, the size of the output space should give you a rough idea of how likely collisions are. For example for CRC32 the output space is 2^32. This means that if you have more than 2^32 distinct inputs (which is entirely possible these days), you are guaranteed to have a collision. Something like MD5 in exchange has an output space of 2^128, which is very unlikely to be insufficient.
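To put numbers on that intuition (the pigeonhole principle plus the standard birthday approximation), for an n-bit checksum or hash:

more than 2^n distinct inputs   => a collision is guaranteed (pigeonhole)
around 2^(n/2) distinct inputs  => a collision is already likely (birthday bound)

For CRC32 (n = 32) the birthday bound is 2^16, around 65 000 values; for MD5 (n = 128) it is 2^64.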

Online certifications are worth the paper they are written on


In my younger years I joined Brainbench and did a few tests on it (during the different promotion periods when they were available for free). However I quickly discovered that these certifications have exactly the value of the paper they are written on (i.e. zero), because:

Any relatively seasoned IT pro can pass them, based just on what s/he has heard and without actually having any proper experience with the subject. In fact it is often possible to learn during the test (!), and deduce the correct answers based on previous questions! To demonstrate this (and to brag a little ;-)), take a look at my certifications:

My actual capabilities are the following:

A lot of programming in many programming languages, including PHP. For example the only reason for not being at the top for the PHP test is the fact that I didn't know the answers off the top of my head to questions like what are the parameters for some weird LDAP function?, and because of this, even though I finished in less than half of the available time, I was still too slow.

A computer science degree.

Some networking experience, mostly hobby, with an unfinished CCNA.

Everybody can form their own opinion about the matter, but mine is that I'm not qualified to work in fields like forensics, even though I got more than half of the questions right.

PS. I don't mean to pick on BrainBench specifically, rather the whole idea of online certifications. I talked about BrainBench because it was the system I was most familiar with. I tried ExpertRatings, which seems to be another big player in this area (they come up at place numero uno when searching for online certifications), with similar results.

Monday, August 27, 2007

Setting up laptops


I've had to set up a laptop, and I thought I'd share the methods which I used to secure it, so that others can draw inspiration from it (to improve the security of their mobile platforms) and/or tell me how I'm wrong :)

The general assumption behind these steps is that the data on the laptop is much more valuable than the actual hardware, and if it were to get lost, the main objective is to deny third parties access to the data stored on it. This of course must be coupled with a good backup strategy, even more so if we consider the fact that laptops are much more stressed than their desktop counterparts, which means that they have a higher probability of hardware failure.

The first step was to enable the hard-drive password in the BIOS. This prevents the laptop from booting until the correct password has been entered. The way it differs from the BIOS password is that the password is stored on the hard-drive itself and you can't access the data on it until the correct password is supplied, even if the hard-drive were to be moved to another computer.

This is a good first measure and has been widely available on laptops for quite some time now. Unfortunately it is only a first step. The fact that it's limited to 8 characters (at least with this BIOS) and that the actual data on the platters is not encrypted (meaning that somebody could disassemble the HD and use specialized tools to read the platters) means that it's not very strong. Additionally there are commercial units out there which can remove any (i.e. unknown) HD password in under two minutes (via the ITT podcast).

The second step was to partition the drive. It was divided up into 3 partitions: one for Windows XP, one for Linux and one Truecrypt volume (more on this later). First Windows was installed (as it f's up any other non-MS OS if installed later), followed by Ubuntu. As I mentioned earlier, Ubuntu worked beautifully. Windows recognized at least the USB hub, meaning that I didn't have to write a CD with the drivers, but I still had to lspci under Ubuntu to find out the actual hardware I had.

After that I disabled the wireless from the BIOS, because wireless connections are not common in this part of the world and this also makes the battery last a little longer (or at least it should). It also makes for good security.

Next I let Windows Update make its way through the patch list (the install CD already had SP2 slipstreamed on it, so it wasn't that long). In the meantime I changed the swapfile size to be fixed (this, besides providing a small performance boost, can help conserve free space as shown in the next step).

I downloaded the updates for Ubuntu (which, compared to XP, required only one reboot) and enabled NTFS RW support. Compared to older versions of Ubuntu, this was surprisingly simple. No package installation, no digging around in /etc/fstab, simply go to Applications -> System Tools -> NTFS configuration tool and check Enable write support for internal devices. Voila!

Next I configured Ubuntu to use the Windows swap file for its swap (normally you would use a dedicated swap partition, however with smaller laptop HDs this is a good way to save some space). You can do this by editing your /etc/rc.local and adding the following lines before the exit 0 line:

mkswap /media/sda1/pagefile.sys
swapon /media/sda1/pagefile.sys

A word of caution: this method should not be combined with hibernation in Windows. This is because during hibernation Windows relies on the fact that between shutdown and the next startup the contents of the swapfile are unchanged. However the laptop is primarily used in Ubuntu, so this is not a big problem.

Next the Truecrypt installation was completed: it got installed under both Windows and Linux, however the volume got created under Linux and formatted ext3. An additional driver was installed under Windows to make it capable of reading ext2/3 partitions.

The well known hosts file was installed under both Linux and Windows (there is an alternative, however I never tried it) along with Firefox, NoScript and Flashblock.

The windows setup also included:

  • Administrative shares got disabled
  • System Safety Monitor (the free version) got installed
  • sudowin got installed and configured
  • Software Restriction Policies got configured with default deny, and exceptions (allow rules) were added to the following paths:
    • The windows directory
    • The Program Files directory
    • The temporary folder (this is needed because many install programs first drop their components here and try to load/execute them from this location)
    • The start menu, both for All Users and the current user
  • The privileges of the default user got dropped to normal user

In conclusion, this is quite a secure setup, as long as the user has the discipline to use the Truecrypt volume for anything sensitive. One possible source of leakage are the temporary files created during program operation, which reside on the unencrypted partitions. Another possible leak could be collateral data stored on the unencrypted partition, like the browser cache or saved passwords. This could be prevented by installing the whole OS on an encrypted partition - which is supported by the Debian installer (however I was lazy and used Ubuntu) and by Bitlocker in Vista. Another option would be to make the temporary folder private under Windows (using the EFS feature of NTFS), however this has some very nasty side effects, like installers failing inexplicably. My theory is that the problem lies in the fact that once a folder / file gets encrypted with EFS, only the given user and the Administrator can access it, and MSI installers run with the help of a background service, which runs as neither of these two accounts, meaning that it fails to access its own data files.

Saturday, August 25, 2007

You And I


I lose control because of you babe.
I lose control when you look at me like this.
Theres something in your eyes that is saying tonight.
I'm not a child anymore, life has opened the door
To a new exciting life.

Scorpions - You and I

(PS. Hope this falls under fair use and they won't sue me ;-))

Which password?


A little note about mounting Truecrypt volumes:

When you issue a command like this:

sudo truecrypt [truecrypt volume] [where to mount it]

You will be greeted with the following prompts:

Password: [your password to elevate privileges]
Enter password for '[truecrypt-volume]': [the password to the truecrypt volume]

Now in hindsight it's clear which password goes where, but I got quite a scare when I thought that I forgot the password to my Truecrypt volume :)

PS. Some people still claim that hardware support in Linux is weak. I can only say to this: I've installed Windows XP and Ubuntu 7.04 on a laptop. For Windows I had to download drivers on a different computer and install them separately (thank God it knew at least about the USB hub, so that I didn't have to burn CDs), while Ubuntu recognized everything, including the screen at its native resolution, the network card, the special media buttons on the keyboard, etc. Also, when I plugged a cable modem into Ubuntu through USB, it recognized it without asking anything!

I can be wrong too :)


As Forrest Gump said: Shit happens, and the best you can do is to admit that you were in error and to post in the forum where you previously posted the erroneous message, to make sure that people who could possibly be misled have a better chance of finding the correction.

The situation: when the Sunbelt Blog blogged about the new results in AV testing, I commented that they were possibly violating the terms of the AV-Comparatives.org website, which say:

Please link ONLY to our main site www.av-comparatives.org and not to the other subpages. It's forbidden to use/provide our test results/documents on other sites without our permission.

What I didn't realize is that the tests came from another (also great!) independent source, namely AV-Test.org. Sorry.

Monday, August 20, 2007

Using co.mments.com with NoScript

1 comments

A couple of months ago I was complaining about the fact that blog comments are usually one-off, fire and forget - you can't really have a discussion (compared to forums) because usually you don't have a way to notify users about new comments. That's when fellow blogger kurt wismer from the anti-virus rants blog came to the rescue and told me about co.mments.com (yes, it's a little bit hard to remember, but comments.com was already taken - presumably - and it's still not as bad as del.icio.us)

The idea of this service is that you let them know about every blog post you commented on, and they track that page for you and notify you about new comments, either through their website, an RSS feed or e-mail. Now, to add a site you could enter the URL on their page (which is a little cumbersome, since you would have to switch back and forth between the page you commented on and co.mments.com), or install their handy bookmarklet, which, when you press it, inserts a little javascript in your browser on the current page (it works with all major browsers) which adds the current page to your co.mments.com list. Much more convenient.

If you have installed NoScript and are using it to selectively whitelist sites, please note that you need to enable scripting for the current page (at least temporarily), because the script inserted will appear to Firefox to be running from the same domain as the site, meaning that if the current site isn't allowed to execute scripts, you can't use this easy way to track it.

PS. Co.mments has the option - if you are a blogger - to add a special link to your posts which - when accessed - will add the given post to the user's track list, without Javascript. But this depends of course on each blog owner.

Sunday, August 19, 2007

Creating optimal queries for databases


Although I'm a big PostgreSQL supporter, I started out as a MySQL user and still use MySQL daily, so I listen to the OurSQL podcast. In the latest episode (number 22) the topic was Things To Avoid With MySQL Queries. While I picked up a few tips from it (and most of the things mentioned are applicable across the board, not just specifically to MySQL), I realized that pgAdmin, the GUI administration tool for PostgreSQL, has a great feature (among many) that isn't talked about a lot: the visual representation of EXPLAIN queries. After all, which is easier to interpret: the raw text output of EXPLAIN, or a graphical diagram of the query plan? [The original post compared screenshots of the two here.]

Of course everything has two sides, so here is a small gotcha with pgAdmin: every time you access a database which doesn't have the default encoding set to UTF-8, it will pop up a warning saying that for maximum flexibility you should use the UTF-8 encoding. However, what it fails to mention is that if you don't use the standard C or SQL_ASCII encoding, you will have to define your indexes with special operator classes if you wish for them to be useful for query execution.
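Here is a sketch of that workaround, run through psql - the database, table and column names are hypothetical:

psql mydb -c "CREATE INDEX users_name_like_idx ON users (name varchar_pattern_ops);"
# without the operator class (on a non-C locale database), the index would not
# be used for queries like: SELECT * FROM users WHERE name LIKE 'abc%';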

Hack the Gibson #94, #95 and #96


Read the reason for these posts. Read Steve Gibson's response.

I've talked a lot about authentication in two recent blog postings (Getting ahead of the curve and Two channel authentication, with the followup Two channel authentication - part two), so I won't really cover episode #94 in detail.

Now for episode #95, OpenID

One of the first confusing things is that they keep mentioning OpenID and multi-factor authentication together. In fact there is no inherent connection between the two. All that OpenID is is a protocol to implement authentication by proxy: if you want to authenticate to a webpage P, you authenticate to your OpenID provider O, which in turn relays a signal to P saying that yes, s/he is who s/he says s/he is, because the authentication was successful. Of course one of the first questions that comes to mind is how trustworthy the proxy is... And also, the proxy itself can employ multi-factor authentication if it wishes, but there is nothing in OpenID which says it must.

On the plus side, the SpinRite story includes mentions of backups (and not just backups, but off-site backups, wow!).

Finally, the most fertile type of episode (from my point of view): listener Q&A. Because my main grief with Steve is that (a) he many times fails to give credit where credit is due and (b) messes up the concrete examples. The big picture he provides is usually correct, however, as they say, the devil is in the details, and if you get the details wrong while proclaiming your absolute knowledge of the matter, you end up confusing, or worse, misinforming people - and misinformation is the main problem in day-to-day security.

Regarding the first question: the main answer to the question is right. However the corollary, that just by being behind a NAT and disabling scripting you're safe, is false, false, false. This is very dangerous because it gives people the wrong impression of how they should secure their systems. To give you just one scenario: the WMF bug, which Mr. Gibson is surely familiar with, since he made some pretty bombastic claims about it (that it was an intentional backdoor created by Microsoft), would have gone through these defenses like a hot knife through butter. If you wish to keep yourself secure, there are basically three things you need to remember:

  1. The first and most important is that there is no such thing as perfect security! Anybody who claims to have such a thing is talking BS or wants to sell something :). A corollary to this is that because security and usability are inversely proportional (since security means limiting the possible uses of the system), a perfectly secure system would be totally unusable (by definition). As I've said many times, you should inform yourself before making any decision, to make sure that you make a compromise which is in line with your values.
  2. The second thing is defense in depth. From the fact that there is no perfect security it follows that there is no one setting or product which could provide it. Every additional layer of protection (if properly created and implemented!) reduces your risk of exposure. Some layers which should be implemented: running as a limited user, using AV software and/or a HIPS (again, depending on the level of (in)convenience you are willing to tolerate) and taking a look at the third point below :)
  3. The third point would be running an atypical system. It is a fact that there are more attacks against popular software than against less popular ones. This means that choosing software which is not run by the majority (ie Linux over Windows, Firefox over IE or Thunderbird over Outlook) will keep you safe 99% of the time.

On the next question, where the caller asks about situations where he would want others to be able to access the information (like his family in the instance of him passing away), there is one more solution that didn't get mentioned: key escrow. Basically you give your encryption key to a third party (a company usually) and specify under what circumstances it should be divulged and to whom (for example if a proper death certificate is presented by a family member).

The next question / comment is dead on, and I could now go back and say it took X episodes for this issue to be addressed, but rather I'll just move on to the next question.

The next question is correctly answered (as far as I can tell - myself not being a Mac user), but programmer Steve gets something wrong, which wouldn't be so terrible (because after all, we are all human), had he prefixed his sentence with as far as I know. So when he says And Windows has nothing like that (about the MacOS X Keychain), he is right only in the narrowest sense. Windows doesn't have anything which works exactly like that, however it has a feature called protected storage, which is used for example to store authentication credentials and autocomplete elements from IE, and it has a full API for third-party developers to use.

On the next question (or rather, the answer) Steve mentions that he records his DVDs at 1x for backup purposes. I'm no expert at this (see, these little magic words are the ones I miss most in the podcast), but I've heard the opinion that recording modern disks at 1x does more harm than good, the idea being that they were created for faster recording and slower recording can cause parts of the disk to overheat.

On the next question Steve answers exactly the opposing question, but to his credit, he corrects himself in the next episode.

With regards to the last question: in fact it is possible to have a completely secure wireless installation accessible by anybody. However, most probably the municipal WiFi projects won't be implemented using these techniques.

Hack the Gibson #93


Read the reason for these posts. Read Steve Gibson's response.

Another Security Now! episode, another SpinRite story without mentioning backups. There are a few explanations for this, none of which shed a very good light on Mr. Gibson: (a) he doesn't care, (b) the flaws SpinRite repairs are not at all serious, so with or without SpinRite the harddrive would be just fine or (c) there is some dark conspiracy between Mr. Gibson and the hard drive makers. I don't believe in conspiracy theories, but I like very much the following quote attributed to Einstein: Two things are infinite: the universe and human stupidity; and I'm not sure about the universe. But I've beaten this dead horse enough.

Credit to Steve: he mentions that there is no such thing in law as intellectual property, there are only patents, copyright and trademark.

The discussion was well rounded (although Steve did use the term intellectual property once or twice), however there were two points that I feel are important and were not covered or got very little coverage:

Run-through time for patents - I've heard that there is a backlog of at least a year at the patent office, that is, there is at least a year's worth of patent material which can potentially affect a given piece of software, but which no-one can look at for the following year. This means that you could do everything by the book (search every patent relevant to your field of activity) and still be potentially liable for patent infringement.

The second aspect - which got a little coverage, but not enough in my opinion - is the international one. The fact that the USA tries to force its patents on other countries through threats. The fact that it calls countries names when they decide to disregard the American patent system so that their people can make a decent living rather than being some kind of modern slaves, while failing to mention that the USA itself started out by disregarding British patents.

Saturday, August 18, 2007

Pictures


Sometimes I get the urge to shoot some photos (although I'm not very good at it, as you can tell from the images below), so I borrow the camera from a good friend (who tells me three times how to handle it, because it's some pretty serious stuff, not the point & shoot kind) and go out.

This first one is the main reason I've borrowed the camera today, and is the reflection of a church tower in the windows of an office building (a bank actually).

This second one is an advertisement put up by a bank all over Romania on big billboards. The text reads: you're not accustomed to having a home of your own?. I've put it up here because (a) I find it angering how this ad depicts poor people as somehow retarded and (b) because it's a big lie (apartment prices are growing at an incredible rate in Romania, especially in the big cities - to give you an idea: based on my income I'm in the top 20% of Romanian citizens, and still I had to take two separate loans - one for 30 years! - to be able to buy a rather small apartment).

This was just some imagery I've spotted looking out the window. I hope DreamWorks doesn't sue me for it :).

Update: see something similar to the first photo here.

PostgreSQL REPLACE INTO

0 comments

When migrating from MySQL to PostgreSQL, a question which seems to come up often is: how do you do the equivalent of REPLACE INTO? At the supplied link you will find the answer (with the mention that you should wrap the code in a transaction, if that's not already done for you by your data access library); however, I would like to talk about a little quirk regarding the problem / solution.
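But first, for reference, the usual emulation is an UPDATE followed by a conditional INSERT, something like the following (a minimal sketch only, assuming a hypothetical table t with a key column id and a data column val; note that under concurrent access the INSERT can still fail with a duplicate key error, which is why the linked answer uses a retry loop):

BEGIN;

UPDATE t SET val = 'new value' WHERE id = 1;

-- insert only if the UPDATE found no row to change
INSERT INTO t (id, val)
 SELECT 1, 'new value' WHERE NOT EXISTS (SELECT 1 FROM t WHERE id = 1);

COMMIT;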

Let's assume that you have a VARCHAR(10) column into which you try to insert a longer value (ok, your input validation should have caught this, but that's not the point). MySQL will normally emit a warning (which you won't see unless you're looking for it) and truncate the value before insertion (unless sql_mode is set to STRICT_ALL_TABLES or STRICT_TRANS_TABLES). PostgreSQL, however, will die in the middle of the code with an error message. This can be very enigmatic if your code happens to be in a stored procedure. Unfortunately I haven't found any elegant way to declare that a stored procedure takes only strings up to a given length, so here is a small function which verifies that an array of strings conforms to a given maximum length restriction and throws a less enigmatic exception if not:

CREATE OR REPLACE FUNCTION check_array_elements_length(max_length integer, elements anyarray) RETURNS void AS $BODY$
BEGIN
 -- a NULL array means there is nothing to check
 IF elements IS NULL THEN
  RETURN;
 END IF;
 -- walk the array; NULL elements are skipped, everything else is length-checked
 FOR idx IN array_lower(elements, 1)..array_upper(elements, 1) LOOP
  IF elements[idx] IS NOT NULL THEN
   IF length(elements[idx]) > max_length THEN
    RAISE EXCEPTION 'String exceeds maximum admitted length (%): %', max_length, length(elements[idx]);
   END IF;
  END IF;
 END LOOP;
END;
$BODY$ LANGUAGE 'plpgsql' STABLE;

To use this procedure, start your code with:

PERFORM check_array_elements_length(10, ARRAY[param1, param2, param3]);

It also works for bit strings, but in that case you must supply the length in bits (for example the length of x'1F2' is 12, not 3!).
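Putting the pieces together, a hypothetical upsert function could start with the check (a sketch only - the persons table, its columns and the parameter names are invented for illustration):

CREATE OR REPLACE FUNCTION upsert_person(p_name varchar, p_city varchar) RETURNS void AS $BODY$
BEGIN
 -- fail early with a readable message instead of a truncation error deeper in the code
 PERFORM check_array_elements_length(10, ARRAY[p_name, p_city]);
 UPDATE persons SET city = p_city WHERE name = p_name;
 IF NOT FOUND THEN
  INSERT INTO persons (name, city) VALUES (p_name, p_city);
 END IF;
END;
$BODY$ LANGUAGE 'plpgsql';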

Tuesday, August 14, 2007

Letting competent people do their jobs

1 comments

First of all - the usual disclaimer applies - this is my personal opinion, blah, blah.

The first positive comment on my VirusTotal uploader came in, which is cool; however, it brought up two issues:

The first would be: please don't use this tool to scan your entire collection, effectively performing a small DoS attack on VirusTotal. It was written to be as gentle as possible with the service, including:

  • no multithreading, samples are submitted one by one
  • it waits until the previous sample is fully scanned before it moves on to the next sample
  • it uses a custom user agent string, so that VirusTotal can filter it / prioritize it if they wish

However the main topic of this post is the idiotic test (if you can call it that - it was more of a marketing spin) carried out by Untangle. If you haven't heard about it yet, the gist of it was: pull around 30 samples out of our a** (one of which was EICAR!), scan them with some AV engines and declare that ClamAV (which coincidentally is used in their product) is good enough. This is wrong on so many levels. You can read a good writeup on the McAfee AVERT blog; however, the most infuriating thing (for me) was the constant repetition of AV testing is not open, AV testing needs to be peer reviewed. My response is:

  • Don't try to climb out of the s*** hole you've put yourself into. You've made some (very) bad moves; now admit to them
  • Have you heard about AV-Comparatives (full disclosure: I have no relation with them)? It is a venue which (as opposed to your little show) does tests that are fully independent, recognized industry wide and fully documented (as far as the methodology goes).
  • There have been many claims (including on the McAfee blog, and this result - generated with my script by a third party) - which seem to be true - that the scanners were misconfigured and that the detection rates would have been much higher had you taken the time to configure them properly
  • Making malware publicly available is stupid at best, illegal at worst

I agree that many AV tests in magazines are completely irrelevant and bogus, but - congratulations - you've managed to make something even less valuable and accurate.

PS. This criticism is not directed towards ClamAV, the open source movement, etc. Its sole target is the Untangle test. ClamAV is a reasonably good AV engine whose main focus is threats which arrive in the inbox (it being more a gateway product than a desktop product).

The case of the missing blog post

0 comments

After seeing the post on rootkit.com about Atsiv I planned to take a look at it, because the official announcement (which has by the way since been changed into a reply to Microsoft's actions) didn't give any details. Fortunately people smarter than me did that (proving the old saying that if you can do something today, leave it for tomorrow, maybe you won't have to do it at all), basically confirming my own suspicion that this tool is essentially a signed driver used to load other drivers.

I've always been skeptical of driver signing as a security measure, but I assumed that it was done to enforce some quality (ie. that the driver had to be submitted to MS before it could get signed, so that they could do some quality control on it). After these happenings, however, it is clear that this is not the case: the only thing you have to have is money (and it can be someone else's money too - from a stolen credit card for example :( ). On one hand this is understandable, because in this system MS doesn't have to commit resources / doesn't become a bottleneck in kernel development; on the other hand it negates the only potentially positive aspect of the driver signing requirement and makes it look like a money making scheme for CAs.

(Further clarification: I've found an MSDN blog post hinting that the original plan was in line with what I envisioned - drivers could be signed only if they passed WHQL testing - but the process was later relaxed)

Here is also another (possible) advantage to driver signing:

A primary benefit of KMCS is that it provides a means to identify the author of a piece of code, which helps enable follow-up with the author to address crashes that are observed through mechanisms such as Microsoft Online Crash Analysis.

However I claim that a similar level of traceability could have been assured by lower cost certificates.

Now back to the main topic of this post: Alex Ionescu published (and shortly after pulled) a tool dubbed Purple Pill which used the following technique to load unsigned drivers: it dropped a perfectly good (signed) driver from ATi which had a design flaw allowing arbitrary memory writes, loaded it, and then used it as a trampoline to load the unsigned driver. He claimed that this approach is superior because loading the unsigned driver still activates Vista's DRM paranoia mode, and thus the tool cannot be used to circumvent DRM. Very shortly after publishing it, however, he pulled it, saying that there is the potential (well, duh) that it could be used by malware.

My predictions are:

  • We are getting closer and closer to loading unsigned drivers on Vista (without special boot options, etc), and in the first half of 2008 (at the latest) we will see a solid method to do it.
  • After that, Chinese and Taiwanese manufacturers will start using it in their driver install kits (which are well known for their great quality), making it an uphill battle for Microsoft to enforce the requirement.
  • There will be several malicious individuals willing to cough up the money (probably someone else's money) to buy such a certificate
  • There will be several flaws discovered in third party drivers which could lead to similar actions, and it is possible for the bad guys to purposefully create such flawed drivers (for example by creating a freeware hex editor whose driver has a flaw allowing arbitrary memory overwrites, and then packaging this driver with their rootkits). I'm very curious what MS's reaction to such attempts will be. Will they simply revoke the certificates of such programs? If yes, it means that your certificate can be revoked instantly just because you had a bug in your program. If no, it means that driver signing is not really a security solution, but rather a money making solution.

Monday, August 13, 2007

Setting up a PPTP VPN (client) with Ubuntu

4 comments

This applies to the latest release (7.04), because from what I understand older versions had more (complicated) steps to follow. My solution is based on this blog posting combined with some advice from here. The steps are:

  1. Install the network-manager-pptp package (either by doing sudo apt-get install network-manager-pptp, by using Synaptic or any other way you like)
  2. Click on the networking icon and set up your VPN
  3. Issue the following commands (the package installation seems to issue at least some of these commands, however I couldn't get my VPN to connect until I re-issued them):
    sudo /etc/dbus-1/event.d/25NetworkManager restart
    sudo /etc/dbus-1/event.d/26NetworkManagerDispatcher restart
    
  4. Profit err - I mean happy VPN-ing

Sunday, August 12, 2007

Unofficial VirusTotal uploader

5 comments

Update: this script has been updated and renamed to OVScan. Please use the new version.

VirusTotal is a free service offered by Hispasec Sistemas which scans the submitted files with a large number of AV engines (currently more than 30) and shows you the results. Disclaimer: I have no affiliation with them or any other such service. While the results do not guarantee anything (bearing in mind that every engine can have false positives and malware which it doesn't detect), it still offers a much more detailed picture than scanning with a single AV engine.

This unofficial uploader was written to make it possible to submit multiple files in batch mode and to produce reports automatically. It is written in Perl and should run on most platforms where Perl is available (for Windows you can use ActivePerl).

The software (script) is released under the GPLv3. The currently supported command line options are:

vtuploader.pl [options] [file masks]

Options:
 -n --no-distrib The sample is not distributed to AV vendors
 -h --help       Displays this help
 -v --verbose    Output detailed information about the progress
 -b --bb-code    Output the result as BBCode
 -c --csv        Output the result as CSV
 -t --tab        Output the result as tab delimited file
 -m --html       Output the result as HTML 
 -l --log=[file] Save the output (the result of the scans) to the specified file

File masks:
 Specifies a file or a group of files to upload and scan
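For example, the following illustrative invocation (the directory and file names are invented) would upload every .exe file from samples/, print progress messages to standard error and save the results as an HTML report:

perl vtuploader.pl --verbose --html --log=report.html samples/*.exe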

An example result can be seen below:

VirusTotal scan results

File name: vtuploader.pl

Antivirus          Version        Last update  Result
AVG                7.5.0.476      2007.08.12   -
AhnLab-V3          2007.8.9.2     2007.08.10   -
AntiVir            7.4.0.60       2007.08.12   -
Authentium         4.93.8         2007.08.11   -
Avast              4.7.1029.0     2007.08.12   -
BitDefender        7.2            2007.08.12   -
CAT-QuickHeal      9.00           2007.08.11   -
ClamAV             0.91           2007.08.12   -
DrWeb              4.33           2007.08.12   -
Ewido              4.0            2007.08.12   -
F-Prot             4.3.2.48       2007.08.10   -
F-Secure           6.70.13030.0   2007.08.12   -
FileAdvisor        1              2007.08.12   -
Fortinet           2.91.0.0       2007.08.12   -
Ikarus             T3.1.1.12      2007.08.12   -
Kaspersky          4.0.2.24       2007.08.12   -
McAfee             5095           2007.08.10   -
Microsoft          1.2704         2007.08.12   -
NOD32v2            2454           2007.08.12   -
Norman             5.80.02        2007.08.10   -
Panda              9.0.0.4        2007.08.12   -
Prevx1             V2             2007.08.12   -
Rising             19.35.62.00    2007.08.12   -
Sophos             4.20.0         2007.08.12   -
Sunbelt            2.2.907.0      2007.08.11   -
Symantec           10             2007.08.12   -
TheHacker          6.1.7.167      2007.08.12   -
VBA32              3.12.2.2       2007.08.11   -
VirusBuster        4.3.26:9       2007.08.12   -
Webwasher-Gateway  6.0.1          2007.08.12   -
eSafe              7.0.15.0       2007.08.10   -
eTrust-Vet         31.1.5050      2007.08.11   -

Additional information
File size: 16004 bytes
MD5: 61b8388cb718f5888f63e506707cf58f
SHA1: d57434e6f782fcb59dba0160af404a0455848cd4

Tips and tricks:

  • Deprecated! See the command line options for how to save the output directly to a file. Otherwise, you should always redirect the output to a logfile; status messages are not affected by the redirection, because they are written to standard error.
  • You should use the -v option, unless you are very patient, because scanning of the files can take a long time.
  • If you need to use a proxy, you can set this from the environment variables by doing export http_proxy=http://localhost:8080/ under Linux or the equivalent set http_proxy=http://localhost:8080/ under Windows

Warning: this uploader is based on undocumented interfaces in VirusTotal. Although I have their permission to create this software, there is no express guarantee on their part that the interfaces will remain the same. In case they change, this script may (and most probably will) break, and I can't make any guarantees on the time it will take me to repair it. Please see the official methods for sending files if you need guaranteed delivery.

Update: added long options, the possibility to directly specify the file where the output should be saved, and a summary which gives the detection count both as a raw number and as a percentage.

Download it here

PS. Here are some alternative services in the same venue, if VT is unavailable for some reason:

  • virusscan.jotti.org - similar, but sadly it's almost constantly at peak utilization, and because of this, rather slow
  • VirScan.org - a new service from China (I think) with some broken English here and there, but seems to work fine (I also like the fact that archives can be submitted)
  • scanner.virus.org - with a spartan interface and sometimes slightly outdated virus definitions

Update: this script has been updated and renamed to OVScan. Please use the new version.

Wednesday, August 08, 2007

Getting ahead of the curve

0 comments

I was listening to episode 103 of Security Now, and all in all it was a good episode. However, one thing that baffled me (ok, maybe not so much, because I didn't have high expectations) is the fact that nowhere in the process did they ask about man-in-the-middle type attacks (although they mentioned them briefly when talking about SiteKey and BofA).

Now I don't want to bash businesses here, but let's look at the future (or at least how I imagine it - I've been known to have a wild imagination :-)):

  1. PayPal successfully launches its security key program
  2. Marketing will try to sell it as the best thing since sliced bread, AKA the perfect security solution
  3. It gets a considerable user base from the ranks of PayPal/eBay users (let's say 30%). Not only will these 30% be a considerable part of the users; most probably they will be the most active users / the people with the most money in their accounts, because they will probably be the most worried about the security of their accounts.
  4. The attacks will shift in a very short time from off-line (eg. steal your password and use it later) to on-line / real-time man-in-the-middle attacks.

What do I mean by on-line/real-time man-in-the-middle attacks?

Imagine this: the user gets infected with a malicious piece of code which watches every browser request (yes, it can do this despite HTTPS/SSL/TLS, because it operates locally, before the encryption is applied) and modifies requests to redirect funds, or detects that the user has successfully authenticated and then issues some automated transfers. Similar pieces of code are already in the wild, although they are currently (only) used to insert advertisements into unsuspecting third party pages; the above modification would be trivial.

Another factor which will contribute to the problem is that the mobility of a large group of people is slower (maybe exponentially slower) than that of a small group, because of the communication overhead. Concretely: the attackers can change their tactics very quickly, both because they are few (compared to the employees of eBay and their customer base) and because (from a technical tooling point of view) they follow a hierarchical structure (that is, there is a very small group of people with the technical knowledge, who supply the tools to the larger - but still small - community of people who actually use them). This hierarchical way of communicating is much more efficient than the semi-chaotic communication which goes on between a company and its user base. Also, the communication between the bad guys is of much higher priority (for them) than the message put out by a company for its customers (eg. if X sends a message to Y saying here is the new version of the tool which can get around the new security measures of Z, this communication is of much higher value to them, and it is much more probable that they will listen / react to it, than a customer getting a security notice or something similar from a company).

My conclusion is (which you are free to agree or disagree with - I'm waiting for your comments) that as soon as this technology gets any significant usage, we will see the scenario described above become a reality very quickly. And not just for eBay/PayPal but for all the participants of this program. The problem is not with the technology itself, but (as it frequently happens) with the way it is used and the fact that its limits are not properly understood by many of the people using it. The most important aspect of this is that these technologies only focus on authentication, leaving aside the problem of message integrity/authenticity! That is, after they build up a connection between the client device and the server device, authenticating both ends, their job is done. However there is still a complicated layer of technology on the client machine (like the browser, operating system and malware) which can modify transactions and/or create transactions on the fly!

In the long run this will mean that the cost of implementing this solution is money thrown out the window. (Then again, as one of my favorite quotes from economics says: Long run is a misleading guide to current affairs. In the long run we are all dead. - John Maynard Keynes). So why are companies using these solutions as opposed to more secure solutions which are already being deployed by other companies in the same business (read the description of ING in this post for an example)? I can only theorize, but a few reasons may be:

  • Lack of information on the part of the decision maker, who might not be a technical person and relies on his/her technical advisors to provide the information
    Update: see episode 56 of the Linux Action Show, where they explain how CIO magazine (which you can consider a type of advisor) gets it all wrong when it talks about Linux in the enterprise (again, you can theorize whether this was pure lack of knowledge on the part of the article writer, the fact that he believes everything PR/marketing departments feed him, or that he actually gets paid to twist things).
  • Misleading information from the vendor (in the same vein as nobody got fired for buying IBM, the solution vendor X must be good since (a) they are successful, (b) they say they hold a lot of patents and (c) it solves the current attacks)
  • Other factors, like favors and small attentions (as they say it here in Romania) from an interested party (which may be a vendor, a consultant, etc) to the decision maker
  • And finally: it is a real possibility (although I don't think it happens very often) that the costs (like user training and user annoyance) and benefits (like the fact that this actually reduces fraud in the short term) got carefully weighed, and the result was that it made sense to implement this solution, while possibly preparing the roll-out of a more complex solution in the long term.

Two final thoughts: in the show Leo mentions that it is still possible to log in even when the one-time password is not provided, by answering a secret question. This still leaves the system vulnerable to off-line abuse, since a man-in-the-middle attack can be performed where the attacker claims that there was a system error or some other plausible excuse and asks the user for his/her answer to the secret question. Using this data, the account can still be used by a third party without possessing the token. I understand the convenience aspect of the problem, but there are other solutions (like SMS-ing a one-time password to a predefined number - something that even got mentioned in the show) which are much more secure.

And also: because of this hierarchical or layered structure of (semi-)organized crime, antivirus companies still have a long life ahead of them. The reason is that, although a very great number of people perpetrate electronic crime, only a very small percentage of them actually create their own tools; the others live off their backs, which means that the AV needs to be able to detect only a relatively small number of malware families. This small group of people may also employ algorithms to create different variants of the same malware (essentially creating a program which creates a program), but given that computers are deterministic, these algorithms can be reversed and AV products can provide methods to detect every piece of malware produced by a given algorithm.

Tuesday, August 07, 2007

Hack the Gibson #92

0 comments

Read the reason for these posts. Read Steve Gibson's response.

The podcast kicks off again with a SpinRite story with no mention of the importance of backups or of replacing failing drives, but I digress.

Steve says:

Now, you could be running through multiple layers onion routing, or any other kind of proxy server. So that’s an issue. Although, if it’s a secure connection, as we assume it would be, an SSL connection, that cannot be routed through onions because you need to have a matching certificate from the far end.

which is not entirely true if you use something like Tor. Tor actually acts as a SOCKS proxy, not an HTTP proxy, which means that it simply tunnels the TCP stream without trying to interpret or modify its contents. Because SSL/TLS sits one layer up in the connectivity chain, Tor has absolutely no influence on it, aside from the fact that the remote host will see a different source IP address.

They again talk about software/hardware firewalls and actually bring up some valid points; however, Steve's comment I’m taking the gamble of being really careful that nothing evil gets in because my whole theory is, once that happens, it’s over anyway. I mean, it’s too late. fails to recognize the need for layered security and assumes that there is such a thing as a perfectly safe computer system or a behavior which ensures perfect safety. This is very dangerous: how can he be sure, for example, that there is no remotely exploitable vulnerability in the firewalls of the systems he connects directly to the Internet? Remember that all the remote code execution vulnerabilities which became public in Windows XP were probably there for 6 years or so (since its launch); no one can guarantee that they were not independently discovered and exploited earlier. So, again, you can't have perfect security, and most people would probably prefer to at least know when they got compromised.