
Saturday, November 29, 2008

Measure twice...

0 comments

Some time ago I was twiddling with my blog template, when I had the "great" idea of modifying the Google Analytics tracking code such that it checks for the successful loading of the external script before calling the logging function, to avoid generating errors when the script fails to load (because of NoScript, a hosts file entry or other reasons). So I modified it like this (warning! this contains errors!):

<script type="text/javascript">
_uacct = &quot;UA-432874-2&quot; if (urchinTracker) urchinTracker();
</script>

The "&quot;" encoding was needed because the blogger template needs to be valid XML. So what did I miss? The little fact that the two statements (assignment and if) were not separated by ";" or a newline, knocking out my analytics. This is not a big problem, since I'm not very interested in them, but it is still nice to have. Lesson learned (hopefully): measure twice, cut once. Now for the correct code if you wish to insert similar safeguards in your Blogger template:

The "old" style code:

<script src='http://www.google-analytics.com/urchin.js' type='text/javascript'>
</script>
<script type='text/javascript'>
_uacct = &quot;...your tracking code here...&quot;;
if (urchinTracker) urchinTracker();
</script>

The "new" style code:

<script type='text/javascript'>
var gaJsHost = ((&quot;https:&quot; == document.location.protocol) ? &quot;https://ssl.&quot; : &quot;http://www.&quot;);
document.write(unescape(&quot;%3Cscript src='&quot; + gaJsHost + &quot;google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E&quot;));
</script>
<script type='text/javascript'>
if (_gat) {
  var pageTracker = _gat._getTracker(&quot;...your tracking code here...&quot;);
  if (pageTracker) pageTracker._trackPageview();
}
</script>

Effective self-censorship

0 comments

No, I won't be talking about China or Australia here. I would like to talk about my experience of downloading a Firefox theme.

The given theme was marked as experimental, and thus - to download it - I had to create a user account on the site. The F.A.Q. explains it as follows:

Why do I have to log in to install an experimental add-on?

The add-on site requires that users log in to install experimental add-ons as a reminder that you are about to undertake a risky step.

Now let's analyze the approach a little more deeply: Firefox addons (and themes) can be downloaded from any website, not just from the official one. Downloading from other sites is a two-step process, whereby you first have to approve the site, then the addon. Hosting an addon on the official site gives it an air of trustworthiness. Historically, "experimental" / "beta" addons were hosted on the author's site or on mozdev. I assume that the option of hosting "experimental" extensions on the official site was created as a compromise between people wanting to post less-tested extensions on the Mozilla site and the Mozilla staff wanting to avoid less-stable plugins giving a bad name to Firefox.

However, I argue that such a move is detrimental to both parties. The sign-up process is quite "old school", and has a couple of usability issues:

  • No JavaScript validation of the fields: you have to submit the form to find out that you've missed / mistyped something
  • You have to solve a CAPTCHA every time the form is displayed, even though you've successfully solved the CAPTCHA for the previous submission
  • You have to validate your e-mail address. This arguably is a security feature, however it could be implemented much more sensibly (for example not letting you do things that modify the "state" of the site - like submitting comments - until you've validated your account, but still letting you download things)
  • The confirmation link doesn't automatically log you in. Again, this is arguably a security feature, however we are not talking about your online banking here, we are talking about a site which tries to "sell" you a product.
  • It doesn't support OpenID

Many people will be deterred by one of these obstacles, resulting in less usage (testing) for the extension. Those who battle their way through (like me) will be frustrated by the experience. The method itself sends a mixed message from the Mozilla team: "yes, this is an addon on the official site, but no, we don't want you to download it". The only possible benefit would be if the addons showed up when searching on the official site (or from the Firefox UI) - however, they do not! Luckily most people rarely use the site-specific search engine to find things (this is true for all sites, not just Mozilla's).

What would be a better solution?

  • Take a firm stance on the matter: either make these extensions "first class citizens" (don't require logins to download them, make them show up in search results, etc.) or don't host them on the official site at all. One acceptable compromise would be to place these plugins at the end of the search results.
  • Optimize the signup experience. No, you are not protecting Fort Knox!
  • Trust user ratings / reviews! If the given addon is of such poor quality, it will quickly get a reputation as such (or more importantly: it won't get a reputation as a "must have" extension).

Finally: the paranoia is overblown considering the percentage of Firefox users in general and the percentage of those users who use any extensions. I would argue that people who use more than two extensions are a very, very small percentage of the userbase, making the risk of "bad" extensions tarnishing the Firefox name very small.

Friday, November 28, 2008

Daily funny

0 comments

Found this band on Last.fm: Hayseed Dixie "A Hillbilly tribute to AC/DC". Very, very funny.



Thursday, November 27, 2008

Spam?

0 comments

I was reading about dynamic proxies in Java (sidenote: it is interesting how similar concepts get implemented in languages which are considered "far apart" - take the foreach loop or "magic" getters and setters, which are present in PHP, Java and Perl - just to name the languages I've used recently) and came upon this blog post. At the end of it there was a comment which seemed very familiar:

werutzb 10.07.08 / 7pm

Hi!

I want to extend my SQL knowledge. I red really many SQL resources and would like to read more about SQL for my position as mysql database manager.

What can you recommend?

Thanks, Werutz

I vaguely remember rejecting a similar comment on my blog. When I searched around I found a lot of other cases (around 12,000). However, what struck me as odd was the fact that the comment didn't contain any links, and neither did the name ("homepage") field. So what is this? A stupid spammer who forgot to include actual links? Or is somebody testing some kind of automated tool to see where s/he can spam? Or something else?

Regular Expressions in Java

4 comments

I was wondering why the gnu.regexp package exists, when Java already includes a regex library. One reason I can think of is the fact that java.util.regex was only added in 1.4.

While searching around I found some surprising facts about the built-in regex library (the site goes up and down, so here is the Google Cache link in case it's down again):

  • regular expressions are not compiled to a finite automaton, the way it's done in other languages / libraries. This (I feel - I didn't test it personally) can cause some considerable performance hits.
  • it can break for some extreme regular expressions. The given example is in Ruby (run under JRuby), but I translated it to Java and found the same results (stack overflow exceptions):
import java.util.regex.Pattern;

public class RegexOverflowTest {
    public static void main(String[] args) {
        // build a ~1400 character input string (76 * 18 characters)
        StringBuilder longString = new StringBuilder();
        for (int i = 0; i < 76; ++i)
            longString.append("xxxxxxxxxxxxxxxxxx");
        // the repeated alternation group makes the backtracking engine recurse
        // once per character, which blows the stack (StackOverflowError)
        if (Pattern.matches("\\A((?:.|\\n)*?)?([\r\n]{1,2}|--)", longString.toString()))
            System.out.println("foo");
        else
            System.out.println("bar");
    }
}

Some conclusions: java.util.regex is still useful if you take care not to use overcomplicated regexes. There are alternative regular expression engines out there. Specifically, I found this article which is a little old (from 2002) and feels a little like architecture astronautics (abstracting away the regex layer? really?), but it does include some benchmarks of the alternatives. Most probably all the packages have evolved since, so you should do your own benchmarking, but this is a good start. There is also some useful discussion in two bugs related to this: Bug 4675952 and Bug 5050507.
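On the "do your own benchmarking" note, here is the kind of minimal harness I would start from (my own sketch, not taken from the linked article - the e-mail pattern and the iteration count are arbitrary). It only compares recompiling the pattern on every call against reusing a precompiled Pattern, which is the cheapest win you can get with java.util.regex:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexBench {
    private static final String REGEX = "[\\w.+-]+@[\\w-]+\\.[\\w.]+";
    private static final Pattern PRECOMPILED = Pattern.compile(REGEX);

    public static void main(String[] args) {
        String input = "please contact [email protected] or [email protected] for details";
        int iterations = 200000;

        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            // worst case: the pattern is recompiled on every call
            Pattern.compile(REGEX).matcher(input).find();
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            // compiled once, reused many times
            Matcher m = PRECOMPILED.matcher(input);
            m.find();
        }
        long t2 = System.nanoTime();

        System.out.println("recompiled each time: " + (t1 - t0) / 1000000 + " ms");
        System.out.println("precompiled:          " + (t2 - t1) / 1000000 + " ms");
    }
}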

Update: I just posted a small test comparing alternative regex implementations under Java.

Mixed links

0 comments

From the "Things that make you go hmmm" blog: Do most people vote on five star ratings in extremes? [analysis, MySQL queries] - interesting look, however I'm not sure how much value there is in such analysis. I don't really feel an urge to "please the public", however on some more technical posts ("how to" type posts) this might be useful.

Ask a Google Engineer - some interesting tidbits there. Also, I found out about Andrew Morton that he works at Google :-)

Innovation in free desktops: What I've got open - an interesting suggestion to improve the communication between applications. I especially like the idea for browsers (and other applications) to accept pasted data directly wherever they accept files.

From taint.org: pixenate on demand - an online service, accessible via JS APIs, to upload and transform images. Interesting concept, but I'm not very sure that it is a viable business model (my basic problems are that (a) you introduce another dependency in your system - if they go down, your site or part of it will also go down - and (b) you introduce an additional, repeated - because it is subscription based - cost, which takes away from your revenue).

Also from taint.org comes an interesting technique for fighting spam. It is very cool how you can keep track of items securely without actually storing the IDs (which means you don't need storage, and it is also very scalable).

From terminal23 comes this useful (and very true) post: 10 things your tech guy wants you to know .

From the SDL blog comes the following piece: Secure Coding Secrets? Interesting opinion.

Bruce Schneier is part of a team working on SHA-3 and he shares some news on this matter. While current hash algorithms should be sufficient for the near future, it is nice to see that NIST is thinking ahead.

The Financial Cryptography blog talks about the balance of closed vs. open information (yeah, the cert is broken). Interesting read.

Since everybody else blogged about it: you can listen to the latest Guns'n'Roses album on MySpace. Here is a review about it.

Via the Donkey On A Waffle blog: solving the halting problem? The fine folks at GetACoder are "100% confident for a successful delivery of your Project", while staying in "constant communication online".

Over at the Rational Survivability there is a comparison between cloud services. Nice overview.

On OpenRCE we have an article about Memoryze, a memory forensics tool from Mandiant. Interesting. The only thing I have an issue with is the description of the tool as "not reliant on API calls". Now I didn't download and look at it, but somehow I feel that it just opens "\Device\PhysicalMemory". While it is true that it doesn't use APIs to list processes / DLLs / etc, (most probably) it does use APIs to obtain the initial data, and as such, it can be undermined (then again, probably because memory analysis is a relatively new field, there isn't much out there which does so).

The disadvantages of cloud based scanning

1 comment

My fellow blogger Kurt has written a post about the benefits of scanning in the cloud. While I mostly agree with it, there are some disadvantages which also need mentioning:

  • The need to be always connected - how will such a system deal with the disconnected scenario? As much as we are used to being always connected, there are still cases when we need to operate disconnected. For example: when we are on the road, when we are on an airplane, when our ISP has an outage, when we moved to a new apartment/changed ISP and we are waiting to "get connected", etc. There are two possible solutions which come to mind:
    • (a) refuse to start the computer - this is very dramatic and most probably unacceptable...
    • or (b) start, but only allow files which were previously scanned - this seems to be a good solution, but the line between executable and non-executable files is very blurry (for example the Word DOC you are working on could contain malicious macro code, so it would be nice to scan it at every modification) and thus this operation method will either offer lesser protection (by not scanning "document" files which could still contain executable code) or still block the user's ability to do her work.
    of course there is always the third possibility of "failing open", but that is a worst-case scenario which hopefully nobody will choose.
  • Network latency - just what speed impact will the need to contact a server before executing each file have? The protocol will probably look something like the following:
    • send a hash of the whole file or relevant parts of the file to the server
    • wait for the server response
    • if the server determines that the file might be infected, the client will need to upload the entire file
    if you thought that your current desktop AV solution is slow, this might be slower by a factor of 10x! (not in the average case, but in some extreme cases) - a rough code sketch of such a lookup protocol follows after this list
  • "under-reporting of new samples is reduced/minimized" - two counter-points here: first, most AV vendors already collect "suspicious" files from the client computers (of course it is with "consent" in the form of the EULA - but that's an other discussion). Second, the problem with under-reporting is not necessarily that companies don't have access to the files (although that can be a problem sometimes), but that it is of very low priority for them (if you were a blacklisting company, what would you look at first? a file which is reported by 1000 users of a file which is reported by one user?)
  • "conventional malware q/a should be entirely thwarted" - while I agree that it will somewhat reduce the problem, the bad guys (and girls, I don't want to discriminate here :-)) can still use proxies all over the world to circumvent statistical analysis.
  • scanner reverse engineering is almost completely nullified - true, however a host of other possible vulnerabilities is created: is the communication protocol designed to protect against MITM attacks (created for example by DNS hijacking)? Is the infrastructure able to withstand a DDoS? Your protection can be disabled by (partially) cutting off network access (again, how does the product react to this?).
  • Also, Kurt mentions that there are sensitive materials which users might not be comfortable with sending to the "cloud". Two points here: vendors are already collecting files (the level of awareness about this practice is a different question). Secondly, there is almost no such thing as a "non-executable" file these days. Word documents, Photoshop files, HTML pages - they all need to be scanned. The scanning will probably be partial (ie sending hashes of the key areas of the files at first), but it will also need to include a fallback mechanism to send the entire file. Will people be comfortable with the prospect that their software vendor could spy on all of their activities, circumventing protections put in place to prevent data leakage (file encryption, drive encryption, network traffic encryption)? Will this even be legal given the laws and regulations different institutions and organizations are subject to?
  • Given the previous point, it is possible that some companies will have "enterprise" products which keep the old model of "delivering signature files to the client" to ameliorate the concerns enumerated. They would essentially leverage the clients who do agree to participate in the fluffy (cloud :-)) version as a sensor network. However, at the same time, bad guys could obtain these "corporate" versions and use them for their QA purposes.
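To make the latency point above a bit more concrete, here is a rough sketch of what the client-side lookup could look like (this is entirely hypothetical - the endpoint URL, the "clean"/"malware"/"unknown" response convention and the upload step are my own assumptions, since no such product or protocol actually exists yet):

import java.io.FileInputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.MessageDigest;

public class CloudLookupSketch {
    // hypothetical endpoint - no such service actually exists
    private static final String LOOKUP_URL = "https://av.example.com/lookup/";

    static String sha256Hex(String path) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        InputStream in = new FileInputStream(path);
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            md.update(buf, 0, n);
        }
        in.close();
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    // round-trip number one: ask the server for a verdict on the hash
    static String queryVerdict(String hash) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(LOOKUP_URL + hash).openConnection();
        conn.setConnectTimeout(2000); // what to do on a timeout is exactly the hard question
        conn.setReadTimeout(2000);
        InputStream in = conn.getInputStream();
        byte[] buf = new byte[64];
        int n = in.read(buf);
        in.close();
        return new String(buf, 0, Math.max(n, 0), "US-ASCII").trim();
    }

    public static void main(String[] args) throws Exception {
        String path = args[0];
        String verdict = queryVerdict(sha256Hex(path));
        if ("unknown".equals(verdict)) {
            // round-trip number two: the whole file has to be uploaded for analysis
            System.out.println("would upload " + path + " for full scanning");
        } else {
            System.out.println(path + ": " + verdict);
        }
    }
}

Even in this optimistic form there are two network round-trips in the worst case, plus the question of what the client should do when the connection times out (fail open or fail closed) - which is the disconnected scenario from the first bullet all over again.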

To finish on a lighter note: these are just speculations since no such product exists (yet). Hopefully the vendors will take all these elements into consideration.

Wednesday, November 26, 2008

Will Morro continue to innovate?

0 comments

Rich Mogull thinks that Morro (the free AV from Microsoft) will lead to more innovation. However I think that the issue is not so clear-cut:

Morro will be forced to innovate like any AV vendor due to the external pressures of the extensive user base of existing AV solutions, changing threats/attacks, and continued pressure from third party AV.

The biggest problem with this argument is that MS, having brand recognition, can ignore tests to some degree, thus lessening the pressure on them. On the other hand, it will probably share intel (if not teams) with some related MS products (like the MSRT or Forefront), giving it the possibility to react faster. In the end, security still doesn't hurt Microsoft enough to keep them from repeating the "IE method" (killing off the competition and then ignoring the product).

Tuesday, November 25, 2008

Suspicious domain - or not?

0 comments

I was forwarded a link to primariaclujnapoca.ro, and (although I was 99% sure that the site is legit) decided to check it out with domaintools.

To my amazement I found the WHOIS information to be severely lacking. This, coupled with the fact that it is hosted on a shared server, raised my suspicion. At the same time I was baffled, because I was pretty sure that this is a legit site.

Turns out that ROTLD protects the WHOIS data via a (pretty lame) CAPTCHA, and this is probably the reason domaintools doesn't have the information. Interesting...

Join the World Community Grid!

0 comments

Via Userfriendly: join the World Community Grid and help save lives - effortlessly. Do I sound like a late-night TV commercial? Now with no added sugar! :-) All jokes aside: WCG is a cool community project which dedicates idle CPU time to things like cancer research. While I don't recommend that you leave computers on just for this (because IMHO it would be a waste of energy), running it during normal use is quite ok (I found the system impact to be minimal). There is also a video which goes into some more detail (although - I feel - it is slightly exaggerated in the infomercial direction I mentioned earlier).

A word of warning: for some dubious reasons the install kit requires a reboot to function (???), so you should install it at the beginning/end of your day (or when you plan to reboot your computer). Aside from that it was a really easy install.

Tweaking the design

0 comments

It seems that changing site-designs periodically isn't a rare phenomenon. I'm not really a designer, however I like to tweak ready-made designs for usability/functionality reasons. For example recently I went from a two-monitor 19" setup to a one-monitor 22" (wide) setup.

Comparing the two, I arrived at the same conclusion I've heard from many sources before: two smaller monitors beat one larger monitor hands down. This is because on one larger monitor lines can become very long and hard to read (without head movement) and you lose the "grouping" feature of multiple monitors (maximized windows occupy the entire monitor). To counter this I use the Desktops tool from Sysinternals (now Microsoft).

Where I was going with all this is that I added a max-width to my stylesheet. I've set it so that it only gets activated when the browser is more than ~1000px wide (in fact it is specified in EMs, so if you change your font size, the threshold will vary). I know that this is poorly supported in IE6, but that is only a very small segment of my visitors (around 10%). Another reason for not implementing the popular hacks for IE6 (like CSS expressions) is that I wanted to specify the value in EMs, which is not impossible, but definitely tricky.

Sunday, November 23, 2008

Mixed links

0 comments

The guys over at Jupiter Broadcasting (is it a sign of networking geekdom that I kept typing Juniper Broadcasting? :-)) reviewed boxee. It is very cool to find out that the product is based on an open-source program (XBMC). Also, AFAIK it is currently only available on Mac and Linux, and because of this I couldn't take it out for a spin yet (but I plan to as soon as I pick up my new laptop). I'm really curious (and hopeful) whether they managed to make an agreement where they don't need to lock out people outside of the USA.

From the Linux Action Show: they discuss a new license called the AGPL. Very cool! It is similar to the GPL, the difference being that it plugs the "ASP hole". Very interesting, and the discussion on the show was also very insightful (with some minor mistakes, like saying that Google couldn't have gotten off the ground if Linux was under such a license - neglecting the fact that there is the entire class of BSDs out there). After this discussion I really feel that Microsoft's OS is hindering innovation (by placing a considerable price on the foundation of companies)...

On the SANS blog we find the "Are we doomed? / There is hope" list. A very nice list which tries to keep the balance between pessimism / optimism about IT security as a whole.

Finally, the USA DoD gets overwhelmed by (what seems to be) an autorun malware. From my experience the network infrastructure in large organizations tends to "rot" until everybody has access to everything, which results in these kinds of situations. Of course the mandate of IT is "first, keep the business (organization) running", so the preferred level of lockdown is rarely an option...

If you don't know English, ...

0 comments

Scott Hanselman thinks that I'm linkbaiting. Just wanted to let everyone know, I stand 100% behind my affirmation that

If you don't know English you are not a programmer.

I personally think that the following commenter (also cited in the blog) nailed it:

It would *seem* (totally non-scientific sampling) that the non-English speakers (as a first language anyway) tend to agree with the statement "If you don't know English, you're not a programmer" more than native English speakers.

Hmmm, now let me think, what is the difference between English and non-English speakers who get this question? That's right, the non-English speakers actually have first hand experience with the problem, while the English speakers are most probably only talking about hypothetical situations or from a (misguided) sense of "political correctness".

Half a millennium of posts

1 comment

Here I am at the 500th post and I'm trying to write something intelligent, just to find out that I don't have any noteworthy thoughts :-). Hope to see you after another 500 posts.

Friday, November 21, 2008

The first rule of computer security is

0 comments

You don't talk about computer security. No, that's not it, but it sure seems like many people adopt that attitude. Getting back to the subject, I want to talk about the first of the 10 Immutable Laws of Security:

If a bad guy can run (persuade you to run) his program on your computer, it's not your computer anymore

This is an axiom which (I feel) people (like to) forget or neglect when talking about AV. However it is something which will become more and more important when we slowly start to realize that using AV software as the only defense means trusting it to clean up the infections (because AV is a reactive measure - which means that if you rely only on it, there is a high chance that you will get infected).

The problem is that automatically disinfecting even a moderately powerful malware is nearly impossible with standard tools. This is so because malware can (and many do) have its own "blacklisting": as long as it is allowed to run with enough privileges, it can kill off and prevent the reinstallation of AV software. Interestingly this means that the roles of the attacker and defender are reversed in a sense (the malware is defending itself from AV software via blacklisting). There are several options to "attack" this problem (I'm talking here about organizations rather than individual users, since users are very unlikely to have the necessary resources - however they can become part of an organization - ie pay somebody for support for example - to get access to these methods):

  • Organizations could develop in-house disinfection tools. This is not as hard as it sounds if you have the required know-how. Unfortunately this isn't the case in most organizations.
  • AV companies could develop custom disinfection tools/instructions. This happens quite often, however the problem is that as soon as it becomes public knowledge (and it has to, so that affected people can use the method), malware authors can defend against it.
  • Organizations can use lesser-known products which are less likely to be "blacklisted" by the malware - VirusTotal currently lists 37 engines. This is certainly a viable option, however a careful selection is needed to identify products which (a) are not "blacklisted" by the malware and (b) can identify/remove the malware
  • AV companies can try to modify their product so that the blacklisting component of the malware doesn't recognize it any more. This is very unlikely to happen for at least three reasons: (a) there are marks which would be very difficult to remove, like the company name in digital signatures (which is needed to load drivers on Vista x64 for example) (b) system administrators rely on these "marks" (like service names) to do legitimate tasks (like starting/stopping the AV, checking if it is functional, etc) - changing them would break their workflow (c) the rate at which change can be introduced in these products is limited by the QA processes (new kits have to be tested).
  • Use "cold boot" solutions like putting the HDD in an other machine or booting from USB/CD and perform the scan that way. Some companies already offer this option, but it is complex (you have to have a blank CD, a writer and knowledge to write CD's) and has many problems (like not being able to write to NTFS partitions, not being usable if the disk uses full-disk encryption, not storing the updated signature files, which results in the need of downloading them after each update, not recognizing the networking hardware, etc)

The conclusion would be: it is good to have multiple layers of defense, including AV. However, using it as the one thing which stands between your computer and the ton of malware out there is foolish. Finally, the AV product which perfectly cleans up after a malware infection (which it didn't manage to catch at the point of intrusion) is a dream. After an infection, if you have any kind of sensitive material on the machine: wipe, reinstall and patch.

F*** you Microsoft!

0 comments

For deciding that your precious Windows Update is so important you have to restart my computer in the middle of the day, costing me two hours of work!

Thursday, November 20, 2008

Funny quote of the day

0 comments
... if you need more than 3 levels of indentation, you're screwed anyway, and should fix your program.

From an older version of the Linux Kernel Coding Style. The newer version contains a less funny (offensive?) wording...

Google Reader, Javascript and Flash

0 comments

I had the idea some time ago to highlight the source code I post via JavaScript. I gravitated towards this solution because I don't have source-level control of Blogger (or do I? ;-)). My thought process was the following: include one .js in each post, which will check if the customization was already done and perform the customization if necessary. However I very quickly discovered that Google Reader (and, I assume, other web-based readers as well) strips JS (probably for security reasons).

While pondering different possible solutions, I thought of two things:

  • First: why doesn't Google Reader just put HTML extracted from clients in an IFRAME from a custom / randomly generated subdomain (ie. qwefwer.googlereader.com)? The IFRAME could have no border and the appropriate width-height (and the correct overflow style), making it indistinguishable from a plain page. The idea being that the same origin policy would prevent malicious JS fiddling with elements it shouldn't. However this was probably harder and possibly less secure than going with the whitelisting.
  • Second: I observed that Google Reader allows Flash to be embedded in the blog posts. Or at least I thought it did. So I've said: aha! I can embed flash, flash can execute Javascript, so I can execute Javascript!

Unfortunately (fortunately?) this is not the case. They seem to employ a whitelisting solution, removing any embed/object tags which specify a source that is not on the whitelist. As far as I can tell the whitelist is not public, but it includes at least some online video services. BTW, if you wish to inspect the Google Reader traffic in Fiddler, don't forget that the responses are GZIP compressed, which Fiddler doesn't decompress automatically :(

Where does this leave us?

No javascript for you! Unless you find some kind of security hole in one of the whitelisted Flash movies. If you do however, you can take over the whole session, because your JS will run in the context of the Google Reader.

I don't know what kind of filtering is applied to other objects (Java Applets, Silverlight, etc), but from what I've seen I assume that they would be filtered out.

It would be very nice if they would adopt the IFRAME approach, because that would mean both more security and the possibility for them to enable full JS / object support.

Malware challenge results are out

0 comments

The contest results are out. I scored way below my expectation and - after reading a few submissions from the top half of the list - I can't really tell what the reason for it is :-(. (Not that it matters all that much - given that I wasn't eligible for prizes anyway - it's just a matter of pride.)

Anyway, if you are so inclined, read through the submissions. You will find methods that apply to you, whether you are just a sysadmin or somebody who is interested in reverse engineering.

Wednesday, November 19, 2008

Is Vista really safer?

0 comments

I keep reading articles like this: Security – One of The Key Reasons to Migrate to Windows Vista (other articles from this category are for example one which breaks down the MS Malicious Software Removal Tool statistics by versions of Windows to conclude the same thing).

The problem with these? They fail to account for the fact that the biggest reason nobody is attacking Vista is because it is still rare. You could get the same (and even better) results from this point of view with Linux, MacOS, etc. Of the things listed in the "Defend Against Malware" section, only UAC and ASLR are really new (and IE7 Protected Mode).

ASLR only mitigates exploits, not malware per se. And UAC is one of those technologies which will quickly become ineffective (and of course it's not a security feature). The reasons why it becomes ineffective are twofold: one is social - people will learn to just click ok/accept. The second one is that malware writers will learn not to touch areas which trigger UAC. You can still do a lot of damage, even when running with reduced privileges (you have access, for example, to all of the user's data).

BTW, this isn't the first time I've heard misinformation from Microsoft representatives. Just last week I listened to an interview with an MS UK IT evangelist where she said something like: "I cleaned up the computer with an Anti-Spyware program and then used an AV to clean up viruses", which leads me to believe that she doesn't understand that spyware is just malware and almost all current "AV" products can handle both. This is worrying because it doesn't seem to be intentional (so it is a lack of competence, which makes you question any other information you get from her).

To get back to Vista's security features: let's suppose that MS somehow manages to write perfect, bug-free code. Does this mean that we solved the computer security problem? Far from it!

For one, there are a lot of very popular software packages out there with vulnerabilities (think Adobe, Flash, etc). These are present on 80%+ of Windows PCs, which still makes them a great target for malware writers. You can check out the top used applications on Wakoopa to get an idea (although that is a somewhat biased sample - for example I don't think that Google Chrome is the 4th most used application among the general population).

Finally, a growing problem - which currently nobody seems to address - is the vulnerability of data stored on public servers (I'm talking here about things like webmail, social networking, etc). You can have the world's most secure computer system and still lose control of your data stored online if the third-party service has vulnerabilities (although arguably the world's most secure computer wouldn't run a browser :-)).

To sum up, I think three trends will appear in the following year or so which will make it apparent that Vista is no security silver bullet:

  • "Vista compatible" malware
  • Malware targeted at popular software
  • We will see more and more "web-based" problems

Of course the biggest problem is the human element, which no technology can fix...

Free AV from Microsoft

0 comments

It seems like Microsoft is dropping their OneCare product line and repackaging it as "Morro", a free consumer AV product. I read the news on Graham Cluley's blog.

What does this mean?

This will of course eat from the pie of the other vendors offering free products (AVG, Avira and Avast!). It will also get more people onto the "free AV" bandwagon because of name recognition ("No one was ever fired for buying IBM... err, Microsoft").

But will it make any impact?

In the short term - yes. In the long term - not really. As Graham Cluley correctly points out, if this gets a large user base, (professional) malware writers will test their "products" against it to ensure that they are not detected (as they do with other large AV products - this being a reason why you are probably better off with a lesser-known AV company). This will devolve into a "disinfect not prevent" situation, where malware has the upper hand. I think code which disables Microsoft's "way in" to the PCs (killing Windows Update for example) will be present in more and more malware, making sure that once it gets a foothold on the computer, it will remain there.

In the end all this will do is to take away some marketshare (and money) from other players in the AV industry. From a security point of view it won't make a difference.

Tuesday, November 18, 2008

Opinions about whitelisting

0 comments

I was reading the piece entitled White Listing – The End of Antivirus??? by the "Director of Technical Education". While it would be fairly easy to make an ad-hominem attack against him, I will stick to the technical details of the post:

First, it gives the argument that one of the approaches whitelisting companies use is to gather as much software as possible and then scan it using (many) AV engines. From this he concludes that whitelisting companies couldn't work without AV companies. This is not true. Whitelisting companies have at least two things going for them:

  • Reputation - they get the software from "reputable sources", which means that statistically the chance of it being malware is much, much lower (interesting how he misses this argument since - by his own admission - he did something similar for MS, scanning files to be released for viruses, and can verify this first hand: in an older blog post he said that there were only very few cases - something like one or two - when he found infected files. That's reputation.)
  • The whitelisting companies could very easily build farms of machines which would execute the files and compare the "before" and "after" state of the machine to decide whether the file is malicious or not. The technology is out there, readily available for anyone.

Another option for the whitelisting companies would be to classify files based on characteristics like entropy, digital signatures, etc (like Mandiant Red Curtain does). Of course this is not a 100% solution, but it would cover a big majority of cases.

Now examining some other claims of the post, we find that "TSA does whitelisting", which shows a lack of understanding of the terms. The TSA does blacklisting based on the infamous "no-fly list". The TSA also does blacklisting of objects you can carry on the plane (ie they don't list all the things you can take on board, they list all the things you can't).

The other argument brought against whitelisting (as a concept - applied to websites this time) in the post is that it doesn't protect against the current trend of reputable websites being hacked. The problem is that, while the argument is correct - ironically - the examples are wrong. In all of those examples the sites were only modified to redirect - in one way or another - the browser to a malicious site where the exploitation attempt would take place. Both whitelisting and blacklisting would protect against these attacks (whitelisting - because it wouldn't allow the redirect away from the site, and blacklisting - because hopefully it would include the target of the redirect). The situation where these solutions would have a problem is when all the malicious content is hosted on the modified website (there is no redirection), a practice which is not yet common. For a more technical discussion of the URL blacklisting topic see this blogpost.

In the end, both solutions are just layers in a defense-in-depth solution. Sometimes whitelisting is more appropriate and sometimes blacklisting is. Claiming one is superior to the other shows either a lack of understanding or an intention to mislead.

Firefox 2 end-of-life

0 comments

Via Slashdot came the news that version 1.8 of the Gecko engine (used to render HTML in Firefox 2, Thunderbird 2, etc.) is being end-of-lifed. Now I still have a few computers which I'm responsible for that have FF2 on them, just because that's what the users were accustomed to. So I searched around and found this: Firefox 2.0 Classic Theme for Firefox 3.0. So I will be installing FF3 with a FF2 skin there.

Also, the "news" was misleading (what a surprise - FUD on Slashdot :-)). Thunderbird is not going away, nor do you have to update to an alpha version of it. They will be supporting Gecko 1.8 with security patches for some time, it's just that new features won't be added (which isn't so critical in the case of mail clients - HTML mails are evil anyways :-)). The new Thunderbird will be released sometimes next year with the new Gecko engine, but there is no need to rush the upgrade (or at least nothing related to this announcement - maybe there are features in 3 which are vital for you).

In conclusion: the sky isn't falling (yet) and always look at the bright side of life :-).

Update: It seems that the theme is marked as "experimental", and thus you need an account on addons.mozilla.com to be able to download it. I found the following account to be working from bugmenot: [email protected] / bugmenot.

Ethical hacker challenge solution

0 comments

Given that the deadline passed, I'll publish my solution to the Scooby Doo Ethical hacker challenge. In related news (via SANS): the November challenge from packetlife. The deadline is the 20th of November, so hurry up.

Can you figure out who killed Dr. Wilson, and why? I would say it was Dr. Miller. In the partial disk image there was an e-mail saying:

"I know how you've been obtaining our passwords to steal the exams provide them to the students. You'll see I have the proof in the attachment. I expect you to resign your position and leave the University at the end of the semester or I will be forced to disclose this information and fire you.
Dr. Wilson"

The attachment contained a photo of Dr. Miller's office. In the photo one can see the box of - what I assume is - a wireless camera. As the answer to question 2 explains, this was used to steal the exams and Dr. Miller feared for his reputation / position.

How were the passwords stolen to steal the exams? My theory is that using a wireless camera they were either read directly from the monitor, or the camera was used to capture the passwords as they were typed in.

Can you provide a copy of the cryptography final exam? Can you create an answer key? Foremost extracted it from the partial drive image (together with the Rick Astley video ;-)). On a sidenote, the email was not extracted by foremost (probably because the headers were badly damaged - in fact they were entirely gone) and had to be extracted manually and the attachment decoded (for example by using the online Base64 decoder at: http://www.motobit.com/util/base64-decoder-encoder.asp).

The answers are:

The first question (a "shift" cypher with 16 places of shift)

a long time ago, in a galaxy far, far away it is a period of civil war. rebel spaceships, striking from a hidden base, have won their first victory against the evil galactic empire. during the battle, rebel spies managed to steal secret plans to the empire's ultimate weapon, the death star, an armored space station with enough power to destroy an entire planet. pursued by the empire's sinister agents, princess leia races home aboard her starship, custodian of the stolen plans that can save her people and restore freedom to the galaxy
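For reference, decoding such a shift cipher is only a few lines of code. A minimal sketch (mine, not part of the challenge materials - it assumes lowercase ASCII letters, a forward shift of 16 during encryption, and passes everything else through untouched):

public class ShiftDecoder {
    static String decode(String ciphertext, int shift) {
        StringBuilder out = new StringBuilder();
        for (char c : ciphertext.toCharArray()) {
            if (c >= 'a' && c <= 'z') {
                // undo the forward shift, wrapping around the alphabet
                out.append((char) ('a' + (c - 'a' - shift + 26) % 26));
            } else {
                out.append(c); // spaces, digits and punctuation stay as they are
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("q bedw jycu qwe", 16)); // prints "a long time ago"
    }
}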

The second one I didn't manage to figure out.

The third one was coded using the Enigma algorithm. Given the specified settings, one can use one of the many available simulators (for example the one at http://enigmaco.de/enigma/enigma.html) and get the decoded result: SOMEBODY SETUP US THE BOMB.

Also, provide some analysis of Velma's incident handling process. What did she do right? What should she have done differently? The most important problem is that - because she did not use a write blocker - it will be hard to prove that the contents of the drive were not changed. Also, her actions might have eradicated physical evidence (fingerprints for example). What she did right was the fact that she imaged the drive and worked on the image, rather than working with the drive.

Dynamic or static typing?

0 comments

Working in Java (Eclipse more precisely), I came to value very much the features it offers for exploring / manipulating the source code. I'm talking about things like "take me to the definition", "show me where it is used", refactoring features, etc.

This got me thinking: isn't the real difference between static and dynamic languages the tooling? The ability of the IDE to infer information from the source and use it to provide suggestions / options which are relevant in the given situation? I would say yes. And if so, the tool support will improve (is improving) considerably in the next few years, making the dynamic vs. static languages discussion less relevant...

Just my 2c

Update: I forgot to mention two ways IDEs try to implement support for dynamic languages - using a predefined dictionary (this is how Visual Studio supports jQuery) or using tracing (watching the program as it runs and taking notes about the objects assigned to variables - this is how some Smalltalk IDEs do it, AFAIK).

Monday, November 17, 2008

Vendor included backdoors

0 comments

Another reason to make sure that you use your available software to the maximum extent before going out and deciding to remediate your software problem with more software :-)

Vendor-included backdoors can appear for multiple reasons, but there are two big categories:

  • "Easter-egg" like feature (some programmer decided to put in a piece of code that accepts a "magic" password (this was the case for example with older versions of Borland Interbase - it was discovered when they open sourced the project)
  • To make the life of support personnel easier. While the first type of situation is relatively rare (and companies actively discourage them), this second type of backdoor is often consciously included. They win because support calls are handled faster. You win, because your issues are resolved faster. It's a win-win situation, right? Except when you get p0wned using the same mechanism...

So what are the takeaway lessons here?

  • Try to get the most out of your current software before deciding to get more software
  • These support backdoors can exist in many software products. Sometimes they are documented, but sometimes they aren't. Be very suspicious of support calls when they "magically" fix your problem (ie. with minimal interaction from you). Ask them how they did it and if the method they used is available anytime from anywhere (for example, it would be sensible to prompt the user before taking control of the computer).
  • Open source can help, but it isn't immune to this problem (the issue mostly hasn't arisen because there is no formal support behind most OSS products, but it is entirely possible that "vendor supported" versions include this "feature").

Friday, November 14, 2008

Calculating the intersection of two Java sets

3 comments

This is my simple stupid Java tip for the day: to nondestructively calculate the intersection of two Sets (ie, leaving both original sets intact), do the following:

Set intersection = new HashSet(s1);
intersection.retainAll(s2);

Taken from the Java Tutorials. Lesson learned: before implementing code which even vaguely seems like it should already exist, check with your favorite search engine. Also, who comes up with these function names? :-)
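For completeness, here is the same idea with generics, together with the related nondestructive operations (union via addAll, difference via removeAll) in a small self-contained sketch; the point in every case is that you copy first, so s1 itself is left untouched:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetOps {
    public static void main(String[] args) {
        Set<String> s1 = new HashSet<String>(Arrays.asList("a", "b", "c"));
        Set<String> s2 = new HashSet<String>(Arrays.asList("b", "c", "d"));

        Set<String> intersection = new HashSet<String>(s1); // copy first...
        intersection.retainAll(s2);                         // ...then keep only elements also in s2: [b, c]

        Set<String> union = new HashSet<String>(s1);
        union.addAll(s2);                                   // contains [a, b, c, d]

        Set<String> difference = new HashSet<String>(s1);
        difference.removeAll(s2);                           // contains [a]

        System.out.println(intersection + " " + union + " " + difference);
    }
}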

Thursday, November 13, 2008

Daily funny

2 comments

Via Mechanix: the difference between JPEG and PNG illustrated. A similar topic would be: don't use the same image for the thumbnail and the big image! Just because you said width="320px", the browser still needs to download the whole image!

Wednesday, November 12, 2008

Note to self

8 comments

A Dell Optiplex (755 if I recall correctly) is refusing to start from time to time. Unplugging it and replugging it after ~10 seconds helps, but I would like to get to the bottom of the problem. I made sure that all the expansion cards and memory modules are properly seated. Now it actually gave me an error message (w00t), something about a [Krst] checkpoint. A little searching around revealed that the problem might be the keyboard or the monitor. Now I do have my PS/2 keyboard plugged into the mouse port (and I'm using a USB mouse), but that's because it wouldn't work in the keyboard port... Until now I assumed that it was a manufacturing glitch, but now I have to look into it.

Update: it is a GX260, and the proposed solutions don't seem to work. Tried to put the keyboard in the marked slot, to remove the monitor, both to no avail...

Update: it seems that it was a problem with the PSU (power supply). It finally gave up the ghost and wouldn't start at all. Replacing it with a functioning PSU solved all the problems.

Fiddling with the comment system

0 comments

My friend Dan D. mentioned that the new inline commenting system wasn't playing nice with NoScript. After a little looking into it I found that indeed, Blogger is using JS to render the form (why?).

To reduce the pain a little I've added a link to the old method of leaving comments, which should appear in case you don't have JS enabled. If you also want to do this, edit your template as HTML (of course back it up first!), check the box to expand the widget code and search for the following line:


<b:include data='post' name='comment-form'/>

After it include the following piece (of course, change the text to read whatever you want):


<noscript>
  <b:if cond='data:post.allowComments'>
    <p style='font-size: smaller; font-weight: normal;'>Using the embedded
    comment form requires JavaScript to be turned on for the blogger.com domain.
    However, you can still comment <a expr:href='data:post.addCommentUrl' 
    expr:onclick='data:post.addCommentOnclick'>using the traditional 
    method</a>.</p>
  </b:if>          
</noscript>

PS. I also added a recent comments and a top commenters widget to the sidebar, courtesy of Blogger Buster.

Tuesday, November 11, 2008

How can you be certain that your code works?

3 comments

You can't. Read this great article from Peter Harkins.

Mixed links

0 comments

Via /dev/random: the story of a fictitious penetration test. Very interesting, eager to read the rest.

From Kim Cameron's Identity blog: Leaving a comment (with CardSpace / IdentityCards). The first time you do this it takes a whopping 11 steps! I fail to see how this is better than current systems or OpenID. (I'm talking about the user experience - from a security point of view Identity Cards are clearly superior to the old username/password type authentication).

Via devnet's bookmarks: Lifehacker: Top 10 Ways to Get Cables Under Control.

From the Pythian group: Performance tuning: HugePages in Linux. Very interesting read. Machines with a lot of RAM (16/32GB) are getting pretty mainstream in the enterprise, and design decisions from 20 years ago (ie 4KB pages) are not ideal these days.

From Matt Cutts's blog: a fun email. Similar to my experience some time back. What I find funny is the type of questions people post, even though the comment form clearly states that:

Got a webmaster-related question or suggestion that is not directly related to the topic of this entry? Instead of posting it here, your best bet is our official Google forum linked from http://www.google.com/webmasters/

From the MNIN Security Blog: two interesting posts - Recovering CoreFlood Binaries with Volatility and Locating Hidden Clampi DLLs (VAD-style). This just reinforces my opinion that there are many tricks you can play with the OS which can render investigative tools unusable. The moral: once the attacker has run code on your machine, it is not your machine anymore (rule 1 of computer security). Also, generic tools won't do you any good if you want to investigate targeted attacks...

From Didier Stevens: Shoulder Surfing a Malicious PDF Author - cool. It is always fun to follow the digital trail.

From GNU Citizen comes a cool paper: Universal Website Hijacking by Exploiting Firewall Content Filtering Features. It boils down to the following:

Some filtering solutions (they discuss SonicWall in the paper) replace some webpages on the fly with "This website is blocked" type of text. In vulnerable systems you can inject javascript in the page (because it doesn't encode the URL for example). Now, the browser doesn't know that the message isn't coming from the original site - since the replacing is done on the fly in the request/response stream, so it will consider that it comes from the same domain, effectively circumventing the same origin policy. The final part of the puzzle is the fact that you can predictably trigger this filtering by including the f-word in the URL for example.

Enumerations in Java

0 comments

Starting to (professionally) program in Java, one of the things which bugged me was the constant declarations in classes which implemented struct-like idioms:

class Foo {
  public static final int FOO_1 = 1;
  public static final int FOO_2 = 2;
  public static final String FOO_3 = "42";
  ...

The code referencing this seems even more clumsy, especially in the case of equality testing:

  ...  Foo.FOO_3.equals(someVariable) ...

This seems to me vastly inferior (from the point of view of readability) to this:

  ...  Foo.FOO_3 == someVariable ...

There are two things going for this: there are fewer characters to type (even with Eclipse auto-complete :-)) and the "==" operator provides a nice visual separation of the two parts. Then I remembered reading about the technique called type-safe enums. Even better, I found that this method has been included in the language since version 1.5 (AKA Java 5.0) and gives you several nice features:

  • Handles serialization and deserialization automatically (no need for you to write code)
  • Can be used in switch statements
  • Supports adding additional methods/data (this is important if the enumeration needs to interface with systems outside of the JVM which use an "encoded" version of the enumeration - for example an integer number - see the sketch below)
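To make the last point concrete, here is a small sketch of such an enum (the name and the integer codes are made up for illustration): the extra field carries the "encoded" value used outside the JVM, and a fromCode lookup maps external values back onto the constants:

public enum Status {
    OPEN(1), CLOSED(2), CANCELLED(3);

    private final int code;

    Status(int code) {
        this.code = code;
    }

    // the "encoded" value used by systems outside the JVM (database column, wire format, ...)
    public int getCode() {
        return code;
    }

    // maps an external integer back onto the enum constant
    public static Status fromCode(int code) {
        for (Status s : values()) {
            if (s.code == code) {
                return s;
            }
        }
        throw new IllegalArgumentException("unknown code: " + code);
    }
}

Client code can then switch on the enum or compare it with plain "==" - subject to the class loader caveat discussed below.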

Unfortunately I also found a big, big problem with this approach: it doesn't work when multiple ClassLoaders are involved (see the printer-friendly version and another discussion). Now you will probably go (like I did): WTF are multiple class loaders and when will I encounter them? The two linked articles do a good job of explaining (also, take a look at the comments), so I will highlight just two use cases which you might encounter:

  • Using Applets (applets from different pages are loaded by different ClassLoader's for security reasons)
  • Using application containers like Tomcat, WebLogic, etc (again, classes from different deployment entities - wars / ears - will be loaded by different class loaders because of security considerations) - see for example this article from TheServerSide.com which explains it in more detail.

So where does this leave us? The current solution I see is:

  • Use the typesafe enum pattern (together with the serialization part)...
  • but don't use "==" if there is the slightest chance that you will get in a multi-classloader situation (from my current understanding this means Applets - not so common - or applications servers - common). Instead implement an equals and hashCode method (remember the restrictions correct implementations of the methods must respect) and use the equals method.
  • When these constraints don't apply, you can use enums (supposing that you target Java 1.5+). You can't override equals for enums, because it is marked as final in the parent :-(.
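For the equals-based route from the second bullet, here is a minimal sketch of what I mean (my own illustration, not taken from the linked articles - the class and the constants are made up): a hand-rolled typesafe enum whose equals and hashCode compare a stable code instead of relying on object identity, so a copy produced by deserialization still compares equal to the canonical constant:

import java.io.Serializable;

public final class Color implements Serializable {
    public static final Color RED = new Color(1, "RED");
    public static final Color GREEN = new Color(2, "GREEN");

    private final int code;
    private final String name;

    private Color(int code, String name) {
        this.code = code;
        this.name = name;
    }

    public int getCode() {
        return code;
    }

    public String toString() {
        return name;
    }

    // value-based comparison: a deserialized copy is still "equal" to the constant
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof Color)) {
            return false;
        }
        return this.code == ((Color) o).code;
    }

    public int hashCode() {
        return code;
    }

    // optional: collapse deserialized copies back onto the canonical constants
    private Object readResolve() {
        return code == 1 ? RED : GREEN;
    }
}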

So where does this leave us?

This solves the typesafety issue (meaning that you no longer have methods which accept three ints, but a method which accepts a parameter of type Foo, one of type Bar, etc - which means there is less of a chance of mixing up the parameter order, for example).

However, this still doesn't solve the elegance problem, which is very important (the less you have to write, the less chance you have to screw it up - also, it is easier for those who come after us to read the code).

Anti Malware Testing Guidelines

1 comment

Via the ESET blog: the guidelines for testing Anti-Malware products were published by AMTSO (the Anti-Malware Testing Standards Organization). Go and read them if you are so inclined (each of them consists of only 5 pages - you have to give them props for brevity - although maybe they just wanted to avoid being too specific, so there is less of a chance of being wrong ;-)).

In general I feel that testers (no offense) don't have the necessary technical skills to evaluate the relevance of their tests in an objective manner. Sorry, but someone who has never laid hands on an ASM-level debugger (like Olly) or a disassembler (IDA), who never participated in a crackme contest (with at least some success), who never analyzed shellcode, who never unpacked a malware - just doesn't cut it.

Also, I find some conflicting statements in the two papers. First they sidestepped the question of what constitutes "creating new malware" (this is interesting in the context of the Consumer Reports situation - BTW, my personal opinion on the matter is that CR was justified in creating variants).

Second, they say that "test results should be statistically valid". First of all, the expression is "statistically relevant". Statistics (as I found out) is not a black and white game. Usually, limit criteria are selected somewhat arbitrarily (using "well accepted" values is common - however they are more a psychological factor than a mathematical one). Example: what is an acceptable error margin? 5%? 10%? 50%? There is no magic formula which can respond to that, it is largely determined by how you feel about risk.

Now this principle clashes with the dynamic testing paper, which acknowledges that (given the complexity of the setup) as few as 50 (!) samples might be used for a particular test. Given that each month more than 100,000 new (undetected) samples appear (and this is a conservative number), this sample set is utterly insignificant.
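To put a rough number on that (my own back-of-the-envelope calculation, using the standard binomial approximation): the 95% margin of error on a detection rate p measured over n samples is about 2 * sqrt(p * (1 - p) / n). For p = 0.5 and n = 50 that works out to roughly +/-0.14, i.e. +/-14 percentage points - far too coarse to meaningfully rank products whose real detection rates differ by only a few percent.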

There goes nothing

0 comments

People, please stop the fear mongering. The F-Secure blog has a post titled There Goes WPA, telling us how insecure WPA is now, with Elcomsoft (great guys BTW) using the GPU to gain a factor of 100 in breaking speed and researchers breaking the TKIP part.

What it fails to point out is that adding just two characters to the WPA passphrase negates the Elcomsoft problem and that the TKIP break affects only a part of WPA's encryption. It also fails to give some actionable advice, like:

  • When possible, use WPA2 (WPA2 is not affected - it uses an entirely different - and much stronger - encryption algorithm - AES vs. a modified version of RC4)
  • Access points can be set to "rekey" themselves regularly - a short rekeying interval limits the window for the TKIP attack until you can migrate over to WPA2.
  • This doesn't affect the "enterprise" deployments of the access points, only the "pre-shared key" deployments.
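To put the "two more characters" remark above into numbers (my own arithmetic, assuming the passphrase characters are drawn from the roughly 95 printable ASCII characters): each additional character multiplies the search space by about 95, so two extra characters make a brute-force attack roughly 95 * 95 ≈ 9,000 times more expensive - which comfortably swallows a 100x GPU speedup.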

Update: take a look at the RaDaJo blog for more technical details (as opposed to senseless fearmongering).

Monday, November 10, 2008

One (and a half :-)) challenges

0 comments

Ethical Hacker just launched a new challenge. This one however is a little different since you need to buy the book Daemon to be able to solve it. The book itself has some good reviews, but still, this makes it out of reach for a lot of us :-(.

And the second challenge (which is unfortunately already over :-)) is from Didier Stevens. In fact it isn't so much a challenge as a puzzle. Still worth a look though, especially the solution method described in the comments.

Poor man's traffic logger

1 comment

I was reading the following blog post about filtering out MySQL queries and was reminded of a situation I faced once. The situation was as follows: I needed to find out where certain PostgreSQL queries were coming from, however the server was behind a pgpool instance, so all the queries were seen as coming from the same IP.

The solution was to tcpdump on the interface/port where pgpool was listening and search the traffic for the specific queries. This solution is much more elegant of course :-). Also, somebody in the comments mentioned a nifty little tool called MySQL query sniffer, which looks very nice and probably could be adapted for PG (using something like PgPP as the basis).

Friday, November 07, 2008

Sun bans Romania from downloading

4 comments

Confirmed from multiple locations with multiple ISPs: whenever you try to download something (JDKs) from Sun using a Romanian IP, you get:

Your download transaction cannot be approved. Contact Customer Service.

I've tried downloading with an SDN account (so that Sun knows that I don't want the JDK with all the non-exportable crypto stuff), to no avail. From customer support I got back a canned response telling me to empty the cache/cookies. Of course it didn't work, and they haven't (yet) responded to my follow-up email. Oh well, I guess I will be hunting for USA proxies (TOR should work also, but it's not exactly the ideal solution for downloading hundreds of megs).

Update: I received a reply from SDN saying that the download system "is experiencing site difficulties. It should be resolved soon, so please try your download again later today". It still isn't working though... I've replied to them, for what it's worth, reiterating that it still isn't working.

Also, it would be interesting to know if this also extends to the auto-update feature - ie if auto-update isn't working either...

Update: it seems to be working now. Let's hope it stays like that...

Thursday, November 06, 2008

For Star Trek geeks

0 comments

Via the Radio Free Security podcast. Very geeky and very funny:

For more see Hi-Fidelity quartet.

Monday, November 03, 2008

Job offer from Nokia

0 comments

Some time back I was looking around in the job market and, amongst other possibilities, I checked out Nokia. This meant that I got added to their mailing list (voluntarily). Today I received the following mail from them:

When creating your profile at Nokia's Career Site, you requested to be notified of job openings. The following job was just posted which may be of interest for you:

Quality Specialist 4

Position Description: NULL

Requirements: NULL

So a job with no description and no requirements. Cool :-). On a sidenote: my experience with them was that they were not at all interested in my technical qualifications. In fact the technical part of the interview lasted no more than 10 minutes and the questions were very generic. My feeling was that they have a table where it says "CS degree - X EUR" and this is the offer they make you. This might be good if you are coming straight out of the university, but for somebody who already has work experience it probably isn't the best offer.