
Saturday, October 28, 2006

Creating random passwords - the easy way


Passwords are the main authentication method on almost all current websites. They are easy to implement (from the website owner's point of view), however the user must balance several conflicting goals if s/he wants to stay safe:

  • Passwords should be long
  • The user must be able to remember the password
  • It should not be composed of words which can be found in a dictionary
  • It should be different for every website / location, so that if one location is compromised, the attacker cannot use the obtained password to log in to other places.

Here is my solution to the problem: choose a master password and for each site generate a password from it using the password combinator (requires javascript). The advantages of this script are:

  • It uses JavaScript, so it runs 100% on the client side with no server communication (other than the initial page load). The server never sees any of the entered data, you can use it in offline environments (if you download it from here), and you can view the source code to make sure it does what it claims.
  • It can generate passwords of any length and complexity so you can tune it to what a site is able to accept.
  • The generated password is completely deterministic (meaning that given the same inputs and settings it will always generate the same output), however it is very unlikely that somebody could determine the master password based on the output, even if the modifier is known, since the generation is based on the SHA1 algorithm (a sketch of the idea follows the list).
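
To make this concrete, here is a minimal sketch of such a derivation. This is an illustration of the principle, not the combinator's actual code (the exact hashing and truncation details differ), and it runs under Node.js for the crypto module:

  var crypto = require('crypto');

  // derive a per-site password from the master password and a site-specific modifier
  function sitePassword(master, modifier, length) {
    var digest = crypto.createHash('sha1')
                       .update(master + ':' + modifier)
                       .digest('base64'); // 28 base64 characters (letters, digits, '+', '/', '=')
    return digest.substring(0, length);   // trim to whatever length the site accepts
  }

  // deterministic: the same inputs always yield the same password
  console.log(sitePassword('my master secret', 'digg.com', 12));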

As I've mentioned above, you can use obvious things for the modifier, like the website's domain, and even then the only practical attack against your master password is brute force. Taking it a step further, I've implemented a feature in the script which automatically enters into the modifier text box anything that follows the # sign, so you can head over there and generate a password for digg directly. To make it even easier, you can use the following bookmarklet (by dragging it to your Bookmarks toolbar); when you click on it, a little JavaScript magic opens the password combinator with the current site already entered: SitePass
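
A bookmarklet is just a javascript: URL; a hypothetical one-liner of the kind SitePass uses (the combinator's real address differs) would be:

  javascript:location.href='http://example.com/combinator.html#'+location.hostname;

Here location.hostname supplies the domain of the page you are currently on, which becomes the modifier.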

Stay safe. And remember, you can always use the random password generator to generate secure passwords which fit your needs.

One final remark: you might ask why this is hosted on a free server, and whether using the bookmarklet is a privacy risk, since it tells the server which page I want to generate a password for. The answers: I use a free server because I currently don't have the money to pay for a hosting service. And because this runs 100% client side (again, you can look at the source to make sure of this), no data is transmitted back to the server which could compromise your security. As for the case when you use the bookmarklet to show a prepopulated version of the page: the server again only sees that you requested the page; anything after the # sign isn't sent to the server, but rather interpreted by the browser.

Updating Ubuntu to Edgy Eft


Finally I've gotten around to updating to Ubuntu Edgy Eft (6.10). My first remarks were:

  • The download is quite heavy (677 MB), so it took several hours on my 256 kbit connection. If you have a good connection at work, for example, you should download the ISO, write it to a CD and add the CD as a package source.
  • If you have custom sources in your /etc/apt/sources.list which time out, you should remove them before trying the update, because the updater fails if it can't fetch the package list from a source.
  • I've updated using the update-manager, as described over at the debianadmin site.
  • After the update I got Firefox 2.0 as a bonus (and I'm loving the included spellchecker!), however it seems to have overridden some settings (for example, I had disabled the sound at the login screen and it came back after the update).

Happy updating!

Monday, October 23, 2006

The IE7 team replies - sort of


As you might remember, Martin McKey very generously offered his readers the chance to post questions which he would ask at the IE7 release party. Well, he went, he asked, and as I predicted he got a canned response. I feel this was partially because he didn't insist on it - and I can't blame him, since they had a lot of questions to ask and after all it wasn't his question - and probably partially because I didn't phrase the question clearly enough (English not being my native language). Actually, I suspect that even if he had insisted he would have gotten some generic response.

So here is my question again, and I challenge any IE7 technical team member to give me a technical reason (like "we couldn't do X with the current set of APIs") for not implementing the containment wall technology in pre-Vista Windows versions. As I understand it, this technology basically separates IE into multiple processes, each with a specific task (like rendering the page, talking to the net, etc.), and each process drops the rights it doesn't need. This means that if you find a bug in the rendering code, for example, you can't exploit it in any meaningful way, because code executing in the context of the rendering process has almost zero privileges. If this is truly what the technology does, it is entirely possible with current versions of Windows, and I see no reason other than marketing for this move. (Not that I'm deluding myself into thinking that anybody - or anybody from Microsoft, for that matter - reads my blog, but it's nice to let off some steam ;-) )



Listen to the whole podcast

Saturday, October 21, 2006

Hack the Gibson - Episode #62 - sort of


How to have your cake and eat it too?

Sorry for the lack of posts recently, but I'm just swamped at work and I also have to buy books from time to time. However, I can say that I have several JavaScript and Perl goodies prepared, and I'll post them soon.

The recent show was a fairly good one (definitely one of the better ones), and I just want to make one comment: there is a solution to Leo's problem with people behind proxies (the problem, for those of you who didn't listen to the show, was basically that because of proxies he couldn't get accurate download figures; he suspected he had a larger audience, but couldn't prove it to the marketing people). So here is the solution:

Point the clients (from the RSS feed enclosures and the links on the site) to a server-side script (Perl, PHP, whatever works for you) which sends headers that disallow caching and then returns a 302 redirect with the actual location of the mp3 file. Now track the hits to this script rather than the audio file. (A minimal sketch follows.)
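
Something along these lines (shown in JavaScript under Node.js purely as an illustration; any server-side language works, and the mp3 location and port are placeholders):

  var http = require('http');

  var MP3_URL = 'http://example.com/podcast/episode-62.mp3'; // placeholder location

  http.createServer(function (req, res) {
    // this request is the hit you count - log it (or feed it to your stats backend)
    console.log(new Date() + ' ' + req.connection.remoteAddress + ' ' + req.url);
    res.writeHead(302, {
      'Location': MP3_URL,                                    // the real file
      'Cache-Control': 'no-cache, no-store, must-revalidate', // proxies must not cache the redirect
      'Pragma': 'no-cache',
      'Expires': '-1'
    });
    res.end();
  }).listen(8080);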

Why does this work? Because with this the conversation looks like this:

  • Client requests the script file.
  • Proxy sees the headers and says: I won't cache this.
  • Client sees the 302 response and makes another request to fetch the file from the new location.
  • The response has headers that allow caching, so the proxy caches it.
  • Now another client comes along. The first three steps are the same, but at the last step the proxy kicks in, says "I already have this file", and serves it from the cache.

The benefits of this method: better download tracking and the same bandwidth utilization as before (you basically let proxies do their thing, as opposed to the solution where you deny caching of your material). The drawbacks: there is a maximum number of redirects a client is willing to follow, so if you use this method in addition to other redirects (from podtrack, for example), you might create more redirects than the client is willing to handle, and it just gives up. Another potential problem is that people could figure out the actual location of the audio file (it's not rocket science, after all) and start downloading from there; however, this will probably be a small percentage of the audience. If you're still concerned, you can deliver the content only when the client sends the right Referer header. This of course has the drawback that some people turn off referrer reporting. A combined solution might be to require the Referer header only from people who are behind a proxy (based on the request headers).

A final note: please, please give the script the same name as the audio file (using mod_rewrite, for example) so that when I do a save-as on the file I don't get some generic name like redirect.mp3.

Update: one example of implementing this is podtrack. While I think that their site is ... (I'm trying to find a word which isn't too derogatory) and I wonder why anybody would use IIS instead of Apache, they got this much right. You can check, for example, by downloading a podcast tracked by them with curl and the -v option. There you can see that the first connection (made to their redirect server) replies with:

HTTP/1.1 302 Found
Date: Mon, 23 Oct 2006 12:33:33 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 1.1.4322
Location: http://www.esanity.co.uk/podcasts/23-10-06-boagworld.mp3
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: -1
Content-Type: text/html; charset=utf-8
Content-Length: 173
while the original podcast location in my case replies with:
Content-Length: 27704743
Content-Type: audio/mpeg
Last-Modified: Sun, 22 Oct 2006 22:54:03 GMT
Accept-Ranges: bytes
ETag: "de6dcef82cf6c61:9e4"
Server: Microsoft-IIS/6.0
MicrosoftOfficeWebServer: 5.0_Pub
X-Powered-By: ASP.NET
Date: Mon, 23 Oct 2006 12:33:44 GMT

Wednesday, October 18, 2006

What to do if you have many TIME_WAIT connections owned by the system process?


If you have a Windows machine which acts as a server and handles many connections per second, you can end up with a lot of connections in the TIME_WAIT state owned by the System (PID 0) process. To resolve this, if the communicating hosts have high-speed connections to one another (like a local LAN), you can use the following tweak to reduce the timeout value:

Change the DWORD value TcpTimedWaitDelay (create it if it doesn't exist) in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to 30. The value is in seconds, and the accepted range is between 30 and 300 seconds.
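
The same change from the command line (standard reg tool syntax; the change takes effect after a reboot):

  reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f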

Tip taken from the WinGuides site.

The official documentation can be found over at the MS TechNet site.

As a side note: why are all those connections owned by the System process? Because when a process exits, its lingering connections are inherited by the System process (this is probably something like the init process under Linux, which inherits processes whose parents have exited).

Web 2.0 vs Web 1.0


Read it here: Web 2.0 Thinking Game. Check out also Create your own Web 2.0 Company.

Web 1.0: Writing.
Web 2.0: Rating.

Hey, at least I'm Web 2.0 ;)

Is your IT department doing this?


While the subtitle of the newspaper is laughable (The independent voice of the Microsoft IT community), I think that the article is very nicely written: IT Gone Bad.

Web Developer Stereotypes


Sitepoint did a survey amongst web developers and found that people who use PHP are very likely to try Ruby on Rails. While I haven't completed the survey myself, I find that I'm in this exact same position: I've been developing in PHP for several years now and plan to check out Ruby, however I'm reluctant to do anything big in it until I understand the full extent of the magically generated code and the security implications it has.

Credits: I came across this link on the Tucows blog.

Just a fun post

Engineering Definitions.

Tuesday, October 17, 2006

Quick port forwarding guide


It always gave me a headache trying to figure out the command line syntax of ssh for port forwarding; I would end up staring at the man page for several minutes and making drawings on a piece of paper. So I've put together three illustrations for the three possible port forwarding modes. The green arrow means that the traffic is encrypted, and the red that it's not encrypted (or at least not by ssh; you can tunnel already encrypted traffic like RDP over ssh). The arrows show the direction in which the traffic is initiated; after that, the traffic can flow both ways. The ssh command is always issued on the machine marked ssh, and the ssh daemon runs on the machine marked sshd. For reference, the three corresponding command forms are sketched below.
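
(Hostnames and ports below are placeholders.)

  # local forwarding: connections to localhost:8080 on the ssh machine
  # travel through the tunnel and come out towards intranet:80
  ssh -L 8080:intranet:80 user@gateway

  # remote forwarding: connections to port 2222 on the sshd machine
  # travel back through the tunnel to port 22 on the ssh machine
  ssh -R 2222:localhost:22 user@gateway

  # dynamic forwarding: a SOCKS proxy on localhost:1080 which tunnels
  # each connection through the gateway
  ssh -D 1080 user@gateway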

Monday, October 16, 2006

Microsoft did it again!


I usually try to avoid being a fanboy or an MS basher, but there are some moments when you can't stand it anymore! What triggered this post was Paul Thurrott's post on Vista's new license, however this was just the last drop. Some stuff that irritates me:

  • At home I dual boot between Ubuntu and Windows 2k3 SBS, which was a gift from MS. What kills me is the text on the box of Windows: Includes 5 Microsoft Windows Small Business Server Client Access Licenses for Devices and/or Users. This is a typical example of MS trying to get more money than it should. If I want to have a stronger server, I should pay for the hardware, not the software!
  • The updates last week went seamlessly on the few boxes I'm responsible for, however one of them kept insisting that I don't have a valid license. Now, we are a big company and only buy equipment from big vendors in big quantities, so surely that computer had a valid license! But because of MS I had to waste several hours finding the site admin and asking him to check the situation so that I could finally apply the patches. What should have been a 30-minute outage, tops, for that server (yes, server!) became several hours. Thank you, MS!
  • Read MS's reasoning for the restrictions: less than X% of the users need this. First of all: how do you know that? Were you told by a market research team / company? Just yesterday I read a report about technology, issued by a market research company claiming more than 30 years of experience in the field, and they couldn't even get their wording about economics right, let alone technology! Secondly, get it into your head: that X% covers your enthusiast base (you know, the technical evangelists) and the techies who do the support work and make the recommendations. The most valuable people you have! Thirdly, I usually don't talk politics, but I see a very good analogy: let's eliminate the constitution, because only Y% need it (and I'm sure that Y < X). Think about it!

My opinion: move to free software where you don't have to put up with this crap and you can concentrate on doing what you have to do!

Update: as the guys over at the Splitcast forums pointed out, I'm not the only one disliking MS's business practices.

Friday, October 13, 2006

Picking the brain of the IE7 team


Martin McKey over at the Network Security Blog is going to meet the IE7 team and is waiting for proposals regarding the questions he should ask them. Here is mine:

First let me give a little background, as I see it, so that if they choose to answer my question (no offense, but if it is as I suspect, they are limited in their freedom of speech regarding these areas by NDAs and such) they can do so in the correct context. One of the biggest security advantages of IE7 is the so-called containment wall which, if I understand correctly, uses the x86/x64 architecture and the Windows NT security system to separate the browser's different tasks into different processes, so that a lower privileged task can't corrupt the memory of a higher privileged task. I think this is a very robust solution which should reduce the attack surface considerably, and I can also appreciate the work that must have gone into slicing the application up into parts.

Now my question would be: is there any real technical reason why this won't be available under non-Vista versions of Windows? If possible, name at least one API this feature needs that is not available under non-Vista Windows, because all of the mentioned techniques are available on all versions of Windows from Win2K onwards (as, for example, the DropMyRights tool written by Michael Howard demonstrates). I'm very curious whether and what they'll respond, but I have several possible scenarios in mind: (a) I've misunderstood the feature and it's really more than or different from what I've described (moderately possible); (b) this is a marketing move which incorrectly puts revenue generation in front of security (this is my personal opinion, but I don't think they would admit to it); or (c) my question won't be asked at all.

Hack the Gibson - Episode #61


Read the reason for these posts. Read Steve Gibson's response.

And here is another episode which starts out great and is on average better than the previous episodes, but still some mistakes slip in.

About the new, as of yet unpatched Windows flaw: I couldn't find any information on it, so I'm not sure which flaw we are talking about, but just so that we are all on the same page: a remotely exploitable flaw is one which (under certain circumstances, like the vulnerable service running) can be exploited with zero, I repeat, zero user intervention. It's the kind of flaw which made Blaster and Sasser possible. The WMF flaw was not of this kind, since it required user intervention (although little). So, just so we are clear with our definitions: a user viewing a webpage doesn't equal remotely exploitable. Maybe the expression they were looking for was arbitrary code execution.

Now for the main feature: ActiveX scripting. First let me make it very clear: there is no such thing as ActiveX scripting, or at least not in the sense mentioned in the podcast. ActiveX and scripting are two different technologies, although they live in a common environment (the browser) and can interoperate. As proof, go to your Internet settings and you'll see that at the security level you can disable / enable ActiveX and scripting independently. It is true, however, that at the maximum security level they are both disabled.

A short note about the technology used to deliver our e-mails. The method is the following: you write your email, contact your mail server and hand it the mail. Your mail server then contacts the recipient's mail server and hands it the mail. Finally, the recipient connects to his mail server and gets the mail. What it comes down to is the fact that the first and the last steps can be secured (by using something like SSL), however there is no standard for encrypting the middle part. The usual analogy used for email is that it's like a postcard: you can make sure that nobody looks at your postcard until you hand it over to the postal service, and the recipient can make sure that nobody looks at it after s/he got it, however neither of you has any guarantee that it wasn't looked at during transport (although this is highly improbable, for the same reason that it's improbable somebody would look at your e-mail: the sheer volume of the traffic).

About data retention: you don't need to retain the full data stream to have at least some valuable information. You could, for example, retain only the headers of the packets, which would reduce the volume significantly and still give much useful information. Retaining only the headers and compressing them can reduce the volume at least 100 times (second-hand experience).

Finally, a couple of words about JavaScript & company. This was one of the subjects of my first (unanswered) letter, and as you can see from my blog I know a little bit about these technologies (although not about design, as I have to admit :)). The major technologies that augment plain HTML are: JavaScript / VBScript / ActiveScript / LiveScript, Flash, ActiveX and Java. Now, it is important to understand that these are independent technologies, developed by independent companies (although they can interoperate to a certain degree, they shouldn't be confused).

The first in the list (also historically) is JavaScript. It was developed for Netscape 2.0, I think, and was called LiveScript, but they finished it around the time Java was coming to the market, so they rebranded it JavaScript to catch some of the marketing wind, so to speak, which was blowing from Sun (I want to stress again that JavaScript and Java have nothing in common aside from the name; for example, even though they both are object-oriented languages, they use different inheritance models: class based versus prototype based). Then came Microsoft with IE and implemented JavaScript in it, but because they were always big supporters of Basic, they also implemented VBScript. Now, VBScript and JavaScript are for all intents and purposes equal (what you can do in one, you can do in the other, and they can also interoperate easily), with the exception that VBScript only runs in IE (this isn't as much of a curiosity as you might think, since AFAIK the next version of Firefox will come with Python embedded, so on Firefox you'll be able to program in JavaScript and Python). All these scripting languages are commonly referred to by Microsoft as ActiveScript (for example in the security part of the Internet options), not to be confused with ActionScript. ActionScript is a language very similar to JavaScript (in fact they are both implementations of the same ECMA standard), the difference being that it runs inside Flash files (so if you have Flash installed and you view a page with Flash in it, the given movie can contain ActionScript).

Both of these technologies, as well as Java, offer a very strong sandboxing solution, so it is very rare that something malicious can be done with them if the security settings are appropriate (there is malware out there which, upon execution, lowers the security level of IE, adds certain sites to the list of trusted sites, or adds new certificates to the trusted CA database). An ActiveX control, on the other hand, gives full control of the system (or at least a level of control equal to the account under which IE is running). This is why it's important to (a) not run as administrator and (b) only install ActiveX controls about which you are 100% sure. Ideally you should only install the ones required by Windows Update.

Hope this helps clear up the confusion a little.

Thursday, October 12, 2006

I have a career, not a job - a very true post


The Network Security Blog / Network Security podcast is one of my sources of information. Today I've found this very true post there.

How to publish good looking code on Blogger?


This article is considered obsolete. Please read the followup post.

From time to time I'd like to publish a post in which I can show code snippets. However, the standard <code> or <pre> tags look way too boring; something with color stands out much more. I was thinking: if I had my own server and hosted my blog there, I could add automatic syntax highlighting in no time using GeSHi (Generic Syntax Highlighter). Then it hit me: why not use the demo hosted by them and copy the output over as HTML? It's not as elegant or simple as having a server-side script take care of it, but it's better than the standard look. So here are the steps:

  1. Head over to the GeSHi demo page and plug your source in. Play around with the settings until you get a satisfying result.
  2. Save the resulting page (it would be easier to use view source, but since this is a dynamically generated page using parameters passed with the POST method - rather than GET - that doesn't work).
  3. Open it in a text editor (like gedit or notepad), find the style section where it says <style type="text/css">/* GeSHi (c) Nigel McNie 2004 (http://qbnz.com/highlighter) */ and copy it over to the blog post.
  4. Find the start of the code (you can do this easily by searching for style="border: 1px dotted) and copy everything from there until the end of the code, which you can recognize by the sequence </div></li></ol>.
  5. Add a final </div> after the part you just copied.
  6. Because I use a Blogger template which styles list items in a special way and also restricts the width of the area where the post is displayed, I have to add the following lines to the style sheet:
    div.code { overflow: auto; width: 100%; }
    div.code li {
      list-style: decimal outside;
      padding-left: 0px;
      margin-bottom: 0px;
      background: none;
    }

    and also add the code class to the starting div (to do this, go to the start of the part you copied and where it says class="[something]", add code so that it looks like this: class="[something] code").
  7. Enjoy :)

There are a few problems with this approach: (a) the biggest is that it's a multi-step, fairly complicated procedure; (b) if you have multiple posts with source code on your page, you will have duplicate style-sheet information in your page; (c) it is not recommended to include style information inline or in the main body (the style tags you copy will end up in the body instead of the head, where they should be put); (d) while copying the code you might see some weird characters appear; (e) GeSHi is not perfect (for example, in the code below it gets confused by the embedded JavaScript), but it's the best I've found so far. This is a beta solution, and if any of you have ideas on how to improve it, please take the time to write a comment. As a bonus, you'll find below the source code for my JavaScript random password generator (not that you couldn't do a view source until now, but this is more accessible).

Pimping my blog #2


After observing that most of my visitors (45% currently) use Internet Explorer, I've made a little modification so that they too can enjoy the <q> tag. A more detailed discussion and other solutions can be found at the A List Apart site. I'll only present my version in short.

My version consists of two parts: a style using the underscore hack to make the contents of the tag italic for the security conscious users (those who don't have JavaScript enabled), and a script written in an unobtrusive style which (a) adds a " before and after each tag and (b) disables the italic style. You can find the sources below.

GeSHi © 2004, Nigel McNie
  <style type="text/css"><!--
    Q { _font-style: italic; }
  --></style>

GeSHi © 2004, Nigel McNie
  var quoteResolver = {
    addEvent : function (obj, evType, fn) {
      //taken from: http://www.scottandrew.com/weblog/articles/cbs-events
      if (obj.addEventListener){
        obj.addEventListener(evType, fn, false);
        return true;
      } else if (obj.attachEvent){
        var r = obj.attachEvent("on"+evType, fn);
        return r;
      } else {
        return false;
      }
    },

    doWork: function () {
      //add a " before and after each q
      var qs = document.getElementsByTagName('q');
      for (var i = 0; i < qs.length; i++) {
        var before = document.createTextNode('"');
        var after = document.createTextNode('"');
        qs[i].parentNode.insertBefore(before, qs[i]);
        qs[i].parentNode.insertBefore(after, qs[i].nextSibling);
      }

      //deactivate the font-style: italic rule
      for (var i = 0; i < document.styleSheets.length; i++) {
        //the standard would be cssRules, but IE uses rules
        //and we are targeting IE only
        var ruleList = document.styleSheets[i].rules;
        for (var j = 0; j < ruleList.length; j++)
          if ('Q' == ruleList[j].selectorText && 'italic' == ruleList[j].style.fontStyle) {
            //this is the style we wish to disable
            ruleList[j].style.fontStyle = '';
            break;
          }
      }
    },

    init : function () {
      //try to determine if this is an IE browser
      var userAgent = /MSIE/; var nonUserAgent = /Opera/; var os = /Windows/;
      if ( userAgent.exec(navigator.userAgent) && !nonUserAgent.exec(navigator.userAgent) && os.exec(navigator.userAgent) ) {
        //register a function to do the work after we finish loading
        this.addEvent(window, 'load', this.doWork);
      }
    }
  }.init();

Wednesday, October 11, 2006

Delaying the loading of elements in a web page


After listening to the latest Practical Webdesign Magazine podcast (I listen to every one of them, as well as the Boagworld podcast - both are great), I felt the urge to write this post :). When you include many third-party things in your webpage, loading can slow down considerably if the client doesn't have a good connection or the given service has outages. The solution I came up with is to delay the inclusion of the content until the webpage (or at least the part loading from your server) has finished loading. To accomplish this I've created the little JavaScript below. What it does is schedule a given function to be executed a given number of milliseconds after the page has loaded. If you set this low (the default is 100 milliseconds), users won't notice an interruption in the loading (if the other services you rely on deliver content fast enough), or will at least see the parts of your page loaded from your server while the other parts load (if they have a slow connection to the third-party source). To use it, make a call of the following form: delayedLoader.scheduleAfterLoad(function() { alert("Hello World!"); });.

Another feature of this script is that it simplifies the cases when you have to include HTML from third-party sites (an IFRAME, for example). To use this feature, create a placeholder div or span which contains the temporary content (the content which should be displayed while the third-party content is loading), give it a class of "to_replace", and immediately after it create a comment containing the text with which the temporary content should be replaced. Looking at the example below probably makes this much clearer than my babbling.

GeSHi © 2004, Nigel McNie
  <div class="to_replace">This needs to be replaced!</div>
  <!-- This is the <em>replacement</em> -->

The source code:

GeSHi © 2004, Nigel McNie
  var delayedLoader = {
    addEvent : function (obj, evType, fn) {
      //taken from: http://www.scottandrew.com/weblog/articles/cbs-events
      if (obj.addEventListener){
        obj.addEventListener(evType, fn, false);
        return true;
      } else if (obj.attachEvent){
        var r = obj.attachEvent("on"+evType, fn);
        return r;
      } else {
        return false;
      }
    },

    //schedules a given function to be invoked a given number of milliseconds
    //after the loading of the document has finished. the default number of
    //milliseconds is 100
    scheduleAfterLoad : function (fn, msecs) {
      if (!msecs || msecs <= 0) msecs = 100; //apply the default also when msecs is omitted
      this.addEvent(window, 'load', function() { setTimeout(fn, msecs); });
    },

    replaceElements : function () {
      var replaceElementArray = function (to_process) {
        for (var i = 0; i < to_process.length; i++) {
          if (to_process[i].className.indexOf('to_replace') > -1) {
            var element = to_process[i];
            //look for the comment node (nodeType 8) following the placeholder
            while (null != element && 8 != element.nodeType) element = element.nextSibling;
            if (null != element && 8 == element.nodeType)
              to_process[i].innerHTML = element.nodeValue;
          }
        }
      };

      //process divs and spans
      var to_process = document.getElementsByTagName('div');
      replaceElementArray(to_process);
      to_process = document.getElementsByTagName('span');
      replaceElementArray(to_process);
    },

    init : function () {
      this.scheduleAfterLoad(this.replaceElements);
      return this;
    }
  }.init();

There is a potential problem with all this: the user might not have JavaScript enabled. This could be mitigated by repeating the content in a noscript tag (my original plan was to use the content of the noscript tag to replace the content of the parent), however I'm pretty sure (although not 100%) that the user agent (the browser) would still load the stuff even though it is invisible, and slow down the loading of the site.

The kind of articles I don't want to see


After reading this article I was in pain. I don't want to offend anybody, but this is a perfect example of the things this blog was created against. The article contains a lot of hype words but is vague on technical details, and some of the details are wrong. I don't want to accuse anybody, but it seems to me that this article is scaremongering more than anything else.

The first thing is that everything covered falls into the category of input validation. While it is good to present different aspects and effects of this problem, it is at least misleading to say that these are the Top 10 vulnerability categories. To see a real and comprehensive list of the top 10 vulnerability categories in web applications, visit the OWASP site.

Secondly, many of the technologies and problems presented are not new (in the sense that they predate the whole Web 2.0 craze by several years) and are not primarily used in web applications (like WSDL, XPath, SOAP).

Thirdly, the article tends to invent terminology, probably to get as much attention as possible. Let's take the first element in the list, for example: Cross-site scripting in AJAX. This is a needless repetition and also somewhat confusing (you are not doing the cross-site scripting IN AJAX, you are doing it in JavaScript or VBScript). Also, the definition is a bit foggy and slightly incorrect: AJAX gets executed on the client-side by allowing an incorrectly written script to be exploited by an attacker. This is misleading in the sense that one tends to think about client-side scripting when reading the word script in this context, however most of the time it is the server side which includes incorrectly escaped user data in the final page (there are a few exceptions which use client-side scripting to dynamically generate parts of the page based on user supplied parameters, but they are few and far between).

Last, but not least, some of the things are flat out wrong: point three of the article, Malicious AJAX code execution, basically says that using an XMLHttpRequest object one could send requests to any site. This is not true: browsers apply a same domain policy to XMLHttpRequest (meaning that the script can send requests only to the domain from which it was originally loaded). You can send requests to other sites by using IFRAMEs, but IFRAME and XMLHttpRequest are not the same thing (although they can be used in a similar manner). A quick illustration follows.
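
(The URLs below are placeholders; assume the page - and thus the script - was loaded from example.com.)

  // older IE needs the ActiveXObject branch; other browsers have XMLHttpRequest built in
  var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                  : new ActiveXObject('Microsoft.XMLHTTP');

  xhr.open('GET', 'http://example.com/data.txt', true); // same domain: allowed
  xhr.send(null);

  // xhr.open('GET', 'http://other-site.com/', true);   // different domain: the browser refuses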

My advice to the management type of people who read these articles: don't panic or start running around in circles because of such articles. There is a good chance that many of the things mentioned in them don't apply to your systems. Then again, there are many things NOT mentioned here which may apply, so please don't make a checklist from it and make your people concentrate only on these issues. Read more useful material, like the OWASP list (have I said already how great they are? :)).

My advice for programmers: go read the OWASP list and if a manager comes your way about this article, point her/him to the OWASP list and this blog post.

The state of affairs in Romanian education


One of the regular podcasts I listen to is Casting From the Server Room. It's made by guys who work IT / sysadmin jobs in education (colleges, high schools, etc.). I usually listen for the technical content, but I can also catch a glimpse of the state of education in the US. Usually they complain, but I think they should be happy they don't live here.

The quality of education (and I'm referring here mostly to the university level, because that is what I have recent first-hand knowledge of, but as I understand it high school isn't better) is dropping from year to year, as universities tend to focus on quantity instead of quality. In the past (read: in the communist era) it was a big deal if you got a university diploma and became an engineer or doctor or something like that. In a small village (like the birthplace of my father) usually just one or two people made it to university, and they were looked up to. Admission criteria were very strict, and usually you had to pass several exams to get in. Now most of the time you don't have to pass any exams: you get in based on your high school grades, and even if you don't get a place paid for by the state (the number of which is limited), you can be sure to get a place you pay for yourself (the number of which increases at an incredible rate every year).

Now for the quality of the teachers: from my experience, around one third of them are bright, very knowledgeable people who don't have the slightest idea how to explain things such that we mere mortals can understand them; another third have mediocre capabilities and will bore you to death; and the last third is the worst one: outright dumb, they don't have the slightest clue what they are talking about, or they are effectively quoting word for word from a book while convinced that they know what they are talking about. What remains is a 1% rounding error, which is the ratio of great, dedicated and talented teachers. And the laboratories are no better: you use outdated equipment with apathetic technicians while meditating on the question: why am I wasting my life here?

As I said earlier, our education system is moving rapidly to a quantity based system, and the worst offenders are the private universities. Theoretically you go there and pay big money to get a first-rate education. Instead, they are known for providing the easiest means of getting a degree as long as you have the money. The universities and (most of) the professors try each day to find a new way to get as much money as possible out of the students. The most commonly known method is that you have to buy the professor's book if you wish to pass the exam, or to pass it with a good grade. And these aren't even marginally acceptable books, but usually parts taken from various other books (probably without regard to copyright and other such details), put together into a blob with no editorial process, resulting in unusable text (if you've received spam messages which include a lot of other text to try to fool the filters, these books are like that).

Of course this can be traced back to supply and demand: the universities supply what the public demands. But why does the public demand it? Partly because our parents grew up in an era when having a degree meant something, because only a select few had one. To clarify: I'm not longing for communism here; there was a movement started back then, when we wanted to show the world that we were the country with the smartest people, and part of high school became mandatory. But now it's different: people actually demand it. Another problem is that people / parents think it gives them a better chance of getting a job. This has become a vicious cycle for most companies seeking applicants: at the beginning they specified a degree as a requirement because it was a very good first filter; then everybody started getting degrees, and now companies specify a university diploma as a requirement for even less demanding jobs, which in turn reinforces people's belief that you can't get a decent job without one.

And there is also the financial problem (from the point of view of the student): from a certain age one feels the need to be independent. There is no other way for students to get a usable amount of money (and I'm not talking about money that buys you a Porsche, but about money which buys you a decent computer) in an acceptable timeframe (a couple of months, let's say) than being employed full time. There are no summer jobs or part-time jobs here. Many students go to the US during the summer break to gather some money, but many more stay and become employed full time. This again creates an avalanche effect, as students have less and less time to attend school, and the university tries to help them by relaxing the rules, which again convinces more students that this can be done. Currently only the ones who really, really don't want to finish university leave it. The others pass through a series of re-examinations, re-re-examinations and so on until they pass. (There are some exams you can take 7 times without repeating the year!)

We do have a few bright people, of course, and they usually place very well in international student competitions, but if you were to ask these people what has the school given you to help, you would most probably receive the response: nothing, I did this in my own time because I like what I do and I have a passion for it. I know many people, many very bright people, who have won different kinds of international competitions and know more than 80% of the teachers, and still have only mediocre (at best) marks and possibly several failed exams.

What will happen? I don't know, and I'm no futurologist (although I like watching Futurama ;)), but I suppose it's going to get worse. That's why, although I like teaching and explaining things to people, I have no plans to teach at universities or high schools: I couldn't endure looking at the horde of apathetic students. Most probably I'll go into the private sector, teaching people who have consciously decided that they want to learn about the given subject, are disposed to make at least a minimal effort, and aren't just looking for a way to pass.

I'm going now to attend a boring lecture and to buy some books...

Monday, October 09, 2006

Why do I love perl?


Before you accuse me of being a fanboy, I must say that I know every language has its bad sides, but also some incontestable merits. I want to tell two Perl-related stories here. They both are short how-tos, and the moral of both is that you can find ready-made Perl libraries for almost anything on CPAN.

The first story is as follows: I maintain an internal website, and recently the need came up for the users to write small scripts which then needed to be executed at certain time intervals. My first reaction was: no way am I allowing such a thing, although I trust my users :). Then I started looking for scripting languages which allow restrictions to be placed on the scripts. A colleague suggested using a trimmed-down version of Python, with all but the few needed libraries deleted, but curiously the official Python distribution recreates the files if they are deleted (probably it considers itself damaged and tries to repair itself). Then I thought I'd compile a custom version of the Lua interpreter, but I felt lazy. Luckily I found the Safe Perl module, and now I can execute scripts safely (a minimal sketch follows). The moral: Perl can do almost anything, has libraries for almost everything, and it's almost certain that somebody did it before you, so play around with Google before writing code.
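
(A minimal sketch of the Safe approach; the sample script is a placeholder, and in practice you would tune the compartment's opcode mask with permit / permit_only to what your users' scripts actually need.)

  use Safe;

  my $compartment = Safe->new;   # the default opcode mask already forbids
                                 # file, network and system access

  my $untrusted = 'my $x = 2; $x * 21;';        # a user-supplied script
  my $result    = $compartment->reval($untrusted);
  die "the script failed: $@" if $@;
  print "result: $result\n";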

The second story: I needed to pull some files using SFTP. There is a nice module (as for almost everything!) on CPAN, but the problem was that it was not available through the ActiveState PPM repositories (yes, I needed to use it on a Windows machine). Grumbling, I ended up scripting WinSCP (which is a great program, by the way), but calling an external program isn't a good idea because (a) if there are errors, you'll have very limited knowledge of what exactly the problem was (while normally you could just hook the DIE and WARN handlers and get very detailed information about the problem); (b) it is a security risk, because of the braindead Windows behavior of putting the current directory in the path, so you might end up executing the wrong program; and (c) there are problems with passing parameters to command line programs, because you need to remember to quote them and so on. Then I found a blog post which explained that the cryptographic packages are not included with Perl because of Canadian export law (ActiveState being based in Canada - see, I give all kinds of useless trivia here), but that you can use alternative repositories (which I already knew from the time I installed Log4perl). So if you need these packages and are stuck on Windows, just start your Perl package manager and do rep add Winnapeg http://theoryx5.uwinnipeg.ca/ppms/. Now you can install packages like Net-SSH or Net-SFTP without a problem. Moral: governments make stupid technology laws, and there are alternative package sources.

Happy perl coding!

Best. Quote. Ever.


(or at least until I find a better one :) )

Discovered with the help of digg.

All parts should go together without forcing. You must remember that the parts you are reassembling were disassembled by you. Therefore, if you can't get them together again, there must be a reason. By all means, do not use a hammer. (1925 IBM Maintenance Manual)

A little guess game


I was browsing through the Top 100 companies to work for in 2006, published by the local version of the international Capital newspaper, and it amazes me how unrepresentative it is. So let's play a little guessing game: what's wrong with the following pictures? (Aside from the fact that they were taken with a cheap webcam and my hand was shaking.)

These pictures were taken at companies present in this ranking, many of which operate in the IT world, and yet what do we see? CRT monitors, non-ergonomic or partially ergonomic chairs, and no dual-monitor setups. My question would be: how can a company that doesn't even care enough to give its employees TFT monitors and ergonomic chairs be a top place to work at? I won't even mention the text which describes many of the companies, which is clearly written by the respective companies' PR departments and lists as key benefits of working there many things which in my opinion are elementary. This text won't fool the more seasoned people who have already worked at a company or two, but it may easily mislead the young talents who are not fully aware of their value. So my advice goes out to them: don't be fooled by the PR text, and don't give yourself up easily or for uncertain promises like come work for us cheaply because we have excellent possibilities for promotion!

Sunday, October 08, 2006

Economics, protecting the environment and Web 2.0


What do these things have in common? During the weekend I was at a conference on economics (weird, isn't it?), and one of the presenters talked about how we must look at the economics if we want to achieve a given goal, for example protecting the environment. Currently, for instance, computer manufacturers have no incentive to create a long-lived product, because they sell the products and their goal is to sell more. However, we usually don't buy computers because we need computers, but because we need some services. What he suggested was that if we bought the service instead of the object (so that the computer would be leased to us instead of sold), the manufacturers would have an inherent interest in ensuring the longevity of their products (both in the sense of quality and in the sense of remaining able to fulfill the given service), which in turn would reduce the environmental damage.

All this fits in nicely, I think, with the rush of AJAX-y / Web 2.0-y web applications that we are seeing, because it liberates us from depending on a given computer / operating system, and you usually don't need a heavyweight machine to use them. This is a step in the "software as a service" direction, so it may well be that if you are using Google Reader, you are helping the environment :).

On a more technical note: there are many advantages and disadvantages to these kinds of "applications", many of which were already discussed years ago during the thin client versus fat client debate. It may well be that this is only a temporary phenomenon made possible by the increase in available bandwidth, and that in the future the balance may shift again if the ratio of available bandwidth to average application size changes in the opposite direction (which I think is the main factor in choosing one solution over the other).

Pimping my blog


As you can tell, I've been tweaking my blog a little this past week. I'm still new to the production side of blogging (as opposed to the consumption side), so surely I'll make lots of changes in the future. The changes I've made until now:

  • Added a picture which is representative of me
  • Converted my profile to hCard format. (If you're a Firefox user, you might wish to check out the Tails extension to find microformatted content on web pages.)
  • I've changed my feed to go through FeedBurner, and I already have a subscriber. Thank you, and I hope I can provide useful information.

Hack the Gibson - Episode #60


Read the reason for these posts. Read Steve Gibson's response.

Here I am again, with a little delay, because I was away at a conference on economics over the weekend; I'll cover that in a later post. This netcast started out very nicely, and I was hoping that I wouldn't have to write this post (I'm in no way worried about running out of things to rant about :)). But there are some errors and bad answers in this show too, as you can see from the length of this post.

The answer to the second question was spot on (the first question wasn't about security). It also raised a very good question: why doesn't Microsoft unregister the affected DLLs through Windows Update as a first measure? In the case of the WMF flaw, I suppose deregistering the given DLL might have affected some printing services, but in the case of the VML flaw it seems it would have been a good preliminary solution.

The first question I have something to comment on is the one about USB devices and virtual machines. As most of the time, Leo hits the nail on the head: in VMware (I don't know about Virtual PC or Parallels), the USB device is associated with the virtual machine only if it's in the foreground while you're plugging the device in (you can probably change this in the settings).

For the gentleman asking about reducing Windows installation sizes: normally I would recommend CCleaner. It's free and does a good job of removing unneeded temporary files. However, given that he's already using a reduced version of Windows, I don't believe any method / tool would provide a considerable reduction in used disk space.

The answer to the security question is sort of OK; however, one has to wonder what kind of network engineer doesn't have security knowledge. But let me use this room for a little rant about novice users and computers: you can't buy security. If you think you can plug a little box between you and your Internet connection and it will keep you secure from every possible threat, you're deluded or have just watched too much marketing material. The responsible thing for the computer shop which puts the computer together (and probably installs some kind of Windows on it) to do is: (a) set up the user as a user, not administrator; (b) install free security products and set them to auto-update without asking the user; (c) set Windows to auto-update without asking the user; (d) load the computer up with open-source software that does most of the things a typical home user might need (like playing music, watching films, reading e-mail, etc.). Then put together some kind of learning material about security and say to the user: you can use your computer for most of the things you want; if you wish to get the Administrator password, you must read this material to understand the basic things about your computer and then take a free online test. If s/he passes, s/he gets the Administrator password. This would make it possible for grandma-type users to read their e-mails effortlessly and fairly safely, and still provide a way for the more determined users to gain full control over their machine (after they know, at least to some extent, what they are taking control of). Now, I know that no company in the world would implement such a policy, but wouldn't it be great?

About TrueCrypt and possible performance degradation: I'm convinced there would be only very little degradation (disclaimer: I didn't do any tests, this is just my personal opinion), and there would be a big gain: she can be sure that everything (and I mean everything) she does in the virtual machine is encrypted. When using partial encryption (like putting your documents on a separate partition and encrypting only that, or using the EFS facility of NTFS), you always risk leaving artifacts (for example, temporary versions of documents are often written to the temporary directory during editing, from where they can be recovered, or contents of memory can end up in the swap file). Thus the only 100% secure solution (assuming you've chosen a long and hard to guess password) is to put the VMware image on an encrypted volume. Period. A sidenote: I've found some sites which claimed to have achieved throughputs in the range of 100 MB/s with AES, one of the strongest algorithms included in TrueCrypt, using consumer hardware. This is clearly more than enough for disk transfers (the tests were done using memory as both source and destination, to measure maximum throughput).

The answer to the question about overheating damaging your CPU is clearly wrong, and I have factual evidence: you can watch the videos of Tom's Hardware removing the heatsink from CPUs while they are running (go to the last page of the article to download the video). They are clearly damaged! Now, I know this is extreme, and just stopping the fan would probably do less damage, but it is possible. Then again, I've never heard of malware doing this, but that doesn't mean it doesn't exist.

Now on to the next question: please don't run Samba on the Internet. Please! And don't recommend it to other people either. The protocol contains no encryption and was not meant for use on untrusted networks. Please use SFTP, Apache with HTTPS and password protection, or other means of sharing files. While sharing files with Samba is a little more convenient than these other methods, it is by no means secure!

Finally the discussion about software patents was good.

Thursday, October 05, 2006

How to exclude certain traffic from Google Analytics?


I use Google Analytics to get an idea of the traffic on my blog. However, this being the low-traffic site it is, my own visits skew the results (this became clear to me when the overview showed that almost 12% of my traffic came from beta.blogger.com, which is the admin interface for the blog). So I headed over to IPChicken, grabbed my IP address (having a static IP helps a lot) and added a filter. You can add filters by going into the Analytics settings and clicking edit for the site you want to add the filter for. Then select "Exclude all traffic from an IP address" and add your IP address. Be sure to replace each . with \., since the string you enter is treated as a regular expression (for example, 12.34.56.78 becomes 12\.34\.56\.78). It seems that filters aren't applied retroactively (or maybe I just have to wait 24 hours for my stats to update), but hopefully this will lead to more accurate stats.

Moving to Ubuntu - The Regex Coach


After reaching 21 posts and catching up with the Security Now! episodes, I thought it was time to start a new series. I am what I consider a pro Windows user, and lately I've started moving to Ubuntu. I've toyed with Linux distros before, but this is the first one I feel I can actually learn. This series is for other people like me, who come from a Windows background and want to play with Linux.

One of the programs I used over on Windows was The Regex Coach, a very powerful free (as in beer) program written in LispWorks for testing regular expressions. There are installation instructions for Linux on the site, however there is one more little thing you must do before you can run it: from a terminal, do sudo apt-get install lesstif2 if it complains that it can't find libXm.so.2. Also, where the instructions say you should use xrdb -merge, the complete command line is xrdb -merge regex-coach-resources, where the regex-coach-resources file can be found in the Regex Coach directory. Installing lesstif2 probably also fixes startup complaints from other LispWorks programs under Ubuntu (or other Debian-based distributions). A final quirk is that you can't (or at least I haven't discovered how to) copy / paste using the keyboard, but if you right-click on the selected text, you get a pop-up menu which you can use for these operations.

Two more thoughts: when you have a problem with Ubuntu, you can most probably solve it by googling for it together with the keyword ubuntu, since the Ubuntu community is very large. If it so happens that you don't find your answer, you should try googling for the problem with the keyword debian, because Ubuntu is based on Debian, so what works in one usually works in the other. My second closing thought: .so files are shared objects, which correspond to DLLs on Windows. If you don't know where to get a certain .so from, go to http://packages.ubuntulinux.org/, scroll down to the package contents search and enter the file you're looking for. You will get back the name of the package, which you can then install with apt-get or Synaptic.

Update: I was informed by a good friend of mine that you can copy text without the pop-up menu: select the text you want to copy, go to the place where you want to paste it and middle-click (or click both mouse buttons simultaneously). This should work in other graphical applications written for X too.

Wednesday, October 04, 2006

Hack the Gibson - Episode #57

0 comments

Read the reason for these posts. Read Steve Gibson's response.

This is the 21st post. Woohoo! It's not that impressive, but for me it is, considering that I started my blog just a little over a week ago. So this will be a cheerful, joyful and happy post :). I've selected episode 57 for this because it is one of the best episodes I've heard so far. It was very interesting to hear the comparison between VMware and Virtual PC and their history.

So: a great, interesting show; if all of them were like this, I would be out of work :). To quote the guys from PaulDotCom Security: Steve Gibson is like Apple - he doesn't say anything and silently corrects himself (of course this is not quite true, see for example this e-mail, but it's still a nice analogy, and probably no one minds being compared to Apple).

Hack the Gibson - Episode #59

0 comments

Read the reason for these posts. Read Steve Gibson's response.

Finally, I'm getting in sync with the released episodes. This one is relatively error-free; I have just a few comments to make:

A buffer overrun doesn't always mean that the overflowed buffer is on the stack; it can be on the heap too. Hardware DEP prevents both kinds from being used to execute injected code.

Leo probably meant to say "turn it on for essential Windows programs and services only" instead of "turn it off" ...

This episode is the first in which I hear Steve correcting himself, so I think it is worth quoting: Remember that I said last week that one of the major failings of Server was that it lacked both sound and USB support. Well, that was wrong.

They support every flavor of Linux you can imagine – FreeBSD, OS/2 Warp, Sun's Solaris - OS/2 Warp isn't a flavor of Linux by a long shot, but I give him the benefit of the doubt, because he probably meant every kind of OS.

The only real problem in this podcast (netcast, sorry) is the discussion about fixed-size versus expandable drives. The state of the matter is the following: when you choose disks for which the space is not preallocated, only the parts of the disk which were written to are saved in the file (if the guest OS tries to read from any other area, the virtual machine can just return zeros). There are two problems with this (lumped together by Steve under the name fragmentation): these disk areas are stored non-contiguously in the file, so a lookup step is necessary at every access, and as the file grows, the file itself can become fragmented on the disk. A third problem is that these files can never shrink. The explanation is that virtual machines don't know about file systems, only about disk sectors. Once a sector has been written to, it is marked as dirty and stored permanently in the file, even if the file that occupied that space has since been deleted (see the sketch below). Given all these things, I don't think that Parallels' product, which probably just walks the file system and marks the empty disk sectors as free, is worth its price. It would be a nice extra if it were included in the main program, but not as a stand-alone product.
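
To make the mechanism concrete, here is a minimal Python sketch of how such a grow-on-demand disk behaves (the class and its names are purely illustrative, not VMware's actual on-disk format):

    # a minimal sketch of a grow-on-demand virtual disk: only sectors that
    # were ever written are stored, reads of untouched sectors return zeros,
    # and nothing is reclaimed when the guest "deletes" a file
    SECTOR_SIZE = 512

    class SparseDisk:
        def __init__(self):
            self.sectors = {}  # sector number -> 512 bytes of data

        def write(self, n, data):
            self.sectors[n] = data  # once dirty, the sector is stored forever

        def read(self, n):
            # untouched sectors take up no space; they read back as zeros
            return self.sectors.get(n, b"\x00" * SECTOR_SIZE)

    disk = SparseDisk()
    disk.write(7, b"\xff" * SECTOR_SIZE)   # the guest writes a file...
    disk.write(7, b"\x00" * SECTOR_SIZE)   # ...then the file is deleted / overwritten
    print(len(disk.sectors))               # still 1: the backing file never shrinks

A shrinking tool works at the other layer: it asks the file system which sectors are actually free and tells the disk container to drop them.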

Things you (probably) didn't know about your webserver

0 comments

Today's webservers are incredibly complex beasts. I don't know how many of the people operating Apache have read the full specifications. I sure didn't. So it should come as no surprise that there are hidden features in our servers which can weaken our defenses. There are two that I want to talk about today, both turned on by default:

  • The first (and the more important one, although in security every item is important) was only recently publicized and involves sending an invalid header to Apache, which responds with an error page. I got this one from the SecuriTeam blog. If the default error pages have not been changed, the response will include the invalid header, so a cross-site scripting attack is possible. To test if your site is vulnerable, you can use curl like this: curl http://localhost/asdf -H "Expect: <script>alert('Vulnerable');</script>" -v -i. If the output contains the alert, your server is vulnerable. To make matters worse, Flash or XMLHttpRequest can be used to create these types of requests (although not with Firefox, which disallows the transmission of this header). Now don't start filtering on Mozilla browsers, because user agents can be spoofed too. The two possible workarounds are: create custom error pages (harder if you host multiple sites) or enable mod_headers and use the following global rule: RequestHeader unset Expect early (tested with Apache 2.2.3 on WinXP). This might slow your webserver down a little, as described in the documentation, but at least you're not vulnerable until you can update Apache.
  • The second is a lesser problem, and involves the possibility of stealing cookies, if the site has an XSS vulnerability, even if the cookies are marked HttpOnly. It involves sending a TRACE request to the webserver. This request type is usually used for debugging, and it echoes everything back, including the cookie headers. Again, Flash or XMLHttpRequest can be used to craft these special queries. A more detailed description can be found here: http://www.cgisecurity.com/whitehat-mirror/WhitePaper_screen.pdf. To test if you're vulnerable, telnet to your webserver and enter the following commands (there is also a scripted version of this check after the list):
    TRACE / HTTP/1.1
    Host: localhost (replace it with your host)
    X-Header: test
    
    (press Enter twice - the empty line tells the server that the headers are finished)
    
    and you should see everything echoed back to you. As described here, you can use mod_rewrite to filter this attack by adding the following rules (newer Apache versions also offer the simpler TraceEnable off directive):
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^TRACE
    RewriteRule .* - [F]
    
    And it is also a good idea to make sure that your sites are not vulnerable to XSS ;-)
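
If you'd rather script the TRACE check than type it into telnet, here is a minimal Python sketch (the host and port are assumptions, adjust them to your server):

    # a minimal sketch of the TRACE check: send a TRACE request with a
    # marker header and print whatever the server echoes back
    import http.client

    conn = http.client.HTTPConnection("localhost", 80)   # adjust host / port
    conn.request("TRACE", "/", headers={"X-Header": "test"})
    response = conn.getresponse()
    print(response.status, response.reason)
    print(response.read().decode("latin-1", "replace"))
    # if the body contains "X-Header: test", TRACE is enabled on the server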

What's up with the pink?

0 comments

I know that it looks funny, but it will be this way throughout the month of October, as I'm going pink for October.

Tuesday, October 03, 2006

Companies, technology and security

2 comments

When I saw this piece in my Google Reader, I thought: that's interesting, so I headed over and checked it out, thinking that I'd get some information about the practices at big companies. Somewhat disappointingly, it was just a link to a tutorial which looks like it was written by someone who is just getting into security but has no solid grip on it yet. Postings of this quality on the tucows blog make me wonder about the quality of their code. The problems with the mentioned article are:

  • It attempts to explain MD5 and hash algorithms in general, but it does a poor job. For example, it doesn't explain why they lend themselves to brute force attacks (which, btw, can be explained very simply: because the same string always generates the same hash, you can simply generate candidate strings, hash each one of them and see which one gives the same hash - see the first sketch after this list). And while it does mention salted hashes, it fails to explain what they are or the fact that they can provide protection against brute force attacks (if the salts are not known - for example, supposing that only the database part of your site was compromised) and / or against pregenerated tables.
  • The code provided as an example is riddled with SQL injection holes, and while it's true that recent versions of PHP come with magic quotes turned on by default, the article writer should at least mention this assumption, so that people who copy the code know about it and can compensate if the assumption doesn't hold. PEAR DB and prepared (precompiled) queries are also not mentioned, even though they provide a much better defense against SQL injection attacks (the second sketch below shows the idea).
  • The original tucows posting mentions e-mailing users a link where they can change their password, however it fails to mention the security aspects of this: the link should use HTTPS, the token in it should not be easy to guess, and it should expire after a certain time (see the last sketch below). I know that for many people security is an afterthought, but at least think about it when you are talking about it!
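
To illustrate the first point, here is a minimal Python sketch (with made-up example passwords) of a dictionary attack on unsalted hashes, and of how a per-user salt breaks pregenerated tables:

    # a minimal sketch: the same string always hashes to the same value,
    # so an unsalted hash can be attacked with a (pregenerated) dictionary
    import hashlib
    import os

    stolen = hashlib.md5(b"secret").hexdigest()      # hash found in a compromised DB
    for guess in ["letmein", "password", "secret"]:  # attacker's wordlist
        if hashlib.md5(guess.encode()).hexdigest() == stolen:
            print("password recovered:", guess)

    # with a random per-user salt, a table precomputed for plain MD5 is useless:
    # the attacker has to redo the whole brute force run for every single salt
    salt = os.urandom(8)
    salted = hashlib.md5(salt + b"secret").hexdigest()  # store the salt next to the hash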
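
For the second point, parameterized queries keep user input out of the SQL text entirely. Here is a minimal sketch using Python's built-in sqlite3 module (the same placeholder idea applies to PEAR DB or any other database layer):

    # a minimal sketch of parameterized queries: the driver treats the
    # input strictly as data, so it can never rewrite the query itself
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pwhash TEXT)")
    conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "5ebe2294..."))

    user_input = "alice' OR '1'='1"  # a classic injection attempt
    # unsafe: "SELECT * FROM users WHERE name = '" + user_input + "'"
    # safe: the ? placeholder passes the whole input as a single literal value
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # [] - the injection attempt matches nothing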
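
And for the third point, a minimal sketch of issuing a hard-to-guess, expiring reset token (the URL and the one-hour lifetime are just example choices; in a real application the token would be persisted server-side next to the user record):

    # a minimal sketch of a hard-to-guess password reset token with an expiry
    import secrets
    import time

    TOKEN_LIFETIME = 3600  # seconds; after this, the link must be refused

    token = secrets.token_urlsafe(32)       # ~256 bits of randomness
    expires_at = time.time() + TOKEN_LIFETIME
    print("https://example.com/reset?token=" + token)  # send over HTTPS only

    # later, when the link is clicked:
    def token_is_valid(presented, stored, expires):
        return secrets.compare_digest(presented, stored) and time.time() < expires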

To pimp my blog a little: see other password troubles with popular sites and a javascript random password generator.