Back to Top

Tuesday, December 27, 2011

Relaxed JSON parsing


This blogpost was originally posted to the Transylvania JUG blog.

JSON is a good alternative when you need a lightweight format to specify structured data. But sometimes (for example when you want the user to specify JSON manually) you would like to relax the formalism required to specify "valid" JSON data. For example the following snippet is not valid as per the spec, although its intent is quite clear:

[{ foo: 'bar' }]

To make this standard compliant we would need to write it as:

[{ "foo": "bar" }]

We shouldn't rush to blame the standard, of course, since it needs to balance many contradictory requirements (unambiguity of the encoded data, ease of understanding, ease of writing parsers, etc.). If you decide that you want to strike the balance differently (make the definition of valid data more relaxed), you can do this easily with the Jackson parser:

JsonFactory factory = new JsonFactory();
// allow unquoted field names and single-quoted strings
factory.configure(JsonParser.Feature.ALLOW_UNQUOTED_FIELD_NAMES, true);
factory.configure(JsonParser.Feature.ALLOW_SINGLE_QUOTES, true);
JsonParser parser = factory.createJsonParser("[{ foo: 'bar' }]");
JsonNode root = new ObjectMapper().readTree(parser);

assertEquals("bar", root.get(0).get("foo").asText());

If your tool of choice is gson, it is slightly more complicated but still doable. See the linked source code for a complete example.
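With Gson, the equivalent is to enable lenient mode on the underlying stream reader. A minimal sketch, assuming Gson is on the classpath (the JsonReader / JsonParser stream API):

```java
import com.google.gson.JsonElement;
import com.google.gson.JsonParser;
import com.google.gson.stream.JsonReader;
import java.io.StringReader;

public class LenientGson {
    public static void main(String[] args) {
        JsonReader reader = new JsonReader(new StringReader("[{ foo: 'bar' }]"));
        reader.setLenient(true); // accept unquoted names, single quotes, etc.
        JsonElement root = new JsonParser().parse(reader);
        System.out.println(root.getAsJsonArray().get(0)
                .getAsJsonObject().get("foo").getAsString());
    }
}
```

The lenient flag lives on the reader rather than on the parser, which is why it takes one extra step compared to Jackson.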

JSON is a good tool for semi-structured data and using a relaxed parsing can make the programs you write easier to use.

Thursday, October 20, 2011

Updating the root certificates for Java


One usually thinks of SSL in the context of HTTPS, but there are also other protocols which rely on it to provide security. See this link for a short overview of SSL – it only mentions HTTPS, but the same applies to IMAPS, FTPS, etc. – SSL is independent of the wrapped protocol. You can run into issues with your Java programs when the party you are communicating with changes their certificate and the program rejects it as invalid. The exception is something like:
    PKIX path building failed: 
    unable to find valid certification path to requested target

One cause of the problem can be that the server uses an SSL provider which is based on a root certificate that wasn’t included with the particular version of Java you are using (this is especially true for really old versions like Java 1.5). The issue can be solved by updating to the latest version, but it might be that this isn't an option. Fortunately I found the following article: No more ‘unable to find valid certification path to requested target’

How to use it:

  • Compile the program with javac (javac InstallCert.java)
  • Run it with the target host/port. For example in our case it would be: java InstallCert <host>:993 (993 is the port for IMAPS)
  • Navigate through the menus and select which certificate to import
  • Now you have a file called jssecacerts. You need to copy this to $JAVA_HOME/jre/lib/security/cacerts (back up the existing file first!)
  • Now the root certificate is imported (you can confirm this by rerunning InstallCert)
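Another quick way to sanity-check the trust store your JVM is actually using is to load it and count the entries. A minimal sketch, JDK only ("changeit" is the default cacerts password, an assumption if yours was changed):

```java
import java.io.FileInputStream;
import java.security.KeyStore;

public class CountRoots {
    public static void main(String[] args) throws Exception {
        // the trust store the JVM reads by default
        String path = System.getProperty("java.home") + "/lib/security/cacerts";
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, "changeit".toCharArray()); // default password
        }
        System.out.println("Trusted roots: " + ks.size());
    }
}
```

If the count changes after you copy the jssecacerts file over, the import took effect.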


Vagrant and VirtualBox on Windows


Vagrant is a collection of scripts written in Ruby to manage VirtualBox images in a shared environment (like the QA boxes inside a company): install them, update them, etc. Unfortunately, installing it under Windows is not as straightforward as one would want, so here are some useful tips:

If you are on a 64 bit Windows install:

  • Check out this post if your JRuby is using the 32 bit JVM on a x64 Windows setup
  • You need to use version 4.0 of VirtualBox (rather than the latest). You can get it from here
  • You need to use an older version of Vagrant:
    jgem install jruby-openssl jruby-win32ole
    jgem install --version '=0.7.8' vagrant
  • If the vagrant box download stops around 4G, check that you have an NTFS filesystem (rather than FAT) and deactivate any "network" scanning capabilities of installed security software (I had problems with NOD32 in particular)


Thursday, October 13, 2011

Another friend blogging


Another friend started blogging: Cleonte's GitHub blog - bookmarks at the moment but looking forward to more involved posts :-).

Using Jython from Maven


This blogpost was originally posted to the Transylvania JUG blog.

On the surface it looks simple: just add the dependency and you can run the example code.

However what the jython artifact doesn’t get you are the standard python libraries like re. This means that as soon as you try to do something like the code below, it will error out:

PythonInterpreter interp = new PythonInterpreter();
try {
  interp.exec("import re");
} catch (PyException ex) {
  ex.printStackTrace();
}
The solution? Use the jython-standalone artifact, which includes the standard libraries. Another advantage is that it has the latest release (2.5.2), while jython lags two minor revisions behind (2.5.0) in Maven Central. A possible downside is the larger size of the jar.
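The dependency change is small; a sketch of the Maven coordinates (version as of the time of writing):

```xml
<dependency>
  <groupId>org.python</groupId>
  <artifactId>jython-standalone</artifactId>
  <version>2.5.2</version>
</dependency>
```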


Tuesday, October 11, 2011

How to post high-quality videos to Google Video


If you have Google Apps for Business, Google Video is still the preferred method of storing videos "in the cloud": it is easier to embed into Google Docs and probably more importantly - it can be truly private (you can only access it if you are logged in with the correct account - as opposed to YouTube where you can "hide" the video, but still anyone with the link can access it).

To post a high quality video to Google Video you only have to do:

  1. Upload the video
  2. Wait

I kid you not :-). While initially the uploaded video is of poor quality, apparently it gets processed in the background and later on (in my experience: it takes around 30 minutes to process 1 hour of uploaded video) a high quality version will be available.


Integrating Maven with Ivy


This post was originally published on the Transylvania JUG blog.

The problem: you have some resources in an Ivy repository (and only there) which you would like to use in a project based on Maven. Possible solutions:

  • Migrate the repository to Maven (Nexus for example) since Ivy can easily use Maven-style repositories (so your Ivy clients can continue to use Ivy with some slight configuration changes and Maven clients will also work – also the push-to-repo process needs to be changed)
  • Try JFrog Artifactory since it reportedly can serve the same resources to both Ivy and Maven (disclaimer: I haven’t tried to use it actually and I don’t know if the Open Source version includes this feature or not)
  • or read on…

My goal for the solution (as complex as it may be) was:

  • It should be as simple and self-explanatory as possible
  • It should respect the DRY principle (Don’t Repeat Yourself)
  • It shouldn’t have other dependencies than Maven itself

The solution looks like the following (for the full source check out the code-repo):

Have two Maven profiles: ivy-dependencies activates when the dependencies have already been downloaded and ivy-resolve when they have yet to be downloaded. This is based on checking the directory into which the dependencies are ultimately copied:
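Such file-based activation looks roughly like the following sketch (the directory name ivy-lib is illustrative, not necessarily the one used in the actual build):

```xml
<profile>
  <id>ivy-resolve</id>
  <activation>
    <file><missing>ivy-lib</missing></file>
  </activation>
</profile>
<profile>
  <id>ivy-dependencies</id>
  <activation>
    <file><exists>ivy-lib</exists></file>
  </activation>
</profile>
```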


Unfortunately there is a small repetition here, since Maven doesn’t seem to expand user-defined properties like ${} in the profile activation section. The profiles also serve another role: to avoid the consideration of the dependencies until they are actually resolved.

When the build is first run, it creates the target directory, writes the files needed for an Ivy build there (ivy.xml, ivysettings.xml and build.xml – for this example I’ve used some parts from the corresponding files of the Red5 repo), runs the build and tries to clean up after itself. It also creates a dependencies.txt file containing the chunk of text which needs to be added to the dependencies list. Finally, it bails out (fails), instructing the user to run the command again.

On the second (third, fourth, etc.) run the dependencies will already be present, so the resolution process won’t be run repeatedly. This approach was chosen instead of running the resolution at every build because – even though the resolution process is quite quick – it can take tens of seconds in some more complicated cases and I didn’t want to slow the build down.

Ant, Ivy, the Apache BSF framework, etc. are fetched from the Maven central repository, so they need not be preinstalled for the build to complete successfully.

A couple of words about choosing the ${} directory: if you choose it inside your Maven tree (as was done in the example), you will receive warnings from Maven that this might not be supported in the future. Also, be sure to add the directory to the ignore mechanism of your VCS (.gitignore, .hgignore, .cvsignore, svn:ignore, etc.), to avoid accidentally committing the libraries to VCS.

If you need to add a new (Ivy) dependency to the project, the steps are as follows:

  • Delete the current ${} directory
  • Update the part of your pom.xml which writes out the ivy.xml file to include the new dependency
  • Run a build and watch the new dependency being resolved
  • Update the dependencies section of the ivy-dependencies profile to include the new dependency (possibly copying from dependencies.txt)

One drawback of this method is that advanced functionalities of systems based on Maven will not work with these dependencies (for example dependency analysis / graphing plugins, automated downloading of sources / javadocs, etc.). A possible workaround (and a good idea in general) is to use this method for the minimal subset – just the jars which can’t be found in Maven central. All the rest (even if they are actually dependencies of the code fetched from Ivy) should be declared as normal dependencies, to be fetched from the Maven repository.

Finally, I would like to say that this endeavour once again showed me how flexible both Maven and Ivy/Ant can be, and it clarified many corner cases (like how to escape ]] inside CDATA – you split it in two). It can also be further tweaked (for example: adding a clean target to the ivy-resolve profile, so you can remove the directory with mvn clean -P ivy-resolve; or re-jar-ing all the downloaded jars into a single one, for example like this, thus avoiding the need to modify the pom file every time the list of Ivy dependencies changes – then again, signed JARs can’t be re-jarred, so it is not a universal solution either).

Saturday, October 08, 2011

Upgrading the Options (GlobeTrotter) GI515m


Recently I needed to install an Options (GlobeTrotter) GI515m 3G USB modem on a machine which previously used an older version of the modem (the iCON 225). This seems a pretty common scenario (an existing user getting an upgrade), however the process is less than straightforward:

  1. Get a second computer with the same operating system version which didn't have a 3G modem installed (for example if your target system is running Windows 7 64 bit you need a second system with Windows 7 64 bit - different SKUs like Home vs. Ultimate are OK, but the version and "bitness" must coincide - you could also use a virtual machine which supports USB forwarding, like VirtualBox or VMware, for the second machine)
  2. Plug the modem into the second machine. First it will be recognized as a USB stick / CD-ROM. Copy all the files from it to a separate folder (you should see files like "setup.exe").
  3. Let the setup complete. Now copy the installed drivers to the same place you've saved the setup files. Under Windows 7 you would find them in C:\Windows\System32\DriverStore\FileRepository\ in several folders starting with "gth" (like gthsubus_64.inf_amd64_neutral_4810563f34b37ef5), but here is the generic way to identify the folders:
    1. Start Device Manager
    2. Look for one of the devices associated with the modem (you will find actually several, like GlobeTrotter GI515M - Modem Interface, Network Interface and so on)
    3. Properties -> Driver -> Driver Details. Note the name of the driver for which the provider is Option (for example gtuhsser.sys)
    4. Now search your Windows folder for files ending in .inf which contain the name of driver from the previous step. This will point you to the right folders
  4. On the first computer (the one you actually want to install the modem on) remove all previous versions of the software using the Add-Remove Programs facility (you will see two or three entries, but they can be easily identified by the same orange icon). Restart the computer for good measure.
  5. Copy over the setup program and the drivers from the second computer. Plug in the modem to the first computer, install the application (using the setup file captured on the second computer). Go into the device manager and look for "Unknown device"s (you should see four of them). Use the drivers captured on the second computer to resolve these issues.
  6. Unplug and replug the modem - it now should work!

A couple more talking points:

  • don't use "driver manager" type software - they very rarely (read: never) seem to work
  • a symptom that you've hit this problem is when the management interface (dialer / "Internet Everywhere") for the modem starts but it gets stuck in the "Initializing" phase when you connect the modem and consumes CPU (from what I've seen with a debugger it seems to be looking for the installed device in a loop)
  • the modem seems to be prone to overheating if the signal strength is low (around two bars), in which case it shuts down after ~10 minutes (I assume this is some kind of thermal protection). You can check if this is the case by putting your hand on the bottom side of the modem. I couldn't find any solution for this, other than looking for a spot which has better signal. Using the modem in EDGE rather than 3G mode also seems to do the trick, but it has lower speeds and I don't know of any reliable method to make the modem use EDGE if 3G is also available.

Getting the most out of your audio recording with Audacity


This article aims to show you some simple techniques to improve the quality of your voice recording quickly and cheaply (for free actually). But first things first:

The best audio is the one you don’t have to improve. Some simple steps you can perform in advance to maximize quality:

  • Use quality equipment. Here are some articles about the equipment great-sounding podcasters use. You don’t have to spend a lot of money, but definitely stay away from the built-in laptop microphone
  • Eliminate ambient noise as much as possible (close windows, draw the blinds, stop other electronic equipment in the room, etc)
  • Record each person on a separate channel - if possible on a computer local to them (avoid recording through Skype, GoToMeeting or other VoIP solutions)
  • Try keeping the recording volume for each microphone at the optimal level – not too low, but also avoiding clipping

After you have the audio recording there is still a lot you can do, but it is preferable to start out with the best source material. For the example below I’ll be using the raw recordings from a recent SE Radio podcast:


The situation with this recording is as follows:

  • There are separate audio tracks for the interviewer and interviewee (good)
  • There is background noise on the tracks (easily correctable)
  • Both persons were picked up by both microphones (correctable)
  • The interviewer has some clipping (partially correctable – luckily it’s not the interviewee who has clipping)

The steps to improve the quality of this recording are as follows:

First, install the Noise Gate plugin for Audacity, since it requires a program restart (under Windows you have to copy the downloaded noisegate.ny to C:\Program Files (x86)\Audacity 1.3 Beta (Unicode)\Plug-Ins or a similar location; under Linux you have to place it in /usr/share/audacity). After copying the file you have to close and restart Audacity. To verify that the plugin was properly installed, check the Effect menu – you should see an entry titled “Noise gate”.

Now that we have Audacity all set up and the plugin installed, first split the stereo track into mono tracks, since they don’t actually represent left-right channels but rather two speakers which will be mixed together at the end. For this click on the arrow after the filename in the track and select “Split Stereo to Mono”. Sidenote: some people will prefer to mix different speakers in podcasts with different panning (that is to the left or to the right). I would advise against this: it is distracting if you are doing something else while listening to the podcast (like walking / jogging / riding a bike / etc). It can also backfire if for some reason the listening device is missing one of the channels (the “damaged headphone” scenario).

The first thing will be to remove the constant background noise (like AC hum for example). To do this zoom in (Ctrl + 1) and look for low volume zones. Select those zones and go to Effects –> Noise Removal –> Get Noise Profile. Now select a zone where the noise is mixed with speech and test out the settings (Effect –> Noise Removal –> Ok). After the test you can use Undo (Ctrl + Z) to roll back the changes. You should watch for the noise being removed but also the natural sound of the voice being preserved (too aggressive of a noise removal can lead to a “robot voice” effect). If you are satisfied, you can go ahead and apply it to the entire track. Also, since the noise source might change during the recording, you should at least do a quick scroll to check for other low-volume zones which can be a sign of noise. If you find noise from other sources, you can use the same steps to remove it.

Now that you have removed the noise, the next step would be to remove the voices from the channels they don’t belong to. This is where we’ll be using the Noise Gate plugin: since there is a considerable level difference between the wanted audio and the unwanted audio on each channel, we can just declare everything below a certain volume “noise” and use the plugin to silence it. A couple of tips:

  • This needs to be done separately for each channel, since the cutoff volume will be different
  • You can use the “Analyse Noise Level” function of the plugin to gauge the approximate level of the cutoff volume – this will only give you an estimate and you will have to play around with the settings a little bit to find the optimal volume
  • Use a “Level reduction” of –100 dB to completely filter out the sound and an “Attack/Decay” of 1000 milliseconds to avoid false positives
  • As with all the steps, you can experiment on a smaller portion of the audio file (since it is much quicker) to fine tune the settings by repeatedly applying the effect with different parameters and undoing (Ctrl+Z) the result after evaluation. When the parameters seem right, just select the entire track and press Ctrl+R (Repeat last effect)

After we’ve finished with both tracks, we have a better situation:


Now we will fix the clipping as much as possible (a perfect fix isn’t possible, since clipping means that information got lost and all the plugins can do is “guess” what the information might have looked like). First we reduce the amplification of the second track (the one which contains the clipping) by 10 dB, as the Clip Fix plugin suggests (Effect –> Amplify –> –10 dB), after which we use the Clip Fix plugin. Unfortunately this plugin runs very slowly if we were to apply it to the entire track at once. Fortunately we have a reasonable workaround: select portions of the track and apply the plugin to them individually. After the first application you can use the “Repeat last effect” shortcut (Ctrl+R) to speed up the operation. Sidenote: it is a good habit to use the “Find Zero Crossing” function whenever you make a selection (the shortcut is Z – so whenever you select a portion, just press Z afterwards). This eliminates some weird artifacts when cutting / pasting / silencing parts of the audio and it might even help when applying different effects. The fixed audio looks like this:


Now, that all the cleanup steps have been performed, there is one last step which is as important as the cleanup: maximizing the audible volume without introducing clipping. This is very important because all devices can reduce volume but few of them can increase it (some exceptions being: the Linux audio stack and VLC). The easiest way to do this is by using the Levelator (note: while the Levelator is free – as in beer – and does not restrict what you can do with the output, it is not free as in freedom if this is a consideration for you).

To do this, export the audio to WAV (make sure that all tracks are unmuted during export) and run the Levelator on it. The end result will look like the following:


Of course the Levelator isn’t magic pixie dust either, so here are a couple of things to check after it has been run:

  • Did it amplify some residual noise which wasn’t available in the initial audio? (if so, you should remove it using the Noise Removal plugin)
  • Did it miss segments? (it is rare, but it happens – those segments need to be amplified manually)
  • It results in “weird” sounding audio if the recording has been preprocessed by a dynamic compressor – for example GoToMeeting has an option to improve sound quality which uses dynamic compression and thus makes the recording unsuitable for the use with Levelator

That’s it for this rather long article. Don’t be discouraged by the length of the article: after going over the steps a couple of times, it shouldn’t take longer than 15 minutes to process a 2 hour interview (excluding the cutting / pasting / moving parts around) and you will gain listeners because of the higher production value.

A final note on the output formats: while during processing you should always use lossless formats, the final output format I recommend is MP3 at 64 kbps CBR, Joint Stereo, with a 22050 Hz sampling rate. I found that this is the best balance between quality, file size and compatibility with the most playback devices out there.

Thursday, September 22, 2011

More videos


Two inches to the right via mubix:

Jane Austen's Fight Club via Wondermark:

Jane Austen's Fight Club from Keith Paugh on Vimeo.

Sink The Bismarck:

Storm - a beat poem:

Wednesday, September 21, 2011

Of maps and men


A very cool visualization of the immigration / emigration data:

To remember: the relative sizes of countries / continents on most maps are not representative of the true ratios, because most map projections were not meant for that. If you want to play around with different projections, here is a nice page from Wolfram (unfortunately you have to install a ~100MB plugin to get it to work). If you need the raw data, just go to Wikipedia.

Finally, here is a good essay from Asimov (from 1989) about the scientific process: The Relativity of Wrong - small nitpick: it would have been even better if it had used the metric system, or at least hadn't switched from miles to inches in the middle of the essay.

Informative videos about copyright and remixing issues


Walking on Eggshells: Documentary about Remix Culture - via the comixtalk blog:

Internet is Freedom - a speech given by Lawrence Lessig at the Italian Parlament - via the Security4All blog:

PBS: Copyright Criminals - while the video is not online, you can watch the trailer and listen to some remixes inspired by it which are under a CC license.

And just as a bonus: a cynical video about filtering the Internet - via the IT Law in Ireland blog:


The History of Copyright Law - via the laughing squid:

TED Johanna Blakley: Lessons from fashion's free culture

Update: it seems that I already posted the last video - admittedly I'm becoming senile with the advance of the years :-)

Tuesday, September 20, 2011

Recording test performance with Jenkins


In many (most?) systems performance is an important non-functional requirement. And even if you have attained the required performance, it is useful to keep an eye on it to detect whether a code change involuntarily deteriorates it. Enter the Performance plugin for Jenkins. Using it you can record the performance (as in: speed of execution) of your test runs and set alert thresholds which cause the build to fail. It can also generate graphs like the one below:

To do this:

  • Have Jenkins installed
  • Install the Performance plugin (or upgrade to the latest version, since there was a bug in earlier versions which prevented the parsing of the JUnit reports)
  • For your build check “Publish Performance test result report” and add locations where the reports should be collected from.
  • That’s it! Future builds will collect the performance data and you can access it using the “Performance Trend” link (at the job level) or the “Performance Report” link (at the build level)

More details / caveats:

  • The paths are defined as ANT file expressions (that is you can use “**” to specify an arbitrary level of directories, for example: target/surefire-reports/**/TEST*.xml)
  • JUnit performance is grouped at the test-class level, thus it probably makes sense to create a separate project / module to group the performance test cases.
  • Benchmarking is hard and JUnit doesn’t give you any provisions to do warmup or to repeat the tests multiple times. To make your test as relevant as possible you should do this manually (warmup code can be placed in the @Before method for example). A properly set up JMeter task accounts for this already.
  • TestNG tests can also be parsed as long as the test run is set to produce a JUnit compatible report.
  • Slightly off-topic: to integrate a JMeter run into your Maven build, you can use the AntRun plugin with the ant-jmeter task (the paths and file names below are illustrative):
           <taskdef name="jmeter"
                    classname="org.programmerplanet.ant.taskdefs.jmeter.JMeterTask"
                    classpath="C:\work\ant\lib\ant-jmeter-1.1.0.jar"/>
           <jmeter jmeterhome="C:\jakarta-jmeter-2.5"
                   testplan="src/test/jmeter/plan.jmx"
                   resultlog="target/jmeter-results.jtl"/>
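The warmup point above can be sketched with plain JDK code; fib is just a stand-in workload and the round counts are illustrative:

```java
public class WarmupBenchmark {
    // stand-in workload; replace with the code under test
    static long fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    public static void main(String[] args) {
        // warmup: give the JIT a chance to compile the hot path
        // (in a JUnit test, this loop would go into an @Before method)
        for (int i = 0; i < 10; i++) fib(25);

        long start = System.nanoTime();
        fib(25);
        long micros = (System.nanoTime() - start) / 1000;
        System.out.println("took " + micros + " us");
    }
}
```

Without the warmup loop, the measured time mostly reflects interpreter and JIT-compilation overhead rather than steady-state performance.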

Article originally posted to the Transylvania JUG blog.

Audio quality redux


Yet another example of how simple steps can improve the audio quality considerably. The clip below is taken from this blogpost (which I originally found through Hacker News). You can find the processed version here, or use the controls below to do a quick A/B comparison of the two. The processing was very simple (1. noise removal and 2. running it through the Levelator) and quick.

Crossfade (Original - New):

PS: For people reading the post through an RSS reader: you probably need to click through to the site to see the comparison in action, since most (all?) RSS readers filter out JavaScript for security reasons.

PPS: If you are interested in the simple script which was used to interact with the two YouTube players, you can find it in my code repository.

Sunday, September 18, 2011

Power Line Hum Removal With Audacity


As a response to George Starcher's Removing Power Line Hum from Audio with GarageBand I would like to post a quick tutorial on how to do the same with Audacity:

Friday, September 16, 2011

Protein Shakes site review


This is something new for me: protein shakes. Medically I can't offer advice about them (I'm wary of using foreign substances not recommended by a specialist), but the site certainly has some positive signs:

  • The badges at the bottom are clickable and they go to the respective sites which certify the site
  • The domain has been registered since 2006 and they have been PayPal verified since 2007
  • They have an active Facebook and Twitter account
  • They use paypal / google checkout for payment which reduces your risk considerably

They also have some less positive signs:

  • There is no physical contact address
  • The registration details are hidden by proxy registration
  • The phone number is a generic one (I would have liked one which coincides with the physical location)

All in all I would recommend this site for small purchases from inside the USA (they don't deliver internationally). Also, I would consult a doctor (or multiple doctors) about the possible effect of the substances. In my view (and I'm no doctor) this is more serious than the homeopathic or "natural" substances and it should be handled with care. Then again, I'm also against medicine / medical devices (like glasses) being sold in places without expert supervision (like supermarkets).

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Tuesday, September 13, 2011

Running JRuby on 64 bit Windows


Usually it is as simple as: download, install, run. However, you can run into problems if you have both the 32 bit and 64 bit JVMs installed (which is quite common), because it will try to use the 32 bit JVM. You can check which JVM is being used from the command line:

jruby --version
jruby 1.6.3 (ruby-1.8.7-p330) (2011-07-07 965162f) (Java HotSpot(TM) 64-Bit Server VM 1.7.0) [Windows 7-amd64-java] # 64 bit
jruby 1.6.3 (ruby-1.8.7-p330) (2011-07-07 965162f) (Java HotSpot(TM) Client VM 1.6.0_26) [Windows 7-x86-java] # 32 bit

To work around this issue, specify the JVM to use in your jruby.bat (or other batch files installed by gems like vagrant.bat) explicitly. Example jruby.bat:

java -Djruby.home=C:\jruby-1.6.3 -jar "C:\jruby-1.6.3\lib\jruby.jar" %1 %2 %3 %4 %5 %6 %7 %8 %9

Example vagrant.bat

java -Djruby.home=C:\jruby-1.6.3 -jar "C:\jruby-1.6.3\lib\jruby.jar" "C:/jruby-1.6.3/bin/vagrant" %1 %2 %3 %4 %5 %6 %7 %8 %9

Using less with syntax highlight


You can use vim as your pager and obtain two benefits: syntax highlighting and access to all the advanced commands (like search). Under Ubuntu you can do this by adding the following line to your ~/.bashrc:

alias less='/usr/share/vim/vimcurrent/macros/'


  • You have to have vim installed (which doesn't come by default, but it is as simple as sudo apt-get install vim-nox)
  • It supports viewing bz2 and gz archives directly, as well as piped input from stdin (but in that case it sometimes fails to highlight)
  • Edit commands (like dd) are disabled, so you can't accidentally modify the file you are viewing

Friday, September 09, 2011

Link love


Here are a couple of close friends' blogs. They are just starting out writing, but hopefully giving them some link love will encourage them to write even more great content. Without further ado, in no particular order:

Tuesday, September 06, 2011

100 years of style


A very entertaining video in the style of Evolution of Dance:

Hattip to

Quick'n'dirty Mediawiki file crawler

URL='' MIME='image/jpeg' \
  bash -c 'wget -q -O - "$URL/wiki/index.php?title=Special:MIMESearch&mime=$MIME&limit=500&offset=0" \
  | grep -Po "\/wiki\/images[^\"]+" \
  | xargs -n1 -I {} wget "$URL{}"'

What it does: it uses the "MIME search" functionality of the wiki to locate files of a certain MIME type and then xargs+wget's each of them.


  • A maximum of 500 files are downloaded
  • Downloads are not parallelized, thus slower than they could be

Monday, September 05, 2011

Creating a non-MAC bound CentOS 6 machine


I was building VMs to be deployed with Vagrant / Virtualbox for our QAs and discovered that on new instantiations of the machine the networking interface wasn't coming up. The problem was that Virtualbox was assigning a random MAC address to the NIC (and rightly so, to avoid conflicts). I used the following steps to solve this:

  1. Remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0
  2. Delete the file /etc/udev/rules.d/70-persistent-net.rules (hat tip)

These two steps are specific to CentOS 6 (on 5.x the first step is sufficient). Also, the second file is recreated at the next boot, thus after rm-ing it you should shut down the machine and package it (not start it again – or if you do, you should remove the file again).
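For reference, a sketch of what the cleaned-up interface config might look like (the values are illustrative, not taken from the actual VM):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# HWADDR=08:00:27:aa:bb:cc  <- removed, so the config is not tied to one MAC
```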

Thursday, September 01, 2011

Levant Digital Marketing review


Levant Digital Marketing is a company which does Search Engine Optimization in the Middle East. They seem to be a very new company (the domain was registered in January 2011). They are part of "JHG Holding", but that itself only dates from 2009 (and its site contains minimal content). I didn't manage to find their headquarters on Google Maps either (but this might just be an issue with Google Maps in foreign countries). Their phone number is indeed from Lebanon (as is their physical address). Their Facebook page is non-existent at the moment (was it deleted?) and the Twitter account is completely empty.

All in all, while I couldn't find anything explicitly negative about them, it is more the case that I couldn't find anything concrete about them :-). As far as advice goes, when starting out you are better off with some simple steps, and when you grow you might consider looking into an SEO consultancy, but take care to find a reputable one rather than a cheap one or one which promises you the moon.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Wednesday, August 03, 2011

Paving site review


This is a fairly trustworthy site to buy paving slabs and other construction materials. It checks out on all the sources I usually use: domain registration, physical address, reputation sites and a quick search for complaints. So go ahead and look around, but remember that for larger purchases you should consult an expert. They also have a fairly standard return policy, so if you do find out that you don't need a certain item, you might still be able to salvage some money.

External pavement is tricky in general, so you really need to consult somebody who has done it before. It requires some kind of foundation (mostly compacted sand) and manual work to lay each piece. When finished it can look very nice, but it can also be very unforgiving to people / children who happen to fall on it. I would recommend grass, asphalt (possibly a small strip of it for the tires if we are talking about a garage) and only after that paving with slabs. Good luck with the construction!

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

IVA site review


Reviewing this site poses a conundrum to me (other than how to write the word conundrum): on the one hand they seem to be a legitimate site for IVA advice, with a long-standing domain registration, a physical address and even an entry at the Office of Fair Trading (which seems to be the UK equivalent of the Better Business Bureau). On the other hand there are other sites with the same profile - debt relief - which list the same physical address but are hosted elsewhere (in Germany, for example).

I can't draw a definite conclusion about this. I'm not from the UK and have zero familiarity with the UK law. Probably the only advice I can give at this moment is to try asking your close family or your trusted friends. I imagine that it is painful to do so, but this is the only source I can think of where you can get money without the exorbitant rates asked by payday lenders. All that I can do is hope that you prevail, since I have a deep belief that as long as you have determination, you will find your place.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Wednesday, June 01, 2011

Setting up git-daemon under Ubuntu


The scenario is the following: inside a (somewhat) trusted LAN you would like to set up git-daemon so that your coworkers can access your repositories. This solution is not appropriate in cases where you want to share with random people on the interwebs. This short description is based loosely on this blogpost and it was updated to contain more details and tested with Ubuntu 11.04.

  • install the git-daemon-runit package: sudo apt-get install git-daemon-runit
  • decide where you would like to keep your git repositories - it can be your home folder, if it's not encrypted (if it is, it only gets decrypted once you log in, so the repositories won't be available unless you are logged in). Let's say you've decided on /var/git. Create it:
    sudo mkdir /var/git
    sudo chown $USER /var/git
  • Now edit the file /etc/sv/git-daemon/run and make it look like the following (the spots to adapt are the base path and the exported directory on the last line):
    exec 2>&1
    echo 'git-daemon starting.'
    exec chpst -ugitdaemon \
      "$(git --exec-path)"/git-daemon --verbose --export-all --base-path=/var/git /var/git
  • Restart the service:
    sudo sv restart git-daemon
  • Enable it from the firewall:
    sudo ufw allow 9418/tcp

That's it. Now every subdirectory from /var/git which "looks like" a git repo (has a .git subdirectory) will be available over the git protocol. Alternatively, you can remove the "--export-all" option and create a "git-daemon-export-ok" file in each subdirectory you would like to export: touch /var/git/core/git-daemon-export-ok

You can symlink the directory to your home folder for your convenience:
ln -s /var/git ~/projects/git
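If you go the per-repository route (no --export-all), a tiny helper keeps the two steps together. The helper name and layout are my own, and it creates bare repositories (the more common shape for repos that are only shared, as opposed to the checkouts with a .git subdirectory described above):

```shell
#!/bin/bash
# Sketch: create a bare repository under the export root and mark it
# exportable for a git-daemon running without --export-all.
GIT_ROOT=${GIT_ROOT:-/var/git}
new_exported_repo() {
  git init -q --bare "$GIT_ROOT/$1.git"
  touch "$GIT_ROOT/$1.git/git-daemon-export-ok"
}
```

Usage: new_exported_repo core, then clone it with git clone git://yourhost/core.git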

Adding tab completition to Maven3 under Ubuntu


Maven 3 was released recently (depending on your definition of recent), but is not yet packaged for Ubuntu. This is generally not a problem, since the installation instructions are easy to follow (alternatively here are the installation instructions from the Sonatype maven book), but you don’t get tab completion in your terminal, which is quite a bummer, since I don’t know how to write correctly without a spellchecker.

Fortunately the steps to add it are simple:

  • Download an older Maven2 package
  • Extract from it the /etc/bash_completion.d/maven2 file (take care not to install the package by mistake)
  • Put the extracted file into /etc/bash_completion.d/maven3
  • Restart your terminal

These steps should also work with other Linux distributions if they have bash-completion installed.

This is a cross-post from the Transylvania-JUG blog.

Saturday, May 21, 2011 review


Friday, April 15, 2011 review


I was hired to write a review about a site which purportedly teaches blackjack practice; however, I wasn't able to verify this, since the site is down (currently it shows a default directory listing from Apache; earlier today it showed an empty page). Very little is known about it by the usual sources, and there doesn't seem to be anything interesting on the same server (it is hard to tell if the same person owns all the domains on the server due to the "privacy protected" registration).

Now back to the idea of blackjack practice: can you really train yourself? Probably, to some extent (hey, even Kent Beck - yes, that Kent Beck - has a poker training website). With regular exercise you can memorize the basic strategy table (this will help you with any game, regardless of whether it's with a live dealer or with a machine - supposing that the machine is playing fair) and then you can move on to more advanced techniques like card counting or shuffle tracking. I doubt however that you can reach a level where this would be a profitable endeavor.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Tuesday, April 12, 2011

Booting the Linux Kernel from Grub2


Recently a good friend of mine managed to uninstall all the kernels from his Ubuntu machine (what can I say - Monday morning and no coffee is a deadly combination). Luckily he had the install CD on hand so we did the following:

  1. Boot from the CD (we had Internet connection)
  2. Mount the Linux partition and chroot into it
  3. sudo su
    cd /media/..
    chroot .
  4. Reinstall the kernel with aptitude
  5. Reboot and go into Grub2 command mode
  6. Now do the following (commands need to be adjusted to match your partition - also, tab completion works, so you don't have to guess)
    insmod part_msdos
    insmod ext2
    set root=(hd0,3)
    linux /boot/vmlinuz-2.6.38-6-686 root=/dev/sda3 ro
    initrd /boot/initrd.img-2.6.38-6-686

It seems that most of the examples on the 'net are for Grub 1 and little is out there for Grub 2. I found the following three: How to use Grub2 to boot Linux manually, The Grub 2 Guide, GRUB 2 bootloader - Full tutorial. Also, I didn't perform steps 4-5 because he just reinstalled Ubuntu (it was a fresh install anyway), but I tried it out separately on my laptop and it works.


Monday, April 11, 2011

The wrong time to update software...


is when the user is the busiest, for example when s/he just started your application. See for example the screenshot below with Adobe Air (click through to see it in its full beauty).

The mistakes it makes:

  • It tries to do the update when I'm trying to start Grooveshark (it interferes with my intention)
  • It consumes 100% of a core by polling for the presence of running applications (I suppose), effectively obliging me to do the update. This is combined with frequent releases (which otherwise would be a good thing) for maximum annoyance.
  • Although you can't see it in the screenshot, the updater has (had?) a bug when asking for your sudo password: if you mistype it at first, it then asks for the root password (which doesn't exist under Ubuntu by default) and then gets into some weird state until the next update is released.

To sum it up: You should download and install the updates in the background (in a separate, versioned directory, always keeping just the two most recent versions). Users shouldn't be bothered with this, especially when they are trying to get work done!
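The "keeping just the two most recent versions" part is only a few lines of shell. A sketch, under the assumption that versions live in app-&lt;version&gt; directories (sort -V gives the version ordering, head -n -2 drops everything but the last two):

```shell
#!/bin/bash
# Sketch: prune a versioned install dir down to the two newest versions,
# demonstrated on a throwaway layout.
base=$(mktemp -d)
mkdir -p "$base"/app-1.0 "$base"/app-1.9 "$base"/app-1.10
ls -d "$base"/app-* | sort -V | head -n -2 | xargs -r rm -rf
ls "$base"   # app-1.10 and app-1.9 survive
```

Note that sort -V is needed because lexical sorting would put 1.10 before 1.9.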

Sunday, April 10, 2011

Recovering encrypted home directory under Ubuntu


While the home-folder encryption in Ubuntu is far from a perfect solution (there is considerable data leakage from the swap file and the temp directory - for example I once observed Flash videos from a Chromium private browsing session being present in the /tmp directory), it is a partial solution nevertheless and very easy to set up during installation. However, what can you do if you need to recover the data because you fubar'd your system?

Credit where credit is due: this guide is taken mostly from the Ubuntu wiki page. Also, this is not an easy "one-click" process. You should proceed carefully, especially if you don't have much experience with the command line.

  1. Start Ubuntu (from a separate install, from the LiveCD, etc) and mount the source filesystem (this is usually as simple as going to the Places menu and selecting the partition)
  2. Start a terminal (Alt+F2 -> gnome-terminal) and navigate to the partition's home directory. Usually this will look like the following:
    cd /media/9e6325c9-1140-44b7-9d8e-614599b27e05/home/
  3. Now navigate to the user's ecryptfs directory (things to note: it is ecryptfs, not encryptfs, and your username does not coincide with your full name - the one you click on when you log in)
    cd .ecryptfs/username
  4. The next step is to recover your "mount password", which is different from the password you use to log in (when asked, type in the login password of the account whose data you are trying to recover). Take note of the returned password (you can copy it by selecting it and pressing Shift+Ctrl+C if you are using the GNOME Terminal)
    ecryptfs-unwrap-passphrase .ecryptfs/wrapped-passphrase
  5. Now create a directory where you would like to mount the decrypted home directory:
    sudo mkdir /media/decrypted
  6. Execute the following and type in (or better - copy-paste) the mount password you've recovered earlier
    sudo ecryptfs-add-passphrase --fnek
    It will return something like the following. Take note of the second key (auth tok):
    Inserted auth tok with sig [9986ad986f986af7] into the user session keyring 
    Inserted auth tok with sig [76a9f69af69a86fa] into the user session keyring
  7. Now you are ready to mount the directory:
    sudo mount -t ecryptfs /media/9e6325c9-1140-44b7-9d8e-614599b27e05/home/.ecryptfs/username/.Private /media/decrypted
     Passphrase:  # mount passphrase
     Selection: aes
     Selection: 16
     Enable plaintext passthrough: n 
     Enable filename encryption: y # this is not the default!
     Filename Encryption Key (FNEK) Signature: # the second key (auth tok) noted
    You will probably get a warning about this key not being seen before (you can type yes) and asking if it should be added to your key cache (you should type no, since you won't be using it again probably).

That's it, now (assuming everything went right) you can access your decrypted folder in /media/decrypted. The biggest gotcha is that home/username/.Private is in fact a symlink which - if you have another partition mounted - will point you to the wrong directory, so you should use the home/.ecryptfs/username directory directly.


scentsy review take two


I've already written about Scentsy Products, so I will try not to repeat myself too much (other than reiterating that you should really think before investing in a referral system) and will focus on their special product:

Piece by Piece Full-Size Scentsy Warmer – this is a usual warmer (usual for Scentsy that is – it uses a lightbulb to provide the heat, thus avoiding the open flame and smoke) with a puzzle-piece decoration. What makes this item (more) special is the fact that part of the revenue from it goes to Autism Speaks. While currently this is the only one in the Charitable Cause Warmers product line, hopefully there will be more in the future, allowing you to get something for both your body and your soul.

An other item I didn’t talk about in the last article is the gift certificate: if you consider appropriate, you could give a 25 USD Gift Certificate to the person. There are also replacement parts and individual warmer parts if your warmer breaks but you don’t want to buy a completely new one. Also, the light bulbs in the Scentsy products are standard ones (compared to something like a Philips wake-up light) so you can buy a replacement in almost any store, as long as you watch for the socket size and the wattage.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

review


There isn't much I can say about this company. They seem very legitimate by all indications (domain name registered more than 10 years ago, with the physical address of the company, no complaints on the web, etc). Their goal seems also very laudable: creating playground surfacing out of recycled tire rubber. While I don’t have enough information to ascertain if this is truly more eco-friendly than getting rid of the tires in other ways (burning them for example – there are many factors here – the recycling process itself might consume a larger amount of energy – see a similar issue with the fact that placing solar panels in the Sahara desert might actually increase global warming), reusing is always a laudable goal.

The company is also relatively media-savvy: they have a (Facebook) like button on their website, they have a blog and their site is relatively nice looking (even more important: it doesn’t look like one of the stock templates from designers). The only thing you can’t do is to order online, but probably it’s better this way, since we are talking about relatively large amount of material (and automatically large amount of money) so a personal contact is a better option. Also, I didn’t see any indication that they ship outside of the USA, probably for the same reason.

Thumbs up for a small-medium business which has a good product!

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Friday, April 08, 2011 review

Today I'm reviewing a site which has the goal of comparing different private health insurance companies and giving you the cheapest one. Unfortunately I'm not familiar with the USA insurance rules (because I'm on a different continent :-)), so I can only comment on generic impressions related to this site:
  • The domain was registered in 2007, which is reassuring, however the registration information is hidden, which raises some questions, especially given the fact that you are supposed to trust this site with personal data (like address, phone number, date of birth, name)
  • The design is ok, although there are some technical glitches (like using the sitemap link to serve the search-engine sitemap, although this wouldn't be necessary - there are other ways to point the search engines to it)
However the biggest downfall of the site is the confusing interaction model and stale data: when you request a free quote it asks for a lot of personal information (then again, I don't know how much data the individual insurance companies' risk models need) in a separate popup. It presents the result in both the popup window (however, both links it gave me returned a 404 error) and the main window, but clicking through the main window requires filling out the form again.
In conclusion, I have low confidence that such comparison sites are a reliable information source, and comparing on price alone isn't enough when making such an important decision.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Sunday, March 20, 2011

Setting the maximum number of opened files under Ubuntu (for JProfiler)


As I found out "on my own skin", setting fs.file-max in /etc/sysctl.conf is a BAD idea. It can render your system useless in one step. Please don't do it! If you did it, use the recovery mode to roll back the change. Also, currently I would only recommend doubling the limit (ie going from 1024 to 2048 or from 2048 to 4096) not going to the maximum value.

JProfiler is a great tool, however under 32 bit Ubuntu you can run into the problem of having a too low limit for open filehandles. This is a problem for JProfiler because it uses temporary files to work around the address-space limitation created by 32 bit (yeah, I know, I should upgrade to 64 bit - but 32 bit works great for now...)

To raise the maximum filehandle limit, do the following:

sudo gedit /etc/security/limits.conf
# add the following two lines before the # End of file marker
# yes, the initial star is also part of line, and you should add it
*       hard    nofile  4096
*       soft    nofile  4096
# restart your system (or log out and back in) for the new limits to take effect

You can check if the changes were successful by using the ulimit command:

ulimit -n
# it should print out 4096

Tuesday, March 08, 2011

DiskMap - a disk-backed Map in Java


I had the following problem: a Java application was running out of memory. It was not feasible to mandate a 64 bit JVM for this application and the ~1.4G limit wasn't enough.

My solution was to implement a Map which - when an element is added - also saves the value to disk and only holds a weak reference to the value in memory. When memory pressure occurs, these objects, reachable only through the weak references, are evicted by the GC. Later, when they are needed again, they are read back from the backing file.


Limitations:

  • Adding elements takes considerably longer (because they need to be serialized)
  • There is no way to reclaim space from the backing file (this is only intended for short-running, mostly read-only tasks)
  • This is only useful if the values are considerably larger than the keys (because the keys are kept in memory and only the values can be evicted)
  • There is a memory overhead: while the values are in memory, each entry takes up an additional 20 to 40 bytes; after the GC clears the weak references, only the 20 to 40 bytes per key remain.
Long story short: you can find the code (together with unit-tests) in my repo.

Why is running sushi the best fast-food?

Sushi Bar - Angle View

I just realized that running sushi is the best fast-food ever. (Yes, I have strong opinions weakly held):

  • You get your food in small chunks, so you can stop at any time and still not feel like you've wasted food
  • You have a great variety of food and you can look at it before taking it (rather than just looking at a picture in the menu and wondering how the real thing will look)
  • It is most probably healthier than other kinds of fast-food
  • It is neither hot nor cold, so you can eat it right away (you don't have to wait for it to warm up or to cool down)
  • You don't have to order! The last thing you want to do when you are hungry is to stare at food and wait

PS. If you are in Cluj (Romania) you can check out the Wasabi Running Sushi. As far as I know they are the only running sushi place in Romania!

Monday, March 07, 2011

Microbenchmarking and you


Crossposted from the Transylvania JUG website.

Microbenchmarking is the practice of measuring the performance characteristics (like CPU, memory or I/O) of a small piece of code to determine which would be better suited for a particular scenario. If I could offer but one advice on this, it would be this: don't. It is too easy to get it wrong and bad advice resulting from bad measurement is like cancer.

If you don't want to take my first advice, here is my second advice: if you really want to do microbenchmarking watch this talk by Joshua Bloch: Performance Anxiety and use a framework like caliper, which I present below.

caliper is a Java framework written at Google for doing Java microbenchmarks as correctly as possible. To use it, first you have to build it (there are no prebuilt jars yet, nor is it present in the central Maven repository, sorry):

svn checkout caliper
cd caliper

Now you can start writing your benchmark. Benchmarks are written in a style similar to the JUnit3 tests:

  • you have to extend the SimpleBenchmark class
  • your methods must conform to the public void timeZZZZ(int reps) signature
  • you can override the setUp and tearDown methods to implement initialization / finalization

Below is a simple example (taken from the caliper homepage):

public class MyBenchmark extends SimpleBenchmark {
  public void timeMyOperation(int reps) {
    for (int i = 0; i < reps; i++) {
      // the operation you want to measure goes here
    }
  }
}

To run this you have multiple possibilities:

  • Use the caliper script included in the code distribution (this is a SH script, so it won't work under Windows):
    ~/projects-personal/caliper/build/caliper-0.0/caliper --trials 10 org.transylvania.jug.espresso.shots.d20110306.MyBenchmark
    you can also execute the script without parameters to get a list and description of command line parameters.
  • Run it from your favorite IDE. You need to add the following libraries: allocation.jar, caliper-0.0.jar. The main class is com.google.caliper.Runner and the parameters are the same you would pass to the caliper runner
  • Add a main method to your test class which would contain the following:
    public static void main(String... args) throws Exception {
      Runner.main(MyBenchmark.class, args);
    }

By default caliper outputs an easy to understand text report. You also have the option to publish the benchmark as a nice HTML page (see this page for example). The publication is done through a Google AppEngine app and is publicly available to anyone (a caveat to remember). For more information see the caliper questions on StackOverflow. You might also be interested in the Java performance tuning website if you need to perform such tasks.

Sunday, March 06, 2011

Doing some estimations


This is again one of those topics which I like to rant about, so I give you the short version: when you see a number, question it! Most of the numbers thrown at us in different media can be disproven quite easily and it is our responsibility as people not to just repeat whatever we’ve heard, but rather stop and think a little about it (of course I’m not immune to this myself, since I’ve just fallen into this trap when reading the “Contemplating Financial Trading At Picosecond Resolution” on Slashdot, only to see the very insightful comment: light travels 3mm in a picosecond – yes I’ve done the math - so this article is pure BS).

Offtopic: why do sayings in different languages have so much in common? For example we have the “beating a dead horse” expression in English, and in Hungarian we would say somebody is talking about his “horse made of branch” (vesszoparipa). Ain’t it interesting?

Getting back to my rant :-). I’ve seen an article recently about a local (Romanian) affiliate program: eMAG Profitshare 2010. I applaud them for their openness and it also gives us the possibility to do a quick calculation. They say that they’ve given out 463 000 RON (~109 905 EUR / 153 499 USD) to 8690 sites.

Does it sound like a lot? Yes. Is it a lot for each individual site? Unlikely. Let's do the quick math: assuming that each site gets the same share (a very simplistic assumption) we have 109 905 EUR / 8690 sites = ~13 EUR per site per year (these are yearly figures for 2010), so around 1 EUR (!) per site per month (!).
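The division itself, checked with awk (same figures as above):

```shell
# Sanity check of the per-site figures from the post.
awk 'BEGIN {
  total = 109905; sites = 8690
  printf "per site / year:  %.1f EUR\n", total / sites
  printf "per site / month: %.2f EUR\n", total / sites / 12
}'
```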

Ok, let's be more realistic. You have a big fanbase, so you should be among the top sites by revenue. Let's consider a binomial distribution of the sites and do a little chart with Google Docs:


What you see here is the revenue per month for a site in a certain category (categories are from 1 to 10, 1 being the lowest traffic one and 10 the highest traffic one). The numbers are in EUR. The conclusion: this business model is a very poor revenue source for the individuals participating, but probably a very good marketing avenue for companies (I assume that the cost for companies is around the same as doing an ad campaign, but the returns must be much better - not to mention the Google juice they must be getting from these referrals!).

PS: in the name of transparency, you can see the sheet I used for calculation here.

Saturday, March 05, 2011

Audio quality


This is just one of those topics which comes up from time to time in my life (probably because I consume a lot of media). I was recently watching Jim Zemlin interviewed by Jeremy Allison (Jim Zemlin is the Executive Director of the Linux Foundation) on the Google Open Source YouTube channel and was frustrated by the background noise and low audio volume, since the topic was really interesting to me. So I decided to look into the problem and see if the audio quality could have been easily improved. I covered the topic a couple of years ago, so I won't go into details, rather just give a 10 000 foot view of the process. Please read the original post for more details, since everything in it still applies.

Step 1: download the YouTube video. VLC natively supports YouTube playback, so exporting the sound to a FLAC file (you should always use lossless codecs during processing!) was just a matter of a couple of clicks and one or two minutes.


Step 2: load it up in Audacity and remove the noise. The loading of the FLAC file is a little buggy (the progress bar keeps jumping between 0 and 100% and the time estimation is useless, but it loaded in under a minute). As you can see in the screenshot below, the volume is really low, but there are occasional spikes, so plain normalization wouldn't help here. On the upside, there is no clipping, which would result in hard (impossible?) to repair artifacts.


After noise removal and keeping only one channel (no need for stereo here - we would add it back in the last step if we were to publish it, since some devices can't handle mono and the overhead of joint stereo is almost zero) the file was exported to WAV and fed into the Levelator. Here is the end result:


As you can see, we have much better volume resulting in a much improved experience for the consumer, all this with a couple of minutes of work while browsing Hacker News and with free (and mostly open-source) cross platform tools.

Content publishers of the world: please take a couple of minutes of your time after editing to do a proper post-production! Thank you.

Update: YouTube downloading is broken in the current VLC release but it will be fixed in the next version (1.1.12). Until then you can use the nightly builds.

Friday, February 25, 2011

Sorry for the malware warning!


If you have tried to visit my blog recently, you might have gotten a warning like this from your web browser:

Warning: Something's Not Right Here! contains content from
, a site known to distribute malware.
Your computer might catch a virus if you visit this site.

The source of the warning is the image / link in the comment form, which I have now removed (or more precisely, replaced with a local copy). It seems that the site in question has been hacked and is thus classified as malicious by Google, which in turn leads to all sites linking to it being marked as potentially malicious. So, while I'm sorry for doing this, I will remove the links to their site until they manage to resolve the issue, and will mirror their manifesto below:

Almost all blog platforms by default are set up so that a “dead end” piece of code is inserted wherever there is a link in a comment, so that search engines will not “count” the link as they are crawling the internet. This was originally designed to help stop comment spam, but it doesn’t work. What it does is remove some of the incentive for your readers to contribute to your site by commenting on your posts. What can you do about it? Turn off “nofollow”. Show your commenters that you appreciate them. Spread the link love.

review


My assignment - which I choose to accept :-) - was to review a mobile accessories site. All the signs for this site check out:

  • It was registered a couple of years ago and didn't move around much
  • It has several real seals (real meaning that they are linked to the originating site, where you can check that the referring site is really approved and didn't just fake the seal)
  • Their contact address checks out (based on Google maps) as does their phone number (which is in the London area)
  • I found a couple of complaints about them on this site, however they seem to be very responsive to the negative complaints

After doing the basic safety research I went on to browse the site and found many interesting items, for example this Jabra headset. What I like about these models of headsets is the fact that they use a standard minijack for the headphones, which means they can easily be replaced in case they stop working (and it is my experience that headphones are the first to go bad).

Another category which I found interesting was the tools section. While some of the tools are clearly overpriced, the more complex kits look interesting and I am tempted to buy something from there.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Tuesday, February 22, 2011

Setting up IMAP with Yahoo! Mail

Mail Snail

I'm a long time Yahoo Mail user. Just to illustrate how long I've been with them: when I joined, the space available was a couple of MBs! I stayed with them because I was mostly satisfied (I never really caught the GMail bug), however recently I started looking for options to consolidate my different email accounts (work / personal / Yahoo / GMail / etc). I explicitly wanted IMAP support because I really need to keep in sync between multiple machines.

The common wisdom on the 'net seems to be that Yahoo! Mail doesn't support IMAP (not even for paid accounts) or that various hacks are needed to make it work (like sending custom / non-standard commands after login). This information however seems to be outdated, since I was able to find at least 3 IMAP servers (I've tested them all and they all work - with standard email clients and no hacks!):

  • (this is the one Thunderbird configures by default)
  • (from this article)

All of the servers support SSL/TLS encryption, so they are safe to access even from public hotspots. The outgoing server is, which also supports SSL/TLS (and you should use it!)

The easiest to set up is Mozilla Thunderbird, however Evolution seems to work much better. One important feature in particular is that it works with large (10 000+ emails) folders, while Thunderbird chokes with an error ("UNAVAILABLE] UID FETCH too many messages in request"). To have Evolution work properly, you need to select "IMAP+" (also called IMAPX) as the protocol.

HTH somebody out there.

Sunday, February 06, 2011

Manually enabling IP routing in Windows XP


While Internet Connection Sharing is a nifty tool, there are cases where you would like to do the steps manually. One such case is when the “primary” network is already using the address space that ICS is hardcoded (as far as I can tell) to use. One concrete case I encountered was:

ADSL modem+router (no wireless) -> laptop broadcasting over wireless -> ... -> other laptops

The solution is the following:

It is as simple as 1-2-3 :-p. Some caveats though:

  • This setup won’t give you DHCP, so make sure that you configure your other machines with static IP addresses
  • It also won’t give you DNS, so configure something like Google DNS or OpenDNS, or even your ISP's DNS servers
  • The ad-hoc wifi connection has reliability issues. It happened multiple times that I had to restart it because it disconnected and wouldn’t connect any more, but it is a good temporary solution.

PS. You can download the drivers and user manual for the SmartAX MT882 ADSL Router here (the link might go dead unexpectedly, since it is served out of Dropbox). This is a standard modem provided by Romtelecom (the Romanian telecom provider) and I couldn’t find it elsewhere because Huawei is very secretive about its stuff (the files were copied from the CD provided with the modem). The driver makes the USB connection work as a network card (which is very elegant and simple).

Is hand-writing assembly still necessary these days?


Some time ago I came across the following article: Fast CRC32 in Assembly. It claimed that the assembly implementation was faster than the one implemented in C. Performance has always been something I’m interested in, so I repeated and extended the experiment.

Here are the numbers I got. This is on a Core 2 Duo T5500 @ 1.66 GHz processor. The numbers express Mbit/sec processed:

  • The assembly version from the blogpost (table taken from here): ~1700
  • Optimized C implementation (taken from the same source): ~1500. The compiler used was Microsoft Visual C++ Express 2010
  • Unoptimized C implementation (i.e. a Debug build): ~900
  • Java implementation using polynomials: ~100 (using JRE 1.6.0_23)
  • Java implementation using table: ~1900
  • Built-in Java implementation: ~1700
  • Javascript implementation (just for the fun of it), using the code from here with an optimization – storing the table as numbers rather than strings – on Firefox 4.0 Beta 10: ~80
  • Javascript on Chrome 10.0.648.18: ~40
  • (No IE9 test – they don’t offer it for Windows XP)
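To make the “table vs. polynomial” comparison concrete, here is a sketch of both Java variants (these are the textbook forms, not the exact code from my repo): the polynomial version does eight conditional shift steps per input byte, while the table version replaces them with a single precomputed lookup.

```java
import java.util.zip.CRC32;

public class Crc32Variants {
    private static final int POLY = 0xEDB88320; // reflected CRC-32 polynomial
    private static final int[] TABLE = new int[256];
    static {
        // Precompute the CRC of every possible byte value.
        for (int n = 0; n < 256; n++) {
            int c = n;
            for (int k = 0; k < 8; k++)
                c = (c & 1) != 0 ? POLY ^ (c >>> 1) : c >>> 1;
            TABLE[n] = c;
        }
    }

    // Polynomial (bit-at-a-time) version: 8 iterations per byte.
    static int crcPolynomial(byte[] data) {
        int crc = 0xFFFFFFFF;
        for (byte b : data) {
            crc ^= b & 0xFF;
            for (int k = 0; k < 8; k++)
                crc = (crc & 1) != 0 ? POLY ^ (crc >>> 1) : crc >>> 1;
        }
        return ~crc;
    }

    // Table version: one lookup and one shift per byte.
    static int crcTable(byte[] data) {
        int crc = 0xFFFFFFFF;
        for (byte b : data)
            crc = TABLE[(crc ^ b) & 0xFF] ^ (crc >>> 8);
        return ~crc;
    }

    public static void main(String[] args) {
        byte[] input = "123456789".getBytes();
        CRC32 builtin = new CRC32();
        builtin.update(input);
        // All three agree on the standard CRC-32 check value, 0xCBF43926:
        System.out.println(Integer.toHexString(crcPolynomial(input)));
        System.out.println(Integer.toHexString(crcTable(input)));
        System.out.println(Long.toHexString(builtin.getValue()));
    }
}
```

Both variants compute the same function; the 256-entry table simply trades 1 KB of memory for the inner bit loop, which is exactly why it only pays off when the table fits comfortably in cache.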

Final thoughts:

  • Hand-coding assembly is not necessary in 99.999% of cases (then again, 80% of all statistics are made up :-p). Using better tools or better algorithms (see “Java table based” vs. “Java polynomial”) can give just as large a performance improvement. Maintainability and portability (almost always) trump performance
  • Be pragmatic. Are you sure that your performance is CPU bound? If you are calculating a CRC32 of disk files, a gigabit per second is more than enough
  • Revisit your assumptions periodically (especially if you are dealing with legacy code). The performance characteristics of modern CPUs differ enormously from the old ones. I would wager that on an old CPU with little cache the polynomial version would have performed much better, but now that CPU caches are measured in MB rather than KB, the table version wins
  • Javascript engines are getting better and better.

Some other interesting remarks:

  • The source code can be found in my repo. Unfortunately I can’t include the C version since I managed to delete it by mistake :-(
  • The file used to benchmark the different implementations was a PDF copy of the Producing Open Source Software book
  • The HTML5 implementation is surprisingly inconsistent between Firefox and Chrome, so I needed to add the following line to keep them both happy: var blob = file.slice ? file.slice(start, len) : file;
  • The Javascript code doesn’t work unless it is loaded via the http(s) protocol. Loading it from a local file gives “Error no. 4”, so I used a small python webserver
  • Javascript timing has some issues, but my task took longer than 15ms, so I got stable measurements
  • The original post mentions a variation of the algorithm which can take 16 bits at once (rather than 8), which could result in a speed improvement (and maybe it can be extended to 32 bits)
  • Beware of the “free” tools from Microsoft! This article would have been published sooner were it not for the fact that MSVC++ 2010 Express requires an online registration, and when I had time to work on it I had no Internet access!
  • Update: If you want to run the experiment with GCC, you might find the following post useful: Intel syntax on GCC
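The 16-bits-at-once variation mentioned above is usually called “slicing-by-two”. Here is my own reconstruction of the idea (not the original post’s code): a second table is derived from the first so that two input bytes can be consumed per iteration, with a plain table step handling an odd trailing byte.

```java
import java.util.zip.CRC32;

public class Crc32Slicing2 {
    private static final int POLY = 0xEDB88320;
    private static final int[] T0 = new int[256];
    private static final int[] T1 = new int[256];
    static {
        for (int n = 0; n < 256; n++) {
            int c = n;
            for (int k = 0; k < 8; k++)
                c = (c & 1) != 0 ? POLY ^ (c >>> 1) : c >>> 1;
            T0[n] = c;
        }
        // T1[n] is the CRC state after feeding byte n and then a zero byte,
        // i.e. T0 applied twice.
        for (int n = 0; n < 256; n++)
            T1[n] = T0[T0[n] & 0xFF] ^ (T0[n] >>> 8);
    }

    static int crc32(byte[] data) {
        int crc = 0xFFFFFFFF;
        int i = 0;
        // Consume two bytes per iteration.
        for (; i + 1 < data.length; i += 2) {
            int b0 = (crc ^ data[i]) & 0xFF;
            int b1 = ((crc >>> 8) ^ data[i + 1]) & 0xFF;
            crc = T1[b0] ^ T0[b1] ^ (crc >>> 16);
        }
        // Odd trailing byte, if any, via the plain one-byte table step.
        for (; i < data.length; i++)
            crc = T0[(crc ^ data[i]) & 0xFF] ^ (crc >>> 8);
        return ~crc;
    }

    public static void main(String[] args) {
        byte[] input = "123456789".getBytes(); // 9 bytes: exercises the odd tail
        CRC32 builtin = new CRC32();
        builtin.update(input);
        System.out.println(Integer.toHexString(crc32(input)));
        System.out.println(Long.toHexString(builtin.getValue()));
    }
}
```

The same construction extends to four (or eight) tables to consume 32 (or 64) bits per iteration, at the cost of more table memory competing for cache.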

Picture taken from TheGiantVermin's photostream, with permission.