
Friday, March 28, 2014

On benchmarks


Numbers every programmer should know and their impact on benchmarks

Disclaimer: I don't mean to be picking on the particular organizations / projects / people who I'll mention below. They are just examples of a larger trend I observed.

Sometimes (most of the time?) we forget just how powerful the machines in our pockets / bags / on our desks are and accept the inefficiencies of the software running on them. But when we start to celebrate those inefficiencies, a line has to be drawn. Two examples:

In 2013 Twitter claimed a record of ~143k Tweets Per Second (TPS - cute :-)). Let's round that up to 150k and do some back-of-the-envelope calculations:

  • Communication between the clients and Twitter: a tweet is 140 bytes (240 if we allow for Unicode). Let's multiply the 150k number by 10 (just to be generous - remember that 143k was already a big blip) - we get a bandwidth requirement of 343 MB/sec. Because tweets are presumably going over TCP and ~20% of a TCP connection is overhead, you would need 428 MB/s of bandwidth - about 3.5 gigabit, or less than half of a 10 gigabit connection.
  • On the backend: let's assume we want triple redundancy (1 master + 2 replicas) and that the average tweet goes out to 9 subscribers. This means that internally we need to write each tweet 30 times (we assume a completely denormalized structure, so the tweet also has to be written to the author's own timeline, and all of this is done thrice for redundancy). This means 10 GB/sec of data (13 GB/s if we're sending it over the network using TCP).
  • Thus ~100 servers would easily be able to handle the load - and remember, this is 10x the peak traffic they experienced. (The sketch below just re-runs these numbers.)
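
The same arithmetic as a small Python sketch - the figures are just the assumptions from the bullets above, not measurements:

# Twitter back-of-the-envelope, using the assumptions above
tweets_per_sec = 150000 * 10     # ~143k TPS record, rounded up, times 10 to be generous
tweet_bytes = 240                # 140 characters, allowing for Unicode
MB, GB = 2.0 ** 20, 2.0 ** 30

client_bw = tweets_per_sec * tweet_bytes
print(client_bw / MB)            # ~343 MB/s of raw tweet payload
print(client_bw / 0.8 / MB)      # ~429 MB/s if ~20% of the TCP connection is overhead

writes_per_tweet = 10 * 3        # author + 9 subscriber timelines, times 3 for redundancy
backend_bw = client_bw * writes_per_tweet
print(backend_bw / GB)           # ~10 GB/s written internally
print(backend_bw / 0.8 / GB)     # ~12.6 GB/s if it travels over TCP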

So why do they have 20 to 40 times that many servers? It means that less than 10% (!) of their server capacity is actually used for business functions.

Second example: Google, together with DataStax, published a blog post about benchmarking a 300-node Cassandra cluster on Google Compute Engine. They claim a peak of 1.2M messages per second. Again, let's do some calculations:

  • The messages were 170 bytes in size. They were written to 2+1 nodes, which would mean ~600 MB/s of traffic (730 MB/s if it goes over the network using TCP).
  • They used 300 servers, but they were also testing resiliency by removing 1/3 of the nodes, so let's be generous and say that the volume was divided over 100 servers.

This means that per server we use 7.3 MB/s of network traffic and 6 MB/s of disk traffic - about 6% of a gigabit connection and about 50% of a medium-quality spinning-rust HDD.
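
The same exercise for the Cassandra numbers, again just re-running the stated assumptions in Python:

# Cassandra benchmark back-of-the-envelope, using the assumptions above
msgs_per_sec = 1200000
msg_bytes = 170
replicas = 3                  # each message is written to 2+1 nodes
servers = 100                 # being generous about the resiliency test
MB = 2.0 ** 20

total_disk = msgs_per_sec * msg_bytes * replicas
print(total_disk / MB)                    # ~584 MB/s of disk writes in total (the ~600 MB/s above)
print(total_disk / 0.8 / MB)              # ~730 MB/s if replication goes over TCP

print(total_disk / 0.8 / servers / MB)    # ~7.3 MB/s of network traffic per server
print(total_disk / servers / MB)          # ~5.8 MB/s of disk traffic per server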

My challenge to you: the next time you see such a benchmark, do a quick back-of-the-envelope calculation, and if it uses less than 60% of the available throughput, call the people out on it!

Wednesday, February 05, 2014

Proxying pypi / npm / etc for fun and profit!


Package managers for source code (like pypi, npm, nuget, maven, gems, etc.) are great! We should all use them. But what happens if the central repository goes down? Suddenly all your continuous builds / deploys fail for no good reason. Here is a way to prevent that:

Configure Apache as a caching proxy fronting these services. This means that you can tolerate downtime of the services and you get quicker builds (since you don't need to contact remote servers). It also has a security benefit (you can firewall off your build server so that it can't make any outgoing connections), and it's nice to avoid consuming the bandwidth of those registries (especially since they are provided for free).

Without further ado, here are the config bits for Apache 2.4.

/etc/apache2/force_cache_proxy.conf - the general configuration file for caching:

# Security - we don't want to act as a proxy to arbitrary hosts
ProxyRequests Off
SSLProxyEngine On
 
# Cache files to disk
CacheEnable disk /
CacheMinFileSize 0
# cache up to 100MB
CacheMaxFileSize 104857600
# Expire cache in one day
CacheMinExpire 86400
CacheDefaultExpire 86400
# Try really hard to cache requests
CacheIgnoreCacheControl On
CacheIgnoreNoLastMod On
CacheStoreExpired On
CacheStoreNoStore On
CacheStorePrivate On
# If remote can't be reached, reply from cache
CacheStaleOnError On
# Provide information about cache in reply headers
CacheDetailHeader On
CacheHeader On
 
# Only allow requests from localhost
<Location />
        Require local
</Location>
 
<Proxy *>
        # Don't send X-Forwarded-* headers - don't leak local hosts
        # And some servers get confused by them
        ProxyAddHeaders Off
</Proxy>

# Small timeout to avoid blocking the build for too long
ProxyTimeout    5

Now with this prepared we can create the individual configurations for the services we wish to proxy:

For pypi:

# pypi mirror
Listen 127.1.1.1:8001

<VirtualHost 127.1.1.1:8001>
        Include force_cache_proxy.conf

        ProxyPass         /  https://pypi.python.org/ status=I
        ProxyPassReverse  /  https://pypi.python.org/
</VirtualHost>

For npm:

# npm mirror
Listen 127.1.1.1:8000

<VirtualHost 127.1.1.1:8000>
        Include force_cache_proxy.conf

        ProxyPass         /  https://registry.npmjs.org/ status=I
        ProxyPassReverse  /  https://registry.npmjs.org/
</VirtualHost>

After configuration you need to enable the sites (a2ensite) as well as the needed modules (a2enmod - ssl, cache, cache_disk, proxy, proxy_http).
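
For example, on a Debian-style layout, assuming the two vhosts above are saved as sites-available/pypi-mirror.conf and sites-available/npm-mirror.conf (the file names are arbitrary):

a2enmod ssl cache cache_disk proxy proxy_http
a2ensite pypi-mirror npm-mirror
service apache2 reload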

Finally you need to configure your package manager clients to use these endpoints:

For npm you need to edit ~/.npmrc (or use npm config set) and add registry = http://127.1.1.1:8000/
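
Or, as the one-liner variant mentioned above:

npm config set registry http://127.1.1.1:8000/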

For Python / pip you need to edit ~/.pip/pip.conf (I recommend having download-cache as per Stavros's post):

[global]
download-cache = ~/.cache/pip/
index-url = http://127.1.1.1:8001/simple/
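
If you'd rather try the mirror out before touching pip.conf, pip also accepts the index URL on the command line (the package name here is just an example):

pip install --index-url http://127.1.1.1:8001/simple/ requests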

If you use setuptools (why!? just stop and use pip :-)), your config is ~/.pydistutils.cfg:

[easy_install]
index_url = http://127.1.1.1:8001/simple/

Also, if you use buildout, the needed config adjustment in buildout.cfg is:

[buildout]
index = http://127.1.1.1:8001/simple/
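
To check that requests are actually being served from the cache, look for the X-Cache / X-Cache-Detail headers added by the CacheHeader / CacheDetailHeader directives above (the path is just an example package index page):

curl -s -o /dev/null -D - http://127.1.1.1:8001/simple/requests/ | grep -i x-cache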

This is mostly it. If your client uses any kind of local caching, you should clear that cache and reinstall all the dependencies to ensure that Apache has them cached on disk. There are also dedicated solutions for caching these repositories (for example devpi for Python and npm-lazy-mirror for Node), however I found them somewhat unreliable, and with Apache you get a uniform solution which already has things like startup / supervision implemented and which is familiar to most sysadmins.