It never ceases to amaze me what speeds a P2P (BitTorrent) client can achieve with almost no server infrastructure! Heck, I can download conference videos as fast as if they were served up by Akamai's carefully tuned architecture! Which brings me to my point:
While the current approach to network design (aggregating connections with an n:1 ratio) seems to be the only economically workable way to build networks, it introduces choke-points by design. Multicasting has never really taken off at the IP level, but I think there is great potential for it to succeed at the application level (layer seven, for all the OSI geeks :-)). The key factors in my opinion are:
- There is an increasing amount of rich media on the internet (videos, podcasts, etc.)
- Those files are mostly static! (An important criterion for making caching - be it centralized or distributed - efficient)
All we need is an attractive enough application built on such a foundation for the technology to take off, because we already have enough expertise to implement it.
Update: the BitTorrent protocol is somewhat vague on the peer selection method. Possibly each client implements a different heuristic, perhaps based on IP distance (taking the XOR of two IPs as the distance between them and preferring peers with a lower distance). Two recent articles from George Ou's blog point out that there are many issues which must be implemented correctly for maximum performance. Also, the idea is certainly not new: software has already been created for distributed web caching and tested on a large scale.
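To make the XOR-distance idea concrete, here is a minimal Python sketch. It is only an illustration of the heuristic as I described it above, not what any particular BitTorrent client actually implements; the addresses and function names are made up for the example:

```python
import ipaddress

def xor_distance(ip_a: str, ip_b: str) -> int:
    """XOR the integer forms of two IPv4 addresses; a smaller
    result loosely suggests the peers share more prefix bits
    (and are thus often topologically 'closer')."""
    return int(ipaddress.IPv4Address(ip_a)) ^ int(ipaddress.IPv4Address(ip_b))

def rank_peers(own_ip: str, peer_ips: list[str]) -> list[str]:
    """Sort candidate peers by XOR distance, nearest first."""
    return sorted(peer_ips, key=lambda peer: xor_distance(own_ip, peer))

if __name__ == "__main__":
    # Example addresses from the RFC 5737 documentation ranges
    me = "192.0.2.10"
    peers = ["192.0.2.200", "198.51.100.7", "203.0.113.42"]
    for peer in rank_peers(me, peers):
        print(peer, xor_distance(me, peer))
```

Note that XOR distance is only a crude proxy for network proximity, since numerically adjacent IP blocks can belong to entirely different networks; it is appealing mainly because it is cheap to compute with no extra measurement traffic.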