About this server
Bandwidth and load graphs are available for all FTP machines.
This server is a cluster of three kinds of machines:
- Backend: reliable large storage that is NFS-mounted to all the frontends
and offloaders, NAS-style.
- Frontends: the machines behind the DNS names. They handle ftp, rsync and
most http requests, but pass http requests for large files on to the
offloaders.
- Offloaders: serve http requests for large files, such as distribution
cd/dvd images and movies.
The current disk backend is an HP DL180 G6 server called brokdorf.
Total usable space is about 120TiB.
We use caching frontends to deliver high aggregate bandwidth without
needing a disk backend that can handle the combined output on its own.
The cache used is mod_cache_disk_largefile, our modified version of the
mod_cache_disk module (previously mod_disk_cache) shipped with the Apache
HTTP Server. An XFS filesystem over striped SAS 10k/15k RPM disks is used
as local storage.
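A minimal sketch of how such a disk cache might be wired up in Apache. The directive names are from the stock mod_cache/mod_cache_disk modules; the module file name, cache path and size limit below are assumptions, not our actual configuration:

```apache
# Load the (locally modified) disk cache module -- file names assumed
LoadModule cache_module modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk_largefile.so

# Cache everything under / on the local striped-disk XFS filesystem
CacheEnable disk "/"
CacheRoot "/var/cache/apache"

# Raise the per-file size limit so large files are cacheable (10 GiB here)
CacheMaxFileSize 10737418240
```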
For certain file endings (like .iso) we use http redirects to send these
requests off to one of the offloaders. The choice of offloader is made by
a perl program and cached in Apache as a dbm-based rewrite map. To keep
the cache size reasonably low, a given URL is always redirected to the
same offloader.
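The "same URL always goes to the same offloader" property can be sketched as a deterministic hash over the URL. This is only an illustration: the offloader hostnames are made up, and the real selection is done by a perl program whose actual logic is not shown here.

```python
import hashlib

# Hypothetical offloader hostnames -- the real ones are not listed here.
OFFLOADERS = ["offloader1.example.org", "offloader2.example.org"]

def pick_offloader(url: str) -> str:
    """Deterministically map a URL to one offloader, so repeated
    requests for the same file always hit the same local cache."""
    digest = hashlib.md5(url.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(OFFLOADERS)
    return OFFLOADERS[index]
```

Because the mapping depends only on the URL, every request for a given .iso lands on the same machine, and each offloader's cache only ever holds its own slice of the large files.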
The visible frontends are:
The offloaders work just like the frontends, caching data locally, but they
only handle a small set of large files. The majority of the data is usually
sent by the offloaders, and cache re-use is high.
As a precaution against nosy users, the offloaders redirect directory indexes
back to the main frontends. And as a precaution against stupid download agents
we limit the maximum connections per IP to 10, and we always deliver data from
cache if the data is cached, even if the user agent sends "Cache-Control:
no-cache" or similar.
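Both behaviours can be expressed in an Apache configuration. CacheIgnoreCacheControl is a stock mod_cache directive; the per-IP cap is sketched here with the third-party mod_limitipconn module, which is an assumption about how such a limit might be enforced rather than a description of our actual setup:

```apache
# Cap simultaneous connections per client IP (third-party mod_limitipconn)
<IfModule mod_limitipconn.c>
    <Location "/">
        MaxConnPerIP 10
    </Location>
</IfModule>

# Serve from cache even when clients send "Cache-Control: no-cache"
CacheIgnoreCacheControl On
```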
The current offloaders are:
All machines are connected with gigabit Ethernet; the aggregated external
bandwidth available is 6 gigabit/s.
Current and historical bandwidth graphs for the frontends are also available.
The server is known by many names. Among them:
If you have any questions, please contact email@example.com and we will do our best to help you.