On 08/05/14 13:09, Rob Beard wrote:
> Hi folks,
>
> A couple of years ago when working for a local radio station I came
> across Link Aggregation to bond two gigabit network links to a switch to
> give double the bandwidth and a bit of redundancy if a link went down.
>
> Recently I've been thinking about how to get faster than 1 Gigabit
> Ethernet without having to go down the fibre route and I wondered if it
> was possible to do link aggregation or bonding with just network cards.
>
> What I'm thinking is two Linux boxes with 2 or more Gigabit NICs
> connected with cables between them and no switch and having them talk to
> each other at 2 Gigabit (or more depending if more cards are added).
>
> Does anyone know if this is possible or would it require a switch?
>
> Rob

What you want is 802.3ad link aggregation, aka 'bonding'. Run "modinfo bonding" on your Linux boxes to see what modes they support: this is a relatively complex area. As far as I understand it, you cannot do this without a suitable switch in the middle outside of the exotic high end (think many thousands of pounds' worth of AIX or other proprietary Unix gear; I believe vSphere might also support it). The good news is that 8-port 'prosumer' gigabit switches that will do the job nicely can be had for well under £100 these days; the key is to make sure the switch supports LACP. If you're really lucky, your existing switches may already be LACP compliant, saving you buying new ones.

If you want any help, feel free to ask, as I use bonding here at home. The big server downstairs has 4x1Gb teamed, and both my workstations and the older, smaller server have 2x1Gb connections, all hooked up to a Cisco switch. You definitely won't get an actual 4Gb connection from teaming 4x1Gb cards, but you will see considerable throughput increases.
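For reference, on a reasonably recent kernel a bond like this can be brought up with iproute2 alone. A minimal sketch, assuming the slave interfaces are named eth0 and eth1 (substitute whatever your boxes actually call them), run as root:

```shell
# Create an 802.3ad (LACP) bond with link monitoring every 100ms.
# Requires root and the bonding kernel module; eth0/eth1 are assumed names.
ip link add bond0 type bond mode 802.3ad miimon 100

# Slaves must be down before they can be enslaved to the bond.
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring the bond up, then check that LACP actually negotiated with the switch.
ip link set bond0 up
cat /proc/net/bonding/bond0
```

The /proc/net/bonding/bond0 output shows per-slave LACP state, which is the quickest way to confirm the switch end is configured correctly.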
Cheers

http://en.wikipedia.org/wiki/Link_aggregation

EDIT: Gordon just answered while I was typing, and I believe he is wrong on both counts: you can't do back-to-back connections with a single cross-over cable for 802.3ad; a LACP-compliant switch is required. Similarly, unless you are using crappy x1-lane cheapo NICs (which you obviously shouldn't on a server), there will be plenty of bandwidth available. Proper Intel quad-port PCIe network cards won't even work in an x1 lane, and can happily use everything up to x16 lanes, which you will never come remotely close to saturating with a paltry theoretical maximum of 4Gb of simultaneous traffic. Sorry Gordon.

--
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/listfaq