yep. back down to around zero now
interesting the impact it had on the whole VPS network
brycec: lol stop smashing things
up_the_irons: I contend that your things were already smashed
above 10mbps yet? ^_^
Continuous 10MB/s of IPv6 traffic shouldn't have that much effect, esp. on v4 traffic
brycec: ah so that was from pummeling my poor IPv6 OpenBSD router? ;)
"curl -6" So, yes.
is there a single 100mbit/s pipe serving all of the VPS?
no, 100 mbps per host
I'm wondering if mirrors.arp is on the same host as the v6 router and the website
Wait, you said mirrors. wasn't a vps
yeah, not a vps
so just v6 router and website - same host?
but the vps host that hosts the IPv6 OpenBSD router *will* be affected
and yeah, v6 router and website are on the same host
So if everybody was polling against the website's vps, and the host was burdened with my traffic... I guess that explains why even v4 traffic appeared to be affected. Sorry guys :(
yeah, that would make sense
everyone on kvr02 would see the lag
I'm still reeling from the revelation that the v6 router was a VPS
Always assumed that, like the other routers, it was a physical box
yeah, it still "does the job"
is the 100mbit/s limit imposed to prevent the VM hosts from fighting for bandwidth or something? or do they just have 100mbit/s NICs?
acf_: no, it's just that over 5 years ago when i designed everything, all gigabit line cards were really expensive ;)
ah, makes sense
So what's the excuse now, knowing that I have a second NIC with GbE intra-network (backup host)? [on my vps] :P
i imagine your traffic wasn't going over that link
Correct
My point being that the host machines have GbE links now, ARP has >=GbE uplink. Why don't VPS now have GbE?
oh you mean the excuse for not having every host on all gigabit? ;D
(It's not an issue, just wondering)
(If it was an issue, I'd throw money at Metal, or at least The American plan and bitch^Wrequest)
so, it's like this, all kvr hosts have 3 active links:
1. Primary uplink, 100 Mbps, to s1.lax
2. "Backplane" network, not primary, 2nd tier, "Can go down with sirens screaming", 1000 Mbps, to s6.lax
3. Secondary uplink, 1000 Mbps, to s7.lax (THIS IS NEW)
(yesterday alone, my eth0 rx was nearly 900GB, most of which from mirrors.arp)
technically, I can switch any vps to use #3, but it simply hasn't been a big priority. and as you guys probably know, s7.lax has had more problems than I'd like, so I refrained from switching people over.
s/Can go down with/Can go down without/
2. "Backplane" network, not primary, 2nd tier, "Can go down without sirens screaming", 1000 Mbps, to s6.lax
(makes more sense, thx)
np
so, as a Metal customer, I have gigabit to s1.lax? through other layer 2 switches I assume
s6.lax, for those wondering, is an Extreme Networks 48-port all gigabit switch, L2 only. If you're familiar with the purple packet eaters, you know why it is 2nd tier.
acf_: yes, Metal customers get 1 Gbps to s1.lax, through a pair of Foundry switches (s8.lax and s9.lax).
now everyone knows the entire network topology ;)
it's great. I learn a little bit more about your network every day :)
So s6 is purely internal networking
acf_: :)
thanks for the info btw
brycec: yes, purely internal
acf_: np
and i must say, that little Extreme switch has been a trooper
i wonder what the uptime is...
Yes, but is it on a rotating platform? http://www.skylineconst.com/technology/
i don't even know how to check on the Extreme...
If ARP had office space in a colo...
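As an aside, a minimal sketch of the kind of "curl -6" pull described above; the mirror hostname and file path here are placeholders, not the actual mirrors.arp layout:

  # -6 forces the transfer over IPv6 only; -o /dev/null discards the data,
  # so this just generates sustained v6 traffic toward the (placeholder) mirror
  $ curl -6 -o /dev/null http://mirrors.example.com/pub/OpenBSD/snapshots/amd64/install.iso

Repeated or looped, a pull like that is enough to load the uplink of whatever host the mirror and v6 router share, which matches the v4 lag everyone on kvr02 noticed.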
up_the_irons "show switch" danke System UpTime: 1172 days 5 hours 7 minutes 8 seconds that actually seems a little low ;) I wonder why i rebooted it ~3 years ago... up_the_irons: Need a spare for $30? http://www.ebay.com/itm/Extreme-Network-48si-48-port-Network-Switch-Awesome-Purple-/181471260591 (these things are littering eBay for 30-150) i think those are 100 mbit touche the gigabits are around 100-150 but i bet the X350-48t (s6.lax) isn't much more expensive ;) $75 http://www.ebay.com/itm/EXTREME-NETWORKS-16202-SUMMIT-x350-48t-48-PORT-10-100-GIGABIT-NETWORK-SWITCH-/191283473576 ya know what is great about them? they can do 4K active VLANs, which *no* other switch could do anywhere near its price range (not even Juniper) Nice. A pretty embarrassing shortcut, imo "Oh yeah, we support 4k VLANs, but not all at once" lol I have trouble wrapping my mind around any legitimate technical reason for that. every switch i researched had that problem. it was like, 4K VLANs supported, but only 1K at the same time. the cheap Dell PowerConnects: 1K VLANs, 250 active lame That's really pathetic yup oh shit it's more common that I'd realized I have a Netgear that supports "up to 64" yeah it's really common Alright, out of 3 managed switches, two are limited to 512, and the 3rd limited to 64. Good thing I don't need much. I have a new appreciation for up_the_irons' switch shopping LOL The worst however is a switch in a wifi ap that's been hardcoded to just 1 VLAN. Took forever to realize that and figure out why the guest SSID wasn't working right. No docs on the subject (and why would there be? it's just some home wifi ap), just a total mystery why I didn't see vlan-tagged traffic when tcpdumping on it By my estimate, I pulled 1.2TB from mirrors, hooray for internal bandwidth. LOL http://i.imgur.com/0MXJJa6.png brycec: thank you for "exercising" my network, sometimes it needs a good push ;) :D And now we've learned more about it, and what happens when you load down kvr02 yup:) time to get off this computer screen for a while, bbl iirc, the ex3300s support 4096 active vlans and 8000 routes (since it's a layer 3 switch) but they're also roughly 10x the cost of pre-owned extreme gear, and they're not purple the whole range of 3300s only support 8000 routes? hm the model i looked at did 8000 in hw but even if it goes to switching on the cpu, it's not terrible like if you drop stuff from cef on cisco gear haha, my graph looks fine now. started looking okay right around 0100 mdt up_the_irons: I was caught off guard when you revealed that arpnetworks is on a VPS, (I guess VPS is ok for a website) but the IPv6 Router - well I mean I knew it was a software BSD router, but one run in a VM? That really surprised me. Heh, I've known that for years! why is it surprising to use VMs for that? RandalSchwartz: I thought routing is something that needs near real-time CPU access to enable low Latency routing. VPS / VMs would add a delay, increasing latency besides it is routing IPv6 traffic for ALL of Arpnetworks and its customers if I am not mistaken mnathani - not if you have something like em0 which is a virtualized interface very little emulation :) Oh wait... I'm thinking of virtio does arpnetworks support virtio? if so, I wonder why we end up with em devices Oh. first in freebsd 9 not there yet. :) maybe it gets better later :) i've used a couple different VMs as IPv6 tunnel routers; not a ton of devices with v6 support, but still passing a fair amt of traffic arp's routing is being done by a VPS? v6.