[00:04] you get heaps of data though [00:09] 1) Why eat up bandwidth? 2) Wanted to be sure it was GbE through-and-through, 3) I forgot there was IPv4 access. [00:09] Plus jumbo frames on the vlan [00:10] oh there's jumbo frames on the vlan? [00:10] jumbo frames don't actually make much real world difference ime [00:10] If you get a second nic on your vps [00:10] it can use less cpu on some old systems [00:10] i can't believe how much i'm struggling to stick intel stock cpu cooler on [00:10] (and new ones, depending on bw/pps usage) [00:11] it's not like i've never done it before [00:11] Me either. They're pretty easy. [00:11] and i never remembered it being a big deal before [00:11] Are you sure you've done everything else correctly? CPU latched properly? [00:12] found the issue [00:12] one of the bits that goes in is "to its side" [00:13] can't see how cpu couldn't be latched properly [00:13] There was a time I would have said that too. [00:13] Never, EVER underestimate human ingenuity when it comes to screwing up the idiot-proof. [00:13] They forced it shut with the cpu not properly seated and actually bent the CPU [00:14] And after that day, management understood why I referred to the production workers as monkeys. [00:14] (well they understood before, but that really illustrated it) [00:16] * brycec does not feel like delving into SASL setup for LDAP binding auth [00:16] But everything else on this Prosody server is looking good, woo [00:18] Including the most important https://code.google.com/p/prosody-modules/wiki/mod_swedishchef [00:46] finally got it [00:46] swapped to another cpu cooler which "just worked" [00:46] so then i plug in computer and nothing comes up :/ [00:47] took me a while to realise that the monitor wasn't auto selecting dvi [00:47] i hate hardware :) [00:47] and monitors seem to hate me atm [00:53] Go install, go! (reinstalling my dedi machine with raid-1 and zfs root) [00:53] (zpool mirror, that is) [00:57] hah [00:57] freebsd or linux? 
[00:57] Linux, just for fun (and because I like Proxmox) [00:57] i bet there are systems out there already, but i want to see how fast i can make an install happen on bare hardware over the network [00:58] ie the right mix of compression file system extraction :) [00:58] err mix of compression and file system extraction [00:58] lz4 has just introduced new faster compression, again. [00:58] it's not altogether exciting - you can sacrifice some compression ratio for "even faster" performance. [00:59] on that note, on zfs i find lz4 works really well. [00:59] and i'd suggest using it :) [00:59] I really, REALLY, REEEAALLY wish the java console viewer would stop stealing focus every dang time the video mode changes. Getting REALLY FUCKING PISSED-OFF. *deep breaths* [00:59] haha god [01:00] supermicro gear may be ok [01:00] but their out of band management really sucks. [01:00] it needs to die, in a fire. [01:00] i dunno if it's changed, but in the past supermicro have had this remote iso functionality, and it doesn't even use decent tcp window sizes [01:00] so it's painfully slow. [01:00] err painfully slow if you have like 20msec+ ping. [01:01] it's probably fine for < 1 msec. [01:01] i think it was using 16k or something [01:02] hmm, i have to figure out how to make arch linux stop setting graphics vga mode at some point [01:02] actually i wonder if there's a way to get arch to do serial console for rescue system [01:03] https://wiki.archlinux.org/index.php/Working_with_the_serial_console [01:03] yeah that's not about the "rescue" cd though [01:03] oh I thought you meant rescue.target [01:03] i hosed my file system by doing a mv /* or something :) [01:04] oh what's rescue.target? 
[01:04] A systemd target for doing rescue stuff [01:04] (not helpful I know) [01:04] hmm [01:04] systemd.unit=rescue.target on the kernel line [01:04] yeah it's not common, and it worked :/ [01:04] aka run level 1 [01:04] hp's lights out pisses me off too :) [01:05] like it's cool you can type textcons and don't even need serial console setup to get remote text mode [01:05] err via ssh [01:05] and vsp to get a serial console [01:05] but if things are in graphics mode you have to use java or activex [01:05] and java seems to keep giving issues :/ [01:06] i don't use it that much though [01:41] I keep forgetting IPv6 traffic is limited to 100mbps and then wondering why I'm only getting ~7MB/s to mirrors.arpnetworks.com [01:42] yeah it bugs me too :) [02:03] damn i was curious how the Ubuntu LTS kernels work, and i try and read up about it and i'm even more confused. [02:04] it seems ubuntu now by default installs newer kernels for patch releases. [02:04] but i dunno how long support for these kernels lasts for [02:05] Wheee 75MB/s off S3 [02:05] wow [02:05] how close is the s3 server? [02:05] LAX - North California [02:05] (I'll run a traceroute in a bit) [02:05] but it's s3-us-west-1.amazonaws.com I believe [02:06] so it seems precise can be upgraded to trusty kernel and be supported until 2017 [02:09] (Don't look at me - I avoid Ubuntu wherever possible :P) [02:09] heh [02:09] i prefer self-compiled kernels [02:10] Woo, booting OpenBSD off a zdev [02:10] (via kvm) [02:10] zvol? [02:10] zvol. [02:10] block storage carved out of a zpool [02:10] heh yeh pretty awesome [02:10] what volblocksize did you use? [02:11] whatever Proxmox defaulted to. 
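The rescue-mode and serial-console tricks being discussed both come down to kernel command-line parameters. A rough sketch for a GRUB-based system (the serial device `ttyS0`, baud rate, and unit number are assumptions that depend on the board and its serial-over-LAN wiring):

```shell
# One-off: edit the kernel line at the GRUB menu and append:
#   systemd.unit=rescue.target          <- boot straight into rescue mode (the old runlevel 1)
#   console=tty0 console=ttyS0,115200   <- mirror the console onto the first serial port

# Persistent version in /etc/default/grub (then regenerate with update-grub / grub-mkconfig):
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```

The same `console=` parameters can be typed at the boot prompt of a rescue/install CD, which is what makes a serial rescue console possible on boxes whose BMC only offers text-mode redirection.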
I didn't even think to look [02:11] erk :/ [02:11] there can be a bit of overhead, so raising it above the 8k can help [02:11] but it can mean read/write amplification [02:11] but now that everyone seems to have 4k disks, zfs overhead stuff is getting a little scary [02:12] the mirror case isn't as bad as raidz though afaik [02:12] (It's using the ZFS default of 8k) [02:12] but yeah i've started using bigger block sizes. also using lz4 compression though [02:12] yeah [02:13] if you're bored sometime you can try playing with it :) [02:13] does proxmox make it easy to do autosnapshotting? [02:13] Don't yet know. (FreeNAS sure does though, very nice) [02:14] yeah it is nice [02:14] can use up a lot of space, but so handy :) [02:14] you can expose the snapshot directory on real file systems [02:14] err when using native zfs as opposed to zvol i mean [02:14] yeah [02:15] or just cd to the directory that was hidden there anyways :p [02:15] heh [02:15] tab completion is nice :/ [02:16] aw dang 20GB written before I realized it's using lzjb rather than lz4. Oh well, not a big deal... 
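For reference, the volblocksize and compression settings in question look like this (the zvol name matches the `rpool/vm-201-disk-1` mentioned later; sizes are illustrative — note volblocksize is fixed at creation time, while compression can be changed whenever, affecting only new writes):

```shell
# create a 32G zvol with a 16k volblocksize instead of the (then-)default 8k
zfs create -V 32G -o volblocksize=16k rpool/vm-201-disk-1

# lz4 is inheritable, so setting it on the pool root covers new datasets too
zfs set compression=lz4 rpool

# confirm what you actually got
zfs get volblocksize,compression,compressratio rpool/vm-201-disk-1
```

This is why the "20GB written before I realized it's using lzjb" situation is only half-fixable: flipping to lz4 after the fact leaves the already-written blocks compressed with lzjb.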
[02:17] damn [02:17] yeh i used to use lzjb with opensolaris [02:18] lz4 is a little quicker [02:18] but both are way quicker than gzip [02:18] Nothing wrong with lzjb, still got 1.26x from it even, but lz4 is better [02:18] hmm i wonder what my ratio is like [02:19] refcompressratio 1.10x [02:19] that's for /home which probably has some big tarballs on it somewhere [02:20] rpool/vm-201-disk-1 1.96x [02:20] the cool thing about lz4 is it's so cheap it doesn't matter if you have bulky stuff on it [02:20] for the VM I just restored from backup :) (it's pretty empty though) [02:20] aye [02:20] it's even cheaper for stuff that doesn't compress [02:21] 1.41x for /home on the box i'm irc'ing from [02:22] My best ratio is 2.47x for a mysql database volume, and 2.26x for a fat postgresql volume [02:22] but yeah again with 4k disks, lz4 if it doesn't get under 4k will use 8k still with 8k volblocksize [02:22] it doesn't condense :( [02:23] now you just need an ssd cache :) [02:23] may have to be pci-e :) [02:23] I have an SSD cache, and a nasty FreeBSD bug that causes my host machine to crash :P [02:23] i'm mostly kidding, you probably don't do many reads [02:23] (that's my home box, that is) [02:23] heh [02:24] i've been using l2arc only for metadata [02:24] Yes \o/ Just realized my favourite feature with using ZFS on Proxmox -- I don't have to manually format+mount anything, "zfs create" does it all. [02:24] it gives me most of the boost i care about [02:24] Nice [02:24] I have a few spare 60GB SSD's so I just threw them at it. [02:24] it means you ls in a directory with heaps of files and it doesn't delay [02:24] yeah [02:24] by default it won't do much for sequential anyway [02:25] and if you have "plenty" of ram then all the important stuff will be in memory anyway [02:25] and if you use zvol's etc you're likely to get into double caching [02:26] (oh man I forgot how nice zfs set quota= is too... 
It's been too long) [02:27] not that double caching is necessarily bad, but it doesn't alert zfs to the most frequently used data. [02:27] and the most recently used is "old" data. [02:28] i wonder if you could get linux to cache less [02:31] are you using zil bryce? [02:31] that i think can help more... [02:35] Not on this proxmox box, no [02:35] But I do on my home system [02:35] (mirrored, no less) [02:38] cool [02:38] yeh i added another ssd to a hard-disk server recently [02:38] going to look at setting up both as l2arc and some zfs pool first [02:38] but considering trying zil [02:44] hard to go wrong :P Besides, ZIL/L2ARC can always be added/removed any time in the life of the pool so there's no cost to trying it out. [02:45] (assuming your pool is "fixed" and you're just adding/removing disks or ssds) [02:45] yeah [02:45] and you only need like a gig [02:45] i've made my cache drive way too big hah [02:45] atm it's 80gb [02:46] using 60gb [02:46] wow it's had more reads than all of the other drives together though [02:46] "WARNING: MD5 signatures do not match:" Dammit Amazon, stop running up my S3 bill. [02:47] why are you using s3? :/ [02:47] Because I needed somewhere to stash 150GB briefly. [02:47] (and cheaply, with a fast connection) [02:48] ahh [02:48] so cos temp [02:48] i could have stored 150gb for you :/ [02:49] it's cool that you can even do that easily these days. [02:49] Heh, thanks for the late offer [02:49] heh i didn't know you needed some temp space. ;) [02:49] It's very cool. With a fast enough connection, storage is completely elastic. [02:50] even with vdsl i'm using offsite storage more and more [02:50] lots of things aren't really performance sensitive [02:50] I'm probably going to continue using it once I pare down what I sync to it as another offsite storage location. [02:51] i backed up all of my important home stuff remotely. [02:51] took like 24 hours or so :) [02:51] with 9.5 megabit upload. 
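The cache/log flexibility mentioned above, plus the metadata-only L2ARC trick, look like this in practice (pool and device names are placeholders):

```shell
# l2arc (cache) and slog ("zil") devices can be attached any time in a pool's life...
zpool add tank cache /dev/sdd                   # single ssd as l2arc
zpool add tank log mirror /dev/sde /dev/sdf     # mirrored slog, as brycec runs at home

# ...and removed again with no harm done, unlike normal vdevs
zpool remove tank /dev/sdd

# restrict the l2arc to metadata only, so directory listings stay fast
# without burning cache space on bulk data
zfs set secondarycache=metadata tank
```

The metadata-only setting is per-dataset and inheritable, so it can also be applied selectively rather than pool-wide.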
[02:51] I've been backing up personal stuff to S3 for ages (thanks to duplicity). Cheap, fast, reliable, easily encrypted. [02:51] but as long as you set up aqm, it doesn't impact other stuff too badly. [02:52] But those backups don't even break 100GB [02:52] heh [02:52] my home directory volume is only using 98gb [02:52] (and this dedi box is 9ms and 14 hops from S3) [02:52] heh [02:52] my dedicated box was 5 msec ping from me :) [02:52] damn interleaving being on now [02:53] i have a personal dedicated server in nz [02:53] with zfs etc. [02:53] wow, 5ms? that's practically beside you [02:53] well it's like 30km from me [02:53] my first hop past my home router is 12-16ms [02:54] well it terminates on the same lan as my internet connection too [02:54] so it's like single hop away :) [02:55] mine's uhh 12 or 13 msec now, due to 8 msec downstream interleaving [02:55] but yeah it's not laggy at all :) [03:26] * brycec is always entertained watching RAID-1 divvy up reads between disks. Just something fun watching it in iostat. [03:27] yeah it doesn't work very well with hard-disks ime [03:27] but it works well with ssd's [03:27] seems to work well enough [03:28] Frankly with the speed of SSD's the improvement is less than the improvement in read performance seen with hdd's [03:28] do you ever look at zpool iostat -v 1 [03:28] watching it now :) [03:28] same haha [03:29] my second ssd cache only has 57.4mb allocated [03:29] but it's still doing reads for some reason. [03:29] hmm so much for l2arc being a waste of time :) [03:29] the other ssd seems to actually do quite a lot of requests. [03:30] probably means i need more ram in there :) [03:31] it does more reads than all the hard-disks, but less writes. 
[03:31] but less writes than any of the hard-disks [03:31] i suppose 60gb of ram isn't cheap [03:34] i'm semi tempted to try this zil thing [03:35] 'night mercutio [03:35] it must be like 3:30 am for you [03:35] 'night :) [03:35] precisely right [03:35] 'night [05:30] awesome... got icinga for floss weekly [06:00] icinga? [06:00] ahh a monitoring system [06:01] graphite sounds nice [06:14] it's a hostile fork of nagios [06:14] I've decided those are called "pitchforks" :) [06:15] haha i like the name [06:15] pitchfork that is [06:15] i've been wanting to do some kind of real time web ping thing [06:16] so i'm hoping graphite will make that easier. [06:21] *** tabthorpe has joined #arpnetworks [06:25] yeah [06:25] a fork that is hostile is a pitchfork [06:26] hopefully my show will set the meme [08:42] *** RandalSchwartz has quit IRC (Remote host closed the connection) [10:40] I wonder what makes it a "hostile" fork [10:42] Is that to differentiate against a GitHub fork? [10:43] mike-burns: Normally that the old project is still in active development and doesn't want to split its userbase with the fork [10:44] And yes. The github workflow actively promotes forking to deploy your fix and request that the original version pull in your patch [10:44] *** RandalSchwartz has joined #arpnetworks [10:45] With the implicit assumption that if you write new patches and the original developer(s) do nothing and don't pull them in, you become the de facto standard version [11:18] *** mkb has quit IRC (Remote host closed the connection) [11:40] *** mkb has joined #arpnetworks [11:41] reinstall to 5.7 was easy enough [11:41] Excellenty [11:41] *-y [11:41] siteXX.tgz makes things so easy [11:41] It does :) [12:09] *** mkb has quit IRC (Remote host closed the connection) [12:11] *** mkb has joined #arpnetworks [14:34] * brycec is pulling 99MB/s off AWS, wheeee [14:34] I could get used to this GbE connection... 
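The duplicity-to-S3 backups mentioned earlier typically look something like this (bucket name, paths, and GPG key id are all made-up placeholders; the exact S3 URL scheme varies between duplicity versions):

```shell
# credentials for the S3 backend (values elided)
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# encrypted incremental backup of /home; duplicity only uploads changed chunks
duplicity --encrypt-key DEADBEEF /home s3+http://my-backup-bucket/home

# restores are the only time you pay transfer-out, hence "good backup storage"
duplicity restore s3+http://my-backup-bucket/home /tmp/restore
```

Pairing this with periodic `duplicity verify` against a local copy matches the "verify locally, S3 as off-site" workflow described above.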
[14:36] heh [14:36] bryce is hogging all the bandwidth :) [14:36] lol, am not :P (Because ARP has a bigger pipe than just 1Gbps) [14:36] yeh i know [14:37] i like the idea of gigabit for home users. [14:37] GbE on a LAN is old hat to me, but getting GbE over THE INTERNET is blowing my mind. [14:37] although more would be better :) [14:39] * brycec is afraid to see this month's Amazon bill [14:40] heh [14:40] you should check it out early then :) [14:40] almost triple my usual monthly bill so far [14:40] (my usual monthly bill being <$5) [14:41] $10.75 on data transfer alone [14:41] $4.54 last month, and $18.86 projected this month. [14:43] wow so all this upload/download is costing me $10+ in transfer, but only $.30 for S3 [14:43] Storage is stupid-cheap :D [14:46] sweet [14:46] that makes it not so good for short term. [14:47] It makes for good backup storage - you only pay when you have to do a restore :p [14:47] heh [14:48] (Presumably you backup and verify locally and S3 is just extra off-site storage that only needs to be verified periodically.) [14:48] And just remember - multipart uploads store an invalid MD5 on the object in S3. [14:52] (Looks like the sata drives in my dedi max out at 120MB/s sequential write, not too bad considering it's interspersed with other random reads+writes as I move files between volumes) [14:53] are they re4s? 
[14:53] WDC WD1003FBYX-01Y7B1 [14:54] why is sda hotter than sdb/sdc hah [14:54] yeh [14:54] Model Family: Western Digital RE4 [14:54] Device Model: WDC WD1003FBYX-01Y7B1 [14:54] If my "sda" is hotter than sdb or sdc, I have real issues because sda is the IPMI virtual CD drive :P [14:54] heh [14:54] i was using smartctl [14:54] it's only 34c, that's not bad [14:55] That's pretty reasonable, yeah [14:55] but the other two are 30/31 [14:55] (thanks for reminding me to install smartmontools) [14:56] oh re4 might not be 4k [14:56] woot it's not [14:57] That's a little surprising in 2015, but I guess that comes with "RE", perhaps for compatibility with controllers and other storage stuff. [14:57] the re4s are pretty good for random [14:57] 4k is annoying for zfs overhead [14:58] According to the spec sheet http://www.wdc.com/wdproducts/library/SpecSheet/ENG/2879-701338.pdf the 1TB also has the lowest power draw over 2TB, 500GB, and 250GB. Interesting. [14:58] it'll be single platter. [14:58] the 250/500 may be older [14:58] (I would guess the 500/250 are 4/2 platters, and 2/1 are 2/1 platters) [14:58] Yeah 4/2 because they're laying around, and it can improve performance [14:59] single platter drives tend to die less too. [14:59] (actually 500/250 are the same weight) [14:59] yeh it may just be short stroked [14:59] oh wow, I'm an idiot, it's printed in the specs [15:00] i used to be into short stroking [15:00] 500/250 are single platter and 2 or 1 heads [15:00] well i still am i suppose [15:00] 2TB is 8 heads/4 platters, and 1TB is half that. [15:01] but back when drive performance was one of the normal hindrances, using less than all of the disk made quite a difference [15:37] Hmm. changed 3 more things [15:37] oops wrong window [15:41] *4 things -- you changed to the wrong window :p [15:42] heh [15:46] That's odd... 
eth0 on my dedi box is flapping [15:46] looks like about once a minute it goes up and then back down 2 seconds later [15:54] weird [15:54] what chipset is it? [15:54] intel igb [15:55] (is the driver, I know) [15:55] gb [15:55] igb [15:55] hmm mine is e1000e [15:55] 01:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01) [15:55] what kernel are you using? [15:55] but eth1 is fine [15:55] are they both 82580? [15:55] so either my bonding didn't take effect right, or ARP has an issue. [15:55] yeah [15:56] bonding is weird [15:56] i'm only using one interface [15:56] 82580 is one of the flakier chipsets, i'd make sure you were using a recent igb driver [15:56] bloody intel and their errata [15:56] my home server had onboard intel and it was flakey too [15:57] and that was i217v or i218v [15:57] i can't remember which [15:57] (And because you asked 2.6.32-37-pve, but I'm about to reboot into a newer kernel) [15:58] yeh [15:58] i'd definitely try a newer kernel first [15:58] before any real debugging [15:58] i expect it to magically get better. [15:58] (I didn't notice anything wrong before my reinstalls, and they would've been running the newer kernel too) [15:58] if you read intel errata there are heaps of edge cases that don't work properly. [15:58] that are patched around etc. [15:58] you were using openbsd though? 
[15:59] openbsd doesn't enable some of the flakier features :) [15:59] it's generally things like segment offload etc that have issues [15:59] you can disable with ethtool [15:59] No this box has been Debian for months until I started messing around with it last night [15:59] ahh right [15:59] 2.6.32 is ancient :/ [15:59] And yeah I've backported intel drivers before, plenty of reading [16:00] only as ancient as Debian Wheezy :p [16:00] broadcom are bad too :) [16:00] wheezy is ancient haha [16:00] i can't remember what kernel jessie has [16:01] 3.16.0-4-amd64 [16:01] jessie ^ [16:01] cool i found something from last year suggesting that [16:01] 3.16 should be fine [16:01] Now running 2.6.32-39-pve on this dedi... let's see if it continues [16:01] i think i'm using 3.13 [16:05] yeah 3.13. [16:05] i've found 3.13 to be a nice stable kernel version [16:05] You're behind Jessie? [16:05] this is on ubuntu trusty [16:05] with custom kernel [16:05] i dunno what trusty uses by default [16:05] ah [16:05] trusty uses 3.13 too [16:06] it was about the time trusty came out that it got installed. [16:06] i think slightly before [16:06] and i figured that nothing big would change :) [16:08] i keep meaning to upgrade it actually [16:08] but probably worth waiting a bit more [16:09] Looking solid post-reboot. Was either a driver bug or just something hadn't initialized right last time around. [16:09] driver bug i suspect [16:09] you could have probably used newer igb without newer kernel [16:10] but newer kernel is better in general [17:23] (For those wondering, that kernel upgrade also brought an upgrade from 5.2.15 to 5.2.18 of the igb driver) [17:37] why does geotrust still need an intermediate cert? [17:38] it seems the same intermediate cert is used everywhere [18:58] woot, apnic whois is /finally/ back. 
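The "disable with ethtool" suggestion above, for reference (interface name is an assumption, and which offloads are worth killing depends on the NIC and its errata):

```shell
# show the current offload feature state for the interface
ethtool -k eth0

# turn off the segmentation offloads most often implicated in flaky intel/broadcom NICs
ethtool -K eth0 tso off gso off gro off

# checksum offloads can be disabled too, at the cost of extra cpu
ethtool -K eth0 rx off tx off
```

Settings made this way don't survive a reboot, so anything that helps needs to go into the distro's interface configuration to stick.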
[18:59] pity there's monopolies on registrars :( [21:56] @weather -v yyz [21:56] Toronto-Pearson International, Ontario: Partly Cloudy ☁ 43°F (6°C), Humidity: 61%, Wind: From the WNW at 22 MPH Gusting to 29 MPH, Pressure: 30.07inHg (1018mb) and holding, Dewpoint: 30°F (-1°C), Feels like 34°F (1°C), Visibility: 15Mi (24km), UV index: 0, Sunrise 05:48, Sunset: 20:42, Lunar phase: New moon [21:56] Wednesday: Partly Cloudy 62°F/44°F (17°C/7°C) | Thursday: Clear 68°F/43°F (20°C/6°C) | Friday: Clear 57°F/38°F (14°C/3°C) | Saturday: Clear 69°F/50°F (21°C/10°C) [21:56] The average high for this date is 62°F (16°C), and the record of 81°F (27°C) was set in 2012. The average low is 45°F (7°C), and the record of 32°F (0°C) was set in 2002 [22:11] Looks like I'm getting bogons inbound to my VM... [22:11] faked ip's? [22:12] hardly anyone filters ip source addresses. [22:12] 03:20:51.452048 IP 10.8.19.209 > 174.136.105.34: ICMP time exceeded in-transit, length 36 [22:12] pretty sure my VM isn't pinging 10.8 [22:12] and not enough people filter outbound addresses to just have them. people using bogon filters can be a pita with these new weird ip addresses in use due to starvation. [22:13] i dunno what arin is like, but apnic is using some previously bogon addresses [22:13] fun [22:15] looks like 240.0.0.0/4 wasn't in the bogon list before but was added [22:15] yay multicast [22:15] and some CIDRs aren't in the list but should be - 7/8, for example [22:15] i wouldn't really worry too much [22:16] there's some lists of worm addresses etc that may be useful [22:16] but most malicious traffic is using real addresses [22:17] 10.8.19.209 responding could be because you did a mtr somewhere, and a router has a private ip. [22:17] i'd say it doesn't hurt having it come in, and means it doesn't show missing hops. [22:19] didn't think it's a big issue or anything, just slightly weird [22:19] router loopback actually makes sense [22:20] and Apple gave back half their /8. 
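Spotting stray traffic like the 10.8.19.209 ICMP above is easy to do ad hoc (interface name assumed; this filter only covers the classic RFC1918 ranges, not a full bogon list):

```shell
# watch for inbound packets with private source addresses
tcpdump -ni eth0 'src net 10.0.0.0/8 or src net 172.16.0.0/12 or src net 192.168.0.0/16'

# or drop them at the host instead -- though as noted above, letting ICMP
# time-exceeded from private router hops through keeps traceroutes complete
iptables -A INPUT -s 10.0.0.0/8 -j DROP
```

A proper bogon filter would pull a maintained prefix list rather than hardcoding ranges, precisely because (as the conversation notes) formerly-bogon space keeps being allocated.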
this shows that I'm getting tvtropes-ed by 'show ip bgp' and should go to bed now. [23:11] there's a huge long thread on nanog about 10ge routers atm [23:12] i thought it would mostly be about people suggesting routeros hah [23:21] But instead they're suggesting pfSense? :p [23:27] well someone was talking about dpdk and line rate. [23:28] and netmap etc. [23:29] i wish somebody put together something proper heh [23:40] hmm there aren't actually many options it seems if you want small packet forwarding performance [23:59] pfSense is really playing up their version 3, now with dpdk for FASTAR packetz [23:59] oof