[02:13] mus1cb0x: juniper is cheaper.
[02:43] any idea of which is better?
[02:43] junos being fbsd based is a big selling point, but that's only 1 part of the story
[02:43] thanks mercutio
[02:45] i use openbsd and freebsd myself :)
[02:45] how would you use that to replace a 1U top of rack switch?
[02:45] oh i thought you were talking about routers.
[02:45] i'm using cisco switches.
[02:46] ex4200s are rated rather highly for juniper switches though.
[02:46] highly?
[02:46] not too expensive, and good enough
[02:46] with routers people seem to suggest mx480 normally at the lower end
[02:47] ex3200 is used too
[02:48] but i dunno, what are you looking for in a switch?
[02:48] oh nothing really, just curious how datacenter networking gear has evolved over the past 5 years
[02:48] it hasn't much afaik
[02:49] it's mostly just that prices have come down a bit, and power use has come down a bit.
[02:49] is 10 gige to the client with 100 gig uplink common in switches now?
[02:49] it's still expensive to do 10 gigabit.
[02:49] or is it still 1 gige to the client
[02:49] ahh
[02:49] nope.
[02:49] seems like they are holding on to 10 gige prices
[02:49] yeah, this new 2.5 gigabit should help things.
[02:50] what's that?
[02:50] 2.5 gigabit is coming to desktops soon
[02:50] no shit?
[02:50] they can do 2.5 gigabit in a similar power envelope to current gigabit.
[02:50] using existing cable.
[02:50] over regular cat 5 copper?
[02:50] yeah afaik
[02:50] is cat 6 copper still not worth it?
[02:51] i dunno, i've never experienced any cables that work better with cat6 than cat5e.
[02:51] and most cat6 is thicker than cat5e.
[02:51] but you can get similar awg cat6.
[02:51] do gbics still have any value?
[02:51] the current recommendation is meant to be for cat6a.
[02:51] the ones that went in cisco 2950 and 3550 switches
[02:52] gbics?
[02:52] is that just sfp modules?
[02:52] yea the uplink modules
[02:52] i can't remember what interface they have
[02:52] it's probably just ftp
[02:52] sfp
[02:52] i can't type.
[02:52] yea
[02:52] (i've been drinking)
[02:53] ah hey don't feel down, things will pick up :/
[02:53] haha
[02:53] :P
[02:53] mostly gigabit is good enough for most people atm.
[02:53] 10 gigabit helps iscsi performance
[02:53] ah so for san environments
[02:54] but fibre channel and sas are fine too.
[02:54] i hear hot swap sas ssd drives are badass
[02:54] if you want to do 300mb/sec, sas or fc is fine.
[02:54] if you want to do 80mb/sec, gigabit is fine.
[02:54] if you have an ssd san and want to do 3000mb/sec then you should just keep it local :)
[02:55] expensive too.
[02:55] i like infiniband myself
[02:55] That's what she said!!
[02:55] this is why i like server based storage instead of san stuff. storage being on the server is nice and fast, and no one expects the network to be HD calibre latency
[02:55] i can't notice the difference between local and remote at home
[02:56] is infiniband still expensive?
[02:56] i have 16 gigabit infiniband limited to about 13 gigabit because of pci-e x4 limitations on one end.
[02:56] it was $150 US for 4 cards for me.
[02:56] not bad
[02:56] but they were running in pci-e 1 mode.
[02:56] i reflashed them to run in pci-e 2 mode.
[02:56] dodgy as i suppose :/
[02:57] you using zfs?
[02:57] but i get about 1300mb/sec with nfs from ssd raid.
[02:57] zfs + mdadm root
[02:57] 3x120gb ssd atm.
[02:57] what kinda overhead are you seeing zfs impose?
[02:57] but i'm upgrading it, just waiting on one more ssd.
[02:58] zfs seems to be slower with lots of small files.
[02:58] how much slower tho
[02:58] but with sequential it can still max out stuff.
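[A rough way to quantify the "slower with lots of small files" feeling above is to time the same number of bytes written as many small files versus one sequential file. This is only a sketch: the file count, sizes, and temp-directory setup are invented for illustration, and real numbers depend heavily on the filesystem and caching.]

```python
import os
import tempfile
import time

def write_small_files(directory, count=1000, size=4096):
    """Write `count` files of `size` bytes each; returns elapsed seconds.
    Each file costs metadata updates, not just data blocks."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(directory, f"f{i}"), "wb") as f:
            f.write(payload)
    return time.perf_counter() - start

def write_sequential(directory, total=1000 * 4096):
    """Write the same number of bytes as one file; returns elapsed seconds."""
    payload = b"x" * total
    start = time.perf_counter()
    with open(os.path.join(directory, "big"), "wb") as f:
        f.write(payload)
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        small = write_small_files(d)
        seq = write_sequential(d)
        # the small-file run is typically noticeably slower
        print(f"small files: {small:.3f}s  sequential: {seq:.3f}s")
```

[On most filesystems, zfs included, the small-file run loses to the sequential one by a wide margin, which matches the "sequential can still max out stuff" comment.]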
[02:58] i dunno, it's just a feeling, i don't benchmark.
[02:58] well i don't have a good benchmark :)
[02:58] i was playing with tar the other day, and it's faster to extract a file to /tmp than to local storage.
[02:59] http://kotaku.com/the-new-zelda-is-open-world-looks-absolutely-incredibl-1588673841
[02:59] so i had this great idea of trying to figure out how to make tar work with async i/o
[02:59] nothing on linux uses good queue depths for ssd's
[02:59] which is fine for hard-disks..
[03:00] also xz wasn't multithreaded.
[03:00] the new version that just came out is.
[03:00] been hoping arch updates it soon
[03:01] also one of my ssd's seems to like to randomly drop out sometimes.
[03:01] and mdadm needs to sync the whole disk.
[03:01] and i manually need to add it back in.
[03:01] whereas zfs just catches up
[03:02] ssd performance still has quite a lot of potential for improvement.
[03:02] there can be random latency spikes etc.
[03:02] but to get maximum performance out of that kind of technology, more and more has to be exposed to the OS instead of it acting like a dumb hard-disk
[03:03] nvme may help a little.
[03:04] the new slc write caches can help more desktop-like work quite a lot.
[03:04] but most stuff doesn't even need to be synchronous, it's just that there aren't good interfaces.
[03:05] like if i'm installing a bunch of packages, as long as everything i installed gets rolled back on failure i'm fine with it chunking into one big transaction
[03:06] zfs seems to love ram and a little extra cpu
[03:08] i have an i7-4770 and 16gb of ram on my main linux box at home.
[03:08] upgrading to 32gb of ram.
[03:09] but i have it on one 8gb server with raid, and one 8gb desktop with no raid
[03:09] and it's fine on both.
[03:09] there were some early fragmentation issues
[03:09] ddr4 should make 64gb more affordable though
[03:10] i figure just stick more ram in for the most part.
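[The "make tar work with async i/o" idea above, keeping more writes in flight so an SSD sees a deeper queue, can be sketched with a thread pool. A real implementation would use io_uring or POSIX AIO rather than threads; the entry names and worker count here are invented, this is just the shape of it.]

```python
import os
from concurrent.futures import ThreadPoolExecutor

def extract_serial(entries, dest):
    """One write at a time: queue depth 1, the tar(1) behaviour."""
    for name, data in entries:
        with open(os.path.join(dest, name), "wb") as f:
            f.write(data)

def extract_parallel(entries, dest, workers=8):
    """Hand each file to a worker so several writes are in flight at once.
    An SSD can service the deeper queue concurrently; a spinning disk,
    which seeks between writes anyway, gains much less."""
    def write_one(entry):
        name, data = entry
        with open(os.path.join(dest, name), "wb") as f:
            f.write(data)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # consume the iterator so exceptions from workers propagate
        list(pool.map(write_one, entries))
```

[`entries` stands in for decoded archive members as `(name, bytes)` pairs; the decompression stage would still be serial, only the writes fan out.]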
[03:10] but having both most frequently used and most recently used lists means it behaves pretty well when doing cache thrashing.
[03:10] openbsd is actually adding some kind of simple dual queue system too afaik.
[03:11] well someone was proposing it, i dunno if it's in.
[03:11] but disk caching is an area that quite a lot of research has been done in, yet most people are using simple systems for it.
[03:12] one issue is: how do you test performance?
[03:12] like one of the common issues with most recently used is that it unloads common binaries when you do "benchmarking" type applications.
[03:13] and that may make the benchmark work better, especially if the benchmark doesn't use that much disk.
[03:13] but it makes my pet peeves worse: starting vim to edit a file, tab completing a file, and logging in to a machine.
[03:14] good benchmarks are easy for me to write
[03:14] i just take the same group of tasks and measure the performance of repeating it
[03:14] the thing is, interactive things want lower latency for reads, background things matter less.
[03:15] it's kind of complicated.
[03:16] synchronous things want lower latency too.
[03:16] so my current thinking is that things should just be shifted to be more async.
[03:17] most things shouldn't really be much faster on ssd than hard-disk.
[03:17] because they should be reasonably predictable, and be able to just be read sequentially
[03:18] i like async too
[03:19] damn what happened to coding hah
[03:19] i'm writing a tiny curl implementation
[03:19] why?
[03:19] it works, but i got distracted with trying to use as little cpu as possible
[03:19] it's cos i did ldd on curl :)
[03:19] hahahhaa
[03:19] oops :)
[03:20] and because i was noticing that curl used heaps of cpu on infiniband
[03:20] but i tested on some other systems.
[03:20] there are quite a few problem cases, and performance can vary.
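[The cache-thrashing complaint above, a big sequential "benchmark" read evicting vim and the shell from cache, is easy to reproduce with a plain recency-based cache. This toy class is a sketch, not any real kernel's page cache; the capacity and key names are invented.]

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: a hit moves the key to the back,
    inserting past capacity evicts from the front (the coldest entry)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)
            return self.data[key]
        return None

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)

cache = LRUCache(capacity=4)
for hot in ("vim", "bash", "ssh"):       # the binaries you actually care about
    cache.put(hot, "cached")
for block in range(100):                 # one big sequential scan
    cache.put(f"scan-{block}", "cached")
# The scan pushed every hot entry out -- exactly the interactive-latency
# pet peeve above.
print(cache.get("vim"))  # None
```

[A dual-queue design like the one mercutio describes for zfs (ARC keeps separate recently-used and frequently-used lists) resists this: one pass over the scan blocks never promotes them into the frequency list, so vim stays cached.]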
[03:20] but something like downloading on adsl uses a surprising amount of cpu
[03:21] because packets don't get coalesced that well, you get a packet every msec or so.
[03:21] so it has to wake up, process a packet, wake up, process a packet, etc.
[03:21] but on a xeon e3110 (core2 duo 3 ghz basically) there's significant cpu usage maxing out gigabit.
[03:22] and fiddling with the coalescing settings makes a difference (intel e1000)
[03:22] but my initial testing was all infiniband, and then i upgraded my server from i5-2500k to i7-4770
[03:22] my desktop is i7-3770
[03:23] nice
[03:23] i7-4770 uses WAY less cpu than i7-3770 at curl type applications.
[03:23] it kind of blew my mind.
[03:23] the i7-3770 even has faster ram.
[03:23] basically cache speed has gone up heaps afaik.
[03:24] but all the benchmarks floating around are about games.
[03:24] where the differences seem much smaller.
[03:24] there's really very little in the way of benchmarks for more server type stuff, other than pointless stuff like requesting static files on a web server.
[03:24] which basically doesn't matter
[03:25] curl http://192.168.50.254:24/200m > /dev/null  0.01s user 0.05s system 98% cpu 0.054 total
[03:25] ./microcurl/microcurl http://192.168.50.254:24/200m > /dev/null  0.00s user 0.03s system 91% cpu 0.033 total
[03:26] downloading files locally to /dev/null is way faster with my version :)
[03:27] curl http://202.49.140.24:24/10m > /dev/null  0.04s user 0.07s system 4% cpu 2.408 total
[03:27] ./microcurl/microcurl http://202.49.140.24:24/10m > /dev/null  0.00s user 0.07s system 2% cpu 2.406 total
[03:27] and a little less cpu over vdsl.
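[A toy version of the microcurl idea above: one connect, one request, then a read loop draining the socket until EOF. The function name and buffer size are invented; mercutio's actual microcurl is a separate C program, and a real client would also need chunked encoding, redirects, TLS, and so on. The point is the loop structure: a larger recv buffer means fewer wakeups per downloaded byte, which is most of where curl-style CPU time goes on a fast link.]

```python
import socket

def micro_get(host, port, path="/", bufsize=65536):
    """Minimal HTTP/1.0 GET. Returns (raw_headers, body) as bytes.
    HTTP/1.0 with Connection: close lets us treat EOF as end-of-body,
    so no Content-Length or chunked parsing is needed."""
    request = (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n\r\n"
    ).encode()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request)
        chunks = []
        while True:
            # each recv() is one syscall and potentially one wakeup;
            # on a slow link you wake per packet regardless of bufsize
            chunk = sock.recv(bufsize)
            if not chunk:
                break
            chunks.append(chunk)
    raw = b"".join(chunks)
    headers, _, body = raw.partition(b"\r\n\r\n")
    return headers, body
```

[On an adsl-speed link the loop wakes roughly once per packet no matter what, which matches the wake-up/process/wake-up pattern described above; the buffer size only pays off when the kernel can queue many packets between wakeups.]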
[03:29] ok i should stop rambling :)
[13:07] haha
[13:07] it was good stuff