mercutio: mus1cb0x: juniper is cheaper.
mus1cb0x: any idea of which is better?
junos being fbsd based is a big selling point, but that's only 1 part of the story
thanks mercutio
mercutio: i use openbsd and freebsd myself :)
mus1cb0x: how would you use that to replace a 1U top of rack switch?
mercutio: oh i thought you were talking about routers.
i'm using cisco switches.
ex4200s are rated rather highly for juniper switches though.
mus1cb0x: highly?
mercutio: not too expensive, and good enough
with routers people seem to suggest mx480 normally at the lower end
ex3200 is used too
but i dunno, what are you looking for in a switch?
mus1cb0x: oh nothing really, just curious how datacenter networking gear has evolved over the past 5 years
mercutio: it hasn't much afaik
it's mostly just that prices have come down a bit, power use has come down a bit.
mus1cb0x: is 10 gige to the client with 100gig uplink common in switches now?
mercutio: it's still expensive to do 10 gigabit.
mus1cb0x: or is it still 1gige to client
ahh
mercutio: nope.
mus1cb0x: seems like they are holding on to 10gige prices
mercutio: yeah, this new 2.5 gigabit should help things.
mus1cb0x: what's that?
mercutio: 2.5 gigabit is coming to desktops soon
mus1cb0x: no shit?
mercutio: they can do 2.5 gigabit in similar power envelope to current gigabit.
using existing cable.
mus1cb0x: over regular cat 5 copper?
mercutio: yeah afaik
mus1cb0x: is cat 6 copper still not worth it?
mercutio: i dunno, i've never experienced any cables that work better with cat6 than cat5e.
and most cat6 is thicker than cat5e.
but you can get similar awg cat6.
mus1cb0x: do gbics still have any value?
mercutio: the current recommendation is meant to be for cat6a.
mus1cb0x: the ones that went in cisco 2950 and 3550 switches
mercutio: gbics?
is that just sfp modules?
mus1cb0x: yea the uplink modules
i can't remember what interface they have
mercutio: it's probably just ftp
sfp
i can't type.
mus1cb0x: yea
mercutio: (i've been drinking)
mus1cb0x: ah hey don't feel down, things will pick up :/
mercutio: haha
mus1cb0x: :P
mercutio: mostly gigabit is good enough for most people atm.
10 gigabit helps iscsi performance
mus1cb0x: ah so for san environments
mercutio: but fibre channel and sas are fine too.
mus1cb0x: i hear hot swap sas ssd drives are badass
mercutio: if you want to do 300mb/sec sas or fc is fine.
if you want to do 80mb/sec gigabit is fine.
if you have ssd san and want to do 3000mb/sec then you should just keep it local :)
expensive too.
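Those throughput tiers line up with simple line-rate arithmetic. A rough sketch (the encoding-efficiency constants are nominal assumptions; real-world numbers come in lower once protocol overhead is counted):

```python
def usable_mbytes(raw_gbits, encoding_eff=1.0):
    """Nominal link rate (Gbit/s) x encoding efficiency, converted to MB/s."""
    return raw_gbits * 1e9 * encoding_eff / 8 / 1e6

gige = usable_mbytes(1)        # ~125 MB/s ceiling, ~80-110 MB/s in practice
sas3 = usable_mbytes(3, 0.8)   # 3 Gbit SAS with 8b/10b -> the ~300 MB/s class
tengig = usable_mbytes(10)     # 10 GbE ceiling, hence its appeal for iscsi
```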
i like infiniband myself
BryceBot: That's what she said!!
mus1cb0x: this is why i like server based storage instead of san stuff. storage being on the server is nice and fast, and no one expects network to be HD calibre latency
mercutio: i can't notice the difference between local and remote at home
mus1cb0x: is infiniband still expensive?
mercutio: i have 16 gigabit infiniband limited to about 13 gigabit because of pci-e x4 limitations on one end.
it was $150 US for 4 cards for me.
mus1cb0x: not bad
mercutio: but they were running in pci-e 1 mode.
i reflashed them to run in pci-e 2 mode.
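The 16-limited-to-13-gigabit figure is consistent with PCIe lane math. A sketch with nominal transfer rates (gen1 and gen2 both use 8b/10b encoding; packet overhead trims the rest):

```python
def pcie_gbits(gt_per_s, lanes, encoding_eff=0.8):
    """Usable PCIe bandwidth in Gbit/s: per-lane rate x lanes x 8b/10b efficiency."""
    return gt_per_s * lanes * encoding_eff

gen1_x4 = pcie_gbits(2.5, 4)  # 8 Gbit/s: why gen1 mode couldn't carry 16 Gbit IB
gen2_x4 = pcie_gbits(5.0, 4)  # 16 Gbit/s: TLP/DLLP overhead leaves ~13 for payload
```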
dodgy as i suppose :/
mus1cb0x: you using zfs?
mercutio: but i get about 1300mb/sec with nfs from ssd raid.
zfs+mdadm root
3x120gb ssd atm.
mus1cb0x: what kinda overhead are you seeing zfs impose?
mercutio: but i'm upgrading it, just waiting on one more ssd.
zfs seems to be slower with lots of small files.
mus1cb0x: how much slower tho
mercutio: but with sequential it can still max out stuff.
i dunno, it's just a feeling, i don't benchmark.
well i don't have a good benchmark :)
i was playing with tar the other day, and it's faster to extract a file to /tmp than to local storage.
mus1cb0x: http://kotaku.com/the-new-zelda-is-open-world-looks-absolutely-incredibl-1588673841
mercutio: so i had this great idea of trying to figure out how to make tar work with async i/o
nothing on linux uses good queue depths for ssd's
which is fine for hard-disks..
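A hypothetical sketch of the idea (not the actual tar change): keep several `pread`s in flight from worker threads, so the SSD sees a queue depth above 1 instead of one outstanding read at a time. Everything here (file, chunk size, worker count) is made up for illustration:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024
NCHUNKS = 8

# Demo file standing in for an archive member.
payload = os.urandom(CHUNK * NCHUNKS)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

fd = os.open(path, os.O_RDONLY)

def read_chunk(offset):
    # pread doesn't move the shared file offset, so workers don't race
    return os.pread(fd, CHUNK, offset)

# Queue depth ~4; map() preserves chunk order for reassembly.
with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = list(pool.map(read_chunk, range(0, CHUNK * NCHUNKS, CHUNK)))

os.close(fd)
os.unlink(path)
data = b"".join(chunks)
```

Threads are just the portable stand-in here; real async I/O (POSIX AIO, or io_uring on newer kernels) would get the same queue depth without them.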
also xz wasn't multithreaded.
new version that just came out is.
been hoping arch updates it soon
also one of my ssd's seems to like to randomly drop out sometimes.
and mdadm needs to sync the whole disk.
and i manually need to add it back in.
whereas zfs just catches up
ssd performance still has quite a lot of potential for improvement.
there can be random latency spikes etc.
but to get maximum performance out of that kind of technology more and more has to be exposed to the OS instead of acting like a dumb hard-disk
nvme may help a little.
the new slc write caches can help more desktop-like work quite a lot.
but most stuff doesn't even need to be synchronous, it's just there aren't good interfaces.
like if i'm installing a bunch of packages, as long as everything i installed gets reversed i'm fine with it chunking into one big transaction
mus1cb0x: zfs seems to love ram and a little extra cpu
mercutio: i have i7-4770 and 16gb of ram on my main linux box at home.
upgrading to 32gb of ram.
but i have it on one 8gb server with raid, and one 8gb desktop with no raid
and it's fine on both.
there were some early fragmentation issues
ddr4 should make 64gb more affordable though
i figure just stick more ram in for the most part.
but having both most frequently used and most recently used means it behaves pretty well when doing cache thrashing.
openbsd is actually adding some kind of simple dual queue system too afaik.
well someone was proposing it, i dunno if it's in.
but disk caching is an area where quite a lot of research has been done, yet most people are doing simple systems.
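A toy version of such a dual-queue scheme (a 2Q-style sketch for illustration, not the OpenBSD proposal itself): new pages sit in a probationary FIFO, and only a second hit promotes them to a protected LRU, so a one-pass scan can't flush the hot set:

```python
from collections import OrderedDict

class TwoQueueCache:
    """Toy 2Q-style cache: first hit -> probationary FIFO; second hit ->
    protected LRU. A single sequential scan only churns probation."""
    def __init__(self, probation_size, protected_size):
        self.probation = OrderedDict()
        self.protected = OrderedDict()
        self.probation_size = probation_size
        self.protected_size = protected_size

    def access(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)   # refresh LRU position
        elif key in self.probation:
            del self.probation[key]           # second hit: promote
            self.protected[key] = True
            if len(self.protected) > self.protected_size:
                self.protected.popitem(last=False)
        else:
            self.probation[key] = True        # first hit: probation only
            if len(self.probation) > self.probation_size:
                self.probation.popitem(last=False)

cache = TwoQueueCache(3, 3)
for page in ["vim", "vim", "scan1", "scan2", "scan3", "scan4"]:
    cache.access(page)
# "vim" survives the scan because it was promoted before the scan started.
```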
one issue is how you test performance.
like one of the common issues with most recently used is that it unloads common binaries when you do "benchmarking" type applications.
and that may make the benchmark work better, especially if the benchmark doesn't use that much disk.
but it makes my pet peeves worse: starting vim to edit a file, tab completing a file, and logging in to a machine.
mus1cb0x: good benchmarks are easy for me to write
i just take the same group of tasks and measure the performance of repeating them
mercutio: the thing is, interactive things want lower latency for reads, background things matter less.
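That repeat-the-same-tasks approach can be sketched as a tiny harness (hypothetical, not mus1cb0x's actual tooling); the minimum of several runs tends to be the least noisy number for warm-cache work:

```python
import time

def bench(task, repeats=5):
    """Run the same task several times; return (min, median) wall time."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        task()
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[0], times[len(times) // 2]

# Example workload standing in for a real group of tasks.
best, median = bench(lambda: sum(range(100000)))
```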
it's kind of complicated.
synchronous things want lower latency too.
so my current thinking is that things should just be shifted to be more async.
most things shouldn't really be much faster on ssd than hard-disk.
because they should be reasonably predictable, and be able to just be read sequentially
mus1cb0x: i like async too
mercutio: damn what happened to coding hah
i'm writing a tiny curl implementation
mus1cb0x: why?
mercutio: it works, but i got distracted with trying to use as little cpu as possible
it's cos i did ldd on curl :)
mus1cb0x: hahahhaa
oops :)
mercutio: and because i was noticing that curl used heaps of cpu on infiniband
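A hypothetical sketch of what a "tiny curl" can look like (a bare HTTP/1.0 GET over a raw socket, with none of real curl's TLS, redirect, or chunked-encoding handling), exercised here against a throwaway local server rather than any host from the log:

```python
import socket
import threading
import http.server
import socketserver
from urllib.parse import urlparse

def microcurl(url):
    """Fetch one URL with a bare HTTP/1.0 GET; connection close ends the body."""
    u = urlparse(url)
    with socket.create_connection((u.hostname, u.port or 80)) as s:
        s.sendall("GET {} HTTP/1.0\r\nHost: {}\r\n\r\n"
                  .format(u.path or "/", u.hostname).encode())
        resp = b""
        while True:
            buf = s.recv(65536)   # one large recv per wakeup
            if not buf:           # HTTP/1.0: server close marks end of body
                break
            resp += buf
    _headers, _, body = resp.partition(b"\r\n\r\n")
    return body

# Throwaway local server to exercise it.
class Quiet(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):   # keep the demo silent
        pass

srv = socketserver.TCPServer(("127.0.0.1", 0), Quiet)
threading.Thread(target=srv.serve_forever, daemon=True).start()
body = microcurl("http://127.0.0.1:{}/".format(srv.server_address[1]))
srv.shutdown()
srv.server_close()
```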
but i tested on some other systems.
there are quite a few problem cases, and performance can vary.
but something like downloading on adsl uses a surprising amount of cpu
because packets don't get coalesced that well, you get a packet every msec or so.
so it has to wake up, process a packet, wake up, process a packet, etc.
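The every-msec figure checks out with rough arithmetic, assuming full-size 1500-byte frames and an ~8 Mbit/s ADSL downstream (both assumptions; the log doesn't state the rate):

```python
def packets_per_second(link_mbit, mtu_bytes=1500):
    """Full-size packets per second at a given downstream rate."""
    return link_mbit * 1e6 / 8 / mtu_bytes

adsl = packets_per_second(8)      # ~667 pkt/s -> a wakeup roughly every 1.5 ms
gige = packets_per_second(1000)   # ~83000 pkt/s: why coalescing matters at gigabit
```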
but on xeon e3110 (core2 duo 3 ghz basically) there's significant cpu usage maxing out gigabit.
and fiddling with the coalescing settings makes a difference (intel e1000)
but my initial testing was all infiniband, and then i upgraded my server from i5-2500k to i7-4770
my desktop is i7-3770
mus1cb0x: nice
mercutio: i7-4770 uses WAY less cpu than i7-3770 at curl type applications.
it kind of blew my mind.
the i7-3770 even has faster ram.
basically cache speed has gone up heaps afaik.
but all the benchmarks floating around are about games.
where the differences seem much smaller.
there's really very little in the way of benchmarks for more server type stuff, other than pointless stuff like requesting static files on a web server.
which basically doesn't matter
curl http://192.168.50.254:24/200m > /dev/null 0.01s user 0.05s system 98% cpu 0.054 total
./microcurl/microcurl http://192.168.50.254:24/200m > /dev/null 0.00s user 0.03s system 91% cpu 0.033 total
downloading files from localhost to /dev/null is way faster with my version :)
curl http://202.49.140.24:24/10m > /dev/null 0.04s user 0.07s system 4% cpu 2.408 total
./microcurl/microcurl http://202.49.140.24:24/10m > /dev/null 0.00s user 0.07s system 2% cpu 2.406 total
and a little less cpu over vdsl.
ok i should stop rambling :)
mus1cb0x: haha
it was good stuff