[01:22] *** dj_goku has quit IRC (Ping timeout: 250 seconds)
[01:30] *** dj_goku has joined #arpnetworks
[01:30] *** dj_goku has quit IRC (Changing host)
[01:30] *** dj_goku has joined #arpnetworks
(idle for 19mn)
[01:49] *** dj_goku_ has joined #arpnetworks
[01:49] *** dj_goku has quit IRC (Read error: Connection reset by peer)
(idle for 24mn)
[02:13] <mercutio> mus1cb0x: juniper is cheaper.
(idle for 30mn)
[02:43] <mus1cb0x> any idea of which is better?
[02:43] <mus1cb0x> junos being fbsd based is a big selling point, but that's only 1 part of the story
[02:43] <mus1cb0x> thanks mercutio
[02:45] <mercutio> i use openbsd and freebsd myself :)
[02:45] <mus1cb0x> how would you use that to replace a 1U top of rack switch?
[02:45] <mercutio> oh i thought you were talking about routers.
[02:45] <mercutio> i'm using cisco switches. ex4200s are rated rather highly for juniper switches though.
[02:46] <mus1cb0x> highly?
[02:46] <mercutio> not too expensive, and good enough
[02:46] <mercutio> with routers people seem to suggest mx480 normally; at the lower end ex3200 is used too
[02:46] <mercutio> but i dunno, what are you looking for in a switch?
[02:48] <mus1cb0x> oh nothing really, just curious how datacenter networking gear has evolved over the past 5 years
[02:48] <mercutio> it hasn't much afaik
[02:48] <mercutio> it's mostly just that prices have come down a bit, power use has come down a bit.
[02:49] <mus1cb0x> is 10 gige to the client with 100gig uplink common in switches now?
[02:49] <mercutio> it's still expensive to do 10 gigabit.
[02:49] <mus1cb0x> or is it still 1gige to client
[02:49] <mus1cb0x> ahh
[02:49] <mercutio> nope.
[02:49] <mus1cb0x> seems like they are holding on to 10gige prices
[02:49] <mercutio> yeah, this new 2.5 gigabit should help things.
[02:50] <mus1cb0x> what's that?
[02:50] <mercutio> 2.5 gigabit is coming to desktops soon
[02:50] <mus1cb0x> no shit?
[02:50] <mercutio> they can do 2.5 gigabit in a similar power envelope to current gigabit.
[02:50] <mercutio> using existing cable.
[02:50] <mus1cb0x> over regular cat 5 copper?
[02:50] <mercutio> yeah afaik
[02:50] <mus1cb0x> is cat 6 copper still not worth it?
[02:51] <mercutio> i dunno, i've never experienced any cables that work better with cat6 than cat5e.
[02:51] <mercutio> and most cat6 is thicker than cat5e. but you can get similar awg cat6.
[02:51] <mus1cb0x> do gbics still have any value?
[02:51] <mercutio> the current recommendation is meant to be for cat6a.
[02:51] <mus1cb0x> the ones that went in cisco 2950 and 3550 switches
[02:52] <mercutio> gbics?
[02:52] <mercutio> is that just sfp modules?
[02:52] <mus1cb0x> yea the uplink modules
[02:52] <mus1cb0x> i can't remember what interface they have
[02:52] <mercutio> it's probably just ftp
[02:52] <mercutio> sfp*. i can't type.
[02:52] <mus1cb0x> yea
[02:52] <mercutio> (i've been drinking)
[02:53] <mus1cb0x> ah hey don't feel down, things will pick up :/
[02:53] <mercutio> haha
[02:53] <mus1cb0x> :P
[02:53] <mercutio> mostly gigabit is good enough for most people atm.
[02:53] <mercutio> 10 gigabit helps iscsi performance
[02:53] <mus1cb0x> ah so for san environments
[02:54] <mercutio> but fibre channel and sas are fine too.
[02:54] <mus1cb0x> i hear hot swap sas ssd drives are badass
[02:54] <mercutio> if you want to do 300mb/sec, sas or fc is fine.
[02:54] <mercutio> if you want to do 80mb/sec, gigabit is fine.
[02:54] <mercutio> if you have an ssd san and want to do 3000mb/sec then you should just keep it local :) expensive too.
[02:54] <mercutio> i like infiniband myself
[02:55] <BryceBot> That's what she said!!
[02:55] <mus1cb0x> this is why i like server based storage instead of san stuff. storage being on the server is nice and fast, and no one expects the network to be HD-calibre latency
[02:55] <mercutio> i can't notice the difference between local and remote at home
[02:56] <mus1cb0x> is infiniband still expensive?
[02:56] <mercutio> i have 16 gigabit infiniband limited to about 13 gigabit because of pci-e x4 limitations on one end.
[02:56] <mercutio> it was $150 US for 4 cards for me.
[02:56] <mus1cb0x> not bad
[02:56] <mercutio> but they were running in pci-e 1 mode.
[02:56] <mercutio> i reflashed them to run in pci-e 2 mode. dodgy as, i suppose :/
[02:57] <mus1cb0x> you using zfs?
[02:57] <mercutio> but i get about 1300mb/sec with nfs from ssd raid.
[02:57] <mercutio> zfs+mdadm root, 3x120gb ssd atm.
[02:57] <mus1cb0x> what kinda overhead are you seeing zfs impose?
[02:57] <mercutio> but i'm upgrading it, just waiting on one more ssd.
[02:57] <mercutio> zfs seems to be slower with lots of small files.
[02:58] <mus1cb0x> how much slower tho
[02:58] <mercutio> but with sequential it can still max out stuff.
[02:58] <mercutio> i dunno, it's just a feeling, i don't benchmark. well, i don't have a good benchmark :)
[02:58] <mercutio> i was playing with tar the other day, and it's faster to extract a file to /tmp than to local storage.
[02:59] <mus1cb0x> http://kotaku.com/the-new-zelda-is-open-world-looks-absolutely-incredibl-1588673841
[02:59] <mercutio> so i had this great idea of trying to figure out how to make tar work with async i/o
[02:59] <mercutio> nothing on linux uses good queue depths for ssd's, which is fine for hard-disks..
[02:59] <mercutio> also xz wasn't multithreaded. the new version that just came out is. been hoping arch updates it soon
[02:59] <mercutio> also one of my ssd's seems to like to randomly drop out sometimes. mdadm needs to resync the whole disk and i have to add it back in manually, whereas zfs just catches up
[02:59] <mercutio> ssd performance still has quite a lot of potential for improvement. there can be random latency spikes etc.
[02:59] <mercutio> but to get maximum performance out of that kind of technology, more and more has to be exposed to the OS instead of the drive acting like a dumb hard-disk
[02:59] <mercutio> nvme may help a little. the new slc write caches can help more desktop-like work quite a lot.
[02:59] <mercutio> but most stuff doesn't even need to be synchronous, it's just that there aren't good interfaces.
[02:59] <mercutio> like if i'm installing a bunch of packages, as long as everything i installed gets rolled back on failure, i'm fine with it chunking into one big transaction
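[editor's note: a minimal sketch of the "good queue depths" idea above, not mercutio's code. POSIX aio is one portable way to keep several writes in flight so an ssd can service requests in parallel; the file name, depth, and chunk size are made up. compile with cc qdepth.c -lrt.]

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define DEPTH 8            /* requests kept in flight (hypothetical) */
    #define CHUNK (128 * 1024) /* bytes per request (hypothetical) */

    int main(void) {
        int fd = open("out.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[DEPTH][CHUNK];
        struct aiocb cb[DEPTH];
        memset(cb, 0, sizeof cb);

        /* submit DEPTH writes before waiting on any of them */
        for (int i = 0; i < DEPTH; i++) {
            cb[i].aio_fildes = fd;
            cb[i].aio_buf    = buf[i];
            cb[i].aio_nbytes = CHUNK;
            cb[i].aio_offset = (off_t)i * CHUNK;
            if (aio_write(&cb[i]) != 0) { perror("aio_write"); return 1; }
        }

        /* reap completions; a real tar-like tool would refill the
           queue as each write finishes instead of draining it */
        for (int i = 0; i < DEPTH; i++) {
            const struct aiocb *list[1] = { &cb[i] };
            while (aio_error(&cb[i]) == EINPROGRESS)
                aio_suspend(list, 1, NULL);
            if (aio_return(&cb[i]) < 0) perror("aio_return");
        }
        close(fd);
        return 0;
    }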
[03:06] <mus1cb0x> zfs seems to love ram and a little extra cpu
[03:08] <mercutio> i have an i7-4770 and 16gb of ram on my main linux box at home.
[03:08] <mercutio> upgrading to 32gb of ram. but i have it on one 8gb server with raid and one 8gb desktop with no raid, and it's fine on both.
[03:08] <mercutio> there were some early fragmentation issues
[03:08] <mercutio> ddr4 should make 64gb more affordable though. i figure just stick more ram in for the most part, but haven't yet.
[03:08] <mercutio> mixing most-frequently-used and most-recently-used means it behaves pretty well under cache thrashing.
[03:08] <mercutio> openbsd is actually adding some kind of simple dual-queue system too afaik. well, someone was proposing it; i dunno if it's in.
[03:08] <mercutio> disk caching is an area that quite a lot of research has been done in, yet most people use simple systems.
[03:08] <mercutio> one issue is how you test performance. like, one of the common issues with most-recently-used is that it unloads common binaries when you run "benchmarking" type applications.
[03:08] <mercutio> that may make the benchmark work better, especially if the benchmark doesn't use that much disk. but it makes my pet peeves worse: starting vim to edit a file, tab-completing a file, and logging in to a machine.
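[editor's note: a toy illustration of the cache-thrashing point, not from the log. a pure recency (LRU) cache loses its hot entries whenever a one-pass scan floods it; that is why policies like zfs's arc keep a frequency queue alongside the recency queue. keys and capacity are hypothetical.]

    #include <stdio.h>
    #include <string.h>

    #define CAP 4  /* tiny cache so the effect is visible */

    static int keys[CAP];
    static int used = 0;

    /* toy LRU: slot 0 is most recent; returns 1 on hit */
    static int lru_access(int key) {
        for (int i = 0; i < used; i++) {
            if (keys[i] == key) {  /* hit: move to front */
                memmove(&keys[1], &keys[0], i * sizeof keys[0]);
                keys[0] = key;
                return 1;
            }
        }
        if (used < CAP) used++;    /* miss: insert at front, drop LRU */
        memmove(&keys[1], &keys[0], (used - 1) * sizeof keys[0]);
        keys[0] = key;
        return 0;
    }

    int main(void) {
        int hot_hits = 0, hot_refs = 0;
        for (int scan = 0; scan < 100; scan++) {
            /* the "common binaries": two hot keys touched every pass */
            hot_hits += lru_access(1); hot_refs++;
            hot_hits += lru_access(2); hot_refs++;
            /* one-pass scan data, never reused, floods the cache */
            for (int k = 0; k < CAP; k++)
                lru_access(1000 + scan * CAP + k);
        }
        printf("hot-set hit rate under scan: %d/%d\n", hot_hits, hot_refs);
        return 0;
    }

[editor's note: the scan evicts both hot keys before every reuse, so the printed hit rate is 0/200; a frequency-protected second queue would keep them resident.]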
[03:14] <mus1cb0x> good benchmarks are easy for me to write
[03:14] <mus1cb0x> i just take the same group of tasks and repeat them, measuring performance each time
[03:14] <mercutio> the thing is, interactive things want lower latency for reads; background things matter less.
[03:14] <mercutio> it's kind of complicated. synchronous things want lower latency too.
[03:14] <mercutio> so my current thinking is that things should just be shifted to be more async.
[03:14] <mercutio> most things shouldn't really be much faster on ssd than hard-disk, because they should be reasonably predictable and able to just be read sequentially
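[editor's note: one concrete form of "predictable and read sequentially": declaring the access pattern so the kernel prefetches in the background while the program processes earlier chunks. illustration only; posix_fadvise is the standard hint interface on linux.]

    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* tell the kernel the access pattern up front: it can read
           ahead asynchronously while we process earlier chunks */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
        posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);

        char buf[1 << 16];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            ;  /* process buf here */
        close(fd);
        return 0;
    }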
[03:18] <mus1cb0x> i like async too
[03:19] <mercutio> damn, what happened to coding hah
[03:19] <mercutio> i'm writing a tiny curl implementation
[03:19] <mus1cb0x> why?
[03:19] <mercutio> it works, but i got distracted with trying to use as little cpu as possible
[03:19] <mercutio> it's cos i did ldd on curl :)
[03:19] <mus1cb0x> hahahhaa
[03:19] <mus1cb0x> oops :)
[03:20] <mercutio> and because i was noticing that curl used heaps of cpu on infiniband
[03:20] <mercutio> but i tested on some other systems. there are quite a few problem cases, and performance can vary.
[03:20] <mercutio> something like downloading on adsl uses a surprising amount of cpu, because packets don't get coalesced that well. you get a packet every msec or so, so it has to wake up, process a packet, wake up, process a packet, etc.
[03:20] <mercutio> on a xeon e3110 (core2 duo 3 ghz, basically) there's significant cpu usage maxing out gigabit, and fiddling with the coalescing settings makes a difference (intel e1000)
[03:20] <mercutio> but my initial testing was all infiniband, and then i upgraded my server from i5-2500k to i7-4770. my desktop is i7-3770
[03:23] <mus1cb0x> nice
[03:23] <mercutio> i7-4770 uses WAY less cpu than i7-3770 at curl-type applications.
[03:23] <mercutio> it kind of blew my mind. the i7-3770 even has faster ram.
[03:23] <mercutio> basically cache speed has gone up heaps afaik. but all the benchmarks floating around are about games, where the differences seem much smaller.
[03:23] <mercutio> there's really very little in the way of benchmarks for more server-type stuff, other than pointless stuff like requesting static files on a web server, which basically doesn't matter
[03:23] <mercutio> curl http://192.168.50.254:24/200m > /dev/null  0.01s user 0.05s system 98% cpu 0.054 total
[03:23] <mercutio> ./microcurl/microcurl http://192.168.50.254:24/200m > /dev/null  0.00s user 0.03s system 91% cpu 0.033 total
[03:23] <mercutio> files downloading at localhost to /dev/null is way faster with my system :)
[03:23] <mercutio> ./microcurl/microcurl http://202.49.140.24:24/10m > /dev/null  0.00s user 0.07s system 2% cpu 2.406 total
[03:23] <mercutio> and a little less cpu over vdsl. ok, i should stop rambling :)
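[editor's note: microcurl's source isn't in the log; this is a guess at the shape of such a tool, not mercutio's code. a blocking socket, HTTP/1.0 so the server closes the connection at end of body, and a large read buffer to cut per-byte syscall and wakeup overhead, which is the cpu effect discussed above. response headers are written out with the body for brevity. usage, echoing the log's test: ./microcurl 192.168.50.254 24 /200m > /dev/null]

    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* fetch http://host:port/path to stdout; HTTP/1.0, so EOF marks
       the end of the response */
    int main(int argc, char **argv) {
        if (argc != 4) { fprintf(stderr, "usage: %s host port path\n", argv[0]); return 1; }

        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        int rc = getaddrinfo(argv[1], argv[2], &hints, &res);
        if (rc != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc)); return 1; }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) { perror("connect"); return 1; }
        freeaddrinfo(res);

        char req[1024];
        int n = snprintf(req, sizeof req,
                         "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n", argv[3], argv[1]);
        if (write(fd, req, n) != n) { perror("write"); return 1; }

        /* big reads: fewer syscalls and wakeups means less cpu per byte */
        char buf[1 << 16];
        ssize_t got;
        while ((got = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)got, stdout);

        close(fd);
        return 0;
    }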
(idle for 9h38mn)
[13:07] <mus1cb0x> haha
[13:07] <mus1cb0x> it was good stuff
(idle for 6h54mn)
[20:01] *** acf_ has joined #arpnetworks
(idle for 2h12mn)
[22:13] *** RandalSchwartz has quit IRC (Read error: Connection reset by peer)
[22:13] *** awyeah has quit IRC (Read error: Connection reset by peer)
[22:13] *** mus1cbox has joined #arpnetworks
[22:13] *** awyeah has joined #arpnetworks
[22:13] *** freedomcode has joined #arpnetworks
[22:13] *** milki_ has joined #arpnetworks
[22:13] *** mjp_ has quit IRC (Read error: Connection reset by peer)
[22:13] *** kevr has quit IRC (Ping timeout: 240 seconds)
[22:13] *** bitslip has quit IRC (Read error: Connection reset by peer)
[22:13] *** acf_ has quit IRC (Ping timeout: 258 seconds)
[22:13] *** bitslip_ has joined #arpnetworks
[22:13] *** mnathani1 has quit IRC (Ping timeout: 245 seconds)
[22:13] *** tooth has quit IRC (Ping timeout: 245 seconds)
[22:13] *** acf_ has joined #arpnetworks
[22:13] *** milki has quit IRC (Ping timeout: 240 seconds)
[22:13] *** reardencode has quit IRC (Ping timeout: 240 seconds)
[22:13] *** pcn has quit IRC (Ping timeout: 240 seconds)
[22:13] *** mus1cb0x has quit IRC (Read error: Connection reset by peer)
[22:13] *** tooth has joined #arpnetworks
[22:13] *** d^_^b has quit IRC (Ping timeout: 245 seconds)
[22:13] *** d^_^b has joined #arpnetworks
[22:13] *** d^_^b has quit IRC (Changing host)
[22:13] *** d^_^b has joined #arpnetworks
[22:13] *** pcn has joined #arpnetworks
[22:13] *** mnathani1 has joined #arpnetworks
[22:13] *** kevr has joined #arpnetworks
[22:13] *** mjp has joined #arpnetworks
(idle for 1h20mn)
[23:39] *** JC_Denton has quit IRC (Read error: Connection reset by peer)