#arpnetworks 2016-02-14,Sun


***mnathani_ has quit IRC (Ping timeout: 252 seconds)
mnathani has quit IRC (Ping timeout: 252 seconds)
[11:04]
.................. (idle for 1h28mn)
mnathani has joined #arpnetworks [12:34]
............................................ (idle for 3h39mn)
mnathani_ has joined #arpnetworks [16:13]
....... (idle for 33mn)
<mnathani> what's a decent web-based torrent client that can run on a linux server?
for downloading all those legit linux isos, of course :-)
[16:46]
..... (idle for 22mn)
<brycec> mnathani: transmission-daemon [17:08]
<mnathani_> deluge came up in my searches also
brycec: thanks - will try that out
[17:14]
<brycec> I've been using transmission headless for years and years... probably about 7 or 8 years now.
I've tried rtorrent and while it was super-configurable, it was overkill
And less pretty
[17:16]
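A rough sketch of what a headless transmission-daemon setup like brycec's could look like on a Debian-ish box (package name, paths, and the whitelist value are assumptions, not from the log):

    sudo apt install transmission-daemon
    sudo systemctl stop transmission-daemon     # stop it before editing settings.json
    # In settings.json (location varies by distro), e.g.:
    #   "download-dir": "/srv/torrents",
    #   "rpc-whitelist": "127.0.0.1,192.168.1.*",
    sudo systemctl enable --now transmission-daemon
    # Web UI on http://<server>:9091/, or drive it from the shell:
    transmission-remote localhost:9091 -a ./some-linux.iso.torrent
    transmission-remote localhost:9091 -l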
............. (idle for 1h3mn)
<mercutio> none of the linux ones are that efficient
if you're using a shared machine
utorrent works under wine i've heard.
with the nature of torrents if you're seeding lots of stuff there's a lot of random disk i/o
but yeah transmission is pretty nifty
except apparently it has some rewrite issue with zfs where it inflates disk i/o
[18:19]
<mnathani_> my use case involves a virtual box dedicated to torrents and such
with shared storage that my xbmc on raspberry pi can access
s/xbmc/kodi
[18:26]
<BryceBot> <mnathani_> with shared storage that my kodi on raspberry pi can access [18:26]
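One plausible way to wire up the shared storage mnathani_ describes is a plain NFS export from the torrent box that the Pi (or Kodi itself, via its nfs:// source support) mounts; the hostnames, subnet, and paths below are invented for illustration:

    # On the torrent VM: /etc/exports
    /srv/torrents  192.168.1.0/24(ro,all_squash)

    sudo exportfs -ra     # re-read /etc/exports

    # On the Raspberry Pi (or add nfs://torrentbox.lan/srv/torrents in Kodi):
    sudo mount -t nfs torrentbox.lan:/srv/torrents /mnt/torrents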
<mercutio> in that situation i'd probably just advocate using windows and utorrent :)
but yeah it's not too bad for downloads
it's mostly uploading that's the issue
[18:26]
<mnathani_> don't plan on seeding much
how is Germany coming along?
[18:27]
<mercutio> it'll be ready when it's ready :) [18:28]
............ (idle for 57mn)
<brycec> I've never found Transmission to be particularly "inefficient" - it can keep up, both downloading and uploading, with my 100/10 connection.
(I run it on my home NAS, FreeBSD, alongside a bunch of other jails.)
[19:25]
.... (idle for 18mn)
<mercutio> do you run it on raidz?
https://lists.freebsd.org/pipermail/freebsd-fs/2010-March/007928.html
struggling to find much other than that
[19:46]
........... (idle for 52mn)
<brycec> No, single straight disk (on top of a RAID-6 -- wasn't meant to be built this way/permanent, but that's a story for some other day)
(I'm now at the point that I have to find somewhere temporary + fast + safe that I can store 10TB+ in order to tear down that array and rebuild it as RAIDZ2 on top of JBODs. And probably increase its capacity. Or reconsider maybe mirroring across two raidz's, etc)
[20:40]
.............. (idle for 1h6mn)
<mercutio> sounds like fun
preferably somewhere close :)
i'm kind of a fan of mixed raid levels nowadays
like striped mirrored raid and raidz
a pair of 3-disk raidzs is quite a bit faster than 3 mirrored pairs, or than a 6-disk raidz2 with 4 data disks.
although raidz2 is more resilient
maybe you should just build a new raid array, and make your existing one your backup?
although that would be like 9x2tb disks or something..
[21:47]
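For concreteness, the 6-disk layouts being compared look roughly like this in zpool terms (pool and device names are placeholders; the three commands are alternatives, not a sequence):

    # a pair of striped 3-disk raidz1 vdevs
    zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf

    # three striped mirror pairs
    zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

    # a single 6-disk raidz2 (4 data + 2 parity, any two disks can fail)
    zpool create tank raidz2 sda sdb sdc sdd sde sdf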
<mnathani_> have you found that larger disks are not as reliable as smaller 2tb drives?
3, 4, 5tb for example
[22:05]
<mercutio> i've only personally experienced problems with seagate 3tb
but there is concern with larger drives also that another drive can die during a rebuild, which is helped somewhat by using something like raidz2
3tb is about as big as they normally go at 7200 rpm, so you end up getting lower rpm drives, and having the seek performance of a single drive over your raid array if using raidz.
what does appear to be happening is a slow shift towards 2.5" disks, which means you can end up having 12+ disks easily.
they have lower capacity but also lower power consumption. and slowly people are starting to have more in the way of hot standby disks.
it's still a little scary though, in the past there have been multiple incidents with drives dying around the same time with "batch" issues.
it used to be quite a common occurrence back in the 4/9GB SCSI days.
[22:15]
<mnathani_> would you buy a set of drives from different manufacturers, say hitachi, wd and seagate, to offset that batch issue? [22:19]
<mercutio> so even with raidz2 you could still have 3 drives die at once..
well that's one possible solution
[22:19]
<mnathani_> are the hotplug drives different, or just the same drives in an enclosure?
[22:20]
<mercutio> different people have different hotplug trays, but they all just take normal hard-disks
so supermicro trays are different to dell trays which are different to hp trays which are different to some of these nas things.
hp also changed their trays from g7 to g8
people like emc have special sector sizes / alternate firmware.
actually even hp have alternate firmware. most things take any old disk though.
except the expensive san solutions
[22:20]
<mnathani_> have you seen the vmware virtual san [22:23]
<mercutio> nope [22:23]
<mnathani_> it looks like they are pushing local storage
doesn't make sense to me
[22:23]
<mercutio> networking cost is quite a big issue with servers going above gigabit atm
well, it depends how you look at it.
sans are even more expensive :)
[22:24]
<mnathani_> dedicated 10gig should work for server storage access?
iscsi or NFS
[22:24]
<mercutio> yeah, but if you want to have dual switches, 10 gigabit to each server etc. [22:25]
<mnathani_> gets expensive fast [22:25]
<mercutio> it will use up more power, the cost is still quite a bit higher than gigabit etc.
local storage is still cheaper.
but it's less flexible
so the notion of having some kind of local storage with backup / redundancy out one of the extra ethernet ports isn't a terrible idea.
most servers come with 2 to 4 ethernet ports plus dedicated lights-out these days
which brings up the other issue of cables, cables, and more cables.
[22:25]
<mnathani_> usually used for link aggregation or nic teaming [22:26]
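A minimal sketch of the NIC teaming mnathani_ mentions, using systemd-networkd bonding (interface names are assumptions, and 802.3ad/LACP needs matching switch configuration):

    # /etc/systemd/network/10-bond0.netdev
    [NetDev]
    Name=bond0
    Kind=bond
    [Bond]
    Mode=802.3ad

    # /etc/systemd/network/11-bond0-slaves.network
    [Match]
    Name=eno1 eno2
    [Network]
    Bond=bond0

    # /etc/systemd/network/12-bond0.network
    [Match]
    Name=bond0
    [Network]
    DHCP=yes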
<mercutio> i kind of half like the idea of poe small servers :)
things like wireless access points are starting to shift to poe
this 2.5/5 gigabit ethernet may help a little
2.5 is quite a nice jump over gigabit for storage
have you ever tried using a ssd on a computer without sata3?
it still works pretty well.
and ethernet latency can actually be lower than ssd latency
[22:26]
<mnathani_> I felt jitter when I had an ssd
random short delays
for the most part it was fast
[22:29]
<mercutio> so assuming a server that caches lots of hot stuff in ram, you can have some benefits.
samsung had some issues with that
windows has some issues too
[22:29]
<mnathani_> I'd rather have constant access times even if they are slower [22:29]
<BryceBot> That's what she said!! [22:29]
<mercutio> also linux has buffer bloat if you write a lot
by default if you write at "full" speed you'll start getting higher access times for writes
i'm hoping that stuff gets improved soon :) network stuff has been improved a lot in that respect.
even nvme gives buffer bloat :/
people usually tend to say about 4k read speed, 4k write speed etc
but where the issue happens is doing synchronous 4k reads or writes while there's background sequential access.
it's one of the many reasons why benchmarking is difficult, and general benchmarks don't necessarily relate well to real usage.
but yeah it's even worse with hard-disks
try doing dd if=/dev/zero of=testzero and then at the same time run ioping on the partition
and latency will go up
then ^C the dd before you run out of space
and compare the times
[22:29]
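Spelled out, the test mercutio describes is roughly the following (the mount point is a placeholder; ioping is a separate package):

    # terminal 1: watch per-request latency on the filesystem
    ioping /mnt/data

    # terminal 2: generate sustained sequential writes
    dd if=/dev/zero of=/mnt/data/testzero bs=1M

    # ioping latencies climb while dd runs; ^C the dd before the disk fills,
    # remove the file, and compare the numbers
    rm /mnt/data/testzero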
<mnathani_> kind of don't have that 1tb ssd anymore
I wasn't utilizing it as much as I thought I would so I sold it
[22:34]
<mercutio> fwiw zfs is a lot better than ext4/xfs on linux with that
although opensolaris was better when they fixed the defaults.
and zfs benchmarks lower generally
basically zfs defaults to lower queue depth on disks; if you have a raid array or such that provides lots of hard-disks behind one lun you have to tune it up, but by default in the common situation it's a lot more sane.
[22:36]
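On ZFS on Linux those per-vdev queue depths are module parameters; a hedged example of raising them for the many-disks-behind-one-LUN case mercutio mentions (the numbers are illustrative, not recommendations):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_vdev_max_active=3000
    options zfs zfs_vdev_async_read_max_active=30
    options zfs zfs_vdev_async_write_max_active=30

    # or change one at runtime:
    echo 30 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active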
<mnathani_> are your raid arrays raw freebsd
or do you run freenas / nas4free
[22:38]
<mercutio> i use zol
that is zfs on linux
[22:38]
<mnathani_> @google zol [22:39]
<BryceBot> 2,300,000 total results returned for 'zol', here's 3
ZOL Zimbabwe (https://www.zol.co.zw/) Watch the video below - then sign up for ZOL Fibroniks today ... Residential. Find out more about ZOL Internet for your home >> ...
ZOL agence de développement web - Experte Php / Symfony (http://www.zol.fr/) ZOL, la petite agence web experte symfony à Lyon.
Fertiliteitscentrum | Ziekenhuis Oost-Limburg (http://www.zol.be/fertiliteitscentrum) Over het ZOL · Werken in het ZOL (externe link) · Raadplegingen · Pers · Aankoop-, leverings- en ... ZOL opent innovatief interventioneel centrum. 05/01/ 2016 ...
[22:39]
<mnathani_> comes up with a lot of weird results lol
ubuntu server?
[22:39]
<mercutio> nah i use arch :)
i prefer arch except for when there are other people doing things on the same server
[22:40]
<mnathani_> I don't like the install process for arch [22:41]
<mercutio> i love it [22:41]
<mnathani_> I like choices and next buttons etc [22:41]
<mercutio> much easier than ubuntu [22:42]
<mnathani_> I have installed it twice so far I think
let me try again now
512 megs sufficient?
[22:42]
<mercutio> arch and openbsd are my two favourite installers.
256 is enough
128 is probably enough even ;)
if you're comfortable partitioning, formatting file systems, installing grub etc then arch is no big deal
[22:42]
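For reference, the "partition, format, install grub" path mercutio means boils down to roughly this, condensed from the Arch wiki (a single BIOS/MBR disk is assumed and the package set may differ by release):

    fdisk /dev/sda                      # create one Linux partition, /dev/sda1
    mkfs.ext4 /dev/sda1
    mount /dev/sda1 /mnt
    pacstrap /mnt base linux grub
    genfstab -U /mnt >> /mnt/etc/fstab
    arch-chroot /mnt
    grub-install --target=i386-pc /dev/sda
    grub-mkconfig -o /boot/grub/grub.cfg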
<mnathani_> not comfortable
that's why I like an installer
teaches you a lot though, that arch install process
[22:44]
<mercutio> well i used to have to use a shell on ubuntu to do raid setup
and then i had to go back and forth with their stupid installer
because ubuntu doesn't like far=2 mdadm
or 3 disk raid10
or other "non-standard" configurations
i'll be back later
[22:44]
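The "non-standard" mdadm layouts mentioned are one-liners from a shell; a sketch with assumed device names:

    # far-2 layout RAID10 on two disks (better sequential reads)
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1

    # 3-disk RAID10 (two copies of each block spread over three devices)
    mdadm --create /dev/md1 --level=10 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1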
<mnathani_> k [22:47]
..... (idle for 20mn)
***Seji has quit IRC (*.net *.split)
JC_Denton has quit IRC (*.net *.split)
relrod has quit IRC (*.net *.split)
relrod_ has joined #arpnetworks
relrod_ has quit IRC (Changing host)
relrod_ has joined #arpnetworks
JC_Denton has joined #arpnetworks
Seji has joined #arpnetworks
[23:07]
.......... (idle for 49mn)
<brycec> 21:51:45 <@mercutio> maybe you should just build a new raid array, and make your existing one your backup? [23:57]
