[11:04] *** mnathani_ has quit IRC (Ping timeout: 252 seconds)
[11:06] *** mnathani has quit IRC (Ping timeout: 252 seconds)
[12:34] *** mnathani has joined #arpnetworks
[16:13] *** mnathani_ has joined #arpnetworks
[16:46] what's a decent web-based torrent client that can run on a linux server?
[16:46] for downloading all those legit linux isos of course :-)
[17:08] mnathani: transmission-daemon
[17:14] deluge came up in my searches also
[17:15] brycec: thanks - will try that out
[17:16] I've been using transmission headless for years and years... probably about 7 or 8 years now.
[17:16] I've tried rtorrent and while it was super-configurable, it was overkill
[17:16] And less pretty
[18:19] none of the linux ones are that efficient
[18:19] if you're using a shared machine
[18:19] utorrent works under wine i've heard.
[18:20] with the nature of torrents, if you're seeding lots of stuff there's a lot of random disk i/o
[18:22] but yeah transmission is pretty nifty
[18:23] except apparently it has some rewrite issue with zfs where it inflates disk i/o
[18:26] my use case involves a virtual box dedicated to torrents and such
[18:26] with shared storage that my xbmc on raspberry pi can access
[18:26] s/xbmc/kodi
[18:26] with shared storage that my kodi on raspberry pi can access
[18:26] in that situation i'd probably just advocate using windows and utorrent :)
[18:27] but yeah it's not too bad for downloads
[18:27] it's mostly uploading that's the issue
[18:27] don't plan on seeding much
[18:27] how is Germany coming along?
[18:28] it'll be ready when it's ready :)
[19:25] I've never found Transmission to be particularly "inefficient" - it can keep up, both downloading and uploading, with my 100/10 connection.
[19:28] (I run it on my home NAS, FreeBSD, alongside a bunch of other jails.)
[19:46] do you run it on raidz?
[19:47] https://lists.freebsd.org/pipermail/freebsd-fs/2010-March/007928.html
[19:48] struggling to find much other than that
[20:40] No, single straight disk (on top of a RAID-6 -- wasn't meant to be built this way/permanent, but that's a story for some other day)
[20:41] (I'm now at the point that I have to find somewhere temporary + fast + safe that I can store 10TB+ in order to tear down that array and rebuild it as RAIDZ2 on top of JBODs. And probably increase its capacity. Or reconsider maybe mirroring across two raidz's, etc)
[21:47] sounds like fun
[21:48] preferably somewhere close :)
[21:48] i'm kind of a fan of mixed raid levels nowadays
[21:49] like striped mirrored raid and raidz
[21:51] a pair of 3-disk raidzs is quite a bit faster than 3 mirrored pairs or a raidz2 with 4 data disks on 6 disks.
[21:51] although raidz2 is more resilient
[21:55] maybe you should just build a new raid array, and make your existing one your backup?
[21:55] although that would be like 9x2tb disks or something..
[22:05] have you found that larger disks are not as reliable as smaller 2tb?
[22:05] 3,4,5tb for example
[22:15] i've only personally experienced problems with seagate 3tb
[22:16] but there is concern with larger drives also that another drive can die during a rebuild, which is helped somewhat by using something like raidz2
[22:16] 3tb is about as big as they normally go at 7200 rpm, so you end up getting lower rpm drives, and having the seek performance of a single drive over your raid array if using raidz.
[22:17] what does appear to be happening is a slow shift towards 2.5" disks, which means you can end up having 12+ disks easily.
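A rough sketch of the 6-disk layouts being compared at 21:51 above, in zpool create syntax; the pool name "tank" and the da0-da5 device names are placeholders, not anything from the conversation:

    # two striped 3-disk raidz vdevs: 4 data disks of capacity, faster
    zpool create tank raidz da0 da1 da2 raidz da3 da4 da5

    # one 6-disk raidz2 vdev: also 4 data disks, but any 2 disks can fail
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # three striped mirror pairs: best random i/o, only 3 disks of capacity
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5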
[22:18] they have lower capacity but also lower power consumption. and slowly people are starting to have more in the way of hot standby disks.
[22:18] it's still a little scary though, in the past there have been multiple incidents with drives dying around the same time due to "batch" issues.
[22:19] it used to be quite a common occurrence back around the 4/9gb scsi days.
[22:19] would you buy a set from different manufacturers, say hitachi, wd and seagate, to offset that batch issue?
[22:19] so even with raidz2 you could still have 3 drives die at once..
[22:20] well that's one possible solution
[22:20] are the hotplug drives different, or just the same drives in an enclosure
[22:20] ?
[22:20] different people have different hotplug trays, but they all just take normal hard-disks
[22:20] so supermicro trays are different to dell trays, which are different to hp trays, which are different to some of these nas things.
[22:21] hp also changed their trays from g7 to g8
[22:22] people like emc have special sector sizes / alternate firmware.
[22:22] actually even hp have alternate firmware. most things take any old disk though.
[22:22] except the expensive san solutions
[22:23] have you seen the vmware virtual san
[22:23] nope
[22:23] it looks like they are pushing local storage
[22:23] doesn't make sense to me
[22:24] networking cost is quite a big issue with servers going above gigabit atm
[22:24] well, it depends how you look at it.
[22:24] sans are even more expensive :)
[22:24] dedicated 10gig should work for server storage access?
[22:24] iscsi or NFS
[22:25] yeah, but if you want to have dual switches, 10 gigabit to each server etc.
[22:25] gets expensive fast
[22:25] it will use up more power, and the cost is still quite a bit higher than gigabit etc.
[22:25] local storage is still cheaper.
[22:25] but it's less flexible
[22:25] so the notion of having some kind of local storage with backup / redundancy out one of the extra ethernet ports isn't a terrible idea.
[22:26] most servers come with 2 to 4 ethernet ports plus dedicated lights-out these days
[22:26] which brings up the other issue of cables, cables, and more cables.
[22:26] usually used for link aggregation or nic teaming
[22:26] i kind of half like the idea of small poe servers :)
[22:26] things like wireless access points are starting to shift to poe
[22:27] this 2.5/5 gigabit ethernet may help a little
[22:27] 2.5 is quite a nice jump over gigabit for storage
[22:27] have you ever tried using an ssd on a computer without sata3?
[22:27] it still works pretty well.
[22:28] and ethernet latency can actually be lower than ssd latency
[22:29] I felt jitter when I had an ssd
[22:29] random short delays
[22:29] for the most part it was fast
[22:29] so assuming a server that caches lots of hot stuff in ram, you can get some benefits.
[22:29] samsung had some issues with that
[22:29] windows has some issues too
[22:29] i'd rather have constant access times even if they are slower
[22:29] That's what she said!!
[22:29] also linux has buffer bloat if you write a lot
[22:30] by default if you write at "full" speed you'll start getting higher access times for writes
[22:30] i'm hoping that stuff gets improved soon :) network stuff has been improved a lot in that respect.
[22:31] even nvme gives buffer bloat :/
[22:31] people usually tend to talk about 4k read speed, 4k write speed etc
[22:32] but where the issue happens is doing synchronous 4k reads or writes while there's background sequential access.
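A minimal sketch of one way to tame the Linux writeback "buffer bloat" complained about above, by capping how much dirty data can pile up before writers get throttled; the byte values are arbitrary examples for illustration, not recommendations from the conversation:

    # start background writeback earlier and block writers sooner
    # (values are illustrative; tune for your own ram and disks)
    sysctl -w vm.dirty_background_bytes=67108864   # begin flushing at 64 MiB dirty
    sysctl -w vm.dirty_bytes=268435456             # throttle writers at 256 MiB dirty

    # persist across reboots
    printf 'vm.dirty_background_bytes = 67108864\nvm.dirty_bytes = 268435456\n' \
        > /etc/sysctl.d/99-writeback.conf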
[22:32] it's one of the many reasons why benchmarking is difficult, and general benchmarks don't necessarily relate well to real usage.
[22:33] but yeah it's even worse with hard-disks
[22:33] try doing dd if=/dev/zero of=testzero and then at the same time run ioping on the partition
[22:33] and latency will go up
[22:33] then ^C the dd before you run out of space
[22:33] and compare the times
[22:34] kind of don't have that 1tb ssd anymore
[22:34] I wasn't utilizing it as much as I thought I would so I sold it
[22:36] fwiw zfs is a lot better than ext4/xfs on linux with that
[22:36] although opensolaris was better when they fixed the defaults.
[22:36] and zfs benchmarks lower generally
[22:37] basically zfs defaults to a lower queue depth on disks; if you have a raid array or such that provides lots of hard-disks behind one lun you have to tune it up, but by default in the common situation it's a lot more sane.
[22:38] are your raid arrays raw freebsd
[22:38] or do you run freenas / nas4free
[22:38] i use zol
[22:39] that is zfs on linux
[22:39] @google zol
[22:39] 2,300,000 total results returned for 'zol', here's 3
[22:39] ZOL Zimbabwe (https://www.zol.co.zw/) Watch the video below - then sign up for ZOL Fibroniks today ... Residential. Find out more about ZOL Internet for your home >> ...
[22:39] ZOL agence de développement web - Experte Php / Symfony (http://www.zol.fr/) ZOL, the small web agency specialising in Symfony, in Lyon.
[22:39] Fertiliteitscentrum | Ziekenhuis Oost-Limburg (http://www.zol.be/fertiliteitscentrum) About ZOL · Working at ZOL (external link) · Consultations · Press · Purchasing, delivery and ... ZOL opens innovative interventional centre. 05/01/2016 ...
[22:39] comes up with a lot of weird results lol
[22:40] ubuntu server?
[22:40] nah i use arch :)
[22:41] i prefer arch except for when there are other people doing things on the same server
[22:41] I don't like the install process for arch
[22:41] i love it
[22:41] I like choices and next buttons etc
[22:42] much easier than ubuntu
[22:42] I have installed it twice so far I think
[22:42] let me try again now
[22:42] 512 megs sufficient?
[22:42] arch and openbsd are my two favourite installers.
[22:42] 256 is enough
[22:43] 128 is probably enough even ;)
[22:43] if you're comfortable partitioning, formatting filesystems, installing grub etc then arch is no big deal
[22:44] not comfortable
[22:44] that's why I like an installer
[22:44] teaches you a lot though, that arch install process
[22:44] well i used to have to use a shell on ubuntu to do raid setup
[22:44] and then i had to go back and forth with their stupid installer
[22:44] because ubuntu doesn't like far=2 mdadm
[22:45] or 3-disk raid10
[22:45] or other "non-standard" configurations
[22:47] i'll be back later
[22:47] k
[23:07] *** Seji has quit IRC (*.net *.split)
[23:07] *** JC_Denton has quit IRC (*.net *.split)
[23:07] *** relrod has quit IRC (*.net *.split)
[23:07] *** relrod_ has joined #arpnetworks
[23:08] *** relrod_ has quit IRC (Changing host)
[23:08] *** relrod_ has joined #arpnetworks
[23:08] *** JC_Denton has joined #arpnetworks
[23:08] *** Seji has joined #arpnetworks
[23:57] 21:51:45 <@mercutio> maybe you should just build a new raid array, and make your existing one your backup?
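For reference, the latency-under-load test described at 22:33 above, written out as a runnable sketch; /mnt/data is a placeholder mount point, and ioping has to be installed separately:

    # background sequential write (stop it before the disk fills up)
    dd if=/dev/zero of=/mnt/data/testzero bs=1M &
    DD_PID=$!

    # measure i/o latency on the same filesystem while the write runs
    ioping -c 20 /mnt/data

    # stop the write and clean up, then compare against ioping on an idle disk
    kill $DD_PID
    rm /mnt/data/testzero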