#arpnetworks 2019-05-14, Tue

***[FBI] starts logging #arpnetworks at Tue May 14 06:19:33 2019
[FBI] has joined #arpnetworks
[06:19]
<up_the_irons> mnathani: I invited [FBI] back
LG should be back up today
[06:20]
...... (idle for 25mn)
<mnathani> Thank you! [06:45]
.......................... (idle for 2h8mn)
***ziyourenxiang has quit IRC (Ping timeout: 268 seconds) [08:53]
............................................ (idle for 3h38mn)
<brycec> up_the_irons: Just curious, what kind of drives is ARP buying these days? I'm trying to shop and having a helluva time (maybe I'm just out of practice) finding *new* 4-6TB drives. I don't need more than that (a modest initial roll-out, 4 servers @ 4 drives each), under 100TB (raw) is more than enough. But... gotdamn I'm having a time. It seems the brand new drives I can find these days are 10-14TB (wow)
and they're $300+/ea (breaking the bank); anything <8TB stopped being made at least 3 years ago, it seems.
(I'm obviously looking at enterprise/datacenter "grade" drives, not WD Blacks or whatever)
(PS I got your sales response and I'll definitely be following up as soon as I get this initial proposal done... which is currently hung up on sourcing fucking drives.)
[12:31]
<mercutio> huge hard-disks are bad for VM workloads, ok for archival [12:33]
<brycec> mercutio: precisely why I'm avoiding them too [12:33]
<mercutio> well 4tb is huge [12:33]
<brycec> You think? It was my upper limit, but it just seemed "large", not "huge" [12:34]
<mercutio> well most people are doing ssd these days
and not 4tb ssd :)
[12:34]
<brycec> In any case, I'm about to start budgeting SSDs instead
lol
[12:34]
<mercutio> you really want as many drives as you can get
we have hundreds
[12:35]
<brycec> I also feel really out of touch with buying hard drives in general. There was a time I felt comfortable buying from Amazon or Newegg, provided the seller was Amazon/Newegg and not some rando third party that's offloading "renewed" inventory. [12:35]
<mercutio> 4x4 is the wrong approach.. [12:36]
<brycec> I get that, or at least I thought I did (quantity over capacity).
4*8?
[12:36]
<mercutio> well you also want a journal disk usually
so you want servers that can take 12+ hard-disks really
[12:36]
<brycec> Already accounted for that: dedicated SSD for that, dedicated SSD for OS [12:36]
<mercutio> you want an ssd per 3 or 4 disks
so like in a 12-disk server you may have 3 ssd and 9 data disks
it's changing a bit with bluestore..
[12:36]
<brycec> I'm limited by overall rackspace so I'm going with 2U servers, 12 slots (plus 2 for OS & journal) [12:37]
<mercutio> i think with bluestore it can journal rather than full-copy data, so you may be able to have slightly fewer ssd [12:37]
<brycec> Yeah I've heard Bluestore is quite improved. [12:38]
<mercutio> and if you're not logging shit loads then you should be able to use the same disks for journal and OS
although that does complicate things a little
[12:39]
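(A minimal sketch of the layout mercutio is describing, assuming Ceph's ceph-volume tool; the device names are hypothetical:)

    # BlueStore OSD: data on a spinning disk, RocksDB/WAL journal on a
    # shared SSD partition (one SSD typically serving 3-4 such OSDs)
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sda1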
<brycec> Yeah I intend to do a bit of benchmarking there - RAID1 the 2 SSDs vs OS/journal split. [12:40]
<mercutio> with mdadm you can do raid 1 with part of the disk and not all of it
you can also do things like 3 way mirror for OS
[12:41]
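(A minimal sketch of the mdadm layout mercutio mentions: mirror just one partition of each SSD for the OS and leave the rest free for journals; partition names are hypothetical:)

    # RAID1 across matching partitions rather than the whole disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # or a 3-way mirror for the OS if a third SSD is available
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1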
* brycec grumbles - Amazon has a limit of 20 drives/customer [12:43]
<mercutio> curious, i wonder why that is
did you scope out what performance you need?
[12:44]
<brycec> They probably don't trust me to not be starting my own reseller
mercutio: From my users, "better than it is now"
[12:44]
<mercutio> damn, it costs the same for 5tb and 2tb?
what iops can you do now?
[12:45]
<brycec> I honestly couldn't say :/ [12:45]
<mercutio> have you tried fio?
ioping can be handy too
like you can do ioping -R /dev/vda
and it'll tell you your access times
it's pretty latency dependent
[12:45]
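(A minimal sketch of the ioping usage above; /dev/vda is from mercutio's example and the request count is arbitrary:)

    # per-request access times, 10 read requests
    ioping -c 10 /dev/vda
    # seek-rate test, as mentioned above
    ioping -R /dev/vda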
<brycec> Thanks, I'll check those out [12:46]
<mercutio> fio is a little more difficult to work with
but you can do things like simultaneous 4k, or 64k, or whatever requests
and you can do read/write, mixed etc to a file
it's basically the best tool for in depth benchmarking
but you have to figure out what it is that you want to benchmark :)
also for the journal you want an ssd that has a battery-backed write cache
or power loss protection
as it means you get a lot lower latency...
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
[12:47]
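(A minimal sketch of the fio runs mercutio describes, plus the 4k sync-write journal test along the lines of the linked post; file and device paths are hypothetical, and the last command writes to the raw device, destroying its data:)

    # random 4k reads against a test file, direct I/O, 60 seconds
    fio --name=randread --ioengine=libaio --direct=1 --rw=randread \
        --bs=4k --size=1G --runtime=60 --time_based --filename=/tmp/fio.test
    # mixed random read/write (70% reads) with 64k requests
    fio --name=randrw --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
        --bs=64k --size=1G --runtime=60 --time_based --filename=/tmp/fio.test
    # journal suitability: 4k O_SYNC writes straight to the SSD (destructive!)
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --group_reporting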
<dne> brycec: I got 4TB HGST Ultrastar 7K6000 drives for our office bulk storage array last year - they've been fine (not using Ceph though) [12:55]
***acf_ has joined #arpnetworks [12:55]
<mercutio> damn, they have 14tb disks now
i wonder, for such a small cluster, if it is better to just go all-ssd from the get-go
especially when you say density is important
you can get servers that take a lot of 2.5" disks
[12:56]
.............................................. (idle for 3h47mn)
***ziyourenxiang has joined #arpnetworks [16:45]
