#arpnetworks 2017-05-12, Fri


qbit: hola [05:49]
.... (idle for 17mn)
will that thunder box I ordered be up today? :D [06:06]
mike-burns: Does everyone else sing AC/DC to themselves whenever someone mentions Thunder in here? [06:15]
mhoran: yes. [06:18]
qbit: ya
i am probably gonna call the server angus
>.>
[06:27]
....................... (idle for 1h52mn)
johnny-o: on the lower-cost VPS front, obviously everyone (including me) would agree paying less is better. if i'm comparing my ARPN VPS specs to other providers that make running openbsd easy, $20/mo is a bit on the high side -- especially when measured against the $40/mo thunder starter plan.
i think choosing based solely on price isn't a great approach to begin with, but that's usually where people start.
in the five years i've had my personal VPS, i think i've needed support twice -- and once was just to flip on the virtio stuff.
if keeping a higher price allows the quality of the service to stay top-notch, i'm all for it. a way to perhaps incentivize ongoing MRC from people while keeping it from being a cheap drive-by service would be to offer a discount after each year of service with a max discount after X years.
but frankly, "you get what you pay for" is a well-worn adage for a reason.
[08:20]
........ (idle for 35mn)
qbit: yeah, ima move to ta thunda! [08:59]
just.. as soon as the box is up <.<
If you ordered an ARP Thunder(tm) Cloud Dedicated Server, expect to
receive an email from us with an ETA on your server setup within 4 hours.
^ lies
[09:04]
* brycec starts a revolt /s
* brycec /would/ like a non-auto reply to his support email yesterday, though I realize my question was a bit complex
[09:10]
................ (idle for 1h15mn)
jpalmer: I've been away from IRC for a while, but I saw in the chat that ARP is using Ceph now. that's pretty awesome. I manage a pretty decent sized ceph cluster. hella impressive. [10:29]
mercutio: jpalmer: yeh ceph is pretty cool
how've you found ceph?
[10:30]
jpalmer: mercutio: a couple minor nitpicks with it, but overall.. I'm a fan.
I'm hoping bluestore becomes officially supported in the next major release.
[10:34]
mercutio: have you tried using bluestore yet? [10:34]
jpalmer: mercutio: I started out using ceph, with a 6 node proxmox cluster. using the proxmox "pveceph" install scripts. it only had 72 disks. went from that to a standalone cluster of.. much more :P
yeah, I've got bluestore in 3 lab environments.
[10:35]
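For reference, the "pveceph" helpers jpalmer mentions bootstrap a Proxmox-managed Ceph cluster roughly as sketched below (commands from the Proxmox VE 4.x/5.x era of this log; the network and device names are placeholders, so treat this as an illustrative sketch rather than a recipe):

    # Install the Ceph packages on each Proxmox node
    pveceph install

    # Initialize the cluster-wide Ceph config, pointing at the storage network
    pveceph init --network 10.10.10.0/24

    # Create a monitor on (at least) three of the nodes
    pveceph createmon

    # Turn each raw disk into an OSD (repeat per disk, per node)
    pveceph createosd /dev/sdb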
mercutio: ah cool [10:35]
jpalmer: but I wouldn't put legit data I cared about on it yet, since it's currently a preview. [10:35]
mercutio: does bluestore improve latency of ssd? [10:35]
jpalmer: well, it gets rid of the need to journal (eliminates the double-write penalty) so.. "kinda" [10:36]
mercutio: heh
it seems that latency going down is where ceph could improve the most
[10:37]
jpalmer: bluestore allows you to split the data, metadata, and DB across 3 different devices. so depending on your use case, it can significantly improve performance.. or "not so much" [10:37]
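The split jpalmer describes is chosen when the OSD is prepared. With the Luminous-era ceph-disk tool it looked roughly like the sketch below; the --block.db/--block.wal flags are recalled from that release and should be treated as an assumption (later releases moved to ceph-volume), and the devices are placeholders:

    # Object data stays on the slow bulk device; RocksDB metadata and the
    # write-ahead log are pushed onto flash
    ceph-disk prepare --bluestore /dev/sdb \
        --block.db /dev/nvme0n1 \
        --block.wal /dev/nvme0n1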
mercutio: i mean it's not terrible
and it's consistent
but single i/o depth is going to be latency bound
[10:37]
jpalmer: agreed
oh, are you doing mixed SSD and spinning in the same pool? doing any kind of tiered caching?
[10:39]
mercutio: i kind of wish more programs could adapt to being less latency bound
and having multiple requests at once
there's multiple pools.
no tiered caching atm
[10:39]
jpalmer: yeah, I personally would stay away from the tiered cache pools. [10:40]
mercutio: it seems like tiering caching is only good if you've got very high amounts of reads [10:40]
BryceBot: That's what she said!! [10:40]
mercutioand that those reads are wide enough in scope that ram caching won't help
and the normal pattern is higher write load rather than read
[10:41]
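For context, the "tiered cache pools" under discussion are Ceph's built-in cache tiering, set up roughly as below (the pool names "cold" and "hot" are made up and the sizing values are arbitrary examples):

    # Overlay a fast SSD pool ("hot") on top of a spinning pool ("cold")
    ceph osd tier add cold hot
    ceph osd tier cache-mode hot writeback
    ceph osd tier set-overlay cold hot

    # Cache tiering needs hit-set tracking plus flush/evict targets
    ceph osd pool set hot hit_set_type bloom
    ceph osd pool set hot target_max_bytes 1099511627776
    ceph osd pool set hot cache_target_dirty_ratio 0.4
    ceph osd pool set hot cache_target_full_ratio 0.8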
jpalmer: my performance didn't increase by much, the complexity went up significantly, and the ceph team has essentially said they aren't going to continue developing it. (they haven't outright said they are going to abandon it though) but they are dropping support in their commercial offering. [10:41]
mercutio: did you check your read/write mix? [10:42]
jpalmer: yeah, I put it through a TON of performance scenarios. [10:42]
mercutio: i mean of your usage before implementing tiering [10:42]
jpalmer: but I got much better performance doing bcache than native ceph tiered pools [10:42]
mercutio: interesting. how safe is bcache these days? [10:43]
jpalmer: asking in #proxmox and #ceph (in addition to my own environments) it seems a lot of people are using it for production data.
and bcache was specifically mentioned by the ceph devs, as one of the reasons they don't feel the need to continue developing cached tiers.
[10:43]
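The bcache approach sits below Ceph entirely: each spinning OSD disk gets a flash cache at the block layer, and the OSD is then built on the resulting bcache device. A minimal sketch with bcache-tools, using placeholder device names:

    # Format the SSD partition as a cache device and the HDD as a backing device
    make-bcache -C /dev/nvme0n1p1
    make-bcache -B /dev/sdb

    # Attach the backing device to the cache set (UUID printed by the -C step),
    # enable writeback, then build the OSD on the resulting /dev/bcache0
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    echo writeback > /sys/block/bcache0/bcache/cache_mode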
mercutio: ah interesting
yeh i love the idea of automatic data tiering
[10:44]
jpalmer: trying to find the mailing list post. they mentioned bcache and like 1-2 other options [10:45]
mercutio: but i feel like ceph should be doing it rather than something like bcache [10:45]
jpalmer: I think the point they were trying to make was: limited resources, we'd rather improve the product itself, rather than a feature that others are getting right already. [10:45]
mercutio: i'd be more keen on less accessed data automatically moving to slower storage, and more accessed data moving to faster storage, rather than having two layers though
i mean, i think it's common knowledge nowadays that databases can go faster on ssd than hard-disk
but that log files really don't matter where they are..
but if that stuff could be automatic..
[10:46]
jpalmer: I think it's only a matter of time before that scenario becomes the norm. especially with all these "commodity storage with SAN-like feature sets" projects gaining popularity. [10:48]
mercutio: application level hinting and OS hinting would be nice
yeh i'd like to see zfs do the same ;)
[10:49]
***mkb has joined #arpnetworks [10:49]
jpalmer: right now, I keep my SSDs in 1 pool, and spinning in another for most of my ceph clusters, and just choose the storage medium based on the expected needs. easy enough to migrate them (online or offline) as needed. [10:49]
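Keeping SSDs and spinning disks in separate pools comes down to CRUSH rules. At the time of this log that generally meant hand-editing the CRUSH map; with device classes (Luminous and later) it reduces to something like the sketch below, where the rule and pool names are hypothetical:

    # One replicated rule per device class
    ceph osd crush rule create-replicated fast-rule default host ssd
    ceph osd crush rule create-replicated slow-rule default host hdd

    # Point each pool at the matching rule; latency-sensitive workloads go in
    # "fast", bulk/log-style workloads in "slow"
    ceph osd pool set fast crush_rule fast-rule
    ceph osd pool set slow crush_rule slow-rule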
mercutio: yeah that seems to be the way to go [10:50]
jpalmer: also, since I saw proxmox mentioned earlier.. proxmox and ceph work beautifully together. :)
I hear docker has added support for ceph (possibly via a 3rd party plugin/module) but I haven't messed with docker in a while to confirm.
[10:50]
mercutio: have you played with nvme at all yet? [10:55]
jpalmer: we have one in a single machine in the lab, but it's not part of the ceph cluster [11:01]
mercutio: ah [11:02]
................. (idle for 1h21mn)
***HAS_A_BANANA has quit IRC (Ping timeout: 246 seconds)
CoBryceMatrixBot has quit IRC (Ping timeout: 246 seconds)
HAS_A_BANANA has joined #arpnetworks
CoBryceMatrixBot has joined #arpnetworks
[12:23]
................................ (idle for 2h38mn)
mkb has quit IRC (Quit: leaving) [15:02]
........... (idle for 53mn)
HAS_A_BANANA: BryceBot: no [15:55]
BryceBot: Oh, okay... I'm sorry. 'it seems like tiering caching is only good if you've got very high amounts of reads' [15:55]
.............. (idle for 1h7mn)
***mkb has joined #arpnetworks [17:02]
........... (idle for 50mn)
ziyourenxiang has joined #arpnetworks [17:52]
....... (idle for 31mn)
Nahual has joined #arpnetworks
r0ni has joined #arpnetworks
[18:23]
................................... (idle for 2h52mn)
Nahual has quit IRC (Quit: Leaving.) [21:17]
....................... (idle for 1h50mn)
mkb has quit IRC (Quit: leaving)
karstensrage has joined #arpnetworks
[23:07]
........... (idle for 50mn)
nathani has quit IRC (Quit: WeeChat 1.5) [23:59]
