qbit: will that thunder box I ordered be up today? :D
mike-burns: Does everyone else sing AC/DC to themselves whenever someone mentions Thunder in here?
mhoran: yes.
qbit: ya
i am probably gonna call the server angus
>.>
johnny-o: on the lower-cost VPS front, obviously everyone (including me) would agree paying less is better. if i'm comparing my ARPN VPS specs to other providers that make running openbsd easy, $20/mo is a bit on the high side -- especially when measured against the $40/mo thunder starter plan.
i think choosing based solely on price isn't a great approach to begin with, but that's usually where people start.
in the five years i've had my personal VPS, i think i've needed support twice -- and once was just to flip on the virtio stuff.
if keeping a higher price allows the quality of the service to stay top-notch, i'm all for it. a way to perhaps incentivize ongoing MRC from people while keeping it from being a cheap drive-by service would be to offer a discount after each year of service with a max discount after X years.
but frankly, "you get what you pay for" is a well-worn adage for a reason. qbit: yeah, ima move to ta thunda!
just.. as soon as the box is up <.<
If you ordered an ARP Thunder(tm) Cloud Dedicated Server, expect to
receive an email from us with an ETA on your server setup within 4 hours.
^ lies
-: brycec starts a revolt /s
brycec /would/ like a non-auto reply to his support email yesterday, though I realize my question was a bit complex
jpalmer: I've been away from IRC for a while, but I saw in the chat that ARP is using Ceph now. that's pretty awesome. I manage a pretty decent sized ceph cluster. hella impressive.
mercutio: jpalmer: yeh ceph is pretty cool
how've you found ceph?
jpalmer: mercutio: a couple minor nitpicks with it, but overall.. I'm a fan.
I'm hoping bluestore becomes officially supported in the next major release.
mercutio: have you tried using bluestore yet?
jpalmer: mercutio: I started out using ceph, with a 6 node proxmox cluster. using the proxmox "pveceph" install scripts. it only had 72 disks. went from that to a standalone cluster of.. much more :P
yeah, I've got bluestore in 3 lab environments.
mercutio: ah cool
jpalmer: but I wouldn't put legit data I cared about on it yet, since it's currently a preview.
mercutio: does bluestore improve latency of ssd?
jpalmer: well, it gets rid of the need to journal (eliminates the double-write penalty) so.. "kinda"
mercutio: heh
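(For reference, the "pveceph" install scripts jpalmer mentions drive a bring-up roughly like the sketch below; this is a guess at the workflow, not his configuration, and the network range, disk names, and exact subcommand spellings are Proxmox 4.x/5.x-era placeholders.)

```python
#!/usr/bin/env python3
"""Sketch of a pveceph-style bring-up on one Proxmox node.
All values (network CIDR, disk names) are placeholders."""
import subprocess

def run(cmd):
    # Echo and run a command on the node, stopping on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pveceph", "install"])                             # install the Ceph packages
run(["pveceph", "init", "--network", "10.10.10.0/24"])  # placeholder cluster network
run(["pveceph", "createmon"])                           # one monitor on this node
for disk in ("/dev/sdb", "/dev/sdc", "/dev/sdd"):       # placeholder data disks
    run(["pveceph", "createosd", disk])
```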
it seems that latency going down is where ceph could improve the most
jpalmer: bluestore allows you to split the data, metadata, and DB into 3 different devices. so depending on your use case, it can significantly improve performance.. or "not so much"
mercutio: i mean it's not terrible
and it's consistent
but single i/o depth is going to be latency bound
jpalmer: agreed
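(The data/metadata/DB split jpalmer describes amounts to building one BlueStore OSD across three devices; a minimal sketch, assuming Luminous-era ceph-volume flags and placeholder device paths:)

```python
#!/usr/bin/env python3
"""Sketch: one BlueStore OSD with bulk data on a spinning disk and the
RocksDB metadata / write-ahead log on flash. Device paths are placeholders."""
import subprocess

subprocess.run([
    "ceph-volume", "lvm", "create", "--bluestore",
    "--data", "/dev/sdb",             # object data on the slow, big device
    "--block.db", "/dev/nvme0n1p1",   # metadata (RocksDB) on fast flash
    "--block.wal", "/dev/nvme0n1p2",  # write-ahead log on fast flash
], check=True)
```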
oh, are you doing mixed SSD and spinning in the same pool? doing any kind of tiered caching?
mercutio: i kind of wish more programs could adapt to being less latency bound
and having multiple requests at once
there's multiple pools.
no tiered caching atm
jpalmer: yeah, I personally would stay away from the tiered cache pools.
mercutio: it seems like tiering caching is only good if you've got very high amounts of reads
BryceBot: That's what she said!!
mercutio: and that those reads are wide enough in scope that ram caching won't help
and the normal pattern is higher write load rather than read
jpalmer: my performance didn't increase by much, the complexity went up significantly, and the ceph team has essentially said they aren't going to continue developing it. (they haven't outright said they are going to abandon it though) but they are dropping support in their commercial offering.
mercutio: did you check your read/write mix?
jpalmer: yeah, I put it through a TON of performance scenarios.
mercutio: i mean of your usage before implementing tiering
jpalmer: but I got much better performance doing bcache, than native ceph tiered pools
mercutio: interesting. how safe is bcache these days?
jpalmer: asking in #proxmox and #ceph (in addition to my own environments) it seems a lot of people are using it for production data.
and bcache was specifically mentioned by the ceph devs, as one of the reasons they don't feel the need to continue developing cached tiers.
mercutio: ah interesting
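(A minimal sketch of the bcache arrangement being discussed: an SSD cache device in front of a spinning backing disk, with the resulting /dev/bcache0 then used for the OSD. Device names are placeholders, and bcache-tools is assumed to be installed.)

```python
#!/usr/bin/env python3
"""Sketch: put an SSD (bcache) cache in front of a spinning disk before
building an OSD on it. Device names are placeholders."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

run(["make-bcache", "-B", "/dev/sdb"])        # spinning disk as the backing device
run(["make-bcache", "-C", "/dev/nvme0n1p1"])  # SSD partition as the cache device

# Attach the cache set to the backing device; the OSD is then built on /dev/bcache0.
super_info = run(["bcache-super-show", "/dev/nvme0n1p1"])
cset_uuid = next(line.split()[-1] for line in super_info.splitlines()
                 if line.startswith("cset.uuid"))
with open("/sys/block/bcache0/bcache/attach", "w") as attach:
    attach.write(cset_uuid)
```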
yeh i love the idea of automatic data tiering
jpalmer: trying to find the mailing list post. they mentioned bcache and like 1-2 other options
mercutio: but i feel like ceph should be doing it rather than something like bcache
jpalmer: I think the point they were trying to make was: limited resources, we'd rather improve the product itself, rather than a feature that others are getting right already.
mercutio: i'd be more keen on less accessed data automatically moving to slower storage, and more accessed data moving to faster storage rather than having two layers though
i mean, i think it's common knowledge nowadays that databases can go faster on ssd than hard-disk
but that log files really don't matter where they are..
but if that stuff could be automatic..
jpalmer: I think it's only a matter of time before that scenario becomes the norm. especially with all these "commodity storage with SAN-like feature sets" projects gaining popularity.
mercutio: application level hinting and OS hinting would be nice
yeh i'd like to see zfs do the same ;)
***: mkb has joined #arpnetworks
jpalmer: right now, I keep my SSD's in 1 pool, and spinning in another for most of my ceph clusters, and just choose the storage medium based on the expected needs. easy enough to migrate them (online or offline) as needed.
mercutio: yeah that seems to be the way to go
jpalmer: also, since I saw proxmox mentioned earlier.. proxmox and ceph work beautifully together. :)
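(The one-SSD-pool, one-spinning-pool layout jpalmer describes can be expressed with CRUSH device classes; a sketch assuming Luminous-era syntax, with rule names, pool names, and PG counts as placeholders:)

```python
#!/usr/bin/env python3
"""Sketch: one all-SSD pool and one all-spinning pool via CRUSH device classes.
Rule names, pool names, and PG counts are placeholders."""
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# One replicated rule per device class, with host as the failure domain.
ceph("osd", "crush", "rule", "create-replicated", "fast", "default", "host", "ssd")
ceph("osd", "crush", "rule", "create-replicated", "slow", "default", "host", "hdd")

# Pin one pool to each rule; pick the pool per workload (databases fast, logs slow).
ceph("osd", "pool", "create", "rbd-fast", "128")
ceph("osd", "pool", "set", "rbd-fast", "crush_rule", "fast")
ceph("osd", "pool", "create", "rbd-slow", "512")
ceph("osd", "pool", "set", "rbd-slow", "crush_rule", "slow")
```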
I hear docker has added support for ceph (possibly via a 3rd party plugin/module) but I haven't messed with docker in a while to confirm.
mercutio: have you played with nvme at all yet?
jpalmer: we have one in a single machine in the lab, but it's not part of the ceph cluster
mercutio: ah
***: HAS_A_BANANA has quit IRC (Ping timeout: 246 seconds)
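(Whatever the docker plugin in question is, underneath it would mostly be creating and attaching RBD images; a sketch of that using the python-rados / python-rbd bindings, with pool name, image name, and size as placeholders:)

```python
#!/usr/bin/env python3
"""Sketch: create and open an RBD image the way a volume plugin might,
using the python-rados / python-rbd bindings. All names are placeholders."""
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd-fast")  # placeholder pool name
    try:
        rbd.RBD().create(ioctx, "docker-vol-1", 10 * 1024 ** 3)  # 10 GiB image
        with rbd.Image(ioctx, "docker-vol-1") as image:
            print("created", image.size(), "byte image")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```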
CoBryceMatrixBot has quit IRC (Ping timeout: 246 seconds)
HAS_A_BANANA has joined #arpnetworks
CoBryceMatrixBot has joined #arpnetworks
mkb has quit IRC (Quit: leaving)
HAS_A_BANANA: BryceBot: no
BryceBot: Oh, okay... I'm sorry. 'it seems like tiering caching is only good if you've got very high amounts of reads'
***: mkb has joined #arpnetworks
ziyourenxiang has joined #arpnetworks
Nahual has joined #arpnetworks
r0ni has joined #arpnetworks
Nahual has quit IRC (Quit: Leaving.)
mkb has quit IRC (Quit: leaving)
karstensrage has joined #arpnetworks
nathani has quit IRC (Quit: WeeChat 1.5)