hola, will that thunder box I ordered be up today? :D
Does everyone else sing AC/DC to themselves whenever someone mentions Thunder in here?
yes.
ya i am probably gonna call the server angus >.>
on the lower-cost VPS front, obviously everyone (including me) would agree paying less is better.
if i'm comparing my ARPN VPS specs to other providers that make running openbsd easy, $20/mo is a bit on the high side -- especially when measured against the $40/mo thunder starter plan.
i think choosing based solely on price isn't a great approach to begin with, but that's usually where people start.
in the five years i've had my personal VPS, i think i've needed support twice -- and once was just to flip on the virtio stuff.
if keeping a higher price allows the quality of the service to stay top-notch, i'm all for it.
a way to perhaps incentivize ongoing MRC from people while keeping it from being a cheap drive-by service would be to offer a discount after each year of service, with a max discount after X years.
but frankly, "you get what you pay for" is a well-worn adage for a reason.
yeah, ima move to ta thunda! just.. as soon as the box is up <.<
"If you ordered an ARP Thunder(tm) Cloud Dedicated Server, expect to receive an email from us with an ETA on your server setup within 4 hours."
^ lies
I've been away from IRC for a while, but I saw in the chat that ARP is using Ceph now. that's pretty awesome. I manage a pretty decent sized ceph cluster.
hella impressive.
jpalmer: yeh ceph is pretty cool
how've you found ceph?
mercutio: a couple minor nitpicks with it, but overall.. I'm a fan. I'm hoping bluestore becomes officially supported in the next major release.
have you tried using bluestore yet?
mercutio: I started out using ceph with a 6 node proxmox cluster, using the proxmox "pveceph" install scripts. it only had 72 disks. went from that to a standalone cluster of.. much more :P
yeah, I've got bluestore in 3 lab environments.
ah cool
but I wouldn't put legit data I cared about on it yet, since it's currently a preview.
does bluestore improve latency on ssd?
well, it gets rid of the need to journal (eliminates the double-write penalty), so.. "kinda"
heh
it seems that latency going down is where ceph could improve the most
bluestore allows you to split the data, metadata, and DB onto 3 different devices. so depending on your use case, it can significantly improve performance.. or "not so much"
i mean it's not terrible, and it's consistent, but single i/o depth is going to be latency bound
agreed
oh, are you doing mixed SSD and spinning in the same pool? doing any kind of tiered caching?
i kind of wish more programs could adapt to being less latency bound and having multiple requests at once
there's multiple pools. no tiered caching atm
yeah, I personally would stay away from the tiered cache pools.
it seems like tiered caching is only good if you've got very high amounts of reads
That's what she said!!
and that those reads are wide enough in scope that ram caching won't help
and the normal pattern is higher write load rather than read
my performance didn't increase by much, the complexity went up significantly, and the ceph team has essentially said they aren't going to continue developing it. (they haven't outright said they are going to abandon it though)
but they are dropping support for it in their commercial offering.
did you check your read/write mix?
yeah, I put it through a TON of performance scenarios.
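A rough sketch of the arithmetic behind the "single i/o depth is going to be latency bound" point above. At queue depth 1, each request waits for the previous one to complete, so per-op latency, not device bandwidth, caps throughput. The latencies below are illustrative assumptions, not measurements from any cluster mentioned here:

    # Illustrative only: why queue-depth-1 I/O is latency bound.
    def qd1_iops(latency_ms):
        # At QD1 each op waits for the previous one to finish,
        # so IOPS is simply 1 / per-op latency.
        return 1000.0 / latency_ms

    # Assumed ballpark per-op latencies, in milliseconds:
    for device, lat_ms in [("local NVMe", 0.05),
                           ("local SATA SSD", 0.2),
                           ("networked Ceph write", 2.0)]:
        print("%-20s ~%7.0f IOPS at QD1" % (device, qd1_iops(lat_ms)))

Under these assumptions a ~2 ms Ceph write path caps out around 500 IOPS at queue depth 1 no matter how fast the underlying media is, which is why shaving round-trip latency helps more than faster disks, and why applications that keep multiple requests in flight fare better.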
i mean of your usage before implementing tiering
but I got much better performance doing bcache than native ceph tiered pools
interesting. how safe is bcache these days?
asking in #proxmox and #ceph (in addition to my own environments), it seems a lot of people are using it for production data.
and bcache was specifically mentioned by the ceph devs as one of the reasons they don't feel the need to continue developing cache tiers.
ah interesting
yeh i love the idea of automatic data tiering
trying to find the mailing list post. they mentioned bcache and like 1-2 other options
but i feel like ceph should be doing it rather than something like bcache
I think the point they were trying to make was: limited resources, so we'd rather improve the product itself than a feature others are already getting right.
i'd be more keen on less accessed data automatically moving to slower storage and more accessed data moving to faster storage, rather than having two layers though
i mean, i think it's common knowledge nowadays that databases can go faster on ssd than hard-disk but that log files really don't matter where they are.. but if that stuff could be automatic..
I think it's only a matter of time before that scenario becomes the norm. especially with all these "commodity storage with SAN-like feature sets" projects gaining popularity.
application level hinting and OS hinting would be nice
yeh i'd like to see zfs do the same ;)
right now, I keep my SSDs in 1 pool and spinning in another for most of my ceph clusters, and just choose the storage medium based on the expected needs. easy enough to migrate them (online or offline) as needed.
yeah that seems to be the way to go
also, since I saw proxmox mentioned earlier.. proxmox and ceph work beautifully together. :)
I hear docker has added support for ceph (possibly via a 3rd party plugin/module) but I haven't messed with docker in a while to confirm.
have you played with nvme at all yet?
we have one in a single machine in the lab, but it's not part of the ceph cluster
ah
BryceBot: no
Oh, okay... I'm sorry. 'it seems like tiered caching is only good if you've got very high amounts of reads'
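For reference, a minimal sketch of the "SSDs in one pool, spinning in another" layout described above, assuming a Ceph release with device classes (Luminous or later; the bluestore-preview era discussed here predates that). The rule names, pool names, and PG counts are hypothetical placeholders; the sketch just shells out to the stock ceph CLI from Python:

    import subprocess

    def ceph(*args):
        # Thin wrapper so the calls below read like the ceph CLI.
        subprocess.run(("ceph",) + args, check=True)

    # One CRUSH rule per device class, so each pool sticks to one medium.
    ceph("osd", "crush", "rule", "create-replicated", "fast", "default", "host", "ssd")
    ceph("osd", "crush", "rule", "create-replicated", "slow", "default", "host", "hdd")

    # Hypothetical pools bound to those rules (PG counts are placeholders).
    ceph("osd", "pool", "create", "rbd-fast", "128", "128", "replicated", "fast")
    ceph("osd", "pool", "create", "rbd-slow", "512", "512", "replicated", "slow")

With the pools split this way, picking the storage medium per workload is just a matter of which pool an image lands in, and moving data between them is an ordinary copy or migrate, matching the "easy enough to migrate them (online or offline) as needed" remark above.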