Who | What | When | |
---|---|---|---|
qbit | hola | [05:49] | |
.... (idle for 17mn) | |||
will that thunder box I ordered be up today? :D | [06:06] | ||
mike-burns | Does everyone else sing AC/DC to themselves whenever someone mentions Thunder in here? | [06:15] | |
mhoran | yes. | [06:18] | |
qbit | ya
i am probably gonna call the server angus >.> | [06:27] | |
....................... (idle for 1h52mn) | |||
johnny-o | on the lower-cost VPS front, obviously everyone (including me) would agree paying less is better. if i'm comparing my ARPN VPS specs to other providers that make running openbsd easy, $20/mo is a bit on the high side -- especially when measured against the $40/mo thunder starter plan.
i think choosing based solely on price isn't a great approach to begin with, but that's usually where people start. in the five years i've had my personal VPS, i think i've needed support twice -- and once was just to flip on the virtio stuff. if keeping a higher price allows the quality of the service to stay top-notch, i'm all for it. a way to perhaps incentivize ongoing MRC from people while keeping it from being a cheap drive-by service would be to offer a discount after each year of service with a max discount after X years. but frankly, "you get what you pay for" is a well-worn adage for a reason. | [08:20] | |
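The loyalty-discount scheme johnny-o floats above (a discount after each full year of service, capped after X years) can be sketched in a few lines. The base price, per-year rate, and cap below are invented for illustration; they are not ARP Networks pricing.

```python
# Hypothetical loyalty-discount sketch: a fixed discount per full year of
# service, capped after some number of years. All figures are made up.

def monthly_price(base: float, years_of_service: int,
                  discount_per_year: float = 0.05, max_years: int = 4) -> float:
    """Return the discounted monthly rate after `years_of_service` full years."""
    discount = discount_per_year * min(years_of_service, max_years)
    return round(base * (1 - discount), 2)

if __name__ == "__main__":
    for years in range(7):
        # e.g. a $20/mo base rate, 5% off per year, capped at 20% off
        print(years, monthly_price(20.0, years))
```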
........ (idle for 35mn) | |||
qbit | yeah, ima move to ta thunda! | [08:59] | |
just.. as soon as the box is up <.<
If you ordered an ARP Thunder(tm) Cloud Dedicated Server, expect to receive an email from us with an ETA on your server setup within 4 hours. ^ lies | [09:04] | ||
brycec | brycec starts a revolt /s
brycec /would/ like a non-auto reply to his support email yesterday, though I realize my question was a bit complex | [09:10] | |
................ (idle for 1h15mn) | |||
jpalmer | I've been away from IRC for a while, but I saw in the chat that ARP is using Ceph now. that's pretty awesome. I manage a pretty decent sized ceph cluster. hella impressive. | [10:29] | |
mercutio | jpalmer: yeh ceph is pretty cool
how've you found ceph? | [10:30] | |
jpalmer | mercutio: a couple minor nitpicks with it, but overall.. I'm a fan.
I'm hoping bluestore becomes officially supported in the next major release. | [10:34] | |
mercutio | have you tried using bluestore yet? | [10:34] | |
jpalmer | mercutio: I started out using ceph, with a 6 node proxmox cluster. using the proxmox "pveceph" install scripts. it only had 72 disks. went from that to a standalone cluster of.. much more :P
yeah, I've got bluestore in 3 lab environments. | [10:35] | |
mercutio | ah cool | [10:35] | |
jpalmer | but I wouldn't put legit data I cared about on it yet, since it's currently a preview. | [10:35] | |
mercutio | does bluestore improve latency of ssd? | [10:35] | |
jpalmer | well, it gets rid of the need to journal (eliminates the double-write penalty) so.. "kinda" | [10:36] | |
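jpalmer's point about the double-write penalty is easy to quantify: with a FileStore journal colocated on the same device, every client write lands twice (journal, then data), so effective write bandwidth is roughly halved, while BlueStore writes the data once. A back-of-the-envelope sketch with a made-up device speed:

```python
# Rough model of the FileStore double-write penalty vs. BlueStore.
# The device throughput figure is an invented example, not a benchmark.

def effective_write_mbps(device_mbps: float, writes_per_client_write: int) -> float:
    """Client-visible write bandwidth when each write is amplified N times."""
    return device_mbps / writes_per_client_write

ssd_mbps = 500.0
print("filestore (journal + data):", effective_write_mbps(ssd_mbps, 2), "MB/s")
print("bluestore (single write)  :", effective_write_mbps(ssd_mbps, 1), "MB/s")
```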
mercutio | heh
it seems that latency going down is where ceph could improve the most | [10:37] | |
jpalmer | bluestore allows you to split the data, metadata, and DB onto 3 different devices. so depending on your use case, it can significantly improve performance.. or "not so much" | [10:37] |
mercutio | i mean it's not terrible
and it's consistent but single i/o depth is going to be latency bound | [10:37] | |
jpalmer | agreed
| [10:39] | |
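The latency-bound point in this exchange is worth making concrete: at a queue depth of 1, IOPS can never exceed 1 divided by the per-operation latency, no matter how fast the OSDs are. A minimal sketch with illustrative latencies (not measurements of any cluster):

```python
# Upper bound on IOPS for a given queue depth and per-op latency.
# Latencies below are rough, made-up examples.

def max_iops(queue_depth: int, latency_s: float) -> float:
    return queue_depth / latency_s

for name, latency_s in [("local NVMe", 0.0001), ("networked Ceph RBD", 0.001)]:
    print(f"{name:>20}: qd=1 -> {max_iops(1, latency_s):7.0f} IOPS, "
          f"qd=32 -> {max_iops(32, latency_s):8.0f} IOPS")
```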
mercutio | i kind of wish more programs could adapt to being less latency bound
and having multiple requests at once
there's multiple pools. no tiered caching atm | [10:39] |
jpalmer | yeah, I personally would stay away from the tiered cache pools. | [10:40] | |
mercutio | it seems like tiering caching is only good if you've got very high amounts of reads | [10:40] | |
BryceBot | That's what she said!! | [10:40] | |
mercutio | and that those reads are wide enough in scope that ram caching won't help
and the normal pattern is higher write load rather than read | [10:41] | |
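mercutio's claim that cache tiering only pays off for read-heavy workloads follows from a simple model: writes (and cache misses) are still bounded by the base tier, so average latency improves only in proportion to the reads that hit the cache. A toy calculation with invented latencies and hit rates:

```python
# Toy model: average per-op latency with a fast cache tier in front of a slow
# base tier, assuming writes are bounded by the base tier. All numbers invented.

def mean_latency_ms(read_frac: float, hit_rate: float,
                    cache_ms: float = 0.2, base_ms: float = 8.0) -> float:
    read_ms = hit_rate * cache_ms + (1 - hit_rate) * base_ms
    write_ms = base_ms
    return read_frac * read_ms + (1 - read_frac) * write_ms

for read_frac in (0.9, 0.5, 0.2):
    print(f"read fraction {read_frac:.0%}: {mean_latency_ms(read_frac, 0.8):.2f} ms average")
```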
jpalmer | my performance didn't increase by much, the complexity went up significantly, and the ceph team has essentially said they aren't going to continue developing it. (they haven't outright said they are going to abandon it though) but they are dropping support in their commercial offering. | [10:41] |
mercutio | did you check your read/write mix? | [10:42] | |
jpalmer | yeah, I put it through a TON of performance scenarios. | [10:42] | |
mercutio | i mean of your usage before implementing tiering | [10:42] | |
jpalmer | but I got much better performance doing bcache, than native ceph tiered pools | [10:42] | |
mercutio | interesting. how safe is bcache these days? | [10:43] |
jpalmer | asking in #proxmox and #ceph (in addition to my own environments) it seems a lot of people are using it for production data.
and bcache was specifically mentioned by the ceph devs, as one of the reasons they don't feel the need to continue developing cached tiers. | [10:43] | |
mercutio | ah interesting
yeh i love the idea of automatic data tiering | [10:44] | |
jpalmer | trying to find the mailing list post. they mentioned bcache and like 1-2 other options | [10:45] | |
mercutio | but i feel like ceph should be doing it rather than something like bcache | [10:45] |
jpalmer | I think the point they were trying to make was: limited resources, we'd rather improve the product itself, rather than a feature that others are getting right already. | [10:45] | |
mercutio | i'd be more keen on less accessed data automatically moving to slower storage, and more accessed data moving to faster storage rather than having two layers though
i mean, i think it's common knowledge nowadays that databases can go faster on ssd than hard-disk but that log files really don't matter where they are.. but if that stuff could be automatic.. | [10:46] |
jpalmer | I think it's only a matter of time before that scenario becomes the norm. especially with all these "commodity storage with SAN-like feature sets" projects gaining popularity. | [10:48] | |
mercutio | application level hinting and OS hinting would be nice
yeh i'd like to see zfs do the same ;) | [10:49] | |
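The automatic-tiering idea mercutio describes (hot, database-like objects migrating to SSD; cold, log-like objects to spinning disk) boils down to a placement policy driven by access counts. A toy sketch of such a policy; the pool names and threshold are hypothetical, and Ceph does not do this natively:

```python
# Toy heat-based placement policy: pick a pool per object from its recent
# access count. Pool names and the threshold are made up for illustration.
from collections import Counter

access_counts = Counter()

def record_access(obj):
    access_counts[obj] += 1

def target_pool(obj, hot_threshold=100):
    return "rbd-ssd" if access_counts[obj] >= hot_threshold else "rbd-hdd"

for _ in range(500):
    record_access("postgres/base/16384")   # hot, database-like object
record_access("var/log/maillog")           # cold, log-like object

for obj in ("postgres/base/16384", "var/log/maillog"):
    print(obj, "->", target_pool(obj))
```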
*** | mkb has joined #arpnetworks | [10:49] | |
jpalmer | right now, I keep my SSDs in 1 pool, and spinning in another for most of my ceph clusters, and just choose the storage medium based on the expected needs. easy enough to migrate them (online or offline) as needed. | [10:49] |
mercutio | yeah that seems to be the way to go | [10:50] | |
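jpalmer's two-pool layout (an SSD-backed pool and a spinning-disk pool, chosen per workload) can be driven directly from the python-rados bindings. A minimal sketch, assuming pools named rbd-ssd and rbd-hdd already exist and /etc/ceph/ceph.conf is readable; the object names and payloads are invented:

```python
# Write an object into whichever pool suits the workload, via python-rados.
# Pool names, object names, and payloads are assumptions for illustration.
import rados

def write_to_pool(pool, name, data):
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)    # raises if the pool doesn't exist
        try:
            ioctx.write_full(name, data)    # replace the whole object
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

# Latency-sensitive data goes to the SSD-backed pool, bulk/cold data to HDD.
write_to_pool("rbd-ssd", "hot-db-object", b"latency-sensitive payload")
write_to_pool("rbd-hdd", "cold-archive-object", b"bulk payload")
```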
jpalmer | also, since I saw proxmox mentioned earlier.. proxmox and ceph work beautifully together. :)
I hear docker has added support for ceph (possibly via a 3rd party plugin/module) but I haven't messed with docker in a while to confirm. | [10:50] | |
mercutio | have you played with nvme at all yet? | [10:55] | |
jpalmer | we have one in a single machine in the lab, but it's not part of the ceph cluster | [11:01] | |
mercutio | ah | [11:02] | |
................. (idle for 1h21mn) | |||
*** | HAS_A_BANANA has quit IRC (Ping timeout: 246 seconds)
CoBryceMatrixBot has quit IRC (Ping timeout: 246 seconds)
HAS_A_BANANA has joined #arpnetworks
CoBryceMatrixBot has joined #arpnetworks | [12:23] |
................................ (idle for 2h38mn) | |||
mkb has quit IRC (Quit: leaving) | [15:02] | ||
........... (idle for 53mn) | |||
HAS_A_BANANA | BryceBot: no | [15:55] | |
BryceBot | Oh, okay... I'm sorry. 'it seems like tiering caching is only good if you've got very high amounts of reads' | [15:55] | |
.............. (idle for 1h7mn) | |||
*** | mkb has joined #arpnetworks | [17:02] | |
........... (idle for 50mn) | |||
ziyourenxiang has joined #arpnetworks | [17:52] | ||
....... (idle for 31mn) | |||
Nahual has joined #arpnetworks
r0ni has joined #arpnetworks | [18:23] | ||
................................... (idle for 2h52mn) | |||
Nahual has quit IRC (Quit: Leaving.) | [21:17] | ||
....................... (idle for 1h50mn) | |||
mkb has quit IRC (Quit: leaving)
karstensrage has joined #arpnetworks | [23:07] | ||
........... (idle for 50mn) | |||
nathani has quit IRC (Quit: WeeChat 1.5) | [23:59] |