#arpnetworks 2017-03-08,Wed


***awyeah has quit IRC (Quit: ZNC - http://znc.in) [03:12]
awyeah has joined #arpnetworks [03:18]
awyeah has quit IRC (Quit: ZNC - http://znc.in) [03:25]
awyeah has joined #arpnetworks [03:32]
................................... (idle for 2h52mn)
dj_goku has quit IRC (Remote host closed the connection) [06:24]
dj_goku has joined #arpnetworks
dj_goku has quit IRC (Changing host)
dj_goku has joined #arpnetworks
[06:35]
................................................ (idle for 3h56mn)
mkb: is a "cloud dedicated server" just a regular dedicated server with the word cloud at the front, or is it something else? [10:31]
mercutio: it's a dedicated-resources server in the cloud [10:35]
BryceBot: TO THE CLOUD!!! [10:35]
mercutio: it's been raining here a lot... [10:36]
......... (idle for 43mn)
up_the_irons: mkb: there's a link near the bottom that explains in detail how they work
https://arpnetworks.com/news/2017/02/28/arp-networks-launches-arp-thunder-cloud-dedicated-servers.html
and no, it's not just the word cloud in front
[11:19]
mkb: "THEY [BMC chips] ALL SUCK"
yes they do
[11:22]
up_the_irons: yes :) [11:23]
................................ (idle for 2h36mn)
***km_ has quit IRC (Remote host closed the connection) [13:59]
..... (idle for 23mn)
brycec: ARP Thunder does sound pretty dang cool
You've got me evaluating whether/what I can migrate to it.
up_the_irons: Is there any significant difference between Thunder and VPSs running on the Ceph storage cluster, besides Thunder's increased RAM/CPU and SSD storage? Are they otherwise "just virtual machines" with better specs?
PS up_the_irons - the Thunder page makes no mention of a bandwidth quota, while the VPS and Metal pages do. Does that mean Thunder has unlimited monthly bandwidth? :P
[14:22]
up_the_irons: they run on different hosts, hosts that can accommodate dedicating hardware (e.g. CPU cores). They also enable the vmx bit, so you can run VMs of your own (just like a bare metal dedicated server would)
yeah, aware of the bandwidth issue; it's on the order page though
i just haven't got around to making that 4th row, and my css skills are lacking...
[14:26]
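[Editor's note: the "vmx bit" above refers to exposing Intel VT-x to the guest so it can run its own hypervisor. A minimal way to verify this from inside a Linux guest, as a sketch; these are generic diagnostics, not ARP-specific tooling:]

```shell
# Check whether the (virtual) CPU exposes hardware virtualization:
# "vmx" is Intel VT-x, "svm" is the AMD equivalent.
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

# If KVM is usable, the kernel exposes this device node:
ls -l /dev/kvm
```

If `vmx`/`svm` is absent and `/dev/kvm` does not exist, nested VMs would fall back to slow software emulation.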
brycec: Ah, so it is there. Very cool. Very tempting... [14:27]
.... (idle for 17mn)
mercutio: i reckon arp thunder is cool too :) [14:44]
...... (idle for 26mn)
brycec: up_the_irons: To your knowledge, has anyone run ZFS on Ceph block devices? Are there any known "gotchas" or the like? [15:10]
..... (idle for 22mn)
***forgotten has quit IRC (Ping timeout: 255 seconds) [15:32]
...... (idle for 25mn)
up_the_irons: brycec: not to my knowledge [15:57]
brycec: Excellent [15:57]
up_the_irons: i would think it would be fine. it looks like any ol' block device to the OS. [15:58]
brycec: up_the_irons: Are the SSD and HDD storage presented as separate block devices? [15:58]
up_the_irons: yup [15:58]
brycec: s/HDD/"SATA"/ [15:58]
BryceBot: <brycec> up_the_irons: Are the SSD and "SATA" storage presented as separate block devices? [15:58]
up_the_irons: they are different Ceph pools [15:58]
brycec: How are storage upgrades handled, does the block device just become "larger"? [15:59]
up_the_irons: yup [15:59]
brycec: Very compelling stuff [15:59]
***km_ has joined #arpnetworks [15:59]
up_the_irons: it's one of my favorite features. someone wants a bigger disk, and it's a command we type. No more data center trip. [16:00]
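[Editor's note: a hedged sketch of what that grow-on-demand workflow looks like with standard Ceph/Linux tooling; the pool, image, and device names are made up, and this is not necessarily ARP's actual procedure:]

```shell
# Provider side: grow the RBD image backing the customer's volume
# (pool/image names here are hypothetical).
rbd resize --size 1T ssd-pool/customer-disk

# Guest side, once the larger disk is visible: grow the partition,
# then the filesystem on top of it, e.g. for ext4:
growpart /dev/vda 1
resize2fs /dev/vda1
```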
brycec: Seems like I could migrate from a Metal to a Large and save some $ [16:00]
up_the_irons: RAM for RAM, HD for HD, it's a little cheaper than Metal [16:00]
brycec: And the backend data redundancy is the icing on the cake
Also the flexibility of having SSD _and_ SATA space (for those of us on the blades that only take 2 drives, period)
[16:00]
up_the_irons: right, exactly
use cases i would _not_ recommend: very high bandwidth, really high disk IOPS (having your own dedicated disks, all to yourself, is better in this case)
[16:01]
brycec: Both good points. [16:03]
mercutio: not having to deal with java is a huge + too [16:03]
brycec: lol [16:03]
mercutio: don't underestimate it :) [16:03]
up_the_irons: oh yes, cannot emphasize that enough ;) [16:04]
brycec: On the other hand, I'll miss ipmitool and the feeling of reliability that comes with running on bare metal. [16:04]
mercutio: maybe you should just do both :) [16:04]
brycec: Why mercutio, it's as if you're benefitting from the upsell ;p [16:05]
mercutio: hah [16:05]
brycec: The only thing I have to figure out is how much storage I need / how much I'm currently using and can be shaved off.
(I'm coming from RAID-1 1TB, so 500GB is a bit of a step down, but how much am I _really_ using?)
[16:05]
up_the_irons: brycec: you can get a 1TB volume, it's on the order form under "Extra Disk" [16:06]
brycec: I know :) [16:06]
mercutio: bryce is trying to save money i think [16:06]
up_the_irons: and hell, just saying, if you want a 5TB or 10TB volume, just say the word ;)
ah ok
[16:07]
brycec: lol it's all just $
Also, since I intend to run ZFS on it, shrinking is not an option (and growing is... problematic)
[16:07]
mercutio: zfs grows fine i think [16:07]
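[Editor's note: growing is indeed the easy direction for ZFS: once the underlying block device gets bigger, the pool can expand in place. A minimal sketch, with a hypothetical pool name `tank` and device `/dev/vdb`:]

```shell
# Let the pool expand automatically whenever its device grows:
zpool set autoexpand=on tank

# Or trigger expansion of an already-grown device explicitly:
zpool online -e tank /dev/vdb
```

There is no corresponding shrink operation, which is why the pool has to be sized right up front.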
brycec: So I've gotta get it "right". That said, identical Thunder specs to my current Metal are just $15/mo less [16:08]
mercutio: well i know :) [16:08]
up_the_irons: think about whether you need the SSD disk then, b/c (while it's not listed yet) i was thinking of allowing people to "opt out" of either the SSD disk or the SATA disk and get a discount. i mean, why pay if you're not gonna use it. [16:08]
mercutio: but yeah, shrinking ain't so good [16:08]
brycec: mercutio: It has to be done in a matching manner
up_the_irons: I will be quite happy to have /some/ SSD (for ZFS L2ARC/cache), but I appreciate the consideration and I recommend extending it to others.
[16:08]
up_the_irons: OK :) [16:09]
brycec: Now if I could downgrade the SSD to a lesser amount of storage for a "credit" towards SATA... but I think it's just getting "complicated" at that point, and I don't want to make things too complicated on your side [16:10]
mercutio: why not stick the OS on one partition of the ssd
and zfs on the other
actually
i'd just create two zfs pools
i kind of wish zfs could auto-balance between two zfs pools of different speeds, or have disks with different speeds that are balanced
[16:11]
brycec: mercutio: And I would stick the OS on the SSD, sure. But I don't need 200GB of SSD for OS+L2ARC :p [16:12]
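[Editor's note: the layout being discussed, one pool per speed tier plus a slice of the SSD as L2ARC read cache for the slow pool, would look roughly like this. Device and pool names are hypothetical (SSD as /dev/vdb, SATA as /dev/vdc):]

```shell
# Fast pool on one partition of the SSD:
zpool create fast /dev/vdb1

# Bulk pool on the SATA device:
zpool create tank /dev/vdc

# Use the rest of the SSD as read cache (L2ARC) for the bulk pool:
zpool add tank cache /dev/vdb2
```

L2ARC only caches reads; it is safe to lose, so a single un-mirrored SSD partition is a reasonable choice for it.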
mercutio: database? [16:12]
brycec: Not currently running anything that "heavy" [16:12]
mercutio: i don't know what you're doing [16:12]
brycec: (The database server I do have running can keep everything in RAM anyways) [16:13]
mercutio: but my current thinking is that ssd is good for generic stuff
and hard disk is good for archive/backup
[16:13]
brycec: What aren't I doing? :P (Serving porn, among many things. But nobody really cares.)
I'm tentatively thinking I'd put my running VM images (yes, I'll be running Proxmox) on the SSD and backups on the SATA. And this is where figuring out where exactly I'm using space comes into play.
(Answer: about 70% of my current usage is in ZFS snapshots anyways)
[16:13]
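[Editor's note: ZFS can answer the "how much am I really using" question directly, breaking usage down into live data versus space held only by snapshots. A sketch, with a hypothetical pool name `tank`:]

```shell
# USEDSNAP is space held only by snapshots (freed if they are
# destroyed); USEDDS is the live dataset itself.
zfs list -r -o name,used,usedbysnapshots,usedbydataset tank
```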
up_the_irons: We'd be happy to trade SSD for SATA "credit". I mean, why not be flexible now that we can? that's part of the excitement of all this new tech :) [16:14]
brycec: Thanks up_the_irons :)
(I think I might be most tempted by the flexibility of increasing RAM "on the fly")
[16:15]
up_the_irons: and that's another "command" ;) [16:16]
brycec: To a certain extent, storage could always be added in various ways (e.g. another VPS exporting a partition over iSCSI), but RAM was really, truly a "visit the data center" task. And once you hit the max supported for your machine, that was that. [16:17]
up_the_irons: yeah, that's the truth [16:18]
brycec: Out of curiosity, what kind of hardware makes up the Ceph cluster? Are you "recycling" old kvr host machines? Or is this fancy new stuff? [16:19]
up_the_irons: The SSD side is the HP ProLiant 25-bay units
The SATA side has some recycled old kvr hosts, but I'm stopping that because a 12-bay DL180 G6/G7 is looking like a better value
[16:20]
brycec: Ooooh. Ahhh. What about the hosts for Thunder? [16:21]
up_the_irons: as long as I can get the fucking LSI 9211s to play nice with the G6. If you follow our Instagram, you'll see my frustration ;)
Thunder hosts are just our regular E3s and E5s, the same ones we use for Metal
[16:21]
brycec: Okay, cool. [16:22]
up_the_irons: In fact, they are all old Metal hosts [16:22]
brycec: Every time a customer migrates from Metal to Thunder, you get another host machine to use, eh? [16:23]
up_the_irons: why yes :) [16:23]
brycec: And here I was, slightly worried I was "sticking you with" the old machine [16:23]
up_the_irons: people still order the Metals, so it's OK. I'll sell them either way. [16:24]
brycec: Thanks for the Q&A up_the_irons. I'm definitely sold. Just a matter of figuring out _what_ I need, and a migration plan.
(And thanks to mercutio too, of course)
[16:30]
up_the_irons: no problem :) [16:30]
brycec: (I'm even debating going from Metal to 3x Small Thunders so I can break one host and not affect the other two.) [16:30]
mercutio: sounds like a storm [16:31]
brycec: (*Starter, not Small. My bad) lol [16:31]
mercutio: yeah, that's the kind of thing that something like this makes easier to do [16:31]
brycec: Indeed. Harder to manage though, and less efficient "savings", but it might be worth it. Or a pair of "Mediums" for that matter. [16:33]
mercutio: maybe two mediums and a starter? [16:33]
brycec: I only have a handful of VMs I'm hosting anyways; it's not like it's the end of the world if they're unreachable. But I'm intrigued by the "modularity" of it all. [16:34]
..................... (idle for 1h42mn)
nathani: I am curious: if metal boxes are being repurposed as Thunder boxes, what's the speed of the storage network? Is it 1-gig ethernet, 10-gig ethernet, or some kind of InfiniBand? [18:16]
........... (idle for 50mn)
up_the_irons: nathani: you asked me this before ;)
40 Gbps InfiniBand
[19:06]
............. (idle for 1h1mn)
nathani: up_the_irons: I didn't realize the metal dedicated boxes had InfiniBand as well, or is this something you are installing to convert them into Thunder boxes? [20:07]
mercutio: infiniband is awesome :)
it should be everywhere
a 1-gigabit network for file storage would give pretty mediocre performance
on top of that, there'd be a lot less performance stability,
as there would be issues like congestion etc. from other users. the cool thing about having a huge amount of bandwidth available is that it doesn't really bottleneck
(of course the bottleneck can move elsewhere)
[20:14]
............. (idle for 1h2mn)
***awyeah has quit IRC (Quit: ZNC - http://znc.in)
awyeah has joined #arpnetworks
[21:18]
................. (idle for 1h23mn)
karstensrage has quit IRC (Ping timeout: 246 seconds)
Guest39686 has joined #arpnetworks
[22:43]
........ (idle for 35mn)
Guest39686 has quit IRC (Ping timeout: 246 seconds)
karstensrage has joined #arpnetworks
karstensrage is now known as Guest2979
[23:21]
