[00:00] virtualization
[00:01] I can, yes :P
[00:01] Is cost a "feature"? :)
[00:01] i'd say...heft
[00:01] dedicated boxes are certainly heavier than virtual ones
[00:02] If you wanted/needed a widely-distributed network of hosts, you'd probably go with VPSes spread out, because it's cheaper than having dedicated boxes in all of those locations. Though really that depends on workload
[00:02] Remote console, remote power cycling, I'm not seeing any differences
[00:03] But the advantages to a VPS are that they're cheap, often easily imaged, and quickly set up.
[00:03] Also quickly/easily upgraded
[00:03] Yes, but that isn't the question
[00:03] Those are features of a VPS. You asked for features...
[00:03] i'd consider quick provisioning and scalability a feature of VPSes
[00:04] err, features*
[01:52] *** djkrikke-2 has joined #arpnetworks
[10:51] *** mhoran has quit IRC (Quit: WeeChat 1.1.1)
[10:54] *** mhoran has joined #arpnetworks
[10:54] *** ChanServ sets mode: +o mhoran
[14:11] cost is my major reason for going with VPSes over dedicated.
[17:52] you could do both: get a dedicated box and run VMs on it
[17:52] have complete control over your host, hypervisor as well as guests
[19:12] Yea, hypervisor. m0unds mentioned virtualization. I personally have a hypervisor preference; however, it's interesting to learn that it's a buying consideration for others as well
[19:12] mnathani_: May I ask what your hypervisor preference is, if you have one?
[19:43] *** hive-mind has quit IRC (Remote host closed the connection)
[19:44] *** hive-mind has joined #arpnetworks
[19:57] *** up_the_irons2 is now known as up_the_irons
[19:58] *** up_the_irons is now known as Guest45800
[19:59] *** Guest45800 is now known as up_the_irons2
[20:00] brycec: I'm catching up on things (although I swear I replied to ya...)
[20:02] up_the_irons2: A LIKELY STORY
[20:11] *** dj_goku_ has quit IRC (Read error: Connection reset by peer)
[20:27] *** ChanServ sets mode: +o up_the_irons2
[20:40] oh wow, an up_the_irons2
[20:41] up_the_irons2: You really didn't, and the support ticket corroborates it ;) But mercutio did take care of it, it's all good.
[21:06] brycec: yeah, now I remember, I replied but only internally to mercutio to take care of it ;)
[21:07] *** up_the_irons2 is now known as up_the_irons
[21:28] nice to see up_the_irons return just in time for billing on the 1st
[21:28] kellytk: I like VMware ESXi / vSphere as a hypervisor
[21:29] kellytk: what's your preference for hypervisors?
[21:36] mnathani_: i plan my trips around that fact ;)
[21:36] I was in Germany, then northern California, and now I'm back
[21:42] lol
[21:43] up_the_irons: So does ARP have a new EU location now? :D
[21:52] mnathani_: I've had a pretty good experience with KVM/QEMU. What do you like about ESXi?
[21:53] ESXi's vMotion is pretty cool
[21:55] I like that it's GUI-based, either with the vSphere client or the web interface. It's a bare-metal hypervisor too, so great performance
[21:56] i found vmware performance much worse than pv xen ime
[21:56] and that was esxi
[22:23] I have never tried Xen, so I wouldn't know about its performance
[22:24] is the config for xen heavily dependent on editing config files and such?
[22:27] not really
[22:27] some things are more efficient with pv
[22:34] I was virtualizing Windows on ESXi
[22:39] I've virtualized Windows in KVM, ESXi, and VirtualBox, and in every case the performance always seemed to hinge on disk I/O.
[22:39] Windows is greedy, busy, and doesn't share well.
[22:52] which one out of the three got the best performance?
[22:55] It's not a fair, even comparison, since they were all on different hardware...
But SSD-backed storage (which happened to be under VirtualBox) was by far the best
[22:57] disk I/O and CPU make a HUGE difference to virtualisation performance
[22:57] but before virtio drivers etc., network speeds could vary heaps too
[22:58] protip: Never underestimate the benefits of low-latency, high-speed storage with near-zero seek time.
[22:58] and even now, at high rates, network isn't wonderful
[23:03] In my experience (for what little it's worth), it seemed that Windows in general does a *lot* of disk access, by many processes and spread out. When you put more than one Windows system on a single disk, they're both/all seeking all-fucking-over the disk, and it grinds to a halt. *NIX systems tend to be leaner, not running the same sorts of services and databases by default, so they coexist fairly well.
[23:03] Basically the same rules as running multiple database servers all with data on the same disk :p
[23:17] anyone with windows 10 having an issue where it asks for a password on wake, even though power settings say don't require password?
[23:19] hm, let me try
[23:20] (I have password-required enabled, as default)
[23:20] Oh, actually I can't toggle that setting due to corporate requirements
[23:20] ahh
[23:20] (aka "Oh good, the GPO works")
[23:21] (but I already knew the GPO worked)
[23:21] thanks for checking
[23:22] np
[23:22] My other Win10 systems are all RDP anyway, so not a "valid" checking point there either
[23:54] brycec: yeah, an SSD for a Windows desktop makes so much more difference than in Linux
[23:54] in linux it's nice
[23:54] especially if you're doing something like compiling lots of small files etc.
[23:54] on windows it's kind of "necessary"
[23:55] read-wise, Linux's disk cache tends to work pretty well too
[23:55] so with plenty of ram, reads are fast
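For context on the virtio drivers mentioned above: with KVM/QEMU, paravirtual disk and network devices are selected on the QEMU command line. Below is a hypothetical invocation sketch (the image path, memory/CPU sizes, and device IDs are illustrative, not from the log); note that a Windows guest additionally needs the virtio-win guest drivers installed before it can see virtio devices.

```shell
# Hypothetical KVM/QEMU guest using paravirtual (virtio) disk and NIC.
# win10.img, memory size, and net0 are illustrative placeholders.
qemu-system-x86_64 \
    -enable-kvm -cpu host -smp 2 -m 4096 \
    -drive file=win10.img,if=virtio,cache=none \   # virtio-blk disk
    -netdev tap,id=net0 \
    -device virtio-net-pci,netdev=net0             # virtio NIC
```

Without `if=virtio` and `virtio-net-pci`, QEMU falls back to emulating legacy IDE and NIC hardware, which is where the "network speeds could vary heaps" complaint above tends to come from.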