[22:11] how well does zfs work with hot-swappable drives?
[22:11] any SSD caching or otherwise going into the new German server?
[22:12] Generally no issues. Pulling a hot drive will cause it to be failed. Pulling a failed drive will do nothing. And once the drive is replaced, it's usually just "zpool replace <pool> <device>"
[22:12] (of course this assumes you're using something that can recover from a failed drive, e.g. a mirror)
[22:13] I assume growing a pool is pretty seamless as well? just keep adding drives?
[22:13] mnathani: it'd be crazy not to have an SSD at least for the ZIL with ZFS
[22:14] growing pools with zfs isn't great
[22:14] I don't think you can grow a zpool
[22:14] (but I could be wrong)
[22:14] it has no auto-balancing
[22:14] brycec: you can
[22:14] well
[22:14] you can add another pair of disks
[22:14] or another raidz or such
[22:14] and use it as another vdev as part of the pool
[22:14] Yeah, it's something a bit unconventional.
[22:14] you can also mix raidz and mirrors
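The drive replacement and pool growing described above can be sketched as shell commands; the pool name `tank` and the `daN` device names are hypothetical, and this assumes a redundant (mirrored) pool:

```shell
# Replace a failed disk in a mirrored pool (names are illustrative).
zpool status tank                # identify the FAULTED device
zpool replace tank da2 da5      # resilver onto the replacement disk da5

# "Growing" the pool means striping in another vdev; note that
# existing data is not rebalanced onto the new vdev automatically.
zpool add tank mirror da6 da7
```

After `zpool replace`, `zpool status` shows resilver progress; the pool stays online throughout.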
[22:15] well if you have 4 disks with mirrors, you'd usually have two vdevs
[22:15] have you done zfs in production before?
[22:15] yeah
[22:15] on linux or fbsd?
[22:16] linux and opensolaris
[22:16] gotcha
[22:17] The ZIL gives most of the benefit of a BBWC (battery-backed write cache)
[22:17] But streaming loads will bypass it
[22:17] is this server a response to JC_Denton's request for a managed backup service running on ZFS? :-)
[22:18] No
[22:28] Ubuntu host with KVM virtual machines?
[22:31] yeah
[22:31] any live-migration capabilities?
[22:32] no?
[22:32] what do you mean?
[22:32] the kvm equivalent of vMotion
[22:32] moving a machine from one physical box to another with no downtime
[22:32] that would require SAN, maybe in the future :)
[22:32] s/machine/virtual machine
[22:32] moving a virtual machine from one physical box to another with no downtime
[22:33] are you guys working on website additions to cover the German offerings?
[22:34] perhaps payment in Euros or something
[23:21] Today I learned: There is a place called China in Michigan, USA
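Putting an SSD behind the ZIL (as a dedicated log device, or SLOG) and another in front of reads (L2ARC), as discussed above, is a one-liner each; the pool name `tank` and the `adaN` device names are hypothetical:

```shell
# Add an SSD as a dedicated ZFS intent log (SLOG) device,
# accelerating synchronous writes.
zpool add tank log ada4

# Add another SSD as an L2ARC read cache between ARC (RAM) and the disks.
zpool add tank cache ada5
```

Unlike data vdevs, log and cache devices can later be removed with `zpool remove`.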
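The KVM equivalent of vMotion asked about above is typically done through libvirt's `virsh migrate`; with storage visible to both hosts (e.g. a SAN or NFS), the guest keeps running while its memory is copied over. A minimal sketch, assuming a hypothetical guest `guest1` and destination host `dst-host`:

```shell
# Live-migrate a running KVM guest to another host with minimal downtime.
# Requires the guest's disk image to be on storage shared by both hosts.
virsh migrate --live --persistent guest1 qemu+ssh://dst-host/system
```

Without shared storage, `virsh migrate` can also copy the disk with `--copy-storage-all`, at the cost of a much longer migration.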