[00:59] *** erratic has quit IRC (Excess Flood)
[01:18] *** erratic has joined #arpnetworks
[04:07] *** erratic has quit IRC (Excess Flood)
[04:14] *** erratic has joined #arpnetworks
[05:53] *** perlgod has left "WeeChat 1.9.1"
[06:06] *** erratic has quit IRC (Excess Flood)
[06:14] *** erratic has joined #arpnetworks
[06:17] *** ziyourenxiang has joined #arpnetworks
[06:50] *** mloza has joined #arpnetworks
[08:49] *** erratic has quit IRC (Excess Flood)
[08:57] *** erratic has joined #arpnetworks
[09:21] *** ziyourenxiang has quit IRC (Ping timeout: 248 seconds)
[11:01] *** erratic has quit IRC (Excess Flood)
[11:08] *** erratic has joined #arpnetworks
[11:31] *** erratic has quit IRC (Excess Flood)
[11:36] so - for the dedicated boxes
[11:36] i am hitting an issue with clocks on openbsd
[11:37] Clocks as in... CPU? Or RTC? qbit
[11:37] (Not that I have answers, I just help get details/triage)
[11:37] RTC
[11:38] *** erratic has joined #arpnetworks
[11:38] What's it doing? Drifting? Lagging? And to confirm, OpenBSD is the host OS on the dedi box?
[11:39] basically this: https://marc.info/?l=openbsd-bugs&m=151430928212450&w=2
[11:39] the preemption_timer
[11:39] i am wondering if it would be possible to get kvm-intel.preemption_timer=0 on whatever host has my dedi stuffs
[11:40] What exactly is that? Some OpenBSD config() thing? A Linux string?
[11:41] If you've got a dedi, you've got baremetal access, you can do whatever...
[11:41] kernel boot option
[11:41] Or have you been talking about a Thunder instance?
[11:41] thunder
[11:41] i call them dedi because: https://arpnetworks.com/dedicated
[11:41] "dedicated"
[11:41] "dedicated resources" but yeah, it leads to confusion IMO
[11:42] So to clarify, you're asking if you could arrange for the Thunder host machine to be rebooted with a kernel parameter?
[11:43] guess i am asking about the possibility of doing that
[11:43] i understand that likely there are more clients.. so if it isn't an option - i will have to figure something else out
[11:44] Seems like a tall ask :p but that's my opinion. And like I said, I'm just trying to distill the problem/request to help everyone out. :)
[11:44] does this effect normal vps instances too?
[11:44] it seems like quite a major openbsd bug
[11:44] *affect :D
[11:44] yes :)
[11:44] qbit: What was your `sysctl kern.timecounter` anyways?
[11:44] seems to be anything in kvm
[11:44] brycec: acpitimer0
[11:45] i think the only major change to openbsd in recent history has been the addition of the virtio random device
[11:45] So not exactly the same as the mailing list guy's
[11:45] mercutio: negative - i am running -current
[11:46] this is a grip of changes from 6.2
[11:46] was it fine prior to -current?
[11:46] seems to have been fine prior to the meltdown stuff
[11:46] sigh
[11:47] yeah so it probably will start happening on all virtual machine instances unless they fix it
[11:47] starting with the next release
[11:47] "FWIW, there are reports that this bug is absent from qemu-2.11.0."
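The guest-side checks referenced above amount to roughly the following; this is a minimal sketch assuming a stock OpenBSD guest whose ntpd logs to /var/log/daemon (the default), matching the "adjusting local clock" lines that show up later in this log.

    # Which timecounter the OpenBSD guest is running on (acpitimer0 in qbit's
    # case) and which alternatives the kernel offers (kern.timecounter.choice)
    sysctl kern.timecounter

    # How far ntpd has had to pull the clock back; repeated multi-second
    # "adjusting local clock" steps are the drift being described
    grep 'adjusting local clock' /var/log/daemon | tail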
[11:47] hmm
[11:48] i thought it was a kvm bug rather than a qemu bug
[11:49] https://marc.info/?l=openbsd-bugs&m=151439799628499&w=2
[11:51] oh, you are probably already past that :P
[11:51] well it's mostly about kvm_intel.preemption_timer being disabled apparently helping
[11:51] that made me think it was a kvm issue
[11:53] uh, i don't think preemption_timer exists in the linux version being used
[11:54] heh
[11:54] it's a boot option (i put mine in grub)
[11:54] it's not in /sys/module
[11:54] but it's in /sys/module on a newer kernel
[11:54] dang
[11:54] it's linux 4.4
[11:54] will /sys/module always reflect available modules? or maybe loaded?
[11:54] ah
[11:54] not always
[11:54] Not Always - Linux
[11:55] but if it's not supported in there you definitely can't change it without a reboot
[11:55] dang
[11:55] oh you can't change it while booted anyway
[11:56] double dang
[11:56] That's what she said!!
[11:56] if you say so BryceBot
[12:06] https://www.spinics.net/lists/kvm/msg161757.html
[12:06] this suggests one way that may help a specific instance
[12:08] qbit: this is quite easy to reproduce right?
[12:09] step 1) install openbsd
[12:10] snapshot?
[12:10] ya
[12:10] https://ftp3.usa.openbsd.org/pub/OpenBSD/snapshots/amd64/
[12:10] that's all - no more steps :D
[12:10] clock will drift even with ntp going
[12:11] hmm install62.fs is preinstalled? :)
[12:11] i can't remember seeing that before
[12:14] What do you mean "is preinstalled"? It's a dd'able version of the ISO, but still just the install media (bsd.rd)
[12:14] it's bsd.rd
[12:14] oh
[12:14] damn
[12:14] with sets
[12:14] qbit: Have you tested to see whether it can be reproduced from bsd.rd alone?
[12:14] Or does it take the full bsd kernel config?
[12:15] (I know that bsd.rd omits a lot of ACPI stuff to keep its size down, among other reasons)
[12:15] i don't know if bsd.rd is enough to make it happen
[12:15] yeah, i doubt it would work
[12:15] Also, I would've pointed mercutio to http://mirrors.arpnetworks.com/OpenBSD/snapshots/ :P
[12:15] s/work/be a good test case/
[12:15] Also, I would've pointed mercutio to http://mirrors.arpnetbe a good test cases.com/OpenBSD/snapshots/ :P
[12:15] :D
[12:15] BryceBot: that's where i'm pulling it from :)
[12:15] oh oops :)
[12:16] lol qbit
[12:17] Feb 20 13:03:47 spark ntpd[5226]: adjusting local clock by 18.013269s
[12:17] Feb 20 13:08:07 spark ntpd[5226]: adjusting local clock by 22.688790s
[12:18] damn
[12:18] I agree with mercutio, it seems like quite a major openbsd bug
[12:21] up_the_irons: For reference, what version of Linux/kvm/qemu are Thunder machines running?
[12:22] qbit: And just to be clear, *you* have only seen this behaviour with recent snapshots? Older snapshots were fine, as were older releases?
[12:22] 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[12:22] ii  qemu-kvm  1:2.5+dfsg-5ubuntu10.16  amd64  QEMU Full virtualization
[12:22] brycec: if it was in previous snapshots - it wasn't enough of a delay for me to notice
[12:24] ha
[12:24] I'm going to fire up a -current machine on a Proxmox machine, 4.4.62-1-pve + qemu 2.9.1-6~pve4 and see what happens...
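A sketch of the host-side change being asked about, assuming a GRUB-booted Debian/Ubuntu-style KVM host like the one in the uname/dpkg output above; whether the knob exists at all depends on the kernel version, and since it cannot be changed at runtime this is illustrative rather than something that was actually applied here.

    # Does this kernel's kvm_intel module expose the parameter at all?
    # (absent on the 4.4 kernels discussed here, present on newer ones)
    cat /sys/module/kvm_intel/parameters/preemption_timer 2>/dev/null \
        || echo "preemption_timer not available on this kernel"

    # The parameter is read-only while booted, so disabling it means adding the
    # boot option to the kernel command line and rebooting, e.g. via GRUB:
    #   /etc/default/grub:
    #   GRUB_CMDLINE_LINUX_DEFAULT="... kvm-intel.preemption_timer=0"
    update-grub    # Debian/Ubuntu helper that regenerates grub.cfg; then reboot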
[12:24] i am not very observant
[12:24] # date
[12:24] Tue Feb 20 13:21:04 MST 2018
[12:24] #
[12:24] mine just finished installing
[12:24] that's 6.2
[12:24] i like how openbsd installs quite quick
[12:25] Feb 20 13:19:22 db ntpd[2811]: adjusting local clock by 207.116095s
[12:25] qbit: So you're saying 6.2 is also affected, cool (I have 6.2 VMs already in Proxmox :p)
[12:25] ya
[12:26] hang on, my vm is using acpihpet0
[12:27] will that change things?
[12:27] *** erratic has quit IRC (Excess Flood)
[12:28] For the most part, my VMs stay just fine, but sometimes there are wild swings in drift, I'm guessing caused by load spikes in other VMs on the host. http://ix.io/N8h
[12:30] By comparison, a 6.0 VM running on the same host has almost no time swing
[12:30] brycec: is it using acpihpet0?
[12:31] my time still seems right with -current heh
[12:31] maybe i need to trigger it with load?
[12:32] Yes, both my 6.0 and 6.2 VM on the same Proxmox host are using acpihpet0. Only the 6.2 VM sees random drift. (Seeing it on Proxmox means it's not just an ARP issue :) And it's what I have to play with readily available.)
[12:32] (i'm testing on my desktop first)
[12:32] my desktop has linux 4.15, and some recent qemu
[12:32] yeah - this has been confirmed on vultr too
[12:33] i'm kind of curious if they've fixed it in linux or something
[12:33] qemu 2.11.1
[12:34] 6.0 http://ix.io/N8s 6.2 http://ix.io/N8t Same exact kern.timecounter values.
[12:34] ahh cool
[12:35] that's curious, you got a lot more adjustments since the 13th
[12:36] "Something happened" on the 13th it seems.
[12:36] to both of them
[12:36] And re-evaluating my output, I guess 6.0 also saw some lagging after all.
[12:37] this doesn't seem extreme though
[12:37] what qbit was talking about seemed more extreme
[12:38] I disagree, 10s+ adjustments are extreme :p
[12:38] i mean it's not good
[12:38] but wasn't there like minutes of drift
[12:38] like a lot bigger difference
[12:38] But these 2 particular VMs of mine are real small stuff, not doing any major load (they're effectively bastion hosts). I suspect heavier load exacerbates the issue.
[12:39] yeah it may
[12:39] it's def worse on machines with high load
[12:39] i need to compile something :)
[12:39] So I guess I'm back to: a) Doesn't seem to matter which OpenBSD release, even 6.0 is affected to a degree, albeit perhaps lesser. b) Even newer kernel+kvm (than ARP's, but not the newest) are affected.
[12:41] You guys are crazy.
[12:41] likely the meltdown stuff in current snaps is gonna make it worse too
[12:41] *** erratic has joined #arpnetworks
[12:42] i just realised something: i'm not testing with smp
[12:43] is this only affecting smp hosts, or in general?
[12:43] s/hosts/guests/
[12:43] is this only affecting smp guests, or in general?
[12:43] mine are all single cpu
[12:43] ah ok
[12:44] Ditto for me (I'm pretty sure)
[12:44] i thought it was on thunder
[12:45] it is - but mine is proxmox -> openbsd vms
[12:45] ah ok
[12:46] well i did a kernel compile, and it failed to compile...
[12:46] maybe -current has broken compiling 6.2
[12:47] time is still right
[12:48] so yeah, i think either a newer kernel or newer kvm has fixed it
[12:50] *** erratic has quit IRC (Excess Flood)
[12:56] huh
[12:56] well that's cool
[12:56] it seems to sync back up once load has gone down
[12:58] *** erratic has joined #arpnetworks
[14:22] *** erratic has quit IRC (Excess Flood)
[14:28] *** erratic has joined #arpnetworks
[14:47] qbit: what kernel is proxmox running under?
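A rough way to trigger and watch the behaviour described above on a guest; the workload is only a stand-in for the kernel compile used in the log (any sustained CPU load should do), and both ntpctl and openssl ship in the OpenBSD base system.

    # Put the guest under sustained CPU load for a while
    openssl speed &

    # While loaded, watch ntpd's peer offsets and the clock steps it logs;
    # per the observations above, the clock tends to resync once load drops
    ntpctl -s all
    tail -f /var/log/daemon | grep 'adjusting local clock'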
[14:47] i'm thinking that the underlying thunder thing may make less difference than the proxmox layer...
[15:02] mercutio: 4.4.98-6-pve
[15:02] ahh, so similar to the ubuntu kernel
[15:03] oh, maybe proxmox doesn't have a recent kernel available to try
[15:04] The current stable 4.x release uses latest Ubuntu based kernel, which will be regularly updated. The first stable 4.0 release is based on 4.2 Linux kernel.
[15:04] it's actually the same kernel even :)
[17:31] heh
[21:49] *** erratic has quit IRC (Ping timeout: 260 seconds)
[23:36] *** erratic has joined #arpnetworks
[23:56] *** fIorz_ has joined #arpnetworks
[23:57] *** fIorz has quit IRC (Remote host closed the connection)
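For reference, the host-side version checks behind the numbers quoted in this conversation, assuming a Debian/Ubuntu-derived host (Proxmox included); package names and binary paths may differ on other distributions.

    # Host kernel (e.g. 4.4.0-109-generic or 4.4.98-6-pve above)
    uname -r

    # QEMU/KVM userspace (e.g. qemu-kvm 1:2.5+dfsg-5ubuntu10.16 above)
    dpkg -l qemu-kvm
    qemu-system-x86_64 --version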