[00:03] *** gdcbjg364 has joined #arpnetworks
[00:19] *** HighJinx has joined #arpnetworks
[00:22] *** Zetas has quit IRC (Ping timeout: 246 seconds)
[00:24] *** Zetas has joined #arpnetworks
[00:40] *** easymac_ has joined #arpnetworks
[00:41] *** easymac has quit IRC (Read error: Connection reset by peer)
[00:41] *** easymac_ is now known as easymac
[00:41] *** easymac is now known as Guest30789
[00:47] *** Guest30789 has quit IRC (Read error: Connection reset by peer)
[00:47] *** easymac_ has joined #arpnetworks
[00:47] *** easymac_ has quit IRC (Client Quit)
[00:51] mnathani: smaller accounts do not get less CPU. every vps has 1 core. The scheduling of cores per vps is determined by the standard Linux scheduler (we don't do anything fancy to it)
[00:52] toddf: isync eh? i'll have to check that out
[01:15] I keep meaning to experiment with "multiple master" mail servers for high availability. Two MXs which both deliver into local Maildir folders, and then offlineimap/isync/dsync/something_else to keep the Maildirs in sync
[01:19] *** gdcbjg364 has left
[01:34] plett: i'm doing multiple master servers for a client (but not the offlineimap stuff). they haven't implemented the 2nd MX, so i can't comment about it yet.
[01:35] up_the_irons: How are they intending to do the syncing between servers?
[01:35] plett: oh, actually, these are just relay hosts
[01:35] for spam, virus, etc... filtering
[01:36] plett: so i guess it doesn't really apply to your setup
[01:38] I eventually want to have multiple A/AAAA records for my IMAP server, so that clients can use one or the other without needing to care which one they connect to, and to have the setup survive the failure of one of the servers without losing mail
[01:39] plett: that would be sweet
[01:41] up_the_irons: I can't see any reason why it can't work either. The only complex bit is the IMAP level syncing and whether that is reliable for syncing bi-directional changes
[01:41] plett: that's not an "only" ;)
[01:41] plett: that's probably the most complex part
[01:41] like, which is the most up-to-date?
[01:41] but, yeah, if you make it work, please share :)
[01:43] The only corner cases I can think up are ones where clients are making changes on both servers and both change message flags in some way
[01:44] But for mail delivery it should work well
[01:44] yeah
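For reference, plett's idea maps onto one isync (mbsync) channel per remote store. A minimal sketch of an ~/.mbsyncrc mirroring one of the two MXs into a local Maildir; the hostname, account name, and plaintext password are placeholders, not anyone's real setup:

    # sketch only -- mx2.example.com, alice, and the password are made up
    IMAPAccount mx2
    Host mx2.example.com
    User alice
    Pass secret

    IMAPStore mx2-remote
    Account mx2

    MaildirStore local
    Path ~/Maildir/
    Inbox ~/Maildir/

    Channel mirror-mx2
    Master :mx2-remote:
    Slave :local:
    Patterns *
    Sync All
    Create Both
    SyncState *

Running "mbsync mirror-mx2" from cron on each box would then propagate deletions and flag changes both ways; the SyncState cache is what lets isync tell a local deletion apart from a not-yet-fetched message, per toddf's explanation later in the log.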
[01:55] http://goo.gl/Kvue0 - my favorite slide from this year's OpenBSD PF talk (so far, still reading through).
[02:31] in case anyone has issues with a CentOS 6.3 custom install, I just wrote this: http://support.arpnetworks.com/kb/vps/centos-63-experiences-segmentation-faults-and-crashes-during-install
[03:51] *** dr_jkl has quit IRC (Read error: Operation timed out)
[03:53] *** dr_jkl has joined #arpnetworks
[04:43] Finally official: https://www.arpnetworks.com/dedicated
[04:43] or, without the s, http://arpnetworks.com/dedicated
[04:44] Fancy!
[04:45] ok, who's looking at it from Sweden...
[04:45] * up_the_irons looks around
[04:46] Nice :-)
[04:47] link to console on wiki doesn't work
[04:47] also, you might want to explain a bit more about dual uplinks.
[04:48] both ports in same vlan, must use lacp? separate vlans/subnets? etc
[04:49] Busted; I looked at it from Sweden...
[04:50] i knew it!
[04:50] josephb: tnx for catching that broken link; fixed
[04:52] cool :)
[04:53] josephb: "Dual 1 Gbps (GigE) Uplinks within VLAN" or "Dual 1 Gbps (GigE) Uplinks (Same VLAN)" or... ? I have very limited space, but want to explain as much as possible in a few words :)
[04:54] no restrictions on layer 2 proto, but we recommend active / failover bonding (bonding in Linux, lagg in FreeBSD, ... something else in OpenBSD)
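A minimal sketch of that active/failover recommendation on the Linux side, using iproute2; the NIC names and address are placeholders. Active-backup mode needs no switch-side configuration, and the bonding driver sends gratuitous ARPs when the active slave changes (the num_grat_arp option), which matters for the failover question that comes up later:

    # placeholders: eth0/eth1 and 192.0.2.10/24; needs a reasonably recent
    # iproute2 (older systems set mode/miimon via sysfs or ifenslave instead)
    modprobe bonding
    ip link add bond0 type bond mode active-backup miimon 100
    ip link set eth0 down
    ip link set eth0 master bond0
    ip link set eth1 down
    ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.0.2.10/24 dev bond0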
[04:55] "Dual 1 Gbps (GigE) Uplinks in VLAN"
[04:55] shorter ;)
[04:55] up_the_irons: I think linking it off to a wiki article might be better
[04:56] "Redundant Uplinks (GigE)"
[04:56] josephb: yeah..
[04:56] and a link to something where you have space to explain it might work.
[04:56] but I'm a detail guy and this is a sales page :)
[04:57] yeah, that was on my plan actually, but for a bit later
[04:57] for what it's worth you could probably leave out the wiki link for console, and just leave it as a straight sales pitch
[04:58] i tend to find that generates sales inquiry emails and i just have to say "read this". so, when i link it, those emails stop :)
[04:59] i'd link other things too, but i simply haven't written those articles yet
[04:59] this is more of a "soft launch" anyway :)
[05:02] yeah, for sure. Looks great
[05:03] thanks!
[05:35] plett: I used to think I wanted to do the resilient mail server approach by having a sql backing store for the mail itself, then use db replication. now I see things like dsync that could easily be extended to sync multiple mail systems, but it still doesn't feel quite right.
[05:38] isync and offlineimap work because they cache info about the local mail, so it is known if local deleted a message and that can be translated to upstream; flags are updated with sane logic (if someone read the mail somewhere, guess it is read, eh? etc..) .. all via the imap protocol .. one could do similar with physical access, it's simply a question of how much disk io you are willing to spend to sync how much mail. unless you have a very efficient mechanism, probably at delivery time, you're going to run into scalability issues the larger you scale, imho
[05:46] toddf: Yeah. Syncing mail via IMAP like this is inherently un-scalable. I wonder how much of Dovecot you'd have to re-implement to be able to use a database backend
[05:47] openbsd terminology would be trunk(4) for failover and bonding; you'll have to be clear that it is set up as LACP, otherwise bonding would cause issues
[05:48] I have several dell systems with ipmi. guess I need to figure out how to access remote power reboot and serial console that way too .. would be nice
[05:49] in theory you would just do a plugin with a different storage mechanism
[05:49] toddf: yeah, i have conserver hooked up with ipmi and can hit "console kvr24" (for example), and get a serial login :)
[05:50] any chance I can hit you up for the 'console kvr24 {' section (minus ip address info etc)?
[05:51] or is it a 'go read conserver + ipmi in google' type of thing?
[05:52] toddf: see pm
[05:54] http://dovecot.org/pipermail/dovecot/2006-June/013818.html <-- timo's official answer to 'can I store email in a sql database with dovecot' .. seems to suggest a shared filesystem is better than a non-existent free multimaster database. maybe with postgres sporting multimaster as well as mysql these days (yes? pgpool at least .. or am I dreaming?) maybe this answer would be different today, dunno
[05:56] toddf: Last time I had to do IPMI on a Dell was with an R200. The magic incantation was: ipmitool -E -I lan -H hostname -U root isol activate
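The 'console kvr24 {' section toddf asks about would be a conserver.cf stanza wrapping ipmitool's serial-over-LAN (the real config went by PM; the host and user below are assumptions). This sketch reuses the IPMI v1.5 "isol" form from toddf's R200 incantation; a newer IPMI 2.0 BMC would take "-I lanplus ... sol activate" instead:

    # /etc/conserver.cf -- host and user are placeholders; -E makes ipmitool
    # read the password from the IPMI_PASSWORD environment variable
    console kvr24 {
        master localhost;
        type exec;
        exec /usr/bin/ipmitool -E -I lan -H 10.0.0.24 -U root isol activate;
    }

With that in place, "console kvr24" from the conserver client attaches to the serial console, as described above.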
[05:57] toddf: http://blog.dovecot.org/2012/02/dovecot-clustering-with-dsync-based.html looks like an official answer to a different question
[07:15] *** \root has quit IRC (Ping timeout: 246 seconds)
[07:16] *** \root has joined #arpnetworks
[07:20] *** heavysixer has joined #arpnetworks
[07:20] *** ChanServ sets mode: +o heavysixer
[07:37] plett: interesting bit indeed
[10:11] "Arp Metal" eh? Time to turn this up to 11! (I'm guessing these will be the retired KVM servers?)
[10:14] up_the_irons: to clarify a bit, maybe refer to the uplinks as "Dual 1 Gbps (GigE) Uplink ports"? To me that sounds like the server has dual NICs, rather than there being redundant providers somewhere upstream.
[10:15] up_the_irons: Also - what types of RAID are supported by the hardware RAID card? You might add that... you might not, it's up to you.
[10:16] The prices seem reasonable though :)
[10:16] Now if only I had $129/mo...
[10:16] up_the_irons: Also - do any IP addresses come with the dedicated servers?
[10:17] ^ a question sure to be asked, and best answered by the webpage ;)
[10:19] *** heavysixer has quit IRC (Remote host closed the connection)
[10:23] hmmm the small one is bigger than my existing colo box
[10:28] toddf: yeah, dsync looks hackish. I thought it was a bit better than cron'ing a script to run as every user.
[10:30] sounds like from the above url from plett .. dsync can be run in an automated fashion based on when things change, not automatically for every user .. aka a queue of dsync runs .. instead of scheduling the thundering herd ..
[10:40] can it? The wiki page doesn't describe anything like that :/
[10:41] oh I see, plugins.
[11:24] 'do your dirty work with plugins. if we like your plugin enough, we might refine it and include it in the base distro...' is how I read that
[11:26] "modules" is probably more accurate.
[11:26] it's part of the core package, just not enabled by default.
[11:26] their sieve support (pigeonhole) is probably more like what you're describing.
[13:27] *** heavysixer has joined #arpnetworks
[13:27] *** ChanServ sets mode: +o heavysixer
[15:59] *** dzup has quit IRC (Ping timeout: 256 seconds)
[16:13] *** dzup has joined #arpnetworks
[16:17] *** dzup has quit IRC (Remote host closed the connection)
[16:49] up_the_irons: when you get a chance, can you look at my beta VM? It's not allowing inbound SSH, but I do have connectivity from the box outwards and there aren't any firewalls loaded
[16:49] uuid is ad2baab0-f17b-012f-0d9d-525400972102
[16:50] *** dzup has joined #arpnetworks
[16:50] *** niner has joined #arpnetworks
[16:50] *** Ehtyar has joined #arpnetworks
[17:47] brycec: all good points, thanks!
[17:48] brycec: they are not the retired KVM servers, btw (none of them are retired actually), but a new set of servers just for this new product line (with newer E3-1240V2 chips)
[17:49] Lefty: i'm sure your beta VM is fine, must be something in the vps causing trouble
[17:53] up_the_irons: yeah, I don't imagine it's on the kvm server itself
[17:53] but it's odd
[18:08] brycec: "Dual 1 Gbps (GigE) NICs" <-- might be better; i just tried "Ports" but even then it might not be totally clear. Yet, a NIC is almost always associated with a network card on a server.
[18:09] *** mcc0nnell has joined #arpnetworks
[18:13] i'm gonna write up a KB article about the NICs now, cuz it seems to be a source of questions
[18:13] *** jdeuce has quit IRC (Remote host closed the connection)
[18:29] toddf: with trunk(4) failover mode, do you know if OpenBSD sends a gratuitous ARP should the master port change (that is, fail over to the other NIC)?
[18:30] up_the_irons: with trunk you can do active active
[18:30] well if you're connecting to a switch with lagg
[18:30] mercutio: but i don't want active active
[18:31] ok
[18:31] err lacp
[18:31] what's the difference between lacp and lagg?
[18:31] you could probably test to see if it does that arp
[18:33] mercutio: of course i can, but i'd rather just ask if someone already knows ;)
[18:33] it would save time
[18:33] mercutio: lagg is an interface in FreeBSD. LACP is a protocol.
[18:33] well true
[18:33] ahh that's where i got that from
[18:34] problem with lacp is a lot of switches won't do lacp between switches
[18:36] ah
[18:36] i used to think there was little benefit for redundancy cos of that
[18:37] but intel cards have had a few transmit hangs over the years
[18:40] mercutio: i have one Metal customer using Linux bonding in active-backup mode (same as "failover" in Free/Open BSD) with a lot of success. he did tons of testing, works great.
[18:46] OK guys, let me know if this clarifies things a bit on the dual uplink bullet point:
[18:46] http://support.arpnetworks.com/kb/dedicated-servers/about-the-dual-1-gbps-gige-nics-on-arp-metal-dedicated-servers
[18:55] and now that article is linked from the sales page
[18:55] http://arpnetworks.com/dedicated
[18:55] *** CRowen is now known as Cn
[18:57] *** mcc0nnell has left "Leaving"
[19:07] up_the_irons: over multiple switches?
[19:07] mercutio: yes, i don't want switch failure to bring down a box
[19:08] understand
[19:08] for VPSes, i can engineer it any way i want on the host machine, but for a dedicated box, i need to expose more of the inner workings of that stuff to the customer
[19:08] i wonder if loadbalance works
[19:09] theoretically, the other modes should work too, i've just never tested them
[19:09] yeah
[19:09] brb
[19:09] i can't really afford a dedicated myself as much as i'd like it :)
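For completeness against the "bonding in Linux, lagg in FreeBSD, ... something else in OpenBSD" list: the OpenBSD "something else" is trunk(4) in failover mode, and FreeBSD's is lagg(4). Minimal sketches with placeholder NIC names and addresses; whether trunk(4) gratuitously ARPs on a master-port change is exactly the open question up_the_irons raises above:

    # OpenBSD: /etc/hostname.em0 and /etc/hostname.em1 each just contain "up"
    # /etc/hostname.trunk0:
    trunkproto failover trunkport em0 trunkport em1
    inet 192.0.2.10 255.255.255.0

    # FreeBSD: /etc/rc.conf
    ifconfig_igb0="up"
    ifconfig_igb1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto failover laggport igb0 laggport igb1 192.0.2.11 netmask 255.255.255.0"

In both cases failover keeps one port active and needs nothing special from the switch, which is what makes it usable across two independent switches, unlike LACP.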
[19:28] *** Lefty has quit IRC (Ping timeout: 255 seconds)
[19:33] *** niner has quit IRC (Quit: Leaving)
[19:39] *** Lefty has joined #arpnetworks
[19:43] Are the dedicated servers VMware ESX capable?
[19:49] *** notion has quit IRC (Ping timeout: 245 seconds)
[19:49] *** twobithacker has quit IRC (Ping timeout: 245 seconds)
[19:49] *** RandalSchwartz has quit IRC (Ping timeout: 246 seconds)
[19:49] *** CaZe has quit IRC (Ping timeout: 240 seconds)
[19:50] *** toddf has quit IRC (Ping timeout: 246 seconds)
[19:50] *** up_the_irons has quit IRC (Ping timeout: 246 seconds)
[19:50] *** bGeorge has quit IRC (Ping timeout: 260 seconds)
[19:50] *** mike-burns has quit IRC (Ping timeout: 260 seconds)
[19:50] *** mikeputnam has quit IRC (Ping timeout: 240 seconds)
[19:50] *** teneightypea has quit IRC (Ping timeout: 272 seconds)
[19:50] *** medum has quit IRC (Ping timeout: 256 seconds)
[19:50] *** kraigu has quit IRC (Ping timeout: 240 seconds)
[19:51] *** medum has joined #arpnetworks
[19:53] *** CaZe has joined #arpnetworks
[19:53] Had to turn off ipv6 :/
[19:55] hiccup!
[19:55] Yeah, what happened to the arpnetworks IPv6?
[19:56] *** lazard has quit IRC (Read error: Connection reset by peer)
[19:56] *** qbit has quit IRC (Ping timeout: 246 seconds)
[19:58] *** lazard has joined #arpnetworks
[19:58] * andol is also seeing some IPv4 packet loss on some routes
[19:59] examples: www.nearlyfreespeech.net, halleck.arrakis.se
[20:01] i have packet loss too
[20:03] *** jpalmer has quit IRC (Ping timeout: 252 seconds)
[20:03] *** henderb has quit IRC (Ping timeout: 246 seconds)
[20:03] :P
[20:04] *** henderb has joined #arpnetworks
[20:04] *** jpalmer has joined #arpnetworks
[20:04] i don't share your choice of smiley. i would go for :S atm
[20:04] hehe
[20:05] it kind of defeats the purpose of being in the support chan if my connection is from my vps >.<
[20:06] my, my irc bot timed out again
[20:07] i should set up some additional pings on my rrd graphing
[20:10] 22 packets transmitted, 13 received, 40% packet loss, time 21073ms
[20:10] from my vps
[20:10] https://halleck.arrakis.se/smokeping/images/arrakis/leto4_last_10800.png
[20:11] *** rpaulo has joined #arpnetworks
[20:11] *** qbit has joined #arpnetworks
[20:11] hi
[20:11] ipv6 looks down
[20:11] andol: colours!
[20:12] *** up_the_irons has joined #arpnetworks
[20:12] *** ChanServ sets mode: +o up_the_irons
[20:12] *** notion has joined #arpnetworks
[20:12] up_the_irons: seems to be a couple of people in here experiencing packet loss
[20:12] back up
[20:13] lol
[20:13] *** bGeorge has joined #arpnetworks
[20:13] oh, it's fixed now
[20:13] *** teneightypea has joined #arpnetworks
[20:13] yes, seeing as up_the_irons is back in the channel...
[20:13] │20:08:11 @up_the_irons │ wow, IPv6 router is slooooooooooooooooooow
[20:13] *** toddf has joined #arpnetworks
[20:13] *** ChanServ sets mode: +o toddf
[20:13] lol
[20:14] up_the_irons: IPv4 as well
[20:14] *** rpaulo has quit IRC (Client Quit)
[20:14] https://twitter.com/arpnetworks/status/260941977959415808
[20:14] I also experienced IPv4 packet loss
[20:15] *** twobithacker has joined #arpnetworks
[20:15] i didn't see any IPv4 issues though
[20:15] * milki doesn't even know how to use ipv6 properly yet
[20:15] s3.lax (our IPv6 router) was dropping a lot of packets
[20:15] *** qbit_ has joined #arpnetworks
[20:16] interesting, a cyberverse VPS i was monitoring was showing loss as well
[20:16] *** qbit has quit IRC (Quit: leaving)
[20:16] *** qbit_ has quit IRC (Client Quit)
[20:16] up_the_irons: This is what I am experiencing between leto (arpnetworks) and my other vps halleck - https://halleck.arrakis.se/smokeping/smokeping.cgi?target=arrakis.leto4
[20:16] *** kraigu has joined #arpnetworks
[20:16] *** mikeputnam has joined #arpnetworks
[20:16] colours!
[20:17] up_the_irons: Also saw it against www.nearlyfreespeech.net, even if that seems fine now.
[20:17] *** qbit has joined #arpnetworks
[20:17] andol: silly question, but by any chance are you running smokeping on FreeBSD 9?
[20:17] staticsafe: Nope, that would be an Ubuntu 12.04
[20:17] hm ok
[20:18] am having some issues that I can't get fixed
[20:18] http://smokeping.asininetech.com/smokeping.cgi?target=_charts like so
[20:18] up_the_irons: http://stats.pingdom.com/31n57wtu0gs4/660615
[20:19] relevant perhaps?
[20:19] staticsafe: file system permissions issues?
[20:20] andol: nope, there is a thread about it on the smokeping mailing list
[20:20] i'm gonna try to debug it again now
[20:21] jbergstroem: yeah maybe. smells to me of a bad path / peer. probably will clear up on its own. i don't see anything out of the ordinary on my end, so, i'm still looking around.
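The graphs andol keeps linking come from stanzas like the following in smokeping's Targets file; a sketch with placeholder names ("additional pings on my rrd graphing" amounts to adding more ++ entries):

    # sketch only -- target names and host are placeholders
    *** Targets ***
    probe = FPing
    menu = Top
    title = Network Latency Grapher

    + arpnetworks
    menu = ARP Networks
    title = ARP Networks reachability

    ++ leto4
    menu = leto (IPv4)
    title = leto, IPv4
    host = leto.example.net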
[20:21] <\root> up_the_irons do you have 2tb drives available for the dedicateds?
[20:22] *** mike-burns has joined #arpnetworks
[20:22] *** ChanServ sets mode: +o mike-burns
[20:22] oh wth
[20:22] www user can write just fine
[20:23] \root: no, 2tb drives fail a lot
[20:23] \root: anything above 1tb pretty much blows; platter sizes and stuff go above a certain threshold of reliability
[20:23] <\root> Hrm. I've been running 8 2TB Seagate Constellation ES drives for a long time with no issues
[20:24] <\root> I would consider moving to arpnetworks for our dedicated, but can't make do with less than 4 x 2TB on a server
[20:24] <\root> in RAID10
[20:24] \root: man, i have piles of busted seagate drives
[20:25] stopped using them a while ago
[20:25] bad luck o.O
[20:25] <\root> What kind? Constellation ES?
[20:25] $239.99 each, ouch
[20:25] <\root> I hate Seagate, but those Constellation ES drives have been rock solid and fast.
[20:25] http://pastie.org/pastes/5106895/text?key=vaugnxeraqkpybeshs6r5q - can anybody make anything out of this? this is the fcgiwrap process; trying to debug a smokeping issue
[20:25] \root: how long have you had them?
[20:26] <\root> Enterprise drives... what are you using?
[20:26] <\root> About a year and a half, estimating
[20:26] not bad, guess those ES are better
[20:26] <\root> I wouldn't recommend them to you over HGST enterprise grade drives, but they are not the crap that is Seagate desktop drives.
[20:27] \root: so yeah, i can't shell out the $239 each for the 2TB drives. we have a batch of 1TB's in stock for the dedicateds. Maybe at some point later we can.
[20:27] <\root> Well
[20:27] <\root> Can you do a 8x1TB system?
[20:27] <\root> in RAID10
[20:28] <\root> That could work.
[20:28] <\root> I need 4TB usable in RAID10 though.
[20:28] <\root> Also -- what drives are the 1TB's you have? Enterprise grade, or desktop drives?
[20:29] <\root> http://www.newegg.com/Product/Product.aspx?Item=N82E16822145476 -- That's probably what I'd pick.
[20:29] \root: depends on availability. I have both WD RE4's and regular Hitachi's
[20:30] <\root> regular as in Deskstars?
[20:30] yeah
[20:30] <\root> Those are great drives, if pre-acquisition
[20:30] <\root> Wonderful drives, I'd take the Hitachis
[20:30] they are all pre-acquisition cuz i can't get any more like those ;)
[20:30] <\root> Actually have about 12 of them in my house right now, but mostly 2TB deskstars, at least a few 1TBs though
[20:31] <\root> Yeah, it sucks.
[20:31] yep
[20:31] <\root> I don't know where to turn after all the mergers, will take time to figure out how it all goes
[20:31] \root: i'm hoping the RE4's are decent
[20:31] <\root> Shit, I'd buy those Hitachis off you for more than you paid at this point
[20:31] LOL
[20:31] <\root> haha
[20:32] we need 'em like crazy, hands off! ;)
[20:32] <\root> I hear you.
[20:32] <\root> So, is a 8x1TB config plausible?
[20:32] \root: do you do hardware raid10 or soft?
[20:32] <\root> depends on the box
[20:32] <\root> we have two, one of each
[20:33] <\root> SW RAID10 has been good to us, despite all of the frowns it gets
[20:33] \root: our 1U's are 4x. I'm going to shy away from building custom setups this early in the launch phase (of the dedicateds)
[20:33] <\root> 1073741824 bytes (1.1 GB) copied, 4.99374 seconds, 215 MB/s
[20:33] <\root> That's on SW RAID10
[20:33] <\root> with Constellation ES 2TB's
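The figure \root quotes is classic dd output; a sketch of both halves of such a setup — the software RAID10 build and the sequential-write test — with placeholder device names (these are not \root's actual devices or mount points):

    # 8-disk Linux software RAID10 (placeholder devices)
    mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]1

    # 1 GiB sequential-write test of the kind quoted above;
    # oflag=direct keeps the page cache from flattering the number
    dd if=/dev/zero of=/mnt/md0/testfile bs=1M count=1024 oflag=direct

With bs=1M count=1024, dd reports exactly 1073741824 bytes copied, matching the quoted line.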
[20:34] i'd like to accommodate, but we just have too much going on right now
[20:34] \root: nice and fast :)
[20:34] <\root> Yeah, thanks
[20:34] <\root> No big deal
[20:34] anyone seeing any more IPv4 issues?
[20:35] pingdom and my pagerduty seem to have calmed down
[20:35] up_the_irons: nope, all good
[20:35] <\root> Actually, kind of sucks since you have Deskstars, but the 215MB/s will help me sleep.
[20:35] \root: you guys storing large files or ...?
[20:35] <\root> Nope, just don't oversell.
[20:35] jbergstroem: cool
[20:35] <\root> VPSs
[20:35] <\root> And shared hosting
[20:36] ah ok
[20:36] you must pack a lot of space into a vps
[20:36] <\root> Zero overselling, professional clients, premium service
[20:36] <\root> What do you mean?
[20:36] that's nice; good market
[20:36] \root: i mean you must offer a lot of space per VPS
[20:37] <\root> Starting at 32GB
[20:37] to need 2TB drives min
[20:37] <\root> So, yeah
[20:37] up_the_irons: Completely dead IPv4-wise to my other VPS halleck.arrakis.se
[20:38] <\root> Our VPS plans' storage goes like this: 32/128/192/256/512/768... and so on
[20:38] <\root> GB
[20:38] andol: looks to be an NTT problem
[20:38] 7. ae-5.r21.lsanca03.us.bb.gin.ntt.net 0.0% 2 1.0 1.1 1.0 1.2 0.2
[20:38] <\root> Actually, no "so on". That's it.
[20:38] 8. ae-2.r20.asbnva02.us.bb.gin.ntt.net 0.0% 2 65.4 66.6 65.4 67.8 1.7
[20:38] 9. ???
[20:38] lol
[20:38] \root: cool
[20:38] yeah quite big disks
[20:39] andol: what does traceroute look like *from* halleck.arrakis.se to ARP ?
[20:39] <\root> I'd like to get us moved to ARPNetworks eventually. We're only like 30ms away currently. :P
[20:40] \root: are you not happy with your current host?
[20:40] <\root> Depends on what aspect you look at it from
[20:40] up_the_irons: http://pastie.org/5106931
[20:40] i c
[20:40] <\root> Wonderful facility, great prices
[20:40] <\root> Poor remote hands
[20:41] <\root> Well, hit and miss remote hands.
[20:41] <\root> As I'm in Florida.
[20:41] up_the_irons: Also, it seems to be in that direction the packets are being dropped, because while doing a tcpdump on halleck I do see the icmp packets from leto. They just don't make it back.
[20:41] <\root> And our servers have always been on the West coast
[20:41] <\root> We've looked at bringing them home and me running a colo operation, but this city sucks for such.
[20:42] <\root> I'd trust ARPNetworks remote hands more than our current dedicated provider's.
[20:43] <\root> Especially in off hours or emergencies.
[20:43] andol: it drops at NLayer, so I opened a ticket with GTT just now (they purchased NLayer)
[20:44] up_the_irons: thanks
[20:44] \root: the thing is, we don't even have remote hands besides myself making a drive. All our gear is administered remotely (IPMI, serial console, APC power control, the works)
[20:45] we've tried real hard to make sure physical presence is not needed (and this saves tons of time and money). the only time I go down there is to put in a new box or to replace a drive.
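That "administered remotely" toolkit is mostly ipmitool one-liners of the following shape; the BMC hostname and user are placeholders, and -E again takes the password from the IPMI_PASSWORD environment variable:

    ipmitool -I lanplus -H bmc24.example.net -U admin -E power status
    ipmitool -I lanplus -H bmc24.example.net -U admin -E power cycle
    ipmitool -I lanplus -H bmc24.example.net -U admin -E sol activate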
[20:45] <\root> Understood
[20:46] <\root> But, regardless, I trust you working on routing issues, hardware replacements, etc more than our current dedicated provider's techs.
[20:46] <\root> The current provider is larger, and has some crappy techs and some good techs. I have had mostly good luck, but it's just that -- luck.
[20:47] <\root> As some of these people shouldn't be touching a server
[20:47] LOL
[20:48] <\root> Not even from a different galaxy over psychokinesis, much less hands-on or remote console.
[20:48] i know the feeling, let me find a picture...
[20:49] \root: had to make this sticker once: http://www.flickr.com/photos/51184165@N00/2174345059/in/photostream
[20:52] taking the kids to bed, bbl
[21:03] not that it's a big deal nor even a blip on the radar, but wouldn't it be cool if arpnetworks had a local freenode irc entry point? ;-) hehe
[21:41] *** sako has joined #arpnetworks
[21:49] *** sako has quit IRC (Changing host)
[21:49] *** sako has joined #arpnetworks
[22:19] *** sako has quit IRC (Ping timeout: 248 seconds)
[22:45] *** sako has joined #arpnetworks
[23:02] *** sako has quit IRC (Ping timeout: 265 seconds)
[23:30] *** Ehtyar has quit IRC (Quit: Ex-Chat)
[23:34] *** sako has joined #arpnetworks
[23:44] *** sako has quit IRC (Ping timeout: 246 seconds)