anyone else seeing network issues?
nope, but it's a little later than your original query
My smokeping looks mostly clear, m0unds. There was a bit of jitter on ARP's ipv6 router about 3 hours before your question, but nothing lost and nothing anomalous.
anyone seeing intermittent network issues? i saw ipv6 take a dump and had multiple v4 monitors trigger, but i was in and out of airports so i couldn't look into it
seems fine since i asked earlier though
Okay, brain-trust, I could use a bit of guidance. A buddy of mine is looking for some kind of document portal, something he can link to, possibly with some user/password authentication for sensitive documents and downloads. The only solution I know of is Drupal, and its security track record is a big concern to me. Any other ideas/suggestions?
hmm, there have been a few ping spikes recently
well, ping spikes and 50% packet loss
mercutio: yea, I saw those yesterday
I saw them over Level3
yeh
but the weird thing is my route is to/from coresite/any2ix
oh hmm
yea, it's affecting other things too
so i think it's ddos related.
http://kremvax.acfsys.net/smokeping.cgi?target=Remote.peeringarp
http://kremvax.acfsys.net/smokeping.cgi?target=Remote.s7laxarpnetworks
you get 50% too?
also, over v4 or v6?
to arpnetworks.com
hmm, you don't have r1 in your list.
what does r1 do?
peering
is that different than 10.10.10.6?
i have no idea what 10.10.10.6 is
some peering box at arp I think
s1 -> s7 -> 10.10.10.6 for me
I used to see it in traceroutes
apparently not anymore?
do you see r1 now?
yea. maybe r1 replaced 10.10.10.6
or the addressing just changed
your smokeping is going slow :(
oh, probably the packet loss
there's some more loss again
yeah
nmap indicates that r1 and 10.10.10.6 are different
http://irclogger.arpnetworks.com/irclogger_log/arpnetworks?date=2014-06-11,Wed&sel=252#l248
10.10.10.6 is a peering box running BIRD
yeah, it may be another ip on r1
if i set up a vlan on my arp vps (only need it to test something), it won't interfere with anything on the virtual switch, will it? wouldn't want to accidentally push out tagged vlans and set alarm bells ringing or something crazy
probably not
what are you using VLANs for? doesn't seem like you'd want to configure them on eth0 or whatever anyway
grody: ports aren't tagged, so you'll be doing a vlan inside a vlan
you will probably find that your mtu has to go down too
how do you intend to combine tagged and untagged?
theorising an install of pfsense
as it needs 2 interfaces to be configured; until i get it rigged, using a vlan will solve that
it needs two for install, damn
you could make a dummy tap interface or something if you just need an interface which does nothing (sketch below)
acf: in the installer?
yea
fake vlan for lan solves it.. log in via wan and set up vpns etc
probably there is a shortcut to open a console?
it's possible
so daylight saving in the US just changed, right? :)
so it's 3 pm in los angeles, and plus a few hours to the east coast?
yea
50% loss again :(
cool
Yes -> 15:07:55 "so it's 3 pm in los angeles, and plus a few to east coast"
i think s1 to s7 is where the issues are coming from
arp to world goes s1 -> s7 -> r1 for peering, and r1 -> s1 for incoming
and tracing over peering from outside to arp, it shows s1 as having no loss, but the destination site having loss.
The pfSense *installer* (the component that copies files to disk) doesn't require any interfaces; however, initial boot configuration does insist upon configuring at least 1 interface.
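A minimal sketch of that dummy-interface trick, assuming a FreeBSD/pfSense guest at a shell; the interface names (tap0, vtnet0) and the VLAN id are placeholders, not anything ARP-specific:

  # create a tap interface that goes nowhere, just so a second NIC shows up
  ifconfig tap0 create
  ifconfig tap0 up

  # or a throwaway VLAN child on the existing NIC
  ifconfig vlan10 create vlan 10 vlandev vtnet0
  # drop the MTU a bit if this ends up as a vlan inside a vlan
  ifconfig vlan10 mtu 1496 up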
s1 itself is showing some loss on smokeping
grody: Not sure where you're seeing pfSense requiring *two* interfaces to setup. That was done away with around 2.0
yeah, some loss, but not much, and icmp is deprioritised.
(Personally, I set it up with one, pfctl -d to disable the wan filtering, login and play.)
arpnetworks.com is also showing some loss (being connected to s1)
i've done 17404 packets to the end destination, and 184 dropped to the destination host, and none dropped from s1
well, it could also be the switch that feeds into s1, that hosts are connected to.
but i'm seeing 0 loss to s1
but this is from a dedicated server not a vps, so it may be different.
also it could be something like flow table overflow, where things like smokeping that start and stop could be worse
I saw ~1% to s1
i'm just leaving mtr running.
and ~6% to arpnetworks.com
I'm also testing from a dedicated box fwiw
hmm, that was a very short sample (75 packets)
i wonder when up_the_irons will make an appearance :)
oh, was it the first packet dropped?
it is a sunday there, i realise...
the last two haven't been as bad as the 3 prior
has anyone put a ticket in?
usually I just wait for up_the_irons to show up here :P
heh
i imagine monitoring systems got triggered.
my monitoring systems got triggered :P
i hate it when monitoring systems get triggered in the middle of the night for an intermittent issue
i got a trigger like 4 hours ago, but it looked like this issue happened 3 hours ago
weird
i accidentally left a mtr running to arp a day or so ago, and it's been showing these weird unrelated hosts that it definitely wouldn't have gone through
mtr must be buggy
it does suggest s1 has had less loss than the destination, but more loss than the hop a couple of hops prior
still not much though.
I'm actually not seeing anything weird on the graphs. Just host kvr12 is having issues.
up_the_irons, nothing at all?
http://kremvax.acfsys.net/smokeping.cgi?target=Remote.googledns
http://kremvax.acfsys.net/smokeping.cgi?target=Remote.peeringarp
the last 30 hours graphs show it well
peering route has a few of these: Mar 8 08:54:06 r1 kernel: [33692309.054249] nf_conntrack: table full, dropping packet. (sketch of checking/raising that limit below)
but only at that time (around 9am pst)
strange
I was seeing problems over Level3 also
http://kremvax.acfsys.net/smokeping.cgi?target=Remote.l3dns1
problems to everywhere really, even arpnetworks.com
maybe it was a small-packet ddos of low volume?
err, to random destinations or something, overflowing tables of where to send stuff
i tend not to keep state for that reason
yeah
acf_: the peering box running bird is r1, fyi
ah, ok
is that also 10.10.10.6 then?
yeah
--- kvr12.arpnetworks.com ping statistics ---
690 packets transmitted, 690 received, 0% packet loss, time 689956ms
rtt min/avg/max/mdev = 14.570/22.440/290.342/21.077 ms
so i'm not seeing any loss from my house
it's on/off
roger
there it goes again
also IPv6 is affected
yeah, _now_ i see it. Like > 1 Gbps spikes incoming Level 3
oh fun
or wait, is it outgoing...
oh, on kvr12? :)
i think it probably is outgoing, as incoming to s1 looked fine
mercutio: negative. somewhere on s8.lax (so dedi), but i don't see the traffic on any of the downstreams
i must be missing a graph...
damn, so it's not split evenly-ish
oh, graph to customer, cos level3 was showing it
brycec, really?
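For reference, a quick way to check whether the conntrack table is what's overflowing on a Linux router; these are the standard netfilter sysctls, and the new limit is just an example value (it costs a bit of kernel memory):

  # current vs maximum tracked connections
  sysctl net.netfilter.nf_conntrack_count
  sysctl net.netfilter.nf_conntrack_max

  # raise the ceiling if the table keeps filling (example value; persist it in /etc/sysctl.conf)
  sysctl -w net.netfilter.nf_conntrack_max=262144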
when i fresh installed 2.1.5 it needed one for WAN and one for LAN
2.2 is latest, but i've run into issues with it
the way around it was to create a vlan for WAN, usually, and allocate LAN to the IP range you want/can access
i dunno why people don't just do openbsd
i don't get on with it
freebsd though
it's pretty easy to set up as a firewall if you're not scared of the cli
even right now i'm ssh'd into an openbsd shell to use irssi
then just go with openbsd :)
it's too cool for me
i'm a freebsd whore
they make epic firewalls
well, freebsd has old pf, but it still does work :)
was my first homegrown firewall back in the 4.3 days
used to use smoothwall/mandrake snf before
i'm using linux for my nat
i'm actually running freebsd again on this lappie
it runs soo smooth
linux/ferm is sort of ok
but it doubles up as my file server etc. and linux desktop :)
i'm using openwrt (linux) on my border as a pure router, no tracking/firewall, and a pfsense for firewalling
sounds complicated.
stupidly easy really
i've actually been thinking about shifting to terminating pppoe on linux from the modems (rough sketch at the end of this block)
internet > | networks
ya, i'm using pppoe on a vdsl link
but it's being held off by not wanting to reboot, and wanting to put the extra ram in first.
i have both adsl and vdsl with two modems
and openwrt is the most stable OS for my embedded router that supports PPP
minijumbos
with a /29 and two gateways.
i have a few blocks from my ISP
yeah, i've been trying openwrt on a wireless ac router
i can't stand it tbh :)
a /48 of v6 in /52's (i have multiple lines with them), and a few small blocks of v4
but it seemed stable
eek
i hate openwrt for wifi
i think it sucks for anything but...
well, i am on the development branch
ddwrt on these routers works the wifi so much better
as it was broken in the stable branch
i have an archer c7
but ddwrt lacks ipv6 and the minijumbos
802.11ac atheros
tplink 841nd, that's g/a/n
i have a tp-link 4300 too which is in between those two
and that's running gargoyle
and it sucks much less :/
have another tplink 5GHz in bridge mode
yeah, they're all in bridge mode :/
gargoyle makes bridge mode nice
my network is actually a mess @home, physically speaking
this is why i'm trying to use wireless bridging :/
it works sound as a network.. but my setup for the rig is shoddy
never tinkered with that
tplink's software is ok
i have two ethernet cables and fibre across the room
but if i can put dd-wrt or openwrt on them, even better
i've been thinking about running it along the ceiling... but i have no idea what the nicest way of doing that is...
i figured i could just shift more stuff over there, and wireless bridge.
hehe
i used to run a media converter (ether to fibre) in my old place
was a proper old skewl 100 mbit/s effort
but was fun
my switch has fibre, and i have 10 gigabit ethernet cards that were cheap on ebay.
these tplinks have gigabit switches, but only run 10/100 ports
weird, this is gigabit fire
almost full port speed per port
err, fibre
it's really expensive to get 10 gigabit transceivers.
i don't have fibre anywhere here now
and i'd need a 10 gigabit capable switch.
all cat5/6 or wifi
and the nte5 box for vdsl
should go gigabit, but i only have a handful of gadgets that will utilize it
heh
i have infiniband between windows/linux
and when i tried using just gigabit again it was so slow :/
i do have trunking going on from one switch to another to get more speed from the file server, but i only use 140mbps max off that
but i'm using ssd's..
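On the terminating-PPPoE-on-Linux idea: a rough sketch assuming iproute2 plus pppd with the rp-pppoe plugin; the interface name, VLAN id, and peers-file name are all made up for the example:

  # tag the WAN VLAN the modem bridges through (id 101 is just an example)
  ip link add link eth0 name eth0.101 type vlan id 101
  ip link set eth0.101 up

  # /etc/ppp/peers/dsl - minimal pppd peers file for the PPPoE session
  #   plugin rp-pppoe.so eth0.101
  #   user "username@isp"          (the password goes in /etc/ppp/chap-secrets)
  #   noauth
  #   defaultroute
  #   persist

  # bring the session up
  pppd call dsl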
i get about 400 to 500 megabytes/sec normally
so 100 megabytes/sec on gigabit felt slow
the wifi on these ddwrt's can manage a total (phy) speed of around 190mbps
if you only look at it one way
like shifting large volumes of data from ssd to server
i can do 600 megabit wireless with the archer c7..
did a client from one into an AP on another and got those speeds between them
hehe
that's the same room though
for internet it doesn't really make a difference yet
it's still faster putting an 8GB pendrive on a pigeon to send it from spain to the UK than it is to upload the 8GB over the internet
but for rsync etc it can
i did a 150gb upload on vdsl, took like a day
my rule is if my wifi can saturate my internet, my wifi is working optimally
which it does :)
but it's good to have an off-site backup
but yeah, my main data is only like 150gb.
that would have seemed huge 10 years ago
i get about 8/9MB/s on wifi sending via sshfs/sftp either way, 11 on ethernet (not sure why that is) - but with unencrypted stuff i usually hit 10MB/s network wise
that's enough for me, even backing up my laptop only took as long as watching a movie
backing up my VPS from the UK takes a while, usually about 8/9GB
that runs at anywhere from 10mbit/s to 30/40
heh, my box plugged into the tv installs packages faster than my desktop
even using sshfs back and forth is efficient enough
because i'm using a web cache, and so it gets like 15mb/sec when my net is more lik4 mb/sec
err, more like 4mb/sec
ouch
it's got a hard-disk though
i mean megabytes not megabit
i used to run a large web proxy in transparent mode once
used to help a shyte load on slower links
well, it doesn't help much
i don't see the point these days, i get full speed 24/7 on my ISP
but it makes package installs so fast when you have multiple computers with the same os
in fact, i have to shape locally in order to prevent hosts from taking to peter
i have fq_codel on my connection
what's that?
it doesn't bufferbloat or get loss when maxing it out
it's an aqm thing that gives fair sharing
ahh
http://www.bufferbloat.net/projects/codel/wiki/Wiki/
it's kind of magical
i'm using software based stuff in pfsense
yeah, i'm doing it at the isp end
works really well
in fact, so it doesn't even hit my network and get capped at my end
yea, shaping is kinda pointless when the packets are already there at the link
yeah, you have to go further under your connection
it even raised my download speeds
damnit, i'm such a geek :/
tc class add dev $DEV parent 1: classid 1:1 htb overhead 8 rate 36.9mbit ceil 37mbit burst 6k
tc qdisc add dev $DEV parent 1:1 fq_codel
the idea for this is to match the rate limiting my ISP can offer (96% under the sync speed) - that way small packets, like DNS and VoIP, have headroom to get through (fuller version sketched below)
with fq_codel you don't have to worry about different traffic
you just have to not go over your connection speed too much
preferably slightly under
i'm on a 38.5 megabit sync rate i think
ptm reduced the overhead from atm
i actually get max sync rates for once
79.7 down and 19.9 up
and it's an up-to 80/20 wire speed
i get about 77 down and 18 up
shaper reserves 1mbit either way for "priority" traffic, which is DNS and VoIP in my case
nice
yeah, with fq_codel you don't even have to worry about that
you need to install a package on openwrt to do it i think
but it works better than sfq, which was my old standard
so if you shape to 77 megabit or something, you stick fq_codel on that 77 megabit shape
can't remember what scheduler i'm using, PRIQ or something
the faster your connection speed is the less it matters.
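For completeness, those two tc lines assume an HTB root qdisc already exists; the same egress shaper written out in full looks roughly like this (device name and rates are placeholders - the point is to shape a few percent under your sync speed):

  DEV=eth0   # WAN-facing interface (placeholder)

  # root HTB qdisc; everything falls into class 1:1 by default
  tc qdisc add dev $DEV root handle 1: htb default 1

  # cap just under the sync rate so the ISP's buffer never fills
  tc class add dev $DEV parent 1: classid 1:1 htb rate 36.9mbit ceil 37mbit burst 6k overhead 8

  # fq_codel on the leaf gives per-flow fair sharing and keeps latency down
  tc qdisc add dev $DEV parent 1:1 fq_codel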
well, depending on how heavy your use is
indeed, but over the weekend i turn on my torrents (i have just about every ISO for just about every FOSS OS you can think of) and it gets hit hard
yeah, torrents can be an issue
the fact i can still make a crisp voip call whilst hitting up a youtube video and the missus watching netflix is when you know it's shit hot
i graph my connection
i was getting loss when maxing it out before
now i get 1 or 2 msec extra ping sometimes
i'm not even doing anything on the upstream :/
fortunately it's only my pfsense that tracks connections, and it can handle a good 60,000 state entries before it will start buckling (checking/raising that limit is sketched below)
i have only managed to get it up to about 20k
state tracking sucks :)
it's the NAT networks that hammer it if they start torrenting
i'm not using nat so i don't have that issue
lol, only for wifi
the non-NAT networks don't seem to hit the cpu at all
fortunately my torrenter sits on a public IP with no stateful inspection on the firewall
but the shaper will still snag it
i have 65536 conntrack max
i never adjusted it
mine is set to 73000 states
3000 tables, 200000 table entries
that's the default for my amount of RAM
only an 800MHz CPU, but 784MB is more than enough
weird, i wonder why mine is lower
http://imgur.com/AvAVvF4
maybe i should graph my connection
been rrd graphing my pfsense since install
am still using an archaic Neoware CA10
35W of pure awesomeness
even has an ancient aesni capability, so 128bit VPN takes no overhead on the CPU
can handle 6 users easily
sweet
i sure hope my VPS doesn't use too much CPU.. got a lot of packages to recompile :>
ezjail is pure epic; the way i ran my jails before was horrifying
considering i only have a single core, the performance is still impressive
don't think burst cpu sage is generally an issue with vps's.
err, usage
well, that's why it's generally not an issue :/
i rarely use my resources much; once it's running as meant, it passes small amounts of traffic with little CPU/mem load
yeah, that's what most people are like
just compiling in multiple jails at the same time starts stressing it
they're all dual cpu machines, so lots of people would have to be stressing it to be an issue
it's more of an issue when people give out lots of cpu cores, and then people hose it.
ideally you'd have a single core VPS per core (one core, say, in an octa)
even if it's dividing up resources between users there are heaps of context switches, and it hammers cpu caches.
so yea, i doubt a few hours would make an impact
well, it'll shift you between cores normally
not as bad as freebsd does :/
have you ever looked at cpu core usage on freebsd?
i always wondered if that was efficient.. numerous threads running and them being passed between cores
it bounces processes around all the time.
nah, it's not efficient
i'd have thought one thread per thread ability per core
they tend to stay in one place when they're active though
yea, it's more if it's on/off
so it tends to put jitter up a little, and it's less efficient if you're doing less (so it matters less)
if you have 20 processes that wake up occasionally, how would you distribute them?
like 2 threads per core on a 4 core CPU, and i'm only actively running 3 threads, say ffmpeg whilst running VLC whilst sftp is downloading something.. i'd imagine 1 and a half cores would become occupied
it's one of those "complicated problems".
mm, i don't code enough to make a technically correct opinion really
i've been wondering how well hyperthreading works in virtualisation.
because that complicates matters even more.
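A quick way to see and bump pf's state limit on a pfSense/FreeBSD box (the 100000 figure is only an example; pfSense also exposes this as "Firewall Maximum States" in the GUI):

  # current state count and the configured memory/state limits
  pfctl -si | grep entries
  pfctl -sm

  # in pf.conf the limit is set like this (example value):
  #   set limit states 100000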
from my understanding kvm handles this stuff better than xen
xen isn't very intelligent when you have dual cpus.
especially if the host starts doing stuff whilst a guest is hammering
there are some huge complications with intel dual cpu systems
if you have both high cpu and high network activity it gets even more complicated.
i had a dual AMD system with two 8 cores.. that was hell
amd are probably just as bad, i just understand the intel problems more.
but basically intel just use really fast interconnects between the cpus, because people don't deal with it well.
i believe i had to run windows 2008 on that in order to utilize the motherboard features
so in the end, being inefficient for most things is only 20% slower or something
but some of the network stuff is higher overhead than that
"use what's available as it's available"
like if the network queue is on one cpu, and the vm is on the other cpu, then it has to wake up both cpus and forward data between them
if you're just doing cpu, it can just leave it running on the cpu that the network isn't connected to
but with numa systems, often memory is owned by a cpu too (numactl sketch below)
i do sometimes wish i was good at code
and they shifted to having pci-e lanes owned by a cpu
oh, don't worry, normal people aren't good with this stuff :/
i curse the way some things work and would just ♥ to do it properly
i only understand a little..
i don't know if you're interested, but facebook had an interesting article on memcached scaling.
basically they really struggled with memcached scaling with "random" data.
memory bandwidths etc have gone up, but latency still sucks.
i doubt most people on facebook would understand it
so if you randomly go all over memory then you're pretty latency bound.
g+ has a generally more knowledgeable audience, which i'm finding out
this is facebook for their own systems. they're using memcached.
wow
to cache people's feeds etc.
i despise facebook - never liked it
but it's really unpredictable.
i just find large systems fascinating.
one site i do like is waybackmachine
but yeah, you can't just throw cpu/resources at the problem.
that caches a tonne of sites dating back quite a while
i think the general solution to this problem is sharding, where you put resources in different locations to break it down and increase locality.
a good way of finding old sites
not sure what is wrong with my hhhh key
wow, i need a beer!
heh
i really need to get my jails up and running again
hosting my personal MTA and site off a bloody ARM system with the resources of a 1998 box
1.2GHz Kirkwood, 256MB RAM running arch
it gets hammered doing DNSBL checks when the email starts flooding
ouch
my mail server is a 2gb vm
err, make that 1.5gb
it has half a gig free atm
well, "cached"
the VPS is only 512MB .. but being FreeBSD, it copes extremely well
but i read mail locally on it with mutt
and mutt is a huge memory hog when you have large mailboxes
mine just forwards to other email accounts
but it does DNSBL/SPF/DKIM checking first
yeah, i do that stuff too
you have to nowadays :/
and amavis
reminds me.. why the hell am i bothering to set up dovecot - i don't store mail locally anymore
even root mail gets sent to a dedicated email account offsite
"how secure is your server?" - "well, atm i just noticed the only thing running is ssh on a random port, not a sodding thing else hooking the socket"
...
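On the NUMA point, a quick way to see which cores and memory belong to which socket and to pin a busy workload accordingly, assuming numactl is installed; the node number and the command being run are placeholders:

  # show nodes, their cores, and how much memory each one owns
  numactl --hardware

  # keep a workload on node 0's cores and memory so it stays near its own RAM (example command)
  numactl --cpunodebind=0 --membind=0 ./my-workload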
i just hope to hell sshd doesn't die - cba with finding a web browser for oob
grody: Fired up a 2.1.5 ISO and took a screenshot: "requires at least 1 assigned interface", it does not require 2. https://dl.dropboxusercontent.com/u/3167967/screenshot_2015-03-08_19-43-36.png
(and following that, https://dl.dropboxusercontent.com/u/3167967/screenshot_2015-03-08_19-45-53.png)
that's a better default