Calling All Computers

Check_mk and FreeNAS

Note

Software involved:

  • FreeNAS 9.2.0
  • OMD 1.10 (check_mk 1.2.2p3)

FreeNAS is great, and the web interface makes it easy to understand my NAS's overall structure. However, my favored method of monitoring in my homelab is OMD with Check_mk, while FreeNAS prefers a self-contained collectd solution. We're in luck, however: FreeNAS is heavily based on FreeBSD, which check_mk happens to have an agent for, so it shouldn't be too hard to set things up the way I like them. There are two possible ways to do this:

  • Enable inetd and point it to the check_mk_agent
  • Call check_mk_agent over ssh, as shown in the datasource programs documentation

I decided to go the second route, as I prefer to avoid making manual changes to FreeNAS whenever possible.

If you do decide to go the inetd route, this thread may come in useful.
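For the curious, the inetd route boils down to giving the agent port a name and pointing inetd at the agent binary. Roughly something like the following, where the agent path is my assumption (6556 is check_mk's usual agent port), so adjust to wherever you put the agent:

    # /etc/services -- name the check_mk agent port
    check_mk        6556/tcp

    # /etc/inetd.conf -- spawn the agent for each incoming connection
    check_mk  stream  tcp  nowait  root  /usr/local/bin/check_mk_agent  check_mk_agent

After that you'd restart inetd and allow the port through the firewall. Since FreeNAS can overwrite these files on upgrade, this is exactly the kind of manual change I'd rather not babysit.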

Agent Setup

The first thing we need to do is set up a user with a home directory where we can store the check_mk_agent program. If you already have a non-root user set up for yourself (which is good practice), that will work perfectly fine (though it may need root access to collect certain data points). If you want to be more secure, you can set up a check_mk-only user and limit it to just the agent command, which I will explain below.

Once the user is set up with a writeable home directory, it's as simple as copying check_mk_agent.freebsd into the home directory. Run it once or twice to make sure it's collecting data correctly.
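For the record, the copy itself is nothing fancy. Something like the following works; the OMD-side path is from memory, and "mysite"/"freenas" are placeholders for your own site and host names:

    # from the OMD server: push the FreeBSD agent into the user's home on FreeNAS
    scp /omd/sites/mysite/share/check_mk/agents/check_mk_agent.freebsd omd_user@freenas:check_mk_agent

    # on FreeNAS: make it executable and give it a test run
    chmod +x ~/check_mk_agent
    ~/check_mk_agent | head -n 20

If the output starts with a <<<check_mk>>> section followed by a pile of system stats, you're in business.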

Check_mk setup

From here it's basically a matter of following the instructions in the datasource programs documentation linked above. Here's a quick overview of the steps involved:

  1. Add a datasource_programs configuration entry to main.mk. It will look something like this:

    datasource_programs = [
      ( "ssh -l omd_user <IP> check_mk_agent", ['ssh'], ALL_HOSTS ),
    ]
    
  2. Set up password-less, key-based ssh access to FreeNAS for omd_user (see the sketch after this list)

  3. (Optional) Limit omd_user to only the check_mk_agent command by placing command="check_mk_agent" in front of its key in authorized_keys (also covered in the sketch below)

  4. Add the ssh tag to the FreeNAS host, through WATO or the configuration files

  5. Enjoy the pretty graphs and trends!
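
For steps 2 and 3, the general idea looks something like this. User names, host names and paths are placeholders (the agent path has to match wherever you dropped it during the agent setup), and the authorized_keys entry is all one line:

    # on the OMD server, as the site user: create a password-less key and install it
    ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
    ssh-copy-id omd_user@freenas   # or append ~/.ssh/id_rsa.pub to authorized_keys by hand

    # confirm it works without a password prompt
    ssh -l omd_user freenas ./check_mk_agent | head

    # optional hardening, in ~omd_user/.ssh/authorized_keys on FreeNAS:
    # a forced command means this key can only ever run the agent
    command="/home/omd_user/check_mk_agent",no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... omd@monitoring

One thing to keep in mind: with a forced command in place, sshd ignores whatever command the client asks for and runs the forced one, so the trailing check_mk_agent in the datasource program is effectively just decoration. Also, if the agent only lives in the home directory rather than somewhere in PATH, the datasource command may need ./check_mk_agent or the full path instead.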


Tuning NFS in the real world

Intro

After a hardware upgrade, I found myself with the classic setup of a high-powered server on one end of a switch and a NAS on the other. However, serving files over NFS was painfully slow. Back in the day I could have written it off as "NFS sucks", but I'm planning to potentially use these datastores for VMs, and with the speeds I was getting, it would get ugly very quickly. So, armed with this very nice presentation from SGI, I was ready to tackle the world.

Initial Testing

The first thing they said to do was get exact numbers and check for bottlenecks. In a real-world test, the max file transfer speed turned out to be a little more than 10 MB/s. Not terrible for a network, but much less than I'll need. My disks are WD Reds in a RAID6, so I was confident the read speeds would be faster than 10 MB/s, although I should really get exact numbers, at the very least to prepare for the array possibly becoming a bottleneck later on. However, testing the network (using iperf), it turned out that the max network speed I could get was 94.1 Mbit/s (about 11.7 MB/s). ...Shit. I should've seen this coming, having only a 100 Mbit/s switch. I could probably bring that up to 200 Mbit/s with a little bonding magic, but I figured it was safer to future-proof, and bought a cheap gigabit switch.
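For anyone reproducing this, the testing itself was nothing sophisticated, roughly along these lines, with host names and the mount point being placeholders for my setup:

    # raw network throughput: iperf server on the NAS, client on the other box
    nas$     iperf -s
    server$  iperf -c nas

    # "real world" transfer speed: read a big file off the NFS mount
    server$  dd if=/mnt/nas/bigfile of=/dev/null bs=1M

dd prints a transfer rate when it finishes, which lines up nicely against whatever iperf says the wire can actually do.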


BASH Documentation

One of the things that always bothers me is lack of proper documentation. Now, I'm lazy just like everyone else, but if I'm going to document something, I prefer to do it properly and keep it up to date. I've inherited a nice suite of bash scripts, which aren't really complicated, but they all have the same copy & pasted header dated from 2003. Not exactly helpful.

So while I have a wiki that explains how some of the processes work on a higher level, it would be nice to have clean documentation in my bash scripts. Ideally, it would be embeddable, exportable and human readable. Basically, I shouldn't have to maintain two files, I should be able to paste it somewhere else if need be, and I should be able to maintain it without any external tools whatsoever, if I wanted to.

Here is a list of options I found while browsing around:

  • The old-fashioned embedded comments
  • bashdoc (awk + ReST structure via python's docutils)
  • embedded Perl POD (via a heredoc hack)
  • ROBODoc

Of these choices, POD seems a bit bloated to be inside a script, and ROBODoc looks way overblown for my simple needs, so I've decided to go with bashdoc. I'm already working with ReST, via this blog, and it fits pretty much all the criteria. Plus, it has few dependencies (awk, bash and python's docutils) and doesn't require a package for itself, so I wouldn't feel bad about setting this up on production servers (although I should really set it up as a git hook in the script repo or something). However, documentation for bashdoc is quite limited (irony at its finest). The best way to figure out what is going on is to read lib/basic.awk, and the docutils source code, which isn't exactly everyone's cup of tea. That said, it shouldn't be too difficult to build a small template I can copy and paste everywhere, which will hopefully be more useful than the current header.
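I haven't nailed down the exact comment markers yet (that's what reading lib/basic.awk is for), but the rough shape of the header I'm aiming for is plain ReST living inside ordinary comments, something like this, where the script name and fields are obviously placeholders:

    #!/usr/bin/env bash
    #
    # backup_db.sh -- nightly database dump
    # =====================================
    #
    # :Author: me
    # :Date:   2014-xx-xx
    #
    # Synopsis
    # --------
    # ``backup_db.sh <database> [destination]``
    #
    # Dumps the given database and keeps the last seven copies around.

Anything that reads cleanly as ReST once the leading ``# `` is stripped should export fine through docutils, and it's still perfectly readable straight from less.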


Help! My PHP code isn't showing in Nikola!

I came upon this problem with my last post, and thought I should put out a short PSA. This isn't actually a problem with Nikola, but it's not an easy thing to google. The problem lies with Pygments, the library docutils uses to parse and mark up code. Basically, the PHP lexer expects a <?php to start off the code block unless a special option is given. Short of making that change manually in the docutils source, I couldn't find any way to set that option.


Fun with Basic PHP Optimization

A while ago I came across a nice piece of PHP software for controlling a daemon (I'm specifically avoiding the software's name for privacy reasons). It worked well with a small data set, but quickly became laggy with a dataset numbering in the thousands. Admittedly, it really wasn't built for that kind of load, so I removed it and controlled the daemon manually, which wasn't a big deal.

Then a while later, I came across a post by someone who managed to mitigate the problem by shifting a particularly array-intensive operation to an external python program. Obviously, this was not exactly the most elegant solution, so I decided to take a look at the problematic section of code.

It looked something like this:

    <?php
    for ($i = 0; $i < count($req->val); $i += $cnt) {
      $output[$req->val[$i]] = array_slice($req->val, $i+1, $cnt-1);
    }
    ?>

Looks pretty basic, right? It cycles through an array ($req->val) and splits it into a new dictionary ($output) based on a fixed chunk length ($cnt). However, if we turn this into a generic big-O structure, with the values borrowed from this serverfault post, the problem quickly becomes apparent.

    <?php
    for ($i = 0; $i < O(n); $i += $cnt)
      $output[$req->val[$i]] = O(n)
    ?>

Taking into account the for loop, this would appear to mean that the operation is O(2n²), in contrast to the very similar array_chunk(), which is O(n). So how do we optimize this? The most important thing is to make it so PHP can complete the job in one pass over the array. Everything else will be a nice improvement, but when scaling, the big O is king.

Here's the new code:

    <?php
    foreach ($req->val as $index => $value)
    {
      if ($index % $cnt == 0)
      {
        $current_index = $value;
        $output[$current_index] = array();
      }
      else
        $output[$current_index][] = $value;
    }
    ?>

We've dropped the for/count() loop in favor of foreach, and eliminated slicing in favor of appending to newly created elements. In a real-world test, this cut the module's average response time from 12s to 4s. A pretty big improvement for a pretty small change...


Welcome to my blog

Hopefully I'll be able to build this into a little repository of my tips, tricks and hacks as I navigate the strange and wonderful world of information technology. Maybe I'll even improve my writing, or help someone out with an obscure problem. The possibilities are endless! (For specific definitions of endless.)

