Friday, 30 December 2011

Storing Passwords

The most effective way to manage your passwords, for personal or professional use, is to use a password manager.  This lets you maintain unique logins for all the different resources you access (bank vs email vs general forums vs ...) while only having to remember one master password.  Pick a reputable password manager, like KeePass, and remember that backing up and restoring your password database is critical.

Keeping electronic copies is fine, but also consider keeping a hard copy in a relatively secure location.  One suggestion: print off your passwords every time you change your master password (doing so annually is pretty minimal), and write the new master password down on the printout so you can recover it if you forget it!  Especially useful if you do cycle your master password frequently.

Friday, 16 December 2011

WiFi Routers and NAS

The last time I bought a new router was when the Linksys WRT54G was "the king" of home WiFi routers - and mostly because you could replace the useless stock firmware with DD-WRT.  Otherwise, it was just "a router".  At the time, 4 years ago, which is like many generations in Internet time, you still had to manually set up security on your WiFi AP, so you saw lots of open WiFi hot-spots named "Linksys" or "Dlink" around.   Then the WiFi router manufacturers started providing security setup as part of their setup wizard, so you see more SSID customization and security enabled.  Now, apparently, everyone auto-configures security with a magic button called "WPS".  Then you've got other features like USB ports, so you can run a file server from a USB drive or a print server, and "guest networking", so you can isolate your workstations from other users.

"WPS" - WiFi Protected Setup is definately a cool feature.  It comes as a button on the router so when you press the button, its like the router goes into a sort of "security auto-config mode".  WPS, if its supported on your client (I assume it's a software install), will then automatically configure your client and your router with strong security settings. It means no more default passwords and streamlining the security options for users who frankly don't need to have "WEP" as an option.

[Edit: WPS is broken and should be disabled on all routers that support it according to SANS.]

Guest networking is another cool feature on some routers.  It is a separate SSID for, well, guests to use your WiFi from.  It is isolated from your main network so that guests won't have access to, for example, your network-attached printer or the media collection you stream from your laptop to your television.  This is just so cool for people who may be sharing their Internet connection with their neighbours or roommates but just don't want their guests' surfing habits to infect their own systems :)


And the USB ports.  Many routers seem to have one or two USB ports on them, which is interesting, but what's more interesting is what you can do with them.  A lot of new routers have a built-in file server, so as soon as you attach some storage, you can share files and folders from it to the PCs on your network.  How convenient is that?  Some routers have more sophisticated web interfaces than others and let you specify which folders are or aren't shared - but either way, if you're buying a new WiFi router anyhow and you get this feature, it means you get a functional NAS for the cost of a USB key or USB-attached hard drive!  *And* some routers are starting to come out with USB 3 (SuperSpeed USB), which, considering these routers have not only 802.11n speed on the WiFi but also Gigabit speed on the network ports, is an awesome feature.

And that's not the only thing you can do with the USB port - some routers will also act as a print server!  So you attach your generic USB printer to the router, and it's now a network printer you can print to from any laptop or PC in the house.  Talk about a great value-added feature!  I love it!

And did I mention that new routers now all come with wireless N and Gigabit LAN interfaces?  WiFi is still garbage and a ways away from being reliable outside very small deployments, but N is an improvement over previous specs.  Interestingly, I also found out the other day that if you run your router in "dual band" mode to support both N and G clients, your wireless speeds on both N and G suffer.  So ironically, if you have any wireless G clients, unless you really need your N devices to run at "slightly faster than G but nowhere near N speeds", you should still run G only.

Cool beans!  I'm liking some of the features I'm seeing on the box these days from some of the WiFi routers.  A nice change from the utter crap they used to schlep out, where the only smart thing to do was check if you could run a custom firmware on the device and replace the junk software it shipped with.

Wednesday, 19 October 2011

Source Control for Server Admin

So you manage a server, or a lot of servers, alone or in a team.  However you are doing this, you are going to be tweaking configuration files often and creating custom scripts for automation.  There are two tools I use for revision control - RCS for configuration files (generally) and SVN for scripts (generally).

RCS.  The classic.  All the documentation you will ever need is in the man pages - well, that and some context for how to use it.  RCS creates revision files in place.  So if you check in /etc/dhcpd.conf, it will create /etc/dhcpd.conf,v.  This is a very useful setup when controlling local files in arbitrary locations - like most of /etc on most of your servers.  There are a few caveats to keep in mind:
  • RCS will put revision files (the ,v files) in an RCS folder if present
  • The default behaviour is to remove a file from its current path on check-in
Keeping these in mind, this is my general pattern for working with files under /etc.
  1. If there is no RCS folder (e.g. /etc/RCS), create it first
    • mkdir -m 700 ./RCS
    • Assuming your working folder is where the file in question is, this will create an RCS folder and protect it from other users (typically non-root)
  2. If a file doesn't exist in RCS, check it in first
    • ci -u dhcpd.conf && co -l dhcpd.conf
    • ci is short for "check-in", unlike SVN or CVS, "ci" is the command and not an argument to "rcs"
    • The -u "unlocks" the file leaving it in place (so dhcpd can read it)
    • co is "check-out" and -l "locks" the file for editing
      • I always leave files checked out to capture changes by other users or by the system (like rpm)
  3. If the file does exist in RCS, check for any un-committed changes
    • rcsdiff dhcpd.conf
    • This does a diff against the last checked-in version by default but you can specify a version if you want to compare against earlier changes
    • Check-in any un-committed changes or find the person who made the changes and make them do it
  4. The file should always be left checked-out (per above comment), otherwise check it out
  5. Make changes
  6. Check-in changes, and check-out the file for the next user
    • ci -u dhcpd.conf && co -l dhcpd.conf
    • Give a brief log message indicating what the changes were and again, leave the file checked-out to capture changes by the system or other users
The last useful command I'll mention here is rlog, which lets you read the revision history log.

Now SVN is a proper centralized source control system.  They have excellent documentation on setting up a repository.  This is very useful for system admin scripts. 

Although most system administration scripts won't ever have "releases" or "branches", you probably still want to follow the SVN guide and create at least a trunk in case you ever do need to tag a specific version.   There's no cost to it, so I use a trunk even though I've never needed one - restructuring the repository later is a pain.

With SVN you'll want to keep an updated local working copy ("tip") either in a shared NFS location or locally on each server.  How you do it is up to you; just create a cron job to run "svn update /path/to/tip" and then you can always run scripts from that path.
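A minimal sketch of that setup, using a local file:// repository and hypothetical paths (adjust for svn+ssh or http access in practice):

```shell
# Create the repository and the conventional trunk.
REPO="$(mktemp -d)/scripts-repo"
svnadmin create "$REPO"
svn mkdir -q -m "create trunk" "file://$REPO/trunk"

# Each server keeps a working copy ("tip") to run scripts from.
TIP="$(mktemp -d)/tip"
svn checkout -q "file://$REPO/trunk" "$TIP"

# A cron entry (e.g. a line in /etc/crontab) keeps the tip current:
# 0 * * * * root svn update -q /path/to/tip
```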

RapidSVN is a great tool - well, maybe not great, but it works very well for sys admin purposes anyhow and it's readily available.  So check out your own working copy of the trunk with RapidSVN.  I configured RapidSVN to use gedit as my standard editor and meld as my diff tool.

This gives you everything you need for the day-to-day work of creating and maintaining system configuration files and your toolbox of scripts for automated system maintenance.

Saturday, 15 October 2011

Debugging Python Scripts

This is really just props for a site I found with a nice walk-through of using the Python Debugger - pdb.

http://pythonconquerstheuniverse.wordpress.com/2009/09/10/debugging-in-python/

pdb is Python's built-in step-through debugger, allowing you to inspect objects and do all the usual things you need when developing a program.

Friday, 9 September 2011

Running the numbers

Two interesting tools popped up recently.

Good old Linux Counter has been passed down to a new maintainer. This is a classic project which attempts to gather Linux usage data from user input. It's hard to tell if it's particularly relevant, but it is interesting to see relative usage across platforms and by region. As for estimating global Linux use? It's hard to believe this provides a good enough sampling to be convincing. Nevertheless, I keep my machines at home registered there. Or at least some of them :P

Another one I really like is Debian Popcon, which tracks the popularity of Debian packages by installs and by "votes". Popcon is actually just a Debian package which phones home your installed package list; it is installed by default on some distros but not others. What I like about popcon is that when there is a wide variety of F/OSS tools available, you can check the list to see which tools rank highest, so you can at least start by trying the most-used tool rather than taking a total wild guess. For example, looking for an SVN GUI tool, I did a "yum search svn" and got a lot of hits. So I opened up popcon, searched the list top to bottom for "svn", and the highest-ranked hit that was a GUI tool was RapidSVN. Well, then I checked with Dante which tool he used, and lo and behold, it was RapidSVN :)

Thursday, 7 July 2011

Reorganizing Ubuntu Partitions

My personal PC at home died. It was an old PC no matter which way you look at it. Every part had been replaced or upgraded over time (case, PSU, optical drive, hard drive, memory, CPU, mainboard, NIC, video card), so knowing its actual age is pretty hard, but it looks like "Friday" existed as a PC for 7 years. Checking my blog, the first reference I found was October 31, 2004, indicating Friday was the new name for an old PC called Michael.

Time for a new new PC. I've reused the optical drive but everything else is new in Agnes (from Immortality by Milan Kundera). I did your basic "install Windows first, Ubuntu second" so pretty much just a mommy-install. Until I realized I really hadn't made a big enough Windows partition.

I figured it would be a pain moving the first Ubuntu partition back on the drive, so I backed everything up and booted from the Live CD. "gparted" is included on the live CD, and it was painless to shrink the Ubuntu partition, move it "right", and extend the Windows partition. I didn't have to reinstall grub or do anything else; it pretty much just worked - for both OSes. It's always so nice when things just work.

But I will say, it's pretty dumb that Ubuntu doesn't use LVM. As I have posted before, LVM is very useful. What would be nice is if I could have just lumped most of the free space into LVM and then carved out an LV for home and another for media, so I could grow them as needed. Rather than fiddle too much with that, though, I ended up just going with a relatively large /home partition, which I will grow as needed; if I need space for other things - like more storage under the 'doze - I can put a partition at the end of the disk.
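For the record, a sketch of what that LVM layout could have looked like, assuming a spare partition /dev/sda3; the device, volume group, and LV names are all hypothetical, and these commands are destructive, so treat them as illustration only:

```shell
# Put the free space into LVM (assumes /dev/sda3 is unused).
pvcreate /dev/sda3
vgcreate vg0 /dev/sda3

# Carve out LVs for home and media, leaving the rest unallocated.
lvcreate -L 50G  -n home  vg0
lvcreate -L 200G -n media vg0
mkfs.ext4 /dev/vg0/home
mkfs.ext4 /dev/vg0/media

# Later, grow a volume into the free space as needed.
lvextend -L +20G /dev/vg0/media
resize2fs /dev/vg0/media        # grow the ext4 file system to match
```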

Thursday, 16 June 2011

Heartbeat

I recently tested out running Heartbeat (finally - it took too long to get to this, but that's another story). This is a cluster resource manager (CRM) which polls nodes in a cluster and brings resources up when a node failure is detected.

It's interesting. I wouldn't call it elegant, really; maybe the newer Pacemaker would seem cleaner. But it is simple, and at least in testing it is effective, especially when combined with DRBD, which I posted on earlier. The thing is, where DRBD really seems built for top-notch resiliency and flexibility, Heartbeat seems like it will work, but it's not obvious that you'll get what you expected - or maybe it's just that the documentation on DRBD was really well done.

At any rate, there is great documentation on getting Heartbeat up with DRBD both from the CentOS wiki and from DRBD. I used heartbeat with drbd83 in CentOS.

What Heartbeat will do is listen for heartbeats from a peer node in a cluster and if a peer goes down, it will bring up services on the working node. There's a handful of important things about this to keep in mind.

First is the heartbeat: this is just a stand-alone network connection between two nodes, so if that connection goes down or the heartbeats get choked out by competing traffic, Heartbeat may well decide you have a node failure. This is not a trivial problem, because a CRM that can kill services on an active node is potentially a new point of failure. And this is common to many HA configurations, including DRBD itself, though as we know, DRBD will identify split-brain and give you some recourse for repairs. So the suggestion here is to use a dedicated connection, preferably a point-to-point connection over a cross-over cable or a serial port - and a point-to-point connection is not uncommon for clusters, as in this white paper for Microsoft Storage Server.

Then there is the issue of resource management: when the CRM is managing the resources, the usual OS procedures should not. If Heartbeat is in charge of bringing up MySQL, you shouldn't be starting MySQL from the init scripts when the OS boots. Now, the nice thing with DRBD is that its behaviour is consistent with this paradigm: when DRBD resources start up, they are in "secondary" mode and cannot be accessed by the OS. So if you have a file share protected by DRBD, Samba wouldn't be started by the OS, and likewise, that file system would be unavailable when the OS starts (by default at least). So here, Heartbeat makes a lot of sense. Take a 2-node cluster, for example: when the nodes start up, Heartbeat looks for the peer, picks one node to become active, and then makes the DRBD resource "primary" on that node, mounts the file system, and starts smb. On the stand-by node, 'smb' is off and the file system is not writeable, which helps ensure consistency.

I guess I could go on about Heartbeat quite a bit, but there's one last thing to mention specifically here, and that's the style of cluster. There are "R1" style clusters, which are simple but limited to 2 nodes (among other limitations), and then there are CRM-enabled clusters, which are more robust but more complicated to configure. I have only used R1 because it was sufficient for my needs - 2 nodes, one a known "preferred" node, and keeping the cluster configuration in sync "manually" wasn't onerous. But CRM-enabled clusters are more interesting because you can add more nodes, the cluster configuration is distributed automatically, etc.
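For concreteness, an R1-style two-node setup like the one described is driven by a couple of small files under /etc/ha.d; the node names, interface, DRBD device, and mount point below are examples only:

```
# /etc/ha.d/ha.cf (same on both nodes)
logfacility local0
keepalive 2              # heartbeat interval in seconds
deadtime 30              # declare a node dead after this long
bcast eth1               # the dedicated heartbeat link
node alice bob
auto_failback on

# /etc/ha.d/haresources (identical on both nodes, R1 style)
# "alice is preferred: make DRBD resource r0 primary, mount it, start Samba"
alice drbddisk::r0 Filesystem::/dev/drbd0::/srv/share::ext3 smb
```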

The one thing I haven't really touched on is the quorum, which those more familiar with cluster management will know better than I. Basically, with Heartbeat in an R1 style cluster, there isn't going to be a quorum. Your configuration is maintained pretty much manually, services are only running on one node, etc. In CRM style Heartbeat or other application clusters, the quorum is what all the nodes agree on, and it is typically stored in a file. On Windows Storage Server and other clusters, the quorum is stored on the shared disk, which means any problem there means the cluster fails. With Heartbeat, the quorum file is copied among the nodes, but this is susceptible to getting out of sync - like if there is a communication failure on the heartbeat channel leading to a split brain. Or at least this is my limited understanding. At any rate, it is a problem, and it isn't trivial when working with active/active or multi-node configurations.
