Thursday 31 December 2009
Battery backup at home went off today BEEEEEEEEEEEEEEEEEEEEEEP BEEEEEEEEEEEEEEEP! Everything shuts itself down and I go to reboot the UPS when *sniff* *sniff* ah yes, the distinctive smell of burned electronics. So that's it, finally. Adios, APC BackUPS 350. You will torture us no more with your intermittent failures!
Now I have to look for a new UPS. Preferably small in size (it has to go under my desk) and monitored. APC's successor to the UPS model I had wasn't monitored last time I checked; maybe they've got a newer model that is, though. If not, I'll have to look at other manufacturers, and then that means looking at software support, etc.
Oh well, out with the old! Happy New Year's!
- Arch
Friday 6 November 2009
Nagios Rules All
Nagios is a network monitoring application which does no actual monitoring itself; rather, it specializes in scheduling checks and notifications. As a modular framework it works very well, there are a lot of monitoring plugins available, and all told there aren't many (or any) systems that really compare, F/OSS, proprietary, or otherwise.
Since it's not a complete solution in and of itself, I at least found it a bit daunting to get into. So I got this book:
Building a Monitoring Infrastructure with Nagios by David Josephsen
It's not a huge book like, say, HP's OpenView manual(s), so read it first.
Nagios is super cool. You build definitions for each host on your network and each service on each host. Nagios checks each service, recording its status. When a service fails, Nagios will send a notification once it is sure the service is down, and then periodically until it comes back up.
Fine, that's the basic premise. Now, the configuration works pretty well because any host can inherit its configuration from any other host definition, including host definition templates. So set your general parameters once, and then override where necessary. It's the same for services. And you also have host and service groups which allow you to logically group hosts (or services).
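To make the inheritance concrete, here's a minimal sketch (the host names, address, and contact group are made up; the directives are standard Nagios object config):
define host {
    name                  generic-host    ; a template, not a real host
    register              0
    check_command         check-host-alive
    check_period          24x7
    max_check_attempts    3
    contact_groups        admins
    notification_interval 30
    notification_period   24x7
}
define host {
    use       generic-host    ; inherit everything above
    host_name web1
    address   192.168.1.10
}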
Nagios doesn't have any built-in way to check services; it's all through plugins. A plugin is simply an external script or program which exits with a status of 0 for OK, 1 for warning, or 2 for critical (3 means unknown), plus an optional single line of standard output for status text. Nagios has many standard plugins available, for example the check_ping plugin. This plugin is a little wrapper script which is invoked with arguments specifying the warning and critical thresholds for response time and packet loss. So in testing a plugin, you can simply invoke it with the arguments that Nagios would be feeding it.
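To see how low the bar is, here's a toy plugin (not a stock one, purely an illustration) that warns or goes critical when a file gets stale:
#!/bin/bash
# toy_check_file_age: WARN/CRIT when a file is older than the given seconds
# usage: toy_check_file_age <file> <warn_secs> <crit_secs>
FILE=$1 ; WARN=$2 ; CRIT=$3
if [ ! -e "$FILE" ] ; then
    echo "UNKNOWN - $FILE does not exist"
    exit 3
fi
# age of the file in seconds
AGE=$(( $(date +%s) - $(stat -c %Y "$FILE") ))
if [ "$AGE" -ge "$CRIT" ] ; then
    echo "CRITICAL - $FILE is ${AGE}s old"
    exit 2
elif [ "$AGE" -ge "$WARN" ] ; then
    echo "WARNING - $FILE is ${AGE}s old"
    exit 1
fi
echo "OK - $FILE is ${AGE}s old"
exit 0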
Now, if a service goes down, Nagios will check whether the host is down. Again, this is done with a plugin of the same type as for the service; typically, this means check_ping. So you don't really need a check_ping "service" check, just the host check. If your host runs a webserver, you would use check_http, and if that fails, Nagios will run check_ping against the host to see if it's down. If the host is down, well, then obviously all services on that host are a write-off, so Nagios will send one notification for the host rather than one for each individual service.
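So a typical setup is just a check_http service attached to the host, something like this sketch (names and template are made up):
define service {
    use                 generic-service    ; assumed service template
    host_name           web1
    service_description HTTP
    check_command       check_http
    max_check_attempts  3
}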
And when I say Nagios will send a notification, it doesn't know how to do that either. Notifications are also defined but typically the stock notification will suffice. On Fedora, it uses the "mail" program to send a mail message.
Ah, so who does it notify? Well, each service and each host defines contact groups and also notification hours. So Nagios will notify everyone in a contact group, but only during notification hours. So you can monitor your development systems as well as production ones and only get notifications when appropriate.
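A sketch of those pieces (names, email, and hours are made up; notify-*-by-email are the sample commands from the stock config):
define timeperiod {
    timeperiod_name workhours
    alias           Business hours
    monday          09:00-17:00
    tuesday         09:00-17:00
    wednesday       09:00-17:00
    thursday        09:00-17:00
    friday          09:00-17:00
}
define contact {
    contact_name                  dom
    email                         dom@example.com
    host_notification_period      workhours
    service_notification_period   workhours
    host_notification_options     d,r
    service_notification_options  w,c,r
    host_notification_commands    notify-host-by-email
    service_notification_commands notify-service-by-email
}
define contactgroup {
    contactgroup_name admins
    members           dom
}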
Nagios also provides escalations. So if a service (or host) remains down, you can define an escalation path. Maybe level one is help desk, and if they don't respond, it escalates to supervisors as well, and if they still don't respond, then it escalates to on-call staff, managers and eventually the head cheese.
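An escalation is its own object; a sketch with made-up names and numbers:
define serviceescalation {
    host_name             web1
    service_description   HTTP
    first_notification    3     ; kicks in at the 3rd notification
    last_notification     0     ; 0 = stays escalated until recovery
    notification_interval 15
    contact_groups        managers
}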
What else is cool? Oh yeah, parent-child relationships. On each host, you can define parent hosts. So if you have, say, several routers throughout your network, connectivity to hosts depends on connectivity to their routers. If a router goes down then, as with services on a host, Nagios will know to notify only about the router being down, not about all the children individually.
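Parents are just one directive on the host definition (made-up names again):
define host {
    use       generic-host
    host_name branch-www
    address   10.2.0.80
    parents   branch-router
}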
There is also an agent for Nagios called NRPE. It is totally optional, but if you want local system checks, like disk, CPU, or checking running processes, and not just network service checks, then NRPE lets you do this. NRPE is available for "Linux" and Windows; install it on your monitored hosts, and it is, I think, like a little baby Nagios invoked by the mothership. So you install service check plugins with NRPE, and then on the server your service checks look like check_nrpe!check_disk ... or something like that, so the server sends the service check to the NRPE on the monitored system. I haven't used this yet, but will definitely be doing so.
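From the docs, the wiring looks roughly like this; treat it as a sketch with made-up names and example thresholds. On the monitored host, in nrpe.cfg:
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
And on the Nagios server:
define command {
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
define service {
    use                 generic-service
    host_name           web1
    service_description Disk /
    check_command       check_nrpe!check_disk
}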
The NEB is another cool part of Nagios. The Nagios Event Broker is an interface where you can write programs which hook into Nagios's regular operations. There are a couple dozen callback functions you can hook into, and this makes the possibilities for Nagios virtually endless.
The part I've left for last is the user interface. Well, once again, there is none. You configure it, it fires off notifications; that's the core. There is a web interface Nagios provides that you can use if you want. It will show you host and service status, and you can acknowledge alarms through it and schedule downtime for hosts. You can also hook in more functionality; for example, historical graphs can be very useful. If you're checking disk usage with Nagios, why not keep a record of it? When you get a service check result back, it comes with one line of text from standard output, right? Well, there are packages that will build graphs from this data so that you can have your service status and historical reports too! Josephsen has a pretty extensive discussion on doing this kind of stuff and some great info on some of the options out there.
So, yeah. Get that book! Use Nagios! Monitor everything with it! Let it tell you when your toast is toasted or your beer needs a refill!
- Arch
Thursday 1 October 2009
Fedora Bootable USB
LiveUSB Creator, it's a wonderful thing. Connect a USB key, get the LiveUSB Creator on your PC (Windows or "Linux"), point it either to a local .iso file for a Fedora live CD or let it download the version you want for you, click go, and shazzam! (yes, "shazzam") You've now got a bootable Fedora USB key. And if you gave it a block of persistent storage, you've got, well, persistent storage to use in this OS for data files etc.
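If you'd rather do it from a terminal, the livecd-tools package ships a script that does roughly the same job; a sketch (the ISO name and the device are examples, so double-check that device before you run it):
livecd-iso-to-disk --overlay-size-mb 512 Fedora-11-i686-Live.iso /dev/sdb1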
- Arch
Wednesday 30 September 2009
Processing Deferred Messages in Postfix
For anyone who's had to clean up mail problems with a Postfix configuration (or, more often, with other things tied into Postfix but not part of it, like anti-spam), it's common enough that a large spool of mail gets queued up and needs to be pushed out. The easy way to do this is either "postfix flush" or "postqueue -f", which basically force Postfix to re-process pending (usually, in fact, "deferred") messages and send them out.
However, if either the queue is huge, or you don't really know whether you have your problems resolved and want to try a few messages before unleashing the masses, I found it was not clear how this can be done. There is a straightforward way: put everything on hold using "postsuper -h ALL deferred", and then un-hold whichever messages you do want processed with "postsuper -H".
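Concretely, the whole dance looks something like this (the queue ID here is made up; "postqueue -p" will show you real ones):
# put all deferred mail on hold
postsuper -h ALL deferred
# list the queue and pick a message to test with
postqueue -p
# release that one message and make Postfix try the queue now
postsuper -H 3A1FB20C8D
postqueue -f
# once you're satisfied, release everything else
postsuper -H ALL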
Tres handy
Friday 11 September 2009
Let's FUSE him with this juice!
Back in the olden days, like a year or two ago, Filesystem in Userspace (FUSE) was a fancy feature that allowed users to mount file systems. Using FUSE means that you can create a file system driven by an application rather than a driver (e.g. a kernel module). When I first tried it, it meant customizing your kernel to include this feature and building a bunch of utilities and drivers, and generally it was awesome, but not something one does for a "quick fix".
Fast forward to a few months later (or aeons in OSS terms) and there's standard kernels and packages to operate FUSE. You can pull everything you need from your distro's stock repository.
In particular, there is sshfs, which is hella tight. "sshfs" is, as you might guess, a file system over SSH, implemented in FUSE. This means you get the security and features of SSH, including SSH keys and all that good fun. Installing "sshfs" and FUSE is a simple three step process:
- yum install sshfs (or aptitude install sshfs for Debian / Ubuntu users)
- ?
- Profit!
Similarly, once you've installed "sshfs", using it is a simple three step process:
- sshfs myhost.example.com:/some/remote/path /some/local/path
- ?
- Profit!
What could be simpler? If you're finding your virtual file system access in Gnome or KDE produces odd behaviour sometimes, just FUSE your remote file system instead. You get fully functional and secure access to remote file systems.
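For the record, a slightly fuller invocation (host, user, and paths are made up; idmap=user and reconnect are standard sshfs options) might be:
sshfs -o idmap=user,reconnect dom@myhost.example.com:/home/dom /mnt/myhost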
Oh, and just one last note, you use a FUSE command to disconnect the mount:
fusermount -u /some/local/path
Thanks, Toddz for mentioning FUSE the other day and getting me to revisit it.
Ciao,
- Arch
(title for this post nicked from an Invader Zim quote)
Friday 4 September 2009
Crappy Power
I've had some problems in the somewhat recent past where my UPS went into panic mode, and because the battery was old / crappy, this made things "very bad". I've had no issues since replacing the battery, but now I'm getting a picture of why it was so awful from apcupsd:
Mon Aug 31 11:13:36 PDT 2009 Power is back. UPS running on mains.
Mon Aug 31 11:13:34 PDT 2009 Power failure.
Thu Aug 27 11:20:20 PDT 2009 Power is back. UPS running on mains.
Thu Aug 27 11:20:18 PDT 2009 Power failure.
Sat Aug 22 16:59:32 PDT 2009 Power is back. UPS running on mains.
Sat Aug 22 16:59:29 PDT 2009 Power failure.
Sat Aug 22 16:56:29 PDT 2009 Power is back. UPS running on mains.
Sat Aug 22 16:56:27 PDT 2009 Power failure.
Fri Aug 21 00:12:33 PDT 2009 Power is back. UPS running on mains.
Fri Aug 21 00:12:31 PDT 2009 Power failure.
Fri Aug 21 00:11:52 PDT 2009 Power is back. UPS running on mains.
Fri Aug 21 00:11:50 PDT 2009 Power failure.
... etc
There are a lot of power events going on. Given that the duration of each "power failure" is always 2 seconds, my guess is that this just means the power is fluctuating. I've lived in places where this happened a bit and where it happened not at all, but this is the worst I've seen.
The only thing I can say is: get a UPS if you don't have one! You may not need battery backup per se, but this is the kind of stuff that will send the power supply unit in your PC to an early grave. And if you're unlucky, the PSU may just take other components of your PC with it.
- Archangel
Thursday 27 August 2009
Rolling dice in Bash
I often need short random numbers at work. For example, if I'm scheduling a whole bunch of servers to do the same automated tasks and I want them to not run at exactly the same time, I'll use a random minute between 0 and 59 to have them run on different minutes. You can do this somewhat easily in bash using the $RANDOM variable and a mod operation like so:
echo $((RANDOM%60))
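For instance, when setting up a server, you can stagger a nightly job to a random minute like this (the job and file names are made up):
echo "$((RANDOM%60)) 2 * * * root /usr/local/bin/nightly-task" >> /etc/cron.d/nightly-task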
However, it's a bit long to type and sometimes I need batches of numbers. So I looked around at dice rolling programs but most were too fancy. So I wrote a simple simple script I called "roll" which returns sets of random numbers.
#!/bin/bash
# Roll
# This script returns the values and sum of a set of dice rolls. The first
# arg is optional and gives a number of dice. The second arg is the number
# of sides on the dice. For example "roll 2 6" will give two values from 1
# to 6 and also returns their sum.
#
# (c)2009 Dominic Lepiane

sides=6
dice=1
total=0
c=0

if [ $# -eq 2 ] ; then
    dice=$1
    sides=$2
elif [ $# -eq 1 ] ; then
    sides=$1
else
    echo "Usage: $0 [# of dice] <# of sides>" >&2
    exit 1
fi

#echo "Rolling ${dice}d${sides}"
while [ "$c" -lt "$dice" ] ; do
    c=$((c+1))
    roll=$((RANDOM % sides + 1))
    total=$((total + roll))
    echo -n "$roll "
done

if [ "$dice" -gt 1 ] ; then
    echo -n " = $total"
fi
echo ""
So if I want 12 numbers from 1 to 60, it looks like this:
./roll 12 60
21 32 30 38 56 36 27 19 25 34 25 48 = 391
Very handy!
Wednesday 15 July 2009
VMware and Unity
I've been running Fedora 11 (x64) on my workstation at work and running Windows XP (32-bit) in a VMware virtual machine. It was a VM I'd created with VMware Server, so all I needed to run it was the free VMware Player.
First, installing VMware Player was a bit of a problem. The install from RPM didn't work; it hosed right away. Then the install from the bundle also failed... much like it does for many users online, it turned out, so there was a community-created patch which worked just fine.
Then there was running the VM. Initially, it seemed great. I was running Windows XP full-screen on my right screen and had my Fedora desktop / apps on my left screen. But it was pretty wonky about mouse control, so I got to the point where I was firing up the Windows VM only when I needed it, and then not in full-screen mode.
But I discovered that VMware's Unity mode helps bridge the gap. It pulls you out of console mode and launches any apps from the guest VM in their own windows in your desktop environment. This is especially useful for, say, running MSIE or MS Outlook. It's still a little weird because the apps *look* like they should be running natively yet the responsiveness is clearly far behind local apps, but the only real gap is that I can't Shift+Right-Click -> Run As... on tools like Active Directory Users and Computers (which I need). I tried switching back to the console, doing the Run As..., and then switching back to Unity, but the escalated app doesn't show up.
Well, it's great and closes the gap some, but for now I'll just keep updating Player and Tools and see if eventually that full-screen mode just gets fixed and works transparently.
- Arch
Tuesday 30 June 2009
SpamAssassin
On previous mail server setups, I've tried to pass all mail coming into the server through SpamAssassin (The Fight Against Spam), and it's a bit of a struggle to get it working sometimes, so I've had nothing set up for a while other than some SMTP restrictions and a couple of the RBLs. Since SpamAssassin is generally geared to being run / configured per-user, I figured, what the hell, I'll try that. And it is way easier. All I did was plop this in my .procmailrc:
# SpamAssassin
:0fw: spamassassin.lock
| /usr/bin/spamassassin
:0:
* ^X-Spam-Status: Yes
Junk
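As an aside, if you're pushing a lot of mail through this, the usual lighter-weight variant is the spamc client talking to a running spamd daemon; assuming spamd is set up, the first recipe would just become:
# SpamAssassin via spamc (assumes spamd is running)
:0fw: spamassassin.lock
| /usr/bin/spamc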
So now, SA happily tags all my possibly spammy mail, and for actual spam, it dumps it in the Junk folder and actually strips the content, replacing it with all the reasons why the message was identified as spam (the original message is attached).
So far so good!
- Arch
Monday 15 June 2009
Using proc to force a reboot
So we just had this little discussion on IRC and I figured I'd save it for posterity here:
[11:37:36] to force a 'hard' reboot (if reboot is not working) - equivalent to pulling the power cable:
[11:37:44] echo 1 > /proc/sys/kernel/sysrq; echo b > /proc/sysrq-trigger
[11:40:45] come on dom, you know you want to try it.
[11:41:21] heh
[11:42:29] what's this do? what's this do? what's it do???
[11:43:13] you tell us
[11:47:09] yeah, that's awesome
[11:47:24] it just tells BIOS to reboot
[11:47:35] (or something like that anyhow)
[11:47:58] so, immediate reboot in other words?
[11:47:58] system just goes *blip* and starts posting
So there you go. Want to reboot without waiting for all those nasty processes to finish or physically pressing the power button? That's your way out.
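If you want to be marginally kinder to your filesystems on the way down, sysrq also has sync and remount-read-only triggers you can fire first:
echo 1 > /proc/sys/kernel/sysrq # enable sysrq
echo s > /proc/sysrq-trigger # sync disks
echo u > /proc/sysrq-trigger # remount filesystems read-only
echo b > /proc/sysrq-trigger # reboot, right now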
Thanks, toddz :D
- Arch
Tuesday 9 June 2009
Apache and LDAP users
Requisites:
Apache 2.2
mod_authnz_ldap (and enabled with a2enmod authnz_ldap under Debian+Ubuntu)
In your httpd.conf or your htaccess file, add the following:
# Access control for this directory
AuthBasicProvider ldap
AuthType Basic
AuthName "Password Required"
AuthLDAPURL "ldap://localhost:389/OU=Users,DC=example,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
AuthLDAPBindDN readonly@example.com
AuthLDAPBindPassword plaintextpassword
Require ldap-group CN=somegroup,OU=someou,OU=Groups,DC=example,DC=com
This example is for connecting to an MS Active Directory server. For an OpenLDAP server, you may find that you don't need the BindDN/Pass options and you need uid instead of sAMAccountName (or possibly just "ldap://localhost/DC=domain,DC=tld").
If you look at other sites online, you'll find that a lot of users say they have to fiddle the config to get it working. Some of the common things I saw were:
- Setting "AuthzLDAPAuthoritative off"
- Specifying at least one container under the base DN (as in my example)
- Tweaking the GroupAttribute and GroupIsDN options
- Using a DN for the AuthLDAPBindDN (UPN used in my example)
- Enabling SSL or TLS
- Multiple domain controllers (simply specify them separated by spaces in your URL)
- Filters with "Require ldap-filter"
... As you can see, there can be a lot of tweaking for specific sites. But all in all, the basic configuration is quite simple. If your LDAP server allows anonymous searches, you really only need the AuthLDAPURL line, and it can be as simple as "ldap://localhost/DC=example,DC=com".
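For reference, a minimal anonymous-OpenLDAP version of the above might look like this (the directory layout is made up):
AuthBasicProvider ldap
AuthType Basic
AuthName "Password Required"
AuthLDAPURL "ldap://localhost/DC=example,DC=com?uid?sub"
Require valid-user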
- Arch
Saturday 6 June 2009
Google Apps
One of the cool services that Google offers is the hosting of various services for your domain. Basically, you can brand Google with your own domain including mail, calendar, chat, docs, sites and "mobile" (I haven't used "mobile", but it includes sync services). The service is called Google Apps.
The "standard edition" is pretty much the standard services and limits you to 50 user accounts. And 50 people is quite a few for a personal domain or even a small business. Once you need more features or more accounts, its $50 / year per account. Which, truth be told, is pretty cheap since even just paying for anti-spam/anti-virus filtering is about $30 / year for pretty basic service from Symantec of whomever.
At any rate, I found it a bit confusing at first, but mostly because I was setting this up in a sub-domain (dl.thenibble.org) on GoDaddy. But once I got in, it's pretty easy. You get this dashboard which shows you which services are activated, and you can just click on whichever ones you want; if DNS changes are required, it will tell you and give you pretty specific instructions. But there are a lot of them. You have to make one just to activate the domain, add aliases for all your services (unless you want to use google.com/apps/mydomain or whatever), and then for email there are 5 MX records, and for chat about 10 SRV records.
But now that it's all setup, it's pretty fancy. You can create email groups, use docs, publish calendars, etc. I tried poking around a bit and really all that Google does for stuff like "sites" when you create an alias under your domain is it just redirects the user to sites.google.com/domain/whatever ... So it won't be a replacement for having a web host. But for email, it will just accept mail at your domain so it's a full email service.
And standard edition is free. Did I mention that? Yeah, it's ad-supported, but otherwise free.
Fun!
- Arch
The "standard edition" is pretty much the standard services and limits you to 50 user accounts. And 50 people is quite a few for a personal domain or even a small business. Once you need more features or more accounts, its $50 / year per account. Which, truth be told, is pretty cheap since even just paying for anti-spam/anti-virus filtering is about $30 / year for pretty basic service from Symantec of whomever.
At any rate, I found it a bit confusing at first but mostly because I was setting this up in a sub-domain (dl.thenibble.org) on GoDaddy. But once I got in, it's pretty easy. You get this dashboard which shows you which services are activated and you can just click on whichever ones you want and if DNS changes are required, it will tell you and give you pretty specific instructions. But there's a lot. You have to do one just to activate the domain, add aliases for all your services (unless you want to use google.com/apps/mydomain or whatever), and then for email, there's 5 MX records and for chat there's about 10 SRV records.
But now that it's all setup, it's pretty fancy. You can create email groups, use docs, publish calendars, etc. I tried poking around a bit and really all that Google does for stuff like "sites" when you create an alias under your domain is it just redirects the user to sites.google.com/domain/whatever ... So it won't be a replacement for having a web host. But for email, it will just accept mail at your domain so it's a full email service.
And standard edition is free. Did I mention that? Yeah, it's ad-supported, but otherwise free.
Fun!
- Arch
Friday 22 May 2009
Hardening a RHEL5 Box and the NSA
Hardening a server takes two general activities: Reducing the number of services that can be attacked and protecting any services that are still required.
There are a lot of discussions on how to do this for various operating systems, including RedHat Linux. RedHat's Deployment Guide is a good resource.
The NSA also has documents on securing your operating system. However, they're a little hard to get. I tried searching for RHEL5 on their site and had some difficulty accessing the documents in the search results.
Now, it's a little hard to access the documents on the NSA's E drive, but I was able to eventually find them by getting in another way ;) ;) ... Okay, I didn't break in to the NSA to get on their E drive; I found the page that actually has good links: NSA/CSS Operating Systems.
There's a longer document (about 170 pages) and also a short reference (2 pages) which gives lots of good things to secure.
There are a lot of other good resources online as well, so I won't ramble further. Just turn off anything you don't need, update what you do need frequently, and secure your system with a firewall and other security tools (PortSentry, fail2ban, DenyHosts, anti-virus software, rootkit detection, etc, etc, etc).
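On RHEL5, the "turn off anything you don't need" part mostly boils down to chkconfig and service (cups here is just an example victim):
# see what's set to start at boot
chkconfig --list | grep ':on'
# disable it at boot and stop it now
chkconfig cups off
service cups stop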
- Arch
Wednesday 20 May 2009
Virtual Host Debugging
I just came across this obscure feature of apache2ctl / httpd:
"man apache2ctl" doesn't give the switch parameters but merely alludes to the presence of them:
And on my system (Ubuntu 8.04), "man httpd" doesn't report diddly. It is in a manpage *somewhere* so I found it Online:
http://www.manpagez.com/man/8/httpd/
And what it says is:
So there you go. Hidden away in the documentation "somewhere" is possibly the most useful virtual host diagnostic tool.
- Arch
# apache2ctl -S
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:443 webmail.nibble.bz (/etc/apache2/sites-enabled/webmail.nibble.bz:3)
*:80 is a NameVirtualHost
default server alia.dl.nibble.bz (/etc/apache2/sites-enabled/000-default:2)
port 80 namevhost alia.dl.nibble.bz (/etc/apache2/sites-enabled/000-default:2)
port 80 namevhost blog.nibble.bz (/etc/apache2/sites-enabled/blog.nibble.bz:3)
port 80 namevhost www.nibble.bz (/etc/apache2/sites-enabled/blog.nibble.bz:17)
port 80 namevhost forums.thenibble.org (/etc/apache2/sites-enabled/forums.thenibble.org:2)
port 80 namevhost lists.thenibble.org (/etc/apache2/sites-enabled/lists.thenibble.org:10)
port 80 namevhost siona.nibble.bz (/etc/apache2/sites-enabled/siona.nibble.bz:1)
port 80 namevhost uro.mine.nu (/etc/apache2/sites-enabled/uro.mine.nu:2)
port 80 namevhost webmail.nibble.bz (/etc/apache2/sites-enabled/webmail.nibble.bz:51)
port 80 namevhost www.thenibble.org (/etc/apache2/sites-enabled/www.thenibble.org:3)
port 80 namevhost thenibble.org (/etc/apache2/sites-enabled/www.thenibble.org:15)
Syntax OK
"man apache2ctl" doesn't give the switch parameters but merely alludes to the presence of them:
SYNOPSIS
When acting in pass-through mode, apachectl can take all the arguments available for the httpd binary.
apachectl [ httpd-argument ]
And on my system (Ubuntu 8.04), "man httpd" doesn't report diddly. It is in a manpage *somewhere*, so I found it online:
http://www.manpagez.com/man/8/httpd/
And what it says is:
-S Show the settings as parsed from the config file (currently only
shows the virtualhost settings).
So there you go. Hidden away in the documentation "somewhere" is possibly the most useful virtual host diagnostic tool.
- Arch
Sunday 19 April 2009
Retiring the old hand-made blog
On the weekend I finally decided to shut down my old blog (it was under http://dl.nibble.bz/~archangel). I'd started it in early 2003 and sort of just built my own code. Needless to say, it was very simple, with basic "categories" features for links, and I built my own archives system, syndication, everything. Well, WordPress also started in 2003 and it now has all these features, and they work better and are richer. And their code is maintained.
Moving to WordPress is certainly simple. It imports blogs from many different formats, including RSS 2.0, which is a widely accepted standard for blog syndication. Since I'd already built a feed for my blog, but in Atom, I simply had to convert my code to generate RSS 2.0 and then have it spit out all my posts (about 200) into a single RSS 2.0 file.
For the switch to RSS 2.0, I just pulled an RSS 2.0 feed from WordPress. I then used the old copy & paste coding to make a syndication script which produced similar output. Then I debugged by running my output through the W3C Feed Validation Service.
Once my RSS 2.0 was validating, I made the script spit out the full content for all my posts rather than the usual 20, went into WP and just imported it and it took them all just fine.
And now, here we are!
- Arch
Tuesday 7 April 2009
Renaming Wordpress Blogs
Under Debian (Ubuntu), there's a helper script which does a lot of the work of adding a new blog. You need to create a hostname, add it as an alias in your apache config, and create a database. Then use this helper script to set up the database and config:
/usr/share/doc/wordpress/examples/setup-mysql
Now if you want to rename your blog, say from "inaction.example.com" to "takeaction.example.com", it's pretty simple:
- Create the new hostname in DNS,
- Add it as an alias to your apache config,
- Create a soft-link to the current config using the new name,
- Edit the settings for your blog and give the new URL.
Edit: You can easily change the settings (from the last step) in the db. If something goes terribly wrong, just poke around in there and update anything with the wrong URL.
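If you'd rather do that last step straight in the database, the two relevant rows are siteurl and home in wp_options; a sketch, assuming a database named wordpress:
mysql -u root -p wordpress -e "UPDATE wp_options SET option_value='http://takeaction.example.com' WHERE option_name IN ('siteurl','home');"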
- Arch
Sunday 1 March 2009
Retired virtual server
This is actually a ret-con repost, since I managed to mung my database again (hooray)... Anyhow, some time last month I finally did get everything moved off of Jessica (a VirtualBox VM) and onto Alia (actual hardware). It's all good.