Last night I had several odd dreams. In my last dream, I dreamt I was working in some sort of office with my desk facing a coworker. We were sitting there and a colleague came in and started talking to my coworker about installed packages. My coworker started typing away to figure something out when the colleague tossed in his two bits saying:
"Why don't you just rpm -qa"
This infuriated me! I jumped up and shouted:
"There's not fucking RPM in Debian, you son of a bitch!"
So the colleague ducked his head and bolted from the room and my coworker sat there flabbergasted. I sat down rubbing my jaw because I had yelled so loud I had strained it. And then I woke up.
So I guess this is just a subconscious warning: Don't make RedHat jokes about Debian systems. I get very upset, apparently.
Monday, 25 December 2006
Saturday, 23 December 2006
Postfix Cleanup
The number of users and domains being hosted on Siona has been growing for quite a while. We're now up to 29 users and 13 domains. Being an order of magnitude beyond the single-user/single-domain setup means there are some complications even though the server configuration is pretty basic.
For example, it is getting important to ensure that domains only deliver mail for a subset of the users. For a while, the domains were all just being appended to the "mydestination" attribute in the Postfix configuration, which meant that a) any changes required a mail server restart and b) there was no way to separate which users were in which domains.
A while ago, new domains started being added to the "virtual_alias_domains" hash file instead. This is really the way to go since modifying the list of domains and modifying the valid relay recipients is easy and allows control over who is in which domain. The process is still manual (13 domains is not that much to manage), but it is much easier.
So the latest cleanup issue in the configuration was to move all the extra domains out of the "mydestination" attribute and into the "virtual_alias_domains" hash file where they belong. Well, it was interesting. I had to check through the logs to see which users were actually receiving mail in which domains. Not too tricky at least.
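For reference, a minimal sketch of what that looks like (the file names and domains here are just illustrative, not my actual config):

virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
virtual_alias_maps = hash:/etc/postfix/virtual_aliases

Then /etc/postfix/virtual_alias_domains lists one domain per line (the right-hand value is ignored) and /etc/postfix/virtual_aliases maps addresses to local users:

example.org        OK
example.net        OK

info@example.org        alice
postmaster@example.net  bob

Run postmap on both files after any change so Postfix picks up the new .db files.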
It is really unfortunate that some of the old names, like "uro.mine.nu" and "dulcea.nibble.bz", still have to be maintained. It would be nice to retire those old domains. But the cost of keeping them is way less significant than the energy required to ensure that everyone has current email addresses for all the users.
So other than moving all logical domains to virtual domains, the other change was that I changed the server to no longer relay mail on the basis of "mynetworks". The SASL authenticated SMTP is working great so there's no need to just white-list the LAN. It's cool :D I'm excited because this is the way SMTP should be! Servers only accepting mail if they are either going to deliver the mail or if the connecting user or host is authenticated! Every SMTP server should be set up like this! There are fewer and fewer excuses to accept mail from an unauthenticated connection and more and more reasons to validate all mail all the time.
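In main.cf terms the end result is roughly this (a sketch, not my literal restriction list):

smtpd_recipient_restrictions =
    permit_sasl_authenticated,
    reject_unauth_destination

With permit_mynetworks gone, nothing gets relayed unless the client authenticates; everything else is accepted only if we are the final destination.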
All-in-all, the cleaned up Postfix config is a much better setup for my current and future needs. It's good :)
Friday, 15 December 2006
Blogging with WordPress on Debian
It turns out that there's a nice WordPress package in Debian (testing). It is up to date, and the package maintainer (Kai Hendry) provides some handy helper scripts.
WordPress provides a handy sample Apache config that was easy to add to the installation on Siona. That took care of that. Then, there's a helper script called "setup-mysql". The way the installation works is that once you get the base install going, you can then just set up a server alias for each blog you want to create and re-run setup-mysql, passing it the FQDN of the server alias. Voila! Multiple blogs right out of the box! No fuss, no muss.
Very nice and I have to admit, I'm very impressed with WordPress as well. It very nicely handles creating a personal site. You basically have blog posts and simple pages. The blog posts are categorized, archived, can allow user comments, etc., and those (usually) go on your main page. You can also write pages and those would just link off the main page. Oh, and you can add random links to other blogs and sites. WordPress has tons of themes and plugins available so you can tweak your look and feel the way you want.
Anyhow, enough raving! WordPress is very nice and the installation on Debian works great for creating multiple blogs for any number of friends/family/pets/whatever.
Wednesday, 13 December 2006
DD-WRT: Router Firmware Minus the Suck
So far I've tried the default Linksys firmware and OpenWRT. I think we all agree the Linksys firmware is hobbled and frustrating. For example, it limits you to 10 port forwarding rules, there is no signal strength tweaking, and there is some arbitrarily low maximum number of IP connections (I think around 500). All this means Linksys can neither forward all the ports I need nor let me run all the applications I want (specifically, BitTorrent uses a lot of connections).
OpenWRT was also hobbled but in different ways. The web interface was useless as tits on a bull, the community apparently rejects the idea that you can get port forwarding to work, and updating the software apparently bricks the router. Now the lack of web interface wasn't really a show stopper for me and in fact, I was pretty happy with straight terminal access. It was really the dead router that convinced me to dump OpenWRT.
Now DD-WRT has only had one problem so far. During installation, the router didn't come up properly. As per a comment for the v23rc2 installation, you have to do the manufacturer reset (hold the reset button and power-cycle) once DD-WRT is uploaded.
Otherwise, it's been great! The web interface is *way* nicer than the Linksys web interface. It supports an arbitrary number of forwarded ports, shows *way* more status information, lets you tweak up the max number of IP connections, and even lets you tune the wireless power levels.
We will see how long this experiment works for us; so far I'm optimistic.
Tuesday, 12 December 2006
Where are My Files?
A couple of tidbits over the last four weeks: I "upgraded" to Edgy Eft at home, and the Nibble installed Joomla and has been trying to use it as our portal.
In the case of the former, discussing all the problems I'm having would constitute an all-out rant. It is really a shame, but I've just had problems with the last two releases of Kubuntu. Maybe it's just Kubuntu and not Ubuntu in general, but it's really feeling like "the distro of the week". You know, there are things you like about the distro, it gets lots of press, it seems lively, but there are just too many annoying problems, so whatever comes out next week may just be better.
Anyhow, rather than just dig in and rant, let me just say that one problem I've fixed has to do with Konqueror not displaying files. Specifically, if I browsed to the root of the file system, I could only see home, media, data (for music and video), and windows. Not etc, var, or other folders that are useful.
It turns out that if there is a file called ".hidden" in a folder, with one file (or folder) name per line, then Konqueror will not display those entries. Some dimwit thought that this would "simplify things" for "the average user". I'm sorry, but obfuscating the file system is not the answer. As it is, "the average user" pretty well sticks to "Documents" and their Desktop. No hiding of folders necessary. That seems like a Finder-esque thing to do. And though I love and respect Apple's OS X for its many fine features, Finder is a dreadful bug-ridden horror not deserving of emulation.
So in summary, if you're in Kubuntu (or maybe KDE on any system) and can't see a bunch of folders you know exist, just rm .hidden and you'll be good to go.
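If you want to see what is being hidden before you delete anything, it is as simple as this (assuming the problem is at the root of the file system, as it was for me):

cat /.hidden
sudo rm /.hidden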
And then on to other news. The Nibble has been trying to set up Joomla for our portal site. We need some blogging ability, news feed aggregation, and some static pages for HOWTOs. We installed Joomla 1.5 beta, which has several bugs we ran into right away (the "poll" feature doesn't strip backslashes properly, for example), and it's a little too abstract/complex a system for our needs. We couldn't really figure out how to just do what we wanted (blog, aggregate, static pages), then theme it and be done. Time to move on. It seems like WordPress has all the features we need (and not much more) so we'll give that a try next.
Okay, back to work for me.
Friday, 17 November 2006
Bricking a Router
I bricked my router yesterday.
That's the short version. The long version? Well, it's a sordid story that starts many weeks ago when I decided to install OpenWRT on my Linksys WRT54G v3 router.
Initially, I was happy! The web interface for OpenWRT is a little useless and the documentation (and mbm on the forums) said that setting up port forwarding to work both outside *and* inside the LAN was "impossible" (it's not, iptables works great) so I quickly got used to using SSH for managing the router's setup and firewall rules.
Then came the day that comes up regularly for any good sys admin, that is, update day. I did my little "ipkg update ; ipkg upgrade" but it choked! The device ran out of disk space. Initially, I wasn't worried. I cleared up some space and tried to finish the update. It limped a couple of steps and then died. I couldn't even download the latest package list, much less install or update any software. But the router was running. Since it was still working, I left it as-is. I was worried it would die if it was power cycled, but I was prepared to leave it on indefinitely anyhow.
Then last night, after a few drinks, the missus and I decided to rearrange the living room. So we drunkenly pushed furniture around and moved movies, games, TV, stereo, etc. After the dust settled, I sat back down at my computer only to find I couldn't connect to the Internet. I could connect within the LAN, so the router was switching, but definitely not routing. I couldn't connect at all to the router and it seemed to be dropping packets when I did a simple ping to it. The router was bricked.
Well, like any heavy user of the Internet, I prayed to Google to show me the way. Google spoke to me and said:
"Go here, my child, to un-brick your router."
Bearing the Word of Google, I went there where I was assured that all I needed to unbrick my router was a working firmware for the router and "any other small pointy metal object". The firmware I downloaded from Linksys (enough fucking around with OpenWRT) and for a small pointy metal object, I nabbed a bobby pin and off I went.
I read through all the instructions twice, then I pulled my router apart, shorted pins 15 and 16 as instructed, TFTP'd the firmware onto the router, and voila! The router booted! I was amazed! Like Christ turning water into wine, I had turned brick into router!
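For the record, the TFTP step is nothing fancier than the usual recipe; something like this, where the firmware file name is whatever you downloaded and 192.168.1.1 is the router's default address:

tftp 192.168.1.1
tftp> binary
tftp> put firmware.bin
tftp> quit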
Rather than walk on water, I put the router back together, quickly configured it to a working state, and that was that.
The moral of the story is: Don't brick your router. Restoring your router may make you partial to religion temporarily.
Tuesday, 31 October 2006
Rocking Out
Woohoo! I finally got my computer hooked up to the stereo. Since my computer is nearish to the stereo, all I needed was a little Y connector and *bam*, we're cooking! The sound is so much better, it's awesome. And these speakers aren't even anything to write home about, they're just better than the computer speakers. The trick is something to do with magnetism, I don't know; stereo speakers just aren't supposed to be right next to the computer because they'll mess everything up, but we should be cool as is.
Rambling, enough. Computer to stereo is awesome! Woo!
Good night!
Thursday, 26 October 2006
What Doesn't Konqueror Do?
Thanks to a random post on the blogosphere (here), I've found yet another killer feature in Konqueror. Behold, split-view:
That's right, you can split a Konqueror tab, open different pages in each split, split them all horizontally or vertically, and good fun is had by all.
In taking the above screenshot, I discovered another handy little utility. Ksnapshot will pop up when you hit Print Screen and then you have various handy options, like taking many screenshots, or, more importantly, taking a screenshot of a selected window, like when I did one of Konqueror above or one of Ksnapshot itself.
Now aren't those two handy features? Why learn one thing every day when you can use KDE and learn two!
Tuesday, 17 October 2006
XMPP with iChat Server
One of the nice things that OS X Server can run is an iChat server. The server is jabberd 1.4, which is a nice XMPP server. I finally took a couple of minutes at work to enable the service and it was fairly easy:
- Add appropriate SRV records to DNS,
- Punch a hole in the firewall,
- Add the domain to the service configuration,
- Uncomment the S2S lines in /etc/jabber/jabber.xml,
- Start the service.
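The SRV records are the standard XMPP ones; with example.com standing in for the real domain and chat.example.com for the server, they look roughly like this:

_xmpp-client._tcp.example.com. 86400 IN SRV 0 0 5222 chat.example.com.
_xmpp-server._tcp.example.com. 86400 IN SRV 0 0 5269 chat.example.com.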
That was it! Since all the users are already in the directory and the iChat server is tied into the directory, everyone has immediate access.
The thing with the s2s stuff was kind of ridiculous. Without going into the jabber.xml file, the iChat server just runs the client port, allowing only directly connected clients to chat. The server admin UI did not include an option to enable the server-to-server connections so that inter-domain chats can take place. But it was fairly simple to enable, so not a big deal really.
In summary: XMPP is good for you!
Friday, 6 October 2006
Mail Stats
To date, I have been using the reports from logwatch as a rough gauge of how much spam is being blocked, but it wasn't very accurate since each email message was being processed by Postfix many times as it handed the message to various other daemons for processing. A quick check of the logs reveals that since basically all filtering of email is done by Amavisd, that's the process whose messages are of value. The one remaining trick is that I also have to look for SMTP rejections, since those are useful stats but cover messages handled only by Postfix and never passed to Amavisd.
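The counting itself is nothing more exotic than grepping the mail log. A rough sketch (the log path is the Debian default; the exact patterns depend on your logging):

grep -c 'amavis.*Passed' /var/log/mail.log
grep -c 'amavis.*Blocked' /var/log/mail.log
grep -c 'reject: RCPT' /var/log/mail.log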
I whipped up a little web page here to do basic number stats. I'd like to have charts up, but I don't know how to pump this into MRTG so that will have to wait. At any rate, this little page will give us an idea of how much spam is getting tossed out and how much legit mail is getting through.
Cheers
Monday, 25 September 2006
SSH as a SOCKS proxy revisited
A while ago I had tried to use SSH as a web proxy and didn't have any success in getting it to actually work. I was worried that maybe this was a NAT-to-NAT complication or something of the sort but no, it was just a client configuration problem. It turns out the trick is to set the SOCKS settings in Firefox rather than the HTTP proxy. Firefox will choose the HTTP proxy first, or maybe I just don't understand the difference, but here's how it works for me:
ssh -D port host
e.g.
ssh -D 3125 siona.nibble.bz
Then configure Firefox (Edit - Preferences - Connection Settings) with a SOCKS proxy of localhost, port 3125.
The best test is to go to What Is My IP? and refresh the page with the proxy disabled/enabled and verify the IP address changes.
Now for shits and giggles, you can use -f and -N with SSH to background the ssh process (the -f) without running any remote command (the -N) like this:
ssh -fN -D 3125 siona.nibble.bz
This will leave your SOCKS proxy in the background so you can close your terminal and still surf through the proxy.
Hooray for bypassing crappy firewalls and HTTP proxies!
History Meme
14:30:42 % history 1|awk '{print $2}'|awk 'BEGIN {FS="|"} {print $1}'|sort|uniq -c|sort -nr|head -10
394 ssh
173 ls
138 cd
93 gpg
60 sudo
58 host
56 vim
48 killall
43 date
39 man
Apparently I like to shell around killing programs. Exciting? Maybe not...
Updates for SASL
The latest updates for Siona included some new SASL packages. These stomped some of the totally garbled config items from my setup for SMTP-AUTH, but in the end there was only one change I had to make: add the postfix user to the sasl group. That was it. Auth started working again as soon as I kicked Postfix.
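In Debian terms, that amounts to roughly:

adduser postfix sasl
/etc/init.d/postfix restart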
Woo!
Monday, 18 September 2006
SMTP-AUTH With Postfix
I finally got authenticated SMTP working. So in addition to accepting mail for domains for which Siona is a relay and relaying mail from the local network, authenticated remote or mobile users can now relay mail as well. My criteria for this solution were that the authentication would be a) compatible/integrated with PAM, b) secure, and c) not some hokey POP-before-SMTP setup. So the solution is to have Postfix authenticate against Cyrus saslauthd and then have saslauthd authenticate with PAM (and PAM with LDAP, but that's just details).
Siona is running Debian Sarge with Postfix 2.2. So it turns out that in addition to the installed postfix package, I need sasl2-bin which includes the essential saslauthd and the useful testsaslauthd.
After installing saslauthd, it must first be enabled. The default file /etc/default/saslauthd should read:
START=yes
PARAMS="-m /var/spool/postfix/var/run/saslauthd"
MECHANISMS="pam"
You will need to create the directory listed above. A simple "mkdir -p" will do the trick.
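Assuming the same PARAMS path as above, that is:

mkdir -p /var/spool/postfix/var/run/saslauthd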
Note: One page on the Internet recommended a PARAMS line which included a "-r". This will not work; it will cause all auth attempts to mysteriously fail since saslauthd then tries to combine the local part and realm.
Note 2: Online I also saw references saying that you should change the saslauthd init script to move the PID file into Postfix's chroot. This is also a lie. It won't make anything fail; it's just not needed.
So that's it for saslauthd. If you run testsaslauthd like this, you should get an "OK" for any existing user account:
$ testsaslauthd -u username -p password -f /var/spool/postfix/var/run/saslauthd/mux
0: OK
Now on to Postfix. First, let's finish up the SASL parameters. In /etc/postfix, there should be a sasl folder. If not, create it. In there you create smtpd.conf, which will contain the parameters for smtpd to actually use saslauthd; there are actually just two lines:
pwcheck_method: saslauthd
mech_list: plain
The first line is, you guessed it, the authentication method, which in our case is saslauthd. The second line just says which authentication methods are supported. For fancy (e.g. secure) methods like CRAM-MD5 and such, you need a separate password database. We don't have that, so we're limited to the standard PLAIN and Microsoft's LOGIN. Since I don't need to support old Microsoft clients (though others might, so watch out for this), I only enable PLAIN. Other fun types include OTP, which provides a fancy one-time-password exchange. Very neat, but that's for another day.
The one thing to keep in mind is that this is a config file for Postfix, not saslauthd, so if you make changes to this file, it's Postfix that has to be restarted.
Note 3: Those are the only two lines you need. Anything else, like the socket path, is frivolous with our setup.
Note 4: There are standard locations to put the smtpd.conf, like /usr/lib/sasl2 in Debian, or /usr/local/lib/sasl2 when SASL is built with default paths. Some of these will work and others won't. In my case, /etc/postfix/sasl worked as expected.
Now that we've written/uncommented a whopping 4 lines, let's get into Postfix's main.cf and do the last 4 lines. In main.cf, we want to a) enable sasl, b) permit sasl authenticated senders, and c) force TLS since our only auth method is "PLAIN".
In main.cf, add the following lines:
smtpd_sasl_local_domain = $mydomain
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_recipient_access hash:/etc/postfix/role_exceptions
smtpd_sasl_authenticated_header = yes
smtpd_tls_auth_only = yes
For smtpd_sasl_local_domain: if your site is set up anywhere near sanely, then your domain is going to be your realm. Since we are doing PAM authentication, I'm not even sure this is required; I haven't tested removing it, so I've left it in.
The following line enables SASL, big surprise there ;)
In smtpd_recipient_restrictions, I have added the line permitting SASL-authenticated connections. Make sure this line comes after your "permit_mynetworks" line. Failing SASL is going to get your connection booted if you order it higher (last time I tested, not recently, I admit).
The SASL authenticated header: I'm not exactly clear what this does, but it is "no" by default and most online documentation says you need to set it to "yes".
Lastly, the TLS auth only line requires TLS before AUTH is permitted. This will force all your users to enable TLS to protect their passwords (and subsequently their messages) in transit.
And now just kick Postfix and you've got authenticated SMTP enabled for remote clients! You may have noticed that I have many rambling notes in this post. As you may infer, I got pretty frustrated in the end with instructions that were ambiguous, useless, or just plain wrong. Some of the stuff, like the -r for saslauthd, I don't know how those users even got their systems to work... Ah well, c'est la vie.
Early Move
Although I had planned to move at the end of the month, my Internet service went out last week, so I have moved siona a bit ahead of schedule. The Internet is still out, so siona is at a friend's place (hooray), which meant going through all the pains that go along with changing the IP address of the primary name server. Hooray. We're still trying to convince our secondary DNS service to figure out what's going on. It's being a bit stubborn.
Nevertheless, everything has been moved, and everything will have to be moved again once I move into the new house and move siona there too. Definitely looking forward to it like I look forward to getting hit in the face.
Thursday, 31 August 2006
Bring on the Pr0n
A little while ago I was thinking about transcoding a bunch of my music from Ogg to MP3 so I could burn the music to CD for my MP3 CD player. Now I'm no stickler for audio quality, but clearly transcoding from one lossy format to another is going to make things worse than necessary. Given I still have some desire to have music available both streamed and on MP3 CD, I decided to start re-encoding my music collection to a lossless audio format, specifically FLAC.
Ripping to FLAC was going well initially, but there is one distinct downside to using a lossless format: the disk space requirements. My music collection will be jumping from ~15GB up to 100GB. After some messing around with my volumes on Siona, I could not come up with enough free space without destroying a lot of data. Well, I destroyed a bunch of data for good measure (and because it is important to test that you can restore from backups) and then I bought a new drive.
So the new drive is a spanking new 320GB Seagate drive. I slapped that bad-boy in there and formatted it with XFS. So far so good. I've moved all the existing data from the old WD 80GB drive on there and I've been ripping CDs merrily.
To assist with the task, I even created a little shell script that gives me a list of every CD already on the system and whether the CD is FLAC or not. Even lumps the Various Artist discs together. I'm almost up to a third of the CD collection (between both the wendigo's and mine).
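The script is nothing clever; here is a rough sketch of the idea (assuming a /data/music/Artist/Album layout, not my literal script):

#!/bin/sh
# For each album folder, report whether it already contains FLAC files.
for album in /data/music/*/*/ ; do
    if ls "$album"*.flac >/dev/null 2>&1; then
        echo "FLAC  $album"
    else
        echo "todo  $album"
    fi
done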
Friday, 25 August 2006
The Fight Against Spam
The previous post was about my experience dabbling with greylisting, but what I really rely on for mail filtering is Amavis, SpamAssassin, ClamAV, and Procmail. Each tool has its place and does its job quite well. Except Procmail, but that's just a backwards old mail processing/delivery agent that uses syntax far too arcane for mere mortals... But that's just my griping for no good reason, it does the job.
Anyhow, the setup here is that Postfix hands all messages to Amavis for inspection. Amavis can run a message through any number of spam or virus checking programs, SpamAssassin and ClamAV in our case, and any of these programs can approve a message, mark it in some way, move it to a quarantine, or reject it flat-out. Basically, Amavis is a front-end for these types of filtering applications. It's very versatile, Postfix works well with it, so it works.
The simple one is ClamAV. It runs a message through its virus definitions. If it finds a match, it quarantines the message and then sends a notice to the user saying what happened. Easy. ClamAV actually picks up many of the phishing scams. It works great.
The other fun one is SpamAssassin (SA) which reads through a message and assigns it a score depending on many factors like whether the message headers appear corrupt. Low score means probably not spam, high score means probably spam. If it's a high score, SA can modify the message or discard it. The levels at which it takes "evasive action" are configurable so this has taken some tuning before I was really happy with the results.
So the setup here is that SA actually modifies all messages, adding an X-Spam-Score: nn header to each. This way, I can see that the last message I got from my roommate was scored -3.787, for example.
At a score of 4.0, SA adds an additional header that reads X-Spam: Yes and also adds ***SPAM*** to the subject line of the message. This is where Procmail comes in. I have Procmail configured on my account to automatically move any messages with X-Spam: Yes to my Junk folder and out of my inbox. I have found some false positives, specifically my logwatch notices from Siona, which can sometimes score above 4.0, so at this level SA is configured to still deliver messages to the user.
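The Procmail rule for that is about as simple as Procmail gets; a sketch assuming mbox-style folders (a Maildir setup would deliver to Junk/ instead):

:0:
* ^X-Spam: Yes
Junk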
Above 6.0, however, I have seen no false-positives so SA is configured to reject these messages. Above 10.0, SA will not even bother sending a delivery status notification (DSN) to the source of the email on the assumption that the source email address is spoofed and the sending mail server is just a zombie host.
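In amavisd-new terms, those thresholds map onto settings roughly like these (a sketch, not my literal amavisd.conf):

$sa_tag_level_deflt  = undef;  # add X-Spam-Score to every message
$sa_tag2_level_deflt = 4.0;    # add X-Spam: Yes and tag the subject
$sa_kill_level_deflt = 6.0;    # reject the message outright
$sa_dsn_cutoff_level = 10;     # above this, don't even send a DSN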
This has been working out very well. My guess is that 90% of spam is scored over 6.0 and is outright discarded. Of the remaining 10% of spam, I would guess 80-90% of that is scored above 4.0 and is getting tagged as spam but delivered. A very small number of legit messages are scored between 4.0 and 6.0 (less than 1%), and no legit messages are scored above 6.0. All in all, this leaves a couple percent of the spam messages that are delivered to the users.
This squad of applications works quite well together for fighting spam. I only have a couple of problems. The setup is a bit confusing out of the gate, but definitely doable, and there were a lot of tutorials online for this setup. And the other problem is that we should not be seeing this much spam anyhow. Mail servers should be authenticating users that wish to relay, and message authenticity should be verifiable. If Joe Smith at example.com gets a message from my domain, the example.com mail server should be able to verify that someone at this domain actually sent the message. As it is, it is trivial to forge a source email address, and SPF is not the solution. We need something universal, unlike SenderID, or else what's the point? The closest is the domain signing that Yahoo (?) came up with (which Google actually uses for their mail service), but it is still not universal. Someday we will live with not much spam, someday...
Thursday, 3 August 2006
Greylisting
So I tried a spam-deterring technique known as "greylisting". Basically, you tell your mail server that any message from an unknown source should be met with a temporary error, under the expectation that very few spam agents will resend the message later, whereas legitimate mail servers will.
Ok, interesting. This can be a highly effective technique, likely to deter 99% of spam. However, it involves making your mail server respond with an error by default.
What happens is that each incoming message is first checked by the greylisting service (I tried Postgrey). The greylist service checks "has this mail server tried to send a message from Alexa to Bree before?" and if not, then a temporary 450 "mailbox unavailable" error is sent. The sending server will then queue Alexa's message for later transmission. When it is retransmitted, the greylist service says "ah, I recognize this, so it is probably not spam" and allows the message from Alexa to Bree to pass through, along with subsequent messages from that source.
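Hooking Postgrey into Postfix is just a policy-service entry in the recipient restrictions; roughly like this (10023 being Postgrey's usual port on Debian, and this is a sketch rather than my exact list):

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023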
The upshot is that it is a highly effective method of detecting spam; however, when messages are queued for delivery "later", that turns out to be anywhere from one to four hours later. For people that are sending lots of messages back and forth, this is not a big problem since the greylist tracks who has successfully sent messages in the past, but for arbitrary exchanges, like, say, a customer sending a message to sales or tech support, greylisting really slows down mail delivery.
All-in-all, it's a pretty drastic anti-spam measure. I ended up disabling it once I realized just how long mailservers will arbitrarily queue mail.
Thursday, 27 July 2006
Syndication is here!
I am just finishing up a couple of issues reported by Feed Validator; otherwise, we are in business. Atom (and the various RSSes) is pretty easy to generate. You just have to be careful. And since all my current data was already in a database, generating the feed every time there's a change to the news feed is pretty much trivial.
Let's Syndicate!
Well, it's finally come time. Out with the .plan file, in with the .xml file! Out with finger, in with Atom!
I am currently working on building and testing my Atom feed. Atom seems to be the most robust of the syndication formats and only slightly "trickier" than RSS 2.0 or 0.91, so I'm trying it.
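For the curious, a bare-bones Atom feed (made-up URLs and IDs here) is not much more than this:

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <link href="http://example.org/"/>
  <updated>2006-07-27T12:00:00Z</updated>
  <author><name>Example Author</name></author>
  <id>urn:uuid:60a76c80-d399-11d9-b93c-0003939e0af6</id>
  <entry>
    <title>First post</title>
    <link href="http://example.org/2006/07/first-post"/>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2006-07-27T12:00:00Z</updated>
    <summary>Hello, syndication.</summary>
  </entry>
</feed>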
Sunday, 9 July 2006
Bad update kills authentication
Well, Debian has finally punched me in the jaw. After the last few years of being so good to me, apt-get upgrade has finally broken something painful. I went to run the update today and there was a new libnss-ldap package in there which has a bug in it. So, once I updated to the "latest", no auth worked. Everything was broken. Everything.
After some dicking around, I finally checked and, sure enough, I still had all my old package files, so I rolled libnss-ldap back to the previous version and the system is back up.
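The rollback itself is just dpkg pointed at the cached package; the version string below is a placeholder for whatever older copy is sitting in the apt cache:

ls /var/cache/apt/archives/libnss-ldap_*
dpkg -i /var/cache/apt/archives/libnss-ldap_OLDVERSION_i386.deb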
Now, yes, I am running "testing" and not stable, but the new libnss-ldap does not work. What the heck is tested about that? That sounds like the package maintainer didn't even try it before it got moved to testing from unstable. Argh!
Well, them's the breaks. We're back up and there are a couple of quirks to resolve, but otherwise we're okay. Life goes on...
Thursday, 6 July 2006
Crazy Media Conversion!
There is a hella tight mod for KDE called "audiokonverter" which adds a context menu in Konqueror to, you guessed it, konvert audio! Get it, "konvert" is "convert" only with a K! Hahahhahaha! Those KDE kids, they're so kfunny!
Okay, shenanigans aside, it's nice. It descends recursively (since we all keep our music in properly structured folders), prompts for a destination folder (where it recreates the same folder structure), and then asks for the appropriate quality level (e.g. 1-10 for Oggs but bitrate for MP3s). Works great! It requires the encode/decode utilities for whatever formats you want, and it even handles the tags properly (at least for Ogg and MP3, which I've tested).
Audiokonverter! It's your daddy!
Monday, 3 July 2006
IM, Just Another Unix Service
My latest adventure has been in trying to get Jive Wildfire to authenticate against PAM instead of LDAP directly. This is a more manageable authentication scheme for two reasons (which any sys admin should know already):
1. The (XMPP) service does not have to be reconfigured for different environments (e.g. passwd files vs Kerberos vs LDAP), and
2. No separate password database.
My concern re: #1 is that there is some interest in running an Instant Messaging (IM) service at work but little support for getting a kerberized service. The server-side (Jive Wildfire) support for it is tentative at best and client support (e.g. ticketed authentication) is nonexistent, so why bother? I'd have to set up key entries for the service to talk to the KDC and basically do a lot of work, whereas the Pluggable Authentication Module (PAM) system is there and ready. Heck, even if the server-side support were better, the complete lack of GSSAPI/Kerberos support on the client side is a killer.
So on to #2, which is the main reason for doing any of this at all. Managing separate password databases means system users have to have a login and password for every service and manage their passwords separately. Since this doesn't happen (with normal users), everyone just uses the same password for everything and they use the same password for years and years. It's bad enough that the sys admins do the same thing for core services, but that's a whole other issue.
Anyhow, I had initially set up Jive Wildfire (then Jive Messenger) to authenticate directly against my LDAP directory specifically to avoid the problems with #2. It worked well. But now, faced with managing an IM server for work, I have the same two problems. Maintaining a separate password database is going to mean zero user buy-in, but if I can say "just use your IRMACS password" then it won't be a problem. So that gets us back to how to set up the auth to integrate with the Kerberos system. The answer is "I dunno" or PAM.
So, PAM, you little bastard, what's the deal? It turns out Jive Wildfire doesn't have PAM support either natively or as a proper plugin (i.e. loadable at runtime), so what we need is a Java module called Shaj. Put the Java library (the .jar file) and the platform-specific library (the .so file) into wildfire/lib. Then configure Jive to use "Native Authentication":
<provider>
<auth>
<className>org.jivesoftware.wildfire.auth.NativeAuthProvider</className>
</auth>
<user>
<className>org.jivesoftware.wildfire.user.NativeUserProvider</className>
</user>
</provider>
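For completeness: Shaj just hands the credentials to whatever PAM stack applies, so that stack has to exist and make sense on the box. A minimal sketch of one, assuming pam_krb5 is installed; the service file name here is my assumption (check which PAM service Shaj actually consults, it may just fall through to /etc/pam.d/other):
# /etc/pam.d/xmpp -- hypothetical service name
auth     sufficient pam_krb5.so
auth     required   pam_unix.so try_first_pass
account  required   pam_unix.so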
And voila! Right? WRONG! I couldn't log in, my client kept crashing, this "Shaj" thing was screwed. I went back and forth, back and forth, just to get to the point where I knew what to do (above), but it just didn't work. Something was odd since I was finally able to log in to the admin interface but still nothing on the client.
It turns out that, naturally, after all that flailing around in the dark, my client had finally bitten the dust. Something just got fubar between the removing and adding of accounts and the server going up and down and up and down... After I stomped all my Kopete config files, it worked great! Auth against PAM! Beauty!
So now I still have to test one last feature... Can I change my system password through my IM client? Does that just change my IM password (i.e. is it stored by Wildfire) or does it change my system password? Does it work at all? What about if I change my system password, is that reflected immediately in the IM login?
All I know is that "it should work (tm)"
Monday, 5 June 2006
An old argument in favour of openness rather than obscurity
A commercial, and in some respects a social doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery.
Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock, let it have been made in whatever country, or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance.
It cannot be too earnestly urged that an acquaintance with real facts will, in the end, be better for all parties. Some time ago, when the reading public was alarmed at being told how London milk is adulterated, timid persons deprecated the exposure, on the plea that it would give instructions in the art of adulterating milk; a vain fear, milkmen knew all about it before, whether they practiced it or not; and the exposure only taught purchasers the necessity of a little scrutiny and caution, leaving them to obey this necessity or not, as they pleased.
-- From A.C. Hobbs (Charles Tomlinson, ed.), Locks and Safes: The Construction of Locks. Published by Virtue & Co., London, 1853 (revised 1868).
Thursday, 25 May 2006
How time flies...
I've been keeping busy with my project over the last month so not too much tickin' over here per se. The trick of the day is to use SSH for a SOCKS(5) proxy as per the comments here. The article is about using PuTTY to SSH to a server and open an encrypted proxy connection. As the peanut gallery says, this is even easier with ssh:
ssh -D1080 host.example.com
There, done. Tell your browser you have a SOCKS proxy at localhost:1080 and there you go! Right?
Well apparently I'm missing something cause it ain't working yet. I'll get there though, that's my goal. I'll get there...
/me mutters something about browsers
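In the meantime, for whenever it does start behaving, the same forward can live in ~/.ssh/config so it comes up with every connection; a minimal sketch (the Host alias and hostname are just placeholders):
# ~/.ssh/config
Host proxy
    HostName host.example.com
    DynamicForward 1080
Then "ssh proxy" brings up the tunnel and the browser still just points at localhost:1080.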
Wednesday, 26 April 2006
What Time is It?
So for a while there I was getting really confused. I kept getting the output from daily cronjobs on Chevette mailed to me in the afternoon but with a timestamp of the middle of the night. Lo and behold, Chevette's clock was just plain wrong.
So naturally, I needed to get me some of that NTP that the kids keep talking about. It turns out the folks over at http://www.pool.ntp.org/ are trying to organize a large number of NTP servers from volunteers. It's a really cool project and worth taking a look at since they discuss specific usage issues and how to best configure your client to update time from the 'net.
So initially I set up NTP on chevette and then I realized that neither siona nor friday were synching time so I did those too. After a little bit of reading on the above website as well as the ISC's site, I ended up deciding to run my own NTP server on the LAN. Okay, not really exciting, but overall, the setup is really easy and definitely cool. Siona updates against the ca.pool.ntp.org servers and then chevette and friday both update against her. Kinda neat to have that working finally :D
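For reference, the configs amount to a couple of lines each; a sketch using my LAN hostnames (swap in your own):
# /etc/ntp.conf on siona -- pull time from the Canadian pool
server 0.ca.pool.ntp.org iburst
server 1.ca.pool.ntp.org iburst
server 2.ca.pool.ntp.org iburst
# /etc/ntp.conf on chevette and friday -- pull time from siona
server siona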
Okay, back to work and my ever-growing TODO list.
Wednesday, 12 April 2006
Chevette is Loaded and She's Online Live!
Hot hot hot! Chevette is online. Took an afternoon of downloading packages and updates and about an hour of PAM/LDAP configuration and she's live and running the Icecast stream. The stream was running off Friday before but given that's the only usable computer in the house, that was rapidly getting dysfunctional.
On the fun side, the single ices feed including transcoding takes ~30-33% of her CPU. w007! She's already loaded!
In other news, I changed the google search shortcut in Konqueror (which can be done in Firefox too) from the boring "gg" to "grep". That's right, I can now grep the 'net. Who's your daddy? Or should I say:
grep: your daddy
Google returned 1 hit: archangel
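For the curious, Konqueror's web shortcuts are just little .desktop files under the KDE services/searchproviders directory, so the change amounts to adding a keyword to the Google entry. Roughly like this, though the exact field names and path are from memory and should be treated as assumptions:
# google.desktop (excerpt)
Keys=grep,gg,google
Query=http://www.google.com/search?q=\\{@}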
Thursday, 6 April 2006
Adding a test server and some rantin'
After rebuilding Siona a couple weeks ago, I went out and (finally) bought a battery backup. I got the smallest APC UPS I could get from Futureshop, which cost 40CDN. I brought it home, hooked it up, and installed "apcupsd" on Siona. It's pretty bitchin. With apcupsd, I set the tolerances, e.g. if the battery level drops below 10% or less than 3 minutes of runtime remain, the UPS notifies apcupsd that power is failing so apcupsd can shut down the server.
The really fun part is that I already got to "test" the power failure events. I had set the threshold to 10 minutes, but the estimated battery life, and that's just with Siona on it mind you, is only 8.4 minutes total. The thing with the UPS is that if there are any "significant" fluctuations in the power from the line, it cuts to battery mode. These dirty power fluctuations happen, oh, every other day or so. So even though it only cuts to battery for a second and then restores normal power, my threshold was high enough that it just issued a shutdown at the first sign of trouble. So I've now confirmed proper operation of the UPS during a "power event" and I've "tuned" the parameters (back to the mfr default).
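The knobs in question are a few lines in /etc/apcupsd/apcupsd.conf; a sketch with the values mentioned above:
# shut down when the battery charge drops below 10%...
BATTERYLEVEL 10
# ...or when the estimated runtime drops below 3 minutes
MINUTES 3
# 0 = don't shut down just for having been on battery N seconds
TIMEOUT 0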
In other cool application news, I'm finding that AmaroK really rocks! It builds a database of your music from the song meta-data and has a really fabulous interface both for playing music (queues and playlists) and for modifying the metadata (e.g. you can update many songs at once). AmaroK is pretty damned good.
So for upcoming events, I'm thinking of finally putting Chevette to use again, but this time as just a test server. Her display is hopelessly foobar and upgrading RAM would also require more $$ than I want to invest in her (sadly). So I figure she'll make a good test server so I can set up stuff like a secondary mail server and test fail-over for mail delivery. Or inter-domain operation with the XMPP server. Or just try different applications. I plan to load her with Debian Sarge on the weekend and go from there.
Anyhow, back to work for me.
Sunday, 19 March 2006
Destroy system files? Don't mind if I do!
So last week I was in Ottawa. The day I left for Ottawa, there was a power failure at home. I was 3 megametres from being able to troubleshoot and wouldn't be back for the rest of the week. Just typical.
So after getting home and restoring power, we were still without service. I ran fsck on the drives and mostly they were okay... Except the root partition. After a more aggressive fsck, the partition mounted but lots of system stuff was missing. It was just borked.
So I pulled everything off the server (all day yesterday) and rebuilt the system (all day today). There's a couple tricks since my IP address changed during the downtime so it will be a bit before things are all back to normal but we're doing good otherwise.
The one good thing is that I got to switch the drives to an LVM scheme. I went with partitioning the primary drive into boot, root, swap, and var partitions (regular-style). The two 80GB drives got added to a volume group and I just divvied up the space between user space, share (media), project (including uro forums), temp, and unallocated space. It's pretty spiffy! I can grow any of the LVM partitions live!
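The mechanics are the usual LVM dance; roughly like this, with device names and sizes as illustrations rather than my exact layout:
# add the two 80GB drives to a volume group
pvcreate /dev/hdb1 /dev/hdc1
vgcreate vg0 /dev/hdb1 /dev/hdc1
# carve out a logical volume for, say, user space
lvcreate -L 40G -n home vg0
mkfs.ext3 /dev/vg0/home
# growing it later (ext2online or resize2fs, depending on your tools)
lvextend -L +10G /dev/vg0/home
resize2fs /dev/vg0/home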
Okay, enough chit-chat. Clearly apache, php, and mysql are working ;)
Tuesday, 21 February 2006
Time to Ubuntu
So after many weeks and months of my attempts to destroy the Debian/Sarge installation on Friday, I finally gave up. I had muddled and muddled until basically every other application was failing. Yay for me.
So on the weekend I dumped that install and started loading Ubuntu. It went well but very slowly. I'm still not sure why. It kind of had the feel of a bottleneck around drive reading/writing, but hdparm verified that the drives are in DMA mode and running pretty derned fast.
Nonetheless, the major applications (except UT2004) have been reinstalled including the music server. Channel Skyhook is back online! W007!
Enough chitter-chatter... Time to go back to getting things done. Know whattamsayin?
Tuesday, 31 January 2006
Some news old and new
Google Talk recently joined the open XMPP (Jabber) federation so their users can now message users on any of the other open networks, including the one I'm running (dl.nibble.bz). Bitchin! I dropped my Google Talk login and added all my contacts to my own login. So far, there have been no problems. Anyone with compatible clients will be able to do any of the (pseudo) peer-to-peer stuff like voice and video. That includes multi-platform clients like... Well, it's coming. Gaim has been working hard to introduce voice and video (vv) based on Google's technology and expects to be able to interoperate across platforms and domains very soon. We're all very excited at the possibility :D
In other "old" news, my luck has been holding with KDE and KDE PIM (Personal Information Management). I have been saving my calendar and contacts as files on a USB key and it works great! I just plug in the key, point KMail at the calendar and contact files on there and *bam* it works!
The other thing with mail is managing filters. Well it turns out KMail doesn't let me filter messages to IMAP folders (at least with the versions I have installed) so I ended up looking at Procmail which runs on the server side. A bit of poking around and it actually works really well! A little bit obtuse to be doing just text-based configs with yet another file format, but it's fairly simple and does the job and naturally no filters need "synching" on the client-side.
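A taste of what the recipes look like; the header match and folder name are made up for illustration, and the folder path depends on how the IMAP server lays out Maildir folders on disk:
# ~/.procmailrc
MAILDIR=$HOME/Maildir
DEFAULT=$MAILDIR/
# file list traffic into its own folder (Maildir++ style subfolder)
:0
* ^List-Id:.*debian-user
.lists.debian-user/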
The one last issue is that I want to maintain a unified list of syndicated feeds. I can manually export/import my feed list with Akregator (which is a good RSS/Atom client btw) but it's hardly unified. Oh well.
In other "old" news, my luck has been holding with KDE and KDE PIM (Personal Information Management). I have been saving my calendar and contacts as files on a USB key and it works great! I just plug in the key, point KMail at the calendar and contact files on there and *bam* it works!
The other thing with mail is managing filters. Well it turns out KMail doesn't let me filter messages to IMAP folders (at least with the versions I have installed) so I ended up looking at Procmail which runs on the server side. A bit of poking around and it actually works really well! A little bit obtuse to be doing just text-based configs with yet another file format, but it's fairly simple and does the job and naturally no filters need "synching" on the client-side.
The one last issue is that I want to maintain a unified list of syndicated feeds. I can manually export/import my feed list with aKgregator (which is a good RSS/Atom client btw) but it's hardly unified. Oh well.
Friday, 20 January 2006
Syslog: Good for info, good for DOS attacks
So after getting some odd complaints from the mail server this morning about "insufficient space", I had to take a closer look at Siona's var partition which was appearing to fluctuate by ~600MB per day. A quick du -sh * pointed to /var/log being the culprit.
So on first inspection, the mysql-bin logs were not being compressed. A gzip on the old logs there gave a couple hundred megs back. Not a world of difference, but a lot.
Next up, it turns out that syslog.0 and debug.0 were over 300MB each and today's syslog/debug were already over 10MB (having just been rotated). So I take a quick look and I see the usual chatter from various services... But it turns out that "the usual chatter" from slapd was 99.96% of the log. Other than enabling it for debugging, I've never found the stuff getting logged to be especially useful to have around, so I've disabled that. It was recording a half dozen lines every time PAM or NSS wanted user info (e.g. perpetually).
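Turning that chatter off is a one-liner in /etc/ldap/slapd.conf, followed by a slapd restart:
# don't log the routine connection/search chatter
loglevel 0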
In summary, the Squid cache (256MB) didn't make the top 3 disk usage problems under var. It was Syslog that was doing a DOS attack.
Wednesday, 11 January 2006
Gateways and KDE
I finally got around to installing protocol gateways on the Jabber server. It was far easier than I expected. I downloaded and ran PyMSN-t and PyICQ-t and that was that. A couple config file options needed to be set for the gateways, a couple on the Jabber server, and away we go. I was able to "discover" the gateways with Psi very easily and that was it. Pretty handy stuff.
The other adventure was migrating to KDE. After much poking and frustration with various services to get a functional integration of my address book and get a working calendar going, I ended up going with KDE. I'm not too sure how well this will go since I do have to give up on the cross-platform aspect, but we'll go with it for now. I might have to adopt some different storage method but KMail is very versatile in that regard so it may work well on the client side if nothing else.
I have been reluctant to adopt WebDAV as a storage method for the calendar and I don't want to go so far as installing a fully web-based groupware environment. Web-based stuff just seems to fall short on usability and security. Then again, maybe I just need to tighten up the web-access. Now I'm ranting... Must be time to finish my coffee.
Monday, 2 January 2006
Python is n337!
Ah, the festive season. Time for friends and family. Lots of that, not a lot of computer stuff in the last couple weeks. I have been doodling with Python a bit though. It sure does make strapping together scripts very easy. There is enough versatility in the language itself and in the plethora of available modules that many scripts are quite simple (i.e. short). In writing code to interact with a database and the local filesystem, I was able to test the scripts by executing the code, including exception handling, in an interactive session. 40 lines will go a long way in Python, which is really nice. And it is easy to make it legible, unlike you, Perl.
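A flavour of what I mean, as a sketch rather than the actual scripts (the table and paths are made up, and I've used sqlite here just to keep it self-contained; the real code talked to a different database):
import os
import sqlite3

def index_files(root, db_path):
    """Walk a directory tree and record each file's size in a table."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, bytes INTEGER)")
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # vanished or unreadable; skip it
            conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?)", (path, size))
    conn.commit()
    conn.close()

index_files(os.path.expanduser("~"), "/tmp/files.db")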
Enough about that, I'm going back to poking stuff with sticks.