I find it annoying not to have APG handy when I want to create a password. I also don't like using random websites for this because I can't trust that they aren't logging their output. So I put up a simple little form that uses PHP to invoke APG to create passwords. My form right now is very simple and doesn't support all of APG's options, but it will do:
https://alia.thenibble.org/passwords/
Here's the simple code I'm using:
https://docs.google.com/document/d/12CViQAEW6q2moZ4GtkJllsD0-8pazpVMfeYjlrBz-4s/pub
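The form boils down to a single apg invocation on the server. Here's a rough sketch of the shell side of it - not the actual code behind the form, and the /dev/urandom fallback is my own addition for machines without apg:

```shell
# Generate one 16-character password with apg (-n count, -m/-x min/max length);
# fall back to a crude /dev/urandom pipeline if apg isn't installed here.
if command -v apg >/dev/null 2>&1; then
    PW=$(apg -n 1 -m 16 -x 16)
else
    PW=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)
fi
echo "$PW"
```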
Wednesday, 17 July 2013
Monday, 15 July 2013
Use owncloud for RSS feeds
Sorry hosted services, I will be using my owncloud with the "news" plugin for my feeds.
Whipping Up a Quick Site
Recently I helped set up a quick proof-of-concept site for a community group. We decided to use Google Sites and were very well rewarded with the richness of what is readily available, once we got a bit used to how everything is put together.
I feel like there is almost no technical knowledge required; it just takes a bit of getting used to. Not much, but some. When you are first trying to lay out your site, you have to poke around for a bit to figure out which things need to be changed on the page, the page layout, the page template, or the site layout. I won't do a mock-up here, and the site we put together will get taken down too soon to be used as a reference, but as long as you have some idea of the layout you want, you can make it in Google Sites. Sketching it out on paper works just fine. There will be a couple of places where some HTML knowledge is useful - not necessary, but useful if you want something to align or look a very specific way.
Now for the cool stuff.
Site templates - there is a bounty of free site templates, everything from a generic Pokemon theme to a complete Strata community site with calendars, council meeting minutes, etc.
And if you can't readily find a site template that helps get you started, it is super easy to put a site together even from scratch. Managing the site layout and using page templates make building your site very fast. You get the look and feel you want for your pages together quickly so you can work on the content. A handy trick with your page templates is you can give a default "parent" page so you can create many pages in one section.
In your site layout, you can have a main navigation bar. You put the nav bar together manually, adding your main pages and sections. When you put sub-pages in the main nav bar, it makes nice little pull-down menus. It sure beats the stock left-hand nav section, which lists all your pages in alphabetical order - get rid of that ugly thing.
And then there's the widgets. The best integration on Google Sites is obviously Google's other products: Calendar and Docs. An event or other calendar can be dropped right in your site and then regular sharing rules apply.
One of the features of Google Docs (or is it Drive now?) that is particularly useful for your site is the "forms". You can create contact forms, polling data, and probably a lot more than I've seen in my quick tour.
The contact form is interesting: you can configure the form responses to send notifications when the form is filled out. This gives you a stock contact form so you don't have to put an email address on the site, and it only takes a minute or two to put up.
Any information you're gathering, like from a poll, comes with rich analytics out of the box. The form responses have full reports on response selection and trending, which you can publish or not as appropriate.
Lastly, I will mention that you can of course use your own domain for the site. Anonymous visitors to your site will see the custom domain. Users logged in to Sites will instead be redirected to sites.google.com/a/sitename/page/blah/blah/blah...
Super quick and easy.
Google Sites is "free" so remember "If you're not paying for the product, you are the product."
... And even if you pay for the product, you may still be the product.
</rant>
Saturday, 15 June 2013
Get your ownCloud
I've recently moved some of my "cloud" files to ownCloud. There are some files we all store in the cloud which we probably shouldn't: password files, financial information including taxes, and maybe some dirty laundry too. For these uses, you don't need "anywhere access"; you probably just need convenient access between your own computers, and the backups afforded by having the files stored on both your desktop and your laptop are probably enough.
OwnCloud is your own cloud: you install it on some station to be the file repository - a workstation, or an Ubuntu server, whatever you have. The stock package in Ubuntu 12.04 is the much older 3.0.x. You can use this with the older sync clients, but it doesn't have file versioning and other features of the newer ownCloud versions. You can add the ownCloud software repository from their site. I assume that setting up ownCloud on Windows is a bunch of double-clicking; that seemed like too much effort, so I didn't bother.
There isn't much configuration: once it's up, your first hit to the web page does the setup, which includes creating an administrator account, ... That's it. After logging in, there are more options, like requiring SSL if you want.
There are also a lot of plugins / apps for enabling calendar / contact sync, for integrating authentication with LDAP, external storage, and more than you can imagine.
Since you likely have a hard drive at least 500GB in size, if not multiple TB, this is automatically a much larger repository than you'll get without a monthly bill from cloud services. It's pretty cheap too: a new drive should be under $0.10/GB these days, whereas Dropbox costs about that much per GB per month.
Like any of the cloud services, you can access your files via the web interface.
There is a "sync client" for PCs and that includes Windows, Mac, and the major GNU/Linux distributions (CentOS/RHEL, Fedora, openSUSE, Ubuntu, Debian). There are also mobile clients but I haven't tried them.
There's ownCloud for you. You can totally replace the big corporate-run cloud services if you want, or complement them.
Thursday, 11 April 2013
Cron scheduling for first Sunday of the month
Everyone who uses cron is familiar with the job schedule form:
min hr day-of-month month day-of-week <command>
A problem with cron job scheduling arises if you want to schedule something, like backups or updates, for "the first Sunday of the month". The job spec "0 0 1-7 * Sun" will run every Sunday and also on every day from the 1st to the 7th of the month.
The way to work around this is to schedule the job for all of the possible days and then, as part of the command, check the date before running it. I've just seen what is The Best format for this:
0 9 1-7 * * [ "$(date '+\%a')" = "Sun" ] && <path>/script.sh
(Note: the % is escaped as \% because cron treats a bare % as a newline, and = is used instead of == since cron runs commands with /bin/sh.)
This solution comes from LinuxQuestions.org user kakapo in the post here:
http://www.linuxquestions.org/questions/linux-software-2/scheduling-a-cron-job-to-run-on-the-first-sunday-of-the-month-524720/#post4533619
Up until now I used a slightly different form of this using the day of the week in the cron job and then testing date %d to test the day of the month. But the above form is far clearer and easier to schedule jobs with.
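For the record, that older variant would look something like this (a sketch; the script path is hypothetical):

```shell
# Older approach: let cron restrict to Sundays (day-of-week 0), then test
# whether the day of the month is 1-7 in the command itself. In an actual
# crontab the % must be escaped as \%:
#   0 9 * * 0 [ "$(date '+\%d')" -le 7 ] && /path/to/script.sh
# The date test on its own, runnable in a plain shell:
DOM=$(date '+%d')
if [ "$DOM" -le 7 ]; then FIRST_WEEK=yes; else FIRST_WEEK=no; fi
echo "$FIRST_WEEK"
```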
So props to kakapo for sharing that form and until cron changes how the day-of-the-month and the day-of-the-week fields are used, this will be the best way to schedule a job on the first Sunday of the month.
Friday, 5 April 2013
Nagios and SensorProbe
Nagios rules all: it has great agents and plugins, and it also checks clusters. One of the things we got recently at #dayjob is a generator for emergency power. Well, how do you know if the generator is running, is healthy, etc.? Sure, there are routine checks that someone has to do - weekly, monthly, seasonally - but how do you know what's going on in real time?
Like much machinery, generators don't always have a nice web-based monitoring system or even an SNMP interface - they have "dry contacts". Dry contacts, for those of you who, like me, think a voltmeter is something used by parking enforcement, are a simple electrical circuit which is either "open" and not passing current or "closed" and passing current (usually at 5 volts). Usually it is something like a screw which you thread a sensor lead through.
Okay, so no IP interface. What do you use? Well, we're using these AKCP SensorProbe devices which come in a variety of shapes and sizes. For our generator we used the SensorProbe2DC to which you can connect two sensors supporting 5 dry contacts each. It's a little IP device you need to feed a data run and power to and then you screw in the leads from the sensor to the dry contacts on the generator - your installer can help with the latter.
The SensorProbe is a monitoring device so you can configure the alerts and view status from there. But the other thing is that it has an SNMP interface so you can now monitor your generator status from your regular network monitoring system, by which I mean of course Nagios. Good old check_snmp, tell it which OID is which, and you're off to the races!
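As a sketch, the Nagios side might look like the following. The OID, community string, and expected value here are placeholders - pull the real OIDs for your ports from the SensorProbe's MIB:

```
# Command: compare the value at an OID against an expected string (check_snmp -r)
define command {
    command_name  check_drycontact
    command_line  $USER1$/check_snmp -H $HOSTADDRESS$ -C public -o $ARG1$ -r $ARG2$
}

# Service: hypothetical OID for one dry-contact status on the SensorProbe
define service {
    use                  generic-service
    host_name            sensorprobe
    service_description  Generator Running
    check_command        check_drycontact!.1.3.6.1.4.1.3854.1.2.2.1.18.1.3.0!1
}
```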
So now we've got in Nagios a host (the SensorProbe) with service checks telling us whether the transfer switch is on mains or generator power, whether the generator is running, whether the generator is set to "auto start", and whether it is having any other problems.
AKCP makes various other SensorProbe devices. The SensorProbe8 has 8 ports to which you can connect various sensors - temperature and humidity, airflow, water detectors, etc. - or single-port dry contacts. If you look hard, you'll also find that your AC units and other equipment have dry contacts. Avid readers who crack their equipment manuals will also find that dry contacts can be used as output triggers, not just for receiving status. Once you have a lot of dry contacts, you can check out the SensorProbe8 x20 and x60, which come with a whole lot more dry contacts to check everything in your datacenter.
Mmm, generator power, yum.
Tuesday, 19 March 2013
Autofs and a couple tricks with NFS shares
Autofs is great, especially for NFS mounts between systems. Autofs will mount file systems on demand and then un-mount them again when they are not needed. This is an especially nice trick where servers are sharing NFS shares, compared to putting the shares in fstab, which mounts the file system at boot - potentially putting you in a deadlock where server A is waiting on a share from server B and server B is waiting on a share from server A... A common issue, for example, is mounting /home from a shared location: no server can boot until the home file server is up, and deadlocks are bad, um 'kay?
"yum install autofs" as needed, use your package manager of choice. Once you fire up the automount daemon, you can poke around in the configuration.
The first thing you'll want to try is enabling the net (or nfs) -hosts line in auto.master. This will create a multi-mount in /net where you can access shares via /net/<hostname>/<share path>.
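On most distributions this is a one-line change in /etc/auto.master (usually shipped commented out):

```
# /etc/auto.master -- the built-in hosts map; gives you /net/<hostname>/<share>
/net    -hosts
```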
That's really it. You can directly access files, for example in /net/homefs/data/home/username. Automount will automatically mount that share when it is accessed and remove it after it's been left idle.
What's it doing?
First, it will look up shares for you. When you do `ls /net/hostname`, it will return a list of shares available from the server "hostname". It's really that easy.
Any accessed share is automatically mounted. A shared folder, call it "work", will be mounted on the first user request for a file from that share, whether that's someone using 'ls' or writing a file to a specified path.
For convenience, my first tip is that you use symlinks to point to specific shares or locations. Rather than using the path "/net/workfs/export/work", do something like `ln -s /net/workfs/export/work $HOME/work`. Another example is for shared home directories, super common in UNIX environments. So link /home to /net/<server>/path/to/shared/home. It will bring up the shared home directories when a user tries to logon and will unmount the home share after the users are logged out.
This brings me to my next tip: the automatic timeout. Autofs will un-mount a share after an idle period - 10 minutes by default (depending on your platform). This may cause you some grief, for example if you have a job that runs every 10 minutes. You should adjust this timeout based on your needs (either up or down), but unless you're running something on a schedule every 10 minutes, you probably won't have much of a problem with this default.
You can also tune your NFS settings in your auto.master config file. The defaults should work for most systems but if you do have to do NFS tuning, you can do this with autofs either for all NFS mounts or you can create specific mounts as needed.
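Both knobs live in the autofs map configuration; the numbers below are illustrative, not recommendations:

```
# /etc/auto.master -- bump the idle timeout (in seconds) for the /net map
/net    -hosts    --timeout=1200

# NFS mount options can be set per map entry, e.g. in /etc/auto.misc:
# data    -fstype=nfs,rsize=32768,wsize=32768,hard    fileserver:/export/data
```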
The interesting thing with autofs is you don't have to use it just for network shares. You can use it for regular devices as well. One of the main things is that again, anything in automount will be mounted on demand and not at boot time. So if you have a large data volume, you can add it specifically to the auto.misc file instead of fstab.
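For example, a large local data volume can go in /etc/auto.misc (device name hypothetical) and be mounted on demand under the stock /misc mount point instead of at boot:

```
# /etc/auto.misc -- mount /dev/sdb1 on demand at /misc/data instead of via fstab
data    -fstype=ext4    :/dev/sdb1
```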
From time to time autofs can get a bit mucked up - usually when "you screw around too much". If stopping autofs and unmounting any left-over shares doesn't work, remember "umount -f -l", which is a forced lazy unmount. Very useful. If the folders don't go away, like /net/server/path/to/share, try doing umount on them and then you can remove them (rm -Rf) - just be careful, and weigh whether rebooting is more or less risky than an rm -Rf.