Sometimes you want to send an email as HTML from a script or from script output. There are a couple of ways to get the HTML page to the recipient: you can attach the HTML page, or you can set the Content-Type to HTML. In my case, we're looking at scraping a web page as a cron job and sending it to some recipient(s) - the mainstay is:
curl 'http://server/web/page.html'
Attaching an HTML file is safe, but recipients may not like opening attachments. The security risk of HTML email is essentially the same, but there's a perception that "opening attachments is bad", and that's a caution I wouldn't discourage as a general practice. Rambling aside, "uuencode" will encode an attachment, any attachment, and can be used in general (word doc, zip file, etc).
curl 'http://server/web/page.html' | uuencode attachmentname.html | mail -s "An HTML attachment" some-recipient@example.com
The other way is to set the content type to HTML and the HTML becomes the body of the email. On some operating systems, namely Debian and Ubuntu, the mail / mailx command can add a header with the -a switch. This is pretty simple.
curl 'http://server/web/page.html' | mail -a "Content-Type: text/html" -s "An HTML email" some-recipient@example.com
However, if you are on a Red Hat / Fedora / CentOS system, your mail command does not support the -a switch. Here you can use mutt instead, and the mutt method works anywhere mutt is installed.
curl 'http://server/web/page.html' | mutt -e "my_hdr Content-Type:text/html" -e "set charset=\"utf-8\"" -s "An HTML email" some-recipient@example.com
There you have it. Personally, I'm not a fan of HTML email (since it opens the door to a lot of malware attacks), but if you've got to generate HTML email, using standard tools is going to be much simpler than writing your own perl script to wrap the scraping of a web page and generate an email.
Thanks to the "telinit q" blog for helping with this answer. http://blog.telinitq.com/?p=137
Wednesday, 27 February 2013
Tuesday, 19 February 2013
Blocking applications with AppLocker
I've just been in a situation where there was a particular user whom we wanted to give some access to but needed to limit their general access which in Windows 7 and Windows Server 2008 R2 you can do with "AppLocker" in a very clear way. AppLocker sets rules that look much like firewall rules allowing or denying access to run different programs and this can be controlled either locally or through Group Policy Objects.
For example, you have a consultant helping you with your new ERP system (just saying). They need to launch the ERP application but you really don't want them firing up a browser or the RDP client and checking things out on your network.
Getting started with AppLocker is pretty simple:
- Launch the local group policy management tool
- Enable auditing only initially for exe/dll control
- Create the default rules to allow basic or general access (if applicable)
Then you want to create your specific allow / deny rules. An AppLocker policy is a collection of rules, each specifying whether it is an allow or deny rule, who it applies to, what type of matching it uses (path or publisher), and the actual match. So you might have a rule like
- Allow
- Consultants
- Path
- Program Files\ERP\bin\*
If the consultant only matches this one rule, they will be allowed to launch binaries in the ERP's installation path and will be blocked from anything else.
The first thing to do is set your rules to audit-only, which creates event logs for all access that is controlled by AppLocker. You can test out your rules very easily this way, as there will be two types of events to look for: "access granted" and "access granted BUT AppLocker rules will block this when set to enforcing". Once you are satisfied that you are not going to cut off all access for your regular users and that you are locking down the consultant sufficiently, switch to enforcing and you're golden.
Or for another example, maybe you just want to block an out of date version of Acrobat Reader from running on your network. You can set a rule to deny "Acrobat" publisher's "Acrobat Reader" program from running "9.0 or older". Again, easy to test using "audit only" before setting enforcing.
Looks like a simplified apparmor or selinux, maybe? Honestly, I never made much progress with selinux. I would figure out how to get something working, then wouldn't use it for a while, forget how to work with selinux, and have to start all over again.
Friday, 7 December 2012
End of Free Google Apps
It's official - there's no more free Google Apps. Google started by limiting free GA accounts from 50 users to 10 and now to 0. "thenibble.org" was registered in the magical times of 50 free users per domain and has been grandfathered in, but there will be no new free GA accounts. Domain aliases and other features can still be added to existing accounts, however I suspect it's only a matter of time before Google stops porting "Enterprise" features into the free service. Fingers crossed!
Official Google Enterprise Blog: Changes to Google Apps for businesses: Posted by Clay Bavor, Director of Product Management, Google Apps Google Apps started with the simple idea that Gmail could help business...
Sunday, 4 November 2012
Simple Shared Storage
Many cluster applications require, or significantly benefit from, shared storage - so what is the easiest way to do shared storage? Whether you want High Availability in VMware or distributed computing on a Beowulf cluster, sharing storage across nodes is useful if not essential.
The first question you need to ask is whether you can do direct connect or not. Directly connected storage is going to be simpler and more reliable but limits the number of connected nodes and the distance from the nodes to the storage. There's SCSI or SAS storage, which is going to be fast and reliable. Dell sells an array, the PowerVault MD3000/3200 series, to which you can connect up to 4 hosts on each of 2 controllers - so either 4 hosts with redundant connections or 8 hosts with non-redundant connections. The limit of a SAS cable is 10m (steadfast.net), so if you need longer runs, you start looking at Fibre Channel or Ethernet connections.
Fibre Channel is well established and with link speeds at 4Gbps and 8Gbps these days it's fast. But it's expensive and suffers from the complexity issue that unless the stars align and you sacrifice a pig, nothing will work.
iSCSI is surprisingly popular. Encapsulating SCSI devices / commands inside an IP packet seems like a lot of overhead, and it really is. ATA over Ethernet (AoE) makes more sense as you're only going as far as layer 2 - what kind of mad person would run shared storage across separate subnets which need to be routed? It just seems like a big slap in the face for storage when you could use a regular file server protocol like NFS.
Ah, NFS, you've been around for a while. Frankly, it doesn't get any easier or cheaper than NFS. Any system can share NFS - every UNIX flavour and GNU/Linux distribution will come with an NFS server out-of-the-box.
It's a one-liner config and you're off to the races. Put LVM underneath and you can expand storage as needed, do snapshots - heck, add DRBD if you want replication.
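The one-liner in question is an entry in /etc/exports (a sketch - the path, subnet, and options here are placeholders, adjust to taste):

```shell
# share /srv/share read-write with one subnet (path and subnet are examples)
echo '/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
# re-read /etc/exports and re-export all shares, verbosely
exportfs -rav
```

Clients then mount it with something like `mount server:/srv/share /mnt`.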
You can run NFS (or iSCSI) on any Ethernet connection from just sharing the main interface on the system to dedicated redundant 10Gbps ports.
NFS is well established, with plenty of support for optimization to get past a lot of the limitations of TCP.
For backups of the data, take your snapshot (LVM or VMware, etc.) and a plain old cp / rsync will be sufficient to get the files you need off there.
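The snapshot-then-copy dance looks roughly like this (a sketch, assuming an LVM volume VolGroup01/project - the volume, mount point, snapshot size, and backup destination are all placeholders):

```shell
# create a 5G copy-on-write snapshot of the live volume
lvcreate -L 5G -s -n project-snap /dev/VolGroup01/project
# mount the snapshot read-only and copy the frozen files off
mkdir -p /mnt/project-snap
mount -o ro /dev/VolGroup01/project-snap /mnt/project-snap
rsync -a /mnt/project-snap/ backuphost:/backups/project/
# tear down the snapshot when done
umount /mnt/project-snap
lvremove -f /dev/VolGroup01/project-snap
```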
And there's one other benefit of using NFS over the others - if you have a lot of concurrency, like many virtual machines or compute nodes accessing storage simultaneously, there is (usually) only 1 lock per storage device with the block storage protocols, whereas NFS locks per file.
But NFS will certainly require at least the addition of a switch, potentially several, plus some complexity with redundant network links and so on. It can also suffer from the same complexity issue: unless the stars align, you're going to have a bad time.
So in short, NFS is hella awesome and you should use it wherever you can. Depending on the scale of what you're doing, directly connected storage is going to be simpler and more reliable, possibly cheaper.
Tuesday, 7 August 2012
fg && exit
Found another way to abuse the command chaining... I had a long running task (e2fsck) running under screen and I wanted to chain some other commands (mount -a && exportfs -rav) but couldn't restart the first command.
- Use ctrl+Z to put the job on hold
- fg && more commands to bring the job into the foreground again
- Shazzam!
So naturally I put && exit on the end there again to roll out of shell when the command completed.
[1]+  Stopped                 e2fsck -p -f -v /dev/mapper/VolGroup01-project
[root@palmberg ~]# fg && mount -a && exportfs -rav && exit
e2fsck -p -f -v /dev/mapper/VolGroup01-project
Sunday, 22 July 2012
Nagios check_cluster
The other day, we got an escalation from Nagios in the middle of the night that email was down. Looking at the system, I quickly found that while yes, one SMTP relay was down, the other was up. So how do you monitor services that require multiple failures before service is disrupted?
check_cluster
This is a service check which doesn't check a service itself, but checks the results of other service (or host) checks. The documentation is pretty clear on how to set this up.
So where before I had been checking two SMTP services and escalating to SMS on each, I still have checks for two SMTP services, but then added the check_cluster which checks the results of both and is only critical if all the SMTP services are down. Then I escalate based on the check_cluster checks instead of the check_smtp ones.
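For reference, a sketch of what that looks like in the Nagios object config (the host and service names here are made up, and the thresholds say "warn when 1 or more member states are non-OK, critical only when 2 or more are"):

```
define command {
    command_name    check_service_cluster
    command_line    $USER1$/check_cluster -s -l $ARG1$ -w $ARG2$ -c $ARG3$ -d $ARG4$
}

define service {
    use                  generic-service
    host_name            monitor01
    service_description  SMTP Cluster
    check_command        check_service_cluster!"SMTP Cluster"!@1:!@2:!$SERVICESTATEID:relay1:SMTP$,$SERVICESTATEID:relay2:SMTP$
}
```

The on-demand $SERVICESTATEID:host:service$ macros feed the current state of each member check into check_cluster.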
Friday, 8 June 2012
LeakedIn
It's making the rounds, but it's just so much fun to get a raw password file. With the recent password compromise from LinkedIn, you can readily find a copy of the raw file posted online and check if your password is in there. And there's a site if you want to just punch in guesses:
http://leakedin.org/
It's pretty fun ;)
Nothing else to say here, really. I've posted about storing passwords. Use a password manager, generate a random password for each site, and make a large haystack from a short password by using arbitrary but simple patterns to extend the length of a complex password.
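As a toy example of that haystack idea in the shell (the padding pattern here is arbitrary - which is the point):

```shell
# generate a random base password: 16 random bytes as base64 is 24 characters
pw=$(openssl rand -base64 16)
# "haystack" it: pad with a simple pattern you can remember, e.g. "..//" three times
padded="${pw}..//..//..//"
# the padded password is now 36 characters long
echo "${#padded}"
```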
"lollerskates" is in that cracked file :P