Server blowout

Uncategorized | Posted by attriel May 18th, 2010

Well, the server blew out … about a day after my last post, honestly.  Doesn’t that just suck.

So, I spent one day (well, morning, before work for a couple hours) trying to get it back up to limp through the day for the folks who host off my system.  Got it up, but it died again before I got to the office, so obviously THAT wasn’t working.

So over the weekend I divided my time between taking my son around (my wife was out of town and it’s the first time he was away from her by more than a car trip), and migrating everything to the new server I had that I had been slowly working on.  Actually, one of the last things I was waiting for was the 10.04 update, so I was ready to do the migration, at least.  I had just been planning to do it on my own terms :o

So, I couldn’t copy over the mysql files directly as they’d been corrupted.  My dumps from day of had a bunch of blank tables (oops, guess that’s what happens when you dump AFTER catastrophic failure).  But!  luckily I had a dump from the previous weekend.  And since I had binary logging running, I had a full log of everything that had happened in between.

So after a full restore from the dump, the following command gave me the commands that had been issued in the meantime:

mysqlbinlog -r ~/fullout --start-position 5938222 psychotomy-bin.000056 psychotomy-bin.00006[23]
The “-r filename” is the output file.  A heck of a lot nicer than having to redirect output, IMO.
“--start-position ###” tells it where to start dumping the log.  This let me grab only the statements issued after my full dump, avoiding duplicate-key errors from re-inserting rows that were already there (as it turned out I had the wrong number and had to do it again).
“-d db” (not pictured above) limits the output to a single database.
Then come the filenames of the binlogs you want the replay from.  In this case I had a bunch of files that were just restarts without content, so I skipped them.
The -d flag isn’t shown above because what I did was two steps:
1) I replayed the logs for my primary users’ database, which is where we noticed the corrupted data first when I tried restoring the day-of dump.  Got them back up.
2) I dumped everything, then extracted that system out of the dump so I could replay all the other databases.
But mysqlbinlog is SERIOUSLY nice.
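To actually replay, the output file just gets fed back through the mysql client.  Something like this, using the filename from the command above (worth eyeballing the file first, since a wrong --start-position means duplicate-key errors):

```shell
# Sanity-check what's about to be replayed...
less ~/fullout
# ...then feed it back into the restored server.
mysql -u root -p < ~/fullout
```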
On an unrelated note, I think I’m going to set up a weekly process to dump the db and then archive the binlogs with the dump from the previous week.  Then keep the last, say, month.  Basically “logrotate” for my database :o
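Sketched out, that weekly job might look something like this; the paths, credentials, and retention window are placeholders for my setup, not a tested script:

```shell
#!/bin/sh
# Weekly "logrotate for my database": full dump, archive the binlogs that
# led up to it, keep about a month of sets.  Paths and user are placeholders.
BACKUP_DIR=/var/backups/mysql
WEEK=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR/$WEEK"

# --flush-logs rotates to a fresh binlog at dump time; --master-data=2 writes
# the binlog name/position into the dump as a comment, which is exactly the
# --start-position you'd want for a mysqlbinlog replay later.
mysqldump --all-databases --flush-logs --master-data=2 \
    -u backup -p > "$BACKUP_DIR/$WEEK/full.sql"

# Archive the now-closed binlogs next to the dump, then let the server forget
# them.  PURGE keeps the binlog index consistent -- don't just rm the files.
cp /var/lib/mysql/psychotomy-bin.[0-9]* "$BACKUP_DIR/$WEEK/"
mysql -u backup -p -e "PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);"

# Keep roughly the last month of weekly sets.
find "$BACKUP_DIR" -maxdepth 1 -type d -mtime +35 -exec rm -rf {} +
```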

Ubuntu 10.04 Update

Hardware | Posted by attriel May 5th, 2010

So Ubuntu 10.04 (LTS) released on 30 April, late in the day (EST anyway) … that’s cutting it rather close :o

Anyway, I followed the directions found at the Lucid Upgrades page, which seemed to be how it was set up already.

Then it removed 6 packages, installed 30 new ones, and upgraded 400, I think it said.  The exact numbers have completely scrolled off the screen by now, however.

I said Yes, and it went about its merry business.  It’s been about 30 minutes so far, although a couple of times it stopped for input and I didn’t catch it for a while.  First it wanted to know which services used PAM modules so it could restart them with the new modules.  The second time it wanted to verify what I wanted to do about the MySQL confs; the diff showed minor changes, and I liked my version better.  The default is to retain your version, which I thought was a nice touch.

… continued later …

A couple other bits went by then it wanted to reboot to finalize and switch to the new kernel.  Everything went swimmingly.

Then overnight, my laptop ended up in a pool of water, sorry this post is late :o

Missing 3rd Tuesday of April

Site Maintenance | Posted by attriel April 27th, 2010

So, you’ve probably noticed that last week’s post never appeared …

Partially that’s because I lost track of the date, and partially that’s because I was going to do an update from ubuntu 9.10 to 10.04 on my secondary server, so I figured I’d review the process.

The idea being: I ran Slackware for like 8 years, and loved it, but quit having time for the hand-compiling management that I wanted with that system.  But there wasn’t (AFAICR) any good way to upgrade the base system version.  There might have been, but by the time the first update would have come around, I had so many customized compiles that it would have caused more problems than it solved.  On the upside, my system was more up to date with the latest versions, because I didn’t have to go through package testing.

Then I went to Gentoo, to keep that compiled-“for your use” feel, and it was good, except some of the things I wanted to do required too much customization, so I couldn’t do upgrades there either.

So now I’m moving to Ubuntu.  Most of the things I’ve looked for, I can find prebuilt packages for, and the VM management seems a little cleaner than my last try through Gentoo, in that I don’t need to install X-Windows on my server to install the VM servers.  That’s partially because Ubuntu has good docs on Xen and KVM; I think Gentoo has these options too now, but I’m not positive.

Except 10.04 is still only in RC, not yet released.  I’m figuring on sometime this week, maybe, but figured I’d post since I was already a full week off.

I’ll probably write up that post when it happens and queue it up.  Trying to get some of these queued up, but I never get a chance to take all the screenshots I want for a few of them :o

Of Mail Servers and Usages

From The Lines | Posted by attriel April 6th, 2010

So, one thing I’m working on is rebuilding my home server.  And one of the big bits last time was I finally figured out how to virtualize email so I could run all the domains with different users, and not have to give everyone shell accounts.

Well, that system had a number of problems setting it up, and I decided to use a new one on the new box.  And I’m ~80% through that setup process, and getting into the complicated bits and spam and virus filters etc … and it occurs to me …

I went back and pulled up qmailanalog, which, after a few tweaks to make it work, let me see the send/receive stats for the server.  (As an aside, djb writes some very solid systems: instead of one program that does 10 things, he writes 10 programs that each do one specific thing.  Usually limited options, frequently chosen at compile time, so there’s very little to exploit; OTOH, it doesn’t get updated very often.)

Almost all of the send is to junk addresses, bots guessing at addresses.  Most of the incoming is also to junk addresses.  I’m guessing a bunch of the former is bounces of the latter :o

Then I flipped through some of the inboxes for the users receiving mail.  It’s almost exclusively spam and virus mail (obviously the spam filters quit keeping up at some point).

So now I’m wondering … if I’m the only user receiving mail, and I can just change the few lists that still point there … maybe I should just shut the whole thing off on the new box.  Gets rid of the spam problem and any relaying/etc concerns :o  And simplifies the configuration overall :o

Open CA, continued

Coding, Tool Tips | Posted by attriel March 16th, 2010

Well, I haven’t had a chance to look at the entries I mentioned last post.  But I DID remember an open free Certificate Authority.

CACert is a site that lets you register and, assuming you can reasonably prove ownership of your domain (by answering the emails associated with the registrar), issue certs for your domain.  I’m currently looking at issuing certs for my mail server and web daemon.

The CACert Root Certificate isn’t widely distributed, so your users would have to add it the first time they visit, but IMO it’s a little better (and possibly better controlled) than the self-signed “Snake Oil” certs.

The only downside I’ve noticed so far is that there’s no interface for building your request.  So you still have to use OpenSSL or another package to generate your key and the CSR.  I’m kind of surprised, honestly, that they don’t have that part, since I would think it would be easier than the CA portion.
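For reference, the OpenSSL side is only a couple of commands.  A sketch with placeholder filenames; the resulting CSR is what gets pasted into CACert’s server-cert form:

```shell
# Generate a private key, then a CSR for it.  You'll be prompted for the
# subject fields; the CN should be the hostname the cert is for.
openssl genrsa -out mail.example.org.key 2048
openssl req -new -key mail.example.org.key -out mail.example.org.csr
```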

So, I still want to look at the other tools, but since CACert is centralized and you can add the root cert for your users, I think it makes a decent option when you can use it.

CA Systems

Coding, Tool Tips | Posted by attriel March 3rd, 2010

So, as part of the MySQL SSL Replication series, I decided that I’d look up some open source CA systems.  Because there must be something better than running openssl --fifty --thousand --options --with --no --memory --or --checking

I found OpenCA/OpenPKI, which looked interesting.  Except as I tried to set it up, the Ubuntu distributions turned out to be RedHat RPMs, and after converting them they didn’t appear to be actual apps; they may have been framework prereqs of the app.  But the download screens were singularly uninformative.

I also found EJBCA, which I haven’t tried out yet.  Partially because OpenCA sounded decent, and I figured I’d try that first, since EJBCA looks to be a much larger Java/jboss application, and I don’t know JBoss offhand.  I’ll let you know if I get it going, otherwise I’ll do the MySQL entries with openssl.

And I meant to post this yesterday, oops.

Inside the iMac

Hardware | Posted by attriel February 16th, 2010

So, first off — This week’s post was supposed to either be the next in the MySQL series or the followup to the profiling post with a visualization tool.  But I haven’t vetted the instructions for MySQL pt 2, and I need to get all the right screenshots for the tool discussion.  Sorry for that folks

Now, as for what this post IS!  Last week I had to change the hard drive in an iMac.  The old one died, so it didn’t make sense to follow the general consensus of “just add an external drive and triple your storage, it’s way easier”

Also, that kind of advice is just ANNOYING.

I did not, however, remember to take pictures.  Thought of it afterward, but I wasn’t opening the functional system just to do a photo shoot.  Here’s a forum post with links to videos and photos that I used for guidance

So, instead, I’ll explain it where I can.  Also, my system was subtly different from every example I followed, so remember: YMMV.  This was a 2006 or 2007 white Intel iMac

First off, I needed a T6 Torx screwdriver.  And the new SATA HDD.  Most of the recommendations suggested rubber cement or similar for one of the later steps; I used my wife’s scrapbooking Zots.

Step 0: Lay the mac down on its back, monitor facing up

Step 1:  Remove the memory cover and memory modules (this procedure is explained on the base of the iMac, for reference)

Step 2: Remove four torx screws from the bottom of the display; these are in line with the memory screws.

Step 3:  This step is hard.  While holding in the memory tabs, lift the case off the monitor.  The sides and face around the monitor are the part that you’re moving.  I believe in the end I took a small flat-head screwdriver and jimmied a bit to get it started so that I had something to press against to open it.  Also, once it opens a little, the memory tabs are held in by the facing, so you can move your hands to somewhere less awkward.

Step 4: Carefully lift up, bottom more than top.  Once you’re clear of the bottom of the case, keep an eye on the metal tabs and lift to bring them clear.  Also make sure you don’t pull too hard as that will mess up the iSight camera.  There should be enough play in the iSight to lean the case up out of the way

Step 5: There’s a black plastic wrapping.  Carefully pry this off around the bottom and sides (bottom was perforated and I had to break it to remove it, sides were sticky-tabbed).  At the top, above the monitor, you’ll need to carefully pry the black wrapping away from the top of the monitor, but not too far.  I again used a small screwdriver to leverage and slit along.

Step 6: In theory, all the directions said I could now lift the monitor off.  This was where mine went off the rails.  There’s a long black ribbon cable attaching something to the monitor.  I couldn’t see any way to detach it from either end, and I wasn’t willing to risk yanking anything.  So that limited how far I could move the monitor.  That’s along the middle.  On the left side, there are two sets of wires leading from the board to the monitor.  I unhooked the set nearest to me.  That gave me some play in the monitor for the remainder.  You can probably remove both and get more movement.

Step 7: I had to remove two screws from the left-hand side of the hdd.  There was a metal L-plate on that side, and it was hard to tell if the screws were for the HDD or for holding down the daughter-board.  Once you’ve removed those two, you can partially lift the HDD and pull it out to the left.  The right side of the HDD has a pair of pegs that slot into holes to stabilize, so you need to bring it enough right to clear that to remove.  But with the daughterboard in the way, you have to bring it up to clear the board.  It’s a little tinkering and magic to get both aspects.

Step 8: On the RHS of my HDD was a sensor.  Everything online suggests it’s a heat sensor, although I’m not really sure why they wouldn’t just use the SMART data the HDD has onboard, but whatever.  Carefully pry this off.  It’s glued, so you actually have to pry this one.  Remove the L-plate and the pegs.

Step 9: Replace the L-plate & pegs onto the new HDD.  This is where the rubber cement comes in.  I stuck a couple Zots to the back of the sensor and applied it back to the hard drive.  I won’t swear it’ll stay properly, but I’m not having problems yet at least.  Slide the HDD back in, screw it back down.

Step 10: Replace monitor, reattaching the cables you removed (I only removed the one, so I was more cramped for steps 7-9, but I only had to remember which way one cable went in)

Step 11: Crimp the black plastic heat envelope back down and restick everywhere.

Step 12: Slide casing back on, hooking the metal clips inside the casing and being careful of the iSight cords & etc.  Hold in the memory tabs while getting the case over them.  Then carefully push the case on, squeezing it down as it gets near to done and starts to stick.  There shouldn’t be any lipping remaining, or the CD might not track smoothly

Step 13: Reapply torx screws.  Re-insert memory and close up.

Again, YMMV, and I really should have had pictures, because I know that when I was looking I wanted the pics to see what was being talked about.

Logfile Visualization

Tool Tips | Posted by attriel February 2nd, 2010

One thing I’ve been looking for, on and off, is some way to view a representation of the server logs in realtime.  Basically something I can take a look at and see what’s happening.  Also, playback at a later date to see what happened at a given time.

One of the only things I found that really did that was glTail.rb, a Ruby program.  I don’t have a lot of Ruby knowledge, so I didn’t go very far with it to see how it ticks and what I could modify.

It runs the log and streams bubbles from one side representing the server traffic.  Size is determined by request/response size.  I never quite figured out what the other set of streaming bubbles really represented, since Ruby isn’t in the environment I was playing in.  As a general use tool, it seems like it would be interesting if you had all the pre-reqs installed already.  As an industry tool, I wasn’t convinced that I saw enough benefit to justify installing the prereqs.

But I don’t think “industry level” even wants a tool like this, so !

I think the most telling video is the one discussed on the slashdotting page.

XDebug

Tool Tips | Posted by attriel January 26th, 2010

This is throwing back to our previous discussion about code profiling

The tool we’re using, and which provides a lot of functionality that we really like, is XDebug.  This one, unlike APD, doesn’t appear to have an on/off runtime setting; we activate it in php.ini and it just goes.

Also, for various reasons, patching even our dev systems is a complicated and annoying process.  Luckily, we’ve mostly turned the development system into “integration”, and we all do our development on local developer workstation servers.  So we’ve patched our locals to add XDebug

zend_extension=/opt/php-5.2.6/lib/php/extensions/no-debug-non-zts-20060613/xdebug.so

xdebug.profiler_enable=1
xdebug.profiler_output_dir=/tmp/xdebug/
xdebug.profiler_append=1
xdebug.show_mem_delta=1
xdebug.trace_output_name=trace.%p
xdebug.trace_output_dir=/tmp/xdebug
xdebug.trace_options=1
xdebug.trace_format=1
xdebug.auto_trace=1

That’s activation.  Obviously the zend_extension path is just what’s on my local system; it varies.

zend_extension adds it to the PHP interpreter at a base level, so it always works.

profiler_enable — turns it on

profiler_output_dir, trace_output_dir, and trace_output_name tell it where to put the files it generates

profiler_append (& trace_options) — This flag is necessary because otherwise another process on the same interpreter (apache process) will, by default, overwrite the file.  append allows you to have it just keep adding.  It will confuse your numbers later, but I found it a little safer when I set it up.  Our other primary user has it not appending.

trace_format — this defines what format the output file should be in.  The default (0) is the human readable format.  I could never figure it out.  1 is actually the COMPUTER format, but I find it easy to read.  YMMV

show_mem_delta lets you see how the memory utilization changes between calls in the human format of the trace.

The trace file:

Version: 2.0.4
TRACE START [2010-01-22 20:12:05]
1 0 0 0.014269 53424 {main} 1 /opt/data/localhost/apache/htdocs/php.php 0
2 1 0 0.014295 53448 phpinfo 0 /opt/data/localhost/apache/htdocs/php.php 1
2 1 1 0.404662 58100
1 0 1 0.404689 58100
0.404764 26700
TRACE END   [2010-01-22 20:12:05]
The first column (1 2 2 1) is the depth of calls.  The second (0 1 1 0) is the function call # (IIRC this is monotonically increasing throughout the program).  Column 3 (0 0 1 1) is Enter/Exit (0 is Enter, 1 is Exit).  Column 4 is elapsed execution time.  (The fifth row of the trace actually starts here.)  Fifth is current memory utilization.  Sixth is the call that got us to this entrance (this and the remaining columns are empty on exit lines).  Seventh is User (1) or Internal (0).  Eighth and Ninth are the source file & line number that invoked it.

The profile file:

==== NEW PROFILING FILE ==============================================

version: 0.9.6

cmd: /opt/data/localhost/apache/htdocs/php.php

part: 1

events: Time

fl=php:internal

fn=php::phpinfo

1 390321

fl=/opt/data/localhost/apache/htdocs/php.php

fn={main}

summary: 390388

0 66

cfn=php::phpinfo

calls=1 0 0

1 390321

This file is significantly less human-readable than the last.  It IS, however, good for visualization with a tool such as KCachegrind (the file is actually named cachegrind.out.pid).  That will be the next ToolsDay post.  Because this one is already large, and that one has pictures!

MySQL SSL (1 of 3)

Coding | Posted by attriel January 12th, 2010

First thing to note:  The community build of MySQL does not support SSL.  You either need the Enterprise or you need to build it yourself.

To check if SSL is enabled on your build:
log into MySQL via any account

mysql> show variables like 'have_ssl';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| have_ssl      | YES   |
+---------------+-------+
1 row in set (0.01 sec)

If it says YES, then MySQL is compiled with SSL support, otherwise a new binary will need to be generated.
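If it says NO and you’re building from source, the 5.0-era autotools build had a flag for this.  This is from memory, so double-check against ./configure --help in your tree:

```shell
# Hypothetical source build with OpenSSL support for a MySQL 5.0-era tree --
# flag names changed between versions, so verify before trusting this.
./configure --with-openssl
make
make install
```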

Once MySQL has SSL, then you can set the configuration options in the MySQL Configuration/INI file as such (with paths obviously modified):

[mysqld]
ssl-ca=/usr/mysql-5.0.84/ssl/ca-cert.pem
ssl-cert=/usr/mysql-5.0.84/ssl/mysql.cert
ssl-key=/usr/mysql-5.0.84/ssl/mysql.key
ssl-cipher=ALL

(The last line enables all SSL cipher modes except NULL encryption)

This assumes you have the CA public certificate saved as ca-cert.pem, and a public/private key-certificate pair for your MySQL server.  That will be another post.

To test the functionality, log in to MySQL using an administrator account

mysql> create database test;
mysql> grant all privileges on test.* to test@localhost identified by 'testpassword' require ssl;

Then you can attempt to log in to the server as the test user:

mysql -u test -p --ssl-ca=ssl/cacert.pem --ssl-cipher=ALL

Without the ssl-cipher option, you get an SSL connection error because the two sides can’t agree on how to encrypt the connection; the CA certificate is required to activate the SSL connection and to validate the server, AFAICT.

You don’t technically need to use “ALL” for the cipher entries. There are a number of choices that you can select, but for the purposes of demonstration, ALL was the simplest.
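Once you’re logged in, you can also double-check that the session actually negotiated SSL; Ssl_cipher comes back empty when the connection isn’t encrypted (the cipher shown below is just an example):

```sql
mysql> SHOW STATUS LIKE 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+
```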

Part 2 will cover more detailed user restrictions. This setup effectively only requires that the connection use SSL encryption (confidentiality), but does not validate the user (authenticity).

Part 3 will implement replication over SSL.