Thursday, August 28, 2008
Back up your Windows desktop to S3 with SecoBackup
I think this is a good tool for backing up certain files on Windows-based desktops. For example, I back up my Quicken files from within a Windows XP virtual machine that I run inside VMware Workstation on top of my regular Ubuntu Hardy desktop.
Tuesday, August 26, 2008
Ruby refugees flocking to Python?
RTFL
Here are some recent examples from my work.
Apache wouldn't start properly
A 'ps -def | grep http' would show only the main httpd process, with no worker processes. The Apache error log showed these lines:
Digest: generating secret for digest authentication
A Google search for this line revealed this article:
http://www.raptorized.com/2006/08/11/apache-hangs-on-digest-secret-generation/
It turns out the randomness/entropy on that box had been exhausted. I grabbed the rng-tools tar.gz from SourceForge, compiled and installed it, then ran
rngd -r /dev/urandom
...and Apache started its worker processes instantly.
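If you suspect the same problem, a quick way to check the kernel's entropy pool before reaching for rng-tools is to read /proc/sys/kernel/random/entropy_avail. Here's a minimal Python sketch; the 200-bit cutoff is just an illustrative threshold, not an official value:

#!/usr/bin/env python
# Report the kernel's available entropy (Linux-specific).
# Values close to zero point to the kind of starvation that hung Apache here.
f = open('/proc/sys/kernel/random/entropy_avail')
entropy = int(f.read().strip())
f.close()

print 'Available entropy: %d bits' % entropy
if entropy < 200:
    print 'Entropy pool looks low; consider running rngd'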
Cannot create InnoDB tables in MySQL
Here, all it took was reading the MySQL error log in /var/lib/mysql. It's very friendly indeed, and it tells you exactly what to do:
InnoDB: Error: data file ./ibdata1 is of a different size
InnoDB: 2176 pages (rounded down to MB)
InnoDB: than specified in the .cnf file 128000 pages!
InnoDB: Could not open or create data files.
InnoDB: If you tried to add new data files, and it failed here,
InnoDB: you should now edit innodb_data_file_path in my.cnf back
InnoDB: to what it was, and remove the new ibdata files InnoDB created
InnoDB: in this failed attempt. InnoDB only wrote those files full of
InnoDB: zeros, but did not yet use them in any way. But be careful: do not
InnoDB: remove old data files which contain your precious data!
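In this case the fix was to make innodb_data_file_path in my.cnf match the actual size of the existing ibdata1 file. As a hypothetical illustration (2176 pages at InnoDB's 16 KB page size is roughly 34 MB; use the real size of your own file):

[mysqld]
# must match the size of the existing ibdata1 file
innodb_data_file_path = ibdata1:34M:autoextend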
Windows-based Web sites are displaying errors
Many times I've seen Windows/IIS-based Web sites display cryptic errors such as:
In conclusion -- RTFL and Google it! You'll be surprised by how large a percentage of issues you can solve this way.
Wednesday, July 16, 2008
This just in: Google releases Mox
Monday, July 14, 2008
Zach and sugarbot going strong in Google SoC
You can see a screencast that Zach put together, as well as a list of his accomplishments so far, in this blog post. In the screencast, Zach shows how he automates the launching and testing of two Sugar activities, the Calculator and the Terminal. Very cool stuff.
It's been a pleasure mentoring Zach on his SoC project. He has already proven himself to possess strong software engineering skills, not only in programming but also in designing complex pieces of software. I only had to provide minimal guidance, and he has been very receptive to all the advice I have given him. I liked that he implemented an automated test suite for sugarbot and included it in a buildbot continuous integration process only days after I suggested it to him. It has also been very satisfying to me as a mentor to see his progress, as shown by his almost-daily blog posts. I believe he is the most active blogger on Planet SoC. Good job, Zach!
Wednesday, June 18, 2008
Celtics use Ubuntu to beat Lakers
"It was a group effort by this gang in green, which bonded behind Rivers, who borrowed an African word ubuntu (pronounced Ooh-BOON-too) and roughly means "I am, because we are" in English, as the Celtics' unifying team motto.
The Celtics gave the Lakers a 12-minute crash course of ubuntu in the second quarter.
Boston outscored Los Angeles 34-19, getting 11 field goals on 11 assists. The Celtics toyed with the Lakers, outworking the Western Conference's best inside and out and showing the same kind of heart that made Boston the center of pro basketball's universe in the '60s. "
It's not what you thought, but it's still nice to see that the ubuntu concept is used successfully in sports too. I wonder what parallel we can make between the Lakers' game last night and an operating system. The Windows Blue Screen of Death comes to mind.
Tuesday, June 17, 2008
Security testing for agile testers
Security testing is a broad topic that cannot possibly be covered in a few paragraphs; whole books have been devoted to the subject. Here we will try to at least provide some guidelines and pointers to books and tools that might prove useful to agile teams interested in security testing.
Just like functional testing, security testing can be viewed and conducted from two perspectives: from the inside out (white-box testing) and from the outside in (black-box testing).
Inside-out security testing assumes that the source code for the application under test is available to the testers. The code can be analyzed statically with a variety of tools that try to discover common coding errors which can make the application vulnerable to attacks such as buffer overflows or format string attacks. (Resources:
http://en.wikipedia.org/wiki/Buffer_overflow and
http://en.wikipedia.org/wiki/Format_string_vulnerabilities)
A list of tools that can be used for static code analysis can be found here:
http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
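As an illustration, here is a minimal sketch of wiring one such checker into a build step. It assumes the pyflakes tool is installed and that your code lives under a hypothetical src directory, and it simply fails the step if the checker prints any warnings; adapt the command and the pass/fail criteria to whichever tool you actually pick:

#!/usr/bin/env python
# CI step sketch: run pyflakes over every .py file in a source tree
# and fail the build step if any warnings are reported.
import os
import subprocess
import sys

SOURCE_DIR = 'src'   # hypothetical location of your application code

py_files = []
for dirpath, dirnames, filenames in os.walk(SOURCE_DIR):
    for name in filenames:
        if name.endswith('.py'):
            py_files.append(os.path.join(dirpath, name))

if not py_files:
    print 'No Python files found under %s' % SOURCE_DIR
    sys.exit(0)

proc = subprocess.Popen(['pyflakes'] + py_files,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = proc.communicate()[0]

if output.strip():
    print 'FAIL: static analysis reported issues:'
    print output
    sys.exit(1)
print 'PASS: no static analysis issues found'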
The fact that the testers have access to the source code of the application also means that they can map what some books call "the attack surface" of the application: the list of all the inputs and resources used by the program under test. Armed with knowledge of the attack surface, testers can then apply a variety of techniques that attempt to break the security of the application. A very effective class of such techniques is called fuzzing and is based on fault injection. Using this technique, the testers try to make the application fail by feeding it various types of inputs (hence the term fault injection). These inputs range from carefully crafted strings used in SQL injection attacks, to random byte changes in given input files, to random strings fed as command-line arguments. (Resources:
http://www.fuzzing.org/category/fuzzing-book/ and
http://www.fuzzing.org/fuzzing-software)
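To make the idea concrete, here is a bare-bones fault-injection sketch in Python. 'target_program' is a hypothetical command-line binary that reads from stdin; the script is only meant to show the shape of a fuzzer, not to replace the dedicated tools above:

#!/usr/bin/env python
# Bare-bones fuzzer: feed random byte strings to a target's stdin
# and count the runs that die with a signal (e.g. SIGSEGV).
import os
import random
import subprocess

TARGET = 'target_program'   # hypothetical binary under test
RUNS = 100

def random_bytes(length):
    return ''.join(chr(random.randint(0, 255)) for i in range(length))

crashes = 0
devnull = open(os.devnull, 'w')
for run in range(RUNS):
    data = random_bytes(random.randint(1, 4096))
    proc = subprocess.Popen([TARGET], stdin=subprocess.PIPE,
                            stdout=devnull, stderr=subprocess.STDOUT)
    proc.communicate(data)
    if proc.returncode < 0:   # negative return code == killed by a signal
        crashes += 1
        print 'Run %d crashed with signal %d' % (run, -proc.returncode)
print '%d out of %d runs crashed' % (crashes, RUNS)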
The outside-in approach is the one mostly used by attackers who try to penetrate the servers or the network hosting your application. As a security tester, you need to have the same mindset attackers do, which means you have to use your creativity to discover and exploit vulnerabilities in your own application. You also need to stay up to date with the latest security news and updates related to the platform and operating system your application runs on. These tasks are by no means easy; they require extensive knowledge, and as such they are mostly outsourced to third parties that specialize in security testing.
So what are agile testers to do when faced with the apparently insurmountable task of testing the security of their application? Here are some practical, pragmatic steps that anybody can follow:
1. Adopt a continuous integration (CI) process that periodically runs a suite of automated tests against your application.
2. Learn how to use one or more open source static code analysis tools. Add a step to your CI process which consists of running these tools against your application code. Mark the step as failed if the tools find any critical vulnerabilities.
3. Install an automated security vulnerability scanner such as Nessus
(http://www.nessus.org/nessus/). Nessus can be run in a command-line, non-GUI mode, which makes it suitable for inclusion in a CI tool. Add a step to your CI process which consists of running Nessus against your application. Capture the Nessus output in a file and parse that file for any high-importance security holes found by the scanner. Mark the step as FAIL when any such holes are found (a minimal parsing sketch follows this list).
4. Learn how to use one or more open source fuzzing tools. Add a step to your CI process which consists of running these tools against your application code. Mark the step as failed if the tools find any critical vulnerabilities.
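To give an idea of what the report-parsing step in item 3 could look like, here is a small sketch. The report file name and the 'High' severity marker are assumptions about how you captured and formatted the scanner output, so adjust them to match what your Nessus run actually produces:

#!/usr/bin/env python
# CI step sketch: scan a captured scanner report for high-severity findings
# and exit non-zero so the CI tool marks the step as FAIL.
import sys

REPORT = 'nessus_report.txt'   # hypothetical file holding the captured output

high_findings = []
for line in open(REPORT):
    if 'High' in line:          # assumed severity marker in the report
        high_findings.append(line.strip())

if high_findings:
    print 'FAIL: %d high-severity findings' % len(high_findings)
    for finding in high_findings:
        print finding
    sys.exit(1)
print 'PASS: no high-severity findings'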
As with any automated testing effort, running these tools is no guarantee that your code and your application will be free of security defects. However, running them will go a long way towards improving the quality of your application in terms of security. As always, the 80/20 rule applies: these tools will probably find the most common 80% of security bugs out there while requiring only 20% of your security budget.
To find the remaining 20% of security defects, you're well advised to spend the other 80% of your security budget on high-quality security experts. They will be able to test your application's security thoroughly using techniques such as SQL injection, code injection, remote code inclusion and cross-site scripting. While there are some tools that try to automate some of these techniques, they are no match for a trained professional who takes the time to understand the inner workings of your application in order to craft the perfect attack against it.
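As a tiny, self-contained illustration of why one of these techniques -- SQL injection -- is so effective, compare a query built by string interpolation with a parameterized one. The sqlite3 module is used here only to make the example runnable:

#!/usr/bin/env python
# Classic SQL injection pattern, shown against a throwaway in-memory database.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR 'a'='a"

# Vulnerable: interpolating user input straight into the SQL string
# lets the attacker rewrite the WHERE clause and see every row.
query = "SELECT * FROM users WHERE name = '%s'" % user_input
print conn.execute(query).fetchall()

# Safer: a parameterized query treats the input as a literal value.
print conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()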
Tools for troubleshooting Web app performance
Thursday, June 12, 2008
What does your Wordle look like?
You would think I'm very self-centered, since my first and last names appear so prominently. But I think it's because every blog post ends with "posted by Grig Gheorghiu at ...".
Friday, May 23, 2008
Incremental backups to Amazon S3
Here's what I did in order to get all this going on a CentOS 5.1 server running Python 2.5.
1) Signed up for Amazon S3 and got the AWS_ACCESS_KEY_ID and the AWS_SECRET_ACCESS_KEY.
2) Downloaded and installed the following packages: boto, GnuPGInterface, librsync, duplicity. All of them except librsync are Python-based, so they can be installed via 'python setup.py install'. For librsync you need to use './configure; make; make install'.
3) Generated a GPG key pair using "gpg --gen-key". Made a note of the hex fingerprint of the key (you can list the fingerprints of your keys via "gpg --fingerprint").
4) Wrote a simple boto-based Python script to create and list S3 buckets (the equivalent of directories in S3 parlance). Note that boto uses SSL, so your Python installation needs to have SSL enabled.
Here's how the script looks:
#!/usr/bin/env python

from boto.s3.connection import S3Connection

ACCESS_KEY_ID = 'theaccesskeyid'
SECRET_ACCESS_KEY = 'thesecretaccesskey'

conn = S3Connection(ACCESS_KEY_ID, SECRET_ACCESS_KEY)

buckets = [
    'mybuckets_myserver_mysqldump',
    'mybuckets_myserver_full',
]

for bucket in buckets:
    conn.create_bucket(bucket)

rs = conn.get_all_buckets()
print 'Bucket listing:'
for b in rs:
    print b.name
5) Wrote a bash script (heavily influenced by Tim McCormack's post) that runs duplicity and backs up the root partition of my Linux server (minus some directories) to S3. The nice thing about duplicity is that it uses the rsync algorithm (via librsync), so it only transfers the diffs over the wire. Here's what my script looks like:
NOTE: duplicity will interactively prompt you for your GPG key's passphrase, unless you have a variable called PASSPHRASE that contains the passphrase. Since I wanted to run this script as a cron job, I chose the less secure way of specifying the passphrase in the clear inside the script. YMMV.
export myEncryptionKeyFingerprint=somehexnumber
export mySigningKeyFingerprint=somehexnumber
export AWS_ACCESS_KEY_ID=accesskeyid
export AWS_SECRET_ACCESS_KEY=secretaccesskey
export PASSPHRASE=mypassphrase
/usr/local/bin/duplicity --encrypt-key=$myEncryptionKeyFingerprint \
--sign-key=$mySigningKeyFingerprint --exclude=/sys --exclude=/dev \
--exclude=/proc --exclude=/tmp --exclude=/mnt --exclude=/media / \
s3+http://mybuckets_myserver_full
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export PASSPHRASE=
That's about it. Running the script produces output such as this:
--------------[ Backup Statistics ]--------------
StartTime 1211482825.55 (Thu May 22 12:00:25 2008)
EndTime 1211488426.17 (Thu May 22 13:33:46 2008)
ElapsedTime 5600.62 (1 hour 33 minutes 20.62 seconds)
SourceFiles 174531
SourceFileSize 5080402735 (4.73 GB)
NewFiles 174531
NewFileSize 5080402735 (4.73 GB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 174531
RawDeltaSize 1200920038 (1.12 GB)
TotalDestinationSizeChange 2702953170 (2.52 GB)
Errors 0
-------------------------------------------------
The first time you run the script it will take a while, but subsequent runs will only back up the files that were changed since the last run. For example, my second run transferred only 19.3 MB:
--------------[ Backup Statistics ]--------------
StartTime 1211529638.99 (Fri May 23 01:00:38 2008)
EndTime 1211529784.18 (Fri May 23 01:03:04 2008)
ElapsedTime 145.19 (2 minutes 25.19 seconds)
SourceFiles 174522
SourceFileSize 5084478500 (4.74 GB)
NewFiles 64
NewFileSize 2280357 (2.17 MB)
DeletedFiles 28
ChangedFiles 418
ChangedFileSize 217974696 (208 MB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 510
RawDeltaSize 2465010 (2.35 MB)
TotalDestinationSizeChange 20211663 (19.3 MB)
Errors 0
-------------------------------------------------
To restore files from S3, you use duplicity with the source set to s3+http://mybuckets_myserver_full and the destination set to a local directory.
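If you just want to confirm that the duplicity volumes actually landed in S3 without doing a restore, a short boto script along the lines of the one in step 4 can list the bucket contents (same credentials and bucket name as above):

#!/usr/bin/env python
# List the duplicity volumes stored in the backup bucket, with their sizes.
from boto.s3.connection import S3Connection

ACCESS_KEY_ID = 'theaccesskeyid'
SECRET_ACCESS_KEY = 'thesecretaccesskey'

conn = S3Connection(ACCESS_KEY_ID, SECRET_ACCESS_KEY)
bucket = conn.get_bucket('mybuckets_myserver_full')

total = 0
for key in bucket.list():
    print '%-60s %12d bytes' % (key.name, key.size)
    total += key.size
print 'Total: %.2f GB' % (total / (1024.0 ** 3))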
Thanks to Tim McCormack for his detailed blog post; it made things so much easier than digging up all this info via Google-fu.
Monday, May 19, 2008
Compiling Python 2.5 with SSL support
In my case, I needed to enable SSL support for Python 2.5.2 on CentOS 5.1. I already had the openssl development libraries installed:
# yum list installed | grep ssl
mod_ssl.i386 1:2.2.3-11.el5_1.cento installed
openssl.i686 0.9.8b-8.3.el5_0.2 installed
openssl-devel.i386 0.9.8b-8.3.el5_0.2 installed
Here's what I did next, following Patrick's post:
1) edited Modules/Setup.dist from the Python 2.5.2 source distribution and made sure the relevant lines were uncommented (they are commented out by default):
_socket socketmodule.c
# Socket module helper for SSL support; you must comment out the other
# socket line above, and possibly edit the SSL variable:
#SSL=/usr/local/ssl
_ssl _ssl.c \
-DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
-L$(SSL)/lib -lssl -lcrypto
2) ran ./configure; make; make install
3) verified that I can access socket.ssl:
# python2.5
Python 2.5.2 (r252:60911, May 19 2008, 14:23:27)
[GCC 4.1.2 20070626 (Red Hat 4.1.2-14)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> socket.ssl
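As an extra sanity check beyond importing socket.ssl, you can fetch a page over HTTPS with httplib, which only works when the interpreter was built with SSL support. Here's a quick sketch using www.python.org as an arbitrary target:

#!/usr/bin/env python
# Fetch a page over HTTPS; this fails if the interpreter lacks SSL support.
import httplib

conn = httplib.HTTPSConnection('www.python.org')
conn.request('GET', '/')
response = conn.getresponse()
print response.status, response.reason
conn.close()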
That's it. I'm not sure why it has to be so non-intuitive, though.
Thursday, May 15, 2008
Encrypting a Linux root partition with LUKS and DM-CRYPT
* Boot off a Live CD; I used the Fedora Core 9 Preview
* Find out which disk is which; for me /dev/sda was the external USB drive and /dev/sdb was the internal drive
sfdisk -d /dev/sdb | sfdisk /dev/sda
pvcreate --verbose /dev/sda2
vgextend --verbose VolGroup00 /dev/sda2
pvmove --verbose /dev/sdb2 /dev/sda2 # This takes ages
vgreduce --verbose VolGroup00 /dev/sdb2
pvremove --verbose /dev/sdb2
* Change the partition type to 83 for /dev/sdb2:
fdisk /dev/sdb
* Here is where you get to choose the passphrase that will protect your partition:
cryptsetup --verify-passphrase --key-size 256 luksFormat /dev/sdb2
cryptsetup luksOpen /dev/sdb2 cryptroot
pvcreate --verbose /dev/mapper/cryptroot
vgextend --verbose VolGroup00 /dev/mapper/cryptroot
pvmove --verbose /dev/sda2 /dev/mapper/cryptroot # This takes ages
vgreduce --verbose VolGroup00 /dev/sda2
pvremove --verbose /dev/sda2
mkdir /mnt/tmp
mount /dev/VolGroup00/LogVol00 /mnt/tmp
cp -ax /dev/* /mnt/tmp/dev # I said no to overwriting any files
chroot /mnt/tmp/
(chroot) # mount -t proc proc /proc
(chroot) # mount -t sysfs sysfs /sys
(chroot) # mount /boot
(chroot) # swapon -a
(chroot) # vgcfgbackup
For the initrd, the blog mentions /etc/sysconfig/mkinitrd as a file. CentOS has a directory there instead; I tried adding their suggested settings as a file inside that directory, and also moving the directory out of the way and creating the file as they suggested. Both approaches failed, so I ran the following command instead:
(chroot) # mkinitrd -v /boot/initrd-2.6.18-53.el5.crypt.img --with=aes --with=sha256 --with=dm-crypt 2.6.18-53.el5
Now we need to modify the initrd so that it decrypts the partition at boot time.
(chroot) # cd /boot
(chroot) # mkdir /boot/initrd-2.6.18-53.el5.crypt.dir
(chroot) # cd /boot/initrd-2.6.18-53.el5.crypt.dir
(chroot) # gunzip < ../initrd-2.6.18-53.el5.crypt.img | cpio -ivd
Now, we need to modify init by adding the following lines after the line which reads “mkblkdevs” and before “echo Scanning and configuring dmraid supported devices.”:
echo Decrypting root device
cryptsetup luksOpen /dev/sda2 cryptroot
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure VolGroup00
Copy cryptsetup and lvm into the initrd so they are available at boot; the blog doesn't mention this step, but I'm sure it's needed:
cp /sbin/cryptsetup bin/
cp /sbin/lvm bin/
Compress the new initrd
find ./ | cpio -H newc -o | gzip -9 > /boot/initrd-2.6.18-53.el5.crypt.img
Modify grub.conf: copy the grub entry for the current kernel and change it as follows:
title Centos Encrypted Server (2.6.18-53.1.4.el5)
initrd /initrd-2.6.18-53.el5.crypt.img
Unmount the filesystems in the chroot and exit:
cd /
umount /boot
umount /proc
umount /sys
exit
NOTE: Don't upgrade the kernel without upgrading the initrd and grub.conf.
Reboot and test :)
After you have the encryption set up, you can find out information about it (such as the cipher used) via this command:
# cryptsetup luksDump /dev/sda2
LUKS header information for /dev/sda2
Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 2056
MK bits: 256
MK digest: af 2e e6 39 3e 79 60 bb 4a 2b 33 05 1c 86 3a 83 bc a0 ef c1
MK salt: 79 b2 13 53 6f 52 72 a1 b5 3d dc d3 72 cd d6 f4
e3 25 3c 6e 08 00 f3 1d 44 1e 90 47 bc 43 e7 07
MK iterations: 10
UUID: 721abe52-5122-447b-8ed0-5ca3b2b32366
Key Slot 0: ENABLED
Iterations: 247223
Salt: 86 c7 53 6a 13 a9 77 81 89 ec 90 b3 e5 6a ea 8d
da 0c 6f ad ec 3e 3c 47 2d 6e 5f 59 28 4e 7c 63
Key material offset: 8
AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Thursday, May 08, 2008
Notes from the latest SoCal Piggies meeting
Monday, May 05, 2008
Guido open sources Code Review app running on GAE
The code for Code Review is part of a Google Code project called Rietveld. I haven't looked at it yet, but I'll certainly do so soon, just to see the master's view on how to write a GAE application.
Ruby to Python bytecode compiler
You know, it's crazy that Python and Ruby fans find themselves battling so much. While syntax is different, this exercise proves how close they are to each other! And, yes, I like Ruby's syntax and can think much better in it, but it would be nice to share libs with Python folk and not have to wait forever for a mythical VM that runs all possible languages.
Tuesday, April 29, 2008
Special guest for next SoCal Piggies meeting
BTW, I am putting together a Google code project for mock testing techniques in Python, in preparation for a presentation I would like to give to the group at some point. I called the project moctep, in honor of that ancient Egyptian deity, the protector of testers (or mockers, or maybe both). It doesn't have much so far, but there's some sample code you can browse through in the svn repository if you're curious. I'll be adding more meat to it soon.
Anyway, if you're a Pythonista who happens to be in the L.A. area on Thursday, please consider attending our meeting. It will be lots of fun, guaranteed.
Tuesday, April 22, 2008
"OLPC Automated Testing" project accepted for SoC
Thursday, April 17, 2008
Come work for RIS Technology
Open Source Tech Top Guns Wanted
Are you a passionate Linux user? Are you running the latest Ubuntu alpha release on your laptop just because you can? Are you wired into the latest technologies -- things like Amazon EC2/S3 and Google App Engine? Are you a virtuoso when it comes to virtualization (Xen/VMware)?
Do you program in Python? Do you take hard problems as personal challenges and refuse to give up until you solve them?
RIS Technology Inc. is a rapidly growing Los Angeles-based premium managed hosting provider that hosts and manages internet applications for medium to large size organizations nationwide. We have grown consistently at 100% each of the past four years and are currently hiring for additional growth at our corporate operations center near LAX, in Los Angeles, CA. We have immediate openings for dedicated and knowledgeable technology engineers. If the answer to the questions above is YES, then we'd like to extend an invitation to interview with us.
We are an equal opportunity employer and have excellent benefits. We realize that one of the main things that makes us excellent is the people we choose to work with. We look for the best and brightest, and our goal is to make work less "work" and more fun.
Wednesday, April 16, 2008
Google App Engine feels constrictive
Also, I was talking to Michał about rewriting the Cheesecake service to run on Google App Engine, but he pointed out that cron jobs are not allowed, so that won't work either... It seems that with everything I've tried with GAE, I've run into a wall so far. I know it's a 'paradigm change' for Web development, but still, I can't help wishing I had my favorite Python modules to play with.
What has your experience been with GAE so far? I know Kumar wrote a cool PyPI mirror in GAE, but I haven't seen many other 'real life' applications mentioned on Planet Python.
Friday, April 11, 2008
Ubuntu Gutsy woes with Intel 801 graphics card