Remote support and process manipulation

May 28th, 2010

Sometimes I help friends who are taking their first steps in the Linux world/shell scripting/etc. I’ve found that the best way to give remote technical support is over a “shared” terminal window. It can be done with the screen command (short tutorial here) or with kibitz, which is part of the standard expect package. The basic operation is similar: the side that needs to be controlled creates the “shared” terminal; the other side first connects to the same machine (telnet/ssh) and then “attaches” to the shared terminal.
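
For example, with screen (a minimal sketch, assuming both sides are logged in to the same machine as the same user; “support” is just a session name I picked). The side being helped creates a named session:
# screen -S support

The helper then attaches to that same session from another login:
# screen -x support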

This scenario, although reasonable, has a couple of limitations. First, to support someone, a new shared environment has to be established: you can’t attach to an existing terminal window (unless it’s already inside screen), which is inconvenient. Second, shell access is needed before attaching to the shared terminal (an unnecessary privilege). And third, you can’t connect to the shared terminal as a server, which would have been really useful for NAT/firewall bypassing via reverse ssh tunneling.

How can we address those problems? We need a small program that can duplicate an existing process’ file descriptors (stdin/stdout/stderr) and bind them to either another terminal (pty device) or network sockets. I couldn’t find anything that does exactly this, but I did find other cool stuff that does similar/related things that I’d like to share. I think I’m gonna write my own tool, based on the stuff I found (long live open source!), but till then, you can check these out:

Output redirection of a running process using gdb. This method uses gdb‘s ability to attach to an already running process, freeze its normal execution, run arbitrary code and continue. In this particular method stdout is closed and reassigned to another file. Pretty neat! Here, the same method is used, but instead of closing stdout it is duplicated to a new file descriptor.
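
In a gdb session the trick looks roughly like this (a sketch, assuming the target’s pid is 1234 and /tmp/out.txt is where we want stdout to go; creat() returns the lowest free descriptor, which is 1 right after the close):
# gdb -p 1234
(gdb) p close(1)
(gdb) p creat("/tmp/out.txt", 0600)
(gdb) detach
(gdb) quit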

Retty is a “tiny tool that lets you attach processes running on other terminals”, which means you can reattach to any open terminal window (for example a text editor, mail client, etc.) from any window. The original session is destroyed, though. Unfortunately, retty can’t run on amd64 platforms (like mine) because it injects i386 assembly instructions into running processes. Retty’s functionality can be achieved, again, using the gdb method with this script.

Neercs is very similar to screen but has unique features such as grabbing a process that wasn’t initially started inside it, different window layouts, etc. It is based on libcaca, so when I built it I had to manually get the latest version of libcaca, build it, and then build neercs against it (if anyone needs help with this just leave a comment). Neercs uses a grabbing mechanism similar to retty’s, but with both i386 and x86_64 assembly. From my tests it’s a little less responsive than screen and it has problems passing F keys and Alt-* keystrokes.

Another cool program is CryoPID, a process freezer for Linux. It captures the state of a running process and saves it to a file. The process can be resumed later, even on another machine. Unfortunately, I couldn’t get it compiled on Ubuntu 9.10 and there is no Launchpad package either :(

That’s all. If anyone has better solutions I’ll be glad to hear them.

Sending mail from command line

May 20th, 2010

Recently I wanted to add mail sending functionality to one of my scripts. This script runs on my desktop computer, so no fancy company mail servers/fixed IP/DNS records for me. When I googled it I found many different methods of varying complexity. My need was the simplest you can think of: just to send email. I didn’t care if it’s always from the same address. My solution was to use Ubuntu’s default exim4 mail server with Gmail: exim authenticates with your gmail user/password and the mail is always sent from the same address (user@gmail.com). This is heavily based on this, although a little different.

First I had to install exim4-config, so:
# sudo apt-get install exim4-config

Then I needed to configure exim to work with Gmail:
# sudo dpkg-reconfigure exim4-config

My selections:

  • General type of mail configuration: mail sent by smarthost; no local mail
  • System mail name: localhost
  • IP-address to listen: 127.0.0.1
  • Other destinations for which mail is accepted: (leave blank)
  • Visible domain name for local users: localhost
  • IP address or host name of the outgoing smarthost: smtp.gmail.com::587
  • Keep number of DNS-queries minimal: no
  • Split configuration into small files: no
  • Root and postmaster mail recipient: (leave blank)

Edit /etc/exim4/passwd.client (you can use gedit if you’re not comfortable with vi):
# sudo vi /etc/exim4/passwd.client

Add these lines (replace "user" and "password" with your own):

gmail-smtp.l.google.com:user@gmail.com:password
*.google.com:user@gmail.com:password
smtp.gmail.com:user@gmail.com:password

Finally, update (refresh) the exim configuration:
# sudo update-exim4.conf

That’s about it. To send the contents of /etc/motd as mail (just an example):
# cat /etc/motd | mail -a "FROM: user@gmail.com" -a "BCC: somemail@somedomain.com" -s "This is the subject" recipient@somedomain.com

The "BCC:" is optional of course. If you don’t specify "FROM:" the default is the current user.
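
If a message doesn’t arrive, exim’s log usually tells you why (assuming the default Debian log location):
# tail /var/log/exim4/mainlog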

If you don’t have the "mail" command then just install mailutils:
# sudo apt-get install mailutils

Happy mailing!

USB Multibooting

May 13th, 2010

I recently wanted to add multiboot support to my usb disk-on-key. I mainly use it for casual file transfers and playing music in my car, but I also have BackTrack installed on it, and from time to time I get to use it on a friend’s computer. BackTrack is one of many “live cd” operating systems, meaning you can boot it directly from a cd drive/usb thumbdrive without affecting your hard disk. When you finish you just eject the cd/thumbdrive, reboot, and everything goes back to normal as if nothing happened.

Live operating systems are very useful in many cases, usually when you want to perform operations that you can’t or don’t want to do within your normal operating system, such as virus cleaning (if you’re infected and the virus killed your antivirus), hard disk backups, computer forensics, security assessment, file access, resetting your password, hardware problem diagnosis, checking if your hardware is supported by a new operating system, etc.

Multibooting means having the ability to boot more than one operating system. Unfortunately, most “live” operating system makers just provide you with an image file (iso) you can burn to cd/dvd. At most, they give instructions for installing to a usb thumbdrive, instructions that usually involve formatting it, and even if not, you would still be able to boot only the last installed operating system (if you install more than one).

The method I’m going to present is inspired by the pendrivelinux.com guide. They made a Windows utility to get the job done, but you only get the final result without understanding how it works or customizing it. This post is a step by step guide to making a multiboot-able usb thumbdrive from scratch using Linux. You get to understand how it works, you can customize it, and you don’t have to use a physical drive, as the whole thing can be emulated (very useful for testing). If you do it on a physical drive, make sure to back up your data first!!! Everything worked out of the box on my Ubuntu 9.10. If you have problems with other Linux distros, post a comment and I’ll try to help.

We will use grub4dos as the bootloader. Download it from here. For this tutorial I’m going to use an emulated usb thumbdrive, with its data backed by a file named “usb.dsk”. Whenever I do something with this file and you are using a real usb thumbdrive, the corresponding file is the one representing your thumbdrive, such as /dev/sdd (you can figure it out with the df -h command). Alright, let’s get dirty.

Creating the emulation file (skip if you use a physical device)
We just need to create an empty file of the size we want. It can be done with the “dd” command. dd works with blocks of 512 bytes by default, which explains the following numbers (you can use any size, here are 2GB and 4GB examples):

4GB file creation:
# dd if=/dev/zero count=7892040 of=usb.dsk

2GB file creation:
# dd if=/dev/zero count=4029440 of=usb.dsk
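
By the way, if you don’t want to wait for dd to write gigabytes of actual zeros, you can create the file as a sparse file instead; with count=0 the seek value just sets the final size (2GB example, same block count as above):
# dd if=/dev/zero of=usb.dsk bs=512 count=0 seek=4029440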

Partitioning the emulated/physical device
If you use a physical device and you don’t want to repartition/format it, the only requirement is having a FAT partition. You can check with “fdisk -l” on your device file, for example “sudo fdisk -l /dev/sdd”. Otherwise, keep reading this section (don’t forget to replace usb.dsk with your device file whenever specified; you may also need to run everything with sudo).

We need to partition our newly created file:
# fdisk usb.dsk

Ignore “you must set” warnings if you get any. The fdisk command needs to know the physical structure of our emulated device, so we must tell it manually (if you use a physical device ignore this). As I had no idea what structure I wanted, I just copied it from existing devices I own. 4GB device: sectors=62, heads=125, cylinders=1018. 2GB device: sectors=63, heads=255, cylinders=250. I don’t think those numbers really make any difference.

To tell fdisk the structure (this is for 2GB file but you can change the numbers):

Command (m for help): x

Expert command (m for help): s
Number of sectors (1-63, default 63): 63
Warning: setting sector offset for DOS compatiblity

Expert command (m for help): h
Number of heads (1-256, default 255): 255

Expert command (m for help): c
Number of cylinders (1-1048576): 250

Expert command (m for help): r

Command (m for help):

Now that the device structure is set, use “p” to print the current partition table. The output should look like this (disk identifier may change):

Command (m for help): p

Disk usb.dsk: 0 MB, 0 bytes
255 heads, 63 sectors/track, 250 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x1194fc93

Device Boot      Start         End      Blocks   Id  System

Command (m for help):

If any partitions are shown, delete them with “d”. Make sure you don’t need the data already there, as it will be deleted! Now we need to create a new bootable FAT partition and we’re done. Just follow my commands (once again, the numbers are for 2GB; you can use other numbers, as specified before):

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-250, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-250, default 250): 250

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): c
Changed system type of partition 1 to c (W95 FAT32 (LBA))

Command (m for help): a
Partition number (1-4): 1

Command (m for help): w
The partition table has been altered!

WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.

If everything went smoothly, you now have a single FAT partition the size of your device.

Installing GRUB4DOS to Master Boot Record

To install grub4dos just run bootlace.com <device name> from where you extracted the zip archive:
# grub4dos-0.4.4/bootlace.com usb.dsk

Output should look like:

Disk geometry calculated according to the partition table:

Sectors per track = 63, Number of heads = 255

Success.

Formatting the device

If you use a physical device skip to the last command in this paragraph (“Creating the FAT filesystem”). To format the partition (create a new FAT filesystem on it) we first need to calculate its offset in our file. It is located on the second track. The number of sectors per track is what we defined before (2GB: 63, 4GB: 62) and the number of bytes per sector is 512, meaning we need to skip 512 x 63 = 32256 bytes (in the 2GB case).

Setting up the loop device with correct offset (2GB: 32256, 4GB: 31744):
# sudo losetup -o 32256 /dev/loop0 usb.dsk

Creating the FAT filesystem (for physical device use your partition filename such as /dev/sdd1, sdd’s first partition):
# sudo mkfs -t vfat /dev/loop0

Ignore the warnings about floppy size if you get any.

Copying necessary files

First we need to mount the filesystem (for physical device use your partition filename):
# sudo mkdir /mnt/usbdevice
# sudo mount -o uid=`id -u` /dev/loop0 /mnt/usbdevice

Copy “grldr” from grub4dos extracted archive:
# cp grub4dos-0.4.4/grldr /mnt/usbdevice

We’re almost finished. Your device is now bootable and boots grub4dos. We just need to configure the boot menu. The configuration file is “menu.lst”. It must be placed in the root directory of your device. You can either start with the sample file from grub4dos or use mine:

default 0
timeout 30
splashimage=(hd0,0)/splash.xpm.gz
foreground=d2d1d0
background=537ba7

title Ubuntu 10.04 64bit
find --set-root /ubuntu-10.04-desktop-amd64.iso
map /ubuntu-10.04-desktop-amd64.iso (0xff)
map --hook
root (0xff)
kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-desktop-amd64.iso splash
initrd /casper/initrd.lz

title Ultimate Boot CD 4.11
find --set-root /ubcd411.iso
map /ubcd411.iso (hd32)
map --hook
chainloader (hd32)

In this example I only have two live operating systems: Ubuntu 10.04 64bit and Ultimate Boot CD 4.11. All the files specified should be in the root directory of your device. You can find the splash image I made here, ubcd411.iso here, ubuntu-10.04-desktop-amd64.iso here.

Basically, you can use any bootable iso file you want. However, some customization might be required, as you can see that the two entries aren’t identical. For a better understanding of the grub4dos “map” command you can use this guide. I tried using the same method (load directly from the iso file) for BackTrack3, but it didn’t work. I ended up extracting the BT3 directory from bt3-final.iso and putting it in the root directory. I also extracted the “boot” directory and put it inside the BT3 directory. I then added this entry to menu.lst:

title Backtrack 3.0
root (hd0,0)
kernel /bt3/boot/vmlinuz vga=0x317 initrd=/bt3/boot/initrd.gz ramdisk_size=6666 root=/dev/ram0 rw
initrd /bt3/boot/initrd.gz

It worked. The boot params tell it to load as a framebuffer console (the way I like it). Anyhow, you can get ideas for many more interesting live operating systems from pendrivelinux.com; their menu.lst and splash image can be found in their source archive, MultiBootISOs-Src.zip.

Finishing up

Congrats! Everything is done. Let’s close everything we opened:
# sudo umount /mnt/usbdevice
# sudo losetup -d /dev/loop0

If you want to emulate a real usb device using your usb.dsk use:
# sudo modprobe g_file_storage file=usb.dsk

Wait a couple of seconds and it should be recognized. To stop:
# sudo modprobe -r g_file_storage

You can also use the same file as a raw hard disk with kvm (if you don’t know how to use kvm/qemu, this is definitely not the place to explain it):
# qemu-system-x86_64 -m 512 -hda usb.dsk

Pretty cool, huh?

Cool command line stuff

May 7th, 2010

I made this list of cool things you can do from the shell, especially for desktop users. They all work on my ubuntu and most of them are generic (except maybe apt-get, which works on debian-based distributions). The list is not ordered or categorized. It’s really just a bunch of things a little different from the regular text manipulation one-liners. They are all useful, at least for me.

Make cd/dvd image copy
# dd if=/dev/cdrom bs=1024k of=my_cd.iso

Make cd/dvd image copy to remote computer
I use my old computer’s drive, since mine got broken:
# dd if=/dev/cdrom bs=1024k | ssh remote_computer "cat > my_cd.iso"

Mount existing cd/dvd image copy (iso file)
# mkdir /tmp/my_cd
# sudo mount -o loop my_cd.iso /tmp/my_cd

When you finish don’t forget to:
# sudo umount /tmp/my_cd
# rmdir /tmp/my_cd

Use last parameter from last command with !$
# mkdir -p really/long/path/that/you/hate/typing
# cd !$

Find all files containing a certain text
Let’s say we want to find all files under /usr/include named "*.h" containing _REGEX_H:
# find /usr/include -name "*.h" -exec grep -l "_REGEX_H" {} \;

Convert between character sets
Make sure you have "libc-bin" installed (sudo apt-get install libc-bin)
# curl -L http://www.idown.me | iconv -f windows-1255 -t utf-8

Read sent/received SMS from your jailbroken iphone
Make sure you have "sqlite3" installed (sudo apt-get install sqlite3)
# scp root@your_iphone_ip:/private/var/mobile/Library/SMS/sms.db .
# sqlite3 sms.db “select * from message”

Incrementally backup directory to external hard drive
Note: this command is dangerous as it will delete your destination dir. It’s good if you want a 1:1 copy of a directory and want to be able to sync only changes in the future (file removal from the source directory is also considered a change and will be replicated on the next run). The --modify-window=2 option treats timestamps that differ by up to two seconds as equal, which is useful when you sync from an EXT2/3 filesystem to FAT32 (FAT stores times with two-second resolution).
# rsync -rot --inplace --delete --progress --modify-window=2 source_dir destination_dir

Disable compiz window manager (without killing current desktop session)
# DISPLAY=$DISPLAY metacity --replace &

Enable compiz window manager (without killing current desktop session)
# DISPLAY=$DISPLAY compiz --replace &

Local port forwarding
We will listen on port 4545 and forward to local port 22 (ssh):
# mknod tmp_pipe p
# nc -kl 4545 0<tmp_pipe | nc localhost 22 1>tmp_pipe

Now you can try ssh localhost -p 4545. You can also forward to remote host, just replace localhost with the host you want.

Track your Dominos pizza order
Well, I’ve no idea if it works, as it’s for US residents only, but you can check out the script here.

Faking the Green Robot – Part 2

May 1st, 2010

It has been a long time since Part 1, but I’ve been busy with other stuff. In the previous part, I started analyzing how the “Green Robot” feature can be faked, or more precisely, how Android is identified by google talk servers. The last thing I found out was that Android’s talk application securely connects to gtalk servers. I performed an SSL man in the middle attack as described in this post. I couldn’t find a weakness in Android’s SSL implementation, so I had to modify it to accept my fake certificate. How did I do that?

My first guess was to change something inside Talk.apk or gtalkservice.apk to either disable the check or get its private key. If you are a software developer you’d probably think I must have the original source files for that, but the truth is I don’t. Quick inspection showed that an “apk” file is actually a zip archive that contains some xml files and the executable binary, a “dex” file. Another format I’ve never heard of. What I wanted to do next is called disassembly: turning binary machine code into something more readable by humans.

Unfortunately, my favorite disassembler doesn’t understand dex files, so I googled for help and found two candidates: “dedexer” and “dex2jar“. The first decompiles into an “assembly like format” while the second converts to a Java jar, which can later be decompiled into java source files. So, dex2jar was the obvious pick. I then decompiled the jar file with JAD (JAva Decompiler). Although it wasn’t fully decompiled, it was enough to get around.

After a brief inspection of “gtalkservice” and “Talk”, I realized the SSL authentication mechanism isn’t implemented by either of them; it’s implemented by the Android operating system. So, there must be a place that holds the trusted certificates. I only needed to find it and add my fake certificate (served by the man in the middle server). I scanned the operating system files for suspicious files (remember? we extracted system.img with unyaffs so we have access to these files). Umm… I wonder what /etc/security/cacerts.bks is used for… what are the odds it holds all the trusted Certificate Authority certificates?

To modify this file we need some sort of editor for bks files. Portecle would be a good choice. The bks file is password protected but the password can be obtained here, along with instructions on how to extract the file from a live android operating system, modify it, and push it back to android. None of the instructions worked for me, but the password was correct. Portecle’s usage, as well as the cacerts.bks file structure, is pretty intuitive.
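
If you prefer the command line over a GUI, the same import can probably be done with keytool. This is a sketch, not the exact command I used: it assumes you have the BouncyCastle provider jar (bcprov.jar) lying around, that fake_ca.crt is the certificate your man in the middle server presents, and that PASSWORD is the store password mentioned above:
# keytool -importcert -alias mitm -file fake_ca.crt -keystore cacerts.bks -storetype BKS -providerclass org.bouncycastle.jce.provider.BouncyCastleProvider -providerpath bcprov.jar -storepass PASSWORD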

Once cacerts.bks contains our new certificate we need to push it back into the system.img file the emulator uses. I thought it would be a trivial task but it took me much more time than I expected, much more than actually needed. If you follow google results, you would either try to push the file into the live android system with “adb push”, which doesn’t survive reboot (and you must reboot if you want the new file to be used), or follow one of the many different forum posts about kernel recompiling, using mtd pseudo devices, etc…

All you really need to do is download the YAFFS2 source, go to the “utils” directory, and run “make”. When compilation is done, you’ll get a utility called “mkyaffs2image”, which is exactly what we need. It converts a directory (with its files and subdirectories) into a YAFFS2 image. We just need to rebuild the YAFFS2 image from the extracted system.img directory (with the modified cacerts.bks file in it). After it’s done, rename the output file to system.img and boot the emulator with it. It works seamlessly.
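
The whole round trip looks roughly like this (a sketch; “extracted” is just the directory name I picked for whatever unyaffs produced, and mkyaffs2image takes the directory and the output image name):
# mkdir extracted; cd extracted; unyaffs ../system.img
(replace etc/security/cacerts.bks with the modified file)
# cd ..; mkyaffs2image extracted new_system.img
# mv new_system.img system.img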

So now I had all communications decrypted in wireshark. I expected to find google talk’s official protocol, XMPP, as specified here, but I didn’t. What I got seemed like some sort of object serialization. Some of the strings looked familiar from the XMPP protocol, some did not. I checked the gtalkservice sources again to understand the serialization process. I couldn’t figure it out completely but I got some ideas, such as: the first byte of a packet represents its type as defined in MobileProtoBufStreamConfiguration.java, the second byte is how many bytes are left in the packet, a string always follows its length (in hexadecimal), etc…

I figured out enough to come to these conclusions: port 5228 is not only used for google talk; it’s for other google services as well. The protocol is some “private” extension of XMPP or pure XML, as the smack sources (from gtalkservice) were clearly modified to handle login requests. All objects are serialized. I suspect the android identification is based on the login request ( <login-request … deviceId= …> ).

Therefore, my plan for extending an existing gtalk client with some additional XMPP tags won’t work, because the whole login procedure is apparently different. That’s the end of this adventure for me. Of course, I might be wrong or completely missing something, but for what it’s worth, it was fun and I got quite far, didn’t I?

However, this is the end only for me. For you, there still might be a chance. If I got it right, when a client receives “presence” messages, which are part of XMPP (these are the messages sent to notify, for example, that someone came online), one of the fields is the client type, as specified in PresenceStanza.java. Although it interprets only received messages (again, if I got it right), the XMPP protocol also defines how to send presence messages to the server. Maybe this type of message can be used to trick the server…

Ubuntu 10.04 Lucid Lynx

April 29th, 2010

Today, Canonical will release Ubuntu 10.04 LTS (long term support). I’m usually not so enthusiastic about new Ubuntu releases, but this time is different. They added some sweet features, well, at least for me, which didn’t get proper public relations. This is not a comprehensive review of the new features. It’s about the new features I find cool, or bad, enough to write about, from a desktop user’s point of view.

First, I must say I love Ubuntu. In each release they turn some of the most annoying tasks (for linux newbies) into trivial and intuitive ones. It has great documentation, community support and amazing software package repositories. It is also widely supported by third party vendors. Second, they usually add performance boosts and cool new features with new releases. Third, you can order official CDs for free. What’s not to love?

So, what do we get this time? As always with ‘LTS’ releases, three years of support (bug/security fixes). A fresh new beginner’s getting-started manual which looks very promising. Some crap as well: a new look and feel and social networks integration. As if it’s that hard to change the look or use an all-in-one social networks client…

Performance boosts. First, boot speed improvement. They already made a big leap from 9.04 to 9.10, and now again? Sounds delicious. “Super fast” boot for SSD based machines such as netbooks. Sounds very delicious. Second, faster suspend/resume for your netbook that will “extend battery life”. Excuse me for being skeptical, but come on… improving speed is always good, but declaring it will save battery? I don’t buy it.

Ubuntu One enhancements. I never got the deal around Ubuntu One. It’s supposed to be a personal cloud that keeps your files, notes, bookmarks and contacts on the net, but we already had these services a long time ago (for example Gmail’s contacts, which can be synced to your mobile, Dropbox file storage, or Delicious bookmarks). Anyway, the new enhancements are better desktop integration and a new Music Store. For me, they’re both useless, but I guess Canonical deserves its chance to fight Apple’s music store; plus, it’s DRM-free.

Software Center 2.0. Supposedly a better interface for software installation and maintenance. I haven’t seen this one yet, but it sounds just like a GUI facelift. The underlying software deployment mechanisms stay the same (apt/ppa repositories).

The sweet feature I mentioned in the prologue: inclusion of libimobiledevice in the official repositories. This is a software library that supports iPhone, iPod Touch and iPad devices. Programs built on this library provide filesystem access, music/video sync, internet tethering, apps installation, springboard icon management, gnome integration, and much more! I’ve no idea how it got such lousy public relations, but for me that’s the real killer app!

EDIT: For now only version 0.9.7 of libimobiledevice is in the repositories. It means that only music sync can be done out of the box. It’s a shame. Two weeks ago, I asked the official maintainer of the packages to make packages for 1.0.0, and I thought he told me it would be included in the Lucid release, but I misunderstood him. He actually told me that it’s too close to Lucid’s release for inclusion. I apologize for the (partially) wrong information. Anyhow, one can still build 1.0.0 from source and use it. If there’s enough demand I’ll write a How-To guide. Leave a comment or send mail if you’re interested.

Paradoxes, self reproducing code, and bash

April 26th, 2010

I was always fascinated by paradoxes. They are just shamelessly out there, messing with our minds, sending us one message: we’re logically irresolvable, don’t f*ck with us. As a child I really liked this one:

The statement below is true.
The statement above is false.

This is the classical liar’s paradox. Each statement alone can be either true or false, but put together they can be neither. The root cause of the paradox is its self-reference. Many philosophers believed that this kind of paradox could be eliminated once we take all self-referencing expressions out of the language, such as the word “this”, or in our case “below” and “above”, which reference each other so that each indirectly references itself.

Then came Quine. Now, I’m not talking about the ’93 shaolin monk, Kwai Chang Caine from “Kung Fu: The Legend Continues” (am I the only one to see the resemblance??). I’m talking about Willard Van Orman Quine. He studied indirect self-reference and came up with a famous paradox known today as (surprise, surprise) Quine’s paradox. The paradox demonstrates that it’s impossible to eliminate all such expressions without “severely crippling” the language.

As a tribute to Quine’s work, a special group of computer programs was named after him. These programs do one thing: print their own source code, hence “self reproducing”. What are they good for? Practically nothing (unless you are a virus/worm maker), but they are fun and challenging. Quines can be written (for a fact) in any language that has the ability to output any computable string. If you ever studied computability, you’re supposed to understand what that means. Otherwise, it basically means all computer languages you’ve ever heard of.

So, before you see how it can be done, if you consider yourself a programmer, I’d advise you to take a moment and try writing one. Just write the simplest program you can, in your favorite language, no restrictions whatsoever, that takes no input and prints out its exact source code (without reading its source file upon execution).

On this page, there are examples of quines in many different languages. Some of them use special language/compiler commands; some just use the basic method of storing the source code as a string and printing it in a way that outputs the string along with the commands used to print it (if it still sounds mysterious, you can find a relatively readable C quine on wikipedia).
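
For reference, here’s what the store-and-print method can look like in shell. This is a sketch of the classic printf trick, not taken from that page: \47 is the octal escape for a single quote, which is what lets the string quote itself (the output matches the source up to the trailing newline):
s='s=\47%s\47;printf "$s" "$s"';printf "$s" "$s"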

Being a unix/linux system engineer at the time I found out about quines, I chose bash. My first bash quine attempt was:

cat $0

Put this into a file and run it with bash. The output would be exactly its source: “cat $0”. This is, however, cheating. I exploited the fact that bash is a scripting language (the source isn’t compiled). So here is another one which is fine by me:

echo $BASH_COMMAND

A lot of somewhat tedious shell quines, which I’m not going to explain, can be found here. After checking them out, it wasn’t that exciting anymore.

Then I thought to myself, why just print its source? Why not modify/execute its source? And so I came up with a new challenge: “The pid changer”. The challenge is to write a script that changes its own process id (re-executes itself), without reading its source file. To make the restriction a little more effective and prevent cheating: the script takes no input and MUST not read or open any file (except executing operating system commands). If you find a loophole in my definition that allows you to cheat, I still won’t accept it.

It took me some time, but eventually I came up with a solution. It’s based on a really neat bash feature called function exporting. Here is my solution. Notice: it’s not the quine itself, it’s the quine loader. The quine itself is the quine() function:

#!/bin/bash

quine()
{
    kill -9 $PPID
    echo Quine, my process id is $$
    [ $num -gt 0 ] && num=$(( num-1 )) && bash -c quine
}

export num=10
export -f quine
bash -c quine

Let’s analyze. First, I define a function called quine. It kills its parent process, outputs its own process id, and then checks the value of $num (which isn’t defined inside the function). If it’s greater than 0, it decreases it by one and executes a new shell with the “quine” command string. Note that quine is not the name of a file.

Then, the loader assigns the value 10 to the “num” variable and exports it. The exporting means that if a new shell is spawned, it will also know the variable $num and export it too. Then, the loader exports the function quine (that’s the trick here) so the new shell will also know quine(). Finally, it executes a new shell with “quine” as a command string; the shell determines it’s a function name and calls it.

The parent killing isn’t necessary for the quine, but without it the script would act as a fork bomb. Well, a linear one, but still a fork bomb. The num variable, as you probably figured out, works as a stop counter and isn’t strictly necessary either.

I hope you enjoyed this post and learned something new. I sure did :)

Man in the middle with TLS/SSL

April 24th, 2010

The man in the middle attack (aka MITM) is a very famous and well known network attack. Lately I found myself playing with it, turning my theoretical knowledge into practical methods (on my own computers, of course). I’m going to try to explain the theory, as well as the practical methods I tried, including attacks on TLS/SSL. Is it another “how to perform man in the middle attack” tutorial? Not at all. It’s not about attacking, it’s about what’s behind it, the problems I had and my solutions. Prerequisite knowledge: basic networking and understanding of TCP/IP.

Disclaimer: I do not in any way encourage you to do any kind of illegal act. All the information provided is for educational purpose only. Everything you do is on your own responsibility.

Introduction to MITM (skip if you already know)
The concept: A wants to communicate with B. Attacker C gets in the middle so all communication goes through him. Once he’s in that position he can either passively eavesdrop on the communication or actively change it. Obviously it’s a security threat.

How is it done? For the purpose of this post, let’s assume C has physical access to A’s LAN (ethernet or wireless connection, for example), and they’re both on the same hub. Each packet that leaves A is transmitted as a set of electric signals. All communication devices connected to the hub sense those signals and convert them into data. Then, each device handles the data, starting from layer 2 (usually ethernet). Layer 2 checks if the packet is addressed to the device according to the destination MAC header (checks if it equals the device’s MAC or the broadcast address).

This is the very first check and has no security mechanism involved, so it’s very convenient for C to start the attack here. First, he needs to convince A that he is B, but only on layer 2. He can do it by simply transmitting the message “I’m B”. Easy as that. Then A will address his packets to C’s MAC, believing it’s B’s. Of course the real B might send “I’m B” as well; in that case C sends “I’m B” again. C just needs to be a little more aggressive and it will stick. In the same manner, C convinces B that he is A. This process is called ARP poisoning, and can be done with arpspoof, which is part of the dsniff package.
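
In practice it takes a few commands on C’s machine: enable packet forwarding (so A and B keep communicating), then poison each direction (a sketch; eth0 and the addresses are placeholders, and each arpspoof keeps running, so give each its own terminal):
# sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
# sudo arpspoof -i eth0 -t A_address B_address
# sudo arpspoof -i eth0 -t B_address A_address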

Once done, all communication between A and B goes through C, completely transparently to them. C can read the packets, forward them unchanged or modify them on the fly. Packet sniffing can be done with wireshark. Injections/password stealing/etc can be done with ettercap. So far nothing new under the sun.

How TLS/SSL works (skip if you already know)
TLS/SSL tries to eliminate the security threat (along with other threats I haven’t mentioned) by providing a secure tunnel between A and B. All data in the tunnel is signed and encrypted. That way, communication might still go through C, but he won’t be able to understand the data (because it’s encrypted). Any attempt to tamper with (modify/retransmit) the data or its signature would be detected. I won’t get into the cryptology behind it, but it’s practically impossible for C to find A’s or B’s private keys, which are needed to decrypt/sign the data.

However, there is a flaw. A and B must know each other’s public keys. The public keys are not confidential because the private keys cannot be derived from them. So, the TLS/SSL protocol begins with an unencrypted public key exchange (as the keys are needed to start the encryption). At that moment attacker C can modify the keys, sending B: “Hi, I’m A, this is my public key” but actually sending his own public key instead of A’s, and doing the same towards A (faking B’s public key). If A and B don’t know the keys have been changed, they will continue as usual. What happens is the creation of two separate secured tunnels, A-C and C-B. C can read/modify the data with his own private key, then sign and encrypt it again on the way to the other peer.

To prevent this attack (SSL man in the middle), A and B must either know each other’s public keys before they connect, or have a reliable authentication system that can verify the public keys’ authenticity. In the real world, it’s impractical for one to magically know all the public keys of all the computers he could ever possibly communicate with, especially keys that don’t exist yet. So a reliable authentication system must be used. SSL uses a public key certificate system. In this system, the problem is narrowed down to knowing a limited set of computers’ public keys (=trusting those computers), which is practical.

Those trusted computers would be signing authorities. They sign public key/identity pairs (aka certificates). Now, when A receives the message “I’m B, this is my public key”, it is followed by a certificate, signed by someone A trusts, let’s call him CA (certification authority). If the public key is modified on the way by an attacker, it won’t fit the public key in B’s certificate. If the certificate is modified as well, A will know it has been tampered with (a cryptographic consequence of knowing the CA’s public key).

What I described is a fair simplification of how it really works, but these are the concepts behind it. Also, it’s important to mention that while certificates may be a good solution for servers and web sites, they don’t suit clients so well, because clients usually have no constant IP address; besides, would you pay yearly for a certificate? For most people, publishing their public key on their website/blog/email is more than enough. Therefore, today’s client-server authentication is usually asymmetric. For example, when you log into your bank account on the web, your browser (hopefully) authenticates that it’s really your bank’s website using the bank’s certificate, while the bank authenticates your identity using some sort of username/password combination.

The juicy part
Having all that written, you now understand that TLS/SSL provides good confidentiality and data integrity if implemented correctly. I’m going to try an SSL man in the middle attack to check whether the client implements SSL correctly, meaning how it handles data tampering events. The client will be the “google talk” application on an (emulated) android mobile, connected to my home LAN. As I wrote in “Faking the Green Robot – Part 1“, it establishes a connection to mtalk.google.com, port 5228.

First, I need to perform the non-SSL man in the middle methods to gain control over the communication between the mobile and my home router (which is connected to the outer world). Then, I need a platform that acts as an SSL server towards the client, establishes a secure connection with it, decrypts the data sent by the client, establishes a new SSL connection with google’s server, encrypts the data again and sends it to the server. When the server responds, the platform should handle the data the same way, in reverse.

My first choice was ettercap, which is very common. I like it because it’s one tool that supposedly does everything for you, and it doesn’t affect layers 3 and 4, meaning that when it recreates the packets as man in the middle, only the MAC addresses (and the SSL layer, in our case) are modified. It’s an important feature (you’ll soon find out why), and it can only be done when the same application is responsible for both packet forwarding and doing the SSL stuff.

I started with arp poisoning my LAN and some checks on the computer emulating the android mobile first. Clear text data was going through my attacking computer, I was able to see everything, and when I browsed to a secured website (https), the browser showed me a warning that the certificate couldn’t be validated (meaning ettercap performed the man in the middle, acting as the server, presenting its own certificate). So everything was ready for the real test.

I started the Talk app on the android, and was amazed to see that the app was working! No warnings whatsoever. However, I soon found out that ettercap’s big advantage (doing everything hidden in the background for me) became a disadvantage, as I couldn’t decrypt the data with ettercap’s key, and I hadn’t the slightest idea what went wrong. It turned out that ettercap just forwarded those packets untouched. I tried configuring it to dissect https on port 5228 (in addition to 443) but still no luck. After googling for answers, and a quick inspection of ettercap’s source code for hardcoded 443s (didn’t find any, except the default for https dissection, which I changed via the configuration file), I came to the conclusion that it just doesn’t work and I don’t know why. Also, it provided me with no useful diagnostic information, so I decided to try another tool.

Google gave me webmitm, which is used together with arpspoof and dnsspoof, all included in the dsniff package mentioned before. These tools work differently from ettercap. Arpspoof forces (on layer 2) all traffic to go through the attacker’s computer, but no tool is responsible for general packet forwarding, so unless the operating system is configured to forward the packets, they get “stuck” and discarded. The attack focuses on web man in the middle. Generally, when a web browser visits a url, a DNS request is sent in order to translate the url to an ip address. Dnsspoof listens for those requests and returns a fake response: “The ip address of url X is the attacker’s ip”. Now the target addresses its packets (on layer 3) to the attacker’s computer. Then comes the role of webmitm: it listens on ports 80 (HTTP) and 443 (HTTPS), acting as man in the middle, forwarding the packets to the real servers. How does it know who the packet was originally addressed to? I’m glad you asked. Unlike ettercap, which keeps layer 3 untouched and has no such problem, it reads the “Host:” header from the HTTP request and redirects accordingly.

Webmitm wasn’t useful, for many reasons. First, it supports only ports 80 and 443 (hardcoded). I changed the source code to use port 5228 instead of 443. After I got it compiled (which wasn’t a very pleasant experience) I tried using it, but it didn’t know where to forward the packets because there was no “Host:” header. I even tried modifying the source to have a “default” host, but it was buggy and crashed. I guess my C skills are not as good as they used to be… anyway, back to square one.

What else? I found sslstrip, but it attacks web browsers, so it’s no good for me. All I need is something that listens on port 5228, talks SSL, takes the output and talks it to another SSL server on port 5228. How hard can that be?!! Umm… when I say it like this, it seems like all I ever really needed was a netcat that supports SSL. I found new candidates:

Stunnel. At first glance it looked too complicated to configure, with all the inet based service wrapping and stuff. No flags that do exactly what I need… so there was no second glance. I moved on.

SSL Capable NetCat. A very nice perl based utility. I managed to get it listening as a TCP server and forwarding to an SSL connection, but no SSL-to-SSL forwarding. When I tried using it as an SSL server (-c), the connection made by the android was always killed at a very early stage of the (SSL) handshake. I had no idea why. I tried changing the SSL options in the source (well documented here) but had no luck.

netcat SSL. As they say, the third time’s the charm. This C based utility has built-in forwarding (-x) that took me to exactly the same place SSL Capable NetCat did (only TCP-to-SSL forwarding), but unlike it, its SSL server worked with the android client. So, all I needed was a few shell redirections:

Attempt 1:
# nssl -l 5228 | nssl mtalk.google.com 5228
What’s wrong here? The forwarding works only in one direction, but it’s easily fixable.

Attempt 2:
# mknod tmp_pipe p
# nssl -l 5228 0< tmp_pipe | nssl mtalk.google.com 5228 1> tmp_pipe
What’s wrong here? Forwarding-wise nothing, but the android rejected nssl’s built-in certificate.

Attempt 3:
# nssl -l 5228 -f faked_certificate.crt 0< tmp_pipe | nssl mtalk.google.com 5228 1> tmp_pipe
Still, the android rejected the (self-signed) certificate I made (using openssl), which looks just like the original but has a different issuer.
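
Making such a certificate is quick, by the way (a sketch; openssl prompts for the subject fields interactively, so you can type in whatever the original certificate says):
# openssl req -new -x509 -nodes -days 365 -newkey rsa:1024 -keyout fake.key -out fake.crt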

I tried a couple more fake certificates, checking different potential failures in android’s security, and I’m glad to report I couldn’t find any. I’m glad, because if I did find one, I wouldn’t feel comfortable publishing it without letting google fix it first… (or maybe I did find one, but I can’t say…). At last, I wanted to make sure my forwarding really works, so I made the android trust my fake certificate (how to do that is a subject for another post). Once done, everything worked perfectly.

I hope you now have a better understanding of how the SSL man in the middle attack works (although in this case SSL worked as intended; the fact that I could eventually get the data decrypted was only because the client itself was accessible).

PS

By the time I tried attempt 2 and got rejected, I thought it had something to do with the two separate commands, when one of them gets closed, so I modified the source of nssl and added support for SSL-to-SSL forwarding, which also worked. How’s that for my C skills? :)

Faking the Green Robot – Part 1

April 20th, 2010

Last November, Google Labs released a cool feature called “Green robot icon”. If enabled, it turns the bubbles next to your chat buddies in Google Talk into cute android robots for buddies connected via an android device. It might not be the best thing for android users’ privacy, but other than that I think it’s pretty cool, and I want this icon too!

The only problem: I don’t own an android enabled phone, and even if I did, I’d want that icon appearing even when I’m not connected through it. You might find it really unnecessary, especially because the “green robot” feature is disabled by default, and considering all the trouble I went through, but it’s really not about the icon. The icon is a nice benefit, but it’s about the challenge, the educational experience and the adventure. Sometimes I challenge myself with these kinds of things, just to prove it’s doable and I can do it. Before I began, I had no idea how long it would take, whether it’s possible or not, or if I have the necessary tools/knowledge, but that’s part of the idea: study new things along the adventure.

Where do we start? We need to be able to determine success, meaning sign in to gmail as one of our chat buddies, enable the green robot feature and check if we appear as a cute android. It’s somewhat problematic to sign in with two different accounts using the same browser. There are many workarounds, like simply using different browsers, or different computers (which could be virtual as well). I used another computer, making my testing environment as neutral as possible.

In order to win our green robot, we must make Google Talk believe we are connected through android, and for that purpose we need to understand how it identifies android users. Having no prior knowledge of how android users connect to Google Talk, I made an educated guess based on my knowledge of how websites identify clients: according to the browser. HTTP defines, among other things, how a web browser identifies itself to a web server. It’s done via an HTTP header called “user-agent”. You can check your own user agent here. It’s very common for websites to serve device dependent content based on the user-agent; for example, if you browse this blog from an iphone, the same page appears differently, optimized for the iphone.

After setting up the test environment, we need to change our browser’s user-agent string to android’s user-agent string. In Firefox, it can be achieved with an add-on called User Agent Switcher. All we need to do is enter android’s user-agent string (which can easily be found on google) and browse to gmail -> talk. Not surprisingly, I got the mobile web version of Google Talk (“talk gadget”) and I was able to chat, but it didn’t change my icon.

So, my educated guess wasn’t good. Google Talk doesn’t identify android users by the user-agent string, at least not alone. What’s next? Observation. If only we had an android phone and a way to observe what’s going on inside, we could solve this… is it the end of our little adventure? Not quite. You see, Google is awesome. They made an android emulator for developers, and it even works on Linux out of the box. It’s time to get the emulator (Android SDK), and start getting dirty.

After some playing, I figured out how to work with it and started the emulation. It takes a minute or two until it’s fully loaded. The first time I got it working I was thinking “this android looks pretty cool”, but I soon found out that looking cool alone won’t get me anywhere. Using android’s browser didn’t change the icon, and I had no Google Talk application installed. So I did a little research. We can install android apps (.apk files) with the “adb” command that comes with the SDK, and it seems that we need to get “gtalkservice.apk” and “Talk.apk”. I couldn’t find download links for those files, but I found a download link to HTC’s android system image file, which supposedly comes with these apps. So I downloaded it and examined the file.
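
For reference, installing an apk into a running emulator is a one-liner (standard SDK usage):
# adb install Talk.apk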

“VMS Alpha executable”, announced the “file” command I ran. What?! I expected it to be either FAT/EXT/ISO9660 or some DOS image variant. I tried to mount the file but, as you probably guessed, it failed: “you must specify the filesystem type”. Yes, if only I knew… I googled a little more and found that it’s Cramfs. I tried using “fusecram” to mount the image file but it didn’t work either. So I read a little more and found out that it’s actually not Cramfs, but YAFFS2. I don’t know about you, but that’s way too many filesystems I’ve never heard of for one day. Anyway, in order to support YAFFS mounting, it seemed that I had to recompile my kernel, and I wasn’t really in the mood, so I found another utility called unyaffs that can extract files from YAFFS images. Using unyaffs, I finally got “gtalkservice.apk” and “Talk.apk”. When I tried to install them, I got this error: “Failure [INSTALL_FAILED_MISSING_SHARED_LIBRARY]”. This message led me to a dead end (google-wise).

My next thought was resolving the missing library failure by copying the library from the extracted files to the emulator, and then I realized: I already have a system image file with everything I need installed! All I need to do is boot the emulator with this system image instead of its default development system image. So I did, and it worked (almost) out of the box. I only had to add “GSM modem support” to enable networking. Mission accomplished. I managed to connect to Google Talk from within the emulator and my icon changed to a cute green robot.
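
Booting the emulator with a custom system image is just a flag (a sketch; the path is wherever you keep the extracted/rebuilt image, and depending on the SDK version you may also need to name an avd):
# emulator -system system.img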

But honestly, that’s not how I wanted it. It’s not very different from having an android device and just using it with Google Talk. I want to understand what exactly identifies it as an android platform. So I turned to my old friend wireshark (ethereal) to snoop around the network. Here are my conclusions: the “google talk” app first queries the DNS for “mtalk.google.com”. After it gets a response, it establishes a TLS (SSL) connection on destination port 5228. Since it’s a non-standard port for SSL, wireshark didn’t automatically decode the messages correctly, but once I chose “Analyze -> Decode As… -> SSL” I could clearly see the protocol in action. Unfortunately, the protocol’s purpose is to encrypt application data, meaning I couldn’t see the data in the messages; I could only see messages, with data I couldn’t understand.

I was actually quite impressed. It takes a relatively large amount of computing resources (=money) on Google’s behalf to establish a secure connection for each user that signs into chat. It serves only one purpose: protecting users’ privacy, and let’s not forget it’s a free service. Way to go Google!

At this point, some people would have stopped. I didn’t. I knew there must be a way in, and that if it exists I can find it. Did I succeed? Read on in part 2.

This is how it all started

April 17th, 2010

Hey. I planned this post to be about Linux, FOSS and the iPhone, starting with the sentence “I wasn’t always like this”, telling you guys about my transition from Windows to the Linux desktop, the problems I had back then, and how the Linux desktop is in a whole different place now, where you can plug in Apple’s closed-encrypted-never-designed-to-work-on-linux iPhone and it works out of the box, as an integral part of your desktop. When I started writing the introduction about how I wasn’t always a linux geek and what I was before, it brought up many childhood memories and great nostalgia, so as written in Pirkei Avot, “Know from where you came” (3:1), I’m going to dedicate this post to my past. The Linux/FOSS/iPhone post can wait for another time.

I was always, as far as I can remember, interested in computers. Beginning with my family’s first computer, an Apple IIc (playing green-black “Karateka“), going through an IBM XT (or AT, I’m not sure), and all the way to today’s computers, my family always provided me with the latest (affordable) technology. I remember the DOS era quite well. Until ’94 all I knew was how to run and play games (meaning, pressing a number and Enter, thanks to my third cousin Itay, who was the computer genius back then and made batch files that ran our games). There was a piece of paper attached to the screen, correlating each game with its number so we wouldn’t have to remember. Low-tech :). Considering the fact I’d only learned the A-B-C in ’92, it’s not too bad. Funny thing, but to this day I remember Commander Keen‘s cheat codes by their (meaningless) hebrew keystrokes (“aleph-bet-space, gimel-ain-mem sofit”).

In ’94 I learned DOS. Being 10 years old, I didn’t really care about understanding operating systems, memory management, hardware interrupts, etc… I got excited doing cool tricks such as changing the default prompt “C:\>”, making files hidden, coloring the user interface (long live 4DOS), using “arj” to copy games that wouldn’t fit on one diskette, etc. I was my computer’s indisputable master. I was commanding and it was following obediently. I liked that, but it wasn’t nearly as joyful as programming. When I first discovered programming I thought it was the coolest thing ever. The idea that I could communicate with the computer at such a “low” level, and build my own games, my own executables.. it was just.. magical! For years I had been trying to build computer games from Lego, and finally I had the chance.

Sure, it was only procedural programming, and I had a limited understanding of some of the most fascinating things, such as graphic libraries, TSRs and the assembly code that made my SoundBlaster play cool sounds, and still, I loved my Borland Turbo Pascal. It was the ultimate creation tool. The possibilities were endless and, in contrast with Lego, I never ran out of building blocks.

Another cool thing you could do back in the day was to enhance your games/programs with “intros”, can you remember them? Usually they were attached to game files, shown before the game starts with a cool background midi sound, showing info about the cracker or the BBS of the group that released it. Damn, BBS.. the memories just keep flooding me. The ancestor of modern software pirating. I used to connect to those systems, equipped with my 14,400(!) modem and Terminate, and copy games like there was no tomorrow (using the z-modem protocol). Back then, everything was different. Every new idea and concept was exciting. Copying files over telephone lines? Awesome! And my innocence… I still had my innocence…

Uhm, yes. Sorry, I got carried away a little. Anyway, here is an “intro” I captured with dosbox, just for old times’ sake (sorry I couldn’t get the audio working):

I can keep going like that, writing about evolution of computing world, from my point of view: DOS viruses/anti-virus, Windows 3.11, TVTEL (Israeli online consumer services from ’95), early days of the Internet (gopher services, Eudora/Trumpet mail clients, Netscape browser), Linux, Windows 95, MP3, VQF and VIV file formats, World Wide Web, HTTP, DialUp, Windows 98/ME, ICQ, Napster, ADSL, Cable, File Sharing, Worms, Phreaking, Hacking, Kevin Mitnick, Liraz Siri, Programming Languages, etc…

These are just examples pulled out of my head, a partial associative list of things I just thought of; I could honestly write about them and never even get to my transition to the Linux desktop (which happened in 2006). So, I guess I better not…

I also suddenly feel very old :(
