
Archive for April, 2010

Ubuntu 10.04 Lucid Lynx

April 29th, 2010

Today, Canonical will release Ubuntu 10.04 LTS (long term support). I’m usually not so enthusiastic about new Ubuntu releases, but this time it’s different. They added some sweet features (well, at least for me) which didn’t get proper publicity. This is not a comprehensive review of the new features. It’s about new features I find cool or bad enough to write about, from a desktop user’s point of view.

First, I must say I love Ubuntu. In each release they turn some of the most annoying tasks (for linux newbies) into trivial and intuitive ones. It has great documentation, community support and amazing software package repositories. It is also widely supported by third party vendors. Second, they usually add performance boosts and cool new features with new releases. Third, you can order official CDs for free. What’s not to love?

So, what do we get this time? As always with ‘LTS’ releases, three years of support (bug/security fixes). A fresh new beginner’s getting-started manual which looks very promising. Some crap as well: a new look and feel and social network integration. As if it’s that hard to change the look or use an all-in-one social network client…

Performance boosts. First, boot speed improvement. They already made a big leap from 9.04 to 9.10, and now again? Sounds delicious. “Super fast” boot for SSD based machines such as netbooks. Sounds very delicious. Second, faster suspend/resume for your netbook that will “extend battery life”. Excuse me for being skeptical, but come on… improved speeds are always good, but claiming it will save battery? I don’t buy it.

Ubuntu One enhancements. I never got the deal around Ubuntu One. It’s supposed to be a personal cloud that keeps your files, notes, bookmarks and contacts on the net, but we’ve had such services for a long time (for example Gmail’s contacts which can be synced to your mobile, Dropbox file storage, or Delicious bookmarks). Anyway, the new enhancements are better desktop integration and a new Music Store. For me, they’re both useless, but I guess Canonical deserves its chance to fight Apple’s music store. Plus, it’s DRM-free.

Software Center 2.0. Supposedly a better interface for software installation and maintenance. I haven’t seen this one yet, but it sounds just like a GUI facelift. The underlying software deployment mechanisms stay the same (apt/ppa repositories).

The sweet features I mentioned in the prologue: the inclusion of libimobiledevice in the official repositories. This is a software library that supports iPhone, iPod Touch and iPad devices. Programs built on this library provide filesystem access, music/video sync, internet tethering, app installation, springboard icon management, gnome integration, and much more! I’ve no idea how it got such lousy publicity, but for me that’s the real killer app!

EDIT: For now only version 0.9.7 of libimobiledevice is in the repositories. That means only music sync works out of the box. It’s a shame. Two weeks ago, I asked the official maintainer of the packages to make packages for 1.0.0, and I thought he told me it would be included in the Lucid release, but I misunderstood him. He actually told me that it was too close to Lucid’s release for inclusion. I apologize for the (partially) wrong information. Anyhow, one can still build 1.0.0 from sources and use it. If there’s enough demand I’ll write a How-To guide. Leave a comment or send mail if you’re interested.
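
Until then, the build itself is the usual autotools routine, roughly as follows (a sketch; the tarball name and dependency set are illustrative, check the project’s site for the real ones):

sudo apt-get build-dep libimobiledevice   # build deps of the packaged 0.9.7 are a good start
tar xjf libimobiledevice-1.0.0.tar.bz2    # the 1.0.0 source tarball
cd libimobiledevice-1.0.0
./configure
make
sudo make install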

Paradoxes, self reproducing code, and bash

April 26th, 2010

I was always fascinated by paradoxes. They are just shamelessly out there, messing with our minds, sending us one message: we’re logically irresolvable, don’t f*ck with us. As a child I really liked this one:

The statement below is true.
The statement above is false.

This is the classical liar’s paradox. Each statement alone can be either true or false, but put together they can be neither. The root cause of the paradox is its self-reference. Many philosophers believed that these kinds of paradoxes could be eliminated by taking all self-referencing expressions out of the language, such as the word “this”, or in our case “below” and “above”, which reference each other so that each indirectly references itself.

Then came Quine. Now, I’m not talking about the ’93 Shaolin monk, Kwai Chang Caine from “Kung Fu: The Legend Continues” (am I the only one who sees the resemblance??). I’m talking about Willard Van Orman Quine. He studied indirect self-reference and came up with a famous paradox known today as (surprise, surprise) Quine’s paradox. The paradox demonstrates that it’s impossible to eliminate all such expressions without “severely crippling” the language.

As a tribute to Quine’s work, a special group of computer programs was named after him. These programs do one thing: print their own source code, hence “self reproducing”. What are they good for? Practically nothing (unless you are a virus/worm maker), but they are fun and challenging. Quines can be written (for a fact) in any language that can output any computable string. If you ever studied computability, you’re supposed to understand what that means. Otherwise, it basically means all computer languages you’ve ever heard of.

So, before you see how it can be done, if you consider yourself a programmer, I’d advise you to take a moment and try writing one. Just write the simplest program you can, in your favorite language, no restrictions whatsoever, that takes no input and prints out its exact source code (without reading its source file at runtime).

On this page, there are examples of quines in many different languages. Some of them use special language/compiler commands; some just use the basic method of storing the source code as a string and printing it in a way that outputs the string along with the commands used to print it (if it still sounds mysterious, you can find a relatively readable C quine on wikipedia).

Being a unix/linux system engineer at the time I found out about quines, I chose bash. My first bash quine attempt was:

cat $0

Put this into a file and run it with bash. The output will be exactly its source: “cat $0”. This is, however, cheating. I exploited the fact that bash is a scripting language (the source isn’t compiled). So here is another one which is fine by me:

echo $BASH_COMMAND

A lot of somewhat tedious shell quines, which I’m not going to explain, can be found here. After checking them out, it wasn’t that exciting anymore.
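
For the record, here is what the store-the-source-as-a-string method looks like in bash. \047 is the octal escape that bash’s printf expands into a single quote, which lets the string quote itself:

s='s=\047%s\047; printf "$s" "$s"\n'; printf "$s" "$s"

Put it in a file, run it with bash, and diff the output against the file.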

Then I thought to myself, why just print its source? Why not modify/execute its source? And so I came up with a new challenge: “The pid changer”. The challenge is to write a script that changes its own process id (re-executes itself), without reading its source file. To make the restriction a little more effective and prevent cheating: the script takes no input and MUST NOT read or open any file (except executing operating system commands). If you find a loophole in my definition that allows you to cheat, I still won’t accept it.

It took me some time, but eventually I came up with a solution. It’s based on a really neat bash feature called function exporting. Here is my solution. Notice: it’s not the quine itself, it’s the quine loader. The quine itself is the quine() function:

#!/bin/bash

quine()
{
kill -9 $PPID                                        # kill the shell that spawned us (see below)
echo Quine, my process id is $$
[ $num -gt 0 ] && num=$(( num-1 )) && bash -c quine  # respawn in a fresh shell, new pid
}

export num=10    # stop counter, inherited by every child shell
export -f quine  # the trick: child shells inherit the function itself
bash -c quine

Let’s analyze. First, I define a function called quine. It kills its parent process, outputs its own process id, and then checks the value of $num (which hasn’t been defined yet). If it’s greater than 0, it decreases it by one and executes a new shell with “quine” as the command string. Note that quine is not the name of a file.

Then, the loader assigns the value 10 to the “num” variable and exports it. Exporting means that if a new shell is spawned, it will also know the variable $num and export it too. Then, the loader exports the function quine (that’s the trick here) so the new shell will also know quine(). Finally it executes a new shell with “quine” as the command string; the shell determines it’s a function name and calls it.
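
If the function exporting trick is new to you, here it is in isolation (a tiny standalone sketch):

#!/bin/bash
greet() { echo "hello from pid $$"; }
export -f greet   # the function's body travels via the environment
bash -c greet     # the child shell never saw this file, yet it runs greet()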

The parent killing isn’t necessary for the quine, but without it the script would act as a fork bomb. Well, a linear one, but still a fork bomb. The num variable, as you probably figured out, works as a stop counter and isn’t strictly necessary either.

I hope you enjoyed this post and learned something new. I sure did :)

Man in the middle with TLS/SSL

April 24th, 2010

The man in the middle attack (aka MITM) is a very famous and well known network attack. Lately I found myself playing with it, turning my theoretical knowledge into practical methods (on my own computers of course). I’m going to try to explain the theory, as well as the practical methods I tried, including attacks on TLS/SSL. Is it another “how to perform a man in the middle attack” tutorial? Not at all. It’s not about attacking, it’s about what’s behind it, the problems I had and my solutions. Prerequisite knowledge: basic networking and understanding of TCP/IP.

Disclaimer: I do not in any way encourage you to do any kind of illegal act. All the information provided is for educational purposes only. Everything you do is on your own responsibility.

Introduction to MITM (skip if you already know)
The concept: A wants to communicate with B. Attacker C gets in the middle so all communication goes through him. Once he’s in that position he can either passively eavesdrop on the communication or actively change it. Obviously it’s a security threat.

How is it done? For the purpose of this post, let’s assume C has physical access to A’s LAN (an ethernet or wireless connection, for example), and they’re both on the same hub. Each packet that gets out of A is transmitted as a set of electric signals. All communication devices connected to the hub sense those signals and convert them into data. Then, each device handles the data, starting from layer 2 (usually ethernet). Layer 2 checks if the packet is addressed to the device according to the destination MAC header (checking whether it equals the device’s MAC or the broadcast address).

This is the very first check and has no security mechanism involved, so it’s very convenient for C to start the attack here. First, he needs to convince A that he is B, but only on layer 2. He can do it by simply transmitting the message “I’m B”. Easy as that. Then A will address his packets to C’s MAC, believing it’s B’s. Of course the real B might send “I’m B” as well; in that case C just sends “I’m B” again. C needs to be a little more aggressive and it will stick. In the same manner, C convinces B that he is A. This process is called ARP poisoning, and can be done with arpspoof, which is part of the dsniff package.
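
For example, the classic two-way poisoning looks something like this (addresses are made up for illustration; -t marks the machine being poisoned, the last argument is the host we claim to be):

arpspoof -i eth0 -t 10.0.0.5 10.0.0.1 &   # tell A (10.0.0.5) that we are B (10.0.0.1)
arpspoof -i eth0 -t 10.0.0.1 10.0.0.5 &   # tell B that we are A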

Once done, all communication between A and B goes through C, completely transparently to them. C can read the packets, forward them unchanged or modify them on the fly. Packet sniffing can be done with wireshark. Injections/password stealing/etc can be done with ettercap. So far nothing new under the sun.

How TLS/SSL works (skip if you already know)
TLS/SSL tries to eliminate this security threat (along with other threats I haven’t mentioned) by providing a secure tunnel between A and B. All data in the tunnel is signed and encrypted. That way, communication might still go through C but he won’t be able to understand the data (because it’s encrypted). Any attempt to tamper with (modify/retransmit) the data or its signature will be detected. I won’t get into the cryptography behind it, but it’s practically impossible for C to find A’s or B’s private keys, which are needed to decrypt/sign the data.

However, there is a flaw. A and B must know each other’s public keys. The public keys are not confidential because the private keys cannot be derived from them. So, the TLS/SSL protocol begins with an unencrypted public key exchange (as the keys are needed to start the encryption). At that moment attacker C can modify the keys, sending B: “Hi, I’m A, this is my public key” but actually sending his own public key instead of A’s, and doing the same towards A (faking B’s public key). If A and B don’t know the keys have been swapped, they’ll continue as usual. What happens is the creation of two separate secure tunnels, A-C and C-B. C can read/modify the data with his own private key, then sign and encrypt it again on the way to the other peer.

To prevent this attack (SSL man in the middle), A and B must either know each other’s public keys before they connect, or have a reliable authentication system that can verify the public keys’ authenticity. In the real world, it’s impractical for one to magically know all the public keys of all computers he would ever possibly communicate with. Especially keys that don’t exist yet. So a reliable authentication system must be used. SSL uses a public key certificate system. In this system, the problem is narrowed down to knowing a limited set of computers’ public keys (=trusting those computers), which is practical.

Those trusted computers are signing authorities. They sign pairs of public key and identity (aka certificates). Now, when A receives the message “I’m B, this is my public key”, it will be followed by a certificate, signed by someone A trusts, let’s call it a CA (certification authority). If the public key is modified on the way by an attacker, it won’t match the public key in B’s certificate. If the certificate is modified as well, A will know it has been tampered with (a cryptographic consequence of knowing the CA’s public key).

What I described is a rough simplification of how it really works, but these are the concepts behind it. Also, it’s important to mention that while certificates may be a good solution for servers and web sites, they don’t suit clients so well, because clients usually have no constant IP address; besides, would you pay yearly for a certificate? For most people, publishing their public key on their website/blog/email is more than enough. Therefore, today’s client-server authentication is usually asymmetric. For example, when you log into your bank account on the web, your browser (hopefully) authenticates that it’s really your bank’s website using the bank’s certificate, while the bank authenticates your identity using some sort of username/password combination.

The juicy part
Having written all that, you now understand that TLS/SSL provides good confidentiality and data integrity if implemented correctly. I’m going to try an SSL man in the middle attack to check whether the client implements SSL correctly, meaning, how it handles data tampering. The client will be the “google talk” application on an (emulated) android mobile, connected to my home LAN. As I wrote in “Faking the Green Robot – Part 1“, it establishes a connection to mtalk.google.com, port 5228.

First, I need to perform the non-SSL man in the middle methods to gain control over the communication between the mobile and my home router (which is connected to the outside world). Then, I need a platform that acts as an SSL server towards the client, establishes a secure connection with it, decrypts the data sent by the client, establishes a new SSL connection with google’s server, encrypts the data again and sends it to the server. When the server responds, the platform should handle the data the same way, in reverse.

My first choice was ettercap, which is very common. I like it because it’s one tool that supposedly does everything for you, and it doesn’t affect layers 3 and 4. Meaning, when it recreates the packets as man in the middle, only the MAC addresses (and the SSL layer in our case) are modified. It’s an important feature (you’ll soon find out why), and it’s only possible when the same application is responsible both for packet forwarding and for doing the SSL stuff.

I started with arp poisoning my LAN and some checks on the computer emulating the android mobile. Clear text data was going through my attacking computer, I was able to see everything, and when I browsed to a secured website (https), the browser warned me that the certificate couldn’t be validated (meaning ettercap performed the man in the middle, acting as a server and presenting its own certificate). So everything was ready for the real test.

I started the Talk app on the android, and was amazed to see that the app was working! No warnings whatsoever. However, I soon found out that ettercap’s big advantage (doing everything hidden in the background for me) became a disadvantage, as I couldn’t decrypt the data with ettercap’s key, and I hadn’t the slightest idea what went wrong. It turned out that ettercap just forwarded those packets untouched. I tried configuring it to dissect https on port 5228 (in addition to 443) but still no luck. After googling for answers, and a quick inspection of ettercap’s source code for hardcoded 443s (didn’t find any, except the default for https dissection which I changed via the configuration file), I came to the conclusion that it just doesn’t work and I don’t know why. Also, it provided me with no useful diagnostic information, so I decided to try another tool.

Google gave me webmitm, which is used together with arpspoof and dnsspoof, all included in the dsniff package mentioned before. These tools work differently than ettercap. Arpspoof forces (on layer 2) all traffic to go through the attacker’s computer, but no tool is responsible for general packet forwarding, so unless the operating system is configured to forward the packets, they get “stuck” and discarded. The attack focuses on web man in the middle. Generally, when a web browser visits a url, a DNS request is sent in order to translate the hostname to an ip address. Dnsspoof listens for those requests and returns a fake response: “The ip address of url X is the attacker’s ip”. Now the target will address its packets (on layer 3) to the attacker’s computer. Then comes the role of webmitm: it listens on ports 80 (HTTP) and 443 (HTTPS), acting as man in the middle, forwarding the packets to the real servers. How does it know whom the packet was originally addressed to? I’m glad you asked. Unlike ettercap, which keeps layer 3 untouched and has no such problem, it reads the “Host:” header from the HTTP request and redirects accordingly.
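
Roughly, that part of the dsniff setup looks like this (addresses and file names are illustrative):

echo 1 > /proc/sys/net/ipv4/ip_forward           # let the kernel forward the redirected traffic
echo "10.0.0.66 www.example.com" > spoof_hosts   # hosts-file format: our ip, victim hostname
dnsspoof -i eth0 -f spoof_hosts                  # answer matching DNS queries with our ip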

Webmitm wasn’t useful, for several reasons. First, it supports only ports 80 and 443 (hardcoded). I changed the source code to use port 5228 instead of 443. After I got it compiled (which wasn’t a very pleasant experience) I tried using it, but it didn’t know where to forward the packets because there was no “Host:” header. I even tried modifying the source to have a “default” host, but it was buggy and crashed. I guess my C skills are not as good as they used to be… anyway, back to square one.

What else? I found sslstrip, but it attacks web browsers, so it’s no good for me. All I need is something that listens on port 5228, talks SSL, takes the output and talks it to another SSL server on port 5228. How hard can that be?!! Umm… when I say it like this, it seems like all I ever really needed was a netcat that supports SSL. I found new candidates:

Stunnel. At first glance it looked too complicated to configure, with all the inet based service wrapping and stuff. It has no flags that do exactly what I need… so there was no second glance. I moved on.

SSL Capable NetCat. A very nice perl based utility. I managed to get it listening as a TCP server and forwarding to an SSL connection, but no SSL – SSL forwarding. When I tried using it as an SSL server (-c), the connection made by the android was always killed at a very early stage of the (SSL) handshake. I have no idea why. I tried changing the SSL options in the source (well documented here) but had no luck.

netcat SSL. As they say, the third time’s the charm. This C based utility has built-in forwarding (-x) that got me to exactly the same place SSL Capable NetCat did (only TCP – SSL forwarding), but unlike it, its SSL server worked with the android client. So, all I needed was a few shell redirections:

Attempt 1:
# nssl -l 5228 | nssl mtalk.google.com 5228
What’s wrong here? The forwarding works only in one direction, but it’s easily fixable.

Attempt 2:
# mknod tmp_pipe p
# nssl -l 5228 0< tmp_pipe | nssl mtalk.google.com 5228 1> tmp_pipe
What’s wrong here? Forwarding-wise nothing, but the android rejected nssl’s built-in certificate.

Attempt 3:
# nssl -l 5228 -f faked_certificate.crt 0< tmp_pipe | nssl mtalk.google.com 5228 1> tmp_pipe
Still, the android rejected the (self-signed) certificate I made (using openssl), which looks just like the original but has a different issuer.
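
For reference, such a look-alike self-signed certificate can be cooked up with openssl along these lines (a sketch; the subject fields are illustrative, not the exact command I used):

openssl req -x509 -newkey rsa:1024 -nodes -days 365 \
        -subj "/O=Google Inc/CN=mtalk.google.com" \
        -keyout fake.key -out faked_certificate.crt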

I tried a couple more fake certificates, checking different potential failures in android’s security, and I’m glad to report I couldn’t find any. I’m glad, because if I had found one, I wouldn’t feel comfortable publishing it without letting google fix it first… (or maybe I did find one, but I can’t say…). At last, I wanted to make sure my forwarding really works, so I made the android trust my fake certificate (how to do that – in another post). Once done, everything worked perfectly.

I hope you now have a better understanding of how an SSL man in the middle attack works (although in this case SSL worked seamlessly; the fact that I could eventually get the data decrypted was only because the client itself was accessible).

PS

By the time attempt 2 got rejected, I thought it had something to do with the two separate commands (one of them getting closed), so I modified the source of nssl and added support for new SSL-SSL forwarding, which also worked. How’s that for my C skills? :)

Faking the Green Robot – Part 1

April 20th, 2010

Last November, Google Labs released a cool feature called “Green robot icon”. If enabled, it turns the bubbles next to your chat buddies in Google Talk into cute android robots for buddies connected via an android device. It might not be the best thing for android users’ privacy, but other than that I think it’s pretty cool, and I want this icon too!

The only problem: I don’t own an android enabled phone, and even if I did, I’d want that icon appearing even when I’m not connected through it. You might find it really unnecessary, especially because the “green robot” feature is disabled by default and considering all the trouble I went through, but it’s really not about the icon. The icon is a nice benefit, but it’s about the challenge, the educational experience and the adventure. Sometimes I challenge myself with these kinds of things, just to prove it’s doable and I can do it. Before I began, I had no idea how long it would take, whether it was possible or not and whether I had the necessary tools/knowledge, but that’s part of the idea: studying new things along the adventure.

Where do we start? We need to be able to determine success, meaning signing in to gmail as one of our chat buddies, enabling the green robot feature and checking if we appear as a cute android. It’s somewhat problematic to sign in with two different accounts using the same browser. There are many workarounds, like simply using different browsers, or different computers (could be virtual as well). I used another computer, making my testing environment as neutral as possible.

In order to win our green robot, we must make Google Talk believe we are connected through android, and for that purpose we need to understand how it identifies android users. Having no prior knowledge of how android users connect to Google Talk, I made an educated guess based on my knowledge of how websites identify clients: according to the browser. HTTP defines, among other things, how a web browser identifies itself to a web server. It’s done via an HTTP header called “user-agent”. You can check your own user agent here. It’s very common for websites to serve device dependent content based on the user-agent; for example, if you browse this blog from an iphone, the same page will appear differently, optimized for the iphone.
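
For illustration, here is the same trick from the command line. The user-agent string is a period-appropriate android example quoted from memory, not necessarily exact:

curl -A "Mozilla/5.0 (Linux; U; Android 2.1; en-us; Nexus One Build/ERD62) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17" http://www.google.com/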

After setting up the test environment, we need to change our browser’s user-agent string to android’s user-agent string. In Firefox, it can be achieved with an add-on called User Agent Switcher. All we need to do is enter android’s user-agent string (it can easily be found on google) and browse to gmail -> talk. Not surprisingly, I got the mobile web version of Google Talk (“talk gadget”) and I was able to chat, but it didn’t change my icon.

So, my educated guess wasn’t good. Google Talk doesn’t identify android users by the user-agent string, at least not alone. What’s next? Observation. If only we had an android phone and a way to observe what’s going on inside, we could solve this… Is it the end of our little adventure? Not quite yet. You see, Google is awesome. They made an android emulator for developers, and it even works on Linux out of the box. It’s time to get the emulator (Android SDK), and start getting dirty.

After some playing around, I figured out how to work with it and started the emulation. It takes a minute or two until it’s fully loaded. The first time I got it working I was thinking “this android looks pretty cool”, but I soon found out that looking cool alone won’t get me anywhere. Using android’s browser didn’t change the icon, and I had no Google Talk application installed. So I did a little research. Android apps (.apk files) can be installed with the “adb” command that comes with the SDK, and it seems that we need to get “gtalkservice.apk” and “Talk.apk”. I couldn’t find download links to those files, but I found a download link to HTC’s android system image file, which supposedly comes with these apps. So I downloaded it, and examined the file.

“VMS Alpha executable”, announced the “file” command I ran. What?! I expected it to be either a FAT/EXT/ISO9660 or DOS image variant. I tried to mount the file but, as you probably guessed, it failed: “you must specify the filesystem type”. Yes, if only I knew… I googled a little more and found that it’s Cramfs. I tried using “fusecram” to mount the image file but it didn’t work either. So I read a little more and found out that it’s actually not Cramfs, but YAFFS2. I don’t know about you, but that’s way too many filesystems I’ve never heard of for one day. Anyway, in order to support YAFFS mounting, it seemed that I’d have to recompile my kernel, and I wasn’t really in the mood, so I found another utility called unyaffs, which can extract files from YAFFS images. Using unyaffs, I finally got “gtalkservice.apk” and “Talk.apk”. When I tried to install them, I got this error: “Failure [INSTALL_FAILED_MISSING_SHARED_LIBRARY]”. This message led me to a dead end (google-wise).
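
For reference, the extraction and the (failed) install attempt boil down to something like this (image and directory names are illustrative):

mkdir extracted && cd extracted
unyaffs ../system.img          # unpacks the YAFFS2 image into the current directory
adb install gtalkservice.apk   # these are the commands that returned
adb install Talk.apk           # INSTALL_FAILED_MISSING_SHARED_LIBRARY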

My next thought was resolving the missing library failure by copying the library from the extracted files to the emulator, and then I realized: I already have a system image file with everything I need installed! All I need to do is boot the emulator with this system image instead of its default development system image. So I did, and it worked (almost) out of the box. I only had to add “GSM modem support” to enable networking. Mission accomplished. I managed to connect to Google Talk from within the emulator and my icon changed to a cute green robot.
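
That boot boils down to something like the following (the -system flag tells the emulator to use an alternate system image; file and AVD names are illustrative):

emulator -avd my_avd -system system.img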

But honestly, that’s not how I wanted it. It’s not very different from having an android device and just using it with Google Talk. I want to understand what exactly identifies it as an android platform. So I turned to my old friend wireshark (ethereal) to snoop around the network. Here are my conclusions: the “google talk” app first queries the DNS for “mtalk.google.com”. After it gets a response, it establishes a TLS (SSL) connection on destination port 5228. Since it’s a non-standard port for SSL, wireshark didn’t automatically decode the messages correctly, but once I chose “Analyze -> Decode As… -> SSL” I could clearly see the protocol in action. Unfortunately, the protocol’s purpose is to encrypt application data. Meaning, I couldn’t see the data in the messages; I could only see messages, with data I couldn’t understand.

I was actually quite impressed. It takes a relatively large amount of computing resources (=money) on Google’s behalf to establish a secure connection for each user who signs into chat. It serves only one purpose: protecting users’ privacy, and let’s not forget it’s a free service. Way to go Google!

At this point, some people would have stopped. I didn’t. I knew there must be a way in, and if it exists I can find it. Did I succeed? Read on in part 2.

This is how it all started

April 17th, 2010

Hey. I planned this post to be about Linux, FOSS and the iPhone, starting with the sentence “I wasn’t always like this”, telling you guys about my transition from Windows to the Linux desktop, the problems I had back then, and how the Linux desktop is in a whole different place now, where you can plug in Apple’s closed-encrypted-never-designed-to-work-on-linux iPhone and it will work out of the box, as an integral part of your desktop. When I started writing the introduction about how I wasn’t always a linux geek and what I was before, it brought up many childhood memories and great nostalgia, so as written in Pirkei Avot, “Know from where you came” (3:1), I’m going to dedicate this post to my past. The Linux/FOSS/iPhone post can wait for another time.

I was always, as far as I can remember, interested in computers. Beginning with my family’s first computer, an Apple IIc (playing green-black “Karateka“), going through an IBM XT (or AT, I’m not sure), and all the way to today’s computers, my family always provided me with the latest (affordable) technology. I remember the DOS era quite well. Until ’94 all I knew was how to run and play games (meaning, pressing a number and Enter, thanks to my third cousin Itay, who was the computer genius back then, and made batch files that ran our games). There was a piece of paper attached to the screen, correlating each game with its number so we wouldn’t have to remember. Low-tech :). Considering the fact I only learned the A-B-C in ’92, it’s not too bad. Funny thing, but to this day I remember Commander Keen‘s cheat codes by their (meaningless) hebrew keystrokes (“aleph-bet-space, gimel-ain-mem sofit”).

In ’94 I learned DOS. Being 10 years old, I didn’t really care about understanding operating systems, memory management, hardware interrupts etc… I got excited doing cool tricks such as changing the default prompt “C:\>”, making files hidden, coloring the user interface (long live 4DOS), using “arj” to copy games that wouldn’t fit on one diskette, etc. I was my computer’s indisputable master. I was commanding and it was following obediently. I liked that, but it wasn’t nearly as joyful as programming. When I first discovered programming I thought it was the coolest thing ever. The idea that I could communicate with the computer at such a “low” level, and build my own games, my own executables… it was just… magical! For years I had been trying to build computer games from Lego, and finally I had the chance.

Sure, it was only procedural programming, and I had limited understanding of some of the most fascinating things such as graphic libraries, TSRs and the assembly code that made my SoundBlaster play cool sounds. Still, I loved my Borland Turbo Pascal. It was the ultimate creation tool. The possibilities were endless and, in contrast with Lego, I never ran out of building blocks.

Another cool thing you could do back in the day was enhance your games/programs with “intros”, can you remember them? Usually they were attached to game files and shown before the game starts, with cool background midi sound, showing info about the cracker or the BBS of the group that released it. Damn, BBS… the memories just keep flooding me. The ancestor of modern software pirating. I used to connect to those systems, equipped with my 14,400(!) modem and Terminate, and copy games like there was no tomorrow (using the z-modem protocol). Back then, everything was different. Every new idea and concept was exciting. Copying files over telephone lines? Awesome! And my innocence… I still had my innocence…

Uhm, yes. Sorry, I got carried away a little. Anyway, here is an “intro” I captured with dosbox, just for old times’ sake (sorry I couldn’t get the audio working):

I can keep going like that, writing about the evolution of the computing world from my point of view: DOS viruses/anti-virus, Windows 3.11, TVTEL (Israeli online consumer services from ’95), the early days of the Internet (gopher services, Eudora/Trumpet mail clients, the Netscape browser), Linux, Windows 95, MP3, VQF and VIV file formats, the World Wide Web, HTTP, DialUp, Windows 98/ME, ICQ, Napster, ADSL, Cable, File Sharing, Worms, Phreaking, Hacking, Kevin Mitnick, Liraz Siri, Programming Languages, etc…

These are just examples I pulled out of my head, a partial associative list of things I just thought about, and I could honestly write about all of them and never even get to my transition to the Linux desktop (which happened in 2006). So, I guess I’d better not…

I also suddenly feel very old :(


Chatting with the dark side

April 12th, 2010

Sometimes it happens that I prefer being anonymous on the net. I’m not talking about “big brother” conspiracies and how google knows everything about me. It’s probably true, but I’ve nothing to hide from them. I’m talking about situations where I need to reach the darker areas of the net, areas swarming with evil. I don’t usually hang out in those sorts of places, but from time to time I need a piece of information available only there.

Disclaimer: I do not in any way encourage you to do any kind of illegal act. Everything you do is on your own responsibility.

Lately, I found myself seeking information, following a lead to a very suspicious irc server. Yes, irc. The place where all the geek wars occurred in the mid-nineties. Where we had our chats and channel takeovers, where we got banned and k-lined, where we spoke to bots to fulfill our download needs and had a well-defined power hierarchy. Good times indeed. For my non-geek readers, irc stands for Internet Relay Chat. It’s basically a chatting platform that was quite common back in the day, way before ICQ.

It turns out that this protocol is still in use, and I needed to get a Linux client. The first tool I checked was Empathy, Gnome’s instant messaging client, which I already use. It supports irc, but lacks anonymity. Anonymity would mask my IP, so the creeps wouldn’t be able to attack me; plus, no one would be able to connect me to that evil server. Quite useful, huh?

The best anonymity platform (AFAIK) is Tor (The Onion Router). I won’t get into the details of what it’s good and bad for, but it basically creates a virtual tunnel between you and your destination through different computers, keeping your IP hidden from the destination. You still need to make sure you don’t send the destination info about your identity. On irc it’s easy. You just leave “real name” empty and use a different nick from your usual one.

The irc client I chose, XChat, was recommended on Tor’s website. It’s easy to install from the Ubuntu repositories and its graphic UI has an amazing resemblance to mIRC, so I felt at home. Tor installation is quite easy as well; you just have to follow the instructions here. Once it’s installed and ready (make sure you got “Bootstrapped 100%: Done.” in /var/log/tor/log) you can start using it. Configure XChat to use it, as described here, and you’re done!
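
For reference, the gist of the XChat configuration, assuming Tor’s default SOCKS port (menu names per the version I used):

Settings -> Preferences -> Network setup
proxy host: 127.0.0.1
port: 9050
type: Socks5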

Happy chatting.

We don't need no Microsoft education

April 11th, 2010

My college, for some reason, really likes Microsoft. All the workstations run windows, documents are always in office format, software development is done in visual studio, the website used to look normal in IE only, etc…

I expected the computer science department to provide us students with alternatives, to expose us to different platforms, but it didn’t happen. I’m in my last semester and “open source” was never mentioned in any class. I was disappointed when they forced me to use Visual Studio for operating systems class assignments. They got me pissed when they registered an account for me (without permission), using my name, at Microsoft Live@edu, but I got really mad when they announced that official emails would be sent to my new Live@edu address only.

Now, it might sound to you like I’m one of those Microsoft haters, but I’m not. I’ve nothing against them. I just happen to have a favorite email service/operating system/office environment/development toolset that isn’t Microsoft’s. I’m not angry because the products they force us students to use are Microsoft’s. I’m angry because they don’t let us choose. They keep us blindfolded in Microsoft’s realm. I guess they have good intentions; they do give the products for free. I agree that a C# and .NET class should be taught with Microsoft Visual Studio. I can accept that most of the students feel comfortable with the Windows operating system and that the whole computing environment is Microsoft based. I can bear the ActiveX and the lack of Firefox support (as it used to be) on their website, but I won’t tolerate changing my email address, nor working with two different addresses. That’s bullshit. They have no reason to get that deep into my personal life.

Therefore, like I always do, I found a workaround: using gmail with the Live@edu account. I sent the administrators exact instructions, asking them to publish them, but they didn’t. So here are the instructions. I hope they’ll be useful for other students too.

From within Gmail:
Settings -> Accounts and import -> Add POP3 email account
email address: john.doe@mail.mta.ac.il (replace with your username/academic institute mail server)
username: john.doe@mail.mta.ac.il (exactly the same as above)
password: user-password
pop server: outlook.com
port: 995
Always use ssl

Done.
Enjoy using your Gmail :)

My new OpenPGP key

April 10th, 2010

This is my new OpenPGP key (for the next year): F2ED25CE. If you don’t know what that means, it’s probably irrelevant to you…

Oh well, basically you can use it to send me encrypted email/files that only I can read. I can use it to sign my emails/files (so the receiver can verify they were sent by me and weren’t modified on the way). Why would one use it? Use your imagination.
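
If you want to try it, a minimal gpg session looks something like this (file names are just examples):

gpg --recv-keys F2ED25CE                        # fetch my public key from a keyserver
gpg --encrypt --recipient F2ED25CE secret.txt   # now only I can read secret.txt.gpg
gpg --verify signed_file.asc                    # verify a signature I made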

Read more: Pretty Good Privacy, GNU Privacy Guard, FireGPG.

Good week everyone!


Virtualizing Mac OS X on Linux

April 5th, 2010

In my last post I wrote about virtualization. Here I’m going to introduce you to the world of virtualizing Mac OS X (Apple’s operating system) on linux.

Why would anyone want to run Mac OS X in a virtual machine? Many reasons. Some people buy Apple computers but prefer linux as their operating system. However, they still use native OS X applications such as iTunes from time to time. It makes perfect sense to load a virtual OS X instance only when needed. Another reason might be testing new releases of OS X, software updates, etc… but the main reason, I believe, is software developers who want to develop iphone/ipad applications but don’t own an Apple computer.

Before I proceed, I have to say a few things about Apple’s behavior, and some legal issues as well. Imagine that in order to develop a windows application you’d have to buy a computer from Microsoft. You would also have to run the windows operating system and use Microsoft’s development environment only. In order to test your application outside the development environment, you’d also have to register as a Microsoft developer ($99 fee). When you finish developing, you’d have to submit your application to Microsoft’s store, and wait for approval. Microsoft, on its end, isn’t obligated to approve or disapprove your application within a given period. If your application does get into the Microsoft store, they keep the right to pull it whenever they feel like changing policy, and then no one will be able to get your application anymore.

If that sounds right to you, stop reading now. You’re better off at billy’s blog.

Trinity: Acer, AMD and the holy BIOS modders

April 4th, 2010

This story is about three things I despise: bad support, not fully utilizing my computer’s hardware, and injustice. At least it has a happy ending :)

It all started two years ago when I purchased my current computer, an Acer aspire L5100. It looked sexy, was compact, had good specs and was on special sale. The processor is an AMD Athlon 64 X2, which supports 64bit operating systems and hardware virtualization.

So what are these features and why did they matter so much to me? Well, virtualization alone means abstraction of computer resources, meaning the ability to run “secondary” operating systems “inside” the primary one. In other words, you can have, for example, Windows 98, MS-DOS and Windows 7 running simultaneously on one physical computer, each one unaware of the others. Unfortunately, this ability (emulating a physical computer that works independently) consumes a lot of resources, meaning everything becomes slow and unresponsive, sometimes completely unusable. Hardware virtualization implements this ability at the hardware level, off-loading work from the main processor, so everything works faster, and it works even better with a 64bit processor.

So, for an operating systems lover such as myself, these features are quite important because they allow easy deployment of operating systems in those virtual computers described above. Happily, I opened my new computer’s box, plugged everything in, powered it on, wiped the pre-installed vista with ubuntu linux and was ready to test my new capable processor.

When I tried loading a virtual machine (aka VM) I saw a horrible message saying AMD-V is disabled by the BIOS. AMD-V is AMD’s hardware virtualization technology. The BIOS is the first piece of code running when the computer is powered on. It basically detects attached peripherals (cdrom, hard drive, etc…) and starts the operating system boot process. It also controls different hardware related things such as date, heat sensors, fan speed etc…

Changing BIOS settings is very easy. All you need to do is reboot the computer, quickly press a known key (usually DEL or F1) and you get the BIOS menu. Alas! On my BIOS there is no menu for enabling AMD-V!! I checked all the menus, and then I checked again. No such option exists. Could the message I got be wrong? I investigated a little more and found further evidence in the main system log file. The log entry was “kvm: disabled by bios”.

This could mean one of these things (a quick check to tell them apart follows the list):

  1. My processor is not AMD-V capable
  2. My processor is AMD-V capable but BIOS blocks it and there is no menu to change it
  3. Linux kernel has compatibility issues with my BIOS/processor
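
The processor advertises the ‘svm’ flag whether or not the BIOS allows using it, so:

grep -o svm /proc/cpuinfo | sort -u   # prints 'svm' if the CPU itself supports AMD-V
dmesg | grep -i kvm                   # "kvm: disabled by bios" then points at option 2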

Googling a lil’ bit showed that I’m not the only person with this problem, that the problem is probably number 2, and that no one had found a solution. At this point I verified that my BIOS version was the latest (makes sense, since I had just bought the computer) and sent a nice email to Acer support asking for help (here comes the bad support part). After two days(!), I got a response telling me to contact the local reseller I bought it from, and so I did, unwilling to believe they could possibly even understand my email, since their business is all about importing electronics.

Surprisingly, they did understand, and told me I needed to upgrade my BIOS version (duh!). I asked for a newer version but they didn’t have any. I didn’t give up, and sent another email to Acer support describing everything. This time a little more aggressively, pointing out that they were deceiving the public and that I thought it was illegal. I got a response the same day:

Dear Customer, try other versions of bios. your bios Compliant with latest Intel Virtualization Technology spec but attention, the characteristics of the changes say it is compatible, but your computer does not have this technology.

Now, I’m not sure what exactly made me jump to the conclusion that the support guy was a complete jerk: the fact his english makes no sense, the fact he was writing about Intel processors (wtf? you don’t even sell them), or that he just ignored my request for a newer BIOS version. After a ridiculous email exchange in which they told me to try downgrading to all previous BIOS versions (yeah, like that was gonna solve anything), pointing blaming fingers at my reseller and linux, and doing nothing about their faulty ftp server, I decided I’d had enough.

I was very angry and wanted to press charges against them. After some time I let it go, hoping one day they would release a new version. Meanwhile my virtualization needs were not satisfied and things worked painfully slowly. About a month ago I was upgrading my operating system, installing a fresh copy of Ubuntu 9.10 (which is great, btw), and for some reason I remembered this saga. I had high hopes when I checked Acer’s website for new BIOS versions, and then one big disappointment. They hadn’t changed a thing (except their ftp server now works, but I couldn’t care less). I rechecked the old forum posts of people with a similar problem and amazingly I ran into this post:

Today Is All You Peoples Lucky Day! Im the administrator of the bios modding forum www.biosmods.com and i have looked into this bios (R01-B0 version) and am happy to report that the Virtualization option , aswell as CPU And memory overclocking options were hidden by acer. I have unlocked these features and if you want to take the risk , here is my modded bios file…

Can you imagine the excitement?! It was posted six months ago, so I didn’t waste a lot of time in the darkness. Now, as much as I wanted to just download the modified BIOS file and install it, I had to take some precautions. The BIOS is stored on a memory chip, and its integrity is critical. If for some reason the BIOS gets corrupted, it renders the system useless. Nothing will start until it gets fixed or replaced, and the worst part is that it’s impossible to fix it using the system it’s on, because it won’t start…

It means that the upgrade procedure mustn’t be interrupted, that I have to understand what I’m doing because I might have only one chance, that the BIOS file must not be corrupted, and that it should come from a reliable source. So I took the time and studied the materials. biosmods.com seems like a big and decent forum. The guy who introduced himself as the administrator (“1234s282”) is indeed a respectable administrator with many posts. I copied the BIOS file (from biosmods.com; I didn’t trust the link I got in the original post since I couldn’t verify it was the same guy), along with a flashing utility (the act of overwriting the BIOS with a new image file is called “flashing”) and other utilities to make my disk-on-key boot a small DOS operating system.

I’ll avoid the technical details, but flashing involves booting a DOS operating system from a disk-on-key (or floppy diskette if you’ve still got one), running the flash utility with the new BIOS file and crossing your fingers. When I did it, everything went smooth, except the utility failed writing the last block. I started thinking it was the last time I’d see my computer working, because the BIOS might be corrupted (it was only partially written to the memory chip). I couldn’t think of anything I could do to save myself at that point, so I crossed my fingers and rebooted.

I still don’t fully understand why it worked; at that point I was just glad it did. I guess it’s because I used a modified version of the existing BIOS, which is basically the same image with only minor changes. I checked my new BIOS and found the menu to enable AMD-V (it was already enabled). I also found a menu that enables/disables BIOS write protection, so I disabled it and did the flashing procedure again, just to make sure. It worked flawlessly.

Finally, with AMD-V enabled, I booted my operating system, once again just to find the same damn log entry: “kvm: disabled by bios”. That’s strange, because people reported this BIOS to work on the same computer model as mine. I checked, and it turns out that you can get the Acer aspire L5100 in different variations. It also turns out that AMD-V can only be used on socket AM2 and not socket 939 (those are just different types of connectors between the processor and the motherboard). Fortunately, mine is AM2. So what else could be wrong?

I had no clue. I wondered what would happen if I actually disabled it in the BIOS. Who knows, maybe the person who modified the BIOS got the enable/disable strings the wrong way around. Believe it or not, it fu*kin worked!! YES! Thank you 1234s282, the work you do is holy!

Finally justice has been done and I got my peace. And the message of this story? Never give up. Don’t be afraid to try new things. Sometimes it’s the most desperate acts that get you what you want.

EDIT: You can also upgrade your bios with the flashrom command, but you need the bios file in a different format from the one supplied by Acer. I can provide my ‘flashrom --read’ output if anyone wants. It has been reported as working.
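
For reference, the flashrom routine is roughly this (file names illustrative; run as root from a running Linux):

flashrom --read backup.rom    # always dump the current, working image first
flashrom --write modded.rom   # then write the modified one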

And why trinity? Because my computer is now whole, a god-like, fully utilized, powerful machine! (I ain’t no christian so I apologize if it’s an inappropriate metaphor, but it sure makes one hell of a title… (got it, hell of a title? (I’m so funny (not))))