
Archive for July, 2010

The fine art of human detection (how to fight bots)

July 30th, 2010 10 comments

Throughout the history of computing there have always been programs trying to mimic human actions and behavior. From ELIZA in the sixties to the present day, the task arises in different fields of computer science: artificial intelligence, natural language processing, image processing, etc. A particular type of program is designed to mimic or automate human-computer interactions: they produce inputs for other programs as if those inputs came from a person. In this post I’m going to introduce those programs: how they work, why it is important to detect (and stop) them, how it is done today and some new ideas I have on the subject.

The power of automation can be harnessed, like all powers, for good and evil. On one hand it can save you a lot of time by performing tedious repetitive tasks such as checking news, downloading files, handling events, etc. On the other hand it can be used by advertisers, spammers and solicitors for mass-mailing, mass-registering, mass-spamming, etc. As you can guess, some organizations would like to use such automation while others would like to block it.

Sometimes it is unclear which side is which. For example, if I want to download a file from a service like Rapidshare, it presents me with a waiting screen. The sole purpose of this screen, as well as of the other restrictions imposed on non-paying members, is to motivate people to pay. Using automation, my computer could do the waiting and download the files while I’m away, which is pretty good from a selfish point of view but pretty bad from their business point of view.

Automation programs can generate data at different levels. To better understand this, let’s first examine a typical web interaction: talk-backs. When you talk back or post a comment, you use your browser, keyboard and probably mouse too to select (focus on) the text area designated for this purpose, type whatever you have to say and click “submit” or whatever. What happens under the hood is that your browser sends data to the website or, to be more exact, makes an HTTP request to the web server running at the other end, sending data that includes your talk-back. The web server processes this data and responds according to predefined business logic, such as responding with “thank you” and placing your talk-back in a queue for approval.

The talk-back is sent along with cookies and other data as defined in RFC 2616. Since the interaction is well-defined per site (the protocols, parameter names and everything else used as part of the interaction), it’s fairly easy to make a tool that automatically sends data as if it were a web browser operated by a person. From the web server’s point of view it looks exactly the same. Once the interaction is analyzed and understood (which can easily be done using a proxy or tools like Firebug), a dedicated tool can be built to automate it, for example using the java.net package, or even as a shell script using curl. Please note that these are not hacker utilities; they’re pretty standard building blocks for any web-related application or script.
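To get a feel for how little such a tool needs, here is a minimal sketch in Python (the java.net or curl versions would be just as short). The endpoint URL and form field names are made up for illustration:

```python
from urllib import parse, request

# Hypothetical endpoint and form field names, for illustration only.
def build_comment_request(url, post_id, text):
    # Encode the form exactly as a browser would for an
    # application/x-www-form-urlencoded POST.
    data = parse.urlencode({"post_id": post_id, "comment": text}).encode()
    # A browser-like User-Agent; to the server, nothing distinguishes
    # this request from one made by a person clicking "submit".
    return request.Request(url, data=data,
                           headers={"User-Agent": "Mozilla/5.0"})

# Actually sending it is one call away (left out so the sketch stays offline):
# with request.urlopen(build_comment_request(url, 42, "nice post")) as resp:
#     body = resp.read()
```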

So how can we determine whether it’s really a person using a browser or an automated tool messing with us? The most common solution today is CAPTCHAs; I’m sure you’ve all seen them before. A CAPTCHA is a means of challenge-response in which the web server generates a picture, usually containing twisted text, and expects the text to be sent back as data. For a human it’s supposedly easy to understand the picture and type the text in the required field, while for a computer program it’s supposedly difficult to analyze the picture and “understand” the text.

The original idea behind CAPTCHA is nice, but it is becoming increasingly ineffective. Firstly, it is quite possible to use Computer Vision techniques, alone or together with Artificial Intelligence techniques, to recognize the text. Secondly, with publicly available CAPTCHA-solving libraries such as PWNtcha or online services such as DeCaptcher, it’s really a matter of how much time/money one is willing to spend rather than a technological challenge. There are also “indirect” ways to overcome CAPTCHA, such as analyzing the audio CAPTCHA (sometimes available for accessibility purposes) or passing the CAPTCHA images to a different (preferably high-traffic) web site to be solved by real humans.

As CAPTCHA breakers are closing the gap, it’s time to present them with new challenges. Don’t get me wrong, I’m not picking a side. I’ve been on both sides, and my interest is purely intellectual, so whoever has the upper hand at the moment is irrelevant. I have some ideas for new challenges. Feel free to implement/use/further develop them. However, I take no responsibility, and I can assure you they are breakable, but they should take the game to the next level.

Let’s first examine Achilles’ heel of current methods:

  1. They are deterministic: client-server interaction is always the same and can be easily revealed.
  2. Automation tools are challenged with something they need to figure out, and the challenge is usually independent of whatever it is trying to protect.
  3. CAPTCHAs are presented as images in a format suitable for copying and processing.

To raise the bar we have to address those problems. There’s a lot of room for creativity, and I’m not talking about using the current CAPTCHA approach with a new kind of image, as shown here. I’m talking about a new approach. Modern browsers are quite sophisticated, and we can use their advanced capabilities as part of the challenge-response, for example their Javascript and rendering engines. Instead of challenging automation tools with “analyze this image” we can challenge them with “execute this script”. This would force them either to implement a Javascript engine or to use a real browser as part of the automation. Both demand a higher level of complexity.

Then comes the randomness element. The script should be different each time it is served. This can be achieved by dynamically generating a self-decrypting script. The decryption should take place during script execution, meaning that instead of sending encrypted cipher-text, a decryption script and a key (as in traditional systems), there should be one script that decrypts itself little by little using “evals”: decrypting a block of commands, then executing it. Execution performs what the original commands were intended to do and decrypts the next block of commands. This method prevents attempts to decrypt/analyze the script without executing it, thus enforcing the use of a Javascript engine and providing the randomness element.
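To make the idea concrete, here is a toy model, written in Python rather than Javascript purely for illustration: exec stands in for eval, XOR stands in for a real cipher, and the block contents are made up:

```python
# Toy model of a self-decrypting script: each block, once executed,
# decrypts and runs the next one.
def xor(data: bytes, key: bytes) -> bytes:
    # XOR "cipher" as a stand-in for real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"s3cret"
trace = []

# The second block exists on the wire only in encrypted form.
enc2 = xor(b"trace.append('block 2 ran')", key)

# The first block runs in the clear; its last statement decrypts the
# next block and executes it, and so on down the chain.
block1 = "trace.append('block 1 ran')\nexec(xor(enc2, key))"
exec(block1)
```

A server would generate a fresh key (and differently scrambled blocks) per request, so no two clients ever receive the same script.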

The script should present a challenge whose response is context dependent; for example, for talk-backs or comments you can ask about the article’s content. Beware that this should be implemented smartly: if you ask a multiple-choice question, the automation tool will try all choices, and if the response is a single word from the article, the automation tool may try them all. Anyhow, if it’s inapplicable or you feel it’s more harmful than helpful (by narrowing down the response space), you can always revert to a letters/numbers combination.

The challenge should not consist of an image only. It should be partially an image, maybe as background, plus dynamically rendered pixels/lines/polygons/curves or placement of smaller image portions using the browser’s rendering engine. Together they should visually form the “question” (or letters/numbers combination) mentioned in the previous paragraph. This step makes it more difficult for an automation tool to process/copy/understand the challenge.

Finally, you can add your own spice just to make things more complex. For example, the running script can record the time differences between key presses in the response field and send them along with the response. You can then use statistical analysis to determine whether it came from a human (generally, if the field is auto-filled by a robot, even with a random wait between key presses, the standard deviation should be relatively low). This is only one example. You can also invent new things, add random business logic, use self-modifying code… the possibilities are endless.
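A sketch of that timing check, assuming the script collects millisecond timestamps of key presses; the 120 ms threshold and the sample timings are made-up numbers that would need tuning on real data:

```python
from statistics import stdev

def looks_human(timestamps_ms, threshold=120.0):
    # Spread of the gaps between consecutive key presses: a person
    # hesitates unevenly, a robot types at a near-constant rate.
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return stdev(gaps) >= threshold

# A robot filling the field at roughly 100 ms per key...
robot = [0, 100, 198, 301, 399, 502]
# ...versus a person pausing between some keys.
human = [0, 90, 450, 520, 1210, 1340]
```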

I hope you liked my ideas. Let me know what you think.

Mysql selecting top N per group

July 13th, 2010 4 comments

It’s been a while since my last post; I’ve been busy lately. You may have noticed that I made a nice Web 2.0 game, “How fast can you type the alphabet”. The game is simply about typing a-z as fast as you can, and it keeps track of your best score. Currently three alphabets are supported: English, Russian and Hebrew. To make it more competitive, I made an “all-time best 5” scoreboard.

For the scoreboard I needed to query the database (MySQL) for the top 5 scores per language. I didn’t think it was going to be an issue, definitely not big enough to dedicate a post to, but as I soon found out, I was very wrong. Here is my scores table structure:

+--------+---------------------+------+-----+-------------------+-------+
| Field  | Type                | Null | Key | Default           | Extra |
+--------+---------------------+------+-----+-------------------+-------+
| id     | bigint(20) unsigned | NO   | PRI | NULL              |       |
| input  | tinyint(3) unsigned | NO   | PRI | NULL              |       |
| score  | float unsigned      | NO   |     | NULL              |       |
| date   | timestamp           | NO   |     | CURRENT_TIMESTAMP |       |
| locale | tinyint(3) unsigned | NO   | PRI | NULL              |       |
+--------+---------------------+------+-----+-------------------+-------+

All fields are pretty self-descriptive except maybe “input”. This field represents the input device that was used to type the alphabet; for example, typing on a computer keyboard is probably much faster than typing on your iPhone or Android phone, and an iPad is probably somewhere in between. This feature is for future use, so for now I only collect the data but don’t show it. The triplet <id, input, locale> is used as the primary key.

I’m no DBA; maybe I should have used id as the primary key and a “unique” constraint for the triplet, but the idea is that a player can have only one score per language (locale) and input device. Back to my scoreboard query: I need to select the top 5 players per language. After playing a while with basic queries, I figured out it can’t be done with a simple “group by locale” query, as I can’t limit the number of results per group nor explicitly tell it to take the top X scores. If I needed only the best score per language, I could have simply used:

select id, min(score), locale from scores group by locale

Unfortunately there is no similar function to get the N minimal scores. At this point I had two choices: either google for a solution (having one query get everything done) or use three queries, one per language, and programmatically combine the results. Maybe I should have chosen the latter; I admit I didn’t compare overall performance. However, I didn’t like the idea of my DB being queried three times each time the scoreboard is refreshed (which happens quite often). Another possible solution was to show only the current language’s scoreboard (therefore needing only one query, for the current language). This was unacceptable because it would mean a new AJAX request each time a player switches between languages, and since most expected players are multilingual, that’s more requests for the web server to process.

Once again, I did not compare overall performance using statistical usage information, so it is possible that I was wrong about which tradeoff is preferable. Anyhow, my decision was to use one AJAX request to get the whole scoreboard, and that’s what this post is really about. So I googled for solutions and found a couple. Some use user-defined variables for the inner counting; some use relatively unreadable (at least for me) queries with inner/outer/left joins. The most readable query I found was:

select s1.id, score, locale
from scores s1
where 5 > (select count(*) from scores s2
           where s1.locale = s2.locale and s1.score > s2.score)
group by locale, s1.id order by locale, score

This can be interpreted as: for each score, count how many scores are lesser (in the same language); then select it only if the number of lesser scores is less than 5 (meaning the score is in the top 5); finally group the results by language and id and order them by language and score. We must group by id as well, because otherwise we’d get only one result per language. Pretty readable, in my opinion. As for performance, it does run a new sub-query for each row, so maybe it takes a little more CPU or time, but there is no huge Cartesian product.
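The counting trick is easy to try on a toy table. Here is essentially the same query running against an in-memory SQLite database (made-up scores, top 2 instead of top 5, and the group-by dropped since every (id, locale) pair appears only once in this sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table scores (id integer, score real, locale integer);
    insert into scores values
        (1, 2.0, 1), (2, 1.5, 1), (3, 3.0, 1), (4, 2.5, 1),
        (5, 9.9, 2), (6, 1.1, 2);
""")
# For each score, count the lesser scores in the same language and
# keep the row only if fewer than N (= 2) of them exist.
rows = conn.execute("""
    select s1.id, s1.score, s1.locale
    from scores s1
    where 2 > (select count(*) from scores s2
               where s1.locale = s2.locale and s1.score > s2.score)
    order by s1.locale, s1.score
""").fetchall()
# rows now holds the top 2 per locale:
# [(2, 1.5, 1), (1, 2.0, 1), (6, 1.1, 2), (5, 9.9, 2)]
```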

But wait, remember the “input” field? It is possible that the same player has more than one score per language, and if both of his scores are in the top 5 I don’t want to show them both. For the current version of the game I want to show only the best of his scores and count it as one result for the scoreboard. On first thought, replacing “score” with “min(score)” should do the job, but it doesn’t: it will show only his best score, yet still count both scores when counting the number of lesser scores. So my next move, I figured, was replacing “scores” with a sub-query that returns a filtered scores table, holding only the best score per player and language. The sub-query would look like:

select id, min(score) as score, input, locale
from scores
group by locale,id

Then, of course, I need to exclude blacklisted users who tried to tamper with the system, so:

select id, min(score) as score, input, locale
from scores
where id not in (select id from blacklist)
group by locale,id

And since I’m on shared hosting and my hosting provider doesn’t allow me to create or use views, I would have to literally take this query and replace every instance of “scores” in the first query with it. The result is so ugly I won’t even write it. Then I asked a DBA friend of mine for advice. She told me to try using analytic functions and gave me some examples for Oracle DB. When I looked for analytic functions in MySQL I found the great article “Emulating Analytic (AKA Ranking) Functions with MySQL” from O’Reilly. In fact, their explanation of analytic functions is so good that I won’t even repeat it (if you’re unfamiliar with the subject, you can read about it there).

Following this article, I came up with this query:

select *
from (select e.locale, e.id, e.score, find_in_set(e.score, x.scoreslist) as rank
      from (select locale, group_concat(score order by score) as scoreslist
            from (select locale, min(score) as score
                  from scores
                  where id not in (select id from blacklist)
                  group by locale, id) as k
            group by locale) as x, scores as e
      where e.locale = x.locale) as z
where rank <= 5 and rank > 0
order by locale, rank

At first glance it might look ugly, but let me assure you it’s far more readable, and apparently more efficient, than the first alternative. The innermost sub-query (k) returns the filtered scores table I discussed before: the best score per player and language. Then we use group_concat to form a table (x) that looks like this (e.g.):

+--------+---------------------------------+
| locale | scoreslist                      |
+--------+---------------------------------+
|      1 | 1.75,2.129,2.85,6.34,9,10,11,12 |
|      2 | 2.185,4.12,8.32                 |
|      3 | 2.4                             |
+--------+---------------------------------+

Now we have a sorted score list per language. We take the Cartesian product with the scores table (e), joining them by language. The product isn’t huge, as the number of languages is fixed and small, so we’re still in the same order of magnitude. The result is the original list of scores, but now each row additionally carries the score list for its language. Next, we give each score a rank according to its position in that list (via find_in_set) and filter so that only ranks in the range 1-5 are returned. How simple is that?
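For anyone who wants to see the mechanics without a MySQL at hand, here is the same ranking logic in plain Python: the per-locale sorted list plays the role of group_concat’s string, and the 1-based list position plays the role of find_in_set (data made up, top 2 instead of top 5):

```python
from collections import defaultdict

# (id, score, locale) rows, already reduced to best score per player.
scores = [(1, 2.0, 1), (2, 1.5, 1), (3, 3.0, 1),
          (5, 9.9, 2), (6, 1.1, 2)]

# Step "x": per-locale sorted score list (what group_concat builds).
scoreslist = defaultdict(list)
for _id, score, locale in scores:
    scoreslist[locale].append(score)
for lst in scoreslist.values():
    lst.sort()

# Step "z": rank = 1-based position in the list (what find_in_set
# returns), keeping only ranks 1..N.
N = 2
top = sorted((locale, _id, score, scoreslist[locale].index(score) + 1)
             for _id, score, locale in scores
             if scoreslist[locale].index(score) + 1 <= N)
```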

Problem solved. Note: group_concat has a maximal result length (1024 bytes by default). Although it can be increased, the method presented won’t work if you need all your records sorted and grouped; but for the purpose of top N, assuming N is relatively small, it works very well.

All in all, for my “a-z” game it probably doesn’t matter which of the methods I used, but it was quite educational, and it’s good knowledge for real projects to come. As I said before, DB is not my field of expertise, so I’d appreciate it if someone experienced would comment on the things I wrote or shed some light on the things I didn’t.

A-Z

July 2nd, 2010 1 comment

Do you remember the good old “how fast can you type a-z” contest? Well, I recently needed to study jQuery, and I always wanted to develop using facebook’s API, so I made this little webapp for old times’ sake. I think it’s pretty cool!

http://frishit.com/az
