XSS war: a Java HTML sanitizer

Yesterday I was testing one of our forthcoming online services, in order to check XSS (Cross Site Scripting) robustness.

XSS and CSRF (Cross Site Request Forgery) are the two scary "black beasts" of online services, in good company with scalability; but while even nerd-programmers think about scalability, almost nobody takes care of XSS and CSRF, at least until the first attack 🙂

XSS becomes a serious problem when you allow your users to add content to your site/service; typical situations are blogs, forums, online chats etc.: when you are about to "print" user contributions you must do it conscientiously, as you may find some "easter eggs".

Of course, if you limit user contributions to plain text you will solve the problem in minutes, by just encoding every html tag (the idea is to change every ">" into "&gt;", every "<" into "&lt;" and so on). But things quickly get harder when you try to accept some basic html tags.
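
In java the whole "plain text only" solution really is a handful of lines; something like this minimal sketch (class and method names are mine, just for illustration):

// Encode everything, allow nothing: enough when user content is plain text only.
public class PlainTextEncoder {

    public static String encode(String userInput) {
        return userInput
                .replace("&", "&amp;")   // must come first, or the other entities get double-encoded
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&#39;");
    }
}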

In our case, Patapage is a service that allows you to add comments, threads and more options to existing web sites; it is tailored to appealing interfaces, so it is a MUST to allow users to insert a wide set of html tags. We found on stackOverflow.com a good balance between admitted tags and refused ones; as usual Jeff Atwood is prodigal with precious hints, but while his set is fine for "patacomments", it is too restrictive for the scope of "patacontents".

As you can imagine, I found a lot of holes, so, going upstream as usual and not giving a damn about what is probably Jeff's best suggestion ("do not re-write your own implementation", advice which he himself does not follow), I started writing code, basing my implementation on these articles.

Another aspect of the problem is the preservation of layout: users can break the page by inserting unclosed or misplaced tags. If you hope that these flaws do not impact security, you are on the wrong path. A well-planned css attack can, for instance, overlay your application for click stealing, or simply "switch" two buttons of your application with funny(?) results.

Of course I looked around to find something matching my needs, but I found only two kinds of approaches:

The first approach is very basic, focused on removing "<script>" tags from your html; obviously this is a silly approach, as you can inject js code without a "script" tag: you have a bunch of ways to do this by using dom events (onclick, onload etc.).
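
For instance, none of these classic payloads contains a "script" tag, yet each of them can execute js (the snippet is mine, just to make the point):

// A naive "strip <script>" filter lets all of these through.
String[] scriptlessPayloads = {
    "<img src=\"x\" onerror=\"alert(1)\">",       // runs as soon as the broken image fails to load
    "<body onload=\"alert(1)\">",                 // dom event on a structural tag
    "<a href=\"javascript:alert(1)\">click</a>"   // js in the url, no event handler at all
};
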
The second approach is to parse the HTML to understand exactly what is happening in the code, in order to remove un-allowed tags; this approach is logically the right one, you cannot expect to do better: your code analyses the HTML, extracts the tags, and then you walk down the tree dropping out the unwanted ones. Unluckily this approach requires an HTML parser library that is really "strong" with respect to malicious code, and usually these libraries are built for a different scope. Another aspect is that the parser, lexer and walker are quite complicated pieces of code, so it is not a joke to test them completely. I've tested a couple of parsers with unsatisfying results.

This is why we wrote our own sanitizer by hand. Our approach is to remove unwanted tags and properties without testing HTML correctness in depth.

The first step is to tokenize the code: a token can be one of: a tag start (<p …>), a comment (<!-- … -->), tag content (blah blah), or a tag closing (</p>).

For instance <p style=”color:red” align=”center”>test</p> generates three tokens:

  1. <p style=”color:red” align=”center”>
  2. test
  3. </p>

The tokenize method looks for <…> pairs or comments, and that's fine for our scope of restricting accepted tags: if a <…> pair is badly closed, the tag will be html-encoded.
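
A tokenizer of this kind can be little more than a single regular expression; here is a rough, simplified sketch (again, illustrative names, not the published code):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified tokenizer sketch: comments, complete <...> pairs, plain text runs,
// or a lone "<" that never gets closed (it will simply be html-encoded later).
public class TokenizerSketch {

    private static final Pattern TOKEN =
            Pattern.compile("(<!--.*?-->)|(<[^>]*>)|([^<]+)|(<)", Pattern.DOTALL);

    public static List<String> tokenize(String html) {
        List<String> tokens = new ArrayList<String>();
        Matcher m = TOKEN.matcher(html);
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }
}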

Having the token list, we test every single token to see whether it is acceptable; again, we do not perform tag matching at first, so for us <b>test</i> is fine: we are working on security, not on syntax. We fix that afterward too, as it is easy in any case to add a tag-pair counter and close unclosed tags at the end, which you will actually find done in our code.

We loop over every single token and test it with regular expressions. The flow is as follows (a rough sketch of the loop in code follows the list):

  1. if the token is a comment, discard it.
  2. if the token is a start tag (<p style="color:red" align="center">) extract the tag (p) and the attributes (style="color:red" align="center")
    1. if the tag is forbidden, it will be removed
    2. if the tag is allowed, we will extract every attribute, performing a check:
      1. check "href" and "src" on the admitted tags (a, img, embed only) and check url validity (only http or https)
      2. check the "style" attribute looking for a "url(…)" parameter, discarding it if found
      3. remove every "on…" attribute (e.g. onClick, onLoad, …)
      4. encode the attribute's value for unknown attributes
      5. push the tag on the stack of open tags
    3. otherwise the tag is unknown and will be removed
  3. if the token is an end tag (</p>) extract the tag (p) and check whether the corresponding tag is already open; if necessary, close the tags that are still open.
  4. otherwise it is not a tag and we will encode it
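
And this, more or less, is the shape of that loop. The sketch below is only an illustration with made-up names, not the published HtmlSanitizer: the tag whitelist, the helper methods and the attribute handling (which is only stubbed out here, see the url check further down) are my own simplifications.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified check loop: one pass over the tokens, regular expressions decide
// whether each token is a comment, a start tag, an end tag or plain content.
public class SanitizeLoopSketch {

    private static final Pattern COMMENT   = Pattern.compile("^<!--.*-->$", Pattern.DOTALL);
    private static final Pattern START_TAG = Pattern.compile("^<\\s*([a-zA-Z0-9]+)([^>]*)>$");
    private static final Pattern END_TAG   = Pattern.compile("^</\\s*([a-zA-Z0-9]+)\\s*>$");

    public static String sanitize(List<String> tokens, Set<String> allowedTags) {
        Deque<String> openTags = new ArrayDeque<String>();
        StringBuilder out = new StringBuilder();
        for (String token : tokens) {
            Matcher start = START_TAG.matcher(token);
            Matcher end = END_TAG.matcher(token);
            if (COMMENT.matcher(token).matches()) {
                // 1. comments are simply discarded
            } else if (start.matches()) {
                String tag = start.group(1).toLowerCase();
                if (allowedTags.contains(tag)) {
                    // 2.2 allowed tag: keep it, check its attributes, remember it is open
                    out.append(cleanStartTag(tag, start.group(2)));
                    openTags.push(tag);
                }
                // 2.1 / 2.3 forbidden or unknown tags are dropped
            } else if (end.matches()) {
                String tag = end.group(1).toLowerCase();
                if (openTags.contains(tag)) {
                    // 3. close the tag, closing on the way any tag still open inside it
                    String open;
                    do {
                        open = openTags.pop();
                        out.append("</").append(open).append(">");
                    } while (!open.equals(tag));
                }
            } else {
                // 4. not a tag (or a malformed one): encode it
                out.append(htmlEncode(token));
            }
        }
        // the tag-pair counter at work: close whatever the user left open
        while (!openTags.isEmpty()) {
            out.append("</").append(openTags.pop()).append(">");
        }
        return out.toString();
    }

    // stub: the attribute checks of step 2 (href/src, style, on*, encoding) belong here
    private static String cleanStartTag(String tag, String attributes) {
        return "<" + tag + ">";
    }

    private static String htmlEncode(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}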

In order to avoid js injection through user-inserted URLs, we accept the <a> tag only if the "href" attribute is there and points to a valid URL; we test url correctness with the Apache UrlValidator, and this cuts out every "javascript:" attempt. The same approach is used for the <img> and <embed> tags.
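
The check itself is short with the Apache commons-validator library; roughly like this (the wrapping class and its name are mine, and depending on the commons-validator version the UrlValidator lives in a slightly different package):

import org.apache.commons.validator.routines.UrlValidator;

// Only http and https schemes pass, so "javascript:...", "data:..." and friends
// never survive as href/src values.
public class UrlCheckSketch {

    private static final UrlValidator URL_VALIDATOR =
            new UrlValidator(new String[] { "http", "https" });

    public static boolean isAcceptableUrl(String url) {
        return url != null && URL_VALIDATOR.isValid(url.trim());
    }
}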

Having finished my hard work coding the shelter, I gave the happy news to our design department that the sanitizer was done, and after a first minute of excitement Matteo (http://pupunzi.open-lab.com) told me that they have three different usages of user input: displaying an HTML page on the front office, displaying a textual abstract in lists in the back office, and of course storing the content in the database.

So the sanitizer needs three different outputs: sanitized html with the allowed tags kept, text-only without tags, and the "original" version for the database. This is why the latest version of the sanitizer returns ".html", ".text" and ".val". Why should you store ".val" instead of the original input or ".html"? Because the original input may be "dangerous", and may mislead the user into believing that all tags are allowed. The encoded value is not suitable in case of subsequent modification because of double encoding (e.g. ">" becomes "&gt;", which becomes "&amp;gt;", and so on). On the other hand ".val" removes only the forbidden tags, maintaining all other user oddities (strange tags, comments, etc.).
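
To make the idea concrete, usage ends up looking more or less like this; the call and the result class here are illustrative assumptions, only the three outputs are the real ones:

// Illustrative usage: sanitize(...) and SanitizeResult are assumed names,
// the three outputs (.html, .text, .val) are the ones described above.
SanitizeResult result = HtmlSanitizer.sanitize(userInput);

String forFrontOffice = result.html; // allowed tags kept, everything else encoded
String forBackOffice  = result.text; // plain text, no tags at all
String forDatabase    = result.val;  // forbidden tags removed, the rest left untouched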

We have set up a public playground for testing our sanitization code: http://patapage.com/applications/pataPage/site/test/testSanitize.jsp. This page allows you to input some text; by pressing "test" your input will be printed (sanitized) on the page.

The source of our sanitizer is released under the MIT license (i.e. free as in free beer, just keep the attribution); see the complete code here.

The allowed tags will be accepted; others will be encoded and printed. If you like challenges, try to inject some js in your text and, for instance, get an alert. Tell us about your victories, if any.

BTW, I've tested my code using the XSS Me plugin for Firefox, and it passed all (about 150) tests 🙂


14 thoughts on “XSS war: a Java HTML sanitizer”

  1. Looking at the code, I don’t think it’s complete?
    What class is JSP?
    I’ve never heard of JSP.htmlEncodeApexesAndTags.

    Regards

    Stewart

  2. Hi Roberto,

    Truth be told, your code is the only usable stuff around the net regarding this topic,
    and it looks quite impressive.
    However, I couldn't find the class "JSP" (e.g. JSP.htmlEncodeApexesAndTags) that you reference several times in the code you linked above.

    Would you be so kind as to post that class as well?

    Many thanks,
    Regards,
    Janos

  3. Sorry for the delay in responding, but I was on holiday.

    About one month ago I discovered the problem in the source code and fixed it, but it was cached by the server.

    The JSP class is a utility class used by our framework, so the real sanitizer is slightly different from the published one, where all the JSP.xxx methods are in the same class (HtmlSanitizer).

    Now the source code should be fine, see: http://patapage.com/applications/pataPage/site/test/HtmlSanitizer.html

    Thank you all for the feedback,

    Roberto

  4. FYI we had to do something similar for a perl based project. We released the result here:

    http://search.cpan.org/dist/HTML-Defang/lib/HTML/Defang.pm

    HTML::Defang uses a custom html tag parser. The parser has been designed and tested to work with nasty real-world html and to try to emulate as closely as possible what browsers actually do with strange-looking constructs. The test suite has been built based on examples from a range of sources such as http://ha.ckers.org/xss.html and http://imfo.ru/csstest/css_hacks/import.php to ensure that as many XSS attack scenarios as possible have been dealt with.

    It’s worth checking those links to see how well your parser works. There’s some pretty nasty examples.

    R0b

    I pasted the html sanitizing script into my wmd.js file, saved it, uploaded the modified wmd.js file to the server and refreshed, but all of the typing functions in the wmd.js comment/markdown window were stripped, leaving the comment area without functionality. What did I do wrong?
    Thanks in advance for any help you can provide to make my website more secure.
    Can you also help me put a comment box like this one on my website for visitor feedback, since I like the validation of required entry of e-mail account info to prevent spammers? That would be greatly appreciated, along with some very basic instructions for implementing the comment box.
    Rick

    1. I didn't get the question. The published sanitizer is a piece of java code; what is the "wmd.js" file you are talking about? My sanitizer is not a javascript one (even if a port should not be a big deal).

      How can I help you?

      The only hint I can give you regarding a comment box on your site is to use a simple text area or a markup editor (see this article: http://www.codinghorror.com/blog/2008/05/is-html-a-humane-markup-language.html) and to encode all user input.

      Cheers,

      Roberto

  6. Hi, really good job.
    Nice, this works!

    Sanitizer + String.replaceAll(String regularExpression, String newString) rocks!!

  7. Nikola Ilo says:

    Hi,

    first of all thank you for open sourcing your html sanitizer 🙂

    During our tests we found that it does not filter the following snippet which executes JavaScript on IE 7/8/9:

    <p style="e:expr/**/ession(alert(String.fromCharCode(65, 66, 67)));"></p>

    It would be great if you could update the sourcecode or at least your blog entry.

    Kind Regards

    1. I think the solution is to invalidate css attribute values that contain comments.

      around line 217


      String styleName = styles.group(1).toLowerCase();
      String styleValue = styles.group(2);

      If "styleValue" looks "strange", kill it.
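
      For instance, something along these lines (untested, just the idea, reusing the styleValue from the snippet above):

      // kill style values that contain css comments or IE's expression(...) trick
      String cleaned = styleValue.replaceAll("/\\*.*?\\*/", "");
      boolean strange = styleValue.contains("/*")
              || cleaned.toLowerCase().contains("expression(");
      if (strange) {
          styleValue = "";
      }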
