
20 January 2011

Comments

Obviously we have been busy up until this point simply getting the GSR out the door in an accurate, timely manner. Keeping in mind that SpiderLabs' testing arm is quite a bit larger than seven people, to answer your question regarding length of engagement: in the 2009 GSR we documented a little over 1800 tests averaging 80 hours in total testing length, and I would expect the 2010 numbers to be fairly similar in that regard. Over the course of the next year we plan to release a number of additional SpiderLabs documents. Some of these will certainly be more application-specific and should provide much more application testing data that we hope you will find interesting.

@ Charles:

Great answer and a fairly good job on the research so far.

Additionally, I'm curious about these 2300 apps that were manually pen-tested. We've argued over the word "manual" before; perhaps you refer to the use of threat modeling (or domain-specific targeting of apps) as opposed to out-of-the-box "point-and-shoot" application scanning (i.e., domain-agnostic targeting of "low-hanging" vulns in apps)? Have you reached a consensus on the terminology for what is automated versus what is manual? Perhaps methodology documentation, workflow diagrams, and "day-in-the-life" case studies are in order?

It would also be interesting to hear the average time spent by the manual pen-testers on each app, especially relative to the size and structure of the apps. For example, I would personally look at the number of insertion points across all inputs in an app. In the case of web applications, there are two primary types: classic and MVC. MVC apps often rely heavily on the controller-action-id paradigm (or REST-style parameters embedded in the URIs), while "classic" webapps rely on URIs with GET or POST operations carrying parameters and values. These would need to be measured separately for insertion points per page and per method, with HTTP headers (such as, but not limited to, cookies) added in as well. Bi-directional, one-way delay measurements for HTTP/TLS would also need to be figured into any calculation of the time it takes to perform a thorough, accurate, and consistent app pen-test (which is one of many reasons why I insist on performing app pen-testing on localhost).
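
To make the insertion-point idea concrete, here is a rough sketch of the kind of count I have in mind (the request records, URLs, and parameter names below are made up for illustration; a real count would come from a proxy or crawler log):

    from collections import defaultdict
    from urllib.parse import urlparse, parse_qs

    # Hypothetical traffic records; in practice these come from a crawl or proxy log.
    requests = [
        # "classic" webapp: parameters live in the query string or POST body
        {"method": "GET",  "url": "http://target/search?q=test&page=2",
         "body": {}, "headers": {"Cookie": "sid=abc"}},
        {"method": "POST", "url": "http://target/login",
         "body": {"user": "a", "pass": "b"}, "headers": {"Cookie": "sid=abc"}},
        # MVC/REST-style: controller/action/id are encoded in the path itself
        {"method": "GET",  "url": "http://target/orders/view/1337",
         "body": {}, "headers": {"Cookie": "sid=abc"}},
    ]

    counts = defaultdict(int)
    for r in requests:
        u = urlparse(r["url"])
        key = (r["method"], u.path)
        # query-string and body parameters are insertion points
        counts[key] += len(parse_qs(u.query)) + len(r["body"])
        # in MVC apps, path segments past /controller/action behave like parameters
        segments = [s for s in u.path.split("/") if s]
        if len(segments) > 2:
            counts[key] += len(segments) - 2
        # headers (cookies and friends) are insertion points as well
        counts[key] += len(r["headers"])

    for (method, path), n in sorted(counts.items()):
        print(f"{method} {path}: {n} insertion points")

Multiply a per-page count like that by the number of pages and actions in the app and you start to get a feel for how much work a thorough manual test of a single application really is.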

Regardless of the accuracy of the above measurements, it would additionally be interesting to hear about the other factors that aren't clearly stated in this blog post. The report states that 2300 apps were pen-tested in 2010. It also states that nearly two-thirds of your staff is involved in these tests. Specifically, how many people took part in testing the 2300 apps? How does that work out?

Let's say that the Trustwave SpiderLabs Application PenTest Team has 7 testers. With 2300 apps over 1 year, that works out to roughly one app per tester per working day (actually slightly less than a day per app). That doesn't leave a lot of room for testing, particularly not on apps with a large number of insertion points and/or a long round-trip time driven by the bi-directional, one-way delay HTTP/TLS calculations above. It certainly does not leave a lot of room for reporting either. Finally, I don't see how it would include any threat modeling or the domain-specific pre-work necessary to complete even a rough model for the "logic flaw" testing results that your team apparently found. While I know that my estimates are probably wrong here, it would be very cool to hear the actual numbers. It would mean a lot to what you describe as "2300 manual penetration tests".
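
To spell out the arithmetic behind that guess (the team size and working days per year are my assumptions, not Trustwave's reported figures):

    # Back-of-the-envelope estimate; every input here is assumed, not reported.
    apps = 2300
    testers = 7           # assumed size of the application pen-test team
    working_days = 250    # roughly one year of business days per tester

    apps_per_tester = apps / testers                        # ~329 apps per tester per year
    tester_days_per_app = (testers * working_days) / apps   # ~0.76 working days per app

    print(f"{apps_per_tester:.0f} apps per tester per year")
    print(f"{tester_days_per_app:.2f} tester-days per app")

Even with generous rounding, that is less than one working day per application for testing and reporting combined.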

I only say this because it appears that you have a lot of rigor in your ModSecurity metrics and I know what your team is capable of.

Andre,

The short answer is that in the comprehensive document we decided to include only ten items on the application list. While we certainly see the vulnerabilities you listed as well, we see the listed vulnerabilities more often. A comprehensive list of all the application vulnerabilities we saw in 2010 would certainly contain many more entries.

To take things a step further, consider for a moment the percentage of applications that even allow file upload. The volume of other vulnerabilities may be limited by similar factors. With this in mind, it is not entirely surprising that some dangerous vulnerabilities do not appear in our list. That is not to say that these vulnerabilities are not serious; any of them could result in a critical weakness that leads to an application's compromise.

There are many application vulnerability lists, and it is our feeling that each one has its place. This list is certainly not meant to replace the others. We appreciate the feedback!

I'm curious as to why the following were left out of your top 10:
1) file upload vulnerabilities
2) read and write inclusion vulnerabilities

I'm also curious as to why you restricted your list to 10 items. Isn't CWE-700 a much more efficient means of analyzing and discussing software weaknesses?
