
Arachni 0.4.6-0.4.3 release

Hey folks,

There’s a new release available with enough boons for everybody, the overall theme of which is performance, accuracy and optimization. Pretty much everything has been profiled, benchmarked and optimized to keep RAM, CPU utilization and bandwidth consumption low (numbers will follow shortly), while also improving payload coverage and accuracy.

However, it’s not all just hardcore Framework v0.4.6 optimizations; we’ve got a couple of new shinies accompanying the v0.4.3 Web UI as well: the scan scheduler and the redesigned issues table.

As usual, let’s start with the Framework.

Framework v0.4.6

Let’s take a look at the important changes on the Framework side of things.

RAM consumption

The system has been on a strict diet which yielded great results in terms of diminished RAM consumption (among other things). The technical details will bore you to tears so I’ll skip them and instead compare the difference between the previous version and this one when scanning http://testfire.net:

  • v0.4.5.1: 85.12 MB
  • v0.4.6: 65.116 MB

Of course, this is one of the simplest possible scenarios and the low numbers don’t make much of a visual impact, but a 23.5% decrease in RAM consumption is nothing to sniff at; keep in mind that just loading Arachni requires around 45MB of RAM. In reality, you can even cut scans which used to require 3GB down to a few hundred MB, if you so desire, and find your own balance between RAM consumption and HTTP performance using the new http-queue-size option.
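
To give a rough feel for the mechanism behind that option, here’s a minimal, purely illustrative Ruby sketch (not Arachni’s actual internals): a bounded queue caps how many requests are held in memory at once, so the generator blocks instead of buffering the entire workload up front.

    require 'thread'

    # Illustrative only: a bounded queue holds at most MAX_QUEUE requests
    # in memory; the generator blocks when it's full instead of buffering
    # the entire workload up front.
    MAX_QUEUE = 50
    queue     = SizedQueue.new(MAX_QUEUE)

    producer = Thread.new do
      10_000.times { |i| queue << "http://example.com/?id=#{i}" }
      queue << :done
    end

    # The consumer stands in for the HTTP workers draining the queue.
    while (url = queue.pop) != :done
      # perform_request(url) would go here.
    end

    producer.join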

Number of HTTP requests (and hence scan time)

Let’s use the same scan as in the above section:

v0.4.5.1

  • Performed requests: 78,657
  • Scan duration: 00:29:32

v0.4.6

  • Performed requests: 53,850
  • Scan duration: 00:20:31

This is a huge difference: the number of HTTP requests, the bandwidth and the scan time have all been reduced by around 1/3.

Don’t be put off by the abysmal scan times; my Internet connection is pitiful, I just wanted to show the difference between versions.

Vulnerability coverage (based on WAVSEP scores)

Now, you may be thinking:

All of the above is well and good, but where’s the catch? How do all those optimizations affect vulnerability coverage and accuracy?

Funny you should ask (if you did ask): Arachni v0.4.6 scores 100% on WAVSEP’s tests for:

  • SQL injection
  • Local File Inclusion
  • Remote File Inclusion
  • Non-DOM XSS

So, perfect scores on everything except the DOM-XSS cases, which will have to wait until v0.5.

If you’ve got experience with WAVSEP, you may be wondering about the false-positive tests: in addition to the above, WAVSEP also includes tests for false positives, for which low scores are better and 0 is best.

Arachni will still log issues for a few of those tests and mark them as untrusted, accompanied by a brief explanation of why they should be reviewed by a human. This is by design; I could simply prevent those supposed false positives from being logged, but I maintain that they should be forwarded to a human for verification.

False positive scores:

  • SQL injection: 2 untrusted issues logged by the sqli module.
    • Those cases always return a response which contains a SQL error, irrespective of whether an audit is taking place or not. I still believe that a human needs to verify the situation, as there are plenty of cases where that behavior could indicate a vulnerability.
  • Local File Inclusion: 4 untrusted issues logged by the source_code_disclosure module.
    • Same as above, but with known source code signatures instead of SQL errors.
  • Remote File Inclusion: 0 issues logged
  • XSS: 0 issues logged

Finally, in order to ensure that this sort of coverage remains unaffected during development, WAVSEP’s tests have been incorporated into the Arachni test suite, so regressions begone!

Accuracy improvements

While I was working on v0.4.6, a few users reached out regarding false positives related to blind SQL injection issues, especially from the differential analysis module (sqli_blind_rdiff) and, to a lesser extent, the timing attack module (sqli_blind_timing).

In order to fix those issues, the differential analysis and timing attack techniques have been refined and now establish baselines at multiple points during the audit of a given vector to ensure stability and avoid getting fooled by misbehaving webapps.

The differential analysis technique in particular has been completely overhauled, as it also needed some serious performance tweaks.
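
For the curious, here’s a highly simplified Ruby sketch of the general idea behind differential analysis with repeated baselining; everything in it (Vector, submit, the payloads) is a hypothetical stand-in for illustration, not Arachni’s real API:

    # Hypothetical stand-ins so the sketch runs end-to-end; a real
    # implementation would perform actual HTTP requests.
    Vector = Struct.new(:default_value)

    def submit(vector, value)
      # Simulates a vulnerable page: the 'false' expression changes the response.
      value.include?("1'='2") ? 'error page' : 'normal page'
    end

    TRUE_EXPR  = "' OR '1'='1"   # shouldn't change a vulnerable page's response
    FALSE_EXPR = "' AND '1'='2"  # should change a vulnerable page's response

    def stable_baseline?(vector)
      # Re-request the unmodified vector a few times; if the responses
      # differ on their own, the page is too volatile to trust.
      Array.new(3) { submit(vector, vector.default_value) }.uniq.size == 1
    end

    def differential_audit(vector)
      return :untrusted unless stable_baseline?(vector)

      baseline   = submit(vector, vector.default_value)
      true_body  = submit(vector, TRUE_EXPR)
      false_body = submit(vector, FALSE_EXPR)

      # Baseline again *after* the injections, to catch webapps that
      # started misbehaving mid-audit.
      return :untrusted unless stable_baseline?(vector)

      true_body == baseline && false_body != baseline ? :vulnerable : :clean
    end

    p differential_audit(Vector.new('1'))  # => :vulnerable (in this simulation)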

If you run into any false positives, please do let me know, as this is fresh code and may contain bugs. It will, however, be much more accurate and faster in the long run, so it’s worth it.

Web UI v0.4.3

The web interface has received some attention as well and includes some swell new features, primarily:

  • Switched to HTML5 local-storage for persistent UI state instead of cookies.
  • Profiles can be exported and imported as YAML and JSON — ready to be pushed into your Configuration Management System of choice.
    • Exported YAML profiles can be used as command-line profiles.
  • The issues table has been redesigned, with issues now grouped by type, colour-coded and sorted by severity.
  • Scans can now be scheduled and configured as recurring.

Profile export/import

Not much more to add to this; check out these screenshots:

[Screenshots: profile export and import]

Issues table

The issues table has been massively redesigned to provide more context at a glance and help you prioritize and focus on the issues that interest you most.

[Screenshot: the redesigned issues table]

While the scan is running and new issues appear, High and Medium severity groups will be displayed as expanded by default, showing each logged issue, while Low and Informational ones will be collapsed. This way, your attention is drawn to where it’s most needed.

Of course, you can change the visibility settings to suit your preferences, using the controls on the left of the table, as well as reset them to their default configuration.

Scan scheduling

The major change for the web interface is the addition of the much-awaited Scheduler which, combined with the existing incremental/revisioned scans, makes for quite a powerful feature. In essence, it allows you to schedule a scan to run at a later time and, optionally, configure it to be recurring.

[Screenshot: the new advanced options for scan scheduling]

What’s interesting here is the recurring bit: each scan occurrence is not a separate entity but a revision of the previous scan, so you’ll be able to track changes in your website’s security with ease. It also allows you to speed things up by feeding the sitemaps of previous revisions to the next one (either to extend or restrict the scope), thus making the crawl process much faster (or skipping it altogether).

[Screenshot: a new scan has been scheduled and is waiting to be run.]

[Screenshot: the next iteration/revision of the recurring scan is scheduled once the previous one finishes (see the Revisions list on the left sidebar).]

[Screenshot: a scheduled revision waiting its turn.]

What’s next

What’s next now is the same as what was next after the previous release: v0.5 development. However, serious progress has been made on that front, with a lot of big milestones accomplished, such as:

  • Modules have been renamed to Checks — prevents confusion and makes the purpose of these components clear.
  • Big clean-up for internal auditable Element representations (links, forms, cookies, etc.). — Not that you’ll see this directly but it allows for some really cool stuff to be implemented more easily.
  • Rewritten Options class. — Now with more descriptive option names and relevant options grouped together for easier management.
    • It’ll also make for more verbose/intuitive configuration files.
  • Rewritten CLI interfaces to match the new Options structure. — Again with more descriptive CLI arguments.
  • Rewritten Issue model. — With much more descriptive attribute names than before and a lot more details about logged issues, such as:
    • Full HTTP request.
    • Full HTTP response.
    • The original state of the vulnerable vector.
    • The vulnerable state of the logged vector.
    • DOM transitions that led to its appearance/identification, if applicable.
  • High-performance DOM/JS/AJAX analysis by utilizing a pool of PhantomJS browsers (see the sketch after this list).
    • Browsers are clustered together and Page analysis is distributed across that cluster.
    • Analysis happens in parallel to the default audit of regular DOM level 1 elements to prevent any noticeable overhead.
    • Provides WIVET coverage of 96% (the other 4% needs Flash support which won’t be implemented).
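
As promised above, here’s a conceptual Ruby sketch of how page analysis might be fanned out across a pool of headless browsers; the queue/worker structure is illustrative and the names are made up, not Arachni’s actual implementation:

    require 'thread'

    # Conceptual sketch: a fixed pool of workers, each of which would own
    # its own PhantomJS instance, drains a shared queue of pages.
    POOL_SIZE = 4
    pages     = Queue.new

    %w(/ /login /search /profile).each { |path| pages << path }
    POOL_SIZE.times { pages << :shutdown }  # one poison pill per worker

    workers = Array.new(POOL_SIZE) do
      Thread.new do
        while (page = pages.pop) != :shutdown
          # A real worker would load the page in its browser, trigger DOM
          # events/AJAX and feed any newly discovered pages back in.
          puts "analyzing #{page}"
        end
      end
    end

    workers.each(&:join)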

So, most of the big and critical goals have been achieved, leaving:

  • Implementation of DOM-XSS checks. — Working on this now actually.
  • Implementation of URL templates to support auditing URLs that use rewrite rules.
  • Hibernation support.

And possibly a few other nice-to-haves like:

  • Expose the runtime API of plugins over RPC. — Would let you create your own RPC services from inside the Arachni Framework.
  • Switch to MessagePack for serialization of RPC messages. — Faster, smaller with widely available bindings, but may make things a lot more complicated (see the sketch after this list).
  • Remove the Spider and leave page discovery to the audit process, the Trainer and the Browser analysis.
    • Would eliminate the time spent crawling with virtually no drawbacks so it warrants closer examination.
    • On the other hand unforeseen issues might arise.
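
As a taste of what the MessagePack switch would look like, here’s a minimal Ruby example using the msgpack gem (assuming it’s installed); the message shape is made up for illustration:

    require 'msgpack'

    # A hypothetical RPC message; the msgpack gem adds #to_msgpack to core types.
    message = { 'call' => 'service.method', 'args' => [1, 'two', 3.0] }

    packed = message.to_msgpack    # compact binary representation
    puts packed.bytesize           # typically smaller than YAML or Marshal

    unpacked = MessagePack.unpack(packed)
    puts unpacked.inspect          # => {"call"=>"service.method", "args"=>[1, "two", 3.0]}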

However, all this work has been going on in private. Since I’m rewriting stuff, the system as a whole is in a perpetual state of brokenness — i.e. in no state to be pushed to a public repository.

Now, regarding the ETA: v0.5 won’t be ready till it’s ready, but if I had to guess I’d say it’ll take me around 3-4 months if I keep hauling ass. One thing’s for sure though: it’s worth the wait.

Cheers,

Tasos L.


About Tasos Laskos

CEO of Sarosys LLC, founder and lead developer of Arachni.

3 Responses to "Arachni 0.4.6-0.4.3 release"

  • Michiel
    January 8, 2014 - 6:05 pm

    Hi Tasos,

    Are these new cool features also available in the nightlies?
    Great work dude and thanks for all the great work you put in such a great free and flexible web vulnerability scanner!

    Michiel

    • Tasos Laskos
      January 9, 2014 - 3:33 pm

      Hey Michiel,

      At this point both codebases are identical — if you exclude the version numbers.

      Cheers

  • Basil Mckenzie
    April 2, 2014 - 12:23 am

    Wonderful tool
