It’s been a while since the last release, mostly because one of the most crucial parts of the system (the HTML parser) has been completely rewritten and the browser has been upgraded to a more recent version. However, the system has now been sufficiently tested by enough people to be released as stable.
As usual, let’s walk through the most important changes.
A new executable called arachni_reproduce is now available; it lets you reproduce all issues in a report and then creates a new report containing only the issues that still exist.
So, if you’ve got an Arachni report and are working to fix all the identified issues, you can just pass that report to arachni_reproduce and get immediate feedback on how you’re doing, instead of having to rerun a full scan.
For each run, arachni_reproduce will generate a new report that only includes unfixed issues, so, again, you won’t have to waste time testing issues that you’ve already fixed.
In addition to that, you can specify individual issues to be reproduced, based on their digest, if you only care about particular issues rather than the entire report.
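By way of illustration, a typical invocation might look like the following (the report filename and digest values are placeholders, not taken from a real scan; check `arachni_reproduce -h` for the exact interface):

```shell
# Reproduce every issue in an existing report:
arachni_reproduce report.afr

# Reproduce only specific issues, identified by their digests:
arachni_reproduce report.afr 58282304 1297402235
```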
Lastly, during the reproduction of each issue, extra HTTP request headers are set containing information about which issue is being reproduced, allowing you to set up server-side debugging or instrumentation to make fixing it even easier:
- X-Arachni-Issue-Replay-Id: Unique token for requests pertaining to individual issues. Differs for each run and can be used to group together the requests for each issue.
- X-Arachni-Issue-Seed: Initial payload used to identify the vulnerability in the given report.
- X-Arachni-Issue-Digest: Digest uniquely identifying each issue across scans and reports.
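For instance, a tiny Rack middleware could tag your server-side logs with the issue being reproduced. This is illustrative only — the class name and logging logic are my own — but the header names come straight from above, in Rack’s HTTP_-prefixed env-key form:

```ruby
# Illustrative middleware: logs any request carrying Arachni's replay headers
# so server-side logs can be correlated with the issue being reproduced.
class ArachniReplayLogger
  ARACHNI_HEADERS = %w[
    HTTP_X_ARACHNI_ISSUE_REPLAY_ID
    HTTP_X_ARACHNI_ISSUE_SEED
    HTTP_X_ARACHNI_ISSUE_DIGEST
  ].freeze

  def initialize(app, logger: nil)
    @app    = app
    @logger = logger || method(:puts)
  end

  def call(env)
    # Collect whichever Arachni headers are present on this request.
    replay_info = ARACHNI_HEADERS.each_with_object({}) do |key, h|
      h[key] = env[key] if env[key]
    end

    unless replay_info.empty?
      @logger.call("Arachni replay request for #{env['PATH_INFO']}: #{replay_info}")
    end

    @app.call(env)
  end
end
```

In a Rack-based app this would be enabled with `use ArachniReplayLogger` in config.ru.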
Debugging Rack-based webapps (Ruby on Rails, Sinatra, etc.)
Followers of this blog may remember an old post showcasing an IAST project I was working on some time ago, called the Arachni Introspector.
Based on the same principles, a subset of that functionality has now been introduced as a new project that lets you debug Ruby web applications such as Ruby on Rails, Sinatra, and anything else based on the Rack framework.
Eventually, I hope to combine the two and make the Introspector’s functionality available, which would be pretty cool because, last time I checked, there were no other IAST systems for Ruby web applications.
- The target URL no longer accepts any loopback interface due to a browser restriction. This means that localhost and IP addresses starting with 127. won’t work at all. However, if you want to scan a local target, you can still use your machine’s local hostname.
- The --http-cookie-string option has been changed to only accept Set-Cookie style strings.
- --http-authentication-type — Previously, this was handled automatically by libcurl, but in some cases explicit configuration turned out to be necessary.
- --scope-dom-event-limit — Limits the number of DOM events to be triggered for each DOM depth.
- --daemon-friendly — Disables the status screen to allow the system to be run in the background.
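To illustrate, the new and changed options above could be combined in a single invocation like this (the target URL and option values are hypothetical):

```shell
arachni http://myhost.example/ \
    --http-cookie-string='session=1a2b3c4d' \
    --http-authentication-type=digest \
    --scope-dom-event-limit=10 \
    --daemon-friendly
```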
- Upgraded to the latest (currently v2.1.1) PhantomJS version.
- Fixed an issue that sometimes caused browser jobs to freeze due to abruptly closed SSL connections.
- Support for WebSockets has been added.
- Support for web applications that set Content-Security-Policy has been added; previously, this caused the custom JS environment not to be loaded.
- Timed-out jobs are now retried rather than being ignored.
New HTML parser
The previous parser (Nokogiri) had served Arachni well; however, in my search for a faster and more lightweight alternative I came across an optimized XML/HTML parser called Ox, and the results really speak for themselves.
For a huge page — about 1MB of HTML code:
Nokogiri::HTML 8.323 (±12.0%) i/s - 42.000 in 5.082606s
Ox 49.934 (±14.0%) i/s - 245.000 in 5.007568s
For a large page — about 43KB of HTML code:
Nokogiri::HTML 349.181 (±18.9%) i/s - 1.664k in 5.032479s
Ox 1.682k (± 6.4%) i/s - 8.415k in 5.026887s
For a medium page — about 10KB of HTML code:
Nokogiri::HTML 1.315k (±18.2%) i/s - 6.324k in 5.068736s
Ox 8.283k (± 2.8%) i/s - 41.871k in 5.059968s
For a small page — about 364B of HTML code:
Nokogiri::HTML 15.095k (± 4.0%) i/s - 76.128k in 5.050787s
Ox 209.697k (± 2.1%) i/s - 1.055M in 5.032398s
i/s means iterations per second and, as you can see, Ox is hugely faster. This allows Arachni to use fewer resources overall and to handle huge pages (very rare, but it happens) with relative ease.
In addition to the above, Arachni now builds its own document tree using SAX, allowing it to ignore irrelevant HTML elements and store only interesting ones, like forms and links, without wasting valuable resources on junk.
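The idea of building a selective tree via SAX can be sketched with the Ruby standard library’s REXML stream parser. Arachni itself uses Ox’s SAX API; this sketch only illustrates the technique, and the class and element names are my own:

```ruby
require 'rexml/document'
require 'rexml/streamlistener'

# SAX-style selective parsing: instead of materializing a full DOM,
# we get a callback per tag and keep only the elements we care about.
class InterestingElements
  include REXML::StreamListener

  ELEMENTS_OF_INTEREST = %w[form input a].freeze

  attr_reader :elements

  def initialize
    @elements = []
  end

  # Called for every opening tag; anything outside our whitelist is
  # skipped without ever being added to a tree.
  def tag_start(name, attributes)
    return unless ELEMENTS_OF_INTEREST.include?(name.downcase)
    @elements << { name: name.downcase, attributes: attributes }
  end
end

listener = InterestingElements.new
html = <<~HTML
  <html><body>
    <div class="junk"><span>Ignored</span></div>
    <form action="/login" method="post"><input name="user"/></form>
    <a href="/about">About</a>
  </body></html>
HTML
REXML::Document.parse_stream(html, listener)
```

After parsing, `listener.elements` holds only the form, input, and link — the div and span were never stored.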
Lastly, I’d like to sincerely thank Peter Ohler (the Ox developer and maintainer) for his assistance and remarkable willingness to accommodate my need for some unusual features.
Session management has been updated to support clicking a specific form button to perform the login, rather than just submitting the form.
- proxy — bind_address default switched to 127.0.0.1, as 0.0.0.0 breaks SSL interception on MS Windows.
- Fixed a nil error.
- Added more HTTP and Browser Cluster metrics.
- Added support for non-authenticated SMTP configuration.
- Default to afr report.
- Retry on failure.
- webhook_notify — Sends a webhook payload over HTTP at the end of the scan.
- rate_limiter — Rate limits HTTP requests.
- page_dump — Dumps page data to disk as YAML.
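As an aside, the core idea behind request rate limiting can be sketched in a few lines of Ruby. This is illustrative only, not the plugin’s actual implementation:

```ruby
# Minimal sketch of rate limiting: allow at most `rate_per_second`
# operations per second by sleeping whenever we are ahead of schedule.
class RateLimiter
  def initialize(rate_per_second)
    @interval = 1.0 / rate_per_second
    @next_at  = nil
  end

  # Blocks until the next slot is available, then runs the block and
  # returns its result.
  def throttle
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    if @next_at && now < @next_at
      sleep(@next_at - now)
      now = @next_at
    end
    @next_at = now + @interval
    yield
  end
end
```

Wrapping each outgoing HTTP request in `throttle` would then cap the request rate without queueing logic anywhere else.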
- XSS checks have been updated to only run against responses with an HTML content type, in order to avoid FPs.
- SQL injection checks have been updated with more SQL server error messages.
- Unvalidated redirect checks have been updated with more payloads.
- Path traversal has had its payload traversal limit increased.
- The backup files check now ignores media files to avoid FPs.
- The backup directories check now adds remarks to issues, explaining how the original resource name was manipulated.
That’s all folks, I hope that you’ll enjoy the release and that you’ll forgive the extended delay.