As promised in my last post, here’s the second part of my update on the work that’s taking place for the development of Arachni’s brand new web interface — along with a few changes on the Framework side.
There are a few new things I’d like to share with you, such as:
- Incremental Scans — Repeating Scans.
- Issue reviews.
- More tightly integrated meta-analysis — weeds out false positives.
- Overhauled Profile handling — Regular users now allowed to create personal Profiles, admins can create Global ones.
- Several design changes.
At this point, I’d like to inform you that everything that follows can be found in the nightly packages and that these packages may very well blow up in your face since they are built from the experimental branches of the project’s repositories.
(Please excuse the size and lack of a 32bit package, that’s just temporary until I sort out some dev issues.)
Let’s get the Framework stuff out of the way first. These are things that in some way lend themselves to integration with the Web interface, not an overview of changes since the last release (see the CHANGELOG for those).
Separation of Framework and Interfaces
As some users have noticed, the old WebUI has gotten the boot and is no longer included in the experimental branch (and is subsequently missing from the nightly packages as well). This was for several reasons.
First of all, I wanted to separate the Framework (which is Arachni) from the bulk of resources and dependencies of the interfaces. The CLI is very light and serves as a basic driver for the Framework so it stays put, but as the project gets bigger it makes sense for non-core things to be pushed to non-core locations.
This also makes the Arachni GEM remarkably smaller and faster to install (and garbage-free) for people who are interested in only using the Framework libs (or RPC services), and That’s A Good Thing; Ruby GEMs are for distributing libs, not applications, and I should have known better than to include the old WebUI in there in the first place.
Secondly, including a complex Bundler-based application inside another complex Bundler-based application frightens me a bit and would be a stupid thing to force yourself to deal with — especially when the first reason I mentioned was enough motivation to enforce the change.
Unfortunately, this brought about a boring but real problem: what version do you give to the release packages (which will include both the Framework and the Web Interface)?
I thought of doing the Firefox thing and just incrementing an integer for each release, but that felt wrong somehow. However, in a flash of genius, I found the solution. I’ll give you a moment to prepare yourself to bask in the magnificence of my genius: I decided to use FRAMEWORK_VERSION-WEBUI_VERSION.
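In code, the scheme is as simple as it sounds. A minimal sketch (the constant names here are hypothetical; the real ones live in the respective repositories):

```ruby
# Hypothetical version constants -- the real ones are defined in the
# Framework and WebUI codebases respectively.
FRAMEWORK_VERSION = '0.4.2'
WEBUI_VERSION     = '0.1'

# Combined package version: FRAMEWORK_VERSION-WEBUI_VERSION.
def package_version
  "#{FRAMEWORK_VERSION}-#{WEBUI_VERSION}"
end

puts package_version # => "0.4.2-0.1"
```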
I can tell that you’re amazed… take a breather…
Issue remarks
Issues can now have remarks attached to them by any component, for any reason. Remarks can be anything: a warning to the person who will end up reviewing the issue, a description of how the issue was discovered, stuff like that.
Yes, this is unimpressive and very simple but serves as a nice way to propagate any sort of context all the way up to the user.
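To make the idea concrete, here’s a hypothetical sketch of what per-issue remarks look like conceptually. The names are illustrative, not the actual Arachni API:

```ruby
# Illustrative sketch of issue remarks -- not the actual Arachni classes.
class Issue
  attr_reader :name, :remarks

  def initialize(name)
    @name = name
    # Remarks are grouped by the component (author) that added them.
    @remarks = Hash.new { |hash, author| hash[author] = [] }
  end

  # Any component can attach a remark under its own name, for any reason.
  def add_remark(author, text)
    remarks[author] << text
  end
end

issue = Issue.new('Common directory')
issue.add_remark(:meta_analysis,
                 'Site responses were erratic during the scan; verify manually.')
puts issue.remarks[:meta_analysis].first
```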
More tightly integrated meta-analysis
Meta-analysis is a high-level analysis of the lower-level analysis that is responsible for discovering issues. In less confusing terms, it analyzes the results of the scan once the scan is done.
This serves as a way for the system to look for anomalous results while having a wealth of information which would not be available at the time any given issue would be discovered.
Let me put it this way: the modules which discover the issues are like soldiers in battle, while meta-analysis is the general overseeing everything from the top of the mountain. The soldiers can be confused and tricked while dealing with the mayhem on the battlefield, but the general sees the whole picture.
That sort of analysis in the Framework is performed by plugins but the results of the analysis were not easy to access. Instead of the plugins operating on the Issue objects themselves and flagging them according to their findings, they were logging those results in data structures of their own, forcing you to dig through these structures to access that extra context — rather unnecessarily.
Now, the meta-analysis plugins use the previously mentioned Issue remarks to add their 2 cents and can flag issues as requiring manual verification if the issue is deemed not to be trustworthy.
So, as an example, let’s say that you have a crazy website that behaves unbelievably erratically, to the point of defying any sort of deterministic outcome and whose custom 404 responses cannot be fingerprinted.
That would lead to a lot of false positives from modules which perform discovery of directories, sensitive files and things like that. But, at the end of the scan, the meta-analysis process would go through the scan results and immediately spot the issue.
There are a gazillion vulnerabilities of the same type which rely on the same subsystem for identification and analysis, the site’s behavior was terribly inconsistent in its responses during the entire scan, and the issues themselves have a high degree of similarity between them.
That is pretty conclusive evidence that the issues in question cannot be trusted and thus require manual verification by a human (and in the case of discovery modules, a strong indication that the issues are false positives), so the plugin also adds an explanation (as remarks) to each affected issue.
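The gist of that pass can be sketched in a few lines. This is an illustrative toy, not Arachni’s actual meta-analysis code; the names and the similarity threshold are made up:

```ruby
# Hypothetical sketch of a meta-analysis pass: flag groups of overly
# similar issues as requiring manual verification and explain why.
ScanIssue = Struct.new(:type, :requires_verification, :remarks)

def meta_analyze(issues, similarity_threshold: 3)
  issues.group_by(&:type).each_value do |group|
    # Lots of issues of the same type, logged via the same subsystem,
    # is a strong hint the detection was fooled by erratic responses.
    next if group.size < similarity_threshold

    group.each do |issue|
      issue.requires_verification = true
      issue.remarks << "#{group.size} similar issues were logged and site " \
                       'responses were inconsistent -- verify manually.'
    end
  end
  issues
end

issues = Array.new(4) { ScanIssue.new('Common directory', false, []) }
meta_analyze(issues)
puts issues.first.requires_verification # => true
```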
So when it comes time to review the scan results you know to de-prioritize those issues and treat them with suspicion.
Now you may wonder: why not remove those issues completely and be done with it? Well, I don’t want to… it’s better that these issues are preserved, if only to bring the strange behavior of the webapp to the attention of the reviewer. The more knowledge you have during a security assessment, the better.
Web Interface stuff
I was going to make before/after lists based on things that changed since my last post but… pretty much everything changed, so brace yourselves because there will be a lot of screenshots again.
The dashboard hasn’t changed much since my last post; besides a very simple line chart and better event descriptions, things are pretty much the same.
Very basic stuff for now but it’s going to evolve.
- Profiles are split into Global and Personal.
- Global Profiles can only be created by admins and are available to every user in the system.
- Personal Profiles are created by regular users and are only visible to them.
- Users can share their personal Profiles with each other if they want.
- As usual, admins can do anything to any Profile — including making it Global.
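The visibility rules above boil down to a simple check. Here’s a hypothetical sketch of that logic; the model and method names are illustrative, not the WebUI’s actual code:

```ruby
# Illustrative Profile visibility rules -- not the actual WebUI models.
Profile = Struct.new(:owner, :global, :shared_with)

def visible_to?(profile, user, admin: false)
  return true if admin                  # admins can see (and edit) everything
  return true if profile.global         # global Profiles: visible to all users
  return true if profile.owner == user  # your own personal Profiles
  profile.shared_with.include?(user)    # personal Profiles shared with you
end

personal = Profile.new('alice', false, ['bob'])
puts visible_to?(personal, 'bob')   # => true
puts visible_to?(personal, 'carol') # => false
```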
You may have noticed the “Others'” tab; that’s there because the screenshots were taken from an administrative account.
Some changes were made to the “New Scan” page to make the form more condensed and less confusing to newcomers and the progress monitoring page has been updated a bit as well.
Also, advanced options are now hidden by default and the default is a Direct scan (no Dispatcher) utilizing a single Arachni Instance — which is what most people will choose.
Ignore the “Scan groups” option for now, I’ll explain in a bit.
The Scan progress page has had its Issues table updated to accommodate the reviewing functionality; the table has tabs with the issues categorized according to their review status (or lack thereof).
Scan revisions are repeats of a finished scan, but they only add new information on top of all previous revisions (the first scan is considered the root revision).
This means that revisions will only log issues that have not already appeared in previous revisions in order to make reviewing easier. However, if an issue that appeared in a previous revision does not appear in a future one, that issue will be automatically marked as “fixed” and you will receive a relevant notification about it (and you’ll also be able to see that in an Issue’s timeline).
Also, since we’re no longer starting fresh, we’ve got some info that we can use to make the job easier, like the sitemap. Repeated scans can use the combined sitemaps of previous revisions to either extend a new crawl or skip the crawl and use the merged sitemaps instead.
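In essence, each revision is a couple of set operations. A minimal sketch of the idea, under the names I’m making up here (not the actual implementation):

```ruby
# Hypothetical sketch of a scan revision's bookkeeping.
# Issues and sitemap entries are represented as plain strings for brevity.
def revision_delta(previous_issues, current_issues)
  {
    # Only issues not seen in any previous revision get logged as new...
    new:   current_issues - previous_issues,
    # ...and issues that no longer appear are automatically marked "fixed".
    fixed: previous_issues - current_issues
  }
end

# Later revisions can reuse the combined sitemaps of earlier ones
# to extend -- or skip -- the crawl.
def merged_sitemap(*sitemaps)
  sitemaps.flatten.uniq
end

puts revision_delta(['/a', '/b'], ['/b', '/c']).inspect
# => {:new=>["/c"], :fixed=>["/a"]}
```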
Eagle-eyed visitors will have spotted that some screenshots have little relation to each other but they should be enough to drive the point home.
Groups allow you to…well…group scans together. They serve as scan containers which can be shared with multiple users to make collaboration and organization easier (and scans can belong to multiple groups as well).
You can treat them as projects (for keeping all scans for a specific website together), assignments (assigning users to monitor/review scans for a certain client) or just keeping things tidy and making filtering/management easier.
Basically, Scan Groups allow you to tie multiple users to multiple scans, the workflow you choose is up to you.
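That many-to-many relationship is easy to picture in code. A hypothetical sketch, with made-up names rather than the WebUI’s actual models:

```ruby
# Illustrative Scan Group container -- ties many users to many scans.
ScanGroup = Struct.new(:name, :scans, :members)

groups = [
  ScanGroup.new('client-acme', ['scan-1', 'scan-2'], ['alice', 'bob']),
  ScanGroup.new('weekly',      ['scan-2'],           ['bob'])
]

# A scan can belong to multiple groups at once.
def groups_for_scan(groups, scan)
  groups.select { |group| group.scans.include?(scan) }.map(&:name)
end

puts groups_for_scan(groups, 'scan-2').inspect
# => ["client-acme", "weekly"]
```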
Issues can be discussed and reviewed, as you can see from the following screenshots:
Remember the meta-analysis stuff I mentioned earlier? This is where it comes into play.
Meta-analysis plugins can automatically flag issues which have been deemed untrusted as requiring manual verification in order to take some of the burden off you — along with adding some remarks that explain under what light a suspicious issue was discovered.
Hopefully, you will never have to deal with that particular problem. I had to set up a test server which behaves in a very specific broken way in order to fool Arachni into logging these issues in the first place.
A “Settings” page has been added which does exactly what you think.
If you are an admin hit the dropdown menu with your name on it to see the link to the settings page.
Thanks for making it to the end of this post. :)