
New engine sitrep

UPDATE (14/12/2017): The benchmark section has been updated; a big new optimization came to me and I wanted to share.


Hello everyone,

It’s been a while since my last post, where I mentioned working on a new engine, and now that the system is fairly solid I wanted to take some time and provide updated benchmarks and some technical details.

Multi-platform

The new engine is multi-platform and supports Linux, OS X and MS Windows.

In the future, it may also be multi-arch (looking at phones and tablets), but let’s take it one step at a time.

Performance

The last post included some impressive performance improvements over Arachni, but a lot of work has gone into the system since then and I’ve got even more impressive results for you.

Benchmarks

http://testhtml5.vulnweb.com

Scan                     | Duration | RAM   | HTTP requests | HTTP requests/second | Browser jobs | Seconds per browser job
New engine (current)     | 00:01:50 | 102MB | 8,417         | 86.796               | 182          | 2.081
New engine (last update) | 00:02:14 | 150MB | 14,504        | 113.756              | 211          | 1.784
Arachni                  | 00:06:33 | 210MB | 34,109        | 101.851              | 524          | 3.88

As you can see, the differences are significant even for sites of smaller size.

Larger, real production site — cannot disclose

Scan                      | Duration | RAM     | HTTP requests | HTTP requests/second | Browser jobs | Seconds per browser job
* New engine (current)    | 00:17:36 | 270MB   | 20,435        | 40.119               | 2,757        | 1.826
New engine (current)      | 00:13:54 | 399MB   | 18,698        | 31.54                | 2,738        | 2.71
New engine (last update)  | 00:45:31 | 617MB   | 60,024        | 47.415               | 9,404        | 2.354
Arachni                   | 12:27:12 | 1,621MB | 123,399       | 59.516               | 9,180        | 48.337

The first row (with the asterisk) is the engine running with its defaults; the rest use the settings of the original Arachni scan, which included some performance optimizations that aren’t necessary with the new engine, as it’s plenty fast by itself.

In addition to the significant duration and RAM decreases, the CPU is used much more efficiently (and more than 1 core at a time whenever possible) and disk usage has also decreased dramatically.

Sometimes disk usage was an issue with Arachni, even going into the GB range and in rare cases into the tens of GBs if a workaround wasn’t applied. For this site, the new engine maxes out at around 80MB (for about a minute) and, even though it’s hard to measure this for Arachni (because I can’t be arsed to run a 12-hour scan again), it would probably have been plenty.

How?

Like so: This isn’t Arachni.

This is a new project and it is basically how I would do things now, after what I’ve learned by working on Arachni all these years, but done safely, without the risk of ruining Arachni, which works.

So yeah, the current Ruby code in this project is now API, business-logic (mostly scheduling) and (some) data. It mostly keeps all the processes in-line and organizes and reads/writes data from/to I/O and native extensions.

The API still mostly looks like Arachni’s, but every bit of code has been cleaned up, profiled, optimized and refactored.

And, for the most part, completely rewritten: the most significant remaining Ruby is the scheduling of the scan operations, which is brand new and plays a major role in the performance gains and the reduced resource usage.

And those extensions I mentioned are written in Rust (and C, but I’m talking about my own now), which is where I’m headed with this; I’d like to eventually move everything to Rust, but there’s no hurry; the bottlenecks are now gone, so all is well.

Still, if it weren’t for the fact that all the scan data need to pass through Ruby, RAM usage would be almost negligible.

Things that were moved to Rust are:

  • HTML parser.
  • URL parser.
  • Signatures for webapp behaviour.
  • Smaller things from all over that are faster in Rust.

And inside Rust your code can be truly multi-threaded, so these heavy operations are spread across CPU cores.

Lest we forget, Rust was designed by Mozilla as a new language for their prototype browser engine, Servo, which means that a lot of the things built for Servo are very useful to a system like this one:

  • html5ever – High-performance browser-grade HTML5 parser.
    • Used as a basis for the new HTML parser.
  • rust-url – URL parser.
    • Used by the new URL parser.
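
As a rough illustration of how the split described above might look from the Ruby side, here is a minimal sketch; every module and method name below is hypothetical, invented for this example, since the real native-extension API hasn’t been published:

```ruby
# Hypothetical sketch only: Engine::Native and its methods are invented names
# for illustration; the real extension API may look nothing like this.
require 'engine/native' # assumed to load the Rust extension

html = '<html><body><a href="/login?next=%2F">Log in</a></body></html>'

# The heavy lifting (html5ever and rust-url underneath) happens on the Rust
# side, potentially across several threads; Ruby only receives the results.
document = Engine::Native::HTML.parse(html)

document.links.each do |href|
  url = Engine::Native::URL.parse(href)
  puts "#{url.path} (#{url.query})"
end
```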

Rust extension downsides:

  • Multi-threaded support doesn’t work on OS X.
    • I’m looking into the bug, single-threaded fallback for now.
  • Doesn’t currently work on MS Windows.
    • I’m looking into the bug, Ruby implementation as a fallback for now.

I would also like to thank the Ruru project at this point; it has been a big help in transitioning to Rust and the people there have been great. Thanks, folks.

Stability

There’s a feature/system in the new engine, let’s call it Slot Management. That system is aware of the host’s resources (CPU, RAM, Disk) in real-time and can determine how many scans can be safely run in parallel, at any given time, based on the available system resources and recommended system requirements.

So, if you’re integrating via REST/RPC APIs, you won’t be able to stress your scanner machines into crashed scans due to low RAM or low disk space, etc. And if no slots are currently available, just keep asking; when one opens up you can proceed as usual. It’s very simple.
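
For instance, an integration script could just poll until a slot frees up. A minimal sketch, assuming a hypothetical REST endpoint that reports slot availability (the route and JSON fields are made up here, not the engine’s documented API):

```ruby
# Hypothetical sketch: the '/slots' route and its JSON fields are assumptions
# made for this example, not the engine's documented REST API.
require 'net/http'
require 'json'
require 'uri'

SLOTS = URI('http://localhost:7331/slots')

def free_slots
  JSON.parse(Net::HTTP.get(SLOTS))['available'].to_i
end

# No slot available? Just keep asking; once one opens up, proceed as usual.
sleep 10 until free_slots > 0
puts 'Slot available, submitting the scan...'
```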

This also affects the CLI to an extent.

Best of all, you can override the automated slot calculation and set your own number of available slots for your machine, optimizing for your needs.

Example

If you know you’ll be scanning very simple sites, their resource needs will fall well below the recommended system requirements, so you can set a much higher number of slots; and accordingly a lower one for the opposite case.

Or, you could be scanning multiple versions of the same site while in development, and know beforehand what kind of resources it requires, and set the perfect number of slots for your scanner machines.

Integration

Integration options have greatly improved in the new engine, so let’s look at them one by one.

Scan Ruby API/DSL

Arachni lacked a nice, clean, native Ruby scan API; you had to work with the Framework and other libraries, and it was messy. Direct use of those libraries wasn’t greatly encouraged in the first place, as it was always better to use something like the REST API, due to its simplicity and safety when integrating, and I wasn’t planning on changing that approach.

But, I wanted a single point-of-entry scan API for all the different interfaces, so I got to work.

However, one thing kinda sorta led to another, and while I was building the API I started giving it a DSL-ish feel, and then I figured, what if I built a DSL for the API to allow scripting? And after I did that, I figured why not see if I could create a DSL for building the API in the first place?

So… the API and DSL building libraries are generic and can be used to build other APIs and act as nice DSLs in their own right; they can also be used to build APIs via a DSL and provide a DSL for those APIs; so, they will be released as their own project and Ruby gem for everyone to enjoy.

And now, the new engine has a nice Ruby API and scripting DSL.

The gist of it is that you will easily be able to create your very own scanners, because with a few (fairly) simple lines of Ruby code you will be able to configure the engine and/or affect/replace its decision-making processes.

And, those scripts can include other scripts and the interfaces can load multiple scripts, so you can organize what you’re trying to change in the engine as you see fit.

For example, you could have logging, debugging, scope or check scripts, or scripts with workarounds for specific sites, and mix and match.
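
To give a feel for it, here’s what a small scope/logging script might look like; every DSL call below is a hypothetical stand-in, since the actual DSL hasn’t been published yet:

```ruby
# Hypothetical scan script: every DSL method here is a made-up example of the
# kind of configuration and decision-making hooks described above.
target 'http://testhtml5.vulnweb.com'

scope do
  exclude_path_patterns /logout/
  page_limit 200
end

checks :xss, :sql_injection

# Affect the engine's decision-making: skip pages the script deems uninteresting.
on_page do |page|
  skip! if page.url.include?('/static/')
  log "Auditing #{page.url}"
end
```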

Another really cool by-product of this, since scripts can be a self-contained type of configuration, is that you can use them as such and run/post them as scans via CLI, REST or RPC. And you can store them in your favourite version control system and be able to keep a history and revert etc.

RPC

Queue

A much-needed feature missing from Arachni was a queue; users frequently asked if they could schedule multiple scans to be run, and this is now nicely possible once you take slots into account.

So the Queue service is basically that: a queue that accepts multiple scans and runs as many of them in parallel as possible. It also monitors them for you and stores their reports in a specified location.

You can also attach/detach running scans to/from it.

Say you started a scan from a different interface but now want the Queue to monitor it: you can attach that scan to the Queue and it will take care of it for you. Or you may want to pass a scan to a different interface for monitoring: detach it from the Queue and take responsibility for it yourself. You can even move a scan to another Queue, by detaching it from one and attaching it to the other.

Queues can of course be configured with a Dispatcher, which opens them up to their Grid. And Queues can share Dispatchers, so you could have a central scan Grid, but per-type/department/whatever Queues.

Finally, there’s a numeric priority assigned to Queue entries, so you can arrange your scans accordingly.
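
A sketch of how that might look via the Ruby API; the client class and method names are hypothetical, purely to illustrate pushing, prioritising and attaching/detaching scans:

```ruby
# Hypothetical sketch: the Queue client class and its methods are invented
# purely to illustrate the behaviour described above.
queue = Engine::RPC::Client::Queue.new('localhost:7331')

# Entries carry a numeric priority, so scans can be arranged accordingly.
id = queue.push(File.read('scans/staging.rb'), priority: 9)
queue.push(File.read('scans/marketing.rb'), priority: 1)

# Adopt a scan started from another interface, or hand one off elsewhere.
queue.attach('token-of-a-scan-started-elsewhere')
queue.detach(id)
```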

Dispatcher

Dispatchers no longer need a pool of warmed-up processes, so start-up is basically instant and no system resources are being hogged unnecessarily.

Also, like all else in the engine, they are slot-aware, so they won’t dispatch scanners if no more slots are available.

Grid

Nodes can now be unplugged from the Grid, so you can easily scale down as well as up.

Also, the Grid is pretty fault-tolerant — already was with Arachni too, but now it’s more reliable.

Whenever a node spots a dead member, that member is flagged Grid-wide as dead and won’t be considered part of the Grid any more, unless identified as alive (by any node) on subsequent pings. So killed, crashed or disconnected nodes don’t affect the Grid.

In addition, if a Dispatcher crashes or gets killed, this won’t affect its running scans, nor can any scan’s state affect another’s — pretty much anything that does any serious work gets its own process in order to limit fault-propagation and also take advantage of multiple CPUs.

Also, load-balancing uses a utilization ratio derived from slot info, ensuring that the workload remains as low as possible for each node, which improves overall performance and stability across the Grid.

Or, you could reverse that, switch to a cheapskate strategy, cram as many scans as possible onto as few nodes as possible, and unplug the ones that end up sitting around.

Example

Basically, what this lets you do is start with one Dispatcher and kick off your scans.

If at some point the list of rejected scans (because the machine can’t support any more) becomes too large, you can simply start a Dispatcher on a different machine and tell it to connect to the first one.

Now scans will also run on the second Dispatcher, with the two basically combining their available slots and automatically figuring out which one of them is best suited to run each scan.

And you can add even more Dispatchers for even more slots.

And if at some point you find yourself with an empty or small scan list, you can unplug the Dispatchers you no longer need.

By the way, scans can be requested from any node, but they will still be provided by the best-suited one. There’s an optional override to that, however: per-scan Grid-blindness, should you ever need it.

What I basically described above is a Queue configured to use a Dispatcher, with the Dispatcher later becoming part of a Grid.
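
Putting that together, the whole setup might be driven along these lines; none of the class or method names below come from the actual engine, they are placeholders for illustration:

```ruby
# Hypothetical sketch of the Queue -> Dispatcher -> Grid setup described above;
# all class and method names are invented for this example.
dispatcher1 = Engine::RPC::Client::Dispatcher.new('node1:7331')
queue       = Engine::RPC::Client::Queue.new('localhost:7333', dispatcher: dispatcher1)
queue.push(File.read('scans/my_site.rb'))

# Scans pile up and node1 runs out of slots? Plug in a second machine.
dispatcher2 = Engine::RPC::Client::Dispatcher.new('node2:7331')
dispatcher2.plug('node1:7331') # the two now pool their slots as a Grid

# Later, when the scan list shrinks, scale back down.
dispatcher2.unplug
```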

REST

The REST interface in Arachni was basically an RPC translation layer, providing easy and simple access to a very limited subset of the RPC functionality.

The new engine’s REST interface is still just an RPC translation layer, but has been updated to provide easy and simple access to all RPC functionality.

You can:

  • Start scans from it and use it to monitor their progress (like before, but that was all you could do before).
    • Slot-aware.
  • Configure it with a Dispatcher.
    • Scans will come from it rather than the local machine.
      • Or its Grid if it belongs to one.
    • Access all RPC functionality for management.
    • Manage its Grid if it belongs to one — plug, unplug, monitor etc.
  • Configure it with a Queue — which can be configured with a Dispatcher which can be a member of a Grid.
    • Access all RPC functionality for management.
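
For example, since scan scripts double as self-contained configuration, submitting one over REST might look roughly like this; the route and payload format are assumptions, not the published API:

```ruby
# Hypothetical sketch: the '/scans' route and the request/response format are
# assumptions for illustration, not the engine's published REST API.
require 'net/http'
require 'json'
require 'uri'

res = Net::HTTP.post(
  URI('http://localhost:7331/scans'),
  { script: File.read('scans/my_site.rb') }.to_json,
  'Content-Type' => 'application/json'
)

scan_id = JSON.parse(res.body)['id']
puts "Monitoring scan #{scan_id}..."
```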

Modern browser engines

The new engine will not use PhantomJS, as it’s now a dead project due to the progress made by Google and Mozilla on headless versions of Chrome and Firefox. Instead, browser engines have been abstracted and modularised, and both Chrome and Firefox are provided as such.

As a result, the new engine will always have access to the cutting edge in performance and support, since Chrome and Firefox pretty much have these covered.

You could also disable using browsers altogether, but that won’t get you many results these days.

Another cool feature is that browser visibility is optional, which means that when developing scripts or debugging log-in procedures, you can make the browser window visible and see exactly what’s going on.

Device emulation

The new engine can emulate PCs, tablets, phones and wearables from the HTTP layer down (up?) to the browser — although how each browser does that is up to it.

The options you can set are:

  • Screen width
  • Screen height
  • User-agent
  • Enable touch events
  • Pixel-ratio

Some of these could also be set in Arachni in different places, but they have now been grouped in their own option class and extended.
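
As a sketch, setting those options from inside a scan script might look something like this; the option group and attribute names are hypothetical:

```ruby
# Hypothetical sketch: the option class and attribute names are invented to
# illustrate the grouped device-emulation options listed above.
options.device.width       = 414    # screen width in pixels
options.device.height      = 736    # screen height in pixels
options.device.user_agent  = 'Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X)'
options.device.touch       = true   # enable touch events
options.device.pixel_ratio = 3.0
```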

Quick pause/resume/suspend (to disk)/restore (from disk)

Pause, resume and restore operations were OK in Arachni, but they are now near-instant, and the new engine allows for quick suspension as well.

In Arachni, browser jobs couldn’t be suspended, so you had to wait for them to finish, which sometimes made the suspend-to-disk functionality a moot point if there were a lot of jobs to be performed.

Now this is no longer an issue: once the current page has finished being checked, the system writes its state to disk and that’s it, usually a matter of seconds.

Release?

It’s probably going to be a while; I don’t have any specifics.

Afterthoughts

It just occurred to me (while folding clothes) that within the new engine I’ve effectively built a general-purpose, fault-tolerant, load-balancing grid, with a priority-based scheduler whose jobs are arbitrary scripts driving arbitrary APIs via arbitrary DSLs, which supports slots and can off-load heavy-duty work to Rust. And it has a nice REST API too.

So, you can describe whatever you want to run, in whatever way you want, and safely run as much of it as you can, across as many machines as you can get your hands on, even if it’s really heavy-duty work because you can delegate to Rust. And you can manage all of this via a nice REST API.

Something like:

  • Define Worker API via API builder DSL.
    • In the case of the engine that’s the Scan API.
  • Write script for Worker API DSL.
    • In the case of the engine that would be a scan script.

Then:

  • Client sends script to Queue.
  • Queue receives script.
    • Waits until Dispatcher slot opens.
  • Dispatcher finds optimal node.
    • Spawns Worker on said node.
    • Sends info of Worker to Queue.
  • Queue receives Worker info.
    • Sends script to Worker.
  • Worker receives script.
    • Runs the script inside the Worker API DSL.
      • Optionally calling Rust.

In essence, you’d have an entire grid/cloud that implements an API/DSL and you’d use that API/DSL by sending your script to it over REST.

I could extract the Grid stuff from the engine and have a nice, clean, fast, simple, versatile grid in a gem… there could be something to this.


I’m sure I’ve forgotten some stuff; there have been a lot of big changes, but I believe I’ve mentioned the important ones.

So, yeah, that’s all for now folks.

Cheers,

Tasos L.


About Tasos Laskos

CEO of Sarosys LLC, founder and lead developer of Arachni.

4 Responses to "New engine sitrep"

  • Jon Zeolla
    September 28, 2018 - 3:06 pm

    Since you mention “This isn’t Arachni” can you suggest the best place to monitor for the new release, or would it still be this blog?

    • Tasos Laskos
      September 28, 2018 - 7:50 pm

      It’s still this blog unless otherwise announced.

    • Tasos Laskos
      September 28, 2018 - 7:51 pm

      No that’s still Arachni, the nightlies have always been named in this fashion.
