
8.4. Building the Scanner

Next we begin crafting the code for our scanner. The first thing we need to do is open our script and set up our command-line options. We use the Getopt::Std Perl module to parse the three command-line options outlined in Table 8-2.

Table 8-2. Command-line options

Option  Input                 Description
-c      Cookie data string    Use these HTTP cookies for all test requests.
-o      Output filename       Log all output to this filename.
-v      (none)                Generate verbose output.

We also need to check whether at least two arguments have been passed to the script (the two mandatory arguments of the input filename and hostname). If two arguments have not been passed, the script dies and prints out some basic syntax info:


use LWP::UserAgent;
use strict;
use Getopt::Std;

my %args;
getopts('c:o:v', \%args);
printReport("\n** Simple Web Application Scanner **\n");

unless (@ARGV > 1) { 
 die "\nUsage: $0 [-o <file>] [-c <cookie data>] [-v] inputfile http://hostname\n\n-c: Use HTTP Cookie\n-o: Output File\n-v: Be Verbose\n\n"; 
}

Notice in the preceding code that we already called a custom subroutine, printReport. This subroutine is an extremely simple routine for printing output to the screen and/or log file. Let's jump down and take a look at it.

8.4.1. Printing Output

We have developed a custom subroutine that our script uses for printing output. We have done this because we have a command-line option (-o) that allows all output to be sent to an output file, so we can send everything through one subroutine that handles output to both the screen and a file, if necessary.

printReport subroutine

As we just mentioned, we use the printReport subroutine to manage the printing of output to both the screen and output file, if necessary. Let's take a quick look at this routine's contents:

sub printReport {
 my ($printData) = @_;
 if ($args{o}) { 
  open(REPORT, ">>", $args{o}) or die "ERROR => Can't write to file $args{o}\n";
  print REPORT $printData;
  close(REPORT);
 }
 print $printData;
}

As we mentioned, this routine is pretty simple. It takes one parameter as input (the data to be printed), appends the data to a file if the user specified the -o option ($args{o}), and prints the data to the screen. If the script cannot open the log file for writing, it dies and prints the error to the screen. Now all we have to do when we want to print something is send it to printReport, and we know it ends up printing in the right place(s). Now that we have finished the first subroutine, let's go back to the main body of the script.

8.4.2. Parsing the Input File

If we have made it this far in the execution cycle, we know the user has provided two arguments, so we assume the first one is the input file and we attempt to open it. If the open fails, the script dies and prints the error to the screen. If the open succeeds, we populate the @requestArray array with the contents of the input file:

# Open input file
open(IN, "<", $ARGV[0]) or die "ERROR => Can't open file $ARGV[0].\n";
my @requestArray = <IN>;

Now that we have opened our input file, the @requestArray array contains all the requests that were extracted from the input file. At this point, we can begin to process each request in the array by performing a foreach loop on the array members.

We use the following request for all our examples:

GET /public/content/jsp/news.jsp?id=2&view=F

At this point in the script, we also declare a few other variables: specifically, $oResponse and $oStatus (the response content and status code generated by our request), and two hashes for storing a log of all directory- and parameter-based test combinations we perform. We use the log hashes primarily to ensure that we do not make duplicate test requests (we discuss this in greater detail later in the chapter). As we perform each loop, we assign the original request from the input file to the $oRequest variable:

my ($oRequest,$oResponse, $oStatus, %dirLog, %paramLog);
printReport("\n** Beginning Scan **\n\n");

# Loop through each of the input file requests
foreach $oRequest (@requestArray) {

Once we start the loop, the first thing we do is to remove any line-break characters from the input entry and ensure that we are dealing with a GET or a POST request; otherwise, there is no need to continue. Although every line in our input file should contain only one of these two request types, because we are accepting an external input file we need to validate this fact:

# Remove line breaks and carriage returns
 $oRequest =~ s/\n|\r//g;

 # Only process GETs and POSTs
 if ($oRequest =~ /^(GET|POST)/) {

Next, we determine whether the request contains input parameters (either in the query string of a GET request or in a POST request) by inspecting the line for the presence of a question mark (?). If we find one, we need to parse the parameters and perform input parameter testing; otherwise, we skip parameter testing and move directly to directory testing:

 # Check for request data
 if ($oRequest =~ /\?/) {

For requests that contain parameter data, we perform parameter-based testing to identify a couple of common input-based vulnerabilities. Within the parameter-based testing block, the first action we perform on the request is to replay the original request (without altering any data):

  # Issue the original request for reference purposes  
  ($oStatus, $oResponse) = makeRequest($oRequest);

The reason we do this, although perhaps not immediately obvious, is quite simple. Our scanning tool is testing the application based on a series of specific "test requests" made to the application. The responses generated by each test request are analyzed for particular signatures indicating whether the specific vulnerability we are testing for is present. Because our findings are based on the output generated by each test request, we must be sure the presence of the vulnerability signature we are using is a direct result of our test request and not merely an attribute of a normal response.

For example, let's say we are looking for the string SQL Server in the test response to identify the presence of a database error message. However, the page we are testing contains a product description for software that is "designed to integrate with SQL Server." If we aren't careful, we might mistakenly identify this page as being vulnerable simply because the string SQL Server was contained in every response. To mitigate this risk, we preserve the original "valid" responses for each page before we begin our testing to validate that our signature matches are a result of the test we are performing and not a result of the scenario just described. This helps to ensure that we do not report false positives based on the content of the page or application we are testing.
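The baseline comparison described above can be sketched as a small helper. The signatureIsNew name and the sample response strings below are our own illustration, not part of the scanner itself:

```perl
use strict;
use warnings;

# A signature only counts as a finding when it appears in the test
# response but NOT in the original (baseline) response.
sub signatureIsNew {
    my ($origResponse, $testResponse, $regex) = @_;
    return (($testResponse =~ $regex) && ($origResponse !~ $regex)) ? 1 : 0;
}

my $sig = qr/SQL Server/i;

# Page text that always mentions "SQL Server": not flagged.
my $copy = "Our product is designed to integrate with SQL Server.";
print signatureIsNew($copy, $copy, $sig), "\n";    # 0

# A database error invoked only by the test request: flagged.
my $orig = "Latest company news appears here.";
my $test = "Microsoft OLE DB Provider for SQL Server error '80040e14'";
print signatureIsNew($orig, $test, $sig), "\n";    # 1
```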

8.4.3. Making an HTTP Request

This brings us to our next subroutine, makeRequest, which is responsible for making the actual requests during our scanning. As you can see in the last piece of code, the makeRequest subroutine is called to make the request, and it returns two variables (the status code and the response content). Let's jump down to this subroutine and take a closer look at exactly what is happening.

makeRequest subroutine

This subroutine is used to make each request we want to generate while testing the application. Keep in mind that this routine is not responsible for manipulating the request for testing purposes; it merely accepts a request and returns the response. Manipulating data for testing occurs outside of this subroutine, depending on the test being performed.

We need to consider several things here, specifically the inputs and outputs of the routine. Because we have already developed a fairly simple and consistent format for storing requests in our input file, it makes sense to pass requests to this routine using the same syntax. As such, this subroutine expects one variable to be passed to it that contains an HTTP request in the same format as our input log entries. The output requirements for this routine depend directly on the information we need in order to decide whether a given test succeeded. At a minimum, the response body (typically HTML) is returned so that we can analyze the contents of the response output. In addition to the response body, we need to check the status code returned by the server to determine whether certain tests resulted in success or failure.

Another feature we discussed earlier was the ability for our scanner to use HTTP cookies when making test requests. Most web applications use HTTP cookies as a means of authenticating requests once the user has logged in (using a Session ID, for example). To effectively test the application, our tool needs to send these cookie(s) with each test request. To keep things simple, we assume these cookie values remain static throughout the testing session.

Now we can take a close look at this subroutine. The first thing it does is declare some variables and accept one input variable (the request):

sub makeRequest {
 my ($request, $lwp, $method, $uri, $data, $req, $status, $content);
 ($request) = @_;
 if ($args{v}) {
  printReport("Making Request: $request\n");
 } else {
  print ".";
 }

You can see we are also printing some output based on the presence of the -v (verbose) option. Note, however, that for nonverbose output we are using print instead of printReport. This is because we are printing consecutive periods (.) to the screen each time a request is made to indicate the script's progress during nonverbose execution. Although we want the verbose message to appear in the output file, we do not want these periods to appear there. Next, we set up a new instance of LWP to make the HTTP request:

 # Set up the LWP UserAgent
 $lwp = LWP::UserAgent->new(env_proxy => 1,
              keep_alive => 1,
              timeout => 30,
             );

Now we need to parse the request data. Because we plan on performing upload testing via the HTTP PUT method, we need to support the GET, POST, and PUT methods. Both the POST and PUT methods need to pass some data in the body of the request, and as such, we need to perform a bit more processing for these two request methods. First, we split the input variable ($request) on the first space to parse out the method ($method) from the actual request data ($uri). For the POST and PUT requests, we can go ahead and parse out the data portion of the request ($data) as well by splitting the $uri variable based on a question mark:

# Method should always precede the request with a space
 ($method, $uri) = split(/ /, $request, 2);

 # PUTs and POSTs should have data appended to the request
 if (($method eq "POST") || ($method eq "PUT")) {
  ($uri, $data) = split(/\?/, $uri, 2);
 }
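As a standalone illustration, the same split logic can be run against a hypothetical POST entry (the login URL here is made up for the example):

```perl
use strict;
use warnings;

# Parse a sample input-file entry: method first, then URI, then any
# body data after the first question mark.
my $request = "POST /public/login.jsp?user=admin&pass=test";
my ($method, $uri) = split(/ /, $request, 2);
my $data;
if (($method eq "POST") || ($method eq "PUT")) {
    ($uri, $data) = split(/\?/, $uri, 2);
}
print "method=$method uri=$uri data=$data\n";
# method=POST uri=/public/login.jsp data=user=admin&pass=test
```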

Now that we have our essential request data parsed into separate variables, we can set up the actual HTTP request. We know the hostname and cookie values being used for testing are available via the $ARGV[1] and $args{c} values, respectively (both of these are provided as inputs to the script). You'll notice here that we manually add our own custom "cookie" header value only if the $args{c} variable is populated because this is an optional switch. Although LWP does have an additional module designed specifically for handling HTTP cookies (LWP::Cookies), we don't really need the robust level of functionality this module provides because our cookie values remain static across all test requests.

 # Append the URI to the hostname and set up the request
 $req = new HTTP::Request $method => $ARGV[1].$uri;

 # Add request content for POSTs and PUTs
 if ($data) {
  $req->content($data);
 }

 # If cookies are defined, add a Cookie: header
 if ($args{c}) {
  $req->header(Cookie => $args{c});
 }

Now that the request has been constructed, we pass it to LWP and parse the response that is sent back. We already decided the two pieces of the response we are most interested in are the status code and the response content, so we extract those two pieces of the response and assign them to the $status and $content variables accordingly:

 my $response = $lwp->request($req);

 # Extract the HTTP status code and HTML content from the response
 $status = $response->status_line;
 $content = $response->content;

It should be noted that the hostname or IP address ($ARGV[1]) supplied to LWP must be preceded with http:// or https:// and can optionally be followed by a nonstandard port number appended with a colon (e.g., http://hostname:8080).

Note in the next and final piece of this subroutine that we check for a 400 response status code. LWP returns a 400 (Bad Request) response when it is passed an invalid URL, so this response likely indicates the user did not supply a well-formed hostname. If this error occurs, the script dies and prints the error to the screen. Provided this is not the case, we return the $status and $content variables and close the subroutine:

 if ($status =~ /^400/) {
  die "Error: Invalid URL or HostName\n\n";
 }
 return ($status, $content);
}

As you can see, the routine accepts one input parameter, the request, and returns two output parameters, the response status code and the response content.
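An alternative to waiting for LWP's 400 response is a quick sanity check on the hostname argument before any requests are made. The hostLooksValid helper below is our own addition for illustration, not part of the scanner:

```perl
use strict;
use warnings;

# Up-front check: the hostname must start with http:// or https://,
# with an optional :port suffix.
sub hostLooksValid {
    my ($host) = @_;
    return ($host =~ m{^https?://[\w.-]+(:\d+)?$}) ? 1 : 0;
}

print hostLooksValid("https://www.example.com"), "\n";      # 1
print hostLooksValid("http://www.example.com:8080"), "\n";  # 1
print hostLooksValid("www.example.com"), "\n";              # 0
```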

8.4.4. Parameter-Based Testing

Now let's go back to where we left off before we dove into makeRequest. You recall that we had just started our loop through the input file requests and had checked to see if the requests contained parameters. Now that we have replayed the original unaltered request, let's start dicing up the input file entry and generate our parameter-based test requests. Because we are within the if statement that checks for the presence of request parameters, we know any request that hits this area of the code has input parameters. As such, we perform a split on the first question mark to separate the data from the method and resource name. We assign the method and resource name (typically a web server script or file) to the $methodAndFile variable and the parameter data to the $reqData variable:

  #Populate methodAndFile and reqData variables
  my ($methodAndFile, $reqData) = split(/\?/, $oRequest, 2);

Next, we split the $reqData variable into an array based on an ampersand (&). Because this character is used to join parameter name/value pairs, we should be left with an array containing each parameter name/value pair:

  my @reqParams = split(/\&/, $reqData);

Now that @reqParams is populated with our parameter name/value pairs, we are ready to start testing individual parameters. For efficiency, our scanner tests only unique page/parameter combinations that have not yet been tested. This is important if we have a large application that makes multiple requests to a common page throughout a user's session using the same parameters. As such, the first thing we do is craft a log entry for %paramLog and add it to the hash. Because we are interested in only the page and parameter names, and not the parameter values, we loop through the parameter name/value pairs and add only the parameter name(s) to our log entry ($pLogEntry):

  my $pLogEntry = $methodAndFile;
  # Build parameter log entry
  my $parameter;
  foreach $parameter (@reqParams) {
   my ($pName) = split("=", $parameter);
   $pLogEntry .= "+".$pName;
  }
  # Add the entry to the parameter log
  $paramLog{$pLogEntry}++;
Notice that in the last line of the preceding code, we are incrementing the value of the %paramLog hash member. If the hash member does not exist, it is added with a value of 1. If a subsequent page/parameter combination is identical, the value is incremented to 2, and so forth. To ensure that no duplicate requests are made, we test this page/parameter combination only if the log entry is equal to 1. Table 8-3 shows the current value of $pLogEntry and other key variables at this point in the script.

Table 8-3. Variable and array values

Variable        Value
$oRequest       GET /public/content/jsp/news.jsp?id=2&view=F
$methodAndFile  GET /public/content/jsp/news.jsp
$reqData        id=2&view=F
@reqParams      ("id=2", "view=F")
$pLogEntry      GET /public/content/jsp/news.jsp+id+view
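The %paramLog dedup technique can be exercised on its own. In this sketch (the news.jsp entries are made up), the page plus its parameter names form the hash key, so the same page requested with different values is counted against one entry:

```perl
use strict;
use warnings;

# Build a page+parameter-names key per request and count occurrences.
my %paramLog;
my @entries = (
    "GET /public/content/jsp/news.jsp?id=2&view=F",
    "GET /public/content/jsp/news.jsp?id=7&view=T",   # same page/params, new values
    "GET /public/content/jsp/news.jsp?id=2",          # different parameter set
);
foreach my $oRequest (@entries) {
    my ($methodAndFile, $reqData) = split(/\?/, $oRequest, 2);
    my $pLogEntry = $methodAndFile;
    foreach my $parameter (split(/\&/, $reqData)) {
        my ($pName) = split("=", $parameter);
        $pLogEntry .= "+" . $pName;
    }
    $paramLog{$pLogEntry}++;
}
print scalar(keys %paramLog), "\n";                                 # 2
print $paramLog{"GET /public/content/jsp/news.jsp+id+view"}, "\n";  # 2
```

Only entries whose count is 1 at test time get tested, which is exactly the check the main loop performs next.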

Once we verify that the page/parameter combination has not already been tested, we perform two nested loops through the @reqParams array. The outer loop cycles through and tests each parameter. The inner loop reassembles the parameter/value list back into a query string, replacing the value of the parameter under test with a placeholder value. We use the counter variable from the outer loop to determine which array member the inner loop should alter.

We use the placeholder string "---PLACEHOLDER---" in the parameter to be tested because we have more than one input validation test to perform. This allows our individual testing routines to substitute the placeholder based on their individual testing needs. At the end of each inner loop we can call the input validation testing routines. We also chop the last character off the request because it is always an unnecessary ampersand (&):

  if ($paramLog{$pLogEntry} eq 1) {
   # Loop to perform a test on each parameter
   for (my $i = 0; $i <= $#reqParams; $i++) {
    my $testData;
    # Loop to reassemble the request parameters
    for (my $j = 0; $j <= $#reqParams; $j++) {
     if ($j == $i) {
      my ($varName, $varValue) = split("=", $reqParams[$j], 2);
      $testData .= $varName."="."---PLACEHOLDER---"."&";
     } else {
      $testData .= $reqParams[$j]."&";
     }
    }
    # Remove the extra &
    chop($testData);
    my $paramRequest = $methodAndFile."?".$testData;

   ## Perform input validation tests

At this point in our loop, we can insert the individual input parameter testing routines we want to perform. As you can see, we have one test request for each request parameter, and we have replaced the parameter value to be tested with our placeholder.
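Run in isolation against our sample request's two parameters, the nested-loop substitution produces the placeholder strings like so (a self-contained sketch of the same logic):

```perl
use strict;
use warnings;

# Substitute each parameter value in turn with the placeholder,
# reassembling the query string around it.
my @reqParams = ("id=2", "view=F");
my @testStrings;
for (my $i = 0; $i <= $#reqParams; $i++) {
    my $testData = "";
    for (my $j = 0; $j <= $#reqParams; $j++) {
        if ($j == $i) {
            my ($varName, $varValue) = split("=", $reqParams[$j], 2);
            $testData .= $varName . "=" . "---PLACEHOLDER---" . "&";
        } else {
            $testData .= $reqParams[$j] . "&";
        }
    }
    chop($testData);   # remove the trailing &
    push @testStrings, $testData;
}
print "$_\n" for @testStrings;
# id=---PLACEHOLDER---&view=F
# id=2&view=---PLACEHOLDER---
```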

Values Assigned to $testData

For our sample request, any code placed here executes twice, with the following two values assigned to the $testData variable:

id=---PLACEHOLDER---&view=F
id=2&view=---PLACEHOLDER---
Now that we have our parameter parsing logic in place, we can call whichever specific input validation tests we want to perform. The first of these tests, called sqlTest, detects potential SQL injection points. This subroutine accepts one variable (the request to be used for testing) and returns 1 if the test detects a potential vulnerability or 0 if no vulnerability is detected. We assign the output of sqlTest (the 0 or 1) to a variable called $sqlVuln:

   my $sqlVuln = sqlTest($paramRequest);

sqlTest subroutine

Before we start building the SQL injection testing routine, we must decide what the test should consist of. The most common technique for SQL injection testing involves the use of a single quote (') character inserted into a parameter value. In the absence of any input validation, a single quote, when passed to a database server within a query, typically generates an SQL syntax error unless it is properly escaped. The ability to invoke a database syntax error by inserting a single quote into an application parameter is a very good indication that an SQL injection point might exist. From a testing perspective, any database error message that the user can invoke is something that should be followed up on. As such, our SQL injection test consists of passing a single quote within the parameter being tested to see if the application returns a database error.

Recall that the specific parameter value to be tested in each request is prepopulated with a placeholder string before the parameter parsing logic calls the test routine. This saves us some effort because the subroutine automatically knows which parameter value to test based on the presence of the placeholder string. The first thing this subroutine does is accept an input variable (the request) and substitute the placeholder string with our SQL injection string. Because all we need to do is to pass in a single quote, our test string can be something simple, such as te'st:

sub sqlTest {
 my ($sqlRequest, $sqlStatus, $sqlResults, $sqlVulnerable);
 ($sqlRequest) = @_;

 # Replace the "---PLACEHOLDER---" string with our test string
 $sqlRequest =~ s/---PLACEHOLDER---/te'st/;

Now that the SQL injection test request is ready, we can hand it off to the makeRequest subroutine and inspect the response. We must define the criteria used to determine whether the response indicates the presence of a vulnerability. We previously decided that the ability to invoke a database error message using our test string is a good indicator that a potential injection point might exist. As such, the easiest way to test the response is to develop a regular expression designed to identify common database errors. We must ensure that the regular expression can identify database error messages from a variety of common database servers. Figure 8-3 shows what one of these error messages typically looks like.

Figure 8-3. Common SQL server error message

The regular expression used in the following code was designed to match common database server error messages. As you can see, if the response matches our regular expression, we consider the page vulnerable and report the finding:

 # Make the request and get the response data
 ($sqlStatus, $sqlResults) = makeRequest($sqlRequest);

 # Check to see if the output matches our vulnerability signature.
 my $sqlRegEx = qr/(OLE DB|SQL Server|Incorrect Syntax|ODBC Driver|ORA-|SQL command not|Oracle Error Code|CFQUERY|MySQL|Sybase| DB2 |Pervasive|Microsoft Access|CLI Driver|The string constant beginning with|does not have an ending string delimiter|JET Database Engine error)/i;
 if (($sqlResults =~ $sqlRegEx) && ($oResponse !~ $sqlRegEx)) {
  $sqlVulnerable = 1;
  printReport("\n\nALERT: Database Error Message Detected:\n=> $sqlRequest\n\n");
 } else {
  $sqlVulnerable = 0;
 }

Additionally, note that we are also ensuring that the original response, made before we started testing (the $oResponse variable), does not match our regular expression. This helps to reduce the likelihood of reporting a false positive, because the normal request content matches our regular expression (recall the scenario involving the product description page for software "designed to integrate with SQL Server").

Now that we have performed our test, we assign a value to the $sqlVulnerable variable to indicate whether the request detected a database error message. The final action for our subroutine is to return this variable. Returning 1 indicates that the request is potentially vulnerable; 0 indicates it is not:

 # Return the test result indicator
 return $sqlVulnerable;
}

Now that our SQL injection testing has been performed, we continue with our per-variable tests. Turning back to our main script routine, you'll recall we are in the midst of looping through each request variable, so we must perform the remaining per-variable tests before we continue. The next and last per-variable test to be performed is designed to detect possible XSS exposures. The subroutine for this test is called xssTest and it is structured in a way that is very similar to sqlTest. As before, we declare a new variable ($xssVuln) to assign the value returned (0 or 1) by xssTest:

   my $xssVuln = xssTest($paramRequest);

xssTest subroutine

To test for XSS, we inject a test string containing JavaScript into every test variable and check to see if the string gets returned in the HTTP response. A simple JavaScript alert such as the one shown here produces an easily visible result in the web browser if successful:

<script>alert('Vulnerable');</script>

One thing we must consider is that many XSS exposures result from HTML form fields that are populated with request parameter values. These values are typically embedded within an existing HTML form control, so any effective exploit string needs to "break out" of the existing HTML tag. To compensate for this, we modify our test string as follows:

"><script>alert('Vulnerable');</script>
Now that we have designed our test string, we can build the XSS testing routine. Like the other parameter test routines, it accepts a request containing a placeholder that must be replaced by our test string:

sub xssTest {
 my ($xssRequest, $xssStatus, $xssResults, $xssVulnerable);
 ($xssRequest) = @_;

 # Replace the "---PLACEHOLDER---" string with our test string
 $xssRequest =~ s/---PLACEHOLDER---/"><script>alert('Vulnerable');<\/script>/;
 # Make the request and get the response data
 ($xssStatus, $xssResults) = makeRequest($xssRequest);
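To see why the leading "> in the test string matters, consider a hypothetical page that echoes the parameter into a form field's value attribute. This sketch (the input field is our own example) shows the markup the server would emit:

```perl
use strict;
use warnings;

# Hypothetical server-side echo: the parameter value lands inside an
# HTML attribute, and the leading "> breaks it out of the tag.
my $testString = q{"><script>alert('Vulnerable');</script>};
my $html = qq{<input type="text" name="id" value="$testString">};
print "$html\n";
# <input type="text" name="id" value=""><script>alert('Vulnerable');</script>">
```

The `">` closes both the value attribute and the <input> tag, so the <script> element lands in the page body where the browser will execute it.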

Once again, we hand off the test request to makeRequest and inspect the HTTP response data for the presence of our test string. If the application returns the entire string (unencoded), an exploitable XSS vulnerability is likely to be present. If that is the case we assign a value of 1 to the $xssVulnerable variable and report the finding; otherwise, we set it to 0:

 # Check to see if the output matches our vulnerability signature.
 if ($xssResults =~ /"><script>alert\('Vulnerable'\);<\/script>/i) {
  $xssVulnerable = 1;

  # If vulnerable, print something to the user
  printReport("\n\nALERT: Cross-Site Scripting Vulnerability Detected:\n=> $xssRequest\n\n");
 } else {
  $xssVulnerable = 0;
 }

Note that for this test, we did not check to see whether the original response contained our test string. This is because we want to flag any page that contains this test string because there is a chance it could be the result of a previous test request made by our scanner. Additionally, unlike the SQL injection test, the odds of generating a false hit using this string are fairly low.

Now that we have performed our test, the final action for our subroutine is to return the value of $xssVulnerable. Returning 1 indicates that the request is vulnerable; 0 indicates it is not:

 # Return the test results
 return $xssVulnerable;
}

Turning back to our main script routine, we now have completed all our parameter-based testing for the current request. We can close out the loop for each parameter value, as well as the if statements checking for unique parameter combos and request data:

   } # End of loop for each request parameter
  } # End if statement for unique parameter combos
 } # Close if statement checking for request data

8.4.5. Directory-Based Testing

Now it's time to move on to directory-based testing. You'll recall that we had previously determined the scanner tests would consist of parameter-based and directory-based testing routines. To perform directory-based testing, we must develop some logic that loops through each directory level within the test request and calls the appropriate testing subroutines at each level. Because we want to test every directory regardless of its content, we do not discriminate against any attributes of the test request (i.e., request method, presence of parameter data, etc.).

The first thing we do is isolate the path and file information from the rest of the test entry. Specifically, we strip out the request method at the beginning of the current test request ($oRequest) and any parameter data appended to it. For simplicity, we declare a trash variable ($trash) for allocating unnecessary data and keep the portion of the test request to be used in the $oRequest variable:

 my $trash;
 ($trash, $oRequest, $trash) = split(/\ |\?/, $oRequest);

Now that we have isolated our path and file data, we create an array containing each directory and subdirectory from the $oRequest variable. We can do this by performing a split using a forward slash (/):

my @directories = split(m{/}, $oRequest);

Before we start looping through each directory level, we need to determine whether the last member of our @directories array is a filename. If the request was to a directory containing a default web server document, there is a good chance the request won't contain a filename. It is also likely that most of our requests will, in fact, contain a filename, so we need to determine this up front so that we do not confuse the two.

Because most web servers require a trailing forward slash (/) when making a request to a directory with no document, we can check the last character in the test request to see if it is a forward slash. If it is, we know no filename is in the request. If it is not, we assume the last portion of the request includes a file or servlet name, and this value is the last member of our @directories array. To check the last character, we break out each character in the request to an array (@checkSlash) and refer to the last member of the array:

my @checkSlash = split(//, $oRequest);
 my $totalDirs = $#directories;

 # Start looping through each directory level
 for (my $d = 0; $d <= $totalDirs; $d++) {
  if ((($checkSlash[-1] ne "/") && ($d == 0)) || ($d != 0)) {
   # Truncate up one directory level
   pop(@directories);
  }

As you can see in the preceding code, we assign the member count from the @directories array to the $totalDirs variable, then we perform a loop starting with a counter variable ($d) at 0 and continually increment the counter by 1 until it and the $totalDirs variable are equal. Each time we loop, we remove the last member of the @directories array, effectively truncating up one level every time. The exception to this is on the first loop ($d = 0), where the last member of the $checkSlash array is equal to a forward slash (/). This condition indicates that the test request did not contain a filename (the request ended with a forward slash), thus the last member is not removed. Subsequent requests ($d != 0), however, always result in the removal of the last array member. We assigned the member count from the @directories array to the $totalDirs variable because this number changes after each loop iteration.
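The truncation behavior can be sketched in a slightly simplified form (a while loop in place of the counter-and-conditional), producing the same per-level requests for our sample path:

```perl
use strict;
use warnings;

# Strip the filename (our sample request does not end in "/"), then
# emit one GET per directory level until only the root remains.
my $path = "/public/content/jsp/news.jsp";   # from our sample request
my @directories = split(m{/}, $path);
pop(@directories) unless $path =~ m{/$};     # last member is a filename
my @dirRequests;
while (@directories > 1) {
    push @dirRequests, "GET " . join("/", @directories) . "/";
    pop(@directories);
}
print "$_\n" for @dirRequests;
# GET /public/content/jsp/
# GET /public/content/
# GET /public/
```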

Now that we have our directory truncation loop in place, we can create the actual request to be used by our testing subroutines. We are not particularly interested in the original request method, so we reassemble the current members of the @directories array into a GET request as follows:

  my $dirRequest = "GET ".join("/", @directories)."\/";

At this point in the loop, we can insert the individual directory testing routines we want to perform. For our sample request, any code placed here is hit three times, with the values in Example 8-7 assigned to the $dirRequest variable.

Example 8-7. Values assigned to $dirRequest
GET /public/content/jsp/
GET /public/content/
GET /public/

As you can see, we have one test request for each directory level. Just as we did with the parameter-based test requests, we keep track of each request we make to ensure that we do not make duplicate requests. We had previously declared the %dirLog hash with this specific purpose in mind, so we can use the same technique we used with %paramLog to determine if the request is unique:

  # Add directory log entry
  $dirLog{$dirRequest}++;

  # Only test unique directory requests
  if ($dirLog{$dirRequest} eq 1) {

Now we call whichever specific directory-based tests we want to perform. The first of these testing subroutines, dirList, is used to detect whether directory listings are permitted when requesting the directory without a document:

  my $dListVuln = dirList($dirRequest);

Let's jump down and take a peek at the dirList subroutine.

dirList subroutine

Because this subroutine is called once at each directory level, it accepts a request that is already properly formed with no default document. This makes this routine relatively simple because all it needs to do is make the request and decide whether the response contains a directory listing:

sub dirList {
 my ($dirRequest, $dirStatus, $dirResults, $dirVulnerable);
 ($dirRequest) = @_;

 # Make the request and get the response data
 ($dirStatus, $dirResults) = makeRequest($dirRequest);

 # Check to see if it looks like a listing
 if ($dirResults =~ /(<TITLE>Index of \/|(<h1>|<title>)Directory Listing For|<title>Directory of|\"\?N=D\"|\"\?S=A\"|\"\?M=A\"|\"\?D=A\"| - \/<\/title>|&lt;dir&gt;| - \/<\/H1><hr>|\[To Parent Directory\])/i) {
  $dirVulnerable = 1;

  # If vulnerable, print something to the user
  printReport("\n\nALERT: Directory Listing Detected:\n=> $dirRequest\n\n");
 } else {
  $dirVulnerable = 0;
 }

The regular expression used in the preceding code was designed to detect IIS, Apache, and Tomcat directory listings. As with the other testing routines, we assign a value of 1 to the $dirVulnerable variable and report the finding if the expression matches; otherwise, we assign a 0 to the variable. Finally, we return this value and close the subroutine:

 # Return the test results.
 return $dirVulnerable;
}
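As a sanity check, an abbreviated version of the listing signature can be run against canned response fragments (the HTML snippets below are made up to resemble Apache and IIS listings):

```perl
use strict;
use warnings;

# A subset of the directory-listing signature used by dirList.
my $dirRegEx = qr/(<TITLE>Index of \/|\[To Parent Directory\]|(<h1>|<title>)Directory Listing For)/i;

my $apache = '<html><head><title>Index of /public/content/</title></head>';
my $iis    = '<br><br>[To Parent Directory]<br><br>';
my $normal = '<html><head><title>Latest News</title></head>';

print(($apache =~ $dirRegEx) ? "listing\n" : "clean\n");   # listing
print(($iis    =~ $dirRegEx) ? "listing\n" : "clean\n");   # listing
print(($normal =~ $dirRegEx) ? "listing\n" : "clean\n");   # clean
```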

Let's jump back up to our main script routine and move on to our next and final testing subroutine, dirPut, to determine if the directory permits uploading of files using the HTTP PUT method:

  my $dPutVuln = dirPut($dirRequest);

dirPut subroutine

The last of our testing routines is responsible for determining whether files can be uploaded using the HTTP PUT method. Like dirList, this subroutine accepts a request that is already properly formed with no default document:

sub dirPut {
 my ($putRequest, $putStatus, $putResults, $putVulnerable);
 ($putRequest) = @_;

Unlike the dirList routine, we need to format our request a bit more before handing it off to makeRequest. Specifically, we need to change the request method from GET to PUT, and add request data to the end of the request. Once we have done that we issue the request:

 # Format the test request to upload the file
 $putRequest =~ s/^GET/PUT/;
 $putRequest .= "uploadTest.txt?ThisIsATest";

 # Make the request and get the response data
 ($putStatus, $putResults) = makeRequest($putRequest);

Now that we have issued the PUT request we reformat the request to check whether the new document is in the directory. The reformatting includes changing the request method back to GET, and removing the request parameter data:

 # Format the request to check for the new file
 $putRequest =~ s/^PUT/GET/;
 $putRequest =~ s/\?ThisIsATest//;

 # Check for the uploaded file
 ($putStatus, $putResults) = makeRequest($putRequest);
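The two-step request reformatting can be verified in isolation. This sketch starts from a sample directory-level request and applies the same substitutions:

```perl
use strict;
use warnings;

# Step 1: turn a directory request into an upload attempt;
# Step 2: turn it back into a GET that fetches the uploaded file.
my $putRequest = "GET /public/content/jsp/";   # a directory-level request

$putRequest =~ s/^GET/PUT/;
$putRequest .= "uploadTest.txt?ThisIsATest";
my $uploadForm = $putRequest;

$putRequest =~ s/^PUT/GET/;
$putRequest =~ s/\?ThisIsATest//;

print "$uploadForm\n";   # PUT /public/content/jsp/uploadTest.txt?ThisIsATest
print "$putRequest\n";   # GET /public/content/jsp/uploadTest.txt
```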

Once we issue the second request, we can check to see if our test string was returned in the content. If so, we can be sure the file was created successfully, so we set the $putVulnerable variable to 1 and report the finding; otherwise, we set this variable to 0:

 if ($putResults =~ /ThisIsATest/) {
  $putVulnerable = 1;

  # If vulnerable, print something to the user
  printReport("\n\nALERT: Writeable Directory Detected:\n=> $putRequest\n\n");
 } else {
  $putVulnerable = 0;
 }

Last but not least, we return the $putVulnerable value and close the subroutine:

 # Return the test results.
 return $putVulnerable;
}

At this point, we have completed all our directory-level testing routines, so we jump back up to our main script routine and close out all our loops as follows:

   } # End check for unique directory
  } # End loop for each directory level
 } # End check for GET or POST request
} # End loop on each input file entry

printReport("\n\n** Scan Complete **\n\n");

Finally, we report a message stating that testing is complete. With that, we have completed our simple web application vulnerability scanner.
