Sunday, November 21, 2010

fuzzing a web application's directory structure


When performing a black-box web application penetration test, one of the most important, but often overlooked, steps is the application profiling phase. The objective of the application profiling phase is to gather as much information as possible about the application's structure and functionality. Typically, especially during a black-box test, the tester is provided the application URL and perhaps a couple of logins to use for the pentest.

As with any other phase of a pentest, the application profiling phase is really made up of several steps. One of these steps is determining the application's directory structure. It's simple to find the "public" directory structure of the application, but what about the "private" directories that should not be found, such as old versions of the application or, more importantly, administration functions?

To find these and other gems that can provide juicy information hidden in the "private" directories, the tester must fuzz the application's directory structure. Web application fuzzing is simply sending requests for guessed resource names to the web site and reviewing the result of each request to determine if the resource exists.

To give a simple example of how fuzzing works, let's examine the fuzzing process on the imaginary website www.site.com. The website's directory structure consists of the following directories, listed with whether they are linked in the web application:

/admin – Not linked in the web application
/docs – Linked in the web application
/images – Linked in the web application
/pages – Linked in the web application
/scripts – Linked in the web application
/source – Not linked in the web application
/test – Not linked in the web application
When fuzzing an application, the following HTTP status codes inform the tester that the requested resource exists:

200 - OK
301 – Moved Permanently
302 – Moved Temporarily
401 - Unauthorized
Any other HTTP status code indicates the resource does not exist, or that it is forbidden to the tester (HTTP 403 status code). Understanding these HTTP status codes, the tester can fuzz the application to determine its directory structure.

The tester will fuzz the directory structure by manually requesting the following directories; each is listed with the corresponding HTTP status code of the request:

/admin – 401
/backup – 404
/code – 404
/docs – 200
/images – 200
/include – 200
/old – 404
/pages – 200
/source – 301 Redirects to /test directory
/scripts – 200
/test – 200

By manually fuzzing the application the tester found the /admin, /source and /test directories. Had the tester not fuzzed the website, these directories would have been missed. Although the /admin directory requires authorization to access it, the tester now knows to spend some time trying to access this area of the application. By finding the /source and /test directories, the tester's chances of finding information about the security of the application increase.

Although in the above example the tester was able to find three "private" directories with just a handful of guesses, in the real world the tester would typically try thousands of directories to increase the probability of finding "private" directories. Of course, no tester wants to make thousands of manual requests to a website, so a fuzzing tool should be used to fuzz the directory structure.
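
To make the fuzzing loop concrete, here is a minimal sketch in Python; the target URL and wordlist are purely illustrative, taken from the example above:

import urllib.request
import urllib.error

TARGET = "http://www.site.com"  # hypothetical target from the example above
WORDLIST = ["admin", "backup", "code", "docs", "images",
            "include", "old", "pages", "source", "test"]

for word in WORDLIST:
    url = f"{TARGET}/{word}/"
    try:
        # urlopen follows redirects, so a 301/302 surfaces as the final status
        with urllib.request.urlopen(url) as resp:
            print(url, resp.status)
    except urllib.error.HTTPError as e:
        if e.code == 401:
            print(url, "401 (exists, requires authorization)")
        # 404 and other error codes: treat the directory as nonexistent

A real fuzzing tool adds large wordlists, threading, and smarter response analysis, but at its core it performs exactly this loop.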

With an understanding of what fuzzing is and how it works, next time I will discuss a tool that I use to perform web application fuzzing. Of course, like any tool, there are lots of them out there; this one is simply my personal preference.





Wednesday, November 3, 2010

Completed OSWP challenge …………………and passed!

A few months ago I blogged about taking the Offensive-Security WiFu class, which can be found here. I finally completed the second part of the class and signed up for the challenge to get the Offensive Security Wireless Professional (OSWP) certification.

The second part of the class, which included several videos, focused on gathering information and using that information to attack wireless networks. It centered on the different applications that make up the aircrack-ng suite of tools. After teaching what the various tools are used for, it put them all together to show how to crack WEP-OPN, WEP-SKA and WPA-PSK. It also discussed different ways to attack wireless networks that have no associated clients or very little network activity.

After completing the class I studied the lab material and ran through every attack scenario a few times to make sure I was able to successfully perform each type of attack. Once I was comfortable with all of the attack types I registered to take the OWSP challenge.

The OSWP exam is different from other certification exams, where they ask you a question and you typically pick a multiple choice answer. For the OSWP I had to connect to a system on the Internet and perform a series of attacks to gain access to wireless networks within 4 hours. Once you complete the objectives you write up how you performed the attacks, and they send your results back within 72 hours.

I started my exam on time, but ran into one technical issue. I contacted exam support and the issue was resolved immediately. After the issue was resolved I went about my exam and completed all my objectives in about an hour and a half. I should have been done sooner, but I made a mistake on one attack that added about 20 minutes to the length of the exam. After completing my objectives I had 24 hours to prepare my documentation and turn it in for grading.

I completed the documentation a couple of hours after the exam and sent it off to be graded. I received a confirmation that they had received the documentation and went about my business, expecting to hear back in a couple of days. This morning I was pleasantly surprised to receive an email stating that I had successfully passed my challenge and was certified as an OSWP.

If you have any interest in wireless security this is a great course to start off with, especially if you factor in the cost of the course and certification. If you need to take a wireless security course and have limited funds, this is a great course to take, and I would recommend it to anyone.

Monday, October 25, 2010

The Value of a 3rd Party Pen Test…..or the lack of value

In my role I perform many functions, including penetration testing. I typically perform web application penetration testing, but I occasionally perform a traditional penetration test as well. Also, annually we bring in a 3rd party company to perform an external penetration test against my organization's Internet presence. There are two benefits to using a 3rd party, even when your organization has its own internal penetration testing team.

The first benefit is getting a new perspective. When performing a penetration test on the same network time after time, it's easy for the internal team to lose its perspective. For example, if the internal team knows a certain network device only supports SSHv1, the internal team may stop testing the device, because they know it has an SSHv1 vulnerability that has not been mitigated, and move on to another target. An external team, however, will continue testing the device and perhaps locate a new vulnerability in SSHv1 for that device. This happens because the internal team has gotten tunnel vision.

The second benefit is showing the value of the internal team. Upon conclusion of the 3rd party test, there will be a report showing their results. Typically the results should show most of the same vulnerabilities; they may not show all of them, and they may include a few the internal team missed. Although the results will differ, they should be very similar to the internal team's results.

Now if the results from the 3rd party are vastly different from the internal team's, there is an obvious issue. This week I found myself in this position. The vendor did their "pen test", wrote a report, and shipped it to us.

After reviewing the report I found many glaring problems with it. First, they complimented us on our quick response to their "pen test", when in fact we did nothing on purpose. Our tools picked the test up and people started to respond like they would to an incident, but I stopped them, knowing it was a "pen test". Second, they complimented us on our amazing monitoring, which I have no idea how they determined. The one that really threw me, though, was when the report stated they had cracked some passwords. This bothered me because the scope for this test did not include any password cracking. So I immediately contacted the vendor to find out what passwords were compromised, only to have the engineer tell me he did not crack any passwords.

Now I have to question the entire report and the results in it. When I reviewed the results I was less than thrilled with what I found. The vendor appeared to take the output of a vulnerability scanner, put it in their report, call it a "pen test", and that was it. Well, the results were crap and the vendor missed a lot of known vulnerabilities. When we scoped this pen test, we wanted the vendor to knock on our web apps, and they claimed to have found only four web applications with vulnerabilities. That looks great on paper, but it was far from the truth, as I know for a fact they missed a lot more vulnerabilities in the web applications. When it came to vulnerabilities in the rest of the infrastructure, they claimed to find only about six, again missing a lot more.

Luckily, my management knew about the vulnerabilities and were not happy with the results from this vendor. Another "feather" in my team's cap was showing that our tool set, processes, and procedures are working.

In this case the 3rd party vendor provided absolutely no value to my organization, and it appears my organization will be going through this process again. Typically this is not the case, but occasionally this type of thing will happen.

Wednesday, October 20, 2010

Some cool tools have been updated

Over the last few days a couple of cool offensive security tools have been updated. Both are penetration testing tools: Metasploit and Samurai Web Testing Framework (WTF).
HD Moore released Metasploit version 3.5 today. There are over 600 exploits, over 300 auxiliary modules, and over 200 payloads in this release. This build also includes scriptjunkie's Java GUI. There are tons of other updates to this wonderful tool, so I suggest you go to the Metasploit website and check out the new version.
Kevin Johnson released a new version of Samurai Web Testing Framework (WTF). Samurai is a web application security testing liveCD that has a bunch of web app security testing tools. Another cool feature is the several vulnerable applications on the CD for learning and testing purposes. Go check it out. You can find the details about WTF here.
Now that I am done with my malware presentation I plan to get back to more blogging.

Monday, October 4, 2010

Presentation on Behavioral Analysis of Malware

I belong to the DFW IT Security Professionals organization (if you are in the area, check it out) and volunteered to give a presentation on malware analysis. Since I wrote my GSEC Gold paper on the subject and perform the analysis often, I like to think I have good knowledge of the subject. So on Oct 19th I will be presenting Behavioral Analysis of Malware.

This presentation is about performing behavioral malware analysis. I cover the basics of malware analysis, why you perform the analysis, and the types of analysis. I then go into the process that I have found works well for me: setting up a malware analysis lab, the tools to perform the analysis, executing the malware, observing the malware, and finally compiling the results of the analysis.

With that being said, I hope over the next couple of months to go over parts of the presentation in much greater detail to help others successfully and safely perform malware analysis. I will post an update on how the presentation went, and possibly post the presentation itself if there is interest.

Thursday, September 30, 2010

Behavioral Analysis of Malware Process…the on-the-fly approach

In most organizations when a host goes rogue, it's automatically rebuilt/reimaged without a second thought. Although this eradicates the malware and speeds up the recovery process, there is a risk of the malware spreading undetected. I like to perform a quick behavioral analysis before re-imaging the host so I can build detective controls to watch for more infections in my organization.

When performing an organized (i.e. in a lab) behavioral analysis I use a methodology that consists of these different phases:

Lab Preparation
Malware Execution
Observation
Results Gathering
Interpretation
Repeat as Needed
Improving and Testing Defenses

Unfortunately, when doing malware analysis quickly, a couple of phases must be skipped since the malware is already running. For those situations, I use a slightly different methodology consisting of these phases:

Containment
Observation
Results Gathering
Interpretation
Improving Defenses

Since the host is already infected, the containment phase is about preventing the malware from spreading. My preferred choice is to disconnect the system from the network, but disconnecting the host is not always an option. When disconnecting is not an option, I will use network isolation to permit only limited services to the host, e.g. a remote access tool (RAT) from the analyst host.

With the host contained, the next phase is observation. I recommend having a notebook handy (I always use a spiral notebook) to keep notes of any observations you make. During this phase, tools such as Process Monitor, Process Explorer, Autoruns, and TCPView are used to analyze the host. If possible I try to use a network sniffer, but this may not always be an option.

After running the tools for about 15 to 30 minutes, save the results for analysis. I prefer to save them in a directory, then take an MD5 or SHA1 hash of the results to make sure they are not changed during the interpretation phase. Of course, the longer the tools run, the better the understanding of how the malware behaves.
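
Hashing the saved output only takes a few lines. Here is a minimal sketch in Python; the results directory path is hypothetical, and in practice the hashes should be recorded somewhere other than the infected host:

import hashlib
import os

RESULTS_DIR = r"C:\analysis\results"  # hypothetical directory holding the saved tool output

def sha1_file(path):
    # Hash the file in chunks so large captures don't exhaust memory
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a hash for every saved result file; re-run later to verify nothing changed
for name in sorted(os.listdir(RESULTS_DIR)):
    path = os.path.join(RESULTS_DIR, name)
    if os.path.isfile(path):
        print(sha1_file(path), name)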

The interpretation phase is about looking at the output of the tools and interpreting the results. Since there is a high probability no baseline exists, this is by far the most challenging phase. To assist with this phase, it's important to have an understanding of how the operating system behaves without a malware infection. This knowledge makes it easier to sift through the results looking for how the malware behaves so it can be detected.

Improving defenses is taking the knowledge gained in the earlier phases and improving the organization's defenses. Typically this includes changing ACLs, writing IDS/IPS signatures, and/or making host changes in an effort to detect the malware.

When performing malware analysis on systems connected to a production network, extreme caution should be used. This type of analysis should be performed for the purpose of creating detective controls for this piece of malware. If preventative controls can be built as well, that is even better, but remember the objective is to use this information to detect other infections.

Thursday, September 23, 2010

Interpreting Discovery Scan Results

Last time I discussed how to perform an nmap SYN scan for host discovery. With the scan complete, interpreting the results is the next step. Let's start by discussing the results and how to interpret them, followed by looking at the three output files and what they can be used for.


The first line in the results shows what options the scan was run with. One thing to note is the version appears to only be the major release, not the exact version used for the scan. Here is the first line of the .nmap output file, showing what scan options were used during the scan:

# Nmap 5.00 scan initiated Sun Sep 19 11:46:59 2010 as: nmap -PN -n -sS -T 1 -p 21-23,25,80,110,143,443,3389 -oA hostdiscovery 192.168.1.0/24

After the scan options, the next line shows the IP address of an online host. The following line describes the output columns: the port, the state, and the service. After the description line, the number of ports listed will vary based on the scan options. Here is the nmap output for two of the hosts on the target network:

Interesting ports on 192.168.1.150:
PORT STATE SERVICE
21/tcp closed ftp
22/tcp filtered ssh
23/tcp closed telnet
25/tcp filtered smtp
80/tcp open http
110/tcp closed pop3
143/tcp closed imap
443/tcp open https
3389/tcp closed ms-term-serv

Interesting ports on 192.168.1.151:
PORT STATE SERVICE
21/tcp closed ftp
22/tcp filtered ssh
23/tcp closed telnet
25/tcp closed smtp
80/tcp open http
110/tcp closed pop3
143/tcp closed imap
443/tcp open https
3389/tcp closed ms-term-serv

The port field (when scanning UDP and TCP protocols) lists the port number and protocol that nmap scanned.

The state field is the critical field when determining if a host is online. For nmap there are six possible states. The three states seen in most scans, including a SYN scan, are open, closed, and filtered. Be aware three other states may be seen in different types of scans. Here is a brief description of each state, but please visit the nmap website for more detailed explanations.

Open – the port is actively listening and accepting connections.
Closed – the port is not actively listening.
Filtered – the port is being filtered by a packet filtering device.

For this example, let's focus on the port states of tcp/22 (ssh), tcp/25 (smtp) and tcp/80 (http) on hosts 192.168.1.150 (TargetA) and 192.168.1.151 (TargetB).

Starting with tcp/22 on TargetA, the port state is filtered, so we know that there is a packet filtering device between us and the target network. TargetB's tcp/22 port state is also filtered.

Next, looking at tcp/25 on TargetA, the port state is filtered. Again it appears there is some kind of packet filtering device between us and the target network. TargetB's tcp/25 port state is closed.

Finally, looking at tcp/80 on TargetA and TargetB, the port state for each host is open.

It's important to note the service listed in the scan results comes from the nmap-services database. During this scan no service detection was performed, so the results rely solely on that database. Remember it's possible to have a different service running on any port; for example, a web server could be listening on tcp port 21 (normally the ftp port). During the enumeration phase, version detection should be done to ensure the service running on a particular port is correctly identified.

The final line of the scan output provides information about the total number of online hosts and how long the scan took to complete. Here is the last line of output from the scan results:

# Nmap done at Sun Sep 19 13:57:06 2010 -- 256 IP addresses (10 hosts up) scanned in 7807.86 seconds

Looking at the scan results, a couple of items of interest can be determined. Looking at tcp/22 on the two hosts (TargetA and TargetB), there is a good chance a packet filtering device is being used. Looking at the various port states, with some ports in a filtered state while others are in a closed state, it's probable the packet filtering device's policy is default permit.

By performing a discovery scan first, there is a better understanding of which systems are online and how the firewall is configured. With the information gathered during the discovery scan, the enumeration phase can be more targeted, resulting in two benefits. The first benefit is time saved by focusing only on hosts that are online. The second benefit is a lower likelihood of detection, because the later phases of the penetration test are more targeted.

With an understanding of how to interpret the results, the three different output files can be used to the tester's advantage. The three types of output files are the normal, grepable, and XML files.

The normal output file (.nmap file extension) is great when looking at the entire scan result or for counting how many ports are open in a scan. If you want to search the results for how many hosts have a particular port open, this simple search can be performed:

host# grep -i '80/tcp open' hostdiscovery.nmap | wc -l
4

The results from the grep show that four hosts have tcp/80 open. With this information, service detection scans could be run against tcp/80, or a full enumeration scan could be performed. The normal output is good for gathering information from an overall perspective (i.e. counting all open tcp/80 ports), but if the objective is to know which specific hosts have tcp/80 open, other formats should be used. The normal format outputs one port per line, making more complex searches much more difficult.

Searching for specific information is best done using the grepable output file (.gnmap file extension). This format takes the scan results for one host and writes them on one line, making it easier to search with command line tools. For example, to search for all hosts with tcp/80 open using grep, this command would perform the search:

host# grep -i '80/open' hostdiscovery.gnmap
Host: 192.168.1.150 () Ports: 21/closed/tcp//ftp///, 22/filtered/tcp//ssh///, 23/closed/tcp//telnet///, 25/filtered/tcp//smtp///, 80/open/tcp//http///, 110/closed/tcp//pop3///, 143/closed/tcp//imap///, 443/open/tcp//https///, 3389/closed/tcp//ms-term-serv///
Host: 192.168.1.151 () Ports: 21/closed/tcp//ftp///, 22/filtered/tcp//ssh///, 23/closed/tcp//telnet///, 25/closed/tcp//smtp///, 80/open/tcp//http///, 110/closed/tcp//pop3///, 143/closed/tcp//imap///, 443/open/tcp//https///, 3389/closed/tcp//ms-term-serv///
< -- output cut for brevity -- >

The results return every host with tcp/80 open, but they are difficult to read, especially on large networks. Since the objective is just a list of hosts with tcp/80 open, the grep command can be combined with awk to print only the IP addresses, as seen here:

grep -i '80/open' hostdiscovery.gnmap | awk '{print $2}'
192.168.1.150
192.168.1.151
192.168.1.152
192.168.1.200

The final format, XML output (.xml file extension), can be read by many different applications, including scanpbnj. Using scanpbnj, the results in the XML file can be imported and stored in a SQLite database. Then, using outputpbnj, the results can be queried using SQL statements. Remember many other applications can import XML formatted output as well.
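
The XML output is also easy to parse directly. Here is a minimal sketch using Python's standard library that prints every host in the example hostdiscovery.xml with tcp/80 open:

import xml.etree.ElementTree as ET

# Walk the nmap XML output and print each address with port 80 open
tree = ET.parse("hostdiscovery.xml")
for host in tree.getroot().findall("host"):
    addr = host.find("address").get("addr")  # first address element is the IPv4 address
    for port in host.findall("ports/port"):
        if port.get("portid") == "80" and port.find("state").get("state") == "open":
            print(addr)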

Performing a discovery scan and properly interpreting the results increases the value of the penetration test for the client. Correctly identifying online targets allows the penetration tester more time to focus on known live hosts, and the customer receives a more accurate understanding of the risk to their organization.

Sunday, September 19, 2010

Detecting live hosts on a target network.

When performing a penetration test it is imperative that all hosts on the target network are identified. Typically all that is provided to the tester is the IP range; sometimes even that is not provided. With just a network range, you must identify all live hosts in that range.

In the past, ping sweeps were reliable enough that a pen tester could be comfortable with the results. Today this is not the case, as many organizations block ALL unsolicited ICMP traffic at their border, so other methods must be used to identify live hosts on the network. Not only are organizations blocking ICMP, they are also using tools to identify this type of activity, such as firewalls, IPS/IDS, and possibly even a Security Information and Event Management (SIEM) system.

Since a simple ping sweep can't be relied upon, other methods must be used to identify live hosts. To accomplish this I typically use nmap to perform a TCP SYN scan.

To understand how this scan works, you must first understand how TCP connections are established on open ports, what happens if the port is closed, and finally what happens if the target host is not online. When two hosts want to communicate over TCP, a connection called a TCP session must be established. To establish the connection, the TCP three-way handshake must be completed.

The first step is for the host initiating the connection (Host A) to send a TCP packet with the SYN flag set to a specific port, for example 80, on the target host (Host B) it is attempting to establish the connection with, as seen in this lovely ASCII art.

Host A ------SYN------- > Host B

Since Host B is listening on port 80, Host B sends a TCP packet with both the SYN and ACK flags set back to Host A, as seen here.

Host A <------SYN/ACK-----Host B

Host A, after receiving the packet from Host B, sends a TCP packet with the ACK flag set to Host B to acknowledge it received the 2nd packet. The connection is now established and data can be transferred, as seen here.

Host A ------ACK------- > Host B

Now that there is an understanding of how a TCP session is established, let's examine what happens if Host B is not listening on port 80.

The first step is the same with Host A initiating a connection to port 80 on Host B with the SYN flag set.

Host A ------SYN------- > Host B

Since Host B is not listening on port 80, Host B will reply with a TCP packet with the RST flag set.

Host A <------RST-----Host B

Although Host B is not listening on that port, a response from Host B is still sent to Host A, indicating it's online.

Finally, let's look at attempting to establish a TCP connection to a host that is not online.

Again the first step is the same, with Host A initiating a connection to port 80 on Host B with the SYN flag set and a timeout of 1 second.

Host A ------SYN------- > Host B

Since Host B is offline there is no response, so Host A will send another packet, this time waiting 2 seconds for Host B to respond. After not hearing from Host B a second time, it will send a third packet, waiting 4 seconds for Host B to respond. After the third packet, Host A will assume Host B is unavailable and quit trying to reach it.

Understanding how TCP sessions are established is useful in identifying live hosts on the network. With this knowledge it is time to start discovering live hosts.

The objective of this scan is host discovery, with a secondary objective of being stealthy to avoid detection. To meet these objectives, nmap's TCP SYN or "half-open" scan will be used.

Nmap's SYN scan sends a TCP packet with the SYN flag set to the target. If the target host is online, it will reply with the appropriate response (SYN/ACK for open ports, or RST for closed ports) depending on the state of the port. If the target host sends a TCP packet with the SYN/ACK flags set, the scanning host never completes the handshake, resulting in a half-open connection. If the target host is offline there should be no response.
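
The same decision logic can be sketched in a few lines of code. Here is a minimal example using the Python scapy library (my choice for illustration, not what nmap uses internally); it must run as root to craft raw packets, and the target address is just a host on the demonstration network:

from scapy.all import IP, TCP, sr1

def syn_probe(target, port, timeout=2):
    # Send a single SYN and classify the response the way a SYN scan does
    reply = sr1(IP(dst=target)/TCP(dport=port, flags="S"), timeout=timeout, verbose=0)
    if reply is None:
        return "no response (host offline or port filtered)"
    if reply.haslayer(TCP):
        if (reply[TCP].flags & 0x12) == 0x12:  # SYN/ACK set: port open, host online
            return "open"
        if reply[TCP].flags & 0x04:            # RST set: port closed, but host online
            return "closed"
    return "unexpected response"

print(syn_probe("192.168.1.150", 80))  # example host from the demonstration network

Because the probing side never sends the final ACK, the handshake is left half open; the operating system typically resets the half-open connection automatically.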

To perform the SYN scan with nmap, these are the command line options:

nmap -Pn -n -sS -T 1 -p 21-23,25,80,110,143,443,3389 TARGET -oA OUTPUTFILES

The first option (-Pn) treats all hosts as online, skipping the host discovery phase. Nmap's host discovery sends an ICMP echo request, an ICMP timestamp request, a TCP SYN packet to port 443, and a TCP ACK packet to port 80. This type of discovery is good if there are no packet filtering devices between the scanner and the target, but it typically will not be successful over the Internet.

The second option (-n) turns off name resolution. Other tools, such as nslookup and dig, can be used for DNS enumeration. Turning off name resolution has the additional benefit of speeding up the scan, especially when scanning large networks.

The third option (-sS) is the SYN scan option.

The fourth option (-T 1) is the speed of the scan. The timing templates range from 0 through 5, with 0 being the slowest and 5 being the fastest. This option can be used to assist in evading detection during a scan. Since an objective is to scan undetected, slow scans are preferred, but not always an option.

The fifth option (-p) selects the ports to scan.

The TARGET is of course the target of the scan. For nmap this can be a single host, a subnet, a range of hosts (such as 10.0.0.1-23), or a combination of these separated by spaces (such as 10.0.0.1 10.10.10.0/25 10.10.20.4-56).

The last option (-oA) outputs the scan results in normal, XML, and grepable formats; following the flag is the base name of the output files. The normal (-oN) output is great if you want to look through all of the results. The grepable (-oG) output is for using grep and other shell commands to search through the output. The XML (-oX) output produces the output formatted in XML.

For demonstration purposes, the target network to be scanned is 192.168.1.0/24. To simulate a typical Internet-connected target network, this network is protected by a firewall. To begin host discovery, type this command:

nmap -Pn -n -sS -T 1 -p 21-23,25,80,110,143,443,3389 192.168.1.0/24 -oA hostdiscovery

Once the scan completes, the following information will appear:

Nmap done: 256 IP addresses (10 hosts up) scanned in 7807.86 seconds

From these results we see 10 hosts are up. The benefit is there is a good chance, though not a guarantee, that most online hosts were detected. Now more extensive scans can be performed against known live targets.

Another thing to note: it took over two hours to complete the scan, even though the scanning host and the target network are connected to the same switch. This could have been done more quickly using the timing option, but the likelihood of detection would have been higher.

Also, looking in the working directory there will be three files containing the output of the scan, named hostdiscovery.gnmap (-oG), hostdiscovery.nmap (-oN) and hostdiscovery.xml (-oX). These files will be used to determine which hosts are online.

With the discovery scan complete, the results have to be interpreted. The interpretation of the results is critical to determine which online hosts should receive more intensive port scans. Next time I will discuss how to interpret the scan results.

Monday, September 13, 2010

WEP cracked in under 10 seconds………….

In a previous post I discussed taking the Offensive-Security Wireless Attacks course. I went through the first technical section, which focuses on using aircrack-ng (and associated tools) to detect and attack wireless networks in a lab. The lab I built for this class consists of an Alfa USB WiFi card and a Linksys WRT-54G (Linux) that supports WEP, WPA and WPA2 encryption.

Most Information Security professionals with exposure to wireless security understand why WEP is insecure, why not to use it, and the risks associated with using it. However, there are people who still believe WEP is sufficient to secure a wireless network.

During one of the exercises on cracking WEP keys, I came across something that I could not believe at first. I was able to capture packets for my lab SSID using airodump-ng. Typically you want about 40,000 IVs before you start cracking WEP, to have a decent chance of success.

On this exercise, though, I thought I would take a shot with 20,000 IVs. Well, I should have bought a lottery ticket, because my WEP key was cracked in 8 seconds. Now, this was a 64-bit WEP key, which is quicker to crack, but the fact that it took me longer to enter the commands than it took to actually crack the key shows how insecure WEP is.

I saved the packet capture for demonstration purposes, for when I hear people discuss WEP security/insecurity. Like they say, a picture is worth a thousand words!

Tuesday, August 24, 2010

Offensive-Security.com WiFu Training Class……………..

A couple of weeks ago I was looking at the latest Backtrack release and decided to finally check out offensive-security.com. For those of you unfamiliar with offensive-security.com, it is a training organization that uses Backtrack to teach penetration testing. It was founded by Mati Aharoni, creator of WHAX and a core developer of Backtrack.

Offensive-Security offers 3 training courses: Pentesting With Backtrack (PWB), Cracking the Perimeter (CTP), and Offensive-Security Wireless Attacks (WiFu). Upon successful completion of a course and the hands-on challenge for that course, you are awarded the OSCP (PWB), OSCE (CTP), or OSWP (WiFu) certification.

What makes these certifications challenging is that they do not test your ability to memorize answers; they present you with a challenge, and you must correctly complete it in an allocated amount of time to be awarded the certification.

I did some online research and saw some really good reviews, so I thought I would look into the cost of some classes. I was surprised by the price range of the courses: 350 USD for WiFu up to 1500 USD for CTP with 60 days of lab access. Although I had done some wireless security work in the past, I thought I would give the WiFu course a try.

I went through the registration process and received my course material in the allotted amount of time. The material included a PDF for the class and some video tutorials. I would say the PDF (and the class) is broken up into two parts: the first is about wireless and wireless security, and the second is about attacking wireless.

I spent last week going over the first half of the class. Because I had not used my wireless skills in a long time, this was a great refresher. This part of the class covered 802.11 standards, different wireless modes, the different types of packets you will see on a wireless network, and how to choose hardware.

The hardware section was of great interest to me because of the details it gave. It covered different types of wireless adapters, chipsets, and antenna details, and gave some good guidance on how to choose wireless equipment for what you are testing.

The information in the first part of the class has been wonderful so far, and I am looking forward to the "attacking" phase. Once I get further into the attacking phase of the class I will post some more blogs about it.

Tuesday, August 3, 2010

Bypassing Client Side input validation…..

Last post I discussed how to implement client side input validation, and previously I discussed why it should be used. Today I will discuss the best part of client side input validation: how easy it is to bypass!

This post hopes to show just how easy bypassing client side input validation really is. For demonstration purposes, the code examples from the previous post will be used to demonstrate three very easy attacks that bypass client side validation. To recap, here is the HTML code for zip.html, the form used to enter a zip code.

<-html->
<-head->
<-title->Please enter in your zip code<-/title->
<-script language=JavaScript->
function validateme()
{
var zv = document.zipform.zip.value;
if (zv.length != 5)
{
alert("Please enter in 5 characters");
return false;
}
if (zv.match(/\d\d\d\d\d$/))
{
return true
}
else
{
alert("The Zip Code field does not contain 5 digits.");
return false;
}
}
<-/script->
<-/head->
<-br->
<-form method="POST" name="zipform" onsubmit="return validateme()" action="zip.php"->
<-center->Enter in your zip code (Only 5 digits please):<-input type=text name="zip" maxlength=5->
<-br->
<-input type="submit" value="Enter Zip Code"-><-/center->
<-/form->
<-/html->

The first and simplest attack to bypass client side validation, drum roll please…., is to disable JavaScript in the browser. That's it; a pretty tough attack, since at most you must restart the browser for the change to take effect. A variation of this attack is to use a browser that supports extensions, like Firefox with the NoScript extension installed, and forbid the host from running scripts.

The second method requires a very special and 3l1+3 (elite in l33t speak) tool called a text editor, such as gedit or notepad. In the sample code zip.html, validation must be bypassed in two places: the HTML MAXLENGTH attribute and the validateme function. To begin this attack, open zip.html in a web browser of your choice and save the page to the local system.

The first thing to deal with is submitting the form back to the original website, in this example www.badwebapp.com. If the page is using relative paths, as seen in the example code below, the form action property must be changed.

<-form method="POST" name="zipform" action="zip.php"->

To send the form back to badwebapp.com, change the action property to point at www.badwebapp.com, as seen in this code example:

<-form method="POST" name="zipform" action="http://www.badwebapp.com/zip.php"->

If this is not changed, the form will attempt to post the data to the local system and an error will be returned to the browser.

With the form ready to be processed by the original website, the first validation to deal with is the HTML MAXLENGTH attribute. Open the file zip.html and go to this line:

<-center->Enter in your zip code (Only 5 digits please):<- input type=text name="zip" maxlength=5->

Delete the MAXLENGTH attribute, and the first input validation is bypassed. Save the file to the local system, then open the file in a web browser. It is now possible to enter more than 5 characters in the input field; however, a warning box from the validateme function will indicate that more than 5 characters were entered.

There are three options to bypass the JavaScript validateme function. The first is to use HTML comment tags (< ! - - what is to be commented out goes here - - >) to comment out the script, the second is to change the code itself, and the third is to remove the onsubmit property from the form properties, as seen in this example:

<-form method="POST" name="zipform" onsubmit="return validateme()" action="zip.php"->

Any of the three methods will be sufficient to bypass the validateme function. After completing the changes to the file, save it to the local system. Open zip.html in a web browser, enter any value such as ABCDE12345, and submit the form for processing by the web server. Since this web server performs no server side input validation, the following message will be returned:

Congratulations! Your zip code is ABCDE12345 Thanks!

Changing the code can be a lot of work, especially in a web application that performs multiple client side input validation checks. What about changing the data after the validation, but before it's submitted to the web server? That is what is done in the third attack.

To perform this attack, a web proxy interception application such as OWASP WebScarab is used.

After installing WebScarab, fire it up and configure it to intercept requests. When running WebScarab in the Lite interface, just click on the Intercept tab and check the Intercept Requests box. Next, open your browser of choice and configure it to use 127.0.0.1:8008 as the proxy server.

Now enter in 5 digits, which is the valid length for a zip code, and click on the Enter Zip Code button. As soon as this occurs, WebScarab will pop up its intercept window showing the captured request:

[Screenshot: WebScarab intercept window displaying the intercepted POST request]

Now change the value of the zip variable to ABCDE12345 and click on the Accept Changes button. Once again you are presented with the following message:

Congratulations! Your zip code is ABCDE12345 Thanks!
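
In fact, no browser or proxy is required at all; the server simply processes whatever bytes it receives. Here is a minimal sketch of the same bypass in Python, assuming the example application lives at the hypothetical www.badwebapp.com:

import urllib.parse
import urllib.request

# Post the invalid value straight to zip.php; no browser, no JavaScript, no proxy
data = urllib.parse.urlencode({"zip": "ABCDE12345"}).encode()
with urllib.request.urlopen("http://www.badwebapp.com/zip.php", data) as resp:
    print(resp.read().decode())  # the server happily accepts the invalid zip code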

As you can see, client side input validation is great for detecting typos and data entry mistakes, but it does not increase the security of an application.

Much like a border router, client side input validation can be used as the first layer of defense in the security of a web application. Just remember how easily client side validation can be bypassed, as seen in these examples, and that it should NEVER be trusted!

Wednesday, July 28, 2010

What is client side input validation and implementing it with javascript……………………………

Last time I discussed a situation where client side input validation would have assisted in preserving the security of a web application. I discussed how entering an extra digit into an input field revealed several details to me, information that could have easily allowed me to own the web application, and possibly the server! I want to state that when securing a web application you must use a defense-in-depth strategy. When writing web applications, client side input validation is the first of many layers in your defense strategy.

To recap, client side input validation is the process of testing input on the client to ensure the user entered the expected value types (numbers, letters, characters, or a combination of these) in a field before sending the data to the server.

There are two methods used to perform input validation: whitelisting and blacklisting. Whitelisting is defining what is expected and denying everything else. Blacklisting is denying certain values and allowing everything else.

Which method of input validation is better? In a perfect world whitelisting is the preferred method, but this is not a perfect world. With blacklisting, the issue of encoding is encountered. To see the blacklist encoding issue in action, let's use the dash ( - ) as an example.

You create a blacklist in your application that denies input containing the dash ( - ) in the Zip Code input field, because it is often used in SQL injection attacks such as ' or 1=1--. What happens when an attacker submits ' or 1=1 with the dashes encoded, for example as 0x2D0x2D? If the blacklist does not look for the various encoding schemes, the attacker can bypass the client side input validation routines. To resolve this you can add 0x2D (hex), %2D (URL encoding), etc., until all possible encoding schemes are covered, but this is an administrative nightmare for all but the simplest blacklist.

Using a whitelist with only known good values, in this example all digits, the problem is solved. That is, until an input field, such as a Zip Code field that uses the extended Zip Code+4, requires the dash in the input. This is where defense-in-depth comes into play, and sanitizing user input is used to solve the problem of legitimate use of otherwise bad characters, but that is a subject for another article.
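
To make the whitelist/blacklist distinction concrete, here is a minimal sketch (in Python purely for illustration; the example application below uses HTML and JavaScript):

import re

def whitelist_valid(zip_code):
    # Whitelist: accept exactly five digits and deny everything else
    return re.fullmatch(r"\d{5}", zip_code) is not None

def blacklist_valid(zip_code):
    # Blacklist: deny a few known-bad characters and allow everything else;
    # encoded variants (0x2D, %2D, ...) slip right through this check
    return not any(bad in zip_code for bad in ["-", "'", ";"])

print(whitelist_valid("12345"), whitelist_valid("ABCDE"))  # True False
print(blacklist_valid("' or 1=1--"))                       # False: the plain attack is caught
print(blacklist_valid("%27 or 1=1%2D%2D"))                 # True: the encoded attack slips through

Now let's see an example of client side input validation in action.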

Our example application asks the user for their Zip Code. Zip Code fields are great examples for client side input validation. The application consists of an input form where the user enters their zip code, and a PHP script that returns the Zip Code the user entered into the form. The input form is named zip.html and consists of the following code, where the user enters their Zip Code into the zip input field:

<-html->
<-title->Please enter in your zip code<-/title->
<-br->
<-form method="POST" name= “zipform” action="zip.php"->
<-center->Enter in your zip code (Only 5 digits please):<-input type=text name="zip"->
<-br->
<-input type="submit" value="Enter Zip Code"-><-/center->
<-/form->
<-/html->

Once the user clicks on the 'Enter Zip Code' button, the form is submitted to the server and processed by the zip.php application, which consists of the following code:

<-?php print "You entered in "; print $_POST['zip']; print " as your zip code"; ?->

Once processed the php code returns this output page:

You entered in 12345 as your zip code

Now if the user enters 123456, the application returns 123456, because no input validation is performed. In the US, zip codes are 5 digits, so entering 6 digits is an invalid zip code that should be rejected. With the code above it is possible to submit bad data to the application, which can lead to compromise of the application and possibly the server.

So how do we prevent a user from either accidentally or intentionally entering 123456 into the zip code field? The quickest way is to use the MAXLENGTH HTML attribute. All that is required is adding the MAXLENGTH attribute to the input text attributes. Here is an example of MAXLENGTH added to the HTML from above:

<-center->Enter in your zip code (Only 5 digits please):<-input type=text name="zip" maxlength=5->

Now when a user tries to enter 123456 into the Zip Code input field, the form will only allow them to enter 12345. Great, an overly long zip code can't be entered into the application. So MAXLENGTH can be used to limit the length of input into a field.

But what happens when a user enters ABCDE in the zip code field? The application will return the following:

You entered in ABCDE as your zip code

This input passes because it's 5 characters long, but it is alphabetic rather than numeric, and the application accepts it because it satisfies the MAXLENGTH attribute. So how do we validate that only numbers, and no other values, are entered into the Zip Code field?

To validate, we must use a script to check the length and the type of characters entered in the Zip Code field. First the form attribute must be modified to run the JavaScript. To get the JavaScript code to run, we modify the form attributes as follows:

<-form method=POST name="zipform" onsubmit="return validateme()" action=zip.php->
<-center->Enter in your zip code (Only 5 digits please):<-input type=text name="zip"->
<-br->
<-input type="submit" value="Enter Zip Code"-><-/center->
<-/form->

onsubmit is a scripting event that performs error checking before the form is submitted; if the error checking passes, the input is submitted to be processed. However, if the error checking fails, the form is not submitted, and usually some form of failure notification is presented to the user.

Because the checks are being performed client side, the code must be added to the HTML document. Typically, this type of code is placed in the head tag of the HTML document. First the head tag must be added, then the scripting language must be declared. Here is the code added to the HTML document:

<-head->
<-script language=JavaScript->

Next the function to be used, named validateme, must be declared. Because multiple checks are going to be performed, the variable zv is declared, which contains the value of the zip input field from the form.

function validateme()
{
var zv = document.zipform.zip.value;

The first input validation checks the length of the input. If the input does not consist of 5 characters, the alert message "Please enter in 5 characters" is sent back to the user. If the input is 5 characters, the next validation check is performed. The code for the first validation check is below:

if (zv.length != 5)
{
alert("Please enter in 5 characters");
return false;
}

The next check is to see if the value entered is a valid Zip Code. To perform this check, regular expressions (regex) will be used. The check will match the regex /\d\d\d\d\d$/ against the zip code. If the input is 5 digits, the validation is considered true and the form is passed to zip.php for processing. If the zip code is not 5 digits, the message "The Zip Code field does not contain 5 digits." is returned to the user.

if (zv.match(/\d\d\d\d\d$/))
{
return true
}
else
{
alert("The Zip Code field does not contain 5 digits.");
return false;
}
}

Finally to close out the script section and the head section of the HTML document these last two HTML tags are added:

<-/script->
<-/head->

This is a very simplistic example of client side input validation, but it attempts to explain and show how to implement it. Please note that the dashes added to the HTML tags are there so the HTML code displays properly.

With client side input validation implemented, next time I will discuss a few ways to bypass it during a web application pen test.

Wednesday, July 14, 2010

What can happen if you don’t perform client side input validation……

The other day I was visiting a rewards website that required me to enter my 9 digit member number (thank god it was not my SSN) printed on a little wallet card. I pulled out the wallet card, entered my member number, and hit enter, fully expecting to see a web page telling me how many reward points I had earned.

Immediately an error message was returned stating an error had occurred. The error message itself didn't shock me; it was the information I gathered from the error message that did.

From the returned error message I was able to determine the server operating system, web server, and scripting engine, and there was a link to click on for the stack trace information. I clicked the link, and detailed exception information was displayed. All I can say is HOLY CRAP. Really?

To determine what went wrong I hit the back button on my web browser. Reviewing the page, I looked at my rewards number and saw that I had accidentally entered 10 digits instead of 9.

From one error message I was able to determine three issues with the website: lack of input validation, incorrect error handling, and information disclosure. While all of these issues can be detrimental to the security of an application, I want to specifically discuss input validation.

There are two types of input validation: client side and server side. These validation types should be used in conjunction to complement each other, and neither should be used by itself!

What is client side input validation? Simply put, client side input validation is the process of testing input on the client to ensure the user entered the expected value types (numbers, letters, special characters, or a combination of these) in a field before sending the data to the server.

The two primary benefits of client side input validation are client side error checking and error location identification. Client side error checking looks at the values entered into a field to see if those values are considered valid (i.e. it helps identify typos in the field). If the values are not considered valid, an error message should be generated identifying which field was not valid.

Why not rely on client side input validation for security? Because bypassing client side input validation is trivial, real trivial. So any input validation performed client side must be performed server side as well.

Using the rewards website example: had the website performed client side input validation on the member number field, I wouldn't have seen the error message that led to the discovery of three issues with the website. If I were performing a web application security assessment of this web site, I would venture to guess that owning the application would take very little effort.

So client side input validation by itself doesn't secure an application, but when used in conjunction with other methods it can increase the security of a website, a la defense-in-depth. In my next article I will discuss how to implement client side input validation, with an article to follow on how to bypass it!

Monday, June 28, 2010

Internet Network Filtering Part 4

With the inbound filtering configured, it is time to filter the DMZs. The focus of the 4th part of the series is configuring the ACLs for the customer DMZ.

The perimeter architecture consists of two DMZs. The first DMZ is called the "service" DMZ; the second is called the "customer" DMZ. The customer DMZ consists of systems used by Widgets to interact with its customers; these systems include the web site, online shopping, and the online support database.

When configuring network access, it's important to expose only the required ports. Because there are requirements for two DMZs, we must create two ACLs. To create the customer DMZ ACL we must identify the systems, their IP addresses, and the required ports for the customer DMZ.

Listed below are the requirements for the customer DMZ:

Widgets Website – 5.2.3.80 (192.168.1.80) [tcp/80]
Widgets Extranet Website – 5.2.3.143 (192.168.1.143) [tcp/80 & tcp/443]
Widgets Online Database – 5.2.3.44 (192.168.1.44) [tcp/80 & tcp/443]
Widgets 3rd Party Online Database Support Applications – 5.2.3.250 (192.168.1.250) [tcp/12345, tcp/23456, & tcp/34567]

With the requirements defined, it's time to create the access list. Since this is the ACL for the customer DMZ, the ACL name will be customer_access_in. Because the servers will be responding to requests, we must ensure that we permit traffic from our DMZ hosts back to the original requester. Here is how the ACL will be configured:

access-list customer_access_in permit tcp any host 192.168.1.80 eq 80
access-list customer_access_in permit tcp host 192.168.1.80 eq 80 any
access-list customer_access_in permit tcp any host 192.168.1.143 eq 80
access-list customer_access_in permit tcp host 192.168.1.143 eq 80 any
access-list customer_access_in permit tcp any host 192.168.1.143 eq 443
access-list customer_access_in permit tcp host 192.168.1.143 eq 443 any
access-list customer_access_in permit tcp any host 192.168.1.44 eq 80
access-list customer_access_in permit tcp host 192.168.1.44 eq 80 any
access-list customer_access_in permit tcp any host 192.168.1.44 eq 443
access-list customer_access_in permit tcp host 192.168.1.44 eq 443 any
access-list customer_access_in permit tcp any host 192.168.1.250 eq 12345
access-list customer_access_in permit tcp host 192.168.1.250 eq 12345 any
access-list customer_access_in permit tcp any host 192.168.1.250 eq 23456
access-list customer_access_in permit tcp host 192.168.1.250 eq 23456 any
access-list customer_access_in permit tcp any host 192.168.1.250 eq 34567
access-list customer_access_in permit tcp host 192.168.1.250 eq 34567 any
access-list customer_access_in deny ip any any


With the customer DMZ ACL built, it must be applied. Cisco ASA firewall ACLs are not applied under the interface configuration as in Cisco IOS, so to bind the customer ACL we type the following command:

asa(config)#access-group customer_access_in in interface dmz1

With this ACL in place, only the required customer DMZ services are exposed to the Internet! The next article will focus on the service DMZ.

Until next time.........

Thursday, June 24, 2010

Its official.........................

Well, after a week of waiting, yesterday morning I received my congratulations email: I am officially a CISSP! I must admit I felt like I put lots of work into this certification and am glad to be finished with it.

I can now take my free time and get back to lots of things I want to complete. I want to finish my Internet filtering series, then move on to some web application security stuff!

After this weekend off I will get back to more blogging!

Thursday, June 10, 2010

Internet Network Filtering Part 3

After configuring filtering on the border router, it is time to perform filtering on the firewall. I believe in one rule for inbound Internet traffic to the firewall: only allow what is REQUIRED for the organization's business to function. For the purpose of this article, the following services are required for the organization to function:

Widgets Website - 5.2.3.80 [tcp/80]
Widgets Extranet Site - 5.2.3.143 [tcp/80 & tcp/443]
Widgets Online Database - 5.2.3.44 [tcp/80 & tcp/443]
Widgets Email Server - 5.1.2.25 [tcp/25]
Widgets DNS - 5.1.2.53 [udp/53]
Widgets DNS - 5.1.2.54 [udp/53]
Widgets VPN - 5.1.2.123 [udp/500 & udp/4500]
Widgets SSL VPN - 5.1.2.43 [tcp/80 & tcp/443]
Widgets 3rd party Online Database Support - 5.2.3.250 [tcp/12345, tcp/23456 & tcp/34567]

Widgets' perimeter network is protected with a Cisco ASA firewall. The firewall has an outside interface (outside), a customer DMZ interface (DMZ1), a service DMZ interface (DMZ2), and an inside interface (inside).

Because ASAs use the concept of security levels, each interface must be assigned one. For more information on understanding the ASA security level concept, please visit the Cisco website.

The IP addresses of the servers in the DMZs use the RFC 1918 192.168.1.0/24 and 192.168.2.0/24 ranges. The use of these addresses requires address translation to be performed. For more information on understanding and configuring a Cisco ASA for address translation, please visit the Cisco website.

To configure the access list for the required services, we use this command syntax:

access-list NAME action protocol source destination service

For detailed information on ASA access-list configuration, please visit the Cisco website.

When configuring an ACL, unless there is a legitimate business case, I always take a default deny stance. When ordering an ACL, I prefer to place entries that will be hit more often at the top of the ACL.

Using the requirements listed above, we will create an ACL named outside_access_in:

access-list outside_access_in permit udp any host 5.1.2.53 eq 53
access-list outside_access_in permit udp any host 5.1.2.54 eq 53
access-list outside_access_in permit tcp any host 5.2.3.80 eq 80
access-list outside_access_in permit tcp any host 5.2.3.44 eq 443
access-list outside_access_in permit tcp any host 5.2.3.44 eq 80
access-list outside_access_in permit tcp any host 5.2.3.143 eq 443
access-list outside_access_in permit tcp any host 5.2.3.143 eq 80
access-list outside_access_in permit tcp any host 5.1.2.25 eq 25
access-list outside_access_in permit udp any host 5.1.2.123 eq 500
access-list outside_access_in permit udp any host 5.1.2.123 eq 4500
access-list outside_access_in permit tcp any host 5.1.2.43 eq 80
access-list outside_access_in permit tcp any host 5.1.2.43 eq 443
access-list outside_access_in permit tcp any host 5.2.3.250 eq 12345
access-list outside_access_in permit tcp any host 5.2.3.250 eq 23456
access-list outside_access_in permit tcp any host 5.2.3.250 eq 34567
access-list outside_access_in deny ip any any


If a syslog server with sufficient disk space is available, I prefer to log every Access Control Entry (ACE). After logging all my ACEs, if there is additional space on the syslog server, I will replace the final deny with this logged entry:

access-list outside_access_in deny ip any any log

When logging every hit on the ACL you will gain a great understanding of your network. However, this type of logging can be very expensive in terms of storage. If storage space becomes an issue, I at least keep logging on my permits.
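As an illustration of per-ACE logging (a sketch only, reusing the DNS entry from above), a permit with the log keyword looks like this:

access-list outside_access_in permit udp any host 5.1.2.53 eq 53 log

The log keyword causes the ASA to generate a syslog message when traffic matches the entry, which is what makes the storage planning above necessary.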

With the ACL built, it must be applied to the outside interface of the ASA firewall. Unlike Cisco IOS, it is not applied in the interface configuration. To bind the ACL to the outside interface, enter the following command:

asa(config)#access-group outside_access_in in interface outside
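Once the ACL is applied, you can verify it is matching traffic by reviewing the hit counters:

asa#show access-list outside_access_in

Each ACE is displayed with a hit count (hitcnt), which is useful for validating both the policy and the ordering of the entries.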

With the ACLs and NAT set up (check out the Cisco website for more information), you must next permit traffic to the physical servers in the DMZ. The next article in this series will describe creating the ACL in the DMZ.

Sunday, June 6, 2010

The Results are in.......

I sat for the CISSP exam two weeks ago (May 22, 2010); you can read about my preparation and experience here.

Today I received my CISSP exam results email and my heart skipped a beat! I didn't want to open the email without my wife, who was very supportive and helpful during the exam preparation, so I called her. She said open it! With her on the phone I opened the email and it said CONGRATULATIONS, and I knew I had PASSED!

I read the email three times to make sure that I read it right, and sure enough I did! Now all that is left is to go through the endorsement process. I was so excited I immediately emailed the required information to the person who is going to endorse me, and I got his out of office! Doh, he is on a family vacation this week!

So I wait another week to start my endorsement process, but I cleared the first hurdle in the pursuit of the CISSP.

Wednesday, May 26, 2010

I am back......

I have not blogged lately because since February I have been preparing for the CISSP exam! As someone who holds several certifications, I have never taken a certification test like this one. I must warn the reader that I am a very fast test taker; when I completed my CCSP I took no more than 30 minutes to complete (and pass :)) each test.

Materials used for test preparation:

All-in-One (AIO) 4th Edition by Shon Harris
CISSP for Dummies 3rd Edition by Lawrence Miller and Peter H. Gregory
CISSP Practice Questions Exam Cram 2nd Edition by Michael Gregg
Freepracticetests.org by Clement Dupuis
studISCope self assessment from ISC2
Exam Introduction and Overview by Clement Dupuis
Google/Wikipedia

Study Plan and pre-Test activities:

First I listened to the great Exam Introduction and Overview by Clement Dupuis. Using the information from this I developed my study plan, including what to study and the order to study the material.

I started by reading through the AIO and taking notes on definitions and other items I felt were important. During this first read I did not attempt to “study”, just get my notes made. After completing my notes I was ready to “study”. I used the same method for all domains, no matter my level of experience in the domain.

I first read the CISSP for Dummies to get “into” the domain. Using the Dummies book as the introduction, I would then read and study each section of the AIO. Once done with the AIO, I would take the AIO, CISSP for Dummies, Exam Cram and freepracticetests.org practice questions. Any areas I struggled with I made notes on for final review. After scoring 85 or better on the domain's practice tests I would move on to another domain.

During the last week before the test I had my wife quiz me from my final review notes. I also purchased the studISCope practice test during this time. The Friday before the exam I took the day off from work for a final review, doing several practice tests from freepracticetests.org and studISCope. Finally, on Friday night, I took the 150-question “final” exam from the CISSP for Dummies book. I went through the process I had planned for the actual test, including filling in the bubbles on a fake bubble sheet out of the CISSP for Dummies book. By then I had studied a minimum of 250 hours for this test; if I did not know it by now, I was not going to get it in the next few hours. With nothing left to do I went to bed early and fortunately got a good night's sleep.

Test Day

I woke up early on Saturday, the BIG DAY! I had a nice breakfast and drove to the test. I sat down and listened to the NDA and instructions. Finally I received my test book, and the clock started!

The plan of attack, which I modified a bit during the test, was to read each question. If I knew the answer I circled it; otherwise I circled the question and moved on. I went through all 250 questions and had answers for about 80% of them. One strange observation: I seemed to not know the answers in clusters. For example, I might not know 3 answers in 5 questions, then go 15 questions and know all the answers. Again, just a strange observation on my part.

Having gone through the test once, I changed my test-taking plan. I had originally planned to go back and answer the ones I did not know, without reviewing the questions already answered, then copy all the answers over to the scantron.

Instead, I reviewed all the questions. For questions not answered earlier I eliminated the obviously wrong answers. After reading a question again I was usually able to get a better understanding of it and answer it. If I was still unable to answer the question, I took an educated guess. I went through 10 questions at a time, then bubbled the answers onto the scantron sheet.

After completing my answer sheet I went back to ensure the answers I circled in the book were the answers I had on my scantron. With this review complete I turned in all my materials, thanked the proctor and walked out the door.

I looked at the time: it took me 4.5 hours to complete the exam. Time seemed to fly while I was in the test; I had no idea how fast it would go. In preparation I had thought I would be done in 3 hours. Guess not.

Post Test

My wife, who had some work to catch up on, went to the testing site with me. When I walked out she said I looked very dazed and confused. When asked how I did, I gave the honest answer: I had no idea.

While drinking a beer at the hotel (testing site) bar, one of my fellow test takers showed up. We had the usual conversation about how we each thought we did, and he answered the EXACT same way I did: completely clueless on how he did.

Even though it's only been a couple of days, I find myself checking my phone for the results email from ISC2. Once I receive the results email I will post how I did, hopefully having passed the CISSP exam!

Monday, March 29, 2010

Internet Network Filtering Part 2

The first part of this series covered ingress filtering on your organization's Internet router. The goal of ingress filtering was to drop illegitimate traffic before it hits your firewall. This part of the series will cover egress filtering on the Internet router.

Egress filtering is filtering traffic leaving an organization's network. For a more in-depth explanation of egress filtering, please read the paper I authored titled Performing Egress Filtering.

To review, the organization's Internet architecture consists of the following equipment:

One Cisco IOS router
Serial (s0/0) Interface connecting a T-1 to the Internet
Ethernet (e0/0) Interface connecting to firewall outside interface
One Cisco ASA firewall

The first step in implementing egress filtering is to determine all Internet connections into your organization's network. Create a list with each location and all address ranges at that location. In some organizations the Internet connection may consist of multiple address ranges and/or ISPs.

For these articles, the IP address ranges assigned by our fake ISP are 5.1.2.0/24 and 5.2.3.0/24. These ranges are currently not allocated and are used for demonstration purposes only.

Unlike the ingress filter, which used a standard ACL, an extended ACL is used for the egress filter because both source and destination addresses must be checked. To create the ACL, type in these commands:

inetrtr01(config)#access-list 101 permit ip 5.1.2.0 0.0.0.255 any
inetrtr01(config)#access-list 101 permit ip 5.2.3.0 0.0.0.255 any
inetrtr01(config)#access-list 101 permit ip any 5.1.2.0 0.0.0.255
inetrtr01(config)#access-list 101 permit ip any 5.2.3.0 0.0.0.255
inetrtr01(config)#access-list 101 deny ip any any log


The first two permit access control entries (ACEs) allow traffic sourced from our public address space to access the Internet through the Internet router. The next two permit ACEs allow traffic on the Internet to access resources in the IP ranges 5.1.2.0/24 and 5.2.3.0/24. The final ACE is used to identify systems using source addresses other than the ones assigned to our organization. Any system that hits the deny ACE must be investigated; the two likely causes are a misconfigured system or one infected with malware.

To maintain directional consistency, easier management, and better system performance, the ACL will be applied inbound on the Ethernet interface. Use the following commands to apply the ACL to e0/0:

inetrtr01(config)#interface e0/0
inetrtr01(config-if)#ip access-group 101 in


Two tests must be performed to ensure the egress filtering is working correctly. First, make a connection to a website such as www.google.com. Review the ACL; hits should appear on the entries for your source range and for traffic bound to your range. Next, make a connection to www.google.com, this time sending traffic with a spoofed IP address of 1.1.1.1. Review the ACL; hits should appear on the deny statement. If you see other hits on this ACE you should investigate them.
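A sketch of what the review might look like (the match counts are hypothetical, for illustration only):

inetrtr01#show access-lists 101
Extended IP access list 101
    10 permit ip 5.1.2.0 0.0.0.255 any (1523 matches)
    20 permit ip 5.2.3.0 0.0.0.255 any (847 matches)
    30 permit ip any 5.1.2.0 0.0.0.255 (1498 matches)
    40 permit ip any 5.2.3.0 0.0.0.255 (802 matches)
    50 deny ip any any log (2 matches)

Matches on the deny line should correspond to your spoofed test traffic; anything beyond that warrants investigation.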

Congratulations, you have now successfully implemented egress filtering on your edge router. Next time I will start on filtering at the firewall.

Sunday, February 28, 2010

The dog ate my thumb drive!

I was going to write about egress filtering on an Internet Router but something occurred today that I had to write about.

Remember the old adage “the dog ate my homework”? I can say that today the dog ate my thumb drive. While playing golf today I received a panicked phone call from my wife informing me one of my dogs had eaten my thumb drive! Now, I admit I use my thumb drive more for file transfer than file storage, I learned that lesson a long time ago, so nothing critical was on the drive.

When I got home I looked at the USB drive and it was crushed on one side of the interface. Since I knew the thumb drive contained nothing critical I decided to attempt to "repair" the drive.

With nothing to lose, and subscribing to the theory that you can save the world with a Leatherman and duct tape, I started my thumb drive repair. Using the Leatherman's many tools I very carefully bent the USB interface back into its normal rectangular shape.

Reviewing the work of the Leatherman and myself, and being very proud of it, I plugged the drive into my Mac and waited. I opened up Finder and success! Finder saw the drive and I successfully browsed it. Looking over the data on the thumb drive, I was correct that nothing would have been lost, thankfully.

So the moral of the story is: if your dog does eat your thumb drive, get a Leatherman and ever so carefully repair the drive! Next week I will discuss egress filtering on the Internet router.

Sunday, February 21, 2010

Internet Network Filtering part 1

The architecture for connecting organizations to the Internet typically comes in two flavors. The first consists of a single device, usually a router but sometimes a firewall. The second consists of multiple devices, typically a router and a firewall. No matter which architecture is chosen, it is important that proper filtering is implemented. This is the first in a series discussing the implementation of proper filtering for an organization's Internet connection.

For this series of posts, the organization's Internet architecture consists of the following equipment:

One Cisco IOS router
Serial (s0/0) Interface connecting a T-1 to the Internet
Ethernet (e0/0) Interface connecting to firewall outside interface
One Cisco ASA firewall

For a review of network ingress filtering, see RFC 2827.

To properly implement ingress filtering, begin by determining the addresses currently allocated by IANA. To review the currently allocated address space, visit the IANA website by following this link:

http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xml

Review the list and note every prefix whose status is either unallocated or reserved. The unallocated status is for prefixes that IANA has not issued. The reserved status is for prefixes reserved for various reasons, such as being used for RFC 1918 private addressing, multicast networks, research networks, etc. These prefixes have no legitimate reason to be routed on the Internet, and if seen entering the organization they should be dropped.

With the list of prefixes to be dropped, the next step is to build the access control list (ACL). Before building the ACL it is important to understand which traffic direction the list will be applied to.

Since the filtering decision is based only on source address space, a simple standard IP ACL will be used. To create the ACL, type in these commands:

inetrtr01(config)#access-list 10 deny 5.0.0.0 0.255.255.255
inetrtr01(config)#access-list 10 deny 10.0.0.0 0.255.255.255
inetrtr01(config)#access-list 10 deny 14.0.0.0 0.255.255.255
.
input omitted
.
inetrtr01(config)#access-list 10 deny 253.0.0.0 0.255.255.255
inetrtr01(config)#access-list 10 deny 254.0.0.0 0.255.255.255
inetrtr01(config)#access-list 10 deny 255.0.0.0 0.255.255.255
inetrtr01(config)#access-list 10 permit any


The permit statement allows all traffic from valid prefixes to access the organization's resources. Note that standard ACLs match only on source address and use wildcard masks (0.255.255.255 for a /8), which is why no destination or port appears in the entries above.

With the ACL built, use the following commands to apply it to the s0/0 interface:

inetrtr01(config)#interface s0/0
inetrtr01(config-if)#ip access-group 10 in


Now test the ACL by sending traffic to the organization with a spoofed source address of 10.0.0.1. After sending this traffic, check the ACL; there should be hits on the line for the 10.0.0.0/8 network. Once you are happy with the ACL, save the configuration.
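A minimal sketch of the check and save steps (the match count shown is hypothetical):

inetrtr01#show access-lists 10
Standard IP access list 10
    deny   10.0.0.0, wildcard bits 0.255.255.255 (4 matches)
...
inetrtr01#copy running-config startup-config

The show command displays per-line match counters, and the copy command writes the running configuration to NVRAM so the ACL survives a reload.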

Congratulations, you have now successfully implemented ingress filtering. All spoofed traffic from illegitimate sources will now be dropped by the Internet router. Dropping this illegitimate traffic at the router reduces the workload on the firewall.

Next week I will cover egress filtering on the Internet router to ensure no spoofed traffic is leaving the organization.

Sunday, February 7, 2010

Superbowl Weekend!

With the biggest football game on this weekend I am taking a break to enjoy the game!

Sunday, January 31, 2010

Bad Web Application

So this week I spent time playing around with vulnerable web applications from OWASP and Foundstone. Now, I will admit that I only played with these applications for a couple of hours, but it got me thinking about how to use these tools in my job. They are great for teaching how to perform web application penetration testing, but they seem to lack a way to fix the identified issues.

One responsibility I have is to work with my development team to write better code. Not being a developer by trade, and hoping to make it interesting for them, I want to take the bad web app idea a step further. I want the developers to write code to fix my bad application. OK, I am going to write code to fix it too, but I want to teach them to write the code as well.

The first application is a bad online banking system based on Linux, Apache, MySQL and PHP. I hope to have it completed in the next two months or so. I plan to fill the application with injection flaws, cross-site scripting issues, and broken authentication and session management issues: the top 3 from the OWASP Top 10 – 2010. Once I have the application “tested” I will post it online for others to use and learn from.

Saturday, January 23, 2010

Welcome!

Welcome to my new blog. I will discuss the many aspects of my life as a computer security professional.

This is my start to give back to the security community. My goal is to blog once a week about my thoughts, what I am working on, what I want to work on, tools, training, books, and techniques as a "security lifer"!
 