
Posts

Showing posts from 2010

Disclosure of Anti-CSRF Token in URL

Is it a problem? I think not, as long as the token is a per-page, one-time-use token. In one of the applications we assessed, we had recommended implementing anti-CSRF tokens. When the application came back to us for verification, we found that it was implementing CSRF tokens which were: 1) sent in GET requests, i.e. appended to the URL; 2) generated per page; 3) one-time tokens. The only concern was the token in the GET request. It is certainly not a best practice, but the potential risk is minimal. Whether it can be exploited depends on the following constraints: 1. The victim must be logged into the application (obvious). 2. The CSRF token must be transmitted in a GET request. 3. The attacker must be able to capture the token, either in transit or from a repository (log files, browser cache etc.). 4. The attacker must trick the victim into clicking the crafted link. 5. The victim's session that holds the exposed t…
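To illustrate the per-page, one-time-use property, here is a minimal framework-neutral sketch. The in-memory `session` dict, `issue_token` and `consume_token` are all hypothetical names for illustration; a real application would hang this off its framework's session object.

```python
import hmac
import secrets

# Hypothetical in-memory session store; a real app would use its
# framework's server-side session instead.
session = {"tokens": set()}

def issue_token():
    """Generate a fresh per-page anti-CSRF token and remember it."""
    token = secrets.token_urlsafe(32)
    session["tokens"].add(token)
    return token

def consume_token(submitted):
    """Validate and invalidate in one step, so a token leaked via a
    URL, log file or browser cache cannot be replayed later."""
    for stored in session["tokens"]:
        if hmac.compare_digest(stored, submitted):
            session["tokens"].discard(stored)  # one-time use
            return True
    return False
```

Because the token dies on first use, even a token disclosed in a GET request is only dangerous in the narrow window before the victim's own request consumes it.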

Mould it as per your need

We had a discussion with our colleagues over an XSS issue found in one application. Initially there was no input validation at all: you could insert a simple script tag and execute XSS. Following our recommendations, they filtered out certain special characters (>, <, " etc.) and also encoded them at the time of output. Fair enough? No. They actually implemented only half of the recommendations, i.e. they worked on blacklisting and left out whitelisting. There are a number of models to consider when designing a data validation strategy, listed from strongest to weakest: 1. Exact Match (Constrain) 2. Known Good (Accept) 3. Reject Known Bad (Reject) 4. Encode Known Bad (Sanitize). They were implementing only the last two strategies. So the application was now filtering out normal XSS vectors like "><script>alert(...);</script>-based attacks. But what happens when we provide event handlers like onmouseover, onload etc.? XSS executed. When we brought…
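The two strongest models above can be sketched in a few lines. The field names and patterns below are hypothetical examples, not the application's actual rules:

```python
import re

# Exact match (constrain): the strongest model - the value must be
# one of a fixed set of known values.
ALLOWED_SORT_FIELDS = {"name", "date", "price"}

def validate_sort_field(value):
    return value in ALLOWED_SORT_FIELDS

# Known good (accept): accept only the character classes we expect,
# e.g. for a username field.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value):
    return bool(USERNAME_RE.fullmatch(value))
```

Note how an event-handler payload like `onmouseover=...` never even reaches an encoder here: it simply fails the whitelist, which is why these models sit above blacklisting in the hierarchy.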

Firesheep-Session Hijacking tool

Beware! Now any Jack can hijack your session with a new Firefox plugin tool, Firesheep. All he needs to do is install the tool in Firefox and start sniffing communications on a public unencrypted Wi-Fi network. Public Wi-Fi at airports, cafes etc. is generally unencrypted. Some websites, like Facebook, serve the login page over HTTPS but all the internal pages over HTTP once the user is authenticated. That makes such websites more prone to sniffing, and an unencrypted Wi-Fi network adds to the problem. After authentication, these websites generally assign the user a session identifier, which can easily be sniffed and used to impersonate the victim. Surely it's not a new concept, but what makes Firesheep more dangerous is that it's a click-and-hijack tool that even a novice can use in public places to sniff others' credentials. The authors of the tool wanted to draw attention to the kind of websites which don't implement…

Few more settings for NTLMaps

This is in continuation of my previous post on how to use the NTLMAPS tool for pen-testing applications requiring NTLM authorization. I was quite thorough and detailed about the steps for connecting the tool between the proxy and the server, until one day I found a mail from Mark Wityszyn: "Hi Nilesh, I've been struggling with the same problem for a while now and keep coming back to NTLMAPS but have never managed to get it to work for web server authentication. Would you be willing to share your configuration options from NTLMAPS?" Then I realized I had missed the configuration settings that have to be made in the server.cfg file of NTLMAPS. Here it is: go to the server.cfg file, which will be in the ntlmaps folder, and search for and change the following lines with your settings: PARENT_PROXY_PORT: specify here your Paros/Burp 'local' proxy port number. NT_DOMAIN: domain name of the network. USER: userid which needs to be authenticated. PASSWORD: password for us…
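Put together, the relevant part of server.cfg might look roughly like this. Section names are from memory of the ntlmaps distribution and all values are placeholders, so check them against your own copy:

```ini
; Hypothetical excerpt from ntlmaps' server.cfg - values are placeholders.

[GENERAL]
PARENT_PROXY: 127.0.0.1
PARENT_PROXY_PORT: 8080        ; your Paros/Burp 'local' proxy port

[NTLM_AUTH]
NT_DOMAIN: MYDOMAIN            ; domain name of the network
USER: myuserid                 ; userid to authenticate as
PASSWORD: mypassword           ; password for that user
```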

ViewState and CSRF

Today my colleagues Chintan and Ronnie and I were having a long discussion about ViewState's ability to thwart CSRF attacks. Chintan's argument was that CSRF is possible even if the application implements ViewState; Ronnie's thought was that it's virtually impossible to launch a CSRF attack on a ViewState-enabled application. My idea was that it's not impossible, but very difficult, and takes great expertise to launch the attack. We also saw various articles mentioning ViewState as a countermeasure to CSRF, while at the same time not denying that it can be circumvented. For the sake of doing some research on the topic I stumbled upon some articles and came to a conclusion: when attempting to exploit a CSRF issue, the attacker will try to remove the ViewState from the page, since the ViewState is often not required for a page to function properly. If the page complains when the ViewState is removed, the attacker will try logg…
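The reason ViewState only helps against CSRF when it is tied to the user (in ASP.NET, via ViewStateUserKey) can be shown with a framework-neutral sketch. This is not ASP.NET's actual implementation, just the underlying idea; the secret and function names are hypothetical:

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this is per-application
# configuration, never shipped to the client.
SECRET = b"server-side secret"

def make_state_mac(state, session_id):
    """MAC the page state together with the user's session id, so state
    captured in the attacker's own session cannot be replayed against a
    victim - the idea behind ASP.NET's ViewStateUserKey."""
    return hmac.new(SECRET, state + session_id.encode(), hashlib.sha256).hexdigest()

def verify_state(state, session_id, mac):
    return hmac.compare_digest(make_state_mac(state, session_id), mac)
```

Without the session id mixed into the MAC, an attacker could harvest a valid ViewState from their own session and embed it in a forged request, which is exactly the circumvention the articles describe.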

Your Cookie attribute will be overwritten

In one of the applications there was a vulnerability: they were not marking the cookie as 'HttpOnly', only as 'Secure'. I recommended that, as a best practice, they flag the cookie as 'HttpOnly' as well. Set-Cookie: JSESSIONID=AJ122112KJYS...; Secure. Now they "fixed" it: they were setting the cookie (Set-Cookie) as soon as the application loaded in the browser, marking it as 'Secure'. Once the user successfully authenticated, they regenerated the session ID and sent Set-Cookie again, this time marking it as 'HttpOnly' only. Set-Cookie: JSESSIONID=7H8TKLSDOPC56...; HttpOnly. Fine! But really? They were using the Set-Cookie header twice: the first time marking the cookie 'Secure', and again after regenerating it, marking it 'HttpOnly'. Now this was the problem. Setting the cookie again with Set-Cookie overwrites the cookie's earlier attributes. That means if you are setting cookie a…
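A toy model of the browser's cookie store makes the overwrite obvious. The `receive_set_cookie` helper is hypothetical, standing in for how a browser keeps one entry per cookie name and replaces the whole entry, attributes included, when a later Set-Cookie arrives for the same name:

```python
# Toy browser cookie store: one entry per cookie name. A later
# Set-Cookie for the same name replaces the value AND the attributes.
store = {}

def receive_set_cookie(name, value, **attrs):
    store[name] = {"value": value, **attrs}

# First response, pre-auth: marked Secure only.
receive_set_cookie("JSESSIONID", "AJ122112KJYS", secure=True)

# After login, session id regenerated: marked HttpOnly only.
receive_set_cookie("JSESSIONID", "7H8TKLSDOPC56", httponly=True)

final = store["JSESSIONID"]
# The Secure flag from the first Set-Cookie is gone - the fix is to
# send BOTH flags on every Set-Cookie:
#   receive_set_cookie("JSESSIONID", "...", secure=True, httponly=True)
```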

Open Redirection-How to Secure it

With OWASP including this issue in its Top Ten 2010 list, and with me finding lots of unvalidated redirects in the applications assessed every day, I had been giving developers the standard recommendation: go for a whitelisting approach. Include in your application a set of valid domains to which your users may be forwarded. Once you have identified a "whitelist" of trusted domains, put the list in a configuration file on the server or in a database. From a secure coding perspective, the redirection servlet or script should not take a URL as a parameter. Instead, require that the servlet accept an index that maps into the list of trusted domains. But as I am not very good at coding, I was not able to assist them with the code. Eventually, today I stumbled upon a very nice article. It describes the best practices for redirecting users to trusted domains and how to 'code' that. Please visit: http://mikeware.us/goodcode/?p=260
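The index-maps-to-whitelist idea can be sketched in a few lines. The destinations and the `resolve_redirect` helper below are hypothetical; in a real deployment the table would live in the configuration file or database mentioned above:

```python
# Hypothetical whitelist: the redirect endpoint takes an index,
# never a raw URL supplied by the client.
TRUSTED_DESTINATIONS = {
    "0": "https://www.example.com/home",
    "1": "https://partner.example.org/login",
}

SAFE_DEFAULT = "https://www.example.com/home"

def resolve_redirect(index):
    """Map an untrusted index parameter to a trusted URL; anything
    unrecognised (including an attacker-supplied URL) falls back to
    a safe default instead of being followed."""
    return TRUSTED_DESTINATIONS.get(index, SAFE_DEFAULT)
```

Since the client can only ever choose an index, there is no parameter left in which to smuggle `http://evil.example/` at all.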

Cookie 'Secure' attribute-really secure?

Today I just stumbled upon a discussion somewhere on the net. I saw a reply from Jeff (Chair, OWASP) to a question about the 'secure' attribute of a cookie: how secure is it? Well, it's a bit tricky: when the server sends the secure attribute to the client (browser), the client must have initiated the SSL connection before that happens. Otherwise the server will send the Set-Cookie header, secure flag and all, over the non-SSL channel itself. So you need to ensure that the client has established an SSL connection to the server before the server sends a Set-Cookie response. In Jeff's words: "If what you expect is full SSL protection for your cookies, there are two problems with this. First, as you've noted, your cookie might get exposed in a 'set-cookie' header that you inadvertently include in a non-SSL response. Second, and probably worse, the 'secure' flag doesn't really mean use SSL all the time. If you do send the 'set-cookie' header in a non-SSL…"
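One way to enforce Jeff's first point server-side is simply to refuse to emit the session Set-Cookie at all on a plaintext response. A minimal sketch, with a hypothetical helper name; a real app would hook this into its framework's response pipeline:

```python
def session_cookie_header(scheme, session_id):
    """Emit the session Set-Cookie only when the request arrived over
    SSL/TLS, so a 'secure' cookie is never exposed on a non-SSL
    response by accident."""
    if scheme != "https":
        return None  # never leak the cookie on a plaintext channel
    return "Set-Cookie: JSESSIONID=%s; Secure; HttpOnly" % session_id
```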

So, How will you work with a Proxy on NTLM...?

Most SharePoint environments today use NTLM (the default) as the authentication protocol. NTLM authentication is a challenge-response scheme consisting of three messages, commonly referred to as Type 1 (negotiation), Type 2 (challenge) and Type 3 (authentication). For more information on NTLM, see http://en.wikipedia.org/wiki/NTLM, as a discussion of NTLM and its workings is out of scope for this post. The problem with setting up web proxies (Paros, Burp etc.) is that they work fine with other types of authentication (custom, Basic), but where NTLM is used, the chain breaks between the proxy and the server, leaving the application non-functional. As soon as the proxy tries to connect to the server, it gets the following '401 Unauthorized' response:

HTTP/1.1 401 Unauthorized
Server: Microsoft-IIS/7.5
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
MicrosoftSharePointTeamServices: 14.0.0.4762
Date: Sat, 04 Sep 2010 06:38:22 GMT
Content-Length: 0

You can…
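The break in the chain is easy to spot programmatically: the server advertises NTLM in the WWW-Authenticate header of that 401, which plain proxies don't know how to answer. A small sketch (hypothetical helper name) that recognises such a challenge from a list of header pairs:

```python
def ntlm_challenge(headers):
    """Return True if a 401 response is demanding NTLM, as in the
    SharePoint response above. Header names are case-insensitive and
    there may be several WWW-Authenticate headers, so check them all."""
    for name, value in headers:
        if name.lower() == "www-authenticate" and value.strip().upper().startswith("NTLM"):
            return True
    return False
```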

Privilege Escalation with Like Query

Continuing with my last post, "DoS with Like Query", I want to discuss another of its impacts here. As I said there, the % and _ qualifiers are often overlooked by developers when filtering input, since they are not as devastating as other characters. They match zero or more characters and a single character, respectively. I got a taste of this again when assessing an application recently. The application had several roles, and Role A can't access the data of Role B (that's obvious :) ). The authorization checks were properly implemented, so no chance of privilege escalation there. When I examined the application closely, I saw it had various search modules based on several conditions: you search for a record after filling up a long form with fields like name, location, unit, suggestion no., suggestion name... blah, blah, blah. The one thing I noticed was that the application was using the 'Supplier Name' field to search the records and listing only those records which ha…
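The trick, and its fix, can be demonstrated with SQLite's LIKE and its ESCAPE clause. The table and data are made up for illustration; the point is that an unescaped % from the user matches every row, while escaped input only matches literally:

```python
import sqlite3

def escape_like(term, esc="\\"):
    """Escape LIKE wildcards so user input is matched literally."""
    return (term.replace(esc, esc + esc)
                .replace("%", esc + "%")
                .replace("_", esc + "_"))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE suggestions (supplier TEXT)")
conn.executemany("INSERT INTO suggestions VALUES (?)",
                 [("Acme Corp",), ("Beta Ltd",)])

# Unescaped '%' from the user matches EVERY supplier's records -
# the privilege-escalation trick described in the post.
all_rows = conn.execute(
    "SELECT * FROM suggestions WHERE supplier LIKE ?", ("%",)).fetchall()

# With the wildcard escaped, a literal '%' matches nothing.
safe_rows = conn.execute(
    "SELECT * FROM suggestions WHERE supplier LIKE ? ESCAPE '\\'",
    (escape_like("%"),)).fetchall()
```

Parameterised queries alone do not help here: the wildcard is data as far as SQL injection is concerned, so it must be escaped (or rejected) explicitly.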

Anti-CSRF measures and XSS

During an assessment of an application, my colleague Ronnie and I were discussing a scenario in it. The application had a login section, behind which a few pages were vulnerable to reflected XSS. The application was also vulnerable to CSRF. Needless to say, we suggested anti-CSRF measures for the application. Although we also suggested anti-XSS measures, the anti-CSRF measures were good enough to mitigate any attempt to exploit the reflected XSS flaws on the pages behind authentication: the application was rejecting any external request, so any attempt to exploit the reflected XSS would bear no fruit in a scenario like this. Anyway, we had recommended fixing both flaws independently, but I wanted to have a discussion about the issue. Lots of people responded, all with the same suggestion: fix both issues, don't take chances. But what I found most convincing were these arguments from MustLive and Lava: MustLive says: "Hel…