

Showing posts from 2018

Pointers for Websocket Security

WebSocket security pointers:

1. With form-based authentication, the authentication must happen before the WebSocket handshake, and the session token must be used when doing that first handshake.
2. The WebSocket server can use any client authentication mechanism available to a generic HTTP server, such as a cookie field value, basic authentication, digest authentication, or certificate authentication. As long as the user can be authenticated in a secure manner and the WebSocket server verifies it, the mechanism in question is suitable for use.
3. After authentication comes authorization. Authorization is mostly application dependent and mostly enforced at the application-logic level. The principle of least privilege applies in this context too: check whether an unprivileged user is able to access or see the data or functions of other users.
4. Cross-origin headers must be checked; they must not allow arbitrary sites to communicate with the WebSocket endpoint.
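The pre-handshake checks above (origin whitelist plus session verification) can be sketched as follows. The allowed origin and the in-memory session store here are illustrative assumptions, a real server would check against its actual session backend:

```python
# Sketch: validate the Origin header and the session cookie before accepting
# a WebSocket upgrade. ALLOWED_ORIGINS and VALID_SESSIONS are stand-ins.
from http.cookies import SimpleCookie

ALLOWED_ORIGINS = {"https://app.example.com"}
VALID_SESSIONS = {"s3cr3t-session-id"}  # stand-in for a real session store

def accept_handshake(headers: dict) -> bool:
    """Accept only if both the Origin and the session cookie check out."""
    origin = headers.get("Origin", "")
    if origin not in ALLOWED_ORIGINS:      # never allow arbitrary origins
        return False
    cookie = SimpleCookie(headers.get("Cookie", ""))
    session = cookie.get("session")
    return session is not None and session.value in VALID_SESSIONS

print(accept_handshake({"Origin": "https://app.example.com",
                        "Cookie": "session=s3cr3t-session-id"}))  # True
print(accept_handshake({"Origin": "https://evil.example",
                        "Cookie": "session=s3cr3t-session-id"}))  # False
```

Rejecting the upgrade when either check fails keeps unauthenticated or cross-site clients from ever reaching the socket.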

AWS Lambda security risks

And here is the list of top Lambda security risks:

1. Function event data injection: Injection flaws are among the most common application risks, and in serverless architectures they can be triggered not only through untrusted input such as a web API call but, because of the broader attack surface, also through cloud storage events, NoSQL databases, code changes, message queue events and IoT telemetry signals, among others.
2. Broken authentication: Applications built for serverless architectures often contain dozens -- or even hundreds -- of serverless functions, each with a specific purpose. These functions connect together to form the overall system logic, but some of them may expose public web APIs, others may consume events from different source types, and others may have coding issues ripe for exploitation, leading to unauthorized authentication.
3. Insecure serverless deployment configuration: The security firm found that incorrect settings a…
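The event-data-injection risk above boils down to treating every event field as untrusted, wherever it arrived from. A minimal sketch of a Lambda-style handler that whitelists its input (the event shape and the ID pattern are assumptions for illustration):

```python
# Sketch: strict input validation in a Lambda-style handler. The 'user_id'
# field and the SAFE_ID pattern are illustrative, not from a real API.
import re

SAFE_ID = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def handler(event, context=None):
    """Reject any event whose 'user_id' is not a strictly whitelisted token."""
    user_id = str(event.get("user_id", ""))
    if not SAFE_ID.fullmatch(user_id):
        return {"status": 400, "body": "invalid user_id"}
    # only now is user_id safe to use in queries or shell-free lookups
    return {"status": 200, "body": f"hello {user_id}"}

print(handler({"user_id": "alice_01"})["status"])                  # 200
print(handler({"user_id": "x'; DROP TABLE users;--"})["status"])   # 400
```

The same validation applies whether the event came from API Gateway, an S3 notification, or a queue message.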

Something about NACL and Security Groups- Cloud Security

NACLs and Security Groups:

Architecture:
1. Security Groups are attached to every EC2 instance.
2. NACLs sit at the boundary level -- at subnet boundaries.

Firewall type:
1. NACLs are stateless firewalls, meaning they don't keep track of packets going in and out. Every time a packet leaves the boundary, the NACL checks whether it is allowed out, and every time a packet comes inside the boundary, it checks again whether the packet is allowed to enter. As an analogy, a NACL can be considered passport control, which, even if it remembers your face, will still check your visa and passport before letting you in.
2. Security groups are stateful firewalls, meaning they remember which packets left and do not re-check the replies when they come back. They keep track of each packet going out and in. As an analogy, they can be considered a security guard sitting at the front gate who remembers who went out and lets them back in.

Traffic:
1. As the NACL is stateless, it makes the decision t…
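The stateless/stateful distinction above can be made concrete with a toy model: the stateless filter re-evaluates its rules for every packet, while the stateful one tracks outbound connections and lets the replies back in. The rule sets and packet shape here are simplified assumptions, not AWS semantics:

```python
# Toy illustration of NACL-like (stateless) vs security-group-like (stateful)
# filtering. Packets are plain dicts; rules are simplified assumptions.
class StatelessFirewall:
    """Checks every packet against the rules, inbound and outbound alike."""
    def __init__(self, allowed_ports):
        self.allowed_ports = set(allowed_ports)

    def allow(self, packet):
        return packet["port"] in self.allowed_ports

class StatefulFirewall:
    """Remembers outbound connections and lets the matching replies back in."""
    def __init__(self, allowed_inbound_ports):
        self.allowed_inbound = set(allowed_inbound_ports)
        self.tracked = set()                 # connections seen leaving

    def allow_outbound(self, packet):
        self.tracked.add((packet["dst"], packet["port"]))
        return True

    def allow_inbound(self, packet):
        # a reply to a tracked connection, or an explicitly allowed port
        return ((packet["src"], packet["port"]) in self.tracked
                or packet["port"] in self.allowed_inbound)

sg = StatefulFirewall(allowed_inbound_ports={443})
sg.allow_outbound({"dst": "10.0.0.5", "port": 8080})
print(sg.allow_inbound({"src": "10.0.0.5", "port": 8080}))  # True: tracked reply
print(sg.allow_inbound({"src": "10.0.0.9", "port": 9999}))  # False: never seen
```

This is why, with a real NACL, you need explicit rules for the ephemeral return ports, while a security group lets return traffic through automatically.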

Effective way of preventing malicious file upload

The below are the prescribed best practices when deciding to allow file upload in a web application, along with the implemented approaches. A few points:

Extension whitelisting: The obvious first line of defense is whitelisting extensions. A simple but easily bypassable approach; still good to have.

File header type checking: This helps prevent the above bypass. Even if the request is captured and tampered with to include a restricted file (say, an exe), the application will check the file header (the magic numbers) and reject it. Suppose an application only accepts .pdf files and expects a %PDF header; when we try uploading an exe, which has an MZ header, the file will not be uploaded. Even if you try replacing the MZ with %PDF, the file will get uploaded, but the resultant file will be treated as a PDF and not an exe, so it becomes useless.

Content type: The content type decides how to treat/render the file once uploaded. The app…
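The magic-number check described above can be sketched in a few lines. The signature table here covers only the formats discussed (PDF and, for contrast, PNG); a real implementation would carry a fuller table:

```python
# Sketch: reject an upload when the declared extension and the real file
# header (magic number) disagree. Signature table is a small assumption.
MAGIC = {
    ".pdf": b"%PDF",
    ".png": b"\x89PNG",
}

def header_matches(filename: str, data: bytes) -> bool:
    """True only if the extension is whitelisted AND the header matches it."""
    for ext, magic in MAGIC.items():
        if filename.lower().endswith(ext):
            return data.startswith(magic)
    return False        # extension not on the whitelist at all

print(header_matches("report.pdf", b"%PDF-1.7 ..."))    # True
print(header_matches("report.pdf", b"MZ\x90\x00 ..."))  # False: EXE header
```

An exe renamed to .pdf fails the header check; an exe whose first bytes are patched to %PDF passes, but, as noted above, it will then be treated as a PDF and is useless as an executable.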

Some smbrelay points

Points to remember to avoid confusion when doing SMB relay:

1. NTLM hashes are stored in the SAM database; on a DC they are in the NTDS.dit database.
2. Until recently, the stored NTLM hashes were a combination: the LM hash before the colon, the NT hash after it. From Windows Server 2008 onwards the LM hash is abolished and only the NT hash is stored.
3. NTLMv2/Net-NTLMv2 hashes have a different format and are based on a challenge/response algorithm and the user's NT hash. They belong to network authentication protocols.
4. Pass-the-hash (PTH) attacks are not possible with NTLMv2 hashes, but they are with NTLM hashes.
5. NTLM hashes can be dumped from memory using Mimikatz-type tools, and we can use the NT hashes for a PTH attack.
6. We can get NTLMv1/v2 hashes using tools like Responder.
7. We don't have to crack the hashes we get from Responder; we can relay them directly to another machine.
8. SMB signing prevents this sort of attack.
9. Tools to relay: or with the Impacket library.

Now the steps…

SecureString implementation best practices

As the brush with 2-tier apps continues, the usual recommendation for managing memory against leakage is to overwrite it quickly once its use is over. Although this does not prevent leakage completely, it reduces the attack surface by a considerable extent. Fortunately, for .NET applications there's a class called SecureString, which allows you to keep string data encrypted in memory. But there are a few things to keep in mind. I liked the below points from a Stack Overflow discussion: "Do you know how many times I've seen such scenarios? (Answer: many!)
1. A password appears in a log file accidentally.
2. A password is shown somewhere -- once a GUI displayed the command line of an application being run, and the command line contained the password.
3. Using a memory profiler to profile software with your colleague; the colleague sees your password in memory. Sounds unreal? Not at all.
4. Some tools, such as RedGate software, that can capture the "value…
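Python has no SecureString equivalent, but the underlying idea of the recommendation above -- keep the secret in a buffer you control and scrub it the moment its use is over -- can be sketched with a mutable bytearray (immutable str objects cannot be overwritten in place):

```python
# Sketch: hold a secret in a mutable buffer and zero it out immediately
# after use, shrinking the window in which a memory dump can capture it.
def use_password(buf: bytearray) -> None:
    try:
        # ... authenticate with bytes(buf) here ...
        pass
    finally:
        for i in range(len(buf)):   # overwrite every byte before release
            buf[i] = 0

pwd = bytearray(b"hunter2")
use_password(pwd)
print(pwd)   # bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```

As with SecureString, this is risk reduction rather than a guarantee: the interpreter may still have made transient copies before the wipe.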

How to join HackTheBox challenge

Hack The Box is an excellent collection of vulnerable VMs, hosted online so you can test/hack them to upgrade your hacking skills. To join HTB, you need an invite code, which has to be entered while signing up. This invite code is not something someone will forward to you; you have to generate one using your hacking skills and enter it to register on the site. 'View source' will not work, so we use the developer tools, and carefully going through them we find a file called /js/inviteapi.min.js. Its contents can be pasted into an online JavaScript interpreter, but that gives no result. However, we can see makeInviteCode, which seems interesting. Let's search for this code in the console. Executing makeInviteCode() gives us data which seems to be ROT13 encoded. Decoding it gives some instructions. We use curl to fire the above request, to get an…
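Decoding ROT13 data like the output mentioned above is a one-liner in Python via the built-in text-transform codec (the sample string here is made up, not the actual HTB output):

```python
# Sketch: decoding a ROT13 string with Python's built-in 'rot13' codec.
# The encoded sample below is illustrative, not the real HTB response.
import codecs

encoded = "Guvf vf abg gur erny vaivgr pbqr"
print(codecs.decode(encoded, "rot13"))   # This is not the real invite code
```

Any ROT13 web tool works just as well; the point is that the "encrypted" instructions fall out with a single transformation.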

Good case for avoiding sensitive information in url

Nothing extraordinary here, just an interesting case I came across today; it can be one of the examples we give to app teams too. Someone posted a link from a well-known forum in my WhatsApp group today. Upon clicking, it opened in the browser; after a while it prompted me to post something, and then I noticed that it wasn't my name. :D Instead it was addressing me as 'Ronnie'. We were both surprised and amused. Then I searched all my emails and WhatsApp chats and found that once, a long time back, Ronnie had posted a link from the same forum to me, which was very long and probably contained session information, a token, etc. This is what would have happened in the background:

- The long link (URL) from Ronnie contained session information/a token in the URL.
- The session token had been persistent and active for a pretty long duration (almost 6 months).
- I clicked a new, unrelated link today from another group, and Ronnie's session token…
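A simple automated check for the root cause above -- session material riding in the query string, where it gets copied along with the link -- might look like this. The list of parameter names treated as sensitive is an assumption for illustration:

```python
# Sketch: flag URLs that carry session-like material in the query string,
# since anything in the URL travels with the link when it is shared.
from urllib.parse import urlparse, parse_qs

SENSITIVE = {"session", "sessionid", "token", "auth"}

def leaks_session(url: str) -> bool:
    """True if any query parameter name looks like session material."""
    params = parse_qs(urlparse(url).query)
    return any(name.lower() in SENSITIVE for name in params)

print(leaks_session("https://forum.example/post/123"))                     # False
print(leaks_session("https://forum.example/post/123?token=abc&u=ronnie"))  # True
```

Tokens belong in cookies or headers, which don't get forwarded when someone shares the URL.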

Malicious file upload with embedded codes- countermeasures

Acting against malicious file uploads is not an easy task; we need to maintain a fine balance between security and user experience. We can still use the traditional measures such as checking content type, file headers, extensions etc., but what about cases where code is appended to a jpg/png file? The traditional countermeasures above will not work. So, a few countermeasures for such scenarios: Similar to how WAFs (Web Application Firewalls) work, the application should analyze each part of the file. The file needs to be parsed and checked for any malicious hints/contents, such as executable code containing dangerous functions (system, exec, kill etc.). Also check for the presence of encoders such as base64; there's no point in their presence in an innocent image file. Another effective method is to crop the image before saving it; check the code in the Case 3 section of Sanitizing image files. What it basically does is, before saving the file, some resizing and…
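The content-inspection idea above -- parse the upload and look for code or encoder hints that have no business inside a picture -- can be sketched as a simple signature scan. The signature list here is a small illustrative assumption, not a complete ruleset:

```python
# Sketch: scan an "image" upload for embedded code or encoder hints.
# SUSPICIOUS is a tiny illustrative list, not a production ruleset.
SUSPICIOUS = [b"<?php", b"system(", b"exec(", b"base64_decode", b"<script"]

def looks_malicious(data: bytes) -> bool:
    """True if any known code/encoder signature appears in the file bytes."""
    lowered = data.lower()
    return any(sig in lowered for sig in SUSPICIOUS)

clean = b"\x89PNG\r\n\x1a\n" + b"\x00" * 32
tainted = b"\x89PNG\r\n\x1a\n" + b"<?php system($_GET['c']); ?>"
print(looks_malicious(clean))    # False
print(looks_malicious(tainted))  # True
```

Note both samples carry a valid PNG header, which is exactly why header checks alone miss this class of upload; re-encoding/cropping the image, as the post suggests, destroys the appended payload altogether.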

So how do you steal credentials from memory on mobile?

It's not a technical question; it comes up when a few people argue (devil's advocate) that even if their app has an issue of storing the login credentials in memory, what's the risk? Their argument: they have jailbreak/root detection implemented, so the app cannot be installed on a rooted device. >> Counter-argument: JB/root detection is completely bypassable, as it is a client-side protection. In one scenario, a user can intentionally or unintentionally bypass this check and install the app on his own rooted device, so he can also enjoy banking and other apps that require root to block them. In a second scenario, a security researcher can do the same thing to research and learn how the app works; if the app belongs to a reputed firm and he/she makes this finding public, it would be a reputation loss. If you try to root a device which has the app already installed, the device will reboot, and in the process kill the app's process and consequently clear the memory w…

Touch ID auth - a boon or bane?

With the advancement of technology, applications are moving from traditional authentication towards modern methods. More and more biometric-based authentication is being used alongside the password-based kind. One such example is Touch ID, which uses a fingerprint to authenticate the user to the device/app. How does it work: at a high level, when a user chooses to authenticate to the phone using fingerprints, a representation of the fingerprint (not the raw image) gets stored on the device. The next time the user tries to authenticate and submits a fingerprint, the device matches the submitted fingerprint representation against the stored one and decides whether to authenticate. Sounds good, but what's the issue: it's a very convenient technology, opening the phone with just a mild touch of your fingerprint, with no need to remember/change/maintain a PIN or password. It's more secure because it's completely unique…