Degrading UX to improve security hurts both UX and security


[Image: AI-generated by DALL-E from the prompt "ice cream in a jar labeled 'healthy'"]

The Security team has completed a pentest on your legacy Java web application. They find 15 instances of XSS. How do you resolve this? Output encoding at each location means coordinating with a bunch of teams, and the security team says the SLA is 48 hours because this is an "OWASP Top 10" finding. So you take the easy way out and validate every user input for characters that can cause XSS. At the top of the list are a handful of special characters that hardly anyone uses in everyday input anyway, so you go ahead and block every request containing them.
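To make the contrast concrete, here is a minimal sketch of the two approaches in Java. It assumes the OWASP Java Encoder library (org.owasp.encoder) is on the classpath; the blocklist filter is the shortcut from the story, not a recommendation.

```java
import org.owasp.encoder.Encode;

public class XssHandling {

    // The shortcut: reject any input containing "dangerous" characters.
    // This breaks legitimate input (passwords, names like O'Reilly) and
    // is easy to bypass with encodings the blocklist never considered.
    static boolean passesNaiveBlocklist(String input) {
        return !input.matches(".*[<>\"'&].*");
    }

    // The actual fix: store input as-is and encode it for the context
    // where it is rendered (here, an HTML body).
    static String renderComment(String comment) {
        return "<p>" + Encode.forHtml(comment) + "</p>";
    }

    public static void main(String[] args) {
        String payload = "<script>alert(1)</script>";
        System.out.println(passesNaiveBlocklist(payload)); // false: blocked, for now
        System.out.println(renderComment(payload));        // rendered inert
    }
}
```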

All is good and the XSS is gone! But something strange now happens: some users are unable to log in. Duh! It's the XSS filter (they have "<" in their password). How do you fix this? Only a few users are blocked, so maybe force those users to reset their passwords. Also, add a rider that certain characters cannot be used in passwords. Problem solved!

You now assume the security team is off your back, but wait, there's more. They tell you the input validation filter can be bypassed and ask if you want to rethink output encoding. You still don't want to spend the next few sprints convincing a bunch of dev teams to change their behavior (they have better things to do). You wonder: what's the worst that could happen? The pentester on the security team loads up BeEF and shows you how a session token can be stolen with XSS. An attacker could also install a keylogger and cause havoc!

Now this is serious. You don't want this to happen to your users! At the same time, you are NOT going back to all those devs. Here's a better idea: how about we time out sessions every 5 minutes? That way, even if a session token is stolen, the impact is limited. Also, you decide to show an optional virtual keyboard on the login page. Keyloggers can't reconstruct a password from mouse clicks, right?
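For the record, the band-aid itself is a one-liner. Here is a minimal sketch as a servlet filter, assuming the classic javax.servlet API of a legacy Java web application:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Expire sessions after 5 minutes of inactivity. This caps how long a
// stolen token stays useful, but does nothing about the XSS that leaks
// the token in the first place.
public class ShortSessionFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession(false);
        if (session != null) {
            session.setMaxInactiveInterval(5 * 60); // seconds
        }
        chain.doFilter(req, res);
    }

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}
}
```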

The security team is unhappy. These are hacky solutions with a lot of loopholes. But the pentest has gone on for months and they are tired of being called “blockers”. Also, they know this is good enough for the auditors. So they let it go.

What does this lead to?

  1. Users cannot use some special characters in passwords, which reduces entropy and makes passwords easier to crack (a back-of-the-envelope calculation follows after this list). Oh, and good luck if there's a special character in your name ("O'Reilly").

  2. Users choose simpler passwords because typing on a virtual keyboard is terribly hard. This again makes passwords easier to crack.

  3. Every action that does not match the developers' version of the "normal flow" will kick your session out. This annoys users (there are always flows no one thought about), and they spend less time on the site.
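On point 1: the entropy of a random password is roughly length × log2(alphabet size), so shrinking the alphabet (or nudging users toward lowercase-only passwords on a clumsy virtual keyboard) directly cuts the attacker's work. The numbers below are a rough sketch, not a measurement of any real user population:

```java
public class PasswordEntropy {
    // Approximate entropy in bits of a password of the given length drawn
    // uniformly at random from an alphabet of the given size.
    static double bits(int alphabetSize, int length) {
        return length * (Math.log(alphabetSize) / Math.log(2));
    }

    public static void main(String[] args) {
        System.out.printf("95 printable ASCII chars, length 12:  %.1f bits%n", bits(95, 12));
        System.out.printf("~10 specials blocked (85 chars), 12:  %.1f bits%n", bits(85, 12));
        System.out.printf("lowercase + digits only (36 chars), 12: %.1f bits%n", bits(36, 12));
    }
}
```

Every bit lost halves the attacker's expected effort.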

Welcome to UX hell, sponsored by your friendly neighborhood Security team.

There was some exaggeration in the story, but you get the point. Usability and security are often considered a tradeoff. I think this framing is misplaced: bad UX often leads to unhappy users who make terrible Security choices.

A related Twitter poll from a while ago caught my attention. It was clear from the responses (even though the sample size was small) that most people do not enjoy using the scrambled keypad on a PoS machine. Some of the respondents told me that they end up entering the PIN multiple times (increasing the odds of shoulder surfing) or choose PINs that are simpler to remember (e.g., birthdays). Complex PINs rely on muscle memory, and a scrambled keypad messes with that.

Bad UX often leads to bad Security. Making things easier to use is a key UX tenet. Making things harder to use in the name of Security often leads to behavior that is suboptimal from both a usability and a security perspective. Interestingly, these bad patterns appear more often in companies where security teams are key stakeholders (e.g., banking websites).

Below are a few examples of terrible UX choices that aim to improve security but actually make things less secure:

  1. Hard-to-decipher CAPTCHAs: CAPTCHAs solve a real security problem, but some implementations have truly gone over to the dark side. I am from Bangalore, India, and most people here don't use the phrase "fire hydrant". When reCAPTCHA asks you to identify fire hydrants, most users feel stupid and probably fail the test. This is terrible for availability and user experience.

  2. Making it harder to type passwords: Rule of thumb: anything that makes it harder to enter passwords will encourage users to pick simpler passwords or will lead to repeated attempts at entering them (which increases the success rate of shoulder surfing). Examples include blocking copy-paste on password fields, scrambling the keys on PoS machines, virtual keyboards on websites, and so on.

  3. Trigger-happy session expiry: Accidentally clicked the back button? Off you go! Typed "!" instead of "1"? You must be an XSS ninja who will steal everything! While expiring the session on every anomalous behavior can keep DAST scanners from finding defects, it also gives you a false sense of security by hiding the underlying defects in the code. Not everyone can hire a manual pentester to bypass these controls, but rest assured that a dedicated hacker can.

  4. Complex password requirements that have nothing to do with passwords: When you block certain characters ("<", ">") from passwords to protect against XSS, you get a brittle XSS defense and a weaker defense against password cracking (a sketch of why the blocklist is unnecessary follows after this list). Another example: constantly asking users to change their passwords to meet compliance requirements forces them to choose passwords with patterns that are easier to crack.
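The blocklist in point 4 buys nothing because a password should be hashed the moment it arrives and never rendered back into a page, so its characters cannot reach an HTML context at all. A minimal sketch, assuming Spring Security's BCryptPasswordEncoder is available on the classpath:

```java
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class PasswordStorage {
    private static final BCryptPasswordEncoder ENCODER = new BCryptPasswordEncoder();

    // The raw password, special characters and all, is hashed immediately.
    // Only the hash is stored, and nothing here is ever echoed into HTML,
    // so "<" and ">" in a password pose no XSS risk whatsoever.
    static String store(String rawPassword) {
        return ENCODER.encode(rawPassword);
    }

    static boolean verify(String rawPassword, String storedHash) {
        return ENCODER.matches(rawPassword, storedHash);
    }

    public static void main(String[] args) {
        String hash = store("O'Reilly<3secure>pass");
        System.out.println(verify("O'Reilly<3secure>pass", hash)); // true
    }
}
```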

While it’s easy to call out bad patterns, it may be more useful to develop a framework for avoiding such outcomes. Here are a few questions to ask whenever Security recommends a change that affects users:

  1. Does this feature actually improve security, or is it just additional Security theater? If it is the latter, it may be good to re-examine the feature. Note: I am not saying "Security theater" is bad. Perception of security is as important as security itself. But compromising on security in service of posturing is definitely bad.

  2. In addition to improving Security, does it also increase complexity? If yes, what is the downside of that complexity? Does it reduce availability for certain kinds of users (e.g., users with ADHD will have a harder time with scrambled keypads)?

  3. Can we predict any unintended uses of this feature? Do any of them lead to reduced security?

We often speak about Security teams empathizing with developers. However, we should also strive to empathize with the user, especially the user who is different from us. Designers and frontend engineers do a great job of that; Security teams need to develop that muscle too. Developing a framework of such questions can help Security teams empathize better with the user.

That’s it for today! Are there other patterns that lead to bad UX and hence bad security? Is this concern overblown? Tell me more! You can drop me a message on Twitter, LinkedIn, or email. If you find this newsletter useful, share it with a friend or colleague, or on your social media feed.
