The EuroS&P Experience

I recently attended the EuroS&P conference and the co-located EuroUSec workshop. This year both were held in Stockholm, Sweden. Conferences and workshops are interesting events to attend, and the two are very different from each other.

Security & Privacy at KTH

The Conference

The main conference was quite a varied mix of topics. Quite a few went completely over my head; some were incredibly interesting insights into new research. The university where it was held, KTH, was very nice. The major problem was that two conferences were running at the same time, both sharing a restaurant; the resulting queues were not much fun.

If I could make one complaint to the conference organisers: if you’re going to list prices online, make sure to tell people in advance that they don’t include VAT. Finding that out at the last minute really puts a spanner in the works.

The keynote speaker, Melanie Rieback, had a really good story to tell. She built a unicorn company – entirely non-profit, open, freelance, and yet still able to compete on the marketplace with the industry leaders. Especially good was the observation that a lot of problems come from the Silicon Valley operating model – grow fast, exit fast. It’s very uplifting to know that running a tech company in a more relaxed manner is possible, and I’d certainly like to see more of it.

Smartphones

False Sense of Security. This talk presented research analysing how banking apps try to detect a jailbroken iOS device, and then either warn the user or refuse entry. I think there’s definitely scope for warning users, but I’m not a fan of apps that outright block “rooted” OSes. The cat-and-mouse game that ensues is kind of interesting – scanning for the footprints of a jailbreak, in some cases taking advantage of potential security holes presented by the broken OS sandbox. Most amusing is the finding that some app developers have just copied and pasted detection code from Stack Overflow.
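
The detection logic itself tends to be simple. Here is a minimal sketch of the kind of artifact scan these apps run – written in Python purely for illustration, since real apps do this natively in Swift or Objective-C – checking for well-known jailbreak leftovers:

```python
import os

# Well-known filesystem artifacts left behind by common jailbreaks.
# Real apps check these (and more) natively; this sketch just
# illustrates the technique.
JAILBREAK_ARTIFACTS = [
    "/Applications/Cydia.app",
    "/Library/MobileSubstrate/MobileSubstrate.dylib",
    "/bin/bash",
    "/usr/sbin/sshd",
    "/etc/apt",
]

def looks_jailbroken() -> bool:
    """Return True if any known jailbreak artifact is present."""
    return any(os.path.exists(path) for path in JAILBREAK_ARTIFACTS)

if looks_jailbroken():
    print("Warning: this device appears to be jailbroken.")
```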

Up-To-Crash. Updating libraries is a horrible mess. Every couple of months GitHub’s automated security bug detection flags vulnerable libraries in old repositories of mine. This talk presented a tool that tries to automatically detect when an app’s libraries can be updated. It uses a concept called a “Monkey Troop” to try to find crashes when a library updates. It seems like a really good idea, but I’m not sure how good a substitute it will ever be for a developer just, you know, maintaining their app. If the developer isn’t around to do that, they won’t be around to run the auto-update checker either.

Exploit Mitigations. A bit outside my area of expertise, this talk was about embedded systems. One thing I quite liked was the speaker’s suggestion that the only concrete way we will get more security in embedded (and IoT) devices is if either end users or governments convince OEMs to implement it. This would cause a price increase because of the additional overhead on each device made, but for me (and many security experts) that is a sacrifice worth making.

Cryptocurrency & Cybercrime

Deanonymisation. Given that Bitcoin and other cryptocurrencies don’t offer real privacy, can you find a way to link transactions to one another? This talk described how you could potentially link supposedly anonymous transactions to a common actor, if you had a node in the network that was very well connected and recorded all the traffic that passed through it. As a theory it’s nice, but because it only works on live data and can’t be applied retroactively, I’m not sure it would be of any use for catching thieves or retrieving stolen coins.
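
My (possibly over-simplified) mental model of the technique: if your supernode hears most transactions first-hand, the peer that first announces a transaction is a good guess for its origin, so transactions first heard from the same peer can be clustered together. A toy sketch with made-up observation data:

```python
from collections import defaultdict

def cluster_by_first_relay(observations: list[tuple[str, str, float]]):
    """observations: (tx_id, peer_address, timestamp) as seen by the supernode.

    Groups transactions by the peer that first announced them - a crude
    proxy for 'these probably came from the same actor'.
    """
    first_relay: dict[str, tuple[str, float]] = {}
    for tx, peer, ts in sorted(observations, key=lambda o: o[2]):
        first_relay.setdefault(tx, (peer, ts))  # keep only earliest sighting
    clusters = defaultdict(list)
    for tx, (peer, _) in first_relay.items():
        clusters[peer].append(tx)
    return clusters  # peer -> transactions plausibly originating there

obs = [("tx1", "10.0.0.5", 1.0), ("tx1", "10.0.0.9", 1.2),
       ("tx2", "10.0.0.5", 3.1), ("tx3", "10.0.0.7", 4.0)]
print(dict(cluster_by_first_relay(obs)))  # tx1 and tx2 both trace to 10.0.0.5
```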

Ekiden. Smart contracts. With so many people losing trust in centralised systems, smart contracts might be the way forward: a magical unicorn protocol where you ask for work, someone else does it and gives you the response. This talk proposed a way to manage these (I don’t think I followed it very well). Until I can see a real-time smart-contract application akin to Word Online or the upcoming generation of online streaming games, one that won’t succumb to the kind of congestion CryptoKitties caused, I won’t buy smart contracts as a viable concept.

Understanding eWhoring. Online vice crime, encompassing everything from blackmail and extortion to a bizarre pyramid scheme. I’d say it was one of the best-presented talks at the conference (both for the funny use of cat pictures as a proxy for illegal images, and because it wasn’t a dry talk about formally verifying protocols). The research analysed online forums where people trade image packs which are then used to trick people in online chat rooms; at every stage, people are exploited for money. The solutions proposed to tackle this may well affect legitimate sex workers – a necessary sacrifice, because other than educating end users (good luck), I doubt there’s much you could do other than resort to technical measures like image fingerprinting. The big trouble with that approach is that you create a market for new, unfingerprinted images. Unpleasant, but fascinating, stuff.

Cryptography & Protocols

There were quite a number of sessions dealing with cryptography and protocols; unfortunately, many of these went into quite rigorous detail of the proofs. That’s good, but not really of great interest to me.

Topics included WireGuard, “attestation” across networks of devices, improving Signal’s authentication security using ratcheting, arbitrary Noise protocols, and elliptic-curve cryptography.

Benchmarking Flaws. One less in-depth talk discussed some of the issues in how researchers report on their systems’ security: how they sometimes abuse their results, misreport them, or make mistakes. It’s annoying that people make mistakes; it’s more annoying that some people deliberately, outright lie about their results.

Tell Me You Fixed It. This talk presented an amusing idea: when you scan for, and find, vulnerabilities, quarantine the victims until they patch their systems. The researchers partnered with an ISP and simply locked victims off from the web, except for certain resources. This looks like a pretty effective, if aggressive, way of motivating people to patch their systems.

Issue First, Activate Later. How do you handle secure communication between vehicles when they might not have access to the web? You also need to maintain privacy, avoid linkability, properly authenticate, and be secure from (for example) Sybil attacks. The proposed solution is to load all of the certificates onto a device at manufacture time, and distribute “unlock” keys for them at a later date, out of band. An interesting idea, as long as you can keep those keys secure.
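
Here is a toy sketch of how I understood the mechanism – my own simplification using the cryptography package’s Fernet, not the paper’s actual V2X scheme:

```python
from cryptography.fernet import Fernet

# At manufacture time: pre-load the vehicle with many certificates,
# each encrypted under its own small activation key. (Toy version only;
# the real scheme uses proper V2X certificate formats.)
certificates = [f"pseudonym-certificate-{i}".encode() for i in range(10)]
activation_keys = [Fernet.generate_key() for _ in certificates]
preloaded = [Fernet(k).encrypt(c) for k, c in zip(activation_keys, certificates)]

# Much later, out of band: the authority releases only the small key for
# the next batch, unlocking certificates already sitting on the device.
def activate(index: int, key: bytes) -> bytes:
    """Decrypt one pre-loaded certificate with its activation key."""
    return Fernet(key).decrypt(preloaded[index])

print(activate(0, activation_keys[0]))  # b'pseudonym-certificate-0'
```

The appeal is that the bulky certificates only need to be shipped once, while the later out-of-band distribution is just a handful of small keys.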

In Encryption We Don’t Trust. This research followed a group of German participants over multiple years to see how their attitudes towards encrypted communications changed (or didn’t). The researchers were quite lucky to have conducted the early study before WhatsApp really took off and became popular. They shared some of the mental models that participants had, and as ever it’s interesting to see how the layperson pictures the internet. Most of the participants didn’t notice their chat apps enabling encryption, nor did they understand the identity verification systems. People care about encryption and private communications, but they don’t understand the tech that enables them.

Privacy Protocols

PILOT. Indoor positioning is a tricky business. With little or no GPS data to go on, how do you figure out where a device is? Commonly, WiFi signals are used. Because this can lead to extremely precise location accuracy, how does a user go about keeping their privacy? This talk presented a solution in the form of secure two-party computation. I’m not sure I fully understand it, but the general idea seems to be to send parts of the positioning data to two servers that don’t collaborate, and then only the originating device, on recombining the results, can know where it is. A good idea, but that “no collaboration” is a big assumption.
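
My (possibly wrong) mental model of this is additive secret sharing: split a value into two random-looking shares, hand one to each server, and only whoever recombines them learns anything. A toy sketch:

```python
import secrets

MODULUS = 2**32  # work in a finite ring so each share alone looks random

def split(value: int) -> tuple[int, int]:
    """Split a secret into two additive shares."""
    share_a = secrets.randbelow(MODULUS)
    share_b = (value - share_a) % MODULUS
    return share_a, share_b

def recombine(share_a: int, share_b: int) -> int:
    return (share_a + share_b) % MODULUS

# The device splits its WiFi measurement between two servers that must
# not collude; each computes on its share, and only the device, after
# recombining the results, learns the final position.
a, b = split(4242)          # e.g. a signal-strength reading
assert recombine(a, b) == 4242
```

Neither server can learn anything from its share alone, which is exactly why the “no collaboration” assumption carries all the weight.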

In a similar vein, the Rethinking Location Privacy talk suggested maintaining privacy during positioning by modelling movement, and then using that model to generate a false “precise” position based on your rough location. This faked position could be sent to a server, so you keep your real position private while still getting fairly reliable localised services.
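
A toy sketch of the general idea – my own simplification that just samples a decoy uniformly near the rough location, where the paper uses a proper movement model:

```python
import math
import random

def decoy_position(lat: float, lon: float, radius_m: float = 500.0):
    """Pick a plausible fake 'precise' fix near the user's rough location.

    The paper models movement to keep successive decoys consistent; this
    sketch just samples uniformly within a disc around the true point.
    """
    r = radius_m * math.sqrt(random.random())  # sqrt gives uniform area density
    theta = random.uniform(0, 2 * math.pi)
    dlat = (r * math.cos(theta)) / 111_320     # ~metres per degree, close enough
    dlon = (r * math.sin(theta)) / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

print(decoy_position(59.3293, 18.0686))  # somewhere near central Stockholm
```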

There were two talks about the privacy of electronic voting, but the most interesting was a third, which dealt with paper voting. In Is Your Vote Overheard?, the researchers showed how the strategic placement of microphones (and a lot of free time) could let them accurately predict how you had voted, based on the sound of pencil on paper on a table. The takeaway: if you want truly secure and private voting, electronic voting aside, not even paper voting is totally invulnerable.

Web Security

Mitch is an ML tool presented by researchers which has random interactions with a website and tries to detect whether CSRF vulnerabilities are present. It does this by analysing the kinds of information present during a request–response cycle over HTTP. By now, CSRF shouldn’t even be a problem, as most libraries include ways to avoid it by default. Yet here we are, with the vulnerabilities still present. This will be more useful to an attacker (regardless of hat colour), because if a developer is aware that CSRF is an issue, they’ve probably fixed it, and if they don’t know, then they’re not going to run this tool.
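
For context, the standard library-level defence is the synchroniser token: the server ties a random token to the session, embeds it in its own forms, and rejects any state-changing request that doesn’t echo it back. A minimal sketch:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Bind a fresh random token to the session; it goes into a hidden
    <input> in every form the server itself renders."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def check_csrf_token(session: dict, submitted: str) -> bool:
    """On any state-changing request, the submitted token must match.

    A cross-site attacker can make the victim's browser send cookies,
    but cannot read or forge this per-session token.
    """
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
form_token = issue_csrf_token(session)
assert check_csrf_token(session, form_token)
assert not check_csrf_token(session, "attacker-guess")
```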

Domain Impersonation is Feasible is a review of lots of different CAs, including Let’s Encrypt, which tries to see just how good they are at verifying the real owner of a server. I’ve never had to deal with a commercial CA before, so it was interesting to see how they actually verify an owner. The biggest, nicest takeaway for me is that Let’s Encrypt is as good as (and in some cases better than) most commercial CAs at owner verification. Nice.

Using Guessed Passwords to Thwart Online Guessing. Password stuffing is apparently very common, in some cases accounting for a majority of traffic on certain sites. How do you prevent it without adversely affecting users? You can’t just block after X attempts, or legitimate users will be blocked too. You can’t block IP ranges, or one single infected computer doing the guessing could get an entire internal network locked out. The proposed solution is to record the incorrect guesses leading up to a correct one and, once a login succeeds, check whether the preceding guesses look like they came from an attacker or a legitimate user. It looks like there are some big issues in how to store this kind of sensitive data, though.
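
If I followed correctly, the check is retrospective: once a login finally succeeds, look back at the failed attempts and ask whether they resemble an attacker walking through a popular-password list or a legitimate user mistyping their own password. A toy sketch of that classification, under my own assumptions about what “looks like a typo” means:

```python
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "Password01"}

def looks_like_typo(guess: str, real: str) -> bool:
    """Cheap proxy for 'a mistyped version of the real password'."""
    if abs(len(guess) - len(real)) > 2:
        return False
    mismatches = sum(a != b for a, b in zip(guess, real))
    return mismatches + abs(len(guess) - len(real)) <= 2

def classify_failed_guesses(failed: list[str], real_password: str) -> str:
    """After a successful login, label the failures that preceded it.

    Typos of the real password suggest the legitimate user; hits on a
    popular-password list suggest an attacker. Note that doing this at
    all means recording failed guesses - exactly the sensitive-storage
    problem mentioned above.
    """
    typos = sum(looks_like_typo(g, real_password) for g in failed)
    dictionary_hits = sum(g in COMMON_PASSWORDS for g in failed)
    return "likely legitimate user" if typos >= dictionary_hits else "likely attack"

print(classify_failed_guesses(["hunter3", "Hunter2"], "hunter2"))  # likely legitimate user
print(classify_failed_guesses(["123456", "password"], "hunter2"))  # likely attack
```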

The MALPITY solution to malware is to build tarpits – things that slow the spread of malware down. The general idea of detecting and walling off infected machines at the network level is an interesting one, and it seems to work. But it relies on the malware itself having bugs. One quick patch from the malware authors, and I’m not sure the tarpit will be very effective.
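
For the unfamiliar, a classic tarpit is easy to sketch (my own minimal toy, not MALPITY itself): accept the malware’s connection, then dribble out meaningless data so slowly that a worm with a naive blocking socket loop gets stuck.

```python
import socket
import time

def tarpit(host: str = "0.0.0.0", port: int = 2222) -> None:
    """Accept a connection, then feed it one byte every ten seconds,
    forever. A worm with a blocking, timeout-free recv loop is trapped."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen()
    while True:
        conn, addr = server.accept()
        print(f"trapped {addr}")
        try:
            while True:
                conn.send(b"\x00")  # one meaningless byte...
                time.sleep(10)      # ...at a glacial pace
        except OSError:
            conn.close()            # victim finally gave up

# tarpit()  # blocks; this toy version handles one victim at a time
```

As the talk noted, this only works while the malware’s network code is naive: add a timeout and the trap springs open.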

New to Me

A number of the talks I attended were about subjects where I either know nothing or am a novice.

DroidEvolver. ML is one of the trendy topics right now. This talk presented a malware detector that tries to learn and evolve over time, as malware itself evolves. Not knowing much about AI, I found this brought up a number of points new to me: poisoning attacks, and how a continually learning system could end up forgetting how to detect older malware.

Steroids for DOPed Applications. Researchers sure are masters of puns, aren’t they? DOP (data-oriented programming) is something I have zero knowledge of. From what I can tell, it seems like a logical way to design applications, given how big data is the way of the industry right now, but I’m not sure how much use I would have for it. This talk presented a compiler that could be used to plan and design attacks on programs by reverse engineering the flows that data takes through them – a pretty cool idea, though I imagine it would be very tricky to actually use in practice against a remote system.

ReplicaTEE. This talk kept going on about “enclaves”, and it took me a while to catch on to what was meant by that. As far as I can tell, an enclave on a cloud host lets you run applications in an environment, like a VM, that the host provider (which you might not trust) can’t get into. Good idea! The talk presented a way to manage spinning up and stopping multiple enclaves when you might not even trust, or be able to access, the system that starts them in the first place. Not very useful to me, given that I don’t manage a massive cloud service, but good ideas all the same.

The Workshop

The European Workshop on Usable Security was the primary reason for my visit, and it certainly held my attention far more than the main conference. The atmosphere is also a lot more personal and (to my eye) informal.

End Users

A number of talks looked at usable security (and privacy) for end users.

Why Johnny Fails to Protect his Privacy detailed some of the reasons that users end up not opting for the most private settings in services – everything from privacy fatigue and the privacy paradox to a simple lack of awareness of the issues. Interesting for me, given that I try as much as possible to protect my privacy, sometimes to my own detriment.

Don’t Punish All of Us measured attitudes towards the roll-out of a 2FA system. The attitude is generally positive when the system works well, but it fails often enough that things start to tumble downhill. The talk suggested that some simple UX fixes could have prevented a lot of upset. I think 2FA is nice, but even for me it can be a bit of a hassle, and I’m not sure there’s any time when it has actually saved me from being hacked.

Analysis of Three Language Spheres. Every country in the world has passwords, but how do they differ? It was very interesting to see how passwords in leaked data sets differ between English, Chinese and Japanese users. Of course, they don’t all have access to the same alphabets, and often they may be limited to ASCII characters: English users seem to favour letters, Chinese users numbers, and Japanese users a combination of both. The research also went into the types of words and dates used, and how they differ across cultures. By analysing these data sets, the researchers created a model that could guess passwords better if it knew the locale of the user beforehand, thanks to the similarities within each culture’s passwords. This won’t matter once we all start using password managers, though… I wonder how use of those differs across cultures.
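
The character-class difference is easy to picture with a quick profile of each data set. A hedged sketch with made-up sample passwords (the paper, of course, used real leaked sets):

```python
from collections import Counter

def char_class_profile(passwords: list[str]) -> Counter:
    """Count how many characters are letters, digits, or symbols."""
    profile = Counter()
    for pw in passwords:
        for ch in pw:
            if ch.isalpha():
                profile["letters"] += 1
            elif ch.isdigit():
                profile["digits"] += 1
            else:
                profile["symbols"] += 1
    return profile

# Toy samples only -- the paper analysed real leaked data sets.
samples = {
    "English":  ["sunshine", "iloveyou", "monkey12"],
    "Chinese":  ["5201314", "111222333", "19880808"],
    "Japanese": ["sakura123", "momo0601", "yuki2525"],
}
for locale, pws in samples.items():
    print(locale, dict(char_class_profile(pws)))
```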

Detecting Misalignments between Security and User Perceptions. Security is hard, and this talk looked into how users can misunderstand the security of a system. The application tested was pEp, supposedly intended to improve email privacy as a reimagining of PGP. The talk presented a kind of modelling of user perception which I’m not sure I entirely followed, but it looks like a neat way to analyse the problem.

A Review of URL Phishing Features was an attempt to catalogue different aspects of URL phishing. There are two main kinds of feature – those that are user-facing (such as the context you see a URL in) and those that are computer-facing (such as character substitutions). Identifying common features is useful for teaching users about them, but you have to be careful, because their use evolves over time.
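
To make the computer-facing side concrete, here is a toy character-substitution check – my own illustration with a made-up brand watch list, not the paper’s feature catalogue:

```python
# Common visual substitutions used in phishing domains.
SUBSTITUTIONS = {"0": "o", "1": "l", "3": "e", "5": "s", "7": "t", "@": "a"}

PROTECTED_BRANDS = {"paypal", "google", "amazon"}  # made-up watch list

def normalise(label: str) -> str:
    """Undo common character substitutions before comparing."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in label.lower())

def looks_like_impersonation(domain: str) -> bool:
    """Flag domains whose first label is a substituted brand name."""
    label = domain.split(".")[0].lower()
    return normalise(label) in PROTECTED_BRANDS and label not in PROTECTED_BRANDS

print(looks_like_impersonation("paypa1.com"))  # True  - '1' stands in for 'l'
print(looks_like_impersonation("paypal.com"))  # False - the genuine label
```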

Smart Homes

Two of the talks were “Visions” – pilots and prototypes of ideas – related to smart homes.

Shining Light on Smart Homes was a proposal to design a system to help users visualise and choose connected devices that meet their expectations of privacy and security. An interesting idea, but it would take a lot of work to maintain, in terms of scouring the market and rating devices on their security and privacy.

Usable Authentication in the Smart Home is a project aiming to research how to authenticate devices in an easy way where passwords are not usable (assuming passwords ever were usable, but that’s another matter). It raised an interesting point I had never considered before: when you have multiple people in a home, with devices that might personalise to each user, how do you distinguish between them? A smartphone is the obvious choice, but not exactly usable and seamless.

Developers

Developers need usable security, too. One of these talks was given by me, but I won’t go into it in detail. It went well enough, I think.

A Survey on DCS (developer-centred security) looked at the current state of the art in the research. Some interesting observations came out of it: security should be a hard requirement in application design, not merely a non-functional one tacked on; and development organisations should have security champions who can be turned to.

2 Fast 2 Secure was an analysis of a company’s behaviour after it had suffered a security breach. It’s nice to see inside a company and how attitudes have changed; less nice to see that they have not necessarily changed for the better. Security theatre is quite prominent, which is never good, as it undermines trust in the measures put in place. The research identified a number of themes that can be used to classify behaviour. I don’t plan on doing any of that kind of thing myself, but it was interesting to hear about anyway.

Summary

I’ve mostly written this blog post for myself, to review what I saw and heard at the conference and workshop. I’ve used the word “interesting” a lot… it’s certainly an interesting adjective. I believe the organisers, IEEE, are going to publish the papers if you want to read them in full. Hopefully something here sparks your interest.

Join the Conversation

  1. Excellent post.
    I do the same thing when trying to remember a lot of stuff. It seems, for me anyway, that writing down the salient points or arguments helps me retain them.
    I suppose these sessions were prepared well in advance, but you mention a couple of items that came up recently…
    You mentioned “poisoning” – well, someone has recently used this “poisoning” attack against the personal certificate signatures held on the OpenPGP Synchronizing Key Server (SKS) network of two experts in the PGP field, Robert J. Hansen and Daniel Kahn Gillmor.
    There is a good report on the Naked Security blog from Sophos.
    On the matter of passwords and languages, there was also a report on the same blog regarding a seemingly well-thought-out password (characters, special characters, numbers, both cases, etc.) being one of the most common! The reason being language and ASCII characters: apparently the ASCII characters of these passwords spelled out a very simple phrase in Unicode (something like that, anyway).
    “Why Johnny Fails to Protect his Privacy” would have interested me, as I am primarily interested in personal privacy for the home user. Although we can take much from corporate security measures, Johnny may not have a budget for a refurbished desktop and extra network card to create a physical firewall, or he may simply not be familiar with the terms used in anti-virus or security settings and have them misconfigured…
    Happy to hear you had a good experience at the conference.

  2. Thank you for posting such a brilliant summary of your experiences at this conference. Dissemination is so underrated and undervalued, but it’s how interested individuals who perhaps don’t have the time or the money to attend can find out a little of what went on. I really wish I could have attended this, but I think I have probably learned more from your clear, plain-language summary/interpretation than I could have done by being immersed in detailed technical talks anyway.

    I have a few interesting anecdotes regarding “why Johnny fails to protect his privacy”, which resonate with many of the points you raise:

    1. When I worked as a teacher, every school I worked in had a password policy where the password expired after 1 month, and the user could not re-use a password from the last 12 months. In every school I worked at, almost all the teachers used “Password01” in January, “Password02” in February, etc… When you are stressed, short of time, and overworked, you can’t remember anything else and you just need to get the job done. Forgetting your IT password in a 21st century school is almost the ultimate disaster, so it’s entirely obvious that most staff will pick a weak, formulaic – but memorable – password. Those who didn’t, wrote it down and kept it nearby to where they usually used it. If any school pupils are reading this, please please attempt to crack your teachers’ and headteachers’ accounts and create some havoc. Please also leave a text document on the user’s desktop/home folder, saying that it’s the IT department’s policy, not the teacher that’s at fault. Don’t do anything destructive, fraudulent, or look up anything confidential (e.g. stealing records, manipulating grades, or deleting all their files – perhaps just change desktop wallpaper or add a funny message to the first slide of a presentation). This is the only way that IT departments will wake up to the fact that excessive/restrictive password policies are bad for security. For passwords, I normally use long, random passwords stored in a password safe, but unfortunately could not install it on school computers due to their security-theatre (“security” policy).

    2. I recently had to sign up for a digital service at work. The password policy in place required that a password have a certain length plus upper- and lowercase characters. They made no indication of this until I generated a random alphanumeric case-sensitive 8-character password, as I usually do for noncritical services. Despite its approximately 48 bits of entropy, it was rejected. Out of annoyance, I typed “Password01”, which fitted the requirements yet is an extremely common password that tops the entries in most cracking dictionaries, and my sign-up was accepted. I notified the developers of their stupidity, then changed my password.

    I also find it really ironic when apps refuse to work because I have a custom ROM / rooted phone. I use a custom ROM specifically to obtain later security patches and not have insecure apps I don’t need! I also have a rooted device precisely so I can make changes to improve my security! Back when I actually used some proprietary (cr)apps on my phone, I found it odd that one refused to work on my “insecure” rooted CyanogenMod (now LineageOS) phone running Android at the latest patch level, but ironically ran fine on the unpatched, unmaintained stock ROM that was full of the usual manufacturer-bundled spyware and hadn’t been updated in a couple of years. I now run a FOSS ROM with only apps from the F-Droid store, so these types of illogical problems are now history for me.

    I also find it really astonishing that hardly anyone realises that Alexa/Siri/Cortana/Hey-Google listen in on absolutely everything you say and do. They have to: how else would they be able to respond to “Alexa! Add useless tat to my shopping-list” if they weren’t already listening? They’ve even been caught recently listening in when they shouldn’t! I also find it amazing that lots of people don’t realise that Facebook and Google don’t provide their services out of the goodness of their hearts. Then, of course, there’s the “nothing to hide” argument. A lot of people seem happy to trade loads of personal information for the sake of an immediate gain (e.g. a cat video or a free e-mail service), so they can’t see the big problem they’re potentially storing up for themselves. How to get that message through to people is a question that really concerns me. It seems to now be considered normal to be spied on and snooped on 24/7. During the Cold War, the Russians apparently had a CCTV camera prominently looking over Red Square, as a statement: it was showing the world just how “in control” the government was, and how it could see what the citizens were doing. A single CCTV camera. Think about that for a second. Why are riots not occurring over how the GAFAM tech companies treat us all every single day?

      1. One topic that was brought up was the idea that people do really care, but have been worn down over time, to the point where now they just can’t be bothered with security or privacy. It’s unfortunate, but a reality. Government intervention to help people would be good, but depending on your viewpoint it might be seen as too much overreach on business, or as blocking the government’s ability to maintain security.

        1. Yes, very true. I would be wary of government intervention, though, as they have a perverse incentive to erode our privacy in the name of “security”. It would make their lives a lot easier if we used weak encryption and easy-to-crack passwords that were both just about strong enough to keep out opportunistic thieves, and of course proprietary operating systems from Redmond and California with built-in backdoors. Or, of course, if they performed security theatre by recommending lots of secure and private services and systems for us, but then recommended a secure, near-uncrackable password safe for everyone to use which was proprietary and had a backdoor for government use.

        It’s such a shame that people tend to either not care or have “privacy fatigue”. I sometimes feel a bit overwhelmed – like it’s futile. After all, most of my friends, family and other contacts use unethical privacy-invading software and services. For instance, their Facecrook app scans their phone book and gets my name and contact details, even if I don’t have a Facecrook (cr)app installed on any of my devices. If lots of my contacts are used by Facecrook, then Facecrook now has a very detailed profile of me – without my consent. Similar logic applies to people who are used by Screwgle, (Cr)apple and Malwaresoft. One thing that keeps me going is that, no matter how futile it might seem at times, I can still engage in passive resistance by not actively choosing to be used by these (dis)services. It’s a tiny, almost inaudible message, but it’s still a message, and hopefully others do the same. Not everybody will just passively roll over and take it. This is also related to the reason I still use Vivaldi, despite it being nonfree software. Yes, I can’t study the source code, but the developers are still quite transparent and explain that they have ethical principles. If that’s true, I want them to succeed, and I want less ethical companies to see that there is another way.
