
Security: BSidesSF 2020

I went to BSidesSF (@BSidesSF), which is a friendly security conference organized by volunteers. These are my notes.

BTW, shout out to my buddy, Josh Bonnett, for introducing me to the conference.

Here's the schedule. Here's a link to their Capture the Flag.

This was their 10th anniversary.

"There are no attendees. Everyone is a participant."

They said, "If you're going to take a picture, make sure you have the permission of everyone in the shot. Crowd shots (those facing the crowd) are strongly discouraged."

They donate to "The Sisters of Perpetual Indulgence".

[Keynote] Give Away Security's Legos: Dumping Traditional Security Teams

The keynote was given by Fredrick "Flee" Lee (@fredrickl), the CSO at Gusto.

Legos are very accessible, and you can build amazing things.

Lego is the world's most recognized brand.

Lego's motto is "the best is not too good."

It's bad that most companies treat their security people as cops and gatekeepers. It's a horrible experience.

Here's what we security engineers want to believe: We're not drowning. Security is the smartest team. The design you reviewed is actually what shipped. Security is too hard for our co-workers; they can't be trusted! The company doesn't care about security. If we don't control everything, we'll get pwned.

Focus on secure outcomes. Be flexible on how secure outcomes can be achieved. "Let go and let devs."

You have to make security everyone's job.

The decision maker has to own the risk and has to be responsible for mitigating it.

The security team never has all the context.

Hence: Gating functions should go away. Devs should conduct code reviews. Product managers should lead threat models. Product owners should own risk acceptance. All employees must protect sensitive data and IP. Everyone must be accountable for their risk decisions. Security functions don't go away. The owners change!

Mistakes will be made. Don't expect perfection, but be open to pleasant surprises. Focus on failing quickly and identifying quickly. Remember that security teams also get things wrong. You don't have to give everything over immediately. Give teams the tools and support to reduce the impact from failure.

There will still be plenty for security professionals to do. We'll consult with teams on their risk decisions and tradeoffs. We'll do risk measurement, visibility, and accountability. We'll do security education, awareness, and culture building. We'll do tasks requiring deep specialization and focus. We'll shepherd vulnerabilities. We'll assist in incidents and forensics.

We need holistic risk visibility. We'll search for deep, nuanced security defects. We'll identify conflicts in risk decisions. We'll interpret the evolving threat landscape. We'll catalog sensitive assets.

We need to educate. You have to be a missionary / evangelist. Provide role-specific security training to everyone. Teach everyone about privacy concerns. Teach everyone about social engineering. Teach teams how to hack! Teach everyone about pragmatic risk decision making. Conduct tabletop exercises with each line of business.

Taking risks is what allows companies to flourish. In the 90's, people were afraid to put their credit card into a web form, but look at the growth that it allowed!

Shepherd vulnerabilities. Help teams properly prioritize vulnerabilities. Help teams stay accountable to fixing their vulnerabilities. Facilitate healthy bug bounties. Identify patterns in vulnerabilities. Right size the vulnerability management process to each team.

Catch the stinkers. Security should still hunt! Focus on finding the nuanced, hard to catch defects. Flag the bad smells and patterns early.

Assist incidents. Incident response is natural to security. Always help! Help even with non-security incidents. Other teams must still own those, though. Guide the incident process. Teach them to be better responders.

Not all legos are safe for everyone. Identify dangerous 3rd parties. Monitor for vulnerable dependencies. Educate teams on picking good partners. Flag security sensitive operations.

Ship more. You should be writing code. You should be building.

Create golden paths. Build services so the secure choice is the easy choice. Provide tokenization services and integrate them into downstream tools. Help teams establish and codify "known good" practices. Build automation to alert when they deviate. Make incident filing and management easy. Provide strong, but accessible fences around PII.

Build/improve validation in the developer's language and framework.

Automate, automate, automate. Make things self-service.

Someone set us up the SBOM - How software transparency can help save the world

Allan Friedman (@allanfriedman) from the NTIA which is part of the Department of Commerce.

Very few organizations can tell you if they are or are not affected when a new vulnerability comes out.

Transparency empowers good risk-based decisions.

Think of lists of ingredients, bills of materials, etc.

It's impossible to know if there are vulnerabilities in the things you use.

You gotta know what you ship.

Software licensing (i.e., tracking the licenses of the software you use) is really hard.

He's from the federal government. He's trying to help improve things.

He's trying to push for software transparency.

SBoM = Software Bill of Materials.

For each thing, you want: supplier, component name, version, and hash.
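As a sketch, a minimal SBoM entry carrying those four fields might look like this (loosely shaped like a CycloneDX component; the supplier, version, and digest shown are placeholders):

```json
{
  "components": [
    {
      "supplier": { "name": "OpenSSL Software Foundation" },
      "name": "openssl",
      "version": "1.1.1d",
      "hashes": [
        { "alg": "SHA-256", "content": "<hex digest of the shipped artifact>" }
      ]
    }
  ]
}
```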

You might not be able to track things down recursively; you have to track your known unknowns.

SBoMs help with time to remediation.

You have a whole tree of software (your deps have their own deps).

There were already 2 standards for SBoM data: SPDX and SWID

He is pushing for automation, but it's really hard.

Vulnerability != exploitability.

He thinks a modern OS might have 10,000 components.

Secure by Design: Usable Security Tooling

Hon Kwok, security engineer at Cruise.

She talked about a security tool that had really bad UX. Improving the UX in security tools reduces friction.

Usability = effectiveness, efficiency, and satisfaction.

UI Before API -- Dan Abramov

Use the company's component toolkit even for custom security apps.

Consistency and standards. Recognition rather than recall. Aesthetics and minimal design, etc. -- Nielsen

The Red Square: Mapping the Connections Inside Russia's APT Ecosystem

Ari Eitan (@arieitan), security researcher.

APT = advanced persistent threat = a hacker or group of hackers that know what they're doing. These are usually either organized crime or governments.

C2 server = command and control server

They did research. They're open sourcing some tools.

They're looking for connections between different Russian entities such as shared modules, tools, implementations, etc. How much are the different actors sharing with each other?

Here are the steps they took: gathering samples, classifying the samples, finding code similarities, and analyzing connections.

They had 2000 unique samples to work with.

There are no standardized naming conventions for malware among the vendors.

IOC = indicator of compromise.

They classified things into: actor, family, module, and version. There were 60 families and 200 different modules.

They disassembled each sample. They broke it into bits of assembly. Then, they took an approach like sequencing a genome. Hence, they were able to connect Russian samples via bits of their assembly.

They were able to find pairs of Russian samples that shared code.
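The core idea can be sketched in a few lines: treat each sample as a set of overlapping byte n-grams and score pairs by Jaccard similarity. This is a toy stand-in for their genome-style matching, not their actual pipeline:

```python
def ngrams(data: bytes, n: int = 8) -> set:
    """All overlapping n-byte substrings of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def code_similarity(a: bytes, b: bytes, n: int = 8) -> float:
    """Jaccard similarity over shared byte n-grams, in [0.0, 1.0]."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

Two samples that embed the same module score high even if their surrounding bytes differ; unrelated samples score near zero.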

Gephi is a tool to visualize and analyze graphs.

They were able to cluster things and put Russian actors on those clusters.


He says he is a reverse engineer.

No cross-actor connections were found. He didn't see any code being shared between different Russian actors.

Why not? We don't know. Maybe because OPSec makes it dangerous? Maybe because of politics?

Different teams of malware developers are writing the same code over and over again.

He talked about YARA rules.
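For reference, a YARA rule pairs named strings or byte patterns with a boolean condition over them. This hypothetical rule (the bytes and string are made up) would flag Windows executables containing either marker:

```yara
rule suspected_shared_module
{
    meta:
        description = "Hypothetical rule for a byte pattern shared across samples"
    strings:
        $code = { 55 8B EC 83 EC 40 6A 00 }   // made-up function prologue bytes
        $mark = "c2-beacon-v2" ascii           // made-up embedded string
    condition:
        uint16(0) == 0x5A4D and any of them    // MZ header plus any marker
}
```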

Human or Machine? The Voight-Kampff Test for Web Application Vulnerabilities

Vanessa Sauter from Cobalt.

It's fun to find vulnerabilities.

Web apps are the predominant targets in security.

What vulnerabilities can be found by scanners vs. only by humans?

Cobalt has 300 fully-vetted pentesters.

Here are the most common vulnerabilities (in decreasing order): misconfiguration (by a wide margin), XSS, authentication and sessions, other...

Remote code execution is at the bottom.

There are a ton of different scanners, many of which are really good: commix, ZAP, sqlmap, PortSwigger's Burp, w3af, etc.

Humans win on: business logic bypasses, race conditions, chained exploits, etc.

Business logic attacks exploit design flaws or unanticipated abuse of business logic.

She mentioned "time-of-check, time-of-use race conditions".

SSRF = server-side request forgery

The truth is: the Voight-Kampff test isn't right. In pentesting, humans and machines rely on each other. Humans and machines need to work together.

She opened it up for debate.

Her talk is focused on dynamic, black-box scanners rather than static analysis.

IDOR = insecure direct object reference

Peeling the Web Application Security Onion Without Tears

Noam Lorberbaum and Keith Mashinter from Adobe.

They're talking about Adobe Document Cloud.

The app was originally written using Django. Shared Cloud was a backend behind Django.

They used AWS's CDN.

Adobe open sourced the Adobe Common Control Framework (CCF). It enables a clear, consistent, efficient approach to security controls. They distilled a ton of industry standards into a common set of categorized controls. 10 frameworks, 20+ categories, 1350 control requirements. They aggregated this down to 290 controls.

He talked about the Adobe Identity Management Service (IMS). It's based on OAuth 2.0.

Ethos is a container-based, multi-cloud, CI/CD system based on Jenkins.

They migrated from Python to Java, Django to Spring, Require.js to Webpack, Backbone to React, Apache to Nginx, etc.

They separated client and service deployments.

They had mantras such as: Minimize the attack surface area, minimize trust, secure defaults, least privilege, fail securely, defense in depth, etc.

They have a content security policy.

Frontend: CDN to Nginx to S3

Backend: to microservices to shared cloud

They have some 3rd party component tracker. Things that are vulnerable are blocked from going to production.

They use Bishop Fox as an external pentester.

Bootstrapping Security

Jared Casner and Rob Shaw from CNote.

BofA is going to spend $600 million on security. This talk is not for companies like that.

How do you get a security program off the ground? How do you make a big impact on a small budget?

Security is everyone's job.

Threat modeling questions: What are we building? What can go wrong? What are we going to do about that? Did we do a good enough job?

PagerDuty has nice security talks that they publish. There's a lot of stuff on security on YouTube. There are a lot of great BSides talks.

PhishMe Free is great for doing phishing training.

Automatic Detection: SonarQube, ClamAV, Cylance, etc. (I can't see them all since they're too low on the screen.)

Logging: just do it. A centralized logging tool is crucial.

Get a Content Security Policy from day one. It's easier than adding it later. Report it to Sentry.
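As a sketch, a restrictive starter policy with violation reporting might look like the header below. The report endpoint is a placeholder; Sentry provides a per-project CSP report URL you'd substitute in:

```
Content-Security-Policy: default-src 'self'; object-src 'none'; frame-ancestors 'none'; report-uri https://<your-sentry-csp-report-endpoint>
```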

Set up DMARC early on so that no one can spoof your emails.
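A DMARC policy is just a DNS TXT record. A hedged example (example.com and the report address are placeholders, and p=reject assumes SPF/DKIM are already set up and aligned):

```
_dmarc.example.com.  3600  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```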

"You've either been hacked already, or you're going to be hacked."

Logging DNS queries is important.

Use SNS and AWS Lambda to build a data provenance layer.

(There were more tools that I couldn't see because they were too low on the slides.)

There's a security hub in the AWS Marketplace.

AlgoVPN is free; you just need to provide the server.

Going to the RSA conference is useful. Google for a free expo pass. See the vendors. Watch the keynote.

There are MSSPs (managed security service providers?) that resell access to expensive tools.

Negotiate with your vendors.

He recommended TrendMicro for IDS/IPS via the AWS Marketplace.

Dispatch: Crisis Management Automation When Everything is On Fire

Marc Vilanova and Forest Monsen from Netflix.

They're security incident responders.

At the beginning of an incident, you know you need to do something, but there are a lot of things you don't know.

Start with an incident commander. Open up a conference bridge or private chat channel. Create an email distribution list. Create shared storage. Invite the right participants. You need to orient the participants. You need access groups to manage access to the resources above. You need to create an investigation document. Setting all of this up manually is exhausting; it requires too much multitasking.

When dealing with an incident, things should be taken care of in a consistent and familiar way.

MTTA = Mean Time to Assemble

MTTS = Mean Time to Stable

Learn from every incident.

They built Dispatch. It's a workflow tool. It automates away a lot of things. It creates and manages resources like docs, channels, etc.

It lets people configure how much they want to be engaged.

It pages people if necessary.

On every security incident notification, there's a button to get involved.

You can handoff the incident or assign roles.

It notifies you about tasks.

Netflix uses Slack, Jira, Google Docs, etc.

They wrote it in Python with the FastAPI framework. They used PostgreSQL and Vue.

It has a plugin system which they used to integrate with Slack, etc.

They forecast how many incidents they'll have each month.

They're open sourcing Dispatch today.

He said Netflix has a "freedom and responsibility" culture.

They use Slack for "chatops".

It's in a separate AWS account from prod.

Managing the Assets of Your Security Career

Kyle Tobener from Salesforce.

He loves strategy board games. That's how he approaches work. How do I win in the most strategic way?

As you grow in seniority, so should your influence grow. To grow your career, you need to grow your influence.

He talked about formal feedback, promos, etc.

The strategies of asset management can help you grow your influence. People are the assets critical to your career growth. Know who you need to influence. Identify gaps in your portfolio. Maintain relationships over time.

When he's talking about assets, he's talking about people.

Identify your "assets". Variety is key.

He talked about getting assets in order to prepare for a promo, even if it's 2 years away.

(Ugh, this reminded me of Google and Twitter.)

Target certain people and work over time on improving their opinion of you.

Monitor your assets by blocking your calendar for 30 minutes every 2-4 weeks.

He's been at Salesforce for 9 years, so he thinks in really long terms.

Collaborate often even when it might be faster solo. Collaboration often creates opportunities that you didn't expect.

Collaborating allows you to trade visibility between your boss and their boss. Leverage your boss as currency in trade for someone else's boss.

Use email filters to your advantage. Filter certain people to the top of the list.

Approach building your influence strategically. Monitor your progress over time. Be consistent. Collaborate more often. Find people to collaborate with outside your organization.

If someone is saying bad things about you, you might send email to say, "I don't think we got off on the right foot. What can we do to improve our relationship?"

[Keynote] What's New or Not in 2020: Are we Making Progress on the Intractable Security Problems?

Larkin Ryder (@larkinryder), CSO at Slack.

She was at Twitter as well. (I recognized her from when I worked there.)

She said this is her favorite security conference. It's local. It's the right size. She likes the people who come.

Malware goes critical:

Stuxnet 2010: $20 million in centrifuges, but the implications defy math. It disrupted Iran's ability to enrich uranium. She recommended the book "Countdown to Zero Day".

WannaCry: Ransomware. $4 billion in damages.

NotPetya: $10 billion in damages. It was perhaps the Russian government trying to destabilize Ukraine.

She showed a list of data breaches. Yahoo's was like 3 times bigger than anyone else's. There was a conviction at the end.

AdultFriendFinder: the breach was motivated by a person making a moral judgement on the people using the site.

Target: the attackers compromised their PoS systems. They got credit cards, etc. Network segmentation is important. A vendor was involved in the breach.

Heartbleed (2014): it was caused by a buffer over-read in OpenSSL's TLS heartbeat implementation. Anything in server memory could be exposed.

Meltdown and Spectre (2017): These happened at the hardware processing level.

EternalBlue (2017): it was the vulnerability behind WannaCry. It had to do with mishandled SMB packets. The NSA knew about this for a long time but didn't disclose it.

She knew about this stuff because she was interested.

Cyberattacks are #5 and #8 on the list of things most likely to cause risk in the future.

The media is also bringing all of this stuff into people's consciousness.


Our perimeter is identity-secured. We have a rapidly evolving perimeter.

People are agreeing to Terms of Service and Privacy Policy documents without reading them and then putting private data into the cloud.

You're bringing more and more vendors into the equation. Vendor risk management is more and more important. We need a better solution for establishing trust.

We're all hosting data on shared environments, so Meltdown and Spectre are problems.

Privacy regulation lends a hand:


CCPA says cloud service providers have to do the right thing for protecting the user's data or they can be sued.

But, privacy is yet another set of security measures that are a little fuzzy. She wishes the standards were more prescriptive.

Bring your own device:

Mobile devices are ubiquitous.

They're great for ease of use. They make 2FA easy. Other benefits: biometrics, token-based access, notifications, apps are fun, you're always connected.

But, they can be lost. She said if a device is lost, it's almost always "left it in a car or left it in a bar".

Only 5-10% of people at the conference are carrying 2 phones.

The company might need to take your device from you to get an image for discovery purposes. Not fun!

Hummingbad was installed on 10 million devices. It's Android malware.

Her husband misdialed the number for his credit union. There was an attacker who answered the phone and asked for a credit card number when he asked to close his account.

Trends: internet of things goes mainstream, malware learns by machine, SCADA comes crashing down.

They're talking about using analog instead of digital for critical systems like power.

Constants (i.e. work that continues):

User behavior matters.

The checklist of the impossible:

Stay patched. Don't click on suspicious links. Never open untrusted email attachments. Do not download from untrusted websites. Everyone violates these at some point.

There are a million machines vulnerable to EternalBlue. Phishing is still a top attack vector.

The checklist of the less impossible:

Avoid inserting unknown USB sticks. Use VPN over public Wi-Fi. Backup your data. An unknown USB stick was the entry point for Stuxnet. There are cloud services for most things like moving files around, VPNs, etc.

Important checklist:

If you see something, say something. Use what I gave you. Customer data is off limits. If you don't understand why I'm creating this friction for you, ask me.

There was an attack that came to light only because they noticed an unoptimized query hitting the DB.

Detection is unexpected.

The most attacked industry is healthcare. It's attacked by organized crime. The motivation is financial.

We're going to get owned. We're not going to know it. Everyone is going to get fired. We're going to rotate to the company to the right. F-that.

How much are stolen things worth?

SSN: $1. Drivers license: $20. Paypal account $20-200. A credit card: $5-110. US passport: $1000+.

(I had no clue US passports were so valuable!)

Detection is a hard problem. How will you know? Start somewhere. Study "normal." Use a red team. Rinse and repeat.

She worked for Bob Lord at Twitter. (I remember him.)

How do we keep going with so much on our plate--an infinite bag of risk? Recognize your burden. Bound your efforts. Lean on the community.

OTR = off the record

If you're not using SSH certificates you're doing SSH wrong

Mike Malone, founder at Smallstep.

Here are his slides.

Certificates have been part of OpenSSH for 10 years, but people rarely use them.

SSH is hard for the user. Operating SSH at scale is a disaster. SSH encourages bad practices.

Re-keying and key removal are also terrible.

TOFU = trust on first use. It's terrible, but everyone does it.

Re-keying the server's key is also a problem.

Stop using simple public keys. Certificates are the answer.

Hosts and clients only need to know the CA's public key.

HKVF = host key verification failure

Facebook, Uber, and Netflix use SSH certificates.

It's actually quite easy, and transitioning can be easy.
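The mechanics are visible with plain OpenSSH. This is a local sketch; real deployments wrap this in tooling like netflix/bless or Smallstep's CA:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# 1. Create a CA keypair (guard the private half carefully in real life).
ssh-keygen -q -t ed25519 -f ca -N '' -C 'demo-ssh-ca'

# 2. Create an ordinary user keypair.
ssh-keygen -q -t ed25519 -f user -N '' -C 'alice'

# 3. Sign the user's public key: identity "alice", principal "alice",
#    valid for 8 hours. This produces user-cert.pub.
ssh-keygen -s ca -I alice -n alice -V +8h user.pub

# 4. Inspect the resulting certificate.
ssh-keygen -L -f user-cert.pub
```

On the server side, pointing `TrustedUserCAKeys` at the CA's public key in `sshd_config` makes any unexpired certificate signed by that CA acceptable, with no per-user authorized_keys files to distribute or revoke.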

netflix/bless looks good.

His company built something. It ties command line usage into browser-based SSO. He called this the "Step Toolchain."

Users don't need to know the details.

It's more secure. It provides better usability and easier operations.

Sans-Serif Rules Everything Around Me

A journey into deception, phishing, the law, and the fortune 1000.

Travis Knapp-Prasek (@tkpsf).

Two URLs that look the same may not be the same.

You can substitute an "I" for an "l".

Look for companies with "l" in their name.

You should register the version of your name with "i" instead of "l".

Displaying things in mixed case in a sans-serif font is the problem.

A law firm accused him of "an unsolicited lesson in internet security".

There are companies monitoring for this, and they send out legal threats.

Use dnstwist to look for permutations of your domain name.

What makes this even worse is that "i" comes before "l", so it gets autofilled first.
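The basic trick can be sketched with a tiny generator. This is a hypothetical, much-reduced version of what a tool like dnstwist does, with a deliberately small confusables map:

```python
from itertools import product

# Hypothetical confusable map; real tools use far larger character sets.
CONFUSABLES = {"l": ("i", "1"), "i": ("l",), "o": ("0",)}

def lookalike_domains(domain: str) -> set:
    """Generate visually confusable variants of a domain's first label."""
    label, dot, rest = domain.partition(".")
    # For each character, allow the original plus any confusable stand-ins.
    choices = [(ch,) + CONFUSABLES.get(ch, ()) for ch in label]
    variants = {"".join(combo) + dot + rest for combo in product(*choices)}
    variants.discard(domain)  # the original spelling isn't a lookalike
    return variants
```

Registering these variants of your own name (as the speaker suggests for "l"/"i") denies them to phishers.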

python3-dateutil is a Python library that had this happen to it. The code was something stealing ssh certs.

This is a very successful phishing attack.

No company paid him for the domains he registered. He only got legal threats.

Using Built-in Kubernetes Controls to Secure Your Applications

Connor Gilbert (@connorgilbert) from HashiCorp.

His slides weren't working for the first 15 minutes, so I don't know what was on them.

Every Chick-fil-A has a 3 node k8s cluster in the restaurant.

The Docker image format wasn't completely new, but it was really important.

Container escapes also are not new. There are a lot of ways to get out if you take down the walls. It's a risk we accept for some business value.

k8s is an orchestrator. Orchestrators are not new.

Declarative, immutable infrastructure is a practice. It's not new.

k8s provides a new attack surface. There's the API (external and internal), the kubelets API, the k8s dashboard, Helm Tiller, etc.

Granular Linux permissions: Linux capabilities, user ID, etc. You can control egress in your k8s .yml file.

k8s comes with certain defaults:

Some are good: no external exposure by default; immutability is encouraged.

Some are bad: it's trivial to run as root; any pod can talk to any other pod; it uses a writable root file system; there is no seccomp applied.

k8s is not secure by default.
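Several of those bad defaults can be flipped per container in the pod spec. A hedged sketch (field names follow current Kubernetes; the image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true                  # refuse to start as UID 0
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true        # no writable root filesystem
        capabilities:
          drop: ["ALL"]                     # shed all Linux capabilities
        seccompProfile:
          type: RuntimeDefault              # apply the runtime's seccomp profile
```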

k-rail is useful.

k8s has really useful building blocks, but there's more work to be done in the details.

From cockroaches to marble floors: What happens when you turn on the lights?

Daniel Tobin and Paul Karayan.

The global cybersecurity workforce shortage is set to reach 1.8 million as threats loom larger and the stakes rise.

He mentioned: "Berkeley Cybersecurity Boot Camp: Learn Cybersecurity in 24 weeks."

The birth of "InfraOps".


Let's expand security out.

QA can get engaged.

We've been saying for years that we need security advocates.

Infrastructure as Code was a big win.

Security already operates as consultants.

Diversity is key to winning. Find "the goats" in the org, and entice them onto the team.

Remove the gates. They aren't working.

Sysadmins were putting up a ton of gates. A lot of startups don't even have sysadmins anymore.

The term "security bug" creates a false dichotomy.


Set up metrics across teams in order to gamify the system--"Alignment: It's Nice!"

Goal 1: Build pipeline "health". These include Mean Time To Resolution and Mean Time To Failure. The number of issues and the bug severity were downplayed significantly.

Goal 2: Codification. Increase the number of "rules as code".

He's a huge fan of E2E tests, especially Cypress. "What is the app supposed to be doing?"

Environment provisioning and setup are important. The faster you can spin up an environment, the more people will do something with that.

Fuzzing FTW!

Also, QA should use static analysis.

Use codebase meta-analysis. I.e. treat your codebase as a crime scene.

If you teach a QA tester how to use Burp Suite, their eyes light up.

Reduce the number of attack/failure surfaces.

Use "modern" exploratory testing tools like Burp.

He mentioned "DevSecOps" which he also called "InfraOps".

2FA in 2020 and Beyond

Kelley Robinson (@kelleyrobinson) from Twilio.

"We're so pwned that not being pwned indicates a potential abnormality."

There are so many data breaches that if an email hasn't been leaked, it may not be legit.

People still use the password "123456."

Twilio acquired Authy 5 years ago.

Account takeover is a $4 billion problem.

2FA means having two of the following: knowledge, possession, inherence (like facial ID).

SMS one-time passwords: they're the easiest when it comes to onboarding. Users are very familiar with them. Problems: SS7 (Signalling System 7) attacks (i.e., impersonating a carrier) and SIM swapping (i.e., the attacker calls up the phone company and either bribes an agent or tricks an agent).

Soft Tokens (TOTP): they're based on symmetric key crypto. They're available offline. They're an open standard. Problems: an app install is required. The UX involves expiration. They're a good option, but they're not perfect.
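The open standard she's referring to is RFC 6238 (TOTP) layered on RFC 4226 (HOTP); the whole scheme fits in a few lines of stdlib Python. A sketch for illustration, not a production implementation:

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at: Optional[float] = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP where the counter is the current 30-second window."""
    t = time.time() if at is None else at
    return hotp(secret, int(t // step), digits)
```

The "available offline" property falls out directly: both sides derive the same code from the shared secret and the clock, so no network round trip is needed.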

Pre-generated codes: they're easy to use. Problems: storage. They don't "feel" secure. It's a good backup option, but it's less practical for ongoing use.

Push authentication: they're good because the user interaction is done in the context of some user action. If the user denies the push, that's good feedback. They're based on asymmetric key crypto. They're device and user specific. There's very low friction--maybe too low. Problems: a proprietary app is required. Google is using this.

U2F (universal second factor) / WebAuthn: like YubiKeys or Google Titan keys. They cost $50. They're phishing resistant. They make use of assymetric key crypto. They're an open standard. Problems: distribution and cost. Also, they're new technology.

She talked about the study, "A Usability Study of Five Two-Factor Authentication Methods."

U2F & Push have the fastest median authentication times.

TOTP scored the highest in terms of system usability for a 2nd factor.

SMS 2FA is still better than no 2FA. It blocks 100% of automated bots, 96% of bulk phishing attacks, and 76% of targeted attacks.

Push authentication blocks 100% of bots, 99% of bulk phishing attempts, and 90% of targeted attacks.

Only 1% of Dropbox users have turned on 2FA.

2FA had a massive bump because Fortnite gave their users an incentive to turn it on.

Delight your most security conscious users, and provide options for the rest.

"When we exaggerate all dangers, we simply train users to ignore us."

Security Learns to Sprint: DevSecOps

Tanya Janca (@SheHacksPurple), security trainer and coach.

Security is everybody's job.

It's the security team's job to enable the rest of the company to do things more securely.

DevOps + SecOps = DevSecOps

AppSec = any and every activity that you perform to ensure that your software is secure. -- That's her own quote.

Web App attacks are the most common cause of breaches.

Security is outnumbered.

Here's the ratio: developers 100 / operations people 10 / security engineers 1

Waterfall never worked well for security.

DevOps has some benefits: improved deployment frequency (security emergencies can be fixed NOW) (she mentioned that she worked somewhere that took 16 months to deploy something), lower failure rates, faster time to market (security doesn't win if the business doesn't also win).

DevOps is the best thing to happen to AppSec since OWASP.

"I love OWASP. I don't just like it."

The 3 ways of DevOps: emphasize the efficiency of the entire system, make sure we get feedback as soon as we possibly can, and continuous learning (experimentation, risk taking, etc.).

"I need [i.e. want] really big security bugs to break your build."

Security must not be a bottleneck. If they want to release 20 times a day, we need to be able to deal with that.

You can create more than one pipeline. You can have a pipeline that's really slow that doesn't block developers because it's running asynchronously.

Write your own code libraries for your business's specific needs.

Be creative. Do anything you can think of to help devs and ops to get their job done more securely.

Give people feedback.

She talked about a team that had some filtering rule that automatically took reports from some other team and threw them away :(

It's a useful trick to automatically watch what's going on and then lock things down based on that.

Negative testing ensures that your app can gracefully handle invalid input or unexpected user behavior.

If you've had a pentest done, turn those findings into unit tests.
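As an illustration of turning findings into negative tests, here's a hypothetical input validator plus assertions that hostile payloads stay rejected (the validator and payloads are made up for the example):

```python
import re

def safe_username(name: str) -> bool:
    """Hypothetical validator: 3-32 characters of letters, digits, underscore."""
    return re.fullmatch(r"[A-Za-z0-9_]{3,32}", name) is not None

# Negative tests: malformed or hostile input must be rejected.
HOSTILE = [
    "",                                  # empty
    "ab",                                # too short
    "x" * 33,                            # too long
    "<script>alert(1)</script>",         # XSS payload
    "alice'; DROP TABLE users; --",      # SQL injection payload
    "../../etc/passwd",                  # path traversal
]
for bad in HOSTILE:
    assert not safe_username(bad), bad

# And a positive case, so the validator isn't rejecting everything.
assert safe_username("alice_01")
```

Once a pentest finding is encoded this way, a regression of the same bug breaks the build instead of waiting for the next pentest.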

If someone can't take the time to go to your security training, that's a serious problem.

Give training whenever you can.

If one team has a problem, assume the rest of the teams might have it as well.

Emphasize the most important things.

She's a huge fan of security exercises.

No more blaming. Stop pointing fingers.

Create security champions.

Enable, teach, automate, and provide feedback.

Resources: #OWASPlove, WoSEC (Women of Security), #MentoringMonday, and @SheHacksPurple

Phishy Little Liars - Pretexts That Kill

Alethe Denis (@alethedenis)

She talked about developing a pretext based on intelligence you've gotten from open source contributions, LinkedIn, etc.

She used the term "OSINT", and she talked about building rapport.

Pretext: something you are, something you do, something you have, something you need, etc.

Why does it fail? The number one reason is lack of confidence. Keep it simple; don't overwhelm them with facts about who you are. You need to have the knowledge to back up what you're saying. Know enough to act like a member of their tribe.

She starts with Glassdoor. Then, she goes to LinkedIn. Investigate employee profiles. Then Google Dorks, OSINT, etc.

Glassdoor isn't the only place. There's also Indeed, Great Place to Work, etc.

Always dig into the photos. What's behind the people?

Don't forget video. Jobs pages have a ton of stuff on them these days.

Geo-tagged Instagram photos are her favorite.

When people comment on things, that can help a lot.

Look for soft targets.

She was going to pretend that she was with a charity, and then she was going to go after the charity manager at the company.

Who am I? Who they are? You need pretext details.

There are "authority-based" and "empathy-based" pretexts.

Have everything handy for when you're talking to them.

What are your motivations?

Build rapport quickly.

Don't script yourself.

Be ready to pivot. If they're soft, perhaps you can extract more.

She participated in the SECTF (Social Engineering CTF) at DEF CON. She won the black badge.

In one phone call, she told the lady, "I sent you an email. Did you receive it?" It made the woman feel bad, so she was more willing to let down her guard. She called this an empathy pretext.

Companies need to lean into training and awareness.

Avoid having people get the gut feeling that something's wrong.

It shouldn't take you more than a couple of hours to do your research. She worked hundreds of hours preparing for DEF CON.

RIS-ky Business: Exploiting Medical Information Systems

Jacob Brackett, an AppSec engineer at One Medical.

Medical devices are a hot topic in the security world right now. They're a real attack vector.

This talk is not about medical devices.

It's about going after the information systems.

An X-ray machine might have a device acting as an intermediary between the medical device and the internet, connecting it to a shared medical imaging cloud.

DICOM = Digital Imaging and Communications in Medicine

It's a standard. It's a file format as well as a network protocol.

PACS = Picture Archiving and Communication System

RIS = Radiology Information System

Modality = X-ray, CT, etc. device

The DICOM Gateway is either a Windows machine or something like a Raspberry Pi.

DICOM over TLS is pretty common.

C-FIND = query/retrieval service

C-STORE = upload DICOM data

He explained the protocol.

Authentication isn't built into DICOM.

pydicom is a Python library for working with DICOM.
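pydicom handles full DICOM datasets, but even without it you can recognize the outer file structure using only the standard library: a DICOM file starts with a 128-byte preamble followed by the ASCII magic bytes "DICM". A minimal sketch (the synthetic in-memory file is just for illustration):

```python
import io

DICOM_PREAMBLE_LEN = 128
DICOM_MAGIC = b"DICM"

def looks_like_dicom(stream):
    """Check for the 128-byte preamble followed by the 'DICM' magic bytes."""
    header = stream.read(DICOM_PREAMBLE_LEN + len(DICOM_MAGIC))
    return (len(header) == DICOM_PREAMBLE_LEN + len(DICOM_MAGIC)
            and header[DICOM_PREAMBLE_LEN:] == DICOM_MAGIC)

# Synthetic example: a zeroed preamble, the magic, then (fake) dataset bytes.
fake = io.BytesIO(b"\x00" * 128 + b"DICM" + b"rest-of-dataset")
print(looks_like_dicom(fake))  # True
```

With pydicom, `pydicom.dcmread(path)` would parse the data elements that follow the magic.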

AE Titles are the main form of authentication. They're up to 16 characters, and they're usually human readable.

Sometimes, servers forget to check them at all. Some Australian hackers downloaded all the images via DICOM.

A DICOM file is essentially a JPEG with extra headers.

You may not realize that you need to verify the data in a DICOM file. You could be susceptible to XSS from the DICOM file.

There's a newer version of the DICOM standard. It supports Kerberos service tickets or SAML assertions.

Understand what protocols are on your network. Make sure your tools can understand DICOM.

Just don't store patient data in your logs.
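One way to enforce that is a scrubbing filter on the logging pipeline. A minimal sketch using Python's standard `logging` module; the redaction patterns are illustrative (a real deployment would match its own identifier formats):

```python
import logging
import re

class PHIRedactingFilter(logging.Filter):
    """Redact obvious PHI patterns before a record is emitted."""
    PATTERNS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),  # US SSN shape
        (re.compile(r"(?i)patient[_ ]?name=\S+"), "patient_name=[REDACTED]"),
    ]

    def filter(self, record):
        msg = record.getMessage()
        for pattern, repl in self.PATTERNS:
            msg = pattern.sub(repl, msg)
        # Replace the message in place; args are already interpolated.
        record.msg, record.args = msg, None
        return True

logger = logging.getLogger("ris")
handler = logging.StreamHandler()
handler.addFilter(PHIRedactingFilter())
logger.addHandler(handler)
logger.warning("upload failed for patient_name=Doe^John ssn 123-45-6789")
# Logged as: upload failed for patient_name=[REDACTED] ssn [SSN]
```

Scrubbing is a backstop, not a substitute for simply never passing PHI to the logger in the first place.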

PHI = protected health information

An Effective Approach to Software Obfuscation

Yu-Jye Tung @YellowbyteRE

Software obfuscation = a set of software protection mechanisms through program transformation (either source-level, compilation-level, or binary-level) that makes the corresponding executable binary more difficult to analyze without changing the program's core functionality (i.e., intended observable behavior).

Potency = how well the transformation protects against manual analysis.

Radare2, Ghidra, IDA Pro, and Binary Ninja are disassemblers. There's also GDB (the debugger).

Resilience = how well the transformation protects against automated analysis.

He mentioned Angr, BINSEC, etc.

Stealth = how well the transformation protects itself against being detected.

Software obfuscation != cryptography

The goal of the transformation is to make it more time consuming to analyze.

More time consuming = more frustrating for the analysts. The goal is to make the analysts give up.

There's a deobfuscation process: identify the obfuscation technique and then perform the relevant deobfuscation steps. Potency and resilience are most relevant here.

Modern obfuscation techniques are noisy. They're easy to identify because they have low stealth.

He mentioned "Control-Flow Graph (CFG) Flattening". Per the Jscrambler docs:

Control Flow Flattening aims to obfuscate the program flow by flattening it. To achieve this, the transformation splits all the source code's basic blocks — such as function body, loops, and conditional branches — and puts them all inside a single infinite loop with a switch statement that controls the program flow. This makes the program flow significantly harder to follow because the natural conditional constructs that made the code easier to read are now gone.
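The transformation described above can be sketched in miniature. This is a hand-flattened toy in Python, not what Jscrambler or a compiler-level obfuscator emits; real tools split machine-level basic blocks, but the shape of the result is the same: one dispatcher loop, with the original control structure encoded in a state variable.

```python
# Original control flow: the loop structure is visible in the code.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

# The same logic "flattened": every basic block becomes a case in a
# dispatcher loop, so the natural loop shape disappears.
def gcd_flattened(a, b):
    state = 0
    while True:
        if state == 0:            # loop-condition block
            state = 1 if b else 2
        elif state == 1:          # loop-body block
            a, b = b, a % b
            state = 0
        else:                     # exit block
            return a

print(gcd(48, 18), gcd_flattened(48, 18))  # both print 6
```

An analyst reading the flattened version has to reconstruct which state transitions are possible before the original loop becomes apparent.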

There are various shapes in the assembly that indicate high-level programming constructs.

If you transform things, you can get rid of the shapes.

Jscrambler can do control flow flattening so that all the control flows look the same.

There are also tool-specific, non-generic deobfuscation techniques.

He mentioned Quarkslab.

He suggested that instead of making the obfuscation harder to break, make it more stealthy.

He talked about "inconspicuous obfuscation". If analysts aren't aware of what was obfuscated, they make the wrong assumptions and fall deeper into a rabbit hole. Only stealth can achieve this.

"Disassembly desynchronization" is an umbrella term for software obfuscation techniques whose main goal is to degrade the accuracy of the dissassembly when you try to disassemble the code.

Opaque predicates are conditional branches whose outcome is always true or always false (known to the obfuscator, but not obvious to the analyzer). They allow you to insert junk bytes in the never-taken branch.
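A toy sketch of the idea in Python (in real binary obfuscation the dead branch holds junk bytes that desynchronize the disassembler; here the dead branch just holds code that never runs):

```python
def opaque_branch(x, secret):
    # x*x + x = x(x+1) is a product of two consecutive integers, so it
    # is always even: the predicate below is always True. The obfuscator
    # knows that, but a naive static analyzer must consider both branches.
    if (x * x + x) % 2 == 0:
        return secret + 1            # the real computation
    else:
        return secret ^ 0xDEADBEEF   # dead branch, never executed

print(opaque_branch(7, 41))  # 42
```

Tools like BINSEC can sometimes prove such predicates constant, which is why the talk argues for making them stealthier rather than just mathematically harder.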

He mentioned BINSEC and OpaquePredicatePatcher.

He showed a lot of assembly.

He talked about when IDA gets things wrong.

He talked about different bytes being interpreted in different ways (junk bytes vs. actual instructions). He called these "overlapping instructions". The code fooled IDA and IDA showed the wrong code. That was pretty crazy.

Potency, resilience, and stealth are all important, but too often, no one is focusing on stealth.

It's amazing that you can get IDA to disassemble things in the wrong way.

Closing remarks

People got to vote for which charity to support: EFF raised $2k, Hackers for Charity raised $1k, The Sisters of Perpetual Indulgence raised $500.

There were a lot of volunteers running this event.

There are a bunch of people who volunteer to help out year round.

