Security: BSidesSF 2022

Opening Remarks

The theme this year is "from the ground up". They're focusing on community, collaboration, and education.

It's a 100% volunteer team. 25 people work year-round.

They had speed mentoring sessions.

They really need some new volunteers. See bsides.sf/jobs.

The talks will be on their YouTube channel.

They have a stringent photo policy. You must have the permission of everyone in the frame, and crowd shots where you can see faces are strongly discouraged.

Here is the schedule, and here is their YouTube channel.

[I wrote these notes by hand and then transcribed them in a single day. I didn't quite expect them to be so voluminous! Happy hacking!]

Keynote: We Need More Mediocre Security Engineers

Jackie Bow (@jbowocky) from Asana.

[This was my favorite talk.]

She pointed out that BSidesSF was the last in-person conference that a lot of us attended before the pandemic. That was true for her, and it was for me as well.

She's held many jobs in security, including malware reverse engineering, which is one of the most hard-core jobs you can have in security.

She's worked for Facebook, for the government, etc.

She said that ClamAV is still the best open-source antivirus software there is.

One time, she added a virus signature to ClamAV but forgot to add the trailing newline. This broke Facebook Messenger in production for 1-3 hours.

Important: 82% of breaches involve a human element.

We expect each other to be perfect in security. We're not.

She said, "Have you read InfoSec Twitter? Ugh!"

Important: Extreme expectations lead to burnout, not excellence.

More != better.

Burnout is in the standard classification of mental disorders. "Burnout has been defined as a combination of emotional exhaustion, depersonalization, and reduced personal accomplishment caused by chronic work stress" (cited).

Unfortunately, our work predisposes us to burnout, but we have to avoid burnout if we hope to do this career for a long time.

Consider COVID-19, Log4j, the Colonial Pipeline hack, SolarWinds, supply chain attacks, Ukraine, etc.

The SolarWinds incident shook her deeply because she really respected FireEye.

There are currently 600k people in security. It's expected that there will be two million open roles. How are we going to add a million new people to the field?

She referred to Stuxnet.

Our current burn rate is unsustainable.

We need to dismantle our concept of a security unicorn.

We need to see each other as allies. We need to stop overworking. We need to change who we think is hirable.

We're too elitist. That's bad.

We expect people to know everything. [That's something I'm struggling with as I prepare to interview.]

We can't scale as solo individuals.

We need to drop the l337 hacker stuff.

Social isolation and loneliness [which I know all too well] increase the likelihood of early death by 25-30%. It's equivalent to 15 cigarettes a day.

Elitism is the enemy of diversity.

Only 24% of people in security identify as women.

She used to work on reverse-engineering malware. That's one of the most technical jobs you can have in security. Now, she feels like a dinosaur because of all this SaaS software, CSRF, etc.

We end up being expected to always be on.

Important: She called it the "wheel of reactive hell".

There's always more work to do.

Glorifying overworking hurts us all.

She talked about her kid asking her at 7 PM how much more work she had to do. [I literally started tearing up when she said that because the exact same thing had happened to me the day before.]

How often do we get to take a vacation longer than a week?

Vacations are hugely important.

You can be a great security engineer and still have hobbies--even non-security ones!

We need to bridge the "talent gap."

We're looking for unicorns. We need to stop that.

We need to see degrees as a privilege.

We should look at education as something that should happen once you're already in the industry.

There is no agreed-upon value for boot camps or certs.

We need to offer education as a benefit.

We really don't know how to hire for cyber security roles.

We still demand CS degrees, and that's bad.

Your job should pay for you to do boot camps, certs, etc.

At our current rate, we'll burn out before the pipeline fixes itself.

We need to dismantle the unicorn.

We need to challenge our perceptions of who belongs in this industry to achieve a more diverse workforce.

An Unlikely Friendship: Why Security Engineers and Product Managers Should Be Working Together

Leif Dreizler (@leifdreizler) and Rachel Landers (@workingrach) from Twilio Segment.

Segment was acquired by Twilio.

He's an engineering manager, and she's a PM. Their team worked on building security-related features.

Segment is a customer data platform.

They use TypeScript and Go.

Enterprise customers have very high bars. They're very demanding and noisy.

They mentioned LocoMocoSec, which is a security conference in Hawaii.

SecEng = security engineering

Netflix has a great security team. They had this idea that the paved path should lead people to do things securely.

They talked about a self-service approach to security.

He talked about the different sides of security. Application security was on his list.

[It's amazing how similar their team is to the team I worked on at Udemy.]

Their first feature was a password strength meter built using zxcvbn and Have I Been Pwned.
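This isn't their code, but the Have I Been Pwned side of such a meter can be sketched using HIBP's k-anonymity range API, where only the first five hex characters of the SHA-1 hash ever leave the client (the function names are my own):

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    # Only the 5-character prefix is sent to api.pwnedpasswords.com/range/<prefix>;
    # the 35-character suffix stays local. That's the k-anonymity trick.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    # The API answers with "SUFFIX:COUNT" lines for every hash sharing the prefix.
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

A client would combine the returned count with a zxcvbn score to drive the meter.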

Next, they tackled MFA.

The biggest feature they tackled was integrating with SCIM. [Our team didn't do that one. The UB team did that one.]

You can use SCIM to integrate with Okta or Azure Active Directory to provision users in your app. It's a system for cross-domain identity management.

PDLC = product development lifecycle. [We used the term SDLC.]

Ask yourself, why is this the right time to build this feature?

IdP = identity provider

Okta groups were mapped to Segment groups.

SDD = software design doc

You should "always be selling". The SDD should spend a little bit of time convincing people why it's a good idea to build this feature.

The PM owns what and when.

The engineering manager owns how and when it'll be done.

Important: "Weeks of programming can save you days of planning."

SCIM is basically CRUD for users and groups.

He mentioned RFCs 7642, 7643, and 7644.

When you have to implement query filtering, use a library.
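As a sketch of that CRUD shape (not Segment's implementation): RFC 7643 defines the User resource, and RFC 7644 maps the operations onto HTTP routes, roughly like this (all values invented for illustration):

```python
# A minimal SCIM 2.0 User resource (RFC 7643).
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "active": True,
}

# The CRUD operations an IdP like Okta performs against your app (RFC 7644).
crud_routes = {
    "create":  ("POST", "/scim/v2/Users"),
    "read":    ("GET", "/scim/v2/Users/{id}"),
    "replace": ("PUT", "/scim/v2/Users/{id}"),
    "update":  ("PATCH", "/scim/v2/Users/{id}"),
    "delete":  ("DELETE", "/scim/v2/Users/{id}"),
    # Query filtering (the part where you want a library):
    "filter":  ("GET", '/scim/v2/Users?filter=userName eq "jdoe@example.com"'),
}
```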

Read the onboarding docs for each of the IdPs.

Build the integration with the IdPs.

He gave Okta props for how smoothly the process went. It took OneLogin almost a year to accept their integration.

Enterprise software has a bigger focus on security than consumer-facing software.

1/3 of their customers who use SSO use SCIM.

ARR = annual recurring revenue.

The customers they have that use SCIM account for 21% of their ARR.

Defaults matter a lot!

Lunch

first.org shares incident response data.

I talked to some security journalists who piece together news about incidents.

BoringSSL and libsodium are examples of tools that are simple, easy, and useful.

OpenSSL is pointlessly and hopelessly complex.

Code Red Partners is a recruiting firm that focuses on security professionals.

Embracing Risk Responsibly: Moving beyond inflexible SLAs and exception hell by treating security vulnerabilities and risk like actual debt

Eric Ellett from Twilio Segment

We need to embrace innovation to get away from having a dumpster fire of a security program.

Start by buying some time with solutions that are "good enough".

Identify and engage with critical customers (which are people inside your company that your security team has to work with).

He talked about an example where the AppSec team asked a service to fix a P1 issue reported via a bug bounty program.

He talked about creating metrics for closing vulns.

When you're working on a v2 of your program, rebuild the foundation with data. Now you have some time to build a proper foundation.

He talked about sending formal emails asking people to fix their vulns. A key part of these emails was that they had a due date based on the severity. This due date was possible to extend.

Attributing vulns to teams was hard because of the constant org changes.

They tied vulns to divisions and departments.

They rolled the data up the org chart to enable competition across the company for who could fix their vulns the most quickly.

At this point in your program, you can start experimenting strategically.

There are different risk appetites in different parts of the org.

He referred to Google's SRE book. He talked about SLIs, SLOs, and SLAs. In particular, he referred to chapter 3 on embracing risk.

Important: The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room with armed guards - and even then I have my doubts. -- Gene Spafford

He talked about error budgets.

For an SLO, he talked about uptime per quarter.

Perfect security and reliability is not the goal--it's too expensive.

Important: They created a debt metric: debt = (current_date - orig_date) / sla_in_days

The higher the priority, the shorter the SLA in days.

So, if the priority says you have to get it fixed in a day, every day you slip, you're increasing your debt by 1. However, if the priority says you have to fix it in a month, then it takes a whole month for you to increase your debt by 1.
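That arithmetic, as a sketch (the dates and SLAs here are invented):

```python
from datetime import date

def vuln_debt(orig_date: date, sla_in_days: int, current_date: date) -> float:
    # debt = (current_date - orig_date) / sla_in_days
    return (current_date - orig_date).days / sla_in_days

# A 1-day SLA accrues a full unit of debt per day...
p1 = vuln_debt(date(2022, 6, 1), 1, date(2022, 6, 6))   # 5.0
# ...while a 30-day SLA takes a whole month to accrue one unit.
p3 = vuln_debt(date(2022, 6, 1), 30, date(2022, 6, 6))  # ~0.17
```

Summing per team or division gives the organizational rollup he described.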

As he mentioned before, this debt can be calculated and rolled up organizationally.

You can break down the debt in different ways.

He mentioned Snowflake.

He said that prioritizing work based on a debt metric is more helpful than prioritizing based on severity alone.

They even integrated the debt metric into CI.

He said that Segment's security program is further ahead than the rest of Twilio's.

At Segment, they're not yet tackling the P4s and P5s. They're too noisy right now.

He said that compensating controls frequently lower the CVSS score, which lowers the priority.

He talked about using Backstage for code asset management--i.e. which team owns the code with the vuln.

They're moving from VMs to k8s.

Buying Security: A Client's Guide

Rami McCarthy (@ramimacisabird)

He called himself a "reformed security consultant".

Buying security services is hard.

The security industry is a $100 billion industry.

Let's talk about security assessments. This is a comprehensive guide on buying and getting value.

He mentioned some survey that talked about buying and selling security.

He mentioned a talk from 2011 called "Penetration Testing Considered Harmful."

Important: Consider the question: Is a particular pentest good? The answer lies along a scale that goes from "it's bad" to "you don't know."

White box tests are now dominant. They're more efficient and more thorough.

Don't compare a pentest to a bug bounty program.

Don't fall for a dressed-up Nessus scan.

There are different motivations for getting a security assessment. Risk reduction is the number one reason. The second most common reason is compliance.

There are different types of vendors.

It's hard to know if a vendor is good. Network recommendations are helpful.

Be careful about how much time you give the vendor. Keep in mind Parkinson's law.

Know your scope.

Gather 3-5 proposals.

When your goal is compliance, the pentester has to strike a balance between providing value vs. actually enabling you to pass.

Your own sales clients might tell you who to use since they might have customers asking for proof of compliance via specific vendors.

The vendor will help you further refine your target scope. You have to home in on clear objectives and the length of the engagement. These will affect the cost.

Surprisingly, different vendors will come back with very different quotes.

Fast, good, and cheap--pick 2. In security, it's more like pick 1.

Be skeptical of cheap proposals and consultants.

There's lots of paperwork involved: NDA, MSA, SOW, etc.

Cure53 actually made their paperwork public.

Show the pentesters your known risks, your threat models, etc. This will help them.

Don't waste their time by leaving in obvious, known vulnerabilities, forcing them to go through your WAF (just let them through), or by giving them an incomplete environment that is missing important data to be useful.

Their reports are decomposed and sent to different teams. There is usually an executive summary vs. a section with the nitty-gritty details.

A lot of people like getting an overall score or grade.

Make sure the vendor cleans up after themselves. He saw a case where one vendor left an open shell, and then another vendor found it.

Remember, no findings != no risk.

Do root cause and variant analysis.

Assessments are an expensive way to find vulns.

For each vuln, you need to fix, mitigate, or accept the risk.

Remediate the vulns. Don't just leave them there to be found by the next pentester.

Do a retro after you're done.

You can use canary bugs to see if they're actually doing their job.

Consider your pentesting cadence: Once a year? Once every six months?

Think about the ROI.

Don't kill bugs. Kill bug classes.

Emerging Best Practices in Software Supply Chain Security: What We Can Learn from Google, the White House, OWASP, and Gartner

Tony Loehr from Cycode

He talked about Google's SLSA and NIST's SSDF. These are AppSec frameworks.

By 2025, 45% of orgs will experience an attack on their supply chain.

Executive Order 14028, on improving the nation's cybersecurity, included text complaining about the opaqueness of commercial software.

It talked about five objectives: protect, confidentiality, identify (SBOM), rapid responses, and training.

Important: 80% of incidents involve a known vuln that hasn't been patched.

He spoke more about Google's SLSA framework.

Level 4 requires a two-person review of all changes as well as hermetic, reproducible builds.

SSDF covers what. SLSA covers how. There are still some gaps.

He mentioned Terraform.

He mentioned least-privilege access.

He mentioned anomaly detection.

Avoiding insidious points of compromise in infrastructure access systems

Sharon Goldberg is the CEO/Co-Founder of BastionZero and is also a tenured professor in the Computer Science Department at Boston University.

[I was very impressed by her creds. I don't want to start any rumors, but I'm pretty sure I overheard that at night, she's a vigilante crime fighter, and she likes to fly fighter jets for fun :-P ]

She focuses on infra-access systems.

She wanted to do a detailed breakdown of some war stories.

Act 1: Standing credentials, VPNs

Act 2: Zero Trust

Act 3: Weaknesses in Zero Trust.

She started by talking about bastion hosts.

She talked about Fluffi Bunni from 2001. This compromise involved a fake SSH client that stole passwords from compromised users. Even the bastion was infected. However, it wasn't able to steal SSH key passphrases.

Lesson: Don't give users standing credentials, especially passwords. Use MFA.

Next up, she talked about VPNs.

She talked about Operation Aurora from 2009. It was a Chinese APT breaking into Akamai. There was a zero-day in IE that allowed the attacker to compromise the entire machine.

Amazingly, the adversary had a very long dwell time, i.e. they went undetected for a very long time. They were able to move laterally, behind the VPN.

Their goal was to get to the source code.

Akamai didn't even know they were inside. Finally, Google took over some C&C server and told Akamai about the ongoing attack.

Lesson: Don't trust people just because they're on a secured network. That's the idea behind Zero Trust.

Akamai also wasn't segmented very well at the time.

Lesson: Segment!

Next, she talked about single-level domain administration such as Active Directory Admin Server.

She talked about an article named "NotPetya Ransomware" from 2017 that she said was great. She called it a watering hole attack. That's where you hack something and then wait for people to interact with it. In this case, it was Ukrainian tax software.

Once they were able to steal one credential, they were able to get to all the other machines. The result was that computers were bricked. They literally had to be thrown away.

She said we too often rely on a privileged system--a system locked down with a single cred.

Lesson: Vet your supply chain.

Act 2: Zero Trust

When it comes to remote access, don't trust the user just based on their network address. Don't rely on long-lived creds.

She talked about some situation involving a certificate authority, an SSO provider, and a proxy. She talked about an x509 certificate or a SAML token.

She talked about DigiNotar from 2011. She said the incident involved blindly trusting a CA. She said that in her mind this is one of the top 5 incidents of all time.

Some CA was hacked. The hacker created a certificate for Google, and they used it to snoop on Google's traffic.

We later created certificate transparency, etc.

Next, she covered SolarWinds from 2020. She said the problem here was blindly trusting SSO too much [uh oh].

She showed two architectures. In one architecture, MFA would not have helped. She said that if MFA were separated from the SSO provider, it'd require a second point of compromise.

Lesson: Users get hacked. Access systems get hacked.

She recommended reading some article that talked about DigiNotar getting hacked. [Perhaps this one?]

Red Teaming macOS Environments with Hermes the Swift Messenger

Justin Bui (@slyd0g)

He's a red teamer at Zoom. He's also a skateboarder.

He talked about the benefits of the Swift programming language and the Mythic framework.

He talked about the benefits of using Swift as a post-exploitation language. It now runs on Linux and Windows too.

Swift can interoperate with C, C++, and ObjC.

On macOS, Swift is not installed by default, but the libraries are.

There are several languages used for post-exploitation on macOS: JXA, Python, and Golang are common.

JXA has been abandoned. Apple said that Python and other scripting languages are deprecated and will be removed. [I noticed it's no longer present on macOS 12.4 Monterey.]

He said Golang is fantastic. It too can interoperate with C, C++, and ObjC. It does result in big binaries, though.

By using the swift command, you can circumvent the app whitelist. However, it's not installed by default.

Mythic is a cross-platform, post-exploit, red teaming framework built with python3, docker, docker-compose, and a web browser UI. It has a C&C server.

He talked about how the implant agent calls back from the victim.

There are payloads to target macOS.

He kept talking about LOLBins.

[I didn't know what a LOLBin was. Per this page, LOLBins is the abbreviated term for Living Off the Land Binaries. Living Off the Land Binaries are binaries of a non-malicious nature, local to the operating system, that have been utilized and exploited by cyber criminals and crime groups to camouflage their malicious activity.]

He said that Python and Swift are LOLBins.

Hermes is a Swift payload for the Mythic framework. He's the author.

The Mythic framework makes use of encrypted key exchange in order to encrypt the traffic between the victim and the C&C server.

Hermes has various modules for post-exploitation.

By using the Mythic framework, he only had to worry about writing code for the implant side.

He didn't want to force developers to use Macs. He said that setting up cross-compilation was the hardest part of the project.

Darling is a macOS emulation layer for Linux. It's like Wine, but for macOS. Darling relies on a Linux kernel module.

He talked about the "operator" who was controlling the C&C server.

Each job is a separate thread allowing you to run things in parallel.

He showed Mythic's web UI. You can upload files to and download files from the victim host from your browser. It can also capture screenshots of the user's browser.

It has clipboard monitoring too. Note that root doesn't have access to the clipboard [weird!]. He talked about nabbing passwords when people copy and paste them.

He talked about a time when his co-worker reverse-engineered some malware to steal some techniques.

plist files can be XML, JSON, or binary.
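Python's standard plistlib handles the XML and binary forms, which makes it handy for poking at things like launch agents. A minimal round-trip, with invented values:

```python
import plistlib

# A launch-agent-style property list (real launchd keys; invented values).
agent = {"Label": "com.example.agent", "RunAtLoad": True}

xml_bytes = plistlib.dumps(agent, fmt=plistlib.FMT_XML)
binary_bytes = plistlib.dumps(agent, fmt=plistlib.FMT_BINARY)

# loads() auto-detects the format, so both round-trip to the same dict.
assert plistlib.loads(xml_bytes) == plistlib.loads(binary_bytes) == agent
```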

He keeps focusing on using techniques to snoop on what the user is doing.

Apple has an Endpoint Security Framework. 3rd-party developers got "pushed out of the kernel". Because of this, hackers and security software now have equal footing.

Attackers can use launch agents to achieve persistence.

It reminded me of spy vs. spy.

Opening Remarks for Day 2

The summary of the Code of Conduct is, "Do not be an ass, or we'll kick your ass out!"

Keynote: Building sustainable security programs

Astha Singhal, Director of Security, Netflix

She too talked about InfoSec burnout.

This is a job where you never win.

These are the contributing factors:
  • Constant firefighting: She referred to Log4J.
  • Security cynicism
  • Culture of catastrophizing
  • Possible vs. probable
  • Personal responsibility
  • Ridiculous and impossible
  • Ongoing conflicts with stakeholders
  • Changing threat landscape
  • We're never done
  • There are never enough things in the wins column: Only one thing needs to go wrong for bad things to happen.
That's a lot!

She talked about organizational culture.

We need to disrupt security cynicism.

We need to discourage heroics and instead celebrate long-term wins. Proactive investments are better.

Culture takes intentionality.

Build "additive" teams--where each new person adds something unique to the team.

At one point, all the members of her team were AppSec engineers. They've expanded.

Build an environment of empathy and collaboration.

Keep in mind business enablement and customer service.

Consider things from a risk perspective. Our job is to manage risk.

risk = likelihood * impact

Help other security engineers think about risk as well.

Don't forget about probability or likelihood. Don't overfocus on things that have extremely high impact but very little likelihood.
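That tradeoff is easy to see with invented numbers:

```python
def risk(likelihood: float, impact: float) -> float:
    # risk = likelihood * impact -- both factors matter.
    return likelihood * impact

# A catastrophic-but-rare event can score lower than a modest-but-likely one.
rare_catastrophe = risk(likelihood=1e-4, impact=1_000_000)  # expected loss ~100
common_nuisance = risk(likelihood=0.5, impact=10_000)       # expected loss 5000.0
```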

Understand your threat model and why security matters.

Be rigorous about risk outcomes.

Have a strategic program focus.

Consider strategic vs. operational investments.

Sometimes you have to make "strategic bets" where you choose from among a set of possibilities.

Consider leverage points and efficiency.

Minimize the impact to critical data assets.

Achieving overall security assurance requires a balance of proactive and reactive security controls.

Stakeholders and leadership have to achieve alignment. It's helpful to understand senior leadership's risk appetite.

Netflix open-sourced some library for quantifying risk.

You need to create shared guiding principles.

You need ongoing visibility and reasonable expectations.

You need to show up for the customers with reasonable expectations that are in line with your risk tolerance.

The CISO Panel Discussion

  • Tom Alcock, Partner and Founder, Code Red Partners (moderator)
  • Caleb Sima, Chief Security Officer, Robinhood
  • Fermin Serna, Chief Security Officer, Databricks
  • Jessica Ferguson, Chief Security Officer, Docusign
They started by talking about the key factors for building a security team from scratch.

Focus on assessments, strategy, organization, finding leaders who can help, what you can build with your ICs, and execution. What are your needs, and what are your tools?

"I wasn't a CSO. I was essentially a recruiter."

People and talent are #1.

Consider your AppSec to developer ratio.

The best hires are sometimes a surprise. It might just be someone who happened to be free that came in through someone's network.

You can also grab people internally and grow them. This was a big focus for Ferguson. Ferguson also said "You can teach security. You can't teach innate curiosity."

60-70% of sourcing is internal sourcing.

Sima said you need to convince the candidate that you're better than the other companies. Call them so that you can explain why you're better. Sima said that Robinhood sells to the candidate before interviewing them. They reverse the order. That hooks them. They do this even for more junior roles. 

[Sima was easy to like.]

Every company is a disaster behind the scenes.

Transparency is key.

"Sell the disaster." People want a challenge, and they want to have impact.

It's important to overcommunicate, especially now that people are remote. People feel disconnected.

There are over 100 people on Ferguson's team at Docusign.

Remote work is good for deep dive work, but decision making quickly suffers.

A good manager can hold a team together.

Be intentional about building diverse teams. It helps to start with diverse panels.

Ferguson loves growing non-security people and has been really successful with it.

Initially, start with people who are good generally and have a certain curiosity.

You need someone to tie all the things and people together.

As a manager, don't be a hero. Talk to people. Get advice.

Someone gave a plug for the CSO at Lyft.

The recession is an opportunity.

Don't try to make security the top priority at the company, but it should be in the top 3 or the top 5.

The recession offers an opportunity to hire people who are being laid off.

If you can't hire, focus on retaining your people.

Security is important, but running a business is more important.

If you want to go into security but feel you don't know enough, you have a skill set that can grow. Don't get hung up on what you don't know.

Someone brought up IR, forensics, managing an investigation, etc.

Serna said that soft skills are really important. Even how you write an email is really important. You can build bridges or burn bridges.

It's nice to have a little bit of passion for the field.

It's impressive to see how attackers work.

Serna said "Don't be a jerk. It doesn't cost you anything to be nice."

Rise of the Vermilion: Cross-Platform Cobalt Strike Beacon Targeting Linux and Windows

Avigayil Mechtinger (@AbbyMCH) and Ryan Robinson (@MhicRoibin), security researchers from Intezer

Cobalt Strike is "Software for Adversary Simulations and Red Team Operations". It's very popular.

It's a malware framework.

There are different components involved: a C&C server, a stager, a backdoor, a team server, a client.

It's hard to detect and easy to configure.

There are many possible payloads.

When it's detected, it's hard to attribute to a particular attacker.

It's meant for red teams, but adversaries use it too. Adversaries will often rely on a cracked version of it. It's even used by some nation states.

Geacon is a golang beacon for Linux.

Only 2% of desktop hosts use Linux, but 90% of hosts in the cloud use Linux.

There are several categories of malware on Linux: coin miners, botnets, ransomware, backdoors, etc.

Backdoors are often from nation states such as Russia, North Korea, and China, and they're targeted in nature.

They started talking about the rise of Vermilion.

They do malware analysis. The malware they were analyzing was 94% never-before-seen code and 3% code from Cobalt Strike. That's weird because this was Linux malware, but the Cobalt Strike malware hadn't been officially ported to Linux. There was network-related stuff in the code.

VirusTotal reported that none of the virus scanners were catching this malware.

The name of the binary was nowhere to be seen on Google.

They called the malware Vermilion.

It was an ELF file. There were strings in the code that would be used if the malware ran on Windows. That's pretty weird for an ELF file, which runs on Linux.

It made use of RSA for encryption.

The malware fingerprints the machine it's running on.

The code runs a C&C loop. They analyzed the commands.

There's a Windows version too. Apparently, the Windows version was known as of 2019, but here it is running on Linux.

They partnered with McAfee.

The malware was actively targeting high-profile companies.

There weren't many samples of victims.

There was a backdoor, written-from-scratch, which ran on Windows and Linux hosts. It was found in live attacks.

It was probably from a nation state.

When running on Linux, the malware flew under the radar.

It's a misconception that Linux people think they don't need antivirus software.

Mirai is one of the most popular botnets, and it's not recognized by VirusTotal.

As an industry, we should spend more time detecting Linux malware.

Vermilion Strike for Windows can be detected in memory, or you can detect the stager.

They predict that the prevalence of cross-platform malware will continue in the future.
 Opening Remarks
The theme this year is "from the ground up". They're focusing on community, collaboration, and education.

It's a 100% volunteer team. 25 people work year-round.

They had speed mentoring sessions.

They really need some new volunteers. See bsides.sf/jobs.

The talks will be on their YouTube channel.

They have a stringent photo policy. You must have the permission of everyone in the frame, and crowd shots where you can see faces are strongly discouraged.
Keynote: We Need More Mediocre Security Engineers
Jackie Bow (@jbowocky) from Asana.

[This was my favorite talk.]

She pointed out that BSidesSF was the last in-person conference that a lot of us attended before the pandemic. That was true for her, and it was for me as well.

She's held many jobs in security, including malware reverse engineering, which is one of the most hard-core jobs you can have in security.

She's worked for Facebook, for the government, etc.

She said that ClamAV is still the best open-source antivirus software there is.

One time, she added a virus signature to ClamAV but forgot to add the trailing newline. This broke Facebook Messenger in production for 1-3 hours.

Important: 82% of breaches involve a human element.

We expect each other to be perfect in security. We're not.

She said, "Have you read InfoSec Twitter? Ugh!"

Important: Extreme expectations lead to burnout, not excellence.

More != better.

Burnout is in the standard classification of mental disorders. "Burnout has been defined as a combination of emotional exhaustion, depersonalization, and reduced personal accomplishment caused by chronic work stress" (cited).

Unfortunately, our work predisposes us to burnout, but we have to avoid burnout if we hope to do this career for a long time.

Consider COVID-19, Log4J, the Colonial Pipeline hack, Solar Winds, supply chain attacks, Ukraine, etc.

The Solar Winds thing shook her deeply because she really respected FireEye.

There are currently 600k people in security. It's expected that there will be two million open roles. How are we going to add a million new people to the field?

She referred to Stuxnet.

Our current burn rate is unsustainable.

We need to dismantle our concept of a security unicorn.

We need to see each other as allies. We need to stop overworking. We need to change who we think is hirable.

We're too elitist. That's bad.

We expect people to know everything. [That's something I'm struggling with as I prepare to interview.]

We can't scale as solo individuals.

We need to drop the l337 hacker stuff.

Social isolation and loneliness [which I know all too well] increase the likelihood of early death by 25-30%. It's equivalent to 15 cigarettes a day.

Elitism is the enemy of diversity.

Only 24% of people in security identify as women.

She used to work on reverse-engineering malware. That's one of the most technical jobs you can have in security. Now, she feels like a dinosaur because of all this SaaS software, CSRF, etc.

We end up being expected to always be on.

Important: She called it the "wheel of reactive hell".

There's always more work to do.

Glorifying overworking hurts us all.

She talked about her kid asking her at 7 PM how much more work she had to do. [I literally started tearing up when she said that because the exact same thing had happened to me the day before.]

How often do we get to take a vacation longer than a week?

Vacations are hugely important.

You can be a great security engineer and still have hobbies--even non-security ones!

We need to bridge the "talent gap."

We're looking for unicorns. We need to stop that.

We need to see degrees as a privilege.

We should look at education as something that should happen once you're already in the industry.

There is no agreed-upon value for boot camps or certs.

We need to offer education as a benefit.

We really don't know how to hire for cyber security roles.

We still demand CS degrees, and that's bad.

Your job should pay for you to do boot camps, certs, etc.

At our current rate, we'll burn out before the pipeline fixes itself.

We need to dismantle the unicorn.

We need to challenge our perceptions of who belongs in this industry to achieve a more diverse workforce.
An Unlikely Friendship: Why Security Engineers and Product Managers Should Be Working Together
Leif Dreizler (@leifdreizler) and Rachel Landers (@workingrach) from Twilio Segment.

Segment was acquired by Twilio.

He's an engineering manager, and she's a PM. Their team worked on building security-related features.

Segment is a customer data platform.

They use TypeScript and Go.

Enterprise customers have very high bars. They're very demanding and noisy.

They mentioned LocoMocoSec which is a security conference in Hawaii.

SecEng = security engineering

Netflix has a great security team. They had this idea that the paved path should lead people to do things securely.

They talked about a self-service approach to security.

He talked about the different sides of security. Application security was on his list.

[It's amazing how similar their team is to the team I worked on at Udemy.]

Their first feature was a password strength meter built using zxcvbn and Have I Been Pwned.
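As a rough sketch of the Have I Been Pwned side of such a meter: the Pwned Passwords range API uses k-anonymity, so only the first five hex characters of the password's SHA-1 ever leave your server. The parsing helper below is my own illustration, not Segment's code.

```python
import hashlib

def hibp_prefix_and_suffix(password: str):
    """Split the SHA-1 of a password for HIBP's k-anonymity range API.

    Only the 5-char prefix is sent to the server (as
    GET https://api.pwnedpasswords.com/range/<prefix>); the server returns
    all suffixes for that prefix, and the match happens locally.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Parse a range-API response body of 'SUFFIX:COUNT' lines."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A real meter would combine this breach count with the strength score from zxcvbn (the Python port exposes a 0-4 `"score"` field) and reject anything weak or breached.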

Next, they tackled MFA.

The biggest feature they tackled was integrating with SCIM. [Our team didn't do that one. The UB team did that one.]

You can use SCIM to integrate with Okta or Azure Active Directory to provision users in your app. SCIM stands for System for Cross-domain Identity Management.

PDLC = product development lifecycle. [We used the term SDLC.]

Ask yourself, why is this the right time to build this feature?

IdP = identity provider

Okta groups were mapped to Segment groups.

SDD = software design doc

You should "always be selling". The SDD should spend a little bit of time convincing people why it's a good idea to build this feature.

The PM owns what and why.

The engineering manager owns how and when it'll be done.

Important: "Weeks of programming can save you days of planning."

SCIM is basically CRUD for users and groups.

He mentioned RFCs 7642, 7643, and 7644.
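To make "CRUD for users and groups" concrete, here is roughly what a User resource from RFC 7643's core schema looks like. The field values and the `/scim/v2/Users` path are illustrative examples, not Segment's actual implementation.

```python
import json

def make_scim_user(user_name: str, given: str, family: str, active: bool = True) -> dict:
    """Build a minimal SCIM 2.0 User resource (RFC 7643 core schema)."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "active": active,
    }

user = make_scim_user("ada@example.com", "Ada", "Lovelace")
# This is the kind of body an IdP like Okta would POST to your
# /scim/v2/Users endpoint to provision a user.
body = json.dumps(user)
```

Deprovisioning is typically a PATCH setting `active` to false or a DELETE on the user's resource URL, which is what makes the "CRUD" framing apt.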

When you have to implement query filtering, use a library.

Read the onboarding docs for each of the IdPs.

Build the integration with the IdPs.

He gave Okta props for how smoothly the process went. It took OneLogin almost a year to accept their integration.

Enterprise software has a bigger focus on security than consumer-facing software.

1/3 of their customers who use SSO use SCIM.

ARR = annual recurring revenue.

The customers they have that use SCIM account for 21% of their ARR.

Defaults matter a lot!
Lunch
first.org shares incident response data.

I talked to some security journalists who piece together news about incidents.

BoringSSL and libsodium are examples of tools that are simple, easy, and useful.

OpenSSL is pointlessly and hopelessly complex.

Code Red Partners is a recruiting firm that focuses on security professionals.
Embracing Risk Responsibly: Moving beyond inflexible SLAs and exception hell by treating security vulnerabilities and risk like actual debt
Eric Ellett from Twilio Segment

We need to embrace innovation to get away from having a dumpster fire of a security program.

Start by buying some time with solutions that are "good enough".

Identify and engage with critical customers (which are people inside your company that your security team has to work with).

He talked about an example where the AppSec team asked a service to fix a P1 issue reported via a bug bounty program.

He talked about creating metrics for closing vulns.

When you're working on a v2 of your program, you've bought yourself some time, so rebuild the foundation properly--with data.

He talked about sending formal emails asking people to fix their vulns. A key part of these emails was that they had a due date based on the severity. This due date was possible to extend.

Attributing vulns to teams was hard because of the constant org changes.

They tied vulns to divisions and departments.

They rolled the data up the org chart to enable competition across the company for who could fix their vulns the most quickly.

At this point in your program, you can start experimenting strategically.

There are different risk appetites in different parts of the org.

He referred to Google's SRE book. He talked about SLIs, SLOs, and SLAs. In particular, he referred to chapter 3 on embracing risk.

Important: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room with armed guards--and even then I have my doubts." --Gene Spafford

He talked about error budgets.

For an SLO, he talked about uptime per quarter.

Perfect security and reliability is not the goal--it's too expensive.

Important: They created a debt metric: debt = (current_date - orig_date) / sla_in_days

The higher the priority, the shorter the SLA in days.

So, if the priority says you have to get it fixed in a day, every day you slip, you're increasing your debt by 1. However, if the priority says you have to fix it in a month, then it takes a whole month for you to increase your debt by 1.
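A sketch of that metric in code (the priority-to-SLA-days mapping here is a made-up example, not the one from the talk):

```python
from datetime import date

# Hypothetical SLA windows: higher priority means a shorter SLA.
SLA_DAYS = {"P1": 1, "P2": 7, "P3": 30}

def vuln_debt(orig_date: date, current_date: date, priority: str) -> float:
    """The talk's metric: debt = (current_date - orig_date) / sla_in_days."""
    return (current_date - orig_date).days / SLA_DAYS[priority]

# A P1 that is one day old already carries a full unit of debt...
assert vuln_debt(date(2022, 6, 1), date(2022, 6, 2), "P1") == 1.0
# ...while a P3 takes its whole 30-day SLA to reach the same debt.
assert vuln_debt(date(2022, 6, 1), date(2022, 7, 1), "P3") == 1.0
```

Because the result is a plain number, per-vuln debts can be summed per team and rolled up the org chart, which is what enables the cross-company comparisons he described.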

As he mentioned before, this debt can be calculated and rolled up organizationally.

You can break down the debt in different ways.

He mentioned Snowflake.

He said that prioritizing work based on a debt metric is more helpful than prioritizing based on severity alone.

They even integrated the debt metric into CI.

He said that Segment's security program is further ahead than the rest of Twilio's.

At Segment, they're not yet tackling the P4s and P5s. They're too noisy right now.

He said that compensating controls frequently lower the CVSS which lowers the priority.

He talked about using Backstage for code asset management--i.e. which team owns the code with the vuln.

They're moving from VMs to k8s.
Buying Security: A Client's Guide
Rami McCarthy (@ramimacisabird)

He called himself a "reformed security consultant".

Buying security services is hard.

The security industry is a $100 billion industry.

Let's talk about security assessments. This is a comprehensive guide on buying and getting value.

He mentioned some survey that talked about buying and selling security.

He mentioned a talk from 2011 called "Penetration Testing Considered Harmful."

Important: Consider the question: Is a particular pentest good? The answer lies along a scale that goes from "it's bad" to "you don't know."

White box tests are now dominant. They're more efficient and more thorough.

Don't compare a pentest to a bug bounty program.

Don't fall for a dressed-up Nessus scan.

There are different motivations for getting a security assessment. Risk reduction is the number one reason. The second most common reason is compliance.

There are different types of vendors.

It's hard to know if a vendor is good. Network recommendations are helpful.

Be careful about how much time you give the vendor. Keep in mind Parkinson's law.

Know your scope.

Gather 3-5 proposals.

When your goal is compliance, the pentester has to strike a balance between providing value vs. actually enabling you to pass.

The clients your sales team is courting might effectively tell you which vendor to use, since they may ask for proof of compliance from specific vendors.

The vendor will help you further refine your target scope. You have to home in on clear objectives and the length of the engagement. These will affect the cost.

Surprisingly, different vendors will come back with very different quotes.

Fast, good, and cheap--pick 2. In security, it's more like pick 1.

Be skeptical of cheap proposals and consultants.

There's lots of paperwork involved: NDA, MSA, SOW, etc.

Cure53 actually made their paperwork public.

Show the pentesters your known risks, your threat models, etc. This will help them.

Don't waste their time: don't leave in obvious, known vulnerabilities; don't force them to go through your WAF (just let them through); and don't give them an incomplete environment that's missing the data they need to be useful.

Their reports are decomposed and sent to different teams. There is usually an executive summary as well as a section with the nitty-gritty details.

A lot of people like getting an overall score or grade.

Make sure the vendor cleans up after themselves. He saw a case where one vendor left an open shell, and then another vendor found it.

Remember, no findings != no risk.

Do root cause and variant analysis.

Assessments are an expensive way to find vulns.

For each vuln, you need to fix, mitigate, or accept the risk.

Remediate the vulns. Don't just leave them there to be found by the next pentester.

Do a retro after you're done.

You can use canary bugs to see if they're actually doing their job.

Consider your pentesting cadence: Once a year? Once every six months?

Think about the ROI.

Don't kill bugs. Kill bug classes.
Emerging Best Practices in Software Supply Chain Security: What We Can Learn from Google, the White House, OWASP, and Gartner
Tony Loehr from Cycode

He talked about Google's SLSA and NIST's SSDF. These are AppSec frameworks.

By 2025, 45% of orgs will experience an attack on their supply chain.

Executive Order 14028, which talked about improving the nation's cybersecurity, had some text complaining about the opaqueness of commercial software.

It talked about five objectives: protect, confidentiality, identify (SBOM), rapid responses, and training.

Important: 80% of incidents involve a known vuln that hasn't been patched.

He spoke more about Google's SLSA framework.

Level 4 requires a two-person review of all changes as well as hermetic, reproducible builds.

SSDF covers what. SLSA covers how. There are still some gaps.

He mentioned Terraform.

He mentioned least-privilege access.

He mentioned anomaly detection.
Avoiding insidious points of compromise in infrastructure access systems
Sharon Goldberg is the CEO/Co-Founder of BastionZero and is also a tenured professor in the Computer Science Department at Boston University.

[I was very impressed by her creds. I don't want to start any rumors, but I'm pretty sure I overheard that at night, she's a vigilante crime fighter, and she likes to fly fighter jets for fun :-P ]

She focuses on infra-access systems.

She wanted to do a detailed breakdown of some war stories.

Act 1: Standing credentials, VPNs

Act 2: Zero Trust

Act 3: Weaknesses in Zero Trust.

She started by talking about bastion hosts.

She talked about Fluffi Bunni from 2001. This compromise involved a fake ssh client that stole passwords from compromised users. Even the bastion was infected. However, it wasn't able to steal ssh key passphrases.

Lesson: Don't give users standing credentials, especially passwords. Use MFA.

Next up, she talked about VPNs.

She talked about Operation Aurora from 2009. It was a Chinese APT breaking into Akamai. There was a zero-day in IE that allowed the attacker to compromise the entire machine.

Amazingly, the adversary had a very long dwell time, i.e. they went undetected for a very long time. They were able to move laterally, behind the VPN.

Their goal was to get to the source code.

Akamai didn't even know they were inside. Finally, Google took over some C&C server and told Akamai about the ongoing attack.

Lesson: Don't trust people just because they're on a secured network. That's the idea behind Zero Trust.

Akamai also wasn't segmented very well at the time.

Lesson: Segment!

Next, she talked about centralized domain administration, such as an Active Directory admin server.

She talked about an article named "NotPetya Ransomware" from 2017 that she said was great. She called it a watering hole attack. That's where you hack something and then wait for people to interact with it. In this case, it was Ukrainian tax software.

Once they were able to steal one credential, they were able to get to all the other machines. The result was that computers were bricked. They literally had to be thrown away.

She said we too often rely on a privileged system--a system locked down with a single cred.

Lesson: Vet your supply chain.

Act 2: Zero Trust

When it comes to remote access, don't trust the user just based on their network address. Don't rely on long-lived creds.

She talked about some situation involving a certificate authority, an SSO provider, and a proxy. She talked about an X.509 certificate or a SAML token.

She talked about DigiNotar from 2011. She said the incident involved blindly trusting a CA. She said that in her mind this is one of the top 5 incidents of all time.

Some CA was hacked. The hacker created a certificate for Google, and they used it to snoop on Google's traffic.

We later created certificate transparency, etc.

Next, she covered SolarWinds from 2020. She said the problem here was blindly trusting SSO too much [uh oh].

She showed two architectures. In one architecture, MFA would not have helped. She said that if MFA were separated from the SSO provider, it'd require a second point of compromise.

Lesson: Users get hacked. Access systems get hacked.

She recommended reading some article that talked about DigiNotar getting hacked. [Perhaps this one?]
Red Teaming macOS Environments with Hermes the Swift Messenger
Justin Bui (@slyd0g)

He's a red teamer at Zoom. He's also a skateboarder.

He talked about the benefits of the Swift programming language and the Mythic framework.

He talked about the benefits of using Swift as a post-exploitation language. It now runs on Linux and Windows too.

Swift can interoperate with C, C++, and ObjC.

On macOS, Swift is not installed by default, but the libraries are.

There are several languages used for post-exploitation on macOS: JXA, Python, and Golang are common.

JXA has been abandoned. Apple said that Python and other scripting languages are deprecated and will be removed. [I noticed it's no longer present on macOS 12.4 Monterey.]

He said Golang is fantastic. It too can interoperate with C, C++, and ObjC. It does result in big binaries, though.

By using the swift command, you can circumvent the app whitelist. However, it's not installed by default.

Mythic is a cross-platform, post-exploitation red-teaming framework built with Python 3, Docker, and Docker Compose, with a web-browser UI. It has a C&C server.

He talked about how the implant agent calls back from the victim.

There are payloads to target macOS.

He kept talking about LOLBins.

[I didn't know what a LOLBin was. Per this page, LOLBins is the abbreviated term for Living Off the Land Binaries. Living Off the Land Binaries are binaries of a non-malicious nature, local to the operating system, that have been utilized and exploited by cyber criminals and crime groups to camouflage their malicious activity.]

He said that Python and Swift are LOLBins.

Hermes is a Swift payload for the Mythic framework. He's the author.

The Mythic framework makes use of encrypted key exchange in order to encrypt the traffic between the victim and the C&C server.

Hermes has various modules for post-exploitation.

By using the Mythic framework, he only had to worry about writing code for the implant side.

He didn't want to force developers to use Macs. He said that setting up cross-compilation was the hardest part of the project.

Darling is a macOS emulation layer for Linux. It's like Wine, but for macOS. Darling relies on a Linux kernel module.

He talked about the "operator" who was controlling the C&C server.

Each job is a separate thread allowing you to run things in parallel.

He showed Mythic's web UI. You can upload files to and download files from the victim host from your browser. It can also capture screenshots of the user's browser.

It has clipboard monitoring too. Note that root doesn't have access to the clipboard [weird!]. He talked about nabbing passwords when people copy and paste them.

He talked about a time when his co-worker reverse-engineered some malware to steal some techniques.

plist files can be XML, binary, or the legacy OpenStep text format.
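Python's stdlib `plistlib` can round-trip the XML and binary formats; a quick sketch (the launch-agent-style keys are just an example):

```python
import plistlib

# Example data shaped like a launch-agent plist (keys are illustrative).
data = {"Label": "com.example.agent", "RunAtLoad": True}

xml_bytes = plistlib.dumps(data, fmt=plistlib.FMT_XML)
bin_bytes = plistlib.dumps(data, fmt=plistlib.FMT_BINARY)

# Binary plists start with the magic bytes b"bplist"; XML ones are plain text.
assert bin_bytes.startswith(b"bplist")
# plistlib.loads() auto-detects the format, so both round-trip identically.
assert plistlib.loads(xml_bytes) == plistlib.loads(bin_bytes) == data
```

That auto-detection is handy when triaging unknown plists, since malware has no reason to prefer the human-readable form.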

He keeps focusing on using techniques to snoop on what the user is doing.

Apple has an Endpoint Security Framework. 3rd-party developers got "pushed out of the kernel". Because of this, hackers and security software now have equal footing.

Attackers can use launch agents to achieve persistence.

It reminded me of Spy vs. Spy.
Opening Remarks for Day 2
The summary of the Code of Conduct is, "Do not be an ass, or we'll kick your ass out!"
Keynote: Building sustainable security programs
Astha Singhal, Director of Security, Netflix

She too talked about InfoSec burnout.

This is a job where you never win.

These are the contributing factors:
Constant firefighting: She referred to Log4j.
Security cynicism
Culture of catastrophizing
Possible vs. probable
Personal responsibility
Ridiculous and impossible
Ongoing conflicts with stakeholders
Changing threat landscape
We're never done
There are never enough things in the wins column: Only one thing needs to go wrong for bad things to happen.
That's a lot!

She talked about organizational culture.

We need to disrupt security cynicism.

We need to discourage heroics and instead celebrate long-term wins. Proactive investments are better.

Culture takes intentionality.

Build "additive" teams--where each new person adds something unique to the team.

At one point, all the members of her team were AppSec engineers. They've expanded.

Build an environment of empathy and collaboration.

Keep in mind business enablement and customer service.

Consider things from a risk perspective. Our job is to manage risk.

risk = likelihood * impact

Help other security engineers think about risk as well.

Don't forget about probability or likelihood. Don't overfocus on things that have extremely high impact but very little likelihood.
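A toy illustration of that point, with made-up scenarios and scales (likelihood on 0-1, impact on 1-10):

```python
# risk = likelihood * impact, as in the talk. Scenarios are hypothetical.
risks = {
    "phished employee credential": (0.30, 7),      # (likelihood, impact)
    "stolen laptop, disk encrypted": (0.10, 2),
    "alien attack on the data center": (0.0001, 10),
}

# Rank scenarios by their risk score, highest first.
ranked = sorted(risks, key=lambda name: risks[name][0] * risks[name][1], reverse=True)

# High impact alone doesn't dominate: the near-zero-likelihood
# scenario lands at the bottom despite its maximal impact.
assert ranked[0] == "phished employee credential"
assert ranked[-1] == "alien attack on the data center"
```

The point of multiplying rather than just eyeballing impact is exactly what she warned about: it keeps spectacular-but-improbable scenarios from crowding out the mundane ones that actually drive loss.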

Understand your threat model and why security matters.

Be rigorous about risk outcomes.

Have a strategic program focus.

Consider strategic vs. operational investments.

Sometimes you have to make "strategic bets" where you choose from among a set of possibilities.

Consider leverage points and efficiency.

Minimize the impact to critical data assets.

Achieving overall security assurance requires a balance of proactive and reactive security controls.

Stakeholders and leadership have to achieve alignment. It's helpful to understand senior leadership's risk appetite.

Netflix open-sourced some library for quantifying risk.

You need to create shared guiding principles.

You need ongoing visibility and reasonable expectations.

You need to show up for the customers with reasonable expectations that are in line with your risk tolerance.
The CISO Panel Discussion
Tom Alcock, Partner and Founder, Code Red Partners (moderator)
Caleb Sima, Chief Security Officer, Robinhood
Fermin Serna, Chief Security Officer, Databricks
Jessica Ferguson, Chief Security Officer, Docusign
They started by talking about the key factors for building a security team from scratch.

Focus on assessments, strategy, organization, finding leaders who can help, what you can build with your ICs, and execution. What are your needs, and what are your tools?

"I wasn't a CSO. I was essentially a recruiter."

People and talent are #1.

Consider your AppSec to developer ratio.

The best hires are sometimes a surprise. It might just be someone who happened to be free that came in through someone's network.

You can also grab people internally and grow them. This was a big focus for Ferguson. Ferguson also said "You can teach security. You can't teach innate curiosity."

60-70% of sourcing is internal sourcing.

Sima said you need to convince the candidate that you're better than the other companies. Call them so that you can explain why you're better. Sima said that Robinhood sells to the candidate before interviewing them. They reverse the order. That hooks them. They do this even for more junior roles. 

[Sima was easy to like.]

Every company is a disaster behind the scenes.

Transparency is key.

"Sell the disaster." People want a challenge, and they want to have impact.

It's important to overcommunicate, especially now that people are remote. People feel disconnected.

There are over 100 people on Ferguson's team at Docusign.

Remote work is good for deep dive work, but decision making quickly suffers.

A good manager can hold a team together.

Be intentional about building diverse teams. It helps to start with diverse panels.

Ferguson loves growing non-security people and has been really successful with it.

Initially, start with people who are good generally and have a certain curiosity.

You need someone to tie all the things and people together.

As a manager, don't be a hero. Talk to people. Get advice.

Someone gave a plug for the CSO at Lyft.

The recession is an opportunity.

Don't try to make security the top priority at the company, but it should be in the top 3 or the top 5.

The recession offers an opportunity to hire people who are being laid off.

If you can't hire, focus on retaining your people.

Security is important, but running a business is more important.

If you want to go into security but feel you don't know enough, you have a skill set that can grow. Don't get hung up on what you don't know.

Someone brought up IR, forensics, managing an investigation, etc.

Serna said that soft skills are really important. Even how you write an email is really important. You can build bridges or burn bridges.

It's nice to have a little bit of passion for the field.

It's impressive to see how attackers work.

Serna said "Don't be a jerk. It doesn't cost you anything to be nice."
Rise of the Vermilion: Cross-Platform Cobalt Strike Beacon Targeting Linux and Windows
Avigayil Mechtinger (@AbbyMCH) and Ryan Robinson (@MhicRoibin), security researchers from Intezer

Cobalt Strike is "Software for Adversary Simulations and Red Team Operations". It's very popular.

It's a malware framework.

There are different components involved: a C&C server, a stager, a backdoor, a team server, a client.

It's hard to detect and easy to configure.

There are many possible payloads.

When it's detected, it's hard to attribute to a particular attacker.

It's meant for red teams, but adversaries use it too. Adversaries will often rely on a cracked version of it. It's even used by some nation states.

Geacon is a golang beacon for Linux.

Only 2% of desktop hosts use Linux, but 90% of hosts in the cloud use Linux.

There are several categories of malware on Linux: coin miners, botnets, ransomware, backdoors, etc.

Backdoors are often from nation states such as Russia, North Korea, and China, and they're targeted in nature.

They started talking about the rise of Vermilion.

They do malware analysis. The malware they were analyzing was 94% never-before-seen code and 3% code from Cobalt Strike. That's weird because this was Linux malware, but the Cobalt Strike malware hadn't been officially ported to Linux. There was network-related stuff in the code.

VirusTotal reported that none of the virus scanners were catching this malware.

The name of the binary was nowhere to be seen on Google.

They called the malware Vermilion.

It was an ELF file. There were strings in the code that would be used if the malware ran on Windows. That's pretty weird for an ELF file, which runs on Linux.

It made use of RSA for encryption.

The malware fingerprints the machine it's running on.

The code runs a C&C loop. They analyzed the commands.

There's a Windows version too. Apparently, the Windows version was known as of 2019, but here it is running on Linux.

They partnered with McAfee.

The malware was actively targeting high-profile companies.

There weren't many samples of victims.

There was a backdoor, written from scratch, which ran on Windows and Linux hosts. It was found in live attacks.

It was probably from a nation state.

When running on Linux, the malware flew under the radar.

Many Linux users operate under the misconception that they don't need antivirus software.

Mirai is one of the most popular botnets, and it's not recognized by VirusTotal.

As an industry, we should spend more time detecting Linux malware.

Vermilion Strike for Windows can be detected in memory, or you can detect the stager.

They predict that the prevalence of cross-platform malware will continue in the future.
 Opening Remarks
The theme this year is "from the ground up". They're focusing on community, collaboration, and education.

It's a 100% volunteer team. 25 people work year-round.

They had speed mentoring sessions.

They really need some new volunteers. See bsides.sf/jobs.

The talks will be on their YouTube channel.

They have a stringent photo policy. You must have the permission of everyone in the frame, and crowd shots where you can see faces are strongly discouraged.
Keynote: We Need More Mediocre Security Engineers
Jackie Bow (@jbowocky) from Asana.

[This was my favorite talk.]

She pointed out that BSidesSF was the last in-person conference that a lot of us attended before the pandemic. That was true for her, and it was for me as well.

She's held many jobs in security, including malware reverse engineering, which is one of the most hard-core jobs you can have in security.

She's worked for Facebook, for the government, etc.

She said that ClamAV is still the best open-source antivirus software there is.

One time, she added a virus signature to ClamAV but forgot to add the trailing newline. This broke Facebook Messenger in production for 1-3 hours.

Important: 82% of breaches involve a human element.

We expect each other to be perfect in security. We're not.

She said, "Have you read InfoSec Twitter? Ugh!"

Important: Extreme expectations lead to burnout, not excellence.

More != better.

Burnout is in the standard classification of mental disorders. "Burnout has been defined as a combination of emotional exhaustion, depersonalization, and reduced personal accomplishment caused by chronic work stress" (cited).

Unfortunately, our work predisposes us to burnout, but we have to avoid burnout if we hope to do this career for a long time.

Consider COVID-19, Log4J, the Colonial Pipeline hack, Solar Winds, supply chain attacks, Ukraine, etc.

The Solar Winds thing shook her deeply because she really respected FireEye.

There are currently 600k people in security. It's expected that there will be two million open roles. How are we going to add a million new people to the field?

She referred to Stuxnet.

Our current burn rate is unsustainable.

We need to dismantle our concept of a security unicorn.

We need to see each other as allies. We need to stop overworking. We need to change who we think is hirable.

We're too elitist. That's bad.

We expect people to know everything. [That's something I'm struggling with as I prepare to interview.]

We can't scale as solo individuals.

We need to drop the l337 hacker stuff.

Social isolation and loneliness [which I know all too well] increase the likelihood of early death by 25-30%. It's equivalent to 15 cigarettes a day.

Elitism is the enemy of diversity.

Only 24% of people in security identify as women.

She used to work on reverse-engineering malware. That's one of the most technical jobs you can have in security. Now, she feels like a dinosaur because of all this SaaS software, CSRF, etc.

We end up being expected to always be on.

Important: She called it the "wheel of reactive hell".

There's always more work to do.

Glorifying overworking hurts us all.

She talked about her kid asking her at 7 PM how much more work she had to do. [I literally started tearing up when she said that because the exact same thing had happened to me the day before.]

How often do we get to take a vacation longer than a week?

Vacations are hugely important.

You can be a great security engineer and still have hobbies--even non-security ones!

We need to bridge the "talent gap."

We're looking for unicorns. We need to stop that.

We need to see degrees as a privilege.

We should look at education as something that should happen once you're already in the industry.

There is no agreed-upon value for boot camps or certs.

We need to offer education as a benefit.

We really don't know how to hire for cyber security roles.

We still demand CS degrees, and that's bad.

Your job should pay for you to do boot camps, certs, etc.

At our current rate, we'll burn out before the pipeline fixes itself.

We need to dismantle the unicorn.

We need to challenge our perceptions of who belongs in this industry to achieve a more diverse workforce.
An Unlikely Friendship: Why Security Engineers and Product Managers Should Be Working Together
Leif Dreizler (@leifdreizler) and Rachel Landers (@workingrach) from Twilio Segment.

Segment was acquired by Twilio.

He's an engineering manager, and she's a PM. Their team worked on building security-related features.

Segment is a customer data platform.

They use TypeScript and Go.

Enterprise customers have very high bars. They're very demanding and noisy.

They mentioned LocoMocoSec which is a security conference in Hawaii.

SecEng = security engineering

Netflix has a great security team. They had this idea that the paved path should lead people to do things securely.

They talked about a self-service approach to security.

He talked about the different sides of security. Application security was on his list.

[It's amazing how similar their team is to the team I worked on at Udemy.]

Their first feature was a password strength meter built using zxcvbn and Have I Been Pwned.

Next, they tackled MFA.

The biggest feature they tackled was integrating with SCIM. [Our team didn't do that one. The UB team did that one.]

You can use SCIM to integrate with Okta or Azure Active Directory to provision users in your app. It's a system for cross-domain identity management.

PDLC = product development lifecycle. [We used the term SDLC.]

Ask yourself, why is this the right time to build this feature?

IdP = identity provider

Okta groups were mapped to Segment groups.

SDD = software design doc

You should "always be selling". The SDD should spend a little bit of time convincing people why it's a good idea to build this feature.

The PM owns what and when.

The engineering manager owns how and when it'll be done.

Important: "Weeks of programming can save you days of planning."

SCIM is basically CRUD for users and groups.

He mentioned RFCs 7642, 7643, and 7644.

When you have to implement query filtering, use a library.

Read the onboarding docs for each of the IdPs.

Build the integration with the IdPs.

He gave Okta props for how smoothly the process went. It took OneLogin almost a year to accept their integration.

Enterprise software has a bigger focus on security than consumer-facing software.

1/3 of their customers who use SSO use SCIM.

ARR = annual recurring revenue.

The customers they have that use SCIM account for 21% of their ARR.

Defaults matter a lot!
Lunch
first.org shares incident response data.

I talked to some security journalists who piece together news about incidents.

Boring SSL and libsodium are examples of tools that are simple, easy, and useful.

OpenSSL is pointlessly and hopelessly complex.

Code Red Partners is a recruiting firm that focuses on security professionals.
Embracing Risk Responsibly: Moving beyond inflexible SLAs and exception hell by treating security vulnerabilities and risk like actual debt
Eric Ellett from Segment (Twilio)

We need to embrace innovation to get away from having a dumpster fire of a security program.

Start by buying some time with solutions that are "good enough".

Identify and engage with critical customers (which are people inside your company that your security team has to work with).

He talked about an example where the AppSec team asked a service to fix a P1 issue reported via a bug bounty program.

He talked about creating metrics for closing vulns.

When you're working on a v2 of your program, rebuild the foundation with data. Now you have some time to build a proper foundation.

He talked about sending formal emails asking people to fix their vulns. A key part of these emails was that they had a due date based on the severity. This due date was possible to extend.

Attributing vulns to teams was hard because of the constant org changes.

They tied vulns to divisions and departments.

They rolled the data up the org chart to enable competition across the company for who could fix their vulns the most quickly.

At this point in your program, you can start experimenting strategically.

There are different risk appetites in different parts of the org.

He referred to Google's SRE book. He talked about SLIs, SLOs, and SLAs. In particular, he referred to chapter 3 on embracing risk.

Important: "The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room with armed guards--and even then I have my doubts." --Gene Spafford

He talked about error budgets.

For an SLO, he talked about uptime per quarter.

Perfect security and reliability are not the goal--they're too expensive.

Important: They created a debt metric: debt = (current_date - orig_date) / sla_in_days

The higher the priority, the shorter the SLA in days.

So, if the priority says you have to get it fixed in a day, every day you slip, you're increasing your debt by 1. However, if the priority says you have to fix it in a month, then it takes a whole month for you to increase your debt by 1.

As he mentioned before, this debt can be calculated and rolled up organizationally.

You can break down the debt in different ways.

He mentioned Snowflake.

He said that prioritizing work based on a debt metric is more helpful than prioritizing based on severity alone.

They even integrated the debt metric into CI.

He said that Segment's security program is further ahead than the rest of Twilio's.

At Segment, they're not yet tackling the P4s and P5s. They're too noisy right now.

He said that compensating controls frequently lower the CVSS which lowers the priority.

He talked about using Backstage for code asset management--i.e. which team owns the code with the vuln.

They're moving from VMs to k8s.
Buying Security: A Client's Guide
Rami McCarthy (@ramimacisabird)

He called himself a "reformed security consultant".

Buying security services is hard.

The security industry is a $100 billion industry.

Let's talk about security assessments. This is a comprehensive guide on buying and getting value.

He mentioned some survey that talked about buying and selling security.

He mentioned a talk from 2011 called, Penetration Testing Considered Harmful.

Important: Consider the question: Is a particular pentest good? The answer lies along a scale that goes from "it's bad" to "you don't know."

White box tests are now dominant. They're more efficient and more thorough.

Don't compare a pentest to a bug bounty program.

Don't fall for a dressed-up Nessus scan.

There are different motivations for getting a security assessment. Risk reduction is the number one reason. The second most common reason is compliance.

There are different types of vendors.

It's hard to know if a vendor is good. Network recommendations are helpful.

Be careful about how much time you give the vendor. Keep in mind Parkinson's law.

Know your scope.

Gather 3-5 proposals.

When your goal is compliance, the pentester has to strike a balance between providing value and actually enabling you to pass.

Your own sales team's clients might tell you who to use, since those clients may require proof of compliance via specific vendors.

The vendor will help you further refine your target scope. You have to home in on clear objectives and the length of the engagement. These will affect the cost.

Surprisingly, different vendors will come back with very different quotes.

Fast, good, and cheap--pick 2. In security, it's more like pick 1.

Be skeptical of cheap proposals and consultants.

There's lots of paperwork involved: NDA, MSA, SOW, etc.

Cure53 actually made their paperwork public.

Show the pentesters your known risks, your threat models, etc. This will help them.

Don't waste their time by leaving in obvious, known vulnerabilities, forcing them to go through your WAF (just let them through), or by giving them an incomplete environment that is missing important data to be useful.

Their reports are decomposed and sent to different teams. There is usually an executive summary as well as a section with the nitty-gritty details.

A lot of people like getting an overall score or grade.

Make sure the vendor cleans up after themselves. He saw a case where one vendor left an open shell, and then another vendor found it.

Remember, no findings != no risk.

Do root cause and variant analysis.

Assessments are an expensive way to find vulns.

For each vuln, you need to fix, mitigate, or accept the risk.

Remediate the vulns. Don't just leave them there to be found by the next pentester.

Do a retro after you're done.

You can use canary bugs to see if they're actually doing their job.

Consider your pentesting cadence: Once a year? Once every six months?

Think about the ROI.

Don't kill bugs. Kill bug classes.
Emerging Best Practices in Software Supply Chain Security: What We Can Learn from Google, the White House, OWASP, and Gartner
Tony Loehr from Cycode

He talked about Google's SLSA and NIST's SSDF. These are AppSec frameworks.

By 2025, 45% of orgs will experience an attack on their supply chain.

Executive Order 14028, on improving the nation's cybersecurity, included text complaining about the opaqueness of commercial software.

It talked about five objectives: protection, confidentiality, identification (SBOM), rapid response, and training.

Important: 80% of incidents involve a known vuln that hasn't been patched.

He spoke more about Google's SLSA framework.

Level 4 requires a two-person review of all changes as well as hermetic, reproducible builds.

SSDF covers what. SLSA covers how. There are still some gaps.

He mentioned Terraform.

He mentioned least-privilege access.

He mentioned anomaly detection.
Avoiding insidious points of compromise in infrastructure access systems
Sharon Goldberg is the CEO/Co-Founder of BastionZero and is also a tenured professor in the Computer Science Department at Boston University.

[I was very impressed by her creds. I don't want to start any rumors, but I'm pretty sure I overheard that at night, she's a vigilante crime fighter, and she likes to fly fighter jets for fun :-P ]

She focuses on infra-access systems.

She wanted to do a detailed breakdown of some war stories.

Act 1: Standing credentials, VPNs

Act 2: Zero Trust

Act 3: Weaknesses in Zero Trust.

She started by talking about bastion hosts.

She talked about Fluffy Bunni from 2001. This compromise involved a fake ssh client that stole passwords from compromised users. Even the bastion was infected. However, it wasn't able to steal ssh key passphrases.

Lesson: Don't give users standing credentials, especially passwords. Use MFA.

Next up, she talked about VPNs.

She talked about Operation Aurora from 2009. It was a Chinese APT breaking into Akamai. There was a zero-day in IE that allowed the attacker to compromise the entire machine.

Amazingly, the adversary had a very long dwell time, i.e. they went undetected for a very long time. They were able to move laterally, behind the VPN.

Their goal was to get to the source code.

Akamai didn't even know they were inside. Finally, Google took over some C&C server and told Akamai about the ongoing attack.

Lesson: Don't trust people just because they're on a secured network. That's the idea behind Zero Trust.

Akamai also wasn't segmented very well at the time.

Lesson: Segment!

Next, she talked about single-level domain administration such as Active Directory Admin Server.

She talked about an article named "NotPetya Ransomware" from 2017 that she said was great. She called it a watering hole attack. That's where you hack some thing and then wait for people to interact with it. In this case, it was Ukrainian tax software.

Once they were able to steal one credential, they were able to get to all the other machines. The result was that computers were bricked. They literally had to be thrown away.

She said we too often rely on a privileged system--a system locked down with a single cred.

Lesson: Vet your supply chain.

Act 2: Zero Trust

When it comes to remote access, don't trust the user just based on their network address. Don't rely on long-lived creds.

She talked about some situation involving a certificate authority, an SSO provider, and a proxy. She talked about an X.509 certificate or a SAML token.

She talked about DigiNotar from 2011. She said the incident involved blindly trusting a CA. She said that in her mind this is one of the top 5 incidents of all time.

Some CA was hacked. The hacker created a certificate for Google, and they used it to snoop on Google's traffic.

We later created certificate transparency, etc.

Next, she covered SolarWinds from 2020. She said the problem here was blindly trusting SSO too much [uh oh].

She showed two architectures. In one architecture MFA would not have helped. She said if MFA was separated from the SSO provider, it'd require a second point of compromise.

Lesson: Users get hacked. Access systems get hacked.

She recommended reading some article that talked about DigiNotar getting hacked. [Perhaps this one?]
Red Teaming macOS Environments with Hermes the Swift Messenger
Justin Bui (@slyd0g)

He's a red teamer at Zoom. He's also a skateboarder.

He talked about the benefits of the Swift programming language and the Mythic framework.

He talked about the benefits of using Swift as a post-exploitation language. It now runs on Linux and Windows too.

Swift can interoperate with C, C++, and ObjC.

On macOS, Swift is not installed by default, but the libraries are.

There are several languages used for post-exploitation on macOS: JXA, Python, and Golang are common.

JXA has been abandoned. Apple said that Python and other scripting languages are deprecated and will be removed. [I noticed it's no longer present on macOS 12.4 Monterey.]

He said Golang is fantastic. It too can interoperate with C, C++, and ObjC. It does result in big binaries, though.

By using the swift command, you can circumvent the app whitelist. However, it's not installed by default.

Mythic is a cross-platform, post-exploit, red teaming framework built with python3, docker, docker-compose, and a web browser UI. It has a C&C server.

He talked about how the implant agent calls back from the victim.

There are payloads to target macOS.

He kept talking about LOLBins.

[I didn't know what a LOLBin was. Per this page, LOLBins is the abbreviated term for Living Off the Land Binaries. Living Off the Land Binaries are binaries of a non-malicious nature, local to the operating system, that have been utilized and exploited by cyber criminals and crime groups to camouflage their malicious activity.]

He said that Python and Swift are LOLBins.

Hermes is a Swift payload for the Mythic framework. He's the author.

The Mythic framework makes use of encrypted key exchange in order to encrypt the traffic between the victim and the C&C server.

Hermes has various modules for post-exploitation.

By using the Mythic framework, he only had to worry about writing code for the implant side.

He didn't want to force developers to use Macs. He said that setting up cross-compilation was the hardest part of the project.

Darling is a macOS emulation layer for Linux. It's like Wine, but for macOS. Darling relies on a Linux kernel module.

He talked about the "operator" who was controlling the C&C server.

Each job is a separate thread allowing you to run things in parallel.

He showed Mythic's web UI. You can upload files to and download files from the victim host from your browser. It can also capture screenshots of the user's browser.

It has clipboard monitoring too. Note that root doesn't have access to the clipboard [weird!]. He talked about nabbing passwords when people copy and paste them.

He talked about a time when his co-worker reverse-engineered some malware to steal some techniques.

plist files can be XML, JSON, or binary.
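The XML and binary plist forms he mentioned can be handled with Python's stdlib `plistlib` (which covers those two formats; the keys below are just launch-agent-style examples).

```python
import plistlib

# Example keys in the style of a macOS launch agent plist.
data = {"Label": "com.example.agent", "RunAtLoad": True}

xml_bytes = plistlib.dumps(data, fmt=plistlib.FMT_XML)
bin_bytes = plistlib.dumps(data, fmt=plistlib.FMT_BINARY)

# Both encodings round-trip to the same dictionary;
# plistlib sniffs the format on load.
assert plistlib.loads(xml_bytes) == data
assert plistlib.loads(bin_bytes) == data
assert bin_bytes.startswith(b"bplist00")  # binary plist magic bytes
```

The multiple on-disk encodings are part of why plists are handy for both defenders and implants: the same persistence entry can look quite different to naive string-matching tools.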

He keeps focusing on using techniques to snoop on what the user is doing.

Apple has an Endpoint Security Framework. 3rd-party developers got "pushed out of the kernel". Because of this, hackers and security software now have equal footing.

Attackers can use launch agents to achieve persistence.

It reminded me of Spy vs. Spy.
Opening Remarks for Day 2
The summary of the Code of Conduct is, "Do not be an ass, or we'll kick your ass out!"
Keynote: Building sustainable security programs
Astha Singhal, Director of Security, Netflix

She too talked about InfoSec burnout.

This is a job where you never win.

These are the contributing factors:
Constant firefighting: She referred to Log4J.
Security cynicism
Culture of catastrophizing
Possible vs. probable
Personal responsibility
Ridiculous and impossible
Ongoing conflicts with stakeholders
Changing threat landscape
We're never done
There are never enough things in the wins column: Only one thing needs to go wrong for bad things to happen.
That's a lot!

She talked about organizational culture.

We need to disrupt security cynicism.

We need to discourage heroics and instead celebrate long-term wins. Proactive investments are better.

Culture takes intentionality.

Build "additive" teams--where each new person adds something unique to the team.

At one point, all the members of her team were AppSec engineers. They've expanded.

Build an environment of empathy and collaboration.

Keep in mind business enablement and customer service.

Consider things from a risk perspective. Our job is to manage risk.

risk = likelihood * impact

Help other security engineers think about risk as well.

Don't forget about probability or likelihood. Don't overfocus on things that have extremely high impact but very little likelihood.
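Her point about likelihood can be shown with toy numbers (mine, not hers): scored this way, a probable-but-cheap event can outrank a scary-but-rare one.

```python
def risk(likelihood: float, impact: float) -> float:
    """risk = likelihood * impact, with likelihood in [0, 1] and
    impact in whatever unit you score with (here, dollars)."""
    return likelihood * impact

# Hypothetical scores: the mundane, likely event carries more risk
# than the catastrophic, very unlikely one.
likely_minor = risk(0.5, 10_000)        # 5,000
unlikely_major = risk(0.001, 1_000_000)  # ~1,000
```

The formula is only as good as the likelihood estimates, which is why she stresses being rigorous about risk outcomes.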

Understand your threat model and why security matters.

Be rigorous about risk outcomes.

Have a strategic program focus.

Consider strategic vs. operational investments.

Sometimes you have to make "strategic bets" where you choose from among a set of possibilities.

Consider leverage points and efficiency.

Minimize the impact to critical data assets.

Achieving overall security assurance requires a balance of proactive and reactive security controls.

Stakeholders and leadership have to achieve alignment. It's helpful to understand senior leadership's risk appetite.

Netflix open-sourced some library for quantifying risk.

You need to create shared guiding principles.

You need ongoing visibility and reasonable expectations.

You need to show up for the customers with reasonable expectations that are in line with your risk tolerance.
The CISO Panel Discussion
Tom Alcock, Partner and Founder, Code Red Partners (moderator)
Caleb Sima, Chief Security Officer, Robinhood
Fermin Serna, Chief Security Officer, Databricks
Jessica Ferguson, Chief Security Officer, Docusign
They started by talking about the key factors for building a security team from scratch.

Focus on assessments, strategy, organization, finding leaders who can help, what you can build with your ICs, and execution. What are your needs, and what are your tools?

"I wasn't a CSO. I was essentially a recruiter."

People and talent are #1.

Consider your AppSec to developer ratio.

The best hires are sometimes a surprise. It might just be someone who happened to be free that came in through someone's network.

You can also grab people internally and grow them. This was a big focus for Ferguson. Ferguson also said "You can teach security. You can't teach innate curiosity."

60-70% of sourcing is internal sourcing.

Sima said you need to convince the candidate that you're better than the other companies. Call them so that you can explain why you're better. Sima said that Robinhood sells to the candidate before interviewing them. They reverse the order. That hooks them. They do this even for more junior roles. 

[Sima was easy to like.]

Every company is a disaster behind the scenes.

Transparency is key.

"Sell the disaster." People want a challenge, and they want to have impact.

It's important to overcommunicate, especially now that people are remote. People feel disconnected.

There are over 100 people on Ferguson's team at Docusign.

Remote work is good for deep dive work, but decision making quickly suffers.

A good manager can hold a team together.

Be intentional about building diverse teams. It helps to start with diverse panels.

Ferguson loves growing non-security people and has been really successful with it.

Initially, start with people who are good generally and have a certain curiosity.

You need someone to tie all the things and people together.

As a manager, don't be a hero. Talk to people. Get advice.

Someone gave a plug for the CSO at Lyft.

The recession is an opportunity.

Don't try to make security the top priority at the company, but it should be in the top 3 or the top 5.

The recession offers an opportunity to hire people who are being laid off.

If you can't hire, focus on retaining your people.

Security is important, but running a business is more important.

If you want to go into security but feel you don't know enough, you have a skill set that can grow. Don't get hung up on what you don't know.

Someone brought up IR, forensics, managing an investigation, etc.

Serna said that soft skills are really important. Even how you write an email is really important. You can build bridges or burn bridges.

It's nice to have a little bit of passion for the field.

It's impressive to see how attackers work.

Serna said "Don't be a jerk. It doesn't cost you anything to be nice."
Rise of the Vermilion: Cross-Platform Cobalt Strike Beacon Targeting Linux and Windows
Avigayil Mechtinger (@AbbyMCH) and Ryan Robinson (@MhicRoibin), security researchers from Intezer

Cobalt Strike is "Software for Adversary Simulations and Red Team Operations". It's very popular.

It's a malware framework.

There are different components involved: a C&C server, a stager, a backdoor, a team server, a client.

It's hard to detect and easy to configure.

There are many possible payloads.

When it's detected, it's hard to attribute to a particular attacker.

It's meant for red teams, but adversaries use it too. Adversaries will often rely on a cracked version of it. It's even used by some nation states.

Geacon is a golang beacon for Linux.

Only 2% of desktop hosts use Linux, but 90% of hosts in the cloud use Linux.

There are several categories of malware on Linux: coin miners, botnets, ransomware, backdoors, etc.

Backdoors are often from nation states such as Russia, North Korea, and China, and they're targeted in nature.

They started talking about the rise of Vermilion.

They do malware analysis. The malware they were analyzing was 94% never-before-seen code and 3% code from Cobalt Strike. That's weird because this was Linux malware, but the Cobalt Strike malware hadn't been officially ported to Linux. There was network-related stuff in the code.

VirusTotal reported that none of the virus scanners were catching this malware.

The name of the binary was nowhere to be seen on Google.

They called the malware Vermilion.

It was an ELF file. There were strings in the code that would be used if the malware ran on Windows. That's pretty weird for an ELF file, which runs on Linux.

It made use of RSA for encryption.

The malware fingerprints the machine it's running on.

The code runs a C&C loop. They analyzed the commands.

There's a Windows version too. Apparently, the Windows version was known as of 2019, but here it is running on Linux.

They partnered with McAfee.

The malware was actively targeting high-profile companies.

There weren't many samples of victims.

There was a backdoor, written from scratch, which ran on Windows and Linux hosts. It was found in live attacks.

It was probably from a nation state.

When running on Linux, the malware flew under the radar.

It's a misconception that Linux people think they don't need antivirus software.

Mirai is one of the most popular botnets, and it's not recognized by VirusTotal.

As an industry, we should spend more time detecting Linux malware.

Vermilion Strike for Windows can be detected in memory, or you can detect the stager.

They predict that the prevalence of cross-platform malware will continue in the future.

Got popcorn? What’s on the Vuln Channel tonight?

Rob Jerdonek and Lily Chau from the trust engineering team at Roku

Apparently, trust team = security team

They wanted to build static code scanning tools that were as easy to use as watching a movie.

They mentioned CI/CD integration with Jenkins, k8s, bots that scanned things, a DB, a dashboard, an integration layer, and viewer tools.

They have a web-based UI for users to view actionable vulnerability data.

They integrated with Slack.

They called their work the "Trusty Code Scanning Framework" (TCSF).

It's written in Go, Python, and JavaScript, and it uses Docker.

They integrate with lots of existing code scanning tools such as Semgrep, OSS-Index, npm-audit, Bandit, tfsec, Trivy, Gitleaks, Retire.js, and dependency-check.

They use one container to discover which other scanners should run. The scanners run in parallel.

One of the scanners recommends that you use defusedxml for more secure XML parsing in Python.
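The reason scanners flag Python's stdlib XML parsing can be shown with the stdlib alone: expat expands internal DTD entities, which is the building block of the "billion laughs" memory-exhaustion attack. This is a small-scale sketch (100 expansions, not billions); defusedxml's parsers refuse documents with DTDs like this outright.

```python
import xml.etree.ElementTree as ET

# A tiny internal DTD: &lol3; expands to 100 copies of "lol".
# Scale the nesting up a few more levels and this becomes the
# classic "billion laughs" attack against naive XML parsers.
evil = """<?xml version="1.0"?>
<!DOCTYPE bomb [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
]>
<bomb>&lol3;</bomb>"""

root = ET.fromstring(evil)  # the stdlib happily expands the entities
print(len(root.text))       # 300 characters from a ~300-byte document
```

With `defusedxml.ElementTree.fromstring`, the same input raises an exception instead of expanding, which is why the scanner recommends it.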

They make use of ELK for the DB and dashboards.

They're working on building SBOMs.

They don't yet block merges.

Sadly, it's not yet open source.

They said they need to reduce the false positives.

Their main point is that it was really useful for them to build this tool to bring together multiple code scanning tools.

Hacker TikTok: Community, Creativity, and Controversy

This was a panel discussion about posting security-focused content on TikTok.
  • Kyle Tobener (moderator)
  • MakeItHackin
  • shenetworks
  • Kylie Robison
They showed example TikTok videos:

You can wrap something in aluminum foil in order to foil an RFID scanner.

If you use "www.nytimes.com." (i.e. add a period at the end), you can circumvent their paywall. They have since fixed that vuln. This was used as a response to the prompt, "Show me you're a hacker without telling me you're a hacker."
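Why the (since-fixed) trailing-dot trick could work: DNS treats "www.nytimes.com." as the fully-qualified form of the same name (the trailing dot just makes the root label explicit), so it resolves to the same servers, but code doing exact string matches on the hostname sees a different value. A minimal stdlib illustration:

```python
from urllib.parse import urlsplit

plain = urlsplit("https://www.nytimes.com/some-article")
dotted = urlsplit("https://www.nytimes.com./some-article")

# Same servers per DNS, but a different hostname string to any
# exact-match check (cookies, referrer logic, paywall JS).
print(plain.hostname)   # www.nytimes.com
print(dotted.hostname)  # www.nytimes.com.
```

Any paywall counter keyed on the exact hostname would therefore start fresh for the dotted variant.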

TikTok originally had a lot of dancing content.

Stuff on TikTok goes viral. It's easy to grow an audience.

Some guy had cancer, and [somehow, I don't remember] TikTok helped.

There's a great community on TikTok.

However, it's a grind to make content consistently.

You can record and edit in the app, but some people prefer to edit their content outside the app.

One of the panelists tried to produce two pieces of content a day. Another recommended making content whenever it makes you happy.

Keep in mind that 33% of your audience is under 19.

One of the memes was "stuff you know that feels illegal to know". There was lots of stuff from Defcon.

TikTok's recommendation algorithm is so good! It's really easy to rabbit hole.

TikTok definitely has a culture.

Multiple of the panelists said that TikTok has removed some of their videos showing exploits. That was really frustrating. One of them compared their work to lock picking--just because you learn the art of lockpicking doesn't mean you plan on doing illegal things. TikTok removed some videos that weren't even very concrete and specific.

It's already hard to show an exploit in 20 seconds. Having TikTok occasionally remove videos adds to the frustration.

NY Times eventually fixed the "www.nytimes.com." vuln, but TikTok actually took down the video for 6 days.

Some women don't want to get into tech because they have heard so many bad things.

In one quarter, TikTok removed 90 million videos. Half of those were by automated means. 5% were false positives. Moderation at scale is tough.

If you show a terminal, they're more likely to take down your video.

It's a problem that some people try to act like gatekeepers who act elitist toward people trying to get into the security industry. We need more people.

TikTok videos get so many comments, and they're so unmoderated.

TikTok is great for building a huge audience of people that you wouldn't be able to reach on other platforms.

The panelists enjoyed being creators. Their work on TikTok wasn't necessarily connected to their day jobs.

TikTok gives you so much exposure to stuff you've never seen before.

Your communication style is important. Tell a story.

It's super easy to get started. You can get started with just your phone and the built-in editor.

Someone in the audience brought up the "elephant in the room" that we're security people, and TikTok is partly owned by China.

It's true, but it's such a great platform.

One of the panelists did some investigation and found that someone had spent hundreds of thousands of dollars on anti-TikTok campaigns, so keep that in mind.

TikTok is actually very transparent. It's not available in China, and they don't have any servers in China.

One of the panelists said she was less concerned about China and more concerned about power plants.

She said that people have ulterior motives for hating on TikTok.

One of the panelists said that 70% of his viewers were male and 30% were female.

It's weird. TikTok knows that he's male, but when he signed up, he never told them his sex. How do they know?

It's a formidable platform. It's not going away.

The content is limited to 10 minutes. They weren't the first short-form video content platform.

One of the panelists said he limits his content to one minute. With good editing, you can cover a lot of content in one minute.

When people watch your content on TikTok, they're searching for specific content. So, it wouldn't make sense to put a tutorial there.

Some of the panelists take sponsorships. One of them used it to pay down her student loans.




