Here are my notes from yesterday's SF Python Meetup:
It creates a pretty version of a curl command and its output, which you can embed in your site.
shlex is a Python module for doing simple lexical analysis of shell-style syntax.
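For instance, shlex can tokenize a curl command line while respecting shell quoting (the command below is just an illustration):

```python
import shlex

# Split a shell command into tokens, honoring quoting rules,
# so the quoted header value stays a single token.
cmd = 'curl -H "Accept: application/json" https://example.com'
tokens = shlex.split(cmd)
print(tokens)
# ['curl', '-H', 'Accept: application/json', 'https://example.com']
```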
1-Click Deployment with Launch and Docker
Nate Aune @natea
There are 10 million repos on GitHub. The curve is exponential.
It launches a Docker container.
It makes it easy to deploy certain types of apps.
You can embed a widget on your app that says "Launch demo site".
He talked about Containers vs. VMs.
Containers share the OS, so they launch very quickly.
You can create new containers, and each container is just a diff of another container, so it uses very little space.
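The layering maps directly onto Dockerfile instructions: each instruction produces a new layer that is just a diff on top of the previous one. A minimal, hypothetical example:

```dockerfile
# Base layer, shared by every image built on top of it
FROM ubuntu:14.04
# Each instruction below adds a new layer containing only the changed files
RUN apt-get update
COPY app.py /srv/app.py
CMD ["python", "/srv/app.py"]
```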
Yelp: Building a Python Service Stack
Julian Krause, John Billings
They're moving toward a Service Oriented Architecture.
There are over 100 engineers at Yelp.
They have about 180k lines of code in a Python webapp called yelp-main.
This has increased the time it takes for engineers to come up to speed and for the team to release new features.
They're splitting their large codebase into a lot of little Python codebases that speak HTTP/REST.
Example: metrics = json.loads(urllib2.urlopen(url).read())
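That one-liner uses Python 2's urllib2; a hedged Python 3 equivalent might look like this (the function name and URL are placeholders, not anything from the talk):

```python
import json
from urllib.request import urlopen

def fetch_metrics(url):
    """Fetch and decode a JSON payload from a service endpoint
    (Python 3 version of the urllib2 one-liner above)."""
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))
```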
They were using Tornado 1.
"Global dependencies considered harmful."
They couldn't upgrade to Tornado 2 because there were too many dependencies on Tornado 1.
They're using virtualenv now.
He thinks that virtualenv's bin/activate is doing it wrong. It should work slightly differently.
I mentioned that one of the problems he was trying to solve could be solved by PEX.
Future directions for isolation: Docker.
They're using pip.
wheel is a built-package format for Python.
pip install -r requirements.txt
Always use specific versions in your requirements.txt. Use ==.
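A pinned requirements.txt looks like this (package names and version numbers here are illustrative, not Yelp's actual dependencies):

```text
# requirements.txt -- pin exact versions with ==
Pyramid==1.5
uWSGI==2.0.3
requests==2.2.1
```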
Originally, they were using git submodules. They're not great.
They have separate repos for everything, and they release libraries for everything. They have a tool that monitors git tags.
They use Jenkins.
They use pypiserver.
They're switching from Tornado to Pyramid. It's been a successful migration.
They ran into issues with Tornado, including difficulties with testing.
Application servers: gunicorn, mod_wsgi, Circus, Pylons/waitress, and uWSGI
They evaluated all of them and picked uWSGI, and it's working well: it's stable, it's fast, a lot of it is written in C, and it has good documentation.
They can integrate the logging with Scribe, and the community is good.
They have proper rolling restarts for their Java apps, and uWSGI has hot reloading.
Metrics, metrics, metrics!
What is the 99th percentile time for this endpoint?
Are all service instances slow, or is it just one?
How many QPS is this endpoint handling?
Which downstream service is killing our performance?
Are any clients still using the old API?
Did the new service version introduce a performance regression?
They use a Metrics package for their Java code.
They wrote a package for Python called uwsgi_metrics. It's not open source yet, but they'll open source it shortly.
Example: with uwsgi_metrics.timing('foo'): ...
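Since uwsgi_metrics isn't open source yet, here's a minimal sketch of what a timing context manager like that can look like, built from the stdlib; all names are assumptions, not the actual uwsgi_metrics API:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# In-process store of recorded durations, keyed by metric name
# (a stand-in for whatever uwsgi_metrics does internally).
timings = defaultdict(list)

@contextmanager
def timing(name):
    """Record the wall-clock duration of the enclosed block."""
    start = time.time()
    try:
        yield
    finally:
        timings[name].append(time.time() - start)

with timing('foo'):
    sum(range(1000))  # the work being measured
```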
They have a JSON endpoint on all their services that exposes metrics.
uWSGI uses a prefork worker model.
uWSGI has mule processes: processes that don't handle interactive traffic, but that you can use as worker processes.
They measured about 50µs of overhead for recording a metric, so you don't want to do this for too many metrics per request: 10s of metrics is okay, but 1000s isn't (1,000 metrics × 50µs = 50ms).
airbnb/nerve is a service registration daemon that performs health checks.
airbnb/synapse is a transparent service discovery framework for connecting an SOA.
He showed service registration using Nerve. It sends stuff to ZooKeeper.
ZooKeeper is a highly available key-value store with nice consistency guarantees.
They use HAProxy.
ZooKeeper -> Synapse -> HAProxy
Client -> HAProxy -> Service hosts
They have an operations dashboard.
There's no static configuration. If a service is running, it appears in the dashboard.
They're not using Smart Stack in production yet.
They have a service called Service Docs. When you build a service, all the docs get put on this website, keyed by service name.
People almost always end up writing client libraries for services. If you don't write one up front, you'll end up writing one implicitly anyway.
Nerve should run on the same machines as the services.
They use memcache within services. They don't yet put caches in front of services.
They're thinking about putting Nginx in front of their services to add a little HTTP caching.
They're still investigating security between services.
WTF is PEX
This is the first time Brian has really formally announced PEX.
This is a shortened version of an internal talk he gave, so I'm not going to take notes for everything he said.
You can create a __main__.py, and then run "python .", and it'll work.
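This works because the interpreter will execute any directory (or zip file) that has a __main__.py at its top level. A quick way to see it, using a throwaway temp directory:

```python
import os
import subprocess
import sys
import tempfile

# Build a throwaway directory with a top-level __main__.py,
# then run the *directory itself* with the interpreter.
d = tempfile.mkdtemp()
with open(os.path.join(d, "__main__.py"), "w") as f:
    f.write('print("hello from __main__.py")\n')

out = subprocess.check_output([sys.executable, d]).decode().strip()
print(out)  # hello from __main__.py
```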
pip search twitter.common
pip install twitter.common.python
This gives you the pex command.
A .pex is a ZIP file containing Python code that's executable. It's used to "compile" your Python projects down to a single file.
You can also use it to create a file that acts like a Python interpreter with all the requirements bundled ahead of time into it.
pex -r flask -p flask.pex
You can use PEX to easily create self-contained Python applications.
Twitter uses pants. It's a build tool.
Aurora is a service scheduler built on top of Mesos.
Download Aurora to see an example of something that uses pants.
Pants builds modularity into your monorepo.
pants is like blaze at Google.
Pants is multi-interpreter and multi-platform. The pex files work on multiple platforms.
Aurora is half Java and half Python.
A .pex file is similar to a Java .war file.