I'm reading the Engine Yard Flex documentation. It's pretty interesting. Here's a snippet:
Each Application or Application Master server is set up to run haproxy on port 80 and then nginx or apache on port 81. Each App server has its own haproxy configured to balance load across all the other App servers, so any one App server could become master at any point if the current master fails for any reason. We have an 'ey-monitor' daemon that runs on all the application slave servers and periodically health-checks the current Application Master server to see whether it is still running properly. If the App Master fails for any reason, the App slaves will try to take over as master using a technique called STONITH (shoot the other node in the head). This means that once the master fails, the slaves will wait for a few bad health checks, and then they will all race to grab a distributed lock. Whichever slave gets the lock will steal the IP address of the failing master server, then notify our control tier, which in turn will violently terminate the failed app master. The system will then boot a new server to replace the failed node, attaching the same volumes the old master had so it has the full current state of the world.

Boy, I'm glad I don't have to set all that stuff up myself!
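The failover flow in that quote (repeated health checks, then a race for a distributed lock, then IP takeover and fencing) can be sketched in a few dozen lines. This is a hypothetical illustration, not Engine Yard's actual ey-monitor code: the class names, the failure threshold, and the use of a plain in-process mutex to stand in for a real distributed lock are all my assumptions.

```python
import threading

# Minimal sketch of the described failover protocol, assuming:
# three consecutive bad health checks trigger a takeover attempt,
# and a local mutex stands in for the real distributed lock.
FAILURE_THRESHOLD = 3

class LockService:
    """Stand-in for a distributed lock; only one racing slave can win."""
    def __init__(self):
        self._lock = threading.Lock()

    def try_acquire(self):
        # Non-blocking acquire: losers simply back off.
        return self._lock.acquire(blocking=False)

class Slave:
    def __init__(self, name, lock_service, master_healthy):
        self.name = name
        self.lock_service = lock_service
        self.master_healthy = master_healthy  # callable health probe
        self.failures = 0
        self.is_master = False

    def health_check_tick(self):
        """One iteration of an ey-monitor-style check loop."""
        if self.master_healthy():
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= FAILURE_THRESHOLD:
            self.attempt_takeover()

    def attempt_takeover(self):
        # The race: whichever slave grabs the lock becomes master.
        if self.lock_service.try_acquire():
            self.steal_master_ip()
            self.notify_control_tier()  # control tier then terminates (STONITHs) the old master
            self.is_master = True

    def steal_master_ip(self):
        print(f"{self.name}: taking over the failed master's IP address")

    def notify_control_tier(self):
        print(f"{self.name}: asking control tier to terminate the old master")

# Simulate a dead master: every health check fails for every slave.
lock = LockService()
slaves = [Slave(f"slave-{i}", lock, master_healthy=lambda: False) for i in range(3)]
for _ in range(FAILURE_THRESHOLD):
    for s in slaves:
        s.health_check_tick()

masters = [s.name for s in slaves if s.is_master]
print("new master:", masters)  # exactly one slave wins the lock race
```

The key property the lock provides is that even when all slaves detect the failure simultaneously, exactly one of them performs the IP takeover; the rest lose the race and do nothing.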
This all happens transparently to you as a user and requires no input. The system does its best to keep itself running at the capacity you have specified. There can be a brief downtime while a slave takes over for a master, but the takeover generally completes in 60 seconds or less.