Computer Science: Arbitrarily Fine Locking

This is a relatively simple idea concerning mutex usage. I imagine someone else has probably thought of it before, but since I just thought of it, I figured I'd blog it. I have no clue why I was thinking about mutexes; I usually prefer shared-nothing approaches like Erlang. Note that I'm specifically not trying to comment on Python's GIL.

Imagine you have a bunch of resources (like objects or structs) that you wish to protect. One way to protect them is to use a single mutex. This is called coarse-grained locking. At the opposite end of the spectrum, you can create a new mutex for every resource. This is called fine-grained locking. However, what if you want something in the middle?

Having a single lock is unfortunate because it forces a lot of your code to be effectively single-threaded, even if you have multiple processors. However, creating a new mutex for every single resource might be overkill. (Personally, I'm unsure why that might be the case.)

Here's an approach to get arbitrarily fine locking. Create N mutexes (where N is tunable). Protect each resource using mutex number resource_id % N. The resource_id could be whatever, as long as it's unique. Perhaps it's the index of an array, or perhaps it's a pointer to the resource in memory.
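Here's a minimal sketch of the idea in Python. I'm assuming integer resource ids, and the names (LockPool, lock_for, increment) are just illustrative:

    import threading

    class LockPool:
        """N mutexes shared by an arbitrary number of resources."""

        def __init__(self, n):
            # N is tunable: n=1 is coarse-grained; a huge n approaches fine-grained.
            self.n = n
            self._locks = [threading.Lock() for _ in range(n)]

        def lock_for(self, resource_id):
            # Resource resource_id is always protected by mutex number resource_id % N.
            return self._locks[resource_id % self.n]

    pool = LockPool(n=8)
    counters = {i: 0 for i in range(100)}

    def increment(resource_id):
        with pool.lock_for(resource_id):
            counters[resource_id] += 1

Any two resources whose ids happen to collide mod N simply share a mutex, which costs some parallelism but never correctness.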

And now for something completely different! The best part of Lisp is that it has garbage collection. It recycles garbage so that you can grow new trees ;)

Comments

Unknown said…
You're assigning a resource to a lock by hashing, in the hope that you'll have a large pool of resources more or less randomly distributed across a smaller pool of locks. Clever, but I'm still not sure what problem this solves.

Don't forget: you've got to ensure consistent ordering of lock acquisition. Otherwise you're setting yourself up for deadlocks if you have tasks that require more than one resource at a time.

Also: acquiring and releasing locks is not free. This will place an upper bound on how many you'd want to use in your system.
jjinux said…
> Clever, but I'm still not sure what problem this solves.

I'm not either. I'm not sure why it even came to mind.

> Don't forget: you've got to ensure consistent ordering of lock acquisition. Otherwise you're setting yourself up for deadlocks if you have tasks that require more than one resource at a time.

Yep.

> Also: acquiring and releasing locks is not free. This will place an upper bound on how many you'd want to use in your system.

Yep.
Bill Mill said…
> The resource_id could be whatever, as long as it's unique

It needs to be unique and evenly distributed with respect to mod N, not just unique.
jjinux said…
> It needs to be unique and evenly distributed with respect to mod N, not just unique.

Agreed, although you can pull the same tricks that you pull with hashes. If you get too many collisions on any one mutex, you can increase N.
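For instance, if resource_id is a pointer, the addresses are typically 8- or 16-byte aligned, so taking them mod a power of two piles everything onto a few mutexes; shifting or hashing first spreads them out. A quick illustration with synthetic addresses (not from the original post):

    from collections import Counter

    N = 16
    # Synthetic 16-byte-aligned "addresses" standing in for pointer resource_ids.
    addresses = [0x1000 + 16 * i for i in range(1000)]

    print(Counter(a % N for a in addresses))         # every id maps to mutex 0
    print(Counter((a >> 4) % N for a in addresses))  # shifting out the alignment bits spreads them evenly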
Anonymous said…
It is not a bad idea, but it is not new either. We have been doing this a lot when we implement hash tables for dynamic hashing (linked lists growing out of buckets). You have a lock per bucket, so that insert and delete operations can work atomically. But if you have many buckets and many cores/threads, many of those operations can take place at the same time, just not on the same bucket.

So, yeah, it works and has the desired effect, but it has been done many times before.
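A rough Python sketch of that per-bucket locking, for anyone who hasn't seen it (the StripedDict name and its methods are illustrative, not taken from any real implementation):

    import threading

    class StripedDict:
        """A hash table with one lock per bucket, so operations on different
        buckets can proceed in parallel."""

        def __init__(self, n_buckets=64):
            self._buckets = [{} for _ in range(n_buckets)]
            self._locks = [threading.Lock() for _ in range(n_buckets)]

        def _index(self, key):
            return hash(key) % len(self._buckets)

        def insert(self, key, value):
            i = self._index(key)
            with self._locks[i]:          # only this bucket is serialized
                self._buckets[i][key] = value

        def delete(self, key):
            i = self._index(key)
            with self._locks[i]:
                self._buckets[i].pop(key, None)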
Anonymous said…
This is used in the FreeBSD kernel in some places where a mutex is needed but either allocating one statically or at runtime would have too much time or space overhead.

We keep a "pool" of mutexes that can be used by anyone and are hashed based on a resource address. To get around the deadlock issue pool mutexes must be leaf mutexes, i.e. you are not allowed to acquire other locks while holding them.
jjinux said…
> It is not a bad idea, but it is not new either.

> This is used in the FreeBSD kernel in some places

Excellent comments. Thanks!

Ideas are rarely new, so I'm always glad to hear when I had a good, existing idea rather than a bad, existing idea ;)
