I’ll start by saying that locks are bad and should mostly be used by infrastructure code.
Operating-system support for locks mostly amounts to cleaning up a mutex that was left locked by a thread that terminated without unlocking it.
I would expect more support than that, for example detecting that two threads keep contending on the same lock object, as when walking a shared linked list. The operating system could then force those threads onto the same core, which would actually improve performance. Here is a simple scenario:
Item = list.GetNext();
if (MY_TEST_DATA == Item.Data) return (Item);
Using two threads can increase performance if list.GetNext() does heavy computation, but if it does not, most of the time is spent on mutual locking: the two threads keep blocking each other’s access to the list. Keeping both threads on the same core would reduce the lock overhead, since a failed lock acquisition will probably cause a context switch, which dramatically hurts performance. The best way to implement this particular function is with a single thread. The problem is that we can’t always predict performance on a target machine; an application that saturates the CPU today could be trivial for a CPU five years from now. The OS should be aware of that.
The best implementation would be for the OS to use its NUMA knowledge internally and make sure that two threads working heavily with the same RAM module are moved to the same core, or at least the same NUMA node. A simple scenario is scanning a list in memory, for example fetching a record from a database or sorting: most of the actual work is reading memory and writing back to it, and two cores only slow it down, because the RAM is slower than the CPU. A 3 GHz CPU can sit on a board with an 800 MHz memory module. If all cores work against the same RAM module, the CPU hits a memory bottleneck and the entire system takes a dramatic performance hit.