Windows and macOS are effectively traditional Personal CRMs, and they need handholding from humans. The nature of the human-in-the-loop experience is about to change dramatically, and context menus may end up looking very different.
Every context menu is a laundry pile of options the application developer decided on the user's behalf, to the best of their knowledge, for an indefinite lifetime. It's probably possible to predict what the user wants to do within their next three key presses, based on the user's context in those few moments.
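A minimal sketch of the kind of interface such a predictor could expose, assuming nothing about how the prediction itself is made. All of the names here (`UserContext`, `NextActionPredictor`) are hypothetical, not an existing API.

```typescript
// Hypothetical shape of the context available in "those few moments".
interface UserContext {
  focusedApp: string;                               // e.g. "Finder", "VS Code"
  selectionKind: "text" | "file" | "image" | "none"; // what the cursor is on
  recentActions: string[];                           // action ids, most recent last
  timestamp: number;
}

// Hypothetical predictor: rank menu actions by how likely the user
// is to invoke them within the next few key presses.
interface NextActionPredictor {
  predict(context: UserContext, limit: number): string[];
}
```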
I think it's worth noting that humans like to operate by mapping meaning onto positions in the world: they look for the same meaning at the same place on the screen. For that reason, I suppose a full search will always be desirable at some point. By predicting the next menu items, we invalidate the spatial mapping stored in that moment's memory, and that adds cognitive load.
The top three items on a context menu are reachable within two key presses. Depending on your cursor position and the on-hover active state, you can reach three menu items with two key presses and a slight mouse movement. Most items are reachable within three key presses, and some need four to six because they are suffocating inside nested menus.
To increase the number of context menu items reachable within three key presses, add two rows of horizontally aligned menu items to the top of the context menu. Each item is assigned a number key, 1, 2, 3, 4 or 7, 8, 9, 0, or some other arrangement depending on the keyboard layout. These numbers are shortcuts: the first row holds the predicted next moves, and the bottom row is user-determined.
Each item signals what it refers to through its color, icon, and name, which reinforce near-term memory. With this, the actions a user needs to execute within the next 5-10 minute burst sit two key presses away.
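A sketch of how these two rows could be modeled, assuming a simple TypeScript data structure; the field names and the number-key set are my own choices, not part of the original design.

```typescript
// One shortcut item in either horizontal row.
interface ShortcutItem {
  actionId: string;   // e.g. "copy-path", "rename" (illustrative ids)
  label: string;      // name shown to reinforce near-term memory
  icon: string;       // icon name, a second recognition cue
  color: string;      // color cue tied to the action
  key: "1" | "2" | "3" | "4" | "7" | "8" | "9" | "0"; // the number shortcut
}

// The whole context menu: two shortcut rows above the usual vertical list.
interface ContextMenuModel {
  predictedRow: ShortcutItem[]; // first row: predicted next moves
  pinnedRow: ShortcutItem[];    // second row: user-determined
  standardItems: string[];      // the familiar developer-decided list below
}
```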
The first row of predicted menu items aims to reduce the total key presses over the next five minutes, and this approach strongly favors near-term repetitive tasks. Some repetition patterns are slightly longer-horizon, for example moving the active state from one screen to another based on sensors capturing head rotation while you read and model; the predicted items won't be able to cover those.
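One plausible way to fill the predicted row for near-term repetitive tasks is to score actions by how often they were used recently, weighted toward the newest uses. This is a sketch under my own assumptions; the five-minute window and the linear decay are arbitrary, and nothing here handles the longer-horizon patterns mentioned above.

```typescript
// Rank actions used in the last five minutes, newest uses weighted highest.
function predictNearTerm(
  recent: { actionId: string; at: number }[], // action uses, timestamps in ms
  now: number,
  limit = 4                                   // four slots in the first row
): string[] {
  const windowMs = 5 * 60 * 1000;
  const scores = new Map<string, number>();
  for (const { actionId, at } of recent) {
    const age = now - at;
    if (age > windowMs) continue;             // outside the near-term window
    const weight = 1 - age / windowMs;        // newer uses count more
    scores.set(actionId, (scores.get(actionId) ?? 0) + weight);
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([actionId]) => actionId);
}
```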
The second row aims to reduce the total key presses contributed by patterns over a very long term. These are left for the user to decide, based on their own sense of what works in their mind.
There could be an optional third row of menu items if we want to let users craft their own first-row items, optimizing for 100% accuracy; a single mistake in that micro-moment matters.
We could perhaps achieve that by letting people create shortcuts within two key presses, using the other hand. That would require people to pay attention with both hands, so this feature is by design not universally accessible; it trades accessibility for efficiency.
The optional third row lets users assign new shortcuts to their previously performed actions. Press the key once to activate it, then again to set the shortcut. Obviously, what exactly counts as a user action, as opposed to a system action, is undefined here, and it's probably not cleanly solvable. Showing the previous couple of actions and letting the user pick one sidesteps that problem.
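A sketch of that flow under the same assumptions: the key press arms the binding, and the user picks one of the last few recorded actions. The class and method names are hypothetical, and the question of which events count as "user actions" is left open here too.

```typescript
// Binds a number key to one of the user's most recent actions.
class ShortcutRecorder {
  private recentUserActions: string[] = [];      // most recent last, max 3 kept
  private bindings = new Map<string, string>();  // key -> actionId

  // Called whenever something we treat as a "user action" happens.
  recordAction(actionId: string) {
    this.recentUserActions.push(actionId);
    if (this.recentUserActions.length > 3) this.recentUserActions.shift();
  }

  // First press arms the key; choosing an index from the shown
  // recent actions completes the binding.
  bind(key: string, pickIndex: number) {
    const actionId = this.recentUserActions[pickIndex];
    if (actionId !== undefined) this.bindings.set(key, actionId);
  }

  // Later presses of the key resolve to the bound action, if any.
  resolve(key: string): string | undefined {
    return this.bindings.get(key);
  }
}
```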