To paraphrase Alan Perlis: "programmers know the value of everything and the cost of nothing". This observation only seems to become more true with each passing decade. One thing people in modern IT reliably fail to account for is that when you scale up the value of software by installing it on a lot of machines, you are also scaling up all its costs by the same factor.
For example, many developers design kludgy UIs and expect their users to "read the fucking manual". Let's conservatively assume that reading the manual and testing that newly gained knowledge takes 1 hour per user. With only 2000 users (a mid-sized company), that's 2000 wasted human-hours, roughly enough to employ a person full-time for a year. With 20K users we would be wasting 10 employee-years. With two million users we would be burning enough human-hours to run a small firm for a decade. It's blatantly obvious that at any reasonable scale the RTFM mentality is economically insane.
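To make the arithmetic explicit, here is a quick back-of-envelope sketch in Python, using the same rough assumptions as above (1 hour per user, roughly 2000 working hours in a year):

```python
# Back-of-envelope numbers from the paragraph above, assuming 1 hour of
# manual-reading per user and a ~2000-hour working year.
HOURS_PER_USER = 1
WORK_HOURS_PER_YEAR = 2000

def wasted_person_years(users: int) -> float:
    """User-hours burned on the manual, expressed in person-years."""
    return users * HOURS_PER_USER / WORK_HOURS_PER_YEAR

for users in (2_000, 20_000, 2_000_000):
    print(f"{users:>9,} users -> {wasted_person_years(users):,.0f} person-years")
```

The numbers are crude, but the linear scaling is the point: every design shortcut gets multiplied by the install count.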
This gets even worse when you honestly look at the cost-benefit ratios of obscure features in massively popular libraries or frameworks. As another example, the Log4J JNDI lookup is likely called by a sub-percent fraction of the apps that use the library, and yet an exploit based on this one feature (Log4Shell) put every user of Log4J in danger of being hacked. Sure, it's hard to precisely predict the risk of adding a feature like this. However, when the risk is multiplied by a huge number of total users, while the benefit is multiplied by a nearly-zero number of feature users, the decision should be pretty obvious. Except, of course, developers just ignore the entire "risk" part of the equation and add the feature anyway.
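To put the asymmetry in concrete (if entirely made-up) numbers, here is a toy expected-value comparison; every figure below is a hypothetical placeholder, chosen only to illustrate the risk-times-everyone versus benefit-times-almost-no-one structure, not measured Log4J data:

```python
# Toy expected-value comparison for an obscure feature in a popular library.
# Every number below is a hypothetical placeholder, not measured Log4J data.
total_installs = 10_000_000          # everyone who ships the library
feature_adoption = 0.005             # the "sub-percent" fraction that calls the feature
p_serious_exploit = 0.01             # chance the feature enables an exploit
cost_per_exposed_install = 10_000.0  # expected damage per install if it does
benefit_per_feature_user = 500.0     # value the feature delivers to each actual user

expected_risk = p_serious_exploit * cost_per_exposed_install * total_installs
expected_benefit = benefit_per_feature_user * feature_adoption * total_installs

print(f"expected risk:    ${expected_risk:,.0f}")
print(f"expected benefit: ${expected_benefit:,.0f}")
print("worth shipping" if expected_benefit > expected_risk else "leave it out")
```

You can argue endlessly about the individual estimates, but as long as the risk term multiplies the whole user base and the benefit term multiplies a tiny fraction of it, the inequality rarely flips.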