The difficulty of commanding a machine which exists solely to relieve a human of his burdens is an obscenity. Unfortunately, the general trend of this dictation has been precisely opposed to what it should be. A machine should be easily auditable, its programs clear and concise, and the mechanisms it provides to programs should be made easy to use, or wholly transparent wherever this is possible.
It should be recognized that a machine which gives incorrect results is worse than useless.
The ``neural networks'' are exemplary of this opposed approach. They're unreasonable to examine by most means, their decision-making is wholly opaque, and they unpredictably give incorrect results. The most disconcerting aspect of these mathematical machinations is that they appear to work well enough to be used in places they've no business being used. In systems which make so much so complicated, they may seem like an ideal solution; rather than provide true customization mechanisms, corporate drones can build a ``neural network'' intended to ``learn'' the preferences of the user, and arrive at that same result, without any need for a proper system underneath. Many people have criticized how a complicated system begets more complications, such as layers of caching, but these networks are a great leap in this regard, being opaque and incomprehensible systems built in the face of such incomprehension.
I've never written a ``graphical'' program. I expect the base level of a system to be pleasant, and sufficient, to fulfill its purpose. What's instead common is layering large interfaces atop large interfaces until some insufficient base is somehow reached. The ``compatibility hacks'' fester at every level of these. The psychological effect of regarding one's work as fulfilling doesn't properly account for work which is hard to notice as unnecessary. That people can look upon this, and not revile it, lends some sense to the voice of misanthropy. It would be acceptable for a system to be hard to program if that at least bought something, such as reliability, but it's drudgery all for no purpose.
I've had conversations with others who seriously questioned why anyone would yearn to write programs which handle all failure cases correctly. Such cultists can sometimes be excused by mere ignorance, but others are proud of it, not understanding for what purpose the machines exist. The saving grace of such a mess is the history, culture, and organizations which existed before such cults arose; the cult is naturally constrained by what people will accept, although subversion over a long enough period tends to remove the natural disgust towards ritual mutilation, sexual taboo, or incorrect program results.
In sum, complicated systems have begotten systems even more complicated. The difficulty of programming the machine has led to no reduction in the complications, but to a futile attempt to avoid the need to program them at all. Rather than model, understand, and know, systems are designed which ``learn'', pretend, and merely seem to know. Human language in particular is a common target of this nonsense, and I so want one day to display the superiority of mine Elision model over such complicated messes.
The proper approach is to recognize that what is hard shouldn't remain so.