As this writing is a collection of my ideas that have not been written down before now, this page will be gradually updated and refined whenever I recall more of my ideas concerning the topic and feel the want to record them. Currently, only some of those I've focused on the most in my developments are present.

As this is the first entry of what is currently titled ``Insidious Optimizations'', along with being a large reason for the existence of this website, I find it reasonable to explain the title.

An insidious optimization is an optimization that has ceased to be viewed as such. It's not inherent to its topic, nor was it necessarily present in the initial conception. An insidious optimization is insidious largely for the effect it has on those not present before its introduction: it creates a mode of thinking in which one largely can't, or simply doesn't, consider the topic without the optimization present; it stunts the mind and restricts musings in its region.

I see many insidious optimizations in the young field of automated computing and this entry details those related to machine architectures. I don't currently discuss wildly different approaches to automated computing and instead focus on a ``normal'' machine for now.

A primary optimization that exists largely without question is that of the register. Registers cause many problems related to allocation optimization, instructions with implicit register arguments, the enlarging of an instruction set, and the limited nature of their storage.

In order: allocation is a complex problem when writing a compiler for a register machine, although this varies wildly with the language, and it's exacerbated by the need to use certain registers for certain purposes on certain architectures. A memory-to-memory machine needs but a single instruction for moving data, whereas other machines require instruction encodings for moving data between registers, loading, storing, and so on. Registers also cause issues when high-speed memory is wanted yet register space is insufficient for storing the information; being unable to store, say, a small array in register memory easily or at all is an example of this, relating to the next main issue.

Having no registers simplifies a system a great deal, as it removes many classes of instructions, such as the shifting instructions mentioned later; it also gives a cleaner conceptual approach to arbitrary-length operations on data, rather than operations bounded by the storage capacity of a register. A processor that lacks registers should generally reveal all state through a memory interface. As an example, the program counter in my envisioned architectures is at location zero, with the previous program counter at the following location; this eliminates the need for jump instructions, as these simply become moves to location zero.
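As a toy illustration, here is a minimal sketch in Python; the three-cell MOVE encoding, the choice of location eight for the program, and every other detail are hypothetical, invented only to show how a jump reduces to an ordinary move into location zero:

```python
# Toy memory-to-memory machine: all state lives in the one address space.
# Location 0 holds the program counter; location 1, the previous one.
# Hypothetical encoding: each instruction occupies three cells,
# (opcode, src, dst), and the only opcode is MOVE.

PC, PREV_PC = 0, 1

def step(mem):
    pc = mem[PC]
    src, dst = mem[pc + 1], mem[pc + 2]
    mem[PREV_PC] = pc
    mem[PC] = pc + 3          # advance past the instruction first...
    mem[dst] = mem[src]       # ...so a move into location 0 acts as a jump

mem = [0] * 32
mem[PC] = 8                   # program begins at location 8
mem[20] = 99                  # some data
mem[21] = 8                   # a jump target, itself stored in memory
mem[8:14] = [0, 20, 22,       # MOVE 20 -> 22: an ordinary data move
             0, 21, 0]        # MOVE 21 -> 0:  a "jump" back to location 8

step(mem)
print(mem[22])                # 99
step(mem)
print(mem[PC])                # 8: no jump instruction was needed
```

Note the ordering within a step: the program counter is advanced before the move is performed, so that a move targeting location zero overwrites the advanced value and so redirects the machine.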

The issue of wanting high-speed memory, while I consider it unnecessary, can be addressed by having memory of varying speeds with some programmer control. There are a number of architectures that don't offer suitable control over what is called their cache, which is its own issue. Preferably, the program could instruct the machine to transparently move oft-used data into a high-speed buffer of decent length, and then assume the relevant locations to be fast. This would help to address the argument that registers provide ``hints'' as to which data a program uses most frequently. In conjunction with the second insidious point, mentioned later, this would allow the programmer to usefully store many types of data, rather than work around register disparity and sizes and whatnot.
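A sketch of the shape such programmer control might take, in contrast to an opaque cache; the split address space, the staging operation, and the access counters are all hypothetical, chosen only for illustration:

```python
# Sketch of programmer-controlled fast memory. Addresses below FAST_TOP
# are "high-speed"; there is one address space, and the program, not the
# hardware, decides what lives in the fast region.

FAST_TOP = 64

class Memory:
    def __init__(self, size):
        self.cells = [0] * size
        self.fast_reads = 0
        self.slow_reads = 0

    def read(self, addr):
        if addr < FAST_TOP:
            self.fast_reads += 1
        else:
            self.slow_reads += 1
        return self.cells[addr]

    def write(self, addr, value):
        self.cells[addr] = value

    def stage(self, src, dst, n):
        # The "hint": copy an oft-used block into the fast region once.
        for i in range(n):
            self.write(dst + i, self.read(src + i))

mem = Memory(1024)
for i in range(8):
    mem.write(512 + i, i * i)        # a table in slow memory
mem.stage(512, 16, 8)                # stage it into the fast region
total = sum(mem.read(16 + i) for i in range(8))   # the hot loop hits fast memory
print(total, mem.fast_reads, mem.slow_reads)      # 140 8 8
```

Unlike a hardware cache, nothing here is speculative: the staged block stays where the program put it, for as long as the program wants, which is precisely the control complained of as missing.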

A secondary insidious optimization is the bunching up of memory into bytes, now commonly eight bits large. This saves three bits of an address, which can be considered as extended by three zero bits from the right, with memory being collected into such chunks. It could be argued that this saves memory, yet it also complicates compacting data, in a way that more than eliminates the meager savings and results in a net loss. Consider attempting to store a single bit in such a system: seven bits are wasted, while three are saved, in the basic case. Then consider arrays of objects that must be rounded up to align. It would be possible to efficiently store, say, an array of twenty three-bit units without wasting one, but it would be unnecessarily complex, and the cost of the extra code would more than likely outweigh the savings.
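The arithmetic of the twenty-unit example can be checked directly; the packing code at the end is a taste of the extra complexity the byte-addressed case demands:

```python
# Waste from byte addressing, for an array of twenty 3-bit units.
units, width = 20, 3

byte_addressed = units * 8           # one unit per byte: 160 bits
bit_addressed = units * width        # packed end to end:  60 bits
print(byte_addressed, bit_addressed) # 160 60

# Packing into bytes is possible, but needs code like this, plus
# the matching shifts and masks at every access:
values = [v % 8 for v in range(units)]   # arbitrary sample values, 0..7
packed = 0
for i, v in enumerate(values):
    packed |= v << (i * width)
print((packed >> (13 * width)) & 0b111)  # 5: extracting unit 13
```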

Bit-addressable memory eliminates this complexity and results in a net saving of space. Tying in with the earlier point concerning registers, it also eliminates the shifting class of instructions entirely, as a shift is simply a move or, even simpler, a change of a pointer, with the data having some space surrounding it. There's not even an issue in using modern memory, as the controlling device could be made to perform the necessary conversions transparently.
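A toy model makes the point concrete; the LSB-first layout here is an arbitrary choice for the sketch, not a claim about any particular machine:

```python
# Bit-addressable memory modelled as a list of bits, LSB-first.
bits = [0] * 64

def store(addr, value, width):
    for i in range(width):
        bits[addr + i] = (value >> i) & 1

def load(addr, width):
    return sum(bits[addr + i] << i for i in range(width))

store(10, 0b10110100, 8)
x = load(10, 8)
# A "right shift by two" is just reading at a bit address two higher;
# the zeroed space surrounding the field supplies the vacated bits.
shifted = load(12, 8)
print(x, shifted, x >> 2)   # 180 45 45
```

No shift instruction, and indeed no copy at all, was involved: only the pointer changed.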

With all state exposed and all memory addressable by individual bits, the question of a fixed-length unit to depend upon comes into play. It's proposed that the length of a memory address become this unit, such as for the program counter locations and whatnot. This cleanly and obviously solves the issue of manipulating certain limited data, as the machine's memory is limited, regardless.

It can be argued that such a machine loses memory by necessitating a new way to specify operands and whatnot. There are many ways to address this issue, which will be mused upon and written about later.

With this explained, it is my opinion that a memory-to-memory machine with bit-addressable memory and all state exposed through such, with other currently unmentioned properties, is among the simpler and more elegant architectures. It eliminates many instructions entirely with no loss of function, makes memory efficiency easier to achieve, and makes efficiently programming the machine easier in many ways.

None of this is to say that a register machine, stack machine, mill machine, et al. are bad in any way. Every machine has quirks and I like these quirks very much; I simply found it important to cite these aspects as optimizations and provide context for my efforts.