3 – Cybernetic-utopianism and the tale of two laws: Moore’s Law and Amdahl’s Law

[Image: circle with a hole in the middle]

So we finally reach Moore’s Law itself. It may seem strange that we have gone through two complete segments dealing with Moore’s Law only in an expository manner-like Lincoln’s Gettysburg Address, simply invoking its name is sufficient to convey gravitas as to the mythical origin of the Union; it is not necessary to actually know what is in the address or even where it was delivered. Similarly, we have used Moore’s Law as shorthand for the trajectory of the narrative of technological autonomy and ahistoricity, and its resulting meta-events of the so-called “Inevitabilists”, an easier-to-remember alias than “messianic disingenuous cybernetic utopians”. But now we must drop “Moore’s Law” the bon mot and deal with Moore’s Law, the actual assertion. First, a few “re-orientation” comments.

I have repeatedly more than hinted that the narrative mentioned above is a long-term cover story or “fog” for the real existential goal-a real-time, full-duplex neural hive with humans as nodes, whose state at any one moment is simulated by supercomputational resources-and if realized, the question of which is the “real” world and which is the doppelganger may be moot, since we (the victims) will have no way of knowing the difference. The narrative is thus similar to what Manifest Destiny was to the myth of the frontier. The analogy, I believe, is a surprisingly good one. One of the purposes of “Go West, young man” was to energize an eastern population which didn’t seem particularly disposed to the idea (just as most people in, say, 1995 weren’t all that excited about “ubiquitous” computing, and calls for sentient computer interfaces were not moistening anyone’s lips that I know of at the time). It took the inducement of free land, the idea that Americans were destined by heaven itself to possess the hinterland, and the physical expansion of capital resources (the railroads) to bring this about. All the elements were required for the brew. But what we need to remember is that the (existential) compulsion behind all this was actually the need for outlets for the colossal profits of the eastern industrialists.

If we compare this to the last 40 years of “computational expansion” we see something similar. The “existential compulsion” in this case was to respond to the decline in the marginal productivity of computing itself-something that has been lost in the constant drumbeat of techno-propaganda. If computing power was becoming only marginally “helpful”, then the need for more of it was questionable (and threatened the fortunes of those selling it, the amortization costs of capital already invested and, most importantly, the prime directive-instantiation of the hive grid). Over the last 20 years the putative orders-of-magnitude increases in computing power and resources have had virtually no effect on increasing productivity in any measurable sense across a surprising and varied number of problem domains. This can be seen in the recent response of local governments to the purported pandemic that is sweeping the nation without actually killing any more people than would have died anyway. Over the last 30 years the government of my home state, Maryland, has most probably spent billions on information technology and infrastructure. Yet this relatively tiny state has been overwhelmed by what, from even a cursory engineering analysis, would seem to be a less-than-catastrophic event. All the unemployment applications are made online, yet when the system is accessed by more than 1,000 people simultaneously, it collapses? The unemployment PBX, which for the money invested should be able to handle, queue and dispatch calls even if every citizen in the state were calling at the same time, flatlines at a mere 2,500 callers? But if this is so, what was all the “computing” advancement about? As a nation, we have spent mega billions on computing infrastructure-but our public services respond no better than those in Honduras or Bangladesh. They just respond poorly, faster. This is the decline of computational marginal productivity to zero. Giving poorly paid, unmotivated people computers does not make them more productive; it just makes them do the things poorly paid, unmotivated people do more efficiently.
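
To put some rough numbers on the PBX complaint: queuing and dispatching a few thousand simultaneous callers is century-old teletraffic arithmetic, not a research problem. A minimal sketch using the textbook Erlang-C model, with assumed figures of my own (the call volume, handling time and agent counts below are purely illustrative, not Maryland’s actual numbers):

```python
def erlang_c(agents: int, offered_load: float) -> float:
    """Erlang-C probability that an arriving call has to wait, assuming Poisson
    arrivals and exponential handling times (the textbook call-center model).
    Computed via the numerically stable Erlang-B recursion."""
    if offered_load >= agents:
        return 1.0  # overloaded: the queue grows without bound
    b = 1.0
    for n in range(1, agents + 1):  # Erlang-B blocking, built up iteratively
        b = offered_load * b / (n + offered_load * b)
    return agents * b / (agents - offered_load * (1.0 - b))

# Assumed sizing: 2,500 calls per hour, 10-minute average handling time
# -> an offered load of about 417 Erlangs.
load = 2500 * (10 / 60)
for agents in (420, 440, 470):
    print(agents, "agents -> probability a caller waits:", round(erlang_c(agents, load), 3))
```

The particular numbers don’t matter; the point is that sizing a call queue so it degrades gracefully rather than flatlining is routine engineering, solved long before anyone reading this was born.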

[Addendum: I make a later note here to qualify a statement I make above: “…we have spent mega billions on computing infrastructure-but our public services respond no better than those in Honduras or Bangladesh...”. In line with comments I make in other posts, all of this “investment” may have simply been a multi-layered con like defense procurement. All or most of the putative funds invested may have been simply stolen and pocketed by the techno cretins, and all of these “upgraded” systems are still being run on old PDP-10s. I once worked a contract, about fifteen years ago, for the company that handled the lion’s share of credit card “chargebacks”. The entire verification process (with unencrypted user data) was run from one desktop computer with 1GB of memory sitting under a desk in the corner, executing a program written in FoxPro 1.0 (circa 1990 or so). All the user data, passwords and keys of every cardholder ever processed (many hundreds of thousands) were kept on three external daisy-chained hard drives lying on the floor, which could have been disconnected and carried off by anyone, including the cleaning people. As I mentioned, this was about fifteen years ago, and it was shocking then. I would bet anything that program is still running under the same desk as we speak, with the money budgeted for “upgrades” idling in the parking lot in the director’s reserved space in the form of a new McLaren.]

We can view “Inevitabilism” as the “Go West” of our time-the engine to energize (coerce?) a population into making real a capital vision it would not have realized if left to its own devices (I have said this a number of different ways, but they all resolve to essentially this). By mirroring the “free” land of the westward drive with the “free” services of the Internet and the invisible chains of wireless connectivity (which disposed large segments of the population to behavior they would not otherwise have imagined), and replacing “Manifest Destiny” with cybernetic-utopianism, we can see the “hive” for what it actually is-the modern version of the establishment of the transcontinental rail transport system, which transformed the country into something unrecognizable from what it was. As will the hive. With a critical caveat. The westward expansion of the 19th century had many side effects. Some good (an eventual quantum leap in the standard of living for the vast majority of the population, if not immediately so for the vanguard) and some very, very bad (Native American extermination). But the hive seems to be all bad for the many and even augurs possible extermination (almost everybody this time, not just the Indians) at the hands of the few. It should also be noted that the “pioneers” were only necessary until the railroad grid was in place. Once it was complete, they were expendable (Dust/Rust Bowl). I suspect the “internet generation” is similarly already past its shelf life. I am not the first to make this analogy. Norbert Wiener himself, the much more humanistic founder of the formal field of cybernetics as opposed to its greasier and more unctuous progeny the cybernetic-utopians, said in The Human Use of Human Beings (the Reader’s Digest version of Cybernetics, for mathematical mediocrities such as myself):

Besides the comfortable passive belief in progress, which many Americans shared at the end of the nineteenth century, there is another one which seems to have a more masculine, vigorous connotation. To the average American, progress [Wiener roughly means what I refer to as Inevitabilism here-editor] means the winning of the West. It means the economic anarchy of the frontier, and the vigorous prose of Owen Wister and Theodore Roosevelt. Historically the frontier is, of course, a perfectly genuine phenomenon. For many years, the development of the United States took place against the background of the empty land that always lay further to the West. Nevertheless, many of those who have waxed poetic concerning this frontier have been praisers of the past. Already in 1890, the census takes cognizance of the end of the true frontier conditions. The geographical limits of the great backlog of unconsumed and unbespoken resources of the country had clearly been set.

It must be clearly seen that the establishment of the control grid and the counterattack against the potential slacking of hardware and software demand are two pincers of the same host. “Surveillance Capitalism” was in effect born out of the fact that investments in information technology were not realizing the expected benefits for investors-Google’s search engine profits were modest and uninspiring until it veered radically towards “guaranteed outcomes”, which existentially require the real-time hive grid, whose construction and metastasis require constant churn and reification. But the lack of “profits” should be seen as an epiphenomenon-the search engines’ lack of contributions to grid infrastructure may have been the real reason Google’s ersatz founders got kicked in the pants around 2002. That they weren’t making any money was probably not that important to the controllers in the long run, but the lack of contributions to the metastasis of the grid was. The erstwhile “investors” were an afterthought to the end game.

The end game of the hive provocateurs was and is the grid itself. The build-out of this hive both predates and will survive the entities currently constructing it by bold seizure and chicanery, like Google. This may seem bizarre and inconsistent, but a little reflection will show that it is neither. Think of how the Pentagon war profiteers have operated since the end of World War II. The existential objective for them is the continued receipt of transfer payments directly from the US Treasury without any serious mediation. The particular method they use to do this is the “sale” of military equipment. These sales superficially appear to follow the logic of capital investment (with “bids” and “contracts” and suchlike), but this is for the most part obfuscation. The actual system is feudal, based on back slaps, winks, nods and nudges and inter-clan and intra-clan dynamics. There is no reason that this extraction system has to use military equipment as the “hook” or “con”. If this didn’t work, some other artifice would be cooked up. The “con” can take any number of specific incarnations: missile gaps, cold wars, hot wars, terrorism, potential monsters from outer space, whatever. But the existential mandate is always the treasure trove of private access to public largess. The mobs and consiglieri working these cons from the inside, thinking themselves autonomous agents, may not even be aware they are subalterns of a higher order. This lack of awareness can be fatal (Dallas ’63-at least prima facie), emasculating (LBJ) or something in between (Nixon).

The hive is also meant to enable direct transfers of wealth with only political mediation (techno-feudalism), so it is no wonder that the DOD greed heads and cybernetic-utopians often appear to be the same people-and sometimes are. These forces are perfectly happy to allow their ends to be met by what appears to be some sort of capitalist production/profit/investment cycle as long as it suits them. (The “cons” in this case are surveillance capitalism and the “app”. The nut always remains the emplacement of the hive.) But they possess a much longer view than the tech giants and will put an end to anything that wanders too far off the reservation (like a “free” internet). And the giants find out whose dogs they really are. There is much evidence in the early history of Google to suggest that its “investors” (Langley) were not so much interested in money (which they can now steal with complete impunity) as in control, i.e., the grid. As Google in its early years tried to make a legitimate model based on advertising with only modest success, Langley sent in a consigliere, Eric Schmidt, to motivate them to stay on topic and to ensure at all times they were conscious of what the “topic” actually was. (This feeds into the non-intuitive point we made above about end games and the cons which are epiphenomena of them. Google is not a front but misdirection, like selling fighter planes that don’t work to cover the pilfering and stealing from the treasury. But for this to be pulled off, the con has to appear to be successful. To cover for the hive expansion, Google had to appear to be a money-making enterprise-if it actually was one, that would be a bonus, as it turned out to be. We have mentioned why “profits” are important only as gaslighting in such a scenario-treasure is extracted directly and opaquely elsewhere.)

This is why so many things that the progenitors of the hive do seem inscrutable. Like many super-sized criminal organizations, at any one time they may act-for extended periods-in what seems to be an unprofitable or counterproductive manner. But nourished by essentially unlimited subterranean hoards of treasure directly acquired through political coercion or “shock capital” assaults, and innocent of natural enemies (the putative regulators have long since been hollowed out or absorbed), the “profit” cycle of these intrigues can be much longer than naive (public) opinion can conceive or view with credulity-even longer than a generation. And with complete control (the hive made manifest), “money” or “profit” become whimsical concepts anyway, since there is no exchange, only extraction. The consistency and generational extension of the machinations of these forces are also obscured by the very nature of cartels themselves, as the putative Allies discovered when trying to unravel the knots of the German industrial combines after the Second World War. Most of the largest German combines actually survived the war unscathed or even hypertrophied. To the amazement of most of the “victorious” investigators, it gradually became evident that cartels like Thyssen and IG Farben had figured total defeat into their business plans and, within five years after “total defeat”, were operating essentially exactly as they had before the war-mostly with exactly the same people running them as de-cartelized atomic corporations, while drawing as reserves on hitherto unknown international resources, including American ones. Subsequent investigation over the years has shown this de-cartelization to be largely a bureaucratic delusion (or canard, depending on who’s doing the talking). It should also be noted that virtually the only large office complex left standing in Frankfurt was that of IG Farben. File that away for another day and another topic.

…on with the show…

Introducing Gordon Moore, who stated in 1965 that “…the number of transistors per integrated circuit would double every 18 months”-but he says he may have actually said this in 1975, and also says maybe he didn’t say exactly that at all, but if he did, he denies it.

As I stated about Moore’s Law at the very top, there are as many versions and interpretations of Moore’s Law drifting around as there are digressions on the Ascension of Christ. To make things succinct, I will reference what I consider to be the best and most relevant reading of the situation, written in 2002 by Ilkka Tuomi. That may seem a little dated, but in actuality it was written before Google and its narratives became dominant and thus does not suffer from “self-policing”; that is, the author is not looking over his shoulder thinking that his reputation and career may be ruined or compromised by challenging a monopsony that butters everybody’s bread in this intellectual space. I suggest it be read completely before we proceed, if for no other reason than that my interpretation may be erroneous and may color your own. It can get a little confusing at points (which I think has more to do with the byzantine nature of his subject), but stay with it. Written almost twenty years ago, it seems to presage quite a few of the arguments I have made previously.

The Lives and Death of Moore’s Law

Tuomi’s introduction is the most succinct I have read, so I excerpt it here:

In 1965, Gordon Moore, Director of Fairchild Semiconductor’s Research and Development Laboratories, wrote an article on the future development of semiconductor industry for the 35th anniversary issue of Electronics magazine. In the article, Moore noted that the complexity of minimum cost semiconductor components had doubled per year since the first prototype microchip was produced in 1959. This exponential increase in the number of components on a chip became later known as Moore’s Law. In the 1980s, Moore’s Law started to be described as the doubling of number of transistors on a chip every 18 months. At the beginning of the 1990s, Moore’s Law became commonly interpreted as the doubling of microprocessor power every 18 months. In the 1990s, Moore’s Law became widely associated with the claim that computing power at fixed cost is doubling every 18 months.

Moore’s Law has mainly been used to highlight the rapid change in information processing technologies. The growth in chip complexity and fast reduction in manufacturing costs have meant that technological advances have become important factors in economic, organizational, and social change. In fact, during the last decades a good first approximation for long–range planning has often been that information processing capacity is essentially free and technical possibilities are unlimited.

Regular doubling means exponential growth. Exponential growth, however, also means that the fundamental physical limits of microelectronics are approaching rapidly. Several observers have therefore speculated about the possibility of “the end of Moore’s Law.” Often these speculations have concluded by noting that Moore’s Law will probably be valid for at least “a few more generations of technology,” or about a decade.

Like some biblical imprecation (“mene, mene, tekel, upharsin”?) chanted monotonically in the background by a sepulchral chorus, the quote from the inception of this tale recurs: “…and if disciples, religious or secular, discover that a specific timetable has not been realized, they manage to revise it ingeniously and preserve the credibility of the whole unfulfilled prophecy.”

Tuomi explicitly reiterates what I have been chanting like a Buddhist monk for thousands of words: the narrative of Inevitabilism undergoes periodic maintenance and transmogrification: from the exponentiation of the number of components on a chip (1965); to the doubling of the number of transistors on a chip every 18 months (circa 1980); to the claim that computing power at fixed cost is doubling every 18 months (circa 1990). Even an engineering dilettante can see that none of these propositions has much to do with the others, much less forms any sort of logical progression-not to mention that they all may be simply untrue, or much less true than their adherents maintain (the third was from the start borderline ludicrous). What is significant for our immediate context is that a single person-Gordon Moore-is generally purported to have packed, with remarkable compression, all of these aphorisms into a single statement made well over 50 | 40 | 30 years ago (yes, it’s a shell game). Before going there, I am sure you have picked out the tidbit “…a good first approximation for long–range planning has often been that information processing capacity is essentially free and technical possibilities are unlimited.” as terse and to-the-point a statement of our old friend “technological autonomy” as you’re going to get. I would also point out that Tuomi is an academic and, even if it is 2002, has to be cautious of the people who by not too many degrees of separation pay his salary. Thus he is kinder and less accusatory than it seems he would be if unfettered in quoting statements from industry giants like IBM concerning Moore’s Law that even then must have been obviously duplicitous. But he does a good job of letting you read between the lines.
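
To see how far apart these readings drift, here is a trivial bit of arithmetic (my own illustration, not Tuomi’s): treat each version purely as an exponential with its stated doubling period and compare the implied growth factors over a single decade.

```python
# Growth factor implied by each "version" of the law over one decade, ignoring
# (as the popular versions do) that the quantity being doubled is a different
# thing in each case.
def growth_factor(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

decade = 10
print("doubling every year (1965 article):         %5.0fx" % growth_factor(decade, 1.0))
print("doubling every 18 months (popular version): %5.0fx" % growth_factor(decade, 1.5))
print("doubling every two years (1975 revision):   %5.0fx" % growth_factor(decade, 2.0))
# -> roughly 1024x, 102x and 32x: the "same law" disagrees with itself by well
#    over an order of magnitude before we even ask what is being counted.
```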

Ok. Let’s try to sort out what Moore actually said, when he said it, and whether it was actually him that said what it was that he was supposed to have said when he said it:

  • 1965 – In the aforementioned Electronics magazine article, Moore makes a number of extremely informed observations about the state and trajectory of the manufacture and design of silicon computing chips, which he is uniquely placed to observe. From these observations he derives several propositions. The propositions form the scaffolding for his tentative derivation of a number of interconnected axioms or axiomatic heuristics, which have come to be known, peculiarly, as Moore’s Law. I apologize for being forced to be didactic and facetious, but the great cloud of misdirection and disingenuity surrounding this subject requires it. The observations:
    • In the manufacture of silicon chips there were two countervailing trends: simply designed chips could be manufactured at a cost largely independent of how many components were on the chip (so the cost per component fell as density rose), while increasing the complexity of a chip past some upper bound reversed this trend (yields collapsed) and cancelled out any advantage the added component density provided.
    • There was thus a “sweet spot”-a minimum-cost component count-between component density and complexity. As the technology matured, this sweet spot moved toward ever higher component counts and became easier to hit.
    • Significant increases in component density could be achieved by extending and optimizing existing optical processes and would not require new production infrastructure.
    • The minimum-cost chip of 1965 carried on the order of 50 components; Moore’s own curves put the minimum-cost point at about 1,000 components by 1970.

               The propositions:

    • Deriving from the immediately preceding observation (this is central), and extrapolating-treating subsequent chip designs as “low-cost” chips as well, for the sake of argument-chip densities appeared to increase by a factor of two every year. Note that Moore is still assuming that this refers to the lowest-cost chip in production.
    • Since all the necessary technology was already in place, as per the observation above, the required productive technology could be reckoned as having zero cost.

             The axioms:

    • The cost of manufacturing the chips would become independent of their design complexity.
    • A logical result of this would be to concentrate on either mass manufacture of discrete “building block” chips, or flexible upstream design, or some combination of the two.
    • Since there was no technological cost, demand would not be constrained by inflation of upstream expenditure.
    • There would thus exist a technologically autonomous economic engine, limited in its expansion only by manufacturing cost.
    • If all this was indeed the case, the number of components per chip in 1975 (ten years in the future) should be ~65,000 (see the back-of-envelope sketch below).
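
The back-of-envelope behind that last number (my arithmetic, reading the published 1965 curve, not a quotation from the paper): the famous 65,000 figure is nothing more exotic than ten annual doublings.

```python
# Ten annual doublings of the 1965 minimum-cost component count (my sketch).
# Moore's published 1965 curve puts that starting point at roughly 2**6 = 64 components.
start_1965 = 64          # approximate minimum-cost component count in 1965
doublings = 1975 - 1965  # one doubling per year for ten years
print(start_1965 * 2 ** doublings)  # 65536, i.e. the "~65,000 components" figure
```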

            The original document.  

I am deeply ashamed to admit I had never read the original document until now. Like many, I had succumbed to the extremely effective induced intellectual ennui emanating from what are supposed to be the information media. The mere invocation of Moore’s Law (like the Gettysburg Address) precluded the necessity of reading it or analyzing its contents (someone needs to come up with a name for this, or tell me about one that already exists). What immediately leaped out at me was the tagline summary under the title:

With unit cost falling as the number of components per circuit rises, by 1975 economics may dictate squeezing as many as 65,000 components on a single silicon chip

I’m sure if you have never read the original, this surprises you too. Moore’s Law is always sold as a prediction, that is, “cunning and brilliant scientist Gordon Moore predicts the future with amazing clairvoyance!” This byline paints a diametrically opposed picture. By any honest definition of the word, this is not a prediction. It is a declaration, a statement of the facts on the ground as seen by the author, who ran research and development at the leading silicon chip manufacturer of the time, Fairchild. It is exactly the same thing as, say, the president of Goodyear saying in January 1942 that we may have to ration tires and push for a synthetic rubber replacement material since the Japanese have seized all the rubber sources. Or southern cotton planters in the US in 1920 saying we may have to diversify our economies in light of the blight of the boll weevil. Its correct reading would be, at the most literal, something like this, noting that when Moore says “I” he means Gordon Moore, panjandrum of Fairchild Semiconductor, not Gordon Moore-super genius, man of mystery and predictor of all things to come:

With unit cost falling as the number of components per circuit rises, we [the implied manufacturers of silicon chips] may be compelled by market forces, by the need to amortize our existing investments, and by the expected demand and performance requirements of our customers, to cram as many as 65,000 components onto a single chip within the span of the next ten years.

Moore then proceeds to elucidate and justify the declaration he makes in the byline with observations and propositions, leading to the corollary conclusion (which is not the central point of his paper) that a certain specific part of his declaration, the upper bound of the time frame, may be axiomatic or generalizable outside the context and time frame he has laid out-but don’t bet on it. This is how an engineer would read this. I see no “predictions” as in gypsies peering into crystal balls or Delphic oracles. This is very clearly a design engineer and scientist writing a high-level design document in which he explicitly states the key components of any such document: the problem domain he is addressing (the manufacture of silicon chips), the scope of the problem (the costs and benefits of various design and production approaches) and the time frame within which any conclusions will remain relevant given internal or external factors (~10 years). I myself have written many design documents structured exactly like this. I have never been accused of predicting the software engineering future if the subsequent software artifact turned out exactly the way I designed and planned it. That’s called a job well done, not clairvoyance. It should also be noted that high-level design documents (usually written by the principal engineer or design architect) are the only engineering documents ever seen by non-engineers and are usually the source for “white papers” (a current alias for high-level design docs massaged for non-engineering consumption).

Any engineer writing such a document will do exactly as Moore does here-he is making a case for how he sees the domain as it is now and how whatever it is he wants to do will work in the future (when the whatever is finished). The blurb at the top of the page of the article reads “the experts look ahead”. Every design document is the “experts looking ahead”: the person writing is assumed to know what they’re doing and what they’re talking about, and they are plotting a course of action which lies in the future and (hopefully) comes to fruition. Again, this is not a prediction in the crystal-ball sense; it is how any engineering project is envisioned, and it is constrained (as predictions are not) by time, physical resources and money. We’ll get back to this, but for now let’s see what Gordon has to say ten years after. One more caveat related to our analogy of the engineering design document above: What happens if your design, after implementation and deployment, does not work? The answer: you (or the person who succeeded you after you got fired) write another document and we start all over again (if the company didn’t go tits up and everybody got fired); or, if things worked but not quite as you had planned and not for as long as you thought and you did not get fired, you “refactor” the previous document-deleting or laughing off the nasty bits that crashed and burned, accentuating the bits that did not, and tossing in some new bits for indirection. Thus:

  • 1975 – Not to besmirch Moore’s reputation: Moore never proposed that the doubling of component density was a general rule, theory or law, as we have painstakingly run through above. He is outlining what (if Khrushchev were running the show) would be called a ten-year plan. As a crackerjack scientist and engineer, he qualifies virtually everything he says. Once one accepts that this is actually a high-level design document (or more properly, the introduction to one), not an “Amazing Science” bowling-alleys-on-the-moon type of thing, this becomes clear. In an engineering document these qualifications are not hedges but “failure coefficients”, so to speak. The design engineer has to rate some possibilities as more likely than others. It is also only natural that one may gild the lily to some extent-you are trying to make a case. By some algorithm or heuristic these coefficients are usually rolled up into a projection of the odds of the thing actually working. Undocumented failure coefficients have names like Hindenburg and Challenger. Moore was, to his great credit, correct. But not because he was Nostradamus-because he was a crackerjack scientist and engineer who was one of the three or four most influential people in the semiconductor industry, which implemented his design/systems topology to the letter over the subsequent decade at Fairchild and Intel. So why did Moore “revise” his so-called law?
    • The answer to this is inscrutable if you consider Moore’s Law a prediction, like those numbers you got out of Grandfather’s Dream Book you could buy at newsstands (remember them?) to “predict” lottery winners, instead of the design/planning document it obviously was. Not so inscrutable otherwise.
    • Gordon Moore’s projections proved to be correct. He had not proposed that his conclusions were unconstrained and could be extrapolated as a general heuristic. Thus it should come as no surprise that Moore’s Law essentially stopped working at approximately the time he said it would (this was an economic/engineering barrier, not an existential one: you could go on cramming components onto simple chips, but it would be economically unfeasible to do so because of design costs). Moore had foreseen this, and one of his axioms (see above) was that at some point a choice would have to be made between mass-produced simple chips recombined into more complex designs, or more complex chips which could be mass-produced through design flexibility and “wizardry”. It became apparent fairly quickly that the first was untenable (simple chips could not be combined into higher-functioning components without an explosion in engineering unit costs), resulting in the second route being taken. This route rather clearly led to mass-produced microprocessors-the biscuits and butter of Intel (founded by Moore and Robert Noyce in 1968 using the 1965 document as a template-after they had amortized their memory chip investment).
    • What is significant about the 1975 revision (given in a presentation at the IEEE International Electron Devices Meeting) is that it changes the “law” from a purely economic/engineering heuristic to something quite a bit more abstract-from Tuomi:
      • In 1975, Moore implicitly changed the meaning of Moore’s Law. As he had done ten years before, he was still counting the number of components on semiconductor chips. Instead of focusing on optimal cost circuits, however, he now mapped the evolution of maximum complexity of existing chips. Indeed, in an article written a few years later, the famous growth curve is explicitly called “Moore’s Law limit” (Moore, 1979). At that point the growth estimate is presented as the maximum complexity achievable by technology

    • I will not go into the enormously confusing ins and outs of the 1975 revision. Tuomi does a good job, but I essentially view it in the context I described above, i.e., the design/planning document. Essentially the design worked for the lifetime Moore had predicted; began to fall apart for reasons Moore had foreseen but papered over (simpler chips could not actually be recombined into higher-function components economically); and was refactored by jettisoning the building-block bits, re-configuring the design-complexity/mass-production bits and adding some new bits in the form of “…he was still counting the number of components on semiconductor chips. Instead of focusing on optimal cost circuits, however, he now mapped the evolution of maximum complexity of existing chips” [Tuomi]. All of this is of course an order of magnitude more opaque than the original, and this is where suspicions of disingenuity may begin to creep in. We have gone from a fairly straightforward engineering prospectus in 1965 to a not-well-defined and suspiciously anecdotal guesstimation. We could go on for quite a bit more, but I think we can proceed, taking what we have as being what Moore was actually talking about in these documents.

So if that’s what Moore actually said, let’s deal with what some others have said Moore said, and whatever Moore said he really said, if he says anything at all about what these other people say he said:

  • Intel’s web site, circa 2002: “In his original paper, Moore predicted that the number of transistors per integrated circuit would double every 18 months. He forecast that this trend would continue through 1975 … . Moore’s Law has been maintained for far longer, and still holds true … .”
    • Moore says: “I never said 18 months. I said one year, and then two years … . Moore’s Law has been the name given to everything that changes exponentially. I say, if Gore invented the Internet, I invented the exponential.”
  • World Bank, date uncertain (the “30 years” would place it in the mid-1990s): “Gordon Moore’s Law — microcomputer chip density doubles every year or so — has held for 30 years.”
    • Moore says: nothing
  • IBM, press release circa 2001:  “The evolution of semiconductor technology has traditionally followed a trend described by Moore’s Law, an industry axiom that predicts that the number of transistors on a chip will double every 18 months, largely due to continued miniaturization known as scaling.”
    • Moore says: “So then I changed it to looking forward, we’d only be doubling every couple of years, and that was really the two predictions I made. Now the one that gets quoted is doubling every 18 months… I think it was Dave House, who used to work here at Intel, did that, he decided that the complexity was doubling every two years and the transistors were getting faster, that computer performance was going to double every 18 months… but that’s what got on Intel’s Website… and everything else. I never said 18 months; that’s the way it often gets quoted.”
  • Eric Raymond, circa 1999: [Raymond erroneously omits Moore’s variable of chip size] “The observation that the logic density of silicon integrated circuits has closely followed the curve (bits per square inch) = 2^(t – 1962) where t is time in years.”
    • Moore says [plenary address at the 2003 IEEE International Solid-State Circuits Conference]: “But we ought to remember that no exponential is forever. Your [the engineers present-editor] job is delaying forever.”
    • Moore also says [from “Moore’s Law 40 years later”, Intel, circa 2005]: “I made a number of other extrapolations; some were just to demonstrate how ridiculous it is to extrapolate exponentials. In my 1975 talk, I described the contribution of die size increase to complexity growth and wrote: ‘In fact, the size of the wafers themselves have grown about as fast as has die size during the period under consideration and can be expected to continue to grow.’”

And finally, a few tchotchkes we can safely conclude Moore never said:

  • Bill Gates, circa 1997: “[Moore’s law can be defined as-editor] the doubling of processing power on a chip every 18 months”          
  • Al Gore, circa 1999: [Moore’s law can be defined as-editor] the doubling of computing power on a chip every 18 months          
  • R.J. Gordon, circa 2000: [Moore’s law can be defined as-editor] the price of computing power falling by half every 18 months [oy vey-editor]

 

How Gordon Moore and his law were conscripted by the cybernetic-utopians

So the morass of Moore’s Law contains all the ingredients we’ve come to associate with the cybernetic-utopians and their elite vanguard the Inevitabilists: “technological autonomy” in the form of frictionless engineering design at no cost; mythopoesis, wherein when one wheel falls off the cart (1975) we refactor, paint the cart red and call it a tricycle; teleological compulsion, or feedback loops (which exponential growth by definition is) that explain and design themselves because the technology is innately purposeful and requires no agency outside of the “law” itself; and scapegoating, whereby whenever the “inevitable” turns up lame-multicore programming magic turning into a pumpkin, or building-block chips refusing to be transmogrified into more complex artifacts without busting the budget on design costs-it’s not possible that the whole idea was goofy from the start; there had to be a beaver in the woodpile, and in both cases the beavers were the engineers who tried to implement the goofosity.

I also completely concur with Tuomi’s assessment:

Moore’s Law has not been a driver in the development of microelectronics or information technology. This may come as a surprise to some who have learned that Moore’s Law has become a self–fulfilling prophecy that semiconductor industry firms have to follow if they want to survive. Instead, I shall argue that technical development in semiconductors during the last four decades has reflected the unique economic and social conditions under which the semiconductor industry has operated.

So where exactly does this segue into all the previous rag-chewing about hives and grids? I believe that Moore’s Law was essentially absorbed, by a process of mythopoesis, into cybernetic-utopianism, much as Mithraic and Greek sources supplied the raw material for the Christ mythos. This began when Moore extended his original design document in 1975. The motivation for the absorption is non-intuitive. First, a reprise from Tuomi’s document:

The microprocessor proved to be a major innovation. It combined the benefits of both high volume manufacturing and reuse of design work in high–volume multi-functional chips. It therefore made possible the amortization of design costs over not only existing markets but also emerging new markets.

Microprocessors also involved an important business innovation. The costs of semiconductor manufacturing dropped radically as universal microprocessors made the application developers pay for most of the design costs. Much of the difficulty and cost of designing complex systems was off–loaded to system and software designers. Similar logic, of course, worked with memory chips. The manufacturer did not have to know or care how the chips were used. In other words, the semiconductor industry solved its biggest problem by making it someone else’s problem.

Even in 1975, the first tentative baby steps towards the hive were being made, notably by DARPA and the packet-switched network efforts of Vinton G. Cerf. The offloading of design and technological use cases to software engineering was a profound change. The hive/grid could now in effect deploy blindly-it didn’t have to rig or compromise the microprocessors (although this was and is being done at the firmware level), since the microprocessor could always be modified in place-which is what software is. The more variegated and multifarious the processors, the better. Uniform functional implementations are impossible with the array of hardware available today-but a uniform, homogeneous software-defined network (SDN) running on top of these processors is possible. The hardware churn now became a maelstrom. With software decoupled from chip development, the hive provocateurs could simply wait for integrated circuits and microprocessors to capture some domain (say, airline navigation) and then infest them with hive software. The “internet of things” was always implicit-just dependent on technological advances in hardware design and device miniaturization.

If the previous system of hardware and software symbiosis had remained in effect, hive deployment would have taken 200 years: it would have been impossible to “grandfather” in existing devices and systems without replacing all the devices and systems of the target at the same time. The hive expansion was and is dependent on the expansion and churn of hardware processors, but not on any particular processor. The hardware expansions metastasize to all domains (finance, cars, container shipping, whatever) and are then parasitized by hive software. As we have described previously, this scenario was slightly different for the grandfathering and expansion into the most atomic of nodes (you and me and our devices). Probably because this most fundamental unit was the most crucial and fluid, and possibly had a protocol all of its own, the update and control system expansion seems to have been much more aggressive there, through the opaque machinations of the “tech giants”. But the hardware-domain-capture/software-parasitize paradigm remains the same. These are the non-intuitive bits.

So the “doubling of component density every whatever year” became the “maximum complexity of existing chips”. The hardware manufacturers were more than happy with this arrangement. The only way this gravy train could grind to a halt would be if the software could not “justify” the hardware. If cars are turned into hardware peripherals with microprocessors and sensors, it’s up to the software to deliver the goods, so to speak. The goods are “productivity”-in this case, driving the car better than you can. If the car slams into a tree killing the driver, this is the ultimate “marginal software utility deficiency”. This is a two-pronged assault, as we mentioned above. The hardware manufacturers must keep taking over domains to physically expand the hive, and the software must deliver the productivity gains that entice its use and deployment (like a Venus flytrap)-also deploying the hive protocols and communication avenues simultaneously. But it is key to point out here that “productivity” in this context is essentially a con to enable the diffusion of the control grid. Once this diffusion is accomplished in a particular domain, “productivity” overnight becomes a dead letter, confirming our analysis.

As an example of the preceding: once, say, the airline reservation systems were all mediated through the control grid, nobody cared how productive or efficient they were anymore. Making an airline reservation was much simpler and faster in the 70’s than it is now (viewing the “reservation” as a multi-faceted transaction, from the time you reserve space to the time you actually get on the plane). A small window in the 80’s and early 90’s gave customers a significant boost in satisfaction and efficiency through computer automation. But once these systems had become ubiquitous and generalized, there was no motivation from the standpoint of the grid expanders to worry about productivity or customer satisfaction-their aims (the most important ones) were met. It is also informative to see how this productivity and performance decline has been masked and obfuscated in an industry such as air travel. Almost all airline reservations are made online now. This process is for the most part seamless and efficient. It’s when you get to the airport or the boarding area that the mishegas begins. “Mistakes” and “misunderstandings” proliferate because the “computer made a mistake”. Mistakes which can’t be corrected because “the computer won’t let me”. This is also by design. The end-to-end inefficiency of the system is hidden by the apparently fail-safe instantaneous transaction of the computer interface-which actually offloads the most problematic and non-deterministic bits of the entire transaction to the human interactions you face at the airport (“Overcoats have to be checked in as baggage. No big coats allowed in coach! It is your responsibility as myrmidons to monitor your DHS InstaMandate© feed at least once an hour, every hour, for updates. It’s your responsibility to keep up with the most nugatory and infantile of our machinations at all times!”). This is the bait-and-switch we see in virtually all highly computer-automated service industries. Wait until the Uber/Lyft scam is complete. The reservation-by-smartphone bit will still work seamlessly (the grid). The actual vehicle that picks you up will be an automated driverless prison wagon the size of a mailbox (one comrade inside, one comrade secured to the rack on the roof)-which will take you where they want you to go: Central Citizen Chattel Distribution and Dispersal (C3D2) Nexus 177A. Which may or may not be anywhere near where you want to go. And if the automated Stolypin wagon does slam into a tree and you die, this is now considered collateral damage (which it wasn’t during the capture phase)-the important thing is that they knew where you were when you expired, and what your identification code was, for their records. Either way, you pay for the ride with your vaccine-deployed pheromone payment activation engine (VPAE), enabled through molybdenum concentrations in your fingernail cuticles [I just made that up, so don’t Google it].

However, getting back to the infect/capture/control cycle, there is a momentous caveat: there is no reason this cycle cannot be modified into a completely coercive and unilateral invocation. All domains can be coerced into becoming dependent on computer hardware and coerced into adopting the hive software (whether it is productive or not). We can see this-again-at airports (which seem to be ground zero for all sorts of behavior modification hijinks as well, e.g., the TSA). All manner of gadgetry and electronic wizardry have been deployed to make the airport experience more “productive”, with dystopian results. This is what happens in a system where rewards and treasure are distributed by political means, not economic ones. “Productivity” is whatever I can force you to hand over without compensation, and my rewards are immune to any “market” feedback you may give. That is, some sort of quasi-feudal spoils hierarchy enforced by a police state or cartel (nobody cares what you think at the airport-if you want to fly you have no other alternatives, exactly like a checkpoint set up by bandits on the only road leading through town-the bandits divvy up the proceeds later). Which is pretty much the world we now live in. Surveillance capitalism also fits comfortably here. It markets “productivity” that nobody asked for, for which you must pay whether you like it or not (with your coercively appropriated metadata).

I believe these changes took place in the early 2000’s, when the driva’ men riding this chain gang saw that their cohorts were failing on both fronts: the software parasites could not deliver the promised productivity gains (they never seemed to materialize), and without that bait, the hardware domain capture could not infect enough hosts.

Since then, I believe, the gloves have not come completely off, but we’re almost there. The fact that Moore’s Law is still being flogged like a fagged-out old donkey demonstrates that. We are in the midst of the final push, the last encirclement. The last domains are falling to the hardware piranhas-living space itself. The streets, towns and villages are now being captured, wired with gizmos and sensors and cyber trinkets which are immediately infected by the hive and incorporated. The hardware expansion chorus is now a shrieking howl (5G). All in. Everything must go. It is the law. Inevitable. And the devices, like smallpox-infected blankets, come thick with hive parasites or destroy the inborn antibodies of their hosts, leaving them exposed to infection and control (see the commercial aircraft industry). Probably irreversible. A few dead-enders armed with bamboo spears and flinging coconuts with catapults are putting up a pitiful last stand. But just in case, the utopians message their minions: keep stoking the furnaces and remember Moore’s Law above all else. The power of Christ compels you. Or if not Christ, then at least greedy ambition and technological monomania.

I had planned to move on to the last segment here, but in light of the last paragraph I have a couple more bones to pick with Moore’s Law. I am aware that this particular piece of the jeremiad is running to Tolstoyan breadth, so I’m going to add a couple of pieces of “extra bonus entertainment” at the same low ticket price. Our final installment will follow these efforts.