3B – Cybernetic-utopianism and the tale of two laws: Moore’s Law and Amdahl’s Law

our man in cambridge

As I mentioned in the previous post, I was a little timorous about some of the documents I referenced there because of their dates-mostly from the late 90’s and early 2000’s. Critics could make the point that “So much has changed since then!” and I worried that they might be right. Until, in researching the subjects discussed above, I came across one particular document which gave me courage as to the righteousness of my cause, so to speak. I was rather stunned by this article because I couldn’t believe that the same characters were still selling the same snake oil, virtually unmodified from twenty years ago, without a blush or giggle. Even more surprisingly, they flogged the same dead horses I mentioned in the first part-almost word for word, as you will see. I swear on an old greasy stack of C++ Report magazines from way, way back in the day that I did not encounter this article until after I wrote the first two parts of this series. My shock is genuine.

Introducing David Rotman, editor-at-large of the MIT Technology Review, who, in an article published in that journal only six months ago concerning Moore’s Law, revealed himself to be a complete fool

I know. Those are fighting words. But I’m in a fighting mood. And I’m sick to death of people like David Rotman-whoever he is. The MIT Technology Review site offers no further CV information and his twitter account (yes, my fingers trembled going there, but I had to take one for the team) is similarly barren of biographical details (I finally tracked down a few uninteresting tidbits later). Rotman is what could be called a “technology stooge”. Technology stooges are comparable to the people who populate the infomercials on cable television that you have to watch if you happen to be rusticated to some motel room in Mobile, Alabama or similar embittered American backwater (“No fucking sport, no fucking games, the fucking girls won’t even give you their fucking names, no fucking trams, no fucking bus, nobody cares about fucking us, here in fucking Halkirk (Scotland)”-bitter ditty sung by British troops during WWII). They feign giddy astonishment and incredulity at laughably contrived and byzantine gizmos designed to accomplish some pitifully pedestrian task: cutting bread, inflating an inner tube, plastering over a hole in the wall, punishing ants under your sink, etc. Likewise, types like Rotman are amazed and borderline incontinent over anything the technological sector offers-even if it doesn’t work (“According to scientists at Siemens, the automatic beaver dam buster has been only partially successful in early trials, but the possibilities are limitless if this thing can be brought to market!”). The technology stooge, however, has more urbane conduits than infomercials to perform essentially the same task: TED talks and mainstream “candid” interviews, usually with Mountain View megalomaniacs or their courtiers (“Golly Mr. Gates, just what are the limits to a man of your genius, vision and unparalleled wisdom?”). Or in our case here, the house organ or plantation gazette.

The MIT Technology Review has come down in the world from what I remember of it. I was actually a subscriber in the 1990’s, when you had to wait for a paper copy to arrive in your mailbox. I don’t remember it being as shamelessly committed to techno-determinism as it is now. Or maybe I was just younger and less aware then. I have similar thoughts about a publication I subscribed to at about the same time, The Bulletin of the Atomic Scientists, which I hadn’t read in years until I browsed an issue recently. I can describe its tone only as the type of sober and faux-somber hysteria and megalomania found in “national security” publications to a fault (e.g., “After months of careful and painstaking deliberation, the Council on Foreign Relations concurs with the JCS recommendation that Fiji be immolated at the soonest possible moment as a threat to Pacific stability and long term prosperity” [posted under a black and white picture of three bald-headed men in business suits beaming at the camera with pipes in their mouths, presumably the authors]). I remember both these publications being the cat’s whiskers back in the 90’s. Nostalgia, or has the world worsened? I suspect the latter.

Despite this, I was still not prepared for an article published only six months ago, in what is supposed to be a journal tracking and reporting on “state of the art technology”, to feverishly make declarations and assertions, one after another, which have either been disproved or humiliated for decades or were never true in the first place. I should first point out that this article is behind a paywall and MIT also doesn’t allow archiving by web spiders like the Wayback Machine (that’s the spirit, men). They do allow 3 free reads, so you have to gobble this up in one sitting (I was afraid to use the back button in case the MIT skinflints got wise and forced me to pay $0.35 for this article, which would be $0.33 more than it was worth. Not to mention how it would really fatten up that 18.4 billion dollar endowment). It shouldn’t be hard; it’s a little over 2000 words.

We’re not prepared for the end of Moore’s Law

Things get rolling downhill with the title and its legend. As we know from the thousands of words expended up top, Moore’s Law is not only dead-it never actually lived. It was a design heuristic (a correct one) proposed by a design engineer in the published preamble of a design document. And just who exactly are “we”? This is never stated and I am not actually clear who he means. He and his family? Semiconductor manufacturers? MIT (guilt by association)? The people who work on the magazine? Everybody who can read English? The title’s legend jumps in with “It has fueled prosperity of the last 50 years. But the end is now in sight.” Just how does one “fuel” prosperity? With a hose? A bucket? A cornier and more threadbare cliche could hardly be found (it’s also grammatically foobared-the end of what is in sight? Moore’s Law or prosperity? Or both? Or is he talking about the real end (mass extinction)?). I know I’m being excessively facetious, but after reading this article, I really don’t like David Rotman.

To his credit, Rotman doesn’t pontificate before hitting himself in the face with a custard pie in the first two paragraphs.

Gordon Moore’s 1965 forecast that the number of components on an integrated circuit would double every year until it reached an astonishing 65,000 by 1975 is the greatest technological prediction of the last half-century. When it proved correct in 1975, he revised what has become known as Moore’s Law to a doubling of transistors on a chip every two years.

Since then, his prediction has defined the trajectory of technology and, in many ways, of progress itself.

As per our infomercial analogy, you can imagine Rotman with a wireless mike throwing softball questions at Gordon Moore, co-founder of Intel, standing in front of his new baby: the automatic beaver dam buster. In all infomercials, there is an extended preamble puffing up the inventor’s bona fides:

“Welcome, Mr. Moore…”

“Glad to be here, Dave.”

“Ladies and gentlemen, this is Gordon Moore, who forecast that the number of components on an integrated circuit would double every year until it reached an astonishing 65,000 by 1975. Which is recognized as the greatest technological prediction of the last half-century!” [oohs and aahs from the audience and incontinent clapping]

“Not only that! There’s more (no pun intended, Gordon… ha… ha)-when it proved correct in 1975, he revised what has become known as Moore’s Law to the maximum chip complexity achievable by technology!!!” [more oohs, fewer aahs, incontinent clapping]

[turning and facing the audience, zoom in to “serious-faced” Dave, whose voice drops to a hush] “Since then [dramatic pause], his prediction has defined the trajectory of technology and, in many ways, of progress itself!” [pregnant pause, unctuous grin-resulting in explosive applause]

The “…his prediction has defined the trajectory of technology and, in many ways, of progress itself.” bit is what could be called a Stalin Oblique Gaze Posture (SOGP). Think of all those Soviet-era posters of Stalin or Khrushchev or Malenkov or whoever, staring off into the distance at some worker’s-paradise inevitability-shiny tanks, bridges, dams, Tupolev bombers, whatever-at a 45 degree angle with an expression of Christ-like beneficence and the superimposed slogan: “Thousands of comrades at Kolyma have frozen to death to make these bombers. Appreciate their sacrifice!” or suchlike.

Anyone confabulating such a bromide as “…his prediction…etc.” should be forced to adopt an SOGP dressed in a Red Army tunic standing outside a Dunkin Donuts in East Orange, NJ which has been robbed at least 3 times in the past week, and repeat the offending phrase in a thick Russian accent until exhaustion or nightfall or physical assault (whichever comes first). Whoever had final say over what goes out over the wire and left this in directly confirms that fish do indeed rot from the head (and if, as I suspect, this whoever is also named Dave Rotman, MIT has even more to answer for). Not to mention that if anyone could be said to have ”defined the trajectory of technology and, in many ways, of progress itself”, it would probably be one of MIT’s own alumni-Claude Shannon or Norbert Wiener. But neither of these names deflects the needle of the public propaganda meter these days much past 25% and thus they don’t have a great deal of resonance in the circles that move the merchandise-physical or virtual.

We know the statements about Moore’s Law are pants if you have trudged through the previous installments, so we won’t go there. But we ask an innocent question: if the law was correct, why would it need revision? Rotman implies causality and compulsion-because Moore’s Law was correct, Moore had to revise it. What? Why? He never even attempts to explain this. From this point we can deal with the next few sections of the document as a succession of risible if disturbing (this is a magazine with MIT’s name on it) logic errors.

 

Magical Thinking

Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore’s prediction.

Regressive Fallacy, Post Hoc Reasoning

But how did a simple prediction, based on extrapolating from a graph of the number of transistors by year—a graph that at the time had only a few data points—come to define a half-century of progress? In part, at least, because the semiconductor industry decided it would.

Magical Thinking, Texas Sharpshooter’s Fallacy, Selection Bias

Every year since 2001, MIT Technology Review has chosen the 10 most important breakthrough technologies of the year. It’s a list of technologies that, almost without exception, are possible only because of the computation advances described by Moore’s Law.

For some of the items on this year’s list the connection is obvious: consumer devices, including watches and phones, infused with AI; climate-change attribution made possible by improved computer modeling and data gathered from worldwide atmospheric monitoring systems; and cheap, pint-size satellites. Others on the list, including quantum supremacy, molecules discovered using AI, and even anti-aging treatments and hyper-personalized drugs, are due largely to the computational power available to researchers.

Slick Rick

It took me a while to figure out what was wrong here. I couldn’t think of any ready-made violation, so we’ll call this “Slick Rick”. If I were good at symbolic math I could show why this is goofy with an equation. But I’m not, so I’ll have to talk it out.

But [Intel’s head of silicon engineering Jim] Keller found ample technical opportunities for advances. He points out that there are probably more than a hundred variables involved in keeping Moore’s Law going, each of which provides different benefits and faces its own limits. It means there are many ways to keep doubling the number of devices on a chip—innovations such as 3D architectures and new transistor designs.

The above is subtly deceptive, probably intentionally so.

declaration: there are more than a hundred variables involved in keeping Moore’s Law going.

Ok, we’ll take this on faith as being true. proceed

declaration: each of these variables provides different benefits and faces its own limits.

Ok. That sounds reasonable. proceed

proposition: since you’ve accepted the declarations above as true, it means there are many ways to keep doubling the number of devices on a chip.

No. I can accept the declarations without accepting the proposition as true:

  • all one hundred variables may be zero
  • all one hundred variables may be idempotent (any operation on them will give the same result no matter how many times you operate on them).
  • It is true that one or many of the variables may result in a doubling. It is also true that none of the variables may be capable of being modified to bring about a doubling, even if they are nonzero or not idempotent (see the toy sketch after this list).
  • If Moore’s Law has 100 variables, you’re probably only fooling yourself that it’s a “law”->Texas Sharpshooter’s fallacy. What seems to be described here is what is usually called a transformation, which is a function, not a law.
  • Why does a “law” have to be “kept going”? What does “kept going” even mean? Is “kept going” the same as true (a law is either true or false, or true or false some percentage of the time so close to unity that we can treat it as being true or false all the time for some constrained purpose)?
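
As a toy illustration of the gap between the declarations and the proposition (the numbers below are invented for the sake of the sketch, not taken from anybody’s process data), give each of the hundred knobs its own small residual ceiling and pull all of them at once:

import math
import random

# Toy model with invented numbers: 100 variables, each with its own ceiling on
# the improvement it can still contribute. "Many knobs" does not by itself
# imply "many ways to keep doubling"-the combined effect is capped by the ceilings.
random.seed(0)
headroom = [random.uniform(0.0, 0.01) for _ in range(100)]  # each knob: at most ~1% left

combined = math.prod(1.0 + h for h in headroom)
print(f"combined speedup from pulling all 100 knobs: {combined:.2f}x")  # roughly 1.6x, not 2x

Flip the assumed ceilings and of course you can double; the point is that the count of variables, by itself, tells you nothing either way.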

Seemingly not content with not making any sense, Rotman proceeds to take a stance that quite a few others have shown to be false, or at least unproven in any rigorous sense: a direct causal (not merely correlated) relation between chip clock speeds and throughput and what he calls computing power (any time the word “power” is used in this kind of context it should be viewed with suspicion, as I have mentioned previously-some sort of chicanery is almost always afoot). This goes against what our old friend Tuomi said not so far back in the day, and what I have maintained as an important position: that development in the chip industry is endogenous, not tightly symbiotic with the software domain (there is essentially a loose, non-deterministic symbiosis similar to the relation between an upstream original equipment manufacturer and a downstream consumer, with the consumer exchange mediated by a retailer or product integrator)-it only appears so because of the constant flogging of Moore’s Law in a classic demonstration of sophistry or pretzel logic:

Technological advances in silicon chips have been relatively independent of end–user needs. [Tuomi, circa 2002-editor]

This “computing power” is also referred to as if it were a quantity (or attribute or potential or something) that exists distinct from any context or constraint imposed by the domain it is used in-you got it, it’s technologically autonomous. I seriously doubt, for all the reasons we have discussed previously, that “computing power” is autonomous of anything but transparency.

Rotman’s position also runs against the conclusions we reached in the discussion we had of the decoupling of hardware and software in the mid 70’s. Additionally, it puts forward an almost purposely disingenuous and simplistic view of how engineering is actually done. Engineers designing an application, no matter how resource intensive, do not sit around waiting for third parties outside their control to “deliver” the goods that will make their designs “go”, or sit at their desks with their heads in their hands groaning “If I only had more MIPS, I could make this work!”. The principal engineer or lead engineer does not design a system which exceeds the quality of service (QOS) capable of being delivered by the hardware specified out for the system, counting on a new, faster chip to materialize later to “make up the difference”. Doing otherwise is a Hollywood version of engineering development that borders on the ludicrous, and it infuses this document. It is flatly dishonest, as, for the most part, this entire article is.

Objections may be raised that the previous comments apply only to those entities which are not OEM resellers and don’t have some sort of contractual relationship with a chip manufacturer, e.g., Apple with Intel. Apple, it might be put forward, does wait on third parties outside of its control to deliver the goods. I maintain that the comments still apply. The third party (Intel in this case) is not outside of Apple’s control-it can cancel the orders or discontinue the relationship, which is most definitely a form of control. And in Apple’s case it won’t sit around waiting for anything; it will simply keep shipping the old versions of the units with the antiquated chips until the new ones arrive (with no price break of course-see the MacBook Pro: “And Apple apparently felt so bored with Intel’s processor lineup that it left entire lines of products—the Mac Mini and Mac Pro, to name two—to languish for years at a time.”). The comment remains in place: the principal engineer or lead engineer does not design a system which exceeds the quality of service (QOS) capable of being delivered by the hardware specified out for the system (the device release). In this case the engineers at Apple aren’t waiting for anything-their job is done and they’re working on the next release. You’re waiting if you placed a pre-order.

Different problem domains have completely different processor speed requirements. In most of the distributed applications I have written, designed or contributed to, marginal increments in clock speed are of little importance (say, from one iteration of a chip release to the next). For example, in distributed applications which use mostly RPC (remote procedure calls-a mechanism to make procedure calls across machine boundaries as if they were being made in the local address space), the latency (how fast the application responds) is almost entirely a function of the QOS of the network. In such a context an extremely fast network can compensate for third-rate hardware. It’s all a trade-off. On the other hand, I also have first-hand knowledge of implementing asynchronous input/output systems which are very sensitive to processor latency and could conceivably justify rapid refactoring due to chip upgrades. I suspect, however, that only a relatively small subset of computing domains actually require ever-incrementing resources of “computational power” (playing along with the gag that we know whatever that is exactly). This is reinforced by our discussions concerning the sketchy record of marginal utility delivered by the very use of computers in many domains, fast or slow.
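
To put toy numbers on the RPC point (the figures below are invented for illustration, not measurements from any system I have worked on): when the network round trip dominates each call, even a chip twice as fast barely moves the end-to-end latency. It is just Amdahl’s Law wearing a network hat.

# Back-of-envelope model with assumed numbers: RPC latency = network round trip + server compute.
network_rtt_ms = 20.0     # assumed WAN round trip per call
compute_ms_slow = 0.5     # assumed per-call CPU time on the current chip
compute_ms_fast = 0.25    # the same work on a hypothetical chip twice as fast

latency_slow = network_rtt_ms + compute_ms_slow
latency_fast = network_rtt_ms + compute_ms_fast
improvement = (latency_slow - latency_fast) / latency_slow

print(f"per-call latency, current chip: {latency_slow:.2f} ms")
print(f"per-call latency, 2x chip:      {latency_fast:.2f} ms")
print(f"end-to-end improvement:         {improvement:.1%}")  # about 1.2%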

I would also comment that I have encountered, on many occasions, wildly over-specified hardware for the application/domain requirements. Especially on government contracts, this constitutes an astonishingly lucrative racket. I have worked in places which had racks of state-of-the-art blade servers, each running one Microsoft Exchange server with 50 users apiece. The combined throughput of these boxes could have run an atom smasher, but one day when I had nothing to do I calculated that everything could have been run on two generic bargain boxes with a lot of memory. This article smacks of a racket and, as we read on, my suspicions are confirmed.

Another point about “computing power”, this one from the trenches: whether getting more of it will do you any good depends, for the most part, on exactly what you’re doing and a laundry list of tawdry engineering details. If you don’t know what you’re doing (i.e., exactly how your application is going to use the caches (on and off chip), registers and parallel capabilities of the hardware, etc.), using faster processors may help, or maybe it won’t. This requires engineering effort before you spec out the hardware, engineering effort after you get the hardware (a minimal harness to test whether the chip actually works, or whether it even is the chip you think it is-Taiwan manufacturers have a reputation for this), and engineering effort before the first stages of actual engineering (burn-in of a few full-configuration machines). If any of this goes sour, in most of my experience you’re going to have to use the chip anyway-yes, even if it’s the wrong chip, because replacements can’t be had without busting the timeline or the procurement weasel won’t forfeit his kickback. I just mention this to point out that Rotman and his compadres paint a much rosier and more collegial picture of the relationship between chip suppliers and engineers on high-end prototype projects (the only kind that would spec out their own hardware down to the chip level anyway) than I have seen. Then again, I’ve never worked in any places like the MIT Media Lab. Maybe it’s all sunshine and pogo sticks there.

Just because the chip is faster and cheaper doesn’t mean it will be any better at whatever you’re doing. It could be worse (any chip could have unforeseen side effects due to proprietary architecture). The new faster chip could also be poorly designed and/or manufactured and run like a deer but get hot and crispy-critter the box (this has happened so many times for major model releases of frontline chips that it almost goes without comment these days). As I describe above, things are a great deal uglier outside of Cambridge, and this is quite relevant. Thus, virtually all non-trivial use cases in engineering are “custom” implementations to some percentage of the overall design-that’s why they have engineers. To believe otherwise is more cartoon engineering. The author and the people quoted in the document (from their comments you can almost see the disclaimer crawling along the bottom of the screen-“Mr. So and So is a paid spokesperson for the cybernetic-utopians”) are presenting a NASCAR version of how hardware, software and software engineers actually work together (get big engine; go vroom; make car go faster) and it’s a fraud.

Before we get back to big Dave Rotman, one final very important bit related to the points immediately previous: Rotman and the people he quotes in the document seem to assume that a direct causal pipeline exists between the doubling or increase in transistor density and the increase in that chip’s speed and performance, and that this increment is measured in some unit called “computing power” which can, transparently and without transmission loss, be transferred downstream to end users, unwrapped like a Christmas present and “used”, enhancing by some unknown measure the domain problem it is applied to. “Computing power” is also dimensionless, it seems: it is not specified whether it is linear, exponential or stochastic. I guess we have to assume it’s all three at the same time.

From this, a final conundrum: how exactly do we know that it was enhanced “computing power” that caused the “computational progress” (another mysterious entity, left unexplained)? Determining whether it was, from an engineering standpoint, is actually a vastly complex and contentious undertaking.

A microprocessor is an enormously complex object. Even if, holding all other variables equal (also immensely difficult-I do not use adjectives such as “enormously” or “extraordinarily” lightly-this comes directly from my experience), you substitute one processor for another of greater “computing power”, how do you determine that the attribute “computing power” was the attribute (one among thousands) that was the causal factor in making the computation “go faster”? Or is “computing power” some heretofore unexplained coefficient which algorithmically or heuristically rolls up all the attributes of the processor into some easily handled object, like a dung beetle pushing a ball? Even comparing a single computation (profiling/benchmarks) on different machines is very difficult.

Essentially, even if there is a direct equivalence between processor speed/”computing power” and “computational progress”-how would you know without constructing a test harness and profiling stage that would probably cost more to implement than the chips themselves, or than whatever budget you have? And if there was no way of knowing before you actually purchased 10,000 of them and used them, wouldn’t this be an almost criminal case of caveat emptor? Naive consumers (the government) may take your bogus pipeline for reality and end up with either more “computing power” than they need; a serious case of shock that the faster chips do not in fact make anything go faster; or a placebo effect where the “computing power” is a NOOP and does nothing noticeable at all.
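
For the curious, the kind of harness I’m talking about starts out no fancier than the sketch below (the workload and the repetition count are placeholders; a real harness would also pin down the code path, compiler flags, data set and thermal conditions before anyone is allowed to utter the words “computing power”):

# Minimal sketch of a comparison harness: run the same workload many times on
# each candidate machine and compare distributions, not single runs. The
# workload here is a stand-in for the real application kernel.
import statistics
import time

def workload():
    # Placeholder computation; substitute the actual kernel under evaluation.
    return sum(i * i for i in range(200_000))

def benchmark(runs=30):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), statistics.stdev(samples)

if __name__ == "__main__":
    median, spread = benchmark()
    print(f"median {median * 1e3:.2f} ms, stdev {spread * 1e3:.2f} ms")

Run that, with the real kernel on realistic data, on each box under consideration, and only then do you have the beginnings of an argument about what the new chip actually bought you-which is precisely the expensive step the article waves away.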

The major beneficiaries of even a reality-adjusted “pipeline” would seem to be those entities which possessed the means to maintain enormous dedicated resources to carry out the kind of profiling and harnessing described previously (who would not be the “general computing” crowd bemoaned as being on its last legs at another point in this paper because of the use of specialized home-brew chips)-which, in light of our comments, seems to be the logical and unsurprising outcome of the reality-adjusted pipeline (the cash-rich specialized crowd would at some point realize that taking the money spent on the expensive profiling units and using it to design their own chips-which they would already know would boost performance, because they designed and manufactured them-would be the obviously superior option). I’ll return to this. In papering over this very gnarly subject Rotman may cross the line from foolishness to fraud.

 

Breaking News: Dave Rotman reports in an article published in the February 2020 issue of MIT Technology Review that in cases where all other variables are held constant the C programming language, developed by Dennis Ritchie at Bell Labs 50 years ago, will execute a computation many times faster than Python, an interpreted language developed 30 years ago by Guido van Rossum… in other breaking news: 1936 Olympic gold medal winner Jesse Owens outruns a horse… video at 11

We now get to the surprise I mentioned in the preamble. You remember way back in Part 1 where I extemporized on “How software engineers messed up Moore’s Law by not understanding Amdahl’s Law even though Moore’s Law is an autonomous, irresistible force of nature“? To reprise what I said:

Intel began to flog its multi-threaded/multi-core paradigm just when it became common to notice that Moore’s Law was falling apart in the early 2000’s. Not only was computing power or chip density not increasing by leaps and bounds anymore, but there were questions being raised about the declining marginal utility of computing itself. Intel’s (and other chip manufacturers’) multicore strategy was developed to sell chips it had already designed, allocated capital resources for and manufactured-whether there was really any effective demand or use case. The strategy succeeded by (in cahoots with the likes of Microsoft) compelling a rewrite/refactor/re-something of vast numbers of applications and frameworks using the multi-threaded/multiple core paradigm. ”Blazing” speed was the carrot. “Just refactor your code” the tagline.

This is what big Dave tells us some character named Neil Thompson-an economist at MIT whose “office is at CSAIL, MIT’s sprawling AI and computer center, surrounded by roboticists and computer scientists” [i.e., intermediate between the basement potato chip machine and the laser printer-editor], which everybody knows is even better than being a real engineer-has to say:

One opportunity[for improving computational performance and extending Moore’s Law by implication-editor] is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.

Twenty years later and the propaganda is exactly the same, to sell exactly the same Florida swampland to unsuspecting rubes-almost word for word. So what is it this time? Not lock-based multi-threading. The engineers have become cunning (like the fish in the Seine during the German occupation of Paris, which after a period of learning could not be caught by any amount of stealth or stratagem by the starving populace) and won’t take the hook. Probably knowing this, and the back story I walked through in Part 1, engineer-by-proximity Thompson gives it a pathetic go anyway:

Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.

As someone who has programmed in C/C++ for 25 years and Python for 20, this is so ludicrous on so many levels within the context of the document that it almost leaves me at a loss. Deconstructing this nonsense requires a level of technical detail that would be excessive for the scope of what we’re about here. I was going to attach an addendum to this section for any programmers or engineers or curious dilettantes undertaking this deconstruction but have decided otherwise. I’m depressed enough with Dave Rotman-I don’t think I have it in me to expend 1000 words breaking down this nonsense. Just take my well-informed and experienced word for it that this is pure and deliberate deception, taking advantage of what they perceive to be the ignorance of their audience, which the authors must know full well they are doing.
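
For anyone who wants the Jesse-Owens-versus-the-horse point made concrete without the thousand words, a toy sketch suffices (the sizes are invented and numpy stands in for “the more efficient C”; this is not CSAIL’s benchmark): the same arithmetic run through the interpreter loop versus a compiled path differs by an order of magnitude or two on any machine, and has since the day Python shipped.

# Toy illustration with assumed sizes; numpy stands in for a compiled C path.
# The same dot product, once in the interpreter loop and once in a compiled loop.
import time
import numpy as np

n = 2_000_000
a = list(range(n))
b = list(range(n))

start = time.perf_counter()
dot_py = sum(x * y for x, y in zip(a, b))   # one bytecode dispatch per element
t_py = time.perf_counter() - start

a_np = np.arange(n, dtype=np.int64)
b_np = np.arange(n, dtype=np.int64)
start = time.perf_counter()
dot_np = int(a_np @ b_np)                   # compiled loop over contiguous memory
t_np = time.perf_counter() - start

assert dot_py == dot_np
print(f"interpreter: {t_py:.3f} s, compiled path: {t_np:.4f} s, ratio ~{t_py / t_np:.0f}x")

That ratio is not a discovery about extending Moore’s Law; it is the well-known price of interpretation, which engineers have been trading against development time since long before CSAIL got around to measuring it.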

Put out more flags….

“A man getting drunk at a farewell party should strike a musical tone, in order to strengthen his spirit … and a drunk military man should order gallons and put out more flags in order to increase his military splendor.”

Lin Yutang

The final third of this document is so incoherent it took me quite a few re-reads to even figure out what point it was trying to make. It veers from pillar to post, with virtually nothing having anything to actually do with Moore’s Law, whatever one’s interpretation.

There is a melancholy fin de siècle feel to this whole article and it smacks of desperation. After mulling over how to approach this I settled on two opposing summations. The crux of Rotman’s argument, in the first summation and in his title, is that an era is ending and this is a disaster which must be averted by, not surprisingly, giving Dave and whoever the “we” he’s referring to in the title is beaucoup mountains of cash for a “computer chip Marshall Plan”. My counterpoint is the second summation.

Dave Rotman speaks parenthetically:

Moore’s Law lifted all boats, while the increasing trend towards specialized application chips will cause a balkanization of software and hardware into islands defined by specific problem domains and access to financial resources. In the long term this will retard the growth and improvement of general purpose computing. Furthermore, these specialty chips are less versatile than their more generic counterparts. Since Moore’s Law will not be driving the train, advances in “computing power” will stagnate, which will also de-incentivize research into new ways of computing.

This de-incentivization requires massive public investment to find new computing technologies. After all, we’ll need more computing power no matter what.

Counterpoint:

general computing technology – What exactly is “general computing”? [I had never heard the phrase until reading this article, immediately raising my suspicions that it is some sort of weaponized meme, like “wearables” or “social distancing”, being salted into the public propaganda stream. I discovered some breadcrumbs which insinuated that the phrase “general purpose computing” is related to some controversies concerning digital rights management, but I doubt this is what is meant here.] It seems that all actually useful computing consists of specializations of basic templates: input/output to block devices, protocol transfer and data marshaling across networks, task queuing and dispatching, etc. Software and hardware have always been balkanized into islands-they’re known as problem domains and are a perfectly logical way of applying computer technology to real world situations. It is also true that the concerns of those with the money and priorities have always dictated resource allocation-just as those with the money and priorities supposedly followed Moore’s Law up until now. Rotman implies some systemic difference where there is none. The outcome he foresees just doesn’t go the way he wants, which is a pretty feeble argument. Where exactly have Dave Rotman and his MIT cohorts been for the last 40 years? It also insinuates the notion that a certain faction pursuing its own ends is responsible for those dilapidated by the consequences of this pursuit, like tomato pickers made redundant by automatic picking machines. This may be true to some extent or false to some extent, but whichever it is, this is an extremely odd position for a representative of the very paragons of techno-Malthusianism (MIT) to maintain.

versatility – From the facts-on-the-ground standpoint, most application designers and developers don’t care what chips are in the boxes they work on and deploy on, for a good reason: they have no part or role or influence in choosing them in the first place. In 99% of cases, these decisions have already been made before the engineers are even assembled into a unit. It makes little difference how “versatile” the chips and processors are-whatever their limitations, by the time development starts you’re stuck with them. “Versatility” only matters to the people selling them-like Intel, which wants to amortize its development costs over as wide a market as possible. Which, to put it bluntly, sounds like a personal problem, as they would say in the schoolyard. So it’s mysterious exactly what this point is supposed to mean; Rotman seems to think it’s self-evident. It isn’t to me.

decline of “computing power” available to end users due to specialization of chips, which would de-incentivize investment in “general” designs – Ancillary to this, Rotman’s specialty market balkanization would still be opaque to the actual engineering of applications (engineers can want more “computing power” all they want; they get what they get). Engineers at even Google’s research facilities would still have to begin their design phase with chips that actually existed, had been vetted and profiled, and were deemed capable of solving whatever the domain problem was. Where these chips came from would still be a fait accompli, even if they came from the basement in the same building (Rotman’s setup would apply only if Google had some feedback mechanism specifically in place to deal with the lack of “computing power”, and procedures to go with it, as I mention below in speaking of internal research models). Rotman mistakenly (or cunningly) obscures the difference between a macro and a micro effect. Actually using a custom chip in a design is a micro event taking place within some engineering context. The side effects of using these chips as opposed to some other “general” chip only exist at the macro level as feedback (if everyone switched to the home-brew chip, maybe whoever makes the generic chip would not dedicate resources to further extend it). But so what? If enough decisions are made at the micro level to use the home-brew chip, and these decisions metastasize across a large industry segment, then the macro result is that the home-brew chip becomes the “generic” chip and no research and development is necessary to develop a new general purpose processor.

Rotman completely misses the possibility of the “specialty” chips becoming the “generic general computing” chips by widespread adoption. For what reason would Google not decide to make money off of selling a chip it had manufactured if there was demand? It is reasonable to assume that Google or whoever would then license this chip to a dedicated chip maker-like Intel. Which is exactly the opposite of the effect that Rotman posits. In other words, what comes to be considered a general purpose chip is not actually decided unilaterally by the chip manufacturers’ R&D but by endogenous forces within the software ecosystems and endogenous forces within the semiconductor industry (a chip manufacturer may or may not decide to manufacture the new chip depending on its own internal compulsions: it may have just released a new, very expensive to develop chip whose costs have to be amortized, for example, regardless of what the software ecosystem is demanding). This concurs with Tuomi’s observations that instead of Rotman’s deceptive model of a pitch and catch between the chip makers and downstream consumers, it’s actually more like a juggling act-there are quite a few balls in the air. Although it appears that a free market argument is being made here, it is actually (again) the opposite. Rotman assumes that a market solution as we describe above (which would actually be a scuffle between technological cartels, but that’s the best anyone can hope to call a market these days) couldn’t work this out and that a command economy directive must be issued. There is no reason to assume this outside of it being his desired result.

It is also completely unconvincing that home-brew chips will retard the growth of “computing power” (taken with a large dose of salt that we actually know what this is, for argument’s sake). It seems that this growth will just be more distributed and localized-it won’t stop. And even if it does, so what, again? How can Rotman and his crowd be so sure that more “computing power” will always be needed-is this a scientific projection or just a unilateral declamation? Maybe the decline in the marginal productivity of computers is not a reflection of technological investment or innovation or the lack thereof, but a reflection of history. The same history which is not as sure as Rotman and MIT seem to be that the lion’s share of the productivity boom of the 90’s was owing to computer technological expansion and their effort. There is much ambiguity and contentious data there as well. If “general computing” chips are only delivering virtually non-existent productivity dividends to “general computing” domains, then maybe the days of “general computing” are done and have plateaued-making “specialized” computing not a dangerous anomaly, but the more efficient of two available paths. And maybe this is the real heart of the matter.

Maybe the best days of computer productivity are already behind us. There are strong indicators that this is the case. Investment in computer technology may well be investment in something with increasingly diminishing returns, except for those with hegemonic advantage (the police state) or those in a position of direct extraction (the tech giants). There is no existential or historical compulsion that it has to be otherwise, despite the rantings of the cybernetic-utopians. Increasingly, human beings are gaining little in personal productivity by using computers and losing a great deal by having computer technology used on them.

put out more flags and give “us” mo’ money – Finally, let’s suppose against good reason that Rotman is correct-there will be a lack of incentive to develop more “computing power” independent of whatever money-making scheme you’re working on. Wouldn’t that be a logical process of capital allocation and engineering efficiency? Why would I allocate valuable and limited capital resources to develop technology useful to your capital enterprise which may well compete with mine? The incentive/feedback (pitch and catch [engineer: “we need more computing power!”; chip makers: “we got your back! give us two months and it’ll be on the way!”]) cycle Rotman is describing appears to actually work only with internally constrained research models, as we hinted at above, like those created inside companies like AT&T (Bell Labs), Intel (Intel Labs), Xerox (PARC) and pseudo-research corporations like those formed under DARPA-which is where virtually all the computing technology we have now came from (and, let us not forget, the MIT Media Lab)-or vertically integrated device manufacturers like Apple (until very recently) or IBM. Since this model bore/bears no resemblance at all to any Marshall Plan (which I doubt they even really understand what it actually consisted of), and this proposed Marshall Plan bears no resemblance either to how all the chip prosperity and pockets full of jelly beans that Rotman crows about incessantly in fact came about, what makes him and his troglodytes so sure that, if the problem they describe even exists, this is the solution?

If I seem a little over the top here, understand it as a response in kind. In light of what has happened over the past 20 years and my own personal involvement in this, this article seems almost a parody. But it is not. There is method, agency and agenda behind it, and I hope over the course of these posts you can now figure out for yourself what it is. I say fini to Rotman with a heartfelt query: if as an engineer and free thinker I was embarrassed to read this article, how is it possible that you were not in kind embarrassed to write it? For the bottom line is that no greater humiliation can be imagined for the cybernetic-utopians than being forced, by history of all things, to beg, like any other corporate sad sack and pauper, for handouts from the government-the very antithesis of technological autonomy-to avert the necrosis of a “law” that “defines progress itself”. But a definition obviously not progressive enough to imagine its own demise.

And we must remember that the con (more “computing power”, whether anyone wants or needs it) always works for the end game (the hive).

Addendum: One last pebble. I mentioned previously the Dolchstoßlegende, the apocryphal stab in the back given Moore’s Law by engineers not properly understanding Amdahl’s Law, half-wittedly resurrected here by the whiz kids at MIT CSAIL discovering that a half-century-old programming language comes in pretty handy efficiency-wise. But a new stab in the back is added-CPU Bolsheviks want to make their own chips for their own purposes with their own money, and everybody else can drop dead. Which seems to me to be a pretty defensible position all around, but constitutes some sort of karmic transgression according to the editor at large of the MIT Technology Review. But what we have just described is of course a vertical manufacturer such as Apple or IBM. So history (again) impinges on the fantastical ramblings of the Inevitabilists. Apple and IBM have been doing exactly what Rotman says will destroy “general computing” for decades: producing their own designs for specialized applications (their own equipment) largely independent of end user demand (just as Tuomi remarked decades ago). This independence led to disaster for the PowerPC, which could not find downstream users (operating systems) to adopt the chip except for the Apple Macintosh.

It is of great interest to note that the PowerPC (initially developed by a consortium of Apple, IBM and Motorola-AIM) was trying to do something we hinted at above: make their specialized chips into general computing chips for multiple platforms unilaterally. It failed. And it failed for exactly the reasons that would be expected from our discussion-there were too many balls in the air. The PowerPC, a faster, more powerful processor, was passed over by the majority of operating system vendors (including, remarkably, IBM) in favor of slower chips, and the PowerPC lived on in the Macintosh.

So in an historical case where Rotman’s pitch and catch was in effect, where customers were in need of more “computing power” and a vertically integrated consortium produced a more powerful chip in response, the chip was not widely adopted (not even by a member of the consortium itself-IBM) for reasons which had nothing to do with computational performance but a lot to do with market timing and endogenous forces within the software industry [“A 1995 InfoWorld article diagnosed the problem facing IBM with the PowerPC architecture: ‘It needs volume to build the necessary infrastructure to compete in price and third-party support with Intel and Microsoft, and without that infrastructure, third parties are unlikely to support the PowerPC,’ authors Ed Scannell and Brooke Crothers wrote. ‘IBM, however, has been very slow in trying to create that infrastructure.’”-from an excellent article from which I crib much of the PowerPC info referred to here]. But “failed” may be too strong a conclusion-the PowerPC architecture lives on in a number of permutations and, although nobody made a sultan’s fortune, nobody completely lost their shirts either. I also note that the PowerPC makes clear that all new chip designs are specializations-the PowerPC/RISC was a specialization of the CISC architecture. According to Rotman’s scenario, the development of the specialized PowerPC should have led to a diminution of investment in “general computing” inside the vertical structure (AIM).

But as we have stated, Rotman confuses micro effects with macro effects. Inside the vertical structure there was no such thing as “general computing”, just the computing modes they were trying to supply. There would have been a general macro effect only if the PowerPC had expanded to subsume and obsolete the other chips in general use-which it did not. The opposite of what he proposes actually happened: the chips inferior in “computing power” (Intel) outside of the vertical seized the day (the macro effect of engineering micro selection) and the PowerPC was marginalized-rusticated to embedded applications and video consoles, not the worst that could have happened. And if the PowerPC had obsoleted the Intel chips, we would now consider the PowerPC the “general computing” chip, as I pointed out above.

There was a diminution of investment in “general computing” inside of AIM, but not because the specialization succeeded (a precondition for Rotman’s scenario) but because it failed and the consortium had to recoup and amortize its investments by making the chips more versatile (which resulted in them being used in less “general” contexts such as game consoles), not less-again the opposite of what Rotman proposes. We also see here that Rotman oddly seems to think that “general” and “versatile” mean the same thing in all contexts, when in most engineering contexts-including this one-they are ambiguous. Using a “general” chip usually means truncating or discarding features of your design to accommodate its limitations, making your design less versatile, not more. Conversely, using a versatile chip makes your design more general, in that you can simply disable or ignore features you don’t need and customize it to your own problem domain. “Versatility” depends on what end of the stick you’re holding and has different semantics depending on whether you’re supplying or consuming or procuring or engineering. This confusion often exists when trying to engineer a system or application-the procurement/bean counters and the engineering people are using the same words to apply to different things, always a recipe for disaster (I have bad personal experience of this). “Versatility” to the procurement people may mean the chip is available from a large number of vendors or is free of tariffs or is subsidized under some government boondoggle, which, as mentioned in the “versatility” point above, means nothing to the engineers-just as whether the cache is on board or in a separate socket means nothing to the procurers, who may take this into account but give it a much lower priority. We can see that “general applicability” and “versatility” are not attributes but axes on a graph, with whatever you’re doing producing a plot somewhere within the quadrants. None of this is as straightforward or non-contentious as the MIT Review would have you glibly believe.

Finally, we see again that the matrimonial relationship Rotman wants us to believe exists between chip makers and consumers is not actually believable (even inside a vertical). The main motivation for the creation of AIM, after all, was Apple’s dislike of being dependent on chip manufacturers. And of course the irony of ironies is that Apple, by its own munificence, has adopted Intel as the core supplier for its devices.


Our second bit of extra bonus entertainment follows. As I proceeded with these posts I felt more and more the necessity to tease out and connect cybernetic-utopianism and Inevitabilism with their not at all self-evident roots. Some may be surprised, as I was to some extent. Others with more extensive knowledge of the history of philosophy will not be.