2 – Cybernetic-utopianism and the tale of two laws: Moore’s Law and Amdahl’s Law


How the cybernetic-utopians plan to defeat the second law of thermodynamics by chicanery alone

We must first acknowledge that technologically inspired utopias come in a variety of flavors these days. There are the usual and well-worn “colony on Mars” panaceas, tiresomely familiar to anyone old enough to have watched an Apollo launch on a black and white television. In 1970 these promised rockets to Mars by 1995, flying cars, etc. All of this was already supposed to have happened by now, but in actuality we seem further away from any significant space exploration than we were when Sputnik was launched, and for the same reasons now as then: space is inconceivably lethal to supersized eukaryotes such as ourselves, and this doesn’t seem likely to change anytime soon. Since we can’t even seem to travel in and out of low earth orbit without immolating ourselves in spectacular fashion with fairly regular frequency, the odds on anyone from here physically leaving the solar system before Homo sapiens sapiens bites the dust or a red giant sun cannibalizes the inner planets are probably as close to zero as you can get. (Significantly, the one pie-in-the-sky technological prophecy from the golden age of science fiction that has come manifestly true is the panoptical police state of Fahrenheit 451. Funny, that.)

But extrapolations of present trends into the future, as most of the “amazing science” type baloney is, probably do not technically constitute utopianism. Knuckleheads like Elon Musk aren’t dreaming of an idealized new world; they just want some place to drink champagne and not pay taxes in peace - hardly inspirational material. There is no vision, just a new real estate opportunity. The Singularity and Transhumanism jabronis clustered around Google and MIT are another case entirely. They do have a vision of an idealized future world, but not one any truly sane person would want to be a part of. It’s hard to tell how much of the Singularity show is propaganda, how much is a genuine utopian program and how much is pants. I strongly lean towards propaganda and pants, but there is no denying that the Singularity is raking in serious bank (check out the prices on the “university”) and has agglutinated a not insignificant number of groupies - the “Singularity” and religious insanity seem to share a large number of attributes. Virtually all the vaguely legitimate bits of the Singularity are pilfered from the field of cybernetics (thus the cybernetic-utopians).

However, the really scary bunch, and the ones most relevant to our discussion, are a subset of the Ray Kurzweil Singularity crowd, i.e., fanatics (whose John the Baptist seems to be Alex Pentland of the MIT Media Lab). Shoshana Zuboff in her seminal Surveillance Capitalism gives all the disturbing lowdown on this segment. Led by Eric Schmidt himself, in absentia, as consigliere, this group envisions a world which is essentially a closed feedback loop. Where industrial capitalism modeled the society it created on the factory, essentially turning all institutions - schools, hospitals, prisons, you name it - into idealized manufacturing plants where the “products” instead of widgets are students, sick people and prisoners, the “Inevitablists”, as we shall call them, idealize the hive, not the factory, as the social template. By gathering data from all possible sources in the environment (both external and internal), modeling (learning) trends heuristically in real time, and communicating, also in real time, with all nodes of the hive mind (who still naively call themselves “people”), the Inevitablists offer to the controlling elite (the only segment of the population who “learn” from the data) what Zuboff calls “guaranteed outcomes”. These outcomes are the currency of the new social order. Where industrial capitalism had inputs and outputs whose inherent asymmetry in time and place gave uncertainty its value (markets), the hive/guaranteed-outcomes context is exactly the opposite: its value object is certainty. But how? There is no perfect feedback/adiabatic system. All such systems must be limited (usually asymptotically) by latency or bandwidth or heat loss or the loss of angular momentum or the loss of something at some non-zero value.

The Inevitablists think they have the solution: using ever more finely grained accretions of data and real-time heuristics as feedback, their model is essentially one of “micro-coercion”. With enough data, a large enough interconnected hive of volatile data inputs (you and me) and fast enough feedback caches, such a model can succeed in convincing any one of the nodes (you or me), at any point in time, that their current environmental context (“choices”) is identical with the hive system’s desired result (“coercions”). The larger the cache (history/personal data) and the smaller or more over-matched the control window (the spectrum that has to be coerced, i.e., a tiny phone or television screen and your unconscious cerebral response mechanisms, which are also outgunned by similarly hoarded visual cues the hive knows you respond to), the more optimal the guaranteed outcome. The pseudo-philosophical justification for all this is usually teleological (of which cybernetics is a philosophical superset). That is, all of this process happens because of a purposeful compulsion towards an end goal (the Singularity) that is not only inevitable but millennial.
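
To make the mechanics concrete, here is a toy simulation - entirely my own construction; the function names, step sizes and numbers are illustrative assumptions, not anything taken from a real system. A node “freely” picks the option closest to its own preference from a menu the hive has skewed toward its target; iterate that, and the node’s preference converges on the coercion while every individual pick still feels like a choice:

```python
# Toy micro-coercion loop (illustrative only). The hive never forces anything:
# it merely constructs each round's menu from its cache of the node's history,
# so that all "free choices" lie between the node's preference and the target.

def hive_menu(preference, target):
    """Offer choices clustered between the node's preference and the target."""
    step = (target - preference) * 0.2          # small nudge per round
    return [preference + 0.5 * step, preference + step, preference + 1.5 * step]

def node_choice(preference, menu):
    """The node freely picks the menu item closest to its own preference."""
    return min(menu, key=lambda option: abs(option - preference))

def simulate(preference=0.0, target=1.0, rounds=60):
    for _ in range(rounds):
        preference = node_choice(preference, hive_menu(preference, target))
    return preference

final = simulate()
print(f"final preference: {final:.3f}")   # converges toward the hive's target of 1.0
```

The node always takes the smallest available nudge, yet after enough rounds its “own” preference is indistinguishable from the hive’s target - the identity of choice and coercion described above.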

This is the secret of the cell phone. The subject is concentrating on a fairly primitive and limited “data spectrum” - the screen. The hive system, which at any moment includes the user and millions of other nodes in virtual real time, has a vast asymmetric advantage: a huge cache hoard of the user’s data and near real-time response. In effect, every moment you’re looking at a cell phone you’re witnessing a simulation of what the hive is guessing you want to see. Just as you cannot see a television scan raster because it’s moving too fast for your eye’s refresh rate, you don’t see the hive’s modifications to your visual (or auditory) environment. And just as television is an illusion of real-time motion, when it is actually thousands of still images flashed at a rate that appears to be motion, the hive/outcome engine can make your choices appear to be identical to its coercions. This appearance is an illusion, but the “guaranteed outcome” is real: you will do what you have been micro-coerced to do while adamantly maintaining it was what you wanted to do all along. The cybernetic feedback loop is its own justification. And unlike the Singularity or flying cars, we know for sure this actually works and has been prototyped, tested and mass deployed. At the point you can no longer differentiate between your own contextual “choices” and the hive’s micro-coercions, you have become a full-duplex member of the hive (while connected to the hive you are sending and receiving simultaneously - a capability of the latest radio chips [5G]). You have been effectively absorbed. The only true market that now counts (or may even actually exist) is the market in predictions/coercions, fed by the tributaries of confiscated data streams.

This paradigm works not only on cell phones. It is being used as a general model for all networked streams and devices: limit the subject’s visual and auditory inputs to a predetermined minimum; ensure the outputs/responses are of a kind discrete enough to recombine and manipulate without requiring an obvious complete “refresh” of the predetermined window or environment; and ensure that input from the subject does not force responses from the processing engine faster than the engine can calculate the heuristic result, which may or may not be equal to the latency and bandwidth quality of service of the device. In other words, no matter how much memory you have, or how fast a processor, or how fast the network, the response will never be faster than the heuristic calculations - by design, you won’t receive a response until Amazon figures out how many pairs of drawers, and in what color, you purchased over the last 5 years. This is also a feature that ensures the hive engine always wins the game: like marking the cards in a poker game, the hive will serialize the interface at the point of attack, placing its heuristics at the top of the queue, overpowering any local filters. (This serialization is carried in the advertising streams; the advertising itself is of secondary importance. Enough data can be cached locally to “recreate” very sophisticated custom environments from very small incremental deltas - “advertising” itself may be another confabulation layer to mask the actual control layer.) The preceding also indicates that the primary cause of excessive network latency (response time) for most newer consumer implementations is not the hardware of the device or the quality of service of the network, but the additional delay and load due to the computational heuristics and data caching necessary to approximate real-time effects - a conclusion which concurs with my considerable experience in distributed network engineering.
All of this has been studied extensively at PARC and MIT and elsewhere and is very mature.
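
A back-of-the-envelope model makes the latency point plain. The numbers below are purely illustrative assumptions of mine, not measurements; the structure, however, is the argument from the text - the user-visible response is serialized through the heuristic pass, so a vastly faster network barely helps:

```python
# Illustrative latency budget (assumed milliseconds, not measured data).
# The response is serialized: network + device + heuristic/personalization
# + cache synchronization, so the heuristic term dominates by design.

def response_time_ms(network_rtt=20, device_render=10,
                     heuristic_pass=250, cache_sync=80):
    """Total user-visible latency, serialized through the heuristic engine."""
    return network_rtt + device_render + heuristic_pass + cache_sync

baseline = response_time_ms()
fat_pipe = response_time_ms(network_rtt=1)                    # a 20x faster network
no_heuristics = response_time_ms(heuristic_pass=0, cache_sync=0)

print(baseline, fat_pipe, no_heuristics)   # 360 341 30
```

Under these assumptions, upgrading the network shaves 19 ms off a 360 ms response, while removing the heuristic and caching passes would remove 330 ms - which is the claim being made about where consumer latency actually lives.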

This is why they have killed movie theaters and gone all out on streaming video. It is highly unlikely at present that two people watching the same streaming feed are actually seeing the same video. The newer display devices feed information back to the hive from your personal environment, which modifies in real time what you’re looking at by sourcing your cached data hoard. This is not possible in a theater, which is why theaters have to go. With the hive running the projector, everybody can be made to like the movie to some varying degree by the coercion of your “choices”. This would also indicate that less effort will go into making you enjoy the movie and more into making you think you enjoyed the movie. This would also be immeasurably cheaper than actually making movies in the conventional manner. The logical outcome is that Hollywood will wither and die eventually. Every movie will be a video of a monkey in red overalls eating a banana, with a different title. Your consciousness will be ransacked in real time for enough cues, aided by the hoard heuristics, to make you “like” it. And from the handful of movies I’ve seen over the last 10 years, they’re almost there. I also note that despite not watching movies for the most part, I do hear tangentially, on a fairly regular basis, the streaming networks boasting and bragging about how much money they’re spending on “content”, as if this were reason enough to sign up for the exorbitant rates. What they actually mean is that they are spending this money on control and neural habituation systems to make you “like” whatever they supply. In a way, they are not being disingenuous. They’re promising more content that you’ll “like”. They don’t say whether the content will be any good in any aesthetic sense, or clarify the mechanism by which you determine whether you “like” it or not. Maybe neither of those considerations even matters any more.
The word “content” has a disquieting resonance I can’t quite put my finger on.

Referencing the above, I strongly suspect that the lion’s share of these funds is going to upgrade and extend the feedback loop we have just described. In the long game, cyber-neural conditioning will push “production costs” to zero and obviate “content” altogether in the sense it is understood now. This mirrors “food” production, which is following the exact same logic: it is vastly more economically feasible (from a devil’s advocate position) to invest capital resources into methods to convince you that you like the simulacrum of food which most people consume on a daily basis than to make the food actually better. At some point critical mass will be reached and consumers won’t be able to discern food they “like” from food they are coerced to eat - the identity we mention above which defines the hive itself. The cost of producing “food” can then be reduced to effectively zero margin, because people will then eat sawdust or possum gristle and “like” it. We may be very close to being there already. That’s why Bill Gates is investing billions in fake meat gestated from a stew of old rags, turpentine and acorns. He’s working both sides of the con against the middle. He produces the fake “food” and also the cybernetic conditioning software and hardware to make you “like” it - another “guaranteed outcome” micro-coercion.

If it seems I have veered off topic, it is actually not so. The end game of the cybernetic-utopians is the unimaginable profit and gain from a control grid with which to make these things guaranteed outcomes. Although it may seem that there is infinite complexity in this design, this too is a hoax to boggle the minds of the victims. The essential game plan I have just described is relatively straightforward and a completely logical evolution and extension of previous methods. Guaranteed outcomes are a logical outcome of deception by advertising and of neural and sensory saturation, enabled by hardware and software, ramped up to the point that they completely change the paradigm. But to emplace this paradigm still requires chicanery, or myth making if you prefer, since there remain dangerous pockets of resistance. These include myths for the victims (the myth of technological bounty without limit) and myths for the perpetrators/engineers (the smartest guys in the room). The comments in the last few paragraphs are merely to show what the mythopoesis and indirection are actually bearding for. The con is deployed both internally and externally, in a manner of speaking: externally to drive the sheep, and internally to drive the shepherds. And I should especially stress that none of this has anything to do with artificial intelligence, the diversionary booger man which constitutes another layer of indirection.

How the cybernetic-utopians built an innumerable node hive network by mooching off Moore’s Law until Moore’s Law stopped being a law and didn’t work anymore

So what exactly does this have to do with Moore’s Law and software engineering? Plenty. As mentioned in the previous section, the huge impetus towards multicore computing was not largely motivated by use cases or user desire. With the exception of streaming video (made possible primarily by network infrastructure capital upgrades, not software or chip hardware), most end-user devices work pretty much how they did ten or even fifteen years ago. The drive was compelled by the desiccation of Moore’s Law and the stagnation of CPU speeds.

Yet one of the core mantras of the Inevitablists is that exponential growth of computing power and connectivity is ahistorical (that is, inevitable). But in the early 2000s Inevitablism was suffering dual blows: history had caught up to Moore’s Law and chip complexity, and by say 2010 it had grabbed multicore programming by the collar as well, revealing it to be somewhat less than the silver bullet advertised 10 years earlier. Another tin can tied to the tail was a growing realization that increased computing power had only marginal utility in an increasing number of fields (you can only type so fast, so a word processor has an upper bound past which an increase in speed or computational power simply flat-lines as a driver of productivity). But why exactly is exponential computing growth central to the cyber-utopians?
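
The arithmetic of the “stagnation” claim is worth seeing once. Using illustrative figures of my own choosing (a ~3 GHz desktop chip in 2005 and the classic two-year doubling cadence), here is what an unbroken exponential would have delivered:

```python
# Extrapolating an unbroken Moore's-Law-style doubling of clock speed.
# Start point and doubling period are assumptions for illustration only.

def extrapolate_ghz(start_ghz, start_year, end_year, doubling_years=2):
    """Project clock speed forward under a fixed doubling cadence."""
    return start_ghz * 2 ** ((end_year - start_year) / doubling_years)

projected = extrapolate_ghz(3.0, 2005, 2020)
print(f"projected 2020 clock: {projected:.0f} GHz")
```

The projection lands in the hundreds of gigahertz, versus the low single digits actually shipped - which is roughly the gap the Inevitablist narrative had to paper over.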

The answer to that question is derived from the points made about guaranteed outcomes. What we did not mention above is that it is imperative that the expansion of the hive be seen as what we have already defined as “technologically autonomous”. That is, the growth of the micro-coercion network had to appear organic instead of contrived and sentient (the Inevitablists could not simply demand unilaterally that everyone get a new device or radio chip every year when there was no technological compulsion to do so, without arousing suspicion or, even worse, a thorough investigation into what it was they were really doing). As long as clock speeds and chip density were increasing steadily, the Inevitablists could use this as the wolf’s cloak to expand the hive via a simulated demand for “new” and “faster” devices. And each “new” and “faster” and “upgraded” device contained backdoors, trojans, rootkits and call-homes that expanded, fine-tuned and trained the hive network. And this expansion reinforced the posture that all of this was occurring independent of politics or history (autonomy). However, when the wheels started to come off the hardware bandwagon, the raison d’être of “technological autonomy” could not be sustained - and it seemed that technology was not so autonomous after all.

We see clearly here how technological autonomy and Inevitablism are symbiotic but not the same thing: Inevitablism posits that the technological artifacts and their accompanying infrastructure will emerge whether history or politics like it or not; the only way this statement can be true is if technology itself is similarly unconstrained. Historically unconstrained technological artifacts cannot be spawned by a process which is contingent or historical (for the same reason gods cannot be restrained by human agency). Moore’s Law (the particular context of technology we are operating in) must be autonomous for its artifacts (multicore processors) to be inevitable. The breakdown of one undermines the other. And again, to restate: the narrative/mythopoesis is weaponized. It’s not just a cover or misdirection - it’s the active ingredient of the whole project. Without the yeast of the narrative, the cake (pilfering and stealing) won’t rise.

It also became clear the wolf would need a new cloak (myth) to finish its ahistorical task (a Trotsky-ism from back in the day; the Inevitablists, remarkably, have a great deal in common with Trotskyites, especially their language, which is virtually identical). The hive required a new parasite to propagate the virus of total information awareness (remember that one from way back in the mists of time?)... I mean... er... ahem: “The collective benefit of sharing human knowledge and creativity grows at an exponential rate. In the future, information technology will be everywhere, like electricity. It will be a given” [Eric Schmidt, The New Digital Age]... yeah... that’s what I meant to say... I don’t know what came over me. It would also require a scapegoat for the “inevitable” computing growth grinding to a halt, to square the circle.

Remember the quote from the Manuels’ book in the first section: “…and if disciples, religious or secular, discover that a specific timetable has not been realized, they manage to revise it ingeniously and preserve the credibility of the whole unfulfilled prophecy.”

How the cybernetic-utopians continued to expand their hive network after Moore’s Law stopped working by bogarting everybody’s device

Paradoxically, software engineering was enlisted to fulfill both requirements (Trojan horse and scapegoat). As we stated in the previous installment, it is only necessary to maintain the coherence of the myth strategically. Once the objective has been secured, the modifications can be discarded. First, let me state that I know a good deal about operating systems, including Android, at a fairly low level. When I first encountered Android in person as an engineer, about 10 years ago, I was appalled. It was a Java-language superstructure riding on top of a fairly decrepit Linux kernel with an ancient C-language application binary interface (Bionic) and a goofy proprietary inter-process messaging interface. If you don’t know what any of that means, just remember none of it is good, engineering-wise. Things have improved a great deal since in all of these respects, but all is spoiled by one caveat: it is orders of magnitude more difficult now than it was even five years ago to do what you want to do on Android, as opposed to what Android wants you to do. This extends even to gaining access to the operating system in a non-opaque manner. “Rooting” Android (modifying the operating system privileges to gain access to low-level functionality), once fairly easy to do and little more than an annoyance, is virtually impossible on newer Android units. Some fairly advanced things I wrote six years ago to run on the platform are dead ducks - the interfaces I was using are no longer accessible or even documented. Android is not open source as many believe - some bits are and some bits (the most important ones) are not - so this is just a fait accompli to me and anyone else in the same situation. None of this is particularly ominous, knowing software companies as I do.
What places this on our radar is that almost simultaneously with these transmogrifications, only certain types of applications (through Google’s monopoly, Google Play) were now “certified” for the platform, and it became immensely more difficult to get applications to work which were not. Superficially, this seems perfectly logical. But to see how extraordinarily odd this really is, one needs to go through a few thought experiments within an engineering context - not a marketing one.

Android and iOS (Apple) are putatively operating systems. The operating system (OS) sits in between the applications you run and the hardware, using the hardware drivers as the interface between the two. An operating system, for example Linux, is usually limited to and concerned with only the applications it can run, not the applications it will run - the latter would bring another layer of complexity into the non-user space of the system, with instability as the side effect (Windows). Most operating systems do not even bother themselves with the will-run query. If the application is not compatible with the binary interface of the OS kernel, nothing will happen and it won’t work. But what about malicious apps? “Malicious” is not an engineering term or category - it is relative to what problem domain your application is trying to solve, something the OS by design is ignorant of. The OS offers only services: memory management, disk resources, network input and output, etc. It knows how to give you what you ask for; in most implementations it is not capable of figuring out whether whatever you’re asking for is “ok” with you. The OS, again by design, doesn’t know you or what “ok” means. This is why on Linux it is possible to blacklist drivers (wifi cards, network cards, peripherals) which behave badly - this is within the domain of the OS, which treats these special “applications” as part of itself. It is not possible, for the most part, to do the same for any specific application or class of applications, which to the OS is just a binary image that gets loaded into memory on a pass/fail basis.
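
The pass/fail point can be sketched in a few lines. This is a toy of my own, not kernel code: it mimics the shape of the decision a conventional loader makes (a format check, analogous to Linux recognizing the ELF magic bytes) and, more importantly, what is absent from it - any notion of the application’s purpose or “okayness”:

```python
# Toy loader illustrating can-run vs. will-run. A conventional OS answers only
# "can this binary run?" (format/ABI check); "malicious" is not a category it has.

ELF_MAGIC = b"\x7fELF"   # real Linux executables begin with these four bytes

def can_run(image: bytes) -> bool:
    """Pass/fail format check, analogous to the kernel's binfmt dispatch."""
    return image.startswith(ELF_MAGIC)

def load(image: bytes) -> str:
    # Note what is missing: no vetting of intent, origin, or problem domain.
    return "loaded" if can_run(image) else "exec format error"

print(load(ELF_MAGIC + b"...rest of a valid binary..."))   # loaded
print(load(b"MZ...a Windows binary..."))                   # exec format error
```

Everything that passes the format check gets loaded; everything else fails. A store-level “certification” layer is a different animal entirely, which is exactly the oddity the thought experiments below probe.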

I know this is stupefying to non-engineers, but bear with me. The Android/Google and Apple/iOS ecosystems tightly control what sort of applications can be installed by their operating systems. Keep in mind the exciting discussion about operating systems we just had. Google Play is advertised as a sort of software mart. You select applications and then install them. It filters out apps by not even listing ones it does not approve of, for any reason. However, until recently it was possible to install applications on Android by a more direct means, similar to how it’s done on traditional Linux installations. I will not explain this, as it is even more stupefying than talking about operating systems; just trust me. So, following from this, even if an application is blacklisted, I should be able to install it outside of the Android Skinner box. This was true. Now most orthogonal applications (especially those you have written yourself which treat Android as simply another operating system instead of an ecosystem, for example) do not work - and by “do not work” I mean do not do what you want them to do. Anyone who has not felt the pain of writing applications for Android will not understand that Android decides what “work” means, not you. Those which have been approved do work.

Here are the thought experiments. If the OS is capable of employing some sort of internal vetting system, independent of Google Play, to install or not install apps (which obviously it now can), what’s the real purpose of Google Play? If this vetting is operable, it must somehow be in contact with a remote repository to be updated; but in my experiments the results are the same even if all Google services are completely disabled (which, despite what is commonly believed, they can be, if you know what you’re doing), indicating that the OS vetting capability is opaque to the user and owner of the device. Why is it opaque? And what’s the purpose of having two vetting systems, one hidden on the device and another publicly front-facing? I will leave you with these experiments and one final interesting tidbit: as of June 14, 2020, Android has announced it will no longer support installing applications by package units (the method I alluded to above). I believe this is a result of the cloaked vetting system monitoring the device and snitching on any alien interlopers. Comments about this seem to point towards some sort of application store scuffle with Huawei. But there would be no real way of knowing whether Huawei was backdoor installing from the OS without the snitch.

I essentially believe that the “App” ecosystems are, and always have been, smokescreens for the expansion, syncretism and command and control of a real-time unbounded-node hive network which can operate in either point-to-point or peer-to-peer mode, or even both simultaneously (this is possible). This is an existential sine qua non for the cyber-utopians, I believe, since the previous continuous hardware and infrastructure churn was drawing down. This conflict is framed as a scuffle between two “App” providers. What it really seems to be is a much weightier battle: the Chinese (or whoever the Chinese are fronting for) are aware of how this hive is being built (they’ve built one themselves the same way) and want to expand into new territory. It is also conceivable that this “conflict” is a carefully planned subterfuge, a nerf pillow fight between two faux enemies to achieve the end game of software and hardware ferment cloaking the deployment of the hive systems. It is even more conceivable that, following from this, there is actually only one side, and the faux enemies take turns kicking each other in the pants in public to thimblerig the rubes.

The types of “certified Apps”, though immensely variegated on the surface, are very similar in one respect: they seem to possess a great deal of “black box” functionality, primarily in interactions with the Android runtime and the communication subsystem. The systems I wrote for Android avoided the Java runtime altogether and called directly into the underlying Linux OS in C, C++ and Python. I wrote very sophisticated web servers, asynchronous stream processors and other advanced artifacts which for the most part were exactly the same code I used on non-Android systems. This was despite constant warnings and finger-wagging from the Android documentation that this sort of thing created “instability” or “security issues” (without specifying what exactly any of this actually meant). I ran into none of this, and I worked in this arena for about two years. There didn’t seem to be any problem on most devices running Android in ignoring the Java runtime entirely and treating the device or phone as just a small, greasy, eccentric Linux computer with a poorly designed operating system. I even got Python graphical user interfaces to work in a fairly straightforward manner, even though Google insisted it was not “doable” within the “spirit of Android”. I had no idea what that meant then and still don’t now. I was away from Android for about two years; when I returned, around 2018, and tried to do the things I had done before, not only did they not work, the public interfaces I had used had vanished.
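
To give non-engineers a feel for what “runtime-free” code looks like, here is a minimal sketch of the genre (my own toy, not the systems described above): a tiny HTTP server using nothing but the standard library and raw sockets. Nothing in it knows or cares whether the Linux underneath is in a phone or a desktop, which is precisely why such code used to run unchanged on older Android builds:

```python
# Minimal socket-level HTTP exchange using only the standard library.
# No Java runtime, no platform SDK: just the services the OS actually offers.

import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Accept one connection and answer it with a fixed HTTP response."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen(1)
    bound_port = srv.getsockname()[1]

    def _worker():
        conn, _ = srv.accept()
        conn.recv(1024)             # read (and ignore) the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()
        srv.close()

    threading.Thread(target=_worker, daemon=True).start()
    return bound_port

port = serve_once()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n")
reply = cli.recv(1024)
cli.close()
print(reply.decode())
```

The point is not the server itself but its portability: the OS supplies sockets and threads and asks no “will-run” questions, so the identical code runs anywhere the kernel does - until the platform interposes itself, as described next.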

Let’s put this anecdotal, but not unrigorous, information together with our previous commentary on the ebbing of the ever-expanding processor tide. Unable to use this tide any longer as a stalking horse to expand and syncretize the hive/outcomes engine, it seems several things happened simultaneously, in a complementary fashion:

  • The devices were “hardened”. Only vetted apps were permitted on the device without resorting to extreme hijinks. More importantly, many of these vetted apps were inscrutable as to their exact interactions and contexts within the Android subsystem. And there did not appear to be any way for inquiring minds to find out. I reference the thought experiments we walked through above.
  • Where before the runtime could be avoided and functionality identical to a conventional computer emulated fairly easily (I know, I did so), such capabilities no longer exist at the interface level - and Android not being open source, there is no way to create them or add them to a binary image. More importantly, applications which did not use the Java runtime were largely unknown to that runtime, which means these non-runtime applications had some amount of introspection into what was going on in the system, without the mediation of the runtime which serves as the hive’s watchdog.
  • These two events satisfy the first requirement we mentioned above. Instead of depending on hardware and chip updates (now hard to justify technologically) to metastasize the hive, updates and real-time stealth modifications (point-to-point and peer-to-peer) could flow to devices transparently, without “unfriendly” third-party systems to get in the way or nosy-neds running out of band in the Linux runtime to snitch on anything untoward.
  • Although I have no significant experience with Apple/iOS, I have been informed that a similar two-fisted rope-a-dope took place there, although earlier. Speaking of Apple, let me slip in another tidbit. Above we talked about the asymmetry that the hive wants to enforce at the point of attack - the optimum situation being a minimalist interface for the user (victim?) to minimize distraction from the simulation, with massive data cache resources and almost real-time hive connectivity driving it. One way to minimize distraction from the simulation is to remove all inputs not directly under hive control - like audio jacks and peripheral ports. Just saying.

 

The proposal that Moore’s Law was right even though it was wrong and who needs it anyway since cyber tech is moving inexorably forward just like Mr. Schmidt says

We thus, after a great deal of necessary indirection, come back to the crux of cybernetic-utopianism of the inevitable strain and Moore’s Law. Spreading a devout belief in Moore’s Law in effect enabled the cybernetic-utopians to carry out their real agenda - the expansion and metastasis of the hive/guaranteed-outcomes ecosystems - by letting them piggyback their flow of new heuristics, algorithms and node creation onto the replacement engine of the always ascending “new” and “must have” hardware and infrastructure. Once this hardware hayride ended, as we explained in the first section, new scaffolding was erected through software: the creation of a hermetic software supply chain which included only friends - which is widely known - but which excluded free-range interlopers not for doing anything in particular, but for just being there in general - which is not. In order to make the narrative coherent and seamless, the stumbles of the early 2000s had to be blamed on the weakest constituency of the ecosystem: engineers. All of this seems to have worked like a charm, and the emergency repairs seem to have succeeded. Not a great deal is made anymore of the speed or power of devices; instead it’s all about “Apps” and security.

The Inevitablists have learned their lesson and now make their own custom chips in limited batches on their own dime, while saying this vindicates Moore’s Law, when in fact it doesn’t. Another defeat for technology without history. But Moore’s Law will be important for other pieces of their agenda for only a little longer, and a new rewrite is probably neither necessary nor forthcoming. The move to robotics and AI does not require a constant exponential ramp-up of computing capability, but rather a diminution of that portion of the human-to-machine interface that is human or serialized: transhumanism. The Dolchstoßlegende of engineering mediocrity is mostly for selling old wine in new bottles to a new generation of second and third tier software engineers and managers (rubes). It will continue to appear in trade literature and in Wired magazine and Scientific American style techno-agitation/propaganda, but this usually lags a while behind what’s actually frying on the griddle at Google at any given moment. And it will probably take even longer for the implications of any of this to reach the public consciousness. Which is exactly the glacial flow the utopians are counting on.

A couple of important addenda concerning the emplacement of the hive network, to clarify the last point concerning the next stage of the utopians. First, the author Zuboff asks the same question I do in her book: why is exponential computing growth central to the cybernetic-utopians’ mantra? Her answer is that the hive/outcomes engine requires an ever-escalating scale of data input to feed its heuristic learning engines; this is dictated by the nature of heuristics themselves. My answer, as above, is that the instantiation of the hive network required the constant upgrade, replacement and prototyping enabled under the ruse of the “new” and “latest advancement” compulsion, for the physical creation of the real-time full-duplex node network, which was and is the absolute prerequisite for Zuboff’s escalation from the standpoint of distributed engineering. The mistake I see in a great deal of analyses of 5g is that many assume 5g to be a machine that goes of itself. It is an incredibly sophisticated physical infrastructure, but what exactly is the logical (protocol-level) infrastructure? How is it metastasized (which is mostly what we’re talking about in this particular post)? How is it tested? How is it provisioned? How does it grandfather in the millions of devices (yours and mine) it does not control physically? What is the organization that is mediating its traffic through national and international gateways? The difference between a physical network (wired, wireless, satellite, near-field or whatever) and the protocol and topology which runs through it and on top of it is a difficult point to explicate to non-engineers, but as I have hinted at here and will try to describe more formally in the future, it may in fact be the central one of the whole scheme. It is also compellingly necessary to see the current state of the infrastructure of the “surveillance capitalists” as a transitional stage.

The vast and protean physical data stores secreted away in deserts and caverns are logistically and environmentally unsustainable. The new (so to speak) hive/node system will implement a concept completely non-intuitive to most people: a centrally controlled peer-to-peer system. This is not a misnomer; such a system only requires a unilateral arbiter of last resort, that is, a resource at the end of the heuristic resolution chain that terminates a query, either by resolving it or declaring it unresolvable. This is, or will be, almost certainly a supercomputer or an array of them, capable of completely resolving heuristics in real time with mostly localized resources (the heuristic will only require hive “help” if it cannot reach resolution without more data or more processing resources, which will be forthcoming organically from the hive). What “localized” means is determined by the atomic node at which the heuristic problem is delimited: strategic hamlet, neighborhood, city, you, whatever. The data hoards will be unnecessary (or at least made auxiliary) because the information will be “stored” in real time in the nodes of the hive, just as all the information to make honey, find a home, protect the hive and delegate a new queen is carried around in the collective “database” of the hive’s genetically wired behavior. Many will recognize this as a vastly more sophisticated variation of the “torrent” file-sharing mechanism, expanded by several layers of complexity and magnitude, except that instead of files some zero-copy or shared-memory scheme will probably be used. It may also explain why that system was so savagely and effectively attacked, with “pirating” being the beard: there’s nothing gangsters hate more than competition.
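The resolution chain described above can be sketched in a few lines. This is purely illustrative, a toy under my own assumptions about how such a system might be layered (the names `Node`, `Arbiter` and the query strings are all hypothetical, not any real system’s API): a node tries its own resources, then its peers, and only the central arbiter can terminate a query, by resolving it or declaring it unresolvable.

```python
# Toy model of a "centrally controlled peer-to-peer" heuristic
# resolution chain. All names and data are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    local_data: dict = field(default_factory=dict)
    peers: list = field(default_factory=list)

    def resolve(self, query):
        # 1. Try localized resources first.
        if query in self.local_data:
            return self.local_data[query]
        # 2. Ask peers at the same atomic level ("hive help").
        for peer in self.peers:
            if query in peer.local_data:
                return peer.local_data[query]
        # 3. No local resolution possible; defer upward.
        return None


class Arbiter:
    """Unilateral arbiter of last resort: every query terminates here."""

    def __init__(self, authoritative):
        self.authoritative = authoritative

    def resolve(self, node, query):
        answer = node.resolve(query)
        if answer is not None:
            return answer
        # End of the chain: resolve, or declare unresolvable.
        return self.authoritative.get(query, "UNRESOLVABLE")


a = Node("a", {"q1": "local answer"})
b = Node("b", {"q2": "peer answer"})
a.peers.append(b)
arbiter = Arbiter({"q3": "central answer"})

print(arbiter.resolve(a, "q1"))  # resolved from a's own data
print(arbiter.resolve(a, "q2"))  # resolved by peer b
print(arbiter.resolve(a, "q3"))  # resolved centrally
print(arbiter.resolve(a, "q4"))  # terminated as unresolvable
```

The point the sketch makes is structural: the system is peer-to-peer in its traffic but centrally controlled in its termination semantics, since only the arbiter decides what counts as unresolvable.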

This is essentially what Transhumanism is all about: operational hive data will be stored in you. That is the next phase of the deployment, getting rid of the device nodes altogether and making the nodes human. Sharp minds can already see the extraordinary danger here: the system’s inherent fragility. If the hive memory is compromised, there is no way to repair or recover; there is no backup. The hive would have to be regenerated from scratch by an entity outside the hive. But there is by definition no such entity; the hive is totalist as per this definition. Hives (as termites and ants attest) are extremely robust. But if one looks to the animal kingdom for anecdote, it seems that non-insect hive species (say, passenger pigeons) are significantly less so. The vast diversity of insect hives and their incredible numbers may be the factor here. The “Singularity”, by definition a single hive with a single point of failure (no matter how sophisticated, there’s only one Singularity if we are to take the name seriously), comes to look more and more like an extinction event, or Ice-9. Aficionados of game theory should also be able to see where this is going.

In many cases I believe my conclusions are no different from Zuboff’s, only made from a different perspective; we seem to be looking through opposite ends of the same telescope at the same object. One end is sociology, the other engineering. And just as Zuboff the sociologist describes what she sees in the domain language of her specialty, I’m describing what is essentially the same phenomenon in mine.

Finally, although my statements about the hermetic “App” supply chain and physical device lockdowns may appear pedestrian (“everybody” knows Google Play and Apple filter and vet the apps they vend, and lock down their operating systems to the point that they are barely usable as operating systems), I never hear the actual reason why this is done, or any that is convincing. Security, megalomania and larceny are the usual suspects, which seems to palliate virtually everyone; but this is asserted in a declamatory manner, the reasons assumed to be obvious and non-contentious when in fact they are not. These “lockdowns” are exceedingly unconvincing in their rationale from the standpoint of distributed network computing. It is my conclusion, as I state above, that the long-term and only goal (at least 20 or 25 years in the making, quite possibly twice that) has always been the construction and activation of a real-time hive node system which runs through all connected devices in full duplex, including your phone and devices as a first phase, and you as the endgame (Transhumanism).

The presence of the “presidential emergency alert system”, I strongly believe from well-informed guesstimation, demonstrates this. The test of that system was not to see whether all nodes could be contacted at once from a central location (an “emergency alert”, which has already been shown to work, if impossible to disable) but whether all nodes could “see” each other in something approximating real time (“the hive”). There is no need to “find” you in this scenario: your device is always on unless it’s dead, and always communicating your location and data not just to some remote “home” but (and here is the essential takeaway) potentially to all other members of the hive, though more probably to a predetermined subset. “Finding” is also devalued by the recent installation of a hysterical, panic-induced internal movement restriction system (“lockdown”); there is no real reason to worry about finding you if you can’t go anywhere in the first place. If there are enough nodes, enough bandwidth and low enough latency within a node’s immediate hierarchy (a pod or work-group or leaf or however the nodes are structured), and enough data to satisfy the demands of the heuristic, there will be no need to process or fetch from some remote hideout in Utah: the heuristic (the guess as to what you’ll do or decide to do next) can be resolved inline. Not only would this be orders of magnitude faster, it would make the hive virtually impossible to detect, since communication between the peers would appear as local traffic (shared-memory message passing or the like).
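The “resolved inline, appearing as local traffic” point can be made concrete with a minimal sketch. This is my own illustration under stated assumptions (the pod members, the message, and the `send` helper are all hypothetical): messages between members of the same pod are delivered through in-memory queues and never cross a network boundary at all, which is precisely why such traffic would be invisible to anyone watching a wire.

```python
# Illustrative only: in-process message passing among "pod" members.
# A queue per node stands in for a shared-memory channel; nothing here
# ever leaves the local machine, let alone reaches a remote data center.
from queue import Queue

pod = {name: Queue() for name in ("you", "neighbor", "corner_camera")}

def send(sender, recipient, payload):
    # Local delivery: put the message straight into the peer's queue.
    pod[recipient].put((sender, payload))

send("corner_camera", "you", "heuristic guess: next stop, grocery")
sender, payload = pod["you"].get()
print(sender, payload)
```

Substitute real shared memory or zero-copy buffers for the queues and the shape is the same: peer-to-peer resolution whose footprint is indistinguishable from ordinary local activity.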

Each successive iteration of this hive node system (3g, 4g) has been a prototype for the next. 5g escalates this hive membership to real time (a very specific engineering concept with completely different semantics and quality-of-service parameters). That this node system is almost complete seems apparent. This will also mean that the cybernetic-utopians will have no more use for Moore’s Law very soon: having squared the circle of guaranteed outcomes with an “all in” real-time hive, they may finally rid themselves of the nuisance of history by ending it. The old cell phone networks have been absorbed into the hive/nodes, and the old IP network grid is in the process of being absorbed into the “cloud”: a universal network address translation (NAT) jail for the underprivileged, where only the gold-card holders will have endpoints that actually lead anywhere; the rest of us, when we follow the breadcrumbs of our net “identity” to its origin, will end up holding our own tails. But that’s a jeremiad for another day.
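The “NAT jail” remark above rests on a real asymmetry of network address translation, which a toy model can show. The class and addresses below are illustrative assumptions, not any router’s actual implementation: a node behind NAT can open outbound flows, which get temporary public mappings, but it has no standing endpoint of its own, so unsolicited inbound traffic following its “address” back finds nothing there.

```python
# Toy model of outbound-only reachability behind NAT.
# Addresses are documentation/private ranges; the class is illustrative.
class NAT:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}          # public port -> private address
        self.next_port = 40000

    def outbound(self, private_addr):
        # An outbound flow earns a temporary public (ip, port) mapping...
        port = self.next_port
        self.next_port += 1
        self.table[port] = private_addr
        return (self.public_ip, port)

    def inbound(self, port):
        # ...but unsolicited inbound traffic has no mapping to follow.
        return self.table.get(port, "no route: endpoint leads nowhere")


nat = NAT("203.0.113.7")
ip, port = nat.outbound("192.168.1.20")
print(nat.inbound(port))    # return traffic on an established flow works
print(nat.inbound(51234))   # anything else dead-ends
```

Only whoever holds a real public endpoint, the “gold-card holder”, can be reached first; everyone else exists on the network only for as long as someone upstream keeps their mapping alive.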


We discussed how software engineers stabbed Moore’s Law in the back in Part 1, and we’ll complete the curious relationship of the cybernetic-utopians to Amdahl’s Law with a more technical discussion in Part 4. For now, let’s look at some of the many interpretations of Moore’s Law itself and their implications; its very malleability seems to be the source of its popularity, especially with villains.