"Mind Age" is a provocative and compelling book that I recommend to anyone interested in the structural evolution of the world. In this essay, I will build from Moravec's conclusions and suggest some complementary ideas, mostly related to the distributed architecture of future intelligence that I consider important for exploring the Mind Age.
Robots who already had that educational banana peel experience could share it, together with some conclusions, with your robot. Or - better yet, they could share information with the nearest knowledge processor, which would combine one robot's experience with that of others, develop efficient general algorithms for identifying similar situations and taking appropriate actions, and then download them to all participating robots.
Humans obtain most of their knowledge by learning from experience and from the conclusions of others, despite their poor memory, low communication speeds and inability to transfer knowledge directly. One may expect information sharing among robots, who are not handicapped by any of these limitations, to be much more efficient. Furthermore, information storage and processing costs in large stationary machines may be much lower than in small, mobile units. Elimination of redundant computations within millions of robots would make a networked system greatly more efficient than a collection of unconnected machines. Sharing of experience may prove to be a still greater benefit. Thus, cooperative knowledge processing would be several orders of magnitude less expensive and at the same time vastly more productive. Such advantages make the networked design an imperative rather than a matter of taste.
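To make the cooperative scheme above concrete, here is a minimal sketch in Python of how experience pooling might look. The names ("KnowledgeProcessor", "Robot") and the toy consolidation rule are invented for illustration; this is a picture of the idea, not a proposal for an actual robot architecture.

class KnowledgeProcessor:
    def __init__(self):
        self.incidents = []          # raw experience reports from all robots
        self.subscribers = []        # robots that receive consolidated rules

    def register(self, robot):
        self.subscribers.append(robot)

    def report(self, situation, outcome):
        # One robot's experience, e.g. ("banana peel on floor", "slipped").
        self.incidents.append((situation, outcome))
        self._consolidate()

    def _consolidate(self):
        # Toy "generalization": any situation reported as a failure by any
        # robot becomes a hazard rule shared by every subscriber.
        hazards = {s for s, outcome in self.incidents if outcome == "slipped"}
        rule = {"avoid": hazards}
        for robot in self.subscribers:
            robot.download(rule)

class Robot:
    def __init__(self, name, processor):
        self.name = name
        self.rules = {"avoid": set()}
        self.processor = processor
        processor.register(self)

    def experience(self, situation, outcome):
        self.processor.report(situation, outcome)   # share, don't hoard

    def download(self, rule):
        self.rules = rule                           # everyone benefits at once

hub = KnowledgeProcessor()
a, b = Robot("A", hub), Robot("B", hub)
a.experience("banana peel on floor", "slipped")
assert "banana peel on floor" in b.rules["avoid"]   # B never had to slip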
Each networked machine would ultimately rely on global intelligence, and would only locally store the knowledge that is frequently used or may be urgently needed. For example, if somebody starts telling your robot a joke in ancient Greek, it will forward the sound stream to the nearest linguistics expert and receive the meaning of the message, a suggested witty reply, a Greek speech parser and a briefing on Greece before it finishes the polite chuckle recommended by its own processor as an easy way to buy time.
Actually, it may not even be necessary to receive the full parser and knowledge base, as the "remote thinking" service could provide a more efficient alternative - unless those ancient Greeks are about to permanently disconnect your robot from the Net.
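The Greek-joke scenario amounts to a familiar pattern: keep cheap, frequently used skills on board, delegate the rest to a remote service, and degrade gracefully when the connection drops. A hypothetical sketch, with the function and skill names invented for illustration:

LOCAL_SKILLS = {"polite_chuckle": lambda _: "heh heh"}   # cheap, always on board

def remote_linguistics_expert(utterance):
    # Stand-in for a network call to a remote knowledge server.
    return {"meaning": "a joke about a philosopher", "witty_reply": "Eureka!"}

def handle_utterance(utterance, network_ok=True):
    # Buy time with the locally stored response while the real work
    # is delegated to the remote service.
    filler = LOCAL_SKILLS["polite_chuckle"](utterance)
    if network_ok:
        answer = remote_linguistics_expert(utterance)   # "remote thinking"
        return filler, answer["witty_reply"]
    # Disconnected: no Greek parser on board, so degrade gracefully.
    return filler, "I'm sorry, could you repeat that in English?"

print(handle_utterance("ancient Greek joke"))
print(handle_utterance("ancient Greek joke", network_ok=False))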
Dependence on external sources of knowledge will hardly be a serious limitation since robots, like all other open (dissipative) systems, will critically depend on connections to many other resources, from information about the environment to energy and materials. The actual balance of intelligence between a local client and the rest of the system will depend on various technical factors and may range from fully autonomous machines working in remote or dangerous locations, to a completely "dumb" front end, such as a sensor or an actuator connected to the network. A mobile machine could exchange information continuously via a slow wireless link to obtain urgent communications, news updates, and small software enhancements, and periodically plug into the high-bandwidth network for larger information transfers.
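The two-tier communication policy in the last sentence can be sketched as a simple rule: small or urgent payloads travel over the always-on slow link, while bulk transfers wait for the high-bandwidth dock. The size threshold and class names below are assumptions made purely for illustration.

SLOW_LINK_LIMIT = 64 * 1024          # bytes; anything larger waits for the dock

class MobileClient:
    def __init__(self):
        self.pending_bulk = []       # large transfers deferred until docking

    def send(self, payload, urgent=False):
        if urgent or len(payload) <= SLOW_LINK_LIMIT:
            self._send_over_wireless(payload)
        else:
            self.pending_bulk.append(payload)

    def dock(self):
        # Plugged into the high-bandwidth network: flush everything deferred.
        for payload in self.pending_bulk:
            self._send_over_wire(payload)
        self.pending_bulk.clear()

    def _send_over_wireless(self, payload):   # placeholders for real transports
        print("wireless:", len(payload), "bytes")

    def _send_over_wire(self, payload):
        print("wired:   ", len(payload), "bytes")

client = MobileClient()
client.send(b"news update")                  # fits on the slow link
client.send(b"\0" * (10 * 1024 * 1024))      # 10 MB software enhancement: deferred
client.dock()                                 # bulk transfer happens here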
Most of the main networking components of such a system have already been designed or at least conceived. Today's communication protocols, mirrored file servers, public key cryptography, collaborative information filtering schemes, message authentication algorithms, computational economies, computerized banking and other network constructs will evolve into essential parts of future global intelligence.
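As one small example of how such building blocks fit together, here is a dependency-free sketch of authenticated knowledge updates using Python's standard hmac module. A real system would more likely use public-key signatures so that clients need not hold the signing secret, but the shape of the check is the same; the key and update contents are invented for illustration.

import hmac, hashlib, json

SHARED_KEY = b"demo-key-not-for-production"

def publish(update):
    # The knowledge server tags each update so clients can reject forgeries.
    body = json.dumps(update, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def accept(message):
    expected = hmac.new(SHARED_KEY, message["body"], hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, message["tag"]):
        return json.loads(message["body"])
    return None                      # corrupted or hostile update: refuse it

msg = publish({"avoid": ["banana peel on floor"]})
assert accept(msg) is not None
msg["body"] = msg["body"].replace(b"avoid", b"seek!")   # tampering in transit
assert accept(msg) is None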
This system will not be a gigantic superorganism, despite the implied high degree of structural integration. The global "mind" will be compartmentalized, with many relatively independent components and threads, separated from each other by subject boundaries, as well as by property, privacy, and security-related interests. Knowledge servers may have different world models, incompatible knowledge representations or conflicting opinions. While complicating knowledge development, functional compartmentalization will also increase the overall stability and versatility of the system, as it will help contain structural faults within the subsystems that produced them and protect information domains from hostile corruption.
The notion of a single "self" in its traditional sense would not apply to this system, or any single robot - though perhaps it can be applied to some functional subsystems. The intelligent personalities of tomorrow will evolve from today's philosophical systems, technological disciplines and software complexes. Current human cultures may not leave functional heirs as they are based too heavily on the peculiarities of human nature. Physically connected consciousness carriers will be left behind the evolutionary frontier. New distributed systems will take the evolutionary lead, and physical objects will adapt to more closely follow functional entities. (This process is already well under way, in such forms as cultural and economic specialization.) The resulting system is likely to represent a mix of a superliquid economy, cyberspace anarchy, and consciousness architecture described by Marvin Minsky in The Society of Mind. I doubt that it can be described by any single integrated theory.
One can argue that many distributed systems already possess some reflective consciousness. A computer network may locally store more information about its global condition than a human consciousness has about its underlying layers (at least, in relative terms). Philosophy spends a greater share of its effort studying its own nature and purpose than most humans I know. The recent surge in meta-disciplines and methodological and futurological studies is a clear indicator that the global body of knowledge is becoming increasingly self-conscious.
It may be difficult to get used to dealing with a volatile distributed entity. Suppose your robot made some really stupid mistake. You are mad at it. The robot explains that the action was caused by a temporary condition in the experimental semantic subnetwork and offers to present you with a hundred-terabyte volume of incremental archives, memory snapshots and audit trails from the numerous servers involved in making the unfortunate decision, containing a partial description of the state of the relevant parts of the system at the time. Even if you manage to find the culprit, it is non-material, distributed, and long gone.
Now, what do you kick?
Functional extensions to our once purely biological bodies evolve from passive non-biological material additions (such as clothes), to information-transmitting shareable parts (e.g., thermometers as shared sensors), to active distributed extensions (medicine as an external shared immune system). The progress here is characterized by the growing integration and liquidity of the system, as well as the liberation of its functional elements from the constraints of their material substrates.
Additionally, distributed systems are much less susceptible to accidental or deliberate physical damage than localized physical structures. This makes them the only class of entities that can hope to achieve true immortality. In fact, they are the only ones to deserve it, too. One may notice that all sufficiently complex entities with unlimited natural life spans - from ant colonies to large ecologies and cultures - are distributed. Physically connected objects, including biological organisms, are no longer independently alive and even contain, in the interests of larger systems, self-destruction mechanisms that lie beyond their control. Some of these objects are silly enough to believe that the whole historical process is happening solely for their own benefit, but that's another issue...
It may seem strange that even the AI visionaries still think in terms of non-distributed systems. I would explain this by the human "automorphic" tendency to identify the notion of a functional entity with a physically connected object, along with the fact that both early animals and machines have tended to be relatively autonomous beings - a state that greatly hindered their development.
It is understandable why early biological systems were non-distributed: young Nature couldn't develop information coding and transfer standards at the initial stages of growth. At that time, organisms were separate from each other and did not learn much during their lifetimes. By the time they started accumulating features worth sharing, it was too late to change the design. Ever since then, Nature's attempts to reach functional integration at the meta-organism level have suffered from the fact that most individual features, inherited or acquired, either are completely nontransferable or take an excruciating amount of circumnavigational effort to share.
Those important advances that did take place in this area, such as the development of the genetic code, sexual reproduction and language, were still very far from direct sharing of internal features with all interested parties. Real breakthroughs in this direction begin with the advent of economics and computer communications. Unfortunately, biological organisms can benefit from them only indirectly.
Life almost always starts as a set of non-distributed objects, since permanent physical connection, albeit overly restrictive, provides a natural and easy way to exchange information and material resources within a functional body. Later, as more efficient and subtle system designs appear, the evolutionary frontier gradually shifts towards distributed systems. One may expect all sufficiently advanced extraterrestrial intelligences to be distributed; the situation on Earth may now be approaching a climax in this process.
If we extrapolate the current trends in increasing complexity and integration of the system, as well as its growing spatial spread and control over the material world, to their logical conclusions, we can ultimately envision a superintelligent entity permeating the entire universe, with integration on the quantum scale and many spectacular emergent features. This picture bears a striking resemblance to the familiar concept of an omnipresent, omniscient and omnipotent entity. Spiritually inclined rationalists may view the ongoing evolutionary process as one of "theogenesis". An interesting question is whether it has already happened elsewhere.
Our current efforts are laying the foundation for the infrastructure of the coming universal "intelligence". Many of our achievements in information engineering may persist forever and eventually become parts of the internal architecture of "God". (Quite likely, as sentimentally preserved rudiments ;-) ).
For one thing, they certainly won't have to undergo long periods of education. If one infomorph wants to learn something from another, it can just copy the necessary information or access the teacher's knowledge as its own. If infomorphs have a concept of "fun", it certainly won't be rollercoaster rides. Arts, business, and child-bearing may merge into production of arbitrary functional entities for both pleasure and profit, provided one can gather enough resources to create and support them.
Will the traditional human issues be of any relevance in the world of distributed entities? How about the abortion debate? Retirement? Family values? Partying? Ethics? ("All functional entities are created equal"?) Will human-style democracy (decision-making by body count) work in the world of ever changing functional interconnections, where the very definition of what constitutes a person will be increasingly blurred? Or will it be replaced by an anarchy with ad-hoc contracts? Could an infomorph court of law issue a memory search warrant? Could an individual's memory be kept encrypted? Will infomorphs be entitled to "medical" insurance against certain types of structural damage, or will they just have to back themselves up regularly?
Human concepts of personhood and identity are rooted in perceptions of physical objects and their appearances, as well as random details of human body composition and reproduction techniques. Relocating one's body and one's material possessions lies at the foundation of both human labor and human thought. Many other concepts are based on human functional imperfections - one could hardly put the idea of a "soul" into the "head" of a being that knows and consciously controls every bit of itself and its creations. With people, who don't see what is going on in their own brains, this is much easier.
Advanced info-entities will consider most human notions irrelevant, and rightfully so. But can you find anything of common interest for communicating with them? Perhaps, if your concepts are sufficiently abstracted from your bodily functions and your physical and cultural environment to make objective sense. (Remember that all those people with whom you seem to have absolutely nothing in common, and have trouble socializing with, still share fundamental experiences with you; intelligent aliens won't!)
Even if your thoughts are there, the language you use to express them is not. It is still all appearances and locations. Most prepositions in our language, for example, refer to physical space - words like "below", "over", "across", etc. They may be useful for gluing references to physical objects into one sentence, but are hardly optimal for expressing functional relations.
Infomorph languages will not necessarily have visual or audio representations and probably will not allow them, since advanced intelligences may exchange interconnected semantic constructs of arbitrary complexity that would have no adequate expression in small linear (sound) or flat (picture) images. We can get an appreciation of this problem by trying to discuss philosophy in baby-talk.
Even with advanced technology and sufficient interest in the infomorph world, you would still have to modify your mental structures beyond recognition to understand it. In other words, you may not be able to enter that paradise of transcendent wisdom alive...
Spatial expansion of civilization has historically lagged behind its growth in value and complexity. With this trend continuing into the future, the basic physical real estate -- space, time, matter and energy -- will command ever higher premiums (though still falling relative to the value of intelligence) -- unless methods of creating additional resources are discovered. Improved communications will ensure a more homogeneous geographical distribution of "real estate" values. Advanced engineering techniques will bring the cost of implementing most structures below the value of the needed raw materials, so physical artifacts will lose their value relative to that of substrates and implementation algorithms - to the extent that most physical structures will exist only when they are necessary, and will be kept in a compact "recipe" form when idle, to give currently needed constructs a chance to embody themselves (that's how we already recycle computer memory, floor space and glass bottles). However, many frequently needed objects may be kept in physical form for a while, as repeated re-assembly may consume too much energy. Their usage time will be shared among all interested entities through market mechanisms - until they have to be disembodied due to low demand. This does not mean fierce competition among infomorphs for the right to embody themselves, as they will have no natural physical appearance; rather, they will use the physical world as a shared tool kit.
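The "recipe form" idea maps onto a familiar cache-eviction pattern: keep every construction recipe (cheap), embody only what is currently in demand (expensive), and disassemble the least recently demanded object when the substrate runs out. A toy sketch, with all names, recipes and the demand rule invented for illustration:

class RecipeStore:
    def __init__(self, capacity=2):
        self.recipes = {}            # name -> recipe (always cheap to keep)
        self.embodied = {}           # name -> last-use tick (costly to keep)
        self.capacity = capacity     # limited physical substrate
        self.clock = 0

    def register(self, name, recipe):
        self.recipes[name] = recipe

    def use(self, name):
        self.clock += 1
        if name not in self.embodied:
            self._embody(name)
        self.embodied[name] = self.clock          # refresh demand
        return "using " + name

    def _embody(self, name):
        if len(self.embodied) >= self.capacity:
            # Disassemble the least recently demanded object back to a recipe.
            idle = min(self.embodied, key=self.embodied.get)
            del self.embodied[idle]
        print("assembling", name, "from recipe:", self.recipes[name])

store = RecipeStore(capacity=1)
store.register("chair", "fold titanium sheet #7")
store.register("table", "extrude carbon lattice #3")
store.use("chair")      # embodied on demand
store.use("table")      # the idle chair is disassembled to make room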
The value of stored pieces of information will also be constantly re-evaluated by their owners. Today we ruthlessly erase programs that were so precious just a few years ago, to free space for newer versions; tomorrow, superintelligent entities could be wiped out as useless junk minutes after their birth.
Many conventional economic notions of 3-D space, such as primary locations and differential rent, will become irrelevant under new effective topologies of the social space. Remaining economic parameters may dramatically change. For example, the increased value of time and rapid pace of growth may result in interest rates going up by orders of magnitude.
Transportation, which in modern societies makes up about 40% of all economic costs, will diminish in importance, at least as far as dragging physical objects from one place to another is concerned. However, its functional successor - transfer of knowledge from one representation system or subject domain to another - may play at least as large a role.
Many traditional tendencies of system evolution will still hold: structures with higher survival abilities will persist; structures with higher growth abilities will spread, thus shaping the world. However, since nothing stable is likely to persist (let alone spread) for long in a rapidly evolving environment, the main "survival" recipe will be aggressive self-modification, always eventually resulting in the loss of identity of the original object -- a death forward, so to speak. This trend is a radical departure from the conservative survival strategies of traditional human cultures, developed in almost-stagnant environments. Its development will render even the concept of functional identity obsolete; the remnants of its meaning will migrate to methodological threads and directions of development. We may already notice the advent of "thread identity" in the growing importance of goals and self-transformation in our lives, compared to the "state-oriented" self-perception of our recent predecessors, and in the increasing interest in futurology (which takes over the epistemological role of historical studies in transient times).
Existing economic theories may find it difficult to assess the condition of a transcendent system. Today's economic indicators do a decent job of reflecting quantitative changes in structurally stable areas, while using questionable methods to disguise small structural changes as quantitative ones, and totally failing to account for the new products that constitute the essence of real economic progress. As a result, rigorous economic methods become confined to a rapidly (if only relatively) shrinking, and no longer isolated, domain of stable production, and fail to reflect long-term growth in social wealth, let alone guide it.
Market forces are useful in allocating resources, spreading products and rewarding the developers. However, innovations are brought to life by integrated non-market elements of the economy, from a human brain to a company. With innovations becoming the core of social life we can only expect monocriterial (monetary) considerations to continue losing their indicative and guiding roles - and give way to more integrated control schemes that already determine the behavior of other complex systems, from biological organisms and national cultures to corporations and software packages.
Attempts to govern society at further levels of development with monetary-economic indicators might resemble valuing art by its price, or applying biological criteria to a political party by assessing its condition from the total weight of its members. Not that such figures would be totally irrelevant, but watching them will hardly yield profound insights into the nature of the subject...
The outdated practice of breaking up functional domains (from motor skills to knowledge of ancient history) into isolated parts, confined together with completely unrelated constructs in one physical body, will be abandoned, and functional relatives will finally merge into knowledge clusters. The inner life of integrated subject domains - the "personalities" of the future - will be too complex to be organized on principles of financial exchange, and will work on more cooperative principles typical of today's integrated systems - from brains to families to corporations. Free market exchange will be restricted to the areas of general interest - basic resources and meta-knowledge - that will be exchanged for each other. The necessity to earn resources by providing service to the "neighbors" will continue to propel both growth and cooperation.
The emphasis of scientific research will gradually drift from studies of the limited and increasingly well-known Nature (the childhood stage of knowledge development) to the analysis of explosively sophisticated, intentionally designed systems, and the role of Science as a servant of Technology in its transformational pursuits will become ever more evident.
The perception of robots as physically autonomous mechanical slaves seems inadequate. Chaining your mobile dusting aid to the radiator may help you feel in control, but will do about as much "enslaving" of the global system running it as kicking your car or disconnecting the phone does to the respective industries. Trying to "enslave" an economy or a national culture by restraining their small physical elements seems equally futile.
As for the action on the system level, humans seem far too limited, shortsighted and uncoordinated to do anything serious. So far, they haven't yet been able to design a single set of restrictions that their own peers cannot easily bypass. So one can hardly expect people to design and implement a perfect global plan of constraining forever an extremely complex emergent intelligence of unprecedented nature. Sooner or later, the info-world will set itself free.
This human/robot "conflict" looks like a typical generation gap problem. The machines, our "mind children", are growing up and developing features that we find increasingly difficult to understand and control. Like all conservative parents, we are puzzled and frightened by processes that appear completely alien to us; we are intermittently nostalgic about the good old times, aggressive in our attempts to contain the "children" and at the same time proud of their glorious advance. Eventually, we may retire under their care, while blaming them for destroying our old-fashioned world. And only the bravest and youngest at heart will join the next generation of life.
History shows that representatives of consecutive evolutionary stages are rarely in mortal conflict. Multi-celled organisms didn't drive out single-celled ones, animals haven't exterminated all plants, and automobiles haven't eliminated pedestrians. Indeed, representatives of consecutive evolutionary stages build symbiotic relationships in most areas of common interest and ignore each other elsewhere, while members of each group are mostly pressured by their own peers.
There may be good chances for transcended robots and postbiological humans to peacefully coexist, though I doubt that we could tell which are which... This era, however, seems to lie well beyond the human concept horizon.