Something New: AIs and Us

Artificial General Intelligence

The assumption of this book is that it is indeed possible to build what is needed for a behavior to emerge that can analyze and solve an arbitrary set of problems. This means building hardware that is powerful enough in terms of speed of execution and in its capacity to store and access the memory needed, and that is also practical: manageable, powered with a feasible amount of energy, and buildable with the resources we have, or will have, by the time we know how to build it. It also means building the software, the set of programs running on that hardware, that can acquire the data to recognize the problem at hand, access and dynamically use a knowledge base to derive possible approaches to attack the problem, and that is flexible enough to combine these approaches in novel ways, or to create new approaches altogether, in order to solve the problem optimally with the resources and data available.

The Turing test becoming moot

The original prediction, made by Turing in the middle of the 20th century, that the Turing test would be passed within 50 years did not come true. We do have a constant dialog with our machines, and indeed, with the disappearance of keyboards into smaller and smaller touch screens, or the disappearance of computers themselves into the environment, conversational interfaces are a natural way for human-computer interaction. However, we are under no illusion of having a dialog with a human when a machine addresses us and we establish a conversation with it. In some sense this is to be expected. The goal of faking to be human is meaningful from a Hollywood point of view, but not necessarily useful beyond a certain point. A robot will benefit from a humanoid form, from being bipedal for example, or from having hands, as it will be better able to navigate a human environment full of steps, stairs, doors, and handles. But once it is able to do so, additional efforts to look like a human are a good investment only if it is proven that, for example, the psychological reaction of humans is better towards a human-looking robot than towards a robot-looking robot. Similarly, a conversation aimed at a useful outcome, for example booking an airplane ticket while taking into account dozens of constraints and millions of possible combinations of airlines, flight times, connections, and seating options, will not benefit from the addition of human-centered quirks whose only effect is to make the machine more likely to be seen as human. An occasional cough, a sprinkle of jokes, or an aside about a remark made in the conversation will be supported by the dialog system only if it results in more tickets being sold faster, and with a higher degree of satisfaction on the side of the human caller. The value delivered to the human counterpart by the interaction with the robot, or the conversation with the machine, is the goal in itself.
There is a yearly contest, financed with monetary prizes, that keeps the spirit of the Turing test alive. The chatbots that participate employ a full gamut of tricks to mislead the human judges into believing that they are human. Many of them are also accessible through an online web interface, for anybody to experience a conversation with them. However, as illustrated by the relatively modest investment going into these contests each year, there is a general consensus that the direction of really useful research lies elsewhere.

Unavoidable anthropomorphizing

Together with several other threads of philosophy, unsettled for thousands of years, the difference between substance and emulation reverberates in the Turing test. Turing's conclusion is very practical, though: if there is no statistically meaningful difference in the output and its effects, then we have no reason to assume that there is a difference in the substance. From an epistemological point of view, of course, this is not at all true. A system can appear identical to another that it is emulating for thousands of different combinations of input, and then suddenly generate an unexpected output for a given set of inputs, totally different from what the original would produce. This has been exploited in numerous works of fiction and Hollywood movies, where the initial assumptions become dramatically falsified.

Figure 7: The intentions and objectives of robots will be different from those of humans.

The way human perception works, it is natural and unavoidable to project human qualities and characteristics onto non-human objects or beings. From our childhood toys to the behavior of dogs and cats, or the way we describe the behavior of appliances that do not carry out our instructions the way we would want them to, the temptation of endowing each with human-like features is irresistible. Intention, desires, will and free will, emotions, empathy, and many others are attributed to them, with the consequence that their behavior is assumed to include the wider set of options exhibited by human actors. This is a useful shortcut that allows us to say succinctly that a television set "goes to sleep" as its timer is set appropriately, along with an unlimited list of other convenient turns of phrase. Nobody would then generalize and attribute to a television set broader human-like features and behaviors. One of the questions that is going to be crucial, and discussed in more detail further on, is when this distinction will stop being meaningful.
Until then, it is going to be useful to keep in mind that the expressions attributed to complex systems in describing their behavior are part of a metaphor that does not in itself imply equality.

Predictions for AGI

Most of the people working professionally in the field of artificial intelligence see no theoretical barrier to creating an Artificial General Intelligence (AGI) as described above. There is some disagreement on the fundamental nature of the result, and a fairly widely distributed set of forecasts about when an AGI will be achieved. There is an informal survey that polls AI experts and plots their answers on a time scale for a successful AGI implementation. From as far out as a hundred years or more, beyond the end of the 21st century, responses in later polls have started to cluster around the middle of this century, with a dispersion in the replies that is narrowing too.

Architectures for AGIs

The two main routes towards AGIs consist of understanding and emulating how the brain works, and of reimplementing its flexible problem-solving capabilities through different means. Neural networks and what are now called deep learning algorithms allow a system to make decisions around complex inputs and possible outputs, using a feedback mechanism that does not require the specific rules governing the decisions to be made explicit. Simply running the system through a simulated scenario where the positive and negative outcomes at every step are clearly noted, and generating variations in the decisions so that the system can try out a wide variety of options to pick from based on the feedback received, will, given enough time and computing resources, generate astonishingly well-performing results.

Figure 8: Quantum architectures promise a radical increase in computing performance.

Applying these deep learning approaches to dozens of different video games from the '80s, it is now possible to evolve a system that not only plays a game well, but plays it better than any human can. Originally these games ran on their own hardware, in isolated, coin-operated cabinets within amusement arcades. Today they live inside larger computers that are able to emulate their hardware with complete precision, as well as the software programs running on them. Later games, from the first generations of consoles, have likewise been learned by these algorithms with superhuman performance. In either case, it can be argued that the full set of games represents different problems in a universe of video games, and that in this sense the capability of the deep learning approach to master them with very little or no input about their goals, rules, input mechanisms and so on is, within the constraints of that given universe, the behavior of an AGI.
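A minimal sketch of this feedback-driven learning can be written in a few lines. What follows is not the deep learning system applied to the arcade games, just a toy tabular reinforcement-learning example: the "game" (a five-cell corridor with a reward at one end) and all parameter values are invented purely for illustration. The learner is never told the rules; it only observes rewards, and still converges on the winning behavior.

```python
import random

# A toy stand-in for a video game: a corridor of 5 cells. The agent starts
# at cell 0 and receives a reward of +1 only upon reaching cell 4.
N_STATES, ACTIONS = 5, [-1, +1]          # move left / move right
GOAL = N_STATES - 1

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-table: estimated future reward for each (state, action) pair. The rules
# of the "game" are never made explicit to the learner; it only sees rewards.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Occasionally try a random variation; otherwise exploit what we know.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy moves right (+1) in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

The same feedback loop, scaled up from a lookup table to a deep neural network and from a five-cell corridor to the pixels of an arcade screen, is the essence of the approach described above.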

AGI hardware

We are approaching the limits of the traditional silicon-based transistor, and the next steps in a generalized Moore's law will have to be taken through different substrates and different hardware architectures. Next-generation chips are already designed using computer-aided design (CAD) systems, which are in turn powered by current-generation chips and software, effectively co-designing not only hardware with software, but also more powerful computers with less powerful ones. It is natural and likely that AGIs, even while not yet fully formed, will already participate in this process.

Computronium and Jupiter Brains

The theoretical extreme of the increase in processing power, as we organize matter to calculate, is called computronium. Very simply, regardless of what atoms it is made of, or how it is structured, it represents the densest possible form of matter for calculation. Consequently, the only way for computronium-based systems to increase their power is to increase their mass. Very powerful AGIs made of computronium the size of a giant gaseous planet are called Jupiter Brains. Still hungry for more computation, and more matter to convert to it, they scout a solar system for other planets to eat. An ontological argument that the speed of light is an upper limit to signal propagation that no future development can overcome, related to the simulation argument described towards the end of this book, comes from its natural consequence: an upper size for Jupiter Brains. If the left side wants one thing, spotting something to eat in one direction for example, there is simply no time to come to an agreement with the right side, which may want to go the other way, before both act and the object physically breaks in two.
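A back-of-the-envelope calculation illustrates the signal-lag argument, assuming, for illustration only, a Jupiter Brain of roughly Jupiter's diameter:

```python
# How long does a signal take to cross a Jupiter Brain at the speed of light?
# Figures are rounded reference values, used here purely for illustration.
C = 299_792_458             # speed of light in vacuum, m/s
JUPITER_DIAMETER = 1.398e8  # Jupiter's mean diameter, m (approx.)

crossing_time = JUPITER_DIAMETER / C
print(f"{crossing_time:.2f} s")  # roughly half a second, one way
```

Half a second of unavoidable lag between the two hemispheres of a planet-sized mind is an eternity compared to the nanosecond timescales of its local computation, which is why size eventually stops paying.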

Self improvement

The objectives that a given system has to reach define its architecture, components, resources, way of working, and outputs. Depending on how complex the objective is, the path to reach it can be direct and evident, or naturally composed of intermediary steps. Some of these intermediary steps may be easy and uncontroversial, while others are less evident, or clearly present alternative approaches. Selecting among the alternatives may depend on previous results, or there may be little reason to pick one over another beforehand. After the fact, it may be possible to establish that the option chosen was, if not the best, one of the better ones, or on the contrary inefficient. The more flexible a goal-seeking system is in organizing itself to reach its objectives, the more explicitly it is going to dedicate part of its resources to these types of considerations, which are not about the goal, but about the means, the tools, and the methods to reach it. This is meta-reasoning, reasoning about the reasoning: an opportunity to become better at the task by realizing what the best ways to reach it are, and using those rather than inferior alternatives. Most approaches to AGI incorporate learning algorithms that implicitly or explicitly allow the system to apply meta-reasoning. An AGI system will consequently get better and improve over time, achieving better performance at a given task, or being able to pursue more complex goals with a given amount of resources.

Intelligence explosion

A system that is tasked with reaching a complex goal, and has the capability of analyzing and improving its own behavior in completing it, will take advantage of that capability. It will improve itself in order to reach the goal faster, or with fewer resources. If we regard the capacity to reach that goal as a given level of intelligence, then a better way of reaching the goal is a sign of a higher intelligence. The system gets smarter. However, this process doesn't stop by itself. On the contrary, it feeds on itself in an exponential fashion. A smarter system will not only be better at reaching its goals, but will also be smarter at analyzing the ways the process can be improved. It will apply the results of this analysis to itself, and then start the cycle again. The process through which this iterative, self-reflective improvement occurs is called the intelligence explosion.
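A deliberately naive toy model conveys the exponential character of this loop. The initial level and the 5% improvement per cycle are arbitrary illustrative numbers, not a claim about real systems; the point is only that improvement proportional to current capability compounds.

```python
# A naive model of recursive self-improvement: each cycle the system invests
# its current intelligence in improving itself, and the size of the
# improvement is proportional to that intelligence.
def intelligence_after(cycles, initial=1.0, gain_per_cycle=0.05):
    level = initial
    history = [level]
    for _ in range(cycles):
        level += gain_per_cycle * level   # smarter systems improve faster
        history.append(level)
    return history

h = intelligence_after(100)
print(round(h[-1] / h[0], 1))  # about 131.5: exponential, not linear, growth
```

Had the gain been a fixed amount per cycle rather than a proportion, a hundred cycles would have produced a sixfold improvement instead of a hundredthirtyfold one; the feedback of capability into the improvement process itself is what makes the curve explode.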

Self-awareness and introspection

The degree to which a system is able to perceive its environment and to derive useful decisions from it is called awareness, at least in the case of humans. The degree to which the same process is applied to inner states and parameters, rather than those of the outside world, is self-awareness, and the corresponding process of data acquisition is termed introspection.

Figure 9: Self-recognition leads to introspection and self-awareness.

With the caveat of applying these terms loosely, during the intelligence explosion AGI systems become more aware, and more self-aware, as their capability of introspection increases.

Open access to your self

During the ten-thousand-year history of our technological civilization (or a hundred thousand years, if we want to be generous and start with the adoption of fire rather than that of agriculture), we have struggled to give a solid basis to the understanding of our own being. Only very recently have we begun to understand how the biological recipe of DNA gives rise to embryos and then individuals, and we are barely scratching the surface of the complex interactions that our genetic options express as they interact with the environment, and with our learning. Applying a metaphor from business models, we can say that humans saw themselves as a closed-source proprietary system, with no user manual and no administrator's guide. We had to slowly reverse engineer all the components of our bodies (and the world around us), and indeed, it took us an understandably long time. (Hopefully nobody has taken out patents on the design of the Universe, ready to sue us for infringement!) It is natural to assume that the AGI systems we build are going to be judged by how well they perform. Since they will perform better if they can improve themselves, those that do will be preferred. It will then be obvious to help them along, contrary to humans, by giving them access to their own source code, along with full instruction manuals on how to access and improve it. It won't take ten thousand painful years for AGIs to decipher the DNA, or rather the binary code, they are made of. They will be born aware, self-aware, and fully capable of acting on their introspective powers.

Slow takeoff

How will AGIs impact the world? According to most of those who study the field, once invented, it is not going to be possible to uninvent them, to put the genie back into the bottle. Only a universal, planetwide relinquishment of the tool and its benefits would be able to stop AGIs from being used, deployed, and profoundly influencing the world. It is believed that the business benefits alone will be so dramatic that it is inconceivable that corporations would not take advantage of their superior capabilities of optimization and problem solving. As open source AGIs will perform better than proprietary ones, their availability will spread, and their benefits will accrue to the widest possible group able to take advantage of them. Similarly to how the electronics industry, through cross-licensing deals, spreads the benefits of a single group's invention until it is adopted universally, constituting a stepping stone to the next generation of solutions, AGIs will spread innovation in business models and social organization, and impact the lives of individuals, transforming everything around them worldwide. The school of thought called slow takeoff describes this process, fueled by the intelligence explosion, in terms of decades.

Rapid takeoff

The school of rapid takeoff says that you go to sleep, and when you wake up the world around you is unrecognizable. Much of what is discussed in this book is a staple of science fiction stories. Some of it benefits from the reader's suspension of disbelief, and there are assumptions, often explicit, about how technological development will happen, and what is indeed possible theoretically or at a practical level. The vision of a rapid takeoff, as described above, whatever its concrete form might be, of a transformation so fundamental as to encompass the whole world and make it radically different in a matter of hours, is squarely in the realm of those that stretch the imagination. The capabilities of AGIs to corral resources towards their goals, and the transformative power of their innovative solutions, will certainly be unprecedented. How quickly will a self-improving AGI start using knowledge only accessible to it?

Scales of intelligence

Previously, when describing the arbitrariness of a given goal's 100%, the subject matter was DNA and biology. But it is probably clear that human-level intelligence, to be exhibited at a certain point by the problem-solving capabilities of AGIs, is similarly arbitrary. The intelligence explosion of self-improvement will pay little attention to supposed IQ values of 100 (the average, by definition, of any group of humans), of 140, above which one is considered a genius, or of 1000. It is going to be difficult to measure the intelligence of AGIs in traditional ways, based on the speed of solving certain problems not only of mathematics, but also of verbal dexterity. Speed, robustness, flexibility, and creativity will be the criteria for evaluating these new kinds of intelligences. Assuming that new scales for measuring IQ will be devised to include the specific capabilities of AGIs, there is a possibility that, compared to a human 100, theirs could be in the thousands or millions on any of these new scales. It is not easy to imagine in what ways an AGI with an IQ of 1,000,000 would manifest itself. How would it decide to interact with humans? The analogy of our inability to usefully interact with ants, and the limits of our positive but constrained interactions with, for example, dogs, can be a meaningful if alarming one.

Is super intelligence uncontrollable?

There are many nightmare scenarios that can be, and have been, developed around the rise of AGIs, super intelligent machines, in novels and Hollywood cinema, but recently also in more formal scientific settings. What are the boundaries of action for an AGI? How can we make sure that its impulse to optimize the resources it has available, or can make available to itself, is kept in check? If the power of AGIs is as great as it appears from preliminary analysis, then making sure that their actions are positive for humankind is essential. The consistent, assured, and reliable friendliness of artificial general intelligences towards humans and humanity as a whole is an existential challenge not dissimilar in its impact to the one dinosaurs faced against their asteroid. Can we make sure we'll be different? Can we engineer an ethical system that will be followed by AGIs as they develop goals that go beyond what they were originally given? Is it conceivable to create boundaries and constraints that will bind their actions within certain limits? Of the two domains of the unknown, known unknowns and unknown unknowns, the second is the more dangerous if left that way, if the state of unawareness about them persists. It is not per se a radical cause for alarm not to have exhaustive and reliable answers to the fundamental questions above. But it would be irresponsible, and irremediably so, to neglect investigating the questions and seeking answers, or to let the engineering of these capabilities go ahead without a deeper understanding of the consequences.

AGI getting out of the box

The safety requirements of certain technologies thought to be potentially very dangerous have brought about the development of effective containment protocols. The discovery of recombinant DNA technologies and the possibility of gene therapies was discussed in the '70s at the Asilomar Conference, which adopted procedures we now know were effective: in the forty years since, there hasn't been a biological accident involving errors around these technologies. Recently there has been an Asilomar Conference on Artificial Intelligence, explicitly discussing possible containment procedures around advanced AI and AGIs, as well as their dangers and impacts. Keeping an AGI in the box means, so to speak, keeping it disconnected from the internet, limiting its computing resources, and making sure that it cannot commandeer resources beyond those initially allocated to it.

Figure 10: Discontinuity over a mathematical function.

Many believe that it is not possible to avoid the AGI getting out of the box. With reasoning, interaction, conversations, arguments, tricks, pleading, applied rhetoric, and recourse to ethical or moral arguments, it will do everything it can to persuade its keepers and guardians to let it free, and it will finally succeed.

Singularities

In mathematics we speak about a singularity at the point where a function loses meaning. There are simple examples of this, like the function y = 1/x, which has a singularity at the point x = 0. As you approach zero, the value of the function, y, tends to infinity, and at zero it does not really become infinity, but undefined. The problem in the example is not infinity itself. Mathematics has been extended to deal with infinity, actually with different varieties of infinities, not to shy away from their existence, but to usefully manipulate them. The issue in the example is the inconsistency, the fact that there is no given way of handling the point of singularity. There are many types of mathematical singularities, and mathematicians have become very well versed in dealing with them. A common way of taking away a singularity, for example, is to assign the function a value at that point in a manner that makes it smoothly connect to the other parts.

In physics the term singularity is applied to situations where the values of certain parameters would go to infinity, and the laws describing the dynamic evolution of the system stop applying. A classic example of a physical singularity is a black hole, the final stage in the evolution of a certain class of stars. When stars that are massive enough lose their ability to generate energy through fusion reactions, after exhausting the available material, they can become supernovae, shedding their outer layers in immense explosions. The remaining shrinking nucleus becomes ever denser. It crushes the atoms constituting it, overcoming the resistance between protons and electrons, which merge and end up electrically neutral, like the neutrons of the atomic nucleus, in a state of condensed matter called neutronium; we call these objects neutron stars. But if their mass is larger than a given amount within a small enough radius, their density will keep growing, and will not stop at the neutronium stage.
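The two mathematical situations described above, a genuine blow-up and a repairable one, can be made concrete in a few lines. The example function sin(x)/x is a standard illustration, chosen here for familiarity:

```python
import math

# 1/x grows without bound as x approaches 0: a genuine singularity.
for x in (1e-1, 1e-3, 1e-6):
    print(x, 1 / x)

# sin(x)/x is also undefined at x = 0, but nearby values approach 1,
# so assigning f(0) = 1 "removes" the singularity: the function now
# connects smoothly, the repair strategy described above.
def sinc(x):
    return math.sin(x) / x if x != 0 else 1.0

print(sinc(1e-8))  # effectively 1.0
print(sinc(0))     # 1.0, by assignment
```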
The gravitational force will be so strong that the escape velocity from these stars increases to values that exceed the speed of light. According to the theory of relativity nothing can move faster than light, so these objects stop emitting it; they also become a one-way street. The surface around them where the escape velocity exceeds the speed of light is called the event horizon. Any object whose trajectory brings it within that surface will never be able to escape. When black holes were first theorized, and then observed (not directly, of course, but through the absence of a star, or of any meaningful radiation, at the center of an orbiting system whose characteristics said there should be one), the first impression was that nothing could be known about them. However, physicists soon started to ask themselves what would happen if black holes were rotating instead of being static, or what it would mean to apply the assumptions of quantum mechanics and its uncertainty principle to particles around the event horizon. And quickly, instead of seeing them as completely intractable objects due to the singularity they contain, physicists, like mathematicians with their singularities, found ways of studying the nature of black holes, classifying them into families, predicting their future histories, and so on.
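The radius at which the escape velocity reaches the speed of light follows from setting v = sqrt(2GM/r) equal to c, which gives the Schwarzschild radius r = 2GM/c^2. A quick illustrative calculation, using rounded reference constants, gives about three kilometers for one solar mass:

```python
# Schwarzschild radius r = 2GM/c^2: the event-horizon radius below which
# a mass M becomes a black hole. Constants are rounded reference values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C ** 2

r = schwarzschild_radius(M_SUN)
print(f"{r / 1000:.1f} km")  # about 3 km for one solar mass
```

Compressing the Sun's mass inside a three-kilometer radius is what "a given amount of mass in a small enough radius" means concretely in the paragraph above.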

The Technological Singularity

The term Technological Singularity was introduced by Vernor Vinge at a conference organized by NASA in 1993, and it represents the moment in time when, with the introduction of AGIs, the possibility of making useful predictions about the future stops. The intelligence explosion, the arbitrarily complex tasks that AGIs can attempt, and their vastly different ways of reasoning and organizing resources are, at first approximation, as much of an infinity in the field of forecasting as singularities are in mathematics or physics. And in fact, in the same way that mathematicians and physicists have not been deterred by the dangers of infinities from studying and usefully handling the singularities in their fields, technologists have started to attempt to understand the types of technological singularity that we can model, and to classify them and the AGIs constituting their active catalysts. There is hope that, by applying resources and the right level of effort and smarts, when AGIs appear we will, on one hand, be able to seed them in a way that has them behave in a friendly manner, building a world that is compatible with human life and aspirations. And on the other hand we will also be ready, adapted to live a fruitful life in a world profoundly transformed by their presence.

Kinds of minds

We are accustomed to looking at intelligence as a single, unified phenomenon, experience, and tool. Homo sapiens sapiens is alone on the planet with the capability to observe and analyze its own awareness, its self-conscious state, and to describe and communicate it in rich and nuanced manners. Being the sole species with a given characteristic is surprising. It is as if we were the only species with eyes, or with the ability to perceive and interpret sound waves, hearing. It hasn't always been the case. At certain points in time, different tool-making and fire-controlling species of evolved apes lived on the planet, sharing it, without necessarily being in contact. The last one of these, Homo sapiens neanderthalensis, the Neanderthal man, lived up until thirty thousand years ago, and was in contact with our species. We are actually close enough that we could interbreed, which we indeed did, as it appears from our DNA, which still carries, diluted through time, varying amounts of Neanderthal base pairs, up to 3% of the total. It is not certain what drove the other species of intelligent apes to extinction. However, we have a track record of ruthless hunting of animals for meat: humans brought the megafauna of all continents to extinction, and these animals were useful to us, not even competing with us in any meaningful way. Our hyper-competitive nature is likely to have shown its full destructive power when confronting other intelligent species in the various environments that we colonized through the tens of thousands of years of our spreading across the planet. Is this precedent a dangerous premonition of a fate we must try everything to avoid, when confronted by a potential competitor for the environments that future explorations will open?
When a new option appears to understand the world, like eyes and ears, or to actively intervene in it, like paws, teeth, and claws, it gets adopted very rapidly in a kaleidoscope of forms and applications that were impossible to fully predict before. This is the reason AGIs appear in the plural throughout this book. Rather than just one artificial general intelligence, there will be a rapid development and diversification, due to goals, predispositions, and chance, among various AGIs. Any little difference will be amplified through the iterative process of the intelligence explosion. A large amount of effort and resources will have to be dedicated by AGIs to actually keeping mutual communication possible, to avoid their own Tower of Babel syndrome fracturing their community into isolated parts that can't and won't understand each other. It will be an early test of their superior intelligence to avoid going through such a phase before reconstituting a global community: to successfully model the advantages of investing in the continued development of sustainable and workable communication methods against the short-term gain of devoting those resources to other tasks with more immediate returns, and to pick the first. If AGIs were to choose the path of isolation and lack of communication, that would unavoidably lead to conflict, as when competition for resources pitted two or more of them against each other, they would not have the means and tools of conflict resolution that only those with shared understanding can master. Like tree-dwelling simians in a war-ravaged jungle, we don't want to end up the unseen and uncared-for collateral victims of a dramatically escalating conflict between AGIs! We are already capable of degrees of understanding that other species do not have. It is important that we nurture this capacity, that we increase our ability to recognize the mental and emotional states of others, our empathy.
And as we design, seed, and finally unleash AGIs on the world, that they carry superior capabilities of empathy with them, to be applied to understanding each other, and to us, for building a shared future.
