Comment on Changing our minds

Spaceweaver Wed, Dec 2, 2009
Starwalker:
Yet when considering the expression “unfit for our future / the future”, I cannot but reflect upon the fact that the future is (at least as a description) a human construct. Across centuries we have conceived our future as something that by definition exceeds us, a repository for our breaking through impossibilities. Think of the Utopia of Plato, the machines of Leonardo, the realities of Jules Verne, all the way to the future worlds of Huxley, the singularity of Vinge, the everlasting life of Kurzweil, and the thousands, millions more humans and projections that may be quoted in this category. The future is a category of mind through which the human challenges itself to be greater, to reach farther, to be more.

We humans have a fascinating relationship with the concept of time, and especially with the future. Without straying from the subject matter here, I would like to highlight an important point: the categorical distinction between envisioning a future and predicting a future. While the former is mostly a product of imagination and creativity, the latter is a product of empirical observation and rational/mathematical thinking.
A bad habit of popular media (and not only of the media) is to express envisioning in language appropriate to prediction, and vice versa. This is a source of much confusion when we try to think clearly about the future, especially when we try to facilitate envisioning with predictive tools. When we do that, we need to be all the more careful that wishful thinking is not confused with our hard-won predictive capabilities (this is an aspect of developing an evolutionary perspective, as I mentioned in my comment).

Of course prediction, especially of complex states of affairs, is inexact, and most predictions are probabilistic. This, however, does not mean that we are entirely blind regarding the future, or that we can only imagine or guess. My 11-point summary in the body of the post does not belong to the category of envisioning; it is a (highly simplified) foresight derived from observation and the application of rational tools. Foresight, I believe, is critical to any attempt to consciously guide evolution.

The drive to exceed ourselves definitely belongs to the visionary aspect of our relationship with the future, while the reflections expressed in the post (and more boldly by J. Savulescu) are attempts to respond to some very pressing predictive results. One of the most profound advantages of the human is the ability to predict and plan. The validity and the urgency here are not derived from our imagination but from our predictive tools. Indeed, we do not know the future. We do, however, have strong statistical markers. When these markers are refined and properly processed, they often yield valuable and valid knowledge: the kind of knowledge we can act upon.

Starwalker:
We humans do have quite a historical record of devastating decision-making processes, and their implications, in these kinds of circumstances (though of course criticality is always a difficult argument to fend off, and one of the few that catalyze either decision or compliance).


This is a kind of self-defeating argument. Since we still exist, in spite of the many adverse circumstances and wrong moves, it seems that, on average, we have made more beneficial decisions than devastating ones. On the point of genetics, I think the discussion here should focus not on the means but rather on the ethical side. My guess is that brain-altering drugs will become available much faster than other technologies, including genetic germ-line modifications. Highly effective brain-enhancing drugs are less than a decade away, while genetic manipulation of the scope relevant to this discussion is at least two decades away; in any case, its effects will be seen only generations ahead, while brain-altering drugs become effective immediately upon application.

Starwalker:
Is the act of delivering ourselves to the work of ‘experts’ (genetic scientists in this case) the way to harvest our potential for evolution? Or is it our current solution for bypassing the complexity of interaction?


We are delivering ourselves to the work of ‘experts’ in almost every relevant aspect of our lives, individually and collectively. This is a transition we made long ago: when we go to a doctor or to our bank advisor, when we listen to the weather forecast, when we vaccinate our children, when we fly on a plane, even when we buy food in the supermarket. We believe we have a choice, but this is an illusion, because the very options (if there are any) are provided by ‘experts’. Indeed, our whole civilization is based on creating expertise, and this aspect of civilization will only become more accentuated. Leaving our genetic future to chance, as we did for eons, not out of choice, bypasses complexity more than any attempt to initiate a directed intervention. I do not think that genetic intervention is an ultimate ‘solution’ to the human condition, not at all! Yet I think that genetic design is an essential component of any viable strategy towards future guided evolution.

Starwalker:
To begin with, I believe there may be a problem with the formulation of the gedanken experiment itself. Through the last 30 years of complexity studies (and ‘naïve’ experimentation upon entire populations with just about anything: pharmaceuticals, chemicals, food, ideas, etc.), we begin to see that there is virtually nothing that can be applied to large systems of agents without unpredicted/unwanted implications or consequences. As you clearly mention, we have no way to compute the complexity of present and future scenarios. Newtonian rational mechanics, though an amazing medium for our minds to penetrate abstract riddles, seems not to apply to the world we live in once we enlarge the frame. We may have to consider this in our current gedanken experiments, as a way to get used to the necessary modeling of such systems.

In this sense, an appliance that would overnight raise the IQ or EQ of all humanity, even minimally, will come with a percentage of negative side effects (mostly unknown). Setting it at a very low rate of 0.1%, a population of 6 billion will yield 6 million humans with unknown negative side effects. Though I agree it may still carry quite some predictable advantages for a population of 6 billion aggressive, unfit agents, would it still be ethically viable not to disclose the information?


I certainly agree that the experiment is described in highly simplified terms. I do think, however, that this fact does not nullify its value. Thought experiments are usually reduced (sometimes ad absurdum) versions of a much more complex state of affairs. But here, I think that, at least in some important aspects, it opens an interesting thread worth pursuing, if only to expose our emotional and intellectual barriers.

I disagree, however, with the ‘negative side effect’ argument you raise. First, there is no known effect, however positive it might be, without some negative side effects. An absolute ‘good’ belongs only to idealistic thought. Negative side effects become a consideration only insofar as we are able to predict them and relate to them. Only then can we make a rational choice regarding a certain act. In our discussion, for example, you would need to provide actual information/predictions regarding possible side effects to weigh against the positive effect presented in the experiment. Otherwise, every single act can be argued against by bringing up obscure negative side effects. If we cannot make a rational decision, we have only two options: act on impulse or refrain from any action whatsoever. Neither will bring us anywhere…
The question of exposing the information is not that straightforward. Let us engage in a little auxiliary thought experiment: I put a placebo compound into everybody’s water (or do nothing at all), and I inform everybody that I have put something into the water and that, apart from a few minor negative side effects of unspecified character, everybody will become more intelligent (5–10 IQ points). What do you imagine will happen?

Starwalker:
From a different standpoint, considering myself as one of the uninformed, doesn’t the no-need-to-know decision subtly negate the added value of consciousness?

Not at all! First, you will not know that you are uninformed :-) so how could it possibly bias the value or potential expansion of your consciousness (or anybody’s)? There is vastly more unknown in our universe than known. I find it hard to see, under the carefully laid conditions of the experiment, that anyone is affected in this respect. The effect is global and undiscriminating; there is no differential advantage that could possibly be gained. Second, on deeper philosophical grounds, I do not think that, in general, an increase in information changes our state of consciousness or its potential to expand. I do not believe that accumulating experience or knowledge (as information) makes one more or less conscious.

Starwalker:
Would you think it to be of advantage to apply this kind of diverse, complex computational system to the first tentative steps of our consciously designed evolution?
What I mean by that is allowing diversity and open information in a multiplicity of procedures (and of course variations). Yet this would require all information to be disclosed. Though the sustainers of hell-scenarios do not tire of ‘realistically proving’ to us that one bad apple will be more than enough to catalyze a human catastrophe, I do find myself at times preferring to take this risk over paving a reductionist road of compliance to unexposed ‘experts’.

Information, by its very definition, is something that makes a difference. Given the subtle, almost imperceptible effect I was writing about in my comment, and given the undiscriminating nature of the action, I do not see what difference such an exposure might make, so I am in doubt whether it is information at all. Of course, such an event has all the ingredients of being the most lucrative piece of gossip ever conceived (where the very exposure is the difference, rather than the exposure of a difference). As such, some might reflexively imagine that it is indeed information. But I think it is rather a simulacrum of information, not really information.

‘Compliance to unexposed experts’ makes it sound a bit like conspiracy theory. The vastly powerful impact achieved by a single individual immediately invokes an instinctive suspicion. ‘No one should hold such power’, ‘No one should perform such an act whatsoever’, ‘No one is god’ are probably the voices that arise in one’s mind in response to the proposed experiment. I do not automatically subscribe to the advice of these voices. I do accept that they are manifestations of the regulating processes of our social organism. I even accept that in the majority of cases these voices express a generally beneficial heuristic. The important point, however, is that in some very (very) rare cases, such as the one I am describing in my last comment, these voices are to be ignored. This is not said with carelessness but with the firm belief that if one hand can devastate all humanity, we must allow the possibility that one hand can initiate a transformative, beneficial process at the scale of all humanity. I would not take the risk of discarding such a possibility on the basis of a heuristic rule of thumb, no matter how instructive it is.

The ethical viability argued for in my comment remains intact!
Thanks for your comment.