Comment on Changing our minds

starwalker Tue, Dec 1, 2009
Hello Spaceweaver, thank you for propelling this provocative discussion.

I believe someone should voice some counter-arguments, so, loyal to my late-evolver avatar, I shall :). As a premise, I do agree that, looking into the future as it presents itself to us at this time, we humans are unprepared and indeed unfit for our projections.
Yet when considering the expression “unfit for our future / the future”, I cannot help reflecting on the fact that the future is (at least in terms of description) a human construct. Across centuries we have conceived our future as something that by definition exceeds us, a repository for our breaking through impossibilities: think of the utopia of Plato, the machines of Leonardo, the realities of Jules Verne, all the way to the future worlds of Huxley, the singularity of Vinge, the everlasting life of Kurzweil, and the thousands, millions more humans and projections that may be cited in this category. The future is a category of mind through which the human challenges itself to be greater, to go farther, to be more. So yes, we are intrinsically unfit for our future, now more than ever before. Yet I do not see how this drive for exceeding ourselves can be translated into an unquestionable validation; a shift which, in my eyes, is performed by Savulescu in his talk about genetic enhancement.
Is it so? Are we to enter the very complex territory of genetically modifying ourselves (and our descendants) under the spell of ‘emergency’: do or die, comply or go extinct, enforce or prepare for horror?

We humans have quite a historical record of devastating decision-making processes, and implications, in these kinds of circumstances (though of course criticality is always a difficult argument to fend off, and one of the few that catalyzes either decision or compliance).

I hardly consider the human of today adequately equipped to match its own future projections, in this sense ‘guilty’ before any serious jury; and yet attributing this incompatibility to genetics alone, to my mind, raises the complexity at play rather than reducing it.
Are we better genetic designers than we are designers of the environments of our immediate existence (politics, education, health, environment, wealth..)? How so?
Is the act of delivering ourselves to the work of ‘experts’ (genetic scientists in this case) the way to harvest our potential for evolution? Or is it our current solution for bypassing the complexity of interaction?

Which brings me to the next point in your answer to Wildcat’s critical question:

Should anyone be informed? Not necessarily. Wouldn’t it be the best if such change would take place without any public knowledge at all? Since it will happen to all of us at once, no single individual or group of individuals (including the person initiating the act) will possibly gain any particular advantage upon others. Nothing in the general balance will change. We will just wake up a bit smarter and a bit more agreeable to each other (and ourselves). Though there is no direct consent, I do not see how any right, actual or implicit, of any living human being is possibly transgressed.

I am aware that we humans have the unfortunate tendency to be highly opinionated, mostly on no grounds, and to be ridiculously, not to say dangerously, able to create fatal avalanches out of it. So keeping our evolution (whether natural or consciously designed) under the radar has its very good points. And yet I cannot really agree with the terms you mention, at least not yet.

To begin with, I believe there may be a problem with the formulation of the gedanken experiment itself. Through the last 30 years of complexity studies (and ‘naïve’ experimentation upon entire populations with about anything, from pharmaceuticals, to chemicals, to food, to ideas, etc.), we begin to see that there is virtually nothing that can be applied to large systems of agents without unpredicted or unwanted consequences. As you clearly mention, we have no way to compute the complexity of present and future scenarios. Newtonian rational mechanics, though an amazing medium for our minds to penetrate abstract riddles, seems not to apply to the world we live in once we enlarge the frame. We may have to consider this in our current gedanken experiments, as a way to get used to the necessary modeling of systems.

In this sense, an intervention that overnight raises the IQ or EQ of all humanity, even minimally, will come with a percentage of negative side effects (mostly unknown). Setting it at a very low rate of 0.1%, a population of 6 billion will yield 6 million humans with unknown negative side effects. Though I agree it may still carry quite some predictable advantages for a population of 6 billion aggressive, unfit agents, would it still be ethically viable to withhold the information?
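The 0.1% figure is simple arithmetic; a minimal sketch (the helper name is my own, purely illustrative) makes the scale of even a "very low" side-effect rate explicit:

```python
# Back-of-the-envelope check of the side-effect arithmetic above.
# This is an illustrative calculation, not an epidemiological model.

def expected_affected(population: int, side_effect_rate: float) -> int:
    """Expected number of people experiencing negative side effects."""
    return round(population * side_effect_rate)

population = 6_000_000_000   # roughly the world population in 2009
rate = 0.001                 # the 0.1% rate assumed in the text

print(expected_affected(population, rate))  # 6000000, i.e. 6 million people
```

Even a rate an order of magnitude lower (0.01%) would still leave 600,000 people with unknown effects, which is why the disclosure question does not simply vanish at small percentages.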

From a different standpoint, considering myself one of the uninformed: doesn’t the no-need-to-know decision subtly negate the added value of consciousness? Does the fact that I am conscious of, and able to reflect upon, something that is happening to me bring about any advantage? I realize that keeping it under the radar is basically a very good mathematical operation for excluding a probably unnecessary level of complexity from the equation (at this layer we are no longer dealing with a system of ‘emotional’ agents, but of units). Yet I believe ours is the time in which we begin to see (or at least delude ourselves into seeing :) some advantages to this layer of complexity. I will cite, as one of many, Venessa’s proposition to see our online selves as a ‘kind’ of neurons in a global brain.

Do you think it would be advantageous to apply this kind of diverse, complex computation system to the first tentative steps of our consciously designed evolution?
What I mean by that is allowing diversity and open information across a multiplicity of procedures (and of course variations). Yet this would require all information to be disclosed. Though the sustainers of hell-scenarios never tire of ‘realistically proving’ to us that one bad apple will be more than enough to catalyze human catastrophe, I find myself at times preferring to take this risk over paving a reductionist road of compliance to unexposed ‘experts’.