    "All things move and nothing remains still" — Heraclitus



    The history of astronomy can be read as a story of better and better vision. Over the centuries, we have supplemented our vision with technology that allows us to see further and more clearly; while ancient astronomers, who relied only on their naked eyes to perceive the universe, managed to make star catalogues and predict comets, Galileo, pressing his eye to a telescope, saw all the way to the moons of Jupiter.

    Optical telescopes and the human eye are fundamentally limited; early astronomers were forced to gaze into telescopes for hours on end, waiting for moments of visual stillness long enough to allow them to quickly sketch drawings of the features they were simultaneously trying to understand. Between a telescope (incidentally, "telescope" is Greek for "far-seeing") and the celestial bodies beyond, the Earth's atmosphere itself is in turbulence, its optical refractive index bleary — which presented early astronomers with a view of the universe that was blurred, twinkling, always in flux. This is because the sky is not a still, transparent window. Thermal currents passing through the Earth's atmosphere cause air density (and hence the refractive index of air) to vary, to warble like a desert mirage. Light does not pass through this unaffected. Quite the opposite, in fact — thermal currents are like thousands of lenses floating around in the air. We call this phenomenon "astronomical seeing," and it's why stars sparkle, why even the moon seems to be swimming in water when peered at through an optical telescope.
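
    A rough back-of-the-envelope comparison makes the point (the numbers are illustrative): a one-meter telescope's theoretical diffraction limit is far sharper than the arcsecond or so of blur that typical atmospheric seeing allows.

        import math

        # Rayleigh diffraction limit, theta ~= 1.22 * wavelength / aperture (radians),
        # versus a typical ~1 arcsecond of atmospheric seeing at a decent site.
        wavelength_m = 550e-9   # green light
        aperture_m = 1.0        # a one-meter telescope
        rad_to_arcsec = 180 / math.pi * 3600

        diffraction_limit = 1.22 * wavelength_m / aperture_m * rad_to_arcsec
        typical_seeing = 1.0    # arcseconds

        print(f"diffraction limit:  {diffraction_limit:.2f} arcsec")  # ~0.14 arcsec
        print(f"atmospheric seeing: {typical_seeing:.2f} arcsec")     # the atmosphere wins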

    It wasn't long before Galileo and his fellows had seen as far as their technology — and their vision — could reach. In the years to follow, new far-seeing tools popped up as needed: X-ray telescopes, gamma ray telescopes, high-energy particle telescopes, even telescopes floating in space. As time progressed and our science grew more refined, we tried wavelengths previously unnoticed; we paid attention to new qualities; when we thought we'd seen it all, we looked again, our vision evolving beyond biology as we began to "see" with technology.

    The inevitable result was that though the physical universe never changed, we did, because we looked differently.



    This different-looking triggered perhaps the most important conceptual leap in the science of the 20th century: the realization that there is more to reality than what can be seen. The years between 1880 and 1930* saw massive upheavals in the way science was conducted — during this period, we moved from the strict empiricism of Newton to the reliance on unobservable and theoretical constructs that dominates the discipline today. We began to peer into previously unseen worlds; we parsed the structure of the atom and discovered elementary particles. Once we were there, our old physics no longer had any bearing. We needed to invent and codify new ways of seeing, ways not dictated by observable phenomena; and so our understanding of time and space gave way to general and special relativity, quantum mechanics, and alternative geometries. The intellectual legacy of this radical change — and its relevance to my point here — is in the primacy it lends to subjectivity, not only to the instruments of seeing, but to those who peer into them.

    Astronomy, too, zygoted in the early 20th century. Photography solved the problem of hand-drawing findings between patches of blurry sky. Infrared, radio, X-ray, and finally gamma-ray astronomy came to prominence, filling our coffers with surreal images of a previously invisible world. We used spectroscopy to study stars; our sun was found to be part of a galaxy, and the existence of other galaxies was settled by the great Edwin Hubble, who identified many others, rapidly receding from our own, at impossibly large distances. We created the model of the Big Bang. We stumbled upon cosmic microwave background radiation. All of a sudden, the story of the universe as we knew it vaulted out of the visual world and into a rich, billions-of-years-long narrative of unseen forces and galaxies so distant they bordered on theoretical abstractions. Like science itself, visual perception of the cosmos evolved from the physical to the theoretical; when we speak of "seeing" astronomical images, we're talking about a highly mediated experience, captured by mechanical sensing devices, where invisible qualities are color-coded into something the human eye can register as information.

    The eye is almost universally a symbol of intellectual perception; in Taoism, in Shinto, in the Bhagavad Gītā, the eyes are the sun and moon. Is it any wonder that the ancients conflated astronomy and astrology? That those who look out at the universe have so often been mystics, seekers, and seers? We speak of "visionaries" in all fields as people who are capable of seeing furthest — beyond the blurred intermediary of the physical world and straight to the heavens.


    Optical, radio, X-ray, and WMAP all-sky images. Images via online sources, animated GIF by yours truly.

    *As an interdisciplinary aside: this period was simultaneous with the rise of modernism and abstraction in the arts. Could this movement from the pragmatic and visible to the invisible and conceptual be attributed to a common zeitgeist? Could it be that the early 20th century saw an unprecedented amount of cross-pollination between the arts and sciences, leading to a moment of cultural fertility?
      
    Ed: This is an essay I wrote for my friends at the World Science Festival, riffing on the central themes of this year's event. If you prefer, you can also read this piece on the World Science Festival site. And, if you're in New York between the first and fifth of June, you could do much worse than popping into the Festival and getting a load of panel discussions like The Dark Side of the Universe, or Science & Story: The Art of Communicating Science Across All Media.



    Science communication is difficult.

    It can be crippled by the complexity of its own subject matter. It can be steeped in jargon, too dense for its readership, or, conversely, too simplistic to satisfy its critics in the scientific community. It can lack warmth, or be too paranoid about its empirical rigor to engage in the metaphoric flights — the quick shifts from microcosm to macrocosm — that cue readers to an emotional engagement in any subject. The problem may lie in an inescapable tautology: to fully understand a scientific, taxonomic, objective conception of the natural world is to be so steeped in scientific idiom that poetics become impossible.

    And yet, there are those who are capable of communicating the invisible phenomena of science to the public. These people are essentially bilingual. The Sagans, the deGrasse Tysons, the E. O. Wilsons; Angier, Attenborough, Carson and Greene; the radio producers, writers, filmmakers, documentarians, and public speakers; these are our human bridges, our storytellers, fluent in both big and small. It's a specific skill, to be a gifted science communicator — that rare person who can straddle two divergent worlds without slipping into the void between the so-called "Two Cultures," someone with hard facts in their mind and literary gems in their rhetoric. They must accomplish the humanization of abstract ideas without pandering, make science poetry without kitsch. Even at their best, they can be silly — think of Carl Sagan, in his burgundy turtleneck, proclaiming, "in order to make an apple pie from scratch, you must first invent the universe." It may seem absurd to draw such a huge subject down to Earth in such a literal way, but what Sagan taps into is the necessity of these seemingly silly flourishes.



    See, science is big. It's driven by the desire to understand everything!

    The immensity of such a project necessitates that science be undertaken not by one group of men and women in one time, but all men and women for all time. The final goal always eludes us: to understand this, we must first understand this, but to understand that, we must understand this, ad infinitum. Scientific knowledge is won by climbing the shoulders of giants; but these giants are a never-ending stack of matryoshka dolls. In fact, the very notion of there being a final point in science has become so abstract as to be almost irrelevant; the more we know, the more we know that we do not know, and the end of the game is nowhere to be seen. And, perhaps, there is no end game.

    To a scientist, this endless narrative satisfies. The balance of properties and theories that define the natural world, the physical Universe, or the underpinnings of mathematical reality is elegant and stirring; knowledge, and the search for more of it, is a raison d'être. For those of us not wired the same way, the greater narrative of science can be overwhelming, if not inscrutable. We need stories with beginnings, middles, and ends. We need things to relate to, objects to hold onto, characters to laugh and cry with. We need to synthesize abstract ideas through allegories, metaphors, and images.

    Popular science communication is defined by such literary gestures. For years, students of astronomy struggled with the concept of an expanding universe without a center (a notion which violently bucks against reason). Cosmologists, however, came up with an image — a metaphor — which lightens the load: imagine that the universe is an expanding balloon, and the stars and objects in space are dots drawn on the surface of this balloon. From any one star's vantage point, all the other objects in space are moving away from it, yet no point on the balloon can claim to be the center of the expansion. The more distant points appear to be moving faster. Apart from being a devastatingly simple image that conveys more information than entire astronomy textbooks, it's also an elegant metaphor. It accomplishes the same things as the most successful of literary metaphors: a world of feeling and information, the very chaos of physical reality, in one image. It translates profound abstraction (the universe) into something we can imagine holding in our hands (a balloon).
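
    The balloon also hides a quantitative pattern — the Hubble relation, v = H0 · d, with H0 somewhere around 70 km/s per megaparsec — and a quick, illustrative back-of-the-envelope shows how recession speed grows with distance:

        # Hubble's law, v = H0 * d: every dot on the balloon sees every other dot
        # receding, and the more distant dots recede faster. H0 is approximate.
        H0 = 70.0  # km/s per megaparsec

        for distance_mpc in (10, 100, 1000):
            velocity_km_s = H0 * distance_mpc
            print(f"{distance_mpc:>5} Mpc away -> receding at ~{velocity_km_s:,.0f} km/s")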



    Good science communication molds complex ideas into human-scale stories. It turns a discussion of the cosmos' impossible scale into inflating balloons. Or into Sagan, sitting at his dinner table like a medieval king in corduroy, a steaming apple pie at the ready.


    The moon is a rock.

    But it's also Selene, Artemis, Diana, Isis, the lunar deities; an eldritch clock by which we measure our growth and fertility; home of an old man in the West and a rabbit in the East; the site of countless imaginary voyages; a long-believed trigger of lunacy (luna...see?). It's another world, close enough to our own to peer down at us; to it, we compose sonatas. It can be blue, made of cheese, a harvest moon; we've long fantasized about its dark side, perhaps dotted with black monoliths or inhabited by flying men.

    The moon is a totem of great importance in all religions and traditions; in astrology, it stands for all those things which make this fine scienceblogs readership develop facial tics: the unconscious, parapsychology, dreams, imagination, the emotional world, all that is shifting and ephemeral. According to the Penguin Dictionary of Symbols, as the light of the moon is merely a reflection of the light of the sun, "the Moon is the symbol of knowledge acquired through reflection, that is, theoretical or conceptual knowledge."

    All of this to say that while the moon is a rock, it's also an idea.

    And, as an idea, it appeals to artists. The moon, however, remains beyond the reach of artists by virtue of what makes it interesting to them: namely, its moon-ness, a perfect storm of mystery, opacity, and unreachability.



    So just how do you implement the moon in your practice when it's 240,000 miles away? As an artist, how do you stake a claim somewhere inside of the patriotic military-industrial research bureaucracy that controls the purse strings, and thus access to our nearest celestial bodies? There doesn't seem to be a direct entry. If you're part of the original Moon Museum posse, you go in the back door, sneaking your work illicitly onto the heels of a lunar lander. If you're Belgian artist Paul Van Hoeydonck, you meet astronaut David Scott at a dinner party.

    Van Hoeydonck is responsible for the only piece of art on the moon, a tiny memorial sculpture called "Fallen Astronaut." The piece is interesting for several reasons. For one, it presents us with a clear understanding of the kinds of technical limitations that moon artists must work under. Limitations, of course, can be instrumental to an artist's practice — a broke Basquiat painted on window frames and cabinet doors — but space art's parameters border on the draconian. In the design of the piece, Van Hoeydonck was restricted to materials that were both lightweight and sturdy, as well as capable of withstanding extreme temperatures. Since it was to be a memorial to deceased astronauts, it couldn't be identifiably male or female, nor of any ethnic group. The somewhat questionable result: what looks like a metal Lego lying face-down on Mons Hadley.

    Like the Moon Museum, Fallen Astronaut was an unofficial venture; the statuette was smuggled aboard the Apollo 15 lunar module by the astronauts themselves — Scott and Jim Irwin — without the knowledge of NASA officials. Its "installation" was unorthodox: in laying down the sculpture and its accompanying plaque, Irwin and Scott performed a private ceremony on the lunar surface. "We just thought we'd recognize the guys that made the ultimate contribution," Scott later said. Notable: "the guys" include eight American astronauts and six Soviet cosmonauts, a surprisingly apolitical act of solidarity in the midst of the Cold War.



    Scott and Irwin were committed to the sanctity of their memorial; when Scott plopped the piece onto the lunar dust, Irwin covered the act with inane radio chatter to Mission Control, and they didn't announce the memorial until after their return to Earth. Even then, the astronauts kept Van Hoeydonck's name private, hoping to avoid any commercial exploitation of the piece. Van Hoeydonck, undoubtedly hoping to further his career, later violated the unspoken sacredness of Fallen Astronaut by attempting, in 1972, to sell hundreds of signed replicas of the piece at $750 a pop. We'd all recoil in horror if Maya Lin tried the same thing with the Vietnam Veterans Memorial, but I'm almost tempted to give Van Hoeydonck a pass. After all, the Fallen Astronaut itself is just a totem, and a toylike one at that.

    I see this story as something of an inversion of the usual artist-scientist dialectic. Van Hoeydonck, here, was essentially an engineer. All he did was design a tin man to technical specifications, but it was Scott and Irwin who made the visionary decision to perform an unnecessary act of beauty on the chunk of rock orbiting our own planet. It was the astronauts who snuck the statuette all the way to the moon and secretly installed it. They understood that beyond being a rock, the moon is an idea, and that actions performed on the moon by human beings are instantly imbued with meaning, historical significance, and some kind of indefinable holiness. Scott, Irwin and NASA balked at Van Hoeydonck's commercial enterprise, and the artist eventually retracted it, instead donating various replicas of Fallen Astronaut to museums and keeping the rest to himself, un-monetized.

    While it's ordinarily the artists who defend the formal importance of ideas for their own sake, on Apollo 15 it was, well, not the scientists — but the military-trained, engineer-pilot, non-artist astronauts who did. Which perhaps goes to show that the experience of space, the perspective-altering transcendence of the so-called "overview effect," ultimately turns us all into poets.
      
    This is the first in a series of posts about art, the moon, and art on the moon. You would think this would be a fairly limited subject, but...



    Art on the moon has been happening for a long time.

    In 1969, a coterie of American contemporary artists devised a plan to put an art museum on the Moon. When NASA's official channels proved too dauntingly bureaucratic, Andy Warhol, Robert Rauschenberg, David Novros, Forrest "Frosty" Myers, Claes Oldenburg, and John Chamberlain weren't deterred. Instead, they managed to sneak their "museum" — in reality a minuscule enamel wafer inscribed with six tiny drawings — onto the leg of the Apollo 12 mission's landing module, Intrepid. Of course, NASA has no official record of this intervention, but the New York Times ran the story several days after Apollo 12 took off.

    The museum, which looks like a paleo-modern computer chip, includes a drawing of a wavy line, courtesy of Rauschenberg, a doodle of a mouse by Oldenburg, John Chamberlain's template pattern, and a piece by Warhol that the Times in '69 called "a calligraphic squiggle made up of the initials of his signature," but is obviously a penis.



    It seems to me that the artistry of this "museum" is as much about the gesture of sneaking it, illicitly, onto the leg of the lunar lander, as it is about the drawings themselves. The Moon Museum is a cosmic happening, an outer-space intervention, a performance piece with no human (or Selenite) witnesses. Whether or not it even exists is a point of contention; it bears a mystique that an official NASA presence would have irrevocably squelched. Which is perhaps what separates artists from those who seek the cosmos for scientific or technological reasons. For artists, the objective may not necessarily be the quest for knowledge, but rather the desire to play with and articulate Mystery, capital-M. Space inspires awe, feeling, and perspective — the currency of the arts.

    As much as the fierce nationalism of space history would suggest otherwise, space also belongs to no one. No nation, no species, and no ideological subcategory of humanity. Obviously astronomers, scientists and engineers have had the most serious crack at the interpretation of the vast impersonal Universe beyond our atmosphere — but mystics, myth-makers, and shamans were at it for centuries beforehand. Of course the prevailing rhetoric since the Enlightenment has been to distance the rational sanctity of science from the taxonomy-barren mish-mash that came before it, but our interdisciplinary age, it seems, should allow us to appreciate the importance of one without devaluing the other. This isn't a new idea: even NASA gave Laurie Anderson an artist's residency.

    As we expand our boundaries beyond the limits of our planet, the idea of "Moon Arts" or "Space Arts" won't seem any more sci-fi than regular old Terrestrial Art. Reality is fodder for exploration and creativity, so who's to say that artists, once they secure passage to orbit, the moon, Mars, and beyond, shouldn't have as much of a say in our understanding of space as the people who sent them there?

    Footnote:

    Incidentally, the Moon Museum wasn't the only rogue intervention on the Apollo 12 Mission. Pranksters back at Cape Canaveral snuck laminated, fire-proof Playboy centerfolds into astronauts Al Bean and Pete Conrad's checklist booklets. The bunnies, which had captions like "Seen any interesting hills and valleys?" and "Survey — her activity," were the first American women in space.



    In 2004, some robotics geeks and sci-fi fans built a functional robotic likeness of Philip K. Dick. It looked like Dick, dressed like Dick, and was completely autonomous. Capable of operating without the intervention of its makers, it could track people coming in and out of a room with face-recognition software, greeting those it knew. It could listen to conversation and, using complex algorithms, respond verbally with speech synthesis.

    This “robotic portrait” was as much an art project as it was a feat of engineering. For several years, the android made public appearances — at conferences, comic conventions, Artificial Intelligence organizations, and so forth. In 2006, it mysteriously disappeared in transit to Mountain View, California, where it was to meet with some Google employees. Speculation abounded. Horrified, I imagined the android out in the world, having a hellish time of consciousness. Strange and poetic as it was, the story could have ended here.

    And yet, the Philip K. Dick android has now been rebuilt. Behold!

    The new android is being referred to as “New Phil.” Its vanished predecessor, “Old Phil.” To recap: a man who spends his career writing about androids dies. Twenty years later, an android is made in his image, effectively bringing him back to life. That android disappears. A new one is built; at this point we’re three degrees of separation from the original. I can’t help but fantasize about a future model (New New New Phil?) becoming self-aware, and immediately being convinced that he is the real, original Phil. I mean, it literally reads like an actual Philip K. Dick story — life imitating art, imitating life.

    The brain-boggling postmodern meta-irony is not lost on its makers, thankfully. On translating this particular writer — and not, say, Arthur C. Clarke or Isaac Asimov — into an android, they explain, “An android of Philip K. Dick is a sort of paradox. It’s certainly what Hofstadter would call a ‘tangled hierarchy.’ This is something that you don’t get by making an android out of any other science fiction writer.” They point out that Dick didn’t just write about androids; he wrote about people thinking they were androids, or androids thinking they were people, and everything in between. The terrible crux of Dick’s canon often hinges on the question, “what is the difference between being human, and being programmed to believe you are human?”

    Still, it’s hard to guess what Dick, who died in 1982, might have thought of his robotic likeness. In a 1975 essay called “Man, Android, and Machine,” he wrote:

    “Within the universe there exist fierce cold things, which I have given the name ‘machines’ to. Their behavior frightens me, especially if it imitates human behavior so well that I get the uncomfortable sense that these things are trying to pass themselves off as humans but are not. I call them ‘androids,’ which is my own way of using that word. By ‘android’ I do not mean a sincere attempt to create in the laboratory a human being. I mean a thing somehow generated to deceive us in a cruel way, to cause us to think it to be one of ourselves. Made in a laboratory — that aspect is not meaningful to me; the entire universe is one vast laboratory, and out of it come sly and cruel entities which smile as they reach out to shake hands. But their handshake is the grip of death, and their smile has the coldness of the grave.”


    Would New Phil — or for that matter, Old Phil — embody this “coldness of the grave” to his namesake? I can’t help but think of Jack Bohlen, in Martian Time-Slip, servicing the simulacra in his son’s school and having schizoid episodes where he believes that every person is secretly a machine, a mechanism. The profound sense of disconnect that this vision lends to his reality, the Philip K. Dick android does to me.

    Dick’s books have been endlessly adapted to the screen, and yet this bearded machine does more to bring the philosophical mise-en-abyme of his work alive than any number of Daryl Hannahs or Arnold Schwarzeneggers (be they lurking in rainy alleyways or gun-fighting in the red-tinged Martian atmosphere) ever could. I mean, it is Philip K. Dick: both visually and theoretically. It’s a physical embodiment of everything he feared, loved, rhapsodized on, got paranoid about. It’s a “living” paradox; it’s science-fiction reality, a powerfully strange sculpture.


    A few months ago, I went to Cyborg Camp in my hometown of Portland, Oregon. Cyborg Camp is an "unconference," basically a room full of cyberpunks, mega-nerds, and aspirational coders who gather in an office building to talk about the "future of the relationship between humans and technology." This event deserves a separate entry, but for now I'd like to recall one particularly evocative image: the most heartbreaking thing I saw at Cyborg Camp was an adult man hopelessly tangled in a web of cables.

    It was his own off-the-shelf wearable computing system, a gordian thing connecting his outdated Windows smartphone to a pair of personal video glasses via an unwieldy battery pack in his shorts. He was trying to show it off to an audience eager to learn about "DIY Wearable Computing." Unfortunately, it was like watching a third-grader thread his mittens through his winter jacket sleeves.

    "Talk about first world problems," I heard him mutter.

    His computer system-cum-outfit was shitty. It was shitty in the way that most things light-years ahead of their time are shitty, because the rush to make them into reality precludes aesthetics. People dedicated to developing new technologies are largely interested in them working — they can worry about looking good later. As a rule, technology is born ugly, then gets refined: compare the first Apple computers to the blemish-less glass of an iPad screen.

    Wearable headset computers don't really exist to anyone but the people who actively wish for them; those people take matters into their own hands with Sharper Image and Made-in-China techno-junk. Such tangled-cable DIY cyborg hacks are entirely about function, and usually have no concern for design. That blind adherence to pragmatism may even be the defining characteristic of geek fashion. Technical sandals, video glasses, and LED-rigged shoelaces are functional and hideous, whereas fashion ("real" fashion, whatever that means) is beautiful and useless.

    The point of this meandering introduction is that we are rapidly approaching an age where this general rule is no longer rock-solid. Consider the Emotiv EPOC. This is an actual, purchasable product: a "neuro-signal acquisition and processing wireless neuroheadset." When donned atop your dome, the headset tunes its sensors into electrical signals produced by your brain, effectively detecting your thoughts, feelings, and expressions and allowing you to control a computer with your mind.

    [Pause for effect]



    This is the first commercially-available device of its kind. It is insanely ahead of its time. Have you ever even heard the word "neuroheadset" before?

    And yet, the Emotiv EPOC neuroheadset is pretty beautiful. It's not an insane mess of multi-colored wires and scary-looking electrodes; it doesn't even have any wires at all — it connects wirelessly to your computer via a USB dongle. All things considered, it looks more like an expensive pair of headphones than a device that can read your mind.



    The EPOC has three different ways of sensing your mental intent. The simplest is that it can monitor facial expressions. This means you can smile and your computer will automatically insert a smiley-face into your chat, for example. It has a gyroscope in the headpiece as well, so you can move your cursor by moving your head. Lastly, it can sense brainwaves — but to do that, you have to map the device to your particular mind by using crazy biofeedback software, concentrating on the idea of "left," "right," or "forward" (etc.) while looking at an orange 3D cube on your screen as the EPOC analyzes your brain activity for each command. After this mapping is finished, EPOC users can ostensibly play Pong or Tetris telepathically.
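
    To make the calibrate-then-classify idea concrete, here is a toy sketch — invented feature vectors and a nearest-centroid rule, not Emotiv's actual software — of how a handful of "training" readings per command could later be used to label new readings:

        import math

        # Toy calibration: average a few feature vectors recorded while the user
        # concentrates on each command, producing one centroid per command.
        def calibrate(samples_by_command):
            centroids = {}
            for command, samples in samples_by_command.items():
                dims = len(samples[0])
                centroids[command] = [sum(s[i] for s in samples) / len(samples) for i in range(dims)]
            return centroids

        # Classification: a new reading gets the label of the nearest centroid.
        def classify(centroids, reading):
            def distance(a, b):
                return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
            return min(centroids, key=lambda command: distance(centroids[command], reading))

        # Made-up two-dimensional "readings" standing in for real headset features.
        training = {
            "left":    [[0.9, 0.1], [0.8, 0.2]],
            "right":   [[0.1, 0.9], [0.2, 0.8]],
            "forward": [[0.5, 0.5], [0.6, 0.4]],
        }
        model = calibrate(training)
        print(classify(model, [0.85, 0.15]))  # -> left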

    As it turns out, however, the EPOC doesn't upset the beautiful-ugly, functional-useless dialectic much: whatever it gains on most first-generation technologies in beauty, it gives back in function. It's getting tepid reviews from realists, who argue that the EPOC is not the "mass market device for people looking for a turnkey telekinesis solution" that everyone hoped it might be. Rather, "it's an expensive toy for people to experiment with" and — despite being totally cool — is basically useless.

    Regardless, the EPOC is catnip for nerds. If there had been one at Cyborg Camp, it would certainly have been the star of the show — regardless of whether or not it was a nice-looking object. After all, sitting in the conference room at Cyborg Camp, my most prevalent thought wasn't about the disproportionate presence of dorky video glasses and technical sandals, but one of slightly apprehensive wonder: "shit, these people are the future of everything." In my mind, the clout of the future is not wealth, but the ability to navigate an increasingly digital world (as Douglas Rushkoff says, "program or be programmed").



    We'll probably all be wearing computers in five years. And just as Luxottica is making personal 3D glasses for rich people and even Karl Lagerfeld compares Facebook to Brancusi, there will be high-end neuroheadsets being made and modeled at Paris Fashion Week by athletic models in circuit board stilettos.

    Talk about first world problems, right?
    Thu, Nov 18, 2010


    In case you didn't know, reality is science fiction.

    If you doubt me, read the news. Read, for example, this recent article in the New York Times about Carnegie Mellon's "Read the Web" program, in which a computer system called NELL (Never Ending Language Learner) is systematically reading the internet and analyzing sentences for semantic categories and facts, essentially teaching itself idiomatic English as well as educating itself in human affairs. Paging Vernor Vinge, right?

    NELL reads the Web 24 hours a day, seven days a week, learning language like a human would — cumulatively, over a long period of time. It parses text on the Internet for ontological categories, like "plants," "music" and "sports teams," then uses contextual clues to sort out which things belong in which categories, like "Nirvana is a grunge band" and "Peyton Manning plays for the Indianapolis Colts."
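
    To give a flavor of what pattern-driven reading looks like — a toy sketch only, not NELL's actual architecture, which couples many learners and weighs each belief by confidence; the patterns and predicate names here are invented for illustration — a couple of hand-written templates are enough to turn sentences into candidate facts:

        import re

        # Hand-written templates: each maps a lexical pattern to an invented predicate.
        # Matches become candidate beliefs of the form (subject, predicate, object).
        PATTERNS = [
            (re.compile(r"(?P<arg1>[A-Z][\w ]*?) is an? (?:\w+ )?band"), "isA", "musicArtist"),
            (re.compile(r"(?P<arg1>[A-Z][\w ]*?) plays for the (?P<arg2>[A-Z][\w ]+)"), "playsForTeam", None),
        ]

        def read(sentences):
            beliefs = []
            for sentence in sentences:
                for pattern, predicate, fixed_object in PATTERNS:
                    match = pattern.search(sentence)
                    if match:
                        obj = fixed_object if fixed_object else match.group("arg2")
                        beliefs.append((match.group("arg1"), predicate, obj))
            return beliefs

        corpus = [
            "Nirvana is a grunge band",
            "Peyton Manning plays for the Indianapolis Colts",
        ]
        for belief in read(corpus):
            print(belief)
        # ('Nirvana', 'isA', 'musicArtist')
        # ('Peyton Manning', 'playsForTeam', 'Indianapolis Colts')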



    In its self-taught exploration of Internet English, NELL is 87 percent correct. And the more it learns, the more accurate it will become. According to a paper called "Toward an Architecture for Never-Ending Language Learning," NELL has two tasks: to read, and to learn from that reading — to "learn to read better each day than the day before...go[ing] back to yesterday's text sources and extract[ing] more information more accurately."

    Like the premise of a dystopian sci-fi story, Read the Web is wonderful-terrifying. Wonderful, because we've designed a computer to teach itself, because it's a case study in life-long learning, and because the results will certainly be useful. Terrifying because it's difficult to look at a massive computer coming up with accurate pronouncements like "bliss is an emotion" without feeling a shudder of horrible gravitas. That said, I am shuttering my fearmongering sci-fi mind and embracing NELL's mission, just one in a fascinating new field of research aimed at helping computers understand human language, using the Web as a key linguistic resource. The idea of a "Semantic Web," an Internet as comprehensible to computers as it is to humans, has been in the computer science and AI discourse for years, with good old Sir Tim Berners-Lee carrying the torch. In a 2001 article for Scientific American, Berners-Lee wrote that "this structure will open up the knowledge and workings of humankind to meaningful analysis by software agents, providing a new class of tools by which we can live, work and learn together."

    Upon discovering this project, I had tons of questions about NELL: could it read other languages? Who gets the data in the end? Does it have parental controls on? So I did what I always do in such cases, which is immediately write to the people in charge in the hopes of gleaning some information from them. Accordingly, here is a brief interview with the very gracious Professor Tom Mitchell, chair of the Machine Learning Department of the School of Computer Science at Carnegie Mellon University, and Burr Settles, a Carnegie Mellon postdoctoral fellow working on the project.


    UNIVERSE Q&A WITH TOM MITCHELL AND BURR SETTLES OF CARNEGIE MELLON UNIVERSITY

    Universe: At the moment, NELL is learning language and semantic categories in English, which would mean that its learning is limited to the output of the English-speaking world. Are there any plans to expand the program to different languages?

    Professor Tom Mitchell: Interestingly, NELL's learning methods can apply equally well to other western languages as they do to English (as long as the language uses the same character set as English). We started with English because, well, we speak English. And also because that is the most-used language on the web, and we wanted NELL to have access to lots of text.

    Burr Settles: In principle, the technology driving NELL is language-independent, so there is reason to believe that, given a corpus of Spanish or Chinese, it could learn equally as well. In fact, I suspect there are some languages it would perform even better with; for example syntax and orthography are generally more consistent in Spanish than in English, so the Spanish NELL might learn much more quickly and accurately.

    Universe: Could an advanced NELL-like computer teach itself another language?

    Burr Settles: Quite possibly. For example, imagine that NELL learns a lot about The French Revolution from English-language documents, and also knows (because we say so, or maybe because it read so!) that Wikipedia pages have corresponding translations in other languages. If NELL assumes the facts available on the English- and French-language Wikipedia pages for The French Revolution are roughly equivalent, then it could use its Knowledge to start to infer patterns, rules, word morphologies, etc. in French, and then start reading other French-language documents.

    This isn't unlike the way humans can easily pick up certain words (concrete nouns, prepositions) when traveling in foreign-language countries. I know, because I just got back from two weeks in Spain, which is why I'm absent from that fabulous New York Times photo!

    Universe: When will NELL stop running?

    Professor Tom Mitchell: We have absolutely no intention of stopping it from running. NELL stands for "Never Ending Language Learner." We mean it, though of course we need to make research progress if we want to give it the ability to continue learning in useful ways.

    Universe: Is NELL reading the web indiscriminately, or have you set it loose on particular corners of the Internet that are more conducive to language-learning (say, Wikipedia)?

    Professor Tom Mitchell: NELL primarily uses a collection of 500,000,000 web pages that represent the most broadly popular, highly referenced pages on the web. But it also uses Google's search engine to search for additional pages when it is looking for targeted information (e.g., for pages that will teach it more about sports teams). So it's not in some corner of the web, but all over it.

    Burr Settles: Currently, NELL reads indiscriminately. Of course, it tends to learn about proteins and cell lines mostly from biomedical documents, celebrities from news sites and gossip forums, and so on. In future versions of NELL, we hope it can decide its own learning agenda, e.g., "I've not read much about musical acts from the 1940s... maybe I'll focus on those kinds of documents today!" Or, alternatively, we could say we need it to focus on a particular document. Previous successes in "machine reading" research have in fact relied on a narrow scope of knowledge (e.g., only articles about sports, or terrorism, or biomedical research) in order to learn anything. The fact that NELL learns to read reasonably well across all of these domains is actually a big step forward.

    It has been interesting to hear the public's response to NELL. There are many jokes about what will happen when it comes across 4chan or LOLcats, for example. But the reality is, those texts are already available to NELL, and it is largely ignoring them because they are so ill-formed and inconsistent.

    Universe: Say NELL learns the English language well enough to be a Shakespearean scholar. What happens to the data then — do Google and Yahoo and DARPA get access to it?

    Professor Tom Mitchell: Yes, and so will everybody. Already we have put NELL's growing knowledge base up on the web. You can browse it, and also download the whole thing if you like. Furthermore, I am committed to sticking to this policy of making NELL's extracted knowledge base available for free to anybody who wants to use it for any commercial or non-commercial purpose, for the life of this research project.

    Universe: Lastly, the name NELL is a joke about the Jodie Foster movie, right?

    Professor Tom Mitchell: Well, no. I didn't really know about that movie...but I just took a look at NELL's knowledge base, and it appears to know about it. Take a look. There, the light grey items are low confidence hypotheses that NELL is considering but not yet committing to. The dark black items are higher confidence beliefs. So it is considering that NELL might be a movie, a disease, and/or a writer, but it's pretty confident that Jodie Foster starred in the movie...


    What is alien? Definition number one: unfamiliar. By that description alone, a good 99% of life on this planet is alien. Breathing water, living nestled in thermal vents, stalking prey on the veldt, growing out of the Earth and eating sunlight, without eyes, without legs, with extra legs, color-blind, carapaced, marsupial, with exoskeletons, with jelly for brains, microbial, in a test tube, growing from spores. Not to mention the extremophiles, those nutty organisms that thrive in hellish environments like boiling acid, liquid asphalt, radioactive waste, and under extreme pressure.

    I've been thinking a lot about SETI recently, but the thing is that alien life exists among us, to the extent that this planet is a rich steaming pot of crawling flagella, fur, and ooze. It's also possible, according to the very interesting Professor Paul Davies and a host of other scientists, that Earth plays host to an even more alien form of life. No, not visitors from another world — Davies isn't one of those "very interesting professors."

    Rather, Davies, physicist and famous SETI nerd, argues that it's entirely possible for life to have evolved more than once on Earth, and that the descendants of this so-called "second genesis" could have survived until today in a shadow biosphere within our own. Or, if not, then at least traces of their ancient existence could still be found in the fossil record. After all, why couldn't life have arisen many times? It's certainly had enough time and opportunities — in the quiet periods between asteroid impacts in Earth's early history, hemmed into an isolated pocket of geography, underground, or even on Mars, before being transported to Earth on some loose rock or another over the eons. The point is that there's no reason to believe that life spontaneously occurred only once.


    Extremophilia in action.

    If started from scratch independently of normal life, this theoretical weird life would — most likely — use a different set of amino acids and have a different genetic code. Even more radically, weird life could be made of fundamentally different stuff, like silicon, or arsenic. Like the extremophiles, it could live in inhospitable environments we haven't even thought to look in.

    We haven't found this life yet because we haven't thought to look for it, and because all our life-detecting equipment is designed to snoop out the familiar chemical composition of "normal" life. If a microbe of weird life were to turn up in a biochemist's petri dish, it would most likely be overlooked — or tossed out. Besides, despite the fact that microbes easily constitute the majority of terrestrial life, the microbial world is still largely unexplored. Less than one percent of existing microbes have been cultured and described, and, because their morphology is limited, it can be hard to deduce much from even the ones we know. If weird life exists, it's probably among these unmapped throngs of microbes. In his new book, The Eerie Silence: Renewing Our Search for Alien Intelligence, Davies observes that "if you set out to study life as we know it, then what you will find will inevitably be life as we know it."

    Davies asks, "does all life on Earth belong to this single [evolutionary] tree, or might there in fact be more than one tree? Might there even be a forest?" If, indeed, an entirely separate tree of life coexists with our own, we'd be forced to conclude that our genesis wasn't a unique incident. Perhaps, even, there is a cosmic imperative for life to develop, and thus the universe may be seething with it.

    [Editorial aside: It's interesting to me how all the theories about life in the universe boil down to the two potential extremes of the Drake equation: "none" or "teeming." Could we even bear to live in a universe with, say, only one other instance of life, somewhere far away and unreachable?]
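
    For reference, the Drake equation is just a chain of multiplied factors, N = R* · fp · ne · fl · fi · fc · L, and a quick back-of-the-envelope with made-up pessimistic and optimistic guesses shows how readily the product swings between those two extremes:

        # The Drake equation, N = R* * fp * ne * fl * fi * fc * L. The parameter
        # values below are illustrative guesses, not data.
        def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
            return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

        pessimistic = drake(1, 0.2, 0.1, 0.001, 0.001, 0.01, 100)
        optimistic = drake(7, 0.9, 2, 0.5, 0.5, 0.5, 10_000)

        print(f"pessimistic guesses: {pessimistic:.8f} civilizations")  # effectively "none"
        print(f"optimistic guesses:  {optimistic:,.0f} civilizations")  # "teeming"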

    OK. The importance of "are we alone?" as a question is that the answer, regardless of what it is, will have a profound effect on our species. As SETI scientist Jill Tarter put it so elegantly in her prize-winning 2009 TED Talk, the discovery of intelligent life elsewhere beyond our earth wouldn't just change everything — it would change everything all at once. As a species we have a sense of privilege, Tarter says, that the universe doesn't particularly share. We are defined by our "loneliness and solipsism." To find that we are not, in fact, alone may motivate us to comport ourselves better, just as an audience gives an artist meaning, or a jury lends truth its gravitas.

    Finding a communicating alien civilization in the void of space, finding living bacteria on Mars, or finding evidence of a second genesis on Earth: all these would simply be gradations of the same shocking discovery, that our particular variety of living is not the only solution, nor the unilateral peak of some evolutionary pyramid. Such a revelation would not only expose our human chauvinism, but also lay bare the fact that life is an insane wonder, an unstoppable force of being in a universe of indifference and chaos.

    Supplementary Reading:

    The Eerie Silence: Renewing Our Search for Alien Intelligence by Paul Davies.

    We Are Not Alone: Why We Have Already Found Extraterrestrial Life by Dirk Schulze-Makuch.

    Extremophiles: Microbial Life in Extreme Environments, edited by Koki Horikoshi and William D. Grant.

    Signatures of a Shadow Biosphere (PDF) by Davies, Benner, et al., from Astrobiology.

    Carol Cleland on the Shadow Biosphere, from Astrobiology Magazine.
      
    Let's talk about the God Particle.



    It strikes me that people refer to the Higgs boson as the "God particle" in the same way some call the iPhone the "Jesus phone": with an almost pointed disregard for what such a prefix actually means. Considering the intensity of the culture wars, the popularity of the moniker is baffling. Is this about contextualizing the abstraction (and grandeur) of particle physics in a way "regular" people can understand? Does this represent a humanist concession to the religious? If so, can religious culture really be swayed by such a transparent ploy — y'know, it gives things mass, just like on Sundays?

    I know the use of "God particle" is largely a media problem, born of the Leon Lederman book of the same name, and that most scientists find that it maddeningly overstates the particle's qualities and importance. Lederman himself came out of a long tradition of scientists using "God" as colorful shorthand for the mysterious workings of Nature, rather than as a literal reference. Albert Einstein, who famously overused the word, was not religious as much as a Spinozan humanist, explaining that "we followers of Spinoza see our God in the wonderful order and lawfulness of all that exists." This usage was not uncommon, but in a post-Intelligent Design scientific discourse, the habit has waned. And, while we scramble to find new, immediately relatable metaphors for "that grandiose, awe-inspiring quality of the Universe which eludes us," God does the job in a pinch.

    Yet punctuating the language about an elusive subatomic particle with the G-word seems like just the kind of thing that would infuriate anti-science religious nuts, or at least strike them as beside the point. I can't help but think of Yuri Gagarin, in 1961, returning from the first manned space mission and saying, "I looked and looked but I didn't see God." Did the certainly unsurprising revelation that the Creator wasn't lounging around in space like the man in the moon shatter global theology? Of course not — "I looked and didn't see God" is irrelevant if you believe (like the Catholic Church) that God exists in a realm outside of physics, or even the physical world. The discovery of the Higgs boson should reveal what the universe is physically made of, at its deepest level, but it shouldn't make a difference to those who see the making itself as an act of God. Which raises the question: do we say "God" particle because its existence would debunk religion, or because it would be an ultimate example of the manifold complexity of God's creation (ostensibly)? More importantly, of these two radically different readings, which is the most common?

    When the New York Times uses the phrase in headlines without discussion, which version of the phrase does its readership infer? It's impossible to know, and this rattles me. Language has a hypnotic, iterative power: with every use, a word becomes more ingrained in its new context, increasingly impossible to view objectively. "God particle" has become a colloquialism for "Higgs boson," and it does neither physics nor the idea of God any service. Rather, it sells them both short by implying that the questions we deal with in physics are so easily reducible, and that the Higgs might have any effect on how the religious see the world.

    "God particle" is a convenient phrase. It haphazardly gets at the importance of the whole enterprise — and it definitely grabs people's attention. Still, its meaning has become unclear, and no real information can be gleaned from it.

    At best, it hints at weightiness; at worst, it simplifies the Higgs to the point of obfuscation.
      
    Out beyond the farthest stars,
    Where the cold of space spreads thin,
    We endeavor to look out,
    While they are looking in.
    – adapted from Isaac Asimov.


    THE SCIENCE POEM MANIFESTO
    Written on the occasion of the Science Poems book and exhibition, published by the design collective OK DO in Helsinki, Finland.



    Science fiction is art.

    Science fiction is science poetics.

    Science fiction is more honest about our hell and heaven, the compassion and the monstrous failings of our species, than any other form of art. Science fiction is real counterculture. Science fiction has legs and arms, fire and brimstone, void and aether, bellows and pickaxe. It creates the world and then it walks among it, knowing it, loving it before it plunders the truth from difference.

    We, the science poets, have the stars – inherited from your apathy – and the future; you, the rest, have our common past, and this slovenly Earth. Science fiction trammels the past, sows its bones into the soil. Science fiction looks into the abyss and sees life, builds life out of death.

    Science fiction is not a canon of equivalence (Dick our Pynchon, Delany our Derrida, Butler, Tiptree, and Russ our de Beauvoir, Cixous, and Dworkin), but a canon of its own. The science poets have always known this. In our secret utopia where the kings and queens are those with stars in their teeth and dark chasms on their shoulders, the science poets honor one another. From their gates, the science poets will never turn you away, because cold pangs of fearful yearning for the alien live within us all.

    No man is an island,
    And no planet is in turn;
    And that in six billion years,
    We’ll stand and watch it burn.


    Science fiction doesn’t tell the future, it builds it. Science fiction is a living tradition that informs the very world it critiques, inventing new myths, words, and realities just as we catch up to its old ones. Science fiction does not obey; it does not consume. It presents the path, so we can walk it without fear.

    Science fiction is a tender, holographic tunnel reaching all the way back to us from the distant future, from beyond the stars, broadcasting comfort despite difference, hope among despair, and teaching us the importance of our moment in the face of the impassive monument of time.



    Science poems are not abstract, they are not separate from the world: the future is a poem, for it doesn’t yet exist. And those things which don’t yet exist are like the breath on the tongue, a gesture yet to be made – they are sheer potentiality. They have the kinetics of real art.

    As Stanislaw Lem wrote, science fiction “comes from a whorehouse but…wants to break into the palace where the most sublime thoughts of human history are stored.” Within the shadowy, grimacing frame of its own poetics, it does. Because the sublime thoughts of human history have always been projected outwards, to the vastness outside of our minds. Science fiction is a movement outwards, not inwards: “up, up, and away”.

    Science fiction knows, like the science poets do, that the sky begins at our feet.

    The science poets look at our sky and they see three moons, or a ringed planet in sultry sunset; they hear a voice whispering across the void, hear the malice in its tone, but still find how to forgive it. Science poets see a tentacle and know its embrace. Science fiction is the grief of tomorrow and the horror of today. Science poetry makes no illusions.

    Some days the poets burn out,
    They drink deep from the cup,
    They look all around them,
    And they think, “Beam me up!”
    Fri, Sep 10, 2010
    Categories: design, Poetics, Science Fiction