Tuesday, June 24, 2008

Ye Gods, We Hardly Knew Ye

The forum discussion following an article on AlterNet a few days ago got me to thinking about science and technology and how we humans show a marked tendency to abuse them. In particular, our overreliance on technology to solve all our problems puts us in the same moral category as the drug-addicted; the mindset (if a little bit is good, more is better) is exactly the same. When we don’t get the desired results, we tend to kick it up a notch, creating endless cycles of bad choices to wipe out the ill effects of previous bad choices. In the end it’s a zero-sum game.

To avoid any confusion about the differences between science and technology, suffice it to say that science is more about the process of discovery, of proving or disproving any given theory. Technology, on the other hand, is more about practical applications of science, of putting scientific discoveries—through innovation and invention—into practice.

It’s a popular misconception that science and technology will save us from the long-term unintended consequences of past scientific and technological pursuits, despite contraindications that we humans are about to be buried under the rubble of science and technology gone awry. The trouble, of course, is that the solution to one problem begets many more problems, any of which may or may not be related to the original problem. One can argue, not too logically, that we need more technology to deal with the problems we already have, but new technology would only bring new problems—as every technological advance throughout recorded history has amply demonstrated—so that argument quickly falls apart.

By now you’re probably thinking I’m a Luddite. Maybe I am. But you should know that I’m also an avid sci-fi fan, and that I find science and technology deeply interesting and endlessly fascinating; nothing does more to kindle the fires of imagination and unleash the creative mind than to immerse oneself in the myriad possibilities of things yet to be discovered.

Where sci-tech and I part company is the way in which the discoveries attributable to sci-tech are, too often, ruthlessly exploited by commercial interests for no other purpose than to create wealth. Gone are the days when people did things because things needed doing; now, few people do anything unless they stand to make a buck. Not an absolute, but close enough.

Clever though we are, we seem incapable of learning that the ability to do something is not, by itself, sufficient reason to go ahead and do it. In practice, doing anything without a clear understanding of the short- and long-term consequences is the moral equivalent of leaping before you look.

Nowhere is this more evident than in the fields of genetic engineering, genetically modified organisms, nanotechnology and artificial intelligence, where indiscriminate applications of these particular kinds of technology pose very real dangers of running out of control. Thanks to genetically modified crops that threaten agriculture, and a combination of overused antibiotics, widespread use of artificial fertilizers, insecticides, pesticides, and herbicides, and the irradiation of meat, fruit and vegetables, most of the world’s food supply is now at risk, as is most of the world’s potable water.

No one knows the full extent of the dangers that lie in wait for unwary technophiles and all the rest of humanity as the ruthless exploiters of science and technology make new inroads into unexplored territory, boldly going where no man has gone before, with nary a backward glance or critical thought about possible—or even probable—outcomes.

Has the cleverness of our inventions put us all in peril? Have we humans become the engineers of our own destruction? Or are we engineering something completely different, perhaps the next stage of human evolution? Sensing imminent extinction, might not scientists seek to create new life forms capable of preserving human intelligence and knowledge, acquired over millennia, under conditions that no human could survive?

Humans lacking intelligence and knowledge are little more than naked apes and therefore are—on an evolutionary scale—no more worthy of survival than, say, dodos or pterodactyls or termites. In fact, when it comes to survival, humans lacking intelligence and knowledge are vastly inferior to apes. Thus, intelligence and knowledge and their preservation and perpetuation are the important things; humans, not so much.

Picture a distant future in which nanotech life forms directed by artificial intelligence gather periodically in enclaves to pay tribute and swear obeisance to their human creators, that mysterious race of super-beings whose sudden disappearance from Gaia gave rise to new mythologies and became the stuff of a new wave of religious dogma.

And might not future children of the gods utter, in moments of extreme religious fervor and devotion, this simple prayer for salvation: Ye Gods, we hardly knew ye, but please, we beg ye, save us from ourselves?

Then again, maybe artificial intelligence is smarter than that.