The Conscious Interview with Madeline Ashby

Today, I consciously interview author/futurist/awesome panel companion Madeline Ashby. Please note that the bulk of my questions refer to her previous novels vN and iD, and not her upcoming work Company Town, for the simple fact that it won't be released for a few months yet. Rest assured, I'm putting in an order now.

The official bio:

Madeline Ashby is a science fiction writer, strategic foresight consultant, anime fan, and immigrant. She is represented by Anne McDermid & Associates and IAM Sports & Entertainment. She has been a guest on TVO's The Agenda multiple times. Her novels are published by Angry Robot Books. Her fiction has appeared in Nature, FLURB, Tesseracts, Imaginarium, and Escape Pod. Her essays and criticism have appeared at BoingBoing, io9, WorldChanging, Creators Project, Arcfinity, and Tor.com.

Describe your latest book in a tweet.

Company Town: “The Terminator” meets “The Girl With the Dragon Tattoo.”

Now, describe it as a movie pitch.

In a floating city, on a dying ocean, one all-too-human woman hunts a post-human serial killer bent on changing the course of history.

Both vN and iD (the first two novels in the ongoing Machine Dynasty series) are told from the robot's point of view, an idea that, upon reflection, I'm rather astonished hasn't been used more often. What made you want to take the side of the intelligent toaster?

I was really tired of stories about how fucking special humans are. You know, how great Bella Swan is supposed to smell, and stuff like that. She smells great because she’s meat. I wanted to tell a story in which the humans were meat.

You reference (both directly and indirectly) a large number of genre predecessors in your novels. What were the works that inspired you to begin your series?

Honestly? I had the idea while watching Ghost in the Shell: Stand Alone Complex. It’s an anime from the early part of this century. At the time, I was also reading a great deal about 3D printers and self-replicating machines, and working on a degree in cyborg theory. So it all came together in my head.

There are many parallels in your novels to slavery, the robots being bound to man through their programming. Was this a conscious decision on your part, or did the subtext only come through later?

No, it was pretty intentional. The word “robot” comes from the word for serf or slave. I knew that going in, and it’s acknowledged directly in the first book. By a robot. As she’s torturing someone.

I’m fascinated by the concept of knowing that you’ve been programmed to love mankind, as your robot Javier definitely is.

I meant it as a metaphor for those moments in relationships where you know you’re falling for the wrong person, or falling into a self-destructive pattern. We all have people we’re vulnerable to who take advantage of us, romantically. For Javier, it’s a whole species.

You’ve obviously thought out the idea of artificial intelligence. In your opinion, is that a valid term? At what point does the artificial become natural?

Most of the artificial intelligence that tries to emulate human intelligence does so through its architecture. The idea is that a bunch of complex systems come together to create complex processes and, ultimately, complex thought. So while the structure is artificial, it emulates an organic structure. That's the whole idea behind things like neural networks. As for the moment at which something artificial becomes natural, I think we can say that the emergence of autonomous cognition within an artificial system is pretty natural. It's a consequence of capability, organization, architecture, and data. For example, when Dario Floreano was trying to simulate evolution in robots at the Swiss Federal Institute of Technology, he found that they would lie to each other about patches of ground they read as food sources. After the 50th generation, some behaved altruistically and others continued to lie. It was a wholly organic process within an artificially-created system.

Beyond your work as an author, you make a living as a futurist, which may make you uniquely suited to answer this: where do you stand on the whole concept of the singularity, that fabled moment when computers gain consciousness? If the singularity comes to pass, is mankind screwed?

Technically, the singularity isn't the moment that computers gain consciousness. [ED: my bad!] It's the moment at which they become so intelligent that they outstrip human intelligence and begin creating intelligences of their own. A lot of other science fiction writers (notably Peter Watts and Charlie Stross) would tell you that consciousness is actually a handicap. There's no guarantee that a post-Singularity super-intelligence will be at all conscious or even sentient. Try to imagine having a conversation with, say, a cancerous tumour that spans the entire globe. Think about how we treat cancer, now. That's what I imagine it being like.

Do you think we’ll ever be able to download our consciousness into a computer? And if so, would the person really be the computer, or would it be a facsimile, the original now dead?

Again, I’m not sure consciousness is really the trick. That’s sort of like asking whether a person who has had a stroke is the same person that they were before having it. Your conscious awareness is really only a feature of your brain—it’s an operating system sitting atop a lot of wetware and helping it to accomplish goals and stay alive. More importantly, the quest for originality is inherently bound up in very old-fashioned, even patriarchal ideas of authority and ownership. It’s something Barthes talks about in “The Death of the Author,” and I think a post-modern understanding of consciousness is ultimately healthier and more forgiving of human realities. Basically, there’s this idea that our minds are special snowflakes, and they’re really not. From a neurobiology perspective, our brains are incredibly plastic in nature, and they change all the time. “The Brain That Changes Itself” touches on this idea. So, whether or not the consciousness that emerges within a mechanical structure is actually yours doesn’t matter—your own brain is different from the way it was when you were an infant, or a child, or even in your twenties. And it will continue to change as you age. There is no “you.” “You” are a work in progress. The best a computer could save is a draft.

Coming up next: The Subconscious Interview!