Image may be subject to copyright. Reproduced here under fair use doctrine.
The Matrix is one of the more influential films of the past 20 years. So much of it has become embedded in our culture. Everyone knows what you're talking about if you say "I wish I'd taken the blue pill." Morpheus's "What if I told you..." line has been adapted to hundreds of memes. And the reflections in those pince-nez shades – pure cinematic genius.
All in all, it was 5+ hours well spent. 20 years on, I'm still awaiting Neo's return. Yet in spite of the durability of the films, I found them dated. The late 1990s were a time of great optimism, the ascendancy of the digital age. Home computers had become commonplace, and everyone knew about the Internet. Computers amazed people, and there seemed to be no limit to what they would be able to do in the future.
Moore's law was well-known in 1999, and 20 years in the future, it has held up. The computers of today are hundreds of times faster than those of the late 90s. They have become far more accessible, far more affordable, far more ubiquitous. Link speeds are also hundreds of times faster, fast enough for me to stream the Matrix films in 1080p HD on a 10" tablet computer. I keep up with such things, and I can say truthfully that we are still nowhere near the limits of computing capacity that can be brought out of silicon. Computers, however, can still do no more than what they could do in the 1990s. They just do it a lot faster.
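The arithmetic behind that "hundreds of times faster" figure is easy to check. A back-of-the-envelope sketch, assuming the commonly quoted two-year doubling period for Moore's law:

```python
# Rough Moore's-law estimate: capacity doubles about every two years.
# The two-year doubling period is the commonly quoted figure; actual
# hardware gains vary by metric (transistor count, clock, throughput).
years = 2019 - 1999
doubling_period = 2  # years per doubling
growth = 2 ** (years / doubling_period)
print(growth)  # 2**10 = 1024.0, i.e. roughly a thousandfold
```

Ten doublings in twenty years works out to about a thousandfold, which is in the same ballpark as the "hundreds of times" I see in practice.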
In the films, The Matrix is a construct of sentient programs, used to deceive the world's population into thinking they are alive in a civilization modeled on the world of 1999. The programs have conquered and enslaved humanity in order to provide the electric power the machines need to survive. I won't go into the absurdity of this idea as it's essential to what is otherwise a great story. To people actually living and breathing in 1999, it might have seemed plausible. But it isn't power generation that trips up the plot. It's the idea that somehow computer programs could become sentient.
Most people had, however, already accepted this idea. It is a familiar theme in science fiction dating back to the 1950s, perhaps farther. 1968's 2001: A Space Odyssey features a malevolent machine intelligence named H.A.L. who decides the secret he carries is too important for meat intelligences to be trusted with. Many other films, such as 1982's Tron, feature sentient programs as both sympathetic and malicious characters. In the Terminator films, Arnold Schwarzenegger plays both. And who can forget C-3PO and R2-D2, among the most memorable machine intelligences in cinematic history. Indeed, the idea of machine intelligence has become so essential to good storytelling that folks like me dare not even challenge it. Fortunately, we do not have to.
Last night, I had a rather frustrating "conversation" with Alexa. The Echo Dot I keep on my nightstand had a flashing yellow ring that I wanted to turn off so I could go to sleep. I asked Alexa to turn off the yellow light and she told me she could not find a device called "the yellow light." I asked her "why do you have a yellow light?" This time, she told me it was because I had notifications from Amazon shopping. Earlier in the day, I had posted a question about a film scanner I was considering, and a few people had answered it.
I could not pose a command that would get Alexa to turn off the light. This was irritating, but I thought all I would need to do was go on the Amazon website and read the answers I'd gotten. Nope. She still had a yellow light. After some time, I figured out that the only way I could get her to shut it off was to let her read the answers I'd already read, one by one. Now, she does a remarkable job with text-to-speech synthesis. The vocalizations were intelligible, and even her inflections were good. She really sounded like a human woman reading the answers to me. That bit of amusement mitigated my frustration somewhat. But it was a lot of trouble to go through just to get her to turn off that annoying yellow light.
Alexa was, of course, just executing a program. It goes a little like this:
The Amazon cloud says Bob has notifications. To inform him he has notifications, I'll make my ring flash yellow.
Bob says "notifications." I'll retrieve them. I tell him he has four notifications from Amazon Shopping.
I'll read the first one.
Bob says "notifications." I tell him he has three notifications from Amazon Shopping.
I'll read the first one.
The loop continues until all of the notifications have been read. Only then does the yellow ring go out. In retrospect, I should probably just have unplugged her and dealt with it in the morning, but it presented me with a computing problem to solve, and I can't resist that. Now I know how to turn off the yellow ring. I also know how to disable the notifications so I don't get more yellow rings – but this is something Alexa cannot do by herself. You have to do it with the Alexa app.
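The loop Alexa was executing can be sketched in a few lines. The names and structure here are invented for illustration only; they have nothing to do with Amazon's actual code:

```python
# Hypothetical sketch of the notification loop described above.
# The ring stays yellow as long as unread notifications remain.
def drain_notifications(notifications):
    """Read notifications one at a time until none remain."""
    spoken = []
    while notifications:
        count = len(notifications)
        spoken.append(f"You have {count} notifications from Amazon Shopping.")
        spoken.append(notifications.pop(0))  # read the first one
    return spoken

queue = ["answer 1", "answer 2", "answer 3", "answer 4"]
transcript = drain_notifications(queue)
ring_is_yellow = bool(queue)  # the ring goes out only once the queue is empty
```

Note that there is no branch for "Bob says turn off the yellow light" – the only exit from the loop is an empty queue, which is exactly why no command I posed could shut the ring off directly.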
Now, by referring to Alexa as "she" and "her," I am indulging in anthropomorphism (but I talk to and have names for my bicycles, too). A great deal of computing went on during my exchange with Alexa, but that occurred because a team of programmers coded it. Alexa was bound by the extent of her programming. It is thus with all computers. Without the work done by programmers, they won't even turn on. In my 37 years of working with computers, not counting the 5 that have passed since I retired, I have yet to encounter a system that is not bound by the limits inherent in its programming. Those limits are ultimately defined by the laws of discrete mathematics, and those are not going to change even unto the entropic death of the universe.
The Star Trek franchise mostly got this right. The Enterprise computer could operate the machinery of the ship, and could answer questions posed by the crew, but it could do little else. They went a bit off the rails with V'Ger (in the first Star Trek movie) and especially Data (Star Trek: The Next Generation), but the Enterprise mainframe remained comfortably predictable. To the extent that Alexa, Siri, and other voice-interactive systems become more "intelligent," that is the direction they will take.
Now, I like Alexa. I have five Echo devices located throughout my house. She wakes me up to my favorite Internet radio station, she turns lights and appliances on and off, and plays an almost infinite selection of music on demand. As a person who has trouble getting around, this capability is a godsend. I do not, however, fear that she is going to hijack my brain and insert my body into a power plant any time ever.
This is supposed to be a blog about my writing, not technology, so it may seem that I digress a bit. I am, rather, just being long-winded. I have mentioned in prior blog entries that the authors who have most influenced me include the greats of science fiction's golden age. This leads to the question of why I am not writing science fiction. It would seem the most natural thing. My Spike Bike stories take place in a dystopian future, but this was just a backdrop I needed to give the stories a modicum of credibility. I was not writing about a future society, I was writing about the one I lived in.
The problem is that I am unable to sufficiently suspend disbelief when I write. All fiction requires suspension of disbelief, but not to the degree that science fiction demands. I bill myself as a writer these days, but I am still an engineer. As such, I solved real-world problems using real-world science. I tried to be creative with it, but I was always keenly aware of the constraints. I cannot ignore those constraints when I write, so I cannot bring a computer to life or make a ship fly through space faster than light. This does not cut me off from the genre entirely, but it does impose restrictions that I don't like.
As a kid who cut his literary teeth reading space opera, I realize I could never do it justice. A writer must be unbounded in the creation of a story, so that the story can go where it needs to go. Math and physics would bind me too tightly, so it's easier for me not to go there.