A review by nakedsushi
The Lifecycle of Software Objects by Ted Chiang

3.0

I've been wanting to read something by Chiang for a while, and as soon as I saw the cover art on this book, I knew I had to read it. Unfortunately, it didn't deliver all that I wanted.

The title is both very apt and misleading. The short novel is about the life cycle of software objects, but more specifically, it's about a set of virtual pets, called digients, that develop artificial intelligence. Most of the book covers how people react to the newly introduced digients, how they get used to them, and how they eventually find them obsolete.

What bothered me about this book was that I could only read it on an intellectual level. There are human characters, such as the animator and the trainer of the digients, but they didn't feel very personal, and even though each digient seemed to have its own personality, I didn't feel emotionally attached to what happened to them. Part of it could be the way the story is told, in a third-person present tense that reads more like an impartial journal article than a novel, but I think part of it is also how much was left unsaid.

The Lifecycle of Software Objects raises a lot of philosophical questions, such as: at what point do you call computer intelligence artificial intelligence, and at what point do you treat an AI as a human entity, including giving it rights over itself? Unfortunately, those questions aren't explored in much depth, and there didn't seem to be many consequences to going one way or the other in those arguments.

The biggest thing that bothered me was how the book just assumed that AI develops through nurture rather than nature, which seems like a very big assumption to make without much justification. One could say that the whole story is an example of nurture producing AI, but using the story's own premise as its evidence is circular reasoning, and I'm not a big fan of that.

The ending of the book did bring up a question that will bug me for a long time, though. If we're not just walking, massive databases, and what makes us human is the ability to learn through experience, then if we produce an AI that can do the same, would we give it the same rights we have?