And for our next trick: Normality

When I was in elementary school, I sat in the front row of a Lance Burton magic show. It was quite entertaining, and I, along with the rest of the audience, was amazed at the tricks he performed. At one point in the show, he looked around for a kid, spotted me, and asked if I would like to join him on stage for a trick. Being the shy kid that I was, I shook my head, so another kid took my place. An opportunity missed. I can’t remember what the trick was, but at the end of it, as a bonus, Lance seemingly pulled a magic set out of the back of the kid’s shirt and gave it to him. The kid then hopped into a car that had been wheeled on stage and was driven backstage, waving at the audience the whole way. Looking back, I wish I had nodded, but enough about my past.

The point is, we enjoy being fooled, and watching magic shows is just one way of being fooled. Brain trickery in the form of images is another. In the illusion on the right, the lines are straight. That’s the way it is, but our brains perceive otherwise.

So if our brains are so easily fooled, and if there are times when we are aware of being fooled, there must also be times when we are unaware of it. History offers a couple of examples.

John von Neumann, the famed mathematician and computing pioneer, was no doubt incredibly smart; he could fairly be called a genius. However, even he had trouble seeing past himself when presented with a promising idea. He had students working under him at Princeton, manually converting programs to binary for a machine they had. One of those students, realizing there was a way of working far better suited to the human mind, went off and created an assembler. When von Neumann found out what his student had done, he asked, “Why would you want more than machine language?” Imagine if we were still coding in ones and zeros today. 1000001000000110. A single binary instruction is already hard to understand. We’re people, so we shouldn’t need to think like computers. Von Neumann couldn’t see that, probably because of his superb cognitive abilities. To him, strings like 1000001000000110 were simply what programmers of his time worked with. To them, it was normal, and anything outside of that realm of normality was inferior.

And of course, we have the Internet. It’s so integrated into our daily lives that we hardly think about it, and if we do, it goes something like, “I can’t connect to the Internet.” It may be hard to believe, but the world did once exist without an Internet. You can imagine the transition to an Internet-dependent world was difficult for some people. Newsweek wrote in 1995, “… Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure.” You can read more about it here. Understandably, buying books and newspapers online might have been hard to picture at the time because it simply wasn’t normal, and ideas that don’t seem normal are easily waved away as downright silly.

So where am I going with this? What does this have to do with computing? Well, like any other field, computing is made up of people who are susceptible to the pull of normality. Of the many areas in computing, I would like to focus on the current state of object-orientation and the misunderstandings of it that have become part of our culture. Nowadays, when the term “object-oriented” comes up, developers react in different ways. Some praise it. Some get the willies and point to functional programming instead. Some stare at it with indifference. To most developers, “object-oriented” is just a word mapped to a definition built from their own experiences, and those experiences tend to be inaccurate representations of object-orientation. To see this, we need to start by understanding a little about where object-orientation originally came from.

Alan Kay is one of the key computing pioneers in our industry, and he’s the one who coined the term “object-oriented” around 50 years ago. To him, object-orientation means only three things: messaging, encapsulation, and late-binding. The experienced developer will read these as method invocation, objects as abstractions, and polymorphism, respectively. Don’t be fooled. Messaging, encapsulation, and late-binding are actually different beasts from today’s definition and implementation of object-orientation, but I’ll talk about that in the next post.
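Before that post, here is a minimal sketch in Python of the three ideas working together. This is my own illustration, not Kay’s code, and the EmailNotifier and SmsNotifier names are invented: each object hides its own state, callers only send a notify message, and which code actually runs is decided at runtime by whoever receives the message.

```python
# A minimal, hypothetical sketch of messaging, encapsulation, and late-binding.
# Python stands in here for the Smalltalk-style systems Kay had in mind.

class EmailNotifier:
    def __init__(self, address):
        self._address = address          # encapsulation: state stays inside the object

    def notify(self, text):              # this object responds to the "notify" message
        print(f"emailing {self._address}: {text}")


class SmsNotifier:
    def __init__(self, number):
        self._number = number

    def notify(self, text):
        print(f"texting {self._number}: {text}")


def alert(recipient, text):
    # messaging: the caller only states *what* should happen ("notify").
    # late-binding: which notify method runs is decided at runtime by the receiver.
    recipient.notify(text)


alert(EmailNotifier("kay@example.com"), "build finished")
alert(SmsNotifier("555-0100"), "build finished")
```

The caller in alert never checks what kind of object it was handed; it just sends the message and trusts the receiver to respond.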

Of those three concepts, the most important is messaging. Objects are secondary. In fact, Alan Kay has said he regrets calling the paradigm “object-oriented” and that he should have called it something like “message-oriented” instead. You can find the source here on StackExchange. Objects are simply a means of sending a message, or as Sandi Metz puts it, “You don’t send messages because you have objects. You have objects because you send messages.”

Messages are what define behavior, and if objects are the ones sending and receiving messages, then we have a “software society”: a composition of behaviors and interactions between objects in which messaging is of utmost importance. By focusing on messaging, good abstractions fall out naturally, because the focus is on what should happen rather than how it is done.

Unfortunately, many developers today design software with a focus on the implementation details surrounding data instead of on behaviors and interactions expressed through messaging. This does not scale well, because data drags in concerns that shouldn’t even exist at a high level. It distracts from the behaviors that really matter, and it forces developers to understand how something is implemented when they shouldn’t need to.

Also, because implementation details surrounding data are unstable, they lead to wrong abstractions, which are difficult to recover from. Wrong abstractions simply do not flow well with one another, and they can’t be reused in different contexts. If an implementation detail changes, it must be dealt with, and once again developers must concern themselves with something that shouldn’t matter at a high level. Rather than working in harmony, the pieces pull each other apart, resulting in nasty workarounds.
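To see what that looks like in practice, here is a small, hypothetical Python sketch of the data-centric style. The Order class and its tuple layout are invented for illustration: callers reach straight into the object’s data, so every one of them is coupled to how that data happens to be stored.

```python
# A hypothetical data-centric sketch: callers depend on how Order stores its data.

class Order:
    def __init__(self):
        self.items = []                  # exposed data: a list of (price, quantity) tuples


order = Order()
order.items.append((9.99, 2))
order.items.append((4.50, 1))

# Every caller re-implements "total" and is welded to the tuple layout.
total = sum(price * qty for price, qty in order.items)
print(total)

# If Order later keeps its items as dicts, objects, or rows in a database,
# this code, and every other caller written like it, breaks.
```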

On the other hand, if the abstractions are created with a focus on messaging, nothing else has to change when implementation details change. The abstractions can also be used in different contexts, and if anything does change, it will be because the high-level goals themselves have changed. This is a solution carved along its natural joints.
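Continuing the same hypothetical sketch, here is the message-focused version: callers only send add_item and total messages, so the internal representation can change freely, and the same messages could just as well be sent to a different kind of order in another context.

```python
# The same hypothetical Order reworked around messaging.

class Order:
    def __init__(self):
        self._items = []                 # hidden: could become dicts or a database row later

    def add_item(self, price, quantity):
        self._items.append((price, quantity))

    def total(self):
        return sum(price * quantity for price, quantity in self._items)


order = Order()
order.add_item(9.99, 2)
order.add_item(4.50, 1)
print(order.total())

# Only Order knows how its items are stored. Callers that just send messages keep
# working when the implementation changes, and the same messages could be sent
# to a different kind of order (say, one backed by a remote service).
```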

So why is it that many developers struggle with object-orientation? Part of the answer is that the Internet has turned computing into a pop culture, and hardly anyone is particularly interested in the history our field has to offer. We typically think about the present and the future, and not nearly enough about the past, so people don’t really know about the big ideas that are already out there. It would be great if the people in our industry realized that we have a history worth understanding.

I think the quote, “We see things not as they are, but as we are,” is quite fitting. We’re so constantly engrossed in this pop culture of ours that we think everything is normal, that things have already been figured out, and that we simply need to improve upon them. This is how dogma is born, and we want to avoid that. We need to learn to see past ourselves and observe things as they are, so that we can see them from different perspectives. For us, I think a good starting point is to understand object-orientation as Alan Kay originally envisioned it.

In my next post, I’ll dive deeper into messaging, encapsulation, and late-binding. Stay tuned!
