I am finished with my grading and have submitted my grades electronically. So the Fall 2009 semester is officially over for me and I am officially on break. Normal people who have real jobs sometimes look at academics who get three or four weeks off for winter break and think "what a cushy job!" And, I have to admit, the job does have its cushy aspects. But it isn't as though I have three or four weeks to lie on the couch and watch daytime TV. I have work to do.
Each semester break I go over my classes for the next semester to revise and improve them. Sometimes this is fairly easy and sometimes it is a huge amount of work. You would think that once you have delivered a class, the work is done. But that is not true. In some classes the technology changes. In other classes you look over what you did the prior semester and try to fix pieces that didn't work very well. Not only does the material change, the students continually change as well. So, you are often organizing one moving target for delivery to another moving target.
In other classes you just add new material to keep the class interesting to teach. For example, in my class Writing Stories to Explore the Ethics of Technology, I am thinking about introducing a collaborative writing assignment using a Wiki. So, I have to come up to speed on Wiki technology and work out the mechanical aspects of grading a collaborative project.
If you have been following my posts, you know that I have been carrying on recently about a new age of mass collaboration. This very large idea translates into a very concrete idea in this collaborative assignment. And this is the way things are supposed to be. You think big thoughts and then explore them in little ways. So break is not just time off. It is time to reflect, revise and hopefully improve.
Monday, December 21, 2009
Tuesday, December 15, 2009
The GNU World Order
The more I think about it, the more I believe that there is something to this idea of mass collaboration. Much of the progress that we saw in the 20th century can be traced back to two simple but very powerful ideas. Francis Bacon's empirical view of science allowed us to produce huge quantities of reliable scientific knowledge. And Adam Smith's Pin Factory allowed us to produce huge quantities of reliable technology. (To those who always say, "Oh, the world is much more complicated than that," point taken. I am simplifying things to make a point here.)
Now consider two very simple new ideas. First, people work best when exploiting their strengths. And, second, self-organizing collaborative systems allow each person to achieve their maximum productivity. Taken together, these two ideas comprise The GNU World Order.
I should explain, for the uninitiated, that GNU (pronounced with a hard G, as one syllable) is a recursive acronym for "GNU's Not Unix". It is the name given to a line of open source software developed by the Free Software Foundation. This is, to my knowledge, the first major example of mass collaboration. So, I thought it appropriate to name the new age of mass collaboration the GNU World Order, also playing off "the New World Order" of the Enlightenment.
I can honestly say that I don't know where all this is going. But my intuition says that it is definitely going somewhere and is worth keeping an eye on.
Monday, December 7, 2009
Wikinomics: A Possible Solution
I love serendipity. Last week I was rambling on about how difficult it would be to manage in a strengths-based environment. At the time, I did not see a clear path to the future. However, I just happened to be in the local public library later that day and just happened to pick up a book on CD that looked interesting. Then, whammo, the whole thing came together.
The book I picked up was Wikinomics: How Mass Collaboration Changes Everything by Don Tapscott and Anthony D. Williams. The premise of the book is that we are seeing a shift away from traditional hierarchical business models toward more collaborative business models. I cannot say much more than that without diminishing the impressive ideas put forth in this book. I highly recommend it and, for my purposes here, will leave off by saying that this book jogged my thinking in this area.
In the future you can expect to see more outsourcing of work to individuals. Instead of retaining talent in a sort of corporate studio system, corporations will increasingly acquire the talent they need, when they need it, from the global pool of talent that is available through the World Wide Web. Professionals will bid for jobs and be paid for the things that they produce rather than being paid for just showing up. Over time people will gravitate to the things they are best at, as that will maximize their productivity and their revenue. And businesses will adjust compensation so as to attract the best person for the job at the best price. This will lead to economic efficiency and maximized productivity. And people will work according to their strengths, as that will produce the maximum revenue for the least effort.
I don't want to push this idea too far, as the future tends to scare people. However, it is easy to see a day, within our lifetimes, when notions such as "going to work" or "working for a company" are simply antiquated. Will we ever get nostalgic and look fondly back on the days when we used to sit in traffic for an hour in the morning and in the evening so we could hang out all day with people we didn't like, just so we could go to pointless meetings and engage in any number of pointless repetitive rituals? Perhaps not. Maybe this is a good thing.
Monday, November 30, 2009
The Challenge of Exploiting Strengths
It seems so obvious that organizations would perform better if people could exploit their strengths rather than attempting to mitigate their weaknesses. However, implementing this as a management strategy is not without its challenges. Consider an analogous situation in the realm of politics and economics.
In democratic societies, people pursue their self-interest rather than the interests dictated by a monarch. The benefit of this is that people are far more productive doing what they want than they are doing what they are told. The downside is that you have to deal with all their opinions, and a concerted, focused effort on any one specific thing is virtually impossible. The upside, of course, is that you get higher levels of productivity, advances in knowledge, creative ideas, cultural advances, advances in technology and so on. However, if you were to somehow take a picture of today's democracies back to a medieval despot, it is unlikely that they would want any part of it. It all looks very chaotic, and the benefits, to someone who has not experienced them, are unclear. In fact, if the first vote of a new democratic society was to decide whether or not to be democratic, it is certainly not a foregone conclusion that it would become one.
New social systems require two things: visionaries and huge successes. And the same is true of new management systems. I doubt that rank-and-file organizations will all start cutting over to strengths-based management. What is more likely is that organizations will toy with strengths-based ideas. By doing things this way we will begin to see the benefits and will learn how to deal with the problems that it creates. Over time we will learn more about how to manage this way. Then, at some point, a visionary will coalesce our experiences into a cohesive treatise on managing strengths. Someone will try the new vision, hit it out of the park, and others will follow.
This is one of the most promising ideas I have seen in a long time. But, don't expect a revolution. A slow and cautious punctuated evolution is probably the best approach and that is, indeed, how I see it happening.
Monday, November 23, 2009
My Strengths: Strategic, Futuristic, Learning, Analytic, and Intellection
My top five strengths, according to the StrengthsFinder test, are: strategic, futuristic, learning, analytic, and intellection. Briefly, here is what they mean.
Strategic means that I can see patterns in things and the implications of those patterns. This is not only true, it is one of my defining characteristics. I see patterns and implications everywhere. When I saw the movie A Beautiful Mind, in which the main character sees patterns everywhere because of his schizophrenia, I actually began to worry that I might be schizophrenic because I see patterns everywhere as well.
My second strength, futuristic, is also a defining characteristic. I read the future just like most people read the morning paper. I don't always get it right just as most people do not fully understand what they read in the paper. But, I am far more likely to see the future implications of a situation than the present ones.
Third, I am a learner. I love learning about new things. I am always taking on new things just so I have something new to learn about. I cannot drink a beer without learning how beer is made, who makes it, what the different kinds are, and so on. I cannot watch a movie without knowing what other movies the director has made, what other movies the actors have been in, what other things the screenwriter has written, and what other movies may be similar or remakes.
Fourth, I am analytic. I like to see things as they are: how they work, why they work the way they do, how they relate to other things, how those other things work and why, and so on.
Finally, my fifth strength is intellection. I enjoy intellectual activity. I have always been drawn to philosophy and the pursuit of intellectual questions.
I suspect that anyone who knows me will be simply nodding as they read these. They are my defining characteristics. And they didn't come as much of a surprise to me. What did come as a surprise was that these were not things that others are necessarily good at. I thought other people were just being lazy when they did not keep up with me in these areas. I did not realize that they may have different strengths.
Monday, November 16, 2009
Now, Discover Your Strengths
Last week I carried on a bit about a motivational management book I had listened to, which suggested that the best way to manage people is to develop their strengths rather than to have them work on their weaknesses. As I listened to this book, I was stunned that something so obvious would take so long for us to figure out. Upon reflection it occurred to me that this emphasis on shoring up weaknesses is probably an anomaly. In fact, the values of conformity and measuring up are both industrial age values. And now that we are moving beyond the industrial age, we are coming to our senses about a few things as well. I don't mean to bash the industrial age. A lot of good came out of it. But, as with any good thing, it is easy to get carried away. And in many ways we did.
But, back to the topic at hand. I listened to their second book, entitled Now, Discover Your Strengths. This second book reported on a vast amount of research conducted by Gallup to identify categories of strengths. They came up with 34 areas in which people seem to have some sort of 'natural' ability. That means that you do things in your area of strength with little effort. You might be good at connecting with people, or working a crowd, or seeing implications, or figuring out how things work. There are things that you are better at than other things. There are things that you are better at than other people. There are things that other people are better at than you. And it only makes sense that you do the things that you are good at and spend your time getting better at them.
Unfortunately, in today's world, we tend to downplay our strengths. If we are good at something we tend to dismiss it as unimportant. It is easy for us, so we don't value it. And other people resent our strengths. In a world that values conformity, standing out implies that others somehow fall short. This all reminds me of a Kurt Vonnegut short story ("Harrison Bergeron") where anyone with a talent was penalized somehow in order to ensure that nobody made anyone else feel inadequate. This is funny in a short story, but sad in real life.
If this idea has any appeal to you, I would encourage you to visit the StrengthsFinder website. They list the 34 categories and, if you are so motivated, they provide a test that will tell you your strengths. Next time I will discuss my strengths and their implications.
Monday, November 9, 2009
First, Break All the Rules
I listen to a lot of recorded books and lectures. Most of the time it is fairly serious stuff, but occasionally I like to lighten up with a motivational talk. While the quality of motivational talks can vary greatly, I find that the good ones usually give me something worthwhile to think about. And that is exactly what just happened. I went to the library and checked out four or five motivational talks, one of which was a management talk called First, Break All the Rules. This talk caught my attention with the claim that if you want a high-performance organization you should focus on your employees' strengths rather than their weaknesses. This was exactly what I was looking for - something to think about.
According to the authors, the most common employee development process in use today reviews the employee on an annual basis, identifies their areas of weakness, and has them work on those weaknesses over the next year. This, again according to the authors, produces mediocre employees who are frustrated and struggling to be good at things they are not naturally good at. Instead, it makes more sense to have them identify their strengths and work at getting better at things that they are naturally good at. This is so obvious, I thought; why did it take so long for someone to figure this out?
I am going to insert my own analogy here to crystallize the clarity and importance of this idea. Imagine a professional football team where the players spent their time working on their weaknesses. The quarterback would practice blocking. The running back would practice tackling. The tackles would run pass routes. The kicker would develop his social skills, and so on. How well would this team perform in competition? Probably not very well. In fact, their only hope would be that other teams also practiced using a similarly dysfunctional development strategy. And that is exactly what happens in business, industry and government today.
But, you may ask, how do you know what your strengths are? How do you discover them? How do you develop them? How do you employ them to get better at doing your job? Well, stay tuned, there is more to come.
Tuesday, November 3, 2009
No Safety in Numbers
In the last post, I discussed how going along with the crowd in your research may be risky in that you may not get as much credit (or even any) for the work you have done. People who have developed recognizable names in research have usually done so by going off on their own and discovering something new and novel. People who plod along shoulder to shoulder with other researchers may gain recognition from the other plodders, but it is rare that they gain recognition beyond that. However, plodding along is not the only risk associated with being part of the crowd.
I used the analogy last week of a beach full of beachcombers with metal detectors. The other risk of being part of the crowd is that the crowd may simply be on the wrong beach. The frequency of this occurrence probably varies quite a bit from one field to the next. However, no field of research is exempt. A promising new vein of research is discovered. Researchers flock to it. And, for a while, it is easy to get papers published in this new area. There may be special issues of journals dedicated to it. There may even be whole new journals dedicated to the emerging area. But then, time passes without much progress. A new and different area excites everyone. Before you know it, you cannot get an editor to even consider a paper in the old new vein. And then, one day, you find the beach deserted except for a few diehards who refuse to give up.
There are probably people who have invested heavily in this area. Many have probably gotten tenure based on their publications. And now they have so much invested in this area that they are unwilling to look for a new beach. So the chances are that they give up doing research and, along with that, any dreams of being recognized for their contributions. The beach that no one cares about can be a lonely and disappointing place.
Monday, October 26, 2009
Timidity Does Not Pay Either
In the last post, I talked about the risk one takes when they venture out on their own, following their own curiosity, in their research. Not going along with the crowd is a high risk, high reward situation. It is high risk because you might find that you have just wasted your time. It is high reward, because, if you are successful, you may get your name associated with something. This may sound like I am recommending that one go along with the pack. I am not. There are risks there as well.
If one follows the path of well-defined research, there are risks and rewards, as there are with anything. The reward may be that you are the one to find the thing that everyone is looking for. But that reward may not be so great, and the risks are not trivial. Consider the following analogy. Let's say that pirates buried treasure somewhere along a stretch of beach several miles long. If enough people scour the beach with metal detectors, somebody will be the person lucky enough to find it. What will that person get credit for? They will get credit for being the lucky one. That's it, and they may not even be able to keep the treasure.
How different is this from someone who studied old maps, read old ship logs, determined where the treasure would likely be, and then, before actually looking for the treasure, acquired salvage rights? The risk with this approach is that after all that work they may not find anything. But, if they do, they get prestige, recognition and most likely can keep the treasure. This is analogous to the situation I described last week. But let us return to the beach full of beachcombers with metal detectors.
The person who finds the treasure with a metal detector is not likely to gain the prestige and respect that the person who predicted its location would. The beachcomber would be seen as a technician who was merely applying a technique and got lucky. So, one of the risks associated with going along with the pack is that you may be seen merely as a technician. One of the hallmarks of lackluster research is that it is technically solid, means little and contributes less.
This disdain for the technician goes a long way back in the history of science. In fact, that is where the word 'science' came from. Prior to the mid-1800s, what we currently call science was known as natural philosophy. As more and more people began to focus on data collection and less on the larger problems to be solved, natural philosophers began to chafe at the idea of having these people included in their ranks. So, around mid-century, William Whewell suggested the term 'scientist' (from the Latin word for knowledge, scientia) to refer to these technicians of knowledge acquisition who did not live up to the full meaning of natural philosopher. Today, of course, the term scientist is used as a term of respect rather than disdain. But it reflects the prevailing view that technicians somehow fall short of the mark.
If one is too deeply embedded in the current paradigm, they risk being considered little more than a technician. And, like the beachcomber who got lucky, they are unlikely to get full credit for whatever they discover. Is being a technician the only risk associated with going along with the crowd? No, not at all. It may be that the whole crowd is looking on the wrong beach. And that we will consider next.
Monday, October 19, 2009
Retroactive Blessings
Sometimes an intellectual pursuit becomes research retroactively. It receives what amounts to a retroactive blessing. Let's say that a scholar pursues an avenue of inquiry that appears to all his or her colleagues as somewhat fanciful. By fanciful we mean that it does not follow any of the commonly accepted methodologies; it is not attempting to answer any of the current questions; and it does not appear to be yielding anything of obvious value. This scholar's colleagues may dismiss this activity as not being research. They may call it an intellectual pursuit. They may even call it a legitimate intellectual pursuit. But they would probably stop short of calling it research.
Then over time, let's say five or ten years, it begins to bear fruit. Since this is a hypothetical, we can push it a bit. So, let's say it bears fruit in a big way. It popularizes a new methodological technique, it helps answer an unanswered question, or it opens up a whole new vein of productive research. Would we consider the work done for the past five or ten years to be research? I think there is no question. It would be viewed as research.
Now let's consider what would happen if it did not bear fruit. Everything else was the same. There were five or ten years of investigation. But they came up empty. Would it then be considered research? Probably not. So the very same activity becomes research if it pays off and is not research if it doesn't.
Scholars like to say that research does not have to pay off in order to be research. But when they say that, they are talking about some fairly narrowly defined activities within the bounds of convention. So, if I set up an experiment, for example, to test a principle, then it would probably be considered research regardless of the outcome, as long as the experimental design was solid and the principle being tested was viewed as nontrivial. However, if I just follow my curiosity, wherever it takes me, it would have to pay off eventually to not be considered folly.
Why do we do this? Well, overwhelmingly, when people just drift off on their own the results are not productive. So, we allow scholars to take a fair amount of personal risk in their endeavors. If their interests do pay off eventually, then they are acknowledged retroactively. If they do not, then they just have to face the fact that they wasted their time. So, is it better for researchers to stay close to the conventional? Maybe not. That approach has its risks as well.
Monday, October 12, 2009
But, Is It Research?
I am embarking on a development effort to create quest-based learning tasks in Second Life. This is part of the vein of work that I have been pursuing with virtual worlds and video games. A question that faculty often have to deal with when pursuing their interests is - well, that all sounds very interesting, but is it research?
Business schools have an inferiority complex when it comes to research (and rightly so) leading the faculty to question such pursuits with regard to their validity as research endeavors. This is an incredibly important topic for business school faculty, so I thought I would digress a bit on this issue of research.
Business school research is, at best, a poorly defined concept, with definitions and criteria varying widely from school to school and among faculty members within a school. Under the most lax definitions, everything is research, and under the most stringent, nothing is research. So clearly the concept needs a little clarifying.
Wernher von Braun is credited with the oft-cited observation "research is what I am doing when I don't know what I am doing". The original quote included the modifier "basic", which changes the meaning slightly. However, this is the version that you see cited most often.
If not knowing what you are doing is the criterion for research, then business faculty across this great nation and around the world are doing a great deal more research than they are getting credit for. And there is some justification for this claim. Research is the process by which we create new knowledge. And if we already know what we are doing, then we are not creating anything new.
Plato struggled with this same problem. He wondered how you could recognize truth unless you already knew it was truth. This, in turn, led to his observation that you never really learn anything new, you just remember things you already knew. This is actually an astute observation on Plato's part. But, I will just make him look silly if I try to explain it.
Putting aside the philosophical subtleties regarding truth and knowledge, I think it is fair to say that when people do not know what they are doing, we should take it at face value. That is, it is not research. They simply do not know what they are doing. At the same time, we do not want to completely ignore the fact that in order to do research you must step away from the things you know and discover things that you don't know. Just exactly how that is done can be very tricky at times.
Monday, October 5, 2009
Why Study Games?
The past few posts have been about video games, and this naturally leads to the question - why study video games? I would like to break this down into two questions: 1) why study games, and 2) why study video games? These are really two quite different questions and need to be addressed separately. In this post I will take on the first one.
In his book The Grasshopper: Games, Life and Utopia, Bernard Suits points out that in a utopian world, where work was not required for survival, most people would busy themselves with games. There is something fundamentally satisfying about games, and they are one of the few activities that people pursue for their own sake. That is, we work so we can play. But we play because we like to play.
There is an important message in that observation. If work were more like play then we would want to work for its own sake. Or, if work were more like a game, then it would be more inherently satisfying. Same thing goes for education. If school were more like play we would want to go to school for its own sake. And if school were more like a game, it would be more inherently satisfying.
Unfortunately, people like to think that work and school should be hard. They should be unpleasant. So any attempt to make them more like play or more like games would only diminish their value. But I believe that this perspective is merely a rationalization. It is an attempt to make the best of a bad situation. Believing that work is supposed to be unpleasant allows us to accept our fate if we are doing work that we find unpleasant. However, history does not support this view.
Work in the industrial age was far more unpleasant than it is today. In fact, it was not only unpleasant, it was dangerous, tedious, and detrimental to one's health. In the past century and a half great strides have been made in making the workplace safer, more pleasant and more satisfying. You would be hard pressed to find anyone who believes that work was somehow better in the factories of Victorian England.
Education wasn't much better. At the dawn of the twentieth century most education was some form of recitation. Students would memorize materials, then stand up in class and recite what they had mastered. It wasn't until the mid-twentieth century that schools began to focus on problem solving skills and greater student engagement. Needless to say, school became much more interesting, and today we look back disdainfully on those days of 'rote memorization'.
The question is, with all the improvements that we have seen in the workplace and in education, is this as good as it gets? I don't think so. As we gain more and more insight into the nature of games and why they are so inherently satisfying, we can apply that understanding to the workplace and to schooling. Imagine what would happen to the economy if people preferred learning new skills and applying them to anything else. If all those unproductive hours spent in front of the boob tube could be put to productive use, we might see another major improvement in quality of life.
Monday, September 28, 2009
Some History
I just reviewed another article for ACM Computing Reviews, this time on the history of Pong. It reminded me of how much misinformation there is floating around about the history of video games. People who remember Pong, and many who don't, recall it as the first home video game and see it as the first step in a technology that went on to provide home gaming consoles such as the Nintendo, Sony PlayStation and Xbox. This perception is inaccurate and I thought I would take a few minutes to set the record straight.
First, Pong was not developed using computer technology. It was developed using television technology. There were games being developed on computers, at the time, in university computer labs. However, this was before the microcomputer was developed and computers were way too expensive for home use. Television technology was used to keep the price down.
Second, Pong was not developed for home use. The earliest video games were developed for arcades as a more advanced form of pinball machine. So, the earliest versions of Pong were coin-fed arcade machines. Home use did not come until much later. Yes, video games initially grew out of the pinball arcade business, not the computer industry.
Third, as disparagingly as many people like to view video games today, their image used to be much worse. Although they were advertised as 'games of skill', many viewed them as a form of gambling. And going into a pinball arcade to play them was viewed as a shady activity. Nolan Bushnell, the founder of Atari and a developer of many of the early games, tried to increase their respectability by creating a line of family restaurants where the kids could play video games while the parents waited for their food to be prepared. This was the origin of Chuck E. Cheese, and many people who would not go into a pinball arcade found Chuck E. Cheese to be perfectly acceptable.
Video games have come a long way and it is easy to forget where it all started. To learn more about the history of video games, I would recommend The Ultimate History of Video Games: From Pong to Pokemon--The Story Behind the Craze That Touched Our Lives and Changed the World.
Monday, September 21, 2009
Massively Multi-Player Online Role Playing Games
Massively Multi-Player Online Role Playing Games, or MMORPGs, really need a better name. But, for now, we will just have to go with the acronym. They are a unique genre of video game with special features that lead to some very interesting emergent properties.
First consider "Massively Multi-Player". When you play a first person shooter game or a game of skill, it is you against the game. Any people or monsters you encounter are game objects. In a multi-player game, at least some of the other people or monsters are actually the characters of other players. So, instead of playing against the game (known as PvE, or Player vs. Environment), you are playing against other players (known as PvP, or Player vs. Player).
Typically you will encounter only a few other players. However, the potential exists to encounter dozens or even hundreds of other players, which is why it is referred to as Massively Multi-Player. The fact that you are interacting with so many other players gives rise to group dynamics, a social environment, a rudimentary culture, and a real economy.
Online means that you are interacting with other players in real time, although it is hard to imagine such a phenomenon not being online.
Role playing means that players are acting out roles, which gives the environment a fantasy quality. This, in turn, gives rise to some interesting psychological and sociological aspects.
Some academics have studied World of Warcraft (the largest MMORPG) as a cultural artifact in the same way you would study film or novels. Hilde G. Corneliussen and Jill Walker Rettberg edited a book of readings called Digital Culture, Play, and Identity: A World of Warcraft® Reader, which analyzes the symbolism and values portrayed in World of Warcraft.
The point here is that there is a richness to Massively Multiplayer Online Role Playing Games that is only just beginning to be tapped. I reviewed Corneliussen and Rettberg's book for ACM Computing Reviews back in September of 2008.
I began that review with the observation that when Thomas Edison made the first three-second movie of a guy sneezing, nobody could have possibly anticipated the impact of film and television on our culture. In the same way, when people saw one of the first video games, Pong, they could not possibly have anticipated the impact of video games on our culture. Today we are just beginning to see that impact and are still a long way from understanding it.
Monday, September 14, 2009
World Building Video Games
World Building Video Games are not technically games although the lines are quite blurry in some cases. Spore, for example, is a world building environment in that you create your own creatures. But it is a game in that you try to have those creatures defeat other creatures and take over the universe. Second Life, on the other hand, is a world building environment which is not a game at all. You can create games in Second Life because it is a world building environment. But the platform itself is not a game. This is a source of endless confusion and frustration to those new to Second Life who come asking "how does this game work?" and "what do you do here?"
Second Life can be viewed as many things, but, in order to understand it, it is best to think of it as a platform for creating three dimensional worlds. These three dimensional worlds can serve any number of purposes. They can be for entertainment, social interaction, education, public relations, information dissemination and so on. My interest in Second Life lies in its potential for business applications.
What does the ability to create three dimensional worlds have to do with business applications? The answer is simple. Three dimensional worlds are likely to be the next major change in computer interfaces. If you are old enough, you may remember that we used to interact with computers through what was called a command line interface. In fact, in the late 1980's and early 1990's, as graphical user interfaces began to gain some traction, there was considerable debate over which interface (command line or graphical user interface) was superior. Of course, graphical user interfaces won and today we look back on the command line interface as primitive and barbaric.
Each year that passes adds more people to the generational pool who grew up playing video games and interacting with computers through three dimensional virtual world interfaces. They look at the two dimensional point and click interface that we are all so used to and wonder how anyone can interact with a computer through an interface that is so primitive and barbaric. And when it comes to change, there is nothing more powerful than a new generation.
Monday, September 7, 2009
Video Game Genres
I should point out that when we use the term 'video game' we are referring to a fairly large collection of software programs that run on a variety of different platforms and provide the user/player with a variety of very different experiences. I should also point out that there is a Wikipedia article on Video Game Genres which does not at all agree with what I am about to say. Nonetheless, I would offer the following categories of video games:
1) Games of Skill: These are simple video games such as card games that provide the user with a challenging diversion.
2) Leveled Games of Skill: These are slightly more complicated video games that not only require skill but allow the player to level up, showing progress in the game. Some of the classic video games such as Mario, Link, Pac Man and Tetris are of this variety.
3) Sports Games: Sports video games allow the player to engage in a sport such as football, hockey or baseball without leaving the couch. Perhaps the most famous game of this type is John Madden Football.
4) Memorabilia Games: We often see blockbuster movies or TV shows add to their revenue by producing t-shirts, lunch pails or action figures. Sometimes they also produce video games. Examples include Star Wars and Lord of the Rings. Most people who play these games are extending their movie-going experience.
5) First Person Shooter - In this game the player uses a weapon to fight off the bad guys and the point of view is the shooter's. Much has been written about the violence in these games. Examples include Grand Theft Auto and Fallout.
6) Massively Multi-Player Online Role Playing Games (MMORPGs) - In other video games, you play against the game and the other characters in the game are controlled by the game. In an MMORPG you play against other people. World of Warcraft and nearly all of the early virtual world video games such as EverQuest and Ultima Online fall into this category.
7) World Building - This is an interesting niche within virtual world games. In these games, people create their own virtual worlds and then interact with others in those virtual worlds. Second Life is the best example in this genre. But an early and better known example would be Sims Online and its follow-on game called Spore.
8) Serious Games - This is a specialized niche in which video game technology is used for serious ends such as education. A subset of serious games, called persuasive games, uses video game technology to influence behavior.
Although I am interested in games in general (see my blog PerspectivesOnVideoGames for some foundation work), I am primarily interested in MMORPGs. MMORPGs have the added dimensions of a social and economic environment which make them a much more complex and hence interesting phenomenon for study. I am also interested in World Building games such as Second Life because of their potential for business applications.
Monday, August 31, 2009
Video Games
It seems like I go out of my way to bring trouble on myself. People ask me what I have been doing lately and I say "I've been spending a huge amount of time playing video games." This makes it sound like I am lazy, or losing my mind, or having a second childhood, among many other possibilities, none of which are good. But video games are a serious business.
The video game industry is threatening to surpass, if it hasn't already surpassed, other pillars of the entertainment industry such as film and television. People often refer to video games as 'interactive entertainment' to distinguish them from non-interactive entertainment such as television, where you just sit there like a slug and do not participate. In a video game you participate in the outcome. This is an emerging phenomenon and is likely to grow in a lot of different directions.
Academics who traditionally study novels and films have begun to study the values, symbolism and messages conveyed by video games. This suggests that those people, at least, see video games as an important emerging cultural phenomenon.
Some researchers are looking at ways in which video game technology can be applied to education. They call it edutainment. Others are looking at how video games can be used to persuade people. They call this 'persuasive games'. In fact, the whole area of using video game technology for something other than games is called 'serious games' and is a rapidly growing area within the field.
In areas of interest to the Business School specifically, people are looking at video game technology for advertising, virtual team building, and learning any number of business skills from inventory management to group dynamics.
I have taken on a deeper philosophical analysis of games and the role they play in our lives. If philosophy does not give you a nose bleed, you may want to look at my other blog - Perspectives On Video Games. Ask yourself, first, what is a game? Can you come up with a definition that includes all games? Why is it that we play games because we want to, while everything else we do because we have to? The simple answer is - because they are fun. But, why are they fun? Wouldn't it be great if both school and work could be as much fun as a video game? They can be and they should be.
At the same time, video games are very diverse. Free Cell, a simple card game, is vastly different from World of Warcraft, which includes sophisticated social interactions and an in-world economy. Next week I will look at the variety of video game genres and try to sort this all out.
Tuesday, August 25, 2009
Stealing the Clay
When I was considerably younger and fretting over what it might take to be professionally successful, I used to have a recurring dream. In this dream, I was a famous world class sculptor. I would make statues and busts out of clay. My fans, in the dream, marveled over the quality of my artistic creations. I was well known, well liked, and well respected. But, there was a secret. In order to make my artistic creations, I had to 'steal' the clay. I couldn't just buy it and nobody could supply it for me. I had to steal it. Not only did I have to steal the clay, but everyone knew it. Nobody would openly endorse or even condone such an activity, even though everyone knew it and was willing to look the other way. The responsibility for this transgression was entirely mine and it was the cost of success.
The point of this dream, I believe, is that you cannot succeed by just doing what you are told or by just doing what you are 'supposed' to be doing. At some point you have to take risks and take responsibility for those risks. To be successful, you have to be good at something and in order to be good at something you have to figure out what you do well and pursue it. This process of individuation makes you better at the things you are good at while differentiating you from those around you. Aristotle would have called this developing your virtues.
But, as we develop our virtues and differentiate ourselves from others we often feel that we are on a lonely path and maybe the wrong path. So, in the twisted logic of a dream, my risks became transgressions.
This summer, I have spent an enormous amount of time playing World of Warcraft. It is the summer of 2009 version of stealing the clay. I think video games are going to become a major force in all aspects of entertainment, education and commerce. But, if I wait until that is obvious to everyone else, it will be way too late. So I have invested my time in learning about this important phenomenon and feeling a little guilty in the process. But, what the heck. I would rather live with the guilt than sit back and wonder how the world managed to pass me by.
Monday, June 29, 2009
Emergent Properties
One of the things that makes the study of complex systems difficult is the fact that we are almost always studying emergent properties. Emergent properties are properties of the system that cannot be predicted from or even explained in terms of the properties of the constituent parts. A simple example of emergent properties is water. The properties of water cannot be predicted from or explained in terms of the properties of hydrogen and oxygen. When you put these two elements together the combination produces a whole new set of properties.
An example of emergent properties in social science would be the political systems that arise from people living together under certain circumstances requiring organization. But are political systems 'real'? Have they always existed? The answer is no. They are not real and they have not always existed. Over time people noticed these emergent properties and began to group them into categories and give them names. The Romans recognized the emergent properties of people living in organized societies and called them 'the thing of the public', or res publica, which gives us 'republic'. Now we think of the republic, or any political system, as a real thing.
Economics came into being much later with Adam Smith identifying a collection of emergent properties based on wealth rather than power. But who is to say that organizing properties based on wealth and power rather than class, location or time is the best way? The point is that these are constructs that we are studying that become real over time by virtue of the fact that they are being studied.
In the same way, computer systems have emergent properties. When people or societies interact with computer systems other emergent properties arise as well. Are these things 'real'? No, they are just constructs that we create to organize our knowledge and give us ways to think about things that we are trying to understand. These things are not 'real'. They are just our best attempts to organize our knowledge and understand our experiences.
It is good to keep this in mind as we talk about virtual worlds, or interactive entertainment, or any number of other phenomena that arise through the interaction of complex human systems with complex computer systems.
Monday, June 22, 2009
Interactive Entertainment
Over the past decade we have seen a dramatic increase in the popularity of massively multi-player online role playing games, known by the unpronounceable acronym MMORPGs. These have been largely virtual world video games such as World of Warcraft. However, virtual worlds such as Second Life have also provided interesting possibilities for online role playing. Some media pundits see this as the next generation of entertainment, also known as interactive entertainment.
To see the difference between traditional home entertainment such as television and interactive entertainment, consider the following scenarios. With traditional home entertainment, you select a show that you want to watch, put the television on the proper channel, plop down on a chair, and veg out while you are being entertained. It is passive and non participatory.
Now, consider online role playing. You select a role or a scenario to participate in. You log into a virtual world. The story unfolds as you interact with the environment. You are thinking, planning and engaging. You might even interact with other role players. Instead of following a rigid script, the scenario evolves in a unique fashion based upon the actions of the player(s).
If you were to go to an automobile dealer to buy a car and there was only one model available, you would find that unacceptable. You would think the world a very dull place if everyone had to drive the same car. If you went to the store to buy a shirt and only one kind of shirt were available, you would find that unacceptable. However, if everyone watches exactly the same television show, we somehow find that perfectly acceptable. And if we watch the same show again, it is still the same show. Why is it that we find that acceptable? The answer, perhaps, is that we have come to expect it. However, online role playing games are likely to change that.
But, are online role playing games just a new form of entertainment? Or is there more to them than that? I think there is quite a bit more, and we will turn to the various uses of online role playing games for serious play next.
Monday, June 15, 2009
Social Interaction Technologies
Social interaction, according to Wikipedia, "is a dynamic, changing sequence of social actions between individuals (or groups) who modify their actions and reactions according to those of their interaction partner(s). In other words, they are events in which people attach meaning to a situation, interpret what others are meaning, and respond accordingly."
That is to say that social interaction is the mechanism by which people modify their (social) behavior in response to the actions of others. If you lived on an island with no other people, it is unlikely that your behaviors would change much beyond the behaviors necessary to survive. If you lived in a small tribe, any changes in your behavior would most likely be directed towards survival of the tribe as well. It seems that as the social unit becomes larger and more complex, the behaviors that you modify as a result of social interaction become more complex and of larger scope as well.
The point that I am reaching for here is to suggest that social interaction technologies have made or will make the social unit extend to the entire planet. So, we will be interacting with each other and modifying our behaviors in ways that affect the entire human race. Some of this will undoubtedly be good. And some will undoubtedly be bad.
The question is - do we just let things unfold and see what happens; or should we take a hand in influencing the future? It is a tough call because most people believe that it is better to not make things worse with your actions than it is to make things worse in an attempt to make things better. But, is this still true? Are we at a point where the impact of future changes is such that not doing anything is a greater evil than doing the wrong thing? I really can't answer that. All I can do is raise the question.
Monday, June 8, 2009
Big History
I am currently (not at this second, but at this time) listening to a wonderful lecture series from The Teaching Company called Big History. The lecturer is Professor David Christian from San Diego State University. I mention this for three reasons.
First, the lecture series from the Teaching Company are wonderfully interesting lectures on a diverse range of topics and I highly recommend them. I have listened to dozens of these lectures comprising hundreds of hours of informative enjoyment. I listen while in the car or while out walking or hiking. For the enjoyment value alone these lectures are worthwhile. But, that is not all.
Second, exposure to a diverse range of ideas is very important as you never know where important insights may come from. I can say with a fairly high degree of certainty that I have not listened to a single set of lectures that has not provided me with insights well beyond the topics of the lectures. These insights usually apply to things I am currently working on or thinking about and provide me with new ways of looking at problems.
And third, this ties in with what I was saying in my last post. So it is on point to this thread. However, that will take a little explaining. Christian's thesis is that you can unify all of history from the Big Bang to modern times, despite the vast differences in the scale of time and space, by looking at history as a process of the creation of ever greater complexity. Further, if we see this complexity as occurring in steps, we can see it line up with our current academic disciplines: Cosmology (Big Bang), Astronomy (Stars and Solar Systems), Geology (Planets), Chemistry (Particles), Biology (Life Forms), Sociology (Societies), and so on. This is a very clever idea and I have not done it justice here, but have sketched out enough to make my point.
Social interaction is the engine by which societies form and evolve. As such, it is on a par with, say, chemical reactions or biological reactions. We have a reasonably good grasp these days on chemical reactions, less on biological reactions, and less yet on social interaction. Tinkering with chemical reactions when you don't understand them puts you at risk of blowing yourself up. Tinkering with biological reactions when you don't understand them puts you at risk of poisoning yourself. Tinkering with social interaction must put you at risk of something. But we don't even understand enough to know what we might be risking. Yet we have these powerful accelerants called social interaction technologies. And we have no idea what the consequences of these technologies might be. So, context may be a little more important than we realize.
Monday, June 1, 2009
Technology versus Context
Most classes in Information Technology focus on the technology rather than on the context of the technology. For example, if one were to take a class in a programming language, say C# just to use a current example, one would, hopefully, learn how to program in that language. This is a good thing and I don't wish to diminish it. However, I do wish to point out what is lost.
There is a history of programming languages. Over the past fifty years programming languages have changed dramatically. If you were to take someone who learned a programming language at one point and show them programming ten years later, they would probably not recognize it. In fact, today, most programming is the assembling of reusable components in an integrated development environment. That is very, very different from what I learned. In fact, it was an uphill climb for me to get used to this new paradigm.
C# is an example of an imperative programming language. There are also functional and logic languages that were more popular when artificial intelligence was ascendant.
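To make the contrast concrete, here is a small, hypothetical C# fragment of my own devising (an illustrative sketch only, not part of any course or text) that computes the same sum twice: once in the imperative style, spelling out each step, and once in the functional flavor that C# absorbed through LINQ.

using System;
using System.Linq;

class StyleContrast
{
    static void Main()
    {
        int[] scores = { 72, 88, 95, 61 };

        // Imperative style: tell the machine how to proceed, step by step,
        // with an explicit loop and a mutable accumulator.
        int sum = 0;
        foreach (int s in scores)
        {
            sum += s;
        }
        Console.WriteLine(sum);           // prints 316

        // Functional flavor (borrowed into C# via LINQ): state what you want
        // and let the library do the stepping.
        Console.WriteLine(scores.Sum());  // prints 316
    }
}

The arithmetic is beside the point. The first version tells the computer how to get the answer; the second merely declares what answer is wanted. That difference in mindset is exactly the kind of context that outlives any particular language.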
Over the years the monarchy of programming languages has changed considerably. Languages like COBOL, PL/I, C++, and Ada were the languages du jour, while many current students have never even heard of them. Java is currently the language du jour and it won't be long before it joins the rest of the pack in obscurity.
The problem here is that if you teach a person to program, they learn how to program in that one language. If you teach a person the context, they can learn new languages as they evolve. This is not generally a problem as most students only program in the early part of their careers. By the time their skills have become out of date they have moved on to other things like design, or management or working with clients.
One of Murphy's Laws of Technology states that if builders built buildings the way that programmers write programs, the first woodpecker that came along would destroy civilization. We have seen how shortsightedness in the financial industry can create problems. I wonder if we are not being similarly shortsighted in our technological infrastructure. Having programmers write programs that they know they will not have to maintain cannot be a good thing.
But, there is an even larger point here. Programming languages are just one instance of education in Information Technology. Overwhelmingly classes in Information Technology focus on the technology rather than the context. As I think about it, I also wonder if other disciplines don't do the same. Well, I am getting a little far afield here and should probably think about this a little more.
Monday, May 25, 2009
Summer is Underway
I took a break from this blog for a few weeks as I negotiated the break between semesters. The break between spring and summer is usually the most difficult. There are always a lot of last minute things to do as people panic in the realization that the academic year is almost over. And before you can catch your breath the summer session is underway. The other intersession breaks are usually around a month. In the spring, you only get a couple weeks. It sounds like a lot of time but between residual items from the spring and new work for the summer, it can get hectic. I had grading, meetings, two dissertation defenses, and preparations for my summer class. All of this is now under control so I can return to writing again.
I am offering an exciting class this summer. It is a class in business applications of virtual worlds. It is not a totally new class. It has been on the books for years under the title Web Based Systems Development. I first introduced this class in the early 1990's under the title of Corporate Web Applications. At the time few people believed that the web had much potential use for corporate applications. Now the web is as mainstream as you can get and few people see the potential of virtual worlds for business applications. I see virtual worlds as the 3D web and have updated the course accordingly.
The course is exciting because I have decided to, unapologetically, teach the course the way it would look if virtual worlds were mainstream. In this way I get to begin developing a future course while teaching an existing course. The drawback, of course, is that it takes a leap of faith on the part of the students. You don't see ads in the newspaper saying "virtual world developer needed" whereas you do see ads for PHP programmers. But, if we taught to the ads in the paper, we would not last long. To teach a proper academic course, you need the perspective that can only be gained by getting on the bandwagon long before anyone realizes there will be a bandwagon and studying it in terms of all the other bandwagons you have seen.
This is a professional hazard for academics in practitioner fields like Information Systems. If you stay within your comfort zone, you become out of date and irrelevant. If you wish to stay relevant and vital, you have to take huge risks, many of which will not pay off. Further, most of those risks come from our inability to predict the future. I have gotten quite good at it as far as it affects my areas of concern. But, no matter how good you get, the future is still, well, unpredictable.
There is a story, in Greek mythology, about a woman who both pleased and angered a god. In return she was given a blessing and a curse. The blessing was the gift of prophecy. She could see the future. The curse was that nobody would ever believe her. I know the feeling.
Monday, April 27, 2009
Another Semester Comes to a Rolling Stop
One of the great things about being an academic is that - no matter how many things you screw up and no matter how many things didn't get done - at least twice a year you get an opportunity to start over and try to get it right the next time. With the end of each semester comes an end to the problems, mistakes, and unfulfilled expectations of that semester. And with the start of each new semester comes new opportunities, second chances, and new expectations. It may well be the thing I like best about being an academic.
This semester I have been teaching a class in how to write stories to explore the ethics of technology. It is a bold idea and, as with most bold ideas, a bigger pain in the ass than anyone could possibly imagine. But, the semester is rolling to a stop and next semester I will have another opportunity to get it right. If you have been following the entries in this blog and wondering where it all came from, it was from that class. You would probably agree that it still needs some work.
Over the summer I am going to start writing a book for that class. My plan is to get it partially written over the summer and use it when I teach the class again in the fall. Hopefully, I will have enough to begin submitting it to publishers.
I say that is my plan, but who knows what will happen. This summer I am teaching a class in business applications of virtual worlds. It is also a bold idea and, well, much like the stories class in being a pain. Since the summer is also a semester I will look back at the end of the summer, lament what I did not get accomplished, and, get another fresh start in the fall.
Since I will be devoting my writing time to my book, I will probably not keep this blog quite as diligently. I have fallen into the pattern of writing an entry every Monday morning. That has worked well and I have enjoyed that rhythm. But now, that energy will be going into the book.
Enjoy the summer!!
Monday, April 20, 2009
A Pause for Perspective
I have been writing this blog now for several months and thought it might be appropriate to step back and think about the blogging process. For me, starting this blog was a running leap into the deep end of the pool. I had no idea what blogs were really for. I had no idea what I would write. And, I had no idea if anyone would read it. Since then I have gotten a much better understanding.
On the first question, what blogs are for, the answer is that blogs are for a lot of things. But, I think the best way to think about a blog is as a private journal made public. Then asking what a blog is for is like asking what a private journal is for. A private journal is just a way to record your thoughts and, in doing so, get them in order. Some people may use a private journal to record events of the day. Others may take on deeper topics. In this blog, I have taken on deeper topics, probably too deep for most readers. But, the truth is that my private journals contain exactly the same kind of thing. I use writing to sort out my ideas. And now I am using the blog for the same thing.
On the question of what to write about, I decided, as suggested in the previous paragraph, to write about what I was thinking about. I do not expect people to read this blog as it is written. As I formulate ideas and express them to other people, I am often asked where they can read more about the ideas. Eventually some of these ideas will be sorted out and formalized. They will appear in print, usually in academic articles. But there is a long way between an idea and a paper. And many compelling ideas fall off along the way. A blog is a net to catch all those ideas that, despite their worthiness, fell off along the way. I write about what I am thinking about. And putting it into a blog gives other people access to my private journals.
This brings us to the third question, who will read it. I was initially disturbed that there were no comments on my postings. Then I realized why. I had the settings so that people couldn't comment. However, I think most people are reluctant to comment anyway because the ideas are weighty and intimidating. However, I know people are reading because it comes up in conversation. I will be talking about something and somebody will say, yes, you mentioned something about that in your blog.
Most blogs are like fresh bread. They have a shelf life of a few days and, if not consumed within those days, they have limited value after that. But, I put a lot of work into my thoughts and into my blogs. I view them as having a much, much longer shelf life. People will often ask me how I came up with a particular idea. Now, I can say go back and look at my blog from x months ago, maybe even x years ago, and you can see how the idea evolved.
I know what I am writing is not for everyone and even those who do like it can probably only take it in small doses. I am fine with that. I enjoy baking the bread and you can enjoy consuming it when you are hungry and at your leisure.
Monday, April 13, 2009
Truth Claims
The designations true and false do not apply to most statements. For example, if someone were to say "there are parallel universes that we will never know about", this statement cannot be shown to be either true or false. The logical construction of this claim makes it impossible to resolve. Other statements, such as "people are basically good", are equally unresolvable due to the ambiguity of the word good. In order for a statement to be a truth claim, it must be a claim that can, somehow, be refuted. If there is no way for a claim to be refuted, then it is not a truth claim and it can never achieve the status of truth.
The reason for this is that the body of assertions that we refer to as true are all assertions that could have been shown to be false, if indeed they were false, but after repeated attempts at refutation have continued to hold up. This is the thing that all kinds of truth have in common. Whether we are talking about scientific truth, historical truth, journalistic truth or any of the varieties of truth we may encounter, they all follow a similar pattern. A claim is made that can, potentially, be refuted. We then try to refute it. More people jump into the fray attempting to refute it. As the claim continues to hold up after repeated and sincere attempts to refute it, we begin to believe it and the probability that it is a durable claim increases.
So, the key elements in the discovery of truth are: refutable claims, an agreed upon method for challenging them, and repeated attempts at refutation by people whose only concern is the veracity of the claim. This plays out very differently in different fields. Scientists conduct experiments. However, some sciences, such as astronomy, don't conduct experiments. Astronomers collect data. Journalists gather facts. Historians have to contend with the historical record. Writers of novels have to square with human experience. In each case a claim is made that must square with evidence according to an accepted method. The question, at this point, is: can this approach be applied to moral truths or truths about the future?
Monday, April 6, 2009
Truth and Method
Francis Bacon said that method is more important than genius in discovering knowledge. He compared our pursuit of truth to a runner in pursuit of a destination. Method is the path and genius is the speed of the runner. A runner on the wrong path will get to the wrong place. If he happens to be a fast runner, he will get to the wrong place sooner. A runner on the right path will get to the right place. If he happens to be a slow runner, he will still get there. It will just take a little longer. So, method, according to Bacon will get us to the truth eventually. Genius will just get us to the wrong places faster.
Despite the varieties of truth that we discussed a couple of posts ago, they all share a similar method by which we arrive at them. Truth begins with a claim of some kind whose veracity we then attempt to determine. Then we continue to test that veracity. Over time, if the claim continues to hold up as more and more people test it, we begin to accept the claim as true. This sketch of our method for discovering truth needs a little more fleshing out, but the essence holds up across different domains.
First, the claim cannot be just any willy-nilly claim. It has to be a valid claim based on our understanding of the domain and clearly derivable from the things we know.
Second, the attempts to determine the veracity should be skeptical but not cynical. Skeptical means that we are trying to determine the truth rather than reinforce what we want to believe. Not being cynical means that we have to accept evidence that is not 100% certain.
And, third, the motives of those who do repeated testing on the idea should be to determine the veracity of the idea and not some other agenda such as discrediting it. And, if the claim continues to hold up over time and under repeated challenges, then we accept its durability and accept it as the truth.
The next step is to show how this sketch of method holds up in the variety of areas we already discussed, revealing that scientific truth, literary truth, historical truth, and even moral truths have something in common. And we will see how this approach applies to the use of stories in the pursuit of moral truth. Finally, just to push things to their limit, I will introduce a notion of truth about the future which I will call imaginary truth. Imaginary truth can and will be held to the same standards of durability as our other notions of truth. And imaginary truth gives us a headlight into the future that we badly need as the future continues to come at us at an increasingly faster rate.
Monday, March 30, 2009
Truth as a Durability Claim
We often believe, somewhat naively, that there is a thing out there in the world called 'The Truth'. We not only believe it exists, we also believe that, if we try hard enough, we can find it. This misconception, I believe, comes, in turn, from two other misconceptions.
The first of these misconceptions is that there is something 'out there': a stable, uniform, consistent reality upon which everyone can agree. I would refer back to my traffic accident example. Something happened, but what it was is open to dispute. Some elements of the traffic accident may be less disputed than others. Who was going north and who was going south might be largely agreed upon. However, which one swerved first or who took their eyes off the road may not be. Similarly, the properties of oxygen in the real world may be largely agreed upon, while the properties of road rage may not be.
The second of these misconceptions is that whatever is out there, we can know it directly and objectively. But how much of our knowledge actually comes to us directly by observation rather than through lenses, films, books and the like? If you hold up an X-ray picture of a broken arm, is that what the arm 'really' looks like? Most of our knowledge is brought to us via instruments. And those instruments are not just physical instruments like a telescope. Many are conceptual instruments, like logic and statistics, that help us to organize our knowledge. By the time we understand something, it bears little resemblance to the thing we were trying to understand.
So, if there isn't a thing called 'The Truth', what do we mean when we use the word 'truth'? I think the best way to look at it is to say that when we call something 'the truth' we are making a durability claim. That is, we are asserting that there is a low likelihood that we will stop believing the claim in the future. What we mean by truth is that as you re-examine the evidence, you are likely to come to the same conclusions. As other people re-examine the evidence, they are likely to come to the same conclusions. And as additional people in the future examine the evidence, they will also come to the same conclusions.
It is possible that people 100,000 years from now may see the world entirely differently. They may reject some, most or even all of what we see as the truth. However, we don't really care about that. If we believed in absolute truth, then this would be a serious problem. But when we see truth as a durability claim, it is not. Something is true if we are unlikely to change our minds about it in any time frame that we care about.
Monday, March 23, 2009
The Varieties of Truth
One of the problems in attempting to nail down the truth is that there is always a variety of different kinds of truth. Suppose that a huge explosion occurs in the center of a small town in the northeastern United States, and people ask "What happened?" There are many possible answers to this question, all of which have some claim on being 'the truth'.
You could explain the explosion in terms of the chemical reactions that caused the explosion to occur. You might say something like "chemical A in conjunction with chemical B in the presence of a trigger such as heat or shock caused a chemical reaction that expanded too rapidly for the space in which it was contained and the result was an explosion". This, assuming that the facts are correct, could be considered the truth. However, it is hardly satisfying.
A newspaper account might say that there was an explosion due to irresponsible dumping of hazardous chemicals into the sewage system, or perhaps a disgruntled political group was making a statement. Either of these explanations, assuming the facts to be correct, could also claim to be the truth.
Years later a social historian might claim that the explosion was merely an instance of a larger social trend in which corporations showed reckless disregard for the environment. The damage done by the explosion, in this case, was part of the cost of that reckless behavior. Again, this may very well be true.
Finally, an author might at some point write a novel showing how human greed in the present often surpasses our concern for the future, attempting to reveal some larger truth about the human condition. This, if the story were sufficiently compelling, might also be considered the truth.
All of these statements have some claim on being 'the truth'. We can quibble about 'kinds' of truth and say the first is scientific truth, the second truth in journalism, the third a historical truth, and the last one a literary truth. But acknowledging all these kinds of truth flies in the face of the notion that there is something called 'The Truth'. Different people probably have some preference for which variety of truth is most important. However, these people probably don't agree with each other, and since there is no way to resolve the dispute, there is no way to get to the truth.
However, all is not lost. All of these varieties of truth have two things in common. First, claiming something is 'the truth' is a durability claim. And, second, that durability claim is enhanced by arriving at the claim through an agreed upon method. These two ideas will be taken up next.
Monday, March 16, 2009
What is the Truth, Anyway?
You would think that there is an answer to the question - what is the truth? Unfortunately, this is not the case. Like witnesses reporting the 'facts' of a traffic accident, different observers have very different perspectives.
There is a correspondence theory of truth that says a thing is true if it corresponds to the real world. This sounds pretty good but begins to fall apart when we ask what we mean by "corresponds to the real world". Since we organize our knowledge into categories and structures that certainly don't exist in the real world, this gets a little dicey.
Another view, called the coherence view, suggests that those categories and structures must provide a consistent and coherent view of the world, and it uses coherency as the criterion for truth. That is, a claim needs to make sense considering other things we know.
While both of these provide useful ways of looking at the truth, they both address truth about the natural world. If we are seeking truth about the social world, such as how we interpret and explain the human experience, or truths about the moral sphere, such as how we should behave as moral agents, then there is no place to look in the natural world for answers. Hints, maybe, but answers, no.
This is to say that despite our reverence for science and despite the great reputation it has for bringing us closer to the truth about the natural world, science does not bring us one iota closer to truth about the social world or the moral sphere. What does bring us closer to truth about interpreting our experience as humans and how we should behave as moral agents? The answer, which I will get around to eventually, is - stories.
Monday, March 9, 2009
Stories and Reality
In Plato's Republic the poets were ejected because their art compromised the search for truth. Since much of our modern distrust of stories finds its roots in Plato, it is worthwhile to revisit the issue there. Plato believed in absolute truth. A simple example will illustrate this. Consider the mathematical definition of a triangle. It is an abstract mathematical object with three sides. The sum of the internal angles is 180 degrees. That is a pretty good definition. It includes all triangles and excludes everything that is not a triangle. Further, with that precise definition we can deduce further truths about triangles without even having to consult individual triangles. But, this ideal form of a triangle does not exist in the material world. All the triangles we have are imperfect copies. So, where does this ideal triangle exist? Plato posited a World of Forms where all ideal objects reside. This World of Forms is the world of absolute truth. The material world in which we live is merely an imperfect copy. And herein lies the wrinkle with poets.
I am going to switch from using the word 'poets' to using the word 'writers' because our modern understanding of the word 'poet' is different and a little misleading. Writers construct imaginary scenarios from real experiences and through those imaginary scenarios explore questions regarding the meaning of our experiences as humans. So, writers, like philosophers, are concerned with a search for truth. However, from Plato's perspective, the material world in which we have our experiences is an imperfect copy of the ideal world and hence once removed from the truth. The world constructed by writers is an imperfect copy of an imperfect copy, moving us yet further away from the truth. Due to their sins of imperfection and the dilution of the absolute truth, the poets and writers were banned from the ideal republic. The questions are: did Plato really believe this? And is it true?
First, the question of whether or not Plato actually believed this is unanswerable, since Plato has been dead for millennia and we cannot interrogate him. However, looking at his body of philosophical work, it would be hard to conclude that he really did believe this. First, his dialogues are all written in story form. If he really believed that stories took us further away from the truth, then why did he use stories to convey the truth? Second, within these stories are numerous mini-stories used to illustrate specific points and subtleties. Again, if Plato really believed that stories moved us away from the truth, why did he rely so heavily on them in his pursuit of truth?
Second, the question of whether stories move us toward or away from truth is too big an issue to be taken up at the end. So, I will pick that up next time.
Monday, March 2, 2009
The Role of Stories in the Ethics of Virtual Worlds
In a previous post, I cited Marshall McLuhan's famous, yet apocryphal, quote: "looking to the past to understand the future is like driving by looking in the rear view mirror." A rapidly changing technological base creates a path into the future with a lot of twists and turns, in which the future comes at us with increasing rapidity. The question is: how do we get a headlight into the future that will allow us to look forward instead of backward in order to make decisions about what we need to do? The answer, I believe, is stories. Stories provide a headlight into the future, a way to explore possible worlds and possible outcomes.
My favorite example of this kind of story is Michael Crichton's Jurassic Park. This cautionary tale is an attempt to explore the ethics of biotechnology in narrative form. Crichton's argument is that if you have unregulated biotech research and scientists working primarily for profit or fame, then all hell will break loose and nature will strike back at you. The book is a masterpiece of writing technique and makes a compelling narrative argument. But it is only one possible narrative argument.
I teach a class in writing stories to explore the ethics of technology, and, in this class, I have students find a flaw in Crichton's argument and provide a narrative alternative. Consider, for a moment, some of the stories that have swayed public thinking in a major way: The Jungle, Uncle Tom's Cabin, Hard Times and Frankenstein, just to name a few. Each of these stories presents one single narrative argument, one possible world. What we really need is to have authors take on more sides of the argument and allow us to see a variety of possible worlds.
In the same way that scientific debates lead to a better understanding of what is true, narrative debates can allow us to achieve a better understanding of what is good or what is desirable. And here we are back to writing again. In the Republic, Plato dismissed the poets to focus on rationality. Perhaps, now, in the 21st century, we will realize that rationality isn't everything and invite the poets back in.
Monday, February 23, 2009
Virtual Worlds and Possible Consequentialism
Since the more traditional character and experience based ethical theories have limited application for virtual worlds, what basis do we have for defining appropriate behavior? One answer is that we can just wait and see what happens. Over time we will develop experience, and over time a community will coalesce that can define appropriate behavior. But there are three problems with this approach. First, during the time when we are developing experience, things will be happening in virtual worlds that may not be to our liking. Second, once standards of virtual world behavior evolve and coalesce, they may be very difficult to change. So we may realize how things ought to be, but be unable to make them that way. Third, since the technology continues to evolve, the requisite experience continues to shift. We may not be able to acquire the requisite experience until the technology stabilizes, and that may take a lot longer than we are willing to wait. Having said all that, there is merit in waiting. In the early days of the web, some people would have liked to restrict free speech and impose severe penalties for copyright infringement. That debate is still going on, and we shouldn't be enforcing standards until we figure out what those standards ought to be. I don't think waiting is a bad idea. I just don't think it is the best idea. What do I think is the best idea?
Several years ago, I wrote a series of papers in computer ethics in which I introduced an ethical theory which I called possible consequentialism. Unlike the more traditional consequentialist theories that set ethical standards based upon the consequences of an act or rule, possible consequentialism considers possible consequences. This seems to be an appropriate basis for making ethical decisions under the conditions of a rapidly evolving technology where the consequences of any given standard may not be known at the time when the standard needs to be developed. That's all well and good, but how do we know the possible consequences? That is what we will turn to next.
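To illustrate the difference in spirit (this is my own rough sketch, not the formulation from those papers), traditional consequentialism judges a standard by the consequences that actually occurred, while possible consequentialism weighs an enumerated set of possible consequences, including ones that have not yet happened. The outcomes, weights, scores, and aggregation rules below are invented for illustration.

```python
# Illustrative sketch only: the outcomes, weights, scores, and aggregation
# rules are assumptions made for this example, not the published theory.

possible_outcomes = {
    # description of a possible consequence: (plausibility weight, value score)
    "griefing drives users away":            (0.4, -30),
    "open creativity attracts builders":     (0.5, +40),
    "large-scale fraud in virtual commerce": (0.1, -80),
}

def score_by_actual_outcome(observed_value: float) -> float:
    """Traditional consequentialism: judge a standard by what actually happened."""
    return observed_value

def score_by_possible_outcomes(outcomes: dict) -> float:
    """One way to weigh possible consequences: a plausibility-weighted sum."""
    return sum(weight * value for weight, value in outcomes.values())

def worst_possible_outcome(outcomes: dict) -> float:
    """A more cautious aggregation: flag the worst plausible consequence."""
    return min(value for _, value in outcomes.values())

print(score_by_actual_outcome(+40))                   # looks fine so far
print(score_by_possible_outcomes(possible_outcomes))  # 0.0 -- much less rosy
print(worst_possible_outcome(possible_outcomes))      # -80 -- worth guarding against
```

However the possible consequences are aggregated, the key move is the same: the evaluation does not wait for the actual consequences of a standard to unfold.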
Monday, February 16, 2009
Will Experience Based Ethics Work in a Virtual World?
I don't think there really is such a thing as experience based ethics. What I have done here is to lump together ethical theories that use past behavior and outcomes to determine appropriate ethical behaviors for the future. The most obvious example of this is consequentialist ethics, where the ethical quality of an act is determined by the consequences of that act. However, I would lump deontological ethics into this group as well. Deontological ethics suggests that we have a basic duty as human beings to behave in certain ways. For example, you should always treat people as ends and never merely as means. This is certainly an important tenet for preserving human dignity. However, I would say, perhaps with a dash of cynicism, that this duty is derived from the fact that it has worked well in the past and thus should work well in the future. So, as I said, I lump it together with experience based ethics. And I do not believe that experience based ethics can serve as a moral basis for virtual worlds. Why not?
The quote "looking to the past to understand the future is like driving by looking in the rear view mirror" is attributed to Marshall McLuhan. There seems to be contention over whether he actually said it, and I'm not even sure I have the quote exactly right. But the sentiment is clear: we cannot look to the past to understand the future. And if that is the case, then experience based ethics is of limited value. So, let us consider the cases in which the past is a good guide and those in which it is not.
Using the driving analogy, one can see that as long as you don't drive too fast and the road ahead is as straight as the road behind, you might get away with driving while looking in the rear view mirror. However, if the car picks up speed or the road becomes winding, driving by looking in the rear view mirror will not work. Applying this to the future: when the future is coming at us rapidly and when the future is much different from the past, looking to the past to understand the future will not work. This, I would argue, is the case with virtual worlds. The technological change is coming at us fast and the future will be very different from the past. So looking to the past, which is what experience based ethics does, will not work. Hence, experience based ethics will not work in a virtual world. Having ruled out both experience based ethics and character based ethics, is there anything that will work in a virtual world? I think there is. And you will have to stay tuned for the answer.
Monday, February 9, 2009
Will Virtue Ethics Work in a Virtual World?
Virtue ethics is a character based ethical theory that claims, simply, that morally developed people cannot do immoral things. At first this sounds a little odd because it seems to give license to immoral behavior. But if you think of it as good people trying their best to do the right thing, it makes a little more sense. After all, what other standard do we have for developing morals? I am a big fan of virtue ethics because it requires moral development rather than rule following, and it helps us figure out what to do in cases where the rules aren't clear or don't exist. We just have good people trying their best to figure out the right thing. In the ethics of technology this is particularly appropriate since technology seems to create new problems for which we don't have rules. In fact, many years ago I presented a paper at a conference suggesting virtue ethics as a basis for computer ethics. Having said that, I am not sure that virtue ethics would be an appropriate basis for ethical decision making in a virtual world. Why is that?
Virtue ethics was developed in Ancient Greece, where people were born into a community and indoctrinated into the values of that community. Moral development was achieved by the citizens of the community with respect to those values. When the educational process was successful (I am certain it was not always), citizens internalized community values. If community values changed or new situations arose, they would be discussed and debated by citizens in an attempt to develop or adjust values as needed. The key elements for this to work are a fairly homogeneous community with a relatively stable set of values and an educational process by which new members are developed morally consistent with that set of stable values. None of these elements hold for virtual worlds.
Virtual worlds are a global phenomenon. Residents from all over the world, from a wide variety of moral and religious perspectives and traditions, interact. Trying to abstract a homogeneous set of values for the residents of virtual worlds would be like trying to establish a global code of ethics.
Even if it were possible to do this, you would still have the problem of moral development. When people come to the virtual world they already have their values in place. You do not get them young enough in a virtual world nor do you have enough control over them to attempt to develop them morally consistent with those values. Finally, since virtual worlds are a new phenomenon, the ethics of virtual worlds are still evolving. We don't really know what constitutes good behavior in virtual worlds. So, we do not have a stable set of values to use for moral development.
The point here is that while virtue ethics has a lot to be said for it, and although it may have worked well in Ancient Greece, it is probably not the best moral basis for virtual worlds. Perhaps some day. But not now.
Monday, February 2, 2009
The Moral Basis for Ethical Decision Making in a Virtual World
So far, in our exploration of the ethics of virtual worlds, we have addressed avatar attachment, anonymity, and regulation. I don't mean to imply that this covers the full set of issues, only that these are the largest issues that occur to me at the moment. The final issue that I promised to address back when I started this thread was the moral basis of ethical decision making in a virtual world. That is to say, which theories of moral behavior provide the best guidance for ethical decisions?
At the risk of having professional ethicists gnashing their teeth, I am going to dismiss descriptive theories such as ethical relativism and egoism. They claim to tell us how things are and are of limited value in determining how things should be. I am going to focus on prescriptive theories that tell us how things should be. I do this because virtual world technology is currently evolving and we can have great influence on how it evolves. Consequently, we should focus on how things should be.
Again, at the risk of offending the pros, I am going to group prescriptive ethical theories into two groups: character based and experience based. This is not too far from standard treatments and provides an economical scheme for the argument I wish to make here. Over the next few posts, I will argue that both character based moral theories and experience based moral theories have limitations that may inhibit their usefulness in providing a moral basis for ethical decision making in a virtual world. Then, I will wrap up this thread with a moral perspective that, I believe, overcomes these limitations. It will take, as Coleridge said, "a willing suspension of disbelief," as we journey into the morality of virtual worlds. But come along with an open mind and I will try to make it worth your while.
Tuesday, January 27, 2009
Zoning? In a Virtual World?
The past two posts have sketched out the cases for and against regulation. On one hand we need regulation in virtual worlds to provide the orderly and predictable environments in which commerce and education can be pursued. On the other hand, regulation seriously inhibits the potential of virtual worlds as a medium of self expression and exploration. Can these two conflicting potentials of virtual worlds both be achieved? Or will one have to give way to the other?
I think they can both be achieved via virtual world segmentation or in more common terms zoning. Zoning can be achieved fairly easily. Each simulator or virtual location should have a set of attributes associated with it indicating its regulations. For example, it may require visitors to be over a certain age and may require user authentication. Another simulator may allow anonymity but require visitors to adhere to role playing rules. Anonymity may require some refinement. For example it is one thing to be anonymous to the land owner and another thing to be anonymous to the other visitors.
It may take some time to arrive at a standard set of attributes for simulators to set, but careful zoning takes careful planning. Further, if the zoning attributes are selected carefully, it will be possible to group simulators into parallel virtual worlds. These parallel worlds might emphasize commerce, education, tourism, art, socializing, self expression, or any other thematic attributes.
In addition, each avatar would have similar attributes. These may include age, anonymity, creditworthiness, and social or moral restrictions. Again, these attributes would have to be carefully thought out. And just as vendors today target certain market segments, virtual world developers would target a specific segment of the visitor population.
This attribution would allow a wide range of opportunities for self expression while preventing someone from inadvertently ending up in an undesirable neighborhood. At the same time, it would allow land owners to restrict undesirable visitors. If this all sounds a little too confining, don't forget that you can always have multiple avatars, some well documented and some anonymous. So, it seems to me, zoning solves the problem of regulation. Hey, maybe we should start doing this in RL as well.
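As a rough sketch of how such zoning attributes might be represented and checked, here is a minimal Python illustration. The attribute names and the matching rule are my assumptions, not part of any existing virtual world platform's API.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: attribute names and the matching rule are
# assumptions, not features of Second Life, OpenSim, or any other platform.

@dataclass
class SimulatorZone:
    name: str
    min_age: int = 0
    requires_authentication: bool = False
    allows_anonymity: bool = True
    themes: set = field(default_factory=set)   # e.g. {"commerce", "education"}

@dataclass
class Avatar:
    name: str
    age: int
    authenticated: bool
    anonymous: bool

def may_enter(avatar: Avatar, zone: SimulatorZone) -> bool:
    """Check an avatar's attributes against a zone's declared regulations."""
    if avatar.age < zone.min_age:
        return False
    if zone.requires_authentication and not avatar.authenticated:
        return False
    if avatar.anonymous and not zone.allows_anonymity:
        return False
    return True

campus = SimulatorZone("Virtual Campus", min_age=18,
                       requires_authentication=True, allows_anonymity=False,
                       themes={"education"})
visitor = Avatar("Wanderer", age=25, authenticated=False, anonymous=True)
print(may_enter(visitor, campus))   # False: unauthenticated and anonymous
```

Grouping simulators that share the same attribute profile would then yield something like the parallel virtual worlds described above.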
Friday, January 23, 2009
The Argument Against Regulation
By far the most impressive feature of Second Life in particular, and of virtual world technology in general, is the capability it provides its users to express their creativity and imagination in a globally accessible public forum. This includes creating new environments and creating new personas. If you can imagine it, you can create it and experience it. See
http://doctorcosmos.blogspot.com/2009/01/creating-your-second-life-part-2.html
for more details on this idea.
So, if we see virtual worlds as a technological extension of our imaginative capabilities, then regulating virtual worlds would be like regulating your imagination. It would be like saying: no, there are certain things that you are not allowed to imagine. This sounds an awful lot like the thought police of 1984. I should mention that I am referring to the book 1984 by George Orwell. In this world of web pages and short memories, readers might think that I am referring to the year 1984, when the world was ruled by dinosaurs and Roman Legions.
But if the negative implications of the thought police are not sufficient justification for limiting regulation in virtual worlds, or if 1984 was so long ago that it has no bearing on what we are discussing today, then consider what we might lose in the future by regulating virtual worlds.
We believe in freedom of expression as one of the basic tenets of our modern world. If anyone were to suggest regulating the content of web pages, there would be an outcry of self-righteous indignation that could not be contained. Even if a web site says offensive things, we believe that free expression benefits society far more than the offensive expression hurts it.
The things people do in a virtual world are no less expression than words on a web page. And while society benefits from a diversity of ideas, both society and individuals benefit from the ability to freely explore their imaginations and creativity. You can try things in a virtual world that you cannot easily try in the real world. And you can try these things in an environment where potential damage is minimal. If we regulate virtual worlds we are saying not only that there are ideas you cannot think, but that there are new ideas you cannot try. This in turn suggests that we already know the answers to all our questions about everything, and that we know the answers to any future questions that might arise. That is silly on the face of it. And, in turn, regulating virtual worlds is silly on the face of it.
But, in order to make this argument more compelling, we should ask: what are some of the things we need to explore in virtual worlds that justify this lack of regulation? And that will be the topic of the next post.
Sunday, January 18, 2009
The Argument for Regulation
In this post and the next I plan to make the arguments for and against regulation of virtual worlds. The argument for regulation is the easiest and the least compelling, so I will address it first. The argument for regulation of virtual worlds is basically the same as the argument for regulation in any sphere. People need a predictable environment in which to conduct their business and social affairs. Without some sort of regulatory structure, short term gains by opportunists will lead to long term losses in the system as a whole. This is not to say that there is anything inherently wrong with short term opportunists. In fact, there is much to be said in their favor. They make the system more efficient. But the line between efficient and predatory is often very finely drawn. And a predatory environment cannot flourish over the long term.
To put this into more pragmatic terms, if I cannot trust an environment in which I am operating, I am unlikely to take risks in that environment. If the environment needs people to take risks in order to flourish, then the people must be able to trust the environment and the environment must meet the expectations of those risk takers.
To put this into even more concrete terms, if I do not trust a virtual world environment I will be unlikely to start a business, sell a product, offer a service, teach a class, hold a meeting, or any of the other activities that the virtual world needs me to do in order to reach its full potential. So, regulation is necessary if a virtual world is going to become a place of commerce, education or socializing.
In the early days of web technologies, people were reluctant to buy products that they could not touch and even more reluctant to give their credit card number to some unknown entity in cyberspace. Companies like Amazon.com offered excellent return policies and promised to protect their customers' private information, such as credit card numbers. This was a form of self regulation and, in the case of the web, it was all the regulation that was necessary.
I don't think that this would be enough regulation for a virtual world. On a web site, you don't have people from some other web sites coming to where you are and harassing you. And if you find yourself on some undesirable website, you can merely close your browser. You don't have to worry about leaving your avatar there until you log in again. So, virtual worlds present difficulties that do not exist on web sites.
I would view a virtual world more like a shopping mall, maybe even a town. Even though the individuals in that mall or town may be behaving within the boundaries placed on them as individuals, a few more restrictions are probably necessary so that everyone can get along and prosper. So virtual worlds will require some level of regulation to thrive, and the question is: how much? Too much regulation has its downsides as well. And we will get to that next.
Sunday, January 11, 2009
Should Virtual Worlds Be Regulated?
Should virtual worlds be regulated? The simple answer is - yes, of course they should be. This is, as they say, a no brainer. Everything in civilized life is regulated. The important questions are how much and what kind of regulation are appropriate. And that is where it gets tricky. What we strive for in ethics is to find a balance between individual empowerment and social harmony. Virtual worlds provide amazing opportunities for individual empowerment and that should be encouraged. However, empowered individuals can be a major threat to social harmony and that needs to be curtailed.
One way to approach the idea of regulation in a virtual world is to ask which of these two desirable goals (individual empowerment and social harmony) is more fundamental. In the real world we usually take social harmony as more fundamental, allowing individuals to find ways to express their individuality as long as it does not seriously impact social harmony. So, should virtual worlds follow the same priority scheme and balance that we find in the real world? Perhaps, yes. Perhaps, no. And despite indications to the contrary, we are making progress.
If we see virtual worlds as an extension of the real world, then our priorities with regard to the real world should carry over into virtual worlds. If we see virtual worlds as a thing apart from the real world, then perhaps it makes sense to change the priorities. Let's consider two examples: one in which virtual worlds are an extension of the real world and one in which they are a thing apart. Perhaps we can then extrapolate from those two cases and make some progress on this issue. And we will do that. But that will begin with the next entry.