Sunday, October 31, 2010

Multitasking mediocrity...

Yes, the many, many media streams do indeed keep us in a constant state of imagined crisis. Fortunately it hasn't really affected me too much, probably because I was raised in the "deep attention" generation (I don't feel the need to answer every phone call, text message, or email as soon as it arrives). Twitter seems to be calling my name a little more often though, so perhaps there is hope.

A couple of things I found interesting in both Hayles' reading and Stone's presentation were the cognitive shifts that are taking place. The human mind is a very complex machine, and its ability to adapt to different environmental conditions is quite remarkable. Now I understand better why some people think differently than I do.

Growing up, I remember my occupations were watching television (one of only three channels), listening to music, practicing piano, talking on the phone or playing with friends, and doing homework. Each of these activities was attended to on its own terms. The word "multitasking" had yet to be coined (at least to my knowledge) and distractions were rare. I sat in solitude while I did my homework or practiced my piano, deeply attending to the completion of the activity at hand. It wasn't until years later that the concept of "multitasking" became the buzzword that it is today.

My definition of "multitasking" is being able to work on multiple tasks simultaneously while also giving each task the concentration it would require if it were the only task being attended to. It's not merely a matter of doing two or more things at one time. Multitasking requires a total and complete shift in mindset in order to be successful.

In other words, if someone claims they are multitasking, each completed task must be of the same quality as if the tasks had been attended to in succession rather than simultaneously. For example, a fifteen-year-old is hired to keep the score of a basketball game. This activity in itself takes a good amount of undivided attention. However, while s/he is supposed to be keeping score, s/he is also texting with friends. When confronted, the teen says, "I can multitask", even though the purpose of the confrontation was the frequent inaccuracy of the score. In this example, multitasking clearly does not work because the quality of one of the tasks has been compromised.

This is where I become cynical when discussing this trend toward a culture of multitasking. My cynicism lies in the question of what details could be lost due to a lack of deep attention. Will "Generation M" understand that certain things require more attention than others, while still other matters require complete attention?

The thing I find most interesting in Hayles' article is when the topic of AD/HD enters into the conversation, drawing links between "hyper" attention and that cognitive disorder. I understand the commonalities; however, one is a biological disorder while the other appears to be a cultural malfunction. We do not have control over the amount of media available to us. We do, however, have control over how much of that media we allow our children to consume. If an otherwise non-AD/HD child has to take Ritalin in order to perform well in school, there is something very wrong.

Good parenting requires regulating a child's consumption of anything that could be detrimental to that child's welfare, including over-consumption of media. Is it necessary to give children mobile phones with texting or internet capabilities? The current issues of "sexting" and "cyberbullying" could be limited if these options were not made available to kids under the age of 18. Moreover, a regulation such as this would put the purpose of the mobile phone into perspective for this age group.

Taking this idea a step further, why are social networking sites made available to kids under the age of 18? Furthermore, why do parents allow their children to participate on these sites at home? I believe the responsibility begins and ends with the parents. There are reasons for age restrictions, all having to do with the ability to handle the responsibilities that go along with the privilege.

A cultural shift is obviously taking place and I do find it intriguing. To imply, as Hayles does in her article, that "Children growing up in media-rich environments literally have brains wired differently than humans who did not come to maturity in such conditions" is simply fascinating. Additionally, as Hayles mentions, it is a condition that educators will have to come to terms with in order to be effective in their roles. A compromise between the two polarities of "hyper" attention and "deep" attention would indeed be an ideal scenario, and if educators could establish and maintain such a condition I think real progress could be made.

Therefore, I think what lies beneath these issues is the need to teach our children how to manage a media-rich environment. A system might look like this: kindergarten through 12th grade - limited forms of media are used for educational purposes only; undergraduate settings - previously limited forms of media are embraced in order to teach students how to use them both responsibly and professionally. A system like this would in no way discount the relevance of technology; it would simply impress the importance of using different media tools in the right place and at the right time.

In reality, technology is nothing more than a set of tools: a powerful set of tools which, through continued progress, will provide great possibilities not only for our generation but for all generations to come. Technology is also changing our culture, creating opportunities and efficiencies our forefathers could not have imagined. But like every other set of tools, the proper uses of technology must also be taught. If we don't start this teaching at a young age by incrementally focusing on the situational pros and cons of each type of attention, progress could give way to an over-stimulated culture of self-indulged mediocrity. A mediocrity hidden behind the more politically correct guise of multitasking.

Tuesday, October 26, 2010

...but is it probable?


Collective Intelligence left me feeling like a nebulous being floating within a larger cultural specimen being dissected and analyzed by those who fancy such studies. In this particular instance, the bunny men are microscopically drawing diagrams, relationships, and interactions (or lack thereof), and speculating on outcomes of the fourth cultural space as it relates to its predecessors. Levy speaks of this fourth, or knowledge, space as perhaps a terminating space from which those who follow its final formulation could enjoy the solutions to the thousands of years of strife that came before. For the first time in my life, I feel like a part of history instead of a part of making history. A hard pill to swallow, especially given the angst of my recent birthday.

I do, however, try to imagine how this whole idea of collective intelligence might play out. In Levy's Utopian scenario it plays out perfectly (of course): democracy by the people, for the people. After all, that's what our founding fathers had in mind, right? It was only when populations grew beyond the scale at which every voice could realistically be heard that we had to switch to government by representation. Now that we once again have the possibility of true democracy in the sights of our imagination, will we be able to return to a democracy truly by the people? A better question: will the powers that be allow us to return to such a system?

If collective intelligence is what our founding fathers intended, how will we achieve such collaboration in favor of the good of the people? Has the current system divided us beyond repair, or will a system of demodynamics help us to overcome our differences? Given the opportunity, I think the latter would prevail. The majority of people care about equal rights for all. The majority of people are tired of corporations shaping policy. The majority of people want the same opportunities regardless of race, religion, gender, handicap, and so on, and the majority of people want to see these rights acknowledged and enforced.

Will collective intelligence in action affect the status quo? Indeed it will. Potentially greater than anything we have seen before. It will certainly disengage our state representatives as conduits to "what's best for the people". Their services will no longer be needed or desired. That is not to say that a core assembly of administrative officials should be displaced. Quite the opposite is true. Even a democracy based on collective intelligence needs a system of checks and balances.

My question is how this Utopian scenario might be put into practice. Is it possible? Is it feasible? It is certainly imaginable. What would the infrastructure look like and who would build it? How would the input and output be managed or filtered? How would we come to a consensus, and how would our consensus effect change? Case in point: the war we are currently engaged in. If we as a collective "vote" against the war, how would that affect the overall state of affairs? Could we simply disengage? As Levy mentioned, these types of decisions are slow moving. Would this cause frustration?

On a larger scale, how would our relationships with other countries be handled? Foreign affairs often (if not always) affect us directly. How would we come to a collective consensus on issues like trade relations, immigration, terrorism? How would we go about correcting the mistakes our predecessors made, both here and abroad, if that's even possible?

I can imagine Levy's land of Utopia. But it is just that: Utopia. If anything even remotely resembling a transition of this magnitude were to take place, it would (obviously) take years to transpire. Years that most of us probably don't have.

Friday, October 22, 2010

The real issues at hand....

In reading the blogs from a few of my classmates regarding this week's discussion topic, I found the polarity of opinions very interesting. One blogger felt that it is OK to download music illegally and justified it by placing it under the umbrella of "file sharing". Another blogger, while fully sensitive to the implications of illegal downloading, was somewhat confused by the two terms, "illegal downloading" vs. "file sharing".

In my mind, the two terms are different: illegal downloading is acquiring something for free that otherwise should have been paid for; file sharing is simply sharing information online. I don't think there is any harm in sharing files if those files were acquired legally. I think the question that does fit under the same umbrella is whether the files you are sharing were intended, by the author, to be shared at no cost.

Something else I found interesting in a couple of the blogs I read was an attitude of "...this is what my generation does, and if you don't like it then you need to change your system". An artist makes h/er living using h/er skills and expertise, just as we make our livings using ours. Would you consider it fair if your boss decided he didn't want to pay you? The fact of the matter is that it would be illegal, and furthermore you would not be able to make a living. You would pursue the appropriate legal channels to ensure payment for your services.

A third and much larger issue, one that perhaps confused the discussion even further, is the question of what should be protected and to what extent. This is where the waters start to get a little murky on some levels. For example, if you use a quote from a book that is protected under copyright law, it is perfectly OK as long as you put it in quotation marks and give attribution to the author. Yet the debate rages on over whether it's OK to sample beats from various artists' songs to create a new song that you call your own. In the former you are paying homage to the author by using h/er words in your own literary piece. In the latter, however, it is considered copyright infringement, or stealing. If attribution were given, would that make a difference? Or is it simply a matter of how the two institutions approach copyright? Or is it something different altogether? Perhaps this is where the greed of corporate America comes into play, which is the larger debate needing to be explored.

I wanted to identify these issues separately in an attempt to simplify the various areas of debate. The first two issues I mentioned are pretty much straightforward, at least in my mind. The third issue, I believe, is where the conversation needs to be focused. One case in point is Hollywood's global monopoly on entertainment. A second is that of pharmaceutical companies who would rather gain financially than collaborate with the larger scientific community to perhaps discover cures for diseases for the public good. As Kim mentioned in class, no research is currently being done to find a cure for AIDS because there is more money to be made by not finding a cure. These are the types of issues I feel need to be addressed. These are the corporate giants that are hiding behind patents and copyright laws for their own financial gain, rather than caring about what's best for the people. The same dynamics are at work in the insurance industry, where it is common practice to find legal ways to deny coverage or to drop you altogether.

These are the areas we need to focus on. These are the areas that need change. How to go about making these types of changes is a difficult question because of all the powerful players involved. But it is certainly a stepping-off point for discussion.

Wednesday, October 20, 2010

Who are they protecting anyway...?

After reading Information Feudalism, the current political climate came into perfect focus. What is it about humankind that leads us to take laws that are clearly intended for the welfare of the public and skew them in favor of our own private interests? Furthermore, what is it about American politics and corporations that makes them feel they have the right to dictate to the world the laws of copyright, intellectual property, and patents? Is this an extreme case of ethnocentrism or just greed? Or perhaps both?

From the very start of this debate, American interests were arrogantly put ahead of the rest of the world. For example, America took issue when foreign countries translated and published American authors' work, calling it 'piracy'. However, the same America would not allow foreign authors to be protected in the US unless their work was published here at the same time it was published in the "country of origin". "The London Times saw this as an attempt to make New York the centre of world publishing."

In the same vein, Hollywood began its global monopoly of the entertainment industry as early as the 1920s. The political leaders of the time, and since, have been in full support of this complete domination, all in the name of money. As the authors recall, "trade follows the film". Hollywood has been given so much protection based on intellectual property that it dictates the level of competition allowed in foreign countries, completely independent of the US government.

Drahos:
 "The end game for Hollywood is no restriction on its capacity to reach any type of screen in the world at any time and place".

"...the output of the US film and TV industry serves to dilute national cultures".

In view of this, I ask again: how can one country claim this kind of global control? Hollywood is an independent, international cartel endorsed 100% by the same people who create the laws concerning intellectual property, copyright, and patents.

Drahos suggests that his use of the word "feudalism" in the title of this book may be too harsh. However, I'm not sure I agree. It is very clear who the controlling power players are in this scenario: the conglomerates who own the rights. These conglomerates, or monopolies, are way beyond acting in the interest of the welfare of the public. Using Hollywood as the most extreme case, they have no competition; therefore they can charge what they want, which goes against everything we know about laws concerning monopolies. And like the pharmaceutical companies, they are allowed to copyright most everything they do.

The question now is: when will these corporate giants, monopolies, conglomerates, etc., sink their daggers into the free flow of information that travels across the internet? Drahos has already made the point of the importance of knowledge. With that, I digress for a moment. Again, this is not a new concept, question, or speculation, but if research scientists were allowed to share their research and information with each other instead of being forced to tuck it away behind patents, imagine how much further along the world would be with respect to cures for any number of diseases.

So, how long will the free flow of information remain free? The US has already forced its intellectual property laws on Sweden, a country that doesn't have any such laws and, by international agreement, has no obligation to adopt them. However, Sweden obliged. Will the regulation come only in the form of Hollywood protection, or will it come in other forms? This story has yet to be written, but somehow I fear the outcome. The US powers that be are very good at disguising their greed by wearing masks proclaiming concern for the good of the people. My only question now is...which people?

Tuesday, October 12, 2010

The "Unhuman" Network...

Perhaps it is best to begin at the end of this week's reading, for, like biology, an object is sometimes better understood if the parts that make up that object are understood. Understanding networks is indeed one of those instances, as Galloway and Thacker succinctly demonstrate in The Exploit. However, the authors also leave the reader with a question, perhaps better denoted as a proposition; one, nonetheless, that fits neatly into a more in-depth discourse on network theory, if further dissection is needed or desired.

Do "unhuman" objects exist within networks? This is the question at hand. Drawing no conclusions on this topic, Galloway and Thacker begin, and end, their discussion with bits and atoms. However, earlier in the text they discuss the "physical layer" of network protocols. Is this not "unhuman"? If material conduits are required to allow communication between nodes on a computer network, how can this "unhuman" element be questioned? Is a definition of "unhuman" needed at this point to continue? Even by it's most basic definition, the "unhuman", or, that which is not human, provides the architectural or physical structure for networks. Even biological networks operate within and around elements that are "unhuman". As noted in the reading, emerging infectious diseases travel through hotels and airports, as well as from "unhuman" host to human host.

In this line of thought, perhaps a different question should be considered:  Can networks survive without the "unhuman" element? Can Osama bin Laden "swarm" at will, causing terror on a global level without the objects of airplanes, explosive devices, and other objects of mass murder? Can online social networks operate without the structural underpinnings of a physical computing device and connectivity? If the objects above are defined as "unhuman" and deemed necessary in order for each respective network to operate, the question that Galloway and Thacker leave the reader with quickly transposes itself into more than a mere proposition. Indeed it must be considered within the same context.

That is not to say that the human element is in any way less important. Quite the contrary. Networks require the interaction of both human and "unhuman" elements, both symmetrically and asymmetrically, in order to be "flexible" and "robust". The "unhuman" element provides the infrastructure for the network, while the human element supplies the network's motivation, aliveness, and interactivity.

Even after considering any and all networks, from any point in history, this theory seems to hold true. Take, for example, the Pony Express. This cross-country mail service (or network) could not have operated without the "unhuman" object of the horse working in tandem with the human object of the rider, not to mention the object of his journey: the "unhuman" element of the letter and the piece of paper it was written on. Consider the modern postal service. It no longer uses horses to deliver mail; however, the vehicles it does use are definitely "that which is not human".

The networks of today that Galloway and Thacker discuss are, as mentioned above, no different; a network is a network is a network. Of course that is an over-simplification meant only for this posting. Networks vary in many ways - in size, in content, even in structure (i.e., centralized, decentralized, and distributed). But as long as they meet the four conditions for being a network, they are all recognized and operate as networks: human and "unhuman" elements working in unison to form actively robust webs of ever-changing nodes and edges.
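Just to make that structural distinction concrete for myself, here is a rough sketch in Python of how those three topologies might be written down as plain nodes and edges. The node names and the little degree-counting helper are entirely my own invention, not anything from Galloway and Thacker; it is only meant to show how a "center" does or does not appear in the shape of the connections.

# A purely illustrative sketch: three topologies as adjacency lists
# (node -> set of neighboring nodes). Node names are made up.

centralized = {                      # one hub every other node depends on
    "hub": {"a", "b", "c", "d"},
    "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"},
}

decentralized = {                    # a few regional hubs linked together
    "hub1": {"a", "b", "hub2"},
    "hub2": {"c", "d", "hub1"},
    "a": {"hub1"}, "b": {"hub1"}, "c": {"hub2"}, "d": {"hub2"},
}

distributed = {                      # no privileged node; a rough mesh
    "a": {"b", "c"}, "b": {"a", "d"},
    "c": {"a", "d"}, "d": {"b", "c"},
}

def degree_spread(graph):
    # Highest and lowest number of edges per node: a crude measure of
    # how much the whole network hinges on a single center.
    degrees = [len(neighbors) for neighbors in graph.values()]
    return max(degrees), min(degrees)

for name, graph in (("centralized", centralized),
                    ("decentralized", decentralized),
                    ("distributed", distributed)):
    print(name, degree_spread(graph))

# Prints (4, 1), (3, 1), and (2, 2) respectively: the flatter the
# spread, the less the network depends on any one node.

The sketch says nothing about the human element, of course; it only captures the "unhuman" scaffolding of nodes and edges that the human element then animates.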

Wednesday, October 6, 2010

The materiality of media....

Materiality...a new concept in our discussions thus far. Hayles' reading on "Material Metaphors, Technotexts, and Media-Specific Analysis" reminded me a lot of The Medium is the Message. However, Hayles brings to the forefront the idea of materiality and the effect its presence has on the meaning of its content.

While focusing on the physical environment in which the content is embodied, Hayles suggests that the meaning of the content is derived specifically from within that environment. She continues to say that when that same content is recreated in a different media environment, the meaning of that content changes as well.

The key differentiating factor, Hayles implies, is the physical interaction the viewer has with the medium in which the content is embodied. For example, historically the act of reading a printed book has particular physical, psychological, and emotional associations that have become natural to us over the centuries. We use the activity of reading a book as a private time when we cuddle up with a blanket and a cup of hot tea and escape into a world far away from our own. Traversing the traditional book, as well as interacting with the texture of its pages, its cover, and its size, also holds sentiment. Sometimes we even hug the book in a gesture of intimacy or hold it up in delight once we have completed the reading. In all instances, it serves as a friend with which we interact.

Now if we take the content of that same book and recreate it in a new form of media, how will the viewer's interaction change? Obviously it changes everything. New bonds must be forged physically, psychologically, and even emotionally. (Assuming of course that the viewer yearns for these bonds.) Even the difference between reading a hardback book and a softcover book affects the meaning of the activity. The hardback book is treated with more respect and is held differently, if only by virtue of its physicality. It is more common to see written notes or highlighted sections in softcover books than in their hardcover counterparts simply because we interact with the two differently.

The advent of various new technologies has also changed the physical activity of interacting with content. The same text is consumed differently in each of the variety of media forms in which it is offered, bringing new understandings through each experience. The book above would be physically interacted with completely differently if its material form were digitized to be read on a computer screen. If its form were embodied in an audiobook on tape or CD, the interaction would be different still.

I think this concept of materiality exists subconsciously for most people. The conversation of reading the book versus seeing the movie always brings with it comments on preference for one or the other. The matter of interpretation and execution in book-to-film adaptations is always at the forefront of these conversations, but Hayles' point still stands. Recreating the same text in the form of cybertext brings about similar conversations. The presence of animation, hypertext, imagery, etc., alters the meaning of the text in its own unique way, and is also often followed by debate.

Whether the new material forms in which literary texts are remediated affect the original text's integrity is always debatable. The point, however, is that these remediations offer new interpretations on every level of human interaction. Beyond the physical activity of consuming a literary text, from holding a printed book to adjusting a computer monitor to achieve a more convenient viewing position, our psychological and emotional reactions are also modified. Will we ever form the same type of bonds that have evolved through the historical materiality of holding and reading a printed book? Will we ever hug a computer screen the same way we hug a printed book? Or will our emotional attachments fade away in reaction to the cold, impersonal touch of computers? I don't think so. Soon the same intimacy we've held so dear in our human interactions with printed books will evolve into a new type of intimacy for the interaction with our computers. One thing holds true, though: we will come to experience our dear old friend the book in new and different ways as we comfortably adjust to our intimate new cyber environment.