Tuesday, August 7, 2012

Hip Hop, Country, Racism, and the Question of Musical Purity

Country Rapper Colt Ford Generates Controversy, Draws Criticism but Manages to Rise above the Fray

More than three decades after it happened, it’s easy to forget that hip hop was born at a disco concert. The birth of rap as a popular phenomenon can be traced back to the fateful night that Fab Five Freddy jumped up on stage during a disco show and busted out what at the time was a nearly unheard-of musical innovation. The result, “Rapper’s Delight,” was not actually the first recorded single to contain rapping, but it has since become iconic, and no music aficionado can possibly forget that catchy disco beat along with the words that literally defined a genre: “I said a hip hop the hippie the hippie to the hip hip hop ah you don’t stop.” A revolution was born. Described by the Sugarhill Gang’s studio owner Sylvia Robinson as “kids talking really fast over the beat,” hip hop exploded and quickly became one of the most influential movements in twentieth-century popular music.

Yes, it can sometimes be hard to believe that the gangsta rap of N.W.A., Eazy-E, Ice-T, and Tupac is so closely related on the musical family tree to disco, but there it is. The bass line of the hit song “Rapper’s Delight” was lifted straight out of Chic’s single “Good Times,” and when the band threatened to sue, dual credit was given for the track. Hence the hip hop technique of “sampling” from other artists is not merely a common practice that has taken root over time – it is an integral part of what hip hop is. The practice of talking rhythmically in a song is incredibly versatile and it works with a seemingly infinite range of sounds – from the electronica of Mantronix, to the hardcore of Public Enemy, to the rock-infused rap of the Beastie Boys, to the gangsta rap of Eazy-E, to the lighthearted pop of DJ Jazzy Jeff and The Fresh Prince. Hip hop exploded in a hundred different directions in its first decade of life, borrowing liberally from every other genre of music – from disco to jazz to soul to blues to pop to rock to classical – and it continues to do so to this day.

Consider the case of pioneering gangsta rapper Ice-T, who used the spoken introduction of his rap metal album Body Count (the album containing the controversial song “Cop Killer”) to respond to charges that he had “sold out” by making a rock record. He reasoned that although rock ’n’ roll was by that time (the early 90s) primarily associated with white culture, the genre originated with black artists like Chuck Berry and Little Richard: “As far as I’m concerned, music is music. I don’t look at it as rock, R&B, or all that kind of stuff. I just look at it as music. [...] I do what I like and I happen to like rock ’n’ roll, and I feel sorry for anybody who only listens to one form of music.” He makes an excellent point, but one that has somehow been lost on the many individuals who see the blending of genres as an affront to the “purity” of their favorite musical style.

Indeed, there are those who consider hip hop and country music to be diametrically opposed on the scale of musical genres, and some argue that they should never be mixed together lest they merge into one huge “mono-genre” devoid of individuality and regional differences. Thus, the interesting musical niche called country rap or “hick hop” has become an important symbolic rallying point in the twenty-first century culture wars. It is apparently not a problem when hip hop artists like the Beastie Boys, Beck, and more recently Everlast, Yelawolf, Bubba Sparxxx, and the up-and-coming gritty southern rapper Mikel Knight bring elements of country music into their tracks. After all, most country music purists don’t give a damn one way or the other what hip hop artists are up to. But what makes the events of the last few years quite a bit different is that rather than hip hop artists borrowing samples and sounds from country music, a new crop of artists has emerged who are genuinely country yet choose to rap their lyrics rather than sing them.

The most prominent example so far of this emerging phenomenon came at last year’s CMA Awards, when Jason Aldean’s song “Dirt Road Anthem” was nominated for song of the year. Co-written by country rapper Colt Ford, the track is a traditional country anthem overlaid with two verses of understated and twang-fully delivered rap that became a number one single on the country charts and spent several months in the top 5. The apoplectic reaction of country music purists was perhaps predictable, though the real tragedy of the CMA Awards came when the nerve-gratingly dreadful tune “If I Die Young” by The Band Perry somehow beat out Taylor Swift’s “Mean.” But even the most ardent supporters of country music purity had to admit that both tracks should have taken a back seat to “Dirt Road Anthem,” which was both the most popular and the most influential country song of the year.
 
Think about it. The reason Aldean’s hit song was overlooked by CMA voters is pretty clear. By rapping on an otherwise pure country track, he had crossed the line. He even went so far as to perform a remix of the song on stage with rapper Ludacris at the CMT Music Awards earlier in the year. This was not some hip hop group sampling country riffs to rap over. This was a bona fide country musician choosing to rap part of his song rather than sing it, and it wouldn’t have mattered if “Dirt Road Anthem” had been the greatest song of the decade – it never had a chance at the CMAs. Country music has drawn a line in the sand.

Enter Colt Ford. Perhaps the most zealous line-crosser of them all, this 300-pound self-described redneck from Athens, Georgia is all country. But when he steps up to the mic, he raps. “At the end of the day I’m a country artist 100 percent, through and through, but that don’t mean you can’t like other music,” he says in response to critics who claim that his rapping is not fit for country music. Ford lists Run DMC among his primary influences, and he even recorded the song “Ride On, Ride Out” with DMC for his second album Chicken and Biscuits. But most of his collaborative projects are with country music stars, not hip hop artists, which has provoked the creation of a “Colt Ford Collaboration Blacklist” on the blog Saving Country Music, calling on “Real Country” fans to boycott the (numerous) artists who have supposedly degraded themselves by collaborating with Colt Ford.

DMC and Colt Ford

A former professional golfer who spent a year on the national golf tour and several years as a golf instructor, Jason Farris Brown reinvented himself in 2008 as Colt Ford, possibly the most profoundly All-American moniker ever conceived (full credit goes to his wife, who suggested the name back in 2006). He took the country world by stealth, each of his first three albums slightly more successful than the last; his 2011 effort Every Chance I Get peaked at number 3 on both the rap and country Billboard charts. Despite his success, or perhaps because that success has placed him at the forefront of a rapidly growing group of hick hop artists like the Moonshine Bandits, Cowboy Troy, and Rehab, Colt Ford has taken his share of vitriol from those who don’t approve of his genre-blending style. There were early calls to boycott his shows, and when that didn’t work, plans to show up at his concerts and heckle him between songs. At the same time the country music blogosphere lit up with criticism of his work, lame jokes about his weight, and cries that he is “destroying” country music.

He directly addressed the world-colliding nature of his “hip hop meets country” style on his second album Chicken and Biscuits in the song “Hip Hop in a Honky Tonk” which references country music legend Hank Williams in the highly ironic lines “Now what do you think old Hank would say?  It would kill him if he still were alive today. I’d bet money that he’s rollin’ over in his grave, ‘cause Hank sure as hell didn’t do it that way.”  But for the most part he works hard to stay out of the controversy and just focuses on making music.  “I didn’t set out to create a new genre; I wasn’t trying to do that.  I just wanted to make cool songs, and the way I could make cool songs was this way,” he said in an interview on Fox News alongside rap legend DMC when talking about their collaborative effort.  Listening to Colt Ford talk about his music you get the feeling that its somewhat gimmicky aspects, which seem a bit contrived to a lot of outside observers, are merely incidental and that he really is just doing what he feels like doing, not consciously trying to create music designed to fill a particular niche.  


With his latest album, Declaration of Independence, out today (August 7, 2012) and perhaps positioned to surpass his earlier albums’ success, the controversy surrounding Ford’s music will likely increase. His YouTube videos have become prime targets for racist diatribes (from both haters and lovers of his music), and some pretty inane and venomous comments have sparked bitter arguments beneath the videos of songs like “Waste Some Time,” a feel-good party track whose video features the rappers Nappy Roots hanging out, partying, and rapping with Colt Ford and a bunch of white Georgia country folk, an image that for whatever reason provokes appallingly racist reactions from certain demographics. Another Colt Ford video, “This Is Our Song,” was posted by a fan with a thumbnail of the Confederate flag, provoking such rancorous racism-related quarreling that the uploader had to disable comments for the video.



To his credit, Colt Ford doesn’t sink into the muck and lower himself to address this kind of nonsense; he just keeps on trucking, making good tunes and slowly breaking into the mainstream country music scene without compromising his tastes or his standards. The cover of his latest album, Declaration of Independence, proudly displays the Stars and Stripes, not the Stars and Bars, and the opening single “Answer to No One” is a tribute to patriotism, hard work, guns, God, honesty, freedom, low taxes, family, and other southern virtues, while the decidedly more negative stereotypes of southern rednecks are conspicuously absent.

This truly is a country album to the core, with the poignant single “Back” – a collaboration with Jake Owen – serving as the centerpiece of a heartfelt tribute to country life and southern ideals. The video for the song features his real-life mom and dad as well as his hometown in Georgia, and it includes Ford standing at the actual gravesite of his childhood best friend. This music is not hip hop with a country influence. It is genuine, heartfelt country music with just one notable exception – the lead vocalist prefers to rap.



In fact the only song on Declaration of Independence that sounds like anything other than a pure country tune is “DWI” (Dancing While Intoxicated), a high-energy dance track that features LoCash Cowboys and Redneck Social Club and incorporates synth percussion and Auto-Tuned vocals. Other than that, every song on the record would be an easy fit in any country music catalog if Ford simply sang the lyrics rather than rapping them. Of course, if he were a singer, it’s doubtful he’d be half as successful as he is – he freely admits that the reason he took to rapping (besides the fact that he likes it) is that he never could sing very well, a fact that his haters love to throw back in his face. But Colt Ford sums up his response to such criticism in the Declaration of Independence track “Rather Be Lucky” when he declares, “I’d rather be lucky than good.”

Colt Ford is country. But he’s also squarely in the tradition of the Sugarhill Gang and their “talking really fast over the beat” style that upended the music world more than thirty years ago. Encouragingly, he also seems to be living in accordance with the ideals expounded in “Rapper’s Delight,” which conveyed a message of racial unity that is too often forgotten even in these relatively progressive times: “Ya see, I am Wonder Mike and I’d like to say hello, to the black, to the white, the red, and the brown, to the purple and yellow.” You see, there’s no such thing as “black music” or “white music.” There are just artists, songs, and listeners, and there should be no limit to the possibilities.

Sunday, July 17, 2011

School’s Out for . . . Never?


The Obama administration and Secretary of Education Arne Duncan have not backed off from their advocacy of killing one of the sacred cows of American life – the mythical and magical time we call summer vacation. Taking on such a controversial issue, particularly in a country where school calendars are dictated by state legislatures and local school boards rather than the federal government, has always seemed like a strange battle to pick, but there’s more to the argument than meets the eye.

Obama and Duncan are fighting an uphill battle to be sure. Duncan’s defensiveness on the issue was clear when he told a group of Denver middle school students “Go ahead and boo me. You're competing for jobs with kids from India and China. I think schools should be open six, seven days a week; eleven, twelve months a year.” The students didn’t actually boo him; in fact according to news reports the remark didn’t seem to faze the bored group of youngsters who were stuck sitting through the long, dull assembly. But perhaps if they had actually been listening they would have booed.

Their parents apparently would have – according to a recent Rasmussen poll, more than 68% of American adults are opposed to year-round schooling, and only 25% say they are in favor of the year-round calendar, which despite such strong opposition has already been implemented in more than 3,000 schools nationwide. Somewhat surprising, however, is the fact that 60 to 90 percent of teachers in year-round school districts say they prefer the year-round calendar, which is also used in most of Western Europe and Japan.

Think about it. There are so many emotions tied up in most of our minds when it comes to summer vacation that it’s almost impossible to be objective about the issue – the argument that kids need time to be kids is a compelling one, and I agree with it wholeheartedly. I fondly remember long summer afternoons spent wading in the creek behind our house hunting for crawdads, long lazy mornings spent reading fantasy paperbacks in the overstuffed love seat in our living room, long introspective days spent riding my bike aimlessly down the Fort Collins bike trail, and long hot July afternoons spent running through sprinklers, sliding on slip ’n’ slides, and chasing down the ice cream man with a crushed wad of sweaty dollar bills clutched in my hand. I treasure those memories, and the thought that my own kids might miss out on those opportunities is enough to make me want to kick the whole idea to the curb, just like most Americans apparently want to. But is that really what we’re talking about here? For that matter, what exactly are we talking about?

Switching to a year-round calendar and extending the length and number of school days, as Duncan wants to do, are actually two completely separate issues – so let me take them on one at a time. First of all, switching to a year-round calendar, at least as it has been implemented so far in the United States, doesn’t actually increase the number of school days that students are required to attend – instead it spreads vacations out throughout the year. The model most year-round schools in the US follow is the “balanced calendar,” which divides the year into four nine-week terms with three weeks off between them and an extra-long six-week summer vacation. This is similar to the calendar that has been in use in the U.K. for decades.

One of the biggest benefits of the year-round calendar is that it dramatically reduces summer learning loss – the academic ground that students lose each year after ten weeks away from the classroom. The problem is worst among lower socio-economic-status students, who lose, on average, a full month of achievement in reading, writing, and math. Middle-class students also lose ground in writing and math, although interestingly they actually gain ground, on average, in reading achievement. The reason is that middle-class homes tend to be literacy-rich environments, and the opportunity to do some pleasure reading over the summer gives those students a big boost. For most of today’s students, however, summer is far from the odyssey of reading and exploration that I remember. For many kids summer ends up being one long video game and television binge, leaving teachers to spend the entire month of September simply undoing the damage caused by the extended break from learning.

But this is a completely different issue from Duncan’s argument for adding days to the school calendar and lengthening the school day. The actual number of required school days per year is set by state legislatures, and so far (thank goodness) no state law anywhere in the US requires more than 180 days of school per year. Some states require as few as 175. This, as Duncan is fond of pointing out, is quite a bit lower than the averages in most other nations (the global average is 200 days per year). But unfortunately he always stops there. The more important question, if we want to talk about adding time to the school year, is how the US compares to other nations in actual hours of classroom time per year. After all, the length of the school day varies widely from country to country, so simply comparing the total number of days is an apples-to-oranges comparison.

When you look at this more telling figure, the picture becomes quite different. Students in the United States attend school for a relatively long time each day, and as a result the average American kid is in school 1,100 hours per year – significantly more than students in Western Europe, Canada, Mexico, Korea, Japan, and Singapore, who between them average only 701 hours per year – and that average includes hyper-vigilant Japan. But Japan does not actually win the international test-score game, thanks to its burned-out, overworked, and chronically depressed student body. That title goes to Finland, whose students are consistently at the very top in reading, writing, and math achievement. Yet Finnish students attend school only 600 hours per year – little more than half the time that their American counterparts spend sitting in classrooms (US kids, by the way, currently rank in the mid-teens in all three of those categories).
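The back-of-the-envelope arithmetic is simple enough to script, using only the rounded figures quoted above (a rough sketch, not official education statistics):

```python
# Rough comparison using the rounded hours-per-year figures quoted in this post,
# not official education statistics.
us_hours = 1100        # average American student
peer_avg_hours = 701   # average across the other nations listed above
finland_hours = 600    # Finland, the top scorer

print(f"US students log {us_hours / peer_avg_hours:.1f}x the peer-average classroom time")
print(f"Finnish students log only {finland_hours / us_hours:.0%} of American classroom hours")
```

Run it and you get roughly 1.6 times the peer average, and Finland at about 55 percent of the American total – more hours, in other words, does not appear to buy better results.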

Switching to year round school calendars is a good idea. It would spread out instructional time, reduce teacher and student burnout, give families more options for planning vacations, give districts and schools more flexibility in scheduling extra-curricular activities, reduce summer learning loss, and generally benefit education.

I’m not sold on the idea that simply forcing students to spend more hours sitting in rows of uncomfortable desks in overcrowded classrooms is going to improve achievement, however. The key to an excellent education is not the quantity of time spent sitting in the classroom. It’s the quality of that time that makes the real difference.

Sunday, May 1, 2011

Celebrating Aristocracy: Why Americans are Both Repulsed and Fascinated by the Royal Spectacle


The Royal Wedding is finally receding into the rear-view mirror, and the only minor flaw in a perfect fairytale story was a frowning flower girl. But as the pomp and circumstance fades into memory, many of us are asking: why? Why was the wedding of two rich Brits such a captivating story for so many people around the world, particularly Americans, who rejected the whole idea of monarchy more than two centuries ago? I think there’s a fundamental issue here that lies at the root of both Americans’ fascination and their frustration with the whole spectacle.


Although it’s tempting to think of the wedding as an important political event, at heart this was simply a celebration of aristocracy – an event whose pomp and circumstance was based on the fact that the man who kissed that beautiful bride on that storied balcony is Prince William, second in line to the throne of England. Look at the royal guest list – it’s not populated with heads of state, but with royalty from every corner of the world. It blows people’s minds that Middle Eastern dictators, northern European monarchs, even the royal family of Japan (they declined the invitation due to the tsunami) were invited while Barack and Michelle Obama were not. But really, it makes perfect sense – if you keep in mind the fundamental premise behind the whole thing, which is rooted in the concept of aristocracy.


Think about it. The Obamas weren’t the only notable snubs – neither Tony Blair nor Gordon Brown, both former Prime Ministers of the United Kingdom, received an invitation, because this was not an official state event. William is not King of England, nor is he even the heir apparent – that title belongs to his father Charles, the Prince of Wales. The distinction doesn’t carry much meaning here in the United States, but it’s an important one. This is not an event that has anything to do with international politics. Let’s face it – even when William does someday ascend the throne he won’t be a leader on par with Barack Obama or Tony Blair; he will be a king. A king. Which in twenty-first-century Europe has about as much to do with international politics as World Cup soccer.


Regardless of his election to the office of the most powerful human being on Earth – President of the United States – Barack Obama is still just a commoner. Prince William can trace his bloodline directly back to Alfred the Great, who ruled England in the ninth century AD. That is what this wedding was all about. The only reason it was any bigger a deal than any other rich kid marrying his college sweetheart is the fact that William stands in the direct line of succession to the throne of England – one of the most historically important monarchies in the world.


So what does elected office have to do with any of that? Elected officials like Presidents and Prime Ministers are not royalty – their authority is granted by the whim and caprice of the electorate. Today’s president is tomorrow’s motivational speaker, depending on how the political winds blow. In just 18 months Obama faces a tough reelection, and no one knows how powerful he will be two years from now. William, on the other hand, stands in a line of succession that can be traced back 1200 years and more. He was born a Prince and he will die a King. No political changes will ever alter that fundamental fact.


The underlying premise behind the whole Royal Wedding spectacle is that Prince William is better than the rest of us. He is a nobleman. And he’s not just any nobleman, but a nobleman who is second in line of succession to one of the greatest monarchies in the history of the world. As far as Queen Elizabeth II is concerned, Obama is a mere footnote in history compared to the family legacy that she carries with such grace and aplomb. Clearly the only worthy guests on such a necessarily short list are fellow noblemen and women – members of the few remaining aristocracies on Earth.


That notion is anathema to most Americans, contradicting the basic tenets on which America is built, so it’s understandable that Americans are simultaneously annoyed and fascinated by the whole thing. But aristocracy is fundamental to the British mythos, ingrained in it and perpetuated by a nation that is otherwise among the most progressive and innovative in the world. Barack Obama is not part of the royal club, and no popular election will ever place him among the noblemen and women of the world. Only one thing earns a seat among the guests at this celebration of aristocracy and nobility – and it has nothing to do with brains, achievement, or tenacity. It has everything to do with blood.

Monday, August 30, 2010

Inception and the Power of Lucid Dreams


Zhuangzi, a Daoist philosopher who lived in the 4th century BC, once had a dream in which he was a beautiful yellow butterfly, fluttering gracefully, blissfully unaware that he was dreaming. Without warning he woke up, suddenly conscious that he was a man and that he had been dreaming. But as he pondered the vividness of his dream, he realized that he had no way of being absolutely sure whether he was a man who had been dreaming he was a butterfly, or a butterfly who was now dreaming he was a man.

It is this realization about our perception of reality that writer and director Chris Nolan exploits in his latest blockbuster Inception, in which the action moves dizzyingly from dreams to reality to dreams within dreams, blurring the lines between the dream world and real world.

Think about it. No matter how bizarre a dream gets, you never question its reality until after you wake up and realize how strange the whole thing was. The main character in the film, Cobb (Leonardo DiCaprio), comes up with an ingenious solution to the problem of separating dreams from reality: He spins a small metal top—if he is dreaming it just spins and spins forever, but if he is not dreaming the top will eventually fall over.

The inception of Inception was Nolan’s own experience with lucid dreaming (in which the dreamer becomes aware that it is a dream yet doesn’t wake up). The film is a brilliant exploration of the human subconscious as expressed through dreams—a kind of thinking man’s Avatar—with all the strangeness, wonder, and fascination of exploring an incredible new world of alternate perception, minus the fantasy creatures and simplistic dialogue.

The concept of lucid dreaming has been around for thousands of years—historical references to it are found in the writings of Tibetan monks who as early as 700 AD were practicing dream yoga—a kind of sleeping meditation which allowed them to consciously explore the dream world. They were among the first to discover that the dream world exists entirely in one’s own mind and that the reality of dreams can be altered by the dreamer.

My own first experience with lucid dreaming was not born of a desire to explore my subconscious or create new realities. I simply wanted to escape the torments of a recurring nightmare that I had been having since I was a little boy. The dream unfolded in the same way every time—I was trying to escape from some unnamed, unseen, but nonetheless terrifying force and found myself unable to run or shout for help. My feet felt like they were stuck in molasses or as though I was running the wrong way on a moving walkway, and no matter how much I tried to call out to the people just ahead of me, I could never get the slightest sound to emerge from my throat as my inner demons closed in.

By chance I came across a book which mentioned lucid dreaming, and for the first time I heard about the practice of becoming aware that you are dreaming while still in the dream state. Usually we don't realize we are dreaming until we wake up, but lucid dreamers can learn to recognize their dreams and, the book told me, change them in interesting ways—even to overcome nightmares. I resolved to remember this the next time I found myself inside that nightmare world and just a few weeks later it happened—I was trying to run and my legs wouldn't move and the terror began to grip me and my throat tightened up, when it suddenly occurred to me that this had to be a dream. Once the realization struck, my pursuers disappeared instantly and the fear lifted. My next thought was that I wanted to try to fly—something I had read was possible in dreams—and I spent the next half hour (who knows how long it was in "real" time) flying around my own dream world, exploring strange amalgams of places I had been and seen, watching figments of my own imagination walk around a world that, I was suddenly aware, my own subconscious was creating.

Once I had overcome my recurring nightmare, I began to explore my dream world in subsequent lucid dreams, and found a few surprises along the way. It makes sense that Nolan is himself a lucid dreamer, because his depiction of the dream state in Inception is so uncannily accurate. I do find that I can alter reality in interesting ways, just as the young architect Ariadne (Ellen Page) does in the film; but also like the film, there are limits to what I can do.

The "projections" as they are called—the other people who populate my dream world—are always difficult to talk to. I have found it impossible to manipulate their behavior (“It's my subconscious, remember. I can't control it,” Cobb tells Ariadne at one point in the film) and I often find them to be downright stubborn and uncooperative. Most of the time they ignore me, say unintelligible things, or even resist my will as I move through my dreams—a projection once yelled at me angrily in a dream when I told her she was a figment of my imagination, giving me a bit of a shock and causing me to suddenly wake up. None of them have ever actually attacked anyone the way that Mal (Marion Cotillard) does in Inception, but in the wrong subconscious mind, I wouldn't put it past them.

There are other surprising limits to what I can do in a lucid dream. I can, just as Ariadne does, alter certain elements of the dream world. But I also find that some things spring up completely unbidden, and that my attempts to change things often break the dream reality down to the point where everything simply falls apart and goes black, causing me to suddenly wake up. I have also tried experiments like flying through a ceiling, expecting to pop out through the top of the strange room I was in. Unfortunately, as I made contact with the ceiling I found my way blocked, and as I tried to force my way up and out of the room everything went black, leaving me floating in empty space for a few moments before I woke up.

It had been a couple of years since I last had a lucid dream, but seeing Inception twice in the last two weeks triggered another one just the other night. Finding myself in a dream state, I decided to fly away and accelerated up through a strange cartoonish world of floating Christmas trees and dancing lights. I woke up that morning in a euphoric state, wishing I could go right back to sleep and continue dreaming; there’s something invigorating about the freedom and wonder of lucid dreams.

It’s strange to think that oneironauts (explorers of the dream world) are navigating a world that their own subconscious mind is creating at the very moment they are experiencing it. It’s a profoundly empowering and self-reflective experience. But still, I wouldn’t want to get lost in the dream world the way that Mal and Cobb do in Inception. There’s something essential missing there, and it’s always comforting to come back to reality. That is, if I really am awake right now. How can I know for certain? Maybe I should get myself one of those little tops . . . just to be sure.

Wednesday, July 7, 2010

A Shallow Sentiment: Why Nicholas Carr's Anti-Technology Crusade Misses the Whole Point

There’s an oft-quoted Scottish proverb that there is no great loss without some gain. The flip side of that sentiment, I suppose, is that there is no great gain without some loss. The latter phrase came to mind as I read Nicholas Carr’s latest book The Shallows, which expands on “Is Google Making Us Stupid?”, his popular article first published in The Atlantic, and argues that the Internet is fundamentally altering the very structure of our minds and making us all dumber. But even if we assume that he’s right and the Internet is shortening attention spans and encouraging shallower thinking among many of us in the developed world, is that really such a great loss compared to its potential gains?

I hear variations of this same argument all the time in researching and teaching educational technology, and I’ve come to see it as a kind of excuse. We all tend to dismiss what we don’t fully understand as either unimportant or dangerous, and Carr has landed firmly in the latter camp.

Disappointingly, the book is devoid of broad generalizations, wild claims, and unwarranted superlatives, which makes it dishearteningly difficult to attack. Carr works slowly and methodically, in a scholarly way, offering interesting background on the history of writing, supporting his claims with evidence and examples, and genuinely acknowledging the other side of most of his arguments. So I must admit that his point is legitimate as far as it goes. He makes a good case that heavy use of the Internet can make it more difficult to engage in the kind of focused, undistracted thought that characterized many of history’s greatest thinkers.

But how many people will actually read his carefully qualified and well-nuanced argument? How many anti-technology zealots, on the other hand, will use it as a bludgeon in the fight to deny schools adequate funding for educational technology? How many will use it as an excuse to ignore the work of programs like One Laptop Per Child? How many misinformed people will see it as a reason to miss out on the wealth of information and opportunity the Internet provides?

He’s not helping us by making this argument. It is just one more in a long line of anti-innovation arguments going back to the time of Socrates, and I think there’s more going on here than simple, instinctive resistance to change; taken together, these arguments paint a picture of an effort by elite scholars throughout history to maintain exclusive power structures and deny the world’s poor and undereducated entrance to the marketplace of ideas.

Carr argues that the Internet is a “form of human regress” and he supports this claim by bemoaning the loss of what he terms the “linear mind.” The linear mind, he says, began to emerge after Gutenberg invented the printing press and is characterized by “calm, focused, undistracted” thought processes like those facilitated by the careful, focused reading of a printed book. He argues that the linear mind is being replaced by an Internet-fueled need “to take in and dole out information in short, disjointed, often overlapping bursts—the faster the better.”

The latter may be true—Carr cites some preliminary evidence that attention spans are shrinking in the age of the Internet—but the “linear mind” which he evokes is merely the latest incarnation of a very tired old argument. There have always been a few thinkers who dig deeper and think harder than the rest of us, in both primitive and advanced societies. But the printing press didn’t create the linear mind, nor will computers destroy it.

For the first 200,000 years of human existence no one knew how to read or write. In the scheme of things writing is still a newfangled invention, a recent and innovative technology in the development of humanity. Homo sapiens have been writing for less than 2% of our time on Earth, and the historical evidence indicates that the “linear mind” existed long before writing, let alone the printing press, which has been around for less than 0.3% of our total time on Earth.

It is, after all, the great philosopher Socrates who argues in The Phaedrus that the latest and greatest technology of his time—writing—“will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” In other words, he believed that the use of writing was creating shallow thinkers—a sentiment that is virtually identical to the argument Carr is making today about the Internet.

Of course, I suppose it never occurred to Socrates that not everyone lives in a center of learning like ancient Athens or can afford to spend his days sitting under a tree conversing with a wise teacher. Despite his reservations about the damage writing could do, the fact remained that in order for his words of wisdom to spread and reach beyond his own time and place, someone (his student Plato, for example) would eventually have to write them down.

Eventually, of course, Plato did write down Socrates’ words, and as a result, some 2,500 years later we are still able to read them. That is not to say that Socrates didn’t have a point about the loss of mental focus that writing caused, as anyone who has ever written something down so as not to have to remember it can attest. In Socrates’ time a talented and well-trained bard could memorize an entire epic poem after just two or three hearings, a mnemonic skill that quickly vanished once writing took hold.

So yes, Socrates was right about writing (just as Carr is probably right that too much computer use shortens attention spans), but that doesn’t make it right to argue that humans should never have learned to write (or that people shouldn’t use the Internet). The irony is that so many of today’s anti-technologists decry the “death of handwriting,” mourning the loss of the very technology that Socrates feared in his own time. It’s an endless cycle. The old school never welcomes new innovations, but once the revolution prevails, the fears it raised always end up seeming quaint and foolish to the next generation.

A similar thing happened with the invention of the printing press. Scholars argued that the ease with which printed materials were being created was destroying the quality of literature and scholarship. They lamented that printing was filling students’ time with trash reading rather than forcing them to apply undistracted focus to mastering a few key texts (in other words, they believed the “linear mind” was destroyed, not created, by Gutenberg’s press).

Perhaps the most intriguing example of the anti-printing argument is found in Johannes Trithemius’ De Laude Scriptorum (In Praise of Scribes), in which he argues that monks should continue to practice copying despite the invention of the printing press. His reasons? Copying keeps idle hands busy and encourages diligence, devotion, and deep knowledge of scripture. He also writes convincingly about the beauty, integrity, and individuality of the copied text compared to the stark and mistake-prone output of the printing press.

Surely there is no one who would argue that a printed book is as beautiful, unique, and artistic as a hand-copied manuscript, or that a monk who has hand-copied the Bible dozens of times doesn’t know it better than one who has merely read a printed copy. But there is one great irony in Trithemius’ story: Within two years he took his laboriously hand-copied manuscript to a movable-type print shop in Mainz, Germany and had it printed for more widespread dissemination. I can only imagine the peculiar feeling that a devoted scribe must have felt reading the printed version of this book articulating the value of copying by hand. It must have been at least a little bit similar to the strange dissonance I often felt as I read Nicholas Carr’s anti-tech book electronically.

I have to admit, Carr is right about one thing. I probably was more distracted reading his book on my computer than I would have been had I been locked away in some room with a hard copy of his work. I often found myself opening up another window to further research some fact or define some word that he had dropped, to see what other writers had said about it, or to expand my understanding of some element of his argument. But is that really more “shallow” than sitting in a room reading his words in stark isolation? I’m still not convinced.

A few notable individual geniuses have always had this focused ability to read and think, but the vast majority of human beings never have. On the other hand, the Internet is allowing more people to participate in the process of learning and creating knowledge than has ever been possible before. Is it better to have a small group of isolated scholars thinking deeply, or an entire world engaged in a global conversation? Before the development of writing, only a handful of scholars, philosophers, and politicians were educated at all. Before the printing press, far less than one percent of the world’s people could read and write. Before computers, less than half of the global population could read. But now, according to C.I.A. figures, global illiteracy is at the lowest level in human history, just 18%, and it is dropping rapidly.

The Internet is a fundamentally new medium, and it is already proving to be every bit as important and paradigm-changing as the invention of writing or the printing press. Are we going to lose some things in the process? The smell of freshly printed paper, the feeling of cracking open a brand new book, the sound of a pen scratching across a page, the security of a signed hard copy of a legal document, even the very ability to write things out by hand? Eventually, yes, I think we will lose those things. And although it seems like a tremendous loss at times, it will also be a tremendous gain.

Think about it. Carr’s argument is fundamentally based on the same kind of elitism that innovative communication technology has encountered since at least the time of Socrates. Certain privileged thinkers and academics find value in the status quo and don’t want to see it disappear, while the rest of humanity starves for access to the knowledge it covets.

It was easy for the wealthy American Nicholas Carr to isolate himself in an “unplugged” mountain hideaway to write this book, which argues that the Internet is making us all dumber. Perhaps for the few privileged wealthy individuals in this world who have access to a world-class education (Carr got his B.A. at Dartmouth and his M.A. at Harvard), vast public libraries, and well-stocked bookstores, the newly interactive Web 2.0 will have a negative impact. But how many people who were formerly shut out have now found a window to a whole world of information and opportunity? And why, Nicholas Carr, are they “shallow” for finally accessing it?

For a planet filled with individuals hungering to join the global conversation, the Internet is opening up worlds of opportunity in just a few years that old-fashioned paper never could have in a thousand. The free and globally accessible Internet has the potential to improve education, revitalize democracy, undermine dictatorships, circumvent censorship, facilitate global understanding, empower the oppressed, expand opportunity, and perhaps not least importantly, revolutionize the art of writing itself. So who here is really guilty of shallow thinking?

Sunday, January 10, 2010

Reality in the New Economy: More Wealth, Fewer Jobs

How quickly we forget the roaring nineties and the sense of nearly universal optimism engendered by the end of the Cold War and the longest period of uninterrupted economic growth in our nation’s history. The madness surrounding the “New Economy” and predictions of unending prosperity seem to have peaked in 1999 with the now infamous book Dow 36,000 by James K. Glassman and Kevin A. Hassett, which boldly proclaimed, “The stock market is a money machine: Put dollars in at one end, get those dollars back and more at the other end [. . .] The Dow should rise to 36,000 immediately, but to be realistic, we believe the rise will take some time, perhaps three to five years” (22).

As it turned out, of course, the Dow peaked in January of 2000 at just over 11,300 before falling precipitously, bottoming out at 7,286 just three years after Glassman and Hassett’s book was published. More recently it plunged even lower, hitting 6,547 in March 2009. Of course the stock market is nothing if not unpredictable, so the miscalculation is excusable, though it remains a fascinating and colossally epic fail.
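Just how epic? A quick back-of-the-envelope calculation, using only the rounded index levels quoted above, tells the story:

```python
# Back-of-the-envelope arithmetic using the Dow levels quoted in this post.
predicted = 36_000   # Glassman and Hassett's target
peak_2000 = 11_300   # January 2000 peak (approximate)
low_2002 = 7_286     # low three years after the book was published
low_2009 = 6_547     # March 2009 low

print(f"The 2000 peak reached only {peak_2000 / predicted:.0%} of the predicted 36,000")
print(f"Decline from peak to the 2002 low: {(low_2002 - peak_2000) / peak_2000:.0%}")
print(f"Decline from peak to the 2009 low: {(low_2009 - peak_2000) / peak_2000:.0%}")
```

In round numbers: the index topped out at barely a third of the predicted level, then fell roughly 36% and, by 2009, roughly 42% below that peak.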

Economists are coming to realize that the predictions for the new economy were based on a fundamental flaw in our understanding of how technology would affect the job market. The consistent increases in productivity brought about by the successful integration of technology into the economy have not produced the uninterrupted boom many economists predicted, which is forcing them to take another look at some of our most basic assumptions.

The economic stimulus is a case in point. Congress, Ben Bernanke, and President Obama poured close to a trillion dollars into the economy in the form of stimulus, following the model established by the New Deal and WWII, when massive government spending pulled the country out of the Great Depression. Although it was designed to unfold gradually, too slowly to produce the immediate results many voters were looking for, the stimulus package is working at some level. The stock market has rebounded to over 10,000, consumer spending is on the rise, productivity is up, GDP is increasing, and the ailing financial system has been stabilized.

There is one major economic indicator that has not improved, however, and that is going to create serious problems for the Democrats in November. That indicator, of course, is unemployment. Because of the narrow way the official rate is calculated (it counts only people who are actively looking for work, leaving out discouraged workers and those stuck in part-time jobs), the feds pegged unemployment at around 10% for the last three months of 2009, though the real share of unemployed and underemployed workers is probably closer to 20%. It’s no surprise to anyone who has studied economics that unemployment lags behind other indicators, but that hasn’t stopped Rush Limbaugh and the Republicans in Congress from pouncing on Obama and proclaiming the stimulus a failure.
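To see how the headline number and the broader number can drift that far apart, here is a minimal sketch with made-up figures, purely for illustration (these are not actual BLS data, just a simplified version of the headline-versus-broader-measure idea):

```python
# Hypothetical figures (in millions) chosen only to illustrate the gap
# between a headline unemployment rate and a broader measure.
employed_full_time = 120.0
part_time_for_economic_reasons = 9.0   # want full-time work, can't find it
unemployed_actively_looking = 15.0
discouraged_no_longer_looking = 3.0    # not counted in the headline rate

labor_force = (employed_full_time + part_time_for_economic_reasons
               + unemployed_actively_looking)

# Headline-style rate: only active job-seekers count as unemployed.
headline = unemployed_actively_looking / labor_force

# Broader rate: add discouraged workers and the involuntarily part-time to the
# numerator (and discouraged workers back into the denominator).
broader = (unemployed_actively_looking + discouraged_no_longer_looking
           + part_time_for_economic_reasons) / (labor_force + discouraged_no_longer_looking)

print(f"Headline rate: {headline:.1%}")  # roughly 10%
print(f"Broader rate:  {broader:.1%}")   # roughly 18%
```

With those invented numbers the headline rate comes out around 10% while the broader measure lands closer to 18%, which is the kind of spread the official and “real” figures showed at the end of 2009.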

The stimulus is not a failure, but unemployment is not going to improve any time soon, and unfortunately the unsophisticated American electorate is about to be taken for a ride by politicians and pundits who will manipulate that fact for their own ideological ends. Liberal pundits will point out that unemployment always lags behind other indicators, which is true, but in no past recession has it lagged as badly as it has (and will) during the recovery from the Great Recession.

The connection between productivity and unemployment is often debated, but over the last sixty years sudden increases in productivity have always meant that workers were busy (and making money), which almost always led companies to hire more help. The pattern first emerged in 1950, during the post-WWII boom, when productivity growth reached 14.6% in a single quarter and unemployment subsequently dropped 33%. A similar scenario unfolded in the early 1980s when, in the middle of a deep recession, productivity growth suddenly hit 9.6%, giving companies a flush of income that allowed them to hire new employees; by the end of the third quarter of 1983 unemployment had fallen a full percentage point, and it continued to decline rapidly and consistently for the next two years. Ditto for the first quarter of 2000, when productivity growth of 9.4% was followed just two months later by an almost unbelievably low unemployment rate of 3.9%.

Based on this history, it’s obvious why economists of the late nineties predicted that the productivity gains of the new economy (productivity is up 65% since 1970) would create jobs and lead to sustained economic growth. So why did the recent boom in productivity (6.9% and 8.1% in the second and third quarters of 2009, a dramatic change from the 1.8% average of the two previous years) not cause a drop in unemployment? The answer, unfortunately, seems to lie in an aspect of the new economy that has been largely overlooked.

Think about it. How many jobs that were viable career options just a decade ago are now disappearing? And I’m not just talking about factory jobs lost to automation and outsourcing. Travel agents have been replaced by websites; airline ticket agents have been laid off as airlines move to e-tickets; real estate agents are being bypassed by buyers and sellers who use the web.

Distributors, once key middlemen in the economy, are being replaced by complex computerized distribution systems like the one Wal-Mart employs, which no longer requires white-collar professionals to sell and distribute products to retail outlets, but instead employs underpaid blue-collar workers without benefits to drive forklifts while a computer places the orders and keeps the cheapest available products flowing into stores.

Tech support and customer service have been outsourced, along with white-collar engineering jobs. The post office has completely stopped replacing employees who quit or retire. Teachers and professors are being asked to facilitate online classes that can be run at much lower cost with higher student-to-teacher ratios. Illegal downloading has gutted the music industry, and new software has made DIY recording relatively easy, eliminating thousands of high-paying recording, editing, production, marketing, and distribution jobs in a rapidly disappearing industry.

Journalists are being replaced by bloggers. Locally owned businesses are being replaced by huge chain stores, which swap upper-middle-class local owners for lower-middle-class managers with no health insurance coverage. The craft of professional photography, once so marketable because of the difficulty of getting a decent shot with an old-fashioned film camera, has been rapidly eroded by digital cameras that show you the picture instantly, allowing you to get the shot you want without paying a professional for it.

All of this seems great on an individual basis. We can now get whatever music we want whenever we want it. We can have Uncle George photograph our wedding instead of paying a professional photographer thousands of dollars. We can buy cheap crap at Wal-Mart instead of high-priced quality goods from local retailers. We can buy or sell a house without paying a commission. All great things. But the side of it we tend to ignore is that the more middlemen you take out of the economy, the fewer middle-class jobs there will be that don’t require a college education.

The new economy is turning out to be all about the further stratification of wealth. We used to say that the rich get richer and the poor get poorer back when the wealthiest one percent of Americans earned just 10% of all pretax income. Now that the same one percent receives close to 22% of all income, the real impact of the new economy is becoming strikingly clear. Increased worker productivity and the disappearance of the middleman have been a great thing for stockholders and corporate CEOs, but they’re kicking the middle class in the ass and will continue to do so for the foreseeable future. So if you’re wondering why jobs aren’t coming back despite economic growth, look no further than the computer on which you’re reading this for free right now.