DAN JACOB WALLACE

Philosophy + Music + Words
December 27th, 2011 by Dan Jacob Wallace

Update: Lanier, Jodorowsky, de Beauvoir, Airbender, et al.

Last week I finished up another semester of classes and have a few weeks free, so I have some time to work on some other projects, including getting some posts done for this blog. I don’t have a particular topic for this one, so I’ll just sit and type for a while, touching on some of the stuff I’ve been working on and thinking about since my last update. Whatever comes to mind… so it might get a little random and undisciplined.

I’m hoping that over the next few weeks I’ll have time to write some articles that I’ve been meaning to get to for a while now, including a series on economics and ideology – a recent obsession of mine, some key ideas from which I had the pleasure of presenting to my school and local community earlier this month.

First Up, a Music Update…

I announced in my last blog post that I was planning to release two music projects around the beginning of 2012. However, school plus my aforementioned obsession with economics ended up taking up all of my productive time. I got some music done, but not nearly enough to release one album, much less two. This is ok. I believe in doing that which is most fulfilling (unless you’re a pedophile or cannibal or… you know what I mean… let’s not get into that). At any rate, I’ll be working on music in the coming weeks, after which I’ll give a new update on my progress.

That said, I’ll share some of the things I’ve been reading and watching lately in my leisure time.

Some of the Books I’m Currently Reading…

You Are Not a Gadget by Jaron Lanier

I’m only about halfway through this book, but the common theme is that current software design implementation – especially the web 2.0 internet model – tends to dissolve human individuals into a kind of collective hive-mind (sometimes a.k.a. the “noosphere”), the structure of which is influenced by the notion of the computer being a viable metaphor for the human brain and mind. He argues that software design should attempt to conform to the complexities of the human experience as opposed to reducing humans to facile computer-like systems.

I’m not quite as critical as Lanier is of the current state of things, though, as one might expect, Lanier is even more concerned with (or perhaps even terrified by) what’s to come than he is with what’s going on now. He describes a cyber-dystopia, disguised as cyber-utopia, towards which we are excitedly headed. The dividing line between dystopia and utopia, however, is not clearly drawn; for example, he pejoratively describes current cyber culture as “digital Maoism,” a rhetorical device that his opponents – those who support current cyber culture and really do believe it represents a kind of utopian ideal – would likely not appreciate. Lanier’s assessment of what’s to come is extreme, but one thing that does ring true is that if the current design is as bad as he says it is – not just in terms of code, but also socially speaking – it’ll be that much harder to change as it becomes increasingly “locked in” (that is, as more and more structures are built with interdependent relationships to that initial, bad structure; he uses MIDI, which a friend of his invented, as an example of a bad locked-in technology).

(A current concern I won’t get into much here, because I’d rather explore it in socio-economic terms later, is the phenomenon of internet users becoming irate at the suggestion that they should pay for content as opposed to, for example, stealing the music of a band they like. And, by the way, a lot of artists give away free music because they’re advised to; bands – especially lesser-known bands – who don’t offer free music run the risk of coming across as selfish, greedy jerks. It seems that users are far more interested in a libertarian cyber-society, in which everything is controlled by private interests. So, what even real-world socialists want online is a free market in which artists don’t get paid, while the people running the monopolies and oligopolies that control online content and information – Google, Facebook, YouTube, etc. – make gobs of cash.

These users seem to THINK they’re engaging in some kind of free-spirited socialism – or even communism – by keeping all content “free,” but the truth is that by eliminating the possibility of a thriving entrepreneurial cyber-class, all the power goes to major corporations, not just online but offline. And, somehow, users are horrified by the notion of the government stepping in to regulate these corporations, even though those same users are currently demanding greater regulation of non-internet-based corporations. Internet-based companies have pulled off a neat trick, getting their users to see the company’s private best interest as the users’ collective best interest. This is true despite the fact that so many users think they’re the ones manipulating Facebook – and not vice versa – simply by being aware that Facebook primarily exists as a marketing tool. Facebook likes us to think that. It keeps us coming back and empowering that tool, the cumulative result of which is that it keeps them and their colleagues in power over the attitudes users have with regard to online social and economic dynamics, such as what should be expected from musicians, filmmakers, writers, and journalists.)

At any rate, there’s a lot I agree with in Lanier’s arguments, which when boiled down are essentially anti-reductionist and anti-ideology in nature (two ideas that are important to me; note that when I referenced user attitudes above, I was referencing ideology).

Before going on, I want to be very clear about how much I like the internet. I love having quick access to so many ideas and media, and the ease with which I can communicate with people. The critique here is not of the internet per se, but of the design implementation that shapes how the internet is used. That “how” question is very important, and determines what other sorts of questions can follow (again, we’re talking about questions that relate to ideology, such as: What questions is it possible to ask within a given conceptual framework, and what questions are impossible to even imagine?).

Back to Lanier’s anti-reductionism and anti-ideology…

It’s interesting to consider how social networking sites like Facebook are set up to reduce personalities to checkbox database profiles, where you either are a thing or you are not (not to mention the reduction of friendship itself). Of course, this is done for the purposes of collecting marketing data, but there’s a deeper ideological framework at play, related to the idea that humans are reducible to algorithms and categories of discrete experiences and, ultimately, math that can be replicated for AI and consciousness-simulating purposes. Where did marketers – and behavioral economists and computer programmers for that matter – get the idea that the Facebook-style database provides accurate information about how people function in the world, online or off? Why was that sort of design chosen as the untested starting point? Questions such as these give a peek at the extent to which ideology is at play in how the reductionist process is implemented.
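To make the reduction concrete, here’s a toy sketch of a checkbox-style profile. The field names, values, and matching function are invented for illustration; they’re not taken from any real service’s database or API.

```python
# A hypothetical sketch of the kind of reduction being described: a person
# becomes a fixed schema of checkboxes and category picks. Anything about
# the person that doesn't fit a field simply doesn't exist to the system.
user = {
    "relationship_status": "it's complicated",  # must pick from a fixed list
    "likes_jazz": True,
    "likes_hiking": False,
}

def matches_segment(profile, required):
    """Return True if the profile's checkboxes satisfy a marketing segment."""
    return all(profile.get(key) == value for key, value in required.items())

# A marketing query only ever sees the checkboxes:
print(matches_segment(user, {"likes_jazz": True}))   # True
print(matches_segment(user, {"likes_hiking": True})) # False
```

The point of the sketch is that the query language can only ask questions the schema anticipated, which is exactly the ideological lock-in described above.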

In reading Lanier’s book, it occurred to me that Auto-Tune is another interesting example of the reduction of the human to match the computer-as-human metaphor. As opposed to being allowed to exist as a continuum of frequencies and other properties, the human voice is broken into discrete segments, imitating the binary computer mind as opposed to the human mind (and natural world), which is in constant conflict and dissonance with itself. (This isn’t just about individual notes, but the relationship between frequencies; music, fundamentally, is about relationships, including – to name some of the building blocks – relationships of frequencies [which determine pitch and timbre], durations, intensities, and spatial dimensions.) However, Auto-Tune is reshaping not just the music we hear, but HOW we hear music. Songs that sounded great to me ten years ago now sound out of key, especially the vocals.

Note that this isn’t just because of the use of Auto-Tune in and of itself. It’s that Auto-Tune and similar digital technologies have set a standard that’s impossible to reach in nature. Many artists don’t use automated tuning detection, but instead use digital pitch correction in which they retune individual notes manually. Other artists will record a dozen takes or more and piece those takes together. This has always been done to some degree, including in the pre-digital world of tape recording (the standard has generally been to perform about three takes, which would be spliced into one solid performance), but what is considered the best take of any given passage – or even syllable – is now influenced by the idea that it’s more important to be in tune than it is to be expressive.
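To make the “discrete segments” point concrete, here’s a minimal sketch of hard pitch quantization: snapping a sung frequency onto the nearest note of the equal-tempered grid. Real pitch-correction tools are far more sophisticated (pitch tracking, scale selection, adjustable retune speed), so treat this only as the core reduction the paragraph describes, not as how any actual product works.

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz):
    """Snap a frequency to the nearest equal-tempered semitone."""
    # Measure the distance from A4 in (fractional) semitones...
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    # ...then discard the continuum by rounding to the nearest whole semitone.
    nearest = round(semitones_from_a4)
    return A4 * 2 ** (nearest / 12)

# A singer lands slightly flat of A4; the continuous pitch is thrown away
# and the note is forced to exactly 440 Hz.
print(snap_to_semitone(436.0))  # 440.0
```

Everything expressive that lived in those few discarded cents – scoops, vibrato width, the tension of being slightly under the note – is what the rounding step erases.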

At any rate, there are many consequences of this process of human reduction, to name just three of the many that Lanier touches on:

(1) Impediments to individual creativity in favor of crowd wisdom. I agree that crowd wisdom is good for democracy and, maybe, guessing how much an ox weighs. It’s not good, however, for writing a song or a symphony, developing a philosophical perspective, or inventing a light bulb. We can see this at play at Wikipedia, where there are no individual authors, and instead there is a mashup of fragments from various sources. I greatly prefer something like the Stanford Encyclopedia of Philosophy, the entries of which are written by individual authors who own up to their biases. Bias is unavoidable; Wikipedia tries to create the illusion of not having one by erasing the author from the equation, but it doesn’t work.

(2) AI begins to look more real – and more possible – than it really is, due to humans adopting computer-like personae that make computers seem more human in comparison, thereby fueling overblown reverence for the computer-as-superior-being fantasy. For more info on this, check out the Singularity Institute’s FAQ on singularity (the notion that computers will one day grow superior to humans, for better or for worse) and The Turing Test (tests for convincing cognition in computers).

(3) Software design, especially such that allows for easy and temporary pseudonym creation, has a tendency to bring out the inner-troll (i.e., meanness) in all of us. We’ve all seen this, and most of us have been guilty of it at some point, at least on some level, where we address an online stranger in a less polite manner than we would a stranger in the real world.

(4) Here’s one of my own, related to my above thoughts on Auto-Tune, but which can be broadened to include the general belief that computers expand innate human capability: Auto-Tune creates an expectation for singers to sing perfectly in key (something that’s not even possible for acoustic non-voice instruments, such as violin or piano). This expectation persists alongside criticism of the fact that singers need Auto-Tune to reach that perfection. In other words, people recognize that Auto-Tune doesn’t give singers the ability to sing perfectly in key; it is a fiction.

That said, let’s consider the increasingly widespread idea that computers extend and enhance human intelligence. For example, having millions of facts at one’s fingertips is considered to virtually – or even literally – expand one’s knowledge.

As you can imagine, I consider this idea to be wrong. No amount of random, easy access to fragments of information is going to improve critical thinking skills or the ability to draw connections between the information represented in those fragments. The so-called expanded intelligence of a person who cannot intelligently discuss his or her area of interest – much less expertise – without consulting a digital Wiki is a fiction. The notion that the internet extends human intelligence is just something that was decided to be the case by some people who are overly enthusiastic about technology. It is an exciting idea, but it’s not real.

I should clarify here that there is an important distinction to be made between the above and, for example, using a website like Lumosity.com to exercise one’s cognitive skills. There are a number of ways to reinforce one’s mental capacities (get enough sleep and exercise, eat lots of blueberries, do cognitive puzzles for a while each day, et al.), but merely knowing how to do a Google search is not one of them. The internet is an amazing tool for the ambitious researcher (that is, the sort of researcher who also still reads books and talks to people in person), but it also contributes to many a person’s tendency towards intellectual laziness.

Perhaps a positive consequence of all of this is that it makes public debate – with no ‘net to fall back on – all the more important, just as the ability for musicians to perform well live increases in importance as people trust their ears less and less when listening to recordings (people’s expectations are highest when evaluating recordings, including live recordings; when someone’s in front of them in the room, many factors other than singing perfectly in tune come into play that make or break a show).

Back to Lanier’s book… He sometimes gets a little shaky, in my view, in his observations surrounding historical events, both in terms of the mechanisms behind how those events went down and the subsequent analogies he draws between digital and bricks-and-mortar life. However, I love it when he points out that, before the computer, the train was the techie metaphor for how humans function, and this affected how people (including doctors) treated other people. We see this now, with the human brain so often being referred to as though it were a computer, and I simply don’t accept that model.

This starts to get into a lot of other areas, especially consciousness, which I’ll touch on only briefly, just to give a sense of some of the complexity behind the idea of a computer – or human for that matter – being conscious. Four things come to mind:

(1) My view of human consciousness and mind can be characterized as non-reductionist materialism. That is, I believe that the mind is physical (not immaterial or, as philosophers call it, epiphenomenal), but at the same time, I don’t believe that the mind, or human behavior in general, can be reduced to discrete objects that can be transliterated into math or explained in terms of mere evolutionary properties. So, I don’t accept comments such as, “we respond to music merely for some vestigial evolutionarily beneficial biological reason,” or, “music is just a series of isolated sound events that we happen to experience and respond to as if it were a thing in itself,” or, “love is an illusion; it’s just chemicals guiding you towards procreation.” There are a number of reasons I don’t accept these statements, but I won’t get into that here.

There is a growing concept that contrasts with reductionism, referred to as “emergence,” in which it is recognized that the chemist or biologist (or other sorts of observers) can look at the function of smaller parts of a whole for the sake of emerging back up to attempt to understand that whole with the function of those smaller parts in mind. The complexity involved in the system of interdependent relationships between the parts and their resulting whole is called “supervenience.” This is where the mystery (or indecipherable complexity) exists, and the binary mind of a computer is nothing like that. For example, one can just look at how human memories work, insofar as we understand that process, and see that it’s nothing like how a computer stores information and makes that information accessible. (Here’s a fun and quite fascinating Radiolab podcast on memory.)

(2) Nick Bostrom’s Simulation Argument. I find this argument problematic for a host of reasons (such as the notion of substrate independence) that I won’t get into here because it would be lengthy. I figured I’d mention it, though, as it’s worth checking out before reading Lanier’s book, at least for anyone interested in the idea that we might be computer simulations (and if that’s the case, then those post-humans who are simulating us are likely simulations as well).

(3) I am reminded of another Radiolab podcast, in which the creator of the Furby pet robot thingy argues passionately that the Furby is conscious, and that the only difference between a human and a Furby is complexity.

(4) Going even simpler than the Furby, philosophers of mind sure love to talk about thermostats in their debates over the nature of consciousness. Here I feel compelled to comment a little further on my view of this subject. I do not agree that stimuli-responsive dolls and/or thermostats are conscious. An example I like to use is a spinning quarter. Here we have an object that has been set into motion and subsequently follows a path determined by its environment. The relationship between the spinning quarter and its environment results in a kind of mechanized behavior, but this does not mean that the quarter is conscious.

It could be argued that the difference is that the quarter is following the path of least resistance according to physical laws and principles, while the doll is doing something more complex that requires something closer to a personality and that contains variation. I, however, don’t believe that the behavior of the quarter and the doll are really all that different. What’s really going on here is that the doll is being inappropriately anthropomorphized because of the shape of its physical and behavioral design. The doll cannot resist its own nature, cannot fight against the path of least resistance. When programmed to say some phrase upon detection of a certain kind of sound, it will say some phrase. If the phrase changes, that’s only because there is a computer chip inside of it running a program designed to give the impression of spontaneity. Also consider that, when turned upside down, the Furby goes into a noisy, unceasing fear-like state. It has to do this unless it is broken.
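The fixed stimulus-response behavior described above can be sketched as nothing more than a lookup table. The stimuli and phrases here are invented for illustration (not taken from the actual toy’s firmware); the point is only that, given a stimulus, the program must produce its programmed response.

```python
# A toy sketch of stimulus-response determinism: a fixed mapping from
# detected stimulus to canned output. There is no deliberation and no
# resisting its own nature -- just lookup, then output.
RESPONSES = {
    "loud_sound": "Ooh! Big noise!",
    "petted": "Me love you!",
    "upside_down": "AAAH! AAAH! AAAH!",  # the unceasing fear-like state
}

def react(stimulus):
    """Return the programmed response for a stimulus, or a default."""
    return RESPONSES.get(stimulus, "...")

print(react("upside_down"))  # AAAH! AAAH! AAAH!
```

Swapping in a random choice among several phrases wouldn’t change the argument: a seeded random-number generator is as bound by its nature as the lookup table is.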

Some might argue that humans are the same way, and are only capable of following their programming, even though it’s more complex in nature. My response to this is the following: Humans may not be able to fully escape their nature and conditioning, but they can be aware of this fact on some level, and that makes all the difference. The human can fight, or TRY to fight, his or her nature, conditioning, tendencies, desires, reflexes, et al. on some level.

If this argument isn’t sufficient for those who’d claim that Furbies and humans are just different scales of the same phenomenon (for example, that both only give an illusory appearance of free will), then I’d say that neither is conscious. It then begins to become a ridiculous semantic argument, even more ridiculous than it’s already been thus far. (A real challenge here is to look at the consciousness of other animals in this context. It’s not a subject I’ve spent a lot of time with, but I’ve definitely come across some interesting current work going on in that field.)

Back again to Lanier’s book… One last thing I wanted to mention is a list of proposed suggestions that Lanier gives early on in the book for what individual users can do to remedy the issues he’s diagnosing. They are all more or less in line with my own thinking, and I could write a whole blog post about each of them (partly because we creative types with an online presence are usually advised to do the opposite of what’s advised here), but I’ll resist the urge as I think we all recognize the phenomena being addressed by these suggestions. Here are two examples from the list:

- Post a video once in a while that took you one hundred times more time to create than it takes to view.

- Write a blog post that took weeks of reflection before you heard the inner voice that needed to come out.

To summarize, whether you agree with him or not, Lanier’s book is an important read at a time when very few people understand the implications of the cosmology and implementation of the cyber-world in which they spend more and more of their lives. (Keep in mind that a lot of what I’ve touched on here isn’t actually in the book, but is inspired by its subject matter. Also, he gets into more than what I’ve mentioned here, including economics, speculative finance, and the music industry.)

Psychomagic: The Transformative Power of Shamanic Psychotherapy by Alejandro Jodorowsky

My good friend Coe Douglas sent me this book a few days ago. I’ve only gotten about forty pages in, so it’s a little early to be commenting on it, but so far it’s fascinating, and there are a few things I’d like to mention.

In my 20s, I knew Jodorowsky as a mysterious filmmaker whose movies managed to be thoroughly, surreally poetic while simultaneously being saturated in (sometimes quite grotesque) humanity. Santa Sangre was my favorite. I didn’t know much about the man himself, though, and, in retrospect, I think I liked it that way because it added to his underground cachet. Part of the fun was sitting around with friends exclaiming, “I love the fact that someone actually made this.”

(Does this sort of thing occur with young people today? Are there mysterious contemporary creative types doing weird work that young people look up to for their obscurity? Banksy comes to mind, but his anonymous celebrity is manufactured, and it’s everywhere. Interesting to contemplate…)

I was well familiar with Un Chien Andalou back in the day, but Santa Sangre was in color, and the freakishness seemed more real, and it didn’t make a pretentious declaration of a new art movement on the rise, such as Dali claimed of Un Chien. It wasn’t history; it was both of the moment and timeless. Jodorowsky is still where my mind goes when I think of “surreal filmmaking.”

Thanks to Psychomagic, I’m learning about a whole new side of Jodorowsky as a kind of physical mystic who sees illness as a physical dream that can be interpreted. Or, better said, I’m getting a more complete picture of him, because there seem to be no seams between Jodorowsky the filmmaker and Jodorowsky the whatever else he is. As a Chilean youth, he was all about poetry and what some have called “environmental theatre” (not him, I don’t think, because he calls it “poetry,” though he certainly recognizes the theatrical elements of his methods). He and his friends back in Chile were physical poets, freaking people out (or perhaps just heavily annoying them) in restaurants and on the street with absurd acts that I hesitate to characterize as surreal or Dadaist. I suppose “poetry” is as good a term as any.

This desire of a Chilean back in the 1950s to infiltrate everyday life with poetry is particularly interesting to me right now, as I am in the midst of an obsession with all things economics. Chile, of course, plays a critical early role in the history of the shift from Keynesian to neoliberal economic policy in the U.S. – a far-right-leaning experiment in capitalism that resulted in disaster for the Chilean people as Allende was ousted by the murderous, U.S.-backed regime of Pinochet.

Jodorowsky, speaking of his childhood, mentions a handful of national poets, explaining that everyone in Chile wanted to be a poet. The most influential of them was Pablo Neruda, who became an icon for the air of poetry and adventure that characterizes the time and place Jodorowsky describes (surely he’s romanticizing to a degree, but that romance was inspired by that time and place, and that’s not insignificant). It’s also timely to be reading Jodorowsky’s book in light of recent revelations that Neruda very well might have been assassinated by Pinochet via a doctor while Neruda was hospitalized.

Some of the Next Books in the Stack Are…

The Ethics of Ambiguity by Simone de Beauvoir – Someone recommended this to me based on my deep interest in the connection between metaphysics and ethics – particularly the question of how it’s possible to approach ethics from a place in which there is a lack of satisfactory metaphysical conclusions. I’m also interested in her classic text, The Second Sex.

Philosophical Foundations of Neuroscience by M.R. Bennett and P.M.S. Hacker – I’ve been meaning to read this for a good while now. I’m especially interested, not because of questions relating to philosophy of mind or the nature of consciousness, but instead because I want to see what the philosopher and neuroscientist who wrote the book have accomplished in this interdisciplinary endeavor. Such merging of academic disciplines is often frowned upon, not just because of intellectual rivalry (though, believe me, I’ve had my fair share of science and math types get visibly frustrated – even angry – when I tell them I’m studying philosophy), but because of departmental competition for funding. At any rate, I feel that interdisciplinary work is where it’s at. People from different fields should work together to tackle the same problems from multiple perspectives.

On What Matters by Derek Parfit – Parfit was brought to my attention in a New Yorker article that came out earlier this year. He is concerned with some of the things that concern me, such as difficult questions surrounding identity and even more difficult questions surrounding the paradoxes that arise when considering the moral relationship between humanity and the world humanity inhabits. One thing that really struck me is that Parfit cannot create visual images in his mind. He cannot visualize his wife’s face when away from her, for example. It seems this would give him some interesting ideas on identity. For a sample of his thinking, here’s an ethical formulation he came up with known as the “repugnant conclusion.”

The Way of Zen by Alan Watts – This book was really influential on my thinking back when I was in high school, a little over twenty years ago. There were several passages that gave me chills, in fact. I’m curious to read it again now that I’ve been exposed to other schools of thought from within and outside of my own culture. It would be a lengthy endeavor to write about my impressions of Zen Buddhism, so for now I’ll just copy this quote, which appeals to me, from the first paragraph of the first chapter of the book: “Zen Buddhism is a way and a view of life which does not belong to any of the formal categories of modern Western thought. It is not religion or philosophy; it is not a psychology or a type of science. It is an example of what is known in India and China as a ‘way of liberation,’ and is similar in this respect to Taoism, Vedanta, and Yoga.”

(Next time, there will be some fiction on here… I’m hoping.)

Currently Watching from TV Land…

Avatar: The Last Airbender – (Click on the image to the right to see this fantastic portrait of Zuko and Mai at full size!) I just finished this series last night. I almost never go for animated shows or movies that aren’t specifically for adults (even then, it’s rare), but this one is fantastic. It is marketed to children, with age recommendations I found varying from as young as 8 to as old as 17. The show does have a strong adult fan base, however, a glimpse of which can be had at the Avatar blog at the Onion A.V. Club.

It’s an epic, American-made series with anime influence, and whose world-worn characters (who range from very young to very old) and stories are complex, touching, drenched in the human experience, and hilariously funny. Themes such as love, hate, rage, death and war are treated with a gravity that not only adults can appreciate, but, more importantly, respects that children are persons who have to deal with these difficult topics even at their early age, despite the best efforts of some parents and despite what the content of most children’s shows might lead one to believe. Indeed, it won a 2008 Peabody Award for “Unusually complex characters and healthy respect for the consequences of warfare.”

I also appreciate the influence of Eastern philosophy on the show. The title character is Avatar Aang (though this is definitely an ensemble show), and one of his goals is to clear his chakras. The final chakra, which requires letting go of attachment, is treated with surprising philosophical complexity. He also faces a tremendous moral dilemma near the end of the series, which is not treated lightly, and for which there is a great payoff.

Oh, and one quick qualifier about the series. Around the middle of the first season, it started to seem like it was getting too childish - too simple in its conclusions and moral lessons. This lasted a few episodes and began to worry me, but then the show grabbed my attention again and held it pretty much to the end of the series run. Some might actually consider the subject matter to be too mature for younger audiences, especially starting near the end of the second season.

Louie – Set in New York City and written, directed, edited, executive produced, and starred in by Louis C.K., this show covers a lot of ground while remaining small and intimate. I remember C.K. talking (on the now legendary two-part WTF interview with Marc Maron) about how he used to make weird little movies – some funny, some dark – when he was younger, around the time he started writing for Conan O’Brien. This love of filmmaking comes out in Louie, where the tone ranges from dreamlike to painfully realistic. The show is definitely funny, but there are long stretches of drama that are often the most effective scenes in an episode. In fact, there’s a scene involving a garbage truck that is downright horrifying, and not because of anything trash-related.

The show draws comparisons to Seinfeld (a show I love, by the way) because C.K. plays himself as a working comedian in New York City, and interspersed within each episode are scenes in which C.K. is performing his comedy. There are some key differences to be observed, however. For one thing, C.K.’s comedy is darker, more philosophical and more personal than Seinfeld’s, and reveals much more about the character and C.K. himself. Also, C.K. actually repeats material that has been shown on earlier episodes, because that’s what comedians do in real life. Louie doesn’t really have plots in the way Seinfeld does (partly because it isn’t an ensemble show), often just ending without any kind of real resolution. I think you have to experience this to understand what I’m referring to. The events on the show just kind of happen and then stop happening. Finally, Louie has a strong streak of artiness that is intelligently and tastefully implemented within the sitcom format, has no laugh track, and not only blurs the line between the character and the man, but crosses it (especially in the recent episode featuring Dane Cook).

Parks and Recreation – I watched the pilot for this when it first came out… and hated it. I couldn’t stop cringing. Recently I decided to give it another chance based on repeatedly hearing about what a great show it is. The first season, which is only six episodes, grew on me by the fifth or sixth episode, and it didn’t take long into the second season for me to fall completely in love with the show. Amy Poehler is magnificent as Leslie Knope, who, yes, starts out uncomfortably reminiscent of Michael Scott, but quickly comes into her own. Knope is a loveable overachiever who excels at just about everything she puts her mind to, and she expects nothing less of her friends and coworkers (there is little distinction between the two). Those around her are often annoyed by her intensity, but also love her for it.

Parks and Recreation stands out in contrast to so many other shows because the main characters actually genuinely like each other. Two characters on the show, however, are unfairly reviled (as opposed to the understandably disliked – and hilarious – Jean-Ralphio), but this is done with such emphasis, including by nice characters, that it seems clear that it’s a commentary on the current trend of meanness as comedy (Knope’s disdain for salad is noteworthy in this context as well).

(As I write this, I’m reminded of the silly TV trend we are seeing lately in which a stupid husband is constantly being corrected by his much smarter and more together wife; but, about once per episode, the husband has to do something endearing to remind the wife – and viewer – of why she doesn’t just leave the guy.)

Oh, and I Need to Catch Up on from TV Land…

“It’s Always Sunny in Philadelphia,” Season 7 – This series has something in common with Louie in that it goes into some pretty dark dramatic territory (and as a show that tends to subdivide its ensemble cast into distinct, but often overlapping, subplots, it can also rightfully be compared to Seinfeld), but I think what distinguishes it is that there are dramatic moments that, while shorter than those on Louie, sometimes involve a level of gut-wrenching emotional intensity that I don’t think I’ve seen on any other sitcom (this might be because some of the cast members were originally aiming for careers in dramatic theatre). It’s also a show full of risks that often pay off with tremendous comedic value. To the left is a picture of the brilliant Kaitlin Olson, who plays Sweet Dee.

Finally, Some Movies I Look Forward To…

To name three: The Future by Miranda July, The Tree of Life by Terrence Malick, and Midnight in Paris by Woody Allen. And to name three more: just kidding… I’m noticing that this post is getting long.

Wait, did I mention how much I loved the movie Kick-Ass and the television series Party Down? Ok, ok… stopping now.

Take care, have a great holiday season!!

-Dan

Comments

One Response to “Update: Lanier, Jodorowsky, de Beauvoir, Airbender, et al.”
  1. [...] week, I explored some of my reactions to the first half of Jaron Lanier’s You Are Not a Gadget: A Manifesto (originally published in [...]
