Entries in technology (6)

Friday
Apr 12, 2019

Angering Ourselves To Death – Postman’s Brave New World Re-Re-Visited - Chapter 3

Chapter III: The Medium Is The Mass Surveillance

 An Amazon Alexa-enabled device.

In March 2018, a whistleblower told the Observer newspaper that UK-based political consulting firm Cambridge Analytica had harvested over 50 million Facebook profiles in a breach of data and privacy. Christopher Wylie, who worked with an academic at Cambridge University to gather the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. [We] built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

The data was collected through an app called thisisyourdigitallife, which posed as an online personality test. Exploiting various weaknesses in Facebook’s application programming interface (API), it collected profile information not only from those who authorised the app but also from their friends and their friends’ friends.

The information was used to target voters during the 2016 United States presidential election and the 2016 UK referendum on remaining in or leaving the European Union.

At least Cambridge Analytica had the courtesy to allow users to opt in. The invasive XKeyscore and Boundless Informant programs used by the NSA to collect signals intelligence and conduct mass surveillance on US and foreign citizens afforded users no such luxury.

As mentioned earlier, Facebook and other social media do not sell products or services directly; they provide a platform for marketers and advertisers to do so. The well-worn aphorism “the product they are selling is you” is not quite accurate. If we take the semanticist Korzybski’s maxim to heart – the word is not the thing – they are not selling you specifically, but a one-to-one simulacrum that extends beyond your own consciousness. Human consciousness is tempered by human unconsciousness; we forget, misplace information, and have moments of complete unawareness of our own behaviours.

Computers don’t.

A computer has perfect memory, perfect algorithms, perfect recall. It can know you better than you know yourself. Thus we, as humans, employ computers to learn more about our habits, wishes, frustrations, and desires. This is neither good nor ill in and of itself, but it can be put to either use by humans.

If we are being programmed, we are also being labelled, sorted, objectified, and tabulated. Facebook and its ilk have shifted human consciousness into accepting computers as a wholesale extension of our senses. For instance, it has become acceptable for activists to comb through large data sets such as Twitter feeds for politically incorrect comments. In December 2018, US entertainer and comedian Kevin Hart was ousted from his position as host of the 2019 Academy Awards after anti-gay slurs he had posted on Twitter between 2009 and 2011 resurfaced.

Whereas a human being may struggle to remember specific comments anyone uttered almost a decade prior, computers enhance our collective memories by providing instant storage and retrieval of anything and everything we have said or posted online. This leads to another aphorism: this is a feature, not a bug.

It is possible that the original founders of Facebook, Mark Zuckerberg and Eduardo Saverin, had no intention of creating a mass surveillance medium the likes of which the world has never seen. According to after-the-fact reports, Zuckerberg created “FaceMash” in his Harvard University dorm room in 2003 as an application to rate the relative attractiveness of girls on campus. It was later renamed “TheFacebook” and then simply “Facebook” in 2005. The original service was limited to colleges in Boston, then expanded to all university-level institutions, and eventually, in September 2006, to anyone with a valid email address over the age of 13.

Facebook exploited our desire for convenience and our want for human interaction. People could add “friends” on Facebook and share their opinions, photos, videos, and other content with one another. They could also join in on games. They could express their desires by an “opt-in” – the Facebook “like” button. “Liking” topics or webpages built up a profile of your preferences and interests, albeit manually. As of 2019, this is achieved via machine learning and artificial intelligence.

Facebook acquired photo-sharing app Instagram in 2012 and instant-messaging service WhatsApp in 2014. It launched its own proprietary messaging platform, Messenger, in 2015. According to the End User Licence Agreements, Facebook could use these applications on your phone to harvest data about your habits, including your location. In 2016, Facebook strenuously denied eavesdropping on conversations or using a smartphone’s camera or microphone to pick up vision or audio. Facebook has spent millions of dollars on PR to counteract these claims, saying that the advertising that pops up in feeds is a result of “frequency bias” or just plain coincidence.

In April 2019, Bloomberg confirmed that human technicians in the employ of Amazon listen to voice searches and other audio picked up from Alexa-enabled devices. This mix of contractors and employees based around the world is tasked with refining the voice search algorithm to produce better results. However, the nature of the medium is to have an “ear” out for keywords and phrases at all times. According to the article,

“Sometimes they [employees] hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.”

The media we consume and produce for is the mass surveillance; the Faustian bargain we’ve made with technology is coming back to haunt us in myriad ways. Mass surveillance by private entities is chilling enough; however, the panopticon effect of moral busybodies and invective-slinging do-gooders has also cost people their livelihoods. This public shaming by internet mob was made most famous in 2013, when corporate communications director Justine Sacco tweeted just as she departed for Cape Town: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” A joke in poor taste, it nevertheless became the number one worldwide trending topic on Twitter, creating a storm of controversy before Ms. Sacco had even stepped off the plane.

Because of these interconnections, both public and private, the mass surveillance nature of media is inescapable. The mass surveillance is having a profound effect on the way we parse language and its meaning; it breaks down the tacit disconnect between language as action and language as thought in action. Nuance becomes impossible; tribal and identitarian sentiments are rising. We are analogue people being programmed to think in binary ways.

Tuesday
Mar 5, 2019

Angering Ourselves To Death – Postman’s Brave New World Re-Re-Visited - Chapter 1

Chapter I: Postman’s Portent – The Brave New 1984

 Neil Postman.

 “We were keeping our eye on 1984. When the year came and the prophecy didn't, thoughtful Americans sang softly in praise of themselves. The roots of liberal democracy had held. Wherever else the terror had happened, we, at least, had not been visited by Orwellian nightmares.

“But we had forgotten that alongside Orwell's dark vision, there was another - slightly older, slightly less well known, equally chilling: Aldous Huxley's Brave New World. Contrary to common belief even among the educated, Huxley and Orwell did not prophesy the same thing. Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley's vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.” – Neil Postman, Amusing Ourselves To Death (1985)

In the 2018 documentary, Behind The Curve, a look at the worldwide community of people who believe the Earth is flat, filmmaker Daniel J. Clark asked prominent YouTuber Patricia Steere what sources of information she trusted. “Myself,” she said, laughing. “I jokingly said if there’s an event like…I’ll just use Boston Bombing again,” referring to the 2013 bombing incident at the Boston Marathon, “I won’t believe any of those events are real unless I myself get my leg blown off.”

It would seem her wilful ignorance when it comes to the curvature of the Earth is an apotheosis of media as environment, as culture – the magic of YouTube and the internet has “undone her capacity to think,” to borrow from author, media ecologist, and father of the modern media-as-environment school, Neil Postman, in his 1985 landmark book, Amusing Ourselves to Death.

What’s more telling is that her philosophical solipsism – the position that the self is all we can really know of reality – is not an isolated phenomenon. To be fair, it’s hard not to fall into it these days.

In his seminal book on the evolution of consciousness, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes argues that our pre-antiquity consciousness was not defined by recognising our thoughts as our own, but by one side of the brain “speaking” or “hallucinating” to another part that listens and obeys its commands. These commands were interpreted as the voices of gods. As bicameralism broke down, we externalised these voices into oracles, churches, and prayers, and eventually into scepticism that any such voices were derived from on high. A vestige of bicameralism survives in the verb to understand – to perceive or comprehend an intended meaning – which literally means to stand under a god giving instructions to a human receiver; or, in this case, an unconscious hemisphere of the brain commanding and a conscious hemisphere obeying those commands. Though we’ve moved past this bicameral state, we have not moved towards a state where we can authenticate information as “true” or “factual” just by looking at it.

As humans, we are limited. We use language and media to transmit our ideas, desires, knowledge, and so on to other people. As the semanticist Hayakawa put it, we use the “nervous systems of others” to help us achieve our goals. His most famous example is a soldier calling out to an observer for information on what is going on, and the observer reporting back: the soldier has “borrowed” the observer’s eyes and ears and gained a report thanks to a “loan” of his sensory systems. However, if the observer reports back false information, the soldier has not gained any knowledge at all. To use a well-worn analogy from the great General Semanticist Count Alfred Korzybski, the “map” the observer has provided of the “territory” – the reality of what is going on – is not only inaccurate but false. The observer may have relayed zero enemy activity when in fact he has seen multiple targets. The soldier is now imperilled because his internal “map” consists of this false image.

And the images we create each day are staggering. We, as humanity, produce 2.5 quintillion (2.5 × 10^18) bytes of new data each day, and the rate is accelerating. It would be impossible for any one human to observe and analyse the data we create per day in a lifetime. We are not oppressed by an external imposition; we are oppressed by how gigantic our media environment has become. If Patricia and her Flat Earth friends observed only one hundred-thousandth of the data generated per day, that would still yield 25 terabytes of data – 250 million images, 35,714 hour-long videos, or 416 hours of Virtual Reality content. With humans being this limited and navigating information systems so vast, can you blame Patricia for this ignorance? Ms. Steere could, if she wanted, live out her entire life without ever encountering an opposing viewpoint. She could call out only to observers who confirm her bias for the rest of her life and never run out of data to comb through.
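As a rough sanity check on those figures, here is a minimal back-of-envelope sketch. The per-item sizes are my own assumptions for illustration, not figures from any cited source: roughly 100 KB per photo, 700 MB per hour-long video, and 60 GB per hour of VR content.

```python
# Back-of-envelope check of the figures above. The per-item sizes are
# assumptions for illustration only.
DAILY_BYTES = 2.5e18      # ~2.5 quintillion bytes of new data per day
FRACTION = 1e-5           # one hundred-thousandth of a day's output

slice_bytes = DAILY_BYTES * FRACTION          # 2.5e13 bytes = 25 TB
photos = slice_bytes / 100e3                  # ~250 million photos at ~100 KB each
videos = slice_bytes / 700e6                  # ~35,700 hour-long videos at ~700 MB each
vr_hours = slice_bytes / 60e9                 # ~416 hours of VR content at ~60 GB per hour

print(f"{slice_bytes / 1e12:.0f} TB = {photos:,.0f} photos, "
      f"{videos:,.0f} videos, {vr_hours:,.0f} VR hours")
```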

From this perspective, Postman was right.

A Familiar, Not Brave, New World

However, we now have another layer of oppression to contend with: the technology we adore is used simultaneously for our surveillance and our gratification. One cannot exist without the other. In the 80s and 90s, civil libertarians called for a dismantling of the “surveillance state”: CCTV cameras on every corner, police keeping a watchful eye on the populace. In authoritarian regimes such as China, those cameras and listening devices serve this very function, a generational echo of the German Democratic Republic’s Stasi invading the private lives of citizens. China’s internet is censored around the clock by the “Great Firewall of China,” which blocks certain foreign websites with pro-democratic or anti-Chinese content, as well as by government agents moderating social media such as WeChat and Sina Weibo.

In 2013, former NSA contractor Edward Snowden revealed, with the help of Washington Post and Guardian journalists, that our electronic transmissions, such as those carried by Facebook, Google, and other social media, were being systematically harvested. The data we freely gave to these media feed the NSA-developed XKeyscore and Boundless Informant collection and visualisation tools, used for covert surveillance without due process.

Over one billion people use Instagram, for example. Apart from its uses as a data harvesting tool for advertisers or as a platform for marketers, it arguably has no functional purpose. It does not solve a problem of transmitting photos to other people; it could be perceived as another pleasurable toy, like those found in the vain and self-absorbed culture of Brave New World. Psychologists and others have linked social media to addiction, as other users’ “likes” and ego-strokes can release the neurotransmitter dopamine, popularly known as the “feel good” chemical. Dopamine is our “reward.” And like many addictions, dopamine “rewards” lose intensity with frequency; bigger and better “rewards” are required to feel the same “high.” In a cynical view, Instagram and other social media are much like a dopamine dispenser, in the same way a lever-operated food dispenser rewards rats in laboratory experiments.

Postman said we would come to love our oppression through the adoration of technology. He was, to an extent, saying feelings would become more sought after than facts. Though we live in Brave New World, there is a sinister apparatus underlying it – the world of 1984. Since we are unable to trust our media environments – the nervous systems of others – or even make proper sense of them due to the sheer volume of data we can interact with, the maps we carry around in our heads will be of lower and lower accuracy and quality. The amount of information we are aware we are not in possession of, or will never be in possession of, is near incalculable.

And this, for many reasons this series will explain, has made us very, very angry.

To be continued in Chapter II: The Media Malware Machine

Friday
Apr 19, 2013

Cos You Don't Wanna Miss A Thing: Twitter, music and predicting the present

If it’s good for celebrities, it’s good for you too. Endowed with mystical properties making their eyes gleam and their teeth porcelain, they’re just better than us in every conceivable way. If you can convince them that Twitter’s new Music app is useful, the unwashed masses will stream it onto their tablets as if it were manna from heaven.

Maybe not.

It does raise a question in this new age of music you rent in perpetuity: what use does this new Twitter app actually have?

In his new book Present Shock, media theorist Douglas Rushkoff posits that media-as-a-culture is no longer preoccupied with “futurism” but centred on “presentism.” We’re more interested in what’s happening now than in contextualising our experiences as distinct from past and future. Twitter, for example, is only useful in the now (not to borrow too heavily from Eckhart Tolle), losing worth as time elapses. Furthermore, the now is such a diffuse, high-level abstraction that it’s like attempting to catch a mosquito with a pin and a thimble.

Consider the mathematical equation. An equation is an expression of variables, one of which is unknown. The unknown variable is found using mathematical principles flowing forward in linear time, from A to B. The solution is clear-cut.

In computing and information technology, programs and hardware are thought of as panaceas for “problems”; it’s not uncommon to term them “solutions.” These “problems” are not structural, i.e., the problem is not the inability to arrive at an unknown variable. The majority of problems lie in not getting there fast, cheaply or efficiently enough to stay relevant in the “now.”

Simply: what problem does Twitter’s music app actually solve?

It doesn’t solve anything – for the consumer. In the age of the present, app developers aren’t savvy problem solvers; they’re problem finders. They convince the market that there exists a problem, contend to have solved it and profit handsomely.

Apps such as Pocket or Evernote, as useful as they are, “solved” the problem of links or notes being inaccessible on one device because they were stored on another. There was nothing structurally wrong or overly inefficient with, say, writing notes on pads of paper. Solutions readily existed.

Apps exist on your phone to solve problems that weren't problems until you "realised" they plagued you. Not knowing the name of that Journey song playing at the pub was never a life-threatening predicament, yet Shazam solves that problem for you. Easy.

But it cannily purports to have discovered a problem. Twitter is in the process of convincing us that emerging and popular trends in music are so complex and so amorphous that you need an app to navigate this ever-changing terrain of current music. The problem is that you’re lagging behind what’s cool and what’s about to be cool. The solution is this app. Get it now; bask in the electronic water of fleeting musical omniscience.

Except this app wasn’t designed with you in mind. It’s another column in a vast data set powering predictive analytics. It tracks, in real time, which users influence and which are influenced, what that influence channels users towards, and so on. Spotify’s and Rdio’s blind spot in building accurate big data sets is that they don’t know who influenced the music being played at any given time, nor to what degree. No one gives a shit about your shitty indie band unless someone gives you a reason. Sometimes that reason is none other than “who” rather than “why.”

The data set generated by my playing Belinda Carlisle 40 times in a row is discrete and limited. Spotify will know I love Belinda Carlisle. But if an external force influenced me, it has no real way of gleaning that information unless I directly clicked a link to the track from a certain page or Twitter feed.

By using Twitter’s new music app, Spotify et al. can track the locus of the influence. Music companies can make safer bets on pushing artists ahead of time. The guesswork in releasing a hit isn’t eliminated, but it is, once again, significantly reduced. Why sign ten acts to yield one hit when signing two or three definite winners is possible?
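To make the idea concrete, here is a minimal sketch of the kind of record such an attribution pipeline might keep. The field names and structure are hypothetical, not Twitter’s or Spotify’s actual schema.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class PlayEvent:
    """One listen, plus the influence trail described above (hypothetical schema)."""
    listener_id: str
    track_id: str
    timestamp: float
    referrer_user: Optional[str] = None     # who shared the link, if anyone
    referrer_channel: Optional[str] = None  # e.g. "twitter_music", "direct_search"

def influence_counts(events: List[PlayEvent]) -> Dict[str, int]:
    """Count how many plays each referrer drove: the 'locus of influence'."""
    counts: Dict[str, int] = {}
    for event in events:
        if event.referrer_user is not None:
            counts[event.referrer_user] = counts.get(event.referrer_user, 0) + 1
    return counts

# A play logged from a bare search carries no referrer; a play that arrived via
# a shared link does, which is precisely the signal a streaming service alone
# cannot see.
plays = [
    PlayEvent("me", "heaven_is_a_place_on_earth", 0.0),
    PlayEvent("me", "circles_in_the_sand", 1.0, "tastemaker_42", "twitter_music"),
]
print(influence_counts(plays))  # {'tastemaker_42': 1}
```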

It does solve a very real problem, and that problem lies in the A&R departments at the major labels. The jump in music sales, the first in over a decade, is partially due to this new “taxi fare,” pay-as-the-meter’s-running model. How does an exec turn sales up from a simmer to a roaring boil? By gleaning better data from more sources and tailoring strategies to the analytics.

So can this Twitter app really tell us what is hot right now? Without the mind of Nate Silver and the processing power of CERN at my disposal, I don’t know. And neither do you.

 

---

Read more: My post on the Spotify (counter-)revolution.

Tuesday
May 22, 2012

Spotify: The new/old musical counter-revolution

I got two packages in the mail - a vinyl record and a compact disc. All on the day that Australian music lovers would point their fingers and laugh at my stubborn luddism. Hadn't I heard? Spotify had finally launched Down Under! I could now stream any song I wanted from a pool of over sixteen million tracks filled by virtually all the major labels and independents, sailing across it with a totally "new" musical model.

As many pundits would have you believe, the Spotify "revolution" isn't one at all - it's not the Red Army storming the Winter Palace and declaring peace, bread and land for the people. It's akin to a bound and gagged Romanov family inexplicably sprouting laser turrets from their heads. Invigorated, they'd command the ghosts of Cossacks to rise from their graves and mercilessly hound Trotsky and his troops back toward the Ukraine. Spotify is a musical counter-revolution aiming to quash the orgiastic "free" producer/consumer-led music rebellion once and for all.

It’s so deliciously evil it beats life back into Monty Burns’ desiccated heart and has him whistling Dixie and calling Mater. (Ahoy-hoy?) Here’s why.

The digital arms race
Ever since the dawn of recorded music, the industry at large has kept its eye on one prize: controlling the content, the media and its distribution.[1] When gramophone records first appeared, it wasn’t uncommon to see recorded music sold via total vertical integration: ownership from top to bottom, from producer of the content to the point of purchase by the consumer. (Case in point: HMV, or “His Master’s Voice.”) The Compact Disc was a shift toward higher-fidelity media and lower overall manufacturing costs per unit.

The CD was jointly developed by Sony and Philips in the late 70s. CDs as a format gained consumer acceptance in the late 80s, when an economy of scale was established. Together, Sony and Philips paid for the research & development, marketing and manufacturing of both Compact Discs and the machines that would play them. Like all good R&D, they could on-license the technology to other companies. It’s a no-brainer – Sony and Philips were (and still are, to some extent!) multinational music labels possessing vast back catalogues and new talent primed for polymer pressing, proving positively pilfer-proof (until the late 1990s, as we all know).

But what to do! In the yawning sunrise of 2000 AD, the medium of playback and distribution went spectacularly rogue. A stylized cat harvested the innards of beige boxes, enabled by squeaky telephone wires. The pirates, once thought of as guerillas with nothing better to do than trade tapes around and occasionally burn a CD for a few bucks a pop, were now legion, moving torrents (oh, I love this water analogy) of (almost!) intangible data across networks without proper authorization from intellectual property holders. The content was there, like it had been since Tin Pan Alley and even centuries before 'round the campfire. Yet the stranglehold on media and distribution methods slipped the industry's grasp virtually overnight. It felt like no amount of speech-impeded Danes with expensive lawyers could ever halt their revolutionary advance.

Commodification à la mode and a cup of tea
So what now? Do record companies under the aegis of the RIAA and their cronies hunt down pirates and strong-arm them back toward their sanctioned tripartite model of music consumption? Or do they spend more money than they’re prepared to on R&D, creating a new medium and a new distribution method?

The iTunes model seemed “revolutionary” at the time – you know, telling people to pay for something they could get illegally for free – lest the counter-revolutionary martinets bound in and lay down the(ir) law. It was a step forward from CDs, sure. Slapping all the DRM in the world onto files still meant people "got" something. “Our content was never yours to begin with and now we’re keeping it,” they bellowed.

And lo, Spotify and its ilk emerged.

Record companies own the content. That's a given. The clever rub lies here: remove the medium and utilize an established distribution network, which in its present broadband form has existed for about fifteen years. Spotify and co. seek to change the concept, or perception, of content ownership back to a near pre-technological state, much like the age of travelling band shows of yore. Yes, you may hear the music, but you can no longer hold it in your hands.

By removing the physical, or even the illusion of physicality (files on a hard drive), the medium and the distribution are in a state of simultaneous allness and nothingness; it’s always “on” yet you can never “have” the music. It's "your" song when you choose it - like out of a jukebox - but once the last note decays, so does your claim over it (not that you really had one in the first place). You can “search” the (not your) collection but it’s never “yours” – they’re the gatekeepers and you pay for them to lower the drawbridge. Once inside their opaque vaults, they're able to track your playing habits to sell you more of what you already want. Then you're their billboard as they publish every guilty play of Pat Benatar to your friends on Facebook. It’s like the IKEA of promotion – IKEA keep their prices low because they outsource the construction of the product to you. Now Spotify have got you doing their marketing for them, too.

If budding content producers are paid a pitiful commission, so much the better in the eyes of the industry. By melding (or abnegating) the medium, they’ve lowered the price of music and also its value. If Spotify spends the same amount of money on the rights to the new Gotye record (quelle horreur) as on the entire back catalogue of Darkthrone, say, then what is the differential of worth between the two? There is none. The only savvy trick the labels can pull is restricting the “supply” of Gotye (or someone just as horrible and popular), but that would distort the market and their profit margins (in this new medium-lite model). Make everything on offer the same (pre-paid) price per click, throw in some ads and the money rolls in regardless. Not much for those who wish to furnish Spotify with music, but big payoffs for those who control mammoth oceans - not paper cups full - of content.

But what really fucking burns my potatoes is that Spotify is the closest thing we have to the real pop music experience. Richard Meltzer, in his inquiry/parody The Aesthetics of Rock, posited that rock and pop music is the act of making the mundane interesting and exciting. Shit, if you can make money off it, so much the better.

Spotify is accessible on a desktop computer, which you more than likely stare into each day to earn those dollars to pay for, well, Spotify. For the fraction of a second that your consciousness wanders toward the sublime tongue of rock and pop in all its tinned ferocity on your shitty laptop speakers, the music industry suits have not only breathed a sigh of relief, their tar-stained cackles can be heard from a blue million miles away...

Like I said, it’s pure evil fucking genius.

---
1: Jones, S., Rock Formation: Music, Technology and Mass Communication, Sage Publications: Newbury Park, CA, 1992, p. 185.

Saturday
Jul 16, 2011

Where's Our Google, Too?

I felt compelled to add my opinion to the billion-strong chorus of ill-baked and half-formed critiques and hagiographies of Google+ on the basis that none of them seemed to catch on to some fundamental facets of media ecology. Media ecology, put simply, is the study of media as environments; it was pioneered by Marshall McLuhan, Neil Postman, Jacques Ellul and many others. In honor of ABC Radio National's week-long celebration of the life and work of Marshall McLuhan, I present my simple media-ecological analysis of Google+ and why I don't feel it'll take off to Facebook proportions.

1. Because it's Google+, not Google 2: Electric Googleoo

The mantra of media ecology, especially that of the late, great Neil Postman, is that new media are not additive but transformative. You don't get a culture plus television; you get a completely new way of disseminating and interpreting information. Twenty years ago, not everyone needed a computer. But in 2011, go into someone's home and chances are you'll see a computer in residence with a connection to the internet. Computers hooked up to the internet are a material change to our culture that results in a behavioral change. Go to any restaurant and see the new table adornments: black rectangles that go "ping" when your date is talking about new boots or football or whatever.

Google+ only works on the premise that it will make a material or behavioral change to your life somehow. If you intend to own a Chromebook, then yes - Google+ makes total sense. On a machine running Chromium OS, Google+ fits right into the entire purpose of the operating system and the computer: a purely web-based machine and experience.

If you don't own one, and don't intend to, it has to offer something drastically new and substantially cooler than Facebook to kick the Facebook habit.

2. The people who give a shit about it give a shit already

I've noticed no one is pestering me for invites any more - partly because they don't like me and mostly because those who already want it have it, and those who don't give a shit...well, don't give a shit. Google+ has all but hit a critical mass of people who give a shit about it, and now that anyone can send an invite, the give-a-shit factor has taken a nosedive. Those who do give a shit evangelize about it as the Facebook killer but inevitably hit the obvious roadblock:

"So what's it like?" asks the incredulous bystander. "It's like bringing all your friends together, but you can follow other people you think are cool and you put them into circles and it's AWESOME," replies the Google+ zealot.

"So it's like Facebook."

"Yes, but better."

But is it better? Faster? Harder? Stronger? In what way? Pick any one of the preceding and it's especially difficult to evaluate whether it's even true. But there is one way, which I'll explain later.

3. Pitching something to everyone means you need to make a habit out of it

Facebook was revered by university students because it couched them in a sort of electronic elitism - don't go to uni? Well fuck you, you can't use Facebook. Before long it was available to high school students, technical colleges and eventually everyone. Then it opened itself up to the internet and segued into the background of the web experience, not as the go-to site of the minute. It became a habit.

G+ seems to work on the premise that it's simple enough for the web-only Chromium set but also powerful and malleable enough for the media "gurus" and code monkeys. Where does that leave the people in the middle? Killing e-cows with their mafia goons on Facebook. It's difficult to change a habitual behavior, and the reason to change has to be compelling. Facebook wasn't built on a new premise, but its advantage over MySpace? It successfully broke down an ingrained habit (for some) and facilitated other people forming new ones.

Your friend posts a photo of what they're eating, every day? It's the online equivalent of twirling one's hair or tapping one's foot, mostly unconsciously. (How can you spend 2 hours on that fucking thing without realizing, I mean, seriously.) Perhaps we all need an e-Gestalt therapist to ask us "Vat is the sik-niff-ee-kunss of zat what you are doing zere?"

Can Google+ achieve the same thing? I doubt it - at this stage. To get to Facebook or even Twitter status, it has to become a lasting and integral part of our everyday experience. Right now it's like, "Oh yeah, shit, I have Google+. I should post this blog post about Google+ on it, right now!"

Even those who signed up for Facebook and didn't make a habit out of it would probably log in and find their notifications area awash with red. If there's no sustained buzz, I suppose we can wave it away.