Magic and Loss

Author: Virginia Heffernan
Language: English, Paperback
Just as Susan Sontag did for photography and Marshall McLuhan did for television, Virginia Heffernan (called one of the “best living writers of English prose”) reveals the logic and aesthetics behind the Internet.
Since its inception, the Internet has morphed from merely an extension of traditional media into its own full-fledged civilization. It is among mankind’s great masterpieces: a massive work of art. As an idea, it rivals monotheism. We all inhabit this fascinating place. But its deep logic, its cultural potential, and its societal impact often elude us. In this deep and thoughtful book, Virginia Heffernan presents an original and far-reaching analysis of what the Internet is and does.
Life online, in its highly visual, social, portable, and global incarnation, rewards certain virtues. The new medium favors speed, accuracy, wit, prolificacy, and versatility, and its form and functions are changing how we perceive, experience, and understand the world.

Price: 96.96 lei

New

Express Points: 145

Estimated price in foreign currency:
18.56€ 19.34$ 15.45£

Book temporarily unavailable


Phone orders: 021 569.72.76

Specifications

ISBN-13: 9781501147074
ISBN-10: 1501147072
Dimensions: 137 x 211 x 18 mm
Weight: 0.32 kg
Publisher: Simon & Schuster Export

Biographical note

Virginia Heffernan writes regularly about digital culture for The New York Times Magazine. In 2005, Heffernan (with cowriter Mike Albo) published the cult comic novel The Underminer (Bloomsbury). In 2002, she received her PhD in English Literature from Harvard.

Excerpt

Magic and Loss

1

DESIGN


Instead of introducing a narrative or a lyric structure, an app game called Hundreds begins with a hazy dynamic: expanding. A player meets no characters; rather she’s put in mind of broadening her horizons, dilating on a subject, swelling with pride. Cued by dreamlike graphics, she feels her neurons inflate.

Next she’s abstractly navigating a crowd in that expansive state. She’s flinching to keep from touching anyone else. Then, on top of all that, she is shot through with the urgent need to get someone alone, to guide him away from the crowd. Finally she’s doing this while trying to avoid the blades of a low ceiling fan.

These obscure neurological half-narrative states and others, far stranger, are cunningly evinced by Hundreds, a masterpiece of a mobile puzzle game by Greg Wohlwend and Semi Secret Software. In Hundreds (as in 1010!, Monument Valley, and the marvelous blockbuster Minecraft), much of the best digital design bypasses language and can only be evoked by it, not denoted precisely.

Superb and sleek digital design like Semi Secret Software’s now lives in apps. These apps are not so much intuitive as indulgent, and they put users far from the madding crowd of the World Wide Web. The extreme elegance of app design has surfaced, in fact, in reaction to the extreme inelegance of the Web.

Appreciating the Web’s entrenched inelegance is the key to understanding digital design both on- and offline. Cruise through the gargantuan sites—YouTube, Amazon, Yahoo!—and it’s as though modernism never existed. Twentieth-century print design never existed. European and Japanese design never existed. The Web’s aesthetic might be called late-stage Atlantic City or early-stage Mall of America. Eighties network television. Cacophonous palette, ad hoc everything, unbidden ads forever rampaging through one’s field of vision, to be batted or tweezed away like ticks bearing Lyme disease.

THE ADMIRING BOG


Take Twitter, with its fragmentary communications and design scheme of sky-blue birdies, checkmarks, and homebrew icons for retweets, at-replies, hashtags, and hearts. It’s exemplary of the graphic Web, almost made to be fled. Twitter’s graphics can be crisp and flowy at once, if you’re in the mood to appreciate them, but the whole world of Twitter can rapidly turn malarial and boggy. The me-me-me clamor of tweeters brings to mind Emily Dickinson’s lines about the disgrace of fame: “How public—like a Frog—/To tell one’s name—the livelong June—/To an admiring Bog!”

That boggy quality of the Web—or, in city terms, its ghetto quality—was brought forcefully to light in 2009, in a sly, fuck-you talk by Bruce Sterling, the cyberpunk writer, at South by Southwest in Austin, Texas. The Nietzschean devilishness of this remarkable speech seems to have gone unnoticed, but to a few in attendance it marked a turning point in the Internet’s unqualified celebration of “connectivity” as cultural magic. In fact, Sterling made clear, connectivity might represent a grievous cultural loss.

Connectivity is nothing to be proud of, Sterling ventured. The clearest symbol of poverty—not canniness, not the avant-garde—is dependence on connections like social media, Skype, and WhatsApp. “Poor folk love their cell phones!” he practically sneered. Affecting princely contempt for regular people, he unsettled the room. To a crowd that typically prefers onward-and-upward news about technology, Sterling’s was a sadistically successful rhetorical strategy. “Poor folk love their cell phones!” had the ring of one of those haughty but unforgettable expressions of condescension, like the Middle Eastern treasure “The dog barks; the caravan passes.”

Connectivity is poverty, eh? Only the poor, defined broadly as those without better options, are obsessed with their connections. Anyone with a strong soul or a fat wallet turns his ringer off for good and cultivates private gardens (or mod loft spaces, like Hundreds) that keep the din of the Web far away. The real man of leisure savors solitude or intimacy with friends, presumably surrounded by books and film and paintings and wine and vinyl—original things that stay where they are and cannot be copied and corrupted and shot around the globe with a few clicks of a keyboard.

Sterling’s idea stings. The connections that feel like wealth to many of us—call us the impoverished, we who brave Facebook ads and privacy concerns—are in fact meager, more meager even than inflated dollars. What’s worse, these connections are liabilities that we pretend are assets. We live on the Web in these hideous conditions of overcrowding only because—it suddenly seems so obvious—we can’t afford privacy. And then, lest we confront our horror, we call this cramped ghetto our happy home!

Twitter is ten years old. Early enthusiasts who used it for barhopping bulletins have cooled on it. Corporations, institutions, and public-relations firms now tweet like terrified maniacs. The “ambient awareness” that Clive Thompson recognized in his early writings on social media is still intact. But the emotional force of all this contact may have changed in the context of the economic collapse of 2008.

Where once it was engaging to read about a friend’s fever or a cousin’s job complaints, today the same kind of posts, and from broader and broader audiences, can seem threatening. Encroaching. Suffocating. Our communications, telegraphically phrased so as to take up only our allotted space, are all too close to one another. There’s no place to get a breath in the Twitter interface; all our thoughts live in stacked capsules, crunched up to stay small, as in some dystopic hive of the future. Or maybe not the future. Maybe now. Twitter could already be a jam-packed, polluted city where the ambient awareness we all have of one another’s bodies might seem picturesque to sociologists but has become stifling to those in the middle of it.

In my bolshevik-for-the-Internet days I used to think that writers on the Web who feared Twitter were just being old-fashioned and precious. Now while I brood on the maxim “Connectivity is poverty,” I can’t help wondering if I’ve turned into a banged-up street kid, stuck in a cruel and crowded neighborhood, trying to convince myself that regular beatings give me character. Maybe the truth is that I wish I could get out of this place and live as I imagine some nondigital or predigital writers do: among family and friends, in big, beautiful houses, with precious, irreplaceable objects.

The something lost in the design of the Web may be dignity—maybe my dignity. Michael Pollan wrote that we should refuse to eat anything our grandmothers wouldn’t recognize as food. In the years I spent at Yahoo! News—not content-farming, exactly, but designing something on a continuum with click bait, allowing ads into my bio, and being trained (as a talking head) to deliver corporate propaganda rather than report the news—I realized I was doing something my grandmothers wouldn’t have recognized as journalism. Privately I was glad neither of them had lived long enough to witness my tour of duty in that corner of the Web, doing Go-Gurt journalism.

RESPITE


Which brings me back to Hundreds and the other achingly beautiful apps, many of which could pass for objects of Italian design or French cinema. Shifting mental seas define the experience of these apps, as they do any effective graphic scheme in digital life, in which the best UX doesn’t dictate mental space; it maps it. These apps caress the subconscious. The graphic gameplay on Hundreds seems to take place in amniotic fluid. The palette is neonatal: black, white, and red. The path through is intuition.

And this is strictly graphics. No language. Text here is deep-sixed as the clutter that graphic designers always suspected it was. The new games and devices never offer anything so pedestrian as verbal instructions in numbered chunks of prose. “If they touch when red then you are dead,” flatly states a surreal sign encountered partway through Hundreds’s earliest levels. That’s really the only guideline you get on how Hundreds is played.

Playing Hundreds is a wordless experience. Even that red/dead line of poetry is more music than meaning. There’s an eternity to the graphic swirl there; it’s the alpha and the omega. “Death” would be too human and narrative an event to happen to the fog-toned circle-protagonists. These circles mostly start at zero. You drive up the value of the circles by touching them and holding them down, aiming each time to make the collective value of the circles total 100 before they run into an obstacle, like a circle saw.
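
Stated as code rather than swirl, the rule set the game withholds is tiny. Here is a minimal Python sketch of the win/lose logic as this chapter describes it; the Circle class, the notion that a circle runs red while held, and the collision input are a reader's illustrative assumptions, not Semi Secret Software's actual implementation.

    # A minimal sketch of the Hundreds rules as described above -- not Semi
    # Secret Software's code. That a circle is red while held is an assumption.

    class Circle:
        def __init__(self):
            self.value = 0      # circles "mostly start at zero"
            self.red = False

        def hold(self):
            self.value += 1     # touching and holding drives the value up
            self.red = True     # assumed: a growing circle is a red circle

        def release(self):
            self.red = False

    def round_state(circles, touching_pairs):
        """touching_pairs: pairs of circles currently in contact."""
        if any(a.red or b.red for a, b in touching_pairs):
            return "lost"       # "If they touch when red then you are dead"
        if sum(c.value for c in circles) >= 100:
            return "won"        # the collective value totals 100
        return "playing"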

Nothing about losing in Hundreds feels like dying. The music continues; the round can be replayed. No pigs (as in Angry Birds) or shirtless terrorists (as in Call of Duty) snort and gloat. You start again. Who says losing is not winning, and the other way around? In Hundreds even gravity is inconstant.

FRISBEE FOREVER


Digital, kaleidoscopic design can serve to undermine language. To deconstruct it. Deconstruct is still a frightening word, bringing to mind auteur architects and Frenchmen in capes. Here I use it to mean that digital design, especially in games, can call attention to the metaphors in language and teasingly demonstrate how those metaphors are at odds with language’s straight-up, logical claims. So life and death are binary opposites? Not on Hundreds, which teaches the sublingual brain that life and death are continuous, world without end. Mixing up life and death in this way is, in fact, the operative principle of video games, as Tom Bissell’s masterful Extra Lives: Why Video Games Matter convincingly argues.

Before the Internet, but presciently, Marshall McLuhan credited the world’s new wiredness with dissolving binaries in the way of Buddhism: “Electric circuitry” (which elsewhere he calls “an extension of the human nervous system”) “is Orientalizing the West. The contained, the distinct, the separate—our Western legacy—is being replaced by the flowing, the unified, the fused.”

Where some game design breaks down language and the distinctions that undergird it, other design is tightly structuralist, instantiating boundaries and reminding players that they’re contained, distinct, and separate. Frisbee Forever, a kid’s game I’m choosing almost at random, works this way. A free candy-colored mobile game in which the player steers a Frisbee through a variety of graphic environments that look variously beachy, snowy, and Old-Westy, Frisbee Forever is one of those garish games at which some parents look askance. But the very week I downloaded Frisbee Forever for my then-six-year-old son, Ben, the Supreme Court ruled that video games were entitled to First Amendment protection, just like books, plays, and movies. I decided the game formally had redeeming value when I read Justice Scalia’s words: “Video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world).”

So what’s the idea—and even the social message—behind Frisbee Forever? The message is deep in the design: Never give up. Like many successful games, Frisbee Forever is built within a pixel of its life to discourage players from quitting—because if you quit, you can’t get hooked. The game’s graphic mechanics gently but expertly escort players between the shoals of boredom (“Too easy!”) and frustration (“Too hard!”). This Scylla-and-Charybdis logic is thematized in the design of many popular app games (Subway Surfers, the gorgeous Alto’s Adventure, and many of the so-called endless runner games). At PBS’s website, for which educational games are always being designed, this protean experience is called “self-leveling.” Tailored tests and self-leveling games minimize boredom and frustration so that—in theory, anyway—more people see them through.
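
How might a self-leveling loop actually work? A toy sketch follows; the win-rate thresholds, step size, and results window are invented for illustration and imply no studio's real tuning.

    # A toy self-leveling loop: nudge difficulty up when the player coasts
    # ("Too easy!") and down when she keeps failing ("Too hard!").
    # All numbers here are invented for illustration.

    def self_level(difficulty, recent_results, lo=0.4, hi=0.8, step=0.1):
        """recent_results: True/False outcomes of the last few rounds."""
        if not recent_results:
            return difficulty
        win_rate = sum(recent_results) / len(recent_results)
        if win_rate > hi:         # coasting: throw the player a curve
            difficulty += step
        elif win_rate < lo:       # flailing: throw the player a bone
            difficulty -= step
        return max(0.0, min(1.0, difficulty))

    # Nine wins in the last ten rounds nudges difficulty from 0.5 to 0.6:
    print(self_level(0.5, [True] * 9 + [False]))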

This is certainly the logic behind Frisbee Forever. Just as a player steers her disc to keep it in the air, so Frisbee Forever steers her mood to keep her in the game. It’s like a model parent. If a kid’s attention wanders and his play becomes lackluster, the game throws him a curve to wake him up. If he keeps crashing and craves some encouragement, the game throws him a bone. Curve, bone, bone, curve. Like life.

And that’s a potential problem. What’s lost is bracing disorder, the spontaneous adaptations that lead to art and adventure and education. Frisbee Forever—and anything else self-leveling—conjures a fantasy world that’s extremely useful when life’s disorderly. But when things settle down in reality, the Frisbee game is too exciting. It does nothing to teach the all-important patience and tolerance for boredom that are central to learning: how to stand in line, how to wait at Baggage Claim, how to concentrate on a draggy passage of text. In fact self-leveling games suggest you never have to be bored. At the same time, Frisbee Forever is not nearly challenging enough. In real life you have to learn to tolerate frustration: how not to storm away when the pitcher is throwing strikes, how to settle for an Italian ice when sundaes are forbidden, how to try the sixth subtraction problem when you’ve gotten the first five wrong.

I find pleasing magic in the design of many digital and digitized games: Angry Birds, WordBrain, Bejeweled, Candy Crush. But I use their graphic worlds to keep myself safe from unstructured experience. To shut out mayhem and calm my mind. Often I find I want to keep the parameters of boredom and frustration narrow. I feel I need to confront rigged cartoonish challenges that, as it happens, you can—with pleasurable effort—perfectly meet. Games, like nothing else, give me a break from the feeling that I’m either too dumb or too smart for this world.

I’m not the only one in my demo. Thanks to the explosion of mobile games that have drawn in the crossword and Sudoku crowd, adult women now make up a bigger proportion of gamers (37 percent) than do boys eighteen and younger (15 percent), according to a study by the Entertainment Software Association. The average age of gamers is now thirty-five.

But of course I wonder what real challenges and stretches of fertile boredom, undesigned landscapes, and surprises I’m denying myself. And maybe denying my children.

SPRAWL


The schism between the almost fascist elegance of the sexiest apps, like Hundreds, and the chaotic-ghetto graphic scheme of the Web may have been inevitable. In the quarter-century since Tim Berners-Lee created the immensely popular system of hyperlinks known as the World Wide Web, the Web has become a teeming, sprawling commercial metropolis, its marquee sites so crammed with links, graphics, ads, and tarty bids for attention that they’re frightening to behold. As a design object, it’s a wreck.

There are two reasons for this. Two laws, even. And complain as we might, these two laws will keep the Web from ever looking like a Ferrari, Vogue, or the Tate Gallery. It will never even look like a Macintosh or an iPad, which is why Apple has taken such pains since the App Store opened to distance itself from the open Web, that populist place that is in every way open-source and to which we all regularly contribute, even if just with a Facebook like or an Etsy review.

1. The Web is commercial space.


The major links and sites are, of course, now paid for by advertisers, who covet click-throughs—or, better yet, taps of the “buy” button, which started to figure prominently on sites like Pinterest in 2015—and never stop fishing for attention. You think you’re reading when you’re on the Web; in fact you’re being read. This is why the Web is now palpable as the massively multiplayer online role-playing game it’s always been. You are playing the house when you play the Web, and the house is better at reading you than you are at reading it. To return to the bolshevik framework, Read or be read is today’s answer to Lenin’s old who-whom question: Who will dominate whom?

The fact of this jockeying came home to me forcefully the first time Google introduced Panda, a series of changes to the company’s search algorithm that reconfigured the felt experience of the Web. That’s right: Panda influenced the whole Web. As surely as the graphic scheme of my desktop and gadgets is determined by Apple, the graphic scheme of my life on the Internet is determined by Google.

Before Panda was rolled out in 2011, the Web had started to look Hobbesian, bleak and studded with content farms, which used headlines, keywords, and other tricks to lure Web users into looking at video ads. Even after its censure by Google, that dystopian version of the Web—as ungovernable content—is always in the offing. It’s like demented and crime-ridden New York City: even after Giuliani and Bloomberg, we know that city could always come back.

Here’s a flashback to the bonkers Web of 2011, as surreal as it sounds: Bosses were driving writers to make more words and pull photos faster and for less pay so they could be grafted onto video that came with obnoxious preroll advertisements. Readers paid for exposure to this cheaply made “media” in the precious currency of their attention. Prominent sites like Associated Content, Answerbag, Demand Media, parts of CNN, part of AOL, and About.com (which was then owned by the New York Times) looked creepy and hollow, a zombie version of in-flight magazines.

“Another passenger of the vehicle has also been announced to be dead,” declared one muddled sentence on Associated Content. “Like many fans of the popular ‘Jackass’ franchise, Dunn’s life and pranks meant a great amount to me.” This nonsense was churned out in a freelance, white-collar version of the Triangle Shirtwaist Factory. Many content-farm writers had deadlines as frequently as every twenty-five minutes. Others were expected to turn around reported pieces, containing interviews with several experts, in an hour. Some composed, edited, formatted, and published ten articles in a single shift—often a night shift. Oliver Miller, a journalist with an MFA in fiction from Sarah Lawrence, told me that AOL paid him about $28,000 for writing 300,000 words about television, all based on fragments of shows he’d never seen, filed in half-hour intervals, on a graveyard shift that ran from 11 p.m. to 7 or 8 in the morning.

Miller’s job was to cram together words that someone’s research had suggested might be in demand on Google, position these strings as titles and headlines, filigree them with other words, and style the whole confection to look vaguely like an article. Readers coming to AOL expecting information might soon flee this wasteland, but ideally they’d first watch a video clip with ads on it. Their visits would also register as page views, which AOL could then sell to advertisers.

A leaked memo from 2011 called “The AOL Way” detailed the philosophy behind this. Journalists were expected to “identify high-demand topics” and review the “hi-vol, lo-cost” content—those are articles and art, folks—for such important literary virtues as Google rank and social media traction. In 2014 Time magazine similarly admitted to ranking its journalists on a scale of advertiser friendliness: how compatible their work was with advertising and the goals of the business side of the enterprise.

Before Google essentially shut down the content farms by introducing Panda to reward “high-value” content (defined in part by sites that had links to and from credible sources), the Economist admiringly described Associated Content and Demand Media as cleverly cynical operations that “aim to produce content at a price so low that even meager advertising revenue can support [it].”

So that’s the way the trap was designed, and that’s the logic of the Web content economy. You pay little or nothing to writers and designers and make readers pay a lot, in the form of their eyeballs. But readers get zero back: no useful content. That’s the logic of the content farm: an eyeball for nothing. “Do you guys even CARE what I write? Does it make any difference if it’s good or bad?” Miller asked his boss one night by instant message. He says the reply was brief: “Not really.”

You can’t mess with Google forever. In 2011 the corporation changed its search algorithm; it now sends untrustworthy, repetitive, and unsatisfying content to the back of the class. No more A’s for cheaters. But the logic of content farms—vast quantities of low-quality images attached to high-demand search bait—still holds, and these days media companies like feverish BuzzFeed and lumbering HuffPo are finding ready workarounds for Google, luring people through social media instead of search, creating click bait rather than search bait, passing off ads as editorials. These are just new traps.

That’s why the graphic artifacts of the Web civilization don’t act like art. They act like games. I’m talking about everything from the navigational arrows to the contrasting-color links, the boxes to type in, and the clickable buttons. Rather than leave you to kick back and surf in peace, like a museum-goer or a flaneur or a reader, the Web interface is baited at every turn to get you to bite. To touch the keyboard. To click. To give yourself up: Papieren! To stay on some sites and leave others. If Web design makes you nervous, it’s doing what it’s supposed to do. The graphics manipulate you, like a souk full of hustlers, into taking many small, anxious actions: answering questions, paging through slide shows, punching in your email address.

That’s the first reason the Web is a graphic mess: it’s designed to weaken, confound, and pickpocket you.

2. The Web is collaborative space.


The second reason the Web looks chaotic is that there’s no rhyme or reason to its graphic foundations.

In short order, starting in the 1990s, the Web had to gin up a universal language for design grammar. The result was exuberance, ad hoccery, and arrogance. Why? In 1973 the Xerox Alto introduced the white bitmap display, which Apple promptly copied with the Macintosh’s bellwether graphical interfaces. As a verbal person, I feel some nostalgia for what might have been. The evocative phosphor-green letters on a deep-space background that I grew up with gave way to the smiley Mac face and the white bitmap that turned computing entirely opaque. After I saw the Mac I lost interest in learning to code. I was like an aspiring activist who, before even getting started, was defeated by a thick layer of propaganda that made the system seem impenetrable.

In literary critical terms inherited from the great Erich Auerbach in Mimesis, the grammar of the computer interface would go from parataxis—weak connectives, like all that black space, which allowed the imagination to liberally supply and tease out meaning—to hypotaxis, in which hierarchies of meaning and interpretive connections are tightly made for a user, the visual field is entirely programmed, and, at worst, the imagination is shut out.

When I switched to a Mac from my Zenith Z-19 dumb terminal, called “dumb” because it had nothing in its head till I dialed in to a mainframe, I bore witness to the dramatic transition from phosphor to bitmaps. Gone was the existential Old Testament or Star Trek nothingness of those phosphor screens—you can picture them from WarGames—which left you to wonder who or what was out there. The new, tight, white interface snubbed that kind of inquiry and seemed to lock you out of the “friendly” graphical façade. It was as though a deep, wise, grooved, seductive, complex college friend had suddenly been given a face-lift, a makeover, and a course in salesmanship. She seemed friendly and cute, all right, but generically and then horrifyingly so. Nothing I did could ever bait her into a free-flowing, speculative, romantic, melancholic, or poetic relationship ever again.

As for coders, they have known since the bitmap appeared that rectangular screens, indexed by two coordinates, would demand design. And they were thrilled. The first design could be written this way: N46 = black; F79 = white; and so on. Though vastly more expensive, the bitmap display was greedily embraced by the computer companies of the 1960s and 1970s for a significant reason: coders hugely preferred its grid and iconography over linear letters and numbers.
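
In modern terms that first design is nothing more than a two-dimensional lookup table of bits. A minimal sketch; reading the letter as a spreadsheet-style row index and choosing an 80-by-24 grid are assumptions, since the original coordinate scheme isn't specified here.

    # A 1-bit bitmap: a grid of cells indexed by two coordinates, per the
    # "N46 = black; F79 = white" idea. Reading the letter as a row index
    # (A = row 0) and the 80 x 24 dimensions are assumptions.

    WIDTH, HEIGHT = 80, 24
    bitmap = [[0] * WIDTH for _ in range(HEIGHT)]   # 0 = white, 1 = black

    def set_cell(coord, black):
        row = ord(coord[0]) - ord("A")   # "N" -> row 13
        col = int(coord[1:])             # "46" -> column 46
        bitmap[row][col] = 1 if black else 0

    set_cell("N46", black=True)
    set_cell("F79", black=False)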

But why? For the answer I had to look to the testimony of coders on the subject, and I found that the profession’s preference for graphics and iconography over straight text has to do with cognitive wiring. Like Nicholas Negroponte, the dyslexic founder of MIT’s Media Lab who as a child preferred train schedules to books, and Steve Jobs, who liked calligraphy more than words, many computer types shun narrative, sentences, and ordinary left-to-right reading. Dyslexic programmers, not shy about their diagnosis, convene on Reddit threads and support sites, where they share fine-grained cognitive experiences. A sample comes from a blog post called “The Dyslexic Programmer” by a coder named Beth Andres-Beck:

My dyslexia means that the most important thing for me about a language is the tool support, which often rules out new, hip languages. It took me a while to figure out that my dyslexia was the reason I and the command-line centric programmers would never agree. I’ve faced prejudice against non-text-editor programmers, but often only until the first time they watch me debug something in my head. We all have our strengths ;-)

“Debugging,” according to the educational theorist (and notable dyslexic) Cathy N. Davidson in Now You See It, is actually an agricultural skill that may even be at odds with traditional literacy. It allows farmers to look at a field of alfalfa and see that three stalks are growing wrong and the fertilization scheme must be adjusted. This is a frame of mind known to coders like Andres-Beck, who can identify the bug in vast fields of code without seeming to “read” each line.

For debuggers, as Davidson makes plain in her book, traditional text is an encumbrance to learning. And as Andres-Beck observes, command-line interfaces that use successive lines of text, like books, also bedevil those who find reading challenging. More symbolic, spatially oriented “languages” begin to seem not just friendlier to this cognitive orientation; they look progressive. In 1973, when programmers glimpsed the possibilities of the Xerox bitmap, they never looked back.

This kind of interface defied the disorientation that had long been induced by letters on paper. Torah scribes in the first century defined literacy as the capacity to orient oneself in tight lines of text on a scroll without whitespace, pages, punctuation, or even vowels to mark spots for breath or other spatial signposts. A real reader in ancient times—and there weren’t many—was expected to mentally go a long way to meet a text. To many programmers—with a visual, agri-debugger’s intelligence at sharp odds with this practice—this kind of literacy seems ludicrous, even backward. (“Once you use a structurally aware editor going back to shuffling lines around is medieval,” writes one satisfied customer of nontext coding on Reddit.)

As Negroponte explains in Being Digital, many digital natives, and boomers and Gen Xers who went digital, are drawn to the jumpy, nonlinear connections that computer code makes. On the site io9, recent studies of the differences between Chinese dyslexics and English dyslexics were used to make the case that dyslexics make good programmers, since programming languages contain the symbolic, pictographic elements that many dyslexics prefer.

In the 1970s and 1980s bitmaps were welcomed like a miracle by PC programmers. Writers sometimes long for the low-graphic blackout screen in the beloved and lo-fi word-processing app WriteRoom, but no one else seems to. The majority of computer users, and certainly programmers, were overjoyed at the reprieve from traditional literacy that the bitmap granted them.

It’s important to realize too that early programmers were especially ready to give up writing manuals, those byzantine booklets that Xerox, IBM, and Compaq workers had to come up with when it came time to explain their esoteric hobby to the noncoding PC crowd. The 1980s notion of user-friendliness was really a move from that tortured lost-in-translation technical language to graphics, which in theory would be legible in all languages and to a range of cognitive styles. Good graphics could ideally even move us away from the parody-worthy manuals to self-explanatory (“intuitive”) interfaces that would require no instruction at all.

The English designer Jon Hicks of Hicksdesign, which has created iconography for Spotify and emoticons for Skype, spells out some of the Web’s absolute dependence on pictorial language. He even uses the word miracle: “Icons are more than just pretty decorative graphics for sites and applications, they are little miracle workers. They summarize and explain actions, provide direction, offer feedback and even break through language barriers.”

The original graphics on bitmaps were very simple—but not elegant. Initially the screen could show only a binary image, which worked like those children’s games where you pencil in one square of graph paper at a time in order to produce a black-and-white picture. Before that, hinting at what was to come, all kinds of fanciful typography covered my Zenith Z-19 terminal screen in the 1970s and early 1980s, when sysprogs would draw faces in dollar signs or “scroll” people, sending them line after line of random symbols to annoy them and clog their screen. That trolling use of graphics—to confront and annoy—is still in effect in various corners of the Web.

Today a bitmap is really a pixmap, where each pixel stores multiple bits and thus can be shaded in two or more hues. A natural use of so much color is realist forms like photography and film. But the profusion of realist imagery since Web 2.0, when wider bandwidth allowed photos and videos to circulate, can lead us to overlook the graphics that constitute the visual framework and backdrop of almost any digital experience.
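
The arithmetic behind those "multiple bits" is worth making explicit: a pixel with n bits can take 2 to the nth power distinct values, which is the whole distance between the old binary screen and photographic color.

    # Bits per pixel set how many values a pixmap cell can hold: 2**n.
    # 1 bit is the binary black-and-white image; 24 bits is full color.

    for bits in (1, 4, 8, 24):
        print(f"{bits:2} bits per pixel -> {2 ** bits:>10,} possible values")

    #  1 bits per pixel ->          2 possible values
    #  4 bits per pixel ->         16 possible values
    #  8 bits per pixel ->        256 possible values
    # 24 bits per pixel -> 16,777,216 possible values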

The exuberance with which programmers, stymied by straight text, embraced graphics is the second reason the Web is a graphic wreck. It was made by manic amateurs trying to talk in pictures, not by cool pros with degrees in Scandinavian design.

THE REFRACTED RECORD OF TECH HISTORY


Right now, looking at my laptop screen, I see a row of tight, fussy little icons below the Google doc I have open. This is the “Dock” on a Mac. I used to know these like my own five fingers, but that’s when I only had five icons. Now there are—twenty? These represent the tips of the icebergs for the major tech players, and like ticker symbols, they all jostle uncomfortably for my attention. I used to think of Microsoft Word as the blue one, but now I see that Apple has seized various shades of what the humorist Delia Ephron disparages as “bank blue,” and the iMessage app (with cartoon talk bubbles) stands out in that shade, along with the Mail stamp-shaped icon, compass-shaped Safari, and protractor App Store. None of these has a word on it. They look maddeningly alike.

The Microsoft Office icons at least look like letters, each made of a single shaded satin ribbon: W, P, X, O. That’s Word, PowerPoint, Excel, Outlook. The spindly shape and contemporary colors—bank blue, yes, but also burnt orange, grass green, and yellow—are a legacy of the ferocious determination of Microsoft to set itself apart from Apple and to seem less isolated and self-contained (like circles) and more compatible and connected (like cursive letters). These Microsoft apps, which I paid dearly for (since, because of the Shakespearean Jobs-Gates antagonism, they don’t come standard with Macs anymore), would set Office apart from my machine’s native apps were it not for the fact that Google introduced a virtually identical palette some years ago in its multihued Chrome and Drive icons.

So there’s the motley lineup. And I haven’t said a word about FitBit, Skype, Slack, Spotify, and whatever else is down there. This is hardly a team of miracle workers. It’s more like a bagful of foreign coins. And though no doubt the great icon designers of our time created them, they’re now no more emotionally striking or immediately legible than any other timeworn pictographic alphabet.

BLADE RUNNER


Click on Chrome, though, and you’re zapped onto the Web. Only then is it clear that the Mac interface, even with its confounding icons, is, in contrast with the Web, a model of sleek organization.

The Web is haphazardly planned. Its public spaces are mobbed, and urban decay abounds in broken links, ghost town sites, and abandoned projects. Malware and spam have turned living conditions in many quarters unsafe and unsanitary. Bullies, hucksters, and trolls roam the streets. An entrenched population of rowdy, polyglot rabble dominates major sites.

People who have always found the Web ugly have nonetheless been forced to live there. It is still the place to go for jobs, resources, services, social life, the future. In the past eight or ten years, however, mobile devices have offered a way out, an orderly suburb that lets inhabitants sample the Web’s opportunities without having to mix with the riffraff. This suburb is defined by those apps: neat, cute homes far from the Web city center, many in pristine Applecrest Estates—the App Store.

In the migration of dissenters from those ad-driven Web sites, still humming along at www URLs, to pricey and secluded apps, we witnessed urban decentralization, suburbanization, and the online equivalent of white flight: smartphone flight. The parallels between what happened to Chicago, Detroit, and New York in the twentieth century and what happened to the Internet since the introduction of the App Store are striking. Like the great modern American cities, the Web was founded on equal parts opportunism and idealism.

Over the years nerds, students, creeps, outlaws, rebels, moms, fans, church mice, good-time Charlies, middle managers, senior citizens, starlets, presidents, and corporate predators flocked to the Web and made their home on it. In spite of a growing consensus about the dangers of Web vertigo and the importance of curation, there were surprisingly few “walled gardens” online like the one Facebook once purported to represent. But a kind of virtual redlining took over. The Webtropolis became stratified. Even if, like most people, you still surf the Web on a desktop or laptop, you will have noticed paywalls, invitation-only clubs, subscription programs, privacy settings, and other ways of creating tiers of access. All these things make spaces feel “safe” not only from viruses, instability, unwanted light and sound, unrequested porn, sponsored links, and pop-up ads but also from crude design, wayward and unregistered commenters, and the eccentric voices and images that make the Web constantly surprising, challenging, and enlightening.

When a wall goes up, the space you have to pay to visit must, to justify the price, be nicer than the free ones. The catchphrase for software developers is “a better experience.” Behind paywalls like the ones that surround the New York Times and the Wall Street Journal, production values surge. Cool software greets the paying lady and gentleman; they get concierge service, perks. Best of all, the advertisers and carnival barkers leave you alone. There’s no frantically racing to click the corner of a hideous Philips video advertisement that stands in the way of what you want to read. Those prerolls and goofy in-your-face ads that make you feel like a sitting duck vanish. Instead you get a maître d’ who calls you by name. Web stations with entrance fees are more like boutiques than bazaars.

Mobile devices represent a desire to skip out on the bazaar. By choosing machines that come to life only when tricked out with apps, users of all the radical smartphones created since the iPhone increasingly commit themselves to a more remote and inevitably antagonistic relationship with the Web. “The App Store must rank among the most carefully policed software platforms in history,” the technology writer Steven Johnson observed—and he might have been speaking conservatively. Policed why? To maintain the App Store’s separateness from the open Web, of course, and to drive up the perceived value of the store’s offerings. Perception, after all, is everything; many apps are to the Web what bottled water is to tap: an inventive and proprietary new way of decanting, packaging, and pricing something that could be had for free.

Jobs often spoke of the corporate logos created by his hero Paul Rand (creator of logos for IBM, UPS, and Jobs’s NeXT) as “jewels,” something not merely symbolic but of pure value in themselves. Apps indeed sparkle like sapphires and emeralds for people enervated by the ugliness of monster sites like Craigslist, eBay, and Yahoo!. That sparkle is worth money. Even to the most committed populist or Web native there’s something rejuvenating about being away from an address bar and ads and links and prompts, those constant reminders that the Web is an overcrowded and often maddening metropolis and that you’re not special there. Confidence that you’re not going to get hustled, mobbed, or mugged—that’s precious too.

ANGER


But not all app design can or should be chill. Some of it should be provocative, emotional, even enraged. I like to think of Rovio, the game studio that created the juggernaut Angry Birds, as the center of rage-based gaming. The narrative of Rovio’s masterpiece—that the pigs have stolen your eggs, your babies, and “have taken refuge on or within structures made of various materials”—is just superb. Reading Wikipedia’s summary of Angry Birds just now makes me want to play it again. Though of course the cravings should have subsided years ago, when I was on the global top-1,000 leaderboard for Angry Birds. (Autographs available on request.)

What a great pretext for a game: pigs steal your babies and then lodge themselves in strongholds made of stone, glass, or wood. They take refuge, as in “the last refuge of a scoundrel,” and the refuges are maddeningly difficult to penetrate or topple.

In Angry Birds, as so often in life, the material world seemed to have conspired to favor the jerks, endowing them with what looked like breastworks, berms, and parapets, as if they were the beneficiaries of some diabolical foreign-aid package. Those gross, smug, green pigs stole my flock’s babies, and they’re sitting pretty in stone fortifications that they didn’t even build themselves. And the looks on their fat faces? Perfectly, perfectly self-satisfied.

Think you’re too good for me, eh? That you’ll rob me and I’ll just be polite about it? You have your elaborate forts and your snorting equipoise. I have nothing but my sense of injury. My rage. And so I take wobbly aim at them, the pig thieves, in Rovio’s world without end, in which there are hundreds of levels to master and the game gets bigger and bigger with constant updates.

Angry Birds is a so-called physics game, which suggests education, and also a puzzle in reverse, as you must destroy something by figuring out how its pieces come apart. Your tools are these birds: the victims of the theft but also your cannon fodder. Each bird that is launched dies. Though there is no blood, as it is death by cartoon poof, every mission is a suicide mission.

ELEGANCE


At the other end of the design continuum is Device 6, a hyperpolished game in the spirit of Hundreds. In this one, though, the player is a reader at heart. That’s too bad because the arts on the Internet are still brutally segregated, and no one has yet been able to bring together beautiful prose and beautiful design.

The game, then, is an interactive book, with prose that branches off like a cross between e. e. cummings and “Choose Your Own Adventure.” The protagonist is Anna, a gutsy woman with a hangover and a headache. The player tracks her as she wakes up from a blackout in a shadowy castle. She prowls around trying to figure out what’s up and how to get the heck out by answering riddles, cracking codes, and solving math problems that are in fact arithmetic or the simplest algebra.

The game’s ambience is ingenious, evocative, and chill without being precious and “overdecorated” (as Anna observes of the castle). It’s no surprise that this elegance comes from design-delirious Sweden, where in 2010 Simon Flesser and Magnus “Gordon” Gardebäck formed Simogo. The game has something of Ingmar Bergman’s beloved “world of low arches, thick walls, the smell of eternity, the colored sunlight quivering above the strangest vegetation of medieval paintings and carved figures on ceilings and walls.”

The sprawling castle is a hoardery pile jammed with broken, weird, outmoded tech, including Soviet-era computers. The graphics evoke the 1960s, the cold war, and James Bond, but the photographic element of the game also expresses 1960s nostalgia for the 1930s and 1940s. This is a neat trick that only confident European design fiends could pull off. At one point Anna comes upon an R&D lab that makes toys or weapons or weaponized toys. It’s creepy without being too on-the-nose horror flick. The chief drawback to the game’s being made by Swedes is that the English-language writing in Device 6 is plodding. Like the language in too many games and apps, the prose here is a placeholder—not exactly Agatha Christie or Alan Furst.

With some experimental exceptions, like Hundreds, graphic design cannot exist in a vacuum online. On the Web it continues to jostle uneasily and sometimes productively with sound, text, photography, and film. Naturally designers prefer to the open Web the hermetic spaces of apps, which they have much more control over and which aren’t built on often ugly hypertext. But these designers can’t seal themselves off forever from the challenge of shared space, referring in their work to offline print design and creating mute, pantomime realms of immaculate shapes. Digital design will find itself when the designers embrace the collaborative art of the Internet and join forces with writers, sound designers, and image-makers.

Reviews

“In melding the personal with the increasingly universal, Heffernan delivers a highly informative analysis of what the Internet is—and can be. A thoroughly engrossing examination of the Internet’s past, present, and future.”
—Kirkus, starred review
"Heffernan is a gleeful trickster, a semiotics fan with an unabashed sweet tooth for pop culture... MAGIC BRINGS JOY [in this] enjoyable snapshot."—The New York Times
"Magic and Loss is an illuminating guide to the internet...it is impossible to come away from this book without sharing some of [Heffernan's] awe for this brave new world."—The Wall Street Journal
“Magic and Loss is the book we—or at least I—have been waiting for, the book that Internet culture, and the way it’s changed the expression and reception of art, language, and ideas, deserves and demands. Virginia Heffernan argues that the Internet, broadly conceived, is a ‘massive and collaborative work of realist art,’ and she illuminates it with the best sort of cultural criticism—humane, personal, and extremely smart, with a frame of references that includes St. Thomas Aquinas, Liz Phair, Richard Rorty, Beyoncé, and the pairing of Dante and Steve Jobs, two ‘labile romantics.’ Whether writing about how the Kindle changed reading, how the iPod and iPhone changed listening, or how the demise of landline telephones changed communicating, Heffernan goes right to the heart of the lived experience... Virginia Heffernan quotes Harold Bloom to the effect that ‘to behold is a tragic posture; to observe is an ethical one.’ In Magic and Loss, she observes, in the best sense of the word.”—Ben Yagoda, author of The B-Side and How to Not Write Bad
"Goddamn, Virginia Heffernan is brilliant."—Lenny Letter
“Heffernan is a new species of wizard, able to perform literary magic upon supersonic technology. Her superpower is to remove the technology from technology, leaving the essential art. You might get an epiphany, like I did, of what a masterpiece this internet thing is. Heffernan has the cure for the small thinking that everyday hardware often produces. She generates marvelous insights at the speed of light, warmed up by her well-worn classical soul. It's a joy and revelation to be under her spell.”—Kevin Kelly, author of What Technology Wants and co-founder of Wired
"Virginia Heffernan spins the straw of the Internet into analysis that’s solid gold: a brilliant book."—Mark Edmundson, professor at the University of Virginia, and author of Why Teach? and Why Football Matters
"Magic and Gain!"—Frank Wilczek, winner of the 2004 Nobel Prize in Physics and author of A Beautiful Question
“Readers will be enthralled by Heffernan’s unique take on this popular entity. Tech-savvy readers will be drawn to this book, but the concept of technology as creative expression should also entice art lovers. Most important, readers will be encouraged to appreciate the Internet not only for its ability to connect us to one another and information but also for its beauty.”—Library Journal
"My copy of Magic and Loss is sloppily scrawled with all-caps pencillings of words like 'YES!' and 'TRUTH!'"—Mark McConnell, Slate Magazine
"Her book (thankfully) is more like an essay than like a treatise. Heffernan is smart, her writing has flair, she can refer intelligently to Barthes, Derrida, and Benjamin—also to Aquinas, Dante, and Proust—and she knows a lot about the Internet and its history."—The New Yorker
"This is sumptuous writing, saturated with observations that are simultaneously personal, cultural, and strikingly original—and she’s writing about software. I love it. Ultimately, the art here is her prose style."—The New Republic
"One of the writers I most admire."—Gwyneth Paltrow
"Marrying this study with her own fascinating personal history with the internet as a pre-teen, Magic and Loss is a revealing look at how the internet continues to reshape our lives emotionally, visually and culturally."—The Smithsonian Magazine

"The best writing on Angry Birds you'll ever encounter."—WIRED, #1 Summer Beach Read
“Heffernan's rhetoric is so dexterous that even digital pessimists like me can groove to her descriptions of ‘achingly beautiful apps,’ her comparison of MP3 compression to ‘Zeuxis's realist paintings from the 5th century BC.’ And Heffernan is subtly less optimistic than she at first seems—she knows that magic is not the opposite of loss, but sometimes its handmaiden. She's written a blazing and finally wise book, passionate in its resistance to the lazy certitudes of a cynically triumphal scientism.”—Michael Robbins, author of The Second Sex and Alien vs. Predator

Description

Description from another edition or format:
Virginia Heffernan gives a highly informative analysis of what the internet is and can be in an examination of its past, present, and future.