The datacenter is everywhere


When we talk about pervasive computing, we're usually talking about mobile devices like cell phones or, if we're being really exotic, the various sorts of wearable gizmos that get made fun of in Dilbert cartoons. But I look at pervasive from the other end of the pipe. Hence, The Pervasive Datacenter, the name of the blog that kicks off with this post. From my point of view, it's the datacenter, the software that it runs, and its connections that are everywhere just as much as the peripherals out at the end of the network.

This blog will have its home base in the datacenter itself and will cover topics from servers big and small, to multi-core processors, to operating systems, to virtualization, to power and cooling concerns. However, it will also look at the software and the services out in the network cloud that are consuming datacenter computing cycles and storage and thereby determining the future of the back-end. I'll also spend some time on the bigger questions: Is Software as a Service the next big thing or merely Application Service Providers warmed over? What's the future of Open Source in a Web-delivered software model? Do operating systems even matter any longer?

And, because my premise is that the pervasive datacenter touches everything, I'll feel free to, now and then, head out to the very edge of the network. I'll try to stay clear of overly trendy and self-referential debates, but will write about important trends in client devices from UltraMobile PCs to cameras and the services that run on them.

The language of facilities


We often talk about silos in IT. The storyline usually goes something like this. The server guys (computer gear) don't talk to the storage guys (SANs and Fibre Channel) don't talk to the network gals (all that Ethernet and other comms stuff). It's all true enough, of course. But notice something? Facilities doesn't even tend to get mentioned when bemoaning IT silos. All that HVAC and power gear is just part of the landscape. IT folks didn't need to know about bricks. Why should they need to know about power and cooling? Maybe a little UPS here and there, but the big stuff is Someone Else's Problem.

I suspect that part of the issue is language. Back before IBM did its full-court press to make the System z mainframe cool (and relevant) again, its presentations and documentation were clearly intended only for the priesthood. Whether talking CECs or DASD, FICON or CICS, or arcane pricing models, the effect (intended or not) was to hang a "No Trespassing" sign outside the mainframe tree house. When IBM began modernizing System z for new workloads and uses, one of the many challenges it faced (and still faces to a more limited degree) was to make the mainframe not just appealing, but even intelligible, to outsiders. The task was made no easier by the fact that so many of the people involved in the effort had spent their entire careers working with the mainframe in its many incarnations. Basic assumptions about the very nature of the mainframe were so deeply held that it took real effort to externalize them in a comprehensible and meaningful way. (This presentation isn't from IBM but illustrates just how foreign-sounding deep mainframe discussions can be.)

I think we're going to see something similar happen with power and cooling. P&C are becoming an important part of the datacenter agenda. Yes, we're in a bit of an overheated hype curve about the whole topic but that doesn't mean it's not important. As a result, companies like Liebert--long-time makers of computer room power gear--are starting to show up at IT tradeshows and brief IT analysts.

I had one such briefing recently from Liebert that covered a good deal of interesting material, including the Liebert NX "Capacity on Demand" UPS and a forward-looking discussion of datacenter power distribution. But, based on my own experience around computer systems design, I think that Liebert and other P&C vendors should understand that even the electrical engineers who design servers don't know much more about analog electrical systems than the average homeowner--and probably less than the typical electrician.

HVAC vocabulary can be arcane, and truly in-depth discussions of redundant facilities power even more so. (For example, by Liebert's count, high availability power configurations can come in five different bus configurations, each of which is ideal for a specific type of environment.) There's a certain inherent complexity in these matters of course. However, that doesn't change the reality that if IT managers are going to be increasingly involved with power and cooling decisions and configurations, the companies selling that gear are going to have to speak the right language.

Zonbu's subscription PC


Last month I wrote a research note about some of the changes going on with the desktop PC. We're seeing more variety and experimentation with client devices than we've ever seen. Handhelds grab most of the headlines. (And some of the nascent trends around "Ultra-Mobile PCs" and "Mobile Internet Devices" are genuinely worthy of attention.) However, there's action on the desktop too. My research note delves into the background behind these trends in considerable depth but, in a nutshell, people are starting to wonder: "If most of my computing is out in the network cloud anyway, why is it that I need a big, noisy, hard-to-manage desktop PC?"

Dan Lyons reports on one of the latest desktop PC alternatives, from the Menlo Park-based Zonbu. It's a small box powered by a Via x86-compatible processor with 512MB of DRAM and 4GB of flash for storage. It runs a custom Linux distribution that comes packaged with Firefox, Skype, Open Office, peer-to-peer clients and lots of multimedia applications and games. The unit doesn't have any fans, something that leads the company to loudly trumpet its eco-friendliness--a laudable goal certainly, if one that's in danger of getting more than a bit overexposed these days.

With only a modicum of local storage, most user data will be stored out in the network. Zonbu has cut a deal with Amazon to use its S3 service. For $12.95 per month, you get up to 25GB of storage and free upgrades to newer versions of the operating system and applications. For $19.95, you get 100GB. A basic tier with 2GB of storage and software upgrades is free. The device itself is $249--but you can get $50 off for pre-paying for one year and $150 off for pre-paying for two. Put another way, for $371 you get the device and 25GB of storage for 2 years. You add your own keyboard, mouse, and monitor. Wireless requires a WiFi "dongle" connected to one of the USB ports (it comes with a standard 10/100Mbps wired Ethernet port).

I don't see this as a replacement for the main PC in most households--unless that PC really does just get used to check email, write the occasional letter, and download pictures. At the least, you'd need to accept that the device support (cameras, printers, etc.) is going to be skimpier than a Windows PC--although Linux has gotten much better in this regard and Zonbu appears to have put a great deal of work into documenting what devices do work. Furthermore, it's intended to just run the fixed set of delivered software although, presumably, the technically savvy could add applications or otherwise make changes to the base package.

However, this looks very interesting as a supplementary PC for children, for the kitchen, or for a second house. The biggest issue with having multiple PCs in the home isn't really the cost of the additional PCs; boxes are pretty cheap these days. Rather, it's keeping them all updated, backed-up, and virus-free. Nor do you especially want whirring fans in the same room where you're trying to watch TV. Viewed in that context, the Zonbu has real appeal. I wouldn't mind trying one myself.

Dell and the end of religion


Dell 1.0 was a religious company. I suppose you could refer to it instead as merely an intense focus on low costs in all matters of its operations, but it really went deeper than that. Low cost was an article of faith that was the deep guiding principle underlying essentially everything that the company did. Dell didn't merely tilt toward a streamlined supply chain and lean R&D; those were a fundamental part of what it was as a company.

This is not a pedantic distinction. Focus can be adjusted and tweaked; it's that much harder to change your core. Yet that's what Dell had to do. It had to respond to a world where "cheap boxes" was no longer the guiding mantra for server buyers, which made Michael Dell's public pronouncements suggesting that "Dell 2.0" was mostly about better execution so wrongheaded. I wrote about this back in February 2007 in a piece that also includes some choice commentary from Peter Cappelli in Knowledge@Wharton:

So in this case, for example, Dell was the darling of many people in the business world because they had this model that seemed to work just incredibly well, and lots of people were copying it, and then the environment changed. It's not that they got bad at executing their model. At least I don't think that's the complaint. It is that the environment changed. They got different competitors who came in with different ideas and the playing field changed.

This makes a continuing set of moves that Dell has been making very significant. It's one thing to "be open" to new strategies, partnerships and approaches. It's another to actually act on them.

Perhaps the first major sign that real change was abrewin' was Dell's belated decision to introduce AMD server processors into its lineup alongside Intel. Although Intel has since gotten (seriously) back into the fight, at the time AMD had the clear technological lead and Dell's long refusal to offer AMD-based products seemed a willful decision to cede a pile of business to competitors without a fight. Backroom politics (however significant) aside, part of Dell's rationale was almost certainly a desire to avoid the incremental costs associated with designing, manufacturing and supporting servers based on processors from two different suppliers.

Second were the signs of genuine technical innovation in a company whose intellectual property was far more about business processes and supply chain optimization than the product itself. Dell won't be the only vendor offering servers with an embedded hypervisor that lets customers configure virtual machines out of the box without installing additional software. But it was involved early on with this technology approach under the name "Project Hybrid." Although Dell isn't, and won't be, an R&D powerhouse, it's clearly not content with always sitting on the sidelines while others roll out the initial iteration of some new technology or approach.

Finally, we have Dell's retail push. It started with a rather limited offering through Wal-Mart and Sam's Club. Now it's added Staples. Thus, in yet another aspect of its business, Dell has apparently decided that a pure approach that takes minimal cost as its sole guiding principle--in this case Web-direct distribution--may have to be modified a bit if revenue is at stake.

None of this is to suggest that Dell has abandoned the Church of Frugality. Don't expect to see a Dell Labs that focuses on fundamental research or a major move into highly bespoke "Big Iron" servers. But we are seeing a Dell that is showing some flexibility on what were once all-or-nothing principles.

The other P2P revolution that wasn't


Today, "peer to peer" is inextricably linked to a variety of techniques for P2P file-sharing, whereby the recipients of a large file supply chunks of data to other recipients.

This distributes the load compared with everyone downloading a file from a central server. For this and other reasons, P2P networks have proven popular for sharing MP3 music files, although they're suitable for distributing any sizable digital content; for example, one also sees P2P employed to distribute Linux distributions, which can run into the gigabytes.
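The load-spreading effect is easy to see in a toy simulation (a sketch only; real protocols add piece selection, incentives, and trackers): peers fetch each missing chunk from any node that already has it, so the original source has to serve each chunk just once.

```python
import random

def simulate_swarm(num_chunks=8, num_peers=6, seed=42):
    """Toy peer-to-peer distribution: peers fetch missing chunks from
    any node that already holds them, not only the original seeder."""
    rng = random.Random(seed)
    peers = [set() for _ in range(num_peers)]   # start with nothing
    uploads = {"seeder": 0, "peers": 0}

    while any(len(p) < num_chunks for p in peers):
        for p in peers:
            missing = [c for c in range(num_chunks) if c not in p]
            if not missing:
                continue
            chunk = rng.choice(missing)
            # Prefer another peer that already holds the chunk; fall
            # back to the seeder only when no peer has it yet.
            if any(chunk in q for q in peers if q is not p):
                uploads["peers"] += 1
            else:
                uploads["seeder"] += 1
            p.add(chunk)
    return uploads

stats = simulate_swarm()
# The seeder serves each chunk exactly once; peers carry the rest.
```

With 6 peers and 8 chunks there are 48 downloads in all, but only 8 come from the central source; peer-to-peer exchange absorbs the other 40.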

However, a few weeks ago I attended MIT Technology Review's EmTech07 Emerging Technologies Conference, where a session reminded me that another "P2P" was once the subject of great buzz.

At the Fall 2000 Intel Developer Forum, then-Intel CEO Craig Barrett called peer-to-peer computing a "new wave which is going to have material impact on our industry." And he wasn't talking about file sharing.

Pat Gelsinger, who was Intel's CTO at the time, was even more enthusiastic in his keynote:

My subject for today is peer-to-peer--what we think is possibly the next computing frontier. Our agenda, we'll suggest, and hopefully by the end you'll agree with us, (is) that this is the revolution that could change computing as we know it.

P2P computing, as the term was popularized, was based on a pair of simple concepts: 1) There were lots of PCs sitting out there on desks doing nothing most of the time. (Laptops were far less ubiquitous in Y2K than today.) And 2) certain types of computing jobs could be broken down into a lot of small, distinct chunks. These generally fell into the realm of what's often called high-performance computing--tasks like looking at the different ways molecular structures interact or fold.

Given those two facts, why not bring together the idle hardware and the computational need?

That's exactly what P2P computing did. There were a few efforts to use the technology for enterprise applications. Intel itself used P2P to power some of its chip design simulations. However, what really captured the public imagination was using distributed PCs in the homes of consumers or on business desktops for causes like AIDS research and other scientific projects. The typical approach was to load the P2P application as a screen saver; when the computer was idle, it would start cranking through the calculations, shipping results off to a central site as they completed.
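The screen-saver pattern boils down to a simple loop: pull a work unit, crunch it only while the machine is idle, report the result. Here's a minimal sketch (the function names and the local queue standing in for the central site are illustrative, not any real client's API):

```python
import queue

def run_worker(work_units, is_idle, compute):
    """Sketch of the idle-cycle harvesting loop: take a work unit,
    process it only when the machine reports itself idle, and record
    the result as if shipping it back to a central server."""
    results = {}
    pending = queue.Queue()
    for unit in work_units:
        pending.put(unit)

    while not pending.empty():
        unit = pending.get()
        if not is_idle():
            pending.put(unit)          # machine busy: defer this unit
            continue
        results[unit] = compute(unit)  # "ship" the finished result
    return results

# Toy job: squaring numbers stands in for, say, a folding calculation.
out = run_worker(range(5), is_idle=lambda: True,
                 compute=lambda n: n * n)
```

A real client would fetch units over the network, checkpoint partial work, and throttle itself on user activity, but the shape of the loop is the same.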

SETI@home was perhaps the canonical example. But there were many others such as United Devices, Entropia and Blackstone Computing.

At a February 2001 O'Reilly Conference on P2P Computing, there were 900 attendees. At the same conference, Larry Cheng of Battery Ventures estimated that there were more than 150 companies in P2P. There was even talk of monetizing the distributed computation like some form of electrical grid.

P2P computing never wholly went away; SETI@home remains an active project. Univa UD (formed by the merger of Univa and United Devices) has had some success in pharma and finance (although it's less client-centric than United Devices' original vision).

But P2P, at least in the sense of harvesting excess client compute cycles, never amounted to something truly important, much less a revolution. There were security concerns and worries about the applications slowing PCs or hurting their reliability. One person was even prosecuted for running a P2P application on college computers. And, as much as anything, the whole thing just faded from being the cool flavor of the month.

Aspects of P2P computing live on. The basic concept that many computing jobs could be best handled by distributing them across large numbers of standardized building blocks was valid. In fact, it's the most common architecture for running all manner of large-scale applications today, from genomics to business intelligence. "Grid computing," a broad if variously defined set of technologies for harnessing and managing large compute clusters, shares common roots with P2P. In fact, The Grid by Foster and Kesselman was a bible of sorts for P2P computing.

But, as with so many other aspects of computation, the cycles are moving back to the data center. Perhaps we could summarize today's approach as being less about harvesting excess capacity on the periphery than about not putting it out there in the first place.

Privacy and geotagging


The initial broad adoption of the Internet was, in major respects, about breaking down the boundaries of place and space. Important aspects of Web 2.0 concern themselves with reintroducing the local into the global. When I attended Mashup Camp at MIT earlier this year, I was struck by how much of the interest was around merging data with maps.

Thus, it's not particularly surprising that geotagging, associating photos with a map location, is a current hot topic. At the recent Web 2.0 Summit, Flickr debuted an upcoming revamp of its map page and a new "places" feature. (See screenshots and more here.) A couple of weeks ago I conducted my own geotagging experiment to see if I could merge GPS data with photos that I took during a hike (conclusion: yes, but you have to be a bit of a geek).
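The geeky part of that experiment is really just lining up timestamps: a GPS logger produces a time-ordered track of fixes, and each photo gets the fix nearest its own timestamp. A minimal sketch (assuming the track is a time-sorted list of `(unix_time, lat, lon)` tuples; real tools also correct for camera-clock offset and interpolate between fixes):

```python
from bisect import bisect_left

def geotag(photo_time, track):
    """Return the (lat, lon) of the GPS fix closest in time to a photo.
    `track` is a time-sorted list of (unix_time, lat, lon) tuples, the
    sort of log a handheld GPS unit records during a hike."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    # The nearest fix is either just before or just after the photo.
    candidates = track[max(i - 1, 0):i + 1] or track[-1:]
    _, lat, lon = min(candidates, key=lambda p: abs(p[0] - photo_time))
    return lat, lon

# Three fixes a minute apart; the photo at t=1055 matches the t=1060 fix.
track = [(1000, 42.36, -71.06), (1060, 42.37, -71.05), (1120, 42.38, -71.04)]
lat, lon = geotag(1055, track)
```

The coordinates can then be written into the photo's EXIF GPS fields, which is what photo-sharing sites read when placing shots on a map.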

At the risk of stating the obvious, all photos are taken somewhere. Some, such as studio portraits, don't have location as a central characteristic. However, for many photos, location is key. And for some, such as pictures of real estate, location is arguably the defining characteristic.

Consequently, I expect that we're going to see hardware that makes it easier to record GPS information and integrate it with photographs. And a corresponding evolution of photosharing sites to simplify the storage and display of that geotagged data. This is good but it also carries some risks.

Now I'm not a tinfoil hat sort of guy.

There's a lot of information available about me through Google. You could probably even find out where I live without straining yourself terribly. None of this especially concerns me. But geotagging represents an explicit link between the virtual and the physical world. That's what makes it interesting--but also a bit worrying.

To be sure, we'll always have the ability to choose when and where we expose geotagged data. But that won't necessarily be simple.

For one thing, as geotagged data becomes more ubiquitous (and more of our lives go online in some form or another), more "leakage" is inevitable. You forget to set a privacy filter correctly. You don't know how to set a privacy filter. You didn't realize that the data had geospatial information.

And that assumes that you have control. What if someone else takes photos at your party that embed GPS data and uploads them to the public area of Flickr? (In an amusing twist, Flickr co-founder Stewart Butterfield reportedly asked people attending a party at his house recently not to geotag any photos they took.)

I can think of various features one could implement on a site like Flickr to mitigate the issue. But none are perfect and, in any case, that's only one site. Nor do I think a glib "privacy is dead" is a proper response either. Think of it as yet another to-do and to-think-about in the complicated merger of our private and professional, virtual and physical lives.

Amazon's newer business model


A couple of weeks back, Amazon announced an expansion of its Elastic Compute Cloud (EC2) service. The still-in-beta EC2 is a twist on the much-discussed, if rarely seen in the wild, compute utility whereby customers rent computing by virtual machine (VM)-hour; Amazon's EC2 infrastructure is based on a Xen hypervisor structure rather than running directly on physical hardware.

One implication of Amazon using VMs is that they can easily offer a variety of different VM sizes up to the size of the physical hardware. That was the most recent change announced. In addition to the default "Small Instance," users can now get "Large Instances" or "Extra Large Instances." These might be useful if, for example, you need to pair a heavyweight database instance with some lightweight Web services.
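Renting by the VM-hour makes the capacity arithmetic for such a mixed fleet simple. With hypothetical hourly rates (illustrative only; Amazon's published price sheet is the authority), the monthly bill is just rate times count times hours:

```python
# Hypothetical per-hour rates for illustration -- check Amazon's
# published EC2 pricing for the real numbers.
RATES = {"small": 0.10, "large": 0.40, "xlarge": 0.80}

def monthly_cost(instances, hours=720):
    """Cost of a mixed fleet of VM instances rented by the hour for
    one 30-day month (720 hours)."""
    return sum(RATES[size] * count * hours
               for size, count in instances.items())

# e.g. one heavyweight database VM plus four lightweight Web VMs:
cost = monthly_cost({"xlarge": 1, "small": 4})
```

At the assumed rates that mix runs $864 for the month, with no hardware purchased and nothing owed once the instances are shut down, which is the whole point of the utility model.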

Another implication is that VM images, called Amazon Machine Images (AMIs) in this case, can be archived and transported. This is analogous to VMware's virtual appliances. Amazon itself hasn't yet done as much to jump-start an image marketplace as VMware has. However, it does provide a mechanism for customers to post and publicly share AMIs and sees an opportunity for people to offer paid AMIs over time.

I bring this up because Emre Sokullu over at Read/Write Web has a post and table that do a great job of crystallizing why getting into Web services is such a big deal for Amazon. In short, Amazon's revenue is comparable to Google's. The difference is that, while Google is operating at a 29 percent profit margin, Amazon is under 2 percent. Which is probably about the best one can hope for with a big "mail order" retail operation.

Some may be wondering why Amazon is de-focusing and entering into something that is far from its DNA as an e-commerce service. To respond to that question, take a look at the table below, which compares some financial data of Internet bigcos:

Company   Net Profit Margin (%)   2006 Annual Revenue ($M)   Market Capitalization ($B)
Google    29.02                   10,604.92                  210
eBay      18.86                    5,969.74                   50
Yahoo      9.96                    6,425.68                   45
Amazon     1.77                   10,711.00                   37
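The margin gap is even starker converted to implied net profit; this is straight arithmetic on the table's own figures:

```python
# Implied net profit from the table above: margin (%) x revenue ($M).
figures = {               # company: (margin_pct, revenue_in_millions)
    "Google": (29.02, 10604.92),
    "eBay":   (18.86,  5969.74),
    "Yahoo":  ( 9.96,  6425.68),
    "Amazon": ( 1.77, 10711.00),
}
profit = {co: round(pct / 100 * rev, 1)
          for co, (pct, rev) in figures.items()}
# Google: ~$3,077.5M vs. Amazon: ~$189.6M on essentially equal revenue.
```

On nearly identical top lines, Google's implied profit is roughly sixteen times Amazon's, which goes a long way toward explaining the appeal of a higher-margin services business.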

I tend to use "Web services" to describe Amazon's offering, in part because Amazon also has a variety of pricing and other e-commerce products that fit more squarely into the "services" camp. However, another way to describe it is Hardware as a Service (HaaS), a term that seems to have been coined by Ed Byrne in 2006. Terminology aside, I agree with Ed that:

I think it will evolve into a H+SaaS [Hardware + Software as a Service] model where bundled solutions will be offered rather than just empty-shell machines. There's a business opportunity here for software companies to package and license their applications in the H+SaaS model, and charge on a per-user/per-domain basis.

We're already seeing this to a degree with Amazon's complementary S3 Storage as a Service model. For example, Jungle Disk offers data backup using Amazon's S3 as the backend.
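Part of S3's appeal to a tool like Jungle Disk is how narrow the storage interface is: essentially put, get, delete, and list on keys within a bucket. A toy in-memory stand-in makes the shape clear (illustrative only; the real S3 interface is an authenticated REST API with metadata and access controls):

```python
class ToyBucket:
    """In-memory stand-in for an S3-style bucket: a flat namespace of
    string keys mapped to opaque byte blobs. Illustrative only."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

    def delete(self, key):
        self._objects.pop(key, None)

    def list(self, prefix=""):
        # S3 has no real directories; prefixes fake a hierarchy.
        return sorted(k for k in self._objects if k.startswith(prefix))

# A backup tool only has to map local file paths onto keys:
bucket = ToyBucket()
bucket.put("backups/2007-11-01/notes.txt", b"draft")
```

Because the interface is this small, almost any application that reads and writes files can be retrofitted to use cloud storage as its backend, which is exactly the opening Jungle Disk exploits.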

To my mind, there's little question that more and more storage and computing will move out into the cloud over time. The question--well, one of them anyway--is where the economic scale points lie. In other words, will most software vendors find that it makes sense for them to deliver their own software as a service on their own hardware (i.e., the model), or will they effectively subcontract out the datacenter infrastructure stuff to the likes of Amazon?

The answer to that particular question has broad implications for datacenter and system design. An IT world in which we have a small number of mega-datacenters (as Sun's Greg Papadopoulos has postulated) would be strikingly different from a world in which more software is delivered over the network but from a much larger number of sites more similar in scale to today.

Facebook, identity, privacy, and portability


Facebook banned someone for using a pseudonym and he's upset.

Anonymous speech has a long history in the United States going back to at least the Federalist Papers. And there are many good reasons, in addition to well-established case law, why anonymous speech should be protected.

That said, very little of such speech on the Internet falls into "Allowing dissenters to shield their identities frees them to express critical, minority views." (U.S. Supreme Court McIntyre v. Ohio Elections Commission, 1995). Instead, anonymity on the Internet often seems far more about protecting rudeness than protecting political dissent. Thus, I have little problem with a service such as Facebook attempting to ensure that its members are using real identities. (See this post by Dennis Howlett for a largely dissenting view.)

This case does, however, raise a variety of points about identity, privacy, and closed social platforms that are worth considering given that we'll see these issues and others like them again and again.

First, there's the question of "What is your identity?" The straightforward, if somewhat glib, response is that it's the name in your passport--i.e. your legal name. That seems to be Facebook's position. But what of people who write under a pseudonym? Or, more broadly, people who have chosen, for whatever reason, to consistently adopt a different identity or persona for their private and their public lives. Or for different aspects of their public lives.

This is all highly relevant whether we're discussing the need for separate personal and professional networks or even what constitutes an appropriate avatar when using virtual worlds for business purposes. It's not so much about absolute anonymity as such (and therefore the ability to say or do things without consequence) as having mechanisms to have multiple, consistent identities that allow one to wall off parts of one's life from each other.

A point perhaps difficult for some in the radical-transparency high-tech crowd on one of the coasts to appreciate is that not everyone is comfortable with throwing most everything in their personal and business lives together. (Expect these sorts of discussions to gain urgency as the Facebooked and MySpaced generation increasingly enters the world of business.)

Another aspect of this case is the whole question of walled gardens and data portability. Establishing a dependence on some company's product is nothing particularly new. Almost uncountable dollars and hours have gone into training, developing applications, and purchasing software for Microsoft Windows. And there are many other, if less extreme, examples. (Indeed this dynamic underlies much of the ideological basis for open source.)

However, in the Web 2.0 world, we're seeing more and more of our data going into the hands of a third party as well. And, in the case of a service like Facebook, it's not just data in the sense of files or text but an entire web of connections and interactions that have evolved in an essentially emergent way. Issues such as these were no small part of the discussion at the O'Reilly Open Source Conference (OSCON) last summer.

Google's OpenSocial API is one reaction to the current lack of social data portability, but the problem isn't an easy one. Whereas traditional data portability is fairly straightforward (documented file formats, etc.), what it even means to have a portable social network isn't especially clear.

One of the reasons that questions such as these have some importance is that network effects--Metcalfe's Law if you will--tend to drive things towards a smaller number of bigger players. Although there's some natural partitioning (social networks for children, for example), the evidence suggests that one or two big networks in a given domain tend to win dramatically. Check out the traffic stats for Flickr vs. Zooomr. Thus it's not as simple as picking up your ball and heading over to the next field.

Even if you could pick up your ball.

The impersonal PC


A couple of weeks ago, I was in Las Vegas for the Citrix iForum show. Citrix is best known for its Presentation Server product, née MetaFrame. Presentation Server delivers specific business applications to remote desktops using Windows Terminal Server on the back-end. It's usually thought of in terms of thin client computing; in fact, the vast majority of Presentation Server installations deliver applications to ordinary PCs. (I describe the technology in more depth in this Illuminata research note.) However, these days, Citrix has many other products as well, variously tailored to delivering applications and full desktop images to a variety of clients.

I've been seeing more interest among IT folks in alternatives to traditional desktops over the past year than, well, ever. Traditional SMS-style provisioning and management systems never truly performed up to hopeful expectations; increasing concerns about security have only exacerbated an already sub-par situation. Nor are users thrilled with the current state of affairs. Their PCs tend to accumulate "cruft" (that's the technical term) over time and software loads "blow up" (another technical term) periodically. Furthermore, IT policies intended to keep things under some vague semblance of control tend to consist, in no small part, of long lists of "Thou shall nots" that limit what users can do with corporate PCs.

And, before the various fanboys chime in, switching to Linux or a Mac doesn't make all these issues magically go away.

Products from Citrix and others (such as VMware's ACE) offer a variety of alternatives to a forced choice between a locked-down corporate desktop and an environment where anything goes. Largely orthogonal to these approaches from a technical perspective, but conceptually related, are rich internet applications (RIAs) that run within essentially any endpoint device that has a browser. Such applications underpin Software as a Service (SaaS), in which data and software exist largely in the "cloud" rather than in a user's PC or mobile client.

We've seen and heard a lot of praise for the democratic impulse associated with this particular phase of computing that often goes by the Web 2.0 moniker. Anyone can post. Anyone can publish. Anyone can photograph. Your vote matters in social media. And alternative ways of accessing and running applications have indeed made it easier to do things outside of a strict IT framework. In his closing iForum keynote Citrix CEO Mark Templeton used the phrase "Making the personal computer personal again" for this idea.

There's truth in this characterization, but the situation is far more complicated than distributed vs. centralized computing. In some respects, access is indeed more distributed--not only in the alternatives to tightly-controlled corporate desktops, but also to the myriad mobile devices that are woven more and more deeply into both personal and professional lives.

At the same time, the "cloud" is a new element and a new form of centralization. PCs (and, for that matter, Unix in the early days) were, for many, about distributing and maintaining control over data as well as access and computation. The applications that are increasingly central to the lives of many people today are much different. Data is centralized, not distributed, and often flows in but one direction: in. The real software intelligence is increasingly centralized as well. Delving deeper into those topics is a task for another day. Suffice it to say that, while there's much to be said for widespread personal access, let's not confuse it with truly personal computing.

Itanium goes bump in the night


Perhaps it was in observance of Halloween, but whatever the reason there was something a bit ghostly about Intel's October 31 announcement of its latest Itanium processor.

You had to peer hard to catch even a glimpse of the Intel Itanium Processor 9100 announcement--formerly known under the "Montvale" code name. Neither Intel nor HP (which sells something like 90 percent of the Itaniums that go out Intel's doors) held briefings on the new processor iteration, and even simple press releases dribbled out belatedly. It's the sort of treatment usually reserved for announcements of new sales offices or CEO speeches at obscure conferences. I suppose that they could have made the announcement on a Saturday if they wanted to be even more wraithlike--but this was pretty close.

To be sure, this was a fairly modest bump. Montvale barely edges its "Montecito" predecessor in frequency (1.66GHz vs. 1.6GHz, or about 4 percent). More important is the 667MHz front-side bus (FSB), which gives about 25 percent faster memory access. Reliability ("core-level lock-step") and power efficiency ("demand-based switching") tweaks round out the new features. Bigger changes await the future quad-core "Tukwila," due late 2008 or so; it will also sport an integrated memory controller and new serial interconnect.

One almost gets the sense that Intel and HP hoped that if they soft-pedaled this announcement, no one would notice and therefore, the usual suspects wouldn't revel in the opportunity to engage in Itanium-mocking. Well, that didn't work.

Blockbuster's real problem


Hacking Netflix ponders whether the "Death of Blockbuster" stories are greatly exaggerated.

I hardly think we've seen the last of Blockbuster, but they do have a tough road ahead of them. Blockbuster Chairman Jim Keyes is just getting started, and he might have saved the company by pulling out of the expensive online war with Netflix. With Movie Gallery out of the way, refocusing on stores and getting more revenue (from) their 20 million monthly customers makes sense in the short term. Keep in mind that it's going to be a while before DVD goes away (and my Dad watches a movie online).

This latest round of the Blockbuster deathwatch was largely kicked off by Blockbuster's Q3 earnings Webcast, during which it was revealed that the company had lost about 500,000 Total Access (DVD by mail) subscribers. CEO James Keyes suggested that some were unprofitable subscribers, but then you'd probably expect him to say that. In any case, Blockbuster appears to be pulling back from (but likely not exiting) its mail operation to concentrate on its brick-and-mortar stores.

One often hears about B&M being dead or the DVD being replaced by online downloads. I don't buy either assertion, at least for any reasonable planning horizon. The reason is in the table below.

             Latency   Effort   Consumer tech   Cost
Store        Low       High     Low             High
Mail         High      Low      Medium          Medium
Download     Low       Low      High            Low

What the table shows is that the three styles of rental have distinct characteristics that inherently appeal to different groups of consumers or a given consumer in different circumstances.

If you just have to watch Spiderman 3 tonight, Netflix isn't going to cut it. On the other hand, downloading movies today requires a certain degree of tech-savviness and the appropriate hardware in your house--which may or may not be connected to your television set. So, there's something to be said for going down to the store for an impulse rental.

On the other hand, if you're mostly content to watch one of the movies that you happen to have on hand, as I am, disks by mail have a lot of nice characteristics--including, for now, probably the best selection for most purposes.

In the medium to longer term, however, I do believe that the relative cost to deliver movies in different ways is going to tend to drive home movie viewing more and more online. Although there are certainly (large) start-up costs to delivering movies over broadband, the infrastructure will get better and the costs will go lower over time.

This cost difference seems particularly relevant in something like movie rentals because all our experience to date suggests that, whatever the cost to deliver rentals, consumers are willing to pay about the same amount per movie. (Although there are certainly people who use the Netflix flat fee to rent large numbers of movies at a low per-movie fee, most people probably end up paying about the same $3 to $4 per film that they'd pay at their local rental store.)
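The flat-fee arithmetic here is easy to sketch. The $3-to-$4 per-film figure comes from the post; the monthly plan price and rental counts below are purely hypothetical illustrations:

```python
# Effective per-movie price under a flat-fee plan vs. per-rental pricing.
# The monthly fee and rental counts are assumed for illustration only.

def per_movie_price(monthly_fee: float, movies_per_month: int) -> float:
    """Effective cost per film for a flat-fee subscriber."""
    return monthly_fee / movies_per_month

# A hypothetical $17/month subscription:
casual = per_movie_price(17.00, 5)    # a few rentals a month
heavy = per_movie_price(17.00, 15)    # a movie every other night

print(f"casual renter: ${casual:.2f} per film")  # lands in the $3-$4 range
print(f"heavy renter:  ${heavy:.2f} per film")   # well below store pricing
```

The point is that only the high-volume minority drives the per-movie price far below what the corner store charges; the typical subscriber ends up paying roughly store rates anyway.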

Thus, the issue isn't so much whether a lot of folks would prefer to continue to have a B&M rental option (they would), but whether they're going to be willing to pay the costs--especially as movie downloads start to chip away at the increasingly technically sophisticated user base that wants things right now.

That's Blockbuster's longer-term problem.

Red Hat appliances: the OS does matter


The broad strokes of Red Hat's announcement yesterday left a lot of canvas unpainted. Its JBoss middleware, an acquisition that hasn't met Red Hat's expectations, was MIA. And a great deal of management, provisioning, identity, etc. capabilities--essentially the services that span the entire infrastructure--were casually lumped under the Red Hat Network (RHN) umbrella, or handed off to open APIs, without much in the way of detail. RHN is an update and monitoring tool that has become increasingly capable over time. But RHN, even augmented by Red Hat's other infrastructure products, hardly amounts to a complete enterprise automation strategy, contrary to what the company seemed to suggest. Overall, it seemed more like a conceptual vision for a strategy than an actual strategy.

For me, more interesting for the near- to medium-term were a pair of other announcements that are more closely related than they might initially appear. One was the Red Hat Appliance Operating System (AOS) that the company plans to make available in the first half of 2008. (The acronym takes me back to my previous life...but that's another story.)

It goes almost without saying these days that the appliances in question are virtual ones. The idea is that you can take an app, the operating system it runs on, supporting programs, libraries, and what have you; configure the whole mess properly; and then write it out to disk ready to be fired up as a self-contained, ready-to-run virtual machine. Although the early use cases for virtual appliances were mostly around trials and demos, we're starting to see more and more interest in them as a general-purpose way of deploying software. (I previously discussed the evolution of virtual appliances in this piece.)

The company wasn't especially specific about exactly how AOS would differ from standard Red Hat Enterprise Linux, except to say that it was optimized for running on virtual infrastructures and would come with a software development kit (SDK) for the construction of appliances and their integration with third-party software. Presumably Red Hat will leverage its existing Red Hat Exchange as part of the way these appliances would be distributed, but no details on that yet. The company did say that there would be tools in place to help ISVs update their own software in an appliance, but it wasn't ready to make any specific announcements about that yet.

VMware has run an aggressive play on virtual appliances. rPath has built an entire business around appliances. Perhaps an even more significant player is Oracle. Oracle Unbreakable Linux isn't an appliance as such. But it is an attempt to subsume the operating system with the application. With AOS--which Red Hat says will maintain all the software certifications associated with its Enterprise Linux product--the company is effectively arguing that the OS does matter, even in an appliance. Which, for an operating system vendor, is certainly a preferred state of affairs.

Another important announcement concerned making Red Hat Enterprise Linux available on Amazon's Elastic Compute Cloud (EC2) utility, which is currently in beta. At first blush, this would seem to be largely orthogonal to the appliance announcement. In fact, they have a lot in common. EC2 runs on a Xen-based virtual infrastructure; its virtual machines can be stored as Amazon Machine Images (AMI). Although Amazon hasn't yet done much around creating any sort of formal marketplace for AMIs (a la Red Hat Exchange), that wouldn't be a big leap. And, as I discussed last week, I expect that we're going to see far more use of Amazon's style of utility computing to deliver software services rather than the raw hardware. Most users want to do things rather than run stuff.

One way to do this is a pure Software as a Service (SaaS) model whereby some vendor out in the cloud someplace may be using Amazon to host some storage or deliver some Web services, but this is mostly transparent to the user. However, it's also easy to imagine applications that are better delivered in a more traditional way (i.e., running on an operating system image that the user "owns"). In this case, virtual appliances offer one potential way to get those applications up and running in a way that mimics the way we're used to doing things on a physical server but with many of the fast-setup characteristics of SaaS.

HP, cameras, and Web 2.0


Hewlett-Packard has never done as much as it could to use its servers, PCs, printers, software, and the like to cross-leverage and complement each other.

One need only look to Apple to see how this sort of thing can work. The iPod would arguably not have succeeded without the Mac home base to build from, and the Mac has clearly piggybacked on the iPod's success. With even more assets, such as servers and services, HP had still more opportunities. But it largely paid lip service to connecting them. Indeed, at present, HP seems to be headed back toward a more decentralized organization reminiscent of former CEO Lew Platt's tenure, and away from the more centralized, top-down structure it adopted under Carly Fiorina.

However, at least outside its strictly business-oriented Technology and Solutions Group (where ProLiant and Integrity servers live, alongside HP's software and services businesses), there has been some cross-fertilization and synergy. HP combined its Imaging and Printing Group (cameras, printers, scanners) with the Personal Systems Group (PCs) in 2005. Although HP clearly favored the printing side of the equation, it also had products like cameras, scanners, and tablets that covered multiple points of digitization, from image creation to hard-copy output.

Now comes the announcement that HP will stop designing its own cameras. Among the reasons given is enabling "HP to accelerate its investment in Print 2.0 initiatives," according to the company statement.

My initial reaction was that HP had become a bit too enamored of the margins associated with ink. And, as a result, it was backing away from products and technologies that are not, in themselves, as lucrative as printing but that clearly cross-support and leverage it in the same manner as the Mac and the iPod.

Print 2.0 relates, in no small part, to the mass Web 2.0 digitization of content. But HP sometimes seems too eager to skip over anything that doesn't involve printing something out right now. For example, HP was actually fairly early to the online photo storage thing with Cartogra (now called Snapfish). But it was largely usurped by the more social-oriented sites such as Flickr. The difference can be striking; Snapfish periodically sends me e-mails threatening to delete my account unless I get something printed soon. Flickr is now augmenting its own printing services and can leverage a user base that dwarfs that of Snapfish.

To be sure, HP profits from many online services. HP Indigo printers are the output device of choice for many of the online book publishers such as Blurb. But by essentially taking on the role of arms merchant, rather than something more customer-facing, it cedes a lot of visibility and control of its destiny.

That said, it's hard to argue with HP's exit from the camera business.

For one thing, it largely reflects current reality. HP is already outsourcing much of its camera design work. Past digital camera-related R&D in HP Labs and its product groups notwithstanding, HP was already largely out of the camera business. Maybe HP coulda', woulda', shoulda' done better by its early digicam development, but it didn't--and there's not a lot of point wishing things were different.

Cameras are also a special class of device with their own long history and well-entrenched suppliers. Canon, for example, has been in the photo business since 1933 and has managed not just to maintain a presence in the camera market, but to actually enhance its relative stature as a camera maker in the Digital Age.

Nikon hasn't done badly either, although its greatest strengths are arguably in more traditional camera technologies such as optical design, whereas Canon has a clear lead in electronics design and manufacturing. Other manufacturers, such as Sony, Olympus, and Pentax, are also in a better position than HP.

In short, HP is in such a laggardly position when it comes to cameras that it has effectively no hope of coming close to market leadership. Better to fold the tent and perhaps seek partnerships with companies that might be more amenable to working with HP than they would be if it were an aggressive competitor.

Oracle: Just say no to operating systems


There's a nasty little war afoot over the future of the operating system.

In one corner you have the operating system vendors.

They're building in virtualization, for example. This increases the depth of their software stack. The OS vendors present virtualization as a natural addition to existing operating system functions and a means to integrate an increasingly common software capability.

That's fair enough. But it's also about control, especially in a world where owning the hypervisor gives you an advantage when up-selling to management layers and other value-add software in which there's real money to be made (as opposed to the raw hypervisor, which is becoming increasingly commoditized).

As we saw last week in the case of Red Hat, OS vendors are on the lookout to circumvent attempts to make their operating systems (and their brands) irrelevant. In Red Hat's case, it was to quash the efforts of software appliance makers to effectively make the OS just a supporting feature of the application.

In another corner, you have the application vendors and their fellow travelers.

Software as a service (SaaS) is one aspect of this war. Taken to its logical extreme, it may change the role of systems companies as well as operating system vendors. However, we don't need to look that far into possible futures to see the application vendor front in this war.

Take the appliance makers that Red Hat was taking on last week. rPath CEO Billy Marshall writes: "Fortunately for all of us, 'certification' will be a thing of the past when applications companies distribute their applications as virtual appliances." It's not hard to see why Red Hat doesn't exactly cotton to this way of thinking. After all, certification is a very large part of what Red Hat sells. And the number of applications certified to run on Red Hat constitutes a huge barrier to any other Linux vendor delivering its own flavor of "Enterprise Linux."

Oracle's Unbreakable Linux is a different take from a different angle, but the end result is the same. Its concept is based on the idea that, when you buy an application from Oracle, you also get some bits that let the application sit on top of the hardware and perform necessary tasks like talking to disk. Oracle has been subsuming operating system functions like memory and storage management for years; subsuming the whole operating system was just the next logical step.

So is its latest move, coming out with its own hypervisor based on technology from the widely used Xen Project. (Xen is also the basis for the hypervisor in Novell and Red Hat Linux--as well as OS-independent products from XenSource/Citrix and Virtual Iron.)

Just as Oracle wants to minimize the role of the OS, so too does it want to minimize the role of the hypervisor (which, as I noted, itself threatens to reduce the role of the OS--got all that?). From the vantage of Redwood Shores, VMware is getting altogether too much attention. The easiest way to minimize the impact of the virtualization players? Offer Oracle's own hypervisor.

The biggest challenge that I see facing Oracle here is similar to the one facing Unbreakable Linux and software appliances in general. There's an implicit assumption that people will be willing to have one virtualization platform for the boxes that run Oracle and another for everything else--that the maker of the hypervisor bits doesn't matter.

So far, there's scant evidence that people are willing to be quite so blase about their server virtualization. Furthermore, brand preferences aside, it remains early days for standards that handle the control and movement of virtual machines across virtual infrastructures sourced from different vendors. It's perhaps more thinkable that Oracle database and application servers might be kept independent from a general virtual infrastructure than would be the case with other, often less business-critical, applications. But, at least today, it's still counter to the overall trend of IT shops looking at server virtualization in strategic rather than machine-by-machine tactical ways.

As a result, I don't see this announcement having a broad near-term impact (as, indeed, Unbreakable Linux did not either, once the original raft of press stories and industry discussion died down). Rather, I see this as Oracle determined to keep making its statement, time and time again, that, someday, the operating system won't matter. That's Larry's story, and he's sticking with it.

Microsoft's virtualization about-face


This is a busy week--what with SC2007 in Reno, Oracle OpenWorld in San Francisco, and Microsoft TechEd EMEA in Barcelona. And that means lots of news crossing my desk.

One of today's most interesting tidbits came from Microsoft. Bob Kelly, corporate vice president for the company's server and tools business, announced Hyper-V:

This is the official name of the server virtualization technology within Windows Server 2008 that was previously code-named "Viridian." Microsoft also announced Hyper-V Server, a standalone hypervisor-based server virtualization product that complements the Hyper-V technology in Windows Server 2008 and allows customers to virtualize workloads onto a single physical server.

"So what?!" you say. Everybody and their dog is coming out with hypervisors that can be either purchased as standalone products or embedded into servers. Besides, Microsoft is very late to the virtualization game; its hypervisor won't even be in the initial release of Windows Server 2008.

That may all be so, but Microsoft has a huge footprint in datacenters--and even more in the IT installations of smaller companies. Thus, however tardy and reluctant Microsoft's arrival to virtualization may be (Virtual Server notwithstanding), its plans and presence matter.

That makes Microsoft's decision to offer a hypervisor that's not part of the operating system striking, given that the company has been the most vocal proponent of the "virtualization as a feature of the OS" point of view. As Jim Allchin, who headed Microsoft's Platforms and Services Division until the beginning of this year, put it: Windows already "virtualizes the CPU to give processing." In this sense, VMs just take that virtualization to the next level. And, in fact, there's a long history of operating systems subsuming functions and capabilities that were once commonly purchased as separate products. Think file systems, networking stacks, and thread libraries.

Built-in-ness is clearly the big argument in favor of marrying server virtualization to the operating system. You're buying the operating system anyway, so there's no need to buy a separate product from a third party.

Of course, Microsoft wants to keep the operating system relevant to users however much Oracle and others would like to subsume it. Thus it's hardly a surprise that Microsoft wants functions in the OS both to control them and to enhance the value of its most strategic product.

But sometimes the world doesn't work the way you'd like it to.

Separate hypervisors are a better match for the sort of heterogeneous environments typically found in enterprises than are those built into OSs.

There's also a major trend afoot to embed hypervisors into x86 servers, just as they are already embedded into Big Iron. Among the early system vendors to announce or preview intentions in this area were Dell, HP, and IBM. Embedded hypervisors pretty much trump any integration advantage that virtualization-in-the-OS enjoys. You can't get much more built-in than firing virtualization up when you turn the server on for the first time.

I expect that this style of delivering the foundation of server virtualization is going to become commonplace.

It will be a while before who wrote a particular hypervisor becomes a genuine "don't care" to most users (the way BIOSs are today). Standards for managing and controlling virtual machines are still nascent and the whole area is far too new for true commoditization. But it's the direction things are headed. Even Microsoft, however reluctantly, has now accepted this even while it simultaneously tries to keep as much control over its own destiny as possible.

Revoking open source


Those of us who have actually read through many of the Open Source licenses and have spent a fair bit of time mulling and discussing their consequences take a lot of things for granted.

One of those things is that once a program, or anything else, is released under an Open Source license, you can't just take it back. Maybe this seems obvious to you, or maybe not, but it isn't obvious to everyone. This is perhaps especially true outside the realm of software, where creators are less likely to have given even the passing thought to license implications that most developers involved with Open Source have given to the GPL and other such licenses.

This was brought home to me the other week in this comment on Flickr by Lane Hartwell (username "fetching"). (The context isn't especially relevant to this discussion; I suggest reading the whole heated thread if you're really interested.) "[this discussion] has brought attention to some issues and may help change things on both ends. Who knew that CC Licenses were permanent? Flickr sure doesn't tell you when you choose that option."

There are a variety of issues raised in this case, but the one I want to focus on is that a photographer initially posted a picture on Flickr under a Creative Commons license and subsequently changed its license to the default "All rights reserved" (i.e., any use beyond that allowed by Fair Use requires the explicit permission of the photographer). There is a family of Creative Commons licenses. They vary, essentially, in whether the licensed work can be altered and whether it can be used for commercial purposes. However, for our purposes here, we can just think of all of them as "Open Source licenses."
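The variation within the family is small enough to model as a table. A sketch, using the standard Creative Commons abbreviations (every variant here also requires attribution, the "BY" element):

```python
# The main Creative Commons license variants as a small lookup table.
# Each entry: (derivatives allowed?, commercial use allowed?)
CC_LICENSES = {
    "BY":       (True,  True),
    "BY-SA":    (True,  True),   # derivatives must be share-alike
    "BY-ND":    (False, True),
    "BY-NC":    (True,  False),
    "BY-NC-SA": (True,  False),
    "BY-NC-ND": (False, False),
}

def permits(license_name: str, commercial: bool = False,
            derivative: bool = False) -> bool:
    """Does the named license permit the proposed use?"""
    allows_deriv, allows_comm = CC_LICENSES[license_name]
    return (allows_deriv or not derivative) and (allows_comm or not commercial)

print(permits("BY-NC", commercial=True))                # False
print(permits("BY", commercial=True, derivative=True))  # True
```

For the argument that follows, the only axis that matters is the second one: whether the license allows commercial use at all.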

Physical world intuition might suggest that of course the copyright holder, the owner of the property in a sense, can unshare a work anytime he or she chooses. If I give you permission to borrow my car, I can certainly give you permission on a one-time basis or can withdraw that permission at any time (subject to any contractual agreements).

But Open Source licenses are different. Once I put a photograph, a novel, or a program out in the world under an Open Source license, it's out there. I can't go "never mind" and withdraw whatever rights the license granted in the first place.

I'm not saying that the copyright owner can't change the license. In the case of works to which multiple people have contributed, there are a variety of complications and legal theories around changing licenses, but that's a separate issue. The bits or the words or the arrangement of ink droplets that have already been released into the world remain covered by the Open Source license they were originally released under.

A Mattel court case involving its CyberPatrol software and a program by Eddy Jansson and Matthew Skala called cphack raised the issue of whether a GPL license could be withdrawn. However, the case played out in such a way that no definitive legal conclusion came about. In addition, there were questions over whether cphack was even properly licensed under the GPL.

In any case, the widespread opinion among those who work with Open Source licenses is that what's been released into the world can't be subsequently withdrawn. As stated in this FreeBSD document:

No license can guarantee future software availability. Although a copyright holder can traditionally change the terms of a copyright at anytime, the presumption in the BSD community is that such an attempt simply causes the source to fork.

In other words, if the license is changed to an "unfree" license, you don't get the right to enjoy any downstream changes--whether enhancements to a software program or touchups to a photograph. But the specific work that's been released to the world can't be withdrawn.

Does the Noncommercial Creative Commons license make sense?


Back when I was writing software for PCs, it was pretty common to see licenses offering some program free "for noncommercial use" or some similar wording. The basic idea was that if you got people using some application at home, maybe they'd want to use it at work too--and then they'd buy a commercial license. Besides, very few of those home users were about to send you a check anyway. It's a little bit like using an open-source business model to build volume and awareness with free, unsupported software and then make money from support contracts when a company wants to put the software into production.

There's a difference though.

No widely used open-source software license that I know of makes a distinction about how the software is going to be used. Rather, open-source licenses concern themselves with essentially technical details about how code is combined with other code and what the resulting obligations are with respect to making code changes and enhancements available to the community. But none of the major open-source software licenses restrict use to schools or personal PCs or anything like that. (One could argue that the new GPLv3 license's clauses concerning digital rights management come close to being a sort of usage-based restriction. That's one of the reasons that Linus Torvalds hasn't been a big fan of GPLv3.)

This is probably a good thing. Especially in today's world of interlocking personal and professional lives, defining where "noncommercial use" begins and ends can get extraordinarily tricky.

This was brought home to me last week while putting together a presentation that uses some photographs posted on Flickr.

By way of background, I was searching for photos licensed under Creative Commons--a sort of counterpart to open-source software licenses that is intended to apply to things like books, videos, photographs, and so forth. There are a variety of Creative Commons licenses worldwide (e.g. these are the choices offered on Flickr), but for our purposes here, one important distinction is between the licenses that allow commercial use and those that do not. A noncommercial license means: "You let others copy, distribute, display, and perform your work--and derivative works based upon it--but for noncommercial purposes only."

At first blush, this seems intuitively fair and reasonable. Many of my own photographs on Flickr are licensed under a noncommercial Creative Commons license. It just feels right. Sure, you can use one of my photos on your Web site (with proper attribution, as required). But I can't say that I'd be especially thrilled to learn that someone was off hawking my pics on a microstock site or selling posters without giving me anything back. Thus I, like many, chose a noncommercial license.

But start squinting hard at the line that separates commercial from noncommercial and it starts to get fuzzy in a hurry. Consider the following questions. Are any of these uses truly noncommercial?

What if I have some AdSense advertising on my Web page or blog?

What if I actually make "real" money from AdSense?

What if I put together an entire ad-supported Web site using noncommercial photos?

What if I use the photo in an internal company presentation? (All companies are commercial enterprises, after all.)

What if I'm using those photos as "incidental" illustrative content in a presentation I'm being paid to give? (This was my case.)

What if I print a book of these photos but only charge my cost? What if I cover my time at some nominal rate as well?

And so forth.

This isn't a new question. I did find a discussion draft of noncommercial guidelines, but for the most part it seems a dangerously ill-defined question in an environment where individuals have so many opportunities to micro-commercialize. Sure, the average blog's weekly AdSense revenues won't buy a cup of coffee, but that's a difference of degree, not kind, from someone who makes $100 a week or $1,000.

I suspect that noncommercial Creative Commons exists because it appeals to an innate sense of fairness. As such, people who wouldn't license under a broader Creative Commons license will use this one. In short, noncommercial Creative Commons is convenient. That doesn't make it necessarily good.

(By the way, I concluded that I would probably have been OK using noncommercial-licensed photos because they were incidental to the topic that I was presenting. However, to be on the safe side, I stuck with photos that were explicitly licensed for commercial use.)

(Not) making money in the 'long tail'


The idea of the "long tail," a concept popularized by Wired's Chris Anderson, permeates much of what is going on with the evolution of IT.

After all, it's the mass participation of almost everyone in creating content of various types that's driving an enormous amount of IT build-out--which, in turn, may well change even how and who builds computers in the future. Simply put, the long-tail premise is that bestsellers aren't in the majority when one tallies up the sales at a retailer like Amazon or the page views on blogs. Rather, it's the total of the far more numerous other 80 or 90 percent of content.
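A toy power-law model makes the premise concrete. This is a sketch, not data from anything Anderson published; the catalog size and the Zipf exponent are assumptions:

```python
# Toy Zipf-distributed catalog: sales of the title at a given rank fall off
# as 1/rank. Even so, the vast number of low sellers adds up to a large
# collective share--the long-tail premise in miniature.

def zipf_sales(n_titles: int, exponent: float = 1.0) -> list[float]:
    """Relative sales for titles ranked 1..n under a power law."""
    return [1.0 / rank ** exponent for rank in range(1, n_titles + 1)]

sales = zipf_sales(100_000)        # assumed catalog of 100,000 titles
total = sum(sales)
hits = sum(sales[:1_000])          # the top 1% "bestsellers"
tail_share = (total - hits) / total

print(f"share of sales outside the top 1%: {tail_share:.0%}")
```

Under these assumptions the 99 percent of titles outside the bestseller list account for well over a third of total sales, which is the whole commercial argument in one number.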

Less abstractly, Anderson's argument is about business. Namely, he argues companies can make money selling to the long tail as shown in the data that I discuss in this 2005 post. I thought and think that it's a powerful concept--although I also think it fair to ponder how many companies are truly well-positioned to make money from the long tail.

When Amazon, Netflix, and Google make their appearance as exemplars for the umpteenth time, one starts to wonder. (In all fairness, Anderson has additional examples; Amazon and Netflix just make particularly rich, data-heavy case studies.)

However, as Alex Iskold noted over at Read/Write Web this morning, there's a slightly subtle, but very important, distinction to be made when we're discussing making money on the long tail. It's about making money on the long tail, not making money in it.

According to Iskold:

The precise point of Anderson's argument is that the collective of the long tail amounts to substantial dollars because the volume is there. The retail/advertising game is a game based on volume. You make money on a lot of traffic to a single popular site or the sum of smaller amounts of traffic to many less popular sites.

Or, as NatC in the comments below Iskold's blog reformulates it:

Amazon can make money from the long tail, while authors of 'minor' books won't. In the same way, Google makes money from the blogosphere's long tail, but small blogs don't.

Iskold then goes on to ponder the longer-term implications. Will bloggers drop out when they find out that no one's reading them?

Here, I think he's on less solid ground.

Authors and musicians wrote "minor" books and songs that were remaindered...long before there was the idea of the long tail. Today's discovery and recommendation systems could doubtless stand much improvement--which makes efforts like the Netflix Prize and Paul Lamere's Search Inside the Music project at Sun Labs so interesting. Nonetheless, by any reasonable measure, the ability of consumers to discover long-tail content is far better than it's ever been in history.

And, let's be honest, creating that long-tail content has never been primarily about making money.

More Commercial Creative Commons conundrums


A few days back, I posted about the difficulty of distinguishing commercial from noncommercial usage with respect to the Creative Commons license.

There's an ongoing legal case that concerns another aspect of Creative Commons commerciality. As Josh Wolf describes the original story:

On April 21, 2007, during a church camp, Chang's counselor snapped a photo of her and uploaded it to his Flickr account. He published the photo under a CC-BY-2.0 license, which allows for commercial use of the photo without obtaining permission from the copyright owner.

In less than two months, the photo had been cropped and repurposed to promote Virgin Mobile in Australia.

Upon learning of the ad, Chang wrote on a Flickr page, "hey that's me! no joke. i think i'm being insulted...can you tell me where this was taken." Underneath Chang's comment, there is a note from the original photographer: "where was this? do you think virgin mobile will give me stuff?"

It's unclear whether Virgin coughed up any loot, but Chang's family has taken legal action against the company for not obtaining proper permission for the use of her likeness.

The basic legal problem here is that, although the photographer gave his permission for Virgin Mobile Australia (or anyone else) to use the photograph for commercial purposes (with attribution), that doesn't mean that all the rights were cleared to use the photo in an advertisement. A stock photo--which is essentially how Virgin Mobile Australia was using the image--typically requires model releases from any identifiable person. Releases may also be needed for photographed property under some circumstances. Identifiable trademarks and the like can also be an issue.

It seems a rather fundamental error on Virgin Mobile Australia's part (and even more so on its ad agency's). I guess they just assumed that the Creative Commons photo was like an ordinary stock photo, where someone had already taken care of clearing all the rights.

But as Larry Lessig said when Creative Commons itself was dropped from the suit:

As I said when I announced the lawsuit here, the fact that the laws of the United States don't make us liable for the misuse in this context doesn't mean that we're not working extremely hard to make sure misuse doesn't happen. It is always a problem (even if not a legal problem) when someone doesn't understand what our licenses do, or how they work.

The intent of Creative Commons is that the photographer (I'll stick to photography here) can give permission for commercial, or noncommercial, entities to use his or her work without compensation. It is not, however, intended to be a representation that all the commercial rights to use the photograph in any context have been cleared. In fact, with Creative Commons licenses that permit modification of the final work, it's hard to see how it would even be possible to certify in advance that any possible use was permitted under all laws anywhere in the world. And even stock sites place a variety of restrictions on the final use.

This seemed a fairly obvious point to me. But as I read stories and comments about this case, it seems that a lot of people assume that licensing a photo for commercial use under Creative Commons is, in fact, a warranty that the photo is unconditionally appropriate for commercial use, rather than merely a narrower set of permissions granted from the photographer's perspective alone.

The march of the middlemen


James Robertson over at Smalltalk Tidbits, Industry Rants writes:

The RIAA (and the MPAA, for that matter) are fighting a war they can't win. They are busily irritating their real and potential customers--either suing them, or making life difficult for them--while the real pirates sail along unimpaired. The amount of inertia in that business is astonishing--the good times for all the do-nothing middlemen are over, and it's time for the labels to accept that fact and get on with their lives.

I don't bring this up because I want to replow the well-worked ground of the out-of-touch content industries, but because Robertson highlights a fundamental point about today's business world. Historically, a lot of companies and people made boatloads of money acting as intermediaries without adding much in the way of value.

I see this in my own industry. When I worked for a system vendor in pre-Web days, we subscribed to the services of one industry analyst firm whose main business was essentially collecting product data sheets from everyone and faxing them, on request, to subscribers. Thus, for example, if I were getting ready for a product announcement and needed information on competing Digital or Wang systems, I'd call up this firm and ask for any info it had on products X, Y, and Z. The firm would send it over without anything in the way of commentary or other color. But given that I could hardly call up the Digital or Wang sales office and request this info myself, it was still a useful service.

Of course, that this was actually a business once seems almost laughable today. (And, in fact, it was even worse than I described. Not only did we have to subscribe to a service to get this information, but we had to subscribe to micro-sliced technology segments such as midrange systems or workstations.)

That's not to say that there isn't still money to be made in establishing connections and filtering data. But it's worth remembering that--for the most part--it's now about the direct value provided by those services rather than just charging a gatekeeper fee for using a magic key to unlock some basic data.

Digital distribution isn't free


Over the past couple of days, I've read a couple of great pieces about the digital delivery of written content.

Tim O'Reilly mines his own data and experiences to talk about the economics of e-books. Scott Karp at Publishing 2.0 follows up with "The Future of Print Publishing and Paid Content," in which he considers what people are paying for or what they think they're paying for when they buy a newspaper:

For many people who paid for print publications, including newspapers, magazines, and books, a significant part of the value was in the distribution. That DOESN'T mean people don't value the content anymore. It means that the value of having it delivered to their doorstep every morning, or having it show up in their mailbox, or carrying it with them on a plane--in print--has CHANGED because of the availability of digital distribution as an alternative.

The problem for people who sell printed content is that the value of the distribution and the value of the content itself was always deeply intertwined--now it's separable.

People ARE willing to pay for certain digital content, but they AREN'T willing to pay for the distribution--specifically, not the analogue distribution premium.

I think he's spot on. In fact, I might go a little further.

We're largely talking subconscious mental math here, so I don't claim this to be an exact analysis. But I'm going to posit that most people act as if the following are true: